
The Ministry of Economic Development has submitted to the government a concept for developing regulation of artificial intelligence (AI) and robotics through 2024. The framework document lists the gaps and problems in legal regulation that hinder the adoption of such technologies, from data protection to the allocation of liability for harm caused by AI systems and robots. Among the challenges, experts highlight the difficulty of using the data sets required to train AI.

The Ministry of Economic Development, together with Sberbank and the Skolkovo center, has developed the concept of regulation of artificial intelligence (AI) and robotics through 2024. According to Kommersant, the concept has already received the necessary approvals, and a draft government resolution approving it has been submitted to the government. It is expected that federal authorities and Rosatom will present, within the next three months, proposals for implementing the concept within the federal projects “Regulation of the Digital Environment” and “Artificial Intelligence” of the national programme “Digital Economy”.

Russia currently has no special legal regulation that accounts for the specifics of applying AI and robots, and the concept describes the areas where the rules need to change. It rests on the principle of “incentives before regulation” (restrictions are imposed only where there are risks) and on a human-centred approach (the use of AI must not cause harm). One of the main tools is experimental legal regimes (“regulatory sandboxes”; see Kommersant of July 20).

The concept describes the obstacles to introducing the new technologies. These include, in particular, the protection and use of personal data, which requires adapting data legislation (including for government agencies and medical institutions), providing developers with safe access to data, and removing restrictions on its circulation and use. Another problem is allocating liability for harm caused with the use of AI systems and robots; resolving it is complicated by the impossibility of fully disclosing an algorithm that relies on probabilistic estimates. Among the solutions, the authors propose developing insurance instruments to compensate for damage and defining the conditions for identifying AI systems when they interact directly with people.

The concept also points to the need to facilitate exports of such technologies by eliminating regulatory discrepancies that force developers to build products separately for the Russian and international markets, and by expanding the exemptions from the list of dual-use goods and technologies. Technical standards need to be adapted as well, and the legal regime of intellectual property protection changed: at present it does not protect results produced by AI systems without a person’s creative contribution.

The document also describes the introduction of AI and robots in specific sectors. In medicine, in particular, there is no list of cases in which full or partial decision-making with such technologies is permitted, and registering them takes too long. The development of government e-services is held back by the fact that decisions cannot be made on the basis of AI. The transport sector requires regulation of the use of unmanned vehicles, finance needs experimental regimes, and industry needs shared test sites and safe handling of industrial data.
In addition, regulatory changes are required for the creation of “smart cities” and the development of private spaceflight.

Andrei Neznamov, executive director of Sberbank.ai, told Kommersant that one of the key issues is the use of data: “It is important to strike a balance so that citizens’ rights are protected and developers can create and use data sets. The whole world is searching for that balance now.” No less important is the regulation of unmanned vehicles, especially in road transport. “Russian developments are competitive on the world market, but legal regulation remains quite conservative. There is an urgent need to create legal conditions for the safe testing of all types of unmanned vehicles. At the same time, Moscow’s AI ‘sandbox’ is a great tool, but it will not be enough, as it is limited in territory and scope,” said Andrei Neznamov.

Sergei Vikharev, head of the technology practice at KPMG in Russia and the CIS, believes that regulatory requirements need to be eased so that solutions come not only from internet giants but also from a wide range of mid-sized companies and startups, since training AI requires applying it in the real world. Ivan Begtin, director of the ANO “Information Culture”, says that AI regulation in Russia is lagging because there is no national strategy for working with data. “The problem is that training AI requires a huge volume of information. But, for example, diagnosing cancer with AI is impossible, because such data is covered by patient confidentiality,” he notes.

Tatiana Edovina, Yulia Tishina