This text was typed by me, the author, and what I write here is true. Or can you really believe that? These lines could just as easily have flowed from the virtual pen of a new artificial intelligence (AI). Self-writing programs have existed before, but their output rarely made much sense.
A new AI technique named GPT-2, however, now delivers text of unprecedented quality. According to its developers, GPT-2 is so good that the exact details of how it works must remain secret for the time being. The danger is too great, says the developer company OpenAI, that the technology could fall into the wrong hands and be used to put masses of "high-quality" fake news into circulation.
AI is creative
Only a few journalists have so far been able to try out the new language model, which is based on a form of artificial intelligence called deep learning; among them were reporters from the British "Guardian". They fed the text generator sentences from George Orwell's "1984", and the model wrote follow-on sentences that picked up the style of the novel astonishingly well. GPT-2 was also able to sensibly complete the beginning of a news story about Brexit, spiced with quotations from prominent British politicians.
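The principle of continuing a prompt by predicting likely next words is much older than GPT-2. As an illustration only, here is a minimal sketch of one of the simpler "self-writing programs" the article alludes to: a word-level Markov chain that learns which words follow which, then extends a seed phrase. (This is a toy stand-in, not GPT-2 or OpenAI's method; all function names are the author's own.)

```python
import random
from collections import defaultdict

def build_model(text, order=2):
    """Map each sequence of `order` words to the words observed after it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        model[key].append(words[i + order])
    return model

def generate(model, seed, length=20):
    """Continue from `seed` (a tuple of words) by sampling follow-up words."""
    out = list(seed)
    for _ in range(length):
        choices = model.get(tuple(out[-len(seed):]))
        if not choices:  # dead end: this word sequence never occurred in training
            break
        out.append(random.choice(choices))
    return " ".join(out)
```

Fed enough text, such a chain mimics surface style but loses the thread after a few words; GPT-2's deep-learning approach differs precisely in keeping sentences coherent over long stretches.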
The curious thing is that the technology comes from the Californian company OpenAI, which until recently operated as a non-profit organization. It was founded, among others, by Elon Musk at the end of 2015 as a response to the supposedly growing threat to humanity posed by a general artificial intelligence, a so-called superintelligence. The idea is to minimize the ethical risks of AI by developing it as openly and transparently as possible. The OpenAI charter states, among other things: "We will actively cooperate with other research and policy institutions." Until now, the company has made its research and patents available to other researchers and to the public.
That OpenAI is now breaking with its own mission strikes the AI researcher Zachary Lipton of Carnegie Mellon University in the United States as unjustified. "The results OpenAI describes are interesting, but they are not surprising. This kind of progress was to be expected. Within a few months, other developers will have achieved something similar."
Animashree Anandkumar of the California Institute of Technology agrees. "There are many other research groups working on very similar models," says the professor in the Department of Computing and Mathematical Sciences, who is also Director of Machine Learning Research at Nvidia. The fact that OpenAI did not have its development reviewed by other researchers, as is usual, but turned to the media instead has, she says, led to an exaggerated portrayal of the model's capabilities.
Nor does Anandkumar accept OpenAI's ethical argument. First, the technically savvy could abuse the model using the information that has already been published. And second, those who put fake news into circulation on a grand scale manage to do so quite effectively even without artificial intelligence. The only thing OpenAI achieves with its behavior, she argues, is a slowing of progress, because other scientists cannot examine the model.
Systematic ethics research
OpenAI explained its decision in a blog post, calling the withholding of the model an "experiment in responsible disclosure". It is a difficult balancing act, said Jack Clark, who is responsible for corporate strategy at OpenAI, on Twitter. There are as yet no good guidelines for the release of transformative technologies.
"I think that if the company has an ethical concern, it is a responsible and sensible strategy to publish the results only partially for the time being," says Cansu Canca, philosopher and director of the AI Ethics Lab, a start-up based in Boston and Istanbul that brings together computer scientists, philosophers and legal scholars. Canca says: "This problem was foreseeable. It is important to explore a product's potential risks at the same time as it is developed." However, neither at OpenAI nor at the other companies developing AI technology, among them Google, Amazon and Facebook, is there systematic and serious research into the ethical aspects of their products and their risks for society.
Many companies, for instance, sell AI software for automatic face recognition. A study from last year showed, however, that the algorithms often do not treat all people equally. Applications from Microsoft and IBM worked very well for light-skinned people, but when asked to recognize the gender of dark-skinned people, they performed significantly worse. One Amazon product had to be stopped during development: it was supposed to sift through job applications automatically, but managers noticed that the program showed a striking preference for white men.
OpenAI kicks off the debate
Machine ethics, however, is not just a question of how an artificial intelligence decides, but also of how its application changes society.
"I think OpenAI's decision is a good one," says Jessica Heesen, an information and media ethicist at the University of Tübingen. Even if, as OpenAI says, the new application cannot be held back, the company's prudent publication policy pushes an important debate. Fake news has existed for a long time, but because the media landscape is changing, it is becoming ever more effective and can increasingly shape public discourse.
In addition, there are more and more technical possibilities for falsifying news content. Artificial intelligence can already forge sound recordings of a person from just a few scraps of their voice. And so-called deepfakes use deep-learning technology to create videos in which people's faces are copied into arbitrary footage.
There is no need to lapse into doomsday mood, though. For one thing, OpenAI's new text generator is probably not as dangerous as the company and media reports make it out to be. For another, curse and blessing, as so often, lie close together: a number of research projects are currently developing AI systems to unmask fakes. Whether a text comes from a human or a machine is something AI itself will help to recognize in the future.
Find out more at higgs, the magazine for everyone who wants to know.