“[The] OpenAI [Institute was founded] to make new AI discoveries and give them away for the common good. Now, the institute’s researchers are sufficiently worried by something they built that they won’t release it to the public.
The AI system that gave its creators pause was designed to learn the patterns of language. It does that very well—scoring better on some reading-comprehension tests than any other automated system. But when OpenAI’s researchers configured the system to generate text, they began to think about their achievement differently.
“It looks pretty darn real,” says David Luan, vice president of engineering at OpenAI, of the text the system generates. He and his fellow researchers began to imagine how it might be used for unfriendly purposes. “It could be that someone who has malicious intent would be able to generate high-quality fake news,” Luan says.” – Source
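For a concrete sense of what “configuring the system to generate text” looks like in practice, here is a minimal sketch of prompting a Transformer language model to continue a piece of text. It uses the small GPT-2 checkpoint that OpenAI did release publicly and the Hugging Face transformers library; both are my choices for illustration, not anything described in the quoted article, and this is not OpenAI’s own code.

```python
# A minimal sketch of text generation with a Transformer language model.
# Assumes the Hugging Face `transformers` library is installed
# (pip install transformers) and uses the small, publicly released
# GPT-2 checkpoint -- illustrative only, not the withheld full model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Scientists announced today that"
outputs = generator(prompt, max_length=60, num_return_sequences=1)

# The model continues the prompt with plausible-sounding text.
print(outputs[0]["generated_text"])
```

Even this small model produces fluent continuations, which is exactly what makes the larger version’s output “look pretty darn real.”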
OpenAI’s reaction is understandable, but I think it is wrong. Understandable because fake news online has had massive consequences for society and democracy, so I can see why they would hesitate to release this.
But two things spring to mind:
At some point we have to accept that we are already in the era of large-scale AI, and hiding the progress does not stop it. It is much better to publicise work like this and educate people that what they see or read online cannot be trusted. As an example, look at the worldwide news around the deepfake phenomenon. At least in that case it is out in the open, and future videos will now be viewed with an appropriate level of skepticism.