Filter Theory -

The AI Text Generator That’s Too Dangerous To Make Public

“(The) OpenAI (Institute was founded) to make new AI discoveries and give them away for the common good. Now, the institute’s researchers are sufficiently worried by something they built that they won’t release it to the public.

The AI system that gave its creators pause was designed to learn the patterns of language. It does that very well—scoring better on some reading-comprehension tests than any other automated system. But when OpenAI’s researchers configured the system to generate text, they began to think about their achievement differently.

“It looks pretty darn real,” says David Luan, vice president of engineering at OpenAI, of the text the system generates. He and his fellow researchers began to imagine how it might be used for unfriendly purposes. “It could be that someone who has malicious intent would be able to generate high-quality fake news,” Luan says.” – Source

This is an understandable reaction, but I think it's the wrong one. It's understandable because fake news online has had massive consequences for society and democracy, so I can see why they would be hesitant to release this.

But two things spring to mind:

  • Withholding this project won’t stop it. There is enough money being poured into AI research by corporations and nation-states worldwide that this is inevitable. There is no reason to assume that only OpenAI could produce this. For all we know, this technology is already in use by other actors, or if not, will be soon.
  • One thing we’ve learned since global commerce moved wholesale online is that secrets don’t stay secret. If it exists, it will get out. This is the same reason government and corporate promises of “We have introduced industry best practice safeguards” should be interpreted as an admission that they can’t guarantee security. And because OpenAI have kept this secret, when it does get out we likely won’t know, so we won’t be looking for its effects.

At some point we have to accept that we are already in the era of large-scale AI, and hiding the progress does not stop it. It would be much better to publicise this and so educate people that what they see or read online cannot be trusted. As an example, look at the worldwide news around the deepfake phenomenon. At least in that case it’s out in the open, and future videos will be viewed with an appropriate level of skepticism.

12 Jun
© 2023 Filter Theory.