OpenAI has published the text-generating AI it said was too dangerous to share
- by 7wData
The research lab OpenAI has released the full version of a text-generating AI system that experts warned could be used for malicious purposes.
The lab originally announced the system, GPT-2, in February this year, but withheld the full version of the program out of fear it would be used to spread fake news, spam, and disinformation. Since then it has released smaller, less complex versions of GPT-2 and studied their reception, and others have replicated the work. In a blog post this week, OpenAI said it has seen “no strong evidence of misuse” and released the model in full.
GPT-2 is part of a new breed of text-generation systems that have impressed experts with their ability to generate coherent text from minimal prompts. The system was trained on eight million text documents scraped from the web and responds to text snippets supplied by users. Feed it a fake headline, for example, and it will write a news story; give it the first line of a poem and it’ll supply a whole verse.
It’s tricky to convey exactly how good GPT-2’s output is, but the model frequently produces eerily cogent writing that can give the appearance of intelligence (though that’s not to say what GPT-2 is doing involves anything we’d recognize as cognition). Play around with the system long enough, though, and its limitations become clear. It particularly struggles with long-term coherence: consistently using the names and attributes of characters in a story, for example, or sticking to a single subject in a news article.
The best way to get a feel for GPT-2’s abilities is to try it out yourself. You can access a web version at TalkToTransformer.com and enter your own prompts. (A “Transformer” is a component of the machine learning architecture used to create GPT-2 and its fellows.)
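The Transformer architecture mentioned above is built around an attention mechanism, which lets the model weigh every word in the prompt when predicting the next one. As a purely illustrative sketch (not OpenAI's code, and with arbitrary toy dimensions), the core scaled dot-product attention step can be written in a few lines of Python with NumPy:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Mix the value vectors V according to how well each query matches each key.

    Q, K: (seq_len, d_k) arrays; V: (seq_len, d_v) array.
    Returns a (seq_len, d_v) array of attention-weighted values.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # query-key similarity, scaled
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability for softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # each row sums to 1
    return weights @ V                            # weighted average of values

# Toy example: 4 tokens with 8-dimensional query/key/value vectors
rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8): one mixed vector per input token
```

In GPT-2 itself, many such attention layers are stacked with learned projections, which is what lets a prompt of a few words steer the whole generated passage.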