Last spring, the artificial intelligence research institute OpenAI said it had made software so good at generating text, including fake news articles, that it was too dangerous to release. Such systems are built by directing machine-learning algorithms to analyze vast collections of text scraped from the web and discover the statistical patterns of language use.
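To make the idea concrete, here is a minimal, hypothetical sketch (not OpenAI's code, which operates at vastly larger scale) of how statistical patterns in text can drive next-word prediction, using a toy bigram model:

```python
from collections import Counter, defaultdict

# Illustrative toy corpus; real systems train on billions of words
# scraped from the web.
corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each word follows each other word: the simplest
# "statistical pattern of language use".
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def most_likely_next(word):
    """Return the most frequent successor of `word` in the corpus."""
    return bigrams[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" follows "the" more often than "mat"
```

Models like GPT-2 replace these simple counts with neural networks holding billions of parameters, but the underlying task, predicting likely continuations from observed text, is the same.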
The newborn nonprofit said it had commitments totaling $1 billion and would work on AI “to benefit humanity as a whole, unconstrained by a need to generate financial return.” But OpenAI restructured into a for-profit this year, saying it needed more money to fulfill its goals, and took $1 billion from Microsoft in a deal that involves helping the company’s cloud division develop new AI technology.
The report also said that a smaller version of GPT-2 that OpenAI had released was roughly as good as the full, withheld model at creating fake news articles.

Gokaslan and Cohen made contact with OpenAI on Thursday, after a tweet announcing their release began circulating among AI researchers.
“In order to build AGI you need to have billions of dollars of investment” in computing resources, says Ilya Sutskever, initially OpenAI’s research director and now chief scientist of the new for-profit, OpenAI LP.
“This is all about letting the robot supervise itself, rather than humans going in and doing annotations,” says coauthor Lucas Manuelli, also of MIT CSAIL.

“I can see how this is very useful in industrial applications where the hard part is finding a good point to grasp,” says Matthias Plappert, an engineer at OpenAI who has developed a system for a robot hand to teach itself how to manipulate objects, but who wasn’t involved in this work.