OpenAI and The Quest for Benevolent AI

Does a for-profit AI company have humanity's best interests at heart when it comes to the rise of AI? OpenAI's self-stated mission is to lead the world into the age of Artificial General Intelligence while keeping human interests at the forefront of that movement. Still, its rapid rise to industry stardom has not been without controversy, including Elon Musk parting ways with the firm earlier this year.

Reportedly, the split was due to disagreements over the direction that OpenAI chose to take. Still, it isn't exactly clear what that direction is.

On the one hand, as Wired and others have reported, OpenAI seems to be on a mission to show the world how AI can improve our lives. Some of its recent moves, however, like the handling of its findings on the GPT-2 text generator, have called those relatively altruistic goals into question.

Shortly after they were released, those findings were quickly pulled, owing to criticism of what an unsupervised, text-based system could lead to, even though none of the criticisms rested entirely on its present capabilities.

Now, just a few months later, OpenAI is back in the headlines, this time for how it uses video games to train its AI systems. Reportedly, it has been fielding AIs as players in e-sports tournaments.

The primary goal appears to be teaching the AIs both to beat professional players and to adopt the in-game strategies those players rely on. In essence, the live games act as training environments, or black boxes, for acquiring certain skills. With this in mind, it is logical to ask how training AI systems in e-sports tournaments relates both to AGI research and to creating AIs that may be considered "benevolent."
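As the OpenAI Five posts cited below describe, the team is trained with large-scale self-play reinforcement learning (a scaled-up version of Proximal Policy Optimization). The core idea of a game serving as a training environment can be illustrated with a far simpler tabular Q-learning toy; the `Corridor` environment, its rewards, and all hyperparameters here are invented for illustration and are not OpenAI's actual setup:

```python
import random

class Corridor:
    """Toy stand-in for a game: the agent starts at position 0 and
    earns a reward for reaching the goal at the far end."""
    def __init__(self, length=5):
        self.length = length
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):
        # action: 0 = move left, 1 = move right
        self.pos = max(0, min(self.length, self.pos + (1 if action == 1 else -1)))
        done = self.pos == self.length
        reward = 1.0 if done else -0.01  # small step cost pushes toward the goal
        return self.pos, reward, done

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning: act in the environment, then nudge the
    value estimate Q(s, a) toward reward + discounted best next value."""
    random.seed(seed)
    env = Corridor()
    q = {(s, a): 0.0 for s in range(env.length + 1) for a in (0, 1)}
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            # epsilon-greedy: mostly exploit the current estimate, sometimes explore
            if random.random() < epsilon:
                a = random.choice((0, 1))
            else:
                a = max((0, 1), key=lambda act: q[(s, act)])
            s2, r, done = env.step(a)
            best_next = max(q[(s2, 0)], q[(s2, 1)])
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = train()
# The learned greedy policy should move right from every non-terminal state.
policy = {s: max((0, 1), key=lambda act: q[(s, act)]) for s in range(5)}
```

The loop never sees the environment's internals, only states, actions, and rewards, which is what "training environments or black boxes" means in practice; OpenAI Five applies the same principle at vastly larger scale with neural networks in place of the table.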

To even begin to make these connections, it's important to understand some of the theory behind "OpenAI Five," the company's automated e-sports team, which aims to gain the skills needed to consistently beat human players. Perhaps, given the company's report on training AIs to be teammates as well, this collaboration relates more directly to its mission?

Judging by their posts on the subject, it's difficult to tell. What can be said is that OpenAI became involved in e-sports both because of the sheer amount of data its systems can take in there, and because it represents an interesting opportunity to educate people on how AIs can be helpful to humanity.

If you're wondering why, this boils down to the AIs' ability to be both teammates and adversaries in the e-sports space, as suggested above. Beyond this, any research into how this sort of effort can produce more "human" AI is still very early-stage. Until we know more about the other areas in which OpenAI aims to push its mission of humanizing AI forward, it is difficult to draw further conclusions about its e-sports efforts. A good place to continue, therefore, is another area of its research: MuseNet.

At this point, MuseNet is in beta, but from the information available on the project, one conclusion is easy to draw: MuseNet is meant less as a threat to human musicians and more as a tool to help them compose new and innovative musical pieces. Overall, it aims to show musicians how AIs can augment and improve what they do.

If this proves successful in practice over the long term, then it falls in line with the thesis that most of the AI industry seems to hold: by educating the global populace on how AIs can help rather than hurt us, perhaps we will one day reach the age of "truly benevolent AI."

Until more news emerges on this subject, consider familiarizing yourself with the list of suggested resources below, and remember that, for now, Artificial General Intelligence remains a distant star on a faraway horizon. For that to change, under-appreciated research areas like reinforcement learning and generative adversarial networks will have to gain far more traction than they have so far. Overarching all of this is the need to reframe humanity's understanding of AI's purpose.

Resources:

https://openai.com/blog/musenet/

https://towardsdatascience.com/what-is-artificial-general-intelligence-5b395e63f88b

https://openai.com/blog/openai-five-finals/

https://openai.com/blog/how-to-train-your-openai-five/

https://techxplore.com/news/2018-05-adversarial-networks-unleashed-video-games.html

https://singularitynet.io

https://www.bloomberg.com/news/articles/2019-02-17/elon-musk-left-openai-on-disagreements-about-company-pathway

https://www.wired.com/story/company-wants-billions-make-ai-safe-humanity/

https://openai.com/

https://techcrunch.com/2019/02/17/openai-text-generator-dangerous/

About Ian LeViness
Professional writer/teacher, dedicated to making emergent industries acceptable to the general populace.