Multiplayer games are seen as a fruitful arena in which to simulate many real-world AI application scenarios—such as autonomous vehicles, swarming drones, and collaborative commerce—that may be too expensive, speculative, or risky to test fully in the real world. And they’re an environment in which researchers can explore the frontiers of game theory, decision theory, and other branches of advanced mathematics and the computational sciences.
But when you dig deeper into AI in gaming, it becomes obvious that very little translates to the real world, so be very cautious of any business-oriented AI claims that come out of gaming contexts. The underlying game-theory principles could be applied to the real world, but only if the test cases reflected real-world conditions rather than artificial fantasy environments.
OpenAI Five is a prime example of gaming-based AI’s dubious promises
AI-centric gaming testbeds are all the rage. Fun and games are a great thing, but it remains to be seen whether AI researchers can apply the lessons learned in their design to problems in the real world. In particular, I’m thinking of OpenAI, the nonprofit research organization that’s pushing boundaries in artificial general intelligence.
I’m a bit concerned about the hoopla surrounding the ongoing OpenAI Five research project, which has been hyped for its supposed ability to “self-train” by using AI to play itself across many iterations of the multiplayer game Dota 2 (originally Defense of the Ancients). We’ve seen these kinds of AI-enriched gaming initiatives for many years, but it’s not clear whether any of them has produced a significant breakthrough in new AI approaches that can spawn new types of applications beyond the narrowly constrained gaming domains for which they were developed.
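To be fair, the self-play idea itself is simple and long predates OpenAI Five. A minimal sketch of the concept (my own toy illustration, not OpenAI’s method, which involves deep reinforcement learning at massive scale): two copies of a classic fictitious-play learner repeatedly play Matching Pennies against each other, each best-responding to the other’s empirical history, and their play frequencies drift toward the game’s 50/50 mixed equilibrium. The game choice, learning rule, and all names below are illustrative assumptions.

```python
# Toy self-play sketch: two fictitious-play agents learn Matching Pennies
# by repeatedly playing each other. In two-player zero-sum games, the
# empirical frequencies of fictitious play converge to a Nash equilibrium;
# for Matching Pennies that equilibrium is mixing heads/tails 50/50.

ACTIONS = ("heads", "tails")

def payoff(a, b):
    # The "matcher" earns +1 when actions match, -1 otherwise.
    return 1 if a == b else -1

def best_response(opponent_counts, sign):
    # Choose the action with the highest expected payoff against the
    # opponent's empirical mix; sign = -1 flips payoffs for the mismatcher.
    total = sum(opponent_counts.values())
    def value(a):
        return sum(sign * payoff(a, b) * c for b, c in opponent_counts.items()) / total
    return max(ACTIONS, key=value)

# Seed each history with one arbitrary observation to break the symmetric tie.
history1 = {"heads": 1, "tails": 0}  # what player 2 has seen player 1 play
history2 = {"heads": 0, "tails": 1}  # what player 1 has seen player 2 play

for _ in range(10_000):
    a1 = best_response(history2, +1)  # matcher best-responds to p2's history
    a2 = best_response(history1, -1)  # mismatcher best-responds to p1's history
    history1[a1] += 1
    history2[a2] += 1

freq = history1["heads"] / sum(history1.values())
print(f"player 1 empirical heads frequency: {freq:.3f}")
```

After ten thousand rounds of self-play, each player’s empirical heads frequency sits near 0.5 — the equilibrium of this tiny game. The gap between this and a commercially useful system is exactly my point: the technique is elegant, but it only discovers structure that the game itself contains.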