DeepMind this week open-sourced Lab2D, a software system designed to support the creation of 2D environments for AI and machine learning research. The Alphabet subsidiary says that Lab2D was built with the needs of deep reinforcement learning researchers in mind, but that it can be useful beyond that particular subfield of machine learning.

The DeepMind team behind Lab2D makes the case that 2D environments are inherently easier to understand than 3D ones, with little loss of expressiveness. Even a game as simple as Pong, which essentially consists of three moving rectangles on a black background, can capture something fundamental about the real game of table tennis, the researchers assert. This abstraction ostensibly makes it easier to capture the essence of problems and concepts in AI.

“Rich complexity along numerous dimensions can be studied in 2D just as readily as in 3D, if not more so … In addition, 2D worlds are significantly less resource-intensive to run, and typically do not require any specialized hardware (like GPUs) to attain reasonable performance,” the researchers continued in their paper describing Lab2D. “2D worlds have been successfully used to study problems as diverse as social complexity, navigation, imperfect information, abstract reasoning, exploration, and many more.”

Lab2D is a platform facilitating the creation of 2D, layered, discrete “grid-world” environments in which pieces akin to chess pieces move around. It supports multiple simultaneous players interacting in the same environment, and these players can be either human or computer-controlled. Each player can have a custom view of the world that reveals or obscures particular information, and a global view, potentially hidden from the players, can be configured to include information no individual player sees. This supports imperfect-information games, where players don’t share common knowledge, as well as human behavioral experiments in which the experimenter watches the global state of the environment as the episode progresses.
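The idea of per-player views layered over a hidden global state can be sketched in a few lines. The following is an illustrative toy, not the Lab2D API: the grid, function names, and the `radius` parameter are all invented for this example.

```python
# Illustrative sketch (hypothetical names, not the Lab2D API): a grid world
# where each player sees only a limited window around its own piece, while
# a global view retains full information -- the setup the article describes
# for imperfect-information games and behavioral experiments.

GRID = [
    list("#####"),
    list("#A.B#"),
    list("#####"),
]

def global_view(grid):
    """Full state, e.g. for an experimenter watching an episode."""
    return "\n".join("".join(row) for row in grid)

def player_view(grid, player, radius=1):
    """A player's custom view: only cells within `radius` of its piece."""
    (py, px), = [(y, x) for y, row in enumerate(grid)
                 for x, c in enumerate(row) if c == player]
    out = []
    for y, row in enumerate(grid):
        line = []
        for x, c in enumerate(row):
            visible = abs(y - py) <= radius and abs(x - px) <= radius
            line.append(c if visible else "?")
        out.append("".join(line))
    return "\n".join(out)
```

Here player “A” cannot see player “B” (it falls outside A’s window), while the global view shows both, which is exactly the asymmetry an experimenter would exploit.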


Above: A testing environment in Lab2D.

Lab2D provides several mechanisms for exposing internal environment information, the simplest being observations, which let researchers read specific information from the environment at each time step. The second is events, which aren’t tied to time steps but are instead triggered when specific conditions are met. Finally, there’s the properties API, which provides a way to read and write parameters of the environment.
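The distinction between the three mechanisms is easy to show in miniature. The class below is a hypothetical sketch, with invented names throughout, of per-step observations, condition-triggered events, and read/write properties; it is not Lab2D code.

```python
# Illustrative sketch (hypothetical names, not the Lab2D API) of the three
# exposure mechanisms described above: observations returned every step,
# events fired only when a condition holds, and a read/write properties map.

class TinyEnv:
    def __init__(self):
        self.step_count = 0
        self.score = 0
        self.events = []                      # filled only on conditions
        self._props = {"max_steps": 100}      # tunable parameters

    def observation(self):
        """Observations: specific state available at every time step."""
        return {"step": self.step_count, "score": self.score}

    def step(self, reward):
        self.step_count += 1
        self.score += reward
        # Events: emitted when a condition is met, not tied to time steps.
        if self.score >= 10:
            self.events.append(("threshold_reached", self.step_count))

    # Properties: read and write environment parameters by name.
    def read_property(self, key):
        return self._props[key]

    def write_property(self, key, value):
        self._props[key] = value
```

Observations suit quantities a learning agent needs continuously; events suit sparse occurrences like a goal being reached; properties suit experiment knobs an outer loop adjusts between or during episodes.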

Experts like Facebook’s Tim Rocktäschel and AI and games researcher Julian Togelius, director of the New York University Game Lab, note that similar frameworks, some with more capabilities, have existed for years, among them GVGAI, Griddly, Pommerman, gym-minigrid, BabyAI, NetHack, Malmo, and microRTS. But DeepMind asserts that Lab2D is a step toward “robust” simulation platforms that might enable learning, skill acquisition, and measurement of AI systems at scale.

“[Lab2D] generalizes and extends a popular internal system at DeepMind which supported a large range of research projects. It was especially popular for multi-agent research involving workflows with significant environment-side iteration,” the Lab2D team wrote. “In our own experience, we have found that DeepMind Lab2D facilitates researcher creativity in the design of learning environments and intelligence tests. We are excited to see what the research community uses it to build in the future.”

The open-sourcing of Lab2D comes after DeepMind released OpenSpiel, a collection of AI training tools for video games. At its core, OpenSpiel is a set of environments and algorithms for research in general reinforcement learning and in search and planning in games, along with tools to analyze learning dynamics and other common evaluation metrics.