Over the past couple of weeks, you’ve likely seen one of your Facebook friends sharing an article about how a pair of AI-driven Facebook chatbots invented their own language in a deviation from what they were originally programmed to do. The gist is this: Facebook created twin AI chat programs to converse with each other (and learn from each other), and they eventually stopped communicating in English and began communicating in a non-English language they “invented.”
The headlines reporting about this range from dramatized to exploitative, such as the Telegraph’s “Facebook shuts down robots after they invent their own language,” making it sound like these chatbots were conspiring against humanity or posing some other existential threat. A quick look at any comment feed will show you users responding with fear, excitement, and amusement, with lines like “And so it begins…” or references to the works of Isaac Asimov.
But as Facebook AI researcher Dhruv Batra noted on Monday, AIs have been “inventing” new ways of communicating with each other for decades, so the news was, in fact, not news. Batra also explained that, contrary to the headlines, the experiment wasn’t shut down but simply altered to tweak the linguistic exchange.
Furthermore, there has been no attempt to hide details of the experiment. Everything is out in the open, with all details publicly available on GitHub, allowing other coders to replicate the scenario.
The bottom line is this: New robot languages should be the least of our concerns when it comes to AI.
Here’s what should worry us
Tech-minded influencers like Elon Musk and Bill Gates have devoted significant time and money to explaining why AI poses a threat and to finding new ways to advance it ethically.
These are some of the main areas where we should be focusing our attention:
Weapons. Military drones are already in operation, and autonomous weaponry has been described as the third revolution in warfare. Automated intelligent weaponry puts fewer soldiers in harm’s way and is far less expensive than other advanced technologies like fighter jets or nuclear materials. For example, current Reaper drones cost about $13 million, compared to $100 million for a fighter jet. In the future, they’ll become far cheaper and ubiquitously available as unregulated (and that means incredibly powerful) mobile weapons that anyone can program to kill and destroy (unless we proactively impose security measures).
Control. What may be most important isn’t the AI itself, but who’s controlling it. We’re dealing with forces that have the power to reshape our world, and if they’re monopolized by greedy corporations, power-hungry nations, or even well-intentioned individuals who simply don’t know what they’re doing, the technology could easily be abused or misused. That’s why several Silicon Valley influencers and AI researchers have come together to form OpenAI, an initiative to make AI available to the entire world, not just one group of people. The initiative hasn’t been cheap, with one top AI researcher reportedly being offered two to three times his market value (which is already more than a starting NFL quarterback).
Growth. The other problem with AI is the pace of its growth if left uncontrolled by limiting parameters. Artificial general intelligence (AGI) has the power to improve exponentially, since it will hypothetically be able to improve itself, and at some point it will surpass the intelligence of its creators. The agricultural age lasted millennia, the industrial age lasted centuries, the information age has lasted decades, and now the age of AI may last mere years; technology growth accelerates rather than progressing linearly, and we’re now at a point where any further acceleration will leave us woefully unprepared to deal with the consequences (if we take a reactive, rather than a proactive, stance). We haven’t been able to create superintelligence yet, but we already have the processing power in place — the Tianhe-2 in China is capable of 34 quadrillion computations per second (cps), far more than the human brain (at 10 quadrillion cps). If we want to wield this computational power responsibly, we need a foundation in place to deal with its unrestricted growth. That means understanding the ethics of AI development, understanding how limitations could work, and having safeguards in place in case something goes awry.
There are some legitimate and serious existential concerns about how AI is being used and how it may develop in the future. However, it’s irresponsible to overly personify the systems being tested or allow headlines to shape our perspectives. Chatbots inventing a new language isn’t threatening; it’s natural, and our attention belongs elsewhere.
Tony Tie is senior search marketer at Expedia. He has previously worked with a number of Fortune 500 companies to improve their online presence. He is also a marketing and entrepreneurship lecturer at various universities.