Facebook is under fire from many critics this week as fallout continues from the Cambridge Analytica scandal, including from an unexpected source: Google. In a series of tweets published Thursday, Google researcher François Chollet warns that the problem with Facebook isn’t just the privacy breach or the erosion of trust, it’s the fact that Facebook, powered by AI, could soon become a “totalitarian panopticon.”
“We’re looking at a powerful entity that builds fine-grained psychological profiles of over two billion humans, that runs large-scale behavior manipulation experiments, and that aims at developing the best AI technology the world has ever seen. Personally, it really scares me,” Chollet said in a Twitter thread. “If you work in AI, please don’t help them. Don’t play their game. Don’t participate in their research ecosystem. Please show some conscience.”
A good chunk of the field of AI research (especially the bits that Facebook has been investing in) is about developing algorithms to solve such optimization problems as efficiently as possible, to close the loop and achieve full control of the phenomenon at hand. In this case, us
— François Chollet (@fchollet) March 21, 2018
Chollet has been a software engineer and researcher focused on deep learning at Google since August 2015, according to his LinkedIn profile, and is perhaps best known as the primary author of Keras, an open-source deep learning library used by hundreds of thousands of AI practitioners around the world.
The Twitter thread focused heavily on how advances in AI, particularly the field of deep learning, could lead to “mass population control” by Facebook, with the use of psychological tactics that are “devastatingly effective.” The threat will grow, Chollet said, as advances continue in AI and deep learning.
“The human mind is a static, vulnerable system that will come increasingly under attack from ever-smarter AI algorithms that will simultaneously have a complete view of everything we do and believe, and complete control of the information we consume,” he wrote. ”Importantly, mass population control — in particular political control — arising from placing AI algorithms in charge of our information diet does not necessarily require very advanced AI. You don’t need self-aware, superintelligent AI for this to be a dire threat.”
Though Chollet’s scathing comments are noteworthy, consider the source: Google and Facebook are both known for serving content to users with AI algorithms, and both are known for corralling much of the world’s AI talent.
Chollet’s comments echo WhatsApp cofounder Brian Acton’s call for users to delete their Facebook accounts, as well as growing calls in recent days for CEO Mark Zuckerberg to testify before committees at the U.S. Congress and U.K. parliament.
Last Friday, Facebook divulged the extent of Cambridge Analytica’s misuse of user data, which was likely used to benefit the Trump presidential campaign. Reporting by The Guardian and other news outlets stated that the breach was used to target 50 million Facebook users ahead of the 2016 presidential election, and that Facebook was aware of the misuse of user data years before the admission.
In one of his first public statements since then, Zuckerberg apologized Wednesday in an interview with CNN, vowed to make changes, and said he would consider testifying before a congressional committee.
Also Wednesday, Facebook announced changes to its Facebook Platform, promising an audit of any app given access to large amounts of data in recent years and requiring a signed contract before apps can ask users for access to their Facebook posts.