In recent days, companies like Twitter, Reddit, PornHub, and Discord have taken a stand against deepfakes, videos generated with machine learning to graft the face of one person onto the body of another person. The name and practice were popularized in recent months in subreddit communities where users put the heads of actresses like Scarlett Johansson and Gal Gadot on the bodies of porn stars having sex.
It’s the first time since white supremacists got the boot following violence in Charlottesville that tech platforms have acted in unison to eliminate something horrendous that many agree shouldn’t exist on the internet. But that’s not the end of it. With deepfakes, it appears we’re just getting started.
Just as we all learned to recognize doctored, Photoshopped images, people will get smarter about spotting deepfakes. Some are already easy to identify with the naked eye, while others are genuinely convincing.
But what happens when deepfakes stop being the fare of redditors with free time and a fixation on Game of Thrones actresses and become a weapon for state actors bent on destabilizing governments like that of the United States?
Are we ready for that? Because it kind of looks like fake news in 2016 was the opening salvo in a continuous, deep mind fuck, and attempts to influence the midterm elections have already begun.
Are tech platforms ready? As the #releasethememo drama surrounding Congress and the Mueller investigation demonstrated in recent weeks, platforms like Twitter still haven’t stopped Russian bots from meddling in matters of U.S. politics, and that doesn’t inspire confidence.
In other computer vision-related news this week fit for a dystopian novel, in China, where a national facial recognition database is being created, police at railway stations now wear glasses to scan the faces of travelers and search for criminals.
How long do we have until these two trends merge, and deepfakes successfully trick facial recognition software to put the wrong person in the crosshairs of law enforcement officials?
It’s one thing when you live in a place where there’s rule of law and some (not much) legal recourse, and another entirely when deepfakes are used in places where dissidents and political opposition leaders are routinely rounded up.
In an age of fake news and filter bubbles, these instances shred our collective sense of reality and could continue to create distance between agreed-upon facts.
I don’t trust governments or law enforcement agencies to use these tools without limit, and though there is some good deepfake detection software available today, we don’t know yet if tech platforms can keep everyone safe.
What we will witness in the year ahead may tell the world a lot about whether tech companies with platforms used by millions of people can actually control the ways they’re used.
The answer may make clear whether companies can be trusted to police their own platforms. If they refuse to make the necessary investments, or cannot protect users against faked crimes, pornography, or attacks on political dissidents, they must be held to account in some way.
Thanks for reading,
AI Staff Writer
P.S. Deepfakes can be frightening, but it’s not all dystopian. Please enjoy this video compilation of deepfakes used to put Nicolas Cage in movies he never starred in, like Terminator 2 and Forrest Gump.
GUEST: Fraudsters typically line their pockets by forging our signatures, cloning our credit cards, and stealing our personal identities. Yet we’d like to think that folks who know us personally would catch these counterfeiters if they brazenly claimed to be us in public. After all, seeing is believing, isn’t it? If you don’t look like me, […]
ANALYSIS: As Qualcomm’s second annual 5G Day progressed yesterday, I officially went from “cautiously optimistic” about next-generation 5G cellular technology to “genuinely excited.” Qualcomm offered enough specific technical details and demo results to make the abstract idea of new 5G devices feel concrete — and legitimately near-term. You may have already read my story this morning […]
More than two years after bringing its Cortana digital assistant to iPhones, Microsoft today debuted Cortana for iPad, including a new interface specifically optimized for 7.9-inch, 9.7-inch, 10.5-inch, and 12.9-inch screens. The app also boasts 20 percent faster launch times than the prior iPhone-only version, which is important, given that iOS device users will likely […]
EXCLUSIVE: Quizlet, an online learning company best known for its automated study tools, announced today that it is taking on $20 million in a series B round of funding to pursue artificial intelligence products. Those products include Quizlet Learn, a service the company launched last year that creates an adaptive study plan for user-submitted topics. It’s […]
Health care startup Paige.ai today announced it has raised $25 million to continue its work in cancer diagnosis with help from computer vision trained with clinical imaging data. Datasets relating to treatment and genomics will also be included in the company’s deep learning models. Initial work by Paige.ai will center on the detection of breast, prostate, and […]
EXCLUSIVE: AI4All, an organization funded by Melinda Gates and Nvidia founder Jensen Huang, launches its first-ever mentorship program today at Oakstop, a coworking space in Oakland, California. The program will join tech workers from companies like OpenAI, IBM, Ford, and Accenture with high school students underrepresented in AI to work on projects that apply machine learning […]
When it comes to black boxes, there is none more black than the human brain. Our gray matter is so complex, scientists lament, that it can’t quite understand itself. (via Wired)
Language is a fascinating tool, one that allows humans to share thoughts with one another. Often enough, if used with clarity and precision, language leads to an accord of minds. Language is also the tool by which psychiatrists evaluate a patient for particular psychoses or mental disorders, including schizophrenia. However, these evaluations tend to depend on the availability of highly trained professionals and adequate facilities. (via Futurism)
China is working to update the rugged old computer systems on nuclear submarines with artificial intelligence to enhance the potential thinking skills of commanding officers, a senior scientist involved with the program told the South China Morning Post. (via South China Morning Post)
In January, Google launched a new service called Cloud AutoML, which can automate some tricky aspects of designing machine-learning software. While working on this project, the company’s researchers sometimes needed to run as many as 800 graphics chips in unison to train their powerful algorithms. (via MIT Tech Review)