It feels senseless to talk about the latest tech gadget or tech startup when the world has been set on fire by recent tragic acts of violence against Black humanity. But I’m here to talk to tech folks about gadgetry just the same. Why? Because the community of people who speak machine is a small one, but we have a lot of sway. Because we are digitally privileged, we have the potential to automate more harm than good — or to automate more good than harm.

Right now is an especially important time to consider what tech privilege can and should be doing, and plenty of important advice is being shared online. One key lesson I keep seeing is that fighting racism is a 24/7 battle. And the only thing that can really run 24/7 is a computational machine.

Using a Facebook-style engagement metric like DAU/MAU (the ratio of daily active users to monthly active users), people who speak machine can measure how addictive a product has become at capturing your attention. A research paper published by Facebook in 2014 revealed that Facebook had performed experiments on users to see if their emotional state could be changed by showing them different positive or negative stories. Keep in mind that this wasn’t an experiment where a human being did the work of picking up a piece of paper and shoving it into the field of view of another human being. It was instead a diligent robot that would work 24/7 to automagically deliver information into your direct line of sight on any screen you were using.
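To make the metric concrete, here is a minimal sketch of how a DAU/MAU "stickiness" ratio could be computed, using entirely hypothetical activity data and function names (this is an illustration of the idea, not any company's actual pipeline):

```python
from datetime import date

def stickiness(daily_active: dict, month_days: list) -> float:
    """DAU/MAU 'stickiness': the average number of daily active users
    divided by the number of distinct users active during the month.
    A value near 1.0 means almost every monthly user shows up daily."""
    monthly_users = set()   # union of everyone seen this month (MAU)
    dau_values = []         # per-day active-user counts (DAU)
    for day in month_days:
        users = daily_active.get(day, set())
        dau_values.append(len(users))
        monthly_users |= users
    mau = len(monthly_users)
    avg_dau = sum(dau_values) / len(dau_values)
    return avg_dau / mau if mau else 0.0

# Hypothetical activity log: three users over a three-day "month"
days = [date(2020, 6, d) for d in (1, 2, 3)]
log = {
    days[0]: {"alice", "bob"},
    days[1]: {"alice"},
    days[2]: {"alice", "carol"},
}
print(stickiness(log, days))  # avg DAU = 5/3, MAU = 3, ratio ≈ 0.556
```

The closer this ratio climbs to 1.0, the more habitual — that is, addictive — the product has become, which is exactly why it is watched so closely.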

There is no escape from information that a computer wishes to show you unless you turn it off.


Above: The interminable loops of the buyer experience and the customer experience attempt to draw you into their complete influence.

People who know how to speak machine are well aware that a robot coded to show you pictures of things that disgust you is likely to achieve high stickiness, whereas it’s difficult to create a robot that can increase your understanding of a complex topic. I get a little disturbed these days knowing how much quicker robots can learn than we can, thanks to advances in machine learning, which are like feeding millions of fire hoses of information into a machine brain. And that can happen 24/7 without the machine ever needing a bathroom break or a nap, because machines don’t tire at all.

Above: Big Tech sits at Kardashev 4 on the brink of Kardashev 5, while older companies are desperately trying to move from Kardashev 2 and 3.

Machines learn fast. Humans learn slowly. Humans learn bad things fast. Machines learn fast from humans. Machines can spread bad human things fast. Machines can also spread good information quickly. But that’s not in the best interests of the machine’s engagement scores because bad news is tastier on social. The robots are programmed to re-share your distastes at HIGH VOLUME, which begets more reactions that beget even greater reactions in pursuit of the desired jackpot of an obscene level of virality.

The unexpected moral outcome of our global casino of information has been the degree of transparency we now have with respect to acts of racism that can be shared instantly on video.

Not only does bad news travel at the full speed of the Internet, but, due to shelter in place, our attention has been fixated on a screen-based world.

The random interactions inherent to our physical world have vanished for the time being, which removes any possibility of encountering random information that might broaden our understanding. But for many who weren’t aware of the ugliness of racism, witnessing the tragic murder of George Floyd, delivered at random by a robot, had the opposite effect. Instead of drawing us further into our screens, it has moved us back into the real world, which is the only place where we can demand change from our leaders with maximum effectiveness, even in such dangerous times as C-19.

Everything happening right now points to an unexpected “bug” in the code of all the robots that have been designed to vacuum up all our attention into infinite loops of engagement. We’ve instead been forced to become awake.

But I know that many of us will be pulled back into sleep again because the 24/7 tireless work of the robots can take us in directions that are out of our control.

So I wonder: could the people who engineer the robots that control our minds and put us to sleep instead design robots that keep us awake? Is the loss of another Black life the only signal that can jolt us, and the algorithms, awake? The answer to this question matters for the future.

What are you making, my fellow speakers of machine?

John Maeda is Chief Experience Officer at Publicis Sapient and author of How to Speak Machine.