Evgeny Morozov has written a provocative article in the Wall Street Journal entitled “Are Smart Gadgets Making Us Dumb?” in which he worries about the advent of “smart technology” and asks “how much control are we willing to give up?” The whole thing feels like a bit of a straw man as you read through what he considers the most worrying examples of how household objects connected via sensors to the Internet are set to relieve us of our critical faculties.
Beginning with BinCam, a garbage bin with a camera inside, designed to make us feel guilty about what we toss out, and then proceeding to the HAPIfork, a fork that tells you when you’re eating too fast, Morozov’s worries revolve mainly around the capacity for technology, especially when linked up to social media, to “engineer” our behaviour by guilting us into behaving well. I’d suggest that if your friends on social media are actually looking at pictures of the inside of your garbage bin, and you’ve granted them permission to do so in the first place, well, is this actually a loss of autonomy? Does this have anything to do with increasing the social good? Let’s get a couple of things straight about how technology works, and how we can get smart.
Contrast his bin with an actual “smart” garbage bin, such as one outfitted with Pandora software, developed by Metro Compactor Service in Toronto. Neither you nor your “friends” are alerted to your waste-related behaviour. The reason it’s smart is that it increases efficiencies along the chain of garbage collection, affecting only those people whose job it is to deal with your waste (out of sight, out of mind). If it were possible to “favourite” this kind of behaviour, I would. But nobody has to even think about it. That’s why it’s smart.
Morozov’s anxiety about the BinCam stems not from any capacity that technology might have to turn our lives into a soulless dystopia, but from his anxiety that other people (his friends on social media) might see what’s in his trash: “The bin doesn’t force us to recycle, but by appealing to our base instincts—Must earn gold bars and rewards! Must compete with other households! Must win and impress friends!—it fails to treat us as autonomous human beings, capable of weighing the options by ourselves.”
What he means by “autonomous” here is each person’s ability to ignore peer pressure. But that’s not what “autonomous” means. Morozov tips his hand early in the article: “…because our personal identities are now so firmly pegged to our profiles on social networks…”, he says. Here lies the source of the article’s trouble. He’s talking about “identity” as if it mainly consisted of driver’s licenses, purchasing history, and what other people might think of you based on a series of fields filled out on a computer, perhaps relating to your hobbies. When you see yourself on a social network: Is that you? If you’re hesitating over the answer: no, it isn’t you.
In his attempt to gin up a nonexistent anxiety over smart technology, Morozov conflates two varieties of social “engineering”.
1) “Insurance companies already offer significant discounts to drivers who agree to install smart sensors in order to monitor their driving habits. How long will it be before customers can’t get auto insurance without surrendering to such surveillance?”
Yes, and? It is already the case that drivers with a clean record pay lower rates than people who probably should not be on the road at all. The question here is not, “How long will it be?” but “When can you begin? Can we get this happening now, sooner, on an even greater scale, please?”
He continues: 2) “And how long will it be before the self-tracking of our health (weight, diet, steps taken in a day) graduates from being a recreational novelty to a virtual requirement?”
Again, this is social media dread, and not a new problem. Who is forcing you to worry about your appearance? Why did you buy that toothbrush that beeps at you to begin with? Because you’re easily distracted and lazy and don’t brush long enough. That’s why. So the toothbrush will do you good.
Example 1 has to do with governments deciding who, based on statistics, should be driving cars and at what cost. Example 2 has to do with people’s anxiety over self-improvement, egged on by the media and the nagging mentality of keeping up with the Joneses. Both examples are frivolous and have nothing to do with each other. So the question remains: What should we worry about? Should we be worried?
In fumbling for an example which is “unambiguously useful and even lifesaving,” Morozov suggests that “smart belts that monitor the balance of the elderly and smart carpets that detect falls seem to fall in this category.” Smart belts for the elderly. How about children? Don’t they deserve to be protected from falls? What about any vulnerable person, or just anyone who feels vulnerable? Near the end of his article, Morozov frets that smart technology will remove “the complexity and richness of the lived human experience—with its gaps, challenges and conflicts.” Like falling down? Like being injured? We blithely speculate that smart belts are an unambiguous good for “the elderly”, but would you wear one yourself? Hell, no, you wouldn’t. It’s an insult to both your dignity and your autonomy.
Let’s get real for a second. Morozov writes, “The most worrisome smart-technology projects start from the assumption that designers know precisely how we should behave, so the only problem is finding the right incentive.”
Most worrisome? Does the prospect of “designers” sitting around dreaming up ways to incentivize your life worry you? Do you sometimes feel in your life like a hamster pawing at a lever in order to extract a pellet? You’re hardly alone. Charlie Brooker has made a fairly good television show about just these themes called Black Mirror. Technology as a metaphor for existential malaise is a very rich vein to mine, if you’re a fiction writer.
Here’s what is actually worrisome. When Morozov imagines the possible impact that the “most worrisome smart-technology projects” might have on our lives, he comes up with scales that tell us we’re fat. Nowhere in his article does he mention drones, governments snooping (or even attempting to snoop) on their citizens, the state of fear created by airport security fuelled by a media fixated on terrorism (far less likely to kill you than a lightning strike), or for that matter automobiles with human drivers (extremely likely to kill you), displaced just last year as the leading cause of injury-related death in the United States by…suicide.
Let’s rephrase. Getting hit by a car is “worrisome”. It’s a very real threat to each person’s actual well-being. Since we’re talking about smart technology, bring on the driverless cars. Are driverless cars a loss of autonomy? Is your answer yes? I’m afraid you’re wrong, and worrying about a loss of autonomy you never had to begin with. You want to go someplace. You get in a car. Whether you steer or the car does, your autonomy is unaffected. And you’re much less likely to hit someone else on your way there.
Going by the statistics, I should be far more afraid of myself than of a conspiracy of designers brainstorming ways to finesse certain bizarre, tiny problems of modern life.
If you read The Magnificent Ambersons today, it is almost impossible to imagine how disruptive automobile technology was when it was new, or to feel the force of the words “men’s minds are going to be changed in subtle ways because of automobiles.” Can you imagine anyone saying that now? Can we imagine what life was like before cars? Or before roads were built for them to drive on? No. It’s as if they were always there. The book is not old; it’s modern. Our minds have been changed in subtle, and not so subtle, ways by all kinds of technology in the meantime.
We are talking seriously now about making it mandatory to microchip household pets, for all kinds of reasons, supposedly beneficial to them and to us. When will someone propose that we microchip ourselves? It will seem horrifying, until we can’t remember what life was like before we were microchipped. I would put such a prospect forward as being more “worrisome” than being told how to dress by my phone.
As Ursula Franklin writes in The Real World of Technology, “Technology is not the sum of artifacts, of the wheels and gears, of the rails and electronic transmitters. Technology is a system. It entails far more than its individual material components. Technology involves organization, procedures, symbols, new words, equations, and, most of all, a mindset.”
If the vertiginous nature of a 24/7 state of surveillance is making your head swim, grab a bit of technology called a pencil and some paper. Walk to the nearest park. Sit on a bench. Spend between 40 minutes and an hour drawing what you see in front of you. Don’t get bogged down thinking about how the technological innovation of perspective drawing, developed to perfection by Renaissance artists, has shaped your instinct when making a drawing. Just draw. Don’t upload the result to the Internet. Feel better? I knew you would.
Morozov asks us to become preoccupied with an anxiety to do with becoming “mere automatons who assist big data in asking and answering questions.” Are you reading this on a computer? Then Big Data already sees you and knows what you’re up to. You clicked “agree” after reading the EULA. If it’s any consolation, though, it really is nothing personal. Mainly, they want you to buy something, and they’d like to make a buck selling it to you.
We should be concerned about a loss of autonomy. And we should be critical of new technology. But to arrive at serious answers to these concerns, the question to ask cannot be, as Morozov would have you believe: What will my friends think?