What is technology doing to us? Harris versus Harris

Joey Twiddle
Mar 22, 2019

These are my notes from Sam Harris talking to Tristan Harris in this 1 hour 40 minute podcast episode: Waking Up #71 — What Is Technology Doing To Us?

This is heavily paraphrased and re-arranged, with some unique input of my own. They were mostly using the word ‘persuasion’, but I have dropped in the word ‘manipulation’ where I saw fit.

Introduction

Tristan Harris was a design ethicist at Google. He is interested in magic and curious about how cults operate (both forms of hidden persuasion). He and Sam studied, at different times, in Stanford’s Persuasive Technology Lab.

In large tech companies like Google, a small number of designers influence how a billion people think and work every day. People dip in and out of their phones sometimes hundreds of times a day. So designers are effectively scheduling small slices of the user’s time. Give the user a big task, or a small task? Something that helps the user, something that helps other users, or something that helps the company?

There is an arms race between companies in the “attention economy”. Many apps are competing for our attention. Some use advanced techniques, similar to those in the gambling industry, to grab our attention today and get us back again tomorrow. (Snapchat and Twitter were mentioned.)

But the metrics companies use do not always benefit users. Companies measure things like engagement and time-on-site. The more time someone spends on a website, the more adverts can be squeezed in between posts. So sites like Facebook have an infinite appetite for the user’s time.

This isn’t intentionally malicious; it’s just their business model.

A good example: outrage drives engagement. When people are outraged they share and comment. The algorithms on the website have adapted to this, and now they supply content that will outrage the user. Maybe this wasn’t explicitly programmed; the algorithm just optimised for likes and shares, and this is what came out. The result, for some users, is “a day full of outrage”. Nobody would actually choose that, but we do effectively choose it by the clicks we make and the keys we tap.

Nobody would choose a day full of outrage.

So in some ways the Facebook newsfeed is already a runaway AI. Like Universal Paperclips, it pursues its metric for success with no regard for the external consequences.
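To make that concrete, here is a minimal, purely illustrative sketch of a ranker that optimises only for predicted engagement. The post fields, weights and example numbers are all invented; this is not how any real platform is implemented. The point is that outrage-heavy content floats to the top even though the word “outrage” never appears in the program.

```python
# Purely illustrative sketch of an engagement-optimising feed ranker.
# Post fields, weights and numbers are invented; no real platform code is implied.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_likes: float     # model's guess at likes if this post is shown
    predicted_shares: float    # model's guess at shares
    predicted_comments: float  # model's guess at comments

def engagement_score(post: Post) -> float:
    # The only goal is engagement; nothing here asks "is this good for the user?"
    return post.predicted_likes + 3 * post.predicted_shares + 2 * post.predicted_comments

def rank_feed(posts: list[Post]) -> list[Post]:
    # Outrage-bait tends to attract shares and comments,
    # so it rises to the top as an emergent side effect.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Calm, nuanced explainer", 120, 5, 10),
    Post("Outrageous hot take", 80, 60, 150),
])
print([p.title for p in feed])  # the hot take wins
```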

Another example: YouTube added an autoplay-next-video feature. Most users hate this, but it’s good for the website.

But let’s imagine that YouTube’s algorithm becomes so smart that it really could offer the perfect next video for the user. Educational, informative, ideal. That would be great, right? But even then, what if it would be better for the user to watch no video at all? Perhaps go for a walk instead. Right now, there is no incentive for the company to offer that option, let alone promote it.

Extreme manipulation: If an app can learn the psychological biases of multiple users (e.g. one is needy for likes, one needy for thanks, one often responds in a particular way), then it can orchestrate predictable interactions between those individuals that fulfil the app’s goals. (RIP Snapchat)

Virtual Reality will be even more persuasive. So it would be good to tackle this problem before VR arrives!

Theoretical aside

Paradoxically, something that is badly intentioned, or even literally bad for you, can sometimes turn out to be good for you. For example, cult survivors sometimes benefit from what they learned, and survivors of a horrible accident or disease may gain newfound inspiration from the experience.

Some of these technologies provide benefits, whilst at the same time being manipulative. Users have difficulty deciding whether the benefits outweigh the losses.

But most people agree that they do not like to be manipulated, and they do not like the techniques to be hidden. (Similarly with cults: showing someone _how_ they have been manipulated is a good way to get them out!)

We might still want to be persuaded to do things, but they should be things which are good for us, things that we have chosen, things that we will feel good about later.

How to tackle the problem? How do we get ethical design?

The incentives for companies are currently wrong; the metrics they optimise are problematic for users.

Tristan suggests a new metric: Time-well-spent. (See humanetech.com for the “time well spent” movement.)

As with the organic food movement, first we need to name what we want, and then we can start demanding it from suppliers. And isn’t that what we want? A life with no regrets? Time well spent.

He says we must look at the big picture, as in city planning: redesign the city for the people who live in it. Individual apps cannot solve this alone (bad actors will have an advantage in the arms race), so systemic change is needed. This could be done by the big players like Apple, Google, Facebook and Twitter, because they run the platforms on which other services operate.

Transparency of intention. The goals of the persuader should align with the goals of the persuadee (the person being persuaded).

A simple example: The app store could let users rate apps according to how well the user’s time was spent with that app.

A stronger example: Your phone’s OS could be proactive. Every now and then it could show you how you have been spending your time on the phone and ask if you want to change anything. Based on your answers, it could suggest alternative schedules that suit you better.

So the phone would genuinely help the user to schedule their time.

One of the options for a particular segment of the day could be “phone is off”.
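As a rough sketch of what such a proactive review might look like in code, under the assumption that the OS tracks per-app minutes and lets the user set their own budgets (the app names, budgets and wording are all invented for illustration):

```python
# Hypothetical sketch of a phone OS reviewing how your time was spent.
# App names, budgets and suggestions are invented for illustration.
weekly_minutes = {"Social": 840, "Video": 420, "Reading": 60, "Maps": 30}

def review(usage: dict[str, int], budget: dict[str, int]) -> None:
    for app, minutes in sorted(usage.items(), key=lambda kv: -kv[1]):
        print(f"{app}: {minutes} min this week")
        if minutes > budget.get(app, minutes):
            over = minutes - budget[app]
            print(f"  That is {over} min over the budget you set."
                  f" Reduce it, or schedule a 'phone is off' block instead?")

# The budget comes from the user's own answers, not from the company's metrics.
review(weekly_minutes, {"Social": 300, "Video": 300})
```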

Another example: Users could have choice over the algorithms. Do they want a day full of outrage? A day full of wholesome cuteness? A day full of intellectual stimulation?
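A toy sketch of what choosing the algorithm could mean in practice: the user picks a profile once, and the same feed is re-scored using that profile’s weights. The profiles, content categories and numbers here are invented purely to illustrate the idea.

```python
# Toy sketch: the user, not the platform, chooses what the feed optimises for.
# Profiles, categories and weights are invented for illustration.
PROFILES = {
    "outrage":      {"politics_rant": 1.0, "cute_animals": 0.1, "long_reads": 0.2},
    "wholesome":    {"politics_rant": 0.1, "cute_animals": 1.0, "long_reads": 0.4},
    "intellectual": {"politics_rant": 0.2, "cute_animals": 0.2, "long_reads": 1.0},
}

def score(post_category: str, chosen_profile: str) -> float:
    return PROFILES[chosen_profile].get(post_category, 0.0)

posts = ["cute_animals", "politics_rant", "long_reads"]
chosen = "intellectual"  # the user's explicit choice, made once and revisable
print(sorted(posts, key=lambda c: score(c, chosen), reverse=True))
```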

Theory regarding ratings: users might give different ratings during an experience and after it (e.g. eating a tub of ice cream). How can we trust the user’s feedback?

We don’t want a world without ice cream. But we also don’t want ice cream every day. We need to find a good balance between activities that give immediate rewards and those that pay off in the long term.

What can individuals do now?

Dispel the false idea that technology is neutral. Expose how the technology is currently manipulative, and why it is that way, to make people less comfortable with the status quo. Then they can start to ask for change, and seek it out.

People often don’t care if companies have their data. But they do care when they learn they are being manipulated (using that data).

Truth is important. (It is already neglected in politics and advertising. Persuasive technologies can exacerbate these problems.)

We don’t want to be watching out for manipulation the whole time. We want the system to be on our side.

A positive perspective

One positive I took from this podcast: every time they point out a cognitive bias that can be used to manipulate users, I wonder whether that bias could also be used to persuade users to do positive things. If we used these techniques for good, they could provide significant benefits by steering users towards better behaviours.

(One area I have noticed in the past is how systems with user ratings can drive people to improve their behaviour. StackExchange is one example. Another is Airbnb, where the rating system encourages both guests and hosts to be on their best behaviour. Is that always a win-win? Who loses out?)

Tristan says this around 1:40:00: “There is a huge potential for the current ‘city planners’ to steer our interactions in a more positive way.”

Politics (sorry, not sorry)

Sam: If the majority is an ocean of attention that can be reliably manipulated to produce a certain outcome, then democracy will become a tool for authoritarian control.

Sam: Trump’s supporters know he is lying. He has managed to convince them that the truth doesn’t matter; they can win without it.

Tristan mentioned “democratic psychology”: the kind of psychological mindset that is necessary for a functioning democracy. One obvious element is caring about truth. Another is the willingness to be persuaded otherwise: “What evidence would convince me to rethink my position?”

Sam says this is a feature severely lacking in the politics of today. Politics should encourage conversations that will inform people, find common values, and change minds. That is necessary for a functioning democracy. But that’s not what is happening today.

What if Facebook tried to help, by rewarding this kind of behaviour?

Or just by acknowledging it: for example, a reaction button that lets somebody indicate that an article changed their mind. The platform could then share that article with people who want to be challenged.
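A hypothetical sketch of how such a reaction could be wired up. The data structures, threshold and opt-in flag are all invented; the only point is that the article is routed to people who explicitly asked to have their views challenged.

```python
# Hypothetical sketch of a "changed my mind" reaction being put to use.
# The data structures, threshold and opt-in flag are invented for illustration.
mind_changed_reactions = {"article-42": 317}            # reaction counts per article
wants_to_be_challenged = {"alice": True, "bob": False}  # users who opted in

def recommend_challenging(article_id: str, threshold: int = 100) -> list[str]:
    # Only surface articles that demonstrably changed minds,
    # and only to users who asked to have their views challenged.
    if mind_changed_reactions.get(article_id, 0) < threshold:
        return []
    return [user for user, opted_in in wants_to_be_challenged.items() if opted_in]

print(recommend_challenging("article-42"))  # ['alice']
```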

FIN

See also

humanetech.com for the movement Tristan inspired.

How many ways are there to persuade or sway a human? Too many! https://en.wikipedia.org/wiki/List_of_cognitive_biases

Also: https://en.wikipedia.org/wiki/Psychological_manipulation

How about ways to combat manipulation? https://en.wikipedia.org/wiki/Cognitive_bias_mitigation

This was just one of many interesting podcasts in the series Waking Up with Sam Harris.

I haven’t listened to many of them, but another I did really enjoy, on history, democracy and the impending future, was #138 The Edge of Humanity, a conversation with Yuval Noah Harari.

Two others of relevance to this topic:

  • In #78 Persuasion and Control, Zeynep Tufekci laments the loss of the Internet’s earlier innocence, criticises WikiLeaks, and explains why she prefers to pay for the services she uses.
  • In #136 Digital Humanism, Jaron Lanier explains how today’s flood of free information means that our attention is now the limiting factor. Some of the information we receive is being fed to us by unknown entities who pay for chunks of our attention. He argues, similarly, that we would be better off paying for curated feeds and supporting producers of valuable content.

The 2020 movie The Social Dilemma is available on Netflix.

Tristan Harris now co-hosts the podcast Your Undivided Attention which discusses this topic further.

Thanks for reading!

This write-up originally appeared on LinkedIn.
