2019-04-30

Market Failures in AI Research

There has been a lot of speculation about whether AIs can be taught ethics and safety. Here at Octoboxy, our position is trending towards “no.”

What do the following all have in common?

  • Flat Earth Movement
  • Anti-Vax Movement / Measles Outbreak
  • Brexit
  • Make America Great Again

→ If you answered “AI,” you are correct.

These are all widespread effects on society enabled by, or arguably caused by, AI research.

Two companies are represented in the list above: Google and Facebook. Google seems to deserve the most blame for Flat Earth and Measles; Facebook, for Brexit and MAGA.

To be clear, it’s unlikely anyone at either company ever said “let’s change the structure of society.” Far more likely they said, “let’s build an AI that fits our product to each customer’s desires better than has ever been done before.”

Google

It’s come to light since the beginning of 2019 that Google has a serious radicalization problem, specifically through its YouTube brand. It turns out that a huge share of all internet traffic is YouTube video, and roughly three out of every four videos watched on YouTube are picked by the recommendation engine.

YouTube recommendations are powered by AI, of course, and Google is the world leader in AI research. The YouTube recommendation engine is an AI that continuously learns from each and every YouTube customer, trying to decide which next video will keep that customer on the site, consuming content and ads, for even a tiny bit longer.

Unfortunately, rage, fear, uncertainty, and doubt turn out to be things that many customers will stay longer for. As a result, videos that promote extremist viewpoints and conspiracy theories get recommended disproportionately more often than boring, fact-filled videos.
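
To make the mechanism concrete, here is a minimal sketch of an engagement-maximizing recommender, written in Python. Everything in it is hypothetical: the names, the numbers, and the lookup-table simplicity (YouTube’s real system is a vast learned model, not anything this small). The objective function is the point. Nothing in it can distinguish accurate content from merely engaging content.

    from dataclasses import dataclass

    @dataclass
    class Video:
        title: str
        predicted_watch_minutes: float  # the model's per-user engagement estimate

    def recommend_next(candidates: list[Video]) -> Video:
        # The only objective is expected watch time; truth is not a feature.
        return max(candidates, key=lambda v: v.predicted_watch_minutes)

    candidates = [
        Video("Measured, factual documentary", 4.2),
        Video("SHOCKING truth THEY don't want you to see", 11.7),
    ]

    print(recommend_next(candidates).title)
    # The outrage video wins, because outrage holds attention longer.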

Insiders have known for years that YouTube was having radicalization problems, but since the beginning of 2019 those inside voices have been gaining publicity about how bad the problem really is. These days we are pretty sure that Flat Earth and Anti-Vax would not be significant movements were it not for YouTube’s AI pushing them.

Facebook

While Google’s influence was arguably accidental, Facebook’s has seemed a little more intentional.

To be sure, Facebook too is unlikely to have deliberately aimed to change political landscapes. But Facebook has likewise built a complex AI that continuously models each and every person, figuring out how to keep that person scrolling their news feed just a little bit longer. In the Facebook world too, rage, fear, uncertainty, and doubt are sticky messages that drive higher customer engagement.

As the Brexit fiasco has been unfolding, the UK parliament invited Facebook to help figure out what happened. Facebook repeatedly declined, and the government had to make some fairly nasty threats before the company finally gave in and turned over some files. It’s now a matter of record that Facebook ad targeting played a substantial part in bringing Brexit about, and that the Brexit referendum served as the “petri dish” for Make America Great Again, organized in part by many of the same project leaders.

We leave it to other news channels to explore who paid for these efforts and why.

The main takeaway we see here is that without Facebook’s specific ability to target advertisements to each individual, neither Brexit nor MAGA would be as powerful a movement as it is. As with Google, Facebook’s ability to target each individual with the messages they are most susceptible to has been entirely enabled by AI.
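
For illustration, the targeting loop can be reduced to a bandit-style sketch. This is our own hypothetical simplification, not Facebook’s actual architecture; a real ad platform learns from billions of impressions with far richer models. What matters is the shape of the loop: the only feedback signal is engagement.

    import random
    from collections import defaultdict

    # Click history per (user, message-variant) pair. A real platform
    # would use rich per-user models; a dict of outcomes is enough here.
    history: dict[tuple[str, str], list[int]] = defaultdict(list)

    MESSAGES = ["hopeful pitch", "fear-based pitch", "anger-based pitch"]

    def pick_message(user_id: str, epsilon: float = 0.1) -> str:
        # Epsilon-greedy: mostly show the variant this user clicks most,
        # occasionally explore. Truthfulness never enters the loop.
        if random.random() < epsilon:
            return random.choice(MESSAGES)

        def mean_ctr(msg: str) -> float:
            outcomes = history[(user_id, msg)]
            return sum(outcomes) / len(outcomes) if outcomes else 0.0

        return max(MESSAGES, key=mean_ctr)

    def record_outcome(user_id: str, message: str, clicked: bool) -> None:
        history[(user_id, message)].append(1 if clicked else 0)

Run long enough, a loop like this quietly discovers, for each individual, which emotional register they are most susceptible to.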

Market Failures

The term “market failure” covers a lot of ideas, but one of the main ones is externalities: costs that land outside the system, such as pollution.

You can have a completely successful economic model that gives off some pollution as a side effect. You keep doing your business, minding all your assets and liabilities, and the whole world works out for you. But the pollution you emit keeps building up, and after a while it starts to cause problems for the general public. Those problems are entirely external to your business model, so we call them externalities, or market failures: the market failed to take into account all the costs of the trade.
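
The arithmetic is simple enough to show with entirely made-up numbers. In the toy sketch below, the producer’s books look wonderful because the public absorbs a cost the producer never records:

    # Toy externality arithmetic, with made-up numbers.
    UNITS = 1_000_000
    PRIVATE_PROFIT_PER_UNIT = 10  # dollars the producer books per unit
    EXTERNAL_COST_PER_UNIT = 3    # dollars of pollution damage the public absorbs

    private_profit = UNITS * PRIVATE_PROFIT_PER_UNIT  # the producer's view
    external_cost = UNITS * EXTERNAL_COST_PER_UNIT    # never appears on the books
    net_to_society = private_profit - external_cost

    print(f"producer's view of the trade: ${private_profit:,}")
    print(f"cost pushed onto the public : ${external_cost:,}")
    print(f"society's view of the trade : ${net_to_society:,}")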

That seems to be what is happening here.

Nobody at either company said “let’s change the beliefs of society.” Google was just fitting videos to each customer. Facebook was just fitting news items to each customer. Both companies were solving the problem of capitalism: how to keep customers happier, for longer.

The societal problems that emerged are entirely side effects. It turns out there’s a certain amount of thought pollution buried in videos and news items, and if allowed to build up uncontrolled in viewers’ minds, it can change belief.

Until now, entertainment services have always been one-size-fits-most. But with the advent of AI, which is sophisticated enough to adapt to each of the billions of us completely individually, this no longer holds.

These days, services such as YouTube and Facebook can draw from the entire palette of human experience to curate entertainment that fits each customer perfectly and individually. But as this happens, each person’s beliefs can be changed as a side effect of the AI playing back to us our own deep, fantastical fears and desires.

Experts discussing this emergent phenomenon, notably the former Google design ethicist Tristan Harris, have coined the term “race to the bottom of the brainstem.” That is, our basest animal feelings are the ones that algorithmically drive prolonged customer engagement.

In aggregate, the direction of society has been changed.

Google and Facebook set out to help people be a little less bored. It’s entirely accidental that we brought back measles as a consequence.

Can AI Be Made Safe?

Society has been changed, not because anyone built an AI to change society, but because people built AIs that fit themselves to society. The act of fitting to the individual more precisely than was ever possible before ended up changing society solely as an unpredicted side effect.

This is why we’re positing that “No, AI cannot be made safe.”

Neither of the AIs involved is in any way arguably self-aware. Neither is an “Artificial General Intelligence”, the holy grail of a Lt. Cmdr. Data walking and talking among us. Neither was even designed with the idea of having an effect on the outside world.

No, both AIs are “Artificial Narrow Intelligence”: plain, dumb algorithms solving their narrow problem of optimizing some business model, in the same way business strategists have always tried to do. The fact that society has been shaped as a side effect is totally unexpected, a fully external behavior enabled by the sheer size of each AI experiment.

Here is our takeaway message:

When any single AI picks the news content that hundreds of millions of people believe to be true, it doesn’t much matter what intentions the AI was built with; it is going to affect society.

So no, AIs cannot be made safe. This is because the problems that are emerging are not problems with the AIs themselves. It’s not the AIs trying to control us that’s causing society to shake. It’s entirely the unexpected side effects, the externalities, that are changing us.

Said another way, we’re changing ourselves in unexpected and dangerous ways because completely benign AIs are enabling us to explore our own desires in new ways.

We are changing, and it’s absolutely not the AI’s fault.
