
How can we rebuild trust in the digital world? Part 2

Catch up on the introduction and part 1 of this inspiring discussion from Fujitsu Forum Tokyo 2019 here.

How much do we trust AI?

Michael Sandel

I would like to shift now to another ethical question that arises about big data, AI, and algorithms. Do you think that AI can make better judgments than human beings?

Let’s take the practical example of a medical diagnosis. Research has already demonstrated that AIs are better at analyzing CT scans than human doctors.

Suppose a loved one was suffering from a serious illness and had a CT scan. Would you rather the diagnostic analysis of the scan was performed by a doctor or by an AI? Hold up your cards to indicate your choice.

The audience is fairly evenly split between the AI and the doctor. Our panelists, with the exception of two people, say they prefer to have the AI make the diagnosis. Hazumu, you chose the doctor...why do you think the doctor is better?

Hazumu Yamazaki

When it comes to statistical judgment, AI is superior, and the more case data we accumulate in the future, the more accurate its diagnoses will become. But for now, I still want to rely on the doctor.

Michael Sandel

Who else voted for the doctor? Sebastian, you also chose the doctor...

Sebastian Mathews

I would want the doctor to review the results of the AI analysis, and then make a final diagnosis.

Michael Sandel

But why wouldn’t you just rely on the AI device?

Sebastian Mathews

An AI may be able to analyze the data, and be less prone to error, but I don’t necessarily think its reasoning can be trusted. I believe that the AI and the doctor should work together.

Michael Sandel

So you would ultimately rely on people to make the final diagnosis?

Sebastian Mathews

In the end, yes.

Michael Sandel

Hazumu? What do you think?

Hazumu Yamazaki

People possess consciousness, which is required for reasoning and which AIs lack. An AI can only perform statistical analysis — it doesn’t really understand what it is doing. The process of human perception is totally different from the way a machine functions.

Michael Sandel

But in this case, we are only relying on the machine for CT image analysis...

Hazumu Yamazaki

I don’t believe we can trust AIs that much yet.

Michael Sandel

So you don’t trust machines then?

Hazumu Yamazaki

If they are used more widely, we will be able to place more trust in them. For example, autonomous driving is expected to help reduce traffic accidents significantly. But many people are still afraid of self-driving cars. It’s not just a matter of statistics; it’s a question of being able to accept it mentally.

Michael Sandel

Let’s look at another example — this time involving the evaluation of employee performance.

When performance reviews are carried out by human bosses, they are often subject to certain biases. An AI device, on the other hand, can evaluate performance using algorithms with clearly defined metrics. Putting yourself in the position of an employee, which would you rather be judged by, your boss or an AI device?

So it seems that almost 80% of the audience would love or trust their bosses more! Our panelists, on the other hand, are evenly divided. Yumiko, why did you choose the AI?

Yumiko Kajiwara

I thought the AI would be more objective and fair, whereas people tend to have likes and dislikes that might affect their evaluation. I don’t think the AI would be subject to such biases.

Michael Sandel

So you think it would be more objective.

Yumiko Kajiwara

Human judgements are inevitably more subjective. And the evaluation might be affected by the relationship between the boss and the employee. I guess many of the people in the audience are in positions to perform the evaluation.

Michael Sandel

You are in a position where you have evaluated a lot of employees yourself. It sounds almost as if you mistrust the judgments you have made in the past?

Yumiko Kajiwara

It may sound that way, but I have tried my best to evaluate my subordinates’ work objectively. It’s just that if I put myself in the position of the one being evaluated, I think I would choose the objectivity of the AI.

Michael Sandel

Yuko, you chose the boss. What do you say to Yumiko’s argument about objectivity, overcoming bias?

Yuko Yasuda

I don’t think an AI can understand context. Even if a project doesn’t go as planned, employees may have made important contributions that are not reflected in the numbers or data. For example, there are people who provide mental support and boost their coworkers’ spirits when a team’s mood goes down. I doubt if an AI can properly evaluate those contributions.

Yoshikuni Takashige

Let me say something here. I think it is very doubtful that any of us can evaluate our work strictly by the numbers, compressing it all into metrics that can be measured objectively. There are invisible elements that can only be measured by human sensitivity. It’s true that if a relationship is bad, it will negatively affect the evaluation. But in a way, that is a realistic reflection of the workplace relationship.

Michael Sandel

So you’re suggesting that with many jobs, it may not be possible to evaluate performance with the metrics that an AI can measure. So how can we evaluate complex human skills that cannot be assessed with such metrics? Ian, you chose evaluation by a human boss. What are your thoughts?

Ian Bradbury

I wonder if an AI can properly evaluate employees who play supporting roles on a project. And I don’t think it can effectively take into account social perspectives like inclusivity in the workplace.

Michael Sandel

Yumiko, what do you think?

Yumiko Kajiwara

We use both absolute and relative evaluation methods. For absolute evaluation, it is important to identify and consider personal merits that numbers cannot reveal as well. In the case of relative evaluation, however, we have to rely on numerical values as a basis for comparison.

In the end, the best solution may be that human bosses make a final evaluation incorporating the AI’s evaluation. To be honest, I also want to have personal words of encouragement from my boss.

Michael Sandel

Edmund, what do you think?

Edmund Cheong

There is some truth in what Yumiko said, but I think AI always comes first. For example, a young, hard-working employee may put in long hours, but because he is self-effacing, his efforts may not be visible to his boss. So it is important for the skills and work of such a person to be evaluated objectively. We have used predictive AI models and have found them to be free of emotional bias. The important thing is to give everybody an equal opportunity.

Michael Sandel

So emotion gets in the way of reason and judgement?

Edmund Cheong

Exactly! If I argue with my wife in the morning, I may give someone a bad review simply because I’m feeling emotional.

Michael Sandel

So emotion always destroys judgement?

Edmund Cheong

Yes. We are made of biological chemical substances. We are influenced by our hormones.

Michael Sandel

Alright, we are getting into an important issue here — the idea that emotion always interferes with good judgment, so better to use the AI machine which is emotionless. Who disagrees with that? Hazumu, what about you?

Hazumu Yamazaki

I work in the field of emotion analysis. Clinical cases are revealing that people actually cannot make decisions without emotion. In his 1994 book Descartes’ Error, Antonio Damasio describes a man struck in the head by a spear. After the damage to the frontal lobe of his brain, he became incapable of making any decisions because he could rely only on logic. Recent philosophical thinking also suggests that emotions are central to our ability to make choices.

Michael Sandel

To make a good decision, do you need emotion?

Hazumu Yamazaki

That is what I think.

Michael Sandel

In our discussion so far some panelists have expressed the opinion that AI’s judgement is good because it is objective and unbiased, while others have suggested that emotions actually play an important role in judgement. I would like to test these ideas by considering another question, one that deals with matchmaking.

Let’s imagine an AI-powered application that has analyzed massive amounts of data and produced a shortlist of three people that it predicts to be your best lifetime partners. In choosing someone to marry, would you trust the AI’s recommendations, or would you trust the advice of friends and parents?

The audience seems to generally favor the AI. On the panel, we have five who chose the AI, two who chose friends and parents, and one who did not choose either. Yuko, you chose friends and parents. Can you tell us why?

Yuko Yasuda

Rather than my parents, I would seek the advice of my friends. Mainly because I think AI matchmaking would place greater emphasis on factors like educational background and income, and less emphasis on human traits that are hard to quantify. I myself don’t want to be judged by such quantifiable factors, and I don’t want to live in a society that believes those factors should be the basis for determining whether someone is a good match. It’s not romantic at all. Life can be more fulfilling when we meet somebody by chance; I feel that unexpected personal chemistry is more important. There’s no joy in it if my potential partner’s suitability is based on data.

Michael Sandel

Okay, data lacks surprise, and in romance surprise is important, not data.

Does anyone disagree with Yuko’s argument? What about you, Yasu? Is there anything you can say to change Yuko’s mind?

Yasuhiro Sasaki

Depending on how the data is gathered, I think it is possible for an AI application to evaluate a wide range of personality traits. Your friends may be able to introduce 100 potential candidates to you, but an AI allows you to choose from 100,000 or 1,000,000 potential candidates. Wouldn’t that be better? You could even program the AI to surprise you, and it might match you with someone you never imagined.

Michael Sandel

Wait a minute, Yasu. Does that mean that you program the algorithm to choose someone who does not seem a good match every 30 or 100 times? What does it mean to program ‘surprise’ into an algorithm?

Yasuhiro Sasaki

For example, Candidate A might seem to be the most compatible and the best match for you based on the life you have led so far. But there might be a Candidate B who — although they may not seem like a perfect match based on that life — has interesting qualities and may take your life in a whole new direction. In other words, you could program your own expectations of spontaneity into the AI. I think that probably can be done.

Michael Sandel

What do you say, Yuko?

Yuko Yasuda

If those kinds of multifaceted human characteristics can be entered in the program, then I admit its matchmaking accuracy would probably improve. But marriage involves a long-term commitment to a partner, and I wonder if the program can reasonably predict the future for the next 30 years based on past data. In that respect, I think it’s better to rely on intuition rather than data.

Yasuhiro Sasaki

Well, one out of three Japanese couples get divorced anyway, so intuition may not be the best guide either (laughter).

Michael Sandel

Yoshi, earlier you questioned whether job performance could be captured by a single metric. Do you think factors like “romance” and “surprise” can be incorporated into a matchmaking program?

Yoshikuni Takashige

I don’t think AI can measure what is at work in the deepest recesses of the human mind. We’ve talked a lot about emotions today, and although AI technology can be useful in suggesting potential life partners, it cannot reach the core of the human soul.

Michael Sandel

Now let’s ask a slightly different question. Japan is experiencing a declining marriage rate and birth rate — and as Yasu mentioned, a rising divorce rate. What would you think of the idea of the government sponsoring a program to support AI-based matchmaking in hopes of increasing the marriage rate and possibly the birth rate? Would you be for it or against it?

By a narrow margin, the audience seems to be for it. But most of our panelists disagree. Mika, why are you against it?

Mika Takahashi

I really dislike the idea of an AI choosing my marriage partner. In the first place, I’m not sure how much an AI can really know me, or what kind of data it will use to choose a partner. I would trust the advice of a good friend more than an AI. I don’t want an AI to make decisions for me.

Michael Sandel

Edmund, you seem to be consistently in favor of using AI...

Edmund Cheong

I trust humanity, of course, but I still support the use of AI. Nowadays, people spend a lot of time immersed in a virtual world. Just looking around Tokyo Station, you can see that most people keep staring at their smartphones. All over the world, people are no longer interacting face-to-face.

Michael Sandel

Is that a good thing or a bad thing?

Edmund Cheong

It’s a bad thing, of course. I believe in people, but humanity as a whole seems to be moving from the physical world to a digital, virtual world. So if digital technology can help create romance, I’m for it.

Michael Sandel

Create romance digitally?

Edmund Cheong

You can’t fall in love digitally, but digital technology can create opportunities to fall in love.

Michael Sandel

Mika, do you want to reply?

Mika Takahashi

I don’t think love can be digitalized. I don’t think data can address all the issues involved, and I’m sure there are feelings, preferences, and choices that people make unconsciously with respect to a potential partner that are too complex to digitalize.

Catch up on Part 1, or carry on to read Part 3 of this series.

Inspired to find out more about Fujitsu's vision to build a Trusted Future? Visit the Fujitsu Technology & Service Microsite to learn more.

And we'll be continuing the discussion at Fujitsu Forum Europe in Munich, November 6-7, 2019 — click below for all the details.
