Frank Pasquale: How To Regulate Google, Facebook and Credit Scoring?

"We can either say that as civil society and as individuals we work with governments to regulate algorithmic systems or we will see partnerships between government and those running algorithmic systems to regulate and control us."


Is artificial intelligence good or bad? That's not the point, says Frank Pasquale, legal scholar at the Francis King Carey School of Law at the University of Maryland (USA). The question is how to implement algorithms and how to control them. The "right to be forgotten" is a good example: Do we have the power to decide what others know about us? Or are data miners going to determine our future lives?

The interview was recorded on the sidelines of re:publica 2017 in Berlin.

New every week from the Stifterverband:
The future makers and their visions for education and training, research and technology

Author: Corina Niebuhr
Production: Webclip Medien Berlin
for the Stifterverband's YouTube channel

Transcript of the video

I am very worried about the future, because the concentration of power over algorithms and artificial intelligence is worrisome.

Google, Apple, Facebook, Amazon: these major firms have so much data. They are so influential in how the technology develops. And as algorithms move beyond internet and business contexts to things that affect how we are treated at the hospital, how law enforcement works, how the military works, there is not enough discussion in each of those fields. There are not enough ethical experts trying to control what data is being used, how it is being parsed, whom it affects, how you can appeal, and how you can complain if you feel that you have been unfairly affected.

So there is a natural transition from algorithmic determinations of, say, credit scores, health scores, whether you're a risk, whether a person is potentially criminal, and so on, to artificial intelligence and, say, robots that would have these algorithmic systems as their brains and would be able to act immediately in the world. The stakes are raised enormously as we move from algorithms online to robots. That's very, very important. And so I think we need institutions that guarantee algorithmic accountability in Facebook, in Google, in credit scoring, in health and finance scores first, before we allow artificial intelligence to take over education, health care, and other areas, or to have much influence in those areas.

I think that the key is, rather than asking the question "Is technology on balance good or bad?", to ask: Are we implementing it in a way that's inclusive and that allows everyone to be included in the restructuring of society? Or are we implementing it in a way such that there are a few elites and plutocrats at the very top who control how technology is implemented, and the rest of us are just the subjects of technology? That to me is the real question.

We have such an enormous amount of content online that we can see on Facebook, on Google, and on these other large intermediaries. That is very important content, and it's great that these firms are developing algorithms that allow us to sort and filter it. But the problem arises when these algorithms prioritize really troubling content: racist content, extremist content, terroristic content. When any of these things get prioritized, that helps corrode the public sphere. It helps undermine the basic commitments that we all have to our forms of democracy, social justice, tolerance, and diversity. And when that happens, that's really troubling.

There is a lot of concern about Facebook and Google not being responsible enough here. I think they are now trying to take some first steps towards responsibility. For example, Google has paired up with fact-checkers; that's a positive step. Facebook has also tried to put notices underneath fake news that help people when an article is obviously fake or when it has been disputed. If I were them, though, I would go a little further on the level of substance: I would actually either change the headline or make the warning larger. Because right now, for example, when Google returns results it might say: Pope Francis endorses Donald Trump, right? That's a lie. The result probably should say: Pope Francis does not endorse Donald Trump. That should be the first result there. And that's something they actually are doing with medical results. To give an example of a positive thing Google did in the past: it used to be that when you searched "I have a stomach ache" on Google, you would get all these random sites, whatever happened to be the most popular thing about a stomach ache. So we had bad information about stomach aches. What Google eventually did is partner with the Mayo Clinic, and they said: Let's make sure that the first things people see are very reliable, vetted materials that doctors have looked at. I think they need more partnerships like that with journalists; just as doctors are a profession, journalists are a profession. They are not going to solve this with just machine learning and software. To the extent they do that, it would be a positive step. If they fail to, then I think you have to consider having regulators come in, media authorities and other authorities who could say: This is where you need to step it up, this is where you need to do a lot more. And that could involve things like hate and extremist speech. It could also involve fake news. But there are many examples where I think co-operation between government and civil society would be very positive here.
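To make that Mayo Clinic example concrete, here is a minimal sketch of "vetted sources first" reranking. It is purely illustrative: the allowlist, the domains, and the popularity numbers are invented assumptions, not Google's actual ranking logic.

    # Hypothetical sketch of "vetted sources first" reranking, in the spirit
    # of the Mayo Clinic partnership described above. The allowlist, domains,
    # and numbers are invented for illustration, not Google's actual logic.

    VETTED_DOMAINS = {"mayoclinic.org", "nhs.uk"}  # assumed allowlist

    def rerank(results):
        # results: list of (domain, popularity) pairs; higher popularity
        # normally ranks higher. Vetted domains jump ahead of raw popularity.
        return sorted(
            results,
            key=lambda r: (r[0] in VETTED_DOMAINS, r[1]),
            reverse=True,
        )

    results = [
        ("randomforum.example", 9800),   # popular but unvetted
        ("mayoclinic.org", 4100),
        ("homeremedy.example", 7600),
    ]
    for domain, popularity in rerank(results):
        print(domain, popularity)
    # mayoclinic.org now ranks first despite lower raw popularity.

The point of the design is that popularity still breaks ties, but membership in a vetted, professionally reviewed list dominates it.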

I would say that with respect to these large intermediaries we are in a desperate situation. We're really in an alarming situation, because if we let them go unregulated for much longer, we're failing to gather the information we need to do sensible regulation. That's the first step, right? The title of my book is "The Black Box Society", and I called it black box because so often what's going on inside these companies is not transparent to anyone on the outside. In fact, it's not even transparent to anyone except the top, say, 10 or 100 people in the company. So this is a problem for regulation. The very least we have to do is set up institutions that can inspect what's going on, like what we do with banks and other very large, important firms. A very good example is the whole Volkswagen scandal: we saw that even a company doing something relatively concrete used software to hide what was going on. And that danger is 10 times greater in the case of a Google or a Facebook, because the software is 10 times more complicated, if not more. So I think that's the first step: monitoring. The second step is establishing some sort of civil society-corporate partnerships in terms of how to deal with big problems like antisemitic content, racist content, extremist content. That's going to be a very important first step towards saying that the online sphere is something that we own in common. It's not something that can just be dominated and controlled by a few major American companies. It's something that all countries in the world can be involved with, can regulate, and so on. And if we don't do that, the problem is not the distinction between regulation and no regulation; the distinction is between letting these large companies continue to regulate unaccountably, or having the people's voice influence what's going on.

The key is going to be: Are we going to get initiatives on the European level that bring in fact-finding, bring in people who have complaints about these intermediaries, and listen to them? That's going to be a first step, along with an ongoing consultative process that will lead to laws that are more positive. I would say that a really good example of a first step in this direction is the right to be forgotten. I know it's somewhat controversial in the media to have a right to be forgotten, this droit à l'oubli; it is seen as a form of censorship, but it's truly not censorship, because ultimately the data is still there at the source in the media. What it is really responsive to is: when people search for each other by name, what are the first results that come up? And as we've seen, Google rather rapidly adapted and set up a process that allows people to object to certain results showing up in their name search results. That was a positive first step. And allowing the company to, sort of, govern that process, but with an appeals process to a governmental body that could develop a jurisprudence about it, that's how these problems could be solved. And I could see the same institutional model behind the right to be forgotten being applied in cases of discrimination, of extremist content, of hate content, of other content like that. It's multipronged; there is a lot of responsibility for the company. But there is always some juridical or administrative body in the background that can handle suits over it and gradually develop a jurisprudence in the area. So I feel very positive about it. I don't think laws are automatically going to lag behind. It's just a matter of: Do we set up the institutions that can keep the law abreast of technological development, or do we try to use older forms of regulation that may not be fit for purpose in the modern era?
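As a thought experiment, the delisting process described above can be modeled as a simple filter keyed to name searches. Everything here (the data model, the names, the URLs) is an invented assumption, not Google's implementation.

    # Hypothetical sketch of name-search delisting under the right to be
    # forgotten. The data model is an invented assumption, not Google's
    # actual implementation.

    # name -> URLs delisted after a successful objection (and any appeal)
    upheld_objections = {
        "jane doe": {"news.example/old-debt-story"},
    }

    def name_search(name, all_results):
        # The source pages still exist; only the name-keyed search listing
        # is suppressed, which is why this is not censorship of the source.
        suppressed = upheld_objections.get(name.lower(), set())
        return [url for url in all_results if url not in suppressed]

    results = ["news.example/old-debt-story", "janedoe.example/cv"]
    print(name_search("Jane Doe", results))  # ['janedoe.example/cv']

Note that the same page could still surface for other queries; the objection attaches to searches for the person's name, which is Pasquale's point about the data remaining at the source.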

I had some consultations with Taiwan earlier this year, and I met with some folks in Taipei. I think that in Taiwan they are very forward-thinking, and they have a very robust community, involving people at the governmental level, around algorithmic accountability and keeping platforms accountable. The problem is that Taiwan is such a small country that it doesn't have much leverage. The European Union is large enough that it has leverage; it can actually set conditions on these big tech companies. Canada, surprisingly, also has a fair amount of leverage, and I think it is actually doing some very good things as well. In Japan, unfortunately, my impression was that there is a certain romance about artificial intelligence, even at the very highest governmental levels, where it is seen as the future of the economy. So I'm afraid there's a bit too much trust in the machine at some of the highest levels of the Japanese government. But perhaps that's an artefact of the current political party in power there; it's not the future. In China, unfortunately, there is often an embrace of algorithmic systems as forms of social control. China has actually released a plan for a social credit scoring system. The idea behind it is that algorithmic monitoring of individuals will lead to scores that include not just their creditworthiness or their criminal history but whether they have produced political dissent. And if someone politically dissents, there is a possibility that not only will their social credit score be lowered, but so will the scores of their whole friend network, say, their Facebook friend network, or whatever the equivalent is on Chinese social networks like Weibo, Tencent, or WeChat: all the people connected to them would see their scores go down, too. So immediately, if you dissented, all of your friends would know that, and they would know it hurt them in the social credit scoring system. That to me is one of the most perfect tools for authoritarian rule ever devised. And to me that shows how high the stakes are. Because we can either say that as civil society and as individuals we work with governments to regulate algorithmic systems, or we will see partnerships between government and those running algorithmic systems to regulate and control us. So the power has to go in one direction or the other; that's the big problem. And we have to be able to assert power as governing entities against these systems, or they will take power over us.
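To see why the network effect Pasquale describes is so coercive, here is a minimal sketch of score penalties propagating through a friend graph. All class names, penalty values, and starting scores are invented assumptions for illustration; nothing here is based on any real system's design.

    # Hypothetical sketch of network-propagated score penalties as described
    # in the interview. All names and numbers are invented assumptions.

    DISSENT_PENALTY = 100   # assumed deduction for the dissenter
    NEIGHBOR_PENALTY = 25   # assumed smaller deduction for each contact

    class SocialGraph:
        def __init__(self):
            self.scores = {}   # person -> current score
            self.friends = {}  # person -> set of connected people

        def add_person(self, name, score=700):
            self.scores[name] = score
            self.friends[name] = set()

        def connect(self, a, b):
            # Connections are mutual, so penalties flow both ways.
            self.friends[a].add(b)
            self.friends[b].add(a)

        def penalize_dissent(self, person):
            # The dissenter's own score drops...
            self.scores[person] -= DISSENT_PENALTY
            # ...and so does every direct contact's, which is what makes the
            # system self-enforcing: friends acquire a stake in each other's
            # silence.
            for friend in self.friends[person]:
                self.scores[friend] -= NEIGHBOR_PENALTY

    graph = SocialGraph()
    for name in ("ana", "ben", "chen"):
        graph.add_person(name)
    graph.connect("ana", "ben")
    graph.connect("ana", "chen")
    graph.penalize_dissent("ana")
    print(graph.scores)  # {'ana': 600, 'ben': 675, 'chen': 675}

Even with these toy numbers, the incentive is visible: every connection turns a friend into an enforcer, because your dissent costs them points.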