Digital Democracy

Luciano Floridi, Professor of Philosophy and Ethics of Information and Director of the Digital Ethics Lab at the Oxford Internet Institute, talks about politics, digital technology, social media, platforms, and information pollution.



Digital technologies have already had a huge impact on many aspects of our lives, politics included, and in fact, these days, if you read the newspapers, that seems to be the area where they are having the biggest impact of all. It's natural: inevitably, politics translates into social transformation and, in turn, into everyday life. So yes, digital technologies are making a huge impact there. Is it a good impact, a positive impact, or not? Well, so far, they have not been terribly helpful, to put it mildly. Some people have underestimated the impact of social media, for example, and of platforms. Why? Well, they thought that a few advertisements of the wrong kind, perhaps hijacking personal profiles and feeding us the wrong advertisements or the wrong kind of information in order to determine our particular choices, surely could not have such a big impact; it would be negative, but it would be just a small part of a much bigger picture. So, the argument goes: yes, social media and platforms have had some impact on our political choices, on Brexit, the election of Trump, the emergence of populism in Italy, and so forth, and yet that impact must have been marginal, almost negligible. What's the counter-argument here? Why is that picture wrong? Well, let me use a simple analogy. Imagine you have a wonderful old bottle of wine, one you have kept away for a special occasion; in fact, it's been years since you bought it. You finally want to open that bottle of wine, which is called a national election, maybe an important decision for the future of your country. Someone, somewhere, puts just one drop of vinegar in it. Small, tiny, negligible. Really? You are going to throw away the whole bottle. So, let's not confuse the small impact that social media had on, say, the general election in the United States with the fact that they polluted the environment.
They transformed an environment in which something was unacceptable, unheard of, something that would have ruined your intellectual and political career forever, into normality. Swearing, lying, saying the opposite or offering so-called alternative truths, being able to go online and say, "We would never, ever form a government with that particular party," and yet, only months later, getting together and saying, "Oh, that's okay anyway." Well, this is the polluted environment that social media have enabled us to generate, unfortunately. So, remember: it's only a drop, but it's a drop that ruins the whole bottle of wine. That's why social media need to be carefully handled, and it's not enough to say, "Oh, the impact was marginal, limited, not frequent." No, they have to be on the right side of the divide; they need to take a stance and work for democracy. Neutrality is not a stance; it is a non-position. It is very hard to imagine that social media in other countries (forget for a moment Europe, or the United States, or Great Britain in terms of Brexit, and consider the rest of the world: maybe Brazil, or an African country, say South Africa, or maybe South Korea or the Philippines) will not be equally affected by the wrong handling of social media. So, social media have an enormous responsibility, and hiding from that responsibility behind the tiny, small impact they might have had on the lives of a few million people, for an hour a day, every now and then: that's not good enough.



It's hard to identify the biggest challenges that such a growing sector is facing, but a couple of them may be useful to reflect on, given such large responsibilities. One is: what relationship do we want between these digital technologies and the environment? Again, just to be clear, the digital is not replacing the analogue; in the best case, it is supporting it, making it more efficient. For example, we fly much more today than we have ever been able to in the past, and cheap airlines and economy flights are available partly because of digital technologies and the efficiency built into that corner of the world. So, the digital needs to be a friend of the environment, and the environment needs to look at the digital as an ally. How are we going to build this alliance between the green of the environment and the blue of the digital? This green-and-blue strategy is one of the challenges we face today. For example, these enormous companies are often very aware of their impact, and they look carefully at the sustainability of their business in terms of buying renewable energy. But the renewable energy bought by Google, for example, isn't available to someone else. So, we need to be a little bit more astute in building this green-and-blue strategy moving forward. The second point that I would like to highlight is not just environmental sustainability but social preferability, to use another label. In other words, what kind of society do we want to build, in which the environment and the digital are friends and, at the same time, life is worth living? We might have the best technology and the best environment and still lead very miserable lives altogether. How do we square this particular issue with social standards: lowering inequality, improving living standards elsewhere, not in Europe or the United States, but where standards are really horrible? That is the second challenge that seems to be in front of us.
How do we transform this extraordinary development of human intelligence, these digital technologies, into engines for the creation of wealth, and then use that wealth to help the rest of humanity have a more decent life? Well, that would be a project for the twenty-first century, worthy of all our efforts.



Cambridge Analytica has been a very welcome wake-up call. The problem was well known; academics like myself had been discussing the Cambridge Analytica problem for almost a couple of years. The truth is that it had not hit the fan, so to speak, and it had not hit the news. Also, the magnitude of the problem had not yet become clear. But the problem itself was quite well known. Facebook had allowed an app, or several apps, Cambridge Analytica's being one of them, to collect information, and information about other people connected to the people about whom it was collecting information, secondary profiles and so on, in a way that was unacceptable. In other words, if I was running such an app, I was able to collect your data, and then the data of the people connected to you, and then do pretty much what I wanted with the data collected. Now, this simple but not over-simplistic picture is what lies behind the Cambridge Analytica scandal. A scandal, because those data were then allegedly used to influence the American elections, Brexit, and God knows what else. Now, why a welcome wake-up call? Facebook had already changed the rules; it had already stopped allowing those apps to collect the data and the data about other people connected to those data. But that hadn't been enough: it had not taken care of alerting the people involved, and it had not moved forward to restore trust and to make sure that nothing wrong would happen. It was a moment of "as long as I do what the law says I should be doing, that's good enough". This is called compliance. Compliance is when you follow the law and do exactly what the law, and only what the law, says you should be doing. In a complex world like that of digital technologies, where things are changing on a daily basis, where trust is essential, and where the lives of billions, and I mean billions, of people are one click away from your business, compliance is certainly not good enough. It is necessary, you have to do it, but it is hugely insufficient. We see this in the Cambridge Analytica case.
Facebook probably didn't do anything wrong, legally speaking, and yet the backlash has been ferocious, and in some sense rightly so. There was a breach of trust, and Mark Zuckerberg apologized for it repeatedly, in different contexts, and again rightly so. So, what can we learn from it? Well, partly that the industry is already a bit ahead of legislation in this case too: Facebook had already changed its strategy anyway. Partly that sociopolitical engagement with these companies is vital. We need to stop thinking that the markets will take care of it by themselves. There are corners of the world that the markets either don't care for or cannot care for. And that is where social and political innovation and engagement are vital. And the lesson: even when the markets are effective, effective in what way? The markets are very good at creating wealth. They are terrible at distributing wealth, and terrible at ensuring sustainable engagement in creating that wealth. So, when it comes to creating wealth, let business do what business does best. But when it comes to distributing the advantages and making sure that the ethical thing is done, in terms of sustainability and social preferability, that is our task; that is the task of society.



Among the several changes that we have witnessed in the last few years, several fashions and several technologies come to mind. We spoke about big data. We were speaking about cloud computing. We are now talking a lot about artificial intelligence in a variety of ways. The latest of these developments is agents: things that do things for you, instead of you, and sometimes better than you, or at least they save you time and effort. Now, agents come in all kinds: it might be a little piece of software that cleans up your computer (that's already an agent), or it might be the little piece of software that makes sure that the fire alarm in your house is constantly running, updated, and doing a good job, or it might be the little app that reminds you that there's no milk in the fridge, and so on. All these little bots, as they are sometimes called, little pieces of software that help us in everyday life, come with a trap that is almost too easy to fall into. Because they do things for us, instead of us, better than us, and they save us from the tiresome, the boring, the "I don't want to do it today" kind of jobs, it is very easy to go beyond delegation and let them decide for us instead of us. And so, next thing you know, maybe it is your particular little robot in the house that decides when you need to clean the sofa or when you need to clean the carpet. It is your little robot that tells you that now is the time to take the car for an MOT, a revision, or a double check. In other words, we are delegating not just the doing but also the decision about the doing. Now, that's a big difference. It is one thing to decide that something is to be done and then delegate an agent, maybe an artificial agent, to do it for you: the dishwasher really has to be run, and has to be run now; press the button and go and maybe read a newspaper or watch Netflix.
It is another thing to be told by the dishwasher what you need to do, when to do it, and that maybe you shouldn't soil so many dishes next time. Being in the hands of a delegated artificial agent: that's the risk. So, delegation of actions or processes is probably a good idea. But control, decision, strategy, what is going to happen if something goes wrong: that should really stand as a warning and should remain in our hands.