Eva Kaili, Member of the European Parliament and Chair of STOA, the European Parliament’s Science and Technology Options Assessment body, talks about education systems, the digital single market, policy makers and the need for an AI ethics code.
At STOA, the European Parliament’s science and technology assessment body, I see that new technologies are evolving really fast. Sometimes politicians have to make decisions without even being able to understand in depth the technology that is disrupting business models. So what we do need, and I see we have to be fast about it, is to make sure that education systems will adapt and will give us the tools to be able to survive in any environment. And of course we need new skills, and this means digital skills, but not only. If we have smart technologies where you can ask a question and some huge data hub gives you the best response, then it’s not a matter of memory, it’s a matter of critical, analytical skills, and this means that we have to give people enough tools to make creative decisions, to be creative, to think in a creative way and out of the box. So I do believe that what we have to achieve with the new educational system is to give people digital skills, so that they understand the technologies even if they don’t use them in their work, and also to make them creative, because this is what can distinguish humanity from artificial intelligence.
In the European Parliament we’ve been working on several files trying to achieve a digital single market, a digital economy. This means an environment where there are no borders, and we have to make sure that there are common definitions, which means that policy makers will have to compromise on a few things and find common ground to achieve that. This is not easy, because first of all we are not used to not having borders. Second, different things can affect us in different ways, and we have different understandings of technology. We don’t have many politicians who are computer scientists; maybe a few, but not enough. So they have to be open minded, and then think and make the right decisions. I see that especially in the European Parliament we have open-minded politicians, and at the moment they try to be innovation friendly. They try to accept that technologies can improve our lives, and we see the best of the technologies. Because when somebody tells you that more data can make technology smarter, and we can have prediction of diseases or better treatment of diseases, then this can lead us to be open minded. At the same time it creates some concerns about privacy, about mass surveillance, about how it can affect our lives, and about discrimination or biases that can take place through an algorithm, because algorithms are made by humans, who can bring their biases with them. But it seems we are aware of that. We are really trying to make sure that technology can have even fewer biases than human decisions, and of course everybody has the right to an explanation. So if a decision is made by an algorithm, you have the right to ask for an explanation, and I think this is essential. At the same time, this year we have the Commission working on an ethical code for artificial intelligence.
And this means that by the end of 2018 we would try to have a code under which you cannot be discriminated against by an algorithm or an artificial intelligence and be excluded, for example, from your insurance just to maximize profit. We want AI for good. We want AI to balance profit and good, not to go one way or the other. Otherwise it would not be viable, it would not be the AI we want, and it would actually scare people. I know that when we talk about AI, people think about humanoids, or robots, or drones following us. I’m talking about AI that improves our lives and the quality of our lives, that gives back more to society, and that gives citizens more potential, options and possibilities to participate and to improve their lives. At the same time, we have to safeguard our values and avoid funding any harmful artificial intelligence, weaponized drones for example, because you never know who will end up controlling them. We are aware of that, and in the Parliament we try to be fast, especially on cyber security, to control where the funding goes and to make sure that there will be an ethical code around AI.