Conference on opportunities and dangers of AI: ‘Europe needs a daring vision’

The SAILS conference ‘The Future of AI is Here (and Guess What … it’s Human)’ brought together researchers and policy makers to discuss the key issues in artificial intelligence (AI). Where are the opportunities and what are the dangers?

Developments in artificial intelligence have gained pace dramatically in recent years. In a very short time, AI applications have become accessible to a large audience. This creates both opportunities and dangers. ‘We live in interesting times,’ said Rector Magnificus Hester Bijl in her opening speech. ‘But what do these developments mean for humans?’ The speakers tried to answer this question and to separate the sense from the nonsense in the current AI craze.

Professor Virginia Dignum of Umeå University spoke about the issue of how we can use AI responsibly. She said it involves finding a balance between innovation and social responsibility. ‘Whatever you do with these brilliant systems, they have unexpected effects. We can do a great deal of good with AI, but there’s definitely also a lot that can go wrong.’

Dignum explained that it’s important to ask ourselves, while developing AI systems, what we actually want AI to be. ‘Should it be human-like and, if so, what does that mean? What kind of tools do we want these systems to be? And who are we designing them for?’ She said that to answer these questions it is important to bring different disciplines together. ‘AI is more than a technology, it’s a social construct. This calls for an interdisciplinary approach.’

Virginia Dignum.

Dangers of ChatGPT

Is ChatGPT actually intelligent? It’s better to assume that it is, said Associate Professor Joost Broekens during one of the panel discussions. ‘Language models like ChatGPT can do many tasks better than the average human, even when the model hasn’t been specifically trained in them.’ For instance, these models can make calculations and solve puzzles.

Associate Professor Francien Dechesne warned that when we use systems of this kind, we’re ‘putting the cognitive infrastructure of society at risk’. As an example, she mentioned students who use ChatGPT to do their study assignments. ‘It’s not about them being able to produce a perfect answer, but rather being able to put their thoughts into words, so that other people can understand them. We need to keep training the skill of expressing our thoughts.’

From left to right: Joost Broekens, Francien Dechesne and Eva Hofman.

During the panel discussion, journalist Eva Hofman drew attention to the dubious way in which the developer of ChatGPT collected data to train the model. These data include not only many journalistic sources that were not paid for, but also personal PDF files belonging to people who never gave permission for their use. Much of this data is already available on the internet, but the danger of AI systems is that they can connect it all together, Hofman warned.

Europe increasingly dependent on American tech companies

AI is crucial for the future of Europe, but what will that future be like? Professor Holger Hoos (Machine Learning) said he was worried about it. ‘If we continue as we are now, Europe will be third, after the US and China. Europe is becoming increasingly dependent on a couple of American tech companies. We mustn’t accept that.’

Hoos wants Europe to make big investments in the development of ethical artificial intelligence for the benefit of humans. ‘Our public institutions need to be leaders in the foundations of AI. Only then can we train the talents that companies need.’ Hoos argued for a large European research institute, like CERN but for artificial intelligence, with large-scale infrastructure for the development of AI. ‘To achieve great things, we need a daring vision.’
AI and diversity

When we design AI systems, who are we doing it for? For example, the data used to train ChatGPT mostly come from rich, Western, democratic countries with a highly educated population. ‘Africa is used for mining cobalt for chips, or getting workers to watch and describe terrible images so that AI models can be trained with them. But apart from that, the continent isn’t included in the development of AI,’ said junior researcher Yasmin Ismail during the panel discussion on AI and diversity.

Oumaima Hajri, researcher and lecturer at Rotterdam University of Applied Sciences, agreed with her. ‘When we talk about “AI for the benefit of humans”, exactly which humans are we talking about? The reflection on the ethical frameworks takes no account of different ways of thinking, such as those in the Muslim world. Everyone should be able to join in with that discussion.’ When the dangers of AI are being discussed, she sees that the ‘tech bros’ (young men in the tech industry) are mainly afraid that AI will threaten their position. ‘That kind of existential fear of AI is a luxury problem. You don’t see that fear in marginalised populations.’

‘These systems are what we make of them’

Kathleen Ferrier, chair of the Dutch Unesco Committee, also reflected on the role that AI can have in strengthening diversity. ‘AI isn’t right or wrong. These systems are what we make of them. AI can be used to help marginalised groups and to protect human rights.’

Professor Bram Klievink (Digitalisation and Public Policy) closed the afternoon and looked back on a successful conference. ‘We’re in the eye of the storm and trying to sketch a clear picture of a period full of change. We’re trying to find out what rules we need, and looking for a balance between innovation and control. It’s good that we’re having these important discussions with each other.’

Text: Tom Janssen
Photographs: Monique Shaw

To stay informed about Leiden University events of this kind, sign up for our newsletter now.

About SAILS

The Leiden University SAILS (Society, Artificial Intelligence and Life Sciences) initiative is a university-wide network of AI researchers from all seven faculties: Law, Science, Archaeology, Social & Behavioural Sciences, Humanities, Governance & Global Affairs and Leiden University Medical Center.

Our view on AI is human-centred, focusing on the fair and just use of AI in and for society. We aim to expand knowledge of AI and its uses for the benefit of society as a whole, in order to help it withstand the challenges it is facing. We are convinced that this can only be achieved by bringing together talents and insights from all academic disciplines. SAILS researchers collaborate in interdisciplinary research projects, share their knowledge, inspire others at events, and educate our students to be the AI professionals of the future. Because at Leiden University we believe the future of AI is human.