INCLUSIVE ARTIFICIAL INTELLIGENCE

Artificial intelligence is a technology with huge potential. Its impact on society is expected to grow, but it also brings risks and challenges. When managed effectively, with well-curated data sets and carefully designed algorithms, it can also help protect human rights and individual liberties.
This project aims to develop best technological practices to counter discrimination and promote social inclusion. Through specific tools such as the NOME system, it will be possible to analyze documents and administrative acts and to identify terms that do not comply with anti-discrimination guidelines. Artificial intelligence helps us understand how and where to act, supporting the use of inclusive language.
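As a rough illustration of this kind of term screening, here is a minimal Python sketch assuming a small hand-written guideline dictionary; the terms, the suggested alternatives, and the matching logic are illustrative placeholders, not NOME's actual rules or code.

import re

# Hypothetical guideline dictionary mapping flagged terms to inclusive
# alternatives; NOME's real anti-discrimination guidelines are not shown here.
GUIDELINES = {
    "chairman": "chairperson",
    "manpower": "workforce",
    "mankind": "humankind",
}

def flag_terms(text):
    """Return (flagged term, suggested alternative) pairs found in the text."""
    findings = []
    for term, suggestion in GUIDELINES.items():
        if re.search(rf"\b{re.escape(term)}\b", text, flags=re.IGNORECASE):
            findings.append((term, suggestion))
    return findings

print(flag_terms("The chairman thanked the manpower of the office."))
# -> [('chairman', 'chairperson'), ('manpower', 'workforce')]

A production tool would of course need lemmatization and context awareness; exact word matching is only a starting point.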

TEAM

DISCOVER THE VIDEOS OF INCLUSIVE AI

AI & Inclusion: NOME

An empirical study of text annotation

WHAT WE DO

INITIATIVES

ARTIFICIAL INTELLIGENCE AGAINST DISCRIMINATION

Interview with Paolo Ceravolo, associate professor of ICT

What kind of connection is there between AI and human rights?

AI systems are increasingly present in our everyday lives, and this raises important ethical issues. Algorithmic decisions can affect people’s lives in significant ways. It is important for algorithm developers to keep in mind the social and political consequences of their work, in order to guarantee equity and justice for everyone, regardless of gender or social status.

How can artificial intelligence “work” for inclusion?

The aim is to make sure that the choices made by artificial intelligence are not discriminatory. To achieve this goal, we can act on two aspects: on the one hand, the “cleaning” of the data set, which must not include toxic elements; on the other, the accuracy of the choices made by the algorithm. I think the first aspect is the most important.
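As a minimal sketch of the “cleaning” step Ceravolo describes, the snippet below filters toxic examples out of a training set before they reach a model. The marker vocabulary is a placeholder assumption; in practice one would use an annotated lexicon or a trained toxicity classifier.

# Placeholder vocabulary standing in for a real toxicity lexicon or classifier.
TOXIC_MARKERS = {"slur_a", "slur_b"}

def is_clean(example):
    """Keep a training example only if it contains no flagged marker."""
    tokens = set(example.lower().split())
    return tokens.isdisjoint(TOXIC_MARKERS)

raw_dataset = ["a neutral sentence", "a sentence containing slur_a"]
clean_dataset = [ex for ex in raw_dataset if is_clean(ex)]
print(clean_dataset)  # -> ['a neutral sentence']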

Why isn’t artificial intelligence “neutral”?

Artificial intelligence reflects the balances and the discriminatory elements learned from the data a society produces. A “cleaning” phase is therefore necessary. We could say that artificial intelligence is a summary of the knowledge accumulated by humankind; if that knowledge has weaknesses, the system replicates them too. In a society that relies more and more on automation tools, we must be even more responsible: we are the ones who have to educate the systems, not the other way around. So today we have to improve our collective knowledge to make sure that future decision-making processes will be even fairer.

From this point of view, one useful tool is the NOME system. How does it work?

NOME is an assessment tool that analyzes texts to identify words that do not follow inclusive-language guidelines. It compiles statistics on the different kinds of acts and the most critical situations, and it can give real-time suggestions while you are writing. It has already been used on the administrative acts of the Università degli Studi di Milano, and it could be applied in wider contexts too.
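To illustrate the reporting side described above, here is a hedged Python sketch that aggregates flagged terms per category of administrative act, so the most critical areas stand out. The categories, sample texts, and flagged vocabulary are invented for demonstration and do not come from NOME.

from collections import Counter

def compile_statistics(documents, flagged):
    """Count occurrences of flagged terms in each category of acts."""
    stats = {}
    for category, texts in documents.items():
        counter = Counter()
        for text in texts:
            counter.update(w for w in text.lower().split() if w in flagged)
        stats[category] = counter
    return stats

# Hypothetical document categories and contents, for demonstration only.
docs = {
    "hiring notices": ["the chairman will select the candidates"],
    "internal memos": ["mankind benefits from open research"],
}
print(compile_statistics(docs, flagged={"chairman", "mankind"}))
# -> {'hiring notices': Counter({'chairman': 1}),
#     'internal memos': Counter({'mankind': 1})}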

Why is it important to follow an integrated approach when developing artificial intelligence?

It is highly relevant. Planning research around its prospective “end users” helps both society and the scientific community, which is pushed to set clear priorities. Another key element is the sharing of different skills. For example, the IT community has already done a great deal of work on machine learning and non-discrimination; connecting it with legal and regulatory expertise is a plus that can help achieve important results.

Paolo Ceravolo, Samira Maghool, Costanza Nardocci
