INESC TEC Open Talks on Research Ethics and Defence
INESC TEC will organise a series of sessions – starting in September – to discuss the ethical dimension of research in the Defence sector.
The goal is to explore the latest international geopolitical changes, which have brought the Defence sector to the forefront of discussions in the European Union. This issue has a direct impact on the scientific community at both European and national levels, as the connection between research projects and the Defence sector – with possible military applications – raises new and complex questions.
To ensure a productive discussion with an impact on public policies, INESC TEC will bring together a group of national and international experts for each session. Each one will open with an initial lecture, followed by a debate between the speakers and the audience.
Three sessions have already been confirmed.
September 19 | 5 pm – 6 pm | Online | Open Talk 1: HUMANISM WITHOUT BORDERS – CANCELLED
(this conference will be held in Portuguese)
Speaker: Álvaro Vasconcelos, founder of the Demos Forum and Holder of the José Bonifácio Chair at the University of São Paulo (2023-2024)
Abstract: To the popularity of geopolitical theories with their nihilistic undertone, to the rise of nationalism and the return of power politics with war as its consequence, and to the disdain for human rights in the name of the superior interests of states, we must respond with a humanism without borders: an agenda of compassion, hospitality, defence of the rights of the rightless, social justice, and a stable climate. This is the agenda of a new multilateralism, at the service of humanity, that will allow us to overcome current fractures and preserve democracy.
Short bio: As an opponent of the Estado Novo regime and the Portuguese colonial war, Álvaro Vasconcelos lived in exile in Belgium and France (1967–1974). He returned to Portugal after the April 25 Revolution and took part in the democratic transition process. Founder of the Demos Forum, he features regularly in several media outlets: the Público, Diário de Notícias, and Expresso newspapers, as well as radio and television shows (RTP, Porto Canal, SIC).
Click here to register.
October 1 | 5 pm – 6 pm | Online | Open Talk 2: MY FACIAL RECOGNITION SYSTEM IS 100% ACCURATE: IS IT GOOD NEWS?
(this conference will be held in English)
Speaker: Catherine Tessier, Director of Research and Research Integrity and Ethics Officer at ONERA
Abstract: Digital research has applications in many fields, including military ones. Moreover, a given technique – facial recognition, for example – may be used for very different purposes: to unlock your smartphone, to track people in the streets, or to target people automatically. The talk will focus on the ethical deliberation researchers should carry out alongside their scientific work, in order to go beyond mere acceptance or objection: what are my own personal values, and do they conflict with one another, with professional values, or with my country's values? What arguments can be put forward for participating in this research or not, for publishing it or not, and according to which ethical reference frameworks? How can ethical considerations be integrated into research dissemination?
Short bio: Dr. Catherine Tessier is a Director of Research at ONERA in Toulouse, France, and ONERA's Research Integrity and Ethics Officer. Her research focuses on the modelling of ethical frameworks and on ethical issues related to the "autonomy" of robots. She is a member of the French National Committee for Digital Ethics and of the French Defence Ethics Committee. She was a member of the UNESCO ad hoc Expert Group for the elaboration of the Recommendation on the Ethics of Artificial Intelligence.
Click here to register.
November 26 | 5 pm – 6 pm | Online | Open Talk 3: BEYOND THE AI HYPE: BALANCING INNOVATION AND SOCIAL RESPONSIBILITY
(this conference will be held in English)
Speaker: Virginia Dignum, Professor at Umeå University (Sweden) – Department of Computing Science – and member of the EC's High-Level Expert Group on Artificial Intelligence.
Abstract: AI can extend human capabilities, but it requires addressing challenges in education, jobs, and biases. Taking a responsible approach involves understanding AI's nature, design choices, societal role, and ethical considerations. Recent AI developments – including foundation models, transformer models, generative models, and large language models (LLMs) – raise questions about whether they are changing the paradigm of AI, and about the responsibility of those who develop and deploy AI systems. In all these developments, it is vital to understand that AI is not an autonomous entity, but rather depends on human responsibility and decision-making. In this talk, I will discuss the need for a responsible approach to AI that emphasizes trust, cooperation, and the common good. Taking responsibility involves regulation, governance, and awareness. Ethics and dilemmas are ongoing considerations, but they require understanding that trade-offs must be made and that decision processes are always contextual. Taking responsibility requires designing AI systems with values in mind, and implementing regulations, governance, monitoring, agreements, and norms. Rather than viewing regulation as a constraint, it should be seen as a stepping stone for innovation, ensuring public acceptance, driving transformation, and promoting business differentiation. Responsible Artificial Intelligence (AI) is not an option, but the only possible way forward in AI.
Short bio: Virginia Dignum is Professor at the Department of Computing Science at Umeå University, Sweden, where she leads the Social and Ethical Artificial Intelligence research group. She is a Fellow of the European Association for Artificial Intelligence (EurAI) and is associated with the Faculty of Technology, Policy and Management at Delft University of Technology. Given the increasing importance of understanding the impact of AI at the societal, ethical, and legal levels, Virginia Dignum is actively involved in several international initiatives on policy and strategy guidelines for AI research and applications. As such, she is a member of the European Commission High-Level Expert Group on Artificial Intelligence, the IEEE Initiative on Ethics of Autonomous Systems, the Delft Design for Values Institute, the European Global Forum on AI (AI4People), the Responsible Robotics Foundation, the Dutch AI Alliance (ALLAI-NL), and the ADA-AI Foundation.
Click here to register.
There will be other sessions – and there are still many questions to address:
- How can we ensure transparency in the development and application of defence systems, as well as effective accountability mechanisms, so that these systems are used responsibly and ethically?
- How should we approach the development and application of rapidly evolving autonomous and AI systems in defence and military contexts?
- How can we ensure that the development of surveillance or emotion recognition technologies – especially when combined with AI systems and the use of biometric data – does not become excessive, jeopardising human rights, including the right to privacy, the right to life, and respect for the values and principles of a democratic rule of law?
- To what extent is it possible to ensure that defence systems are designed and used in a way that minimizes the risk of collateral damage to civilians and non-combatant infrastructure?
- Will it be possible to develop and implement operating protocols that seek to guarantee the safety of civilians?
- And how reasonable is (and how to address) the possible objection of some researchers to participating in research related to defence projects, based on ethical or moral convictions?
- Should we — or even can we — place professional, contractual or legal limits on an objection that may originate from general opposition to violence, concern for human rights and conflict with personal, ethical or religious values?
Would you like to address other topics? Make sure to grab your spot and take part in a broad and informed debate about science and society.