Important Dates

Journal track papers:
Submission open now; deadline Apr 1, 2017

Paper submission (TT):
Apr 15, 2017 (extended to May 1, 2017)

Paper acceptance (TT):
May 31, 2017

Camera-ready (TT): Jun 15, 2017

Doctoral Symposium & BAAI session:
Paper submission: Jun 15, 2017
Paper acceptance: Jul 15, 2017
Camera-ready: Jul 25, 2017

Discovery Challenge:
Predictions: Jun 1, 2017
Papers: Jul 8, 2017 (extended to Jul 31, 2017)

Geometry Friends:
Aug 8, 2017

Conference: Sep 5-8, 2017

Beneficial Artificial Intelligence is a concern of most scientists working in AI. A set of AI Principles has recently been proposed to guide research and development on AI and its societal impact for the coming years.

The Beneficial AI Panel at EPIA 2017 intends to echo such concerns in an open debate, including distinguished participants with different and complementary views of the field.


Members of the panel

Cristiano Castelfranchi

ISTC/CNR, Rome, Italy

Luís Sarmento

Former Applied Scientist at Amazon; CTO at TonicApp

Ernesto Costa

University of Coimbra, Portugal

Philipp Slusallek

DFKI, Saarbrücken, Germany

Eugénio Oliveira

University of Porto, Portugal

Simon M. Lucas

University of Essex, UK

Moderator: What do I mean by Beneficial AI?

Beyond the obvious, which points to the production of new technology at the service of new businesses, corporations, armies, or governments, I would rather argue that beneficial AI should be measured by how well it complies with human rights and contributes to the harmonious progress of humanity as a whole.

It is widely accepted that the most relevant achievements of civilization derive from the fair use of intelligence. How can we improve and enlarge those benefits through AI-based systems?


Q1: Can AI continue to provide revolutionary conceptual and formal instruments for understanding and modeling the (human) mind, intelligence, intentional behavior, emotions, communication, and social structures and their dynamics? Can AI develop a theory of cognitive and social phenomena of its own, not imported from the social sciences?


Q2: On the possibility of so-called strong AI: is consciousness a possible state for an AI-based system to achieve?


Q3: Considering purposeful systems that truly integrate humans and machines, would it be possible to formulate general rules for humans to retain control?