Artificial and Augmented Intelligence systems have the potential to make a real difference in the scientific discovery domain. However, this brings a wealth of new ethical and societal implications to consider (e.g. human enhancement, algorithmic biases, risk of detriment). This workshop explores the ethical and societal issues around using intelligent technologies (Artificial Intelligence, Augmented Intelligence, Machine Learning, and, more generally, Semantic Web Knowledge Technologies) to further scientific discovery, with a strong focus on data ethics and algorithmic accountability.

Advances in technology and software are rarely harmful in themselves; unfortunately, that does not prevent them from being subverted to ill intent by others. Furthermore, as demonstrated by the examples above, even an unintentional lack of care towards ethical codes and algorithmic accountability can lead to societal and ethical consequences for scientific discovery. It is our responsibility as researchers to consider these issues in our work: are we conducting studies ethically? What ethical codes can we put in place for scientific discovery research to mitigate ethical and societal harms? These issues are pressing, and addressing them comprehensively requires an interdisciplinary effort between scientists, social scientists, and technical experts.

Five main working group topics have been identified for this workshop. Two talks will be given, covering the different aspects to consider in general Ethics for AI and in Ethics for AI for Scientific Discovery, before we break into the working groups for more formal discussions. There are multiple sessions for the working group discussions, so there will be opportunities to take part in as many group discussions as you wish. The workshop will be formally recorded.


Working Group Topics:

  • WG1: Data Sharing / Data Bias
  • WG2: System Design & Data Decision Making
  • WG3: Transparency / Explainable AI
  • WG4: Responsible AI / AI4Good
  • WG5: Subverting Research with Ill Intent

More information and the discussion points for this workshop can be downloaded from here.