In November 2019 AI3SD held their first annual conference to mark the end of the first year of the Network.
A full report on this event has been written by Michelle Pauli and can be downloaded here: https://eprints.soton.ac.uk/444601/
Most of the presentations given at this event, and all of the posters, can be found online on our Paperless Content Page.
Below is a blog post written by Michelle Pauli about the Network Conference:
AI3SD Network+ Conference – Michelle Pauli
“Children are still dying of malaria at the rate of two full 747s crashing every day. You don’t see it, it’s not in the headlines, but it’s an absurd number of people dying,” says Professor Matthew Todd, chair of drug discovery at University College London.
“Every single infectious disease suffers from the problem of resistance,” he continues. “It’s not simply a case of having a disease and coming up with a molecule: you have to come up with a continuous stream of molecules very quickly, and have them in reserve. To get to a pace of discovery where we respond quickly to an infectious threat, we need all the help we can get. For me, AI and machine learning are a way of speeding that up. But it’s not just about doing it more quickly, it’s about changing the way you’re doing it, by getting all the help you can.”
The huge potential of artificial intelligence to change the way we discover new drugs – to change the way we do science – was at the heart of the AI3SD Network’s two-day conference in Winchester, UK. From the rise of the Robot Scientists to the latest developments in deep machine learning of quantum chemical Hamiltonians, participants heard about a wide range of research and initiatives, across the disciplines, with a common theme: how cutting-edge artificial and augmented intelligence technologies can be used to push the boundaries of scientific discovery.
The conference marked the halfway point of the network and a highlight was hearing the interim reports of AI3SD-funded projects. Professor Todd presented his open source drug discovery project that aims to “look for things that kill malaria and not people”. Professor Tim Albrecht, from the University of Birmingham, discussed his work combining quantum tunnelling-based biosensing with advanced machine learning methods, while Professor Reinhard Maurer set out how he is pushing the limits of materials discovery with deep learning-enhanced quantum chemistry.
Keynote speakers included Dr Lucy Colwell from the University of Cambridge, who shared her work on the data-driven challenge of predicting the functional properties of a protein from its sequence in order to discover new proteins with specific required functionality, such as a potential new cure for Ebola. Professor Juan Garrahan from the University of Nottingham discussed his work at the interface of current questions in non-equilibrium physics and machine learning methods, focusing on the general statistical mechanics issue of accessing and characterising rare dynamic events in stochastic systems. Among the speakers from industry were Dr Andrew Senior, who described a system his team at DeepMind built to predict protein structure, and Dr Richard Tomsett from IBM, who tackled AI and explainability.
These presentations, plus a wealth of poster sessions and contributed talks, all demonstrated the sheer range and depth of work in this field.
However, while we are clearly in a golden age of AI, particularly in how far it can push the boundaries of scientific discovery, to what extent are the ethics of using AI for scientific discovery being considered? The conference opened with a half-day workshop on AI Ethics for Scientific Discovery, emphasising the need to place ethics centre stage in AI decision-making.
The stark verdict of Dr Will McNeill, lecturer in philosophy at the University of Southampton, who opened the AI ethics workshop, is that “ethics cannot provide us with answers.” However, ethics can offer frameworks that can help us show – and justify – how decisions were made and the trade-offs that are an inevitable part of that decision-making.
No algorithm is capable of determining how the tensions between our moral duties should be resolved. Instead, ethical frameworks for AI will be required. But ethical frameworks cannot replace ethical judgement; rather, they highlight tensions that only human judgement can reconcile.
“Augmented intelligence is really the symbiosis of what humans are good at and what computers are good at,” says Andy Stanford-Clark, chief technology officer at IBM UK and Ireland. “If you need to go through a ton of data, millions and millions of rows, you wouldn’t give that to a human because it can be really boring and they’d fall asleep after five minutes. Whereas people can make use of the synthesised outputs, the information and knowledge and insights from that sea of data, to be able to make operational decisions.”
Humans will always be part of the equation, whether explaining critical ethical decisions or working with AI-generated material to make the kind of scientific discoveries that prevent malaria or cure Ebola. It is in this combining of forces, this augmented intelligence, that the future lies.