This event was the fifth of the AI3SD Autumn Seminar Series that ran from October 2021 to December 2021. This seminar was hosted online via a Zoom webinar; its theme was Quantum Machine Learning, and it consisted of three talks on the subject. Below are the videos of the talks and speaker biographies. The full playlist of this seminar can be found here.
Learning to Control Quantum Systems Robustly – Dr Frank C Langbein
I am a senior lecturer in Computer Science at Cardiff University, where I am a member of the Visual Computing Research Section. I co-lead the Qyber\black international research network in quantum control, which arose from the Quantum Technologies and Engineering research priority area at Cardiff University, and the Healthcare Technologies Research Group at the School of Computer Science and Informatics. My research interests lie in control, machine learning and geometry applied in quantum technologies, visual computing, geometric modelling and healthcare. My teaching responsibilities cover individual and group projects and various programming techniques. See my website Ex Tenebris Scientia for recent results and full details.
Q & A
Q1. I just want to go back to the beginning. This issue with quantum control and the ability to be robust as well as optimal, where you’d expect classical control to require a compromise. This seems really very interesting and holds out hope for much more than we might have expected. Can you say a bit more about that?
In some sense that’s why we stuck with energy landscape control schemes: initially we started with our system to show “well, here’s an example that is not fully controllable, so it should not work”. And then it works. That’s literally the first paper that came out on this. And then we see that extra robustness effectively. I don’t want to go on too much about this because we don’t have a complete explanation, but we have an additional degree of freedom, because the global phase cannot be observed, and that seems to be responsible for why we see that contradiction to the classical theory. That’s as far as I would go with that. There are of course difficult discussions involved with the global phase, and we had to struggle a lot, particularly for the new analysis, to get that inverse working strictly within the theory, not just “oh, it should work, let’s compute some kind of inverse”, but to actually fully prove this. And the papers are very, very technical; they’re not easy to read. So out of that, there is a lot of promise. Can we fully explain why we see it? Not yet. We might still be missing something in the whole process. But there are systems that seem to be able to break that trade-off by finding these controllers. And then there is also actually experimentally realising this. We are not just sending an RF pulse into a system; we need to be able to shape the energy landscape, possibly locally, very, very precisely. But if the controllers are sufficiently robust even when some fields leak into neighbouring spins, then there is a lot of promise in that. The proof will lie in the experiment to verify this; we cannot be sure right now that it will actually happen.
It’s a very interesting alternative view on things.
We’ll stick with it for now.
Q2. Although you’ve phrased this in terms of spin systems and so on, presumably this applies to any sort of system where you have an energy landscape that in principle you could control, like a chemical system, where with a laser we can control some of these levels, and so on. It’s not as easy as a spin system. There was quite a lot of early work done on optimal control to drive quantum chemical systems around, but it seemed to peter out a bit. We could do some basic stuff, shifting populations into certain states, but we never seemed to do much chemistry with it. I just wonder whether the time is right for a new look?
I mean, in some sense a lot of this can work. I skipped very quickly over the MRI scheme, which is still dynamic control and robust dynamic control. But you cannot do so much from the mathematics with it: everything moves, so you have no handle on any limits anymore. Yes, I think quantum control actually comes out of this chemistry question: can we control our reactions? The problem seems to be that you can never get to any sensible yields. OK, if you have something very valuable and only need a few atoms, maybe more than a few, but really not as a mass production type of system; I think that remains the problem. There has been quite a bit of progress, well known, on what they can do in chemistry on that end, in breaking bonds, in actually forming bonds again, in driving reactions in a controlled way, but it is very low yield, which I think is the problem in the chemistry space. But in principle what we’re doing here should apply to a lot of these other systems. The one issue is of course that if you make your system ever larger, the decoherence becomes ever harder to fight, and then we get classical behaviour. So, we’re looking at very small isolated systems here. But curiously, on the other hand, with NMR or MRI, in low fields, messy room temperature, biological systems, some of these controls actually still work. MRI works, so these pulses do something. They’re not as complex as what we need for quantum computing; we don’t need to implement gates in that sense. But they’re still operating, this still works, so I would not completely rule out the possibility that at some stage you might be able to get there.
The Variational Quantum Eigensolver – progress and near term applications for quantum chemistry – Jules Tilly
Jules specialises in developing quantum machine learning methods for drug discovery, with a focus on optimising algorithm implementation on current NISQ quantum computers. He is a Quantum Research Scientist at Rahko, and is currently completing his PhD at UCL under the supervision of Prof. J. Tennyson. Prior to this, Jules worked for over six years in financial services, acting as regulatory and strategic advisor for global investment banks such as Goldman Sachs, UBS and Citibank. He holds degrees in Mathematics, Quantum Physics, Law, Economics, Finance and Public Policy.
Q & A
Q1: I was struck by the slide you had about the things that make the computation doable, in a sense, about the coherence and the overlaps and so on. I had this perhaps naive view that the problems that make it difficult to solve quantum chemistry problems on classical computers arise precisely when we have a lot of delocalisation and the issues around that. When we can find an effective way of localising things, we’ve found very clever algorithms for doing that and keeping things apart, then putting them together, with clever DFT overlaps and so on. So I’d rather hoped that the quantum ones were at the opposite end, but you seem to be suggesting that’s actually not entirely true here?
So you’re right, it’s a very good point. First of all, it’s unfortunately not entirely the case, but it’s worth pointing out two different aspects of this. The first one is that we care about the problem being local, so we care about the cost function being local, but the wavefunction that we’re modelling doesn’t have that requirement, so the operations that we’re doing on the qubits don’t need to be local operations. That means we can create a model of the electronic wavefunction that is very, very highly entangled, and still measure it against an operator, or series of operators, which is the Hamiltonian, designed so that each operator is very local. The advantage of this is obviously that you can model very complex wavefunctions on the quantum computer, which is something that you shouldn’t be able to do with conventional methods. But as you pointed out, this actually remains an open question. We’re not sure yet whether we can produce these sorts of wavefunctions in a tractable manner, with a number of parameters that is small enough for them to be optimisable, and with a depth that is shallow enough so that they can be implemented on near-term quantum devices. So yes, it’s a very narrow margin.
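The point above, that a highly entangled state can still be measured against a Hamiltonian written as a sum of local terms, can be illustrated with a minimal numpy sketch (not from the talk; the Hamiltonian coefficients here are purely illustrative). On real hardware each Pauli term would be estimated from repeated measurements; classically we just take exact expectation values and sum them:

```python
import numpy as np

# Single-qubit operators
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])

# A maximally entangled two-qubit state (Bell state): (|00> + |11>)/sqrt(2)
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)

# A "local" Hamiltonian written as a sum of few-qubit Pauli terms,
# H = 0.5*(Z x I) + 0.5*(I x Z) + 0.25*(Z x Z)  (illustrative coefficients)
terms = [(0.5, np.kron(Z, I2)),
         (0.5, np.kron(I2, Z)),
         (0.25, np.kron(Z, Z))]

# Each term is measured separately; the energy is the weighted sum of
# expectation values <psi| M |psi>, even though psi itself is entangled.
energy = sum(c * (psi @ M @ psi) for c, M in terms)
print(energy)  # ~0.25: the single-qubit Z terms average to 0, Z x Z gives +1
```

The key design point mirrored here is that locality is only required of the measured operators, not of the state being measured.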
The other intriguing part, which is where the mental exercise seems to be going, is the creation of those suitable Hamiltonians in the first place. How hard is it to learn how to do this, starting from a classical outlook of saying, “well, I can write down the Schrödinger equation, and I can think of the normal set of orbitals that we use, or I can conceive of a basis set, or somehow…”, and translating that into this kind of spin-type Hamiltonian, which is, I guess, what we’re going to do? My colleagues who do NMR and so on work and think in that language, but it is maybe not a language that comes naturally to most of us chemists who did not do quantum chemistry yet think we can go and get a quantum answer. There are people in between who are very happy with density matrices and whatever. What would you recommend if somebody listening here thinks, “right, I know a little bit about what quantum chemistry is; I need to go away and learn”? Obviously you’re going to do a review, and that is clearly the place to start… but what does one need to learn about to be able to think in that way? To go from a classically derived quantum Hamiltonian to something that is more amenable to a quantum computer?
It’s a very deep question, so there’s a lot one could possibly answer here; I’ll just try to summarise quickly. The first thing is to make sure you understand second quantisation: the reason why second quantisation exists for the representation of a molecule in terms of fermionic operators. This is core to quantum chemistry, so any quantum chemistry textbook will cover it. The next step is to understand why we need to have spin operators and why we cannot use the fermionic operators directly. The main reason is that fermionic operators are very, very abstract concepts which were defined for the needs of second quantisation; they were defined in the 1920s for the purpose of enforcing the antisymmetry of the electronic wavefunction. So, now that we are working with machines which are basically spin states (qubits are spin states) and that can only be measured with spin operators, we need to find a translation of these somewhat archaic fermionic operators into spin operators, which fortunately were also defined a very long time ago. But we still need to find the translation between one and the other. The issue is that spin operators themselves don’t obey these anticommutation relations by default, and therefore we need to construct specific operators in a manner that obeys those relations. The very first paper about this actually predates any thought of quantum computing: a paper by Jordan and Wigner back in 1928, which basically shows that you can explicitly express the fermionic operators in terms of spin operators. That’s the very first attempt at doing so, and obviously over the years many people have tried to make this more efficient.
But if you’re interested in understanding how this translation works, from the molecular structure to the fermionic Hamiltonian to the quantum computing model, the first step is to look at second quantisation and understand how it works from a quantum chemistry perspective, and the second is probably to look at the Jordan-Wigner encoding, the Jordan-Wigner mapping. There are many resources on the Internet about this, and that would be the basic starting point for understanding the language that people in the quantum computing community talk.
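The Jordan-Wigner construction described above can be written down concretely. As a minimal sketch (not from the talk), the annihilation operator for fermionic mode j on n qubits is a string of Z operators on the preceding qubits followed by (X + iY)/2, and numpy can verify that these operators satisfy the fermionic anticommutation relations the speaker mentions:

```python
import numpy as np

# Single-qubit Pauli matrices
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(ops):
    """Tensor product of a list of 2x2 operators."""
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def jw_annihilation(j, n):
    """Jordan-Wigner annihilation operator for mode j of n:
    a_j = Z ⊗ ... ⊗ Z ⊗ (X + iY)/2 ⊗ I ⊗ ... ⊗ I,
    with j leading Z factors carrying the fermionic sign."""
    ops = [Z] * j + [(X + 1j * Y) / 2] + [I] * (n - j - 1)
    return kron_all(ops)

n = 3  # three fermionic modes / qubits
a = [jw_annihilation(j, n) for j in range(n)]

# Verify the canonical anticommutation relations:
# {a_i, a_j†} = δ_ij I  and  {a_i, a_j} = 0
dim = 2**n
for i in range(n):
    for j in range(n):
        anti = a[i] @ a[j].conj().T + a[j].conj().T @ a[i]
        expected = np.eye(dim) if i == j else np.zeros((dim, dim))
        assert np.allclose(anti, expected)
        assert np.allclose(a[i] @ a[j] + a[j] @ a[i], 0)
print("Jordan-Wigner operators satisfy the fermionic anticommutation relations")
```

Without the Z strings, operators on different qubits would commute rather than anticommute, which is exactly the default behaviour of spin operators that the answer points out.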
I’ll just say, the reason I was asking that is that we’re holding this group of seminars now because we think it’s really important for this network to have resources available, to have a greater understanding of where quantum computing is going, and we are actually looking to put on a set of workshops to provide some hands-on help and some guidance through this, at least in a simple way. But when I look at the various courses around, there doesn’t seem to be one that takes a normal sort of chemist, who knows a bit about quantum chemistry, and so knows something about basic quantum mechanics, but doesn’t know about quantum gates and equally doesn’t know the right bits and pieces. There just didn’t seem to be a course that starts from the right point, and so we’re looking at doing that. This has been very helpful in trying to understand which algorithms one should concentrate on and where some background should come from. We’ll have to have a chat with you as well.
I think bridging between the two communities is very challenging and it’s very important, so I think you’re on the right path.
Quantum Machine Learning – Professor Anatole von Lilienfeld
O. Anatole von Lilienfeld is a full university professor of computational materials discovery at the Faculty of Physics at the University of Vienna. Research in his laboratory deals with the development of improved methods for a first-principles-based understanding of chemical compound space using perturbation theory, machine learning, and high-performance computing. Previously, he was an associate and assistant professor at the University of Basel, Switzerland, and at the Free University of Brussels, Belgium. From 2007 to 2013, he worked for Argonne and Sandia National Laboratories after postdoctoral studies with Mark Tuckerman at New York University and at the Institute for Pure and Applied Mathematics at the University of California Los Angeles. In 2005, he was awarded a Ph.D. in computational chemistry from EPF Lausanne under the guidance of Ursula Rothlisberger. His diploma thesis work was done at ETH Zurich with Martin Quack and at the University of Cambridge with Nicholas Handy. He studied chemistry at ETH Zurich, the École de Chimie, Polymères et Matériaux in Strasbourg, and the University of Leipzig. He serves as editor-in-chief of the IOP journal Machine Learning: Science and Technology and on the editorial board of Science Advances. He was on the editorial board of Nature’s Scientific Data from 2014 to 2019. He was the chair of the long IPAM/UCLA program “Navigating Chemical Compound Space for Materials and Bio Design”, which took place in 2011. He is the recipient of multiple awards, including a Swiss National Science Foundation postdoctoral grant (2005), a Harry S. Truman postdoctoral fellowship (2007), the Thomas Kuhn Paradigm Shift award (2013), a Swiss National Science Foundation professor fellowship (2013), an Odysseus grant from the Flemish Science Foundation (2016), an ERC consolidator grant (2017), and the Feynman Prize in Nanotechnology (2018).
Q & A
Q1: With the ability to go from one element to another and do the alchemical transitions, presumably you can grow atoms as well, by going from a zero charge atom to some nuclear charge, to grow in atoms and make your molecules bigger. Is that something this setup will work on?
We actually did this; I think we published it two years ago. You take a box with a uniform electron gas and a background charge, and then you reduce that background charge, and you can grow individual atoms. And if you apply the chain rule you can even print out the atomic contributions to the total energy, so you get a measure of the atomic energy, and with the same trick you can also get the contributions to the total electron density. So you also get atomic electron densities in your molecule.
So, if you take your box and give it a shape that might happen to be the active area of an enzyme or some other drug target, and have an external field there that represents the thing you’re trying to bind to, can I grow a potential binding drug?
I don’t see why not. My main concern would be that you would be tied to this particular configuration of your external field and so on. Maybe some averaging might be helpful, or doing this over an ensemble of configurations. But in principle, yes, absolutely this should be possible. I don’t see any physical reason why not.
Yes, because the feedback between what you’re growing in and the protein would have to come from the ensemble.
Q2: In the fragment system, you have these fragments, and you’ve shown how representative of the whole of very reasonable accessible chemical space they are, which you can reproduce really quite accurately. With the generative models, can we turn that around and see if we could generate molecules which are not simply obtainable from those fragments? So, find the new, novel C60s of the world, etc.; suggestions to work in places of chemical space that are not in fact represented. In a sense, finding new fragments is what we’re talking about.
Yes, that’s an interesting point. I won’t exclude it, but I think the chances are minimal. Think about it in terms of an Aufbau principle or so: you start with an atom and you try to exhaustively list all its possible chemical environments. At some point you’ve exhausted it, and then you build the next shell, and you do that exhaustively, etc. Of course, at some point the system will be so large it starts to explode combinatorially, and then maybe there is this region where it’s so large you cannot do it exhaustively, and there you could use a generative model to possibly find the most relevant fragments. I think that would be the most likely scenario where this could work, but for these really small fragments I think it’s quite finite actually.
I think the evidence you presented is that with a very reasonable number of fragments you are able to cover a lot. What we have experimentally accessed so far you seem to predict very well.
Mind you, it’s just CHON (carbon, hydrogen, oxygen and nitrogen), so just the few elements from organic chemistry. So, as we start adding the later elements in the main groups, or some transition metals and so on, the space will still grow, but it’s still finite; I think it’s very feasible.
Yeah, yes, I mean you’re right. We do have another hundred-odd elements that we could in theory use, and then the relativistic corrections will start to change some things and we need new stuff. But there is not so much experimental data to put in to do the calculation, so that becomes more of a problem at that point.
I think this might be really some fantastic challenge for humanity: to have a systematic experimental data set on those little fragments from which we could then build projecting or extrapolating models.
Well, I think that’s something where we could look at it the other way around, asking the question: how many of those fragments do we not have reliable bits of information on? I think that would be a really interesting list to publish for the synthetic chemists, to target making interesting molecules from those fragments and provide exactly that ground truth. If you don’t have some ground truth somewhere, there’s always a worry that something’s gone wrong.
I find it shocking how scarce the literature is in terms of data. I once asked a postdoc to collect melting points for inorganic systems, and he could find fewer than 2000 examples of different systems in the literature, so that’s what we have.
It’s not enough, considering how many elements that is.
It’s very sobering.
Thank you. And I must agree with you that one of the exciting things about this community is the fact that the data and the algorithms are shared, so that you can work out whether you’ve understood something, because you can run it, see what’s going on, and move on from there. That is particularly marked in this community, which I think is another reason why there’s been so much progress.