IBM's Jerry Chow on the future of quantum computing
Jerry Chow, director of quantum systems at IBM, explains qubits, utility, System One, System Two, and the Heron, Condor, and other bird-themed chips.
Today, I'm talking with Jerry Chow. He's the director of quantum systems at IBM, meaning he's trying to build the future one qubit at a time.
IBM made some announcements this week about its plans for the next 10 years of quantum computing: there are new chips, new computers, and new APIs. You'll hear us get into more of the details as we go, but the important thing to know upfront is that quantum computers could have theoretically incredible amounts of processing power and could entirely revolutionize the way we think of computers… if, that is, someone can build one that's actually useful.
Here's Jerry, explaining the basics of what a quantum computer is:
A quantum computer is basically a fundamentally different way of computing. It relies on the laws of quantum mechanics, but it just changes how information is handled. So instead of using bits, we have quantum bits or qubits.
A regular computer (the quantum folks call them "classical computers") like an iPhone or a laptop or even a fancy Nvidia GPU works by encoding data in bits. Bits basically have two states, which we call zero and one. They're on or they're off.
But the laws of quantum mechanics that Jerry just mentioned mean that qubits behave very, very differently. They can be zero or one, but they might also be a whole lot of things in between.
You still have two states: a zero and a one. But they can also be in superpositions of zero and one, which means that when you measure it, you'll get zero or one, each with a particular probability. In terms of how we physically build these, they're not switches anymore, they're not transistors, but they're actually elements that have quantum mechanical behavior.
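The measurement statistics Jerry describes can be sketched in a few lines of plain Python. This is a toy illustration of superposition probabilities, not IBM's software stack; the amplitudes and shot count here are my own choices:

```python
import random

# Toy model of one qubit: its state is a pair of amplitudes (a, b) for
# |0> and |1>, with a^2 + b^2 = 1. Measuring collapses it: you get 0
# with probability a^2 and 1 with probability b^2.

def measure(a: float, b: float) -> int:
    """Sample one measurement outcome from amplitudes (a, b)."""
    return 0 if random.random() < a * a else 1

# An equal superposition: each amplitude is 1/sqrt(2), so each outcome
# has probability 1/2.
amp0 = amp1 = 2 ** -0.5

shots = [measure(amp0, amp1) for _ in range(100_000)]
zeros = shots.count(0)
print(zeros / len(shots))
```

Each individual shot is still just a zero or a one; only over many shots does the roughly 50/50 split of the superposition show up.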
One of my favorite things about all this is that in order to make these new quantum computers work, you have to cool them to within fractions of a degree of absolute zero, which means a lot of companies have had to work very hard on cryogenic cooling systems just so other people could work on quantum chips. Jerry calls early quantum computers "science projects," but his goal is to engineer actual products people can use.
You'll hear Jerry talk about making a useful quantum computer in terms of "utility," which is when quantum computers start to push against the limits of what regular computers can simulate. IBM has been chasing after utility for a while now. It first made quantum computers available on the cloud in 2016, it's shipped System One quantum computers to partners around the world, and now, this week, it's announcing System Two along with a roadmap for the future. It's Decoder, so I asked Jerry exactly how he and his team sit down and build a roadmap for the next 10 years of applied research in a field that requires major breakthroughs at every level of the product. Oh, and we talked about Ant-Man.
It's a fun one: very few people sit at the bleeding edge all day like Jerry.
Okay. Jerry Chow, director of quantum systems at IBM. Here we go.
This transcript has been lightly edited for length and clarity.
Jerry Chow, you are an IBM fellow and director of quantum systems. Welcome to Decoder.
Glad to be here.
I'm really excited to talk to you. There's quite a lot to talk about: quantum computing in general, where it is. But you've got some news to announce today, so I want to make sure we talk about the news right off the bat. What is going on in IBM Quantum?
Yeah, so we have our annual Quantum Summit coming up, where we basically invite our network of members and users to come, and we talk about some of the really exciting news. What we're announcing this year is a really exciting upgraded quantum processor called the IBM Quantum Heron. It has 133 qubits. It's the highest performance processor that we've ever built, and it's going to be available for users to access via our cloud services.
We're also launching IBM Quantum System Two and introducing this as a new architecture for scaling our quantum computers into the future. We're also talking about a 10-year roadmap looking ahead. We, at IBM Quantum, like to sort of call our shots, tell everyone what we're doing, because that keeps us honest and keeps everyone in the industry on the same benchmark of seeing what progress is. And we're expanding that roadmap, which we actually first introduced a couple of years ago and have hit all our milestones thus far. But we are extending it out to 2033, pushing forward into this next realm where we really want to drive toward quantum computing at scale.
So you've got a new processor, you've got a new computing architecture in System Two, you've got a longer roadmap. Put that in context for me: we've been hearing about quantum computing for quite a long time. I have stared at a number of quantum computers and been told, "This is the coldest piece of the universe that has ever existed." It's been very entertaining, at the very least. We're only now at the point where we're actually solving real problems with quantum computers.
We're not even at the point of solving real problems.
Not even yet?
Not yet. But we are, really excitingly, just this past year, at the point where we're calling this utility-scale quantum computing. We're using 100-plus qubits. We used a processor earlier in the year called Eagle, where we were able to look at a particular problem that you couldn't really solve with brute-force methods using a classical computer, but that also challenged the best classical approximation methods that are used on high-performance computing. So what's interesting there is that now the quantum computer becomes like the benchmark. You almost need it to verify whether your approximate classical methods are working properly. And that just happens when you go over 100 qubits.
At 100 qubits, things all change so that you just can't use, say, GPUs or any kind of classical computers to simulate what's going on accurately. This is why we're in this phase where we call it utility scale, because there's going to be this back and forth between using the quantum computer as a tool compared with what you can still potentially do in classical. But then there's a long road there where we're going to try to drive value using the quantum computer to get toward quantum advantage.
I think the word utility there threw me off. This is the branch point where the problems you solve with a quantum computer start to become meaningfully different than the problems you could solve with a regular computer.
That's right. We see this really as an inflection point. There are a lot of industries that use high-performance computation already, and they are looking at very, very hard problems that use the Oak Ridge supercomputers and whatnot. And now quantum becomes an additional tool that opens up a new lens for them to look at a different area of compute space that they weren't able to look at before.
So IBM has a huge program in quantum. The other big companies do, too: Microsoft, Google, what have you, they're all investing in this space. Does this feel like a classical capitalist competition, "We're all racing forward to get the first product to market"? Is it a bunch of researchers who know that there's probably a pot of gold at the end of this rainbow, but we're nowhere close to it yet, so we're all kind of friendly? What's the vibe?
I'd say that it's a very exciting time to be in this field. How often do you get to say you're building from the ground floor of a completely new computational architecture? Something that is just fundamentally different from traditional classical computing. And so yeah, I'd say that there's certainly a lot of groundswell, there's a lot of buzz. Sometimes a little too much buzz, maybe. But also I think from the perspective of competition, it helps drive the industry forward.
We, at IBM, have been at the forefront of computation for decades. And so it's in our blood. The ideas of roadmaps and pushing the next big development, the next big innovations in computation, have always been something that is just native to IBM, and quantum is no different. We've been in the game with quantum since the early theoretical foundations, for probably 30-plus years. But now we're really starting to bear a lot of that fruit in terms of building the architectures, building the systems, putting out the hardware, developing the framework for how to make it usable and accessible.
Let me give you just a much dumber comparison. We had the CEO of AWS on the show, Adam Selipsky. AWS is furiously competitive with Microsoft Azure and Google Cloud. They are trying to take market share from each other, and they do a lot of innovative things to make better products, but the end goal of that is taking one customer away from Google. You're not there yet, right? There's not market share to be moved around yet?
Certainly not at that scale.
But are there quantum customers that you compete for?
There's certainly a growing quantum community.
[Laughs] It's not a customer; there are people who are interested.
There are people that are interested across the board, from developers, to students, to Fortune 500 companies. We have a lot of interest. So just as an example, we first put systems on the cloud in 2016. We put a very simple five-qubit quantum computer on the cloud. But it reflected a real fundamental shift in how quantum could be approached. Before, you had to be sort of a physicist. You had to be in a laboratory turning knobs. You're taking data, you're running physicist code; you're not programming a computer.
Wow. [Laughs] Shout out to physicists.
Well, I'm a physicist, and you don't want to see my code. [Laughs] But the whole point is that we developed a whole framework around it to actually deploy it and to make it programmable. And think about the early days of computers and all the infrastructure you needed to build in terms of the right assembly language and compilers and the application layers all above that. We've been building that for the last seven years since that first launched. And in that time, we've had over 500,000 users of our platform and of our services.
I'm always curious how things are structured and how decisions are made. That's really what we talk about on the show. And there's a forcing function that comes when it's a business, and there's a growth path. Quantum seems very much like one day it will be a huge business because it will solve problems that regular computers can't. But right now, it's on the very early part of the curve where you're investing a lot into R&D, on an aggressive roadmap, but you're nowhere close to the business yet.
I would say that we're knocking on the door of business value and looking for that business value, because especially when we're in this realm where we know that it can be used as a tool pitted against the best classical computers, there's something there to be explored. A lot of times, even with traditional computers, there are very few proven algorithms from which we drive all the value. A lot of the value that gets driven is done through heuristics, through just trial and error, through having the tool and using it on a particular problem. That's why we see this fundamental game-changer of this inflection point going toward utility-scale systems of over 100 qubits: now this is the tool that we want users to actually go and use to find business advantage, to find the problems that map appropriately onto these systems for exploration.
So put that in the context of IBM. IBM's a huge company, it's over 100 years old, it does a lot of things. This is probably the most cutting-edge thing IBM is doing, I imagine. I'm guessing you're not going to disagree with me. But it feels like the most cutting-edge thing that most of the Big Tech companies are doing.
Yes, absolutely.
How is that structured inside of IBM? How does that work?
So we're IBM Quantum within IBM Research. IBM Research has always been the organic growth engine for all of IBM. It's where a lot of the innovative ideas come in, but overall, a particular strategy within IBM and IBM Research is that we're not just doing research, then development, then sending it on a very linearized product journey. It's all integrated together as we are moving forward. And so therefore, we have the opportunity within IBM Quantum that we're developing products, we're putting them on the cloud, we're integrating with IBM Cloud. We're actually pushing these things forward to build that user base, build that groundswell, before all the various different technology elements are finished. That's sort of this agile methodology of building this from the ground up, but also getting it out early and often to drive excitement and to really build up the other parts of the ecosystem.
So how is IBM Quantum structured? How many people is it? How is it organized?
So we don't share explicit numbers, but we have several hundred people. Parts of the team are focused on the actual hardware elements, all the way down to the quantum processor itself and the system around it: making those processors function by cooling them down in the cryogenic system, talking to them with control electronics, talking to them with classical computing. So it all needs to tie together.
Then you have software development teams. We also have a cloud and services team that helps to deliver our offerings as a service. And then we have applications teams looking at the next algorithms, the next novel ways of making use of our quantum services. We also have teams that are more outward-looking for business development, trying to drive adoption and working with various clients to engage in the problems of their interests. We also have a part of our team which runs an offering called the Quantum Accelerator. It's like a consulting arm, working with clients to get quantum-ready, to start understanding how their problems can be impacted by quantum computing, and to start using our systems.
Is that all flat? Every one of those teams reports to you, or is there structure in between?
No, so all those different ones report to our vice president of quantum computing, which is Jay Gambetta. I take care of the systems part. Basically, the wrapping of the processor and how it runs, executing problems for the users, that's the piece that I own.
There's a tension there. It sounds like IBM is designed to attack this tension head-on, which is: "We're doing a bunch of pure research in cryogenics to make sure that quantum computing can run because it has to be really cold to run." Then there's a business development team that's just off and running, doing sales stuff, and at some point they're going to come back and say, "We sold this thing." And the cryogenics team is going to say, "Not yet." Every business has a problem like that. When you're in pure research mode, the "not yet" is a real problem.
Oh, yeah.
How often do you run into that?
We have a very good strategy across the team. We know our core services and what our core product is. And also we have a roadmap. The concept of the roadmap is both great for the R&D side and great for the client perspective, the business development angle, of seeing what's coming next. From the internal side, we know we've got to continue to drive toward this, and these are our deliverables and these are the new innovations that we need to do. In fact, in our new roadmap that we're releasing, we have that separated: both a development roadmap, which is more product focused and more like what the end user and the client are going to get, and an innovation roadmap to show those things where we're still going to need to turn the crank and figure out what feeds in.
I often say the roadmap is our mantra, and it really is our calling card both internally and externally. Not many people really show a lot of detail in their roadmap, but it serves as a guiding tool for us all.
I was looking at that roadmap, and it is very aggressive. We're at Heron, and there are many birds to come, from what I understand. And the goal is that a truly functional quantum computer needs thousands or millions of qubits, right?
We have a transition toward what we are calling quantum at scale, which I think is what you're referring to: when you get to the point where you can run quantum error correction, correcting for all the errors that are underlying within these qubits, which are noisy. People throw around that number, millions of qubits, in a way that almost drives fear into the hearts of people. One actually really exciting thing that we've done this past year is we've developed a set of novel error correction codes that brings down that resource count a lot.
So actually, you'll need potentially hundreds of thousands of qubits, 100,000 qubits or so, to build a fault-tolerant, quantum-error-correction-based quantum computer of a particular size to do some of those problems that we're talking about at scale. And that's part of the roadmap, too. That's what we're looking at further out, toward the Blue Jay system in 2033. So there are certainly a number of birds to get there, but we have concrete ideas for the technological hurdles to overcome to get there.
That's the goal. You're going to get to some massively larger scale than you are today. Orders of magnitude. Today the chip has 133 qubits; you need to get to thousands. Some people, terrifyingly, are saying millions.
Part of your strategy is linking the chips together into these more modular systems and then putting control circuitry around them. I'm a person who came up in what you might call the classical computing environment, so that's very familiar. That's a very familiar strategy: we're just going to do more cores. That's what that looks like to me. Lots of companies have run up against a lot of problems here. In that part of the world, there's just Moore's law, and we sit around talking about it all day long. Nvidia and maybe TSMC have gotten over it this time, and Intel has struggled to get to the next process node and increase the transistor density. Is there an equivalent to Moore's law in quantum that you're thinking about?
Our roadmap is showing that type of progression.
I look at that roadmap, and you are definitely assuming a number of breakthroughs along the way, in the way that Intel just assumed it for years and years, and they achieved it, and then kind of hit the end of the road.
Even where we are today with Heron, and actually complementary to Heron this year, we also already built a 1,000-qubit processor, Condor. Its explicit goal was to push the limits of how many qubits we could put on a single chip, push the limits of how much architecture we could put in an entire system. How much could we actually cool down in the dilution refrigerators, the cryogenic refrigerators, that we have today? Push the boundaries of everything to understand where things break. And if you look at the early part of our roadmap, the birds are there with various technological hurdles that we've already overcome to get toward this thousand-qubit level. And now those next birds that you see in the rest of the innovation roadmap are different types of couplers, different types of technologies, that are those technological hurdles, like in semiconductors, that allow us to bridge the gap.
Are they the same? Is it the same kind of, âWe need to double transistor density,â or is it a different set of challenges?
They're different, because with this sort of modular approach, there are questions like: how many can we place into a single chip? How many can we place into a single package? How many can we package together within the system? So they all require slightly different technological innovations within the whole value chain. But we don't see them as not doable; we see them certainly as things that we will handle over the next few years. We're already starting to test linking between two packages via a cryogenic cable. This is toward our Flamingo demonstration, which we're planning for next year.
Do you get to leverage any of the things that are happening on the process side with classical computers?
Oh, yeah.
Like TSMC hits three nanometers and you get to pull that forward, or is that different?
Not so explicitly to the newest stuff that's happening today in semiconductors. But IBM has been in the semiconductors game for many, many decades. And a lot of what we've achieved, even achieving 100 qubits with Eagle a couple of years ago, was because we had that deep-rooted semiconductor background. So just to give you an example, at 100 qubits, the challenge is how do you actually wire to 100 qubits in a chip? The standard thing you do in semiconductors is you go to more layers, but it's not so easy to do that in these superconducting quantum circuits because the layers might mess up the qubits. They might cause them to decohere.
But because of our know-how with packaging, we found the right materials, and we found the right way of using our fabrication techniques to implement that type of multilayer wiring and still talk to these 100 qubits. We evolved that further this past year to actually get to 1,000. That type of semiconductor know-how is just ingrained; the decades of experience matter, I'd say.
So you're going to build the next-generation quantum computing chip, Heron. It's got 133 qubits. How is that chip manufactured?
Okay. Well, to build the next-generation quantum computing chip, we rely on advanced packaging techniques that involve multiple layers of superconducting metal to package and to wire up various superconducting qubits. With Heron, we're also using a novel tunable coupler architecture, which allows us to have world-record-performing two-qubit gate qualities. All this is done in a standard fabrication facility that we have at IBM; we package up the chip, and then we have to cool it down in a cryogenic environment.
So silicon goes in one side of the building, Heron comes out the other?
I mean, certainly more steps than that. [Laughs] And there's this know-how of how to do it properly to have high-performing qubits, which we've just built up.
Explain to me what a high-performing qubit is.
Yeah, so the tricky thing with these qubits… There are different ways of building qubits. There are people who use ions and atoms and electrons and things like that, but ours are actually just metal on a substrate; they're circuits. They're much like the circuits that you might see when you look inside of a standard chip. The thing with these circuits is how you build them: if you arrange them in a certain way and use the right materials, you have a qubit that, in this case for superconducting qubits, resonates at five gigahertz.
If you choose the wrong materials, the lifetimes of these qubits can be extremely short. So when we first started in the field of building superconducting qubits in 1999, superconducting qubits lasted for maybe two nanoseconds, five nanoseconds. Today, we've gotten up to hundreds of microseconds, close to a millisecond, which is orders of magnitude longer. But that took many years of development. And at the point of a few hundred microseconds, we're able to do all these complex operations that we've been talking about to push this utility scale that we discussed earlier. So that know-how to increase that lifetime comes down to engineering, comes down to understanding the core pieces that generate loss in the materials, and that's something that we certainly have expertise in.
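A back-of-the-envelope sketch shows why those lifetimes matter for running gates. The exponential-decay model, the 100-nanosecond gate time, and the shot counts below are my own simplifying assumptions, not IBM's figures:

```python
import math

# If a qubit decays with lifetime T1, a rough model says the chance it
# survives n sequential gates of duration t_gate is exp(-n * t_gate / T1).

def survival_after_gates(t1_s: float, t_gate_s: float, n_gates: int) -> float:
    """Probability the qubit has not decayed after n_gates gates."""
    return math.exp(-n_gates * t_gate_s / t1_s)

T_GATE = 100e-9  # assume a 100-nanosecond gate

# A 1999-era qubit lasting ~5 ns effectively decays before one gate finishes:
early = survival_after_gates(5e-9, T_GATE, 1)

# A modern ~300-microsecond qubit can survive a hundred gates:
modern = survival_after_gates(300e-6, T_GATE, 100)

print(early, modern)
```

Under this model, the early qubit survives a single gate with essentially zero probability, while the few-hundred-microsecond qubit runs a hundred gates with high probability, which is what makes the complex utility-scale circuits possible.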
Tell me about the industry at large. So IBM has one approach: you said you're using metals on a substrate. You're leveraging all of the semiconductor know-how that IBM has. When you're out in the market and you're looking at all your competitors, Microsoft is doing something else, Google something else. Go through the list for me. What are the approaches, and how do you think they're going?
When we think about competitors, you can think about the platform competitors, who's building the services, but I think what you're pointing to more is the hardware side.
When it comes down to it, there's a simple set of metrics for you to compare the performance of the quantum processors. It's scale: what number of qubits can you get to and build reliably? Quality: how long do those qubits live for you to perform operations and calculations on? And speed: how quickly can you actually run executions and problems through these quantum processors? And that speed part is an interplay between your quantum processor and your classical computing infrastructure, because they talk to one another. You don't control a quantum computer without a classical computer. And so you need to be able to get your data in, get your data out, and process it on the classical side.
So scale, quality, speed. Our approach with superconducting qubits, to the best of our knowledge, can hit all three of those in a very strong way. Scale: we know that we can build up to over 1,000 qubits already with the technologies that we've built. Quality: Heron, which we're releasing, has the best gate quality, the best quality of operations, that has been shown across a large device. And speed: in terms of just the execution times, we're on the order of microseconds for some of the clock rates, whereas other approaches can be thousands of times slower.
What are the other approaches in the industry that you see, and where are they beating you and where are you ahead?
So there are trapped ions: basically they're using ions of elements like caesium, things that you might use for atomic clocks. They can have very good quality. In fact, there are some results that show tremendous performance across a number of those types of trapped-ion qubits in terms of their two-qubit gate qualities. But they're slow. In terms of the clock rates of getting your operations in and getting your operations out, you sometimes do operations to recycle the ion. And that, I'd say, is where the approach has a downside.
I'd say, right now, superconducting qubits and trapped ions are the approaches that have the most prominence at the moment, that have been put out in terms of usable services. Atoms have also emerged; it's very similar to the trapped ions. There, they use these fun little things called optical tweezers to hold atoms in little arrays. And there are some exciting results that have been coming out from various atom groups there. But again, it comes down to that speed. Anytime you have these actual atomic items, either an ion or an atom, your clock rates end up hurting you.
Alright, let me make a comparison to semiconductors again. So in semiconductors there was multiple-patterning lithography that everyone chased for a minute, and it hit an end state. And then TSMC had bet really big on EUV, and that let them push ahead. And Intel had to make a big shift over there. You're looking at your roadmap, you're doing superconductors, cryogenics, metals on substrates, and over here some guys are doing optical tweezers on atoms. Is there a thought in your head like, "We better keep an eye on this because that might be the process innovation that we actually need"?
I think overall, in the landscape, we're always keeping track of what's going on. You're always seeing what the latest innovations are in the various different technologies.
Is that even a good comparison to semiconductors in that way?
The whole systems are completely different. The architectures are not so compatible. At some level, with your semiconductor nodes, there might be certain kinds of know-how that translate, how you route and lay out, maybe. And here, above a certain layer, there's also going to be commonality in terms of the compute platform, how the quantum circuits are generated. The software layers might be similar, but the actual physical hardware is very different.
It feels like the thing we're talking about is how you make a qubit. And it's not settled yet. You have an approach that you're very confident in, but there's not a winner in the market.
I mean, we're pretty confident. We're pretty confident in superconducting qubits.
Fair enough. [Laughs] I was just wondering.
It's why we're able to prognosticate 10 years forward, that we see the direction we're going. And to me it's more that there are going to be innovations within that, which are going to continue to compound over those 10 years, that might make it even more attractive as time goes on. And that's just the nature of technology.
You've got to make decisions on maybe the longest timeline of anyone I've had on the show. It's always the chip people who have the longest timelines. I talk to social media CEOs, and their timeline is like five minutes from now, like, "What are we going to ban today?" That's the timeline. I talk to chip folks, and your timelines are decades. You just casually mentioned a chip you're going to ship in 2033. That's a long time from now. How do you make decisions on that kind of timeline?
There's the near-term stuff, obviously, and the roadmap serves as that guide. That roadmap is constructed so that all these various things do impact that long-term delivery.
Just walk me through: What does the quantum computing roadmap meeting look like? You're all in a conference room, are you at the whiteboard? Paint the picture for me.
Yeah, that is a great question. I mean, we have a number of us who are sitting there. We certainly know that there are certain types of technical foundations that we need to include in these next-generation chips and systems.
For this roadmap, we said, "We know at some point we need to get quantum error correction into our roadmap." And with that technical lead, we know what the requirements are. So first we said, "Okay, let's put it here. Now let's work backward. That says we need to do this innovation and this innovation by this date, and this other innovation in the software stack or whatever by this date." And then we say, "Oh shoot, we ran out of time. Let's move back a little bit." And so we do a little bit of that planning, because we also want to lay out this roadmap with what we often call no-regrets decisions. We don't want to do things that are just for the near term. We want to really pick strategies that give us this long-term path.
It's why we talk about utility scale so much in terms of what we can do with Herons and soon Flamingos. But everything that we want to build on top of what we can do there will translate to what we can do when we get those systems at scale, including error correction. And in terms of the roadmap planning… We're not done, by the way. We have this overall framework for the 10-year roadmap, and then we need to refine. We've got a lot of details still to come, to work on in terms of what needs to be worked on across the software layer, the compiler layer, the control electronics layer, and certainly at the processor layer.
Is there commercial pressure on this? Again, this is a lot of cost at a big public company. Is the CEO of IBM in that room saying, "When's this going to make money? Move it up"?
I think the point is, our mission is to bring useful quantum computing to the world. I've been working in this area for 20 years now. We've never been this close to being able to build something that is driving real value. And so I think when you look at our team, we are all aligned along that mission, that we want to drive this to something that… We started with just getting it out there in the cloud, in terms of building the community. Now, we fundamentally see this as a tool that will alter how users are going to perform computation. And so there has to be, and I expect there to be, value there. And we've seen how the HPC community has progressed, and we've seen how supercomputing has... You can see what's happening with the uptake of AI and everything. We build it, we will build the community around it, and we'll drive value.
Let's talk about AI for a second. This is a really good example of this. AI demand is through the roof. The industry is hot. We'll see if the products are long lasting, but there seems to be real consumer demand for them. And that has all translated into a lot of people wanting a lot of Nvidia H100 chips. It's very narrowly focused on one kind of processor. Do you see quantum systems coming into that zone where we're going to run a lot of AI workloads on them? Like future AI workloads.
What's happened in AI is phenomenal, but we're not at the point where the quantum computer is this commodity item that we're just buying tons of chips. You're not fabricating millions of these chips. But we are going to build this supercomputer based off of quantum computing, which is going to be exquisitely good at certain types of tasks. And so the framework that I actually see is… already you're going to have your AI compute clusters. The way that people run workloads today, I'm sure they are running some parts on their regular computers, on their own laptops, but parts of the job get fed out to the cloud, to their hyperscalers, and some of them are going to use the AI compute nodes.
We see that also for how quantum will feed in. It'll be another part of that overall cloud access landscape where you're going to take a problem, you're going to break it down. You're going to have parts of it that run on classical computing, parts of it that might run on AI, parts of it that will leverage what we call quantum-centric supercomputing. That's the best place to solve that one part of the problem. Then it comes back, and you've got to stitch all that together. So from the IBM perspective, where we often talk about hybrid cloud, that's the hybrid cloud that connects all these pieces together. And differentiation is there in terms of building this quantum-centric supercomputer within there.
So your quantum-centric supercomputer in the cloud. We've talked a lot about superconducting now. You need a data center that's very cold. This does not seem like a thing that's going to happen locally, for me, ever, unless LK-99 is real. This isn't going to happen for anyone in their home outside of an IBM data center for quite some time.
I would say this. So when I was first working in this area and did my PhD in this area — I worked on superconducting qubits — we required these large canisters, these refrigerators, where we needed to wheel up these huge jugs of liquid helium and fill them every three days to keep them cold. Now, that's a physics experiment. I mean, there have already been innovations in cryogenics that make them turnkey: you plug them in, they stay running, they can run for years and maintain your payloads at the right temperatures. You're paying electricity, obviously, to keep them cold. But we're seeing innovations there, too, in terms of driving infrastructure-scale cryogenics. Honestly, we're going to evolve the data center of the future, just like data centers today have evolved to handle the increased compute resources needed. We will work hand in hand with how to build these quantum data centers, and we're already doing that. So we have a quantum data center up in Poughkeepsie, which hosts the majority of our systems, and we're planning on expanding that further.
I think AI has very much complicated the question of what you're allowed to do with a computer chip. The White House just released an executive order about AI. And somewhere in there is the idea that you should not be able to do some things with AI. And I talked to AMD CEO Lisa Su at the Code Conference, and I said, "Would you accept a regulation that limits what people can do on an AMD chip?" And she said, "Well, yeah, we might have to. There might be some stuff we just don't let these computers do anymore." Which is very challenging when you're talking about someone's laptop.
It is way less challenging when you're talking about a data center. Like AWS can just keep you from doing a workload. IBM, I'm sure, has rules and regulations about what its cloud is capable of doing and what it will allow to be done with its cloud computing. Fast-forward to quantum: people are worried that you're going to break AES encryption with quantum one day, and then the world will fall apart because the world runs on AES encryption. Are you thinking about that yet: there's some stuff we should not allow people to do? And as we build the cloud system, we should make sure we put the controls in place?
There are certainly threads of that type of discourse, especially throughout the community. Personally, what I see is, on the encryption one, we already know that there are quantum-safe encryption standards. And a fun thing is, in terms of IBM Quantum, our mission is to bring useful quantum computing to the world. The other side of it is to make the world quantum-safe. We want to actually help clients figure out how to update their encryption standards to those quantum-safe ones. They exist. NIST has approved a number of them, so it's mainly an inertia thing to move entire industries, move banks, move commerce, to adopt those standards.
I can't get people to stop using four-character passwords. Will you talk to them?
Yeah, right. Exactly, that's the challenge. And it's almost a social challenge that needs to be overcome to make that happen. Setting that aside, if we look at what you can or cannot do on quantum computers, I honestly think we need to just watch what's happening with AI and see what's been done in the past with high-performance computing. Again, not everybody has a high-performance computer at home. And so we expect a lot of the frameworks to be very similar. And so my concern is that putting too many safeguards around it early would stifle progress, would stifle the development early.
But this conversation is now happening, I would say, in a much more heated way in the AI space. I mean, it's almost like two religions are competing to see what the future of AI will be: "Just run as fast as you can" and "We should have more safety." And this culminated potentially in whatever happened at OpenAI.
That's right.
Who knows? We still don't know. I don't even know if that's the case. But that is one narrative about that chaos that certainly exists. Is there anything like that in quantum? Are there quantum researchers who are like, "That person is out of control"? Name names. [Laughs]
No, we're not at that stage yet, I'd say. But there are responsible quantum computing initiatives. There are things that are looking at it, and I think there's a lot to lean on in terms of learning from what's happening right now with those AI stories.
What's the thing — outside of just the pure entertainment value — what's the thing about AI accelerationism that you've pulled into how you're thinking about your roadmap and building the systems?
"It's always cool to see tremendous excitement about computing capabilities"
It's actually really cool. Something that we're talking about at our computing summit, too, is that we have Watsonx at IBM, and we actually brought in some GenAI methods to help users program in Qiskit. So there's actually an engine that we built there to help users code that we're going to be previewing. And then another thing is that translating problems into the right circuits that can run on physical hardware is a very challenging task. It is itself an optimization task: there's a particular problem you want to run, and the hardware is configured in a particular way. We call that transpilation: how to map one to the other. And our teams actually used AI methods to find basically more optimal paths for that mapping. It's actually really fun in that AI impacts how we can accelerate quantum. There's another flip side, which is we are looking into how quantum can actually boost classification methods for AI. So it's all tied together in some ways here.
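Qiskit's real transpiler uses sophisticated routing passes (and, as Chow says, AI-tuned heuristics), but the core placement problem he describes can be sketched with the standard library alone. Everything below is a made-up toy for illustration (the coupling map, the gate list, and the brute-force search), not IBM's actual method:

```python
from itertools import permutations

# Toy hardware: four physical qubits connected in a line, 0-1-2-3.
coupling = {(0, 1), (1, 2), (2, 3)}
adjacent = coupling | {(b, a) for a, b in coupling}

# Logical circuit: the two-qubit gates it needs, as logical-qubit pairs.
gates = [(0, 1), (0, 2), (1, 2), (0, 3)]

def swap_cost(placement):
    """Count gates whose qubits land on non-adjacent physical qubits;
    each such gate needs at least one extra SWAP to execute."""
    return sum((placement[a], placement[b]) not in adjacent for a, b in gates)

# Transpilation as optimization: search every logical-to-physical layout.
# Fine for 4 qubits; real transpilers use heuristics at scale.
best = min(permutations(range(4)), key=swap_cost)
print("layout:", best, "gates needing SWAPs:", swap_cost(best))
```

The search finds a layout where only one of the four gates falls on non-neighboring qubits, which is the best possible here: logical qubit 0 talks to three partners, but a qubit on a line has at most two neighbors.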
Has that changed your roadmap, the explosion of demand for AI systems? A year ago, there was no ChatGPT. Now we're sitting at the end of it, and I'm going to go to CES in a couple of weeks and everyone's going to tell me that AI's in everything. The industry just sort of reacts to buzzwords. Has this moved your path at all?
This AI transpilation thing did come in all of a sudden and is part of our roadmap. It's an innovation, and now we want to feed it into something that we want to drive toward product. So in that micro sense, it has. In the more macro sense, I'd just say that it's always cool to see tremendous excitement about computing capabilities. If the buzz stays more on AI and lets quantum off the hook for a little bit, that's not so bad.
Wow. The encryption doomers are like, "Pay attention to us." There are some problems that quantum has always been promised to solve: molecular behavior, mapping proteins. Some of those problems have been attacked by AI very directly. We just had Demis Hassabis on the show — obviously DeepMind, they just did proteins. It's done now, you can have it, we're going to walk away. Is there an overlap between where AI is expanding to in terms of the problem set or what it can do that is competitive with what you want to accomplish with quantum?
I'm not the foremost expert about what molecular problems can be solved here. But I can at least say that we know that there are certain sizes and certain scales of problems that, in terms of supercomputing resources, push Summit, push Frontier to the max limits of what users can actually simulate. Again, I don't know how much of that can actually be looked at using AI for approximate methods, but even then, it'd still be approximate methods. And here's where quantum is really going to be something that allows one to look at it differently.
When you're looking at what you have right now — you have partners, you have potential customers, you have people interested — what's the largest volume of interest from the community?
There are those that are using various materials. For example, the Department of Energy, Oak Ridge National Lab, those that already use high-performance computing. They are super interested in using our platforms. Boeing actually has been working with us for quite a while. They're just looking at super tough problems like composites of materials and layers of materials and how best to arrange them. And they have tremendous problems with thousands of variables that basically cannot be solved on classical computers. And we've been working with them to understand how to map their problems onto quantum. And then you have the financial services industry. You have a number of players there that are looking at things like portfolio optimization, trying to understand all these things.
It's always portfolio optimization, man. At the end of the day, it's like Boeing's doing some cool shit and portfolio optimization. It's always lurking in the background somewhere. It's fine, they pay the bills. It's good.
You've been talking a lot about the cloud. You've got your cloud systems. You've also put System Ones on college campuses. How does that work? You buy a System One, it's got some qubits in it. Is there a person rolling the helium up to it?
They're still owned by IBM. They're actually managed services deployed on the premises of the client locations. So we actually deployed one earlier this year with Cleveland Clinic in Ohio. That's probably the most interesting place that we've deployed a system, in that it's in their cafeteria.
That's amazing.
People have their morning coffees and eat their lunch around it.
And that's just a self-contained local supercomputer.
You can think of it as a self-contained, local managed service that they're able to build a network and ecosystem around with their researchers and other partner university institutions that might want to use it. So that's sort of the idea. Again, we have our main data center and cloud-accessible systems, as you mentioned. And then you have these other ones that drive regional ecosystems. And we're actually launching a European data center around our system over in Germany next year because, in different locations, people care about how their data is handled. And so then you never have to send information overseas and things like that. So at that level, we can certainly build that type of flexibility into how we manage that service in terms of user data and everything.
Part of the news today is System Two. Do you have System One customers who are like, "Oh, shit, I should have waited"? How does that work with a quantum supercomputer? Is there an upgrade cycle?
Even with our System Ones, we've actually upgraded those over time. And again, with our roadmap, some of them in fact first launched with 27-qubit Falcons. As an example, we just announced that our system in Japan with the University of Tokyo got upgraded to a 127-qubit Eagle processor. But in terms of the infrastructure from System One to System Two, it's wholly different. So System One is great in that it first showed that we can put these things almost anywhere — a cafeteria, for example. You didn't have to be in a physics laboratory for them to function.
And in the cafeteria there's the superconducting, super-cooled cryogenic system?
Yeah. Like I say, you have your morning coffee…
And you're just looking at it.
…next to a really, really cold 15 millikelvin quantum processor.
Do people know it's there? Is there a sign?
It's hard to miss. [Laughs] It's this glass box that is... Actually, funny story is that we work with this vendor, Goppion, that actually handles the glass that encases the Mona Lisa to help build the enclosures for our systems.
That's cool. Alright, so System Two.
Yeah. So System Two: whole new level of infrastructure. But it's designed to scale. And so that's where certainly upgradeability and modularity are inherently built into it. You want to increase the number of processors, increase the cryogenic cooling environment? We can do that. Like Lego, like modular blocks. You want to increase the amount of control electronics? We can do that. You want to increase the amount of classical computation to interface with the quantum computer? We can do that, too. That's the idea behind System Two, that it's really designed for scalability in a modular way within a data center environment.
So IBM is announcing a new chip, new supercomputer, System Two, new roadmaps. If you're just a regular person and you're looking at the pace of supercomputer development, what should you be looking out for?
I'd say that you've just got to be looking out at the fact that it's actually not hard to get started and learn about it. There's an entire set of resources, and you can go and program a quantum computer tomorrow. And with the fact that we have this 10-year roadmap, and the fact that we are building this ecosystem and driving toward these new generations of chips and systems, we want to develop the developer of the future. And so if you're at all interested in learning about using a quantum computer and getting involved, there's a tremendous opportunity for growth here. And we're going to need that. To build an entire industry, and to build this as a computer platform that works together seamlessly with today's most high-performance computers, is going to require a groundswell of people. So to me, you touch so many different people out there, it's like, get out there. You can run and program a quantum computer tomorrow. We have freely available systems to run circuits on.
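Getting started really is a few lines: IBM's systems are programmed through Qiskit, and the usual "hello world" circuit is the two-qubit Bell state. As a dependency-free illustration of what that first program computes (a hand-rolled statevector sketch in plain Python, not the Qiskit API), here is Hadamard-then-CNOT:

```python
# Two-qubit state as amplitudes over the basis states |00>, |01>, |10>, |11>.
state = [1.0, 0.0, 0.0, 0.0]  # start in |00>

def hadamard_q0(s):
    """Hadamard on the left qubit: puts it in an equal superposition."""
    r = 2 ** -0.5
    return [r * (s[0] + s[2]), r * (s[1] + s[3]),
            r * (s[0] - s[2]), r * (s[1] - s[3])]

def cnot(s):
    """CNOT with the left qubit as control: flip the right qubit when left is 1."""
    return [s[0], s[1], s[3], s[2]]

state = cnot(hadamard_q0(state))
probs = {format(i, "02b"): abs(a) ** 2 for i, a in enumerate(state)}
print(probs)  # 00 and 11 each come up half the time; 01 and 10 never do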
Stop playing around with your LLMs; get on the quantum train. That's what I'm taking away from this.
Yeah.
Alright. Last very silly question. When you watch the Ant-Man movies, are you just furious all the time?
I'd say that the first few Ant-Man movies, some of the quantum focus was interesting, it was cute. But the most recent one where they had an entire civilization inside, oof.
It's a little rough.
That was a little rough.
Alright, Jerry, this was amazing. Thank you so much for coming on Decoder.
Yeah, you're very welcome. Glad to be here.
Decoder with Nilay Patel /
A podcast about big ideas and other problems.