Qubit by qubit, the quantum computers of tomorrow are coming into being

The quantum computing industry has a road map to the future — but can it reach its destination?

A couple of weeks ago, I navigated the holiday traffic from New York to the town of Yorktown Heights in suburban Westchester County, the location of IBM’s Thomas J. Watson Research Center, to stand before perhaps the most advanced quantum computer in the world.

The IBM Quantum System Two is sheathed in gray metal, roughly the size and shape of an industrial refrigerator. Which makes sense — much of what fills the interior is cooling equipment that keeps the three IBM Heron quantum processors “colder than deep space,” as IBM senior vice president and director of research Darío Gil told me in an interview. Up close, it emits a quiet electronic hum.

The Quantum System Two looks forbidding, like some offspring of the supercomputer from 2001: A Space Odyssey, minus HAL’s eerie red eye. And it should — with hundreds of qubits, the quantum counterpart to classical computing bits, operating on three connected processors, the Quantum System Two represents a significant step forward on the long path to bringing quantum computing out of the lab and into the practical world, where such machines could one day solve problems that even the fastest classical supercomputers couldn’t crack in millions of years.

But appearances can be deceiving. As indestructible as the hardware looks, the qubits that do the real work of quantum computing inside the System Two are exquisitely sensitive and prone to error. That’s why quantum computers need to be kept colder than cold: even the slightest increase in temperature, vibration, or noise can cause qubits to drop out of the fickle quantum state that allows them to do their magic.

And that’s why as impressive as the Quantum System Two looks, it still represents the early stages of the quantum computing revolution — more 1960s-era CDC 6600 than, well, HAL. For quantum computers to truly become big and reliable enough to begin to serve the needs of business, leading companies like IBM — and Google and Honeywell and an assortment of other competitors in the quantum race — will need to invest billions of dollars and overcome serious technical challenges. “We’ve been on this journey for many years, and we’re going to be on it for many more years,” Gil told me. “But we have a roadmap.”

Why quantum matters

The difference between a classical computer — which includes everything from the machine you’re reading this article on to the Frontier supercomputer at Oak Ridge National Laboratory — and a quantum computer starts at the most fundamental level: the bit, or for quantum computers, the qubit.

While bits express information in binary fashion — on/off, 1/0 — qubits take advantage of the strange properties of quantum physics to exist in multiple states at the same time, just as the quantum particles they’re built from can. (If that breaks your brain, don’t worry — Einstein felt much the same way.) What this means for computing is that while classical machines work through possibilities one after the other, quantum machines can, in effect, explore many possibilities simultaneously. In a recent interview with 60 Minutes, the physicist Michio Kaku compared a quantum computer to a mouse in a maze that can scan every possible route out at the same time, rather than trying them turn by turn like a classical machine.
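
To make that concrete, here is a minimal sketch (mine, not anything IBM runs) of how superposition is represented mathematically, using plain Python and NumPy. A qubit is just a pair of complex amplitudes over the basis states |0⟩ and |1⟩, a register of n qubits is a vector of 2^n amplitudes, and a measurement picks one outcome with probability equal to its squared amplitude.

import numpy as np

rng = np.random.default_rng(0)

# A classical bit is 0 or 1. A qubit is a unit vector of two complex
# amplitudes over the basis states |0> and |1>. This one is an equal
# superposition: the state a Hadamard gate produces from |0>.
qubit = np.array([1.0, 1.0]) / np.sqrt(2)

# n qubits occupy a 2**n-dimensional state space: a register of just
# 10 qubits carries amplitudes for all 1,024 classical bit strings at once.
n = 10
register = np.ones(2**n) / np.sqrt(2**n)

# Measurement collapses the state. Each outcome appears with probability
# equal to its squared amplitude, so repeated runs sample the distribution.
probs = np.abs(qubit) ** 2  # [0.5, 0.5]
print(rng.choice([0, 1], size=10, p=probs))

Run it a few times and the printout is a random mix of 0s and 1s in roughly equal proportion: the superposition holds every possibility at once, but each measurement yields only a single answer.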

That means a quantum computer could solve highly complex problems that would always be outside the reach of a classical computer, no matter how fast it runs or how much time it has to work. It’s as if computing had always existed in two dimensions, and suddenly, we discovered a third.

The practical possibilities are enormous. Beyond potentially being able to break as-yet-unbreakable cryptography — one reason why security agencies around the world, including in the US and China, are so focused on quantum — quantum computers could one day perfectly model the behavior of the physical world, which, after all, is ultimately underpinned by quantum physics, not classical. The Cleveland Clinic is already using an earlier IBM quantum computer in an effort to screen and optimize drugs targeted at specific proteins.

Quantum will also become more important as classical computing reaches its physical limits. Computers today have gotten more powerful because scientists have managed to pack more and more transistors onto each chip. That was how Moore’s law — the observation that the number of transistors on a chip doubles roughly every two years, making computers ever more powerful and cheaper — went from a prediction to a reality. But transistors today are as small as five atoms across, which means there’s not much more room to grow (or shrink). Quantum represents an escape hatch from these physical limits.

A road map to the future

The money behind what Politico called “the quantum hype cycle” is accumulating. Last week, the House passed a $3.6 billion reauthorization of the National Quantum Initiative Act, while China has committed $15.3 billion in funds for quantum computing research. Alphabet’s Google and China’s Baidu have gone deep into quantum as well, on top of a growing ecosystem of quantum startups.

This summer, IBM’s own team of scientists (which includes recent Future Perfect 50 selectee Jerry Chow) notched a notable scientific advance in the use of quantum computers when one of its quantum systems came up with a better answer to a complex physics problem than a conventional supercomputer. On Monday, the company crossed a technical barrier with its new Condor processor, made of 1,121 superconducting qubits, the largest such quantum chip ever released.

As Jay Gambetta, IBM’s vice president of quantum, told me, this series of advances signifies that “we’re entering the age of quantum utility.” What that means is that while quantum computers were once chiefly used to study, well, quantum computing itself, the hardware has now reached the point where it can be used to advance science in other fields, and eventually, much more.

But that utility is still as fragile as the qubits themselves. While the hardware keeps improving, actual revenue for the entire quantum industry is less than a billion dollars. Even IBM forecasts that it won’t develop quantum computers with a useful number of qubits — meaning enough to do practical work — until the end of the decade.

The very ability that makes quantum computers so potentially powerful — harnessing quantum mechanics to process information in a way that is simply impossible in a classical system — is what makes them so fragile. Allow the slightest perturbation to your qubits, and errors will creep in that will render the output of your quantum computer useless.

That’s one reason why a major focus of IBM’s quantum research is on error correction and mitigation — producing high-quality physical qubits in large numbers, which act as a kind of support team for the much smaller number of “logical qubits” that actually carry information. But a road map that still demands years before a machine can fulfill the promise of the quantum computing revolution means a lot of time and money poured into what is still, for the most part, pure research.
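
How a large pool of error-prone physical qubits props up a handful of logical ones is easiest to see through a classical analogy. The toy repetition code below is my illustration, not the scheme IBM uses (real quantum codes, such as the surface code, are far more involved): spend several unreliable physical bits to buy one much more reliable logical bit.

import random

def encode(bit, n=5):
    """Encode one logical bit as n redundant physical copies."""
    return [bit] * n

def add_noise(codeword, p=0.10):
    """Flip each physical bit independently with probability p."""
    return [b ^ 1 if random.random() < p else b for b in codeword]

def decode(codeword):
    """Majority vote recovers the logical bit unless most copies flipped."""
    return int(sum(codeword) > len(codeword) / 2)

# With a 10 percent physical error rate, this 5-bit code fails only when
# three or more copies flip at once, so the logical error rate drops
# below 1 percent.
trials = 100_000
failures = sum(decode(add_noise(encode(0))) for _ in range(trials))
print(f"logical error rate: {failures / trials:.4f}")  # roughly 0.0087

Quantum error correction has to clear extra hurdles (an unknown qubit can’t be copied, and measuring it directly collapses it), which is why published estimates for codes like the surface code often run to hundreds or even thousands of physical qubits per logical qubit.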

It’s impossible to be 100 percent sure whether we’ll get to a useful quantum computer by the end of the decade, at least without the kind of all-knowing quantum computer we won’t have for decades or more. But I hope the companies chasing it succeed. Advances in computing power have been the fundamental driving force of technological progress, from a satellite in space to a phone in your pocket. The kind of leap forward that useful quantum computing promises would truly be revolutionary. The technology itself may be fragile, but there’s no limit to what we could build on its foundation.

A version of this story originally appeared in the Future Perfect newsletter. Sign up here!
