Here I am checking the status of a specially built ultra-high vacuum (UHV) metal deposition chamber used for developing thin films compatible with use in quantum information devices. Credit: C. Suplee/NIST

Seeking the Power of Quantum Computing in Silicon

Josh Pomeroy, Physicist, National Institute of Standards and Technology

Despite the vast computing speed and power available to us in our palms, our desktops and massive server farms, classical computing is reaching its limits, leaving a surprising number of conceptually simple problems essentially intractable. In general, these tend to be problems that ask, “What is the most efficient way to do something?” These problems become exponentially more difficult as the number of possible choices grows, and rather than explicitly determine the answer, we instead rely on “best practices” or instinct. But quantum computing can change that.

Getting to work

The daily commute to work provides a real-world example we can all identify with. Personally, every morning I have four or five choices for my route to work. Each day I contemplate several factors that I think will affect which is most efficient: Is it a popular telework day like Monday or Friday or a holiday week? Is it raining? Do I feel like a rural, winding drive, or a “stimulating,” lane-switching, high-speed commute? Once I choose a particular route, I engage in another layer of optimization: I note which lanes tend to move most efficiently over which distances at which times of day; I adjust my speed once I clear one green light to improve my chances of hitting the next green light; and I preemptively change lanes to avoid the backup for the doughnut shop drive-through or the I-270 ramp.

OK, I admit I am a bit neurotic in how I approach my commute. But consciously or not, we all engage in some sort of decision-making process when we depart for work in the morning. Maybe you just choose the same route and tune out as your mind prepares for the day ahead. Or maybe you rely on a navigation app, perhaps one that considers traffic conditions to adjust routing. But even these are surprisingly limited, usually only considering arterial routes (avoiding small back roads) and giving little or no attention to how traffic conditions are likely to change by the time you reach a particular segment. These limitations are not simply a deficiency of data — considering those weighting factors makes it a really hard problem!

Our drive to work is one of the simplest examples of a problem academically known as the “Traveling Salesman Problem,” but it is perhaps more apropos today to call it the “Package Delivery Problem.” The general idea is that a courier is tasked with visiting a number of locations to make deliveries and must choose a route. For simplicity, let’s suppose it’s a light day and only 10 deliveries are needed. That is still a whopping 3,628,800 (10 factorial) possible orderings of the stops!
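The factorial blowup, and why brute force only works for toy cases, can be sketched in a few lines of Python. The stops and pairwise distances below are made up purely for illustration:

```python
import math
from itertools import permutations

# The number of possible visiting orders for n stops is n! (n factorial).
print(math.factorial(10))  # 3628800 orderings for just 10 deliveries

# Brute force is feasible only for tiny instances: try every route over a
# handful of stops. Distances here are illustrative, not real road data.
stops = ["A", "B", "C", "D"]
dist = {("A", "B"): 4, ("A", "C"): 2, ("A", "D"): 7,
        ("B", "C"): 3, ("B", "D"): 5, ("C", "D"): 6}

def leg(a, b):
    # Distances are symmetric; look up the pair in either order.
    return dist.get((a, b)) or dist[(b, a)]

def route_length(route):
    return sum(leg(a, b) for a, b in zip(route, route[1:]))

# Examine all 4! = 24 orderings and keep the shortest.
best = min(permutations(stops), key=route_length)
print(best, route_length(best))
```

Four stops means 24 routes to check; doubling to eight stops already means 40,320, which is why this approach collapses almost immediately.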

The package delivery problem

Obviously, the courier doesn’t work through all these possibilities and instead uses intuition or experience to quickly discard many of them. Common sense suggests always heading to the nearest unvisited location and progressing through the list, but in reality, the complexity continues to grow rapidly with real-world constraints; for example, geographic features may intervene. What if some locations are in northern Virginia and others in southern Maryland, where the Potomac River drastically increases the “cost” of moving between Maryland and Virginia? Then, as the single-destination commute example above illustrates, many route choices may exist between a pair of locations, each affected by many factors (road quality, traffic, weather, etc.). All these factors adjust the effective length of each link and change over time as the route progresses, which makes finding the optimal solution intractable.
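That “head to the nearest unvisited stop” shortcut is the classic nearest-neighbor heuristic. A minimal sketch, with invented distances standing in for real geography (a river crossing would simply inflate the relevant entries):

```python
# Nearest-neighbor heuristic: from the current stop, always drive to the
# closest unvisited stop. It needs only about n^2 comparisons instead of
# examining n! routes, but it is not guaranteed to find the optimum.
dist = {("Depot", "A"): 3, ("Depot", "B"): 6, ("Depot", "C"): 8,
        ("A", "B"): 4, ("A", "C"): 9, ("B", "C"): 2}

def leg(a, b):
    # Distances are symmetric; look up the pair in either order.
    return dist.get((a, b)) or dist[(b, a)]

def nearest_neighbor(start, stops):
    route, remaining = [start], set(stops)
    while remaining:
        # Greedily pick whichever unvisited stop is closest right now.
        nxt = min(remaining, key=lambda s: leg(route[-1], s))
        route.append(nxt)
        remaining.remove(nxt)
    return route

print(nearest_neighbor("Depot", ["A", "B", "C"]))
```

The greedy choice looks only one hop ahead, which is exactly why an early cheap leg can strand the courier on the wrong side of a “river” later in the route.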

Package couriers seeking to optimize time and fuel are presented with a new set of locations to visit each shift, but something as simple as a fender-bender can disrupt the entire route by removing critical connections. Credit: N. Hanacek/NIST

Enter quantum. A classical computer must calculate the effective distance between each pair of nodes (which may also depend on time) and then work through the total length of every possible combination to find the optimal route. However, unlike the binary bits used in classical computing, which take only two possible values, 0 and 1, quantum algorithms use “qubits,” which encode information in multiple dimensions; that is, a qubit can be composed of both 0 and 1 at the same time. Furthermore, qubits can be coupled to generate “entanglement,” a uniquely quantum property that vastly increases the computational power available for certain complex problems, potentially bringing classically intractable ones within reach. The bevy of possible problems that could be solved by quantum makes our classical computational power look like we are counting on our fingers.
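The bookkeeping gap can be made concrete: describing a register of n qubits classically takes 2 to the power n complex amplitudes, one per bit pattern, so the cost doubles with every qubit added. A toy sketch (plain arithmetic, not a real quantum simulator) puts three qubits into an equal superposition of all eight patterns:

```python
import math

# A register of n qubits is described by 2**n complex amplitudes, one per
# classical bit pattern. Here we write down an equal superposition of all
# 2**3 = 8 patterns of a 3-qubit register: the quantum analogue of
# "holding every candidate route at once."
n = 3
amp = 1 / math.sqrt(2 ** n)      # equal weight for each bit pattern
state = [amp] * (2 ** n)         # 8 amplitudes for 3 qubits

for i, a in enumerate(state):
    print(f"|{i:0{n}b}>  amplitude {a:.4f}  probability {abs(a) ** 2:.4f}")

# The measurement probabilities must sum to 1.
print(sum(abs(a) ** 2 for a in state))
```

At 50 qubits this list would already hold about 10 to the 15th amplitudes, which is the crux of why classical machines struggle to track quantum states.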

But while an easier commute, incrementally faster package delivery or smoother air traffic system may all improve our lives in small ways, scientists imagine much more substantial opportunities. For example, our bodies are incredibly complex networks with massive interconnectivity. When a new medicine is introduced, the effect ripples through the whole system. Hopefully, the main result is a reduction in swelling or stabilized heart rate as intended, but that often comes along with negative side effects. Modern medicine must rely on statistics to predict how most people will respond to a certain medicine at a certain dose. But each of our bodies is unique, so each of us has our own optimal solution, often significantly different from others.

Quantum silicon

A future with more effective medicine, more efficient energy use and better commutes(!) is what motivates thousands of physicists, chemists, engineers and computer scientists to develop practical quantum computing. What is the optimal approach? The majority opinion is that modifying conventional silicon transistor technology to host quantum computation seems like the best way forward. But despite the many excellent attributes of silicon, silicon dioxide and our nanofabrication technology, silicon technology is not yet sufficient for quantum.

Why? In classical computing, information is represented with electric charges in silicon, but quantum computing is expected to rely on “spin.” Spin is the property of an electron or atom that gives it magnetic properties. In the simplest case, there are two possible values of spin: “spin down” can represent a 0 and “spin up” can represent a 1, providing the values that are needed for a qubit.

Spin is much more fragile than charge when you scale down to only one spin. In fact, for one spin, the subatomic makeup of silicon atoms becomes important. Silicon always has 14 protons, but it exists in three “isotopes”; that is, it may have 14, 15 or 16 neutrons and still be silicon. When a silicon atom has 15 neutrons (silicon-29), it acts like a tiny magnet, which disrupts the quantum information in a spin qubit. So, if we can isolate silicon-28 atoms (14 neutrons), eliminating silicon-29, then we can create a “semiconductor vacuum,” free from both electric and magnetic disturbance.

This glowing piece of silicon is being prepared in a custom UHV chamber for enriched silicon deposition and eventual fabrication of quantum information devices. Credit: NIST

Isolating, or “enriching,” silicon-28 to eliminate the pesky silicon-29 is the focus of a project I lead to discover what is needed for quantum silicon. During the past several years, our efforts have produced the world’s most highly enriched silicon, reducing the silicon-29 atoms from the roughly 5% found in natural silicon to less than 0.00001% in our specimens. For context, this is about 100 times more enriched than the next best, which is silicon specially commissioned for the International Avogadro Project. We have learned how to make single crystals from our enriched silicon, and then how to make capacitors, diodes and transistors, the basic components of computing. And while our enrichment techniques will not provide the silicon needed for a silicon quantum revolution, they can inform industry about how much enrichment is needed before very expensive enrichment plants are spun up.
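A back-of-the-envelope calculation shows what those enrichment numbers buy. Treating each lattice site as independently having the stated chance of being silicon-29 (and using a purely illustrative atom count for a qubit-sized region, not an actual device spec):

```python
# Chance that a small patch of silicon contains zero spin-carrying Si-29
# atoms, modeling each site independently: (1 - fraction) ** n_atoms.
# The atom count is illustrative; the fractions come from the text above
# (roughly 5% Si-29 in natural silicon vs. <0.00001% after enrichment).
n_atoms = 100_000  # atoms in a hypothetical qubit-sized region

for label, si29_fraction in [("natural silicon (~5% Si-29)", 0.047),
                             ("enriched silicon (0.00001% Si-29)", 1e-7)]:
    p_clean = (1 - si29_fraction) ** n_atoms
    print(f"{label}: P(no Si-29 in region) = {p_clean:.3g}")
```

Under these toy assumptions, a natural-silicon region essentially always contains disruptive silicon-29 atoms, while the enriched material has a good chance of hosting none at all, which is the whole point of the “semiconductor vacuum.”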

How do we learn what enrichment is sufficient for quantum in silicon? Currently, that question can only be answered by making qubits, a long, risky path that is suboptimal. In our work, we seek to both answer that question and to develop measurements that are simpler and easier than making qubits to answer it in the future. As an example, in a recent paper, we measured the properties of electron motion in silicon transistors due to quantum effects and compared two specimens, a commercial wafer and highly enriched silicon grown here at the National Institute of Standards and Technology (NIST). We think these measurements could provide tests for a specimen’s material quality and also serve as early indicators during manufacturing of whether a process needs recalibrating before a whole production run is lost.

Unfortunately, though, to show that these simpler measurements can be used as guides, we also need to make and measure qubits, since new measurement approaches must be validated. So, now we are making the qubits needed to see if these simpler methods correlate with full-blown quantum measurements. With this full suite of tools, we can deliberately change how much silicon is enriched and explicitly determine how much benefit comes with different levels of enrichment.

Quantum makes everything harder! The materials requirements are harder, the fabrication demands are harder, and many of the measurements needed don’t even exist yet. Furthermore, we face the ultimate chicken-and-egg problem: How do you know you have made something if you can’t measure it, and how do you measure something if you can’t make it? Fortunately, this kind of challenge is NIST’s bread and butter, and with the contributions of thousands of scientists from NIST and elsewhere, quantum computation will become a reality … and then we can find out how close to optimal our path to making it was!

This post originally appeared on Taking Measure, the official blog of the National Institute of Standards and Technology (NIST) on February 25, 2020.

To make sure you never miss our blog posts or other news from NIST, sign up for our email alerts.

About the Author

Josh Pomeroy has been an experimental physicist at the National Institute of Standards and Technology (NIST) since 2003. He performs fundamental research that seeks to understand how improvements in existing materials, devices fabricated from those materials, and their processing can improve the performance of systems used in technology and metrology. He graduated from Cornell University with a Ph.D. in physics and received his undergraduate degree in physics from Boston University.



National Institute of Standards and Technology

NIST promotes U.S. innovation by advancing measurement science, standards and technology in ways that enhance economic security and improve our quality of life.