NIST physicist Elizabeth Donley with a compact atomic clock design that could help improve precision in ultraportable clocks. About 1 million cold rubidium atoms are held in a vacuum chamber in the lower left of the photo. On the screen is a close-up of the atom trapping region of the apparatus. Credit: von Dauster/NIST

Keeping Time at NIST

--

Mark Esser, Writer/Editor, National Institute of Standards and Technology (NIST)

Einstein is reported to have once said that time is what a clock measures. Some say that what we experience as time is really our experience of entropy, the quantity at the heart of the second law of thermodynamics. Entropy, loosely explained, is the tendency of things to become disorganized. Hot coffee always goes cold. It never reheats itself. Eggs don’t unscramble themselves. Your room gets messy, and you have to expend energy to clean it, until it gets messy again.

Here at NIST, we don’t worry about any of these philosophical notions of time. For us, time is the interval between two events. That could be the rising and setting of the sun, the swing of a pendulum from one side to another, or the back-and-forth vibration of a small piece of quartz. For the most precise measurement of the second, we look at the electromagnetic waves that an atom releases and consider the very short time it takes two successive peaks of the wave to hit a detector. This “frequency” — the number of wave cycles that hit a detector per second — can be used to precisely define very brief time intervals.

James Clerk Maxwell, the father of electromagnetic theory, was the first person to suggest that we might use the frequencies of atomic radiation as a kind of invariant natural pendulum, but he was talking about this in the mid-19th century, long before we could exert any kind of control over individual atoms. We would have to wait a century for NIST’s Harold Lyons to build the world’s first atomic clock.

NIST Director Edward Condon (left) and clock inventor Harold Lyons contemplate the ammonia molecule upon which the clock was based.

Lyons’ atomic clock, which he and his team debuted in 1949, was actually based on the ammonia molecule, but the principle is essentially the same. Inside a chamber, a gas of atoms or molecules flies into a device that emits microwave radiation over a narrow range of frequencies. When the emitter hits the right frequency, it energizes a maximum number of atoms. The atoms want to shed that energy as quickly as possible, and they do so by giving off microwaves of a specific frequency of their own. The time it takes a defined number of cycles of those microwaves to hit a detector is what we call a second.
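The tuning step described above can be pictured numerically. Here is a toy sketch, not NIST’s actual control software: it models the atoms’ response as a Lorentzian peak (an illustrative choice, with a made-up linewidth) and scans the emitter frequency until it finds the setting that excites the most atoms.

```python
# Toy model: find the microwave frequency that excites the most atoms
# by scanning across the atomic resonance. The Lorentzian line shape
# and the linewidth are illustrative, not NIST's actual values.

CESIUM_RESONANCE_HZ = 9_192_631_770  # cesium hyperfine transition frequency
LINEWIDTH_HZ = 100.0                 # illustrative width of the response

def excited_fraction(freq_hz):
    """Fraction of atoms excited at a given emitter frequency (Lorentzian model)."""
    detuning = freq_hz - CESIUM_RESONANCE_HZ
    return 1.0 / (1.0 + (2.0 * detuning / LINEWIDTH_HZ) ** 2)

# Scan a narrow band around the expected resonance and keep the best frequency.
candidates = [CESIUM_RESONANCE_HZ + offset for offset in range(-500, 501, 10)]
best = max(candidates, key=excited_fraction)
print(best)  # the scan settles on 9,192,631,770 Hz
```

A real clock runs this as a continuous feedback loop, constantly nudging the emitter back toward the frequency that maximizes the atomic response.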

Lyons’ clock, while revolutionary, wasn’t any better at keeping time than astronomical observations were. The first cesium clock accurate enough to be used as a time standard was built by NIST’s counterpart in the U.K., the National Physical Laboratory, in 1955. NIST’s first cesium clock accurate enough to serve as a time standard, NBS-2, was built a few years later, in 1958, and went into service as the official U.S. time standard on January 1, 1960. It had an uncertainty of one second every 3,000 years, meaning that it kept time to within 1/3,000 of a second per year. That’s pretty good compared with an average quartz watch, which might gain or lose a second every month.
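To put those two error rates side by side, here is a quick back-of-the-envelope comparison. The quartz figure of roughly one second per month is the article’s rough estimate, not a precise specification:

```python
# Compare NBS-2's drift with a typical quartz watch, in seconds per year.
nbs2_error_per_year = 1 / 3000   # one second every 3,000 years
quartz_error_per_year = 12.0     # roughly one second per month

improvement = quartz_error_per_year / nbs2_error_per_year
print(round(improvement))  # NBS-2 is roughly 36,000 times steadier
```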

In 1967, the atomic second based on the cesium clock was defined in the International System of Units as the duration of 9,192,631,770 cycles of the microwave radiation emitted by the cesium atom. It remains so defined to this day.
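Counting that many cycles per second means each individual cycle is extraordinarily brief, which is exactly what makes the definition so fine-grained. A quick calculation:

```python
# The SI second: 9,192,631,770 cycles of cesium's microwave radiation.
CYCLES_PER_SECOND = 9_192_631_770

# Duration of a single cycle, in seconds.
one_cycle_s = 1 / CYCLES_PER_SECOND
print(one_cycle_s)  # about 1.088e-10 s, roughly a tenth of a nanosecond
```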

While the definition has stayed the same, atomic clocks sure haven’t. They were continually improved, becoming ever more stable and accurate, until the hot clock design reached its peak with NIST-7, which would neither gain nor lose one second in 6 million years.

Why do we say “hot” clock? Because until the 1990s, the cesium inside these clocks was a little warmer than room temperature. At those temperatures, cesium atoms move at around 130 meters per second, pretty fast. So fast, in fact, that it was hard to get a read on them: the clocks simply didn’t have much time to capture the atoms’ fluorescence and extract an accurate, stable signal. What we needed was to give our detectors more time to get the best signal by slowing down the atoms. But how do you slow down an atom? With laser cooling, of course.

But how can lasers cause something to cool down? Aren’t lasers hot? The answer is: It depends. The science of slowing atoms with lasers was pioneered by Bill Phillips and his colleagues, a feat for which they shared the 1997 Nobel Prize in Physics. Very basically, they used a specially tuned array of lasers to bombard the atoms with photons from all angles. These photons are like pingpong balls compared with the bowling-ball-like atoms, but if you have enough of them, they can arrest the atoms’ motion, slowing them from about 130 meters per second to a few centimeters per second. That gives the clock plenty of time to get a good read on their signal and vastly improves its accuracy and precision.
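The pingpong-ball analogy can be made quantitative. Using textbook physical constants for cesium (these specific numbers are not from the article), each photon kick carries so little momentum that stopping one thermal atom takes tens of thousands of kicks:

```python
# How many photon "kicks" does it take to stop a cesium atom?
# Constants below are standard textbook values, used here for illustration.
H = 6.626e-34           # Planck constant, J*s
WAVELENGTH = 852e-9     # cesium cooling-laser wavelength, m
CESIUM_MASS = 2.21e-25  # mass of one cesium atom, kg
SPEED = 130.0           # thermal speed quoted in the article, m/s

photon_momentum = H / WAVELENGTH     # momentum carried by a single photon
atom_momentum = CESIUM_MASS * SPEED  # momentum of one thermal cesium atom

kicks_to_stop = atom_momentum / photon_momentum
print(round(kicks_to_stop))  # tens of thousands of photon kicks
```

Each kick happens in a fraction of a microsecond, so the whole slowing process still takes only about a millisecond.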

The first clock to use this new technology, NIST-F1, called a fountain clock, was put into service in 1999 and originally offered a threefold improvement over its predecessor, keeping time to within 1/20,000,000 of a second per year. NIST continued to enhance the design of NIST-F1 and subsequent fountain clocks until the accuracies approached one second every 100,000,000 years.

In the JILA/NIST strontium atomic clock, a few thousand atoms of strontium are held in an “optical lattice,” a 30-by-30 micrometer column of about 400 pancake-shaped regions formed by intense laser light. Credit: The Ye group and Brad Baxley, JILA

Never ones to rest on their laurels, NIST and its partner institutions, including JILA, are also working on a series of experimental clocks that operate at optical frequencies with trillions of clock “ticks” per second. One of these clocks, the strontium atomic clock, is accurate to within 1/15,000,000,000 of a second per year. This is so accurate that it would not have gained or lost a second if the clock had started running at the dawn of the universe.
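We can sanity-check that dawn-of-the-universe claim with the commonly cited age of the universe of about 13.8 billion years:

```python
# Would a clock that drifts one second per 15 billion years have slipped
# a full second since the Big Bang?
AGE_OF_UNIVERSE_YEARS = 13.8e9    # widely cited estimate
YEARS_PER_SECOND_OF_DRIFT = 15e9  # strontium clock: 1 s per 15 billion years

drift_seconds = AGE_OF_UNIVERSE_YEARS / YEARS_PER_SECOND_OF_DRIFT
print(drift_seconds)  # about 0.92 s: still under one second, as claimed
```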

But why do we need such accurate clocks? One thing that wouldn’t exist without such accurate time is the Global Positioning System, or GPS. Each satellite in the GPS network carries atomic clocks and beams signals to users below announcing its position and the time the signal was sent. By measuring the amount of time it takes for the signals from four different satellites to reach you, the receiver in your car or in your phone can figure out where you are to within a few meters or less.
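The core of that trick is converting a time delay into a distance at the speed of light. This sketch shows the conversion and why the onboard clocks have to be atomic (the numbers are illustrative; the full position fix also involves solving for the receiver’s own clock error, which is omitted here):

```python
# Time-of-flight ranging: the heart of GPS positioning.
C = 299_792_458.0  # speed of light, m/s

def distance_from_delay(delay_s):
    """Distance to a satellite, given the signal's travel time in seconds."""
    return C * delay_s

# A signal that took about 70 milliseconds to arrive traveled roughly
# 21,000 km, on the order of a GPS satellite's altitude.
print(distance_from_delay(0.070) / 1000)  # about 20,985 km

# Why the clocks must be so good: a timing error of just one nanosecond
# already shifts the computed distance by about 30 centimeters.
print(distance_from_delay(1e-9))  # about 0.3 m
```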

Such accurate time is also used to timestamp financial transactions so that we know exactly when trades are happening, which can mean the difference between making a fortune and going broke. Accurate time is also necessary for synchronizing communications signals so that, for instance, your call isn’t lost as you travel between cellphone towers.

And as new, even more accurate clocks are invented, we’re sure to find uses for them. In the meantime, you’ll have to settle for knowing where you are anywhere on Earth at any given time while talking on your cellphone on your way to an appointment. Even if you arrive a few millionths of a second late, we won’t give you a hard time about it.

This post originally appeared on Taking Measure, the official blog of the National Institute of Standards and Technology (NIST) on June 23, 2020.


About the Author

Mark is a writer in the NIST Public Affairs Office and the editor of Taking Measure (this blog). When he’s not struggling to cleverly turn a phrase, he enjoys playing classical guitar and weightlifting.

--


National Institute of Standards and Technology

NIST promotes U.S. innovation by advancing measurement science, standards and technology in ways that enhance economic security and improve our quality of life.