Researchers address material challenges to make commercial fusion power a reality

Imagine if we could take the energy of the sun, put it in a container, and use it to provide green, sustainable power for the world. Creating commercial fusion power plants would essentially make this idea a reality. However, there are several scientific challenges to overcome before we can successfully harness fusion power in this way.

Researchers from the U.S. Department of Energy (DOE) Ames National Laboratory and Iowa State University are leading efforts to overcome material challenges that could make commercial fusion power a reality. The research teams are part of a DOE Advanced Research Projects Agency-Energy (ARPA-E) program called Creating Hardened And Durable fusion first Wall Incorporating Centralized Knowledge (CHADWICK). They will investigate materials for the first wall of a fusion reactor. The first wall is the structure that surrounds the fusion reaction, so it bears the brunt of the extreme environment in the fusion reactor core.

ARPA-E recently selected 13 projects under the CHADWICK program. Of those 13, Ames Lab leads one of the projects and is collaborating alongside Iowa State on another project, which is led by Pacific Northwest National Laboratory (PNNL).

According to Nicolas Argibay, a scientist at Ames Lab and lead of one project, one of the key challenges in harnessing fusion-based power is containing the plasma core that creates the energy. The plasma is like a miniature sun that needs to be contained by materials that can withstand a combination of extreme temperature, irradiation, and magnetic fields while efficiently extracting heat for conversion to electricity.

Argibay explained that in the reactor core, the plasma is contained by a strong magnetic field, and the first wall would surround this environment. The first wall has two layers of material, one that is closest to the strong magnetic and plasma environments, and one that will help move the energy along to other parts of the system.

The first layer material needs to be structurally sound, resisting cracking and erosion over time. Argibay also said that it cannot stay radioactive for very long, so that the reactor can be turned on and off for maintenance without endangering anyone working on it. The project he is leading is focused on the first layer material.

“I think one of the things we [at Ames Lab] bring is a unique capability for materials design, but also, very importantly, for processing them. It is hard to make and manage these materials,” said Argibay. “On the project I’m leading, we’re using tungsten as a major constituent, and with the exception of some forms of carbon, like diamond, that’s the highest melting temperature element on the periodic table.”

Specialized equipment is necessary to process and test refractory materials, which have extremely high melting temperatures. The first piece of equipment in Argibay's lab is a commercial, modular, customizable, open-architecture platform for making refractory materials and for exploring advanced and smart manufacturing methods that can make the process more efficient and reliable.

“Basically, we can make castings and powders of alloys up to and including pure tungsten, which is the highest melting temperature element other than diamond,” said Argibay.

By spring of 2025, Argibay said that they will have two additional systems in place for creating these refractory materials at both lab-scale and pilot-scale quantities. He explained it is easier to make small quantities (lab-scale) than larger quantities (pilot-scale), but the larger quantities are important for collecting meaningful and useful data that can translate to a real-world application.

Argibay also has capabilities for measuring the mechanical properties of refractory materials at relevant temperatures. Systems capable of making measurements well above 1,000°C (1,832°F) are rare. Ames Lab now has one of the only commercial testers in the country that can measure tensile properties of alloys at temperatures up to 1,500°C (2,732°F), which puts the lab in a unique position to support both process science and alloy design.
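The temperature conversions quoted above follow from the standard Celsius-to-Fahrenheit formula, sketched here as a quick check:

```python
# Standard Celsius-to-Fahrenheit conversion, used to verify the
# figures quoted in the article (1,000°C and 1,500°C).
def c_to_f(celsius: float) -> float:
    return celsius * 9 / 5 + 32

print(c_to_f(1000))  # 1832.0
print(c_to_f(1500))  # 2732.0
```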

Jordan Tiarks, another scientist at Ames Lab who is working on the project led by PNNL, is focused on a different aspect of this reactor research. His team is relying on Ames Lab's 35 years of experience leading the field in gas atomization, powder metallurgy, and technology transfer to industry to develop the first wall's structural material.

“The first wall structural material is actually the part that holds it all together,” said Tiarks. “You need to have more complexity and more structural strength. You might have things like cooling channels that need to be integrated in the structural wall so that we can extract all of that heat, and don’t just melt the first wall material.”

Tiarks’s team hopes to utilize over a decade of research focused on developing a unique way of creating oxide dispersion strengthened (ODS) steel for next generation nuclear fission. ODS steel contains very small ceramic particles (nanoparticles) that are dispersed throughout the steel. These particles improve the metal’s mechanical properties and ability to withstand high irradiation.

“What this project does is it takes all of our lessons learned on steels, and we’re going to apply them to a brand-new medium, a vanadium-based alloy that is well suited for nuclear fusion,” said Tiarks.

The major challenge Tiarks’s team faces is how vanadium behaves differently from steel. Vanadium has a much higher melting point, and it is more reactive than steel, so it cannot be contained with ceramic. Instead, his team must use a slightly different process for creating vanadium-based powders.

“We use high pressure gas to break up the molten material into tiny droplets which rapidly cool to create the powders we’re working with,” explained Tiarks. “And [in this case] we can’t use any sort of ceramic to be able to deliver the melt. So what we have to do is called ‘free fall gas atomization.’ It is essentially a big opening in a gas die where a liquid stream pours through and we use supersonic gas jets to attack that liquid stream.”

There are some challenges with the method Tiarks described. First, he said, it is less efficient than other methods that rely on ceramics. Second, because of vanadium's high melting point, it is harder to add extra heat during the pouring process, which would otherwise provide more time to break up the liquid into droplets. Finally, vanadium tends to be reactive.

“Powders are reactive. If you aerosolize them, they will explode. However, a fair number of metals will form a thin oxide shell on the outside layer that can help ‘passivate’ them from further reactions,” Tiarks explained. “It’s kind of like an M&M. It’s the candy coating on the outside that protects the rest of the powder particle from further oxidizing.

“A lot of the research we’ve done in the Ames lab is actually figuring out how we passivate these powders so you can handle them safely, so they won’t further react, but without degrading too much of the performance of those powders by adding too much oxygen. If you oxidize them fully, all of a sudden, now we have a ceramic particle, and it’s not a metal anymore, and so we have to be very careful to control the passivation process.”

Tiarks explained that discovering a powder processing method for vanadium-based materials will make them easier to form into the complicated geometric shapes that are necessary for the second layer to function properly. Additionally, vanadium will not interfere with the magnetic fields in the reactor core.

Sid Pathak, an assistant professor at Iowa State, is leading the group that will test the material samples for the second layer. When the material powder made by the Ames Lab group is ready, it will be formed into plates at PNNL by spraying the powder onto a surface and consolidating it with friction stir processing.

“Once you make that plate, we need to test its properties, particularly its response under the extreme radiation conditions present in a fusion reactor, and make sure that we get something better than what is currently available,” said Pathak. “That’s our claim, that our materials will be superior to what is used today.”

Pathak explained that it can take 10–20 years for radiation damage to show up on materials in a nuclear reactor. It would be impossible to recreate that timeline during a 3-year research project. Instead, his team uses ion irradiation to test how materials respond in extreme environments. For this process, his team uses a particle accelerator at the University of Michigan's Michigan Ion Beam Laboratory to bombard a material with ions. The results simulate how a material is affected by radiation.

“Ion irradiation is a technique where you radiate [the material] with ions instead of neutrons. That can be done in a matter of hours,” said Pathak. “Also, the material does not become radioactive after ion irradiation, so you can handle it much more easily.”

Despite these benefits, there is one disadvantage to using ion irradiation. The damage only penetrates the material one or two micrometers deep, meaning that it can only be seen with a microscope. For reference, the average strand of human hair is about 70-100 micrometers thick. So, testing materials at these very small depths requires specialized tools that work at micro-length scales, which are available at Pathak’s lab at Iowa State University.

“The pathway to commercial nuclear fusion power has some of the greatest technical challenges of our day but also has the potential for one of the greatest payoffs—harnessing the power of the sun to produce abundant, clean energy,” said Tiarks. “It’s incredibly exciting to be able to have a tiny role in solving that greater problem.”

“I’m very excited at the prospect that we are kind of in uncharted water. So there is an opportunity for Ames to demonstrate why we’re here, why we should continue to fund and increase funding for national labs like ours, and why we are going to tackle some things that most companies and other national labs just can’t or aren’t,” said Argibay. “We hope to be part of this next generation of solving fusion energy for the grid.”

Provided by Ames National Laboratory 

Where’s my qubit? Scientists develop technique to detect atom loss

Quiet quitting isn’t just for burned-out employees. Atoms carrying information inside quantum computers, known as qubits, sometimes vanish silently from their posts. This problematic phenomenon, called atom loss, corrupts data and spoils calculations.

But Sandia National Laboratories and the University of New Mexico have for the first time demonstrated a practical way to detect these “leakage errors” for neutral atom platforms. This achievement removes a major roadblock for one branch of quantum computing, bringing scientists closer to realizing the technology’s full potential. Many experts believe quantum computers will help reveal truths about the universe that are impossible to glean with current technology.

“We can now detect the loss of an atom without disturbing its quantum state,” said Yuan-Yu Jau, Sandia atomic physicist and principal investigator of the experiment team.

In a paper recently published in the journal PRX Quantum, the team reports its circuit-based method achieved 93.4% accuracy. The detection method enables researchers to flag and correct errors.

Detection heads off a looming crisis

Atoms are squirrely little things. Scientists control them in some quantum computers by freezing them at just above absolute zero, about -460 degrees Fahrenheit. A thousandth of a degree too warm and they spring free. Even at the right temperature, they can escape through random chance.

If an atom slips away in the middle of a calculation, “The result can be completely useless. It’s like garbage,” Jau said.

A detection scheme can tell researchers whether they can trust the result and could lead to a way of correcting errors by filling in detected gaps.

Matthew Chow, who led the research, said atom loss is a manageable nuisance in small-scale machines because they have relatively few qubits, so the odds of losing one at any given moment are generally small.

But the future has been looking bleak. Useful quantum computers will need millions of qubits. With so many, the odds of losing them mid-program spike. Atoms would be silently walking off the jobsite en masse, leaving scientists with the futile task of trying to use a computer that is literally vanishing before their eyes.

“This is super important because if we don’t have a solution for this, I don’t think there’s a way to keep moving forward,” Jau said.

Researchers have found ways to detect atom loss and other kinds of leakage errors in different quantum computing platforms, like those using electrically charged atoms, called trapped ion qubits, instead of neutral ones. The New Mexico-based team is the first to non-destructively detect atom loss in neutral atom systems. By implementing simple circuit-based techniques to detect leakage errors, the team is helping avert the crisis of uncontrollable future leakage.

Just don’t look

The dilemma of detecting atom loss is that scientists cannot look at the atoms they need to preserve during computation.

“Quantum calculations are extremely fragile,” Jau said.

The operation falls apart if researchers do anything at all to observe the state of a qubit while it’s working.

Austrian physicist Erwin Schrödinger famously compared this concept to having a cat inside a box with something that will randomly kill it. According to quantum physics, Schrödinger explained, the cat can be thought of as simultaneously dead and alive until you open the box.

“It’s very easy to have a mathematical description of everything in terms of quantum computing. But to visualize entangled quantum information, it’s hard,” Jau said.

So how do you check that an atom is in the processor without observing it?

“The idea is analogous to having Schrödinger’s cat in a box, and putting that box on a scale, where the weight of the box tells you whether or not there’s a cat, but it doesn’t tell you whether the cat’s dead or alive,” Chow said.

Objective lenses on either side of the vacuum chamber are used to focus laser light into single-atom traps at Sandia National Laboratories. Credit: Craig Fritz

Surprise finding fuels breakthrough

Chow, a University of New Mexico Ph.D. student and Sandia Labs intern at the time of the research, said he never expected this breakthrough.

“This was certainly not a paper that we had planned to write,” he said.

He was debugging a small bit of quantum computing code at Sandia for his dissertation. The code diagnoses the entangling interaction—a unique quantum process that links the states of atoms—by repeatedly applying an operation and comparing the results when two atoms interact versus when only one atom is present. When the atoms interact, the repeated application of the operation makes them switch between entangled and disentangled states. In this comparison, he observed a key pattern.

Every other run, when the atoms were disentangled, the outcome for the two-atom case was markedly different from the solo-atom case.

Without trying, Chow realized, he had found a subtle signal to indicate a neighboring atom was present in a quantum computer without observing it directly. The oscillating measurement was the scale to measure whether the cat is still in the box.
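The idea can be illustrated with a deliberately simplified toy model (not the circuit actually used in the paper): assume an idealized blockade-style gate that imprints a phase of pi on a helper qubit whenever the neighboring atom is present, regardless of that atom's own state. Converting the phase into a population then reveals presence without ever measuring the data atom.

```python
import numpy as np

# Toy model (a sketch, not the paper's actual protocol): an ancilla qubit
# starts in |0>, a Hadamard puts it in superposition, and an idealized gate
# imprints a pi phase on the ancilla only if a neighboring atom is present.
# A second Hadamard converts that phase into a deterministic measurement
# outcome, without ever reading out the data atom's own quantum state.

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
Z = np.array([[1, 0], [0, -1]])               # pi phase on |1>

def detect_presence(atom_present: bool) -> int:
    """Return the ancilla measurement: 1 if the atom is present, 0 if lost."""
    ancilla = np.array([1.0, 0.0])  # |0>
    ancilla = H @ ancilla           # |+>
    if atom_present:
        ancilla = Z @ ancilla       # presence-conditioned pi phase
    ancilla = H @ ancilla           # phase -> population
    return int(np.argmax(np.abs(ancilla) ** 2))

print(detect_presence(True))   # 1: atom still there
print(detect_presence(False))  # 0: atom lost
```

In this idealized version the outcome is deterministic, which is what makes it a "scale for the box": it weighs presence without opening the box. The real experiment's 93.4% accuracy reflects imperfect gates and state preparation.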

“This was the thing that got me really excited—that made me show it to Vikas.”

Vikas Buchemmavari, another Ph.D. student at UNM and a frequent collaborator, knew more quantum theory than Chow. He works in a research group led by the director of UNM’s Center for Quantum Information and Control, Ivan Deutsch.

“I was simultaneously very impressed by the gate quality and very excited about what the idea meant: we could detect if the atom was there or not without damaging the information in it,” Buchemmavari said.

Verifying the technique

He went to work formalizing the idea into a set of code tailored to detect atom loss. It would use a second atom, not involved in any calculation, to indirectly detect whether an atom of interest is missing.

“Quantum systems are very error-prone. To build useful quantum computers, we need quantum error correction techniques that correct the errors and make the calculations reliable. Atom loss—and leakage errors—are some of the worst kinds of errors to deal with,” he said.

The two then developed ways to test their idea.

“You need to test not only your ability to detect an atom, but to detect an atom that starts in many different states,” Chow said. “And then the second part is to check that it doesn’t disturb that state of the first atom.”

Chow’s Sandia team jumped on board, too, helping test the new routine and verify its results by comparing them to a method of directly observing the atoms.

“We had the capability at Sandia to verify it was working because we have this measurement where we can say the atom is in the one state or the zero state or it’s gone. A lot of people don’t have that third option,” Sandia’s Bethany Little said.

A guide for correcting atom loss

Looking ahead, Buchemmavari said, “We hope this work serves as a guide for other groups implementing these techniques to overcome these errors in their systems. We also hope this spurs deeper research into the advantages and trade-offs of these techniques in real systems.”

Chow, who has since earned his doctoral degree, said he is proud of the discovery because it shows the problem of atom loss is solvable, even if future quantum computers do not use his exact method.

“If you’re careful to keep your eyes open, you might spot something really useful.”

More information: Matthew N. H. Chow et al, Circuit-Based Leakage-to-Erasure Conversion in a Neutral-Atom Quantum Processor, PRX Quantum (2024). DOI: 10.1103/PRXQuantum.5.040343

Journal information: PRX Quantum 

Provided by Sandia National Laboratories 

Chinese detector to hunt elusive neutrinos deep underground

Underneath a granite hill in southern China, a massive detector that will sniff out the mysterious ghost particles lurking around us is nearly complete.

The Jiangmen Underground Neutrino Observatory will soon begin the difficult task of spotting neutrinos: tiny cosmic particles with a mind-bogglingly small mass.

The detector is one of three being built across the globe to study these elusive ghost particles in the finest detail yet. The other two, based in the United States and Japan, are still under construction.

Spying neutrinos is no small feat in the quest to understand how the universe came to be. The Chinese effort, set to go online next year, will push the technology to new limits, said Andre de Gouvea, a theoretical physicist at Northwestern University who is not involved with the project.

“If they can pull that off,” he said, “it would be amazing.”

What are neutrinos?

Neutrinos date back to the Big Bang, and trillions zoom through our bodies every second. They spew from stars like the sun and stream out when atomic bits collide in a particle accelerator.

Scientists have known about the existence of neutrinos for almost a century, but they’re still in the early stages of figuring out what the particles really are.

An aerial view of the Jiangmen Underground Neutrino Observatory where a cosmic detector is located 2297 feet (700 meters) underground in Kaiping, southern China’s Guangdong province on Friday, Oct. 11, 2024. Credit: AP Photo/Ng Han Guan

“It’s the least understood particle in our world,” said Cao Jun, who helps manage the detector known as JUNO. “That’s why we need to study it.”

There’s no way to spot the tiny neutrinos whizzing around on their own. Instead, scientists measure what happens when they collide with other bits of matter, producing flashes of light or charged particles.

Neutrinos bump into other particles only very rarely, so to up their chances of catching a collision, physicists have to think big.

“The solution for how we measure these neutrinos is to build very, very big detectors,” de Gouvea said.

A big detector to measure tiny particles

The $300 million detector in Kaiping, China, took over nine years to build. Its location 2,297 feet (700 meters) underground protects it from pesky cosmic rays and radiation that could throw off its neutrino-sniffing abilities.

Visitors take a train ride to visit the cosmic detector located 2297 feet (700 meters) underground at the Jiangmen Underground Neutrino Observatory in Kaiping, southern China’s Guangdong province on Friday, Oct. 11, 2024. Credit: AP Photo/Ng Han Guan

On Wednesday, workers began the final step in construction. Eventually, they’ll fill the orb-shaped detector with a liquid designed to emit light when neutrinos pass through and submerge the whole thing in purified water.

It’ll study antineutrinos—the antimatter counterparts of neutrinos, whose behavior helps scientists understand neutrinos themselves—produced by collisions inside two nuclear power plants located over 31 miles (50 kilometers) away. When the antineutrinos come into contact with particles inside the detector, they’ll produce a flash of light.

The detector is specially designed to answer a key question about a longstanding mystery. Neutrinos switch between three flavors as they zip through space, and scientists want to rank them from lightest to heaviest.

Sensing these subtle shifts in the already evasive particles will be a challenge, said Kate Scholberg, a physicist at Duke University who is not involved with the project.

Wang Yifang, chief scientist and project manager at the Jiangmen Underground Neutrino Observatory briefs visitors on the cosmic detector located 2297 feet (700 meters) underground in Kaiping, southern China’s Guangdong province on Friday, Oct. 11, 2024. Credit: AP Photo/Ng Han Guan

“It’s actually a very daring thing to even go after it,” she said.

China’s detector is set to operate during the second half of next year. After that, it’ll take some time to collect and analyze the data—so scientists will have to keep waiting to fully unearth the secret lives of neutrinos.

Two similar neutrino detectors—Japan’s Hyper-Kamiokande and the Deep Underground Neutrino Experiment based in the United States—are under construction. They’re set to go online around 2027 and 2031 and will cross-check the Chinese detector’s results using different approaches.

“In the end, we have a better understanding of the nature of physics,” said Wang Yifang, chief scientist and project manager of the Chinese effort.

Understanding how the universe formed

Though neutrinos barely interact with other particles, they’ve been around since the dawn of time. Studying these Big Bang relics can clue scientists into how the universe evolved and expanded billions of years ago.

One question researchers hope neutrinos can help answer is why the universe is overwhelmingly made up of matter with its opposing counterpart—called antimatter—largely snuffed out.

Scientists don’t know how things got to be so out of balance, but they think neutrinos could have helped write the earliest rules of matter.

The proof, scientists say, may lie in the particles. They’ll have to catch them to find out.

© 2024 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.

Nonlinear ‘skin effect’ unveiled in antiferromagnetic materials

A team of researchers has identified a unique phenomenon, a “skin effect,” in the nonlinear optical responses of antiferromagnetic materials. The research, published in Physical Review Letters, provides new insights into the properties of these materials and their potential applications in advanced technologies.

Nonlinear optical effects occur when light interacts with materials that lack inversion symmetry. It was previously thought that these effects were uniformly distributed throughout the material. However, the research team discovered that in antiferromagnets, the nonlinear optical response can be concentrated on the surfaces, similar to the “skin effect” seen in conductors, where currents flow primarily on the surface.

In this study, the team developed a self-designed computational method to investigate the nonlinear optical responses in antiferromagnets, using the bulk photovoltaic effect as a representative example. Their results showed that, while the global inversion symmetry was broken, the local inversion symmetry deep inside the antiferromagnet was almost untouched.

As a result, the nonlinear optical response was primarily confined to the top and bottom surfaces of the antiferromagnet, with negligible contribution from its interior.

To demonstrate the findings, the researchers conducted first-principles calculations on the two-dimensional antiferromagnet CrI3, confirming the surface-dominant behavior of the bulk photovoltaic effect. Additionally, they calculated the second-harmonic generation effect, finding consistent results with their theoretical models.

The discovery of the skin effect in nonlinear optical responses opens exciting opportunities for both fundamental science and optoelectronic technology. “It offers a new perspective on how nonlinear optical effects can be utilized in high-performance device applications,” said Prof. Shao Dingfu from the Hefei Institutes of Physical Science of the Chinese Academy of Sciences.

More information: Hang Zhou et al, Skin Effect of Nonlinear Optical Responses in Antiferromagnets, Physical Review Letters (2024). DOI: 10.1103/PhysRevLett.133.236903

Journal information: Physical Review Letters 

Provided by Chinese Academy of Sciences 

Antineutrino detection gets a boost with novel plastic scintillator

How do you find and measure nuclear particles, like antineutrinos, that travel near the speed of light?

Antineutrinos are the antimatter partners of neutrinos, one of nature’s most elusive and least understood subatomic particles. They are commonly observed near nuclear reactors, which emit copious amounts of antineutrinos, but they are also found throughout the universe and are produced abundantly by Earth’s natural radioactivity, with most of those originating from the decay of potassium-40, thorium-232 and uranium-238 isotopes.

When an antineutrino collides with a proton, a positron and a neutron are produced—a process known as inverse beta decay (IBD). This event causes scintillating materials to light up, making it possible to detect these antineutrinos; and if they can be detected, they can be used to study the properties of a reactor’s core or Earth’s interior.
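The IBD reaction only proceeds if the incoming antineutrino carries enough energy to make up the mass difference between the initial and final particles. That kinematic threshold can be computed from standard particle masses:

```python
# Kinematic threshold for inverse beta decay (antineutrino + p -> e+ + n),
# computed from standard particle rest masses (values in MeV/c^2).
m_p = 938.272   # proton
m_n = 939.565   # neutron
m_e = 0.511     # positron (same mass as the electron)

# Minimum antineutrino energy for the reaction on a proton at rest:
E_threshold = ((m_n + m_e) ** 2 - m_p ** 2) / (2 * m_p)
print(round(E_threshold, 3))  # 1.806 MeV
```

This roughly 1.8 MeV floor is one reason reactors are such convenient sources: they emit large fluxes of antineutrinos above the threshold.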

Researchers at Lawrence Livermore National Laboratory (LLNL), in partnership with Eljen Technology, are working on one possible detection solution—a plastic, lithium-6 doped scintillator for detecting reactor antineutrinos that represents over a decade of materials science research. Their research appears in the journal Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment.

Magic in plastic

In the early 2010s, LLNL materials scientist Natalia Zaitseva and her team were the first to develop a plastic scintillator capable of pulse-shape discrimination (PSD), i.e., efficiently distinguishing neutrons from gamma rays (important for detecting IBD events). Building upon this work, the new lithium-6-doped plastic scintillator formulation is also PSD-capable.

“Lithium-6 is particularly advantageous, because in addition to having a significant thermal-neutron-capture cross section, it offers a localized capture location, further enhancing the detector’s ability to effectively reject unwanted background noise,” said LLNL scientist Viacheslav “Slava” Li. This enhanced detection is made possible through the IBD process.

“While integrating lithium-6 into liquid scintillators has proven to be a challenging yet rewarding endeavor—successfully demonstrated by PROSPECT, another reactor-antineutrino experiment with fundamental LLNL contributions—achieving this in a solid, compact and easily transportable plastic scintillator has not been accomplished before, especially not at a scale suitable for effective antineutrino detection,” said Cristian Roca, LLNL scientist and corresponding author of the paper.

Compared to liquid scintillators, which have been the standard technology for reactor–antineutrino detection for decades, plastic scintillators offer superior safety and mobility with fewer of the regulatory and practicality constraints that are typically placed upon liquid scintillators and their operating environment.

Optimizing detector performance

To ready the scintillator (commercially known as EJ-299-50) for the market, researchers in LLNL’s Rare Event Detection group conducted a series of characterization measurements of the material’s performance in a large-scale detector system.

For almost six months, researchers studied the aging process of these scintillators to ensure the long-term stability of the plastic. After demonstrating the reliable optical performance and neutron identification capabilities of EJ-299-50 during this time, researchers installed 36 of the plastic scintillator “bars” in a 6 × 6 grid configuration on a detection system called the Reactor Operations Antineutrino Detection Surface Testbed Rover (ROADSTR). A follow-on study is currently underway to evaluate ROADSTR’s performance with these bars.

Alongside their scintillator work, scientists in the Rare Event Detection group are collaborating with researchers at the University of Hawai’i to improve the directional sensitivity of detectors; i.e., the ability to determine the direction of the incoming antineutrino in relation to the detector. This information can be extracted by correlating the events that take place during the IBD reaction and is especially useful in constraining the illicit production of weapons material.

The team’s research, published in Physical Review Applied and supported by the Consortium for Monitoring, Technology and Verification, explores different detector designs, finding that certain detector geometries outperform others in terms of directional resolution.

With applications in reactor safeguards and monitoring, as well as homeland security and nuclear non-proliferation, these combined research efforts are opening the door to a new era of antineutrino detection.

More information: C. Roca et al, Performance of large-scale 6Li-doped pulse-shape discriminating plastic scintillators, Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment (2024). DOI: 10.1016/j.nima.2024.169916

Mark J. Duvall et al, Directional response of several geometries for reactor-neutrino detectors, Physical Review Applied (2024). DOI: 10.1103/PhysRevApplied.22.054030

Journal information: Physical Review Applied 

Provided by Lawrence Livermore National Laboratory 

Thin-film tech makes nuclear clocks 1,000 times less radioactive and more affordable

In the quest for ultra-precise timekeeping, scientists have turned to nuclear clocks. Unlike optical atomic clocks—which rely on electronic transitions—nuclear clocks utilize the energy transitions in the atom’s nucleus, which are less affected by outside forces, meaning this type of clock could potentially keep time more accurately than any previously existing technology.

However, building such a clock has posed major challenges—thorium-229, one of the isotopes used in nuclear clocks, is rare, radioactive, and extremely costly to acquire in the substantial quantities required for this purpose.

Reported in a study published in Nature, a team of researchers led by JILA and NIST Fellow and University of Colorado Boulder physics professor Jun Ye, in collaboration with Professor Eric Hudson’s team at UCLA’s Department of Physics and Astronomy, has found a way to make nuclear clocks a thousand times less radioactive and more cost-effective, thanks to a method for creating thin films of thorium tetrafluoride (ThF4).

The successful use of thin films marks a potential turning point in the development of nuclear clocks. Thin-film technology is compatible with the fabrication methods used for semiconductors and photonic integrated circuits, suggesting that future nuclear clocks could be more accessible and scalable.

“A key advantage of nuclear clocks is their portability, and to fully unleash such an attractive potential, we need to make the systems more compact, less expensive, and more radiation-friendly to users,” said Ye.

The costs of nuclear clockmaking

JILA has been at the forefront of atomic and optical clock research for decades, with Ye’s laboratory making pioneering contributions to advancing the concept, design, and implementation of optical lattice clocks, which set new standards in precision timekeeping.

Physicists have been trying to observe the energy transition of thorium-229 for nearly 50 years. In September 2024, researchers in Ye’s laboratory reported the first high-resolution spectrum of the nuclear transition and determined the absolute frequency based on the JILA Sr optical lattice clock.

Their result was published as a cover article in Nature.

To build their nuclear clock setup, the team worked with radioactive thorium-229 crystals, collaborating with researchers at the University of Vienna.

“The growth of that crystal is an art in itself, and our collaborators in Vienna spent many years of effort to grow a nice single crystal for this measurement,” explains Chuankun Zhang, a graduate student at JILA and first author of both Nature studies.

Previous approaches using thorium-doped crystals required more radioactive material. As thorium-229 is often sourced from uranium via nuclear decay, this leads to additional radiation safety and cost considerations.

“Thorium-229 by weight is more expensive than some of the custom proteins I’ve worked with in the past,” adds JILA postdoctoral researcher Jake Higgins, also involved in this project, “so we had to make this work with as little material as possible.” The researchers collaborated closely with CU Boulder’s Environmental Health & Safety department to safely build and study their nuclear clock.

As the team worked to observe the nuclear transitions in thorium-doped crystals, they simultaneously pursued methods to make the clock safer and more cost-effective by developing thin film coatings to reduce the amount of radioactive thorium needed.

Vaporizing thorium

To produce the thin films, the researchers used a process called physical vapor deposition (PVD), which involved heating thorium fluoride in a chamber until it vaporized. The vaporized atoms then condensed on a substrate, forming a thin, even layer of thorium fluoride about 100 nanometers thick.

The researchers selected sapphire and magnesium fluoride as substrates because of their transparency to the ultraviolet light used to excite the nuclear transition.

“If we have a substrate very close by, the vaporized thorium fluoride molecules touch the substrate and stick to it, so you get a nice, even thin film,” Zhang says.

This method used just micrograms of thorium-229, making the product a thousand times less radioactive while producing a dense layer of active thorium nuclei. Working with the JILA Keck Metrology laboratory and JILA instrument maker Kim Hagen, the researchers reliably reproduced films that could be tested for potential nuclear transitions using a laser.

Finding energy transitions in thin films

However, the team faced a new challenge. Unlike in a crystal, where every thorium atom sits in an ordered environment, the thin films introduced variations in the thorium environments, shifting the energy transitions and making them less consistent.

JILA graduate student Jack Doyle, who was also involved in this study, elaborates, “Wolfgang Pauli was rumored to have said that ‘God invented the bulk and the surface is of the devil,’ but he might as well have said this because the number of factors that are hard to learn about for a particular surface is immense.”

After preparing the films, JILA researchers sent them to Professor Eric Hudson at UCLA, who used a high-power laser with a much greater spectral width to test the nuclear transitions.

Unlike a frequency comb, which spreads regularly spaced spectral lines over a larger spectral range, this broad-spectrum laser concentrates all of its optical power in one spectral location. This allowed the UCLA team to excite the thorium nuclei effectively, even though the observed linewidth was broader than in the previous study.

When the laser’s energy precisely matched the energy required for the transition, the nuclei emitted photons as they relaxed back to their original state. By detecting these emitted photons, the researchers could confirm successful nuclear excitations, verifying the thin film’s potential to serve as a frequency reference for nuclear clocks.
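The resonance condition described here, where the photon energy must precisely match the nuclear transition energy, fixes the required laser wavelength via E = hc/λ. As a rough illustration, assuming the roughly 8.4 eV thorium-229 isomer energy reported in the literature (a value not stated in this article), the wavelength comes out in the vacuum ultraviolet:

```python
# Resonance condition for exciting the thorium-229 isomer: the laser photon
# energy must match the nuclear transition energy, E = h*c / lambda.
# The ~8.4 eV transition energy used below is the approximate literature
# value, assumed here for illustration only.

H_C_EV_NM = 1239.84  # h*c in eV·nm

def photon_wavelength_nm(energy_ev: float) -> float:
    """Wavelength (nm) of a photon with the given energy (eV)."""
    return H_C_EV_NM / energy_ev

transition_ev = 8.4  # approximate Th-229 isomer energy (assumption)
wavelength = photon_wavelength_nm(transition_ev)
print(f"{wavelength:.1f} nm")  # vacuum ultraviolet, ~148 nm
```

This is consistent with the article's note that the substrates were chosen for their transparency to the ultraviolet light used to excite the transition.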

“We made the thin film, we characterized it, and it looked pretty good,” explains JILA graduate student Tian Ooi, who was also involved in this research. “It was cool to see that the nuclear decay signal was actually there.”

A new chapter of precision timekeeping

Based on their findings, the researchers are excited about the improvements in precision timekeeping to be gained by using thin films in nuclear clocks.

“The general advantage of using clocks in a solid state, as opposed to in a trapped-ion setting, is that the number of atoms is much, much larger,” Higgins elaborates. “There are orders and orders of magnitude more atoms than one could feasibly have in an ion trap, which helps with your clock stability.”

These thin films could additionally allow nuclear timekeeping to move beyond laboratory settings by making them compact and portable.

“Imagine something you can wear on your wrist,” Ooi says. “You can imagine being able to miniaturize everything to that level in the far, far future.”

While this level of portability is still a distant goal, it could revolutionize sectors that rely on precise timekeeping, from telecommunications to navigation.

“If we are lucky, it might even tell us about new physics,” Doyle adds.

More information: Eric Hudson et al, 229ThF4 thin films for solid-state nuclear clocks, Nature (2024). DOI: 10.1038/s41586-024-08256-5

Journal information: Nature 

Provided by JILA 

Physicists magnetize a material with light: Terahertz technique could improve memory chip design

MIT physicists have created a new and long-lasting magnetic state in a material, using only light.

In a study that appears in Nature, the researchers report using a terahertz laser—a light source that oscillates more than a trillion times per second—to directly stimulate atoms in an antiferromagnetic material. The laser’s oscillations are tuned to the natural vibrations among the material’s atoms, in a way that shifts the balance of atomic spins toward a new magnetic state.

The results provide a new way to control and switch antiferromagnetic materials, which are of interest for their potential to advance information processing and memory chip technology.

In common magnets, known as ferromagnets, the spins of atoms point in the same direction, so that the whole can be easily influenced and pulled in the direction of any external magnetic field.

In contrast, antiferromagnets are composed of atoms with alternating spins, each pointing in the opposite direction from its neighbor. This up, down, up, down order essentially cancels the spins out, giving antiferromagnets a net zero magnetization that is impervious to any magnetic pull.

If a memory chip could be made from antiferromagnetic material, data could be “written” into microscopic regions of the material, called domains. A certain configuration of spin orientations (for example, up-down) in a given domain would represent the classical bit “0,” and a different configuration (down-up) would mean “1.” Data written on such a chip would be robust against outside magnetic influence.
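The encoding described above can be sketched in a toy model: each domain stores one bit in the phase of its alternating spin pattern, and the pattern's net magnetization cancels regardless of the bit, which is what makes the stored data robust. The domain size and the ±1 spin representation below are illustrative assumptions, not part of the study.

```python
# Toy illustration of the bit encoding described above: each domain of an
# antiferromagnet holds an alternating spin pattern; the phase of the
# alternation (up-down vs. down-up) encodes one classical bit.
# Spins are modeled as +1 (up) and -1 (down); the domain size is arbitrary.

def write_domain(bit: int, n_spins: int = 8) -> list[int]:
    """Return the alternating spin pattern encoding a single bit."""
    start = +1 if bit == 0 else -1          # up-down -> 0, down-up -> 1
    return [start * (-1) ** i for i in range(n_spins)]

def read_domain(spins: list[int]) -> int:
    """Recover the bit from the phase of the alternation."""
    return 0 if spins[0] == +1 else 1

domain = write_domain(1)
assert sum(domain) == 0          # net magnetization cancels out
assert read_domain(domain) == 1  # the bit survives with no net moment
```

The zero net magnetization in the assertion is exactly the property that makes such data impervious to stray magnetic fields, as the article notes.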

For this and other reasons, scientists believe antiferromagnetic materials could be a more robust alternative to existing magnetic-based storage technologies. A major hurdle, however, has been in how to control antiferromagnets in a way that reliably switches the material from one magnetic state to another.

“Antiferromagnetic materials are robust and not influenced by unwanted stray magnetic fields,” says Nuh Gedik, the Donner Professor of Physics at MIT. “However, this robustness is a double-edged sword; their insensitivity to weak magnetic fields makes these materials difficult to control.”

Using carefully tuned terahertz light, the MIT team was able to controllably switch an antiferromagnet to a new magnetic state. Antiferromagnets could be incorporated into future memory chips that store and process more data while using less energy and taking up a fraction of the space of existing devices, owing to the stability of magnetic domains.

“Generally, such antiferromagnetic materials are not easy to control,” Gedik says. “Now we have some knobs to be able to tune and tweak them.”

Gedik is the senior author of the new study, which also includes MIT co-authors Batyr Ilyas, Tianchuang Luo, Alexander von Hoegen, Zhuquan Zhang and Keith Nelson, along with collaborators at the Max Planck Institute for the Structure and Dynamics of Matter in Germany, University of the Basque Country in Spain, Seoul National University, and the Flatiron Institute in New York.

Off balance

Gedik’s group at MIT develops techniques to manipulate quantum materials in which interactions among atoms can give rise to exotic phenomena.

“In general, we excite materials with light to learn more about what holds them together fundamentally,” Gedik says. “For instance, why is this material an antiferromagnet, and is there a way to perturb microscopic interactions such that it turns into a ferromagnet?”

“Generally, such antiferromagnetic materials are not easy to control,” Nuh Gedik says, pictured in between Tianchuang Luo, left, and Alexander von Hoegen. Additional MIT co-authors include Batyr Ilyas, Zhuquan Zhang and Keith Nelson. Credit: Adam Glanzman

In their new study, the team worked with FePS3—a material that transitions to an antiferromagnetic phase at a critical temperature of around 118 Kelvin (-247 degrees Fahrenheit).

The team suspected they might control the material’s transition by tuning into its atomic vibrations.

“In any solid, you can picture it as different atoms that are periodically arranged, and between atoms are tiny springs,” von Hoegen explains. “If you were to pull one atom, it would vibrate at a characteristic frequency which typically occurs in the terahertz range.”
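The mass-on-a-spring picture above already puts the characteristic vibration in the terahertz range. A back-of-envelope with illustrative numbers (a spring stiffness of order 10 N/m and an atomic mass of about 50 atomic mass units, neither taken from the study) shows this:

```python
# Back-of-envelope for the "atoms on springs" picture above: a harmonic
# oscillator with spring constant k and mass m vibrates at f = sqrt(k/m)/2pi.
# The stiffness (~10 N/m) and mass (~50 u) are rough illustrative
# assumptions, not parameters of FePS3.
import math

AMU = 1.66053906660e-27  # atomic mass unit, kg

def phonon_freq_hz(k_n_per_m: float, mass_amu: float) -> float:
    """Characteristic vibration frequency of a mass on a spring, in Hz."""
    return math.sqrt(k_n_per_m / (mass_amu * AMU)) / (2 * math.pi)

f = phonon_freq_hz(10.0, 50.0)
print(f"{f / 1e12:.1f} THz")  # lands in the terahertz range
```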

The way in which atoms vibrate also relates to how their spins interact with each other. The team reasoned that if they could stimulate the atoms with a terahertz source that oscillates at the same frequency as the atoms’ collective vibrations, called phonons, the effect could also nudge the atoms’ spins out of their perfectly balanced, magnetically alternating alignment.

Once knocked out of balance, atoms should have larger spins in one direction than the other, creating a preferred orientation that would shift the inherently nonmagnetized material into a new magnetic state with finite magnetization.

“The idea is that you can kill two birds with one stone: You excite the atoms’ terahertz vibrations, which also couples to the spins,” Gedik says.

Shake and write

To test this idea, the team worked with a sample of FePS3 that was synthesized by colleagues at Seoul National University. They placed the sample in a vacuum chamber and cooled it down to temperatures at and below 118 K.

They then generated a terahertz pulse by aiming a beam of near-infrared light through an organic crystal, which transformed the light into the terahertz frequencies. They then directed this terahertz light toward the sample.

“This terahertz pulse is what we use to create a change in the sample,” Luo says. “It’s like ‘writing’ a new state into the sample.”

To confirm that the pulse triggered a change in the material’s magnetism, the team also aimed two near-infrared lasers at the sample, each with an opposite circular polarization. If the terahertz pulse had no effect, the researchers should see no difference in the intensity of the transmitted infrared lasers.

“Just seeing a difference tells us the material is no longer the original antiferromagnet, and that we are inducing a new magnetic state, by essentially using terahertz light to shake the atoms,” Ilyas says.

Over repeated experiments, the team observed that a terahertz pulse successfully switched the previously antiferromagnetic material to a new magnetic state—a transition that persisted for a surprisingly long time, over several milliseconds, even after the laser was turned off.

“People have seen these light-induced phase transitions before in other systems, but typically they live for very short times on the order of a picosecond, which is a trillionth of a second,” Gedik says.

A window of a few milliseconds could give scientists a decent amount of time to probe the properties of the temporary new state before it settles back into its inherent antiferromagnetism.

Then, they might be able to identify new knobs to tweak antiferromagnets and optimize their use in next-generation memory storage technologies.

More information: Nuh Gedik et al, Terahertz field-induced metastable magnetization near criticality in FePS3, Nature (2024). DOI: 10.1038/s41586-024-08226-x

Journal information: Nature 

Provided by Massachusetts Institute of Technology 

Simple machine learning techniques can cut costs for quantum error mitigation while maintaining accuracy

Quantum computers have the potential to outperform classical computers in some optimization and data processing tasks. However, quantum systems are also more sensitive to noise and thus prone to errors, owing to the known physical challenges of reliably manipulating qubits, their underlying units of information.

Engineers recently devised various methods to reduce the impact of these errors, which are known as quantum error mitigation (QEM) techniques. While some of these techniques achieved promising results, executing them on real quantum computers is often too expensive or unfeasible.

Researchers at IBM Quantum recently showed that simple and more accessible machine learning (ML) techniques could be used for QEM. Their paper, published in Nature Machine Intelligence, demonstrates that these techniques could achieve accuracies comparable to those of other QEM techniques, at a far lower cost.

“We started to think about how to depart from conventional physics-based QEM methods and whether ML techniques can help to reduce the cost of QEM so that more complex and larger-scale experiments could come within reach,” Haoran Liao, co-first author of the paper, told Phys.org.

“However, there seemed to be a fundamental paradox: How can classical ML learn what noise is doing in a quantum calculation running on a quantum computer that is doing something beyond classical computers? After all, quantum computers are of interest for their ability to run problems beyond the power of classical computers.”

As part of their study, the researchers carried out a series of tests, where they used state-of-the-art quantum computers and up to 100 qubits to complete different tasks. They focused on tasks that are impossible to complete via brute-force calculations performed on classical computers, but that could be tackled using more sophisticated computational methods running on powerful classical computers.

“This interesting ‘paradox’ is another motivation for us to think about whether careful designs can tailor ML models to find the complicated relationships between noisy and ideal outputs of a quantum computer to help quantum computations,” said Liao.

The primary objective of the recent study by Liao and his colleagues was to accelerate QEM using widely used ML techniques, demonstrating their potential in real-world experiments. The team started by experimenting with a complex graph neural network (GNN), which they used to encode the entire structure of a quantum circuit and its properties, yet they found that this model did not perform very well.

“It surprised us to find that a simpler model, random forest, worked very well across different types of circuits and noise models,” explained Liao.

“In our exploration of the ML techniques for QEM, we tried to shape and vary the noise to see how different techniques, ML or conventional, perform in different scenarios, so we can better understand the capability of ML models in ‘learning’ the noise. We are not passive bystanders, but active agents.

“We not only tried to benchmark but also demystify the ‘black box’ nature of the ML models in the context of learning relationships between noisy and ideal outputs of a more powerful quantum computer.”
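The core idea, learning the map from noisy to ideal expectation values, can be sketched with a random forest on synthetic data. This is a minimal illustration, not the authors' pipeline: the noise model (a depth-dependent damping plus jitter), the features, and the hyperparameters are all assumptions chosen for clarity.

```python
# Minimal sketch of ML-based error mitigation (not the authors' exact
# pipeline): learn the map from noisy expectation values to ideal ones.
# The "noise" here is a synthetic depth-dependent damping plus jitter,
# chosen purely for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Ideal expectation values in [-1, 1] and a crude noise channel that
# damps them by a depth-dependent factor and adds shot-noise-like jitter.
n_train, n_test = 2000, 200
depth = rng.integers(1, 20, size=n_train + n_test)   # circuit depth
ideal = rng.uniform(-1, 1, size=n_train + n_test)
damping = 0.97 ** depth                              # decay with depth
noisy = ideal * damping + rng.normal(0, 0.02, size=ideal.shape)

# Features: the noisy value plus the circuit depth (a cheap classical
# descriptor standing in for richer circuit features).
X = np.column_stack([noisy, depth])
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[:n_train], ideal[:n_train])

mitigated = model.predict(X[n_train:])
raw_err = np.mean(np.abs(noisy[n_train:] - ideal[n_train:]))
ml_err = np.mean(np.abs(mitigated - ideal[n_train:]))
print(f"unmitigated error {raw_err:.3f} -> mitigated {ml_err:.3f}")
```

The appeal of the random forest here mirrors the article's finding: it needs no physical model of the noise, only paired noisy/reference data, and it is cheap to train and evaluate.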

Study demonstrates the potential of machine learning for quantum error mitigation
Top three panels: Average expectation values from 100-qubit Trotterized 1D TFIM circuits run on the ibm_brisbane QPU. Top panel corresponds to a Clifford circuit, whose ideal, noise-free expectation values are represented by the green dots. The random-forest-mimicking-ZNE (RF-ZNE) curve corresponds to training the random forest (RF) model on zero-noise-extrapolation (ZNE) and readout-error-mitigated hardware data. This approach enables efficient, low-overhead error mitigation in scenarios where ideal outcomes from classical simulation are infeasible. Bottom panel: The error between ZNE-mitigated and RF-ZNE-mitigated expectation values. The approach achieves a 25% reduction in overall quantum resource overhead, and a 50% reduction at runtime deployment, compared to ZNE, as shown in the inset. Credit: Liao et al.

The experiments carried out by Liao and his colleagues demonstrate that ML could help to accelerate physics-based QEM. Remarkably, the model that they found to be most promising, known as random forest, is also fairly simple and could thus be easy to implement on a large scale.

“We assessed how much data is needed to train the ML models well to mitigate errors on a much larger set of testing data and clearly demonstrated a substantially lower overall overhead of the ML techniques for QEM without sacrificing accuracy,” said Liao.

The findings gathered by this team of researchers could have both theoretical and practical implications. In the future, they could help to enrich the present understanding of quantum errors, while also potentially improving the accessibility of QEM methods.

“Is the relationship between noisy and ideal outputs of a quantum computer fundamentally unlearnable? We didn’t know the answer,” said Liao.

“There are a lot of reasons why this would be impossible to determine. However, we showed this key relationship is in fact learnable by ML models in practice. We also showed that we are not passive bystanders, but we can shape the noise in the computation to improve the learnability further.”

Liao and his colleagues also successfully demonstrated that ML techniques for QEM are less costly, yet they can achieve accuracies comparable to those of alternative physics-based QEM techniques. Their experiments were the first to demonstrate the potential of machine learning algorithms for QEM at a utility scale.

“Even in the most conservative setting, ML-QEM demonstrates more than a 2-fold reduction in runtime overhead, addressing the primary bottleneck of error mitigation and a fundamentally challenging problem with a substantial leap in efficiency,” said Liao.

“Most conservatively, this translates into at least halving experiment durations—e.g., cutting an 80-hour experiment to just 40 hours—drastically reducing operational costs and doubling the regime of accessible experiments.”

After gathering these promising results, Liao and his colleagues plan to continue exploring the potential of AI algorithms for simplifying and upscaling QEM. Their work could inspire other research groups to conduct similar studies, potentially contributing to the advancement and future deployment of quantum computers.

“This is just the beginning, and we’re excited about the field of AI for quantum,” added Liao. “We would like to emphasize that, at least in the context of QEM, ML is not meant to replace physics-based methods but to facilitate them, so that they can build off each other.

“This opens the door for what’s possible with ML in quantum and is an invitation to the community that ML can perhaps accelerate and improve many other aspects of quantum computations.”

More information: Haoran Liao et al, Machine learning for practical quantum error mitigation, Nature Machine Intelligence (2024). DOI: 10.1038/s42256-024-00927-2.

Journal information: Nature Machine Intelligence 

© 2024 Science X Network

Pioneering approach expands possibilities for measuring quantum geometry in solids

Understanding and reliably measuring the geometric properties of quantum states can shed new light on the intricate underpinning of various physical phenomena. The quantum geometric tensor (QGT) is a mathematical object that provides a detailed description of how quantum states change in response to perturbations, thus offering insights about their underlying geometry.

While this mathematical object has been the focus of numerous theoretical studies, measuring it in experimental settings has proved more challenging. As a result, direct measurements of the QGT have so far been limited to artificial two-level systems.

Researchers at Massachusetts Institute of Technology, Seoul National University and other institutions recently devised a new approach to measure the QGT in crystalline solids. Their proposed method, introduced in a Nature Physics paper, relies on photoemission spectroscopy, a technique typically used to examine the electronic structure of materials.

“The work started as we were thinking about ways to probe the Berry curvature of electrons in solids,” Riccardo Comin, senior author of the paper, told Phys.org. “We originally devised an experiment based on the relationship between orbital angular momentum (probed by circular dichroic ARPES) and Berry curvature.”

The first experiment carried out by Comin and his colleagues was successful, allowing them to compile the dataset used in their recent study. This ultimately enabled them to develop their new approach for measuring the QGT in solids, which they called “reconstruction of the full QGT.”

“The full scope of our method was developed thanks to work in Prof. Yang’s group, where the approach was broadened to include the reconstruction of the real part of the quantum geometric tensor (the quantum distance) from the energy dispersion of electronic bands,” said Comin.

“From there, we were able together to develop an approach that connects band theory with experimental data from ARPES, which is the key advancement of this paper.”

A method to measure the quantum geometric tensor in solids
Credit: Kang et al.

The approach devised by Comin, Prof. Yang and their colleagues combines two independent but complementary analyses. Both entail analyzing data collected via angle-resolved photoemission spectroscopy (ARPES) to retrieve the real (i.e., quantum distance) and imaginary (i.e., Berry curvature) parts of the QGT.

“The method requires the use of spin- and polarization-resolved ARPES and relies on a minor set of approximations, which are outlined in the paper,” explained Comin.

“Notably, the method was conceived to be applicable to any generic material, regardless of its band structure details or symmetry properties. What makes our approach more powerful is that the QGT is resolved for each electron in reciprocal space.

“This is a significant step forward from existing methods which can mainly detect an integrated Berry curvature (a.k.a., the Chern number) via linear or nonlinear transport measurements.”
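The two parts being retrieved can be made concrete with a toy two-band model (an illustrative Hamiltonian, not the materials or the ARPES procedure of the study): the QGT is Q_ij = ⟨∂_i u|(1 − |u⟩⟨u|)|∂_j u⟩, whose real part is the quantum metric and whose imaginary part gives the Berry curvature as −2 Im Q_xy. A finite-difference sketch:

```python
# Numerical sketch of the quantum geometric tensor for a toy two-band model
# H(k) = d(k)·sigma (illustrative, not the materials in the study):
# Q_ij = <d_i u|(1 - |u><u|)|d_j u>, quantum metric = Re Q,
# Berry curvature = -2 Im Q_xy.
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def ground_state(kx, ky, m=0.5):
    """Lower-band eigenvector of a toy two-band Hamiltonian."""
    h = np.sin(kx) * SX + np.sin(ky) * SY + (m - np.cos(kx) - np.cos(ky)) * SZ
    _, vecs = np.linalg.eigh(h)   # eigh sorts eigenvalues ascending
    return vecs[:, 0]

def qgt(kx, ky, eps=1e-5):
    """Finite-difference QGT Q_ij at (kx, ky) for the lower band."""
    u0 = ground_state(kx, ky)
    def du(dkx, dky):
        u = ground_state(kx + dkx, ky + dky)
        # Fix the arbitrary eigenvector phase against u0 (smooth gauge).
        u = u * np.exp(-1j * np.angle(np.vdot(u0, u)))
        return (u - u0) / eps
    d = [du(eps, 0), du(0, eps)]
    proj = np.eye(2) - np.outer(u0, u0.conj())   # projector 1 - |u><u|
    return np.array([[np.vdot(d[i], proj @ d[j]) for j in range(2)]
                     for i in range(2)])

Q = qgt(0.3, 0.7)
metric = Q.real             # quantum distance part
berry = -2 * Q[0, 1].imag   # Berry curvature part
print(metric, berry)
```

The experimental advance of the paper is precisely that these two quantities, here computed from a known model wavefunction, can instead be reconstructed from spin- and polarization-resolved ARPES data.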

The recent study by Comin, Prof. Yang and their colleagues opens new possibilities for research focusing on the geometric properties of quantum states in solids. The new approach they developed could soon be used to study various crystalline systems, which could enrich the current understanding of their quantum geometric responses.

“The most important implication is that we now have a way to retrieve information about the electron wavefunction, and not just the electron energy levels (i.e., the electronic bands),” added Comin.

“This will make it possible to establish an even closer connection between experiments and theory. In our next studies, we plan to apply this method to a broad class of materials with nontrivial topology, to elucidate the detailed origin of quantum geometrical effects.”

More information: Mingu Kang et al, Measurements of the quantum geometric tensor in solids, Nature Physics (2024). DOI: 10.1038/s41567-024-02678-8.

Journal information: Nature Physics 

© 2024 Science X Network

Need to accurately measure time in space? Use a COMPASSO

Telling time in space is difficult, but it is absolutely critical for applications ranging from testing relativity to navigating down the road. Atomic clocks, such as those used on the Global Navigation Satellite System network, are accurate, but only up to a point.

Moving to even more precise navigation tools would require even more accurate clocks. There are several solutions at various stages of technical development; one of them, COMPASSO from Germany’s DLR, aims to prove quantum optical clocks in space as a potential successor.

There are several problems with existing atomic clocks: one has to do with their accuracy, and another with their size, weight, and power (SWaP) requirements. Current atomic clocks used in GNSS are relatively compact, coming in at around 0.5 kg and 125 x 100 x 40 mm, but they lack accuracy. In the terminology of high-accuracy clocks, they have a “stability” of 10⁻⁹ over 10,000 seconds. That sounds absurdly accurate, but it is not good enough for a more precise GNSS.

Alternatives, such as atomic lattice clocks, are more accurate, reaching stabilities down to 10⁻¹⁸ over 10,000 seconds. However, they can measure 0.5 x 0.5 x 0.5 m and weigh hundreds of kilograms. Given satellite space and weight constraints, they are far too large to serve as a basis for satellite timekeeping.
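A back-of-envelope using the numbers above shows why a stability of 10⁻⁹ is not good enough: a fractional frequency stability σ sustained over an interval τ accumulates a time error of roughly σ·τ, and a ranging error of c times that. (This is a deliberately crude estimate that conflates the stability figure with accumulated time error.)

```python
# Back-of-envelope using the article's numbers: a fractional frequency
# stability sigma over an interval tau gives a time error of roughly
# sigma * tau, and a ranging error of c times that. Very rough.
C = 299_792_458.0  # speed of light, m/s

def ranging_error_m(stability: float, tau_s: float) -> float:
    """Approximate ranging error from clock instability."""
    return C * stability * tau_s

# GNSS-class clock from the article: 10^-9 over 10,000 s
print(f"{ranging_error_m(1e-9, 1e4):.0f} m")   # kilometers of drift
# Lattice-clock-class stability: 10^-18 over the same interval
print(f"{ranging_error_m(1e-18, 1e4):.1e} m")  # micrometers
```

The nine-orders-of-magnitude gap between the two results is the motivation for squeezing lattice-clock-like stability into a satellite-sized package.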

Spectroscopy board for Doppler-free MTS. Credit: GPS Solutions (2023). DOI: 10.1007/s10291-023-01551-0

To find a middle ground, ESA has developed a technology development roadmap focusing on improving clock stability while keeping it small enough to fit on a satellite. One such example of a technology on the roadmap is a cesium-based clock cooled by lasers and combined with a hydrogen-based maser, a microwave laser. NASA is not missing out on the fun either, with its work on a mercury ion clock that has already been orbitally tested for a year.

The work is published in the journal GPS Solutions.

COMPASSO hopes to surpass them all. Three key technologies enable the mission: two iodine frequency references, a “frequency comb,” and a “laser communication and ranging terminal.” Ideally, the mission will be launched to the ISS, where it will sit in space for two years, constantly keeping time. The accuracy of those measurements will be compared to alternatives over that time frame.

Rendering of a passive hydrogen maser atomic clock. Credit: Universe Today

Lasers are the key to the whole system. The iodine frequency references exploit the very distinct absorption lines of molecular iodine, which serve as a frequency reference for the frequency comb, a specialized laser whose output spectrum looks like comb teeth at specific frequencies. Those teeth can be tuned to the frequency of the iodine reference, allowing any drift in the comb to be corrected.

The comb then provides a method for phase locking for a microwave oscillator, a key part of a standard atomic clock. Overall, this means that the stability of the iodine frequency reference is transferred to the frequency comb, which is then again transferred to the microwave oscillator and, therefore, the atomic clock. In COMPASSO’s case, the laser communication terminal is used to transmit frequency and timing information back to a ground station while it is active.
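The lock chain described above rests on the standard comb relation f_n = f_ceo + n·f_rep: pinning one tooth to the iodine line pins the repetition rate, so the microwave output inherits the optical reference's fractional stability. The 532 nm line and 1 GHz repetition rate below are illustrative assumptions, not COMPASSO's actual parameters.

```python
# The lock chain rests on the standard comb relation f_n = f_ceo + n*f_rep:
# every tooth n is an integer multiple of the repetition rate plus a common
# offset. Locking one tooth to the iodine line fixes f_rep, so the microwave
# output inherits the optical reference's fractional stability.
# The 532 nm line, 1 GHz repetition rate, and offset are assumptions.
C = 299_792_458.0  # m/s

f_ref = C / 532e-9       # optical iodine reference, ~563.5 THz (assumed line)
f_ceo = 0.25e9           # carrier-envelope offset, Hz (assumed)
f_rep_nominal = 1.0e9    # repetition rate, Hz (assumed)

# Which comb tooth sits nearest the reference?
n = round((f_ref - f_ceo) / f_rep_nominal)

# Lock condition: tooth n must equal f_ref exactly, which pins f_rep.
f_rep_locked = (f_ref - f_ceo) / n

# A fractional shift of the optical reference moves f_rep by the same fraction.
shift = 1e-12
f_rep_shifted = (f_ref * (1 + shift) - f_ceo) / n
frac = (f_rep_shifted - f_rep_locked) / f_rep_locked
print(f"tooth n = {n}, fractional transfer ≈ {frac:.2e}")
```

The fractional transfer coming out equal to the input shift is the point: whatever stability the iodine reference has is handed down, essentially undiluted, to the microwave oscillator.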

COMPASSO began in 2021, and a paper describing its details and some breadboarding prototypes was released this year. It will hop on a ride to the ISS in 2025 to start its mission to make the world a more accurately timed place, and maybe improve our navigation abilities as well.

More information: Frederik Kuschewski et al, COMPASSO mission and its iodine clock: outline of the clock design, GPS Solutions (2023). DOI: 10.1007/s10291-023-01551-0

Provided by Universe Today