Researchers at the Fritz Haber Institute have developed the Automatic Process Explorer (APE), an approach that enhances our understanding of atomic and molecular processes. By dynamically refining simulations, APE has uncovered unexpected complexities in the oxidation of palladium (Pd) surfaces, offering new insights into catalyst behavior. The study is published in the journal Physical Review Letters.
Kinetic Monte Carlo (kMC) simulations are essential for studying the long-term evolution of atomic and molecular processes. They are widely used in fields like surface catalysis, where reactions on material surfaces are crucial for developing efficient catalysts that accelerate reactions in energy production and pollution control. Traditional kMC simulations rely on predefined inputs, which can limit their ability to capture complex atomic movements. This is where the Automatic Process Explorer (APE) comes in.
Developed by the Theory Department at the Fritz Haber Institute, APE overcomes biases in traditional kMC simulations by dynamically updating the list of processes based on the system’s current state. This approach encourages exploration of new structures, promoting diversity and efficiency in structural exploration. APE separates process exploration from kMC simulations, using fuzzy machine-learning classification to identify distinct atomic environments. This allows for a broader exploration of potential atomic movements.
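For orientation, the sketch below shows the textbook rejection-free kMC loop (the BKL/Gillespie scheme) that relies on such a predefined process list. It is a minimal illustration of the traditional approach APE augments, not the APE code itself; the two processes and their rates are hypothetical placeholders.

```python
import math
import random

# Minimal sketch of one textbook rejection-free kMC step (BKL/Gillespie
# style). The "processes" below are hypothetical placeholders (a hop and
# a desorption event with made-up rates); in a real surface simulation
# the list would come from a catalog of activation barriers.

def kmc_step(state, processes, t):
    """Pick one process with probability proportional to its rate,
    apply it, and advance the clock by an exponential waiting time."""
    rates = [rate_fn(state) for rate_fn, _ in processes]
    total = sum(rates)
    r = random.random() * total
    acc = 0.0
    for (rate_fn, apply_fn), rate in zip(processes, rates):
        acc += rate
        if r < acc:
            state = apply_fn(state)
            break
    t += -math.log(random.random()) / total
    # Traditional kMC: `processes` is fixed up front. APE's point is to
    # re-explore and update this list for each newly visited structure.
    return state, t

# Toy demo: `state` counts adsorbed atoms; hops keep it, desorption lowers it.
processes = [
    (lambda s: 1.0 * s, lambda s: s),       # hop (rate grows with coverage)
    (lambda s: 0.1 * s, lambda s: s - 1),   # desorption
]
state, t = 10, 0.0
while state > 0:
    state, t = kmc_step(state, processes, t)
print(f"all atoms desorbed at t = {t:.2f}")
```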
By integrating APE with machine-learned interatomic potentials (MLIPs), the researchers applied it to the early-stage oxidation of palladium surfaces, a key system in pollution control: palladium is used in catalytic converters that reduce car emissions. APE uncovered nearly 3,000 processes, far exceeding the capabilities of traditional kMC simulations. These findings reveal complex atomic motions and restructuring processes that occur on timescales similar to those of molecular processes in catalysis.
The APE methodology provides a detailed understanding of Pd surface restructuring during oxidation, revealing complexities previously unseen. This research enhances our knowledge of nanostructure evolution and its role in surface catalysis. By improving the efficiency of catalysts, these insights have the potential to significantly impact energy production and environmental protection, contributing to cleaner technologies and more sustainable industrial processes.
More information: King Chun Lai et al, Automatic Process Exploration through Machine Learning Assisted Transition State Searches, Physical Review Letters (2025). DOI: 10.1103/PhysRevLett.134.096201
In a new Physical Review Letters study, researchers propose an experimental approach that could finally determine whether gravity is fundamentally classical or quantum in nature.
The nature of gravity has puzzled physicists for decades. Gravity is one of the four fundamental forces, but it has resisted integration into the quantum framework, unlike the electromagnetic, strong, and weak nuclear forces.
Rather than directly tackling the challenging problem of constructing a complete quantum theory of gravity or trying to detect individual gravitons—the hypothetical mediator of gravity—the researchers take a different approach.
Phys.org spoke to the researchers behind the study to gain insight into their unique approach.
“Several proposals have appeared in the past years that, in principle, allow us to determine gravity’s nature experimentally, but their experimental requirements are extraordinarily challenging. So our motivation was to come up with a more feasible experiment that would have the power to at least falsify that gravity is classical,” explained Serhii Kryhin, a third-year graduate student at Harvard University and a co-author of the study.
The researchers aimed to rephrase the age-old question into one that could provide more concrete results: “What measurable differences would tell us whether gravity needs to be quantized?”
Quantum vs. classical fluctuations
“The idea is very simple yet remained unnoticed all this time. If gravity is quantum, as a long-range force, it should be able to induce quantum entanglement of distant matter. However, if gravity is fundamentally classical, no entanglement can be produced,” said Vivishek Sudhir, Associate Professor at MIT and co-author of the study.
The insight is that if gravity is classical, it must exhibit irreducible stochastic fluctuations. These fluctuations are a necessity, stemming from a fundamental inconsistency that would arise without them—the deterministic nature of gravity (due to being classical) would violate the principles of quantum mechanics.
The key realization is that these fluctuations would leave behind a signature in the cross-correlation spectrum as a phase shift, differing from what would be produced if gravity were quantum.
“Quantum fluctuations always arise as quantum fluctuations of dynamic degrees of freedom of general relativity. From a practical perspective, the main difference between quantum and classical gravity fluctuations comes in the magnitude. Being relativistic effects, quantum fluctuations are notoriously weak and thus incredibly challenging to measure,” said Kryhin.
“On the other hand, classical fluctuations, if they exist and have to remain consistent with everything else we know, appear to be much larger,” added Prof. Sudhir.
Mathematical framework
The researchers propose a theoretical framework for this quantum-classical interaction in the Newtonian limit of gravity. In this framework, classical gravity and quantum matter co-exist.
They created a quantum-classical master equation describing how quantum matter and classical gravity evolve together. They also derived a Hamiltonian for Newtonian gravity’s interaction with quantum masses through two complementary approaches—Dirac’s theory of constrained systems and the Newtonian limit of gravity.
Next, they formulated a modified quantum Newton’s law that accounts for stochastic gravitational effects and finally calculated the distinctive correlation patterns between two quantum oscillators interacting gravitationally.
This mathematical framework led them to a closed Lindblad equation (Markovian master equation) for quantum matter interacting with classical gravity. This equation includes a term proportional to the parameter ε, which distinguishes between classical gravity (ε ≠ 0) and quantum gravity (ε = 0).
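For context, a Lindblad master equation has the standard form below; the paper's specific jump operators and the placement of its ε-dependent term are in the publication, so this is only the generic template:

$$\dot{\rho} = -\frac{i}{\hbar}[H,\rho] + \sum_k \left( L_k \rho L_k^\dagger - \tfrac{1}{2}\left\{ L_k^\dagger L_k, \rho \right\} \right),$$

where ρ is the density matrix of the quantum matter, H its Hamiltonian, and the operators L_k describe the irreversible (dissipative) part of the evolution.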
Identifying measurable quantities
The researchers derived several crucial results. They showed that, contrary to previous claims, a consistent theory of classical gravity interacting with quantum matter is indeed possible.
Their calculations reveal that classical gravity would induce fluctuations distinct from its quantum counterpart. Crucially, they identify an experimentally measurable signature.
When two quantum harmonic oscillators interact gravitationally, their cross-correlation spectrum shows a characteristic phase shift of π or 180 degrees at a specific detuning from resonance if gravity is classical.
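For readers unfamiliar with the observable, the cross-correlation spectrum of the two oscillators' positions x₁ and x₂ is the standard quantity

$$S_{12}(\omega) = \int_{-\infty}^{\infty} \langle x_1(t+\tau)\, x_2(t) \rangle\, e^{i\omega\tau}\, d\tau,$$

and the signature in question is a jump of π in its phase, arg S₁₂(ω), at a specific detuning. (This definition is textbook background, not taken from the paper.)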
To test these predictions, the researchers propose what amounts to a quantum version of the historic Cavendish experiment, using two highly coherent quantum mechanical oscillators coupled gravitationally.
The researchers could observe the characteristic phase shift by precisely measuring the cross-correlation of their motions.
What distinguishes this approach from previous proposals is its experimental feasibility. Unlike other tests that might require creating massive objects in quantum superposition states, this experiment relies on correlations between quantum oscillators within the reach of current or near-future technology.
Prof. Sudhir noted, “Semiclassical models of gravity usually explicitly neglect the backaction of quantum fluctuations of matter onto the classical gravity dynamics. In contrast, our theory allows self-consistent dynamics of classical gravity field and quantum matter.”
Jury’s still out
An experimental confirmation that gravity is classical would have profound implications for our physical theories.
“At present, it is taken as a self-evident fact that gravity has to be quantum, although nobody precisely knows what that means!” said Kryhin.
“Immense effort has been made to understand the behavior of quantized general relativity and construct a complete theory of quantum gravity, which resulted in the construction of string theory as one of the byproducts. If experimentally proven that gravity is classical, we will have to start from the beginning in a search for a satisfactory ontological picture of the world.”
While the study offers a fresh perspective on a question that has plagued physicists for decades, the researchers acknowledge the many problems still to be addressed, from formalism development and model-building to demonstrating the technologies required for the experiment.
“From an experimental standpoint, we need two gravitating masses, noise isolation, and measurement techniques, all of which need to come together to realize the sensitivity needed for a decisive experiment,” concluded Kryhin.
Neutrinos generated through solar fusion reactions travel effortlessly through the sun’s dense core. Each specific fusion process creates neutrinos with distinctive signatures, potentially providing a method to examine the sun’s internal structure. Multiple neutrino detection observatories on Earth are now capturing these solar particles, which can be analyzed alongside reactor-produced neutrinos; the combined data could eventually enable researchers to construct a detailed map of the interior of the sun.
The sun is a massive sphere of hot plasma at the center of our solar system and provides the light and heat to make life on Earth possible. Composed mostly of hydrogen and helium, it generates energy through nuclear fusion, converting hydrogen into helium in its core. This process releases an enormous amount of energy which we perceive as heat and light.
The sun’s surface, or photosphere, is around 5,500°C, while its core reaches over 15 million°C. It influences everything from our climate to space weather, sending out solar wind and occasional bursts of radiation known as solar flares. As an average middle-aged star, the sun is about 4.6 billion years old and will (hopefully) continue burning for another 5 billion years before evolving into a red giant and eventually becoming a white dwarf.
The standard solar model (SSM) is used to understand and predict the sun’s internal structure and evolution. It’s how we work out what’s going on inside the sun. It explains how, in the sun’s core, different nuclear fusion reactions are constantly pumping out neutrinos—tiny, nearly massless particles that travel through almost anything.
Each type of reaction creates neutrinos with their own properties. These neutrinos may help us to understand more about the interior of the sun. Right now, we only know about its internal density structure from theoretical models based on the SSM, matched with what we can see on the sun’s surface. The neutrinos may hold information that will give us more direct data about the solar interior.
In a paper, Peter B. Denton from Brookhaven National Laboratory and Charles Gourley from Rensselaer Polytechnic Institute show how solar neutrinos can help us look inside the sun and establish its density structure. In contrast, photons of light only tell us about the surface of the sun as it is right now, and give us little information about the sun’s interior of hundreds of thousands of years ago. The delay arises because photons bounce around the dense solar interior for that long before escaping. Neutrinos, on the other hand, give us up-to-the-minute information because they zip straight through the sun without being stopped.
The study is published on the arXiv preprint server.
It has long been known that neutrinos change their flavor, or type (electron neutrino, muon neutrino or tau neutrino), as they travel through matter, and that this change depends on the local density. This is the well-documented Mikheyev-Smirnov-Wolfenstein (MSW) effect: by comparing the neutrino flux observed on Earth with the predicted unoscillated flux, the density where the neutrinos were produced can be calculated. Input is also required from independent measurements of the oscillations of neutrinos produced inside nuclear reactors.
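As textbook background (the standard two-flavor simplification, not the paper's full treatment), the effective mixing angle in matter depends on the local electron number density N_e:

$$\sin^2 2\theta_m = \frac{\sin^2 2\theta}{\left(\cos 2\theta - \dfrac{2\sqrt{2}\, G_F N_e E}{\Delta m^2}\right)^2 + \sin^2 2\theta},$$

where θ and Δm² are the vacuum oscillation parameters, E is the neutrino energy, and G_F is the Fermi constant. A measured oscillation probability therefore carries information about N_e at the point where the neutrinos were produced.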
The team demonstrated that the approach does have its limitations and that there are constraints on just how much density information can be gleaned from the SSM alone. Further data from projects like JUNO and DUNE is needed to further improve the solar internal density profile and give us a more realistic view of the internal workings of our local star.
More information: Peter B. Denton et al, Determining the Density of the Sun with Neutrinos, arXiv (2025). DOI: 10.48550/arxiv.2502.17546
A small international team of nanotechnologists, engineers and physicists has developed a way to force laser light into becoming a supersolid. Their paper is published in the journal Nature. The editors at Nature have published a Research Briefing in the same issue summarizing the work.
Supersolids are entities that exist only in the quantum world, and, up until now, they have all been made using atoms. Prior research has shown that they have zero viscosity and are formed in crystal-like structures similar to the way atoms are arranged in salt crystals.
Because of their nature, supersolids have been created in extremely cold environments where the quantum effects can be seen. Notably, one of the team members on this new effort was part of the team that demonstrated more than a decade ago that light could become a fluid under the right set of circumstances.
To create their supersolid, the researchers fired a laser at a piece of gallium arsenide that had been shaped with special ridges. As the light struck the ridges, interactions between it and the material resulted in the formation of polaritons—a kind of hybrid particle—which were constrained by the ridges in a predesigned way. Doing so forced the polaritons into forming themselves into a supersolid.
The research team then set themselves the task of testing it to make sure it truly was a supersolid—a task made more difficult by the fact that a supersolid made from light had never been created before. Despite the difficulties, they were able to show that their supersolid was both a solid and a fluid and that it had no viscosity.
The team plans to continue their work with the light-made supersolid to learn more about its structure. They note that supersolids made from light might be easier to work with than those made with atoms, which could help us better understand the nature of supersolids in general.
More information: Dimitrios Trypogeorgos et al, Emerging supersolidity in photonic-crystal polariton condensates, Nature (2025). DOI: 10.1038/s41586-025-08616-9
Many physicists and engineers have recently been trying to demonstrate the potential of quantum computers for tackling problems that are particularly demanding and difficult for classical computers to solve. A task that has been found to be challenging for both quantum and classical computers is finding the ground state (i.e., the lowest possible energy state) of systems of multiple interacting quantum particles, called quantum many-body systems.
When one of these systems is placed in a thermal bath (i.e., an environment with a fixed temperature that interacts with the system), it cools down but does not always reach its ground state. In some instances, a quantum system can get trapped at a so-called local minimum: a state in which its energy is lower than that of neighboring states but not at the lowest possible level.
Researchers at the California Institute of Technology and the AWS Center for Quantum Computing recently showed that while finding a local minimum of a system’s energy is difficult for classical computers, it could be far easier for quantum computers.
Their paper, published in Nature Physics, introduces a new quantum algorithm that simulates natural cooling processes, which was successfully used to predict the local minima of quantum many-body systems.
“This paper emerged from a fundamental question: should quantum theorists focus solely on ground states when they’re often physically unrealizable due to the inherent computational hardness in finding them?” Hsin-Yuan (Robert) Huang, co-first author of the paper, told Phys.org.
“In machine learning, local minima—not global minima—are what practical algorithms find and use successfully. This sparked our curiosity about local minima in quantum systems.”
The recent work by Huang and his colleagues combines approaches from three different areas of physics research. These include the study of local minima and their physical relevance, the ongoing quest to demonstrate the advantages of quantum computers in optimization problems and recent insights from the field of quantum thermodynamics.
“This convergence enabled us to define quantum local minima through thermal perturbations—a physically meaningful approach that mirrors what happens when nature cools a physical system,” said Huang. “Our objective was to determine if finding local minima could provide a provable quantum advantage while maintaining direct connections to natural physical processes.”
To tackle the problem of finding a local minimum, the researchers first formalized the natural cooling process of quantum systems. Instead of seeking ground states, which are global energy minima, they focused on local minima, states in which small perturbations no longer decrease the energy of a system in a thermal bath.
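The distinction is easy to see in a classical caricature. The sketch below (a hypothetical 1D landscape, not the paper's quantum algorithm) cools a state by strictly downhill moves and then checks the defining property: at a local minimum, small perturbations no longer lower the energy.

```python
import numpy as np

# Toy classical analogy (NOT the paper's quantum algorithm): a "local
# minimum" is a state from which small perturbations no longer lower the
# energy. The landscape below is a hypothetical double well whose
# right-hand basin is a suboptimal local minimum.

def energy(x):
    return 0.1 * x**4 - x**2 + 0.4 * x   # global minimum near x = -2.33

def grad(x):
    return 0.4 * x**3 - 2 * x + 0.4

def is_local_minimum(x, step=0.05, trials=200, seed=0):
    """True if no small random perturbation decreases the energy."""
    rng = np.random.default_rng(seed)
    perturbed = x + step * rng.standard_normal(trials)
    return not np.any(energy(perturbed) < energy(x))

# Zero-temperature "cooling": follow the energy downhill from x = 3.
x = 3.0
for _ in range(2000):
    x -= 0.01 * grad(x)

print(f"settled at x = {x:.3f}, energy = {energy(x):.3f}")
print("local minimum:", is_local_minimum(x))
# The walker settles near x = 2.13 (energy ~ -1.63), a genuine local
# minimum, yet the global minimum near x = -2.33 (energy ~ -3.41) is
# never reached: the cooling got stuck, exactly the trap described above.
```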
“Our analysis proceeded to show that the problem of cooling to local minima is classically hard and quantumly easy,” said Leo Zhou.
“To establish classical hardness, we provide explicit construction of quantum systems where any local minima can be used to encode universal quantum computation, a task widely believed to be classically intractable.
“We then developed a quantum thermal gradient descent algorithm, which enables a quantum computer to efficiently find a local minimum by mimicking natural cooling processes.”
Comparison between thermal perturbations and local unitary perturbations. Credit: Chen et al.

The greatest technical challenge that the researchers had to overcome for this study was proving that some classically hard Hamiltonians have no suboptimal local minima, or that, in other words, their energy landscapes have a perfect bowl-like shape.
To achieve this, they employed clever constructions from quantum complexity theory and sophisticated mathematical tools for analyzing the effects of thermal perturbations on energy landscapes.
“We found that cooling physical systems to local minima is universal for quantum computation,” said Huang.
“In other words, quantum computers can efficiently find local minima while classical computers cannot, assuming that quantum computers are more powerful than classical ones. This result is compelling because it has a clear physical interpretation: when nature cools a quantum system, it effectively solves the problem of finding local minima under thermal perturbations.”
“Furthermore, our result points to a new approach to characterize quantum many-body systems that challenges conventional wisdom,” said Zhou.
“Instead of focusing solely on ground states, we can study their local minima and overall energy landscape. Optimizing over the energy landscape can even lead to discovery of new physics—for example, by finding an anomalous local minimum with unexpected physical properties.”
The new quantum algorithm developed by Huang and his colleagues formalizes and replicates the natural cooling of quantum systems. Using this algorithm, the researchers showed that quantum computers could significantly enhance energy optimization, outperforming classical computers by a large margin.
“After classical algorithms reach their ‘best’ solution, our quantum algorithm could find even lower energy states—potentially transforming computational approaches in materials science, chemistry, and physics,” explained Huang.
The results attained by this team of researchers highlight the potential of quantum computing systems for finding the local minima of quantum systems. In their next studies, Huang and his colleagues plan to build on their recent work by further testing their algorithm and applying it to a broader range of scenarios.
“First, we aim to characterize physically relevant quantum systems with favorable energy landscapes where our approach could provide practical quantum advantages,” said Huang. “Second, we’re investigating whether these techniques could yield quantum advantages for classical optimization problems—potentially expanding the impact beyond quantum systems.”
As part of their next studies, the researchers are planning to carry out experimental demonstrations of their proposed method using near-term quantum devices. In addition, they will try to engineer synthetic quantum processes that could outperform the natural cooling capabilities of quantum systems.
“Our ultimate goal is not only to bridge the gap between theoretical quantum advantage and practical applications but also to pioneer new ways of understanding and controlling quantum many-body systems,” added Huang and Zhou.
More information: Chi-Fang Chen et al, Local minima in quantum systems, Nature Physics (2025). DOI: 10.1038/s41567-025-02781-4.
For over a century, physicists have grappled with one of the most profound questions in science: How do the rules of quantum mechanics, which govern the smallest particles, fit with the laws of general relativity, which describe the universe on the largest scales?
The optical lattice clock, one of the most precise timekeeping devices, is becoming a powerful tool used to tackle this great challenge. Within an optical lattice clock, atoms are trapped in a “lattice” potential formed by laser beams and are manipulated with precise control of quantum coherence and interactions governed by quantum mechanics.
Simultaneously, according to Einstein’s laws of general relativity, time moves slower in stronger gravitational fields. This effect, known as gravitational redshift, leads to a tiny shift of atoms’ internal energy levels depending on their position in gravitational fields, causing their “ticking”—the oscillations that define time in optical lattice clocks—to change.
By measuring the tiny shifts of oscillation frequency in these ultra-precise clocks, researchers are able to explore the influences of Einstein’s theory of relativity on quantum systems.
While relativistic effects are well-understood for individual atoms, their role in many-body quantum systems, where atoms can interact and become entangled, remains largely unexplored.
Making a step forward in this direction, researchers led by JILA and NIST Fellows and University of Colorado Boulder physics professors Jun Ye and Ana Maria Rey—in collaboration with scientists at Leibniz University Hannover, the Austrian Academy of Sciences, and the University of Innsbruck—proposed practical protocols to explore the effects of relativity, such as the gravitational redshift, on quantum entanglement and interactions in an optical atomic clock.
Their work revealed that the interplay between gravitational effects and quantum interactions can lead to unexpected phenomena, such as atomic synchronization and quantum entanglement among particles.
“One of our key findings is that interactions between atoms can help to lock them together so that now they behave as a unified system instead of ticking independently due to the gravitational redshift,” explains Dr. Anjun Chu, a former JILA graduate student, now a postdoctoral researcher at the University of Chicago and the paper’s first author.
“This is really cool because it directly shows the interplay between quantum interactions and gravitational effects.”
“The interplay between general relativity [GR] and quantum entanglement has puzzled physicists for years,” Rey adds.
“The challenge lies in the fact that GR corrections in most tabletop experiments are minuscule, making them extremely difficult to detect. However, atomic clocks are now reaching unprecedented precision, bringing these elusive effects within measurable range.
“Since these clocks simultaneously interrogate many atoms, they provide a unique platform to explore the intersection of GR and many-body quantum physics. In this work, we investigated a system where atoms interact by exchanging photons within an optical cavity.
“Interestingly, we found out that while individual interactions alone can have no direct effect on the ticking of the clock, their collective influence on the redshift can significantly modify the dynamics and even generate entanglement among the atoms, which is very exciting.”
Distinguishing gravitational effects
To explore this challenge, the team devised innovative protocols to observe how gravitational redshift interferes with quantum behavior.
The first issue they focused on was to uniquely distinguish gravitational effects in an optical lattice clock from other noise sources contributing to the tiny frequency shifts.
They utilized a technique called a dressing protocol, which involves manipulating the internal states of particles with laser light. While dressing protocols are a standard tool in quantum optics, this is one of the first instances of the protocol being used to fine-tune gravitational effects.
The tunability is based on the mechanism known as mass-energy equivalence (from Einstein’s famous equation E = mc²), which means that changes in a particle’s internal energy can subtly alter its mass. Based on this mechanism, an atom in the excited state has a slightly larger mass compared to the same atom in the ground state.
The difference in gravitational potential energy associated with this mass difference is equivalent to the gravitational redshift. The dressing protocol provides a flexible way to tune the mass difference, and thus the gravitational redshift, by controlling the particles to stay in a superposition of the two internal energy states.
Instead of being strictly in the ground or excited state, the particles can be tuned to occupy both of the states simultaneously with a continuous change of occupation probability between these two levels. This technique provides unprecedented control of internal states, enabling the researchers to fine-tune the size of gravitational effects.
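For a sense of scale (standard back-of-envelope relations, not results from this paper): an internal energy difference ΔE corresponds to a mass difference Δm, and a height difference Δh in Earth's gravity g shifts a clock's fractional frequency by

$$\Delta m = \frac{\Delta E}{c^{2}}, \qquad \frac{\Delta\nu}{\nu} = \frac{g\,\Delta h}{c^{2}} \approx 1.1\times 10^{-16}\ \text{per meter of height},$$

a shift that today's optical lattice clocks, with fractional precision at or below the 10⁻¹⁸ level, can resolve even over millimeter-scale height differences.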
In this way, the researchers could distinguish genuine gravitational redshift effects from other influences, like magnetic field gradients, within the system.
“By changing the superpositions of internal levels of the particles you’re addressing, you can change how large the gravitational effects appear,” notes JILA graduate student Maya Miklos. “This is a really clever way to probe mass-energy equivalence at the quantum level.”
Seeing synchronization and entanglement
After providing a recipe to distinguish genuine gravitational effects, the researchers explored gravitational manifestations in quantum many-body dynamics. They made use of the photon-mediated interactions generated by placing the atoms in an optical cavity.
If one atom is in an excited state, it can relax back to the ground state by emitting a photon into the cavity. This photon doesn’t necessarily escape the system but can be absorbed by another atom in the ground state, exciting it in turn.
Such an exchange of energy—known as photon-mediated interactions—is key to making particles interact, even when they cannot physically touch each other.
Such types of quantum interactions can compete with gravitational effects on individual atoms inside the cavity. Typically, particles positioned at different “heights” within a gravitational field experience slight differences in how they “tick” due to gravitational redshift. Without interactions between particles, the slight difference in oscillation frequencies will cause them to fall out of sync over time.
However, when photon-mediated interactions were introduced, something remarkable happened: the particles began to synchronize, effectively “locking” their ticking together despite the differences in oscillation frequencies induced by gravity.
“It’s fascinating,” Chu says. “You can think of each particle as its own little clock. But when they interact, they start to tick in unison, even though gravity is trying to pull their timing apart.”
This synchronization showcased a fascinating interplay between gravitational effects and quantum interactions, where the latter can override the natural desynchronization caused by gravitational redshift.
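A classical caricature captures this competition. The sketch below uses the two-oscillator Kuramoto model (a deliberately simple stand-in, not the paper's quantum cavity dynamics): two "clocks" whose natural frequencies differ slightly lock together once their coupling exceeds the detuning.

```python
import numpy as np

# Classical caricature of synchronization (two-oscillator Kuramoto model),
# NOT the paper's quantum cavity dynamics: two "clocks" whose natural
# frequencies differ by `delta` lock together once the coupling K is
# strong enough. The phase difference phi obeys d(phi)/dt = delta - 2K sin(phi).

def residual_drift(K, delta=0.1, dt=0.01, steps=20000):
    """Integrate the phase-difference equation; return the final drift rate."""
    phi = 0.5
    for _ in range(steps):
        phi += (delta - 2 * K * np.sin(phi)) * dt
    return delta - 2 * K * np.sin(phi)

for K in (0.0, 0.02, 0.1):
    print(f"K = {K:.2f}: residual frequency difference = {residual_drift(K):+.4f}")
# K = 0 leaves the full detuning (the clocks drift apart); for K > delta/2
# the phase difference is pinned and the residual drift vanishes: the two
# clocks tick in unison despite their different natural frequencies.
```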
This synchronization wasn’t just an oddity—it also led to the creation of quantum entanglement, a phenomenon where particles become interconnected, with the state of one instantly affecting the other.
Remarkably, the researchers found that the speed of synchronization could also serve as an indirect measure of entanglement, offering an insight into quantifying the interplay between two effects.
“Synchronization is the first phenomenon we can see that reveals this competition between gravitational redshift and quantum interactions,” adds JILA postdoctoral researcher Dr. Kyungtae Kim. “It’s a window into how these two forces balance each other.”
While this study revealed the initial interactions between these two fields of physics, the protocols developed could help refine experimental techniques, making them even more precise—with applications ranging from quantum computing to fundamental physics experiments.
“Detecting this GR-facilitated entanglement would be a groundbreaking achievement, and our theoretical calculations suggest that it is within reach of current or near-term experiments,” says Rey.
Future experiments could explore how particles behave under different conditions or how interactions can amplify gravitational effects, bringing us closer to unifying the two great pillars of modern physics.
More information: Anjun Chu et al, Exploring the Dynamical Interplay between Mass-Energy Equivalence, Interactions, and Entanglement in an Optical Lattice Clock, Physical Review Letters (2025). DOI: 10.1103/PhysRevLett.134.093201. On arXiv: DOI: 10.48550/arxiv.2406.03804
Scientists at Yokohama National University, in collaboration with RIKEN and other institutions in Japan and Korea, have made an important discovery about how electrons move and behave in molecules. This discovery could potentially lead to advances in electronics, energy transfer, and chemical reactions.
Published in the journal Science, their study reveals a new way to control the distribution of electrons in molecules using very fast phase-controlled pulses of light in the terahertz range.
Atoms and molecules contain negatively charged electrons that usually stay in specific energy levels, like layers, around the positively charged nucleus. The way these electrons are arranged in the molecule is key to how the molecule behaves.
This arrangement affects important processes like how light is emitted, how charges move between molecules, and how chemical reactions happen. For example, when light hits an electron and gives it enough energy, the electron moves to a higher energy level, leaving behind a positively charged “hole.” This creates an exciton—a tiny energy packet in the molecule that can emit light.
This process is key to technologies like solar cells, where excitons help convert sunlight into electricity, and LEDs, where they release energy as light.
However, there are other important states that molecules can exist in, like charged states and charged excited states. Charged states occur when a molecule gains or loses an electron, while charged excited states involve both a charge change and an electron in a higher energy level.
These are important for many processes, but it has been very difficult to control these states, especially on ultrafast timescales, using traditional technology. Normally, light from the visible spectrum doesn’t provide enough energy to change the charge of the molecule and therefore cannot change the number of electrons in it.
To overcome this challenge, the researchers used terahertz light pulses, a type of light with a much lower frequency than visible light. These pulses cause electrons to move between a molecule and the metal tip of a specialized microscope that can manipulate individual molecules, allowing the team to either remove or add an electron to the molecule.
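Simple photon-energy arithmetic (not from the paper) shows why terahertz light occupies such a different regime from visible light:

$$E = hf: \qquad E_{1\,\mathrm{THz}} \approx 4\ \mathrm{meV}, \qquad E_{\sim 500\,\mathrm{THz\ (visible)}} \approx 2\ \mathrm{eV},$$

roughly a factor of 500 less energy per terahertz photon, far too little to excite the molecule optically. Instead, the phase-controlled field of the pulse shuttles electrons across the tip-molecule junction.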
This new method offers a quick and precise way to control not only excitons but also other important molecular states that are essential for chemical reactions, energy transfer, and many other processes.
The team also demonstrated that terahertz light, which is invisible to the human eye, can be converted into visible light within a molecule, revealing a novel way to transform one type of light into another through molecular energy changes.
“While excitons typically form when light is absorbed by a material, our findings reveal they can also be created through charged states using these specially designed terahertz pulses,” says Professor Ikufumi Katayama, the study’s corresponding author from the Faculty of Engineering at Yokohama National University.
“This opens new possibilities for controlling how charge moves within molecules, which could lead to better solar cells, smaller light-based devices, and faster electronics.”
The team’s main achievement was the ability to control exciton formation at the single-molecule level. Professor Jun Takeda, another corresponding author from the Faculty of Engineering at Yokohama National University, explains, “By precisely controlling how electrons move between a single molecule and the metal tip of the specialized microscope, we were able to guide exciton formation and the chemical reactions that follow.
“These processes usually happen randomly, but with terahertz pulses, we can determine exactly when and how reactions occur at the molecular level. This could lead to breakthroughs in nanotechnology, advanced materials, and more efficient catalysts for energy and industry.”
More information: Kensuke Kimura et al, Ultrafast on-demand exciton formation in a single-molecule junction by tailored terahertz pulses, Science (2025). DOI: 10.1126/science.ads2776. www.science.org/doi/10.1126/science.ads2776
The ability to better steer particles suspended in liquids could lead to better water purification processes, new drug delivery systems, and other applications. The key ingredient, say Yale researchers, is a pinch of salt.
The research team, led by Prof. Amir Pahlavan, has published their results in Physical Review Letters.
The phenomenon of diffusiophoresis causes suspended particles known as colloids to move due to differing concentrations of a dissolved substance—the gradient—in the solution. Haoyu Liu, a graduate student in Pahlavan’s lab, notes that the phenomenon was discovered more than 50 years ago, yet its applications in microfluidics and implications in environmental flows have just recently been realized.
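As textbook background (the classical far-field result for phoretic motion in a solute gradient, not the paper's specific finding), a colloid in a concentration field c drifts at

$$\mathbf{u}_{\mathrm{DP}} = \Gamma_{\mathrm{DP}}\, \nabla \ln c,$$

where the mobility Γ_DP is set by the particle's surface chemistry and the solute's properties. The logarithmic dependence means that even modest absolute gradients in dilute regions can drive appreciable motion.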
“Chemical gradient is actually everywhere in our natural systems and also in our industrial processes,” said Liu, lead author of the study. “So this phenomenon has drawn very much interest from scientists and engineers.”
Scientists have traditionally used electric or magnetic fields to manipulate colloids. But Pahlavan and Liu report that varying concentrations of salt can lead to the spontaneous motion of colloids. These effects can lead to unexpected outcomes, and even create a swirling vortex that reverses the particles’ usual paths.
Using gradients in salt, polymer, or other molecular solutes, they say, offers some advantages over the other processes.
“One is the simplicity of using a salt gradient,” said Pahlavan, assistant professor of mechanical engineering & materials science. “All you need is just more salty or less salty water to manipulate the colloids, as opposed to a more sophisticated setup.”
This process might be most useful in natural systems. “You don’t always have an electric field or a magnetic field, but you do always have a salt or chemical gradient, either because of human activities, or because of many other processes that might happen in nature.”
As far as applications, Pahlavan and Liu said it could have benefits for environmental cleanups.
“With contaminant remediation, where they inject polymer particles to react with a chemical plume somewhere to prevent its further spreading, we can utilize the solute gradients to make sure that the particles that are being injected end up at the right location,” Pahlavan said.
Drug delivery is another potential application.
“Here, you want to deliver particles to certain tumor cells or perhaps a biofilm,” he said. “Maybe we can use the solute gradients to guide the particles to go where we want. These are hidden targets whose location we don’t know a priori; solute gradients, however, could steer the colloids toward the right destination.”
More information: Haoyu Liu et al, Diffusioosmotic Reversal of Colloidal Focusing Direction in a Microfluidic T-Junction, Physical Review Letters (2025). DOI: 10.1103/PhysRevLett.134.098201
Imagine if we could take the energy of the sun, put it in a container, and use it to provide green, sustainable power for the world. Creating commercial fusion power plants would essentially make this idea a reality. However, there are several scientific challenges to overcome before we can successfully harness fusion power in this way.
Researchers from the U.S. Department of Energy (DOE) Ames National Laboratory and Iowa State University are leading efforts to overcome the materials challenges standing in the way of commercial fusion power. The research teams are part of a DOE Advanced Research Projects Agency-Energy (ARPA-E) program called Creating Hardened And Durable fusion first Wall Incorporating Centralized Knowledge (CHADWICK). They will investigate materials for the first wall of a fusion reactor. The first wall is the structure that surrounds the fusion reaction, so it bears the brunt of the extreme environment in the fusion reactor core.
ARPA-E recently selected 13 projects under the CHADWICK program. Of those 13, Ames Lab leads one of the projects and is collaborating alongside Iowa State on another project, which is led by Pacific Northwest National Laboratory (PNNL).
According to Nicolas Argibay, a scientist at Ames Lab and lead of one project, one of the key challenges in harnessing fusion-based power is containing the plasma core that creates the energy. The plasma is like a miniature sun that needs to be contained by materials that can withstand a combination of extreme temperature, irradiation, and magnetic fields while efficiently extracting heat for conversion to electricity.
Argibay explained that in the reactor core, the plasma is contained by a strong magnetic field, and the first wall would surround this environment. The first wall has two layers of material, one that is closest to the strong magnetic and plasma environments, and one that will help move the energy along to other parts of the system.
The first layer material needs to be structurally sound, resisting cracking and erosion over time. Argibay also said that it cannot stay radioactive for very long, so that the reactor can be turned on and off for maintenance without endangering anyone working on it. The project he is leading is focused on the first layer material.
“I think one of the things we [at Ames Lab] bring is a unique capability for materials design, but also, very importantly, for processing them. It is hard to make and manage these materials,” said Argibay. “On the project I’m leading, we’re using tungsten as a major constituent, and with the exception of some forms of carbon, like diamond, that’s the highest melting temperature element on the periodic table.”
Specialized equipment is necessary to process and test refractory materials, which have extremely high melting temperatures. In Argibay’s lab, the first piece of equipment obtained is a commercial, modular, customizable, open-architecture platform for both making refractory materials and exploring advanced and smart manufacturing methods to make the process more efficient and reliable.
“Basically, we can make castings and powders of alloys up to and including pure tungsten, which is the highest melting temperature element [apart from some forms of carbon, like diamond],” said Argibay.
By spring of 2025, Argibay said that they will have two additional systems in place for creating these refractory materials at both lab-scale and pilot-scale quantities. He explained it is easier to make small quantities (lab-scale) than larger quantities (pilot-scale), but the larger quantities are important for collecting meaningful and useful data that can translate to a real-world application.
Argibay also has capabilities for measuring the mechanical properties of refractory materials at relevant temperatures. Systems capable of making measurements well above 1,000°C (1,832°F) are rare. Ames Lab now has one of the only commercial testers in the country that can measure tensile properties of alloys at temperatures up to 1,500°C (2,732°F), which puts the lab in a unique position to both support process science and alloy design.
Jordan Tiarks, another scientist at Ames Lab who is working on the project led by PNNL, is focused on a different aspect of this reactor research. His team is drawing on Ames Lab’s 35 years of experience leading the field in gas atomization, powder metallurgy, and technology transfer to industry to develop the first wall’s structural material.
“The first wall structural material is actually the part that holds it all together,” said Tiarks. “You need to have more complexity and more structural strength. You might have things like cooling channels that need to be integrated in the structural wall so that we can extract all of that heat, and don’t just melt the first wall material.”
Tiarks’s team hopes to utilize over a decade of research focused on developing a unique way of creating oxide dispersion strengthened (ODS) steel for next generation nuclear fission. ODS steel contains very small ceramic particles (nanoparticles) that are dispersed throughout the steel. These particles improve the metal’s mechanical properties and ability to withstand high irradiation.
“What this project does is it takes all of our lessons learned on steels, and we’re going to apply them to a brand-new medium, a vanadium-based alloy that is well suited for nuclear fusion,” said Tiarks.
The major challenge Tiarks’s team faces is how vanadium behaves differently from steel. Vanadium has a much higher melting point, and it is more reactive than steel, so it cannot be contained with ceramic. Instead, his team must use a slightly different process for creating vanadium-based powders.
“We use high pressure gas to break up the molten material into tiny droplets which rapidly cool to create the powders we’re working with,” explained Tiarks. “And [in this case] we can’t use any sort of ceramic to be able to deliver the melt. So what we have to do is called ‘free fall gas atomization.’ It is essentially a big opening in a gas die where a liquid stream pours through and we use supersonic gas jets to attack that liquid stream.”
There are some challenges with the method Tiarks described. First, he said that it is less efficient than other methods that rely on ceramics. Secondly, due to the high melting point of vanadium, it is harder to add more heat during the pouring process, which would provide more time to break up the liquid into droplets. Finally, vanadium tends to be reactive.
“Powders are reactive. If you aerosolize them, they will explode. However, a fair number of metals will form a thin oxide shell on the outside layer that can help ‘passivate’ them from further reactions,” Tiarks explained. “It’s kind of like an M&M. It’s the candy coating on the outside that protects the rest of the powder particle from further oxidizing.
“A lot of the research we’ve done in the Ames lab is actually figuring out how we passivate these powders so you can handle them safely, so they won’t further react, but without degrading too much of the performance of those powders by adding too much oxygen. If you oxidize them fully, all of a sudden, now we have a ceramic particle, and it’s not a metal anymore, and so we have to be very careful to control the passivation process.”
Tiarks explained that discovering a powder processing method for vanadium-based materials will make them easier to form into the complicated geometric shapes that are necessary for the second layer to function properly. Additionally, vanadium will not interfere with the magnetic fields in the reactor core.
Sid Pathak, an assistant professor at Iowa State, is leading the group that will test the material samples for the second layer. When the material powder made by the Ames Lab group is ready, it will be formed into plates at PNNL by spraying the powder onto a surface and consolidating it with friction stir processing.
“Once you make that plate, we need to test its properties, particularly its response under the extreme radiation conditions present in a fusion reactor, and make sure that we get something better than what is currently available,” said Pathak. “That’s our claim, that our materials will be superior to what is used today.”
Pathak explained that it can take 10–20 years for radiation damage to show up on materials in a nuclear reactor. It would be impossible to recreate that timeline during a 3-year research project. Instead, his team uses ion irradiation to test how materials respond in extreme environments. For this process, the team uses a particle accelerator at the University of Michigan’s Michigan Ion Beam Laboratory to bombard a material with ions. The results simulate how a material is affected by radiation.
“Ion irradiation is a technique where you radiate [the material] with ions instead of neutrons. That can be done in a matter of hours,” said Pathak. “Also, the material does not become radioactive after ion irradiation, so you can handle it much more easily.”
Despite these benefits, there is one disadvantage to using ion irradiation. The damage only penetrates the material one or two micrometers deep, meaning that it can only be seen with a microscope. For reference, the average strand of human hair is about 70-100 micrometers thick. So, testing materials at these very small depths requires specialized tools that work at micro-length scales, which are available at Pathak’s lab at Iowa State University.
“The pathway to commercial nuclear fusion power has some of the greatest technical challenges of our day but also has the potential for one of the greatest payoffs—harnessing the power of the sun to produce abundant, clean energy,” said Tiarks. “It’s incredibly exciting to be able to have a tiny role in solving that greater problem.”
“I’m very excited at the prospect that we are kind of in uncharted water. So there is an opportunity for Ames to demonstrate why we’re here, why we should continue to fund and increase funding for national labs like ours, and why we are going to tackle some things that most companies and other national labs just can’t or aren’t,” said Argibay. “We hope to be part of this next generation of solving fusion energy for the grid.”
Quiet quitting isn’t just for burned-out employees. Atoms carrying information inside quantum computers, known as qubits, sometimes vanish silently from their posts. This problematic phenomenon, called atom loss, corrupts data and spoils calculations.
But Sandia National Laboratories and the University of New Mexico have for the first time demonstrated a practical way to detect these “leakage errors” for neutral atom platforms. This achievement removes a major roadblock for one branch of quantum computing, bringing scientists closer to realizing the technology’s full potential. Many experts believe quantum computers will help reveal truths about the universe that are impossible to glean with current technology.
“We can now detect the loss of an atom without disturbing its quantum state,” said Yuan-Yu Jau, Sandia atomic physicist and principal investigator of the experiment team.
In a paper recently published in the journal PRX Quantum, the team reports its circuit-based method achieved 93.4% accuracy. The detection method enables researchers to flag and correct errors.
Detection heads off a looming crisis
Atoms are squirrely little things. Scientists control them in some quantum computers by freezing them at just above absolute zero, about -460 degrees Fahrenheit. A thousandth of a degree too warm and they spring free. Even at the right temperature, they can escape through random chance.
If an atom slips away in the middle of a calculation, “The result can be completely useless. It’s like garbage,” Jau said.
A detection scheme can tell researchers whether they can trust the result and could lead to a way of correcting errors by filling in detected gaps.
Matthew Chow, who led the research, said atom loss is a manageable nuisance in small-scale machines because they have relatively few qubits, so the odds of losing one at any given moment are generally small.
But the future has been looking bleak. Useful quantum computers will need millions of qubits. With so many, the odds of losing some mid-program spike. Atoms would be silently walking off the jobsite en masse, leaving scientists with the futile task of trying to use a computer that is literally vanishing before their eyes.
“This is super important because if we don’t have a solution for this, I don’t think there’s a way to keep moving forward,” Jau said.
Researchers have found ways to detect atom loss and other kinds of leakage errors in different quantum computing platforms, like those using electrically charged atoms, called trapped ion qubits, instead of neutral ones. The New Mexico-based team is the first to non-destructively detect atom loss in neutral atom systems. By implementing simple circuit-based techniques to detect leakage errors, the team is helping avert the crisis of uncontrollable future leakage.
Just don’t look
The dilemma of detecting atom loss is that scientists cannot look at the atoms they need to preserve during computation.
“Quantum calculations are extremely fragile,” Jau said.
The operation falls apart if researchers do anything at all to observe the state of a qubit while it’s working.
Austrian physicist Erwin Schrödinger famously compared this concept to having a cat inside a box with something that will randomly kill it. According to quantum physics, Schrödinger explained, the cat can be thought of as simultaneously dead and alive until you open the box.
“It’s very easy to have a mathematical description of everything in terms of quantum computing. But to visualize entangled quantum information, it’s hard,” Jau said.
So how do you check that an atom is in the processor without observing it?
“The idea is analogous to having Schrödinger’s cat in a box, and putting that box on a scale, where the weight of the box tells you whether or not there’s a cat, but it doesn’t tell you whether the cat’s dead or alive,” Chow said.
Objective lenses on either side of the vacuum chamber are used to focus laser light into single-atom traps at Sandia National Laboratories. Credit: Craig Fritz
Surprise finding fuels breakthrough
Chow, a University of New Mexico Ph.D. student and Sandia Labs intern at the time of the research, said he never expected this breakthrough.
“This was certainly not a paper that we had planned to write,” he said.
He was debugging a small bit of quantum computing code at Sandia for his dissertation. The code diagnoses the entangling interaction—a unique quantum process that links the states of atoms—by repeatedly applying an operation and comparing the results when two atoms interact versus when only one atom is present. When the atoms interact, the repeated application of the operation makes them switch between entangled and disentangled states. In this comparison, he observed a key pattern.
Every other run, when the atoms were disentangled, the outcome for the two-atom case was markedly different from the solo-atom case.
Without trying, Chow realized, he had found a subtle signal indicating that a neighboring atom was present in a quantum computer, without observing it directly. The oscillating measurement was the scale that reveals whether the cat is still in the box.
“This was the thing that got me really excited—that made me show it to Vikas.”
Vikas Buchemmavari, another Ph.D. student at UNM and a frequent collaborator, knew more quantum theory than Chow. He works in a research group led by the director of UNM’s Center for Quantum Information and Control, Ivan Deutsch.
“I was simultaneously very impressed by the gate quality and very excited about what the idea meant: we could detect if the atom was there or not without damaging the information in it,” Buchemmavari said.
Verifying the technique
He went to work formalizing the idea into a set of code tailored to detect atom loss. It would use a second atom, not involved in any calculation, to indirectly detect whether an atom of interest is missing.
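The general flavor of such a scheme can be illustrated with a toy state-vector calculation. The sketch below is emphatically not the team's actual gate sequence, just a minimal example of the principle: a sequence of two-qubit gates whose net effect is a Z on the ancilla only when the data atom is present, so an X-basis measurement of the ancilla reveals presence or loss without reading out the data qubit's state.

```python
import numpy as np

# Toy illustration of ancilla-based loss detection (leakage-to-erasure
# conversion in spirit). NOT the team's actual sequence: a hypothetical
# gate sequence whose net effect is Z on the ancilla iff the data atom
# is present, so measuring the ancilla in the X basis flags presence
# without disturbing the data atom's quantum state.

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CZ = np.diag([1, 1, 1, -1]).astype(complex)      # basis ordering |data, ancilla>

def flag_probability(data_state, atom_present):
    """Probability the ancilla flags 'atom present' after the sequence."""
    ancilla = H @ np.array([1, 0], dtype=complex)            # prepare |+>
    psi = np.kron(data_state, ancilla)
    # A lost atom cannot participate in the entangling gate: model CZ as idle.
    cz = CZ if atom_present else np.eye(4, dtype=complex)
    x_data = np.kron(X, I2)
    # CZ, X_data, CZ, X_data acts as Z on the ancilla iff the atom is present
    # (the conditional phase fires exactly once whatever the data state is),
    # and the data qubit is returned to its initial state either way.
    for U in (cz, x_data, cz, x_data):
        psi = U @ psi
    minus = H @ np.array([0, 1], dtype=complex)              # X-basis |->
    projector = np.kron(I2, np.outer(minus, minus.conj()))
    return np.vdot(psi, projector @ psi).real

data = np.array([0.6, 0.8], dtype=complex)    # arbitrary data-qubit state
print("atom present -> P(flag) =", round(flag_probability(data, True), 3))   # 1.0
print("atom lost    -> P(flag) =", round(flag_probability(data, False), 3))  # 0.0
```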
“Quantum systems are very error-prone. To build useful quantum computers, we need quantum error correction techniques that correct the errors and make the calculations reliable. Atom loss—and leakage errors—are some of the worst kinds of errors to deal with,” he said.
The two then developed ways to test their idea.
“You need to test not only your ability to detect an atom, but to detect an atom that starts in many different states,” Chow said. “And then the second part is to check that it doesn’t disturb that state of the first atom.”
Chow’s Sandia team jumped onboard, too, helping test the new routine and verify its results by comparing them to a method of directly observing the atoms.
“We had the capability at Sandia to verify it was working because we have this measurement where we can say the atom is in the one state or the zero state or it’s gone. A lot of people don’t have that third option,” Sandia’s Bethany Little said.
A guide for correcting atom loss
Looking ahead, Buchemmavari said, “We hope this work serves as a guide for other groups implementing these techniques to overcome these errors in their systems. We also hope this spurs deeper research into the advantages and trade-offs of these techniques in real systems.”
Chow, who has since earned his doctoral degree, said he is proud of the discovery because it shows the problem of atom loss is solvable, even if future quantum computers do not use his exact method.
“If you’re careful to keep your eyes open, you might spot something really useful.”
More information: Matthew N. H. Chow et al, Circuit-Based Leakage-to-Erasure Conversion in a Neutral-Atom Quantum Processor, PRX Quantum (2024). DOI: 10.1103/PRXQuantum.5.040343