Scientists transport protons in truck, paving way for antimatter delivery

by Sarah Charley, CERN

The BASE-STEP transportable trap system, lifted by crane through the AD hall before being loaded onto a truck. The team monitored all the parameters during transport. Credit: CERN

Antimatter might sound like something out of science fiction, but at the CERN Antiproton Decelerator (AD), scientists produce and trap antiprotons every day. The BASE experiment can even contain them for more than a year—an impressive feat considering that antimatter and matter annihilate upon contact.

The CERN AD hall is the only place in the world where scientists are able to store and study antiprotons. But this is something that scientists working on the BASE experiment hope to change one day with their subproject BASE-STEP: an apparatus designed to store and transport antimatter.

Most recently, the team of scientists and engineers took an important step towards this goal by transporting a cloud of 70 protons in a truck across CERN’s main site.

“If you can do it with protons, it will also work with antiprotons,” said Christian Smorra, the leader of BASE-STEP. “The only difference is that you need a much better vacuum chamber for the antiprotons.”

This is the first time that loose particles have been transported in a reusable trap that scientists can open in a new location and transfer the contents into another experiment. The end goal is to create an antiproton-delivery service from CERN to experiments located at other laboratories.

Antimatter is a naturally occurring class of particles that is almost identical to ordinary matter except that the charges and magnetic properties are reversed. Its near-total absence from today’s universe has baffled scientists for decades, because according to the laws of physics, the Big Bang should have produced equal amounts of matter and antimatter. These equal-but-opposite particles would have quickly annihilated each other, leaving a simmering but empty universe. Physicists suspect that there are hidden differences that could explain why matter survived and antimatter all but disappeared.

The BASE experiment aims to answer this question by precisely measuring the properties of antiprotons, such as their intrinsic magnetic moment, and then comparing these measurements with those taken with protons. However, the precision the experiment can achieve is limited by its location.

“The accelerator equipment in the AD hall generates magnetic field fluctuations that limit how far we can push our precision measurements,” said BASE spokesperson Stefan Ulmer. “If we want to get an even deeper understanding of the fundamental properties of antiprotons, we need to move out.”

This is where BASE-STEP comes in. The goal is to trap antiprotons and then transfer them to a facility where scientists can study them with a greater precision. To be able to do this, they need a device that is small enough to be loaded onto a truck and can resist the bumps and vibrations that are inevitable during ground transport.

The transportable trap being carefully loaded in the truck before going for a road trip across CERN’s main site. Credit: CERN

The current apparatus—which includes a superconducting magnet, cryogenic cooling, power reserves, and a vacuum chamber that traps the particles using magnetic and electric fields—weighs 1,000 kilograms and needs two cranes to be lifted out of the experimental hall and onto the truck. Even though it weighs a ton, BASE-STEP is much more compact than any existing system used to study antimatter. For example, its footprint is one-fifth that of the original BASE experiment, as it must be narrow enough to fit through ordinary laboratory doors.

During the rehearsal, the scientists used trapped protons as a stand-in for antiprotons. Protons are a key ingredient of every atom, the simplest of which is hydrogen (one proton and one electron). But storing protons as loose particles and then moving them onto a truck is a challenge because any tiny disturbance can draw the unbonded protons back into ordinary atoms.

“When it’s transported by road, our trap system is exposed to acceleration and vibrations, and laboratory experiments are usually not designed for this,” Smorra said. “We needed to build a trap system that is robust enough to withstand these forces, and we have now put this to a real test for the first time.”

However, Smorra noted that the biggest potential hurdle isn’t currently the bumpiness of the road but traffic jams.

“If the transport takes too long, we will run out of helium at some point,” he said. Liquid helium keeps the trap’s superconducting magnet below 8.2 kelvin, its maximum operating temperature. If the drive takes too long, the magnetic field will be lost and the trapped particles will be released, vanishing as soon as they touch ordinary matter.

“Eventually, we want to be able to transport antimatter to our dedicated precision laboratories at the Heinrich Heine University in Düsseldorf, which will allow us to study antimatter with at least 100-fold improved precision,” Smorra said. “In the longer term, we want to transport it to any laboratory in Europe. This means that we need to have a power generator on the truck. We are currently investigating this possibility.”

After this successful test, which included ample monitoring and data-taking, the team plans to refine its procedure with the goal of transporting antimatter next year.

“This is a totally new technology that will open the door for new possibilities of study, not only with antiprotons but also with other exotic particles, such as ultra-highly-charged ions,” Ulmer said.

Another experiment, PUMA, is preparing a transportable trap. Next year, it plans to transport antiprotons 600 meters from the AD hall to CERN’s ISOLDE facility in order to use them to study the properties and structure of exotic atomic nuclei.

Provided by CERN 

Cool journey to the center of the Earth: Researchers build superconducting cryomodule prototype

by Karyn Houston, US Department of Energy

The fully assembled prototype high-beta 650-megahertz cryomodule. Four of these will make up the final stage in Fermilab’s new linear accelerator. Credit: Saravan Chandrasekaran, Fermilab

Patience and complexity are the hallmarks of fundamental scientific research. Work at the Department of Energy (DOE) Office of Science takes time.

Case in point: Technical staff at the DOE’s Fermi National Accelerator Laboratory have built a prototype of a superconducting cryomodule for the Proton Improvement Plan II (PIP-II) project.

Four of these 39-foot-long vessels, which weigh an astonishing 27,500 pounds each, will be responsible for accelerating hydrogen ions to more than 80% of the speed of light. Ultimately, the cryomodules will comprise the last section of the new linear accelerator, or linac, that will drive Fermilab’s accelerator complex.

Physicists like to accelerate particles to higher and higher energies. The higher the energy, the more penetrating and discriminating a particle probe can be. That increased precision allows scientists to study the tiniest of structures.

There are many benefits of faster and faster accelerators. To name a few: destroying cancer cells; revealing the structure of proteins and viruses; creating vaccines and new drugs; and advancing our knowledge of the origins of our universe.

For the PIP-II linac, each superconducting cryomodule vessel will contain a chain of devices called “cavities” at its core. These cavities look like oversized soda cans stacked end-to-end. They’re made of pure niobium, a superconducting material. Electricity flows through the superconducting material with no energy loss when the niobium is kept well below the average temperature of outer space.

Note the prefix “cryo” in the word cryomodule, meaning involving or producing cold, especially extreme cold. To reach a superconducting state, the cavities need to be kept at super-cold temperatures, hovering just above absolute zero.

To keep things cool, the team fills the inside of the vessel with liquid helium. The vessel has many layers of insulation to protect the cavities from outside temperatures that are too warm.

Once the prototype is functioning properly, four of the modules will be assembled to build out the last section of Fermilab’s new linear accelerator.

Here’s how the journey will unfold. The superconducting cryomodules will power beams of hydrogen anions: hydrogen ions made up of one proton and two electrons, instead of the usual one proton and one electron.

The beams will reach a final energy of 800 million electronvolts, or MeV, before they exit the accelerator.
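For a sense of scale, the quoted beam energy and speed can be tied together with special relativity (a back-of-the-envelope check, not a figure from the project). A minimal Python sketch, approximating the rest energy of the hydrogen anion by the proton’s roughly 938 MeV:

    # Rough consistency check: how fast is an 800 MeV hydrogen ion?
    # The H- ion's rest energy is approximated here by the proton's
    # (~938.3 MeV); the two electrons contribute only about 1 MeV.
    import math

    rest_energy_mev = 938.3      # approximate rest energy, MeV
    kinetic_energy_mev = 800.0   # PIP-II design beam energy, MeV

    gamma = 1.0 + kinetic_energy_mev / rest_energy_mev   # Lorentz factor
    beta = math.sqrt(1.0 - 1.0 / gamma**2)               # speed as a fraction of c

    print(f"gamma = {gamma:.3f}, v/c = {beta:.3f}")      # ~0.84, i.e. more than 80% of c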

From there, the beam will transfer to the upgraded Booster and Main Injector accelerators. There it will gain more energy before being turned into neutrinos.

The machine will then send these neutrinos on a 1,300-kilometer journey (800 miles) through Earth to the Deep Underground Neutrino Experiment (DUNE) at the Long Baseline Neutrino Facility in Lead, South Dakota.

The team is now making sure that all the preparations have paid off as the modules are tested at Fermilab’s Cryomodule Test Facility. This will reveal how well the modules function after practice shipments between Fermilab and the United Kingdom.

The final modules will be built by PIP-II’s partners around the world. Three will be assembled at Daresbury Laboratory, run by the Science and Technology Facilities Council of United Kingdom Research and Innovation, and shipped to Fermilab.

The fourth will be assembled at Fermilab using components provided by the Raja Ramanna Centre for Advanced Technology of India’s Department of Atomic Energy.

International partners from India, Italy, France, Poland and the United Kingdom are contributing to many aspects of the PIP-II project.

All of this work is done as part of the PIP-II project, an essential enhancement to the Fermilab accelerator complex. PIP-II will provide neutrinos for DUNE scientists to study.

In parallel, the high-power proton beams delivered by the PIP-II accelerator will enable muon-based experiments to search for new particles and forces at unprecedented levels of precision. This diverse physics program will power new discoveries for decades to come.

Provided by US Department of Energy 

Study observes a phase transition in magic of a quantum system with random circuits

by Ingrid Fadelli, Phys.org

Picture of a trapped-ion quantum computer on which the experiment was conducted. Credit: IonQ

In the context of quantum mechanics and information, “magic” is a key property of quantum states that describes the extent to which they deviate from so-called stabilizer states. Stabilizer states are a class of states that can be effectively simulated on classical computers.

Magic in quantum states is crucial to the realization of universal and fault-tolerant quantum computing via simple gate operations. Gaining insight about the mechanisms behind this property could help engineers to effectively create it and leverage it, thus potentially enabling the development of better performing quantum computers.

Researchers at the University of Maryland and NIST, IonQ Inc. and the Duke Quantum Center recently showed that a random stabilizer code (i.e., a code designed to protect quantum information from errors) exhibits sharply different behavior with regard to magic depending on the coherent errors it is exposed to.

Their observations, outlined in a paper published in Nature Physics, could broaden the understanding of how magic states originate, which could facilitate the generation of these states in quantum computing systems.

“Even though superposition and entanglement are the terms people most often associate with quantum computers, it turns out they aren’t enough to make quantum computers more powerful than classical computers,” Pradeep Niroula, co-author of the paper, told Phys.org.

“To attain a quantum advantage over traditional or classical computers, you need one more ingredient called ‘magic’ or ‘non-stabilizer-ness.’ If your quantum system has no ‘magic,’ it can be simulated by a classical computer, making the quantum computer unnecessary. It is only when your system has a lot of magic that you go beyond what’s possible with a classical computer.”

For error-resistant quantum computers, creating superpositions or entanglement between states is relatively easy. In contrast, adding magic to states, that is, moving them further from easy-to-simulate stabilizer states, is expected to be highly challenging.

“In the literature of quantum information, you often encounter terms like ‘magic state distillation’ or ‘magic state cultivation,’ which refer to pretty arduous processes to create special quantum states with magic that the quantum computer can make use of,” said Niroula.

“Prior to this paper, we had written a paper that observed a similar phase transition in entanglement, in which we had observed phases where measurements of a quantum system preserved or destroyed entanglement depending on how frequent they are.”

While there is an extensive amount of literature focusing on the realization of entanglement in error-corrected quantum computing systems, the underpinnings of magic states remain less understood.

The main goal of the recent study by Niroula and his colleagues was to determine whether a phase transition similar to the one previously observed for entanglement also exists for magic. The existence of such a transition may hint at a more general theory that applies to different quantum properties, including both entanglement and magic.

A) The circuit model used in the study. Coherent error is used to tune magic on a random stabilizer code. B) A schematic illustration of how magic is created and destroyed in the circuit. The coherent errors dislocate a quantum state away from stabilizer states, which are easy to represent and simulate. The final measurements sometimes destroy the injected magic, reverting the states to stabilizer states, and sometimes leave the magic intact. C) The phase diagram of magic. Credit: Niroula et al.

“A general feature of such phase transitions is that they involve two competing forces or processes,” explained Niroula. “One of these creates the resource and the other destroys it—tuning the relative strength or proportion of those processes seems to reveal such transitions.

“In the case of entanglement, a quantum gate acting between two qubits tends to produce entanglement between them, whereas a measurement of one of those qubits tends to destroy the entanglement. Now if you had a quantum circuit with many gates, you can randomly add measurements in the circuit and control the spread of entanglement in the system.”

Past studies focusing on entanglement in quantum circuits have established that if there are too few measurements in a quantum circuit, the entire quantum system becomes entangled. In contrast, if there are too many measurements, entanglement is suppressed and thus minimal. Moreover, if one gradually increases the density of measurements in a system, the entanglement will rapidly shift from high to almost null.

“Measurements destroy magic too, but to be able to controllably add magic to the system, you need to be able to do small rotations of the qubit,” said Niroula. “So, the two competing forces here are ‘how much you measure’ and ‘how much you rotate the qubits.’ What we observed is that at a fixed rate of measurement, you can tune your rotation angle and go from a phase where you have a lot of magic to a phase where you have no magic.”
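The two competing ingredients can be made concrete with a minimal single-qubit sketch (an illustration only, not the trapped-ion circuit used in the study): a small Z rotation pulls the stabilizer state |+> toward a magic state, quantified here by the stabilizer Rényi-2 entropy, while a computational-basis measurement collapses the qubit back to a stabilizer state with zero magic. The phase transition itself is a many-qubit effect; this sketch only shows the tug-of-war.

    # Toy illustration (not the experiment's circuit): magic of one qubit,
    # measured by the stabilizer Renyi-2 entropy, as a rotation angle is tuned.
    import numpy as np

    I = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)

    def stabilizer_renyi_2(psi):
        """M2 = -log2( sum_P <psi|P|psi>^4 / 2 ) for a single-qubit pure state."""
        total = sum(np.real(np.vdot(psi, P @ psi)) ** 4 for P in (I, X, Y, Z))
        return max(0.0, float(-np.log2(total / 2.0)))  # clip -0.0 for stabilizer states

    plus = np.array([1, 1], dtype=complex) / np.sqrt(2)   # stabilizer state |+>

    for theta in (0.0, np.pi / 8, np.pi / 4, np.pi / 2):
        rz = np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])  # coherent "error"
        psi = rz @ plus
        print(f"theta = {theta:.3f}   magic = {stabilizer_renyi_2(psi):.3f}")
    # Magic is zero at theta = 0 and pi/2 (stabilizer states) and maximal at pi/4.

    # A computational-basis measurement collapses the qubit to |0> or |1>,
    # both stabilizer states, so any injected magic is destroyed:
    zero = np.array([1, 0], dtype=complex)
    print(f"after measurement: magic = {stabilizer_renyi_2(zero):.3f}")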

As part of their study, Niroula and his colleagues first ran a series of numerical simulations, which offered a strong indication that a phase transition in magic did in fact take place. Encouraged by these findings, they then set out to test their hypothesis in an experimental setting, using real quantum circuits.

“In our experiment, we observed the signature of the phase transition even in a noisy machine,” said Niroula. “Our work thus uncovered a phase transition in magic.

“Earlier works have uncovered other kinds of transitions in entanglement, in charge, and so on, and this raises the questions: what other resources might exhibit similar transitions? Do they all belong to some universal type of transition? Are they all distinct or are they all related somehow? Also importantly, what does the presence of a phase transition teach us about building noise-resilient quantum computers?”

The findings gathered by this team of researchers open new avenues for research focusing on resources in error-corrected quantum computing systems. Future studies could, for instance, explore other properties and resources that exhibit a phase transition resembling those observed for entanglement and magic.

“Magic states are important for error-correction,” added Niroula. “Our work gives us some insights on when we can concentrate magic and when we can suppress it. One avenue that would be interesting to explore is to see if we can use our experiment as a ‘magic state factory’ where you are producing good magic states for consumption by the quantum computer.

“Currently, there is a lot of interest in the field in demonstrating the primitives or the building blocks of error-correction, and our work could be a part of that.”

More information: Pradeep Niroula et al, Phase transition in magic with random quantum circuits, Nature Physics (2024). DOI: 10.1038/s41567-024-02637-3.

Journal information: Nature Physics 

Study finds optimal standing positions in airport smoking lounges

by American Institute of Physics

Researchers modeled the trail of nicotine particles that are released from the mouth, nose, and cigarette. Credit: Younes Bakhshan

While many smoking rooms in U.S. airports have closed in recent years, they are still common in other airports around the world. These lounges can be ventilated, but how much does that actually help disperse the smoke?

Research published in Physics of Fluids shows that not all standing positions in airport smoking lounges are created equal.

Researchers from the University of Hormozgan in Iran studied nicotine particles in a simulated airport smoking room and found that the thermal environment and positioning of smokers influenced how particles settle in the room.

Additionally, smokers seated farther from ventilation inlets experienced the lowest levels of pollution in the room.

“We expected that people who are standing in the corners would report the same amount of particles settling on their body,” author Younes Bakhshan said. “But according to the numbers that we determined, the wave created by the ventilation in the room is not the same every time.”

The researchers created a smoking room using computational models and placed heated and unheated manikins in the room to simulate smokers. They also modeled the ventilation system with three exhaust air diffusers.

The manikin smokers “exhaled” cigarette smoke through their mouths and noses, and the flow of the particles was modeled and observed. They found that over time, as the concentration of particles in the air decreases, the number of particles settling on the smokers increases.

“According to the results, body heat causes more absorption of cigarette pollution,” Bakhshan said. “We suggest that if people have to smoke in the room, empty places are the best to choose.”

The results gave insight into improving ventilation in smoking lounges.

“According to previous research, displacement ventilation system is the best for a smoking room,” Bakhshan said. “But if we want to optimize the HVAC system, we suggest that the exhaust should be installed on the wall in addition to the vents placed on the ceiling.”

Next, the researchers want to take a step beyond measuring particle dispersion to particle reduction.

“We believe that smokers who go into the smoking room for the sake of others’ health should be also protected from the harmful effects of secondhand smoke,” Bakhshan said.

More information: Numerical simulation of particles distribution of environmental tobacco smoke and its concentration in the smoking room of Shiraz airport, Physics of Fluids (2024). DOI: 10.1063/5.0223568

Journal information: Physics of Fluids 

Provided by American Institute of Physics 

Scientists provide direct evidence of breakdown of spin statistics in ion-atom charge exchange collisions

by Liu Jia, Chinese Academy of Sciences

The reaction microscope at IMP. Credit: IMP

Since the first X-ray image of a comet was reported using an X-ray telescope in 1996, the investigation of charge exchange in collisions between highly charged ions and atoms or molecules has emerged as a hot research topic.

Astrophysicists require more atomic data to model observed X-ray spectra. Traditionally, the charge exchange is assumed to follow statistical rules regarding the total spin quantum number. These assumptions of pure spin statistics are of fundamental importance across various fields.

However, a new study published in Physical Review Letters on October 22 has challenged the assumptions by providing direct evidence of the breakdown of spin statistics in ion-atom charge exchange collisions. This study was led by scientists from the Institute of Modern Physics (IMP) of the Chinese Academy of Sciences (CAS).

The experiment was performed at the low-energy setups of the Heavy Ion Research Facility in Lanzhou, employing the high-resolution reaction microscope, which is characterized by high precision, sensitivity and detection efficiency. Neutral helium was used as the target in collisions with C3+ ions.

“The C3+ ion is a good candidate for this study because it has no long-lived excited states and is always in its ground state in the collision region. Using the reaction microscope, we can easily determine the atomic states at the moment of electron capture in collisions, overcoming the difficulties encountered in previous experiments. Thus, it is relatively easier to accurately analyze the underlying mechanisms,” said Prof. Zhu Xiaolong from IMP, the first author of this study.

Through experimental and theoretical approaches, scientists directly measured spin-resolved cross section ratios, as a probe of spin statistics, which demonstrated the breakdown of spin statistics assumptions at high impact energies where they are traditionally expected to be valid.

“The novel finding raises intriguing questions both in understanding the electronic dynamics during such fast collisional processes and in exploring quantum manipulation of atomic and molecular reactivity,” said Prof. Ma Xinwen from IMP, one of the corresponding authors of this study.

More information: XiaoLong Zhu et al, Direct Evidence of Breakdown of Spin Statistics in Ion-Atom Charge Exchange Collisions, Physical Review Letters (2024). DOI: 10.1103/PhysRevLett.133.173002

Journal information: Physical Review Letters 

Provided by Chinese Academy of Sciences 

Scientists demonstrate precise control over artificial microswimmers using electric fields

by Tejasri Gururaj, Phys.org

Active droplet electrotaxis in a microchannel, with an increasing electric field. Credit: Physical Review Letters (2024). DOI: 10.1103/PhysRevLett.133.158301

In a new study in Physical Review Letters, scientists have demonstrated a method to control artificial microswimmers using electric fields and fluid flow. These microscopic droplets could pave the way for targeted drug delivery and microrobotics.

In the natural world, biological swimmers, like algae and bacteria, can change their direction of movement (or swimming) in response to an external stimulus, like light or electricity. The ability of biological swimmers to change directions in response to electrical fields is known as electrotaxis.

Artificial swimmers that can respond to external stimuli can be extremely helpful for targeted drug delivery applications. In this study, researchers chose to model artificial swimmers that respond to electric fields.

Phys.org spoke to the co-authors of the paper: Ranabir Dey, an Assistant Professor at the Indian Institute of Technology Hyderabad; and Corinna Maaß, an Associate Professor at the University of Twente. Both were formerly at the Max Planck Institute for Dynamics and Self-Organization Göttingen, where the study germinated.

Speaking of their motivation behind the study, Prof. Dey said, “The physics driving active, intrinsic motion is fascinatingly rich and different from the one governing passive, externally driven matter, and we find many complex, even counterintuitive phenomena.”

Prof. Maaß added, “Discovering the working principle behind such effects in a simple model system can help us understand and control far more complicated, even biological systems.”

Artificial swimmers

Artificial swimmers mainly belong to two categories: active colloids (also known as Janus particles) and active droplets. They are called “active” because they move in response to a stimulus.

Janus particles, named after the two-faced Roman god Janus, have two distinct surfaces with different chemical or physical properties. The design allows these surfaces to have an asymmetry for self-propulsion. For example, one side might attract water while the other repels it.

However, Janus particles require specialized materials and external stimuli to move, and their built-in asymmetry introduces complications. They can be challenging to study and work with.

Active droplets, on the other hand, are much simpler in structure. They are oil-based droplets suspended in an aqueous solution. They do not require external stimuli to self-propel, instead relying on internal reactions.

External stimuli like electric fields can be used to change their motion, making them very useful in confined environments like microchannels, which are narrow channels often used in lab-on-a-chip devices and microfluidic systems.

Electrotaxis in artificial swimmers is understudied, especially in confined spaces with flowing fluids (like microchannels). Electrotaxis offers advantages over other forms of taxis: it can be switched on and off instantly, it allows the swimmers’ direction and speed to be adjusted, and it can be scaled to operate over both short and long distances.

Biological swimmers respond naturally to electric fields generated by potential differences across cellular boundaries or tissue structure. However, artificial swimmers don’t, and must be engineered to do so.

Active droplets in microchannels

The researchers aimed to study how active droplets respond to external electric fields in confined microchannels.

“Swimmers have to communicate with the world outside their local environment via interactions with the system boundaries. Imagine guiding a swimmer along a channel—one might want to avoid the swimmer crashing into or adhering to the walls, reorienting it in a specific direction, or staying in a specific area,” explained Prof. Maaß.

Prof. Dey added, “This can be engineered for a wide range of swimmers by choosing appropriate values for an externally applied flow and electric field in the channel.”

The researchers used oil droplets containing a compound called CB15 (commonly used for active droplet studies) mixed in with a surfactant. These droplets were placed in microchannels, with electrodes placed at the ends to apply electric fields. The radius of these droplets was roughly 21 micrometers.

Along with the electric field, the researchers could also control the fluid flow, i.e., the driving pressure, for more comprehensive control. The applied voltage ranged up to 30 volts.

To analyze the trajectories of the active droplets, the researchers used video tracking and particle image velocimetry, which can measure the velocities in fluid flows.

Additionally, they developed a hydrodynamic model incorporating the droplet’s surface charge, movement direction, flow interactions, and electric field orientation to predict electrotactic dynamics.

Controlling flow and electric fields

The experiment found that the droplets showed a range of responses to the varying electric field. The researchers observed that the active droplets perform U-turns when the electric field opposes their motion. They also noted that the velocity of the droplets increases with the strength of the electric field.

By controlling the electric field in conjunction with the flow, the researchers could direct the precise motion of the droplets. This is known as electrorheotaxis.

When the electric field opposed the flow of the droplets, their oscillatory motion was reduced, and the researchers were able to achieve stable centerline swimming.

When the electric field aligned with the flow of the droplets, the researchers were able to maintain upstream swimming with modified oscillations. At high voltages, this switched to downstream swimming, following the wall of the microchannel.

The hydrodynamic modeling revealed the reason behind the motion of the droplets in the electric field. They found that these droplets carry an inherent electric charge, which affects their movement when exposed to an electric field.

They further found that the channel walls also played a role in affecting the droplets’ movement, due to their interactions with the surrounding fluid dynamics. The observed data aligned well with the predictions made by the researchers’ hydrodynamic model.

“We demonstrated that tuning two parameters (flow and electric field) gives access to a distinct number of motility states, encompassing upstream oscillation, wall and centerline motion, and motion reversal (U-turns),” said Prof. Dey.

Potential for more

The study demonstrates that simple droplets can mimic complex biological behaviors, making it a very promising avenue for biomedical applications.

Electric field and pressure-driven flow are readily available methods, which makes this application extremely appealing.

Discussing potential applications, Prof. Maaß said, “Since these guidance principles apply to any swimmer with surface charges in a narrow environment, they could be used to guide motile cells in medical applications, lab-on-a-chip or bioreactor scenarios, and in the design of motile carriers, such as microreactors or intelligent sensors.”

More information: Carola M. Buness et al, Electrotaxis of Self-Propelling Artificial Swimmers in Microchannels, Physical Review Letters (2024). DOI: 10.1103/PhysRevLett.133.158301

Journal information: Physical Review Letters 

New image recognition technique for counting particles provides diffusion information

by David Appell, Phys.org

(a) an illustration of the environment where the countoscope operates, (b) an imaginary two-dimensional box with 54 particles inside, and (c) a plot of the prototype particle number fluctuations over time, from simulations. Credit: Physical Review X (2024). DOI: 10.1103/PhysRevX.14.041016

A team of scientists have invented a new technique to determine the dynamics of microscopic interacting particles by using image recognition to count the number of particles in an imaginary box. By changing the size of the observation box, such counting enables the study of the dynamics of the collective system, even for a dense group of particles suspended in a fluid.

Their work has been published in Physical Review X.

For over a century, scientists of all kinds have sought to exploit counts of particles, such as molecules undergoing Brownian motion in a liquid, to learn about their dynamics, information that researchers in many disciplines would like to have, from biologists studying cells to chemists studying molecules to physicists.

A useful way to characterize this motion is via the “diffusion constant,” which describes how fast a typical particle in the fluid spreads out. This number can be calculated by following an individual particle as it randomly walks through the fluid: its mean squared displacement grows in proportion to time, and, in one dimension, the diffusion constant is half the proportionality constant between that mean squared displacement and time.

To address this limitation, Sophia Marbach of Sorbonne Université in Paris and her colleagues invented a technique they call the “countoscope.” It uses image recognition software to count the particles, which can number in the thousands, inside an imaginary box in the sample.

The system of particles could be a colloid—particles suspended in a liquid—or cellular organisms, or even artificial. The number of particles in these boxes—finite observation volumes—can change as particles move into or out of the field of view, much like they do in a microscope. The user can select the size of the countoscope box desired in order to study the particles’ dynamics at larger or smaller scales.

But following particle paths and displacements can be difficult, if not impossible, if there are a large number of particles and/or they are indistinguishable.

To address this, the group developed an equation based instead on the fluctuating particle counts in the boxes, which can also be used to calculate the diffusion constant and to infer the dynamic properties of interacting particle suspensions. That constant can then be deduced simply by counting and calculating.

The group tested their technique on a two-dimensional layer of 2.8-micron-diameter plastic spheres in a cell filled with water. Using this artificial colloidal system, they chose square boxes with sides from 4 to 32 microns long. The boxes were imaged by a custom-built inverted microscope. Their software then counted, box by box, the number of particles in each box.

With these data they could calculate the mean squared change in each box’s particle number relative to its initial count, which they found increased as the square root of time. By this methodology, their value for the diffusion constant matched that obtained from more traditional methods that reconstruct particle trajectories.
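The counting idea can be sketched with a toy simulation (a simplified, noninteracting model, not the authors’ analysis code; the particle number, box size and averaging choices are illustrative): two-dimensional random walkers are counted inside a fixed square box, and the growth of the count fluctuations over time is the quantity that can be fitted to extract the diffusion constant.

    # Toy illustration of the counting idea (not the authors' code): count
    # non-interacting 2D random walkers inside a fixed box and track how the
    # number fluctuations <(N(t) - N(0))^2> grow with time.
    import numpy as np

    rng = np.random.default_rng(0)
    D, dt, n_steps = 1.0, 0.1, 200            # diffusion constant, time step, steps
    n_particles, n_runs = 2000, 20
    box, domain = 5.0, 50.0                   # half-widths of observation box and domain

    def count_in_box(p):
        return np.sum(np.all(np.abs(p) < box, axis=1))

    fluct = np.zeros(n_steps)
    for _ in range(n_runs):
        pos = rng.uniform(-domain, domain, size=(n_particles, 2))
        n0 = count_in_box(pos)
        for step in range(n_steps):
            pos += rng.normal(scale=np.sqrt(2 * D * dt), size=pos.shape)  # Brownian step
            pos = (pos + domain) % (2 * domain) - domain                  # periodic wrap
            fluct[step] += (count_in_box(pos) - n0) ** 2
    fluct /= n_runs

    t = dt * np.arange(1, n_steps + 1)
    for i in (9, 49, 199):
        print(f"t = {t[i]:5.1f}   <(N(t)-N(0))^2> = {fluct[i]:.1f}")
    # At early times (diffusive spread much smaller than the box) these
    # fluctuations grow roughly as sqrt(D*t); fitting that growth, averaged over
    # many boxes and runs, yields the diffusion constant without any trajectories.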

When they increased the number of particles in their simulated colloid, particles diffused away from their starting points, as was expected. Their method still worked, but they began to see the formation of temporary bunches of particles, about 10 or so, in their prototype setup. This was something not seen in traditional studies, simply because tracking only a single particle at a time cannot reveal bunches.

While the particles did not interact in their prototype colloid, real-world experiments usually cannot be approximated as noninteracting systems. The team found that, unlike in less dense systems, significant deviations from their mathematical expressions took place at high packing fractions (the fraction of space occupied by the spheres).

This was due to interactions between particles, and the team was able to modify its analysis to account for the hydrodynamic and/or steric factors that complicated the system. (Hydrodynamic effects are those induced by the particles’ movement through the fluid, and steric effects arise from the spatial arrangement of the particles.)

In fact, a new length scale appeared in their analysis, characterizing a transition between hyperuniform-like particle behavior and collective states.

The group believes its methodology can be extended. “We trust our analytical approach can be extended to 3D [three dimensions], to solids or crystals,” they wrote in their paper.

“We definitely have received interest in use by other scientists,” said Marbach. “It’s such an easy thing to do actually that some colleagues just tried it on their own data and could see similar or different things depending on the system they were investigating.”

She continued, “Many scientists would like to use the framework to investigate very diverse systems beyond colloids: microalgae, bacteria, active colloids, colloidal glasses, molecules, etc.”

She said there are many directions for future research—to improve the countoscope technique, expand it and generalize it to “include the possibility of probing different dynamical features beyond diffusion. For instance, in microalgae/bacteria/active colloids, we need to know how to resolve active swimming velocities.”

More information: Eleanor K. R. Mackay et al, The Countoscope: Measuring Self and Collective Dynamics without Trajectories, Physical Review X (2024). DOI: 10.1103/PhysRevX.14.041016

Journal information: Physical Review X 

Chromium-62 study helps researchers better understand shapes around islands of inversion

by Michigan State University Facility for Rare Isotope Beams

In a recent paper in Nature Physics, an international research collaboration used world-class instrumentation at the Facility for Rare Isotope Beams (FRIB) to study the rare isotope chromium-62. Researchers used a gamma-ray spectroscopy experiment in tandem with theoretical models to identify an unexpected variety of shapes in chromium-62. The finding provides more insight into islands of inversion. Credit: Facility for Rare Isotope Beams

In a recent paper in Nature Physics, an international research collaboration used world-class instrumentation at the Facility for Rare Isotope Beams (FRIB) to study the exotic nuclide, or rare isotope, chromium-62.

The researchers used a gamma-ray spectroscopy experiment in tandem with theoretical models to identify an unexpected variety of shapes in chromium-62. The finding provides more insight into so-called “islands of inversion,” or regions in the nuclear chart where certain nuclides diverge from traditional viewpoints based on the properties of stable nuclei.

The work involved the joint effort of 23 researchers with 12 different affiliations among them. Led by Alexandra Gade, professor of physics at FRIB and in MSU’s Department of Physics and Astronomy and FRIB scientific director, the collaboration also included Robert Janssens, Edward G. Bilpuch Distinguished Professor at the University of North Carolina at Chapel Hill; and Brenden Longfellow, former FRIB graduate researcher and current staff scientist at Lawrence Livermore National Laboratory, as significant contributors.

“One goal of nuclear theory is to develop a model that describes the properties of all nuclei, including rare isotopes that have many more neutrons than protons and that often do not follow the textbook physics established for their stable cousins,” Gade said.

“Models must be able to describe the structural change in islands of inversion, otherwise they do not incorporate the correct physics and further extrapolation using them may not be useful. In that sense, nuclei in islands of inversion are some of the best stepping stones for testing nuclear models before extrapolating into the unknown.”

Unexpected shapes abound in islands of inversion

Using new, powerful particle accelerators that can probe more exotic nuclei, many researchers are focused on understanding the properties of short-lived, neutron-rich nuclei, including their shape. Scientists know that the more familiar side of the nuclear chart abides by magic numbers of both neutrons and protons.

In recent decades, however, researchers have started to notice that isotopes with many more neutrons than protons can break these rules, and that magic numbers are not as immutable as once thought. Consequently, certain neutron-rich nuclei differ markedly in their nuclear structure when compared to their stable counterparts.

“The interesting thing about these islands of inversion is that the nuclei there are expected to be spherical since they have a magic number of neutrons, but instead they have deformed ground states,” Longfellow said. “The way the protons and neutrons are filling their orbitals in the nuclear shell model is different, far from stability.”

Janssens and Gade have worked together investigating magic nuclear numbers for over 20 years. Janssens pointed out that the technological and infrastructural investments that grew FRIB out of its predecessor, the National Superconducting Cyclotron Laboratory, enabled the researchers to advance work on the frontier of neutron-heavy exotic matter.

“We’ve done many experiments through the years, but until FRIB came online and we also had access to the GRETINA gamma-ray detector, we were almost at a roadblock in this work,” Janssens said. “This is actually the first experiment at FRIB to use the facility’s fragmentation beams in flight.”

GRETINA boosts collaborative research

To investigate chromium-62, the FRIB fragment separator team first shot a high-energy zinc isotope beam toward a beryllium target. In the process, the researchers produced iron-64 isotopes. By knocking out two protons from these iron isotopes, the team was able to form chromium-62.

Even more important to the experiment, however, was access to the Gamma-Ray Energy Tracking In-Beam Nuclear Array (GRETINA). GRETINA was developed by a collaboration led by scientists from Lawrence Berkeley National Laboratory (Berkeley Lab) to serve as a state-of-the-art gamma-ray detection instrument for use at the nation’s leading particle accelerator facilities.

“GRETINA was an integral part of the work,” Gade said. “We tagged the excited states of chromium-62 via their emitted gamma rays. The ways that excited states decay are unique fingerprints, and by selecting them, we can study the properties of individual final states of chromium-62.”

With the help of the FRIB infrastructure and GRETINA, the team found that chromium-62 had a deformed shape in its ground state but was less deformed, with a non-axially symmetric shape, at higher excitation energy. The team extrapolated its findings to calcium isotopes near chromium-62 in the nuclear chart and has a line of investigation for future experimental work.

“Using these findings as a springboard, we will continue our work in this region and measure other observables that characterize these nuclei in the island of inversion. And, as FRIB continues to ramp up its capabilities, we will have access to more neutron-rich tenants of this island of inversion,” Gade said.

In addition, GRETINA will soon be transformed into the Gamma-Ray Energy Tracking Array (GRETA). This will increase the number of gamma-ray detectors that are part of the instrument and enable the detection of signals from nuclei produced in even weaker quantities. Berkeley Lab has had a leadership role in the creation of GRETINA and now GRETA.

The researchers emphasized that in addition to FRIB’s infrastructure, their work benefited from collaborations between multiple U.S.-based research institutions and several European facilities. Gade and Janssens both emphasized that advancing the frontier of nuclear physics requires both investment in research infrastructure and a healthy spirit of collaboration and exchange of ideas.

“Experimental nuclear physics is a team sport,” Gade said. “It takes a group of people with diverse skills to conceive and propose the experiment, run the instruments, analyze, and interpret the data in the framework of many-body computations of nuclear structure and nuclear reactions.”

More information: Alexandra Gade et al, In-beam spectroscopy reveals competing nuclear shapes in the rare isotope 62Cr, Nature Physics (2024). DOI: 10.1038/s41567-024-02680-0

Journal information: Nature Physics 

Provided by Michigan State University Facility for Rare Isotope Beams

How a classical computer beat a quantum computer at its own game

by Mara Johnson-Groh, Simons Foundation

An illustration of a quantum system that was simulated by both classical and quantum computers. The highlighted sections show how the influence of the system’s components is confined to nearby neighbors. Credit: Lucy Reading-Ikkanda/Simons Foundation

Earlier this year, researchers at the Flatiron Institute’s Center for Computational Quantum Physics (CCQ) announced that they had successfully used a classical computer and sophisticated mathematical models to thoroughly outperform a quantum computer on a task that some thought only quantum computers could solve.

Now, those researchers have determined why they were able to trounce the quantum computer at its own game. Their answer, presented in Physical Review Letters, reveals that the quantum problem they tackled—involving a particular two-dimensional quantum system of flipping magnets—displays a behavior known as confinement. This behavior had previously been seen in quantum condensed matter physics only in one-dimensional systems.

This unexpected finding is helping scientists better understand the line dividing the abilities of quantum and classical computers and provides a framework for testing new quantum simulations, says lead author Joseph Tindall, a research fellow at the CCQ.

“There is some boundary that separates what can be done with quantum computing and what can be done with classical computers,” he says. “At the moment, that boundary is incredibly blurry. I think our work helps clarify that boundary a bit more.”

By harnessing principles from quantum mechanics, quantum computers promise huge advantages in processing power and speed over classical computers. While classical computations are limited by the binary operations of ones and zeros, quantum computers can use qubits, which can represent both 0 and 1 simultaneously, to process information in a fundamentally different way.

Quantum technology is still in its infancy, though, and has yet to convincingly demonstrate its superiority over classical computers. As scientists work to figure out where quantum computers might have an edge, they’re coming up with complex problems that test the limits of classical and quantum computers.

The results of one recent test of quantum computers came out in June 2023, when IBM researchers published a paper in the journal Nature. Their paper detailed an experiment simulating a system with an array of tiny flipping magnets evolving over time. The researchers claimed that this simulation was only feasible with a quantum computer, not a classical one. After learning about the new paper through press coverage, Tindall decided to take up the challenge.

Tindall has been working with colleagues over the last several years to develop better algorithms and codes for solving complex quantum problems with classical computers. He applied these methods to IBM’s simulation, and in just two weeks he proved he could solve the problem with very little computing power—it could even be done on a smartphone.

“We didn’t really introduce any cutting-edge techniques,” Tindall says. “We brought a lot of ideas together in a concise and elegant way that made the problem solvable. It was a method that IBM had overlooked and was not easily implemented without well-written software and codes.”

Tindall and his colleagues published their findings in the journal PRX Quantum in January 2024, but Tindall didn’t stop there. Inspired by the simplicity of the results, he and his co-author Dries Sels of the Flatiron Institute and New York University set out to determine why this system could be so easily solved with a classical computer when, on the surface, it appeared to be a very complex problem.

“We started thinking about this question and noticed a number of similarities in the system’s behavior to something people had seen in one dimension called confinement,” Tindall says.

Confinement is a phenomenon that can arise under special circumstances in closed quantum systems and is analogous to the quark confinement known in particle physics. To understand confinement, let’s begin with some quantum basics. On quantum scales, an individual magnet can be oriented up or down, or it can be in a “superposition”—a quantum state in which it points both up and down simultaneously. How up or down the magnet is affects how much energy it has when it’s in a magnetic field.

In the system’s initial setup, the magnets all pointed in the same direction. The system was then perturbed by a small magnetic field, making some of the magnets want to flip, which also encouraged neighboring magnets to flip. This behavior—where the magnets influence each other’s flipping—can lead to entanglement, a linking of the magnets’ superpositions. Over time, the increased entanglement of the system makes it hard for a classical computer to simulate.

However, in a closed system, there’s only so much energy to go around. In their closed system, Tindall and Sels showed that there was only enough energy to flip small, sparsely separated clusters of orientations, directly limiting the growth of entanglement. This energy-based limitation on entanglement is known as confinement, and it occurred as a completely natural consequence of the system’s two-dimensional geometry.
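Confinement was first described in one-dimensional spin chains, where an additional longitudinal field binds flipped regions together. The toy exact-diagonalization sketch below uses that textbook one-dimensional setting (not the heavy-hexagon model analyzed in the paper; the chain length and couplings are illustrative choices) to show the behavior described above: starting from all spins aligned, the magnetization merely oscillates around its initial value rather than relaxing.

    # Toy illustration of confinement in a 1D Ising chain with transverse (hx)
    # and longitudinal (hz) fields; *not* the heavy-hex model of the paper.
    import numpy as np

    N = 8                                # spins; Hilbert space of 2^8 = 256 states
    J, hx, hz = 1.0, 0.3, 0.4            # ZZ coupling, transverse field, longitudinal field

    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    I2 = np.eye(2, dtype=complex)

    def op_on_site(op, site):
        """Embed a single-qubit operator at the given site of the chain."""
        out = np.array([[1.0 + 0.0j]])
        for s in range(N):
            out = np.kron(out, op if s == site else I2)
        return out

    H = np.zeros((2**N, 2**N), dtype=complex)
    for i in range(N - 1):
        H -= J * op_on_site(Z, i) @ op_on_site(Z, i + 1)
    for i in range(N):
        H -= hx * op_on_site(X, i) + hz * op_on_site(Z, i)

    evals, evecs = np.linalg.eigh(H)
    psi0 = np.zeros(2**N, dtype=complex)
    psi0[0] = 1.0                        # |00...0> = all spins up
    coeffs = evecs.conj().T @ psi0

    Mz = sum(op_on_site(Z, i) for i in range(N)) / N
    for t in (0.0, 2.0, 5.0, 10.0, 20.0):
        psi_t = evecs @ (np.exp(-1j * evals * t) * coeffs)
        mz = np.real(np.vdot(psi_t, Mz @ psi_t))
        print(f"t = {t:5.1f}   <Z> per spin = {mz:.3f}")   # stays close to 1, only oscillating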

“In this system, the magnets won’t just suddenly scramble up; they will actually just oscillate around their initial state, even on very long timescales,” Tindall says. “It is quite interesting from a physics perspective because that means the system remains in a state which has a very specific structure to it and isn’t just completely disordered.”

Serendipitously, IBM had, in their initial test, set up a problem where the organization of the magnets in a closed two-dimensional array led to confinement. Tindall and Sels realized that since the confinement of the system reduced the amount of entanglement, it kept the problem simple enough to be described by classical methods. Using simulations and mathematical calculations, Tindall and Sels came up with a simple, accurate mathematical model that describes this behavior.

“One of the big open questions in quantum physics is understanding when entanglement grows rapidly and when it doesn’t,” Tindall says. “This experiment gives us a good understanding of an example where we didn’t get large-scale entanglement due to the model used and the two-dimensional structure of the quantum processor.”

The results suggest that confinement itself could show up in a range of two-dimensional quantum systems. If it does, the mathematical model developed by Tindall and Sels offers an invaluable tool for understanding the physics happening in those systems. Additionally, the codes used in the paper can provide a benchmarking tool for experimental scientists to use as they develop new computer simulations for other quantum problems.

More information: Joseph Tindall et al, Confinement in the Transverse Field Ising Model on the Heavy Hex Lattice, Physical Review Letters (2024). DOI: 10.1103/PhysRevLett.133.180402

Journal information: Physical Review Letters  PRX Quantum  Nature 

Provided by Simons Foundation 

Stochastic thermodynamics may be key to understanding energy costs of computation

by Santa Fe Institute

The mapping between the design features of a computer and its performance when computing a function is mediated by its resource costs. Credit: Proceedings of the National Academy of Sciences (2024). DOI: 10.1073/pnas.2321112121

Two systems exist in thermal equilibrium if no heat passes between them. Computers, which consume energy and give off heat as they process information, operate far from thermal equilibrium. Were they to stop consuming energy—say you let your laptop discharge completely—they would stop functioning.

But how does the amount of energy required by a physical system to perform a computation depend on the details of the computation?

Physicists and computer scientists have been trying to connect thermodynamics and computation for more than a century. The tradeoff has always been a theoretical concern, but the ubiquity of digital devices makes it a practical one, too. Until recently, researchers lacked a rigorous way to study these kinds of systems.

That changed in the early 21st century with the introduction of a new field called stochastic thermodynamics. “This was a major revolution in nonequilibrium physics,” says SFI Professor David Wolpert.

The field’s mathematical tools are exactly what scientists need to use to probe the inner workings of computational systems, since those systems are (far) out of equilibrium, according to a Perspective published this week in the Proceedings of the National Academy of Sciences. The authors, led by Wolpert and Jan Korbel, a postdoctoral researcher at Complexity Science Hub in Vienna, argue that stochastic thermodynamics can unearth deep connections between computation and thermodynamics.

“It provides us with the tools to investigate and quantify with equations all that’s going on with systems, even arbitrarily far from equilibrium,” Wolpert says. The tools include mathematical theorems, uncertainty relations, and even thermodynamic speed limits that apply to the behavior of nonequilibrium systems at all scales, from the very small to the macroscopic.

These considerations were absent in the work of 20th-century physicists, Wolpert says. “They provide us with a way to think about the actual energetics of these systems, and we’ve never had them before.”

Korbel notes that these tools can help researchers probe connections among energy, computation, and the effects on the climate. “Every calculation in every computer requires energy, some of which is lost as heat—warming not only the system but also the planet,” he says. “As the energy demands of computation continue to grow, it is essential to minimize these losses.”

Wolpert emphasizes that the potential gains from using stochastic thermodynamics reach far beyond artificial computers like laptops and phones. Cells carry out computations far from equilibrium; so do neurons in the brain. On larger time scales, social systems and even biological evolution operate out of equilibrium.

On a practical level, says Wolpert, a closer understanding of the energy of computation could point to more energy-efficient ways to design real-world devices. Findings in stochastic thermodynamics, he says, “are ubiquitous across anything we might consider to be computing. In many ways, it provides a unifying glue by which to relate and integrate all these different fields.”

More information: David H. Wolpert et al, Is stochastic thermodynamics the key to understanding the energy costs of computation? Proceedings of the National Academy of Sciences (2024). DOI: 10.1073/pnas.2321112121

Journal information: Proceedings of the National Academy of Sciences 

Provided by Santa Fe Institute