Scientists provide direct evidence of breakdown of spin statistics in ion-atom charge exchange collisions

by Liu Jia, Chinese Academy of Sciences

The reaction microscope at IMP. Credit: IMP

Since the first X-ray image of a comet was reported using an X-ray telescope in 1996, the investigation of charge exchange in collisions between highly charged ions and atoms or molecules has emerged as a hot research topic.

Astrophysicists require more atomic data to model observed X-ray spectra. Traditionally, the charge exchange is assumed to follow statistical rules regarding the total spin quantum number. These assumptions of pure spin statistics are of fundamental importance across various fields.

However, a new study published in Physical Review Letters on October 22 has challenged the assumptions by providing direct evidence of the breakdown of spin statistics in ion-atom charge exchange collisions. This study was led by scientists from the Institute of Modern Physics (IMP) of the Chinese Academy of Sciences (CAS).

The experiment was performed at the low-energy setup of the Heavy Ion Research Facility in Lanzhou, employing the high-resolution reaction microscope, which is characterized by high precision, sensitivity and detection efficiency. Neutral helium was used as the target in collisions with C3+ ions.

“The C3+ ion is a good candidate for this study because it has no long-lived excited states and is always in its ground state in the collision region. Using the reaction microscope, we can easily determine the atomic states at the moment of electron capture in collisions, overcoming the difficulties encountered in previous experiments. Thus, it is relatively easy to accurately analyze the underlying mechanisms,” said Prof. Zhu Xiaolong from IMP, the first author of this study.

Combining experimental and theoretical approaches, the scientists directly measured spin-resolved cross-section ratios as a probe of spin statistics. The measurements demonstrate a breakdown of the spin-statistics assumption at high impact energies, precisely where it is traditionally expected to hold.
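
For background, the pure spin-statistics assumption being tested can be stated compactly (a standard textbook relation, not a formula quoted from the paper): when the captured electron couples with the projectile’s single active electron, the singlet (S = 0) and triplet (S = 1) channels are assumed to be populated in proportion to their multiplicities 2S + 1,

$$\frac{\sigma_{\mathrm{triplet}}}{\sigma_{\mathrm{singlet}}} = \frac{2\cdot 1 + 1}{2\cdot 0 + 1} = 3,$$

and it is deviations of the measured cross-section ratio from this value that signal the breakdown reported here.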

“The novel finding raises intriguing questions both in understanding the electronic dynamics during such fast collisional processes and in exploring quantum manipulation of atomic and molecular reactivity,” said Prof. Ma Xinwen from IMP, one of the corresponding authors of this study.

More information: XiaoLong Zhu et al, Direct Evidence of Breakdown of Spin Statistics in Ion-Atom Charge Exchange Collisions, Physical Review Letters (2024). DOI: 10.1103/PhysRevLett.133.173002

Journal information: Physical Review Letters 

Provided by Chinese Academy of Sciences 

Scientists demonstrate precise control over artificial microswimmers using electric fields

by Tejasri Gururaj, Phys.org

Active droplet electrotaxis in a microchannel, with an increasing electric field. Credit: Physical Review Letters (2024). DOI: 10.1103/PhysRevLett.133.158301

In a new study in Physical Review Letters, scientists have demonstrated a method to control artificial microswimmers using electric fields and fluid flow. These microscopic droplets could pave the way for targeted drug delivery and microrobotics.

In the natural world, biological swimmers, like algae and bacteria, can change their direction of movement (or swimming) in response to an external stimulus, like light or electricity. The ability of biological swimmers to change directions in response to electrical fields is known as electrotaxis.

Artificial swimmers that can respond to external stimuli can be extremely helpful for targeted drug delivery applications. In this study, researchers chose to model artificial swimmers that respond to electric fields.

Phys.org spoke to the co-authors of the paper: Ranabir Dey, an Assistant Professor at the Indian Institute of Technology Hyderabad; and Corinna Maaß, an Associate Professor at the University of Twente. Both were formerly at the Max Planck Institute for Dynamics and Self-Organization Göttingen, where the study germinated.

Speaking of their motivation behind the study, Prof. Dey said, “The physics driving active, intrinsic motion is fascinatingly rich and different from the one governing passive, externally driven matter, and we find many complex, even counterintuitive phenomena.”

Prof. Maaß added, “Discovering the working principle behind such effects in a simple model system can help us understand and control far more complicated, even biological systems.”

Artificial swimmers

Artificial swimmers mainly belong to two categories: active colloids (also known as Janus particles) and active droplets. They are called “active” because they move in response to a stimulus.

Janus particles, named after the two-faced Roman god Janus, have two distinct surfaces with different chemical or physical properties. The design allows these surfaces to have an asymmetry for self-propulsion. For example, one side might attract water while the other repels it.

However, Janus particles require specialized materials and external stimuli to move, and their built-in asymmetry adds fabrication complications. They can be challenging to study and work with.

Active droplets, on the other hand, are much simpler in structure. They are oil-based droplets suspended in an aqueous solution. They do not require external stimuli to self-propel, instead relying on internal reactions.

External stimuli like electric fields can be used to change their motion, making them very useful in confined environments like microchannels, which are narrow channels often used in lab-on-a-chip devices and microfluidic systems.

Electrotaxis in artificial swimmers is understudied, especially in confined, fluid-filled spaces such as microchannels. Electrotaxis offers advantages over other forms of taxis: it can be switched on and off instantly, it allows both the direction and the speed of the swimmers to be adjusted, and it can be scaled to operate over short and long distances.

Biological swimmers respond naturally to electric fields generated by potential differences across cellular boundaries or tissue structure. However, artificial swimmers don’t, and must be engineered to do so.

Active droplets in microchannels

The researchers aimed to study how active droplets respond to external electric fields in confined microchannels.

“Swimmers have to communicate with the world outside their local environment via interactions with the system boundaries. Imagine guiding a swimmer along a channel—one might want to avoid the swimmer crashing into or adhering to the walls, reorienting it in a specific direction, or staying in a specific area,” explained Prof. Maaß.

Prof. Dey added, “This can be engineered for a wide range of swimmers by choosing appropriate values for an externally applied flow and electric field in the channel.”

The researchers used oil droplets containing a compound called CB15 (commonly used for active droplet studies) mixed in with a surfactant. These droplets were placed in microchannels, with electrodes placed at the ends to apply electric fields. The radius of these droplets was roughly 21 micrometers.

Along with the electric field, the researchers could also control the fluid flow via the applied pressure, giving them more comprehensive control. The applied voltage was varied up to 30 volts.

To analyze the trajectories of the active droplets, the researchers used video tracking and particle image velocimetry, which can measure the velocities in fluid flows.

Additionally, they developed a hydrodynamic model incorporating the droplet’s surface charge, movement direction, flow interactions, and electric field orientation to predict electrotactic dynamics.
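
To make the ingredients of such a model concrete, here is a deliberately generic toy simulation of a confined swimmer, not the authors’ hydrodynamic model: an overdamped active particle in a two-dimensional channel with Poiseuille flow, whose orientation is rotated by the local shear and nudged toward the direction of an applied field. The parameter values and the simple alignment term are illustrative assumptions.

```python
# Generic toy: an overdamped active particle in a 2D channel with Poiseuille
# flow, shear-induced rotation, and alignment toward a field direction.
# Illustrative sketch only, NOT the authors' hydrodynamic model; all parameter
# values (v0, u_max, omega_E, D_rot, ...) are made up.
import numpy as np

rng = np.random.default_rng(0)

H = 1.0           # channel half-width
v0 = 1.0          # self-propulsion speed
u_max = 0.8       # centerline speed of the imposed Poiseuille flow
omega_E = 1.5     # strength of alignment toward the field direction
theta_E = np.pi   # field direction (here pointing upstream, against the flow)
D_rot = 0.05      # rotational noise strength
dt, steps = 0.01, 5000

x, y, theta = 0.0, 0.3 * H, 0.0   # start off-center, oriented downstream

for step in range(steps):
    u = u_max * (1.0 - (y / H) ** 2)       # local flow speed
    shear = -2.0 * u_max * y / H**2        # du/dy, rotates the swimmer
    # Translation: self-propulsion plus advection by the flow.
    x += (v0 * np.cos(theta) + u) * dt
    y += v0 * np.sin(theta) * dt
    # Orientation: shear rotation + field alignment + rotational noise.
    theta += (-0.5 * shear * dt
              + omega_E * np.sin(theta_E - theta) * dt
              + np.sqrt(2.0 * D_rot * dt) * rng.normal())
    # Simple reflecting walls.
    if abs(y) > H:
        y = np.sign(y) * (2.0 * H - abs(y))
        theta = -theta
    if step % 1000 == 0:
        print(f"t={step * dt:6.2f}  x={x:8.2f}  y={y:+.2f}")
```

Sweeping the made-up field strength omega_E and flow speed u_max in a sketch like this is the toy analog of the two experimental control knobs, the electric field and the imposed flow, that the researchers tune.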

Controlling flow and electric fields

The experiment found that the droplets showed a range of responses to the varying electric field. The researchers observed that the active droplets perform U-turns when the electric field opposes their motion. They also noted that the velocity of the droplets increases with the strength of the electric field.

By controlling the electric field in conjunction with the flow, the researchers could precisely direct the motion of the droplets, a combined response known as electrorheotaxis.

When the electric field opposed the flow of the droplets, their oscillatory motion was reduced, and the researchers were able to achieve stable centerline swimming.

When the electric field aligned with the flow of the droplets, the researchers were able to maintain upstream swimming with modified oscillations. At high voltages, this switched to downstream swimming, following the wall of the microchannel.

The hydrodynamic modeling revealed the reason behind the motion of the droplets in the electric field. They found that these droplets carry an inherent electric charge, which affects their movement when exposed to an electric field.

They further found that the channel walls also affected the droplets’ movement through the way they shape the surrounding fluid flow. The observed data aligned well with the predictions of the researchers’ hydrodynamic model.

“We demonstrated that tuning two parameters (flow and electric field) gives access to a distinct number of motility states, encompassing upstream oscillation, wall and centerline motion, and motion reversal (U-turns),” said Prof. Dey.

Potential for more

The study demonstrates that simple droplets can mimic complex biological behaviors, making it a very promising avenue for biomedical applications.

Electric field and pressure-driven flow are readily available methods, which makes this application extremely appealing.

Discussing potential applications, Prof. Maaß said, “Since these guidance principles apply to any swimmer with surface charges in a narrow environment, they could be used to guide motile cells in medical applications, lab-on-a-chip or bioreactor scenarios, and in the design of motile carriers, such as microreactors or intelligent sensors.”

More information: Carola M. Buness et al, Electrotaxis of Self-Propelling Artificial Swimmers in Microchannels, Physical Review Letters (2024). DOI: 10.1103/PhysRevLett.133.158301

Journal information: Physical Review Letters 

New image recognition technique for counting particles provides diffusion information

by David Appell, Phys.org

(a) an illustration of the environment where the countoscope operates, (b) an imaginary two-dimensional box with 54 particles inside, and (c) a plot of the prototype particle number fluctuations over time, from simulations. Credit: Physical Review X (2024). DOI: 10.1103/PhysRevX.14.041016

A team of scientists have invented a new technique to determine the dynamics of microscopic interacting particles by using image recognition to count the number of particles in an imaginary box. By changing the size of the observation box, such counting enables the study of the dynamics of the collective system, even for a dense group of particles suspended in a fluid.

Their work has been published in Physical Review X.

For over a century, scientists have sought to exploit counts of particles, such as molecules undergoing Brownian motion in a liquid, to characterize their dynamics, information that researchers in many disciplines would like to have, from biologists studying cells to chemists studying molecules to physicists.

A useful way to characterize this motion is via the “diffusion constant,” which describes how fast a typical particle in the fluid moves. It can be obtained by following an individual particle as it randomly walks through the fluid: the mean squared displacement grows in proportion to time, and the diffusion constant is that proportionality constant divided by twice the number of spatial dimensions.
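
As a quick numerical illustration of that recipe, using simulated trajectories with an arbitrary “true” diffusion constant rather than experimental data:

```python
# Estimate a diffusion constant from the mean squared displacement (MSD) of
# simulated 2D Brownian trajectories. In d dimensions MSD(t) = 2*d*D*t, so the
# fitted slope divided by 2*d recovers D. The "true" value below is arbitrary.
import numpy as np

rng = np.random.default_rng(1)
D_true = 0.5                        # illustrative value
dt, n_steps, n_particles = 0.1, 500, 200

# Gaussian steps with variance 2*D*dt per coordinate, accumulated into paths.
steps = rng.normal(scale=np.sqrt(2 * D_true * dt), size=(n_particles, n_steps, 2))
traj = np.cumsum(steps, axis=1)

times = np.arange(1, n_steps + 1) * dt
msd = np.mean(np.sum(traj**2, axis=2), axis=0)   # average over particles

slope = np.polyfit(times, msd, 1)[0]             # linear fit of MSD vs time
print(f"estimated D = {slope / 4:.3f}  (true value {D_true})")   # 2*d = 4 in 2D
```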

To go beyond this single-particle approach, Sophia Marbach of Sorbonne Université in Paris and her colleagues invented a technique they call the “countoscope.” It uses image recognition software to count the particles, which can number in the thousands, inside an imaginary box placed over the sample.

The system of particles could be a colloid—particles suspended in a liquid—or cellular organisms, or even artificial. The number of particles in these boxes—finite observation volumes—can change as particles move into or out of the field of view, much like they do in a microscope. The user can select the size of the countoscope box desired in order to study the particles’ dynamics at larger or smaller scales.

But following particle paths and displacements can be difficult, if not impossible, if there are a large number of particles and/or they are indistinguishable.

To address this, the group developed an equation that instead uses the fluctuating particle counts in the boxes to calculate the diffusion constant and to infer the dynamic properties of interacting particle suspensions. The constant can then be deduced simply by counting and calculating.
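
A minimal sketch of the counting idea, using non-interacting simulated particles and arbitrary parameters (the quantitative expressions in the paper are considerably more detailed):

```python
# Count simulated Brownian particles inside a fixed observation box over time
# and examine how the count fluctuates. The mean squared change of the count,
# <(N(t) - N(0))^2>, carries diffusive information without ever identifying
# individual particles. All parameter values are arbitrary.
import numpy as np

rng = np.random.default_rng(2)
D, dt, n_steps = 0.5, 0.1, 2000
n_particles, box_size, L_obs = 400, 40.0, 8.0   # periodic system, observation box

pos = rng.uniform(0, box_size, size=(n_particles, 2))
counts = np.empty(n_steps)

for t in range(n_steps):
    pos += rng.normal(scale=np.sqrt(2 * D * dt), size=pos.shape)
    pos %= box_size                           # periodic boundaries
    inside = np.all(pos < L_obs, axis=1)      # observation box in one corner
    counts[t] = inside.sum()

# Mean squared change of the count as a function of lag time.
for lag in (10, 50, 200, 800):
    dn2 = np.mean((counts[lag:] - counts[:-lag]) ** 2)
    print(f"lag time {lag * dt:6.1f}   <(N(t)-N(0))^2> = {dn2:6.2f}")
```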

The group tested their technique on a two-dimensional layer of 2.8-micron-diameter plastic spheres in a cell filled with water. Using this artificial colloidal system, they chose square boxes with sides from 4 to 32 microns long. The boxes were imaged by a custom-built inverted microscope. Their software then counted, box by box, the number of particles in each box.

With these data they could calculate the mean squared change in the particle number in a box relative to its initial count, which they found increased as the square root of time. With this methodology, their value for the diffusion constant matched that obtained from more traditional methods that reconstruct particle trajectories.

When they increased the number of particles in their simulated colloid, particles diffused away from their starting points, as was expected. Their method still worked, but they began to see the formation of temporary bunches of particles, about 10 or so, in their prototype setup. This was something not seen in traditional studies, simply because tracking only a single particle at a time cannot reveal bunches.

While the particles in their prototype colloid did not interact appreciably, real-world experiments usually cannot be approximated as noninteracting systems. At high packing fractions (the packing fraction specifies how densely the spheres fill the system), the team found significant deviations from their mathematical expressions that do not appear in less dense systems.

This was due to interactions between particles, and they were able to modify their analysis for cases where hydrodynamic and/or steric factors complicated the system. (Hydrodynamic effects are those induced by the particles’ movement through the fluid, and steric effects arise from the spatial arrangement of the particles.)

In fact, a new length scale appeared in their analysis, characterizing a transition between hyperuniform-like particle behavior and collective states.

The researchers believe their methodology can be extended. “We trust our analytical approach can be extended to 3D [three dimensions], to solids or crystals,” they wrote in their paper.

“We definitely have received interest in use by other scientists,” said Marbach. “It’s such an easy thing to do actually that some colleagues just tried it on their own data and could see similar or different things depending on the system they were investigating.”

She continued, “Many scientists would like to use the framework to investigate very diverse systems beyond colloids: microalgae, bacteria, active colloids, colloidal glasses, molecules, etc.”

She said there are many directions for future research—to improve the countoscope technique, expand it and generalize it to “include the possibility of probing different dynamical features beyond diffusion. For instance, in microalgae/bacteria/active colloids, we need to know how to resolve active swimming velocities.”

More information: Eleanor K. R. Mackay et al, The Countoscope: Measuring Self and Collective Dynamics without Trajectories, Physical Review X (2024). DOI: 10.1103/PhysRevX.14.041016

Journal information: Physical Review X 

Chromium-62 study helps researchers better understand shapes around islands of inversion

by Michigan State University Facility for Rare Isotope Beams

In a recent paper in Nature Physics, an international research collaboration used world-class instrumentation at the Facility for Rare Isotope Beams (FRIB) to study the rare isotope chromium-62. Researchers used a gamma-ray spectroscopy experiment in tandem with theoretical models to identify an unexpected variety of shapes in chromium-62. The finding provides more insight into islands of inversion. Credit: Facility for Rare Isotope Beams

In a recent paper in Nature Physics, an international research collaboration used world-class instrumentation at the Facility for Rare Isotope Beams (FRIB) to study the exotic nuclide, or rare isotope, chromium-62.

The researchers used a gamma-ray spectroscopy experiment in tandem with theoretical models to identify an unexpected variety of shapes in chromium-62. The finding provides more insight into so-called “islands of inversion,” or regions in the nuclear chart where certain nuclides diverge from traditional viewpoints based on the properties of stable nuclei.

The work involved the joint effort of 23 researchers with 12 different affiliations among them. The collaboration was led by Alexandra Gade, FRIB scientific director and professor of physics at FRIB and in MSU’s Department of Physics and Astronomy, and included, as significant contributors, Robert Janssens, Edward G. Bilpuch Distinguished Professor at the University of North Carolina at Chapel Hill; and Brenden Longfellow, a former FRIB graduate researcher and current staff scientist at Lawrence Livermore National Laboratory.

“One goal of nuclear theory is to develop a model that describes the properties of all nuclei, including rare isotopes that have many more neutrons than protons and that often do not follow the textbook physics established for their stable cousins,” Gade said.

“Models must be able to describe the structural change in islands of inversion, otherwise they do not incorporate the correct physics and further extrapolation using them may not be useful. In that sense, nuclei in islands of inversion are some of the best stepping stones for testing nuclear models before extrapolating into the unknown.”

Unexpected shapes abound in islands of inversion

Using new, powerful particle accelerators that can probe more exotic nuclei, many researchers are focused on understanding the properties of short-lived, neutron-rich nuclei, including their shape. Scientists know that the more familiar side of the nuclear chart abides by magic numbers of both neutrons and protons.

In recent decades, however, researchers have started to notice that isotopes with many more neutrons than protons can break these rules, and that magic numbers are not as immutable as once thought. Consequently, certain neutron-rich nuclei differ markedly in their nuclear structure when compared to their stable counterparts.

“The interesting thing about these islands of inversion is that the nuclei there are expected to be spherical since they have a magic number of neutrons, but instead they have deformed ground states,” Longfellow said. “The way the protons and neutrons are filling their orbitals in the nuclear shell model is different, far from stability.”

Janssens and Gade have worked together investigating magic nuclear numbers for over 20 years. Janssens pointed out that the technological and infrastructural investments that grew FRIB out of its predecessor, the National Superconducting Cyclotron Laboratory, enabled the researchers to advance work on the frontier of neutron-heavy exotic matter.

“We’ve done many experiments through the years, but until FRIB came online and we also had access to the GRETINA gamma-ray detector, we were almost at a roadblock in this work,” Janssens said. “This is actually the first experiment at FRIB to use the facility’s fragmentation beams in flight.”

GRETINA boosts collaborative research

To investigate chromium-62, the FRIB fragment separator team first shot a high-energy zinc isotope beam toward a beryllium target. In the process, the researchers produced iron-64 isotopes. By knocking out two protons from these iron isotopes, the team was able to form chromium-62.

Even more important to the experiment, however, was access to the Gamma-Ray Energy Tracking In-Beam Nuclear Array (GRETINA). GRETINA was developed by a collaboration led by scientists from Lawrence Berkeley National Laboratory (Berkeley Lab) to serve as a state-of-the-art gamma-ray detection instrument for use at the nation’s leading particle accelerator facilities.

“GRETINA was an integral part of the work,” Gade said. “We tagged the excited states of chromium-62 via their emitted gamma rays. The ways that excited states decay are unique fingerprints, and by selecting them, we can study the properties of individual final states of chromium-62.”

With the help of the FRIB infrastructure and GRETINA, the team found that chromium-62 has a deformed ground state but takes on a less deformed, non-axially symmetric shape at higher excitation energy. The team extrapolated its findings to calcium isotopes near chromium-62 on the nuclear chart, setting out a line of investigation for future experimental work.

“Using these findings as a springboard, we will continue our work in this region and measure other observables that characterize these nuclei in the island of inversion. And, as FRIB continues to ramp up its capabilities, we will have access to more neutron-rich tenants of this island of inversion,” Gade said.

In addition, GRETINA will soon be transformed into the Gamma-Ray Energy Tracking Array (GRETA). This will increase the number of gamma-ray detectors that are part of the instrument and enable the detection of signals from nuclei produced in even weaker quantities. Berkeley Lab has had a leadership role in the creation of GRETINA and now GRETA.

The researchers emphasized that in addition to FRIB’s infrastructure, their work benefited from collaborations between multiple U.S.-based research institutions and several European facilities. Gade and Janssens both emphasized that advancing the frontier of nuclear physics requires both investment in research infrastructure and a healthy spirit of collaboration and exchange of ideas.

“Experimental nuclear physics is a team sport,” Gade said. “It takes a group of people with diverse skills to conceive and propose the experiment, run the instruments, analyze, and interpret the data in the framework of many-body computations of nuclear structure and nuclear reactions.”

More information: Alexandra Gade et al, In-beam spectroscopy reveals competing nuclear shapes in the rare isotope 62Cr, Nature Physics (2024). DOI: 10.1038/s41567-024-02680-0

Journal information: Nature Physics 

Provided by Michigan State University Facility for Rare Isotope Beams

How a classical computer beat a quantum computer at its own game

by Mara Johnson-Groh, Simons Foundation

An illustration of a quantum system that was simulated by both classical and quantum computers. The highlighted sections show how the influence of the system’s components is confined to nearby neighbors. Credit: Lucy Reading-Ikkanda/Simons Foundation

Earlier this year, researchers at the Flatiron Institute’s Center for Computational Quantum Physics (CCQ) announced that they had successfully used a classical computer and sophisticated mathematical models to thoroughly outperform a quantum computer on a task that some thought only quantum computers could solve.

Now, those researchers have determined why they were able to trounce the quantum computer at its own game. Their answer, presented in Physical Review Letters, reveals that the quantum problem they tackled—involving a particular two-dimensional quantum system of flipping magnets—displays a behavior known as confinement. This behavior had previously been seen in quantum condensed matter physics only in one-dimensional systems.

This unexpected finding is helping scientists better understand the line dividing the abilities of quantum and classical computers and provides a framework for testing new quantum simulations, says lead author Joseph Tindall, a research fellow at the CCQ.

“There is some boundary that separates what can be done with quantum computing and what can be done with classical computers,” he says. “At the moment, that boundary is incredibly blurry. I think our work helps clarify that boundary a bit more.”

By harnessing principles from quantum mechanics, quantum computers promise huge advantages in processing power and speed over classical computers. While classical computations are limited by the binary operations of ones and zeros, quantum computers can use qubits, which can represent both 0 and 1 simultaneously, to process information in a fundamentally different way.

Quantum technology is still in its infancy, though, and has yet to convincingly demonstrate its superiority over classical computers. As scientists work to figure out where quantum computers might have an edge, they’re coming up with complex problems that test the limits of classical and quantum computers.

The results of one recent test of quantum computers came out in June 2023, when IBM researchers published a paper in the journal Nature. Their paper detailed an experiment simulating a system with an array of tiny flipping magnets evolving over time. The researchers claimed that this simulation was only feasible with a quantum computer, not a classical one. After learning about the new paper through press coverage, Tindall decided to take up the challenge.

Tindall has been working with colleagues over the last several years to develop better algorithms and codes for solving complex quantum problems with classical computers. He applied these methods to IBM’s simulation, and in just two weeks he proved he could solve the problem with very little computing power—it could even be done on a smartphone.

“We didn’t really introduce any cutting-edge techniques,” Tindall says. “We brought a lot of ideas together in a concise and elegant way that made the problem solvable. It was a method that IBM had overlooked and was not easily implemented without well-written software and codes.”

Tindall and his colleagues published their findings in the journal PRX Quantum in January 2024, but Tindall didn’t stop there. Inspired by the simplicity of the results, he and his co-author Dries Sels of the Flatiron Institute and New York University set out to determine why this system could be so easily solved with a classical computer when, on the surface, it appeared to be a very complex problem.

“We started thinking about this question and noticed a number of similarities in the system’s behavior to something people had seen in one dimension called confinement,” Tindall says.

Confinement is a phenomenon that can arise under special circumstances in closed quantum systems and is analogous to the quark confinement known in particle physics. To understand confinement, let’s begin with some quantum basics. On quantum scales, an individual magnet can be oriented up or down, or it can be in a “superposition”—a quantum state in which it points both up and down simultaneously. How up or down the magnet is affects how much energy it has when it’s in a magnetic field.

In the system’s initial setup, the magnets all pointed in the same direction. The system was then perturbed by a small magnetic field, making some of the magnets want to flip, which also encouraged neighboring magnets to flip. This behavior—where the magnets influence each other’s flipping—can lead to entanglement, a linking of the magnets’ superpositions. Over time, the increased entanglement of the system makes it hard for a classical computer to simulate.

However, in a closed system, there’s only so much energy to go around. In their closed system, Tindall and Sels showed that there was only enough energy to flip small, sparsely separated clusters of orientations, directly limiting the growth of entanglement. This energy-based limitation on entanglement is known as confinement, and it occurred as a completely natural consequence of the system’s two-dimensional geometry.

“In this system, the magnets won’t just suddenly scramble up; they will actually just oscillate around their initial state, even on very long timescales,” Tindall says. “It is quite interesting from a physics perspective because that means the system remains in a state which has a very specific structure to it and isn’t just completely disordered.”
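
The oscillation Tindall describes can be reproduced qualitatively in a deliberately small toy model. The sketch below evolves a one-dimensional Ising chain with transverse and longitudinal fields exactly from a fully polarized state; it is not the 127-qubit heavy-hex system of the paper, and the field values are made up. With these values the domain walls are confined and the average magnetization oscillates around its initial value instead of relaxing.

```python
# 1D toy of confinement: exact evolution of a small Ising chain with transverse
# (hx) and longitudinal (hz) fields, quenched from the all-up state. Parameters
# are illustrative; this is not the heavy-hex model studied in the paper.
import numpy as np
from scipy.linalg import expm

N = 10                       # number of spins (2**N = 1024 basis states)
J, hx, hz = 1.0, 0.4, 0.2    # Ising coupling, transverse field, longitudinal field

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def op_at(op, site):
    """Operator acting with `op` on `site` of an N-spin chain (tensor product)."""
    mats = [I2] * N
    mats[site] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

# H = -J sum Z_i Z_{i+1} - hx sum X_i - hz sum Z_i  (open boundary conditions)
H = np.zeros((2**N, 2**N), dtype=complex)
for i in range(N - 1):
    H -= J * op_at(sz, i) @ op_at(sz, i + 1)
for i in range(N):
    H -= hx * op_at(sx, i) + hz * op_at(sz, i)

# Initial state: all spins up (each qubit in |0>, an eigenstate of Z with +1).
psi = np.zeros(2**N, dtype=complex)
psi[0] = 1.0

Mz = sum(op_at(sz, i) for i in range(N)) / N   # average magnetization operator

dt = 0.05
U = expm(-1j * H * dt)                         # one exact time step
for step in range(201):
    if step % 20 == 0:
        m = np.real(psi.conj() @ (Mz @ psi))
        print(f"t = {step * dt:5.2f}   <Z> = {m:+.3f}")
    psi = U @ psi
```

In this small chain the average magnetization stays close to +1 and merely oscillates, the hallmark of the confinement the article describes; the longitudinal field hz sets the strength of the confining force.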

Serendipitously, IBM had, in their initial test, set up a problem where the organization of the magnets in a closed two-dimensional array led to confinement. Tindall and Sels realized that since the confinement of the system reduced the amount of entanglement, it kept the problem simple enough to be described by classical methods. Using simulations and mathematical calculations, Tindall and Sels came up with a simple, accurate mathematical model that describes this behavior.

“One of the big open questions in quantum physics is understanding when entanglement grows rapidly and when it doesn’t,” Tindall says. “This experiment gives us a good understanding of an example where we didn’t get large-scale entanglement due to the model used and the two-dimensional structure of the quantum processor.”

The results suggest that confinement itself could show up in a range of two-dimensional quantum systems. If it does, the mathematical model developed by Tindall and Sels offers an invaluable tool for understanding the physics happening in those systems. Additionally, the codes used in the paper can provide a benchmarking tool for experimental scientists to use as they develop new computer simulations for other quantum problems.

More information: Joseph Tindall et al, Confinement in the Transverse Field Ising Model on the Heavy Hex Lattice, Physical Review Letters (2024). DOI: 10.1103/PhysRevLett.133.180402

Journal information: Physical Review Letters, PRX Quantum, Nature

Provided by Simons Foundation 

Stochastic thermodynamics may be key to understanding energy costs of computation

by Santa Fe Institute

The mapping between the design features of a computer and its performance when computing a function is mediated by its resource costs. Credit: Proceedings of the National Academy of Sciences (2024). DOI: 10.1073/pnas.2321112121

Two systems are in thermal equilibrium if no net heat passes between them. Computers, which consume energy and give off heat as they process information, operate far from thermal equilibrium. Were they to stop consuming energy—say you let your laptop discharge completely—they would stop functioning.

But how does the amount of energy required by a physical system to perform a computation depend on the details of the computation?

Physicists and computer scientists have been trying to connect thermodynamics and computation for more than a century. The connection has always been a theoretical concern, but the ubiquity of digital devices makes it a practical one, too. Until recently, researchers lacked a rigorous way to study these kinds of systems.

That changed in the early 21st century with the introduction of a new field called stochastic thermodynamics. “This was a major revolution in nonequilibrium physics,” says SFI Professor David Wolpert.

The field’s mathematical tools are exactly what scientists need to use to probe the inner workings of computational systems, since those systems are (far) out of equilibrium, according to a Perspective published this week in the Proceedings of the National Academy of Sciences. The authors, led by Wolpert and Jan Korbel, a postdoctoral researcher at Complexity Science Hub in Vienna, argue that stochastic thermodynamics can unearth deep connections between computation and thermodynamics.

“It provides us with the tools to investigate and quantify with equations all that’s going on with systems, even arbitrarily far from equilibrium,” Wolpert says. The tools include mathematical theorems, uncertainty relations, and even thermodynamic speed limits that apply to the behavior of nonequilibrium systems at all scales, from the very small to the macroscopic.
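
One representative example of the kind of tool mentioned above, a standard result from the stochastic-thermodynamics literature rather than an equation taken from the Perspective itself, is the thermodynamic uncertainty relation. It bounds how precise any steady-state current J (a computation rate, say) can be for a given total entropy production Σ:

$$\frac{\mathrm{Var}(J)}{\langle J \rangle^{2}} \;\geq\; \frac{2 k_{B}}{\Sigma}.$$

In words: tighter fluctuations in the output of a nonequilibrium process necessarily cost more dissipation.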

These considerations were absent in the work of 20th-century physicists, Wolpert says. “They provide us with a way to think about the actual energetics of these systems, and we’ve never had them before.”

Korbel notes that these tools can help researchers probe connections among energy, computation, and the effects on the climate. “Every calculation in every computer requires energy, some of which is lost as heat—warming not only the system but also the planet,” he says. “As the energy demands of computation continue to grow, it is essential to minimize these losses.”

Wolpert emphasizes that the potential gains from using stochastic thermodynamics reach far beyond artificial computers like laptops and phones. Cells carry out computations far from equilibrium; so do neurons in the brain. On larger time scales, social systems and even biological evolution operate out of equilibrium.

On a practical level, says Wolpert, a closer understanding of the energy of computation could point to more energy-efficient ways to design real-world devices. Findings in stochastic thermodynamics, he says, “are ubiquitous across anything we might consider to be computing. In many ways, it provides a unifying glue by which to relate and integrate all these different fields.”

More information: David H. Wolpert et al, Is stochastic thermodynamics the key to understanding the energy costs of computation? Proceedings of the National Academy of Sciences (2024). DOI: 10.1073/pnas.2321112121

Journal information: Proceedings of the National Academy of Sciences 

Provided by Santa Fe Institute 

Optical amplifier and record-sensitive receiver pave the way for faster space communication

by Chalmers University of Technology

In the new communication system from researchers at Chalmers University of Technology, in Sweden, a weak optical signal (red) from the spacecraft’s transmitter can be amplified noise-free when it encounters two so-called pump waves (blue and green) of different frequencies in a receiver on Earth. Thanks to the researchers’ noise-free amplifiers in the receiver, the signal is kept undisturbed and the reception on Earth becomes record-sensitive, which in turn paves the way for a more error-free and faster data transmission in space in the future. Credit: Chalmers University of Technology | Rasmus Larsson

In space exploration, long-distance optical links can now be used to transmit images, films and data from space probes to Earth using light. But in order for the signals to reach all the way and not be disturbed along the way, hypersensitive receivers and noise-free amplifiers are required.

Now, researchers at Chalmers University of Technology, in Sweden, have created a system that, with a silent amplifier and record-sensitive receiver, paves the way for faster and improved space communication.

Their study, “Ultralow noise preamplified optical receiver using conventional single wavelength transmission,” is published in Optica.

Space communication systems are increasingly based on optical laser beams rather than radio waves, as the signal loss has been shown to be less when light is used to carry information over very long distances. But even information carried by light loses its power during the journey, and optical systems for space communication therefore require extremely sensitive receivers capable of sensing signals that have been greatly weakened before they finally reach Earth.

The Chalmers researchers’ concept of optical space communication opens up new communication opportunities—and discoveries—in space.

“We can demonstrate a new system for optical communication with a receiver that is more sensitive than has been demonstrated previously at high data rates. This means that you can get a faster and more error-free transfer of information over very long distances, for example when you want to send high-resolution images or videos from the moon or Mars to Earth,” says Peter Andrekson, Professor of Photonics at Chalmers and one of the lead authors of the study.

Silent amplifier with simplified transmitter improves communication

The researchers’ communication system uses an optical amplifier in the receiver that amplifies the signal with the least possible noise, so that the information it carries can be recovered.

Just like the glow of a flashlight, the light from the transmitter widens and weakens with distance. Without amplification, the signal is so weak after the space flight that it is drowned out by the electronic noise of the receiver.

After 20 years of struggling with disturbing noise that impaired the signals, the research team at Chalmers was able to demonstrate a noise-free optical amplifier a few years ago. Until now, however, the silent amplifier could not be used practically in optical communication links, because it placed completely new and significantly more complex demands on both the transmitter and the receiver.

Due to the limited resources and minimal space on board a space probe, it is important that the transmitter is as simple as possible.

By allowing the receiver on Earth to generate two of the three light frequencies needed for noise-free amplification, and at the same time allowing the transmitter to generate only one frequency, the Chalmers researchers were able to implement the noise-free amplifier in an optical communication system for the first time. The results show outstanding sensitivity, while complexity at the transmitter is modest.

“This phase-sensitive optical amplifier does not, in principle, generate any extra noise, which contributes to a more sensitive receiver and that error-free data transmission is achieved even when the power of the signal is lower,” says Rasmus Larsson, Postdoctoral Researcher in Photonics at Chalmers and one of the lead authors of the study.

“By generating two extra waves of different frequencies in the receiver, rather than as previously done in the transmitter, a conventional laser transmitter with one wave can now be used to implement the amplifier. Our simplification of the transmitter means that already existing optical transmitters on board satellites and probes could be used together with the noise-free amplifier in a receiver on Earth.”
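
For context, the advantage Larsson describes reflects a textbook quantum-optics limit rather than a figure from the Chalmers paper: an ideal phase-insensitive amplifier (PIA) must degrade the signal-to-noise ratio, while an ideal phase-sensitive amplifier (PSA) need not,

$$\mathrm{NF}_{\mathrm{PIA}} \;\geq\; 2 - \frac{1}{G} \;\approx\; 2 \;\; (\text{about } 3\ \mathrm{dB}\ \text{for large gain } G), \qquad \mathrm{NF}_{\mathrm{PSA}} \;\to\; 1 \;\; (0\ \mathrm{dB}),$$

which is why implementing the phase-sensitive scheme with a simple single-wavelength transmitter is the key step reported here.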

Can solve problematic bottleneck

The progress means that the researchers’ silent amplifiers can eventually be used in practice in communication links between space and Earth. The system is thus poised to contribute to solving a well-known bottleneck problem among space agencies today.

“NASA talks about ‘the science return bottleneck,’ and here the speed of the collection of scientific data from space to Earth is a factor that constitutes an obstacle in the chain. We believe that our system is an important step forward towards a practical solution that can resolve this bottleneck,” says Peter Andrekson.

The next step for the researchers is to test the optical communication system with the implemented amplifier during field studies on Earth, and later also in communication links between a satellite and Earth.

More information: Rasmus Larsson et al, Ultralow-noise preamplified optical receiver using conventional single-wavelength transmission, Optica (2024). DOI: 10.1364/OPTICA.539544

Journal information: Optica 

Provided by Chalmers University of Technology 

Professor calculates optimal glass shape for preserving chill in beer glasses

by Bob Yirka, Phys.org

Optimum glasses of the categories Brazilian tulip (left), Imperial pint (center) and American pint (right). Credit: arXiv (2024). DOI: 10.48550/arxiv.2410.12043

Claudio Pellegrini, a professor of thermal and fluid sciences at the Federal University of São João del-Rei in Brazil, has calculated the optimal shape for a beer glass to keep the beer cold for as long as possible. He has written a paper describing his analysis of beer glass shapes and posted it on the arXiv preprint server.

Prior research and a lot of anecdotal evidence suggest that beer consumers prefer their beverage cold—generally as cold as possible. Many such beer drinkers also prefer to consume their beverage from a clear glass, which lets them enjoy the look of the beer as they drink it.

Unfortunately, the two desires represent a conundrum—drinking from a glass allows the beer to lose its chill very quickly. Because of that, beer glass makers have developed a variety of designs meant to retain as much chill as possible, for as long as possible.

In this new effort, Pellegrini put such designs to the test by calculating the optimal glass design, based on physics principles, for keeping a beer cold in a drinking glass.

In his work, Pellegrini did not include external factors, such as the warmth of a hand holding the glass, or the types of glass used. Instead, he went for the basics, testing nothing but shape to determine heat transfer rates.

To determine such a shape, he started with the simplest model: a glass with a smooth profile revolved around a vertical axis, with a standard height, radius and base-to-top ratio. He also assumed an insulated base, ensuring that heat transfer would occur only through the top and sides.

He also assumed a fixed starting beer temperature for all testing purposes, and that the glass would have negligible thermal resistance. Such a scenario ensured that changes in heat transfer would be a direct result of changes in shape.
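
To make the setup concrete, here is a toy calculation in the same spirit, not the paper’s actual optimization: for straight-walled, frustum-shaped glasses of fixed volume and height with an insulated base, compare the exposed area, side wall plus open top, through which heat is exchanged. The 500 ml volume, 15 cm height and the list of base-to-top radius ratios are arbitrary illustrative choices.

```python
# Exposed (heat-exchanging) area of a frustum-shaped glass with an insulated
# base, at fixed liquid volume and height, for several base-to-top radius
# ratios. A crude proxy for the comparison in the article; the volume and
# height values are arbitrary.
import math

V = 500.0   # cm^3 (500 ml), illustrative
h = 15.0    # cm, illustrative

print(" r_base/r_top   r_top (cm)   side + top area (cm^2)")
for k in (0.5, 0.75, 1.0, 1.25, 1.5):
    # Frustum volume: V = (pi*h/3) * r_top^2 * (k^2 + k + 1), with r_base = k*r_top.
    r_top = math.sqrt(3.0 * V / (math.pi * h * (k**2 + k + 1)))
    r_base = k * r_top
    slant = math.hypot(h, r_top - r_base)
    side = math.pi * (r_base + r_top) * slant   # lateral (side-wall) surface
    top = math.pi * r_top**2                    # open liquid surface at the top
    print(f"   {k:4.2f}        {r_top:6.2f}        {side + top:8.1f}")
```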

In doing his calculations, Pellegrini found, unsurprisingly, that the best shape is also the one that is the most popular—a glass with a small base that grows wider as it approaches the top, such as the pilsner.

He also acknowledges that the true best result is one where the glass is so small that the beer is consumed in one or two quick gulps, but he insists drinking beer in such an ugly fashion misses the point of drinking beer altogether.

More information: Cláudio C. Pellegrini, Optimizing Beer Glass Shapes to Minimize Heat Transfer — New Results, arXiv (2024). DOI: 10.48550/arxiv.2410.12043

Journal information: arXiv 

Study introduces novel conservation law that operates down to the subcycle level during strong-field ionization

by Ultrafast Science

Illustration of the dynamical symmetry in a circularly polarized laser field. The Hamiltonian is invariant under an arbitrary time translation P̂_t: t → t + δt combined with a rotation P̂_φ: φ → φ + δφ with δφ = ωδt. Consequently, an infinite-order continuous dynamical symmetry P̂ = P̂_φ P̂_t emerges, providing support for the introduction of conservation laws on the subcycle scale. Credit: Ultrafast Science (2024). DOI: 10.34133/ultrafastscience.0071

Conservation laws are fundamental tools that significantly aid our quest to understand the world, playing a crucial role across various scientific disciplines. In strong-field physics in particular, these laws enhance our comprehension of atomic and molecular structures as well as the ultrafast dynamics of electrons.

For example, when atoms interact with linearly polarized light, the Hamiltonian of the system displays a second-order dynamical symmetry: it is invariant under a half-period time translation combined with a spatial inversion. This characteristic symmetry is known to result in exclusively odd-order harmonics during high-harmonic generation in rare-gas atoms.
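
The link between that symmetry and odd harmonics can be seen in one line (standard reasoning, not a derivation taken from the paper): if the induced dipole changes sign under a half-period shift, only odd multiples of the driving frequency survive in its Fourier series,

$$d\!\left(t + \tfrac{T}{2}\right) = -\,d(t) \;\;\Longrightarrow\;\; d(t) = \sum_{n\ \mathrm{odd}} d_{n}\, e^{-\,i n \omega t}.$$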

An intriguing phenomenon occurs when atoms interact with circularly polarized light. There, the Hamiltonian exhibits an infinite-order continuous dynamical symmetry: it is invariant under any time translation combined with a corresponding rotation. What this symmetry implies for conservation laws presents a compelling topic for exploration.

A team of researchers from the State Key Laboratory of Precision Spectroscopy at East China Normal University has delineated a conservation law between angular momentum and energy at the subcycle level. This was achieved through the analysis of the correlated spectrum of angular momentum and energy (SAME) of photoelectrons, both at the tunnel exit and in the asymptotic region, in the context of strong-field ionization using circularly and elliptically polarized light pulses.

The researchers have confirmed that this conservation law stays applicable down to the subcycle level. They have also introduced a protocol utilizing interference-induced electron vortices to directly visualize the conservation law at the subcycle level. Their findings have been published in the journal Ultrafast Science.

In the case of circular polarization, while the individual distributions of angular momentum and energy are broad, their correlated distribution forms a distinct straight line. This pattern underscores a rigorously obeyed linear conservation law relating angular momentum and energy.
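
The press release does not reproduce the equation itself. A plausible form, inferred here from the dynamical symmetry described above (invariance under t → t + δt combined with φ → φ + ωδt conserves the rotating-frame quantity E − ωl) rather than quoted from the paper, is

$$E_{\infty} - \omega\, l_{\infty} = E_{0} - \omega\, l_{0}, \qquad \text{i.e.} \qquad l_{\infty} - l_{0} = \frac{E_{\infty} - E_{0}}{\omega} \quad (\hbar = 1),$$

with the subscripts 0 and ∞ referring to the tunnel exit and the asymptotic region; a relation of this form would produce exactly the straight-line correlation described.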

The team has further substantiated that this conservation law is consistently applicable throughout an entire optical cycle. For elliptical polarization, the conservation law is naturally extended and can be articulated with an effective angular frequency within the optical cycle.

This work introduces a novel conservation law between angular momentum and energy that operates down to the subcycle level during strong-field ionization. The discovery of this subcycle conservation law is attributed to the infinite-order continuous dynamical symmetry inherent in the interaction between atoms and light pulses with circular or elliptical polarization.

This study lays a theoretical groundwork that is instrumental for a profound comprehension of light-matter interactions on the subcycle scale.

More information: Yongzhe Ma et al, Subcycle Conservation Law in Strong-Field Ionization, Ultrafast Science (2024). DOI: 10.34133/ultrafastscience.0071

Provided by Ultrafast Science

AI training method can drastically shorten time for calculations in quantum mechanics

by The Korea Advanced Institute of Science and Technology (KAIST)

Overview of the DeepSCF model. Credit: npj Computational Materials (2024). DOI: 10.1038/s41524-024-01433-0

The close relationship between AI and highly complicated scientific computing can be seen in the fact that both the 2024 Nobel Prizes in Physics and Chemistry were awarded to scientists for devising AI for their respective fields of study. KAIST researchers have now succeeded in dramatically shortening the calculation time of highly sophisticated quantum mechanical computer simulations by predicting atomic-level chemical bonding information distributed in 3D space using a novel approach to teach AI.

Professor Yong-Hoon Kim’s team from the School of Electrical Engineering has developed a 3D computer vision artificial neural network-based calculation methodology that bypasses the complex algorithms required for atomic-level quantum mechanical calculations performed using supercomputers to derive the properties of materials.

Density functional theory (DFT) calculations performed on supercomputers have become an essential and standard tool in a wide range of research and development fields, including advanced materials and drug design, because they allow fast and accurate prediction of quantum properties.

However, in current DFT calculations, a complex self-consistent field (SCF) process of generating three-dimensional electron densities and solving quantum mechanical equations must be repeated tens to hundreds of times, which limits their application to systems of hundreds or thousands of atoms.

Professor Yong-Hoon Kim’s research team asked whether it would be possible to avoid the self-consistent field process using the artificial intelligence technique that has recently been rapidly developing. As a result, they developed the DeepSCF model to accelerate calculations by learning chemical bond information distributed in three-dimensional space through a neural network algorithm in the field of computer vision.

The research is published in the journal npj Computational Materials.

The research team focused on the fact that, according to density functional theory, the electron density contains all of the quantum mechanical information about the electrons. Moreover, the residual electron density, the difference between the total electron density and the sum of the electron densities of the constituent atoms, encodes the chemical bonding information, so the team selected it as the target for machine learning.

The team then adopted a data set of organic molecules with a variety of chemical bonding characteristics, and applied arbitrary rotations and deformations to the atomic structures of the molecules it contains to further improve the accuracy and generalization performance of the model. Finally, the research team demonstrated the validity and efficiency of the DeepSCF methodology on large, complex systems.
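
As a rough illustration of the kind of mapping involved, here is a minimal sketch that assumes voxelized atomic-fingerprint channels as input and a voxelized residual electron density as output; the layer sizes, channel counts, loss and training step are placeholder assumptions, not the published DeepSCF architecture.

```python
# Minimal sketch of a 3D convolutional network mapping grid-projected atomic
# fingerprints to a residual electron density on the same grid. Shapes, channel
# counts and the loss are illustrative assumptions, not the published model.
import torch
import torch.nn as nn

class ResidualDensityNet(nn.Module):
    def __init__(self, in_channels=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv3d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv3d(32, 1, kernel_size=1),   # one output channel: delta-rho per voxel
        )

    def forward(self, fingerprints):
        # fingerprints: (batch, in_channels, Nx, Ny, Nz) grid of atomic descriptors
        return self.net(fingerprints)

model = ResidualDensityNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Dummy batch standing in for (fingerprint grid, reference residual density)
# pairs that would come from converged DFT calculations.
x = torch.randn(4, 8, 32, 32, 32)
target = torch.randn(4, 1, 32, 32, 32)

optimizer.zero_grad()
loss = loss_fn(model(x), target)
loss.backward()
optimizer.step()
print(f"dummy training-step loss: {loss.item():.4f}")
```

Once trained on converged reference calculations, a network of this kind could supply the residual density directly, skipping most of the self-consistent iterations, which is the speed-up the article describes.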

Professor Yong-Hoon Kim, who led this research, said, “We have found a way to correspond quantum mechanical chemical bonding information distributed in three-dimensional space to an artificial neural network. Since quantum mechanical electronic structure calculations are the basis for all-scale material property simulations, we have established the overall basic principles for accelerating material calculations through artificial intelligence.”

More information: Ryong-Gyu Lee et al, Convolutional network learning of self-consistent electron density via grid-projected atomic fingerprints, npj Computational Materials (2024). DOI: 10.1038/s41524-024-01433-0

Journal information: npj Computational Materials 

Provided by The Korea Advanced Institute of Science and Technology (KAIST)