Novel insights on the interplay of electromagnetism and the weak nuclear force

A spinning neutron disintegrates into a proton, electron, and antineutrino when a down quark in the neutron emits a W boson and converts into an up quark. The exchange of quanta of light (γ) among charged particles changes the strength of this transition. Credit: Vincenzo Cirigliano, Institute for Nuclear Theory

Outside atomic nuclei, neutrons are unstable particles, with a lifetime of about fifteen minutes. The neutron disintegrates due to the weak nuclear force, leaving behind a proton, an electron, and an antineutrino. The weak nuclear force is one of the four fundamental forces in the universe, along with the strong force, the electromagnetic force, and the gravitational force.

Comparing experimental measurements of neutron decay with theoretical predictions based on the weak nuclear force can reveal as-yet undiscovered interactions. To do so, researchers must achieve extremely high levels of precision. A team of nuclear theorists has uncovered a new, relatively large effect in neutron decay that arises from the interplay of the weak and electromagnetic forces.

This research identified a shift in the strength with which a spinning neutron experiences the weak nuclear force. This has two major implications. First, scientists have known since 1956 that due to the weak force, a system and one built like its mirror image do not behave in the same way. In other words, mirror reflection symmetry is broken. This research affects the search for new interactions, technically known as “right-handed currents,” that, at very short distances of less than one hundred quadrillionths of a centimeter, restore the universe’s mirror-reflection symmetry. Second, this research points to the need to compute electromagnetic effects with higher precision. Doing so will require the use of future high-performance computers.

A team of researchers computed the impact of electromagnetic interactions on neutron decay due to the emission and absorption of photons, the quanta of light. The team included nuclear theorists from the Institute for Nuclear Theory at the University of Washington, North Carolina State University, the University of Amsterdam, Los Alamos National Laboratory, and Lawrence Berkeley National Laboratory. Their results have been published in Physical Review Letters.

The calculation was performed with a modern method, known as “effective field theory,” that efficiently organizes the importance of fundamental interactions in phenomena involving strongly interacting particles. The team identified a new percent-level shift to the nucleon axial coupling, gA, which governs the strength of decay of a spinning neutron. The new correction originates from the emission and absorption of electrically charged pions, which are mediators of the strong nuclear force. While effective field theory provides an estimate of the uncertainties, improving on the current precision will require advanced calculations on Department of Energy supercomputers.
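To see why a percent-level shift in gA matters so much, note that the neutron decay rate scales with the couplings as (1 + 3λ²), where λ = gA/gV is the ratio of the axial and vector couplings. The short sketch below uses illustrative stand-in numbers, not the paper's actual values, to show how a hypothetical 1% shift in gA propagates to the predicted lifetime.

```python
# Illustrative only: how a percent-level shift in the axial coupling g_A
# propagates to the predicted neutron lifetime. The value of lambda and the
# 1% shift are stand-in numbers, not results from Cirigliano et al.

lam = 1.27            # lambda = g_A / g_V, roughly the measured ratio
shift = 0.01          # hypothetical 1% (percent-level) correction to g_A

def lifetime_factor(lam):
    # The decay rate scales as (1 + 3*lambda^2); the lifetime is its inverse.
    return 1.0 / (1.0 + 3.0 * lam**2)

rel_change = lifetime_factor(lam * (1 + shift)) / lifetime_factor(lam) - 1
print(f"a {shift:.0%} shift in g_A shifts the predicted lifetime by {rel_change:.2%}")
# prints roughly -1.6%: the lifetime amplifies the shift, which is why
# percent-level radiative corrections matter in these comparisons.
```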

The researchers also assessed the impact on searches for right-handed currents. They found that, after including the new correction, experimental data and theory are in good agreement, and that current uncertainties still allow for new physics at a relatively low mass scale.

More information: Vincenzo Cirigliano et al, Pion-Induced Radiative Corrections to Neutron β Decay, Physical Review Letters (2022). DOI: 10.1103/PhysRevLett.129.121801

Journal information: Physical Review Letters 

Provided by US Department of Energy 

Optical memristors review: Shining a light on neuromorphic computing

Optical memristive platforms for nonvolatile transmission modulation. Credit: Nature Photonics (2023). DOI: 10.1038/s41566-023-01217-w

AI, machine learning, and ChatGPT may be relatively new buzzwords in the public domain, but developing a computer that functions like the human brain and nervous system—both hardware and software combined—has been a decades-long challenge. Engineers at the University of Pittsburgh are today exploring how optical “memristors” may be a key to developing neuromorphic computing.

Resistors with memory, or memristors, have already demonstrated their versatility in electronics, with applications as computational circuit elements in neuromorphic computing and compact memory elements in high-density data storage. Their unique design has paved the way for in-memory computing and captured significant interest from scientists and engineers alike.

A new review article published in Nature Photonics, titled “Integrated Optical Memristors,” sheds light on the evolution of this technology—and the work that still needs to be done for it to reach its full potential.

Led by Nathan Youngblood, assistant professor of electrical and computer engineering at the University of Pittsburgh Swanson School of Engineering, the article explores the potential of optical devices which are analogs of electronic memristors. This new class of device could play a major role in revolutionizing high-bandwidth neuromorphic computing, machine learning hardware, and artificial intelligence in the optical domain.

“Researchers are truly captivated by optical memristors because of their incredible potential in high-bandwidth neuromorphic computing, machine learning hardware, and artificial intelligence,” explained Youngblood. “Imagine merging the incredible advantages of optics with local information processing. It’s like opening the door to a whole new realm of technological possibilities that were previously unimaginable.”

The review article presents a comprehensive overview of recent progress in this emerging field of photonic integrated circuits. It explores the current state of the art and highlights the potential applications of optical memristors, which combine the benefits of ultrafast, high-bandwidth optical communication with local information processing. However, scalability emerged as the most pressing issue for future research to address.

“Scaling up in-memory or neuromorphic computing in the optical domain is a huge challenge. Having a technology that is fast, compact, and efficient makes scaling more achievable and would represent a huge step forward,” explained Youngblood.

“One example of the limitations is that if you were to take phase change materials, which currently have the highest storage density for optical memory, and try to implement a relatively simplistic neural network on-chip, it would take a wafer the size of a laptop to fit all the memory cells needed,” he continued. “Size matters for photonics, and we need to find a way to improve the storage density, energy efficiency, and programming speed to do useful computing at useful scales.”
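The laptop-sized-wafer claim is easy to sanity-check with a back-of-envelope calculation. The sketch below is purely illustrative: the parameter count and the per-cell footprint are assumed round numbers, not figures from the review, but they show how quickly pitch-limited photonic memory cells consume wafer area.

```python
# Back-of-envelope area estimate for storing a neural network in photonic
# memory cells. All numbers are assumptions chosen for illustration.

weights = 25_000_000       # assumed parameter count of a modest network
cell_pitch_um = 30.0       # assumed cell pitch (um); photonic cells are
                           # limited by optics, not by transistor lithography

area_mm2 = weights * (cell_pitch_um * 1e-3) ** 2
print(f"array area ~ {area_mm2:,.0f} mm^2 = {area_mm2 / 100:,.0f} cm^2")
# ~22,500 mm^2 (225 cm^2) -- comparable to a laptop footprint and far larger
# than a single chip, which is why storage density is the bottleneck.
```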

Using light to revolutionize computing

Optical memristors can revolutionize computing and information processing across several applications. They can enable active trimming of photonic integrated circuits (PICs), allowing for on-chip optical systems to be adjusted and reprogrammed as needed without continuously consuming power. They also offer high-speed data storage and retrieval, promising to accelerate processing, reduce energy consumption, and enable parallel processing.

Optical memristors can even be used for artificial synapses and brain-inspired architectures. Dynamic memristors with nonvolatile storage and nonlinear output replicate the long-term plasticity of synapses in the brain and pave the way for spiking integrate-and-fire computing architectures.
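The "integrate-and-fire" term refers to a standard textbook neuron model rather than anything specific to this review; the minimal sketch below shows where a nonvolatile memristive weight array would sit in such a computation (the weighted-sum step, here just an ordinary dot product).

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron, the textbook model behind
# spiking architectures. The fixed weight vector stands in for a nonvolatile
# (optical) memristor array: weights are stored physically and applied by
# passing signals through them.

rng = np.random.default_rng(0)
weights = rng.uniform(0.0, 1.0, size=16)          # stored synaptic weights
tau, v_thresh, v_reset, dt = 20.0, 1.0, 0.0, 1.0  # time constant (steps), etc.

v, spike_times = 0.0, []
for t in range(200):
    inputs = (rng.random(16) < 0.05).astype(float)  # random presynaptic spikes
    current = weights @ inputs                      # the memristor array's job
    v += dt * (-v / tau + current)                  # leaky integration
    if v >= v_thresh:                               # fire and reset
        spike_times.append(t)
        v = v_reset

print(f"{len(spike_times)} output spikes; first few at t = {spike_times[:5]}")
```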

Research to scale up and improve optical memristor technology could unlock unprecedented possibilities for high-bandwidth neuromorphic computing, machine learning hardware, and artificial intelligence.

“We looked at a lot of different technologies. The thing we noticed is that we’re still far away from the target of an ideal optical memristor: something that is compact, efficient, fast, and changes the optical properties in a significant manner,” Youngblood said. “We’re still searching for a material or a device that actually meets all these criteria in a single technology in order for it to drive the field forward.”

More information: Nathan Youngblood et al, Integrated optical memristors, Nature Photonics (2023). DOI: 10.1038/s41566-023-01217-w

Journal information: Nature Photonics 

Provided by University of Pittsburgh 

Shrinking endoscopes with meta-optical fibers

A meta-optic is optimized for integration with the coherent fiber bundle, whereas the individual fiber cores are taken as the imaging limitation. The MOFIE achieves a reduced tip length while maintaining a wide field of view of 22.5° and a large depth of field exceeding 30 mm, compared with a traditional GRIN lens. Credit: Johannes E. Fröch, Luocheng Huang, Quentin A.A. Tanguy, Shane Colburn, Alan Zhan, Andrea Ravagli, Eric J. Seibel, Karl Böhringer, Arka Majumdar

Ultra-compact, agile endoscopes with a large field of view (FoV), long depth of field (DoF), and short rigid tip length are essential for developing minimally invasive operations and new experimental surgeries. As these fields develop, the requirements for miniaturization and precision become progressively more demanding.

In existing endoscopes, the rigid tip length is a fundamental limitation of the device’s agility within small tortuous ducts, such as an artery. It is primarily constrained by the size of the optical elements required for imaging. Thus, alternative solutions are urgently needed to reduce the tip length.

In a new paper published in eLight, a team of scientists led by Dr. Johannes Fröch and Prof. Arka Majumdar from the University of Washington has developed a novel technique for reducing the rigid tip length.

Existing solutions include lensless and computational imaging with single fibers or coherent fiber bundles. However, these are typically limited to a short working distance and often extremely sensitive to bending and twisting of the optical fiber, affecting or even precluding accurate computational reconstruction.

Flat meta-optics are an emerging and versatile idea in the photonics community to create miniaturized optical elements. These are sub-wavelength diffractive optical elements composed of nano-scale scatterer arrays. They are designed to shape an incident wavefront’s phase, amplitude, and spectral response. Such ultrathin flat optics not only dramatically shrink the size of traditional optics but can also combine multiple functionalities in a single surface.

Flat meta-optics are compatible with high-volume semiconductor manufacturing technology and can create disposable optics. These properties have already inspired researchers to explore the potential of meta-optics for endoscopy, including fiber-integrated endoscopy, side-viewing single fiber scanning endoscopy, and scanning fiber forward-viewing endoscopy.

An optical microscope image of the fabricated meta-optic (left) placed in front of the coherent fiber bundle. Scanning electron microscope images of the meta-optic (right) show the individual scatterers, which span the entire aperture of the device. Credit: Johannes E. Fröch, Luocheng Huang, Quentin A.A. Tanguy, Shane Colburn, Alan Zhan, Andrea Ravagli, Eric J. Seibel, Karl Böhringer, Arka Majumdar

Unfortunately, meta-optics traditionally suffer from strong aberrations, making large FoV and full-color imaging challenging. Several works have shown that the standard metalens design is unsuitable for simultaneously capturing color information across the visible spectrum.

It typically results in crisp images at the design wavelength (e.g., green) but strongly aberrated, blurred images at the other colors (red and blue). While some approaches, such as dispersion engineering and computational imaging techniques, can reduce chromatic aberration, they either suffer from small apertures and low numerical apertures or require a computational post-processing step, complicating real-time video capture.

Similarly, an additional aperture before the meta-optic can provide a larger FoV. However, it comes at the cost of reduced light collection and increased thickness of the optics. So far, these limitations have restricted most meta-optics endoscopes to single wavelength operation.

Recently, a meta-optic doublet was demonstrated in conjunction with a coherent fiber bundle for polychromatic imaging. However, such polychromatic imaging is unsuitable for broad-band illumination, which is often required in clinical endoscopy. Additionally, the front aperture was limited to 125 μm, with a short working distance of 200 μm.

The research team noted the need for broad-band, ultra-thin meta-optics for endoscopy. However, making the meta-optic smaller than the optical fiber diameter is impractical, as it severely limits light collection. As a result, full-color meta-optical endoscopy with an acceptable FoV, DoF, and a large enough aperture had not yet been achieved.

In this work, the research team demonstrated an inverse-designed meta-optic optimized to capture real-time full-color scenes with a 1 mm diameter coherent fiber bundle. The meta-optic enables operation with an FoV of 22.5°, a DoF of >30 mm (exceeding 300% of the nominal design working distance), and a minimum rigid tip length of only ~2.5 mm.

This is a 33% tip length reduction compared to a traditional commercial gradient-index (GRIN) lens integrated fiber bundle endoscope. This is due to the shorter focal length and the ultrathin nature of the meta-optic.

The top images display scenes shown on an OLED screen and captured through the MOFIE, allowing the researchers to directly assess the imaging quality. The bottom three images show a caterpillar, captured under ambient imaging conditions in real time, without computational deconvolution applied. Credit: Johannes E. Fröch, Luocheng Huang, Quentin A.A. Tanguy, Shane Colburn, Alan Zhan, Andrea Ravagli, Eric J. Seibel, Karl Böhringer, Arka Majumdar

At the same time, comparable imaging performance and working distance are maintained. To achieve exceptional FoV, DoF, and color performance of the Meta-Optical Fiber Endoscope (MOFIE), the research team approached this design problem from a system-level perspective.

They reasoned that the diameter and spacing of the individual fiber cores within the bundle limit the achievable image quality, which in turn limits the achievable FoV and modulation transfer function (MTF). This constraint is implemented in an automatic differentiation framework using the average volume under the multichromatic MTF curve as the figure of merit.

By ensuring that the meta-optic has an MTF within the limitations of the fiber bundle, the research team achieved full-color operation without requiring a computational reconstruction step, thus facilitating real-time operation. The team emphasized that its design approach fundamentally differs from traditional achromatic metalens design efforts.

Rather than trying to achieve diffraction-limited performance at every wavelength, which may pose a physically unsolvable problem, the researchers formulated an optimization problem to find the best overall solution for full-color imaging.
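To make the figure of merit concrete, here is a heavily simplified, one-dimensional Fourier-optics sketch of the quantity being optimized: the average area under the MTF across several wavelengths. The lens phase, the dispersion model, and all parameters are assumptions chosen for illustration; the actual work evaluates a 2-D version of this in an automatic differentiation framework and folds in the fiber bundle's resolution limit.

```python
import numpy as np

# Illustrative 1-D sketch of a multichromatic MTF figure of merit.
# Assumed parameters throughout; the real design is 2-D, differentiable,
# and constrained by the fiber-bundle core spacing.

N, aperture, f_design = 512, 1.0e-3, 5.0e-3      # samples, width (m), focus (m)
x = np.linspace(-aperture / 2, aperture / 2, N)
wavelengths = [450e-9, 550e-9, 650e-9]           # blue, green, red

# Hyperbolic lens phase defined at the 550 nm design wavelength:
phase_550 = -2 * np.pi / 550e-9 * (np.sqrt(x**2 + f_design**2) - f_design)

def mtf_area(phase_550, wl):
    pupil = np.exp(1j * phase_550 * 550e-9 / wl)   # fixed structure: phase ~ 1/wl
    psf = np.abs(np.fft.fftshift(np.fft.fft(pupil))) ** 2
    mtf = np.abs(np.fft.fft(psf / psf.sum()))[: N // 2]
    return mtf.mean()                              # area under the normalized MTF

fom = np.mean([mtf_area(phase_550, wl) for wl in wavelengths])
print(f"multichromatic MTF figure of merit: {fom:.4f}")
# An optimizer would adjust the phase profile to maximize this average,
# trading per-wavelength sharpness for balanced full-color performance.
```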

This approach is important because it is not limited to this particular system. It can be extended to larger aperture sizes and support computational post-processing steps. To highlight this, they also demonstrated an example of a meta-optic with a 1 cm aperture and full-color imaging under ambient light conditions.

More information: Johannes E. Fröch et al, Real time full-color imaging in a Meta-optical fiber endoscope, eLight (2023). DOI: 10.1186/s43593-023-00044-4

Provided by Chinese Academy of Sciences 

New spectroscopy method reveals accelerated relaxation dynamics in compressed cerium-based metallic glass

Two-time correlation functions of the Ce-based MG measured by HP-XPCS at different pressures during compression. At each pressure, the width of the reddish diagonal contour is proportional to the relaxation time, which broadens below 2.9 GPa and then narrows during further compression. Credit: Dr. Qiaoshi Zeng of HPSTAR

A major stumbling block in our understanding of glass and glass phenomena is the elusive relationship between relaxation dynamics and glass structure. A team led by Dr. Qiaoshi Zeng from HPSTAR recently developed a new in situ high-pressure wide-angle X-ray photon correlation spectroscopy method to enable atomic-scale relaxation dynamics studies in metallic glass systems under extreme pressures. The study is published in Proceedings of the National Academy of Sciences (PNAS).

Metallic glasses (MGs), with many properties superior to those of both conventional metals and glasses, have been the focus of worldwide research. As thermodynamically metastable materials, like typical glasses, MGs spontaneously evolve toward more stable states over time through various relaxation processes.

These relaxation behaviors have significant effects on the physical properties of MGs. Still, until now, scientists’ ability to deepen the understanding of glass relaxation dynamics and especially its relationships with atomic structures has been limited by the available techniques.

“Thanks to the recent improvements in synchrotron X-ray photon correlation spectroscopy (XPCS), measuring the collective particle motions of glassy samples with a high resolution and broad coverage in the time scale is possible, and thus, various microscopic dynamic processes otherwise inaccessible have been explored in glasses,” said Dr. Zeng.

“However, the change in atomic structures is subtle in previous relaxation measurements, which makes it difficult to probe the relationship between structure and relaxation behavior. To overcome this problem, we decided to employ high pressure, because it can effectively alter the structure of various materials, including MGs.”

To this end, the team developed in situ high-pressure synchrotron wide-angle XPCS to probe a cerium-based MG during compression. The measurements revealed that the collective atomic motion initially slows down with increasing density, as generally expected. Then, counter-intuitively, it accelerates with further compression, showing an unusual non-monotonic pressure-induced crossover in the steady relaxation dynamics at ~3 GPa.
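The two-time correlation functions shown in the figure have a simple pixel-averaged definition, C(t1, t2) = ⟨I(t1)I(t2)⟩ / (⟨I(t1)⟩⟨I(t2)⟩). The sketch below computes this standard XPCS estimator on synthetic speckle frames with a built-in relaxation time; it is a generic illustration on made-up data, not the HPSTAR analysis code.

```python
import numpy as np

# Generic two-time correlation estimator on synthetic speckle data.
# C(t1, t2) = <I(t1) I(t2)>_pixels / (<I(t1)>_pixels <I(t2)>_pixels).
# A broad reddish diagonal in C corresponds to slow relaxation.

rng = np.random.default_rng(1)
n_frames, n_pix, tau = 100, 4000, 15.0    # frames, pixels, relaxation time

# AR(1)-mixed exponential intensities give correlations decaying over ~tau:
base = rng.exponential(1.0, size=(n_frames, n_pix))
frames = base.copy()
mix = np.exp(-1.0 / tau)
for t in range(1, n_frames):
    frames[t] = mix * frames[t - 1] + (1 - mix) * base[t]

mean_I = frames.mean(axis=1)
C = (frames @ frames.T) / n_pix / np.outer(mean_I, mean_I)
print("equal-time contrast C(0,0):", round(C[0, 0], 3))
print("two-time value C(0,30):    ", round(C[0, 30], 3), "(mostly decorrelated)")
```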

Furthermore, combining these results with in situ high-pressure synchrotron X-ray diffraction showed that the relaxation dynamics anomaly correlates closely with dramatic changes in the local atomic structure during compression, rather than scaling monotonically with either the sample density or the overall stress level.

“As density increases, atoms in glasses generally become more difficult to move or diffuse, slowing down the relaxation dynamics. This is what we normally expect from hydrostatic compression,” Dr. Zeng explained.

“So the non-monotonic relaxation behavior observed here in the cerium-based MG under pressure is quite unusual, which indicates that, besides density, structural details could also play an important role in glass relaxation dynamics,” he added.

These findings demonstrate that there is a close relationship between glass relaxation dynamics and atomic structures in MGs. The technique Dr. Qiaoshi Zeng’s group developed here can also be extended to explore the relationship between relaxation dynamics and atomic structures in various glasses, especially those significantly tunable by compression, offering new opportunities for glass relaxation dynamics studies at extreme conditions.

More information: Qiaoshi Zeng et al, Pressure-induced nonmonotonic cross-over of steady relaxation dynamics in a metallic glass, Proceedings of the National Academy of Sciences (2023). DOI: 10.1073/pnas.230228112

Journal information: Proceedings of the National Academy of Sciences 

Provided by Center for High Pressure Science & Technology Advanced Research

Physicists uncover ‘parallel circuits’ of spin currents in antiferromagnets

Physicists uncover "parallel circuits" of spin currents in antiferromagnets
Left: An antiferromagnet can function as “parallel electrical circuits” carrying Néel spin currents. Right: A tunnel junction based on the antiferromagnets hosting Néel spin currents can be regarded as “electrical circuits” with the two ferromagnetic tunnel junctions connected in parallel. Credit: Shao Dingfu

A group of physicists at Hefei Institutes of Physical Science (HFIPS) of Chinese Academy of Sciences (CAS) revealed a secret of antiferromagnets, which could accelerate spintronics, a next-gen data storage and processing technology for overcoming the bottleneck of modern digital electronics.

This finding was reported in Physical Review Letters.

Spintronics is a vigorously developing field that employs the spin of electrons within magnetic materials to encode information. Spin-polarized electric currents play a central role in spintronics because they can be used to manipulate and detect magnetic moment directions for writing and reading 1s and 0s. Currently, most spintronic devices are based on ferromagnets, whose net magnetizations can efficiently spin-polarize electric currents.

Antiferromagnets, with opposite magnetic moments aligned alternately, are less investigated but may promise even faster and smaller spintronic devices. However, antiferromagnets have zero net magnetization and are thus commonly believed to carry only spin-neutral currents, useless for spintronics. While antiferromagnets consist of two antiparallel magnetic sublattices, their properties are deemed to be “averaged out” over the sublattices, making them spin independent.

Prof. Shao Ding-Fu, who led the team, took a different point of view in this research. He envisioned that collinear antiferromagnets can function as “electrical circuits” with the two magnetic sublattices connected in parallel. With this simple, intuitive picture in mind, Prof. Shao and his collaborators theoretically predicted that the magnetic sublattices can polarize the electric current locally, resulting in staggered spin currents hidden within the globally spin-neutral current.
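That parallel-circuit intuition can be written down as a toy two-channel conduction model: each sublattice spin-polarizes the current it carries, with opposite signs, so the global current stays spin-neutral while each branch hosts a finite, staggered spin current. The conductance values below are arbitrary illustrations, not parameters from the paper.

```python
# Toy two-channel ("parallel circuit") picture of Neel spin currents.
# Illustrative conductances; not parameters from Shao et al.

V = 1.0                       # applied voltage (arbitrary units)
g_maj, g_min = 1.5, 0.5       # conductance for majority/minority spins

# Sublattice A has its moment up (spin-up is majority); sublattice B is reversed.
I_A = {"up": g_maj * V, "down": g_min * V}
I_B = {"up": g_min * V, "down": g_maj * V}

spin_A = I_A["up"] - I_A["down"]    # local spin current in branch A: +1.0
spin_B = I_B["up"] - I_B["down"]    # local spin current in branch B: -1.0

print("staggered spin currents:", spin_A, spin_B)
print("global spin current:", spin_A + spin_B)                        # 0: spin-neutral
print("global charge current:", sum(I_A.values()) + sum(I_B.values()))  # finite
```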

He dubbed these staggered spin currents “Néel spin currents” after Louis Néel, who won the Nobel Prize for his fundamental work and discoveries concerning antiferromagnetism.

Néel spin currents are a unique feature of antiferromagnets that had not previously been recognized. They can generate useful spin-dependent effects that were previously considered incompatible with antiferromagnets, such as spin-transfer torque and tunneling magnetoresistance in antiferromagnetic tunnel junctions, both crucial for the electrical writing and reading of information in antiferromagnetic spintronics.

“Our work uncovered a previously unexplored potential of antiferromagnets, and offered a straightforward solution to achieve the efficient reading and writing for antiferromagnetic spintronics,” said Prof. Shao Ding-Fu.

More information: Ding-Fu Shao et al, Néel Spin Currents in Antiferromagnets, Physical Review Letters (2023). DOI: 10.1103/PhysRevLett.130.216702

Journal information: Physical Review Letters 

Provided by Hefei Institutes of Physical Science, Chinese Academy of Sciences

Calculation shows why heavy quarks get caught up in the flow

The data points on this graph show that the interactions of heavy quarks (Q) with the quark-gluon plasma (QGP) are strongest and have a short mean free path (zig zags) right around the transition temperature (T/Tc = 1). The interaction strength (the heavy quark diffusion constant) decreases, and the mean free path lengthens, at higher temperatures. Credit: Brookhaven National Laboratory

Using some of the world’s most powerful supercomputers, a group of theorists has produced a major advance in the field of nuclear physics—a calculation of the “heavy quark diffusion coefficient.” This number describes how quickly a melted soup of quarks and gluons—the building blocks of protons and neutrons, which are set free in collisions of nuclei at powerful particle colliders—transfers its momentum to heavy quarks.

The answer, it turns out, is very fast. As described in a paper just published in Physical Review Letters, the momentum transfer from the “freed up” quarks and gluons to the heavier quarks occurs at the limit of what quantum mechanics will allow. These quarks and gluons have so many short-range, strong interactions with the heavier quarks that they pull the “boulder”-like particles along with their flow.

The work was led by Peter Petreczky and Swagato Mukherjee of the nuclear theory group at the U.S. Department of Energy’s Brookhaven National Laboratory, and included theorists from the universities of Bielefeld, Regensburg, and Darmstadt in Germany, and the University of Stavanger in Norway.

The calculation will help explain experimental results showing heavy quarks getting caught up in the flow of matter generated in heavy ion collisions at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven and the Large Hadron Collider (LHC) at Europe’s CERN laboratory. The new analysis also adds corroborating evidence that this matter, known as a “quark-gluon plasma” (QGP), is a nearly perfect liquid, with a viscosity so low that it also approaches the quantum limit.

“Initially, seeing heavy quarks flow with the QGP at RHIC and the LHC was very surprising,” Petreczky said. “It would be like seeing a heavy rock get dragged along with the water in a stream. Usually, the water flows but the rock stays.”

The new calculation reveals why that surprising picture makes sense when you think about the extremely low viscosity of the QGP.

Frictionless flow

The low viscosity of matter generated in RHIC’s collisions of gold ions, first reported on in 2005, was a major motivator for the new calculation, Petreczky said. When those collisions melt the boundaries of individual protons and neutrons to set free the inner quarks and gluons, the fact that the resulting QGP flows with virtually no resistance is evidence that there are many strong interactions among the quarks and gluons in the hot quark soup.

“The low viscosity implies that the ‘mean free path’ between the ‘melted’ quarks and gluons in the hot, dense QGP is extremely small,” said Mukherjee, explaining that the mean free path is the distance a particle can travel before interacting with another particle.

“If you think about trying to walk through a crowd, it’s the typical distance you can get before you bump into someone or have to change your course,” he said.

With a short mean free path, the quarks and gluons interact frequently and strongly. The collisions dissipate and distribute the energy of the fast-moving particles and the strongly interacting QGP exhibits collective behavior—including nearly frictionless flow.

“It’s much more difficult to change the momentum of a heavy quark because it’s like a train—hard to stop,” Mukherjee noted. “It would have to undergo many collisions to get dragged along with the plasma.”

But if the QGP is indeed a perfect fluid, the mean free path for the heavy quark interactions should be short enough to make that possible. Calculating the heavy quark diffusion coefficient—which is proportional to how strongly the heavy quarks are interacting with the plasma—was a way to check this understanding.

Crunching the numbers

The calculations needed to solve the equations of quantum chromodynamics (QCD)—the theory that describes quark and gluon interactions—are mathematically complex. Several advances in theory and powerful supercomputers helped to pave the way for the new calculation.

“In 2010/11 we started using a mathematical shortcut, which assumed the plasma consisted only of gluons, no quarks,” said Olaf Kaczmarek of Bielefeld University, who led the German part of this effort. Thinking only of gluons helped the team to work out their method using lattice QCD. In this method, scientists run simulations of particle interactions on a discretized four-dimensional space-time lattice.

Essentially, they “place” the particles on discrete positions on an imaginary 3D grid to model their interactions with neighboring particles and see how those interactions change over time (the 4th dimension). They use many different starting arrangements and include varying distances between particles.

After working out the method with only gluons, they figured out how to add in the complexity of the quarks.

The scientists loaded a large number of sample configurations of quarks and gluons onto the 4D lattice and used Monte Carlo methods—repeated random sampling—to try to find the most probable distribution of quarks and gluons within the lattice.

“By averaging over those configurations, you get a correlation function related to the heavy quark diffusion coefficient,” said Luis Altenkort, a University of Bielefeld graduate student who also worked on this research at Brookhaven Lab.

As an analogy, think about estimating the air pressure in a room by sampling the positions and motion of the molecules. “You try to use the most probable distributions of molecules based on another variable, such as temperature, and exclude improbable configurations—such as all the air molecules being clustered in one corner of the room,” Altenkort said.
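A toy version of that workflow fits in a few lines. The sketch below applies Metropolis sampling, accepting or rejecting random updates according to the action, to a one-dimensional scalar lattice and then averages a two-point correlation function over the sampled configurations. Real lattice QCD works with four-dimensional gauge fields and far more sophisticated estimators; this only shows the shape of the method.

```python
import numpy as np

# Toy 1-D lattice Monte Carlo: sample configurations with Metropolis updates,
# then average a two-point correlator over them. Illustrative stand-in for
# the (4-D, gauge-field) lattice QCD workflow described above.

rng = np.random.default_rng(2)
L, beta, n_cfg, n_sweeps = 32, 2.0, 200, 20
phi = np.zeros(L)

def action_change(phi, i, new):
    # Gaussian nearest-neighbor action plus a mass term.
    left, right = phi[(i - 1) % L], phi[(i + 1) % L]
    s = lambda v: 0.5 * beta * ((v - left) ** 2 + (v - right) ** 2) + 0.5 * v**2
    return s(new) - s(phi[i])

samples = []
for _ in range(n_cfg):
    for _ in range(n_sweeps):                    # sweeps between measurements
        for i in range(L):
            new = phi[i] + rng.normal(0.0, 0.5)
            if rng.random() < np.exp(-action_change(phi, i, new)):
                phi[i] = new                     # Metropolis accept
    samples.append([np.mean(phi * np.roll(phi, -r)) for r in range(L // 2)])

C = np.mean(samples, axis=0)                     # average over configurations
err = np.std(samples, axis=0) / np.sqrt(n_cfg)   # naive error estimate
print("C(r), r=0..3:", np.round(C[:4], 4), "+/-", np.round(err[:4], 4))
```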

In the case of the QGP, the scientists were trying to simulate a thermalized system—where even on the tiny-fraction-of-a-second timescale of heavy ion particle collisions, the quarks and gluons come to some equilibrium temperature.

They simulated the QGP at a range of fixed temperatures and calculated the heavy quark diffusion coefficient for each temperature to map out the temperature dependence of the heavy quark interaction strength (and the mean free path of those interactions).

“These demanding calculations were possible only by using some of the world’s most powerful supercomputers,” Kaczmarek said.

The computing resources included Perlmutter at the National Energy Research Scientific Computing Center (NERSC), a DOE Office of Science User Facility located at Lawrence Berkeley National Laboratory; Juwels Booster at the Juelich Research Center in Germany; Marconi at CINECA in Italy; and dedicated lattice QCD GPU clusters at Thomas Jefferson National Accelerator Facility (Jefferson Lab) and at Bielefeld University.

As Mukherjee noted, “These powerful machines don’t just do the job for us while we sit back and relax; it took years of hard work to develop the codes that can squeeze the most efficient performance out of these supercomputers to do our complex calculations.”

Rapid thermalization, short-range interactions

The calculations show that the heavy quark diffusion coefficient is largest right at the temperature at which the QGP forms, and then decreases with increasing temperatures. This result implies that the QGP comes to an equilibrium extremely rapidly.

“You start with two nuclei, with essentially no temperature, then you collide them and in less than one quadrillionth of a second, you get a thermal system,” Petreczky said. Even the heavy quarks get thermalized.

For that to happen, the heavy quarks have to undergo many scatterings with other particles very quickly—implying that the mean free path of these interactions must be very small. Indeed, the calculations show that, at the transition to QGP, the mean free path of the heavy quark interactions is very close to the shortest distance allowable. That so-called quantum limit is established by the inherent uncertainty of knowing both a particle’s position and momentum simultaneously.

This independent “measure” provides corroborating evidence for the low viscosity of the QGP, substantiating the picture of its perfect fluidity, the scientists say.

“The shorter the mean free path, the lower the viscosity, and the faster the thermalization,” Petreczky said.

Simulating real collisions

Now that scientists know how the heavy quark interactions with the QGP vary with temperature, they can use that information to improve their understanding of how the actual heavy ion collision systems evolve.

“My colleagues are trying to develop more accurate simulations of how the interactions of the QGP affect the motion of heavy quarks,” Petreczky said. “To do that, they need to take into account the dynamical effects of how the QGP expands and cools down—all the complicated stages of the collisions.”

“Now that we know how the heavy quark diffusion coefficient changes with temperature, they can take this parameter and plug it into their simulations of this complicated process and see what else needs to be changed to make those simulations compatible with the experimental data at RHIC and the LHC.”

This effort is the subject of a major collaboration known as the Heavy-Flavor Theory (HEFTY) for QCD Matter Topical Theory Collaboration.

“We’ll be able to better model the motion of heavy quarks in the QGP, and then have a better theory-to-data comparison,” Petreczky said.

More information: Luis Altenkort et al, Heavy Quark Diffusion from 2+1 Flavor Lattice QCD with 320 MeV Pion Mass, Physical Review Letters (2023). DOI: 10.1103/PhysRevLett.130.231902

Journal information: Physical Review Letters 

Provided by Brookhaven National Laboratory 

Physicists discover an exotic material made of bosons

Bosonic correlated insulator. (A) Illustration of a bosonic correlated insulator consisting of interlayer excitons. Magenta spheres indicate holes and cyan spheres indicate electrons. (Inset) Type II band alignment of the WSe2/WS2 heterostructure. (B) Schematics of continuous-wave pump-probe spectroscopy. The exciton and electron density are independently controlled by pump light and an electrostatic gate. Red and green shading correspond to wide-field pump light and focused probe light, respectively. (C and E) Gate-dependent PL (C) and absorption (E) spectra of a 60°-aligned WSe2/WS2 moiré bilayer (device D1) at zero pump intensity. The PL peak shows a sudden blue shift at electron filling νe = 1 and 2 (yellow arrows), where the absorption spectrum shows kinks and splitting. (D and F) Pump intensity-dependent PL (D) and absorption (F) spectra of device D1 at charge neutrality. Right axes show the dipolar-interaction-induced interlayer exciton energy shift Δdipole, which is approximately proportional to νex. The dominant PL peaks in (D) at low and high pump intensity are labeled peak I and peak II, respectively. All measurements are performed at a base temperature of 1.65 K. Credit: Science (2023). DOI: 10.1126/science.add5574

Take a lattice—a flat section of a grid of uniform cells, like a window screen or a honeycomb—and lay another, similar lattice above it. But instead of trying to line up the edges or the cells of both lattices, give the top grid a twist so that you can see portions of the lower one through it. This new, third pattern is a moiré, and it is in this type of overlapping arrangement of tungsten diselenide and tungsten disulfide lattices that UC Santa Barbara physicists found some interesting material behaviors.

“We discovered a new state of matter—a bosonic correlated insulator,” said Richen Xiong, a graduate student researcher in the group of UCSB condensed matter physicist Chenhao Jin, and the lead author of a paper that appears in the journal Science.

According to Xiong, Jin and collaborators from UCSB, Arizona State University and the National Institute for Materials Science in Japan, this is the first time such a material—a highly ordered crystal of bosonic particles called excitons—has been created in a “real” (as opposed to synthetic) matter system.

“Conventionally, people have spent most of their efforts to understand what happens when you put many fermions together,” Jin said. “The main thrust of our work is that we basically made a new material out of interacting bosons.”

Bosonic, correlated, insulator

Subatomic particles come in one of two broad types: fermions and bosons. One of the biggest distinctions is in their behavior, Jin said.

“Bosons can occupy the same energy level; fermions don’t like to stay together,” he said. “Together, these behaviors construct the universe as we know it.”

Fermions, such as electrons, underlie the matter with which we are most familiar, as they are stable and interact through the electrostatic force. Meanwhile, bosons, such as photons (particles of light), tend to be more difficult to create or manipulate, as they are either fleeting or do not interact with each other.

A clue to their distinct behaviors is in their different quantum mechanical characteristics, Xiong explained. Fermions have half-integer “spins” such as 1/2 or 3/2 et cetera, while bosons have whole integer spins (1, 2, etc.). An exciton is a state in which a negatively charged electron (a fermion) is bound to its positively charged opposite “hole” (another fermion), with the two half-integer spins together becoming a whole integer, creating a bosonic particle.

To create and identify excitons in their system, the researchers layered the two lattices and shone strong lights on them in a method they call “pump-probe spectroscopy.” The combination of particles from each of the lattices (electrons from the tungsten disulfide and the holes from the tungsten diselenide) and the light created a favorable environment for the formation of and interactions between the excitons while allowing the researchers to probe these particles’ behaviors.

“And when these excitons reached a certain density, they could not move anymore,” Jin said. Thanks to strong interactions, the collective behaviors of these particles at a certain density forced them into a crystalline state, and created an insulating effect due to their immobility.

“What happened here is that we discovered the correlation that drove the bosons into a highly ordered state,” Xiong added. Generally, a loose collection of bosons under ultracold temperatures will form a condensate, but in this system, with both light and increased density and interaction at relatively higher temperatures, they organized themselves into a symmetric solid and charge-neutral insulator.

The creation of this exotic state of matter proves that the researchers’ moiré platform and pump-probe spectroscopy could become an important means for creating and investigating bosonic materials.

“There are many-body phases with fermions that result in things like superconductivity,” Xiong said. “There are also many-body counterparts with bosons that are also exotic phases. So what we’ve done is create a platform, because we did not really have a great way to study bosons in real materials.” While excitons are well studied, he added, until this project there had not been a way to coax them into interacting strongly with one another.

With their method, according to Jin, it could be possible to not only study well-known bosonic particles like excitons but also open more windows into the world of condensed matter with new bosonic materials.

“We know that some materials have very bizarre properties,” he said. “And one goal of condensed matter physics is to understand why they have these rich properties and find ways to make these behaviors come out more reliably.”

More information: Richen Xiong et al, Correlated insulator of excitons in WSe2/WS2 moiré superlattices, Science (2023). DOI: 10.1126/science.add5574

Journal information: Science 

Provided by University of California – Santa Barbara 

The hunt for elusive axion particles: Experiments suggest better methods for exploring the dark sector

The expected and actual 90% confidence limits from CCM120 for the ALP-photon coupling g. Also included is the projected region for a three-year CCM200 run, using a background taken from CCM120’s spectrum reduced by two orders of magnitude for various conservative improvements (dashed green line) and a background-free assumption (extent of shaded green region). The QCD axion-model parameter space for the KSVZ benchmark scenario spans the region indicated by the arrows. Credit: Physical Review D (2023). DOI: 10.1103/PhysRevD.107.095036

Since axions were first predicted by theory nearly half a century ago, researchers have hunted for proof of the elusive particle, which may exist outside the visible universe, in the dark sector. But how does one find particles that can’t be seen?

The first physics results from the Coherent CAPTAIN-Mills experiment at Los Alamos—just described in a publication in the journal Physical Review D—suggest that liquid-argon, accelerator-based experimentation, designed initially to look for similarly hypothetical particles such as sterile neutrinos, may also be an ideal set-up for seeking out stealthy axions.

“The confirmation of dark sector particles would have a profound impact on the understanding of the Standard Model of particle physics, as well as the origin and evolution of the universe,” said physicist Richard Van de Water. “A big focus of the physics community is exploring ways to detect and confirm these particles. The Coherent CAPTAIN-Mills experiment couples existing predictions of dark matter particles such as axions with high-intensity particle accelerators capable of producing this hard-to-find dark matter.”

Demystifying the dark sector

Physics theory suggests that only 5% of the universe is made up of visible matter—atoms that form things we can see, touch and feel—and that the remaining 95% is the combination of matter and energy known as the dark sector. Axions, sterile neutrinos and others may explain and account for all or part of that missing energy density.

The existence of axions could also resolve a longstanding problem in the Standard Model, which outlines the known behavior of the subatomic world. Sometimes referred to as “fossils” of the universe, speculated to originate just a second after the Big Bang, axions could also tell us much about the founding moments of the universe.

The Coherent CAPTAIN-Mills experiment was one of several projects to receive Department of Energy funding for dark sector research in 2019, along with substantial funding from the Laboratory Directed Research and Development program at Los Alamos. A prototype detector dubbed the CCM120 was built and run during the 2019 Los Alamos Neutron Science Center (LANSCE) beam cycle. The Physical Review D publication describes results from the CCM120’s initial engineering run.

“Based on the first run of CAPTAIN-Mills research, the experiment has demonstrated the capability to execute the search for axions,” said Bill Louis, also a physicist on the project at Los Alamos. “We’re realizing that the energy regime provided by the proton beam at LANSCE and the liquid argon detector design offers an unexplored paradigm for axion-like particle research.”

Experiment design

Stationed in the Lujan Center adjacent to LANSCE, the Coherent CAPTAIN-Mills experiment is a 10-ton, supercooled, liquid argon detector. (CAPTAIN stands for Cryogenic Apparatus for Precision Tests of Argon Reactions with Neutrinos.)

High-intensity, 800-megaelectron volt protons generated by the LANSCE accelerator hit a tungsten target in the Lujan Center, then traverse 23 meters through extensive steel and concrete shielding to the detector to interact in the liquid argon.

The prototype detector’s interior walls are lined with 120 sensitive, eight-inch photomultiplier tubes (hence the CCM120 moniker) that detect light flashes—single photons—that result when a regular or dark sector particle jostles an atom in the tank of liquid argon.

A special material coating on the interior walls converts the argon light emission into visible light that can be detected by the photomultiplier tubes. Fast timing of the detector and beam helps remove the effects of background particles such as beam neutrons, cosmic rays and gamma-rays from radioactive decays.
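The timing cut works because beam-related signal arrives in a narrow window locked to the proton pulse, while cosmic rays and radioactive decays arrive uniformly in time. The sketch below illustrates the idea with invented numbers; the window width, rates, and time scales are not CCM's actual parameters.

```python
import numpy as np

# Schematic beam-timing cut. All rates, widths, and windows are invented
# for illustration; they are not CCM's actual timing parameters.

rng = np.random.default_rng(3)
window = (0.0, 0.4)                               # accept window (microseconds)
signal = rng.normal(0.2, 0.05, size=500)          # beam-correlated event times
background = rng.uniform(-5.0, 5.0, size=50_000)  # uniform-in-time background

def fraction_kept(times):
    return np.mean((times > window[0]) & (times < window[1]))

print(f"signal kept:     {fraction_kept(signal):.1%}")      # essentially all
print(f"background kept: {fraction_kept(background):.1%}")  # ~4% in this toy
```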

Pieces of the puzzle

Axions are of great interest because they are “highly motivated”; that is, their existence is strongly implied in theories beyond the Standard Model. Developed over more than 70 years, the Standard Model explains three of the four known fundamental forces—electromagnetism, the weak nuclear force and the strong nuclear force—that govern the behavior of atoms, the building blocks of matter. (The fourth force, gravity, is explained by Einsteinian relativity.) But the model isn’t necessarily complete.

An unresolved problem in Standard Model physics is known as the “strong CP problem,” with “CP” meaning charge-parity symmetry. Essentially, particles and their antiparticle counterparts are acted upon similarly by the laws of physics. Nothing in Standard Model physics mandates that behavior, though, so physicists should see at least occasional violations of that symmetry.

In weak-force interactions, charge-parity symmetry violations do occur. But no similar violations have been observed in strong-force interactions. That puzzling absence of theoretically possible behavior represents a problem for Standard Model theory. What prevents violations of charge-parity symmetry from occurring in strong-force interactions?

Abundant, nearly weightless and electrically neutral, axions may be an important part of the puzzle. The axion earned its moniker in 1978, so-coined by physicist Frank Wilczek after a brand of laundry detergent because such a particle could “clean up” the strong CP problem. Physicists speculate that they are components of a dark matter force that preserves charge-parity symmetry, and that they may couple, or interact with, photons and electrons.

Next steps

If axions do exist, finding them might be a matter of devising the right experimental set-up.

“As a result of this initial run with our CCM120 detector, we have a much better understanding of the signatures connected with axion-like particles coupled to photons and to electrons as they move through liquid argon,” said Louis. “These data give us the insight to upgrade the detector to be more sensitive by an order of magnitude.”

More information: A. A. Aguilar-Arevalo et al, Prospects for detecting axionlike particles at the Coherent CAPTAIN-Mills experiment, Physical Review D (2023). DOI: 10.1103/PhysRevD.107.095036

Journal information: Physical Review D 

Provided by Los Alamos National Laboratory 

Physicists develop powerful alternative to dynamic density functional theory

Illustration of a unidirectional flow as investigated in the new study using a Lennard-Jones fluid as an example. The three-dimensional nonequilibrium system is set in motion (red arrows) by a force field (blue arrows) acting along the x-axis. Credit: Matthias Schmidt

Living organisms, ecosystems and the planet Earth are, from a physics point of view, examples of extraordinarily large and complex systems that are not in thermal equilibrium. To physically describe non-equilibrium systems, dynamic density functional theory has been used to date.

However, this theory has weaknesses, as physicists from the University of Bayreuth have now shown in an article published in the Journal of Physics: Condensed Matter. Power functional theory proves to perform substantially better—in combination with artificial intelligence methods, it enables more reliable descriptions and predictions of the dynamics of non-equilibrium systems over time.

Many-particle systems are systems composed of atoms, electrons, molecules, and other particles invisible to the eye. They are in thermal equilibrium when the temperature is balanced and no heat flow occurs. A system in thermal equilibrium changes its state only when external conditions change. Density functional theory is tailor-made for the study of such systems.

For more than half a century, it has proven its value in chemistry and materials science. Based on a powerful classical variant of this theory, the states of equilibrium systems can be described and predicted with high accuracy. Dynamic density functional theory (DDFT) extends the scope of this theory to non-equilibrium systems. This involves the physical understanding of systems whose states are not fixed by their external boundary conditions.

These systems have a momentum of their own: they have the ability to change their states without external influences acting on them. Findings and application methods of DDFT are therefore of great interest, for example, for the study of models for living organisms or microscopic flows.

The error potential of dynamic density functional theory

However, DDFT uses an auxiliary construction to make non-equilibrium systems accessible to physical description. It translates the continuous dynamics of these systems into a temporal sequence of equilibrium states. This results in a potential for errors that should not be underestimated, as the Bayreuth team led by Prof. Dr. Matthias Schmidt shows in the new study.

The investigations focused on a comparatively simple example—the unidirectional flow of a gas known in physics as a “Lennard-Jones fluid.” If this nonequilibrium system is interpreted as a chain of successive equilibrium states, one aspect involved in the time-dependent dynamics of the system is neglected, namely the flow field. As a result, DDFT may provide inaccurate descriptions and predictions.
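The structure of the problem is visible even in the simplest, ideal-gas form of DDFT, sketched below in one dimension with assumed parameters. The update evolves only the density ρ(x, t) through a continuity equation; no velocity or flow field appears anywhere, which is exactly the information that gets lost when a driven flow is recast as a chain of equilibrium states.

```python
import numpy as np

# Minimal 1-D DDFT sketch (ideal-gas free energy; assumed parameters):
#   d(rho)/dt = D * d/dx [ d(rho)/dx + rho * d(beta * V_ext)/dx ]
# Note the absence of any flow-field variable in the update.

N, Lx, D, dt, steps = 200, 10.0, 1.0, 1e-4, 5000
x = np.linspace(0.0, Lx, N, endpoint=False)
dx = x[1] - x[0]
rho = np.full(N, 0.5)                         # uniform initial density
beta_V = 0.8 * np.sin(2 * np.pi * x / Lx)     # external potential, beta*V_ext

def ddx(f):                                   # periodic central difference
    return (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)

for _ in range(steps):
    flux = -D * (ddx(rho) + rho * ddx(beta_V))
    rho -= dt * ddx(flux)                     # continuity equation for rho only

print(f"density range after relaxation: {rho.min():.3f} .. {rho.max():.3f}")
# rho relaxes toward ~exp(-beta*V_ext); describing an actual flow would need
# extra structure, such as power functional theory's explicit current.
```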

“We do not deny that dynamic density functional theory can provide valuable insights and suggestions when applied to nonequilibrium systems under certain conditions. The problem, however, and we want to draw attention to this in our study using fluid flow as an example, is that it is not possible to determine with sufficient certainty whether these conditions are met in any particular case. The DDFT does not provide any control over whether the restricted framework conditions are given under which it enables reliable calculations. This makes it all the more worthwhile to develop alternative theoretical concepts for understanding nonequilibrium systems,” says Prof. Dr. Daniel de las Heras, first author of the study.

Power functional theory proves to perform substantially better

For ten years, the research team around Prof. Dr. Matthias Schmidt has been making significant contributions to the development of a still young physical theory, which has so far proven to be very successful in the physical study of many-particle systems: power functional theory (PFT). The physicists from Bayreuth are pursuing the goal of being able to describe the dynamics of non-equilibrium systems with the same precision and elegance with which classical density functional theory enables the analysis of equilibrium systems.

In their new study, they now use the example of a fluid flow to show that power functional theory is significantly superior to DDFT when it comes to understanding non-equilibrium systems. PFT allows the dynamics of these systems to be described without having to take a detour via a chain of successive equilibrium states in time. The decisive factor here is the use of artificial intelligence. Machine learning opens up the time-dependent behavior of the fluid flow by including all factors relevant to the system’s inherent dynamics—including the flow field. In this way, the team has even succeeded in controlling the flow of the Lennard-Jones fluid with high precision.

“Our investigation provides further evidence that power functional theory is a very promising concept that can be used to describe and explain the dynamics of many-particle systems. In Bayreuth, we intend to further elaborate this theory in the coming years, applying it to nonequilibrium systems that have a much higher degree of complexity than the fluid flow we studied. In this way, the PFT will be able to supersede the dynamic density functional theory, whose systemic weaknesses it avoids according to our findings so far. The original density functional theory, which is tailored to equilibrium systems and has proven its worth, is retained as an elegant special case of PFT,” says Prof. Dr. Matthias Schmidt, who is chair of theoretical physics II at the University of Bayreuth.

More information: Daniel de las Heras et al, Perspective: How to overcome dynamical density functional theory, Journal of Physics: Condensed Matter (2023). DOI: 10.1088/1361-648X/accb33

Provided by Bayreuth University

Gravitational waves innovation could help unlock cosmic secrets

New frontiers in the study of the universe—and gravitational waves—have been opened up following a breakthrough by University of the West of Scotland (UWS) researchers.

The groundbreaking development in thin film technology promises to enhance the sensitivity of current and future gravitational wave detectors. Developed by academics at UWS’s Institute of Thin Films, Sensors and Imaging (ITFSI), the innovation could enhance the understanding of the nature of the universe. The research is published in the journal Applied Optics.

Gravitational waves, first predicted by Albert Einstein’s theory of general relativity, are ripples in the fabric of spacetime caused by the most energetic events in the cosmos, such as black hole mergers and neutron star collisions. Detecting and studying these waves provides invaluable insights into the fundamental nature of the universe.

Dr. Carlos Garcia Nuñez, lecturer at UWS’s School of Computing, Engineering and Physical Sciences, said, “At the Institute of Thin Films, Sensors and Imaging, we are working hard to push the limits of thin film materials, exploring new techniques to deposit them, controlling their properties in order to match the requirements of current and future sensing technology for the detection of gravitational waves.”

“The development of highly reflective mirrors with low thermal noise opens up a wide range of applications, from the detection of gravitational waves from cosmological events to the development of quantum computers.”

The technique used in this work—originally developed and patented by Professor Des Gibson, Director of UWS’s Institute of Thin Films, Sensors and Imaging—could enable the production of thin films that achieve low levels of “thermal noise.” The reduction of this kind of noise in mirror coatings is essential to increase the sensitivity of current gravitational wave detectors—allowing the detection of a wider range of cosmological events—and could be deployed to enhance other high-precision devices, such as atomic clocks or quantum computers.
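The reason low mechanical loss translates into quieter mirrors can be read off the standard leading-order scaling for coating Brownian noise: the displacement noise power grows with temperature, coating thickness, and the mechanical loss angle φ, and falls with beam size and stiffness. The sketch below uses this schematic proportionality, not the full elasticity expression from the coating-thermal-noise literature, and every parameter value is an assumption.

```python
import numpy as np

# Schematic coating Brownian noise scaling: S_x(f) ~ kB*T*d*phi / (f*w^2*Y),
# up to order-one factors. Illustrative, assumed parameters only.

kB = 1.380649e-23   # Boltzmann constant (J/K)
T = 293.0           # mirror temperature (K), assumed
d = 5e-6            # total coating thickness (m), assumed
phi = 1e-4          # coating mechanical loss angle, assumed
w = 0.06            # laser beam radius on the mirror (m), assumed
Y = 7.2e10          # effective Young's modulus (Pa), assumed
f = 100.0           # measurement frequency (Hz)

S_x = kB * T * d * phi / (f * w**2 * Y)          # displacement PSD, m^2/Hz
print(f"amplitude noise ~ {np.sqrt(S_x):.1e} m/sqrt(Hz) at {f:.0f} Hz (schematic)")
# Halving phi halves S_x: lower mechanical loss directly lowers the
# thermal-noise floor, which is what the UWS deposition technique targets.
```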

Professor Gibson said, “We are thrilled to unveil this cutting-edge thin film technology for gravitational wave detection. This breakthrough represents a significant step forward in our ability to explore the universe and unlock its secrets through the study of gravitational waves. We believe this advancement will accelerate scientific progress in this field and open up new avenues for discovery.”

“UWS’s thin film technology has already undergone extensive testing and validation in collaboration with renowned scientists and research institutions. The results have been met with great enthusiasm, fueling anticipation for its future impact on the field of gravitational wave astronomy. The coating deposition technology is being commercialized by UWS spinout company, Albasense Ltd.”

The development of coatings with low thermal noise will not only make future generations of gravitational wave detectors more precise and sensitive to cosmic events, but will also provide new solutions for atomic clocks and quantum technologies, both highly relevant to the United Nations’ Sustainable Development Goals 7, 9 and 11.

More information: Carlos Garcia Nuñez et al, Amorphous dielectric optical coatings deposited by plasma ion-assisted electron beam evaporation for gravitational wave detectors, Applied Optics (2023). DOI: 10.1364/AO.477186

Provided by University of the West of Scotland