Studying muonium to reveal new physics beyond the Standard Model

Muonium frequency scan at 22.5 W in the range of 200–800 MHz. The fitted black line includes the 3S contribution; the gray line excludes it. The error bars correspond to the counting statistical error. The colored areas represent the underlying contributions from 2S − 2P1/2 transitions, namely 583 MHz (blue), 1140 MHz (orange), 1326 MHz (green), and the combined 3S − 3P1/2 (yellow). The data point with TL OFF is not displayed in the figure but is included in the fit; it lies at 20.4(4) × 10⁻⁴. Credit: Nature Communications (2022). DOI: 10.1038/s41467-022-34672-0

By studying an exotic atom called muonium, researchers are hoping misbehaving muons will spill the beans on the Standard Model of particle physics. To make muonium, they use the most intense continuous beam of low-energy muons in the world, at the Paul Scherrer Institute (PSI). The research is published in Nature Communications.

The muon is often described as the electron's heavy cousin. A more appropriate description might be its rogue relation. Ever since its discovery prompted Nobel laureate Isidor Isaac Rabi to ask "Who ordered that?", the muon has been bamboozling scientists with its law-breaking antics.

The muon's most famous misdemeanor is to wobble slightly too much in a magnetic field: its anomalous magnetic moment hit the headlines with the 2021 muon g-2 experiment at Fermilab. The muon also notably caused trouble when it was used to measure the radius of the proton—giving rise to a value wildly different from previous measurements, in what became known as the proton radius puzzle.

Yet rather than being chastised, the muon is cherished for its surprising behavior, which makes it a likely candidate to reveal new physics beyond the Standard Model.

Aiming to make sense of the muon’s strange behavior, researchers from PSI and ETH Zurich turned to an exotic atom known as muonium. Formed from a positive muon orbited by an electron, muonium is similar to hydrogen but much simpler. Whereas hydrogen’s proton is made up of quarks, muonium’s positive muon has no substructure. And this means it provides a very clean model system from which to sort these problems out: for example, by obtaining extremely precise values of fundamental constants such as the mass of the muon.

“With muonium, because we can measure its properties so precisely, we can try to detect any deviation from the Standard Model. And if we see this, we can then infer which of the theories that go beyond the Standard Model are viable or not,” explains Paolo Crivelli from ETH Zurich, who is leading the study supported by a European Research Council Consolidator grant in the frame of the Mu-MASS project.

Making sense of the muon's misdemeanors
By making precise measurements in an exotic atom known as muonium, Crivelli and Prokscha are aiming to understand puzzling results using muons, which may in turn reveal gaps in the laws of physics as we know them. To make the measurements, they use the most intense, continuous source of low energy muons in the world at the Paul Scherrer Institute PSI in Switzerland. Credit: Paul Scherrer Institute / Mahir Dzambegovic

The only place in the world where this is possible

A major challenge in making these measurements very precisely is having an intense beam of muonium atoms so that statistical errors can be reduced. Making lots of muonium, which incidentally survives for only two microseconds, is not simple. There is one place in the world where enough positive muons at low energy are available to create it: PSI's Swiss Muon Source.

“To make muonium efficiently, we need to use slow muons. When they’re first produced they’re going at a quarter of the speed of light. We then need to slow them down by a factor of a thousand without losing them. At PSI, we’ve perfected this art. We have the most intense continuous source of low energy muons in the world. So we’re uniquely positioned to perform these measurements,” says Thomas Prokscha, who heads the Low Energy Muons group at PSI.

At the Low Energy Muons beamline, slow muons pass through a thin foil target where they pick up electrons to form muonium. As they emerge, Crivelli’s team are waiting to probe their properties using microwave and laser spectroscopy.

Tiny change in energy levels could hold the key

The property of muonium that the researchers are able to study in such detail is its energy levels. In the recent publication, the teams were able to measure for the first time a transition between certain very specific energy sublevels in muonium. Isolated from other so-called hyperfine levels, the transition can be modeled extremely cleanly. The ability to now measure it will facilitate other precision measurements: in particular, to obtain an improved value of an important quantity known as the Lamb shift.

The Lamb shift is a minuscule change in certain energy levels of hydrogen relative to where they "should" be as predicted by the Dirac theory. The shift was explained with the advent of Quantum Electrodynamics (the quantum theory of how light and matter interact). Yet, as discussed, in hydrogen the proton—possessing substructure—complicates things. An ultra-precise Lamb shift measured in muonium could put the theory of Quantum Electrodynamics to the test.

There is more. The muon is around nine times lighter than the proton. This means that effects relating to the nuclear mass, such as how a particle recoils after absorbing a photon of light, are enhanced. Undetectable in hydrogen, these effects could be measured at high precision in muonium, enabling scientists to test certain theories that would explain the muon g-2 anomaly: for example, the existence of new particles such as scalar or vector gauge bosons.
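The rough size of this enhancement can be sketched with a back-of-the-envelope calculation: recoil corrections to atomic energy levels scale roughly with the electron-to-nucleus mass ratio, so replacing the proton with the lighter muon amplifies them. The mass values below are standard constants; the scaling is a simplification of the full QED treatment.

```python
# Back-of-the-envelope comparison of nuclear-recoil sensitivity in
# hydrogen vs. muonium.  Recoil corrections to atomic energy levels
# scale roughly with the electron-to-nucleus mass ratio m_e / M.
M_PROTON = 1836.15  # proton mass in units of the electron mass
M_MUON = 206.77     # muon mass in units of the electron mass

recoil_hydrogen = 1.0 / M_PROTON
recoil_muonium = 1.0 / M_MUON
enhancement = recoil_muonium / recoil_hydrogen

print(f"m_e/M in hydrogen:  {recoil_hydrogen:.2e}")
print(f"m_e/M in muonium:   {recoil_muonium:.2e}")
print(f"enhancement factor: {enhancement:.1f}")  # roughly 9x
```

The factor of roughly nine is exactly the proton-to-muon mass ratio mentioned above.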

Putting the muon on the scales

However exciting the potential of this may be, the team have a greater goal in their sights: weighing the muon. To do this, they will measure a different transition in muonium to a precision one thousand times greater than ever before.

An ultra-high precision value of the muon mass—the goal is 1 part per billion—will support ongoing efforts to reduce uncertainty even further for muon g-2. “The muon mass is a fundamental parameter that we cannot predict with theory, and so as experimental precision improves, we desperately need an improved value of the muon mass as an input for the calculations,” explains Crivelli.

The measurement could also lead to a new value of the Rydberg constant—an important fundamental constant in atomic physics—that is independent of hydrogen spectroscopy. This could explain discrepancies between measurements that gave rise to the proton radius puzzle, and maybe even solve it once and for all.

Muonium spectroscopy poised to fly with IMPACT project

Given that the main limitation for such experiments is producing enough muonium to reduce statistical errors, the outlook for this research at PSI looks bright.

“With the high intensity muon beams planned for the IMPACT project we could potentially go a factor of one hundred higher in precision, and this would be getting very interesting for the Standard Model,” says Prokscha.

More information: Gianluca Janka et al, Measurement of the transition frequency from 2S1/2, F = 0 to 2P1/2, F = 1 states in Muonium, Nature Communications (2022). DOI: 10.1038/s41467-022-34672-0

Journal information: Nature Communications 

Provided by Paul Scherrer Institute 

Covering a cylinder with a magnetic coil triples its energy output in nuclear fusion test

(a) Sketch of the magnetized NIF hohlraum constructed from AuTa4 with solenoidal coil to carry current. (b) X-ray drive measured through one of the LEHs and the incident laser powers for a magnetized and unmagnetized AuTa4 hohlraum and two unmagnetized Au hohlraums. “BF” refers to “BigFoot,” the name of the previous ignition design. Credit: Physical Review Letters (2022). DOI: 10.1103/PhysRevLett.129.195002

A team of researchers working at the National Ignition Facility, part of Lawrence Livermore National Laboratory, has found that covering a cylinder containing a small amount of hydrogen fuel with a magnetic coil and firing lasers at it triples its energy output—another step toward the development of nuclear fusion as a power source.

In their paper published in the journal Physical Review Letters, the team, which has members from several facilities in the U.S., one in the U.K. and one in Japan, describes upgrading their setup to allow for the introduction of the magnetic coil.

Last year, a team working at the same facility announced that they had come closer to achieving ignition in a nuclear fusion test than anyone had before. Unfortunately, they were unable to repeat their results. Since that time, the team has been reviewing their original design, looking for ways to improve it.

The original design involved firing 192 laser beams at a tiny cylinder containing a tiny sphere of hydrogen at its center. This created X-rays that heated the sphere until its atoms began to fuse. Some of the design improvements have involved changing the size of the holes through which the lasers pass, but they have only led to minor changes.

Looking for a better solution, the team studied prior research and found several studies that had shown, via simulation, that encasing a cylinder in a magnetic field should significantly increase the energy output.

Putting the suggestion into practice, the researchers had to modify the cylinder—originally, it was made of gold. Placing it in a strong magnetic field would create an electric current strong enough to tear the cylinder apart, so they made a new one from an alloy of gold and tantalum. They also switched the fuel from hydrogen to deuterium (a heavier isotope of hydrogen), then wrapped the whole assembly in a coil generating a strong magnetic field. Then they fired the lasers. The researchers saw an immediate improvement—the temperature of the hot spot on the sphere rose by 40% and the energy output tripled.

The work marks a step toward the ultimate goal—creating a fusion reactor that can produce more energy than is put into it.

More information: J. D. Moody et al, Increased Ion Temperature and Neutron Yield Observed in Magnetized Indirectly Driven D2 -Filled Capsule Implosions on the National Ignition Facility, Physical Review Letters (2022). DOI: 10.1103/PhysRevLett.129.195002

Journal information: Physical Review Letters 

© 2022 Science X Network

Using machine learning to infer rules for designing complex mechanical metamaterials

Two combinatorial mechanical metamaterials designed in such a way that the letters M and L bulge out in the front when being squeezed between two plates (top and bottom). Designing novel metamaterials such as this is made easy by AI. Credit: Daan Haver and Yao Du, University of Amsterdam

Mechanical metamaterials are sophisticated artificial structures with mechanical properties that are driven by their structure, rather than their composition. While these structures have proved very promising for the development of new technologies, designing them can be both challenging and time-consuming.

Researchers at the University of Amsterdam, AMOLF, and Utrecht University have recently demonstrated the potential of convolutional neural networks (CNNs), a class of machine learning algorithms, for designing complex mechanical metamaterials. Their paper, published in Physical Review Letters, specifically introduces two different CNN-based methods that can derive and capture the subtle combinatorial rules underpinning the design of mechanical metamaterials.

“Our recent study can be considered a continuation of the combinatorial design approach introduced in a previous paper, which can be applied to more complicated building blocks,” Ryan van Mastrigt, one of the researchers who carried out the study, told Phys.org. “Around the time when I started working on this study, Aleksi Bossart and David Dykstra were working on a combinatorial metamaterial that is able to host multiple functionalities, meaning a material that can deform in multiple distinct ways depending on how one actuates it.”

As part of their previous research, van Mastrigt and his colleagues tried to distill the rules underpinning the successful design of complex metamaterials. They soon realized that this was far from an easy task, as the “building blocks” that make up these structures can be deformed and arranged in countless different ways.

Previous studies showed that when metamaterials have small unit-cell sizes (i.e., a limited number of "building blocks"), simulating all the ways in which these blocks can be deformed and arranged using conventional physics simulation tools is possible. As the unit cells become larger, however, the task becomes extremely challenging or outright impossible.
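The reason brute-force enumeration breaks down is combinatorial growth: if each building block can take one of n orientations, a k × k unit cell admits n^(k²) candidate designs. A quick sketch (the orientation count n = 4 is an illustrative assumption, not a number from the paper):

```python
# Illustration of why brute-force enumeration fails for larger unit
# cells: with n possible building-block orientations and a k x k cell,
# the design space grows as n ** (k * k).  n = 4 is illustrative.
n_orientations = 4

for k in (2, 3, 4, 5, 8):
    n_designs = n_orientations ** (k * k)
    print(f"{k}x{k} unit cell: {n_designs:.2e} candidate designs")
```

Already at a 5 × 5 cell the count exceeds 10¹⁵, far beyond what conventional simulation tools can exhaustively check.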

“Since we were unable to reason about any underlying design rules and conventional tools failed at allowing us to explore larger unit cell designs in an efficient way, we decided to consider machine learning as a serious option,” van Mastrigt explained. “Thus, the main objective of our study became to identify a machine learning tool that would allow us to explore the design space much quicker than before. I think that we succeeded and even exceeded our own expectations with our findings.”

To successfully train CNNs to tackle the design of complex metamaterials, van Mastrigt and his colleagues initially had to overcome a series of challenges. Firstly, they had to find a way to effectively represent their metamaterial designs.

“We tried a couple of approaches and finally settled on what we refer to as the pixel representation,” van Mastrigt explained. “This representation encodes the orientation of each building block in a clear visual manner, such that the classification problem is cast to a visual pattern detection problem, which is exactly what CNNs are good at.”
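A minimal sketch of what such a pixel representation might look like, assuming (hypothetically) four possible block orientations encoded as one-hot channels so that a design grid becomes a multi-channel "image" a CNN can classify; the encoding details here are illustrative, not the paper's exact scheme:

```python
# Hypothetical sketch of a "pixel representation": each building block's
# orientation (0-3) becomes a one-hot vector, so a k x k design maps to
# a k x k x 4 array that a CNN can treat like a multi-channel image.
# The orientation count and layout are illustrative assumptions.

def to_pixels(design):
    """Encode a 2D grid of block orientations as one-hot channels."""
    n_orient = 4
    return [[[1.0 if o == c else 0.0 for c in range(n_orient)]
             for o in row]
            for row in design]

design = [[0, 3],
          [1, 2]]
pixels = to_pixels(design)
print(pixels[0][1])  # orientation 3 -> [0.0, 0.0, 0.0, 1.0]
```

Casting the design as an image-like array is what lets standard visual pattern detection machinery take over.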

Subsequently, the researchers had to devise methods that account for the huge class imbalance among metamaterial designs. In other words, as there are currently many known metamaterials belonging to class I, but far fewer belonging to class C (the class that the researchers are interested in), training CNNs to infer combinatorial rules for these different classes might entail different steps.

To tackle this challenge, van Mastrigt and his colleagues devised two different CNN-based techniques. These two techniques are applicable to different metamaterial classes and classification problems.

“In the case of metamaterial M2, we tried to create a training set that is class-balanced,” van Mastrigt said. “We did this using naïve undersampling (i.e., throwing a lot of class I examples away) and combined this with symmetries which we know some designs have, such as translational and rotational symmetry, to create additional class C designs.

“This approach thus requires some domain knowledge. For metamaterial M1, on the other hand, we added a reweight term to the loss function such that the rare class C designs weigh more heavily during training, where the key idea is that this reweighting of class C cancels out with the much larger number of class I designs in the training set. This approach requires no domain knowledge.”
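The reweighting idea described for M1 can be sketched in a few lines. This is a schematic, stdlib-only illustration of inverse-frequency class weighting in a cross-entropy loss, not the authors' actual training code; the example counts (990 vs. 10) are invented for the demonstration.

```python
# Sketch of the loss-reweighting idea for a two-class problem with a
# rare class C and an abundant class I.  Weighting each class inversely
# to its frequency makes rare examples count more during training.
import math

def weighted_cross_entropy(p_class_c, label_is_c, n_class_i, n_class_c):
    """Binary cross-entropy with inverse-frequency class weights."""
    total = n_class_i + n_class_c
    w_c = total / (2.0 * n_class_c)   # rare class -> large weight
    w_i = total / (2.0 * n_class_i)   # abundant class -> small weight
    if label_is_c:
        return -w_c * math.log(p_class_c)
    return -w_i * math.log(1.0 - p_class_c)

# With 990 class-I and 10 class-C examples, the same prediction error
# is penalized far more heavily on the rare class than on class I.
loss_rare = weighted_cross_entropy(0.1, True, 990, 10)
loss_common = weighted_cross_entropy(0.9, False, 990, 10)
print(loss_rare, loss_common)
```

In aggregate, the large per-example weight on class C cancels against its small population, which is the balancing effect the quote describes.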

In initial tests, both these CNN-based methods for deriving the combinatorial rules behind the design of mechanical metamaterials achieved highly promising results. The team found that they each performed better on different tasks, depending on the initial dataset used and known (or unknown) design symmetries.

“We showed just how extraordinarily good these networks are at solving complex combinatorial problems,” van Mastrigt said. “This was really surprising for us, since all other conventional (statistical) tools we as physicists commonly use fail for these types of problems. We showed that neural networks really do more than just interpolate the design space based on the examples you give them, as they appear to be somehow biased to find a structure (which comes from rules) in this design space that generalizes extremely well.”

The recent findings gathered by this team of researchers could have far-reaching implications for the design of metamaterials. While the networks they trained have so far been applied to only a few metamaterial structures, they could eventually also be used to create far more complex designs, which would be incredibly difficult to tackle using conventional physics simulation tools.

The work by van Mastrigt and his colleagues also highlights the huge value of CNNs for tackling combinatorial problems, optimization tasks that entail composing an “optimal object” or deriving an “optimal solution” that satisfies all constraints in a set, in instances where there are numerous variables at play. As combinatorial problems are common in numerous scientific fields, this paper could promote the use of CNNs in other research and development settings.

The researchers showed that even if machine learning is typically a “black box” approach (i.e., it does not always allow researchers to view the processes behind a given prediction or outcome), it can still be very valuable for exploring the design space for metamaterials, and potentially other materials, objects, or chemical substances. This could in turn potentially help to reason about and better understand the complex rules underlying effective designs.

“In our next studies, we will turn our attention to inverse design,” van Mastrigt added. “The current tool already helps us enormously to reduce the design space to find suitable (class C) designs, but it does not find us the best design for the task we have in mind. We are now considering machine learning methods that will help us find extremely rare designs that have the properties that we want, ideally even when no examples of such designs are shown to the machine learning method beforehand.

“This is a very hard problem, but after our recent study, we believe that neural networks will allow us to successfully tackle it.”

More information: Ryan van Mastrigt et al, Machine Learning of Implicit Combinatorial Rules in Mechanical Metamaterials, Physical Review Letters (2022). DOI: 10.1103/PhysRevLett.129.198003

Corentin Coulais et al, Combinatorial design of textured mechanical metamaterials, Nature (2016). DOI: 10.1038/nature18960

Anne S. Meeussen et al, Topological defects produce exotic mechanics in complex metamaterials, Nature Physics (2020). DOI: 10.1038/s41567-019-0763-6

Aleksi Bossart et al, Oligomodal metamaterials with multifunctional mechanics, Proceedings of the National Academy of Sciences (2021). DOI: 10.1073/pnas.2018610118

Journal information: Nature Physics  Nature  Physical Review Letters  Proceedings of the National Academy of Sciences 

Evidence of Higgs boson contributions to the production of Z boson pairs at high energies

Left: two-parameter likelihood scan of the off-shell gg and EW production signal strength parameters, μ_F^off-shell and μ_V^off-shell, respectively. The dot-dashed and dashed contours enclose the 68% (−2Δln L = 2.30) and 95% (−2Δln L = 5.99) CL regions. The cross marks the minimum, and the blue diamond marks the SM expectation. The integrated luminosity reaches only up to 138 fb⁻¹ as on-shell 4ℓ events are not included in performing this scan. Right: observed (solid) and expected (dashed) one-parameter likelihood scans over Γ_H. Scans are shown for the combination of 4ℓ on-shell data with 4ℓ off-shell (magenta) or 2ℓ2ν off-shell data (green) alone, or with both datasets (black). The horizontal lines indicate the 68% (−2Δln L = 1.0) and 95% (−2Δln L = 3.84) CL regions. The integrated luminosity reaches up to 140 fb⁻¹ as on-shell 4ℓ events are included in performing these scans. The exclusion of the no off-shell hypothesis is consistent with 3.6 s.d. in both panels. Credit: The CMS Collaboration.
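The −2Δln L thresholds quoted in the caption follow from Wilks' theorem, under which −2Δln L is asymptotically chi-square distributed with one degree of freedom per fitted parameter. A quick stdlib-only check of those standard numbers:

```python
# Confidence-level thresholds on -2*delta(ln L) via Wilks' theorem.
# For 2 fitted parameters the chi-square CDF is 1 - exp(-x/2);
# for 1 parameter it is erf(sqrt(x/2)).
import math

# 2-parameter scan (left panel): invert 1 - exp(-x/2) = CL
for cl in (0.6827, 0.95):
    print(f"2 dof, {cl:.0%}: -2 dlnL = {-2.0 * math.log(1.0 - cl):.2f}")
# -> 2.30 and 5.99

# 1-parameter scan (right panel): verify the quoted thresholds forward
for x in (1.0, 3.84):
    print(f"1 dof, -2 dlnL = {x}: CL = {math.erf(math.sqrt(x / 2.0)):.3f}")
# -> 0.683 and 0.950
```

Both sets of thresholds reproduce the 68% and 95% contours used in the figure.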

The Higgs boson, the fundamental subatomic particle associated with the Higgs field, was first discovered in 2012 as part of the ATLAS and CMS experiments, both of which analyze data collected at CERN’s Large Hadron Collider (LHC), the most powerful particle accelerator in existence. Since the discovery of the Higgs boson, research teams worldwide have been trying to better understand this unique particle’s properties and characteristics.

The CMS Collaboration, the large group of researchers involved in the CMS experiment, has recently obtained an updated measurement of the width of the Higgs boson, while also gathering the first evidence of its off-shell contributions to the production of Z boson pairs. Their findings, published in Nature Physics, are consistent with standard model predictions.

“The quantum theoretical description of fundamental particles is probabilistic in nature, and if you consider all the different states of a collection of particles, their probabilities must always add up to 1 regardless of whether you look at this collection now or sometime later,” Ulascan Sarica, researcher for the CMS Collaboration, told Phys.org. “When analyzed mathematically, this simple statement imposes restrictions, the so-called unitarity bounds, on the probabilities of particle interactions at high energies.”

Since the 1970s, physicists have predicted that when pairs of the heavy vector bosons Z or W are produced, these unitarity bounds would be violated at high energies unless a Higgs boson contributed to the production of the pairs. Over the past ten years, theoretical physics calculations showed that the occurrence of these Higgs boson contributions at high energies should be measurable using existing data collected by the LHC.

“Other investigations have shown that the total decay width of the Higgs boson, which is inversely proportional to its lifetime and predicted in the standard model to be notably very small (4.1 mega-electron volts in width, or 1.6×10⁻²² seconds in lifetime), can be determined using these high-energy events with a precision at least a hundred times better than other techniques limited by detector resolution (1000 mega-electron volts in total width measurements, and 1.9×10⁻¹³ seconds in lifetime measurements),” Sarica explained.
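The width-lifetime conversion quoted above can be cross-checked directly with the reduced Planck constant, since τ = ħ/Γ; a quick sketch:

```python
# Cross-check of the width-lifetime relation tau = hbar / Gamma
# using the standard-model width quoted above.
HBAR_MEV_S = 6.582119569e-22  # reduced Planck constant in MeV*s

def lifetime_from_width(gamma_mev):
    """Convert a decay width in MeV to a lifetime in seconds."""
    return HBAR_MEV_S / gamma_mev

print(f"{lifetime_from_width(4.1):.2e} s")  # 4.1 MeV -> ~1.6e-22 s
```

The result, about 1.6×10⁻²² seconds, matches the standard-model lifetime cited in the quote.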

“For these reasons, our paper had two objectives: to look for the presence of Higgs boson contributions to heavy diboson production at high energies, and to measure the Higgs boson total decay width as precisely as possible via these contributions.”

As part of their recent study, the CMS collaboration analyzed some of the data collected between 2015 and 2018, as part of the second data collection run of the LHC. They specifically focused on events characterized by the production of pairs of Z bosons, which subsequently decayed into either four charged leptons (i.e., electrons or muons) or two charged leptons and two neutrinos.

Past experimental analyses suggest that these two unique patterns are the most sensitive to the production of heavy pairs of bosons at high energies. By analyzing events that matched these patterns, therefore, the team hoped to gather clearer and more reliable results.

“We observed the first evidence of the Higgs boson contributions in the production of Z boson pairs at high energies with a statistical significance of more than 3 standard deviations,” Li Yuan, another member of the CMS collaboration, told Phys.org. “The result strongly supports the spontaneous electroweak symmetry breaking mechanism, which preserves unitarity in heavy diboson production at high energies.”

In addition to gathering evidence of Higgs boson contributions to ZZ production, the CMS collaboration was able to significantly improve existing measurements of the Higgs boson’s total decay width or lifetime. The measurement they collected was believed to be unattainable 10 years ago, given the narrow width of the particle (i.e., 4.1 mega-electron volts according to predictions from the standard model of particle physics).

“Our result for this measurement is 3.2 mega-electron volts with an upper error of 2.4 mega-electron volts and a lower error of 1.7 mega-electron volts,” Yuan said. “This result is consistent with the standard model expectation so far, but there is still room for a future measurement of even greater precision to deviate from the prediction.”

The recent work by the CMS collaboration offers new insight about the properties of the Higgs boson, while also highlighting its contribution to the production of Z boson pairs. In their next studies, the researchers plan to continue their exploration of this fascinating subatomic particle using new data collected at the LHC and advanced analysis techniques.

“While our results have reached a statistical significance beyond the threshold of 3 standard deviations, typically taken as evidence in the particle physics community, more data are needed to reach the threshold of 5 standard deviations required to claim a discovery,” Sarica said.

The third data collection run of the LHC started this year and is expected to continue until the end of 2025. Sarica, Yuan, and the rest of the CMS collaboration have already started preparations that will allow them to measure the Higgs boson’s width with even greater precision using the new data collected as part of this third round of data collection.

“In addition, our CMS analysis does not yet include the analysis of high-energy events with four charged leptons from the 2018 data, and preparations are ongoing for its inclusion in an update,” Sarica added.

“Recent preliminary results from the ATLAS Collaboration, showcased on Nov. 9 during the Higgs 2022 conference, also provide an independent confirmation of the evidence CMS finds, so once their results go through peer-review, we hope the two collaborations can discuss how the two analyses can be combined to provide the best measurements of Higgs boson contributions at high energy and its total width.”

More information: The CMS Collaboration, Measurement of the Higgs boson width and evidence of its off-shell contributions to ZZ production, Nature Physics (2022). DOI: 10.1038/s41567-022-01682-0

Conference: indico.cern.ch/event/1086716/

Journal information: Nature Physics 

© 2022 Science X Network

Scientists demonstrate world’s first continuous-wave lasing of deep-ultraviolet laser diode at room temperature

In world first, scientists demonstrate continuous-wave lasing of deep-ultraviolet laser diode at room temperature. Credit: Issey Takahashi

A research group led by 2014 Nobel laureate Hiroshi Amano at Nagoya University’s Institute of Materials and Systems for Sustainability (IMaSS) in central Japan, in collaboration with Asahi Kasei Corporation, has successfully conducted the world’s first room-temperature continuous-wave lasing of a deep-ultraviolet laser diode (wavelengths down to UV-C region).

These results, published in Applied Physics Letters, represent a step toward the widespread use of a technology with the potential for a wide range of applications, including sterilization and medicine.

Since they were introduced in the 1960s, and after decades of research and development, successful commercialization of laser diodes (LDs) was finally achieved for a number of applications with wavelengths ranging from infrared to blue-violet. Examples of this technology include optical communications devices with infrared LDs and Blu-ray discs using blue-violet LDs.

However, despite the efforts of research groups around the world, no one could develop deep ultraviolet LDs. A key breakthrough only occurred after 2007 with the emergence of technology to fabricate aluminum nitride (AlN) substrates, an ideal material for growing aluminum gallium nitride (AlGaN) film for UV light-emitting devices.

Demonstration of room-temperature continuous-wave lasing. Credit: 2022 Asahi Kasei Corp. and Nagoya University

Starting in 2017, Professor Amano’s research group, in cooperation with Asahi Kasei, the company that provided 2-inch AlN substrates, began developing a deep-ultraviolet LD. At first, sufficient injection of current into the device was too difficult, preventing further development of UV-C laser diodes.

But in 2019, the research group successfully solved this problem using a polarization-induced doping technique. For the first time, they produced a deep-ultraviolet (UV-C) LD that operates with short pulses of current. However, the input power required for these current pulses was 5.2 W, too high for continuous-wave lasing because it would cause the diode to quickly heat up and stop lasing.

The researchers who successfully conducted the world's first room-temperature continuous-wave lasing of a deep-ultraviolet laser diode. Credit: 2022 Asahi Kasei Corp. and Nagoya University

But now, researchers from Nagoya University and Asahi Kasei have reshaped the structure of the device itself, reducing the drive power needed for lasing to only 1.1 W at room temperature. Earlier devices required high operating power because crystal defects at the laser stripe blocked effective current paths. In this study, the researchers found that strong crystal strain creates these defects.

By cleverly tailoring the side walls of the laser stripe, they suppressed the defects, achieving efficient current flow to the active region of the laser diode and reducing the operating power.

Nagoya University’s industry-academic cooperation platform, called the Center for Integrated Research of Future Electronics, Transformative Electronics Facilities (C-TEFs), made possible the development of the new UV laser technology. Under C-TEFs, researchers from partners such as Asahi Kasei share access to state-of-the-art facilities on the Nagoya University campus, providing them with the people and tools needed to build reproducible high-quality devices.

Zhang Ziyi, a representative of the research team, was in his second year at Asahi Kasei when he became involved in the project’s founding. “I wanted to do something new,” he said in an interview. “Back then everyone assumed that the deep ultraviolet laser diode was an impossibility, but Professor Amano told me, ‘We have made it to the blue laser, now is the time for ultraviolet’.”

This research is a milestone in the practical application and development of semiconductor lasers in all wavelength ranges. In the future, UV-C LDs could be applied to healthcare, virus detection, particulate measurement, gas analysis, and high-definition laser processing.

“Its application to sterilization technology could be groundbreaking,” Zhang said. “Unlike the current LED sterilization methods, which are time-inefficient, lasers can disinfect large areas in a short time and over long distances.” This technology could especially benefit surgeons and nurses who need sterilized operating rooms and tap water.

The successful results have been reported in two papers in Applied Physics Letters.

More information: Hiroshi Amano et al, Local stress control to suppress dislocation generation for pseudomorphically grown AlGaN UV-C laser diodes, Applied Physics Letters (2022). DOI: 10.1063/5.0124512

Hiroshi Amano et al, Key temperature-dependent characteristics of AlGaN-based UV-C laser diode and demonstration of room-temperature continuous-wave lasing, Applied Physics Letters (2022). DOI: 10.1063/5.0124480

Journal information: Applied Physics Letters 

Provided by Nagoya University 

Transporting two-photon quantum states of light through a phase-separated Anderson localization optical fiber

Achieving a quantum fiber
Schematic overview of a phase-separated Anderson localization fiber as quantum channel between a transmitter and receiver. The illustration shows that quantum correlations such as entanglement are maintained during transport from the transmitter (generation) to receiver (detection) all the way along the fiber. Credit: ICFO/ A. Cuevas

Invented in 1970 by Corning Incorporated, low-loss optical fiber became the best means to efficiently transport information from one place to another over long distances. The most common way of data transmission nowadays is through conventional optical fibers—a single core channel transmits the information. However, with the exponential increase of data generation, these systems are reaching their information-carrying capacity limits.

Thus, research now focuses on finding new ways to utilize the full potential of fibers by examining their inner structure and applying new approaches to signal generation and transmission. Moreover, applications in quantum technology are enabled by extending this research from classical to quantum light.

In the late 1950s, the physicist Philip W. Anderson (who also made important contributions to particle physics and superconductivity) predicted what is now called Anderson localization, a discovery for which he received the 1977 Nobel Prize in Physics. Anderson showed theoretically under which conditions an electron in a disordered system can either move freely through the system as a whole or be tied to a specific position as a “localized electron.” Such a disordered system can, for example, be a semiconductor with impurities.

Later, the same theoretical approach was applied to a variety of disordered systems, and it was deduced that light could also experience Anderson localization. Experiments in the past have demonstrated Anderson localization in optical fibers, realizing the confinement or localization of light—classical or conventional light—in two dimensions while propagating it through the third dimension. While these experiments had shown successful results with classical light, so far no one had tested such systems with quantum light—light consisting of quantum correlated states. That is, until recently.

In a study published in Communications Physics, ICFO researchers Alexander Demuth, Robin Camphausen and Alvaro Cuevas, led by ICREA Professor at ICFO Valerio Pruneri, in collaboration with Nick Borrelli, Thomas Seward, Lisa Lamberson and Karl W. Koch from Corning, together with Alessandro Ruggeri from Micro Photon Devices (MPD) and Federica Villa and Francesca Madonini from Politecnico di Milano, have been able to successfully demonstrate the transport of two-photon quantum states of light through a phase-separated Anderson localization optical fiber (PSF).

A conventional optical fiber vs. an Anderson localization fiber

Contrary to conventional single mode optical fibers, where data is transmitted through a single core, a phase-separated fiber (PSF) or phase-separated Anderson localization fiber is made of many glass strands embedded in a glass matrix of two different refractive indexes.

During its fabrication, as the borosilicate glass is heated, melted and drawn into a fiber, one of the two phases tends to form elongated glass strands. Since there are two refractive indexes within the material, this generates what is known as lateral disorder, which leads to transverse (2D) Anderson localization of light in the material.

Experts in optical fiber fabrication, Corning created a fiber that can propagate multiple optical beams in parallel by harnessing Anderson localization. Unlike multicore fiber bundles, this PSF proved very well suited to such experiments, since many parallel optical beams can propagate through the fiber with minimal spacing between them.

The team of scientists, experts in quantum communications, wanted to transport quantum information as efficiently as possible through Corning’s phase-separated optical fiber. In the experiment, the PSF connects a transmitter and a receiver. The transmitter is a quantum light source (built by ICFO). The source generates quantum-correlated photon pairs via spontaneous parametric down-conversion (SPDC) in a non-linear crystal, in which one high-energy photon is converted into a pair of photons, each with lower energy.

The low-energy photon pairs have a wavelength of 810 nm. Due to momentum conservation, spatial anti-correlation arises. The receiver is a single-photon avalanche diode (SPAD) array camera, developed by Polimi and MPD. The SPAD array camera, unlike common CMOS cameras, is so sensitive that it can detect single photons with extremely low noise; it also has very high time resolution, such that the arrival time of the single photons is known with high precision.
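Energy conservation fixes the pump wavelength needed to produce a given SPDC pair. As a rough illustration (the article does not state the pump wavelength actually used), degenerate 810 nm pairs would imply a 405 nm pump:

```python
# Energy conservation in SPDC: one pump photon splits into signal and
# idler photons, so 1/lambda_p = 1/lambda_s + 1/lambda_i. Equivalently,
# lambda_p = lambda_s * lambda_i / (lambda_s + lambda_i).
def pump_wavelength_nm(signal_nm: float, idler_nm: float) -> float:
    """Pump wavelength (nm) required to produce the given SPDC pair."""
    return signal_nm * idler_nm / (signal_nm + idler_nm)

# Degenerate SPDC: both output photons at 810 nm, as in the experiment.
print(pump_wavelength_nm(810.0, 810.0))  # 405.0
```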

Quantum light

The ICFO team engineered the optical setup to send the quantum light through the phase-separated Anderson localization fiber and detected its arrival with the SPAD array camera. The SPAD array enabled them not only to detect the pairs of photons but also to identify them as pairs, as they arrive at the same time (coincident).

As the pairs are quantum correlated, knowing where one of the two photons is detected tells us the other photon’s location. The team verified this correlation right before and after sending the quantum light through PSF, successfully showing that the spatial anti-correlation of the photons was indeed maintained.

After this demonstration, the ICFO team set out to show how to improve their results in future work. For this, they conducted a scaling analysis to find the optimal size distribution of the elongated glass strands for the quantum light wavelength of 810 nm. After a thorough analysis with classical light, they were able to identify the current limitations of the phase-separated fiber and propose improvements to its fabrication, in order to minimize attenuation and loss of resolution during transport.

The results of this study have shown this approach to be potentially attractive for scalable fabrication processes in real-world applications in quantum imaging or quantum communications, especially for the fields of high-resolution endoscopy, entanglement distribution and quantum key distribution.

More information: Alexander Demuth et al, Quantum light transport in phase-separated Anderson localization fiber, Communications Physics (2022). DOI: 10.1038/s42005-022-01036-5

Journal information: Communications Physics 

Provided by ICFO 

Microlaser chip adds new dimensions to quantum communication

With only two levels of superposition, the qubits used in today’s quantum communication technologies have limited storage space and low tolerance for interference. The Feng Lab’s hyperdimensional microlaser (above) generates qudits, photons with four simultaneous levels of information. The increase in dimension makes for robust quantum communication technology better suited for real-world applications. Credit: Haoqi Zhao

Researchers at Penn Engineering have created a chip that outstrips the security and robustness of existing quantum communications hardware. Their technology communicates in “qudits,” doubling the quantum information space of any previous on-chip laser.

Liang Feng, Professor in the Departments of Materials Science and Engineering (MSE) and Electrical Systems and Engineering (ESE), along with MSE postdoctoral fellow Zhifeng Zhang and ESE Ph.D. student Haoqi Zhao, debuted the technology in a recent study published in Nature. The group worked in collaboration with scientists from the Polytechnic University of Milan, the Institute for Cross-Disciplinary Physics and Complex Systems, Duke University and the City University of New York (CUNY).

Bits, qubits and qudits

While non-quantum chips store, transmit and compute data using bits, state-of-the-art quantum devices use qubits. Bits can be 1s or 0s, while qubits are units of digital information capable of being both 1 and 0 at the same time. In quantum mechanics, this state of simultaneity is called “superposition.”

A quantum bit in a state of superposition greater than two levels is called a qudit, to signal these additional dimensions.

“In classical communications,” says Feng, “a laser can emit a pulse coded as either 1 or 0. These pulses can easily be cloned by an interceptor looking to steal information and are therefore not very secure. In quantum communications with qubits, the pulse can have any superposition state between 1 and 0. Superposition makes it so a quantum pulse cannot be copied. Unlike algorithmic encryption, which blocks hackers using complex math, quantum cryptography is a physical system that keeps information secure.”

Qubits, however, aren’t perfect. With only two levels of superposition, qubits have limited storage space and low tolerance for interference.

The Feng Lab device’s four-level qudits enable significant advances in quantum cryptography, raising the maximum secret key rate for information exchange from 1 bit per pulse to 2 bits per pulse. The device offers four levels of superposition and opens the door to further increases in dimension.
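The jump from 1 to 2 bits per pulse follows from the standard capacity relation for a d-level state, log2(d) bits per photon; a minimal sketch:

```python
import math

# A d-level quantum state can carry at most log2(d) classical bits per
# pulse: 1 bit for a qubit (d = 2), 2 bits for the four-level qudit
# (d = 4) described here, and so on as dimension increases.
def bits_per_pulse(d: int) -> float:
    return math.log2(d)

print(bits_per_pulse(2))  # 1.0 (qubit)
print(bits_per_pulse(4))  # 2.0 (four-level qudit)
```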

“The biggest challenge,” says Zhang, “was the complexity and non-scalability of the standard setup. We already knew how to generate these four-level systems, but it required a lab and many different optical tools to control all the parameters associated with the increase in dimension. Our goal was to achieve this on a single chip. And that’s exactly what we did.”

The physics of cybersecurity

Quantum communication uses photons in tightly controlled states of superposition. Properties such as location, momentum, polarization and spin exist as multiplicities at the quantum level, each of which is governed by probabilities. These probabilities describe the likelihood of a quantum system—an atom, a particle, a wave—taking on a single attribute when measured.

In other words, quantum systems are neither here nor there. They are both here and there. It is only the act of observation—detecting, looking, measuring—that causes a quantum system to take on a fixed property. Like a subatomic game of Statues, quantum superpositions take on a single state as soon as they are observed, making it impossible to intercept them without detection or copy them.

The hyperdimensional spin-orbit microlaser builds on the team’s earlier work with vortex microlasers, which sensitively tune the orbital angular momentum (OAM) of photons. The most recent device upgrades the capabilities of the previous laser by adding another level of command over photonic spin.

This additional level of control—being able to manipulate and couple OAM and spin—is the breakthrough that allowed them to achieve a four-level system.

The difficulty of controlling all these parameters at once is what had been hindering qudit generation in integrated photonics and represents the major experimental accomplishment of the team’s work.

“Think of the quantum states of our photon as two planets stacked on top of each other,” says Zhao. “Before, we only had information about these planets’ latitude. With that, we could create a maximum of two levels of superposition. We didn’t have enough information to stack them into four. Now, we have longitude as well. This is the information we need to manipulate photons in a coupled way and achieve dimensional increase. We are coordinating each planet’s rotation and spin and holding the two planets in strategic relation to each other.”

Quantum cryptography with Alice, Bob and Eve

Quantum cryptography relies on superposition as a tamper-evident seal. In a popular cryptography protocol known as Quantum Key Distribution (QKD), randomly generated quantum states are sent back and forth between sender and receiver to test the security of a communications channel.

If sender and receiver (always Alice and Bob in the storyworld of cryptography) discover a certain amount of discrepancy between their messages, they know that someone has attempted to intercept their message. But, if the transmission remains mostly intact, Alice and Bob understand the channel to be safe and use the quantum transmission as a key for encrypted messages.
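The sift-and-check logic can be illustrated with a toy, purely classical simulation of a BB84-style protocol (an illustrative sketch, not the Feng Lab qudit scheme; all names here are invented):

```python
import random

# Toy BB84-style sift-and-check, a classical cartoon of the QKD idea
# above. Alice sends random bits in random bases; an optional
# intercept-resend eavesdropper measures and resends; Bob measures in
# random bases. Alice and Bob keep only matching-basis rounds and
# estimate the error rate: 0% on a clean channel, about 25% under an
# intercept-resend attack.
def sifted_error_rate(n: int, eavesdrop: bool, seed: int = 0) -> float:
    rng = random.Random(seed)
    errors = kept = 0
    for _ in range(n):
        bit, a_basis = rng.randrange(2), rng.randrange(2)
        send_bit, send_basis = bit, a_basis
        if eavesdrop:  # Eve measures in a random basis and resends
            e_basis = rng.randrange(2)
            send_bit = send_bit if e_basis == send_basis else rng.randrange(2)
            send_basis = e_basis
        b_basis = rng.randrange(2)
        b_bit = send_bit if b_basis == send_basis else rng.randrange(2)
        if b_basis == a_basis:  # sifting: keep matching-basis rounds
            kept += 1
            errors += b_bit != bit
    return errors / kept

print(sifted_error_rate(4000, eavesdrop=False))       # 0.0
print(sifted_error_rate(4000, eavesdrop=True) > 0.1)  # True
```

The 25% error rate under attack is what lets Alice and Bob detect Eve: her random-basis measurements disturb half of the mismatched-basis rounds.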

How does this improve on non-quantum communication security? If we imagine the photon as a sphere rotating upwards, we can get a rough idea of how a photon might classically encode the binary digit 1. If we imagine it rotating downwards, we understand 0.

When Alice sends classical photons coded in bits, Eve the eavesdropper can steal, copy and replace them without Alice or Bob realizing. Even if Eve cannot decrypt the data she has stolen, she may be squirreling it away for a near future when advances in computing technology might allow her to break through.

Quantum communication adds a stronger layer of security. If we imagine the photon as a sphere rotating upwards and downwards at the same time, coding 1 and 0 simultaneously, we get an idea of how a qubit maintains dimension in its quantum state.

When Eve tries to steal, copy and replace the qubit, her ability to capture the information will be compromised and her tampering will be apparent in the loss of superposition. Alice and Bob will know the channel is not secure and will not use a security key until they can prove that Eve has not intercepted it. Only then will they send the intended encrypted data using an algorithm enabled by the qubit key.

However, while the laws of quantum physics may prevent Eve from copying the intercepted qubit, she may be able to disturb the quantum channel. Alice and Bob will need to continue generating keys and sending them back and forth until she stops interfering. Accidental disturbances that collapse superposition as the photon travels through space also contribute to interference patterns.

A qubit’s information space, limited to two levels, has a low tolerance for these errors.

To solve these problems, quantum communication requires additional dimensions. If we imagine a photon rotating (the way the earth rotates around the sun) and spinning (the way the earth spins on its own axis) in two different directions at once, we get a sense of how the Feng Lab qudits work.

If Eve tries to steal, copy and replace the qudit, she will not be able to extract any information and her tampering will be clear. The message sent will have a much greater tolerance for error—not only for Eve’s interference, but also for accidental flaws introduced as the message travels through space. Alice and Bob will be able to efficiently and securely exchange information.

“There is a lot of concern,” says Feng, “that mathematical encryption, no matter how complex, will become less and less effective because we are advancing so quickly in computing technologies. Quantum communication’s reliance on physical rather than mathematical barriers make it immune to these future threats. It’s more important than ever that we continue to develop and refine quantum communication technologies.”

More information: Zhifeng Zhang et al, Spin–orbit microlaser emitting in a four-dimensional Hilbert space, Nature (2022). DOI: 10.1038/s41586-022-05339-z

Journal information: Nature 

Provided by University of Pennsylvania 

Global timekeepers vote to scrap leap second by 2035

In search of lost time: The leap second will soon become a thing of the past

Scientists and government representatives meeting at a conference in France voted on Friday to scrap leap seconds by 2035, the organization responsible for global timekeeping said.

Similar to leap years, leap seconds have been periodically added to clocks over the last half century to make up for the difference between exact atomic time and the Earth’s slower rotation.

While leap seconds pass by unnoticed for most people, they can cause problems for a range of systems that require an exact, uninterrupted flow of time, such as satellite navigation, software, telecommunication, trade and even space travel.

The practice has caused a headache for the International Bureau of Weights and Measures (BIPM), which is responsible for Coordinated Universal Time (UTC)—the internationally agreed standard by which the world sets its clocks.

A resolution to stop adding leap seconds by 2035 was passed by the BIPM’s 59 member states and other parties at the General Conference on Weights and Measures, which is held roughly every four years at the Versailles Palace west of Paris.

The head of BIPM’s time department, Patrizia Tavella, told AFP that the “historic decision” would allow “a continuous flow of seconds without the discontinuities currently caused by irregular leap seconds”.

“The change will be effective by or before 2035,” she said via email.

She said that Russia voted against the resolution, “not on principle”, but because Moscow wanted to push back the date it comes into force to 2040.

Other countries had called for a quicker timeframe such as 2025 or 2030, so the “best compromise” was 2035, she said.

The United States and France were among the countries leading the way for the change.

Tavella emphasized that “the connection between UTC and the rotation of the Earth is not lost”.

“Nothing will change” for the public, she added.

A leap minute?

Seconds were long measured by astronomers analyzing the Earth’s rotation; however, the advent of atomic clocks—which use the frequency of atoms as their tick-tock mechanism—ushered in a far more precise era of timekeeping.

But Earth’s slightly slower rotation means the two times are out of sync.

To bridge the gap, leap seconds were introduced in 1972, and 27 have been added at irregular intervals since—the last in 2016.

Under the proposal, leap seconds will continue to be added as normal for the time being.

But by 2035, the difference between atomic and astronomical time will be allowed to grow to a value larger than one second, Judah Levine, a physicist at the US National Institute of Standards and Technology, told AFP.

“The larger value is yet to be determined,” said Levine, who spent years helping draft the resolution alongside Tavella.

Negotiations will be held to find a proposal by 2035 to determine that value and how it will be handled, according to the resolution.

Levine said it was important to protect UTC time because it is run by “a worldwide community effort” in the BIPM.

GPS time, a potential UTC rival governed by atomic clocks, is run by the US military “without worldwide oversight”, Levine said.

A possible solution to the problem could be letting the discrepancy between the Earth’s rotation and atomic time build up to a minute.

It is difficult to say exactly how long that might take, but Levine estimated anywhere from 50 to 100 years.

Instead of then adding on a leap minute to clocks, Levine proposed a “kind of smear”, in which the last minute of the day takes two minutes.

“The advance of a clock slows, but never stops,” he said.
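Levine’s smear can be sketched numerically. In the toy model below (illustrative numbers only, not a standards proposal), a day that needs one extra minute stretches its final displayed minute over two real minutes:

```python
# Sketch of the "smear" idea: instead of inserting a leap minute, the
# clock's last displayed minute of the day runs at half rate over two
# real minutes, so the clock slows but never stops.
DAY = 86400  # seconds in a normal displayed day

def displayed_seconds(elapsed: float) -> float:
    """Map real elapsed seconds of a day needing one extra minute
    (86460 s total) onto a displayed clock that still shows 86400 s."""
    smear_start = DAY - 60  # smear begins at the last displayed minute
    if elapsed <= smear_start:
        return elapsed
    # 120 real seconds cover the final 60 displayed seconds (half rate)
    return smear_start + (elapsed - smear_start) / 2.0

print(displayed_seconds(86340.0))  # 86340.0 (smear begins)
print(displayed_seconds(86400.0))  # 86370.0 (clock is 30 s behind)
print(displayed_seconds(86460.0))  # 86400.0 (day ends, minute absorbed)
```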

© 2022 AFP

Scientists closer to solving a superconducting puzzle with applications in medicine, transport and power transmission

Spin fluctuations and phonons in La2-xSrxCuO4 (x = 0.22) near Qδ. S(Q, ω) as a function of energy and wavevector along a trajectory through two incommensurate wave vectors Qδ = (0.5-δ, 0.5, L) and (0.5, 0.5-δ, L) (see inset to panel a). Integration ranges are a L ∈ [ − 1, 1] and b L ∈ [3.8, 4.2]. Strong phonons are observed (panel b) for L ≈ 4, but these are not visible near L = 0 (panel a) where spin fluctuations are seen. Data were collected on LET (panel a) and MERLIN (panel b). Credit: Nature Physics (2022). DOI: 10.1038/s41567-022-01825-3

Researchers studying the magnetic behavior of a cuprate superconductor may have explained some of the unusual properties of their conduction electrons.

Cuprate superconductors are used in levitating trains, quantum computing and power transmission. They belong to a family of materials made of layers of copper oxides alternating with layers of other metal oxides, which act as charge reservoirs.

The largest use of superconductors is currently for manufacturing superconducting magnets used for medical MRI machines and for scientific applications such as particle accelerators.

For the potential applications of superconducting materials to be fully realized, scientists must develop superconductors that maintain their properties at higher temperatures. Cuprate superconductors exhibit relatively high transition temperatures and therefore give scientists an opportunity to study what makes higher-temperature superconductivity possible.

In this study, published in Nature Physics, a collaboration involving the University of Bristol and the ISIS Pulsed Neutron and Muon Source focused on the cuprate superconductor La2-xSrxCuO4 (LSCO). Superconductivity in this system is very sensitive to the exact ratio of lanthanum (La) to strontium (Sr), offering the ability to understand which properties are correlated with superconductivity. LSCO is also close to being magnetically ordered, and one possibility is that magnetic fluctuations are what enable its superconductivity.

Inelastic neutron scattering offers an excellent method to study these magnetic fluctuations. The researchers were able to measure over a wide range of reciprocal space and energy scales. This enabled them to build a full picture of the spin fluctuations and phonons, allowing very low energy spin fluctuations to be isolated.

Although cuprate superconductors are metals above the temperature where they become superconducting, the electrons that carry current behave very strangely. As the temperature is increased, their ability to carry current is dramatically reduced. The low-energy spin fluctuations could scatter the conduction electrons and explain this strange metal behavior.

Furthermore, when the superconductor was cooled and the superconductivity suppressed with a magnetic field, the spin fluctuations became stronger and slowed down, suggesting the material is close to magnetic order. This could help to explain the unusual electronic properties of the cuprates.

Prof Stephen Hayden of Bristol’s School of Physics said, “This study has demonstrated the potential importance of spin fluctuations in understanding cuprates. A deeper understanding of their properties and their relation to superconductivity is another step towards designing materials with higher superconducting temperatures.

“In the future they should be used for quantum computing, transport including levitating trains and compact motors, as well as power transmission. There are already demonstration projects for the latter.

“The work relies on the unique instrumentation and sample environment available at ISIS.”

More information: M. Zhu et al, Spin fluctuations associated with the collapse of the pseudogap in a cuprate superconductor, Nature Physics (2022). DOI: 10.1038/s41567-022-01825-3

Journal information: Nature Physics 

Provided by University of Bristol 

A new experiment pushes the boundaries of our understanding of topological quantum matter

The upper panel shows a sketch of the experiment. In a magnetic field, a heat current (red arrow) applied to the crystal produces a thermal Hall signal that arises from bosonic excitations (orange balls) moving along the edges. The lower panel is a color map of the thermal Hall signal (scale bar on the right) plotted versus magnetic field H and temperature T. The signal is largest in the red regions, close to zero in the light-green regions and slightly negative in the blue spot. Credit: Peter Czajka, Princeton University

New research conducted by Princeton University physicists is delving with high resolution into the complex and fascinating world of topological quantum matter—a branch of physics that studies the inherent quantum properties of materials that can be deformed but not intrinsically changed. By repeating an experiment first conducted by researchers at Kyoto University, the Princeton team has clarified key aspects of the original experiment, and importantly, reached novel and divergent conclusions—conclusions that advance our understanding of topological matter.

As chronicled in a paper published in the journal Nature Materials, the Princeton researchers used a special type of magnetic insulator realized in ruthenium chloride (α-RuCl3) to demonstrate the first example of a magnetic insulator that exhibits the thermal Hall effect arising from quantum edge modes of bosons in the presence of a novel force field called the Berry curvature.

Background to the experiment

The experiment has its origins in the work of Princeton physicist and 1977 Nobel Prize-winner Phil Anderson, who theorized a novel state of matter called spin liquids. These are classes of magnetic materials that—even under extremely low temperatures—do not undergo what physicists call a magnetic phase transition. This describes an abrupt transition to a state in which the spin at each lattice site either aligns in a perfectly parallel pattern, called ferromagnetic order, or alternates in an orderly fashion between up and down, called antiferromagnetic order. Over ninety-nine percent of magnetic materials experience this phase transition when cooled to sufficiently low temperatures. Anderson suggested the term “geometric frustration” to describe how spin liquids are prevented from undergoing such phase transitions.

“To illustrate this concept, imagine trying to seat couples around a dinner table under the rule that every woman is to be seated between two men and vice versa,” said N. Phuan Ong, the Eugene Higgins Professor of Physics at Princeton University and the senior author of the paper. “If we have a guest who arrives alone, this arrangement is geometrically impossible.”

In 2006, Russian physicist Alexei Kitaev at the California Institute of Technology (Caltech) proposed that Anderson’s spin liquid state could be achieved without invoking Anderson’s concept of geometric frustration. He outlined this in a series of elegant equations, and importantly, predicted the existence of new particles called Majoranas and visons. The Majorana particle is an especially strange and elusive subatomic particle that was first theorized in 1937 by Italian physicist Ettore Majorana. It is a type of fermion; in fact, it is the only fermion recognized as identical to its own antiparticle.

Kitaev’s work sparked a flurry of research to find materials that could realize his model calculations in the laboratory. Two years later, two physicists, George Jackeli and Giniyat Khaliullin of the Max Planck Institute in Stuttgart, Germany, predicted that ruthenium chloride (α-RuCl3) would come closest to realizing it. This material, which crystallizes in a honeycomb lattice, is an excellent insulator.

Consequently, in the past decade, α-RuCl3 has become one of the most intensively investigated candidates for quantum spin liquids. The research received a considerable boost in 2018 when physicist Yuji Matsuda and his colleagues at Kyoto University reported the observation of the “half-quantized” thermal Hall effect predicted in Kitaev’s calculations.

The thermal Hall effect, which is analogous to the more familiar electrical Hall effect, describes how an intense magnetic field deflects sideways an applied heat current. The sideways deflection engenders a weak temperature difference between two edges of the sample, which reverses sign if the direction of the magnetic field is reversed. While the thermal Hall effect is well established in metals such as copper and gallium, it is very rarely observed in insulators. This is because, in insulators, a heat current is conveyed by lattice vibrations called phonons that are indifferent to the magnetic field, Ong noted.

Matsuda and his colleagues reported that their measurements of the thermal Hall conductivity revealed it to be “half-quantized”: the magnitude depends only on the Planck constant and the Boltzmann constant, and nothing else, as predicted by Kitaev. The experiment, implying the observation of a current of Majorana particles, attracted enormous interest in the community.
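For reference, the half-quantized value in question is conventionally written in the research literature (the article itself gives no formula) as the per-layer thermal Hall conductivity:

```latex
% Half-quantized thermal Hall conductivity expected for a Majorana edge
% mode: half the value of one conventional quantized heat channel.
\[
  \frac{\kappa_{xy}^{\mathrm{2D}}}{T} \;=\; \frac{1}{2}\cdot\frac{\pi k_B^{2}}{6\hbar}
\]
```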

But Ong and his research team, long familiar with thermal Hall experiments, felt that there was something amiss with Matsuda’s conclusion. “I couldn’t quite put my finger on it,” Ong said.

The experiment

Ong and his colleagues decided to repeat the experiment. But this time, they aimed to conduct it at a higher resolution and over a much larger temperature interval—from half a degree Kelvin to ten degrees Kelvin.

The high level of resolution was critical to the success of the experiment, explained Peter Czajka, the lead author of the paper and a graduate student in physics. “Our experiment is a great example of something that is conceptually quite simple, but very difficult in practice. It’s relatively easy to measure the electrical resistance of something but measuring the thermal conductivity of a sample is much harder.”

The first part of the experiment required the researchers to select a sample of ruthenium chloride that had several specific characteristics, including a very thin crystal structure with a distinct hexagonal shape. They then attached sensitive thermometers to measure the temperature gradients.

“All we’re really doing is measuring very small temperature gradients on a crystal,” said Czajka. “But to do this we need a resolution of a thousandth to a millionth of a degree—something in between that scale.”

The researchers cooled the material down to temperatures of one Kelvin or lower, and subjected the sample to a strong magnetic field, which was applied parallel to the heat current. They then used an electrical heater to warm up one edge of the crystal and measured the temperature gradients. The experiment—measurements of temperature gradients—required, amazingly, a period of several months.

“The sample was cold for about six months,” said Czajka, “and during that time we thoroughly mapped out the temperature and field dependence. This was unprecedented because most researchers aren’t willing to put six months into a single experiment.”

The first thing the researchers noticed, in a finding parallel with Matsuda’s, was the presence of the thermal Hall effect. The researchers recognized this when the thermometers detected that the flow of the heat current was deflected to one side or the other depending on the magnetic field.

To explain this, Ong used the analogy of a raft going downstream, with the river current symbolizing the heat current and the raft symbolizing a packet of heat entropy. “Although you’re going with the flow of the river, you find that your raft is being pushed to one side of the river, say the left bank. And all the rafts following you are similarly being pushed to the left bank,” he said. This leads to a slight increase in the left bank’s temperature.

The signal is also sensitive to the direction of the magnetic field, said Ong. “If you repeat the experiment with the magnetic field reversed in direction, you will find all the rafts, which are still going downstream, accumulating on the right bank.”

In the vast majority of insulators, this effect does not occur. “The rafts will not accumulate on either the left or right side; they will just flow down the river,” said Ong.

But in these new topological materials the effect is startling. And the reason for this is because of a phenomenon known as the Berry curvature.

In principle, all crystalline materials display an internal force field called the Berry curvature, named after Michael Berry, a mathematical physicist at the University of Bristol. The Berry curvature describes how wave functions twist and turn throughout the space spanned by momentum. In magnetic and topological materials, the Berry curvature is finite. It acts on charged particles, such as electrons, as well as neutral ones, such as phonons and spins, much like an intense magnetic field.

“The Berry curvature is a concept that was missing for the last sixty years, but has now come to the fore in the last five years or so,” said Ong. “It’s the Berry curvature that we proved in this paper that is actually the cause of Matsuda’s experimental observation.”

Equally important, the Princeton researchers were not able to confirm the presence of the Majorana fermion, as originally predicted in Matsuda’s experiment. Rather, the researchers traced the thermal Hall effect to another kind of particle, a boson.

All particles in nature are either fermions or bosons. Electrons are fermions, while photons, phonons and gluons are bosons. In this material, the bosons originate from wave-like collective excitations of the magnetic moments at high magnetic field. Both types of particles can give rise to the thermal Hall effect if the material is topological in nature.

“In our study, we demonstrate rather convincingly that the observed particles are bosons rather than fermions,” said Ong. “If the Kyoto group had been correct—if the particles were identified as fermions—the signal would be independent of temperature. But the signal is, in fact, strongly temperature dependent, and its temperature dependence very precisely corresponds to a quantitative model for topological boson excitations.”
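The discriminator Ong describes can be caricatured in a few lines: a fermionic edge mode would yield a signal that is essentially flat in temperature, while the bosonic excitations produce a strongly temperature-dependent one. A toy sketch with hypothetical numbers (not the paper's data or model):

```python
# Toy discriminator (hypothetical numbers, not the paper's fit): a flat
# signal across temperatures matches the fermion expectation, while a
# strongly temperature-dependent signal points to bosonic excitations.

def is_temperature_independent(signal, tol=0.05):
    """Return True if the signal's fractional spread over the measured
    temperatures is below `tol` -- the fermion-like expectation."""
    lo, hi = min(signal), max(signal)
    return (hi - lo) / hi <= tol

temps = [0.5, 1.0, 2.0, 4.0]             # kelvin, hypothetical
fermion_like = [1.00, 1.01, 0.99, 1.00]  # flat in temperature
boson_like = [0.10, 0.35, 0.80, 0.60]    # strongly T dependent

print(is_temperature_independent(fermion_like))  # True
print(is_temperature_independent(boson_like))    # False
```

The actual analysis in the paper compares the measured temperature dependence against a quantitative model for topological boson excitations; this sketch only illustrates the qualitative flat-versus-varying distinction in the quote above.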

“Our experiment is the first example of what is called a bosonic material displaying quantum edge transport,” Ong added.

Implications and future research

Ong and his team believe their research has robust implications for fundamental physics research.

“What our experiment accomplished—by clarifying the presence of bosons rather than fermions—is to open the door to using the thermal Hall effect in the same way that the quantum Hall effect has been used to uncover many novel quantum states,” said Ong.

Ong also said that the particles discovered in experiments like this one might have practical applications for such things as topological quantum computing or quantum devices, though achieving such breakthroughs is likely twenty or more years down the road. Ong and the members of his research laboratory intend to continue their research by searching for similar bosonic Hall effects in related materials and studying the quantum possibilities of ruthenium chloride in even greater detail. The experiments were performed in collaboration with scientists at Oak Ridge National Laboratory, the University of Tennessee, the University of Tokyo and Purdue University.

More information: Peter Czajka et al, Planar thermal Hall effect of topological bosons in the Kitaev magnet α-RuCl3, Nature Materials (2022). DOI: 10.1038/s41563-022-01397-w

Provided by Princeton University