Shrinking endoscopes with meta-optical fibers

A meta-optic is optimized for integration with the coherent fiber bundle, with the individual fiber cores taken as the imaging limitation. The MOFIE achieves a reduced tip length while maintaining a wide field of view of 22.5° and a large depth of field exceeding 30 mm, compared with a traditional GRIN lens. Credit: Johannes E. Fröch, Luocheng Huang, Quentin A.A. Tanguy, Shane Colburn, Alan Zhan, Andrea Ravagli, Eric J. Seibel, Karl Böhringer, Arka Majumdar

Ultra-compact, agile endoscopes with a large field of view (FoV), long depth of field (DoF), and short rigid tip length are essential for developing minimally invasive operations and new experimental surgeries. As these fields develop, the requirements for miniaturization and increased precision become progressively more demanding.

In existing endoscopes, the rigid tip length is a fundamental limitation of the device’s agility within small tortuous ducts, such as an artery. It is primarily constrained by the size of the optical elements required for imaging. Thus, alternative solutions are urgently needed to reduce the tip length.

In a new paper published in eLight, a team of scientists led by Dr. Johannes Fröch and Prof. Arka Majumdar from the University of Washington has developed a novel technique for reducing the rigid tip length.

Existing solutions include lensless and computational imaging with single fibers or coherent fiber bundles. However, these are typically limited to a short working distance and are often extremely sensitive to bending and twisting of the optical fiber, which degrades or even precludes accurate computational reconstruction.

Flat meta-optics are an emerging and versatile idea in the photonics community to create miniaturized optical elements. These are sub-wavelength diffractive optical elements composed of nano-scale scatterer arrays. They are designed to shape an incident wavefront’s phase, amplitude, and spectral response. Such ultrathin flat optics not only dramatically shrink the size of traditional optics but can also combine multiple functionalities in a single surface.

Flat meta-optics are compatible with high-volume semiconductor manufacturing technology and can create disposable optics. These properties have already inspired researchers to explore the potential of meta-optics for endoscopy, including fiber-integrated endoscopy, side-viewing single fiber scanning endoscopy, and scanning fiber forward-viewing endoscopy.

An optical microscope image of the fabricated meta-optic (left) placed in front of the coherent fiber bundle. Scanning electron microscope images of the meta-optic (right) show the individual scatterers, which span the entire aperture of the device. Credit: Johannes E. Fröch, Luocheng Huang, Quentin A.A. Tanguy, Shane Colburn, Alan Zhan, Andrea Ravagli, Eric J. Seibel, Karl Böhringer, Arka Majumdar

Unfortunately, meta-optics traditionally suffer from strong aberrations, making large FoV and full-color imaging challenging. Several works have shown that the standard metalens design is unsuitable for simultaneously capturing color information across the visible spectrum.

It typically results in crisp images at the design wavelength (e.g., green) but strongly aberrated and blurred images at other colors (red and blue). While approaches such as dispersion engineering and computational imaging can reduce chromatic aberration, they either suffer from small apertures and low numerical apertures or require a computational post-processing step that complicates real-time video capture.

Similarly, an additional aperture before the meta-optic can provide a larger FoV. However, it comes at the cost of reduced light collection and increased thickness of the optics. So far, these limitations have restricted most meta-optics endoscopes to single wavelength operation.

Recently, however, a meta-optic doublet was demonstrated in conjunction with a coherent fiber bundle for polychromatic imaging. Such polychromatic imaging is unsuitable for the broad-band illumination that is typical in clinical endoscopy. Additionally, the front aperture was limited to 125 μm, with a short working distance of 200 μm.

The research team noted the need for broad-band, ultra-thin meta-optics for endoscopy. However, making the meta-optic smaller than the optical fiber diameter is counterproductive, as it severely limits light collection. As such, full-color meta-optical endoscopy with an acceptable FoV, DoF, and a large enough aperture had not yet been achieved.

In this work, the research team demonstrated an inverse-designed meta-optic optimized to capture real-time full-color scenes with a 1 mm diameter coherent fiber bundle. The meta-optic enables operation with an FoV of 22.5°, a DoF of >30 mm (exceeding 300% of the nominal design working distance), and a minimum rigid tip length of only ~2.5 mm.

This is a 33% tip length reduction compared to a traditional commercial gradient-index (GRIN) lens integrated fiber bundle endoscope. This is due to the shorter focal length and the ultrathin nature of the meta-optic.

The top images show scenes displayed on an OLED screen and captured through the MOFIE, allowing the researchers to directly assess the imaging quality. The bottom three images show a caterpillar captured under ambient lighting conditions in real time, without computational deconvolution applied. Credit: Johannes E. Fröch, Luocheng Huang, Quentin A.A. Tanguy, Shane Colburn, Alan Zhan, Andrea Ravagli, Eric J. Seibel, Karl Böhringer, Arka Majumdar

At the same time, comparable imaging performance and working distance are maintained. To achieve exceptional FoV, DoF, and color performance of the Meta-Optical Fiber Endoscope (MOFIE), the research team approached this design problem from a system-level perspective.

They recognized that the diameter and spacing of the individual fiber cores within the bundle limit the achievable image quality, which in turn limits the achievable FoV and modulation transfer function (MTF). This constraint was implemented in an automatic differentiation framework using the average volume under the multichromatic MTF curve as the figure of merit.
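To make the optimization target concrete, the sketch below shows the kind of figure of merit described above: the average volume under the multichromatic MTF, evaluated only up to the spatial-frequency cutoff imposed by the fiber bundle. This is a minimal illustration rather than the authors' code; the function names, the array conventions, and the passband handling are all assumptions made for the example.

```python
# Illustrative sketch (not the authors' implementation) of a bundle-limited,
# multichromatic MTF figure of merit.
import numpy as np

def mtf_from_psf(psf):
    """Normalized MTF: magnitude of the Fourier transform of the point spread function."""
    otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf)))
    mtf = np.abs(otf)
    return mtf / mtf.max()

def figure_of_merit(psfs_per_wavelength, freq_radius, cutoff):
    """Average MTF 'volume' over wavelengths, restricted to spatial frequencies
    the fiber bundle can resolve.
    psfs_per_wavelength: list of 2D PSF arrays (e.g. red, green, blue)
    freq_radius: 2D array of radial spatial frequency for each MTF sample
    cutoff: maximum spatial frequency resolvable given the fiber-core spacing
    """
    passband = freq_radius <= cutoff
    volumes = [mtf_from_psf(psf)[passband].mean() for psf in psfs_per_wavelength]
    return float(np.mean(volumes))  # quantity to maximize during inverse design
```

In the actual inverse-design loop this quantity would be evaluated inside an automatic differentiation framework, so that gradients with respect to the meta-optic's scatterer parameters can be back-propagated; the NumPy version above only illustrates the metric itself.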

By ensuring that the meta-optic has an MTF within the limitations of the fiber bundle, the research team achieved full-color operation without requiring a computational reconstruction step, thus facilitating real-time operation. The team emphasized that its design approach fundamentally differs from traditional achromatic metalens design efforts.

The researchers formulated an optimization problem to find the best achievable solution for full-color imaging, rather than trying to achieve diffraction-limited performance at all wavelengths, which may be a physically unsolvable problem.

This approach is important because it is not limited to this particular system. It can be extended to larger aperture sizes and support computational post-processing steps. To highlight this, they also demonstrated an example of a meta-optic with a 1 cm aperture and full-color imaging under ambient light conditions.

More information: Johannes E. Fröch et al, Real time full-color imaging in a Meta-optical fiber endoscope, eLight (2023). DOI: 10.1186/s43593-023-00044-4

Provided by Chinese Academy of Sciences 

Physicists uncover ‘parallel circuits’ of spin currents in antiferromagnets

Left: An antiferromagnet can function as “parallel electrical circuits” carrying Néel spin currents. Right: A tunnel junction based on the antiferromagnets hosting Néel spin currents can be regarded as “electrical circuits” with the two ferromagnetic tunnel junctions connected in parallel. Credit: Shao Dingfu

A group of physicists at Hefei Institutes of Physical Science (HFIPS) of Chinese Academy of Sciences (CAS) revealed a secret of antiferromagnets, which could accelerate spintronics, a next-gen data storage and processing technology for overcoming the bottleneck of modern digital electronics.

This finding was reported in Physical Review Letters.

Spintronics is a vigorously developing field that employs the spin of electrons within magnetic materials to encode information. Spin-polarized electric currents play a central role in spintronics because they allow magnetic moment directions to be manipulated and detected for writing and reading 1s and 0s. Currently, most spintronic devices are based on ferromagnets, whose net magnetizations can efficiently spin-polarize electric currents.

Antiferromagnets, with opposite magnetic moments aligned alternately, are less investigated but may promise even faster and smaller spintronic devices. However, antiferromagnets have zero net magnetization and are thus commonly believed to carry only spin-neutral currents that are useless for spintronics. Although antiferromagnets consist of two antiparallel magnetic sublattices, their properties are deemed to be “averaged out” over the sublattices, making them spin-independent.

Prof. Shao Ding-Fu, who led the team, took a different point of view in this research. He envisioned that collinear antiferromagnets can function as “electrical circuits” with the two magnetic sublattices connected in parallel. With this simple, intuitive picture in mind, Prof. Shao and his collaborators theoretically predicted that the magnetic sublattices can polarize the electric current locally, resulting in staggered spin currents hidden within the globally spin-neutral current.
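One schematic way to picture this (an illustration of the parallel-circuit analogy rather than the paper's formalism): if sublattice A locally polarizes the current flowing through its "branch" with polarization +P and sublattice B with -P, the sublattice-resolved spin currents are

\[
j_s^{A} = +\tfrac{P}{2}\, j_c, \qquad j_s^{B} = -\tfrac{P}{2}\, j_c, \qquad j_s^{A} + j_s^{B} = 0,
\]

where $j_c$ is the total charge current shared between the two branches. The net spin current vanishes, as expected for an antiferromagnet, but the staggered (Néel) component $j_s^{A} - j_s^{B} = P\, j_c$ does not, and it is this hidden component that can do useful work.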

He dubbed these staggered spin currents “Néel spin currents” after Louis Néel, the Nobel laureate recognized for his fundamental work and discoveries concerning antiferromagnetism.

Néel spin currents are a unique property of antiferromagnets that had not previously been recognized. They can generate useful spin-dependent effects previously considered incompatible with antiferromagnets, such as spin-transfer torque and tunneling magnetoresistance in antiferromagnetic tunnel junctions, which are crucial for the electrical writing and reading of information in antiferromagnetic spintronics.

“Our work uncovered a previously unexplored potential of antiferromagnets, and offered a straightforward solution to achieve the efficient reading and writing for antiferromagnetic spintronics,” said Prof. Shao Ding-Fu.

More information: Ding-Fu Shao et al, Néel Spin Currents in Antiferromagnets, Physical Review Letters (2023). DOI: 10.1103/PhysRevLett.130.216702

Journal information: Physical Review Letters 

Provided by Hefei Institutes of Physical Science, Chinese Academy of Sciences

Calculation shows why heavy quarks get caught up in the flow

The data points on this graph show that the interactions of heavy quarks (Q) with the quark-gluon plasma (QGP) are strongest and have a short mean free path (zig zags) right around the transition temperature (T/Tc = 1). The interaction strength (the heavy quark diffusion constant) decreases, and the mean free path lengthens, at higher temperatures. Credit: Brookhaven National Laboratory

Using some of the world’s most powerful supercomputers, a group of theorists has produced a major advance in the field of nuclear physics—a calculation of the “heavy quark diffusion coefficient.” This number describes how quickly a melted soup of quarks and gluons—the building blocks of protons and neutrons, which are set free in collisions of nuclei at powerful particle colliders—transfers its momentum to heavy quarks.

The answer, it turns out, is very fast. As described in a paper just published in Physical Review Letters, the momentum transfer from the “freed up” quarks and gluons to the heavier quarks occurs at the limit of what quantum mechanics will allow. These quarks and gluons have so many short-range, strong interactions with the heavier quarks that they pull the “boulder”-like particles along with their flow.

The work was led by Peter Petreczky and Swagato Mukherjee of the nuclear theory group at the U.S. Department of Energy’s Brookhaven National Laboratory, and included theorists from the Bielefeld, Regensburg, and Darmstadt Universities in Germany, and the University of Stavanger in Norway.

The calculation will help explain experimental results showing heavy quarks getting caught up in the flow of matter generated in heavy ion collisions at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven and the Large Hadron Collider (LHC) at Europe’s CERN laboratory. The new analysis also adds corroborating evidence that this matter, known as a “quark-gluon plasma” (QGP), is a nearly perfect liquid, with a viscosity so low that it also approaches the quantum limit.
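For context, the "quantum limit" on viscosity usually invoked here is the conjectured Kovtun-Son-Starinets bound on the ratio of shear viscosity $\eta$ to entropy density $s$ (a standard benchmark in the field rather than a result of the new paper):

\[
\frac{\eta}{s} \;\gtrsim\; \frac{\hbar}{4\pi k_B},
\]

and estimates from RHIC and LHC data place the QGP within a few times this value, making it one of the most "perfect" fluids known.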

“Initially, seeing heavy quarks flow with the QGP at RHIC and the LHC was very surprising,” Petreczky said. “It would be like seeing a heavy rock get dragged along with the water in a stream. Usually, the water flows but the rock stays.”

The new calculation reveals why that surprising picture makes sense when you think about the extremely low viscosity of the QGP.

Frictionless flow

The low viscosity of matter generated in RHIC’s collisions of gold ions, first reported on in 2005, was a major motivator for the new calculation, Petreczky said. When those collisions melt the boundaries of individual protons and neutrons to set free the inner quarks and gluons, the fact that the resulting QGP flows with virtually no resistance is evidence that there are many strong interactions among the quarks and gluons in the hot quark soup.

“The low viscosity implies that the ‘mean free path’ between the ‘melted’ quarks and gluons in the hot, dense QGP is extremely small,” said Mukherjee, explaining that the mean free path is the distance a particle can travel before interacting with another particle.

“If you think about trying to walk through a crowd, it’s the typical distance you can get before you bump into someone or have to change your course,” he said.

With a short mean free path, the quarks and gluons interact frequently and strongly. The collisions dissipate and distribute the energy of the fast-moving particles and the strongly interacting QGP exhibits collective behavior—including nearly frictionless flow.

“It’s much more difficult to change the momentum of a heavy quark because it’s like a train—hard to stop,” Mukherjee noted. “It would have to undergo many collisions to get dragged along with the plasma.”

But if the QGP is indeed a perfect fluid, the mean free path for the heavy quark interactions should be short enough to make that possible. Calculating the heavy quark diffusion coefficient—which is proportional to how strongly the heavy quarks are interacting with the plasma—was a way to check this understanding.

Crunching the numbers

The calculations needed to solve the equations of quantum chromodynamics (QCD)—the theory that describes quark and gluon interactions—are mathematically complex. Several advances in theory and powerful supercomputers helped to pave the way for the new calculation.

“In 2010/11 we started using a mathematical shortcut, which assumed the plasma consisted only of gluons, no quarks,” said Olaf Kaczmarek of Bielefeld University, who led the German part of this effort. Thinking only of gluons helped the team to work out their method using lattice QCD. In this method, scientists run simulations of particle interactions on a discretized four-dimensional space-time lattice.

Essentially, they “place” the particles on discrete positions on an imaginary 3D grid to model their interactions with neighboring particles and see how those interactions change over time (the 4th dimension). They use many different starting arrangements and include varying distances between particles.

After working out the method with only gluons, they figured out how to add in the complexity of the quarks.

The scientists loaded a large number of sample configurations of quarks and gluons onto the 4D lattice and used Monte Carlo methods—repeated random sampling—to try to find the most probable distribution of quarks and gluons within the lattice.

“By averaging over those configurations, you get a correlation function related to the heavy quark diffusion coefficient,” said Luis Altenkort, a University of Bielefeld graduate student who also worked on this research at Brookhaven Lab.

As an analogy, think about estimating the air pressure in a room by sampling the positions and motion of the molecules. “You try to use the most probable distributions of molecules based on another variable, such as temperature, and exclude improbable configurations—such as all the air molecules being clustered in one corner of the room,” Altenkort said.
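As a heavily simplified illustration of that workflow, the toy sketch below uses Metropolis Monte Carlo to sample configurations of a one-dimensional scalar field with a Boltzmann-like weight and then averages over the sampled configurations to estimate a correlation function. It is a stand-in for the idea only: real lattice QCD works with quark and gluon fields on a four-dimensional lattice, and every name and parameter below is an illustrative assumption.

```python
# Toy Monte Carlo sampling and configuration averaging (not lattice QCD).
import numpy as np

rng = np.random.default_rng(0)
N, beta, sweeps = 32, 1.0, 2000
phi = np.zeros(N)  # field values on a 1D periodic lattice

def action(field):
    """Simple quadratic lattice action playing the role of the QCD action."""
    return beta * np.sum((field - np.roll(field, 1)) ** 2 + 0.1 * field ** 2)

samples = []
for sweep in range(sweeps):
    for i in range(N):                              # Metropolis update, site by site
        trial = phi.copy()
        trial[i] += rng.normal(0.0, 0.5)
        if rng.random() < np.exp(action(phi) - action(trial)):
            phi = trial                             # accept with probability min(1, e^{-dS})
    if sweep > 500 and sweep % 10 == 0:             # discard thermalization, thin the chain
        samples.append(phi.copy())

# Correlation function C(r) = <phi(x) phi(x+r)>, averaged over sampled configurations
samples = np.array(samples)
corr = [float(np.mean(samples * np.roll(samples, r, axis=1))) for r in range(N // 2)]
print(corr[:5])
```

In the real calculation, the analogous configuration-averaged correlation function is the input from which the heavy quark diffusion coefficient is extracted.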

In the case of the QGP, the scientists were trying to simulate a thermalized system—where even on the tiny-fraction-of-a-second timescale of heavy ion particle collisions, the quarks and gluons come to some equilibrium temperature.

They simulated the QGP at a range of fixed temperatures and calculated the heavy quark diffusion coefficient for each temperature to map out the temperature dependence of the heavy quark interaction strength (and the mean free path of those interactions).

“These demanding calculations were possible only by using some of the world’s most powerful supercomputers,” Kaczmarek said.

The computing resources included Perlmutter at the National Energy Research Scientific Computing Center (NERSC), a DOE Office of Science User Facility located at Lawrence Berkeley National Laboratory; the JUWELS Booster at the Jülich Research Center in Germany; Marconi at CINECA in Italy; and dedicated lattice QCD GPU clusters at Thomas Jefferson National Accelerator Facility (Jefferson Lab) and at Bielefeld University.

As Mukherjee noted, “These powerful machines don’t just do the job for us while we sit back and relax; it took years of hard work to develop the codes that can squeeze the most efficient performance out of these supercomputers to do our complex calculations.”

Rapid thermalization, short-range interactions

The calculations show that the heavy quark diffusion coefficient is largest right at the temperature at which the QGP forms, and then decreases with increasing temperatures. This result implies that the QGP comes to an equilibrium extremely rapidly.

“You start with two nuclei, with essentially no temperature, then you collide them and in less than one quadrillionth of a second, you get a thermal system,” Petreczky said. Even the heavy quarks get thermalized.

For that to happen, the heavy quarks have to undergo many scatterings with other particles very quickly—implying that the mean free path of these interactions must be very small. Indeed, the calculations show that, at the transition to QGP, the mean free path of the heavy quark interactions is very close to the shortest distance allowable. That so-called quantum limit is established by the inherent uncertainty of knowing both a particle’s position and momentum simultaneously.
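As a rough back-of-the-envelope illustration (ours, not a number from the paper), the uncertainty principle says a particle carrying typical thermal momentum $\langle p \rangle \sim 3 k_B T$ cannot have a mean free path much shorter than its quantum wavelength:

\[
\lambda_{\mathrm{mfp}} \;\gtrsim\; \frac{\hbar}{\langle p \rangle} \;\sim\; \frac{\hbar c}{3\, k_B T},
\]

which at QGP temperatures of a few hundred MeV works out to a fraction of a femtometer, smaller than a proton.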

This independent “measure” provides corroborating evidence for the low viscosity of the QGP, substantiating the picture of its perfect fluidity, the scientists say.

“The shorter the mean free path, the lower the viscosity, and the faster the thermalization,” Petreczky said.

Simulating real collisions

Now that scientists know how the heavy quark interactions with the QGP vary with temperature, they can use that information to improve their understanding of how the actual heavy ion collision systems evolve.

“My colleagues are trying to develop more accurate simulations of how the interactions of the QGP affect the motion of heavy quarks,” Petreczky said. “To do that, they need to take into account the dynamical effects of how the QGP expands and cools down—all the complicated stages of the collisions.”

“Now that we know how the heavy quark diffusion coefficient changes with temperature, they can take this parameter and plug it into their simulations of this complicated process and see what else needs to be changed to make those simulations compatible with the experimental data at RHIC and the LHC.”

This effort is the subject of a major collaboration known as the Heavy-Flavor Theory (HEFTY) for QCD Matter Topical Theory Collaboration.

“We’ll be able to better model the motion of heavy quarks in the QGP, and then have a better theory to data comparison,” Petreczky said.

More information: Luis Altenkort et al, Heavy Quark Diffusion from 2+1 Flavor Lattice QCD with 320 MeV Pion Mass, Physical Review Letters (2023). DOI: 10.1103/PhysRevLett.130.231902

Journal information: Physical Review Letters 

Provided by Brookhaven National Laboratory 

The hunt for elusive axion particles: Experiments suggest better methods for exploring the dark sector

The expected and actual 90% confidence limits (C.L.) from CCM120 for the ALP-photon coupling g. Also included is the projected region for the CCM200 three-year run, using a background taken from CCM120’s spectrum reduced by two orders of magnitude for various conservative improvements (dashed green line) and a background-free assumption (extent of shaded green region). The QCD axion-model parameter space for the KSVZ benchmark scenario spans the region indicated by the arrows. Credit: Physical Review D (2023). DOI: 10.1103/PhysRevD.107.095036

Since axions were first predicted by theory nearly half a century ago, researchers have hunted for proof of the elusive particle, which may exist outside the visible universe, in the dark sector. But how does one find particles that can’t be seen?

The first physics results from the Coherent CAPTAIN-Mills experiment at Los Alamos—just described in a publication in the journal Physical Review D—suggest that liquid-argon, accelerator-based experimentation, designed initially to look for similarly hypothetical particles such as sterile neutrinos, may also be an ideal set-up for seeking out stealthy axions.

“The confirmation of dark sector particles would have a profound impact on the understanding of the Standard Model of particle physics, as well as the origin and evolution of the universe,” said physicist Richard Van de Water. “A big focus of the physics community is exploring ways to detect and confirm these particles. The Coherent CAPTAIN-Mills experiment couples existing predictions of dark matter particles such as axions with high-intensity particle accelerators capable of producing this hard-to-find dark matter.”

Demystifying the dark sector

Physics theory suggests that only 5% of the universe is made up of visible matter—atoms that form things we can see, touch and feel—and that the remaining 95% is the combination of matter and energy known as the dark sector. Axions, sterile neutrinos and others may explain and account for all or part of that missing energy density.

The existence of axions could also resolve a longstanding problem in the Standard Model, which outlines the known behavior of the subatomic world. Sometimes referred to as “fossils” of the universe, speculated to originate just a second after the Big Bang, axions could also tell us much about the founding moments of the universe.

The Coherent CAPTAIN-Mills experiment was one of several projects to receive Department of Energy funding for dark sector research in 2019, along with substantial funding from the Laboratory Directed Research and Development program at Los Alamos. A prototype detector dubbed the CCM120 was built and run during the 2019 Los Alamos Neutron Science Center (LANSCE) beam cycle. The Physical Review D publication describes results from the CCM120’s initial engineering run.

“Based on the first run of CAPTAIN-Mills research, the experiment has demonstrated the capability to execute the search for axions,” said Bill Louis, also a physicist on the project at Los Alamos. “We’re realizing that the energy regime provided by the proton beam at LANSCE and the liquid argon detector design offers an unexplored paradigm for axion-like particle research.”

Experiment design

Stationed in the Lujan Center adjacent to LANSCE, the Coherent CAPTAIN-Mills experiment is a 10-ton, supercooled, liquid argon detector. (CAPTAIN stands for Cryogenic Apparatus for Precision Tests of Argon Reactions with Neutrinos.)

High-intensity, 800-megaelectron volt protons generated by the LANSCE accelerator hit a tungsten target in the Lujan Center, then traverse 23 meters through extensive steel and concrete shielding to the detector to interact in the liquid argon.

The prototype detector’s interior walls are lined with 120 sensitive, eight-inch photomultiplier tubes (hence the CCM120 moniker) that detect light flashes—single photons—that result when a regular or dark sector particle jostles an atom in the tank of liquid argon.

A special material coating on the interior walls converts the argon light emission into visible light that can be detected by the photomultiplier tubes. Fast timing of the detector and beam helps remove the effects of background particles such as beam neutrons, cosmic rays and gamma-rays from radioactive decays.

Pieces of the puzzle

Axions are of great interest because they are “highly motivated”; that is, their existence is strongly implied in theories beyond the Standard Model. Developed over more than 70 years, the Standard Model explains three of the four known fundamental forces—electromagnetism, the weak nuclear force and the strong nuclear force—that govern the behavior of atoms, the building blocks of matter. (The fourth force, gravity, is explained by Einsteinian relativity.) But the model isn’t necessarily complete.

An unresolved problem in Standard Model physics is known as the “strong CP problem,” with “CP” meaning charge-parity symmetry. Essentially, particles and their antiparticle counterparts are acted upon similarly by the laws of physics. Nothing in Standard Model physics mandates that behavior, though, so physicists should see at least occasional violations of that symmetry.

In weak-force interactions, charge-parity symmetry violations do occur. But no similar violations have been observed in strong-force interactions. That puzzling absence of theoretically possible behavior represents a problem for Standard Model theory. What prevents violations of charge-parity symmetry from occurring in strong-force interactions?

Abundant, nearly weightless and electrically neutral, axions may be an important part of the puzzle. The axion earned its moniker in 1978, so-coined by physicist Frank Wilczek after a brand of laundry detergent because such a particle could “clean up” the strong CP problem. Physicists speculate that they are components of a dark matter force that preserves charge-parity symmetry, and that they may couple, or interact with, photons and electrons.

Next steps

If axions do exist, finding them might be a matter of devising the right experimental set-up.

“As a result of this initial run with our CCM120 detector, we have a much better understanding of the signatures connected with axion-like particles coupled to photons and to electrons as they move through liquid argon,” said Louis. “These data give us the insight to upgrade the detector to be more sensitive by an order of magnitude.”

More information: A. A. Aguilar-Arevalo et al, Prospects for detecting axionlike particles at the Coherent CAPTAIN-Mills experiment, Physical Review D (2023). DOI: 10.1103/PhysRevD.107.095036

Journal information: Physical Review D 

Provided by Los Alamos National Laboratory 

Gravitational waves innovation could help unlock cosmic secrets

Credit: Pixabay/CC0 Public Domain

New frontiers in the study of the universe—and gravitational waves—have been opened up following a breakthrough by University of the West of Scotland (UWS) researchers.

The groundbreaking development in thin film technology promises to enhance the sensitivity of current and future gravitational wave detectors. Developed by academics at UWS’s Institute of Thin Films, Sensors and Imaging (ITFSI), the innovation could enhance the understanding of the nature of the universe. The research is published in the journal Applied Optics.

Gravitational waves, first predicted by Albert Einstein’s theory of general relativity, are ripples in the fabric of spacetime caused by the most energetic events in the cosmos, such as black hole mergers and neutron star collisions. Detecting and studying these waves provides invaluable insights into the fundamental nature of the universe.

Dr. Carlos Garcia Nuñez, lecturer at UWS’s School of Computing, Engineering and Physical Sciences said, “At the Institute of Thin Films, Sensors and Imaging, we are working hard to push the limits of thin film materials, exploring new techniques to deposit them, controlling their properties in order to match the requirements of current and future sensing technology for the detection of gravitational waves.”

“The development of high reflecting mirrors with low thermal noise opens a wide range of applications, which covers from the detection of gravitational waves from cosmological events, to the development of quantum computers.”

The technique used in this work—originally developed and patented by Professor Des Gibson, Director of UWS’s Institute of Thin Films, Sensors and Imaging—could enable the production of thin films that achieve low levels of “thermal noise.” The reduction of this kind of noise in mirror coatings is essential to increase the sensitivity of current gravitational wave detectors—allowing the detection of a wider range of cosmological events—and could be deployed to enhance other high-precision devices, such as atomic clocks or quantum computers.

Professor Gibson said, “We are thrilled to unveil this cutting-edge thin film technology for gravitational wave detection. This breakthrough represents a significant step forward in our ability to explore the universe and unlock its secrets through the study of gravitational waves. We believe this advancement will accelerate scientific progress in this field and open up new avenues for discovery.”

“UWS’s thin film technology has already undergone extensive testing and validation in collaboration with renowned scientists and research institutions. The results have been met with great enthusiasm, fueling anticipation for its future impact on the field of gravitational wave astronomy. The coating deposition technology is being commercialized by UWS spinout company, Albasense Ltd.”

The development of coatings with low thermal noise will not only make future generations of gravitational wave detectors more precise and sensitive to cosmic events, but will also provide new solutions for atomic clocks and quantum mechanics, both highly relevant to the United Nations’ Sustainable Development Goals 7, 9 and 11.

More information: Carlos Garcia Nuñez et al, Amorphous dielectric optical coatings deposited by plasma ion-assisted electron beam evaporation for gravitational wave detectors, Applied Optics (2023). DOI: 10.1364/AO.477186

Provided by University of the West of Scotland 

Researchers tune thermal conductivity of materials ‘on the fly’ for more energy-efficient devices

University of Minnesota Twin Cities mechanical engineering Ph.D. students Yingying Zhang and Chi Zhang conduct measurements using a home-built system involving ultrafast laser pulses to study the lanthanum strontium cobaltite devices. Credit: Dingbin Huang, University of Minnesota

A team led by University of Minnesota Twin Cities scientists and engineers discovered a new method for tuning the thermal conductivity of materials to control heat flow “on the fly.” Their tuning range is the highest ever recorded among one-step processes in the field, and will open a door to developing more energy-efficient and durable electronic devices.

The researchers’ paper is published in Nature Communications.

Just as electrical conductivity determines how well a material can transport electricity, thermal conductivity describes how well a material can transport heat. For example, many metals used to make frying pans have a high thermal conductivity so that they can transport heat efficiently to cook food.
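For reference, the textbook statement of this (not specific to the new study) is Fourier's law,

\[
\mathbf{q} \;=\; -\,\kappa\, \nabla T,
\]

where $\mathbf{q}$ is the heat flux, $\nabla T$ is the temperature gradient, and $\kappa$ is the thermal conductivity. Tuning $\kappa$ electrically, as demonstrated here, changes how much heat flows for the same temperature difference, which is what switching heat flow "on and off" amounts to.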

Typically, the thermal conductivity of a material is a constant, unchanging value. However, the University of Minnesota team has discovered a simple process to “tune” this value in lanthanum strontium cobaltite, a material often used in fuel cells. Similar to the way a switch controls the flow of electricity to a light bulb, the researchers’ method provides a way to turn heat flow on and off in devices.

“Controlling how well a material can transfer heat is of great importance in daily life and in industry,” said Xiaojia Wang, co-corresponding author of the study and an associate professor in the University of Minnesota Department of Mechanical Engineering. “With this research, we have achieved a record-high tuning of thermal conductivity, showing promise for effective thermal management and energy consumption in the electronic devices people use every day. A well-designed and functioning thermal management system would enable better user experience and make devices more durable.”

Wang’s team worked in tandem with University of Minnesota Distinguished McKnight University Professor Chris Leighton, whose lab specializes in materials synthesis.

Leighton’s team fabricated the lanthanum strontium cobaltite devices using a process called electrolyte gating, in which ions (molecules with an electrical charge) are driven to the surface of the material. This allowed Wang and her research team to manipulate the material by applying a low voltage to it.

“Electrolyte gating is a tremendously powerful technique for controlling the properties of materials, and is well established for voltage-control of electronic, magnetic, and optical behavior,” said Leighton, co-corresponding author of the study and a faculty member in the University of Minnesota Department of Chemical Engineering and Materials Science. “This new work applies this approach in the realm of thermal properties, where voltage-control of physical behavior is less explored. Our results establish low-power, continuously tunable thermal conductivity over an impressive range, opening up some pretty exciting potential device applications.”

“Although it was challenging to measure the thermal conductivity of lanthanum strontium cobaltite films because they are so ultrathin, it was quite exciting when we finally got the experiments to work,” said Yingying Zhang, first author of the paper and a University of Minnesota mechanical engineering Ph.D. alumnus. “This project not only provides a promising example of tuning materials’ thermal conductivity but also demonstrates the powerful approaches we use in our lab to push the experimental limit for challenging measurements.”

More information: Yingying Zhang et al, Wide-range continuous tuning of the thermal conductivity of La0.5Sr0.5CoO3-δ films via room-temperature ion-gel gating, Nature Communications (2023). DOI: 10.1038/s41467-023-38312-z

Journal information: Nature Communications 

Provided by University of Minnesota 

How Schrödinger’s cat makes better qubits

An illustration of Schrödinger’s cat code. Credit: Vincenzo Savona (EPFL)

Quantum computing uses the principles of quantum mechanics to encode and elaborate data, meaning that it could one day solve computational problems that are intractable with current computers. While the latter work with bits, which represent either a 0 or a 1, quantum computers use quantum bits, or qubits—the fundamental units of quantum information.

“With applications ranging from drug discovery to optimization and simulations of complex biological systems and materials, quantum computing has the potential to reshape vast areas of science, industry, and society,” says Professor Vincenzo Savona, director of the Center for Quantum Science and Engineering at EPFL.

Unlike classical bits, qubits can exist in a “superposition” of both 0 and 1 states at the same time. This allows quantum computers to explore multiple solutions simultaneously, which could make them significantly faster in certain computational tasks. However, quantum systems are delicate and susceptible to errors caused by interactions with their environment.

“Developing strategies to either protect qubits from this or to detect and correct errors once they have occurred is crucial for enabling the development of large-scale, fault-tolerant quantum computers,” says Savona. Together with EPFL physicists Luca Gravina and Fabrizio Minganti, he has made a significant breakthrough by proposing a “critical Schrödinger cat code” for advanced resilience to errors. The study introduces a novel encoding scheme that could revolutionize the reliability of quantum computers.

What is a ‘critical Schrödinger cat code’?

In 1935, physicist Erwin Schrödinger proposed a thought experiment as a critique of the prevailing understanding of quantum mechanics at the time—the Copenhagen interpretation. In Schrödinger’s experiment, a cat is placed in a sealed box with a flask of poison and a radioactive source. If a single atom of the radioactive source decays, the radioactivity is detected by a Geiger counter, which then shatters the flask. The poison is released, killing the cat.

According to the Copenhagen view of quantum mechanics, if the atom is initially in superposition, the cat will inherit the same state and find itself in a superposition of alive and dead. “This state represents exactly the notion of a quantum bit, realized at the macroscopic scale,” says Savona.

In past years, scientists have drawn inspiration from Schrödinger’s cat to build an encoding technique called the “Schrödinger’s cat code.” Here, the 0 and 1 states of the qubit are encoded onto two opposite phases of an oscillating electromagnetic field in a resonant cavity, similar to the dead or alive states of the cat.
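For readers who want the standard textbook form (the specific critical-cat encoding of the EPFL work differs in its details), a cat qubit stores its logical states in two coherent states of opposite phase, and the cat states are their superpositions:

\[
|0_L\rangle \approx |\alpha\rangle, \qquad |1_L\rangle \approx |{-\alpha}\rangle, \qquad
|\mathcal{C}_{\pm}\rangle \;=\; \frac{|\alpha\rangle \pm |{-\alpha}\rangle}{\sqrt{2\left(1 \pm e^{-2|\alpha|^{2}}\right)}},
\]

where $|\pm\alpha\rangle$ are coherent states of the cavity field. The even and odd cats $|\mathcal{C}_{\pm}\rangle$ play the role of the cat that is simultaneously "alive" and "dead."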

“Schrödinger cat codes have been realized in the past using two distinct approaches,” explains Savona. “One leverages anharmonic effects in the cavity, while the other relies on carefully engineered cavity losses. In our work, we bridged the two by operating in an intermediate regime, combining the best of both worlds. Although previously believed to be unfruitful, this hybrid regime results in enhanced error suppression capabilities.” The core idea is to operate close to the critical point of a phase transition, which is what the ‘critical’ part of the critical cat code refers to.

The critical cat code has an additional advantage: it exhibits exceptional resistance to errors that result from random frequency shifts, which often pose significant challenges to operations involving multiple qubits. This solves a major problem and paves the way to the realization of devices with several mutually interacting qubits—the minimal requirement for building a quantum computer.

“We are taming the quantum cat,” says Savona. “By operating in a hybrid regime, we have developed a system that surpasses its predecessors, which represents a significant leap forward for cat qubits and quantum computing as a whole. The study is a milestone on the road towards building better quantum computers, and showcases EPFL’s dedication in advancing the field of quantum science and unlocking the true potential of quantum technologies.”

The findings are published in the journal PRX Quantum.

More information: Luca Gravina et al, Critical Schrödinger Cat Qubit, PRX Quantum (2023). DOI: 10.1103/PRXQuantum.4.020337

Journal information: PRX Quantum 

Provided by Ecole Polytechnique Federale de Lausanne 

Researchers ‘split’ phonons in step toward new type of quantum computer

Artist’s impression of a platform for linear mechanical quantum computing (LMQC). The central transparent element is a phonon beam splitter. Blue and red marbles represent individual phonons, which are the collective mechanical motions of quadrillions of atoms. These mechanical motions can be visualized as surface acoustic waves coming into the beam splitter from opposite directions. The two-phonon interference at the beam splitter is central to LMQC. The output phonons emerging from the image are in a two-phonon state, with one “blue” phonon and one “red” phonon grouped together. Credit: Peter Allen

When we listen to our favorite song, what sounds like a continuous wave of music is actually transmitted as tiny packets of quantum particles called phonons.

The laws of quantum mechanics hold that quantum particles are fundamentally indivisible and therefore cannot be split, but researchers at the Pritzker School of Molecular Engineering (PME) at the University of Chicago are exploring what happens when you try to split a phonon.

In two experiments—the first of their kinds—a team led by Prof. Andrew Cleland used a device called an acoustic beamsplitter to “split” phonons and thereby demonstrate their quantum properties. By showing that the beamsplitter can be used to both induce a special quantum superposition state for one phonon, and further create interference between two phonons, the research team took the first critical steps toward creating a new kind of quantum computer.

The results are published in the journal Science and build on years of breakthrough work on phonons by the team at Pritzker Molecular Engineering.

“Splitting” a phonon into a superposition

In the experiments, researchers used phonons that have roughly a million times higher pitch than can be heard with the human ear. Previously, Cleland and his team figured out how to create and detect single phonons and were the first to entangle two phonons.

To demonstrate these phonons’ quantum capabilities, the team—including Cleland’s graduate student Hong Qiao—created a beamsplitter that can split a beam of sound in half, transmitting half and reflecting the other half back to its source (beamsplitters already exist for light and have been used to demonstrate the quantum capabilities of photons). The whole system, including two qubits to generate and detect phonons, operates at extremely low temperatures and uses individual surface acoustic wave phonons, which travel on the surface of a material, in this case lithium niobate.

However, quantum physics says a single phonon is indivisible. So when the team sent a single phonon to the beamsplitter, instead of splitting, it went into a quantum superposition, a state where the phonon is both reflected and transmitted at the same time. Observing (measuring) the phonon causes this quantum state to collapse into one of the two outputs.

The team found a way to maintain that superposition state by capturing the phonon in two qubits. A qubit is the basic unit of information in quantum computing. Only one qubit actually captures the phonon, but researchers cannot tell which qubit until post-measurement. In other words, the quantum superposition is transferred from the phonon to the two qubits. The researchers measured this two-qubit superposition, yielding “gold standard proof that the beamsplitter is creating a quantum entangled state,” Cleland said.

Showing phonons behave like photons

In the second experiment, the team wanted to show an additional fundamental quantum effect that had first been demonstrated with photons in the 1980s. Now known as the Hong-Ou-Mandel effect, when two identical photons are sent from opposite directions into a beamsplitter at the same time, the superposed outputs interfere so that both photons are always found traveling together, in one or the other output directions.
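In the standard quantum-optics description (stated here for photons; the phonon experiment is the mechanical analogue), a balanced beamsplitter sends two indistinguishable single quanta arriving at opposite input ports $a$ and $b$ into the superposition

\[
|1\rangle_a |1\rangle_b \;\longrightarrow\; \frac{|2\rangle_c |0\rangle_d \;-\; |0\rangle_c |2\rangle_d}{\sqrt{2}},
\]

in which the "one quantum in each output" term has cancelled by destructive interference, so both quanta always leave through the same port. This grouping is exactly what the team then looked for with phonons.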

Importantly, the same happened when the team did the experiment with phonons—the superposed output means that only one of the two detector qubits captures phonons, going one way but not the other. Though the qubits only have the ability to capture a single phonon at a time, not two, the qubit placed in the opposite direction never “hears” a phonon, giving proof that both phonons are going in the same direction. This phenomenon is called two-phonon interference.

Getting phonons into these quantum-entangled states is a much bigger leap than doing so with photons. The phonons used here, though indivisible, still require quadrillions of atoms working together in a quantum mechanical fashion. And if quantum mechanics rules physics only at the tiniest scales, it raises the question of where that realm ends and classical physics begins; this experiment further probes that transition.

“Those atoms all have to behave coherently together to support what quantum mechanics says they should do,” Cleland said. “It’s kind of amazing. The bizarre aspects of quantum mechanics are not limited by size.”

Creating a new linear mechanical quantum computer

The power of quantum computers lies in the “weirdness” of the quantum realm. By harnessing the strange quantum powers of superposition and entanglement, researchers hope to solve previously intractable problems. One approach to doing this is to use photons, in what is called a “linear optical quantum computer.”

A linear mechanical quantum computer—which would use phonons instead of photons—itself could have the ability to compute new kinds of calculations. “The success of the two-phonon interference experiment is the final piece showing that phonons are equivalent to photons,” Cleland said. “The outcome confirms we have the technology we need to build a linear mechanical quantum computer.”

Unlike photon-based linear optical quantum computing, the University of Chicago platform directly integrates phonons with qubits. That means phonons could further be part of a hybrid quantum computer that combines the best of linear quantum computers with the power of qubit-based quantum computers.

The next step is to create a logic gate—an essential part of computing—using phonons, on which Cleland and his team are currently conducting research.

Other authors on the paper include É. Dumur, G. Andersson, H. Yan, M.-H. Chou, J. Grebel, C. R. Conner, Y. J. Joshi, J. M. Miller, R. G. Povey, and X. Wu.

More information: H. Qiao et al, Splitting phonons: Building a platform for linear mechanical quantum computing, Science (2023). DOI: 10.1126/science.adg8715

Journal information: Science 

Provided by University of Chicago 

Researchers discover materials exhibiting huge magnetoresistance

(a) A schematic diagram of a tunnel magnetoresistive device and magnetoresistance. (b) A schematic diagram of the crystal of the metastable body-centered cubic cobalt-manganese alloy studied. (c) A schematic diagram of the face-centered cubic structure, which is one of the thermodynamically stable phases of cobalt-manganese alloys. Credit: Tohoku University

A group of researchers from Tohoku University has unveiled a new material that exhibits enormous magnetoresistance, paving the way for developments in non-volatile magnetoresistive memory (MRAM).

Details of their unique discovery were published in the Journal of Alloys and Compounds.

Today, the demand for advancements in sensors and in hardware that can efficiently process large amounts of digital information has never been greater, especially as governments deploy technological innovations to achieve smarter societies.

Much of this hardware relies on MRAM, and many of these sensors are magnetic sensors; tunnel magnetoresistive devices make up the majority of both.

Tunnel magnetoresistive devices exploit the tunnel magnetoresistance effect to detect and measure magnetic fields. This is tied to the magnetization of ferromagnetic layers in magnetic tunnel junctions. When the magnets are aligned, a low resistance state is observed, and electrons can easily tunnel through the thin insulating barrier between them.

When the magnets are not aligned, the tunneling of electrons becomes less efficient and leads to higher resistance. This change in resistance is expressed as the magnetoresistive ratio, a key figure in determining the efficiency of tunneling magnetoresistive devices. The higher the magnetoresistance ratio, the better the device is.
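The magnetoresistance ratio quoted in this article follows the usual convention (a standard definition, not something specific to this study):

\[
\mathrm{TMR} \;=\; \frac{R_{\mathrm{AP}} - R_{\mathrm{P}}}{R_{\mathrm{P}}} \times 100\%,
\]

where $R_{\mathrm{P}}$ and $R_{\mathrm{AP}}$ are the junction resistances with the two magnetic electrodes aligned parallel and antiparallel, respectively. On this convention, the 350% room-temperature ratio reported below means the antiparallel resistance is 4.5 times the parallel one.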

Current tunnel magnetoresistive devices comprise magnesium oxide and iron-based magnetic alloys, like iron-cobalt. Iron-based alloys have a body-centered cubic crystal structure in ambient conditions and exhibit a huge tunnel magnetoresistance effect in devices with a rock salt-type magnesium oxide.

The thermodynamically stable crystal structures of cobalt-manganese-iron ternary alloys, indicating the composition of the material in which the huge magnetoresistance ratio was discovered, together with the magnetoresistance data collected at low and room temperature. These characteristics were obtained thanks to the metastable body-centered cubic structure. Credit: Tohoku University

There have been two notable studies using these iron-based alloys that produced magnetoresistive devices displaying high magnetoresistance ratios. The first, in 2004, was by the National Institute of Advanced Industrial Science and Technology in Japan and IBM; the second came in 2008, when researchers from Tohoku University reported a magnetoresistance ratio exceeding 600% at room temperature, a figure that jumped to 1000% at temperatures near zero Kelvin.

Since those breakthroughs, various institutes and companies have invested considerable effort in honing device physics, materials, and processes. Yet aside from iron-based alloys, only some Heusler-type ordered magnetic alloys have displayed such enormous magnetoresistance.

Dr. Tomohiro Ichinose and Professor Shigemi Mizukami from Tohoku University recently began exploring thermodynamically metastable materials to develop a new material capable of demonstrating similar magnetoresistance ratios. To do so, they focused on the strong magnetic properties of cobalt-manganese alloys, which have a body-centered cubic metastable crystal structure.

“Cobalt-manganese alloys have face-centered cubic or hexagonal crystal structures as thermodynamically stable phases. Because this stable phase exhibits weak magnetism, it has never been studied as a practical material for tunnel magnetoresistive devices,” said Mizukami.

Back in 2020, the group reported on a device that used a cobalt-manganese alloy with metastable body-centered cubic crystal structure.

Using data science and/or high-throughput experimental methods, they built upon this discovery, and succeeded in obtaining huge magnetoresistance in devices by adding a small amount of iron to the metastable body-centered cubic cobalt-manganese alloy. The magnetoresistance ratio was 350% at room temperature and also exceeded 1000% at a low temperature. Additionally, the device fabrication employed the sputtering method and a heating process, something compatible with current industries.

“We have produced the third instance of a new magnetic alloy for tunneling magnetoresistive devices showing huge magnetoresistance, and it sets an alternative direction of travel for future improvements,” adds Mizukami.

More information: Tomohiro Ichinose et al, Large tunnel magnetoresistance in magnetic tunnel junctions with magnetic electrodes of metastable body-centered cubic CoMnFe alloys, Journal of Alloys and Compounds (2023). DOI: 10.1016/j.jallcom.2023.170750

Provided by Tohoku University 

Researchers demonstrate noise-free communication with structured light

New research in structured light means researchers can exploit the many patterns of light as an encoding alphabet without worrying about how noisy the channel is. Credit: Wits University

The patterns of light hold tremendous promise for a large encoding alphabet in optical communications, but progress is hindered by their susceptibility to distortion, such as in atmospheric turbulence or in bent optical fiber.

Now researchers at the University of the Witwatersrand (Wits) have outlined a new optical communication protocol that exploits spatial patterns of light for multi-dimensional encoding in a manner that does not require the patterns to be recognized, thus overcoming the prior limitation of modal distortion in noisy channels. The result is a new encoding state-of-the-art of over 50 vectorial patterns of light sent virtually noise-free across a turbulent atmosphere, opening a new approach to high-bit-rate optical communication.

Published in Laser & Photonics Reviews, the Wits team from the Structured Light Laboratory in the Wits School of Physics used a new invariant property of vectorial light to encode information. This quantity, which the team call “vectorness,” scales from 0 to 1 and remains unchanged when passing through a noisy channel.

Unlike traditional amplitude modulation, which encodes only 0 or 1 (a two-letter alphabet), the team used the invariance to partition the 0-to-1 vectorness range into more than 50 parts (0, 0.02, 0.04 and so on up to 1), yielding an alphabet of more than 50 letters. Because the channel over which the information is sent does not distort the vectorness, the sender and receiver will always agree on the value, hence noise-free information transfer.
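A toy sketch of such an alphabet is below; the encode/decode functions and the assumption of a perfectly invariant channel are illustrative choices for the example, not the authors' implementation, and the real measurement of vectorness is of course done optically.

```python
# Toy quantization of a 0-to-1 invariant into a 51-letter alphabet.
import numpy as np

LEVELS = np.linspace(0.0, 1.0, 51)          # 0, 0.02, 0.04, ..., 1.0

def encode(symbol_index: int) -> float:
    """Map a symbol (0..50) to the vectorness value prepared by the transmitter."""
    return float(LEVELS[symbol_index])

def decode(measured_vectorness: float) -> int:
    """Map a measured vectorness back to the nearest letter of the alphabet."""
    return int(np.argmin(np.abs(LEVELS - measured_vectorness)))

message = [0, 17, 42, 50]
received = [decode(encode(s)) for s in message]  # an invariant channel leaves values unchanged
assert received == message
```

Because the symbol spacing is 0.02, the scheme tolerates any measurement error smaller than 0.01 in vectorness, so the practical limit on alphabet size is detector precision rather than channel noise, as Forbes notes below.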

The critical hurdle that the team overcame is to use patterns of light in a manner that does not require them to be “recognized,” so that the natural distortion of noisy channels can be ignored. Instead, the invariant quantity just “adds up” light in specialized measurements, revealing a quantity that doesn’t see the distortion at all.

“This is a very exciting advance because we can finally exploit the many patterns of light as an encoding alphabet without worrying about how noisy the channel is,” says Professor Andrew Forbes, from the Wits School of Physics. “In fact, the only limit to how big the alphabet can be is how good the detectors are and not at all influenced by the noise of the channel.”

Lead author and Ph.D. candidate Keshaan Singh adds, “To create and detect the vectorness modulation requires nothing more than conventional communications technology, allowing our modal (pattern) based protocol to be deployed immediately in real-world settings.”

The team have already started demonstrations in optical fiber and in fast links across free-space, and believe that the approach can work in other noisy channels, including underwater.

More information: Keshaan Singh et al, A Robust Basis for Multi‐Bit Optical Communication with Vectorial Light, Laser & Photonics Reviews (2023). DOI: 10.1002/lpor.202200844

Provided by Wits University