Researchers calculate Jones polynomials based on Majorana zero modes

A research team has experimentally determined Jones polynomials by quantum-simulating braided Majorana zero modes. By simulating the braiding operations of Majorana fermions, the team obtained the Jones polynomials of different links. The study was published in Physical Review Letters.

Link and knot invariants, such as the Jones polynomial, are powerful tools for determining whether two knots or links are topologically equivalent. There is currently considerable interest in determining Jones polynomials, as they have applications in various disciplines, such as DNA biology and condensed matter physics.

Unfortunately, even approximating the value of Jones polynomials falls within the #P-hard complexity class, with the most efficient classical algorithms requiring an exponential amount of resources. Quantum simulations, however, offer an exciting way to experimentally investigate the properties of non-Abelian anyons, and Majorana zero modes (MZMs) are regarded as the most plausible candidate for experimentally realizing non-Abelian statistics.
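
The non-Abelian character of MZM braiding can be illustrated with a small numerical sketch (a toy model, not the team's photonic experiment): four Majorana operators are represented with Pauli matrices via an illustrative Jordan-Wigner encoding, and two elementary braid operators are shown to be unitary but non-commuting.

```python
import numpy as np

# Pauli matrices and identity
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Four Majorana operators for two fermionic modes (illustrative Jordan-Wigner encoding)
g1, g2, g3, g4 = np.kron(X, I2), np.kron(Y, I2), np.kron(Z, X), np.kron(Z, Y)

def braid(gi, gj):
    # Braid operator B_ij = exp((pi/4) * gi gj) = (1 + gi gj) / sqrt(2)
    return (np.eye(4, dtype=complex) + gi @ gj) / np.sqrt(2)

B12, B23 = braid(g1, g2), braid(g2, g3)

# Each braid is unitary ...
print(np.allclose(B12 @ B12.conj().T, np.eye(4)))      # True
# ... but braids sharing a Majorana mode do not commute: exchanging MZMs in
# different orders gives different final states, the hallmark of non-Abelian statistics.
print(np.linalg.norm(B12 @ B23 - B23 @ B12) > 1e-10)   # True
```

Closed sequences of such braid operators trace out anyonic worldlines, and evaluating suitable (Markov-trace-like) functions of the resulting unitaries is what connects braiding to Jones polynomials; in the experiment, that evaluation is carried out photonically.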

The team used a photonic quantum simulator that employed two-photon correlations and nondissipative imaginary-time evolution to perform two distinct MZM braiding operations that generate anyonic worldlines of several links. Based on this simulator, the team conducted a series of experimental studies to simulate the topological properties of non-Abelian anyons.

They successfully simulated the exchange operations of MZMs in a single Kitaev chain, detected the non-Abelian geometric phase of MZMs in a two-Kitaev-chain model, and further extended the approach to higher-dimensional semion zero modes, studying braiding processes that were immune to local noise and conserved quantum contextual resources.

Based on this work, the team expanded their previous single-photon encoding method to a dual-photon spatial encoding, using coincidence counting of photon pairs. This significantly increased the number of quantum states that could be encoded.

Meanwhile, by introducing a Sagnac-interferometer-based quantum cooling device, the team transformed the dissipative evolution into a nondissipative one, improving the recycling of photonic resources and enabling multi-step quantum evolution operations. These techniques greatly improved the capability of the photonic quantum simulator and laid a solid technical foundation for simulating the braiding of Majorana zero modes across three Kitaev chains.
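
Imaginary-time evolution itself can be sketched numerically with a generic toy model (not the photonic implementation; the Hamiltonian and step size below are arbitrary choices): repeatedly applying the non-unitary step e^(-Hτ) and renormalizing drives an arbitrary state toward the ground state of H.

```python
import numpy as np

# Toy two-qubit Hamiltonian (illustrative choice, not the experimental model)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
H = np.kron(Z, Z) + 0.5 * (np.kron(X, I2) + np.kron(I2, X))

# Build the imaginary-time step exp(-H * tau) from the eigendecomposition of H
tau = 0.2
vals, vecs = np.linalg.eigh(H)
step = vecs @ np.diag(np.exp(-vals * tau)) @ vecs.conj().T

psi = np.ones(4, dtype=complex) / 2.0      # arbitrary normalized starting state
for _ in range(200):
    psi = step @ psi
    psi /= np.linalg.norm(psi)             # renormalize after each non-unitary step

energy = float(np.real(psi.conj() @ H @ psi))
print(round(energy, 6), round(float(vals[0]), 6))   # the state converges to the ground-state energy
```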

The team demonstrated that their experimental setup could faithfully realize the desired braiding evolutions of MZMs, as the average fidelity of the prepared quantum states and braiding operations was above 97%.

By combining different braiding operations of the Majorana zero modes in the three Kitaev chains, the research team simulated the worldlines of five typical topological links and obtained their Jones polynomials, allowing topologically inequivalent links to be distinguished.

Such an advance could contribute greatly to fields such as statistical physics, molecular synthesis technology and DNA replication, where intricate topological links and knots emerge frequently.

More information: Jia-Kun Li et al, Photonic Simulation of Majorana-Based Jones Polynomials, Physical Review Letters (2024). DOI: 10.1103/PhysRevLett.133.230603

Journal information: Physical Review Letters

Provided by University of Science and Technology of China

The science behind your Christmas sweater: How friction shapes the form of knitted fabrics

A trio of physicists from the University of Rennes, Aoyama Gakuin University, and the University of Lyon have discovered, through experimentation, that it is friction between fibers that allows knitted fabrics to take on a given form. Jérôme Crassous, Samuel Poincloux, and Audrey Steinberger have attempted to understand the underlying mechanics involved in the forms of knitted garments. Their paper is published in Physical Review Letters.

The research team noted that while many of the factors that are involved in intertwined fabrics have been studied to better understand their characteristics (such as why sweaters keep people warm despite the gaps between stitches), much less is known about the form garments made using such techniques can take.

To learn more, they conducted experiments using a nylon yarn and a well-known Jersey knit stitch called the stockinette—a technique that involves forming interlocked loops using knitting needles. They knitted a piece of fabric using 70×70 stitches and attached it to a biaxial tensile machine.

The team then used the tensile machine to stretch the piece of fabric in different ways and closely examined how this affected the stitches. They found that the fabric did not have a unique resting shape. By stretching it in different ways, they could cause it to come to rest in different forms, which they call metastable shapes.

They noted that the ratios of the length and width of such metastable shapes varied depending on how much twisting was applied, which suggested the fabric was capable of taking on many different metastable shapes.

The researchers then created simulations of the fabric to show what was happening as it was twisted and pulled on the tensile machine. The simulations showed the same results, but they also allowed the team to change one characteristic of the virtual fibers that could not be changed in the real fabric: the amount of friction between the strands.

They found that setting the friction to zero reduced the metastable shapes to just one. Thus, friction was found to be the driving force behind the forms that knitted fabrics can take.
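
A toy sketch (not the authors' thread-level simulation) of why Coulomb friction multiplies resting states: a block on a spring with static friction can stick anywhere the spring force stays below the friction threshold, whereas without friction only the single unstretched position is an equilibrium. The parameter values are arbitrary.

```python
import numpy as np

k = 1.0             # spring stiffness (arbitrary units)
normal_force = 1.0  # normal load on the frictional contact

def count_rest_positions(mu, x=np.linspace(-1.0, 1.0, 2001)):
    """Count grid positions where the block can remain at rest.

    With Coulomb friction, any position where the spring force |k*x|
    does not exceed the static friction threshold mu*N is metastable.
    """
    sticking = np.abs(k * x) <= mu * normal_force
    return int(np.sum(sticking))

print(count_rest_positions(mu=0.0))   # 1    -> a single equilibrium at x = 0
print(count_rest_positions(mu=0.3))   # many -> a whole band of metastable rest positions
```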

More information: Jérôme Crassous et al, Metastability of a Periodic Network of Threads: Shapes of a Knitted Fabric, Physical Review Letters (2024). DOI: 10.1103/PhysRevLett.133.248201. On arXiv: DOI: 10.48550/arXiv.2404.07811

Journal information: Physical Review Letters  arXiv 

© 2024 Science X Network

A new calculation of the electron’s self-energy improves determination of fundamental constants

When quantum electrodynamics, the quantum field theory of electrons and photons, was being developed after World War II, one of the major challenges for theorists was calculating a value for the Lamb shift, the energy of a photon resulting from an electron transitioning from one hydrogen fine-structure energy level to another.

The effect was first detected by Willis Lamb and Robert Retherford in 1947, with the emitted photon having a frequency of about 1,000 megahertz, corresponding to a photon wavelength of 30 cm and an energy of 4 millionths of an electronvolt—right on the lower edge of the microwave spectrum. It came when the one electron of the hydrogen atom transitioned from the 2S1/2 energy level to the 2P1/2 level. (The leftmost number is the principal quantum number, much like the discrete but increasing circular orbits of the Bohr atom.)
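
The quoted numbers can be checked with the standard relations λ = c/f and E = hf; a quick sketch:

```python
# Quick check of the quoted Lamb-shift photon numbers: wavelength = c/f, E = h*f
h = 6.62607015e-34      # Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s
eV = 1.602176634e-19    # joules per electronvolt

f = 1.0e9               # ~1,000 MHz, as quoted
wavelength = c / f      # ~0.30 m, i.e. about 30 cm
energy_eV = h * f / eV  # ~4.1e-6 eV, about 4 millionths of an electronvolt

print(f"{wavelength:.3f} m, {energy_eV:.2e} eV")
```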

Conventional quantum mechanics didn’t have such transitions, and Dirac’s relativistic version of the Schrödinger equation (naturally called the Dirac equation) did not predict such a splitting either, because the shift is a consequence of interactions with the vacuum, and Dirac’s vacuum was a “sea” that did not interact with real particles.

As theorists worked to produce a workable theory of quantum electrodynamics (QED), predicting the Lamb shift was an excellent challenge as the QED calculation contained the prominent thorns of the theory, such as divergent integrals at both low and high energies and singularity points.

On Lamb’s 65th birthday in 1978, Freeman Dyson said to him, “Those years, when the Lamb shift was the central theme of physics, were golden years for all the physicists of my generation. You were the first to see that this tiny shift, so elusive and hard to measure, would clarify our thinking about particles and fields.”

Precisely predicting the Lamb shift, as well as the anomalous magnetic moment of the electron, has been a challenge for theorists of every generation since. The theoretically predicted value for the shift allows the fine-structure constant to be measured with an uncertainty of less than one part in a million.

Now, a new step in the evolution of the Lamb shift calculation has been published in Physical Review Letters by a group of three scientists from the Max Planck Institute for Nuclear Physics in Germany. To be exact, they calculated the “two-loop” electron self-energy.

Self-energy is the energy a particle (here, an electron) has as a result of changes that it causes in its environment. For example, the electron in a hydrogen atom attracts the proton that is the nucleus, so the effective distance between them changes.

QED has a prescription to calculate the self-energy, and it’s easiest via Feynman diagrams. “Two loops” refers to the Feynman diagrams that describe this quantum process: two virtual photons from the quantum vacuum that influence the electron’s behavior. They pop out of the vacuum, exist for no longer than the Heisenberg uncertainty principle allows, and are then reabsorbed by the 1S electron state, which has spin 1/2.

The two-loop self-energy is one of only three mathematical terms that describe the Lamb shift, but it is the term that most strongly influences the result and poses the biggest computational problem.

Lead author Vladimir Yerokhin and his colleagues determined an enhanced precision for it from numerical calculations. Importantly, they calculated the two-loop correction to all orders in an important parameter, Zα, which represents the interaction with the nucleus. (Z is the atomic number of the nucleus. The atom still has only one electron, but a nucleus bigger than hydrogen’s is included for generality. α is the fine-structure constant.)

Although it was computationally challenging, the trio produced a significant improvement on previous two-loop calculations of the electron self-energy that reduces the 1S–2S Lamb shift in hydrogen by a frequency difference of 2.5 kHz and reduces its theoretical uncertainty. In particular, this reduces the value of the Rydberg constant by one part in a trillion.
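
As a rough consistency check (an order-of-magnitude estimate, not the authors' derivation), a 2.5 kHz change in the 1S–2S transition frequency of roughly 2.466 PHz is indeed about one part in a trillion, the size of the quoted change in the Rydberg constant:

```python
# Rough consistency check of the quoted numbers (order of magnitude only)
f_1s2s = 2.466e15     # 1S-2S transition frequency in hydrogen, Hz (approximate)
shift = 2.5e3         # change in the predicted frequency, Hz

relative_change = shift / f_1s2s
print(f"{relative_change:.1e}")   # ~1e-12, i.e. about one part in a trillion
```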

Introduced by the Swedish spectroscopist Johannes Rydberg in 1890, this number appears in simple equations for the spectral lines of hydrogen. The Rydberg constant is one of the most precisely known fundamental constants in physics, quoted to 12 significant figures with, previously, a relative uncertainty of about two parts in a trillion.

Overall, they write, “the calculational approach developed in this Letter allowed us to improve the numerical accuracy of this effect by more than an order of magnitude and extend calculations to lower nuclear charges [Z] than previously possible.” This, in turn, has consequences for the Rydberg constant.

Their methodology also has consequences for other celebrated QED calculations: other two-loop corrections to the Lamb shift, and especially to the two-loop QED effects for the anomalous magnetic moment of the electron and the muon, also called their “g-factors.” A great deal of experimental effort is currently being put into precisely determining the muon’s g-factor, such as the Muon g-2 experiment at Fermilab, as it could point the way to physics beyond the standard model.

More information: V. A. Yerokhin et al, Two-Loop Electron Self-Energy for Low Nuclear Charges, Physical Review Letters (2024). DOI: 10.1103/PhysRevLett.133.251803

Journal information: Physical Review Letters

© 2024 Science X Network

Starlight to sight: Researchers develop short-wave infrared technology to allow starlight detection

Prof. Zhang Zhiyong’s team at Peking University has developed a heterojunction-gated field-effect transistor (HGFET) that achieves high sensitivity in short-wave infrared detection, with a record specific detectivity above 10¹⁴ Jones at 1300 nm, making it capable of starlight detection. Their research was recently published in the journal Advanced Materials, titled “Opto-Electrical Decoupled Phototransistor for Starlight Detection.”

Highly sensitive shortwave infrared (SWIR) detectors are essential for detecting weak radiation (typically below 10⁻⁸ W·sr⁻¹·cm⁻²·µm⁻¹) with high-end passive image sensors. However, mainstream SWIR detection based on epitaxial photodiodes cannot effectively detect ultraweak infrared radiation due to the lack of inherent gain.

Filling this gap, researchers at the Peking University School of Electronics and collaborators have presented a heterojunction-gated field-effect transistor (HGFET) that achieves ultra-high photogain and exceptionally low noise in the short-wavelength infrared (SWIR) region, benefiting from a design that incorporates a comprehensive opto-electric decoupling mechanism.

The team developed an HGFET consisting of a colloidal quantum dot (CQD)-based p-i-n heterojunction and a carbon nanotube (CNT) field-effect transistor, which detects and amplifies SWIR signals with high inherent gain while adding minimal noise, leading to a record specific detectivity above 10¹⁴ Jones at 1300 nm and a record maximum gain-bandwidth product of 69.2 THz.
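
Specific detectivity D* (in Jones, i.e. cm·Hz^1/2·W⁻¹) is conventionally computed from the responsivity, the active area, and the noise current spectral density via D* = R·√A / i_n. A generic sketch with illustrative, made-up numbers (not the device parameters from the paper):

```python
import math

def specific_detectivity(responsivity_A_per_W, area_cm2, noise_A_per_rtHz):
    """D* = R * sqrt(A) / i_n, expressed in Jones (cm * Hz^0.5 / W)."""
    return responsivity_A_per_W * math.sqrt(area_cm2) / noise_A_per_rtHz

# Illustrative values only: a high-gain phototransistor with very low noise
R = 1.0e4      # responsivity, A/W (large because of internal gain)
A = 1.0e-4     # active area, cm^2
i_n = 1.0e-12  # noise current spectral density, A/Hz^0.5

print(f"{specific_detectivity(R, A, i_n):.1e} Jones")   # ~1e14 with these numbers, the order quoted above
```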

Direct comparative testing indicates that the HGFET can detect weak infrared radiation at levels of 0.46 nW cm⁻², making the detector much more sensitive than commercial and previously reported SWIR detectors and, in particular, enabling starlight detection or vision.

More information: Shaoyuan Zhou et al, Opto‐Electrical Decoupled Phototransistor for Starlight Detection, Advanced Materials (2024). DOI: 10.1002/adma.202413247

Journal information: Advanced Materials

Provided by Peking University

Researchers use high-performance computing to analyze a quantum photonics experiment

by Universität Paderborn

Scientists at Paderborn University have for the first time used high-performance computing (the Paderborn supercomputer Noctua is shown on the right of the picture) to analyze a quantum photonics experiment on a large scale. Credit: Paderborn University, Hennig/Mazhiq

For the first time ever, scientists at Paderborn University have used high-performance computing (HPC) at large scales to analyze a quantum photonics experiment. In specific terms, this involved the tomographic reconstruction of experimental data from a quantum detector. This is a device that measures individual photons.

The researchers involved developed new HPC software to achieve this. Their findings have now been published in the journal Quantum Science and Technology.

Quantum tomography on a megascale photonic quantum detector

High-resolution photon detectors are increasingly being used for quantum research. Precisely characterizing these devices is crucial if they are to be put to effective use for measurement purposes—and thus far, doing so has been a challenge. This is because it involves huge volumes of data that need to be analyzed without neglecting their quantum mechanical structure.

Suitable tools for processing these data sets are particularly important for future applications. While traditional approaches cannot perform like-for-like computations of quantum systems beyond a certain scale, Paderborn’s scientists are using high-performance computing for characterization and certification tasks.

“By developing open-source customized algorithms using HPC, we perform quantum tomography on a megascale quantum photonic detector,” explains physicist Timon Schapeler, who authored the paper with computer scientist Dr. Robert Schade and colleagues from PhoQS (Institute for Photonic Quantum Systems) and PC2 (Paderborn Center for Parallel Computing).
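
Quantum detector tomography reconstructs a detector's measurement operators (its POVM) from its response to known probe states. The sketch below is not the Paderborn team's HPC code; it is a minimal single-mode illustration assuming coherent-state probes and a simple click/no-click detector model with an assumed efficiency of 0.6, reconstructed by linear inversion.

```python
import numpy as np
from math import factorial

n_max = 12                                  # photon-number cutoff (illustrative)
n = np.arange(n_max)

# "True" click probability for each Fock state |n>, assuming a simple
# click/no-click detector of efficiency 0.6: p(click | n) = 1 - (1 - eta)^n
eta = 0.6
theta_true = 1.0 - (1.0 - eta) ** n

# Probe the detector with coherent states of known mean photon number.
mean_photons = np.linspace(0.1, 10.0, 40)
fact = np.array([factorial(int(k)) for k in n], dtype=float)
# F[i, j] = Poissonian weight of Fock state |j> in probe state i
F = np.exp(-mean_photons[:, None]) * mean_photons[:, None] ** n / fact

p_click = F @ theta_true                    # ideal (noise-free) click probabilities

# Linear-inversion tomography: solve F @ theta = p_click in the least-squares sense.
theta_est, *_ = np.linalg.lstsq(F, p_click, rcond=None)
print(np.max(np.abs(theta_est - theta_true)))   # ~0 up to numerical error
```

Real data are noisy and the detectors in question resolve far more outcomes than a single click, which is why reconstructions at megapixel scale call for regularization and the HPC treatment described here.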

PC2, an interdisciplinary research project at Paderborn University, operates the HPC systems. The university is one of Germany’s national high-performance computing centers and thus stands at the forefront of university high-performance computing.

‘Unprecedented scale’

“The findings are opening up entirely new horizons for the size of systems being analyzed in the field of scalable quantum photonics. This has wider implications, for example, for characterizing photonic quantum computer hardware,” Schapeler continues. Researchers were able to perform their calculations for describing a photon detector within just a few minutes—faster than ever before.

The system also managed to complete calculations involving huge quantities of data extremely quickly. Schapeler states, “This shows the unprecedented scale on which this tool can be used with quantum photonic systems. As far as we know, our work is the first contribution to the field of traditional high-performance computing enabling experimental quantum photonics at large scales.

“This field will become increasingly important when it comes to demonstrating quantum supremacy in quantum photonic experiments—and on a scale that cannot be calculated by conventional means.”

Shaping the future with fundamental research

Schapeler is a doctoral student in the “Mesoscopic Quantum Optics” research group headed by Professor Tim Bartley. This team conducts research into the fundamental physics of the quantum states of light and its applications. These states consist of tens, hundreds or thousands of photons.

“The scale is crucial, as this illustrates the fundamental advantage that quantum systems hold over conventional ones. There is a clear benefit in many areas, including measurement technology, data processing and communications,” Bartley explains.

More information: Timon Schapeler et al, Scalable quantum detector tomography by high-performance computing, Quantum Science and Technology (2024). DOI: 10.1088/2058-9565/ad8511

Journal information: Quantum Science and Technology 

Provided by Universität Paderborn

A new spectroscopy method reveals water’s quantum secrets

by Celia Luterbacher, Ecole Polytechnique Federale de Lausanne

Ph.D. student Eksha Chaudhary with the correlated vibrational spectroscopy setup. Credit: Jamani Caillet

For the first time, EPFL researchers have exclusively observed molecules participating in hydrogen bonds in liquid water, measuring electronic and nuclear quantum effects that were previously accessible only via theoretical simulations.

Water is synonymous with life, but the dynamic, multifaceted interaction that brings H2O molecules together—the hydrogen bond—remains mysterious. Hydrogen bonds form when a hydrogen atom of one water molecule interacts with an oxygen atom of a neighboring molecule, sharing electronic charge in the process.

This charge-sharing is a key feature of the three-dimensional “H-bond” network that gives liquid water its unique properties, but quantum phenomena at the heart of such networks have thus far been understood only through theoretical simulations.

Now, researchers led by Sylvie Roke, head of the Laboratory for Fundamental BioPhotonics in EPFL’s School of Engineering, have published a new method—correlated vibrational spectroscopy (CVS)—that enables them to measure how water molecules behave when they participate in H-bond networks.

Crucially, CVS allows scientists to distinguish between such participating (interacting) molecules, and randomly distributed, non-H-bonded (non-interacting) molecules. By contrast, any other method reports measurements on both molecule types simultaneously, making it impossible to distinguish between them.

“Current spectroscopy methods measure the scattering of laser light caused by the vibrations of all molecules in a system, so you have to guess or assume that what you are seeing is due to the molecular interaction you’re interested in,” Roke explains.

“With CVS, the vibrational mode of each different type of molecule has its own vibrational spectrum. And because each spectrum has a unique peak corresponding to water molecules moving back and forth along the H-bonds, we can measure directly their properties, such as how much electronic charge is shared, and how H-bond strength is impacted.”

The method, which the team says has “transformative” potential to characterize interactions in any material, has been published in Science.

To distinguish between interacting and non-interacting molecules, the scientists illuminated liquid water with femtosecond (one quadrillionth of a second) laser pulses in the near-infrared spectrum. These ultra-short bursts of light create tiny charge oscillations and atomic displacements in the water, which trigger the emission of visible light.

This emitted light appears in a scattering pattern that contains key information about the spatial organization of the molecules, while the color of the photons contains information about atomic displacements within and between molecules.

“Typical experiments place the spectrographic detector at a 90-degree angle to the incoming laser beam, but we realized that we could probe interacting molecules simply by changing the detector position, and recording spectra using certain combinations of polarized light. In this way, we can create separate spectra for non-interacting and interacting molecules,” Roke says.

The team conducted more experiments aimed at using CVS to tease apart the electronic and nuclear quantum effects of H-bond networks, for example by changing the pH of water through the addition of hydroxide ions (making it more basic), or protons (more acidic).

“Hydroxide ions and protons participate in H-bonding, so changing the pH of water changes its reactivity,” says Ph.D. student Mischa Flór, the paper’s first author.

“With CVS, we can now quantify exactly how much extra charge hydroxide ions donate to H-bond networks (8%), and how much charge protons accept from it (4%)—precise measurements that could never have been done experimentally before.”

These values were explained with the aid of advanced simulations conducted by collaborators in France, Italy, and the U.K.

The researchers emphasize that the method, which they also corroborated via theoretical calculations, can be applied to any material, and indeed several new characterization experiments are already underway.

“The ability to quantify directly H-bonding strength is a powerful method that can be used to clarify molecular-level details of any solution, for example containing electrolytes, sugars, amino acids, DNA, or proteins,” Roke says. “As CVS is not limited to water, it can also deliver a wealth of information on other liquids, systems, and processes.”

More information: Mischa Flór et al, Dissecting the hydrogen bond network of water: Charge transfer and nuclear quantum effects, Science (2024). DOI: 10.1126/science.ads4369

Journal information: Science 

Scientists discover a promising way to create new superheavy elements

by David Appell , Phys.org

A chart of superheavy elements (SHEs), plotted by atomic number (protons) vs number of neutrons. Boxes are discovered SHEs, with predicted half-lives. The circle is an island of stability. Credit: Wikipedia Commons

What is the heaviest element in the universe? Are there infinitely many elements? Where and how could superheavy elements be created naturally?

The heaviest abundant element known to exist is uranium, with 92 protons (the atomic number “Z”). But scientists have succeeded in synthesizing superheavy elements up to oganesson, with a Z of 118. Immediately before it are livermorium, with 116 protons, and tennessine, with 117.

All have short half-lives—the amount of time for half of an assembly of the element’s atoms to decay—usually less than a second and some as short as a microsecond. Creating and detecting such elements is not easy and requires powerful particle accelerators and elaborate measurements.

But the typical way of producing high-Z elements is reaching its limit. In response, a group of scientists from the United States and Europe have come up with a new method to produce superheavy elements beyond the dominant existing technique. Their work, done at the Lawrence Berkeley National Laboratory in California, was published in Physical Review Letters.

“Today, the concept of an ‘island of stability’ remains an intriguing topic, with its exact position and extent on the Segrè chart continuing to be a subject of active pursuit both in theoretical and experimental nuclear physics,” J.M. Gates of LBNL and colleagues wrote in their paper.

The island of stability is a region where superheavy elements and their isotopes—nuclei with the same number of protons but different numbers of neutrons—may have much longer half-lives than the elements near it. It’s been expected to occur for isotopes near Z=112.

While there have been several techniques to discover superheavy elements and create their isotopes, one of the most fruitful has been to bombard targets from the actinide series of elements with a beam of calcium atoms, specifically an isotope of calcium, 48-calcium (48Ca), that has 20 protons and 28 (48 minus 20) neutrons. The actinide elements have proton numbers from 89 to 103, and 48Ca is special because it has a “magic number” of both protons and neutrons, meaning their numbers completely fill the available energy shells in the nucleus.

Proton and/or neutron numbers being magic means the nucleus is extremely stable; for example, 48Ca has a half-life of about 60 billion billion (6 × 10¹⁹) years, far larger than the age of the universe. (By contrast, 49Ca, with just one more neutron, decays by half in about nine minutes.)
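
The contrast between the two half-lives can be made concrete with the standard decay law N(t) = N₀ · 2^(−t/T½), using the half-lives quoted above:

```python
# Exponential decay: fraction remaining = 2 ** (-t / half_life)
half_life_49ca_min = 9.0        # ~9 minutes, as quoted
half_life_48ca_yr = 6.0e19      # ~6 x 10^19 years, as quoted

hour = 60.0
frac_49ca_after_hour = 2.0 ** (-hour / half_life_49ca_min)
print(f"{frac_49ca_after_hour:.3f}")   # ~0.010: only about 1% of 49Ca survives one hour

age_universe_yr = 1.38e10
frac_48ca = 2.0 ** (-age_universe_yr / half_life_48ca_yr)
print(f"{frac_48ca:.9f}")              # ~1.0: essentially no 48Ca has decayed since the Big Bang
```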

These reactions are called “hot-fusion” reactions. Another technique, “cold fusion,” accelerated beams of isotopes from 50-titanium to 70-zinc onto targets of lead or bismuth. Together, these two approaches led to the discovery of superheavy elements up to oganesson (Z=118).

But the time needed to produce new superheavy elements, set by the reaction cross section that quantifies how likely the fusion is to occur, has grown longer and longer, sometimes requiring weeks of running time. With the predicted island of stability so close, scientists need techniques that go further than oganesson. Einsteinium and fermium, heavy actinides in their own right, cannot be produced in sufficient quantities to make suitable targets.

“A new reaction approach is required,” wrote Gates and her team. And that is what they found.

Theoretical models of the nucleus have successfully predicted the production rates of superheavy elements below oganesson using actinide targets and beams of isotopes heavier than 48-calcium. These models also agree that to produce elements with Z=119 and Z=120, beams of 50-titanium would work best, having the highest cross sections.

But not all necessary parameters have been pinned down by theorists, such as the necessary energy of the beams, and some of the masses needed for the models haven’t been measured by experimentalists. The exact numbers are important because the production rates of the superheavy elements could otherwise vary enormously.

Several experimental efforts to produce atoms with proton numbers from 119 to 122 have already been attempted. All have been unsuccessful, and the limits they determined for the cross sections have not allowed different theoretical nuclear models to be constrained. Gates and her team investigated the production of isotopes of livermorium (Z=116) by beaming 50-titanium onto targets of 244-Pu (plutonium).

Using the 88-Inch Cyclotron accelerator at Lawrence Berkeley National Laboratory, the team produced a beam averaging 6 trillion titanium ions per second exiting the cyclotron. These struck the plutonium target, which had a circular area of 12.2 cm², over a 22-day period. Making a slew of measurements, they determined that 290-livermorium had been produced via two different nuclear decay chains.
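
A back-of-the-envelope total for the beam dose, using only the figures quoted above (6 trillion ions per second for 22 days), shows how many projectiles were needed to produce a handful of livermorium atoms:

```python
# Rough integrated number of titanium ions delivered to the target
rate_ions_per_s = 6.0e12     # ~6 trillion ions per second, as quoted
days = 22
total_ions = rate_ions_per_s * days * 24 * 3600
print(f"{total_ions:.2e}")   # ~1.1e19 ions over the 22-day run
```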

“This is the first reported production of a SHE [superheavy element] near the predicted island of stability with a beam other than 48-calcium,” they concluded. The reaction cross section, or probability of interaction, did decrease, as was expected with heavier beam isotopes, but “success of this measurement validates that discoveries of new SHE are indeed within experimental reach.”

The discovery represents the first time a collision of non-magic nuclei has been shown capable of creating new superheavy atoms and isotopes, hopefully paving the way for future discoveries. About 110 isotopes of superheavy elements are known to exist, but another 50 are expected to be out there, waiting to be uncovered by new techniques such as this.

More information: J. M. Gates et al, Toward the Discovery of New Elements: Production of Livermorium ( Z=116 ) with Ti50, Physical Review Letters (2024). DOI: 10.1103/PhysRevLett.133.172502

Journal information: Physical Review Letters 

Investigating the flow of fluids with non-monotonic, ‘S-shaped’ rheology

by SciencePOD

Sketch of shear banding (top left) and vorticity banding (top right). For shear banding, the rheological curve τ(γ̇) is single-valued but non-monotonic (bottom left). For vorticity banding, it is the γ̇(τ) curve that is single-valued and non-monotonic (bottom right). Credit: The European Physical Journal E (2024). DOI: 10.1140/epje/s10189-024-00444-5

Water and oil, and some other simple fluids, respond in the same way to all levels of shear stress. These are termed Newtonian fluids, and their viscosity is constant for all stresses although it will vary with temperature. Under different stresses and pressure gradients, other non-Newtonian fluids exhibit patterns of behavior that are much more complex.

Researchers Laurent Talon and Dominique Salin from Université Paris-Saclay, France, have now shown that under certain circumstances, cornstarch suspensions can display a banding pattern with alternating regions of high and low viscosity. This work has been published in The European Physical Journal E.

Non-Newtonian fluids may exhibit shear thinning, where the viscosity decreases with stress; common examples include ketchups and sauces that can appear almost solid-like at rest. The reverse is shear thickening, in which viscosity increases with stress. Some suspensions exhibit a property called discontinuous shear thickening (DST).

“At low shear stress [these fluids] behave like Newtonian fluids, but at a certain stress value the viscosity increases very steeply,” explains Talon.

In 2014, Matthew Wyart of New York University, NY, U.S., and Michael Cates of the University of Edinburgh, Scotland, proposed a similar but even more counter-intuitive and interesting model: a so-called “S-shaped” rheology where the viscosity of a fluid first increases with increasing stress and then decreases.
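
A minimal numerical sketch of a Wyart-Cates-type constitutive law (with illustrative parameter values, not the ones used by Talon and Salin) shows how the flow rate can become a non-monotonic function of stress, i.e. an "S-shaped" flow curve:

```python
import numpy as np

# Wyart-Cates-style model: the fraction of frictional contacts grows with stress,
# pulling the jamming fraction down from phi_0 (frictionless) to phi_m (frictional).
phi_0, phi_m = 0.64, 0.58   # illustrative jamming fractions
phi = 0.60                  # suspension packing fraction, between phi_m and phi_0
sigma_star = 1.0            # onset stress scale (arbitrary units)
eta_s = 1.0                 # solvent viscosity (arbitrary units)

sigma = np.logspace(-2, 2, 400)             # applied shear stress
f = np.exp(-sigma_star / sigma)             # fraction of frictional contacts
phi_J = phi_m * f + phi_0 * (1.0 - f)       # stress-dependent jamming fraction
eta = eta_s * (1.0 - phi / phi_J) ** -2.0   # suspension viscosity
gamma_dot = sigma / eta                     # shear rate
# (Once phi_J drops below phi the suspension is jammed; the formula is only meaningful below that stress.)

# A non-monotonic gamma_dot(sigma) means the sigma(gamma_dot) curve is S-shaped.
print(bool(np.any(np.diff(gamma_dot) < 0)))   # True for this packing fraction
```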

Talon and Salin set out to investigate the plausibility of this simulated rheology using a suspension of cornstarch in a straight, cylindrical capillary tube. They observed the expected non-monotonic relationship between pressure and flow rate, but not exactly as predicted: the flow rate initially increased with pressure but then suddenly decreased.

“Assuming that the Wyart-Cates model is essentially correct, one solution that would match what we observed could be a ‘rheological segregation’ or ‘streamwise banding’ in the tube, in which some regions have a high viscosity and others a lower one,” explains Talon. “We are continuing to investigate the validity of this proposal, both experimentally and using numerical simulations.”

More information: L. Talon et al, On pressure-driven Poiseuille flow with non-monotonic rheology, The European Physical Journal E (2024). DOI: 10.1140/epje/s10189-024-00444-5

Journal information: European Physical Journal E 

New partially coherent unidirectional imaging system enhances visual data transmission

by UCLA Engineering Institute for Technology Advancement

The unidirectional diffractive processor transmits high-quality images in the forward propagation direction, represented with the blue line, from field of view (FOV) A to FOV B, while effectively blocking the image formation in the backward propagation direction, represented with the brown line, from FOV B to FOV A. Credit: Ozcan Lab, UCLA

A team of researchers from the University of California, Los Angeles (UCLA) has unveiled a new development in optical imaging technology that could significantly enhance visual information processing and communication systems.

The work is published in the journal Advanced Photonics Nexus.

The new system, based on partially coherent unidirectional imaging, offers a compact, efficient solution for transmitting visual data in one direction while blocking transmission in the opposite direction.

This innovative technology, led by Professor Aydogan Ozcan and his interdisciplinary team, is designed to selectively transmit high-quality images in one direction, from field-of-view A to field-of-view B, while deliberately distorting images when viewed from the reverse direction, B to A.

This asymmetric image transmission could have broad implications for fields like privacy protection, augmented reality, and optical communications, offering new capabilities for managing how visual optical information is processed and transmitted.

Unidirectional imaging under partially coherent light

The new system addresses a challenge in optical engineering: how to control light transmission to enable clear imaging in one direction while blocking it in the reverse.

Previous solutions for unidirectional wave transmission have often relied on complex methods such as temporal modulation, nonlinear materials, or high-power beams under fully coherent illumination, which limit their practical applications.

In contrast, this UCLA innovation leverages partially coherent light to achieve high image quality and power efficiency in the forward direction (A to B), while intentionally introducing distortion and reduced power efficiency in the reverse direction (B to A).

“We engineered a set of spatially optimized diffractive layers that interact with partially coherent light in a way that promotes this asymmetric transmission,” explains Dr. Ozcan. “This system can work efficiently with common illumination sources like LEDs, making it adaptable for a variety of practical applications.”

Conceptual illustration of the technology. Credit: UCLA Engineering Institute for Technology Advancement

Leveraging deep learning for enhanced optical design

A key aspect of this development is the use of deep learning to physically design the diffractive layers that make up the unidirectional imaging system. The UCLA team optimized these layers for partially coherent light with a phase correlation length greater than 1.5 times the wavelength of the light.

This careful optimization ensures that the system provides reliable unidirectional image transmission, even when the light source has varying coherence properties. Each imager is compact, axially spanning less than 75 wavelengths, and features a polarization-independent design.
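
The sketch below is not the UCLA team's code; it is a generic forward model of the kind diffractive designs are typically built on, in which each layer is a phase mask and free-space propagation between layers is computed with the angular spectrum method. The grid size, wavelength, and spacing are arbitrary choices.

```python
import numpy as np

def propagate(field, wavelength, dx, z):
    """Free-space propagation of a sampled complex field over distance z
    using the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = (2 * np.pi / wavelength) * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)      # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

def diffractive_processor(field, phase_layers, wavelength, dx, spacing):
    """Apply a stack of phase masks separated by free-space propagation."""
    for phase in phase_layers:
        field = field * np.exp(1j * phase)   # phase-only diffractive layer
        field = propagate(field, wavelength, dx, spacing)
    return field

# Toy example: a 64x64 input field through two random (untrained) phase layers
rng = np.random.default_rng(1)
wavelength, dx, spacing = 1.0, 0.5, 40.0     # arbitrary consistent units
field_in = np.zeros((64, 64), dtype=complex)
field_in[24:40, 24:40] = 1.0
layers = [rng.uniform(0, 2 * np.pi, (64, 64)) for _ in range(2)]
field_out = diffractive_processor(field_in, layers, wavelength, dx, spacing)
print(field_out.shape, float(np.abs(field_out).max()))
```

In an actual design, the phase layers would be optimized with gradient-based deep learning rather than chosen at random, and partial coherence would be modeled by averaging intensities over an ensemble of input fields with the appropriate correlation length.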

The deep learning algorithms used in the design process help ensure that the system maintains high diffraction efficiency in the forward direction while suppressing image formation in the reverse.

The researchers demonstrated that their system performs consistently across different image datasets and illumination conditions, showing resilience to changes in the light’s coherence properties. “The ability of our system to generalize across different types of input images and light properties is one of its exciting features,” says Dr. Ozcan.

Looking ahead, the researchers plan to extend the unidirectional imager to different parts of the spectrum, including infrared and visible ranges, and to explore various kinds of illumination sources.

These advancements could push the boundaries of imaging and sensing, unlocking new applications and innovations. In privacy protection, for example, the technology could be used to prevent sensitive information from being visible from unintended perspectives. Similarly, augmented and virtual reality systems could use this technology to control how information is displayed to different viewers.

“This technology has the potential to impact multiple fields where controlling the flow of visual information is critical,” adds Dr. Ozcan. “Its compact design and compatibility with widely available light sources make it especially promising for integration into existing systems.”

This research was conducted by an interdisciplinary team from UCLA’s Department of Electrical and Computer Engineering and California NanoSystems Institute (CNSI).

More information: Guangdong Ma et al, Unidirectional imaging with partially coherent light, Advanced Photonics Nexus (2024). DOI: 10.1117/1.APN.3.6.066008

Provided by UCLA Engineering Institute for Technology Advancement 

Research team achieves first-ever acceleration of positive muons to 100 keV

by Bob Yirka , Phys.org

Team at J-PARC demonstrates acceleration of positive muons from thermal energy to 100 keV
Schematic drawing of the experimental setup. Credit: arXiv (2024). DOI: 10.48550/arxiv.2410.11367

A team of engineers and physicists affiliated with a host of institutions across Japan, working at the Japan Proton Accelerator Research Complex, has demonstrated acceleration of positive muons from thermal energy to 100 keV—the first time muons have been accelerated in a stable way. The group has published a paper describing their work on the arXiv preprint server.

Muons are sub-atomic particles similar to electrons. The main difference is their mass; a muon is 200 times heavier than an electron. They are also much shorter lived. Physicists have for many years wanted to build a muon collider to conduct new types of physics research, such as experiments that go beyond the standard model.

Unfortunately, such efforts have been held back by the extremely short muon lifespan—approximately 2 microseconds—after which they decay to electrons and neutrinos. Making things even more difficult is their tendency to zip around haphazardly, which makes forming them into a single beam extremely challenging. In this new effort, the research team has overcome such obstacles using a new technique.

The team started by shooting positively charged muons into a specially designed silica-based aerogel, similar to that used for thermal insulation applications. As the muons captured electrons in the aerogel, muonium atoms (μ⁺e⁻, an exotic atom consisting of a positive muon and an electron) were formed. The research team then fired a laser at them to strip away the electrons, which converted them back into positive muons, but with greatly diminished speed.

The following step involved guiding the slowed muons into a radio-frequency cavity, where an electric field accelerated them to a final energy of 100 keV, achieving approximately 4% of the speed of light.
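
The quoted 4% of light speed follows from relativistic kinematics for a 100 keV kinetic energy and the muon rest energy of about 105.7 MeV:

```python
import math

# Speed of a muon with 100 keV of kinetic energy (relativistic kinematics)
m_mu_MeV = 105.658      # muon rest energy, MeV
KE_MeV = 0.100          # kinetic energy, MeV

gamma = 1.0 + KE_MeV / m_mu_MeV
beta = math.sqrt(1.0 - 1.0 / gamma ** 2)
print(f"{beta:.3f} c")  # ~0.043 c, i.e. roughly 4% of the speed of light
```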

The research team acknowledges that despite their achievement, building a working muon collider is still a distant goal. And while their technique might play a role in such a development, there are still problems that must be worked out, such as how to scale an apparatus to a usable size.

More information: S. Aritome et al, Acceleration of positive muons by a radio-frequency cavity, arXiv (2024). DOI: 10.48550/arxiv.2410.11367

Journal information: arXiv