Particles of light may create fluid flow, data-theory comparison suggests

This graphic shows the energy density at different times during the hydrodynamic evolution of the matter created in a collision of a lead nucleus (moving to the left) with a photon emitted from the other lead nucleus (moving to the right). Yellow represents the highest energy density and purple the lowest. Credit: Brookhaven National Laboratory

A new computational analysis by theorists at the U.S. Department of Energy’s Brookhaven National Laboratory and Wayne State University supports the idea that photons (a.k.a. particles of light) colliding with heavy ions can create a fluid of “strongly interacting” particles. In a paper just published in Physical Review Letters, they show that calculations describing such a system match up with data collected by the ATLAS detector at Europe’s Large Hadron Collider (LHC).

As the paper explains, the calculations are based on the hydrodynamic particle flow seen in head-on collisions of various types of ions at both the LHC and the Relativistic Heavy Ion Collider (RHIC), a DOE Office of Science user facility for nuclear physics research at Brookhaven Lab. With only modest changes, these calculations also describe flow patterns seen in near-miss collisions, where photons that form a cloud around the speeding ions collide with the ions in the opposite beam.

“The upshot is that using the same framework we use to describe lead-lead and proton-lead collisions, we can describe the data of these ultra-peripheral collisions where we have a photon colliding with a lead nucleus,” said Brookhaven Lab theorist Bjoern Schenke, a co-author of the paper. “That tells you there’s a possibility that in these photon-ion collisions, we create a small dense strongly interacting medium that is well described by hydrodynamics—just like in the larger systems.”

Fluid signatures

Observations of particles flowing in characteristic ways have been key evidence that the larger collision systems (lead-lead and proton-lead collisions at the LHC; and gold-gold and proton-gold collisions at RHIC) create a nearly perfect fluid. The flow patterns were thought to stem from the enormous pressure gradients created by the large number of strongly interacting particles produced where the colliding ions overlap.

“By smashing these high-energy nuclei together we’re creating such high energy density—compressing the kinetic energy of these guys into such a small space—that this stuff essentially behaves like a fluid,” Schenke said.

Spherical particles (including protons and nuclei) colliding head-on are expected to generate a uniform pressure gradient. But partially overlapping collisions generate an oblong, almond-shaped pressure gradient that pushes more high-energy particles out along the short axis than perpendicular to it.

This “elliptic flow” pattern was one of the earliest hints that particle collisions at RHIC could create a quark-gluon plasma, or QGP—a hot soup of the fundamental building blocks that make up the protons and neutrons of nuclei. Scientists were at first surprised by the QGP’s liquid-like behavior. But they later established elliptic flow as a defining feature of QGP, and evidence that the quarks and gluons were still interacting strongly even when free from confinement within individual protons and neutrons. Later observations of similar flow patterns in collisions of protons with large nuclei intriguingly suggested that these proton-nucleus collision systems can also create tiny specks of quark-gluon soup.
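Elliptic flow is usually quantified as the second Fourier coefficient, v2, of the azimuthal distribution of emitted particles. As a rough illustration only (not the collaborations' actual analysis code), the Python sketch below samples a toy azimuthal distribution with a built-in v2 of 0.1 and recovers it as the average of cos 2(phi − Psi2):

```python
import math
import random

def elliptic_flow(phis, psi2=0.0):
    """Estimate the elliptic flow coefficient v2 = <cos 2(phi - Psi_2)>
    from a list of azimuthal emission angles phi (radians), given the
    event-plane angle Psi_2."""
    return sum(math.cos(2.0 * (phi - psi2)) for phi in phis) / len(phis)

# Toy event sample: draw angles from dN/dphi ~ 1 + 2*v2*cos(2*phi) with
# v2 = 0.1, using simple accept-reject sampling. The value 0.1 is an
# illustrative magnitude, not a measured one.
random.seed(1)
v2_true = 0.1
phis = []
while len(phis) < 200_000:
    phi = random.uniform(0.0, 2.0 * math.pi)
    if random.uniform(0.0, 1.0 + 2.0 * v2_true) <= 1.0 + 2.0 * v2_true * math.cos(2.0 * phi):
        phis.append(phi)

print(round(elliptic_flow(phis), 3))  # close to the input v2 of 0.1
```

A larger v2 means a stronger push of particles along the short axis of the almond-shaped overlap zone.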

“Our new paper is about pushing this to even further extremes, looking at collisions between photons and nuclei,” Schenke said.

Changing the projectile

It has long been known that ultra-peripheral collisions could create photon-nucleus interactions, using the nuclei themselves as the source of the photons. That’s because charged particles accelerated to high energies, like the lead ions accelerated at the LHC (and gold ions at RHIC), emit electromagnetic waves—particles of light. So, each accelerated lead ion at the LHC is essentially surrounded by a cloud of photons.

“When two of these ions pass each other very closely without colliding, you can think of one as emitting a photon, which then hits the lead ion going the other way,” Schenke said. “Those events happen a lot; it’s easier for the ions to barely miss than to precisely hit one another.”

ATLAS scientists recently published data on intriguing flow-like signals from these photon-nucleus collisions.

“We had to set up special data collection techniques to pick out these unique collisions,” said Blair Seidlitz, a Columbia University physicist who helped set up the ATLAS trigger system for the analysis when he was a graduate student at the University of Colorado, Boulder. “After collecting enough data, we were surprised to find flow-like signals that were similar to those observed in lead-lead and proton-lead collisions, although they were a little smaller.”

Schenke and his collaborators set out to see whether their theoretical calculations could accurately describe the particle flow patterns.

They used the same hydrodynamic calculations that describe the behavior of particles produced in lead-lead and proton-lead collision systems. But they made a few adjustments to account for the “projectile” striking the lead nucleus changing from a proton to a photon.

According to the laws of physics (specifically, quantum electrodynamics), a photon can undergo quantum fluctuations to become another particle with the same quantum numbers. A rho meson, a particle made of a particular combination of a quark and antiquark held together by gluons, is one of the most likely results of those photon fluctuations.

If you think back to the proton—made of three quarks—this two-quark rho particle is just a step down the complexity ladder.

“Instead of having a gluon distribution around three quarks inside a proton, we have the two quarks (quark-antiquark) with a gluon distribution around those to collide with the nucleus,” Schenke said.

Brookhaven Lab theorist Bjoern Schenke’s hydrodynamic calculations match up with data from collisions of photons with atomic nuclei at the Large Hadron Collider’s ATLAS detector, suggesting those collisions create a fluid of “strongly interacting” particles. Credit: Brookhaven National Laboratory

Accounting for energy

The calculations also had to account for the big difference in energy in these photon-nucleus collision systems, compared to proton-lead and especially lead-lead.

“The emitted photon that’s colliding with the lead won’t carry the entire momentum of the lead nucleus it came from, but only a tiny fraction of that. So, the collision energy will be much lower,” Schenke said.
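The scale of that energy reduction can be sketched with a back-of-the-envelope relation: in the massless, head-on approximation, a photon carrying a fraction x of the per-nucleon momentum gives a photon-nucleon center-of-mass energy of sqrt(x) times the nucleon-nucleon one. The momentum fractions below are purely illustrative, not values from the paper:

```python
import math

def gamma_nucleon_cm_energy(sqrt_s_nn_gev, x):
    """Photon-nucleon center-of-mass energy in the massless head-on
    approximation: s_gammaN = x * s_NN, so sqrt(s_gammaN) = sqrt(x) * sqrt(s_NN)."""
    return math.sqrt(x) * sqrt_s_nn_gev

# Pb-Pb at the LHC: sqrt(s_NN) = 5020 GeV per nucleon pair.
# The photon momentum fractions x below are hypothetical examples.
for x in (1e-4, 1e-3, 1e-2):
    print(f"x = {x:g}:  sqrt(s_gammaN) ~ {gamma_nucleon_cm_energy(5020.0, x):.0f} GeV")
```

Even a percent-level momentum fraction drops the collision energy by an order of magnitude relative to the full nucleon-nucleon energy.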

That energy difference turned out to be even more important than the change of projectile.

In the most energetic lead-lead or gold-gold heavy ion collisions, the pattern of particles emerging in the plane transverse to the colliding beams generally persists no matter how far you look from the collision point along the beamline (in the longitudinal direction). But when Schenke and collaborators modeled the patterns of particles expected to emerge from lower-energy photon-lead collisions, it became apparent that including the 3D details of the longitudinal direction made a difference. The model showed that the geometry of the particle distributions changes rapidly with increasing longitudinal distance; the particles become “decorrelated.”
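The "decorrelation" described above can be captured in a toy model in which the orientation of the elliptic event plane twists steadily along the beam direction, so particles at widely separated longitudinal positions (rapidities) see rotated pressure gradients. This sketch is an illustration of the concept, not the authors' 3D hydrodynamic model, and the twist rate is a made-up number:

```python
import math

def plane_angle(eta, twist_per_unit_eta):
    """Toy model: the elliptic event-plane angle Psi_2 rotates linearly
    with longitudinal position (rapidity eta)."""
    return twist_per_unit_eta * eta

def decorrelation(eta_a, eta_b, twist):
    """cos 2(Psi_2(eta_a) - Psi_2(eta_b)): 1 means the geometry is fully
    correlated at the two longitudinal positions; smaller values mean the
    particles there 'see' rotated pressure gradients."""
    return math.cos(2.0 * (plane_angle(eta_a, twist) - plane_angle(eta_b, twist)))

# Hypothetical twist rate of 0.3 rad per unit rapidity, for illustration.
for d_eta in (0.0, 1.0, 2.0, 4.0):
    print(f"delta_eta = {d_eta}:  correlation = {decorrelation(0.0, d_eta, 0.3):.3f}")
```

In a high-energy lead-lead collision the effective twist is small and the correlation stays near 1 over the detector's coverage; at the lower energies of photon-lead collisions it falls off quickly, which is why the full 3D treatment matters.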

“The particles see different pressure gradients depending on their longitudinal position,” Schenke explained.

“So, for these low energy photon-lead collisions, it is important to run a full 3D hydrodynamic model (which is more computationally demanding) because the particle distribution changes more rapidly as you go out in the longitudinal direction,” he said.

When the theorists compared their predictions using this lower-energy, full 3D, hydrodynamic model with the particle flow patterns observed in photon-lead collisions by the ATLAS detector, the data and theory matched up nicely, at least for the most obvious elliptic flow pattern, Schenke said.

Implications and the future

“From this result, it looks like it’s conceivable that even in photon-heavy ion collisions, we have a strongly interacting fluid that responds to the initial collision geometry, as described by hydrodynamics,” Schenke said. “If the energies and temperatures are high enough,” he added, “there will be a quark-gluon plasma.”

Seidlitz, the ATLAS physicist, commented, “It was very interesting to see these results suggesting the formation of a small droplet of quark-gluon plasma, as well as how this theoretical analysis offers concrete explanations as to why the flow signatures are a bit smaller in photon-lead collisions.”

Additional data to be collected by ATLAS and other experiments at RHIC and the LHC over the next several years will enable more detailed analyses of particles flowing from photon-nucleus collisions. These analyses will help distinguish the hydrodynamic calculation from another possible explanation, in which the flow patterns are not a result of the system’s response to the initial geometry.

In the longer-term future, experiments at an Electron-Ion Collider (EIC), a facility planned to replace RHIC sometime in the next decade at Brookhaven Lab, could provide more definitive conclusions.

More information: Wenbin Zhao et al, Collectivity in Ultraperipheral Pb + Pb Collisions at the Large Hadron Collider, Physical Review Letters (2022). DOI: 10.1103/PhysRevLett.129.252302

Journal information: Physical Review Letters 

Provided by Brookhaven National Laboratory 

Large Hadron Collider Beauty releases first set of data to the public

LHCb event display from 2011 showing a B meson decaying into a muon and antimuon pair. Credit: CERN

The Large Hadron Collider Beauty (LHCb) experiment at CERN is the world’s leading experiment in quark flavor physics with a broad particle physics program. Its data from Runs 1 and 2 of the Large Hadron Collider (LHC) has so far been used for over 600 scientific publications, including a number of significant discoveries.

While all scientific results from the LHCb collaboration are already publicly available through open access papers, the data used by the researchers to produce these results is now accessible to anyone in the world through the CERN open data portal. The data release is made in the context of CERN’s Open Science Policy, reflecting the values of transparency and international collaboration enshrined in the CERN Convention for more than 60 years.

“The data collected at LHCb is a unique legacy to humanity, especially since no other experiment covers the region LHCb looks at,” says Sebastian Neubert, leader of the LHCb open data project. “It has been obtained through a huge international collaborative effort, which was funded by the public. Therefore the data belongs to society.”

The data sample made available amounts to 20% of the total data set collected by the LHCb experiment in 2011 and 2012 during LHC Run 1. It comprises 200 terabytes containing information obtained from proton–proton collision events filtered and recorded with the detector.

The LHCb collaboration has preprocessed the data by reconstructing experimental signatures, such as the trajectories of charged particles, from the raw information delivered by its complex detector system. The data is filtered, classified according to approximately 300 processes and decays, and made available in the same format as that used by LHCb physicists.

The analysis of LHC data is a complex and time-consuming exercise. Therefore, to facilitate the analysis, the samples are accompanied by extensive documentation and metadata, as well as a glossary explaining several hundred special terms used in the preprocessing. The data can be analyzed using dedicated LHCb algorithms, which are available as open source software.
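A typical first exercise on data like this is reconstructing a decaying particle's mass from its daughters, as in the muon-pair decay shown in the event display above. The sketch below is a self-contained illustration with made-up momenta (it does not read the open data files or use LHCb software); it builds two muon four-momenta and recovers the parent mass, here the well-known J/psi at 3.0969 GeV:

```python
import math

MUON_MASS = 0.1057  # GeV

def invariant_mass(p1, p2):
    """Invariant mass of a two-particle system from (E, px, py, pz)
    four-momenta in GeV: m^2 = (E1 + E2)^2 - |p1 + p2|^2."""
    e = p1[0] + p2[0]
    px, py, pz = (p1[i] + p2[i] for i in (1, 2, 3))
    return math.sqrt(e * e - px * px - py * py - pz * pz)

def muon_four_momentum(px, py, pz):
    """On-shell muon four-momentum from its three-momentum (GeV)."""
    e = math.sqrt(MUON_MASS**2 + px**2 + py**2 + pz**2)
    return (e, px, py, pz)

# Toy example: a J/psi (mass 3.0969 GeV) decaying at rest into mu+ mu-,
# the two muons flying back to back along the z axis.
m_parent = 3.0969
p = math.sqrt((m_parent / 2.0) ** 2 - MUON_MASS**2)
mu_plus = muon_four_momentum(0.0, 0.0, p)
mu_minus = muon_four_momentum(0.0, 0.0, -p)
print(f"{invariant_mass(mu_plus, mu_minus):.4f} GeV")
```

In real data one would loop over many muon pairs and histogram this quantity, with true decays appearing as a peak at the parent mass over a smooth background.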

The data is suitable for different types of physics studies and can be directly downloaded by anyone. “It is intended to be used by professional scientists and its interpretation needs some knowledge of particle physics, but everybody is invited to give it a try,” continues Neubert. “It would be great if the data inspires new research directions and is used by researchers in other fields, such as data science and artificial intelligence. We are eager to hear from users of the data what they find.”

Further data releases from the LHCb collaboration are planned in the future.

More information: CERN open data portal

Provided by CERN 

Antihelium nuclei as messengers from the depths of the galaxy

Illustration of antihelium annihilation in the ALICE detector at CERN as well as in the universe. Credit: ORIGINS Cluster/S. Kwauka

How are galaxies born, and what holds them together? Astronomers assume that dark matter plays an essential role. However, as yet it has not been possible to prove directly that dark matter exists. A research team including Technical University of Munich (TUM) scientists has now measured for the first time the survival rate of antihelium nuclei from the depths of the galaxy—a necessary prerequisite for the indirect search for dark matter.

Many things point to the existence of dark matter. The way in which galaxies move in galactic clusters, or how fast stars circle the center of a galaxy results in calculations which indicate that there must be far more mass present than what we can see. Approximately 85 percent of our Milky Way for example consists of a substance which is not visible and which can only be detected based on its gravitational effects. As of today it has still not been possible to directly prove the existence of this material.

Several theoretical models of dark matter predict that it could be composed of particles that interact weakly with one another. When such particles annihilate, they could produce antihelium-3 nuclei, which consist of two antiprotons and one antineutron. These nuclei are also generated in high-energy collisions between cosmic radiation and common matter such as hydrogen and helium—however, with energies different from those expected from the interaction of dark matter particles.

In both processes, the antiparticles originate in the depths of the galaxy, several tens of thousands of lightyears away from us. After their creation, a part of them makes its way in our direction. How many of these particles survive this journey unscathed and reach the vicinity of the Earth as messengers of their formation process determines the transparency of the Milky Way for antihelium nuclei.

Until now, scientists have only been able to roughly estimate this value. However, an improved determination of this transparency, which sets the number and energies of antinuclei that reach us, will be important for interpreting future antihelium measurements.

LHC particle accelerator as antimatter factory

Researchers from the ALICE collaboration have now carried out measurements that have enabled them to determine the transparency more precisely for the first time. ALICE stands for A Large Ion Collider Experiment and is one of the largest experiments in the world to explore physics on the smallest length scales. ALICE is part of the Large Hadron Collider (LHC) at CERN.

The LHC can generate large amounts of light antinuclei such as antihelium. To do so, protons and lead atoms are each put on a collision course. The collisions produce particle showers which are then recorded by the detector of the ALICE experiment. Thanks to several subsystems of the detector, the researchers can then detect the antihelium-3 nuclei that have formed and follow their trails in the detector material.

This makes it possible to quantify the probability that an antihelium-3 nucleus will interact with the detector material and disappear. Scientists from TUM and the Excellence Cluster ORIGINS have contributed significantly to the analysis of the experimental data.

Galaxy transparent for antinuclei

Using simulations, the researchers were able to transfer the findings from the ALICE experiment to the entire galaxy. The result: About half of the antihelium-3 nuclei which were expected to be generated in the interaction of dark matter particles would reach the vicinity of the Earth. Our Milky Way is thus 50 percent permeable for these antinuclei.

For antinuclei generated in collisions between cosmic radiation and the interstellar medium, the resulting transparency varies from 25 to 90 percent with increasing antihelium-3 momentum. However, these antinuclei can be distinguished from those generated from dark matter based on their higher energy.
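The idea of transparency can be illustrated with the standard exponential absorption law used in cosmic-ray physics: the survival probability falls as exp(−X·σ/m_p), where X is the column density ("grammage") of interstellar matter traversed during the antinucleus's diffusive journey and σ is its absorption cross section. The numbers below are hypothetical placeholders chosen for illustration, not the measured ALICE values:

```python
import math

PROTON_MASS_G = 1.67e-24  # grams

def transparency(grammage_g_cm2, sigma_cm2):
    """Survival probability of an antinucleus traversing a column density
    X of interstellar hydrogen: P = exp(-X * sigma / m_p)."""
    return math.exp(-grammage_g_cm2 * sigma_cm2 / PROTON_MASS_G)

# Cosmic rays accumulate a grammage of order 10 g/cm^2 while diffusing
# through the galaxy; the absorption cross sections below are hypothetical.
for sigma_mb in (50.0, 100.0, 200.0):
    sigma = sigma_mb * 1e-27  # 1 millibarn = 1e-27 cm^2
    print(f"sigma = {sigma_mb:.0f} mb:  transparency = {transparency(10.0, sigma):.2f}")
```

This makes clear why pinning down the absorption cross section in the laboratory, as ALICE did, translates directly into a sharper transparency estimate for the galaxy.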

This means that antihelium nuclei can not only travel long distances in the Milky Way, but also serve as important informants in future experiments: Depending on how many antinuclei arrive at the Earth and with which energies, the origin of these well-traveled messengers can be interpreted as cosmic rays or dark matter thanks to the new calculations.

Reference for future antinuclei measurements in space

“This is an excellent example of an interdisciplinary analysis that illustrates how measurements at particle accelerators can be directly linked with the study of cosmic rays in space,” says ORIGINS scientist Prof. Laura Fabbietti of the TUM School of Natural Sciences.

The results from the ALICE experiment at the LHC are of great importance for the search for antimatter in space with the AMS-02 module (Alpha Magnetic Spectrometer) on the International Space Station (ISS). Starting in 2025, the GAPS balloon experiment over the Antarctic will also examine incoming cosmic rays for antihelium-3.

The work is published in the journal Nature Physics.

More information: The ALICE Collaboration, Measurement of anti-3He nuclei absorption in matter and impact on their propagation in the Galaxy, Nature Physics (2022). DOI: 10.1038/s41567-022-01804-8

Journal information: Nature Physics 

Provided by Technical University Munich 

Ultrafast writing with light

Credit: Youri van Hees

Due to the ever-increasing growth of our data consumption, researchers are looking for faster, more efficient, and more energy-conscious data storage techniques. TU/e researcher Youri van Hees uses ultrashort light pulses that enable him to write information. This way, he combines the advantages of both light and magnetic storage.

His thesis cover looks like an old-fashioned British tabloid newspaper, complete with a large headline for a playful article in the Spintronic Chronicle about femtomagnetism, photonics and spin transport that reads “Quick as a Flick.”

A smart way to create some enthusiasm among a wider audience for complicated matters of technology? “More of a nice extra,” says Youri van Hees, who defended his thesis at the department of Applied Physics on December 7th. Apart from being a researcher down to the last nanometer, he’s also a passionate music lover and longtime fan of progressive rock band Jethro Tull.

“My favorite album, Thick as a Brick, was released exactly fifty years ago. The cover of my thesis is a reference to their album, which came wrapped in a satirical edition of a non-existent local newspaper. Nevertheless, it was quite instructive to write about my research in a way that was appealing to non-physicists. Only then do you realize how inextricably linked you’ve become to certain technology.”

Perhaps it’s the cover of his thesis that will draw the most attention, but its content is equally interesting. Van Hees spent the past four years working on so-called femtosecond lasers, small mirrors, and thin magnetic layers. And successfully so, because his research has brought the technology of writing information on a magnetic medium with the use of light one step closer. This is important for data centers because they would like to start using light as the most energy-efficient means of information transmission.

Van Hees stated, “People have been using magnets to store data for a long time now. This is done in the form of bits—the familiar zeros and ones, which are like tiny magnetic domains with a north and a south pole. To write data, those poles need to switch. We now make those poles shift by creating a magnetic field. Your laptop’s hard disk, for example, contains a small coil with which you can write small magnetic domains. Until now, we always needed an electronic intermediate step for that, because that coil needs to be actuated. That process costs extra time and energy.”

Nano-sandwich

When he shot an ultrashort laser pulse—“even shorter than one billionth of a second”—at magnetic material, Van Hees didn’t just notice something happening locally; he also realized that the pulse was able to move electrons, which carry magnetic information. He lined up a few short light pulses like rail wagons and used this setup to influence magnetic materials with various small mirrors.

“These ultrafast light pulses allowed us to shift the north and south poles of the magnetic domains, which made it possible to skip the electronic intermediate step. It’s also possible now to know in advance with certainty whether you’ll write a ‘0’ or a ‘1,’ and all without having to know the bit’s initial state. That makes it even more efficient.

“We also investigated which magnetic materials we need to use to make the bits stable with this new light method. We can’t use the standard sandwich formula of layers of cobalt and gadolinium for stable data storage. However, it turns out that an extra atomic layer of the ferromagnetic metal terbium works really well. We are still looking for the right balance because even though we have stable bits now, the magnetization switching is not as efficient as we would like it to be.”

Broken mirror

Does all this mean that we can expect a photonic data storage breakthrough any time soon? Van Hees says that the technology of writing data with light at the laboratory level is becoming mainstream, and that the chip industry is following this kind of research closely. But, he adds with a smile: “The size of our current laser pulse generator is 60x30x30 cm. It will take some time before it can fit in your pocket.”

There’s a short message on his news cover’s back page, on the right beneath the non-collinear puzzle, which illustrates the fact that miniaturization can sometimes be a rough business. A true story, Van Hees says. “We had ordered a special mirror for our setup, but it was too large and didn’t fit. Physics can be frustrating sometimes, so what do you come up with? Together with a colleague, I went to work on the mirror with a screwdriver. We managed to make it smaller all right, but we couldn’t exactly use it any more either.”

More information: Quick as a flick: All-optical control over ultrafast magnetization writing and spin transport. research.tue.nl/en/publication … rafast-magnetization

Provided by Eindhoven University of Technology 

Cupric oxide exhibiting both magnetic and dielectric properties at room temperature

When a cupric oxide compound is subjected to high pressure, its Cu–O–Cu bond angle widens, strengthening the magnetic interaction between the ions. Using this phenomenon, this research team experimentally confirmed that the compound is able to exhibit its multiferroic state at room temperature. Credit: National Institute for Materials Science

The National Institute for Materials Science (NIMS), the Rutherford Appleton Laboratory in the U.K. and the University of Oxford in the U.K. have experimentally confirmed that a cupric oxide exhibits multiferroic state (i.e., both magnetic and ferroelectric properties) at room temperature under high pressure.

The theoretical model constructed in this research is expected to facilitate the development of next-generation memory devices and optical modulators.

Multiferroic materials are potentially applicable to the development of next-generation memory devices and energy-efficient optical modulators. However, because most of these materials are functional only at temperatures below 100 K, scientists had worked for years to make them exhibit multiferroic properties at room temperature—a requirement for devices that need to operate at ambient temperatures.

This research team focused on cupric oxide—a multiferroic material—because when it is subjected to high pressure, the copper and oxide ions constituting it change their positions relative to each other, significantly increasing the magnetic interactions between them. Due to this phenomenon, it had been theoretically suggested to be able to exhibit multiferroic properties at room temperature. However, this had not been experimentally confirmed due to the inability to directly measure atomic spin (i.e., atomic-level magnetism) under high pressure.

The research team developed a high-pressure generator which also enables the measurement of atomic spin under high pressure. Using this apparatus, the team confirmed through neutron diffraction experiments that cupric oxide is able to exhibit multiferroic state at room temperature under high pressure.

In addition, NIMS developed a new calculation method and used it to build a theoretical model which is expected to facilitate the development of room-temperature multiferroic materials. This calculation method was designed to operate effectively without requiring a large number of predetermined assumptions related to the strength of the magnetic interactions taking place between specific copper ions under high pressure.

The cupric oxide compound is able to exhibit its room-temperature multiferroic state only when subjected to a high pressure of 18.5 GPa (185,000 atm). Thin films composed of precisely distorted crystals grown in accordance with the theoretical model may potentially be able to exhibit such properties at ambient atmospheric pressure.

This research was published in the online version of Physical Review Letters on November 15, 2022.

More information: Noriki Terada et al, Room-Temperature Type-II Multiferroic Phase Induced by Pressure in Cupric Oxide, Physical Review Letters (2022). DOI: 10.1103/PhysRevLett.129.217601

Journal information: Physical Review Letters 

Provided by National Institute for Materials Science 

Zebra-striped structures to twist neutrons and extend researchers’ vision

Credit: University of Waterloo

What are these microscopic zebra-striped squares, and why did scientists painstakingly arrange more than 6 million of them on a silicon plate just half a centimeter wide?

The squares are a type of diffraction grating, a structure that causes waves passing through it to bend (similar to ocean waves hitting a breakwater). That’s useful to neutron scientists, who want to make beams of neutrons more effective at exploring the interiors of objects.
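Neutrons diffract because, quantum mechanically, they behave as waves with a de Broglie wavelength of h/(m·v), and a grating of period d bends a first-order beam by sin(theta) = lambda/d. The sketch below applies these two textbook relations; the 1-micrometer grating period is a hypothetical value for illustration, not the spec of the NCNR device:

```python
import math

H = 6.626e-34          # Planck constant, J*s
M_NEUTRON = 1.675e-27  # neutron mass, kg

def de_broglie_wavelength(speed_m_s):
    """de Broglie wavelength of a neutron: lambda = h / (m * v)."""
    return H / (M_NEUTRON * speed_m_s)

def first_order_angle(wavelength_m, period_m):
    """First-order diffraction angle of a grating: sin(theta) = lambda / d."""
    return math.degrees(math.asin(wavelength_m / period_m))

# Thermal neutrons move at roughly 2200 m/s; the grating period below
# is a made-up example value.
lam = de_broglie_wavelength(2200.0)
print(f"wavelength ~ {lam * 1e9:.2f} nm")
print(f"first-order angle ~ {first_order_angle(lam, 1e-6):.4f} degrees")
```

The sub-nanometer wavelength is why such fine, precisely patterned structures are needed to manipulate neutron beams at all.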

Neutrons can penetrate solid objects and reveal internal details that X-rays cannot. While X-rays are absorbed by heavier elements such as the calcium in our bones, neutrons are useful for examining materials that contain the light element hydrogen, which doesn’t stop X-rays but reflects neutrons.

However, scanning beams disperse quickly after emerging from the beam’s aperture. That limits what scientists can see. Researchers at the NIST Center for Neutron Research (NCNR) have been working on a solution, developing an idea that guest scientists from the University of Waterloo recently built.

Here’s how it works: A beam of neutrons passes through the small silicon plate. The neutrons fly through the square gratings, twisting between the zebra-striped structures. Their movements cause twisting waves to form on either side of the neutrons, and those waves extend the researchers’ vision farther from the source than usual.

The groundbreaking accomplishment provides a new avenue for researchers to study next-generation quantum materials.

Provided by National Institute of Standards and Technology 

Research explains basics of aerosol formation at the vocal folds

Design and setup of the experiment. Credit: TU Bergakademie Freiberg

Very small exhaled droplets, so-called aerosol particles, play an important role in the airborne transmission of pathogens such as the coronavirus. Researchers in the field of fluid mechanics used a model to investigate how exactly the small droplets are formed in the larynx when speaking or singing. The team now reports its results in the current issue of Physics of Fluids. The findings can now help to develop targeted measures to stop chains of infection.

“Every person spreads not only gases but also aerosolized particles with the exhaled air. The connection between an increased risk of infection and coughing, singing or speaking loudly suggests that particles are emitted more frequently during these activities,” says Prof. Rüdiger Schwarze, an expert in fluid mechanics at TU Bergakademie Freiberg (Germany).

The team has now investigated for the first time how the particles are created in the larynx using a model of the human vocal folds. “For protection, the vocal folds are covered with a thin, gel-like layer of liquid called mucus. When speaking, they are adducted by the laryngeal muscles and induced to oscillate by the exhaled airflow. Depending on oscillation frequency and airflow, different sounds are produced,” explains first author Lisa Fritzsche, who developed the model used for the experiments. The model is made of Perspex; the artificial vocal folds are made of silicone.

To obtain realistic properties of the silicone vocal folds, they were surface-modified at the Freiberg Research Institute for Leather and Plastic Sheeting (FILK Freiberg Institute). The model shows how the mucus forms a liquid film between the oscillating vocal folds. Then the exhaled air inflates the film, creating a bubble. When this bubble bursts, a large number of small droplets are formed, which are transported into the mouth with the airflow and then exhaled as an aerosol.

Detailed measurements using a model experiment

Employing high-resolution cameras and a special optical setup, the researchers were able to measure how different oscillations of the vocal folds affect the size distribution of the aerosol particles.

“If high-pitched tones are produced by fast oscillations, mainly smaller aerosol particles, about the size of a grain of dust, are emitted. Exhaling more air with louder tones, however, increases the proportion of larger aerosol particles that are about the size of a grain of sand,” Lisa Fritzsche summarizes.

Basis for targeted measures to break infection chains

The results show which mechanisms are responsible for the formation of the aerosols at the vocal folds and how speech volume and pitch influence the droplet sizes. “What we now need to investigate further are the properties of the mucus and how these relate to the size of the exhaled aerosol particles,” says Prof. Rüdiger Schwarze.

If, in the future, the mucus of an infected person could, for example, be specifically influenced by drugs, the risk of infection for contact persons could be reduced. In further studies, the team also wants to investigate the further path of the aerosols in the pharynx in more detail.

More information: L. Fritzsche et al, Toward unraveling the mechanisms of aerosol generation during phonation, Physics of Fluids (2022). DOI: 10.1063/5.0124944

Journal information: Physics of Fluids 

Provided by Technische Universität Bergakademie Freiberg

Telescope-inspired microscope sees molecules in 6D

Concept of the raMVR SMOLM. Credit: Nature Photonics (2022). DOI: 10.1038/s41566-022-01116-6

A new technology, inspired in part by the design of the James Webb Space Telescope (JWST), uses mirror segments to sort and collect light on the microscopic scale, and capture images of molecules with a new level of resolution: position and orientation, each in three dimensions.

Details of this new system, developed by Oumeng Zhang, a recent Ph.D. graduate from the lab of Matthew Lew, an associate professor of electrical and systems engineering at the McKelvey School of Engineering at Washington University in St. Louis, were published Dec. 5 in the journal Nature Photonics.

Like the space telescope, the radially and azimuthally polarized multi-view reflector (raMVR) microscope depends on gathering as much light as possible. But instead of using that light to see distant objects, it uses the light to discern different features of tiny fluorescent molecules attached to proteins and cell membranes.

“The setup is partially inspired by telescopes,” Zhang said. “It’s a very similar setup. Instead of the familiar honeycomb shape of the JWST, we use pyramid-shaped mirrors.”

Currently, microscopes in this domain face challenges creating biological images. For one thing, the faint light given off by fluorescent molecules is sensitive to the slightest aberrations, including the murky environment inside a cell. Because of this, precise imaging relies heavily on computer processing to infer orientation after an image has been captured.

Credit: Washington University in St. Louis

“Think of creating a color picture when all you have are gray-scale camera sensors,” Lew said. “You could try to recreate the color using a computational tool, or you can directly measure it using a color sensor, which uses various absorbing color filters on top of different pixels to detect colors.”

In a similar way, standard microscopes simply do not detect how molecules are oriented. The raMVR microscope uses polarization optics called waveplates along with its pyramid-shaped mirrors to separate light into eight channels, each of which represents a different piece of the molecule’s position and orientation.

Notably, the raMVR microscope is not a small technology. But smaller isn’t always better.

“At the cutting edge of engineering physics, we often have to make tradeoffs to make our instruments compact,” Lew said. “Here, we decided to take a different tack: How could we use every precious bit of light to make the most precise measurement possible? It’s absolutely fun to think differently about the architecture of a microscope, and here, we think the newfound 6D imaging performance will enable new scientific discoveries in the near future.”

More information: Oumeng Zhang et al, Six-dimensional single-molecule imaging with isotropic resolution using a multi-view reflector microscope, Nature Photonics (2022). DOI: 10.1038/s41566-022-01116-6

Journal information: Nature Photonics 

Provided by Washington University in St. Louis 

A novel, space-time coding antenna promotes 6G and secure wireless communications

The radiated beam of the STC metasurface antenna can be used for real-time imaging and treated as a type of radar to scan the environment and feedback data. Credit: City University of Hong Kong

A research team co-led by a scientist at City University of Hong Kong (CityU) has developed a novel antenna that allows manipulation of the direction, frequency and amplitude of the radiated beam, and is expected to play an important role in the integration of sensing and communications (ISAC) for 6th-generation (6G) wireless communications.

The structure and characteristics of traditional antennas cannot be changed once fabricated. However, the direction, frequency, and amplitude of the electromagnetic waves from this new-generation antenna, which is called a “sideband-free space-time-coding (STC) metasurface antenna,” can be changed through space-time coding (i.e., software control), enabling great user flexibility.

The key to this innovative feature is that the response of the metasurface (an artificial sheet material of sub-wavelength thickness, composed of many sub-wavelength meta-atoms) can be changed by switching the meta-atoms on its surface between radiating and non-radiating states, like flipping switches on and off, by controlling the electric current.

This allows the STC metasurface antenna to realize complicated wave manipulation in the space and frequency domains through software control, and to create a desired radiation pattern and a highly directed beam.
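The principle can be sketched numerically. In standard time-modulated-array theory, the Fourier coefficients of each element's on-off switching sequence act as complex excitations at each harmonic of the modulation frequency, so a progressive delay of the switching window across the array steers a harmonic beam. The toy model below illustrates that generic idea only; it is not the CityU design (which suppresses these sidebands rather than radiating through them), and the array size, spacing, and code are assumed values.

```python
import numpy as np

N = 8    # hypothetical number of radiating elements
T = 8    # time slots per modulation period
d = 0.5  # element spacing in wavelengths (assumed)

# Binary space-time coding matrix: each element is "on" for half the
# period, with its on-window delayed by one extra slot per element.
G = np.zeros((N, T))
for n in range(N):
    G[n, :] = np.roll([1, 1, 1, 1, 0, 0, 0, 0], n)

# The Fourier coefficient of each element's switching sequence at
# harmonic k gives its complex excitation at that sideband frequency.
k = 1  # first harmonic
a = G @ np.exp(-2j * np.pi * k * np.arange(T) / T) / T

# Array factor of those excitations over observation angles: the
# progressive time delay appears as a progressive phase, i.e. a tilt.
theta = np.linspace(-90, 90, 721)
phase = 2j * np.pi * d * np.outer(np.arange(N), np.sin(np.radians(theta)))
AF = np.abs(a @ np.exp(phase))

peak = theta[np.argmax(AF)]
print(f"first-harmonic beam peak near {peak:.2f} degrees")
```

Changing only the coding matrix `G` (the software input) moves the beam, which is the reconfigurability the article describes: with this delay-one-slot code the first harmonic peaks where d·sin(θ) = k/T, about 14.5 degrees off broadside.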

Professor Chan Chi-hou, Acting Provost and Chair Professor of Electronic Engineering in the Department of Electrical Engineering at CityU, who led the research, highlighted that the antenna relies on the successful combination of two research advances, namely amplitude-modulated (AM) leaky-wave antennas and space-time coding techniques.

Dr. Wu Gengbo, postdoctoral fellow in the State Key Laboratory of Terahertz and Millimeter Waves (SKLTMW) at CityU, first proposed the new concept of AM leaky-wave antennas in 2020 in his Ph.D. studies at CityU. “The concept provides an analytical approach to synthesize antennas with the desired radiation patterns for different specific uses by simply changing the antennas’ shape and structure,” explained Dr. Wu.

But as with other antennas, once the AM leaky-wave antenna is fabricated, its radiation characteristics are fixed. At about that time, Dr. Dai Junyan joined Professor Chan's group at CityU; he came from a research group led by Academician Cui Tiejun and Professor Cheng Qiang at Southeast University in Nanjing, China, which pioneered STC technologies.

“Dr. Dai’s expertise in space-time coding and digital metasurfaces to dynamically reconfigure antenna performance added a new, important dimension to the antenna research at the SKLTMW,” said Professor Chan, who is also Director of the SKLTMW at CityU.

A significant feature of the new-generation antenna is that the direction, frequency, and amplitude of the radiated beam from the antenna can be changed through space-time coding software control. Credit: City University of Hong Kong

However, time modulation of electromagnetic waves on metasurfaces usually generates unwanted harmonic frequencies, called sidebands. These sidebands carry off part of the radiated electromagnetic wave energy and interfere with the antenna's useful communication channels, leading to "spectrum pollution."

But Professor Chan and his team proposed a novel design that makes use of a waveguide (a structure that transmits electromagnetic waves by successive reflections from its inner walls) and successfully suppressed the undesired harmonics, achieving a high-directivity beam and enabling secure communication.

“With the AM leaky-wave antenna and space-time coding technologies, we achieve the designated radiation characteristics by controlling the on-off sequences and duration of the ‘switches’ on the antenna through software,” said Professor Chan.

“A high-directivity beam can be generated with the new antenna, allowing a wide range of radiation performance without having to redesign the antenna, except for using different STC inputs,” added Dr. Wu.

The energy from the radiated beam of the STC metasurface antenna can be focused to a point with fixed or varying focal lengths, which can be used for real-time imaging; the antenna can also be treated as a type of radar that scans the environment and feeds back data. "The invention plays an important role in the ISAC for 6G wireless communications," Professor Chan explained.

“For example, the radiated beam can scan a person and create an image of the person, allowing mobile phone users to talk to each other with 3D hologram imaging. It also performs better against eavesdropping than the conventional transmitter architecture.”

The findings were published in the journal Nature Electronics. Dr. Wu and Dr. Dai are the co-first authors of the paper, and Dr. Dai, Professor Cheng, Academician Cui, and Professor Chan are the corresponding authors.

“Without the collaboration and complementary expertise of the two research teams at CityU and Southeast University, we could not have achieved these research results,” Professor Chan continued. “We hope that the new-generation antenna technology will become more mature in the future and that it can be applied to smaller integrated circuits at lower cost and in a wider range of applications.”

More information: Geng-Bo Wu et al, Sideband-free space–time-coding metasurface antennas, Nature Electronics (2022). DOI: 10.1038/s41928-022-00857-0

Journal information: Nature Electronics 

Provided by City University of Hong Kong 

Quantum processor reveals bound states of photons hold strong even in the midst of chaos

A ring of superconducting qubits can host “bound states” of microwave photons, where the photons tend to clump on neighboring qubit sites. Credit: Google Quantum AI

Researchers have used a quantum processor to make microwave photons uncharacteristically sticky. They coaxed them to clump together into bound states, then found that these photon clusters survived in a regime where they were expected to dissolve into their usual, solitary states. The discovery was first made on a quantum processor, marking the growing role that these platforms are playing in studying quantum dynamics.

Photons—quantum packets of electromagnetic radiation like light or microwaves—typically don’t interact with one another. Two crossed flashlight beams, for example, pass through one another undisturbed. But in an array of superconducting qubits, microwave photons can be made to interact.

In “Formation of robust bound states of interacting photons,” published today in Nature, researchers at Google Quantum AI describe how they engineered this unusual situation. They studied a ring of 24 superconducting qubits that could host microwave photons. By applying quantum gates to pairs of neighboring qubits, the researchers let photons travel around the ring, hopping between neighboring sites and interacting with nearby photons.

The interactions between the photons affected their so-called “phase.” The phase keeps track of the oscillation of the photon’s wavefunction. When the photons are non-interacting, their phase accumulation is rather uninteresting. Like a well-rehearsed choir, they’re all in sync with one another. In this case, a photon that was initially next to another photon can hop away from its neighbor without getting out of sync.

Just as every person in the choir contributes to the song, every possible path the photon can take contributes to the photon’s overall wavefunction. A group of photons initially clustered on neighboring sites will evolve into a superposition of all possible paths each photon might have taken.

When photons interact with their neighbors, this is no longer the case. If one photon hops away from its neighbor, its rate of phase accumulation changes, and it falls out of sync with its neighbors. The many paths in which the photons split apart interfere destructively and cancel out. It would be like each choir member singing at their own pace: the song itself gets washed out, becoming impossible to discern through the din of the individual singers.

Among all the possible paths, the only scenario that survives is the one in which all the photons remain clustered together in a bound state. This is how interactions can lead to the formation of a bound state: by suppressing every alternative in which the photons are not bound together.
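This interference argument can be made concrete in a minimal model. The sketch below is a generic two-boson Bose-Hubbard ring, not the experiment's 24-qubit circuit or its gate sequence, and all parameter values are toy assumptions. With a strong on-site interaction U, the paired state's energy is pushed far outside the band of two independent photons, so the "split apart" paths dephase away and the pair stays together, while non-interacting photons simply disperse.

```python
import numpy as np
from itertools import combinations_with_replacement

N, J = 8, 1.0  # toy ring of 8 sites with hopping J (assumed values)

# Two-boson basis: site pairs (i, j) with i <= j.
basis = list(combinations_with_replacement(range(N), 2))
index = {s: k for k, s in enumerate(basis)}
dim = len(basis)

def hamiltonian(U):
    """Two-boson Bose-Hubbard ring: hopping -J, on-site interaction U."""
    H = np.zeros((dim, dim))
    for k, (i, j) in enumerate(basis):
        if i == j:
            H[k, k] += U  # (U/2) n(n-1) = U for a doubly occupied site
        # Hop either boson to a neighboring site on the ring.
        moves = [(i, j)] if i == j else [(i, j), (j, i)]
        for site, other in moves:
            for nb in ((site + 1) % N, (site - 1) % N):
                new = tuple(sorted((nb, other)))
                amp = -J
                # Bosonic sqrt(2) when leaving or entering a doubly
                # occupied site.
                if i == j or new[0] == new[1]:
                    amp *= np.sqrt(2)
                H[index[new], k] += amp
    return H

def pair_prob(U, t=3.0):
    """Probability the two photons still share a site at time t,
    starting from both photons on site 0."""
    vals, vecs = np.linalg.eigh(hamiltonian(U))
    psi0 = np.zeros(dim)
    psi0[index[(0, 0)]] = 1.0
    psit = vecs @ (np.exp(-1j * vals * t) * (vecs.conj().T @ psi0))
    same = [index[(i, i)] for i in range(N)]
    return float(np.sum(np.abs(psit[same]) ** 2))

print("non-interacting pair probability:", round(pair_prob(U=0.0), 3))
print("strongly interacting (U=16J):", round(pair_prob(U=16.0), 3))
```

With U = 0 the photons spread out independently and the pair probability collapses; with U = 16J it stays high, the toy-model analog of the bound states surviving in the article.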

To rigorously show that the bound states indeed behaved just as particles did, with well-defined quantities such as energy and momentum, researchers developed new techniques to measure how the energy of the particles changed with momentum. By analyzing how the correlations between photons varied with time and space, they were able to reconstruct the so-called “energy-momentum dispersion relation,” confirming the particle-like nature of the bound states.

The existence of the bound states in itself was not new—in a regime called the “integrable regime,” where the dynamics is much less complicated, the bound states were already predicted and observed ten years ago.

But beyond integrability, chaos reigns. Before this experiment, it was reasonably assumed that the bound states would fall apart in the midst of chaos. To test this, the researchers pushed beyond integrability by adjusting the simple ring geometry to a more complex, gear-shaped network of connected qubits. They were surprised to find that bound states persisted well into the chaotic regime.

The team at Google Quantum AI is still unsure why these bound states are so resilient, but it could have something to do with a phenomenon called "prethermalization," in which mismatched energy scales prevent a system from reaching thermal equilibrium as quickly as it otherwise would.

Researchers hope investigating this system will lead to new insights into many-body quantum dynamics and inspire more fundamental physics discoveries using quantum processors.

More information: Alexis Morvan et al, Formation of robust bound states of interacting microwave photons, Nature (2022). DOI: 10.1038/s41586-022-05348-y

Journal information: Nature 

Provided by Google Quantum AI