Oldest planetary debris in our galaxy found in new study

Artist’s impression of the old white dwarfs WDJ2147-4035 and WDJ1922+0233 surrounded by orbiting planetary debris, which will accrete onto the stars and pollute their atmospheres. WDJ2147-4035 is extremely red and dim, while WDJ1922+0233 is unusually blue. Credit: University of Warwick/Dr Mark Garlick

Astronomers led by the University of Warwick have identified the oldest star in our galaxy that is accreting debris from orbiting planetesimals, making it one of the oldest rocky and icy planetary systems discovered in the Milky Way.

Their findings are published today (Nov. 5) in the Monthly Notices of the Royal Astronomical Society and conclude that a faint white dwarf located 90 light years from Earth, as well as the remains of its orbiting planetary system, are over 10 billion years old.

The fate of most stars, including those like our sun, is to become a white dwarf. A white dwarf is a star that has burnt up all of its fuel and shed its outer layers and is now undergoing a process of shrinking and cooling. During this process, any orbiting planets will be disrupted and in some cases destroyed, with their debris left to accrete onto the surface of the white dwarf.

For this study, the team of astronomers, led by the University of Warwick, modeled two unusual white dwarfs detected by the European Space Agency's Gaia space observatory. Both stars are polluted by planetary debris; one was found to be unusually blue, while the other is the faintest and reddest found to date in the local galactic neighborhood. The team subjected both to further analysis.

Using spectroscopic and photometric data from Gaia, the Dark Energy Survey and the X-shooter instrument at the European Southern Observatory, the astronomers worked out how long each star has been cooling and found that the “red” star WDJ2147-4035 is around 10.7 billion years old, of which 10.2 billion years has been spent cooling as a white dwarf.

Spectroscopy involves analyzing the star's light at different wavelengths. When elements in the star's atmosphere absorb light at particular wavelengths, this reveals which elements are present and in what amounts. By analyzing the spectrum of WDJ2147-4035, the team found the metals sodium, lithium and potassium, and tentatively detected carbon, accreting onto the star, making this the oldest metal-polluted white dwarf discovered so far.

The second “blue” star WDJ1922+0233 is only slightly younger than WDJ2147-4035 and was polluted by planetary debris of a similar composition to the Earth’s continental crust. The science team concluded that the blue color of WDJ1922+0233, despite its cool surface temperature, is caused by its unusual mixed helium-hydrogen atmosphere.

The debris found in the otherwise nearly pure-helium, high-gravity atmosphere of the red star WDJ2147-4035 comes from an old planetary system that survived the star's evolution into a white dwarf, leading the astronomers to conclude that this is the oldest planetary system around a white dwarf discovered in the Milky Way.

Lead author Abbigail Elms, a Ph.D. student in the University of Warwick Department of Physics, said, “These metal-polluted stars show that Earth isn’t unique; there are other planetary systems out there with planetary bodies similar to the Earth. Ninety-seven percent of all stars will become white dwarfs, and they’re so ubiquitous throughout the universe that they are very important to understand, especially these extremely cool ones. Formed from the oldest stars in our galaxy, cool white dwarfs provide information on the formation and evolution of planetary systems around the oldest stars in the Milky Way.”

“We’re finding the oldest stellar remnants in the Milky Way that are polluted by once Earth-like planets. It’s amazing to think that this happened on the scale of 10 billion years, and that those planets died way before the Earth was even formed.”

Astronomers can also use the star’s spectra to determine how quickly those metals are sinking into the star’s core, which allows them to look back in time and determine how abundant each of those metals was in the original planetary body. By comparing those abundances to astronomical bodies and planetary material found in our own solar system, we can guess at what those planets would have been like before the star died and became a white dwarf—but in the case of WDJ2147-4035, that has proven challenging.
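The "looking back in time" step can be sketched with a toy calculation. In the standard accretion-diffusion picture, once accretion stops each metal's photospheric abundance decays exponentially on that element's diffusion (sinking) timescale, so an earlier abundance can be recovered by rewinding the exponential. The numbers below are purely illustrative, not values from the study.

```python
import math

def rewind_abundance(observed, t_since_accretion, tau_diffusion):
    """Rewind an exponentially decaying photospheric abundance.

    In the accretion-diffusion picture, after accretion stops a metal's
    abundance decays as n(t) = n0 * exp(-t / tau), where tau is that
    element's diffusion (sinking) timescale.  Inverting gives the
    abundance at the moment accretion stopped.
    """
    return observed * math.exp(t_since_accretion / tau_diffusion)

# Illustrative (made-up) numbers: an observed metal-to-helium number
# ratio and a 1 Myr sinking timescale, rewound by 2 Myr.
n_observed = 1.0e-9    # hypothetical observed abundance ratio
tau = 1.0e6            # hypothetical diffusion timescale, years
n_original = rewind_abundance(n_observed, 2.0e6, tau)
print(n_original)      # larger than observed by a factor of e^2
```

Because each element sinks at its own rate, applying this correction element by element is what lets astronomers reconstruct the composition of the original planetary body.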

Abbigail explains, “The red star WDJ2147-4035 is a mystery as the accreted planetary debris are very lithium and potassium rich and unlike anything known in our own solar system. This is a very interesting white dwarf as its ultra-cool surface temperature, the metals polluting it, its old age, and the fact that it is magnetic, makes it extremely rare.”

Professor Pier-Emmanuel Tremblay of the Department of Physics at the University of Warwick said, “When these old stars formed more than 10 billion years ago, the universe was less metal-rich than it is now, since metals are formed in evolved stars and gigantic stellar explosions. The two observed white dwarfs provide an exciting window into planetary formation in a metal-poor and gas-rich environment that was different from the conditions when the solar system was formed.”

More information: Abbigail Elms et al, Spectral analysis of ultra-cool white dwarfs polluted by planetary debris, Monthly Notices of the Royal Astronomical Society (2022). DOI: 10.1093/mnras/stac2908

Journal information: Monthly Notices of the Royal Astronomical Society 

Provided by University of Warwick 

Exploring the surface melting of colloidal glass

Surface melting of an attractive colloidal glass. Credit: Nature Communications (2022). DOI: 10.1038/s41467-022-34317-2

In 1842, the famous British researcher Michael Faraday made an amazing observation by chance: A thin layer of water forms on the surface of ice, even though it is well below zero degrees. The temperature is below the melting point of ice, yet the surface of the ice has melted. This liquid layer on ice crystals is also why snowballs stick together.

It was not until about 140 years later, in 1985, that this “surface melting” could be scientifically confirmed under controlled laboratory conditions. By now, surface melting has been demonstrated in a variety of crystalline materials and is scientifically well understood: Several degrees below the actual melting point, a liquid layer only a few nanometers thick forms on the surface of the otherwise solid material.

Because the surface properties of materials play a crucial role in their use as, for example, catalysts, sensors and battery electrodes, surface melting is not only of fundamental importance but also highly relevant to technical applications.

It must be emphasized that this process has absolutely nothing to do with the effect of, say, taking an ice cube out of the freezer and exposing it to ambient temperature. The reason why an ice cube melts on its surface first under such conditions is that the surface is significantly warmer than the ice cube’s interior.

Surface melting detected in glass

In crystals with periodically arranged atoms, the thin liquid layer on the surface is typically detected by scattering experiments, which are very sensitive to the presence of atomic order. Since liquids are not arranged in a regular pattern, such techniques can clearly resolve the appearance of a thin liquid film on top of the solid.
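Why scattering is so sensitive to order can be sketched numerically with the static structure factor: for a perfect lattice the scattered waves add in phase at the Bragg condition, producing a peak that grows with the number of particles, while disordered positions produce no peak. The toy 1D model below is an illustration of the general principle, not the experimental setup of the study.

```python
import numpy as np

def structure_factor(positions, q):
    """Static structure factor S(q) = |sum_j exp(i q x_j)|^2 / N."""
    phases = np.exp(1j * q * positions)
    return np.abs(phases.sum())**2 / len(positions)

rng = np.random.default_rng(0)
n, a = 1000, 1.0
lattice = np.arange(n) * a               # perfectly ordered "crystal"
glass = rng.uniform(0, n * a, size=n)    # disordered "glass/liquid"

q_bragg = 2 * np.pi / a                  # Bragg condition for spacing a
print(structure_factor(lattice, q_bragg))   # ~N: a sharp Bragg peak
print(structure_factor(glass, q_bragg))     # ~1: no peak
```

The contrast between N and order-1 is what makes a nanometer-thin disordered film visible on a crystal, and also why the same trick fails when the solid underneath is itself a glass.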

This approach, however, does not work for glasses (i.e. disordered, amorphous materials) because there is no difference in the atomic order between the solid and the liquid. Therefore, surface melting of glasses has remained rather unexplored with experiments.

To overcome the above-mentioned difficulties, Clemens Bechinger, physics professor at the University of Konstanz, and his colleague Li Tian used a trick: instead of studying an atomic glass, they produced a disordered material made of microscopic glass spheres known as colloids. These particles are about 10,000 times larger than atoms and can be observed directly under a microscope.

The researchers were able to demonstrate surface melting in such a colloidal glass by observing that particles near the surface move much faster than those in the solid below. At first glance, such behavior is not entirely unexpected, since the particle density at the surface is lower than in the underlying bulk material. Particles close to the surface therefore have more space to move past each other, which makes them faster.
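The standard way to quantify "how fast particles move" from microscope trajectories is the mean-squared displacement (MSD), which grows faster with lag time for more mobile particles. The sketch below applies this to two toy random walks, one ten times more diffusive than the other; it illustrates the analysis idea only and uses made-up data, not the study's measurements.

```python
import numpy as np

def msd(traj, max_lag):
    """Mean-squared displacement vs lag time for one 2D trajectory.

    traj: array of shape (T, 2), particle position per video frame.
    Returns msd[k-1] = <|r(t+k) - r(t)|^2>, averaged over start times t.
    """
    return np.array([np.mean(np.sum((traj[k:] - traj[:-k])**2, axis=1))
                     for k in range(1, max_lag + 1)])

# Toy trajectories: a "surface" particle diffusing 10x faster than a
# "bulk" particle (illustrative random walks, not the real data).
rng = np.random.default_rng(1)
surface = np.cumsum(rng.normal(0, 1.0, size=(10_000, 2)), axis=0)
bulk = np.cumsum(rng.normal(0, 1.0 / np.sqrt(10), size=(10_000, 2)), axis=0)

msd_surface = msd(surface, 200)
msd_bulk = msd(bulk, 200)
ratio = msd_surface[99] / msd_bulk[99]
print(ratio)   # roughly 10: the surface particle is far more mobile
```

Comparing MSD curves layer by layer, as a function of depth below the surface, is what reveals where the enhanced mobility ends.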

A surprising discovery

What surprised Clemens Bechinger and Li Tian, however, was the fact that even far below the surface, where the particle density has reached the bulk value, the particle mobility is still significantly higher than in the bulk material.

The microscope images show that this previously unknown layer is up to 30 particle diameters thick and continues from the surface into the deeper regions of the solid in a streak-like pattern. “This layer which reaches far into the material has interesting material properties since it combines liquid and solid features,” Bechinger explains.

As a consequence, the properties of thin, disordered films depend very much on their thickness. In fact, this property is already being exploited in their use as thin ionic conductors in batteries, which are found to have a significantly higher ionic conductivity compared to thick films. With the new insights gained from the experiments, however, this behavior can now be understood quantitatively and thus be optimized for technical applications.

The research was published in Nature Communications.

More information: Li Tian et al, Surface melting of a colloidal glass, Nature Communications (2022). DOI: 10.1038/s41467-022-34317-2

Provided by University of Konstanz 

Novel single-crystal production method opens up promising avenues for studies in solid-state physics

Monocrystals of Ce0.04ZrTe2 grown by heterogeneous nucleation on the surface of the polycrystalline pellet. Credit: Lucas Eduardo Corrêa/USP

Single crystals are materials in which the crystal lattice is continuous and unbroken to the edges of the sample, with no grain boundaries. The atoms occupy regular positions, which are repeated indefinitely in space. While polycrystals are made up of many crystal grains or crystallites of varying sizes and orientations, monocrystals consist of a single grain.

A large supply of high-quality monocrystals is of the utmost importance to the study of the intrinsic physical properties of materials. They can be synthesized by various techniques. The most widely used method of growing single crystals of intermetallic compounds is known as chemical vapor transport (CVT).

An alternative technique has been designed and successfully tried out by a team led by researchers at the University of São Paulo’s Lorena School of Engineering (EEL-USP) in Brazil. An article on the study is published in the Journal of Crystal Growth.

“Conventional CVT consists of a chemical reaction in which the compound reacts with the chemical agent to form a volatile complex. This complex moves to a different region of the experimental apparatus with a different temperature from that of the zone in which the first chemical reaction occurred and is eventually deposited in the form of a single crystal. A temperature gradient is required in order for the single crystal to grow as it creates the necessary thermodynamic potential. In the novel technique, which we call isothermal chemical vapor transport [ICVT], growth occurs without the need for a temperature gradient,” said Lucas Eduardo Corrêa, first author of the article.

The study was part of Corrêa’s Ph.D. research, supervised by Professor Antonio Jefferson da Silva Machado.

“In the method we developed, the chemical potential gradient is what drives the growth of the single crystal,” said Machado, last author of the article.

Graphical abstract. Credit: Journal of Crystal Growth (2022). DOI: 10.1016/j.jcrysgro.2022.126819

“In a closed environment, a pellet of polycrystalline material and a transport agent are placed in contact at a constant temperature high enough to produce a reaction and form gaseous complexes. It’s reasonable to consider that the transport agent initially reacts with the surface of the polycrystalline material, creating a chemical potential gradient between the interior of the grains and the interface with the gas phase. Owing to this gradient, thermodynamic equilibrium can’t be obtained between the gas and solid phases.”

“When the gas phase reaches saturation point—which is facilitated by the use of very small amounts of the transport agent—the chemical potential of the pellet is lower than that of the gas. At this point, inversion of the gas flux occurs, and the surface of the pellet serves as a point for single crystal nucleation.”

According to the researchers, the isothermal growth process has a number of advantages over conventional CVT. The first is that there is no need for a two-zone furnace since in isothermal growth the temperature is kept constant throughout the experimental apparatus. Generally speaking, growth can be promoted using a simple uniform furnace. Second, there is no need for chemical attack on the tube since the pellet itself is the nucleation point, simplifying the growth process.

“It’s important to note that the crystallographic quality of the crystals obtained is very high. No seeded crystals occur. In sum, the isothermal growth process is a simplified version of conventional CVT that can grow much larger crystals,” Corrêa said.

Although growth was obtained for such materials as ZrTe2, TiTe2 and HfTe2, which are almost two-dimensional, the researchers believe the method can be applied to other systems under the right thermodynamic conditions.

“The relevance of the materials in question lies in the gap between the tellurium atoms, so that other atoms or molecules can be intercalated into the material,” Machado said. “Indeed, the electronic structure of the compound ZrTe2 exhibits a non-trivial topology. We discovered that intercalation of nickel [Ni] into this gap leads to superconductor behavior with a critical temperature close to 4.0 K.”

Another instability observed in the material—and one that competes with superconductivity—is the existence of charge density waves (CDWs). In addition to potential quantum computing applications, these properties make such materials attractive for the study of the fundamentals of solid-state physics. Other intercalations were tested by the researchers and are being analyzed as part of Corrêa’s Ph.D. research.

More information: Lucas E. Correa et al, Growth of pure and intercalated ZrTe2, TiTe2 and HfTe2 dichalcogenide single crystals by isothermal chemical vapor transport, Journal of Crystal Growth (2022). DOI: 10.1016/j.jcrysgro.2022.126819

Provided by FAPESP 

A new approach for high-throughput quantitative phase microscopy

A hybrid bright/darkfield transport of intensity (HBDTI) approach for high-throughput quantitative phase microscopy significantly expands the space-bandwidth-product of a conventional microscope, extending the accessible sample spatial frequencies in the Fourier space well beyond the traditional coherent diffraction limit. Credit: Linpeng Lu, NJUST.

Cell organelles are involved in a variety of cellular life activities. Their dysfunction is closely related to the development and metastasis of cancer. Exploration of subcellular structures and their abnormal states facilitates insights into the mechanisms of pathologies, which may enable early diagnosis for more effective treatment.

The optical microscope, invented more than 400 years ago, has become an indispensable and ubiquitous instrument for the investigation of microscale objects in many areas of science and technology. In particular, fluorescence microscopy has achieved several leaps—from 2D wide-field, to 3D confocal, and then to super-resolution fluorescence microscopy, greatly promoting the development of modern life sciences.

Using conventional microscopes, researchers currently struggle to generate sufficient intrinsic contrast for unstained cells, due to their low absorption or weak scattering properties. Specific dyes or fluorescent labels can help with visualization, but long-term observation of live cells remains difficult to achieve.

Recently, quantitative phase imaging (QPI) has shown promise with its unique ability to quantify the phase delay of unlabeled specimens in a nondestructive way. Yet the throughput of an imaging platform is fundamentally limited by its optical system’s space-bandwidth product (SBP), and the SBP increase of a microscope is fundamentally confounded by the scale-dependent geometric aberrations of its optical elements. This results in a tradeoff between achievable image resolution and field of view (FOV).

Lead author Linpeng Lu, a PhD student in the SCILab, provides a vivid hand-painted animation as a helpful summary of the report. Credit: Lu et al., doi 10.1117/1.AP.4.5.056002.

An approach to achieving label-free, high-resolution, and large FOV microscopic imaging is needed to enable precise detection and quantitative analysis of subcellular features and events. To this end, researchers from Nanjing University of Science and Technology (NJUST) and the University of Hong Kong recently developed a label-free high-throughput microscopy method based on hybrid bright/darkfield illuminations.

As reported in Advanced Photonics, the “hybrid brightfield-darkfield transport of intensity” (HBDTI) approach for high-throughput quantitative phase microscopy significantly expands the accessible sample spatial frequencies in the Fourier space, extending the maximum achievable resolution by approximately fivefold over the coherent imaging diffraction limit.

Based on the principle of illumination multiplexing and synthetic aperture, they establish a forward imaging model of nonlinear brightfield and darkfield intensity transport. This model endows HBDTI with the ability to provide features beyond the coherent diffraction limit.

High-throughput computational microscopy imaging
QPI results of unlabeled HeLa cells. (a) Approximately 4000 HeLa cells on a ∼7.19  mm2 FOV. (b1) and (c1) Low-resolution brightfield (BF) in-focus intensity images of areas 1 and 2 in (a), respectively. (b2) and (c2) Low-resolution darkfield (DF) in-focus intensity images of (b1) and (c1), respectively. (b3) and (c3) Retrieval phase results of (b1) and (c1) using the FFT-based traditional transport of intensity equation (TIE) phase retrieval method, respectively. (b4) and (c4) Retrieval phase results of (b1) and (c1) utilizing the novel HBDTI method, respectively. Credit: Lu et al., doi 10.1117/1.AP.4.5.056002.

Using a commercial microscope with a 4x, 0.16NA objective lens, the team demonstrated HBDTI high-throughput imaging, attaining 488-nm half-width imaging resolution within an FOV of approximately 7.19 mm2, yielding a 25× increase in SBP over the case of coherent illumination.
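The quoted 25× gain follows from how the space-bandwidth product scales: at a fixed field of view, SBP grows as the square of the resolution improvement, so the roughly fivefold resolution gain yields 5² = 25. The sketch below uses the article's numbers with a common back-of-envelope SBP definition (Nyquist sampling at half the resolution), which may differ in detail from the paper's exact formula.

```python
# Rough space-bandwidth-product (SBP) estimate: the number of
# resolvable pixels when the field of view is sampled at the Nyquist
# rate (half the resolution).  Numbers are from the article; the SBP
# definition is a common approximation, not the paper's exact one.
fov_mm2 = 7.19                      # field of view, mm^2
resolution_um = 0.488               # achieved half-width resolution, um

pixel_um = resolution_um / 2        # Nyquist sampling interval
sbp = fov_mm2 * 1e6 / pixel_um**2   # resolvable pixels across the FOV
print(f"SBP ~ {sbp / 1e6:.0f} megapixels")

# A fivefold resolution gain at fixed FOV scales the SBP by 5^2 = 25,
# consistent with the reported 25x increase over coherent illumination.
gain = 5**2
print(gain)   # 25
```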

Noninvasive high-throughput imaging enables delineation of subcellular structures in large-scale cell studies. According to corresponding author Chao Zuo, principal investigator of the Smart Computational Imaging Laboratory (SCILab) at NJUST, “HBDTI offers a simple, high-performance, low-cost, and universal imaging tool for quantitative analysis in life sciences and biomedical research. Given its capability for high-throughput QPI, HBDTI is expected to provide a powerful solution for cross-scale detection and analysis of subcellular structures in a large number of cell clusters.”

Zuo notes that further efforts are needed to promote the high-speed implementation of HBDTI in large-group live cell analysis.

More information: Linpeng Lu et al, Hybrid brightfield and darkfield transport of intensity approach for high-throughput quantitative phase microscopy, Advanced Photonics (2022). DOI: 10.1117/1.AP.4.5.056002

Provided by SPIE 

World’s first optical atomic clock with highly charged ions

Illustration of the laser interrogation of a highly charged ion clock (artwork). Credit: PTB

Highly charged ions are a common form of matter in the cosmos, where they are found, for example, in the sun or other stars. They are so called because they have lost many electrons and therefore have a high positive charge. As a result, the outermost electrons are more strongly bound to the atomic nucleus than in neutral or weakly charged atoms.

For this reason, highly charged ions react less strongly to interference from external electromagnetic fields, but become more sensitive probes of fundamental effects of special relativity, quantum electrodynamics and the atomic nucleus.

“Therefore, we expected that an optical atomic clock with highly charged ions would help us to better test these fundamental theories”, explains PTB physicist Lukas Spieß. This hope has already been fulfilled: “We were able to detect the quantum electrodynamic nuclear recoil, an important theoretical prediction, in a five-electron system, which has not been achieved in any other experiment before.”

Beforehand, the team had to solve some fundamental problems, such as detection and cooling, over years of work: for atomic clocks, the particles must be cooled to extremely low temperatures so that they are as close to motionless as possible, allowing their frequency to be read out at rest. Highly charged ions, however, are produced in an extremely hot plasma.

Because of their extreme atomic structure, highly charged ions can’t be cooled directly with laser light, and standard detection methods can’t be used either. This was solved by a collaboration between MPIK in Heidelberg and the QUEST Institute at PTB by isolating a single highly charged argon ion from a hot plasma and storing it in an ion trap together with a singly charged beryllium ion.

This allows the highly charged ion to be cooled indirectly and studied by means of the beryllium ion. An advanced cryogenic trap system was then built at MPIK and finalized at PTB for the following experiments, which were carried out in part by students switching between the institutions.

Subsequently, a quantum algorithm developed at PTB succeeded in cooling the highly charged ion even further, namely close to the quantum mechanical ground state. This corresponded to a temperature of 200 millionths of a Kelvin above absolute zero. These results were already published in Nature in 2020 and in Physical Review X in 2021.

Now the researchers have successfully taken the next step: They have realized an optical atomic clock based on thirteen-fold charged argon ions and compared the ticking with the existing ytterbium ion clock at PTB. To do this, they had to analyze the system in great detail in order to understand, for example, the movement of the highly charged ion and the effects of external interference fields.

They achieved a measurement uncertainty of 2 parts in 1017—comparable to many currently operated optical atomic clocks. “We expect a further reduction of the uncertainty through technical improvements, which should bring us into the range of the best atomic clocks,” says research group leader Piet Schmidt.
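To put a fractional uncertainty of 2 parts in 10^17 into perspective, a common illustration (not a claim from the paper) is the time error such a clock would accumulate if it ran at that constant offset for the entire age of the universe:

```python
# Accumulated time error of a clock with a fractional frequency offset
# of 2e-17, run for the age of the universe (~13.8 billion years).
frac_uncertainty = 2e-17
seconds_per_year = 365.25 * 24 * 3600        # ~3.156e7 s
age_universe_s = 13.8e9 * seconds_per_year   # ~4.35e17 s

drift_s = frac_uncertainty * age_universe_s
print(f"{drift_s:.1f} s")   # under ~9 seconds over 13.8 billion years
```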

The researchers have thus created a serious competitor to existing optical atomic clocks based on, for example, individual ytterbium ions or neutral strontium atoms. The methods used are universally applicable and allow many different highly charged ions to be studied. These include atomic systems that can be used to search for extensions of the Standard Model of particle physics.

Other highly charged ions are particularly sensitive to changes in the fine-structure constant and to certain dark matter candidates that appear in models beyond the Standard Model but could not be detected with previous methods.

More information: Lukas Spieß, An optical atomic clock based on a highly charged ion, Nature (2022). DOI: 10.1038/s41586-022-05245-4. www.nature.com/articles/s41586-022-05245-4

Journal information: Physical Review X , Nature

Provided by Physikalisch-Technische Bundesanstalt

Researchers collaborate to better understand the weak nuclear force

Radial-plane cross-sectional view of the BPT showing a typical triple event. Credit: Physical Review Letters (2022). DOI: 10.1103/PhysRevLett.128.202502

The weak nuclear force is currently not entirely understood, despite being one of the four fundamental forces of nature. In a pair of Physical Review Letters articles, a multi-institutional team, including theorists and experimentalists from Louisiana State University, Lawrence Livermore National Laboratory, Argonne National Laboratory and other institutions worked closely together to test physics beyond the “Standard Model” through high-precision measurements of nuclear beta decay.

The experimental team loaded lithium-8 ions, an exotic heavy isotope of lithium with a half-life of less than one second produced with the ATLAS accelerator at Argonne National Laboratory, into an ion trap and detected the energy and directions of the particles emitted in their beta decay. Different underlying mechanisms for the weak nuclear force would give rise to distinct energy and angular distributions, which the team determined to unrivaled precision.
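The idea that different couplings leave different angular fingerprints can be sketched with the textbook beta-neutrino angular correlation, w(θ) ∝ 1 + a·β·cosθ, where β = v/c and the coefficient a depends on the coupling (for instance, a = -1/3 for a pure axial, Gamow-Teller decay). The toy Monte Carlo below generates events from this distribution and recovers a from the mean of cosθ; it is a generic illustration, not the collaboration's actual analysis, and the value of β is hypothetical.

```python
import numpy as np

def sample_costheta(a, beta, n, rng):
    """Rejection-sample cos(theta) from w(x) = (1 + a*beta*x)/2 on [-1, 1]."""
    accepted = []
    wmax = 1 + abs(a * beta)
    while sum(len(b) for b in accepted) < n:
        x = rng.uniform(-1, 1, size=n)
        keep = rng.uniform(0, wmax, size=n) < (1 + a * beta * x)
        accepted.append(x[keep])
    return np.concatenate(accepted)[:n]

# For this distribution <cos(theta)> = a*beta/3, so a = 3*<cos(theta)>/beta.
rng = np.random.default_rng(2)
a_true, beta_v = -1/3, 0.9    # a = -1/3: pure axial (Gamow-Teller) coupling
x = sample_costheta(a_true, beta_v, 1_000_000, rng)
a_est = 3 * x.mean() / beta_v
print(a_est)   # close to -1/3
```

A tensor admixture would shift a away from the pure axial value, which is why a precision measurement of the angular distribution constrains tensor currents.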

State-of-the-art calculations with the ab initio symmetry-adapted no-core shell model, developed at Louisiana State University, had to be performed to precisely account for typically neglected effects that are 100 times smaller than the dominant decay contributions. However, since the experiments have achieved remarkable precision, the systematic uncertainties of such corrections, which are difficult to measure, must now be confronted.

In their paper, “Impact of Clustering on the 8Li Beta Decay and Recoil Form Factors,” the LSU-led collaboration places unprecedented constraints on recoil corrections in the β decay of 8Li, by identifying a strong correlation between them and the 8Li ground state quadrupole moment in large-scale ab initio calculations.

The results are essential for improving the sensitivity of high-precision experiments that probe the weak interaction theory and test physics beyond the Standard Model. Dr. Grigor Sargsyan led the theoretical developments while he was a Ph.D. student at LSU, and is currently a postdoctoral researcher at Lawrence Livermore National Laboratory (LLNL).

In “Improved Limit on Tensor Currents in the Weak Interaction from 8Li β Decay,” researchers present the most precise measurement of tensor currents in the low-energy regime by examining the β−¯ν correlation of trapped 8Li ions with the Beta-decay Paul Trap. The results are found to be consistent with the Standard Model prediction, ruling out certain possible sources of “new” physics and setting the bar for precision measurements of this kind.

“This has important implications for understanding the physics of the tensor current contribution to the weak interaction,” said LSU Assistant Professor Alexis Mercenne. “Heretofore, the data has favored only vector and axial-vector couplings in the electroweak Lagrangian, but it has been suggested that other Lorentz-invariant interactions such as tensor, scalar, and pseudoscalar can arise in extensions of the Standard Model.”

“These are remarkable findings—the level of theoretical precision reached in ab initio theory beyond the lightest nuclei is unprecedented, and opens the path to novel high-precision predictions in atomic nuclei rooted in first principles,” said LSU Associate Professor Kristina Launey.

“In addition, no one expected that these theoretical developments would unveil a new state in the 8Be nucleus that has not been measured yet. This nucleus is notoriously difficult to model due to its cluster structure and collective correlations, but it becomes feasible for calculations in the ab initio symmetry-adapted no-core shell-model framework.”

The excitement of modern nuclear physics is its interdisciplinary nature and the use of a wide range of techniques and tools. LSU has both experimental and theoretical research groups in nuclear physics, with strong connections to the high-energy physics and astrophysics/space science groups. The principal focus of the experimental and theoretical groups is in the area of low-energy nuclear structure and reactions, including the study of nuclei far from stability and applications to astrophysics.

More information: G. H. Sargsyan et al, Impact of Clustering on the Li8 β Decay and Recoil Form Factors, Physical Review Letters (2022). DOI: 10.1103/PhysRevLett.128.202503

M. T. Burkey et al, Improved Limit on Tensor Currents in the Weak Interaction from Li8 β Decay, Physical Review Letters (2022). DOI: 10.1103/PhysRevLett.128.202502

Journal information: Physical Review Letters

Provided by Louisiana State University

Study proves a generalization of Bell’s theorem: Quantum correlations are genuinely tripartite and nonlocal

Credit: Marya Kuderska

Quantum theory predicts the existence of so-called tripartite-entangled states, in which three quantum particles are related in a way that has no counterpart in classical physics. Theoretical physicists would like to understand how well new theories, alternatives to quantum theory, might be able to reproduce the behavior of these states.

John Clauser, Alain Aspect and Anton Zeilinger, whose work was recently recognized by the Nobel Committee, experimentally demonstrated violations of Bell inequalities, showing that no local hidden-variable alternative to quantum theory can reproduce this behavior. In other words, they showed that quantum correlations are nonlocal.

Researchers at the University of Science and Technology of China, Institute of Photonic Sciences, Università della Svizzera Italiana and Perimeter Institute of Theoretical Physics have recently carried out an experimental study generalizing these findings, by considering new potential theories. Their findings, published in Physical Review Letters, suggest that the correlations achieved by the tripartite-entangled state used in their experiment cannot be explained by a hypothetical theory involving a generalization of bipartite entanglement, called “exotic sources of two particles,” in addition to a local hidden-variable theory.

“The main objective of our study was to prove that the behavior of a three-particle quantum source (e.g., a source of three photons) cannot be reproduced by any new hypothetical theory (replacing quantum theory, yet to be discovered) which only involves exotic pairs of two particles described by new physical laws and a local hidden variable model,” Marc-Olivier Renou, one of the authors of the paper, told Phys.org.

Gaël Massé, a second author, explains: “To do this, we used the idea contained in the ‘inflation technique,’ invented by Elie Wolfe, one of our coauthors. If we imagine a pair of two particles described by new physical laws, then even if we have no idea how to describe them, we can still create a copy of this pair and make all the particles interact together in a new way. While this technique seems elementary, it has often proved to be a very powerful tool to address theoretical abstract concepts.”

Credit: Marya Kuderska

In their paper, the researchers first derived a new device-independent witness that could falsify causal theories with bipartite nonclassical resources. Then, in a lab experiment performed by Huan Cao and Chao Zhang, they showed that a tripartite-entangled state (the “GHZ state”) can in practice produce correlations that violate this witness.

“Using a high-performance photonic GHZ3 state with fidelities of 0.9741±0.002, we provide a clear experimental violation of that witness by more than 26.3 standard deviations, under the locality and fair sampling assumption,” the team explained in their paper. “We generalize our Letter to the |GHZ4⟩ state, obtaining correlations that cannot be explained by any causal theory limited to tripartite nonclassical common causes assisted with unlimited shared randomness.”

The recent work is a generalization of Bell’s theorem. Its most remarkable achievement is that it reaches beyond what physicists previously thought was possible in constraining potential alternative theories to quantum theory.


“Bell ruled out the possibility that quantum correlations can be explained by a local hidden variable model (i.e., shared randomness),” Xavier Coiteux-Roy, a coauthor of the study, explains. “We went a bit further, by proving that even if you add ‘bipartite exotic sources’ to your theory, it still doesn’t work. In fact, we generalized the result, showing that if you add tripartite, quadripartite, and other exotic sources, it still doesn’t work. You really need to involve N-partite exotic sources for any N, however high it is, as is done by quantum theory.” He concludes, “Note that the experiment has imperfections, called loopholes. Realizing an experiment without these loopholes, in particular the post-selection loophole, is a great challenge for experimentalists in the coming years.”

Based on their findings, the team concluded that nature’s correlations are genuinely multipartite nonlocal. The experiments they have carried out so far allowed them to definitively exclude theories based on bipartite and tripartite exotic sources, and they are now considering how to evaluate other alternatives to quantum theory.

“We are now trying to understand how far this idea can go, and how far we can exclude potential alternatives to quantum theory by just looking at concrete experimental results, without assuming that they are explained by quantum theory,” Renou added. “This might eventually allow us to exclude all potential alternatives to quantum theory.”

More information: Huan Cao et al, Experimental Demonstration that No Tripartite-Nonlocal Causal Theory Explains Nature’s Correlations, Physical Review Letters (2022). DOI: 10.1103/PhysRevLett.129.150402

Journal information: Physical Review Letters 

© 2022 Science X Network

Light-analyzing ‘lab on a chip’ opens door to widespread use of portable spectrometers

Spectrometer on a chip. Credit: Oregon State

Scientists including an Oregon State University materials researcher have developed a better tool to measure light, contributing to a field known as optical spectrometry in a way that could improve everything from smartphone cameras to environmental monitoring.

The study, published today in Science, was led by Finland’s Aalto University and resulted in a powerful, ultra-tiny spectrometer that fits on a microchip and is operated using artificial intelligence.

The research involved a comparatively new class of super-thin materials known as two-dimensional semiconductors, and the upshot is a proof of concept for a spectrometer that could be readily incorporated into a variety of technologies—including quality inspection platforms, security sensors, biomedical analyzers and space telescopes.

“We’ve demonstrated a way of building spectrometers that are far more miniature than what is typically used today,” said Ethan Minot, a professor of physics in the OSU College of Science. “Spectrometers measure the strength of light at different wavelengths and are super useful in lots of industries and all fields of science for identifying samples and characterizing materials.”

Traditional spectrometers require bulky optical and mechanical components, whereas the new device could fit on the end of a human hair, Minot said. The new research suggests those components can be replaced with novel semiconductor materials and AI, allowing spectrometers to be dramatically scaled down in size from the current smallest ones, which are about the size of a grape.

“Our spectrometer does not require assembling separate optical and mechanical components or array designs to disperse and filter light,” said Hoon Hahn Yoon, who led the study with Aalto University colleague Zhipei Sun. “Moreover, it can achieve a high resolution comparable to benchtop systems but in a much smaller package.”

The device is 100% electrically controllable regarding the colors of light it absorbs, which gives it massive potential for scalability and widespread usability, the researchers say.

“Integrating it directly into portable devices such as smartphones and drones could advance our daily lives,” Yoon said. “Imagine that the next generation of our smartphone cameras could be hyperspectral cameras.”

Those hyperspectral cameras could capture and analyze information not just from visible wavelengths but also allow for infrared imaging and analysis.

“It’s exciting that our spectrometer opens up possibilities for all sorts of new everyday gadgets, and instruments to do new science as well,” Minot said.

In medicine, for example, spectrometers are already being tested for their ability to identify subtle changes in human tissue such as the difference between tumors and healthy tissue.

For environmental monitoring, Minot added, spectrometers can detect exactly what kind of pollution is in the air, water or ground, and how much of it is there.

“It would be nice to have low-cost, portable spectrometers doing this work for us,” he said. “And in the educational setting, the hands-on teaching of science concepts would be more effective with inexpensive, compact spectrometers.”

Applications abound as well for science-oriented hobbyists, Minot said.

“If you’re into astronomy, you might be interested in measuring the spectrum of light that you collect with your telescope and having that information identify a star or planet,” he said. “If geology is your hobby, you could identify gemstones by measuring the spectrum of light they absorb.”

Minot thinks that as work with two-dimensional semiconductors progresses, “we’ll be rapidly discovering new ways to use their novel optical and electronic properties.” Research into 2D semiconductors has been going on in earnest for only a dozen years, starting with the study of graphene, carbon arranged in a honeycomb lattice with a thickness of one atom.

“It’s really exciting,” Minot said. “I believe we’ll continue to have interesting breakthroughs by studying two-dimensional semiconductors.”

In addition to Minot, Yoon and Sun, the collaboration included scientists from Shanghai Jiao Tong University, Zhejiang University, Sichuan University, Yonsei University and University of Cambridge, as well as other researchers from Aalto University.

Universal parity quantum computing, a new architecture that overcomes performance limitations

Illustration of the modified LHZ architecture with logical lines. Three- and four-body constraints are represented by light gray triangles and squares between corresponding qubits. Data qubits with single logical indices are added as an additional row at the bottom of the architecture to allow direct access to logical Rz rotations. Colored lines connect all qubits whose labels contain the same logical index. Logical Rx rotations can be realized with chains of cnot gates along the corresponding line. Credit: Physical Review Letters (2022). DOI: 10.1103/PhysRevLett.129.180503

The computing power of quantum machines is currently still very low, and increasing their performance is a major challenge. Physicists at the University of Innsbruck, Austria, now present a new architecture for a universal quantum computer that overcomes such limitations and could soon form the basis of the next generation of quantum computers.

Quantum bits (qubits) in a quantum computer serve as a computing unit and memory at the same time. Because quantum information cannot be copied, it cannot be stored in memory as in a classical computer. Due to this limitation, all qubits in a quantum computer must be able to interact with each other.

This is currently still a major challenge for building powerful quantum computers. In 2015, theoretical physicist Wolfgang Lechner, together with Philipp Hauke and Peter Zoller, addressed this difficulty and proposed a new architecture for a quantum computer, now named LHZ architecture after the authors.

“This architecture was originally designed for optimization problems,” says Wolfgang Lechner of the Department of Theoretical Physics at the University of Innsbruck, Austria. “In the process, we reduced the architecture to a minimum in order to solve these optimization problems as efficiently as possible.”

The physical qubits in this architecture do not represent individual bits, but encode the relative coordination between the bits. “This means that not all qubits have to interact with each other anymore,” explains Wolfgang Lechner. With his team, he has now shown that this parity concept is also suitable for a universal quantum computer.
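The parity mapping can be made concrete with a toy sketch. Assuming the simplest all-to-all version of the encoding (an illustration of the idea, not ParityQC's actual compiler), each physical qubit stores the relative orientation of one pair of logical bits:

```python
from itertools import combinations

def parity_encode(bits):
    """One parity qubit per pair (i, j), storing bits[i] XOR bits[j],
    i.e. whether logical spins i and j point the same way."""
    return {(i, j): bits[i] ^ bits[j]
            for i, j in combinations(range(len(bits)), 2)}

logical = [1, 0, 1, 1]
parities = parity_encode(logical)
print(parities[(0, 2)])  # 0: logical bits 0 and 2 agree
print(parities[(0, 1)])  # 1: logical bits 0 and 1 differ

# Valid physical states must satisfy closed-loop constraints (the three-
# and four-body constraints of the LHZ layout); e.g. for any triple:
assert parities[(0, 1)] ^ parities[(1, 2)] ^ parities[(0, 2)] == 0
```

Because information lives in these pairwise parities, operations that would otherwise require qubits to interact across the chip can be enforced locally through the loop constraints.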

Complex operations are simplified

Parity computers can perform operations between two or more qubits on a single qubit. “Existing quantum computers already implement such operations very well on a small scale,” Michael Fellner from Wolfgang Lechner’s team explains. “However, as the number of qubits increases, it becomes more and more complex to implement these gate operations.”

In two publications in Physical Review Letters and Physical Review A, the Innsbruck scientists now show that parity computers can, for example, perform quantum Fourier transformations—a fundamental building block of many quantum algorithms—with significantly fewer computation steps and thus more quickly. “The high parallelism of our architecture means that, for example, the well-known Shor algorithm for factoring numbers can be executed very efficiently,” Fellner explains.
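For reference, the quantum Fourier transform that the parity architecture accelerates is the standard unitary below (the textbook definition, not the Innsbruck-specific construction):

```latex
\mathrm{QFT}\,|j\rangle \;=\; \frac{1}{\sqrt{N}} \sum_{k=0}^{N-1} e^{2\pi i jk/N}\,|k\rangle,
\qquad N = 2^{n} \text{ for } n \text{ qubits.}
```

On standard hardware its circuit requires many controlled-phase gates between distant qubits, which is exactly the kind of long-range interaction the parity encoding is designed to avoid.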

Two-stage error correction

The new concept also offers hardware-efficient error correction. Because quantum systems are very sensitive to disturbances, quantum computers must correct errors continuously, and significant resources must be devoted to protecting quantum information, which greatly increases the number of qubits required. “Our model operates with a two-stage error correction: one type of error (bit-flip error or phase error) is prevented by the hardware used,” write Anette Messinger and Kilian Ender, also members of the Innsbruck research team.

There are already initial experimental approaches for this on different platforms. “The other type of error can be detected and corrected via the software,” Messinger and Ender say. This would allow a next generation of universal quantum computers to be realized with manageable effort.

The spin-off company ParityQC, co-founded by Wolfgang Lechner and Magdalena Hauser, is already working in Innsbruck with partners from science and industry on possible implementations of the new model.

Electrons with Planckian scattering in strange metals follow standard rules of orbital motion in a magnet

The 100-tesla magnet system at the National Laboratory for Intense Magnetic Fields in Toulouse, France. Credit: Nanda Gonzague.

Strange metals, or non-Fermi liquids, are distinct states of matter that have been observed in different quantum materials, including cuprate superconductors. These states are characterized by unusual conductive properties, such as a resistivity that varies linearly with temperature (T-linear).

In the strange metal phase of matter, electrons undergo what is known as “Planckian dissipation,” a high scattering rate that increases linearly as the temperature rises. This T-linear, strong electron scattering is anomalous for metals, which typically show a quadratic temperature dependence (T²), as predicted by the standard theory of metals.
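Concretely, the Planckian scattering rate is usually written as a rate set only by temperature and fundamental constants (the standard form in the literature, not a result specific to this study):

```latex
\frac{1}{\tau} \;=\; \alpha \,\frac{k_{\mathrm{B}} T}{\hbar}, \qquad \alpha \sim 1,
```

where τ is the electron scattering time. The anomaly is that strange metals follow this T-linear form down to low temperatures, instead of crossing over to the T² rate of ordinary metals.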

Researchers at Université de Sherbrooke in Canada, Laboratoire National des Champs Magnétiques Intenses in France, and other institutes worldwide have recently carried out a study exploring the possibility that the resistivity of strange metals is not only associated with temperature, but also with an applied magnetic field. This magnetic field linearity had been previously observed in some cuprates and pnictides, with some physicists suggesting that it could also be linked to Planckian dissipation.

The researchers carried out their experiments on two specific cuprate strange metals, namely Nd0.4La1.6−xSrxCuO4 and La2−xSrxCuO4. Their findings, published in Nature Physics, suggest that the resistivity of these two strange metals is consistent with the predictions of the standard Boltzmann theory of electron motion in a magnetic field in every respect, revealing no anomaly associated with Planckian dissipation.

“We wanted to investigate the field dependence of the Planckian scattering rate in the strange metal phase of cuprate superconductors, in particular in NdLSCO, whose scattering rate was previously measured with angle-dependent magnetoresistance (ADMR) experiments,” Amirreza Ataei, one of the researchers who carried out the study, told Phys.org. “In this material, due to a relatively low critical temperature, Tc, we had access to one of the largest measured ranges of B-linear resistivity and were able to reproduce the magnetoresistance over this magnetic field range using the standard Boltzmann theory.”

The sample holder used for high-field measurements at Toulouse. The length of the black single-crystal sample is less than 2 mm, the contacts were made with silver epoxy and 25-micrometer wires, and the sample is mounted on a sapphire plate. Credit: Ataei M.Sc. thesis https://savoirs.usherbrooke.ca/handle/11143/15285

A key objective of the recent work by Ataei and his colleagues was to determine whether the in-plane magnetoresistance in the strange metal phase of Nd0.4La1.6−xSrxCuO4 and La2−xSrxCuO4 was anomalous in instances where the magnetic field and electric current were parallel. Ultimately, the measurements they collected suggest that it was not.

“We expect our findings to have a big impact in the field of Planckian dissipation, a major mystery in condensed-matter physics with intriguing connections to the physics of black holes,” Ataei explained. “We show that this enigmatic phenomenon is insensitive to magnetic field, up to 85 T, one of the highest achievable magnetic fields in the world.”

Louis Taillefer, Cyril Proust and Seyed Amirreza Ataei. Credit: Michel Caron – UdeS.

Overall, the results gathered by this team of researchers would seem to challenge the hypothesis that the linear dependence of resistivity on a magnetic field observed in some strange metals is associated with Planckian dissipation. Instead, their experimental data suggest that Planckian dissipation is anomalous only in its temperature dependence, while its field dependence is aligned with standard theoretical predictions.

“We now plan to extend the scope of this research to different quantum materials in the strange metal phase or in its proximity,” Ataei added.