Numerical simulations show how the classical world might emerge from the many-worlds universes of quantum mechanics

Students learning quantum mechanics are taught the Schrödinger equation and how to solve it to obtain a wave function. But a crucial step is skipped because it has puzzled scientists since the earliest days: how does the real, classical world emerge from what is often a large number of solutions for the wave function?

Each of these wave functions has its individual shape and associated energy level, but how does the wave function “collapse” into what we see as the classical world—atoms, cats and the pool noodles floating in the tepid swimming pool of a seedy hotel in Las Vegas hosting a convention of hungover businessmen trying to sell the world a better mousetrap?

At a high level, this is handled by the “Born rule,” the postulate that the probability density for finding an object at a particular location is proportional to the squared magnitude of the wave function at that position.
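For a particle moving in one dimension with a normalized wave function ψ(x), the rule can be written as

\[ P(a \le x \le b) = \int_a^b |\psi(x)|^2 \, dx, \qquad \int_{-\infty}^{\infty} |\psi(x)|^2 \, dx = 1, \]

so the squared magnitude |ψ(x)|² plays the role of a probability density over position.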

Erwin Schrödinger invented his famous feline as a way to amplify the consequences of the collapsing wave function: a single quantum event, such as the radioactive decay of an atomic nucleus, somehow translates into the macroscopic cat in the box being either alive or dead. (This mysterious transition, perhaps only theoretical, is called the Heisenberg Cut.)

Traditional quantum mechanics says that the cat becomes either alive or dead only when the box is opened and the cat's state is “measured.” Before that, the cat is, in a sense, both alive and dead: it exists in a quantum superposition of the two states. Only when the box is opened and its contents viewed does the wave function of the cat collapse into a definite state of being either alive or dead.

In recent years, physicists have been looking at this process more deeply to understand what’s happening. Attempts that modify the Schrödinger equation itself have had only limited success.

Ideas other than the Copenhagen interpretation described above, such as de Broglie-Bohm pilot-wave theory and the many-worlds interpretation of quantum mechanics, are receiving more attention.

Now a team of quantum theorists from Spain have used numerical simulations to show that, on large scales, features of the classical world can emerge from a wide class of quantum systems. Their work is published in the journal Physical Review X.

“Quantum physics is at odds with our classical experience as far as the behavior of single electrons, atoms or photons is concerned,” lead author Philipp Strasberg of the Autonomous University of Barcelona told Phys.org.

“However, if one zooms out, and considers coarse quantities that we humans can perceive (for example, the temperature of our morning coffee or the position of a stone), our results indicate that quantum interference effects, which are responsible for weird quantum behavior, vanish.”

Their finding suggests that the classical world we see can emerge from the many-worlds picture of quantum mechanics, in which many universes exist in parallel and a potentially huge number of worlds branch off from ours every time a measurement is made.

As a rough analogy, imagine a shower bag filled with water. Poke holes in the bag and water—which inside the bag is a large collection of frequently colliding molecules moving in random directions—will stream out in mostly smooth flows. This is akin to how the complicated jumble of a quantum system nonetheless appears in the classical world as something we recognize and feel familiar with.

But a technical problem remained with the many-worlds portrait: how do we reconcile the many-universes with the classical experience we have within our one universe? After all, we never see cats in a superposition of alive and dead. A priori, how can we speak of other universes or worlds or branches in any meaningful sense?

In their paper, Strasberg and co-authors write, “Speaking of different worlds or histories becomes meaningful if we can reason about their past, present, and future in classical terms.”

The co-authors attempted to solve this problem in a new way. Previous work has invoked the idea of quantum decoherence, where the definite objects we see arise from the many superpositions of a quantum system as it interacts with its environment. But this approach has a fine-tuning problem: it only works for specific types of interactions and specific initial wave functions.

By contrast, the group showed that a stable, self-consistent set of features emerges at observable, non-microscopic scales from the range of many possible evolutions of a wave function with many energy levels. This solution has no fine-tuning problem; it works for a wide range of initial conditions and of interaction details between the energy levels.

“In particular,” Strasberg told Phys.org, “we provide clear evidence that this vanishing [of quantum interference effects] happens extremely fast—to be precise: exponentially fast—with growing system size. That is, even a few atoms or photons can behave classically. Furthermore, it is a ubiquitous and generic phenomenon that does not require any fine tuning: the emergence of a classical world is inevitable.”

The group numerically simulated quantum evolution for up to five time-steps and up to 50,000 energy levels for nontrivial quantum systems. Though that evolution is still small compared to what will be needed to simulate everyday classical phenomena, it’s much larger than any previous work.
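As a rough illustration of the kind of quantity involved (this is a toy sketch, not the authors' code; the random-matrix model, cell sizes and evolution times are purely illustrative assumptions), the following Python snippet evolves a state under a random Hamiltonian, coarse-grains the basis into a few macroscopic cells, and compares the off-diagonal (interference) entries of a two-time decoherence functional with the diagonal (probability) entries:

import numpy as np

rng = np.random.default_rng(0)

def random_unitary(dim, t=1.0):
    # Time evolution exp(-iHt) under a GOE-like random Hamiltonian,
    # a stand-in for generic many-level quantum dynamics.
    a = rng.normal(size=(dim, dim))
    h = (a + a.T) / np.sqrt(2 * dim)
    evals, evecs = np.linalg.eigh(h)
    return evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T

def cell_slices(dim, cells):
    # Split the basis into equally sized coarse "macroscopic" cells.
    size = dim // cells
    return [slice(c * size, (c + 1) * size) for c in range(cells)]

def project(psi, sl):
    # Projector onto one coarse cell.
    out = np.zeros_like(psi)
    out[sl] = psi[sl]
    return out

def interference_ratio(dim, cells=4):
    # Off-diagonal (interference) vs diagonal (probability) entries of a
    # two-time decoherence functional for coarse-grained histories.
    u1, u2 = random_unitary(dim), random_unitary(dim)
    psi0 = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    psi0 /= np.linalg.norm(psi0)
    slices = cell_slices(dim, cells)
    # Branch vector for the history (a1, a2): evolve, project onto cell a1,
    # evolve again, project onto cell a2.
    branch = {(a1, a2): project(u2 @ project(u1 @ psi0, slices[a1]), slices[a2])
              for a1 in range(cells) for a2 in range(cells)}
    diag = np.mean([np.vdot(v, v).real for v in branch.values()])
    off = np.mean([abs(np.vdot(branch[(b1, a2)], branch[(a1, a2)]))
                   for a2 in range(cells)
                   for a1 in range(cells)
                   for b1 in range(cells) if b1 != a1])
    return off / diag

for dim in (64, 256, 1024):
    print(dim, round(interference_ratio(dim), 4))

In this toy setting the ratio of interference to probability shrinks as the dimension grows, mirroring in a qualitative way the suppression of interference with system size that the paper establishes far more carefully and at much larger scales.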

They considered a broad range of choices of the initial wave function and of coupling strengths and found approximately the same large-scale pattern in each case: the emergence of stable, slowly evolving macroscopic branches.

“Remarkably, we also explicitly demonstrate that interesting classical worlds can emerge from a quantum system that is overall in thermodynamic equilibrium. Even though it is very unlikely that this is the case in our universe, it nevertheless demonstrates that order, structure and an arrow of time can emerge on single branches of a quantum Multiverse, which overall looks chaotic, unstructured and time-symmetric.”

Relating their work to statistical mechanics, where macroscopic features like temperature and pressure emerge from a melange of randomly moving particles, the group found that some branches lead to worlds where entropy increases and others to worlds where entropy decreases. Such worlds would have opposite entropic arrows of time.

More information: Philipp Strasberg et al, First Principles Numerical Demonstration of Emergent Decoherent Histories, Physical Review X (2024). DOI: 10.1103/PhysRevX.14.041027

Journal information: Physical Review X 

© 2024 Science X Network

Cooperation between two intruders moving side-by-side in granular media

In bird colonies, schools of fish and cycling pelotons, significant interactions occur between individuals through the surrounding fluid. These interactions are well understood in fluids such as air and water, but what happens when objects move through something like sand? It turns out that similar interactions occur in granular materials—things like soil or sand—and they play a crucial role in everyday contexts. Think of plows cutting through farmland, animals burrowing underground, roots pushing through soil, or even robots exploring the surface of Mars.

Recently, we made a fascinating discovery: When two objects (what we call “intruders”) move side by side through granular materials, they can actually help each other by reducing the resistance they face. This cooperative effect was uncovered by our team from the School of Mechanical Engineering at the University of Campinas (UNICAMP) in Brazil and the FAST laboratory (CNRS, Université Paris-Saclay) in France.

To investigate this, we set up an experiment using spherical objects immersed in glass beads to replicate a granular medium. The goal was to pull these objects at a constant speed and measure the drag force they experienced as they moved through the grains. While previous studies had looked at the lateral forces between objects, our team wondered whether moving together might also reduce the drag force.

Some intriguing numerical simulations by two of our researchers at UNICAMP, D. D. Carvalho and E. M. Franklin, published in Physics of Fluids in 2022, suggested that it could, but we wanted to test this in the real world.

What we found was exciting: When the two intruders were close together, the drag on each of them dropped significantly—by as much as 30% compared to when they were farther apart. And the deeper they were buried in the material, the more pronounced this effect became. The explanation? When two objects move side by side, the motion of one disrupts the force chains between the grains around the other. This break in the grain contact reduces the overall resistance each object encounters.

Beyond just observing this effect, we also developed a semi-empirical model to describe it. The model is based on the idea that interactions between closely spaced objects disrupt these granular force chains, making it easier for them to move. This study, now published in Physical Review Fluids, highlights a previously under-explored aspect of granular dynamics: the cooperative motion of multiple objects.

As research into these dynamics advances, it may lead to new technologies and techniques for navigating granular materials—on Earth and beyond—potentially enabling more efficient solutions for various industries and scientific endeavors.

This story is part of Science X Dialog, where researchers can report findings from their published research articles. Visit this page for information about Science X Dialog and how to participate.

More information: D. D. Carvalho et al, Drag reduction during the side-by-side motion of a pair of intruders in a granular medium, Physical Review Fluids (2024). DOI: 10.1103/PhysRevFluids.9.114303

Journal information: Physical Review Fluids  Physics of Fluids 

Researchers improve chaotic mapping for super-resolution image reconstruction

Super-resolution (SR) technology plays a pivotal role in enhancing the quality of images. SR reconstruction aims to generate high-resolution images from low-resolution ones. Traditional methods often result in blurred or distorted images. Advanced techniques such as sparse representation and deep learning-based methods have shown promising results but still face limitations in terms of noise robustness and computational complexity.

In a recent study published in Sensors, researchers from the Changchun Institute of Optics, Fine Mechanics and Physics of the Chinese Academy of Sciences proposed innovative solutions that integrate chaotic mapping into the SR image reconstruction process, significantly enhancing image quality across various fields.

The researchers introduced circle chaotic mapping into the dictionary-sequence solving step of the K-singular value decomposition (K-SVD) dictionary update algorithm. This integration facilitated balanced traversal and simplified the search for globally optimal solutions, thereby enhancing the noise robustness of the SR reconstruction.

In addition, the researchers adopted the orthogonal matching pursuit (OMP) greedy algorithm, which converges faster than L1-norm convex optimization, to complement K-SVD, and constructed the high-resolution image using the mapping relationship generated by the algorithm.

They trained and learned high- and low-resolution dictionaries from a large number of images similar to the target. Through the joint dictionary training method, the high- and low-resolution image blocks under the dictionary had the same sparse representation, reducing the complexity of the SR reconstruction process.
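For readers unfamiliar with the ingredients, here is a generic sketch of orthogonal matching pursuit (the textbook algorithm, not the authors' chaotic-mapping-enhanced pipeline; the dictionary and signal are random and purely illustrative). OMP greedily selects dictionary atoms and refits the coefficients by least squares; in joint-dictionary super-resolution, coefficients found this way on the low-resolution dictionary are then reused with the paired high-resolution dictionary:

import numpy as np

def omp(D, y, sparsity):
    # Orthogonal matching pursuit: greedily pick the atom most correlated with
    # the residual, then refit all selected atoms jointly by least squares.
    residual = y.copy()
    support = []
    for _ in range(sparsity):
        k = int(np.argmax(np.abs(D.T @ residual)))
        if k not in support:
            support.append(k)
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ sol
    coeffs = np.zeros(D.shape[1])
    coeffs[support] = sol
    return coeffs

# Toy usage: a 64-sample "low-resolution patch" built from 3 atoms of a random,
# unit-norm dictionary; OMP should recover approximately those same 3 atoms.
rng = np.random.default_rng(1)
D = rng.normal(size=(64, 256))
D /= np.linalg.norm(D, axis=0)
y = D[:, [5, 42, 200]] @ np.array([1.0, -0.5, 2.0])
alpha = omp(D, y, sparsity=3)
print(np.nonzero(alpha)[0], np.linalg.norm(y - D @ alpha))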

The proposed method, a chaotic-mapping-based sparse representation the authors call CMOSR, significantly improves image quality and authenticity. It can effectively reconstruct high-resolution images with high spatial resolution, good clarity and rich texture details. Compared with traditional SR algorithms, CMOSR exhibits better noise robustness and computational efficiency, does not generate spurious details when processing images, and accommodates a wider range of image sizes.

More information: Hailin Fang et al, Super-Resolution Reconstruction of Remote Sensing Images Using Chaotic Mapping to Optimize Sparse Representation, Sensors (2024). DOI: 10.3390/s24217030

Provided by Chinese Academy of Sciences

Researchers achieve calculation of the Jones polynomial based on Majorana zero modes

A research team has experimentally calculated the Jones polynomial based on the quantum simulation of braided Majorana zero modes. The team determined the Jones polynomials of different links by simulating the braiding operations of Majorana fermions. The study was published in Physical Review Letters.

Link or knot invariants, such as the Jones polynomials, serve as a powerful tool to determine whether or not two knots are topologically equivalent. Currently, there is a lot of interest in determining Jones polynomials as they have applications in various disciplines, such as DNA biology and condensed matter physics.
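One standard way to pin the Jones polynomial down is through a “skein relation” connecting three links that are identical except at one crossing (a positive crossing L₊, a negative crossing L₋, and the crossing smoothed away, L₀), together with its value on the unknot:

\[ t^{-1}\, V(L_+) - t\, V(L_-) = \left(t^{1/2} - t^{-1/2}\right) V(L_0), \qquad V(\text{unknot}) = 1. \]

Because V is a topological invariant, two links with different Jones polynomials cannot be equivalent, although links with equal polynomials are not guaranteed to be.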

Unfortunately, even approximating the value of Jones polynomials falls within the #P-hard complexity class, with the most efficient classical algorithms requiring an exponential amount of resources. Quantum simulations, however, offer an exciting way to experimentally investigate the properties of non-Abelian anyons, and Majorana zero modes (MZMs) are regarded as the most plausible candidates for experimentally realizing non-Abelian statistics.

The team used a photonic quantum simulator that employed two-photon correlations and nondissipative imaginary-time evolution to perform two distinct MZM braiding operations that generate anyonic worldlines of several links. Based on this simulator, the team conducted a series of experimental studies to simulate the topological properties of non-Abelian anyons.

They successfully simulated the exchange operations of MZMs in a single Kitaev chain, detected the non-Abelian geometric phase of MZMs in a two-Kitaev-chain model, and further extended the approach to higher-dimensional semion zero modes, studying a braiding process that was immune to local noise and maintained the conservation of quantum contextual resources.

Based on this work, the team expanded the previous single-photon encoding method to dual-photon spatial methods, utilizing coincidence counting of dual photons for encoding. This significantly increased the number of quantum states that can be encoded.

Meanwhile, by introducing a Sagnac interferometer-based quantum cooling device, the dissipative evolution was successfully transformed into a nondissipative one, which enhanced the device’s capability to recycle photonic resources and thus helped achieve multi-step quantum evolution operations. These techniques greatly improved the capability of the photonic quantum simulator and laid a solid technical foundation for simulating the braiding of Majorana zero modes in three Kitaev chain models.

The team demonstrated that their experimental setup could faithfully realize the desired braiding evolutions of MZMs, as the average fidelity of quantum states and braiding operation was above 97%.

By combining different braiding operations of Majorana zero modes in the three Kitaev chain models, the research team simulated five typical topological knots, which gave rise to the Jones polynomials of five topologically distinct links, further distinguishing between topologically inequivalent links.

Such an advance can greatly contribute to the fields of statistical physics, molecular synthesis technology and integrated DNA replication, where intricate topological links and knots emerge frequently.

More information: Jia-Kun Li et al, Photonic Simulation of Majorana-Based Jones Polynomials, Physical Review Letters (2024). DOI: 10.1103/PhysRevLett.133.230603

Journal information: Physical Review Letters

Provided by University of Science and Technology of China

The science behind your Christmas sweater: How friction shapes the form of knitted fabrics

A trio of physicists from the University of Rennes, Aoyama Gakuin University, and the University of Lyon have discovered, through experimentation, that it is friction between fibers that allows knitted fabrics to take on a given form. Jérôme Crassous, Samuel Poincloux, and Audrey Steinberger have attempted to understand the underlying mechanics involved in the forms of knitted garments. Their paper is published in Physical Review Letters.

The research team noted that while many of the factors involved in intertwined fabrics have been studied to better understand their characteristics (such as why sweaters keep people warm despite the gaps between stitches), much less is known about the forms that garments made using such techniques can take.

To learn more, they conducted experiments using a nylon yarn and a well-known Jersey knit stitch called the stockinette—a technique that involves forming interlocked loops using knitting needles. They knitted a piece of fabric using 70×70 stitches and attached it to a biaxial tensile machine.

The team then used the tensile machine to stretch the piece of fabric in different ways and closely examined how the stretching affected the stitches. They found that the fabric did not have a unique resting shape. By stretching it in different ways, they could cause it to come to rest in different forms, which they call metastable shapes.

They noted that the ratios of the length and width of such metastable shapes varied depending on how much twisting was applied, which suggested the fabric was capable of taking on many different metastable shapes.

The researchers then created simulations of the fabric to show what was happening as it was twisted and pulled on the tensile machine. The simulations reproduced the experimental results, but they also allowed the team to change one characteristic of the virtual fibers that could not be changed in the real fabric: the amount of friction between the strands.

They found that setting the friction to zero reduced the metastable shapes to just one. Thus, friction was found to be the driving force behind the forms that knitted fabrics can take.

More information: Jérôme Crassous et al, Metastability of a Periodic Network of Threads: Shapes of a Knitted Fabric, Physical Review Letters (2024). DOI: 10.1103/PhysRevLett.133.248201. On arXiv: DOI: 10.48550/arxiv.2404.07811

Journal information: Physical Review Letters  arXiv 

© 2024 Science X Network

A new calculation of the electron’s self-energy improves determination of fundamental constants

When quantum electrodynamics, the quantum field theory of electrons and photons, was being developed after World War II, one of the major challenges for theorists was calculating a value for the Lamb shift, the tiny energy difference between two hydrogen levels that existing theory said should coincide, revealed by the photon emitted or absorbed when an electron transitions from one level to the other.

The effect was first detected by Willis Lamb and Robert Retherford in 1947, with the emitted photon having a frequency of about 1,000 megahertz, corresponding to a photon wavelength of 30 cm and an energy of 4 millionths of an electronvolt, right on the lower edge of the microwave spectrum. It came when the one electron of the hydrogen atom transitioned from the 2S1/2 energy level to the 2P1/2 level. (The leftmost number is the principal quantum number, much like the discrete but increasing circular orbits of the Bohr atom.)
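The numbers fit together: for a frequency f of roughly 10^9 Hz,

\[ \lambda = \frac{c}{f} \approx \frac{3\times 10^{8}\ \text{m/s}}{10^{9}\ \text{Hz}} \approx 0.3\ \text{m} = 30\ \text{cm}, \qquad E = h f \approx \left(4.14\times 10^{-15}\ \text{eV·s}\right)\left(10^{9}\ \text{Hz}\right) \approx 4\times 10^{-6}\ \text{eV}. \]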

Conventional quantum mechanics didn’t have such transitions, and Dirac’s relativistic Schrödinger equation (naturally called the Dirac equation) did not predict such a splitting either, because the shift is a consequence of interactions with the vacuum, and Dirac’s vacuum was a “sea” that did not interact with real particles.

As theorists worked to produce a workable theory of quantum electrodynamics (QED), predicting the Lamb shift was an excellent challenge as the QED calculation contained the prominent thorns of the theory, such as divergent integrals at both low and high energies and singularity points.

On Lamb’s 65th birthday in 1978, Freeman Dyson said to him, “Those years, when the Lamb shift was the central theme of physics, were golden years for all the physicists of my generation. You were the first to see that this tiny shift, so elusive and hard to measure, would clarify our thinking about particles and fields.”

Precisely predicting the Lamb shift, as well as the anomalous magnetic moment of the electron, has been a challenge for theorists of every generation since. The theoretically predicted value for the shift allows the fine-structure constant to be measured with an uncertainty of less than one part in a million.

Now, a new step in the evolution of the Lamb shift calculation has been published in Physical Review Letters by a group of three scientists from the Max Planck Institute for Nuclear Physics in Germany. To be exact, they calculated the “two-loop” electron self-energy.

Self-energy is the energy a particle (here, an electron) has as a result of changes that it causes in its environment. For example, the electron in a hydrogen atom attracts the proton that is the nucleus, so the effective distance between them changes.

QED has a prescription for calculating the self-energy, and it is easiest to organize via Feynman diagrams. “Two loops” refers to the Feynman diagrams that describe this quantum process: two virtual photons from the quantum vacuum that influence the electron’s behavior. They pop out of the vacuum, exist for a time shorter than that set by the Heisenberg uncertainty principle, then are absorbed by the 1S electron state, which has spin 1/2.

The two-loop self-energy is one of only three mathematical terms that describe the Lamb shift, but it constitutes a major computational problem and is the term that most strongly influences the result for the Lamb energy shift.

Lead author Vladimir Yerokhin and his colleagues determined an enhanced precision for it through numerical calculations. Importantly, they calculated the two-loop correction to all orders in an important parameter, Zα, which represents the interaction with the nucleus. (Z is the atomic number of the nucleus; the atom still has only one electron, but a nucleus bigger than hydrogen’s is included for generality. α is the fine-structure constant.)

Although it was computationally challenging, the trio produced a significant improvement on previous two-loop calculations of the electron self-energy that reduces the 1S–2S Lamb shift in hydrogen by a frequency difference of 2.5 kHz and reduces its theoretical uncertainty. In particular, this reduces the value of the Rydberg constant by one part in a trillion.

Introduced by the Swedish spectroscopist Johannes Rydberg in 1890, this number appears in simple equations for the spectral lines of hydrogen. The Rydberg constant is one of the most precisely known fundamental constants in physics, determined to 12 significant figures with, previously, a relative uncertainty of about two parts in a trillion.
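In its familiar form, the Rydberg formula gives the wavelengths of hydrogen-like spectral lines in terms of two level numbers n₁ < n₂ (in the limit of an infinitely heavy nucleus):

\[ \frac{1}{\lambda} = R_\infty \left( \frac{1}{n_1^2} - \frac{1}{n_2^2} \right), \qquad R_\infty \approx 1.0973731568 \times 10^{7}\ \text{m}^{-1}. \]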

Overall, they write, “the calculational approach developed in this Letter allowed us to improve the numerical accuracy of this effect by more than an order of magnitude and extend calculations to lower nuclear charges [Z] than previously possible.” This, in turn, has consequences for the Rydberg constant.

Their methodology also has consequences for other celebrated QED calculations: other two-loop corrections to the Lamb shift, and especially to the two-loop QED effects for the anomalous magnetic moment of the electron and the muon, also called their “g-factors.” A great deal of experimental effort is currently being put into precisely determining the muon’s g-factor, such as the Muon g-2 experiment at Fermilab, as it could point the way to physics beyond the standard model.

More information: V. A. Yerokhin et al, Two-Loop Electron Self-Energy for Low Nuclear Charges, Physical Review Letters (2024). DOI: 10.1103/PhysRevLett.133.251803

Journal information: Physical Review Letters

© 2024 Science X Network

Starlight to sight: Researchers develop short-wave infrared technology to allow starlight detection

Prof. Zhang Zhiyong’s team at Peking University developed a heterojunction-gated field-effect transistor (HGFET) that achieves high sensitivity in short-wave infrared detection, with a record specific detectivity above 10^14 Jones at 1300 nm, making it capable of starlight detection. Their research was recently published in the journal Advanced Materials, titled “Opto-Electrical Decoupled Phototransistor for Starlight Detection.”

Highly sensitive shortwave infrared (SWIR) detectors are essential for detecting weak radiation (typically below 10^−8 W·sr^−1·cm^−2·µm^−1) with high-end passive image sensors. However, mainstream SWIR detection based on epitaxial photodiodes cannot effectively detect ultraweak infrared radiation due to the lack of inherent gain.

Filling this gap, researchers at the Peking University School of Electronics and collaborators have presented a heterojunction-gated field-effect transistor (HGFET) that achieves ultra-high photogain and exceptionally low noise in the short-wavelength infrared (SWIR) region, benefiting from a design that incorporates a comprehensive opto-electric decoupling mechanism.

The team developed an HGFET consisting of a colloidal quantum dot (CQD)-based p-i-n heterojunction and a carbon nanotube (CNT) field-effect transistor, which detects and amplifies SWIR signals with high inherent gain while only minimally amplifying noise, leading to a record specific detectivity above 10^14 Jones at 1300 nm and a record maximum gain-bandwidth product of 69.2 THz.
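For context, specific detectivity is the standard figure of merit for comparing photodetectors of different sizes: in terms of the detector area A, the measurement bandwidth Δf and the noise-equivalent power NEP,

\[ D^{*} = \frac{\sqrt{A\,\Delta f}}{\mathrm{NEP}}, \]

with the Jones unit being cm·Hz^{1/2}·W^{−1}, so a value above 10^14 Jones corresponds to an extremely small noise-equivalent power for a given area and bandwidth.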

Direct comparative testing indicates that the HGFET can detect weak infrared radiation at levels of 0.46 nW cm^−2, making this detector much more sensitive than commercial and previously reported SWIR detectors and, in particular, enabling starlight-level detection or vision.

More information: Shaoyuan Zhou et al, Opto‐Electrical Decoupled Phototransistor for Starlight Detection, Advanced Materials (2024). DOI: 10.1002/adma.202413247

Journal information: Advanced Materials

Provided by Peking University

Researchers use high-performance computing to analyze a quantum photonics experiment

by Universität Paderborn

Scientists at Paderborn University have for the first time used high-performance computing (on the right in the picture, the Paderborn supercomputer Noctua) to analyze a quantum photonics experiment on a large scale. Credit: Paderborn University, Hennig/Mazhiq

For the first time ever, scientists at Paderborn University have used high-performance computing (HPC) at large scales to analyze a quantum photonics experiment. In specific terms, this involved the tomographic reconstruction of experimental data from a quantum detector. This is a device that measures individual photons.

The researchers involved developed new HPC software to achieve this. Their findings have now been published in the journal Quantum Science and Technology.

Quantum tomography on a megascale photonic quantum detector

High-resolution photon detectors are increasingly being used for quantum research. Precisely characterizing these devices is crucial if they are to be put to effective use for measurement purposes—and thus far, doing so has been a challenge. This is because it involves huge volumes of data that need to be analyzed without neglecting their quantum mechanical structure.

Suitable tools for processing these data sets are particularly important for future applications. While traditional approaches cannot perform like-for-like computations of quantum systems beyond a certain scale, Paderborn’s scientists are using high-performance computing for characterization and certification tasks.

“By developing open-source customized algorithms using HPC, we perform quantum tomography on a megascale quantum photonic detector,” explains physicist Timon Schapeler, who authored the paper with computer scientist Dr. Robert Schade and colleagues from PhoQS (Institute for Photonic Quantum Systems) and PC2 (Paderborn Center for Parallel Computing).
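To give a flavor of what quantum detector tomography involves, here is a deliberately simplified sketch (not the Paderborn group's HPC software; the probe amplitudes, photon-number cutoff and the toy click/no-click detector are illustrative assumptions). A phase-insensitive detector is probed with coherent states of known amplitude, and the matrix Theta[k, m], the probability of outcome m given k photons, is fitted to the measured outcome frequencies; a full treatment enforces positivity and normalization with constrained optimization, which is where large detectors demand HPC-scale solvers:

import numpy as np
from math import factorial

def coherent_photon_probs(alpha, kmax):
    # Poissonian photon-number distribution |<k|alpha>|^2 of a coherent probe.
    k = np.arange(kmax + 1)
    facts = np.array([factorial(int(j)) for j in k], dtype=float)
    return np.exp(-abs(alpha) ** 2) * abs(alpha) ** (2 * k) / facts

def fit_detector(alphas, outcome_freqs, kmax):
    # Least-squares detector tomography: recover Theta[k, m] from the outcome
    # frequencies measured for a set of coherent probe amplitudes.
    F = np.array([coherent_photon_probs(a, kmax) for a in alphas])  # probes x Fock
    theta, *_ = np.linalg.lstsq(F, outcome_freqs, rcond=None)       # Fock x outcomes
    theta = np.clip(theta, 0.0, None)
    return theta / theta.sum(axis=1, keepdims=True)                 # rows sum to 1

# Toy usage: synthetic data from a lossy click/no-click detector (60% efficiency).
alphas = np.linspace(0.2, 3.0, 30)
eta, kmax = 0.6, 12
p_click = 1.0 - (1.0 - eta) ** np.arange(kmax + 1)   # true response per photon number
ideal = np.column_stack([1.0 - p_click, p_click])    # columns: no-click, click
freqs = np.array([coherent_photon_probs(a, kmax) @ ideal for a in alphas])
print(fit_detector(alphas, freqs, kmax)[:5])         # should approximate ideal[:5]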

PC2, an interdisciplinary research project at Paderborn University, operates the HPC systems. The university is one of Germany’s national high-performance computing centers and thus stands at the forefront of university high-performance computing.

‘Unprecedented scale’

“The findings are opening up entirely new horizons for the size of systems being analyzed in the field of scalable quantum photonics. This has wider implications, for example, for characterizing photonic quantum computer hardware,” Schapeler continues. Researchers were able to perform their calculations for describing a photon detector within just a few minutes—faster than ever before.

The system also managed to complete calculations involving huge quantities of data extremely quickly. Schapeler states, “This shows the unprecedented scale on which this tool can be used with quantum photonic systems. As far as we know, our work is the first contribution to the field of traditional high-performance computing enabling experimental quantum photonics at large scales.

“This field will become increasingly important when it comes to demonstrating quantum supremacy in quantum photonic experiments—and on a scale that cannot be calculated by conventional means.”

Shaping the future with fundamental research

Schapeler is a doctoral student in the “Mesoscopic Quantum Optics” research group headed by Professor Tim Bartley. This team conducts research into the fundamental physics of the quantum states of light and its applications. These states consist of tens, hundreds or thousands of photons.

“The scale is crucial, as this illustrates the fundamental advantage that quantum systems hold over conventional ones. There is a clear benefit in many areas, including measurement technology, data processing and communications,” Bartley explains.

More information: Timon Schapeler et al, Scalable quantum detector tomography by high-performance computing, Quantum Science and Technology (2024). DOI: 10.1088/2058-9565/ad8511

Journal information: Quantum Science and Technology 

Provided by Universität Paderborn

A new spectroscopy method reveals water’s quantum secrets

by Celia Luterbacher, Ecole Polytechnique Federale de Lausanne

Ph.D. student Eksha Chaudhary with the correlated vibrational spectroscopy setup. Credit: Jamani Caillet

For the first time, EPFL researchers have exclusively observed molecules participating in hydrogen bonds in liquid water, measuring electronic and nuclear quantum effects that were previously accessible only via theoretical simulations.

Water is synonymous with life, but the dynamic, multifaceted interaction that brings H2O molecules together, the hydrogen bond, remains mysterious. Hydrogen bonds result when a hydrogen atom of one water molecule interacts with an oxygen atom of a neighboring molecule, sharing electronic charge in the process.

This charge-sharing is a key feature of the three-dimensional “H-bond” network that gives liquid water its unique properties, but quantum phenomena at the heart of such networks have thus far been understood only through theoretical simulations.

Now, researchers led by Sylvie Roke, head of the Laboratory for Fundamental BioPhotonics in EPFL’s School of Engineering, have published a new method—correlated vibrational spectroscopy (CVS)—that enables them to measure how water molecules behave when they participate in H-bond networks.

Crucially, CVS allows scientists to distinguish between such participating (interacting) molecules, and randomly distributed, non-H-bonded (non-interacting) molecules. By contrast, any other method reports measurements on both molecule types simultaneously, making it impossible to distinguish between them.

“Current spectroscopy methods measure the scattering of laser light caused by the vibrations of all molecules in a system, so you have to guess or assume that what you are seeing is due to the molecular interaction you’re interested in,” Roke explains.

“With CVS, the vibrational mode of each different type of molecule has its own vibrational spectrum. And because each spectrum has a unique peak corresponding to water molecules moving back and forth along the H-bonds, we can measure directly their properties, such as how much electronic charge is shared, and how H-bond strength is impacted.”

The method, which the team says has “transformative” potential to characterize interactions in any material, has been published in Science.

To distinguish between interacting and non-interacting molecules, the scientists illuminated liquid water with femtosecond (one quadrillionth of a second) laser pulses in the near-infrared spectrum. These ultra-short bursts of light create tiny charge oscillations and atomic displacements in the water, which trigger the emission of visible light.

This emitted light appears in a scattering pattern that contains key information about the spatial organization of the molecules, while the color of the photons contains information about atomic displacements within and between molecules.

“Typical experiments place the spectrographic detector at a 90-degree angle to the incoming laser beam, but we realized that we could probe interacting molecules simply by changing the detector position, and recording spectra using certain combinations of polarized light. In this way, we can create separate spectra for non-interacting and interacting molecules,” Roke says.

The team conducted more experiments aimed at using CVS to tease apart the electronic and nuclear quantum effects of H-bond networks, for example by changing the pH of water through the addition of hydroxide ions (making it more basic), or protons (more acidic).

“Hydroxide ions and protons participate in H-bonding, so changing the pH of water changes its reactivity,” says Ph.D. student Mischa Flór, the paper’s first author.

“With CVS, we can now quantify exactly how much extra charge hydroxide ions donate to H-bond networks (8%), and how much charge protons accept from it (4%)—precise measurements that could never have been done experimentally before.”

These values were explained with the aid of advanced simulations conducted by collaborators in France, Italy, and the U.K.

The researchers emphasize that the method, which they also corroborated via theoretical calculations, can be applied to any material, and indeed several new characterization experiments are already underway.

“The ability to quantify directly H-bonding strength is a powerful method that can be used to clarify molecular-level details of any solution, for example containing electrolytes, sugars, amino acids, DNA, or proteins,” Roke says. “As CVS is not limited to water, it can also deliver a wealth of information on other liquids, systems, and processes.”

More information: Mischa Flór et al, Dissecting the hydrogen bond network of water: Charge transfer and nuclear quantum effects, Science (2024). DOI: 10.1126/science.ads4369

Journal information: Science 

Scientists discover a promising way to create new superheavy elements

by David Appell , Phys.org

A chart of superheavy elements (SHEs), plotted by atomic number (protons) versus number of neutrons. Boxes are discovered SHEs, with predicted half-lives. The circle marks an island of stability. Credit: Wikimedia Commons

What is the heaviest element in the universe? Are there infinitely many elements? Where and how could superheavy elements be created naturally?

The heaviest element known to exist in abundance is uranium, with 92 protons (the atomic number “Z”). But scientists have succeeded in synthesizing superheavy elements up to oganesson, with a Z of 118. Immediately before it are livermorium, with 116 protons, and tennessine, with 117.

All have short half-lives—the amount of time for half of an assembly of the element’s atoms to decay—usually less than a second and some as short as a microsecond. Creating and detecting such elements is not easy and requires powerful particle accelerators and elaborate measurements.

But the typical way of producing high-Z elements is reaching its limit. In response, a group of scientists from the United States and Europe have come up with a new method to produce superheavy elements beyond the dominant existing technique. Their work, done at the Lawrence Berkeley National Laboratory in California, was published in Physical Review Letters.

“Today, the concept of an ‘island of stability’ remains an intriguing topic, with its exact position and extent on the Segré chart continuing to be a subject of active pursuit both in theoretical and experimental nuclear physics,” J.M. Gates of LBNL and colleagues wrote in their paper.

The island of stability is a region where superheavy elements and their isotopes—nuclei with the same number of protons but different numbers of neutrons—may have much longer half-lives than the elements near it. It’s been expected to occur for isotopes near Z=112.

While there have been several techniques to discover superheavy elements and create their isotopes, one of the most fruitful has been to bombard targets from the actinide series of elements with a beam of calcium atoms, specifically an isotope of calcium, 48-calcium (48Ca), that has 20 protons and 28 (48 minus 20) neutrons. The actinide elements have proton numbers from 89 to 103, and 48Ca is special because it has a “magic number” of both protons and neutrons, meaning their numbers completely fill the available energy shells in the nucleus.

Proton and/or neutron numbers being magic means the nucleus is extremely stable; for example, 48Ca has a half-life of about 60 billion billion (6 × 10^19) years, far larger than the age of the universe. (By contrast, 49Ca, with just one more neutron, decays by half in about nine minutes.)
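The comparison follows from the exponential decay law,

\[ N(t) = N_0 \, 2^{-t/T_{1/2}}, \]

where T₁/₂ is the half-life: with a half-life of about nine minutes, only around 10% of a sample of 49Ca survives after half an hour (2^{−30/9} ≈ 0.1), whereas essentially no 48Ca has decayed over the entire age of the universe.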

These reactions are called “hot-fusion” reactions. Another technique, known as “cold fusion,” accelerated beams of isotopes from 50-titanium to 70-zinc onto targets of lead or bismuth. Together, these two types of reactions produced the superheavy elements up to oganesson (Z=118).

But producing new superheavy elements, whose likelihood is quantified by the reaction cross section, has been taking longer and longer, sometimes weeks of running time. With the predicted island of stability so close, scientists need techniques that go beyond oganesson. Targets of einsteinium or fermium, themselves heavy actinides, cannot be produced in sufficient quantities to make a suitable target.

“A new reaction approach is required,” wrote Gates and her team. And that is what they found.

Theoretical models of the nucleus have successfully predicted the production rates of superheavy elements below oganesson using actinide targets and beams of isotopes heavier than 48-calcium. These models also agree that to produce elements with Z=119 and Z=120, beams of 50-titanium would work best, having the highest cross sections.

But not all necessary parameters have been pinned down by theorists, such as the necessary energy of the beams, and some of the masses needed for the models haven’t been measured by experimentalists. The exact numbers are important because the production rates of the superheavy elements could otherwise vary enormously.

Several experimental efforts to produce atoms with proton numbers from 119 to 122 have already been made. All were unsuccessful, and the limits they placed on the cross sections have not allowed different theoretical nuclear models to be constrained. Gates and her team investigated the production of isotopes of livermorium (Z=116) by beaming 50-titanium onto targets of 244-Pu (plutonium).

Using the 88-Inch Cyclotron at Lawrence Berkeley National Laboratory, the team produced a beam averaging 6 trillion titanium ions per second exiting the cyclotron. These impacted the plutonium target, which had a circular area of 12.2 cm², over a 22-day period. From a slew of measurements, they determined that 290-livermorium had been produced via two different nuclear decay chains.

“This is the first reported production of a SHE [superheavy element] near the predicted island of stability with a beam other than 48-calcium,” they concluded. The reaction cross section, or probability of interaction, did decrease, as was expected with heavier beam isotopes, but “success of this measurement validates that discoveries of new SHE are indeed within experimental reach.”

The discovery represents the first time a collision of non-magic nuclei has shown the potential to create new superheavy elements and isotopes, hopefully paving the way for future discoveries. About 110 isotopes of superheavy elements are known to exist, but another 50 are expected to be out there, waiting to be uncovered by new techniques such as this.

More information: J. M. Gates et al, Toward the Discovery of New Elements: Production of Livermorium ( Z=116 ) with Ti50, Physical Review Letters (2024). DOI: 10.1103/PhysRevLett.133.172502

Journal information: Physical Review Letters