Researchers use high-performance computing to analyze a quantum photonics experiment

by Universität Paderborn

Quantum experiments and high-performance computing: new methods enable complex calculations to be completed extremely quickly
Scientists at Paderborn University have for the first time used high-performance computing (pictured at right: the Paderborn supercomputer Noctua) to analyze a quantum photonics experiment on a large scale. Credit: Paderborn University, Hennig/Mazhiq

For the first time ever, scientists at Paderborn University have used high-performance computing (HPC) at large scales to analyze a quantum photonics experiment. Specifically, this involved the tomographic reconstruction of experimental data from a quantum detector, a device that measures individual photons.

The researchers involved developed new HPC software to achieve this. Their findings have now been published in the journal Quantum Science and Technology.

Quantum tomography on a megascale photonic quantum detector

High-resolution photon detectors are increasingly being used for quantum research. Precisely characterizing these devices is crucial if they are to be put to effective use for measurement purposes—and thus far, doing so has been a challenge. This is because it involves huge volumes of data that need to be analyzed without neglecting their quantum mechanical structure.

Suitable tools for processing these data sets are particularly important for future applications. While traditional approaches cannot perform like-for-like computations of quantum systems beyond a certain scale, Paderborn’s scientists are using high-performance computing for characterization and certification tasks.

“By developing open-source customized algorithms using HPC, we perform quantum tomography on a megascale quantum photonic detector,” explains physicist Timon Schapeler, who authored the paper with computer scientist Dr. Robert Schade and colleagues from PhoQS (Institute for Photonic Quantum Systems) and PC2 (Paderborn Center for Parallel Computing).
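Detector tomography of this kind ultimately reduces to a large, constrained linear inversion: the detector is probed with light states whose photon statistics are known, and the recorded outcome frequencies are fitted to recover the detector's response. The team's HPC software is not reproduced here; the following is only a minimal, single-workstation sketch of that underlying idea for a hypothetical phase-insensitive on/off ("click") detector probed with coherent states, with every parameter chosen purely for illustration.

```python
import numpy as np
from scipy.optimize import lsq_linear
from scipy.stats import poisson

np.random.seed(0)

# Illustrative sketch only: tomography of a phase-insensitive on/off detector.
# The click probability for a probe of mean photon number mu is a Poisson
# mixture  p(mu) = sum_n Pois(n; mu) * theta_n,  where theta_n is the chance
# that the detector clicks given n incident photons.  Recovering the vector
# theta from measured click rates is a bounded linear inversion.

n_max = 30                                   # photon-number cutoff (assumed)
mean_photons = np.linspace(0.1, 10.0, 40)    # probe intensities |alpha|^2
A = np.array([poisson.pmf(np.arange(n_max + 1), mu) for mu in mean_photons])

# Simulated "experimental" data: a detector with 60% efficiency plus shot noise
eta, shots = 0.6, 20_000
p_true = 1.0 - np.exp(-eta * mean_photons)
clicks = np.random.binomial(shots, p_true) / shots

# Physicality constraint 0 <= theta_n <= 1 enforced via bounded least squares
fit = lsq_linear(A, clicks, bounds=(0.0, 1.0))
theta = fit.x                                # reconstructed response vs photon number
print(theta[:6])                             # roughly follows 1 - (1 - eta)**n
```

In the study itself the detector resolves far more outcomes and the data set is vastly larger, which is what turns this small bounded least-squares problem into one worth running on a supercomputer.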

PC2, an interdisciplinary research institute at Paderborn University, operates the HPC systems. The university hosts one of Germany's national high-performance computing centers and thus stands at the forefront of university high-performance computing.

‘Unprecedented scale’

“The findings are opening up entirely new horizons for the size of systems being analyzed in the field of scalable quantum photonics. This has wider implications, for example, for characterizing photonic quantum computer hardware,” Schapeler continues. Researchers were able to perform their calculations for describing a photon detector within just a few minutes—faster than ever before.

The system also managed to complete calculations involving huge quantities of data extremely quickly. Schapeler states, “This shows the unprecedented scale on which this tool can be used with quantum photonic systems. As far as we know, our work is the first contribution to the field of traditional high-performance computing enabling experimental quantum photonics at large scales.

“This field will become increasingly important when it comes to demonstrating quantum supremacy in quantum photonic experiments—and on a scale that cannot be calculated by conventional means.”

Shaping the future with fundamental research

Schapeler is a doctoral student in the “Mesoscopic Quantum Optics” research group headed by Professor Tim Bartley. This team conducts research into the fundamental physics of quantum states of light and their applications. These states consist of tens, hundreds or thousands of photons.

“The scale is crucial, as this illustrates the fundamental advantage that quantum systems hold over conventional ones. There is a clear benefit in many areas, including measurement technology, data processing and communications,” Bartley explains.

More information: Timon Schapeler et al, Scalable quantum detector tomography by high-performance computing, Quantum Science and Technology (2024). DOI: 10.1088/2058-9565/ad8511

Journal information: Quantum Science and Technology 

Provided by Universität Paderborn

A new spectroscopy method reveals water’s quantum secrets

by Celia Luterbacher, Ecole Polytechnique Federale de Lausanne

A new spectroscopy reveals water's quantum secrets
Ph.D. student Eksha Chaudhary with the correlated vibrational spectroscopy setup. Credit: Jamani Caillet

For the first time, EPFL researchers have selectively observed only those molecules participating in hydrogen bonds in liquid water, measuring electronic and nuclear quantum effects that were previously accessible only via theoretical simulations.

Water is synonymous with life, but the dynamic, multifaceted interaction that brings H2O molecules together—the hydrogen bond—remains mysterious. Hydrogen bonds form when a hydrogen atom on one water molecule interacts with an oxygen atom on a neighboring molecule, with the two molecules sharing electronic charge in the process.

This charge-sharing is a key feature of the three-dimensional “H-bond” network that gives liquid water its unique properties, but quantum phenomena at the heart of such networks have thus far been understood only through theoretical simulations.

Now, researchers led by Sylvie Roke, head of the Laboratory for Fundamental BioPhotonics in EPFL’s School of Engineering, have published a new method—correlated vibrational spectroscopy (CVS)—that enables them to measure how water molecules behave when they participate in H-bond networks.

Crucially, CVS allows scientists to distinguish between such participating (interacting) molecules and randomly distributed, non-H-bonded (non-interacting) molecules. By contrast, other methods report measurements on both types of molecule simultaneously, making it impossible to distinguish between them.

“Current spectroscopy methods measure the scattering of laser light caused by the vibrations of all molecules in a system, so you have to guess or assume that what you are seeing is due to the molecular interaction you’re interested in,” Roke explains.

“With CVS, the vibrational mode of each different type of molecule has its own vibrational spectrum. And because each spectrum has a unique peak corresponding to water molecules moving back and forth along the H-bonds, we can measure directly their properties, such as how much electronic charge is shared, and how H-bond strength is impacted.”

The method, which the team says has “transformative” potential to characterize interactions in any material, has been published in Science.

To distinguish between interacting and non-interacting molecules, the scientists illuminated liquid water with femtosecond (one quadrillionth of a second) laser pulses in the near-infrared spectrum. These ultra-short bursts of light create tiny charge oscillations and atomic displacements in the water, which trigger the emission of visible light.

This emitted light appears in a scattering pattern that contains key information about the spatial organization of the molecules, while the color of the photons contains information about atomic displacements within and between molecules.

“Typical experiments place the spectrographic detector at a 90-degree angle to the incoming laser beam, but we realized that we could probe interacting molecules simply by changing the detector position, and recording spectra using certain combinations of polarized light. In this way, we can create separate spectra for non-interacting and interacting molecules,” Roke says.

The team conducted more experiments aimed at using CVS to tease apart the electronic and nuclear quantum effects of H-bond networks, for example by changing the pH of water through the addition of hydroxide ions (making it more basic), or protons (more acidic).

“Hydroxide ions and protons participate in H-bonding, so changing the pH of water changes its reactivity,” says Ph.D. student Mischa Flór, the paper’s first author.

“With CVS, we can now quantify exactly how much extra charge hydroxide ions donate to H-bond networks (8%), and how much charge protons accept from it (4%)—precise measurements that could never have been done experimentally before.”

These values were explained with the aid of advanced simulations conducted by collaborators in France, Italy, and the U.K.

The researchers emphasize that the method, which they also corroborated via theoretical calculations, can be applied to any material, and indeed several new characterization experiments are already underway.

“The ability to quantify directly H-bonding strength is a powerful method that can be used to clarify molecular-level details of any solution, for example containing electrolytes, sugars, amino acids, DNA, or proteins,” Roke says. “As CVS is not limited to water, it can also deliver a wealth of information on other liquids, systems, and processes.”

More information: Mischa Flór et al, Dissecting the hydrogen bond network of water: Charge transfer and nuclear quantum effects, Science (2024). DOI: 10.1126/science.ads4369

Journal information: Science 

Scientists discover a promising way to create new superheavy elements

by David Appell , Phys.org

Researchers discover a promising way to create new superheavy elements
A chart of superheavy elements (SHEs), plotted by atomic number (protons) versus number of neutrons. Boxes are discovered SHEs, with predicted half-lives. The circle marks the predicted island of stability. Credit: Wikimedia Commons

What is the heaviest element in the universe? Are there infinitely many elements? Where and how could superheavy elements be created naturally?

The heaviest naturally abundant element known to exist is uranium, with 92 protons (the atomic number “Z”). But scientists have succeeded in synthesizing superheavy elements up to oganesson, with a Z of 118. Immediately before it are livermorium, with 116 protons, and tennessine, with 117.

All have short half-lives—the amount of time for half of an assembly of the element’s atoms to decay—usually less than a second and some as short as a microsecond. Creating and detecting such elements is not easy and requires powerful particle accelerators and elaborate measurements.

But the typical way of producing high-Z elements is reaching its limit. In response, a group of scientists from the United States and Europe has come up with a new method for producing superheavy elements that goes beyond the dominant existing technique. Their work, done at the Lawrence Berkeley National Laboratory in California, was published in Physical Review Letters.

“Today, the concept of an ‘island of stability’ remains an intriguing topic, with its exact position and extent on the Segrè chart continuing to be a subject of active pursuit both in theoretical and experimental nuclear physics,” J.M. Gates of LBNL and colleagues wrote in their paper.

The island of stability is a region where superheavy elements and their isotopes—nuclei with the same number of protons but different numbers of neutrons—may have much longer half-lives than the elements near it. It’s been expected to occur for isotopes near Z=112.

While there have been several techniques to discover superheavy elements and create their isotopes, one of the most fruitful has been to bombard targets from the actinide series of elements with a beam of calcium ions, specifically the isotope calcium-48 (48Ca), which has 20 protons and 28 (48 minus 20) neutrons. The actinide elements have proton numbers from 89 to 103, and 48Ca is special because it has a “magic number” of both protons and neutrons, meaning their numbers completely fill the available energy shells in the nucleus.

Proton and/or neutron numbers being magic means the nucleus is extremely stable; for example, 48Ca has a half-life of about 60 billion billion (6 × 10¹⁹) years, far longer than the age of the universe. (By contrast, 49Ca, with just one more neutron, decays by half in about nine minutes.)
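Those half-lives can be made concrete with the standard exponential-decay relation N(t) = N0 · 2^(−t/T½). As a rough, purely illustrative comparison, here is how much of a sample of each isotope would remain after a single day:

```python
# Fraction of a sample remaining after time t, given half-life t_half:
# N(t)/N0 = 2 ** (-t / t_half)
def remaining_fraction(t_seconds, t_half_seconds):
    return 2.0 ** (-t_seconds / t_half_seconds)

day = 24 * 3600.0
t_half_ca48 = 6e19 * 365.25 * day            # ~6 x 10^19 years, in seconds
t_half_ca49 = 9 * 60.0                       # about nine minutes, in seconds

print(remaining_fraction(day, t_half_ca48))  # ~1.0 (essentially nothing decays)
print(remaining_fraction(day, t_half_ca49))  # ~2**(-160), effectively zero
```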

These reactions are called “hot-fusion” reactions. In another technique, known as “cold-fusion” reactions, beams of isotopes ranging from titanium-50 to zinc-70 were accelerated onto targets of lead or bismuth. Superheavy elements up to oganesson (Z=118) were discovered with these two approaches.

But the time needed to produce new superheavy elements, which is governed by the reaction cross section (a measure of the probability that the reaction occurs), has been growing longer and longer, sometimes amounting to weeks of running time. With the predicted island of stability so close, scientists need techniques that can go beyond oganesson. Targets of einsteinium or fermium, themselves among the heaviest elements ever made, cannot be produced in quantities sufficient to make a suitable target.

“A new reaction approach is required,” wrote Gates and her team. And that is what they found.

Theoretical models of the nucleus have successfully predicted the production rates of superheavy elements below oganesson using actinide targets and beams of isotopes heavier than 48-calcium. These models also agree that to produce elements with Z=119 and Z=120, beams of 50-titanium would work best, having the highest cross sections.

But not all necessary parameters have been pinned down by theorists, such as the necessary energy of the beams, and some of the masses needed for the models haven’t been measured by experimentalists. The exact numbers are important because the production rates of the superheavy elements could otherwise vary enormously.

Several experimental efforts to produce atoms with proton numbers from 119 to 122 have already been attempted. None has succeeded, and the limits they determined for the cross sections have not allowed different theoretical nuclear models to be constrained. Gates and her team investigated the production of isotopes of livermorium (Z=116) by beaming 50-titanium onto targets of 244-Pu (plutonium).

Using the 88-Inch Cyclotron accelerator at Lawrence Berkeley National Laboratory, the team produced a beam averaging 6 trillion titanium ions per second exiting the cyclotron. This beam impacted the plutonium target, which had a circular area of 12.2 cm², over a 22-day period. Making a slew of measurements, they determined that 290-livermorium had been produced via two different nuclear decay chains.
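The nuclear bookkeeping behind that result is simple to check: fusing titanium-50 (22 protons) with plutonium-244 (94 protons) gives a compound nucleus with 116 protons and mass number 294, and the observed livermorium-290 is consistent with that compound nucleus shedding four neutrons. A quick, purely illustrative check:

```python
# Proton and mass-number bookkeeping for the 50Ti + 244Pu reaction
Z_ti, A_ti = 22, 50        # titanium-50 beam
Z_pu, A_pu = 94, 244       # plutonium-244 target

Z_compound = Z_ti + Z_pu   # 116 -> livermorium
A_compound = A_ti + A_pu   # 294

A_observed = 290           # livermorium-290 reported by the team
neutrons_shed = A_compound - A_observed
print(Z_compound, A_compound, neutrons_shed)   # 116 294 4
```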

“This is the first reported production of a SHE [superheavy element] near the predicted island of stability with a beam other than 48-calcium,” they concluded. The reaction cross section, or probability of interaction, did decrease, as was expected with heavier beam isotopes, but “success of this measurement validates that discoveries of new SHE are indeed within experimental reach.”

The discovery represents the first time a collision of non-magic nuclei has been shown to have the potential to create new superheavy atoms and isotopes, hopefully paving the way for future discoveries. About 110 isotopes of superheavy elements are known to exist, but another 50 are expected to be out there, waiting to be uncovered by new techniques such as this.

More information: J. M. Gates et al, Toward the Discovery of New Elements: Production of Livermorium (Z=116) with 50Ti, Physical Review Letters (2024). DOI: 10.1103/PhysRevLett.133.172502

Journal information: Physical Review Letters 

Investigating the flow of fluids with non-monotonic, ‘S-shaped’ rheology

by SciencePOD

Investigating the flow of fluids with non-monotonic, ‘S-shaped’ rheology
Sketch of shear banding (top left) and vorticity banding (top right) as proposed by [18]. For shear banding, the rheological curve τ(γ̇) is single-valued but non-monotonic (bottom left). For vorticity banding it is the γ̇(τ) curve that is single-valued and non-monotonic (bottom right). Credit: The European Physical Journal E (2024). DOI: 10.1140/epje/s10189-024-00444-5

Water and oil, and some other simple fluids, respond in the same way to all levels of shear stress. These are termed Newtonian fluids, and their viscosity is constant for all stresses although it will vary with temperature. Under different stresses and pressure gradients, other non-Newtonian fluids exhibit patterns of behavior that are much more complex.

Researchers Laurent Talon and Dominique Salin from Université Paris-Saclay, France, have now shown that under certain circumstances, cornstarch suspensions can display a banding pattern with alternating regions of high and low viscosity. This work has been published in The European Physical Journal E.

Non-Newtonian fluids may exhibit shear thinning, where the viscosity decreases with stress; common examples include ketchups and sauces that can appear almost solid-like at rest. The reverse is shear thickening, in which viscosity increases with stress. Some suspensions exhibit a property called discontinuous shear thickening (DST).
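A convenient way to picture shear thinning and thickening is the textbook power-law (Ostwald-de Waele) model, in which stress scales as a power of the shear rate; it is not the model used in this study, but it shows how the apparent viscosity rises or falls with shear rate depending on a single exponent. A minimal sketch with illustrative parameters:

```python
import numpy as np

# Power-law (Ostwald-de Waele) fluid:  tau = K * gamma_dot**n
# n < 1  -> shear thinning (apparent viscosity falls with shear rate)
# n = 1  -> Newtonian
# n > 1  -> shear thickening (apparent viscosity rises with shear rate)
def apparent_viscosity(gamma_dot, K, n):
    return K * gamma_dot ** (n - 1.0)

gamma_dot = np.logspace(-1, 2, 7)           # shear rates in 1/s (illustrative)
for label, n in [("thinning", 0.5), ("Newtonian", 1.0), ("thickening", 1.5)]:
    print(label, apparent_viscosity(gamma_dot, K=1.0, n=n).round(3))
```

Discontinuous shear thickening and the S-shaped rheology discussed below are precisely the regimes where such a simple, monotonic description breaks down.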

“At low shear stress [these fluids] behave like Newtonian fluids, but at a certain stress value the viscosity increases very steeply,” explains Talon.

In 2014, Matthew Wyart of New York University, NY, U.S., and Michael Cates of the University of Edinburgh, Scotland, proposed a similar but even more counter-intuitive and interesting model: a so-called “S-shaped” rheology where the viscosity of a fluid first increases with increasing stress and then decreases.

Talon and Salin set out to investigate the plausibility of this simulated rheology using a suspension of cornstarch in a straight, cylindrical capillary tube. They observed the expected non-monotonic relationship between pressure and flow rate, but not exactly as predicted: the flow rate initially increased with pressure but then suddenly decreased.

“Assuming that the Wyart-Cates model is essentially correct, one solution that would match what we observed could be a ‘rheological segregation’ or ‘streamwise banding’ in the tube, in which some regions have a high viscosity and others a lower one,” explains Talon. “We are continuing to investigate the validity of this proposal, both experimentally and using numerical simulations.”

More information: L. Talon et al, On pressure-driven Poiseuille flow with non-monotonic rheology, The European Physical Journal E (2024). DOI: 10.1140/epje/s10189-024-00444-5

Journal information: European Physical Journal E 

New partially coherent unidirectional imaging system enhances visual data transmission

by UCLA Engineering Institute for Technology Advancement

Researchers Introduce Partially Coherent Unidirectional Imaging Systems
The unidirectional diffractive processor transmits high-quality images in the forward propagation direction, represented with the blue line, from field of view (FOV) A to FOV B, while effectively blocking the image formation in the backward propagation direction, represented with the brown line, from FOV B to FOV A. Credit: Ozcan Lab, UCLA

A team of researchers from the University of California, Los Angeles (UCLA) has unveiled a new development in optical imaging technology that could significantly enhance visual information processing and communication systems.

The work is published in the journal Advanced Photonics Nexus.

The new system, based on partially coherent unidirectional imaging, offers a compact, efficient solution for transmitting visual data in one direction while blocking transmission in the opposite direction.

This innovative technology, developed by a team led by Professor Aydogan Ozcan, is designed to selectively transmit high-quality images in one direction, from field-of-view A to field-of-view B, while deliberately distorting images when viewed from the reverse direction, B to A.

This asymmetric image transmission could have broad implications for fields like privacy protection, augmented reality, and optical communications, offering new capabilities for managing how visual optical information is processed and transmitted.

Unidirectional imaging under partially coherent light

The new system addresses a challenge in optical engineering: how to control light transmission to enable clear imaging in one direction while blocking it in the reverse.

Previous solutions for unidirectional wave transmission have often relied on complex methods such as temporal modulation, nonlinear materials, or high-power beams under fully coherent illumination, which limit their practical applications.

In contrast, this UCLA innovation leverages partially coherent light to achieve high image quality and power efficiency in the forward direction (A to B), while intentionally introducing distortion and reduced power efficiency in the reverse direction (B to A).

“We engineered a set of spatially optimized diffractive layers that interact with partially coherent light in a way that promotes this asymmetric transmission,” explains Dr. Ozcan. “This system can work efficiently with common illumination sources like LEDs, making it adaptable for a variety of practical applications.”

Researchers Introduce Partially Coherent Unidirectional Imaging Systems
Conceptual illustration of the technology. Credit: UCLA Engineering Institute for Technology Advancement

Leveraging deep learning for enhanced optical design

A key aspect of this development is the use of deep learning to physically design the diffractive layers that make up the unidirectional imaging system. The UCLA team optimized these layers for partially coherent light with a phase correlation length greater than 1.5 times the wavelength of the light.
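Diffractive processors of this type are commonly modeled as a stack of thin phase masks separated by stretches of free-space propagation, with the mask values treated as trainable parameters. The sketch below is not the UCLA team's code and omits the training loop entirely; it only illustrates the standard angular-spectrum forward model that such a design procedure would evaluate repeatedly (grid size, sampling and layer spacing are arbitrary placeholders).

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a sampled complex field a distance z (angular-spectrum method)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = (2.0 * np.pi / wavelength) * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * z) * (arg > 0)     # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# Toy forward pass: a plane wave through two (here random) phase layers.
wl, dx, spacing = 1.0, 0.5, 40.0                   # in units of the wavelength
rng = np.random.default_rng(0)
layers = [np.exp(2j * np.pi * rng.random((128, 128))) for _ in range(2)]

field = np.ones((128, 128), dtype=complex)
for phase_mask in layers:
    field = angular_spectrum_propagate(field * phase_mask, wl, dx, spacing)
output_intensity = np.abs(field) ** 2
```

In an actual design, the random phases would be replaced by masks optimized, for example by gradient descent, so that propagation from A to B forms a faithful image while propagation from B to A does not.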

This careful optimization ensures that the system provides reliable unidirectional image transmission, even when the light source has varying coherence properties. Each imager is compact, axially spanning less than 75 wavelengths, and features a polarization-independent design.

The deep learning algorithms used in the design process help ensure that the system maintains high diffraction efficiency in the forward direction while suppressing image formation in the reverse.

The researchers demonstrated that their system performs consistently across different image datasets and illumination conditions, showing resilience to changes in the light’s coherence properties. “The ability of our system to generalize across different types of input images and light properties is one of its exciting features,” says Dr. Ozcan.

Looking ahead, the researchers plan to extend the unidirectional imager to different parts of the spectrum, including infrared and visible ranges, and to explore various kinds of illumination sources.

These advancements could push the boundaries of imaging and sensing, unlocking new applications and innovations. In privacy protection, for example, the technology could be used to prevent sensitive information from being visible from unintended perspectives. Similarly, augmented and virtual reality systems could use this technology to control how information is displayed to different viewers.

“This technology has the potential to impact multiple fields where controlling the flow of visual information is critical,” adds Dr. Ozcan. “Its compact design and compatibility with widely available light sources make it especially promising for integration into existing systems.”

This research was conducted by an interdisciplinary team from UCLA’s Department of Electrical and Computer Engineering and California NanoSystems Institute (CNSI).

More information: Guangdong Ma et al, Unidirectional imaging with partially coherent light, Advanced Photonics Nexus (2024). DOI: 10.1117/1.APN.3.6.066008

Provided by UCLA Engineering Institute for Technology Advancement 

Research team achieves first-ever acceleration of positive muons to 100 keV

by Bob Yirka , Phys.org

Team at J-PARC demonstrates acceleration of positive muons from thermal energy to 100 keV
Schematic drawing of the experimental setup. Credit: arXiv (2024). DOI: 10.48550/arxiv.2410.11367

A team of engineers and physicists affiliated with a host of institutions across Japan, working at the Japan Proton Accelerator Research Complex, has demonstrated acceleration of positive muons from thermal energy to 100 keV—the first time muons have been accelerated in a stable way. The group has published a paper describing their work on the arXiv preprint server.

Muons are sub-atomic particles similar to electrons. The main difference is their mass; a muon is 200 times heavier than an electron. They are also much shorter lived. Physicists have for many years wanted to build a muon collider to conduct new types of physics research, such as experiments that go beyond the standard model.

Unfortunately, such efforts have been held back by the extremely short muon lifespan—approximately 2 microseconds—after which they decay to electrons and neutrinos. Making things even more difficult is their tendency to zip around haphazardly, which makes forming them into a single beam extremely challenging. In this new effort, the research team has overcome such obstacles using a new technique.

The team started by shooting positively charged muons into a specially designed silica-based aerogel, similar to that used for thermal insulation applications. As the muons slowed and captured electrons in the aerogel, muonium atoms (μ⁺e⁻, an exotic atom consisting of a positive muon and an electron) were formed. The research team then fired a laser at them to strip off the electrons, turning them back into positive muons, but with greatly diminished speed.

The following step involved guiding the slowed muons into a radio-frequency cavity, where an electric field accelerated them to a final energy of 100 keV, achieving approximately 4% of the speed of light.
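That 4% figure follows directly from relativistic kinematics: 100 keV of kinetic energy is tiny compared with the muon's rest energy of roughly 105.7 MeV, so the resulting speed is a small fraction of the speed of light. A quick back-of-the-envelope check (the rest-energy value is taken from standard particle tables):

```python
import math

# Speed of a muon with 100 keV of kinetic energy, from relativistic kinematics:
#   gamma = 1 + KE / (m c^2),   beta = sqrt(1 - 1/gamma^2)
muon_rest_energy_mev = 105.658   # muon rest mass energy, MeV
kinetic_energy_mev = 0.100       # 100 keV

gamma = 1.0 + kinetic_energy_mev / muon_rest_energy_mev
beta = math.sqrt(1.0 - 1.0 / gamma**2)
print(f"v/c = {beta:.3f}")       # ~0.043, i.e. roughly 4% of the speed of light
```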

The research team acknowledges that despite their achievement, building a working muon collider is still a distant goal. And while their technique might play a role in such a development, there are still problems that must be worked out, such as how to scale an apparatus to a usable size.

More information: S. Aritome et al, Acceleration of positive muons by a radio-frequency cavity, arXiv (2024). DOI: 10.48550/arxiv.2410.11367

Journal information: arXiv 

Scientists transport protons in truck, paving way for antimatter delivery

by Sarah Charley, CERN

BASE experiment takes a big step towards portable antimatter
The BASE-STEP transportable trap system, lifted by crane through the AD hall before being loaded onto a truck. The team monitored all the parameters during transport. Credit: CERN

Antimatter might sound like something out of science fiction, but at the CERN Antiproton Decelerator (AD), scientists produce and trap antiprotons every day. The BASE experiment can even contain them for more than a year—an impressive feat considering that antimatter and matter annihilate upon contact.

The CERN AD hall is the only place in the world where scientists are able to store and study antiprotons. But this is something that scientists working on the BASE experiment hope to change one day with their subproject BASE-STEP: an apparatus designed to store and transport antimatter.

Most recently, the team of scientists and engineers took an important step towards this goal by transporting a cloud of 70 protons in a truck across CERN’s main site.

“If you can do it with protons, it will also work with antiprotons,” said Christian Smorra, the leader of BASE-STEP. “The only difference is that you need a much better vacuum chamber for the antiprotons.”

This is the first time that loose particles have been transported in a reusable trap that scientists can open at a new location and transfer the contents into another experiment. The end goal is to create an antiproton-delivery service from CERN to experiments located at other laboratories.

Antimatter is a naturally occurring class of particles that is almost identical to ordinary matter except that the charges and magnetic properties are reversed. Its near-total absence from the universe has baffled scientists for decades, because according to the laws of physics, the Big Bang should have produced equal amounts of matter and antimatter. These equal-but-opposite particles would have quickly annihilated each other, leaving a simmering but empty universe. Physicists suspect that there are hidden differences that can explain why matter survived and antimatter all but disappeared.

The BASE experiment aims to answer this question by precisely measuring the properties of antiprotons, such as their intrinsic magnetic moment, and then comparing these measurements with those taken with protons. However, the precision the experiment can achieve is limited by its location.

“The accelerator equipment in the AD hall generates magnetic field fluctuations that limit how far we can push our precision measurements,” said BASE spokesperson Stefan Ulmer. “If we want to get an even deeper understanding of the fundamental properties of antiprotons, we need to move out.”

This is where BASE-STEP comes in. The goal is to trap antiprotons and then transfer them to a facility where scientists can study them with a greater precision. To be able to do this, they need a device that is small enough to be loaded onto a truck and can resist the bumps and vibrations that are inevitable during ground transport.

BASE experiment takes a big step towards portable antimatter
The transportable trap being carefully loaded in the truck before going for a road trip across CERN’s main site. Credit: CERN

The current apparatus—which includes a superconducting magnet, cryogenic cooling, power reserves, and a vacuum chamber that traps the particles using magnetic and electric fields—weighs 1,000 kilograms and needs two cranes to be lifted out of the experimental hall and onto the truck. Even though it weighs a ton, BASE-STEP is much more compact than any existing system used to study antimatter. For example, it has a footprint that is five times smaller than the original BASE experiment, as it must be narrow enough to fit through ordinary laboratory doors.

During the rehearsal, the scientists used trapped protons as a stand-in for antiprotons. Protons are a key ingredient of every atom, the simplest of which is hydrogen (one proton and one electron). But storing protons as loose particles and then moving them onto a truck is a challenge, because any tiny disturbance can knock the unbonded protons out of the trap and into the surrounding matter.

“When it’s transported by road, our trap system is exposed to acceleration and vibrations, and laboratory experiments are usually not designed for this,” Smorra said. “We needed to build a trap system that is robust enough to withstand these forces, and we have now put this to a real test for the first time.”

However, Smorra noted that the biggest potential hurdle isn’t currently the bumpiness of the road but traffic jams.

“If the transport takes too long, we will run out of helium at some point,” he said. Liquid helium keeps the trap’s superconducting magnet at a temperature below 8.2 Kelvin: its maximum operating temperature. If the drive takes too long, the magnetic field will be lost and the trapped particles will be released and vanish as soon as they touch ordinary matter.

“Eventually, we want to be able to transport antimatter to our dedicated precision laboratories at the Heinrich Heine University in Düsseldorf, which will allow us to study antimatter with at least 100-fold improved precision,” Smorra said. “In the longer term, we want to transport it to any laboratory in Europe. This means that we need to have a power generator on the truck. We are currently investigating this possibility.”

After this successful test, which included ample monitoring and data-taking, the team plans to refine its procedure with the goal of transporting antimatter next year.

“This is a totally new technology that will open the door for new possibilities of study, not only with antiprotons but also with other exotic particles, such as ultra-highly-charged ions,” Ulmer said.

Another experiment, PUMA, is preparing a transportable trap. Next year, it plans to transport antiprotons 600 meters from the AD hall to CERN’s ISOLDE facility in order to use them to study the properties and structure of exotic atomic nuclei.

Provided by CERN 

Cool journey to the center of the Earth: Researchers build superconducting cryomodule prototype

by Karyn Houston, US Department of Energy

Cool journey to the center of the Earth
The fully assembled prototype high-beta 650-megahertz cryomodule. Four of these will make up the final stage in Fermilab’s new linear accelerator. Credit: Saravan Chandrasekaran, Fermilab

Patience and complexity are the hallmarks of fundamental scientific research. Work at the Department of Energy (DOE) Office of Science takes time.

Case in point: Technical staff at the DOE’s Fermi National Accelerator Laboratory have built a prototype of a superconducting cryomodule for the Proton Improvement Plan II (PIP-II) project.

Four of these 39-foot-long vessels, which weigh an astonishing 27,500 pounds each, will be responsible for accelerating hydrogen ions to more than 80% of the speed of light. Ultimately, the cryomodules will comprise the last section of the new linear accelerator, or linac, that will drive Fermilab’s accelerator complex.

Physicists like to accelerate particles to higher and higher energies. The higher the energy, the more finely penetrating and discriminating a particle probe can be. That increased precision allows scientists to study the tiniest of structures.

There are many benefits of faster and faster accelerators. To name a few: destroying cancer cells; revealing the structure of proteins and viruses; creating vaccines and new drugs; and advancing our knowledge of the origins of our universe.

For the PIP-II linac, each superconducting cryomodule vessel will contain a chain of devices called “cavities” at its core. These cavities look like oversized soda cans stacked end-to-end. They’re made of pure niobium, a superconducting material. Electricity flows through the superconducting material with no energy loss when the niobium is kept well below the average temperature of outer space.

Note the prefix “cryo” in the word cryomodule, meaning involving or producing cold, especially extreme cold. To reach the superconducting state, the cavities need to be kept at super-cold temperatures, hovering just above absolute zero.

To keep things cool, the team fills the inside of the vessel with liquid helium. The vessel has many layers of insulation to protect the cavities from outside temperatures that are too warm.

Once the prototype is functioning properly, four of the modules will be assembled to build out the last section of Fermilab’s new linear accelerator.

Here’s how the journey will unfold. The superconducting cryomodules will power beams of hydrogen anions: hydrogen ions made up of one proton and two electrons, instead of the usual one proton and one electron.

The beams will reach a final energy of 800 million electronvolts, or MeV, before they exit the accelerator.
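The “more than 80% of the speed of light” quoted earlier follows from the same relativistic relation that applies to any accelerated particle. Approximating the hydrogen anion's rest energy by that of the proton (about 938 MeV; the two electrons add roughly 1 MeV), an 800 MeV kinetic energy corresponds to about 84% of light speed:

```python
import math

# Speed of an 800-MeV hydrogen anion, approximated by the proton rest energy
rest_energy_mev = 938.27      # proton rest mass energy, MeV
kinetic_energy_mev = 800.0    # PIP-II linac exit energy

gamma = 1.0 + kinetic_energy_mev / rest_energy_mev
beta = math.sqrt(1.0 - 1.0 / gamma**2)
print(f"v/c = {beta:.2f}")    # ~0.84, i.e. more than 80% of the speed of light
```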

From there, the beam will transfer to the upgraded Booster and Main Injector accelerators. There it will gain more energy before being smashed into a target to produce neutrinos.

The machine will then send these neutrinos on a 1,300-kilometer journey (800 miles) through Earth to the Deep Underground Neutrino Experiment (DUNE) at the Long Baseline Neutrino Facility in Lead, South Dakota.

The team is now making sure that all the preparations have paid off as the modules are tested at Fermilab’s Cryomodule Test Facility. This will reveal how well the modules function after practice shipments between Fermilab and the United Kingdom.

The final modules will be built by PIP-II’s partners around the world. Three will be assembled at Daresbury Laboratory, run by the Science and Technology Facilities Council of United Kingdom Research and Innovation, and then shipped to Fermilab.

The fourth will be assembled at Fermilab using components provided by the Raja Ramanna Centre for Advanced Technology of India’s Department of Atomic Energy.

International partners from India, Italy, France, Poland and the United Kingdom are contributing to many aspects of the PIP-II project.

All of this work is done as part of the PIP-II project, an essential enhancement to the Fermilab accelerator complex. PIP-II will provide neutrinos for DUNE scientists to study.

In parallel, the high-power proton beams delivered by the PIP-II accelerator will enable muon-based experiments to search for new particles and forces at unprecedented levels of precision. This diverse physics program will power new discoveries for decades to come.

Provided by US Department of Energy 

Study observes a phase transition in magic of a quantum system with random circuits

by Ingrid Fadelli , Phys.org

Study observes a phase transition in magic of a quantum system with random circuits
Picture of a trapped-ion quantum computer on which the experiment was conducted. Credit: IonQ

In the context of quantum mechanics and information, “magic” is a key property of quantum states that describes the extent to which they deviate from so-called stabilizer states. Stabilizer states are a class of states that can be effectively simulated on classical computers.

Magic in quantum states is crucial to the realization of universal and fault-tolerant quantum computing via simple gate operations. Gaining insight about the mechanisms behind this property could help engineers to effectively create it and leverage it, thus potentially enabling the development of better performing quantum computers.

Researchers at the University of Maryland, NIST, IonQ Inc. and the Duke Quantum Center recently showed that a random stabilizer code (i.e., a code designed to protect quantum information from errors) exhibits vastly different behavior with respect to magic when exposed to coherent errors.

Their observations, outlined in a paper published in Nature Physics, could broaden the understanding of how magic states originate, which could facilitate the generation of these states in quantum computing systems.

“Even though superposition and entanglement are the terms people most often associate with quantum computers, it turns out they aren’t enough to make quantum computers more powerful than classical computers,” Pradeep Niroula, co-author of the paper, told Phys.org.

“To attain a quantum advantage over traditional or classical computers, you need another ingredient called ‘magic’ or ‘non-stabilizer-ness.’ If your quantum system has no ‘magic,’ it can be simulated by a classical computer, making the quantum computer unnecessary. It is only when your system has a lot of magic that you go beyond what’s possible with a classical computer.”

For error-resistant quantum computers, creating superpositions or entanglement between states is relatively easy. In contrast, adding magic to states, that is, dislocating them further from easy-to-simulate stabilizer states, is expected to be highly challenging.

“In the literature of quantum information, you often encounter terms like ‘magic state distillation’ or ‘magic state cultivation,’ which refer to pretty arduous processes to create special quantum states with magic that the quantum computer can make use of,” said Niroula.

“Prior to this paper, we had written a paper that observed a similar phase transition in entanglement, in which we had observed phases where measurements of a quantum system preserved or destroyed entanglement depending on how frequent they are.”

While there is an extensive amount of literature focusing on the realization of entanglement in error-corrected quantum computing systems, the underpinnings of magic states remain less understood.

The main goal of the recent study by Niroula and his colleagues was to determine whether a similar phase transition as that previously observed for entanglement also exists for magic. The existence of such a transition may hint at the existence of a more general theory that is applicable to different quantum properties, including both entanglement and magic.

Study observes a phase transition in magic of a quantum system with random circuits
A) The circuit model used in the study. Coherent error is used to tune magic on a random stabilizer code. B) A schematic illustration of how magic is created and destroyed in the circuit. The coherent errors dislocate a quantum state away from stabilizer states, which are easy to represent and simulate. The final measurements sometimes destroy the injected magic, reverting the states back to stabilizer states, and sometimes leave the magic intact. C) The phase diagram of magic. Credit: Niroula et al.

“A general feature of such phase transitions is that it involves two competing forces or processes,” explained Niroula. “One of these creates the resource and the other destroys it; tuning the relative strength or proportion of those processes seems to reveal such transitions.

“In the case of entanglement, a quantum gate acting between two qubits tends to produce entanglement between them, whereas a measurement of one of those qubits tends to destroy the entanglement. Now if you had a quantum circuit with many gates, you can randomly add measurements in the circuit and control the spread of entanglement in the system.”

Past studies focusing on entanglement in quantum circuits have established that if there are too few measurements in a quantum circuit, the entire quantum system becomes entangled. In contrast, if there are too many measurements, entanglement is suppressed and thus minimal. Moreover, if one gradually increases the density of measurements in a system, the entanglement will rapidly shift from high to almost null.
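That competition between entangling gates and disentangling measurements can be seen in a small toy calculation. The sketch below is not the authors' code and is far smaller than their trapped-ion experiment; it simply evolves a handful of qubits under random two-qubit gates, measures each qubit with some probability after every layer, and reports the half-chain entanglement entropy, which drops as the measurement rate is increased.

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_unitary(dim):
    """Random unitary from the Haar measure (QR of a complex Gaussian matrix)."""
    z = (rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def apply_two_qubit_gate(psi, gate, i, j, n):
    """Apply a 4x4 gate to qubits i and j of an n-qubit state of shape (2,)*n."""
    psi = np.moveaxis(psi, (i, j), (0, 1)).reshape(4, -1)
    psi = (gate @ psi).reshape((2, 2) + (2,) * (n - 2))
    return np.moveaxis(psi, (0, 1), (i, j))

def measure_qubit(psi, i, n):
    """Projective Z-basis measurement of qubit i, sampled via the Born rule."""
    psi = np.moveaxis(psi, i, 0).reshape(2, -1)
    p0 = np.sum(np.abs(psi[0]) ** 2)
    outcome = 0 if rng.random() < p0 else 1
    psi[1 - outcome] = 0.0
    psi /= np.linalg.norm(psi)
    return np.moveaxis(psi.reshape((2,) * n), 0, i)

def half_chain_entropy(psi, n):
    """Von Neumann entanglement entropy of the first n//2 qubits, in bits."""
    lam = np.linalg.svd(psi.reshape(2 ** (n // 2), -1), compute_uv=False)
    probs = lam[lam > 1e-12] ** 2
    return float(-np.sum(probs * np.log2(probs)))

def run_circuit(n=8, depth=24, p_measure=0.1):
    psi = np.zeros((2,) * n, dtype=complex)
    psi[(0,) * n] = 1.0                      # start in |00...0>
    for layer in range(depth):
        for i in range(layer % 2, n - 1, 2):     # brickwork pattern of gates
            psi = apply_two_qubit_gate(psi, haar_unitary(4), i, i + 1, n)
        for i in range(n):                       # measurements compete with gates
            if rng.random() < p_measure:
                psi = measure_qubit(psi, i, n)
    return half_chain_entropy(psi, n)

for p in (0.05, 0.3, 0.8):
    samples = [run_circuit(p_measure=p) for _ in range(5)]
    print(f"p = {p:.2f}  mean half-chain entropy = {np.mean(samples):.2f}")
```

The magic transition reported in the paper adds a second knob, the rotation angle of the coherent errors, which this entanglement-only sketch does not attempt to capture.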

“Measurements destroy magic too, but to be able to controllably add magic to the system, you need to be able to do small rotations of the qubit,” said Niroula. “So, the two competing forces here are ‘how much you measure’ and ‘how much you rotate the qubits.’ What we observed is that at a fixed rate of measurement, you can tune your rotation angle and go from a phase where you have a lot of magic to a phase where you have no magic.”

As part of their study, Niroula and his colleagues first ran a series of numerical simulations, which offered a strong indication that a phase transition in magic did in fact take place. Encouraged by these findings, they then set out to test their hypothesis in an experimental setting, using real quantum circuits.

“In our experiment, we observed the signature of the phase transition even in a noisy machine,” said Niroula. “Our work thus uncovered a phase transition in magic.

“Earlier works have uncovered other kinds of transitions, in entanglement, in charges and so on, and this raises the questions: What other resources might exhibit similar transitions? Do they all belong to some universal type of transition? Are they all distinct or are they all related somehow? Also importantly, what does the presence of a phase transition teach us about building noise-resilient quantum computers?”

The findings gathered by this team of researchers open new avenues for research focusing on resources in error-corrected quantum computing systems. Future studies could, for instance, explore other properties and resources that exhibit a phase transition resembling those observed for entanglement and magic.

“Magic states are important for error-correction,” added Niroula. “Our work gives us some insights on when we can concentrate magic and when we can suppress it. One avenue that would be interesting to explore is to see if we can use our experiment as a ‘magic state factory’ where you are producing good magic states for consumption by the quantum computer.

“Currently, there is a lot of interest in the field in demonstrating the primitives or the building blocks of error-correction, and our work could be a part of that.”

More information: Pradeep Niroula et al, Phase transition in magic with random quantum circuits, Nature Physics (2024). DOI: 10.1038/s41567-024-02637-3.

Journal information: Nature Physics 

Study finds optimal standing positions in airport smoking lounges

by American Institute of Physics

Optimal standing positions and ventilation in airport smoking lounges
Researchers modeled the trail of nicotine particles that are released from the mouth, nose, and cigarette. Credit: Younes Bakhshan

While many smoking rooms in U.S. airports have closed in recent years, they are still common in other airports around the world. These lounges can be ventilated, but how much does ventilation actually help disperse the smoke?

Research published in Physics of Fluids shows that not all standing positions in airport smoking lounges are created equal.

Researchers from the University of Hormozgan in Iran studied nicotine particles in a simulated airport smoking room and found that the thermal environment and positioning of smokers influenced how particles settle in the room.

Additionally, smokers seated farther from ventilation inlets experienced the lowest levels of pollution in the room.

“We expected that people who are standing in the corners would report the same amount of particles settling on their body,” author Younes Bakhshan said. “But according to the numbers that we determined, the wave created by the ventilation in the room is not the same every time.”

The researchers created a smoking room using computational models and placed heated and unheated manikins in the room to simulate smokers. They also modeled the ventilation system with three exhaust air diffusers.

The manikin smokers “exhaled” cigarette smoke through their mouths and noses, and the flow of the particles was modeled and observed. They found that over time, as the concentration of particles in the air decreases, the number of particles settling on the smokers increases.

“According to the results, body heat causes more absorption of cigarette pollution,” Bakhshan said. “We suggest that if people have to smoke in the room, empty places are the best to choose.”

The results gave insight into improving ventilation in smoking lounges.

“According to previous research, displacement ventilation system is the best for a smoking room,” Bakhshan said. “But if we want to optimize the HVAC system, we suggest that the exhaust should be installed on the wall in addition to the vents placed on the ceiling.”

Next, the researchers want to take a step beyond measuring particle dispersion to particle reduction.

“We believe that smokers who go into the smoking room for the sake of others’ health should be also protected from the harmful effects of secondhand smoke,” Bakhshan said.

More information: Numerical simulation of particles distribution of environmental tobacco smoke and its concentration in the smoking room of Shiraz airport, Physics of Fluids (2024). DOI: 10.1063/5.0223568

Journal information: Physics of Fluids 

Provided by American Institute of Physics