Research offers direct view of tantalum oxidation that impedes qubit coherence

by Brookhaven National Laboratory

Left: This scanning transmission electron microscope (STEM) image of a tantalum (Ta) film surface shows an amorphous oxide above the regularly arrayed atoms of crystalline Ta metal. Right: The STEM imaging combined with computational modeling revealed details of the interface between these layers, including the formation of the amorphous oxide (top layer) and a suboxide layer that retains crystalline features (second layer) above the regularly arrayed tantalum atoms. Credit: Brookhaven National Laboratory

Scientists at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory and DOE’s Pacific Northwest National Laboratory (PNNL) have used a combination of scanning transmission electron microscopy (STEM) and computational modeling to get a closer look and deeper understanding of tantalum oxide. When this amorphous oxide layer forms on the surface of tantalum—a superconductor that shows great promise for making the “qubit” building blocks of a quantum computer—it can impede the material’s ability to retain quantum information.

Learning how the oxide forms may offer clues as to why this happens—and potentially point to ways to prevent quantum coherence loss. The research was recently published in the journal ACS Nano.

The paper builds on earlier research by a team at Brookhaven’s Center for Functional Nanomaterials (CFN), Brookhaven’s National Synchrotron Light Source II (NSLS-II), and Princeton University that was conducted as part of the Co-design Center for Quantum Advantage (C2QA), a Brookhaven-led national quantum information science research center in which Princeton is a key partner.

“In that work, we used X-ray photoemission spectroscopy at NSLS-II to infer details about the type of oxide that forms on the surface of tantalum when it is exposed to oxygen in the air,” said Mingzhao Liu, a CFN scientist and one of the lead authors on the study. “But we wanted to understand more about the chemistry of this very thin layer of oxide by making direct measurements,” he explained.

So, in the new study, the team partnered with scientists in Brookhaven’s Condensed Matter Physics & Materials Science (CMPMS) Department to use advanced STEM techniques that enabled them to study the ultrathin oxide layer directly. They also worked with theorists at PNNL who performed computational modeling that revealed the most likely arrangements and interactions of atoms in the material as they underwent oxidation.

Together, these methods helped the team build an atomic-level understanding of the ordered crystalline lattice of tantalum metal, the amorphous oxide that forms on its surface, and intriguing new details about the interface between these layers.

“The key is to understand the interface between the surface oxide layer and the tantalum film because this interface can profoundly impact qubit performance,” said study co-author Yimei Zhu, a physicist from CMPMS, echoing the wisdom of Nobel laureate Herbert Kroemer, who famously asserted, “The interface is the device.”

Emphasizing that “quantitatively probing a mere one-to-two-atomic-layer-thick interface poses a formidable challenge,” Zhu noted, “we were able to directly measure the atomic structures and bonding states of the oxide layer and tantalum film as well as identify those of the interface using the advanced electron microscopy techniques developed at Brookhaven.”

“The measurements reveal that the interface consists of a ‘suboxide’ layer nestled between the periodically ordered tantalum atoms and the fully disordered amorphous tantalum oxide. Within this suboxide layer, only a few oxygen atoms are integrated into the tantalum crystal lattice,” Zhu said.

The combined structural and chemical measurements offer a crucially detailed perspective on the material. Density functional theory calculations then helped the scientists validate and gain deeper insight into these observations.

“We simulated the effect of gradual surface oxidation by gradually increasing the number of oxygen species at the surface and in the subsurface region,” said Peter Sushko, one of the PNNL theorists.

By assessing the thermodynamic stability, structure, and electronic property changes of the tantalum films during oxidation, the scientists concluded that while the fully oxidized amorphous layer acts as an insulator, the suboxide layer retains features of a metal.

“We always thought if the tantalum is oxidized, it becomes completely amorphous, with no crystalline order at all,” said Liu. “But in the suboxide layer, the tantalum sites are still quite ordered.”

With the presence of both fully oxidized tantalum and a suboxide layer, the scientists wanted to understand which part is most responsible for the loss of coherence in qubits made of this superconducting material.

“It’s likely the oxide has multiple roles,” Liu said.

First, he noted, the fully oxidized amorphous layer contains many lattice defects. That is, the locations of the atoms are not well defined. Some atoms can shift around to different configurations, each with a different energy level. Though these shifts are small, each one consumes a tiny bit of electrical energy, which contributes to loss of energy from the qubit.

“This so-called two-level system loss in an amorphous material brings parasitic and irreversible loss to the quantum coherence—the ability of the material to hold onto quantum information,” Liu said.

But because the suboxide layer is still crystalline, “it may not be as bad as people were thinking,” Liu said. Maybe the more fixed atomic arrangements in this layer will minimize two-level system loss.
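As a toy illustration of the two-level-system picture Liu describes (a textbook sketch, not a model from the paper), each fluctuating atomic configuration can be represented as a two-state Hamiltonian whose energy splitting depends on the asymmetry between the two configurations and the tunneling between them:

```python
import numpy as np

def tls_splitting(eps, delta):
    """Energy splitting of a standard two-level system with
    asymmetry energy eps and tunneling amplitude delta."""
    H = np.array([[eps / 2, delta / 2],
                  [delta / 2, -eps / 2]])
    e = np.linalg.eigvalsh(H)  # eigenvalues in ascending order
    return e[1] - e[0]

# Splitting equals sqrt(eps**2 + delta**2) for this Hamiltonian.
print(tls_splitting(1.0, 0.5))  # sqrt(1.25) ~ 1.118
```

Each such system absorbs a quantum of energy equal to its splitting; a broad distribution of splittings in an amorphous oxide is what produces the parasitic loss described above.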

Then again, he noted, because the suboxide layer has some metallic characteristics, it could cause other problems.

“When you put a normal metal next to a superconductor, that could contribute to breaking up the pairs of electrons that move through the material with no resistance,” he noted. “If the pair breaks into two electrons again, then you will have loss of superconductivity and coherence. And that is not what you want.”

Future studies may reveal more details and strategies for preventing loss of superconductivity and quantum coherence in tantalum.

More information: Junsik Mun et al, Probing Oxidation-Driven Amorphized Surfaces in a Ta(110) Film for Superconducting Qubit, ACS Nano (2023). DOI: 10.1021/acsnano.3c10740

Journal information: ACS Nano 

Provided by Brookhaven National Laboratory 

Scientists create effective ‘spark plug’ for direct-drive inertial confinement fusion experiments

by Luke Auburn, University of Rochester

A view from inside the OMEGA target chamber during a direct-drive inertial fusion experiment at the University of Rochester’s Laboratory for Laser Energetics. Scientists fired 28 kilojoules of laser energy at small capsules filled with deuterium and tritium fuel, causing the capsules to implode and produce a plasma hot enough to initiate fusion reactions between the fuel nuclei. The temperatures achieved at the heart of these implosions are as high as 100 million degrees Celsius (180 million degrees Fahrenheit). The speed at which the implosion takes place is typically between 500 and 600 kilometers per second (1.1 to 1.35 million miles per hour). The pressures at the core are up to 80 billion times greater than atmospheric pressure. Credit: University of Rochester Laboratory for Laser Energetics photo / Eugene Kowaluk
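The figures quoted in the caption can be cross-checked with two one-line unit conversions:

```python
def celsius_to_fahrenheit(c):
    return c * 9 / 5 + 32

def km_per_s_to_mph(v):
    # 1 mph = 0.44704 m/s exactly
    return v * 1000 / 0.44704

print(celsius_to_fahrenheit(100e6) / 1e6)  # ~180 million degrees F
print(km_per_s_to_mph(500) / 1e6)          # ~1.12 million mph
print(km_per_s_to_mph(600) / 1e6)          # ~1.34 million mph
```

The converted values match the caption's 180 million degrees Fahrenheit and 1.1 to 1.35 million miles per hour.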

Scientists from the University of Rochester’s Laboratory for Laser Energetics (LLE) led experiments to demonstrate an effective “spark plug” for direct-drive methods of inertial confinement fusion (ICF). In two studies published in Nature Physics, the authors discuss their results and outline how they can be applied at bigger scales with the hopes of eventually producing fusion at a future facility.

LLE is the largest university-based U.S. Department of Energy program and hosts the OMEGA laser system, the largest academic laser in the world, though it operates at almost one-hundredth the energy of the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory in California.

With OMEGA, Rochester scientists completed several successful experiments in which they fired 28 kilojoules of laser energy at small capsules filled with deuterium and tritium fuel, causing the capsules to implode and produce a plasma hot enough to initiate fusion reactions between the fuel nuclei. These fusion reactions produced more energy than was contained in the central hot plasma.

The OMEGA experiments use direct laser illumination of the capsule and differ from the indirect drive approach used on the NIF. When using the indirect drive approach, the laser light is converted into X-rays that in turn drive the capsule implosion. The NIF used indirect drive to irradiate a capsule with X-rays using about 2,000 kilojoules of laser energy. This led to a 2022 breakthrough at NIF in achieving fusion ignition—a fusion reaction that creates a net gain of energy from the target.

“Generating more fusion energy than the internal energy content of where the fusion takes place is an important threshold,” says lead author of the first paper Connor Williams ’23 Ph.D. (physics and astronomy), now a staff scientist at Sandia National Labs in radiation and ICF target design. “That’s a necessary requirement for anything you want to accomplish later on, such as burning plasmas or achieving ignition.”
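The threshold Williams describes can be written as a simple ratio: a hot-spot gain above one means the fusion output exceeded the internal energy of the central plasma. The numbers below are illustrative placeholders, not values from the paper:

```python
def hot_spot_gain(fusion_yield_joules, hot_spot_energy_joules):
    """Ratio of fusion energy produced to the internal energy
    of the central hot plasma; gain > 1 crosses the threshold."""
    return fusion_yield_joules / hot_spot_energy_joules

# Illustrative only: any yield exceeding the hot-spot energy crosses it.
print(hot_spot_gain(1.3, 1.0) > 1.0)  # True
```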

By showing they can achieve this level of implosion performance with just 28 kilojoules of laser energy, the Rochester team is excited by the prospect of applying direct-drive methods to lasers with more energy. Demonstrating a spark plug is an important step; however, OMEGA is too small to compress enough fuel to reach ignition.

“If you can eventually create the spark plug and compress fuel, direct drive has a lot of characteristics that are favorable for fusion energy compared to indirect drive,” says Varchas Gopalaswamy ’21 Ph.D. (mechanical engineering), the LLE scientist who led the second study that explores the implications of using the direct-drive approach on megajoule-class lasers, similar to the size of the NIF. “After scaling the OMEGA results to a few megajoules of laser energies, the fusion reactions are predicted to become self-sustaining, a condition called ‘burning plasmas.'”

Gopalaswamy says that direct-drive ICF is a promising approach for achieving thermonuclear ignition and net energy in laser fusion.

“A major factor contributing to the success of these recent experiments is the development of a novel implosion design method based on statistical predictions and validated by machine learning algorithms,” says Riccardo Betti, LLE’s chief scientist and the Robert L. McCrory Professor in the Department of Mechanical Engineering and in the Department of Physics and Astronomy. “These predictive models allow us to narrow the pool of promising candidate designs before carrying out valuable experiments.”

The Rochester experiments required a highly coordinated effort between a large number of scientists, engineers, and technical staff to operate the complex laser facility. They collaborated with researchers from the MIT Plasma Science and Fusion Center and General Atomics to conduct the experiments.

More information: C. A. Williams et al, Demonstration of hot-spot fuel gain exceeding unity in direct-drive inertial confinement fusion implosions, Nature Physics (2024). DOI: 10.1038/s41567-023-02363-2

V. Gopalaswamy et al, Demonstration of a hydrodynamically equivalent burning plasma in direct-drive inertial confinement fusion, Nature Physics (2024). DOI: 10.1038/s41567-023-02361-4

Provided by University of Rochester 

New ion cooling technique could simplify quantum computing devices

by John Toon, Georgia Institute of Technology

Image shows the ion trap used to control the location of computational and refrigerant ions. The device was produced by Sandia National Laboratories. Credit: Sandia National Laboratories.

A new cooling technique that utilizes a single species of trapped ion for both computing and cooling could simplify the use of quantum charge-coupled devices (QCCDs), potentially moving quantum computing closer to practical applications.

Using a technique called rapid ion exchange cooling, scientists at the Georgia Tech Research Institute (GTRI) have shown that they could cool a calcium ion—which gains vibrational energy while doing quantum computations—by moving a cold ion of the same species into close proximity. After transferring energy from the hot ion to the cold one, the refrigerant ion is returned to a nearby reservoir to be cooled for further use.

The research is reported in the journal Nature Communications.

Conventional ion cooling for QCCDs involves the use of two different ion species, with cooling ions coupled to lasers of a different wavelength that do not affect the ions used for quantum computing. Beyond the lasers needed to control the quantum computing operations, this sympathetic cooling technique requires additional lasers to trap and control the refrigerant ions, and that both increases complexity and slows quantum computing operations.

“We have shown a new method for cooling ions faster and more simply in this promising QCCD architecture,” said Spencer Fallek, a GTRI research scientist. “Rapid exchange cooling can be faster because transporting the cooling ions requires less time than laser cooling two different species. And it’s simpler because using two different species requires operating and controlling more lasers.”

Video shows how a computational ion can be cooled by bringing it near a refrigerant ion of the same atomic species. Credit: Georgia Tech Research Institute

The ion movement takes place in a trap maintained by precisely controlling voltages that create an electrical potential between gold contacts. But moving a cold ion from one part of the trap to another is a bit like moving a bowl with a marble sitting in the bottom.

When the bowl stops moving, the marble must become stationary—not rolling around in the bowl, explained Kenton Brown, a GTRI principal research scientist who has worked on quantum computing issues for more than 15 years.

“That’s basically what we’re always trying to do with these ions when we’re moving the confining potential, which is like the bowl, from one place to another in the trap,” he said. “When we’re done moving the confining potential to the final location in the trap, we don’t want the ion moving around inside the potential.”

Once the hot ion and cold ion are close to each other, a simple energy swap takes place and the original cold ion—heated now by its interaction with a computing ion—can be split off and returned to a nearby reservoir of cooled ions.
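The energy swap is analogous to the textbook behavior of two identical, weakly coupled harmonic oscillators: energy beats back and forth between them, and waiting exactly half a beat period transfers essentially all of it. The sketch below uses that idealized weak-coupling model, not the actual trap potentials:

```python
import math

def beat_envelope_energies(t, delta_omega):
    """Fractional energies of two identical, weakly coupled oscillators,
    with all energy initially in oscillator 1 (weak-coupling beat envelope).
    delta_omega is the splitting between the two normal-mode frequencies."""
    e1 = math.cos(delta_omega * t / 2) ** 2
    e2 = math.sin(delta_omega * t / 2) ** 2
    return e1, e2

delta_omega = 0.05               # normal-mode splitting (arbitrary units)
t_swap = math.pi / delta_omega   # half a beat period: complete exchange
e1, e2 = beat_envelope_energies(t_swap, delta_omega)
print(round(e1, 6), round(e2, 6))  # 0.0 1.0 -- full energy transfer
```

In the experiment, the trap voltages time the encounter so the ions separate at just this moment of complete exchange.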

The GTRI researchers have so far demonstrated a two-ion proof-of-concept system, but say their technique is applicable to the use of multiple computing and cooling ions, and other ion species.

A single energy exchange removed more than 96% of the heat—measured as 102(5) quanta—from the computing ion, which came as a pleasant surprise to Brown, who had expected multiple interactions might be necessary. The researchers tested the energy exchange by varying the starting temperature of the computational ions and found that the technique is effective regardless of the initial temperature. They have also demonstrated that the energy exchange operation can be done multiple times.
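The reported numbers are self-consistent: removing more than 96% of roughly 102 quanta leaves only a few quanta of motion in the computing ion.

```python
initial_quanta = 102          # reported as 102(5) quanta
removed_fraction = 0.96       # "more than 96%" removed in one exchange
remaining = initial_quanta * (1 - removed_fraction)
print(remaining)  # about 4 quanta remain after a single exchange
```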

Heat—essentially vibrational energy—seeps into the trapped ion system through both computational activity and from anomalous heating, such as unavoidable radio-frequency noise in the ion trap itself. Because the computing ion is absorbing heat from these sources even as it is being cooled, removing more than 96% of the energy will require more improvements, Brown said.

The researchers envision that in an operating system, cooled atoms would be available in a reservoir off to the side of the QCCD operations and maintained at a steady temperature. The computing ions cannot be directly laser-cooled because doing so would erase the quantum data they hold.

Excessive heat in a QCCD system adversely affects the fidelity of the quantum gates, introducing errors in the system. The GTRI researchers have not yet built a QCCD that uses their cooling technique, though that is a future step in the research. Other work ahead includes accelerating the cooling process and studying its effectiveness at cooling motion along other spatial directions.

The experimental component of the rapid exchange cooling experiment was guided by simulations done to predict, among other factors, the pathways that the ions would take in their journey within the ion trap. “We definitely understood what we were looking for and how we should go about achieving it based on the theory and simulations we had,” Brown said.

The unique ion trap was fabricated by collaborators at Sandia National Laboratories. The GTRI researchers used computer-controlled voltage generation cards able to produce specific waveforms in the trap, which has a total of 154 electrodes, of which the experiment used 48. The experiments took place in a cryostat maintained at about 4 kelvin.

Researchers Spencer Fallek (left) and Kenton Brown are shown with equipment used to develop a new technique for cooling ions in quantum devices. Credit: Sean McNeil, GTRI

GTRI’s Quantum Systems Division (QSD) investigates quantum computing systems based on individual trapped atomic ions and novel quantum sensor devices based on atomic systems. GTRI researchers have designed, fabricated, and demonstrated a number of ion traps and state-of-the-art components to support integrated quantum information systems. Among the technologies developed is the ability to precisely transport ions to where they are needed.

“We have very fine control of how the ions move, the speed at which they can be brought together, the potential they’re in when they are near one another, and the timing that’s necessary to do experiments like this,” said Fallek.

Other GTRI researchers involved in the project included Craig Clark, Holly Tinkey, John Gray, Ryan McGill and Vikram Sandhu. The research was done in collaboration with Los Alamos National Laboratory.

Image denoising using a diffractive material

by UCLA Engineering Institute for Technology Advancement

All-optical image denoising using diffractive visual processors. Credit: Ozcan Lab UCLA

While image denoising algorithms have undergone extensive research and advancements in the past decades, classical denoising techniques often necessitate numerous iterations for their inference, making them less suitable for real-time applications.

The advent of deep neural networks (DNNs) has ushered in a paradigm shift, enabling the development of non-iterative, feed-forward digital image denoising approaches.

These DNN-based methods exhibit remarkable efficacy, achieving real-time performance while maintaining high denoising accuracy. However, these deep learning-based digital denoisers incur a trade-off, demanding high-cost, resource- and power-intensive graphics processing units (GPUs) for operation.

In an article published in Light: Science & Applications, a team of researchers, led by Professors Aydogan Ozcan and Mona Jarrahi from the University of California, Los Angeles (UCLA), U.S., and Professor Kaan Akşit from University College London (UCL), UK, developed a physical image denoiser comprising spatially engineered diffractive layers that processes noisy input images at the speed of light and synthesizes denoised images at its output field-of-view without any digital computing.

Following a one-time training on a computer, the resulting visual processor with its passive diffractive layers is fabricated, forming a physical image denoiser that scatters out the optical modes associated with undesired noise or spatial artifacts of the input images.

Through its optimized design, this diffractive visual processor preserves the optical modes representing the desired spatial features of the input images with minimal distortions.

As a result, it instantly synthesizes denoised images within its output field-of-view without the need to digitize, store or transmit an image for a digital processor to act on it. The efficacy of this all-optical image denoising approach was validated by suppressing salt and pepper noise from both intensity- and phase-encoded input images.
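For comparison, a classical digital baseline for suppressing salt-and-pepper noise is a median filter. The sketch below (plain NumPy, unrelated to the optical processor itself) shows the kind of operation the diffractive denoiser performs optically, without digitizing the image:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_salt_pepper(img, p=0.1):
    """Corrupt a fraction p of pixels: half set to 0 (pepper), half to 1 (salt)."""
    noisy = img.copy()
    mask = rng.random(img.shape)
    noisy[mask < p / 2] = 0.0
    noisy[mask > 1 - p / 2] = 1.0
    return noisy

def median3x3(img):
    """3x3 median filter with edge padding, built from shifted views."""
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    stack = [padded[i:i + h, j:j + w] for i in range(3) for j in range(3)]
    return np.median(np.stack(stack), axis=0)

clean = np.tile(np.linspace(0, 1, 64), (64, 1))  # smooth test image
noisy = add_salt_pepper(clean)
denoised = median3x3(noisy)
print(np.abs(noisy - clean).mean(), np.abs(denoised - clean).mean())
```

The mean error drops sharply after filtering; the diffractive processor achieves a comparable suppression passively, by scattering the noise modes out of the output field-of-view.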

Furthermore, this physical image denoising framework was experimentally demonstrated using terahertz radiation and a 3D-fabricated diffractive denoiser.

This all-optical image denoising framework offers several important advantages, such as low power consumption, ultra-high speed, and compact size.

The research team envisions that the success of these all-optical image denoisers can catalyze the development of all-optical visual processors tailored to address various inverse problems in imaging and sensing.

More information: Çağatay Işıl et al, All-optical image denoising using a diffractive visual processor, Light: Science & Applications (2024). DOI: 10.1038/s41377-024-01385-6

Provided by UCLA Engineering Institute for Technology Advancement 

Scientists mix and match properties to make new superconductor with chiral structure

by Tokyo Metropolitan University

A non-chiral, superconducting material and a chiral, non-superconducting material were combined in different element ratios to create a new compound with the properties of both. Credit: Tokyo Metropolitan University

Researchers from Tokyo Metropolitan University have created a new superconductor with a chiral crystalline structure by mixing two materials: one with superconductivity but no chirality, and another with chirality but no superconductivity.

The new platinum-iridium-zirconium compound transitions to a bulk superconductor below 2.2 K and was observed by X-ray diffraction to have a chiral crystalline structure. Their new solid-solution approach promises to accelerate the discovery and understanding of new exotic superconducting materials.

Scientists studying superconductivity are on a mission to understand how the exotic nature of superconducting materials arises from their structure, and how we might control the structure to get desirable properties.

Of the many aspects of structure, an interesting recent development is the issue of chirality. Many structures have a “handedness,” that is, they do not look the same in a mirror. An effect of chirality in superconductors is to trigger something called asymmetric spin-orbit coupling (ASOC), an effect that can make superconductors more robust to high magnetic field exposure.

To understand chirality in more depth, however, scientists need more superconductors with a chiral structure to study. The usual route is to search out chiral compounds, check if they are superconducting or not, rinse and repeat: this is very inefficient.

That is why a team from Tokyo Metropolitan University led by Associate Professor Yoshikazu Mizuguchi has introduced an entirely new approach. Instead of combing through lists of compounds, they mixed two compounds with known physical properties, a platinum-zirconium compound with superconductivity but no chirality, and an iridium-zirconium compound with a chiral structure, but no reports of superconductivity. The work is published in the Journal of the American Chemical Society.

By combining elements in a ratio that matches a certain proportion of each compound, they were able to effectively “mix and match” physical properties, coming up with a new material that had both a chiral crystal structure and superconductivity.

  • X-ray diffraction patterns at different temperatures (top), and the extracted fraction of chiral compound (bottom), show that the proportion of chiral compound increases at lower temperature. Credit: Tokyo Metropolitan University
  • As the proportion of iridium is increased, the proportion of P6122, the chiral component, increases. Credit: Tokyo Metropolitan University
  • Superconductivity can be confirmed below an iridium proportion of around x = 0.85 in (Pt1-xIrx)3Zr5. Credit: Tokyo Metropolitan University

Machine learning techniques enhance the discovery of excited nuclear levels in sulfur-38

A representation of the machine learning approach used to classify sulfur-38 nuclei (38S) from all other nuclei created in a complex nuclear reaction (left) and the resulting ability to gain knowledge of the unique sulfur-38 quantum “fingerprint” (right). Credit: Argonne National Laboratory

Fixed numbers of protons and neutrons—the building blocks of nuclei—can rearrange themselves within a single nucleus. The products of this reshuffling include electromagnetic (gamma-ray) transitions. These transitions connect excited quantum energy levels, and the pattern of these connections provides a unique “fingerprint” for each isotope.

Determining these fingerprints provides a sensitive test of scientists’ ability to describe one of the fundamental forces, the strong (nuclear) force that holds protons and neutrons together.

In the laboratory, scientists can initiate the movement of protons and neutrons through an injection of excess energy using a nuclear reaction.

In a paper, published in Physical Review C, researchers successfully used this approach to study the fingerprint of sulfur-38. They also used machine learning and other cutting-edge tools to analyze the data.

The results provide new empirical information on the “fingerprint” of quantum energy levels in the sulfur-38 nucleus. Comparisons with theoretical calculations may lead to important new insights. For example, one of the calculations highlighted the key role played by a particular nucleon orbital in the model’s ability to reproduce the fingerprints of sulfur-38 as well as neighboring nuclei.

The study is also important for its first successful implementation of a specific machine learning-based approach to classifying data. Scientists are adopting this approach for other challenges in nuclear physics.

Researchers used a measurement that included a machine learning (ML) assisted analysis of the collected data to better determine the unique quantum energy levels—a “fingerprint” formed through the rearrangement of the protons and neutrons—in the neutron-rich nucleus sulfur-38.

The results doubled the amount of empirical information on this particular fingerprint. They used a nuclear reaction involving the fusion of two nuclei, one from a heavy-ion beam and the second from a target, to produce the isotope and introduce the energy needed to excite it into higher quantum levels.

The reaction and measurement leveraged a heavy-ion beam produced by the ATLAS Facility (a Department of Energy user facility), a target produced by the Center for Accelerator and Target Science (CATS), the detection of electromagnetic decays (gamma-rays) using the Gamma-Ray Energy Tracking Array (GRETINA), and the detection of the nuclei produced using the Fragment Mass Analyzer (FMA).

Due to complexities in the experimental parameters—which involved a trade-off between the production yields of the sulfur-38 nuclei in the reaction and the optimal settings for detection—the researchers adapted and implemented ML techniques throughout the data reduction.

These techniques achieved significant improvements over conventional techniques. The ML framework itself consisted of a fully connected neural network that was trained under supervision to classify sulfur-38 nuclei against all other isotopes produced by the reaction.
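A minimal version of that idea, supervised training of a small fully connected network to separate one class of events from everything else, can be sketched in NumPy. The toy two-feature dataset below stands in for the real detector observables; it is an illustration of the technique, not the analysis from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for detector data: two well-separated "event" classes.
X = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200, dtype=float)

# One hidden layer (2 -> 8 -> 1), trained with full-batch gradient descent.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

lr = 0.5
for _ in range(500):
    h = np.tanh(X @ W1 + b1)                  # hidden activations
    p = sigmoid(h @ W2 + b2).ravel()          # class probabilities
    dz2 = (p - y)[:, None] / len(y)           # binary cross-entropy gradient
    dW2 = h.T @ dz2; db2 = dz2.sum(0)
    dh = dz2 @ W2.T * (1 - h ** 2)            # backprop through tanh
    dW1 = X.T @ dh; db1 = dh.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

acc = ((p > 0.5) == y).mean()
print(acc)  # near-perfect on this separable toy problem
```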

Key innovation in photonic components could transform supercomputing technology

by Daegu Gyeongbuk Institute of Science and Technology (DGIST)

A MEMS-based 2 × 2 unitary gate and its measured responses. a,b, Schematic (a) and optical microscopy image (b) of the MEMS-based 2 × 2 unitary gate. The gate consists of one phase shifter and one tunable coupler. The equation in a shows the mathematical description of the ideal 2 × 2 unitary transformation gate without any optical losses. Credit: Nature Photonics (2023). DOI: 10.1038/s41566-023-01327-5

Programmable photonic integrated circuits (PPICs) process light waves for computation, sensing, and signaling in ways that can be programmed to suit diverse requirements. Researchers at Daegu Gyeongbuk Institute of Science and Technology (DGIST), in South Korea, with collaborators at Korea Advanced Institute of Science and Technology (KAIST), have achieved a major advance in incorporating microelectromechanical systems into PPICs.

Their research has been published in the journal Nature Photonics.

“Programmable photonic processors promise to outperform conventional supercomputers, offering faster, more efficient and massively parallel computing capabilities,” says Sangyoon Han of the DGIST team. He emphasizes that, in addition to the increased speeds achieved by using light instead of electric current, the significant reduction in power consumption and size of PPICs could lead to major advances in artificial intelligence, neural networks, quantum computing, and communications.

The microelectromechanical systems (MEMS) at the heart of the new advance are tiny components that can interconvert optical, electronic, and mechanical changes to perform the variety of communication and mechanical functions needed by an integrated circuit.

The researchers believe they are the first to integrate silicon-based photonic MEMS technologies onto PPIC chips that operate with extremely low power requirements.

“Our innovation has dramatically reduced the power consumption to femtowatt levels, which is over a million times an improvement compared to the previous state of the art,” says Han. The technology can also be built onto chips up to five times smaller than existing options.

One key to the dramatic reduction in power requirements was to move away from the dependence on temperature changes required by the dominant “thermo-optic” systems currently in use. The required tiny mechanical movements are powered by electrostatic forces—the attractions and repulsions between fluctuating electric charges.

The components integrated onto the team’s chips can manipulate a feature of light waves called “phase” and control the coupling between different parallel waveguides, which guide and constrain the light. These are the two most fundamental requirements for building PPICs. These features interact with micromechanical “actuators” (essentially switches) to complete the programmable integrated circuit.
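In the standard textbook description (the exact convention used in the paper may differ), a phase shifter followed by a tunable coupler composes into a lossless 2 x 2 unitary, which is easy to verify numerically:

```python
import numpy as np

def phase_shifter(phi):
    """Phase shift applied to one of the two waveguide inputs."""
    return np.diag([np.exp(1j * phi), 1.0])

def tunable_coupler(theta):
    """Variable coupler mixing the two waveguides (lossless)."""
    return np.array([[np.cos(theta), 1j * np.sin(theta)],
                     [1j * np.sin(theta), np.cos(theta)]])

# One programmable 2x2 gate: coupler after phase shifter.
U = tunable_coupler(0.3) @ phase_shifter(1.1)

# Unitarity means total optical power is conserved through the gate.
print(np.allclose(U.conj().T @ U, np.eye(2)))  # True
```

Arrays of such gates, each tuned by a MEMS actuator instead of a heater, are what make the circuit programmable at femtowatt standby power.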

The key to the advance has been to apply innovative concepts to the fabrication of the required silicon-based parts. Crucially, the manufacturing process can be used with conventional silicon wafer technology. This makes it compatible with the large-scale production of photonic chips essential to commercial applications.

The team now plans to refine their technology to build and commercialize a photonic computer that will outperform conventional electronic computers in a wide variety of applications. Han says that examples of specific uses include the crucial inference tasks in artificial intelligence, advanced image processing, and high-bandwidth data transmission.

“We expect to continue to push the boundaries of computational technology, contributing further to the field of photonics and its practical applications in modern technology,” Han concludes.

More information: Dong Uk Kim et al, Programmable photonic arrays based on microelectromechanical elements with femtowatt-level standby power consumption, Nature Photonics (2023). DOI: 10.1038/s41566-023-01327-5

Solvent sieve method sets new record for perovskite light-emitting diodes

by Chinese Academy of Sciences

The solvent sieve method for high-performance PeLEDs. Credit: NIMTE

Using a simple solvent sieve method, researchers from the Ningbo Institute of Materials Technology and Engineering (NIMTE) of the Chinese Academy of Sciences (CAS) have taken the lead in developing highly efficient and stable perovskite light-emitting diodes (PeLEDs) with record performance.

Their study is published in Nature Photonics.

Perovskites are one of the most promising optoelectronic materials due to their excellent optoelectronic performance and low preparation cost. Compared with traditional organic light-emitting diodes (OLEDs), PeLEDs have a narrower light-emitting spectrum and superior color purity, thus showing great application potential in display and lighting.

However, despite significant progress in efficiency, low operational stability has long limited the practical application of PeLEDs. In particular, a limited understanding of the cause of perovskite instability has greatly hindered the development and commercialization of PeLEDs.

Based on an in-depth analysis of the fine nanostructures of perovskites, the researchers identified the perovskites’ defective low n-phase as the key source of perovskite instability. The low quality of the low n-phase, which contained only one or two layers of lead ions, originated from the rapid and uncontrollable crystallization process.

Inspired by the process of separating sand of different sizes with a sieve, the researchers proposed a solvent sieve method to screen out these undesirable low n-phases.

According to the researchers, the solvent sieve is a combination of polar and nonpolar solvents. The polar solvent acts as a mesh that interacts with perovskites, while the nonpolar solvent acts as a framework that does not affect perovskites. The researchers adjusted the ratio of polar solvents to effectively remove the defective low n-phases.

The PeLEDs based on the sieved perovskites achieved an operating lifetime of more than 5.7 years under normal conditions (luminance of 100 cd/m2), more than 30 times longer than the untreated device. This record lifetime is also the highest value reported to date for green PeLEDs, reaching the fundamental threshold for commercial application.
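For scale, the reported figures can be converted into the hours-based lifetime units usually quoted for displays (a back-of-envelope check on the article's numbers, not a calculation from the paper):

```python
HOURS_PER_YEAR = 24 * 365.25

# Sieved device: more than 5.7 years of operation at 100 cd/m^2
sieved_lifetime_h = 5.7 * HOURS_PER_YEAR       # roughly 50,000 hours

# "More than 30 times longer than the untreated device" implies the
# untreated device lasted on the order of 1,700 hours (about 10 weeks)
untreated_lifetime_h = sieved_lifetime_h / 30
```

Roughly 50,000 hours is the figure often cited as a practical lifetime target for commercial display panels, which is consistent with the article's claim that the result reaches the threshold for commercial application.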

In addition, these PeLEDs achieved a record high external quantum efficiency (EQE) of 29.5%, significantly improving the efficiency of converting electricity to light.

When exposed to ambient air (50±10% humidity), the device can maintain 75% of its film photoluminescence quantum yield and 80% of its EQE for more than 100 days, thus showing excellent stability.

This solvent sieve method not only significantly improves the luminescence performance and stability of PeLEDs, but also paves the way for the future development and application of perovskites with unique nanostructures and excellent luminescence performance.

More information: Shuo Ding et al, Phase dimensions resolving of efficient and stable perovskite light-emitting diodes at high brightness, Nature Photonics (2024). DOI: 10.1038/s41566-023-01372-0

Provided by Chinese Academy of Sciences 

Plan for Europe’s huge new particle collider takes shape

by Pierre Celerier

The FCC would form a new circular tunnel under France and Switzerland.

Europe’s CERN laboratory revealed more details Monday about its plans for a huge new particle accelerator that would dwarf the Large Hadron Collider (LHC), ramping up efforts to uncover the underlying secrets of the universe.

If approved, the Future Circular Collider (FCC) would start smashing its first particles together around the middle of this century—and start its highest-energy collisions around 2070.

Running under France and Switzerland, it would be more than triple the length of CERN’s LHC, currently the largest and most powerful particle accelerator.

The idea behind both is to send particles spinning around a ring to smash into each other at nearly the speed of light, so that the collisions reveal their true nature.

Among other discoveries, the LHC made history in 2012 when it allowed scientists to observe the Higgs boson for the first time.

But the LHC, which cost $5.6 billion and began operating in 2010, is expected to have run its course by around 2040.

The faster and more powerful FCC would allow scientists to continue pushing the envelope. They hope it could confirm the existence of more particles—the building blocks of matter—which so far have only been theorized.

Another unfinished job for science is working out exactly what 95 percent of the universe is made of. About 68 percent of the universe is believed to be dark energy while 27 percent is dark matter—both remain a complete mystery.

Another unknown is why there is so little antimatter in the universe, compared to matter.

CERN hopes that a massive upgrade of humanity’s ability to smash particles could shed light on these enigmas and more.

“Our aim is to study the properties of matter at the smallest scale and highest energy,” CERN director-general Fabiola Gianotti said as she presented an interim report in Geneva.

The report laid out the first findings of an FCC feasibility study that will be finalized by 2025.

$17 billion first stage

In 2028, CERN’s member states, which include the UK and Israel, will decide whether or not to go through with the plan.

If given the green light, construction on the collider would start in 2033.

The project is split into parts.

In 2048, the “electron-positron” collider would start smashing lightweight particles, with the aim of further investigating the Higgs boson and what is called the weak force, one of the four fundamental forces.

The cost of the tunnel, infrastructure and the first stage of the collider would be about 15 billion Swiss francs ($17 billion), Gianotti said.

The heavy-duty hadron collider, which would smash protons together, would only come online in 2070.

Its energy target would be 100 trillion electronvolts—smashing the LHC’s record of 13.6 trillion.
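Those collision energies are enormous by particle-physics standards but tiny in everyday units; a quick conversion (illustrative arithmetic only, not from the report) shows the roughly sevenfold jump over the LHC:

```python
EV_TO_JOULE = 1.602176634e-19  # SI definition of the electronvolt

lhc_ev = 13.6e12   # LHC record: 13.6 TeV
fcc_ev = 100e12    # FCC target: 100 TeV

energy_ratio = fcc_ev / lhc_ev        # ~7.4x the LHC's record energy
fcc_joules = fcc_ev * EV_TO_JOULE     # ~1.6e-5 joules per collision
```

Concentrating even that everyday-tiny amount of energy into a single pair of protons is what makes such collisions probe matter at the smallest scales.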

Gianotti said this later collider is the “only machine” that would allow humanity “to make a big jump in studying matter”.

After eight years of study, the configuration chosen for the FCC was a new circular tunnel 90.7 kilometers (56.5 miles) long and 5.5 meters (feet) in diameter.

The tunnel, which would connect to the LHC, would pass under the Geneva region and its namesake lake in Switzerland, and loop round to the south near the picturesque French town of Annecy.

Eight technical and scientific sites would be built on the surface.

CERN said it is consulting with the regions along the route and plans to carry out impact studies on how the tunnel would affect the area.

© 2024 AFP

Novel quantum sensor breaks limits of optical measurement using entanglement

by National Research Council of Science and Technology

KRISS breaks limits of optical measurement using quantum entanglement
A composite interferometer experiment device for the undetected photon quantum sensor. Credit: Korea Research Institute of Standards and Science (KRISS)

The Korea Research Institute of Standards and Science (KRISS) has developed a novel quantum sensor technology that uses quantum entanglement to measure perturbations in the infrared region with visible light. This will enable low-cost, high-performance infrared (IR) optical measurement, which has so far been held back by the limitations of IR detectors.

The work is published in the journal Quantum Science and Technology.

When a pair of photons, the smallest units of light, is linked by quantum entanglement, the two share an associated quantum state regardless of the distance between them. The recently developed undetected photon quantum sensor is a remote sensor that uses two light sources to recreate this quantum entanglement.

An undetected photon (idler) refers to a photon that travels to the target of measurement and bounces back. Instead of directly measuring this photon, the undetected photon sensor measures the other photon of the pair that is linked by quantum entanglement to obtain information about the target.
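A toy model captures why this works (this is a sketch of the general "induced coherence" idea behind undetected-photon sensing, not KRISS's actual apparatus): the interference fringes observed in the detected visible photon have a visibility proportional to how much of the infrared idler survives its trip to the target, so scanning the interferometer phase reveals the infrared transmission without ever detecting an infrared photon:

```python
import math

def detected_intensity(phi, idler_transmission):
    # In induced-coherence interferometry, the detected (visible) signal
    # shows fringes whose contrast is set by the idler's transmission |t|.
    return 1 + idler_transmission * math.cos(phi)

def fringe_visibility(idler_transmission, steps=1000):
    # Scan the interferometer phase and compute (Imax - Imin) / (Imax + Imin)
    samples = [detected_intensity(2 * math.pi * k / steps, idler_transmission)
               for k in range(steps)]
    i_max, i_min = max(samples), min(samples)
    return (i_max - i_min) / (i_max + i_min)

# An opaque target (t = 0) washes out the fringes entirely, while a fully
# transparent one (t = 1) gives full-contrast fringes: the visible-light
# fringes encode the infrared transmission of the target.
visibility = fringe_visibility(0.8)  # recovers the idler transmission, 0.8
```

In this picture, the "sieve" for information is the interference itself: all the measurement hardware operates at visible wavelengths, where detectors are cheap and efficient.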

Quantum sensing based on undetected photons is a nascent technology that has only been realized in the last decade. With the technology still at its early stages, the global research community continues to engage actively in the development race. The undetected photon quantum sensor developed by KRISS is differentiated from previous studies in its core photometric devices, the photodetector and interferometer.

Researchers performing optical alignment with the pump laser of the composite interferometer experiment device. Credit: Korea Research Institute of Standards and Science (KRISS)

A photodetector is a device that converts light into an electrical signal output. Existing high-performance photodetectors have largely been limited to visible wavelengths. Although infrared wavelengths are useful for measurements across many fields, detectors for this region have been either unavailable or poor in performance.

This latest KRISS research has allowed the use of visible light detectors to measure the light states in the infrared band, enabling efficient measurement without requiring costly and power-consuming equipment. It can be used in a wide range of applications, including the non-destructive measurement of three-dimensional structures, biometry, and the analysis of gas compositions.

Another critical element in precision optical measurement is the interferometer, a device that obtains signals by combining beams of light that travel along separate paths. Conventional undetected photon quantum sensors mainly use simple Michelson interferometers with fixed light paths, restricting the number of targets that can be measured.

The sensor developed by KRISS implements a hybrid interferometer that can flexibly change the light paths depending on the target object, greatly improving scalability. Thus, the sensor is suitable for adaptation to various environmental requirements as it can be modified based on the size or shape of the measured object.

The Quantum Optics Group at KRISS has presented a theoretical analysis of the factors that determine the key performance metrics of the quantum sensors and empirically demonstrated their effectiveness by using a hybrid interferometer.

The research team illuminated a three-dimensional sample with light in the infrared band and measured the entangled partner photons in the visible band to obtain an image of the sample, including its depth and width. In this way, the team successfully reconstructed a three-dimensional infrared image from measurements made entirely in the visible range.

Park Hee Su, Head of the Quantum Optics Group at KRISS, said, “This is a breakthrough example that has overcome the limits of conventional optical sensing by leveraging the principles of quantum optics.” He added that KRISS “will continue with follow-up research for the practical application of the technology by reducing its measurement time and increasing sensor resolution.”