A model system of topological superconductivity mediated by skyrmionic magnons

A magnetic monolayer (MML) grown on top of a heavy metal (HM) can host skyrmion states. Spin fluctuations may then induce topological superconductivity in a normal metal (NM). Credit: Kristian Mæland and Asle Sudbø.

Topological superconductors are superconducting materials with unique characteristics, including the appearance of so-called in-gap Majorana states. These bound states can serve as qubits, making topological superconductors particularly promising for the creation of quantum computing technologies.

Some physicists have recently been exploring the potential for creating quantum systems that integrate superconductors with swirling configurations of atomic magnetic dipoles (spins), known as quantum skyrmion crystals. Most of these efforts suggested sandwiching quantum skyrmion crystals between superconductors to achieve topological superconductivity.

Kristian Mæland and Asle Sudbø, two researchers at the Norwegian University of Science and Technology, have recently proposed an alternative model system of topological superconductivity, which does not contain superconducting materials. This theoretical model, introduced in Physical Review Letters, would instead use a sandwich structure of a heavy metal, a magnetic insulator, and a normal metal, where the heavy metal induces a quantum skyrmion crystal in the magnetic insulator.

“We have been interested in low-dimensional novel types of quantum spin systems for a long time and were looking into the question of how quantum spin-fluctuations in quantum skyrmion crystals could affect normal metallic states and possibly lead to superconductivity of an unusual type,” Sudbø told Phys.org.

“Previous work that has in particular inspired us, and that we have been building on, is the experimental work of Heinze et al. on realizations of quantum skyrmion crystals, and two of our own papers on quantum skyrmion crystals.”

In a paper published in 2011, Stefan Heinze at the University of Kiel and his colleagues at the University of Hamburg showed that skyrmion crystals could be realized in actual physical systems. Inspired by this previous work, Sudbø and Mæland made a series of predictions, which serve as the basis of their newly proposed model system of topological superconductivity.

Illustrations of the skyrmion crystal ground states in the magnetic monolayer. Arrows show the in-plane component, while color gives the out-of-plane component. Credit: Kristian Mæland and Asle Sudbø

“We ourselves have not made these systems experimentally, but we are suggesting materials that could be used to create such systems and study their properties,” Sudbø said. “We specifically studied a new way of creating topological superconductivity by sandwiching a normal metal with a very specific spin system where the spins form skyrmions in a repeated pattern, a skyrmion crystal. Previous propositions for creating topological superconductivity suggested sandwiching skyrmion crystals with superconductors. Our approach obviates the need for a superconductor in the sandwich.”

While they did not experimentally realize their proposed model system, Sudbø and Mæland tried to determine its properties through a series of calculations. Specifically, they calculated a property of the system’s induced superconducting state, the so-called superconducting order parameter, and found that it had a non-trivial topology.

“We were able to create a model system where we can produce topological superconductivity in a heterostructure without having a superconductor a priori in the sandwich,” Sudbø said. “Our system is a sandwich structure of a normal metal and a magnetic insulator, while previous proposals have involved a sandwich structure of magnetic insulators and superconductors.”

In the future, new studies could try to realize the model system proposed by these researchers in an experimental setting, further examining its properties and potential for quantum computing applications. Meanwhile, Sudbø and Mæland plan to theoretically explore other possible routes to achieving unconventional superconductivity.

“In general terms, we will pursue unconventional superconductivity and routes to topological superconductivity in heterostructures involving magnetic insulators with unusual and unconventional ground states, as well as novel types of spin excitations out of the ground state,” Sudbø said.

More information: Kristian Mæland et al, Topological Superconductivity Mediated by Skyrmionic Magnons, Physical Review Letters (2023). DOI: 10.1103/PhysRevLett.130.156002

Journal information: Physical Review Letters 

© 2023 Science X Network

Counting photons for quantum computing

Experimental setup. A pulsed source is evenly split into three segments, and each is coupled to a transition-edge sensor detector channel. Credit: DOE’s Jefferson Lab and University of Virginia

Experts in nuclear physics and quantum information have demonstrated a photon-number-resolving system that can accurately resolve more than 100 photons. The feat is a major step forward in capability for quantum computing development efforts. It may also enable quantum generation of truly random numbers, a long-sought goal for developing unbreakable encryption techniques for applications such as military communications and financial transactions. The detector was recently reported in Nature Photonics.

Physicists around the world are hotly pursuing the promise of reliable and robust quantum computing. Not only would harnessing quantum computing herald a giant leap for science, but it would also elevate the economy and enhance national security. But getting there has so far eluded the best brains on the planet.

A pair of engineers at the U.S. Department of Energy’s Thomas Jefferson National Accelerator Facility has designed a crucial piece of a photon detection system that has brought physicists a step closer to fully operational photonics-based quantum computing—that is, a quantum computer built entirely with light. The engineers are part of an interdisciplinary team of federal and academic researchers led by Jefferson Lab who are working on advancing quantum computing in nuclear physics.

There are many different ways to try to make a fully functioning quantum computer. For photonics-based computing, quantum detection of light particles, or photons, is vital. Currently, individual detectors can resolve up to about 10 photons, but that number is too small for many quantum-state generation methods. No one had yet demonstrated detection of more than 16 photons, but simulations suggest that quantum computing will require detecting large numbers of photons—50 or more.

Crossing that 50-photon threshold means being able to implement a “cubic gate”—a milestone toward building a complete gate set for universal quantum computing, explained Amr Hossameldin, team member and graduate research assistant in quantum computing and quantum optics at the University of Virginia.

The team blew past the record of 16 photons and demonstrated a photon count of about 35 per single detector and reached 100 photons with a three-detector system.

“They could predict that they were resolving 100 of these photons that were impacting upon the detectors with this extremely accurate resolution,” said Robert Edwards, senior staff scientist and deputy associate director for theoretical and computational physics at Jefferson Lab. “It’s super accurate—and that has never been achieved.”

“The lack of detection has been a major limitation to this approach of quantum computing. The new photon number resolution is the necessary step to implementing a universal instruction set,” he continued.

The new detection system also has another highly valuable secondary benefit: quantum generation of truly random numbers—a boon to unbreakable secret codes or encryptions in such areas as military communications and financial transactions.

So-called random numbers generated by classical computer algorithms aren’t truly random. The algorithm they are generated from can be compromised with some effort by playing a numbers game—looking for which numbers pop up more often than others. True random-number generation using quantum physics has no such flaw or bias.
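The team's quantum scheme derives randomness from photon statistics. As a generic illustration of what "unbiased" demands (and not the team's method), the classical von Neumann extractor turns any independent-but-biased bit source into unbiased output:

```python
def von_neumann_extract(bits):
    """Von Neumann debiasing: map the pair 01 -> 0 and 10 -> 1, drop 00/11.

    If the input bits are independent with a fixed (unknown) bias,
    the surviving outputs are exactly unbiased.
    """
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

# A biased but independent stream still yields balanced output symbols:
print(von_neumann_extract([1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1]))  # [0, 1, 1, 0]
```

The cost is throughput: at best half the pairs (a quarter of the input bits) survive, which is one reason an intrinsically random quantum source is attractive.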

“There is an intrinsic randomness in quantum mechanics, where you can have a physical system that’s in two states at once,” explained Olivier Pfister, a physics professor at UVa who specializes in quantum fields and quantum information and served as external team leader for the project. “And when you want to know which it is, it’s random.

“Einstein got bugged by this. He called it ‘the Old One playing dice with the universe.’ And we don’t know any better than Einstein.”

Pfister and Hossameldin are co-authors of the paper presenting the team’s research. Other authors are Chris Cuevas and Hai Dong from Jefferson Lab, Richard Birrittella and Paul Alsing from the Air Force Research Laboratory in New York, Miller Eaton from UVa, and Christopher C. Gerry from the City University of New York.

Signals not seen before

The team effort was inspired by an announcement in 2019 by the DOE Office of Science offering funding opportunities for quantum information science research for nuclear physics under the Quantum Horizons program. Edwards secured a small grant to fund a lecture series bringing in experts in quantum computing.

Pfister was the first lecturer in March 2020. A week later, the COVID-19 pandemic shut down the lab, but the seed for joint research into photonics-based quantum computing had been planted.

A large team of physicists, engineers and postdocs was assembled. The collaboration began with the goal to use quantum photonics for calculations relevant to the Jefferson Lab science program.

UVa already had a photon-based system for making quantum calculations using a pulsed laser, but lacked the means to detect with great speed and accuracy the number of photons impacting on its detector before the signal decayed.

Meanwhile, detecting particles with speed and accuracy is Jefferson Lab’s forte. Its Continuous Electron Beam Accelerator Facility, or CEBAF, has been used for decades in experiments that rely on ultra-sensitive detectors to measure the cascade of fleeting subatomic particles created when a particle beam slams into targets at nearly the speed of light. CEBAF is a DOE Office of Science user facility that is accessed by more than 1,850 nuclear physicists for their research.

In a team experiment at Pfister’s Quantum Optics Lab at UVa, Hossameldin linked three superconducting transition-edge sensor (TES) devices, each capable of resolving about 35 photons, into a single detector system, set them in front of the laser, and turned on the beam.

A high-speed digitizer designed and developed by Dong at Jefferson Lab was a key piece of the detector electronics.

“The TES original digitizers did not have the high-speed capabilities that are included with our design,” said Cuevas. “Our digitizer has a 12-bit accuracy with a 4 ns sampling time, so this allowed us to capture signals from the TES that were not seen before.”

Research for quantum computing is evolving at an exponential pace, and Cuevas predicts new technology will replace their system soon. But the larger collaboration to build a light-based quantum computer continues.

“The project is a very good example where designs can be reused and applied to a completely different scientific application,” Cuevas said. “Sharing technology is a core foundation for the scientific communities, and as electronics engineers, it is exciting to know our designs can further important discoveries.”

More information: Miller Eaton et al, Resolution of 100 photons and quantum generation of unbiased random numbers, Nature Photonics (2022). DOI: 10.1038/s41566-022-01105-9

Journal information: Nature Photonics 

Provided by Thomas Jefferson National Accelerator Facility 

Highly multicolored, light-emitting arrays for compressive spectroscopy on a chip

Multiplexed array of multicolored electroluminescent devices. (A) Schematic of an array of light-emitting MOS capacitors, in which different emitting materials are micro-dispensed on each capacitor. (B) Optical image of a fabricated 7 × 7 array of devices with a different emitter dispensed on each pixel of the array. Scale bar, 400 μm. (C) Example of time-resolved EL (red) corresponding to a square wave pulsed gate voltage waveform (gray). (D) Example of increase in EL intensity with driving frequency. (E) EL spectra from several materials emitting across the blue to near-infrared range, with emission at all wavelengths in between. (F) Example of EL spectra from a wide range of materials emitting in the visible range. Credit: Science Advances (2023). DOI: 10.1126/sciadv.adg1607

Miniaturized and multicolored light-emitting device arrays provide a promising platform for sensing, imaging and computing in materials science and applied physics. A range of emission colors can be achieved using conventional light-emitting diodes, although the process can be limited by material or device constraints.

In a new report published in Science Advances, Vivian Wang and a research team in electrical engineering and materials science at the Lawrence Berkeley National Laboratory and the University of California, Berkeley, developed highly multicolored light-emitting arrays with diverse colors on a single chip. The array contained pulse-driven metal-oxide-semiconductor capacitors to generate electroluminescence from micro-dispensed materials spanning a variety of colors and spectral shapes.

The materials scientists and engineers combined the arrays with compressive reconstruction algorithms to perform spectroscopic measurements in a compact setup, and used the multiplexed electroluminescent arrays in conjunction with a camera to demonstrate spectral imaging of samples.

Experimental design

Multicolored light-emitting arrays have versatile applications in the field, although the range of colors available is often insufficient for commercial use. In this study, Wang and colleagues addressed the problem by using arrays of electroluminescent, alternating-current-driven metal-oxide-semiconductor capacitors, in which they lithographically defined the device electrodes before depositing the emitting materials. Deposition of the emissive layer formed the final fabrication step in these devices, and the team used microprinting methods to dispense diverse color emitters on a single chip with ease, enabling active spectral measurements in a miniaturized environment.

Design of arbitrary EL spectra. (A) Schematic depicting generation of arbitrary EL spectra using an EL array. (B) Example of EL spectra and (C) relative weights of the corresponding spectra used to reconstruct the target spectrum in (D). The dashed gray curve represents the desired target spectrum. The purple curve represents the experimentally measured spectrum from the implemented five-device EL array. The gold curve represents the designed spectrum from calculation. Credit: Science Advances (2023). DOI: 10.1126/sciadv.adg1607

Electroluminescent device arrays with arbitrary spectra

The research team developed multiplexed arrays of light sources from metal-oxide-semiconductor capacitor devices, depositing emissive materials on top of capacitors engineered on silicon substrates. They applied an alternating-current voltage to the capacitors to overcome differences in band alignment at the metal-semiconductor contact and produce transient electroluminescence during each voltage transition. The materials scientists generated this transient electroluminescence from materials emitting across the visible to infrared range, varying both frequency and voltage.

During the experiments, they characterized the brightness, efficiency, color, and spectral bandwidth of the electroluminescent devices. Owing to the simplicity of the fabrication, the team successfully constructed large multicolor electroluminescent arrays on a single substrate. They accomplished this by developing a 7 × 7 array of light-emitting devices, depositing a different emissive layer on each pixel via micro-dispensing to obtain a nanometer-thin film. The team highlighted a core characteristic of the platform by demonstrating how arbitrary light spectra could be generated with a sufficiently large library of emitters: the total light spectrum from a miniature electroluminescent array is a linear combination of the spectra of its individual elements. As a result, the researchers could design an array of emitters to yield a desired light spectrum.
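The linear-combination property lends itself to a short numerical sketch. Here, synthetic Gaussian curves stand in for measured EL spectra (this is an illustration under those assumptions, not the paper's data or code): with each emitter's spectrum as a column of a matrix S, finding the drive weights for a target spectrum is a least-squares fit.

```python
import numpy as np

wavelengths = np.linspace(400, 800, 200)  # nm grid

def gaussian(center, width):
    """A synthetic emitter spectrum (stand-in for a measured EL spectrum)."""
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

# Hypothetical library: five narrow emitters across the visible range.
S = np.stack([gaussian(c, 30.0) for c in (450, 520, 580, 640, 700)], axis=1)

# Target: a broad spectrum the array should mimic.
target = gaussian(550, 80.0)

# Least-squares weights for the linear combination S @ w ~= target;
# in the device, these would set the relative drive amplitudes.
w, *_ = np.linalg.lstsq(S, target, rcond=None)
relative_error = np.linalg.norm(S @ w - target) / np.linalg.norm(target)
```

In practice the drive weights must be nonnegative, so a nonnegative least-squares solver would replace the unconstrained fit.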

Spectral measurement with highly multicolored arrays. (A) Schematic depicting the concept behind transmittance measurement using variable incident illumination and a single photodetector. (B) Spectra of EL devices (table S3) and (C) photodetector readings used to reconstruct the sample transmittance in (D). (E) Schematic depicting the concept behind time-multiplexed transmittance measurement using a series of phase-shifted gate voltage waveforms applied to each device in a 2 × 3 EL array. (F) Reconstruction of spectral transmittance based on the concept in (E). Credit: Science Advances (2023). DOI: 10.1126/sciadv.adg1607

Reconstructive spectral measurement and imaging

The team showed the scalability and versatility of the device platform, alongside the breadth of achievable electroluminescence spectra, with active spectral measurements. For example, they measured the transmission spectrum of an unknown sample by passing broadband light through it. They also used reconstructive spectrometry to recover spectral information algorithmically. The inherently pulsed nature of the devices allowed them to time-multiplex measurements and achieve fast spectral acquisition without switching between devices.
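The reconstructive step can be sketched as a regularized linear inversion: each single-photodetector reading is the overlap of one emitter's spectrum with the unknown transmittance. The sketch below uses toy random spectra and, for simplicity, more readings than unknowns; the compressive setting in the paper uses fewer readings together with priors.

```python
import numpy as np

rng = np.random.default_rng(0)
n_wavelengths, n_devices = 120, 150  # overdetermined for this toy example

# Made-up emitter spectra (one per row): nonnegative random curves.
S = np.abs(rng.normal(size=(n_devices, n_wavelengths)))

# Unknown, smooth sample transmittance we want to recover.
x_true = 0.5 + 0.4 * np.sin(np.linspace(0, 3 * np.pi, n_wavelengths))

# One photodetector reading per emitter, with a little measurement noise.
y = S @ x_true + 0.01 * rng.normal(size=n_devices)

# Tikhonov-regularized least squares:  min ||S x - y||^2 + lam * ||x||^2
lam = 1e-2
x_rec = np.linalg.solve(S.T @ S + lam * np.eye(n_wavelengths), S.T @ y)
relative_error = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
```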

After conducting reconstructive spectral measurements, Wang et al. used the instrument for spectral imaging. In this instance, the electroluminescent arrays served as the light source, while a silicon charge-coupled-device camera captured images of the samples. They further accomplished spectral imaging of semi-transparent biological samples, all while keeping the light-emitting arrays miniaturized at the micro- and nanoscale on a chip, making them feasible for a range of adaptive sensing applications.

Spectral imaging with highly multicolored arrays. Schematic depicting the concept behind microscale (A) reflected-light and (B) transmitted-light spectral imaging using variable incident illumination and a monochrome camera. (C) Example of different incident light spectra and the corresponding reflected-light microscope images. Scale bar, 40 μm. (D) Reconstructed reflectance spectra at different spots on the sample shown in the optical micrograph, using the emitters in table S4. Scale bar, 40 μm. (E) Spectral data cube for a human tissue sample imaged using a transmitted-light imaging setup. Scale bars, 100 μm. Credit: Science Advances (2023). DOI: 10.1126/sciadv.adg1607

Outlook

In this way, Vivian Wang and colleagues developed a simple and scalable platform of light-emitting device arrays with highly multiplexed emission spanning visible and infrared wavelengths. The scientists used pulse-driven metal-oxide-semiconductor capacitors to fully integrate the multicolor devices.

This platform provides a source of variable light emission for probing the spectral properties of samples and for spectral reflectance and transmission imaging. Spectral imaging accuracy and coverage could be further improved with spectrally tuned materials such as colloidal quantum dots and perovskite nanomaterials, or with thin luminescent films developed by combinatorial deposition.

This work can be expanded to wavelengths beyond the visible spectrum. The pixels in the light-emitting arrays can be individually addressed, allowing materials scientists to generate light with customizable patterns in frequency, space, and time for multidimensional spectral measurements.

More information: Vivian Wang et al, Highly multicolored light-emitting arrays for compressive spectroscopy, Science Advances (2023). DOI: 10.1126/sciadv.adg1607


Journal information: Science Advances 

A highly precise terahertz molecular clock

A very narrow vibrational molecular resonance has a sharpness (or quality factor) of three trillion. Credit: K. H. Leung.

In recent years, many physicists worldwide have introduced atomic clocks, systems that measure the passage of time based on quantum states of atoms. These clocks can have numerous valuable applications, for instance in the development of satellite and navigation systems.

Lately, some researchers have also been exploring the possible development of molecular clocks, systems that resemble atomic clocks but are based on simple molecules. A team at Columbia University and the University of Warsaw recently created a highly accurate molecular clock that could be used to study new physical phenomena.

“Our recent paper is the result of a multi-year effort to create what is called a molecular clock,” Tanya Zelevinsky, one of the researchers who carried out the study, told Phys.org. “It was inspired by the rapid progress in the precision of atomic clocks, and the realization that molecular clocks rely on a different ‘ticking’ mechanism and thus could be sensitive to complementary phenomena. One of these is the idea that the fundamental constants of nature might change very slightly over time. The other is the possibility that gravity between very small objects may be different from what we experience at larger scales.”

The molecular clock created by Zelevinsky and her colleagues is based on the diatomic molecule Sr2, structurally resembling two tiny spheres connected by a spring. The clock specifically uses the vibrational modes of this molecule as a precise frequency reference, which in turn allows it to keep track of time.
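In this ball-and-spring picture, the "ticking" rate is set, to lowest order, by the textbook harmonic vibrational frequency of the dimer (a standard relation, not a result of this paper):

```latex
\omega = \sqrt{\frac{k}{\mu}}, \qquad \mu = \frac{m_1 m_2}{m_1 + m_2},
```

where k is the effective bond stiffness and μ is the reduced mass; for a homonuclear Sr2 dimer, μ = m_Sr/2, which is why substituting isotopes of different mass shifts the clock frequency.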

An image of the ultracold molecules used by the researchers, broken up into atoms. Credit: K. H. Leung

“Our clock requires the use of lasers to cool atoms near absolute zero and hold them in optical traps, induce them to combine into molecules, and shine highly precise ‘clock’ lasers at them to actually make a measurement,” Zelevinsky explained. “Some of the advantages of the molecular clock include its very low sensitivity to stray magnetic or electric fields, and the very long natural lifetimes of the vibrational modes.”

In their study published in Physical Review X, Zelevinsky and her colleagues evaluated the precision of their molecular clock in a series of tests, measuring its so-called systematic uncertainty. They found that their design significantly minimized sources of error, and their clock achieved a total systematic uncertainty of 4.6 × 10⁻¹⁴, exhibiting notably high precision.

“Our recent work sets the benchmark for the precision of molecular spectroscopy, with the observed measure of the peak sharpness—or its quality factor—of 3 trillion,” Zelevinsky said. “It also illuminates the effects that limit this precision, in particular, the eventual loss of molecules via scattering of the light in which they are trapped. This inspires us to search for improvements in the optical trapping strategy.”

Small shifts of the clock resonance position with the wavelength of the trapping light (color-coded) currently limit the accuracy of the vibrational clock. Credit: K. H. Leung.

The vibrational molecular clock created by this team of researchers could become a standard for terahertz frequency applications, while also potentially informing the creation of new molecular spectroscopy tools. Its design could also be altered by substituting the Sr2 molecules with other isotopic variants (with different mass), which could aid ongoing searches for new physical interactions.

“In the future, we hope to apply the molecular clock to understand molecular structure at the highest precision and to study any possible signatures of non-Newtonian gravity at nanometer size scales,” Zelevinsky added.

More information: K. H. Leung et al, Terahertz Vibrational Molecular Clock with Systematic Uncertainty at the 10⁻¹⁴ Level, Physical Review X (2023). DOI: 10.1103/PhysRevX.13.011047

Journal information: Physical Review X 

Android-based application for photoacoustic tomography image reconstruction

Experimental pulsed laser diode based in vivo PAT imaging system. Reconstructed PAT images, displayed on the Android mobile platform-based application. In vivo PAT reconstructed rat brain vasculature image with original dataset and twofold downsampled dataset. Credit: Hui et al., doi 10.1117/1.JBO.28.4.046009

Photoacoustic tomography (PAT) is a hybrid imaging technique that combines optical illumination with ultrasound detection for high-resolution imaging of deep tissues. Utilizing the photoacoustic (PA) effect, PAT provides a distinct advantage with scalable resolution, higher imaging depth, and high-contrast imaging. It uses nanosecond laser pulses to illuminate the tissue of interest, enabling its chromophores to absorb the incident laser energy. This results in a local temperature rise and generates pressure waves that propagate to the tissue boundary as ultrasound waves. These PA waves are then acquired with the help of an ultrasound transducer and converted into internal absorption maps using reconstruction algorithms.

This process of generating initial pressure maps can be carried out with several different image reconstruction algorithms, including the simple delay-and-sum (DAS) beamformer. This algorithm back-projects the acquired signals from various tissue locations, which are then added at every pixel in the reconstructed image. However, this makes the DAS beamformer computationally expensive and time-consuming, and results in artifacts, i.e., anomalies, in the reconstructed images. Despite these drawbacks, its simplicity and ease of implementation make it a popular choice for PAT reconstruction.
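The delay-and-sum idea can be sketched in a few lines. This is a minimal textbook version, not the paper's single-element-transducer implementation; all names and parameters here are illustrative.

```python
import numpy as np

def das_reconstruct(signals, fs, sensor_xy, grid_xy, c=1500.0):
    """Minimal delay-and-sum (DAS) reconstruction for PAT.

    signals   : (n_sensors, n_samples) recorded PA time series
    fs        : sampling rate in Hz
    sensor_xy : (n_sensors, 2) sensor coordinates in meters
    grid_xy   : (n_pixels, 2) image-pixel coordinates in meters
    c         : assumed speed of sound in m/s
    """
    n_sensors, n_samples = signals.shape
    image = np.zeros(len(grid_xy))
    for s in range(n_sensors):
        # Time of flight from each pixel to this sensor, as a sample index.
        dist = np.linalg.norm(grid_xy - sensor_xy[s], axis=1)
        idx = np.round(dist / c * fs).astype(int)
        valid = idx < n_samples
        # Back-project: add the appropriately delayed sample to every pixel.
        image[valid] += signals[s, idx[valid]]
    return image
```

Each pixel accumulates the samples whose time of flight matches its distance to each sensor, which is why the cost scales with the product of pixels and sensors, and why streak artifacts appear away from true sources.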

Typically, implementing such reconstruction algorithms requires a workstation, desktop, or laptop with extensive computational resources. But the computational power of mobile phones has been growing in recent years. Although mobile phones have been proposed for various imaging modalities, including ultrasound imaging, their utility for photoacoustic imaging tasks such as PAT image reconstruction has not been explored.

Capitalizing on the advanced processing ability of mobile phones, researchers from Singapore and the United States have now developed an Android-based application for PAT image reconstruction. The study was led by Manojit Pramanik, Northrop Grumman Associate Professor in the Department of Electrical and Computer Engineering at Iowa State University, and published in the Journal of Biomedical Optics (JBO).

The developed application utilizes a single-element ultrasound transducer (SUT)-based DAS beamformer algorithm for image reconstruction on Kivy—a cross-platform Python 3.9.5 framework.

The researchers verified its performance on different mobile phones by using simulated and experimental PAT datasets. The simulated datasets consisted of point targets, a triangular shape, and a rat-brain vessel shape, while the experimental datasets comprised a point-source phantom, a triangular-shape phantom, and blood vessels in the brains of live rats.

“The developed application can successfully reconstruct the PAT data into high-quality PAT images with signal-to-noise ratio values above 30 decibels,” comments Pramanik.

Interestingly, the algorithm’s computational time on a Huawei P20 mobile phone was comparable to that on a laptop for small datasets. Furthermore, two-fold downsampling of the original dataset reduced the computational time while maintaining the image quality, thus allowing image reconstruction with both speed and quality. In contrast, three-fold downsampling visibly degraded the PAT images.

Moreover, the researchers found that with the Samsung Galaxy S21+’s advanced processor, PAT reconstruction could be achieved in only 2.4 seconds. “This is a considerably reduced running time for image reconstruction and highlights the efficiency of the mobile phone application,” notes Pramanik.

JBO Editor-in-Chief Brian Pogue, Chair of Medical Physics at University of Wisconsin–Madison, remarks, “This first-of-its-kind application provides an opportunity for PAT image reconstruction on inexpensive, portable, and widely available mobile phones. Going ahead, the application can make PAT systems more adaptable and extendable to other fields of biomedical imaging, facilitating point-of-care diagnosis.” He adds, “The code for this Android-based application has been made freely available on GitHub, making this a major service to the biomedical imaging community.”

More information: Xie Hui et al, Android mobile-platform-based image reconstruction for photoacoustic tomography, Journal of Biomedical Optics (2023). DOI: 10.1117/1.JBO.28.4.046009

Journal information: Journal of Biomedical Optics 

Provided by SPIE 

No need for a supercomputer: Describing electron interactions efficiently and accurately

An illustration of a checkerboard lattice structure representing a metal oxide 2D layer. Red circles represent correlated d sites (transition metals), and blue circles are noninteracting p sites (oxygens). The double-arrow lines illustrate examples of bonds used to define the slave-particle operators. The dashed black ellipses indicate the d−p−d clusters in the layer which overlap with each other on the correlated d sites. Credit: Physical Review B (2023). DOI: 10.1103/PhysRevB.107.115153

One of the outstanding challenges in the field of condensed matter physics is finding computationally efficient and simultaneously accurate methods to describe interacting electron systems for crystalline materials.

In a new study, researchers have discovered an efficient but highly accurate method of doing so. The work, led by Zheting Jin (a graduate student in Yale Applied Physics) and his thesis supervisor, Sohrab Ismail-Beigi, is published in Physical Review B.

Developing methods to accurately describe interacting quantum electrons has long been of interest to researchers in the field because it can provide valuable insights into many important aspects of materials. Describing electrons at this level is tricky for a few reasons, though. One is that, because they are quantum mechanical, they move in a wavy manner, making them more complicated to track. The other is that they interact with each other.

Each component of this problem is “OK to deal with separately,” said Ismail-Beigi, Strathcona Professor of Applied Physics, Physics, and Mechanical Engineering & Materials Science. But when you have waviness and interactions, the problem is so complex that nobody knows how to solve it efficiently.

Like many difficult problems in physics and mathematics, one can in principle take a giant computer and numerically solve the problem with brute force, but the amount of computation and storage needed would be exponential in the number of electrons. For example, every time one adds a new electron to the system, the size of the computer needed increases by a factor of two (typically, even a larger factor). This means studying a system with about 50 electrons is infeasible even with today’s largest supercomputers. For context, a single iodine atom has 53 electrons, while a small nanoparticle has more than 1,000 electrons.
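The scaling can be made concrete with a quick back-of-the-envelope script (illustrative assumptions, not from the paper: each spin-1/2 degree of freedom doubles the state space, and one complex amplitude takes 16 bytes):

```python
def state_vector_bytes(n_electrons: int, bytes_per_amplitude: int = 16) -> int:
    """Memory needed to store one brute-force quantum state vector:
    the state space doubles with every added spin-1/2 particle."""
    return (2 ** n_electrons) * bytes_per_amplitude

for n in (10, 30, 50):
    print(f"{n} electrons: {state_vector_bytes(n) / 1e9:.3g} GB")
# 30 electrons already needs ~17 GB; 50 electrons needs ~18 million GB,
# far beyond the memory of any supercomputer.
```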

“On the one hand, the electrons want to move around—that’s to take advantage of the kinetic energy,” Ismail-Beigi said. “On the other, they repel each other—’don’t come next to me if I’m here already.’ Both effects are captured in the well-known Hubbard model for interacting electrons. Basically, it has these two key ingredients, and it’s a very hard problem to solve. No one knows how to solve it exactly, and high-quality approximate and efficient solutions are not easy to come by.”
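Those two competing ingredients are exactly the two terms of the single-band Hubbard Hamiltonian (standard textbook notation, not taken from the paper):

```latex
H = -t \sum_{\langle i,j \rangle, \sigma}
      \left( c^{\dagger}_{i\sigma} c_{j\sigma} + \mathrm{h.c.} \right)
    + U \sum_{i} n_{i\uparrow} n_{i\downarrow}
```

Here $t$ sets the kinetic energy gained by hopping between neighboring sites $\langle i,j \rangle$, $U$ is the energy penalty when two electrons occupy the same site, and $n_{i\sigma} = c^{\dagger}_{i\sigma} c_{i\sigma}$ counts electrons of spin $\sigma$ on site $i$.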

The Ismail-Beigi team has developed a method related to a class of approaches that use what’s known as an auxiliary or subsidiary boson. Typically, these approaches require far fewer computational resources but are only moderately accurate because they treat one atom at a time. Ismail-Beigi’s team tried a different tack: rather than examining one atom at a time, the researchers treat two or three bonded atoms at a time (called a cluster).

“Electrons can hop between the atoms in the cluster: we solve the cluster problem directly, and then we connect the clusters together in a novel way to describe the entire system,” Ismail-Beigi said. “In principle, the larger the cluster, the more accurate the approach, so the question is how large a cluster does one need to get a desired accuracy?”

Researchers have previously tried cluster approaches, but the computational costs have been prohibitively high and the gain in accuracy has not justified the added expense.

“Zheting and I found a clever way of matching different clusters together so that the quantities calculated between the different clusters agree across their boundaries,” he said. “The good news is that this method then gives a very highly accurate description with even a relatively small cluster of three atoms. Because of the smooth way one glues the clusters together, one describes the long-range motion of the electrons well in addition to the localized interactions with each other. Going into this project, we didn’t expect it to be this accurate.”

Compared to literature benchmark calculations, the new method is three to four orders of magnitude faster.

“All the calculations in the paper were run on Zheting’s student laptop, and each one completes within a few minutes,” Ismail-Beigi said. “Whereas for the corresponding benchmark calculations, we have to run them on a computer cluster, and that takes a few days.”

The researchers said they look forward to applying this method to more complex and realistic materials problems in the near future.

More information: Zheting Jin et al, Bond-dependent slave-particle cluster theory based on density matrix expansion, Physical Review B (2023). DOI: 10.1103/PhysRevB.107.115153

Journal information: Physical Review B 

Provided by Yale University 

Scientists find molecule responsible for the bright white coloring of Pacific cleaner shrimp

Molecule responsible for the bright white coloring of Pacific cleaner shrimp found
Brilliant white dermal chromatophore cells in the Pacific cleaner shrimp. a, The Pacific cleaner shrimp (Lysmata amboinensis). Scale bar, 0.5 cm. b, Optical micrograph of the white stripe on the carapace. Inset: the dendritic white chromatophores. Inset scale bar, 100 μm. c, Optical micrograph of white chromatophores on the tail. Inset: higher-magnification image of a white chromatophore. Inset scale bar, 100 μm. d, Optical micrograph of the maxilliped. Inset: the anatomical location of the maxillipeds. Inset scale bar, 0.5 cm. e, Average reflectance spectra of five measurements obtained from the white stripe using a ×40 water immersion objective. Credit: Nature Photonics (2023). DOI: 10.1038/s41566-023-01182-4

An international team of molecular chemists, physicists and nanomolecular scientists has found the molecule responsible for the bright white stripes sported by the Pacific cleaner shrimp. The study is published in Nature Photonics. Diederik Wiersma, with the European Laboratory for Non-Linear Spectroscopy, has published a News & Views piece in the same journal issue outlining the work by the team and explaining why it has such pertinence to the creation of photonic materials.

Photonic materials used in solar cells, sensors and optical displays all have thin, white coatings to assist with light propagation. Most such coatings are made using titanium dioxide or zinc oxide nanoparticles, which are toxic to humans, so scientists have been searching for an organic alternative. To that end, they have studied animals that sport naturally thin white coatings. In this new effort, the team focused on learning more about the bright white stripes of the Pacific cleaner shrimp.

To learn more about the stripes, the team obtained samples and studied them using microscopy. Then, using optical measurements, they created simulations showing how the stripes interact with light. They found that the stripes owe their whiteness to a thin coating made of extremely thin layers of tightly packed nanospheres. Watching as light was introduced, the researchers could see it scatter due to the angles created by the nanospheres.

They found that this optical crowding resulted in the stripes reflecting up to 80% of the light that struck them. The researchers determined that the nanospheres were made up of spoke-like isoxanthopterin molecules. These molecules, they found, allowed for efficient scattering even as the nanospheres were densely packed.

The study illuminates the role of optical anisotropy in light scattering. And that, the researchers suggest, could help in developing less hazardous white coatings for optical applications.

More information: Tali Lemcoff et al, Brilliant whiteness in shrimp from ultra-thin layers of birefringent nanospheres, Nature Photonics (2023). DOI: 10.1038/s41566-023-01182-4

Diederik S. Wiersma, A shrimp solves a scattering problem, Nature Photonics (2023). DOI: 10.1038/s41566-023-01183-3

How this shrimp gets its brilliant white stripe, Nature (2023). DOI: 10.1038/d41586-023-01415-0

Journal information: Nature Photonics, Nature 

© 2023 Science X Network

Quantum scientists achieve state-of-the-art defect-free atom array

NUS quantum scientists achieve state-of-the-art defect-free atom array
Arrays of neutral atoms are a promising platform for quantum simulation. Asst Prof Loh Huanqian and her group can precisely assemble large arrays of singly-trapped rubidium atoms, as shown in these single-shot images of arrays with arbitrary geometries: (from left to right) kagome, honeycomb, and a lion head, which is a national symbol of Singapore. Credit: Centre for Quantum Technologies

The glowing dots in these images are single rubidium atoms, pristinely arranged in arrays about as wide as a human hair. The team of CQT Principal Investigator Loh Huanqian captured these pictures to show how they can assemble atoms into any pattern—even Singapore’s Lion Head symbol—fitting within a 15 by 15 triangular grid. The researchers describe the setup and novel algorithm that makes this possible in a paper published 15 March 2023 in Physical Review Applied.

Researchers are keen to work with arrays of neutral atoms because, like Lego blocks that can be assembled into prototype buildings, atom arrays can be used to perform powerful quantum simulations of materials. Scientists already use supercomputers to calculate material properties, but the calculations quickly become intractable as the number of simulated particles grows. With an atom array, scientists can model materials directly.

The CQT group’s approach allowed them to reliably achieve a state-of-the-art defect-free array size of 225 atoms at room temperature. Perfection in the pattern is important because defects, or missing atoms, in an array have been found to degrade the observed signal in quantum simulations.

Rearranging atoms in parallel

To assemble their atom arrays, the group starts by trapping atoms using tightly focused laser beams known as optical tweezers. Trapping single atoms involves an element of chance, so not every tweezer is successfully loaded with an atom. This means that even though the researchers begin with an array of 400 tweezers, they end up with an atom array full of defects. In the next step, they rearrange the atoms in real time to form a smaller, defect-free target array.

This is where the team’s approach diverges from existing ones. Previous efforts to rearrange atoms have focused on moving the atoms one at a time using a single extra optical tweezer, after calculating the smallest number of moves needed. Atoms do not stay in their traps forever, so minimizing the number of moves, and thus the rearrangement time, increases the probability of achieving a defect-free array.

Huanqian and her group members have developed a system that instead speeds up the rearrangement by moving many atoms in parallel with multiple tweezers. In their experiment, up to 15 mobile tweezers can be used to move atoms simultaneously to generate the defect-free array. The maximum number of mobile tweezers involved can be specified by the user.

“Moving the atoms one at a time is like playing the piano with one finger,” says Weikun, who is the first author of the paper. “In our protocol, we use more fingers and can play the piano faster, which saves a lot of time. For example, if we had 100 atoms to move, instead of making 100 moves one at a time, we could move ten atoms at a time. This means that the number of moves we make is ten times smaller.”

To make this work, the researchers designed a novel algorithm that calculates what moves to make.

Strategic moves

The algorithm’s input comes from an image of how the atoms are initially loaded. The image is converted to a binary matrix, with values 1 and 0 representing whether or not an atom is trapped at each tweezer site. The researchers also specify the target array.

The rearrangement strategy consists of two parts. The first is row sorting. In this procedure, atoms in the rows are redistributed among the columns such that each column has the number of atoms required in the target array. The second is a column compression procedure that moves the atoms to their target positions.

To ensure atoms do not collide during moves, which could knock them out of the array, the group specified that the algorithm should always move atoms with the same speed and preserve their order.

After completing the calculation, the algorithm communicates with the hardware. Optical tweezers, acting as mechanical arms, rearrange the atoms row by row, then column by column. The group call their algorithm the parallel sort-and-compression algorithm; it can complete the array assembly in less time than the blink of an eye.
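The two-phase strategy can be sketched in code. The following is an illustrative toy under stated assumptions, not the authors' implementation: it computes only the final occupancy pattern and ignores tweezer trajectories, move counts, and collision avoidance, all of which the real algorithm handles.

```python
import numpy as np

def parallel_sort_and_compress(occupied, target):
    """Toy sketch of a two-phase rearrangement on binary occupancy matrices.
    Phase 1 (row sorting): within each row, slide atoms sideways so each
    column ends up holding at least as many atoms as the target needs.
    Phase 2 (column compression): within each column, move atoms onto the
    target sites, preserving their vertical order."""
    n_rows, n_cols = occupied.shape
    need = target.sum(axis=0)              # atoms each column must end with
    have = np.zeros(n_cols, dtype=int)
    rowsorted = np.zeros_like(occupied)

    # Phase 1: put each row's atoms into the currently most-deficient columns.
    for r in range(n_rows):
        atoms = int(occupied[r].sum())
        order = np.argsort(have - need)    # most-deficient columns first
        for c in order[:atoms]:
            rowsorted[r, c] = 1
            have[c] += 1
    if np.any(have < need):
        raise ValueError("not enough atoms loaded to fill the target array")

    # Phase 2: within each column, drop atoms onto the target rows in order.
    final = np.zeros_like(target)
    for c in range(n_cols):
        atom_rows = np.flatnonzero(rowsorted[:, c])
        for t_row, _ in zip(np.flatnonzero(target[:, c]), atom_rows):
            final[t_row, c] = 1
    return final
```

Phase 1 guarantees every column holds enough atoms before Phase 2 compresses them onto the target sites; in the real system, both phases are executed by moving many tweezers in parallel.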

“Coding is one of the most difficult parts of the experiment,” says Weikun. “Our algorithm sees the whole picture, designs the entire move set in one go, makes sure that there is no collision and then conducts it.”

With their novel algorithm, the group experimentally realized the defect-free 225-atom array with a success probability of 33%, which is among the highest success probabilities reported in the literature for room temperature setups. The team expects their success probability can be improved with quieter and more powerful laser sources.

“We have demonstrated that we can apply our algorithm to arbitrary geometries such as the honeycomb, kagome, and link-kagome, which are interesting for studying different advanced materials like graphene, superconductors or quantum spin liquids,” says Huanqian. “Just to show that we’ve done this in Singapore, we also rearranged the single atoms to form the Lion Head symbol.” The Lion Head symbol was introduced as a national symbol in 1986 and symbolizes courage, strength and excellence.

More information: Weikun Tian et al, Parallel Assembly of Arbitrary Defect-Free Atom Arrays with a Multitweezer Algorithm, Physical Review Applied (2023). DOI: 10.1103/PhysRevApplied.19.034048

Provided by National University of Singapore 

Removing the sluggishness associated with using hydrogen to make steel

(a) A representative HAADF image of the hydrogen-reduced cubic-Fe1−xO sample. Areas marked with red arrows appear to assume a faceted Wulff shape. Regions of interest for 4D STEM scans (Fig. 3) are outlined in color. The rectangular areas in orange, red, and blue are regions close to the surface, in the middle of the lamella, and away from the surface, respectively. Some channels between adjacent pores are marked by blue arrows. (b) 3D reconstruction of the ET results. The regions in royal blue color coding are the bcc-Fe matrix, while the salmon-colored ones are the pores. The reconstruction is based on 29 HAADF images (tilting from −70° to 70° for every 5°). (c),(d) Magnified images of the rectangular area highlighted in blue in (a). (e) EDS mapping of the purple rectangular region in (c). Credit: Physical Review Letters (2023). DOI: 10.1103/PhysRevLett.130.168001

A team of metallurgists and materials scientists at the Max-Planck-Institut für Eisenforschung GmbH has uncovered the reason for the sluggishness that occurs when attempting to use hydrogen instead of coke to make steel. In their study, reported in the journal Physical Review Letters, the group isolated the problem and offered solutions for producing steel with reduced carbon emissions.

Prior research has shown that making steel accounts for approximately 7% to 9% of carbon dioxide emissions into the atmosphere, which has led scientists to look for cleaner ways of making the material. Thus far, the prime approach has been to use hydrogen instead of carbon to strip the oxygen from the iron oxide, but this has proven too sluggish. In this new effort, the research team has found the reason for the sluggishness, along with a solution.

To make steel, coke (which has a high carbon content) is burned inside a blast furnace to heat a quantity of iron oxide. As this happens, the carbon reacts with the oxygen in the iron oxide, releasing carbon dioxide into the air as the oxide is reduced to iron. Prior research has shown that as the oxygen leaves, pores remain in the iron, which must then be cleared of remaining oxygen by heating the metal a second time. In this new effort, the researchers found that these pores cause problems when the much cleaner hydrogen is used as the reducing agent.

Using an electron microscope, the research team was able to see that when hydrogen was used to reduce iron oxide, the pores left behind by the reaction became filled with water, which was subsequently trapped inside the iron. That water then reoxidized the iron, slowing the oxygen-removal process. They also found that channels sometimes formed that allowed the water to drain. Thus, the solution to removing the sluggishness was to process the iron oxide in a way that promoted the creation of such channels, which could be done by introducing a microfracture structure into the feedstock.
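In outline, the two routes differ in what carries the oxygen away (standard reduction chemistry, not drawn from the paper itself): burning coke produces carbon monoxide, which strips the oxygen and leaves carbon dioxide, while hydrogen produces water instead:

```latex
\mathrm{Fe_2O_3} + 3\,\mathrm{CO} \;\rightarrow\; 2\,\mathrm{Fe} + 3\,\mathrm{CO_2}
\qquad \text{(coke route, via CO)}
```

```latex
\mathrm{Fe_2O_3} + 3\,\mathrm{H_2} \;\rightarrow\; 2\,\mathrm{Fe} + 3\,\mathrm{H_2O}
\qquad \text{(hydrogen route)}
```

It is the water on the right-hand side of the second reaction that, when trapped in the pores, reoxidizes the iron and slows the process.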

More information: Xuyang Zhou et al, Effect of Pore Formation on Redox-Driven Phase Transformation, Physical Review Letters (2023). DOI: 10.1103/PhysRevLett.130.168001

Journal information: Physical Review Letters 

© 2023 Science X Network

Newly observed effect makes atoms transparent to certain frequencies of light

Artist’s visualization of a laser striking atoms in an optical cavity. Credit: Ella Maru Studio

A newly discovered phenomenon dubbed “collectively induced transparency” (CIT) causes groups of atoms to abruptly stop reflecting light at specific frequencies.

CIT was discovered by confining ytterbium atoms inside an optical cavity—essentially, a tiny box for light—and blasting them with a laser. The laser’s light normally bounces off the atoms, but as its frequency is adjusted, a transparency window appears in which the light simply passes through the cavity unimpeded.

“We never knew this transparency window existed,” says Caltech’s Andrei Faraon (BS ’04), William L. Valentine Professor of Applied Physics and Electrical Engineering, and co-corresponding author of a paper on the discovery that was published on April 26 in the journal Nature. “Our research has primarily become a journey to find out why.”

An analysis of the transparency window points to it being the result of interactions between groups of atoms and light in the cavity. The phenomenon is akin to destructive interference, in which waves from two or more sources cancel one another out. The groups of atoms continually absorb and re-emit light, which generally reflects the laser’s light. At the CIT frequency, however, the light re-emitted by each of the atoms in a group balances out, and the reflection drops.
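The cancellation can be illustrated with a toy phasor sum (a schematic picture, not the paper's model): two re-emitted fields of equal amplitude and opposite phase add to zero, so nothing is reflected at that frequency.

```python
import numpy as np

# Two re-emitted fields with equal amplitude but opposite phase.
field_a = 1.0 * np.exp(1j * 0.0)
field_b = 1.0 * np.exp(1j * np.pi)

# Reflected intensity is the squared magnitude of the summed fields.
reflected_intensity = abs(field_a + field_b) ** 2
print(reflected_intensity)  # ~0: complete destructive interference
```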

“An ensemble of atoms strongly coupled to the same optical field can lead to unexpected results,” says co-lead author Mi Lei, a graduate student at Caltech.

The optical resonator, which measures just 20 microns in length and includes features smaller than 1 micron, was fabricated at the Kavli Nanoscience Institute at Caltech.

“Through conventional quantum optics measurement techniques, we found that our system had reached an unexplored regime, revealing new physics,” says graduate student Rikuto Fukumori, co-lead author of the paper.

Besides the transparency phenomenon, the researchers also observed that the collection of atoms can absorb and emit light from the laser either much faster or much slower than a single atom, depending on the intensity of the laser. These processes, called superradiance and subradiance, and their underlying physics are still poorly understood because of the large number of interacting quantum particles.

“We were able to monitor and control quantum mechanical light–matter interactions at nanoscale,” says co-corresponding author Joonhee Choi, a former postdoctoral scholar at Caltech who is now an assistant professor at Stanford University.

Though the research is primarily fundamental and expands our understanding of the mysterious world of quantum effects, this discovery has the potential to one day help pave the way to more efficient quantum memories in which information is stored in an ensemble of strongly coupled atoms. Faraon has also worked on creating quantum storage by manipulating the interactions of multiple vanadium atoms.

“Besides memories, these experimental systems provide important insight about developing future connections between quantum computers,” says Manuel Endres, professor of physics and Rosenberg Scholar, who is a co-author of the study.

More information: Mi Lei et al, Many-body cavity quantum electrodynamics with driven inhomogeneous emitters, Nature (2023). DOI: 10.1038/s41586-023-05884-1

Journal information: Nature 

Provided by California Institute of Technology