Two innovative new methods make it possible to measure the electrical resistivity of liquid iron—such as that found in planetary cores—at extremely high pressures and temperatures. Until now, there were no experimental measurements of the electrical resistivity of liquid iron beyond 51 GPa and 2,900 K. These findings will help refine theoretical models of the puzzling properties of liquid iron.
Iron is the most abundant element by mass on Earth. Despite being so common and well-studied, iron still manages to puzzle scientists by exhibiting electric and magnetic behaviors that are not fully understood. In particular, the physical properties of liquid iron—which makes up most of the Earth’s core—have been the subject of much debate among physicists and geoscientists.
The problem is that certain predictions about liquid iron’s properties are difficult to verify experimentally because of the extreme conditions required to probe them. For example, liquid iron’s resistivity, which is the inverse of electrical conductivity, has only been measured up to 51 GPa pressure and 2,900 K temperature. This is because it is challenging to keep the iron sample’s shape and chemical composition intact within current high-pressure apparatuses.
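For orientation, the quantities involved are related by standard definitions (not specific to this experiment): resistivity is the reciprocal of conductivity, and it is obtained from a measured resistance through the sample geometry,

$$\rho = \frac{1}{\sigma} = R\,\frac{A}{L},$$

where $R$ is the measured resistance, $A$ is the cross-sectional area of the conducting path and $L$ is its length. This is why preserving the sample geometry matters so much: if $A$ or $L$ changes uncontrollably during melting, the conversion from measured resistance to resistivity breaks down.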
Against this backdrop, a research team including Associate Professor Kenji Ohta from the Tokyo Institute of Technology has recently achieved a breakthrough by measuring the electrical properties of liquid iron under extreme experimental conditions. As explained in their paper, which was published in Physical Review Letters, this was possible thanks to two new techniques that they developed.
Both techniques involved the use of a diamond anvil cell (DAC) that exerts incredibly high pressure on a sample by compressing it between the flat faces of two opposing diamonds. In the first technique, the researchers used a sapphire capsule to contain the iron sample in the DAC while heating it using a laser and electric current. “The idea was to keep the geometry of the iron sample unchanged during melting and to minimize temperature differences inside the sample,” explains Dr. Ohta.
The second technique involved a radically different approach. Instead of preserving the sample’s shape during the melting process by encapsulating it, the researchers used powerful lasers to “instantly” melt the iron. The goal was to quickly and simultaneously measure millisecond-resolved resistance, X-ray diffraction, and temperature of the molten sample before it had enough time to change its geometry. This innovative strategy enabled the team to measure the resistivity of liquid iron at pressures and temperatures up to 135 GPa and 6,680 K, respectively.
Dr. Ohta says, “Our measurements provide the experimentally constrained resistivity of liquid iron at pressures more than two times higher than those in previous experiments.”
The measurements revealed that the resistivity of liquid iron does not vary much with temperature. Moreover, it follows existing theoretical estimates at higher pressures quite well, including the anomalous decrease around 50 GPa, likely indicative of a gradual magnetic transition. This is important because there are some discrepancies between theoretical predictions and experimental data on the resistivity of liquid iron, especially at pressures below 50 GPa.
Thus, the results of this study will help clarify the origin of these discrepancies and help physicists develop more accurate models and theories about the behavior of iron. In turn, this could lead to a more comprehensive understanding of terrestrial cores, as well as related phenomena such as planetary magnetic fields.
More information: Kenji Ohta et al, Measuring the Electrical Resistivity of Liquid Iron to 1.4 Mbar, Physical Review Letters (2023). DOI: 10.1103/PhysRevLett.130.266301
New measurements of how particles flow from collisions of different types of nuclei at the Relativistic Heavy Ion Collider (RHIC) have provided new insights into the origin of the shape of the hot specks of matter generated in these collisions. The results may lead to a deeper understanding of the properties and dynamics of this form of matter, known as a quark-gluon plasma (QGP).
QGP is a soup of the quarks and gluons that make up the protons and neutrons of atomic nuclei at the heart of all visible matter in the universe. Scientists think the entire universe was filled with QGP just after the Big Bang some 14 billion years ago, before protons and neutrons formed. RHIC, a U.S. Department of Energy Office of Science user facility for nuclear physics research at Brookhaven National Laboratory, creates QGP by colliding the nuclei of atoms at nearly the speed of light. The collisions melt the boundaries of the protons and neutrons, momentarily freeing the quarks and gluons from their confinement within these ordinary nuclear building blocks (collectively called nucleons).
The new analysis of data from RHIC’s STAR detector suggests that the shape of the QGP created in collisions of small nuclei with large ones may be influenced by the substructure of the smaller projectile—that is, the internal arrangement of quarks and gluons inside the protons and neutrons of the smaller nucleus. This is in contrast to publications on data from RHIC’s PHENIX detector, which reported that the QGP shape was determined by the larger-scale positions of the individual nucleons and thus the shapes of the colliding nuclei.
“The question of whether the shape of the QGP is determined by the positions of the nucleons or by their internal structure has been a longstanding inquiry in the field. The recent measurement conducted by the STAR collaboration provides significant clues to help resolve this question,” said Roy Lacey, a professor at Stony Brook University and a principal author of the STAR paper.
As it turns out, the differences in the STAR and PHENIX results may be due to the way the two detectors made their respective measurements, each observing the QGP droplets from a different perspective.
Tracking two-particle correlations
As the STAR collaboration reports in a paper just published in Physical Review Letters, their measurements come from an analysis of particles emerging mostly at the center of their detector, all around the beampipe. By looking at the angles between pairs of particles in this “midrapidity” region, physicists can detect whether there are more particles flowing in particular directions.
“You use one particle to determine the direction and use another to measure the density around it,” said Jiangyong Jia, a physicist at Brookhaven Lab and Stony Brook University. The closer two particles are in angle, the higher the particle density in that direction.
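To make the idea concrete, here is a minimal Python sketch (a toy illustration, not the STAR collaboration’s analysis code) of how flow coefficients such as the elliptic (v2) and triangular (v3) flow discussed below can be extracted from two-particle angular correlations, using the standard relation that the pair-averaged value of cos(nΔφ) approximates the square of v_n. The flow values are exaggerated here to give a clear toy signal.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_event(n_particles, v2, v3, psi2, psi3):
    """Draw angles from dN/dphi ∝ 1 + 2*v2*cos(2(phi-psi2)) + 2*v3*cos(3(phi-psi3))."""
    phi = []
    bound = 1 + 2 * v2 + 2 * v3                      # upper bound for accept-reject
    while len(phi) < n_particles:
        x = rng.uniform(0, 2 * np.pi)
        w = 1 + 2 * v2 * np.cos(2 * (x - psi2)) + 2 * v3 * np.cos(3 * (x - psi3))
        if rng.uniform(0, bound) < w:
            phi.append(x)
    return np.array(phi)

def two_particle_vn(events, n):
    """Estimate v_n from <cos(n * delta_phi)> over all distinct pairs, averaged over events."""
    corr = []
    for phi in events:
        dphi = phi[:, None] - phi[None, :]
        mask = ~np.eye(len(phi), dtype=bool)         # drop self-pairs
        corr.append(np.cos(n * dphi[mask]).mean())
    return np.sqrt(max(np.mean(corr), 0.0))

# Toy "events" with randomly oriented event planes (exaggerated v2 = 0.2, v3 = 0.1).
events = [sample_event(200, 0.2, 0.1,
                       rng.uniform(0, 2 * np.pi), rng.uniform(0, 2 * np.pi))
          for _ in range(500)]

print("estimated v2 =", round(two_particle_vn(events, 2), 3))
print("estimated v3 =", round(two_particle_vn(events, 3), 3))
```

In real data, detector effects and "non-flow" correlations from jets and particle decays also contribute and must be removed, which is a large part of the actual analysis.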
These flow patterns can be established by pressure gradients associated with the shape of QGP. The STAR team analyzed the flow patterns from three different collision systems: single protons colliding with gold nuclei; two-nucleon deuterons (one proton and one neutron) colliding with gold; and three-nucleon helium-3 nuclei (two protons and one neutron) colliding with gold. The data were collected over three separate runs in 2014 (helium), 2015 (protons), and 2016 (deuterons).
The flow results from PHENIX were based on correlations of particles at midrapidity with particles emitted far out in the forward region of their detector. That analysis found that the specks of QGP and the flow patterns established across these three collision systems were associated with the shape of the projectile colliding with the gold nucleus: Spherical protons created circular drops of QGP with uniform flow, elongated two-nucleon deuterons produced elongated drops and elliptical flow patterns, and roughly triangular three-nucleon helium-3 nuclei produced triangular blobs of QGP with a correspondingly stronger triangular flow.
“You could see a clear imprint of the shape of the nucleus on the elliptic and triangular flow measurements from PHENIX,” said James Dunlop, the Associate Chair for Nuclear Physics in the Physics Department at Brookhaven Lab.
In contrast, according to Shengli Huang, a Stony Brook University research scientist who led the STAR analysis, “STAR’s ‘v3’ triangular flow patterns were all the same as one another, no matter which projectile we looked at. It seems that the imprint of the triangular shape of the helium-3 nucleus, producing more pronounced v3 flow patterns than the other two systems, is absent. Our findings indicate that nucleon substructure fluctuations play a more important role in determining the QGP shape than do changes in the number of nucleons and their positions.”
Takahito Todoroki, an assistant professor at the University of Tsukuba, conducted an independent cross-check of the STAR analysis and found the same result.
A question of perspective
“Both sets of measurements from STAR and PHENIX have been rigorously checked by independent teams within both collaborations, and there is no question about the results,” said Dunlop.
Theorists have proposed some explanations.
“While the STAR results can be interpreted as subnucleon fluctuations playing an important role in determining the QGP geometry and smearing out the influence of the triangular shape, and the PHENIX results indicate that the QGP shape is dictated by the nucleon position fluctuations, the experiments are not necessarily inconsistent,” said Brookhaven Lab theorist Bjoern Schenke. “Taking into account the fact that the blob of QGP changes along the longitudinal direction could explain the differences.”
As Jiangyong Jia explained, “When a collision creates QGP, you don’t produce just a slice of QGP; you can imagine it as a cylinder along the beam direction. If you go to the forward end of the cylinder, the geometry might not be the same as if you look right at the middle of this cylinder. There could be a lot of fluctuations along the beam direction.”
Whereas STAR measures at midrapidity, the PHENIX analysis of correlations of particles at midrapidity with longitudinally distant “forward” particles may reflect this longitudinal evolution of the QGP. That difference in perspective may explain the different results.
A recent theoretical analysis led by Schenke found evidence for such longitudinal fluctuations. That work, which also includes subnucleon fluctuations, suggests that longitudinal variation in the QGP could explain at least part of the difference between the STAR and PHENIX v3 results.
“These results underscore the richness of QGP physics, and the importance of comparing results from different detectors,” Dunlop said.
Future analyses
The STAR physicists have a plan to explore these explanations by analyzing additional data from deuteron-gold collisions, collected by STAR in 2021. These measurements made use of upgraded components of STAR installed in the forward region of that detector in the time since the original deuteron-gold data were collected.
“By analyzing these data, we should be able to do both measurements—look at middle-middle particle correlations, and middle-forward correlations—in the same detector,” said Huang.
If the scientists confirm both the results published in this paper and the previous results from PHENIX, it would be clear evidence of the longitudinal fluctuations in the QGP.
In addition, RHIC ran collisions between two beams of oxygen nuclei for part of the 2021 run. Analyzing those data, from collisions between roughly spherical nuclei each made of 16 nucleons, could help disentangle the impact of the subnucleon fluctuations from that of the nuclear shape.
“By adding more nucleons, we dilute the influence of the fluctuations within each nucleon,” Jia said. “We already know that in gold-gold collisions, with 197 nucleons, the subnucleon fluctuations do not influence the flow patterns, but what happens if you pick something that is not so big?”
“Because we have the same collision system (deuteron-gold), now we can repeat the previous PHENIX and STAR measurements in the same experiment with the same collision system. This will allow us to directly quantify how much any observed longitudinal variation is contributing to the difference between the results from STAR and PHENIX.”
More information: M. I. Abdulhamid et al, Measurements of the Elliptic and Triangular Azimuthal Anisotropies in Central ³He+Au, d+Au and p+Au Collisions at √sNN = 200 GeV, Physical Review Letters (2023). DOI: 10.1103/PhysRevLett.130.242301
Imagine the explosive disturbance caused by a jet going supersonic. A similar shock wave occurs when the stream of charged particles known as the “solar wind” flows from the sun and strikes the Earth’s magnetic field. Now, scientists have used a recently developed technique to improve predictions of the timing and intensity of the solar wind’s strikes, which sometimes disrupt telecommunications satellites and damage electrical grids.
The research team—led by James Juno, a staff physicist at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL)—used what is known as “field-particle correlation” (FPC) for a detailed view of how shock waves composed of plasma, the hot, charged state of matter that makes up 99% of the visible universe, can heat up blobs of plasma in outer space.
FPC is an algorithm that digests data produced by spacecraft or computer simulations and determines how energy shifts back and forth between magnetic fields and moving plasma particles. The process gives precise information about the particles’ positions and velocities as the changes occur.
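As a rough illustration of what such a correlation looks like in practice, the sketch below uses a commonly quoted one-dimensional form of the field-particle correlation, in which the electric field is correlated with the velocity derivative of the particle distribution function and averaged over a time window. It is a schematic toy example with invented inputs, not the code or data used by the team.

```python
import numpy as np

def field_particle_correlation(f_vt, E_t, v, q=1.0):
    """Toy field-particle correlation C(v) = < -q * (v**2 / 2) * df/dv * E >_t.

    f_vt : (n_times, n_velocities) distribution function f(v, t)
    E_t  : (n_times,) electric field E(t)
    v    : (n_velocities,) velocity grid
    The sign of C(v) shows where in velocity space particles gain (positive)
    or lose (negative) energy from the field over the averaging window.
    """
    dfdv = np.gradient(f_vt, v, axis=1)                # velocity derivative of f
    integrand = -q * 0.5 * v**2 * dfdv * E_t[:, None]  # correlate with the field
    return integrand.mean(axis=0)                      # average over the time window

# Invented toy data: a Maxwellian whose drift oscillates in phase with the field,
# mimicking particles that are, on average, being pushed by the field.
t = np.linspace(0.0, 10.0, 400)
v = np.linspace(-4.0, 4.0, 201)
drift = 0.05 * np.sin(2 * np.pi * t)
E_t = np.sin(2 * np.pi * t)
f_vt = np.exp(-0.5 * (v[None, :] - drift[:, None]) ** 2)

C = field_particle_correlation(f_vt, E_t, v)
print("net energization (velocity integral of C):", float(np.sum(C) * (v[1] - v[0])))
```

The sign structure of C(v) is what identifies which regions of velocity space are gaining or losing energy, which is the kind of velocity-space detail referred to above.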
“Though much of what we found was already well known, heat transfer has never been examined in this way before,” said Juno, lead author of a paper reporting the results in The Astrophysical Journal. “The fact that this technique works as well as it does shows us that there are new ways to look at old problems.”
PPPL scientists study plasma because it fuels fusion reactions within doughnut-shaped devices known as tokamaks and twisty facilities known as stellarators. Plasma is a soup of atomic nuclei, or ions, electrons, and neutral atoms that together allow the flow of electric current. That flow allows plasma to respond to magnetic fields, which scientists use to corral the plasma within fusion facilities. Researchers are trying to harness fusion, which drives the sun and stars, to generate electricity without producing greenhouse gases and long-lived radioactive waste.
The FPC technique reveals plasma behavior never observed before, including which parts of the plasma are heating up and which physical processes are responsible. Older techniques can produce only broad pictures of where and how a plasma heats up.
“One of our key findings is that different regions of the plasma could respond to a shock wave in different ways,” Juno said. “We now have a more holistic picture of what’s going on in the shock wave interactions.”
“This means that we can discover the energy of a specific particle at a particular position and that a shock wave is transferring energy to a particular plasma region,” Juno said. “Previous methods cannot distinguish between the large variety of processes that are occurring.”
Juno and colleagues performed the FPC technique, which was developed by Gregory Howes at the University of Iowa, on data produced by a plasma simulation program known as Gkeyll (pronounced like Robert Louis Stevenson’s Dr. “Jekyll”). PPPL developed the code to simulate the orbiting of plasma particles around magnetic field lines at the edge of the plasma in tokamak fusion facilities. Scientists are constantly updating the program to extend its capabilities.
“Gkeyll has many components that apply to a range of plasma problems, from space plasmas to laboratory fusion experiments,” said Ammar Hakim, PPPL deputy associate laboratory director for the computational sciences department. “And with new National Science Foundation investment in the Cyberinfrastructure for Sustained Science Innovation program, we are improving Gkeyll to make it a national resource for plasma simulations.”
Scientists can use FPC to glean information from current space voyages like NASA’s Magnetospheric Multiscale (MMS) mission, which employs four satellites flying in formation in the region of space affected by Earth’s magnetic field. “The MMS mission presents an incredible opportunity to study space plasmas using the FPC technique,” said Collin Brown, a researcher at the University of Iowa and co-author of the paper. “That significantly enhances our ability to study the three-dimensional structure of energy transfer.”
Moreover, “it’s one thing to use a supercomputer to reproduce a plasma shock wave,” Juno said. “But the FPC technique can actually help you learn new things about the world. And that’s exciting.”
More information: James Juno et al, Phase-space Energization of Ions in Oblique Shocks, The Astrophysical Journal (2023). DOI: 10.3847/1538-4357/acaf53
Electron tunneling associated with ferritin was proposed as early as 1988, but it is still viewed skeptically despite substantial evidence that it occurs. In our recent paper published in IEEE Transactions on Molecular, Biological and Multi-Scale Communications, my co-authors and I review the evidence of electron tunneling in ferritin, as well as the evidence that such electron tunneling may be used by biological systems that include the retina, the cochlea, macrophages, glial cells, mitochondria and magnetosensory systems.
While these diverse systems fall in different fields of study, we hope that this article will raise awareness of the mechanism of electron tunneling associated with ferritin and encourage further research into that phenomenon in biological systems that incorporate ferritin, particularly where there is no apparent need for the iron storage functions of ferritin in those systems.
A brief history of ferritin and ferritin research
Ferritin is an iron storage protein that self-assembles into a 12-nanometer diameter spherical shell that is 2 nanometers thick, and it can store up to ~4,500 iron atoms in an 8-nanometer diameter core. With an evolutionary history that appears to stretch back more than 1.2 billion years, it might seem rather old, but it should be kept in mind that single-celled organisms are believed to have first evolved ~3.5 billion years ago. As such, it may have taken more than 2 billion years for ferritin to evolve. When the first multicellular organisms evolved ~600 million years ago, members of the ferritin family of proteins were likely present, and they can be found today in almost all plants and animals.
The first suggestion that ferritin might have some quantum mechanical properties was made as early as 1988, 88 years after the discovery of quantum mechanics and eight years after the discovery of quantum dots, semiconductor nanoparticles that behave like artificial atoms and which are similar in size to ferritin. The quantum mechanical properties include magnetic behavior that arises from the way iron forms crystalline structures in the core of the ferritin shell, and electron tunneling.
Subsequent studies discussed in the paper provide substantial evidence that such quantum mechanical properties exist. However, those properties in this billion-year-old biostructure appear to have been mostly considered to be a curiosity or artifact, and not a quantum biological agent. Quantum biology as a study has been viewed with skepticism by biologists and many other scientists (although many of the scientists who discovered quantum mechanics more than 100 years ago believed it could be applicable to biology), but it is a growing field with research being conducted at many of the top universities, such as Caltech, Yale, the University of Chicago and UCLA.
What is electron tunneling?
Quantum mechanics proposes that the physical properties of electrons, protons, neutrons and other things referred to as subatomic “particles” are defined in terms of probability waves. Experimental evidence of the wave-like behavior of these particles has been obtained and is generally accepted. In those experiments, the wave describes the probability of detecting a physical property of the particle at a particular location in time and space; the detection event is sometimes referred to as “collapse” of the wave function.
However, nothing changes about the particle when the wave function collapses, other than the behavior of the wave function itself. When the wave function evolves in accordance with the Schrödinger equation, it may be called “coherent,” and when it interacts with other particles and no longer evolves that way, it may be called “incoherent.”
The spatial wave-like properties of electrons in a vacuum can have a wavelength of around 5 nanometers at room temperature, which is significant for molecular interactions. Electrons can move between molecules when they “touch” each other (recognizing that the wave functions of the atoms and subatomic particles in the molecules are what is actually interacting), which may be referred to as adiabatic or classical behavior. Under the right conditions, however, an electron can “tunnel” between molecules, meaning it can appear to move from one molecule to another in a way that is not permitted by adiabatic or classical behavior. There is nothing mysterious about this; it is simply a physical property of electrons, but because the wave function is a probability wave, it can seem mysterious.
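For a quantitative sense of why distance matters so much here, the textbook estimate for tunneling through a simple rectangular barrier (a generic result, not a model of ferritin) is

$$T \approx e^{-2\kappa d}, \qquad \kappa = \frac{\sqrt{2m(V_0 - E)}}{\hbar},$$

where $d$ is the barrier width, $V_0 - E$ is the height of the barrier above the electron’s energy, $m$ is the electron mass and $\hbar$ is the reduced Planck constant. Because the probability falls off exponentially with $d$, tunneling over distances of many nanometers, as discussed below for ferritin, is considered unusually long-range.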
Some of my co-authors have shown that electrons appear to tunnel over distances of up to 12 nm through ferritin in sequential tunneling events, and that the unusual magnetic properties of the core materials of the ferritin might be associated with this unusually long electron tunneling distance. That work has been based on what are referred to as “solid state” experiments, which do not involve living biological systems. Because electron tunneling cannot be directly observed, it has to be inferred from other evidence such as measured currents and voltages. In biological systems, it can be more difficult to obtain evidence of such electron tunneling, but it is not impossible.
Electron tunneling in biological systems containing ferritin
There are several proposed cellular functions associated with electron tunneling in ferritin. The first is electron storage. In laboratory tests outside of cells, which are sometimes referred to as “in vitro” from the Latin for “in glass,” ferritin in aqueous solution has been shown to store electrons for several hours. This is unusual, because it would be expected that the iron stored inside the ferritin would be released as soon as an electron is received, but that does not happen quickly. This observation indicates that electrons are not readily conducted through the insulating protein shell by classical conduction, and that instead they move electrochemically or by tunneling.
Evidence also indicates that electrons can tunnel distances of up to 8 nanometers in a single tunneling event through the ferritin in solid state tests, so it is possible that the electrons stored inside the ferritin core can tunnel to molecules outside of the 2-nanometer-thick protein shell, such as free radicals that have energy levels that allow them to receive electrons. These free radicals can take electrons from other molecules and cause cellular damage, and neutralizing free radicals by donating an electron is one of the functions of antioxidants.
Ferritin interacts with antioxidants like ascorbic acid (known more commonly as vitamin C) in a cellular environment in a way that stabilizes the stored iron, and it is also overexpressed in response to free radicals. If ferritin is able to store electrons from antioxidants to make them available to free radicals through electron tunneling, it could improve the efficiency of that neutralization reaction by allowing the electrons to reach free radicals that are farther away and by storing the electrons until they are needed.
If the only function of ferritin were iron storage, this overexpression would make no sense in situations where the source of the free radicals, inflammation and reactive oxygen species (ROS) is not excess iron, which is often the case. The complexity of the way that cells use iron, known as iron homeostasis, has made it difficult to identify electron tunneling associated with ferritin.
Another proposed quantum biological function for electron tunneling in ferritin is electron transport across cellular distances. In a type of cell called an M2 macrophage, ferritin can form somewhat regularly ordered structures that the macrophage appears to use to deliver ferritin to a cell it is aiding. For example, macrophages are associated with the increased ferritin levels seen in some cancers and appear to help the cancer cells neutralize inflammation.
Antioxidants may also help some cancer cells to survive by providing electrons to neutralize free radicals and ROS, but in the absence of antioxidants in those cells, is it possible for electrons to tunnel through the ferritin structures in M2 macrophages into ferritin in other cells? Evidence of that function also exists.
In small angle neutron scattering (SANS) tests performed by Dr. Olga Mykhaylyk on placental tissue that included macrophages, increased neutron scattering was measured that was absent in bulk ferritin extracted from the tissues. Neutron scattering can occur in solids that contain nanoparticles with aligned magnetic moments, and these tests indicate that the ferritin in placental tissue with macrophages has aligned magnetic moments.
SANS tests were also conducted on self-assembled monolayers (SAMs) of ferritin by Prof. Heinz Nakotte that demonstrated neutron scattering, and tests that I conducted with Prof. Cai Shen showed that self-assembled multilayers of ferritin similar to those in M2 macrophages were not only able to conduct electrons over distances as great as 80 microns in vitro but were also able to route those electrons using a physical mechanism known as a Coulomb blockade.
Routing electrons to ferritin where they are needed for the elimination of free radicals, inflammation and ROS in cells is another proposed quantum biological function, but because electron tunneling cannot be directly observed, further research is needed to investigate that hypothesis.
Conclusion and next steps
This new paper in IEEE Transactions provides more details on how these building-block electron tunneling functions could be used in different biological systems that contain ferritin, but it will be up to researchers in the fields that study those biological systems to design tests and to investigate whether electron tunneling is occurring.
Many researchers in biology do not understand electron tunneling and are skeptical of quantum biology, so it might take decades before these questions are answered and used to develop treatments for cancer, blindness, deafness and other maladies. Hopefully, this paper will help to raise awareness and foster additional research into whether and how biological systems utilize the proven phenomenon of electron tunneling in ferritin.
This story is part of Science X Dialog, where researchers can report findings from their published research articles.
More information: Ismael Diez Perez et al, Electron Tunneling in Ferritin and Associated Biosystems, IEEE Transactions on Molecular, Biological and Multi-Scale Communications (2023). DOI: 10.1109/TMBMC.2023.3275793
Bio: Chris Rourk is a citizen scientist and patent attorney who has been conducting research into quantum biological processes that use electron tunneling associated with ferritin since 2017. He is a former research scientist and received his B.S.E.E. in 1985 and M.Eng. in 1987.
Quantum computing could revolutionize our world. For specific and crucial tasks, it promises to be exponentially faster than the zero-or-one binary technology that underlies today’s machines, from supercomputers in laboratories to smartphones in our pockets. But developing quantum computers hinges on building a stable network of qubits—or quantum bits—to store information, access it and perform computations.
Yet the qubit platforms unveiled to date have a common problem: They tend to be delicate and vulnerable to outside disturbances. Even a stray photon can cause trouble. Developing fault-tolerant qubits—which would be immune to external perturbations—could be the ultimate solution to this challenge.
A team led by scientists and engineers at the University of Washington has announced a significant advancement in this quest. In a pair of papers published June 14 in Nature and June 22 in Science, the researchers report that in experiments with flakes of semiconductor materials—each only a single layer of atoms thick—they detected signatures of “fractional quantum anomalous Hall” (FQAH) states.
The team’s discoveries mark a first and promising step in constructing a type of fault-tolerant qubit because FQAH states can host anyons—strange “quasiparticles” that have only a fraction of an electron’s charge. Some types of anyons can be used to make what are called “topologically protected” qubits, which are stable against any small, local disturbances.
“This really establishes a new paradigm for studying quantum physics with fractional excitations in the future,” said Xiaodong Xu, the lead researcher behind these discoveries, who is also the Boeing Distinguished Professor of Physics and a professor of materials science and engineering at the UW.
FQAH states are related to the fractional quantum Hall state, an exotic phase of matter that exists in two-dimensional systems. In these states, electrical conductivity is constrained to precise fractions of a constant known as the conductance quantum. But fractional quantum Hall systems typically require massive magnetic fields to keep them stable, making them impractical for applications in quantum computing. The FQAH state has no such requirement—it is stable even “at zero magnetic field,” according to the team.
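In the usual notation, the quantization mentioned above refers to the Hall conductivity of these two-dimensional systems,

$$\sigma_{xy} = \nu\,\frac{e^2}{h},$$

where $e$ is the electron charge, $h$ is Planck’s constant and the filling factor $\nu$ is an integer in the integer quantum Hall effect and a simple fraction (such as 1/3 or 2/5) in the fractional case.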
Hosting such an exotic phase of matter required the researchers to build an artificial lattice with exotic properties. They stacked two atomically thin flakes of the semiconductor material molybdenum ditelluride (MoTe2) at small, mutual “twist” angles relative to one another. This configuration formed a synthetic “honeycomb lattice” for electrons.
When researchers cooled the stacked slices to a few degrees above absolute zero, an intrinsic magnetism arose in the system. The intrinsic magnetism takes the place of the strong magnetic field typically required for the fractional quantum Hall state. Using lasers as probes, the researchers detected signatures of the FQAH effect, a major step forward in unlocking the power of anyons for quantum computing.
The team—which also includes scientists at the University of Hong Kong, the National Institute for Materials Science in Japan, Boston College and the Massachusetts Institute of Technology—envisions their system as a powerful platform to develop a deeper understanding of anyons, which have very different properties from everyday particles like electrons.
Anyons are quasiparticles—or particle-like “excitations”—that can act as fractions of an electron. In future work with their experimental system, the researchers hope to discover an even more exotic version of this type of quasiparticle: “non-Abelian” anyons, which could be used as topological qubits. Wrapping—or “braiding”—the non-Abelian anyons around each other can generate an entangled quantum state. In this quantum state, information is essentially “spread out” over the entire system and resistant to local disturbances—forming the basis of topological qubits and a major advancement over the capabilities of current quantum computers.
“This type of topological qubit would be fundamentally different from those that can be created now,” said UW physics doctoral student Eric Anderson, who is lead author of the Science paper and co-lead author of the Nature paper. “The strange behavior of non-Abelian anyons would make them much more robust as a quantum computing platform.”
Three key properties, all of which existed simultaneously in the researchers’ experimental setup, allowed FQAH states to emerge:
Magnetism: Though MoTe2 is not a magnetic material, when they loaded the system with positive charges, a “spontaneous spin order”—a form of magnetism called ferromagnetism—emerged.
Topology: Electrical charges within their system have “twisted bands,” similar to a Möbius strip, which helps make the system topological.
Interactions: The charges within their experimental system interact strongly enough to stabilize the FQAH state.
The team hopes that non-Abelian anyons await discovery via this new approach.
“The observed signatures of the fractional quantum anomalous Hall effect are inspiring,” said UW physics doctoral student Jiaqi Cai, co-lead author on the Nature paper and co-author of the Science paper. “The fruitful quantum states in the system can be a laboratory-on-a-chip for discovering new physics in two dimensions, and also new devices for quantum applications.”
“Our work provides clear evidence of the long-sought FQAH states,” said Xu, who is also a member of the Molecular Engineering and Sciences Institute, the Institute for Nano-Engineered Systems and the Clean Energy Institute, all at UW. “We are currently working on electrical transport measurements, which could provide direct and unambiguous evidence of fractional excitations at zero magnetic field.”
The team believes that with their approach, investigating and manipulating these unusual FQAH states can become commonplace—accelerating the quantum computing journey.
More information: Jiaqi Cai et al, Signatures of Fractional Quantum Anomalous Hall States in Twisted MoTe2, Nature (2023). DOI: 10.1038/s41586-023-06289-w
Eric Anderson et al, Programming correlated magnetic states with gate-controlled moiré geometry, Science (2023). DOI: 10.1126/science.adg4268
As a macroscopic quantum state of matter, superconductivity has attracted tremendous attention in the field of scientific research and industry over the past century. According to the BCS (Bardeen-Cooper-Schrieffer) microscopic theory, superconductivity arises from the condensation of coherent Cooper pairs, and each Cooper pair is formed by two electrons with opposite spins and momenta.
Theoretically, when time-reversal symmetry is broken, Cooper pairs may acquire a finite momentum and exhibit a spatially modulated superconducting order parameter, which is known as the Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) state. Although the FFLO state was theoretically proposed in 1964, it has proven challenging to observe because of the stringent requirements it places on materials. To date, direct evidence of the FFLO state, such as a modulation of the superconducting order parameter in real space, has not been detected experimentally.
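In the original proposals, the modulated order parameter takes one of two simple forms, quoted here as standard textbook expressions for context:

$$\Delta(\mathbf{r}) = \Delta_0\, e^{i\mathbf{q}\cdot\mathbf{r}} \ \ \text{(Fulde-Ferrell)}, \qquad \Delta(\mathbf{r}) = \Delta_0 \cos(\mathbf{q}\cdot\mathbf{r}) \ \ \text{(Larkin-Ovchinnikov)},$$

where $\mathbf{q}$ is the finite center-of-mass momentum of the Cooper pairs. A real-space modulation of this kind is exactly the sort of direct evidence referred to above.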
To understand the observed two-dimensional (2D) superconducting properties in cuprates, some theoretical works predicted that the finite-momentum Cooper pairs can exist in strong-coupling systems without breaking time-reversal symmetry and show the spatial modulation of Cooper-pair density. This extraordinary superconducting state, referred to as the pair density wave (PDW), has sparked numerous theoretical investigations due to the potential connection between the PDW and unconventional superconductivity.
Among various theoretical hypotheses, the most intriguing one is that the PDW is another principal state along with d-wave superconductivity in the phase diagram of cuprates, which provides new insights into the complex intertwined orders of the cuprates showing high-temperature superconductivity. Moreover, according to some theoretical proposals, the enigmatic pseudogap phase of cuprates can be attributed to the PDW state, further indicating the potential importance of PDW.
However, experimental evidence of the PDW state in high-temperature (high-Tc) superconductors has only been found in some cuprates. The existence of a PDW state in iron-based superconductors, another high-Tc superconductor family, has never been experimentally detected. Furthermore, early theoretical studies of cuprates proposed that the PDW is a low-dimensional stripe order in 2D systems, but no compelling experimental evidence of the PDW in 2D systems has been reported so far.
Recently, Prof. Jian Wang’s group at Peking University, in collaboration with Prof. Ziqiang Wang at Boston College and Prof. Yi Zhang at Shanghai University, discovered the primary pair density wave state in a 2D iron-based high-Tc superconductor, which provides a new 2D high-Tc platform to investigate the PDW in unconventional superconductors. Their paper is published in the journal Nature.
By using the molecular beam epitaxy (MBE) technique, Jian Wang’s group successfully grew large-area, high-quality one-unit-cell-thick Fe(Te,Se) films on SrTiO3(001) substrates (1-UC Fe(Te,Se)/STO), which show a superconducting gap as large as 18 meV, much larger than that (~1.8 meV) of bulk Fe(Te,Se), a promising topological superconductor candidate.
Previously, the Jian Wang group and collaborators discovered zero-energy excitations at both ends of 1D atomic line defects in 1-UC Fe(Te,Se)/STO, which are found to be consistent with the Majorana zero mode interpretation (Nat. Phys. 16, 536-540 (2020)). In the current work, another atomic structure in 1-UC Fe(Te,Se)/STO, the innate domain wall, where the atomic lattice is compressed along the Fe-Fe direction across the domain wall (Fig. 1a–d), was investigated by in situ low-temperature (4.3 K) scanning tunneling microscopy/spectroscopy (STM/STS). Within the domain wall area, clear spatial modulation of the local density of states (LDOS) is detected (Fig. 1e–f).
By performing a 2D lock-in analysis (Fig. 1g–h), a modulation period of 3.6aFe (where aFe is the distance between neighboring Fe atoms) is determined. Further bias-voltage-dependent measurements show that the period of the LDOS modulation is independent of energy, demonstrating that the modulation originates from an electronic order. Furthermore, the electronic-order-induced LDOS modulations mainly exist at energies within the superconducting gap, indicating that the charge order is potentially related to the superconductivity of 1-UC Fe(Te,Se)/STO.
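Conceptually, a lock-in analysis of this kind isolates a modulation wavevector in the Fourier transform of the LDOS maps. The following Python sketch, a simplified illustration on synthetic data rather than the authors’ processing pipeline, shows how a period such as 3.6aFe can be read off from the dominant Fourier peak.

```python
import numpy as np

# Synthetic LDOS map: a modulation with period 3.6 lattice spacings along x,
# plus noise, sampled at 10 pixels per Fe-Fe spacing a_Fe.
pix_per_a = 10
nx, ny = 360, 120                         # 36 a_Fe x 12 a_Fe field of view
x = np.arange(nx) / pix_per_a             # position in units of a_Fe
period_true = 3.6                         # modulation period in units of a_Fe
rng = np.random.default_rng(1)
ldos = (1.0 + 0.2 * np.cos(2 * np.pi * x / period_true)[None, :]
        + 0.05 * rng.standard_normal((ny, nx)))

# Fourier transform and locate the strongest non-zero wavevector.
F = np.fft.fft2(ldos - ldos.mean())
power = np.abs(F) ** 2
qy = np.fft.fftfreq(ny, d=1 / pix_per_a)  # wavevectors in cycles per a_Fe
qx = np.fft.fftfreq(nx, d=1 / pix_per_a)
iy, ix = np.unravel_index(np.argmax(power), power.shape)
q_mag = np.hypot(qy[iy], qx[ix])          # |q| of the dominant peak
print("recovered modulation period: %.2f a_Fe" % (1.0 / q_mag))
```

A real analysis on measured dI/dV maps typically also involves drift correction and careful windowing, but the underlying idea is the same.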
By performing further STS measurements, spatial modulations of the superconducting coherence peak height (Fig. 2a–c) and gap energy (Fig. 2d–f) are detected at the domain wall. Previous studies have reported the strong correlation between these two physical quantities and the superconducting order parameter. Therefore, the spatial modulation of the superconducting order parameter is directly observed in real space, which provides compelling evidence of the existence of PDW order in the 2D iron-based high-temperature superconductor.
Apart from the PDW state, a charge density wave (CDW) state with a period of about 1.8aFe (half of the period of PDW) is also detected at the domain wall. Fig. 3e and 3f show the phase map of the PDW (period ~ 3.6aFe) and CDW state (period ~ 1.8aFe) at the domain wall, in which vortices with 2π phase winding in the CDW phase (black dots in Fig. 3e) and π-phase shifts in the PDW phase (arrows in Fig. 3f) can be identified. It is clear that the π-phase shifts in the PDW phase are observed near the vortices of the CDW, which is consistent with the theoretical scenario of a primary PDW and PDW-induced secondary CDW (Fig. 3a-d). Therefore, the PDW state observed at the domain wall of 1-UC Fe(Te,Se)/STO is a primary state.
To explain the mechanism of the primary PDW state at the domain wall, Prof. Ziqiang Wang and Prof. Yi Zhang proposed a novel triplet equal-spin pairing model. At the domain wall, the broken inversion and mirror symmetries introduce Rashba and Dresselhaus spin-orbit couplings (SOC). Due to the large SOC, electrons with equal spin can pair across the Fermi points of the SOC-split bands, leading to a primary PDW state with finite-momentum Cooper pairs. Theoretical calculations based on the equal-spin pairing model reproduce the spatial modulation of the LDOS and the superconducting gap, consistent with the experimental results, and reveal the possible existence of topological spin-triplet superconducting order parameters.
The research team led by academician Guo Guangcan and Prof. Ren Xifeng from the University of Science and Technology of China (USTC) of the Chinese Academy of Sciences (CAS) achieved quantum photonic sources at cryogenic temperatures based on the spontaneous four-wave mixing (SFWM) effect. The results are published in Optica.
Quantum photonic integrated circuits (QPICs), with their extremely high phase stability and reconfigurability, are powerful platforms that provide key building blocks for a wide range of quantum information applications, and they are becoming possible candidates for interfacing different quantum systems through hybrid integration techniques.
Current investigations on QPICs mainly focus on operation at ambient temperatures, while many quantum components have to operate under cryogenic conditions. Additionally, quantum technologies must be mutually compatible for the sake of scalable photonic quantum computing and interfacing among different quantum systems. Therefore, QPICs that are designed at room temperature, especially those involving nonlinear processes, should be able to work in cryogenic environments.
The researchers set their eyes on the SFWM effect because of its outstanding performance in a variety of nonlinear processes and quantum applications. They made a breakthrough by studying the SFWM effect in an integrated silicon waveguide under cryogenic operating conditions, showing that the effect still performs well for generating quantum photonic sources.
Then, the researchers investigated the noise involved in preparing the photon-pair source; cryogenic photon pairs with a bandwidth of ~2 THz were generated and experimentally verified.
Finally, with the help of a Michelson interferometer, they studied frequency-multiplexed energy-time entangled states.
By preparing cryogenic integrated quantum entangled light sources, the USTC researchers demonstrated an important element of cryogenic nonlinear photonics. The results will benefit integrated, scalable quantum information applications. As the reviewers of Optica stated, this paper provides useful insights into the study of integrated quantum optics in cryogenic environments.
More information: Lan-Tian Feng et al, Entanglement generation using cryogenic integrated four-wave mixing, Optica (2023). DOI: 10.1364/OPTICA.476712
Scientists using one of the world’s most powerful quantum microscopes have made a discovery that could have significant consequences for the future of computing.
Researchers at the Macroscopic Quantum Matter Group laboratory in University College Cork (UCC) have discovered a spatially modulating superconducting state in a new and unusual superconductor, uranium ditelluride (UTe2). This new superconductor may provide a solution to one of quantum computing’s greatest challenges. Their findings have been published in Nature.
Lead author Joe Carroll, a Ph.D. researcher working with UCC Prof. of Quantum Physics Séamus Davis, explains the subject of the paper.
“Superconductors are amazing materials which have many strange and unusual properties. Most famously they allow electricity to flow with zero resistance. That is, if you pass a current through them they don’t start to heat up; in fact, they don’t dissipate any energy despite carrying a huge current. They can do this because instead of individual electrons moving through the metal we have pairs of electrons which bind together. These pairs of electrons together form a macroscopic quantum mechanical fluid.”
“What our team found was that some of the electron pairs form a new crystal structure embedded in this background fluid. These types of states were first discovered by our group in 2016 and are now called Electron Pair-Density Waves. These Pair Density Waves are a new form of superconducting matter the properties of which we are still discovering.”
“What is particularly exciting for us and the wider community is that UTe2 appears to be a new type of superconductor. Physicists have been searching for a material like it for nearly 40 years. The pairs of electrons appear to have intrinsic angular momentum. If this is true, then what we have detected is the first Pair-Density Wave composed of these exotic pairs of electrons.”
When asked about the practical implications of this work Mr. Carroll explained, “There are indications that UTe2 is a special type of superconductor that could have huge consequences for quantum computing.”
“Typical, classical, computers use bits to store and manipulate information. Quantum computers rely on quantum bits or qubits to do the same. The problem facing existing quantum computers is that each qubit must be in a superposition with two different energies—just as Schrödinger’s cat could be called both ‘dead’ and ‘alive.’ This quantum state is very easily destroyed by collapsing into the lowest energy state—’dead’—thereby cutting off any useful computation.”
“This places huge limits on the application of quantum computers. However, since its discovery five years ago there has been a huge amount of research on UTe2, with evidence pointing to it being a superconductor which may be used as a basis for topological quantum computing. In such materials there is no limit on the lifetime of the qubit during computation, opening up many new ways for more stable and useful quantum computers. In fact, Microsoft have already invested billions of dollars into topological quantum computing, so this is a well-established theoretical science already,” he said.
“What the community has been searching for is a relevant topological superconductor; UTe2 appears to be that.”
“What we’ve discovered then provides another piece to the puzzle of UTe2. To make applications using materials like this we must understand their fundamental superconducting properties. All of modern science moves step by step. We are delighted to have contributed to the understanding of a material which could bring us closer to much more practical quantum computers.”
Professor John F. Cryan, Vice President for Research and Innovation, said, “This important discovery will have significant consequences for the future of quantum computing. In the coming weeks, the University will launch UCC Futures—Future Quantum and Photonics, and research led by Professor Séamus Davis and the Macroscopic Quantum Matter Group, with the use of one of the world’s most powerful microscopes, will play a crucial role in this exciting initiative.”
Engineers from Rice University and the University of Maryland have created full-motion video technology that could potentially be used to make cameras that peer through fog, smoke, driving rain, murky water, skin, bone and other media that reflect scattered light and obscure objects from view.
“Imaging through scattering media is the ‘holy grail problem’ in optical imaging at this point,” said Rice’s Ashok Veeraraghavan, co-corresponding author of an open-access study published today in Science Advances. “Scattering is what makes light—which has lower wavelength, and therefore gives much better spatial resolution—unusable in many, many scenarios. If you can undo the effects of scattering, then imaging just goes so much further.”
Veeraraghavan’s lab collaborated with the research group of Maryland co-corresponding author Christopher Metzler to create a technology they named NeuWS, which is an acronym for “neural wavefront shaping,” the technology’s core technique.
“If you ask people who are working on autonomous driving vehicles about the biggest challenges they face, they’ll say, ‘Bad weather. We can’t do good imaging in bad weather,’” Veeraraghavan said. “They are saying ‘bad weather,’ but what they mean, in technical terms, is light scattering.”
“If you ask biologists about the biggest challenges in microscopy, they’ll say, ‘We can’t image deep tissue in vivo.’ They’re saying ‘deep tissue’ and ‘in vivo,’ but what they actually mean is that skin and other layers of tissue they want to see through are scattering light. If you ask underwater photographers about their biggest challenge, they’ll say, ‘I can only image things that are close to me.’ What they actually mean is light scatters in water, and therefore doesn’t go deep enough for them to focus on things that are far away. In all of these circumstances, and others, the real technical problem is scattering.”
Veeraraghavan said NeuWS could potentially be used to overcome scattering in those scenarios and others.
“This is a big step forward for us, in terms of solving this in a way that’s potentially practical,” he said. “There’s a lot of work to be done before we can actually build prototypes in each of those application domains, but the approach we have demonstrated could traverse them.”
Conceptually, NeuWS is based on the principle that light waves are complex mathematical quantities with two key properties that can be computed for any given location. The first, magnitude, is the amount of energy the wave carries at the location, and the second, phase, is the wave’s state of oscillation at the location. Metzler and Veeraraghavan said measuring phase is critical for overcoming scattering, but it is impractical to measure directly because of the high frequency of optical light.
So they instead measure incoming light as “wavefronts”—single measurements that contain both phase and intensity information—and use backend processing to rapidly decipher phase information from several hundred wavefront measurements per second.
“The technical challenge is finding a way to rapidly measure phase information,” said Metzler, an assistant professor of computer science at Maryland and a “triple Owl” Rice alum who earned his Ph.D., master’s and bachelor’s degrees in electrical and computer engineering from Rice in 2019, 2014 and 2013, respectively. Metzler was at Rice University during the development of an earlier iteration of wavefront-processing technology called WISH that Veeraraghavan and colleagues published in 2020.
“WISH tackled the same problem, but it worked under the assumption that everything was static and nice,” Veeraraghavan said. “In the real world, of course, things change all of the time.”
With NeuWS, he said, the idea is to not only undo the effects of scattering, but to undo them fast enough so the scattering media itself doesn’t change during the measurement.
“Instead of measuring the state of oscillation itself, you measure its correlation with known wavefronts,” Veeraraghavan said. “You take a known wavefront, you interfere that with the unknown wavefront and you measure the interference pattern produced by the two. That is the correlation between those two wavefronts.”
Metzler used the analogy of looking at the North Star at night through a haze of clouds. “If I know what the North Star is supposed to look like, and I can tell it is blurred in a particular way, then that tells me how everything else will be blurred.”
Veeraraghavan said, “It’s not a comparison, it’s a correlation, and if you measure at least three such correlations, you can uniquely recover the unknown wavefront.”
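The arithmetic behind "at least three such correlations" is the same as in classical phase-shifting interferometry. The sketch below, a schematic Python illustration under idealized assumptions (unit-amplitude fields and a perfect flat reference) rather than the NeuWS implementation, interferes an unknown field with a reference at three known phase shifts and recovers the unknown phase from the three recorded intensity images.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64                                            # image size (n x n pixels)

# Unknown field: unit amplitude with a smooth random phase, standing in for the
# combined effect of the scene and the scattering medium.
phase_true = np.cumsum(np.cumsum(rng.standard_normal((n, n)), axis=0), axis=1)
phase_true = np.angle(np.exp(1j * 0.02 * phase_true))      # wrap to (-pi, pi]
u = np.exp(1j * phase_true)

# Interfere with a flat reference at three known phase shifts; record only the
# intensities, which is all a camera can measure.
shifts = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
intensities = [np.abs(u + np.exp(1j * s)) ** 2 for s in shifts]

# Recovery: for equally spaced shifts, sum_k I_k * exp(i*theta_k) ∝ exp(i*phi).
acc = sum(I * np.exp(1j * s) for I, s in zip(intensities, shifts))
phase_est = np.angle(acc)

err = np.angle(np.exp(1j * (phase_est - phase_true)))       # wrapped phase error
print("max phase error (radians):", float(np.abs(err).max()))
```

With noisy measurements and a scattering medium that changes over time, more measurements and the learned representation described below are needed, which is where NeuWS departs from this textbook recipe.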
State-of-the-art spatial light modulators can make several hundred such measurements per second, and Veeraraghavan, Metzler and colleagues showed they could use a modulator and their computational method to capture video of moving objects that were obscured from view by intervening scattering media.
“This is the first step, the proof of principle that this technology can correct for light scattering in real time,” said Rice’s Haiyun Guo, one of the study’s lead authors and a Ph.D. student in Veeraraghavan’s research group.
In one set of experiments, for example, a microscope slide containing a printed image of an owl or a turtle was spun on a spindle and filmed by an overhead camera. Light-scattering media were placed between the camera and the target slide, and the researchers measured NeuWS’ ability to correct for light scattering. Examples of scattering media included onion skin, slides coated with nail polish, slices of chicken breast tissue and light-diffusing films. For each of these, the experiments showed NeuWS could correct for light scattering and produce clear video of the spinning figures.
“We developed algorithms that allow us to continuously estimate both the scattering and the scene,” Metzler said. “That’s what allows us to do this, and we do it with mathematical machinery called neural representation that allows it to be both efficient and fast.”
NeuWS rapidly modulates light from incoming wavefronts to create several slightly altered phase measurements. The altered phases are then fed directly into a 16,000-parameter neural network that quickly computes the necessary correlations to recover the wavefront’s original phase information.
“The neural networks allow it to be faster by allowing us to design algorithms that require fewer measurements,” Veeraraghavan said.
Metzler said, “That’s actually the biggest selling point. Fewer measurements, basically, means we need much less capture time. It’s what allows us to capture video rather than still frames.”
More information: Brandon Y. Feng et al, NeuWS: Neural wavefront shaping for guidestar-free imaging through static and dynamic scattering media, Science Advances (2023). DOI: 10.1126/sciadv.adg4671
Optical devices and materials allow scientists and engineers to harness light for research and real-world applications, like sensing and microscopy. Federico Capasso’s group at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) has dedicated years to inventing more powerful and sophisticated optical methods and tools. Now, his team has developed new techniques to exert control over points of darkness, rather than light, using metasurfaces.
“Dark regions in electromagnetic fields, or optical singularities, have traditionally posed a challenge due to their complex structures and the difficulty in shaping and sculpting them. These singularities, however, carry the potential for groundbreaking applications in fields such as remote sensing and precision measurement,” said Capasso, the Robert L. Wallace Professor of Applied Physics and Vinton Hayes Senior Research Fellow in Electrical Engineering at SEAS and senior corresponding author on two new papers describing the work.
In 2011, Capasso’s lab introduced metasurfaces, or sub-wavelength-spaced arrays of nanostructures. In 2016, they used metasurfaces to build high-performance metalenses—flat optical lenses comprising nanopillars that they fabricated using semiconductor lithography techniques—which unlocked a new strategy to focus light using extremely lightweight devices.
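For context, the focusing action of a metalens is usually described by the hyperbolic phase profile the nanopillars must impart, a standard expression not specific to the devices discussed here:

$$\varphi(r) = -\frac{2\pi}{\lambda}\left(\sqrt{r^2 + f^2} - f\right),$$

where $r$ is the radial distance from the lens center, $f$ is the focal length and $\lambda$ is the design wavelength. The diameter of each nanopillar is chosen so that the metasurface imposes the required phase at its location.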
The newest studies from the Capasso group—published in Nature Communications and Science Advances—report how metasurface technology can harness not just light, but also darkness.
“Both of these studies introduce new classes of optical singularities—regions of designed darkness—using powerful but intuitive algorithms to inform the fabrication of metasurfaces,” said Soon Wei Daniel Lim, co-first author of the paper in Nature Communications with Joon-Suh Park.
In that study, Lim and collaborators designed and fabricated an optical device containing metasurfaces of titanium dioxide nanopillars that can control light to create an array of optical singularities.
To control exactly where these points of darkness appear, Lim used a computer algorithm to help him reverse engineer the design of the metasurface.
“I told the computer: Here’s what I want to achieve in terms of dark spots, tell me what shape and diameter the nanopillars should be on this metasurface to make this happen,” he said.
As light travels through the metasurface and lens, it generates a prescribed array of dark spots.
“These dark spots are exciting because they could be used as optical traps to capture atoms,” Lim said. “It’s possible this could be used to simplify the optical architecture used in atomic physics labs, replacing today’s conventional equipment—instruments that take up 30 feet of space on a lab table—with compact, lightweight optical devices.”
Dark spots aren’t just handy for trapping atoms. They can also be useful as highly precise reference positions for imaging.
“Points of darkness are much smaller than points of light,” Lim said. “As part of an imaging system, that makes them effective points of measurement to accurately discriminate between two different positions within a sample.”
In their Science Advances paper, the Capasso group described a new class of optical singularities: extremely stable points of darkness in a polarized optical field, known as polarization singularities.
“We’ve designed points of darkness that can withstand a wide range of perturbations—they are topologically protected,” said Christina Spaegele, first author of the paper. “This robustness opens the way to optical devices with high reliability and durability in various applications.”
Previous research achieved some polarization singularities, but the conditions for maintaining that perfect spot of darkness were extremely fragile, making them easily destroyed by stray light or other environmental conditions.
“By shining light through a specially-designed metasurface and focusing lens, we can produce an unwavering polarization singularity surrounded entirely by points of light—essentially creating a dark spot inside a sphere of brightness,” Spaegele said.
The technique is so robust that even introducing a defect to the metasurface doesn’t destroy the dark spot, but simply shifts its position.
“This degree of control could be especially useful for imaging samples in ‘hostile’ environments, where vibrations, pressure, temperature, and stray light would typically interfere with imaging behavior,” Spaegele said.
The team says these new developments in optical singularities have implications for remote sensing and covert detection.
“Points of darkness could be used to mask out bright sources while imaging a scene, allowing us to see faint objects that are otherwise overshadowed,” Capasso said. “Objects or detectors placed at these dark positions will also not give away their position by scattering light, allowing them to be ‘hidden’ without affecting the surrounding light.”
More information: Christina M. Spaegele et al, Topologically protected optical polarization singularities in four-dimensional space, Science Advances (2023). DOI: 10.1126/sciadv.adh0369
Soon Wei Daniel Lim et al, Point singularity array with metasurfaces, Nature Communications (2023). DOI: 10.1038/s41467-023-39072-6