The horizontal axis shows the common logarithm of the neutrino mass squared difference ratio, while the vertical axis shows the probability distribution. Each histogram represents the probability distribution for the seesaw mechanism of the corresponding color. The vertical red and blue lines mark the experimental value (1σ and 3σ errors) of the common logarithm of the neutrino mass squared difference ratio. The probability distribution for the seesaw model with random Dirac and Majorana matrices, shown in orange, has the highest probability of reproducing the experimental value. Credit: Naoyuki Haba, Osaka Metropolitan University
When any matter is divided into smaller and smaller pieces, eventually all you are left with—when it cannot be divided any further—is a particle. Currently, 12 elementary matter particles are known: quarks and leptons, each of which comes in six different flavors. The leptons are grouped into three generations, each containing one charged lepton and one neutrino: the electron, muon, and tau neutrinos. In the Standard Model, the masses of the three generations of neutrinos are represented by a three-by-three matrix.
A research team led by Professor Naoyuki Haba from the Osaka Metropolitan University Graduate School of Science analyzed the lepton sector described by the neutrino mass matrix. Neutrinos are known to have smaller mass differences between generations than other elementary particles, so the team assumed that the neutrino masses are roughly equal across generations. They analyzed the neutrino mass matrix by assigning each of its elements at random and showed theoretically, using this random mass matrix model, that the lepton flavor mixings are large. Their findings were published in Progress of Theoretical and Experimental Physics.
“Clarifying the properties of elementary particles leads to the exploration of the universe and ultimately to the grand theme of where we came from,” Professor Haba explained. “Beyond the remaining mysteries of the Standard Model, there is a whole new world of physics.”
After studying neutrino mass anarchy in the Dirac neutrino, seesaw, and double seesaw models, the researchers found that the anarchy approach requires the measure of the matrix to obey a Gaussian distribution. Having considered several models of light neutrino mass in which the matrix is composed of a product of several random matrices, the research team was able to prove, as best they could at this stage, why the calculated squared differences of the neutrino masses agree most closely with the experimental results in the case of the seesaw model with random Dirac and Majorana matrices.
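To make the idea concrete, here is a minimal numerical sketch (not the authors' code) of the type-I seesaw scenario with random matrices: it assumes Gaussian-distributed complex matrix elements and an arbitrary overall mass scale, and it histograms the logarithm of the mass squared difference ratio, the quantity plotted in the figure above.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_complex(n):
    """n x n matrix with independent Gaussian-distributed complex entries."""
    return rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

log_ratios = []
for _ in range(10000):
    m_d = random_complex(3)                   # random Dirac mass matrix
    m_r = random_complex(3)
    m_r = (m_r + m_r.T) / 2                   # Majorana matrix is complex symmetric
    m_nu = m_d @ np.linalg.inv(m_r) @ m_d.T   # type-I seesaw: m_nu = m_D M_R^-1 m_D^T
    masses = np.sort(np.linalg.svd(m_nu, compute_uv=False))  # physical masses
    dm2_21 = masses[1]**2 - masses[0]**2      # "solar" splitting (arbitrary units)
    dm2_31 = masses[2]**2 - masses[0]**2      # "atmospheric" splitting
    log_ratios.append(np.log10(dm2_21 / dm2_31))

# The overall mass scale cancels in the ratio, so the histogram of log_ratios
# can be compared directly with the experimentally measured value.
hist, edges = np.histogram(log_ratios, bins=60, density=True)
```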
“In this study, we showed that the neutrino mass hierarchy can be mathematically explained using random matrix theory. However, this proof is not mathematically complete and is expected to be rigorously proven as random matrix theory continues to develop,” said Professor Haba. “In the future, we will continue with our challenge of elucidating the three-generation copy structure of elementary particles, the essential nature of which is still completely unknown both theoretically and experimentally.”
More information: Naoyuki Haba et al, Neutrino mass square ratio and neutrinoless double-beta decay in random neutrino mass matrices, Progress of Theoretical and Experimental Physics (2023). DOI: 10.1093/ptep/ptad010
Diffractive optical network-based multispectral imager achieves high imaging quality and high spectral signal contrast. This diffractive multispectral imager can convert a monochrome image sensor into a snapshot multispectral imaging device without conventional spectral filters or digital reconstruction algorithms. Credit: Ozcan Lab @ UCLA.
Multispectral imaging has fueled major advances in various fields, including environmental monitoring, astronomy, agricultural sciences, biomedicine, medical diagnostics and food quality control. The most ubiquitous and primitive form of a spectral imaging device is the color camera that collects information from red (R), green (G) and blue (B) color channels.
The traditional design of RGB color cameras relies on spectral filters spatially located over a periodically repeating array of 2×2 pixels, with each subpixel containing an absorptive spectral filter that transmits one of the red, green, or blue channels while blocking the others.
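As a rough illustration of that conventional design (a hypothetical Bayer-style layout, not the diffractive imager discussed below), the following sketch shows how a 2×2 repeating filter array leaves each sensor pixel with only one color channel, which is why full color must later be interpolated back.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample an RGB image (H, W, 3) through a 2x2 repeating filter array.

    A conventional Bayer-style layout is assumed here (G R / B G); each sensor
    pixel keeps only the one channel its absorptive filter transmits.
    """
    mosaic = np.zeros(rgb.shape[:2])
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 1]   # green
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 0]   # red
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 2]   # blue
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 1]   # green
    return mosaic  # a monochrome frame; full color must be interpolated back
```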
Despite its widespread use in various imaging applications, scaling up the number of these absorptive filter arrays to collect richer spectral information from many distinct color bands poses various challenges due to their low power efficiency, high spectral cross-talk and poor color representation quality.
UCLA researchers have recently introduced a snapshot multispectral imager that uses a diffractive optical network, instead of absorptive filters, to route 16 unique spectral bands onto a periodically repeating virtual multispectral pixel array at the output image field-of-view. This diffractive network-based multispectral imager is optimized using deep learning to spatially separate the input spectral channels onto distinct pixels at the output image plane. Acting as a virtual spectral filter array, it preserves the spatial information of the input scene or objects and instantaneously yields an image cube without image reconstruction algorithms.
Therefore, this diffractive multispectral imaging network can virtually convert a monochrome image sensor into a snapshot multispectral imaging device without conventional spectral filters or digital algorithms.
Published in Light: Science & Applications, the diffractive network-based multispectral imager framework is reported to offer both high spatial imaging quality and high spectral signal contrast. The authors’ research showed that ~79% average transmission efficiency across distinct bands could be achieved without a major compromise on the system’s spatial imaging performance and spectral signal contrast.
More information: Deniz Mengu et al, Snapshot multispectral imaging using a diffractive optical network, Light: Science & Applications (2023). DOI: 10.1038/s41377-023-01135-0
Researchers have developed a new way to achieve dynamic projection of 3D objects onto ultrahigh-density successive planes. By packing more details into a 3D image, this approach could enable realistic representations for use in virtual reality and other applications. Credit: Lei Gong, University of Science and Technology of China
Researchers have developed a new way to create dynamic ultrahigh-density 3D holographic projections. By packing more details into a 3D image, this type of hologram could enable realistic representations of the world around us for use in virtual reality and other applications.
“A 3D hologram can present real 3D scenes with continuous and fine features,” said Lei Gong, who led a research team from the University of Science and Technology of China. “For virtual reality, our method could be used with headset-based holographic displays to greatly improve the viewing angles, which would enhance the 3D viewing experience. It could also provide better 3D visuals without requiring a headset.”
Producing a realistic-looking holographic display of 3D objects requires projecting images with a high pixel resolution onto a large number of successive planes, or layers, that are spaced closely together. This achieves high depth resolution, which is important for providing the depth cues that make the hologram look three dimensional.
Gong’s team and Cheng-Wei Qiu’s research team at the National University of Singapore describe their new approach, called three-dimensional scattering-assisted dynamic holography (3D-SDH), in the journal Optica. They show that it can achieve a depth resolution more than three orders of magnitude greater than state-of-the-art methods for multiplane holographic projection.
“Our new method overcomes two long-existing bottlenecks in current digital holographic techniques—low axial resolution and high interplane crosstalk—that prevent fine depth control of the hologram and thus limit the quality of the 3D display,” said Gong. “Our approach could also improve holography-based optical encryption by allowing more data to be encrypted in the hologram.”
The new 3D scattering-assisted dynamic holography approach creates a digital hologram by projecting high-resolution images onto planes spaced closely together (a), achieving a more realistic representation than conventional holography techniques (b). Credit: Lei Gong, University of Science and Technology of China
Producing more detailed holograms
Creating a dynamic holographic projection typically involves using a spatial light modulator (SLM) to modulate the intensity and/or phase of a light beam. However, today’s holograms are limited in terms of quality because current SLM technology allows only a few low-resolution images to be projected onto separate planes with low depth resolution.
To overcome this problem, the researchers combined an SLM with a diffuser that enables multiple image planes to be separated by a much smaller amount without being constrained by the properties of the SLM. By also suppressing crosstalk between the planes and exploiting scattering of light and wavefront shaping, this setup enables ultrahigh-density 3D holographic projection.
To test the new method, the researchers first used simulations to show that it could produce 3D reconstructions with a much smaller depth interval between each plane. For example, they were able to project a 3D rocket model with 125 successive image planes at a depth interval of 0.96 mm in a single 1000×1000-pixel hologram, compared to 32 image planes with a depth interval of 3.75 mm using another recently developed approach known as random vector-based computer-generated holography.
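A quick back-of-the-envelope check on those numbers (simple arithmetic on the values quoted above, nothing beyond them) shows that the two simulated configurations span a similar total depth, with 3D-SDH packing roughly four times as many planes into it.

```python
planes_sdh, spacing_sdh = 125, 0.96   # 3D-SDH: image planes, depth interval (mm)
planes_rv,  spacing_rv  = 32,  3.75   # RV-CGH:  image planes, depth interval (mm)

depth_sdh = (planes_sdh - 1) * spacing_sdh   # ~119 mm total depth range
depth_rv  = (planes_rv  - 1) * spacing_rv    # ~116 mm total depth range
# Both cover a similar overall depth, but 3D-SDH packs roughly four times as
# many planes into it, i.e. a roughly four-fold finer depth sampling here.
```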
The researchers used their new method to simulate a holographic representation of a rocket [drawing shown in (a), point-cloud model in (b)]. A volume-rendered image of the 3D rocket projected by the random vector-based computer-generated holography (RV-CGH) method is shown in (c), using a single 1000×1000-pixel hologram. The 3D projection is represented by 32 images with a depth interval of 3.75 mm. Volume-rendered image of the object projected by 3D-SDH is shown in (d). 125 image planes with a uniform distance of 0.96 mm are simultaneously projected from a single 1000×1000-pixel hologram. Volume-rendered images of the simulated 3D rocket with varying perspective views are pictured in (e–g). Credit: Lei Gong, University of Science and Technology of China
To validate the concept experimentally, they built a prototype 3D-SDH projector to create dynamic 3D projections and compared this to a conventional state-of-the-art setup for 3D Fresnel computer-generated holography. They showed that 3D-SDH achieved an improvement in axial resolution of more than three orders of magnitude over the conventional counterpart.
The 3D holograms the researchers demonstrated are all point-cloud 3D images, meaning they cannot present the solid body of a 3D object. Ultimately, the researchers would like to be able to project a collection of 3D objects with a hologram, which would require a higher pixel-count hologram and new algorithms.
More information: Panpan Yu et al, Ultrahigh-density 3D holographic projection by scattering-assisted dynamic holography, Optica (2023). DOI: 10.1364/OPTICA.483057
Ultrafast turn-on of magnetism in monolayer BiH. a Illustration of the hexagonal BiH lattice and ultrafast turn-on of magnetism—an intense femtosecond laser pulse is irradiated onto the material, exciting electronic currents that through spin–orbit interactions induce magnetization and spin flipping. b Band structure of BiH with and without SOC (red and blue bands indicate occupied and unoccupied states, respectively). In the SOC case, each band is spin-degenerate. c Calculated spin expectation value, <Sz(t)>, driven by circularly polarized pulses for several driving intensities (for a wavelength of 3000 nm). The x-component of the driving field is illustrated in arbitrary units to convey the different timescales in the dynamics. Credit: npj Computational Materials (2023). DOI: 10.1038/s41524-023-00997-7
Intense laser light can induce magnetism in solids on the attosecond scale—the fastest magnetic response to date. That is the finding reached by theoreticians at the Max Planck Institute for the Structure and Dynamics of Matter in Hamburg, Germany, who used advanced simulations to investigate the magnetization process in several 2D and 3D materials.
Their calculations show that, in structures with heavy atoms, the fast electron dynamics initiated by the laser pulses can be converted into attosecond magnetism. The work has been published in npj Computational Materials.
The team concentrated on several benchmark 2D and 3D material systems, but the results apply to all materials that include heavy atomic constituents. “The heavy atoms are especially important, because they induce a strong spin-orbit interaction,” explains lead author Ofer Neufeld. “This interaction is key to converting the light-induced electron motion into spin polarization—in other words, into magnetism. Otherwise, light simply doesn’t interact with the electrons’ spin.”
Just like tiny compass needles, electrons can be imagined as having an internal needle that points in some direction in space, e.g. ‘up’ or ‘down’—the so-called ‘spin’. Each electron’s spin direction depends on its chemical environment, for instance which atoms it can see and where other electrons are. In non-magnetic materials, the electrons’ spins point equally in all directions and cancel out. In contrast, when the individual electrons’ spins align with each other to point in the same direction, the material becomes magnetic.
The theoreticians set out to investigate what magnetic phenomena can occur when solids interact with intense linearly polarized laser pulses, which typically accelerate electrons on very fast timescales inside matter. “These conditions are fascinating to explore, because when the laser pulses have a linear polarization, they are typically believed not to induce any magnetism,” says Neufeld.
Unexpectedly, their simulations showed that these particularly powerful lasers do magnetize materials, even though the magnetism is transient—it lasts only until the laser pulse is turned off. The most remarkable finding, however, concerned the speed of this process: The magnetization evolves on extremely short timescales, less than 500 attoseconds—a prediction for the fastest magnetic response ever. For scale, a single attosecond is to one second as one second is to about 32 billion years.
Using advanced simulation tools to explain the underlying mechanism, the team showed that the intense light flips the electrons’ spins back and forth: the laser accelerates the electrons in circular-like orbits within a few hundred attoseconds, and the resulting strong spin-orbit interactions then align the spin directions.
The process can be imagined as a bowling ball sliding across a surface which then starts to roll: In this analogy the light pushes the ball around, and the spin-orbit interactions (a force arising from the nearby heavy nuclei as the electron orbits around them) cause it to roll back and forth, magnetizing it. Both forces act together to get the ball rolling.
The results offer fascinating new insights into the fundamentals of magnetization, says Neufeld: “We found that it’s a highly nonlinear effect which can be tuned by the laser’s properties. The results hint, though don’t unequivocally prove, that the ultimate speed limit for magnetism is several tens of attoseconds, because that’s the natural speed limit of electronic motion.”
Understanding these light-induced magnetization processes at their fundamental level in a range of materials is a crucial step towards the development of ultrafast memory devices and changes the current understanding of magnetism.
More information: Ofer Neufeld et al, Attosecond magnetization dynamics in non-magnetic materials driven by intense femtosecond lasers, npj Computational Materials (2023). DOI: 10.1038/s41524-023-00997-7
Magnetic field (Hall) probe matrix allows current distributions to be recreated and demountable joints allow current redistribution between CORC cables at each turn. Demountable joints shown here are conceptual. Joint resistances must be on the order of nano-ohms. Credit: Berkeley Lab
Researchers at Berkeley Lab’s Accelerator Technology & Applied Physics (ATAP) Division have developed a method for detecting and predicting the local loss of superconductivity in large-scale magnets that are capable of generating high magnetic fields. These high-field magnets are a core enabling technology for many areas of scientific research, medicine, and energy, where they are used in a range of applications, including in particle accelerators and colliders for high-energy and nuclear physics, diagnostic and therapeutic medical devices, and energy generation, transmission, and storage technologies.
High-field magnets also show promise as an enabling technology for magnetic confinement fusion reactors, which aim to replicate the processes that power the sun by fusing two hydrogen isotopes (deuterium and tritium) to produce a carbon-free source of energy. They are used to confine the plasma of deuterium and tritium so that fusion can occur.
To realize the full potential of these reactors “will require high-performance superconducting magnets capable of generating large magnetic fields safely and reliably under the demanding dynamic conditions found in fusion reactors,” says Reed Teyber, a Research Scientist at ATAP’s Superconducting Magnet Program who is developing diagnostic tools for monitoring the performance of both low- and high-temperature superconducting magnets. The work is published in the journal Scientific Reports.
Superconducting cables, however, can experience sudden and unpredictable losses in superconductivity—a phenomenon referred to as quenching—that can generate temperatures high enough to destroy the magnets, costing millions of dollars in damage.
“While for older, low-temperature superconducting magnets quenching is inherent, it must be avoided altogether in high-temperature superconducting magnets to ensure their reliable and safe operation,” explains Reed. “Detecting a quench and preventing it from damaging magnets is, therefore, a central focus for researchers looking to develop superconducting magnets for compact fusion reactors.”
Rare-earth barium copper oxide (ReBCO) is a promising material for the fabrication of superconducting magnets used in fusion reactors. ReBCO tape, which is used in the cables that carry the currents in superconducting magnets, has a high critical temperature (making it a so-called high-temperature superconductor), high critical field, and the potential to form demountable magnets—an important property that improves maintenance access, simplifies materials component testing, and allows for modularity to accommodate future reactor designs.
Superconducting magnets are familiar in such applications as colliders for high-energy physics, so methods exist for quench protection in low-temperature superconducting magnets like niobium-titanium or niobium-tin. These methods use voltage or temperature measurements to trigger energy extraction processes. However, Reed says quench detection in high-temperature superconducting magnets, like those that use ReBCO, needed for fusion reactors is far more challenging and calls for new approaches.
Reed Teyber working on high-temperature superconducting CORC cable. Credit: Berkeley Lab/Carl A. Williams
These extremely powerful magnets, capable of generating magnetic fields exceeding 20 tesla, are characterized by a much slower initial rate of quench development compared to accelerator magnets, making it difficult to detect quenches using voltage- or temperature-based techniques.
To address this issue, Reed is working with colleagues at ATAP to develop a method that employs an array of Hall probes—devices that measure magnetic fields via the voltage generated across a current-carrying conductor placed in the field—to measure the magnetic fields produced around ReBCO CORC cables. These cables are composed of numerous “conductor-on-round-core” wires made from tapes of ReBCO to achieve the required current capacity.
Current distributions for the individual conductors recreated from these measurements provide insights into the detailed dynamics of magnet operation, allowing the extraction of parameters for a predictive model.
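In broad strokes, that reconstruction is a linear inverse problem. The sketch below is a hedged illustration, not the Berkeley Lab code: it assumes a known response matrix A relating conductor currents to probe fields (in practice obtained from a field model of the probe geometry, e.g. Biot-Savart) and recovers the currents by least squares.

```python
import numpy as np

def reconstruct_currents(A, b_measured):
    """Least-squares estimate of per-conductor currents from Hall-probe fields,
    assuming a known linear response matrix A with B = A @ I."""
    currents, *_ = np.linalg.lstsq(A, b_measured, rcond=None)
    return currents

# Synthetic example: 8 probes monitoring 3 conductors. Each entry of A would in
# practice come from a field model of the probe and cable geometry.
rng = np.random.default_rng(1)
A = rng.normal(size=(8, 3))                        # field per unit current
true_I = np.array([950.0, 1020.0, 980.0])          # amperes
b = A @ true_I + rng.normal(scale=1e-3, size=8)    # noisy probe readings
print(reconstruct_currents(A, b))                  # ~[950, 1020, 980]
```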
Although the technique shows promise as a powerful tool for quench detection and prevention, it currently has many limitations, notes Reed. “For example, it only works for specific magnet technologies with no inter-cable current sharing and cables of moderate length.”
However, he adds, it turns out that demountable toroidal field coils wound from ReBCO cables meet these requirements. Toroidal field coils are the leading superconducting magnet technology for generating the enormous magnetic fields required for containing the plasma to ensure fusion reactions can happen.
“We are now looking into how we can use our technique to solve the problem of quench detection in these coils.”
Reed says the work could be a “game-changer” not only for superconducting magnets used in nuclear fusion reactor experiments, but also for particle accelerators in high-energy physics, magnetic energy storage technologies, medicine, and various power devices such as electric motors, generators, and transmission lines.
“This innovative technique conceived by Reed,” said Paolo Ferracin, deputy program head of ATAP’s Superconducting Magnet Program, “has the potential to serve as a key element in solving the quench protection for high-temperature superconductor cables, a fundamental issue for the scientific community working on the next generation of superconducting magnets.”
More information: Reed Teyber et al, Current distribution monitoring enables quench and damage detection in superconducting fusion magnets, Scientific Reports (2022). DOI: 10.1038/s41598-022-26592-2
Environmental noise, here represented as a little demon, can affect the state of a quantum computer by changing the phases of various branches of its wave function in an unpredictable fashion; we call this dephasing. Here, the position of the hand of the clock represents the phase of a particular branch of the wave function. Its modification, not known to us, will affect the delicate ballet of phase recombination which quantum computations rely on. Credit: L. Lami
Researchers Ludovico Lami (QuSoft, University of Amsterdam) and Mark M. Wilde (Cornell) have made significant progress in quantum computing by deriving a formula that predicts the effects of environmental noise. This is crucial for designing and building quantum computers capable of working in our imperfect world.
Quantum computing uses the principles of quantum mechanics to perform calculations. Unlike classical computers, which use bits that can be either 0 or 1, quantum computers use quantum bits, or qubits, which can be in a superposition of 0 and 1 simultaneously.
This allows quantum computers to perform certain types of calculations much faster than classical computers. For example, a quantum computer can factor very large numbers in a fraction of the time it would take a classical computer.
While one could naively attribute such an advantage to the ability of a quantum computer to perform numerous calculations in parallel, the reality is more complicated. The quantum wave function of the quantum computer (which represents its physical state) possesses several branches, each with its own phase. A phase can be thought of as the position of the hand of a clock, which can point in any direction on the clockface.
At the end of its computation, the quantum computer recombines the results of all computations it simultaneously carried out on different branches of the wave function into a single answer. “The phases associated with the different branches play a key role in determining the outcome of this recombination process, not unlike how the timing of a ballerina’s steps plays a key role in determining the success of a ballet performance,” explains Lami.
Light can travel through an optical fiber via different paths. The impossibility of knowing the exact path a light ray has taken leads to an effective dephasing noise. Credit: L. Lami
Disruptive environmental noise
A significant obstacle to quantum computing is environmental noise. Such noise can be likened to a little demon that alters the phase of different branches of the wave function in an unpredictable way. This process of tampering with the phase of a quantum system is called dephasing, and can be detrimental to the success of a quantum computation.
Dephasing can occur in everyday devices such as optical fibers, which are used to transfer information in the form of light. Light rays traveling through an optical fiber can take different paths; since each path is associated with a specific phase, not knowing the path taken amounts to an effective dephasing noise.
In their new publication in Nature Photonics, Lami and Wilde analyze a model, called the bosonic dephasing channel, to study how noise affects the transmission of quantum information. It represents the dephasing acting on a single mode of light at a definite wavelength and polarization.
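Conceptually, the channel damps the coherences between photon-number states. The following sketch is a simplified illustration under stated assumptions (a truncated single mode and Gaussian phase noise), not the analysis in the paper.

```python
import numpy as np

def dephase(rho, phi_samples):
    """Average a truncated single-mode state over random phase rotations.

    A phase phi rotates the Fock state |n> by exp(-i*n*phi); averaging over
    many draws damps every off-diagonal element rho[m, n] by the
    characteristic function of the phase distribution evaluated at (m - n).
    """
    n = np.arange(rho.shape[0])
    out = np.zeros_like(rho, dtype=complex)
    for phi in phi_samples:
        u = np.diag(np.exp(-1j * phi * n))
        out += u @ rho @ u.conj().T
    return out / len(phi_samples)

# Example: the superposition (|0> + |1>)/sqrt(2) under Gaussian phase noise.
rho = np.full((2, 2), 0.5, dtype=complex)
phis = np.random.default_rng(0).normal(scale=0.5, size=5000)
print(dephase(rho, phis))   # diagonals stay 0.5; coherences shrink by ~exp(-0.125)
```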
The number quantifying the effect of the noise on quantum information is the quantum capacity, which is the number of qubits that can be safely transmitted per use of a fiber. The new publication provides a full analytical solution to the problem of calculating the quantum capacity of the bosonic dephasing channel, for all possible forms of dephasing noise.
Longer messages overcome errors
To overcome the effects of noise, one can incorporate redundancy in the message to ensure that the quantum information can still be retrieved at the receiving end. This is similar to saying “Alpha, Beta, Charlie” instead of “A, B, C” when speaking on the phone. Although the transmitted message is longer, the redundancy ensures that it is understood correctly.
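As a purely classical analogy of that idea (the paper deals with quantum codes, which work differently), a three-fold repetition code lets a single corrupted symbol be outvoted at the receiving end.

```python
def encode(bits):
    """Send each bit three times (classical repetition code)."""
    return [b for b in bits for _ in range(3)]

def decode(symbols):
    """Take a majority vote over each group of three received symbols."""
    return [int(sum(symbols[i:i + 3]) >= 2) for i in range(0, len(symbols), 3)]

sent = encode([1, 0, 1])   # [1, 1, 1, 0, 0, 0, 1, 1, 1]
sent[4] = 1                # noise flips one symbol in transit
print(decode(sent))        # [1, 0, 1] -- the message still comes through
```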
The new study quantifies exactly how much redundancy needs to be added to a quantum message to protect it from dephasing noise. This is significant because it enables scientists to quantify the effects of noise on quantum computing and develop methods to overcome these effects.
Over the short span of just 300 years, since the invention of modern physics, we have gained a deeper understanding of how our universe works on both small and large scales. Yet, physics is still very young and when it comes to using it to explain life, physicists struggle.
Even today, we can’t really explain what the difference is between a living lump of matter and a dead one. But my colleagues and I are creating a new physics of life that might soon provide answers.
More than 150 years ago, Darwin poignantly noted the dichotomy between what we understand in physics and what we observe in life—noting at the end of The Origin of Species “…whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been and are being evolved.”
The importance of time
Isaac Newton described a universe where the laws never change, and time is an immutable and absolute backdrop against which everything moves. Darwin, however, observed a universe where endless forms are generated, each changing features of what came before, suggesting that time should not only have a direction, but that it in some ways folds back on itself. New evolutionary forms can only arise via selection on the past.
Presumably these two areas of science are describing the same universe, but how can two such diametrically opposite views be unified? The key to understanding why life is not explainable in current physics may be to reconsider our notions of time as the key difference between the universe as described by Newton and that of Darwin. Time has, in fact, been reinvented many times through the history of physics.
Although Newton’s time was fixed and absolute, Einstein’s time became a dimension—just like space. And just as all points in space exist all at once, so do all points in time. This philosophy of time is sometimes referred to as the “block universe” where the past, present and future are equally real and exist in a static structure—with no special “now”. In quantum mechanics, the passage of time emerges from how quantum states change from one to the next.
The invention of thermodynamics gave time its arrow, explaining why it’s moving forward rather than backwards. That’s because there are clear examples of systems in our universe, such as a working engine, that are irreversible—only working in one direction. Each new area of fundamental physics, whether describing space and time (Newton/Einstein), matter and light (quantum mechanics), or heat and work (thermodynamics) has introduced a new concept of time.
But what about evolution and life? To build novel things, evolution requires time. Endless novelty can only come to be in a universe where time exists and has a clear direction. Evolution is the only physical process in our universe that can generate the succession of novel objects we associate with life—things like microbes, mammals, trees and even cellphones.
Information and memory
Such objects cannot fluctuate into existence spontaneously. They require a memory, based on what existed in the past, to construct things in the present. It is such “selection” that determines the dividing line between the universe described by current physics and what Darwin saw: it is the mechanism that turns a universe where memory does not matter in determining what exists into one where it does.
Think about it: everything in the living world requires some kind of memory and information flow. The DNA in our cells is our blueprint. And to invent new things, such as rockets or medication, living beings also need information—knowledge of the laws of physics and chemistry.
To explain life, we therefore need to understand how the complex objects life creates exist in time. With my collaborators, we have been doing just that in a newly proposed theory of physics called assembly theory.
A key conjecture of assembly theory is that, as objects become more complex, the number of unique parts that make them up increases, and so does the need for local memory to store how to assemble the object from its unique parts. We quantify this in assembly theory as the minimum number of physical steps needed to build an object from its elementary building blocks, called the assembly index.
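A toy version of that definition can be written down for strings, where "joining" is concatenation and anything already built can be reused. This is only an illustration of the counting idea, not the molecular measure used in the research.

```python
def assembly_index(target):
    """Toy assembly index of a string: the minimum number of join operations
    needed to build it from single characters, where anything already built
    can be reused for free. Brute-force search; only viable for short strings."""
    best = [len(target)]          # loose upper bound on the number of joins

    def search(pool, steps):
        if target in pool:
            best[0] = min(best[0], steps)
            return
        if steps + 1 >= best[0]:  # one more join cannot beat the current best
            return
        for a in pool:
            for b in pool:
                new = a + b
                if new not in pool and new in target:
                    search(pool | {new}, steps + 1)

    search(frozenset(target), 0)
    return best[0]

print(assembly_index("abcabc"))   # 3: build "ab", then "abc", then reuse it
print(assembly_index("banana"))   # 4: reusing "na" saves a join over six distinct letters
```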
Importantly, assembly theory treats this shortest path as an intrinsic property of the object, and indeed we have shown how assembly index can be measured for molecules using several different measuring techniques including mass spectrometry (an analytical method to measure the mass-to-charge ratio of molecules).
With this approach, we have shown in the lab, with measurements on both biological and non-biological samples, how molecules with an assembly index above 15 steps are only found in living samples.
This suggests that assembly theory is indeed capable of testing our hypothesis that life is the only physics that generates complex objects. And we can do so by identifying those objects that are so complex the only physical mechanism to form them is evolution.
We are aiming to use our theory to estimate when the origin of life happens by measuring the point at which molecules in a chemical soup become so complex that they start using information to make copies of themselves—the threshold at which life arises from non-life. We may then apply the theory to experiments aiming to generate a new origin of life event in the lab.
And when we know this, we can use the theory to look for life on worlds that are radically different to Earth, and may therefore look so alien that we wouldn’t recognize life there.
If the theory holds, it will force a radical rethink on time in physics. According to our theory, assembly can be measured as an intrinsic property for molecules, which corresponds to their size in time—meaning time is a physical attribute.
Ultimately, time is intrinsic to our experiences of the world, and it is necessary for evolution to happen. If we want physics to be capable of explaining life—and us—it may be that we need to treat time as a material property for the first time in physics.
This is perhaps the most radical departure for physics of life from standard physics, but it may be the critical insight needed to explain what life is.
Artist’s impression of the XMCD experiment. The soft-x-ray light from a plasma source is first circularly polarized by the transmission through a magnetic film. Subsequently, the magnetization in the actual sample can be determined accurately. Credit: Christian Tzschaschel
Magnetic nanostructures have long been part of our everyday life, e.g., in the form of fast and compact data storage devices or highly sensitive sensors. A major contribution to the understanding of many of the relevant magnetic effects and functionalities is made by a special measurement method: X-ray magnetic circular dichroism (XMCD).
This impressive term describes a fundamental effect of the interaction between light and matter: In a ferromagnetic material, there is an imbalance of electrons with a certain angular momentum, the spin. If one shines circularly polarized light, which also has a defined angular momentum, through a ferromagnet, a clear difference in transmission for a parallel or anti-parallel alignment of the two angular momenta is observable—a so-called dichroism.
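A common way to quantify the effect (a generic definition; sign conventions vary between setups) is the normalized difference of the two transmissions.

```python
def xmcd_asymmetry(t_parallel, t_antiparallel):
    """Normalized transmission difference for the two relative orientations of
    photon angular momentum and sample magnetization (sign conventions vary)."""
    return (t_parallel - t_antiparallel) / (t_parallel + t_antiparallel)

print(xmcd_asymmetry(0.52, 0.48))   # ~0.04, i.e. a 4% dichroic asymmetry
```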
This circular dichroism of magnetic origin is particularly pronounced in the soft-X-ray region (200 to 2000 eV energy of the light particles, corresponding to a wavelength of only 6 to 0.6 nm), when considering the element-specific absorption edges of transition metals, such as iron, nickel, or cobalt, as well as rare earths, such as dysprosium or gadolinium.
The averaged transmission through the investigated sample at the Fe L absorption edges (black data points) can be measured precisely and is well described by a simulation (black line). At the two absorption maxima, see insets, significant dichroism for the two different directions of saturation magnetization of the sample is observable. So far, such experiments have only been possible at large-scale facilities. Credit: Forschungsverbund Berlin e.V. (FVB)
These elements are particularly important for the technical application of magnetic effects. The XMCD effect allows for precisely determining the magnetic moment of the respective elements, even in buried layers in a material and without damaging the sample system. If the circularly polarized soft-X-ray radiation comes in very short femto- to picosecond (ps) pulses, even ultrafast magnetization processes can be monitored on the relevant time scale.
Until now, access to the required X-ray radiation has only been possible at scientific large-scale facilities, such as synchrotron-radiation sources or free-electron lasers (FELs), and has thus been strongly limited.
A team of researchers around junior research group leader Daniel Schick at the Max Born Institute (MBI) in Berlin has now succeeded for the first time in realizing XMCD experiments at the absorption L edges of iron at a photon energy of around 700 eV in a laser laboratory. A laser-driven plasma source was used to generate the required soft X-ray light, by focusing very short (2 ps) and intense (200 mJ per pulse) optical laser pulses onto a cylinder of tungsten.
The generated plasma thereby emits a lot of light continuously in the relevant spectral range of 200-2000 eV, with a pulse duration of less than 10 ps. However, due to the stochastic generation process in the plasma, a very important requirement for observing XMCD is not met—the polarization of the soft-X-ray light is not circular, as required, but completely random, similar to that of a light bulb.
Magnetic asymmetry behind the polarizer and the examined sample at the Fe L absorption edges. The two colors correspond to measurements with reversed magnetization of the polarizer – the magnetization direction of the sample is immediately evident from the sign of the dichroism observed (blue vs. red curve). The measurements can be reproduced very accurately by simulations (lines). Credit: Forschungsverbund Berlin e.V. (FVB)
Therefore, the researchers used a trick: the X-ray light first passes through a magnetic polarization filter in which the same XMCD effect as described above is active. Due to the polarization-dependent dichroic transmission, an imbalance of light particles with parallel vs. anti-parallel angular momentum relative to the magnetization of the filter can be generated. After passing through the polarization filter, the partially circularly or elliptically polarized soft-X-ray light can be employed for the actual XMCD experiment on a magnetic sample.
The work, published in the journal Optica, demonstrates that laser-based X-ray sources are catching up with large-scale facilities.
“Our concept for generating circularly polarized soft X-rays is not only very flexible but also partly superior to conventional methods in XMCD spectroscopy due to the broadband nature of our light source,” says the first author of the study and Ph.D. student at the MBI, Martin Borchert. In particular, the already demonstrated pulse duration of the generated X-ray pulses of only a few picoseconds opens up new possibilities to observe and ultimately understand even very fast magnetization processes, e.g., when triggered by ultrashort light flashes.
More information: Martin Borchert et al, X-ray magnetic circular dichroism spectroscopy at the Fe L edges with a picosecond laser-driven plasma source, Optica (2023). DOI: 10.1364/OPTICA.480221
An overview of the QRNG setup. (a) The vacuum noise that is used as a source to generate random numbers. (b) A micrograph of the manufactured PIC and TIA. (c) The Gaussian distribution after digitization. (d) The distribution of the distilled random 32-bit integers, grouped into 256 bins. Credit: PRX Quantum (2023). DOI: 10.1103/PRXQuantum.4.010330
A team of physicists from Ghent University—Interuniversity Microelectronics Center, the Technical University of Denmark and Politecnico & Università di Bari reports that it is possible to use quantum fluctuations to generate random numbers faster than standard methods.
In their study, reported in the journal PRX Quantum, the group used the behavior of pairs of particles and antiparticles to create a random number generator that is up to 200 times faster than conventional systems.
Random number generation is important in computer science. In addition to such applications as generating random backdrops and scenarios in video games, random numbers are used to create encryption keys for a host of sensitive applications. But generating keys that cannot be easily cracked requires computer power and time. For that reason, computer scientists are constantly looking for new ways to generate random numbers.
In this new effort, the research team turned to a new source—quantum fluctuations—which, in their most basic form, are temporary changes in the amount of energy at a given point in space. Such flickering has been widely studied because of the way it affects chemical bonding and certain types of light scattering. The researchers took advantage of the randomness of this flickering to create a random number generator, focusing on fluctuations related to pairs of particles and antiparticles forming and annihilating, and the energy fields associated with them. Such flickering has previously been shown to be random.
To capture the randomness of such flickering, the researchers used an integrated balanced homodyne detector—a device that is capable of measuring the electric field of a quantum state. But noting that such a device is susceptible to also capturing the less-than-random behavior of entangling particles, they added another device designed to identify this noise and ignore it while taking measurements.
The team then shrank the components used by their homodyne detector to a size that would allow incorporation on a chip installed in a computer system. They then used data from the chip to generate random numbers.
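The pipeline from noisy voltages to usable random bits can be sketched as follows. This is a hedged toy illustration, not the group's 100-Gbit/s hardware chain: the ADC range, the Toeplitz-hashing extractor and the 4:1 compression ratio are all placeholder choices.

```python
import numpy as np

rng = np.random.default_rng(7)

# 1) Stand-in for the homodyne measurement: vacuum noise yields Gaussian voltages.
samples = rng.normal(size=512)

# 2) Digitize with an 8-bit ADC spanning +/- 4 standard deviations.
codes = np.clip(np.floor((samples + 4.0) / 8.0 * 256), 0, 255).astype(np.uint8)
raw_bits = np.unpackbits(codes)                    # 4096 raw, still-biased bits

# 3) Distill near-uniform bits with Toeplitz hashing over GF(2). The 4:1
#    compression ratio is an arbitrary placeholder, not the paper's value.
n_in, n_out = raw_bits.size, raw_bits.size // 4
seed = rng.integers(0, 2, size=n_in + n_out - 1)   # public extractor seed
idx = np.arange(n_out)[:, None] - np.arange(n_in)[None, :] + n_in - 1
T = seed[idx]                                      # binary Toeplitz matrix
random_bits = (T @ raw_bits) % 2
print(random_bits[:32])
```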
More information: Cédric Bruynsteen et al, 100-Gbit/s Integrated Quantum Random Number Generator Based on Vacuum Fluctuations, PRX Quantum (2023). DOI: 10.1103/PRXQuantum.4.010330
An illustration of how a 2D photonic time crystal can boost light waves. Credit: Xuchen Wang / Aalto University
Researchers have developed a way to create photonic time crystals, and they have shown that these bizarre, artificial materials amplify the light that shines on them. These findings, described in a paper in Science Advances, could lead to more efficient and robust wireless communications and significantly improved lasers.
Time crystals were first conceived by Nobel laureate Frank Wilczek in 2012. Mundane, familiar crystals have a structural pattern that repeats in space, but in a time crystal, the pattern repeats in time instead. While some physicists were initially skeptical that time crystals could exist, recent experiments have succeeded in creating them. Last year, researchers at Aalto University’s Low Temperature Laboratory created paired time crystals that could be useful for quantum devices.
Now, another team has made photonic time crystals, which are time-based versions of optical materials. The researchers created photonic time crystals that operate at microwave frequencies, and they showed that the crystals can amplify electromagnetic waves. This ability has potential applications in various technologies, including wireless communication, integrated circuits, and lasers.
So far, research on photonic time crystals has focused on bulk materials—that is, three-dimensional structures. This has proven enormously challenging, and the experiments haven’t gotten past model systems with no practical applications. So the team, which included researchers from Aalto University, the Karlsruhe Institute of Technology (KIT), and Stanford University, tried a new approach: building a two-dimensional photonic time crystal, known as a metasurface.
“We found that reducing the dimensionality from a 3D to a 2D structure made the implementation significantly easier, which made it possible to realize photonic time crystals in reality,” says Xuchen Wang, the study’s lead author, who was a doctoral student at Aalto and is currently at KIT.
The new approach enabled the team to fabricate a photonic time crystal and experimentally verify the theoretical predictions about its behavior. “We demonstrated for the first time that photonic time crystals can amplify incident light with high gain,” says Wang.
“In a photonic time crystal, the photons are arranged in a pattern that repeats over time. This means that the photons in the crystal are synchronized and coherent, which can lead to constructive interference and amplification of the light,” explains Wang. The periodic arrangement of the photons means they can also interact in ways that boost the amplification.
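A loose analogy for that gain mechanism (a generic parametric-amplification toy, not the team's metasurface model) is a wave in a medium whose properties are modulated in time at twice the wave's frequency; the modulation steadily pumps energy into the wave.

```python
import numpy as np

w0, eps, dt, steps = 2 * np.pi, 0.2, 1e-3, 20000   # wave freq., modulation depth

def w2(t):
    """Time-periodic 'medium': restoring strength modulated at twice the wave frequency."""
    return w0**2 * (1 + eps * np.cos(2 * w0 * t))

x, v = 1.0, 0.0          # initial field amplitude and its rate of change
peak = abs(x)
for n in range(steps):
    t = n * dt
    a_old = -w2(t) * x
    x += v * dt + 0.5 * a_old * dt**2              # velocity-Verlet step
    v += 0.5 * (a_old - w2(t + dt) * x) * dt
    peak = max(peak, abs(x))

print(peak)   # grows far beyond 1: the periodic modulation pumps energy into the wave
```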
Two-dimensional photonic time crystals have a range of potential applications. By amplifying electromagnetic waves, they could make wireless transmitters and receivers more powerful or more efficient. Wang points out that coating surfaces with 2D photonic time crystals could also help with signal decay, which is a significant problem in wireless transmission. Photonic time crystals could also simplify laser designs by removing the need for bulk mirrors that are typically used in laser cavities.
Another application emerges from the finding that 2D photonic time crystals don’t just amplify electromagnetic waves that hit them in free space but also waves traveling along the surface. Surface waves are used for communication between electronic components in integrated circuits. “When a surface wave propagates, it suffers from material losses, and the signal strength is reduced. With 2D photonic time crystals integrated into the system, the surface wave can be amplified, and communication efficiency enhanced,” says Wang.