Study explores the possibility that dark photons might be a heat source for intergalactic gas

(Top panel) Fit to the Doppler parameter distribution and column density distribution function of the Lyman-alpha forest at z = 0.1 assuming a maximal contribution of dark photon heating to the line widths.  Contours show the projection of the 68% and 95% intervals for the mass and mixing parameter of the dark photon. The colors correspond to different assumptions about the uncertainty of the intergalactic medium temperature at z = 2. (Bottom panel) The corresponding best-fit models compared to the COS observational data.  The solid gray curve shows a result with no dark photon heating. Credit: Physical Review Letters (2022). DOI: 10.1103/PhysRevLett.129.211102

Gas clouds across the universe are known to absorb the light produced by distant massive celestial objects, known as quasars. This light manifests as the so-called Lyman alpha forest, a dense structure composed of absorption lines that can be observed using spectroscopy tools.

Over the past decades, astrophysicists have been assessing the value of these absorption lines as a tool to better understand the universe and the relationships between cosmological objects. The Lyman alpha forest could also potentially aid the ongoing search for dark matter, offering an additional tool to test theoretical predictions and models.

Researchers at the University of Nottingham, Tel Aviv University, New York University, and the Institute for Fundamental Physics of the Universe in Trieste have recently compared low-redshift Lyman alpha forest observations to hydrodynamical simulations of the intergalactic medium with dark matter made up of dark photons, a well-known dark matter candidate.

Their paper, published in Physical Review Letters (PRL), builds on an earlier work by some members of their team, which compared simulations of the intergalactic medium (IGM) with Lyman-alpha forest measurements collected by the Cosmic Origins Spectrograph (COS) aboard the Hubble Space Telescope.

“In our analyses, we found that the simulation predicted line widths that were too narrow compared to the COS results, suggesting that there could be additional, noncanonical sources of heating occurring at low redshifts,” Hongwan Liu, Matteo Viel, Andrea Caputo and James Bolton, the researchers who carried out the study, told Phys.org via email.

“We explored several dark matter models that could act as this source of heating. Building on two of the authors’ experience with dark photons in a previous paper published in PRL, we eventually realized that heating from dark photon dark matter could work.”

Based on their previous observations, Liu, Viel, Caputo and Bolton decided to alter a hydrodynamical simulation of the IGM (i.e., a sparse cloud of hydrogen that exists in the spaces between galaxies). In their new simulation, they included the effects of the heat that models predict would be produced by dark photon dark matter.

“In regions of space where the mass of the dark photon matches the effective plasma mass of the photon, conversions from dark photons to photons can occur,” Liu, Viel, Caputo and Bolton explained. “The converted photons are then rapidly absorbed by the IGM in those regions, heating the gas up. The amount of energy transferred from dark matter to the gas can be calculated theoretically.”
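Schematically (our summary of the standard resonant-oscillation result, in the spirit of the Caputo et al. paper cited at the end of this article), the photon's effective plasma mass is set by the local free-electron density n_e, and conversion is resonant wherever that mass crosses the dark photon mass:

```latex
m_\gamma^2 \simeq \frac{4\pi\alpha\, n_e}{m_e}, \qquad
P_{A' \to \gamma} \simeq \pi\, \epsilon^2\, m_{A'}
\left| \frac{\mathrm{d}\ln m_\gamma^2}{\mathrm{d}t} \right|^{-1}_{\mathrm{res}}
```

Here ε is the kinetic-mixing parameter and the Landau-Zener probability P is evaluated at the resonance crossing; the heat deposited in the gas is then the dark matter energy density times this probability per crossing.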

The researchers added this estimated energy transfer between dark photons and intergalactic clouds to their simulations. This ultimately allowed them to attain a series of simulated absorption line widths, which they could compare to actual Lyman-alpha forest observations collected by the COS.

“Broadly speaking, we have shown that the Lyman-alpha forest is extremely useful for understanding dark matter models where energy can be converted from dark matter into heating,” Liu, Viel, Caputo and Bolton said. “I think our study will encourage physicists interested in dark matter to pay more attention to the Lyman-alpha forest.”

Overall, the comparison between COS measurements and hydrodynamical simulations performed by this team of researchers suggests that dark photons could in fact be a source of heat in intergalactic gas clouds. Their findings could thus be the first hint of dark matter making itself known through something other than its gravitational effects.

While this is a fascinating possibility, Liu, Viel, Caputo and Bolton have not yet ruled out other possible theoretical explanations. They thus hope that their study will inspire other teams to similarly probe the properties of the IGM in the early universe.

“One particularly interesting consequence of dark photon heating is that underdense regions in the IGM are heated up at earlier times compared to overdense regions,” Liu, Viel, Caputo and Bolton said. “This can lead to underdense regions being hotter than overdense regions, which is contrary to standard expectations. There are some indications that the IGM does exhibit this behavior at high redshifts. If so, it could be another important piece of evidence in favor of dark photon dark matter heating.”

More information: James S. Bolton et al, Comparison of Low-Redshift Lyman-α Forest Observations to Hydrodynamical Simulations with Dark Photon Dark Matter, Physical Review Letters (2022). DOI: 10.1103/PhysRevLett.129.211102

James S Bolton et al, Limits on non-canonical heating and turbulence in the intergalactic medium from the low redshift Lyman α forest, Monthly Notices of the Royal Astronomical Society (2022). DOI: 10.1093/mnras/stac862

Andrea Caputo et al, Dark Photon Oscillations in Our Inhomogeneous Universe, Physical Review Letters (2020). DOI: 10.1103/PhysRevLett.125.221303

Journal information: Monthly Notices of the Royal Astronomical Society  Physical Review Letters 

How far has nuclear fusion power come? We could be at a turning point for the technology

Mega Ampere Spherical Tokamak in Oxfordshire, UK. Credit: Courtesy of MAST, CC BY-SA

Our society faces the grand challenge of providing sustainable, secure and affordable means of generating energy, while trying to reduce carbon dioxide emissions to net zero around 2050.

To date, developments in fusion power, which potentially ticks all these boxes, have been funded almost exclusively by the public sector. However, something is changing.

Private equity investment in the global fusion industry has more than doubled in just one year—from US$2.1 billion in 2021 to US$4.7 billion in 2022, according to a survey from the Fusion Industry Association.

So, what is driving this recent change? There’s lots to be excited about.

Before we explore that, let’s take a quick detour to recap what fusion power is.

Merging atoms together

Fusion works the same way our Sun does, by merging two heavy hydrogen atoms under extreme heat and pressure to release vast amounts of energy.
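For the deuterium-tritium reaction targeted by most laboratory experiments, the energy bookkeeping is as follows (a textbook figure, not specific to any one device):

```latex
{}^{2}\mathrm{H} + {}^{3}\mathrm{H} \;\longrightarrow\; {}^{4}\mathrm{He}\ (3.5\ \mathrm{MeV}) + n\ (14.1\ \mathrm{MeV}),
\qquad Q = 17.6\ \mathrm{MeV}
```

That works out to roughly 3 × 10¹⁴ joules per kilogram of fuel, about ten million times the energy density of chemical combustion.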

It’s the opposite of the fission process used by nuclear power plants, in which atoms are split to release large amounts of energy.

Sustaining nuclear fusion at scale has the potential to produce a safe, clean, almost inexhaustible power source.

Our Sun sustains fusion at its core with a plasma of charged particles at around 15 million degrees Celsius. Down on Earth, we are aiming for hundreds of millions of degrees Celsius, because we don’t have the enormous mass of the Sun compressing the fuel down for us.

Scientists and engineers have worked out several designs for how we might achieve this, but most fusion reactors use strong magnetic fields to “bottle” and confine the hot plasma.

Generally, the main challenge to overcome on our road to commercial fusion power is to build environments that can contain the intensely hot burning plasma long enough for the fusion reaction to become self-sustaining, producing more energy than was needed to get it started.
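"Self-sustaining" has a standard quantitative statement, the Lawson triple product. For deuterium-tritium fuel, ignition requires roughly (a textbook benchmark, not a number from this article):

```latex
n\, T\, \tau_E \;\gtrsim\; 3 \times 10^{21}\ \mathrm{keV\ s\ m^{-3}}
```

where n is the plasma density, T its temperature, and τ_E the energy confinement time; hence the twin goals of hotter plasmas and longer confinement that recur throughout this article.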

Joining the public and private

Fusion development has been progressing since the 1950s. Most of it was driven by government funding for fundamental science.

Now, a growing number of private fusion companies around the world are forging ahead towards commercial fusion energy. A change in government attitudes has been crucial to this.

The US and UK governments are fostering public-private partnerships to complement their strategic research programs.

For example, the White House recently announced it would develop a “bold decadal vision for commercial fusion energy”.

A donut-shaped magnetic confinement device called a tokamak is one of the leading designs for a working fusion power generator, with many such experiments running worldwide. Credit: Christopher Roux, EUROfusion Consortium, CC BY

In the United Kingdom, the government has invested in a program aimed at connecting a fusion generator to the national electricity grid.

The technology has actually advanced, too

In addition to public-private resourcing, the technologies we need for fusion plants have come along in leaps and bounds.

In 2021, MIT scientists and Commonwealth Fusion Systems developed a record-breaking magnet that will allow them to build a compact fusion device called SPARC “that is substantially smaller, lower cost, and on a faster timeline”.

In recent years, several fusion experiments have also reached the all-important milestone of sustaining plasma temperatures of 100 million degrees Celsius or above. These include the EAST experiment in China, Korea’s flagship experiment KSTAR, and UK-based company Tokamak Energy.

These incredible feats demonstrate an unprecedented ability to replicate conditions found inside our Sun and keep extremely hot plasma trapped long enough to encourage fusion to occur.

In February, the Joint European Torus—the world’s most powerful operational tokamak—announced world-record energy confinement.

And the next-step fusion energy experiment to demonstrate net power gain, ITER, is under construction in France and now about 80% complete.

Magnets aren’t the only path to fusion either. In November 2021, the National Ignition Facility at Lawrence Livermore National Laboratory in California achieved a historic step forward for inertial confinement fusion.

By focusing nearly 200 powerful lasers to confine and compress a target the size of a pencil’s eraser, they produced a small fusion “hot spot” generating fusion energy over a short time period.

In Australia, a company called HB11 is developing proton-boron fusion technology through a combination of high-powered lasers and magnetic fields.

Fusion and renewables can go hand in hand

It is crucial that investment in fusion is not at the cost of other forms of renewable energy and the transition away from fossil fuels.

We can afford to expand adoption of current renewable energy technology like solar, wind, and pumped hydro while also developing next-generation solutions for electricity production.

This exact strategy was outlined recently by the United States in its Net-Zero Game Changers Initiative. In this plan, resource investment will be targeted to developing a path to rapid decarbonisation in parallel with the commercial development of fusion.

History shows us that incredible scientific and engineering progress is possible when we work together with the right resources—the rapid development of COVID-19 vaccines is just one recent example.

It is clear many scientists, engineers, and now governments and private investors (and even fashion designers) have decided fusion energy is a solution worth pursuing, not a pipe dream. Right now, it’s the best shot we’ve yet had to make fusion power a viable reality.

Provided by The Conversation 

Squeezing microwave fields by magnetostrictive interaction

The magnetostrictive interaction of an yttrium-iron-garnet (YIG) sphere in a cavity magnomechanical system prepares the magnon mode in a squeezed vacuum state. The squeezing is transferred to the coherently coupled microwave cavity field, thereby yielding a squeezed microwave cavity output field. Credit: Science China Press

Squeezed states of the electromagnetic field find many important applications in quantum information science and quantum metrology. Dr. Jie Li et al. at Zhejiang University put forward a new mechanism for preparing microwave squeezed vacuum states using a cavity magnomechanical system.

Specifically, the spin wave (magnon mode) formed by a large number of spins in a ferrimagnet couples to the phonon mode of the deformation vibration of the ferrimagnet via the magnetostrictive force. The magnetostrictive interaction is a nonlinear effect, which can establish a unique correlation between the amplitude and phase of the magnon mode. This correlation can reduce the quantum noise of the magnon mode, yielding squeezed vacuum of the magnon mode.
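In the standard cavity-magnomechanics model (our schematic summary, not necessarily the authors' exact notation), this physics is captured by a Hamiltonian of the form:

```latex
H/\hbar = \omega_a\, a^{\dagger}a + \omega_m\, m^{\dagger}m
 + \frac{\omega_b}{2}\left(q^2 + p^2\right)
 + g_{ma}\left(a^{\dagger}m + a\, m^{\dagger}\right)
 + g_{mb}\, m^{\dagger}m\, q
```

where a, m and (q, p) denote the microwave cavity, magnon and phonon modes. The nonlinear magnetostrictive term g_mb m†m q is what builds the amplitude-phase correlation, while the beam-splitter term g_ma(a†m + am†) is the state-swap coupling discussed next.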

Due to the state-swap interaction between magnons and cavity microwave photons, the cavity mode also gets squeezed, leading to squeezed vacuum of the microwave cavity output field. The work shows that the cavity magnomechanical system has advantages over the most widely used method of preparing microwave squeezed states, based on Josephson parametric amplifiers (JPAs), whose working temperature is typically 10–20 millikelvin.

This work shows that at a temperature of 200 millikelvin, the cavity magnomechanical system can produce microwave squeezed states with the same degree of squeezing as those produced by a JPA. This greatly relaxes the stringent requirement on ambient temperature. In addition, the operation of a JPA requires a large auxiliary circuit, while the cavity magnomechanical system is much simpler, which greatly reduces the cost of the experiment.

The work provides a new mechanism and approach for preparing microwave squeezed vacuum states, which will find many important applications in microwave quantum information processing and quantum metrology.

The paper is published in the journal National Science Review.

More information: Jie Li et al, Squeezing Microwaves by Magnetostriction, National Science Review (2022). DOI: 10.1093/nsr/nwac247

Provided by Science China Press 

A sustainable path for energy-demanding photochemistry

Conversion of readily available blue light into high-energy UV photons that cannot be provided by sunlight. Credit: Christoph Kerzig

Many photochemical processes rely on UV light from inefficient or toxic light sources that the LED technology cannot replace for technical reasons. An international team of scientists led by Professor Christoph Kerzig of Johannes Gutenberg University Mainz (JGU) in Germany and Professor Nobuhiro Yanai of Kyushu University in Japan has now developed the first molecular system for the conversion of blue light into high-energy UV photons with wavelengths below 315 nanometers.

These photons in the so-called UVB range are essential for numerous photochemical processes in the context of light-to-energy conversion, disinfection, or even wastewater treatment applications. However, sunlight cannot provide UVB photons, and their artificial generation typically relies on mercury lamps or other highly inefficient alternatives.

The new findings show that a metal-free photon upconversion (UC) system can transform readily available visible light into UVB photons. Hence, this breakthrough can be regarded as a more environmentally friendly approach. Initial mercury-free applications have already been demonstrated in the lab.

Collaborative research with a long tradition

Both research groups started working on upconversion several years ago. UC is a process in which the absorption of two photons of lower energy leads to the emission of one photon of higher energy. This technique has been developed to increase the efficiency of solar cells, mainly by converting low-energy photons in the infrared region.
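A quick energy check (our arithmetic, using the standard photon-energy relation E ≈ 1240 eV·nm / λ) shows why blue-to-UVB conversion is feasible:

```latex
E_{450\,\mathrm{nm}} \approx \frac{1240\ \mathrm{eV\,nm}}{450\ \mathrm{nm}} \approx 2.76\ \mathrm{eV}, \qquad
E_{315\,\mathrm{nm}} \approx 3.94\ \mathrm{eV}
```

Two blue photons together carry about 5.5 eV, comfortably above the 3.94 eV needed for a sub-315-nanometer UVB photon even after the internal losses of the upconversion cycle.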

“In contrast, highly energetic UV photons are within reach when blue light is used as the energy source,” explained Professor Kerzig of the Department of Chemistry at Mainz University.

Tailor-made molecules have been prepared in Mainz and characterized with a new large-scale laser device recently installed in the Kerzig group. Furthermore, special spectroscopic techniques in the lab of Professor Nobuhiro Yanai have been applied to the UC system to understand its performance in detail.

While the current paper represents the first collaboration between the Kerzig and Yanai groups, the chemistry departments of both universities have a well-established student exchange program. This novel collaboration will further strengthen the network between Mainz and Kyushu.

Development of reusable upconversion materials

The scientists used a commercial blue LED as a light source and exploited the generated UV light for the cleavage of strong chemical bonds that would otherwise require very harsh reaction conditions. Moreover, using the laser setup in Mainz, Ph.D. student Till Zähringer managed to observe all intermediates in the complex energy conversion mechanism.

“Our next goal is to develop reusable materials for versatile applications,” said Professor Nobuhiro Yanai.

His group in Kyushu is well known for the development of photoactive materials. The combination of materials science, photochemistry, and photocatalysis in the framework of the Kyushu-Mainz collaboration will pave the way for this ambitious goal.

The research is published in the journal Angewandte Chemie International Edition.

More information: Till J. B. Zähringer et al, Blue‐to‐UVB Upconversion, Solvent Sensitization and Challenging Bond Activation Enabled by a Benzene‐Based Annihilator, Angewandte Chemie International Edition (2022). DOI: 10.1002/anie.202215340

Journal information: Angewandte Chemie International Edition 

Provided by Universitaet Mainz 

Changing the color of quantum light on an integrated chip

Changing the color of single photons using an integrated phase modulator. Credit: Loncar Lab/Harvard SEAS

Optical photons are ideal carriers of quantum information. But to work together in a quantum computer or network, they need to have the same color—or frequency—and bandwidth. Changing a photon’s frequency requires altering its energy, which is particularly challenging on integrated photonic chips.

Recently, researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) developed an integrated electro-optic modulator that can efficiently change the frequency and bandwidth of single photons. The device could be used for more advanced quantum computing and quantum networks.

The research is published in Light: Science & Applications.

Converting a photon from one color to another is usually done by sending the photon into a crystal with a strong laser shining through it, a process that tends to be inefficient and noisy. Phase modulation, in which the oscillation of the photon’s wave is sped up or slowed down to change the photon’s frequency, offers a more efficient method, but the device required for such a process, an electro-optic phase modulator, has proven difficult to integrate on a chip.
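The principle behind phase modulation is compact (a textbook relation, not specific to this device): the instantaneous frequency of a light field is the time derivative of its total phase,

```latex
E(t) = A(t)\, e^{\,i\left[\omega_0 t + \phi(t)\right]}, \qquad
\omega(t) = \omega_0 + \frac{\mathrm{d}\phi}{\mathrm{d}t}
```

so a phase ramped linearly in time, φ(t) = Δω·t, translates the carrier by Δω, while a quadratic phase rescales the bandwidth, which is the "time lens" effect described below.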

One material may be uniquely suited for such an application: thin-film lithium niobate.

“In our work, we adopted a new modulator design on thin-film lithium niobate that significantly improved the device performance,” said Marko Lončar, the Tiantsai Lin Professor of Electrical Engineering at SEAS and senior author of the study. “With this integrated modulator, we achieved record-high terahertz frequency shifts of single photons.”

The team also used the same modulator as a “time lens”—a magnifying glass that bends light in time instead of space—to change the spectral shape of a photon from fat to skinny.

“Our device is much more compact and energy-efficient than traditional bulk devices,” said Di Zhu, the first author of the paper. “It can be integrated with a wide range of classical and quantum devices on the same chip to realize more sophisticated quantum light control.”

Di is a former postdoctoral fellow at SEAS and is currently a research scientist at the Agency for Science, Technology and Research (A*STAR) in Singapore.

Next, the team aims to use the device to control the frequency and bandwidth of quantum emitters for applications in quantum networks.

The research was a collaboration between Harvard, MIT, HyperLight, and A*STAR.

The paper was co-authored by Changchen Chen, Mengjie Yu, Linbo Shao, Yaowen Hu, C. J. Xin, Matthew Yeh, Soumya Ghosh, Lingyan He, Christian Reimer, Neil Sinclair, Franco N. C. Wong, and Mian Zhang.

More information: Di Zhu et al, Spectral control of nonclassical light pulses using an integrated thin-film lithium niobate modulator, Light: Science & Applications (2022). DOI: 10.1038/s41377-022-01029-7

Journal information: Light: Science & Applications 

Provided by Harvard John A. Paulson School of Engineering and Applied Sciences 

Physicists produce symmetry-protected Majorana edge modes on quantum computer

An artist’s depiction of Majorana edge modes on a chain of superconducting qubits. Credit: Google Quantum AI

Physicists at Google Quantum AI have used their quantum computer to study a type of effective particle that is more resilient to environmental disturbances that can degrade quantum calculations. These effective particles, known as Majorana edge modes, form as a result of a collective excitation of multiple individual particles, like ocean waves form from the collective motions of water molecules. Majorana edge modes are of particular interest in quantum computing applications because they exhibit special symmetries that can protect the otherwise fragile quantum states from noise in the environment.

The condensed matter physicist Philip Anderson once wrote, “It is only slightly overstating the case to say that physics is the study of symmetry.” Indeed, studying physical phenomena and their relationship to underlying symmetries has been the main thrust of physics for centuries. Symmetries are simply statements about which transformations—such as a translation, a rotation, or an inversion through a mirror—leave a system unchanged. They can simplify problems and elucidate underlying physical laws. And, as shown in the new research, symmetries can even prevent the seemingly inexorable quantum process of decoherence.

When running a calculation on a quantum computer, we typically want the quantum bits, or “qubits,” in the computer to be in a single, pure quantum state. But decoherence occurs when external electric fields or other environmental noise disturb these states, jumbling them together with other, undesired states. If a state has a certain symmetry, then it could be possible to isolate it, effectively creating an island of stability that is impossible to mix with the other states that don’t also have the special symmetry. In this way, since the noise can no longer connect the symmetric state to the others, it could preserve the coherence of the state.

In 2000, the physicist Alexei Kitaev devised a simple model to generate symmetry-protected quantum states. The model consisted of a chain of interconnected particles called fermions. They could be connected in such a way that two effective particles would appear at the ends of the chain. But these were no ordinary particles—they were delocalized in space, with each appearing at both ends of the chain simultaneously.
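In its standard form (our notation), Kitaev's chain of N fermions c_j with hopping t, pairing Δ and chemical potential μ reads:

```latex
H = -\mu \sum_{j=1}^{N} c_j^{\dagger} c_j
 - \sum_{j=1}^{N-1} \left( t\, c_j^{\dagger} c_{j+1} + \Delta\, c_j c_{j+1} + \mathrm{h.c.} \right)
```

Splitting each fermion into two Majorana operators, γ_{2j−1} = c_j + c_j† and γ_{2j} = −i(c_j − c_j†), one finds that at the sweet spot t = Δ, μ = 0 every Majorana pairs with a neighbor except γ_1 and γ_{2N}, which drop out of H entirely; these two unpaired, zero-energy operators are the delocalized edge modes described next.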

These were the Majorana edge modes (MEMs). The two modes had distinctly different behaviors under so-called parity transformation. One mode looked identical under this transformation, so it was a symmetry of the state. The other picked up a minus sign. The difference in parity between these two states meant that they could not be mixed by many external noise sources (i.e. those that also had parity symmetry).

In their new paper published in Science and titled “Noise-resilient edge modes on a chain of superconducting qubits,” Xiao Mi, Pedram Roushan, Dima Abanin and their colleagues at Google realized these MEMs with superconducting qubits for the first time. They used a mathematical transformation called the Jordan-Wigner transformation to map the model Kitaev had considered onto one that they could realize on their quantum computer: the 1D kicked-Ising model. This model couples each qubit in a 1D chain to its two nearest neighbors, so that neighboring qubits interact with one another; a periodic “kick” then disturbs the chain.
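Schematically (our simplified notation; the gate angles in the actual experiment vary from site to site), one period of the kicked-Ising evolution is the Floquet unitary

```latex
U_F = \exp\!\left(-\,i\,\frac{\pi g}{2} \sum_j X_j\right)
      \exp\!\left(-\,i \sum_j \phi_j\, Z_j Z_{j+1}\right)
```

whose second factor implements the nearest-neighbor Ising couplings and whose first factor is the periodic transverse “kick” of strength g; under the Jordan-Wigner transformation this circuit maps onto Kitaev's fermion chain.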

Mi and his colleagues looked for signatures of the MEMs by comparing the behavior of the edge qubits with those in the middle of the chain. While the state of the qubits in the middle decohered rapidly, the states of those on the edge lasted much longer. Mi says this was “preliminary indication for the resilience of the MEMs toward external decoherence.”

The team then conducted a series of systematic studies on the noise resilience of the MEMs. As a first step, they measured the energies corresponding to the various quantum states of the system and observed that they exactly matched the textbook example of the Kitaev model. In particular, they found that the two MEMs at the opposite ends of the chain are exponentially more difficult to mix as the system size grew—a hallmark feature of the Kitaev model.

Next, the team perturbed the system by adding low-frequency noise to the control operations in the quantum circuits. They found that the MEMs were immune to such perturbations, contrasting sharply with other generic edge modes without symmetries. Surprisingly, the team also found that the MEMs are resilient even to some noise that breaks the symmetries of the Ising model. This is due to a mechanism called “prethermalization,” which arises from the large energy cost required to change the MEMs into other possible excitations in the system.

Lastly, the team measured the full wavefunctions of the MEMs. Doing so required simultaneously measuring the states of varying numbers of qubits close to either end of the chain. Here they made another surprising discovery: No matter how many qubits a measurement included, its decay time was identical. In other words, measurements involving even up to 12 qubits decayed over the same time scale as those of just one qubit. This was contrary to the intuitive expectation that larger quantum observables decay faster in the presence of noise, and further highlighted the collective nature and noise resilience of the MEMs.

Mi and Roushan believe that in the future, they might be able to use MEMs to enable symmetry-protected quantum gates. Their work demonstrates that the MEMs are insensitive to both low-frequency noise and small errors, so this is a promising route to making more robust gates in a quantum processor.

The researchers plan to continue to improve the level of protection these MEMs experience, hopefully to rival some of the leading techniques used to fight against decoherence in quantum computers. Abanin says, “A key question for future works is whether these techniques can be extended to achieve the levels of protection comparable to active error-correction codes.”

More information: X. Mi et al, Noise-resilient edge modes on a chain of superconducting qubits, Science (2022). DOI: 10.1126/science.abq5769

Journal information: Science 

Provided by Google Quantum AI

Researchers realize long-lived storage of multimode quantum states

Principle of the experiment and the schematic of the clock-state preparation. Credit: Ye Yinghao et al

Recently, a team led by Prof. Guo Guangcan achieved long-lived storage of high-dimensional orbital angular momentum (OAM) quantum states of photons based on cold atomic ensembles, using a guiding magnetic field combined with clock state preparation. Their work was published in Physical Review Letters.

Previous work has shown that integrating multimode memory into quantum networks can greatly improve channel capacity, which is crucial for long-distance quantum communication. The collective enhancement effect of the cold atomic ensemble makes it an efficient medium for storing photonic information. Although important progress has been made, many problems remain to be solved in long-lived spatial multimode memory based on cold atomic ensembles. One of them is how to maintain high fidelity for multimode memory after a long storage time, since multiple spatial modes are more easily affected by the surrounding environment.

Based on the degrees of freedom of OAM, the team carried out research on the long-lived storage of high-dimensional multimode quantum states using the cold 85Rb system. In this work, to overcome the effect of inhomogeneous evolution due to the spatial complexity of stored OAM, the team used a guiding magnetic field to dominate atomic evolution and then employed a pair of magnetically insensitive states to suppress the decoherence in the transverse direction. After the clock states were employed, the destructive interference between different Zeeman sublevels was eliminated, which consequently extended the lifetime of faithful storage.
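The benefit of clock states can be read off from the first-order Zeeman shift of a sublevel |F, m_F⟩ (textbook atomic physics, not the paper's full treatment):

```latex
\Delta E_{F, m_F} \approx g_F\, \mu_B\, m_F\, B
```

For a pair of m_F = 0 sublevels this linear dependence on the magnetic field B vanishes, so the stored superposition accumulates almost no field-dependent phase and the transverse decoherence is suppressed.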

The team extended the dimension of the stored OAM superposition states to three in the experiment, and achieved a fidelity that exceeds the quantum-classical criterion after a storage time of 400 μs, two orders of magnitude longer than in previous works. When the storage time was extended from 10 μs to 400 μs, the retrieval efficiency dropped from 10.7% to 4.7%, a clear decreasing trend, while the fidelity barely decayed.

More information: Ying-Hao Ye et al, Long-Lived Memory for Orbital Angular Momentum Quantum States, Physical Review Letters (2022). DOI: 10.1103/PhysRevLett.129.193601

Journal information: Physical Review Letters 

Provided by University of Science and Technology of China

Studying muonium to reveal new physics beyond the Standard Model

Muonium frequency scan at 22.5 W in the range of 200–800 MHz. The fitted black line includes the 3S contribution; the gray line does not. The error bars correspond to the counting statistical error. The colored areas represent the underlying contributions from 2S − 2P1/2 transitions, namely 583 MHz (blue), 1140 MHz (orange), 1326 MHz (green), and the combined 3S − 3P1/2 (yellow). The data point with TL OFF is not displayed in the figure, but is included in the fit; it lies at 20.4(4) × 10−4. Credit: Nature Communications (2022). DOI: 10.1038/s41467-022-34672-0

By studying an exotic atom called muonium, researchers are hoping misbehaving muons will spill the beans on the Standard Model of particle physics. To make muonium, they use the most intense continuous beam of low energy muons in the world at Paul Scherrer Institute PSI. The research is published in Nature Communications.

The muon is often described as the electron’s heavy cousin. A more appropriate description might be its rogue relation. Ever since its discovery prompted Nobel laureate Isidor Isaac Rabi’s famous quip, “Who ordered that?”, the muon has been bamboozling scientists with its law-breaking antics.

The muon’s most famous misdemeanor is to wobble slightly too much in a magnetic field: its anomalous magnetic moment hit the headlines with the 2021 muon g-2 experiment at Fermilab. The muon also notably caused trouble when it was used to measure the radius of the proton—giving a value wildly different from previous measurements, in what became known as the proton radius puzzle.

Yet rather than being chastised, the muon is cherished for its surprising behavior, which makes it a likely candidate to reveal new physics beyond the Standard Model.

Aiming to make sense of the muon’s strange behavior, researchers from PSI and ETH Zurich turned to an exotic atom known as muonium. Formed from a positive muon orbited by an electron, muonium is similar to hydrogen but much simpler. Whereas hydrogen’s proton is made up of quarks, muonium’s positive muon has no substructure. And this means it provides a very clean model system from which to sort these problems out: for example, by obtaining extremely precise values of fundamental constants such as the mass of the muon.
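The link between muonium spectroscopy and the muon mass is the reduced mass of the hydrogen-like two-body problem (the standard Bohr formula, quoted here for orientation):

```latex
E_n = -\,\frac{\mu_r\, c^2 \alpha^2}{2 n^2}, \qquad
\mu_r = \frac{m_e\, m_{\mu}}{m_e + m_{\mu}}
```

Because the muon weighs about 207 electron masses, μ_r differs from m_e by roughly 0.5%, and every measured transition frequency therefore carries direct sensitivity to m_μ.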

“With muonium, because we can measure its properties so precisely, we can try to detect any deviation from the Standard Model. And if we see this, we can then infer which of the theories that go beyond the Standard Model are viable or not,” explains Paolo Crivelli from ETH Zurich, who is leading the study supported by a European Research Council Consolidator grant in the frame of the Mu-MASS project.

Making sense of the muon's misdemeanours
By making precise measurements in an exotic atom known as muonium, Crivelli and Prokscha are aiming to understand puzzling results using muons, which may in turn reveal gaps in the laws of physics as we know them. To make the measurements, they use the most intense, continuous source of low energy muons in the world at the Paul Scherrer Institute PSI in Switzerland. Credit: Paul Scherrer Institute / Mahir Dzambegovic

Only one place in the world where this is possible

A major challenge to making these measurements very precisely is having an intense beam of muonium particles so that statistical errors can be reduced. Making lots of muonium, which incidentally lasts for only two microseconds, is not simple. There is one place in the world where enough positive muons at low energy are available to create this: PSI’s Swiss Muon Source.

“To make muonium efficiently, we need to use slow muons. When they’re first produced they’re going at a quarter of the speed of light. We then need to slow them down by a factor of a thousand without losing them. At PSI, we’ve perfected this art. We have the most intense continuous source of low energy muons in the world. So we’re uniquely positioned to perform these measurements,” says Thomas Prokscha, who heads the Low Energy Muons group at PSI.

At the Low Energy Muons beamline, slow muons pass through a thin foil target where they pick up electrons to form muonium. As they emerge, Crivelli’s team are waiting to probe their properties using microwave and laser spectroscopy.

Tiny change in energy levels could hold the key

The property of muonium that the researchers are able to study in such detail is its energy levels. In the recent publication, the teams were able to measure for the first time a transition between certain very specific energy sublevels in muonium. Isolated from other so-called hyperfine levels, the transition can be modeled extremely cleanly. The ability to now measure it will facilitate other precision measurements: in particular, to obtain an improved value of an important quantity known as the Lamb shift.

The Lamb shift is a minuscule change in certain energy levels in hydrogen relative to where they “should” be as predicted by classical theory. The shift was explained with the advent of Quantum Electrodynamics (the quantum theory of how light and matter interact). Yet, as discussed, hydrogen’s proton—with its substructure—complicates things. An ultra-precise Lamb shift measured in muonium could put the theory of Quantum Electrodynamics to the test.

There is more. The muon is nine times lighter than the proton. This means that effects relating to the nuclear mass, such as how a particle recoils after absorbing a photon of light, are enhanced. Undetectable in hydrogen, these effects could be measured at high precision in muonium, enabling scientists to test certain theories that would explain the muon g-2 anomaly: for example, the existence of new particles such as scalar or vector gauge bosons.

Putting the muon on the scales

However exciting the potential of this may be, the team have a greater goal in their sights: weighing the muon. To do this, they will measure a different transition in muonium to a precision one thousand times greater than ever before.

An ultra-high precision value of the muon mass—the goal is 1 part per billion—will support ongoing efforts to reduce uncertainty even further for muon g-2. “The muon mass is a fundamental parameter that we cannot predict with theory, and so as experimental precision improves, we desperately need an improved value of the muon mass as an input for the calculations,” explains Crivelli.

The measurement could also lead to a new value of the Rydberg constant—an important fundamental constant in atomic physics—that is independent of hydrogen spectroscopy. This could explain discrepancies between measurements that gave rise to the proton radius puzzle, and maybe even solve it once and for all.

Muonium spectroscopy poised to fly with IMPACT project

Given that the main limitation for such experiments is producing enough muonium to reduce statistical errors, the outlook for this research at PSI looks bright.

“With the high intensity muon beams planned for the IMPACT project we could potentially go a factor of one hundred higher in precision, and this would be getting very interesting for the Standard Model,” says Prokscha.

More information: Gianluca Janka et al, Measurement of the transition frequency from 2S1/2, F = 0 to 2P1/2, F = 1 states in Muonium, Nature Communications (2022). DOI: 10.1038/s41467-022-34672-0

Journal information: Nature Communications 

Provided by Paul Scherrer Institute 

Covering a cylinder with a magnetic coil triples its energy output in nuclear fusion test

(a) Sketch of the magnetized NIF hohlraum constructed from AuTa4 with solenoidal coil to carry current. (b) X-ray drive measured through one of the LEHs and the incident laser powers for a magnetized and unmagnetized AuTa4 hohlraum and two unmagnetized Au hohlraums. “BF” refers to “BigFoot,” the name of the previous ignition design. Credit: Physical Review Letters (2022). DOI: 10.1103/PhysRevLett.129.195002

A team of researchers working at the National Ignition Facility, part of Lawrence Livermore National Laboratory, has found that covering a cylinder containing a small amount of hydrogen fuel with a magnetic coil and firing lasers at it triples its energy output—another step toward the development of nuclear fusion as a power source.

In their paper published in the journal Physical Review Letters, the team, which has members from several facilities in the U.S., one in the U.K. and one in Japan, describes upgrading their setup to allow for the introduction of the magnetic coil.

Last year, a team working at the same facility announced that they had come closer to achieving ignition in a nuclear fusion test than anyone had before. Unfortunately, they were unable to repeat their results. Since that time, the team has been reviewing their original design, looking for ways to make it better.

The original design involved firing 192 laser beams at a tiny cylinder containing a tiny sphere of hydrogen at its center. This created X-rays that heated the sphere until its atoms began to fuse. Some of the design improvements have involved changing the size of the holes through which the lasers pass, but they have only led to minor changes.

Looking for a better solution, the team studied prior research and found several studies that had shown, via simulation, that encasing a cylinder in a magnetic field should significantly increase the energy output.

Putting the suggestion into practice, the researchers had to modify the cylinder, which was originally made of gold: placing it in a strong magnetic field would create an electric current strong enough to tear the cylinder apart, so they made a new one from an alloy of gold and tantalum. They also switched the gas from hydrogen to deuterium (a heavier isotope of hydrogen), then wound a coil around the whole assembly to immerse it in a strong magnetic field. Then they fired up the lasers. The researchers saw an immediate improvement: the temperature of the hot spot on the sphere went up by 40% and the energy output was tripled.
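One way to see why the field helps (our sketch, based on standard magnetized-plasma transport rather than the paper's detailed modeling): electrons spiral along field lines, so heat conduction across the field out of the hot spot is throttled roughly as

```latex
\kappa_{\perp} \approx \frac{\kappa_0}{1 + \left(\omega_{ce}\,\tau_{ei}\right)^2},
\qquad \omega_{ce} = \frac{e B}{m_e}
```

where κ_0 is the unmagnetized electron thermal conductivity, ω_ce the electron cyclotron frequency and τ_ei the electron-ion collision time; once ω_ce τ_ei exceeds unity, the hot spot retains its heat markedly better.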

The work marks a step toward the ultimate goal—creating a fusion reactor that can produce more energy than is put into it.

More information: J. D. Moody et al, Increased Ion Temperature and Neutron Yield Observed in Magnetized Indirectly Driven D2 -Filled Capsule Implosions on the National Ignition Facility, Physical Review Letters (2022). DOI: 10.1103/PhysRevLett.129.195002

Journal information: Physical Review Letters 

© 2022 Science X Network

Using machine learning to infer rules for designing complex mechanical metamaterials

Two combinatorial mechanical metamaterials designed in such a way that the letters M and L bulge out in the front when being squeezed between two plates (top and bottom). Designing novel metamaterials such as this is made easy by AI. Credit: Daan Haver and Yao Du, University of Amsterdam

Mechanical metamaterials are sophisticated artificial structures with mechanical properties that are driven by their structure, rather than their composition. While these structures have proved to be very promising for the development of new technologies, designing them can be both challenging and time-consuming.

Researchers at the University of Amsterdam, AMOLF, and Utrecht University have recently demonstrated the potential of convolutional neural networks (CNNs), a class of machine learning algorithms, for designing complex mechanical metamaterials. Their paper, published in Physical Review Letters, introduces two different CNN-based methods that can derive and capture the subtle combinatorial rules underpinning the design of mechanical metamaterials.

“Our recent study can be considered a continuation of the combinatorial design approach introduced in a previous paper, which can be applied to more complicated building blocks,” Ryan van Mastrigt, one of the researchers who carried out the study, told Phys.org. “Around the time when I started working on this study, Aleksi Bossart and David Dykstra were working on a combinatorial metamaterial that is able to host multiple functionalities, meaning a material that can deform in multiple distinct ways depending on how one actuates it.”

As part of their previous research, van Mastrigt and his colleagues tried to distill the rules underpinning the successful design of complex metamaterials. They soon realized that this was far from an easy task, as the “building blocks” that make up these structures can be deformed and arranged in countless different ways.

Previous studies showed that when metamaterials have small unit-cell sizes (i.e., a limited number of “building blocks”), simulating all the ways in which these blocks can be deformed and arranged using conventional physics simulation tools is possible. As these unit-cell sizes become larger, however, the task becomes extremely challenging or impossible.

“Since we were unable to reason about any underlying design rules and conventional tools failed at allowing us to explore larger unit cell designs in an efficient way, we decided to consider machine learning as a serious option,” van Mastrigt explained. “Thus, the main objective of our study became to identify a machine learning tool that would allow us to explore the design space much quicker than before. I think that we succeeded and even exceeded our own expectations with our findings.”

To successfully train CNNs to tackle the design of complex metamaterials, van Mastrigt and his colleagues initially had to overcome a series of challenges. Firstly, they had to find a way to effectively represent their metamaterial designs.

“We tried a couple of approaches and finally settled on what we refer to as the pixel representation,” van Mastrigt explained. “This representation encodes the orientation of each building block in a clear visual manner, such that the classification problem is cast to a visual pattern detection problem, which is exactly what CNNs are good at.”

Subsequently, the researchers had to devise methods that accounted for the huge class imbalance among metamaterial designs. In other words, as there are currently many known metamaterials belonging to class I, but far fewer belonging to class C (the class that the researchers are interested in), training CNNs to infer combinatorial rules for these different classes might entail different steps.

To tackle this challenge, van Mastrigt and his colleagues devised two different CNN-based techniques. These two techniques are applicable to different metamaterial classes and classification problems.

“In the case of metamaterial M2, we tried to create a training set that is class-balanced,” van Mastrigt said. “We did this using naïve undersampling (i.e., throwing away a lot of class I examples), combined with symmetries that we know some designs have, such as translational and rotational symmetry, to create additional class C designs.

“This approach thus requires some domain knowledge. For metamaterial M1, on the other hand, we added a reweight term to the loss function such that the rare class C designs weigh more heavily during training, where the key idea is that this reweighting of class C cancels out with the much larger number of class I designs in the training set. This approach requires no domain knowledge.”
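As a concrete illustration of the reweighting strategy (a minimal sketch in PyTorch, not the authors' code; the unit-cell size, class counts and network layout are assumptions made for the example):

```python
import torch
import torch.nn as nn

class UnitCellCNN(nn.Module):
    """Toy CNN over the 'pixel representation' of a k-by-k unit cell,
    where each pixel encodes a building block's orientation."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),  # local orientation patterns
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                      # fixed-size descriptor
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Class I vastly outnumbers the interesting class C, so weight the loss
# inversely to class frequency -- the reweighting used for metamaterial M1.
counts = torch.tensor([9900.0, 100.0])           # hypothetical I vs. C counts
weights = counts.sum() / (len(counts) * counts)  # rare class C weighs more
loss_fn = nn.CrossEntropyLoss(weight=weights)

model = UnitCellCNN()
x = torch.randn(16, 1, 8, 8)        # a batch of 8x8 pixel-encoded designs
y = torch.randint(0, 2, (16,))      # labels: 0 = class I, 1 = class C
loss = loss_fn(model(x), y)         # rare-class errors dominate the gradient
loss.backward()
```

Inverse-frequency weighting keeps every class I example in the training set, whereas undersampling discards most of them; this is the trade-off the researchers describe above.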

In initial tests, both these CNN-based methods for deriving the combinatorial rules behind the design of mechanical metamaterials achieved highly promising results. The team found that they each performed better on different tasks, depending on the initial dataset used and known (or unknown) design symmetries.

“We showed just how extraordinarily good these networks are at solving complex combinatorial problems,” van Mastrigt said. “This was really surprising for us, since all other conventional (statistical) tools we as physicists commonly use fail for these types of problems. We showed that neural networks really do more than just interpolate the design space based on the examples you give them, as they appear to be somehow biased to find a structure (which comes from rules) in this design space that generalizes extremely well.”

The recent findings gathered by this team of researchers could have far-reaching implications for the design of metamaterials. While the networks they trained have so far been applied to a few metamaterial structures, they could eventually also be used to create far more complex designs, which would be incredibly difficult to tackle using conventional physics simulation tools.

The work by van Mastrigt and his colleagues also highlights the huge value of CNNs for tackling combinatorial problems, optimization tasks that entail composing an “optimal object” or deriving an “optimal solution” that satisfies all constraints in a set, in instances where there are numerous variables at play. As combinatorial problems are common in numerous scientific fields, this paper could promote the use of CNNs in other research and development settings.

The researchers showed that even if machine learning is typically a “black box” approach (i.e., it does not always allow researchers to view the processes behind a given prediction or outcome), it can still be very valuable for exploring the design space for metamaterials, and potentially other materials, objects, or chemical substances. This could in turn potentially help to reason about and better understand the complex rules underlying effective designs.

“In our next studies, we will turn our attention to inverse design,” van Mastrigt added. “The current tool already helps us enormously to reduce the design space to find suitable (class C) designs, but it does not find us the best design for the task we have in mind. We are now considering machine learning methods that will help us find extremely rare designs that have the properties that we want, ideally even when no examples of such designs are shown to the machine learning method beforehand.

“This is a very hard problem, but after our recent study, we believe that neural networks will allow us to successfully tackle it.”

More information: Ryan van Mastrigt et al, Machine Learning of Implicit Combinatorial Rules in Mechanical Metamaterials, Physical Review Letters (2022). DOI: 10.1103/PhysRevLett.129.198003

Corentin Coulais et al, Combinatorial design of textured mechanical metamaterials, Nature (2016). DOI: 10.1038/nature18960

Anne S. Meeussen et al, Topological defects produce exotic mechanics in complex metamaterials, Nature Physics (2020). DOI: 10.1038/s41567-019-0763-6

Aleksi Bossart et al, Oligomodal metamaterials with multifunctional mechanics, Proceedings of the National Academy of Sciences (2021). DOI: 10.1073/pnas.2018610118

Journal information: Nature Physics  Nature  Physical Review Letters  Proceedings of the National Academy of Sciences