Conversion of readily available blue light into high-energy UV photons that cannot be provided by sunlight. Credit: Christoph Kerzig
Many photochemical processes rely on UV light from inefficient or toxic light sources that LED technology cannot replace for technical reasons. An international team of scientists led by Professor Christoph Kerzig of Johannes Gutenberg University Mainz (JGU) in Germany and Professor Nobuhiro Yanai of Kyushu University in Japan has now developed the first molecular system for the conversion of blue light into high-energy UV photons with wavelengths below 315 nanometers.
These photons in the so-called UVB range are essential for numerous photochemical processes in the context of light-to-energy conversion, disinfection, or even wastewater treatment applications. However, sunlight cannot provide UVB photons, and their artificial generation typically relies on mercury lamps or other highly inefficient alternatives.
The new findings show that a metal-free photon upconversion (UC) system can transform readily available visible light into UVB photons. Hence, this breakthrough can be regarded as a more environmentally friendly approach. Initial mercury-free applications have already been demonstrated in the lab.
Collaborative research with a long tradition
Both research groups started working on upconversion several years ago. UC is a process in which the absorption of two photons of lower energy leads to the emission of one photon of higher energy. This technique has been developed to increase the efficiency of solar cells, mainly by converting low-energy photons in the infrared region.
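As a rough energy bookkeeping check (with generic wavelengths chosen for illustration, not values from the paper), two blue photons together carry more energy than one UVB photon, which is what makes this kind of upconversion possible in principle:

```python
# Back-of-the-envelope check: do two blue photons carry enough energy for one UVB photon?
# Wavelengths are generic illustrative values, not numbers from the paper.
h = 6.626e-34    # Planck constant, J*s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electronvolt

def photon_energy_eV(wavelength_nm):
    """Photon energy E = h*c/lambda, in electronvolts."""
    return h * c / (wavelength_nm * 1e-9) / eV

blue = photon_energy_eV(450)   # a typical blue-LED photon
uvb = photon_energy_eV(310)    # a UVB photon (below 315 nm)

print(f"one blue photon:  {blue:.2f} eV")    # ~2.8 eV
print(f"one UVB photon:   {uvb:.2f} eV")     # ~4.0 eV
print(f"two blue photons: {2*blue:.2f} eV")  # ~5.5 eV, comfortably above one UVB photon
```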
“In contrast, highly energetic UV photons are within reach when blue light is used as the energy source,” explained Professor Kerzig of the Department of Chemistry at Mainz University.
Tailor-made molecules have been prepared in Mainz and characterized with a new large-scale laser device recently installed in the Kerzig group. Furthermore, special spectroscopic techniques in the lab of Professor Nobuhiro Yanai have been applied to the UC system to understand its performance in detail.
While the current paper represents the first collaboration between the Kerzig and Yanai groups, the chemistry departments of both universities have a well-established student exchange program. This novel collaboration will further strengthen the network between Mainz and Kyushu.
Development of reusable upconversion materials
The scientists used a commercial blue LED as light source and exploited the generated UV light for the cleavage of strong chemical bonds that would otherwise require very harsh reaction conditions. Moreover, using the laser setup in Mainz, Ph.D. student Till Zähringer managed to observe all intermediates in the complex energy conversion mechanism.
“Our next goal is to develop reusable materials for versatile applications,” said Professor Nobuhiro Yanai.
His group in Kyushu is well known for the development of photoactive materials. The combination of materials science, photochemistry, and photocatalysis in the framework of the Kyushu-Mainz collaboration will pave the way for this ambitious goal.
The research is published in the journal Angewandte Chemie International Edition.
More information: Till J. B. Zähringer et al, Blue‐to‐UVB Upconversion, Solvent Sensitization and Challenging Bond Activation Enabled by a Benzene‐Based Annihilator, Angewandte Chemie International Edition (2022). DOI: 10.1002/anie.202215340
Changing the color of single photons using an integrated phase modulator. Credit: Loncar Lab/Harvard SEAS
Optical photons are ideal carriers of quantum information. But to work together in a quantum computer or network, they need to have the same color—or frequency—and bandwidth. Changing a photon’s frequency requires altering its energy, which is particularly challenging on integrated photonic chips.
Recently, researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) developed an integrated electro-optic modulator that can efficiently change the frequency and bandwidth of single photons. The device could be used for more advanced quantum computing and quantum networks.
The research is published in Light: Science & Applications.
Converting a photon from one color to another is usually done by sending the photon into a crystal with a strong laser shining through it, a process that tends to be inefficient and noisy. Phase modulation, in which the photon wave’s oscillation is accelerated or slowed down to change the photon’s frequency, offers a more efficient method, but the device required for such a process, an electro-optic phase modulator, has proven difficult to integrate on a chip.
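The underlying idea can be sketched numerically: a phase ramp that is linear in time, φ(t) = 2πΔf·t, shifts a pulse's spectrum by Δf. The toy calculation below uses invented pulse parameters and a 200 GHz shift purely to illustrate the principle; it is not a model of the Harvard device.

```python
# Minimal illustration of frequency shifting by phase modulation (serrodyne principle).
# A linear phase ramp phi(t) = 2*pi*df*t applied to a pulse shifts its spectrum by df.
# Pulse width and shift are illustrative, not the parameters of the device in the paper.
import numpy as np

t = np.linspace(-500e-12, 500e-12, 2**14)          # time axis, seconds
dt = t[1] - t[0]
pulse = np.exp(-(t / 50e-12) ** 2)                  # Gaussian pulse envelope, ~50 ps

df = 200e9                                          # desired shift: 200 GHz (illustrative)
shifted = pulse * np.exp(1j * 2 * np.pi * df * t)   # apply the linear phase ramp

freqs = np.fft.fftshift(np.fft.fftfreq(t.size, dt))
spec_in = np.abs(np.fft.fftshift(np.fft.fft(pulse))) ** 2
spec_out = np.abs(np.fft.fftshift(np.fft.fft(shifted))) ** 2

print(f"input  spectral peak: {freqs[np.argmax(spec_in)] / 1e9:8.1f} GHz")
print(f"output spectral peak: {freqs[np.argmax(spec_out)] / 1e9:8.1f} GHz")  # shifted by ~200 GHz
```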
“In our work, we adopted a new modulator design on thin-film lithium niobate that significantly improved the device performance,” said Marko Lončar, the Tiantsai Lin Professor of Electrical Engineering at SEAS and senior author of the study. “With this integrated modulator, we achieved record-high terahertz frequency shifts of single photons.”
The team also used the same modulator as a “time lens”—a magnifying glass that bends light in time instead of space—to change the spectral shape of a photon from fat to skinny.
“Our device is much more compact and energy-efficient than traditional bulk devices,” said Di Zhu, the first author of the paper. “It can be integrated with a wide range of classical and quantum devices on the same chip to realize more sophisticated quantum light control.”
Di is a former postdoctoral fellow at SEAS and is currently a research scientist at the Agency for Science, Technology and Research (A*STAR) in Singapore.
Next, the team aims to use the device to control the frequency and bandwidth of quantum emitters for applications in quantum networks.
The research was a collaboration between Harvard, MIT, HyperLight, and A*STAR.
The paper was co-authored by Changchen Chen, Mengjie Yu, Linbo Shao, Yaowen Hu, C. J. Xin, Matthew Yeh, Soumya Ghosh, Lingyan He, Christian Reimer, Neil Sinclair, Franco N. C. Wong, and Mian Zhang.
More information: Di Zhu et al, Spectral control of nonclassical light pulses using an integrated thin-film lithium niobate modulator, Light: Science & Applications (2022). DOI: 10.1038/s41377-022-01029-7
An artist’s depiction of Majorana edge modes on a chain of superconducting qubits. Credit: Google Quantum AI
Physicists at Google Quantum AI have used their quantum computer to study a type of effective particle that is more resilient to environmental disturbances that can degrade quantum calculations. These effective particles, known as Majorana edge modes, form as a result of a collective excitation of multiple individual particles, like ocean waves form from the collective motions of water molecules. Majorana edge modes are of particular interest in quantum computing applications because they exhibit special symmetries that can protect the otherwise fragile quantum states from noise in the environment.
The condensed matter physicist Philip Anderson once wrote, “It is only slightly overstating the case to say that physics is the study of symmetry.” Indeed, studying physical phenomena and their relationship to underlying symmetries has been the main thrust of physics for centuries. Symmetries are simply statements about what transformations a system can undergo—such as a translation, rotation, or inversion through a mirror—while remaining unchanged. They can simplify problems and elucidate underlying physical laws. And, as shown in the new research, symmetries can even prevent the seemingly inexorable quantum process of decoherence.
When running a calculation on a quantum computer, we typically want the quantum bits, or “qubits,” in the computer to be in a single, pure quantum state. But decoherence occurs when external electric fields or other environmental noise disturb these states by jumbling them up with other states to create undesirable states. If a state has a certain symmetry, then it could be possible to isolate it, effectively creating an island of stability that is impossible to mix with the other states that don’t also have the special symmetry. In this way, since the noise can no longer connect the symmetric state to the others, it could preserve the coherence of the state.
In 2000, the physicist Alexei Kitaev devised a simple model to generate symmetry-protected quantum states. The model consisted of a chain of interconnected particles called fermions. They could be connected in such a way that two effective particles would appear at the ends of the chain. But these were no ordinary particles—they were delocalized in space, with each appearing at both ends of the chain simultaneously.
These were the Majorana edge modes (MEMs). The two modes had distinctly different behaviors under so-called parity transformation. One mode looked identical under this transformation, so it was a symmetry of the state. The other picked up a minus sign. The difference in parity between these two states meant that they could not be mixed by many external noise sources (i.e. those that also had parity symmetry).
In their new paper published in Science and titled “Noise-resilient Majorana edge modes on a chain of superconducting qubits,” Xiao Mi, Pedram Roushan, Dima Abanin and their colleagues at Google realized these MEMs with superconducting qubits for the first time. They used a mathematical transformation called the Jordan-Wigner transformation to map the model Kitaev had considered to one that they could realize on their quantum computer: the 1D kicked-Ising model. This model connects each qubit in a 1D chain to each of its two nearest neighbors, such that neighboring qubits interact with one another. Then, a “kick” periodically disturbs the chain.
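As a rough illustration of this model (not the Google team's circuits or parameters), the following exact-diagonalization sketch builds one Floquet period of a small kicked Ising chain and compares how well an edge spin and a bulk spin remember their initial values at infinite temperature:

```python
# Toy kicked-Ising chain: ZZ couplings followed by a global X "kick" each period.
# Parameters and chain length are illustrative only, chosen in the edge-mode-hosting regime.
import numpy as np
from functools import reduce

N = 8                                        # chain length (2^N = 256, cheap to treat exactly)
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def op(site, P):
    """Pauli P acting on `site`, identity elsewhere."""
    return reduce(np.kron, [P if i == site else I2 for i in range(N)])

J, g = 1.0, 0.5                              # ZZ angle larger than kick angle

H_zz = sum(op(i, Z) @ op(i + 1, Z) for i in range(N - 1))   # diagonal in the Z basis
U_zz = np.diag(np.exp(-1j * J * np.diag(H_zz)))
u_kick = np.cos(g) * I2 - 1j * np.sin(g) * X                # single-qubit exp(-i g X)
U = reduce(np.kron, [u_kick] * N) @ U_zz                    # one Floquet period

def autocorr(site, steps):
    """Infinite-temperature autocorrelator Tr[Z_site(t) Z_site(0)] / 2^N over Floquet periods."""
    Z0 = op(site, Z)
    Zt = Z0.astype(complex)
    values = []
    for _ in range(steps):
        values.append((np.trace(Zt @ Z0) / 2**N).real)
        Zt = U.conj().T @ Zt @ U                            # Heisenberg-picture evolution
    return values

edge, bulk = autocorr(0, 25), autocorr(N // 2, 25)
print(f"edge spin, late-time average: {np.mean(edge[15:]):+.2f}")  # stays high: edge-mode memory
print(f"bulk spin, late-time average: {np.mean(bulk[15:]):+.2f}")  # much smaller: bulk forgets fast
```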
Mi and his colleagues looked for signatures of the MEMs by comparing the behavior of the edge qubits with those in the middle of the chain. While the state of the qubits in the middle decohered rapidly, the states of those on the edge lasted much longer. Mi says this was “preliminary indication for the resilience of the MEMs toward external decoherence.”
The team then conducted a series of systematic studies on the noise resilience of the MEMs. As a first step, they measured the energies corresponding to the various quantum states of the system and observed that they exactly matched the textbook example of the Kitaev model. In particular, they found that the two MEMs at the opposite ends of the chain became exponentially more difficult to mix as the system size grew—a hallmark feature of the Kitaev model.
Next, the team perturbed the system by adding low-frequency noise to the control operations in the quantum circuits. They found that the MEMs were immune to such perturbations, contrasting sharply with other generic edge modes without symmetries. Surprisingly, the team also found that the MEMs are resilient even to some noise that breaks the symmetries of the Ising model. This is due to a mechanism called “prethermalization,” which arises from the large energy cost required to change the MEMs into other possible excitations in the system.
Lastly, the team measured the full wavefunctions of the MEMs. Doing so required simultaneously measuring the states of varying numbers of qubits close to either end of the chain. Here they made another surprising discovery: No matter how many qubits a measurement included, its decay time was identical. In other words, measurements involving even up to 12 qubits decayed over the same time scale as those of just one qubit. This was contrary to the intuitive expectation that larger quantum observables decay faster in the presence of noise, and further highlighted the collective nature and noise resilience of the MEMs.
Mi and Roushan believe that in the future, they might be able to use MEMs to enable symmetry-protected quantum gates. Their work demonstrates that the MEMs are insensitive to both low-frequency noise and small errors, so this is a promising route to making more robust gates in a quantum processor.
The researchers plan to continue to improve the level of protection these MEMs experience, hopefully to rival some of the leading techniques used to fight against decoherence in quantum computers. Abanin says, “A key question for future works is whether these techniques can be extended to achieve the levels of protection comparable to active error-correction codes.”
More information: X. Mi et al, Noise-resilient edge modes on a chain of superconducting qubits, Science (2022). DOI: 10.1126/science.abq5769
Principle of the experiment and the schematic of the clock-state preparation. Credit: Ye Yinghao et al
Recently, a team led by Prof. Guo Guangcan achieved long-lived storage of high-dimensional orbital angular momentum (OAM) quantum states of photons based on cold atomic ensembles, using a guiding magnetic field combined with clock state preparation. Their work was published in Physical Review Letters.
Previous work has shown that integrating multimode memory into quantum networks can greatly improve channel capacity, which is crucial for long distance quantum communication. The collective enhancement effect of the cold atomic ensemble makes it an efficient medium for storing photonic information. Although important progress has been made, many problems remain to be solved in long-lived spatial multimode memory based on cold atomic ensembles, one of which is how to achieve high fidelity for multimode memory after a long storage time, since multiple spatial modes are more easily affected by the surrounding environment.
Based on the degrees of freedom of OAM, the team carried out research on the long-lived storage of high-dimensional multimode quantum states using the cold 85Rb system. In this work, to overcome the effect of inhomogeneous evolution due to the spatial complexity of stored OAM, the team used a guiding magnetic field to dominate atomic evolution and then employed a pair of magnetically insensitive states to suppress the decoherence in the transverse direction. After the clock states were employed, the destructive interference between different Zeeman sublevels was eliminated, which consequently extended the lifetime of faithful storage.
The team extended the dimension of stored OAM superposition states to three in the experiment, and achieved a fidelity exceeding the quantum-classical criterion after a storage time of 400 μs, which is two orders of magnitude longer than in previous works. When the storage time was extended from 10 μs to 400 μs, the retrieval efficiency dropped from 10.7% to 4.7%, showing a clear decreasing trend while the fidelity barely decayed.
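For a sense of scale, if one assumes a simple single-exponential decay of the retrieval efficiency (an assumption made here only for illustration, not a claim from the paper), the two quoted efficiencies correspond to a 1/e memory time of roughly 470 μs:

```python
import math

# Rough 1/e memory-time estimate from the two quoted retrieval efficiencies,
# assuming a single-exponential decay (an illustrative assumption, not from the paper).
t1, eta1 = 10e-6, 0.107    # 10 us storage, 10.7% retrieval efficiency
t2, eta2 = 400e-6, 0.047   # 400 us storage, 4.7% retrieval efficiency

tau = (t2 - t1) / math.log(eta1 / eta2)
print(f"estimated 1/e lifetime: {tau * 1e6:.0f} us")   # ~470 us
```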
More information: Ying-Hao Ye et al, Long-Lived Memory for Orbital Angular Momentum Quantum States, Physical Review Letters (2022). DOI: 10.1103/PhysRevLett.129.193601
Muonium frequency scan at 22.5 W in the range of 200–800 MHz. The fitted black line is with, the gray line without the 3S contribution. The error bars correspond to the counting statistical error. The colored areas represent the underlying contributions from 2S − 2P1/2 transitions, namely 583 MHz (blue), 1140 MHz (orange), 1326 MHz (green), and the combined 3S − 3P1/2 (yellow). The data point with TL OFF is not displayed in the figure, but is included in the fit; it lies at 20.4(4) × 10⁻⁴. Credit: Nature Communications (2022). DOI: 10.1038/s41467-022-34672-0
By studying an exotic atom called muonium, researchers are hoping misbehaving muons will spill the beans on the Standard Model of particle physics. To make muonium, they use the most intense continuous beam of low energy muons in the world at Paul Scherrer Institute PSI. The research is published in Nature Communications.
The muon is often described as the electron’s heavy cousin. A more appropriate description might be its rogue relation. Ever since its discovery prompted Nobel laureate Isidor Isaac Rabi’s famous quip, “Who ordered that?”, the muon has been bamboozling scientists with its law-breaking antics.
The muon’s most famous misdemeanor is to wobble slightly too much in a magnetic field: its anomalous magnetic moment hit the headlines with the 2021 muon g-2 experiment at Fermilab. The muon also notably caused trouble when it was used to measure the radius of the proton—yielding a value wildly different from previous measurements and giving rise to what became known as the proton radius puzzle.
Yet rather than being chastised, the muon is cherished for its surprising behavior, which makes it a likely candidate to reveal new physics beyond the Standard Model.
Aiming to make sense of the muon’s strange behavior, researchers from PSI and ETH Zurich turned to an exotic atom known as muonium. Formed from a positive muon orbited by an electron, muonium is similar to hydrogen but much simpler. Whereas hydrogen’s proton is made up of quarks, muonium’s positive muon has no substructure. And this means it provides a very clean model system from which to sort these problems out: for example, by obtaining extremely precise values of fundamental constants such as the mass of the muon.
“With muonium, because we can measure its properties so precisely, we can try to detect any deviation from the Standard Model. And if we see this, we can then infer which of the theories that go beyond the Standard Model are viable or not,” explains Paolo Crivelli from ETH Zurich, who is leading the study supported by a European Research Council Consolidator grant in the frame of the Mu-MASS project.
By making precise measurements in an exotic atom known as muonium, Crivelli and Prokscha are aiming to understand puzzling results using muons, which may in turn reveal gaps in the laws of physics as we know them. To make the measurements, they use the most intense, continuous source of low energy muons in the world at the Paul Scherrer Institute PSI in Switzerland. Credit: Paul Scherrer Institute / Mahir Dzambegovic
The only place in the world where this is possible
A major challenge to making these measurements very precisely is having an intense beam of muonium particles so that statistical errors can be reduced. Making lots of muonium, which incidentally lasts for only two microseconds, is not simple. There is one place in the world where enough positive muons at low energy are available to create this: PSI’s Swiss Muon Source.
“To make muonium efficiently, we need to use slow muons. When they’re first produced they’re going at a quarter of the speed of light. We then need to slow them down by a factor of a thousand without losing them. At PSI, we’ve perfected this art. We have the most intense continuous source of low energy muons in the world. So we’re uniquely positioned to perform these measurements,” says Thomas Prokscha, who heads the Low Energy Muons group at PSI.
At the Low Energy Muons beamline, slow muons pass through a thin foil target where they pick up electrons to form muonium. As they emerge, Crivelli’s team are waiting to probe their properties using microwave and laser spectroscopy.
Tiny change in energy levels could hold the key
The property of muonium that the researchers are able to study in such detail is its energy levels. In the recent publication, the teams were able to measure for the first time a transition between certain very specific energy sublevels in muonium. Isolated from other so-called hyperfine levels, the transition can be modeled extremely cleanly. The ability to now measure it will facilitate other precision measurements: in particular, to obtain an improved value of an important quantity known as the Lamb shift.
The Lamb shift is a minuscule change in certain energy levels in hydrogen relative to where they “should” be as predicted by classical theory. The shift was explained with the advent of Quantum Electrodynamics (the quantum theory of how light and matter interact). Yet, as discussed, in hydrogen, protons—possessing substructure—complicate things. An ultra-precise Lamb shift measured in muonium could put the theory of Quantum Electrodynamics to the test.
There is more. The muon is nine times lighter than the proton. This means that effects relating to the nuclear mass, such as how a particle recoils after absorbing a photon of light, are enhanced. Undetectable in hydrogen, a route to these values at high precision in muonium could enable scientists to test certain theories that would explain the muon g-2 anomaly: for example, the existence of new particles such as scalar or vector gauge bosons.
Putting the muon on the scales
However exciting the potential of this may be, the team have a greater goal in their sights: weighing the muon. To do this, they will measure a different transition in muonium to a precision one thousand times greater than ever before.
An ultra-high precision value of the muon mass—the goal is 1 part per billion—will support ongoing efforts to reduce uncertainty even further for muon g-2. “The muon mass is a fundamental parameter that we cannot predict with theory, and so as experimental precision improves, we desperately need an improved value of the muon mass as an input for the calculations,” explains Crivelli.
The measurement could also lead to a new value of the Rydberg constant—an important fundamental constant in atomic physics—that is independent of hydrogen spectroscopy. This could explain discrepancies between measurements that gave rise to the proton radius puzzle, and maybe even solve it once and for all.
Muonium spectroscopy poised to fly with IMPACT project
Given that the main limitation for such experiments is producing enough muonium to reduce statistical errors, the outlook for this research at PSI looks bright.
“With the high intensity muon beams planned for the IMPACT project we could potentially go a factor of one hundred higher in precision, and this would be getting very interesting for the Standard Model,” says Prokscha.
More information: Gianluca Janka et al, Measurement of the transition frequency from 2S1/2, F = 0 to 2P1/2, F = 1 states in Muonium, Nature Communications (2022). DOI: 10.1038/s41467-022-34672-0
(a) Sketch of the magnetized NIF hohlraum constructed from AuTa4 with solenoidal coil to carry current. (b) X-ray drive measured through one of the LEHs and the incident laser powers for a magnetized and unmagnetized AuTa4 hohlraum and two unmagnetized Au hohlraums. “BF” refers to “BigFoot,” the name of the previous ignition design. Credit: Physical Review Letters (2022). DOI: 10.1103/PhysRevLett.129.195002
A team of researchers working at the National Ignition Facility, part of Lawrence Livermore National Laboratory, has found that covering a cylinder containing a small amount of hydrogen fuel with a magnetic coil and firing lasers at it triples its energy output—another step toward the development of nuclear fusion as a power source.
In their paper published in the journal Physical Review Letters, the team, which has members from several facilities in the U.S., one in the U.K. and one in Japan, describes upgrading their setup to allow for the introduction of the magnetic coil.
Last year, a team working at the same facility announced that they had come closer to achieving ignition in a nuclear fusion test than anyone has so far. Unfortunately, they were unable to repeat their results. Since that time, the team has been reviewing their original design, looking for ways to make it better.
The original design involved firing 192 laser beams at a tiny cylinder containing a tiny sphere of hydrogen at its center. This created X-rays that heated the sphere until its atoms began to fuse. Some of the design improvements have involved changing the size of the holes through which the lasers pass, but they have only led to minor changes.
Looking for a better solution, the team studied prior research and found several studies that had shown, via simulation, that encasing a cylinder in a magnetic field should significantly increase the energy output.
Putting the suggestion into practice, the researchers had to modify the cylinder—originally, it was made of gold. Placing it in a strong magnetic field would create an electric current strong enough to tear the cylinder apart, so they made a new one from an alloy of gold and tantalum. They also switched the gas from hydrogen to deuterium (a heavier isotope of hydrogen), then wrapped the whole assembly in a coil to apply a strong magnetic field. Then they fired up the lasers. The researchers saw an immediate improvement—the temperature of the hot spot on the sphere went up by 40% and the energy output was tripled.
The work marks a step toward the ultimate goal—creating a fusion reactor that can produce more energy than is put into it.
More information: J. D. Moody et al, Increased Ion Temperature and Neutron Yield Observed in Magnetized Indirectly Driven D2-Filled Capsule Implosions on the National Ignition Facility, Physical Review Letters (2022). DOI: 10.1103/PhysRevLett.129.195002
Two combinatorial mechanical metamaterials designed in such a way that the letters M and L bulge out in the front when being squeezed between two plates (top and bottom). Designing novel metamaterials such as this is made easy by AI. Credit: Daan Haver and Yao Du, University of Amsterdam
Mechanical metamaterials are sophisticated artificial structures with mechanical properties that are driven by their structure, rather than their composition. While these structures have proved to be very promising for the development of new technologies, designing them can be both challenging and time-consuming.
Researchers at the University of Amsterdam, AMOLF, and Utrecht University have recently demonstrated the potential of convolutional neural networks (CNNs), a class of machine learning algorithms, for designing complex mechanical metamaterials. Their paper, published in Physical Review Letters, specifically introduces two different CNN-based methods that can derive and capture the subtle combinatorial rules underpinning the design of mechanical metamaterials.
“Our recent study can be considered a continuation of the combinatorial design approach introduced in a previous paper, which can be applied to more complicated building blocks,” Ryan van Mastrigt, one of the researchers who carried out the study, told Phys.org. “Around the time when I started working on this study, Aleksi Bossart and David Dykstra were working on a combinatorial metamaterial that is able to host multiple functionalities, meaning a material that can deform in multiple distinct ways depending on how one actuates it.”
As part of their previous research, van Mastrigt and his colleagues tried to distill the rules underpinning the successful design of complex metamaterials. They soon realized that this was far from an easy task, as the “building blocks” that make up these structures can be deformed and arranged in countless different ways.
Previous studies showed that when metamaterials have small unit cell sizes (i.e., a limited number of “building blocks”), simulating all the ways in which these blocks can be deformed and arranged using conventional physics simulation tools is possible. As these unit cell sizes become larger, however, the task becomes extremely challenging or impossible.
“Since we were unable to reason about any underlying design rules and conventional tools failed at allowing us to explore larger unit cell designs in an efficient way, we decided to consider machine learning as a serious option,” van Mastrigt explained. “Thus, the main objective of our study became to identify a machine learning tool that would allow us to explore the design space much quicker than before. I think that we succeeded and even exceeded our own expectations with our findings.”
To successfully train CNNs to tackle the design of complex metamaterials, van Mastrigt and his colleagues initially had to overcome a series of challenges. Firstly, they had to find a way to effectively represent their metamaterial designs.
“We tried a couple of approaches and finally settled on what we refer to as the pixel representation,” van Mastrigt explained. “This representation encodes the orientation of each building block in a clear visual manner, such that the classification problem is cast to a visual pattern detection problem, which is exactly what CNNs are good at.”
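A minimal sketch of what such a pixel representation feeding a CNN classifier could look like is shown below; the grid size, number of block orientations, and network architecture are invented for illustration and are not the model from the paper.

```python
# Minimal sketch: encode each unit cell as a grid of building-block orientations
# ("pixel representation") and let a small CNN classify it.
# Grid size, number of orientations, and architecture are illustrative guesses.
import torch
import torch.nn as nn

N_ORIENTATIONS = 4          # assumed number of possible block orientations
GRID = 5                    # assumed k x k unit cell

class MetamaterialCNN(nn.Module):
    def __init__(self, n_classes=2):            # e.g. class I vs. class C
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(N_ORIENTATIONS, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * GRID * GRID, n_classes))

    def forward(self, x):                        # x: (batch, N_ORIENTATIONS, GRID, GRID)
        return self.head(self.features(x))

# One-hot encode a grid of orientation indices and run it through the network.
design = torch.randint(0, N_ORIENTATIONS, (1, GRID, GRID))
x = nn.functional.one_hot(design, N_ORIENTATIONS).permute(0, 3, 1, 2).float()
logits = MetamaterialCNN()(x)
print(logits.shape)          # torch.Size([1, 2])
```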
Subsequently, the researchers had to devise methods that considered the huge metamaterials class-imbalance. In other words, as there are currently many known metamaterials belonging to class I, but far fewer belonging to class C (the class that the researchers are interested in), training CNNs to infer combinatorial rules for these different classes might entail different steps.
To tackle this challenge, van Mastrigt and his colleagues devised two different CNN-based techniques. These two techniques are applicable to different metamaterial classes and classification problems.
“In the case of metamaterial M2, we tried to create a training set that is class-balanced,” van Mastrigt said. “We did this using naïve undersampling (i.e., throwing a lot of class I examples away) and combined it with symmetries which we know some designs have, such as translational and rotational symmetry, to create additional class C designs.
“This approach thus requires some domain knowledge. For metamaterial M1, on the other hand, we added a reweight term to the loss function such that the rare class C designs weigh more heavily during training, where the key idea is that this reweighting of class C cancels out with the much larger number of class I designs in the training set. This approach requires no domain knowledge.”
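Both strategies in the quote correspond to standard machine-learning recipes. Below is a minimal sketch with invented class counts, using generic PyTorch calls rather than the authors' actual pipeline.

```python
# Two generic ways to handle the class imbalance described above (illustrative numbers only).
import torch
import torch.nn as nn

n_class_I, n_class_C = 100_000, 500        # hypothetical counts: class C designs are rare

# (1) Reweighted loss: make the rare class C count as much as the abundant class I,
#     so no domain knowledge about design symmetries is needed.
weights = torch.tensor([1.0, n_class_I / n_class_C])   # [class I, class C]
criterion = nn.CrossEntropyLoss(weight=weights)

# (2) Balanced training set: undersample class I and augment class C with
#     symmetry-related copies (translations, rotations) of known designs.
def augment_with_symmetries(design):
    """Generate symmetry-related copies of a (k x k) grid of block orientations.
    Note: a real augmentation would also remap the orientation labels under rotation;
    this sketch only shuffles the grid itself."""
    rotations = [torch.rot90(design, k, dims=(-2, -1)) for k in range(4)]
    shifts = [torch.roll(design, s, dims=-1) for s in range(design.shape[-1])]
    return rotations + shifts
```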
In initial tests, both these CNN-based methods for deriving the combinatorial rules behind the design of mechanical metamaterials achieved highly promising results. The team found that they each performed better on different tasks, depending on the initial dataset used and known (or unknown) design symmetries.
“We showed just how extraordinarily good these networks are at solving complex combinatorial problems,” van Mastrigt said. “This was really surprising for us, since all other conventional (statistical) tools we as physicists commonly use fail for these types of problems. We showed that neural networks really do more than just interpolate the design space based on the examples you give them, as they appear to be somehow biased to find a structure (which comes from rules) in this design space that generalizes extremely well.”
The recent findings gathered by this team of researchers could have far-reaching implications for the design of metamaterials. While the networks they trained were so far applied to a few metamaterial structures, they could eventually also be used to create far more complex designs, which would be incredibly difficult to tackle using conventional physics simulation tools.
The work by van Mastrigt and his colleagues also highlights the huge value of CNNs for tackling combinatorial problems, optimization tasks that entail composing an “optimal object” or deriving an “optimal solution” that satisfies all constraints in a set, in instances where there are numerous variables at play. As combinatorial problems are common in numerous scientific fields, this paper could promote the use of CNNs in other research and development settings.
The researchers showed that even if machine learning is typically a “black box” approach (i.e., it does not always allow researchers to view the processes behind a given prediction or outcome), it can still be very valuable for exploring the design space for metamaterials, and potentially other materials, objects, or chemical substances. This could in turn potentially help to reason about and better understand the complex rules underlying effective designs.
“In our next studies, we will turn our attention to inverse design,” van Mastrigt added. “The current tool already helps us enormously to reduce the design space to find suitable (class C) designs, but it does not find us the best design for the task we have in mind. We are now considering machine learning methods that will help us find extremely rare designs that have the properties that we want, ideally even when no examples of such designs are shown to the machine learning method beforehand.
“This is a very hard problem, but after our recent study, we believe that neural networks will allow us to successfully tackle it.”
More information: Ryan van Mastrigt et al, Machine Learning of Implicit Combinatorial Rules in Mechanical Metamaterials, Physical Review Letters (2022). DOI: 10.1103/PhysRevLett.129.198003
Corentin Coulais et al, Combinatorial design of textured mechanical metamaterials, Nature (2016). DOI: 10.1038/nature18960
Anne S. Meeussen et al, Topological defects produce exotic mechanics in complex metamaterials, Nature Physics (2020). DOI: 10.1038/s41567-019-0763-6
Aleksi Bossart et al, Oligomodal metamaterials with multifunctional mechanics, Proceedings of the National Academy of Sciences (2021). DOI: 10.1073/pnas.2018610118
Left: two-parameter likelihood scan of the off-shell gg and EW production signal strength parameters, μ_F^off-shell and μ_V^off-shell, respectively. The dot-dashed and dashed contours enclose the 68% (−2Δln L = 2.30) and 95% (−2Δln L = 5.99) CL regions. The cross marks the minimum, and the blue diamond marks the SM expectation. The integrated luminosity reaches only up to 138 fb⁻¹ as on-shell 4ℓ events are not included in performing this scan. Right: observed (solid) and expected (dashed) one-parameter likelihood scans over Γ_H. Scans are shown for the combination of 4ℓ on-shell data with 4ℓ off-shell (magenta) or 2ℓ2ν off-shell data (green) alone, or with both datasets (black). The horizontal lines indicate the 68% (−2Δln L = 1.0) and 95% (−2Δln L = 3.84) CL regions. The integrated luminosity reaches up to 140 fb⁻¹ as on-shell 4ℓ events are included in performing these scans. The exclusion of the no off-shell hypothesis is consistent with 3.6 s.d. in both panels. Credit: The CMS Collaboration.
The Higgs boson, the fundamental subatomic particle associated with the Higgs field, was first discovered in 2012 as part of the ATLAS and CMS experiments, both of which analyze data collected at CERN’s Large Hadron Collider (LHC), the most powerful particle accelerator in existence. Since the discovery of the Higgs boson, research teams worldwide have been trying to better understand this unique particle’s properties and characteristics.
The CMS Collaboration, the large group of researchers involved in the CMS experiment, has recently obtained an updated measurement of the width of the Higgs boson, while also gathering the first evidence of its off-shell contributions to the production of Z boson pairs. Their findings, published in Nature Physics, are consistent with standard model predictions.
“The quantum theoretical description of fundamental particles is probabilistic in nature, and if you consider all the different states of a collection of particles, their probabilities must always add up to 1 regardless of whether you look at this collection now or sometime later,” Ulascan Sarica, researcher for the CMS Collaboration, told Phys.org. “When analyzed mathematically, this simple statement imposes restrictions, the so-called unitarity bounds, on the probabilities of particle interactions at high energies.”
Since the 1970s, physicists have predicted that when pairs of heavy vector bosons Z or W are produced, typical restrictions at high energies would be violated, unless a Higgs boson was contributing to the production of these pairs. Over the past ten years, theoretical physics calculations showed that the occurrence of these Higgs boson contributions at high energies should be measurable using existing data collected by the LHC.
“Other investigations have shown that the total decay width of the Higgs boson, which is inversely proportional to its lifetime and predicted in the standard model to be notably very small (4.1 mega-electron volts in width, or 1.6×10⁻²² seconds in lifetime), can be determined using these high-energy events with a precision at least a hundred times better than other techniques limited by detector resolution (1000 mega-electron volts in total width measurements, and 1.9×10⁻¹³ seconds in lifetime measurements),” Sarica explained.
“For these reasons, our paper had two objectives: to look for the presence of Higgs boson contributions to heavy diboson production at high energies, and to measure the Higgs boson total decay width as precisely as possible via these contributions.”
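The width and lifetime figures quoted above are two ways of expressing the same quantity, related by the uncertainty relation τ = ħ/Γ. A quick check with the standard-model value:

```python
# Width-lifetime relation tau = hbar / Gamma, using the standard-model width quoted above.
hbar = 6.582e-22          # reduced Planck constant, MeV * s
gamma = 4.1               # predicted Higgs boson total width, MeV
print(f"tau = {hbar / gamma:.2e} s")   # ~1.6e-22 s, matching the lifetime quoted above
```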
As part of their recent study, the CMS collaboration analyzed some of the data collected between 2015 and 2018, as part of the second data collection run of the LHC. They specifically focused on events characterized by the production of pairs of Z bosons, which subsequently decayed into either four charged leptons (i.e., electrons or muons) or two charged leptons and two neutrinos.
Past experimental analyses suggest that these two unique patterns are the most sensitive to the production of heavy pairs of bosons at high energies. By analyzing events that matched these patterns, therefore, the team hoped to gather clearer and more reliable results.
“We observed the first evidence of the Higgs boson contributions in the production of Z boson pairs at high energies with a statistical significance of more than 3 standard deviations,” Li Yuan, another member of the CMS collaboration, told Phys.org. “The result strongly supports the spontaneous electroweak symmetry breaking mechanism, which preserves unitarity in heavy diboson production at high energies.”
In addition to gathering evidence of Higgs boson contributions to ZZ production, the CMS collaboration was able to significantly improve existing measurements of the Higgs boson’s total decay width or lifetime. The measurement they collected was believed to be unattainable 10 years ago, given the narrow width of the particle (i.e., 4.1 mega-electron volts according to predictions from the standard model of particle physics).
“Our result for this measurement is 3.2 mega-electron volts with an upper error of 2.4 mega-electron volts and a lower error of 1.7 mega-electron volts,” Yuan said. “This result is consistent with the standard model expectation so far, but there is still room that a future measurement with even greater precision could deviate from the prediction.”
The recent work by the CMS collaboration offers new insights into the properties of the Higgs boson, while also highlighting its contribution to the production of Z boson pairs. In their next studies, the researchers plan to continue their exploration of this fascinating subatomic particle using new data collected at the LHC and advanced analysis techniques.
“While our results have reached a statistical significance beyond the threshold of 3 standard deviations, typically taken as evidence in the particle physics community, more data is needed to be able to reach the threshold of 5 standard deviations in order to claim a discovery,” Sarica said.
The third data collection run of the LHC started this year and is expected to continue until the end of 2025. Sarica, Yuan, and the rest of the CMS collaboration have already started preparations that will allow them to measure the Higgs boson’s width with even greater precision using the new data collected as part of this third round of data collection.
“In addition, our CMS analysis does not yet include the analysis of high-energy events with four charged leptons from the 2018 data, and preparations are ongoing for its inclusion in an update,” Sarica added.
“Recent preliminary results from the ATLAS Collaboration, showcased on Nov. 9 during the Higgs 2022 conference, also provide an independent confirmation of the evidence CMS finds, so once their results go through peer-review, we hope the two collaborations can discuss how the two analyses can be combined to provide the best measurements of Higgs boson contributions at high energy and its total width.”
More information: The CMS Collaboration, Measurement of the Higgs boson width and evidence of its off-shell contributions to ZZ production, Nature Physics (2022). DOI: 10.1038/s41567-022-01682-0
In world first, scientists demonstrate continuous-wave lasing of deep-ultraviolet laser diode at room temperature. Credit: Issey Takahashi
A research group led by 2014 Nobel laureate Hiroshi Amano at Nagoya University’s Institute of Materials and Systems for Sustainability (IMaSS) in central Japan, in collaboration with Asahi Kasei Corporation, has successfully conducted the world’s first room-temperature continuous-wave lasing of a deep-ultraviolet laser diode (wavelengths down to UV-C region).
These results, published in Applied Physics Letters, represent a step toward the widespread use of a technology with the potential for a wide range of applications, including sterilization and medicine.
Since they were introduced in the 1960s, and after decades of research and development, successful commercialization of laser diodes (LDs) was finally achieved for a number of applications with wavelengths ranging from infrared to blue-violet. Examples of this technology include optical communications devices with infrared LDs and Blu-ray discs using blue-violet LDs.
However, despite the efforts of research groups around the world, no one could develop deep ultraviolet LDs. A key breakthrough only occurred after 2007 with the emergence of technology to fabricate aluminum nitride (AlN) substrates, an ideal material for growing aluminum gallium nitride (AlGaN) film for UV light-emitting devices.
Demonstration of room-temperature continuous-wave lasing. Credit: 2022 Asahi Kasei Corp. and Nagoya University
Starting in 2017, Professor Amano’s research group, in cooperation with Asahi Kasei, the company that provided 2-inch AlN substrates, began developing a deep-ultraviolet LD. At first, sufficient injection of current into the device was too difficult, preventing further development of UV-C laser diodes.
But in 2019, the research group successfully solved this problem using a polarization-induced doping technique. For the first time, they produced a short-wavelength ultraviolet-C (UV-C) LD that operates with short pulses of current. However, the input power required for these current pulses was 5.2 W. This was too high for continuous-wave lasing because the power would cause the diode to quickly heat up and stop lasing.
Researchers that successfully conducted the world’s first room-temperature continuous-wave lasing of a deep-ultraviolet laser diode. Credit: 2022 Asahi Kasei Corp. and Nagoya University
But now, researchers from Nagoya University and Asahi Kasei have reshaped the structure of the device itself, reducing the drive power needed for the laser to operate at room temperature to only 1.1 W. Earlier devices required high operating power because crystal defects forming at the laser stripe blocked effective current paths. In this study, the researchers found that strong crystal strain creates these defects.
By clever tailoring of the side walls of the laser stripe, they suppressed the defects, achieving efficient current flow to the active region of the laser diode and reducing the operating power.
Nagoya University’s industry-academic cooperation platform, called the Center for Integrated Research of Future Electronics, Transformative Electronics Facilities (C-TEFs), made possible the development of the new UV laser technology. Under C-TEFs, researchers from partners such as Asahi Kasei share access to state-of-the-art facilities on the Nagoya University campus, providing them with the people and tools needed to build reproducible high-quality devices.
Zhang Ziyi, a representative of the research team, was in his second year at Asahi Kasei when he became involved in the project’s founding. “I wanted to do something new,” he said in an interview. “Back then everyone assumed that the deep ultraviolet laser diode was an impossibility, but Professor Amano told me, ‘We have made it to the blue laser, now is the time for ultraviolet’.”
This research is a milestone in the practical application and development of semiconductor lasers in all wavelength ranges. In the future, UV-C LDs could be applied to healthcare, virus detection, particulate measurement, gas analysis, and high-definition laser processing.
“Its application to sterilization technology could be groundbreaking,” Zhang said. “Unlike the current LED sterilization methods, which are time-inefficient, lasers can disinfect large areas in a short time and over long distances”. This technology could especially benefit surgeons and nurses who need sterilized operating rooms and tap water.
The successful results have been reported in two papers in Applied Physics Letters.
More information: Hiroshi Amano et al, Local stress control to suppress dislocation generation for pseudomorphically grown AlGaN UV-C laser diodes, Applied Physics Letters (2022). DOI: 10.1063/5.0124512
Hiroshi Amano et al, Key temperature-dependent characteristics of AlGaN-based UV-C laser diode and demonstration of room-temperature continuous-wave lasing, Applied Physics Letters (2022). DOI: 10.1063/5.0124480
Schematic overview of a phase-separated Anderson localization fiber as quantum channel between a transmitter and receiver. The illustration shows that quantum correlations such as entanglement are maintained during transport from the transmitter (generation) to receiver (detection) all the way along the fiber. Credit: ICFO/ A. Cuevas
Invented in 1970 by Corning Incorporated, low-loss optical fiber became the best means to efficiently transport information from one place to another over long distances without loss of information. The most common way of data transmission nowadays is through conventional optical fibers—one single core channel transmits the information. However, with the exponential increase of data generation, these systems are reaching information-carrying capacity limits.
Thus, research now focuses on finding new ways to utilize the full potential of fibers by examining their inner structure and applying new approaches to signal generation and transmission. Moreover, applications in quantum technology are enabled by extending this research from classical to quantum light.
In the late 50s, the physicist Philip W. Anderson (who also made important contributions to particle physics and superconductivity) predicted what is now called Anderson localization. For this discovery, he received the 1977 Nobel Prize in Physics. Anderson showed theoretically under which conditions an electron in a disordered system can either move freely through the system as a whole, or be tied to a specific position as a “localized electron.” This disordered system can for example be a semiconductor with impurities.
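Anderson's original setting can be illustrated in a few lines of code: a 1D tight-binding chain with random on-site energies. The toy model below (chain length and disorder strength chosen arbitrarily) is not a model of the fiber discussed later, which localizes light in the two transverse dimensions, but it shows how disorder shrinks the region over which an eigenstate spreads.

```python
# Toy illustration of Anderson localization: a 1D tight-binding chain with random
# on-site energies. With disorder, eigenstates concentrate on far fewer sites than
# in the clean chain. Chain length and disorder strength are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
N, W = 200, 3.0                                  # chain length, disorder strength

def participation_ratio(H):
    """Average number of sites an eigenstate effectively occupies."""
    _, vecs = np.linalg.eigh(H)
    ipr = np.sum(np.abs(vecs) ** 4, axis=0)      # inverse participation ratio per state
    return np.mean(1.0 / ipr)

hopping = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
clean = hopping                                               # no disorder: extended states
disordered = hopping + np.diag(W * (rng.random(N) - 0.5))     # random on-site energies

print(f"clean chain:      ~{participation_ratio(clean):.0f} sites per eigenstate")
print(f"disordered chain: ~{participation_ratio(disordered):.0f} sites per eigenstate")
```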
Later, the same theoretical approach was applied to a variety of disordered systems, and it was deduced that light could also experience Anderson localization. Experiments in the past have demonstrated Anderson localization in optical fibers, realizing the confinement or localization of light—classical or conventional light—in two dimensions while propagating it through the third dimension. While these experiments had shown successful results with classical light, so far no one had tested such systems with quantum light—light consisting of quantum correlated states. That is, until recently.
In a study published in Communications Physics, ICFO researchers Alexander Demuth, Robin Camphausen and Alvaro Cuevas, led by ICREA Professor at ICFO Valerio Pruneri, in collaboration with Nick Borrelli, Thomas Seward, Lisa Lamberson and Karl W. Koch from Corning, together with Alessandro Ruggeri from Micro Photon Devices (MPD) and Federica Villa and Francesca Madonini from Politecnico di Milano, have been able to successfully demonstrate the transport of two-photon quantum states of light through a phase-separated Anderson localization optical fiber (PSF).
A conventional optical fiber vs. an Anderson localization fiber
Contrary to conventional single mode optical fibers, where data is transmitted through a single core, a phase-separated fiber (PSF) or phase-separated Anderson localization fiber is made of many glass strands embedded in a glass matrix of two different refractive indexes.
During its fabrication, as borosilicate glass is heated and melted, it is drawn into a fiber, where one of the two phases of different refractive indexes tends to form elongated glass strands. Since there are two refractive indexes within the material, this generates what is known as a lateral disorder, which leads to transverse (2D) Anderson localization of light in the material.
Experts in optical fiber fabrication, Corning created an optical fiber that can propagate multiple optical beams in a single fiber by harnessing Anderson localization. Contrary to multicore fiber bundles, this PSF proved to be very suitable for such experiments, since many parallel optical beams can propagate through the fiber with minimal spacing between them.
The team of scientists, experts in quantum communications, wanted to transport quantum information as efficiently as possible through Corning’s phase-separated optical fiber. In the experiment, the PSF connects a transmitter and a receiver. The transmitter is a quantum light source (built by ICFO). The source generates quantum correlated photon pairs via spontaneous parametric down-conversion (SPDC) in a non-linear crystal, in which one photon of high energy is converted into pairs of photons that each have lower energy.
The low-energy photon pairs have a wavelength of 810 nm. Due to momentum conservation, spatial anti-correlation arises. The receiver is a single-photon avalanche diode (SPAD) array camera, developed by Polimi and MPD. The SPAD array camera, unlike common CMOS cameras, is so sensitive that it can detect single photons with extremely low noise; it also has very high time resolution, such that the arrival time of the single photons is known with high precision.
Quantum light
The ICFO team engineered the optical setup to send the quantum light through the phase-separated Anderson localization fiber and detected its arrival with the SPAD array camera. The SPAD array enabled them not only to detect the pairs of photons but also to identify them as pairs, as they arrive at the same time (coincident).
As the pairs are quantum correlated, knowing where one of the two photons is detected tells us the other photon’s location. The team verified this correlation right before and after sending the quantum light through PSF, successfully showing that the spatial anti-correlation of the photons was indeed maintained.
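Conceptually, this analysis boils down to two steps: keep only detections that fall within a short coincidence window, then check that the paired positions are anti-correlated. The sketch below runs these steps on synthetic data with invented numbers; it is not the ICFO analysis code.

```python
# Sketch of the two analysis ingredients described above, run on synthetic events:
# (1) keep only detections that arrive within a short coincidence window,
# (2) check that the two positions in each pair are anti-correlated (x1 ~ -x2).
# Window size, timing jitter and position spread are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_pairs, window = 10_000, 1e-9                  # assumed 1 ns coincidence window

# Synthetic SPDC-like pairs: anti-correlated transverse positions plus a little blur.
x1 = rng.normal(0.0, 1.0, n_pairs)
x2 = -x1 + rng.normal(0.0, 0.1, n_pairs)
t1 = rng.uniform(0.0, 1.0, n_pairs)             # arbitrary arrival times of the pairs
t2 = t1 + rng.normal(0.0, 0.5e-9, n_pairs)      # partner photon arrives ~simultaneously

coincident = np.abs(t1 - t2) < window
print(f"detections kept as coincidences: {coincident.mean():.1%}")
print(f"correlation of (x1, x2): {np.corrcoef(x1[coincident], x2[coincident])[0, 1]:+.2f}")
# A correlation coefficient near -1 is the spatial anti-correlation signature.
```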
After this demonstration, the ICFO team then set out to show how to improve their results in future work. For this, they conducted a scaling analysis in order to find out the optimal size distribution of the elongated glass strands for the quantum light wavelength of 810 nm. After a thorough analysis with classical light, they were able to identify the current limitations of the phase-separated fiber and propose improvements to its fabrication, in order to minimize attenuation and loss of resolution during transport.
The results of this study have shown this approach to be potentially attractive for scalable fabrication processes in real-world applications in quantum imaging or quantum communications, especially for the fields of high-resolution endoscopy, entanglement distribution and quantum key distribution.
More information: Alexander Demuth et al, Quantum light transport in phase-separated Anderson localization fiber, Communications Physics (2022). DOI: 10.1038/s42005-022-01036-5