High pressure has revealed surprising physics and created novel states in condensed matter. Exciting examples include near-room-temperature superconductivity (Tc > 200 K) in high-pressure hydrides such as H3S and LaH10.
Although the superconducting transition temperature of high-pressure superconductors is constantly increasing, the mechanism of superconductivity at such high pressures remains an open question. Knowledge of the properties and ultrafast dynamics of electrons and quasiparticles in high-pressure quantum states is lacking.
High harmonic generation (HHG) is the up-conversion of laser light into radiation at integer multiples of the laser frequency. In solids, HHG originates from the nonlinear driving of electrons within and between electronic bands by strong-field light-matter interactions. HHG spectroscopy therefore naturally carries fingerprints of the intrinsic atomic and electronic properties of materials, and there is great interest in learning about material properties through this nonlinear, non-perturbative laser-matter interaction.
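As a toy illustration of how a nonlinear response converts a single drive frequency into harmonics (a simple saturating nonlinearity in arbitrary units, not the non-perturbative band dynamics studied in the paper), one can check the idea numerically:

```python
import numpy as np

# Toy sketch: a saturating (odd) nonlinear response to a monochromatic drive
# radiates at odd multiples of the drive frequency. Arbitrary units throughout.
n, periods = 4096, 64
t = np.linspace(0, 2 * np.pi * periods, n, endpoint=False)
drive = np.sin(t)                          # drive at angular frequency 1
response = np.tanh(3 * drive)              # nonlinear, saturating response

spectrum = np.abs(np.fft.rfft(response)) ** 2
omega = 2 * np.pi * np.fft.rfftfreq(n, d=t[1] - t[0])   # angular frequencies
strongest = np.sort(omega[np.argsort(spectrum)[-4:]])
print(np.round(strongest, 1))              # ~[1. 3. 5. 7.]: odd harmonics
```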
Using state-of-the-art first-principles time-dependent density-functional theory simulations, Prof. Meng Sheng’s group from the Institute of Physics of the Chinese Academy of Sciences has studied the ultrafast HHG dynamics in the high-pressure superconductor H3S.
Electron-phonon coupling reconstruction via HHG spectra. Credit: Institute of Physics
Band structure reconstruction via HHG spectra. Credit: Institute of Physics
Using HHG spectroscopy, they retrieved the band dispersion and the electron-phonon coupling (EPC), and revealed the significant influence of the many-body EPC on electron behavior near the Fermi level.
Their results support a phonon-mediated, EPC-based mechanism for high-pressure superconductivity and provide an all-optical approach to probing the band dispersion and EPC of high-pressure quantum states.
More information: Shi-Qi Hu et al, Solid-state high harmonic spectroscopy for all-optical band structure probing of high-pressure quantum states, Proceedings of the National Academy of Sciences (2024). DOI: 10.1073/pnas.2316775121
There has been significant progress in the field of quantum computing. Big global players, such as Google and IBM, are already offering cloud-based quantum computing services. However, quantum computers cannot yet help with problems that occur when standard computers reach the limits of their capacities because the availability of qubits or quantum bits, i.e., the basic units of quantum information, is still insufficient.
One of the reasons for this is that bare qubits are not of immediate use for running a quantum algorithm. While the binary bits of conventional computers store information as fixed values of either 0 or 1, a qubit can represent 0 and 1 at the same time, so that its value is only determined probabilistically when it is measured. This is known as quantum superposition.
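For readers who want the bare mathematics, here is a minimal sketch (plain NumPy, unrelated to the photonic hardware discussed below) of a qubit in an equal superposition and its measurement probabilities:

```python
import numpy as np

# A single qubit as a 2-component state vector; |0> and |1> are the basis states.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
psi = (ket0 + ket1) / np.sqrt(2)      # equal superposition of 0 and 1

# Born rule: measurement probabilities are the squared amplitudes.
p0, p1 = np.abs(psi) ** 2
print(f"P(0) = {p0:.2f}, P(1) = {p1:.2f}")   # 0.50 each
```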
This superposition makes qubits very susceptible to external influences, which means that the information they store can readily be lost. To ensure that quantum computers supply reliable results, several physical qubits must be joined through genuine entanglement to form a logical qubit. Should one of these physical qubits fail, the others retain the information. However, the large number of physical qubits required is one of the main difficulties preventing the development of functional quantum computers.
Advantages of a photon-based approach
Many different concepts are being employed to make quantum computing viable. Large corporations currently rely on superconducting solid-state systems, for example, but these have the disadvantage that they only function at temperatures close to absolute zero. Photonic concepts, on the other hand, work at room temperature.
Single photons usually serve as physical qubits here. These photons, which are, in a sense, tiny particles of light, inherently operate more rapidly than solid-state qubits but, at the same time, are more easily lost. To avoid qubit losses and other errors, it is necessary to couple several single-photon light pulses together to construct a logical qubit—as in the case of the superconductor-based approach.
A qubit with the inherent capacity for error correction
Researchers of the University of Tokyo together with colleagues from Johannes Gutenberg University Mainz (JGU) in Germany and Palacký University Olomouc in the Czech Republic have recently demonstrated a new means of constructing a photonic quantum computer. Rather than using a single photon, the team employed a laser-generated light pulse that can consist of several photons. The research is published in the journal Science.
“Our laser pulse was converted to a quantum optical state that gives us an inherent capacity to correct errors,” stated Professor Peter van Loock of Mainz University. “Although the system consists only of a laser pulse and is thus very small, it can—in principle—eradicate errors immediately.” Thus, there is no need to generate individual photons as qubits via numerous light pulses and then have them interact as logical qubits.
“We need just a single light pulse to obtain a robust logical qubit,” added van Loock. To put it in other words, a physical qubit is already equivalent to a logical qubit in this system—a remarkable and unique concept. However, the logical qubit experimentally produced at the University of Tokyo was not yet of a sufficient quality to provide the necessary level of error tolerance. Nonetheless, the researchers have clearly demonstrated that it is possible to transform non-universally correctable qubits into correctable qubits using the most innovative quantum optical methods.
Particle physicists have detected a novel decay of the Higgs boson for the first time, revealing a slight discrepancy with the predictions of the Standard Model and perhaps pointing to new physics beyond it. The findings are published in the journal Physical Review Letters.
The Higgs boson, predicted theoretically in the 1960s, was finally detected in 2012 at the CERN laboratory in Europe. It is the quantum of the Higgs field, which permeates all of space; other particles acquire mass through their interaction with this field, which can be roughly envisioned as a kind of resistance to their motion.
Many properties of the Higgs boson, including how it interacts with other particles and their associated fields, have already been measured to be consistent with predictions of the Standard Model.
But one decay mode that had yet to be investigated was the theoretical prediction that a Higgs boson would occasionally decay into a photon, the quantum of light, and a Z boson, the uncharged particle that, together with the two W bosons, conveys the weak force.
Scientists from the ATLAS and CMS collaborations at CERN used data from proton-proton collisions collected during Run 2, from 2015 to 2018, to search for this particular Z+photon Higgs decay. The Large Hadron Collider (LHC) at CERN is the high-energy particle accelerator near Geneva, Switzerland, that circulates protons in opposite directions and collides them at specific detector points millions of times per second.
For this run, the energy in each collision of two protons was 13 trillion electron-volts (13 TeV), just below the machine’s current maximum, which in more relatable units is about 2.1 microjoules. That’s roughly the kinetic energy of an average mosquito, or a grain of salt, traveling one meter per second.
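A back-of-the-envelope check of that comparison (the mosquito and salt-grain masses are only order-of-magnitude figures, not from the study):

```python
# 13 TeV in joules, and the mass that would carry that kinetic energy at 1 m/s.
eV = 1.602176634e-19              # joules per electron-volt
E = 13e12 * eV                    # collision energy
print(f"{E:.2e} J")               # ~2.1e-06 J, i.e. about 2.1 microjoules

v = 1.0                           # meters per second
m = 2 * E / v**2                  # from E = (1/2) m v^2
print(f"{m * 1e6:.1f} mg")        # ~4.2 mg, roughly mosquito / salt-grain scale
```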
Theory predicts that in about 15 per 10,000 decays, the Higgs boson should decay into a Z boson and a photon, one of the rarest decays predicted by the Standard Model. It does so not directly but through intermediate quantum loops, chiefly of virtual top quarks or W bosons, which then produce the Z and the photon.
The ATLAS and CMS collaborations, whose combined effort involves more than 9,000 scientists, found a “branching ratio,” or fraction of decays, of 34 per 10,000 decays, plus or minus 11 per 10,000, about 2.2 times the theoretical value.
The observed signal has a statistical significance of 3.4 standard deviations, but the excess over the predicted rate, while roughly double, corresponds to less than two standard deviations given the quoted uncertainty, a number still too small to rule out a statistical fluke. Nevertheless, the relatively large difference hints at the possibility of a meaningful discrepancy from theory that could be due to physics beyond the Standard Model: new particles acting as intermediaries other than the top quark and W bosons.
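A quick check of the quoted figures, using the rounded values given above (the published analysis works with unrounded numbers, hence the quoted 2.2):

```python
predicted = 15 / 10_000    # Standard Model branching fraction for H -> Z + photon
measured  = 34 / 10_000    # ATLAS + CMS combined measurement
uncert    = 11 / 10_000    # quoted uncertainty

print(f"signal strength: {measured / predicted:.1f} x SM")             # ~2.3 with these rounded inputs
print(f"excess over SM:  {(measured - predicted) / uncert:.1f} sigma")  # ~1.7
```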
One possibility for physics beyond the Standard Model is supersymmetry, a theory that posits a symmetry, or relationship, between particles of half-integer spin, called fermions, and particles of integer spin, called bosons, with every known particle having a partner whose spin differs by a half-integer.
Many theoretical physicists have long been advocates of supersymmetry as it would solve many conundrums that plague the Standard Model, such as the large difference (a factor of about 10²⁴) between the strengths of the weak force and gravity, or why the mass of the Higgs boson, about 125 gigaelectron-volts (GeV), is so much less than the grand unification energy scale of about 10¹⁶ GeV.
In the experiment, the massive Z boson decays in about 3 × 10⁻²⁵ seconds, long before it could reach a detector. So the experimenters compensated by reconstructing the two electrons or two muons that the Z decay would produce, requiring that their combined mass be larger than 50 GeV, a significant fraction of the Z’s mass of 91 GeV.
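The "combined mass" here is the invariant mass of the lepton pair; a minimal, purely illustrative calculation (the four-momenta below are made up, not experimental data):

```python
import math

# Invariant mass of two particles from (E, px, py, pz), in GeV (natural units):
# m^2 = (E1 + E2)^2 - |p1 + p2|^2
def invariant_mass(p1, p2):
    E = p1[0] + p2[0]
    px, py, pz = (p1[i] + p2[i] for i in (1, 2, 3))
    return math.sqrt(max(E**2 - (px**2 + py**2 + pz**2), 0.0))

# Two back-to-back 45.6 GeV leptons (masses neglected) reconstruct the Z mass.
lep1 = (45.6, 0.0, 0.0,  45.6)
lep2 = (45.6, 0.0, 0.0, -45.6)
print(f"{invariant_mass(lep1, lep2):.1f} GeV")   # ~91.2 GeV
```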
“This is a very nice result obtained together with the CMS collaboration. It is, according to the Standard Model prediction, the rarest Higgs boson final state for which we have seen first evidence,” said Andreas Hoecker, spokesperson for the ATLAS collaboration.
“The decay occurs through quantum loops and is thus sensitive to new physics in a similar, but not quite the same way as the two-photon decay, which contributed to the Higgs boson discovery by ATLAS and CMS in 2012.”
“This result is impressive for several reasons,” added Monica Dunford of the ATLAS collaboration. “We are experimentally able to measure with such precision these very rare processes. They are a powerful test of the Standard Model and possible theories beyond it.”
Dunford adds that the groups have acquired new data during Run 3 at CERN, which began in July 2022, with 13.6 TeV of total energy. Even more data will come from the High Luminosity Large Hadron Collider, which will provide about five times more proton-proton collisions per second. The HL-LHC is projected to come online in 2028.
“These results are a preview of what we will continue to be able to achieve,” said Dunford.
As an ocean wave laps up against a beach, it contains innumerable swirls and eddies. The seawater forms complex patterns at each level, from the waves that surfers catch to ripples too small and fast for the human eye to notice. Each motion sets off another set of motions, cascading through layers of water.
What’s merely scenic at a beach is essential for scientists to understand. Describing more accurately how heat moves through the ocean could help scientists develop better, more precise computer models of Earth’s climate. Understanding turbulence—the irregular movement of fluids—in the ocean would help researchers solve this issue.
Scientists at the University of Cambridge and the University of Massachusetts Amherst used the Summit supercomputer at the Department of Energy’s Oak Ridge Leadership Computing Facility (OLCF) to run a new model of ocean turbulence. (The OLCF is a DOE Office of Science user facility.) The work is published in the Journal of Turbulence.
The computer simulated a generic 10-meter cube of ocean water. While this may not seem very big, even this small chunk of ocean is incredibly complex. To analyze changes down to the centimeter, the program simulates the cube of water on a digital grid made up of almost 4 trillion points.
With the model, the scientists analyzed how turbulence influences heat moving through seawater. In the real ocean, the sun heats water at the surface, while cold water sits at the ocean floor. The heat disperses through the different layers of water, but not as a series of consistent or small changes. The water is a combination of relatively still regions and regions that occasionally mix vigorously. This inconsistency is one of the things that makes turbulence so complicated.
This new model was the most detailed simulation of these processes yet. Previously, computers simply weren’t powerful enough to handle the layers upon layers of complexity and capture the motion at the vast range of scales.
To handle those limitations, past models collapsed all of the actions happening in different parts of the water into one average measurement. In addition, they used a low value of a ratio that’s important to measuring the turbulence and dissipation of heat in realistic ocean flows. But that muddled the individual changes and their effects.
In contrast, the new model used a much higher value of the ratio and showed how the turbulence occurs under realistic conditions. It enabled the scientists to track the initial surge of turbulence and then follow it until it faded away. The new model also allowed them to zoom into different layers to examine specific details.
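The article does not name the ratio; if, as is typical in simulations of heat transport in seawater, it is the Prandtl number, it compares how quickly momentum diffuses relative to heat:

$$\mathrm{Pr} = \frac{\nu}{\kappa},$$

where $\nu$ is the kinematic viscosity and $\kappa$ the thermal diffusivity. For heat in water, $\mathrm{Pr} \approx 7$, whereas values near 1 are often used to keep simulations tractable.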
The data from these new simulations are challenging some long-standing theories about turbulence. Previously, scientists thought that cold and hot fluids mix into each other at about the same rate that turbulence mixes momentum. The model suggests that heat actually mixes more slowly than momentum.
In addition to improving climate models, this information can provide insight into other areas influenced by fluid dynamics. It may help scientists better understand how pollution spreads through water or air. That’s important to scientists who are working to help communities and ecosystems affected by pollution.
With the even-more-powerful Frontier supercomputer now available at OLCF, the scientists on this project are hoping to further expand their understanding of this complex topic. The waves in the ocean are beautiful, but so are the data that help us comprehend them.
Researchers from Thailand have pioneered the conversion of waste HDPE milk bottles into high-stiffness composites, using pineapple leaf fiber (PALF) reinforcement to achieve a 162% increase in flexural strength and a 204% increase in modulus. This eco-friendly upcycling boosts mechanical properties while sequestering carbon, presenting a promising path for sustainable materials.
To help meet the UN Sustainable Development Goals by reducing the production of new plastic and making use of natural fiber from agricultural waste, this research addresses the potential of repurposing high-density polyethylene (HDPE) milk bottles. The aim is to create high-stiffness, high-heat-distortion-temperature (HDT) composites through upcycling.
The composite matrix utilizes recycled high-density polyethylene (rHDPE) obtained from used milk bottles, while the reinforcing fillers are derived from waste pineapple leaves, encompassing both fibers (PALF) and non-fibrous materials (NFM). The research is published in the journal Polymers.
To prepare these composites, a two-roll mixer is employed to blend rHDPE with NFM and PALF, ensuring optimal alignment of the fillers in the resulting prepreg. Subsequently, the prepreg is layered and compressed into composite sheets. The incorporation of PALF as a reinforcing filler plays a pivotal role in significantly enhancing the flexural strength and modulus of the rHDPE composite.
A particularly noteworthy result is observed with a 20 wt.% PALF content, leading to an impressive 162% increase in flexural strength and a remarkable 204% increase in modulus compared to pristine rHDPE.
While the rHDPE/NFM composite also exhibits improved mechanical properties, albeit to a lesser extent than fiber reinforcement, both composites experience a slight reduction in impact resistance. Notably, the addition of NFM or PALF substantially raises the heat distortion temperature (HDT), elevating the HDT values to approximately 84°C and 108°C for the rHDPE/NFM and rHDPE/PALF composites, respectively. This is in stark contrast to the 71°C HDT of neat rHDPE.
Furthermore, the overall properties of both composites are further improved by increasing filler-matrix compatibility through the use of maleic anhydride-modified polyethylene (MAPE). Examination of the impact fracture surfaces of both composites reveals heightened compatibility and clear alignment of the NFM and PALF fillers, highlighting the improved performance and environmental friendliness of composites produced from recycled plastics reinforced with pineapple leaf waste fillers.
Improved mechanical properties, especially resistance to deformation under normal or high temperatures, enhance the feasibility of using the product with reduced weight or a thinner design. This is crucial for applications like automotive parts.
This research underscores the promising avenue of utilizing waste materials for sustainable composite development, contributing to the broader goal of reducing environmental impact in the plastics industry. It also contributes to carbon removal by sequestering carbon in durable products.
Associate Professor Kheng Lim Goh, technical advisor of the PALF-HDPE study, considers the upcycling of HDPE milk bottles with pineapple leaf fibers a significant advancement. He is excited that this approach transforms abundant waste into high-stiffness HDPE composite materials with enhanced mechanical properties, holding promise for various industries, including biomedical and automotive.
However, to maintain a sustainable PALF supply chain for high-stiffness HDPE production that can be applied at speed and scale, pineapple farmers must prepare for and adapt to climate change effects, including erratic rainfall, temperature extremes, drought, soil erosion, invasive weeds, and persistent pests.
Both farmers and crop scientists should utilize information from climate projections, crop and economic models, and empirical field data to identify how pineapple crops can withstand dryness and inadequate soil moisture. They also need to explore alternative options for sustaining pineapple production to ensure a consistent PALF supply for high-stiffness HDPE composite material manufacturing.
More information: Taweechai Amornsakchai et al, Upcycling of HDPE Milk Bottles into High-Stiffness, High-HDT Composites with Pineapple Leaf Waste Materials, Polymers (2023). DOI: 10.3390/polym15244697
Physicists in Darmstadt are investigating aging processes in materials. For the first time, they have measured the ticking of an internal clock in glass. When evaluating the data, they discovered a surprising phenomenon.
We experience time as having only one direction. Who has ever seen a cup smash on the floor, only to then spontaneously reassemble itself? To physicists, this is not immediately self-evident because the formulae that describe movements apply irrespective of the direction of time.
A video of a pendulum swinging unimpeded, for instance, would look just the same if it ran backwards. The everyday irreversibility we experience only comes into play through a further law of nature, the second law of thermodynamics. This states that the disorder in a system grows constantly. If the smashed cup were to reassemble itself, however, the disorder would decrease.
You might think that the aging of materials is just as irreversible as the shattering of a glass. However, when researching the movements of molecules in glass or plastic, physicists from Darmstadt have now discovered that these movements are time-reversible if they are viewed from a certain perspective.
The team led by Till Böhmer at the Institute for Condensed Matter Physics at the Technical University of Darmstadt has published its results in Nature Physics.
Glasses or plastics consist of a tangle of molecules. The particles are in constant motion, causing them to slip into new positions again and again. They are permanently seeking a more favorable energetic state, which changes the material properties over time—the glass ages.
In useful materials such as window glass, however, this can take billions of years. The aging process can be described by what is known as the “material time.” Imagine it like this: the material has an internal clock that ticks differently to the clock on the lab wall. The material time ticks at a different speed depending on how quickly the molecules within the material reorganize.
Since the concept was introduced some 50 years ago, though, no one had succeeded in measuring material time. Now, the researchers in Darmstadt led by Prof. Thomas Blochowicz have done so for the first time.
“It was a huge experimental challenge,” says Böhmer. The minuscule fluctuations in the molecules had to be documented using an ultra-sensitive video camera. “You can’t just watch the molecules jiggle around,” adds Blochowicz.
Yet the researchers did notice something. They directed a laser at the sample made of glass. The molecules within it scatter the light. The scattered beams overlap and form a chaotic pattern of light and dark spots on the camera’s sensor. Statistical methods can be used to calculate how the fluctuations vary over time—in other words, how fast the material’s internal clock ticks. “This requires extremely precise measurements which were only possible using state-of-the-art video cameras,” says Blochowicz.
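One standard statistic for "how the fluctuations vary over time" in such speckle movies is the intensity autocorrelation function; the sketch below is a generic illustration on synthetic data, not the group's actual analysis pipeline:

```python
import numpy as np

# Normalized intensity autocorrelation g2(tau) of a speckle movie:
# g2(tau) = <I(t) I(t+tau)> / <I>^2, averaged over pixels.
def g2(frames):
    """frames: array of shape (n_times, n_pixels) of speckle intensities."""
    mean_I = frames.mean(axis=0)
    lags = np.arange(1, frames.shape[0] // 2)
    vals = [((frames[:-tau] * frames[tau:]).mean(axis=0) / mean_I**2).mean()
            for tau in lags]
    return lags, np.array(vals)

# Synthetic test: an exponentially decorrelating field (decay time ~50 frames);
# the "intensity" is the squared field, and g2 decays as correlations fade.
rng = np.random.default_rng(1)
n_t, n_px, rho = 2000, 256, np.exp(-1 / 50)
field = rng.normal(size=(n_t, n_px))
for i in range(1, n_t):
    field[i] = rho * field[i - 1] + np.sqrt(1 - rho**2) * rng.normal(size=n_px)
lags, g = g2(field**2)
print(g[1], g[200])    # large at short lags, close to 1 once correlations are lost
```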
But it was worth it. The statistical analysis of the molecular fluctuations, which researchers from Roskilde University in Denmark helped with, revealed some surprising results. In terms of material time, the fluctuations of the molecules are time-reversible. This means that they do not change if the material time is allowed to tick backwards, similar to the video of the pendulum, which looks the same when played forwards and backwards.
“However, this does not mean that the aging of materials can be reversed,” emphasizes Böhmer. Rather, the result confirms that the concept of material time is well chosen because it expresses the entire irreversible part of the aging of the material. Its ticking embodies the passage of time for the material in question.
Everything else that moves in the material in relation to this time scale does not contribute to aging. Just as, metaphorically speaking, children playing around in the back seat of a car do not contribute to its movement.
The Darmstadt researchers believe that this applies generally to disordered materials, as they examined two classes of material, glass and plastic, and carried out a computer simulation of a model material, all with the same results.
The physicists’ success is just the beginning. “This leaves us with a mountain of unanswered questions,” says Blochowicz. For example, it remains to be clarified to what extent the observed reversibility in terms of material time is due to the reversibility of the physical laws of nature, or how the ticking of the internal clock differs for different materials.
The researchers are keen to investigate further, so more exciting discoveries could lie ahead.
More information: Till Böhmer et al, Time reversibility during the ageing of materials, Nature Physics (2024). DOI: 10.1038/s41567-023-02366-z
What happens when you expose tellurite glass to femtosecond laser light? That’s the question that Gözden Torun at the Galatea Lab at Ecole Polytechnique Federale de Lausanne, in collaboration with Tokyo Tech scientists, aimed to answer in her thesis work when she made the discovery that may one day turn windows into single material light-harvesting and sensing devices. The results are published in Physical Review Applied.
Interested in how the atoms in the tellurite glass would reorganize when exposed to fast pulses of high energy femtosecond laser light, the scientists stumbled upon the formation of nanoscale tellurium and tellurium oxide crystals, both semiconducting materials etched into the glass, precisely where the glass had been exposed. That was the eureka moment for the scientists, since a semiconducting material exposed to daylight may lead to the generation of electricity.
“Tellurium being semiconducting, based on this finding we wondered if it would be possible to write durable patterns on the tellurite glass surface that could reliably induce electricity when exposed to light, and the answer is yes,” explains Yves Bellouard who runs EPFL’s Galatea Laboratory. “An interesting twist to the technique is that no additional materials are needed in the process. All you need is tellurite glass and a femtosecond laser to make an active photoconductive material.”
Using tellurite glass produced by colleagues at Tokyo Tech, the EPFL team brought their expertise in femtosecond laser technology to bear, modifying the glass and analyzing the effect of the laser. After writing a simple line pattern on the surface of a tellurite glass sample 1 cm in diameter, Torun found that it reliably generated a current when exposed to UV and visible light, and continued to do so for months.
“It’s fantastic, we’re locally turning glass into a semiconductor using light,” says Yves Bellouard. “We’re essentially transforming materials into something else, perhaps approaching the dream of the alchemist.”
More information: Gözden Torun et al, Femtosecond-laser direct-write photoconductive patterns on tellurite glass, Physical Review Applied (2024). DOI: 10.1103/PhysRevApplied.21.014008
by Nicola Nosengo, National Centre of Competence in Research (NCCR) MARVEL
Muon spectroscopy is an important experimental technique that scientists use to study the magnetic properties of materials. It is based on “implanting” a spin-polarized muon in the crystal and measuring how its behavior is affected by the surroundings.
The technique relies on the idea that the muon will occupy a well-identified site that is mainly determined by electrostatic forces, and that can be found by calculating the material’s electronic structure.
But a new study led by scientists in Italy, Switzerland, the UK, and Germany has found that, at least for some materials, that is not the end of the story: the muon site can change due to a well-known but previously neglected effect, magnetostriction.
Pietro Bonfà from the University of Parma, lead author of the study published in Physical Review Letters, explains that his group and their colleagues at the University of Oxford (UK) have been using density-functional theory (DFT) simulations for at least a decade to find muon sites.
“We started with tricky cases, such as europium oxide and manganese oxide, and in both cases, we could not find a reasonable way to reconcile DFT predictions and the experiments,” he says.
“We then tested simpler systems and we had many successful predictions, but those two cases were really bothering us. These compounds should be easy and instead turned out to be super complicated and we did not understand what was happening. Manganese oxide is a textbook case of an antiferromagnetic system, and we could not explain muon spectroscopy results for it, which was a bit embarrassing.”
The problem, he explains, was the contradiction between the expectation to find the muon in a high symmetry position, and its well-known tendency to make bonds with oxygen atoms. The antiferromagnetic order of the material reduces the symmetry, and the position close to the oxygen atoms becomes incompatible with experiments.
Bonfà suspected that the explanation could be linked to the material undergoing a magnetic phase transition and started trying to reproduce the phenomenon in simulations of manganese oxide.
“Because it is a complicated system, you must add some corrections to DFT, such as the Hubbard U parameter,” he said. “But we were choosing its value empirically, and when you do that, you have a lot of uncertainty, and the results can change dramatically depending on the value you choose.”
Still, Bonfà’s initial simulations suggested that the muon positions could be driven by magnetostriction, a phenomenon that causes a material to change its shape and dimensions during magnetization. To prove it beyond doubt, he teamed up with the MARVEL laboratories at EPFL and PSI of Nicola Marzari and Giovanni Pizzi.
“We used a state-of-the-art method called DFT+U+V, which was very important to make simulations more accurate,” explains Iurii Timrov, a scientist in the Laboratory for Materials Simulations at PSI and co-author of the study.
This method can be used with onsite U and intersite V Hubbard parameters that are computed from first principles instead of being chosen empirically, thanks to the use of density-functional perturbation theory for DFT+U+V that was developed within MARVEL and implemented in the Quantum ESPRESSO package.
“Although we had already figured out that magnetostriction was at play, having the correct information on the building blocks of the simulation was very important, and that came from Iurii’s work,” adds Bonfà.
In the end, the solution of the puzzle was relatively simple: magnetostriction, the interplay between magnetic and elastic degrees of freedom in the material, distorts the crystal at MnO's magnetic phase transition at 118 K, and the muon site switches at that transition. Above that temperature, the muon becomes delocalized over a network of equivalent sites, which explains the unusual behavior observed in experiments at high temperatures.
The scientists expect that the same may be true also for many other rocksalt-structured magnetic oxides.
In the future, Timrov explains, the group wants to keep studying the same material also including temperature effects, using another advanced technique developed in MARVEL and called stochastic self-consistent harmonic approximation.
In addition, and in collaboration with Giovanni Pizzi’s group at the Paul Scherrer Institute, this approach will be made available to the community through the AiiDAlab interface, so that all experimentalists can use it for their own studies.
More information: Pietro Bonfà et al, Magnetostriction-Driven Muon Localization in an Antiferromagnetic Oxide, Physical Review Letters (2024). DOI: 10.1103/PhysRevLett.132.046701
Scientists from Hiroshima University undertook a study of dragonfly wings in order to better understand the relationship between a corrugated wing structure and vortex motions. They discovered that corrugated wings exhibit larger lift than flat wings.
Their work was published in the journal Physical Review Fluids on December 7, 2023.
The researchers set out to determine if the corrugation of a dragonfly’s wing is a secret ingredient for boosting lift. While past research has largely zoomed in on the steady flow around the wing during forward motion, the impact of vortices spawned by its corrugated structure on lift has remained a mystery.
The wing surfaces of insects like dragonflies, cicadas, and bees are not flat like the wings of a passenger plane. Insect wings are composed of veins and membranes, and their cross-sectional shapes consist of vertices (veins) and line segments (membranes), so the geometry appears as a chain of V-shaped and other segments.
Earlier studies have shown that corrugated wings, with their ridges and grooves, have a better aerodynamic performance than smooth wings at low Reynolds numbers. In aerodynamics, the Reynolds number is a quantity that helps predict the flow pattern of fluids.
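For reference, the Reynolds number compares inertial to viscous forces in a flow:

$$\mathrm{Re} = \frac{\rho\, v\, L}{\mu} = \frac{v\, L}{\nu},$$

where $\rho$ is the fluid density, $v$ a characteristic flow speed, $L$ a characteristic length (for an insect, roughly the wing chord), $\mu$ the dynamic viscosity, and $\nu$ the kinematic viscosity. Insect flight takes place at low Reynolds numbers, where viscous effects are comparatively important.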
Earlier aerodynamic studies on corrugated wings have contributed to applications in small flying robots, drones, and windmills. Because insects possess low muscular strength, in some way their corrugated wings must give them aerodynamic advantages. Yet scientists have not fully understood the mechanism at work because of the complex wing structure and flow characteristics.
The researchers used direct numerical calculations to analyze the flow around a two-dimensional corrugated wing and compared the corrugated wing performance to that of a flat wing. They focused their study on the period between the initial generation of the leading-edge vortex and subsequent interactions before detachment.
They discovered that the corrugated wing performed better when the angle of attack, the angle at which the wind meets the wing, was greater than 30°.
The corrugated wing’s uneven structure generates an unsteady lift because of complex flow structures and vortex motions. “We’ve discovered a boosting lift mechanism powered by a unique airflow dance set off by a distinct corrugated structure. It can be a game-changer from the simple plate wing scenario,” said Yusuke Fujita, a Ph.D. student at the Graduate School of Integrated Sciences for Life, Hiroshima University.
The researchers constructed a two-dimensional model of a corrugated wing using a real-life dragonfly wing. The model consisted of deeper corrugated structures on the leading-edge side and less deep, or flatter, structures on the trailing-edge side.
Using their two-dimensional model, they further simplified the wing motion and focused on unsteady lift generation by translating from rest. Translational motion, or sliding motion, is a principal component of wing motion, in addition to pitching and rotation. The researchers’ analysis expands the understanding of the non-stationary mechanisms that dragonflies use during flight.
The research team considered two-dimensional models in their study. However, their work focused on the aerodynamics of insect flight, where the flow is typically three-dimensional.
“If these results are expanded to a three-dimensional system, we expect to gain more practical knowledge for understanding insect flight and its application in the industry,” said Makoto Iima, a professor at the Graduate School of Integrated Sciences for Life, Hiroshima University.
Looking ahead, the researchers will focus their investigations on three-dimensional models. “We kicked things off with a two-dimensional corrugated wing model in a sudden burst of motion. Now, we embark on the quest to explore the lift-boosting across a broader range of wing shapes and motions. Our ultimate goal is crafting a new bio-inspired wing with high performance by our lift-enhancing mechanism,” said Fujita.
More information: Yusuke Fujita et al, Dynamic lift enhancement mechanism of dragonfly wing model by vortex-corrugation interaction, Physical Review Fluids (2023). DOI: 10.1103/PhysRevFluids.8.123101
Quark gluon plasma (QGP) is an exciting state of matter that scientists create in a laboratory by colliding two heavy nuclei. These collisions produce a QGP fireball. The fireball expands and cools following the laws of hydrodynamics, which govern how fluids behave in various conditions. Eventually, subatomic particles (protons, pions, and other hadrons, or particles made up of two or more quarks) emerge and are observed and counted by detectors surrounding the collision.
Fluctuations in the number of these particles from collision to collision carry important information about the QGP. However, extracting this information from what scientists can observe is a difficult task. An approach called the maximum entropy principle provides a crucial connection between these experimental observations and the hydrodynamics of the QGP fireball.
The approach is described in the journal Physical Review Letters.
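In its general textbook form (not the specific freeze-out construction of the paper), the maximum entropy principle selects the least-biased probability distribution consistent with a set of constraints:

$$\max_{p}\; S[p] = -\sum_i p_i \ln p_i
\quad\text{subject to}\quad
\sum_i p_i = 1,\qquad \sum_i p_i\, f_k(i) = F_k
\;\;\Longrightarrow\;\;
p_i \propto \exp\!\Big(-\sum_k \lambda_k f_k(i)\Big),$$

with the Lagrange multipliers $\lambda_k$ fixed by the constraint values $F_k$.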
As a QGP fireball expands and cools, it eventually becomes too diluted to be described by hydrodynamics. At this stage the QGP has “hadronized.” This means its energy and other quantum properties are carried by hadrons. These are subatomic particles such as protons, neutrons, and pions that are made up of quarks. The hadrons “freeze out”—they freeze information about the final hydrodynamic state of the QGP fireball, allowing the particles streaming from the collision to carry this information to the detectors in an experiment.
The research provides a tool for using simulations to compute observable fluctuations in the QGP. This has allowed the researchers, from the University of Illinois Chicago, to use freeze-out to identify hints of a critical point between the QGP and a gaseous hadronized state. This critical point is one of the unresolved questions of quantum chromodynamics (QCD), the theory of the strong interactions between quarks, mediated by gluons.
Fluctuations in the QGP carry information about the region of the QCD phase diagram where the collisions “freeze out.” This makes connecting fluctuations in hydrodynamics to fluctuations of the observed hadrons a crucial step in translating experimental measurements into the map of the QCD phase diagram. Large event-by-event fluctuations are telltale experimental signatures of the critical point.
Data from the Run-I Beam Energy Scan (BES) program at the Relativistic Heavy-Ion Collider (RHIC) hint at the presence of the critical point. To follow this hint, the researchers proposed a novel and universal approach to converting hydrodynamic fluctuations into fluctuations of hadron multiplicities.
The approach elegantly overcomes challenges faced by previous attempts to solve this problem. Crucially, the new approach based on the maximum-entropy principle preserves all the information about the fluctuations of conserved quantities described by hydrodynamics. The novel freeze-out procedure will find applications in the theoretical calculations of event-by-event fluctuations and correlations observed in experiments such as the Beam Energy Scan program at RHIC aimed at mapping the QCD phase diagram.
More information: Maneesha Sushama Pradeep et al, Maximum Entropy Freeze-Out of Hydrodynamic Fluctuations, Physical Review Letters (2023). DOI: 10.1103/PhysRevLett.130.162301