The FCC would form a new circular tunnel under France and Switzerland.
Europe’s CERN laboratory revealed more details Monday about its plans for a huge new particle accelerator that would dwarf the Large Hadron Collider (LHC), ramping up efforts to uncover the underlying secrets of the universe.
If approved, the Future Circular Collider (FCC) would start smashing its first particles together around the middle of this century—and start its highest-energy collisions around 2070.
Running under France and Switzerland, it would be more than triple the length of CERN’s LHC, currently the largest and most powerful particle accelerator.
The idea behind both is to send particles racing around a ring and smash them into each other at nearly the speed of light, so that the collisions reveal their true nature.
Among other discoveries, the LHC made history in 2012 when it allowed scientists to observe the Higgs boson for the first time.
But the LHC, which cost $5.6 billion and began operating in 2010, is expected to have run its course by around 2040.
The faster and more powerful FCC would allow scientists to continue pushing the envelope. They hope it could confirm the existence of more particles—the building blocks of matter—which so far have only been theorized.
Another unfinished job for science is working out exactly what 95 percent of the universe is made of. About 68 percent of the universe is believed to be dark energy while 27 percent is dark matter—both remain a complete mystery.
Another unknown is why there is so little antimatter in the universe, compared to matter.
CERN hopes that a massive upgrade of humanity’s ability to smash particles could shed light on these enigmas and more.
“Our aim is to study the properties of matter at the smallest scale and highest energy,” CERN director-general Fabiola Gianotti said as she presented an interim report in Geneva.
The report laid out the first findings of an FCC feasibility study that will be finalized by 2025.
$17 billion first stage
In 2028, CERN’s member states, which include the UK and Israel, will decide whether or not to go through with the plan.
If given the green light, construction on the collider would start in 2033.
The project is split into parts.
In 2048, the “electron-positron” collider would start smashing light particles, with the aim of further investigating the Higgs boson and what is called the weak force, one of the four fundamental forces.
The cost of the tunnel, infrastructure and the first stage of the collider would be about 15 billion Swiss francs ($17 billion), Gianotti said.
The heavy-duty hadron collider, which would smash protons together, would only come online in 2070.
Its energy target would be 100 trillion electronvolts—smashing the LHC’s record of 13.6 trillion.
Gianotti said this later collider is the “only machine” that would allow humanity “to make a big jump in studying matter”.
After eight years of study, the configuration chosen for the FCC was a new circular tunnel 90.7 kilometers (56.5 miles) long and 5.5 meters (18 feet) in diameter.
The tunnel, which would connect to the LHC, would pass under the Geneva region and its namesake lake in Switzerland, and loop round to the south near the picturesque French town of Annecy.
Eight technical and scientific sites would be built on the surface.
CERN said it is consulting with the regions along the route and plans to carry out impact studies on how the tunnel would affect the area.
A composite interferometer experiment device for the undetected photon quantum sensor. Credit: Korea Research Institute of Standards and Science (KRISS)
The Korea Research Institute of Standards and Science (KRISS) has developed a novel quantum sensor technology that measures perturbations in the infrared region using visible light, by leveraging the phenomenon of quantum entanglement. This will enable low-cost, high-performance IR optical measurement, an area where existing approaches have struggled to deliver quality results.
The work is published in the journal Quantum Science and Technology.
When a pair of photons, the smallest units of light, are linked by quantum entanglement, they share a correlated quantum state regardless of the distance between them. The recently developed undetected photon quantum sensor is a remote sensor that utilizes two light sources to recreate such quantum entanglement.
An undetected photon (idler) refers to a photon that travels to the target of measurement and bounces back. Instead of directly measuring this photon, the undetected photon sensor measures the other photon of the pair that is linked by quantum entanglement to obtain information about the target.
Quantum sensing based on undetected photons is a nascent technology that has only been realized in the last decade. With the technology still at its early stages, the global research community continues to engage actively in the development race. The undetected photon quantum sensor developed by KRISS is differentiated from previous studies in its core photometric devices, the photodetector and interferometer.
Researchers performing optical alignment with the pump laser of the composite interferometer experiment device. Credit: Korea Research Institute of Standards and Science (KRISS)
A photodetector is a device that converts light into an electrical signal. Existing high-performance photodetectors were largely limited to the visible light bandwidths. While wavelengths in the infrared region are useful for measurements across many fields, detectors for this band were either unavailable or performed poorly.
This latest KRISS research has allowed the use of visible light detectors to measure the light states in the infrared band, enabling efficient measurement without requiring costly and power-consuming equipment. It can be used in a wide range of applications, including the non-destructive measurement of three-dimensional structures, biometry, and the analysis of gas compositions.
Another critical element in precision optical measurement is the interferometer, a device that obtains signals by combining multiple rays of light that travel along separate paths. Conventional undetected photon quantum sensors mainly use simple Michelson interferometers with fixed light paths, restricting the number of targets that can be measured.
The sensor developed by KRISS implements a hybrid interferometer that can flexibly change the light paths depending on the target object, greatly improving scalability. Thus, the sensor is suitable for adaptation to various environmental requirements as it can be modified based on the size or shape of the measured object.
The Quantum Optics Group at KRISS has presented a theoretical analysis of the factors that determine the key performance metrics of the quantum sensors and empirically demonstrated their effectiveness by using a hybrid interferometer.
The research team reflected infrared light off a three-dimensional sample and measured the entangled photons in the visible band to obtain an image of the sample, including its depth and width. The team successfully reconstructed a three-dimensional infrared image from measurements made entirely in the visible range.
Park Hee Su, Head of the Quantum Optics Group at KRISS, said, “This is a breakthrough example that has overcome the limits of conventional optical sensing by leveraging the principles of quantum optics.” He added that KRISS “will continue with follow-up research for the practical application of the technology by reducing its measurement time and increasing sensor resolution.”
3D reconstruction of select neurons in a small region of the human cortex dataset. Credit: Harvard University/Google
When a magnet is heated up, it reaches a critical point where it loses magnetization. Called “criticality,” this point of high complexity is reached when a physical object is transitioning smoothly from one phase into the next.
Now, a new Northwestern University study has discovered that the brain’s structural features reside in the vicinity of a similar critical point—either at or close to a structural phase transition. Surprisingly, these results are consistent across brains of humans, mice and fruit flies, which suggests the finding might be universal.
Although the researchers don’t know between which phases the brain’s structure is transitioning, they say this new information could enable new designs for computational models of the brain’s complexity and emergent phenomena.
The research was published today in Communications Physics.
“The human brain is one of the most complex systems known, and many properties of the details governing its structure are not yet understood,” said Northwestern’s István Kovács, the study’s senior author.
“Several other researchers have studied brain criticality in terms of neuron dynamics. But we are looking at criticality at the structural level in order to ultimately understand how this underpins the complexity of brain dynamics. That has been a missing piece for how we think about the brain’s complexity. Unlike in a computer where any software can run on the same hardware, in the brain the dynamics and the hardware are strongly related.”
Examples of a single neuron reconstruction from each of the fruit fly, mouse and human datasets. (Not to scale). Credit: Northwestern University
“The structure of the brain at the cellular level appears to be near a phase transition,” said Northwestern’s Helen Ansell, the paper’s first author. “An everyday example of this is when ice melts into water. It’s still water molecules, but they are undergoing a transition from solid to liquid. We certainly are not saying that the brain is near melting. In fact, we don’t have a way of knowing what two phases the brain could be transitioning between. Because if it were on either side of the critical point, it wouldn’t be a brain.”
Kovács is an assistant professor of physics and astronomy at Northwestern’s Weinberg College of Arts and Sciences. At the time of the research, Ansell was a postdoctoral researcher in his laboratory; now she is a Tarbutton Fellow at Emory University.
While researchers have long studied brain dynamics using functional magnetic resonance imaging (fMRI) and electroencephalograms (EEG), advances in neuroscience have only recently provided massive datasets for the brain’s cellular structure. These data opened possibilities for Kovács and his team to apply statistical physics techniques to measure the physical structure of neurons.
For the new study, Kovács and Ansell analyzed publicly available data of 3D brain reconstructions from humans, fruit flies and mice. By examining the brain at nanoscale resolution, the researchers found the samples showcased hallmarks of physical properties associated with criticality.
One such property is the well-known, fractal-like structure of neurons. This nontrivial fractal dimension is an example of a set of observables, called "critical exponents," that emerge when a system is close to a phase transition.
Snapshot of select neurons from the human cortex dataset, viewed using the online neuroglancer platform. Credit: Harvard University/Google
Brain cells are arranged in a fractal-like statistical pattern at different scales. When zoomed in, the fractal shapes are “self-similar,” meaning that smaller parts of the sample resemble the whole sample. The sizes of various neuron segments observed are also diverse, which provides another clue. According to Kovács, self-similarity, long-range correlations and broad size distributions are all signatures of a critical state, where features are neither too organized nor too random. These observations lead to a set of critical exponents that characterize these structural features.
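A fractal dimension of the kind described above can be estimated numerically. The sketch below is a generic box-counting estimator (an illustration, not the authors' actual analysis pipeline): it counts how many boxes of a given size contain at least one point of the structure, then reads the dimension off the log-log scaling of that count.

```python
import numpy as np

def box_counting_dimension(points, sizes):
    """Estimate the fractal (box-counting) dimension of a 3D point cloud.

    Counts how many cubic boxes of side s contain at least one point;
    for a fractal, N(s) ~ s^(-D), so D is the slope of log N vs log(1/s).
    """
    points = np.asarray(points, dtype=float)
    counts = []
    for s in sizes:
        # Assign each point to a box index and count distinct occupied boxes.
        boxes = np.floor(points / s).astype(int)
        counts.append(len({tuple(b) for b in boxes}))
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity check: points sampled along a straight line should give D close to 1.
line = np.column_stack([np.linspace(0, 1, 2000)] * 3)
d = box_counting_dimension(line, sizes=[0.2, 0.1, 0.05, 0.025])
```

A real neuron reconstruction would be fed in as its point cloud of segment coordinates; a self-similar arborization yields a non-integer slope between 1 and 3.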
“These are things we see in all critical systems in physics,” Kovács said. “It seems the brain is in a delicate balance between two phases.”
Kovács and Ansell were amazed to find that all brain samples studied—from humans, mice and fruit flies—have consistent critical exponents across organisms, meaning they share the same quantitative features of criticality. The underlying, compatible structures among organisms hint that a universal governing principle might be at play. Their new findings potentially could help explain why brains of different creatures share some of the same fundamental principles.
“Initially, these structures look quite different—a whole fly brain is roughly the size of a small human neuron,” Ansell said. “But then we found emerging properties that are surprisingly similar.”
“Among the many characteristics that are very different across organisms, we relied on the suggestions of statistical physics to check which measures are potentially universal, such as critical exponents. Indeed, those are consistent across organisms,” Kovács said.
“As an even deeper sign of criticality, the obtained critical exponents are not independent—from any three, we can calculate the rest, as dictated by statistical physics. This finding opens the way to formulating simple physical models to capture statistical patterns of the brain structure. Such models are useful inputs for dynamical brain models and can be inspirational for artificial neural network architectures.”
Next, the researchers plan to apply their techniques to emerging new datasets, including larger sections of the brain and more organisms. They aim to determine whether the universality will still apply.
The GeV as an antenna. Credit: Nature Photonics (2024). DOI: 10.1038/s41566-024-01456-5
Similar to how a radio antenna plucks a broadcast from the air and concentrates the energy into a song, individual atoms can collect and concentrate the energy of light into a strong, localized signal that researchers can use to study the fundamental building blocks of matter.
The more powerful the intensity enhancement, the better the antenna. But researchers have never been able to tap the potentially huge intensity enhancements of some “atomic antennas” in solid materials simply because they were solids.
“Most of the time when you have atoms in solids, they interact with the environment. There’s a lot of disorder, they get shaken by phonons and face other disruptions that reduce the coherence of the signal,” said UChicago Pritzker School of Molecular Engineering Asst. Prof. Alex High.
In a new paper published in Nature Photonics, a multi-institutional team led by the High Lab has cracked this problem. They have used germanium vacancy centers in diamonds to create an optical energy enhancement of six orders of magnitude, a regime challenging to reach with conventional antenna structures.
This million-fold energy enhancement creates what the paper calls an "exemplary" optical antenna and provides a tool that opens up entirely new areas of research.
“It’s not just a breakthrough in technology. It’s also a breakthrough in fundamental physics,” said PME Ph.D. candidate Zixi Li, co-first author on the paper. “While it’s well-known that an excited atomic dipole can generate a near-field with huge intensity, no one has ever demonstrated this in an experiment before.”
From theory to practice
The core feature of an optical antenna is that it creates an oscillating electronic dipole when excited at resonance.
“Optical antennas are basically structures that interact with electromagnetic fields and absorb or emit light at certain resonances, like the electrons moving between energy levels in these color centers,” High said.
The electron oscillates when it transitions between an excited state and a ground state and concentrates a comparatively huge amount of energy, making an atomic optical dipole in a solid an excellent antenna—theoretically.
What kept that ability theoretical was the fact the atoms were in solids, subject to all the jostling, electron interference and general noise that comes from being part of a tightly-packed structure. Color centers—small defects in diamonds and other materials with interesting quantum properties—provided the team a solution.
“Something that’s been observed for the last seven or eight years is that certain types of color centers can be immune to these environmental effects,” High said.
Ph.D. candidate Zixi Li at the UChicago Pritzker School of Molecular Engineering is the co-first author on a new paper from the lab of Asst. Prof. Alex High, which demonstrates a new way to provide more powerful measurements on the atomic level. Credit: Hong Qiao
This opens intriguing research opportunities, said co-author Darrick Chang of the Institute of Photonic Sciences in Barcelona, Spain.
“To me, the most interesting aspect of a color center is not just the field enhancement, but also the fact that the emitted light is intrinsically quantum mechanical,” he said. “That makes it intriguing to consider whether a ‘quantum optical antenna’ can have a different set of functionalities and working mechanisms as compared to a classical optical antenna.”
But turning this theory into a practicable antenna took years, collaboration with researchers around the globe and theoretical guidance from UChicago’s Galli Group.
“The collaboration between theory, computation and experiments initiated by Alex High not only contributed to understanding and interpreting the core science, but also opened new lines of research on the computational side,” said PME Liew Family Prof. Giulia Galli, a co-author on the paper. “The collaboration has been extremely fruitful.”
‘The magic of a color center’
Imaging at the atomic level is a combination of amplification and bandwidth—the strength of the signal and the amount of signal you can study. Because of this, co-first author Xinghan Guo sees the new technique as complementary to, not replacing, existing techniques.
“We offer a much higher amplification but our bandwidth is narrower,” said Guo, who recently completed his Ph.D. at PME and is now a postdoctoral researcher at Yale. “If you have a very selective signal which has a narrow bandwidth but requires a lot of amplification, you can come to us.”
The new technique offers benefits beyond a stronger signal. While existing techniques like single-molecule Raman and FRET spectroscopy boost the signal by blasting it with light, this technique requires only nanowatts of power to activate. This means a strong signal without the bleaching, heating and background fluorescence that excessive light creates.
The germanium vacancy centers also do not dissipate energy as they are used, unlike conventional plasmonic antennas.
“The magic of a color center is that it is simultaneously point-like and avoids the losses of a plasmonic material, allowing it to retain its extreme field enhancement,” Chang said.
For High, the exciting part is not the new form of antenna, but the potential discoveries they will make.
“What’s exciting is that this is a general feature,” High said. “We can integrate these color centers into a huge range of systems, and then we can use these as local antennas to grow new processes that both build new devices and help us understand how the universe works.”
Simulation of the interaction between a Doppler-boosted laser and a solid target. Credit: Zaïm et al.
The experimental generation of increasingly intense light beams could help to unveil new physical regimes occurring in the presence of very strong electromagnetic fields. While some progress has been made towards this goal, physicists are yet to develop a reliable strategy to achieve extreme light intensities.
Researchers at LIDYL, CEA, CNRS, Université Paris-Saclay recently proposed a realistic method to reach unprecedented light intensities in experimental settings, using tightly focused Doppler-boosted lasers. This method, outlined in a paper published in Physical Review Letters, was theoretically found to enable light-matter interactions near the Schwinger limit.
“Within this collaboration, we are devising a new technique to produce a light source at an unprecedented intensity and researching how such a light source could be used to explore the strong field regime of quantum electrodynamics (SF-QED),” the researchers said.
QED, the relativistic quantum theory of electrodynamics, is among the most accurately tested theories in physics. Its strong-field regime, however, remains largely unexplored, due to current difficulties in probing it experimentally.
“The theory of SF-QED was developed decades ago and predicts the emergence of new physical regimes in the presence of very strong electromagnetic fields, where gamma-ray emission and antimatter (electron-positron pair) production are prevalent and where even light propagation in vacuum becomes nonlinear,” Vincenti and Zaim said.
“For instance, an intense light beam can modify the propagation of another light beam crossing its path, a regime not described by Maxwell’s equations that are linear by essence.”
Strong field regimes are theorized to occur in the vicinity of massive astrophysical objects, including black holes and neutron stars, as well as during extreme astrophysical events, such as gamma-ray bursts. These cosmological phenomena are not yet fully understood, thus studying the extreme regimes associated with them in laboratory settings could prove highly insightful.
Until now, however, scientists have been unable to successfully reproduce SF-QED dominated regimes in experimental settings. The few experiments that attempted to do so relied on large-scale particle accelerators, yet they could only detect a small number of SF-QED processes.
“These regimes are challenging to reproduce in a laboratory setting because SF-QED phenomena occur when electromagnetic fields approach the so-called Schwinger limit (~10¹⁸ V/m, or equivalently ~10²⁹ W/cm²); orders of magnitude above state-of-the-art laser technology, which can ‘only’ produce intensities up to ~10²³ W/cm²,” Vincenti and Zaim said.
“In fact, it is often considered impossible to reach the Schwinger field in the laboratory frame. Thus, all past and proposed experiments rely on reaching the Schwinger limit only in the rest frame of very energetic particles.”
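The rest-frame argument can be made concrete with a back-of-the-envelope calculation. The sketch below is purely illustrative (it is not the team's simulations): it converts a laser intensity to a peak electric field using the plane-wave relation I = ½ε₀cE², then asks how relativistic a counter-propagating electron must be for the field it sees, boosted by roughly a factor of 2γ, to reach the Schwinger field.

```python
import math

# Physical constants (SI units)
EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
C = 2.99792458e8          # speed of light, m/s
E_SCHWINGER = 1.32e18     # Schwinger critical field, V/m (~1.3e18 V/m)
MC2_MEV = 0.511           # electron rest energy, MeV

def peak_field(intensity_w_cm2):
    """Peak electric field (V/m) of a linearly polarized plane wave
    of the given intensity (W/cm^2), from I = (1/2) * eps0 * c * E^2."""
    intensity_w_m2 = intensity_w_cm2 * 1e4
    return math.sqrt(2.0 * intensity_w_m2 / (EPS0 * C))

def gamma_to_reach_schwinger(intensity_w_cm2):
    """Lorentz factor at which a counter-propagating electron sees the
    laser field boosted (by roughly 2*gamma) up to the Schwinger field."""
    return E_SCHWINGER / (2.0 * peak_field(intensity_w_cm2))

# A state-of-the-art laser (~1e23 W/cm^2) sits about three orders of
# magnitude below the Schwinger intensity in the lab frame, yet an
# electron of a few hundred MeV already sees the critical field in
# its own rest frame.
g = gamma_to_reach_schwinger(1e23)
energy_mev = g * MC2_MEV
```

This illustrates why past experiments could reach the Schwinger field only in the rest frame of energetic particles, while the lab-frame field itself stayed far below the limit.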
Vincenti, Zaim and their colleagues hope that their proposed technique to generate highly intense light will open new opportunities for research. Specifically, it could eventually allow physicists to approach the so-called Schwinger limit in a laboratory setting.
“In our 2019 paper, we validated with state-of-the-art numerical simulations the feasibility of our light boosting technique,” Vincenti and Zaim said.
“Our simulations indicated that this method could increase the intensity of a PW laser by 2 to 5 orders of magnitude, potentially bringing the 10²⁵–10²⁸ W/cm² intensity range within the reach of current laser technology. In a 2021 paper published in Nature Physics, we obtained a first experimental confirmation of this result with a more moderate-intensity (~10¹⁹ W/cm²) terawatt (TW)-class laser.”
In a further paper published in 2021, Vincenti and his collaborators outlined the results of additional numerical simulations. These showed that even at the lowest intensities they hope to achieve with the proposed method (~10²⁵ W/cm²), the boosted light would be sufficient to trigger far more SF-QED phenomena than can be probed with a conventional PW laser.
“This could lead to a new kind of SF-QED experiments in the coming years,” Vincenti and Zaim said. “In this case, however, we are still orders of magnitude from reaching the Schwinger limit in the laboratory frame, as it is only exceeded in the rest frame of energetic particles.”
While the researchers had already run various numerical simulations to theoretically validate their approach, one of its potential applications was yet to be explored. Specifically, the team had not yet explored its potential for approaching the Schwinger limit in a laboratory frame.
Simulation of the collision between a Doppler-boosted laser and an electron beam. Credit: Zaïm et al.
“This corresponds to the highest intensities that we hope to achieve with our light boosting technique (~10²⁸ W/cm²),” Vincenti and Zaim said. “The objective of this new paper was to explore the physical scenarios that come into play in these uncharted territories, using state-of-the-art numerical tools. Such results are very important to motivate, define and prepare the future generations of SF-QED experiments.”
To produce light of an unprecedented intensity, the technique proposed by Vincenti and his colleagues leverages the interaction between a PW laser and a flat solid target, which is ionized into a plasma. Specifically, the researchers proposed hitting an optically polished solid target with a laser beam of ultrahigh intensity, leading to the formation of a so-called plasma mirror.
This plasma mirror reflects the incident light and is also moved by the intense laser field. This motion results in the temporary compression of the reflected laser pulse, which is then converted to a shorter wavelength by the Doppler effect. The radiation pressure from the laser gives the plasma mirror a natural curvature, focusing the Doppler-boosted beam onto smaller spots and theoretically producing intensity gains of over three orders of magnitude in these spots.
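The scalings at play can be sketched with idealized relativistic-mirror formulas (a simplification, not the paper's model, and the mirror Lorentz factor used here is purely illustrative): reflection off a counter-propagating mirror with Lorentz factor γ shortens the wavelength by about 4γ², and refocusing the shorter-wavelength light to its diffraction limit shrinks the focal spot area by the square of that same factor.

```python
def boosted_wavelength(lambda0_nm, gamma):
    """Wavelength after reflection off a counter-propagating relativistic
    mirror: the Doppler shift compresses it by roughly 4 * gamma^2."""
    return lambda0_nm / (4.0 * gamma**2)

def diffraction_limited_gain(lambda0_nm, gamma):
    """Upper-bound intensity gain from refocusing the boosted pulse to its
    (now smaller) diffraction-limited spot: the focal area shrinks as
    (lambda'/lambda0)^2, so intensity grows as (4*gamma^2)^2, ignoring
    conversion losses in the reflection itself."""
    return (lambda0_nm / boosted_wavelength(lambda0_nm, gamma))**2

# Illustrative numbers: an 800 nm drive laser and a plasma-mirror gamma
# of ~5 push the reflected light into the extreme ultraviolet and allow
# a large intensity gain if the beam is refocused to the diffraction limit.
lam = boosted_wavelength(800.0, 5.0)          # boosted wavelength in nm
gain = diffraction_limited_gain(800.0, 5.0)   # ideal intensity gain factor
```

The upconversion into the extreme ultraviolet is also why the refocusing optics discussed below must themselves work at XUV wavelengths.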
“The key additional ingredient needed to reach the highest intensities approaching the Schwinger limit (say ~10²⁸ W/cm², rather than ~10²⁵ W/cm²) is the ability to focus the boosted light down to its smallest possible volume,” Vincenti and Zaim explained.
“We are currently exploring several paths to achieve such a tight focusing in experiments, for instance, by using external refocusing extreme ultraviolet optics. Some of these techniques already look very promising and will be the subject of future publications.”
In their recent paper, Vincenti and Zaim made no assumption about the method used to tightly focus the Doppler-boosted light, allowing their numerical simulations to represent various potential options. Instead, they simply assumed that the light could be focused to its smallest possible volume (i.e., its diffraction limit).
“The results that we have obtained are very exciting, as they show that approaching the Schwinger limit in the laboratory frame leads to new and extremely rich light-matter interactions scenarios, lying at the frontier of modern physics,” Vincenti and Zaim said.
“The simple interaction between our boosted light and a solid target leads to a profusion of SF-QED events that dominates the physics. Typically, between 30% and 50% of the boosted light energy is converted into gamma-rays and electron-positron pairs in a few tens of femtoseconds by SF-QED processes.”
The numerical simulations run by the researchers also showed that their method produces gamma photons and electron-positron pairs bunched into dense fireballs that move at nearly the speed of light. While these fireballs have a short life of approximately 1 fs, the team thinks they could mimic the electron-positron jets found near black holes and neutron stars, and could thus help unveil the origin of the radiation those objects emit.
“At the highest intensities that we could reasonably consider (>10²⁸ W/cm²), we discovered that the physics becomes even more radical: chain reactions of particle creation start occurring,” Vincenti and Zaim said.
“In other words, photons and electron-positron pairs themselves create new photons and pairs, exponentially increasing the density of the fireballs to more than 5,000 times the density of a solid. It is not unreasonable to think that such a chain-reaction mechanism could give rise to new advanced sources of gamma-ray bursts and antimatter.”
Vincenti, Zaim and their colleagues theoretically showed that the collision of their Doppler-boosted light with an energetic electron beam from a particle accelerator could also yield interesting results. In this configuration, the field in the rest frame of the electrons becomes so high that the perturbative theory developed for SF-QED breaks down.
“In other words, we have as of today, no clue what would happen in such an experiment,” Vincenti and Zaim explained.
“This lack of theoretical framework is likely both due to the mathematical complexity of nonperturbative quantum field theory and to the fact that researchers thought for many years that it would be impossible to reach such high electric fields in the rest frame of an electron. Our guess is that these kinds of results will further revive the interest in the nonperturbative regime of SF-QED and spur the development of new theoretical or numerical frameworks better suited to this regime.”
The results gathered by the researchers so far suggest that performing experiments close to the Schwinger limit would yield exciting new results that could greatly contribute to the fields of plasma physics and QED. In their next studies, they plan to start applying their proposed method in real experiments, in collaboration with major laser facilities worldwide.
“The main challenge that we anticipate is to actually produce the highest possible light intensities (up to 10²⁸ W/cm²) in a real-world environment with experimental imperfections (both laser and targetry) and limited beam time,” Vincenti and Zaim said. “Identifying and mitigating the hurdles that lie ahead will require the combination of theoretical, numerical and experimental expertise.”
The researchers forecast that in the first experiments, they will be able to create boosted light with intensities around 10²⁵ W/cm². While far from the Schwinger limit, such intensities would still set a world record, paving the way for high-impact SF-QED experiments that have never been performed before.
“We would then take advantage of the feedback from previous experiments and of future progress in laser technology to increase gradually the boosted light intensity from here up to the Schwinger limit,” Vincenti and Zaim added. “This will allow us to obtain more and more spectacular SF-QED dominated interactions. We are therefore convinced that there are exciting times ahead.”
Credit: The European Physical Journal E (2024). DOI: 10.1140/epje/s10189-024-00423-w
Active systems display a wide range of complex and fascinating behaviors, many of which are not yet fully understood. Found on scales ranging from microbes and self-propelling particles to large groups of fish, birds, and mammals, they are made up of many individual parts, which each convert energy from their surroundings into motion.
Through new analysis published in The European Physical Journal E, Antonio Romaguera and collaborators at the Rural Federal University of Pernambuco, Brazil, have gained deeper insights into the collective motions of schools of zebrafish: active systems in which multiple fish can collectively move in the same direction. The team’s discoveries could help researchers to better understand the unique properties of active matter, and how complex behaviors emerge and evolve on different scales.
Individually, zebrafish enhance the efficiency of their movements through sporadic bursts of their tails, followed by longer periods of coasting. When swimming in large groups, these fish coordinate their sporadic motions as they communicate with each other, leading to complex and interesting patterns.
Among these patterns are “polarized groups,” which emerge when groups of fish within the school swim in the same direction. Using mathematical relationships named “polarization time series” (PTSs), researchers can describe how these collective motions will evolve over time.
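The quantity behind such a time series can be sketched with the standard order parameter used in collective-motion models (a generic illustration, not the team's exact analysis): the length of the mean unit heading vector, which equals 1 for a perfectly aligned school and falls toward 0 for randomly oriented fish.

```python
import numpy as np

def polarization(headings_rad):
    """Polarization order parameter of a school at one instant: the
    length of the mean unit heading vector. 1 = fully aligned,
    near 0 = random headings."""
    ux = np.cos(headings_rad)
    uy = np.sin(headings_rad)
    return np.hypot(ux.mean(), uy.mean())

def polarization_time_series(headings_over_time):
    """PTS: the polarization evaluated at each time step, for an array
    of shape (timesteps, n_fish) of heading angles in radians."""
    return np.array([polarization(h) for h in headings_over_time])

# Fully aligned school vs. randomly oriented fish.
aligned = np.zeros(50)                         # all fish heading along +x
rng = np.random.default_rng(0)
random_headings = rng.uniform(0, 2 * np.pi, 50)
p_aligned = polarization(aligned)              # exactly 1.0
p_random = polarization(random_headings)       # small, of order 1/sqrt(50)
```

Fractal analysis of this kind of series then asks how its fluctuations scale with the observation window, which is where the multifractal-versus-monofractal distinction arises.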
In their study, Romaguera and his collaborators examined this behavior in a group of zebrafish confined in a circular tank. As they observed the fish, they discovered a distinctive pattern in the PTS, which varied depending on how crowded the tank was.
At lower densities of zebrafish, the team found that the PTS became “multifractal,” meaning that polarized groups within the same school exhibited different degrees of structural complexity at different scales. At higher densities, the PTS instead became “monofractal,” displaying uniform behavior across scales.
The researchers now hope this discovery could deepen our understanding of how active systems behave across a wider range of scenarios.
MINFLUX is a powerful microscopy technique that allows researchers to see objects much smaller than the wavelength of light. A newly developed evolution of the process uses a simpler device to create the light pattern needed to examine the molecule, making the entire process faster, cheaper and easier to use for future discoveries.
The research is published in the journal Light: Science & Applications.
MINFLUX pushes the boundaries of what we can see. It works by shining a specially patterned beam of light on a single molecule and measuring the intensity of the light at different locations. By analyzing these measurements, scientists can then calculate the exact position of the molecule. This allows them to study the behavior of molecules in incredible detail, providing insights into fundamental biological processes.
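The position-estimation idea described above can be sketched numerically. The snippet below is an illustrative 1D toy, not a published MINFLUX pipeline: it assumes a donut beam whose intensity grows quadratically near its central zero, simulates Poisson photon counts at three hypothetical probe positions, and recovers the molecule’s position from the vertex of a fitted parabola.

```python
import numpy as np

rng = np.random.default_rng(0)

def donut_intensity(probe, molecule, amplitude=50.0):
    # Near its central zero, a donut beam's intensity grows
    # quadratically with the offset between beam minimum and molecule.
    return amplitude * (probe - molecule) ** 2

true_position = 3.2                    # nm, hypothetical molecule position
probes = np.array([-20.0, 0.0, 20.0])  # nm, where the beam zero is placed

# Photon counts at each probe position, with Poisson shot noise.
counts = rng.poisson(donut_intensity(probes, true_position))

# Fit I(x) = a*(x - x0)^2 = a*x^2 - 2*a*x0*x + const and read off
# the vertex x0 as the position estimate.
a2, a1, _ = np.polyfit(probes, counts, 2)
estimate = -a1 / (2 * a2)
print(f"estimated position: {estimate:.2f} nm")
```

The practical appeal of the scheme is visible even in this toy: because the molecule sits near the beam’s intensity minimum, few photons are needed to pin down its position.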
Unfortunately, current MINFLUX setups are complex, expensive, and require specialized equipment, which has limited the widespread use of this powerful technique. Traditional methods often involve bulky and expensive components, making MINFLUX inaccessible to many research labs.
Researchers have developed a new way of creating the patterned light beam for MINFLUX. This method combines two simpler devices: a spatial light modulator (SLM) and an electro-optical modulator (EOM). The SLM acts like a digital projector, manipulating light patterns, while the EOM controls the intensity of the light. This setup is significantly faster, cheaper, and easier to use than traditional methods.
The new approach offers several advantages. Firstly, using simpler components allows for much faster scanning of the light pattern. This rapid scanning improves the accuracy of measurements, leading to sharper and more detailed images of the molecules.
Secondly, the more straightforward setup significantly reduces the cost of the equipment needed for MINFLUX, making this powerful technique more accessible to researchers. Finally, the new method is more user-friendly and can be more easily integrated into existing microscopes, streamlining the research process.
This new development paves the way for creating more affordable and accessible MINFLUX microscopes. This could open up new possibilities for studying a wide range of biological processes at the molecular level. With MINFLUX becoming more accessible, scientists can delve deeper into the unseen world, unlocking new knowledge about how life works at its most fundamental level.
In a study published in The Astrophysical Journal, Prof. Zhou Xia from the Xinjiang Astronomical Observatory (XAO) of the Chinese Academy of Sciences and collaborators have, for the first time, derived the dispersion relation for photons with nonzero mass propagating in plasma, and established a stringent upper limit for the photon mass at 9.52 × 10⁻⁴⁶ kg (5.34 × 10⁻¹⁰ eV c⁻²) using data collected by ultra-wideband (UWB) receivers from pulsar timing and fast radio bursts (FRBs).
Photons are typically considered massless particles, a hypothesis based on Maxwell’s electromagnetic theory and Einstein’s special relativity. However, if photons possess nonzero mass, it would have profound implications for existing physical theories.
Researchers in this study provided a novel theoretical framework for understanding the propagation characteristics of massive photons in plasma.
They used high-precision timing data from the Parkes Pulsar Timing Array (PPTA) and dedispersed pulse data from FRBs. Leveraging the wide frequency range covered by UWB receivers, they improved the signal-to-noise ratio and the accuracy of dispersion measurements.
The high time resolution of UWB technology allowed for precise determination of signal arrival times, effectively reducing the dispersion effects caused by the interstellar medium.
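The physics the UWB measurements constrain can be summarized with a textbook-style dispersion relation; the leading-order form below is a sketch of the standard cold-plasma result for a massive photon, not necessarily the exact relation derived in the paper.

```latex
% Leading-order dispersion relation for a photon of mass m_\gamma in a
% cold plasma with plasma frequency \omega_p:
\[
  \omega^2 \;=\; c^2 k^2 \;+\; \omega_p^2 \;+\; \frac{m_\gamma^2 c^4}{\hbar^2},
\]
% which, for \omega \gg \omega_p and \omega \gg m_\gamma c^2/\hbar,
% gives a group velocity
\[
  v_g \;=\; \frac{\partial\omega}{\partial k}
  \;\approx\; c\left(1 \;-\; \frac{\omega_p^2}{2\omega^2}
  \;-\; \frac{m_\gamma^2 c^4}{2\hbar^2\omega^2}\right).
\]
```

Both the plasma term and the photon-mass term delay lower frequencies more, which is why wide frequency coverage and precise arrival times are what separate the two effects.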
This study highlights the critical role of high-precision radio telescopes and advanced equipment in astronomical research.
With the deployment of the Five-hundred-meter Aperture Spherical radio Telescope (FAST) and the upcoming QiTai radio Telescope (QTT), along with the widespread application of UWB receivers, tests of the photon mass will become more precise and far-reaching, contributing to a deeper understanding of the nature of photons and the fundamental laws of the universe.
A schematic 3D visualization of gallium (transparent in this schematic) and mercury layers, showing the thermoelectric poloidal currents (blue) and magnetic field (yellow). Credit: Christophe Gissinger
A trio of physicists at Sorbonne Université, in France, has observed a thermoelectric effect between two liquid materials for the first time. In their study, published in Proceedings of the National Academy of Sciences, Marlone Vernet, Stephan Fauve and Christophe Gissinger put two types of liquid metals together at room temperature and subjected them to a heat gradient.
Scientists have known for many years that thermoelectric devices can convert thermal energy into electricity and vice versa. Such thermoelectric effects have been seen in the interfaces between two solids and between solids and liquids—but until now, never between two liquids. In this new effort, the researchers built an environment conducive to such an event and tested it in their lab.
The environment consisted of a cylinder with another smaller cylinder at its center. The researchers poured liquid mercury into the outer cylinder and then poured liquid gallium on top of it. The gallium floated because it was lighter. They then added a chilling device to cool the outer walls of the outer cylinder and a heating device to warm the walls of the inner cylinder.
This resulted in a temperature gradient between the two metals. The team then inserted a wire into the outer cylinder to the place where the two metals met—the other end was connected to an electricity measuring device.
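For a rough sense of scale, the open-circuit voltage of a bimetal junction in the constant-coefficient approximation is V = (S_a − S_b)ΔT. The sketch below uses placeholder Seebeck coefficients typical in magnitude for liquid metals; they are assumed values, not measurements from the paper.

```python
# Hypothetical Seebeck coefficients in V/K (order-of-magnitude
# placeholders for liquid metals, NOT measured values from the paper).
S_GALLIUM = -0.5e-6
S_MERCURY = -5.0e-6

def junction_emf(delta_t, s_a=S_GALLIUM, s_b=S_MERCURY):
    """Thermoelectric EMF across a bimetal junction held at a
    temperature difference delta_t (in kelvin), in the
    constant-coefficient approximation V = (S_a - S_b) * delta_t."""
    return (s_a - s_b) * delta_t

# A hypothetical 20 K difference between hot inner and cold outer wall:
print(f"{junction_emf(20.0) * 1e6:.1f} microvolts")
```

Voltages of microvolts per kelvin are tiny, which hints at why a liquid-liquid thermoelectric signal went undetected until someone built an experiment to look for it.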
The researchers found that the addition of a temperature gradient led to a thermoelectric effect at the interface between the two liquid metals. They also found that the current was turbulent, running in a loop from a hot part of the cylinder to the cold part. Further testing showed that there were multiple such loops. They also found places along the interface where no electricity was generated, something that does not happen in thermoelectric effects between solids, they noted.
The experiment being filled, with the top endcap removed. Credit: Timothé Paire / CNRS
The researchers suggest that the reason such thermoelectric effects have not been observed before is that no one was looking for them. They also note that their findings may have implications for new kinds of battery development.
This still image from a new simulation shows how plasma from the pedestal region is connected through the supposedly last confinement surface into the divertor plasma region. The long and thin lobes are fluctuating in time and space. Credit: (Simulation) Seung-Hoe Ku / Princeton Plasma Physics Laboratory on DOE’s Summit computer at Oak Ridge National Laboratory; (Visualization) Dave Pugmire and Jong Youl Choi / Oak Ridge National Laboratory
The furious exhaust heat generated by a fusing plasma in a commercial-scale reactor may not be as damaging to the vessel’s innards as once thought, according to researchers at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL), Oak Ridge National Laboratory and the ITER Organization (ITER).
“This discovery fundamentally changes how we think about the way heat and particles travel between two critically important regions at the edge of a plasma during fusion,” said PPPL Managing Principal Research Physicist Choongseok Chang, who led the team of researchers behind the discovery. A new paper detailing their work was recently published in the journal Nuclear Fusion, following previous publications on the subject.
To achieve fusion, temperatures inside a tokamak—the doughnut-shaped device that holds the plasma—must soar higher than 150 million degrees Celsius. That’s 10 times hotter than the center of the sun. Containing something that hot is challenging, even though the plasma is largely held away from the inner surfaces using magnetic fields. Those fields keep most of the plasma confined in a central region known as the core, forming a doughnut-shaped ring.
Some particles and heat escape the confined plasma, however, and strike the material facing the plasma. New findings by PPPL researchers suggest that particles escaping the core plasma inside a tokamak collide with a larger area of the tokamak than once thought, greatly reducing the risk of damage.
Past research based on physics and experimental data from present-day tokamaks suggested exhaust heat would focus on a very narrow band along a part of the tokamak wall known as the divertor plates. Dedicated to removing exhaust heat and particles from the burning plasma, the divertor is critical to a tokamak’s performance.
The experimental ITER tokamak will have a divertor running in a ring around the bottom of the tokamak chamber. In the image above, the divertor is highlighted in yellow. Credit: ITER Organization
“If all of this heat hits this narrow area, then this part of the divertor plate will be damaged very quickly,” said Chang, who works in the PPPL Theory Department. “It could mean frequent stretches of downtime. Even if you are just replacing this part of the machine, it’s not going to be quick.”
The problem hasn’t stopped the operation of existing tokamaks, which are not as powerful as those that will be needed for a commercial-scale fusion reactor. However, for the last few decades, there has been significant concern that a commercial-scale device would create plasmas so dense and hot that the divertor plates might be damaged. One proposed plan involved adding impurities to the edge of the plasma to radiate away the energy of the escaping plasma, reducing the intensity of the heat hitting the divertor material, but Chang said this plan was still challenging.
Simulating the escape route
Chang decided to study how the particles were escaping and where they would land on a device such as ITER, the multinational fusion facility under assembly in France. To do so, his group created a plasma simulation using a computer code known as X-Point Included Gyrokinetic Code (XGC). This code is one of several developed and maintained by PPPL that are used for fusion plasma research.
The simulation showed how plasma particles traveled across the magnetic field surface, which was intended to be the boundary separating the confined plasma from the unconfined plasma, including the plasma in the divertor region. This magnetic field surface—generated by external magnets—is called the last confinement surface.
A couple of decades ago, Chang and his co-workers found that charged particles known as ions were crossing this barrier and hitting the divertor plates. They later discovered these escaping ions were causing the heat load to be focused on a very narrow area of the divertor plates.
A few years ago, Chang and his co-workers found that plasma turbulence can allow negatively charged particles called electrons to cross the last confinement surface, widening the heat load on ITER’s divertor plates by a factor of 10. However, the simulation still assumed the last confinement surface was undisturbed by the plasma turbulence.
“In the new paper, we show that the last confinement surface is strongly disturbed by the plasma turbulence during fusion, even when there are no disturbances caused by external coils or abrupt plasma instabilities,” Chang said. “A good last confinement surface does not exist due to the crazy, turbulent magnetic surface disturbance called homoclinic tangles.”
In fact, Chang said the simulation showed that electrons connect the edge of the main plasma to the divertor plasmas. As the electrons follow these homoclinic tangles, they widen the heat strike zone 30% beyond the previous width estimate based on turbulence alone.
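Taking the article’s two factors at face value, the combined widening is simple arithmetic; the baseline width below is an arbitrary unit, not a number from the study.

```python
# Back-of-envelope widths, taking the article's factors at face value;
# w0 is an arbitrary baseline width from the pre-turbulence estimates.
w0 = 1.0
w_turbulence = 10 * w0          # electron turbulence widens the zone ~10x
w_tangles = 1.3 * w_turbulence  # homoclinic tangles add ~30% on top

# Peak heat flux scales roughly inversely with strike-zone width, so
# the same exhaust power is spread over ~13x the original footprint.
print(w_tangles / w0)  # → 13.0
```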
He explained, “This means it is even less likely that the divertor surface will be damaged by the exhaust heat when combined with the radiative cooling of the electrons by impurity injection in the divertor plasma. The research also shows that the turbulent homoclinic tangles can reduce the likelihood of abrupt instabilities at the edge of the plasma, as they weaken their driving force.”
“The last confinement surface in a tokamak should not be trusted,” Chang said. “But ironically, it may raise fusion performance by lowering the chance for divertor surface damage in steady-state operation and eliminating the transient burst of plasma energy to divertor surface from the abrupt edge plasma instabilities, which are two among the most performance-limiting concerns in future commercial tokamak reactors.”