Zoltan Demme: The Universe Was Born from Absurdity

This voluminous study synthesizes the author’s research across the natural sciences. Uniting his knowledge of the humanities and the natural sciences, he tries, in this study, to answer one of the greatest questions of humankind: how the Universe was born. We cite here one of the essential theses of his paper:

“In the initial, changeless phase of its unfolding — the Big Bang’s Planck Era — a multitude of fundamentally absurd elements are already present: infinite, hyper-extreme temperature; infinite pressure; infinite energy density; infinite curvature; and so on. In fact, no elements other than absurd ones exist at this stage.

Many quantum physicists regard every phenomenon of this era, with its parameters receding into infinity, as impossible — utterly inconceivable — regardless of how firmly they emerge from calculations that have been verified countless times. One may, of course, consider the picture incomplete, or even subject to total revision in the future. Attempts to incorporate quantum effects into the description of the Planck Era have so far met with no truly conclusive success, despite the high hopes of many particle physicists. At the same time, many microphysicists maintain that such attempts would yield no meaningful results whatsoever, while others consider the problem insoluble from the outset.

Yet whatever the stance, the nearly century-long, universally acknowledged absurd character of this era can inspire not only the effort to transcend the absurd picture or to refute the reality of absurdity — but also its acceptance! This is the position we ourselves take, and hardly without reason.

The presence — according to calculations — of hyper-extremes and infinite values that are entirely impossible within the Planck Era invites not only the thought that our informational basis is incomplete — but also the recognition of absurd functioning. The fact that colossal hyper-extremes and hyper-phenomena could not possibly condense into a point-like state without space, without time, without matter — and yet do so — likewise suggests not merely an as-yet-unexplained phenomenon, but also absurd functioning and the validity of absurdity. That such hyper-extremes and hyper-phenomena should exist without any precedent — something science deems impossible — yet be present without precedent, similarly invites not only speculation about emergence from nothing, or perhaps divine creation — but also points toward absurd functioning and the operative reality of absurdity.

This pattern could be extended indefinitely: in every case, the phenomena and values in question would appear solely in versions that are scientifically impossible yet realized—manifestations, in other words, of absurd functioning and the validity of absurdity. The Planck Era can scarcely be called absurd solely for its extreme physical characteristics, its hotter-than-hot, hyper-valued traits. Even its very existence, its realized state, was itself an absurdity.

Since we accept the century-old scientific consensus that the Big Bang had no antecedent, we must also address the problem of the transition from nothing (Nonexistence) to something (Existence). Here too, the essential features of absurdity provide guidance.

Ontology — a branch of philosophy spanning from antiquity to the present — holds that absurdity can validate both nothing and something alike as its own elements or constituents, and thus may arise without any distinct origin zone, provided nothing obstructs it. Moreover, by virtue of its own functioning, absurdity is capable of generating itself, again provided there is nothing to hinder it. According to ontology, absurdity can also act independently of space, matter, and time — a capacity the Universe’s other forces lack, as they cannot operate in such complete detachment.

A Greek postulate, occasionally reiterated by thinkers from antiquity to the present, affirms that absurdity can accomplish anything that is absurd until it is brought under the dominion of some ordering force of change. Another postulate adds that anything — of any nature, in any magnitude — may become an immanent element of absurdity. In the present context, this may include density, pressure, curvature — or even nothing itself.”

I strongly recommend a thorough study of this paper, which contains many other remarkable thoughts.

ANDREW R. DAMU

Adding up Feynman diagrams to make predictions about real materials

Caltech scientists have found a fast and efficient way to add up large numbers of Feynman diagrams, the simple drawings physicists use to represent particle interactions. The new method has already enabled the researchers to solve a longstanding problem in the materials science and physics worlds known as the polaron problem, giving scientists and engineers a way to predict how electrons will flow in certain materials, both conventional and quantum.

In the 1940s, physicist Richard Feynman first proposed a way to represent the various interactions that take place between electrons, photons, and other fundamental particles using 2D drawings that involve straight and wavy lines intersecting at vertices. Though they look simple, these Feynman diagrams allow scientists to calculate the probability that a particular collision, or scattering, will take place between particles.

Since particles can interact in many ways, many different diagrams are needed to depict every possible interaction. And each diagram represents a mathematical expression. Therefore, by summing all the possible diagrams, scientists can arrive at quantitative values related to particular interactions and scattering probabilities.

“Summing all Feynman diagrams with quantitative accuracy is a holy grail in theoretical physics,” says Marco Bernardi, professor of applied physics, physics, and materials science at Caltech.

“We have attacked the polaron problem by adding up all the diagrams for the so-called electron-phonon interaction, essentially up to an infinite order.”

In a paper published in Nature Physics, the Caltech team uses its new method to precisely compute the strength of electron-phonon interactions and to predict associated effects quantitatively. The lead author of the paper is graduate student Yao Luo, a member of Bernardi’s group.

For some materials, such as simple metals, the electrons moving inside the crystal structure will interact only weakly with its atomic vibrations. For such materials, scientists can use a method called perturbation theory to describe the interactions that occur between electrons and phonons, which can be thought of as “units” of atomic vibration.

Perturbation theory is a good approximation in these systems because each successive order or interaction becomes decreasingly important. That means that computing only one or a few Feynman diagrams—a calculation that can be done routinely—is sufficient to obtain accurate electron-phonon interactions in these materials.
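The point about successive orders can be illustrated with a toy series (the coefficients are hypothetical, not the real electron-phonon expansion): when the coupling is weak, each order contributes less than the last and a few terms suffice; when it is strong, adding orders never settles down.

```python
# Toy perturbation series: sum of g**n over orders n, with hypothetical
# unit coefficients. For weak coupling each successive order matters
# less; for strong coupling the partial sums never converge.
def partial_sums(g, n_orders=10):
    total, sums = 0.0, []
    for n in range(n_orders):
        total += g**n          # contribution of order n
        sums.append(total)
    return sums

weak = partial_sums(0.1)    # settles quickly near 1 / (1 - 0.1)
strong = partial_sums(1.5)  # keeps growing with every added order
```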

Introducing polarons
But for many other materials, electrons interact much more strongly with the atomic lattice, forming entangled electron-phonon states known as polarons. Polarons are electrons accompanied by the lattice distortion they induce. They form in a wide range of materials including insulators, semiconductors, materials used in electronics or energy devices, as well as many quantum materials.

For example, an electron placed in a material with ionic bonds will distort the surrounding lattice and form a localized polaron state, resulting in decreased mobility due to the strong electron-phonon interaction. Scientists can study these polaron states by measuring how conductive the electrons are or how they distort the atomic lattice around them.

Perturbation theory does not work for these materials because each successive order is more important than the last. “It’s basically a nightmare in terms of scaling,” says Bernardi.

“If you can calculate the lowest order, it’s very likely that you cannot do the second order, and the third order will just be impossible. The computational cost typically scales prohibitively with interaction order. There are too many diagrams to compute, and the higher-order diagrams are too computationally expensive.”

Polaron energy vs. momentum curves. Credit: Nature Physics (2025). DOI: 10.1038/s41567-025-02954-1
Summing Feynman diagrams
Scientists have searched for a way to add up all the Feynman diagrams that describe the many, many ways that the electrons in such a material can interact with atomic vibrations. Thus far such calculations have been dominated by methods where scientists can tune certain parameters to match an experiment.

“But when you do that, you don’t know whether you’ve actually understood the mechanism or not,” says Bernardi. Instead, his group focuses on solving problems from “first principles,” meaning beginning with nothing more than the positions of atoms within a material and using the equations of quantum mechanics.

When thinking about the scope of this problem, Luo says to imagine trying to predict how the stock market might behave tomorrow. To attempt this, one would need to consider every interaction between every trader over some period to get precise predictions of the market’s dynamics.

Luo wants to understand all the interactions between electrons and phonons in materials where the two couple strongly. But as with predicting the stock market, the number of possible interactions is prohibitively large. “It is actually impossible to calculate directly,” he says. “The only thing we can do is use a smart way of sampling all these scattering processes.”

Betting on Monte Carlo
Caltech researchers are addressing this problem by applying a technique called diagrammatic Monte Carlo (DMC), in which an algorithm randomly samples spots within the space of all Feynman diagrams for a system, but with some guidance in terms of the most important places to sample.

“We set up some rules to move effectively, with high agility, within the space of Feynman diagrams,” explains Bernardi.
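The sampling idea can be caricatured in a few lines. The “diagram weight” f(n) = gⁿ/n! and the geometric guiding distribution below are invented for illustration; real DMC walks through diagram topologies, not just orders, but the reweighting principle is the same.

```python
import math
import random

# Importance-sampling caricature of diagrammatic Monte Carlo: estimate
# S = sum over n of f(n), for a made-up diagram weight f(n) = g**n / n!,
# by drawing orders n from a geometric guiding distribution p(n) and
# averaging the reweighted contributions f(n) / p(n).
def dmc_estimate(g=0.5, n_max=20, samples=20000, seed=0):
    rng = random.Random(seed)
    f = [g**n / math.factorial(n) for n in range(n_max + 1)]
    weights = [0.5**n for n in range(n_max + 1)]   # guiding distribution
    z = sum(weights)
    p = [w / z for w in weights]
    orders = rng.choices(range(n_max + 1), weights=p, k=samples)
    return sum(f[n] / p[n] for n in orders) / samples

# Direct sum for comparison (feasible only because this toy is tiny).
exact = sum(0.5**n / math.factorial(n) for n in range(21))
```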

The Caltech team overcame the enormous amount of computing that would normally be required to apply DMC to real materials with first-principles methods by relying on a technique they reported last year that compresses the matrices representing electron-phonon interactions.

Another major advance is nearly removing the so-called “sign problem” in electron-phonon DMC using a clever technique that views diagrams as products of tensors, mathematical objects expressed as multi-dimensional matrices.

“The clever diagram sampling, sign-problem removal, and electron-phonon matrix compression are the three key pieces of the puzzle that have enabled this paradigm shift in the polaron problem,” says Bernardi.

In the new paper, the researchers have applied DMC calculations in diverse systems that contain polarons, including lithium fluoride, titanium dioxide, and strontium titanate. The scientists say their work opens up a wide range of predictions that are relevant to experiments that people are conducting on both conventional and quantum materials—including electrical transport, spectroscopy, superconductivity, and other properties in materials that have strong electron-phonon coupling.

“We have successfully described polarons in materials using DMC, but the method we developed could also help study strong interactions between light and matter, or even provide the blueprint to efficiently add up Feynman diagrams in entirely different physical theories,” says Bernardi.

The KATRIN experiment sets new constraints on general neutrino interactions

Neutrinos are elementary particles that the standard model of particle physics predicts to be massless, yet their observed oscillations show that they do in fact have a mass, albeit a very small one. A further characteristic of these particles is that they interact only weakly with other matter, which makes them very difficult to detect using conventional experimental methods.

The KATRIN (Karlsruhe Tritium Neutrino) experiment is a large-scale research effort aimed at precisely measuring the effective mass of the electron anti-neutrino using advanced instruments located at the Karlsruhe Institute of Technology (KIT) in Germany.

The researchers involved in this experiment recently published the results of a new analysis of data from the second measurement campaign in Physical Review Letters, which set new constraints on interactions involving neutrinos that could arise from unknown physics that is not explained by the standard model, also known as general neutrino interactions.

“We know that beyond standard model (BSM) physics is hiding in the neutrino sector, but we don’t know what it looks like yet,” Caroline Fengler, lead analyst for this search, told Phys.org. “That’s what has motivated us already in the past to look for various BSM physics phenomena with KATRIN, such as light and heavy sterile neutrinos and Lorentz invariance violations.

“The theory work by the group of Werner Rodejohann then gave us the incentive to broaden our search to any possible new neutrino interactions that might contribute to the weak interaction of the beta decay.”

The new interactions that the researchers started looking for could hint at various exciting physical phenomena that are not predicted by the standard model of particle physics but have been widely explored by theorists. For instance, they could indicate the presence of various hypothetical particles, including right-handed W bosons, charged Higgs bosons, and leptoquarks.

Inside the KATRIN spectrometer. Credit: Michael Zacher
“The main purpose of the KATRIN experiment is to measure the mass of the neutrino,” explained Fengler. “This is done through a highly precise measurement of the energy spectrum of the electrons originating from tritium beta decay, using a high-activity tritium source and a one-of-a-kind electron spectrometer. The shape of the recorded beta spectrum then contains information about the neutrino mass and other BSM physics contributions.”

Notably, general neutrino interactions are predicted to prompt characteristic shape deformations of the so-called beta spectrum, which is the distribution of electron energies emitted during a type of radioactive decay known as beta decay. The KATRIN collaboration thus set out to search for these beta spectrum deformations in the data collected as part of the experiment.
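As a concrete, heavily simplified example of such a deformation: scalar or tensor interactions would contribute a Fierz-like term that rescales the allowed beta spectrum by (1 + b·m_e/E_tot). The toy spectrum below works in the massless-neutrino limit and omits Coulomb (Fermi-function) corrections; units and constants are rounded.

```python
import math

M_E = 511.0   # electron rest energy, keV
E0 = 18.6     # tritium beta-decay endpoint, keV (rounded)

# Toy allowed tritium beta spectrum in the massless-neutrino limit,
# with a Fierz-like shape factor (1 + b * m_e / E_tot) standing in for
# the distortion a new scalar or tensor interaction would produce.
def beta_spectrum(e_kin, b=0.0):
    if not 0.0 < e_kin < E0:
        return 0.0
    e_tot = e_kin + M_E
    p = math.sqrt(e_tot**2 - M_E**2)        # electron momentum
    return p * e_tot * (E0 - e_kin) ** 2 * (1.0 + b * M_E / e_tot)
```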

“With only a small part (5%) of the final KATRIN dataset, we were already able to set competitive constraints on some of the investigated new neutrino interactions compared to the global constraints from other low-energy experiments,” said Fengler. “This shows that the KATRIN experiment is sensitive to these new interactions.”

While the KATRIN experiment did not detect signs of general neutrino interactions yet, it set competitive constraints on the strength of these new and elusive interactions, employing a new experimental approach. The KATRIN collaboration hopes that these constraints will contribute to the future search for physics beyond the standard model.

“We are already working on further improving our sensitivity on the general neutrino interactions with KATRIN by extending the data set and fine-tuning our analysis approach,” added Fengler. “With the beginning of the upcoming TRISTAN phase at KATRIN in 2026, which is set to search for keV sterile neutrinos with the help of an upgraded detector, we will gain access to another powerful data set, which promises to greatly improve our sensitivity in the future.”

Tunable laser light: Ring design could be used in telecom, medicine and more

Researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and Technical University of Vienna (TU Wien) have invented a new type of tunable semiconductor laser that combines the best attributes of today’s most advanced laser products, demonstrating smooth, reliable, wide-range wavelength tuning in a simple, chip-sized design.

Tunable lasers, or lasers whose light output wavelengths can be changed and controlled, are integral to many technologies, from high-speed telecommunications to medical diagnostics to safety inspections of gas pipelines.

Yet laser technology faces many trade-offs—for example, lasers that emit across a wide range of wavelengths, or colors, sacrifice the accuracy of each color. But lasers that can precisely tune to many colors get complicated and expensive because they commonly require moving parts.

The new Harvard device could one day replace many types of tunable lasers in a smaller, more cost-effective package.

The research is published in Optica and was co-led by Federico Capasso, the Robert L. Wallace Professor of Applied Physics and Vinton Hayes Senior Research Fellow in Electrical Engineering at SEAS, and Professor Benedikt Schwarz at TU Wien, with whom Capasso’s group has maintained a longstanding research partnership.

The researchers initially demonstrated a laser that emits light in the mid-infrared wavelength range because that’s where quantum cascade lasers, upon which their architecture is based, typically emit.

“The versatility of this new platform means that similar lasers can be fabricated at more commercially relevant wavelengths, such as for telecommunications applications, for medical diagnostics, or for any laser that emits in the visible spectrum of light,” said Capasso, who co-invented the quantum cascade laser in 1994.

The new laser consists of multiple tiny ring-shaped lasers, each a slightly different size, and all connected to the same waveguide. Each ring emits light of a different wavelength, and by adjusting electric current input, the laser can smoothly tune between different wavelengths.

The clever and compact design ensures the laser emits only one wavelength at a time, remains stable even in harsh environments, and can be easily scaled. The rings function either one at a time or all together to make a stronger beam.

“By adjusting the size of the ring, we can effectively target any line we want, and any lasing frequency we want,” said co-lead author Theodore Letsou, an MIT graduate student and research fellow in Capasso’s lab at Harvard.

“All the light from every single laser gets coupled through the same waveguide and is formed into the same beam. This is quite powerful, because we can extend the tuning range of typical semiconductor lasers, and we can target individual wavelengths using a different ring radius.”
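The selection principle can be sketched with the textbook ring-resonance condition m·λ = n_eff·2πr: each radius fixes a comb of resonances, and the ring with a resonance closest to the target wavelength is the one driven. The effective index and wavelength band below are hypothetical placeholders, not the device’s actual parameters.

```python
import math

N_EFF = 3.2         # hypothetical effective refractive index
BAND = (7.5, 8.5)   # hypothetical emission band, micrometers

# Resonances of a ring of radius r (um) satisfy m * lam = N_EFF * 2*pi*r
# for integer m; keep only those falling inside the emission band.
def resonances(radius):
    length = N_EFF * 2 * math.pi * radius   # optical round-trip length
    m_min = math.ceil(length / BAND[1])
    m_max = math.floor(length / BAND[0])
    return [length / m for m in range(m_min, m_max + 1)]

# Pick the ring whose resonance comes closest to the target wavelength.
def pick_ring(radii, target):
    return min(
        ((i, lam) for i, r in enumerate(radii) for lam in resonances(r)),
        key=lambda pair: abs(pair[1] - target),
    )
```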

“What’s really nice about our laser is the simplicity of fabrication,” added co-lead author Johannes Fuchsberger, a graduate student at TU Wien, where the team fabricated the devices in the cleanroom facilities of the university’s Center for Micro and Nanostructures. “We have no mechanically movable parts and an easy fabrication scheme that results in a small footprint.”

AI reveals unexpected new physics in dusty plasma

by Carol Clark, Emory University

A view inside the laboratory vacuum chamber, where colloidal particles are suspended in a flat disc, lit by the green light of a laser, to study dusty plasma. Credit: Burton lab

Physicists have used a machine-learning method to identify surprising new twists on the non-reciprocal forces governing a many-body system.

The journal Proceedings of the National Academy of Sciences published the findings by experimental and theoretical physicists at Emory University, based on a neural network model and data from laboratory experiments on dusty plasma—ionized gas containing suspended dust particles.

The work is one of the relatively few instances of using AI not as a data processing or predictive tool, but to discover new physical laws governing the natural world.

“We showed that we can use AI to discover new physics,” says Justin Burton, an Emory professor of experimental physics and senior co-author of the paper. “Our AI method is not a black box: we understand how and why it works. The framework it provides is also universal. It could potentially be applied to other many-body systems to open new routes to discovery.”

The PNAS paper provides the most detailed description yet for the physics of a dusty plasma, yielding precise approximations for non-reciprocal forces.

“We can describe these forces with an accuracy of more than 99%,” says Ilya Nemenman, an Emory professor of theoretical physics and co-senior author of the paper.

“What’s even more interesting is that we show that some common theoretical assumptions about these forces are not quite accurate. We’re able to correct these inaccuracies because we can now see what’s occurring in such exquisite detail.”

The researchers hope that their AI approach will serve as a starting point for inferring laws from the dynamics of a wide range of many-body systems, which are composed of a large number of interacting particles. Examples range from colloids—such as paint, ink and other industrial materials—to clusters of cells in living organisms.

First author of the paper is Wentao Yu, who worked on the project as an Emory Ph.D. student and is now a postdoctoral fellow at the California Institute of Technology. Co-author Eslam Abdelaleem was also part of the project as an Emory graduate student and is now a postdoctoral fellow at Georgia Tech.

“This project serves as a great example of an interdisciplinary collaboration where the development of new knowledge in plasma physics and AI may lead to further advances in the study of living systems,” says Vyacheslav (Slava) Lukin, program director for the NSF Plasma Physics program. “The dynamics of these complex systems is dominated by collective interactions that emerging AI techniques may help us to better describe, recognize, understand and even control.”

Plasmas are ionized gases, meaning charged particles of electrons and ions move about freely, creating unique properties like electrical conductivity. Known as the fourth state of matter, plasma makes up an estimated 99.9% of the visible universe, from the solar winds flowing from the sun’s corona to lightning bolts that strike Earth.

Dusty plasma, which adds charged particles of dust to the mix of ions and electrons, is also common in space and planetary environments—from the rings of Saturn to Earth’s ionosphere.

The charged particles levitating above the surface of the moon, due to weak gravity, are an example of a dusty plasma. “That’s why when astronauts walk on the moon their suits get covered in dust,” Burton explains.

An example of a dusty plasma on Earth can occur during wildfires when soot mixes with the smoke. The charged soot particles can interfere with radio signals, affecting communications between firefighters.

Burton studies the physics of dusty plasmas and amorphous materials. His lab conducts experiments on tiny, plastic particles suspended in a vacuum chamber filled with plasma as a model for more complex systems. By altering the gas pressure inside the chamber, the lab members can mimic the properties of real phenomena and study how a system changes when it is driven by forces.

For the current project, Burton and Yu developed a tomographic-imaging technique to track the three-dimensional (3D) motion of particles in a dusty plasma. A laser spread into a sheet of light moves up and down in the vacuum chamber as a high-speed camera captures images. The snapshots of particles within the plane of light are then assembled into a stack, revealing the 3D location of individual particles over centimeter length scales for several minutes.
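The stacking step can be sketched as follows. This is a deliberately simplified version: each frame is a 2D intensity grid taken at a known laser-sheet height, and bright pixels mark candidate particles; real processing would also merge adjacent bright pixels into single detections.

```python
# Sketch of the tomographic reconstruction step: each high-speed-camera
# frame is a 2D slice at a known laser-sheet height z. Stacking the
# slices gives a 3D volume, and pixels brighter than a threshold mark
# candidate particle positions as (z, row, col) triples.
def locate_particles(slices, z_positions, threshold):
    particles = []
    for frame, z in zip(slices, z_positions):
        for row, line in enumerate(frame):
            for col, intensity in enumerate(line):
                if intensity > threshold:
                    particles.append((z, row, col))
    return particles
```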

A theoretical biophysicist, Nemenman searches for laws that underlie natural dynamical systems, especially complex biological ones. He’s interested in particular in the phenomenon of collective motion, such as how human cells move about the body.

“General questions of how a whole system arises from interactions of tiny parts are very important,” Nemenman explains. “In cancer, for instance, you want to understand how the interaction of cells may relate to some of them breaking away from a tumor and moving to a new place, becoming metastatic.”

While Nemenman often collaborates with researchers from the life sciences, the project with the Burton lab offered a chance to delve into a system somewhat simpler than a living one. That presented an ideal chance to try to use AI to investigate the dynamics of collective motion to learn new physics.

“For all the talk about how AI is revolutionizing science, there are very few examples where something fundamentally new has been found directly by an AI system,” Nemenman says.

One of the most famous examples of AI, ChatGPT, trains on the vast amount of information available on the internet in order to predict the appropriate text in response to a prompt.

“When you’re probing something new, you don’t have a lot of data to train AI,” Nemenman explains. “That meant we would have to design a neural network that could be trained with a small amount of data and still learn something new.”

Burton, Nemenman, Yu and Abdelaleem met weekly in a conference room to discuss the problem.

“We needed to structure the network to follow the necessary rules while still allowing it to explore and infer unknown physics,” Burton explains.

“It took us more than a year of back-and-forth discussions in these weekly meetings,” Nemenman adds. “Once we came up with the correct structure of the network to train, it turned out to be fairly simple.”

The physicists distilled the constraints for the neural network down to modeling three independent contributions to particle motion: the effect of velocity, or drag force; environmental forces, such as gravity; and particle-to-particle forces.
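A minimal sketch of that three-term decomposition, with a hypothetical non-reciprocal pair force standing in for what the network actually learns:

```python
# Sketch of the model's structure (not the trained network itself):
# each particle's modeled acceleration is the sum of three independent
# terms -- velocity drag, an environmental force, and pairwise forces
# that are allowed to be non-reciprocal (the force on i from j need
# not equal minus the force on j from i). Positions and velocities
# are plain (x, y, z) sequences.
def add(u, v):
    return [a + b for a, b in zip(u, v)]

def acceleration(pos, vel, gamma, g_env, pair_force):
    acc = []
    for i, (r_i, v_i) in enumerate(zip(pos, vel)):
        a_i = [-gamma * c for c in v_i]        # drag term
        a_i = add(a_i, g_env)                  # environmental term
        for j, r_j in enumerate(pos):          # pairwise term
            if i != j:
                a_i = add(a_i, pair_force(r_i, r_j))
        acc.append(a_i)
    return acc

# Hypothetical non-reciprocal wake force for illustration only: the
# grain below is attracted upward toward the one above, while the
# grain above is pushed further up (repelled) by the one below.
def wake_force(r_i, r_j, k=0.5):
    d = [b - a for a, b in zip(r_i, r_j)]
    dist2 = sum(c * c for c in d)
    sign = 1.0 if r_j[2] > r_i[2] else -1.0
    return [sign * k * c / dist2**1.5 for c in d]
```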

Trained on 3D particle trajectories, the AI model accounted for inherent symmetries and non-identical particles, and learned the effective non-reciprocal forces between particles with exquisite accuracy.

To explain these non-reciprocal forces, the researchers use the analogy of two boats moving across a lake, creating waves. The wake pattern of each boat affects the motion of the other boat. The wake of one boat may repel or attract the other boat depending on their relative positions—for example, whether the boats are traveling side by side or one behind the other.

“In a dusty plasma, we described how a leading particle attracts the trailing particle, but the trailing particle always repels the leading one,” Nemenman explains. “This phenomenon was expected by some but now we have a precise approximation for it which didn’t exist previously.”

Overview of experiment and data workflow. Credit: Proceedings of the National Academy of Sciences (2025). DOI: 10.1073/pnas.2505725122, https://www.pnas.org/doi/10.1073/pnas.2505725122

Their findings also correct some wrong assumptions about dusty plasma.

For example, a longstanding theory held that the larger the radius of a dust particle, the larger the charge that stuck to that particle, in exact proportion to the radius of the particle. “We showed that this theory is not quite right,” Nemenman says. “While it’s true that the larger the particle the larger the charge, that increase is not necessarily proportional to the radius. It depends on the density and temperature of the plasma.”

Another theory held that the force between two particles falls off exponentially with the distance between them, and that the decay rate does not depend on the size of the particles. The new AI method showed that the drop-off in force does depend on particle size.
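The conventional picture is a screened, Yukawa-type force whose decay length is set by the plasma alone; the sketch below contrasts that assumption with a decay length that also varies with grain radius. The linear form and its coefficient are purely illustrative, not the corrected law the model found.

```python
import math

# Yukawa-type screened force magnitude between two charged grains
# separated by r (arbitrary units): it dies off roughly as
# exp(-r / lam), where lam is the screening (decay) length.
def screened_force(r, q1, q2, lam):
    return q1 * q2 * math.exp(-r / lam) * (1.0 / r**2 + 1.0 / (lam * r))

# Conventional assumption: one plasma-set screening length for every
# grain. Schematic correction: lam also grows with grain radius a
# (this linear dependence and alpha are made up for illustration).
def size_dependent_lam(lam0, a, alpha=0.1):
    return lam0 * (1.0 + alpha * a)
```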

The researchers verified their findings through experiments.

Their physics-based neural network runs on a desktop computer and offers a universal, theoretical framework to unravel mysteries about other complex, many-body systems.

Nemenman, for example, is looking forward to an upcoming visiting professorship at the Konstanz School of Collective Behavior in Germany. The school brings together interdisciplinary approaches to study the burgeoning field of collective behavior, everything from flocking birds to schools of fish and human crowds.

“I’ll be teaching students from all over the world how to use AI to infer the physics of collective motion—not within a dusty plasma but within a living system,” he says.

While their AI framework can infer new physics, expert human physicists are needed to design the right structure for the neural network and to interpret and validate the resulting data.

“It takes critical thinking to develop and use AI tools in ways that make real advances in science, technology and the humanities,” Burton says.

He feels optimistic about the potential for AI to benefit society.

“I think of it like the Star Trek motto, to boldly go where no one has before,” Burton says. “Used properly, AI can open doors to whole new realms to explore.”

Noninvasive stent imaging powered by light and sound

In a new study, researchers show, for the first time, that photoacoustic microscopy can image stents through skin, potentially offering a safer, easier way to monitor these life-saving devices. Each year, around 2 million people in the U.S. are implanted with a stent to improve blood flow in narrowed or blocked arteries.

“It is critical to monitor stents for problems such as fractures or improper positioning, but conventionally used techniques require invasive procedures or radiation exposure,” said co-lead researcher Myeongsu Seong from Xi’an Jiaotong-Liverpool University in China. “This inspired us to test the potential of using photoacoustic imaging for monitoring stents through the skin.”

In the journal Optics Letters, the researchers show that photoacoustic microscopy can be used to visualize stents covered with mouse skin under various clinically relevant conditions, including simulated damage and plaque buildup.

“While our photoacoustic microscopy results are preliminary, further development could enable frequent, noninvasive monitoring of stent status—without the need for surgical access or X-ray exposure,” said co-lead researcher Sung-Liang Chen from Shanghai Jiao Tong University in China. “This would make it easier and safer to monitor the condition of stents in patients.”

Using sound to see stents
Photoacoustic imaging is a label-free technique that detects sound waves generated when materials absorb light and release energy. Because sound scatters less than light, this imaging method can be used to acquire higher-resolution images at greater depths than purely optical methods.

Although other studies have used photoacoustic imaging via an endoscope to image stents, this still requires that patients undergo a procedure. In the new study, the researchers examined whether photoacoustic microscopy might enable noninvasive stent monitoring through the skin.

To do this, they mimicked different stent scenarios, including fractures, compression and movement of overlapped stents. They also used butter to mimic deposition of plaque or blood clots after stenting. Using photoacoustic microscopy at various wavelengths, including 670 nm and 1210 nm, they were able to image these various stent conditions through excised mouse skin.

“One of the most interesting results is that we could easily differentiate between the butter we used to mimic a lipid plaque and the stent,” said Seong. “Because plaque and stents absorb light differently, using two wavelengths helped us distinguish them.”
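The two-wavelength logic amounts to a tiny linear unmixing problem: each measured signal is a weighted sum of the stent and lipid contributions, so two wavelengths give two equations for the two unknowns. The absorption coefficients below are invented for illustration, chosen only so that the metal absorbs at both wavelengths while the lipid absorbs mostly at 1210 nm.

```python
# Hypothetical absorption coefficients (at 670 nm, at 1210 nm);
# illustrative values, not measured ones.
MU = {"stent": (0.9, 0.8), "lipid": (0.1, 0.9)}

# Solve the 2x2 linear system
#   s_670  = a * stent + c * lipid
#   s_1210 = b * stent + d * lipid
# for the two unknown abundances.
def unmix(s_670, s_1210):
    a, b = MU["stent"]
    c, d = MU["lipid"]
    det = a * d - b * c
    stent = (d * s_670 - c * s_1210) / det
    lipid = (a * s_1210 - b * s_670) / det
    return stent, lipid
```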

Adapting for depth
The researchers say that photoacoustic microscopy could potentially be used to image stents placed in dialysis access sites, which are typically located just beneath the skin. For stents in deeper areas like the carotid artery, a related method called photoacoustic computed tomography may be more suitable.

The researchers point out that before photoacoustic imaging can be used for clinical noninvasive stent monitoring, in vivo animal experiments and preliminary clinical experiments would have to be performed. The system would also need to be optimized for use in various parts of the body.

Measuring three-nucleon interactions to better understand nuclear data and neutron stars

Though atomic nuclei are often depicted as static clusters of protons and neutrons (nucleons), the particles are actually bustling with movement. Thus, the nucleons carry a range of momenta. Sometimes, these nucleons may even briefly engage through the strong interaction. This interaction between two nucleons can boost the momentum of both and form high-momentum nucleon pairs. This effect yields two-nucleon short-range correlations.

Experiments at the U.S. Department of Energy’s Thomas Jefferson National Accelerator Facility have studied these pairs to learn how protons and neutrons preferentially pair up at short distances. However, short-range correlations involving three or more nucleons haven’t been detected yet.

Now, in a study published in Physics Letters B, researchers used data from a 2018 experiment in Jefferson Lab’s Hall A to measure the signature of three-nucleon short-range correlations for the first time.

Physicists are pursuing these trios because they would explain the extremely high-momentum component in the nucleus. Regular nucleons, with their typical, uncorrelated momenta, make up most of the nucleon momentum distribution in the nucleus. Short-range correlated pairs produce a noticeable fraction of high-momentum nucleons but some of the higher momentum is still unaccounted for.
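The picture described here, a mean-field bulk plus a high-momentum tail contributed by short-range-correlated pairs, can be illustrated with a toy momentum distribution. The functional forms and numbers below are schematic, not fitted to Jefferson Lab data.

```python
import numpy as np

# Toy nucleon momentum distribution: a mean-field Gaussian bulk plus a
# ~1/k^4 high-momentum tail from short-range-correlated pairs above the
# Fermi momentum k_F. All values are illustrative.
k_F = 1.35  # fm^-1, a typical Fermi momentum
k = np.linspace(0.01, 5.0, 1000)

bulk = np.exp(-(k / k_F) ** 2)                         # uncorrelated part
tail = np.where(k > k_F, 0.15 * (k_F / k) ** 4, 0.0)   # SRC-pair tail
n_k = bulk + tail

# Fraction of the phase-space-weighted strength above 2*k_F; at the
# highest momenta the tail dominates over the rapidly falling bulk.
weight = k**2 * n_k
frac_high = weight[k > 2 * k_F].sum() / weight.sum()
```

The "still unaccounted for" strength at the very top of the distribution is what three-nucleon correlations would contribute in this picture.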

“We’re unraveling the nucleus to find what’s missing in our understanding,” said John Arrington, a senior scientist and Relativistic Nuclear Collisions group head at the DOE’s Lawrence Berkeley National Laboratory. “We know that the three-nucleon interaction is important in the description of nuclear properties, even though it’s a very small contribution. Until now, there’s never really been any indication that we’d observed them at all. This work provides a first glimpse at them.”

Mirrored nuclei simplify the search

The experiment was carried out in Jefferson Lab’s Continuous Electron Beam Accelerator Facility (CEBAF), a DOE Office of Science user facility dedicated to nuclear physics research. To access short-range-correlated nucleons, researchers aimed CEBAF’s electron beam at nuclei. The high-energy electrons interacted with the nucleons inside these nuclei.

Detecting the properties of the electrons after these interactions revealed how fast the nucleon they hit was moving. This allowed physicists to pick out events in which the electron scattered off high-momentum, short-range-correlated nucleons.

Protons and neutrons involved in three-nucleon short-range correlations are moving even faster than those in correlated pairs. This makes them more difficult to access in experiments. Originally, theoretical predictions proposed that accessing three-nucleon short-range correlations would require a beam energy beyond that available at CEBAF. However, the researchers designed an experiment that works around this limitation by taking advantage of light nuclei.

The team used two light nuclear targets: helium-3 and tritium. Helium-3 has two protons and one neutron; tritium has two neutrons and one proton. They are known as mirror nuclei for their similar-but-opposite composition.

Because these light nuclei each have only three nucleons, researchers know exactly which particles are involved when an electron scatters off a nucleon in a three-nucleon correlation. That's because there are no other nucleons to create other possible combinations.

This is not the case for heavy nuclei, in which the three nucleons could be many different combinations of protons and neutrons (not to mention the short-range-correlated pairs happening in the background). The lack of other possible combinations in this experiment simplified the analysis.

“We’re trying to show that it’s possible to study three-nucleon correlations at Jefferson Lab even though we can’t get the energies necessary to do these studies in heavy nuclei,” said Shujie Li, a research scientist at Lawrence Berkeley and a principal investigator on this experiment. “These light systems give us a clean picture—that’s the reason we put in the effort to get a radioactive target material.”

Tritium is a radioactive isotope of hydrogen. Jefferson Lab had to implement rigorous safety precautions, including a redesign of the ventilation system in Hall A. The container that holds the radioactive tritium gas was filled at the DOE’s Savannah River National Laboratory, sealed, and shipped back to Jefferson Lab. Fortunately, the special instrumentation at Jefferson Lab allows the team to use a minimal amount of tritium, reducing potential safety concerns.

“This is a testament to what Jefferson Lab can do,” Arrington said. “CEBAF’s high intensity beam combined with the good detectors allowed us to use less tritium.”

From atomic nuclei to neutron stars

The results hint at the detection of three-nucleon short-range correlations. However, the researchers need more data before they feel comfortable claiming certainty.

“We want to do a similar experiment at Jefferson Lab to get more data at higher energy so that we can confirm what we observed already is a sign of three-nucleon short-range correlations,” Li said. “Eventually we want to understand how those extreme, high-momentum nucleons are generated in the nuclear system.”

Theory predicts these three-nucleon systems are generated in two ways. In one, three particles interact simultaneously. In the other, two nucleons interact and then one of those goes on to interact with another nucleon.

In addition to figuring out the mechanism of three-nucleon short-range correlations, the researchers would like to pin down exactly how fast they move. Ultimately, understanding short-range-correlated pairs and trios shows physicists how different particles and interactions contribute to the overall properties of the atomic nucleus, which is important for interpreting other kinds of nuclear experiments.

And this exploration brings an added bonus. Neutron stars are the remnants of exploded giant stars. Their inner workings are mysterious, but we know they are incredibly dense—just like the atomic nucleus. Physicists think that the way matter behaves inside a neutron star could be similar to the mechanisms of these short-distance nucleon interactions, meaning these experiments on matter at its tiniest scales may help interpret phenomena light years away.

After all, according to Arrington, “It’s much easier to study a three-nucleon correlation in the lab than in a neutron star.”

New imaging method reveals how light and heat generate electricity in nanomaterials

UC Riverside researchers have unveiled a powerful new imaging technique that exposes how cutting-edge materials used in solar panels and light sensors convert light into electricity—offering a path to better, faster, and more efficient devices.

The breakthrough, published in the journal Science Advances, could lead to improvements in solar energy systems and optical communications technology. The study title is “Deciphering photocurrent mechanisms at the nanoscale in van der Waals interfaces for enhanced optoelectronic applications.”

The research team, led by associate professors Ming Liu and Ruoxue Yan of UCR’s Bourns College of Engineering, developed a three-dimensional imaging method that distinguishes between two fundamental processes by which light is transformed into electric current in quantum materials.

One process, known as the photovoltaic, or PV, effect, is the well-known mechanism behind solar panels: incoming photons knock electrons loose in a semiconductor, creating a flow of charge that accumulates at the electrode contacts and delivers electric current.

The second process, called the photothermoelectric, or PTE, effect, is less familiar but just as important—especially in small-scale devices.

In PTE, light energy heats up electrons in the material, making them “hotter” than their surroundings. These energized electrons then naturally drift toward cooler regions, generating an electric current as they flow. Because this drift carries electrons away from where they accumulate near the electrode, it works against the PV effect.
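In its simplest form, the PTE contribution behaves like a Seebeck voltage driven by the light-induced temperature difference. The numbers below are an illustrative order of magnitude, not measured values for MoS2.

```python
# Minimal sketch of the photothermoelectric (PTE) effect: a Seebeck
# voltage produced by a light-induced temperature difference.
S = -4e-4      # Seebeck coefficient, V/K (negative: electrons flow hot -> cold)
delta_T = 2.5  # K, assumed laser-induced heating at the illuminated spot

V_pte = S * delta_T  # open-circuit thermoelectric voltage, in volts
```

The sign convention is the point: for n-type carriers the PTE voltage can oppose the PV photovoltage at the same contact, which is why the two effects can partially cancel.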

“Before now, we knew both effects were happening, but we couldn’t see how much each one contributed and how they spatially distribute,” Liu said. “With our new technique, we can finally tell them apart and understand how they work together. That opens new ways to design better devices.”

The team focused on nanodevices made from molybdenum disulfide, or MoS2—a two-dimensional semiconductor just a few atoms thick—combined with gold electrodes. These ultrathin structures are drawing intense interest for next-generation electronics due to their unusual optical and electrical properties.

Using a specialized scanning method that funnels light through the tip of an atomic-force microscope, Liu and Yan’s teams were able to pinpoint where and how the PV and PTE effects occurred—down to the nanometer scale.

What they found surprised them: While the PV effect was expected at the junction where the gold and MoS2 meet, the PTE effect extended much farther into the material than previously thought.

“This goes against the conventional wisdom,” added Da Xu, the Ph.D. student who was first author of the paper. “It shows that heat-driven effects can influence electrical output over much larger areas, even away from the metal contact.”

The team also discovered that by adding a thin layer of hexagonal boron nitride, or h-BN, over the MoS2, they could steer heat sideways through the material. This redirected heat flow boosted the PTE effect by aligning temperature changes with variations in how the material responds to heat—essentially enhancing current production.

“Normally, you try to keep heat localized,” Xu said. “But in this case, letting it spread out actually helped.”

To separate the PV and PTE contributions, the researchers developed a new analysis method that changes the distance between the microscope tip and the sample. By tracking how the current signal changed with distance, and breaking it down using a technique called multi-order harmonic analysis, they could isolate the two effects for the first time in real space.
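The idea of separating contributions by how they respond to a modulated tip-sample distance can be sketched with a toy lock-in demodulation. The signal model here (one component linear in the oscillating distance, one quadratic) is a stand-in chosen for illustration, not the paper's actual multi-order harmonic analysis.

```python
import numpy as np

# Two signal components with different nonlinear dependence on the
# oscillating tip-sample distance appear at different harmonics of the
# modulation frequency f, so projecting onto each harmonic separates them.
f = 50.0                       # modulation frequency, Hz (illustrative)
t = np.linspace(0, 1, 10_000, endpoint=False)
z = np.cos(2 * np.pi * f * t)  # normalized tip oscillation

i_linear = 1.0 * z             # linear component: appears at 1f
i_quad = 0.5 * z**2            # quadratic component: DC plus a 2f term
signal = i_linear + i_quad

def lock_in(sig, harmonic):
    """Amplitude of `sig` at `harmonic * f` via Fourier projection."""
    ref = np.exp(-2j * np.pi * harmonic * f * t)
    return 2 * np.abs((sig * ref).mean())

amp_1f = lock_in(signal, 1)    # isolates the linear component (amplitude 1.0)
amp_2f = lock_in(signal, 2)    # isolates the quadratic component (amplitude 0.25)
```

Since cos²(x) = 1/2 + cos(2x)/2, the quadratic term contributes 0.5 × 0.5 = 0.25 at the second harmonic and nothing at the first, so the two readings cleanly separate the components.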

This innovation could help engineers better design light-detecting components in fiber-optic communication systems—where devices are getting ever smaller, and managing heat is increasingly important. It may also point the way toward more efficient solar power technologies, especially those that aim to harvest both light and heat.

“The idea that we can fine-tune a photodetector’s performance using heat flow is really exciting,” Liu said.

The study’s lead author was Liu’s graduate student Da Xu. The team also collaborated with Takashi Taniguchi of Japan’s National Institute for Materials Science.

“We’re just beginning to uncover how light, heat, and electricity interact in these extraordinary materials,” Liu said. “There’s a lot more to discover.”

Researchers observe nematic order in magnetic helices, echoing liquid crystal behavior

Nematic materials are made of elongated molecules that align in a preferred direction, but, like in a fluid, are spaced out irregularly. The best-known nematic materials are liquid crystals, which are used in liquid crystal display (LCD) screens. However, nematic order has been identified in a wide range of systems, including bacterial suspensions and superconductors.

Now, a team led by researchers at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab), SLAC National Accelerator Laboratory and University of California, Santa Cruz, has discovered a nematic order in a magnetic material, in which the magnetic spins of the material are arranged into coils pointing in the same general direction.

“If we think of these magnetic helices as the objects that are aligning, the magnetism follows expectations for nematic phases,” said Zoey Tumbleson, a graduate student at Berkeley Lab and the University of California, Santa Cruz, who led this work. “These phases were not previously known and it’s very exciting to see this generalized to a wider field of study.”

While this new exotic order needs further study, the discovery could one day lead to future technology based on tiny magnetic helices rather than conventional liquid crystals.

“If you can control these weird helical nematic states, maybe you could build new materials with on-demand properties,” said co-author Joshua Turner, a lead scientist at SLAC and principal investigator at the Stanford Institute for Materials and Energy Sciences (SIMES). “I feel like this is just the beginning.”

The researchers reported their findings in Science Advances.

Nematic phases in condensed matter systems. Credit: Science Advances (2025). DOI: 10.1126/sciadv.adt5680

Two X-ray facilities probe two very different timescales of motion

The team found the magnetic nematic order in special films of iron germanide that were grown by the collaboration at the University of California, Berkeley, and Berkeley Lab. These films lacked the crystalline order typically seen in this material.

“Our discovery of a magnetic nematic phase is an example of a new exotic phase in amorphous iron germanide,” said co-author Sujoy Roy, a staff scientist at Berkeley Lab. “This work is part of our group’s broader research effort into understanding fluctuations in magnetic materials, which could lead to advancements in information storage and other microelectronic applications.”

To determine the arrangement and motions of the magnetic coils in the films, the researchers brought them to two different X-ray light sources—the Linac Coherent Light Source (LCLS) at SLAC and the Advanced Light Source (ALS) at Berkeley Lab—where they shot X-rays through the films and measured how they scattered.

Using both light sources’ unique capabilities, they discovered motions of the magnetic coils at two vastly different timescales, one being a trillion times faster than the other. At LCLS, they measured rapid motions occurring within billionths of a second, or nanoseconds. At ALS, they observed slower motions happening over hundreds of seconds.

“This work truly would not have come together without the collaboration between SLAC and Berkeley Lab,” Tumbleson said.

Together, the findings from both light sources give researchers a first glimpse of the complicated motions involved in this magnetic nematic order. Future measurements could investigate either the motion on timescales between the two investigated in this work, or at even faster timescales with the recent upgrade at LCLS.

“These measurements at very different timescales combine to provide us with this really interesting picture. It’s mysterious and points to much more occurring here than previously understood,” Turner said. “But we’re only capturing a narrow sliver of what’s happening.”

Light-based listening: Researchers develop a low-cost visual microphone

Researchers have created a microphone that listens with light instead of sound. Unlike traditional microphones, this visual microphone captures tiny vibrations on the surfaces of objects caused by sound waves and turns them into audible signals.

“Our method simplifies and reduces the cost of using light to capture sound while also enabling applications in scenarios where traditional microphones are ineffective, such as conversing through a glass window,” said research team leader Xu-Ri Yao from Beijing Institute of Technology in China. “As long as there is a way for light to pass through, sound transmission isn’t necessary.”

In the journal Optics Express, the researchers describe the new approach, which applies single-pixel imaging to sound detection for the first time. Using an optical setup without any expensive components, they demonstrate that the technique can recover sound by using the vibrations on the surfaces of everyday objects such as leaves and pieces of paper.

“The new technology could potentially change the way we record and monitor sound, bringing new opportunities to many fields, such as environmental monitoring, security and industrial diagnostics,” said Yao. “For example, it could make it possible to talk to someone stuck in a closed-off space like a room or a vehicle.”

The researchers successfully reconstructed audio signals by imaging vibration from a paper card (a-c). They applied a signal processing filter to enhance the high-frequency component of the signal (d-f). Credit: Xu-Ri Yao, Beijing Institute of Technology

Simplifying the setup
Although various methods have been used to detect sound with light, they require sophisticated optical equipment such as lasers or high-speed cameras. In the new work, the researchers set out to use a computational imaging approach known as single-pixel imaging to develop a simpler and less expensive approach that would make optical sound-detection technology more accessible.

Single-pixel imaging captures images using just one light detector—or pixel—instead of a traditional camera sensor with millions of pixels. Rather than recording an image all at once, a spatial light modulator imposes a sequence of time-varying structured patterns on light from the scene, and the single-pixel detector measures the amount of modulated light for each pattern. A computer then uses these measurements to reconstruct information about the object.

To apply single-pixel imaging to sound detection, Yao’s team used a high-speed spatial light modulator to encode light reflected from the vibrating surface. The sound-induced motion causes subtle changes in light intensity that were captured by the single-pixel detector and decoded into audible sound. They used Fourier-based localization methods to track object vibrations, which enabled efficient and precise measurement of minute variations.
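The sound-recovery step reduces, in the simplest case, to pulling the vibration frequency out of an intensity time series. This sketch uses a plain FFT peak search as a stand-in for the paper's Fourier-based localization method, which is more involved.

```python
import numpy as np

# Recover the dominant vibration frequency from a light-intensity trace.
fs = 8000                                # samples per second
t = np.arange(fs) / fs                   # one second of samples
tone = 440.0                             # Hz, the sound driving the surface

# Sound-induced motion produces a tiny modulation on a large DC level.
intensity = 1.0 + 0.01 * np.sin(2 * np.pi * tone * t)

spectrum = np.abs(np.fft.rfft(intensity - intensity.mean()))
freqs = np.fft.rfftfreq(intensity.size, d=1 / fs)
recovered = freqs[spectrum.argmax()]     # -> 440.0 Hz
```

Stacking this per-frame frequency content over time yields the reconstructed audio waveform; the filtering mentioned in the figure caption then boosts the weaker high-frequency components.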

“Combining single-pixel imaging with Fourier-based localization methods allowed us to achieve high-quality sound detection using simpler equipment and at a lower cost,” said Yao. “Our system enables sound detection using everyday items like paper cards and leaves, under natural lighting conditions, and doesn’t require the vibrating surface to reflect light in a certain way.”

Another advantage of using a single-pixel detector to record light intensity information is that it generates a relatively small volume of data. This means that data can be easily downloaded to storage devices or uploaded to the internet in real time, enabling long-duration or even continuous sound recording.

Capturing sound

To demonstrate the new visual microphone, the researchers tested its ability to reconstruct Chinese and English pronunciations of numbers as well as a segment from Beethoven’s Für Elise. They used a paper card and a leaf as vibration targets, placing them 0.5 meters away from the imaging setup while a nearby speaker played the audio.

The system was able to successfully reconstruct clear and intelligible audio, with the paper card producing better results than the leaf. Low-frequency sounds (<1 kHz) were accurately recovered, while high-frequency sounds (>1 kHz) showed slight distortion that improved when a signal processing filter was applied. Tests of the system’s data rate showed it produced 4 MB/s, a rate sufficiently low to minimize storage demands and allow for long-term recording.

“Currently, this technology still only exists in the laboratory and can be used in special scenarios where traditional microphones fail to work,” said Yao. “We aim to expand the system into other vibration measurement applications, including human pulse and heart rate detection, leveraging its multifunctional information sensing capabilities.”

They are also working to improve the system’s sensitivity and accuracy, while also making it portable enough for convenient everyday use. Another key goal is to extend its effective range to enable reliable long-distance sound detection.