Numerical simulations show how the classical world might emerge from the many worlds of quantum mechanics

Students learning quantum mechanics are taught the Schrödinger equation and how to solve it to obtain a wave function. But a crucial step is skipped because it has puzzled scientists since the earliest days—how does the real, classical world emerge from the often large number of solutions for the wave function?

Each of these wave functions has its own shape and associated energy level, but how does the wave function “collapse” into what we see as the classical world—atoms, cats and the pool noodles floating in the tepid swimming pool of a seedy hotel in Las Vegas hosting a convention of hungover businessmen trying to sell the world a better mousetrap?

At a high level, this is handled by the “Born rule”—the postulate that the probability density for finding an object at a particular location is proportional to the squared magnitude of the wave function at that position.
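As a minimal numerical illustration (a generic Gaussian wave packet on a grid, not tied to any particular system), the rule turns a wave function into probabilities like this:

```python
import numpy as np

# Born rule on a grid: the probability of finding the particle in a cell
# of width dx is |psi|^2 * dx, once the wave function is normalized.
x = np.linspace(-5.0, 5.0, 1001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2)                  # an example Gaussian wave packet
prob = np.abs(psi)**2                    # Born rule: density ~ |psi|^2
prob /= prob.sum() * dx                  # normalize total probability to 1

print((prob * dx).sum())                 # 1.0
print((prob * dx)[np.abs(x) < 1].sum())  # chance of finding it at |x| < 1
```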

Erwin Schrödinger invented his famous feline as a way to amplify the consequences of the collapsing wave function—a single quantum event, such as the radioactive decay of an atomic nucleus, somehow translates into the macroscopic cat in the box being either alive or dead. (This mysterious transition, perhaps only theoretical, is called the Heisenberg Cut.)

Traditional quantum mechanics says the cat becomes either alive or dead only when the box is opened and the cat's state is “measured.” Before that, the cat is, in a sense, both alive and dead—it exists in a quantum superposition of the two states. Only when the box is opened and its inside viewed does the wave function of the cat collapse into a definite state of being either alive or dead.

In recent years, physicists have been looking at this process more deeply to understand what’s happening. One approach, modifying the Schrödinger equation itself, has had only limited success.

Alternatives to the Copenhagen interpretation described above, such as de Broglie–Bohm pilot-wave theory and the many-worlds interpretation of quantum mechanics, are receiving more attention.

Now a team of quantum theorists from Spain has used numerical simulations to show that, on large scales, features of the classical world can emerge from a wide class of quantum systems. Their work is published in the journal Physical Review X.

“Quantum physics is at odds with our classical experience as far as the behavior of single electrons, atoms or photons is concerned,” lead author Philipp Strasberg of the Autonomous University of Barcelona told Phys.org.

“However, if one zooms out, and considers coarse quantities that we humans can perceive (for example, the temperature of our morning coffee or the position of a stone), our results indicate that quantum interference effects, which are responsible for weird quantum behavior, vanish.”

Their finding suggests that the classical world we see can emerge from the many-worlds picture of quantum mechanics, where many universes exist at the same point in spacetime and where a potentially huge number of worlds branch off from ours every time a measurement is made.

As a rough analogy, imagine a shower bag filled with water. Poke holes in the bag and water—which inside the bag is a large collection of frequently colliding molecules moving in random directions—will stream out in mostly smooth flows. This is akin to how the complicated jumble of a quantum system nonetheless appears in the classical world as something we recognize and feel familiar with.

But a technical problem remained with the many-worlds portrait: how do we reconcile the many universes with the classical experience we have within our one universe? After all, we never see cats in a superposition of alive and dead. A priori, how can we speak of other universes or worlds or branches in any meaningful sense?

In their paper, Strasberg and co-authors write, “Speaking of different worlds or histories becomes meaningful if we can reason about their past, present, and future in classical terms.”

The co-authors attempted to solve this problem in a new way. Previous work has invoked quantum decoherence—where the objects we see arise out of the many superpositions of a quantum system as it interacts with its environment. But that approach has a fine-tuning problem: it only works for specific types of interactions and specific initial wave functions.

By contrast, the group showed that a stable, self-consistent set of features emerges from the range of many possible evolutions of a wave function (with many energy levels) at observable, non-microscopic scales. This solution has no fine-tuning problem: it works for a wide choice of initial conditions and of the details of the interactions between energy levels.

“In particular,” Strasberg told Phys.org, “we provide clear evidence that this vanishing [of quantum interference effects] happens extremely fast—to be precise: exponentially fast—with growing system size. That is, even a few atoms or photons can behave classically. Furthermore, it is a ubiquitous and generic phenomenon that does not require any fine tuning: the emergence of a classical world is inevitable.”

The group numerically simulated quantum evolution for up to five time-steps and up to 50,000 energy levels for nontrivial quantum systems. Though that evolution is still small compared to what will be needed to simulate everyday classical phenomena, it’s much larger than in any previous work.
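To get a feel for the kind of quantity involved, here is a minimal toy calculation (ours, not the paper's model): a random Hamiltonian stands in for a nontrivial quantum system, two coarse projectors define branches at two times, and the off-diagonal element of the decoherence functional measures the residual interference between histories. In the paper's picture, such off-diagonal terms shrink rapidly as the system grows.

```python
import numpy as np

def max_interference(N, dt=1.0, seed=0):
    """Largest off-diagonal decoherence-functional element for
    two-time coarse-grained histories of a random-matrix toy model."""
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    H = (A + A.conj().T) / 2                      # random Hermitian H
    E, V = np.linalg.eigh(H)
    U = (V * np.exp(-1j * E * dt)) @ V.conj().T   # one-step evolution operator

    psi = rng.normal(size=N) + 1j * rng.normal(size=N)
    psi /= np.linalg.norm(psi)

    half = np.zeros(N)
    half[: N // 2] = 1.0
    P = {0: half, 1: 1.0 - half}                  # two coarse "slots"

    # Branch vectors |a1, a2> = P_{a2} U P_{a1} |psi>
    branch = {(a1, a2): P[a2] * (U @ (P[a1] * psi))
              for a1 in (0, 1) for a2 in (0, 1)}

    # Interference between histories that differ already in the first slot
    return max(abs(np.vdot(branch[(b1, b2)], branch[(a1, a2)]))
               for a1 in (0, 1) for b1 in (0, 1) if a1 != b1
               for a2 in (0, 1) for b2 in (0, 1))

for N in (8, 64, 512):
    print(N, max_interference(N))   # shrinks as the Hilbert space grows
```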

They considered a broad range of choices of the initial wave function and of coupling strengths and found that approximately the same large-scale structure of stable branches emerges—a stable and slowly evolving macroscopic structure.

“Remarkably, we also explicitly demonstrate that interesting classical worlds can emerge from a quantum system that is overall in thermodynamic equilibrium,” Strasberg added. “Even though it is very unlikely that this is the case in our universe, it nevertheless demonstrates that order, structure and an arrow of time can emerge on single branches of a quantum Multiverse, which overall looks chaotic, unstructured and time-symmetric.”

Relating their work to statistical mechanics, where macroscopic features like temperature and pressure emerge from a melange of randomly moving particles, the group found that some branches lead to worlds where entropy increases and others to worlds where entropy decreases. Such worlds would have opposite entropic arrows of time.

More information: Philipp Strasberg et al, First Principles Numerical Demonstration of Emergent Decoherent Histories, Physical Review X (2024). DOI: 10.1103/PhysRevX.14.041027

Journal information: Physical Review X 

© 2024 Science X Network

Cooperation between two intruders moving side-by-side in granular media

In bird colonies, schools of fish and cycling pelotons, significant interactions occur between individuals through the surrounding fluid. These interactions are well understood in fluids such as air and water, but what happens when objects move through something like sand? It turns out that similar interactions occur in granular materials—things like soil or sand—and they play a crucial role in everyday contexts. Think of plows cutting through farmland, animals burrowing underground, roots pushing through soil, or even robots exploring the surface of Mars.

Recently, we came across a fascinating discovery: When two objects—what we call “intruders”—move side by side through granular materials, they can actually help each other by reducing the resistance they face. This cooperative effect was uncovered by a team of researchers from the School of Mechanical Engineering at the University of Campinas (UNICAMP) in Brazil, and the FAST laboratory, CNRS, Université Paris-Saclay in France.

To investigate this, we set up an experiment using spherical objects immersed in glass beads to replicate a granular medium. The goal was to pull these objects at a constant speed and measure the drag force they experienced as they moved through the grains. While previous studies had looked at the lateral forces between objects, our team wondered whether moving together might also reduce the drag force.

Some intriguing numerical simulations by two of our researchers at UNICAMP, D. D. Carvalho and E. M. Franklin, published in Physics of Fluids in 2022, suggested that it could, but we wanted to test this in the real world.

What we found was exciting: When the two intruders were close together, the drag on each of them dropped significantly—by as much as 30% compared to when they were farther apart. And the deeper they were buried in the material, the more pronounced this effect became. The explanation? When two objects move side by side, the motion of one disrupts the force chains between the grains around the other. This break in the grain contact reduces the overall resistance each object encounters.

Beyond just observing this effect, we also developed a semi-empirical model to describe it. The model is based on the idea that interactions between closely spaced objects disrupt these granular force chains, making it easier for them to move. This study, now published in Physical Review Fluids, highlights a previously under-explored aspect of granular dynamics: the cooperative motion of multiple objects.
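The article does not spell out the model's functional form, so the snippet below is only an illustrative parametrization of the reported behavior (our sketch, with assumed parameter values): the drag on each intruder recovers toward its isolated value as the pair separates, with a roughly 30% reduction at close range.

```python
import numpy as np

def pair_drag(gap_in_diameters, f_isolated=1.0, reduction=0.30, screen=1.5):
    """Hypothetical drag on one of two side-by-side intruders.

    gap_in_diameters : surface-to-surface gap, in sphere diameters
    f_isolated       : drag far from the neighbor (normalized to 1 here)
    reduction        : fractional drag drop at close range (~30% reported)
    screen           : decay length of force-chain disruption (assumed)
    """
    return f_isolated * (1.0 - reduction * np.exp(-gap_in_diameters / screen))

for gap in (0.0, 1.0, 3.0, 10.0):
    print(gap, pair_drag(gap))  # drag climbs back toward 1 as the pair separates
```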

As research into these dynamics advances, it may lead to new technologies and techniques for navigating granular materials—on Earth and beyond—potentially enabling more efficient solutions for various industries and scientific endeavors.

This story is part of Science X Dialog, where researchers can report findings from their published research articles.

More information: D. D. Carvalho et al, Drag reduction during the side-by-side motion of a pair of intruders in a granular medium, Physical Review Fluids (2024). DOI: 10.1103/PhysRevFluids.9.114303

Journal information: Physical Review Fluids  Physics of Fluids 

Researchers improve chaotic mapping for super-resolution image reconstruction

Super-resolution (SR) technology plays a pivotal role in enhancing the quality of images. SR reconstruction aims to generate high-resolution images from low-resolution ones. Traditional methods often result in blurred or distorted images. Advanced techniques such as sparse representation and deep learning-based methods have shown promising results but still face limitations in terms of noise robustness and computational complexity.

In a recent study published in Sensors, researchers from the Changchun Institute of Optics, Fine Mechanics and Physics of the Chinese Academy of Sciences proposed innovative solutions that integrate chaotic mapping into the SR image reconstruction process, significantly enhancing image quality across various fields.

The researchers introduced circle chaotic mapping into the dictionary-sequence solving process of the K-singular value decomposition (K-SVD) dictionary update algorithm. This integration facilitated balanced traversal and simplified the search for globally optimal solutions, thereby enhancing the noise robustness of the SR reconstruction.

In addition, the researchers adopted the orthogonal matching pursuit (OMP) greedy algorithm, which converges faster than the L1-norm convex optimization algorithm, to complement K-SVD, and constructed a high-resolution image using the mapping relationship generated by the algorithm.

They trained and learned high- and low-resolution dictionaries from a large number of images similar to the target. Through the joint dictionary training method, the high- and low-resolution image blocks under the dictionary had the same sparse representation, reducing the complexity of the SR reconstruction process.
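A rough sketch of the two ingredients named above, under our own assumptions (the paper's exact update rule and parameter values are not given here): the circle chaotic map that generates the traversal sequence, and OMP-based sparse coding of low-resolution patches over a jointly trained dictionary pair.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp  # OMP sparse coding

def circle_map(n, x0=0.3, omega=0.5, k=0.9):
    """Circle chaotic map x_{n+1} = (x_n + omega - (k/(2*pi)) * sin(2*pi*x_n)) mod 1.
    Parameter values here are illustrative, not the paper's."""
    xs, x = np.empty(n), x0
    for i in range(n):
        x = (x + omega - k / (2 * np.pi) * np.sin(2 * np.pi * x)) % 1.0
        xs[i] = x
    return xs

def sr_patches(Y_lo, D_lo, D_hi, n_nonzero=5):
    """Sparse-code low-res patches Y_lo over dictionary D_lo, then map the
    same codes through the coupled high-res dictionary D_hi (the joint
    training described above is what makes the codes shared)."""
    codes = orthogonal_mp(D_lo, Y_lo, n_nonzero_coefs=n_nonzero)
    return D_hi @ codes
```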

The proposed method, named the Chaotic Mapping-based Sparse Representation (CMOSR), significantly improves image quality and authenticity. It can effectively reconstruct high-resolution images with high spatial resolution, good clarity, and rich texture details. Compared to traditional SR algorithms, CMOSR exhibits better noise robustness and computational efficiency. It does not generate spurious details when processing images and accommodates a wider range of image sizes.

More information: Hailin Fang et al, Super-Resolution Reconstruction of Remote Sensing Images Using Chaotic Mapping to Optimize Sparse Representation, Sensors (2024). DOI: 10.3390/s24217030

Provided by Chinese Academy of Sciences

Researchers achieve calculation of Jones polynomial based on Majorana zero modes

A research team has experimentally calculated the Jones polynomial based on the quantum simulation of braided Majorana zero modes, determining the Jones polynomials of different links by simulating the braiding operations of Majorana fermions. The study was published in Physical Review Letters.

Link and knot invariants, such as the Jones polynomial, serve as a powerful tool to determine whether two knots are topologically equivalent. Currently, there is a lot of interest in determining Jones polynomials, as they have applications in various disciplines, such as DNA biology and condensed matter physics.
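For reference, the Jones polynomial $V(L)$ is fixed by a textbook skein relation (standard material, not specific to this work): setting $V(\text{unknot}) = 1$,

$$t^{-1}\,V(L_+) \;-\; t\,V(L_-) \;=\; \left(t^{1/2} - t^{-1/2}\right) V(L_0),$$

where $L_+$, $L_-$ and $L_0$ are links that differ at a single crossing (positive, negative, or smoothed out). Applying the relation repeatedly reduces any link to unknots, but the number of diagrams grows exponentially with the number of crossings, which is the classical hardness described next.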

Unfortunately, even approximating the value of Jones polynomials falls within the #P-hard complexity class, with the most efficient classical algorithms requiring an exponential amount of resources. Yet quantum simulations offer an exciting way to experimentally investigate properties of non-Abelian anyons, and Majorana zero modes (MZMs) are regarded as the most plausible candidates for experimentally realizing non-Abelian statistics.

The team used a photonic quantum simulator that employed two-photon correlations and nondissipative imaginary-time evolution to perform two distinct MZM braiding operations that generate anyonic worldlines of several links. Based on this simulator, the team conducted a series of experimental studies to simulate the topological properties of non-Abelian anyons.
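The hallmark of non-Abelian statistics is that braiding operations need not commute. A minimal numerical check (using a standard qubit representation of four Majorana operators, not the experiment's photonic encoding):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Four Majorana operators on two qubits (Jordan-Wigner construction):
# Hermitian, squaring to 1, and pairwise anticommuting.
g1, g2 = np.kron(X, I2), np.kron(Y, I2)
g3, g4 = np.kron(Z, X), np.kron(Z, Y)

def braid(gi, gj):
    """Unitary exchanging Majoranas i and j:
    B = exp((pi/4) * gj @ gi) = (1 + gj @ gi) / sqrt(2)."""
    return (np.eye(4) + gj @ gi) / np.sqrt(2)

B12, B23 = braid(g1, g2), braid(g2, g3)
print(np.allclose(B12 @ B23, B23 @ B12))  # False: the order of exchanges matters
```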

They successfully simulated the exchange operations of MZMs in a single Kitaev chain, detected the non-Abelian geometric phase of MZMs in a two-Kitaev-chain model, and further extended the approach to higher-dimensional semion zero modes, studying a braiding process that was immune to local noise and conserved quantum contextual resources.

Based on this work, the team expanded the previous single-photon encoding method to dual-photon spatial methods, utilizing coincidence counting of dual photons for encoding. This significantly increased the number of quantum states that can be encoded.

Meanwhile, by introducing a Sagnac interferometer-based quantum cooling device, the team transformed the dissipative evolution into a nondissipative one, enhancing the device’s ability to recycle photonic resources and enabling multi-step quantum evolution operations. These techniques greatly improved the capability of the photonic quantum simulator and laid a solid technical foundation for simulating the braiding of Majorana zero modes in the three Kitaev chain models.

The team demonstrated that their experimental setup could faithfully realize the desired braiding evolutions of MZMs, with the average fidelity of quantum states and braiding operations above 97%.

By combining different braiding operations of Majorana zero modes in the three Kitaev chain models, the research team simulated five typical topological knots, obtaining the Jones polynomials of five topologically distinct links and thereby distinguishing between topologically inequivalent links.

Such an advance can greatly contribute to the fields of statistical physics, molecular synthesis technology and integrated DNA replication, where intricate topological links and knots emerge frequently.

More information: Jia-Kun Li et al, Photonic Simulation of Majorana-Based Jones Polynomials, Physical Review Letters (2024). DOI: 10.1103/PhysRevLett.133.230603

Journal information: Physical Review Letters

Provided by University of Science and Technology of China

The science behind your Christmas sweater: How friction shapes the form of knitted fabrics

A trio of physicists from the University of Rennes, Aoyama Gakuin University, and the University of Lyon has discovered, through experimentation, that it is friction between fibers that allows knitted fabrics to hold a given form. Jérôme Crassous, Samuel Poincloux, and Audrey Steinberger set out to understand the underlying mechanics behind the shapes of knitted garments. Their paper is published in Physical Review Letters.

The research team noted that while many of the factors that are involved in intertwined fabrics have been studied to better understand their characteristics (such as why sweaters keep people warm despite the gaps between stitches), much less is known about the form garments made using such techniques can take.

To learn more, they conducted experiments using a nylon yarn and a well-known Jersey knit stitch called the stockinette—a technique that involves forming interlocked loops using knitting needles. They knitted a piece of fabric using 70×70 stitches and attached it to a biaxial tensile machine.

The team then used the tensile machine to stretch the piece of fabric in different ways and closely examined how this affected the stitches. They found that the fabric did not have a unique resting shape. By stretching it in different ways, they could cause it to come to rest in different forms, which they call metastable shapes.

They noted that the length-to-width ratio of these metastable shapes varied depending on how much twisting was applied, which suggested the fabric was capable of taking on many different metastable shapes.

The researchers then created simulations of the fabric to show what was happening as it was twisted and pulled on the tensile machine. The simulations reproduced the experimental results, but also allowed the researchers to change one characteristic of the virtual fibers that could not be changed in the real fabric—the amount of friction between the strands.

They found that setting the friction to zero reduced the metastable shapes to just one. Thus, friction was found to be the driving force behind the forms that knitted fabrics can take.
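The underlying mechanism shows up in a much simpler system. In this toy calculation (ours, not the paper's fabric model), a block on a spring with Coulomb friction can rest anywhere in a whole band of positions, and the band collapses to a single point when friction is removed:

```python
# A block of mass m on a spring of stiffness k, resting on a surface with
# friction coefficient mu, is in equilibrium wherever the spring force stays
# below the maximum static friction force: |k * x| <= mu * m * g.
def rest_band_halfwidth(mu, m=0.01, g=9.81, k=100.0):
    return mu * m * g / k   # half-width of the set of rest positions, meters

print(rest_band_halfwidth(0.0))  # 0.0      -> a unique rest shape
print(rest_band_halfwidth(0.3))  # ~2.9e-4  -> a continuum of metastable shapes
```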

More information: Jérôme Crassous et al, Metastability of a Periodic Network of Threads: Shapes of a Knitted Fabric, Physical Review Letters (2024). DOI: 10.1103/PhysRevLett.133.248201. On arXiv: DOI: 10.48550/arxiv.2404.07811

Journal information: Physical Review Letters  arXiv 

© 2024 Science X Network

A new calculation of the electron’s self-energy improves determination of fundamental constants

When quantum electrodynamics, the quantum field theory of electrons and photons, was being developed after World War II, one of the major challenges for theorists was calculating a value for the Lamb shift: the energy of the photon emitted when an electron transitions between two hydrogen energy levels that earlier theory predicted to have exactly the same energy.

The effect was first detected by Willis Lamb and Robert Retherford in 1947, with the emitted photon having a frequency of about 1,000 megahertz, corresponding to a photon wavelength of 30 cm and an energy of 4 millionths of an electronvolt—right on the lower edge of the microwave spectrum. It came when the one electron of the hydrogen atom transitioned from the 2S1/2 energy level to the slightly lower 2P1/2 level. (The leftmost number is the principal quantum number, much like the discrete but increasing circular orbits of the Bohr atom.)
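Those three numbers are one quantity expressed in different units, as a quick check confirms:

```python
# Frequency <-> wavelength <-> photon energy for the quoted Lamb shift value
h = 6.62607015e-34      # Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s
eV = 1.602176634e-19    # joules per electronvolt

f = 1.0e9               # 1,000 MHz, in Hz
print(c / f)            # ~0.30 m, i.e., a 30 cm wavelength
print(h * f / eV)       # ~4.1e-6 eV, i.e., about 4 millionths of an electronvolt
```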

Conventional quantum mechanics didn’t have such transitions, and Dirac’s relativistic Schrödinger equation (naturally called the Dirac equation) did not predict such a splitting either, because the shift is a consequence of interactions with the vacuum, and Dirac’s vacuum was a “sea” that did not interact with real particles.

As theorists worked to produce a workable theory of quantum electrodynamics (QED), predicting the Lamb shift was an excellent challenge, as the QED calculation contained the prominent thorns of the theory, such as integrals that diverge at both low and high energies, and singularities.

On Lamb’s 65th birthday in 1978, Freeman Dyson said to him, “Those years, when the Lamb shift was the central theme of physics, were golden years for all the physicists of my generation. You were the first to see that this tiny shift, so elusive and hard to measure, would clarify our thinking about particles and fields.”

Precisely predicting the Lamb shift, as well as the anomalous magnetic moment of the electron, has been a challenge for theorists of every generation since. The theoretically predicted value for the shift allows the fine-structure constant to be measured with an uncertainty of less than one part in a million.

Now, a new step in the evolution of the Lamb shift calculation has been published in Physical Review Letters by a group of three scientists from the Max Planck Institute for Nuclear Physics in Germany. To be exact, they calculated the “two-loop” electron self-energy.

Self-energy is the energy a particle (here, an electron) has as a result of changes that it causes in its environment. For example, the electron in a hydrogen atom attracts the proton that is the nucleus, so the effective distance between them changes.

QED has a prescription for calculating the self-energy, and it’s easiest via Feynman diagrams. “Two loops” refers to the Feynman diagrams that describe this quantum process—two virtual photons from the quantum vacuum that influence the electron’s behavior. They pop out of the vacuum, exist for a time shorter than the limit set by the Heisenberg uncertainty principle, and are then absorbed by the 1S electron state, which has spin 1/2.

The two-loop self-energy is one of only three mathematical terms that describe the Lamb shift, but it is the hardest to compute and the one that most influences the result for the Lamb energy shift.

Lead author Vladimir Yerokhin and his colleagues determined an enhanced precision for it from numerical calculations. Importantly, they calculated the two-loop correction to all orders in an important parameter, Zα, which represents the interaction with the nucleus. (Z is the atomic number of the nucleus; the atom still has only one electron, but a nucleus bigger than hydrogen’s is included for generality. α is the fine-structure constant.)
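Schematically, in the notation standard in the hydrogen QED literature (our shorthand summary, not a formula quoted from the paper), the two-loop self-energy of an $nS$ level can be written as

$$\Delta E_{\mathrm{2loop}} \;=\; \left(\frac{\alpha}{\pi}\right)^{2} \frac{(Z\alpha)^{4}}{n^{3}}\, m c^{2}\, F(Z\alpha),$$

where earlier work expanded the slowly varying function $F(Z\alpha)$ order by order in $Z\alpha$ and its logarithms, while the new calculation evaluates it numerically without that expansion.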

Although it was computationally challenging, the trio produced a significant improvement on previous two-loop calculations of the electron self-energy that reduces the 1S–2S Lamb shift in hydrogen by a frequency difference of 2.5 kHz and reduces its theoretical uncertainty. In particular, this reduces the value of the Rydberg constant by one part in a trillion.

Introduced by the Swedish spectroscopist Johannes Rydberg in 1890, this number appears in simple equations for the spectral lines of hydrogen. It is one of the most precisely known fundamental constants in physics, known to 12 significant figures with, until now, a relative uncertainty of about two parts in a trillion.

Overall, they write, “the calculational approach developed in this Letter allowed us to improve the numerical accuracy of this effect by more than an order of magnitude and extend calculations to lower nuclear charges [Z] than previously possible.” This, in turn, has consequences for the Rydberg constant.

Their methodology also has consequences for other celebrated QED calculations: other two-loop corrections to the Lamb shift, and especially to the two-loop QED effects for the anomalous magnetic moment of the electron and the muon, also called their “g-factors.” A great deal of experimental effort is currently being put into precisely determining the muon’s g-factor, such as the Muon g-2 experiment at Fermilab, as it could point the way to physics beyond the standard model.

More information: V. A. Yerokhin et al, Two-Loop Electron Self-Energy for Low Nuclear Charges, Physical Review Letters (2024). DOI: 10.1103/PhysRevLett.133.251803

Journal information: Physical Review Letters

© 2024 Science X Network

Starlight to sight: Researchers develop short-wave infrared technology to allow starlight detection

Prof. Zhang Zhiyong’s team at Peking University developed a heterojunction-gated field-effect transistor (HGFET) that achieves high sensitivity in short-wave infrared detection, with a record specific detectivity above 10¹⁴ Jones at 1300 nm, making it capable of starlight detection. Their research was recently published in the journal Advanced Materials, titled “Opto-Electrical Decoupled Phototransistor for Starlight Detection.”

Highly sensitive short-wave infrared (SWIR) detectors are essential for detecting weak radiation (typically below 10⁻⁸ W·sr⁻¹·cm⁻²·µm⁻¹) with high-end passive image sensors. However, mainstream SWIR detection based on epitaxial photodiodes cannot effectively detect ultraweak infrared radiation due to the lack of inherent gain.

Filling this gap, researchers at the Peking University School of Electronics and collaborators have presented a heterojunction-gated field-effect transistor (HGFET) that achieves ultra-high photogain and exceptionally low noise in the short-wavelength infrared (SWIR) region, benefiting from a design that incorporates a comprehensive opto-electric decoupling mechanism.

The team developed an HGFET consisting of a colloidal quantum dot (CQD)-based p-i-n heterojunction and a carbon nanotube (CNT) field-effect transistor, which detects and amplifies SWIR signals with high inherent gain while minimally amplifying noise, leading to a record specific detectivity above 10¹⁴ Jones at 1300 nm and a record maximum gain-bandwidth product of 69.2 THz.

Direct comparative testing indicates that the HGFET can detect weak infrared radiation at levels of 0.46 nW·cm⁻², making this detector much more sensitive than commercial and previously reported SWIR detectors and, in particular, enabling starlight detection or vision.
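For context, specific detectivity is the standard figure of merit that normalizes signal-to-noise performance by detector area and bandwidth, letting devices of different sizes be compared. A sketch of the definition, with made-up example numbers rather than the paper's measured values:

```python
import numpy as np

def d_star(responsivity, area_cm2, noise_current):
    """Specific detectivity D* = R * sqrt(A) / i_n, in Jones (cm*Hz^0.5/W).

    responsivity  : photoresponse in A/W at the wavelength of interest
    area_cm2      : active detector area in cm^2
    noise_current : noise current spectral density in A/Hz^0.5
    """
    return responsivity * np.sqrt(area_cm2) / noise_current

# Example (hypothetical device): 1 A/W, 0.01 cm^2, 1e-15 A/sqrt(Hz) noise
print(d_star(1.0, 1e-2, 1e-15))   # 1e14 Jones
```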

More information: Shaoyuan Zhou et al, Opto‐Electrical Decoupled Phototransistor for Starlight Detection, Advanced Materials (2024). DOI: 10.1002/adma.202413247

Journal information: Advanced Materials

Provided by Peking University

Secret lab developing UK’s first quantum clock

A top-secret lab in the UK is developing the country’s first quantum clock to help the British military boost intelligence and reconnaissance operations, the defense ministry said Thursday.

The clock is so precise that it will lose less than one second over billions of years, “allowing scientists to measure time at an unprecedented scale,” the ministry said in a statement.

“The trialing of this emerging, groundbreaking technology could not only strengthen our operational capability, but also drive progress in industry, bolster our science sector and support high-skilled jobs,” Minister for Defense Procurement Maria Eagle said.

The groundbreaking technology by the Defense Science and Technology Laboratory will reduce reliance on GPS technology, which “can be disrupted and blocked by adversaries,” the ministry added.

It is not a world first, as the University of Colorado at Boulder developed a quantum clock 15 years ago with the US National Institute of Standards and Technology.

But it is “the first device of its kind to be built in the UK,” the statement said, adding it could be deployed by the military “in the next five years”.

A quantum clock uses quantum mechanics — the physics of matter and energy at the atomic and subatomic scale — to keep time with unprecedented accuracy by measuring energy fluctuations within atoms.

Accurate timekeeping is crucial for satellite navigation systems, mobile telephones and digital TV, among other applications, and may open new frontiers in research fields such as quantum science.

Companies and governments around the world are keen to cash in on the huge potential benefits quantum technology could bring.

Google last month unveiled a new quantum computing chip it said could do in minutes what it would take leading supercomputers 10 septillion years to complete.

The United States and China are investing heavily in quantum research, and the US administration has imposed tight restrictions on exporting such sensitive technology.

One expert, Olivier Ezratty, told AFP in October that private and public investment in such technology had reached $20 billion during the past five years.

The defense ministry said future research would “see the technology decrease in size to allow mass manufacturing and miniaturization, unlocking a wide range of applications, such as use by military vehicles and aircraft”.

© 2025 AFP

Building better infrared sensors: Novel photodiode design boosts responsivity

Detecting infrared light is critical in an enormous range of technologies, from remote controls to autofocus systems to self-driving cars and virtual reality headsets. That means there would be major benefits from improving the efficiency of infrared sensors, such as photodiodes.

Researchers at Aalto University have developed a new type of infrared photodiode that is 35% more responsive at 1.55 µm, the key wavelength for telecommunications, compared to other germanium-based components. Importantly, this new device can be manufactured using current production techniques, making it highly practical for adoption.

“It took us eight years from the idea to proof-of-concept,” says Hele Savin, a professor at Aalto University.

The basic idea is to make the photodiodes using germanium instead of indium gallium arsenide. Germanium photodiodes are cheaper and already fully compatible with the semiconductor manufacturing process—but so far, germanium photodiodes have performed poorly in terms of capturing infrared light.

Savin’s team managed to make germanium photodiodes that capture nearly all the infrared light that hits them. The study was published on 1 Jan 2025 in the journal Light: Science & Applications.

“The high performance was made possible by combining several novel approaches: eliminating optical losses using surface nanostructures and minimizing electrical losses in two different ways,” explains Hanchen Liu, the doctoral researcher who built the proof-of-concept device.

The team’s tests showed that their proof-of-concept photodiode outperformed not only existing germanium photodiodes but also commercial indium gallium arsenide photodiodes in responsivity. The new technology captures infrared photons very efficiently and works well across a wide range of wavelengths. The new photodiodes can be readily fabricated by existing manufacturing facilities, and the researchers expect that they can be directly integrated into many technologies.
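Responsivity has a hard ceiling for a photodiode without internal gain, which is what makes the reported figures notable: an ideal detector converting every photon into one electron gives R = qλ/(hc), about 1.25 A/W at 1.55 µm, so the >1 A/W in the paper's title implies near-unity quantum efficiency. A quick check:

```python
# Ideal photodiode responsivity R = eta * q * lambda / (h * c)
q = 1.602176634e-19     # electron charge, C
h = 6.62607015e-34      # Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s

def responsivity(eta, wavelength_m):
    """eta: external quantum efficiency, electrons out per photon in."""
    return eta * q * wavelength_m / (h * c)

print(responsivity(1.0, 1.55e-6))   # ~1.25 A/W ceiling at 1.55 um
print(responsivity(0.8, 1.55e-6))   # ~1.00 A/W at 80% efficiency
```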

“The timing couldn’t be better. So many fields nowadays rely on sensing infrared radiation that the technology has become part of our everyday lives,” says Savin.

Savin and the rest of the team are keen to see how their technology will affect existing applications and to discover what new applications become possible with the improved sensitivity.

More information: Hanchen Liu et al, Near-infrared germanium PIN-photodiodes with >1A/W responsivity, Light: Science & Applications (2025). DOI: 10.1038/s41377-024-01670-4

Journal information: Light: Science & Applications

Provided by Aalto University

Researchers use high-performance computing to analyze a quantum photonics experiment

Quantum experiments and high-performance computing: new methods enable complex calculations to be completed extremely quickly
[Image caption: Scientists at Paderborn University used the Paderborn supercomputer Noctua (right in the picture) in the first large-scale HPC analysis of a quantum photonics experiment. Credit: Paderborn University, Hennig/Mazhiq]

For the first time, scientists at Paderborn University have used high-performance computing (HPC) at large scales to analyze a quantum photonics experiment. Specifically, this involved the tomographic reconstruction of experimental data from a quantum detector, a device that measures individual photons.

The researchers involved developed new HPC software to achieve this. Their findings have now been published in the journal Quantum Science and Technology.

Quantum tomography on a megascale photonic quantum detector

High-resolution photon detectors are increasingly being used for quantum research. Precisely characterizing these devices is crucial if they are to be put to effective use for measurement purposes—and thus far, doing so has been a challenge. This is because it involves huge volumes of data that need to be analyzed without neglecting their quantum mechanical structure.

Suitable tools for processing these data sets are particularly important for future applications. While traditional approaches cannot perform like-for-like computations of quantum systems beyond a certain scale, Paderborn’s scientists are using high-performance computing for characterization and certification tasks.

“By developing open-source customized algorithms using HPC, we perform quantum tomography on a megascale quantum photonic detector,” explains physicist Timon Schapeler, who authored the paper with computer scientist Dr. Robert Schade and colleagues from PhoQS (Institute for Photonic Quantum Systems) and PC2 (Paderborn Center for Parallel Computing).
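In broad strokes, detector tomography probes the device with well-calibrated input states and then inverts the measured statistics to recover the detector's POVM, its quantum measurement description. The toy below is our cartoon of that task, far smaller than the paper's megascale HPC pipeline: it reconstructs the “click” POVM element of an on/off detector from coherent-state probes.

```python
import numpy as np
from scipy.optimize import nnls
from scipy.special import gammaln

def coherent_fock_probs(mean_n, n_max):
    """Photon-number distribution of a coherent state: Poisson(mean_n)."""
    n = np.arange(n_max + 1)
    return np.exp(-mean_n + n * np.log(mean_n) - gammaln(n + 1))

def reconstruct_click_povm(mean_ns, click_probs, n_max):
    """Solve click_probs = F @ pi for the Fock-diagonal POVM element pi by
    non-negative least squares (a physical POVM also needs pi <= 1, which
    this sketch does not enforce)."""
    F = np.vstack([coherent_fock_probs(mu, n_max) for mu in mean_ns])
    pi, _ = nnls(F, click_probs)
    return pi

# Simulated data from an ideal on/off detector: P(click) = 1 - exp(-mean_n)
mean_ns = np.linspace(0.1, 5.0, 40)
data = 1.0 - np.exp(-mean_ns)
print(reconstruct_click_povm(mean_ns, data, n_max=10)[:4])  # ~[0, 1, 1, 1]
```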

PC2, an interdisciplinary research project at Paderborn University, operates the HPC systems. The university is one of Germany’s national high-performance computing centers and thus stands at the forefront of university high-performance computing.

‘Unprecedented scale’

“The findings are opening up entirely new horizons for the size of systems being analyzed in the field of scalable quantum photonics. This has wider implications, for example, for characterizing photonic quantum computer hardware,” Schapeler continues. Researchers were able to perform their calculations for describing a photon detector within just a few minutes—faster than ever before.

The system also managed to complete calculations involving huge quantities of data extremely quickly. Schapeler states, “This shows the unprecedented scale on which this tool can be used with quantum photonic systems. As far as we know, our work is the first contribution to the field of traditional high-performance computing enabling experimental quantum photonics at large scales.

“This field will become increasingly important when it comes to demonstrating quantum supremacy in quantum photonic experiments—and on a scale that cannot be calculated by conventional means.”

Shaping the future with fundamental research

Schapeler is a doctoral student in the “Mesoscopic Quantum Optics” research group headed by Professor Tim Bartley. This team conducts research into the fundamental physics of the quantum states of light and its applications. These states consist of tens, hundreds or thousands of photons.

“The scale is crucial, as this illustrates the fundamental advantage that quantum systems hold over conventional ones. There is a clear benefit in many areas, including measurement technology, data processing and communications,” Bartley explains.

More information: Timon Schapeler et al, Scalable quantum detector tomography by high-performance computing, Quantum Science and Technology (2024). DOI: 10.1088/2058-9565/ad8511

Journal information: Quantum Science and Technology 

Provided by Universität Paderborn