Is each snowflake really unique? Why is some snow light and fluffy or heavy? The amazing science of snow

If you catch a snowflake, take a moment to look at it: It’s a formation no one has ever seen before. Credit: Damian McCoig/Unsplash

In northern communities, seasonal snow plays a central role in day-to-day activities.

For some, it means a day off from school. For others, it’s a signal that skiing season is starting. Or maybe it’s a harbinger of an extra long commute to work. It’s remarkable how many memories and emotions can be evoked by a few billion tiny ice crystals.

We may see snow as a blanket or drifts across the landscape or our driveway. But when was the last time you took a closer look at snow, and I mean a really close look?

Many a writer has mused about snowflakes as a natural work of art. Here’s a scientific look at the amazing nature of snowflakes and snow.

How do snowflakes form?

While different catalogs will say that there are seven types of snowflakes, or eight, or 35, we are probably most familiar with the classic six-sided dendrite forms, characterized by elaborate and nearly symmetrical branches. You know, the type that you would cut out of a piece of paper.

The dendrite form is a study in water chemistry. When ice forms at the molecular level, the angle between the hydrogen and oxygen atoms will always be 120 degrees; put three of these together to get a full ring of molecules with a six-sided structure. In fact, every time a water molecule attaches itself to this ring, it will do so at the same angle.
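
To make the six-fold geometry concrete, here is a tiny Python sketch (an illustration of the hexagonal symmetry described above, not code from the article): the six main branches of a dendrite end up spaced 60 degrees apart around the crystal's center.

```python
import math

# The six-fold symmetry of the ice lattice means a dendrite's main branches
# point 60 degrees apart. List the six branch directions as unit vectors.
branch_angles_deg = [60 * k for k in range(6)]  # 0, 60, 120, 180, 240, 300
for angle in branch_angles_deg:
    rad = math.radians(angle)
    print(f"{angle:3d} deg -> ({math.cos(rad):+.2f}, {math.sin(rad):+.2f})")
```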

As the snowflake grows, the attachment of water molecules is determined by the temperature and humidity of the air. Since these characteristics don’t change too much at the size of a growing snowflake, those attachments tend to occur evenly across the six points of the hexagonal flake.

Molecule by molecule, the snowflake grows and eventually begins to fall. This takes the snowflake to a new part of the atmosphere, where temperature and humidity are different, resulting in new ice structures forming, but still with the same set of angles.

Video about ice and snow crystal growth with physics professor Ken Libbrecht.

Is each snowflake really unique?

A typical dendrite is made up of about a quintillion (that’s a one with 18 zeroes after it) individual water molecules. Given slight changes in temperature and humidity and the huge number of molecules and bonding opportunities involved, the ice structures created can be incredibly diverse and complicated.
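
A rough back-of-envelope calculation, sketched below in Python, shows why uniqueness is so plausible. The two-configurations-per-molecule figure is a deliberately conservative toy assumption of mine, not a number from the article.

```python
import math

# ~1e18 water molecules per dendrite (figure quoted above).
molecules = 10**18

# Toy assumption: even if each molecule could attach in only 2 distinguishable
# ways, the number of possible growth histories would be 2**(1e18).
digits = molecules * math.log10(2)
print(f"2**(1e18) is a number with about {digits:.1e} digits")  # ~3.0e17 digits

# For comparison, the number of snowflakes estimated to have ever fallen on
# Earth is often put around 1e34 -- a number with only 35 digits.
```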

For this reason, it is entirely likely that no two snowflakes form in exactly the same way, and consequently no two snowflakes are alike.

Twin snowflakes have been grown in a lab, where temperature and humidity are closely controlled, but that’s a bit of a cheat.

Why is some snow light and fluffy and some is heavy?

The story of snow crystal growth doesn’t end high above in the clouds. Once the snowflakes reach the ground and accumulate as a blanket of snow, they begin to change.

Freshly fallen snow tends to be light and fluffy because the flakes take up a lot of space and there is a lot of air between and within them. But over time, they break apart, pack tighter together and the density increases.

This process is known as sintering and is useful for building snow shelters like igloos and quinzees. But some of the most remarkable changes happen at the bottom of the snowpack, where warmth from the ground below and cold from the air above interact.

Through a process of sublimation—water molecules change from ice directly to vapor, skipping the liquid phase—and refreezing, cup-shaped crystals a few centimeters across known as depth hoar can form. Though beautiful to look at, depth hoar has a low density, and when it forms on a steep slope there is a chance the snowpack will slide as an avalanche.

So next time you’re out in the snow, even if you’re grumbling about having to shovel the driveway for the umpteenth time this winter, take a moment to catch a snowflake on your mitten and have a look at it. You’re looking at a formation no one has ever seen before.

Check out physics professor Kenneth Libbrecht’s website for a full description of snowflake forms.

Scientists achieve phonon and photon lasing in optomechanical cavities

Strong photon-phonon coupling and simultaneous lasing of photon and phonon in a one-dimensional optomechanical microcavity. Credit: Kaiyu Cui, Tsinghua University

Since the introduction of the first ruby laser—a solid-state laser that uses the synthetic ruby crystal as its laser medium—in 1960, the use of lasers has expanded significantly in scientific, medical and industrial fields.

With the advancement of science and technology, lasers with extremely narrow linewidths have become the key to research in frontier scientific fields. The much-anticipated gravitational wave detection program Laser Interferometer Gravitational-Wave Observatory (LIGO) in the US, for instance, has extremely strict requirements for the coherence of lasers. While Brillouin lasers have great potential for such applications because of their linewidth narrowing effect, the lasing threshold of on-chip Brillouin lasers is high due to the intrinsic loss of the Brillouin waveguide and the large mode volume.

To circumvent this issue, a team from the Nano-OptoElectronics Lab led by Professor Yidong Huang at Tsinghua University proposed an analogous scattering-based photon lasing phenomenon occurring in an optomechanical microcavity, which can help realize a new on-chip narrow-linewidth laser with a lower lasing threshold.

“The new laser can be achieved from a one-dimensional optomechanical crystal with both photon and phonon excitation within a chip size of just tens of microns by a small pump threshold of only 500 microwatts,” shares Associate Professor Kaiyu Cui, a researcher involved in the study.

The team observed that the linewidth of the new laser was narrowed by four orders of magnitude to 5.4 kHz after phonon lasing at 6.2 GHz. This highly coherent phonon laser has important applications in fields such as high-precision mass sensing, spectral sensing and signal processing. At the same time, the excited photon also exhibits a significant threshold effect, which can be applied in coherent wavelength conversion.
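
The quoted figures are easy to sanity-check; the short Python sketch below simply redoes the arithmetic from the numbers in this article.

```python
f_phonon = 6.2e9     # phonon lasing frequency, Hz (from the article)
linewidth = 5.4e3    # linewidth after lasing, Hz (from the article)
narrowing = 1e4      # "four orders of magnitude"

# Implied linewidth before narrowing:
print(f"before lasing: ~{linewidth * narrowing / 1e6:.0f} MHz")   # ~54 MHz
# Effective quality factor of the lasing line:
print(f"Q = f/delta_f ~ {f_phonon / linewidth:.1e}")              # ~1.1e6
```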

Notably, achieving simultaneous photon and phonon lasing in one-dimensional optomechanical crystals is no easy feat. Periodically aligned nanostructures are needed to confine both light and mechanical waves in a very small volume by a physical mechanism known as defect modes. Only then can the localized photons and phonons within the microcavity undergo strong energy coupling, in turn enabling coherent lasing at very low pump power.

Despite these challenges, the team successfully fabricated one-dimensional optomechanical crystals on a silicon chip using electron-beam lithography. When the incident pump power exceeded the threshold, significant lasing was observed on the spectrometer. Indeed, the experimental results matched theoretical expectations.

The researchers published their latest findings, which could pave the way for silicon-based photonic and phononic lasers to fulfill the urgent need for new laser technologies, in the journal Fundamental Research.

“In optomechanical crystals, nonlinear equations can be used to describe the behavior of photons and phonons. Since nonlinear systems cannot be solved analytically in general, most previous studies have been conducted based on linearized equations,” explains Prof. Huang.

“Based on our findings, we propose that the nonlinear equations can be analyzed directly by means of limit-cycle theory, which gives the first analytical formulation of the laser linewidth under the effect of phase noise.”

More information: Jian Xiong et al, Phonon and photon lasing dynamics in optomechanical cavities, Fundamental Research (2022). DOI: 10.1016/j.fmre.2022.10.008

Provided by KeAi Communications Co., Ltd.

New techniques for accurate measurements of tiny quantum objects

Experimental implementation of optimal collective measurements using quantum computers. a,b, Probe states are sent to the quantum computers (QC) individually for the single-copy measurement (a) and in pairs for the two-copy measurement (b). c,d, The qubit probes experience rotations, θx and θy, about the x and y axes of the Bloch sphere (c) before undergoing decoherence that has the effect of shrinking the Bloch vector (d). This rotation can be thought of as being caused by an external magnetic field that we wish to sense. e,f, The QCs then implement quantum circuits corresponding to the optimal single-copy (e) and two-copy (f) measurements. Two optimal single-copy circuits are shown, one for estimating θx and one for θy. g, Finally, error mitigation is used to improve the accuracy of the estimated angle. We create a model (green line) for how the noisy estimate of θ, θ̂_noisy (black dots), is related to the true value (red line). The model is then used to correct θ̂_noisy to produce the final estimate θ̂. Sample data from the F-IBM QS1 device downsampled by a factor of three are shown in g. Error bars are smaller than the markers. Credit: Nature Physics (2023). DOI: 10.1038/s41567-022-01875-7

New research led by a team of scientists at The Australian National University (ANU) has outlined a way to achieve more accurate measurements of microscopic objects using quantum computers—a step that could prove useful in a huge range of next-generation technologies, including biomedical sensing.

Examining the various individual properties of a large everyday object like a car is fairly simple: a car has a well-defined position, color and speed. However, this becomes much trickier when trying to examine microscopic quantum objects like photons—tiny little particles of light.

That’s because certain properties of quantum objects are connected, and measuring one property can disturb another property. For example, measuring the position of an electron will affect its speed and vice versa.

Such properties are called conjugate properties. This is a direct manifestation of Heisenberg’s famous uncertainty principle—it is not possible to simultaneously measure two conjugate properties of a quantum object with arbitrary accuracy.
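
To see the scale of this trade-off, here is a worked example using textbook constants (my illustration, not a calculation from the study): pinning down an electron's position to about an atom's width forces a large uncertainty in its speed.

```python
hbar = 1.054571817e-34   # reduced Planck constant, J*s
m_e = 9.1093837015e-31   # electron mass, kg

dx = 1e-10               # position known to ~1 angstrom (about an atom's width)
dp_min = hbar / (2 * dx) # Heisenberg bound: dx * dp >= hbar / 2
print(f"minimum speed uncertainty ~ {dp_min / m_e:.1e} m/s")  # ~5.8e5 m/s
```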

According to lead author and ANU Ph.D. researcher Lorcán Conlon, this is one of the defining challenges of quantum mechanics.

“We were able to design a measurement to determine conjugate properties of quantum objects more accurately. Remarkably, our collaborators were able to implement this measurement in various labs around the world,” Conlon said.

“More accurate measurements are crucial, and can in turn open up new possibilities for all sorts of technologies, including biomedical sensing, laser ranging, and quantum communications.”

The new technique revolves around a strange quirk of quantum systems, known as entanglement. According to the researchers, by entangling two identical quantum objects and measuring them together, scientists can determine their properties more precisely than if they were measured individually.

“By entangling two identical quantum systems, we can acquire more information,” co-author Dr. Syed Assad said. “There is some unavoidable noise associated with measuring any property of a quantum system. By entangling the two, we’re able to reduce this noise and get a more accurate measurement.”
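
As a baseline for what entanglement improves on, the sketch below (a toy single-copy protocol of my own, not the paper's collective measurement) estimates a rotation angle from repeated measurements of individual qubits. Its error shrinks like 1/sqrt(n), the limit the two-copy measurements are designed to beat in noisy settings.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 0.4                        # true rotation angle (radians)
p1 = np.sin(theta / 2) ** 2        # P(|1>) after rotating |0> about the x axis

for n in (100, 10_000, 1_000_000):
    k = rng.binomial(n, p1)                     # counts of |1> outcomes
    theta_hat = 2 * np.arcsin(np.sqrt(k / n))   # invert p = sin^2(theta/2)
    print(f"n = {n:>9,}: theta_hat = {theta_hat:.4f}")
# The estimate tightens like 1/sqrt(n); measuring entangled pairs jointly can
# extract more information per probe than this single-copy strategy.
```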

In theory, it is possible to entangle and measure three or more quantum systems to achieve even better precision, but in this case the experiments failed to agree with the theory. Nevertheless, the authors are confident that future quantum computers will be able to overcome these limitations.

“Quantum computers with error-corrected qubits will be able to gainfully measure with more and more copies in the future,” Conlon said.

According to Professor Ping Koy Lam, A*STAR chief quantum scientist at the Institute of Materials Research and Engineering (IMRE), one of the key strengths of this work is that a quantum enhancement can still be observed in noisy scenarios.

“For practical applications, such as in biomedical measurements, it is important that we can see an advantage even when the signal is inevitably embedded in a noisy real-world environment,” he said.

The study was conducted by experts at the ARC Centre of Excellence for Quantum Computation and Communication Technology (CQC2T), in collaboration with researchers from A*STAR’s Institute of Materials Research and Engineering (IMRE), the University of Jena, the University of Innsbruck, and Macquarie University. Amazon Web Services collaborated by providing research and architectural support, and by making available the Rigetti Aspen-9 device using Amazon Braket.

The researchers tested their theory on 19 different quantum computers across three different platforms: superconducting, trapped-ion and photonic quantum computers. These world-leading devices are located across Europe and America and are cloud-accessible, allowing researchers from across the globe to connect and carry out important research.

The research has been published in Nature Physics.

More information: Lorcán O. Conlon et al, Approaching optimal entangling collective measurements on quantum computing platforms, Nature Physics (2023). DOI: 10.1038/s41567-022-01875-7

Journal information: Nature Physics 

Provided by Australian National University 

Laser-controlled synthetic microswimmers show swarm intelligence can be caused by physical mechanisms

Experimental realization. A Particles used in the experiments consist of a melamine resin colloid (2.18 μm in diameter) with 8 nm gold nanoparticles scattered across the surface (covering up to 10% of the total surface area). A 532 nm laser focused at the edge of the particle at a distance d from its center induces a self-thermophoretic motion and allows for precise control of the propulsion direction. Importantly, optical forces are weak so the particles exhibit a truly self-phoretic autonomous motility, making them proper microswimmers. B Experimental setup used to image the particles by darkfield microscopy (LED, darkfield condenser, and camera) and guide their motion by sequential beam steering of the laser on the sample plane with a two-axis acousto-optic deflector (AOD). All particles in the field of view are addressed during each exposure period of the camera. C The interaction rule for the delayed attraction of a single active particle (white sphere) towards a target (red sphere) is split into an observation made at a time t − δt that sets the direction of motion for the self-propulsion step exerted after a programmed delay time δt. The green arrows indicate the planned motion −r(t − δt) and its actual realization at time t. D Examples of darkfield microscopy images where a single active particle (top) and 16 active particles (bottom) interact with one target particle (red). Credit: Nature Communications (2023). DOI: 10.1038/s41467-022-35427-7

Seemingly spontaneously coordinated swarm behavior exhibited by large groups of animals is a fascinating and striking collective phenomenon. Experiments conducted by researchers at Leipzig University on laser-controlled synthetic microswimmers now show that supposed swarm intelligence can sometimes also be the result of simple and generic physical mechanisms.

A team of physicists led by Professor Frank Cichos and Professor Klaus Kroy found that swarms of synthetically produced Brownian microswimmers appear to spontaneously decide to orbit their target point instead of heading for it directly. They have just published their findings in the renowned journal Nature Communications.

“Scientific research on herd and flock behavior is usually based on field observations. In such cases, it is usually difficult to reliably record the internal states of the herd animals,” Kroy said. As a result, the interpretation of observations frequently relies on plausible assumptions as to which individual behavioral rules are necessary for the complex collective groups under observation.

Researchers at Leipzig University therefore developed an experimental model system of microswimmers that elicits properties of natural swarm intelligence and provides complete control over the individuals’ internal states, strategies, and transformation of signal perception into a navigational reaction.

Thanks to a sophisticated laser heating system (see image), the colloidal swimmers, which are visible only under the microscope, can actively self-propel in a water container by a kind of “thermophoretic self-propulsion,” while their motion is continually and randomly disturbed by Brownian motion.

“Apart from Brownian random motion, which is ubiquitous in microphysics, the experimental set-up provides complete control over the physical parameters and navigation rules of the individual colloidal swimmers and allows long-term observations of swarms of variable sizes,” Cichos said.

According to Cichos, when just a very simple and generic navigation rule is followed identically by all of the swimmers, a surprisingly complex swarm behavior results. For example, if the swimmers all aim at the same fixed point, instead of gathering there they can form a kind of carousel. Similar to satellites or atomic electrons, the swimmers then orbit their attractive center on circular paths of varying heights.

The only “intelligent” behavioral rule required for this is that the self-propulsion responds to environmental perception with a certain time delay, which usually occurs in natural swarm phenomena from mosquito dances to road traffic anyway. It turns out that such a “delayed” effect alone is sufficient to form complex dynamic patterns such as the carousel described above.

“Physically speaking, each individual swimmer can spontaneously break the radial symmetry of the system and go into circular motion if the product of the delayed time and swimming speed is large enough,” Kroy said. In contrast, the orbits of larger swarms and their synchronization and stabilization depend on additional details such as the steric, phoretic and hydrodynamic interactions between the individual swimmers.
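
The navigation rule is simple enough to simulate in a few lines. The sketch below is a toy model of my own based on the description above (parameter values are arbitrary assumptions, not the authors' code): a single noisy swimmer that always heads toward where it saw the target, at the origin, one delay time ago.

```python
import numpy as np

rng = np.random.default_rng(1)
v, dt, delay = 1.0, 0.01, 1.0          # speed, time step, sensing delay
lag = int(delay / dt)                  # delay measured in integration steps
steps = 20_000
pos = np.zeros((steps, 2))
pos[0] = (3.0, 0.0)                    # start away from the target

for t in range(1, steps):
    seen = pos[max(t - 1 - lag, 0)]    # where the swimmer was one delay ago
    heading = -seen / (np.linalg.norm(seen) + 1e-12)    # aim at the old view
    kick = rng.normal(scale=0.05, size=2) * np.sqrt(dt) # weak Brownian noise
    pos[t] = pos[t - 1] + v * heading * dt + kick

radius = np.linalg.norm(pos[steps // 2 :], axis=1).mean()
print(f"late-time orbit radius ~ {radius:.2f}  (v * delay = {v * delay})")
# Instead of parking at the origin, the swimmer settles onto a circular orbit
# whose radius scales with v * delay -- the carousel described above.
```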

Since all signal-response interactions in the living world occur in a time-delayed manner, these findings should also further the understanding of dynamic pattern formation in natural swarm ensembles. The researchers deliberately chose primitive and uniform navigation rules for their experiment. This allowed them to develop a stringent mathematical description of the observed phenomena.

In the analysis of the delayed stochastic differential equations used for this purpose, the delay-induced effective synchronization of the swimmers with their own past turned out to be the key mechanism for the spontaneous circular motion. To a large extent, the theory allows us to mathematically predict the experimental observations.

“All in all, we have succeeded in creating a laboratory for swarms of Brownian microswimmers. This can serve as a building block for future systematic studies of increasingly complex and possibly still unknown swarm behavior, and it may also explain why puppies often circle their food bowl when they are being fed,” Cichos said.

More information: Xiangzun Wang et al, Spontaneous vortex formation by microswimmers with retarded attractions, Nature Communications (2023). DOI: 10.1038/s41467-022-35427-7

Journal information: Nature Communications 

Provided by Leipzig University 

Nuclear reactor experiment rules out one dark matter hope

Dark matter makes up more than a quarter of the universe, but remains shrouded in mystery.

It was an anomaly detected in the particle storm of a nuclear reactor, so puzzling that physicists hoped it would shine a light on dark matter, one of the universe’s greatest mysteries.

However, new research has definitively ruled out the possibility that this strange measurement signaled the existence of a “sterile neutrino,” a hypothetical particle that has long eluded scientists.

Neutrinos are sometimes called “ghost particles” because they barely interact with other matter—around 100 trillion are estimated to pass through our bodies every second.

Since neutrinos were first theorized in 1930, scientists have been trying to nail down the properties of these shape-shifters, which are one of the most common particles in the universe.

They appear “when the nature of the nucleus of an atom has been changed”, physicist David Lhuillier of France’s Atomic Energy Commission told AFP.

That could happen when nuclei come together in the furious fusion at the heart of stars like our Sun, or are broken apart in nuclear reactors, he said.

There are three confirmed flavors of neutrino: electron, muon and tau.

However, physicists suspect there could be a fourth neutrino, dubbed “sterile” because it does not interact with ordinary matter at all.

In theory, it would answer only to gravity and not to the weak fundamental force, which still holds sway over the other neutrinos.

The sterile neutrino has a place ready for it in theoretical physics, “but there has not yet been a clear demonstration that it exists,” he added.

Dark matter candidate

So Lhuillier and the rest of the STEREO collaboration, which brings together French and German scientists, set out to find it.

Previous nuclear reactor measurements had found fewer neutrinos than the amount expected by theoretical models, a phenomenon dubbed the “reactor antineutrino anomaly”.

It was suggested that the missing neutrinos had changed into the sterile kind, offering a rare chance to prove their existence.

To find out, the STEREO collaboration installed a dedicated detector a few meters away from a nuclear reactor used for research at the Laue–Langevin institute in Grenoble, France.

After four years of observing more than 100,000 neutrinos and two years analyzing the data, the verdict was published in the journal Nature on Wednesday.

The anomaly “cannot be explained by sterile neutrinos,” Lhuillier said.

But that “does not mean there are none in the universe”, he added.

The experiment found that previous predictions of the number of neutrinos being produced were incorrect.

But it was not a total loss, offering a much clearer picture of neutrinos emitted by nuclear reactors.

This could help not just with future research, but also for monitoring nuclear reactors.

Meanwhile, the search for the sterile neutrino continues. Particle accelerators, which smash atoms, could offer up new leads.

Despite the setback, interest could remain high because sterile neutrinos have been considered a suspect for dark matter, which makes up more than a quarter of the universe but remains shrouded in mystery.

Like dark matter, the sterile neutrino does not interact with ordinary matter, making it incredibly difficult to observe.

“It would be a candidate which would explain why we see the effects of dark matter—and why we cannot see dark matter,” Lhuillier said.

More information: David Lhuillier, STEREO neutrino spectrum of 235U fission rejects sterile neutrino hypothesis, Nature (2023). DOI: 10.1038/s41586-022-05568-2

Journal information: Nature 

© 2023 AFP

Physicists confirm effective wave growth theory in space

Whistler-mode wave magnetic field (blue arrows with spiral) propagating along the magnetic field (purple) interacting with electrons (red) passing through it. Credit: University of Tokyo

A team from Nagoya University in Japan has observed, for the first time, energy transferring from resonant electrons to whistler-mode waves in space. Their findings offer direct evidence of the previously theorized efficient wave growth predicted by non-linear growth theory. This should improve our understanding of not only space plasma physics but also space weather, a phenomenon that affects satellites.

When people imagine outer space, they often envision it as a perfect vacuum. In fact, this impression is wrong, because space is filled with charged particles. In the depths of space, the density of charged particles becomes so low that they rarely collide with each other.

Instead of collisions, the forces related to the electric and magnetic fields that fill space control the motion of charged particles. This lack of collisions holds throughout space, except very near celestial objects such as stars, moons, or planets. In those cases, the charged particles are no longer traveling through the vacuum of space but through a medium where they can strike other particles.

Around the Earth, these charged-particle interactions generate waves, including electromagnetic whistler-mode waves, which scatter and accelerate some of the charged particles. When diffuse auroras appear around the poles of planets, observers are seeing the results of an interaction between waves and electrons. Since electromagnetic fields are so important in space weather, studying these interactions should help scientists predict variations in the intensity of highly energetic particles. This might help protect astronauts and satellites from the most severe effects of space weather.

A team comprising Designated Assistant Professor Naritoshi Kitamura and Professor Yoshizumi Miyoshi of the Institute for Space-Earth Environmental Research (ISEE) at Nagoya University, together with researchers from the University of Tokyo, Kyoto University, Tohoku University, Osaka University, the Japan Aerospace Exploration Agency (JAXA), and several international collaborators, mainly used data obtained with low-energy electron spectrometers, called Fast Plasma Investigation-Dual Electron Spectrometers, on board NASA’s Magnetospheric Multiscale spacecraft.

They analyzed interactions between electrons and whistler-mode waves, which were also measured by the spacecraft. By applying a wave-particle interaction analyzer method, they succeeded in directly detecting the ongoing energy transfer from resonant electrons to whistler-mode waves at the location of the spacecraft. From this, they derived the growth rate of the wave. The researchers published their results in Nature Communications.

The most important finding was that the observed results were consistent with the hypothesis that non-linear growth occurs in this interaction. “This is the first time anybody has directly observed the efficient growth of waves in space for the wave-particle interaction between electrons and whistler-mode waves,” explains Kitamura.

“We expect that the results will contribute to research on various wave-particle interactions and improve our understanding of the progress of plasma physics research. More specifically, the results will contribute to our understanding of the acceleration of electrons to high energies in the radiation belt, which are sometimes called ‘killer electrons’ because they inflict damage on satellites, as well as the loss of high-energy electrons into the atmosphere, which produces diffuse auroras.”

More information: N. Kitamura et al, Direct observations of energy transfer from resonant electrons to whistler-mode waves in magnetosheath of Earth, Nature Communications (2022). DOI: 10.1038/s41467-022-33604-2

Journal information: Nature Communications 

Provided by Nagoya University 

Physicists find that organelles grow in random bursts

Research from Washington University in St. Louis suggests that organelles in the eukaryotic cell grow in random bursts from a limiting pool of building blocks. Credit: Shutterstock

Eukaryotic cells—the ones that make up most life as we know it, including all animals, plants and fungi—are highly structured objects.

These cells assemble and maintain their own smaller, internal bits: the membrane-bound organelles like nuclei, which store genetic information, or mitochondria, which produce chemical energy. But much remains to be learned about how they organize themselves into these spatial compartments.

Physicists at Washington University in St. Louis conducted new experiments that show that eukaryotic cells can robustly control average fluctuations in organelle size. By demonstrating that organelle sizes obey a universal scaling relationship that the scientists predict theoretically, their new framework suggests that organelles grow in random bursts from a limiting pool of building blocks.

The study was published Jan. 6 in Physical Review Letters.

“In our work, we suggest that the steps by which organelles are grown—far from being an orderly ‘brick-by-brick’ assembly—occur in stochastic bursts,” said Shankar Mukherji, assistant professor of physics in Arts & Sciences.

“Such bursts fundamentally limit the precision with which organelle size is controlled but also maintain noise in organelle size within a narrow window,” Mukherji said. “Burstlike growth provides a general biophysical mechanism by which cells can maintain, on average, reliable yet plastic organelle sizes.”
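
The flavor of that argument can be captured with a toy simulation (my own sketch, under simple assumptions: Poisson-timed bursts with exponentially distributed sizes, not necessarily the paper's exact model): the fluctuation floor is set by the number of bursts, not the number of building blocks.

```python
import numpy as np

rng = np.random.default_rng(2)
n_cells, n_bursts, mean_burst = 10_000, 50, 1.0

# Each "cell" grows its organelle through 50 random bursts of material.
sizes = rng.exponential(mean_burst, size=(n_cells, n_bursts)).sum(axis=1)

cv = sizes.std() / sizes.mean()        # coefficient of variation (noise)
print(f"mean size = {sizes.mean():.1f}, CV = {cv:.3f}")
# CV ~ 1/sqrt(n_bursts) ~ 0.14: a noise floor set by burst statistics that
# orderly brick-by-brick assembly (one monomer at a time) would not have.
```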

Organelles must be flexible enough to allow cells to grow or shrink them as environments demand. Still, the size of organelles must be maintained within certain limits. Biologists have previously identified certain molecular factors that regulate organelle sizes, but this study provides new insights into the quantitative principles underlying organelle size control.

While this study used budding yeast as a model organism, the team is excited to explore how these assembly mechanisms are utilized across different species and cell types. Mukherji said that they plan to examine what these patterns of robustness can teach us about how to harness organelle assembly for bioengineering applications and how to spot defects in organelle biogenesis in the context of disease.

“The pattern of organelle size robustness is shared between budding yeast and human iPS cells,” Mukherji said. “The underlying molecular mechanisms producing these bursts are yet to be fully elucidated and are likely to be organelle-specific and potentially species-specific.”

More information: Kiandokht Panjtan Amiri et al, Robustness and Universality in Organelle Size Control, Physical Review Letters (2023). DOI: 10.1103/PhysRevLett.130.018401

Journal information: Physical Review Letters 

Provided by Washington University in St. Louis 

International fusion energy project faces delays, says chief

ITER, under construction at Saint-Paul-les-Durance in southern France, aims at emulating the Sun, which fuses particles together to release energy.

An international project in nuclear fusion may face “years” of delays, its boss has told AFP, weeks after scientists in the United States announced a breakthrough in their own quest for the coveted goal.

The International Thermonuclear Experimental Reactor (ITER) project seeks to prove the feasibility of fusion as a large-scale and carbon-free source of energy.

Installed at a site in southern France, the decades-old initiative has a long history of technical challenges and cost overruns.

Fusion entails forcing together the nuclei of light atomic elements in a super-heated plasma, held by powerful magnetic forces in a doughnut-shaped chamber called a tokamak.

The idea is that fusing the particles together from isotopes of hydrogen—which can be extracted from seawater—will create a safer and almost inexhaustible form of energy compared with splitting atoms from uranium or plutonium.
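
The energy bookkeeping behind that claim is standard nuclear physics rather than anything specific to ITER; the short Python sketch below computes the energy released by the deuterium-tritium reaction from tabulated atomic masses.

```python
# D + T -> He-4 + n, energy from the mass defect (standard tabulated values)
u_to_MeV = 931.494                     # energy equivalent of 1 atomic mass unit
m_D, m_T = 2.014102, 3.016049          # deuterium, tritium (u)
m_He4, m_n = 4.002602, 1.008665        # helium-4, neutron (u)

q = (m_D + m_T - m_He4 - m_n) * u_to_MeV
print(f"energy per D-T fusion ~ {q:.1f} MeV")   # ~17.6 MeV
```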

ITER’s previously stated goal was to create the plasma by 2025.

But that deadline will have to be postponed, Pietro Barabaschi—who in September became the project’s director-general—told AFP during a visit to the facility.

The date “wasn’t realistic in the first place,” even before two major problems surfaced, Barabaschi said.

One problem, he said, was wrong sizes for the joints of blocks to be welded together for the installation’s 19-by-11-metre (62-by-36-feet) chamber.

The second was traces of corrosion in a thermal shield designed to protect the outside world from the enormous heat created during nuclear fusion.

Fixing the problems “is not a question of weeks, but months, even years,” Barabaschi said.

A new timetable is to be worked out by the end of this year, he said, including some modifications to contain the expected cost overrun and to meet the French nuclear safety agency’s security requirements.

Barabaschi said he hoped ITER would be able to make up for the delays as it prepares to enter the full phase, currently scheduled for 2035.

On December 13, US researchers working separately from ITER announced an important technical breakthrough.

Scientists at the Lawrence Livermore National Laboratory (LLNL) in California said they had used the world’s largest laser to create, for the first time, a fusion reaction generating more energy than it took to produce.

“Some competition is healthy in any environment,” Barabaschi said about the success.

“If tomorrow somebody found another breakthrough that would make my work redundant, I would be very happy,” he added.

ITER was set in motion after a 1985 summit between US president Ronald Reagan and Soviet leader Mikhail Gorbachev.

Its seven partners are China, the European Union, India, Japan, South Korea, Russia and the United States.

Russia still participates in ITER despite the start of the Ukraine conflict.

In November it dispatched one of six giant magnets needed for the top part of the tokamak.

© 2023 AFP

Cooling 100 million degree plasma with a hydrogen-neon mixture ice pellet

Plasmoid behavior of pure hydrogen and hydrogen mixed with 5% neon. In this experiment, a new Thomson scattering (TS) diagnostic system operating at an unprecedented rate of 20 kHz was used to (i) measure the density of the plasmoid at the moment it passed through the observation region, and (ii) identify its position, which verified the theoretical predictions. Credit: National Institute for Fusion Science

At ITER—the world’s largest experimental fusion reactor, currently under construction in France through international cooperation—the abrupt termination of magnetic confinement of a high-temperature plasma through a so-called “disruption” poses a major open issue. As a countermeasure, disruption mitigation techniques, which make it possible to forcibly cool the plasma when signs of plasma instabilities are detected, are a subject of intensive research worldwide.

Now, a team of Japanese researchers from the National Institutes for Quantum Science and Technology (QST) and the National Institute for Fusion Science (NIFS) of the National Institutes of Natural Sciences (NINS) has found that by adding approximately 5% neon to a hydrogen ice pellet, it is possible to cool the plasma more deeply below its surface, and hence more effectively, than when pure hydrogen ice pellets are injected.

Using theoretical models and experimental measurements with advanced diagnostics at the Large Helical Device owned by NIFS, the researchers clarified the dynamics of the dense plasmoid that forms around the ice pellet and identified the physical mechanisms responsible for the successful enhancement of the performance of the forced cooling system, which is indispensable for carrying out the experiments at ITER. These results will contribute to the establishment of plasma control technologies for future fusion reactors. The team’s report was made available online in Physical Review Letters.

The construction of the world’s largest experimental fusion reactor, ITER, is underway in France through international cooperation. At ITER, experiments will be conducted to generate 500 MW of fusion power by maintaining the “burning state” of the hydrogen isotope plasma at more than 100 million degrees. One of the major obstacles to the success of those experiments is a phenomenon called “disruption,” during which the magnetic field configuration used to confine the plasma collapses due to magnetohydrodynamic instabilities.

Disruption causes the high-temperature plasma to flow into the inner surface of the containment vessel, resulting in structural damage that, in turn, may cause delays in the experimental schedule and higher costs. Although the machine and the operating conditions of ITER have been carefully designed to avoid disruption, uncertainties remain, so a dedicated machine protection strategy is required as a safeguard for a number of experiments.

A promising solution to this problem is a technique called “disruption mitigation,” which forcibly cools the plasma at the stage where first signs of instabilities that may cause a disruption are detected, thereby preventing damage to plasma-facing material components. As a baseline strategy, researchers are developing a method using ice pellets of hydrogen frozen at temperatures below 10 Kelvin and injecting them into the high-temperature plasma.

The injected ice melts from the surface and evaporates and ionizes owing to heating by the ambient high-temperature plasma, forming a layer of low-temperature, high-density plasma (hereafter referred to as a “plasmoid”) around the ice. Such a low-temperature, high-density plasmoid mixes with the main plasma, whose temperature is reduced in the process. However, in recent experiments, it has become clear that when pure hydrogen ice is used, the plasmoid is ejected before it can mix with the target plasma, making it ineffective for cooling the high-temperature plasma deeper below the surface.

This ejection was attributed to the high pressure of the plasmoid. Qualitatively, a plasma confined in a donut-shaped magnetic field tends to expand outward in proportion to the pressure. Plasmoids, which are formed by the melting and the ionization of hydrogen ice, are cold but very dense. Because temperature equilibration is much faster than density equilibration, the plasmoid pressure rises above that of the hot target plasma. The consequence is that the plasmoid becomes polarized and experiences drift motion across the magnetic field, so that it propagates outward before being able to fully mix with the hot target plasma.

A solution to this problem was proposed from theoretical analysis: model calculations predicted that by mixing a small amount of neon into hydrogen, the pressure of the plasmoid could be reduced. Neon freezes at a temperature of approximately 20 Kelvin and produces strong line radiation in the plasmoid. Therefore, if the neon is mixed with hydrogen ice before injection, part of the heating energy can be emitted as photon energy.

To demonstrate such a beneficial effect of using a hydrogen-neon mixture, a series of experiments was conducted in the Large Helical Device (LHD) located in Toki, Japan. For many years, the LHD has reliably operated a device called the “solid hydrogen pellet injector,” which injects ice pellets with a diameter of approximately 3 mm at a speed of 1100 m/s. Owing to the system’s high reliability, it is possible to inject hydrogen ice into the plasma with a temporal precision of 1 ms, which allows measurement of the plasma temperature and density just after the injected ice melts.

Recently, the world’s highest time resolution for Thomson scattering (TS), 20 kHz, was achieved in the LHD system using new laser technology. Using this system, the research team captured the evolution of plasmoids. They found that, as predicted by theoretical calculations, plasmoid ejection was suppressed when the hydrogen ice was doped with approximately 5% neon, in stark contrast to the case where pure hydrogen ice was injected. In addition, the experiments confirmed that the neon plays a useful role in the effective cooling of the plasma.

The results of this study show for the first time that the injection of hydrogen ice pellets doped with a small amount of neon into a high-temperature plasma is useful to effectively cool the deep core region of the plasma by suppressing plasmoid ejection. This effect of neon doping is not only interesting as a new experimental phenomenon, but also supports the development of the baseline strategy of disruption mitigation in ITER. The design review of the ITER disruption mitigation system is scheduled for 2023, and the present results will help improve the performance of the system.

More information: A. Matsuyama et al, Enhanced Material Assimilation in a Toroidal Plasma Using Mixed H2+Ne Pellet Injection and Implications to ITER, Physical Review Letters (2022). DOI: 10.1103/PhysRevLett.129.255001

Journal information: Physical Review Letters 

Provided by National Institutes of Natural Sciences

Chip circuit for light could be applied to quantum computations

Future versions of the new photonic circuits will feature low-loss waveguides—the channels through which the single photons travel—some 3 meters long but tightly coiled to fit on a chip. The long waveguides will allow researchers to more precisely choose the time intervals (Δt) when photons exit different channels to rendezvous at a particular location. Credit: NIST

The ability to transmit and manipulate, with minimal loss, the smallest unit of light—the photon—plays a pivotal role in optical communications as well as designs for quantum computers that would use light rather than electric charges to store and carry information.

Now, researchers at the National Institute of Standards and Technology (NIST) and their colleagues have connected, on a single microchip, quantum dots—artificial atoms that generate individual photons rapidly and on-demand when illuminated by a laser—with miniature circuits that can guide the light without significant loss of intensity.

To create the ultra-low-loss circuits, the researchers fabricated silicon-nitride waveguides—the channels through which the photons traveled—and buried them in silicon dioxide. The channels were wide but shallow, a geometry that reduced the likelihood that photons would scatter out of the waveguides. Encapsulating the waveguides in silicon dioxide also helped to reduce scattering.

The scientists reported that their prototype circuits have a loss of intensity equal to only one percent of similar circuits—also using quantum dots—that were fabricated by other teams.

Ultimately, devices that incorporate this new chip technology could take advantage of the strange properties of quantum mechanics to perform complex computations that classical (non-quantum) circuits may not be capable of doing.

Illustration shows some of the steps in creating the new ultra-low-loss photonic circuit on a chip. A microprobe lifts a gallium arsenide device containing a quantum dot—an artificial atom that generates single photons—from one chip. Then the probe places the quantum-dot device atop a low-loss silicon-nitride waveguide built on another chip. Credit: S. Kelley/NIST

For instance, according to the laws of quantum mechanics, a single photon has a probability of residing in two different places, such as two different waveguides, at the same time. Those probabilities can be used to store information; an individual photon can act as a quantum bit, or qubit, which carries much more information than the binary bit of a classical computer, which is limited to a value of 0 or 1.

To perform operations necessary to solve computational problems, these photon qubits—all of which travel at the same speed and are indistinguishable from each other—must simultaneously arrive at specific processing nodes in the circuit. That poses a challenge because photons originating from different locations—and traveling along different waveguides—across the circuit may lie at significantly different distances from processing points. To ensure simultaneous arrival, photons emitted closer to the designated destination must delay their journey, giving those that lie in more distant waveguides a head start.

The circuit devised by NIST researchers including Ashish Chanana and Marcelo Davanco, along with an international team of colleagues, allows for significant time delays because it employs waveguides of various lengths that can store photons for relatively long periods of time. For instance, the researchers calculate that a 3-meter-long waveguide (tightly coiled so its diameter on a chip is only a few millimeters) would have a 50 percent probability of transmitting a photon with a time delay of 20 nanoseconds (billionths of a second). By comparison, previous devices, developed by other teams and operating under similar conditions, were limited to inducing time delays only one one-hundredth as long.
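
Those figures are mutually consistent, as the quick Python check below shows; the group index value is my assumption for a silicon-nitride waveguide, not a number from the article.

```python
import math

c = 2.998e8      # speed of light in vacuum, m/s
n_g = 2.0        # assumed group index of a silicon-nitride waveguide
length = 3.0     # coiled waveguide length, m (from the article)

delay = length * n_g / c
print(f"delay ~ {delay * 1e9:.0f} ns")                 # ~20 ns

# 50% transmission over 3 m implies a propagation loss of about:
loss_db_per_m = -10 * math.log10(0.5) / length
print(f"loss ~ {loss_db_per_m:.1f} dB/m")              # ~1 dB/m
```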

The longer delay times achieved with the new circuit are also important for operations in which photons from one or more quantum dots need to arrive at a specific location at equally spaced time intervals. In addition, the low-loss quantum-dot circuit could dramatically increase the number of single photons available for carrying quantum information on a chip, enabling larger, speedier, and more reliable computational and information-processing systems.

The scientists, who include researchers from the University of California, Santa Barbara (UCSB), the Massachusetts Institute of Technology (MIT), the Korea Institute of Science and Technology and the University of São Paulo in Brazil, reported their findings December 11 in Nature Communications.

Chip circuit for light could be applied to quantum computations
Laser light shining on the quantum dots triggers them to produce a series of single photons that travel through the silicon nitride waveguide. Credit: S. Kelley/NIST

The hybrid circuit consists of two components, each initially built on a separate chip. One, a gallium arsenide semiconductor device designed and fabricated at NIST, hosts the quantum dots and directly funnels the single photons they generate into a second device—a low-loss silicon nitride waveguide developed at UCSB.

To marry the two components, researchers at MIT first used the fine metal tip of a pick-and-place microprobe, acting like a miniature crowbar, to pry the gallium arsenide device from the chip built at NIST. They then placed it atop the silicon nitride circuit on the other chip.

The researchers face several challenges before the hybrid circuit can be routinely employed in a photonic device. At present, only about 6 percent of the individual photons generated by the quantum dots can be funneled into the circuit. However, simulations suggest that if the team changes the angle at which the photons are funneled, in tandem with improvements in the positioning and orientation of the quantum dots, the rate could rise above 80 percent.

Another issue is that the quantum dots do not always emit single photons at exactly the same wavelength, a requirement for creating the indistinguishable photons necessary for the quantum computational operations. The team is exploring several strategies, including applying a constant electric field to the dots, that may alleviate that problem.

More information: Ashish Chanana et al, Ultra-low loss quantum photonic circuits integrated with single quantum emitters, Nature Communications (2022). DOI: 10.1038/s41467-022-35332-z

Journal information: Nature Communications 

Provided by National Institute of Standards and Technology