Computing At Light Speed: New System Performs AI Calculations In Single Flash

Laser light computing could significantly reduce AI energy consumption. (Credit: Summit Art Creations on Shutterstock)

Researchers have built a computer that performs complex AI calculations in a single pass of light, completing what today’s fastest AI chips need multiple steps to accomplish. The breakthrough promises substantial gains in parallelism and energy efficiency for AI computations.

The system, called parallel optical matrix-matrix multiplication, or POMMM, performs complex mathematical operations by encoding data into laser beams and letting physics do the work. Published in Nature Photonics, the technology executes an entire matrix multiplication (the core calculation in AI neural networks) through a single propagation of coherent light. No waiting for sequential processing. Just light passing through optical elements, minimizing data movement during the core computation.

Why Light Beats Electronics for AI Calculations

The research team from Shanghai Jiao Tong University, Aalto University and the Chinese Academy of Sciences notes that current optical computing methods struggle with tensor-based tasks because they require multiple light propagations for each operation. POMMM collapses that entire sequence into a single instant.

When a GPU multiplies two matrices together, it performs thousands or millions of individual calculations in sequence. It reads values from memory, multiplies them, adds results and writes back to storage. Each step takes time. Each data movement consumes energy.
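That sequence of read-multiply-accumulate steps can be sketched in a few lines. This is a generic illustration of electronic matrix multiplication, not code from the paper; the matrix sizes are arbitrary:

```python
import numpy as np

A = np.arange(6.0).reshape(2, 3)   # 2x3 input matrix
B = np.arange(12.0).reshape(3, 4)  # 3x4 input matrix

# Electronic matmul: every output element is built from a loop of
# read, multiply, and accumulate steps, each costing time and energy.
C = np.zeros((2, 4))
for i in range(2):
    for j in range(4):
        for k in range(3):
            C[i, j] += A[i, k] * B[k, j]  # one scalar operation at a time

# The optimized library call computes the same result, but still
# electronically and in many underlying steps.
assert np.allclose(C, A @ B)
```

POMMM's claim is that all of these scalar products form at once, as light propagates through the optics.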

POMMM takes a different approach. It encodes one matrix into the amplitude and position of a spatial optical field, applies distinct phase patterns to different rows of data, then uses cylindrical lenses to perform optical Fourier transforms. These transforms naturally separate and combine the calculations simultaneously.

Testing showed the optical system produces results matching GPU computations with high consistency. Across matrix sizes ranging from 10×10 to 50×50 elements, POMMM maintained a low average error relative to GPU results. The calculations happened during a single pass of light through the optical system.

Optical Hardware Runs Real Neural Networks

For practical AI applications, the researchers demonstrated POMMM running actual neural networks originally designed for GPUs. Their experimental prototype processed convolutional neural networks for image recognition, achieving 94.44% accuracy on handwritten digit classification and 84.11% on clothing item recognition. Vision transformer models showed similar performance. These tests used neural network weights trained on GPUs and deployed directly to the optical system.

The physics enabling this advantage comes from properties of light understood for over a century but not previously combined this way for computing. POMMM exploits two key properties of Fourier transforms: moving a signal in space doesn’t alter its frequency spectrum, and applying phase modulation shifts the spatial frequency. By encoding different rows of a matrix with different phase gradients, then performing optical transforms in perpendicular directions, the system causes all the partial products to separate naturally into distinct spatial locations, where a camera captures them simultaneously.
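The two Fourier properties the system relies on can be verified numerically. The sketch below uses a 1-D discrete Fourier transform as a stand-in for the optical Fourier transform performed by the lenses; the signal and shift amounts are arbitrary choices for illustration:

```python
import numpy as np

n = 64
x = np.zeros(n)
x[10:20] = 1.0  # a simple test signal
X = np.fft.fft(x)

# Property 1: shifting the signal in space leaves the magnitude
# of its spectrum unchanged (only the phase changes).
x_shifted = np.roll(x, 7)
assert np.allclose(np.abs(np.fft.fft(x_shifted)), np.abs(X))

# Property 2: multiplying by a linear phase ramp shifts the spectrum.
# A ramp of "slope" k0 moves every frequency component by k0 bins.
k0 = 5
ramp = np.exp(2j * np.pi * k0 * np.arange(n) / n)
X_modulated = np.fft.fft(x * ramp)
assert np.allclose(X_modulated, np.roll(X, k0))
```

Giving each matrix row its own phase gradient is what lets the optics steer each row's partial products to a different location on the camera.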

Inside the Speed-of-Light Computer

The experimental prototype uses spatial light modulators to encode input matrices onto a 532-nanometer laser beam, cylindrical lens assemblies to perform the parallel optical transforms and a high-resolution quantitative CMOS camera to record results. The core calculation happens during a single pass of light through the optical elements. The speed of the modulators and camera determines overall throughput.

Taking this further, the team demonstrated wavelength multiplexing for processing higher-dimensional data. By encoding the real and imaginary parts of a complex matrix onto two different laser wavelengths (540 and 550 nanometers), they performed complete complex-valued calculations in parallel. This wavelength-multiplexing capability points toward processing three-dimensional tensors—the multidimensional data arrays common in modern deep learning—through single-shot operations across multiple colors of light simultaneously.
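The arithmetic behind the two-wavelength trick is the standard decomposition of a complex product into real-valued pieces. This sketch shows the decomposition only; the mapping of each part to a physical wavelength is the paper's contribution, and the matrices here are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# Encode real and imaginary parts separately — analogous to placing
# them on two different laser wavelengths.
Ar, Ai = A.real, A.imag
Br, Bi = B.real, B.imag

# Four real-valued matrix products recombine into one complex product:
# (Ar + i*Ai)(Br + i*Bi) = (Ar@Br - Ai@Bi) + i*(Ar@Bi + Ai@Br)
C = (Ar @ Br - Ai @ Bi) + 1j * (Ar @ Bi + Ai @ Br)

assert np.allclose(C, A @ B)
```

Since each real-valued product is itself a single-shot optical operation, the full complex multiplication stays parallel across wavelengths.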

Energy Efficiency and Future Potential

Theoretical analysis suggests POMMM’s single-propagation architecture could outperform existing optical computing paradigms by multiple orders of magnitude in both computational parallelism and energy efficiency, particularly when implemented with purpose-built photonic hardware rather than off-the-shelf components.

“Imagine you’re a customs officer who must inspect every parcel through multiple machines with different functions and then sort them into the right bins,” says lead author Dr. Yufeng Zhang, from the Photonics Group at Aalto University’s Department of Electronics and Nanoengineering, in a statement. “Normally, you’d process each parcel one by one. Our optical computing method merges all parcels and all machines together — we create multiple ‘optical hooks’ that connect each input to its correct output. With just one operation, one pass of light, all inspections and sorting happen instantly and in parallel.”

The work addresses a critical bottleneck in modern AI hardware. Neural networks generate massive data movement between processors and memory, and today’s GPU tensor cores spend substantial time and energy shuttling information back and forth. Optical computing performs calculations through physical light propagation rather than electronic data transfer, potentially reducing memory bandwidth limitations.

Current challenges include the complexity of cascading multiple optical layers for deep neural networks and the precision required in aligning optical components. The researchers found that training neural networks with POMMM-specific error characteristics can compensate for some hardware imperfections, though physical implementation still demands careful engineering.

Source : https://studyfinds.org/new-system-performs-ai-calculations-in-single-flash/
