Lightmatter – photonic processor 10x faster with 90% less energy consumption

As artificial intelligence (AI) workloads grow, researchers are exploring ways to push beyond electrons and into the world of photons. Technology now exists that replaces electronic processors with photonic designs incorporating lasers and other optical components.

Through the combination of electronics, photonics, and new algorithms, Lightmatter has built a next-generation computing platform purpose-built for artificial intelligence.

Photonic computing, and especially integrated photonic computing, which uses silicon-based chips for optical signal processing, is actively evolving and beginning to make an impact.

General-purpose AI inference acceleration. Combines photonics and electronics in a single, compact package.

A wafer-scale, programmable photonic interconnect that enables arrays of heterogeneous chips to communicate with unprecedented bandwidth and energy efficiency.

The Envise processor is a general-purpose machine-learning accelerator that combines photonic and transistor-based systems in a single, compact module. Using a silicon-photonics processing core for most computational tasks, Envise provides offload acceleration for high-performance AI inference workloads at levels of performance and efficiency not seen before.
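The workload such a photonic core targets is, at its heart, matrix-vector multiplication, the operation that dominates neural-network inference. A plain-Python sketch of that operation, purely illustrative: the photonic hardware computes it in the analog optical domain rather than in software.

```python
# The dominant inference kernel: y = W @ x (matrix-vector product).
# Photonic accelerators evaluate this in the optical domain; this
# software sketch only illustrates the operation being offloaded.

def matvec(W, x):
    """Multiply matrix W (given as a list of rows) by vector x."""
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in W]

W = [[1.0, 0.0],
     [0.5, 2.0]]
x = [3.0, 4.0]
y = matvec(W, x)  # [3.0, 9.5]
```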

Envise provides flexible features for machine learning workloads:

  • Common activation functions such as ReLU, GELU, sigmoid, tanh, and support for customizable activation functions
  • Convolution acceleration for common filter sizes
  • Flexible number format support including INT8, INT16, and bfloat16
  • Dynamic scaling for maximum precision
  • Matrix transform hardware including transpose, swap, reductions, and other custom data movement operators
  • Flexcore modes for 256×256, 128×512, or dual-core 128×256 operation
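Dynamic scaling pairs a low-precision number format with a per-tensor scale factor chosen from the data, so the limited integer range covers the actual spread of values. A minimal pure-Python sketch of INT8 quantization with a dynamically chosen scale; this illustrates the general technique, not Lightmatter's actual implementation.

```python
# Illustrative sketch: dynamic per-tensor scaling for INT8 quantization.
# Generic technique only -- not Lightmatter's Envise pipeline.

def quantize_int8(values):
    """Map floats to INT8 codes using a scale derived from the data."""
    max_abs = max(abs(v) for v in values)
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    codes = [max(-128, min(127, round(v / scale))) for v in values]
    return codes, scale

def dequantize_int8(codes, scale):
    """Recover approximate float values from INT8 codes."""
    return [c * scale for c in codes]

acts = [0.02, -1.5, 0.75, 3.0]
q, s = quantize_int8(acts)
recovered = dequantize_int8(q, s)
```

Because the scale tracks the largest magnitude in the tensor, the full INT8 range is used regardless of whether the activations span ±3 or ±0.003.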

INSIDE THE CHIP

ENVISE Server

The Envise server features 16 Envise chips in a 4U server configuration with only 3 kW power consumption. The server is a building block for a rack-scale Envise inference system that can run the largest neural networks developed to date at unprecedented performance: 3 times the inferences/second of the NVIDIA DGX-A100, with 7 times the inferences/second/Watt, on BERT-Base with the SQuAD dataset.
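The figures above can be cross-checked with simple arithmetic: 3 kW across 16 chips gives the server-level power budget per chip, and a 3x throughput gain combined with a 7x efficiency gain implies the system draws roughly 3/7 of the baseline's power for the same work. A short script using only the numbers already quoted:

```python
# Sanity-check arithmetic on the figures quoted above (no new data).

server_power_kw = 3.0
chips_per_server = 16
power_per_chip_w = server_power_kw * 1000 / chips_per_server  # 187.5 W

throughput_ratio = 3.0   # inferences/second vs. the DGX-A100 baseline
efficiency_ratio = 7.0   # inferences/second/Watt vs. the same baseline
relative_power = throughput_ratio / efficiency_ratio  # ~0.43x the power
```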

4U FORM FACTOR