‘Lightelligence’ – fully integrated optical computing systems that leverage the speed, power, and efficiency of light.

Lightelligence builds optical chips that power the next generation of high-performance computing by processing information with light. Its chips offer ultra-high speed, low latency, and low power consumption, representing orders-of-magnitude improvements over traditional electronic architectures.

Artificial Neural Networks are computational network models inspired by signal processing in the brain. These models have dramatically improved the performance of many learning tasks, including speech and object recognition.
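As a toy illustration (not tied to any specific Lightelligence design), the core operation such networks spend most of their time on is a matrix-vector multiplication followed by a nonlinearity. The NumPy sketch below builds a two-layer network with random placeholder weights:

```python
# A minimal sketch of a feedforward artificial neural network,
# using random placeholder weights and a ReLU activation.
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def dense_layer(x, W, b):
    # Each layer multiplies its input by a weight matrix and applies
    # a nonlinearity -- the operation ANNs spend most of their time on.
    return relu(W @ x + b)

rng = np.random.default_rng(0)
x = rng.normal(size=4)                      # input features
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(2, 8)), np.zeros(2)

hidden = dense_layer(x, W1, b1)
output = W2 @ hidden + b2                   # e.g. class scores before softmax
print(output)
```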

However, today’s computing hardware is inefficient at implementing neural networks, in large part because much of it was designed for von Neumann computing schemes. Significant effort has been made to develop electronic architectures tuned to implement artificial neural networks that improve upon both computational speed and energy efficiency.

Lightelligence proposes a new architecture for a fully optical neural network that, by using the unique advantages of optics, promises a computational speed enhancement of at least two orders of magnitude over the state of the art and three orders of magnitude in power efficiency for conventional learning tasks. The team has experimentally demonstrated essential parts of this architecture using a programmable nanophotonic processor.
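The passage above does not spell out the photonic implementation, but a common way to realize a general weight matrix optically is to factor it by singular value decomposition into two unitary stages (implementable as meshes of Mach-Zehnder interferometers) and a diagonal stage (implementable with attenuators or amplifiers). The NumPy sketch below is a simulation of that factorization only, checking that the three stages reproduce the ordinary matrix product:

```python
# Sketch of the linear step in an optical neural network layer: a general
# weight matrix M is factored by SVD into two unitary matrices and a
# diagonal matrix.  This is a software simulation; no photonic hardware
# details are modeled here.
import numpy as np

rng = np.random.default_rng(1)
n = 4
M = rng.normal(size=(n, n))        # trained weight matrix (placeholder)
x = rng.normal(size=n)             # input signal amplitudes (placeholder)

U, s, Vh = np.linalg.svd(M)        # M = U @ diag(s) @ Vh

y_optical = U @ (s * (Vh @ x))     # three physical stages: Vh mesh, diagonal, U mesh
y_direct = M @ x

assert np.allclose(y_optical, y_direct)
print(y_optical)
```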

Using unitary (instead of general) matrices in artificial neural networks (ANNs) is a promising way to solve the gradient explosion/vanishing problem, as well as to enable ANNs to learn long-term correlations in the data. This approach appears particularly promising for Recurrent Neural Networks (RNNs). This work presents a new architecture for implementing Efficient Unitary Neural Networks (EUNNs).
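A rough sense of why unitarity helps: if the recurrent matrix is built as a product of rotations, it is orthogonal (the real-valued analogue of unitary) by construction and preserves vector norms, so hidden states and gradients neither explode nor vanish over many time steps. The NumPy sketch below, with random placeholder angles, is a simplification of this idea rather than the published EUNN implementation:

```python
# Simplified, real-valued sketch of the idea behind EUNNs: parameterize the
# recurrent matrix as a product of 2x2 Givens rotations, which keeps it
# orthogonal by construction.  Repeated application then preserves norms.
# The rotation angles are random placeholders, not a trained model.
import numpy as np

def givens(n, i, j, theta):
    G = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    G[i, i], G[j, j] = c, c
    G[i, j], G[j, i] = -s, s
    return G

rng = np.random.default_rng(2)
n = 6
W = np.eye(n)
for i in range(n):
    for j in range(i + 1, n):
        W = givens(n, i, j, rng.uniform(0, 2 * np.pi)) @ W

h = rng.normal(size=n)
norm_before = np.linalg.norm(h)
for _ in range(1000):               # 1000 recurrent steps
    h = W @ h
print(norm_before, np.linalg.norm(h))   # norm unchanged up to float error
```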

Gated Orthogonal Recurrent Units: On Learning to Forget
We present a novel recurrent neural network (RNN) based model that combines the remembering ability of unitary RNNs with the ability of gated RNNs to effectively forget redundant/irrelevant information in their memory. We achieve this by extending unitary RNNs with a gating mechanism. Our model is able to outperform LSTMs, GRUs, and unitary RNNs on several long-term dependency benchmark tasks. We empirically show both that orthogonal/unitary RNNs lack the ability to forget and that GORU can simultaneously remember long-term dependencies while forgetting irrelevant information, an ability that plays an important role in recurrent neural networks. We provide competitive results along with an analysis of our model on many natural sequential tasks, including bAbI question answering, TIMIT speech spectrum prediction, Penn Treebank, and synthetic tasks that involve long-term dependencies such as algorithmic, parenthesis, denoising, and copying tasks.
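To make the combination concrete, the sketch below implements one plausible reading of a gated orthogonal recurrent step in NumPy: an orthogonal recurrent matrix for lossless memory plus GRU-style reset and update gates for forgetting. Gate placement follows the usual GRU pattern and may differ in detail from the published GORU; all weights are random placeholders:

```python
# Rough sketch of the idea behind GORU: an orthogonal recurrent transition
# (which preserves information) combined with GRU-style reset/update gates
# (which let the network forget).  Not the published model; weights are
# random placeholders.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(3)
n_in, n_h = 3, 5

# Random orthogonal recurrent matrix via QR decomposition.
U_orth, _ = np.linalg.qr(rng.normal(size=(n_h, n_h)))
W_x = rng.normal(size=(n_h, n_in))
W_r, U_r = rng.normal(size=(n_h, n_in)), rng.normal(size=(n_h, n_h))
W_z, U_z = rng.normal(size=(n_h, n_in)), rng.normal(size=(n_h, n_h))

def goru_step(h, x):
    r = sigmoid(W_r @ x + U_r @ h)                 # reset gate
    z = sigmoid(W_z @ x + U_z @ h)                 # update gate
    h_tilde = np.tanh(W_x @ x + r * (U_orth @ h))  # candidate state
    return z * h + (1.0 - z) * h_tilde             # gated mix: remember vs. update

h = np.zeros(n_h)
for x in rng.normal(size=(10, n_in)):              # a short input sequence
    h = goru_step(h, x)
print(h)
```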

Improved computing power and an exponential increase in data have helped fuel the rapid rise of artificial intelligence. But as AI systems become more sophisticated, they will need even more computational power than traditional computing hardware will likely be able to provide. To solve the problem, MIT spinout Lightelligence is developing the next generation of computing hardware.

The Lightelligence solution makes use of the silicon fabrication platform used for traditional semiconductor chips, but in a novel way. Rather than building chips that use electricity to carry out computations, Lightelligence develops components powered by light that are low-energy and fast, and they might just be the hardware we need to power the AI revolution. Compared to traditional architectures, the optical chips made by Lightelligence offer orders-of-magnitude improvements in speed, latency, and power consumption.

To perform arithmetic operations, electronic chips need to combine tens, sometimes hundreds, of logic gates. This process requires the chip's transistors to switch on and off over multiple clock cycles, and every time a logic-gate transistor switches, it generates heat and consumes power.
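To make the gate count concrete, the Python sketch below models an 8-bit ripple-carry adder built from basic logic gates; a single addition already requires dozens of gate evaluations, each of which corresponds to transistor switching on a real chip. This is a software illustration, not a description of any particular chip:

```python
# Software model of arithmetic built from logic gates: an 8-bit
# ripple-carry adder, counting how many gate evaluations one addition needs.
def full_adder(a, b, carry_in):
    # One full adder uses two XORs, two ANDs, and one OR (5 gates).
    s = (a ^ b) ^ carry_in
    carry_out = (a & b) | ((a ^ b) & carry_in)
    return s, carry_out

def ripple_carry_add(x, y, bits=8):
    result, carry, gate_evals = 0, 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
        gate_evals += 5
    return result, gate_evals

total, gates = ripple_carry_add(200, 55)
print(total, gates)   # 255 computed with 40 gate evaluations
```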

Not so with the chips produced by Lightelligence. In the optical domain, arithmetic computations are carried out directly by the physics of light propagation rather than by logic-gate transistors switching over multiple clock cycles.

The photonics hardware company Lightelligence has announced its photonic AI accelerator card, PACE, which is eventually intended to sit in consumers' personal computers alongside their regular CPUs and GPUs. The speed and efficiency of these photonic chips are expected to make training artificial intelligence models much faster while keeping heat down.

https://www.lightelligence.ai/