Inside a Neural Network’s Mind

Neural networks are a set of algorithms, loosely modeled on the human brain, that recognize underlying patterns in a dataset. Like a brain, a neural network learns from examples rather than from explicit programming. What happens inside a neural network has long intrigued researchers, and considerable work has been dedicated to understanding how neural nets accomplish what they are trained to do.

In this blog, we will explore the inner workings of a neural network that processes language. MIT, in collaboration with the Qatar Computing Research Institute (QCRI), has released several papers on an interpretive technique that analyzes neural networks trained for translation and speech recognition. Through this research, they found support for some common notions about how a neural network works.

Lower-Level vs Higher-Level Tasks

Neural networks concentrate on lower-level tasks before moving on to higher-level ones. For instance, they seem to focus on sound recognition or part-of-speech recognition before moving on to translation.

According to the researchers, a translation network weighs certain types of data more heavily, which leads it to omit other parts of the data. Correcting that omission improves the performance of the network, which in turn helps improve the accuracy of artificial intelligence systems.

Neural networks are typically arranged in layers, with each layer consisting of nodes: simple processing units. Each node is connected to several nodes in the layers above and below it. The connections between layers carry different weights, which determine how much a node's output contributes to the calculation performed by the next node.
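The weighted-connection idea above can be sketched in a few lines of NumPy. This is a minimal illustration, not the researchers' actual model: the layer sizes, the random weights, and the sigmoid activation are all illustrative assumptions.

```python
import numpy as np

def layer_forward(inputs, weights, biases):
    # Each node computes a weighted sum of the outputs of the layer
    # below it, then passes the sum through a nonlinearity.
    z = weights @ inputs + biases
    return 1.0 / (1.0 + np.exp(-z))  # sigmoid activation

rng = np.random.default_rng(0)
x = rng.normal(size=3)        # outputs of the 3 nodes in the layer below
W = rng.normal(size=(4, 3))   # one weight per connection (4 nodes x 3 inputs)
b = np.zeros(4)               # one bias per node in this layer
out = layer_forward(x, W, b)  # four node outputs, each in (0, 1)
```

Stacking several such layers, each feeding its outputs into the next, gives the full network; training consists of adjusting the entries of each weight matrix.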

Since thousands to millions of nodes and connections are involved, working out what algorithm those weights give rise to is nearly impossible. The technique used by the MIT and QCRI researchers involved taking a trained network and using the output of each of its layers, in response to individual training examples, to train another neural network to perform the same task. This helps reveal what each layer is optimized to perform.
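The spirit of this layer-by-layer analysis is often called probing: freeze a trained network, record the activations one layer produces for each example, and fit a separate, simple classifier on those activations for some auxiliary task. The sketch below uses synthetic stand-in activations and labels, and a logistic-regression probe, purely to show the shape of the procedure; none of it is the researchers' actual setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for the frozen activations a trained network emits at one
# layer: 200 training examples, each a 16-dimensional vector.
layer_activations = rng.normal(size=(200, 16))

# Stand-in auxiliary labels (e.g. a part-of-speech property). Here they
# are synthetic and depend on the first activation dimension.
aux_labels = (layer_activations[:, 0] > 0).astype(int)

# Train a simple probe on the fixed activations; the original network
# is never updated.
probe = LogisticRegression().fit(layer_activations, aux_labels)
accuracy = probe.score(layer_activations, aux_labels)
# High probe accuracy suggests the layer encodes that property.
```

Comparing probe accuracy across layers is what lets one say, for example, that lower layers carry more phonetic or part-of-speech information while higher layers carry more semantic information.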

Using this technique, the researchers found that in translation networks, lower layers performed better than higher ones at recognizing phones (distinct units of sound) and were also good at identifying parts of speech and morphology. The higher layers were better at semantic tagging.

About Data Labeler

At Data Labeler, we provide fully managed data labeling services and specialize in the production of high-volume and best-in-class training datasets for AI and ML initiatives. Reach out to us at sales@datalabeler.com for high-quality data labeling services.