Artificial Intelligence Neural Networks

    How Neural Networks Think

    By Shayaan Abdullah on September 20, 2017


    This blog post was originally posted by Larry Hardesty on MIT News.

    Artificial-intelligence research has been transformed by machine-learning systems called neural networks, which learn how to perform tasks by analyzing huge volumes of training data.

    During training, a neural net continually readjusts thousands of internal parameters until it can reliably perform some task, such as identifying objects in digital images or translating text from one language to another. But on their own, the final values of those parameters say very little about how the neural net does what it does.

    Understanding what neural networks are doing can help researchers improve their performance and transfer their insights to other applications, and computer scientists have recently developed some clever techniques for divining the computations of particular neural networks.

    But at the 2017 Conference on Empirical Methods in Natural Language Processing, starting this week, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory are presenting a new general-purpose technique for making sense of neural networks that are trained to perform natural-language-processing tasks, in which computers attempt to interpret freeform texts written in ordinary, or “natural,” language (as opposed to a structured language, such as a database-query language).

    The technique applies to any system that takes text as input and produces strings of symbols as output, such as an automatic translator. And because it works by varying inputs and examining the effects on outputs, it can be used with online natural-language-processing services, without access to the underlying software.
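    The article does not spell out the mechanics of the MIT method, but the general black-box idea it describes, varying inputs and observing how the outputs change, can be sketched in a few lines. The function and system names below (`word_importance`, `toy_system`) are hypothetical illustrations, not the researchers' actual implementation.

    ```python
    def word_importance(system, sentence):
        """Estimate each input word's influence on a black-box
        text-to-text system: delete the word, re-run the system,
        and score how much the output changes (Jaccard distance
        over output tokens). No access to the system's internals
        is required -- only its inputs and outputs."""
        base = set(system(sentence).split())
        words = sentence.split()
        scores = []
        for i in range(len(words)):
            varied = " ".join(words[:i] + words[i + 1:])
            out = set(system(varied).split())
            union = max(len(base | out), 1)
            scores.append((words[i], 1 - len(base & out) / union))
        return scores

    def toy_system(text):
        # Stand-in for an online service such as a translator:
        # it "translates" only the words it knows and drops the rest.
        vocab = {"neural": "NEURAL", "nets": "NETS", "learn": "LEARN"}
        return " ".join(vocab[w] for w in text.split() if w in vocab)
    ```

    Running `word_importance(toy_system, "neural nets slowly learn")` assigns a score of zero to "slowly", since deleting it leaves the output unchanged, while deleting any of the "translated" words measurably perturbs the output. Real perturbation-based analyses are more sophisticated, but the probe-and-observe loop is the same.
    
    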
