We Almost Gave Up On Building Artificial Brains

Today artificial neural networks are making art, writing speeches, identifying faces and even driving cars. It feels as if we’re riding the wave of a novel technological era, but the current rise of neural networks is actually a renaissance of sorts.

It may be hard to believe, but researchers were already beginning to see the promise of neural networks in mathematical models as far back as World War II. By the 1970s, though, the field was ready to give up on them entirely.

“[T]here were no impressive results until computers grew up, that is until the past 10 years,” Patrick Henry Winston, a professor at MIT who specializes in artificial intelligence, says. “It remains the most important enabler of deep learning.”

Today’s neural networks are layered webs of simple mathematical units whose behavior resembles, for lack of a better analogy, the firing of synapses in the human brain. Several layers of artificial neurons, or nodes, work together to arrive at the solution to a problem. As data is fed through the layers, a simple computation occurs at each node, and the result is passed to the next layer of neurons for another round of computations. During training, the numbers governing each neuron’s computation are nudged slightly based on how far the network’s answers fall from the desired ones. In this way, a neural network teaches itself the patterns in data that match a desired solution and optimizes its path to it, sort of like tuning a guitar. The more data you feed a neural net, the better it gets at tuning its neurons and finding the desired pattern.
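
To make that layered, node-by-node computation concrete, here is a minimal sketch of a tiny two-layer network in Python with NumPy. The layer sizes, the tanh squashing function and the random weights are arbitrary choices for illustration, not details of any particular system described here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary sizes for the sketch: 3 inputs, 4 hidden neurons, 1 output.
W1 = rng.normal(size=(3, 4))   # weights into the hidden layer
W2 = rng.normal(size=(4, 1))   # weights into the output layer

def forward(x):
    """Pass one input vector through both layers of the network."""
    hidden = np.tanh(x @ W1)       # each hidden node: weighted sum, then a squashing function
    output = np.tanh(hidden @ W2)  # the output node repeats the same simple computation
    return output

print(forward(np.array([0.5, -1.0, 2.0])))
```

A real network would also include a training step that adjusts W1 and W2 based on how wrong the output is; that repeated adjustment is the “tuning” described above.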

While the field has emerged in recent years as a tour de force for computer experts and even some hobbyists, the history of the neural network stretches back to the dawn of computing. The very first map of a neural network came in 1943, in a paper by Warren Sturgis McCulloch and Walter Pitts. But McCulloch’s framework had little to do with computing; instead, he was focused on the structure and function of the human brain. The McCulloch-Pitts model of neuron function arose, of course, at a time when the technology to monitor such activity didn’t exist.

McCulloch and Pitts believed each neuron in the brain functioned like an on-off switch (like the binary digits 1 and 0), and that combinations of these neurons firing on or off yielded logical decisions. At the time, there were many competing theories to describe the way the brain operated, but according to a paper by Gualtiero Piccinini of the University of Missouri–St. Louis, the McCulloch-Pitts model did something others hadn’t: It whittled brain function down to something that resembled a simple computer, and that sparked interest in building an artificial brain from scratch.
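
The on-off idea is simple enough to write down directly. Below is a minimal sketch of a McCulloch-Pitts-style unit in Python, expressed in modern code rather than the notation of the 1943 paper: binary inputs are summed, and the neuron “fires” only if the sum reaches a threshold, which is already enough to reproduce basic logic gates.

```python
def mp_neuron(inputs, threshold):
    """Fire (return 1) when enough binary inputs are on, otherwise stay off (0)."""
    return 1 if sum(inputs) >= threshold else 0

# With two inputs, a threshold of 2 behaves like logical AND,
# and a threshold of 1 behaves like logical OR.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", mp_neuron([a, b], 2), "OR:", mp_neuron([a, b], 1))
```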

The first successful neural network concept (and “successful” is a generous term) was the Perceptron algorithm from Cornell University’s Frank Rosenblatt. The Perceptron was originally envisioned as a machine, though it was first implemented as a simple class of neural networks that could make fairly rudimentary decisions. Eventually, the algorithm was incorporated into a refrigerator-sized computer called the Mark I, an image recognition machine. It had an array of 400 photocells linked to its artificial neural network, and it could identify a shape held before its “eye.”
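
As a rough idea of what the Perceptron algorithm does, stated in modern terms rather than Rosenblatt’s own formulation, here is the classic learning rule in Python: make a guess from weighted inputs, and nudge the weights whenever the guess is wrong. The tiny AND-style dataset is just an invented example.

```python
import numpy as np

# A tiny, made-up dataset: output 1 only when both inputs are 1.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

w = np.zeros(2)   # one weight per input
b = 0.0           # bias term
lr = 0.1          # learning rate

for _ in range(20):                      # a few passes over the data
    for xi, target in zip(X, y):
        prediction = 1 if xi @ w + b > 0 else 0
        error = target - prediction      # nonzero only when the guess is wrong
        w += lr * error * xi             # nudge the weights toward the correct answer
        b += lr * error

print("weights:", w, "bias:", b)
print("predictions:", [1 if xi @ w + b > 0 else 0 for xi in X])
```

The rule only works when a straight line can separate the two kinds of answers, a limitation that later helped sour researchers on the whole approach.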

A few years later, in 1959, ADALINE arrived via researchers at Stanford University and was at the time the biggest artificial brain. But it, too, could only handle a few processes at a time, and it was meant as a demonstration of machine learning rather than being set to a specific task.

These small but tantalizing advances in computing fueled the hysteria surrounding artificial intelligence in the 1950s, with Science running the headline “Human Brains Replaced?” in a 1958 issue about neural networks. Intelligent robots stormed into science fiction at a swifter clip. The same hype cycle, though, has repeated itself with many automated technologies throughout history.
