Neural network

A neural network is a computer model that learns by working through examples, loosely inspired by the human brain. It is built from layers of connected "neurons" that pick up patterns in data step by step. Neural networks can recognise images, understand text, make predictions, or generate new content.

What is a neural network?

Neural networks are the foundation of modern artificial intelligence. They let computers learn from examples instead of following rules someone wrote out by hand. The idea comes from the human brain: a network of cells that pass signals along and recognise patterns together. Inside a computer it works differently, but the principle is the same.

How does a neural network work?

A neural network looks for relationships in data. It is built from a series of layers that work together, a bit like a factory that turns raw material into a finished product step by step.

First comes the input layer. It takes in the raw information, for example the pixels of a photo or the words in a sentence. Next come the hidden layers. These hold thousands of small calculation points called neurons. Each neuron looks at a slice of the information, runs a small calculation on it, and passes the result to the next layer. The last step is the output layer, which produces a final answer, for example "this is a cat" or "there is a 12% chance of default".
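The three layers above can be sketched in a few lines of Python. Everything here is an illustrative assumption, not a real trained model: the sizes (four inputs, three hidden neurons, one output), the random weights, and the choice of activation functions are all made up for the example.

```python
import numpy as np

# A minimal sketch of the layers described above: input -> hidden -> output.
rng = np.random.default_rng(0)

W1 = rng.normal(size=(4, 3))   # connections from 4 inputs to 3 hidden neurons
b1 = np.zeros(3)
W2 = rng.normal(size=(3, 1))   # connections from the hidden layer to 1 output
b2 = np.zeros(1)

def relu(z):
    # Each hidden neuron's "small calculation": a weighted sum, then a cutoff.
    return np.maximum(0.0, z)

def sigmoid(z):
    # Squashes the result into the 0..1 range so it reads as a probability.
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    hidden = relu(x @ W1 + b1)        # hidden layer passes its results onward
    return sigmoid(hidden @ W2 + b2)  # output layer produces the final answer

x = np.array([0.2, -1.0, 0.5, 0.3])  # raw input, e.g. four measurements
probability = forward(x)
print(float(probability[0]))         # a single score between 0 and 1
```

Because the weights are random, the score is meaningless here; training, described next, is what turns it into a useful answer.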

During training, the network works out for itself which connections are useful. At first it gets things wrong a lot. It compares its guess to the correct answer, traces the error backwards through the layers to see how much each connection contributed, and then adjusts the strength of those connections so it does better next time. This error-tracing step is called backpropagation, and it is repeated thousands of times.

That is how the network learns to recognise patterns step by step, without anyone having to explain what a cat is or how one looks.
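The learning loop above can be sketched on a classic toy problem, XOR, where the correct answer is 1 exactly when the two inputs differ. The network size, learning rate, and step count are arbitrary choices for the sketch, not recommended settings.

```python
import numpy as np

# A toy training loop for the process described above: guess, compare,
# trace the error backwards, adjust, repeat.
rng = np.random.default_rng(1)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # the correct answers

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)    # input -> hidden
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)    # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
first_loss = last_loss = None
for step in range(3000):
    # Forward pass: the network makes its guesses.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Compare the guesses to the correct answers.
    err = out - y
    last_loss = float((err ** 2).mean())
    if first_loss is None:
        first_loss = last_loss

    # Backpropagation: trace the error backwards through the layers
    # to see how much each connection contributed...
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # ...then adjust the strength of every connection a little.
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(f"error: {first_loss:.3f} -> {last_loss:.3f}")
```

The printed error shrinks as the loop runs: nobody told the network what XOR means, it simply adjusted its connections until its guesses matched the examples.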

Types of neural networks

Not every neural network works the same way. There are many variants, each tuned to a specific kind of data or problem. The main ones are:

  • Feedforward networks: the basic form. Data flows in one direction, from input to output.

  • Convolutional networks (CNNs): strong at image recognition. They automatically pick up on patterns like edges and shapes.

  • Recurrent networks (RNNs): good with sequences like text or audio. They remember what came before.

  • Transformers: the successor to RNNs, used in modern language models like ChatGPT. They process whole sequences at once and keep track of long-range context far better.

  • Generative adversarial networks (GANs): two networks that compete with each other. One tries to produce realistic images, the other tries to spot the fakes.

Applications

Neural networks have a wide range of uses. They have been deployed for years in places like:

  • Image recognition in medical scans, factory lines, and security cameras.

  • Language and speech, from machine translation to chatbots.

  • Forecasting in banking, retail, and logistics.

  • Recommendations on Netflix, Spotify, and online shops.

  • Self-driving cars that recognise objects and road signs.

  • Generative AI that produces text, images, or music.

Anywhere there are patterns to find, neural networks can learn them.

How neural networks evolved

The story starts in the 1940s, when researchers tried to mimic the brain. Early models like the Perceptron were too limited to be useful. The 1980s brought the breakthrough of backpropagation, which let networks learn from their mistakes. Even so, adoption stayed small because computers were slow and data was scarce.

Around 2010 everything changed. Faster GPUs and huge datasets made deep learning possible. Networks with dozens of layers suddenly matched or beat human accuracy on specific image- and speech-recognition benchmarks.

Today we are in the era of generative networks. Models like ChatGPT and DALL-E produce text, images, and music on their own. Research now focuses on smaller, more energy-efficient models that stay just as capable.

Last Updated: April 18, 2026
Keywords
neural network, deep learning, artificial intelligence (AI), machine learning, embeddings, generative AI, backpropagation, transformer