Using machine learning techniques, today’s computer algorithms are capable of solving problems previously thought to be exclusively within the capabilities of human beings. Modern machine learning algorithms have shown aptitude in various fields such as finance, healthcare, and transportation. These algorithms have already been implemented in a variety of ways to improve our society, such as helping doctors diagnose diseases and preventing drivers from causing accidents. While they are capable of many impressive feats, they are far from perfect. There are still many areas in which computers struggle that humans find trivial. The limitations of computers are shown when they cannot function without exact instruction or, in the case of machine learning, must be fed vast amounts of data coupled with reinforcement on how to properly interpret such data. Fortunately, the solution to some of the problems holding back the field of machine learning may lie within our own biological mechanisms.

The brain contains advanced learning mechanisms capable of adapting to a wide variety of tasks. In fact, many machine learning algorithms were designed around earlier neuroscience discoveries; the concept of biological neural networks, for example, inspired the creation of artificial neural networks. Many frontrunners in the field of machine learning agree that the best way forward is to look closely at research coming from neuroscientists [1]. Simply put, better models of the brain will lead to better machine learning techniques.

While it is true that computers have several advantages over human capabilities, namely specialization, memory capacity, and calculation speed, there are still many important areas in which they lag behind. Using only a fraction of the power and space, our brains possess important advantages over high-end computers when it comes to one of the core aspects of intelligence: the ability to learn. Modern machine learning algorithms perform best when completing repetitive tasks and receiving large amounts of data. In contrast, humans can take an extremely limited amount of data with little to no direct reinforcement and apply it in a wide variety of contexts.

Our remarkable learning ability is demonstrated by a study showing that children at only three years of age can begin to learn the meaning of a new word after being given just a single example [2]. The experiment was conducted in a nursery school where the teacher introduced a new color term, “chromium,” indicated to mean olive green, in the context of a normal classroom activity. The researchers ensured that the activity provided no reinforcement toward retaining the newly acquired term. Seven to ten days later, the children were assessed on their knowledge of several colors, including “chromium,” through a series of tasks: naming a color, picking out the “chromium” paper from several different papers, differentiating between olive green and other similar colors, and identifying whether or not certain words referred to colors. Although none of the children’s understanding was perfect, many of them successfully demonstrated comprehension in at least one of the tested areas. Most impressive was the children’s ability to infer the context of the new word they had just learned. The only guidance they were given was that “chromium” described some property of an object; the children independently determined that the property was a color [2]. This ability to infer the context of a new piece of data is something that machines have great difficulty with.

It is entirely possible for machine learning algorithms to learn how to identify colors in an isolated context, but it is incredibly difficult for such an algorithm to learn that it must identify colors in order to differentiate between various objects. A hypothetical example of this difference can be found in image classification, a task humans perform exceedingly well. Picture an algorithm, built with modern machine learning techniques, capable of recognizing whether or not an image depicts a car. If this algorithm were given a set of cars of varying colors and asked to report the differences between them, it could not do so. No matter how effective the algorithm is at identifying cars, it could not identify differences between cars unless it were retrained to do so. It cannot take a set of data and change the context in which it interprets that data without being manually modified. This limitation may be trivial for algorithms designed for a single task, but it becomes critical when a single algorithm must learn multiple tasks. Such generalizable learning is a property of biological organisms, most notably humans. Algorithms with this ability could adapt to solve new sets of problems without any outside changes being made to their code. Presently, every task performed by a machine learning algorithm must be designed around learning that specific task. An algorithm capable of determining on its own how a task should be performed could be reused across a wide range of applications.

Machine learning algorithms excel at performing specific tasks with concrete goals. For more complex tasks, they must be led to the correct conclusion by having the overall method for achieving their goals broken up into more manageable and straightforward pieces. The exact computational mechanisms underlying our ability to learn with limited data and context are unknown, but uncovering them would likely have an immense impact on machine learning. Understanding the basic principles underlying learning is one of the most difficult tasks facing computer scientists, and the only guidance they have regarding what this could look like lies in neuroscience. Algorithms based on our brain’s basic structure are already very common machine learning techniques, and more algorithms based on recent neuroscience discoveries, such as neurogenesis, are yielding promising results [3].

Artificial Neural Networks

One of the oldest and most important machine learning techniques based on neuroscience is the artificial neural network (ANN). ANNs are a class of machine learning algorithms that use a network of mathematical transformations to approximate functions relating a set of input and output variables based on patterns in the input data [4]. This process is roughly analogous to the propagation of information through the network of neurons making up our own nervous systems. The algorithm begins with multiple inputs, varying from a few to several thousand, that represent a complex set of data as numerical values; these values are comparable to how sensory information is encoded in the frequency of voltage spikes, known as action potentials, in an actual neuron. The network applies a multiplier, known as a weight, to each numerical input, sums all the weighted values heading toward the same artificial neuron, and applies a mathematical transformation, the specifics of which depend on the type of neural network. The resulting number is then sent to the next layer of artificial neurons, which repeats the process of weighting, summing, and transforming until a final output layer is reached. The output values can be used for a variety of applications [4].
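The layer-by-layer process described above can be sketched in a few lines of Python. The network shape, the specific weights, and the choice of tanh as the transformation are illustrative assumptions, not taken from any particular system:

```python
import math

def forward(inputs, layers):
    """Propagate numerical inputs through the network layer by layer.

    `layers` is a list of weight matrices; each row holds the weights
    feeding one artificial neuron.  Every neuron sums its weighted
    inputs and applies a transformation (tanh here) before passing
    the result on to the next layer.
    """
    values = list(inputs)
    for weight_matrix in layers:
        values = [math.tanh(sum(w * v for w, v in zip(row, values)))
                  for row in weight_matrix]
    return values

# Hypothetical network: 3 inputs -> 2 hidden neurons -> 1 output
layers = [
    [[0.5, -0.2, 0.1],    # weights into hidden neuron 1
     [0.3, 0.8, -0.5]],   # weights into hidden neuron 2
    [[1.0, -1.0]],        # weights into the single output neuron
]
print(forward([1.0, 0.5, -1.0], layers))  # one bounded output value
```

Real networks differ mainly in scale and in the transformation used at each layer, but the flow of weighted values from inputs toward an output layer is the same.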

One such application is classification, where the algorithm maps the final numerical output to the closest member of a fixed set of possible outputs, allowing the data to be sorted into the correct category. What allows this algorithm to learn is that it can calculate a value for its error and adjust its weights accordingly, similar to how the strength of the connections between biological neurons changes over time to alter the probability that the next neuron in the sequence is activated. Once the algorithm determines how its error relates to its weights, it can increase or decrease each weight accordingly. This ability to change itself to reduce error allows neural networks to analyze data sets that are too complex for other algorithms, or even humans, to understand [4].
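As a minimal illustration of adjusting weights to reduce error, the sketch below trains a single linear artificial neuron with the classic delta rule (gradient descent on squared error). The task, learning rate, and epoch count are invented for the example:

```python
def train_neuron(samples, weights, lr=0.1, epochs=100):
    """Nudge each weight against the gradient of the squared error
    (the delta rule), so that repeated exposure to the samples
    steadily reduces the neuron's error."""
    for _ in range(epochs):
        for inputs, target in samples:
            output = sum(w * x for w, x in zip(weights, inputs))
            error = output - target
            # d(0.5 * error**2)/dw_i = error * x_i
            weights = [w - lr * error * x for w, x in zip(weights, inputs)]
    return weights

# Invented task: recover y = 2*x1 - x2 from three examples
samples = [([1.0, 0.0], 2.0), ([0.0, 1.0], -1.0), ([1.0, 1.0], 1.0)]
learned = train_neuron(samples, weights=[0.0, 0.0])
print(learned)  # close to [2.0, -1.0]
```

Full neural networks apply the same idea across many layers at once via backpropagation, but each weight update follows this pattern of moving against the error gradient.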

Neurogenesis-Inspired Neural Networks

One of the biggest pitfalls of artificial neural networks, and machine learning in general, is the lack of flexibility. Every time an ANN is tasked with a new objective, its architecture must be tuned by hand. If machine learning algorithms are to improve their abilities, flexibility is imperative. Neurogenesis, a recent discovery in neuroscience, may provide the solution to this problem. Neurogenesis is the process through which neural stem cells transform into new neurons. For decades, it was thought that the human brain was incapable of producing new neurons. Researchers found, however, that this was not true when they discovered neural stem cells in the adult human hippocampus [5]. Similar to other species, such as mice and monkeys, adult humans do generate new neurons in certain regions of the brain [5]. The exact purpose of this process is unknown, but there are several hypotheses about its possible benefit to cognitive functions. One proposed hypothesis is that neurogenesis allows for the flexibility required to adapt to the complexities of new learning scenarios [6]. One study showed that learning a new activity increased the rate of neurogenesis, which suggests an increased aptitude at learning new activities over time [6].

Two of the biggest advantages of human learning are efficiency and flexibility, two areas that have proved difficult for machine learning algorithms. In this context, efficiency is the ability to solve problems using minimal resources, and flexibility is the ability to adapt to changing contexts. A paper from an IBM research center proposed an algorithm based on neurogenesis that seeks to solve some of the problems associated with neural networks [3]. The first component of the algorithm increases efficiency by removing unnecessary or redundant artificial neurons from the network. While our own brains do not kill off neurons wholesale, they do improve efficiency by pruning unused synapses. The problem with removing artificial neurons, however, is that a neuron useless in one context may be important in another. Consider an algorithm trained to identify pictures of an urban setting that is then given a new dataset of pictures containing plants and animals; after retraining, the algorithm no longer has the proper context for the first set of pictures. The proposed algorithm solves this problem with a neural network that regularly creates new neurons in addition to pruning them away, giving it high flexibility when dealing with changing contexts. A key component of the algorithm is that the rate of artificial neurogenesis correlates with the error calculation: when the context changes, the algorithm automatically creates more artificial neurons to better handle the task before pruning away the irrelevant ones once again [3]. The algorithm thus mirrors not only the concept of neurogenesis but also its mechanisms and possible functions. In both the biological and artificial cases, the relationship between neurogenesis and learning is bidirectional: a new learning experience increases neurogenesis, which in turn improves the ability to grasp that new experience.
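The grow-and-prune cycle can be caricatured in a short sketch. This is a toy illustration of the idea only, not the dictionary-learning algorithm of [3]; the thresholds, weights, and error value are all invented:

```python
import random

def adapt_hidden_layer(outgoing_weights, error,
                       grow_threshold=0.5, prune_threshold=0.05,
                       grow_count=2):
    """One toy grow-and-prune step.

    `outgoing_weights` holds one outgoing weight per hidden neuron.
    High error triggers "artificial neurogenesis" (new, randomly
    weighted neurons); neurons whose weights have shrunk to near
    zero are pruned away, as unused synapses are in the brain.
    """
    if error > grow_threshold:  # struggling: grow new neurons
        outgoing_weights = outgoing_weights + [
            random.uniform(-1, 1) for _ in range(grow_count)]
    # Prune neurons that barely contribute to the output
    return [w for w in outgoing_weights if abs(w) >= prune_threshold]

random.seed(0)  # reproducible "new neuron" weights for this sketch
hidden = [0.9, 0.01, -0.6]              # three hidden neurons
hidden = adapt_hidden_layer(hidden, error=0.8)
print(len(hidden))  # 4: two neurons grown, the negligible one pruned
```

Tying the growth rate to the measured error, as above, is what lets the network expand exactly when a change of context makes its current structure inadequate.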


Machine learning is an important and exciting field with many applications, and the number of applications will only grow as algorithms become better at approximating human learning ability. There is much value and inspiration to be found in biological systems, even if they are not to be copied exactly. Computers have historically had difficulty operating in the real world: our surroundings change rapidly, and machine learning techniques have not been flexible enough to handle the flood of information that we sift through daily without effort. Machine learning algorithms are confined by inflexibility, and to break free of this limitation, human adaptability must be examined. The importance of finding new, more flexible techniques cannot be overstated, and neuroscience research provides one of the best avenues for doing so. The field of machine learning has benefited from neuroscience theories in the past and continues to do so today. For every discovery we make about the brain, there are still many more aspects we do not understand. Discoveries in the field of neuroscience, such as neurogenesis, could provide valuable insights into understanding and emulating fluid learning in machines. If computer scientists hope to create new breakthroughs in machine learning, they must pay close attention to those discoveries.


  1. Hassabis, D., Kumaran, D., Summerfield, C., & Botvinick, M. (2017). Neuroscience-inspired artificial intelligence. Neuron, 95, 245-258.
  2. Carey, S., & Bartlett, E. (1978). Acquiring a single new word. Papers and Reports on Child Language Development, 15, 17-29.
  3. Garg, S., Rish, I., Cecchi, G., & Lozano, A. (2017). Neurogenesis-inspired dictionary learning: Online model adaptation in a changing world. Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), 2017.
  4. LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521, 436-444.
  5. Eriksson, P. S., Perfilieva, E., Bjork-Eriksson, T., Alborn, A., Nordborg, C., Peterson, D. A., & Gage, F. H. (1998). Neurogenesis in the adult human hippocampus. Nature Medicine, 4, 1313-1317.
  6. Kempermann, G. (2002). Why new neurons? Possible functions for adult hippocampal neurogenesis. Journal of Neuroscience, 22(3), 635-638.