THE GREAT SPANISH neuroanatomist Santiago Ramón y Cajal believed that with personal commitment and willpower we can do almost anything with our brains. "Consider the possibility that any man could, if he so wished, be the sculptor of his own brain, and that even the least gifted may, like the poorest land that has been well cultivated and fertilised, produce an abundant harvest."
The long-standing academic argument over the relative importance of nature and nurture in the development of intelligence has led to a narrow focus on certain aspects of brain function to the exclusion of the really important ones – those that give us self-esteem, self-belief and satisfaction in our lives. There is no doubt that our genes (nature) and our environment (nurture) both contribute to brain development. Cajal was more interested in the additional role that each of us has in determining our own brain development.
American essayist Joan Didion describes her first acute realisation that she would have to do more than rely on the talents that her genes and her family environment had given her. She describes these talents as the "passive virtues" that served her well as a child, such as "good manners, clean hair and proven competence on the Stanford-Binet scale". At the age of 19, she realised that building her future self would involve hard work and rigour; it would take commitment, discipline, self-knowledge and the awareness of the danger of self-deception.
These practices and disciplines build what Didion calls self-respect and what is commonly called character. There are no short cuts to building character, as Didion wryly observes. "To say that Waterloo was won on the playing fields of Eton is not to say that Napoleon might have been saved by a crash program in cricket."
The process of perfecting the functions of a human brain does not rely only on the interaction of our genetic make-up and our family and social environment. Certainly our genes specify limits on what kind of brain we can have, and our upbringing has a major influence on our future. But we need to ask ourselves whether we are just the sum of our genes and our environment, or whether we have an opportunity to shape our own brains.
While this discussion seems in danger of wandering into the territory of novelist Ayn Rand or former British prime minister Margaret Thatcher in search of the aggressively individualistic self-made person, it is simply an attempt to identify each of the players in this personal drama and to assess the influence of each. I will look first at the way genes build the standard human brain, and then at the evidence for the influence of environment. I will ask whether it is possible for individuals to shape their own brains.
So we must begin with the genes. Our genes work hard to make a brain that works. Hundreds of millions of years of evolution have perfected the process of building a brain that keeps animals like us alive and enables us to survive and reproduce. Until the primates came along, this was pretty much the end of the story; the genes would build the brain under the influence of certain random environmental factors, and the brains so produced would in turn do a good job in ensuring the survival of the individual and the species. But primate brains have been partly freed from the tyranny of genes; instead of completing brain development by the time of birth, the primate brain continues to develop after birth, at the same time as the individual explores their world and makes choices about the way they want to live.
The human cerebrum has barely started to develop at the time of birth. It is a blank canvas that will be painted not only by parents, friends and teachers, but also by the capricious and sometimes wilful decisions that the child makes every day while exploring the world. Nor does this end with childhood. While we used to believe that the adult brain was incapable of significant change, we now know that the so-called mature brain is a seething mass of synaptic reorganisation, constantly trying to make sense of the world and to find ways of dealing with the people it encounters. The reason humans have such huge brains is partly to support their new-found intelligence and use of language, and partly to cope with the complex interactions with those around us. Richard Wilkinson has described the brain as a social organ, which works constantly to manage our strange and complex relationships of love and friendship, suspicion and fear.
EACH OF US has about 20,000 protein-coding genes, and more than half of them are expressed in the brain. We used to think that genes had a one-time-only role in giving a particular instruction to build cells and bodies, but genes have a complex life, being turned on and off at different times under the influence of other genes and the environment. Our own choices and the social world we create can influence which genes are expressing themselves and which genes are silent at any particular time.
At the earliest stages of brain development, our genes participate in a tightly choreographed dance to build the basic vertebrate brain that equips us for survival and reproduction. There is not much room for error here, and a single gene mutation may create a brain that has no chance of survival. But after this rigorously scripted early period, things loosen up somewhat, and the genes are able to respond to supportive and threatening changes in our world by expressing or turning off particular functions. This is the process by which the environment makes an imprint on the developing brain.
In the earliest stages of brain development, genes trigger the multiplication of cells in the primitive nervous system. In the first month of embryonic life, the primitive nervous system appears as a tube of cells. The embryonic cells that form the neural tube quickly multiply and specialise to form specific functional centres in the brain. The rate of multiplication is extraordinary and the few hundred cells in the primitive neural tube eventually generate about 90 billion cells in the adult brain. During the first two years of life, up to 250,000 new cells are generated every minute.
Even while multiplication of cells is going on, some of them mature and sprout processes that connect with other nerve cells (neurons) or muscle cells. These chemical and electrical connections are called synapses. Much of the basic wiring of the brain stem and spinal cord is laid down early in fetal development. Early movements of the fetus are evidence of the formation of connections between neurons and muscle cells. The early connections in the brain are mostly the ones required for survival, such as those that control breathing and eating. Mistakes at this stage would be fatal, and these vital early connections are tightly controlled by the genes.
A second category of connections starts late in fetal development and extends into childhood and adult life. These billions of later connections are made while the infant or adult interacts with their environment and learns new skills. They are not pre-programmed; they provide an opportunity for the brain to learn and adapt to the world around it. It was once believed that the formation of new connections stopped in childhood, but it is now known that new connections are a normal part of the life of all brains, both child and adult. The ability of the brain to form new connections is referred to as plasticity – the capacity to reshape patterns of connection in response to new information. It has been estimated that the brain forms an average of a million new connections a minute throughout life. There are many opportunities each day for us to shape the pattern of synapses in our brains.
Not all new connections work as well as they should, and billions of synapses are pruned during childhood so that the systems formed are accurate and efficient. The same pruning process is applied to neurons, and millions are killed off during development. There is evidence that neurons that have made inaccurate connections, or are unnecessary or inefficient, are deleted. These waves of synaptic pruning and neuronal death are part of the process of building an efficient nervous system. Both continue throughout life: the brain of a 60-year-old contains only about half as many cells as that of a 20-year-old.
BEFORE BIRTH, MOST of the development of the brain is driven by the genes. They orchestrate the formation of the brain's extraordinary wiring diagram, which enables the newborn baby to survive. After the baby is born, the development of the brain involves a complex interaction between the immediate environment and the progressive formation of billions of synaptic connections.
It has been shown that kittens that had one eye kept closed during a particular period after birth would never develop binocular vision – the ability to focus both eyes on a single object. The same thing happens to children who are born with a cataract that blocks their vision. This gave rise to the idea that there was a circumscribed period, originally called the critical period, during which a particular input must be received in order for a part of the brain to develop. The critical period concept had a major impact on theories of child development. It also created a serious concern that children deprived of a particular stimulus or environment for a short period might be deprived forever of some vital capability.
However, this pessimistic view is not supported by contemporary evidence, and it is now felt that the concept should be interpreted more generously. The current view is that critical periods are the ideal time for most children to receive a particular input, but later inputs can still be effective. While knowledge of research on critical periods can give us a guide to the importance of particular environments at certain stages, it would be a mistake to think that the critical periods are so rigidly specified that nothing can be done after the official critical period has passed. For this reason, the term "sensitive period" is preferred by many authors.
It is important to distinguish between recognising a sensitive period as an optimal time for input and the mistaken idea that it is a vital opportunity which, if missed, is lost forever. Recognising the opportunity presented by a defined sensitive period matters when planning the timing of interventions in pre-school and primary school settings.
Despite this more generous interpretation of the concept of the critical period in postnatal development, there is no doubt that some adverse early events are critical to fetal brain development. It has been shown that nutritional deprivation and some infections can have a devastating effect on fetal brain development. For example, inadequate folic acid levels in the first few weeks of pregnancy can result in abnormalities of neural development, called neural tube defects. The Australian ophthalmologist Norman Gregg discovered that rubella infection during the first three months of pregnancy can cause brain abnormalities. Similar impacts may be caused by the non-availability of food elements, such as protein and fat. It is not widely known that iodine deficiency during fetal development is the most important cause of mental retardation in the world. It affects millions of children in the mountainous regions of Asia, impairing them for the rest of their lives. We can note with pride that this was another Australian discovery, for which physician Basil Hetzel deserves a Nobel Prize.
One idea that has stemmed mistakenly from the critical period concept is the supposed value of environmental enrichment. This came from the observation that rats raised in sterile environments were deficient in cerebral cortical development compared with rats raised in "enriched" environments. The problem with these experiments was that the researchers were comparing rats in grossly deprived environments with rats raised in something approaching the complexity of a normal rat environment. It was not a question of comparing the effects of normal with enriched environments, but really a comparison of the effects of deprived with normal environments.
Some enthusiasts concluded, however, that babies should be bombarded with sensory stimuli for their brains to realise their potential. This led to the fashion for surrounding cribs with toys and hanging objects. In recent times it has led some educationists to recommend the playing of music by Mozart to help infant brains develop. While I think that early exposure to Mozart must be a wonderful thing, there has never been any convincing evidence that these enriched environments are more effective than a normal environment. The important conclusion that can be drawn from these studies is that human brains develop normally in a wide range of environments, from barely adequate to wonderfully enriched.
PERHAPS THE GREATEST mystery of biology is the way the most wonderful of structures can be built from humble components and then go on to modify itself to become a thinking organ. In pondering this mystery, scientist Douglas Hofstadter draws an analogy between the brain and an ant colony. Individual ants, like individual neurons, are almost useless in biological terms and have no significant intelligence and little ability to learn. An ant colony, on the other hand, can adapt to its environment and learn from experience. The complex division of labour in any part of the ant colony can be constantly adjusted to meet external threats or opportunities. These changes are triggered by processes of internal communication in the colony that we do not fully understand. In a similar way, the brain contains many different clusters of neurons communicating with each other through synapses. What confers intelligence on a collection of neurons of otherwise modest ability is the way that they constantly communicate with each other through synapses. More importantly, they can change the pattern of synapses in response to changing circumstances.
Hofstadter's analogy provides him with an opportunity to play with the paradoxical relationship between holism and reductionism. The holistic point of view insists that the ant colony or the brain can only be understood by recognising that the whole is far greater than the sum of its parts. The reductionists will counter with the view that the whole can be properly understood only through a detailed analysis of the basic components – the individual ants or neurons.
In fact, both points of view are simultaneously valid, as the two faces of one coin. The value of appreciating this paradox is that it can continually challenge us in our attempts to understand complex systems, whether they are neural networks or societal structures. Joseph LeDoux plays with this paradox in his book Synaptic Self (Viking, 2002). He argues that the basic unit of function in the brain is the synapse, the chemical and electrical connection point between neurons. Although an individual synapse is nothing more than a switch, the pattern of billions of synapses determines who we are. While the pattern of some groups of synapses is largely genetically determined and does not allow much change or adjustment, the pattern of synapses in other areas, such as the cerebral cortex, is influenced on a moment-to-moment basis by the choices we make as individuals and our personal interpretation of the world around us.
From this point of view, our synapses are who we are: the pattern of synapses we have helped shape during our life becomes our personality, our motivations and perhaps even our soul.