This is a golden age of cognitive science. Artificial intelligence (AI) can make predictions using ‘deep learning’. And neuroscientists have developed ways to predict brain conditions in unborn babies.
At a time of revolutionary scientific and technological breakthroughs, communicating these findings clearly is essential to public understanding. Otherwise, complex technologies are left shrouded in mystery.
"We're living in a genomics revolution," says Dr Hannah Critchlow, neuroscientist and science outreach fellow at the University of Cambridge. "All of this information is coming online about how a genome, this very individual and personalised genome, can give us a hereditary predisposition towards aspects of our life.
"20 weeks before birth, researchers and scientists are able to see the brain being built. They’re able to peer into the brain and see the different anatomical changes in neural circuitry that are related to really complex conditions of the brain like autism or ADHD (Attention Deficit Hyperactivity Disorder), or even symptoms that might not emerge for decades, like major depressive disorder, bipolar, or even schizophrenia."
Critchlow says these studies currently focus on medical conditions. But her 2019 book, The Science of Fate, considers how personality, behaviour and temperament could be predetermined using the same technology.
It is a frightening thought: any notion of free will could be scientifically contested. But she is adamant that public conversations are needed now more than ever, particularly around the ethics and morality of gene editing: "I think now is a good time for us as a society to decide how we want to start using this information. Do we want to start to appreciate neurodiversity rather than create a more homogenous society? We really need to all get involved in this discussion to try and think about the future of our species.
"There are a lot of scientists out there who are talking about their research and going to festivals like [Writers' Week], which is a wonderful way for people to exchange thoughts and ideas to help enrich all of our lives. It keeps our brains healthy by being able to learn new things and think in different ways. I get a huge sense of reward and pleasure from going out and talking about the brain with other people. I learn something from the public and I also feel it's almost a duty really, that scientists go out and do this kind of work," says Critchlow.
Dr Robert Elliott-Smith is a Senior Research Fellow of Computer Science at University College London. He's an expert in AI and evolutionary algorithms. His 2019 book, Rage Inside the Machine, explains how the internet exposes users to bigotry within online information bubbles, and seeks to reveal the prejudices within algorithms.
"I use 'prejudice' very specifically because what prejudice means is to prejudge, and you prejudge by basically making simplifications by effectively not looking at the entirety of evidence," says Elliott-Smith.
"What algorithms quite literally do is simplify and generalise; that really is the goal of an algorithm. So the simplification and generalisation in algorithms is the same process as [humans] being prejudiced. The way algorithms work is they are effectively driven by some sort of incentive and usually that's an economic incentive, and therefore their tendencies line up fairly closely with existing social biases."
The answer to why algorithms carry racist and sexist biases is complex. Elliott-Smith says we must take a historical look at data, information-gathering processes and the cultural contexts of the past: "[Algorithms] are a feedback loop and factors that influence it are certainly things like the fact that in Silicon Valley, representation is certainly not diversified and fair. It’s unfortunately a white male community that are creating most of the algorithms.
"However, it also has to do with the data we provide the algorithms. When you take the algorithms that are learning to read and understand speech – in order to learn they're being fed past documents and of course past documents carry all the prejudices of the past, including linguistic prejudices."
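The point about past documents carrying past prejudices can be seen even in the simplest possible "learning" procedure. The sketch below (an illustration, not anything from Elliott-Smith's book; the corpus and word pairs are invented) counts which pronoun each profession co-occurs with in a tiny, deliberately skewed set of old-fashioned sentences. The "learned" associations are nothing more than the biases already present in the data.

```python
# Toy illustration: a model trained on past text inherits that text's biases.
# Here, word associations are "learned" by plain co-occurrence counting --
# the simplest possible language model. The corpus is hypothetical.
from collections import Counter

# A tiny "historical corpus" carrying a linguistic prejudice:
corpus = [
    "he is a doctor",
    "he is an engineer",
    "she is a nurse",
    "he is a doctor",
    "she is a secretary",
]

# Count how often each profession appears with each pronoun.
assoc = Counter()
for sentence in corpus:
    words = sentence.split()
    pronoun, profession = words[0], words[-1]
    assoc[(profession, pronoun)] += 1

# The learned association simply mirrors the skew in the data:
print(assoc[("doctor", "he")], assoc[("doctor", "she")])  # 2 0
```

Real systems use far richer statistics (word embeddings, neural language models), but the principle is the same: if the training text associates professions with one gender, the model reproduces that association.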
Unlike most science communication in the media – which Critchlow says has seen a vast improvement in coverage and literacy – Elliott-Smith says the media is "terrible" at accurately reporting on AI.
"The thing I’m concerned about more than anything else is the media promotes the idea that machines can do doctors' jobs. What I strongly believe will happen in the future is that we're going to have a two-tier system, where people are going to get increasing services provided by machines that are not as good as services provided by human beings but are good enough for 'plebs'. And then people who have money in their pockets are going to get real doctors and real lawyers and real justice and everybody else is going to get machine-mediated services like those."
The future may seem dystopian, but with science communicators committed to accurately informing the media and lay public about their research, new light is being shed on what was once dark.
Writers' Week, Pioneer Women's Memorial Garden, 29 Feb - 9 Mar, 9.30am - 8.30pm, FREE