The following text is based on an article I wrote for the University of Sydney Business School and the School of Social and Political Sciences, which I submitted to the Fake News program.
Navigating the Digital Battlefield: Understanding Fake News, Misinformation and Disinformation
Fake news and disinformation have been prevalent since ancient times, evolving alongside technology and communication. In today’s digital age, they pose unprecedented challenges, influencing public opinion and societal norms like never before.
The Trojan Horse of Information
Metaphorically, the Trojan Horse symbolises a trick or stratagem that causes someone to invite a foe into a secure place; or to subvert from within using deceptive means to manipulate outcomes. A malicious computer program that tricks users into willingly running it is called a Trojan.
The Psychology of Manipulation
Manipulation consists of breaking into someone’s mind to lodge an opinion or provoke a behaviour without the person realising that an intrusion has occurred. Manipulation is omnipresent in everyday life and has been exacerbated by the advent of information technology. We are now experiencing first-hand the rapid transition from the neurobiological state, in which humans teach and learn from each other, to a new neurodigital state, in which AI acquires and processes knowledge as one rapidly expanding complex ecosystem.
Technological Platforms Driving Change
Deepfakes, ChatGPT (now available as a smartphone app and, some argue, soon to be superseded by Copilot, built on GPT-4) and X.AI are all technological platforms driving society into an uncertain future.
A Historical Perspective
How did we get here? In 1895, the French sociologist and psychologist Gustave Le Bon wrote The Crowd: A Study of the Popular Mind, one of the seminal works of crowd psychology. Inspired by Le Bon’s work, Edward Bernays, an American theorist considered a pioneer in the field of public relations and propaganda, created strategies to manipulate the masses.
The nephew of Sigmund Freud, Bernays touted the idea that the masses were irrational, driven by factors outside their conscious understanding. He posited that, consequently, people’s minds could and should be manipulated by the capable few who controlled the science of managing information released to the public, in a manner most advantageous to an organisation or a state. One of his many books, Propaganda (1928), gained special attention for theorising propaganda and then providing insight into its application. Joseph Goebbels – the Reich Minister for Propaganda between 1933 and 1945 – mastered Nazi propaganda to mobilise the masses for Adolf Hitler, and referred to Bernays’ work in his diaries.
The Birth of AI Communication
In 1966, MIT computer scientist Joseph Weizenbaum introduced ‘Eliza’, the first program that allowed some kind of credible conversation between humans and machines. Eliza would rephrase whatever speech input it was given in the form of a question. If you told Eliza, “A conversation with my friend left me angry”, it might ask, “Why do you feel angry?”
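Eliza’s trick of turning a statement back into a question can be sketched in a few lines of Python. This is a simplified illustration of the pattern-matching idea, not Weizenbaum’s original DOCTOR script; the rule and function names are invented for the example.

```python
import re

# Pronoun reflections used to mirror the user's statement back at them
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my",
}

def reflect(text: str) -> str:
    """Swap first- and second-person words so the echo reads naturally."""
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def eliza_respond(statement: str) -> str:
    """Turn a feeling statement into a question, Eliza-style."""
    # Look for a pattern like "... left me angry" and extract the feeling word
    match = re.search(r"\b(?:left|made|makes) (?:me|you) (\w+)", statement.lower())
    if match:
        return f"Why do you feel {match.group(1)}?"
    # Fallback: mirror the whole statement back as a question
    return f"Why do you say that {reflect(statement)}?"

print(eliza_respond("A conversation with my friend left me angry"))
# → Why do you feel angry?
```

The program has no understanding of anger or friendship; it only matches surface patterns, which is why Weizenbaum was unsettled by how readily people attributed empathy to it.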
Disinformation in Warfare
Fake news and disinformation have been widely used by states to justify war. The 2003 Iraq War, for example, was triggered by US and British claims based on ‘robust intelligence reports’ that Saddam Hussein had stockpiles of weapons of mass destruction (WMD). This was later proved to be false. Disinformation plays a key role in the current war in Ukraine.
Impact on Society and Children
Disinformation is omnipresent on social media. Its impact on children faced with online bullying and open access to distressing sites underscores the responsibility of social media companies to give children the information and tools they need as they grow up in the digital world. Parents and educators must take a key role in teaching children from a young age how to use the internet safely, just as they would with any dangerous appliance at home or hazard outdoors.
Ethical Considerations in the Digital Age
The race to create a ‘machine’ that outperforms humans ignores the fact that there is still a critical social element in every human interaction with information technology. Society needs time to reflect philosophically on the impact of technological change on the future of work, geopolitics, health and communication.
Can ethics be integrated into AI? Or are we risking a return to the separation between technology and philosophy that we experienced in the early part of the twentieth century, after Albert Einstein warned the world of the possibly tragic impact that technology could have on humanity? There are important ethical issues at play, which we cannot afford to underestimate.