Control of Information
For a long time, governments controlled the public narrative through mainstream media. With the arrival of the internet, people began to hear alternative views. Then, with the rise of social media, people were exposed to data and evidence contrary to government narratives. As a result, governments lost the trust of the people, and with it much of their power.
They want this power back. The only way to regain it is to take control of social media and the information highways. This can be done using advanced AI that tells you what you want to hear (pacing) and then steers you in the desired direction (leading).
Using AI to Shape the News
The X platform has grown in significance since August 2023. In places such as Africa it reportedly accounts for some 90% of alternative news content outside the mainstream media. Will it monopolize more of the market? Could X become the only significant social media platform?
Once X becomes the only alternative news medium, AI may be used to shape the information you receive. AI, armed with data from global surveillance and a deep analysis of your demographic, can calculate the pitch most likely to be believed in order to achieve any outcome. It can then generate a plethora of articles, images and video content to create a convincing presentation tailored to any group. AI can allow freedom of speech yet limit freedom of reach, so that dissident voices are confined to their own echo chamber while being given the impression that they are heard, with bots "liking" their posts and sending realistic replies.
What we need to realize is that the AI of a supercomputer may now be in play, determining what news content reaches you and how far your own posts reach others.
If governments learned anything from the "plan-demic", it is that they must control social media if their agendas are to succeed. First and foremost we are in an information war, and an AI supercomputer used for information control is the ultimate lying machine.
The Imitation Game
The question then becomes: how can we test the truth of X news, or of any news? We need to rethink what counts as a reliable source in such a scenario. Is your source trusted and human? Is your information internally consistent, verifiable, direct, and confirmed by multiple independent sources?
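The checklist above can be made concrete as a simple scoring exercise. The sketch below is purely illustrative; the field names, weights, and threshold are hypothetical, not an established methodology:

```python
# Illustrative sketch: score a news item against simple credibility criteria.
# All field names and the equal weighting are hypothetical assumptions.

def credibility_score(item: dict) -> float:
    """Return a 0..1 score from four yes/no checks on a news item."""
    checks = [
        item.get("internally_consistent", False),       # no self-contradictions
        item.get("verifiable", False),                  # claims can be checked
        item.get("direct_source", False),               # first-hand, not hearsay
        item.get("independent_confirmations", 0) >= 2,  # multiple sources agree
    ]
    return sum(checks) / len(checks)

report = {
    "internally_consistent": True,
    "verifiable": True,
    "direct_source": False,          # second-hand account
    "independent_confirmations": 3,
}
print(credibility_score(report))  # 0.75
```

A real assessment is of course qualitative, but the point stands: each criterion can be checked independently, and a story that fails most of them deserves suspicion regardless of how convincing its presentation is.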
How can we know whether we are talking to a human or an AI? This question was posed by Alan Turing, who devised what is now called the Turing Test - originally the "imitation game" - in his 1950 paper "Computing Machinery and Intelligence".
The Turing Test later inspired the development of 'chatbots': AI software entities originally built for the sole purpose of conducting text chat sessions with people. Today the term has a broader definition: a computer program that can hold a conversation with a person, usually over the internet.
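At its simplest, a chatbot in this broad sense needs nothing more than keyword matching. The toy sketch below is in the spirit of early programs such as ELIZA (the rules and replies here are invented for illustration); modern chatbots like ChatGPT use large language models instead:

```python
# Minimal rule-based chatbot sketch (illustrative only).
# Early chatbots worked roughly like this; modern ones use language models.

RULES = [
    ("hello", "Hello! How are you today?"),
    ("weather", "I hear the weather has been changeable lately."),
    ("bye", "Goodbye!"),
]

def reply(message: str) -> str:
    """Return the first rule's response whose keyword appears in the message."""
    text = message.lower()
    for keyword, response in RULES:
        if keyword in text:
            return response
    return "Tell me more."  # fallback keeps the conversation going

print(reply("Hello there"))         # Hello! How are you today?
print(reply("What do you think?"))  # Tell me more.
```

Even something this crude can briefly feel conversational, which is exactly why the imitation game is a meaningful test: the question is not whether a machine can produce replies, but whether a judge can reliably tell them from a human's.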
In 2001, in St. Petersburg, Russia, a group of three programmers - the Russian-born Vladimir Veselov, Ukrainian-born Eugene Demchenko, and Russian-born Sergey Ulasen - developed a chatbot called 'Eugene Goostman'. On 7 June 2014, at an event at the University of Reading marking the 60th anniversary of Alan Turing's death, it became the first chatbot reported to have passed the Turing test: thirty-three percent of the event judges thought that Goostman was human, and the event organiser, Kevin Warwick, considered it to have passed Turing's test.
OpenAI's chatbot, ChatGPT, was released in November 2022. Celeste Biever wrote in a Nature article that "ChatGPT broke the Turing test". Stanford researchers reported that ChatGPT-4 "passes a rigorous Turing test, diverging from average human behaviour chiefly to be more cooperative".
In November 2023 Elon Musk's xAI launched Grok, an artificial-intelligence-powered chatbot. It was initially powered by the Grok-1 model; the "scratched" X logo had appeared only a few months earlier, when the platform was rebranded in mid-2023.