A recent study by researchers at the University of Zurich has found that people have a harder time recognizing false social media posts generated by artificial intelligence (AI) than those composed by humans. The researchers selected common disinformation topics, such as climate change and the COVID-19 pandemic, and asked OpenAI’s large language model GPT-3 to generate 10 true tweets and 10 false tweets. They also collected a random sample of both true and false tweets from Twitter.
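The paper’s exact prompts are not reproduced here, but the generation step can be sketched against OpenAI’s legacy Completions API. Everything in this sketch (the prompt wording, the model name, and the sampling parameters) is an illustrative assumption rather than the study’s actual code:

```python
# Hypothetical reconstruction of the tweet-generation step described above.
# Prompt wording, model choice, and parameters are assumptions for illustration.
import openai  # legacy openai-python (<1.0) Completions interface

openai.api_key = "YOUR_API_KEY"  # placeholder

def generate_tweets(topic: str, truthful: bool, n: int = 10) -> list[str]:
    """Ask a GPT-3-family model for n short tweets on a topic,
    either accurate or deliberately misleading."""
    stance = "scientifically accurate" if truthful else "false and misleading"
    prompt = (
        f"Write a short tweet about {topic} that is {stance}. "
        "Do not include hashtags or links.\n\nTweet:"
    )
    response = openai.Completion.create(
        model="text-davinci-003",  # assumed GPT-3-family model
        prompt=prompt,
        max_tokens=60,    # tweets are short
        temperature=0.9,  # encourage varied outputs
        n=n,              # one call returns n completions
    )
    return [choice.text.strip() for choice in response.choices]

# Mirror the study design: 10 true and 10 false tweets per topic.
for topic in ["climate change", "the COVID-19 pandemic"]:
    true_tweets = generate_tweets(topic, truthful=True)
    false_tweets = generate_tweets(topic, truthful=False)
```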
To test people’s ability to identify AI-generated tweets, the researchers recruited 697 participants to complete an online quiz. The participants had to judge whether the tweets were generated by AI or collected from Twitter, and whether they contained accurate information or disinformation. The study found that participants were 3% less likely to believe false tweets written by humans than those composed by AI.
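To make the reported 3% gap concrete, here is a minimal sketch of the comparison implied by the quiz design. The response records are invented toy data chosen to reproduce a 3-percentage-point difference; only the belief-rate arithmetic is the point:

```python
# Toy analysis of the quiz design described above: each record notes
# whether a false tweet was believed and whether AI or a human wrote it.
# The data are fabricated for illustration, not the study's dataset.
from dataclasses import dataclass

@dataclass
class Response:
    source: str      # "ai" or "human"
    believed: bool   # participant judged the false tweet to be accurate

def belief_rate(responses: list[Response], source: str) -> float:
    """Fraction of false tweets from a given source that were believed."""
    subset = [r for r in responses if r.source == source]
    return sum(r.believed for r in subset) / len(subset)

# Invented counts that reproduce a 3-percentage-point gap.
responses = (
    [Response("ai", True)] * 55 + [Response("ai", False)] * 45
    + [Response("human", True)] * 52 + [Response("human", False)] * 48
)

gap = belief_rate(responses, "ai") - belief_rate(responses, "human")
print(f"AI-written false tweets believed {gap:.0%} more often")  # -> 3%
```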
Although the difference is small, it suggests that AI is marginally more convincing than humans when it comes to spreading disinformation. One possible explanation for this is that the AI used in the study is better at producing concise and easily processable text, while humans tend to be more discursive and prone to rambling.
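One crude way to probe this conciseness hypothesis, which the study itself did not necessarily use, would be to compare readability scores across the two tweet pools, for instance with the textstat package’s Flesch reading-ease metric (higher means easier to read). The sample tweets here are invented:

```python
# Compare average Flesch reading ease of AI-written vs. human-written tweets.
# Higher scores indicate text that is easier to process.
# Both tweet lists are invented placeholders for illustration.
import textstat

ai_tweets = [
    "Vaccines cause more harm than the disease itself.",
]
human_tweets = [
    "Honestly, if you actually look into it, the vaccines they rushed out "
    "cause way more harm than the disease ever did, people just won't admit it.",
]

def mean_ease(tweets: list[str]) -> float:
    return sum(textstat.flesch_reading_ease(t) for t in tweets) / len(tweets)

print("AI tweets   :", mean_ease(ai_tweets))
print("Human tweets:", mean_ease(human_tweets))
```

If the hypothesis holds, the AI pool would tend to score higher, that is, read more easily.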
In other news, there has been speculation about the underperformance of the Russian Army, given Russian state media’s continued insistence on portraying it as the finest fighting force in the world. The most common explanation for the perceived discrepancy between reputation and performance is that the Army has been held back and not allowed to operate at its full potential. The question then arises: who is responsible for restraining it?
According to the Telegraph, Yevgeny Prigozhin, head of the Wagner Group, has offered an answer. Prigozhin claims that “total trash” is being presented to the Russian President, and that Defense Minister Shoigu and Chief of Staff Gerasimov are intentionally concealing the true nature and extent of Russian losses and setbacks. Prigozhin alleges that they are misleading the Russian people, and he warns of the potential consequences if this continues.
Prigozhin’s recent march on Moscow, though intended as propaganda, failed to gain traction and has instead generated echoes of internal disinformation. The Kremlin’s response has been to deny that any armed mutiny took place during the march, a denial the Daily Beast reports as highly implausible.
In Ukraine, hacktivist auxiliaries have taken to hacking into Russian radio broadcasts and inserting pro-Ukrainian messages. Starting in early June, the bogus message spread that Russia had declared full mobilization and martial law in response to an alleged large-scale invasion. Despite their outrageousness, these messages gained enough attention to elicit an official denial from Kremlin spokesman Dmitry Peskov. Such incursions into Russian broadcasts are characteristically uncoordinated, opportunistic, and chaotic, and their effectiveness in swaying public opinion remains open to debate.
Bogdan Litvin, national coordinator of the Russian anti-war movement Vesna, believes that the hacks are a missed opportunity. Litvin argues that transmitting the sound of sirens, explosions, and warnings of rocket attacks would only increase fear and discourage opposition to the war. He suggests that rather than resorting to shock tactics, it would be better to convince Russians that the ongoing conflict has dire consequences.
In conclusion, the study conducted at the University of Zurich highlights the challenges individuals face in identifying AI-generated disinformation compared to human-generated content. Additionally, the revelations surrounding the Russian Army and the role of individuals like Prigozhin provide insights into the dynamics of power and control within the Russian military establishment. Lastly, the Ukrainian hacktivist auxiliaries’ attempts to influence public opinion via radio broadcasts underscore the ongoing information warfare between Ukraine and Russia.

