The study, carried out by researchers at Brown University, found that the majority of tweets from bots about climate change fall into the denial camp, with very few supporting the scientific evidence for its existence. It comes at a time when social media companies are under increasing pressure over their perceived lack of action in policing the spread of fake news. With news of Twitter recently suspending around 70 pro-Bloomberg accounts for what it calls “platform manipulation,” the Brown University report looks to shed more light on the question of who to trust online.
Twitter Bots and Climate Change Denial
The study, which was first picked up by the Guardian, focuses on tweets surrounding President Trump’s withdrawal from the Paris climate accord. The accord is a UN agreement focused on tackling climate change, supported by 189 member countries. Under President Trump, the US gave notice on 4th August 2017 that it would be leaving the agreement. The withdrawal itself cannot take effect until the day after the next presidential election, later this year.

The Brown University researchers collected 6.5 million tweets posted around the time of Trump’s original announcement. They found that 25% of those concerning climate change had been posted by bots. Of these bot tweets, the study states that the vast majority pushed the message that climate change was not real, or at least framed the issue in a negative manner. The accounts behind the tweets were identified using a tool called Botometer, which analyses a Twitter account’s activity and gives the probability that it is a bot.

The analysis also turned up further troubling activity. For example, it found that bots were responsible for 38% of tweets concerning “fake science”, and 28% of all tweets about Exxon.
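For readers curious to try this kind of scoring themselves, Botometer is publicly accessible through a Python client published by its creators. The snippet below is a minimal, illustrative sketch of checking a single account, not the pipeline used in the Brown study: the credentials and the account handle are placeholders, and the exact parameter names and response fields depend on the client and API version in use.

```python
# pip install botometer
import botometer

# Placeholder credentials -- real values come from a Twitter developer app
# and an API key for the Botometer endpoint (served via RapidAPI).
rapidapi_key = "YOUR_RAPIDAPI_KEY"
twitter_app_auth = {
    "consumer_key": "YOUR_CONSUMER_KEY",
    "consumer_secret": "YOUR_CONSUMER_SECRET",
    "access_token": "YOUR_ACCESS_TOKEN",
    "access_token_secret": "YOUR_ACCESS_TOKEN_SECRET",
}

bom = botometer.Botometer(
    wait_on_ratelimit=True,
    rapidapi_key=rapidapi_key,
    **twitter_app_auth,
)

# Score a single account; the handle here is purely illustrative.
# The response contains bot-likelihood scores (exact fields vary by API version).
result = bom.check_account("@example_handle")
print(result)
```

In practice, researchers treat the returned score as a probability and apply a cutoff to classify accounts as likely bots; the threshold used by the Brown team isn’t detailed here.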
Conservative Bots vs Liberal Bots
In the report, researchers identified that while the majority of bot tweets argued against climate change action, a small fraction argued in support of it. However, the stats are loaded in the denier bots’ favor – just 5% were found to be arguing against Trump’s position.

Tweets that spread misinformation are capable of a snowball effect. People whose views align with them are happy to use them as evidence, and retweet them. That’s what makes bots so dangerous, on either side of the political spectrum – their ability to cascade a lie or half-truth.

As for who is more likely to end up sharing such fake news, studies have found mixed results. For instance, a report by Princeton and New York University researchers found that social media users over the age of 65 who identify as Republicans are most likely to share fake news. People within this demographic were found to be seven times more likely to share the content than people aged 18 to 29. Yet the report noted that the results could be skewed by the far greater prevalence of fake news articles with a pro-Trump slant produced during the 2016 election. With the growth of fake news bots in 2020, it seems that right-wing slant remains.
Social Media Action
In the midst of fake news and election season, many are accusing social media platforms of being slow to act. Both Twitter and Facebook recently refused a request from Nancy Pelosi to remove an edited video that made her appear to protest Trump’s praise of the military and their families. In the video, labelled “Powerful American stories ripped to shreds by Nancy Pelosi,” the footage has been edited to make it appear that Pelosi repeatedly rips up the speech in disgust at the moment Trump recognises members of the military. In fact, she tore it apart only after Trump had finished his entire speech. Trump tweeted the video to his 72 million followers, and a Facebook spokesperson told CNBC that “the video doesn’t violate our policies.”

While this video of Pelosi wasn’t created by a bot, it is exactly the sort of story that is likely to be shared by one, and it’s a textbook demonstration of the ease with which ‘fake news’ can spread. However, instances like this could soon be a thing of the past, with Twitter announcing that it will remove or label “synthetic or manipulated video”, starting from 5th March.
Bloomberg Accounts Suspended by Twitter
Democratic presidential candidate Michael Bloomberg is no stranger to misinformation on Twitter. Recently, a video of his performance in one of the Democratic primary debates was edited to look as though he had left his rivals shocked and open-mouthed. “Am I the only one here who has started a business?” asks Bloomberg in the clip, which is followed by 22 seconds of reaction shots of rivals such as Elizabeth Warren and Bernie Sanders looking confused and nonplussed. In reality, there was a two-second pause before Bloomberg moved on.

However, Bloomberg also found himself on the wrong end of Twitter’s policies recently, when the social media network suspended around 70 of his affiliated accounts for deceptive representation. The accounts were part of Bloomberg’s attempt to mobilize an online army of supporters by hiring Californians to tweet positively about him. A Twitter spokesperson said the accounts, which were all posting identical messages in favor of Bloomberg, had been suspended for “platform manipulation and spam.” The messages came directly from Bloomberg HQ, according to a recent Wall Street Journal article.

As the presidential election ramps up in 2020, it’s likely we’ll see even more cases like these. Voices on both the left wing and the right, as well as foreign agents, may try to manipulate social media platforms, and those who use them, for their own gain. It seems that seeing is no longer believing, and in the age of fake news bots, we should all exercise a healthy amount of scepticism, no matter where we get our news from.