According to the technology news website MyBroadband, South Africa had over one hundred known ‘fake news’ sites in 2017. A recent study also shows that people over the age of 65 are more likely to share ‘fake news’ on social media.
In the run-up to South Africa’s general elections, the spread of such false information, particularly on social media platforms, is set to be one of the biggest stumbling blocks to holding credible elections.
Several media organisations, such as Media Monitoring Africa (MMA) and Africa Check, have been established in South Africa to act as watchdogs, seeking out websites that intentionally publish false information.
According to MMA’s Sarah Findlay, it is important to first distinguish between the different kinds of false information: “Instead of using the phrase ‘fake news’ we should use ‘misinformation’, which is the unintentional spreading of false information, and ‘disinformation’, which is the deliberate spreading of false information to cause public harm or for commercial purposes.”
Online news websites are, however, not the only sources of disinformation. Internet robots, or ‘bots’, are also misused to spread false news. Although many companies use bots to spread helpful and honest information, it is necessary to be alert to those that do not.
“The accounts that are of interest are those that are dishonest, whether automated or not. Dishonest automated accounts pose as actual human beings in order to lend social credence to their actions,” says Dr Aldu Cornelissen, lecturer and co-founder of the Computational Social Science research group at Stellenbosch University.
Despite the rules set out by online platforms such as Twitter and Facebook to regulate the types of accounts created, bots are still able to actively function. “It is relatively easy to create new profiles, and it takes a few extra steps to automate them. Even with all the effort made by Twitter, it is still possible to create them,” says Cornelissen.
“Many accounts are created en masse and lie dormant, playing it safe until called upon by the designer, usually after someone pays for some likes or retweets. These ‘aged’ bots are more valuable since they are more difficult to discredit without digging deeper into their activity,” he adds.
During high-profile trending events, bot activity tends to increase, because bots are most effective where attention is concentrated. Cornelissen highlights that the elections are therefore a perfect opportunity for such campaigns, since there is sustained attention on multiple fronts with many stakeholders: “The biggest threat of bots stems from the role of social media in a politically charged environment such as South Africa. Even though only a small minority is active on Twitter, it is where most social and political issues are discussed in the open. These narratives then naturally make their way into more traditional media, where the rest of the nation will take part in the conversation.
“Therefore, if you have a fringe ideology, idea or concept, you only need to get it trending on a platform like Twitter … these campaigns are fake grassroots movements. You get a key ‘honest’ individual to tweet a message or highlight an event, and you then pay a botnet to retweet the tweet. This pushes the tweet into more people’s attention, with the added benefit of having 1500 retweets, so ‘this must be a big issue’; you might then even retweet it yourself to share this ‘big issue’,” he adds.
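The pattern Cornelissen describes, a tweet suddenly boosted by a coordinated burst of retweets rather than by organic spread, can be illustrated with a simple sliding-window check. The sketch below is purely hypothetical: the function and its thresholds are the author’s illustration, not a tool used by MMA or the Computational Social Science group.

```python
from datetime import datetime, timedelta

def looks_like_paid_amplification(retweet_times, window_minutes=10, threshold=500):
    """Flag a tweet whose retweets arrive in one abnormally tight burst,
    the signature of a dormant botnet being called into action at once.
    The thresholds are illustrative guesses, not calibrated values."""
    times = sorted(retweet_times)
    window = timedelta(minutes=window_minutes)
    left = 0
    for right in range(len(times)):
        # Shrink the window from the left until it spans <= window_minutes.
        while times[right] - times[left] > window:
            left += 1
        # If enough retweets land inside a single window, flag the tweet.
        if right - left + 1 >= threshold:
            return True
    return False

# Example: 600 retweets arriving one per second (all within ten minutes).
burst = [datetime(2019, 5, 1, 12, 0) + timedelta(seconds=s) for s in range(600)]
print(looks_like_paid_amplification(burst))  # True
```

Real botnet detection weighs many more signals, such as account age and retweet-to-original ratios, but the timing signature alone already separates a paid burst from a tweet that spreads gradually.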
As technology develops, bots are becoming more sophisticated. There are, however, still ways to detect whether an account is a bot.
“A simple test is to run the account through Botometer. If you suspect it is a bot, report it. Finding a single bot in a large hidden botnet can lead to the banning of the entire botnet, and reporting makes things more troublesome and expensive for the bots’ creators. Particularly during the elections, people are encouraged to report suspicious accounts,” says Cornelissen.
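The Botometer check itself can be scripted. Below is a minimal sketch using the botometer Python package; it assumes you have your own Twitter API and RapidAPI credentials, and the ‘@suspected_bot’ handle and the exact result fields are illustrative, since these have changed between Botometer versions.

```python
import botometer  # pip install botometer

# Placeholder credentials: substitute your own RapidAPI and Twitter keys.
rapidapi_key = "YOUR_RAPIDAPI_KEY"
twitter_app_auth = {
    "consumer_key": "YOUR_CONSUMER_KEY",
    "consumer_secret": "YOUR_CONSUMER_SECRET",
}

bom = botometer.Botometer(wait_on_ratelimit=True,
                          rapidapi_key=rapidapi_key,
                          **twitter_app_auth)

# Score a single account. The 'cap' field is the complete automation
# probability reported by recent Botometer versions.
result = bom.check_account("@suspected_bot")
print(result["cap"]["universal"])
```

A score close to 1 suggests automation, but a human reviewer should still look through the account’s timeline before reporting it.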