The Logic’s subscribers believe that Ottawa hasn’t adequately addressed the spread of disinformation and violent content on social media platforms.
Almost 80 per cent of those who responded to the April survey said that the federal government should do more to halt the growth of misleading and problematic posts. About 38 per cent strongly agreed, while around 41 per cent somewhat agreed.
The response comes as governments across the world, including Canada, have begun to consider imposing regulations on Big Tech, particularly around content management.
The results are from The Logic’s April 2019 subscriber survey, which was conducted online via a private link sent to subscribers by email. All respondents were kept anonymous and duplicates were removed as needed. Subscribers were asked a series of questions; for this one, they were asked to indicate whether they agreed or disagreed with the following statement: “The federal government should do more to prevent the spread of disinformation and violent content on social media platforms.”
One subscriber advocated for imposing escalating financial penalties if companies don’t comply with the rules.
“Repeated violations could result in ever larger fines,” they said. “Traditional fines (which some firms may simply see as a ‘cost of doing business’) should be raised on repeated violations.”
“Markets don’t regulate themselves,” wrote another. They cited the growing anti-vaccination movement as an example of false information that has now become “a harmful and enduring presence.”
Social media companies have been under increased scrutiny over the kinds of content spreading among their users.
The criticism has particularly focused on violent content after the mass shootings at two mosques in Christchurch, New Zealand, were livestreamed on Facebook on March 15. The videos were then propagated across platforms like YouTube, Twitter and Reddit. Australia’s Parliament subsequently passed laws making it a criminal offence for online companies to fail to quickly take down what it called “abhorrent” content on their platforms. John Edwards, New Zealand’s privacy commissioner, tweeted that Facebook was “morally bankrupt” and called on his country to follow Australia’s lead.
Sites like Facebook and Reddit have also become breeding grounds for conspiracy theories and so-called fake news. The debunked Pizzagate theory began spreading on social media platforms in 2016; a Washington, D.C.-area pizza shop implicated by the theory was set on fire in February.
In Canada, the federal government is reportedly considering regulating tech giants like Facebook, but hasn’t provided any concrete measures as to how it would do so. “There is an onus on social media and digital platforms to better protect the digital public square by increasing efforts to prevent malicious cyber activity including the spread of disinformation,” said Democratic Institutions Minister Karina Gould in April.
She cited a U.K. government white paper released the same day that proposed fining platforms, blocking access to their sites and holding their executives personally liable for failing to remove terrorist, abusive and child sexual exploitation content. “I would also say this is not something we wouldn’t pursue going down here in Canada,” Gould added.
Members of Parliament have been urging the government to act for months. In a December 2018 report, the House of Commons Standing Committee on Access to Information, Privacy and Ethics recommended legislation that would require online platforms to “remove manifestly illegal content in a timely fashion, including hate speech, harassment and disinformation.” Not doing so would incur fines based on the company’s size. In its response to the group, the government said it would continue to engage with the companies and monitor their behaviour, but promised no such legislation.
In January, The Logic reported that Gould was set to announce a $7-million effort to fight the spread of fake news online, the first federal funding of its kind. The grants will reportedly be given to organizations running public-awareness campaigns aimed at improving digital literacy and curbing the spread of disinformation and misinformation in the lead-up to the federal election, scheduled for October.
But 21 per cent of The Logic’s April survey respondents disagreed that Canada’s government should be doing more to regulate problematic content on social media.
“This is a larger problem than one government,” wrote one subscriber. “It is evolving at such a great pace it is unclear to me what the federal government could do that would be meaningful in the future.”
Elected officials have begun to collaborate across borders on social media regulation. The International Grand Committee on Disinformation and ‘Fake News,’ which includes parliamentarians from nine countries, holds its second meeting in Ottawa on May 28. The group has called on several social media executives to testify, including Facebook CEO Mark Zuckerberg and COO Sheryl Sandberg, Google CEO Sundar Pichai and former Alphabet chair Eric Schmidt, Apple CEO Tim Cook, Amazon CEO Jeff Bezos and Snap CEO Evan Spiegel. Those summoned may not actually show up—Zuckerberg was also invited to its first meeting in London in November 2018, but instead sent a policy executive.
Many subscribers expressed unease with the government playing a role in content monitoring or moderation. One subscriber said it would be “dangerous to have government deciding what is news and what is not.” Another reader called it “the start of a slippery slope.”
One respondent wrote, “I’m not sure I trust all current politicians, or future ones for that matter, to not weaponize the idea of ‘disinformation protection’ to hide opinions they don’t like.”