Misinformation


The Dangers of Domestic Disinformation

Here are a few numbers that highlight the ballooning problem of disinformation: Facebook took down 3 billion fake accounts in 2019. 3 billion. One study suggested that 15% of Twitter’s 330 million monthly users are bots. Bots have a massive multiplier effect on disinformation because they are far more prolific than humans, tweeting hundreds of times a day. Some studies estimate that more than 60% of President Trump’s 80+ million followers are bots.

People often talk about how we should be worried about Russian trolls on social media, but the fact is that domestic disinformation is running rampant. Americans are intentionally feeding other Americans false or misleading information about Covid-19 and the George Floyd demonstrations, along with other conspiracy theories, and we are going to see much more of it as the election approaches.

As a parent, what can you do to help your kids navigate all of this “fake news”? First, we need to recognize that many conspiracy theories are very seductive. We are often drawn to a particular explanation because it confirms our own (sometimes hidden) prejudices and biases. Second, you and your kids need to learn to vet information and not be satisfied with whatever comes up in the first few entries of a web search. Be prepared to search out and read different viewpoints on a topic to get at the facts.

Trust in Social Media Platforms Waning

A recent study conducted by OpenX and The Harris Poll points to shifting consumer sentiment regarding social media platforms. The study found that 61% of respondents first use search engines like Google to discover "high-quality content," while 31% first turn to major platforms like Facebook, Instagram and YouTube. Compared to last year, 31% say they use Facebook less, and 26% say they will decrease their Facebook time going forward.

Maps Are Extremely Easy to Fake or Manipulate

For all the talk these days about "fake news" and "deepfake" videos, there hasn't been much chatter about fake maps. Information on a map is easy to manipulate, and most people just glance at maps without digging into the data sets that serve as their source material. Using maps related to the coronavirus as an example, theconversation.com highlights some key questions you and your kids should ask whenever you look to a map for information.

Groups Urge Facebook Advertisers to Boycott Platform Over Hate Speech

Civil rights groups, including the Anti-Defamation League, Color of Change, Common Sense Media, Free Press, the NAACP and Sleeping Giants, are launching a social media campaign, #StopHateForProfit, urging large Facebook advertisers to boycott the platform unless it takes formal steps to curtail the proliferation of hate speech. The groups are also asking Facebook to remove ads flagged as misinformation or hateful, to inform advertisers when their media buys appear near harmful content, and to grant refunds in those cases. The list of companies taking part is growing by the day, although critics have questioned the boycott’s effectiveness, pointing out that these companies are not taking down their Facebook pages and will most likely buy more ads on Facebook after July.

These actions are one example of recent backlash against Facebook, which seemed to intensify when a flurry of misinformation appeared on the social platform amid worldwide protests against racism and police brutality. The company declined to take action against posts from President Trump — despite Twitter flagging that same content as misleading or glorifying violence. Facebook did remove ads from Mr. Trump’s re-election campaign that featured a symbol used by the Nazis during World War II. The company also announced that it would gradually allow users to opt out of seeing political ads, and has acknowledged in a blog post that its enforcement of content rules “isn’t perfect.”

Google Will Fact Check Images

With the number of fake images flooding social media and even mainstream media platforms, Google is introducing fact check labels for images in its search results to help crack down on manipulated photos. When you conduct a search on Google Images, you may see a ‘Fact Check’ label under some of the thumbnail results. Tapping the label brings up a summary of the fact check associated with the image, so you can judge how trustworthy it is. The tech giant says these labels may appear both for results that are fact-check articles about specific images and for results whose articles include a fact-checked image in the story. "Starting today, we are surfacing fact check information in Google Images globally to help people navigate these issues and make more informed judgments about what they see on the web. This builds on the fact check features in Search and News, which people come across billions of times per year," Google said in a post.

Facebook to Identify Content from State-Run Media

Facebook says it will start labeling content produced by at least 18 government-controlled news outlets, including Russia's RT and China's Xinhua News. The social platform will also begin labeling ads from these outlets and plans to block their ads in the US in the near future. This is a bit of a reversal for Facebook, which has been unwilling to label misinformation or election-related material.

The “Freedom of Reach” Question

A new term – “freedom of reach” – is in circulation among those who are concerned about how social media sites handle misinformation and inflammatory comments. Snapchat is the latest to grapple with the question of “freedom of reach versus freedom of speech,” after Twitter decided to label tweets from President Trump that it considers misleading or “glorifying violence” and Facebook agonized but decided to do nothing. Snapchat’s approach is to stop promoting President Trump’s verified Snapchat account. His account, RealDonaldTrump, will remain on the platform and continue to appear in search results. But he will no longer appear in the app’s Discover tab, which promotes news publishers, elected officials, celebrities, and influencers. “We are not currently promoting the president’s content on Snapchat’s Discover platform,” Snapchat said in a statement. “We will not amplify voices who incite racial violence and injustice by giving them free promotion on Discover. Racial violence and injustice have no place in our society and we stand together with all who seek peace, love, equality, and justice in America.”

Since Snapchat is one of the social media sites used mainly by teens and young adults, the fairness of the “freedom of reach” question is one you might want to discuss with your children in the context of misinformation online. Snapchat isn’t deleting Trump’s account, and he is free to keep posting to existing followers. But to the extent that his Snapchat account grows in the future, it will be without Snapchat’s help. In Snapchat’s terms, the company has preserved Trump’s speech while making him responsible for finding his own reach. Trump’s campaign thinks this approach is unfair, but Snapchat has neatly sidestepped questions of censorship by not censoring the president at all. Instead the company has said that if you want to see the president’s snaps, you’ll have to go look for them on your own time.

Watch Out for Deepfake Videos and Images

Here is another vocabulary term you need to add to your lexicon – deepfakes. Deepfakes are realistic-looking videos, images, or audio clips, often built from photos and recordings pulled from social media accounts, that show people (sometimes people who never existed) saying and doing things they never actually said or did. Deepfake technology enables anyone with a computer and an Internet connection to create this kind of content, and it is being used for extortion, misinformation and disinformation. Cybercriminals are increasingly interested in using deepfake videos to pressure people into paying ransom or divulging sensitive information, or to spread misinformation, Trend Micro reports, making the vetting of anything you see online or in the media even more important.

Understanding Section 230 of the Communications Decency Act

It will be very interesting to see what effect the Executive Order President Trump recently signed targeting Section 230 of the Communications Decency Act has on cyberbullying and misinformation online. While you may have never heard of this section of the law, it was created almost 25 years ago to protect Internet platforms from liability for many of the things third parties say or do on them. But now it’s under threat from President Trump, who hopes to use the order to fight back against the social media platforms he believes are unfairly censoring him and other conservative voices. Some critics say he is trying to bully these platforms into letting him post anything he wants without correction or reprimand, even when he has broken a site’s rules about posting bullying comments or questionable information.

In a nutshell, Section 230 says that Internet platforms that host third-party content (for example, tweets on Twitter, posts on Facebook, photos on Instagram, reviews on Yelp, or a news outlet’s reader comments) are not liable for what those third parties post, with a few exceptions. For instance, if a Yelp reviewer were to post something defamatory about a business, the business could sue the reviewer for libel, but it couldn’t sue Yelp. Without Section 230’s protections, the Internet as we know it today would not exist; if the law were taken away, many websites driven by user-generated content would likely shut down. As Senator Ron Wyden, one of the authors of Section 230, puts it, the law is both a sword and a shield for platforms: they are shielded from liability for user content, and they have a sword to moderate it as they see fit.

That doesn’t mean Section 230 is perfect. Some argue that it gives platforms too little accountability, allowing some of the worst parts of the Internet (think cyberbullying that parents or schools struggle to have removed, or misinformation that stays online for all to see with little recourse) to flourish along with the best. Simply put, Internet platforms have been happy to use the shield to protect themselves from lawsuits, but they have largely ignored the sword they could use to moderate the bad things their users upload. It is also worth remembering that cyberbullying accounts for less than one tenth of one percent of all online traffic, but these sites still need to acknowledge their role and do more about it.

All that said, this protection has allowed the Internet to thrive. Think about it: Websites like Facebook, Instagram, and YouTube have millions and even billions of users. If these platforms had to monitor and approve every single thing every user posted, they simply wouldn’t be able to exist. No website or platform can moderate at such an incredible scale, and no one wants to open themselves up to the legal liability of doing so. But if that free flow of information and creativity goes away, our online world will be very different.

So where do we stand? While the executive order sounds strict (and a little frightening, with the government making "watch lists" of people who post or support "certain kinds" of content), legal experts don’t seem to think much, if any, of it can be backed up, citing First Amendment concerns. It’s also unclear whether the Federal Communications Commission has the authority to regulate Section 230 in this way, or whether the president can change the scope of a law without any congressional approval.

Facebook Planning to Use Artificial Intelligence To Combat Hateful Memes

Facebook is combating hate speech and misinformation by developing natural language processing models and a database of example memes for training artificial intelligence moderators. The company, together with DrivenData, is also launching the Hateful Memes Challenge, which will award $100,000 to researchers who develop AI models that can detect hate speech in memes.
