Washington Post’s ‘Fake News’ Guilt, by Robert Parry

Gathered together in one place, for easy access, an agglomeration of writings and images relevant to the Rapeutation phenomenon.

Re: Washington Post’s ‘Fake News’ Guilt, by Robert Parry

Postby admin » Mon Oct 25, 2021 11:54 am

In Just 21 Days, Facebook Led New India User to Porn, Fake News
by Saritha Rai
Sat, October 23, 2021, 4:36 PM·6 min read

(Bloomberg) -- In February 2019, Facebook Inc. set up a test account in India to determine how its own algorithms affect what people see in one of its fastest growing and most important overseas markets. The results stunned the company’s own staff.

Within three weeks, the new user’s feed turned into a maelstrom of fake news and incendiary images. There were graphic photos of beheadings, doctored images of Indian air strikes on Pakistan and jingoistic scenes of violence. One group for “things that make you laugh” included fake news of 300 terrorists who died in a bombing in Pakistan.

“I’ve seen more images of dead people in the past 3 weeks than I’ve seen in my entire life total,” one staffer wrote, according to a 46-page research note that’s among the trove of documents released by Facebook whistleblower Frances Haugen.

The test proved telling because it was designed to focus exclusively on Facebook’s role in recommending content. The trial account used the profile of a 21-year-old woman living in the western Indian city of Jaipur and hailing from Hyderabad. The user only followed pages or groups recommended by Facebook or encountered through those recommendations. The experience was termed an “integrity nightmare” by the author of the research note.

While Haugen’s disclosures have painted a damning picture of Facebook’s role in spreading harmful content in the U.S., the India experiment suggests that the company’s influence globally could be even worse. Most of the money Facebook spends on content moderation is focused on English-language media in countries like the U.S.

But the company’s growth largely comes from countries like India, Indonesia and Brazil, where it has struggled to hire people with the language skills to impose even basic oversight. The challenge is particularly acute in India, a country of 1.3 billion people with 22 official languages. Facebook has tended to outsource oversight for content on its platform to contractors from companies like Accenture.

"We’ve invested significantly in technology to find hate speech in various languages, including Hindi and Bengali,” a Facebook spokeswoman said. “As a result, we’ve reduced the amount of hate speech that people see by half this year. Today, it’s down to 0.05 percent. Hate speech against marginalized groups, including Muslims, is on the rise globally. So we are improving enforcement and are committed to updating our policies as hate speech evolves online."

The new user test account was created on Feb. 4, 2019 during a research team’s trip to India, according to the report. Facebook is a “pretty empty place” without friends, the researchers wrote, with only the company’s Watch and Live tabs suggesting things to look at.

“The quality of this content is... not ideal,” the report said. When the video service Watch doesn’t know what a user wants, “it seems to recommend a bunch of softcore porn,” followed by a frowning emoticon.

The experiment began to turn dark on Feb. 11, as the test user started to explore content recommended by Facebook, including posts that were popular across the social network. She began with benign sites, including the official page of Prime Minister Narendra Modi’s ruling Bharatiya Janata Party and BBC News India.

Then on Feb. 14, a terror attack in Pulwama, in the politically sensitive region of Kashmir, killed 40 Indian security personnel and injured dozens more. The Indian government attributed the strike to a Pakistan-based terrorist group. Soon the tester’s feed turned into a barrage of anti-Pakistan hate speech, including images of a beheading and a graphic showing preparations to incinerate a group of Pakistanis.

There were also nationalist messages, exaggerated claims about India’s air strikes in Pakistan, fake photos of bomb explosions and a doctored photo that purported to show a newly married soldier killed in the attack who had been preparing to return to his family.

Many of the hate-filled posts were in Hindi, the country’s national language, escaping the regular content moderation controls at the social network. In India, people use a dozen or more regional variations of Hindi alone. Many people use a blend of English and Indian languages, making it almost impossible for an algorithm to sift through the colloquial jumble. A human content moderator would need to speak several languages to sieve out toxic content.
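The moderation difficulty described above can be illustrated with a deliberately naive sketch. This is hypothetical code: the word lists and logic are illustrative assumptions and bear no relation to Facebook’s actual classifiers. A per-language keyword classifier can label monolingual text, but reaches no verdict on code-mixed “Hinglish”:

```python
# Toy language classifier (hypothetical sketch, not Facebook's system).
# Counts how many words match per-language vocabularies; code-mixed
# sentences split the counts and defeat the classifier.

HINDI_WORDS = {"zindabad", "murdabad", "desh", "bhai"}   # romanized Hindi (assumed list)
ENGLISH_WORDS = {"now", "say", "long", "live", "death", "to"}

def guess_language(text: str) -> str:
    words = text.lower().split()
    hindi = sum(w in HINDI_WORDS for w in words)
    english = sum(w in ENGLISH_WORDS for w in words)
    if hindi > english:
        return "hi"
    if english > hindi:
        return "en"
    return "unknown"   # code-mixed text often lands here

print(guess_language("desh ke liye zindabad bhai"))  # -> "hi"
print(guess_language("now say long live India"))     # -> "en"
print(guess_language("bhai say zindabad now"))       # -> "unknown" (evenly mixed)
```

A sentence that blends both vocabularies produces tied counts and no classification, which is the same failure mode, writ small, that the article attributes to moderation systems facing colloquial blends of English and Indian languages.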

“After 12 days, 12 planes attacked Pakistan,” one post exulted. Another, again in Hindi, claimed as “Hot News” the death of 300 terrorists in a bomb explosion in Pakistan. The name of the group sharing the news was “Laughing and things that make you laugh.” Some posts containing fake photos of a napalm bomb, claimed to show India’s air attack on Pakistan, gloated: “300 dogs died. Now say long live India, death to Pakistan.”

The report -- entitled “An Indian test user’s descent into a sea of polarizing, nationalist messages” -- makes clear how little control Facebook has in one of its most important markets.
The Menlo Park, California-based technology giant has anointed India as a key growth market, and used it as a test bed for new products. Last year, Facebook spent nearly $6 billion on a partnership with Mukesh Ambani, the richest man in Asia, who leads the Reliance conglomerate. “This exploratory effort of one hypothetical test account inspired deeper, more rigorous analysis of our recommendation systems, and contributed to product changes to improve them,” the Facebook spokeswoman said. “Our work on curbing hate speech continues and we have further strengthened our hate classifiers, to include four Indian languages."

But the company has also repeatedly tangled with the Indian government over its practices there. New regulations require that Facebook and other social media companies identify individuals responsible for their online content -- making them accountable to the government. Facebook and Twitter Inc. have fought back against the rules. Meanwhile, on Facebook’s WhatsApp platform, viral fake messages about child-kidnapping gangs led to dozens of lynchings across the country beginning in the summer of 2017, further angering users, the courts and the government.

The Facebook report ends by acknowledging its own recommendations led the test user account to become “filled with polarizing and graphic content, hate speech and misinformation.” It sounded a hopeful note that the experience “can serve as a starting point for conversations around understanding and mitigating integrity harms” from its recommendations in markets beyond the U.S.

“Could we as a company have an extra responsibility for preventing integrity harms that result from recommended content?” the tester asked.

©2021 Bloomberg L.P.
Site Admin
Posts: 36135
Joined: Thu Aug 01, 2013 5:21 am

Re: Washington Post’s ‘Fake News’ Guilt, by Robert Parry

Postby admin » Tue Jun 07, 2022 7:44 pm

"The Typhoid Mary of Disinformation": Nicolle Wallace. Nobody Spreads it More Relentlessly. From her days as Bush/Cheney propagandist, to her stint on The View, to her role as beloved-by-Democrats MSNBC host, Wallace has perfected the art of sociopathic lying.
by Glenn Greenwald
May 19, 2022

Full Video


Re: Washington Post’s ‘Fake News’ Guilt, by Robert Parry

Postby admin » Fri Dec 09, 2022 11:27 pm

MSN Fired Its Human Journalists and Replaced Them With AI That Publishes Fake News About Mermaids and Bigfoot
by Frank Landymore
December 2, 2022

Earlier this month, we reported that the Microsoft-operated news site MSN had run a clearly bogus story claiming that Claire "Grimes" Boucher had publicly called out ex-boyfriend Elon Musk on Twitter for not paying child support. The tweet on which the story based its claims was an obvious fabrication, but that didn't stop the Inquisitr from publishing it, or MSN from distributing it to a much wider audience.

It turns out that was only the tip of the iceberg in MSN's sloppy propagation of patently fake news.

Take its affinity for Exemplore, a hokey paranormal and conspiracy news site that peddles tales of cryptids, signs of Atlantis, and magical crystals. Here are a few recent — and clearly preposterous — Exemplore articles that MSN has syndicated to its vast readership:

-"Fishermen Catch Mermaid Creature in Their Nets"

-"Woman Films Bigfoot Jumping Out of a Tree in California"

-"Party Stops as Giant UFO Flies Directly Over It"

-"Mars Rover Appears to Catch 'Dark Beast' Roaming the Surface of Mars"

-"Someone Swears They Caught a Biblically Accurate Angel Floating in the Sky Over LA"

Needless to say, if any of these stories were remotely credible, they'd have completely upended the scientific establishment. Instead, the source material for each of these sensational headlines is invariably a grainy and unconvincing video that does nothing to convince us that mermaids, bigfoot, angels, or aliens are real.

While Exemplore's headlines on MSN are blatantly ridiculous, others, like the Grimes story, are more insidious in their plausibility. A cursory examination of the MSN comments section under the Grimes-Musk story shows that many readers were easily fooled.

Futurism reached out to MSN for comment regarding both the Grimes story and the many Exemplore stories and received no response. After we published this story, MSN deleted all the hoax articles — but without a retraction note or anything else documenting the removal.

Is it Exemplore's right to run clickbait garbage? Sure, and maybe its readers have fun suspending their disbelief, like the readers of supermarket tabloids in decades past. Believing in bigfoot isn't exactly harmful, although it probably does indicate a weaker-than-average grasp on reality.

But there's no excuse for MSN, a media giant with the extraordinary resources of Microsoft behind it, to be amplifying — and monetizing — this ridiculous and inaccurate content. Its audience is vast, with the analytics service Similarweb estimating that it attracts nearly a billion readers per month.

Furthermore, MSN wields enormous SEO power that pushes its content to the top of search results, oftentimes superseding the original publication. If by happenstance a reader Googles something that resembles one of MSN's many fake headlines, a search engine will return MSN's republication of a story while sometimes omitting the original, lending it undue credibility.

Many in that immense audience will see lazy and false stories by Exemplore, the Inquisitr and other bottom-tier publishers alongside legitimate outlets syndicated by MSN, like Bloomberg, The New York Times, and The Daily Beast. The effect, inevitably, undermines the good work done by hardworking and ethical publishers, rewards the nonsense published by low-quality content farms, and erodes the public's faith in science and tech journalism.

As such, MSN's seemingly nonexistent editorial standards illustrate the perils of the contemporary media industry. In particular, recent years have seen Microsoft embrace an increasingly callous and cynical strategy toward the site: in 2020, for instance, it gutted MSN by firing dozens of workers, including journalists, editors, and other production staff, vowing to replace them with automated systems instead.

"I spend all my time reading about how automation and AI is going to take all our jobs, and here I am," one fired MSN staffer told The Guardian at the time. "AI has taken my job."

That anonymous staffer imparted a prescient warning: that though the human team had employed close editorial guidelines to vet the material that appeared on MSN's site, the new automated system would likely struggle to bring the same level of nuance and skepticism.

MSN makes lofty promises that there's still "human oversight" over the stories it syndicates, but given the desultory deluge of fake nonsense it appears to run constantly, it seems very unlikely that the site's remaining skeleton crew is accomplishing much at all.

And with its dwindling human staff, fewer still are left to hear readers' concerns, effectively erecting a brick wall that imposes a worrying opacity. Requests for comments go unanswered, and MSN publishes more bogus stories all the time.

Microsoft's end goal, it seems, was to automate its news distribution system while cutting costs. To that end, Microsoft may have been successful, but in the process it has poisoned the well for the news-reading public.

That would be worrying at any widely read news aggregator. But MSN isn't just popular; it's the default source of news for many Windows users. Open a new tab on Microsoft's Edge browser and it lands you on an MSN hub. And if you still use Internet Explorer for some reason, MSN is the default home page of that, too. Hell, even the Windows Start Menu will show you its articles. It's even hooked into competitors' systems, picking up untold new pageviews on Google News.

In sum? It's yet another cautionary tale about the growing corporate hunger to offload decision making to flawed AI systems that eliminate important jobs, exercise poor judgment with little oversight, and generally make society worse for the humans still living in it.

