
Social media manipulation by political actors an industrial-scale problem – Oxford report

Social media manipulation of public opinion is a growing threat to democracies around the world, according to the 2020 media manipulation survey from the Oxford Internet Institute, which found evidence of manipulation campaigns in every one of the 81 countries surveyed.

Organised social media manipulation campaigns were found in each of the 81 surveyed countries, up from 70 countries in 2019, a 15% increase in a single year. Governments, public relations firms and political parties are producing misinformation on an industrial scale, according to the report. It shows disinformation has become a common strategy, with 93% of the countries surveyed (76 out of 81) seeing disinformation deployed as part of political communication.


Professor Philip Howard, Director of the Oxford Internet Institute and the report’s co-author, says, ‘Our report shows misinformation has become more professionalised and is now produced on an industrial scale. Now, more than ever, the public needs to be able to rely on trustworthy information about government policy and activity. Social media companies need to raise their game by increasing their efforts to flag misinformation and close fake accounts without the need for government intervention, so the public has access to high-quality information.’


The OII team warns that the level of social media manipulation has soared, with governments and political parties spending millions on private-sector ‘cyber troops’ who drown out other voices on social media. Citizen influencers, including volunteers, youth groups and civil society organisations that support their ideologies, are also used to spread manipulated messages.

OII alumna Dr Samantha Bradshaw, the report’s lead author, says, ‘Our 2020 report highlights the way in which government agencies, political parties and private firms continue to use social media to spread political propaganda, polluting the digital information ecosystem and suppressing freedom of speech and freedom of the press. A large part of this activity has become professionalised, with private firms offering disinformation-for-hire services.’

Key findings the OII researchers identified include:

  • Private ‘strategic communications’ firms are playing an increasing role in spreading computational propaganda, with researchers identifying state actors working with such firms in 48 countries.
  • Almost $60 million has been spent on firms that use bots and other amplification strategies to create the impression of trending political messaging.
  • Social media has become a major battleground, with firms such as Facebook and Twitter taking steps to combat ‘cyber troops’; some $10 million has been spent on social media political advertisements. The platforms removed more than 317,000 accounts and pages run by ‘cyber troop’ actors between January 2019 and November 2020.

Cyber troops are frequently directly linked to state agencies. According to the report, ‘In 62 countries, we found evidence of a government agency using computational propaganda to shape public attitudes.’

Established political parties were also found to be using social media to ‘spread disinformation, suppress political participation, and undermine oppositional parties’, say the Oxford researchers.  

According to the report, ‘In 61 countries, we found evidence of political parties or politicians running for office who have used the tools and techniques of computational propaganda as part of their political campaigns. Indeed, social media has become a critical component of digital campaigning.’


Dr Bradshaw adds, ‘Cyber troop activity can look different in democracies compared to authoritarian regimes. Electoral authorities need to consider the broader ecosystem of disinformation and computational propaganda, including private firms and paid influencers, who are increasingly prominent actors in this space.’

The report explores the tools and techniques of computational propaganda, including the use of fake accounts – bots, humans and hacked accounts – to spread disinformation. It finds:

  • 79 countries used human accounts,
  • 57 countries used bot accounts, and
  • 14 countries used hacked or stolen accounts.

Researchers examined how cyber troops use different communication strategies to manipulate public opinion, such as creating disinformation or manipulated media, data-driven targeting and employing abusive strategies such as mounting smear campaigns or online harassment. The report finds:

  • 76 countries used disinformation and media manipulation as part of their campaigns,
  • 30 countries used data-driven strategies to target specific users with political advertisements,
  • 59 countries used state-sponsored trolls to attack political opponents or activists, up from 47 countries in 2019.

The 2020 report draws upon a four-step methodology employed by Oxford researchers to identify evidence of organised manipulation campaigns worldwide: a systematic content analysis of news articles on cyber troop activity, a secondary literature review of public archives and scientific reports, the generation of country-specific case studies, and expert consultations.

The research work was carried out by Oxford researchers between 2019 and 2020. Computational Propaganda project research studies are published at




The Media Equation

Inside the ‘Misinformation’ Wars

Journalists and academics are developing a new language for truth. The results are not always clearer.


By Ben Smith

On Friday afternoons this fall, top American news executives have dialed into a series of off-the-record Zoom meetings led by Harvard academics whose goal is to “help newsroom leaders fight misinformation and media manipulation.”

Those are hot topics in the news industry right now, and so the program at Harvard University’s Shorenstein Center on Media, Politics and Public Policy drew an impressive roster of executives at CNN, NBC News, The Associated Press, Axios and other major U.S. outlets.

A couple of them, though, told me they were puzzled by the reading package for the first session.

It consisted of a Harvard case study, which a participant shared with me, examining the coverage of Hunter Biden’s lost laptop in the final days of the 2020 campaign. The story had been pushed by aides and allies of then-President Donald J. Trump who tried to persuade journalists that the hard drive’s contents would reveal the corruption of the father.

The news media’s handling of that narrative provides “an instructive case study on the power of social media and news organizations to mitigate media manipulation campaigns,” according to the Shorenstein Center summary.

The Hunter Biden laptop saga sure is instructive about something. As you may recall, panicked Trump allies frantically dumped its contents onto the internet and into reporters’ inboxes, a trove that apparently included embarrassing images and emails purportedly from the candidate’s son showing that he had tried to trade on the family name. The big social media platforms, primed for a repeat of the WikiLeaks 2016 election shenanigans, reacted forcefully: Twitter blocked links to a New York Post story that tied Joe Biden to the emails without strong evidence (though Twitter quickly reversed that decision) and Facebook limited the spread of the Post story under its own “misinformation” policy.

But as it now appears, the story about the laptop was an old-fashioned, politically motivated dirty tricks campaign, and describing it with the word “misinformation” doesn’t add much to our understanding of what happened. While some of the emails purportedly on the laptop have since been called genuine by at least one recipient, the younger Mr. Biden has said he doesn’t know if the laptop in question was his. And the “media manipulation campaign” was a threadbare, 11th-hour effort to produce a late-campaign scandal, an attempt at an October Surprise that has been part of nearly every presidential campaign I’ve covered.

The Wall Street Journal, as I reported at the time, looked hard at the story. Unable to prove that Joe Biden had tried, as vice president, to change U.S. policy to enrich a family member, The Journal refused to tell it the way the Trump aides wanted, leaving that spin to the right-wing tabloids. What remained was a murky situation that is hard to call “misinformation,” even if some journalists and academics like the clarity of that label. The Journal’s role was, in fact, a pretty standard journalistic exercise, a blend of fact-finding and the sort of news judgment that has fallen a bit out of favor as journalists have found themselves chasing social media.

While some academics use the term carefully, “misinformation” in the case of the lost laptop was more or less synonymous with “material passed along by Trump aides.” And in that context, the phrase “media manipulation” refers to any attempt to shape news coverage by people whose politics you dislike. (Emily Dreyfuss, a fellow at the Technology and Social Change Project at the Shorenstein Center, told me that “media manipulation,” despite its sinister ring, is “not necessarily nefarious.”)

The focus on who’s saying something, and how they’re spreading their claims, can pretty quickly lead Silicon Valley engineers to slap the “misinformation” label on something that is, in plainer English, true.

Shorenstein’s research director, Joan Donovan, who is leading the program and raised its funding from the John S. and James L. Knight Foundation, said that the Hunter Biden case study was “designed to cause conversation — it’s not supposed to leave you resolved as a reader.”

Ms. Donovan, a force on Twitter and a longtime student of the shadiest corners of the internet, said she defines “misinformation” as “false information that’s being spread.” She strongly objected to my suggestion that the term lacks a precise meaning.

She added that, appearances aside, she doesn’t believe the word is merely a left-wing label for things that Democrats don’t like. Instead, she traces the modern practice of “disinformation” (that is, deliberate misinformation) to the anti-corporate activists the Yes Men, famous for hoaxed corporate announcements and other stunts, and the “culture jamming” of Adbusters. But their tools, she wrote, have been adopted by “foreign operatives, partisan pundits, white supremacists, violent misogynists, grifters and scammers.”

Ms. Donovan is among the scholars who have tried to unravel the knotty information tangle of contemporary politics. She’s currently a compulsive consumer of Steve Bannon’s influential podcast, “War Room.” Like many of the journalists and academics who study our chaotic media environment, she has zeroed in on the way that trolls and pranksters developed tactics for angering and tricking people online over the first half of the last decade, and how those people brought their tactics to the right-wing reactionary politics in the decade’s second half.

To the people paying close attention, this new world was riveting and dangerous — and it was maddening that outsiders couldn’t see what was happening. For these information scholars, widespread media manipulation seemed like the main event of recent years, the main driver of millions of people’s beliefs, and the main reason Mr. Trump and people like him won elections all over the world. But this perspective, while sometimes revelatory, may leave little space for other causes of political action, or for other types of political lies, like the U.S. government’s long deception on its progress in the war in Afghanistan.

What had been a niche preoccupation has now been adopted by people who have spent somewhat less time on 4chan than Ms. Donovan. The broadcaster Katie Couric recently led the Aspen Institute’s Commission on Information Disorder. I moderated a panel at Bloomberg’s New Economy Forum with a different, somewhat dental, label for the same set of issues, “truth decay.” (The RAND Corporation seems to have coined that one, though T Bone Burnett did release an album by that name in 1980.) There, an Australian senator, Sarah Hanson-Young, said she thought the biggest culprit in misleading her fellow citizens about climate change had been Rupert Murdoch’s News Corp — hardly a new issue, or one that needs a new name. The New York Post’s insistence that the emails prove President Biden’s corruption, and not just his son’s influence peddling, is part of the same partisan genre.

This hints at a weakness of the new focus on misinformation: It’s a technocratic solution to a problem that’s as much about politics as technology. The new social media-fueled right-wing populists lie a lot, and stretch the truth more. But as American reporters quizzing Donald Trump’s fans on camera discovered, his audience was often in on the joke. And many of the most offensive things he said weren’t necessarily lies — they were just deeply ugly to half the country, including most of the people running news organizations and universities.

It’s more comfortable to reckon with an information crisis — if there’s anything we’re good at, it’s information — than a political one. If only responsible journalists and technologists could explain how misguided Mr. Trump’s statements were, surely the citizenry would come around. But these well-meaning communications experts never quite understood that the people who liked him knew what was going on, laughed about it and voted for him despite, or perhaps even because of, the times he went “too far.”

Harper’s Magazine recently published a broadside against “Big Disinfo,” contending that the think tanks raising money to focus on the topic were offering a simple solution to a political crisis that defies easy explanation and exaggerating the power of Facebook in a way that, ultimately, served Facebook most of all. The author, Joseph Bernstein, argued that the journalists and academics who specialize in exposing instances of disinformation seem to believe they have a particular claim on truth. “However well-intentioned these professionals are, they don’t have special access to the fabric of reality,” he wrote.

In fact, I’ve found many of the people worrying about our information diets are reassuringly modest about how far the new field of misinformation studies is going to take us. Ms. Donovan calls it “a new field of data journalism,” but said she agreed that “this part of the field needs to get better at figuring out what’s true or false.” The Aspen report acknowledged “that in a free society there are no ‘arbiters of truth.’” They’re putting healthy new pressure on tech platforms to be transparent in how claims — true and false — spread.

The editor in chief of The Texas Tribune, Sewell Chan, one of the Harvard course’s participants, said he didn’t think the program had a political slant, adding that it “helped me understand the new forms of mischief making and lie peddling that have emerged.”

“That said, like the term ‘fake news,’ misinformation is a loaded and somewhat subjective term,” he said. “I’m more comfortable with precise descriptions.”

I also feel the push and pull of the information ecosystem in my own journalism, as well as the temptation to evaluate a claim by its formal qualities — who is saying it and why — rather than its substance. Last April, for instance, I tweeted about what I saw as the sneaky way that anti-China Republicans around Donald Trump were pushing the idea that Covid-19 had leaked from a lab. There were informational red flags galore. But media criticism (and I’m sorry you’ve gotten this far into a media column to read this) is skin-deep. Below the partisan shouting match was a more interesting scientific shouting match (which also made liberal use of the word “misinformation”). And the state of that story now is that scientists’ understanding of the origins of Covid-19 is evolving and hotly debated, and we’re not going to be able to resolve it on Twitter.

The story of tech platforms helping to spread falsehoods is still incredibly important, as is the work of identifying stealthy social media campaigns from Washington to, as my colleague Davey Alba recently reported, Nairobi. And the Covid-19 pandemic also gave everyone from Mark Zuckerberg to my colleagues at The New York Times a new sense of urgency about, for instance, communicating the seriousness of the pandemic and the safety of vaccines in a media landscape littered with false reports.

But politics isn’t a science. We don’t need to mystify the old-fashioned practice of news judgment with a new terminology. There’s a danger in adopting jargony new frameworks we haven’t really thought through. The job of reporters isn’t, ultimately, to put neat labels on the news. It’s to report out what’s actually happening, as messy and unsatisfying as that can be.

Ben Smith is the media columnist. He joined The Times in 2020 after eight years as founding editor in chief of BuzzFeed News. Before that, he covered politics for Politico, The New York Daily News, The New York Observer and The New York Sun. Email: [email protected]


Media Manipulation & Disinformation

We take a sociotechnical approach to understanding the social, political, and economic incentives to game information systems, websites, platforms, and search engines.

Manipulated metadata, leak forgeries, AV fakes, search engine trickery: Our "trade craft" series exposes the wiliest sociotechnical techniques used in today's manipulation and disinformation campaigns.

About This Track

Data & Society’s Media Manipulation & Disinformation research examines how different groups use the participatory culture of the internet to turn the strengths of a free society into vulnerabilities, ultimately threatening expressive freedoms and civil rights. Efforts to exploit technical, social, economic, and institutional configurations of media can catalyze social change, sow dissent, and challenge the stability of social institutions.

Broadly, this initiative takes a sociotechnical approach to understanding the social, political, and economic incentives to game information systems, websites, platforms, and search engines, especially in cases where the attackers intend to destabilize democratic, social, and economic institutions. Through empirical research, we identify the unintended consequences of socio-technical systems and track attempts to locate and address threats, with an eye towards increasing organizational capacity across fields, so that action can be taken as problems emerge.

From social movements, to political parties, governments, dissidents, and corporations, many groups engage in active efforts to shape media narratives. Media manipulation tactics include: planting and/or amplifying misinformation and disinformation using humans (troll armies, doxxing, and bounties) or digital tools (bots); targeting journalists or public figures for social engineering (psychological manipulation); gaming trending and ranking algorithms; and coordinating action across multiple user accounts to force topics, keywords, or questions into the public conversation. Because the internet is a tool, a tactic, and a territory, integral to challenging the relations of power, studying the new vulnerabilities of networked media is fundamental to the future of democracies.

Data & Society’s Media Manipulation research initiative is generously supported by the Craig Newmark Philanthropies, the Ford Foundation, the News Integrity Initiative, and other donors through programmatic and general support.

Focus Areas


Sourcing innovative approaches to address the complex dynamics that underpin the spread of propaganda and disinformation by working in concert with research, industry, and civil society partners.

  • Disinformation Action Lab


Analyzing how social movements use, leverage, and manipulate digital infrastructures to create or take advantage of political opportunities.


Tracking and historicizing the tools, tactics, and techniques of groups who seek to exploit socio-technical systems for profit, politics, or fun.


Researching the effects of echo chambers, filter bubbles, analytics, and the engagement of audiences on political media makers, consumers, and distributors.


Investigating how media makers and platforms, from small entrepreneurs to multinational corporations, monetize content and value advertising at different scales. We also assess the legal, regulatory, and governance regimes of the internet and the shifting roles of platform corporations in the public sphere.

Featured Report

“New media technologies do not inherently change how evidence works in society. What they do is provide new opportunities for the negotiation of expertise, and therefore power.”

  • Read the Report
  • podcast We Be Imagining Looking at the Oversight: Meditations on Platforms Governance Robyn Caplan How do we contextualize platform governance in a longer history where big tech companies refused access to researchers, and many felt that harm was something that would happen further down the road? Read on We Be Imagining February 2021
  • op-ed Slate Pornhub Is Just the Latest Example of the Move Toward a Verified Internet Robyn Caplan It is often said that pornography drives innovation in technology, so perhaps that's why many outlets have framed Pornhub's verification move as "unprecedented." Read on Slate December 2020
  • op-ed Columbia Journalism Review QAnon shows that age of alternative facts will not end with Trump Alice E. Marwick William Partin While many conspiracies encourage readers to doubt mainstream sources, QAnon takes things one step further by building an entire knowledge-making institution of its own. And that takes some serious effort. Read on Columbia Journalism Review October 2020
  • op-ed Poynter A view from somewhere: What White managers need to know Meredith D. Clark Whiteness as the default existence - enforced by explicit and implicit values, norms, and practices in the newsroom - has and will continue to thwart any meaningful efforts to make the country's newsrooms reflect the communities they claim to serve. Read on Poynter June 2020
  • report Data & Society Securitize/Counter-Securitize Gabrielle Lim The Life and Death of Malaysia’s Anti-Fake News Act Read more March 2020
  • report Data & Society Data Voids danah boyd Michael Golebiewski "Data Voids" demonstrates how manipulators expose people to problematic content by exploiting search-engine results. Read more October 2019
  • op-ed NY Daily News The deeper danger of deepfakes: Worry less about politicians and more about powerless people Britt Paris Drawing on insights from her co-authored report Deepfakes and Cheap Fakes, Data & Society Affiliate Britt Paris emphasizes who will be most negatively impacted by deepfake technologies. "What is far too little discussed ar... Read on NY Daily News September 2019
  • report Data & Society Deepfakes and Cheap Fakes Britt Paris Joan Donovan This report traces decades of AV manipulation to demonstrate how evolving technologies aid consolidations of power in society. Read more September 2019
  • report Data & Society Source Hacking Joan Donovan Source Hacking details the techniques used by media manipulators to target journalists and other influential public figures to pick up falsehoods and unknowingly amplify them to the public. Read more September 2019
  • report Data & Society Data Craft Amelia Acker Data Craft analyzes how bad actors manipulate metadata to create effective disinformation campaigns and provides tips for researchers and technology companies trying to spot this “data craft.” Read more November 2018
  • Nieman Foundation


Journalist’s Trade

October 20, 2020. A blueprint for documenting and debunking misinformation campaigns: the Media Manipulation Casebook is a tool to help journalists, researchers, and policymakers know how and when to respond to misinformation in all its forms.

Brian Friedberg

Emily Dreyfuss

Gabrielle Lim

Joan Donovan


People wearing face masks walk past a Twitter logo outside New York City headquarters. John Nacion/Sipa via AP Photo

In 2020, amid a pandemic and protests and a presidential election, misinformation lies in wait everywhere. It’s on our social media feeds, coming out of the mouths of our politicians, and printed in pamphlets mailed to our doors, intermingling indistinguishably with facts. The World Health Organization has termed this an infodemic. Some of it is the result of intentional media manipulation campaigns, scams, hoaxes, and grifts cooked up by people with an agenda. This disinformation, like a virus, is contagious and potentially deadly—to individuals and democracy itself.

It didn’t start this way. The advent of online communication, and the vast possibility for connection that came with it, enabled people to find each other based on interest and affinity like never before, and gave those engaged in cultural production new toolkits. Scientific innovators, advocacy groups, and independent media all flourished with new advances in networked communication and broadband technology, establishing their communities on the open web and social media.

But as the naivete of the techno-utopian era fades into the horrors of the infodemic, we now see platforms running defense after knowingly allowing radicalization to flourish. Ransomware attacks on our vital institutions, the cyber-troopers of oppressive regimes, for-profit disinformation outfits, harmful conspiracy theories grounded in anti-Semitism and medical misinformation, and the celebration of extremist violence are breaking our institutions, which have little or no ability to identify the source of these attacks.

We at the Technology and Social Change team at Harvard’s Shorenstein Center on Media, Politics and Public Policy are publishing the Media Manipulation Casebook to help cut through this noise. The Casebook is a database of media manipulation campaign case studies, some old, some ongoing, that we hope will provide a framework to analyze this phenomenon. We intend this research platform to be both a resource to scholars and a tool to help researchers, technologists, policymakers, civil society organizations, and journalists know how and when to respond to the very real threat of media manipulation.

The different stages of the media manipulation life cycle

The different stages of the media manipulation life cycle Technology and Social Change Project at Shorenstein Center on Media, Politics and Public Policy

The heart of the Casebook is the media manipulation life cycle, which presents a methodology for how to understand the origins and impacts of media manipulation campaigns, both domestic and international, and their relation to the wider information ecosystem. Situated in the emerging field of Critical Internet Studies, it is the product of three years of research on how journalists, civil society groups, and technologists grapple with media manipulation and disinformation campaigns. We took seriously the need for a cross-sector set of definitions that help us make sense of the tactics of manipulators and the communication strategies they employ to hoax the public.

Here, we break down how each stage of the life cycle works, and the ways different groups of people trying to fight back can be most useful. Media manipulation affects not just journalists and social media companies but presents a collective challenge to all of us who believe that knowledge is power. Like a hammer in a world lined with nails, the Casebook offers a way of analyzing interactions across our media ecosystem that is consistent with current journalistic and research practices, which seek to get us closer to the truth.

Stage 1: Campaign Planning

Media manipulation campaigns are a product of our culture, and of Silicon Valley. As the tech industry’s wares spread globally, driven by a technocratic and profit-driven machine, so too were pre-existing social problems reproduced and amplified. In many of the media manipulation campaigns we catalogue in the Casebook, you see small groups of motivated actors, often driven by these toxic social forces, opportunistically using technology to scale and amplify their impact.

Establishing who these people are, and why they are acting, is extremely difficult. Social media platforms, prime targets for extremists and media manipulators, are increasingly opaque and difficult to study critically. This makes establishing intent and attribution for disinformation artifacts and harmful propaganda a time-consuming and emotionally taxing process for journalists and researchers. Behind every visible campaign plan is another layer of communication invisible to outsiders, another new platform adopted to evade regulation and oversight.

But it is precisely the opacity of content moderation around these materials that makes external critical research and journalism such a necessary part of pressuring for change.

Uncovering evidence of campaign planning and coordination takes domain expertise, which takes time. This information may be gathered in real time by a dedicated watcher tasked with understanding the dynamics of online subcultural spaces, but is often only available forensically. We know the extent to which the far right organized for Unite the Right because of chat leaks published by Unicorn Riot, for example. In our case studies, when possible, we illustrate what the beginning of a campaign looks like, and explain how other researchers and journalists can cultivate that domain expertise themselves. Our case studies on the fake Antifa social media accounts phenomenon and the digital blackface “Operation Blaxit” show how planning and coordination may be discoverable for those who know where to look.

Discovering campaign planning and establishing intent are impossible without qualitative research that contextualizes how and why a campaign was created. Rather than rely on anonymized large data sets handed out by these platforms, or on increasingly restrictive information access, our research methods incorporate ethnographic, sociological, and anthropological understandings of human communication to make sense of the mess. Included as part of our methodological package is “Investigative Digital Ethnography,” a guide for academics and journalists seeking to design social media research that leads to deep insight into communities targeted by disinformation, and those that reliably produce it. While there will always be another layer to a disinformation campaign that we cannot see, we as journalists and researchers must perform clear, reproducible research to collectively address the many online harms we face today.

Stage 2: Seeding the Campaign Across Social Platforms and Web

Stage 2 is when a campaign moves from planning to execution, when memes, hashtags, forgeries, and false or misleading information are seeded across social media, fringe news sites, blogs, and forums. This stage marks the earliest point at which a campaign moves beyond its original creators, often aided by willing participants, online influencers, and networked factions. If the messaging and calls to action are attractive enough, the campaign grows, reaching new audiences who often have no idea as to the origins or motivations behind what they are now seeing.

Intervention at this stage is not clear cut. At what point does one intervene? How egregious is the content? What is the likely outcome? Will intervention backfire? This is where civil society organizations (CSOs) play an important role. Because of their domain expertise and connections with individuals and groups who may be most affected by an ill-motivated influence operation, CSOs will not only know where to look but will have a better understanding of the vectors of attack, the wedge issues that will be exploited, and the context and nuance to discern what action (if any) should be taken. CSOs with the capacity to monitor such activities therefore become an invaluable actor in preventing a potentially dangerous influence operation from moving onto the next stage.

Often more technically savvy and quicker to action, CSOs can counter messaging before it reaches mainstream audiences, pre-bunk likely misconceptions about an issue, and agitate for platform response. Here, humor and creativity are assets activists can draw from in countering mis- and disinformation. Often the first to notice when something seems dubious, CSOs can be a trusted resource. Technology companies and researchers should also take notice, as the most effective interventions will likely involve all parties.

Stage 3: Responses by Industry, Activists, Politicians, and Journalists

Stage 3 of the life cycle model documents how highly visible people and organizations outside of a manipulation campaign react and respond to it. These individuals or institutions can be politicians, government agencies, celebrities, influencers, civil society organizations—or journalists. It is almost always after reactions by these people with cultural power that a manipulation campaign grows to its most visible and most dangerous. Stage 3 is a turning point. What happens during this critical period determines whether the campaign gets undue amplification and attention or sputters out in failure.

It’s at this stage that journalistic judgment is most important. Media manipulators crave attention. If the point of Stage 2 is to lay a trap around the internet to get that attention, Stage 3 is where the campaign snares it.

Journalists are often the ones to find those traps, as it is their job to seek out important information that the public needs to know. Journalists are on the hunt, and so they have to think of media manipulation campaigns as deadfalls laid out around the internet to catch them. When encountering evidence of a campaign that is still in Stage 1 or 2, journalists must carefully balance the need to report on true events against the need not to fall prey to a manipulation campaign. Sometimes it is not in the public interest to report on nascent campaigns.

To determine whether reporting in Stage 3 will do more good than harm, journalists must ask themselves: Does this bit of media manipulation have the potential to cause real harm? Are influential people responding to and spreading it? Do many people seem to be falling for it, and adopting its harmful messaging? If the answer to these questions is yes, then reporting is warranted. If the answers are less clear, they must make the best judgment they can.

As a few of our case studies show, the worst thing journalists can do in Stage 3 is to report on a media manipulation campaign at face value, repeating the misinformation and framing of the campaign. In this case, journalists have been duped. That’s an obvious win for the manipulators.

But journalists can still amplify the manipulation campaign even if they get the reporting right, which makes Stage 3 extremely tricky. If the disinformation campaign is limping along on social media, an article in the mainstream press—even one accurately pointing out how fake or wrong the campaign is—could be the match that lights the fire under the operation.

In that situation, the right move may be to not write a story—to deploy strategic silence.

But if it’s too late for strategic silence—for example, because other news organizations are already amplifying it, or social media platforms are serving it up to huge audiences who are already acting on it, or high-profile people are already responding to it—then it’s well into Stage 3 and it’s appropriate and even necessary to report on it.

One way to think of this is: As journalists, you rarely want to kick off Stage 3. You only want to instigate Stage 3 with your reporting if a campaign has gained such a hidden viral popularity already, outside of mainstream view, that it is causing harm or will be imminently.

In this case, the most important thing to do is report critically. This means deploying “strategic amplification.” It means following the truth sandwich rubric: lead with what’s true, quickly debunk what’s false, and then return to what is known. What is known can include who is behind the campaign, where it was planned, whom it hurts, and how it fits into the current news cycle and the media manipulation life cycle. Do not link to campaign operators’ handles and websites directly if you can avoid it. Don’t make it easy for readers to use your reporting as a way to find, spread, and join the campaign.

Journalists also have a crucial role to play in Stage 4: Mitigation. By Stage 4, a campaign has reached such a viral tipping point that a corrective in the media is clearly necessary. Whether that corrective is effective depends on the situation, but such reporting is always warranted because the campaign has reached a certain level of public awareness.

Stage 4: Mitigation

Once a campaign is amplified into public awareness, a host of stakeholders must act to mitigate its harms. Journalism plays a crucial role here, as well, actively fact-checking and debunking individual disinformation campaigns, to bring the actions and impacts of malicious actors on social media platforms to the attention of civil society, technologists, and policymakers.

As newsrooms adapted over the last four years to the normalization of misinformation on social media, they established regular fact-checking and debunking beats. Fact checkers have written thousands of articles debunking misinformation and conspiracies because they see how audiences are repeatedly targeted by sensational and scandalous content online. It is a drain on resources that could be better spent on sustaining journalism rather than on moderating content for the platforms. Dedicated fact checks are a form of mitigation, dominating search results for confirmed manipulation campaigns.

Mitigation efforts often fall on civil society, which bears the long tail of manipulation over years as disinformation is dispersed. Journalists, public health and medical professionals, civil society leaders, and law enforcement personnel are bearing the true cost of responding to unrelenting misinformation.

The evidence they gather adds up and can help pressure platforms to change their systems or Terms of Service. Civil society coalitions, like Change the Terms, have pushed for years to force the platform companies into taking responsibility for the harms that proliferate on their sites. Content moderation shouldn’t be the job of civil society—or of the communities being harmed.

Platform companies are the ones who wield the power of content moderation in Stage 4. They can deplatform, remove content, ban terms—in short, they can pull the plug on media manipulation campaigns if they take the right actions at the right times. Deplatforming manipulators and hate-mongers works. But these mitigation efforts often come too late, as with the deplatforming of the white supremacists who planned the murderous Unite the Right rally, or the long, slow growth of the QAnon movement. An example from our Casebook is the case of the targeted harassment of an alleged whistleblower, when some social media companies followed the lead of mainstream journalism and blocked the use of a specific name on their platforms to protect an individual from harm.

But platform companies often respond too late, or not at all. We know platforms like Facebook have knowingly allowed radicalization to fester with deadly results. Though they have policy departments focused on minimizing harm, and they have pledged over and over again to make their platforms a safe and equitable environment, they often do not take action until civil society and journalists have forced them to. Their disparate mitigation efforts are uncoordinated and unstandardized, allowing manipulators to leverage an asymmetrical media environment to execute attacks.

In the vacuum of regulation, we see repeatedly how platforms, in pursuit of brand protection, fail to act until a campaign has ended or adapted. In January 2020, Facebook published a statement: “In the absence of regulation, Facebook and other companies are left to design their own policies. We have based ours on the principle that people should be able to hear from those who wish to lead them, warts and all.”

This reveals that in Stage 4, the missing power broker is regulators, who could create standardized rules for the platforms, but so far have largely abdicated that duty or found it too difficult.

Stage 5: Campaign Adaptation

As many of the cases in the Casebook reveal, despite some mitigation, media manipulation campaigns often find ways to continue. In Stage 5, campaigns adapt when possible, sometimes overnight, sometimes over the course of several years, as in the case study on the digital blackface Operation Blaxit campaign or the enduring Pizzagate conspiracy theory. Operators are often fully aware of the best ways to exploit sociotechnical systems, using anonymity to avoid attribution and edited materials and coded language to avoid automatic content flagging. While these individuals or groups may be beyond accountability, major social media platforms remain the primary attack vector for such campaigns and bear the responsibility for curbing the impact of this behavior.

Successful platform mitigation is the only way to curb the impact of adaptation by manipulators. The “Plandemic” film, which claimed the COVID-19 virus was deployed by powerful elites to create a new world order, went super viral in spring 2020. It was taken down after receiving almost two million views, though it still circulated on smaller video platforms. Before mitigation, this disinformation campaign operated publicly, even pre-announcing a follow-up film, “Indoctrination.” When that film launched, platforms were ready. By taking proactive action, major platforms did much to stem transmission of the documentary and avoided a repeat of “Plandemic”’s virality. As a result of cross-sector coordination, “Indoctrination” received far less attention. Motivated manipulators will continue to adapt, but without the amplification capabilities of social media at their disposal, their audiences are greatly diminished.

Keeping track of this ecosystem is hard. Campaigns are difficult to find and to identify while being seeded, a challenge for journalists, our institutions, and civil society alike, and platforms’ unmotivated and uneven mitigation practices enable manipulator adaptation. We at the Technology and Social Change project introduce this model, open to many disciplines and research practices, as a means to detect, document, and debunk misinformation in all its forms. It is a frame for policymakers seeking to understand the impact media manipulation has outside of platforms, and how those platforms are designed for continued exploitation. And we hope it is a blueprint for journalists and researchers seeking standards for how to approach the current information crisis.

Further Reading

  • “How Not to Cover Voter Fraud Disinformation,” by Yochai Benkler, from Nieman Reports
  • “‘We Can’t Only Be Mad at Facebook’: Nonprofits, Newsrooms Team Up Against Misinformation,” by Catherine Buni
  • “As the November Election Approaches, Are Newsrooms Ready for Guccifer 3.0?” by Christa Case Bryant



Misinformation, manipulation, and abuse on social media in the era of COVID-19

  • Published: 22 November 2020
  • Volume 3, pages 271–277 (2020)


  • Emilio Ferrara
  • Stefano Cresci
  • Luca Luceri


The COVID-19 pandemic represented an unprecedented setting for the spread of online misinformation, manipulation, and abuse, with the potential to cause dramatic real-world consequences. The aim of this special issue was to collect contributions investigating issues such as the emergence of infodemics, misinformation, conspiracy theories, automation, and online harassment on the onset of the coronavirus outbreak. Articles in this collection adopt a diverse range of methods and techniques, and focus on the study of the narratives that fueled conspiracy theories, on the diffusion patterns of COVID-19 misinformation, on the global news sentiment, on hate speech and social bot interference, and on multimodal Chinese propaganda. The diversity of the methodological and scientific approaches undertaken in the aforementioned articles demonstrates the interdisciplinarity of these issues. In turn, these crucial endeavors might anticipate a growing trend of studies where diverse theories, models, and techniques will be combined to tackle the different aspects of online misinformation, manipulation, and abuse.



Malicious and abusive behaviors on social media have elicited massive concerns for the negative repercussions that online activity can have on personal and collective life. The spread of false information [8, 14, 19] and propaganda [10], the rise of AI-manipulated multimedia [3], the presence of AI-powered automated accounts [9, 12], and the emergence of various forms of harmful content are just a few of the several perils that social media users can—even unconsciously—encounter in the online ecosystem. In times of crisis, these issues can only get more pressing, with increased threats for everyday social media users [20]. The ongoing COVID-19 pandemic is no exception and, due to dramatically increased information needs, represents the ideal setting for the emergence of infodemics—situations characterized by the undisciplined spread of information, including a multitude of low-credibility, fake, misleading, and unverified information [24]. In addition, malicious actors thrive on these wild situations and aim to take advantage of the resulting chaos. In such high-stakes scenarios, the downstream effects of misinformation exposure or information landscape manipulation can manifest in attitudes and behaviors with potentially dramatic public health consequences [4, 21].

By affecting the very fabric of our socio-technical systems, these problems are intrinsically interdisciplinary and require joint efforts to investigate and address both the technical aspects (e.g., how to thwart automated accounts and the spread of low-quality information, how to develop algorithms for detecting deception, automation, and manipulation), as well as the socio-cultural ones (e.g., why do people believe in and share false news, how do interference campaigns evolve over time) [7, 15]. Fortunately, in the case of COVID-19, several open datasets were promptly made available to foster research on the aforementioned matters [1, 2, 6, 16]. Such assets bootstrapped the first wave of studies on the interplay between a global pandemic and online deception, manipulation, and automation.


In light of the previous considerations, the purpose of this special issue was to collect contributions proposing models, methods, empirical findings, and intervention strategies to investigate and tackle the abuse of social media along several dimensions that include (but are not limited to) infodemics, misinformation, automation, online harassment, false information, and conspiracy theories about the COVID-19 outbreak. In particular, to protect the integrity of online discussions on social media, we aimed to stimulate contributions along two interlaced lines. On one hand, we solicited contributions to enhance the understanding of how health misinformation spreads, of the role of social media actors that play a pivotal part in the diffusion of inaccurate information, and of the impact of their interactions with organic users. On the other hand, we sought to stimulate research on the downstream effects of misinformation and manipulation on user perception of, and reaction to, the wave of questionable information they are exposed to, and on possible strategies to curb the spread of false narratives. From ten submissions, we selected seven high-quality articles that provide important contributions for curbing the spread of misinformation, manipulation, and abuse on social media. In the following, we briefly summarize each of the accepted articles.

The COVID-19 pandemic has been plagued by the pervasive spread of a large number of rumors and conspiracy theories, which even led to dramatic real-world consequences. “Conspiracy in the Time of Corona: Automatic Detection of Emerging COVID-19 Conspiracy Theories in Social Media and the News” by Shahsavari, Holur, Wang, Tangherlini, and Roychowdhury applies a machine learning approach to automatically discover and investigate the narrative frameworks supporting such rumors and conspiracy theories [17]. The authors uncover how the various narrative frameworks rely on the alignment of otherwise disparate domains of knowledge, and how they attach to the broader reporting on the pandemic. These alignments and attachments are useful for identifying areas in the news that are particularly vulnerable to reinterpretation by conspiracy theorists. Moreover, identifying the narrative frameworks that provide the generative basis for these stories may also help devise methods for disrupting their spread.

The widespread diffusion of rumors and conspiracy theories during the outbreak has also been analyzed in “Partisan Public Health: How Does Political Ideology Influence Support for COVID-19 Related Misinformation?” by Nicholas Havey. The author investigates how political leaning influences participation in the discourse of six COVID-19 misinformation narratives: 5G activating the virus; Bill Gates using the virus to implement a global surveillance project; the “Deep State” causing the virus; bleach and other disinfectants as ingestible protection against the virus; hydroxychloroquine being a valid treatment for the virus; and the Chinese Communist Party intentionally creating the virus [13]. Results show that conservative users dominated most of these discussions and pushed diverse conspiracy theories. The study further highlights how political and informational polarization might affect adherence to health recommendations and can, thus, have dire consequences for public health.

Figure 1: Network based on the web-page URLs shared on Twitter from January 16, 2020 to April 15, 2020 [18]. Each node represents a web-page URL, while connections indicate links among web-pages. Purple nodes represent traditional news sources, orange nodes low-quality and misinformation news sources, and green nodes authoritative health sources. Edges take the color of their source node, and node size is based on degree.

“Understanding High and Low Quality URL Sharing on COVID-19 Twitter Streams” by Singh, Bode, Budak, Kawintiranon, Padden, and Vraga investigates URL-sharing patterns during the pandemic for different categories of websites [18]. Specifically, the authors categorize URLs as related to traditional news outlets, authoritative health sources, or low-quality and misinformation news sources. Then they build networks of shared URLs (see Fig. 1). They find that both authoritative health sources and low-quality/misinformation sources are shared much less than traditional news sources. However, COVID-19 misinformation is shared at a higher rate than news from authoritative health sources. Moreover, the COVID-19 misinformation network appears to be dense (i.e., tightly connected) and disassortative. These results can pave the way for future intervention strategies aimed at fragmenting the networks responsible for the spread of misinformation.
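The network properties reported by Singh et al. can be computed directly from an edge list. The sketch below, with a made-up toy network of hypothetical URLs (not the study’s data), uses the networkx library to compute density and degree assortativity; a negative assortativity coefficient is the “disassortative” pattern described above.

```python
# Illustrative sketch only: a toy URL network with invented example-domain
# URLs, not the data from Singh et al. Requires the networkx library.
import networkx as nx

# A "hub" misinformation page linked by several low-degree pages.
edges = [
    ("hub.example/claim", "site1.example/share"),
    ("hub.example/claim", "site2.example/share"),
    ("hub.example/claim", "site3.example/share"),
    ("hub.example/claim", "site4.example/share"),
]
G = nx.Graph(edges)

# Density: fraction of possible edges that are present.
density = nx.density(G)  # 4 edges out of C(5, 2) = 10 possible -> 0.4

# Degree assortativity: negative when high-degree nodes attach to
# low-degree nodes, the disassortative pattern reported in the article.
assortativity = nx.degree_assortativity_coefficient(G)

print(f"density={density:.2f}, assortativity={assortativity:.2f}")
```

On real data the same two calls apply unchanged; the edge list would simply come from the observed link structure between shared web pages.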

The relationship between news sentiment and real-world events is a long-studied matter that has serious repercussions for agenda setting and (mis-)information spreading. In “Around the World in 60 Days: An Exploratory Study of Impact of COVID-19 on Online Global News Sentiment,” Chakraborty and Bose explore this relationship for a large set of worldwide news articles published during the COVID-19 pandemic [5]. They apply unsupervised and transfer-learning-based sentiment analysis techniques and explore correlations between news sentiment scores and the global and local numbers of infected people and deaths. Specific case studies are conducted for countries such as China, the US, Italy, and India. The results of the study help identify the key drivers of negative news sentiment during an infodemic, as well as the communication strategies that were used to curb negative sentiment.

Farrell, Gorrell, and Bontcheva investigate one of the most damaging sides of online malicious content: online abuse and hate speech. In “Vindication, Virtue and Vitriol: A Study of Online Engagement and Abuse toward British MPs during the COVID-19 Pandemic,” they adopt a mixed-methods approach to analyze citizen engagement with British MPs’ online communications during the pandemic [11]. Among their findings is that certain pressing topics, such as financial concerns, attract the highest levels of engagement, though not necessarily negative engagement. Other topics, such as criticism of authorities and subjects like racism and inequality, tend to attract higher levels of abuse, depending on factors such as ideology, authority, and affect.

Yet another aspect of online manipulation—automation and social bot interference—is tackled by Uyheng and Carley in their article “Bots and Online Hate during the COVID-19 Pandemic: Case Studies in the United States and the Philippines” [22]. Using a combination of machine learning and network science, the authors investigate the interplay between social media automation and the spread of hateful messages. They find that social bots are more effective when targeting dense and isolated communities. While most of the extant literature frames hate speech as a linguistic phenomenon and social bots as an algorithmic one, Uyheng and Carley adopt a more holistic approach, proposing a unified framework that treats disinformation, automation, and hate speech as interlinked processes and generating insights by examining their interplay. The study also reflects on the value of taking a global approach to computational social science, particularly in the context of a worldwide pandemic and infodemic, with its universal yet distinct and unequal impacts on societies.

It has now become clear that text is not the only vehicle for online misinformation and propaganda [10]; images, such as those used for memes, are increasingly being weaponized for this purpose. Based on this evidence, Wang, Lee, Wu, and Shen investigate US-targeted Chinese COVID-19 propaganda, which relies heavily on text images [23]. In their article “Influencing Overseas Chinese by Tweets: Text-Images as the Key Tactic of Chinese Propaganda,” they tracked thousands of Twitter accounts involved in the #USAVirus propaganda campaign. A large percentage (roughly 38%) of those accounts was later suspended by Twitter as part of its efforts to counter information operations. The authors studied the behavior and content production of the suspended accounts. They also experimented with different statistical and machine learning models to understand which account characteristics most determined suspension by Twitter, finding that the repeated use of text images played a crucial part.

Overall, the great interest around the COVID-19 infodemic and, more broadly, in research themes such as online manipulation, automation, and abuse, combined with the growing risks of future infodemics, makes this special issue a timely endeavor that will contribute to the future development of this crucial area. Given the recent advances and breadth of the topic, as well as the level of interest in related events that followed this special issue—dedicated panels, webinars, conferences, workshops, and other journal special issues—we are confident that the articles selected in this collection will be both highly informative and thought-provoking for readers. The diversity of the methodological and scientific approaches undertaken in these articles demonstrates the interdisciplinarity of the issues, which demand renewed and joint efforts from different computer science fields, as well as from related disciplines such as the social, political, and psychological sciences. In this regard, the articles in this collection attest to and anticipate a growing trend of interdisciplinary studies where diverse theories, models, and techniques will be combined to tackle the different aspects at the core of online misinformation, manipulation, and abuse.

References

[1] Alqurashi, S., Alhindi, A., & Alanazi, E. (2020). Large Arabic Twitter dataset on COVID-19. arXiv preprint arXiv:2004.04315.

[2] Banda, J. M., Tekumalla, R., Wang, G., Yu, J., Liu, T., Ding, Y., & Chowell, G. (2020). A large-scale COVID-19 Twitter chatter dataset for open scientific research—An international collaboration. arXiv preprint arXiv:2004.03688.

[3] Boneh, D., Grotto, A. J., McDaniel, P., & Papernot, N. (2019). How relevant is the Turing test in the age of sophisbots? IEEE Security & Privacy, 17(6), 64–71.

[4] Broniatowski, D. A., Jamison, A. M., Qi, S., AlKulaib, L., Chen, T., Benton, A., et al. (2018). Weaponized health communication: Twitter bots and Russian trolls amplify the vaccine debate. American Journal of Public Health, 108(10), 1378–1384.

[5] Chakraborty, A., & Bose, S. (2020). Around the world in sixty days: An exploratory study of impact of COVID-19 on online global news sentiment. Journal of Computational Social Science.

[6] Chen, E., Lerman, K., & Ferrara, E. (2020). Tracking social media discourse about the COVID-19 pandemic: Development of a public coronavirus Twitter data set. JMIR Public Health and Surveillance, 6(2), e19273.

[7] Ciampaglia, G. L. (2018). Fighting fake news: A role for computational social science in the fight against digital misinformation. Journal of Computational Social Science, 1(1), 147–153.

[8] Cinelli, M., Cresci, S., Galeazzi, A., Quattrociocchi, W., & Tesconi, M. (2020). The limited reach of fake news on Twitter during 2019 European elections. PLoS One, 15(6), e0234689.

[9] Cresci, S. (2020). A decade of social bot detection. Communications of the ACM, 63(10), 61–72.

[10] Da San Martino, G., Cresci, S., Barrón-Cedeño, A., Yu, S., Di Pietro, R., & Nakov, P. (2020). A survey on computational propaganda detection. In The 29th International Joint Conference on Artificial Intelligence (IJCAI’20), pp. 4826–4832.

[11] Farrell, T., Gorrell, G., & Bontcheva, K. (2020). Vindication, virtue and vitriol: A study of online engagement and abuse toward British MPs during the COVID-19 pandemic. Journal of Computational Social Science.

[12] Ferrara, E., Varol, O., Davis, C., Menczer, F., & Flammini, A. (2016). The rise of social bots. Communications of the ACM, 59(7), 96–104.

[13] Havey, N. (2020). Partisan public health: How does political ideology influence support for COVID-19 related misinformation? Journal of Computational Social Science.

[14] Lazer, D. M., Baum, M. A., Benkler, Y., Berinsky, A. J., Greenhill, K. M., Menczer, F., et al. (2018). The science of fake news. Science, 359(6380), 1094–1096.

[15] Luceri, L., Deb, A., Giordano, S., & Ferrara, E. (2019). Evolution of bot and human behavior during elections. First Monday, 24(9).

[16] Qazi, U., Imran, M., & Ofli, F. (2020). GeoCoV19: A dataset of hundreds of millions of multilingual COVID-19 tweets with location information. ACM SIGSPATIAL Special, 12(1), 6–15.

[17] Shahsavari, S., Holur, P., Wang, T., Tangherlini, T. R., & Roychowdhury, V. (2020). Conspiracy in the time of corona: Automatic detection of emerging COVID-19 conspiracy theories in social media and the news. Journal of Computational Social Science.

[18] Singh, L., Bode, L., Budak, C., Kawintiranon, K., Padden, C., & Vraga, E. (2020). Understanding high and low quality URL sharing on COVID-19 Twitter streams. Journal of Computational Social Science.

[19] Starbird, K. (2019). Disinformation’s spread: Bots, trolls and all of us. Nature, 571(7766), 449–450.

[20] Starbird, K., Dailey, D., Mohamed, O., Lee, G., & Spiro, E. S. (2018). Engage early, correct more: How journalists participate in false rumors online during crisis events. In Proceedings of the 2018 ACM CHI Conference on Human Factors in Computing Systems (CHI’18), pp. 1–12. ACM.

[21] Swire-Thompson, B., & Lazer, D. (2020). Public health and online misinformation: Challenges and recommendations. Annual Review of Public Health, 41, 433–451.

[22] Uyheng, J., & Carley, K. M. (2020). Bots and online hate during the COVID-19 pandemic: Case studies in the United States and the Philippines. Journal of Computational Social Science.

[23] Wang, A. H. E., Lee, M. C., Wu, M. H., & Shen, P. (2020). Influencing overseas Chinese by tweets: Text-images as the key tactic of Chinese propaganda. Journal of Computational Social Science.

[24] Zarocostas, J. (2020). How to fight an infodemic. The Lancet, 395(10225), 676.


Author information

Authors and Affiliations

University of Southern California, Los Angeles, CA, 90007, USA

Emilio Ferrara

Institute of Informatics and Telematics, National Research Council (IIT-CNR), 56124, Pisa, Italy

Stefano Cresci

University of Applied Sciences and Arts of Southern Switzerland (SUPSI), Manno, Switzerland

Luca Luceri


Corresponding author

Correspondence to Emilio Ferrara .



About this article

Ferrara, E., Cresci, S., & Luceri, L. (2020). Misinformation, manipulation, and abuse on social media in the era of COVID-19. Journal of Computational Social Science, 3, 271–277.


  • Received: 19 October 2020
  • Accepted: 23 October 2020
  • Published: 22 November 2020
  • Issue Date: November 2020



  • Misinformation
  • Social bots
  • Social media

Articles on media manipulation


Celebrity deepfakes are all over TikTok. Here’s why they’re becoming common – and how you can spot them

Rob Cover, RMIT University

How Trump uses Twitter to distract the media – new research

Ullrich Ecker, The University of Western Australia; Michael Jetter, The University of Western Australia; and Stephan Lewandowsky, University of Bristol

Donald Trump, Hugo Chávez: Using illness for political gain and to erode democracy

Isaac Nahon-Serfaty, L’Université d’Ottawa/University of Ottawa

Trump v Mourinho: masters of media distraction

Martin Smith, Sheffield Hallam University

How Twitter bots affected the US presidential campaign

Emilio Ferrara, University of Southern California

Young, undecided and in the front line of Scotland’s indyref mindgames

Paul Aitken, University of Aberdeen



Tackling Disinformation

Harvard Kennedy School is studying how legacy news organizations and social media handle disinformation.

Part 5 of a series on all things digital at HKS.

One focal point is online misinformation. Matthew Baum, the Marvin Kalb Professor of Global Communications, says the first step in addressing misinformation should be evidence-based research on the impact of false or misleading information on users. Among his investigations is a project gauging the effects on people in several countries who use digital communication tools such as WhatsApp. Another research project will study the influence that both fact-checked and non-fact-checked information have on people’s political views “because you have to know the dogs that didn’t bark as well as the ones that do,” says Baum.

Media Manipulation

In addition, Baum and Shorenstein Center post-doctoral fellow Irene Pasquetto are overseeing the creation of a new journal called The Misinformation Review, which will vastly speed up the typical time lag in publishing peer-reviewed research from more than a year to just a month or two. The journal’s design is in part a response to the pace of digital innovation so that peer-reviewed findings will reach the field while still relevant for policy debates. The Misinformation Review is convening a conference for researchers in October.


Gibbs is also working with lead researcher Joan Donovan on the Technology and Social Change Research Project, which has set out to compile 100 case studies on media manipulation and to train 100 researchers at a dozen universities over the next three years to focus on the field of “critical internet studies.” Donovan is creating methodological standards for the case studies on media manipulation to distinguish malicious campaigns from legitimate social media advocacy. “We don’t yet have methods that help us find and measure what we’re looking at,” Donovan says. “We’re really trying to get a very robust, transdisciplinary field together.”



Violence, Media Effects, and Criminology

  • Nickie D. Phillips, Department of Sociology and Criminal Justice, St. Francis College
  • Published online: 27 July 2017

Debate surrounding the impact of media representations on violence and crime has raged for decades and shows no sign of abating. Over the years, the targets of concern have shifted from film to comic books to television to video games, but the central questions remain the same. What is the relationship between popular media and audience emotions, attitudes, and behaviors? While media effects research covers a vast range of topics—from the study of its persuasive effects in advertising to its positive impact on emotions and behaviors—of particular interest to criminologists is the relationship between violence in popular media and real-life aggression and violence. Does media violence cause aggression and/or violence?

The study of media effects is informed by a variety of theoretical perspectives and spans many disciplines including communications and media studies, psychology, medicine, sociology, and criminology. Decades of research have amassed on the topic, yet there is no clear agreement about the impact of media or about which methodologies are most appropriate. Instead, there continues to be disagreement about whether media portrayals of violence are a serious problem and, if so, how society should respond.

Conflicting interpretations of research findings inform and shape public debate around media effects. Although there seems to be a consensus among scholars that exposure to media violence impacts aggression, there is less agreement around its potential impact on violence and criminal behavior. While a few criminologists focus on the phenomenon of copycat crimes, most rarely engage with whether media directly causes violence. Instead, they explore broader considerations of the relationship between media, popular culture, and society.

  • media exposure
  • criminal behavior
  • popular culture
  • media violence
  • media and crime
  • copycat crimes

Media Exposure, Violence, and Aggression

On Friday, July 22, 2016, a gunman killed nine people at a mall in Munich, Germany. The 18-year-old shooter was subsequently characterized by the media as being under psychiatric care and harboring at least two obsessions: one with mass shootings, including that of Anders Breivik, who killed 77 people in Norway in 2011, and the other with video games. A Los Angeles, California, news report stated that the gunman was “an avid player of first-person shooter video games, including ‘Counter-Strike,’” while another headline similarly declared, “Munich gunman, a fan of violent video games, rampage killers, had planned attack for a year” (CNN Wire, 2016; Reuters, 2016). This high-profile incident was hardly the first to link popular culture to violent crime. Notably, in the aftermath of the 1999 Columbine shooting massacre, media sources implicated and later discredited music, video games, and a gothic aesthetic as causal factors of the crime (Cullen, 2009; Yamato, 2016). Other, more recent, incidents have echoed similar claims suggesting that popular culture has a nefarious influence on consumers.

Media violence and its impact on audiences are among the most researched and examined topics in communications studies (Hetsroni, 2007). Yet debate over whether media violence causes aggression and violence persists, particularly in response to high-profile criminal incidents. Blaming video games and other forms of media and popular culture for violence is not a new phenomenon. However, interpreting media effects can be difficult because commentators often assert a grand consensus that understates the more contradictory and nuanced interpretations of the data.

In fact, there is a consensus among many media researchers that media violence has an impact on aggression, although its impact on violence is less clear. For example, in response to the shooting in Munich, Brad Bushman, professor of communication and psychology, avoided pinning the incident solely on video games, but in the process supported the assertion that video gameplay is linked to aggression. He stated,

While there isn’t complete consensus in any scientific field, a study we conducted showed more than 90% of pediatricians and about two-thirds of media researchers surveyed agreed that violent video games increase aggression in children. (Bushman, 2016)

Others, too, have reached similar conclusions with regard to other media. In 2008, psychologist John Murray summarized decades of research stating, “Fifty years of research on the effect of TV violence on children leads to the inescapable conclusion that viewing media violence is related to increases in aggressive attitudes, values, and behaviors” (Murray, 2008, p. 1212). Scholars Glenn Sparks and Cheri Sparks similarly declared that,

Despite the fact that controversy still exists about the impact of media violence, the research results reveal a dominant and consistent pattern in favor of the notion that exposure to violent media images does increase the risk of aggressive behavior. (Sparks & Sparks, 2002, p. 273)

In 2014, psychologist Wayne Warburton more broadly concluded that the vast majority of studies have found “that exposure to violent media increases the likelihood of aggressive behavior in the short and long term, increases hostile perceptions and attitudes, and desensitizes individuals to violent content” (Warburton, 2014, p. 64).

Criminologists, too, are sensitive to the impact of media exposure. For example, Jacqueline Helfgott summarized the research:

There have been over 1,000 studies on the effects of TV and film violence over the past 40 years. Research on the influence of TV violence on aggression has consistently shown that TV violence increases aggression and social anxiety, cultivates a “mean view” of the world, and negatively impacts real-world behavior. (Helfgott, 2015, p. 50)

In his book, Media Coverage of Crime and Criminal Justice, criminologist Matthew Robinson stated, “Studies of the impact of media on violence are crystal clear in their findings and implications for society” (Robinson, 2011, p. 135). He cited studies on childhood exposure to violent media leading to aggressive behavior as evidence. In his pioneering book Media, Crime, and Criminal Justice, criminologist Ray Surette concurred that media violence is linked to aggression, but offered a nuanced interpretation. He stated,

a small to modest but genuine causal role for media violence regarding viewer aggression has been established for most beyond a reasonable doubt . . . There is certainly a connection between violent media and social aggression, but its strength and configuration is simply not known at this time. (Surette, 2011, p. 68)

The uncertainties about the strength of the relationship and the lack of evidence linking media violence to real-world violence are often lost in news media accounts of high-profile violent crimes.

Media Exposure and Copycat Crimes

While many scholars do seem to agree that there is evidence that media violence—whether that of film, TV, or video games—increases aggression, they disagree about its impact on violent or criminal behavior (Ferguson, 2014; Gunter, 2008; Helfgott, 2015; Reiner, 2002; Savage, 2008). Nonetheless, it is violent incidents that most often prompt speculation that media causes violence. More specifically, violence that appears to mimic portrayals of violent media tends to ignite controversy. For example, the idea that films contribute to violent crime is not a new assertion. Films such as A Clockwork Orange, Menace II Society, Set It Off, and Child’s Play 3 have been linked to crimes, and at least eight murders have been linked to Oliver Stone’s 1994 film Natural Born Killers (Bracci, 2010; Brooks, 2002; PBS, n.d.). Nonetheless, pinpointing a direct, causal relationship between media and violent crime remains elusive.

Criminologist Jacqueline Helfgott defined copycat crime as a “crime that is inspired by another crime” (Helfgott, 2015, p. 51). The idea is that offenders model their behavior on media representations of violence, whether real or fictional. One case in particular illustrated how popular culture, media, and criminal violence converge. On July 20, 2012, James Holmes entered the midnight premiere of The Dark Knight Rises, the third film in the massively successful Batman trilogy, in a movie theater in Aurora, Colorado. He shot and killed 12 people and wounded 70 others. At the time, the New York Times described the incident,

Witnesses told the police that Mr. Holmes said something to the effect of “I am the Joker,” according to a federal law enforcement official, and that his hair had been dyed or he was wearing a wig. Then, as people began to rise from their seats in confusion or anxiety, he began to shoot. The gunman paused at least once, several witnesses said, perhaps to reload, and continued firing. (Frosch & Johnson, 2012)

The dyed hair, Holmes’s alleged comment, and the fact that the incident occurred at a popular screening led many to speculate that the shooter was influenced by the earlier film in the trilogy, and reignited debate about the impact of media violence. The Daily Mail pointed out that Holmes may have been motivated by a 25-year-old Batman comic in which a gunman opens fire in a movie theater—thus further suggesting the iconic villain served as motivation for the attack (Graham & Gallagher, 2012). Perceptions of the “Joker connection” fed the notion that popular media has a direct causal influence on violent behavior, even as press reports later indicated that Holmes had not, in fact, made reference to the Joker (Meyer, 2015).

A week after the Aurora shooting, the New York Daily News published an article detailing a “possible copycat” crime. A suspect was arrested in his Maryland home after making threatening phone calls to his workplace. The article reported that the suspect stated, “I am a [sic] joker” and “I’m going to load my guns and blow everybody up.” In their search, police found “a lethal arsenal of 25 guns and thousands of rounds of ammunition” in the suspect’s home (McShane, 2012).

Though criminologists are generally skeptical that those who commit violent crimes are motivated solely by media violence, there does seem to be some evidence that media may be influential in shaping how some offenders commit crime. In his study of serious and violent juvenile offenders, criminologist Ray Surette found “about one out of three juveniles reports having considered a copycat crime and about one out of four reports actually having attempted one.” He concluded that “those juveniles who are self-reported copycats are significantly more likely to credit the media as both a general and personal influence.” Surette contended that though violent offenses garner the most media attention, copycat criminals are more likely to be career criminals and to commit property crimes rather than violent crimes (Surette, 2002, pp. 56, 63; Surette, 2011).

Discerning what crimes may be classified as copycat crimes is a challenge. Jacqueline Helfgott suggested they occur on a “continuum of influence.” On one end, she said, media plays a relatively minor role in being a “component of the modus operandi” of the offender, while on the other end, she said, “personality disordered media junkies” have difficulty distinguishing reality from violent fantasy. According to Helfgott, various factors such as individual characteristics, characteristics of media sources, relationship to media, demographic factors, and cultural factors are influential. Overall, scholars suggest that rather than pushing unsuspecting viewers to commit crimes, media more often influences how, rather than why, someone commits a crime (Helfgott, 2015; Marsh & Melville, 2014).

Given the public interest, there is relatively little research devoted to exactly what copycat crimes are and how they occur. Part of the problem of studying these types of crimes is the difficulty defining and measuring the concept. In an effort to clarify and empirically measure the phenomenon, Surette offered a scale that included seven indicators of copycat crimes. He used the following factors to identify copycat crimes: time order (media exposure must occur before the crime); time proximity (a five-year cut-off point of exposure); theme consistency (“a pattern of thought, feeling or behavior in the offender which closely parallels the media model”); scene specificity (mimicking a specific scene); repetitive viewing; self-editing (repeated viewing of a single scene while “the balance of the film is ignored”); and offender statements and second-party statements indicating the influence of media. Findings demonstrated that cases are often prematurely, if not erroneously, labeled as “copycat.” Surette suggested that use of the scale offers a more precise way for researchers to objectively measure trends and frequency of copycat crimes (Surette, 2016, p. 8).
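The seven indicators above amount to a checklist that can be tallied per case. As a purely illustrative sketch (the field names, the example case, and the idea of a numeric count are this sketch's own shorthand, not Surette's instrument), the scale could be operationalized like this:

```python
from dataclasses import dataclass, fields

@dataclass
class CopycatIndicators:
    """Surette's (2016) seven copycat-crime indicators.
    Field names are shorthand for this sketch, not Surette's labels."""
    time_order: bool           # media exposure occurred before the crime
    time_proximity: bool       # exposure within the five-year cut-off
    theme_consistency: bool    # offender's pattern parallels the media model
    scene_specificity: bool    # a specific scene was mimicked
    repetitive_viewing: bool   # the media model was viewed repeatedly
    self_editing: bool         # one scene replayed, the rest ignored
    offender_statements: bool  # offender/second parties credit media influence

def indicator_count(case: CopycatIndicators) -> int:
    """Number of indicators present (0-7). More indicators give stronger
    grounds for a 'copycat' label; no official threshold is implied."""
    return sum(getattr(case, f.name) for f in fields(case))

# Hypothetical case: only time order and proximity are established.
weak_case = CopycatIndicators(True, True, False, False, False, False, False)
print(indicator_count(weak_case))  # 2 of 7
```

The point of the tally is the one Surette makes: a case satisfying only one or two indicators is prematurely labeled "copycat," whereas the scale forces each criterion to be checked explicitly.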

Media Exposure and Violent Crimes

Overall, a causal link between media exposure and violent criminal behavior has yet to be validated, and most researchers steer clear of making such causal assumptions. Instead, many emphasize that media does not directly cause aggression and violence so much as operate as a risk factor among other variables (Bushman & Anderson, 2015; Warburton, 2014). In their review of media effects, Brad Bushman and psychologist Craig Anderson concluded,

In sum, extant research shows that media violence is a causal risk factor not only for mild forms of aggression but also for more serious forms of aggression, including violent criminal behavior. That does not mean that violent media exposure by itself will turn a normal child or adolescent who has few or no other risk factors into a violent criminal or a school shooter. Such extreme violence is rare, and tends to occur only when multiple risk factors converge in time, space, and within an individual. (Bushman & Anderson, 2015, p. 1817)

Surette, however, argued that there is no clear linkage between media exposure and criminal behavior—violent or otherwise. In other words, a link between media violence and aggression does not necessarily mean that exposure to violent media causes violent (or nonviolent) criminal behavior. Though there are thousands of articles addressing media effects, many of these consist of reviews or commentary about prior research findings rather than original studies (Brown, 2007; Murray, 2008; Savage, 2008; Surette, 2011). Fewer still are studies that specifically measure media violence and criminal behavior (Gunter, 2008; Strasburger & Donnerstein, 2014). In their meta-analysis investigating the link between media violence and criminal aggression, scholars Joanne Savage and Christina Yancey did not find support for the assertion. Instead, they concluded,

The study of most consequence for violent crime policy actually found that exposure to media violence was significantly negatively related to violent crime rates at the aggregate level . . . It is plain to us that the relationship between exposure to violent media and serious violence has yet to be established. (Savage & Yancey, 2008, p. 786)

Researchers continue to measure the impact of media violence across various forms of media and generally stop short of drawing a direct causal link in favor of more indirect effects. For example, one study examined the increase of gun violence in films over the years and concluded that violent scenes provide scripts for youth that justify gun violence and that, in turn, may amplify aggression (Bushman, Jamieson, Weitz, & Romer, 2013). But others report contradictory findings. Patrick Markey and colleagues studied the relationship between rates of homicide and aggravated assault and gun violence in films from 1960 to 2012 and found that over the years violent content in films increased while crime rates declined. After controlling for age shifts, poverty, education, incarceration rates, and economic inequality, the relationships remained statistically non-significant (Markey, French, & Markey, 2015, p. 165). Psychologist Christopher Ferguson also failed to find a relationship between violent content in films and video games and real-world violence (Ferguson, 2014).

Another study, by Gordon Dahl and Stefano DellaVigna, examined violent films from 1995–2004 and found decreases in violent crimes coincided with violent blockbuster movie attendance. Here, it was not the content that was alleged to impact crime rates, but instead what the authors called “voluntary incapacitation,” or the shifting of daily activities from that of potential criminal behavior to movie attendance. The authors concluded, “For each million people watching a strongly or mildly violent movie, respectively, violent crime decreases by 1.9% and 2.1%. Nonviolent movies have no statistically significant impact” (Dahl & DellaVigna, p. 39).

High-profile cases over the last several years have shifted public concern toward the perceived danger of video games, but research demonstrating a link between video games and criminal violence remains scant. The American Psychological Association declared that “research demonstrates a consistent relation between violent video game use and increases in aggressive behavior, aggressive cognitions and aggressive affect, and decreases in prosocial behavior, empathy and sensitivity to aggression . . .” but stopped short of claiming that video games impact criminal violence. According to Breuer and colleagues, “While all of the available meta-analyses . . . found a relationship between aggression and the use of (violent) video games, the size and interpretation of this connection differ largely between these studies . . .” (APA, 2015; Breuer et al., 2015; DeCamp, 2015). Further, psychologists Patrick Markey, Charlotte Markey, and Juliana French conducted four time-series analyses investigating the relationship between video game habits and assault and homicide rates. The studies measured rates of violent crime, annual and monthly video game sales, Internet searches for video game walkthroughs, and rates of violent crime occurring after the release dates of popular games. The results showed no relationship between video game habits and rates of aggravated assault and homicide; instead, there was some indication of decreases in crime (Markey, Markey, & French, 2015).

Another longitudinal study failed to find video game use to be a predictor of aggression, instead finding support for the “selection hypothesis”—that physically aggressive individuals (aged 14–17) were more likely to choose media content that contained violence than those slightly older, aged 18–21. Additionally, the researchers concluded,

that violent media do not have a substantial impact on aggressive personality or behavior, at least in the phases of late adolescence and early adulthood that we focused on. (Breuer, Vogelgesang, Quandt, & Festl, 2015, p. 324)

Overall, the lack of a consistent finding demonstrating that media exposure causes violent crime may not be particularly surprising given that studies linking media exposure, aggression, and violence suffer from a host of general criticisms. By way of explanation, social theorist David Gauntlett maintained that researchers frequently employ problematic definitions of aggression and violence and questionable methodologies, rely too much on fictional violence, neglect the social meaning of violence, and assume the third-person effect—that is, assume that other, vulnerable people are impacted by media, but “we” are not (Ferguson & Dyck, 2012; Gauntlett, 2001).

Others, such as scholars Martin Barker and Julian Petley, flatly reject the notion that violent media exposure is a causal factor for aggression and/or violence. In their book Ill Effects, the authors stated instead that it is simply “stupid” to query about “what are the effects of [media] violence” without taking context into account (p. 2). They counter what they describe as moral campaigners who advance the idea that media violence causes violence. Instead, Barker and Petley argue that audiences interpret media violence in a variety of ways based on their histories, experiences, and knowledge, and as such, it makes little sense to claim media “cause” violence (Barker & Petley, 2001).

Given the seemingly inconclusive and contradictory findings regarding media effects research, to say that the debate can, at times, be contentious is an understatement. One article published in European Psychologist queried “Does Doing Media Violence Research Make One Aggressive?” and lamented that the debate had devolved into an ideological one (Elson & Ferguson, 2013). Another academic journal published a special issue devoted to video games and youth and included a transcript of exchanges between two scholars to demonstrate that a “peaceful debate” was, in fact, possible (Ferguson & Konijn, 2015).

Nonetheless, in this debate, the stakes are high and the policy consequences profound. After examining over 900 published articles, publication patterns, prominent authors and coauthors, and disciplinary interest in the topic, scholar James Anderson argued that prominent media effects scholars, whom he deems the “causationists,” had developed a cottage industry dependent on funding by agencies focused primarily on the negative effects of media on children. Anderson argued that such a focus presents media as a threat to family values and ultimately operates as a zero-sum game. As a result, attention and resources are diverted toward media and away from other priorities that are essential to understanding aggression such as social disadvantage, substance abuse, and parental conflict (Anderson, 2008, p. 1276).

Theoretical Perspectives on Media Effects

Understanding how media may impact attitudes and behavior has been the focus of media and communications studies for decades. Numerous theoretical perspectives offer insight into how, and to what extent, the media impacts the audience. As scholar Jenny Kitzinger documented in 2004, there are generally two ways to approach the study of media effects. One is to foreground the power of media. That is, to suggest that the media holds powerful sway over viewers. Another perspective is to foreground the power and heterogeneity of the audience and to recognize that it is composed of active agents (Kitzinger, 2004).

The notion of an all-powerful media can be traced to the influence of scholars affiliated with the Institute for Social Research, or Frankfurt School, in the 1930s and 1940s, and to proponents of mass society theory. The institute was originally founded in Germany but later moved to the United States. Criminologist Yvonne Jewkes outlined how mass society theory assumed that members of the public were susceptible to media messages. This, theorists argued, was a result of rapidly changing social conditions and industrialization that produced isolated, impressionable individuals “cut adrift from kinship and organic ties and lacking moral cohesion” (Jewkes, 2015, p. 13). In this historical context, in the era of World War II, the impact of Nazi propaganda was particularly resonant. Here, the media was believed to exhibit a unidirectional flow, operating as a powerful force influencing the masses. The most useful metaphor for this perspective described the media as a “hypodermic syringe” that could “‘inject’ values, ideas and information directly into the passive receiver producing direct and unmediated ‘effects’” (Jewkes, 2015, pp. 16, 34). Though the hypodermic syringe model seems simplistic today, the idea that the media is all-powerful continues to inform contemporary public discourse around media and violence.

Concern about the power of media captured the attention of researchers interested in its purported negative impact on children. In one of the earliest series of studies in the United States, during the late 1920s and 1930s, researchers attempted to quantitatively measure media effects with the Payne Fund Studies. For example, they investigated how film, a relatively new medium, impacted children’s attitudes and behaviors, including antisocial and violent behavior. At the time, the Payne Fund Studies’ findings fueled the notion that children were indeed negatively influenced by films. This prompted the film industry to adopt a self-imposed code regulating content (Sparks & Sparks, 2002; Surette, 2011). Not everyone agreed with the approach. In fact, the methodologies employed in the studies received much criticism, and ultimately the movement was branded a moral crusade to regulate film content. Scholars Garth Jowett, Ian Jarvie, and Kathryn Fuller wrote about the significance of the studies,

We have seen this same policy battle fought and refought over radio, television, rock and roll, music videos and video games. Their researchers looked to see if intuitive concerns could be given concrete, measurable expression in research. While they had partial success, as have all subsequent efforts, they also ran into intractable problems . . . Since that day, no way has yet been found to resolve the dilemma of cause and effect: do crime movies create more crime, or do the criminally inclined enjoy and perhaps imitate crime movies? (Jowett, Jarvie, & Fuller, 1996, p. 12)

As the debate continued, more sophisticated theoretical perspectives emerged. Efforts to empirically measure the impact of media on aggression and violence continued, albeit with equivocal results. In the 1950s and 1960s, psychological behaviorism, or understanding psychological motivations through observable behavior, became a prominent lens through which to view the causal impact of media violence. This type of research was exemplified by Albert Bandura’s Bobo Doll studies demonstrating that children exposed to aggressive behavior, either observed in real life or on film, behaved more aggressively than those in control groups who were not exposed to the behavior. The assumption derived was that children learn through exposure and imitate behavior (Bandura, Ross, & Ross, 1963). Though influential, the Bandura experiments were nevertheless heavily criticized. Some argued the laboratory conditions under which children were exposed to media were not generalizable to real-life conditions. Others challenged the assumption that children absorb media content in an unsophisticated manner without being able to distinguish between fantasy and reality. In fact, later studies did find children to be more discerning consumers of media than popularly believed (Gauntlett, 2001).

Hugely influential in our understandings of human behavior, the concept of social learning has been at the core of more contemporary understandings of media effects. For example, scholar Christopher Ferguson noted that the General Aggression Model (GAM), rooted in social learning and cognitive theory, has for decades been a dominant model for understanding how media impacts aggression and violence. GAM is described as the idea that “aggression is learned by the activation and repetition of cognitive scripts coupled with the desensitization of emotional responses due to repeated exposure.” However, Ferguson noted that its usefulness has been debated, and he has advocated for a paradigm shift (Ferguson, 2013, pp. 65, 27; Krahé, 2014).

Though the methodologies of the Payne Fund Studies and Bandura studies were heavily criticized, concern over media effects continued to be tied to larger moral debates, including the fear of moral decline and concern over the welfare of children. Most notably, in the 1950s, psychiatrist Fredric Wertham warned of the dangers of comic books, a hugely popular medium at the time, and their impact on juveniles. Based on anecdotes and his clinical experience with children, Wertham argued that images of graphic violence and sexual debauchery in comic books were linked to juvenile delinquency. Though he was far from the only critic of comic book content, his criticisms reached the masses and gained further notoriety with the publication of his 1954 book, Seduction of the Innocent. Wertham described the comic book content thusly,

The stories have a lot of crime and gunplay and, in addition, alluring advertisements of guns, some of them full-page and in bright colors, with four guns of various sizes and descriptions on a page . . . Here is the repetition of violence and sexiness which no Freud, Krafft-Ebing or Havelock Ellis ever dreamed could be offered to children, and in such profusion . . . I have come to the conclusion that this chronic stimulation, temptation and seduction by comic books, both their content and their alluring advertisements of knives and guns, are contributing factors to many children’s maladjustment. (Wertham, 1954, p. 39)

Wertham’s work was instrumental in shaping public opinion and policies about the dangers of comic books. Concern about the impact of comics reached its apex in 1954 with hearings before the United States Senate Judiciary Subcommittee on Juvenile Delinquency. Wertham testified before the committee, arguing that comics were a leading cause of juvenile delinquency. Ultimately, protests against graphic content in comic books by various interest groups contributed to the implementation of the publishers’ self-censorship code, the Comics Code Authority, which essentially designated select books as “safe” for children (Nyberg, 1998). The code remained in place for decades, though it was eventually relaxed and, decades later, phased out by the two most dominant publishers, DC and Marvel.

Wertham’s work, however influential in impacting the comic industry, was ultimately panned by academics. Although scholar Bart Beaty characterized Wertham’s position as more nuanced, if not progressive, than the mythology that followed him, Wertham was broadly dismissed as a moral reactionary (Beaty, 2005; Phillips & Strobl, 2013). The most damning criticism of Wertham’s work came decades later, from Carol Tilley’s examination of Wertham’s files. She concluded that in Seduction of the Innocent,

Wertham manipulated, overstated, compromised, and fabricated evidence—especially that evidence he attributed to personal clinical research with young people—for rhetorical gain. (Tilley, 2012, p. 386)

Tilley linked Wertham’s approach to that of the Frankfurt theorists, who deemed popular culture a social threat, and contended that Wertham was most interested in “cultural correction” rather than scientific inquiry (Tilley, 2012, p. 404).

Over the decades, concern about the moral impact of media remained while theoretical and methodological approaches to media effects studies continued to evolve (Rich, Bickham, & Wartella, 2015). In what many consider a sophisticated development, theorists began to view the audience as more active and multifaceted than the mass society perspective allowed (Kitzinger, 2004). One perspective, based on a “uses and gratifications” model, assumes that rather than a passive audience being injected with values and information, a more active audience selects and “uses” media in response to its needs and desires. Studies of uses and gratifications take into account how the choice of media is influenced by one’s psychological and social circumstances. In this context, media provides a variety of functions for consumers, who may engage with it to gather information, reduce boredom, seek enjoyment, or facilitate communication (Katz, Blumler, & Gurevitch, 1973; Rubin, 2002). This approach differs from earlier views in that it privileges the perspective and agency of the audience.

Another approach, cultivation theory, gained momentum among researchers in the 1970s and has been of particular interest to criminologists. It focuses on how television viewing impacts viewers’ attitudes toward social reality. The theory was first introduced by communications scholar George Gerbner, who argued the importance of understanding the messages that long-term viewers absorb. Rather than examine the effect of specific content within any given programming, cultivation theory,

looks at exposure to massive flows of messages over long periods of time. The cultivation process takes place in the interaction of the viewer with the message; neither the message nor the viewer are all-powerful. (Gerbner, Gross, Morgan, Signorielli, & Shanahan, 2002, p. 48)

In other words, he argued, television viewers are, over time, exposed to messages about the way the world works. As Gerbner and colleagues stated, “continued exposure to its messages is likely to reiterate, confirm, and nourish—that is, cultivate—its own values and perspectives” (p. 49).

One of the most well-known consequences of heavy media exposure is what Gerbner termed the “mean world” syndrome. He coined the term based on studies that found that long-term exposure to media violence among heavy television viewers “tends to cultivate the image of a relatively mean and dangerous world” (p. 52). Inherent in Gerbner’s view was that media representations are separate and distinct entities from “real life.” That is, it is the distorted representations of crime and violence that cultivate the notion that the world is a dangerous place. In this context, Gerbner found that heavy television viewers are more likely to be fearful of crime and to overestimate their chances of being a victim of violence (Gerbner, 1994).

Though there is evidence in support of cultivation theory, the strength of the relationship between media exposure and fear of crime is inconclusive. This is in part due to the recognition that audience members are not homogenous. Instead, researchers have found that many factors impact the cultivating process, including, but not limited to, “class, race, gender, place of residence, and actual experience of crime” (Reiner, 2002; Sparks, 1992). Or, as Ted Chiricos and colleagues remarked in their study of crime news and fear of crime, “The issue is not whether media accounts of crime increase fear, but which audiences, with which experiences and interests, construct which meanings from the messages received” (Chiricos, Eschholz, & Gertz, 1997, p. 354).

Other researchers have found that exposure to media violence creates a desensitizing effect; that is, as viewers consume more violent media, they become less empathetic as well as psychologically and emotionally numb when confronted with actual violence (Bartholow, Bushman, & Sestir, 2006; Carnagey, Anderson, & Bushman, 2007; Cline, Croft, & Courrier, 1973; Fanti, Vanman, Henrich, & Avraamides, 2009; Krahé et al., 2011). Other scholars, such as Henry Giroux, however, point out that our contemporary culture is awash in violence and “everyone is infected.” From this perspective, the focus is not on certain individuals whose exposure to violent media leads to a desensitization to real-life violence, but rather on the notion that violence so permeates society that it has become normalized in ways that are divorced from ethical and moral implications. Giroux wrote,

While it would be wrong to suggest that the violence that saturates popular culture directly causes violence in the larger society, it is arguable that such violence serves not only to produce an insensitivity to real life violence but also functions to normalize violence as both a source of pleasure and as a practice for addressing social issues. When young people and others begin to believe that a world of extreme violence, vengeance, lawlessness, and revenge is the only world they inhabit, the culture and practice of real-life violence is more difficult to scrutinize, resist, and transform . . . (Giroux, 2015)

For Giroux, the danger is that the normalization of violence has become a threat to democracy itself. In our culture of mass consumption shaped by neoliberal logics, depoliticized narratives of violence have become desired forms of entertainment and are presented in ways that express tolerance for some forms of violence while delegitimizing others. In their book Disposable Futures, Brad Evans and Henry Giroux argued that as the spectacle of violence perpetuates fear of inevitable catastrophe, it reinforces the expansion of police powers, increased militarization, and other forms of social control, and ultimately renders marginalized members of the populace disposable (Evans & Giroux, 2015, p. 81).

Criminology and the “Media/Crime Nexus”

Most criminologists and sociologists who focus on media and crime are generally either dismissive of the notion that media violence directly causes violence or conclude that findings are more complex than traditional media effects models allow, preferring to focus attention on the impact of media violence on society rather than on individual behavior (Carrabine, 2008; Ferrell, Hayward, & Young, 2015; Jewkes, 2015; Kitzinger, 2004; Marsh & Melville, 2014; Rafter, 2006; Sternheimer, 2003, 2013; Surette, 2011). Sociologist Karen Sternheimer forcefully declared that “media culture is not the root cause of American social problems, not the Big Bad Wolf, as our ongoing public discussion would suggest” (Sternheimer, 2003, p. 3). Sternheimer rejected the idea that media causes violence and argued that a false connection has been forged between media, popular culture, and violence. Like others critical of a singular focus on media, Sternheimer posited that overemphasis on the perceived dangers of media violence serves as a red herring that directs attention away from the actual causes of violence rooted in factors such as poverty, family violence, abuse, and economic inequalities (Sternheimer, 2003, 2013). Similarly, in her Media and Crime text, Yvonne Jewkes stated that U.K. scholars tend to reject findings of a causal link because the studies are too reductionist; criminal behavior cannot be reduced to a single causal factor such as media consumption. Echoing Gauntlett’s critiques of media effects research, Jewkes stated that simplistic causal assumptions ignore “the wider context of a lifetime of meaning-making” (Jewkes, 2015, p. 17).

Although they most often reject a “violent media causes violence” relationship, criminologists do not dismiss the notion of media as influential. To the contrary, over the decades much criminological interest has focused on the construction of social problems, the ideological implications of media, and media’s potential impact on crime policies and social control. Eamonn Carrabine noted that the focus of concern is not whether media directly causes violence but “how the media promote damaging stereotypes of social groups, especially the young, to uphold the status quo” (Carrabine, 2008, p. 34). Theoretically, these foci have been traced to the influence of cultural and Marxist studies. For example, criminologists frequently focus on how social anxieties and class inequalities impact our understandings of the relationship between media violence and attitudes, values, and behaviors. Influential works in the 1970s, such as Policing the Crisis: Mugging, the State, and Law and Order by Stuart Hall et al. and Stanley Cohen’s Folk Devils and Moral Panics, shifted criminological critique toward understanding media as a hegemonic force that reinforces state power and social control (Brown, 2011; Carrabine, 2008; Cohen, 2005; Garland, 2008; Hall et al., 2013/1973). Since that time, moral panic has become a common framework applied to public discourse around a variety of social issues, including road rage, child abuse, popular music, sex panics, and drug abuse, among others.

Into the 21st century, advances in technology, including the increased use of social media, shifted the ways that criminologists approach the study of media effects. Scholar Sheila Brown traced how research in criminology evolved from a focus on “media and crime” to what she calls the “media/crime nexus,” which recognizes that “media experience is real experience” (Brown, 2011, p. 413). In other words, many criminologists began to reject as fallacy what social media theorist Nathan Jurgenson deemed “digital dualism,” or the notion that we have an “online” existence that is separate and distinct from our “off-line” existence. Instead, we exist simultaneously both online and offline, an

augmented reality that exists at the intersection of materiality and information, physicality and digitality, bodies and technology, atoms and bits, the off and the online. It is wrong to say “IRL” [in real life] to mean offline: Facebook is real life. (Jurgenson, 2012)

The changing media landscape has been of particular interest to cultural criminologists. Michelle Brown recognized the omnipresence of media as significant in terms of methodological preferences and urged a move away from a focus on causality and predictability toward a more fluid approach that embraces the complex, contemporary media-saturated social reality characterized by uncertainty and instability (Brown, 2007).

Cultural criminologists have indeed rejected direct, causal relationships in favor of the recognition that social meanings of aggression and violence are constantly in transition, flowing through the media landscape, where “bits of information reverberate and bend back on themselves, creating a fluid porosity of meaning that defines late-modern life, and the nature of crime and media within it.” In other words, there is no linear relationship between crime and its representation. Instead, crime is viewed as inseparable from the culture in which our everyday lives are constantly re-created in loops and spirals that “amplify, distort, and define the experience of crime and criminality itself” (Ferrell, Hayward, & Young, 2015, pp. 154–155). As an example of this shift in understanding media effects, criminologist Majid Yar proposed that we consider how the transition from being primarily consumers to primarily producers of content may serve as a motivating mechanism for criminal behavior. Here, Yar suggests that the proliferation of user-generated content via media technologies such as social media (i.e., the desire “to be seen” and to manage self-presentation) has a criminogenic component worthy of criminological inquiry (Yar, 2012). Shifting attention toward the media/crime nexus and away from traditional media effects analyses opens possibilities for a deeper understanding of the ways that media remains an integral part of our everyday lives, inseparable from our understandings of and engagement with crime and violence.

Over the years, from films to comic books to television to video games to social media, concerns over media effects have shifted along with changing technologies. While there seems to be some consensus that exposure to violent media impacts aggression, there is little evidence showing its impact on violent or criminal behavior. Nonetheless, high-profile violent crimes continue to reignite public interest in media effects, particularly with regard to copycat crimes.

At times, academic debate around media effects remains contentious, and one’s academic discipline informs the study and interpretation of media effects. Criminologists and sociologists are generally reluctant to attribute violence and criminal behavior directly to exposure to violent media. They are, however, not dismissive of the impact of media on attitudes, social policies, and social control, as evidenced by the myriad studies on moral panics and other research that addresses the relationship between media, social anxieties, gender, race, and class inequalities. Scholars who study media effects are also sensitive to the historical context of the debates and the ways that moral concerns shape public policies. The self-regulating codes of the film industry and the comic book industry have led scholars to be wary of hyperbole and policy overreach in response to claims of media effects. Future research will continue to explore the ways that changing technologies, including the increasing use of social media, will impact our understandings and perceptions of crime as well as criminal behavior.

Further Reading

  • American Psychological Association . (2015). Resolution on violent video games . Retrieved from
  • Anderson, J. A. , & Grimes, T. (2008). Special issue: Media violence. Introduction. American Behavioral Scientist , 51 (8), 1059–1060.
  • Berlatsky, N. (Ed.). (2012). Media violence: Opposing viewpoints . Farmington Hills, MI: Greenhaven.
  • Elson, M. , & Ferguson, C. J. (2014). Twenty-five years of research on violence in digital games and aggression. European Psychologist , 19 (1), 33–46.
  • Ferguson, C. (Ed.). (2015). Special issue: Video games and youth. Psychology of Popular Media Culture , 4 (4).
  • Ferguson, C. J. , Olson, C. K. , Kutner, L. A. , & Warner, D. E. (2014). Violent video games, catharsis seeking, bullying, and delinquency: A multivariate analysis of effects. Crime & Delinquency , 60 (5), 764–784.
  • Gentile, D. (2013). Catharsis and media violence: A conceptual analysis. Societies , 3 (4), 491–510.
  • Huesmann, L. R. (2007). The impact of electronic media violence: Scientific theory and research. Journal of Adolescent Health , 41 (6), S6–S13.
  • Huesmann, L. R. , & Taylor, L. D. (2006). The role of media violence in violent behavior. Annual Review of Public Health , 27 (1), 393–415.
  • Krahé, B. (Ed.). (2013). Special issue: Understanding media violence effects. Societies , 3 (3).
  • Media Violence Commission, International Society for Research on Aggression (ISRA) . (2012). Report of the Media Violence Commission. Aggressive Behavior , 38 (5), 335–341.
  • Rich, M. , & Bickham, D. (Eds.). (2015). Special issue: Methodological advances in the field of media influences on children. Introduction. American Behavioral Scientist , 59 (14), 1731–1735.
  • American Psychological Association (APA) . (2015, August 13). APA review confirms link between playing violent video games and aggression . Retrieved from
  • Anderson, J. A. (2008). The production of media violence and aggression research: A cultural analysis. American Behavioral Scientist , 51 (8), 1260–1279.
  • Bandura, A. , Ross, D. , & Ross, S. A. (1963). Imitation of film-mediated aggressive models. The Journal of Abnormal and Social Psychology , 66 (1), 3–11.
  • Barker, M. , & Petley, J. (2001). Ill effects: The media violence debate (2d ed.). London: Routledge.
  • Bartholow, B. D. , Bushman, B. J. , & Sestir, M. A. (2006). Chronic violent video game exposure and desensitization to violence: Behavioral and event-related brain potential data. Journal of Experimental Social Psychology , 42 (4), 532–539.
  • Beaty, B. (2005). Fredric Wertham and the critique of mass culture . Jackson: University Press of Mississippi.
  • Bracci, P. (2010, March 12). The police were sure James Bulger’s ten-year-old killers were simply wicked. But should their parents have been in the dock? Retrieved from
  • Breuer, J. , Vogelgesang, J. , Quandt, T. , & Festl, R. (2015). Violent video games and physical aggression: Evidence for a selection effect among adolescents. Psychology of Popular Media Culture , 4 (4), 305–328.
  • Brooks, X. (2002, December 19). Natural born copycats . Retrieved from
  • Brown, M. (2007). Beyond the requisites: Alternative starting points in the study of media effects and youth violence. Journal of Criminal Justice and Popular Culture , 14 (1), 1–20.
  • Brown, S. (2011). Media/crime/millennium: Where are we now? A reflective review of research and theory directions in the 21st century. Sociology Compass , 5 (6), 413–425.
  • Bushman, B. (2016, July 26). Violent video games and real violence: There’s a link but it’s not so simple . Retrieved from
  • Bushman, B. J. , & Anderson, C. A. (2015). Understanding causality in the effects of media violence. American Behavioral Scientist , 59 (14), 1807–1821.
  • Bushman, B. J. , Jamieson, P. E. , Weitz, I. , & Romer, D. (2013). Gun violence trends in movies. Pediatrics , 132 (6), 1014–1018.
  • Carnagey, N. L. , Anderson, C. A. , & Bushman, B. J. (2007). The effect of video game violence on physiological desensitization to real-life violence. Journal of Experimental Social Psychology , 43 (3), 489–496.
  • Carrabine, E. (2008). Crime, culture and the media . Cambridge, U.K.: Polity.
  • Chiricos, T. , Eschholz, S. , & Gertz, M. (1997). Crime, news and fear of crime: Toward an identification of audience effects. Social Problems , 44 , 342.
  • Cline, V. B. , Croft, R. G. , & Courrier, S. (1973). Desensitization of children to television violence. Journal of Personality and Social Psychology , 27 (3), 360–365.
  • CNN Wire (2016, July 24). Officials: 18-year-old suspect in Munich attack was obsessed with mass shootings . Retrieved from
  • Cohen, S. (2005). Folk devils and moral panics (3d ed.). New York: Routledge.
  • Cullen, D. (2009). Columbine . New York: Hachette.
  • Dahl, G. , & DellaVigna, S. (2012). Does movie violence increase violent crime? In N. Berlatsky (Ed.), Media Violence: Opposing Viewpoints (pp. 36–43). Farmington Hills, MI: Greenhaven.
  • DeCamp, W. (2015). Impersonal agencies of communication: Comparing the effects of video games and other risk factors on violence. Psychology of Popular Media Culture , 4 (4), 296–304.
  • Elson, M. , & Ferguson, C. J. (2013). Does doing media violence research make one aggressive? European Psychologist , 19 (1), 68–75.
  • Evans, B. , & Giroux, H. (2015). Disposable futures: The seduction of violence in the age of spectacle . San Francisco: City Lights Publishers.
  • Fanti, K. A. , Vanman, E. , Henrich, C. C. , & Avraamides, M. N. (2009). Desensitization to media violence over a short period of time. Aggressive Behavior , 35 (2), 179–187.
  • Ferguson, C. J. (2013). Violent video games and the Supreme Court: Lessons for the scientific community in the wake of Brown v. Entertainment Merchants Association. American Psychologist , 68 (2), 57–74.
  • Ferguson, C. J. (2014). Does media violence predict societal violence? It depends on what you look at and when. Journal of Communication , 65 (1), E1–E22.
  • Ferguson, C. J. , & Dyck, D. (2012). Paradigm change in aggression research: The time has come to retire the general aggression model. Aggression and Violent Behavior , 17 (3), 220–228.
  • Ferguson, C. J. , & Konijn, E. A. (2015). She said/he said: A peaceful debate on video game violence. Psychology of Popular Media Culture , 4 (4), 397–411.
  • Ferrell, J. , Hayward, K. , & Young, J. (2015). Cultural criminology: An invitation . Thousand Oaks, CA: SAGE.
  • Frosch, D. , & Johnson, K. (2012, July 20). 12 are killed at showing of Batman movie in Colorado . Retrieved from
  • Garland, D. (2008). On the concept of moral panic. Crime, Media, Culture , 4 (1), 9–30.
  • Gauntlett, D. (2001). The worrying influence of “media effects” studies. In M. Barker & J. Petley (Eds.), Ill effects: The media violence debate (2d ed.). London: Routledge.
  • Gerbner, G. (1994). TV violence and the art of asking the wrong question. Retrieved from
  • Gerbner, G. , Gross, L. , Morgan, M. , Signorielli, N. , & Shanahan, J. (2002). Growing up with television: Cultivation process. In J. Bryant & D. Zillmann (Eds.), Media effects: Advances in theory and research (pp. 43–67). Mahwah, NJ: Lawrence Erlbaum Associates.
  • Giroux, H. (2015, December 25). America’s addiction to violence . Retrieved from
  • Graham, C. , & Gallagher, I. (2012, July 20). Gunman who massacred 12 at movie premiere used same drugs that killed Batman star Heath Ledger . Retrieved from
  • Gunter, B. (2008). Media violence: Is there a case for causality? American Behavioral Scientist , 51 (8), 1061–1122.
  • Hall, S. , Critcher, C. , Jefferson, T. , Clarke, J. , & Roberts, B. (2013/1973). Policing the crisis: Mugging, the state and law and order . Hampshire, U.K.: Palgrave.
  • Helfgott, J. B. (2015). Criminal behavior and the copycat effect: Literature review and theoretical framework for empirical investigation. Aggression and Violent Behavior , 22 (C), 46–64.
  • Hetsroni, A. (2007). Four decades of violent content on prime-time network programming: A longitudinal meta-analytic review. Journal of Communication , 57 (4), 759–784.
  • Jewkes, Y. (2015). Media & crime . London: SAGE.
  • Jowett, G. , Jarvie, I. , & Fuller, K. (1996). Children and the movies: Media influence and the Payne Fund controversy . Cambridge, U.K.: Cambridge University Press.
  • Jurgenson, N. (2012, June 28). The IRL fetish . Retrieved from
  • Katz, E. , Blumler, J. G. , & Gurevitch, M. (1973). Uses and gratifications research. The Public Opinion Quarterly .
  • Kitzinger, J. (2004). Framing abuse: Media influence and public understanding of sexual violence against children . London: Polity.
  • Krahé, B. (2014). Restoring the spirit of fair play in the debate about violent video games. European Psychologist , 19 (1), 56–59.
  • Krahé, B. , Möller, I. , Huesmann, L. R. , Kirwil, L. , Felber, J. , & Berger, A. (2011). Desensitization to media violence: Links with habitual media violence exposure, aggressive cognitions, and aggressive behavior. Journal of Personality and Social Psychology , 100 (4), 630–646.
  • Markey, P. M. , French, J. E. , & Markey, C. N. (2015). Violent movies and severe acts of violence: Sensationalism versus science. Human Communication Research , 41 (2), 155–173.
  • Markey, P. M. , Markey, C. N. , & French, J. E. (2015). Violent video games and real-world violence: Rhetoric versus data. Psychology of Popular Media Culture , 4 (4), 277–295.
  • Marsh, I. , & Melville, G. (2014). Crime, justice and the media . New York: Routledge.
  • McShane, L. (2012, July 27). Maryland police arrest possible Aurora copycat . Retrieved from
  • Meyer, J. (2015, September 18). The James Holmes “Joker” rumor . Retrieved from
  • Murray, J. P. (2008). Media violence: The effects are both real and strong. American Behavioral Scientist , 51 (8), 1212–1230.
  • Nyberg, A. K. (1998). Seal of approval: The history of the comics code. Jackson: University Press of Mississippi.
  • PBS . (n.d.). Culture shock: Flashpoints: Theater, film, and video: Stanley Kubrick’s A Clockwork Orange. Retrieved from
  • Phillips, N. D. , & Strobl, S. (2013). Comic book crime: Truth, justice, and the American way . New York: New York University Press.
  • Rafter, N. (2006). Shots in the mirror: Crime films and society (2d ed.). New York: Oxford University Press.
  • Reiner, R. (2002). Media made criminality: The representation of crime in the mass media. In R. Reiner , M. Maguire , & R. Morgan (Eds.), The Oxford handbook of criminology (pp. 302–340). Oxford: Oxford University Press.
  • Reuters . (2016, July 24). Munich gunman, a fan of violent video games, rampage killers, had planned attack for a year . Retrieved from
  • Rich, M. , Bickham, D. S. , & Wartella, E. (2015). Methodological advances in the field of media influences on children. American Behavioral Scientist , 59 (14), 1731–1735.
  • Robinson, M. B. (2011). Media coverage of crime and criminal justice. Durham, NC: Carolina Academic Press.
  • Rubin, A. (2002). The uses-and-gratifications perspective of media effects. In J. Bryant & D. Zillmann (Eds.), Media effects: Advances in theory and research (pp. 525–548). Mahwah, NJ: Lawrence Erlbaum Associates.
  • Savage, J. (2008). The role of exposure to media violence in the etiology of violent behavior: A criminologist weighs in. American Behavioral Scientist , 51 (8), 1123–1136.
  • Savage, J. , & Yancey, C. (2008). The effects of media violence exposure on criminal aggression: A meta-analysis. Criminal Justice and Behavior , 35 (6), 772–791.
  • Sparks, R. (1992). Television and the drama of crime: Moral tales and the place of crime in public life . Buckingham, U.K.: Open University Press.
  • Sparks, G. , & Sparks, C. (2002). Effects of media violence. In J. Bryant & D. Zillmann (Eds.), Media effects: Advances in theory and research (2d ed., pp. 269–286). Mahwah, NJ: Lawrence Erlbaum Associates.
  • Sternheimer, K. (2003). It’s not the media: The truth about pop culture’s influence on children . Boulder, CO: Westview.
  • Sternheimer, K. (2013). Connecting social problems and popular culture: Why media is not the answer (2d ed.). Boulder, CO: Westview.
  • Strasburger, V. C. , & Donnerstein, E. (2014). The new media of violent video games: Yet same old media problems? Clinical Pediatrics , 53 (8), 721–725.
  • Surette, R. (2002). Self-reported copycat crime among a population of serious and violent juvenile offenders. Crime & Delinquency , 48 (1), 46–69.
  • Surette, R. (2011). Media, crime, and criminal justice: Images, realities and policies (4th ed.). Belmont, CA: Wadsworth.
  • Surette, R. (2016). Measuring copycat crime. Crime, Media, Culture , 12 (1), 37–64.
  • Tilley, C. L. (2012). Seducing the innocent: Fredric Wertham and the falsifications that helped condemn comics. Information & Culture , 47 (4), 383–413.
  • Warburton, W. (2014). Apples, oranges, and the burden of proof—putting media violence findings into context. European Psychologist , 19 (1), 60–67.
  • Wertham, F. (1954). Seduction of the innocent . New York: Rinehart.
  • Yamato, J. (2016, June 14). Gaming industry mourns Orlando victims at E3—and sees no link between video games and gun violence . Retrieved from
  • Yar, M. (2012). Crime, media and the will-to-representation: Reconsidering relationships in the new media age. Crime, Media, Culture , 8 (3), 245–260.

Related Articles

  • Intimate Partner Violence
  • The Extent and Nature of Gang Crime
  • Intersecting Dimensions of Violence, Abuse, and Victimization

Printed from Oxford Research Encyclopedias, Criminology and Criminal Justice. Under the terms of the licence agreement, an individual user may print out a single article for personal use (for details see Privacy Policy and Legal Notice).




May 7, 2024


'Doing your own research' can make fake news seem believable

by Leila Okahata, University of Oregon


While it's healthy to question what we see and hear in the media, those quick internet searches to fact-check news stories can unexpectedly backfire and lead people to believe false stories, according to the director of the University of Oregon's undergraduate journalism program.

As more people tune into the press for the upcoming election cycle, Seth Lewis, who holds the Shirley Papé Chair in Emerging Media at UO's School of Journalism and Communication, said caution is in order when trying to verify media accounts.

For those who plan to cast a vote in this year's statewide and presidential elections, not knowing which media sources and stories to trust can leave them more misinformed.

"The big takeaway is there are social costs to not trusting journalists and institutions," Lewis said. "There's the cost of encountering poor-quality information and the cost in time that could be spent on other activities besides trying to fact-check the news."

Drawing on interviews conducted in 2020, a time when people were relying heavily on the news for guidance on the COVID-19 pandemic, Lewis and his University of Utah colleague Jacob L. Nelson found that Americans had greater faith in their abilities to fact-check the news than they had in the news itself. Many of those interviewed reported feeling the need to "do their own research" using search engines because of their distrust in journalism as biased and politicized.

But those who reject journalism in favor of their own internet research can wind up more misinformed, falling into conspiracy theories, rabbit holes and low-quality data voids, a problem heightened during election season, Lewis said.

This is supported by recent work from a different set of researchers, published in the journal Nature: when people were encouraged to do additional searching after reading true and fake stories about the COVID-19 pandemic, they were more likely to believe the fake ones than those who hadn't performed an online search.

As ballots for Oregon's statewide election hit mailboxes in May and the 2024 presidential campaign heats up, equipping voters with the tools to more effectively navigate the infinite information environment can increase their access to high-quality news sources, research shows.

In their 2020 interviews, Lewis and Nelson found that frustration and distrust in the news surprisingly crossed partisan lines. People who were interviewed shared the sentiment that only "sheep" would trust journalists and also had a common desire to better understand the world. Yet to uncover that clear, accurate picture, information seekers must challenge not only a news source's biases and reputability but also their own biases, which might influence which stories they trust or dismiss, Lewis said.

"That skepticism should be applied as much to ourselves as to others," he said. "You should be a little bit skeptical of your own opinions."

Waning trust in news media can be traced back to the 1970s and has been rapidly accelerating in recent years because of several challenging crises the United States has faced, Lewis said.

"We're in a moment where we are increasingly realizing that news is both everywhere and nowhere," he said. "News is all around us yet seems to have, in some sense, less impact than it did before. It's never been easier to stumble upon news, but people often talk about being exhausted by it and, therefore, are turning away from it at unprecedented levels."

Journalists can do better to earn the public's trust, Lewis said. Many individuals don't see journalists as experts, nor do they have the kind of strong relationship with them that they have with, for example, their doctors.

Although there is a fair bit of distrust in both journalism and health care as institutions, people are more trusting of individual doctors and don't feel the need to fact-check them as they do individual journalists, Lewis found in a 2023 study published in the journal Media and Communication.

"But journalists are experts," Lewis said. "They are experts in finding accurate information and trying to present it in a professional manner, but they can also do better in presenting themselves as practitioners with expertise."

Bringing transparency into the practice of journalism can illuminate what some people see as a black box. In their latest study, published April 25 in the journal Journalism, Lewis and his team noticed in interviews that many Americans perceived journalists as motivated by profit. But in reality, most journalists are paid rather poorly and are motivated more by passion than by profit, he said. Widespread job cuts have also hit the industry, with hundreds of journalists laid off at the start of 2024.

A disconnect exists between how people perceive journalism and how it actually works, and journalists should share the principles, techniques and challenges that go into it, Lewis said.

Journalists can also embrace more public engagement in their work. For instance, Lewis' UO colleague Ed Madison leads the Journalistic Learning Initiative, which gives middle- and high-schoolers the opportunity to learn journalistic techniques, become more media literate and tell factual stories about their world.

"What it takes to build trust in journalism is the same as anywhere else," Lewis said. "By building relationships."

Journal information: Nature

Provided by University of Oregon



Public Sentiment Sours Amid BTS’s Chart Manipulation Probe


Netizens criticized media reports claiming that BTS were victims of misinformation.


On May 7, a post titled “BTS Are The Sacrificial Lamb Amid HYBE And Min Hee Jin Fight… Victims Of Rumors” went viral.

In the post, the author uploaded a Newsis article in which the publication quoted foreign media outlets that claimed false rumors were wrongfully targeting BTS.

International outlets didn’t report on the rumors due to the lack of evidence, but Korean media outlets focused on the allegations. BTS may be a target of misinformation. — French media quoted by Newsis

The rumors in question allude to allegations claiming that Big Hit Music had manipulated music charts in favor of BTS. Recently, the Korean government ordered an investigation into the allegation after a 2017 court ruling resurfaced, in which a Seoul District Court ruled that it had found evidence of manipulation.


Korean netizens responded to the report with anger, with many objecting to the classification of the allegations as mere rumors or misinformation. Many pointed to the 2017 court ruling as fact and argued that an investigation was absolutely warranted.


  • “Rumors? Am I misunderstanding the definition of the word?”
  • “Laughing at the use of ‘Rumor,’ are they preparing to feed us more bulls#it?”
  • “The court ruling is a fact, so it will be hard to manipulate this through media play.”
  • “This makes you guys so unlikable. Really embarrassing.”
  • “Oh, so a court ruling is a rumor now…”
  • “They are calling the court ruling a rumor now.”
  • “Korean courts be like: ????”
  • “HYBE is claiming victim after reaping the benefits (of their manipulation).”

Stay tuned for updates.


Americans’ Changing Relationship With Local News
2. Local news topics


Local news topics range from useful daily information like weather and traffic, to civic information about local government, crime and the economy, to cultural news about the arts and sports.

[Bar chart: many Americans follow news about local weather, crime, traffic and government]

Majorities of Americans say they get news at least sometimes about each of these topics, although weather is the only one followed often by most Americans. Roughly two-thirds of U.S. adults (68%) say they often get news about local weather – double the share who often consume news about crime, the next most common topic.

Those who pay more attention to local news generally are more likely to follow many of these topics. For example, adults 65 and older are more likely to get news about all of these topics than those ages 18 to 29, reflecting the broader difference between the age groups in attention to local news .

There are other differences that hint at how local news needs vary across the U.S. For instance, Americans who live in urban areas are more likely than those in rural areas to say they often get local traffic news (32% vs. 24%). And parents of children under 18 are about twice as likely as those without young children to often get news about local schools (30% vs. 14%).

Midwesterners are more likely than people in other regions to often get news about local sports (29%, vs. 22% or lower in other regions). And in general, Americans who describe themselves as “very attached” to their local community are more likely to say they often get news about all of these topics, reflecting a more general sense of engagement among those with high levels of community connection.

The survey asked respondents who get news about each topic how satisfied they are with the quality of the news they get in that area.

[Bar chart: most local news consumers are not highly satisfied with the news they get about various topics]

Weather is the only topic whose news consumers are highly satisfied: 63% of those who get local news about weather say they are extremely or very satisfied with it. Among those who get news about local traffic and sports, just over four-in-ten say they are highly satisfied with the news they get (44% and 43%, respectively).

Only about a quarter of those who consume news about the local economy (26%) or local government and politics (25%) say they are extremely or very satisfied with the quality of this news, although about twice as many respondents in each category say they are somewhat satisfied.





  1. Social media manipulation by political actors an industrial scale

    Organised social media manipulation campaigns were found in each of the 81 surveyed countries, up 15% in one year, from 70 countries in 2019. Governments, public relations firms and political parties are producing misinformation on an industrial scale, according to the report.

  2. Misinformation, manipulation, and abuse on social media in the era of

    The COVID-19 pandemic represented an unprecedented setting for the spread of online misinformation, manipulation, and abuse, with the potential to cause dramatic real-world consequences. The aim of this special issue was to collect contributions investigating issues such as the emergence of infodemics, misinformation, conspiracy theories ...

  3. Digital media and misinformation: An outlook on multidisciplinary

Lately, we have seen a shift in the stance of digital platforms, with greater collaboration with academia, government, and industry to avoid media manipulation. Table 5 lists some policies adopted by leading companies to curb misinformation. The most common actions include content moderation, partnering with fact-checking networks ...

  4. Propaganda, misinformation, and histories of media techniques

Propaganda has a history and so does research on it. In other words, the mechanisms and methods through which media scholars have sought to understand propaganda—or misinformation, or disinformation, or fake news, or whatever you would like to call it—are themselves historically embedded and carry with them underlying notions of power and causality.

  5. PDF The Media Manipulation Casebook

The Media Manipulation Casebook (the Casebook) is a research repository consisting of documented attempts to manipulate on- and offline media ...

  6. Controlling the spread of misinformation

Misinformation on COVID-19 is so pervasive that even some patients dying from the disease still say it's a hoax. In March 2020, nearly 30% of U.S. adults believed the Chinese government created the coronavirus as a bioweapon (Social Science & Medicine, Vol. 263, 2020), and in June, a quarter believed the outbreak was intentionally planned by people in power (Pew Research Center, 2020).

  7. Digital media and misinformation: An outlook on ...

    This review discusses the dynamic mechanisms of misinformation creation and spreading used in social networks. It includes: (1) a conceptualization of misinformation and related terms, such as rumors and disinformation; (2) an analysis of the cognitive vulnerabilities that hinder the correction of the effects of an inaccurate narrative already assimilated; and (3) an interdisciplinary ...

  8. Inside the 'Misinformation' Wars

    The news media's handling of that narrative provides "an instructive case study on the power of social media and news organizations to mitigate media manipulation campaigns," according to ...

  9. PDF Propaganda, misinformation, and histories of media techniques

Perhaps most importantly, little research on media and communication understands ideology in terms of "discrete falsehoods and erroneous belief," preferring to focus on processes of deep structural misrecognition that serve dominant economic interests (Corner, 2001, p. 526). This obviously marks a ...

  10. Data & Society

    Data & Society's Media Manipulation & Disinformation research examines how different groups use the participatory culture of the internet to turn the strengths of a free society into vulnerabilities, ultimately threatening expressive freedoms and civil rights. Efforts to exploit technical, social, economic, and institutional configurations of ...

  11. A Blueprint for Documenting and Debunking ...

    The Media Manipulation Casebook is a tool to help journalists, researchers, and policymakers know how and when to respond to misinformation in all its forms. In 2020, amid a pandemic and protests and a presidential election, misinformation lays in wait everywhere. It's on our social media feeds, coming out of the mouths of our politicians ...

  12. Informed consent and the Facebook emotional manipulation study

The Facebook study, entitled Experimental evidence of massive-scale emotional contagion through social networks (Kramer et al., 2014), was a collaborative endeavour between Facebook and Cornell University's Departments of Communication and Information Science. In it, Facebook researchers directly manipulated Facebook users' news feeds to display differing amounts of positive and negative ...

  13. Full article: Combating fake news, disinformation, and misinformation

1. Introduction. Fake news is "news articles that are intentionally and verifiably false, and could mislead readers" (Allcott & Gentzkow, Citation 2017, p. 213). It is also sometimes referred to as information pollution (Wardle & Derakshan, Citation 2017), media manipulation (Marwick & Lewis, Citation 2017) or information warfare (Khaldarova & Pantti, Citation 2016).

  14. The disaster of misinformation: a review of research in social media

It was observed that scholarly discussion about 'misinformation and social media' began to appear in research after 2008. Later in 2010, the topic gained more attention when Twitter bots were used for spreading fake news about the replacement of a USA Senator. Hate campaigns and fake follower activities were simultaneously growing during that ...

  15. Qualitative and Mixed Methods Social Media Research:

    Kaplan and Haenlein (2010) defined social media as "… a group of Internet-based applications that build on the ideological and technological foundations of Web 2.0, and that allow the creation and exchange of User Generated Content" (p. 61). The emergence of social media technologies has been embraced by a growing number of users who post text messages, pictures, and videos online ...


  18. media manipulation

    Browse media manipulation news, research and analysis from ... Articles on media manipulation. Displaying all articles ... Trump's tweets systematically divert attention away from topics that ...

  19. Tackling Disinformation

    THE KENNEDY SCHOOL'S Shorenstein Center on Media, Politics and Public Policy reflects the School's digital-era transformation. The center has broadened its focus well beyond legacy newspapers and broadcasters to take on research projects on digital-heavy subjects, such as how news media should handle disinformation and how to document the impact of digital technology on users.

  20. Violence, Media Effects, and Criminology

    There have been over 1000 studies on the effects of TV and film violence over the past 40 years. Research on the influence of TV violence on aggression has consistently shown that TV violence increases aggression and social anxiety, cultivates a "mean view" of the world, and negatively impacts real-world behavior. (Helfgott, 2015, p.

  21. (PDF) Social Media Manipulation Report 2020

We found an increase of roughly 20 per cent in the price of a basket of manipulation services across all social media platforms from 2018 to ...

  22. IJERPH

    Social media is not only an essential platform for the dissemination of public health-related information, but also an important channel for people to communicate during the COVID-19 pandemic. However, social bots can interfere with the social media topics that humans follow. We analyzed and visualized Twitter data during the prevalence of the Wuhan lab leak theory and discovered that 29% of ...

  23. 'Doing your own research' can make fake news seem believable


  24. US Hispanics' news habits and sources

    Younger people prefer digital devices and social media for news at higher rates. Among Latino adults ages 18 to 49, 73% prefer to get their news on digital devices, including 27% who prefer social media specifically. Among Latinos ages 50 and older, 43% prefer digital devices and just 5% prefer social media.

  25. Fake news, disinformation and misinformation in social media: a review

Social media outperformed television as the major news source for young people of the UK and the USA. Moreover, as it is easier to generate and disseminate news online than with traditional media or face to face, large volumes of fake news are produced online for many reasons (Shu et al. 2017). Furthermore, it has been reported in a previous study about the spread of online news on Twitter ...

  26. How Americans view their local news media

    The share who say local media are doing well as political watchdogs is down slightly from 66% in 2018. About a third of U.S. adults say their local news media are not doing well at keeping an eye on local political leaders (35%) or being transparent about their reporting (34%).

  27. 3. Sources of local news

    News from friends, family and neighbors is still most often shared by word of mouth (i.e., in person or on the phone), but it is increasingly likely to be shared on social media. ­Among those who get local news from people in their community, 25% now say that primarily happens on social media, up from 17% in 2018.

  28. How closely do Americans follow local news?

    The share of Americans who say they follow local news very closely now stands at 22% - a decline of 15 percentage points since 2016, when 37% of U.S. adults said the same. Most U.S. adults (66%) still say they follow local news at least somewhat closely, although this number is also down. Roughly ...

  29. Public Sentiment Sours Amid BTS's Chart Manipulation Probe

On May 7, a post titled "BTS Are The Sacrificial Lamb Amid HYBE And Min Hee Jin Fight… Victims Of Rumors" went viral. In the post, the author uploaded a Newsis article in which the publication quoted foreign media outlets that claimed false rumors were wrongfully targeting BTS. International outlets didn't report on the rumors due to the lack of evidence, but Korean media outlets ...

  30. Local news and the topics Americans follow

    Local news topics range from useful daily information like weather and traffic, to civic information about local government, crime and the economy, to cultural news about the arts and sports. Majorities of Americans say they get news at least sometimes about each of these topics, although weather is the only one followed often by most Americans ...