In the lead-up to the 2024 election, the U.S. government has announced a series of aggressive steps to safeguard the process from foreign interference. Among these measures are the seizure of internet domains and the imposition of sanctions on Russian operatives, reflecting an escalating commitment to counter foreign disinformation campaigns. However, this effort has sparked a wider conversation about the balance between national security and the regulation of online discourse, particularly as social media companies face increasing scrutiny.
Just last week, Meta founder Mark Zuckerberg submitted a letter to Congress that stirred fresh debate. Zuckerberg’s claim? In 2021, senior Biden administration officials pressured Facebook to censor certain COVID-19 posts, including humor and satire, as part of efforts to curb misinformation. Zuckerberg noted that while the government exerted pressure, Facebook ultimately retained the final say on content moderation. “It was our decision whether or not to take content down,” he emphasized. His statement reignited concerns about the role of government influence in shaping the information landscape, particularly when it comes to public health and sensitive political issues.
While Zuckerberg’s allegations have drawn mixed reactions, one thing is clear: disinformation, particularly in the digital age, represents a formidable challenge. The World Economic Forum (WEF), in its 2024 Global Risks Report, ranked misinformation and disinformation as the top short-term global risk for the coming two years. And while foreign actors such as Russia and China are frequently accused of manipulating online platforms to sow discord, there is also a growing realization that no nation is entirely innocent in this domain—including the United States.
The U.S. has long projected itself as a staunch opponent of foreign disinformation, with Russia and China often singled out as major culprits. However, history reveals a more complex picture. The U.S. itself has engaged in covert online influence campaigns that mirror the tactics it decries from adversaries.
One of the most prominent examples was Operation Earnest Voice, launched in 2011, which used “sock puppets”—fake social media accounts—to disseminate pro-American narratives. Far from a relic of the past, similar efforts have persisted into the present day. A 2022 study by the Stanford Internet Observatory uncovered extensive evidence of U.S.-linked sock-puppet accounts targeting audiences in countries such as Russia, China, and Iran. These accounts spread sensational rumors, including stories of Iranian officials harvesting the organs of Afghan refugees. Some even impersonated Iranian hardliners, criticizing the government for its perceived moderation. Subsequent investigations linked many of these accounts to the Pentagon.
Such operations have raised critical questions about the ethical boundaries of psychological warfare and the use of disinformation as a tool of statecraft. In a striking reversal, the U.S. has employed many of the same strategies that have been used against it. This trend is evident in official documents, including an October 2022 procurement request from U.S. Special Operations Command (SOCOM). The document sought tools for “influence operations, digital deception, communication disruption, and disinformation campaigns,” even suggesting the use of deepfake technology for online influence.
The implications of these tactics are profound. Shelby Grossman, a researcher at Stanford’s Internet Observatory, noted the irony of these efforts in a 2022 interview: “The sock puppet accounts were kind of funny to look at because we are so used to analyzing pro-Kremlin sock puppets, so it was weird to see accounts pushing the opposite narrative.” In other words, the U.S. has increasingly embraced methods it once exclusively attributed to its geopolitical rivals.
One of the most concerning aspects of the U.S.’s foray into online influence operations is the potential for unintended blowback. In a globalized digital environment, information meant for a foreign audience can quickly spill over into domestic discourse. For instance, in June 2023, a Reuters investigation revealed that the U.S. military had been behind a covert anti-China vaccine campaign in the Philippines. While this operation targeted Southeast Asian audiences, there is always the risk that such content could be amplified and spread within the U.S., fueling internal divisions.
Recent Arabic-language advertisements on dating apps in Lebanon promoting pro-American messages suggest that the U.S. military is increasingly willing to explore new and unconventional channels for its influence operations. Such tactics, however, come with significant risks: the disinformation landscape is inherently volatile, and the rapid spread of online content means that carefully crafted narratives can easily spiral out of control.
As the U.S. tightens its defenses ahead of the 2024 election, it faces a paradox. On the one hand, its measures to combat foreign disinformation—such as sanctioning Russian operatives—are crucial for preserving the integrity of the democratic process. On the other hand, its own history of disinformation campaigns raises troubling questions about its commitment to transparency and ethical behavior on the global stage.
For social media platforms like Facebook, the stakes are equally high. Zuckerberg’s recent letter underscores the delicate balance between respecting free speech and preventing the spread of harmful misinformation. The pressure from governments to regulate content more aggressively will only intensify as the election approaches. Yet, as history has shown, government involvement in shaping the online narrative is fraught with its own set of dangers.
Ultimately, the battle against disinformation is far from straightforward. Both foreign and domestic actors have a role to play in shaping what people see and believe online, and the line between protecting national security and infringing on free expression remains a difficult one to draw. As 2024 looms, the U.S. will need to navigate this delicate balance with care—or risk undermining the very values it seeks to protect.