Published on August 05, 2022

Big Tech is incurring the wrath of Republicans and Democrats alike. Albeit for different reasons, each side of the aisle wants to rein in the economic and political influence of these companies.

Twitter, Meta, and Google are embroiled in federal lawsuits. In late May, the FTC sued Twitter for failing to protect users’ personal data (the company used phone numbers collected for security purposes to target ads). Meta faces an antitrust suit from the FTC for engaging in a “buy or bury” scheme to squash competition, and Google faces three high-profile antitrust suits: one from the DOJ; another from the Texas AG and ten states; and a third from a coalition of over 30 states.

Citing risks to our democracy, the DOJ recently voiced support for the bipartisan American Innovation and Choice Online Act (AICOA), which is scheduled to hit the Senate floor this summer. Sponsored by Amy Klobuchar (D-MN) and Chuck Grassley (R-IA), the AICOA seeks to prohibit dominant digital platforms from discriminating in favor of their own products and services.

Politicians and regulators alike have come to realize that these tech platforms wield entirely too much economic and political power. There are concerns from the right that social media companies silence conservative voices, as well as concerns (from both sides) that these platforms interfere with elections and amplify disinformation and misinformation.

What are the best ways to fight disinformation and rein in the self-preferencing of dominant platforms?

Aside from breaking up the Big Tech platforms—which would take years, if not decades, in the unlikely event such legislation were to pass—some have called for self-regulation. However, as has become abundantly clear, relying on Meta, Google, and Twitter to curate content with the public interest in mind is not realistic.

Federal data privacy legislation

Others have suggested enacting a federal data privacy law, much like the European Union’s General Data Protection Regulation (GDPR). This is a good idea.

With such a law in place, Big Tech platforms could not use consumer data gathered in one sector to bolster their market share in another (unless users provide their consent). The GDPR already includes such a provision, and it’s a good one.

One quick caveat: Google and Meta are so powerful because of network effects. By acquiring data on users across a variety of sectors, they have become behemoths in the digital marketing space. If a new law prevented the aggregation of user data across sectors, it would effectively grandfather in the data troves Google and Meta have already amassed, handing them a first-mover advantage and potentially cementing them as the winners in their respective spaces.

Mandated data portability

Another idea put forth is mandated data portability. This concept, which the GDPR also mandates, requires that users who want to leave a social media company be able to take all their own data with them. The problem here is that the most valuable user data is platform-specific metadata, which inherently does not transfer from one platform to another. After all, users behave differently on one platform than on another.

An underacknowledged answer: middleware

Like many others, Francis Fukuyama (Stanford Cyber Policy Center), Barak Richman (Duke Law), and Ashish Goel (Stanford Computer Science) recognize that Big Tech platforms exploit user data and function as “information monopolists.”

As they write in their Stanford Cyber Policy Center report, “Middleware for dominant digital platforms: A technological solution to a threat to democracy”, “Digital platforms’ concentrated economic and political power is like a loaded weapon sitting on a table.” I won’t name any names, but some person or entity could pick up this “weapon” and use it for his or her own purposes. One way around this issue is middleware software.

Potential of middleware

Middleware refers to a competitive layer of new companies that mediate the relationship between users and the Big Tech platforms. At the most basic level, this could entail a company that performs fact-checking or content labeling services on social media platforms. For example, tweets, shares, or posts could be labeled as “unverified,” “misleading,” or “lacks context.”
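To make the labeling idea concrete, here is a minimal sketch of what a rule-based labeling layer might look like. The rules, field names, and example sources are all hypothetical illustrations, not anything proposed in the report; the point is that the criteria sit in a separate, inspectable layer rather than inside the platform.

```python
# Hypothetical middleware that attaches labels to posts before display.
# Rules and field names are illustrative only.

UNVERIFIED_SOURCES = {"rumormill.example", "hotttakes.example"}

def label_post(post: dict) -> dict:
    """Return a copy of the post with transparent, rule-based labels."""
    labels = []
    if post.get("source") in UNVERIFIED_SOURCES:
        labels.append("unverified")
    # A statistical claim with no cited source gets a "lacks context" label.
    if post.get("claims_statistic") and not post.get("cites_source"):
        labels.append("lacks context")
    return {**post, "labels": labels}

feed = [
    {"id": 1, "source": "rumormill.example", "text": "Breaking news!"},
    {"id": 2, "source": "wire.example", "claims_statistic": True},
]
labeled = [label_post(p) for p in feed]
```

Because the rules are plain code rather than an opaque model, a user (or auditor) can read exactly why a given post was flagged.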

With regard to search platforms, such as Google or Amazon, middleware could adjust search results to favor eco-friendly, lower-priced, or domestically produced products. It could also block certain content or products from appearing in search results altogether.
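A re-ranking middleware of this kind can be sketched in a few lines. The field names, weights, and sample products below are assumptions for illustration; the design point is that the user-chosen preference, not the platform, determines the final ordering.

```python
# Hypothetical middleware that re-ranks a platform's search results
# according to a user-selected preference. Weights are illustrative.

def rerank(results: list, prefer_eco: bool = True) -> list:
    """Sort results by platform rank, boosting eco-friendly items."""
    def score(item: dict) -> int:
        s = item["platform_rank"]  # lower is better
        if prefer_eco and item.get("eco_friendly"):
            s -= 2  # user-chosen boost, visible in the algorithm itself
        return s
    return sorted(results, key=score)

results = [
    {"name": "Widget A", "platform_rank": 1, "eco_friendly": False},
    {"name": "Widget B", "platform_rank": 2, "eco_friendly": True},
]
reranked = rerank(results)
```

Blocking could be handled the same way, by filtering items out before the sort; either way, the transformation is transparent to the user who opted into it.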

Importantly, users would choose what middleware software they want to utilize, and the middleware’s algorithms would need to be transparent.

Issues with middleware

A lot of details would need to be worked out; however, Fukuyama and his colleagues believe that a profit-sharing plan between the middleware companies and the platforms is a viable model. These revenue-sharing agreements could not apply to advertising revenue, however, as this would undoubtedly jeopardize the integrity of the middleware companies’ algorithms and services.

Additionally, middleware’s engagement with social media companies could encourage further splintering and lead to even worse filter bubbles than we see today.


Big Tech’s digital platforms have reached an unprecedented level of economic and political power, and users do not have much transparency into how the platforms’ algorithms operate. Moreover, these same companies have routinely violated their users’ privacy, while operating in an exploitative and monopolistic manner.

In addition to federal antitrust legislation, many in the public policy realm have offered self-regulation, data portability, and privacy law as potential solutions. Middleware gets less attention, but it should be explored as a viable answer to the undue influence currently wielded by Big Tech platforms. Although there are a lot of kinks to work out, a solution that offers users algorithmic transparency and more control over their privacy would certainly be a good thing.

John Donegan

Enterprise Analyst, ManageEngine

John is an Enterprise Analyst at ManageEngine. He covers infosec, cybersecurity, and public policy, addressing technology-related issues and their impact on business. Over the past fifteen years, John has worked at tech start-ups, as well as B2B and B2C enterprises. He has presented his research at five international conferences, and he has publications from Indiana University Press, Intellect Books, and dozens of other outlets. John holds a B.A. from New York University, an M.B.A. from Pepperdine University, an M.A. from the University of Texas at Austin, and an M.A. from Boston University.