Private Sector Firms Are Telling the Stories — And Calling the Shots — About AI

On 12 September 2024, the Biden-Harris Administration convened a roundtable on US ‘leadership in AI infrastructure’, bringing together government representatives with AI industry leaders from OpenAI, Anthropic, Microsoft, AWS and Nvidia. No civil society or academic representatives were in attendance. Almost one year earlier, in November 2023, former Prime Minister Rishi Sunak hosted the Bletchley AI Safety Summit; despite inviting a seemingly balanced cohort of companies, academia, civil society and government to attend, news coverage and officials’ attention were almost entirely focused on AI industry CEOs. The former Prime Minister stated proudly: ‘I’ll be joined by businesses like OpenAI, Anthropic, Meta, Google DeepMind, and Microsoft. And […] I’ll be in conversation with Elon Musk this evening — exclusively on X.’ 

Former Prime Minister Sunak's and the Biden-Harris Administration's prioritisation of AI industry leaders reflects a long-standing trend for emerging technologies across the world: government leaders have designated private sector technology executives as impartial 'thought leaders' on critical emerging technologies like AI — because of, rather than despite, their concurrent roles as CEOs of for-profit enterprises, even though these dual roles present clear conflicts of interest.

While these dual hats — thought leadership and CEO-hood — have provided major tech executives with a megaphone to set the narrative on AI, their companies also exert influence over research agendas, shaping the formal production of knowledge in the field. Together, these two elements constitute what we term 'narrative power': an underexplored dimension of Big Tech's influence over the development and governance of AI.

Narrative power is the use of resonant storytelling or framing as well as one’s own clout to naturalise chosen discourses and language, and shape knowledge production. Built on a foundation of corporate dominance over AI’s material infrastructure, and further enabled by a decades-long erosion of governments’ in-house technological capacity and expertise, narrative power represents an important dimension of the private sector’s influence over public and policy discourses around AI. As nascent domestic and international AI policy regimes are at their most malleable, policymakers and critical researchers must consider the downstream implications of Big Tech’s de facto role as ‘impartial experts’, and their outsize influence over research agendas in the field. Together, these elements threaten to sideline critical voices in government consultation processes, de-democratise knowledge production around AI, and leave the door open for regulatory capture of AI governance processes.

Corporate narrative power is not new, but it has thrived in the AI boom

The origins of governments' capability loss can be traced back to the New Public Management reforms of the 1980s and 90s, which in many OECD countries reflected the expectation that contracting and outsourcing technology to the private sector would promote frugality, optimise the use of public funds, and increase operational agility. Most countries gradually transferred the management of their digital infrastructure to private providers. This extensive outsourcing has created a vicious cycle of growing dependence on external contractors and private firms, and blurred the lines of responsibility for essential public services, which are now largely handled by private companies without a public purpose mandate. Recent debacles such as the Post Office Horizon scandal and the troubled digitalisation of the UK courts highlight the persistent failures of this model.

What is unique to AI is big technology firms' strong advantage over newer start-ups and public institutions in developing frontier models. Firms like Google, Apple, Meta, Microsoft and Amazon — dubbed 'hyperscalers' because they possess massive amounts of computing power and storage capacity and provide them as services to other businesses — amassed the wealth to pursue costly advanced AI development from two resources: enormous quantities of proprietary data accrued from users of their other consumer products, such as social media, and revenue streams from renting out cloud storage. They have pursued that development both directly (in the case of Apple and Meta) and through proxy frontier AI labs (e.g. Microsoft's partnership with OpenAI or Google's absorption of DeepMind).

These two resources grant large firms an unparalleled advantage: they can afford the high cost of the most advanced chips required for frontier AI research. In 2023, Epoch estimated that AI labs were aggressively scaling computing power, as scale was tied to significant improvements in model performance, at a rate of around 4x per year. Epoch also estimated that GPT-4's final training run cost as much as $40 million; extrapolating from these trends, Lennart Heim forecasts compute costs ballooning towards the GDPs of nations if research continues on-trend over the next decade, a scale of overhead that only hyperscalers are equipped to handle.
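
To make that extrapolation concrete, here is a minimal back-of-the-envelope sketch. The 2.5x annual growth rate is our illustrative assumption (cost growth has historically lagged the ~4x compute scaling thanks to hardware and efficiency gains); it is not a figure from Epoch or Heim.

```python
# Back-of-the-envelope projection of frontier training-run costs.
# All numbers are illustrative assumptions: a $40M baseline (the
# reported GPT-4 estimate) compounding at an assumed 2.5x per year.

BASELINE_COST_USD = 40e6    # estimated GPT-4 final training run
COST_GROWTH_PER_YEAR = 2.5  # assumed annual cost multiplier

for year in range(11):
    cost = BASELINE_COST_USD * COST_GROWTH_PER_YEAR ** year
    print(f"year +{year:2d}: ${cost:,.0f}")

# After ten years: 40e6 * 2.5**10 ≈ $381 billion, on the order of
# a mid-sized national GDP.
```

Even at this deliberately conservative growth rate, a single training run reaches hundreds of billions of dollars within a decade, illustrating the dynamic that Heim's forecast points to.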

These hyperscalers have also increasingly invested in in-house chipmaking, deepening their vertical integration and potentially seeding high levels of public sector, consumer and developer dependency on their entire technology ecosystems. The hyperscalers enjoying this global privilege also hail from a sliver of the world's population; the Global Majority is strongly underrepresented.

Economies of scale that advantage incumbent firms over newer actors are an old phenomenon, but the drastic nature of this advantage in such a critical emerging technological field, and the fact that it has already shaped systemic early decision-making within governments, should raise concern.

Private sector narrative power

With an unprecedented concentration of technical expertise and resources in private, for-profit labs, hyperscalers now dominate both frontier AI research itself and narrative-making around these new technologies. These companies leverage their material advantages to gain further narrative power by shaping academic priorities: gatekeeping academic institutions' access to the resources necessary for frontier research, and confining the development of frontier AI technologies to a select few firms. In effect, Big Tech alone determines the trajectory of most frontier AI research. Public academia relies on private cloud resources, while hyperscalers can set career incentives or offer discounts on cloud rents in ways that define the research priorities of universities and other public sector research entities. Private frontier labs also lead on output: the Stanford AI Index reports that, in 2023, industry released 51 major machine learning models while academia produced only 15. This status quo restricts academic researchers' ability to pursue critical work or projects that do not suit the needs of hyperscalers.

As these hyperscalers (or their associated private labs) either directly produce or fund nearly all major frontier AI research efforts, the public sphere counts on them as sources of impartial authority to support governance and regulation. These for-profit AI enterprises often echo existing narratives suggesting that government alone cannot effectively govern new technologies, and that the private sector ought to self-regulate. Big Tech has also generated and promoted discourse that ‘not only promotes [their] technologies as necessary to economic and national security interests, but also positions Big Tech firms themselves as security assets that need to be bolstered rather than held back by regulation.’ These companies are viewed as conveyors of impartial information on AI whilst simultaneously selling AI products — a clear conflict of interest.

Meanwhile, leaders of these enterprises gladly provide their insight to the public sector when asked. For instance, in 2023, Sam Altman, the CEO of OpenAI, led a discussion at the US Capitol on AI and the future, hosted by GOP Conference Vice Chairman Mike Johnson and Democratic Caucus Vice Chairman Ted Lieu; both lawmakers stated that the goal was to 'educate' members of Congress and other government stakeholders. Altman then joined two other tech experts to testify before the Senate Judiciary Committee on how generative AI and other frontier technologies ought to be regulated. Senator John Kennedy was quoted by the Daily Beast as telling Altman, 'This is your chance […] to tell us how to get this right. Please use it […] tell us what rules to implement.' The pre-hearing event was credited with softening lawmakers to Altman, and included a live demonstration of ChatGPT's capabilities, to which many present 'reacted as if they had just witnessed a magic show, rather than a tech demo.'

In September 2023, Chuck Schumer was criticised for the invite list to a closed-door summit on how Congress should regulate AI. Among the 22 reported attendees, 13 were tech CEOs, with the rest split across 'researchers, labour leaders and user advocates'. One civil society invitee, Deborah Raji, tweeted, '[As usual], there's a severe underrepresentation of civil society, academia and advocacy groups. I suspect it's because many in [the government] falsely assume those groups are less familiar with the tech.' In the UK, the same private sector voices, mostly from the US, are being foregrounded, with 2023 visits from Elon Musk and Sam Altman heralded as opportunities to hear from AI experts on what to do about the emerging technology.

Private sector actors are also bolstering their policymaking authority by playing up their technologies' centrality to solving national security issues. This is particularly prevalent in the US, as in recent testimony before a House Armed Services subcommittee by figures like Scale AI CEO Alexandr Wang and Palantir CEO Alex Karp, both of whom touted AI's contributions to the warfighting effort and hyped up the threat posed by China's AI capabilities. Wang has already become 'a popular government partner' after reportedly briefing the Select Committee on China twice, and Scale AI now lists the US Army, the Air Force and the Chief Digital and Artificial Intelligence Office as clientele. In news coverage, Wang and similar figures are heralded as 'experts' first, masking their identities as CEOs of commercial AI companies hoping to secure lucrative defence contracts.

In the UK, companies like Palantir have also been called upon to testify about AI’s weaponisation in Parliament, and the ‘race against China’ narrative has been slower-moving but still prevalent. If the pattern follows the US, we should expect to see companies increasingly leveraging this narrative to stay relevant and gain further contracts in the UK as well. 

Sam Altman's famous May 2023 call for stricter government oversight of AI, made at the aforementioned Senate Committee hearing, may seem to contradict the argument that Big Tech favours self-regulation. The reality is more complicated: tech companies have long employed a strategy in the US of asking Congress for regulation while actively fighting against it through lobbying or in the courts. The strategy also serves to embed tech companies into the resulting governance solutions, producing industry-led initiatives and 'ethical principles' while aggressively resisting binding rules. Here, we see narrative power at work once again: tech leaders using their elevated expert status to penetrate policymaking circles, facilitating (and obscuring) regulatory capture, and ultimately opening the door for hyperscalers to participate in their own watered-down oversight.

***

Narrative power is a cornerstone of Big Tech's influence over AI governance. Because of their dominance over AI's material infrastructure and the decades-long depletion of governments' in-house technical capacity and know-how, tech giants hold outsized influence over AI research agendas. Big Tech firms benefit from being treated as 'impartial experts' on a range of policy issues despite competing incentives that often run contrary to the public interest. These actors' narrative power helps to lock in a self-perpetuating cycle of private power accumulation while AI governance regimes are still taking shape.

This corporate narrative supremacy has downstream implications that are immediately relevant for policymakers. Critical non-industry voices are sidelined in public and media discourse, and in many government consultation processes. Meanwhile, research agendas are subtly shaped in line with hyperscaler priorities, threatening to deprioritise important work that departs from dominant directions in AI design. Left unchecked, this narrative supremacy could result in policies that fail to adequately address data harms, privacy concerns and other risks from AI. It also threatens to erode transparency and accountability in the democratic governance of these systems, leaving citizens with little to no recourse in cases of harm. The latter will be particularly true where the public has no alternative but to engage with private AI providers via outsourced government services.

Propelled by the twin engines of material and narrative power, private sector giants are stepping into the gap left by a receding state. This raises important questions about the relative weight of public versus shareholder value, and the role of the state in modern democracy. What is the government’s role in safeguarding public interests when private entities increasingly control critical infrastructure and services? Whose vision of the future is informing emergent technological governance, and how might we make space for a wider array of alternatives?

Tech companies' power is not necessarily unassailable. Google's recent loss in a landmark antitrust case in the US, along with more aggressive antitrust enforcement in the UK and EU this year, suggests that governments are willing to push back to curb anti-competitive behaviour. But these initiatives alone won't be enough: power isn't just about market share — it's also about the stories that shape public perception, frame policy decisions and guide research agendas, and who is called on to tell them.

This article was originally published in OPR’s Issue 14: Fictions and Narratives.