AI Research

My work on AI tends to fall into three categories: using LLMs for cost-benefit analysis, policy analysis of state and federal AI laws, and the economics of AI and transformative AI.

LLMs for cost-benefit analysis

I’ve been experimenting with large language models (LLMs) to estimate the compliance costs of AI bills. In March 2025, I published “How much might AI legislation cost in the U.S.?,” which compared official compliance cost estimates against those produced by leading LLMs for two recent amendments to the California Consumer Privacy Act (CCPA) and for regulations implementing President Biden’s Executive Order on AI.

After doing a deep dive into these three regulations, I prompted ChatGPT, Claude, and Grok to act as compliance officers at companies: each model read the new rules and then estimated the hours needed for first-year implementation and for ongoing compliance.

What was surprising is that the LLMs usually came close to the official first-year estimates but tended to predict much higher ongoing annual costs, suggesting a systematic underestimation in the official figures. The chart below displays all of the estimates.

Similar to discounted cash flow (DCF) analysis, we can think of a regulation as a stream of future costs over a 10-year period. By summing those costs and discounting them back to present value using a rate that reflects their long-term time horizon, we can estimate the current market value of the regulation. The table below calculates those regulatory costs for each scenario using the federal two percent discount rate.
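
To make the mechanics concrete, here is a minimal sketch of that calculation in Python. The cost figures are placeholders rather than estimates from the piece; only the two percent rate comes from federal discounting guidance.

```python
def present_value(annual_costs, rate=0.02):
    """Discount a stream of annual regulatory costs back to present value."""
    return sum(cost / (1 + rate) ** t for t, cost in enumerate(annual_costs, start=1))

# Placeholder stream: $500k in first-year costs, then $200k a year for nine years.
costs = [500_000] + [200_000] * 9
print(f"Discounted ten-year regulatory cost: ${present_value(costs):,.0f}")
```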

All of the graphics from the piece are listed below:

In June, I published a piece in City Journal on New York’s RAISE Act, to which I applied the same method. As I wrote,

I asked the leading LLMs to read the RAISE Act and estimate the hours needed to comply with the law in the first year and in every year after that for a frontier model company. The results, displayed in the table below, suggest that initial compliance might fall between 1,070 and 2,810 hours—effectively requiring a full-time employee. For subsequent years, however, the ongoing burden was projected to be substantially lower across all models, ranging from 280 to 1,600 hours annually.

The wide range in estimates underscores the fundamental uncertainty with the RAISE Act and other similar bills. The fact that sophisticated AI models are not converging on consistent compliance costs suggests just how unpredictable this legislation could prove in practice. The market is moving quickly. We need laws that prioritize effective risk mitigation over regulatory theater.

A chart of the costs of the RAISE Act is available here and also posted below.

During a hackathon in May, I coded up a first version of a prompt script, which I plan to iterate on. It is built on persona prompting, where you direct the LLM to take on a role before answering questions. So I set up the script to vary the personas across industries, resources, legal teams, and familiarity with the law.

The benefit of this kind of scripting is that you can run many estimates simultaneously and then summarize the results. In the future, I am going to match these personas with what we know about the market to create more accurate predictions. The code is still messy, but I intend to come back to it when I work on the full paper this fall.
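
The core loop looks roughly like the sketch below. The persona dimensions are illustrative, and `ask_llm` is a mock stand-in for a real API call, not the hackathon code itself.

```python
import random
import statistics
from itertools import product

# Illustrative persona dimensions; the real script varies these further.
industries = ["healthcare", "finance", "retail"]
legal_teams = ["a large in-house legal team", "no dedicated counsel"]
familiarity = ["has read the bill closely", "is new to the bill"]

def build_prompt(industry, legal, familiar):
    return (
        f"You are a compliance officer at a {industry} company with {legal} "
        f"who {familiar}. Read the bill text provided and estimate the hours "
        f"needed for first-year compliance. Reply with a single number."
    )

def ask_llm(prompt):
    """Mock stand-in for an LLM API call; returns fake hours for illustration."""
    return random.randint(500, 3000)

random.seed(0)
estimates = [
    ask_llm(build_prompt(i, l, f))
    for i, l, f in product(industries, legal_teams, familiarity)
]
print(f"{len(estimates)} personas, median estimate: "
      f"{statistics.median(estimates):,.0f} hours")
```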

My recent article, “The Hidden Price Tag of California’s AI Oversight Bill,” continues this line of research. For this piece, I thought I would try to push the LLMs further by analyzing the potential impact of California’s AB 1018.

Although it didn’t pass, this bill would have applied regulations to any decision that impacts the cost, terms, quality, or accessibility of employment-related decisions; education and vocational training; housing and lodging; anything that involves your utilities; family planning, adoption services, and reproductive services; health care and health insurance; financial services; the criminal justice system; legal services; arbitration; mediation; elections; access to government benefits or services; places of public accommodation; insurance; and internet and telecommunications access. Even California’s State Water Board warned that Excel workbooks could trigger regulatory requirements. So I wondered: could LLMs help figure out which businesses would be regulated?

For the first part of this project, I followed the typical method for running a cost calculation in public policy, as I did in my two previous pieces. First, you estimate the hours of compliance (table), then multiply them by market labor rates (table) to calculate an economic cost for a firm. The compliance costs for individual firms are detailed below.

Then you take this number and multiply it by the number of impacted businesses. Estimating that number tends to be a blunt exercise, however, so I used a second set of scripts to estimate which industries are likely to be affected by the law. All of the data can be found in this spreadsheet. From here, I projected these costs over a ten-year period and applied standard economic methods to arrive at a discounted cost, as detailed above.
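
Strung together, the steps look something like the sketch below. Every input is a placeholder for the LLM-derived hours, the labor-rate tables, and the industry counts in the spreadsheet.

```python
# Illustrative inputs; the real figures come from LLM estimates and labor data.
first_year_hours = 1_500   # estimated first-year compliance hours per firm
ongoing_hours = 600        # estimated hours in each subsequent year
hourly_rate = 75.0         # blended market labor rate, dollars per hour
affected_firms = 40_000    # firms in the industries flagged by the LLMs
compliance_rate = 0.05     # assume only 5 percent of firms actually comply
discount_rate = 0.02       # federal two percent discount rate

# Per-firm ten-year cost stream, discounted to present value.
stream = [first_year_hours * hourly_rate] + [ongoing_hours * hourly_rate] * 9
firm_pv = sum(c / (1 + discount_rate) ** t for t, c in enumerate(stream, 1))

economy_pv = firm_pv * affected_firms * compliance_rate
print(f"Per-firm ten-year cost:            ${firm_pv:,.0f}")
print(f"Economy-wide at 5 percent uptake:  ${economy_pv:,.0f}")
```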

The result was three tables: an estimate of “Economy-wide Discounted Regulatory Costs,” another of “Economy-wide Discounted Regulatory Costs with 5% Compliance,” and finally an estimate of “Discounted Regulatory Costs for Individual Companies.” I tend to think the two economy-wide estimates rest on heroic assumptions, writing that,

The LLM classifications represent informed predictions rather than definitive legal interpretations of AB 1018’s scope. More importantly, the NAICS matching process necessarily involves aggregation. Specific business types identified by LLMs are matched to broader industry categories in the official data. This means that the estimates are sure to be higher than actual impact.

However, I am more confident in the ten-year totals for individual companies, which represent the value of the sustained compliance costs that firms would need to factor into their long-term business planning and their decisions about adopting automated systems. That chart is reprinted below.

Still, these rough estimates represent just the tip of the iceberg. They capture only the direct compliance costs of hiring staff, conducting audits, implementing new processes, and maintaining documentation. What they don’t account for are the cascading economic effects that would ripple through entire sectors. Every dollar spent on regulatory overhead is a dollar not invested in innovation, service improvements, or competitive pricing. For the economy as a whole, it would represent a massive shift of resources from productive activities to regulatory compliance.

In the coming months, I will formalize all of this work in an academic paper.

State and Federal Regulation

“We need to get ahead of this thing” is a popular phrase among policymakers when discussing AI. But this framing misses a crucial point: significant AI regulation is already happening, in the states, through regulatory agencies and the executive, as well as in Congress and the courts.

In “AI’s Automatic Stabilizers,” I walked through the governance mechanisms that are already regulating AI systems. Like the automatic stabilizers in fiscal policy that steady the economy without new laws, AI’s regulatory stabilizers are embedded in existing law and will continue to guide AI development even without a comprehensive federal AI statute. These include:

  1. Consumer protection authorities, both federal and state, police “unfair or deceptive acts or practices.” The Federal Trade Commission has made clear it intends to use this power to its fullest;
  2. Property and contract law, now central to copyright disputes involving OpenAI, Microsoft, Meta, and others;
  3. Tort and common law, which let injured parties seek damages from AI-related harms;
  4. Product recall authority, as shown when NHTSA ordered Tesla to recall its autonomous driving software;
  5. Insurance and compensation systems, which indirectly shape AI risk-taking by pricing liability; and
  6. Sectoral regulatory adaptation, where agencies such as the Department of Education, EEOC, CFPB, FCC, and FEC are extending existing frameworks to AI systems.

Indeed, there is value in waiting to regulate, which I explored in “The value of waiting: What finance theory can teach us about the value of not passing AI Bills.” Borrowing a concept from real options theory, I explained why acting too early can eliminate future flexibility. Like companies weighing an investment, regulators hold a regulatory real option: the smart choice is often to wait for more information rather than rushing to act.
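
A stripped-down, two-period example shows the intuition; all payoffs are invented for illustration. When a rule fits some futures and misfires in others, waiting one period to learn which future arrives can beat acting now, even after discounting.

```python
# Two equally likely states of the world; payoffs in arbitrary welfare units.
# All numbers are invented purely for illustration.
p_good, p_bad = 0.5, 0.5
discount = 0.95

# Regulate now: locked in before we know which state arrives.
regulate_now = p_good * 100 + p_bad * (-80)

# Wait one period, observe the state, regulate only where the rule helps.
wait_then_act = discount * (p_good * 100 + p_bad * 0)

print(f"Regulate now:             {regulate_now:.1f}")
print(f"Wait, then act:           {wait_then_act:.1f}")
print(f"Option value of waiting:  {wait_then_act - regulate_now:.1f}")
```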

The real danger isn’t under-regulation but fragmentation, as states write their own rules. And that fragmentation is already underway.

In “The Lurking Dangers in State-Level AI Regulation,” I laid out my concerns with Colorado’s AI Act, the first comprehensive state AI law. Governor Jared Polis signed the bill into law but with deep reservations, warning in a signing letter about “the impact this law may have on an industry that is fueling critical technological advancements” and noting that “government regulation that is applied at the state level in a patchwork across the country can have the effect to tamper innovation.” It was sad to see Polis not put up a fight against the bill, especially since the signing letter highlights the problems that plague every AI bill.

While it thankfully didn’t pass, California’s SB 1047 would have been an even bolder step than what Colorado has adopted. In “A New Proposed AI Regulation in California” and “California’s SB 1047 Moves Closer to Changing the AI Landscape,” I unpacked SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, which would have subjected advanced models to safety assessments, kill switches, certification regimes, and broad “know-your-customer” obligations. SB 1047 was the first AI safety bill to gain traction. It was designed to explicitly target frontier models, regulating the act of developing intelligence itself, by imposing sweeping pre-deployment safety obligations on model creators. Yet few advocates seem to appreciate the First Amendment concerns and the challenges in regulating for bias and fairness.

California and Colorado aren’t the only states toying with AI legislation. Texas and Virginia have followed with their own proposals, which I discussed in “The Best AI Law May Be One That Already Exists.” Texas’s TRAIGA would have required AI distributors to prevent algorithmic discrimination even though companies are already subject to anti-discrimination laws, and would have created a new regulatory body with broad powers to issue binding rules on “ethical AI development.” Virginia’s HB 2094, which was eventually vetoed by the governor, borrowed heavily from the EU’s regulatory playbook with similarly vague language around “consequential decisions” and “high-risk” applications.

I expect that the states will continue to adopt AI bills. In the next decade, we are likely to see “a patchwork of fifty AI laws, each trying to get ahead of the future,” as I warned in “The Best AI Law May Be One That Already Exists.” That outcome would mirror the fragmentation we saw in state privacy laws, which I have written about before. It would mean duplicative compliance regimes, overlapping definitions, and conflicting obligations that raise costs without improving accountability. Unless Congress steps in with a preemptive framework or courts intervene on constitutional grounds, America’s comparative advantage in innovation could be blunted not by a single act of overreach, but by hundreds of well-intentioned state experiments.

The Executive has also been active on AI policy.

In the fall of 2023, the Biden Administration adopted the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” As I explained in detail in “Problems with Biden’s Executive Order on Artificial Intelligence,” this EO represented the most sweeping assertion of regulatory authority in decades. Importantly, it invoked the Defense Production Act to compel developers of frontier AI models to share testing and safety data with the government. I suspect that this won’t be the last time this law is used to justify AI regulation.

I also filed comments on the Trump Administration’s “AI Action Plan.” While there is a lot in these comments, I would draw your attention to three parts. First, they make the case that state-level AI regulation presents the greatest risk, noting that “The White House would do well to push back against a tangle of conflicting state rules that make cutting-edge AI too costly or risk-laden to develop and deploy.” Second, they argue that the Administration should champion permitting reform modeled after the Prescription Drug User Fee Act (PDUFA), a law that has accelerated pharmaceutical approvals without compromising safety standards by allowing applicants to fund expedited reviews. Third, I called for OIRA to pilot a project that uses AI simulations to standardize how agencies model compliance burdens across diverse businesses.

Instead of blithely pushing for new rules, policymakers should be using AI tools to clean up government processes. In an op-ed in Fox News titled “Let’s use AI to clean up government,” I introduced the notion of ChatGVT, a framing device to explore how LLMs could “provide straight answers about the newest tax plan, if a bill is stuck in committee, or the likelihood that a piece of legislation will pass. Or a ChatGVT could be turned on the regulatory code to understand its true cost to households and businesses.” I extended this analysis in “Government in the Age of AI,” ultimately concluding that the “promise of AI to revolutionize government efficiency is undeniable, but realizing these benefits will require careful implementation that prioritizes accuracy, transparency, and constitutional protections.”

Economics of Transformative AI

When people ask how AI will reshape the economy, they usually want a simple, clean answer, like a chart showing inevitable job losses or explosive productivity. But that’s not how technology actually works. It’s uneven, conditional, and shaped by the structure of firms, markets, and regulation. To help track these changes, I’m constantly updating a table titled “Papers on the Economics of AI,” which compiles the empirical economics literature on AI.

Most assume that the decision to adopt a new technology follows a simple logic: you invest when the expected benefits exceed both the direct costs and the adjustment costs. Yet in practice, the real determinant of success is whether the technology integrates smoothly with the firm’s existing production processes. For my newsletter in The Dispatch, I wrote a two-part series on the economics of AI: “The Economics of AI and the Impending Robot Takeover” and “Transformative Growth with AI Is Likely. Explosive Growth Is Science Fiction.” I find it all too common that people simply dismiss the effort needed to transform a company, let alone an industry, with a technology like AI. Adoption doesn’t occur all at once. Firms must not only invest in hardware and software but also reconfigure workflows, retrain workers, and rebuild managerial hierarchies. These are nontrivial costs, which I have been drawing attention to since at least 2020 in a piece titled “Tracing the impact of automation on workers and firms.”
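
One stylized way to write that adoption condition, with the adjustment costs spelled out (my notation, purely illustrative):

$$\mathbb{E}[\Delta \pi] \;>\; \underbrace{C_{\text{hardware}} + C_{\text{software}}}_{\text{direct costs}} \;+\; \underbrace{C_{\text{workflows}} + C_{\text{retraining}} + C_{\text{hierarchies}}}_{\text{adjustment costs}}$$

The second bracket is the part that tends to get dismissed, and it is often the binding constraint.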

In “To Understand AI Adoption, Focus on the Interdependencies,” I drew a parallel to the telephone switchboard, an innovation that took decades to become automated because it was entangled with other organizational systems. The interdependencies between call switching and other production processes within the firm presented an obstacle to change. The same is true of firms considering AI today: the interdependencies between AI and their other production processes will also be an obstacle to change.

Moreover, people tend to couple robots with advanced AI tech. But when you look at the data, as I did, you learn that the industries investing the most in robotics tend to be using AI the least. Manufacturing and retail trade spend the most on robotic equipment, but they aren’t going big on machine learning, natural language processing, virtual agents, and the like.

Another strand of my work connects the economics of AI to the political economy of semiconductors. In “Nvidia’s Blockbuster Quarter and the Value of ‘Compute’,” I analyzed Nvidia’s extraordinary rise and the broader economic transformation driven by data centers, energy demand, and chip supply chains. Nearly 40 percent of Nvidia’s revenue now comes from AI compute infrastructure in the form of servers, GPUs, and networking gear that power model training and inference. This surge has turned compute into a new economic input alongside labor and capital.

The result is a new convergence between AI policy, energy policy, and industrial strategy. The CHIPS and Science Act, which I examined in my paper “The CHIPS Act and Semiconductor Economics,” is best understood not just as an industrial subsidy but as a reallocation of risk. In other words, it is a public bet on the domestic supply of the new factor of production: compute.

If there is a unifying argument across this body of work, it’s that AI-driven growth will be transformative but constrained. It will be accelerated in some sectors, delayed in others, and filtered through institutional complexity.

Complete AI Research Portfolio

Senator Sanders’ AI Report Ignores the Data on AI and Inequality – October 9, 2025 – AEIdeas – Senator Bernie Sanders released a new report which claims that 100 million jobs will be lost in the next ten years due to AI. Beyond the issues with how the AI job loss model was constructed (it simply asked ChatGPT), my biggest concern with the report is that it “reviews some key papers on automation and income inequality, but nowhere does it review the current literature showing that new AI tools are reducing inequality. In Brynjolfsson et al. (2023); Caplin et al. (2024); Choi et al. (2023); Hoffmann et al. (2024); Noy & Zhang (2023); and Hauser & Doshi (2024), advanced AI tools were found to be skill equalizers, raising the performance of those at the bottom in customer support, legal work, and software development, among others. If Sanders was truly concerned with worker inequality, he should be optimistic about AI tools and engaging with the empirical work on this subject.”

How AI Is Changing Hiring – September 12, 2025 – City Journal – AI tools like ChatGPT have transformed job hunting from a personalized process into a high-volume, low-commitment numbers game. Using the Diamond-Mortensen-Pissarides framework of labor economics, I explain how generative AI has intensified market frictions rather than reducing them. Workers now mass-apply to jobs with AI-generated resumes and cover letters, while employers post more listings, many of which are low-intent. This flood of applications and postings makes it harder for both sides to signal genuine intent, eroding the efficiency of labor market matching. As early-career workers in AI-exposed occupations face declining prospects, I caution policymakers against banning ghost jobs and suggest they focus on improving transparency and rebuilding trust in the job-matching process.

The AI Revolution in Property Tax Assessment – September 10, 2025 – AEIdeas – Traditional property tax assessments are plagued by regressivity. Lower-value homes are over-assessed while expensive ones get under-assessed. Cities like Chicago, Riverside County, and NYC are now piloting predictive AI systems that could bring fairness, transparency, and efficiency. As with any other change, however, successful implementation will require careful oversight to ensure these tools deliver on their promise of fairness while maintaining public trust.

The Hidden Price Tag of California’s AI Oversight Bill – September 5, 2025 – Exformation – California’s AB 1018 is meant to bring a human touch back to consequential decisions in housing, health, and finance. But AB 1018’s definitions cast such a wide net that they would regulate virtually any computational process used in business operations. Even California’s State Water Board warned that Excel workbooks could trigger regulatory requirements. Using four large language models to estimate compliance costs, I found that individual firms could face between \$2 million and \$6 million in costs over a decade under this bill. Even assuming only 5 percent of businesses comply, the total ten-year cost for the entire economy could reach the billions to low trillions. To be fair, these estimates vary widely in their ranges, suggesting substantial uncertainty. Still, the compliance cost estimates for AB 1018 reveal a broader pattern emerging across proposed AI regulation: there is an enormous hidden price tag to mandating human oversight in automated decision-making systems.

Illinois Bans AI Therapy. Questions about Enforcement Remain. – August 14, 2025 – AEIdeas – Illinois became one of the first states to ban AI therapy with the Wellness and Oversight for Psychological Resources Act (WOPR), joining Nevada in restricting artificial intelligence from providing mental health services. Enforcement could be its linchpin because the legislation provides minimal guidance for navigating the borderline scenarios: Should meditation apps offering stress-reduction techniques be restricted? What about journaling platforms that track mood patterns? How should regulators approach general-purpose AI systems that naturally provide empathetic responses to users’ emotional concerns? Still, Illinois’ AI therapy ban exemplifies a fundamental tension emerging across American law. The United States’ sectoral approach to regulation is colliding with the inherently boundary-crossing nature of AI systems.

AI Tools for Economists and Policy Analysts – August 1, 2025 – AEIdeas – New artificial intelligence tools are rapidly transforming how economists and policy analysts conduct research, dissect data, and communicate findings. Rather than replacing traditional research methods, ChatGPT, Claude, and Gemini are serving as force multipliers, allowing analysts to explore ideas more thoroughly and overcome common bottlenecks in the research process. Here are some ways to use AI tools productively.

Government in the Age of AI – July 15, 2025 – AEIdeas – In 2023, I argued for using large language models (LLMs) to streamline government operations, proposing exactly this type of regulatory cleanup. Since then, the technology has advanced dramatically and government officials are recognizing that these tools could finally enable regulatory reform agendas that have long proven difficult to implement at scale. But how else might government processes be affected by AI? This article discusses some pathways.

Why New York’s New AI Legislation Is Problematic – June 10, 2025 – City Journal – New York’s RAISE Act has admirable goals in trying to protect people from AI harms. But the bill risks turning a technical challenge into a bureaucratic burden with all of its requirements.

Two Cheers for the AI Moratorium! – June 10, 2025 – Exformation – The AI moratorium is a pragmatic compromise that prioritizes getting regulation right over getting it fast, even though it probably should be just 5 years. We need smart AI regulation. But we need it to be consistent and evidence-based. A temporary pause on state-level rules gives us the best chance to get this right.

Are Software Jobs Collapsing? – June 6, 2025 – AEIdeas – While tech companies have indeed cut hundreds of thousands of jobs since 2022, with Amazon, Google, and Meta leading massive workforce reductions, the data suggests broader economic factors rather than AI displacement are the primary cause. This piece dives into the data behind software jobs.

The Evidence So Far: What Research Reveals About AI’s Real Impact on Jobs and Society – May 22, 2025 – AEIdeas – As organizations race to integrate new AI models into their workflows, everyone is wondering what the effects will be on industries, jobs, and society. This lit review compiles research on large language models (LLMs), chatbots, and AI systems published since ChatGPT 3.5’s late 2022 debut.

China’s AI Strategy: Adoption Over AGI – May 8, 2025 – AEIdeas – China’s AI strategy isn’t about chasing AGI breakthroughs but about rapid, large-scale adoption. This stark contrast with the US could have far-reaching implications for the future of AI and global power dynamics. As the US focuses on regulatory frameworks and long-term risks, China is capitalizing on the immediate, transformative potential of AI. It’s time for US leaders to pay closer attention to China’s strategy, which may offer valuable lessons on how to harness AI’s full potential.

How Much Might AI Legislation Cost in the US? – March 19, 2025 – Exformation – As policymakers rush to regulate artificial intelligence, the true economic burden of compliance remains largely unexplored. This analysis dives into recent California privacy amendments and Biden’s Executive Order implementation to reveal potentially massive costs. When prompted to estimate compliance requirements, large language models (LLMs) like Claude, ChatGPT, and Grok consistently predicted much higher costs than official government estimates, particularly for ongoing compliance. For California’s risk assessment regulations alone, LLMs project costs up to ten times higher than official figures. These findings suggest systematic underestimation in regulatory impact assessments and highlight the potential for using AI itself to create more realistic, diverse simulations of regulatory burden across different business types and scenarios.

Response to the Development of an Artificial Intelligence (AI) Action Plan – March 15, 2025 – Regulatory Filing – The US stands at a critical juncture in AI policy, requiring measured governance to maintain technological leadership. This comprehensive response to the Trump Administration’s AI Action Plan outlines six strategic priorities: exercising strategic patience in regulation rather than rushing new rules; using AI to reform and streamline government itself; leading global AI governance by supporting open-source development and funding standards; preparing the workforce through targeted skills development without premature intervention; addressing data center and energy infrastructure barriers to AI deployment; and securing semiconductor supply chains and critical minerals.

Is AI Moving Too Fast or Is Regulation? – January 30, 2025 – Techne – To enact the restraint AI regulation desperately needs, legislators should follow three guiding principles. First, they should focus on actual harms rather than theoretical ones. Second, they should leverage existing legal frameworks. Third, they shouldn’t outsource legislative work. AEI repost here.

AI on the Cheap vs. Stargate’s Big Splash – January 23, 2025 – Techne – The rise of DeepSeek alongside the launch of OpenAI’s Stargate project presents a fascinating paradox: As the U.S. bets on maintaining technological superiority through massive infrastructure investments, innovative approaches from abroad are demonstrating alternative paths forward. As the Trump administration settles in and considers its approach to both AI regulation and China policy, these early developments suggest that maintaining technological advantage may require more nuanced tools than export controls alone. AEI repost here.

CHIPS Politics and AI Breakthroughs – January 9, 2025 – Techne – The hardware and software aspects of AI are intertwined yet distinct, each with its own economic and political dynamics. Two pivotal developments - chip shortages during COVID highlighting semiconductor politics, and breakthrough language models sparking a generative AI race - have fundamentally altered AI’s trajectory. While discussions of AGI and regulation continue, we’re witnessing parallel revolutions: a hardware transformation reshaping global supply chains alongside a software evolution expanding machine capabilities. AEI repost here.

What Our Leaders Need to Know About Tech Policy – November 7, 2024 – Techne – Much of this year’s election cycle was rooted in nostalgia. But, I argue that we need a forward-thinking administration. This article proposes a tech and innovation agenda for 2025, including AI leadership rather than sending it to the states, a more measured approach to big tech antitrust, and a focus on BEAD implementation, among others. AEI repost here.

Insights From the Zeitgeist – October 31, 2024 – Models, like metaphors, provide frameworks to help us understand and explain the world around us. By synthesizing various analytical models from the past year, I aim to present a cohesive snapshot of politics, technology, and culture in 2024. AEI repost here.

Welcome to the Techlash – October 24, 2024 – Techne – Tech regulatory approaches have shifted from the Clinton administration’s innovation-focused framework to the Biden administration’s sprawling AI executive order and similar legislation making its way through Congress. This is part of the “techlash” – the sense that we got this wrong and didn’t regulate quickly enough. However, this sense isn’t true – there is considerable evidence supporting the idea that our tech sector is booming precisely because we’ve allowed them significant latitude. AEI repost here.

California’s High-Stakes AI Bill Lacks Legal Awareness – September 6, 2024 – Techne – California’s State Assembly recently passed SB 1047, a controversial AI safety bill aimed at regulating advanced AI models to prevent potential threats. The bill’s approach is problematic when viewed from legal and administrative perspectives. It regulates emerging “frontier models” using safety protocols that lack expert consensus and are not fully developed. Furthermore, SB 1047 relies on vague “reasonableness standards,” which are notoriously difficult to define legally, making its implementation and enforcement challenging. AEI repost here.

Why the Self-Driving Car Craze Slowed Down – August 29, 2024 – Techne – For years, we’ve been promised that self-driving cars were imminent, but the reality has proven more complex. While autonomous vehicles (AVs) show potential, significant challenges remain in making them practical and safe. AVs present a big data problem, requiring the ability to navigate countless scenarios, which must be addressed meticulously. Even as driverless cars slowly progress toward viability, implementing the necessary policy changes to fully integrate them into our urban landscapes poses an even greater challenge. AEI repost here.

What Tech Bros Really Think About AI – August 1, 2024 – Techne – In the past year, I’ve had probably a dozen meetings with people who care about AI policy. This edition of Techne is a high-level report of sorts from these conversations, detailing four fault lines in AI policy: (1) a cultural divide exists between Washington, DC and Silicon Valley, stemming from mutual illiteracy in technology and policy, respectively; (2) existential risk appears to be the primary motivating factor in these discussions; (3) there’s a prevailing concept of escalating risk of catastrophic outcomes associated with the development of AGI and ASI; and (4) the precautionary principle neglects the progress that a risk-neutral approach would credit. AEI repost here.

How Gun-Shy Legislators Could Hamper AI – June 20, 2024 – Techne – The Colorado bill (SB 24-205) regulating AI exemplifies common flaws in AI regulation, focusing on outcomes rather than intent and imposing burdensome compliance requirements. It introduces complex reporting processes and costly human review procedures, potentially hampering technological progress. The bill’s approach suggests a trend towards state-level AI regulation, overlooking the dynamic nature of AI and the potential for leveraging existing regulatory mechanisms. AEI repost here.

AI’s Advent Doesn’t Spell Labor Doom and Gloom – June 13, 2024 – Techne – AI can significantly enhance productivity and performance across various tasks, as demonstrated by ChatGPT’s abilities. But, the concept of AI-driven explosive growth, which would dramatically increase consumption and contradict predictions of future GDP decline, remains contentious. Such explosive growth would require seamless, automatic adaptation by people and businesses, which seems improbable given current technological and societal constraints. AEI repost here.

Technological Disruption Takes Time – June 6, 2024 – Techne – New technology adoption by companies can augment labor and capital in various ways, automating some tasks while enhancing productivity in others. Historically, even revolutionary technologies like tractors and electricity required decades for widespread adoption. While automation technologies have diverse effects, they often benefit skilled workers and can expand labor markets. The potential economic implications of superintelligence could be far-reaching, potentially accelerating these trends and introducing new dynamics to the global economy. AEI repost here.

Nvidia’s Blockbuster Quarter and the Value of ‘Compute’ – May 30, 2024 – Techne – Nvidia’s market dominance is primarily driven by its data center business, which generated 87% of its \$26 billion Q1 revenue. This success highlights the growing importance of computational power. As artificial intelligence continues to expand, the costs associated with AI training and inference are becoming more apparent. AEI repost here.

Regulating Frontier Models in AI – May 9, 2024 – Techne – Thirty-one bills that would regulate artificial intelligence (AI) systems are currently before California’s state legislature. SB 1047—the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act—seems increasingly likely to pass, but this bill doesn’t seem like common sense to me. It would legislate an extensive safety compliance regime that accords serious power to a new agency. It also has countless gaps. AI safety advocates have been dramatically underplaying how extensive these requirements would be, and there has been effectively no discussion of the bill’s dubious constitutionality. AEI repost here.

Would It Even Be Constitutional to Pause AI? – March 22, 2024 – AEIdeas – While figures like Elon Musk and Steve Wozniak have endorsed a pause on AI on ethical and safety grounds, the discussions have largely ignored the crucial legal frameworks needed to implement such policies. First Amendment precedents could protect AI from regulation, particularly the law set by cryptography-related cases like Bernstein v. United States and Junger v. Daley. Reposted in Congressional Digest.

AI’s Automatic Stabilizers – March 05, 2024 – AEIdeas – Automatic stabilizers are government mechanisms, like unemployment insurance and progressive taxes, that help to stabilize the economy without needing direction from Congress. In a similar way, there are a range of mechanisms that will automatically stabilize artificial intelligence (AI) adoption without Congress acting.

To Understand AI Adoption, Focus on the Interdependencies – February 12, 2024 – AEIdeas – If we want to understand how AI technology is likely to progress, how it will affect workers, and how it might impact productivity, we should be focused on understanding its interdependencies. AI is getting adopted into work processes, but like any other tech adoption, it will take time to actually shake out.

The Complex Case of TikTok in the United States – January 30, 2024 – CGO Policy Paper – This paper aims to document TikTok’s moment in the spotlight, charting its rise in prominence and its recent legal troubles. Only with this context can the most important public policy questions be answered: What risks, if any, does TikTok pose? How would a ban work in practice? What other options are available? And most important of all, is any of this even constitutional?

Focusing on the workforce will turn the CHIPS Act into a high-tech triumph – January 9, 2024 – Washington Examiner – The growth of well-paying manufacturing jobs has been one of the selling points for the CHIPS Act. But creating jobs should not be the primary lens through which we view this act. While employment opportunities are a welcome byproduct, the primary goal should be to elevate the U.S. as a leader in high-tech manufacturing. Building a workforce to staff the new chip factories will be where the bill succeeds or fails.

The Political Economy of the CHIPS and Science Act – November 14, 2023 – CGO Research in Focus – This primer is designed to bridge a void in the existing literature by examining the semiconductor industry from a political economy perspective. Here is the paper’s nutgraf: “Chip fabrication faces unique economic conditions that tend to push out supply lines to Taiwan, South Korea, and China. When COVID hit, the reliance on Chinese and East Asian production became clear as supply chain issues arose, creating the crucible for the CHIPS and Science Act.”

AI, Canadian regulation, and ChatGVT – October 14, 2023 – Fraser Institute – This presentation offers an analysis of AI advancements and regulatory frameworks in Canada. It first traces the progression of AI technologies, emphasizing the pivotal role of generative pre-trained transformers and their implications for future policy. The discussion extends to the Canadian government’s legislative maneuvers with the Artificial Intelligence and Data Act (AIDA), projecting the potential trajectories and impacts of such regulations on AI development and application.

New Net Neutrality Rules Could Threaten Popular Services – October 3, 2023 – Reason – Since the FCC’s last attempt at pushing net neutrality rules, the dynamics have shifted. Following January 6th, Google, Apple, and Amazon distanced themselves from Parler. Then last year, Cloudflare withdrew from Kiwifarms. Power dynamics are evolving.

Are ‘Killer Acquisitions’ by Tech Giants a Real Threat to Competition? – October 3, 2023 – CGO Research in Focus – In this brief, I define what is meant by a killer acquisition. I then explain why the Facebook-Instagram merger wasn’t a killer acquisition. I use the framework set out by Cunningham, Ederer, and Ma to explain why Zuckerberg decided to buy the app company. Following this, I chart the relationship between killer acquisitions and a concept called “the kill zone.” Finally, I review the benefits of acquisitions from the point of the seller.

New AI poll reveals elites are way out of step with the rest of us – September 26, 2023 – Fox News – In Silicon Valley, Congress, and the Biden administration, leaders are buzzing about AI. For those in the Valley, killer robots with lasers for eyes dominate the conversation. The Beltway, in contrast, is spending a lot of political capital to deal with bias in algorithms. And yet, the general public isn’t primarily worried about either machines gaining control or about algorithms being biased. What concerns them about AI are the national security implications and the potential for job losses.

Let’s use AI to clean up government – July 21, 2023 – Fox News – ChatGPT needs to be turned on the government. A ChatGVT is needed. A ChatGVT could take any number of forms, as I write, “It could provide straight answers about the newest tax plan, if a bill is stuck in committee, or the likelihood that a piece of legislation will pass. Or a ChatGVT could be turned on the regulatory code to understand its true cost to households and businesses…Using AI to turn law into code will mean that the true impact of government will be understandable and accessible. Most know that the burden imposed by regulation is colossal but the exact costs are hard to quantify. A ChatGVT could help sort out that problem.”

Real Options Analysis Could Help Improve Regulatory Decisions – June 22, 2023 – Regulatory filing – The OMB’s Draft Circular A-4 proposes that real options analysis should be considered in “some situations . . . when you are regulating an exhaustible resource or an endangered species.” However, to fully leverage this analytical tool’s potential, it is essential for the Office of Management and Budget (OMB) to endorse its broader application in benefit-cost analysis.

As I argue, the adoption of real options analysis in regulatory proceedings would intensify the examination of three critical, yet often underemphasized, elements of any regulation: irreversibility, or how easily a regulation could be revoked once implemented; uncertainty over the future benefits and costs of the regulation; and timing, or the value of postponing action to gather more information. While this method may not entirely eliminate ineffective regulations, it can provide a reliable framework for informed and accountable decision-making in the public sector.

Public Interest Comment on the National Telecommunications and Information Administration (NTIA) AI Accountability Policy Request for Comment – June 13, 2023 – Regulatory filing – The National Telecommunications and Information Administration (NTIA) has issued a Request for Comment (RFC) on “how to develop a productive AI accountability ecosystem.” This public interest comment is written by CGO senior research fellows Neil Chilson and Will Rinehart and cosigned by other policy centers and experts. In it, the authors draw NTIA’s attention to society’s existing and highly effective accountability ecosystem for software: markets. Our society has been using markets to hold software, algorithms, and automated systems accountable for decades.

Public Interest Comment for the National Telecommunications and Information Administration (NTIA) on the Intersection of Privacy, Equity, and Civil Rights – March 6, 2023 – Regulatory filing – The National Telecommunications and Information Administration (NTIA) recently opened a proceeding to better understand how commercial entities collect and use data. Importantly, it was seeking to understand how “specific data collection and use practices potentially create or reinforce discriminatory obstacles for marginalized groups regarding access to key opportunities, such as employment, housing, education, healthcare, and access to credit.” What the NTIA seeks to tackle is a wicked problem in Rittel and Webber’s classic definition.

The first section explains how data-generating processes can create legibility but never solve the problem of illegibility. The second section explains what is meant by bias, breaks down the problems in model selection, and walks through the problem of defining fairness. The third section explores why people have a distaste for the kind of moral calculations made by machines and why we should focus on impact.

Public Interest Comment on the FTC Trade Regulation Rule on Commercial Surveillance and Data Security – November 30, 2022 – Public interest comment – The Federal Trade Commission (FTC) is pursuing a topic of immense importance to the American public and economy with its proposed rulemaking on commercial surveillance and data security. Indeed, if the agency moves to an NPRM, it is likely to go beyond its authority. Congress would be better suited to provide guidance, which the Commission could then implement. Still, many of the questions in this ANPRM rest on fundamental assumptions that are still debated and remain unresolved. However, if the FTC does pursue a rule, there will be costs that could easily outweigh the benefits.

Tracing the impact of automation on workers and firms – August 14, 2020 – The Benchmark – Automation will be a slow process in many sectors. The productivity data is uneven, firms are reluctant to change, and only some industries seem to be affected by robotics or other automation methods.