Most of the big AI news this year was about how fast the technology is progressing, the harm it's causing, and speculation about how soon it will develop past the point where humans can control it. But 2024 also saw governments make significant inroads into regulating algorithmic systems. Here is a breakdown of the most important AI legislation and regulatory efforts from the past year at the state, federal, and international levels.

State

U.S. state lawmakers took the lead on AI regulation in 2024, introducing hundreds of bills. Some had modest goals, like creating study committees, while others would have imposed serious civil liability on AI developers in the event their creations cause catastrophic harm to society. The vast majority of the bills failed to advance, but several states enacted meaningful legislation that could serve as models for other states or Congress (assuming Congress ever starts functioning again).

Unsurprisingly, given that it's the backyard of the tech industry, some of the most ambitious AI proposals came out of California. One high-profile bill would have forced AI developers to take safety precautions and held companies liable for catastrophic damages caused by their systems. That bill passed both houses of the legislature amid a fierce lobbying effort but was ultimately vetoed by Governor Gavin Newsom.

Newsom did, however, sign more than a dozen other bills aimed at less apocalyptic but more immediate AI harms. One new California law requires health insurers to ensure that the AI systems they use to make coverage determinations are fair and equitable. Another requires generative AI developers to produce tools that label content as AI-generated. And a pair of bills prohibit the distribution of a dead person's AI-generated likeness without prior consent and mandate that agreements for living people's AI-generated likenesses must clearly specify how the content will be used.

The AI executive order Joe Biden signed in 2023 paved the way for much of the federal government's work in 2024. © Bloomberg via Getty Images

Colorado passed a first-of-its-kind law in the U.S. requiring companies that produce and use AI systems to take reasonable steps to ensure the tools aren't discriminatory. Consumer advocates called the legislation an important baseline. It's likely that similar bills will be hotly debated in other states in 2025.

And, in a middle finger to both our future robot overlords and the planet, Utah enacted a law that prohibits any governmental entity from granting legal personhood to artificial intelligence, inanimate objects, bodies of water, atmospheric gases, weather, plants, and other non-human things.

Federal

Congress talked a lot about AI in 2024, and the House ended the year by releasing a 273-page bipartisan report outlining guiding principles and recommendations for future regulation. But when it came to actually passing legislation, federal lawmakers did very little.

Federal agencies, on the other hand, were busy all year trying to meet the goals set out in President Joe Biden's 2023 executive order on AI. And several regulators, particularly the Federal Trade Commission and the Department of Justice, cracked down on deceptive and harmful AI systems.

The work agencies did to comply with the AI executive order wasn't especially sexy or headline-grabbing, but it laid important foundations for the governance of public and private AI systems in the future. For example, federal agencies embarked on an AI-talent hiring spree and created standards for responsible model development and harm mitigation.

And, in a big step toward increasing the public's understanding of how the government uses AI, the Office of Management and Budget wrangled (most of) its fellow agencies into disclosing critical information about the AI systems they use that may affect people's rights and safety.

On the enforcement side, the FTC's Operation AI Comply targeted companies using AI in deceptive ways, such as to write fake reviews or provide legal advice, and it sanctioned AI-gun detection company Evolv for making misleading claims about what its products could do. The agency also settled an investigation with facial recognition company IntelliVision, which it accused of falsely claiming its technology was free of racial and gender bias, and banned the pharmacy chain Rite Aid from using facial recognition for five years after an investigation determined the company was using the tools to discriminate against shoppers.

The DOJ, meanwhile, joined state attorneys general in a lawsuit accusing the real estate software company RealPage of a massive algorithmic price-fixing scheme that raised rents across the nation. It also won several antitrust lawsuits against Google, including one involving the company's monopoly over internet searches that could significantly shift the balance of power in the burgeoning AI search industry.

Global

In August, the European Union's AI Act went into effect. The law, which is already serving as a model for other jurisdictions, requires AI systems that perform high-risk functions, such as assisting with hiring or medical decisions, to undergo risk mitigation and meet certain standards around training data quality and human oversight. It also bans the use of other AI systems, such as algorithms that could be used to assign a country's residents social scores that are then used to deny rights and privileges.

In September, China released a major AI safety governance framework. Like similar frameworks written by the U.S. National Institute of Standards and Technology, it's non-binding but creates a common set of standards for AI developers to follow when identifying and mitigating risks in their systems.

One of the most interesting pieces of AI policy legislation comes from Brazil. In late 2024, the country's senate passed a comprehensive AI safety bill. It faces a challenging road ahead, but if passed, it would create an unprecedented set of protections for the kinds of copyrighted material commonly used to train generative AI systems. Developers would have to disclose which copyrighted material was included in their training data, and creators would have the power to prohibit the use of their work for training AI systems or to negotiate compensation agreements that would be based, in part, on the size of the AI developer and how the material would be used.

Like the EU's AI Act, the proposed Brazilian law would also require high-risk AI systems to follow certain safety protocols.
