Texas draws lines on artificial intelligence
Everything is bigger in Texas, and that now includes how seriously the state is taking artificial intelligence. As AI tools move from experimental labs into classrooms, startups and public institutions, Texas lawmakers have begun putting formal guardrails around how the technology can be developed and used. Beginning this year, new state laws aim to curb harmful or deceptive applications of artificial intelligence while preserving the innovation that has helped Texas emerge as a growing tech hub.
In a statement following the bill’s passage, Gov. Greg Abbott said the legislation is intended to protect Texans from harmful uses of artificial intelligence while supporting innovation.
At a high level, the legislation focuses on preventing AI systems from violating individual rights, enabling discrimination or manipulating consumers. Rather than prescribing how AI must be built, the laws emphasize outcomes that are considered unacceptable. For those working in the field, that is an important clarification.
Tim Mata, co-founder of Mind & Machine, a Dallas-Fort Worth-based AI networking organization, described his initial reaction as cautious but optimistic. He characterized the current moment as “governance in an AI wild west” and questioned whether new rules could slow solution-focused innovation. “How structured is it?” Mata said. “What new hoops will founders have to go through?”
Mata said the law is less about limiting technology itself and more about protecting consumers. Rather than imposing technical requirements on developers, the legislation draws boundaries around how AI should not be used.
That approach means the impact will not be felt evenly across the AI ecosystem. Mata pointed to sectors where AI systems make high-stakes decisions about people’s lives, including hiring, finance, health care, education, government and biometrics. In these areas, AI tools often determine access to opportunities or resources, making oversight more critical.
For example, a Texas company using AI to screen job applicants may need to closely audit its hiring models for bias and be prepared to explain how automated decisions are made, particularly when those systems affect employment or access to services.
Concerns that regulation could slow innovation are common, but Mata sees the Texas approach as closer to guardrails than a brake. Given the pace of AI development, he believes some limits are necessary. At the same time, he does not see Texas becoming less attractive to founders or investors, particularly since the state is not mandating specific development processes and continues to draw significant capital and infrastructure growth.
That view is shared by leaders working inside large organizations deploying AI at scale. Tori Begg, senior manager of AI programs at V Digital Services, said the legislation responds less to innovation itself and more to uncontrolled deployment. She pointed to black-box decisions, unclear accountability, biased outcomes and teams using AI tools without leadership fully understanding where or how those systems are being applied.
The Texas Responsible Artificial Intelligence Governance Act, known as TRAIGA or HB 149, was signed June 22, 2025, and takes effect Jan. 1, 2026. It establishes statewide limits on certain AI uses while allowing companies flexibility in how systems are developed. Begg does not see the law as slowing progress. Instead, she said it creates the conditions needed for AI to scale responsibly. “I don’t think of laws like TRAIGA as stopping the car,” she said. “I think of them as finally putting lane markers on a highway everyone was already speeding down.”
In education, the challenge looks different. Allison Reid, former senior director of digital learning and libraries for Wake County Public School System in North Carolina, said AI has been present in schools for years through adaptive learning platforms, analytics and automated tools, even if it was not labeled as such. That changed after the release of ChatGPT in late 2022, when generative AI became visible and immediately usable, forcing districts to respond more directly.
Reid said that policy can easily become misaligned with reality. State laws are often broad and slow-moving, while schools operate under tight timelines and limited capacity. She argued that states should set baseline expectations around data privacy and vendor governance, while leaving implementation decisions to local districts that better understand their communities.
Taken together, perspectives from founders, enterprise leaders and educators point to the same uncertainty. The real test of Texas’s AI laws will not be in how they are written, but in how they are applied as the technology continues to evolve.
For now, Texas is positioning itself as a test case, betting that early boundaries can prevent harm without pushing innovation elsewhere.