
AI regulations: What publicly elected boards need to know

With its power to automate tasks, supercharge analysis and draw insight from vast troves of data, AI offers public sector leaders the promise of a magic wand.
Local governments see AI as a way to stretch taxpayer dollars further and deliver more responsive, personalized services to their communities. For school boards and community colleges, the technology promises more — AI could expand educational opportunities, improve administrative efficiency and, ultimately, boost student success.
But with power comes risk. How well does your organization manage risks like inaccuracies and “hallucinations,” and how well have you protected sensitive or personal data?
As districts and municipalities build AI into new processes and new aspects of their operations, boards need to stay up to date on a fast-changing regulatory landscape. Not keeping pace can expose whole institutions to meaningful risks, from fines and reputational damage to lost community trust. Getting ahead of the curve, by contrast, gives decision-makers the chance to ensure compliance, maintain stakeholders’ trust and think strategically about long-term adoption.
To help publicly elected boards develop strong, up-to-date policies for AI governance, we’ve rounded up regulatory developments into a user-friendly guide that offers:
- A detailed overview of laws and frameworks to know, plus agencies and issues to watch, in the AI regulatory landscape.
- Tips and guidance on AI governance and administration, from peers grappling with the same decisions.
- Advice on how modern governance software can help.
The regulatory landscape: AI in local government and public education
The emerging web of AI laws covers a wide and ever-expanding range of areas, from cybersecurity and data protection to usage policies and safety precautions.
When navigating AI governance for your school or municipal organization, we recommend consulting with your legal team and doing in-depth research on specific agencies, laws and issues. But this high-level overview of select countries — the agencies involved, what they’re focused on, who they’re partnering with and specific policies (both enacted and proposed) — is a powerful start to understanding the landscape.
United States
Federal requirements
Data privacy has long been governed at the federal level by legislation such as the Family Educational Rights and Privacy Act (FERPA) and the Health Insurance Portability and Accountability Act (HIPAA). The situation with AI, however, is more complex.
American AI legislation took off when the White House released its Blueprint for an AI Bill of Rights through the Office of Science and Technology Policy in 2022, then issued an Executive Order on AI one year later. Both steps added urgency to existing efforts to develop AI legislation, of which the two most important were:
- The National Artificial Intelligence Initiative Act of 2020 (Division E of the NDAA of FY2021, P.L. 116-283). This law formalized the American AI Initiative and created a National AI Initiative Office, alongside an interagency committee and a National AI Advisory Committee, all tasked with providing guidance and recommendations.
- The AI in Government Act of 2020 (Division U, Title I, of the Consolidated Appropriations Act, 2021, P.L. 116-260), which directs the General Services Administration (GSA) to create an AI Center of Excellence and takes various measures to facilitate and streamline federal AI adoption.
From there, AI regulation in the United States has evolved through a patchwork of federal, state and industry frameworks — especially after July 2025, when the Senate voted to strip a provision from proposed legislation that would have blocked state and local governments from regulating AI for a decade.
School board members should start with Department of Education guidance on the use of AI, as well as its accompanying illustrative use cases. Current rules clarify the status of AI in relation to federal grants and funding and lay out key constraints on how it can be used. These directives respond to the April 23, 2025, Executive Order, which your team will also need to become familiar with.
Local government officials and board members can look instead to the White House’s AI Action Plan, also from 2025, and may wish to follow the developing conversations about the use of AI in government activities.
State requirements
Regulation of AI at the state level has been more direct and more active than federal policymaking. Many of the key considerations apply to both school boards and local governments.
The National Conference of State Legislatures published an overview and tracker of 2025 AI-related regulatory developments. It covers the most common areas for AI regulation across states, from transparency requirements for automated decision-making tools to guardrails for AI-powered bots and agents.
Colorado became the first US state to enact comprehensive AI legislation in May 2024, with a focus on algorithmic discrimination and systems in essential areas like housing, healthcare, education and employment.
California has also been a forerunner in AI regulation, addressing areas like increasing business accountability, combating discrimination and regulating how businesses use data.
Many of these policies — such as those adopted in Texas and Utah — also establish a state office or center of excellence to sponsor, host and disseminate additional research or guidance.
Most states have, for now, opted for more limited regulation. Connecticut, for example, chose this year not to regulate AI in education, business or government directly, but rather to prohibit certain harmful uses of AI, such as “deepfake” images.
In total, dozens of states have enacted or proposed hundreds of pieces of AI-related legislation and regulation, with the most common areas of focus including:
- Notifying people that they’re interacting with AI systems or AI-generated content
- Using algorithms to determine employment, services and housing
- Offering ways to opt out of data collection and profiling
- Testing AI systems for discrimination and bias
- Monitoring, mitigating and disclosing the potential risks and impacts of AI applications like automated decision tools and bots used for purposes like mental health services
In public education, a majority of states now impose at least some formal guidance on school systems’ uses of AI tools and platforms, with a helpful overview provided by EducationWatch.
In Tennessee and Ohio, school districts are required to maintain a detailed policy on AI use. In states like Massachusetts, the absence of school-specific regulation is balanced by robust guidelines covering legal, ethical, pedagogical and social issues related to AI.
For local government, the top concern is not targeted regulation but the complexity — and overlapping demands — of multiple sources of AI policy and guidance.
Canada
At the federal level, the proposed Artificial Intelligence and Data Act (AIDA) was Canada’s first attempt at comprehensive AI legislation. Although AIDA was paused in early 2025 due to parliamentary changes, its principles — including risk-based governance, transparency, and accountability — continue to shape voluntary codes and sector-specific guidance.
In the absence of binding federal law, Innovation, Science and Economic Development Canada (ISED) has introduced a Voluntary Code of Conduct for Generative AI, which encourages organizations to uphold standards around fairness, safety, human oversight, and transparency when deploying advanced AI systems.
In April, a bill was introduced that would regulate the use of AI in Canadian higher education by extending AIDA; it has been tabled but not yet passed. If enacted, the law would implement strict transparency and safety requirements for AI in K-12 contexts. There have also been recent calls for a national policy strategy for generative AI in higher education and for legislation to fill the “legislative vacuum” currently surrounding the technology.
Some school boards have been proactive about AI: the Alberta School Boards Association, for instance, released its own guidance for schools’ AI policies.
Other provinces have followed suit, albeit with a more general focus. Québec’s Law 25, for example, imposes strict data privacy and automated decision-making disclosure requirements, including mandatory notification when decisions are made solely by AI.
Ontario’s Bill 194 requires public sector entities to disclose AI use, manage associated risks, and implement accountability frameworks. Despite examples like this, a 2025 report on local government pointed out that nearly one-third of municipalities still lack formal guidelines for using AI.
These rules will continue to change. Debates about the use of AI in Canada’s schools and local governments remain fierce — so savvy board members will make proactive moves to head off the greatest risks and set up flexible, powerful systems for governance and internal management of AI technologies.
Practical tips and resources for regulatory compliance
Knowing the latest laws is only half the battle. Now you need to put that knowledge to work by figuring out how your organization will comply with them.
One place to start is with an AI governance framework. Such a framework sets guardrails, guidelines and expectations, and outlines what’s acceptable and what’s prohibited for those using AI tools.
A municipal or school district board can make the process of developing an AI framework more manageable with these seven steps:
1. Getting up to speed with resources like the AI Risk Management Framework and Playbook from the National Institute of Standards and Technology (NIST) or the Artificial Intelligence Governance and Auditing (AIGA) list of AI governance tasks — and trying out AI tools and applications first-hand to explore their capabilities and limitations.
2. Applying this research to their organization by considering questions like:
- What will your organization consider “acceptable use” in terms of AI?
- Who’s involved in making this decision and how?
- What uses of AI will be prohibited, or allowed but considered high risk?
- How would you govern these projects and mitigate the risk?
3. Coming to a mutual understanding of how you’d like to see AI used in your organization — and documenting this discussion.
4. Getting administrators and staff familiar with AI tools and how to use them correctly, to minimize the chances they get frustrated and dismiss the technology.
5. Rolling out a few pilot projects, proceeding in a targeted, easy-to-manage fashion.
6. Bringing in other technology to help, like specialized governance software.
7. Keeping humans in the loop every step of the way.
AI policy — guidelines and rules for AI use — is a complementary next step and brings up many more questions. A public education board might consider:
- For students, what are the consequences of using AI for class assignments but not disclosing its use?
- For teachers, what AI tools will be permitted in the classroom, and for what purposes?
- How will data be collected, used and shared, and how will you proactively inform parents, students and stakeholders about this?
- How will your district or college avoid access and digital divide issues when incorporating AI tools into classwork and assignments?
Learn more in our governance checklist for public education, our governance checklist for local government and our guide to creating a safe and transparent AI governance framework.
Diligent Community: Bringing efficiency, effectiveness and ease to AI governance
Just like the technology itself, AI regulations are evolving fast — as are the related opportunities and risks. Diligent Community helps busy boards keep up by:
- Centralizing policies, meeting materials, onboarding documents and training resources in one online library, so they can access the right data at the right time for better decision-making.
- Delivering secure communications through the same platform so clerks and administrators can consistently keep board members up-to-date and prepared.
- Equipping staff and board members with custom workflows for seamless reviews and approvals and meeting management features like in-platform voting and “sticky note” annotation tools, to keep them engaged and aligned in their work.
- Enhancing transparency with an ADA-compliant, easily searchable public website for sharing agendas, minutes and other public documents.
- Engaging the community even more through a one-stop Livestream Manager that’s easy to set up, delivers a high-quality video stream and offers features that bring viewers into the process, like split-screen video alongside the agenda and time-stamped minutes created quickly from livestream transcripts via AI.
- Enabling policy management from creation to adoption with Policy Publisher.
Throughout, this cloud-based board management software uses secure servers and 256-bit encryption, among the strongest levels of encryption currently available, to ensure privacy and security for the most sensitive school or municipal data.
By delivering the right data to the right people at the right place and time, Diligent Community makes AI governance more efficient, effective and engaging. Schedule a demo today.