Major technology companies and industry groups just responded to the White House's request for feedback on its AI plan, calling for a national framework to head off a fragmented patchwork of state laws governing workplace AI use and more. The majority of comments from the country's most influential tech companies asserted that a unified federal approach would provide much-needed clarity and consistency for businesses, particularly those using AI in hiring, HR, and workplace automation. These evolving policies could reshape your compliance obligations and operational strategies, so you'll want to stay on top of the proposed recommendations. Here's a breakdown of the key proposals and their potential implications.
What Happened?
To help develop a comprehensive national AI regulatory framework, the White House recently issued a request for public comments from stakeholders across industries, with a March 15 deadline. In response, major technology companies, business coalitions, and industry groups were among those that submitted policy proposals advocating various approaches to AI governance.

Top 5 Proposals That Would Impact The Workplace
The submissions (links to which can be found below) contain a series of proposals that would directly impact employers and workplace uses of AI. Here are the top five to track:
1. Federal Preemption of State AI Laws
The call for federal AI regulation is being driven by concerns over the rapid growth of state-level legislation, which is beginning to create a complex and inconsistent regulatory environment for businesses.
- Why the Need? As of the time of publication, there are close to 900 AI-related bills pending in 48 statehouses across the country, each of which seeks to impose different or conflicting requirements on AI use, transparency, and bias mitigation (see, for example, laws passed in Colorado, Illinois, Virginia, and New York City, and bills pending in California and Texas). Tech industry leaders argue that a patchwork of state laws will make compliance more difficult and costly, particularly for companies operating in multiple jurisdictions.
- Who’s Pushing It: OpenAI, Google, TechNet, Microsoft, Business Roundtable, Andreessen Horowitz (a16z)
- What It Would Mean for Employers: A consistent federal AI regulatory framework that preempts state-level AI laws would simplify compliance for multistate businesses. However, it would also introduce new federal requirements and obligations that could impact your use of AI for workforce purposes.
- What You Should Do: You will want to begin preparing early for standardized rules that could affect AI applications in hiring and other HR processes. Here are some general practical tips for companies adopting AI in the Trump era.
2. Balanced and Flexible AI Regulation
While the push for federal preemption focuses on eliminating conflicting state laws, this effort aims to ensure that any federal AI regulation remains flexible and industry-driven rather than overly restrictive.
- Why the Need? Tech groups note that AI is still evolving. Rigid, prescriptive rules could stifle innovation and slow adoption in the workplace. Instead, a collection of business advocates calls for voluntary guidelines, sector-specific approaches, and risk-based oversight.
- Who’s Pushing It: TechNet, Business Roundtable, a16z
- What It Would Mean for Employers: This could mean fewer one-size-fits-all mandates and more tailored compliance obligations based on industry and AI use case. However, it may also require you to provide input to lawmakers and regulators to help shape how these flexible standards are crafted.
- What You Should Do: Engage with industry groups and regulators to stay informed about best practices and evolving standards – and to offer your opinions about how regulation is developed. You should also consider working with a team that can help make sure your voice is heard where it matters.
3. Regulation Focused on Risk-Based AI Use, Not Development
Some industry leaders argue that AI regulation should be risk-based, focusing on how AI is used rather than how it is developed.
- Why the Need? Some contend that imposing restrictions at the development stage could stifle innovation and slow AI progress in the U.S., giving an advantage to international competitors. Instead, they advocate for application-based oversight, ensuring that AI deployment in workplaces, hiring, and decision-making meets ethical and legal standards without burdening early-stage research and development.
- Who’s Pushing It: a16z, U.S. Chamber, TechNet
- What It Would Mean for Employers: Compliance efforts would likely focus on how AI tools are integrated into your business operations rather than on the underlying technology itself.
- What You Should Do: Employers should focus on responsible AI use in their operations to align with this regulatory aim, as it seems to be the underpinning of most existing and forthcoming regulation. Establishing an AI governance program would be a great first step.
4. Copyright Exemptions for AI Training Data
The push to allow AI models to train on publicly available data is being driven by concerns that restricting such access would hinder AI development and put the U.S. at a competitive disadvantage.
- Why the Need? Advocates argue that AI systems need vast amounts of data to improve accuracy and functionality, and limiting training sources because of copyright laws could slow innovation. However, AI training has sparked ongoing legal battles over intellectual property rights, particularly in industries reliant on copyrighted content. With several high-profile lawsuits already filed against AI companies for unauthorized use of copyrighted materials, the outcome of these debates could have lasting effects on AI-driven recruitment tools, HR analytics, and workplace automation.
- Who’s Pushing It: OpenAI, Google
- What It Would Mean for Employers: Allowing AI models to train on publicly available data, including copyrighted material, could enhance AI capabilities but raises concerns about intellectual property and data privacy.
- What You Should Do: Employers using AI-powered systems for hiring or content generation should stay informed about potential regulatory changes and emerging legal risks. And on the flip side, you can take these five steps to ensure your company’s AI-generated works are protected by copyright.
5. Federal Investment in AI Workforce Training
Business leaders are advocating for federal investment in AI workforce development, including tax incentives, public-private training partnerships, and government-backed AI education initiatives. Proponents argue that these measures will help businesses adapt to AI-driven changes without relying solely on external hires, reducing the risk of large-scale job displacement.
- Why the Need? The rapid integration of AI into the workplace is reshaping job roles and creating new skill demands. Many companies report a growing skills gap, with existing workers lacking the technical expertise to take full advantage of AI tools.
- Who’s Pushing It: Anthropic, Microsoft, Google, Business Roundtable, a16z, U.S. Chamber, TechNet
- What It Would Mean for Employers: Federal investment in AI workforce training could provide opportunities for upskilling employees, addressing talent shortages, and enhancing competitiveness.
- What You Should Do: You should assess how AI will impact your workforce and explore partnerships and programs that support AI skills development. Here is some more information about upskilling your workforce to get up to speed on AI.
More Resources: Links to Big Tech Submissions