A Primer on Regulating Novel AI Technologies

AI regulation has become a big goal for local and federal legislatures. With AI having biases and causing job displacement, issues as artificial intelligence integrates further into various sectors have become more prominent. Although these problems still exist governments are pushing regulation that is either not working or hinders innovation. 

AI bias occurs in many areas but one are it manifests significantly in is predictive policing models, which aim to identify individuals or places at risk of crime involvement but often harbor serious biases. These biases disproportionately impact people of color and marginalized communities. Black people, for instance, are overrepresented in criminal databases, leading to unfair targeting by facial recognition tools. These tools generally disfavor people of color due to major racial disparities in these databases. A 2020 ACLU study found that Black people are more likely to be arrested for minor crimes than white people because their faces and personal data are more prevalent in mugshot databases.

Automated enforcement cameras also exhibit bias, with Black and Latinx communities facing higher rates of traffic citations. A study in Chicago found that speed cameras ticketed households in majority Black and Latinx neighborhoods at twice the rate of those in majority white zip codes. In Washington DC, drivers in Black segregated areas are 17 times more likely to receive a photo-enforced citation than those in white-segregated areas. Automated traffic enforcement has also been shown to make errors, such as in 2009 when a Black woman was detained for 20 minutes due to a license plate reader error. These cameras measure vehicle speed, capture license plate details, and automatically issue tickets, but their presence disproportionately affects marginalized communities and can result in erroneous citations.

To mitigate these problems, addressing root causes like false incarceration and the criminalization of people of color is essential. Avoiding biased measures and regulating the specific use cases for facial recognition software can reduce racial biases and promote fairer law enforcement practices. The UK, for instance, has introduced the Surveillance Camera Code of Practice to ensure the use of cameras is legitimate and necessary. Regular reviews of the system’s use and creating bylaws that limit facial recognition software to specific cases like finding missing persons or identifying victims of natural disasters can reduce harmful impacts. This approach ensures that law enforcement cannot convict someone solely based on AI predictions but must have reasonable cause, similar to obtaining a search warrant.

Job displacement due to AI and automation technologies is another major concern. A McKinsey report estimates that up to 800 million jobs worldwide could be automated by 2030, affecting one-fifth of the global workforce. The World Economic Forum predicts that by 2025, 85 million jobs may be displaced due to a shift in the division of labor between humans and machines. Low-skilled workers are more vulnerable to this displacement, as they often hold routine jobs that can be easily automated. According to the Brookings Institution, low-wage jobs have a higher risk of automation compared to high-wage jobs, exacerbating economic inequality. However, the creation of 97 million new roles by 2025 presents an opportunity for low-skilled workers. The key issue is reskilling workers to fit these new roles.

Encouraging companies to invest in employee education and securing government funding for specific AI training for non-AI-savvy employees can create a more productive and efficient workforce, mitigating the economic damages posed by job displacement. Mandated training, similar to California's sexual harassment training requirements, should be enforced. Government-funded or subsidized programs can help achieve this. Germany’s approach to vocational training and public funding for worker retraining is a robust model for adapting to technological changes. Germany's dual system combines classroom instruction with practical on-the-job training, allowing students to acquire both theoretical knowledge and practical experience. German companies are heavily involved in the training process, ensuring that skills taught are relevant to the current job market. The Bundesagentur für Arbeit (Federal Employment Agency) provides funding and support for retraining programs, offering various subsidies and financial aid to both individuals and companies involved in retraining efforts. Initiatives like the “Digital Skills Initiative” aim to upskill workers in response to the digital economy.

Governments can also regulate third-party testing in the AI sector to ensure powerful systems are deployed safely and responsibly. Frontier AI systems, such as large-scale generative models, present risks of misuse and accidents. Third-party testing helps identify and mitigate these risks by ensuring systems perform safely under various conditions and do not enable harmful applications. This testing builds public and institutional trust in AI systems by providing transparent and objective validation, fostering greater acceptance and integration of AI in society. Government involvement is crucial for establishing standardized testing procedures across the industry, preventing discrepancies in testing quality, and ensuring all significant AI systems meet minimum safety and performance criteria. Independent oversight reduces the risk of regulatory capture, where well-resourced companies might influence regulations in their favor.

To implement third-party testing, governments should work with industry and academic experts to develop comprehensive testing standards for AI systems. This involves defining safe and reliable performance and setting clear guidelines for testing procedures. Governments can fund organizations responsible for conducting third-party testing, such as national research institutions or independent testing firms. For example, the U.S. National Institute of Standards and Technology (NIST) could play a significant role in developing and overseeing these standards. International collaboration is also essential, as AI is a global technology. Governments can foster cooperation between countries to develop shared standards and mutual recognition agreements, ensuring consistent and comprehensive testing across borders. Enacting laws that mandate third-party testing for certain high-risk AI systems can ensure compliance and accountability. There are concerns about the effectiveness of third-party testing. Current evaluations of AI models are criticized for being limited in assessing the safety and capabilities of AI systems. Bias and data quality are also significant issues, as AI models often have biases due to skewed or incomplete training data. The dynamic nature of data, especially from the internet, makes static tests less effective over time. Regulatory capture is another concern, as powerful companies might influence regulations to their benefit. Developing effective tests for complex AI systems and managing the high costs associated with these tests are ongoing challenges.

The European Union (EU) is actively working on AI regulations, including provisions for third-party testing and certification. The AI Act aims to establish a comprehensive regulatory framework with safety and performance assessments for high-risk AI systems. Although this seems like a step forward in mitigating the risks of AI, this regulation in the EU has faced significant backlash. Colorado’s AI Act, similar to the EU’s AI Act, receives similar criticisms and covers any AI system that significantly influences consequential decisions on essential services, potentially imposing heavy regulatory burdens on many industries. The law targets AI system results rather than just intentional discrimination, potentially holding entities accountable for unintended consequences. Enforcement and compliance challenges are also significant, with complex requirements that could be costly and burdensome, especially for smaller entities. Governor Polis has expressed concerns about regulating AI outcomes instead of just intentional discrimination, suggesting the law might need refinement for better practical implementation.

Several states in the U.S. have proposed regulations to address AI safety and biases. California’s AB2930 bill focuses on preventing biases in the use of automated decision tools and mandates annual impact assessments, notices to individuals, and opt-out options. Critics argue that this bill could impose excessive regulatory burdens and stifle innovation. SB 1047 aims to enforce rigorous safety measures and oversight for developing advanced AI models to prevent significant public safety risks. Tech industry opposition argues that the proposed regulations could hinder innovation and calls for more federal guidance. SB 294 requires advanced AI models to undergo safety testing and empowers the state attorney general to sue for consumer harms resulting from AI technologies. Critics argue that the proposal is vague, impractical, and could generate significant regulatory uncertainty.

Scott Wiener's SB 1047, in particular, has faced substantial backlash from industry experts. They argue that the bill's stringent requirements could slow down technological progress and innovation in the AI sector. The tech industry emphasizes the need for flexibility and warns that overly prescriptive regulations could hamper the development and deployment of beneficial AI technologies. Furthermore, experts point out that the bill's provisions might be too restrictive for startups and smaller companies, potentially driving innovation out of state. Despite these criticisms, supporters of the bill insist that the robust safety measures are necessary to protect public safety and ensure the ethical development of AI systems.

In conclusion, AI bias and job displacement are critical issues that require comprehensive regulatory frameworks. Addressing biases in predictive policing, automated enforcement cameras, and other AI applications, along with reskilling workers to adapt to technological changes, are essential steps. Implementing third-party testing and establishing standardized procedures can ensure AI systems are safe and reliable. While current regulations face criticisms and challenges, ongoing efforts to refine and develop effective frameworks will help mitigate the risks associated with AI technologies and promote a fairer and more equitable society.

Next
Next

The Untold Stories of India's Independence