Across the technology industry, artificial intelligence (AI) has boomed over the last year. Lensa went viral creating artistic avatar artwork generated from real-life photos. The OpenAI chatbot ChatGPT garnered praise as a revolutionary leap in generative AI with the ability to provide answers to complex questions in natural language text. Such innovations have ignited an outpouring of investments even as the tech sector continues to experience major losses in stock value along with massive job cuts. And there is no indication the development of these AI-powered capabilities will slow down from their record pace. Governments and corporations are projected to invest hundreds of billions of dollars in associated technologies globally in the next year.
With this unprecedented growth, however, communities have grown more concerned about the potential risks that accompany AI. Reports indicate ChatGPT is already being leveraged by criminals to perpetrate fraud against unsuspecting victims. The Lensa app generated explicit images of individuals without their consent. Georgetown Law’s Center on Privacy & Technology recently released a report highlighting long-held concerns about the use of face recognition in criminal investigations. Jurisdictions often lack the policies and procedures necessary to govern the use of face recognition evidence, and that gap has led to rights violations and wrongful arrests.
Existing Regulatory Frameworks
Faced with these privacy and safety concerns, a patchwork of state and local regulation has begun to form in the United States. In 2020, Madison, Wisconsin, outright banned the use of facial recognition and associated computer vision AI algorithms by any entity. In 2021, the city of Baltimore banned the use of face recognition technology with a limited exception for some use by police. That ban expired in December 2022, as council members continue to determine how best to address the community’s privacy and data collection concerns. Three states – Illinois, Texas, and Washington – have enacted strict laws pertaining to data and privacy in connection with face recognition. Illinois’s Biometric Information Privacy Act, or BIPA, remains one of the country’s strictest sets of AI-related privacy regulations and faces regular challenges from tech companies over compliance issues. In recent years, a host of states from Alabama to California have enacted legislation intended to regulate the use of AI. However, domestic regulation of AI remains a patchwork, with the U.S. Chamber of Commerce estimating that less than one-third of states have at least one law that specifically addresses the use of AI technologies. Most of the existing laws focus on data collection, data protection, and data sharing.
Federally, there is currently no comprehensive law that governs AI development or use. The American Data Privacy and Protection Act, which would have created a national standard and safeguards for the collection of personal information and addressed algorithmic bias, failed to pass last year, and divided party control of an arguably hyper-partisan Congress does not immediately give rise to the comity needed to pass new legislation.
The international regulatory landscape is just as uneven, with the European Union and China taking early action to regulate the technology. Last year, the Chinese government’s first major AI regulatory initiative focused on informed consent, requiring companies to inform users when an algorithm was being used to display certain information about them and to provide an opportunity to opt out. The government has since pursued a variety of policy initiatives across different government entities aimed at shaping the international development of AI technologies and governance. However, the Chinese government’s own use of AI in privacy-invasive ways remains a deep concern. The European Union’s AI Act is much broader, designed as an all-encompassing framework organized around the specific levels of risk associated with the use of AI technology.
Thus far, however, it has mostly been up to the tech industry to self-regulate when it comes to AI. Yet in a 2021 survey conducted by the consulting firm McKinsey, only fifty-six percent of responding companies reported having AI ethics policies and practices in place. Although countries are beginning to establish governance standards, without a unified approach or model guidance, companies will still be left to self-regulate to the requirements of the most stringent laws that apply to them while simultaneously attempting to understand how pending legislation around the globe may affect their business.
Toward a Consistent Regulatory Approach
AI promises sweeping benefits through its ability to enhance the capabilities of current technology. When algorithms are properly trained, they can make unbiased decisions, reduce human error by making processes faster and more efficient, solve complex problems, and support a host of other potential improvements to society. Conversely, AI presents challenges and risks, from cyberattacks to the aforementioned support of criminal conduct, the potential misuse of autonomous weapons, general misuse and unforeseen consequences from poorly or improperly trained models, and a host of other potential threats.
Given the disparities in regulation both domestically and internationally and the inherent risks associated with its use, the United States must pass formal regulation that provides clear guidance for industry and proper protections for society while leaving room for continued innovation. The government will need to address concerns such as the protection of privacy rights and the use, aggregation, and security of personal data, while closing loopholes that could enable unforeseen abuses and misuse of associated technologies. Achieving this will require a comprehensive framework of measured policies that provide protections rather than draconian blanket prohibitions. Outright bans do not allow industry to collaborate with governments and academia to find thoughtful, sustainable answers to ongoing concerns. Additionally, companies will likely avoid doing business in jurisdictions that prohibit all use, forgoing the investments, infrastructure, and training that will be crucial for the American workforce moving forward. Finally, setting proper regulations on the development and use of AI will make the United States safer. It will be paramount to ensure that all AI technologies used in the country meet baseline safety standards and protocols set by agencies such as the National Institute of Standards and Technology, the Department of Defense, and the Department of Homeland Security as they relate to cybersecurity and the protection of the Internet of Things, the amplification of misinformation and disinformation online, and other potential threats to security operations.
Drafting and passing a legislative framework will be difficult in this Congress, but not necessarily impossible, as legislators on both sides of the aisle have indicated strong interest in, and often concerns about, the capabilities and enhancements AI presents. The Biden administration has provided a model Blueprint for an AI Bill of Rights that could serve as a good foundation for federal and state officials to build on. The AI Bill of Rights focuses on five key principles – Safe and Effective Systems, Algorithmic Discrimination Protections, Data Privacy, Notice and Explanation, and Human Alternatives, Consideration, and Fallback – each with its own corresponding technical companion.
U.S. legislators could also look abroad for models. The Council of the EU, which represents the governments of the member states, adopted a common position (or general approach) on the AI Act. Similar to the AI Bill of Rights, the proposed legislation aims to strike a balance between ensuring the rights of citizens and supporting continued growth and innovation in the sector. Both documents seek to reduce and prevent unsafe practices while allowing industry to succeed and governments to become more efficient. The Council’s version of the AI Act takes a risk assessment-based approach while highlighting specific prohibitions, establishes an AI Board for oversight, and sets out conformity assessments, a governance framework, and enforcement measures and penalties for violations. The European Parliament has its own separate legislative process, and its version of the AI Act is in committee. While the Council’s text takes a more nuanced, risk-based approach to governing the technology, the current Parliament draft contains many prohibitions on AI technology, including a blanket ban on “remote biometric systems.” The two bodies will enter negotiations known as a trilogue, similar to a conference committee in Congress, in hopes of reaching agreement on the legislation by the end of this year.
Both the AI Bill of Rights and the EU Council’s AI Act could serve as good starting points for comprehensive American legislation, as both documents seek to strike the challenging balance between protection and innovation. Interested parties will keep a keen eye on the legislative process in the EU, as the opposing approaches of sweeping bans versus risk mitigation will have to be reconciled during the trilogue. The resulting legislation could set a new standard for how nations address these combined concerns.
Even if legislative efforts stall at the federal level, AI regulation could present a rare opportunity for both parties to work with stakeholders at the state and local levels in a win for bipartisanship. Government and the tech industry can work together with community leaders and subject matter experts to shape AI regulation so that it doesn’t have a chilling effect on innovation or unforeseen consequences for positive uses of the technology. In the meantime, industry leaders should provide reasonable transparency about their companies’ actions in the absence of stronger regulation to help put government and societal concerns at ease.
Government officials must recognize that the AI industry has led the development of this technology and has long endeavored to self-regulate. I’ve seen this personally as a member of the industry in a Government Affairs and Public Policy position. Working with companies to find reasonable protections for privacy and other concerns is paramount to maintaining trust and safety among society, government, and industry, and such a collaborative effort ensures that the best possible practices are established and healthy, reasonable safeguards are put in place. Without such an effort, society runs the risk of creating policies that allow unconscious bias within algorithms, loopholes within otherwise acceptable business cases that enable abuse and misuse by third-party actors, and other negative, unforeseen consequences associated with AI technology. These outcomes would erode societal trust in the technology as well as in the institutions meant to serve and protect society.
All interested parties are working toward the same goal: the protection of the rights and safety of American citizens and allies. Clear frameworks exist as models for congressional legislation that can provide much-needed guidance and regulation for the tech industry as the world witnesses an evolutionary leap in AI technologies. 2023 could prove to be a major inflection point for the policy, law, and regulation that govern this industry. The U.S. government must also work with communities and industry leaders to draft protections that won’t have a chilling effect on innovation. This is a historic opportunity to shape the future of the world through this pivotal and powerful technology. The United States should do what it has done for generations when it comes to innovative thought: be a world leader in ensuring AI supports society by providing the most benefit while producing the least possible harm.