Artificial Intelligence will have a revolutionary impact on the business world in the coming years. It will automate complex processes, improve decision-making, and play a major role in new product and service development.
Until now, the legal challenges of AI technology have largely been limited to the GDPR, but this is expected to change soon with the EU’s AI Act. Organizations that use AI technology, whether in their core product or as a complementary tool, will have to comply with the new legislation. If you aren’t prepared for the new reality, you risk costly technical debt.
While consensus on the precise approach to artificial intelligence (AI) regulation in the U.S. remains elusive, early warning signs are prompting calls for more comprehensive federal regulation and legislation, mainly to protect privacy.
Some states have taken the lead by enacting their own laws, such as California’s deepfakes legislation, albeit for a limited duration. That legislation is intended to assess the risks and use of deepfakes in the state. Questions persist about its enforcement mechanisms and potential fines, and about whether they will mirror the strict measures seen in the EU AI Act.
AI will continue to stand out as a dominant and pervasive topic, particularly the impact of generative AI on enforcing privacy, securities, and antitrust laws. The legal landscape is further complicated by the number of copyright disputes making their way through the judicial system.
In a groundbreaking move, the EU reached agreement on the AI Act, ushering in new restrictions on AI use cases and mandating transparency from companies, including OpenAI, regarding data usage. The text is expected to be finalized by the end of the year, and pending final EU procedures, the Act will likely enter into force sometime in early 2024. In contrast, the United States has put forth plans and statements, such as the AI Bill of Rights released in October 2022 and the Biden Administration’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Despite these initiatives, concerns linger about enforceability and binding measures.
Another cause for concern lies in the unbalanced attention given to the largest corporations in AI regulation discussions, sidelining the startups and smaller companies making great strides in AI. Achieving a fair and comprehensive regulatory landscape necessitates including these smaller entities in the dialogue, acknowledging their shared scrutiny with larger counterparts.
Some are calling for 2024 to be designated the Year of AI Regulation in the U.S., emphasizing the need for comprehensive accountability by governing bodies, irrespective of a company’s size. Advocates stress the importance of upholding collective responsibility and moving beyond incremental progress.
While there have been huge strides in technology relating to generative artificial intelligence, the road to artificial general intelligence remains long, and its ability to displace human logic or governance systems is far down the line. Beyond speculation about how robots will take over the world or how computers will turn the earth into a prison for other life forms, we do not have much hard evidence that unregulated AI poses a significant risk to society.
AI is a technology still in its early stages of development. There is much we do not understand about how AI works, so attempts to regulate it could easily prove counterproductive, stifling innovation and slowing progress in a rapidly developing field. Any regulations issued are likely to be tailored toward existing practices and players, which makes little sense when it is not obvious which AI technologies will prove the most successful or which AI players will become dominant in the industry. Nonetheless, with the potential impact of AI on society coming into focus, expect 2024 to continue bringing incremental regulatory steps as new AI applications come to market.