New Artificial Intelligence Regulations Will Require New Enterprise Controls, Deeper Software Screening

The frontier is closing on artificial intelligence: Where AI once represented the “Wild West” of software development, recent reports of disinformation and discrimination on AI-powered platforms have driven regulators to take the industry more seriously. This shift toward regulating a once-untouched hi-tech field signals to tech companies that it’s time to focus on compliance.

The European Union recently proposed a series of regulations aimed at mitigating AI-related harm and granting recourse to victims. To understand how these laws will affect your organization, it’s necessary to know the specifics of each component:

  • AI Act: The AI Act requires organizations to implement additional controls for AI implementations that pose the most serious risks to their users, including systems that grade student work, make hiring decisions, or assist with legal processes. The law will also ban certain “unacceptable” uses of AI, such as scoring people based on their “trustworthiness” or exploiting vulnerable groups.
  • AI Liability Directive: This bill grants users harmed by AI systems the right to sue developers for damages. Where the AI Act defines which practices are unacceptable, the Liability Directive gives users recourse when those practices cause them harm.

These bills signal a regulatory shift for artificial intelligence and other hi-tech industries. AI-focused organizations and those with AI vendors can no longer take for granted that the “black box” of machine learning can stay opaque forever. Here are three takeaways for international businesses facing this potential regulatory burden for the first time:

1. These bills, once enacted, will directly impact any organization that implements or develops AI systems in the EU, as well as any organization that sends AI outputs into the region.

While organizations based in the EU will be beholden to the AI Act, its implications reach much further: any organization that sends AI outputs into the EU falls under its jurisdiction. Thus, the bill, which is expected to take effect by 2023, gives international AI organizations an ultimatum: improve your standards or find a new market.

It’s clear that by 2023, a significant portion of international AI business will be conducted according to EU standards: organizations that fortify their controls to meet the rising regulatory burden will carry those standards into the rest of the world market, and risk professionals will begin to see the benefits of more responsible AI implementations whether or not compliance is mandatory in their region.

2. The Biden-Harris Administration is taking cues from the EU and moving toward a more regulated AI space

The White House recently released a “Blueprint for an AI Bill of Rights,” a document which outlines the five core protections to which Americans should be entitled in the AI space. These protections, per the White House website, include:

  • Safe and Effective Systems: You should be protected from unsafe or ineffective systems.
  • Algorithmic Discrimination Protections: You should not face discrimination by algorithms and systems should be used and designed in an equitable way.
  • Data Privacy: You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used.
  • Notice and Explanation: You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.
  • Alternative Options: You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.

While the EU is leading the charge on regulating artificial intelligence, the United States has begun to set the stage for its own regulatory project. The focus is the same: prevent discriminatory practices, encourage privacy, and ensure that the public knows what they’re signing up for. The days when American companies could make money by selling user data to be processed by an opaque third party are coming to an end: not only does your organization need to know what’s happening to user data, but so do your customers.

3. If your organization employs AI technology via a third party, it’s time to evaluate that vendor’s practices

If your organization gives user data to an AI vendor, it is imperative that you know how that data will be processed and what controls the vendor has in place. As regulators in Europe and the United States race to chart the boundaries of acceptable and unacceptable AI practices, any organization whose data is processed by a third party’s AI should work to build an understanding of the systems at play and the risks involved. Could your vendor’s systems compromise user data? Is there any opportunity for discrimination in the function being carried out? Do your clients know what is being done to their data? By implementing controls that account for these questions, you can begin to adapt to the new market of responsible AI.
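One way to make this evaluation concrete is to encode the questions above as a vendor-assessment checklist that your risk team can run against each vendor. The sketch below is illustrative only: the class name, fields, and the sample vendor are assumptions, not a standard compliance framework.

```python
from dataclasses import dataclass

@dataclass
class VendorAssessment:
    """Hypothetical checklist built from the control questions above."""
    vendor: str
    protects_user_data: bool           # Could the vendor's systems compromise user data?
    screened_for_discrimination: bool  # Any opportunity for discrimination in the function?
    discloses_processing_to_clients: bool  # Do clients know what is done to their data?

    def open_risks(self) -> list:
        """Return the control questions this vendor has not yet satisfied."""
        checks = {
            "data protection": self.protects_user_data,
            "discrimination screening": self.screened_for_discrimination,
            "client disclosure": self.discloses_processing_to_clients,
        }
        return [name for name, passed in checks.items() if not passed]

# "ExampleAI" is a hypothetical vendor used for illustration.
assessment = VendorAssessment(
    vendor="ExampleAI",
    protects_user_data=True,
    screened_for_discrimination=False,
    discloses_processing_to_clients=True,
)
print(assessment.open_risks())  # -> ['discrimination screening']
```

A real program would draw these questions from your organization’s own third-party risk framework; the point is simply that the article’s questions translate directly into trackable controls.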

As AI becomes commonplace, your organization will need to know what happens “under the hood” of the technologies you implement. You should know what data is processed by which systems and how it is processed, which organizations have access to your data, and how your users’ data privacy is ensured at each step. Your organization will need to implement new controls to maintain compliance in the face of new regulations on hi-tech solutions—but implementing those controls correctly can have real benefits for your organization.
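The record-keeping described above can start as something as simple as a data-processing inventory: one record per system, capturing what data it processes, how, who has access, and which privacy control applies. The sketch below is a minimal illustration; the system names, vendors, and field names are assumptions.

```python
# Hypothetical data-processing inventory: one record per AI-adjacent system.
inventory = [
    {
        "system": "recommendation-engine",              # hypothetical internal system
        "data_processed": ["purchase history", "browsing events"],
        "processing": "model training and inference",
        "third_party_access": ["ExampleAI"],            # hypothetical vendor
        "privacy_control": "pseudonymized before export",
    },
    {
        "system": "support-chatbot",
        "data_processed": ["chat transcripts"],
        "processing": "inference only",
        "third_party_access": [],
        "privacy_control": "retained 30 days, then deleted",
    },
]

def systems_sharing_data(records):
    """List the systems that expose user data to any third party."""
    return [rec["system"] for rec in records if rec["third_party_access"]]

print(systems_sharing_data(inventory))  # -> ['recommendation-engine']
```

Even a spreadsheet with these columns answers the questions regulators are beginning to ask: which systems touch user data, which vendors see it, and what protections apply at each step.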