In a recent proposed rule, the Department of Commerce has taken additional steps toward imposing significant regulations on infrastructure as a service (IaaS) providers, including providers engaged in training certain large AI models. The notice of proposed rulemaking (NPRM) was published by Commerce’s Bureau of Industry and Security (BIS) and, in particular, its newly created Office of Information and Communications Technology and Services (OICTS). The NPRM does not impose any immediate obligations on industry. Rather, it requests comments on the proposed rules, which Commerce will consider before issuing a final rule. Comments are due by April 29, 2024.

The NPRM is OICTS’s first step toward implementing the Biden Administration’s executive order on AI (discussed in Steptoe’s alert here) and further implements a prior executive order on IaaS providers (discussed in Steptoe’s alert here).

The NPRM would require providers of IaaS products to implement customer identification programs (CIPs) to verify the identity of foreign customers. The CIP requirement is similar, in many respects, to the CIPs that certain US financial institutions must implement as part of their anti-money laundering (AML) compliance programs. The NPRM would also authorize Commerce to identify foreign jurisdictions and persons posing a heightened threat to US national security and to prohibit, or impose conditions on, the provision of IaaS products to such jurisdictions or persons. IaaS providers would be obligated to identify and report to Commerce when a foreign person uses their products to train a large AI model with potential capabilities that could be used in malicious cyber-enabled activity. Furthermore, IaaS providers would be required to ensure their resellers comply with the same set of rules. Continue Reading: Commerce Proposes Significant New Regulations on AI Training and IaaS Providers

On April 21, 2021, the European Commission (EC) published its proposal for a Regulation laying down harmonized rules on artificial intelligence, the Artificial Intelligence Act (the Proposal). The EC aims to play a key role in the regulation of artificial intelligence (AI), not only by being the first to legislate in the area but also because the Proposal has elements of extraterritorial reach. The EC is proposing a legal framework consisting of rules developed on a risk-based approach that aim to ensure that AI systems are safe, ethical, transparent, and human-centered. The overarching goal, as outlined in the 2021 Coordinated Plan, is to increase trust in AI systems so as to ensure their uptake.

You can find an outline of the Proposal in our infographics available here. The key components are below. Continue Reading: The EU response to AI challenges – Another (risk-based) Regulation

On September 21, Steptoe associate Peter Jeydel commented on recent U.S. export control developments relating to facial recognition.

To listen to Pete’s comments, please press play above. To listen to the entire episode, please visit Steptoe’s Cyberlaw Podcast on the Steptoe Cyberblog.