AI in 2024: What Comes Next?
Transcript
Craig Smith
Welcome, everybody, to 2024. I'm Craig Smith, a partner in Wiley Rein’s Government Contracts Practice Group. With me today are my partners Duane Pozza, Kat Scott, and Joan Stewart. It's early in 2024, as I said, and this is a good time to check in and talk about artificial intelligence. If we went back a year, AI had probably just burst into everyone's consciousness around this time, but now that we've had a year to start getting our arms around what this thing is and how it affects my company, let's take some of those questions we might have been asking and refine them a little bit as we look ahead to 2024.
Kat, Duane, and Joan have really been offering a lot of expertise today, and I'm excited to spend some time asking them about it. We've really distilled it down to five questions that we're going to talk about. So, Duane, Kat, and Joan, thanks so much for joining us today.
I want to start with Duane, and for you, the question that may seem kind of basic, but I think is a little more refined at this point, is: what is AI? People probably understand conceptually at this point what it may be, but the definitions, you know, require some nuance, and I think you had some thoughts about what we should be paying attention to this year in terms of that question of what is AI.
Duane Pozza
Thanks, Craig. So I think that the question of what is AI is not just a question in the metaphysical sense. There are plenty of academic debates about the exact definition of AI and what qualifies at the margins. Instead, I think of it as a scoping question: which of the technologies a company is deploying can be characterized as AI under any of the laws or regulations that might apply to their use? Overall, the definition of AI varies across different laws and different frameworks, but I think there are at least a couple of definitions that are driving policy discussions and are good places to start.
One of those is found in the Biden Administration's recent executive order on AI, which was released on October 30th of last year. That definition is based on a federal statute, and it's a “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.” It goes on to say that “artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.”
Now, a second place to look is proposed regulations from the California Privacy Protection Agency, or the CPPA. The CPPA is currently in the process of making rules around automated decision-making technology, which includes, but is not limited to, AI. The CPPA is defining a set of automated decision-making technology that is broader than AI, but it's still important for businesses to pay attention to, because the CPPA plans to issue rules around how this technology is used. Those draft regulations, released in late 2023, define automated decision-making technology as:
Any system, software, or process, including one derived from machine learning, statistics, or other data processing or artificial intelligence, that processes personal information and uses computation as whole or part of the system to make or execute a decision or facilitate human decision-making. Automated decision-making technology includes profiling, and profiling in turn is any form of automated processing of personal information to evaluate certain personal aspects relating to a natural person, and in particular to analyze or predict aspects concerning that natural person's performance at work, economic situation, health, personal preferences, interests, reliability, behavior, or location or movements.
Now, one caveat on California: the CPPA will put this out for public comment in early 2024, so this is not final. But putting those together, these definitions are fairly broad, and they both include an element of technology that makes recommendations or decisions that will have an impact on real-world environments. That can include AI that's integrated into other tools. For example, a cybersecurity tool can use AI technologies, or AI-powered analytics tools can be used for planning product distribution, just as a couple of examples. It also includes generative AI, which is generally used to describe AI models that can generate content, like text, images, or videos. Generative AI, as we know, has exploded in popularity with individual users, and it's driving much of the public policy discussion around AI, but it's not the only example of AI that's included in these definitions. So, the bottom line is that companies need to review and draw from the definitions most relevant to the organization, including which of these frameworks, like California's, might apply to them, and then identify a definition and apply it consistently in evaluating their internal AI usage.
Craig Smith
I think that's really helpful, Duane, and I think you made a good point at the end that generative AI may have really brought these topics and concepts to the forefront for a lot of people, but there's a whole world of software and other devices beyond that which might be considered to have AI, or to be AI, that you need to be thinking about in setting up the rubric for your company. And that rubric, Joan, I want to turn to you, really focuses on the question of: is my organization using AI? Now that I have a framework for figuring out what it is, how do I determine whether we're using it at all today?
Joan Stewart
Yeah, exactly. So a lot of organizations right now are asking themselves this question, and honestly, if they aren't asking this question, they should be. As Duane discussed, there are many different uses of AI that are making their way into the workplace and becoming more common. So first, if your organization has not asked itself this question and started identifying whether and how different lines of business and departments are using AI, now is really the time to start. And it can sound daunting, right, this questioning of “are we using it, and what are the different ways that we could be using it,” but it really doesn't have to be that difficult. So, let's talk about some practical tips to start this conversation.
First, leverage the tools you already have. So, for example, as the state privacy laws started exploding a couple of years ago, we really started encouraging organizations to begin tracking their use of personal information. Those of you who have worked with our team know that we are big advocates of using a data map to track the life cycle of how your organization uses PI. So, consider conducting a similar exercise for the use of AI. This would involve going to your business groups and asking a series of very pointed questions about the tools they're using and the features of those tools.
The questions that are going to be asked are slightly different from those asked during the PI mapping exercise, but the process is largely the same: it's encouraging your business divisions to really be thoughtful about, and to document, their uses of technology and the processes those technologies engage in. When you conduct this exercise, similar to when we conducted it for personal information, it's really important to use common definitions that are relevant to your organization. Duane explained that there are different ways to define key terms within the AI world, but you need to determine which definitions make sense for your organization and, even more importantly, make sure that those definitions are clearly explained to the employees who will be engaged in that mapping exercise.
Once you determine whether your organization is using AI, there are some additional important steps that you need to take to make sure the use is documented and managed responsibly. First, that map or inventory that we just discussed must be a living inventory. There's absolutely no value in an inventory that is not kept up to date or that does not grow with your business, and I think, as we've all seen over the past year, just as we get a handle on what AI tools are out there, by the time we wake up the next morning there are going to be new and different tools, so this needs to be an ongoing, living process. Second, create documented procedures. Once you get that baseline, make sure that your business understands that if they want to engage in new uses, there needs to be a process for that, and those new uses need to be pulled into your inventory and documented. There needs to be a prior approval process, and that process needs to be carefully managed within your organization. And finally, and I really think this is the important part: this isn't just your legal department making these decisions. You need to have clear conversations with your business divisions about AI, how they want to use it, and how it can be used ethically within the company. So, it's really important to understand both what people are doing now with AI and what they want to do with AI. This discussion, just like your data map, is not a one-and-done. It needs to be a constant, ongoing process. AI uses are changing day by day, and your folks are going to want to be creative. They're going to want to use these new technologies, and so you need to constantly be having these conversations to make sure that you know how AI is being used and you're tracking that use within your company.
Craig Smith
It sounds like that's a really good example of where, whether you're legal, compliance, or otherwise, taking on some of this responsibility, being really business-centric and making this a dialogue, not just throwing policies or rules over the fence, is really going to make your policies, and your awareness, a lot more effective. Is that kind of the sense I'm getting?
Joan Stewart
Yeah, exactly. We've definitely found with companies that once you put that policy in place, it embeds in people's minds. They remember the process of going through that mapping exercise, and when they hear about some great new technology, they're like, wait, I remember I'm supposed to ask questions about this; I'm supposed to let somebody know that we want to use this so we can make sure that we're updating these policies. It's just a really great way to make sure people are staying engaged with it and that new uses are being passed through a review process.
Craig Smith
And so there's that collaborative part of the process, but there's also, and Kat, I want to ask you about this, the part of bringing the rules of the road to the table, helping people understand what they are, and bringing that aspect of the analysis. So if I'm trying to be that person and fill that role, how is AI going to be regulated for an organization, or what are the questions to ask along those lines?
Kat Scott
Yeah, that's a great question, Craig. So, I think we all have the tendency to discuss AI like it is entirely unique or entirely new, and while it is really exciting and rapidly developing, in many ways, under the legal and regulatory frameworks that already exist, it should be and is treated like any other technology. That's to say there are many laws that are technology agnostic. So if your organization is using AI, you will want to ensure that the use of AI complies with generally applicable laws, just like any other use of technology or data practices, and for that kind of activity, hopefully you can leverage the existing work and systems your organization already has in place to help guide compliance.
So, for example, there are existing laws that are generally applicable that prevent unfair and deceptive acts or practices. Those apply equally to AI as to any other technology. There are also existing anti-discrimination laws that prohibit discrimination on the basis of protected classes, and there are laws in this space that are well established in certain areas, like employment or credit, for example. At the same time, regulators right now are looking to see whether those principles can apply more broadly. So these laws are technology agnostic and would apply, as was the case with the unfair and deceptive acts or practices laws, to AI in the same way that they apply to other technology and data practices. And then, of course, there are the generally applicable privacy laws that govern AI. Joan, did you want to discuss those briefly?
Joan Stewart
Yeah, so for example, most state privacy laws impose additional compliance obligations on a business that engages in profiling, which again can be defined slightly differently depending on the law, but generally is any form of automated processing of personal information to evaluate, analyze, or predict information about an individual. So depending on your use of AI within your organization, you could fall under this definition of profiling or these other definitions in privacy laws that are really looking at automated processing. So again, realize that AI regulation is not siloed; it really crosses over into a lot of other laws that are already in play, and you need to make sure that you're tracking that for your compliance purposes.
Kat Scott
That's exactly right, and at the same time, there are AI-specific laws that folks need to be aware of and keep up with, in addition to these generally applicable laws. The list of examples of these AI-specific laws is growing. Right now we have some examples in California that I think are good ones. In 2019, California adopted a bot disclosure law. That law already requires businesses to notify customers if they're interacting online with a bot instead of a human in certain circumstances, like incentivizing the purchase or sale of goods or services in a commercial transaction or influencing a vote in an election.
California also has a deepfake law on the books. That one prohibits maliciously creating or distributing campaign material with candidate photos superimposed without a clear disclosure. There are others. So the key takeaway, really, to your question, Craig, of how somebody within an organization keeps up with these laws that apply to AI: the answer is that there are two buckets they need to be aware of, the generally applicable bucket of laws that likely already apply to an organization's use of AI, and then these AI-specific laws, which typically are focused on higher-risk applications but are certainly growing with each month, it seems.
Craig Smith
Well, you say growing with each month, and Kat, I want to stay with you for a moment. If we were trying to make predictions about what seems likely to change looking ahead into 2024, where should companies be making some plans or getting ready?
Kat Scott
Yeah, what seems likely to change is a lot. I think the whole landscape is really experiencing a seismic shift right now, and so I think we can walk you through a handful of areas that we are watching for 2024, and that we'd recommend folks in this space watch as well. So the first, and the biggest news regarding U.S. federal AI policy from this past year, was the release of President Biden's executive order on AI. Duane mentioned that earlier in discussing the definition of AI, but overall, this EO was released in October. It is highly complex and substantive. It sets into motion a massive volume of work streams for federal agencies, both in terms of how they develop and deploy AI and in terms of how they may regulate the private sector in doing so. I think this is, in particular, an interesting one for government contractors, because there are a lot of nuanced layers to this for an entity that is a government contractor. So, for one thing, I think the EO presents government contractors with a big opportunity. Companies capable of developing and deploying AI, even those that don't typically view themselves as government contractors, are likely to see a flurry of opportunities across the federal government with respect to AI, as the government appears motivated to utilize nontraditional procurement methods, and nontraditional technologies for that matter.
On the other hand, I think government contractors are also likely to see new rules and restrictions flowing from the federal government, especially with respect to AI-based goods and services that they sell to the government. So, there are a lot of unique and special issues for government contractors tucked into that EO.
So that was the federal regulatory level. At the state level, I believe Joan already touched on this, but I think the biggest development from last year that will continue to be an area to watch in 2024 is California's AI rules. California has taken the lead on novel regulation, as it has historically done in other areas, such as privacy. So the state's comprehensive privacy law, the CCPA, gives the new California Privacy Protection Agency, which was mentioned earlier, the task of developing new rules to govern a business's use of automated decision-making technology. California right now is only in the preliminary phases of that rulemaking, but it certainly is a really hot area to watch in 2024, and we expect some onerous rules coming out of that process.
Other states have also adopted AI-specific provisions in their privacy laws, which we also discussed. I think the best example there is Colorado, which has both the legislation and detailed rules backing it up with respect to profiling, and of course other states have already adopted or are contemplating AI-specific laws as well. The beginning of the year, from January through probably the end of March, is typically incredibly busy when it comes to state legislation, and so I think companies should definitely be watching and monitoring state activity right now.
Craig Smith
Well, Kat, you mentioned legislation from Colorado, and I think that naturally leads one to think about the federal level, so I want to ask Duane: can we expect anything from Congress in the year ahead?
Duane Pozza
Well, that's a good question. I think in 2023 we saw a lot of discussion in Congress about potential proposals and an attempt at finding some sort of bipartisan consensus on potential approaches. We'll be looking at that more in the coming year, and it's hard to predict what will happen, but there are a number of proposals that are already on the table, with more to come.
One in particular I want to highlight, because it does have some bipartisan buy-in, is a proposal introduced by Senator John Thune on the Republican side and Senator Amy Klobuchar on the Democratic side that would impose certain mandates on AI technology that is classified as having a certain level of risk. So, among other things, the bill would define “high-impact” and “critical-impact” AI systems, and then it would require deployers of high-impact AI systems to submit transparency reports to the Commerce Department; that is, submitting reports to the government about how the systems are being used. For critical-impact AI, it would require those systems to be subject to a certification framework in which certain organizations that use this critical-impact AI would self-certify compliance with standards that would be prescribed by the Commerce Department.
So, if this kind of law were passed, it would impose fairly significant obligations on certain AI uses above and beyond what we've been discussing so far. Again, it's only been introduced, so I'm not making any predictions about whether or not it will pass, but it's certainly an area to watch, and I think it also reflects this general sense that what regulators and policymakers classify as the “highest risk” uses of AI are the ones that are going to be in the spotlight as they start looking at legislation.
Craig Smith
I get that. And then swinging over, you know, as with soccer, you've got to pay attention to Europe. So, Joan, can we expect anything from across the Atlantic?
Joan Stewart
There have been some really exciting developments in Europe just at the very end of 2023. The European Parliament and Council reached a provisional agreement on the final text of the EU's AI Act in early December, after a marathon, several-day round of negotiations. So, the AI Act is now expected to be finalized in early 2024, and it will be one of the first really comprehensive laws addressing the use of AI and, more importantly, specifically banning certain uses of AI. The final text of that act is expected to be released late in the first quarter of 2024. So for businesses that engage with individuals in the EU and use AI, it's really going to be critical that they're paying attention when we see that final text, and I'll just flag for people that it does have tiered deadlines, with the first deadline, which is really to stop prohibited uses of AI, coming six months after the law takes effect. So it's going to be a pretty fast turnaround, and of course, as with most of the laws that we see out of the EU, there are some really significant compliance obligations built into the law. So, I really expect that this AI Act is going to drive a lot of the compliance obligations that we see businesses looking at in 2024.
Craig Smith
You weren't kidding about the marathon. I remember reading the news reports about the negotiation session, and I think you described it quite aptly. And even with the compliance obligations you mentioned, Joan, I mean, there's still a recognition that AI is here; it's going to continue to grow in use and sophistication. It's not saying no, and I think that's the same posture people want to take within their own organizations. No one's saying no to this; I think it's not practical to just wall off any organization from the AI of the future. But you still want to have the tools in place to take advantage of the benefits, competitively, internally, and in other ways, while managing the risks, and that can feel like a lot. So maybe, Duane, when someone comes to you with that question, how do I balance the risks and benefits, what are the approaches you'd suggest?
Duane Pozza
Yeah, that's a great question, Craig, and fortunately, the answer is that there are tools that have been developed that companies can use to manage risks and can leverage to develop their own internal policies and procedures for dealing with AI as it's deployed. One tool that we like to highlight is the AI Risk Management Framework developed by NIST, which is part of the Department of Commerce. NIST ran a multi-stakeholder, collaborative process to develop this risk management framework, and version one was released in early 2023, along with what they call a playbook, which provides additional detail on how the RMF, short for Risk Management Framework, can be implemented. It's not meant as a one-size-fits-all approach; in fact, it's the opposite. The RMF is meant to be adapted to an organization's needs based on its level of risk, and it's meant to be flexible. So beyond the RMF, from an organizational standpoint, there are a number of things any organization can do or consider putting in place in order to manage AI.
First, as we discussed earlier, having a general AI policy is pretty important. This could include specific rules around the use of generative AI tools, particularly given the interest that many organizations are seeing from their employees and business units in using these tools, but I think those also wrap up into a broader AI policy. The benefit of this is that it helps companies get out in front of how their business units or their employees within the organization are using AI or thinking about using AI, and some organizations also incorporate, or separately adopt, an AI ethics code, depending on how it's structured.
Second, we often see an AI governance board or committee, or a similar structure, that can help review and manage AI risks. One key benefit of having this sort of board or committee structure is that it can be multidisciplinary and involve a range of stakeholders, which would include, I think, at a minimum, internal business leads, legal and compliance personnel, and technologists. The point here is to get diverse perspectives from across the organization about how AI is being deployed and managed, in order to make sure that the organization's approach takes account of all those different interests.
Third is to consider having an individual with a dedicated or assigned role to oversee this AI risk management. This could be similar to a chief privacy officer role; as AI development has expanded, we've seen this function often become part of the chief privacy officer's role, but it is important to note that AI risk can go beyond privacy, and companies should really think through the best way to structure this within their own organization. The critical point is that you have at least one single point of contact who can oversee the implementation of the risk management and be the person that folks can go to with any issues and for policy planning.
And finally, I would just emphasize thinking about implementing training on responsible use of AI and compliance with the company policy, to make sure that employees understand how to use AI within the organization. I think this kind of training is now much more commonplace, if not mandated, for privacy and cybersecurity risk, and you can think about how AI fits into that kind of framework, where employees are regularly given up-to-date trainings on what the current policy is and any developments, so that they can be sure they're using AI in a way that's consistent with how the company would like it to be used.
Craig Smith
And when I think of that training, Duane, I mean, it sounds like part of the challenge there will be making that training relevant and tangible for people at various levels of the organization. I mean, thinking about what we've talked about today, AI is more than just typing words into a box and getting a picture of cats playing poker back. There's a lot more to it, and making sure everyone understands it in ways that bear on their job responsibilities sounds like it'll be an important part of making sure that training achieves the purpose of that risk management framework.
Duane Pozza
Yeah, that's an important point, and I think that's pretty critical in designing and rolling out training. Somebody within an organization who's doing marketing might be more likely to use generative AI, and somebody doing work on the technical infrastructure might be using AI for all kinds of other things that are not generative AI uses. So there's always a balance in rolling out training to make sure it has broad applicability but is also sufficiently tailored to what a company or organization is actually doing in the real world with AI.
Craig Smith
I think that also underscores, as all three of you have pointed out, that there's a lot that's going to change this year and a lot to do to stay on top of this. We're certainly staying on top of this area as well and look forward to keeping everyone updated and providing the support that people need to make sure that, with AI, those benefits really do materialize at manageable risk.
So, Duane, Kat, Joan, thank you so much; really enjoyed the conversation and looking forward to 2024.
Thanks everyone.
Kat Scott
Thanks so much for having us.