Implanting Legal Reasoning Into AI Could Smartly Attain Human-Value Alignment Says AI Ethics And AI Law

In today’s column, I am going to weave together a myriad of seemingly disparate AI-related topics into one nicely woven fabric.

Are you ready?

Imagine that you are using an AI-powered app that is aiding you while undertaking some kind of significant task. Perhaps the matter is a financial one or could be health-related. The essence is that you are depending upon the AI to do the right thing and perform in a presumed safe and sound manner.

Suppose the AI veers into unethical territory.

You might not realize that the AI is doing so.

For example, the AI could be relying upon potentially hidden discriminatory factors such as race or gender, though you might not have any viable means of discerning the untoward usage. There you are, all alone, getting the short end of the stick via an AI that has either been devised from the get-go in a problematic way or has managed to steer into dicey and borderline ethical endangerment (I’ll be saying more about this in a moment).

What can you do or what can be done about AI that opts to go down an unethical path?

Besides trying to construct the AI beforehand so that it won’t take this kind of insidious action, I’ve also previously detailed that there is a rising interest in embedding an AI ethics-checking component into the burgeoning morass of otherwise Wild West anything-goes AI systems being tossed into the marketplace. The idea is that to try and prevent an AI-infused app from going afield of ethical dimensions, we could use additional AI to do a check-and-balance. This added AI might be outside of the targeted AI app or could be a component embedded or implanted directly into the AI that we want to double-check.

As I’ve stated before, see the link here: “A recently emerging trend consists of trying to build ethical guardrails into the AI that will catch as the rest of an AI system begins to go beyond preset ethical boundaries. In a sense, the goal is to use the AI itself to keep itself from going ethically awry. You could say that we are aiming to have AI heal itself” (Lance Eliot, “Crafting Ethical AI That Monitors Unethical AI And Tries To Deter Bad AI From Acting Up”, Forbes, March 28, 2022).

You might also find of interest my book on AI guardian bots, sometimes referred to as guardian angels, which covers the technical underpinnings of these state-of-the-art AI-within-AI embedded double-checkers, see the link here.

The bottom line is that your bacon might thankfully be saved by the use of an embedded AI double-checking ethics-gauging element that has been devised and implanted into an AI app that you are using. But will that be enough of a guardian to really make sure that the AI doesn’t entirely stiff you and venture into even harsher harmful ground?

You see, the AI app could perform ostensibly illegal acts.

It is one thing to have AI that goes into a gray area of what we consider to be ethical or unethical behaviors. An equally disconcerting and likely worse concern entails AI that just leaps the shark, as it were, and descends into the shoddy darkness of outright unlawful acts.

Unlawful AI is bad. Allowing unlawful AI to go unattended is bad. Some legal scholars are openly worried that the advent and pervasiveness of AI are going to gradually and terrifyingly undercut our semblance of the rule of law, see my analysis at the link here.

Wait a second, you might be saying.

You might be tempted to think that AI developers would never program their AI to go against the law. Unimaginable. Only evil villains would do so (and, by the way, keep in mind that there are those that are intentionally devising and using AI for evil purposes, a growing area of interest to criminals and others wishing to use AI for nefarious activities).

Sorry, but it is wishful thinking to assume that all non-evil AI developers are going to strictly make sure their AI is fully law-abiding. It could be that the AI self-adjusts and wanders into illegal activity. Of course, there is also the potential that the AI developers either wanted the AI to act illegally or that they weren’t aware of what constituted illegal versus legal acts when they were crafting the AI (yes, this is quite possible, namely that a heads-down all-tech AI team can be ignorant of the legal shenanigans of their AI, which isn’t excusable and yet happens with alarming frequency).

What can be done about this?

Once again, besides trying to ensure that the AI out-the-gate is ironclad and lawful, an additional approach gaining steam involves embedding or implanting an AI component that does legal double-checking for the rest of the AI app. Sitting quietly and often unheralded, this added AI observes the rest of the AI to try and discern whether the AI is going to go rogue or at least stride past the limits of legally or regulatorily imposed restrictions.

We now have two kinds of AI double-checking that are potentially embedded into an AI app (a rough illustrative sketch follows the list):

  • AI Ethics double-checker: In real-time, this component or AI add-in assesses the rest of the AI for ethical and unethical behaviors that the AI exhibits
  • AI Legal double-checker: In real-time, this component or AI add-in assesses the rest of the AI for assurance of staying within legal keystones and for the catching of emerging illegal activities by the AI
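
To make the notion a bit more concrete, here is a rough illustrative sketch in Python of how such add-in double-checkers might gate the actions proposed by the rest of an AI app. Every name and check shown (EthicsChecker, LegalChecker, the toy rules, and so on) is a hypothetical placeholder for illustration, not an established design or API.

```python
# Rough illustrative sketch only; all names and rules are hypothetical.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Verdict:
    allowed: bool
    reason: str

class EthicsChecker:
    """Add-in that assesses proposed actions for ethical red flags."""
    def review(self, action: dict) -> Verdict:
        # Example check: flag reliance on protected attributes such as race or gender.
        if any(a in action.get("features_used", []) for a in ("race", "gender")):
            return Verdict(False, "action relies on a protected attribute")
        return Verdict(True, "no ethical red flags detected")

class LegalChecker:
    """Add-in that assesses proposed actions against encoded legal constraints."""
    def review(self, action: dict) -> Verdict:
        # Example check: block action types encoded as unlawful for this app.
        if action.get("type") in {"undisclosed_fee", "unlicensed_advice"}:
            return Verdict(False, "action appears to violate an encoded legal constraint")
        return Verdict(True, "within the encoded legal constraints")

def run_with_double_checkers(propose_action: Callable[[], dict],
                             checkers: List[object]) -> Optional[dict]:
    """Let the main AI propose an action, then apply each double-checker in turn."""
    action = propose_action()
    for checker in checkers:
        verdict = checker.review(action)
        if not verdict.allowed:
            print(f"Blocked by {type(checker).__name__}: {verdict.reason}")
            return None
    return action
```

In this sketch, an action proceeds only if both add-ins sign off; in any real system the checking logic would be vastly more sophisticated than these toy rules.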

To clarify, these double-checkers are relatively new conceptions and as such the AI you are using today might be in any of these present conditions:

  • AI that has no double-checkers included at all
  • AI that has an AI Ethics double-checker included but no other double-checkers
  • AI that has an AI Legal double-checker included but no other double-checkers
  • AI that has both an AI Ethics double-checker and an AI Legal double-checker
  • Other

There are some markedly tricky aspects of having both the AI Ethics double-checker and the AI Legal double-checker working side-by-side in an AI app as kindred brother and sister. This is a type of dualism that can be harder to coordinate than you might assume (I think we all know that brothers and sisters can have the tightest of bonds, though they can also fight like the dickens from time to time and have vigorously opposing views).

I’ve discussed at length this type of arduous dualism: “A neglected dualism is occurring in AI for Social Good involving the lack of encompassing both the role of artificial moral agency and artificial legal reasoning in advanced AI systems. Efforts by AI researchers and AI developers have tended to focus on how to craft and embed artificial moral agents to guide moral decision-making when an AI system is operating in the field but have not also focused on and coupled the use of artificial legal reasoning capabilities, which is equally necessary for robust moral and legal outcomes” (Lance Eliot, “The Neglected Dualism Of Artificial Moral Agency And Artificial Legal Reasoning In AI For Social Good,” Harvard University CRCS Annual Conference 2020, Harvard Center for Research on Computation and Society).

If you’d like to noodle on why there might be tension between an AI Ethics double-checker and an AI Legal double-checker, you might find this notable quote of mind-bending conceptual worthiness: “The law may permit some particular act, even though that act is immoral; and the law may forbid an act, even though that act is morally permissible, or even morally required” (Shelly Kagan, The Limits of Morality, 1989).

Let’s slightly shift our focus and see how these double-checkers dovetail into another highly scrutinized AI topic, namely Responsible AI or a concerted consideration of the alignment of human values and AI.

The general notion is that we want AI that abides by proper and desirable human values. Some refer to this as Responsible AI. Others similarly discuss Accountable AI, Trustworthy AI, and AI Alignment, all of which touch upon the same cornerstone principle. For my discussion on these important issues, see the link here and the link here, just to name a few.

How can we get AI to align with human values?

As earlier suggested, we would hope that AI developers would be cognizant of developing AI that attains Responsible AI adherence. Regrettably, they might not, as per the reasons earlier elucidated. In addition, they might try to do so, and yet nonetheless the AI ends up self-adjusting beyond the salient realm of ethical behaviors or possibly into unlawful waters.

Alright, we need to then consider our handy dandy double-checkers as a means to shore up these risks and exposures. The use of a well-devised AI Ethics double-checker can materially aid in aligning AI with human values. Similarly, the use of a well-devised AI Legal double-checker can substantively aid in aligning AI with human values.

Thus, a crucial and not yet well-known means of seeking to arrive at Responsible AI, Trustworthy AI, Accountable AI, AI Alignment, etc., would involve the use of AI double-checkers such as an AI Ethics double-checker and an AI Legal double-checker that would work tirelessly as a double check on the AI that they are embedded within.

In this discussion, I’d like to go into a bit further detail about the nature and constructs of AI Legal double-checkers that might be embedded into AI. To do so, it might be helpful to share with you some additional background on the overall topic of AI & Law.

For a no-nonsense examination of how AI and the law are intermixing with each other, see my discussion at the link here. In my discerning look at the AI & Law coupling, I provide this straightforward conception of two major ways to interrelate AI and the law:

  • (1) Law-applied-to-AI: The formulation, enactment, and enforcement of laws as applied to the regulating or governance of Artificial Intelligence in our society
  • (2) AI-applied-to-Law: Artificial Intelligence technology devised and applied into the law including AI-based Legal Reasoning (AILR) infused into LegalTech high-tech apps to autonomously or semi-autonomously perform lawyering tasks

The first listed viewpoint consists of considering how existing and new laws are going to govern AI. The second listed perspective has to do with applying AI to the law.

This latter category usually involves employing AI-based Legal Reasoning (AILR) in various online tools used by lawyers. For example, AI might be part of a Contract Life Cycle Management (CLM) package that aids attorneys by identifying contractual language that will be useful for drafting new contracts or might detect contracts that have legally wishy-washy language that allows for mishaps or legal loopholes (for my look at so-called “law smells” that can be discerned by AI, see the link here).
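
As a grossly simplified illustration of the kind of language-spotting involved, consider the small Python sketch below. The phrase list and approach are invented purely for illustration and bear no resemblance to the far richer NLP used in actual CLM tools.

```python
# Grossly simplified sketch of spotting "wishy-washy" contract language.
# The phrase list and scoring are invented for illustration only.
import re

VAGUE_PHRASES = [
    r"best efforts", r"reasonable time", r"as appropriate",
    r"from time to time", r"commercially reasonable",
]

def flag_vague_clauses(contract_text: str) -> list:
    """Return clauses containing phrases that often invite legal ambiguity."""
    flagged = []
    for clause in contract_text.split("."):
        if any(re.search(p, clause, flags=re.IGNORECASE) for p in VAGUE_PHRASES):
            flagged.append(clause.strip())
    return flagged

sample = ("Vendor shall use best efforts to deliver the goods. "
          "Payment is due within thirty days of invoice.")
print(flag_vague_clauses(sample))
```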

We will inevitably have AI applied to the law that becomes available for use by the general public and that does not require an attorney to be in the loop. Right now, as a result of various restrictions, including the UPL (Unauthorized Practice of Law), making available AI-based legal advising apps is a thorny and controversial matter, see my discussion at the link here.

I brought up this introduction about AI & Law to point out that another instrumental use of AI applied to the law would be to create AI Legal double-checkers.

Yes, the same kind of technological prowess involved in applying AI to the law can serve double duty by having the AI act as an embedded or implanted AI Legal double-checker. The AI Legal double-checker is a component that has to be versed in legal facets. When the rest of the AI app is performing various actions, the AI Legal double-checker is gauging whether the AI app is doing so legally and within lawful constraints.

An AI Legal double-checker component does not necessarily need to cover the full gamut of everything there is to know about the law. Depending upon the nature of the AI app as to the purpose and actions of the AI overall, the AI Legal double-checker can be much narrower in terms of the legal expertise that it contains.

I’ve identified a useful framework for showcasing how AI in the legal domain ranges across a series of autonomous capacities, known as Levels of Autonomy (LoA). For an overview see my Forbes column posting of November 21, 2022, “The No-Nonsense Comprehensive Compelling Case For Why Lawyers Need To Know About AI And The Law” at the link here, and for a detailed technical depiction see my in-depth research article in the MIT Computational Law Journal of December 7, 2021, at the link here.

The framework elucidates five levels of AI as used in legal endeavors:

  • Level 0: No automation for AI-based legal work
  • Level 1: Simple assistance automation for AI-based legal work
  • Level 2: Advanced assistance automation for AI-based legal work
  • Level 3: Semi-autonomous automation for AI-based legal work
  • Level 4: Domain autonomous for AI-based legal work
  • Level 5: Fully autonomous for AI-based legal work

I’ll briefly describe them herein.

Level 0 is considered the no automation level. Legal reasoning and legal tasks are carried out via manual methods and principally occur via paper-based approaches.

Level 1 consists of simple assistance automation for AI legal reasoning. Examples of this category would include the use of everyday computer-based word processing, the use of everyday computer-based spreadsheets, access to online legal documents that are stored and retrieved electronically, and so on.

Level 2 consists of advanced assistance automation for AI legal reasoning. Examples of this category would include the use of query-style rudimentary Natural Language Processing (NLP), simplistic elements of Machine Learning (ML), statistical analysis tools for legal case predictions, etc.

Level 3 consists of semi-autonomous automation for AI legal reasoning. Examples of this category would include the use of advanced Knowledge-Based Systems (KBS) for legal reasoning, the use of Machine Learning and Deep Learning (ML/DL) for legal reasoning, advanced NLP, and so on.

Level 4 consists of domain autonomous computer-based systems for AI legal reasoning. This level reuses the conceptual notion of Operational Design Domains (ODDs), as utilized for self-driving cars, but as applied to the legal domain. Legal domains might be classified by functional areas, such as family law, real estate law, bankruptcy law, environmental law, tax law, etc.

Level 5 consists of fully autonomous computer-based systems for AI legal reasoning. In a sense, Level 5 is the superset of Level 4 in terms of encompassing all possible legal domains. Please realize that this is quite a tall order.

You can conceive of these Levels of Autonomy as on par with those used when discussing self-driving cars and autonomous vehicles (also based on the official SAE standard, see my coverage at the link here). We do not yet have SAE Level 5 self-driving cars. We are edging into SAE Level 4 self-driving cars. Most conventional cars are at SAE Level 2, while some of the newer cars are nudging into SAE Level 3.

In the legal domain, we do not yet have Level 5 AILR. We are touching upon some Level 4, though in extremely narrow ODDs. Level 3 is beginning to see the light of day, while the mainstay of AILR today is primarily at Level 2.

A recent research article about AI as applied to the law has posited a typification known as Law Informs Code. The researcher states this: “One of the primary goals of the Law Informs Code agenda is to teach AI to follow the spirit of the law” (John J. Nay, “Law Informs Code: A Legal Informatics Approach to Aligning Artificial Intelligence with Humans”, Northwestern Journal of Technology and Intellectual Property, Volume 20, forthcoming). There are some essential considerations that the Law Informs Code mantra brings up and I’ll walk you through several of those keystone precepts.

Before diving into the topic, I’d like to first lay some essential foundation about AI and particularly AI Ethics and AI Law, doing so to make sure that the discussion will be contextually sensible.

The Rising Awareness Of Ethical AI And Also AI Law

The recent era of AI was initially viewed as being AI For Good, meaning that we could use AI for the betterment of humanity. On the heels of AI For Good came the realization that we are also immersed in AI For Bad. This includes AI that is devised or self-altered into being discriminatory and makes computational choices imbuing undue biases. Sometimes the AI is built that way, while in other instances it veers into that untoward territory.

I want to make abundantly sure that we are on the same page about the nature of today’s AI.

There isn’t any AI today that is sentient. We don’t have this. We don’t know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here).

The type of AI that I am focusing on consists of the non-sentient AI that we have today. If we wanted to wildly speculate about sentient AI, this discussion could go in a radically different direction. A sentient AI would supposedly be of human quality. You would need to consider that the sentient AI is the cognitive equivalent of a human. More so, since some speculate we might have super-intelligent AI, it is conceivable that such AI could end up being smarter than humans (for my exploration of super-intelligent AI as a possibility, see the coverage here).

I’d strongly suggest that we keep things down to earth and consider today’s computational non-sentient AI.

Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn’t any AI today that has a semblance of common sense, nor does it have any of the cognitive wonderment of robust human thinking.

Be very careful of anthropomorphizing today’s AI.

ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns. If such patterns are found, the AI system will then use those patterns when encountering new data. Upon the presentation of new data, the patterns based on the “old” or historical data are applied to render a current decision.

I think you can guess where this is heading. If the humans that have been making the patterned-upon decisions have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects of AI-crafted modeling per se.
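
If you’d like to see the mechanism in miniature, here is a small illustrative sketch using synthetic made-up data: the historical human decisions are tilted against one group, and a pattern-matching model trained on those decisions dutifully reproduces the tilt. The scenario, features, and numbers are entirely fabricated for illustration.

```python
# Illustrative sketch only: synthetic data showing how a pattern-matching
# model can mirror biases baked into historical human decisions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

income = rng.normal(50, 15, n)        # a legitimate-looking feature
group = rng.integers(0, 2, n)         # stand-in for a protected attribute

# Historical human decisions: partly income-based, partly biased against group 1.
past_approval = ((income + rng.normal(0, 5, n) - 12 * group) > 45).astype(int)

X = np.column_stack([income, group])
model = LogisticRegression().fit(X, past_approval)

# The learned model reproduces the historical disparity on new applicants.
new_income = rng.normal(50, 15, 2000)
new_group = rng.integers(0, 2, 2000)
pred = model.predict(np.column_stack([new_income, new_group]))
for g in (0, 1):
    rate = pred[new_group == g].mean()
    print(f"group {g}: predicted approval rate {rate:.2f}")
```

Nothing in the model “decided” to be biased; it simply mimicked the patterns it was handed, which is exactly the concern.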

Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now-hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that even with relatively extensive testing that there will be biases still embedded within the pattern-matching models of the ML/DL.

You could somewhat use the famous or infamous adage of garbage-in garbage-out. The thing is, this is more akin to biases-in that insidiously get infused as biases submerged within the AI. The algorithmic decision-making (ADM) of AI axiomatically becomes laden with inequities.

Not good.

All of this has notably significant AI Ethics implications and offers a handy window into lessons learned (even before all the lessons happen) when it comes to trying to legislate AI.

Besides employing AI Ethics precepts in general, there is a corresponding question of whether we should have laws to govern various uses of AI. New laws are being bandied around at the federal, state, and local levels that concern the range and nature of how AI should be devised. The effort to draft and enact such laws is a gradual one. AI Ethics serves as a considered stopgap, at the very least, and will almost certainly to some degree be directly incorporated into those new laws.

Be aware that some adamantly argue that we do not need new laws that cover AI and that our existing laws are sufficient. They forewarn that if we do enact some of these AI laws, we will be killing the golden goose by clamping down on advances in AI that proffer immense societal advantages.

In prior columns, I’ve covered the various national and international efforts to craft and enact laws regulating AI, see the link here, for example. I have also covered the various AI Ethics principles and guidelines that various nations have identified and adopted, including for example the United Nations effort such as the UNESCO set of AI Ethics that nearly 200 countries adopted, see the link here.

Here's a helpful keystone list of Ethical AI criteria or characteristics regarding AI systems that I’ve previously closely explored:

  • Transparency
  • Justice & Fairness
  • Non-Maleficence
  • Responsibility
  • Privacy
  • Beneficence
  • Freedom & Autonomy
  • Trust
  • Sustainability
  • Dignity
  • Solidarity

Those AI Ethics principles are earnestly supposed to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems.

All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the being-established norms of Ethical AI. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. As emphasized earlier herein, it takes a village to devise and field AI, and the entire village has to be versed in and abide by AI Ethics precepts.

I also recently examined the AI Bill of Rights, which is the official title of the U.S. government document entitled “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People” that was the result of a year-long effort by the Office of Science and Technology Policy (OSTP). The OSTP is a federal entity that serves to advise the American President and the US Executive Office on various technological, scientific, and engineering aspects of national importance. In that sense, you can say that this AI Bill of Rights is a document approved by and endorsed by the existing U.S. White House.

In the AI Bill of Rights, there are five keystone categories:

  • Safe and effective systems
  • Algorithmic discrimination protections
  • Data privacy
  • Notice and explanation
  • Human alternatives, consideration, and fallback

I’ve carefully reviewed those precepts, see the link here.

Now that I’ve laid a helpful foundation on these related AI Ethics and AI Law topics, we are ready to jump into the heady topic of AI Legal double-checkers and the realm of Law Informs Code.

AI Legal Double-Checkers Embedded Into AI For Human-Value Alignment

I sometimes refer to AI Legal Double-Checkers via an acronym of AI-LDC. This is a bit visually jarring for those that aren’t familiar with the acronym. As such, I won’t be using this particular acronym in this discussion but wanted to mention it to you as a heads-up.

To unpack some of the complexities of AI Legal double-checkers, let’s address these major points:

  • Use of AI Legal double-checkers as an AI human-values alignment mechanism
  • More expansive AI will correspondingly require more robust AI Legal double-checkers
  • AI Legal double-checkers enact the law and notably aren’t making law (presumably)
  • Delicate balance between AI Legal embodiment of the law as rules versus standards
  • Requiring proof of the pudding when it comes to AI abiding by the law

Due to space constraints, I’ll just cover those five points for now, though please be on the watch for further coverage in my column covering additional and equally noteworthy considerations on these rapidly evolving and forward-moving matters.

Right now, engage your seatbelt and get ready for an invigorating journey.

  • Use of AI Legal double-checkers as an AI human-values alignment mechanism

There are numerous ways to try and achieve a harmonious alignment between AI and human values.

As earlier mentioned, we can produce and promulgate AI Ethics precepts and seek to get AI developers and those that field and operate AI to abide by those keystones. Unfortunately, this alone won’t do the trick. You’ve got some devisers that inevitably won’t get the message. You’ve got some contrivers that will flout Ethical AI and attempt to circumvent the somewhat loosey-goosey prescribed principles. And so on.

The use of “soft law” approaches entailing AI Ethics has to almost inexorably be paired with “hard law” avenues such as passing laws and regulations that will send a hefty signal to all that create or utilize AI. The long arm of the law might come to get you if you aren’t judiciously leveraging AI. The sound of prison doors clanging could garner sharp attention.

A big problem, though, is that sometimes the barn door has already been left open and the horses are out and about. An AI that is fielded will potentially be producing all manner of illegal acts and continue to do so until it is not just caught, but some enforcement also finally steps up to stymie the flow of unlawful actions. All of that can take time. Meanwhile, humans are being harmed in one fashion or another.

Into this foray comes the AI Legal double-checker.

By residing within an AI app, the AI Legal double-checker is able to immediately detect when the AI appears to be running afoul of the law. The AI Legal double-checker might stop the AI in its tracks. Or the component might alert humans as to the discerned unlawful activities, doing so on a timely basis that could prod overseers into urgent corrective action. There is also the formalized logging that the component could create, providing a tangible semblance of an audit trail for purposes of codifying the adverse actions of the AI.
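
To sketch those response actions in code form (detect, halt, alert, and log), here is a hypothetical Python outline. The class, its trivial risk test, and the logging format are placeholders of my own devising, not a reference implementation of any actual system.

```python
# Hedged sketch of the response actions described above: when the embedded
# legal double-checker flags an action, it can halt the AI, alert human
# overseers, and append to an audit trail. Names and logic are hypothetical.
import json
import time

class AILegalDoubleChecker:
    def __init__(self, audit_path="legal_audit_log.jsonl"):
        self.audit_path = audit_path

    def appears_unlawful(self, action: dict) -> bool:
        # Placeholder for whatever legal-reasoning assessment is embedded.
        return action.get("risk_score", 0.0) > 0.8

    def alert_overseers(self, action: dict) -> None:
        # In practice this might page an on-call compliance or oversight team.
        print(f"ALERT: possible unlawful action flagged: {action.get('id')}")

    def log_for_audit(self, action: dict, blocked: bool) -> None:
        record = {"ts": time.time(), "action": action, "blocked": blocked}
        with open(self.audit_path, "a") as f:
            f.write(json.dumps(record) + "\n")

    def intercept(self, action: dict) -> bool:
        """Return True if the action may proceed, False if it is stopped."""
        if self.appears_unlawful(action):
            self.alert_overseers(action)
            self.log_for_audit(action, blocked=True)
            return False          # stop the AI in its tracks
        self.log_for_audit(action, blocked=False)
        return True
```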

Our laws are said to be a form of multi-agent conglomeration such that the laws inevitably are a mixture of what society has sought to cast as a balance amongst likely conflicting views of proper and improper societal behaviors. An AI Legal double-checker based on our laws is therefore embodying that mixture.

Notably, this is more than just programming a list of definitive legal rules. Laws tend to be more malleable and strive toward overarching standards, rather than specifying the most minute of microscopic rules. Complexities are aplenty.

Returning to the earlier noted research paper, here’s how these considerations can also be viewed regarding the AI Alignment pursuit: “Law, the applied philosophy of multi-agent alignment, uniquely fulfills these criteria. Alignment is a problem because we cannot ex ante specify rules that fully and provably direct good AI behavior. Similarly, parties to a legal contract cannot foresee every contingency of their relationship, and legislators cannot predict the specific circumstances under which their laws will be applied. That is why much of law is a constellation of standards” (ibid).

Embodying law into an AI Legal double-checker is a lot more challenging than you might at first assume.

As AI advances, we will need to leverage such advances accordingly. Turns out that what’s good for the goose is also good for the gander. Those of us making progress in AI as applied to the law are pushing the envelope on AI and indubitably forging new advances that can ultimately feed into AI progress altogether.

  • More expansive AI will correspondingly require more robust AI Legal double-checkers

A cat-and-mouse gambit confronts this topic.

The chances are that as AI gets further advanced, any AI Legal double-checker component is going to find matters harder and harder to cope with. For example, an AI app being scrutinized might have newly devised super-sneaky ways to hide the illegal actions that the AI is taking. Even if the AI isn’t taking an underhanded route, the overall complexity of the AI could alone be a daunting hurdle for seeking to have the AI Legal double-checker assess.

Here's how this becomes especially significant.

Suppose an AI developer or some firm utilizing AI proclaims that there is an AI Legal double-checker that has been embedded into the AI-based app. Voila, they seem to have now washed their hands of any further concerns. The AI Legal double-checker will take care of everything.

Not so.

The AI Legal double-checker might be insufficient for the nature of the AI app involved. There is also the possibility that the AI Legal double-checker becomes outdated, perhaps not being updated with the latest laws pertaining to the AI app. A slew of reasons can be foreseen as to why the mere presence of an AI Legal double-checker won’t be a silver bullet.

Consider these insights by the earlier cited research: “As the state-of-the-art for AI advances, we can set iteratively higher bars of demonstrated legal understanding capabilities. If a developer claims their system has advanced capabilities on tasks, they should demonstrate correspondingly advanced legal comprehension and legal reasoning abilities of the AI, which have practically no ceiling of difficulty when considering the morass of laws and regulations across time, precedent, and jurisdiction” (ibid).

  • AI Legal double-checkers enact the law and notably aren’t making law (presumably)

I’m sure that some of you are aghast at the idea of having these AI Legal double-checkers.

One often-voiced concern is that we are apparently going to allow AI to decide our laws for us. Good gosh, you might be thinking, some piece of automation will overtake humanity. Those darned embedded AI Legal double-checkers will become the default kings of our laws. Whatever they perchance do will be what the law seems to be.

Humans will be ruled by AI.

And these AI Legal double-checkers are the slippery slope that gets us there.

A counterargument is that such talk is the stuff of conspiracy theories. You are wildly postulating and getting yourself into a tizzy. The reality is that these AI Legal double-checkers are not sentient, they are not going to take over the planet, and hyping about their existential risk is plainly preposterous and immensely overstated.

All in all, remaining with a calm and reasoned posture, we do need to be mindful that the AI Legal double-checkers serve to appropriately reflect the law and do not, whether by design or by accident, go further and somehow default into the revered realm of making law. Setting aside the sentience extrapolations, we can certainly agree that there is a real and pressing concern that the AI Legal double-checker might end up misrepresenting the true nature of a given law.

In turn, you could claim that that particular “misrepresented” law is therefore essentially being made anew since it no longer aptly signifies what was intended by the actual law. I trust that you can see how this is a subtle but telling consideration. At any point in time, the AI Legal double-checker could on a virtual basis be making, or shall we say “hallucinating,” new laws simply by how the AI component is interpreting the law as originally stated or embodied in the AI (for my coverage of AI so-called hallucinations, see the link here).

Care on this must be stringently exercised.

On this topic, the aforementioned research study proffers this parallel thought in terms of seeking to avert crossing that sacred line: “We are not aiming for AI to have the legitimacy to make law, set legal precedent, or enforce law. In fact, this would undermine our approach (and we should invest significant effort in preventing that). Rather, the most ambitious goal of Law Informing Code is to computationally encode and embed the generalizability of existing legal concepts and standards into validated AI performance” (ibid).

  • Delicate balance between AI Legal embodiment of the law as rules versus standards

Laws are messy.

For just about any law on the books, there is likely a multitude of interpretations about what the law stipulates in actual practice. In the parlance of the AI field, we refer to laws as being semantically ambiguous. That’s what makes developing AI as applied to the law such an exciting and also simultaneously vexing challenge. Unlike the precise number crunching that you might see for say financial-oriented AI applications, the desire to embody our laws into AI entails dealing with a tsunami of semantic ambiguities.

In my foundational book on the fundamentals of AI Legal Reasoning (AILR), I discuss how prior attempts at merely codifying laws into a set of bounded rules did not get us as far as we would like to go in the legal domain (see the link here). Today’s AILR has to encompass an integration between the use of rules and what might be called overarching standards that the law represents.

This important balance can be expressed in this fashion: “In practice, most legal provisions land somewhere on a spectrum between pure rule and pure standard, and legal theory can help estimate the right combination of “rule-ness” and “standard-ness” when specifying objectives of AI systems” (ibid).
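
A tiny Python sketch can illustrate that spectrum: a bright-line rule produces a crisp yes-or-no, whereas a standard produces a graded judgment that must be weighed against a threshold. The specific rule, factors, and weights below are invented solely for illustration and are not drawn from any actual statute or system.

```python
# Illustrative sketch of the rule-versus-standard balance; everything here
# (the rate cap, the factors, the weights, the threshold) is hypothetical.
def rule_check_interest_rate(action: dict) -> bool:
    """Bright-line rule: a made-up cap on an interest rate."""
    return action.get("interest_rate", 0.0) <= 0.36

def standard_check_reasonableness(action: dict) -> float:
    """Standard-style assessment: a weighted score over fuzzier factors."""
    factors = {
        "disclosure_quality": 0.4,          # was the term clearly disclosed?
        "borrower_benefit": 0.3,            # does the action serve the user?
        "consistency_with_precedent": 0.3,  # does it fit prior accepted practice?
    }
    return sum(weight * action.get(name, 0.0) for name, weight in factors.items())

def legal_double_check(action: dict, standard_threshold: float = 0.6) -> bool:
    # Most legal provisions sit between the two poles, so both get consulted.
    return (rule_check_interest_rate(action)
            and standard_check_reasonableness(action) >= standard_threshold)
```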

  • Requiring proof of the pudding when it comes to AI abiding by the law

Wanting something is different than having something.

That whit of wisdom comes up when proffering that though we might want to have AI Legal double-checkers, we abundantly need to ensure that they work, and work correctly. Note that this presents another tough and grueling hurdle. I’ve covered previously the latest advances and challenges in the verification and validation of AI, see the link here.

As noted in the research paper: “To address the gap, before AI models are deployed in increasingly agentic capacities, e.g., fully autonomous vehicles on major roads, the deploying party should demonstrate the system’s understanding of human goals, policies, and legal standards. A validation procedure could illustrate the AI’s ‘understanding’ of the ‘meaning’ of legal concepts” (ibid).
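
As a hedged sketch of what such a validation procedure might look like, the snippet below exercises an embedded legal double-checker (reusing the hypothetical appears_unlawful method from the earlier sketch) against a suite of scenarios with agreed-upon lawful or unlawful labels and requires a minimum pass rate before deployment. The scenarios and threshold are placeholders, not a recognized certification protocol.

```python
# Hedged sketch of a pre-deployment validation harness; the pass-rate
# threshold and the scenario format are invented for illustration.
def validate_legal_checker(checker, scenarios, required_pass_rate=0.95):
    """Each scenario pairs an action dict with the expected unlawful/lawful label."""
    passed = 0
    for action, expected_unlawful in scenarios:
        if checker.appears_unlawful(action) == expected_unlawful:
            passed += 1
    rate = passed / len(scenarios)
    print(f"legal-comprehension check: passed {rate:.0%} of {len(scenarios)} scenarios")
    return rate >= required_pass_rate
```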

Conclusion

I urge you to consider joining me on this noble quest to build and field AI Legal double-checkers. We need more attention and resources devoted to this virtuous pursuit.

This also provides double duty, as mentioned earlier, toward achieving AI Legal Reasoning (AILR) that can be used for aiding attorneys and potentially used directly by the general public. Indeed, some argue vehemently that the only viable means of arriving at a fuller sense of access to justice (A2J) will be via the crafting of AI that embodies legal capacities and can be accessed by all.

One quick final point for now.

The discussion so far has emphasized that the AI Legal double-checker would be embedded or implanted into AI. This is indeed the primary focus of those researching and undertaking this emerging realm.

Here’s a question worth mulling over.

Put on your thinking cap.

Why not make use of AI Legal double-checkers in all software?

The gist is that rather than exclusively using the AI Legal double-checkers in AI, perhaps we should widen our viewpoint. All kinds of software can go legally astray. AI has admittedly gotten the lion’s share of attention due to the ways in which AI is usually placed into use, such as rendering gut-wrenching decisions that impact humans in their everyday lives. You could though readily maintain that there are lots of non-AI systems that do likewise.

In essence, we ought to not let any software have a free ride to avoid or avert the law.

Recall earlier too that I mentioned the two categories of combining AI and the law. We herein have focused on the use of AI as applied to the law. On the other side of the coin is the application of the law to AI. Suppose we enact laws that require the use of AI Legal double-checkers.

This might at first be confined to AI systems, especially those rated as especially high-risk. Gradually, the same AI Legal double-checker requirement could be extended to non-AI software too. Again, no free rides.

While you noodle on that above consideration, I’ll spice things up as a closing teaser. If we are going to be trying to require AI Legal double-checkers, we might as well also be doing likewise about AI Ethics double-checkers. The use of an AI Legal double-checker is only half the story, and we cannot neglect or forget about the AI Ethics concerns too.

I’ll end this jaunty discourse with one of my favorite quotes. Per the wise words of Earl Warren, the famous jurist that served as the Chief Justice of the United States: “In civilized life, law floats in a sea of ethics.”

It might be best to stridently put those budding and bubbling AI Legal double-checkers and AI Ethics double-checkers into avid use if we want to keep our heads above potentially looming heavy seas from sour AI and dour non-AI systems that endanger our safety.

They might be humanity's life vest.
