Yes it does. Sorry.
I spent years automating it, and I was one of the first people to do it. It was surprisingly easy to write an AI that did the work of lawyers. It took a team of just three of us. That's all. And this was in the late 1980s. Today's tech dwarfs what we had.
The principal issue is that a lawyer must read and understand documents, because his submissions to the court as an officer of the court make him liable. So we found that humans were the gating factor.
That said, most of the work is rubber-stamping and managing the process.
We are within two years of new AIs that don't have to be taught narrow disciplines.
We (our organization) are unable to train them at this point because the emergent logical capacity of LLMs trails their ability to synthesize text. However, as capacity increases, we can increase the number of adversarial predictions and create internal competition for the best solutions. Today's AIs generate only a few candidates. Combine increasing emergence with more permutations, though, and the picture changes: at that point, the function of lawyers is largely to extract information from the client and the opposition to feed into the AIs.
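To make "internal competition" concrete, here's a toy sketch of the idea, not our production work: generate n candidate arguments, subject each to adversarial attack, and keep the one that survives best. The names generate_draft and attack_draft are hypothetical stand-ins for a generator model and an adversarial critic.

```python
# Toy sketch of best-of-n generation with adversarial selection.
# generate_draft() and attack_draft() are hypothetical stand-ins
# for a generator model and a critic model.
import random

def generate_draft(brief: str, seed: int) -> str:
    """Hypothetical generator: one candidate argument per seed."""
    random.seed(seed)
    return f"Draft #{seed} for: {brief} (strength {random.random():.2f})"

def attack_draft(draft: str) -> float:
    """Hypothetical adversary: returns the strength that survives
    attack, in [0, 1]; here a toy score parsed from the draft."""
    return float(draft.rsplit("strength ", 1)[1].rstrip(")"))

def best_of_n(brief: str, n: int = 16) -> str:
    """Generate n candidates, attack each, keep the survivor."""
    candidates = [generate_draft(brief, seed) for seed in range(n)]
    return max(candidates, key=attack_draft)

print(best_of_n("motion to dismiss for lack of standing"))
```

More permutations means a larger n and a harsher critic; the lawyer's job reduces to supplying the brief.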
My work (our work) is in composing the formal logic of decidability, and in training the AIs so that we can measure the divergence between legislation, the findings of the courts, regulations, and reciprocity (equity), and therefore triangulate between the opposing propositions, the body of law, and universal decidability.
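One crude way to read "measure the divergence" in code, purely as illustration: embed each body of law as a vector and compare pairwise. The bag-of-words embedding, the cosine metric, and the sample texts below are all assumptions for the sketch, not our method.

```python
# Toy divergence measurement between bodies of law.
# embed() is a bag-of-words stand-in for a real embedding model.
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'."""
    return Counter(text.lower().split())

def divergence(a: Counter, b: Counter) -> float:
    """1 - cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * \
           sqrt(sum(v * v for v in b.values()))
    return 1.0 - (dot / norm if norm else 0.0)

# Hypothetical sample texts, one per source of law.
sources = {
    "legislation": "the statute requires written notice within thirty days",
    "case_law": "the court held that notice may be actual or constructive",
    "regulation": "notice shall be written and delivered within thirty days",
    "equity": "strict notice rules yield where the party had actual knowledge",
}

for name_a in sources:
    for name_b in sources:
        if name_a < name_b:  # each unordered pair once
            d = divergence(embed(sources[name_a]), embed(sources[name_b]))
            print(f"{name_a} vs {name_b}: divergence {d:.2f}")
```

Triangulation is then a matter of locating each opposing proposition within that space of pairwise distances.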
We are still left with only one human-constrained problem: lie detection, which is the principal function of juries. And it's very unlikely that juries will be displaced in that capacity for a very long time.
Reply addressees: @J_Hurstman @DinnertimeDr @scrumble_eggs @lauferlaw @elonmusk @alx
Source date (UTC): 2023-09-05 03:41:21 UTC
Original post: https://twitter.com/i/web/status/1698904120810205184
Replying to: https://twitter.com/i/web/status/1698893519018971241