
Fake_William_Shatner t1_j748na3 wrote

When you work at a law firm and hear about AI doing the work of artists and writers, you might be able to tell them: "Be flexible, find another career."

When you hear about an AI creating legal documents and helping people in court. "Everybody sue this guy!!!!" Hey, and you could probably use an AI Lawyer to write that lawsuit -- make sure to send a LOT of them. Bankrupt the business before they can test it out!

9

I_ONLY_PLAY_4C_LOAM t1_j74ft6v wrote

I'm not convinced that this technology, in its current form, will replace lawyers. It lacks the precision required by legal reasoning and still gets shit wrong all the time. Furthermore, as a software engineer, I have doubts about whether this tech is capable of solving these problems without radical new ideas. I foresee a lot of people giving themselves a lot of headaches by thinking they can rely on this technology, but not much more than that.

13

likethatwhenigothere t1_j74qmyo wrote

I asked it something today and it came back with an answer that seemed correct. I then asked for it to give me examples. It gave two examples and the way it was written seemed absolutely plausible. However I knew the examples and knew that they were wrong. It gave other examples that I couldn't verify anywhere, yet as I asked more questions it kept doubling down on the previous examples.

I won't go into detail about what I was asking, but it basically said the Nintendo logo was made up of three rings to represent three core values of the business. I went through Nintendo's logo history to see if it ever had three rings, and as far as I can tell it didn't. So fuck knows where it got the info from.

6

I_ONLY_PLAY_4C_LOAM t1_j74rwgf wrote

It's just giving you a plausible and probabilistically likely answer. It has absolutely no model of what is and isn't true.
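To make "probabilistically likely" concrete, here's a toy sketch of my own (not how ChatGPT is actually implemented, and the probabilities are invented): the model just samples the next word from a learned distribution over what tends to follow, with no representation of whether the continuation is true.

```python
import random

# Toy "language model": made-up conditional next-word probabilities.
# It only tracks which words tend to follow others in its training text,
# so a false continuation like "three rings" can still be the most likely one.
next_word_probs = {
    ("the", "nintendo"): {"logo": 0.6, "rings": 0.3, "company": 0.1},
    ("nintendo", "logo"): {"has": 0.5, "shows": 0.5},
    ("logo", "has"): {"three": 0.7, "one": 0.3},  # likely, not true
}

def sample_next(context):
    """Pick the next word in proportion to its learned probability."""
    probs = next_word_probs[context]
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights)[0]
```

Nothing in that loop ever consults a fact; fluent-sounding falsehoods fall straight out of the sampling.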

13

likethatwhenigothere t1_j76c7nb wrote

But aren't people using it as a factual tool and not just getting it to write content that could be 'plausible'? There's been talk about this changing the world, how it passed medical and law exams - which obviously needs to be factual. Surely if there's a lack of trust in the information it's providing, people are going to be uncertain about using it. If you have to fact-check everything it's providing, you might as well just do the research/work yourself, because you're effectively doubling up the work. You're checking all the work ChatGPT does and then having to fix any errors it's made.

Here's what I actually asked ChatGPT in regard to my previous comment.

I asked if the Borromean symbol (three interlinked rings) was popular in Japanese history. It stated it was, and gave me a little bit of history about how it became popular. I asked it to provide examples of where it can be seen. It came back saying temple gates, family crests, etc. But it also said it was still widely used today and could be seen in Japanese advertising, branding and product packaging. I asked for an example of branding where it's used. It responded...

"One example of modern usage of the Borromean rings is in the logo of the Japanese video game company, Nintendo. The three interlocking rings symbolize the company's commitment to producing quality video games that bring people together".

Now that is something that can easily be checked and confirmed or refuted. But what if it's providing a response that can't be?

2

Fake_William_Shatner t1_j77obea wrote

These people don't seem to know the distinctions you are bringing up. Basically, it's like expecting someone in the middle ages to tell you how a rocket works.

The comments are "evil" or "good" and don't get that "evil and good" are results based on the data, the algorithm employed, and how they were introduced to each other.

Chat GPT isn't just one thing. And if it's giving accurate or creative results, that's influenced by prompts, the dataset it is drawing from, and the vagaries of what set of algorithms they are using that day -- I'm sure it's constantly being tweaked.

And based on the tweaks, people have gotten wildly different results over time. It can be used to give accurate and useful code -- because they sourced that data from working code and set it to "not be creative," but its understanding of human language helps it do a much better job of searching for the right code to cut and paste. There's a difference between term papers and a legal document and a fictional story.

The current AI systems have shown they can "seem to comprehend" what people are saying and give them a creative and/or useful response. So that, I think, proves it can do something easier like legal advice. A procedural body of rules with specific results and no fiction is ridiculously simple compared to creative writing or carrying on a conversation with people.

We THINK walking and talking are easy because almost everybody does it. However, for most people -- it's the most complicated thing they've ever learned how to do. The hardest things have already been done quite well with AI -- so it's only a matter of time that they can do simpler things.

Getting a law degree does require SOME logic and creativity -- but it's mostly memorizing a lot of statutes, procedures, case law and rules. It's beyond ridiculous to think THIS is going to be that hard for AI if it can converse and make good art.

1

ritchie70 t1_j75anat wrote

I played with it today. It wrote two charming children’s stories, a very simple program in C, a blog post about the benefits of children learning ballet, a 500 word essay about cat claws, answered a “how do I” question about Excel, and composed a very typical corporate email.

The fact-based items were correct.

I may use it in future if I need an especially ass-kissy email.

2

Fake_William_Shatner t1_j77myki wrote

>I went through Nintendo's logo history to see if it ever had three rings and as far I can tell it didn't.

You are working with a "creative AI" that is designed to give you a result you "like." Not one that is accurate.

AI can definitely be developed and trained on case law and give you valid answers. Whether or not they've done it with this tool is a very geeky question that requires people to look at the data and code.

Most of these discussions are off track because they judge "can it be done" by current experience -- when people don't even really know what tool was used.

1

lycheedorito t1_j754icl wrote

It won't replace artists either. Like ChatGPT, it gets shit wrong and doesn't understand what it's making. You still need artists who understand art to curate and fix things at the very least. Every time I explain this it feels like I'm talking to a wall, which is not surprising. Probably the same for writing, or music, or whatever.

5

KSRandom195 t1_j74njna wrote

Everyone will say this about their pet industry.

“Clearly my industry is harder than all the others because <reason>.”

No, your pet industry isn’t special, it will either be replaced or not like all the others.

Being a technical person, I don’t think AI is where it needs to be yet to replace practically any industries. If I’m wrong, it’s not really a problem I was going to be able to deal with anyway.

1

demonicneon t1_j74p1a9 wrote

It’s still very much a tool.

2

KSRandom195 t1_j74p4uz wrote

As a tool I see great potential. As a replacement I do not.

2

I_ONLY_PLAY_4C_LOAM t1_j74rg37 wrote

Having actually worked in legal technology, I'm honestly not sure what this does for existing lawyers. As I said before, legal documents require extremely specific and precise language. Lawyers are likely to have templates for common documents their firms create, and anything beyond that requires actually knowing about law, which LLMs like ChatGPT are not capable of. The actual money to be made in legal technology is not in generative AI, but in document processing and search. Lawyers are increasingly having to deal with hundreds of gigabytes or even terabytes of documents in a given case. OCR, which is also AI and is seeing use in the industry, makes handwriting searchable. Advanced search techniques make legal review, the real driver of cost in the legal industry, faster and cheaper. Making legal arguments in court is not the reason why interaction with the legal system can be so expensive.
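The search side of that pipeline can be sketched with a bare-bones inverted index (a simplified illustration of the general technique, not any specific legal-tech product; the document names are made up): OCR turns each scanned page into text, and the index maps each term to the documents containing it, so review can be filtered instead of read page by page.

```python
from collections import defaultdict

def build_index(docs):
    """Map each lowercased term to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, *terms):
    """Return ids of documents containing every query term."""
    sets = [index.get(t.lower(), set()) for t in terms]
    return set.intersection(*sets) if sets else set()

# In a real pipeline these strings would come from OCR'd scans.
docs = {
    "exhibit_a": "email regarding the merger agreement",
    "exhibit_b": "handwritten note about the merger timeline",
}
index = build_index(docs)
```

Real e-discovery systems layer ranking, stemming, and deduplication on top, but the cost saving comes from exactly this move: narrowing terabytes down to the handful of documents a human actually has to read.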

9

Fake_William_Shatner t1_j77p3ko wrote

>legal documents require extremely specific and precise language.

Which computer software is really good at -- even before the improvements of AI.

>and anything beyond that requires actually knowing about law, which LLMs like ChatGPT are not capable of.

Yeah, lawyers memorize a lot of stuff and go to expensive schools. That doesn't mean it's actually all that complicated relative to programming, creating art or designing a mechanical arm.

I agree that document processing and search are going to see a lot of growth with AI. But being able to type in a few details about a case and have a legal document created -- a discovery request, and the bulk of the bread-and-butter work that reuses the same templates over and over with a few sentences changed -- that's going to be AI.

Most of what paralegals and lawyers do is repetitive and not all that creative.

1

I_ONLY_PLAY_4C_LOAM t1_j74pox9 wrote

This attitude that tech bros have about disrupting industries they don't actually understand or know anything about is pretty funny sometimes.

1

Fake_William_Shatner t1_j77pz70 wrote

"Tech bros"? There are AI developers. If they team with some lawyers to double-check and they get good case law data -- I can guarantee you it isn't a huge jump to create a disruptive AI based on that.

Revisit these comments in about a year. The main thing that will hinder AI in the legal world is humans suing to keep it from being allowed. Of course, all those attorneys will use it and then proof the output. And sign their names. And appear in court with nice suits and make deals. And they won't let AI be used in court because it is not allowed. For reasons.

The risk that it can give an inaccurate result does put people in jeopardy, so more effort is required for accuracy. But AI will be able to pass the bar exam more easily than it beat a human at chess.

It's not funny, but sad, that people are trying to convince themselves this is more complicated than writing a novel or creating art.

1

I_ONLY_PLAY_4C_LOAM t1_j77zlt0 wrote

RemindMe! one year

Has the machine consciousness supplanted the fleshy meat bags in the legal industry.

1

Fake_William_Shatner t1_j78fcnq wrote

No -- I didn't say it would replace them. The legal system won't allow it.

I'm saying it will be used to create legal documents and win cases -- albeit with the pages printed out before they go in the courthouse.

This isn't about the acceptance, but the capabilities. If there is one group that can protect their industry it's the justice system.

1

henningknows t1_j7494j0 wrote

It will be interesting to see how some of those ideas pan out. An AI lawyer will definitely be put to the test quickly, with the first lawsuits deciding whether it will be legal or smart to have an AI that can create legal docs. As for writing, we have already seen you can't copyright work created by AI, and my assumption is search engines will learn to identify and de-rank AI-written articles.

0

Fake_William_Shatner t1_j74cr07 wrote

I have full confidence in America's legal system to protect itself from innovation, efficiency and fairness.

Expensive lawyers keeping some people out of jail and burying anyone who challenges a corporation in a two-tiered justice system -- losing that is just not going to happen on their watch.

2

henningknows t1_j74d0ii wrote

Not totally sold that AI could bring that change. I can agree on the legal system sucking.

1