User1539

User1539 t1_j8dajs9 wrote

It seems like every time I see a 'cool prosthetic' it's something the person has created at home, or with the help of a university professor.

I just find it really interesting that, of all things, it's prosthetics that are homemade more than anything else I see. 3D printing, along with the need for something completely custom, seems to be creating a culture of DIY for replacing your own limbs.

It's very strange if you think about it.

1

User1539 t1_j6i0aj1 wrote

It's hard to suggest it's '50% of the way' to AGI when it can't really do any reasoning.

I was playing with its coding skills, and the feeling I got was like talking to a kid who was copying off other kids' papers.

It would regularly produce code, then do a summary at the end, and in that summary make factually incorrect statements.

If it can't read its own code, then it's not very reliable, right?

I'm not saying this isn't impressive, or a step on the road toward AGI, but the complete lack of reliable reasoning skills makes it less of an 'intelligence' and more like the shadow of intelligence. Being able to add large numbers instantly isn't 'thinking', and calculators do it far better than humans, but we wouldn't call a calculator intelligent.

We'll see where it goes. I've seen some videos and papers I'm more impressed with than LLMs lately. People are definitely building systems with reasoning skills.

We may be 50% of the way, but I don't feel that LLMs represent that on their own.

2

User1539 t1_j5nhi04 wrote

No, you're being offensive.

70 is a measure of a human IQ. Lots of humans have an IQ of around 70. They're regular, hard-working people. There's nothing wrong with them.

I'm using that number because it is a number used to determine whether someone is capable of employment, not to determine whether they're good people.

I'm saying that if we took away all the jobs from people with IQs of 70 and below, it would be earth-shattering. Because those people do a lot of work. Good work. Like good people do.

I'm literally saying that most people worry about AI becoming smarter than the smartest human, forgetting that most of us fall far below that line, and replacing all the hard working people in factories is going to change EVERYTHING.

But, deep down, you think people with low IQs are disgusting, and anyone that talks about them must be insulting them. Because you literally can't imagine a world where someone with a 70 IQ is simply a reference point, and not an insult.

If we were talking about flying jets, and I offhand mentioned the robot would have to be 6ft tall, as that is the height cutoff for flying a jet, would you be insulted on the pilots' behalf? No. Because you don't think 6ft-tall jet fighter pilots are 'less' and need your defending.

Not only do factory workers not need you to stick up for them, but you're also showing your true colors by acting like they're so mentally challenged that no one should talk about them at all.

1

User1539 t1_j5ndt3y wrote

70 is the IQ cutoff for getting Social Security and not having to work. It is literally the line at which someone is expected to go out and get a job.

I'm literally saying we don't need 'smarter than human' AGI. An AI that could do the work we give to the people of whom we expect the least would be an existential change.

IQ is a common measure of someone's intelligence. But, if the mention of a measure of human intelligence offends you, then you probably shouldn't take part in conversations where human intelligence is routinely compared to machine intelligence.

1

User1539 t1_j5jjimd wrote

The same argument has been made about Google, and it's a real concern. Some moron killed his wife a week or so ago, and the headline read something like "Suspect's Google history included 'how to hide a 140lb body'".

So, yeah. It's already a problem.

Right now we deal with it by having Google keep records and hoping criminals who google shit like that are just too stupid to use a VPN or anonymous internet.

Again, we don't need AGI to have that problem. It's already here.

That's the whole point of my comment. We need to stop waiting for AGI before we start to treat these systems as being capable of existential change for the human race.

1

User1539 t1_j5gxrwt wrote

Honestly, what we need is something to translate between what an LLM can 'understand' needs to be done and the physical world.

Right now, we can ask an LLM what the process of, say, changing the oil in a car is.

We can also program an industrial robot to do that task, basically blind.

To automate jobs, we need an LLM-style understanding of the task and the steps required, coupled to the ability to take each of those steps and communicate it to a 'body', checking as it goes that the process is being followed correctly.

So, if an LLM could, say, break the problem into steps, taking into account the situation around it, it could probably do the job.

Imagine typing into ChatGPT a prompt like 'You are programming a robot arm. You need to pick up a glass. Write the code to pick up the glass in front of you.'

Then automatically send that to a camera/arm setup, and have the image processing describe back 'The arm is to the left of the glass by 2 inches; please program the arm to grab the glass.'

'The glass has been knocked over to the left, and is now on its side, 4 inches in front of the hand. Please program the arm to grab the glass.'

Ultimately it would be more complicated than that, but I think that's the basic idea of what many researchers are working on moving forward.

With a feedback loop of video being able to 'describe' to the LLM what is happening, and the LLM adjusting to meet its task, you could have a very useful android.
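
Something like this rough Python sketch of the loop I'm picturing (every function name here, `llm_complete`, `describe_scene`, `send_to_arm`, is a placeholder I made up, not a real API):

```python
# A rough sketch of the idea (my own toy version; every function here is a
# made-up placeholder, not a real API): a vision model turns the camera feed
# into a text description, the LLM plans the next arm command from that text,
# and we loop until the goal is reported as done.

def control_loop(goal, llm_complete, describe_scene, send_to_arm, max_steps=50):
    history = []
    for _ in range(max_steps):
        scene = describe_scene()  # e.g. "The arm is 2 inches to the left of the glass"
        prompt = (
            "You are programming a robot arm.\n"
            f"Goal: {goal}\n"
            f"Current scene: {scene}\n"
            f"Steps taken so far: {history}\n"
            "Reply with the next arm command, or DONE if the goal is met."
        )
        command = llm_complete(prompt).strip()
        if command == "DONE":
            return True
        send_to_arm(command)      # execute one small step, then look again
        history.append(command)
    return False                  # gave up; the task never converged
```

The interesting part is that the LLM never touches the raw video; everything it 'knows' about the physical world comes in and goes out as text.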

3

User1539 t1_j5grqw0 wrote

> If we want effective automation or make general human tasks faster we certainly do not need AGI.

Agreed. We're very, very close to this now, and likely very far away from AGI.

> If we want inventions and technology which would be hard for humans to come up with in a reasonable time frame, we do need AGI.

This is where we disagree. I have many contacts at universities, and most of my friends have a PhD and participate in some kind of research.

In their work, they were evaluating Watson (IBM's LLM-style AI) years ago, and talking about how it would help them.

Having a PhD necessarily means having tunnel vision. You will do research that makes you the single person on earth who knows about the one cell you study, or the one protein you've been working with.

Right now, the state of science is that we have all these researchers writing papers to give other scientists wider knowledge of things they couldn't possibly dedicate time to.

It's still nowhere near wide enough. PhDs aren't able to easily work outside their field, and the result is that their research needs to go through several levels of simplification before someone can find a use for it, or see how it affects their own research.

A well-trained LLM can tear down those walls between different fields. Suddenly, you've got an infinitely patient, infinitely knowledgeable assistant. It can write code for you. You can ask it what effect your protein might have on a new material, without having to become, or know, a material scientist.

Everyone having a 'smart' assistant that can offer an expert level understanding of EVERY FIELD will bridge the gaps between the highly specialized geniuses of our time.

Working with the sort of AI we have now will take us to an entirely new level.

9

User1539 t1_j5f9cdo wrote

This is why I keep saying we don't need 'real AGI' to feel the vast majority of the effects we all think we'll see when AGI happens.

We don't need a superhuman thinking machine to do 99% of the tasks people want to automate. What we need is a slice of a 70-IQ factory worker's brain that can do that one thing over and over again.

We already have the building blocks for that.

75

User1539 t1_j04yt4t wrote

I think you're right that this technology, if not any specific implementation, has the potential to destabilize the world as we know it.

I've already had friends losing work to these. I had graphic designers tell me they hardly get asked to do commissions at all anymore. I have a friend who did dictation for a law office, and that dried up all at once. She had to go back to teaching.

It's just the edges of things, today, but it doesn't have to get much better to take your order at McDonalds, answer phones, help you schedule classes, etc ...

It also doesn't take anywhere near 100% market saturation to destabilize things. The unemployment rate peaked at just over 25% during the great depression.

2

User1539 t1_izz0sqa wrote

But, again, people who have the beta of the self-driving Tesla all seem to agree it's not ready for primetime. I've ridden in one in the past 6 months where the owner was telling me he won't use it because it's 'like riding with a teenager: you never know when it's just going to do something stupid and you have to panic and slam the brakes'.

They're still limited hours on the ones they are using as driverless taxis (not Teslas, so who knows how far ahead they are?), but I don't think this is entirely regulatory.

If we had video after video of beta users saying 'I just put my hands on the wheel and fall asleep in NYC traffic', I'd be there with you, but that's not what I'm hearing.

1

User1539 t1_izyrhvl wrote

But, those taxis prove that the regulations have been met. There are licensed trials of driverless taxis.

So, why aren't we using them all the time, everywhere?

The answer seems to be that the driverless taxis are still only used when there's not a lot of traffic, and in very specific areas where the AI has been trained and the roads are well maintained.

So, in certain circumstances that favor the AI, the technology seems pretty much ready. Even the government is allowing it.

I think it really is a technical hurdle to get the AI driving well enough that it can handle every real-world driving situation.

2

User1539 t1_izyr47x wrote

I actually looked that up, and ... well, kind of, but mostly no.

That claim was actually '9X safer', and it was based on a tiny sample size of accidents, and didn't take into account that a person basically has to be driving the car with Autopilot (so there's no accounting for the number of times the human took over to prevent an accident).

Also, almost no one is using autopilot in congested cities, and the tests that have been done weren't promising.

So, 9X safer, with sparse, cherry-picked data?

For areas without a ton of traffic, that are well known to the AI? It seems to do a pretty good job.

I'm not saying we don't nearly have it, or that we won't have it very soon. I'm just not sure it's as good as some people think it is.

2

User1539 t1_izyqe02 wrote

I've been playing with ChatGPT quite a bit, and you can kind of catch it not really understanding what it's talking about.

I was testing whether it could write code, and it's pretty good at spitting out example code that's 90% of what I want it to be. I'm not saying that isn't impressive as hell, especially for easy boilerplate stuff I'd otherwise google and look for an answer to.

That said, in its summary of what it did, it was sometimes wrong. Usually just little things like 'This opens an HTTP server on port 80', where the actual example it wrote opened the port on 8080.
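
To give an idea of the scale of those mistakes (this is a reconstruction from memory, not its actual output), the code looked roughly like this while the summary insisted it was listening on port 80:

```python
# Roughly the kind of snippet it produced (reconstructed, not the real output).
# Its summary said "This opens an HTTP server on port 80", but the code binds 8080.
from http.server import HTTPServer, SimpleHTTPRequestHandler

server = HTTPServer(("0.0.0.0", 8080), SimpleHTTPRequestHandler)
server.serve_forever()
```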

It was like talking to a kid who'd diligently copied their homework from another kid, but didn't quite understand what it said.

Still, as a tool it would be useful as-is, and as an AI it's impressive as hell. But, if you play with it long enough you'll catch it contradicting itself and clearly not quite understanding what it's telling you.

I have seen other PhD-level experiments with AI where you're able to talk to a virtual bot about its surroundings, and it will respond in a way that suggests it really does know what's going on around it, and can help you find and do things in its virtual world.

I think that level of 'understanding' of the text it's producing is still a ways off from what ChatGPT is doing today. Maybe that's what they're excited about in the next version already, or what Google is talking about?

Either way, I'm prepared to have my mind blown by AI's progress on a weekly basis.

1

User1539 t1_izy1t6f wrote

I'm not sure what the hold-up is, honestly. I'm sure that's part of it, but you've also all seen the tech demos that show Teslas pulling into oncoming traffic, so it's tough to argue that it's ready for prime time and no one is simply willing to pull the trigger.

I'm sure we'll get there, but we are definitely behind the imagined timeline of Elon Musk, who's really proven that he's mostly full of shit at this point, and shouldn't be listened to or trusted.

I think there was a lot of hype, and frankly lies, that clouded our judgement on that one, and now I'm hesitant to say that I feel like I know what the state of things really is.

I'm not sure if we're in a similar bubble with other things or not?

Things are definitely moving along at breakneck speeds. 5 months or 5 years probably doesn't really matter in the long run.

5

User1539 t1_izv2sai wrote

I doubt anyone knows for sure. OpenAI is already telling people not to take this iteration seriously, because what they're working on is so much better. Meanwhile, you've got Google telling everyone this is nothing compared to what they're working on.

So, I'd say it's certainly possible we'll see that kind of rapid improvement at least over the short term.

But, then you've got spaces like self-driving cars where it seemed very realistic that we'd have that problem solved 5 years ago.

We'll just have to wait and see.

32

User1539 t1_iysht53 wrote

Reply to comment by Kolinnor in The year in conclusion by Opticalzone

I was going to say, AI breakthroughs alone would have been enough to keep me feeling like we're speeding toward something.

Then you look at breakthroughs in Graphene, Fusion, magnetics, etc ... and it feels like the future is coming at an increasing pace.

Which is what we're here for, right?

12

User1539 t1_iyksfc2 wrote

It's okay, we're all getting hit at the same time. It's not like there are enough 'future-proof' jobs for enough people that we won't simply need to move beyond the concept that everyone needs to work.

It'll probably happen in phases. First we'll just let the job market shrink and pay people more. The current system is absurd anyway. We throw away a lot of what we create, for no better reason than to pad the market. There's no reason for everyone to work. We used to be fine with women staying home, and they now make up 51% of the workforce.

So, we shrink to WWI levels of employment. Then we set the retirement age very low. Then we just adjust that until people are, basically, doing a 4-year 'tour' after college, and those with better jobs get more.

Eventually, of course, all the jobs will just go away, but you've done your 5 years and have been living on a pension for a decade before then anyway, so no worries.

2

User1539 t1_iy6d6pi wrote

Just because we know how to build a lighter/stronger bridge doesn't mean every bridge in the country will suddenly be lighter and stronger. Someone has to go out and actually build those things.

But, one can imagine a future where the design of a new factory line is done in seconds by an AI, and assembled by machine.

So, I can see where people, especially in this particular sub, could imagine new factories popping up where the curing time of concrete is the only time factor.

I'm not going to tell them that'll never happen, but having worked to create automated systems on factory floors, I know that right now it takes months to get some basic wiring purchased and installed.

4

User1539 t1_iy6bvxd wrote

The light bulbs are in production, and commercially available. Apparently using graphene for the filament makes them more reliable?

I know there have been some samples of CR2032 rechargeable batteries sent out, and that company produces its own graphene. Also, the CR2032 is just a quick-to-market product, and they have plans for much larger graphene-aluminum batteries.

I feel like graphene gets unfairly beat on because the second it was discovered, it was found to have so many interesting and useful properties the news was literally flooded with different possible applications.

By the time there was even a single industrial-scale source, it was already well past the hype cycle. People just don't understand that most 'new' technologies are just more of the plastic/coater technologies DuPont has been running for decades, and even the tweaks we call a breakthrough amount to spraying something slightly different on rolls of plastic.

So, when something genuinely new comes out, you have to know from industry experience that it takes years to set up a 'new' industrial coating system, and that's just basically doing the same thing with some slightly different chemicals.

People just don't understand the time it takes to build a genuinely new factory line.

But, there is commercially available graphene, in industrial quantities. So, at least there's a reliable source of it to start working with.

It's happening. Granted, slowly, but no one said that it would make it to market immediately when they started realizing how useful it was.

7

User1539 t1_iy4tr9y wrote

I think you're conflating two different aspects of the argument.

You seem to be suggesting that if the code produced is, ultimately, just adding to, modifying, or using existing codebases, then it's not 'AI', or that if it's not 'from scratch' then it's not 'AI'.

There are a few things to break down here. First, the code generated isn't the AI, and if the AI is just stitching together libraries to achieve a goal, well, that's what humans are doing too.

Most libraries will be re-written, by humans, over time, because new languages are invented and newer design patterns are accepted, etc ... and those new libraries, right now, are being written with the help of machine learning.

So, the 'produced code' not being wholly original isn't really any different than what people are doing now.

The 'AI' part of the process is where the pattern recognition abilities of machine learning are leveraged to generate working 'code' from human spoken language.

A computer without a trained natural language processor couldn't be told 'I need a webpage, that you log into, that will display results of a test where the database of the results are ...'

So, you would tell that to a developer, and count on their years of experience to understand how to pull the results of the test into a database, write a simple application to provide some system for logging in, displaying data, etc ...

If a human were doing that, they would likely use something like Spring Boot to generate boilerplate code, then something like Keycloak to handle the security features, and ultimately a front-end JavaScript framework to handle displaying the data.

So, where the AI comes in, is that it can recognize what the human wants from a natural language description and build it without the need for any more input than a human would have to give.

We're almost there, too. We can already describe fairly low-level logic, like sorting through a set of data and retrieving a record based on criteria, then using that record to perform a task, with machine learning systems like Copilot.
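
To give a concrete sense of the level I mean, here's a toy example of the kind of thing you can already get generated from a one-line comment (my own illustration, not actual Copilot output):

```python
# "Find the most recent failed result for a given user and re-queue its test."
# (My own toy illustration of the comment-to-code level Copilot works at,
# not something Copilot actually produced.)

def requeue_latest_failure(results, user_id, queue):
    failures = [r for r in results if r["user"] == user_id and r["status"] == "failed"]
    if not failures:
        return None
    latest = max(failures, key=lambda r: r["timestamp"])  # pick the record by criteria
    queue.append(latest["test_id"])                       # use that record to perform a task
    return latest
```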

If we see a broadening of something like that, to allow for the high-level description of complex algorithms, it'll become the de facto standard for creating future AI, and that AI will just be turned right around and used on the problem of understanding natural language and generating code, like a feedback loop.

When the AI is good enough, I'm sure someone will say 'rewrite all these libraries, but find any bugs (and there are plenty), and fix them'.

Then we'll see the tables turn. We'll have AI using code written by AI, to produce applications as described to it from humans speaking natural language.

The compiler is already doing some optimization too. If you code something in a human readable, but ultimately inefficient, way the compiler will likely just re-organize that to be more efficient when it generates machine code.

A good example of where things may go is that AI is starting to find some interesting algorithms in pure math. An important one to pay attention to is matrix multiplication, because it's something that computers have to do all the time, and it's very tedious, and difficult to optimize. In general, there is one good way to do it, and that's what any human will code when asked.

However, under certain circumstances, for specific sizes of matrices, you can optimize the algorithm and save the computer a ton of resources.
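
The classic hand-discovered example of that idea is Strassen's trick, which multiplies 2x2 blocks with seven multiplications instead of eight; the AI-found algorithms are refinements of the same game for other specific sizes. A quick sketch of the 2x2 case (my own code, just for illustration):

```python
# Strassen's method for 2x2 matrices: 7 multiplications instead of the naive 8.
# (My own illustration; applied recursively to large matrices, this is exactly
# the kind of special-case optimization almost nobody bothers to hand-code.)

def strassen_2x2(A, B):
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    p1 = a * (f - h)
    p2 = (a + b) * h
    p3 = (c + d) * e
    p4 = d * (g - e)
    p5 = (a + d) * (e + h)
    p6 = (b - d) * (g + h)
    p7 = (a - c) * (e + f)
    return [[p5 + p4 - p2 + p6, p1 + p2],
            [p3 + p4, p1 + p5 - p3 - p7]]

# Sanity check against the ordinary product:
assert strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]) == [[19, 22], [43, 50]]
```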

Almost no developer, today, even knows these algorithms exist. They're basically an AI curiosity. Even knowing they exist, I'll bet practically no one is using them, because the time and effort to study and code them is more than the general performance gain from implementing them would be worth.

What we'll see, and are frankly already starting to see, is that an AI will recognize those rare, special, conditions under which it can optimize something, and will generate the code to do so.

So, it really won't be long before we see a re-implementation of a lot of those libraries and stuff.

Then we'll all be stitching together AI code ... except, probably not, because we probably won't be coding at all. We'll just be describing our needs in natural language, and the AI platform will do the development.

1