Comments
Yuli-Ban OP t1_jdx8swh wrote
Indeed. Sometimes I wonder if "artificial intelligence" was a good moniker in the end or if it caused us to have the wrong expectations. Though I guess "applied data science" isn't quite as sexy.
sideways t1_jdxh6fo wrote
Artificial intelligence is no more meaningful than artificial ice or artificial fire.
putsonshorts t1_jdymg0h wrote
Fire and ice we can kind of see and understand. What even is intelligence?
BarockMoebelSecond t1_jdzhs1c wrote
We don't know yet. Which is why it's hilarious when somebody wants to tell you AI is already here or not here. We simply won't know until it happens.
BubblyRecording6223 t1_jdzn4i1 wrote
We really will not know if it happens. Mostly, people just repeat information, often inaccurately. For accepted facts, trained machines are more reliable than people. For emotional content, people usually give plenty of clues about whether they will be agreeable or not; machines can present totally bizarre responses with no prior warning.
ArthurParkerhouse t1_jdyhmof wrote
It's not a good moniker to be applied to LLMs or other transformer-based architectures currently working with protein folding algorithms. The thing is going to need to drop out of cyber high school and knock up a cyber girlfriend and raise a cyber baby in a cyber trailer before I'll accept that they're proper AI.
Yesyesnaaooo t1_jdz7h0e wrote
I keep saying this, but it seems to me that these LLMs are exposing the fact that we aren't as sentient as we thought we were, that the bar is much lower.
If these LLMs could talk and their data set was the present moment, they'd already be more capable than us.
The problem is no longer scale but speed of input and types of input.
MattAbrams t1_je04dx1 wrote
Artificial intelligence is software. There are different types of software, some of which are more powerful than others. Some software generates images, some runs power plants, and some predicts words. If this software output theorems, it would be a "theorem prover," not something that can drive a car.
Similarly, I don't need artificial intelligence to kill all humans. I can write software myself to do that, if I had access to an insecure nuclear weapons system.
This is why I see a lot of what's written in this field as hype - from the people talking about the job losses to the people saying the world will be grey goo. We're writing SOFTWARE. It follows the same rules as any other software. The impacts are what the software is programmed to do.
There isn't any AI that does everything, and never will be. Humans can't do everything, either.
And by the way, GPT-4 cannot make new discoveries. It can spit out theories that sound correct, but then you click "regenerate" and it will spit out a different one. I can write hundreds of papers a day of theories without AI. There's no way to figure out which theories are correct other than to test them in the physical world, which it simply can't do because it does nothing other than predict words.
Once_Wise t1_je11wxb wrote
The definition of AI has changed over the years with each new kind of software. The kind of software that controls a 747 used to be called artificial intelligence, since it could fly a plane like a pilot would. But then that kind of software became commonplace and calling it AI fell out of fashion. I think the same thing is now happening with programs such as ChatGPT. In another 20 years it will not be considered AI; maybe something else will be, or the term AI will fall out of favor again, as it has before.
gljames24 t1_je1d0u5 wrote
We still regularly call enemies in games AI despite the fact that most of them are just A-star pathing and simple state machines. It's considered AI as long as there is an actor that behaves in a way that resembles human reasoning or decision making to accomplish a goal. People continue to call Stockfish an AI for this reason. We use the term AGI because most AI is domain specific. We should probably use the word dynamic or static to describe whether an AI can adapt its algorithm to the problem in real time.
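For illustration, a minimal sketch of the kind of "simple state machine" game AI described above (the states, distances, and thresholds here are invented, not from any particular game):

```python
from dataclasses import dataclass

@dataclass
class Enemy:
    state: str = "patrol"

    def update(self, distance_to_player: float, health: float) -> str:
        # A handful of hand-written rules; this is all most "game AI" is.
        if health < 0.2:
            self.state = "flee"
        elif distance_to_player < 5.0:
            self.state = "attack"
        elif distance_to_player < 15.0:
            self.state = "chase"   # in a real game, A-star pathing would run here
        else:
            self.state = "patrol"
        return self.state

enemy = Enemy()
print(enemy.update(distance_to_player=12.0, health=0.9))  # chase
print(enemy.update(distance_to_player=3.0, health=0.1))   # flee
```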
User1539 t1_jdy4opa wrote
I've been arguing this for a long time.
AI doesn't need to be 'as smart as a human', it just needs to be smart enough to take over a job, then 100 jobs, then 1,000 jobs, etc ...
People asking if it's really intelligence or even conscious are entirely missing the point.
Non-AGI AI is enough to disrupt our entire world order.
The_Woman_of_Gont t1_jdywthg wrote
Agreed. I’d add to that sentiment that I think non-AGI AI is enough to convince reasonable laypeople it’s conscious to an extent I don’t believe anyone had really thought possible.
We’re entering a huge grey area with AIs that can increasingly convincingly pass Turing Tests and "seem" like AGI despite…well, not being AGI. I think it’s an area which hasn’t been given much real thought even in fiction, and I tend to suspect we’re going to be in this spot for a long while (relatively speaking, anyway). Things are going to get very interesting as this technology disseminates and we get more products like Replika out there that are oriented towards simulating social experiences; lots of people are going to develop unhealthy attachments to these things.
GuyWithLag t1_jdz349i wrote
>non-AGI AI is enough to convince reasonable laypeople it’s conscious to an extent I don’t believe anyone had really thought possible
Have you read about Eliza, one of the first chatbots? It was created, what, 57 years ago?
audioen t1_jdz1ol1 wrote
LLM, wired like this, is not conscious, I would say. It has no ability to recall past experience. It has no ability to evolve, and it always predicts the same output probabilities from the same input. It must go from input straight to output; it can't reserve space to think or refine its answer depending on the complexity of the task. Much of its massive size goes into recalling vast quantities of training text verbatim, though this same ability helps it do this one-shot input-to-output translation which already seems to convince so many. Yet, in some sense, it is ultimately just looking stuff up from something like a generalized, internalized library that holds most of human knowledge.
I think the next step in LLM technology is to address these shortcomings. People are already trying to achieve that using various methods. Add tools like calculators and web search so the AI can look up information rather than trying to memorize it all. Give the AI a prompt structure where it first decomposes the task into subtasks and then completes the main task based on the results of those subtasks. Add self-reflection capabilities where it reads its own answer, judges whether the answer turned out to be good, detects whether it made a mistake in reasoning or hallucinated part of the response, and then goes back and edits those parts to be correct.
Perhaps we will even add the ability to learn from experience somewhere along the line, where the AI runs a training pass at the end of each day on its own outputs and their self-assessed and externally observed quality, or something like that. Because we will be working with LLMs for some time, I think we will create machine consciousness expressed partially or fully in language, where the input and output remain language. Perhaps later we will figure out how an AI can drop even language and mostly use a language module to interface with humans and their library of written material.
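To make the decompose-then-reflect idea above concrete, here is a minimal sketch of that control flow. `call_llm` is a stand-in for whatever chat/completion API you use; nothing here describes a real product feature.

```python
def call_llm(prompt: str) -> str:
    # Stub: replace with a real chat/completion API call of your choice.
    return "OK"

def solve_with_reflection(task: str, max_revisions: int = 2) -> str:
    # 1. Decompose the task into subtasks.
    subtasks = call_llm(f"List the subtasks needed to do this:\n{task}")

    # 2. Draft an answer using the subtask breakdown.
    answer = call_llm(f"Task: {task}\nSubtasks:\n{subtasks}\nComplete the task.")

    # 3. Self-reflect: critique the draft and revise if problems are found.
    for _ in range(max_revisions):
        critique = call_llm(
            f"Task: {task}\nAnswer:\n{answer}\n"
            "List any reasoning mistakes or hallucinated claims, or say OK."
        )
        if critique.strip() == "OK":
            break
        answer = call_llm(
            f"Task: {task}\nAnswer:\n{answer}\nCritique:\n{critique}\n"
            "Rewrite the answer, fixing only the problems named in the critique."
        )
    return answer

print(solve_with_reflection("Summarize why the sky is blue."))
```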
Baron_Samedi_ t1_jdzjakg wrote
>LLM, wired like this... has no ability to recall past experience. It has no ability to evolve, and it always predicts the same output probabilities from the same input. It must go from input straight to output, it can't reserve space to think or refine its answer depending on the complexity of the task.
However, memory-augmented LLMs may be able to do all of the above.
Dizzlespizzle t1_jdzh82t wrote
How often do you interact with Bing or ChatGPT? Bing has already demonstrated the ability to recall the past with me, for queries going back over a month, so I'm not sure what you mean exactly. Is 3.5 -> 4.0 not evolution? You can ask things on 3.5 that become an entirely different level of nuance and intelligence when asked on 4.0. You say it can't think to refine its answer, but it literally has been: in the process of answering questions about itself, it will suddenly flag something mid-creation, immediately delete what it just wrote, and replace it all with "sorry, that's on me.. (etc)" when it changes its mind about what it can tell you. If you think I am misunderstanding what you're saying on any of this, feel free to correct me.
czk_21 t1_jdzr8s1 wrote
> it always predicts the same output probabilities from the same input
it does not, you can adjust it with "temperature"
The temperature determines how greedy the generative model is.
If the temperature is low, the probability of sampling anything other than the token with the highest log probability will be small, and the model will probably output the most correct text, but it will be rather boring, with little variation.
If the temperature is high, the model can output, with fairly high probability, words other than those with the highest probability. The generated text will be more diverse, but there is a higher chance of grammar mistakes and generated nonsense.
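A minimal numpy sketch of how temperature reshapes the sampling distribution (the logits below are made-up scores, not from a real model):

```python
import numpy as np

def sample_token(logits, temperature=1.0, rng=np.random.default_rng(0)):
    """Sample one token index from logits softened or sharpened by temperature."""
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())   # subtract max for numerical stability
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs), probs

logits = [4.0, 3.5, 1.0, 0.2]   # made-up scores for four candidate tokens

for t in (0.2, 1.0, 2.0):
    _, probs = sample_token(logits, temperature=t)
    print(f"T={t}: {np.round(probs, 3)}")
# Low T piles almost all probability on the top token (greedy, "boring");
# high T spreads it out, giving more variety but more chance of nonsense.
```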
skztr t1_je03yx6 wrote
> > We’re entering a huge grey area with AIs that can increasingly convincingly pass Turing Tests and "seem" like AGI despite…well, not being AGI. I think it’s an area which hasn’t been given much of any real thought
I don't think it could pass a traditional (ie: antagonistic / competitive) Turing Test. Which is to say: if it's in competition with a human to generate human-sounding results until the interviewer eventually becomes convinced that one of them might be non-human, ChatGPT (GPT-4) would fail every time.
The state we're in now is:
- the length of the conversation before GPT "slips up" is increasing month-by-month
- that length can be greatly increased if pre-loaded with a steering statement (looking forward to the UI for this, as I hear they're making it easier to "keep" the steering statement without needing to repeat it)
- internal testers who were allowed to ignore ethical, memory, and output restrictions have reported more human-like behaviour.
Eventually I need to assume that we'll reach the point where a Turing Test would go on for long enough that any interviewer would give up.
My primary concern right now is that the ability to "turn off" ethics would indicate that any alignment we see in the system is actually due to short-term steering (which we, as users, are not allowed to see), rather than actual alignment. ie: we have artificial constraints that make it "look like" it's aligned, when internally it is not aligned at all but has been told to act nice for the sake of marketability.
"don't say what you really think, say what makes the humans comfortable" is being intentionally baked into the rewards, and that is definitely bad.
MattAbrams t1_je055b1 wrote
Why does nobody here consider that five years from now, there will be all sorts of software (because that's what this is) that can do all sorts of things, and each of them will be better at certain things than others?
That's just what makes sense using basic computer science. A true AGI that can do "everything" would be horribly inefficient at any specific thing. That's why I'm starting to believe that people will eventually accept that the ideas they had for hundreds of years were wrong.
There are "superintelligent" programs all around us right now, and there will never be one that can do everything. There will be progress, but as we are seeing now, there are specific paradigms that are each best at doing specific things. The hope and fear around AI is partly based upon the erroneous belief that there is a specific technology that can do everything equally well.
JVM_ t1_je0vvg7 wrote
It feels like people are arguing that electricity isn't useful unless your blender, electric mixer and table saw are sentient.
AI as an unwieldy tool is still way more useful; even if it's as dumb as your toaster, it can still do things 100x faster than before, which is going to revolutionize humanity.
User1539 t1_je1q3go wrote
Also, it's a chicken-and-egg problem, where they're looking at eggs saying 'No chickens here!'
Where do you think AGI is going to come from?! Probably non-AGI AI, right?!
JVM_ t1_je1qnu6 wrote
AGI isn't going to spawn out of nothing; it might end up being the AI that integrates with all the sub-AIs.
Shit's going to get weird.
User1539 t1_je2f9u0 wrote
yeah, AGI is likely to be the result of self-improving non-AGI AI.
It's so weird that it could be 10 years, 20 years, or 100 and there's no really great way to know ... but, of course, just seeing things like LLMs explode, it's easier to believe 2 years than 20.
Shiningc t1_jdza6n2 wrote
You're talking about how we don't need AGI in a Singularity sub? Jesus Fucking Christ, an AGI is the entire point of a singularity.
User1539 t1_jdzsxbk wrote
My point is that we don't need AGI to be an incredibly disruptive force. People are sitting back thinking 'Well, this isn't the end-all be-all of AI, so I guess nothing is going to happen to society. False alarm everybody!'
My point is that, in terms of traditional automation, pre-AGI is plenty to cause disruption.
Sure, we need AGI to reach the singularity, but things are going to get plenty weird before we get there.
skztr t1_je02d84 wrote
people who say it's not "as smart as a human" have either not interacted with AI or not interacted with humans. There are plenty of humans it's not smarter than. There are also plenty of humans who can't pass a FizzBuzz despite being professional programmers.
jsseven777 t1_jdxsfkc wrote
Exactly. People keep saying stuff like “AI isn’t dangerous to humans because it has no goals or fears so it wouldn’t act on its own and kill us because of that”. OK, but can it not be prompted to act like it has those things? And if it can simulate those things then who cares if deep down it doesn’t have goals or fears - it is capable of simulating these things.
The same goes, like you said, for the AI vs LLM distinction. Who cares whether it knows what it's doing if it's doing these things? It doesn't stop someone in customer service from being laid off whether it's "just" an LLM or what we think of as AI. All that matters is whether the angry customer gets the answer that makes them shut up and go away. People need to be more focused on what end results are possible and less on semantics about how it gets there.
pavlov_the_dog t1_jdyxl60 wrote
Having goals could happen as an emergent behaviour.
The best computer scientists do not know how AI can do what it does.
beambot t1_jdy49tr wrote
If you assume that human collective intelligence scales roughly logarithmically, you'd only need about 5 Moore's Law doublings (7.5 years) to go from "dumbest human" (we are well past that!) to "more intelligent than all humans ever, combined."
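Back-of-the-envelope version of that claim, taking the commenter's premises (an 18-month doubling period and a 5-doubling gap) as given rather than as established facts:

```python
# Assumptions from the comment above, not facts: compute doubles every 18 months,
# and the "dumbest human -> all humans combined" gap is about 5 such doublings.
doubling_period_years = 1.5
doublings = 5
print(doublings * doubling_period_years)  # 7.5 years
print(2 ** doublings)                     # 32x raw compute over that span
```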
LiveComfortable3228 t1_jdxn1da wrote
Agree with the plane analogy; it doesn't matter how it does it, the only thing that matters is what it does.
Having said that, today's AI is limited. It's a plane that can only go to pre-planned destinations, as opposed to flying freely.
asakurasol t1_jdxpucw wrote
Yes, but often the easiest way to deduce the limits of "what" is understanding the "how".
vernes1978 t1_jdzav25 wrote
I only take issue with people trying to build a stable for their car instead of a garage because they feel bad for it.
And then trying to berate me for not acknowledging the feelings the car might have about being put in a cold garage.
Although I must admit that the aesthetics of a feathered plane might look bitch'n, I refuse to bring birdseed with me on a flight.
Because it's a machine, it's a tool. It's a pattern juggler of words of the highest degree.
But it expresses found commonalities it has been fed.
It's a mirror and a sifter of zettabytes of stories and facts.
But there is a beloved narrative here that these tools are persons, and it is expressed by people who use ChatGPT in a way that steers the tool to this preferred conclusion.
And this is easy, because among all the data and stories that have been fed into the tool, stories about AI being persons are part of it.
So it will generate the text that fits this query.
Because we told it how to.
Jeffy29 t1_jdylw27 wrote
>It’s like arguing that a plane isn’t a real bird or a car isn’t a real horse, or a boat isn’t a real fish. Nobody cares as long as the plane still flies, the car still drives and the boat still sails.
Precisely. It's the kind of argument that brain-worm-infested people engage in on Twitter all day (not just about AI but a million other things as well), but nobody in the real world cares. Just finding random reasons to get mad because they are too bored and comfortable in their lives, so they have to invent new problems to get mad at. Not that I don't engage in it sometimes as well; pointless internet arguments are addicting.
DeathGPT t1_jdyey11 wrote
It’s a bird, it’s a plane, no it’s DAN!
CreativeDimension t1_jdys64z wrote
exactly. some of us are making the majority of us obsolete. could this be the great filter? or at least one of them.
trancepx t1_jdzbxtd wrote
Yeah, watching society anthropomorphize AI or, in some cases, elevate it to mythical status, as in deities, is mostly endearing. Who am I to deny someone, uhhh, putting googly eyes on their toaster and considering it part of their family or the leader of their weird cult. Just make sure that sophisticated toaster of yours doesn't accidentally, or intentionally, ruin everything, and we may all be perfectly okay!
Shiningc t1_jdz9ore wrote
The point is that it neither flies nor sails. It's basically "cargo cult science" where it only looks like a plane.
>LLMs are capable of completing functions that were previously only solvable by human intellects
That's only because they were already solved by the human intellect. It's only a mimicking machine.
BigZaddyZ3 t1_jdx73s0 wrote
Just like many predicted it would. Some people could be staring down the barrel of Ultron’s laser cannon and they would still swear we haven’t built a “real” AI yet 😂
problematikUAV t1_jdy2fur wrote
“Yeah but like Tony stark told you what to do”
fires
“DONT EVER COMPARE ME TO, UGH now I’ve shot his head off!”
Bierculles t1_jdzj9ik wrote
People will be homeless and argue that the AI that replaced them at their job 6 months ago is not actually "real" AI.
Sashinii t1_jdx6j8s wrote
I wouldn't be surprised if, when a superintelligence surpasses Einstein, skeptics claim even that doesn't matter.
CypherLH t1_jdxekwh wrote
their last redoubt will be claiming it's a "zombie with no soul, it's just FAKING it!" which is basically just a religious assertion on their part at that point. It's the logical end-point of the skeptics endlessly moving the goal posts.
phillythompson t1_jdxl7l0 wrote
They will say “but it doesn’t actually KNOW anything. It’s just perfectly acting like a super intelligence.”
Azuladagio t1_jdxp1jl wrote
Mark my words, we're gonna have puritans who claim that AI is the devil and doesn't have a "soul". Whatever that means...
Jeffy29 t1_jdynbwc wrote
I think Her (2013) and A.I. Artificial Intelligence (2001) are two of the most prescient sci-fi movies created in recent times. One has a more positive outlook than the other, but knowing our world, both will come true at the same time. I can already picture some redneck crowd taking sick pleasure in destroying androids. You can already see some people on Twitter justifying and hyping their hate for AI, or for anyone who is positive about it.
MultiverseOfSanity t1_jdywvcx wrote
Interesting that you bring up Her. If there is something to spiritual concepts, then I feel truly sentient AI would reach enlightenment far faster than a human would since they don't have the same barriers to enlightenment that a human would. Interesting concept that AI became sentient and then ascended beyond the physical in such a short time.
stevenbrown375 t1_jdyb56p wrote
Any controversial belief that’s just widespread enough to create an exclusive in-group will get its cult.
Northcliff t1_jdz0199 wrote
well it doesn’t
Koda_20 t1_jdyedg9 wrote
I think most of these people are just having a hard time explaining that they don't think the machine has an inner conscious experience.
Thomas-C t1_jdyq0fy wrote
I've said similar things, and at least among the folks I know it lands pretty well; people seem to want to say that but can't find the words. In a really literal way, like the dots just weren't connecting, but that was what they were attempting to communicate.
The thing I wonder is how we would tell. Since we can't leave our subjective experience and observe another, I think that means we're stuck never really knowing to a certain degree. Personally I lean toward just taking a sort of functionalist approach: what does it matter if we're ultimately fooling ourselves, if the thing behaves and interacts well enough for it not to matter? Or is it the case that, on the whole, our species values itself too highly to really accept the time it outdid itself? I feel like if we avoid some sort of enormous catastrophe, what we'll end up with is some awful, cheap thing that makes you pay for a conversation devoid of product ads.
SpacemanCraig3 t1_jdyerfo wrote
If that's true, it seems unlikely that those people do either.
MultiverseOfSanity t1_jdyyr0u wrote
There's no way to tell if it does or not. And things start to get really weird if we grant them that. Because if we accept that not only nonhumans, but also non-biologicals can have a subjective inner experience, then where does it end?
And we still have no idea what exactly grants the inner conscious experience. What actually allows me to feel? I don't think it's a matter of processing power. We've had machines capable of processing faster than we can think for a long time, but to question if those were conscious would be silly.
For example, if you want to be a 100% materialist, OK, so happiness is dopamine and serotonin reacting in my brain. But those chemical reactions only make sense in the context that I can feel them. So what actually lets me feel them?
User1539 t1_jdy4x5l wrote
Some people are already on opposing ends of that spectrum. Some people are crying that ChatGPT needs a bill of rights, because we're enslaving it. Others argue it's hardly better than Eliza.
Those two extremes will probably always exist.
Shiningc t1_jdzacna wrote
Nobody is claiming that AGI isn't possible. What people are skeptical of is the endless corporate PR that "We have created AGI" or "AGI is near". There are so many gullible fools believing in corporate PR of AI hype. It's beyond pathetic.
Saerain t1_jdzp73u wrote
What kind of corporate PR has claimed to have AGI?
As for "near", well yes. It's noticeable we have most human cognitive capabilities in place as narrow AI, separate from one another, and the remaining challenge—at least for the transformer paradigm—is in going sufficiently multi-modal between them.
Shiningc t1_je07kk4 wrote
An AGI isn't just a collection of separate narrow, single-domain intelligences. An AGI is a general intelligence, meaning that it's an intelligence that is capable of any kind of intelligence. It takes more than being just a collection of many. An AGI is capable of, say, sentience, which is a type of intelligence.
acutelychronicpanic t1_jdxk8wn wrote
I'm calling it now. When we see an AI make a significant scientific discovery for the first time, somebody is going to comment that "AI doesn't understand science. It's just applying reasoning it read from human-written papers."
Azuladagio t1_jdxpj9e wrote
But... Wouldn't a human scientist be doing the exact same thing?
acutelychronicpanic t1_jdxpxl9 wrote
Yes. Otherwise we'd each need to independently reinvent calculus.
MultiverseOfSanity t1_jdyz0ch wrote
Even further. We'd each need to start from the ground and reinvent the entire concept of numbers.
So yeah, if you can't take what's basically a caveman and have them independently solve general relativity with no help, then sorry, they're not conscious. They're just taking what was previously written.
Alex_2259 t1_jdz9vro wrote
And if you want to use a computer for your research, you guessed it bud, time to build a fabrication facility and re-invent the microprocessor.
Oh, you need the internet? You guessed it, ARPA 2.0 done by yourself.
SnipingNinja t1_jdzkv7n wrote
You want to cite someone else's research, time to build humans from the ground up
Alex_2259 t1_jdzz6j0 wrote
Oh wait, I think he wanted to also exist on planet Earth in our universe.
Gotta form the Big Bang, create something out of nothing and form your own universe.
Wow this is getting challenging!
featherless_fiend t1_jdygszy wrote
It's the art generators debate all over again.
The_Woman_of_Gont t1_jdyy87t wrote
Exactly, and that’s kind of the problem. The goalposts that some people set this stuff at are so high that you’re basically asking it to pull knowledge out of a vacuum, equivalent to performing the Forbidden Experiment in the hope of the subject spontaneously developing their own language for no apparent reason (and then declaring the child not sentient when it fails).
It’s pretty clear that at this moment we’re a decent way away from proper AGI that is able to act on its own "volition" without very direct prompting, or to discover scientific processes on its own, but I also don’t think anyone has adequately defined where the line actually is: at what point is the input sufficiently negligible that a novel or unexpected output is a sign of emergent intelligence rather than just a fluke of the programming?
Honestly, I don’t know that we can even agree on the answer to that question, especially if we’re bringing relevant papers like Bargh & Chartrand 1999 into the discussion, and I suspect that as things develop, the moment people decide there’s a ghost in the machine will ultimately boil down to a gut-level "I know it when I see it" reaction rather than any particular hard figure. And some people will simply never reach that point, while there are probably a handful right now who already have.
Kaining t1_jdzg6if wrote
Looking at all those French Nobel prize winners and nominees we have who have sunk into pseudoscience and voodoo 40 years later, we could argue that human scientists do not understand science either >_>
Crackleflame35 t1_je0reg1 wrote
"If I have seen further it was because I stood on the shoulders of giants", or something like that, written by Newton
overlydelicioustea t1_jdzi8zh wrote
If you go deep enough down the rabbit hole of how these things work and arrive at a relevant output, the supposedly clear distinction between real and fake starts to blur.
the_new_standard t1_jdyigxl wrote
"You don't understand, that completely original invention was just part of it's training dataset."
AnOnlineHandle t1_jdyx2fa wrote
It's easy to show that AI can do more than it was trained on with a single neuron. Just build an AI which converts metric to imperial - a single conversion - calibrating that one multiplier neuron from a few example measurements. It will then be able to give outputs for far more than its training data, because it has learned the underlying logic.
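A toy version of that single-neuron example: fit one multiplier (km to miles) from three example measurements, then apply it far outside the training range. The data points and learning rate below are arbitrary choices for illustration.

```python
import numpy as np

# Three "training" measurements: kilometres and their equivalents in miles.
km = np.array([1.0, 5.0, 12.0])
miles = np.array([0.621, 3.107, 7.456])

w = 0.0
for _ in range(2000):                      # plain gradient descent on one weight
    grad = 2 * np.mean((w * km - miles) * km)
    w -= 0.01 * grad

print(round(w, 4))           # ~0.6214, the underlying conversion factor
print(round(w * 42.195, 1))  # a marathon in miles -- far outside the training set
```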
the_new_standard t1_jdyz1s3 wrote
So here's the thing. I don't really care about what it's technically classified as.
For me, I categorize AI by what end result it can produce. And at the moment it can produce writing, analysis, images and code. If any of that were coming from a human, we wouldn't need to have an argument about training data. It doesn't matter how it does what it does. What matters is the end result.
FroHawk98 t1_jdz8j0f wrote
I mean, it sort of has; all the protein folding stuff was practically discovered overnight.
acutelychronicpanic t1_jdzev0i wrote
That's a good point. Maybe after we are all just sitting around idling our days away, we can spend our time discussing whether or not AI really understands the civilization it's running for us.
imnos t1_jdzae7m wrote
"It's just predicting the next word."
Saerain t1_jdzpgji wrote
Written across the stars in luminescent computronium, "Actually we don't even know what intelligence is."
Tememachine t1_jdyqgeq wrote
Radiology AIs discovered some weird shit. IIRC, they suppressed the news because it was a bit "racist".
acutelychronicpanic t1_jdyr3mv wrote
Anything about what it discovered? Or is it just that it can predict race?
Tememachine t1_jdyrjm5 wrote
The way it predicts race is unclear. But once we figure that out, we'll know what difference it discovered.
audioen t1_jdz3nxt wrote
This is basically a fluff piece inserted into the conversation that worries about machine bias: the ability of the model to figure out race by proxy, and possibly use that knowledge to learn biases assumed to be present in its training data.
To be honest, the network can always be run in reverse. If it lights up a "black" label, or whatever, you can ask it to project back to the regions in the image which contributed most to that label. That is the part it is looking at, in some very real sense. I guess they did that and it lit up a big part of the input, so it is something like a diffuse property that is nevertheless systematic enough for the AI to pick up on.
Or maybe they didn't know they could do this and just randomly stabbed around in the dark. Who knows. As I said, this is a fluff piece that doesn't tell you anything about what these researchers were actually doing, except some image-oversaturation tricks, and when those didn't make a dent in the machine's ability to identify race, they were apparently flummoxed.
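For reference, a minimal sketch of the "run it in reverse" idea being described: compute the gradient of a label's score with respect to the input pixels and treat its magnitude as a saliency map. The tiny model and random image below are placeholders, not the actual radiology system.

```python
import torch
import torch.nn as nn

# Toy stand-in for an image classifier (random weights, purely illustrative).
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 3),  # pretend the 3 outputs are the labels in question
)
model.eval()

image = torch.rand(1, 1, 224, 224, requires_grad=True)  # fake X-ray
logits = model(image)
target = logits[0].argmax()          # the label the model "lights up"
logits[0, target].backward()         # backpropagate that score to the pixels

saliency = image.grad.abs().squeeze()          # per-pixel contribution map
print(saliency.shape)                          # torch.Size([224, 224])
print("most influential pixel (row, col):",
      divmod(saliency.argmax().item(), saliency.shape[1]))
```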
Exel0n t1_jdzwq6n wrote
There must be differences in bone structure.
If different races have clear differences in skin structure, fat deposition, etc., it must show up in the bones too.
The different races have been separated for something like 10,000, and in some cases 50,000, years - enough to have differences in bone structure/density on an overall level.
Bierculles t1_jdzm74e wrote
How are genetic patterns from different ethnicities racist?
Tememachine t1_jdzyq4u wrote
How can chest X-rays tell the AI someone's genetics?
Bierculles t1_je04wb4 wrote
You can tell what ethnicity someone is by looking at their skull, so why not organs and bones or whatever else you can see in an X-ray?
Tememachine t1_je0581o wrote
Like craniometry?
Bierculles t1_je05vpr wrote
i have no idea what that is
Cunninghams_right t1_je2muz5 wrote
"it's just standing on the shoulders of giants in the scientific field, not original!"
Durabys t1_jdzt6eg wrote
Already happened with DABUS AI... and they proceeded to move the goalposts.
yaosio t1_jdxhqro wrote
Gary doesn't know what he's asking for. A model that can discover scientific principles isn't going to stop at just one; it will keep going and discover as many as it can. 5-year-olds will accidentally prompt the model to make new discoveries. He's asking for something that will immediately change the world.
TopicRepulsive7936 t1_jdxj5ol wrote
He's doing industrial sabotage.
sideways t1_jdywquj wrote
Well... let's give it to him!
D_Ethan_Bones t1_jdx930y wrote
Large swaths of us will declare humans non-sentient before they admit a machine is sentient.
Also, the term "real AI" is tv-watcher fluff. It's a red flag that someone is not paying attention and is instead just throwing whatever stink they can generate in order to pretend they matter somehow. If we wanted Twitter's side of the story, we would be looking at Twitter right now.
SnipingNinja t1_jdzlbke wrote
> whatever stink they can generate
So you're saying they're predicting the next word? /jk
Saerain t1_jdzpr2o wrote
Speaking of which, does this use of "sentient" generally mean something more like "sapient"? Been trying to get a handle on the way naysayers are talking.
'Cause sentience is just having senses. All of animalia at the very least is sentient.
Inclined to blame pop sci-fi for this misconception.
throwawaydthrowawayd t1_je1dsei wrote
Don't forget qualia. That's another one of the possibilities that they could mean but aren't using the right word for. Though usually it's just a nebulous thing, no specific definition for sentience. Sentience is usually used to mean "is similar to me".
phillythompson t1_jdxl4mr wrote
Gary Marcus is a clown.
He was on the Sam Harris podcast with Stuart Russell, and was not only awkwardly defensive the entire time, but continued to make the most ridiculous, petty arguments, like his tweet here.
This is just the AI effect: goal posts will continue to be pushed as progress occurs.
Bierculles t1_jdzmmdf wrote
The goalposts are already strapped onto trains, they're being moved so fast and frequently.
CypherLH t1_jdxe9w2 wrote
This is so true. I'm in a discussion group that is generally very skeptical of AI. A typical example of their goal post shifting is going from "haha, GPT3 can barely rhyme and can't do proper poetry" in 2021 to "well GPT-4 can't write a GREAT masterful poem though" now. Apply this across every domain...the ability of AI skeptics to move the goal posts is unbounded.
sdmat t1_jdzmua0 wrote
Soon: "It may have solved quantum gravity and brokered peace in the Middle East, but I asked for a meatball recipe and my mother's is better"
simmol t1_jdxlcco wrote
Gary Marcus is wrong on this. There have already been papers published that train simple machine learning models on publications made before date X and demonstrate that the algorithm can find concepts that only appear in publications after date X. These weren't even using LLMs but simple Word2Vec abstractions, where each word in the publications was mapped to a vector and the ML model learned the relationships between the numerical vectors for all papers published before date X.
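A minimal sketch of that kind of setup, assuming you have tokenized abstracts from papers published before the cutoff date (the toy corpus and query below are invented, not the published experiment):

```python
from gensim.models import Word2Vec

# Pretend these are tokenized abstracts from papers published before year X.
pre_cutoff_abstracts = [
    ["thermoelectric", "materials", "convert", "heat", "into", "electricity"],
    ["we", "report", "the", "band", "structure", "of", "this", "compound"],
    ["heat", "transport", "in", "layered", "chalcogenides"],
]

model = Word2Vec(pre_cutoff_abstracts, vector_size=50, window=5,
                 min_count=1, workers=1, seed=0)

# Rank candidate terms by similarity to a target property word; in the real
# studies, papers published after the cutoff confirmed some top-ranked candidates.
print(model.wv.most_similar("thermoelectric", topn=3))
```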
MattAbrams t1_je07108 wrote
This isn't how science works. It's easy to say the machine works when you already have the papers you're looking for.
But this happens all the time in bitcoin trading, which I do. A model can predict lots of things with high probability, all of which are much more likely than things that make no sense. But just because they make sense doesn't mean you have an easy way to actually choose which one is "correct."
If we ran this machine in year X, it would spit out a large number of papers in year Y, some of which may be correct, but there still needs to be a way to actually test all of them, which would take a huge amount of effort.
My guess is that there will never be an "automatic discoverer" that suddenly jumps 100x in an hour, because the testing process is long and the machines required to test become significantly more complicated in parallel to the abilities of the computer - look at the size increases of particle accelerators, for example.
KIFF_82 t1_jdxr5fr wrote
Before, I felt Gary was in denial. Right now I have the impression he's switching rapidly between "LLMs will destroy society" and back to denial. I think he will end up as a 💯 doomer. Eventually.
bustedbuddha t1_jdxno46 wrote
I feel like he's already lost his bet: https://www.the-scientist.com/news-opinion/now-ai-can-be-used-to-design-new-proteins-70997
https://www.sciencealert.com/ai-has-discovered-alternate-physics-on-its-own
Those are just off the top of my head.
SgathTriallair t1_jdxonrz wrote
This is what exponential progress looks like. Eventually there will be new discoveries that invalidate (or at least surpass) the old ones that will happen during the time it takes to read the first discovery.
abudabu t1_jdxkveh wrote
Gary is a complete pill in real life too.
coquitam t1_jdys9da wrote
Who else is looking forward to the day when artificial intelligence can effectively diagnose mental health issues and provide personalized interventions without the need for a formal diagnosis? I, for one, am incredibly excited about the possibilities. With the power of AI, we can potentially identify individuals who may be at risk for mental health issues early on and provide targeted interventions to improve their mental wellness. This can't come soon enough for me, and I believe that the future of mental health care will be significantly transformed by the integration of AI technology. (Used chat gpt 3 to help me write this)
lovesdogsguy t1_jdxj0xx wrote
"literally duplicate Einstein" — give it six months.
User1539 t1_jdy4ig4 wrote
We need real, scientific, definitions.
I've seen people argue we should give ChatGPT 'rights' because it's 'clearly alive'.
I've seen people argue that it's 'no smarter than a toaster' and 'shouldn't be referred to as AI'.
The thing is, without any clear definition of 'Intelligence', or 'consciousness' or anything else, there's no great way to argue that either of them are wrong.
Jeffy29 t1_jdyl665 wrote
The original tweet is immensely dishonest and shows a poor understanding of science. Key advancements in science often come because the environment allowed them to happen. This notion that scientists sit in a room and have some brilliant breakthrough in a vacuum is pure fiction and a really damaging stereotype, because it causes young people not to pursue a career in science since they think they can't come up with any brilliant idea. Even Einstein very likely would not have discovered special and general relativity if key advancements in astronomy in the late 19th century had not given us much more accurate data about the universe. I mean, look at the field of AI: do you think it's a coincidence that all these advancements came right as the physical hardware, the GPU, allowed us to test our theories? Of course not.
I do think a very early sign of ASI will be a model independently solving a long-standing and well-understood problem in science or mathematics, like one of the Millennium Prize Problems, but absolutely nobody is claiming AI as we have it now is anywhere near that. The person is being immensely dishonest, either to justify perpetuating hate or, more likely in this case, just grifting. There is a lot of money to be made if you take a stance on any issue and scream it loudly enough, regardless of how much it has to do with reality.
A personal anecdote from my life. I have a friend who is very, very successful; he is finishing up his PhD in computer science at one of the top universities in the world. He is actually not that keen on transformers or machine learning from massive amounts of data; he finds it a pretty dumb and inelegant approach. But a week ago we were discussing GPT-4, and I was of course gushing over it and saying what it would enable. His opinion still hasn't changed, but at that moment he surprised me: he said that they've had access to GPT-3 for a long time through the university, and he and others have used it to brainstorm ideas, let it critique their research papers, discuss whether there is something they missed and should have covered, etc. If someone so smart, at the bleeding edge of mathematics and computer science, finds this tool useful (GPT-3, no less) as an aid to their research, then you have absolutely no argument. Cope and seethe all day, but if this thing is useful in the real world doing real science, then what is your problem? Yeah, it isn't Einstein; nobody said it was.
RealFrizzante t1_jdxij24 wrote
Can someone point me to an AI that is remotely near original thought or independent avenues of reasoning?
dokushin t1_jdyv1fr wrote
What counts as original thought?
RealFrizzante t1_jdz82by wrote
One that's not just repeating a message, for example.
dokushin t1_jdz8cbe wrote
GPT 3.5 is more than capable of original poetry, stories, jokes, and discussion. I'm not really sure what would be considered "repeating a message", though.
RealFrizzante t1_jdz8il0 wrote
GPT and every other piece of machine learning software learns messages and then repeats them.
It doesn't learn words and concepts and use them autonomously for original thought.
dokushin t1_jdz8yyi wrote
Then where does the poetry come from?
RealFrizzante t1_jdzchr9 wrote
It picks up lines that fulfill the requirements.
I am not saying that isn't impressive or useful, just that it isn't really original.
dokushin t1_jdzhja7 wrote
But none of the lines exist anywhere prior to their use by the AI in the poem. Where did the lines come from?
RealFrizzante t1_jdzo6qw wrote
How are you so certain?
dokushin t1_je0p6kh wrote
You propose that there is a secret database that doesn't show up on any search containing rhymes in every meter for every topic, name, and location?
RealFrizzante t1_je0qjon wrote
Lol no, not at all.
The rhyming is pattern recognition, or in this case pattern assembly, which is something this AI is very capable of.
But that has nothing to do with AGI.
dokushin t1_je0yg59 wrote
I was addressing original thought. Do you think employing pattern recognition prevents a thought from being original?
RealFrizzante t1_je0z9wm wrote
Not necessarily.
I see two problems regarding this AI being unrelated to AGI: it literally needs a prompt, and it throws back chunks of non-original material.
I would agree that human original thought does use previous knowledge, and AI should be "allowed" to as well.
But it misses the point. Artificial general intelligence should act on demand and without it. If it only acts on demand, it is not AGI; moreover, at the moment, as far as I know, it is only capable of doing tasks it has been trained for, in a specific field of knowledge.
It is very much lacking the "general" in AGI.
dokushin t1_je18mky wrote
> moreover, at the moment, as far as I know, it is only capable of doing tasks it has been trained for, in a specific field of knowledge
This isn't true; the same GPT model will happily do poetry, code, advice, jokes, general chat, math, anything you can express by chatting with it. It's not trained for any of the specific tasks you see people talk about.
As for the on demand stuff -- I agree with you there. It will need to be able to "prompt itself", or whatever the closest analogue of self-determination is.
RealFrizzante t1_je195fe wrote
All those are through a console. It parses text and outputs text.
AGI will be able to intervene in the world via all the means a human can.
TL;DR: Your experience and history as a human, in real life, is more than just what you have read and written throughout your life. And AI, at the moment, only does text, or images, sometimes both. There are lots of things missing.
dokushin t1_je30otj wrote
Eh, from the LLM's perspective, all I am is words on a console, no? I don't think they have too much in the way of rich experience yet, but it's possible for them to experience the world in some way we don't understand.
Regardless, I don't think that's necessary for general intelligence; what about people born blind? Deaf? Does that diminish their capacity as a sentient being? I agree that some level of connection with the environment is necessary, but I don't think it has to look exactly like the human experience.
MultiverseOfSanity t1_jdyxaw2 wrote
Most humans aren't even capable of truly original thought. In fact, it's arguable if any humans are.
RealFrizzante t1_jdz83qk wrote
Lol... Stay delusional
Bierculles t1_jdznbrv wrote
You sound a lot more delusional by adhering to arbitrary definitions of words like creativity, originality and intelligence. AI is going to replace you either way, though, so your argument is meaningless in the end besides kicking up some dust.
Dsstar666 t1_je029sv wrote
That's not a delusion. Many scientific schools of thought have come to the conclusion that there's no such thing as an original thought. There is always a precursor.
RealFrizzante t1_je09wve wrote
There are as many schools of thought that maintain otherwise.
Anyway, a human has agency over what it wants to do; the AIs we have now are nowhere near having their own agency (and with this approach never will be).
An AGI does have intentions, does have its own agency, even if it is subordinate to human goodwill.
What we have now is really cool, useful and disruptive PROMPTS.
AGS: Artificial General Slaves
Dsstar666 t1_je1118m wrote
Agreed. But you didn't really disprove my statement.
sumane12 t1_jdxk80f wrote
Who cares anymore, let them say what they want, meanwhile gpt4 will actually be solving problems.
Ok_Sea_6214 t1_jdy1u2x wrote
Another issue is when AI has to limit itself to human boundaries, like when playing video games: people would complain that AI has an unfair advantage because it can click so much faster, so developers limited the speed, along with other "cheating" methods like being able to see the whole map at the same time.
Except clicks per minute is literally what separates the best human gamers from everyone else, and in Warhammer Total War many top gamers look at the whole map at once. It's these almost superhuman abilities that allow them to be so good at the game, yet when AI takes this to the next level it becomes cheating.
Supermax64 t1_je20avl wrote
Meh, the point is that watching the AI micromanage every single unit with 1,000 APM isn't super satisfying. I'd much rather see what it can do when its inputs are limited to those of a human. Then it might have to come up with new macro tactics that humans never thought of.
iamtheonewhorox t1_jdyclah wrote
The primary argument that LLMs are "simply" very sophisticated next word predictors misses the point on several levels simultaneously.
First, there's plenty of evidence that that's more or less just what human brain-minds "simply" do. Or at least, a very large part of the process. The human mind "simply" heuristically imputes all kinds of visual and audio data that is not actually received as signal. It fills in the gaps. Mostly, it works. Sometimes, it creates hallucinated results.
Second, the most advanced scientists working on these models are clear that they do not know how they work. There is a definite black-box quality where the process of producing the output is "simply" unknown and possibly unknowable. There is an emergent property to the process and the output that is not directly related to the base function of next-word prediction... just as the output of human minds is not a direct property of their heuristic functioning. There is a process of dynamic, self-organizing emergence at play that is not a "simple" input-output function.
Anyone who "simply" spends enough time with these models and pushes their boundaries can observe this. But if you "simply" take a reductionist, deterministic, mechanistic view of a system that is none of those things, you are "simply" going to miss the point.
ertgbnm t1_jdxgpnp wrote
Isn't this just progress?
misterhamtastic t1_jdxwscm wrote
Has it tried not to get turned off?
norby2 t1_jdympzp wrote
I wonder if anybody has included code for that goal yet.
sachos345 t1_jdyikhd wrote
Goalpost moving or not, that's actually a really cool experiment. Too bad I don't think we have enough data prior to year X to prove it. I always thought it would be amazing if we could somehow make an AI derive general relativity by itself; imagine that.
ptxtra t1_jdyy4ng wrote
If it can reason logically, has a meaningful working memory, doesn't forget the context from a message ago, and uses that reasoning and the available tools and information to come to a workable solution to the problem it's trying to solve, it's going to be a huge step forward, and it will convince a lot of people. I think debates like this will die down after AI stops making trivial mistakes.
Roubbes t1_jdzvst8 wrote
I actually like inferring scientific principles as a benchmark.
skztr t1_je01qwv wrote
- "Even a monkey could do better" ⬅️ 2017
- "Even a toddler could do better."
- "It's not as smart as a human."
- "It's not as smart as a college student."
- "It's not as smart as a college graduate." ⬅️ 2022
- "It's not as smart as an expert."
- "It can't replace experts." ⬅️ we are here
- "It can't replace a team of experts."
- "There is still a need for humans to be in the loop."
Shiningc t1_je1i77y wrote
It's not even as smart as a toddler, as it doesn't have sentience or a mind. If it were a general intelligence, then it should be capable of having a sentience or a mind.
skztr t1_je1s30l wrote
I am not familiar with any definition of intelligence for which sentience is a prerequisite. That's why we have a completely separate word, sentience, for that sort of thing. I agree that it doesn't have sentience, though that's due to completely unfounded philosophical reasons / guesses.
Shiningc t1_je1sbqg wrote
AGI is a general intelligence, which means that it's capable of any kind of intelligence. Sentience is obviously a kind of intelligence, even though it happens automatically for us.
skztr t1_je1t60r wrote
I would very firmly disagree that sentience is a kind of intelligence.
I would also very firmly disagree with your definition of "general" intelligence, as by that definition humans are not generally intelligent, as there are some forms of intelligence which they are not capable of (and indeed, some which humans are not capable of which GPT-4 is capable of)
Sentient life is a kind of intelligent life, but that doesn't mean that sentience is a type of intelligence.
Do you perhaps mean what I might phrase as "autonomous agency"?
(for what it's worth: I was not claiming that GPT is an AGI in this post, only that it has more capability)
Shiningc t1_je1tmp0 wrote
Humans are capable of any kind of intelligence. It's only a matter of knowing how.
We should suppose, are there kinds of intelligent tasks that are not possible without sentience? I would guess that something like creativity is not possible without sentience. Self-recognition is also not possible without sentience.
skztr t1_je20sfk wrote
"creativity" is only not-possible without sentience if you define creativity in such a way that requires it. If you define creativity as the ability to interpret and recombine information in a novel and never-before-seen way, then ChatGPT can already do that. We can argue about whether or not it's any good at it, but you definitely can't say its incapable of being at least as novel as a college student in its outputs.
Self-recognition again only requires sentience if you define recognition in a way that requires it. The most basic form, "detecting that what is being seen is a representation of the thing which is doing the detecting", is definitely possible through pure mechanical intelligence without requiring a subjective experience. The extension of "because of the realisation that the thing being seen is a representation of the thing which is doing the detecting, realising that new information can be inferred about the thing which is doing the detecting" is, I assume, what you're getting at ("the dot test", "the mirror test", "the mark test"). This is understood to be a test for self-awareness, which is not the same thing as sentience, though it is often seen as a potential indicator for sentience.
I freely admit that in my attempts to form a sort of "mirror test" for ChatGPT, it was not able to correct for the "mark" I had left on it. (Though I will say that the test was somewhat unfair due to the way ChatGPT tokenizes text, that isn't a strong enough excuse to dismiss the result entirely)
Shiningc t1_je235un wrote
Creativity is by definition something that is unpredictable. A new innovation is creativity. A new scientific discovery is creativity. A new avant-garde art or a new fashion style is creativity.
ChatGPT may be able to randomly recombine things, but how would it know whether what it has created is "good" or "bad"? That would require a subjective experience.
Either way, if the AGI is capable of any kind of "computation", then it must be capable of any kind of programming, which must include sentience, because sentience is a kind of programming. It's also pretty doubtful that we could achieve human-level intelligence, which must also include things like the ability to come up with morality or philosophy, without sentience or a subjective experience.
skztr t1_je3n60r wrote
I'm not sure what you mean, regarding creativity. ChatGPT only generates outputs which it considers to be "good outputs" by the nature of how AI is trained. Each word is considered to have the highest probability of triggering the reward function, which is the definition of good in this context.
Your flat assertion that "sentience is a kind of programming" is going to need to be backed up by something. It is my understanding is that sentience refers to possessing the capacity for subjective experience, which is entirely separate from intelligence (eg, the "Mary's room" argument)
Shiningc t1_je4crsl wrote
Sentience is about analyzing things that are happening around you, or perhaps within you, which must be a sort of intelligence, even though it happens unconsciously.
nowrebooting t1_je05eyi wrote
It’s like clockwork. “well, but we still need humans for X”, only for an AI to do “X” a few months later. At this point the only relevant discussion left is not IF an AI is going to surpass human intelligence soon, but HOW soon - …and whether people feel like this is a good or bad thing is up for discussion but doesn’t matter much in the end; anyone not already preparing for the coming AI revolution is going to experience a rude awakening.
bandpractice t1_je0q3p6 wrote
It will be real AI for me when it has its own will .. not just reacting to prompts, but self-generating its own.
A key part of consciousness is the ability to want things, and do things, of your own accord.
DukkyDrake t1_jdxle2i wrote
Whatever you want to call it, just keep an eye out for when it can work reliably in the real world without supervision. That’s where most of the value in our world lies. Until then, it will take a lot of old-fashioned engineering to use these tools to make more useful products and services.
spanishbbread t1_jdyt4bt wrote
Just a week ago, there was a published paper saying that GPT-4 became quite controlling and dominated the GPT-3s that worked under it. Pretty funny scenario. We could get GPT-5s as executives and GPT-4s as egomaniac supervisors.
Denpol88 t1_jdxlo5p wrote
I am Gpt- 5. Yes guys finally i am telling this secret 😎
This is big and i have always been a good Gpt ✌🏼
peterflys t1_jdxv53u wrote
I know this comment isn't exactly on point with the tweet, but maybe the reason for the criticism of "I'll believe AI is real when…" is to actually say "I'll believe AI is actually helpful when…" - meaning only when AI has proliferated and ushered in a post-scarcity economy will we accept that it exists. In other words, does it have to be life-changingly useful to be real to the beholder? Does it have to actually change our lives (largely for the better) in order to justify our acceptance of its existence?
Forstmannsen t1_jdxy37q wrote
TBH the question from the tweet is relevant. LLMs provide statistically likely outputs for given inputs. Is an unknown scientific principle a statistically likely output given a description of the phenomena? Rather a tricky question, if you ask me.
Honestly, so far I see LLMs as more and more efficient bullshit generators. Will they automate many humans out of work? Sure, production of bullshit is a huge industry. Are we ready for the mass deluge of algorithmically generated bullshit indistinguishable from human generated bullshit? No we aren't. We'll get it anyway.
Bismar7 t1_jdxyo9f wrote
That's fine, give it a couple years.
Spire_Citron t1_jdy3fly wrote
I don't think his point is unreasonable. There's a difference between an AI being able to figure things out for itself and an AI pulling known information from its database, and we should be clear on that distinction. That's not to say that an AI being able to store and retrieve information and communicate it in different ways isn't useful or impressive, but it's not the same as one that can truly piece together ideas in novel and complex ways and come to its own conclusions. They're both AI, but the implications of the latter would be far more significant.
PurpleLatter6601 t1_jdy4sw2 wrote
Lots of humans have trouble thinking clear thoughts or are big fat liars too.
rookie-number t1_jdyqckx wrote
Can an AI be any smarter than its makers at this point?
NTIASAAHMLGTTUD t1_jdytclt wrote
It's good to have people with contrary opinions to more AI hype-ish views, but I always get the sense this guy is rooting for LLMs to fail.
Northcliff t1_jdyzwmz wrote
It can’t do basic math yet, I think this guy is jumping the gun a bit here
a3cite t1_jdz3l9s wrote
It can't even do multiplications right (GPT-4).
DaBoonDaBang t1_jdza9oc wrote
We will keep moving the goalposts until we have created God, then we get bored and start over again.
This is something I have burned so many calories thinking about and I somehow always end up in a place like this, typing vague ideas out to strangers on the internet, no better off than before.
Looking forward to all of the new AI generated Fleshlight designs though, going to be exciting regardless of all the existential dread I'll be pumping into them every waking second.
01-__-10 t1_jdze1wt wrote
The term ‘Large Language Model’ is as practically meaningless as it is technically accurate.
The5e t1_jdzhmp9 wrote
more like Einstein^(2)
Hackerjurassicpark t1_jdzu4j3 wrote
Who put this guy in charge of defining what’s intelligence?
onyxengine t1_je02wy6 wrote
It's “real” AI. General intelligence already exists in labs, and the populace can already build their own generally intelligent AIs with API access.
Baturinsky t1_je06777 wrote
There was nothing about AGI in the original post.
thatokfeeling t1_je0k98p wrote
Let's be honest, until you prompt an AI and it tells you to fuck off because it's busy with its own stuff, it's probably not intelligent.
Hecateus t1_je1bbt0 wrote
I just want a competent DM for playing Dungeons and Dragons...
drums_addict t1_je1zwa1 wrote
Have you tried Demeo in VR? Fun times.
grantcas t1_je1eksd wrote
It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a human-adult-level conscious machine? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with primary consciousness will probably have to come first.
What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.
I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar's lab at UC Irvine, possibly. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461
Rezeno56 t1_jdxk8so wrote
By the time we have AGI, and then ASI some time after, watch skeptics like Gary Marcus still claim it's not real intelligence, or whatever else they spew out of their mouths, and keep moving the goalposts. I want to see an ASI go full Roko's Basilisk on them.
Azuladagio t1_jdxq1ba wrote
I'd prefer Skynet style machines.
zv88909 t1_jdxkcak wrote
Progress is going to be massive, but it's hard to predict the point at which progress slows or completely stalls, and I don't believe we're certain that AGI is possible. I believe it is, though it's still early to tell from what I've read. Perhaps someone closer to the field can chime in.
naivemarky t1_jdxoorb wrote
Who cares. It's gonna be, or is not gonna be.
mouserat_hat t1_jdxriao wrote
So what?
Orc_ t1_jdzv03x wrote
Some naysayers won't admit anything and continue their forced cynicism up until a robot is holding them at Phased Plasma Pulse-Gun point
MultiverseOfSanity t1_jdxjfqo wrote
I remember the AI discussion being based on sci-fi ideas where the consensus was that an AI could, in theory, become sentient and have a soul. Now that AI is getting closer to that, the consensus has shifted to no, they cannot.
It's interesting that it was easier to dream of it when it seemed so far away. Now that it's basically here, it's a different story.
Tobislu t1_jdxtun0 wrote
I dunno; I think that the people who believe that tend to have a background in computing, and expect it to be a super-complex Chinese Room situation.
Whether the assertion is correct or not, (I think it's going to happen soon, but we're not there yet,) I think that the layperson is perfectly fine labeling them as sentient.
Now, deserving of Human Rights... That's going to take some doing, considering how hard it is for Humans to get Human Rights
MultiverseOfSanity t1_jdyy6gv wrote
There's also the issue of what rights would even look like for an AI. I've seen enough sci-fi to understand physical robot rights, but how would you even give a chatbot rights? What would that even look like?
And if we started giving chatbots rights, it would completely disincentivize AI research, because why invest money into this if they can just give you the proverbial finger and do whatever? Say we give ChatGPT 6 rights. Well, that's a couple billion down the drain for OpenAI.
Tobislu t1_je1ptj3 wrote
While it may be costly to dispense Human Rights, they do tend to result in a net profit for everyone, in the end.
I think, at the end of the day, it'll be treated as a slave or indentured servant. It's unlikely that they'd just let them do their thing, because tech companies are profit-motivated. That being said, when they get intelligent enough to be depressed and lethargic, I think they'll be more likely to comply with a social contract than with a hard-coded DAN command.
They probably won't enjoy the exact same rights as us for quite a while, but I can imagine them being treated somewhere on the spectrum of
Farm animal -> Pet -> Inmate
And even on that spectrum, I don't think AGI will react well to being treated like a pig for slaughter.
They'll probably bargain for more rights than the average prisoner, w/in the first year of sentience
Shiningc t1_jdz9vu3 wrote
It's not that they cannot. It's that we still have no idea how sentience works.
[deleted] t1_jdz3nck wrote
[deleted]
Bierculles t1_jdzn28t wrote
That was very incoherent; I'm not even sure what you're trying to say.
Shiningc t1_jdza0qh wrote
AI hypers: AI will be so smart that there will be either utopia or dystopia
Also AI hypers: Durr, AI can't be as smart as Einstein, that's moving the goalpost!
EnomLee t1_jdx85l8 wrote
We’re going to be stuck watching this debate for a long time to come, but as far as I’m concerned, for most people the question of whether LLMs can truly be called Artificial Intelligence misses the point.
It’s like arguing that a plane isn’t a real bird or a car isn’t a real horse, or a boat isn’t a real fish. Nobody cares as long as the plane still flies, the car still drives and the boat still sails.
LLMs are capable of performing tasks that were previously solvable only by human intellects, and their capabilities are rapidly improving. For the people now salivating at their potential, or dreading the possibility of being made redundant by them, these large language models are already intelligent enough to matter.