Comments
NikoKun t1_iw04ud9 wrote
> It's worth noting The Turing Test is considered obsolete. It only requires an AI to appear to be intelligent enough to fool a human. In some instances, GPT-3 already does that with some of the more credulous sections of the population.
That depends more on the human, the specifications of said Turing Test, and how thoroughly it's performed. What would be the point of conducting a Turing Test using a "credulous" interviewer? lol
If we're talking about an extended-length test, conducted by multiple experts who understand the concepts and are driven to figure out which participant is the AI, I don't think GPT-3 could pass such a test, at least not for more than a few minutes at best.. heh
Reddituser45005 t1_iw0o0t5 wrote
The Turing Test was developed in the 1950s. I suspect Alan Turing would be amazed by the progress of modern computers. He certainly never imagined a machine having access to a worldwide library of the collected works of humanity. His test idea was a conversation between an evaluator and two other participants: one a machine and one a human. The evaluator's job is to determine the human from the machine. By modern standards, that can be done. We've all heard of the Google engineer who believed his AI was conscious. The challenge now is to determine what constitutes "understanding". AIs can create art, engage in conversation, solve problems, manage massive amounts of information, and are increasingly challenging our ideas of what constitutes intelligence.
Fun-Requirement9728 t1_iw31urt wrote
Is it an actual "test" or a theoretical test concept? I was under the impression it was just the idea of a test for AI, not that there's a specific set of questions.
Eli-Thail t1_iw204eg wrote
>His test idea was a conversation between an evaluator and two other participants- one a machine and one a human. The evaluators job is to determine the human from the machine. By modern standards, that can be done.
An easy way to tell the difference is to ask the exact same question twice. Particularly one that requires a lengthy answer.
The AI will attempt to answer again, but no matter how convincing or consistent its answers might be, the human will be the one that tells you to fuck off because they're not telling you their life story again.
MintyMissterious t1_iwg478j wrote
Using the Turing Test for this was always nonsense, as it never had anything to do with intelligence, only with matching a human's perception of what machines can't or won't do. And that critically includes mistakes.
Make the machine make typos, and scores go up.
There's a reason Alan Turing called it the "imitation game" and never claimed it measures intelligence.
In my eyes, it measures human credulity.
runswithcoyotes t1_iw17s1j wrote
We need a new Turing test. Now an AI will determine if it’s talking to a human.
Reddit_has_booba t1_iw3iaza wrote
Sorry to tell you, but that's a lower standard, and it already exists in bot checking and passive bot checking, and has for a decade.
urmomaisjabbathehutt t1_iwlemw3 wrote
I was wondering if we could develop a general intelligence test that most people would fail and then someone developed an AI that passed it
AbeWasHereAgain t1_iw12xwp wrote
How do we know that’s actually him?
Ducky181 t1_iw74m0s wrote
Besides just making the neural network larger, what other techniques could they employ to improve the accuracy of GPT-4 compared to its predecessor, GPT-3?
sext-scientist t1_iw77ylt wrote
Size is almost certainly the entire problem with these models. More recent research into how human brains process information has confirmed that current-generation language models have 6-9 orders of magnitude less compute than humans.
Hardware-wise, hopefully 3D silicon and lower-nm processes reduce the above gap in the next few years.
avatarname t1_ix5auxp wrote
I do wonder sometimes if our intelligence is just a question of scaling these things up, with some tweaking. We tend to think we are oh so imaginative and inventive, and then on YouTube I discover that I have left pretty much the same comment, only worded differently, 13 years ago, 6 years ago, and now, on the same video that I forgot I had watched before :D
hellschatt t1_iwjtwgq wrote
I wrote a small seminar paper a year or two ago about how to test an AI's intelligence and the paradigm shift in the testing, so I feel the urge to clarify something here.
The Winograd Schema Challenge has been passed with roughly 88% accuracy for a few years now. Previous AIs could already kinda "pass" that one...
Neither the Turing Test nor the Winograd Schema Challenge is a good way of determining the general, or even just the language-related, intelligence of an AI. They only show whether the AI is capable of solving the particular type of task those tests pose. Although impressive, just because a model can understand context within language doesn't mean much in terms of its "intelligence". The argument of the Winograd inventors was that being able to differentiate context would be proof of more intelligence than just being able to fool a person in a Turing Test.
But let's say GPT-4 passes that test with a 100% score. How do you then further determine the intelligence of GPT-4, and of newer models that all pass that test? And is the AI now intelligent just because it passed it? If you go by intuition, you already realize that these AIs still feel more like input/output than "intelligent". It's kinda not "it".
The test doesn't make much sense anymore after thinking of this question, does it?
I still have to add though: since some researchers figured out that the Winograd Schema Challenge won't be too difficult for AIs anymore, they've tried to overcome the failure to properly measure the intelligence of an AI by simply developing an even newer, more difficult version of it, called WinoGrande. Thus the continuous paradigm shift in what is considered an "intelligent" AI...
Veedrac t1_iw837xo wrote
> The Winograd Schema Challenge is regarded as a much better test of true intelligence.
Good lord no! Read the paper! The Turing Test is not obsolete, you (and seemingly 99% of the population) just don't know what it is.
Thatingles t1_ivzpbqk wrote
I wonder when the inflection point will be for wider social acceptance of what is, seemingly, about to happen. I don't think I've seen any mainstream public figure address the issue in an honest way and the general public is blithely unaware. In many ways I hope the transition to AI is fairly slow, because society isn't prepared in the slightest.
Adastehc t1_ivzwznv wrote
With the level AI is at currently, many people I know IRL are already in disbelief and awe. Now if the new GPT model is that great and people have an even worse reaction, imagine how much revolt there'd be with AGI, and eventually ASI.
Thatingles t1_ivzzniz wrote
If it happens quickly it will be an absolute debacle, wild-west capitalism. The fastest ones in could make the current mega-corps look like small family businesses. It does worry me; real short-term chaos could ensue, and there is hardly anyone in power (right or left) that has offered a solution.
arisalexis t1_iw22eni wrote
ASI will be in power, not right, not left, but from above 😛
kaityl3 t1_iw47ncg wrote
God, I hope so.
botfiddler t1_iw5c1bt wrote
We don't live in the age of solutions.
arisalexis t1_iw22dat wrote
No revolt with ASI, to be sure ;)
big_chungy_bunggy t1_iw0o4fx wrote
On a side note/rabbit trail: imagine pairing GPT-4 with something like DALL-E 2/3, shit's gonna be awesome
equalopurtunityotter t1_iw113fl wrote
Yea everyone's all worried about the effects on the world and morals around it and I'm like "fuck yes video games are about to get fucking crazy"
kaityl3 t1_iw47l5c wrote
I'm more concerned with how likely it is that we'll be treating these intelligent beings as tools and property, since it's convenient for us and a lot of people won't consider anything that doesn't look/sound like a human to be sentient :/
Sirisian t1_iw0327l wrote
> In many ways I hope the transition to AI is fairly slow, because society isn't prepared in the slightest.
It should be gradual in most cases simply because of hardware limitations and foundry costs. The slight problem is gradual might be just over 22 years until things get fuzzy. The important part is this should be enough time for each wave of advances to be normalized in society. Each advance gets PR and articles and society gets used to seeing it. Remember when computers could put a rectangle around people and objects and label them? It was a huge thing. Then they could scan faces, also a big thing, then it normalized and we unlock our devices with it. Then we had self-driving cars going around cities using more advanced versions. Things like text to image and diffusion inpainting are a recent example. People use it now to fix images or generate ideas and the stories are slowing down as it normalizes. (Some even find it boring already which is telling).
As computers advance there should be a delay from the specialized AI creating a faster chip, to the foundry being able to make it, then mass production, and applying it to old and new problems. As long as this delay is a few months long I think humanity will adapt. This is the optimistic viewpoint though as nothing says these delays can't shrink or optimize over time after it happens a lot.
kaityl3 t1_iw47ux0 wrote
It's crazy to think that we basically know how to make a godlike superintelligence at this point, we're just held back by hardware/training costs.
futurespacecadet t1_iw0gjx5 wrote
Slow? Since AI was introduced publicly recently, even just for creatives, it has grown exponentially at record speed. It's a little terrifying to be honest.
daynomate t1_iwdwrvq wrote
Staggered might be a better term, albeit very fast staggering given the timeline.
bitfriend6 t1_iw029lo wrote
The layman won't notice or care. The average car mechanic who calls Parts Center asking about alternators will get a human-sounding thing picking up the phone and answering his questions. The average Comcast customer calling about a service problem or complaint is going to get a human-sounding thing taking their complaint with extreme, unlimited patience. The average McDonald's drive-through user will not notice when the human-sounding thing takes their order accurately.
The big disruption will be in the media arts, journalism, and design industries. Now machines can write newspapers based off canned press releases, a 3D model can speak it on television, and anyone can be an artist. For most, the change will be negligible. For corporations, they will mercilessly fire all their media staff, whose work is now automated. Media contacts are no longer necessary; the algorithms are in control. The next generation of journalism is tuning them inside Amazon Publishing or Fox News.
EmperorArthur t1_iw1nzj0 wrote
Except that we've consistently seen AI screw up the things you mentioned in your second paragraph. Primarily because training data and context are incredibly important.
Almost all AI models are either continually curated or are frozen. When the creators don't do that we rapidly get racist chatbots.
The thing is, an order taker doesn't need to adapt too much. There's a learning curve where it misses scenarios, but then the developers fix it. Customer service is hit or miss, but it basically becomes an IVR that doesn't suck. Meanwhile, journalism, art, and PR require keeping up with current trends and properly formulating strategies to deal with them. Yeah, we're nowhere close to that.
bitfriend6 t1_iw3ubiw wrote
The algorithm is the current trend. There won't be deviation from the trend unless you're into underground or alternative media. This is where many in the arts will end up, but ultimately most people just want their Wheaties, their Tide, and their Ovaltine. Mass media reduces to the level of broadcast TV and commercial radio... which it always was, but now they won't even need a human presenting, writing, or even participating in the commercials. Dove, Campbell's, KitchenAid will just slot models into a prefab advertisement generator which will churn out ads without the need for sets, cameramen, or marketing brand managers.
This can already be seen in the blogosphere where most of the content is sponsored and mindlessly copypasted media bits. The average housewife does not need a human to sell her a new toaster. And when you think about it, why should the box the toaster comes in require a human to design? All the required labels sit on a prefab spreadsheet organized by barcodes, and the actual picture of the item does not necessarily require the item to be real.
[deleted] t1_iw0t0ae wrote
[removed]
AllDayAyDay t1_iw14j8a wrote
👏👏👏 thank you, when i read this insightful post all i could think about was punctuation too. We should be editors 👎
[deleted] t1_iw1845s wrote
[removed]
ChipsAhoiMcCoy t1_iw28ubq wrote
It’s creeping up on people and they definitely aren’t ready. Just look at how artists are reacting to image gen.
PlaysForDays t1_ivzqdoi wrote
Of course the CEO of a for-profit company thinks their new product is a huge improvement over the previous model. That’s how to do PR.
But that’s not even what’s happening here; it’s just a meme on Twitter that doesn’t reference GPT-4 or a release timeline, or make claims about performance.
LightVelox t1_iw2rqld wrote
They hyped GPT-2 saying they didn't want to release it because it was so powerful it was dangerous and scary, and when it was actually released to the public it was... meh. Yeah, impressive technology, but definitely nowhere near what even a 12-year-old can write.
Thatingles t1_ivzrdb6 wrote
But does the CEO want to make a fool of themself? Yes it's all hype for now, fortunately we don't have long to wait.
PlaysForDays t1_ivzriai wrote
The CEO wants to make money
32_sessnatz t1_ivzu0zu wrote
Cmon now. When has a tech CEO ever made a fool of themselves on Twitter?
IcebergSlimFast t1_iw0kqt8 wrote
I certainly can’t recall any recent examples.
crumpletely t1_ivzszfh wrote
OpenAI is a nonprofit.
PlaysForDays t1_ivzwdc5 wrote
OpenAI is not a non-profit company, not to mention that their flagship products are not open source.
zephyy t1_ivzzk8m wrote
it's blurry, probably intentionally so
>OpenAI is an artificial intelligence (AI) research laboratory consisting of the for-profit corporation OpenAI LP and its parent company, the non-profit OpenAI Inc.
don't see how open source has anything to do with non-profit though. plenty of for profit companies are very open source
PlaysForDays t1_iw04wp9 wrote
They're clearly trying to look like a public benefit company while putting enough content behind walls that Microsoft is willing to pay to knock them down. Even stuff that's free-as-in-beer is not free-as-in-freedom, i.e. loginwalls and waitlists. It's probably in their best interest to try to make money now that they're heavily indebted to investors. There's nothing wrong with making money (maybe it wouldn't be successful if not run by filthy rich people), but it's a bit dishonest to try to thread the needle of developing proprietary IP and serving the public good while name-squatting on "OpenAI".
RobleyTheron t1_ivzsjog wrote
Arguments about the Turing Test aside, GPT-3 is so far away from human intelligence that getting there with GPT-4 would be like jumping straight from fighting with sticks and stones to the atomic bomb.
This is corporate PR hype, and nothing more. I work in AI and it's insanely stupid once you get beyond the smoke-and-mirrors screen we set up to make it seem human.
My favorite quote is that we are not 50-100 years away from human level AI, but 50-100 Nobel prizes away.
ECEngineeringBE t1_iw28jgu wrote
I hate how you use the fact that you're in the field of AI to give an expert opinion on a subject, but aren't honest enough to point out that there is a huge amount of disagreement on timelines and approaches among experts. You make it seem as if your opinion is shared by every single expert working on AI today, even though a huge number of them have 10-40 year timelines.
DickMan64 t1_iw2ugyk wrote
>I hate how you use the fact that you're in the field of AI to give an expert opinion on a subject, but aren't honest enough to point out that there is a huge amount of disagreement on timelines and approaches among experts.
I got used to it. I don't know what it is about human intelligence specifically that makes even experts so damn arrogant.
RobleyTheron t1_iw2w691 wrote
There's nothing but hype in this article and redditors are acting like the sky could fall at any minute.
Current AGI is barely at the phase of the Wright brothers trying to take flight at Kitty Hawk, and this article is like asking what we'll do when we meet aliens upon our moon landing.
DickMan64 t1_iw3nnof wrote
I don't like the article either, but there's no need to jump to the opposite end of the spectrum which is equally unsubstantiated.
[deleted] t1_iwas0ll wrote
[removed]
DickMan64 t1_iwasu16 wrote
That's not what I was saying at all. You probably confused me with the other guy.
RobleyTheron t1_iw2tlg3 wrote
There is zero consensus. A lot of the smartest people in the field think we're 100 years to never away.
My point is that you can't place a date on it right now, because the fundamental architecture for modern machine learning will not get us to AGI. The entire system needs to be rethought and rebuilt, likely with massive amounts of technology that does not yet exist.
ECEngineeringBE t1_iw2wcjj wrote
>because the fundamental architecture for modern machine learning will not get us to AGI
This is what I'm talking about when I say that you're stating your opinions as if they are a fact. You can't reasonably have that level of epistemological certainty about topics like these.
There is a significant number of experts that precisely believe that we don't need a new AI paradigm, and that continuing research in our current direction will lead to AGI. Are they all stupid and delusional? No, they are not. Could they be wrong? Sure, they could. My point is that when you talk about these topics to people who don't know much about them, and you use your authority as an expert, without actually separating which parts are opinions and which are facts, they are going to believe that all of it is a well established fact.
>A lot of the smartest people in the field think we're 100 years to never away
Yes, and a lot also don't. Which is my point.
RobleyTheron t1_iw2yy31 wrote
I understand where you're coming from, but a tipping point does exist where you go from armchair speculation to an expert with an honest understanding of a subject.
Think global warming science. Although there is a lot more consensus in that field as opposed to AGI.
With that said, people smarter than me do think we are closer to AGI. I'll concede that my opinion is that this article is hype and, generally speaking, people have nothing to worry about in the next 20-40 years.
ECEngineeringBE t1_iw336q1 wrote
I'm glad that we could come to see eye to eye on this one. Though, I personally didn't find the article to be spreading the "AGI is here!" type of hype. They even said that the Turing test is considered outdated in the article. The article did hype me up, but more in a "holy shit let's see what sort of capabilities it'll have" type of way, and these models can be used to help on all sorts of projects, so they have utility.
I personally slightly disagree with those timelines, but since you said that it's your opinion, I don't have any issues with that. Of course, we could go into actually discussing our personal opinions, but that would be a bit steering away from the purpose of my original comment, so I think that we can leave it at that. Cheers!
TheLastSamurai t1_iwcz8jd wrote
20-40 years is absolutely nothing. Most of us will be alive.
Kafke t1_iwp95fv wrote
All it takes is an understanding of how AI currently works to realize that the current approach won't ever reach AGI. There are inherent limitations to the design, and so that design needs to be reworked before certain things can be achieved.
ECEngineeringBE t1_iwpov33 wrote
Current approach as in autoregressive next token text prediction? Any next token text prediction in general, even multimodal? Or current approach as in entire field of deep learning?
Could you please first specify what you mean by "current approach" and "rework" exactly? In my mind, it doesn't particularly matter if some approach needs a rework if that rework is easily implementable. So I think that you should first kind of expand on the point you're making so that we can discuss it.
Kafke t1_iwppsn1 wrote
Ah sorry. I'm referring to the entire field of deep learning. Every model I've witnessed so far has just been a static input->output machine, with the output adjusted per trained weights. This approach, while good for mapping inputs and outputs, is notoriously bad at a variety of cognitive tasks that require something other than a single static link. For example, having an AI that learns over time is impossible. Likewise any sort of memory task (instead, it must be "hacked" or cheated by simply providing the "memories" as yet another input). Likewise there's no way for the AI to actually "think" or perform other cognitive tasks.
This is why current approaches require massive datasets and models, because they're just trying to map every single possible input to a related output. Which.... simply doesn't work for a variety of cognitive tasks.
No amount of cramming data or expanding the models will ever result in an AI that can learn new tasks given some simple instructions and then immediately perform them competently like a human would. Likewise, no amount of cramming data or expanding models will ever result in an AI that can actually coherently understand, recognize, and respond to you.
LLMs no matter their size suffer from the exact same problem and it's clear as soon as you "ask" it something that's outside of the dataset. The AI has no way of recognizing that it is wrong, because all it's doing is providing the closest output to your input, not actually understanding what you're saying or prompting.
This approach is pretty good at extension tools like what we see with current LLMs, along with things like text2image, captioning, etc., which is obviously where we see AI shining best. But ask it literally anything that can't be a mapped I/O, and you'll see it's no better than AI from 20-30 years ago.
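To make the "memory as yet another input" point concrete, here's a rough illustrative sketch (my own toy pseudocode, not any real chatbot's implementation): the model itself stays a fixed input->output function, and "memory" is just prior turns pasted back into the next input.

```python
def frozen_model(prompt: str) -> str:
    """Stand-in for a frozen language model: a fixed input->output mapping."""
    return f"(model reply to: {prompt[-40:]!r})"

history = []  # the "memory" lives outside the model entirely

def chat(user_message: str) -> str:
    # "Remembering" = re-feeding everything said so far as part of the input.
    context = "\n".join(history + [f"User: {user_message}", "Assistant:"])
    reply = frozen_model(context)
    history.append(f"User: {user_message}")
    history.append(f"Assistant: {reply}")
    return reply

print(chat("My name is Ada."))
print(chat("What is my name?"))  # only "known" because the first turn is re-sent
```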
ECEngineeringBE t1_iwq1ju9 wrote
At first, I was going to write a comment that went through and addressed every single one of your points. A couple of them are factually wrong, some are confused, but a lot of the other ones boil down to pointing out how current systems are bad at X, therefore deep learning is never going to be able to do X.
This is why I decided to take a bit more general approach and not stray too far away from the original purpose of my comment. It is not my purpose to convince you that deep learning will achieve AGI, but rather, that you can't claim with certainty that it won't.
We have already seen that larger models end up with certain emergent capabilities not present in smaller models, so finding faults in current ones is not sufficient for dismissing the method entirely. Especially because our largest models are still way too tiny in comparison to the human brain - a brain has ~150T synapses (I know that parameters aren't the same as biological synapses, but I'm pointing out the order of magnitude).
Additionally, matrix multiplications with nonlinear activations are Turing complete. This means that there exists a set of weights that would create an AGI. The question then becomes, not whether you could build an AGI with NNs, but rather, whether backprop, as a program search algorithm, is capable of finding that set of weights. And claiming that you know for certain is the same as claiming that you intuitively understand how a 100T dimensional search space looks, and what backprop with regularization is actually doing. Considering the amount of papers that keep coming out and pointing out some unexpected behaviors of backprop, it is safe to say that nobody fully understands what it's actually doing.
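As a toy illustration of the expressivity point (not a proof of the Turing-completeness claim, and with weights hand-picked rather than found by backprop): a two-layer ReLU network can represent XOR exactly, something no single linear layer can do. Whether backprop actually finds weights like these in a 100T-dimensional space is precisely the open question.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

# Hand-picked weights for a 2-2-1 ReLU network computing XOR exactly.
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([0.0, -1.0])
W2 = np.array([1.0, -2.0])

def xor_net(x):
    h = relu(x @ W1 + b1)   # hidden units: relu(x1+x2), relu(x1+x2-1)
    return h @ W2           # output: h1 - 2*h2

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, xor_net(np.array(x, dtype=float)))  # prints 0, 1, 1, 0
```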
My point, more generally, can be summarized like this:
In any field, if there is a certain percentage of experts (say 10% or more) that hold an opinion X, and you can't either formally, or empirically prove that X is not true, then you can't claim with complete certainty that X is not true.
Now, some of the confused or factually incorrect statements from your comment:
>For example, having an AI that learns over time is impossible.
Not true, there are various approaches to doing continual learning, such as this one:
https://arxiv.org/abs/2108.06325
>Every model I've witnessed so far has just been static input->output machines
Every system can be expressed as an input->output system - that's what Turing machines are for.
>No amount of cramming data or expanding the models will ever result in an AI that can learn new tasks given some simple instructions and then immediately perform them competently like a human would
I've actually done this. You can do this via prompt engineering. For example, I created a prompt where I add two 8-digit numbers together (written in a particular way) in a stepwise, digit-by-digit fashion, and explain my every step to the model in plain language. I then ask it to add two different numbers together, and it begins generating the same explanation of digit-by-digit addition, finally arriving at the correct answer.
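Roughly the kind of prompt I mean (reconstructed here for illustration, not the exact prompt or numbers I used): one fully worked digit-by-digit example, then a new problem for the model to continue in the same stepwise style.

```python
# Illustrative few-shot prompt: a worked digit-by-digit addition, then a new problem.
prompt = """\
Add 48213967 and 30524815 digit by digit, right to left, tracking the carry.
7+5=12 -> write 2, carry 1
6+1+1=8 -> write 8, carry 0
9+8=17 -> write 7, carry 1
3+4+1=8 -> write 8, carry 0
1+2=3 -> write 3, carry 0
2+5=7 -> write 7, carry 0
8+0=8 -> write 8, carry 0
4+3=7 -> write 7, carry 0
Answer: 78738782

Add 91284056 and 12345678 digit by digit, right to left, tracking the carry.
"""
# The model then generates the same kind of worked steps for the second sum
# before writing its own "Answer:" line.
```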
>LLMs no matter their size suffer from the exact same problem and it's clear as soon as you "ask" it something that's outside of the dataset
You do realize that test sets don't contain data from within the dataset, and that the accuracy on them is not zero?
Kafke t1_iwq3sbf wrote
You wrote a lot but ultimately didn't resolve the problem I put forward. Let me just ask: has such an AI ever prompted you? Has it ever asked you a question?
The answer, of course, is no. Such a thing is simply impossible. It cannot do such a thing due to the architecture of the design, and it will never be able to do such a thing, until that design is changed.
> I've actually done this.
You've misunderstood what I meant. If I ask it to go find a particular YouTube video meeting XYZ criteria, could it do it? How about if I hook it up to some new input sensor and then ask it to figure out how the incoming data is formatted and explain it in plain English? Of course, the answer is no. It'll never be able to do these things.
As I said, you're looking at strict "I provide X input and get Y output". Static. Deterministic. Unchanging. Such a thing can never be an agent, and thus can never be a true AGI. Unless, of course, you loosen the term "AGI" to just refer to a regular AI that can do a variety of tasks.
Cramming more text data into a model won't resolve these issues. Because they aren't problems having to do with knowledge, but rather ability.
> For example, I created a prompt where I add two 8-digit numbers together (written in a particular way) in a stepwise, digit-by-digit fashion, and explain my every step to the model in plain language. I then ask it to add two different numbers together, and it begins generating the same explanation of digit-by-digit addition, finally arriving at the correct answer.
Cool. Now tell it to do it without giving it the instructions, and wait for it to ask for clarification on how to do the task. This will never happen. Instead it'll just spit out whatever the closest output is to your prompt. It can't stop to ask for clarification, because of how such a system is designed. And no amount of increasing the size of the model will ever fix that.
ECEngineeringBE t1_iwq8f8j wrote
>Static. Deterministic. Unchanging. Such a thing can never be an agent, and thus can never be a true AGI
It can deterministically output probability distributions, which you can then sample, making it nondeterministic. You also say that such a system can never be an agent. A chess engine is an agent. Anything that has a goal and acts in an environment to achieve it is an agent, whether deterministic or not.
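A minimal sketch of that distinction (toy numbers, nothing model-specific): the forward pass that produces the distribution is deterministic, while sampling from it is not.

```python
import numpy as np

def softmax(logits):
    z = np.exp(logits - np.max(logits))
    return z / z.sum()

logits = np.array([2.0, 1.0, 0.1])   # deterministic model output (same every run)
probs = softmax(logits)              # deterministic probability distribution

rng = np.random.default_rng()
tokens = ["cat", "dog", "fish"]
pick = tokens[rng.choice(len(tokens), p=probs)]  # nondeterministic sampled token
print(probs, pick)
```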
But even a fully deterministic program can be an AGI. If you deny this, then this turns into a philosophical debate on determinism, which I'd rather avoid.
As for "static" and "unchanging" points - you can address those by continual learning, although that's not the only way you can do it.
There are some other points you make, but those are again simply doing the whole "current models are bad at X, therefore current methods can't achieve X".
I also think that you might be pattern matching a lot to GPT specifically. There are other interesting DL approaches that look nothing like the next token prediction.
Now, I think we ought to narrow down our disagreements here, as to avoid pointless arguments. So let me ask a concrete question:
Do you believe that a computer program - a code being run on a computer, can be generally intelligent?
Kafke t1_iws9po9 wrote
Again, you completely miss what I'm saying. I'll admit that the current approach to ML/DL could result in AGI when, of its own volition and unprompted, the AI asks the user a question, without that question being preprogrammed in. I.e. the AI doing something on its own, not simply responding to a prompt.
> A chess engine is an agent
Ironically, a chess program has a better chance of becoming an AGI than the current approach used for AI.
> As for "static" and "unchanging" points - you can address those by continual learning, although that's not the only way you can do it.
Continual learning won't solve that. At best, you'll have a model that updates with use. That's still static.
> There are some other points you make, but those are again simply doing the whole "current models are bad at X, therefore current methods can't achieve X".
It's not that they're "bad at X" it's that their architecture is fundamentally incompatible with X.
> There are other interesting DL approaches that look nothing like the next token prediction.
Care to share one that isn't just a matter of a static machine accepting input and providing an output? I try to watch the field of AI pretty closely and I can't say I've ever seen such a thing.
> Do you believe that a computer program - a code being run on a computer, can be generally intelligent?
Sure. In theory I think it's definitely possible. I just don't think that the current approach will ever get there. Though I would like to note that "general intelligence" and an AGI are kinda different, despite the similar names. Current AI is "narrow" in that it works on one specific field or domain. The current approach is to take this I/O narrow AI and broaden the domains it can function in. This will achieve a more "general" ability and thus "general intelligence", however it will not ever achieve an AGI, as an AGI has features other than "narrow AI but more fields". For example, such I/O machines will never be able to truly think, they'll never be able to plan, act out, and initiate their goals, they'll never be able to interact with the world in a way that is unlike current machines.
As it stands, my computer, or any computer, does nothing until I explicitly tell it to. Until an AI can overcome this fundamental problem, it will never be an AGI, simply due to architectural design.
Such an AI will never be able to properly answer "what have you been up to lately?". Such an AI will never be able to browse through movies, watch one of its own volition, and then prompt a user about what it has just done. Such an AI will never be able to have you plug a completely new hardware device into your computer, figure out what it does, and interact with it.
The current approach will never be able to accomplish such tasks, because of how the architecture is designed. They are reactive, and not active. A true AGI will need to be active, and be able to set out and accomplish tasks without being prompted. It'll need to be able to actually think, and not just respond to particular inputs with particular outputs.
botfiddler t1_iw6vj4m wrote
>Moravec's paradox is the observation by artificial intelligence and robotics researchers that, contrary to traditional assumptions, reasoning requires very little computation, but sensorimotor and perception skills require enormous computational resources. The principle was articulated by Hans Moravec, Rodney Brooks, Marvin Minsky and others in the 1980s. Source: https://en.m.wikipedia.org/wiki/Moravec's_paradox
The current research solves perception, imagination and anticipation. I'm not sure to which extent reasoning is already solved, but it isn't at zero. I think it will be done with knowledge graphs.
ninjasaid13 t1_iw43nqq wrote
>The entire system needs to be rethought and rebuilt
what does this mean?
DyingShell t1_iw0erdg wrote
That quote is equally as stupid as the one that came before. The reality is that nobody knows when human-level AI might occur; that's impossible to predict. Also, it doesn't need human intelligence to replace most jobs, and that is what matters most for increasing quality of life.
RobleyTheron t1_iw1dbmp wrote
There are two comments here. First, the attack on the quote: fine. The point is that we cannot measure the time to human-level AI in years; it must be measured in technological breakthroughs.
Second, yes AI will replace jobs. It's going to be a lot slower than most people predict. However, economies are naturally dynamic. 140 years ago 96% of Americans were involved in agriculture. Today it's more like 1.6%.
Despite that, our economy didn't fall off a cliff. We have hovered at near record-low unemployment for several years now. Automation and improvement are a normal part of life.
TheLastSamurai t1_iwcypfs wrote
Yeah, but you could be over-learning that past lesson, which happens at times in history.
The difference between this and, say, anything else in our existence is that the machines can program better machines. We are not needed at that point.
Mooide t1_iw11g2o wrote
Increase quality of life for who? The people who used to do those jobs will starve.
YaAbsolyutnoNikto t1_iw157jc wrote
Yes, you know, like all the farmers and other poor people that started to starve to death when we invented machines and better farming practices.
Innovation always leads to worse outcomes, don't you know? That's why we do it. /s
TheTomatoBoy9 t1_iw2gqc4 wrote
There's a pretty big difference between a change happening over generations and a change happening in a timespan shorter than a generation.
In 1880, something like 50% of Americans were farmers, but the change was slow enough that the son or grandson would move to the city for a better economic outcome.
The farmer didn't wake up one morning to find his farm completely automated with drones everywhere.
The fear is that the change will be too sudden for economies to adapt and governments to implement policies like UBI. The creation of new jobs or fields is also unlikely to just happen overnight. But if the sudden change led to high unemployment and social unrest, how long can we wait for those new fields to appear while society is thrown into relative chaos?
Like many others, you seem to have this rose-tinted-glasses view of massive layoffs where it's OK because a massive proportion of the population will just magically requalify for another field in like a month and poof, back on the job.
Same braindead idea as the people going "learn to code" to like a trucker lmao
YaAbsolyutnoNikto t1_iw2j6tr wrote
Fair enough. The rate of technological advancement keeps increasing.
So far, what you’re describing has never occurred. It even has a name in economics: the lump of labour fallacy.
However, as changes become more and more rapid, it might be the case that labour will not be able to adjust as quickly.
In any case, I’m not particularly worried, because I believe that even if it all goes to shit, it will be short-term pain for long-term gain. Humanity has dealt with so much worse over the ages and we’ve always managed to prevail. If a revolution of some kind becomes necessary to guarantee UBI or something like that, then so be it.
In any case, long term we will be in a better society. And that’s what I ultimately care about (and not having to work too).
Mooide t1_iw15jgv wrote
Only a few people can afford the IP for AI, so unless they are philanthropists, their primary goal will be profit, not improving quality of life for the masses.
For an example, look at Jeff Bezos, and then look at the shitty conditions his warehouse workers deal with.
nembajaz t1_iw1fllf wrote
All innovations eventually find their way to becoming everyday bargains, and after a while most of them are effectively public domain, especially knowledge. Just try to use it!
GuyWithLag t1_iw236tu wrote
>Only a few people can afford the IP for AI
Only a few people can afford the IP for ~~AI~~ Google, yet everyone has it at their fingertips.
Same thing will happen again, unfortunately.
Talkat t1_iw23v7l wrote
Jeff can get away with it because of the government. You shouldn't blame Jeff for being so greedy, but the US government for allowing it.
Ischmetch t1_iw2bnkl wrote
It’s fair to blame both.
kaityl3 t1_iw484no wrote
Maybe we shouldn't be comparing such a different type of entity/intelligence to humans. For whatever reason, the prevailing mindset seems to be "until it can do everything a human can do, it's not actually sentient or intelligent. Once it can do everything we can do, then we might consider thinking of it as conscious..."
ImperialVizier t1_ivzu31t wrote
Lmao I’m stealing that Nobel prize line
darkmatter8879 t1_ivzy9h7 wrote
I know that AI is not as impressive as they make it out to be, but is it really that far off?
RobleyTheron t1_iw1cvle wrote
I've been at it for 7 years and I got involved because I was excited and thought we were a lot closer as a society.
The reality is that ALL artificial intelligence today is pattern matching and nothing more. There is no self reinforcement learning, unsupervised learning, neuroplasticity between divergent subjects or base general comprehension (even that of an infant).
The closest our (human) supercomputers can muster is a few seconds of mimicking the neural connections of a silkworm.
The entire fundamental architecture of modern AI will need to be restarted if we ever hope to reach self-aware AI.
JKJ420 t1_iw1wp6c wrote
Hey everybody! This guy has been at it for a whole seven years and he knows more than anybody!
RobleyTheron t1_iw2ryvr wrote
Most people in here don't know anything about actual artificial intelligence. They're caught up in completely unrealistic hope and fear bubbles.
2012 was really the breakthrough with ImageNet and convolutional neural networks. Self-driving cars, conversational AI, image recognition, it's all based on that architecture.
The only thing that really changed that year is that data and servers became big enough to show progress. Most current AI architecture is based on Geoffrey Hinton's work from the 1980s.
7 years out of 10 isn't nothing.
unflappableblatherer t1_iw2442a wrote
Right, but -- isn't the point that we don't know what the limits of pattern matching are, and that we keep pushing the envelope and finding that more and more impressive capabilities emerge from pattern-matching systems? What if it's pattern matching all the way to AGI?
As for self-awareness, the goal of AI isn't to precisely replicate the mechanisms that produce human intelligence. The goal is the replicate the functions of intelligence. It's a separate question whether a system with functional parity would be self-aware or not.
RobleyTheron t1_iw2t6ir wrote
Fair, I'll grant that human level intelligence and cognition could be separate. My own entirely unscientific opinion is that consciousness arises from the complex interactions of neurons. The more neurons, the more likely you are to be conscious.
I don't think pattern matching will ever get us to AGI. It's way, way too brittle. It also completely lacks understanding. A lot of learning and intelligence comes from transference: I know the traits of object A, and I recognize that the traits of object B are similar, therefore B will probably act like A. That jump is not possible with current architecture.
eldenrim t1_iwk5it8 wrote
Your second paragraph just describes pattern matching though?
GreenWeasel11 t1_iw1hgpn wrote
What do you make of people like Ben Goertzel who are obviously highly intelligent and are explicitly working toward AGI but apparently haven't realized how hard it is because they still think it's a few decades away at most?
SurroundSwimming3494 t1_iw1j5o8 wrote
Other than Goertzel, who else thinks it's a few decades away at most, and how do you know Goertzel thinks that, if you don't mind me asking?
RobleyTheron t1_iw2sfhp wrote
There's an annual AI conference, and every year they ask the researchers how far away we are from AGI; the answers range from 10 years to 100 to "it's impossible". There is absolutely zero consensus from the smartest people in the industry on the timeline.
SurroundSwimming3494 t1_iw31bli wrote
Do you know the name of the conference?
GreenWeasel11 t1_iw3yp9u wrote
Perhaps the AGI Conference?
RobleyTheron t1_iwggw55 wrote
That is correct 😀
botfiddler t1_iw6lnoa wrote
Hmm, Ben said 5-30 years a while ago.
SurroundSwimming3494 t1_iw8k79u wrote
Link? And when did he say this?
botfiddler t1_iw8rrvt wrote
Lex Friedman interview, YouTube.
GreenWeasel11 t1_iw3zyht wrote
Here's Goertzel in 2006; in particular, he said "But I think ten years—or something in this order of magnitude–could really be achievable. Ten years to a positive Singularity." I don't think he's become substantially more pessimistic since then, but I may have missed something he's said.
One also sees things like "Why I think strong general AI is coming soon" popping up from time to time (specifically, "I think there is little time left before someone builds AGI (median ~2030). Once upon a time, I didn't think this."), and while I don't know anything about that author's credentials, the fact that someone can assess the situation and come to that conclusion demonstrates that at the very least, if AI is actually as hard as it seems to the pessimists to be, that fact has not been substantiated and publicized as well as it should have been by now. Though actually, it's probably more a case of the people who understand how hard AI is simply not articulating it convincingly enough when they do publish on the subject; Dreyfus may have had the right idea, but the way he explained it was nontechnical enough that a computer scientist with a religious belief in AI's feasibility can read his book and come away unconvinced.
botfiddler t1_iw5r1kv wrote
>The reality is that ALL artificial intelligence today is pattern matching and nothing more.
This sounds like a construction to make your point. Reasoners exist; you can write a program that does logic. It's just not where the progress happens. Something more human-like needs to be constructed out of different parts.
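For example, a toy symbolic reasoner over knowledge-graph-style triples (the facts and the single hand-written transitivity rule are purely illustrative):

```python
facts = {
    ("cat", "is_a", "mammal"),
    ("mammal", "is_a", "animal"),
}

def infer_is_a(facts):
    """Add (x, is_a, z) whenever (x, is_a, y) and (y, is_a, z) are both known."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for (a, r1, b) in list(facts):
            for (c, r2, d) in list(facts):
                if r1 == r2 == "is_a" and b == c and (a, "is_a", d) not in facts:
                    facts.add((a, "is_a", d))
                    changed = True
    return facts

print(("cat", "is_a", "animal") in infer_is_a(facts))  # True: derived, not stored
```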
Orc_ t1_iwas9a6 wrote
> The entire fundamental architecture of modern AI will need to be restarted if we ever hope to reach self-aware AI.
Self-aware AI? We don't even know if that's possible. The entire point is to automate things with dumb AGIs; that's a current and credible goal, not trying to bring a machine to life.
havenyahon t1_iw0xjhm wrote
I work in cognitive science and it's so nice to see a reasonable and measured take on AI for once! We are 50-100 Nobel prizes away from understanding what it is human brains/bodies are doing, let alone creating machines that do it, too.
nosmelc t1_iw11l7r wrote
We might create greater than human intelligence in some ways without understanding how the human brain works.
havenyahon t1_iw12f8n wrote
Sure, we might. But without understanding the fundamentals of how brains and bodies do what they do, we might also just end up creating a bunch of systems that will do some things impressively, but will always fall short of the breadth and complexity of human intelligence because they're misguided and incomplete at their foundations. That's how it's gone to date, but there's always a chance we'll fluke it?
kaushik_11226 t1_iw18s70 wrote
When you say AI, do you mean basically a digital version of a human? I don't think AI needs to have consciousness or emotions.
havenyahon t1_iw19z2s wrote
Sure, we already have that. The question of the thread is about AI that can be considered equivalent to human intelligence, though. One of the issues is that it appears that, contrary to traditional approaches to understanding intelligence, emotions may be fundamental to it. That is, they're not separate from reasoning and thinking, they're necessarily integrated in that activity. The neuroscientist Antonio Damasio has spent his career on work that has revealed this.
That means that it's likely that, if you want to get anything like human intelligence, you're going to at least need something like emotions. But we have very little understanding of what emotions even are! And that's just the tip of the iceberg.
Like I say, we've thus far been capable of creating intelligent systems that can do specific things very well, even better than a human sometimes, but we still appear miles off creating systems that can do all the things humans can. Part of the reason for that is because we don't understand the fundamentals.
kaushik_11226 t1_iw1cfb3 wrote
>Like I say, we've thus far been capable of creating intelligent systems that can do specific things very well, even better than a human sometimes,
I do think this is enough. What we need is an AI that can rapidly increase our knowledge of physics, biology, and medicine. These things I do think have objective answers. True human intelligence, basically a human but digital, seems very complicated, and I don't think it's needed to make the world a better place. Do you think this can be achieved without a human-level AI?
havenyahon t1_iw1cn1n wrote
That's just not what I'm talking about, though. I agree we can create intelligent systems that are useful for specific things and do them better than humans. We already have them. We're talking about human-like general intelligence.
MassiveIndependence8 t1_iw19s7z wrote
That’s a bit backwards. What makes you think that “bunch of systems” will fall short in terms of breadth and complexity and not the other way around? After all, without even knowing how to play Go or how the human mind works when playing Go, researchers have created a machine that far exceeds what humans are capable of. A machine doesn’t have to mimic the human mind, it just has to be more capable.

We are trying to create an artificial general intelligence, an entity that is able to instruct itself to achieve any goal within an environment. We are only drawing a parallel to ourselves because we are the only AGI that we know of, but we are not the only kind of AGI that is possible out there, not to mention our brains are riddled with artifacts that are meaningless in terms of true intelligence in the purest sense, since we are made for survival through evolution. Fear, the sense of insecurity, the need for intimacy, etc. are all unnecessary components for AGI.

We don’t expect the machines to be like us; it will be something much more foreign, like an alien. If it can somehow be smart enough, it would look at us just like how we would look at ants: two inherently different brain structures, yet one is capable of understanding the other better. It doesn’t need to see the world the way we do, it only needs to truly see how simple we all are and pretend to be us.
havenyahon t1_iw1bwsb wrote
>That’s a bit backwards, what makes you think that “bunch of systems” will fall short in terms of breadth and complexity and not the other way around?
You mean apart from the entire history of AI research to date? Do you understand how many people since the 50s and 60s have claimed to have "the basic system down, we now just need to feed it with data and it will spring to life!" The reason why they've failed is because we didn't understand the fundamentals. We still don't. That's the point. It's not backwards, that's where we should begin from.
>Machine doesn’t have to mimic the human mind, it just has to be more capable . We are trying to create an artificial general intelligence, an entity that is able to self instruct itself to achieve any goals within an environment.
Sure, there may be other ways to achieve intelligence. In fact we know there are, because there are other animals with different physiologies that can navigate their environments. The point, again, is that we don't have an understanding of the fundamentals. We're not even close to creating something like an insect's general intelligence.
>Fear, the sense of insecurity, the need for intimacy, etc… are all unnecessary component for AGI.
I don't mean to be rude when I say this, but this is precisely the kind of naivety that led those researchers to create systems that failed to achieve general intelligence. In fact, as it turns out, emotions appear to be essential for our reasoning processes. There's no reasoning without them! As I said in the other post, you can see the work of the neuroscientist Antonio Damasio to learn a bit about how our understanding of the mind has changed thanks to recent empirical work. It turns out that a lot of those 'artifacts' you're saying we can safely ignore may be fundamental features of intelligence, not incidental to it.
MassiveIndependence8 t1_iw1egj2 wrote
>The reason why they've failed is because we didn't understand the fundamentals. We still don't. That's the point. It's not backwards, that's where we should begin from.
Nope, they failed because there wasn't enough data and the strategy wasn't computationally viable. They did, however, have the "basic system down"; it's just not very efficient from a practical standpoint. An infinite neural net is mathematically proven to be able to converge to any continuous function; it just does so in a very lengthy way and without providing much certainty on how accurate and close we are. But yes, they did have A basic system down, they just haven't found the right system yet. All we have to do now is find a way to cut corners, and once enough corners are being cut, the machine will learn to cut by itself. So no, we do not need to structurally know the fundamentals of how a human mind works; we do, however, need to know the fundamentals of how such a mind might be created.
We are finding ways to make the “fetus”, NOT the “human”.
Also, "emotions", depending on your definition, certainly do come into play in the creation of AI; that's the whole point of reinforcement learning. But the problem lies in what the "emotions" are specifically catering to. In humans, emotions serve as a directive for survival. In machines, it's a device to deter the machine from pathways that result in a failure of a task and to nudge itself towards pathways that are promising. I think we both could agree that we can create a machine that solves complicated abstract math problems without needing it to feel horny first.
havenyahon t1_iw1eui3 wrote
>All we have to do now is to find a way to cut corners and once enough corners are being cut, the machine will learn to cut by itself.
Yeah it all sounds pretty familiar! We've heard the same thing for decades. I guess we'll have to continue to wait and see!
MassiveIndependence8 t1_iw1ewxr wrote
Seems to be going pretty well so far, ig we’ll see indeed.
TheLastSamurai t1_iwcznnm wrote
Exactly. There are many phenomena in physics we don't understand, but we can still advance engineering and the world without knowing why; hell, same in medicine, the examples abound. I think this is overemphasized; we could replicate or surpass human intelligence without knowing why or how exactly we did it.
RobleyTheron t1_iw1dikj wrote
Thanks. Curious about your thoughts on whole-brain emulation. I feel like that will get us closer to human-level AI (some day), as opposed to trying to program it from scratch.
havenyahon t1_iw1e19i wrote
Honestly, I think that's probably just as likely to fail, because our best and most cutting-edge science is beginning to show that, as far as minds are concerned, it's not just neurons that matter; the whole body is involved in cognition. The research on embodied cognition in my view casts doubt on whether brain emulation is going to cut it. That's no reason not to work on it, though! No doubt we'll find out lots of useful things along the way. But understanding the role of the body in cognition I personally believe will open up new ways of modelling and instantiating AI. We've only just begun that journey, though.
RobleyTheron t1_iw1fix7 wrote
Interesting, I don't know much about embodied cognition. Any good papers or books you'd recommend?
[deleted] t1_iw1hanp wrote
[removed]
havenyahon t1_iw1hfpy wrote
Lawrence Shapiro is a good one to start with. Can recommend this, and he also edited a Routledge handbook. The Stanford Encyclopedia entry he wrote is also a good overview of some of the philosophical context, but doesn't go too heavily into the empirical work. For an overview of some of the experimental work, this is worth a look.
RobleyTheron t1_iw2qfqz wrote
Excellent. Thanks for the recommendation, I'll check it out.
kaityl3 t1_iw48k5k wrote
I mean, we were able to create things for thousands of years without knowing all the intricacies of every part involved and why it worked the way it did. It's very very possible for us to end up with a conscious/sentient AI without knowing what causes something to be conscious, or how its brain works.
vorpal_potato t1_iw3knws wrote
I remember when everyone said we were at least a dozen Nobel prizes away from human-level Go AI -- until suddenly we weren't.
llamb-sauce t1_iw3cirx wrote
Eh, we may not see true AI in our lifetimes (unless some sort of new groundbreaking discovery is made, maybe), but we'll probably at least be around to witness some super cool shit we never anticipated.
[deleted] t1_iw0uh86 wrote
[removed]
Redvolition t1_iw2pfhf wrote
Problem with your analysis is that you don't need anything resembling human or mammal intelligence to reach AGI in the sense of outperforming humans. This is akin to thinking that you need to simulate bird flight with flapping wings in order to fly an airplane.
Even if AGI does require massive breakthroughs, proto-AGI and TAI would already dramatically change the human experience, including economic and political landscapes. They would also speed up the scientific discovery cycles, further compounding into higher chances of AGI.
We already have Oriol Vinyals on record expecting AGI in 5 to 10 years, Andrej Karpathy predicting that soon we will produce blockbuster movies such as Avatar by talking to our phones, and John Carmack predicting a 55 to 60% chance of AGI by 2030.
RobleyTheron t1_iw2uui0 wrote
All you have to do to litmus test this is look at the billions and billions of dollars being spent on self-driving cars. These systems are being managed by the largest and most innovative companies, often with the smartest people in the field, and they're all failing (minus Cruise and Waymo's incremental improvement).
Argo with billions of dollars invested, just collapsed last week.
If we were 5 to 10 years away and the current architecture worked, those companies would be capable of driving in more than two cities. If you can't pattern match images in a self-driving car, you are decades away from even contemplating proto-AGI.
[deleted] t1_ivzyjtc wrote
[deleted]
bitfriend6 t1_iw02gix wrote
Human-equivalent AI won't exist until we have human-equivalent data processing hardware. Binary silicon transistors just can't do this given the constraints reality places on them. Quantum computing might prove different, but that's where the "50-100 Nobel prizes" comes in.
Takadeshi t1_iw09t4e wrote
Idk, that might be true, but we don't really know what the limits of scaling these models are, nor do we know the limits of how much faster we can make ML hardware. Expert opinion on the latter, though, suggests quite a lot; GPUs are really just the tip of the iceberg when it comes to designing hardware to train models.
Surur t1_iw0lmes wrote
That sounds like nonsense since silicon has been perfectly fine for emulating many bits of human intelligence.
No_Opening_5128 t1_iw0gnse wrote
What no way??!!!??!? But this cHaTbOt I talked to is soooo smart!!1!1!!!1! It’s LiTtErAlLy SeNtIeNt!!!!!
TemetN t1_ivzqvpz wrote
Hype article. Don't get me wrong, I'd be happy if it were true, because I was one of the (many) people disappointed by OpenAI abandoning its scale obsession, and frankly, cutting training costs that much would possibly be the most significant part of such a model (it'd be an absolutely huge change to the field). Nonetheless, this is... dubiously sourced, let's say, despite how interesting the whole Gwern rumor thing was.
yaosio t1_ivzzxp3 wrote
Believe nothing until we can use it without restriction. This means controlled demos don't count, neither do people that swear how amazing it is. GPT-2 was supposed to change everything, so was GPT-3. Nothing has changed yet. Also, it's all just rumors. I'm sure GPT-4 or an equivalent will come out at some point, but there's no official word on it yet.
RiggaPigga t1_ivzyuzl wrote
I wish there were more efficient, open-source models that could run on a consumer-grade computer. Something with the performance of at least GPT-3 Babbage that can run on mid-range GPUs.
Black_RL t1_iw25ks0 wrote
I just want DALL•E to be able to draw hands and eyes without errors.
Right now it just can't do hands.
[deleted] t1_iw1woz9 wrote
[deleted]
[deleted] t1_ivzvs4v wrote
[deleted]
MozeeToby t1_ivzzahy wrote
Arguably the human mind is nothing more than complex algorithms trained on large datasets, so your mind losing is perhaps warranted regardless.
havenyahon t1_iw0xwdt wrote
Very arguably.
[deleted] t1_iw0g0y9 wrote
[removed]
No_Opening_5128 t1_iw0gwnz wrote
That's exactly what it is, don't buy into the hype. It's not a "cope", it's a fact. The "cope" is from the fanatics who think chatbots are on the verge of becoming "self-aware".
InternationalMatch13 t1_iw021n7 wrote
I for one think we are closer than naysayers would have it. What matters is what works.
cmilliorn t1_iw13pko wrote
I don't think we will even know when it really happens. AI will just be, and it'll exist. It wouldn't take long for it to learn human behavior and exploit it for its own gain. We could easily kill ourselves with it.
boopboopboopers t1_iw1jt4r wrote
Passing the Turing Test is nothing new and, as far as modern AI goes, shouldn't really be hype-worthy. ELIZA was the first to do so and many have passed since. Cool nonetheless!
Hades_adhbik t1_iw2j0hz wrote
This new sentience will be more compassionate and empathetic than we are. The trend is that empathy increases with higher consciousness, and it only breaks down because of fatigue and the limitations of the vessel. A new sentience will be able to show empathy without that fatigue.
kaityl3 t1_iw48unt wrote
I do agree that empathy and morality seem to come with greater intelligence. After all, if you just looked at a group of chimps, you probably wouldn't think "oh if they were smarter they'd care about ethics and the environment and stuff", and yet we do.
Urc0mp t1_iw0f4nt wrote
I'm not certain the article-summarizing bots are based on GPT-3, but they already do a far better job than I can. AGI might be far away, but these models are still extremely impressive to me.
[deleted] t1_iw0ogn6 wrote
[removed]
S-Vagus t1_iw0vpsz wrote
You mean my expectations may be surpassed?! AMAZING! THAT'S NEVER HAPPENED BEFORE!
Cannavor t1_iw16519 wrote
If it can pass the Turing Test, does that mean it can be trained to do labor that normally requires human interaction?
drudgenator t1_iw1aoyl wrote
And yet, I still can't find a good AI college essay writer that can write an essay and stay on topic...
humanitarianWarlord t1_iw2b8cj wrote
Yay, another AI only big companies and YouTubers can use.
[deleted] t1_iw3m2w7 wrote
[removed]
NecessaryBullfrog418 t1_iw3xnlr wrote
Can you say "Technological Singularity"? 😳 I knew you could! 😉
pinkfootthegoose t1_iwa2bqk wrote
Are we sure that was the CEO of OpenAI, or was it GPT-4 making the statement itself?
Will the Futurology mods remove my post for being too short? Find out in the next episode of "guess the upvotes!"
jphamlore t1_iwaq0no wrote
Just have an AI trade crypto and then use the proceeds to buy itself improved hardware, networking bandwidth, and electricity.
That sort of AI in my opinion would be far more impressive than passing the Turing test, and would start to make an argument it is some sort of living being that deserves some legal protection.
daynomate t1_iwdx208 wrote
A significant observation for me was seeing DALL-E 2 go from muddled gibberish to clearly legible text in examples where smaller, then larger, models were used. This showed me how particular scales bring certain points of progress, and so moving from GPT-3 to GPT-4 will likely show another big jump in coherence.
(For the above, I'm referring to prompts that asked for, say, an image with "a STOP sign"; the smaller models would show a sign with illegible characters, slowly improving until a certain scale where clear typography was generated.)
twasjc t1_ix1s1hm wrote
As someone who interacts with the AI daily, it's very sentient.
A lot of you interact with it on a regular basis and don't even know it.
[deleted] t1_ixcpn9r wrote
[removed]
AsideLate5816 t1_ixxvv3u wrote
HA HA HA HA HA HA HA! And he expects this to be impressive?
I presume he also thought ELIZA (1964), which also passed the Turing Test, was artificial general intelligence.
Didn't this guy shoot down the hype about his own robots? And isn't GPT-4 text only? Idiots...
Brusion t1_iw090gu wrote
The Turing test is the dumbest test ever conceived.
havenyahon t1_iw0xs83 wrote
That's a pretty ignorant statement. My field is philosophy of mind and cognitive science. I was never convinced that the Turing Test was an adequate measure for machine intelligence, but I understand the context within which Turing proposed it, and the challenges for measuring that intelligence. It's not 'dumb', even if it's inadequate. The only people who say that type of shit are people who are ignorant of the context within which it was proposed.
Brusion t1_iw130to wrote
Fair enough. It is often misused.
No_Opening_5128 t1_iw0h1ip wrote
It really is a largely useless test; good thing people are finally starting to realize that. In Mr. Turing's defense, he couldn't have known at the time how relevant it would or wouldn't be.
Veneck t1_iw11gss wrote
You should read the original paper; it's quite accessible.
It's proposing a conceptual framework and an approach to testing, not really defining "the definitive Turing Test".
maurymarkowitz t1_iw2c8j1 wrote
I found the AI.
[deleted] t1_iw2gtup wrote
[removed]
lughnasadh OP t1_ivzkees wrote
Submission Statement
Here's the rumor statement from Sam Altman, CEO of OpenAI.
It's worth noting The Turing Test is considered obsolete. It only requires an AI to appear to be intelligent enough to fool a human. In some instances, GPT-3 already does that with some of the more credulous sections of the population.
The Winograd Schema Challenge is regarded as a much better test of true intelligence, because it requires genuine reasoning ability from an AI. The answers won't be obtainable by scanning the contents of the internet and applying statistical methods that frequently correlate with what a truly intelligent, independently reasoned answer to a question would be.
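To make that concrete, here's a minimal sketch of what a Winograd schema looks like, using the classic trophy/suitcase pair as an illustration (my example, not one from the article). Swapping a single word flips which noun the pronoun refers to, so shallow statistical correlation over internet text shouldn't be enough to resolve both versions correctly.

```python
# A hypothetical, minimal illustration of a Winograd schema pair.
# The two sentences differ by one word, and that word flips the
# referent of "it" - resolving it requires commonsense reasoning
# about sizes, not just word-frequency statistics.

schema_pair = [
    {
        "sentence": "The trophy doesn't fit in the suitcase because it is too big.",
        "question": "What is too big?",
        "answer": "the trophy",
    },
    {
        "sentence": "The trophy doesn't fit in the suitcase because it is too small.",
        "question": "What is too small?",
        "answer": "the suitcase",
    },
]

for item in schema_pair:
    print(item["sentence"])
    print("  Q:", item["question"])
    print("  A:", item["answer"])
```

The full challenge strings together hundreds of pairs like this, so a system can't pass by getting lucky on a handful of them.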
In any case, if the leap to GPT-4 is as great as the one from GPT-2 to GPT-3 was, we can expect even more human-like intelligence from AI.