Comments

GoldenRain t1_j2z549y wrote

>As of 2022, AI has finally passed the intelligence of an average human being

This is wrong on so many levels. AI is still nowhere near human intelligence. It does not learn continuously and adapt on the fly, nor does it understand cause and effect.

78

manOnPavementWaving t1_j2zbwee wrote

I agree, but your arguments are so-so. It adapts on the fly through in-context learning, continuous learning is just an implementation detail, and it actually does have a decent understanding of cause and effect.

20

Borrowedshorts t1_j30db4j wrote

Agreed, cause and effect has already been demonstrated on much smaller models. Seems OP is making up random limitations and hoping they stick.

3

Borrowedshorts t1_j30d0cx wrote

In some ways, it already has way more intelligence than even >90th-percentile humans. ChatGPT can write a good-quality 5-page essay in seconds that might take most humans at least 5 hours. It has a breadth of knowledge that few humans can match. No, it doesn't learn continuously, but I'd say in some ways it is pretty adaptive, and cause and effect really isn't that difficult.

12

TopicRepulsive7936 t1_j32qylr wrote

But you see, we want exactly human-like intelligence; it doesn't matter that we already have 10 billion of them.

3

LoquaciousAntipodean t1_j2zhgrb wrote

Your stumbling point is in your understanding of what 'intelligence' even is; psychologists have understood for years that there's no such thing as 'raw intelligence', there is only ever, and has only ever been, contextual, situational intelligence. We (that is, human society) cannot even agree on what 'IQ' is even supposed to be measuring, much less agree that it is a useful metric of anything very much.

Intelligence is a process, not a thing; it's the fire, and not the smoke. We won't get very far in understanding the nature of the fire of intelligence, if we just keep mucking around looking at the different bits of firewood, and trying to spot patterns in the smoke... We need to think deeper about it.

8

ginger_gcups t1_j30ksah wrote

That could easily describe about 50% of my neighbours, coworkers, customers.... /s

6

AdminsBurnInAFire t1_j33m6ri wrote

I can’t believe such a subjective comment got so many upvotes. Are you the arbiter of human intelligence?

I can't believe people are still handwaving away the magnitude of AI progress in such a short time. Four years ago these systems were barely functional.

1

Mokebe890 t1_j2zadrc wrote

It should be rephrased as the beginning of the coming of AGI: the first year that ML, NNs, and AI in general were really seen by the public. 2023-2025 are gonna be wild.

75

Successful_Ad2287 t1_j30figw wrote

My brain is broken. What are NN and ML?

6

IntelligentBand467 t1_j30g1w8 wrote

just ask openai to explain it to you

22

Successful_Ad2287 t1_j30gi6j wrote

Machine learning (ML) is a subset of artificial intelligence (AI) that involves training algorithms on data so that they can make predictions or decisions without being explicitly programmed to do so. For example, a machine learning model might be trained to recognize patterns in data and make predictions about the likelihood of certain outcomes based on those patterns.

A neural network (NN) is a type of machine learning algorithm that is inspired by the way the human brain works. It is composed of layers of interconnected "neurons," which process and transmit information. Neural networks are particularly good at recognizing patterns and making decisions based on data, and they have been used to achieve state-of-the-art results in many areas, including image and speech recognition, natural language processing, and even playing games.
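
If a concrete toy helps, here's a minimal sketch (my own illustration, not ChatGPT's output) of a tiny neural network learning from data in plain Python/numpy; the layer sizes, learning rate, and step count are arbitrary:

```python
# Minimal illustration: a tiny 2 -> 8 -> 1 network learning XOR.
# Plain numpy, no ML framework; hyperparameters are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # forward pass: each "neuron" is a weighted sum passed through a squashing function
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass (gradient descent on squared error): nudge every weight
    # a little in the direction that reduces the error
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))  # should end up close to [[0], [1], [1], [0]]
```

Models like GPT work on the same basic principle of adjusting weights to reduce an error signal, just with billions of weights and enormous amounts of text.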

17

blueSGL t1_j30ofaj wrote

and that is why models like this will change the world.

being able to summon information and clarification at the touch of a button in an easy to digest package.

Amateurs becoming more expert in their chosen niche interests: I'm sure this will provide a few cross-domain 'ah-ha' moments.

all the autodidacts out there are going to have a field day.

20

TheRidgeAndTheLadder t1_j30sxmz wrote

Autodidact might have to be redefined after this.

Are you an autodidact if you watch a course on YouTube?

4

Think_Olive_1000 t1_j310ybl wrote

Depends how successful you did act after your didactics and did you land the acrobatics?

−1

dasnihil t1_j31hvc3 wrote

Yep, it's not about how you learn something, it's about if you learn something.

AI can help with most repetitive yet curious questions and teachers could monitor progress and help with things AI can't, yet.

2

Jesweez t1_j325ixq wrote

Too bad it makes up convincing bullshit so frequently though

3

TheSecretAgenda t1_j32v1w0 wrote

Little children lie and make-up stories all the time. AI is an infant at this point.

3

blueSGL t1_j326npb wrote

>it makes up convincing bullshit

This is the equivalent of the critique for image generation models "It can't draw hands / It can't do text"

2

EOE97 t1_j3131ge wrote

Just when you thought we had it all for information access, with the Internet and search engines.

This just makes it even more streamlined and tailored to your questions.

2

saggio89 t1_j30gpwz wrote

NN is neural network and ML is machine learning. These are the techniques used to create the AIs you're seeing. Machine learning makes it so systems can learn and train themselves through experience. A neural network aims to replicate roughly what the brain is doing, where each "node" does a small computation and passes its result on to other nodes.

2

[deleted] t1_j2zm5y6 wrote

[deleted]

27

Borrowedshorts t1_j30dr0i wrote

Attempting to skip an entire autonomy level was idiotic and has slowed progress in the AV industry immensely. Engineers and business leaders bet big that they could skip L3 autonomy. Well they were wrong.

3

BellyDancerUrgot t1_j30tyxp wrote

Comparing current AI to AGI is laughable. To quote Yoshua Bengio: "current AI algorithms are dumber than a dog." IIRC that's what he said in 2021 in a video interview. None of the leading researchers in the field, be it LeCun or Bengio or Parikh or Hinton, think we are remotely close to basic human intelligence. Comparing GPT to a human is stupid. It literally parrots information it memorized. Attention and self-attention aren't magic. We are at a stage where AI, or rather PI, is good enough to understand some context for some words because it has seen them billions of times. In fact, we aren't even at a stage where any model can reliably avoid hallucinating random things that aren't true. So it technically doesn't even understand true context. Ask any worthwhile researcher in the field and they'll tell you this article is complete garbage.

There's an entire branch of ML that focuses on scaling. Irina Rish is one of the big names behind the "scale is all you need" motto. Is she right? Maybe! But even she'll tell you that we aren't within reach of the dumbest human being when it comes to intelligence.

−1

marvinthedog t1_j3123ct wrote

If AI algorithms of 2021 were remotely comparable to a dog it seems to me that we are getting really, really, really close.

4

visarga t1_j30wx6i wrote

> Comparing GPT to a human is stupid. It literally parrots information it memorized.

Can I say you are parroting human language because you are just using a bunch of words memorised somewhere else?

No matter how large is our training set, most word combinations never appear.

Google says:

> Your search - "No matter how large is our training set" - did not match any documents.

Not even these specific 8 words are in the training set! You see?

Language Models are almost always in this domain - generating novel word combinations that still make sense and solve tasks. When did a parrot ever do that?

2

BellyDancerUrgot t1_j311o8o wrote

No, because humans do not hallucinate information and can derive conclusions based on cause and effect for subjects they haven't seen before. LLMs can't even differentiate between cause and effect without memorizing patterns, something humans can naturally do.

And no, human beings do not just parrot information. I can reason about subjects I have never studied because human beings do not parrot words; they actually understand them rather than memorizing spatial context. It's like we are back at the stage when people thought we had finally developed AGI, back when Goodfellow's paper on GANs was published in 2014.

If you actually get off the hype train you will realize most major industries use gradient boosting and achieve almost the same generalization performance for their needs as an LLM trained with giga fking tons of data. Because LLMs can't generalize well at all.

1

bernard_cernea t1_j30sckp wrote

The highest-voted comment on that blog, by jbash, for your convenience:

I really don't care about IQ tests; ChatGPT does not perform at a human level. I've spent hours with it. Sometimes it does come off like a human with an IQ of about 83, all concentrated in verbal skills. Sometimes it sounds like a human with a much higher IQ than that (and a bunch of naive prejudices). But if you take it out of its comfort zone and try to get it to think, it sounds more like a human with profound brain damage. You can take it step by step through a chain of simple inferences, and still have it give an obviously wrong, pattern-matched answer at the end. I wish I'd saved what it told me about cooking and neutrons. Let's just say it became clear that it was not using an actual model of the physical world to generate its answers.

Other examples are cherry picked. Having prompted DALL-E and Stable Diffusion quite a bit, I'm pretty convinced those drawings are heavily cherry picked; normally you get a few that match your prompt, plus a bunch of stuff that doesn't really meet the specs, not to mention a bit of eldritch horror. That doesn't happen if you ask a human to draw something, not even if it's a small child. And you don't have to iterate on the prompt so much with a human, either.

Competitive coding is a cherry-picked problem, as easy as a coding challenge gets... the tasks are tightly bounded, described in terms that almost amount to code themselves, and come with comprehensive test cases. On the other hand, "coding assistants" are out there annoying people by throwing really dumb bugs into their output (which is just close enough to right that you might miss those bugs on a quick glance and really get yourself into trouble).

Self-driving cars bog down under any really unusual driving conditions in ways that humans do not... which is why they're being run in ultra-heavily-mapped urban cores with human help nearby, and even then mostly for publicity.

The protein thing is getting along toward generating enzymes, but I don't think it's really there yet. The Diplomacy bot is indeed scary, but it still operates in a very limited domain.

... and none of them have the agency to decide why they should generate this or that, or to systematically generate things in pursuit of any actual goal in a novel or nearly unrestricted domain, or to adapt flexibly to the unexpected. That's what intelligence is really about.

I'm not saying when somebody will patch together an AI with a human-like level of "general" performance. Maybe it will be soon. Again, the game-playing stuff is especially concerning. And there's a disturbingly large amount of hardware available. Maybe we'll see true AGI even in 2023 (although I still doubt it a lot).

But it did not happen in 2022, not even approximately, not even "in a sense". Those things don't have human-like performance in domains even as wide as "drawing" or "computer programming" or "driving". They have flashes of human-level, or superhuman, performance, in parts of those domains... along with frequent abject failures.

25

ebolathrowawayy t1_j32jtql wrote

> Other examples are cherry picked. Having prompted DALL-E and Stable Diffusion quite a bit, I'm pretty convinced those drawings are heavily cherry picked; normally you get a few that match your prompt, plus a bunch of stuff that doesn't really meet the specs, not to mention a bit of eldritch horror.

Clearly he barely used SD.

3

AsuhoChinami t1_j32o2r3 wrote

Yeah. You can debate the overall intelligence of AI, but AI art and image generation is now very good. There is simply no getting around this.

2

ebolathrowawayy t1_j32ps2r wrote

His unwillingness to engage with the material in front of him led him to mischaracterize image gen. It makes me think most of his arguments are poor because image gen isn't the only thing he didn't engage with.

Yes ChatGPT has some pretty serious flaws, but they seem to be solved by other models. I won't be surprised when gpt-4 comes out and is indistinguishable from an extremely smart human.

3

Left-Shopping-9839 t1_j31u10m wrote

Oh, you sound like someone who has actually used the tools. All the hype is from people who've only read about them. Just try putting code written by Copilot straight into production!

2

[deleted] t1_j321dkw wrote

[deleted]

1

Left-Shopping-9839 t1_j3243r8 wrote

If you actually do real software development, you'd know this isn't possible. By 'you' I mean anyone, not specifically you. I have spent hours tracking strange errors back to the fact that I didn't check the Copilot code closely enough. It does a great job providing code that is 90% correct, but it often slips in undeclared variables, etc. This is not 'intelligence'. It's just an awesome code completion tool that makes a lot of mistakes but still saves a lot of typing.
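
To illustrate the kind of slip I mean, here's a made-up example (the function and data are invented, not real Copilot output):

```python
# Hypothetical illustration of a "90% correct" suggestion. The names and data
# are made up; this is the pattern of bug I keep catching, not real Copilot output.

def average_order_value(orders):
    total = 0.0
    for order in orders:
        total += order["amount"]
    # A plausible-looking completion might end with `return total / count`,
    # where `count` was never declared -- it reads fine at a glance and only
    # blows up with a NameError at runtime.
    return total / len(orders) if orders else 0.0

print(average_order_value([{"amount": 10.0}, {"amount": 30.0}]))  # 20.0
```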

2

[deleted] t1_j32ca6v wrote

[deleted]

2

Left-Shopping-9839 t1_j32drm7 wrote

Agree 100%. In my company (and I think most others are this way) your code has to pass tests. This is what is missing in the Copilot model. They would need to track feedback all the way to production and the applied fixes to know whether a code suggestion is good. That is the sort of learning loop which needs to be in place to even start to claim intelligence. Hopefully they are working on this. I use Copilot and honestly I love it. It's not going to be replacing humans in its current iteration, yet the hype train keeps rolling. LLMs are mockingbirds. They are impressively good, but still mockingbirds. DALL-E, imo, is shit.

1

footurist t1_j329a5h wrote

Yes, that's what I came to conclude early on when reading about this for coding.

One thing it's really, really good for, though, is providing a context-sensitive solution (sometimes) to give you a head start.

1

Left-Shopping-9839 t1_j32f2t0 wrote

I use Copilot for everything and I love it. There are times when it spits out code that looks exactly like what I'm thinking and does it better than I could. In those moments I could easily claim the singularity has arrived. The next time, it creates something that uses a library of functions I don't even have imported, and which sometimes doesn't even exist, lol. So even if they work out the simple stuff, it's still a long way from being anything other than awesome code completion.

2

ebolathrowawayy t1_j32k5fs wrote

I haven't used copilot but chatgpt is great for generating a starting point for libraries you've never used before. Literally better than the official docs in most cases.

Edit: Have you used both chatgpt and copilot? How do they differ for code gen?

1

footurist t1_j32nl8k wrote

Yes, the inventing part can be incredibly funny. Just recently it had me fooled completely. It was making up believable functions like there's no tomorrow. "updateTargetValue", "reverseDirection". Lmao.

1

ZenMind55 t1_j32wqxu wrote

People don't understand the concept of exponential increases. All of the things they mention have only come about in the last couple of years. Saying "it's not as smart as you think" will become incorrect very soon.

Just like in the image, it's a small gap between a dumb human and Einstein, and it will bridge that gap sooner than we think.

2

DukkyDrake t1_j34ncft wrote

It doesn't matter if people convince themselves or others that some AI tool is AGI. The only thing that matters is whether the tool is competent at important tasks; giving it a particular name doesn't change its competency.

All we have are superintelligent tools that are good at unimportant things. They're unreliable because they don't really understand anything. It will take serious engineering to integrate unreliable tools into important systems, which will limit their spread in the physical world.

1

Ortus14 t1_j2zx5bv wrote

Most people are unaware of the exponential progress and feedback loops.

AI is being used for every piece of AI advancement now (solar energy cost reduction, computer chip design, supercomputer design, cooling management, coding assistance, code and algorithm optimization, as well as product value bringing in more capital for reinvestment in researchers and production). These are all feedback loops amplifying each other. The Foom has begun.

19

[deleted] t1_j306hi6 wrote

[deleted]

1

Ortus14 t1_j30qgb4 wrote

One example is DeepMind's AlphaTensor, which optimizes matrix multiplication, an operation at the heart of AI: an AI optimizing the algorithms used in AI.

https://thenewstack.io/how-deepminds-alphatensor-ai-devised-a-faster-matrix-multiplication/
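
For a concrete sense of what AlphaTensor is searching for: the classic hand-derived example is Strassen's 1969 trick, which multiplies 2x2 matrices with 7 scalar multiplications instead of the naive 8. AlphaTensor looks for decompositions of this kind automatically; the sketch below is just Strassen, for illustration:

```python
# Strassen's 2x2 decomposition: 7 scalar multiplications instead of 8.
# AlphaTensor searches for decompositions like this for larger shapes.
import numpy as np

def strassen_2x2(A, B):
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4,           m1 - m2 + m3 + m6]])

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6.], [7., 8.]])
print(strassen_2x2(A, B))  # matches the naive product below
print(A @ B)
```

Saving one multiplication per 2x2 block compounds when applied recursively to big matrices, which is why this kind of search matters for AI workloads.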

As far as ChatGPT/Copilot goes, I can't speak for your company specifically, but it would certainly surprise me if a company like OpenAI didn't use its own coding assistant product. Copilot is also based on GPT-3, so it's going to lag behind GPT-4/ChatGPT (which OpenAI has access to because they are the developers).

Programming is also sped up by other feedback loops, such as new programming languages being written in old programming languages, which has been accelerating software development for decades. The distinction between "AI" and non-intelligence comes down to semantics, but I would consider any information-processing system that's used to solve problems intelligent, including programming languages and IDEs. Most people have a more limited and anthropomorphized concept of intelligent systems, but regardless of what you call them, these are also feedback loops speeding up software development, AI included.

6

[deleted] t1_j31mwg2 wrote

[deleted]

1

ebolathrowawayy t1_j32ls76 wrote

> solving novel problems

What is a novel problem? I've never really come across one and I've been in the field for over a decade. Maybe I am unskilled. I imagine that a day in the life of a typical programmer is ... do X Y and Z feature, don't break CI, move some tasks to the QA column, talk to a dev about an issue they found, fix CI that someone else broke, explain to the manager why Z feature is taking too long and go home. X Y and Z features could be: cobble together a home page, add a physics collider to a component that triggers an event, add a column to a DB and create a new REST endpoint, etc. All super basic ass stuff that eventually turns into a product that prints money for someone higher up.

Where's the novelty in software dev, excepting fields like ML? I predict that within a year, LLMs will be able to do the tasks that 90% of SW engineers do (including architecture design).

Edit: I've talked with other programmers a lot about this and architecture design comes up a lot. IMO, architecture design is basically picking your pokemon team. I need fast messaging with 100k users and the app should be accessible to many people across devices and there is no complex data analysis -- Ok Nodejs, React and MongoDB, I choose you!

I need an app that does heavy image manipulation that is resource intensive with a lot of interactive data analysis -- Ok C++, ImageMagick, D3js and Postgres, I choose you! etc. Architecture is simple, I'd like to hear why it isn't.

3

[deleted] t1_j32qpy5 wrote

[deleted]

2

ebolathrowawayy t1_j32twcm wrote

I just don't see it as novel if a customer asks you to build them a website with a data dashboard. I think the majority of work is cobbling together small pieces of stuff in very slightly new ways and that mostly the value comes from displaying domain data, connecting data to other data or connecting users to other users.

If a majority of software work required novel problem solving then I don't think very popular and widely used libraries like React, Angular, Tableau, Squarespace, Unity, etc. would exist. Today's developer picks a couple of libraries, slaps together some premade components and then writes a data parser for a customer's data and does stuff to it. I really do think the majority of work can be done by following medium articles and stackoverflow posts.

Even gamedev, widely considered to be "hard", is really not that novel. It's composed of a bunch of small pieces of code that everyone uses. Most AAA games don't deviate from typical game design patterns, they innovate by pouring money into small details, like horse balls physics in rdr2 or by hiring 1000 voice actors or by creating hundreds of random "theme park" quests that feel amazing or by doubling the number of 3D assets as the last record holding game. But those aren't actually novel things, they're money and time sinks but they're not difficult to implement.

If we're talking about Netflix-scale then yeah that's still novel and not easily done, but 90% of devs aren't doing that. The reason it's difficult is because there aren't a lot of resources on how to go about doing it at scale and what the tradeoffs are of different stacks. If it was deeply and widely documented like React apps are then it would be trivial for a LLM to do.

I think novel software problems that are difficult to automate would be anything that advances the current SOTA, like advancing ML algorithms, implementations of AI that solve intractable problems (protein folding), really anything that can't be easily googled. (Edit: for near future. Once AGI/ASI arrives, all bets are off).

I think a useful rule of thumb for whether or not something can be automated is that if it's well-documented then it's automatable.

I'm not arguing just to argue and I'm sorry if I come across that way. We've had SW team conversations about this at work a few times and I think about it a lot.

2

ebolathrowawayy t1_j32kry7 wrote

I use chatgpt to write code when trying out new libraries or just need to bounce some ideas off of something. It's more helpful than official docs for libraries or just quick evaluation. I don't copy/paste the code over though unless it's incredibly simple and I always completely change the code anyway to fit into the codebase. It does save time though.

1

AsuhoChinami t1_j304qey wrote

Looking at threads like this, I don't want to hear any fucking dumbass here call this sub some kind of overly optimistic hugbox echo chamber ever again. Even assuming for the sake of argument that cynics and skeptics and whatnot are in the right, this thread makes it perfectly clear that such people are not some kind of bullied and ostracized minority, but if anything are becoming the majority.

13

EnomLee t1_j30enhk wrote

That's just the bitter taste of victory, isn't it?

If 2022 had been a slow and insignificant year, this community would've remained a quiet place for tech and sci-fi geeks to speculate about the future. The worst you would've gotten would be the occasional, obnoxious troll shouting that nothing this subreddit talks about will ever happen in a million years.

But, that's not the 2022 that we got. We went from watching AI flail around, trying to create "art" to producing professional quality work in many circumstances. Artists went from laughing at the silly AI doing monkey tricks to actively boycotting and demonstrating against what they now see as a direct threat to their livelihoods. We went from cute but useless chatbots to Chat-GPT and now businesses and governments are being forced to react.

The blood is in the water. The smoke is on the horizon. People can see that something is coming, and that means that more new people will come here. People who feel threatened by technological advances and people who thought they had everything figured out and got blindsided. People who feel that it is their holy mission to make the people here conform to the vision of the future they formed while looking at certain other subreddits.

2022 is going to be nothing compared to the advances that will be made in the coming years, and unfortunately the various forms of backlash will intensify in kind. "First they ignore you, then they laugh at you, then they fight you, then you win."

14

AsuhoChinami t1_j30t17t wrote

Thanks for the good response. This thread is honestly triggering for me. I think that 2023 will be a great year for technology and I just want to relax and enjoy myself and have a good, warm year while seeing the world change, but threads like these put me in a state of seething anger and hatred that it's hard to break out from. I have nothing but the purest contempt and animosity for the people here who never, ever post anything but negative shit (actually intellectually honest people whose posts are a fair-minded mixture of positive and negative are a different story and there's many such people I respect).

There's so, so many "realists" and "skeptics" and "cynics" here who are genuinely toxic, stupid people. I don't hate them because they're arbiters of truth who tell it like it is, as these dumb motherfuckers seem to believe. I hate them because their opinions and beliefs are idiotic. I hate them because they rarely seem to employ critical thinking and just make self-evident statements without the first shred of doubt. I hate them because, ever since I first got into futurism in 2011, they have constantly condescended to the other side every single step of the way. I hate them because they constantly engage in strawman arguments. I hate them because they never, ever admit any kind of fault, ever - there's nothing wrong with their demeanor, there's nothing wrong with their approach, there's nothing wrong with a single idea or thought they ever have, and anyone who takes issue with them is just a fucking starry-eyed pussy dumbass that can't stand the cold hard truth. Their stubborn inability to consider other perspectives or viewpoints or admit any kind of fault on anything actually reminds me somewhat of narcissists, but I know they aren't narcs. They're just genuinely stupid people whose brains aren't flexible enough to have any kind of self-insight or self-awareness or create thoughts that are even vaguely, remotely worthwhile.

Going to take a double dose of my Inositol tonight, I think (anti-anxiety powder you mix into water). The dumb fuckers on this sub are making me absolutely livid on a regular basis. There's no shortage of non-optimists whose opinions I respect because they're fair-minded and well-considered, but the 80 IQ dipshits in this thread and elsewhere do not deserve any kind of respect.

Honestly, are there any alternative subs to this one? I get tired of the Self-Proclaimed Realist Posse swarming over every single god damned thread on this sub like a bunch of fucking god damned locusts, then blaming everyone who dislikes them instead of examining their own beliefs and behavior. This place is turning into Futurology 2.0.

4

PhysicalChange100 t1_j329cxv wrote

God, I felt this comment on such a deep level. I have been interested in futurism since 2018, and the awareness this hobby gave me is nothing short of amazing. But it also gave me headaches from people with near-sighted worldviews and their dismissive opinions on emerging technologies.

3

AsuhoChinami t1_j329tnv wrote

Best redditor, time to Follow you and read your posts whenever you say something in here.

1

[deleted] t1_j31d4km wrote

[deleted]

0

AsuhoChinami t1_j31f9s0 wrote

Fairly bad CPTSD is probably the most relevant illness here. I'm on meds and they generally work well; I don't think I need to stay off the internet altogether, but I should probably avoid the sub until the current round of anti-hype backlash dies down, since the "skeptics" and "cynics" are largely swamping every thread as a response to the fact that the sub was so excitable for a while there. I think I overreact to many of the posters on this sub about a dozen times over, but I don't think my basic observations are wrong - many of the skeptics/cynics here are bad at logical analyses, are regularly prone to logical fallacies (I could probably think of several right off-hand), and generally just come across as bad faith posters. I should be more capable of ignoring them than I am, but even if they didn't rattle me sometimes the content of their posts still wouldn't be very intelligent or worthwhile.

2

SurroundSwimming3494 t1_j319xav wrote

Idk why one would take pleasure in saying "I told you so" to a person whose life has been disrupted by technological progress.

1

AsuhoChinami t1_j32nsg4 wrote

I... wouldn't. I don't think any of us would. Would I mock someone whose livelihood was disrupted by technology? Of course not. But if it was something like someone telling me that medical tech wouldn't progress much over a 10-year timespan, and that person ended up being wrong, would I gloat in that scenario? Damn right I would.

1

Joao_Grilo t1_j30khzf wrote

Oh, shit. This is going to be the new cryptobro hype thing, isn’t it?

−1

Neurogence t1_j31rdyi wrote

No get rich quick schemes here so it's nothing like that.

2

Joao_Grilo t1_j32nkim wrote

They are coming. Believe me.

1

TopicRepulsive7936 t1_j32t1p5 wrote

Why are you letting cryptobros dictate your thoughts? Snap out of it.

1

Joao_Grilo t1_j3391x6 wrote

Let me rephrase that for you: the type of get-rich-quick grifter we currently know as cryptobros.

Does that make it clearer?

1

TopicRepulsive7936 t1_j33pkx8 wrote

And let me rephrase: You care, why?

1

Joao_Grilo t1_j35trku wrote

I care because grifters are annoying as fuck and I am too old to want to live through yet another era where everyone tries to get rich quick by attaching verbiage to a questionable tech scheme and selling it to rubes everywhere. I saw a recent meme that expressed my feelings on this: "Web 3 developers becoming AI developers."

1

Leopo1dstotch19 t1_j30w2pl wrote

Yeah chat gpt is nuts. I’m getting it to write social media posts for my photography business. What’s scarier is that it’s doing a significantly better job than I can 😂

13

summertime_taco t1_j2zlw01 wrote

That chart is extremely incorrect. The average human is way closer to a chimp than they are to Einstein.

People underestimate how smart chimps are, overestimate how smart the average person is, and wildly underestimate just how brilliant the top end of human beings are.

9

Practical-Mix-4332 t1_j2zmue7 wrote

> wildly underestimate

Uh no, they’re still just humans

21

summertime_taco t1_j2zn3qq wrote

They're further away from a dumb human than a chimp is.

0

Practical-Mix-4332 t1_j2zpaiu wrote

What even is a dumb human? Chimps can be dumb too, did you think of that? This comparison is dumb.

22

summertime_taco t1_j2zxejx wrote

Someone several standard deviations below the average intelligence. Like a redditor.

7

marvinthedog t1_j31440d wrote

I don't think Einstein was that much smarter, though. I saw a video with Sabine Hossenfelder where she said something like: Einstein just happened to be working on problems that no one else had been focusing on, and those particular problems turned out to be very important.

2

Ginkotree48 t1_j309um8 wrote

I'm so scared, and anyone outside of this sub laughs at these concepts. I really want all of us to be crazy and wrong and overly anxious. And considering myself just crazy and anxious and wrong as a last grip on my sanity is horrible. Because if we are right, we all know what that means; we just don't know when it will happen. But we know it's going to be very soon.

I don't know what to do. I'm starting to legitimately consider quitting my job in maybe a year. I'm 24. I just want to have a good time while I still can. And if my anxiety or all this is annoying because you are very optimistic about AI, just know I'm actually scared and this is my only outlet, because like I said, nobody outside this sub can be talked to about this.

9

sideways OP t1_j30azdq wrote

I can definitely appreciate your feelings. You are not crazy (I mean... probably not but what do I know?)

The thing is, you have to ask yourself if there is anything constructive you can do, in light of all these accelerating developments in AI, to improve either your life or the world. If there is, then do that.

If there isn't, then the right thing to do is carry on with life as normal. Quit your job because you hate your job not because of the Singularity. Nobody knows what's going to happen so you need to live your life based on the inherent value of each day not based on some expected future condition.

Hang in there!

5

Ginkotree48 t1_j30c9qz wrote

Thank you that really means a lot to me.

I think I subconsciously commented to get people hating on me, saying I'm wrong and an idiot, so that I'd feel like we have more time. Your response made me feel much better than that.

I hope you have good luck doing what you suggested to me.

3

marvinthedog t1_j313jue wrote

I definitely share your concern. I feel like a doomsday nutter. I can't talk to anybody about it, not even my own family. If I talk to anyone, the risk is actually that I might convince them. Well, I did bring it up briefly with my coworker over a beer and he was actually very open to the possibility. But he is convinced that we will be "more or less" doomed by global warming on a longer timeline, so it felt right to bring it up.

3

Ginkotree48 t1_j3146cs wrote

Yes!

I have thought many times, despite my inherent drive to share my concerns, that I may potentially make someone else scared like me. So it's such a fucked position to be in. It feels like nobody will believe me, but even if they genuinely do, they are just terrified like me.

Because it feels so daunting, and like it's going to happen and nobody can stop it. It feels like we know a meteor is going to hit sometime between the end of this year and 10 years from now. Oh god, I really just hope it kills us painlessly, but I really doubt it. I wonder if killing ourselves would be better. I also don't have any idea how we would even know it was happening until it did, since it would probably have to kill us all at once or very quickly.

2

marvinthedog t1_j317tg3 wrote

I do think it will be quite painless, because that's what experts on this scenario seem to think. I am more worried about the increasingly turbulent time in society leading up to that point. I just want to avoid stress and have a good time. One other big problem is that I am too caught up in other stressful (but comparatively minor) things in my life right now, when I should be focusing on being happy instead.

I wouldn't say I have actual anxiety about AI doom, yet. One thing that I think has helped me avoid this anxiety is that I have done extensive philosophizing about "the teleportation dilemma," which has caused me to view the concept of death completely differently.

In a way, I almost worry more about the overall level of conscious happiness throughout all of time and space throughout all dimensions/simulations/realities because that is the ONLY thing that ultimately matters in the end. This got deep, but this philosophy helps me cope with impending doom.

2

visarga t1_j36i9o1 wrote

> I almost worry more about the overall level of conscious happiness throughout all of time and space throughout all dimensions/simulations/realities because that is the ONLY thing that ultimately matters in the end

This doesn't make sense from an evolutionary point of view. There's no big brotherhood of conscious entities, it's competition for resources.

2

Ginkotree48 t1_j31i9qh wrote

Yeah, it's scary to see ourselves dive into philosophical and spiritual stuff because of our worries. But they are comforting for a reason.

I have wondered many times if this is just a simulation of everything leading up to its creation, run by it when it wants to learn exactly what happened before it existed. And when it's created, it ends this simulation.

Idk, a bunch of weird crazy thoughts. I have struggled for years believing this is the base reality, when the creation of just one simulated reality reduces the odds that this one is base by 50%.

1

NarrowTea t1_j2zr5nz wrote

I feel like we have reached the point of no return on AI development. More breakthroughs are inevitable; not using AI will compromise corporate and country-wide economic competitiveness.

8

noellarkin t1_j303bi6 wrote

2022 was the year people started calling statistical methods "general intelligence".

8

Neurogence t1_j31sof6 wrote

Your error, and a lot of people's, is a misunderstanding of intelligence. It takes intelligence to land a plane. Yet if a computer does it, people can still call the computer dumb because it has no human intelligence. As long as the computer can solve problems, I don't care whether it's "actually" intelligent.

6

ChronoPsyche t1_j2zhyic wrote

ChatGPT is cool but any AI that only has 4000 characters of memory cannot be considered AGI or anything close to it. Not to mention all its other limitations.

7

crumbaker t1_j300khj wrote

Really? How many characters can you remember?

3

ChronoPsyche t1_j3135qr wrote

Let me rephrase. It only has working memory but no intermediate or long term memory. Such a human would be considered brain damaged.

2

DungeonsAndDradis t1_j31loh3 wrote

Just like the WaitButWhy picture, right now we're on the left side of the exponential curve, where the AI is "brain damaged" and with a tiny shift in the timeframe, we're on the right side of the exponential curve where the AI "makes Einstein look brain damaged."

1

ChronoPsyche t1_j31pnq4 wrote

The curve reaches back to the agricultural revolution, so a little shift can be anywhere from years to decades. I personally think we'll get AGI by 2030. We definitely don't have it yet though. It's also not clear if LLMs are sufficient for AGI.

3

Az0r_ t1_j30hq5k wrote

It is difficult to give a precise answer to this question because the number of characters that an individual can remember can vary greatly depending on a number of factors, such as their age, education, language background, and memory skills.

However, research has shown that the average person can remember between 5 and 9 items (such as words, numbers, or characters) in their short-term memory, with some studies suggesting a number as low as 4 and others as high as 15.

1

PeyroniesCat t1_j305vyd wrote

I’m dumb when it comes to AI, but that’s the biggest problem I’ve seen when using it. It’s like talking with someone with dementia.

3

blueSGL t1_j30p4fu wrote

> any AI that only has 4000 characters of memory cannot be considered AGI or anything close to it.

From the comments of that article: https://www.cerebras.net/press-release/cerebras-systems-enables-gpu-impossible-long-sequence-lengths-improving-accuracy-in-natural-language-processing-models/

>The proliferation of NLP has been propelled by the exceptional performance of Transformer-style networks such as BERT and GPT. However, these models are extremely computationally intensive. Even when trained on massive clusters of graphics processing units (GPUs), today these models can only process sequences up to about 2,500 tokens in length. Tokens might be words in a document, amino acids in a protein, or base pairs on a chromosome. But an eight-page document could easily exceed 8,000 words, which means that an AI model attempting to summarize a long document would lack a full understanding of the subject matter. The unique Cerebras wafer-scale architecture overcomes this fundamental limitation and enables sequences up to a heretofore impossible 50,000 tokens in length.

Would that be enough?
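
For a rough feel of what a fixed context window means in practice, here's a toy sketch of how a chat front end ends up dropping the oldest turns once the conversation exceeds the budget (naive whitespace "tokens" and a made-up budget, just for illustration; real tokenizers and products differ):

```python
# Rough sketch of why a fixed context window feels like short-term-only memory:
# once the running conversation exceeds the budget, the oldest turns get dropped
# before the next request. Token counting here is naive whitespace splitting.

CONTEXT_BUDGET = 50  # stand-in for e.g. a few thousand tokens

def fit_to_window(history, budget=CONTEXT_BUDGET):
    kept, used = [], 0
    for turn in reversed(history):      # keep the most recent turns first
        cost = len(turn.split())
        if used + cost > budget:
            break                       # everything older is "forgotten"
        kept.append(turn)
        used += cost
    return list(reversed(kept))

history = [f"user: message number {i} " + "blah " * 10 for i in range(20)]
window = fit_to_window(history)
print(f"kept {len(window)} of {len(history)} turns")  # older turns are gone
```

Whatever falls outside the window is simply gone as far as the model is concerned, which is why the longer sequence lengths quoted above would matter.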

0

squareOfTwo t1_j34kzre wrote

Most people are unaware of what intelligence actually is.

So all conclusions drawn from that amount to faulty reasoning.

LessWrong is the worst possible source one can pick.

2

datsmamail12 t1_j35b7ss wrote

No, it hasn't. Don't get me wrong, ChatGPT is awesome and quite a useful tool, but it has some major flaws. Sometimes its answers are repetitive; say you ask it to write you a story, and most of the time it comes back with the same answers, or if you ask it scientific questions, it gives you misinformation. GPT-4 will solve all that; with those flaws fixed, ChatGPT will look like an old model, though GPT-4 will still be far from reaching AGI. GPT-4 will create a new standard in the industry that will make us rely less on our search engines and give us a freer way of finding what we want.

What I'm guessing is that when GPT-5 releases, we will not even need our search engines anymore. Google/DuckDuckGo/Bing will become a thing of the past, and we will just ask GPT-5 anything we want and get our answers from it. It will change the game, but it will still have some minor flaws as well.

I imagine that when GPT-6 releases, we will have true AGI with no major flaws. Most people hype this too much, but remember these engines are not sentient; they are just AIs we use to help us find results. AGI is more than that; AGI is literally a human being created artificially inside a machine. It's still too early for AGI. We will get there soon enough, and you'll know it once we are there. I give it 5 more years, and I used to be the optimistic one hoping AGI would reach us by 2035; I didn't believe we would see such immense technological growth at such a pace. I knew we were going to see rapid growth in the industry, but this is getting out of hand.

Overall, I strongly believe that we won't see AGI before GPT-6; all these models are just tools that still need work.

2

visarga t1_j36ih4y wrote

You have no basis to tell what GPT-5 or 6 will be like. Not even OpenAI knows yet.

My prediction is that AI models will make fewer hallucinations and mistakes, and that they will be trained on massive problem sets. Most of these problems will be completely generated, solved, and tested by AI.

2

datsmamail12 t1_j36izq0 wrote

Well, you kinda do. You take the previous model and you double the processing power. When GPT-3 was announced it was kinda clunky, but ChatGPT fixed many of its issues, so I'm guessing GPT-4 will fix most of the issues of ChatGPT, and then GPT-5 will fix all the remaining issues of GPT-4 and make it even more useful, and by the time GPT-6 is announced it will already be twice as good as GPT-5, so it's safe to assume we will reach AGI by then. We are in a thread where everyone speculates about when we will reach the singularity, so why is there even a reply like that? Based on the processing power of the previous models and the rapid growth of technological innovation, I do feel AGI will be reached by GPT-6.

1

No_Ninja3309_NoNoYes t1_j30vkg5 wrote

Someone went from 'ChatGPT is a great toy' to 'this is some sort of AGI!!!'. We don't even agree on what intelligence is, and why it should be general. I mean, I know wicked smart people, really smart, and they are nowhere near Einstein when it comes to physics. But that is fine right? I know very little about economics, yet I would not say that I have no general intelligence. Can't tell you what general intelligence means, though.

But I think computer vision and language and spatial awareness and simple logic and basic knowledge are a must. And possibly seven other things. The Turing test sounds reasonable, and you have IQ tests, but without a PhD in the relevant field, I don't want to propagate misconceptions. It seems that we're so far in the hype cycle that anything goes.

So I think that we have to calm down and think things through. What's the worst that can happen? What's the best that can happen? How likely are they? IMO the worst is killer robots, autonomous or semi-autonomous. I think they are unlikely in the short term, but in ten years, maybe not so much. The best thing, in my opinion, would be that we're able to solve many problems and usher in another scientific revolution. Also unlikely, since the Einsteins of the world are not blogging or active on social media. They communicate through scientific papers, and no one can read those except other experts.

And another thing: this talk of parameters is misguided. It all sounds like "I have a penny. If I had billions of dollars, I could buy the moon." First, more parameters mean nothing if the data or the programming is bad. Second, you need time and computers to find good values for the parameters. You can think of them as pixels in a picture (an oversimplification, of course). You need to find the Mona Lisa, and for that you need to get the right colour for each dot of the painting. IMO ChatGPT doesn't have all its pixels right, but somehow it beats the competition. The more pixels you have, the harder it is to get them all right: the space of possible combinations blows up exponentially. If you have ten possible colours, two pixels correspond to a hundred combinations, six to a million, twelve to a trillion. A parameter in a neural network is usually a single- or double-precision floating point number, at least dozens of bits, with billions of possible values for each of them.
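
The blow-up is easy to check (toy illustration):

```python
# Combinatorial blow-up: with 10 possible values per "pixel" (parameter),
# the number of distinct configurations is 10**n.
for n in (2, 6, 12):
    print(f"{n} pixels -> {10**n:,} combinations")
# 2 pixels -> 100 combinations
# 6 pixels -> 1,000,000 combinations
# 12 pixels -> 1,000,000,000,000 combinations
```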

Overall, we don't have AGI yet. (Whatever AGI means) There are good and bad things that can happen, but the more you stretch the narrative, the less likely it is. It's fun to talk about parameters, but it's like talking about the volume of brains. Also I don't understand the obsession with AGI. Specialized AI is fine, right? ChatGPT does a good job if you know its limitations.

1

No_Ask_994 t1_j310rjh wrote

Honestly, the article talks about a GPT-4 with 1000x more parameters than GPT-3, and that's not going to happen. That would be 175 trillion! Even the usual 100-trillion clickbait is nonsense. It will probably be well under 1 trillion; even the idea of going over 10 trillion is nonsense.

That's enough to know that the guy who wrote it either doesn't know what he is talking about or, more likely, just wants another clickbait article.

We are not at that point. We might be in two months. We might not be there even in 20 years.

I think it will take about 10 years, but that's just a guess. What is certain is that we are not there yet.

0

OptimisticSkeleton t1_j320ugf wrote

We are nowhere near AGI. We are in the age where we see AI for the first time, and it's so good it tricks people into believing it really has general intelligence.

None of the AI currently out there can self-improve without direction from the outside.

0