
bballerkt7 t1_j8bimv8 wrote

AGI getting closer everyday

17

BenjaminJamesBush t1_j8c12it wrote

Technically this has always been true.

59

EducationalCicada t1_j8d5y9z wrote

Not if it's actually impossible.

9

BashsIash t1_j8djkk4 wrote

Can it be impossible? I'd assume it can't be impossible, otherwise we couldn't be intelligent in the first place.

24

cd_1999 t1_j8fmlej wrote

Have you heard of Searle's Chinese Room?

Some people (sorry I can't give you references off the top of my head) argue there's something special about the biological nervous system, so the material substrate is not irrelevant. (Sure you could reverse engineer the whole biological system, but that would probably take much longer).

4

pyepyepie t1_j8dvci2 wrote

I would tell you my opinion if I knew what the definition of AGI was xD

1

urbanfoh t1_j8elywk wrote

Isn't it almost certainly possible due to the universal approximation theorem?

Assuming consciousness is a function of external variables, a large enough network with access to those variables should be able to approximate it.
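For reference, one classical form of the universal approximation theorem being invoked here (Cybenko/Hornik-style, single hidden layer, non-polynomial activation σ) is:

```latex
% For any continuous f on a compact set K ⊂ R^n and any ε > 0,
% there exist N, coefficients a_i, b_i ∈ R and weights w_i ∈ R^n with
\sup_{x \in K} \left| f(x) - \sum_{i=1}^{N} a_i \, \sigma(w_i^\top x + b_i) \right| < \varepsilon
```

Note the theorem only guarantees existence of an approximator for a continuous function on a compact domain; it says nothing about learnability, and whether consciousness is such a function is exactly the assumption being made.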

1

pyepyepie t1_j8dv3wv wrote

Why do you think it's a step in this direction? Did you read the paper (serious question, it's interesting)?

1

bballerkt7 t1_j8e6l5f wrote

Because AI being able to use APIs is a big step toward it being able to interact effectively with the real world, specifically the digital world. Imagine ChatGPT being able to do things for you online, like shopping for you or trading stocks, etc.

2

pyepyepie t1_j8e7gjp wrote

Thanks :) I agree it's useful but I don't see how it's related to AGI. Additionally, it was already done a long time ago, many "AI" agents used the internet before. I feel that the real challenge is to control language models using structured data, perform planning, etc., not to use language models to interact with the world (which seems trivial to me, sorry), but of course, it's just my opinion - which is probably not even that smart.

6

VelveteenAmbush t1_j8fusa5 wrote

> I feel that the real challenge is to control language models using structured data, perform planning, etc.

I think the promise of tool-equipped LLMs is that these tools may be able to serve that sort of purpose (as well as, like, being calculators and running Wikipedia queries). You could imagine an LLM using a database module as long-term memory, to keep a list of instrumental goals, etc. You could even give it access to a module that lets it fine-tune itself or create successor LLMs in some manner. All very speculative, of course.
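A minimal sketch of the tool-dispatch idea being described, with a stubbed model output instead of a real LLM; all tool names and the `[TOOL(args)]` marker syntax here are hypothetical, invented for illustration:

```python
import re

# Toy long-term memory store (the "database module" idea).
MEMORY = []

# Hypothetical tool registry. CALC uses eval as a toy calculator;
# never do this with untrusted input in a real system.
TOOLS = {
    "CALC": lambda expr: str(eval(expr, {"__builtins__": {}})),
    "REMEMBER": lambda note: (MEMORY.append(note) or "noted"),
}

# Matches embedded tool calls of the form [NAME(args)].
TOOL_CALL = re.compile(r"\[(\w+)\((.*?)\)\]")

def run_tools(model_output):
    """Replace each [TOOL(args)] marker in the model's text
    with the result of actually running that tool."""
    def dispatch(match):
        name, args = match.group(1), match.group(2)
        tool = TOOLS.get(name)
        # Unknown tools are left in place untouched.
        return tool(args) if tool else match.group(0)
    return TOOL_CALL.sub(dispatch, model_output)

# A stubbed "LLM output" containing an embedded tool call:
print(run_tools("The total is [CALC(17*3)]."))  # → The total is 51.
```

A real system would loop: feed the tool results back into the model's context so it can condition its next step on them, which is where the planning/memory behavior would come from.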

3

bballerkt7 t1_j8eddln wrote

No worries I think you definitely have a valid take. I always feel not smart talking about AI stuff lol :)

2

farmingvillein t1_j8frv87 wrote

> not to use language models to interact with the world (which seems trivial to me, sorry),

The best argument here is that "true" intelligence requires "embedded" agents, i.e., agents that can interact with our (or, at least, "a") world (to learn).

Obviously, no one actually knows what will make AGI work, if anything...but it isn't a unique/fringe view OP is suggesting.

1

mycall t1_j8bjo05 wrote

Progress comes in a multitude of mysterious ways.

−19

sam__izdat t1_j8bn58f wrote

I don't want to be that guy, but can y'all leave the doe-eyed ML mysticism to the more Ray Kurzweil themed subreddits?

36

Soundwave_47 t1_j8bpaqd wrote

Yes, please keep this sort of stuff in /r/futurology or something. We're here trying to formalize the n steps needed to even get to something that vaguely resembles AGI.

22

kaityl3 t1_j8d7hsw wrote

Do we even know what WOULD resemble an AGI, or exactly how to tell?

3

Soundwave_47 t1_j8fu3r6 wrote

Somewhat, and no.

We generally define AGI as an intelligence (which, in the current paradigm, would be a set of algorithms) that has decision-making and inference capabilities in a broad set of areas, and is able to improve its understanding of that which it does not know. Think of it like school subjects: it might not be an expert in all of {math, science, history, language, economics}, but it has some notion of how to do basic work in all of those areas.

This is extremely vague and not universally agreed upon (for example, some say it should exceed peak human capabilities in all tasks).

1