
big_gondola t1_j8biqv4 wrote

I might say we gain general intelligence by creating different models for different tasks and gaining experience in when to call which. This has the "when to call which" part, but not the creation of new models.
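A rough sketch of the routing half, just to make it concrete — the task names and `classify_task` heuristic here are made up for illustration:

```python
# Minimal sketch of "when to call which": a router dispatches a query
# to one of several task-specific models. Everything here is a
# hypothetical placeholder, not any particular framework's API.
MODELS = {
    "math": lambda q: f"math model answers: {q}",
    "translation": lambda q: f"translation model answers: {q}",
}

def classify_task(query: str) -> str:
    # In practice this would itself be a learned classifier;
    # a keyword heuristic stands in for it here.
    return "math" if any(c.isdigit() for c in query) else "translation"

def answer(query: str) -> str:
    task = classify_task(query)
    return MODELS[task](query)

print(answer("What is 2 + 2?"))
```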

48

diviludicrum t1_j8bxeji wrote

I still think u/belacscole is right - this is analogous to the rudimentary use of tools, which can be done by some higher primates and a small handful of other animals. Tool use requires a sufficient degree of critical thinking to recognise that a problem exists and select the appropriate tool for solving it. If done with recursive feedback, this would lead to increasingly skilful tool selection and use, resulting in better detection and solution of problems over time. Of course, if a problem cannot possibly be solved with the tools available, no matter how refined their usage is, that problem would never be overcome this way - humans have faced these sorts of technocultural chokepoints repeatedly throughout our history. These problems require the development of new tools.

So the next step is abstraction, which takes intelligence from critical thinking to creative thinking. If a tool-capable AI can be trained on a dataset that links diverse problems with the models that solve them and the processes that developed those models, such that it can attempt to create and then implement new tools against novel problems and then assess its own success (likely via supervised learning, at least at first), we may be able to equip it with the “tool for making tools”, letting it solve the set of all AI-solvable problems (given enough time and resources).
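As a very rough sketch of that loop — `train_new_model` and `evaluate` are hypothetical placeholders for whatever model-building and assessment processes are actually available:

```python
# Illustrative sketch of the "tool for making tools" loop described
# above. All names are placeholders, not an existing API.
def solve_with_toolbox(problem, toolbox, evaluate, train_new_model,
                       threshold=0.9):
    # First try every existing tool and keep the best result.
    best_tool, best_score = None, float("-inf")
    for tool in toolbox:
        score = evaluate(tool, problem)
        if score > best_score:
            best_tool, best_score = tool, score

    # If no existing tool is good enough, build a new one,
    # assess it the same way, and add it to the toolbox.
    if best_score < threshold:
        new_tool = train_new_model(problem)
        if evaluate(new_tool, problem) >= threshold:
            toolbox.append(new_tool)
            return new_tool
    return best_tool
```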

41

uristmcderp t1_j8db0gw wrote

The whole "assessing its own success" part is the bottleneck for most interesting problems. You can't have a feedback loop unless the model can accurately evaluate whether it's doing better or worse. This isn't a trivial problem either, since humans aren't all that great at using absolute metrics to describe quality once past a minimum threshold.
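That's part of why feedback is often collected as relative comparisons rather than absolute scores — people are much better at "A is better than B" than at "7/10". A toy sketch of turning pairwise judgments into scores, here with an Elo-style update (the constants are illustrative, not tuned):

```python
# Sketch: relative (pairwise) feedback instead of absolute metrics.
# An Elo-style update turns "A beat B" judgments into running scores.
def elo_update(ratings, winner, loser, k=32):
    # Expected probability that the winner would win, given current ratings.
    expected_win = 1.0 / (1 + 10 ** ((ratings[loser] - ratings[winner]) / 400))
    ratings[winner] += k * (1 - expected_win)
    ratings[loser] -= k * (1 - expected_win)

ratings = {"output_a": 1000.0, "output_b": 1000.0}
elo_update(ratings, winner="output_a", loser="output_b")
print(ratings)  # output_a now rated slightly above output_b
```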

9

ksatriamelayu t1_j8ebpx4 wrote

Do people use things like evolutionary fitness + changing environments to measure that quality? It seems like a dynamic environment might be the answer?

1

Oat-is-the-Best t1_j8ef5x0 wrote

How do you calculate your fitness? That has the same problem: the model still can't assess its own success.
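The evolutionary loop itself is easy to write; the circularity is that the fitness function has to be supplied from outside it. A toy sketch to make that concrete (all names are illustrative):

```python
# Sketch of the circularity: the loop optimizes whatever fitness
# function it is handed, so the hard part sits outside the loop.
import random

def evolve(population, fitness, mutate, generations=100):
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        survivors = scored[: len(scored) // 2]
        # Refill the population with mutated copies of survivors.
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in survivors]
    return max(population, key=fitness)

# `fitness` is still a hand-written evaluator — which is exactly
# the self-assessment problem raised above, relocated.
```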

1

LetterRip t1_j8dpgxc wrote

There are plenty of examples of tool use in nature that don't require intelligence. For instance, ants:

https://link.springer.com/article/10.1007/s00040-022-00855-7

The tool use demonstrated by Toolformer can be purely statistical in nature; no intelligence is needed.
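Concretely, in the Toolformer setup the model just emits API-call text like `[Calculator(400/1400)]` in its output, and ordinary code parses and executes it — the model has only learned where such tokens are statistically likely. A rough sketch of that execution step (the regex and tool registry here are illustrative, not the paper's actual code):

```python
import re

# Illustrative registry of tools the model can "call" by emitting text.
TOOLS = {"Calculator": lambda expr: str(eval(expr, {"__builtins__": {}}))}

def execute_tool_calls(text: str) -> str:
    # Replace each "[Tool(args)]" span in the generated text with the
    # result of actually running that tool.
    def run(match):
        name, arg = match.group(1), match.group(2)
        return TOOLS[name](arg) if name in TOOLS else match.group(0)
    return re.sub(r"\[(\w+)\((.*?)\)\]", run, text)

print(execute_tool_calls("The share is [Calculator(400/1400)]."))
```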

6

thecodethinker t1_j8dpuru wrote

It is purely statistical, isn’t it?

LLMs are statistical models after all.

4

imaginethezmell t1_j8g4f64 wrote

There are APIs for AutoML already.

It can simply learn the task of using other AI to create models.

It's over.
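A toy version of the idea — here sklearn's GridSearchCV stands in for a full AutoML service, just to show model creation reduced to one API call:

```python
# "Creating a model" exposed as a single call; a real AutoML API
# would also search over architectures, features, etc.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [10, 100], "max_depth": [2, None]},
    cv=3,
)
search.fit(X, y)  # model creation, as one function call
print(search.best_params_, search.best_score_)
```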

2