
samwell_4548 t1_jeashtb wrote

One issue is that LLMs cannot actively learn from their surroundings; they need to be trained prior to use. This is very different from how human brains work.

7

elehman839 t1_jebley2 wrote

Yes, and I think this reflects an interesting "environmental" difference experienced by humans and AIs.

Complex living creatures (like humans) exist for a long time in a changing world, and so they need to continuously learn and adapt to change. Now, to some extent, we do follow the model of "spend N years getting trained and then M years reaping the benefit," but that's only a subtle shift in emphasis, not the black-and-white split you get between ML training and inference.

In contrast, AI has developed largely for short-term, high-volume applications. In that setting, it makes sense to spend a lot of upfront time on training, because you're going to effectively clone the thing and run it a billion times, amortizing the training cost. And giving it continuous learning ability isn't that useful, because each application lasts only minutes, seconds, or even milliseconds.
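To make the amortization point concrete, here's a rough back-of-the-envelope sketch in Python (all the dollar figures and query counts are made up purely for illustration):

```python
# Back-of-the-envelope amortization of a one-time training cost.
# All numbers below are made up for illustration only.

training_cost_usd = 50_000_000     # hypothetical one-time training spend
inference_cost_usd = 0.002         # hypothetical cost to serve a single query
queries_served = 1_000_000_000     # the model is "cloned" and run a billion times

amortized_training = training_cost_usd / queries_served
total_per_query = amortized_training + inference_cost_usd

print(f"Training cost per query: ${amortized_training:.4f}")  # $0.0500
print(f"Total cost per query:    ${total_per_query:.4f}")     # $0.0520
```

Even a huge one-time training bill rounds down to a few cents per query once it's spread over enough uses, which is part of why the field has leaned so hard toward train-once, run-everywhere.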

Making persistent AI that continuously learns and remembers seems like a cool problem! I'm sure this will require some new ideas, but with the number of smart people now engaged in the area, I bet those will come quickly, if there's sufficient market demand. And I can believe that there might be...

3

yeah_i_am_new_here OP t1_jebnrn5 wrote

Well put! To piggyback off your point, I think the persistence issue in its current state is what will ultimately stop it from taking over too many knowledge worker jobs. The efficiency it creates for each knowledge worker is of course a threat to employment if production doesn't increase as well, but if history is at all trustworthy, production will increase.

I think the biggest issue right now (outside of data storage) for creating AI that is persistent in its knowledge is the algorithm to receive and accurately weigh new data on the fly. You could say it's the algorithm for wisdom, even.
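For anyone curious what "weighing new data on the fly" looks like in its simplest possible form, here's a minimal online-learning sketch (plain SGD on a linear model, with the learning rate standing in for "how much to trust new data"; everything here is illustrative and nothing like what a real LLM would need):

```python
import numpy as np

# Minimal online (streaming) learner: a linear model updated one example
# at a time with stochastic gradient descent. The learning rate plays the
# role of "how much weight to give new data"; the hard part in practice is
# choosing it so new information is absorbed without overwriting what was
# already learned (catastrophic forgetting).

rng = np.random.default_rng(0)
w = np.zeros(3)          # model weights, learned incrementally
learning_rate = 0.01

def observe(x, y):
    """Update the model immediately on a single new observation."""
    global w
    error = x @ w - y               # prediction error on the new data point
    w -= learning_rate * error * x  # nudge the weights toward the new data

# Simulate a stream of observations arriving "from the surroundings".
true_w = np.array([2.0, -1.0, 0.5])
for _ in range(10_000):
    x = rng.normal(size=3)
    y = x @ true_w + rng.normal(scale=0.1)
    observe(x, y)

print(np.round(w, 2))    # should land close to [ 2. -1.  0.5]
```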

1

yeah_i_am_new_here OP t1_jeatis3 wrote

Interesting. So then we can suppose that if you had enough of these humanoids walking around, they could gather data and feed it back into a "hive mind" (as much as I hate that term), then retrain the software running the humanoids with that new data, basically giving them a chance to "learn".
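Roughly, the loop I'm imagining looks something like this (a toy sketch with made-up names; a real fleet-learning system would obviously be far more involved):

```python
from dataclasses import dataclass, field

# Toy sketch of the "gather -> pool -> retrain -> redeploy" loop described
# above. Everything here (class names, the retrain step) is illustrative.

@dataclass
class HiveMind:
    model_version: int = 0
    pooled_data: list = field(default_factory=list)

    def collect(self, observations):
        """Each humanoid uploads what it saw in the field."""
        self.pooled_data.extend(observations)

    def retrain(self):
        """Periodically fold the pooled data into a new model version."""
        # In reality this would be an expensive offline training run.
        self.model_version += 1
        self.pooled_data.clear()
        return self.model_version

hive = HiveMind()
for robot_id in range(3):
    hive.collect([f"robot-{robot_id} observation"])

new_version = hive.retrain()
print(new_version)   # every unit then gets updated to model version 1
```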

I see many hardware limitations with this possibility, but it's an interesting thought.

Perhaps another interesting thought building on yours: how much brand-new data do you suppose exists in our surroundings that isn't already captured in the internet data these models are trained on?

0

Mercurionio t1_jebcy9s wrote

The question is how the machine will iterate on the new stuff. Does it take in new info about its surroundings, incorporate it immediately, and completely change its behavior based on the outcome? Or does it just collect the data and then reprocess the words into a bigger salad?

Currently, GPT-4 can circle back to its original incorrect answers because it keeps iterating on the salad until the user is satisfied.

1