
t1_j9ue4ov wrote

Slightly different things. That's more the episodic memory.

For life-long learning: no system gets everything right, and if it does make a mistake, like misclassifying a penguin as a fish (it doesn't actually make this mistake), there is no way for that to get fixed. Similarly, countries, organizations, and the news change constantly, so the model quickly goes out of date.

It can't do incremental training. There are ways around this: some AI/ML systems do support incremental training (there was a whole DARPA program about it). Alternatively, the AI/ML system (whose weights stay fixed) can reason over a dynamic data set / database, or go fetch new information; this is the Bing Chat approach. It works better, but anything embedded in the model's weights is stuck there until re-training.
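A minimal sketch of that "reason over a dynamic store" idea (all names here are hypothetical, not from any real system): the model's parameters stay frozen, and only an external, mutable store is updated, so new facts need no retraining.

```python
# Hypothetical sketch: frozen model + mutable external knowledge store.
# Updating the store changes answers without any retraining.

knowledge_store = {
    "capital_of_france": "Paris",
    "penguin_class": "bird",
}

def answer(query_key: str) -> str:
    # Retrieval step: look the fact up in the mutable store.
    # A real system would condition a frozen language model on the
    # retrieved text; here we just return the fact directly.
    return knowledge_store.get(query_key, "unknown")

print(answer("penguin_class"))
# The world changes: update the store, not the model.
knowledge_store["penguin_class"] = "bird (family Spheniscidae)"
print(answer("penguin_class"))
```

The contrast with weights baked in at training time is the whole point: the store edit above takes effect immediately, whereas a fact learned during training can only change through another training run.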
