
ChurchOfTheHolyGays t1_iurbi15 wrote

Isn't the entire point of AI to surpass human intelligence at some point? We of course need to understand as much as possible but we can't bound the limits of AI to that which we can understand, that would be against the entire reason to do AI instead of vanilla hardcoded algorithms.

20

Artanthos t1_iusflyq wrote

Skeptic: How can we create consciousness when we don't even understand it?

Realist: AI is already moving past our ability to understand. It will soon create things even further beyond our understanding.

17

Desperate_Donut8582 t1_iurmoki wrote

Nope, that's not what the article is saying at all; you need to read the article. And even if what you're saying happened, we would still need to limit its abilities and understand it. A calculator does math way faster than you, yet we know how it works.

7

ChurchOfTheHolyGays t1_iurn15p wrote

A calculator is an analogy for vanilla algorithms, not an analogy for AI. Thanks for making my point for me.

4

Desperate_Donut8582 t1_iurn7w9 wrote

AI is a bunch of algorithms, though. The human brain isn't, but AI as we have it now is.

−4

ChurchOfTheHolyGays t1_iurnyys wrote

AI means algorithms that generalize from data; vanilla algorithms are specialized, built from strict human-made rules. A calculator is just us figuring out how to do arithmetic in base 2 instead of base 10 and then designing physical circuits with gates that achieve that end. It would only be analogous to AI if you showed a calculator examples of calculations and their results and then asked it to generalize and do math outside the examples it was fed. That's not how we made calculators in the past (though we can now, with AI). If it doesn't generalize, it's a vanilla algorithm; if it generalizes, it's AI. The generalization being a black box is exactly the point: if we knew how to generalize and it were easy, we wouldn't need AI, we would just write code and design circuits that do exactly what we need.
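The contrast can be sketched in a few lines of Python (a hypothetical toy, not anything from the thread): a hand-coded adder whose rule a human wrote, versus a tiny model that learns addition from a handful of examples and generalizes to a pair it never saw.

```python
def calculator_add(a, b):
    # Vanilla algorithm: the rule itself is written by a human.
    return a + b

def train_adder(examples, lr=0.01, steps=2000):
    # "AI" version: learn weights w1, w2 so that w1*a + w2*b ≈ target,
    # using plain gradient descent on squared error over the examples.
    w1, w2 = 0.0, 0.0
    for _ in range(steps):
        for a, b, y in examples:
            err = w1 * a + w2 * b - y
            w1 -= lr * err * a
            w2 -= lr * err * b
    return w1, w2

# Train only on small numbers...
examples = [(1, 2, 3), (2, 2, 4), (3, 1, 4), (0, 5, 5)]
w1, w2 = train_adder(examples)

# ...then generalize to a pair never seen in training.
print(round(w1 * 40 + w2 * 2))  # → 42
```

The learned weights converge to roughly w1 = w2 = 1, so the model reproduces addition on inputs far outside its training set, without anyone hard-coding the rule.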

7

fingin t1_iutjd7v wrote

It depends what you mean by AI. If you mean the state-of-the-art technology most people are referring to as AI (i.e. deep learning models), then we might want to bound the limits of AI, because we know how sensitive it is to "mistakes" such as data and concept drift.
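Concept drift can be shown with a deliberately tiny, hypothetical example: a model fit on yesterday's labeling rule silently degrades once the underlying rule shifts, even though nothing in the model's code changed.

```python
def fit_threshold(data):
    # "Train" by picking the integer threshold that best separates labels.
    best_t, best_acc = 0, 0.0
    for t in range(0, 11):
        acc = sum((x > t) == y for x, y in data) / len(data)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def accuracy(t, data):
    return sum((x > t) == y for x, y in data) / len(data)

old_rule = [(x, x > 5) for x in range(11)]   # concept: positive above 5
new_rule = [(x, x > 8) for x in range(11)]   # drifted concept: above 8

t = fit_threshold(old_rule)
acc_old = accuracy(t, old_rule)
acc_new = accuracy(t, new_rule)
print(t, acc_old, acc_new)  # perfect on old data, degraded after drift
```

The model is perfect on the distribution it was trained on and wrong on a chunk of the drifted one, which is exactly why deployed deep learning systems need monitoring rather than blind trust.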

On the other hand, if you mean some conceptual AI that differs from current technology in a meaningful way, then I think I see your point. The problem with the discourse today is that there's no distinction between these two things: one exists today, and the other could appear anywhere from months to centuries from now.

2