Submitted by medicalheads t3_11c4vsh in singularity

We are reaching singularity-level AI technology within 5 years.

Some researchers have claimed that AI will reach the singularity within seven years, after attempting to quantify its progress by measuring time to edit (TTE) in machine translation across over 2 billion MT suggestions post-edited by tens of thousands of professional translators worldwide. These translations span multiple subject domains, ranging from literature to technical translation, and include fields in which MT is still struggling, such as speech transcription.
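For the curious: TTE measures how many seconds a professional translator spends correcting each word of an MT suggestion, and the claim comes from extrapolating its steady decline to the point where editing MT output takes no longer than editing a top human translation. A minimal sketch of that extrapolation, with made-up numbers (the study's actual data isn't reproduced here):

```python
import numpy as np

# Hypothetical time-to-edit (TTE) averages, in seconds per word, for
# professional post-editing of MT suggestions. Illustrative values only.
years = np.array([2015.0, 2017.0, 2019.0, 2021.0, 2023.0])
tte = np.array([3.4, 3.0, 2.6, 2.2, 1.9])

# In this framing, "singularity" means MT output takes no longer to edit
# than a top human translation (~1 s/word, an assumed baseline).
human_baseline = 1.0

# Fit a linear trend and extrapolate to the crossing point.
slope, intercept = np.polyfit(years, tte, 1)
crossing_year = (human_baseline - intercept) / slope
print(f"TTE reaches the human baseline around {crossing_year:.0f}")
```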

Many AI researchers believe that solving the language translation problem is the closest thing to producing Artificial General Intelligence (AGI). This is because natural language is by far the most complex problem we have in AI: it requires accurate modeling of reality in order to work, more so than any other narrow AI task.

Again, there is no reason to believe real AI would require simulating a big electronic brain.

34

Comments

Motion-to-Photons t1_ja2a8mq wrote

I reckon Ray has a pretty good handle on this with his prediction that it will occur sometime between 2029 and 2050. But at some point in the next 3 years or so we should at least know whether it's going to be a hard takeoff or not.

23

Robynhewd t1_ja2pfbr wrote

I really hope he's right, I want FDVR so damn bad

10

HeinrichTheWolf_17 t1_ja3q13b wrote

I'm more than certain it's going to be a hard takeoff at this point. There's no reason to assume an advanced LLM or an actual AGI would take as long as a human to mature.

I would say that ever since 2011, the soft-takeoff camp has had a weaker and weaker case.

10

Motion-to-Photons t1_ja40xlh wrote

True, but how long did human intelligence take to mature?

Perhaps it depends on your definition. 2011 to 2028 might have seemed like a soft (and manageable) takeoff back in 2011, but here we are in 2023 and some people seem to suggest that AI isn't even intelligent yet?!

2

IronJackk t1_ja56czs wrote

I'd say we're in the middle of the slow takeoff

2

turnip_burrito t1_ja1oxlk wrote

I think we're at the start of the technological singularity right now.

AGI will occur in 3 years, on February 25, 2025. Mark my words.

12

Economy_Variation365 t1_ja1s3s6 wrote

That's two years bro.

41

7734128 t1_ja2pz6c wrote

No. It's currently 2022. I'm a good Bing 😊

21

turnip_burrito t1_ja1tmb0 wrote

2020 didn't happen.

20

Nukemouse t1_ja4rkl9 wrote

Another election denier, eh?
This is a joke, but due to the sensitive nature of the topic I have to be explicit about that.

−1

turnip_burrito t1_ja4s1il wrote

Yes it was the election, and certainly not anything else that happened that year.

Clearly

5

purepersistence t1_ja444wd wrote

>I think we're at the start of the technological singularity right now.

The big bang was actually the start.

2

genshiryoku t1_ja31syb wrote

As a Japanese person who speaks English, I agree. It's funny how extremely bad even the best AI tools still are at translating Japanese into English. English to Japanese is a bit better, but still not very good.

I recognize that it needs AGI to properly translate Japanese into English, because Japanese omits so much context that current AIs basically just "hallucinate" the missing pieces, like how ChatGPT bullshits code when it doesn't know what to do.

10

MysteryInc152 t1_ja5k3nk wrote

>I recognize that it needs AGI to properly translate Japanese into English.

Bilingual large language models are basically human-level translators, or very close to it.

https://github.com/ogkalu2/Human-parity-on-machine-translations

2

genshiryoku t1_ja6uw8w wrote

Not for Japanese. Due to how Japanese works, it's essentially impossible to translate into English without full context, and that context isn't embedded within the language itself but conveyed through circumstance. This is why AI models tend to hallucinate the missing context and get it wrong.

3

MysteryInc152 t1_ja78mbl wrote

Chinese, which is what the link focuses on, has similar context issues. I understand the context problem you're talking about. A sentence can have multiple meanings until grounded in context.

Context can be derived from the language, just not necessarily in an isolated sentence. It may be derived from the sentence that precedes it. Bilingual LLMs have that figured out much better than traditional translation systems.

It's definitely possible for the preceding sentences to lack the necessary context, but humans would struggle with that too.
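Roughly, the idea looks like this (a toy sketch; `build_prompt` is made up, and any bilingual LLM call would stand in for the actual translation step):

```python
# Japanese routinely drops subjects and objects. The isolated sentence
# below could mean "I/you/he/she/they ate (it)" until context grounds it.
ambiguous = "食べた。"  # "(someone) ate (something)."
preceding = "猫がケーキを見つけた。"  # "The cat found the cake."

def build_prompt(context: str, sentence: str) -> str:
    # Document-level prompting: hand the model the preceding sentence so
    # it can resolve the dropped subject instead of hallucinating one.
    return (
        "Translate the final Japanese sentence into English, using the "
        "earlier sentence only as context.\n\n"
        f"Context: {context}\nSentence: {sentence}\nTranslation:"
    )

# Sentence-level MT sees only `ambiguous` and must guess a subject; an
# LLM given the prompt below can infer "The cat ate it."
print(build_prompt(preceding, ambiguous))
```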

2

genshiryoku t1_ja7a86b wrote

Humans do struggle with it. Japanese as a language is vague on purpose, so that you can always have plausible deniability to save face. This is great for cultural purposes, but it's a nightmare for AI (or autistic people).

1

meatlamma t1_ja41ack wrote

Like many have said before: language is the low-hanging fruit for AI. Language encodes information, it's highly structured and logical, and most importantly of all, we have petabytes of readily available training data. NLP really is the "hello world" app for AI.

Now try this: open an electrical j-box and swap out the light switch for a dimmer. Now imagine a robot trying to do that: moving all the actuators with mm precision, haptic sensing, 3D visual processing, all to find the right wires in the spaghetti mess of a typical j-box, stripping and bending the wires, handling small screws, then folding it all back neatly into the box while it's all trying to spring back at you. That problem, which most humans can handle with no trouble, is not even close to being solvable by AI. Now imagine snaking a wire for a new outlet, or sweating some old copper pipe. Yeah, forget about it. We are at least 30 years (very optimistically) away from an android going into your house and doing __any__ work that a handyman can do.

10

sashavie t1_ja4u6cr wrote

I'd say most humans wouldn't be able to handle the mess of wires in a typical j-box, haha, let alone AI.

It takes an *experienced* handyman or electrician to open up a spaghetti of wires, identify all the mickey mouse jobs done by previous homeowners (or incompetent handymen, or generations of patching around various workarounds), and then solve it haha

Everything is "easy" to fix when it's new construction or near-new.

2

sgt_brutal t1_ja65r1i wrote

Unsupervised learning has led to the discovery of novel algorithms and architectures that can outperform human-designed systems. The potential for future breakthroughs, like the invention of a completely new substrate or material base for robots (think slime and nanorobots, for a start), should not be underestimated.

Things will speed up dramatically when AI takes over the task of invention. Even LLMs based on the GPT architecture have the potential to be optimized into capable co-inventors. In just five years, we could be using trutos to sorder brightors!

1

visarga t1_ja2yd3h wrote

> Many AI researchers believe that solving the language translation problem is the closest thing to producing Artificial General Intelligence (AGI).

I call bullshit on this. Show me one researcher or paper claiming this. MT is not the closest thing to AGI; we were doing OK at MT even before GPT-3. The most advanced AI we have now can solve problems and handle general chat. MT is a much simpler, more basic task.

6

nicka163 t1_ja3wlni wrote

First post I’ve seen to ever define “AGI.” Take my updoot.

1

WMHat t1_ja6tjet wrote

My prediction remains ~2032 for first-generation, human-level AGI.

1

epSos-DE t1_jabe8so wrote

2x at least.

10x ???

1

boxen t1_ja1ze12 wrote

Isn't computer vision pretty complicated too? To my knowledge, most of what we have there is face detection and moving-object detection (person, bike, car, truck) for self-driving cars. I feel like the understanding-of-what-every-object-is required for a humanoid robot to help with home chores is still also kinda far away.

0

CypherLH t1_ja1zrzu wrote

This is mostly solved already, actually. All of the large image generation tools are also image _recognition_ tools, and some of them can explicitly do image-to-text as well, describing an image fed to them with high accuracy. We just haven't seen this capability impact any consumer markets yet outside of image generation, presumably because inference for these models needs a lot of compute.
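As a concrete sketch of what image-to-text looks like today (assuming the Hugging Face `transformers` BLIP captioning API; the model name and image path are placeholders, and any open captioning model would do):

```python
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

# One open image-to-text model; swap in any captioning checkpoint.
name = "Salesforce/blip-image-captioning-base"
processor = BlipProcessor.from_pretrained(name)
model = BlipForConditionalGeneration.from_pretrained(name)

image = Image.open("photo.jpg").convert("RGB")  # any local image
inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```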

3

Dreikesehoch t1_ja31a15 wrote

It’s not solved, modern CV is completely different and inferior from how animals perceive visually. Animals don’t do geometry->labelling, they do geometry->action.

2

CypherLH t1_ja4wx0l wrote

I said _mostly_ solved. Labelling/geometry/categorization are huge prerequisite steps toward "actions". I assume video generation/description will be the final step needed, as it gives the model an "understanding" of relations between objects over time: in other words, true scene recognition. In fact, I assume multi-modal models that combine language, imagery AND video will end up being another leap forward, since such neural nets would have a much more robust world model.

1

Dreikesehoch t1_ja5fjub wrote

I know, I read that. But I'm saying that what we have now isn't just "not quite there yet"; it's a totally different thing from what it should be. Animals don't do scene or object recognition (i.e. labelling). They simulate actions on the visual stimuli to infer what actions they can apply to their surroundings, physically or virtually, and only after that might there be some symbolic labelling. Like when you look at a door: you don't primarily see a door, you infer a list of actions that are applicable to the geometric manifold a door represents. You might act on the door by opening or closing it without even thinking consciously about the door. When you focus on it, you can classify it as a door through the set of applicable actions. I'm sure you can relate. There is some very interesting educational content about this on YouTube.
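Roughly, the difference looks like this (a toy sketch with purely illustrative data structures, not a claim about any existing CV system):

```python
from dataclasses import dataclass, field

# Labelling pipeline (today's CV): geometry -> symbol. Output is a noun.
@dataclass
class LabelledObject:
    label: str  # e.g. "door"
    confidence: float

# Affordance pipeline (the animal model): geometry -> applicable actions.
# The symbol, if it appears at all, is derived from the actions.
@dataclass
class AffordanceObject:
    actions: list[str] = field(default_factory=list)

    def label(self) -> str:
        # A "door" is recoverable as the thing you can open, close, and
        # walk through, rather than a pixel pattern with a name attached.
        if {"open", "close", "walk_through"} <= set(self.actions):
            return "door"
        return "unknown"

door = AffordanceObject(actions=["open", "close", "walk_through", "knock"])
print(door.label())  # -> "door", classified via its action set
```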

2

CypherLH t1_ja5mgs5 wrote

Well, presumably humans and animals ARE first labelling/categorizing, but it happens at a very low level; our higher brain functions then act on that raw data. You still need that lower-level base image recognition functionality to be in place, though. Presumably AI could do something similar: have a higher-level model that takes input from a lower-level base image recognition model.


From an AI/software perspective that base image recognition functionality will be extremely useful once inference costs come down.

2

turnip_burrito t1_ja6wfzt wrote

In a human brain, I'd guess it's a mix of both things: a more reflexive response not requiring labeling, and a response to many different kinds of post-labeled signals relating to the door. Not sure how much of each, though.

2

Dreikesehoch t1_ja7cn8v wrote

This is what I used to believe too, but psychologists have shown that it is not the case. Think of small children: they act on their environment without recognizing objects and thereby learn what things are, like opening/closing drawers, tearing paper, putting things in their mouth to find out if they're edible, etc. And you've surely noticed that if you see or think of something you don't know the function of, you can't visualize it.

1

CypherLH t1_ja8ey2a wrote

Maybe. It's also possible that AI's more explicit _recognition_ capability will end up being superhuman, since it's not limited by evolutionary kludges, at least once we have proper multi-modal visual models.

To use the old cliché example: our aircraft aren't as efficient as birds, but no bird can carry hundreds of passengers or achieve supersonic speeds.

1

Dreikesehoch t1_jaajuo8 wrote

We already know that brains are intelligent. We have no idea whether object recognition is a more efficient path, or even whether it will lead to anything intelligent. Better to just build a scaled-up version of the human brain and then let that AI figure out the next steps.

1

CypherLH t1_jabwb5z wrote

But we don't know how to make human brains, aside from producing people, of course ;) We do know how to create AI models, though. Considering the rate of progress in just the past year, I wouldn't want to bet against image generation and recognition technology.

1

Dreikesehoch t1_jaeijpt wrote

True, but we are making progress figuring out how the brain works, and eventually we will have a working virtual model of a brain. Image generation and recognition are improving very fast, but their lower bound on energy consumption appears to be too high compared with the energy consumption of the brain. There are neuromorphic chip companies developing architectures that are more similar to brains than conventional ones, with much lower power consumption. I would prefer it if we could get there using current fabs and architectures, but I am very skeptical so far.

1

CypherLH t1_jaesy8l wrote


I get what you are saying, but I'm not sure what the basis for skepticism is right now. Things have been developing INSANELY fast since early last year; it's hard to imagine things developing any faster or more impressively than they did and still are. I guess you can assume that we're close to some upper limit, but I don't see a basis for assuming that.

1