
mjrossman t1_iyo70iv wrote

No. If anything, what I've observed with ChatGPT, as well as the drama surrounding Stable Diffusion 2.0, suggests the singularity will not be publicly noticeable or available in consumer products. These applications are demonstrating a negative feedback loop where arbitrary limitations become increasingly necessary for social (not technical) reasons. Additionally, ChatGPT is like a snapshot of everything that's been said in the past: whatever it spits out sounds convincingly authoritative but has no guaranteed accuracy on basic logic and reasoning (it gets simple math wrong, for instance). I suspect further iterations will be more convincing, perhaps even frighteningly "informative", but sussing out errors and inaccuracies will just get proportionately more demanding for human domain experts. It does spit out a lot of code, but give it a complex enough prompt and the code will abruptly end; there might eventually be a subscription service that matches the work being done to serve up longer output.

I still suspect that advances in AI will accelerate for quite a while, and only past a certain threshold (maybe 2030 or later) will a group of humans arrive at a novel methodology that self-evidently produces all the reasoning and self-awareness an AGI would require. Until then, there is a 0% chance that we build an AI that builds an AI, and so on, in a way that actually reaches another stage of complexity. In all likelihood, AGI is closest for those with the most scaled computational facilities, the most optimized ASICs, and the widest distribution of feedback mechanisms, and that does not overlap 100% with current academic work using AWS and other cloud compute.

5

EntireContext OP t1_iyo964o wrote

Current methods can solve math problems, though. A paper from November showed a net that solved ten International Mathematical Olympiad problems. It's not like transformers can't do math, and ChatGPT wasn't even trained specifically to do math.

I haven't found its limits in terms of web development, at least. It's a capable pair-programmer. Of course, I assume it can't invent novel algorithms with state-of-the-art complexity, but I didn't expect it to do that.

2

mjrossman t1_iyobcpe wrote

Maybe I'm misunderstanding, but if you don't expect state-of-the-art output, or, for lack of a better term, gain of function from these current models, how do you see our approach to the singularity being shortened by the current consumer product? As for the math olympiad reference, I'm assuming you mean Minerva or something at the same level. Again, it doesn't produce completely error-free answers; it produces a sequence of words and algorithms that are statistically adjacent enough to be convincing. And if olympiad (or college-level) question sets were available in the training data, we should expect the bot can simply recall the answers as complete chunks without "thinking".

2

EntireContext OP t1_iyoc3ib wrote

ChatGPT is state-of-the-art in terms of what's available as a general conversational model. It's obviously not state-of-the-art at everything, because it can't solve IMO math problems, for example.

When you answer any question, what you do is give a sequence of words that are statistically adjacent enough to be convincing...

5

mjrossman t1_iyomjy8 wrote

I'd disagree with your point about how we answer questions: we optimize for comprehensively sound and valid answers, not for statistical adjacency. If someone spouts techno-jargon or other word salad just to sound convincing, the wisdom of the crowd is already powerful enough to call it out as empty. Likewise, the wisdom of the crowd can break ChatGPT, and there are already actively collected techniques to "jailbreak" the application.
My point is that a general conversational model is a gimmick at this point, and GPT-4 is likewise already described as having limitations, such as being text-centric rather than multimodal. It'll definitely be uncannily entertaining as a conversational homunculus, but a homunculus does not a singularity make.

1

markasoftware t1_iz8gzmn wrote

When the code abruptly ends, that's just because OpenAI put a limit on the length of the output, not because it can't generate more code.

If you ask, for example, "Can you write the second part of the code, starting from let foo = bar", it will print out the rest of the code starting at the line you mention.
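To make the workaround concrete, here's a minimal sketch of the same continue-from-truncation trick done against the OpenAI Python API rather than the chat UI. The model name, the token cap, and the exact continuation prompt are illustrative assumptions, not anything this thread or OpenAI prescribes:

```python
# Minimal sketch (assumptions: the pre-1.0 `openai` Python package with an
# API key set in OPENAI_API_KEY, and "gpt-3.5-turbo" as an illustrative
# model name). Caps the output to provoke truncation, then asks the model
# to continue from the last line it produced.
import openai

messages = [{"role": "user", "content": "Write a long JavaScript function."}]

resp = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",   # illustrative
    messages=messages,
    max_tokens=128,          # small cap so the reply gets cut off
)
code = resp.choices[0].message.content

while resp.choices[0].finish_reason == "length":  # reply hit the length cap
    last_line = code.rstrip().splitlines()[-1]
    messages += [
        {"role": "assistant", "content": code},
        {"role": "user",
         "content": f'Continue the code starting from the line "{last_line}".'},
    ]
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
        max_tokens=128,
    )
    # Note: the model may repeat the quoted line; in practice you'd strip
    # the duplicate before appending.
    code += "\n" + resp.choices[0].message.content

print(code)
```

The same idea works in the chat UI itself: the "starting from let foo = bar" prompt above is exactly this loop done by hand.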

2

mjrossman t1_iz8h35y wrote

Thanks, just tried it with some p5.js samples.

1