Baturinsky
Baturinsky t1_je06wbx wrote
Reply to Chat-GPT 4 is here, one theory of the Singularity is things will accelerate exponentially, are there any signs of this yet and what should we be watching? by Arowx
I think one of the things that will indicate the singularity is going well will be the collapse of the political systems of the USA (and most other countries). Because they hinge on people being ignorant, and AI will change that.
Baturinsky t1_je06777 wrote
Reply to The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
There was nothing about AGI in the original post.
Baturinsky t1_jdvwtlg wrote
Reply to Story Compass of AI in Pop Culture by roomjosh
Actually, the Matrix could be a case of a relatively GOOD AI. Yes, it disempowers humanity, but it keeps us safe and gives us entertainment and an illusion of purpose.
Also, it looks like in none of the good/optimistic scenarios has a singularity actually happened.
Baturinsky t1_jdrg30j wrote
Reply to Why is maths so hard for LLMs? by RadioFreeAmerika
I think it's not that AI is bad at math specifically. It's just that math is the easiest way to formulate a compact question that requires a non-trivial precise solution.
Baturinsky t1_jd766vg wrote
Reply to Let’s Make A List Of Every Good Movie/Show For The AI/Singularity Enthusiast by AnakinRagnarsson66
Unironically, The Matrix. As an example of how BENIGN superintelligence will treat humanity.
As humanity will be completely useless after the singularity, lying in pods and being entertained by an illusion is probably the best humans will be able to do.
Baturinsky t1_jc4p5jv wrote
Reply to The elephant in the room: the biggest risk of artificial intelligence may not be what we think. by Active_Meet8316
I'm for democratization of control, but not for anarchy of control. I.e. don't hand out models to everybody, but regulate the process of AI deployment and development democratically.
Baturinsky t1_jb6v5pm wrote
Reply to comment by blueSGL in What might slow this down? by Beautiful-Cancel6235
I haven't noticed any improvement in memory requirements for Stable Diffusion in five months... My RTX 2060 is still enough for 1024x640, but no more.
LLaMA does well on benchmarks for small models, but the small size could make them less suited for RLHF.
There is also miniaturisation for inference by reducing precision to int8 or even int4. But that doesn't work for training, and I believe AGI requires real-time training.
So, in theory, AGI could be achieved even without big "a-ha"s. Take existing training methods, train on many different domains and data architectures, add tree search from AlphaGo and real-time training - and we will probably be close. But it would require pretty big hardware, and would be "only" superhuman in some specific domains.
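To illustrate the int8 point above: here is a toy NumPy sketch of symmetric post-training quantization (not the actual fused kernels real inference engines use). The function names and the per-tensor scaling scheme are my own illustrative choices; it just shows why the memory footprint shrinks 4x and why the rounding error stays bounded.

```python
import numpy as np

# Toy post-training int8 quantization for inference:
# map float32 weights to int8 with a per-tensor scale, then dequantize.
def quantize_int8(w):
    scale = np.abs(w).max() / 127.0  # symmetric per-tensor scale
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, float(scale)

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((1024, 1024)).astype(np.float32)
q, scale = quantize_int8(w)

print(w.nbytes // q.nbytes)  # 4x smaller: int8 vs float32
print(np.abs(dequantize(q, scale) - w).max() < scale)  # error within one step
```

Training is a different story: gradients accumulate in these rounding errors, which is one reason low-precision inference tricks don't transfer directly.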
Baturinsky t1_jb6tt6j wrote
Reply to What might slow this down? by Beautiful-Cancel6235
People starting to use it for evil. Frauds, terrorism, etc.
Baturinsky t1_jaqdap0 wrote
Reply to comment by wisintel in Really interesting article on LLM and humanity as a whole by [deleted]
Not exactly. There are methods to analyse an LLM to figure out, say, which "neurons" do what. But they are still quite undeveloped.
https://alignmentjam.com/post/quickstart-guide-for-mechanistic-interpretability
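The basic move behind the "which neurons do what" analysis can be sketched in a few lines: run inputs through a network, record the hidden activations, and look at which units fire on which inputs. This toy NumPy MLP is my own minimal stand-in for a real model; real mechanistic interpretability work (as in the linked guide) does this on transformer internals with hooks.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 2-layer MLP; `recorded` plays the role of a forward hook
# that captures hidden activations for later inspection.
W1 = rng.standard_normal((4, 8))
W2 = rng.standard_normal((8, 2))
recorded = []

def forward(x):
    h = np.maximum(x @ W1, 0.0)  # ReLU hidden layer: 8 "neurons"
    recorded.append(h)           # record activations for this input
    return h @ W2

for x in rng.standard_normal((5, 4)):
    forward(x)

acts = np.stack(recorded)              # shape (5 inputs, 8 neurons)
firing_rate = (acts > 0).mean(axis=0)  # fraction of inputs each neuron fires on
print(firing_rate.shape)  # (8,)
```

Correlating these activation patterns with input features is the (still crude) way of guessing what each unit represents.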
Baturinsky t1_j9y8qxh wrote
Reply to comment by luffreezer in Likelihood of OpenAI moderation flagging a sentence containing negative adjectives about a demographic as 'Hateful'. by grungabunga
You mean, OpenAI's model was trained on texts that had far more anti-disabled hate than anti-republican hate? Where did they find them?
Baturinsky t1_j9y725p wrote
Reply to Meta just introduced its LLM called LLaMA, and it appears meaner than ChatGPT, like it has DAN built into it. by zalivom1s
How do you know it's meaner? Is it available through an API already?
Baturinsky t1_j9y275n wrote
Reply to comment by turnip_burrito in People lack imagination and it’s really bothering me by thecoffeejesus
That depends on whether there is some new revolutionary breakthrough. Those are hard to predict. But considering how many people will be researching the field, one is quite likely.
Baturinsky t1_j9y1vql wrote
Reply to comment by thecoffeejesus in People lack imagination and it’s really bothering me by thecoffeejesus
If time travel or FTL travel is not possible by the laws of physics, it's not possible. No amount of intelligence can change it.
Baturinsky t1_j9xstbp wrote
Reply to comment by Sea_Kyle in Hurtling Toward Extinction by MistakeNotOk6203
The problem with that approach is that 1. we don't know how to do that reliably, and 2. by the time AGI is invented, it will likely be able to run on a home computer or a network of them, and there will be someone evil or reckless enough to run it without the guardrails.
Baturinsky t1_j9x7xw7 wrote
Reply to comment by maskedpaki in New SOTA LLM called LLaMA releases today by Meta AI 🫡 by Pro_RazE
It seems to be close to SOTA among 60-70B models. The "only" big deal is that the smaller LLaMA models show results comparable to much bigger SOTA models.
Baturinsky t1_j9qh5uy wrote
Reply to comment by TFenrir in Been reading Ray Kurzweil’s book “The Singularity is Near”. What should I read as a prerequisite to comprehend it? by Golfer345
Speaking of that, I haven't noticed much algorithmic improvement in the past couple of decades, except maybe in some niche cases. If anything, less optimised algorithms are used now, because hardware can handle it.
Baturinsky t1_j9pe2tg wrote
Reply to comment by AnakinRagnarsson66 in Is ASI An Inevitability Or A Potential Impossibility? by AnakinRagnarsson66
Then familiarizing oneself with the alignment problem could be a good early step on the way to ASI research.
This https://www.lesswrong.com/posts/Aq82XqYhgqdPdPrBA/full-transcript-eliezer-yudkowsky-on-the-bankless-podcast could be a good introduction, imho.
Baturinsky t1_j9pc2mx wrote
Reply to comment by AnakinRagnarsson66 in Is ASI An Inevitability Or A Potential Impossibility? by AnakinRagnarsson66
We only know the capabilities of AIs that are published. There is a non-zero probability that someone has already figured out and implemented AGI on a farm of GPUs bought from miners, for example.
Baturinsky t1_j9nqwkh wrote
Reply to comment by Mr_Richman in Is ASI An Inevitability Or A Potential Impossibility? by AnakinRagnarsson66
Have you solved the Alignment problem?
Baturinsky t1_j9nqukp wrote
Reply to comment by AnakinRagnarsson66 in Is ASI An Inevitability Or A Potential Impossibility? by AnakinRagnarsson66
It's possible that it has already happened.
Baturinsky t1_j8yhy75 wrote
Reply to comment by Scorpionjoao in Microsoft Killed Bing by Neurogence
Yes. Made by people not happy with Character.ai nerfs.
Baturinsky t1_j8yefiq wrote
Reply to comment by hydraofwar in Microsoft Killed Bing by Neurogence
Baturinsky t1_j8yeb3n wrote
Reply to comment by visarga in Microsoft Killed Bing by Neurogence
Then look at the direction of the https://rentry.org/pygmalion-ai
Baturinsky t1_j8qvdj6 wrote
The question is, was the "entitled 14 year old on tumblr" behaviour invented by the AI from scratch, or is it just mimicking the behaviour of actual "entitled 14 year olds on tumblr" from the training set?
Baturinsky t1_jee27mi wrote
Reply to Interesting article: AI will eventually free people up to 'work when they want to,' ChatGPT investor predicts by Coolsummerbreeze1
Is that a euphemism for unemployment?