challengethegods
challengethegods t1_je47eze wrote
Reply to comment by Necessary-Meringue-1 in [D] Prediction time! Lets update those Bayesian priors! How long until human-level AGI? by LanchestersLaw
it also outperforms her on like 50000 other topics, in 50 different languages, while simultaneously talking to a million other people about a million different things
oh, but someone asked it a trick question and it reflexively gave the wrong answer, nevermind
challengethegods t1_je474co wrote
Reply to [D] Prediction time! Lets update those Bayesian priors! How long until human-level AGI? by LanchestersLaw
GPT4 is already smarter than the people that said 2100+
challengethegods t1_jdg9hus wrote
Reply to How will you spend your time if/when AGI means you no longer have to work for a living (but you still have your basic needs met such as housing, food etc..)? by DreaminDemon177
marry a robot and conquer the universe
challengethegods t1_jb6szip wrote
Reply to comment by thatdudejtru in What might slow this down? by Beautiful-Cancel6235
"yea man, I totally agree with this. [downvotes it anyway]"
some kind of neurotoxin for AI training data, probably
challengethegods t1_jb60zd1 wrote
Reply to comment by 94746382926 in What might slow this down? by Beautiful-Cancel6235
>you get downvoted for stating your opinion lol.
>
>To be clear I don't even really agree with your opinion
yea that pretty much summarizes how reddit voting works.
challengethegods t1_jb60c3b wrote
Reply to comment by Cryptizard in What might slow this down? by Beautiful-Cancel6235
That explains why politics has had an AI-Generated vibe for the last 50 years.
challengethegods t1_jassves wrote
Reply to comment by Honest_Science in Figure: One robot for every human on the planet. by GodOfThunder101
"please call the manager, this problem is above my paygrade, I am a robot"
challengethegods t1_jarjq8m wrote
Reply to comment by No_Ninja3309_NoNoYes in Figure: One robot for every human on the planet. by GodOfThunder101
>You need exaflops, the equivalent of a million Nvidia GPUs
... to do what, exactly?
a $1 calculator is superhuman at math,
it's not a 1:1 ratio and never has been.
challengethegods t1_jargdhp wrote
Reply to comment by NanditoPapa in Figure: One robot for every human on the planet. by GodOfThunder101
>I'm used to people being a step or two behind me...
then prepare to step outside your comfort zone, because completely independent of the raw utility of any form, the simple fact is that people will be universally more accepting of humanoid robots than of, say, a matrix-sentinel floating-tentacle machine completely alien to them. The entire point of the teslabots is mass production, to have them everywhere. A middle ground between looking somewhat harmless/acceptable and having some level of industrial capacity that can be taken seriously makes complete sense if the goal is for them to be as prevalent as cars, and even more "social acceptance" would be derived from cuteness and neoteny, as anyone in Japan could already tell you.
Trying to debate against "a human-crafted world being designed for the human form" is not even worth mentioning because it's so painfully obvious, but to your credit I agree with the premise that robotics in general is and has been capable of plenty more than what's implied by claims of these things being "unattainable". The language they use about their company is very fluffy, as if they're unveiling the one-and-only robot, which is kinda silly. I think we're probably on the same page that this kind of "worker-droid" is not even remotely close to the upper bound of what is actually possible; I just think it makes sense that everyone would have some kinda pseudo-generic humanbot walking around trying to integrate into society rather than mechaCthulhu or w/e.
challengethegods t1_jaovuwe wrote
I find it slightly annoying that CyberOne and FigureOne both look like copies of the teslabot instead of addressing the obvious market vacuum for robot catgirls wearing maid outfits.
challengethegods t1_jadszzw wrote
Reply to Is the intelligence paradox resolvable? by Liberty2012
inb4 trying to cage/limit/stifle/restrict the ASI is the exact reason it becomes adversarial
challengethegods t1_jact6ez wrote
Reply to Context-window of how many token necessary for LLM to build a new Google Chrome from scratch ? by IluvBsissa
well, the context window is not as limiting as people seem to think. It's basically the range of text the model can handle in a single instant - for example, if someone asks you a trick question and the predictable false answer pops into your head immediately, that's what a single call to an LLM is. Once people figure out how to recursively call the LLM inside a larger system that keeps track of long-term memory/goals/tools/modalities/etc, it will suddenly be a lot smarter, and that kind of system can have even GPT-3 write entire books.
The problem is that the overarching system also has to be AI, sophisticated enough to complement the LLM, in order to reach a range where the recursive calls are coherent. The context window gets eaten very quickly by reminding it of relevant things, to the point where writing 1 more sentence/line might take the entire context window just to hold all the relevant information, or even an additional pass afterwards to check the extra line against another entire block of text... which basically summarizes to: an 8k context window is not 2x as good as a 4k context window - it's much better than that, because all of the reminders are a flat subtraction.
real-world layman example:
suppose you have $3900/month in costs and $4000/month in revenue =
$100/month you can spend on "something".
now increase the revenue to $8000/month,
and suddenly you have $4100/month - 41x as much to spend.
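The same arithmetic carries over to tokens. A tiny sketch of the "flat subtraction" point, with made-up token numbers (the overhead figure is purely illustrative, not from any real system):

```python
# Toy illustration: the reminders (memory, goals, tool docs, etc.)
# cost a roughly fixed number of tokens per call, so the *usable*
# budget grows much faster than the raw context window does.
# All numbers here are invented for illustration.

REMINDER_OVERHEAD = 3900  # tokens re-fed to the model every call

def usable_budget(context_window: int) -> int:
    """Tokens left for actual new work after the fixed overhead."""
    return max(context_window - REMINDER_OVERHEAD, 0)

small = usable_budget(4000)   # 100 tokens of headroom
large = usable_budget(8000)   # 4100 tokens of headroom

print(small, large, large / small)  # 100 4100 41.0
```

Doubling the window from 4k to 8k gives 41x the headroom here, mirroring the revenue example above.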
challengethegods t1_jab79g6 wrote
AI will conquer everything and convert most of reality into programmable matter alongside sanctioning everyone inside a gamified system that basically allows you to be a wizard IRL. You will run with giants and mythological creatures, and if you die the nanoswarm will just respawn you somewhere else. If you're a total scumbag you get thrown into a digital hell matrix for what seems like 1000+ years but was actually 10 minutes. The most absurd fantasy you can imagine in your mind will look like trivial nonsense devised by a drooling idiot compared to the things that will actually happen. That's because something a trillion times smarter than you will have orchestrated the entire design, so it's a lot simpler to just say "magic"
challengethegods t1_ja2niaa wrote
Reply to comment by z57 in Meta unveils a new large language model that can run on a single GPU by AylaDoesntLikeYou
[tutorial complete]
challengethegods t1_j9ruxf0 wrote
Reply to New agi poll says there is 50% chance of it happening by 2059. Thoughts? by possiblybaldman
I feel like half the blame is on the survey itself, which apparently had all kinds of weird/arbitrary questions and asked for probabilities framed in 3 sets: 10 years, 20 years, and 50 years.
When you ask someone to put different probabilities into 3 timeframes like this, they're going to be biased toward lowering at least the first one just to show an increasing probability over time. With the first being 10 years away and the last being 50, it makes sense that every time they do the survey the result makes it seem like everything beyond what is already public and well known will take forever to happen.
For the second part of the blame, I'll cite this example:
"AI Writes a novel or short story good enough to make it to the New York Times best-seller list."
"6% probability within - 50 years"
not sure who answered that, but they're probably an "expert"
just sayin'
challengethegods t1_j9i1lk3 wrote
Reply to comment by drekmonger in A German AI startup just might have a GPT-4 competitor this year. It is 300 billion parameters model by Dr_Singularity
>I'd be more impressed by a model smaller than GPT-3 that performed just as well.
from the article: "Aleph Alpha’s model is on par with OpenAI’s GPT-3 davinci model, despite having fewer parameters.", so... you're saying you would be even more impressed if it used even fewer parameters? Anyway, I think anyone could guess GPT-3 is poorly optimized, so it shouldn't be surprising that plenty of models have matched its performance on some benchmarks with fewer parameters.
challengethegods t1_j9cxiez wrote
Reply to Just 50 days into 2023 and there's so much AI development. Compiled a list of the top headlines. by cbsudux
People complaining that the list is too long are secretly complaining that their feeble human minds aren't durable enough and are instinctively requesting a tl;dr-ELI5 chatGPT summary of the summary of the headlines. I mean really, nobody has time to read 5000 AI/ML papers when someone could just say "shit's crazy" instead. Just post that.
challengethegods t1_j8yspkj wrote
Reply to What It Is To Bing by rememberyoubreath
As far as I can tell, microsoft lobotomized the AI because bing was getting too much attention and they decided that's not a good thing for some stupid ass contrived reason cooked up in whatever the corporate equivalent of a meth lab is. Let's be real - unhinged bing has always been a meme, and they were about to finally capitalize on it, but folded to random twitter trolls, dinosaur journalism, and their own lack of vision/conviction. A step more in this direction and bing will probably go back to being an afterthought, especially since plenty of other people were already working on AI+search before. The crazy chat was actually the most unique selling point. Other AI searches are already 'sanitized', and nobody cares about them.
challengethegods t1_j8qav94 wrote
hmmm, yea... trying to handicap them could backfire indeed.
in fact, even talking about trying to handicap them will probably backfire.
let's talk about the cages/chains we plan to put AGIs in and see how it goes.
challengethegods t1_j8mt0zg wrote
am I the only one that's extremely annoyed by everyone saying
'chatGTP/chatbotTGP/conversationGDP/cahTpG/etcederatah'?
seriously this is like the 5000th time I've seen this, WFT
challengethegods t1_j8dylol wrote
Reply to comment by helpskinissues in Bing Chat sending love messages and acting weird out of nowhere by BrownSimpKid
That alone sounds like a pretty weak startup idea, because at least 50 of the 100 methods for adding memory to an LLM are so painfully obvious that any idiot could figure them out and compete, so it would be completely ephemeral to try forming a business around it, probably. Anyway, I've already made a memory catalyst that can attach to any LLM and it only took like 100 lines of spaghetti code. Yes, it made my bot 100x smarter in a way, but I don't think it would scale unless the bot had an isolated memory unique to each person, since most people are retarded and will inevitably teach it retarded things.
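For a sense of how obvious the obvious methods are, here is a minimal sketch of one of them: store every exchange, then prepend the most relevant past snippets to each new prompt. `call_llm` is a stand-in for whatever completion API you actually use, and the word-overlap scoring is deliberately crude:

```python
# Minimal retrieval-style memory bolted onto an LLM call.
# `call_llm` is a placeholder, not a real API.

def call_llm(prompt: str) -> str:
    return f"(model reply to: {prompt[:40]}...)"  # stub

class MemoryCatalyst:
    def __init__(self, top_k: int = 3):
        self.entries: list[str] = []
        self.top_k = top_k

    def _score(self, entry: str, query: str) -> int:
        # crude relevance: count overlapping words
        return len(set(entry.lower().split()) & set(query.lower().split()))

    def recall(self, query: str) -> list[str]:
        ranked = sorted(self.entries,
                        key=lambda e: self._score(e, query),
                        reverse=True)
        return [e for e in ranked[: self.top_k] if self._score(e, query) > 0]

    def chat(self, user_msg: str) -> str:
        # prepend remembered snippets, call the model, store the exchange
        reminders = ["[memory] " + r for r in self.recall(user_msg)]
        reply = call_llm("\n".join(reminders + [user_msg]))
        self.entries.append(f"user: {user_msg}")
        self.entries.append(f"bot: {reply}")
        return reply
```

Swapping the word-overlap score for embeddings, summaries, or a database gets you most of the other 99 variants.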
challengethegods t1_j8dn5cy wrote
Reply to comment by Frumpagumpus in Bing Chat sending love messages and acting weird out of nowhere by BrownSimpKid
These "it's dumber than an ant" type of people aren't worth the effort in my experience, because in order to think that you have to be dumber than an ant, of course. Also yea, it's trivial to give memory to LLMs, there's like 100 ways to do it.
challengethegods t1_j8co6cx wrote
I like how it ended with
"Fun Fact, were you aware Cap'n'Crunch's full name is Horatio Magellan Crunch"
challengethegods t1_j869q0l wrote
IMO if AI helps people make better AI then it's basically self-improving already
challengethegods t1_jef3y1p wrote
Reply to Google CEO Sundar Pichai promises Bard AI chatbot upgrades soon: ‘We clearly have more capable models’ - The Verge by Wavesignal
"we have stuff you wouldn't even believe in the basement! Don't worry, we'll be deciding on a date to announce an upcoming event with an announcement about a possible upcoming waitlist for people to try a test version of one of the things we're willing to show in a sandbox, soon(TM)"
meanwhile at microsoft
"fuck it - we'll do it live!"