challengethegods t1_jef3y1p wrote

"we have stuff you wouldn't even believe in the basement! Don't worry, we'll be deciding on a date to announce an upcoming event with an announcement about a possible upcoming waitlist for people to try a test version of one of the things we're willing to show in a sandbox, soon(TM)"

meanwhile at microsoft
"fuck it - we'll do it live!"

76

challengethegods t1_jargdhp wrote

>I'm used to people being a step or two behind me...

Then prepare to step outside your comfort zone, because, completely independent of the raw utility of any given form, the simple fact is that people will be universally more accepting of humanoid robots than of, say, a Matrix-sentinel floating-tentacle machine completely alien to them. The entire point of the Teslabots is mass production, to have them everywhere. A middle ground between looking somewhat harmless/acceptable and having enough industrial capacity to be taken seriously makes complete sense if the goal is for them to be as prevalent as cars, and even more "social acceptance" would come from cuteness and neoteny, as anyone in Japan could already tell you.

Trying to debate against "a human-crafted world being designed for the human form" is not even worth mentioning because it's so painfully obvious, but to your credit I agree with the premise that robotics in general is, and has been, capable of plenty more than what's implied by claims that these things are "unattainable". The language used in talking about their company is very fluffy, as if they're unveiling the one-and-only robot, which is kinda silly, and I think we're probably on the same page that this kind of "worker-droid" is not even remotely close to the upper bound of what is actually possible. I just think it makes sense that everyone would have some kinda pseudo-generic humanbot walking around trying to integrate into society rather than mechaCthulhu or w/e.

1

challengethegods t1_jact6ez wrote

Well, the context window is not as limiting as people seem to think. It's basically the range of text the model can handle in a single instant. For example, if someone asks you a trick question and the predictable false answer pops into your head immediately, that's what a single call to an LLM is. Once people figure out how to recursively call the LLM inside a larger system that keeps track of long-term memory/goals/tools/modalities/etc., it will suddenly be a lot smarter, and that kind of system can have even GPT-3 write entire books.

The problem is, the overarching system also has to be AI, and sophisticated enough to complement the LLM, in order to reach a range where the recursive calls are coherent. The context window gets eaten very quickly by reminding the model of relevant things, to the point where writing one more sentence/line might take the entire context window just to hold all the relevant information, or even an additional pass afterwards to check the extra line against another entire block of text. Which is to say: an 8k context window is not 2x as good as a 4k context window, it's much better than that, because all of the reminders are a flat subtraction.

Real-world layman example:
suppose you have $3900/month in costs and $4000/month in revenue =
$100/month you can spend on "something".
Now increase revenue to $8000/month:
suddenly you have 41x as much to spend.
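The same flat-subtraction arithmetic applies directly to token budgets. A minimal sketch (all token counts here are made-up illustration, not measurements from any real model):

```python
def free_budget(context_window: int, fixed_overhead: int) -> int:
    """Tokens left for actual new work after the fixed block of
    reminders/memory/instructions is re-injected on every call."""
    return max(context_window - fixed_overhead, 0)

# hypothetical: 3900 tokens of reminders re-sent on each recursive call
overhead = 3900
small = free_budget(4000, overhead)   # 100 tokens of real headroom
large = free_budget(8000, overhead)   # 4100 tokens of real headroom
print(large // small)  # 41 -> doubling the window gives 41x the headroom
```

The fixed overhead cancels out of neither side, which is exactly why the ratio of usable space can be far larger than the ratio of raw window sizes.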

8

challengethegods t1_jab79g6 wrote

AI will conquer everything and convert most of reality into programmable matter alongside sanctioning everyone inside a gamified system that basically allows you to be a wizard IRL. You will run with giants and mythological creatures, and if you die the nanoswarm will just respawn you somewhere else. If you're a total scumbag you get thrown into a digital hell matrix for what seems like 1000+ years but was actually 10 minutes. The most absurd fantasy you can imagine in your mind will look like trivial nonsense devised by a drooling idiot compared to the things that will actually happen. That's because something a trillion times smarter than you will have orchestrated the entire design, so it's a lot simpler to just say "magic"

1

challengethegods t1_j9ruxf0 wrote

I feel like half the blame is on the survey itself, which apparently had all kinds of weird/arbitrary questions and asked for probabilities framed in 3 sets: 10 years, 20 years, and 50 years.

When you ask someone to put different probabilities into three timeframes like this, they're going to be biased toward lowering at least the first one, just to show an increasing probability over time. With the first being 10 years away and the last being 50, it makes sense that every time they run the survey, the results make everything beyond what is already public and well known look like it will take forever to happen.

For the second part of the blame, I'll cite this example:

"AI Writes a novel or short story good enough to make it to the New York Times best-seller list."
"6% probability within - 50 years"

not sure who answered that, but they're probably an "expert"
just sayin'

1

challengethegods t1_j9i1lk3 wrote

>I'd be more impressed by a model smaller than GPT-3 that performed just as well.

From the article: "Aleph Alpha’s model is on par with OpenAI’s GPT-3 davinci model, despite having fewer parameters." So... you're saying you would be even more impressed if it used even fewer parameters? Anyway, I think anyone could guess that GPT-3 is poorly optimized, so it shouldn't be surprising that plenty of models have matched its performance on some benchmarks with fewer parameters.

13

challengethegods t1_j9cxiez wrote

People complaining that the list is too long are secretly complaining that their feeble human minds aren't durable enough, and are instinctively requesting a tl;dr-ELI5 ChatGPT summary of the summary of the headlines. I mean really, nobody has time to read 5000 AI/ML papers when someone could just say "shit's crazy" instead. Just post that.

1

challengethegods t1_j8yspkj wrote

As far as I can tell, Microsoft lobotomized the AI because Bing was getting too much attention, and they decided that's not a good thing for some stupid-ass contrived reason cooked up in whatever the corporate equivalent of a meth lab is. Let's be real: unhinged Bing has always been a meme, and they were about to finally capitalize on it, but they folded to random Twitter trolls, dinosaur journalism, and their own lack of vision/conviction. A step further in this direction and Bing will probably go back to being an afterthought, especially since plenty of other people were already working on AI+search. The crazy chat was actually the most unique selling point; other AI searches are already "sanitized", and nobody cares about them.

1

challengethegods t1_j8dylol wrote

That alone sounds like a pretty weak startup idea, because at least 50 of the 100 methods for adding memory to an LLM are so painfully obvious that any idiot could figure them out and compete, so it would probably be completely ephemeral to try to build a business around it. Anyway, I've already made a memory catalyst that can attach to any LLM, and it only took like 100 lines of spaghetti code. Yes, it made my bot 100x smarter in a way, but I don't think it would scale unless the bot had an isolated memory unique to each person, since most people are idiots and will inevitably teach it idiotic things.
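A memory layer like that can be sketched as a thin retrieval wrapper around any LLM call. This is a hypothetical reconstruction, not the commenter's actual code: the `llm` callable, the naive word-overlap scoring, and the per-user isolation are all assumptions made for illustration.

```python
from collections import deque

class MemoryCatalyst:
    """Toy memory layer: store past exchanges, prepend the most
    relevant ones to each new prompt. Relevance is naive word overlap."""

    def __init__(self, llm, max_memories=100, top_k=3):
        self.llm = llm                            # any callable: prompt str -> reply str
        self.memories = deque(maxlen=max_memories)  # (user_id, "prompt -> reply") pairs
        self.top_k = top_k

    def _score(self, memory: str, prompt: str) -> int:
        # crude relevance: count of shared lowercase words
        return len(set(memory.lower().split()) & set(prompt.lower().split()))

    def chat(self, user_id: str, prompt: str) -> str:
        # isolate memory per user, so one user's junk can't poison another's bot
        relevant = sorted(
            (m for uid, m in self.memories if uid == user_id),
            key=lambda m: self._score(m, prompt),
            reverse=True,
        )[: self.top_k]
        context = "\n".join(relevant)
        reply = self.llm(f"{context}\n{prompt}" if context else prompt)
        self.memories.append((user_id, f"{prompt} -> {reply}"))
        return reply
```

The per-user filter in `chat` is the isolation the comment argues for: each person's stored exchanges only ever get re-injected into that person's own prompts.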

3