ArgentStonecutter

ArgentStonecutter t1_jasixqw wrote

Only one robot?

That's like when people thought having a computer for every person was outrageous.

You'll have dozens of robots. Like you have dozens of computers.

Sometime in the '60s or '70s one of those futurist guys wrote something like "in the future you will have so many computers you'll throw them out because you just don't need them. They'll be in your boxes of breakfast cereal." And you know what? They're in greeting cards. They're sometimes even in your breakfast cereal. The computer in your mouse that lets it talk USB is more powerful than any desktop computer in the '70s or early '80s.

Robots are going to go the same way.

But they're not going to be your plastic pal who's fun to be with, the humanoid robot. They're going to be roombas, and dog walkers, and washing machines. They kind of already are, with your Internet of Things oven that sends Bluetooth messages to your cellphone when it thinks it needs to be cleaned. Except it'll be sending those messages to a cleaning robot.

You won't even think of them as robots, the same way you don't think of the desktop-class computer in your optical mouse (which actually has two desktop-class computers, if you count the DSP that does the motion tracking) as a computer.

3

ArgentStonecutter t1_j9pjv5y wrote

If you're actually having a conversation with an AI, by all means post about it, but no actual spoilers from the future. If you're from an advanced parallel dimension like the timeline where the Roman Empire never fell, it's all good.

4

ArgentStonecutter t1_j8fyg26 wrote

You came in with this ambiguous scenario, crowing about how it showed a text generator had a theory of mind, because just by chance the text generator generated the text you wanted, and you want us to go "oh, wow, a theory of mind". But all it's doing is generating statistically interesting text.

And when someone pointed that out, you went into this passive-aggressive "oh, let's see you do better" with someone who doesn't believe it's possible. That's not a valid or even useful argument; it's a stupid debate-club trick to score points.

And now you're pulling more stupid passive-aggressive tricks when you're called on it.

1

ArgentStonecutter t1_j8dbq1o wrote

The question seems ambiguous. I wouldn't have jumped to the same conclusion.

Frankly, I'd be worried about this guy wearing the same shirt every day for a week. Is there something odd about their marriage, such that she's only home infrequently enough for this to work without becoming a hygiene problem? Or did she buy him a whole wardrobe of doggy-themed shirts?

1

ArgentStonecutter t1_j656714 wrote

AGI is artificial general intelligence: an intelligence capable of acting as a general agent in the world. That doesn't imply that it's smarter than a human, or capable of unlimited self-improvement, or able to answer any question or solve any problem. An AGI could be no smarter than a dog, but if it were as competent as a dog, that would be a huge breakthrough.

A system capable of designing a cheap fusion reactor doesn't need general intelligence; it could be an idiot savant, or not recognizably an intelligence at all. From the point of view of a business, it should be an oracle, simply answering questions, with no agency at all. General intelligence is likely to be a problem to avoid as long as possible; you don't want to depend on your software "liking" you.

Vinge's original paper talked about a self-improving AGI, but people seem to have latched on to the AGI part and ignored the self-improving part. He was talking about one that could update its own fundamental design, or design successively more capable successors.

1