drsimonz t1_iwj6w9e wrote

The trouble is that most people still don't understand exponential growth. In the past, very little changed within a single lifespan, so any thoughts of the distant future were just whimsical fantasies with no consequence. Now things are moving quickly enough that, realistically, being born one year later could make the difference between dying and becoming immortal. Maybe imagining such dramatic changes creates some kind of cognitive dissonance, because people seem to actively avoid thinking about it.
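To make the "people don't get exponentials" point concrete, here's a toy calculation. The two-year doubling period is an assumption picked purely for illustration, not a claim about any specific technology:

```python
# Toy illustration of exponential growth with an assumed doubling period.
def growth(years: float, doubling_period: float = 2.0) -> float:
    """Multiplier after `years` of growth that doubles every `doubling_period` years."""
    return 2 ** (years / doubling_period)

print(round(growth(20)))  # ~1,000x in 20 years
print(round(growth(40)))  # ~1,000,000x in 40 years
```

Linear intuition says 40 years buys twice as much change as 20; under doubling, it buys a thousand times as much. That gap is exactly what makes "born one year later" plausibly matter.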

30

drsimonz t1_iv900iu wrote

I have a good friend who believes reality is subjective - that events may be determined more by where you choose to focus your attention than by some universally consistent instance of the laws of physics. If that is true (which I think it would have to be, if we were an attention-oriented simulation like you describe), then it seems pretty difficult to come to any conclusions at all. If causality doesn't have to be globally consistent, it should be possible to "break" the laws of physics and get things like free energy or faster-than-light travel. I highly doubt Mr. Kurzweil would want to entertain such notions, since the possibilities are already so exciting even if we assume the universe is objective (i.e. the laws of physics apply everywhere simultaneously).

Of course, the possibility of us being the only intelligent species would certainly depend on whether we're in a simulation designed specifically for us. But I don't see any reason to prefer that idea over a simulation with 1 billion intelligent species per galaxy. To prefer the former seems no better than assuming the earth is the center of the universe.

1

drsimonz t1_iv8291a wrote

> we may choose to have ASI create the perfect simulation for us and keep us safe inside it, rather than expending energy to expand.

Yeah. This video made me wonder, what if there are as-yet-unknown natural limits to intelligence? What if minds pursuing greater intelligence universally lose interest in that goal once they reach a certain level, and pursue other things like entertainment, creativity, or even self destruction? Since we have zero examples of ASI, how can we possibly know? And consider how tiny a percentage of people alive today actually treat intelligence as a goal at all. Most people don't even seem to have a concept of intelligence being a good thing, let alone something you can change. I think people like Ray (and to be fair, myself) like to assume that the obvious choice is to continue increasing intelligence forever, since it increases your future capabilities for any other goals you might have.

Also worth noting that the "saturate the universe with computronium" thing obviously isn't compatible with the existence of other intelligent species. Unless we're unique in the universe, it's extremely unlikely we just happen to be the first species to have a chance to trigger a singularity - which we'd have to be, since we can look in any direction and see billions of non-computronium stars.

2

drsimonz t1_iuh18cy wrote

Definitely, though I would argue that the same issue exists for currency, which is why USD is used so widely in other countries besides the US - people feel much safer using the dollar, backed by a world superpower, than they would using the local money with some dictator printed on just one side. Is the US banking system actually trustworthy? Absolutely not, but I guess it's all relative?

1

drsimonz t1_iuc9yl1 wrote

I've been saying this for like a decade. There is only one way the social media story ends: people will post under their real name, using cryptographically secure identity verification provided by their government or some kind of international cooperative. Something more secure than a credit card. You get exactly one identity. You do not get to create a new account. Every post is tied to your birth certificate, or your passport, for life. Sure, you can still post on others' behalf, but you're literally selling your identity. Eventually, even the dumbest members of society will understand that a post without a verified identity isn't trustworthy - it's basically guaranteed to be advertising, propaganda, or a scam. When Facebook first came out I remember people being really put off by having to use their real names, but at this point it seems like we've gotten over it, so I don't see the problem. The sooner this kind of platform takes off, the sooner we can reclaim our democracy.
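A minimal toy sketch of the "one credential, every post signed" idea. Everything here is an assumption for illustration: a real system would use asymmetric signatures (e.g. Ed25519) so verifiers never see any secret, whereas this toy uses HMAC with a single trusted registry holding the master key:

```python
import hmac
import hashlib

# Held only by the issuing authority (toy assumption; not how a real PKI works).
REGISTRY_SECRET = b"registry-master-key"

def issue_credential(citizen_id: str) -> str:
    """Derive exactly one credential per citizen from a unique government ID."""
    return hmac.new(REGISTRY_SECRET, citizen_id.encode(), hashlib.sha256).hexdigest()

def sign_post(credential: str, post: str) -> str:
    """Tag a post with the poster's credential."""
    return hmac.new(credential.encode(), post.encode(), hashlib.sha256).hexdigest()

def verify_post(citizen_id: str, post: str, tag: str) -> bool:
    """Registry-side check: does this tag bind this post to this identity?"""
    expected = sign_post(issue_credential(citizen_id), post)
    return hmac.compare_digest(expected, tag)

cred = issue_credential("birth-cert-12345")
tag = sign_post(cred, "hello world")
print(verify_post("birth-cert-12345", "hello world", tag))  # True
print(verify_post("birth-cert-12345", "tampered post", tag))  # False
```

The point of the sketch is the binding, not the crypto: any post whose tag doesn't verify against a registered identity is exactly the "unverified, assume spam" case described above.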

8

drsimonz t1_isr8npb wrote

Sorry if I sounded too cynical. I am looking forward to seeing what this company does! But I do think that taking serious investment money is almost guaranteed to affect their priorities. Just look at OpenAI. It was all "save humanity this" and "working for a better future that" but now that they're backed by Microsoft their ideals seem to be a bit less lofty. And let's not forget Google's long lost "don't be evil" motto.

Now, IANAL, so I don't know if Stability AI has some kind of clever charter document that prevents their long-term goals from being undermined even if the board of directors changes its mind...but I'd be pretty surprised if they did. Honestly, it doesn't bode well for The Control Problem when you consider that we can't even prevent corporations, made entirely of human employees, from eventually turning evil.

8

drsimonz t1_ispt7ul wrote

I wouldn't get too excited. It's very common for companies to start charging for previously free services. Stable Diffusion was very well timed, since it was able to ride the considerable DALL-E hype while pulling the rug out from under them by publishing the weights. But in the end, this is a for-profit company that will inevitably need to deliver for shareholders by creating revenue streams. There's an unavoidable conflict of interest once you realize you're competing with your own open-source distribution.

31