fluffy_assassins OP t1_je05ghj wrote

Well, I'm wondering if the shareholders will "overthrow" the CEOs if they see that having an AI in charge will actually get them more stock value, and therefore more power.

We could see executives vs. shareholders. Although, if the executives are also the main shareholders, that could be an obstacle.

Imagine a CEO realizing they'll make more money by watching the AI than by actively managing. They'll remove themselves from the decision-making system to get more of that green.

Edit: this is kind of a guess. I'm wondering what CEOs will do to prevent this, as I don't feel they will go out without a fight.

What I'm really curious about with this question is HOW the executive level will counter superior AI. It will be interesting to see.

12

fluffy_assassins t1_jb6vmgz wrote

I have kind of a theory.

There used to be self-modifying code in assembler, because computing power was more expensive than programmers' time. So programmers spent more time to get more out of the more expensive hardware.

I'm thinking that when transistors can't shrink anymore (quantum effects and all), we're going to need to squeeze out all the computing power we can get, to the point where... right back to self-modifying code. Though probably written by AI this time. I don't think a human could debug that!
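To make the idea concrete, here's a minimal sketch of what self-modifying code looks like in the assembler sense, done as a toy register machine in Python. The instruction set ("SET", "ADD", "PATCH", "HALT") is entirely hypothetical and just for illustration: the key point is that the program lives in mutable memory, so one instruction can overwrite the operand of another before it executes.

```python
# Toy register machine whose program can rewrite its own instructions,
# mimicking the assembler-era self-modifying-code trick.
# Hypothetical instruction set (for illustration only):
#   ("SET", reg, value)         reg <- value
#   ("ADD", reg, value)         reg <- reg + value
#   ("PATCH", addr, field, v)   replace field `field` of program[addr] with v
#   ("HALT",)                   stop and return the registers

def run(program):
    prog = [list(ins) for ins in program]  # mutable copy: the code IS data
    regs = {"a": 0}
    pc = 0
    while True:
        ins = prog[pc]
        op = ins[0]
        if op == "SET":
            regs[ins[1]] = ins[2]
        elif op == "ADD":
            regs[ins[1]] += ins[2]
        elif op == "PATCH":
            # the program overwrites one field of one of its own instructions
            prog[ins[1]][ins[2]] = ins[3]
        elif op == "HALT":
            return regs
        pc += 1

# The ADD at address 2 is written as "ADD a, 1", but instruction 1 patches
# its operand to 10 before it runs, so the final value is 10, not 1.
result = run([
    ("SET", "a", 0),        # 0
    ("PATCH", 2, 2, 10),    # 1: rewrite the operand of the next ADD
    ("ADD", "a", 1),        # 2: executes as "ADD a, 10" after the patch
    ("HALT",),              # 3
])
print(result["a"])  # → 10
```

This is also exactly why the technique fell out of favor: reading the program listing tells you nothing reliable about what will actually execute, which is the debugging nightmare mentioned above.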

3

fluffy_assassins t1_j9bo1io wrote

Yeah, exactly. I almost edited it to say something like that.

Also the trinity of remote technologies, probably.

I imagine AI will always be cloud-based, because there's so much more efficient, dedicated computing power available remotely. Quantum computing will probably also always be remote, because of the cooling requirements.

Nuclear fusion? We already have it. Look up. It's just remote access to it.

2