
ReadSeparate t1_iv6blo0 wrote

Why would it need symbols to do that, though? It would just do it directly. The reason humans use money is that we don't know the direct exchange rate between iPhones and chickens.

Additionally, there would not be market forces in such a system, so nothing would have a price, just an inherent value based on scarcity and utility. Those values wouldn't change; they'd just be fundamental constants, more or less.


World_May_Wobble t1_iv6k0dr wrote

>Why would it need symbols to do that though?

I think bartering has problems besides converting between iPhones and chickens. Even if you know how many chickens an iPhone is worth, what if one ASI doesn't *want* iPhones? Then you can't "just do it directly"; you have to find an intermediary agent who wants your iPhone and has something the chicken-ASI wants.

Then symbols have other benefits. For example, you can't pay in fractions of an iPhone, but symbols are infinitely divisible, and symbols store value longer than chickens, which die and rot.

>there would not be market forces in such a system

Why not? Agents are (I presume) exchanging things based on their supply and demand. That's a market.


ReadSeparate t1_iv6p95l wrote

Are we talking about a world in which multiple ASIs exist at the same time? In that case you could be right, though I have no idea how to model such a world. I have no idea what their systems would look like. Would they compete? Would they cooperate? Would they merge? Would game theory still apply to them in the same way? I have no answers to any of those questions.

I was under the assumption that we were talking about a singular ASI with complete control over everything. I don’t know why the ASI, or whoever is controlling it, would allow any other ASIs to come into existence.


World_May_Wobble t1_iv6zi3l wrote

We have to make a lot of assumptions, and there's very little to anchor those assumptions to. So all we can say is that given a set of assumptions X, you tend toward world Y.

One of my assumptions is that, depending on its capabilities, constraints, and speed of takeoff, an ASI may not be in a position to establish a singleton. Even an uploaded human mind is technically superintelligent, and it's easy to imagine a vast ecosystem of those forming.

Even if you imagine a singleton arising, you have to make some assumptions about its activities and constraints. If it's going to be doing things in places that are physically separated, latency may be an issue for it, especially if it's running at very high speeds. It may want to delegate activities to physically distributed agents. Those may be subroutines, or whole copies of the ASI. In either case, you again have a need for agents to exchange resources.
