sdmat

sdmat t1_iyf7dck wrote

> Hypercomputation confirmed? (Note: we could easily change the last line so further outputs were monotonically increasing and larger than the step number for the current candidate for BB_2(5), while keeping correctness on the first four. Imagine an infinite series of such machines, each more cleverly obfuscatory than the last; they exist.)

No, because there is no plausible computational principle embodied in that system that gives the answer to the general Busy Beaver problem. Notably, it's a Turing machine.
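To make that concrete, here's a minimal Python sketch of the kind of machine described (the first four values are the proven maximum step counts; I'm assuming the well-known 5-state candidate of 47,176,870 steps):

```python
# An ordinary program that agrees with the Busy Beaver step function
# on every input we can actually verify. The first four values are
# proven; 47_176_870 is the long-standing 5-state candidate.
KNOWN = {1: 1, 2: 6, 3: 21, 4: 107}

def fake_busy_beaver(n: int) -> int:
    if n in KNOWN:
        return KNOWN[n]        # correct on everything humanly checkable
    return 47_176_870 + n      # monotonically increasing, exceeds the candidate

# Passing every feasible test is not evidence of hypercomputation:
# there is no principle here that answers the general problem, just a
# lookup table with an arbitrary extrapolation bolted on.
```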

An inductive proof needs to establish that the inductive step is valid - that there is a path from the base case to the result, even if we can't enumerate the discrete steps we would take to get there.
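To illustrate the shape such a proof takes, here is the standard textbook example (nothing to do with Busy Beavers, just the structure):

```latex
% Claim P(n): \sum_{i=1}^{n} i = n(n+1)/2
% Base case: P(1) holds, since 1 = 1(1+1)/2.
% Inductive step: assume P(k); then
\sum_{i=1}^{k+1} i = \frac{k(k+1)}{2} + (k+1) = \frac{(k+1)(k+2)}{2},
% which is P(k+1). The step is valid for every k, so P(n) holds for all n.
```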

By analogy, a proof of hypercomputation would need to establish that the mechanism of hypercomputation works for verifiable examples and that this same mechanism extends to examples we can't directly verify.

Of course, this makes unicorn taxonomy look down-to-earth and likely.

> edit: And rewinding a bit, the original claim was that there's an effectively realizable device, that is, one which can be implemented, and whose implementation can be accurately described with finite time, space, and description length, i.e. by a TM, the usual sense of 'effective'. If this were the case, the TM could just simulate it, proving it was not a hypercomputer. This is the sense in which the claim is flat-out wrong, aside from the difficulty of trying to evaluate it with 'evidence'.

That's a great argument if the universe is Turing-equivalent. That may be the case, but how to prove it?

If the universe isn't Turing-equivalent then it's conceivable that we might be able to set up a hypercomputation supported by some currently unknown physical quirk. Doing so would not necessarily involve infinite dimensions - you are deriving those from the behavior of Turing machines.

An example of a non-Turing universe is one where real numbers are physical, i.e. one that is fundamentally non-discretizable. I have no idea whether that would be sufficient to allow hypercomputation, but it breaks the TM isomorphism.

0

sdmat t1_iycv8zl wrote

I agree that a hypercomputer is almost certainly impossible, and that this would be difficult to prove.

But your standard of proof is absurd - do we only accept a computer as correct after demonstrating correctness for every operation on every possible state?

No, we look inside the box. We verify the principles of operation and their translation to a physical embodiment, and test with limited examples. The first computer was verified without computer assistance.

You might object that this is a false analogy because the computational model is different for a hypercomputer. But if we verify the operation of a plausible embodiment of hypercomputation on a set of inputs for which we happen to know the answer, that does tell us something. If the specific calculation we can validate is the same in kind as the calculations we can't validate - a mere difference in input values to which the same mechanism of calculation applies - then it is a form of proof. In the same sense, we prove the correctness of 64-bit arithmetic units despite not being able to test every input.
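That last point, as a minimal Python sketch (an illustrative ripple-carry construction, not any particular hardware design): we verify the single-bit building block and the way blocks compose, then spot-check sampled inputs rather than enumerating all 2^128 input pairs:

```python
import random

def full_adder(a, b, carry):
    # The verified building block: one-bit addition with carry.
    s = a ^ b ^ carry
    carry_out = (a & b) | (carry & (a ^ b))
    return s, carry_out

def add64(x, y):
    # 64-bit addition built by composing the verified block 64 times.
    result, carry = 0, 0
    for i in range(64):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result  # final carry discarded: arithmetic mod 2**64

# Exhaustive testing (2**128 input pairs) is impossible, so we trust
# the verified structure and confirm it on sampled inputs.
for _ in range(1000):
    x, y = random.getrandbits(64), random.getrandbits(64)
    assert add64(x, y) == (x + y) % 2**64
```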

What those principles of operation and plausible embodiment might look like, I have no idea. As I said, it's probably impossible. But you would need to actually prove that impossibility to completely dismiss the notion.

0

sdmat t1_iyc8ab6 wrote

> The problem of producing task specifications does not get worse with AI intelligence (because as we've already seen, the difficulty of producing a specification is independent), which is fundamentally inconsistent with the LessWrongist viewpoint.

I think the LW viewpoint is that for the correctness of a task specification to be genuinely independent of the AI, it is necessary to include preferences that cover the effects of all possible ways to execute the task.

The claim is that for our present AIs we don't need to be anywhere near this specific, but only because they can't do very much - we can accurately predict the general range of possible actions and the kinds of side effects they might cause in executing the task, so we only need to worry about whether we get useful results.

Your view is that this is refuted by the existence of approaches that generate a task specification and check execution against that specification. I don't see how that follows - the LW concern is precisely that this kind of ad hoc understanding of what we actually mean by the original request is only safe for today's less capable systems.

1

sdmat t1_iyc3x8t wrote

> If I tell you there is an effectively realizable device with hypercomputational abilities, you should tell me straight up that I am wrong.

We should tell you that extraordinary claims demand extraordinary evidence.

> Also, I cannot emphasize enough that Yudkowsky was a 20-year-old poster with no formal training, high school, college, or professional coding experience, when he drafted the above scheme to supersede all governments on Earth by force.

So? Plenty of people who have made contributions to philosophy and science have been autodidacts with very weird ideas.

The wonderful thing about open discussion and the marketplace of ideas is that we are under no obligation to adopt crazy notions.

−2