severoon t1_jbouxw9 wrote

Also consider that "life" is likely to be an arbitrary feature in this discussion.

If you have a molecule that happens to be an enzyme which builds itself, you have yourself a self-replicating thing. Is it "alive"? Definitely not.

Self-replication is one feature of "life," but it's one most people think of only in the context of life. In fact, all the features we typically associate with life also exist in much simpler, not-alive things.

If we go looking for the simplest thing we'd consider "alive," however we define that, it's likely to end up being much simpler than anything we'd feel comfortable calling "life."


severoon t1_iyc0yst wrote

It's not a bad practice because it's impossible to use it in a way that avoids problems; it's considered bad practice because it's possible to use it in a way that causes problems.

This is often the case in programming. You'll find that people will argue that such-and-such is okay because "if you just use it correctly, there's no problem." But correct usage isn't always straightforward, and the less straightforward it is, the more problems there are.

Ideally, a perfect programming language would provide only constructs that make it impossible to represent unintended things. In fact, this perfect programming language exists in theory! It would be a language that requires formal proofs of all possible functionality. This sounds absurd, and for real systems (not toys) it is impractical, but it's less out of reach than it first seems, because constructs like method guards and early escape returns shrink the space of cases to cover from virtually infinite to merely unmanageable.

Example time. Say that I want to write a method that can multiply integers from 1 to 10. A first year student in coding 101 will write something like:

/** Return product of a and b. */
int mult(int a, int b) { return a*b; }

…and feel proud that they not only addressed the requirement, but produced something much more general purpose that will also work for any two integers.

How silly and naive. What they failed to account for is that this doesn't actually work for any two inputs, and if you were to exhaustively test every single combination of 32-bit inputs, you'd find a large number of them that result in overflow conditions (like a = b = max int). But that's okay, it still does the thing it's supposed to for the expected inputs, so what's the problem? The problem is that since you specified a looser contract for this method than was required, you have not addressed just the requirement but the looser contract you chose to support (because of Hyrum's law).
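To make the overflow concrete, here's a sketch (in Java, with a hypothetical `OverflowDemo` class) showing the "general purpose" version silently wrapping around, and how `Math.multiplyExact` surfaces the problem instead of hiding it:

```java
// Overflow demo: the "general purpose" mult silently wraps around.
public class OverflowDemo {
    static int mult(int a, int b) { return a * b; }

    public static void main(String[] args) {
        System.out.println(mult(3, 7));                  // 21, as expected
        System.out.println(mult(Integer.MAX_VALUE, 2));  // -2: wrapped around
        // Math.multiplyExact reports overflow instead of wrapping:
        try {
            Math.multiplyExact(Integer.MAX_VALUE, 2);
        } catch (ArithmeticException e) {
            System.out.println("overflow detected");
        }
    }
}
```

The caller sees no error from the plain `*` version; it just quietly returns garbage for some slice of the input space.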

If you actually restricted the contract to just the requirement:

/** Return product of a and b if both are between 1 and 10 (inclusive). */
int mult1to10(int a, int b) {
  if (a >= 1 && a <= 10 && b >= 1 && b <= 10) { return a*b; }
  // do some crazy ish
}

In design-by-contract rules, if you don't specify in the contract what a method does, then it is unspecified, meaning that you can literally do anything and still be in compliance. You could go into an infinite loop. Call System.exit(9). Whatever.

Because everything is allowed when a or b are outside the specified bounds, it's super easy to prove that this method formally complies with its contract. The space of the inputs is so small you can even practically write a set of exhaustive tests, and you could also formally prove using lambda calculus or whatever based on the code as written that it does exactly what it says it does.
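To make "exhaustive" concrete, here's a sketch (Java, hypothetical `ExhaustiveTest` class; the out-of-range branch throws just to pick *something*, since anything is contract-compliant there) that covers the entire specified input space, all 100 pairs, with no sampling and no gaps:

```java
// Exhaustively verify mult1to10 over its entire specified input space:
// every (a, b) pair with 1 <= a, b <= 10.
public class ExhaustiveTest {
    static int mult1to10(int a, int b) {
        if (a >= 1 && a <= 10 && b >= 1 && b <= 10) { return a * b; }
        throw new IllegalStateException("unspecified");  // anything goes out here
    }

    public static void main(String[] args) {
        for (int a = 1; a <= 10; a++) {
            for (int b = 1; b <= 10; b++) {
                if (mult1to10(a, b) != a * b) {
                    throw new AssertionError("failed at " + a + ", " + b);
                }
            }
        }
        System.out.println("all 100 cases pass");
    }
}
```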

What does this have to do with your question? I'm introducing these notions of formal proof and exhaustive testing not because they're practical, but to connect the idea of "covering the entire space of possible inputs" with the ability to reason about code. In mathematics we have the notion of continuity: a function is continuous if its outputs transition smoothly whenever its inputs transition smoothly. Smooth transitions don't stop outputs from moving around in absolutely wild and unpredictable ways, but for a lot of functions you can make strong statements that the outputs definitely don't do that. You could have a function you've proven to be monotonically increasing, where the increase in the output is proportional to the increase in the input (y = 5x, for instance), so you'll never see weird or erratic behavior in the output for a steady increase in the input.

See where I'm going with this? If you write methods that behave like simple mathematical functions that are easy to reason about because they are constrained in particular ways, then we don't need to exhaustively test every possible combination of inputs. We can say, for example, that if you put in a 1 and get a 5, and you put in a 100 and get a 500, that's good enough to assume if we put in something between 1 and 100 we'll get something between 5 and 500. But you can only say this if your function behaves like y = 5x and isn't trying to tell you angles of Foucault's pendulum or some chaotic process.
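That style of reasoning can be sketched in a few lines (Java; the class name and `f` are hypothetical stand-ins for a well-behaved function):

```java
public class MonotonicBound {
    // Hypothetical well-behaved function: monotonically increasing, y = 5x.
    static long f(long x) { return 5 * x; }

    public static void main(String[] args) {
        long lo = f(1);    // 5
        long hi = f(100);  // 500
        // Monotonicity already guarantees lo <= f(x) <= hi for all x in
        // [1, 100]; the sweep below just confirms what the reasoning gives
        // us for free.
        for (long x = 1; x <= 100; x++) {
            if (f(x) < lo || f(x) > hi) {
                throw new AssertionError("not monotone at x = " + x);
            }
        }
        System.out.println(lo + " <= f(x) <= " + hi + " on [1, 100]");
    }
}
```

Two endpoint checks stand in for a hundred cases, but only because the function's shape is constrained; a chaotic function earns no such shortcut.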

This was the problem with the simple code we wrote in the first example above. Given a simple requirement, we decided to tackle the entire class of inputs and wound up with an output that has significant areas of discontinuity (int overflow). We could solve that problem by returning a long, and now we can say that, hey, it's now a simple method we can reason about. Problem solved! Right? Right???

Not really. Clients ruin everything.

The problem is still not solved if what clients really need is an int. Remember, we're writing this method according to some contract not just to occupy our time; presumably we have callers we are trying to support. If we're going to change the contract by returning a long instead of an int, we had better be darn sure that change does not diminish the utility of the contract!

If it does, and clients actually need an int, then what they're going to do is cast that result to an int. If they screw this up by not checking for overflow, then we have not really solved a problem, we've just relocated it. Even if they don't screw it up and do all the correct bounds checking, we're putting extra work on each and every client that we could centralize in one place … and that's the whole point of doing this work in the first place, right? I mean, couldn't callers just do the multiplication themselves too? Why ever provide any API that does anything?
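Here's what that relocated problem looks like on the client side (Java; `ClientCast` and `multAsLong` are hypothetical names for the widened API). `Math.toIntExact` is the careful version of the cast; the bare `(int)` cast is the careless one:

```java
// What every caller ends up writing when the API returns long but they need int.
public class ClientCast {
    static long multAsLong(int a, int b) { return (long) a * b; }  // widened API

    public static void main(String[] args) {
        long result = multAsLong(100_000, 100_000);  // 10_000_000_000: too big for int
        // The careless cast silently truncates:
        int wrong = (int) result;
        // The careful client must bounds-check -- Math.toIntExact does it for them:
        try {
            int right = Math.toIntExact(result);  // throws if it doesn't fit
            System.out.println(right);
        } catch (ArithmeticException e) {
            System.out.println("result doesn't fit in an int: " + result);
        }
    }
}
```

Every single caller has to carry that try/catch (or its equivalent), which is exactly the duplication a good API is supposed to eliminate.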

So there's no escape. As a provider of an API, your job is to provide a contract that meets the need of the caller, and do it as correctly as possible. That's a good API.

But as we see from the above two examples, when we did a seemingly innocuous thing, we introduced unexpected behaviors into our implementation. It's really, really easy to do because it's really, really hard to write code that's really, really easy to reason about.

And, finally, at long last, we go way back up to the top of this post and reread what I wrote there: Good practices are good not because they make it possible to do the right thing, but because they make it impossible to do the wrong thing. Ideally. In practice, the best you can do is make it harder to do bad things, and easier to do right things.

And that's why goto is bad. There is nothing you can do with a goto that you can't also do with other programming constructs, minus the big ol' footgun goto hands you, because those alternatives all come with additional restrictions on how control flow is handled that make them easier to reason about.
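Java itself is an example: it reserves the `goto` keyword but doesn't implement it, and the closest thing it offers is the labeled break. A sketch (hypothetical `NoGoto` class) of the classic use case, escaping nested loops:

```java
public class NoGoto {
    // Find the first negative cell, bailing out of both loops at once.
    static int firstNegative(int[][] grid) {
        int found = 0;
        search:
        for (int[] row : grid) {
            for (int cell : row) {
                if (cell < 0) {
                    found = cell;
                    break search;  // jumps out of both loops, forward only
                }
            }
        }
        return found;
    }

    public static void main(String[] args) {
        int[][] grid = { {1, 2, 3}, {4, -1, 6}, {7, 8, 9} };
        System.out.println("first negative: " + firstNegative(grid));
    }
}
```

A labeled break can only jump forward, out of an enclosing block it names, never backward and never into the middle of something, which is exactly the kind of restriction that keeps control flow easy to reason about.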


severoon t1_iybxfuk wrote

Convention.

We could fully parenthesize everything, but it gets annoying to write polynomials if you have to put parens around everything, and it turns out that polynomials are super useful in math, so we set up a convention to make them easier to write purely for convenience.

If it happened to be that we most often wanted to do addition first instead of multiplication, then we would have set up the convention so that addition has a higher precedence so we wouldn't have to write a lot of parens for that thing we do all the time.

That's literally it. Mathematical notation exists purely as a concise way of writing what we do most often, to save us writing. It has nothing to do with the underlying math.

This is why it's super annoying when people insist that expressions like 6/2(1+2) are ambiguous. The entire reason we set up these rules is to make sure there are no ambiguities. So unless you're willing to claim that the super smart people who set up these rules did a crap job of it and you've found a flaw, the situation is simply that you don't know the rules they set up.
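Programming languages codify exactly these rules. Java, for one, forces the multiplication to be explicit, and with division and multiplication at equal precedence associating left to right, the expression evaluates one way only (hypothetical `Precedence` class, just to show the computed result):

```java
public class Precedence {
    public static void main(String[] args) {
        // 6/2(1+2), written with the explicit * that Java requires.
        // Division and multiplication share precedence and associate
        // left to right: (6 / 2) * (1 + 2) = 3 * 3 = 9.
        System.out.println(6 / 2 * (1 + 2));  // prints 9
    }
}
```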
