
he_who_floats_amogus t1_jea96bu wrote

Maybe. The problem is that money has time value, so in practice paying down the mortgage faster only comes out ahead if paying down the mortgage happens to be the best available investment for that money. It might be, but it's also possible you'll have the opportunity for better investments.
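
To make the trade-off concrete, here's a toy comparison (my own illustrative numbers, not anything from the thread): an extra $10,000 either put toward a 5% mortgage or invested at a hypothetical 7% return over ten years.

```python
# Toy comparison of prepaying a mortgage vs. investing the same cash.
# All rates and amounts are illustrative assumptions, not advice.
extra = 10_000
mortgage_rate = 0.05    # interest avoided by prepaying
investment_rate = 0.07  # assumed alternative return
years = 10

saved_by_prepaying = extra * (1 + mortgage_rate) ** years - extra
earned_by_investing = extra * (1 + investment_rate) ** years - extra

print(f"Interest avoided: ${saved_by_prepaying:,.0f}")   # ~$6,289
print(f"Investment gain:  ${earned_by_investing:,.0f}")  # ~$9,672
```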

1

he_who_floats_amogus t1_jdu8479 wrote

You could do that, but if it's just hallucinating the confidence intervals then it really isn't very neat. Language models are strongly rewarded for hallucinated responses on things like confidence intervals in particular, because hallucinated figures like these still produce very coherent-sounding responses.

72

he_who_floats_amogus t1_jdtwp8t wrote

>open source dataset ... feasible to train with a commercial computer ... decent results

Choose two. Therefore, you can approach this in one of three ways:

  1. Use closed-source data (e.g., where your starting point is a pre-trained model and you're doing additional fine-tuning; see the sketch after this list)
  2. Use millions of dollars of compute resources (a "very good GPU - nvidia etc" does not meet this standard)
  3. Accept poor results
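
For concreteness, here's a minimal sketch of option 1 (my own illustration, not from the thread), assuming the Hugging Face transformers, peft, and datasets libraries; the model, dataset, and hyperparameters are placeholder choices rather than recommendations.

```python
# Minimal sketch: fine-tune a small pre-trained model on a single consumer
# GPU by training only low-rank (LoRA) adapters instead of the full network.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

base = "gpt2"  # placeholder: a small pre-trained model that fits on commodity hardware
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Wrap the base model with LoRA adapters so only a few million parameters train.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

data = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")
data = data.map(lambda x: tokenizer(x["text"], truncation=True, max_length=256),
                batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out",
                           per_device_train_batch_size=2,
                           num_train_epochs=1),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```
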
2

he_who_floats_amogus t1_j91cfcf wrote

Reply to comment by goolulusaurs in [D] Please stop by [deleted]

Not even guessing. When you're guessing, you're making a well-defined conjecture concerning one or more possible outcomes. This assertion isn't well defined, which is why it cannot be measured. It's a much lower-order type of statement than a speculative guess.

3

he_who_floats_amogus t1_j893bfl wrote

Basically the answer is that it’s OpenAI’s tool and it’s their prerogative to implement it as they see fit. You don’t have any bargaining power to demand additional features or the removal of constraints. Even if we take your perspective on safety as axiomatically correct, if the tool can meet OpenAI’s goals even with excessive safety impositions, then it is successfully working as designed. An abundance of caution is only a problem if it hampers OpenAI in fulfilling their own goals.

There are many possibilities as to the “why” here. It’s possible that the system is logistically difficult to control at a fine degree of granularity, and that it’s easier for OpenAI to structure constraints in broad brush strokes to make sure they capture the constraints they want. That’s one high-level possible explanation among many.

5

he_who_floats_amogus t1_j44x3vm wrote

I don't think it's quite as arbitrary as you're making it out to be. I haven't perfectly defined the concept of a task here, but a core concept of ML is that it's focused on the learning itself rather than on producing a solution to some problem statement. The idea of learning implies an element of generalization, but that's different from general applicability or usefulness. The agent working on the task is our abstraction layer; our algorithm should work on the agent rather than producing the solution to the agent's task. Through some defined process, you create a generalized form of knowledge in that agent, without solutions for specific cases being explicitly programmed.

If you train a NN to generate a representative knowledge model that solves a "simple" problem that could have been solved with an explicit solution, you're still doing ML. It's not about how complicated the problem is to tackle, or how generally applicable or useful the result is, but whether what you have explicitly programmed is the direct solution to some problem, or is itself a modeled form of learning that can be applied to some agent that can then go on to solve the problem.
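
As a toy illustration of that distinction (my own example, not from the thread): the same trivial mapping, once programmed directly and once fit from examples.

```python
# Explicit solution vs. a learned model for the same trivial problem.
import numpy as np
from sklearn.linear_model import LinearRegression

def explicit_solution(x):
    # Directly programmed answer: no learning involved, not ML.
    return 2 * x + 1

# ML version: nothing about "2x + 1" is programmed; the model is fit
# to example pairs and generalizes the mapping from data.
X = np.arange(10).reshape(-1, 1)
y = 2 * X.ravel() + 1
model = LinearRegression().fit(X, y)

print(explicit_solution(100))       # 201, by construction
print(model.predict([[100]])[0])    # ~201, recovered from the examples
```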

In the Strandbeest example, the program that is running is not modeling any learning. There is no agent. The output of the program is a direct solution to the problem rather than some embodied form of knowledge an agent might use to solve a generalized form of the problem. It's not ML and it's not a fuzzy question, at least in this case, imho. There could perhaps be cases or situations where there is more fuzziness, but this isn't it.

Optimization methods, including heuristic optimizers such as genetic algorithms, can find applied use in ML, but they are not themselves ML, and using a genetic algorithm to solve a problem explicitly is not ML.

3

he_who_floats_amogus t1_j44kquo wrote

You can use all kinds of algorithms in machine learning. This is a “uses a” relationship rather than an equivalence relationship, in this case. If I’m building a piece of furniture, I am a carpenter. I could use a hammer to help me build the furniture. The hammer is not a carpenter.

I think you can imagine that the machine learning approach in that video may also rely on various data structures including graphs, trees, etc, and perhaps many other things which are also not machine learning.

2

he_who_floats_amogus t1_j44k1rd wrote

I’m going to say no. Machine learning is a field largely dedicated to methodology that improves the performance of an agent at fulfilling some task, whereas a genetic algorithm is a heuristic approach that can be used to find optimal (enough) solutions to some specific problem, which is how it’s used in this case.
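
For illustration, here's a bare-bones genetic algorithm sketch (my own toy example, not the one from the video): it heuristically searches for the input that maximizes a fixed fitness function, and its output is a single solution rather than a reusable learned model.

```python
# Minimal genetic algorithm: selection plus mutation on a toy objective.
import random

def fitness(x):
    return -(x - 3.7) ** 2  # toy objective with its peak at x = 3.7

population = [random.uniform(-10, 10) for _ in range(50)]
for _ in range(100):
    # Keep the fitter half, then refill the population with mutated copies.
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]
    population = survivors + [x + random.gauss(0, 0.1) for x in survivors]

print(max(population, key=fitness))  # converges near 3.7
```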

It’s not a good or bad thing; these are just categorical descriptions of types of things, meant to help us delineate. Any algorithm that produces an output could be said to have “learned” something (the output!), but to say that any machine that could be interpreted as having learned something is doing machine learning is too broad to be linguistically useful, and isn’t what the categorical description denotes.

1

he_who_floats_amogus t1_irhfd1g wrote

Great distance will greatly exacerbate the effects, because the entire universe is expanding. The reason why there's a limit to how far we can see is that there is an event horizon beyond which objects are moving away from us FTL. Relative velocity is proportional to distance.

We've confirmed this observationally by studying distant cosmological events of known types. At vast distances, we can observe events that literally function as cosmological clocks, and we can verify that they adhere to the expected time dilation. At a distance that corresponds to a relative velocity of 50% of the speed of light, you'll see events take about 15% longer.
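
A quick back-of-the-envelope check of that figure (my own arithmetic, assuming it comes from the standard time dilation factor):

```python
# gamma = 1 / sqrt(1 - v^2 / c^2) at v = 0.5c
import math

v_over_c = 0.5
gamma = 1 / math.sqrt(1 - v_over_c ** 2)
print(gamma)  # ~1.155: events at that distance appear to run roughly 15% slower
```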

2