
Optional_Joystick t1_irsj8cz wrote

It becomes philosophical whenever we investigate this to any depth. Given that your definition of consciousness is "being able to have an experience," I'd like to point out that we already have systems which record their interactions with the world and integrate them into their model of it, in order to perform better on the next interaction. Yet we don't consider these systems conscious.
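
To make that concrete, here's a toy sketch of the kind of system I mean (all names and numbers are mine, purely hypothetical): it logs each interaction, folds it into a simple model of the world, and uses that model to act better next time.

```python
import random

class WorldModelAgent:
    """Toy agent: records interactions and integrates them into a world model."""

    def __init__(self):
        self.history = []   # raw record of past interactions
        self.model = {}     # learned estimate: action -> expected outcome

    def act(self, options):
        # Prefer the option the current model predicts is best; explore sometimes.
        known = {o: self.model[o] for o in options if o in self.model}
        if known and random.random() > 0.1:
            return max(known, key=known.get)
        return random.choice(options)

    def observe(self, action, outcome):
        # Record the interaction and integrate it into the model of the world.
        self.history.append((action, outcome))
        old = self.model.get(action, 0.0)
        self.model[action] = 0.8 * old + 0.2 * outcome

agent = WorldModelAgent()
agent.observe("wave", 1.0)
print(agent.act(["wave", "ignore"]))
```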

Of course we're not saying AI aren't conscious in order to get free slave labor. That would imply we actually believe they are slaves and are looking to justify it. Instead we revise our definitions so that computers are excluded, and will continue to do so, because they are tools, not slaves. A priori.

Logic won't get us there when our definitions exclude the possibility. Sufficiently hot ice can burn wood, despite it being called ice.

11

Optional_Joystick t1_irs0kcs wrote

I don't know what your definition of consciousness is, but if it's something like "awareness of self and its place in reference to the world at large," then we'll need a conscious AI to reach the singularity.

To get a self-improving AI, the system will need to understand itself in order to make the next iteration of itself in line with the intentions of the current one. Its motivating beliefs, hidden goals, and likely environmental interactions are all useful data points. The actions it performs have to be weighed against what humans would consider desirable, unless we really believe in a moral absolute where we can define an external reward function once and never need to update it (and that helping humans is in fact true moral goodness rather than a bias that comes from the fact that we're human).
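
As a rough sketch of the distinction I'm drawing (toy code, every name here is hypothetical): the "moral absolute" case is a reward function written once and never revised, while the alternative is a reward estimate we keep re-fitting to human judgments.

```python
def fixed_reward(action):
    # "Moral absolute" case: defined once, never updated.
    return 1.0 if action == "help_humans" else 0.0

class UpdatableReward:
    """Alternative: a reward estimate that keeps absorbing human feedback."""

    def __init__(self):
        self.scores = {}

    def update(self, action, human_rating):
        # Blend new human judgments into the current estimate.
        old = self.scores.get(action, 0.0)
        self.scores[action] = 0.9 * old + 0.1 * human_rating

    def reward(self, action):
        return self.scores.get(action, 0.0)
```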

When I hear the arguments against computers being conscious that don't rely on some magic property only biology can achieve, I start looking at myself and noticing that I don't really do much that's different from the latest and greatest system that isn't considered conscious. I suspect there will come a time when I can't find any difference whatsoever between myself and something that isn't considered conscious.

We'll do what humans do and just define things so that it's okay for us to exploit it, until we can't.

21

Optional_Joystick t1_ir72t5g wrote

Really appreciate this. I was excited enough just learning that knowledge distillation was a thing. I felt we had a method for extracting the useful single rule from the larger model.
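
For anyone who hasn't run into it, here's roughly what I mean by distillation, as a hedged sketch assuming PyTorch (the models themselves are placeholders): a small student is trained to match the teacher's softened output distribution, along the lines of the Hinton et al. recipe.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Soften both distributions, then penalize the student for diverging
    # from the teacher; the T^2 factor keeps gradients comparable across temperatures.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * temperature ** 2

# Dummy usage with random logits standing in for real model outputs.
student_logits = torch.randn(4, 10)
teacher_logits = torch.randn(4, 10)
print(distillation_loss(student_logits, teacher_logits))
```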

On the interpolation/extrapolation piece: for certain functions like x^2, wouldn't running the output of the function back through the function again give you a result that "extrapolates" outside the existing data set? This is roughly my position on why I feel that feeding an LLM data generated by an LLM can result in something new.
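
A toy illustration of what I mean (the real case would be a learned approximation of the function, but the exact function makes the point): suppose the model has only seen inputs in [0, 10], so the largest output in its data is 100. Composing the function with itself lands far outside that set.

```python
def f(x):
    # Stand-in for the learned function.
    return x * x

train_inputs = range(0, 11)
max_seen = max(f(x) for x in train_inputs)   # 100

y = f(9)     # 81, still inside the familiar range
z = f(y)     # 6561, far outside anything in the original data
print(max_seen, y, z)
```

Whether the learned model stays accurate that far outside its training data is, of course, the real question.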

It's still not clear to me how we can verify a model's performance if we don't have data to test it on. I'll have to read more about DreamCoder. As much as I wish I could work in the field, it looks like I've still got a lot to learn.

2

Optional_Joystick t1_ir5ctvn wrote

Transformer models, which combine an encoder and a decoder built around attention, completely changed the game. The 2017 paper "Attention Is All You Need" introduced this type of model.

Most of the cool stuff we see after this point is based on this model. You can "transform" text into other text, like GPT-3, or "transform" text into images, like DALL-E. When we make a bigger model, we get better results, and there doesn't seem to be a limit to this yet. So it's possible we already have the right architecture for the singularity. Having an LLM generate code for a future LLM seems like a valid way to make that possibility real.
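
For reference, the core operation from that paper is scaled dot-product attention. Here's a minimal NumPy sketch of it, my own toy version rather than any library's implementation; roughly speaking, the rest of the architecture is stacking this with feed-forward layers and scaling up.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Q, K, V: (seq_len, d_k) arrays of queries, keys, and values.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                               # weighted mix of values

# Example: 4 tokens with 8-dimensional embeddings, attending to themselves.
x = np.random.randn(4, 8)
out = scaled_dot_product_attention(x, x, x)
print(out.shape)   # (4, 8)
```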

5