Optional_Joystick
Optional_Joystick t1_iv28b7s wrote
Optional_Joystick t1_itlhhfn wrote
Reply to comment by mj-gaia in how old are you by TheHamsterSandwich
Never doubt the power of anime!
Optional_Joystick t1_irt67b9 wrote
Reply to comment by frenetickticktick in Why does everyone assume that AI will be conscious? by Rumianti6
I'd prefer the term "volunteer."
Optional_Joystick t1_irsuoiu wrote
Reply to comment by Rumianti6 in Why does everyone assume that AI will be conscious? by Rumianti6
Yes, that's exactly what I mean. Our definitions exclude the possibility. It is very logical. Thanks for playing along.
Optional_Joystick t1_irsj8cz wrote
Reply to comment by Rumianti6 in Why does everyone assume that AI will be conscious? by Rumianti6
It becomes philosophical whenever we investigate this to any depth. Given that your definition of consciousness is "being able to have an experience," I'd point out that we already have systems which record their interactions with the world and integrate them into their world model in order to perform better on the next interaction. Yet we don't consider these systems conscious.
Of course we're not saying AI aren't conscious in order to get free slave labor. That would imply we actually believe they are slaves and are looking to justify it. Instead we revise our definitions so that computers are excluded, and will continue to do so, because they are tools, not slaves. A priori.
Logic won't get us there when our definitions exclude the possibility. Sufficiently hot ice can burn wood, despite it being called ice.
Optional_Joystick t1_irs0kcs wrote
I don't know what your definition of consciousness is, but if it's something like "awareness of self and its place in reference to the world at large," then we'll have to have an AI that's conscious to get singularity.
To get a self-improving AI, the system will necessarily need to understand itself well enough to make the next iteration of itself consistent with the intentions of the current one. Its motivating beliefs, hidden goals, and likely environmental interactions are all useful data points. The actions it performs have to be weighed against what humans would consider desirable, unless we really believe in a moral absolute where we can just define an external reward function and never need to update it (and that helping humans is in fact true moral goodness rather than a bias that comes from the fact that we're human).
When I hear arguments against computers being conscious that don't rely on some magic property only biology can achieve, I start looking at myself and noticing that I don't really do much differently from the latest and greatest system that isn't considered conscious. I suspect there will come a time when I can't find any difference whatsoever between myself and something that's not conscious.
We'll do what humans do and just define things so that it's okay for us to exploit it, until we can't.
Optional_Joystick t1_ireosej wrote
Reply to comment by biologischeavocado in META QUEST PRO mixed reality passthrough by Shelfrock77
I keep thinking of the same concept, except with GAI instead of monkeys.
But I also know that if I wasn't continually assigned busywork I might look for things to improve at my company, so maybe it's fine...
Optional_Joystick t1_irakz6b wrote
Reply to comment by Kaarssteun in META QUEST PRO mixed reality passthrough by Shelfrock77
To be fair, I do really hate how the office follows me wherever I go.
Optional_Joystick t1_ir72t5g wrote
Reply to comment by yldedly in [R] Self-Programming Artificial Intelligence Using Code-Generating Language Models by Ash3nBlue
Really appreciate this. I was excited enough just learning that knowledge distillation was a thing; it felt like we finally had a method for extracting the useful single rule from a larger model.
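For anyone following along, this is roughly what that mechanism looks like as I understand it: a minimal PyTorch sketch, where `teacher`, `student`, and `batch` are hypothetical placeholders rather than anything from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between the softened teacher and student distributions."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(log_student, soft_targets, reduction="batchmean") * temperature ** 2

# Hypothetical usage inside a training loop: the small student mimics the
# large teacher's soft predictions on each batch.
# with torch.no_grad():
#     teacher_logits = teacher(batch)
# loss = distillation_loss(student(batch), teacher_logits)
# loss.backward()
```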
On the interpolation/extrapolation piece: for certain functions like x^2, wouldn't running the output of the function back through the function again produce a result that "extrapolates" outside the existing data set? This is roughly my position on why I feel that feeding an LLM data generated by an LLM can result in something new.
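Concretely, a toy version of what I mean, with made-up numbers and a quadratic fit standing in for the learned model:

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = rng.uniform(0, 2, size=200)        # training inputs never leave [0, 2]
y_train = x_train ** 2                       # the "true" function is x^2

# A quadratic fit stands in for the learned model.
model = np.poly1d(np.polyfit(x_train, y_train, deg=2))

x = 1.9
once = model(x)          # ~3.61, close to values seen during training
twice = model(model(x))  # ~13.0, well beyond anything in the training outputs
print(once, twice)
```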
It's still not clear to me how we can verify a model's performance if we don't have data to test it on. I'll have to read more about DreamCoder. As much as I wish I could work in the field, it looks like I've still got a lot to learn.
Optional_Joystick t1_ir5oz6d wrote
Reply to comment by yldedly in [R] Self-Programming Artificial Intelligence Using Code-Generating Language Models by Ash3nBlue
I'm not sure what \) means, but totally agree data is also a bottleneck. Imagine if the system could also seek out data on its own that isn't totally random noise, and yet isn't fully understood by the model.
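The closest existing idea I know of is uncertainty sampling; a rough sketch, assuming a scikit-learn-style classifier and a hypothetical unlabeled `pool` array:

```python
import numpy as np

def pick_informative(model, pool, k=10):
    """Return the k pool items the model is least certain about."""
    probs = model.predict_proba(pool)                         # shape (n, n_classes)
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)  # predictive entropy
    # A real system would also screen out pure noise (e.g. with an
    # out-of-distribution check) before asking for labels on these.
    return pool[np.argsort(entropy)[::-1][:k]]
```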
Optional_Joystick t1_ir5ctvn wrote
Reply to comment by Silly_Objective_5186 in [R] Self-Programming Artificial Intelligence Using Code-Generating Language Models by Ash3nBlue
Transformer models, an encoder/decoder architecture built around attention, completely changed the game. The 2017 paper "Attention Is All You Need" introduced this type of model.
Most of the cool stuff we've seen since then is based on it. You can "transform" text into other text, as GPT-3 does, or "transform" text into images, as DALL-E does. When we make a bigger model, we get better results, and there doesn't seem to be a limit to this yet. So it's possible we already have the right model for singularity. Having an LLM generate code for a future LLM seems like a valid approach to making that possibility real.
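For anyone curious what the core operation actually is, here's a minimal NumPy sketch of the scaled dot-product attention from that paper (single head, no masking or learned projections):

```python
import numpy as np

def attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V: each query mixes the values,
    weighted by how strongly it matches each key."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # (n_q, n_k) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    return weights @ V                                 # weighted sum of the values

# Example: 3 tokens with 4-dimensional embeddings, attending to themselves.
x = np.random.randn(3, 4)
print(attention(x, x, x).shape)  # (3, 4)
```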
Optional_Joystick t1_ir13db7 wrote
Can I take it now?
Optional_Joystick t1_iv2rf3g wrote
Reply to Scientist that created first gene-edited babies seeks funding for DNA synthesiser by mutherhrg
Where can I invest?