agorathird t1_jegwf73 wrote
Reply to comment by Nanaki_TV in I have a potentially controversial statement: we already have an idea of what a misaligned ASI would look like. We’re living in it. by throwaway12131214121
It's not naive; you just haven't thought through the implications of what AGI means. You are also ignorant of what is doable with current technology. Artificial general intelligence is equal to us but also inherently superior due to its computational capacities. There is no need for us after that.
You're literally not describing any useful idea of AGI; in your responses you're only describing the most surface-level uses of text-modality-only LLMs.
The r/futurology work-week stuff you talk about is possible right now with the current public models of ChatGPT. It's been possible for a while. But it's not implemented due to greed and bureaucrats being steadfast in their ways. Luckily, not implementing that change hasn't been critically dire for vast swaths of people thus far.
Nanaki_TV t1_jegznf7 wrote
>you have not thought through the implications of what AGI means.
Almost agreed, but that's because I cannot know what it means. I keep trying my darndest to picture it, but I cannot. I'm not smart enough to know what thousands of AGIs coming together to solve complex problems will come up with, nor is anyone here. It's hubris to assume anyone can.
>There is no need for us after that.
Again, assumption after assumption. More and new horizons will be created. What? I don't know. But electricity gave the ability for so much to exist on top of it once it was massively adopted. Once AGIs are massively adopted and in our homes (not requiring a supercomputer to train, I mean), well, I can only hallucinate what that future will look like. If we are "not needed," then so be it; there's no use arguing. May we die quickly. But I doubt it very much.
> But it's not implemented due to greed and bureaucrats being steadfast in their ways.
It is greed that will cause these models to be implemented and jobs to be automated. I'm working on the risk assessment for doing exactly that right now at work. I do understand; I think I'm just not explaining it well because I'm sleep-deprived thanks to having newborn twins. Lol.
agorathird t1_jeh2s75 wrote
>Again, assumption after assumption. More and new horizons will be created. What? I don't know. But electricity gave the ability for so much to exist on top of it once it was massively adopted. Once AGIs are massively adopted and in our homes (not requiring a supercomputer to train, I mean), well, I can only hallucinate what that future will look like. If we are "not needed," then so be it; there's no use arguing. May we die quickly. But I doubt it very much.
Not assumptions; that's what AGI means, lol, at least as far as current jobs are concerned. Unless there's some issue it has with space travel? You can make a few edge cases assuming slow takeoff. And I can grant you the point about new horizons, sure. Maybe we merge, whatever.
This doesn't mean we die, or that it's unaligned, or whatever. That's real speculation. Good luck with your twins.
Nanaki_TV t1_jeh345p wrote
Thanks. And thanks for the discussion.