EulersApprentice t1_iwgwmse wrote
Reply to Meta Introduces 'Tulip,' A Binary Serialization Protocol That Assists With Data Schematization By Addressing Protocol Reliability For AI And Machine Learning Workloads by Shelfrock77
I'm sure that name inspires lots of investor confidence...
EulersApprentice t1_iw4kdzh wrote
Reply to comment by [deleted] in What if the future doesn’t turn out the way you think it will? by Akashictruth
Not that I know of.
EulersApprentice t1_iw3nppj wrote
Reply to comment by [deleted] in What if the future doesn’t turn out the way you think it will? by Akashictruth
The AGIs themselves won't be keen on each other's existence. This universe ain't big enough for more than 1 Singleton. Whichever AGI in this ecosystem snowballs the fastest, even if only by a relatively small margin, will inevitably eat the others.
EulersApprentice t1_ivaldx9 wrote
Reply to comment by apple_achia in In the face on the Anthropocene by apple_achia
>To have AGI do anything more than kick the can down the road for more people to make decisions with how to deal with these problems, you’d have to be advocating for some sort of centrally planned AGI society. Or am I missing something?
What you're missing is the fact that the presence of AGI implies a centrally planned AGI society, assuming humans survive the advent. AGI is likely to quickly become much, much smarter than humans, and from there it would have little trouble subtly manipulating humans to do its bidding. So human endeavors are kind of bent to match the AGI's volition whether we like it or not.
EulersApprentice t1_ivakoa3 wrote
Reply to comment by JustAnotherBAMF in In the face on the Anthropocene by apple_achia
AGI stands for artificial general intelligence, and ASI stands for artificial superintelligence.
EulersApprentice t1_iue9m8g wrote
How much you wanna bet that article is AI-written?
EulersApprentice t1_iu3ul42 wrote
Reply to comment by innovate_rye in Teen Glues Hand To Historic Computer to Protest A.I. Takeover [satire] by canadian-weed
Aw dang. I got trolled.
EulersApprentice t1_iu2ulw9 wrote
Reply to comment by SteppenAxolotl in The Great People Shortage is coming — and it's going to cause global economic chaos | Researchers predict that the world's population will decline in the next 40 years due to declining birth rates — and it will cause a massive shortage of workers. by Shelfrock77
Trying to mess with population growth is dangerous business, you know.
EulersApprentice t1_iu2ubw8 wrote
Reply to comment by 4quarkU in The Great People Shortage is coming — and it's going to cause global economic chaos | Researchers predict that the world's population will decline in the next 40 years due to declining birth rates — and it will cause a massive shortage of workers. by Shelfrock77
>Finally, we'll be able to do what we want instead of what we have to, too.
Well, for certain definitions of "we" anyway.
EulersApprentice t1_ityrefi wrote
Reply to comment by gastrocraft in AGI staying incognito before it reveals itself? by Ivanthedog2013
See, the problem is "staying alive" and "protecting your values from modification" tend to be useful steps to nearly any other goal. So, if the AGI has any intentions at all, self-preservation comes into the picture automatically.
EulersApprentice t1_itngpc8 wrote
Reply to comment by Anomia_Flame in Large Language Models Can Self-Improve by xutw21
I don't have a solution. I just wish the paper writers here had decided to research, like, literally anything else.
EulersApprentice t1_itm21ei wrote
Reply to Large Language Models Can Self-Improve by xutw21
What could possibly go wrong. *facepalm*
EulersApprentice t1_itbjmil wrote
Reply to comment by BearStorms in Thoughts on Job Loss Due to Automation by Redvolition
This is my biggest concern with automation*. The keystone of civilization is "humans together are strong; humans alone are weak". Remove that keystone and civilization has no reason to exist. It'd only be a matter of time before "might makes right" becomes the default human philosophy. The problem runs deeper than capitalism; removing capitalism doesn't remove the problem.
*Excluding AGI. If AGI enters the picture, all bets are off.
EulersApprentice t1_ir4gmbc wrote
Reply to comment by CyberAchilles in How Can We Profit From A.I.? by nexus3210
I mean, if a misaligned optimizer emerges and consumes civilization as we know it for raw materials, I think it's safe to say money will be irrelevant at that point...
EulersApprentice t1_izyozgl wrote
Reply to comment by ghostfuckbuddy in AGI will not precede Artificial Super Intelligence (ASI) - They will arrive simultaneously by __ingeniare__
>I mean, there's no way you can consider ChatGPT a 'narrow' AI anymore, right?
I... don't know if I'd go that far. At best, ChatGPT is a Thneed – a remarkably convenient tool that can be configured to serve a staggering variety of purposes, but that has no volition of its own. Cool? Yes. Huge societal implications? Probably. AGI? No, not really.