agorathird
agorathird t1_jegwf73 wrote
Reply to comment by Nanaki_TV in I have a potentially controversial statement: we already have an idea of what a misaligned ASI would look like. We’re living in it. by throwaway12131214121
It's not naive; you have not thought through the implications of what AGI means. You are also ignorant of what is doable with current technology. Artificial general intelligence is equal to us but also inherently superior due to its computational capacity. There is no need for us after that.
You literally are not describing any useful idea of AGI, only the most surface-level uses of text-only LLMs in your responses.
The r/futurology work-week stuff you talk about is possible right now with current public models of ChatGPT. It's been possible for a while. But it's not implemented due to greed and bureaucrats being steadfast in their ways. Luckily, not implementing a change hasn't been critically dire for vast swaths of people thus far.
agorathird t1_jegunvj wrote
Reply to comment by Nanaki_TV in I have a potentially controversial statement: we already have an idea of what a misaligned ASI would look like. We’re living in it. by throwaway12131214121
>Describe the world today in 1980. You cannot predict Reddit or Twitter. You cannot make the claims you’re making with any substantial certainty. Stop acting as if you know.
Both in this thread and the other thread you seem not to want to extrapolate based on presently given information. That's like the best thing about being sentient, too. Or at least you don't want me to extrapolate, since you gave me an r/futurology-tier take on working.
You are acting like I'm describing hypothetical technology. It's already here. Look through the subreddit for direct sources. You seem to only be working off of ChatGPT-like text models. Even those can be quite autonomous with plugins. You're like those people who don't know AI is starting to create functional code.
For as much as you love markets, which I also do, you seem not to acknowledge the profit motive and how human-neutral it is.
---
On a sidenote, if I had access to books in the 1980s I might've predicted social media. A lot of singularitarians did. But really this is more like predicting social media in 2001 or 2007, depending on which sites you'd like to count. But I still think the analogy is flawed, as the tech is here.
agorathird t1_jegq0ky wrote
Reply to comment by Nanaki_TV in I have a potentially controversial statement: we already have an idea of what a misaligned ASI would look like. We’re living in it. by throwaway12131214121
That's not a claim but the premise. This is r/singularity. He is echoing the original claim that *you* mentioned and wanted to rebut. You have not presented a cohesive line of logic that supports an alternative.
agorathird t1_jegpfqk wrote
Reply to comment by Nanaki_TV in I have a potentially controversial statement: we already have an idea of what a misaligned ASI would look like. We’re living in it. by throwaway12131214121
You are describing some kind of 1950s atom-punk idea of the future. That future has been cancelled. LLMs perfected, embodied, and multi-modal (general or specified) will cover the theorized 70 then 90 then 99% of human tasks. It only matters how long until companies feel like adopting it.
You will have capital owners and executives with machine employees. We are not in the picture as meaningful contributors. Hiring us will be like riding to work on horseback. No one will be going to work like George Jetson.
10 minutes of meaningful human labor to give WorkerGPT some extra oil sounds like a masquerade for what is really a society supported by UBI.
agorathird t1_jegjkel wrote
Reply to comment by Nanaki_TV in I have a potentially controversial statement: we already have an idea of what a misaligned ASI would look like. We’re living in it. by throwaway12131214121
That proves my point. You're acting like UBI isn't a logically necessary idea in a hypothetical society that is massively unemployed but over-abundant. That's not just 'muh communism'. It's the most ideal default economic mode, one almost every singularitarian across the economic spectrum recognizes.
It's really beyond anything we know right now.
agorathird t1_jeggpm6 wrote
Reply to comment by Nanaki_TV in I have a potentially controversial statement: we already have an idea of what a misaligned ASI would look like. We’re living in it. by throwaway12131214121
>You may need to review your definitions. And I expect downvotes given this sub’s anti-capitalist stance. Shame
*Shits self* "I expect everyone to say that I stink. Shame"
Make a better argument before you claim unfairness prematurely.
agorathird t1_jeg6pr4 wrote
Reply to Google CEO Sundar Pichai promises Bard AI chatbot upgrades soon: ‘We clearly have more capable models’ - The Verge by Wavesignal
*Nods head* uh-huh. Who at Google is getting copium lines from this sub?
agorathird t1_jeg6a7m wrote
Reply to comment by MajesticIngenuity32 in ChatGB: Tony Blair backs push for taxpayer-funded ‘sovereign AI’ to rival ChatGPT by signed7
Day in the loife of an AI Geezer...
agorathird t1_jeduq0e wrote
So friends, what did some of you say about Google having something much better hidden? Bard is just playing it safe even though it's been an embarrassment since release?
edit juicy:
>shortly after leaving Google in January, Devlin joined OpenAI. Insider previously reported that Devlin was one of several AI researchers to leave Google at the beginning of the year for competitors.
>
>Devlin, who was at Google for over five years, was the lead author of a 2018 research paper on training machine learning models for search accuracy that helped initiate the AI boom. His research has since become a part of both Google and OpenAI's language models, Insider and The Information reported.
>
>OpenAI has hired dozens of former Alphabet staff over the years. Since the company's chatbot made headlines in November for its ability to do anything from write an essay to provide basic code, Google and OpenAI have been locked in an AI arms race.
agorathird t1_jecybw0 wrote
Reply to comment by blueSGL in There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
>who published at the conferences NeurIPS or ICML in 2021.
Who? Conferences are a meme. Also, they still don't know about the internal workings of any companies that matter.
>I mean just exactly who do you want to tell you these things. I can pull quotes from people at OpenAI saying they are worried what might be coming in future.
Already addressed this to another commenter, no matter how capable they are it freaks people out less if they appear concerned.
One of the participants is legit just a PhD student. I'm sorry, I don't find your study credible.
[Got blocked :( Please don't spread disinformation if you can! I see you've linked that study a lot for arguments. ]
agorathird t1_jecwk6a wrote
Reply to comment by blueSGL in There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
What does 'behind' mean? If it's not from someone who holistically knows the details of how each arm of the company is functioning, then they're still working with incomplete information. Letting everyone know your safety protocols is an easy way for them to be exploited.
My criteria for what a 'leading artificial intelligence company' is would be quite strict. If you're some random senior dev at numenta then I don't care. A lot of people who work around ML think themselves a lot more impactful and important than what they actually are. (See: Eliezer Yudkowsky)
Edit: Starting to comb through the participants and a lot of them look like randoms so far.
This is more like taking random engineers (some just professors) who've worked on planes before (maybe) and asking them to judge specifications they're completely in the dark about. It could be the safest plane known to man.
Edit 2: Participant Jongheon Jeong is literally just a PhD student who appears to have a few citations to his name.
[Got blocked :( Please don't spread disinformation if you can! I see you've linked that study a lot for arguments. ]
agorathird t1_jec1fq5 wrote
Reply to comment by BigZaddyZ3 in There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
I never said any of that. I just don't think it's a sci-fi doomsday that's incentivized, especially if you have all the data in the world for prediction. But alas, no amount of discussion or internal risk analysis will satisfy some people.
Being scared doesn't mean you think you're incapable. Even so, I think Sam Altman tends not to put on a disagreeable face. Your public face should be "I'm a bit scared," so as not to rock the boat. Being sure of yourself can ironically create more alarmism.
This whole discussion is pointless though. The genie is out of the bottle; I'll probably get what I want, and you probably won't. The train continues.
agorathird t1_jebykbw wrote
Reply to comment by BigZaddyZ3 in There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
Suuure, this would track, if only the same businessmen running the companies were also the scientists and the people developing it, lol. AI companies have the best of three worlds when it comes to the people at the helm. Also, social media is just an amplifier of the current world we live in. Most tech is neutral; thinking otherwise is silly. But I still don't think the example is comparable.
I'm not against capitalism. I love markets and stopped considering communism a long time ago, as most of its proponents conflict with my love for individualism. If you're a communist, then how do you not know the difference between the managerial parts of the company and the developers?
agorathird t1_jebwpvf wrote
Reply to comment by BigZaddyZ3 in There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
There's consideration from the people working on these machines. The outsiders and theorists who whine all day saying otherwise are delusional. Not to mention the armchair 'alignment experts'.
Also, we live in a capitalist society. You can frame anything as the capitalist approach but I don't think doing so in this sense is applicable to its core.
Let's say we get a total 6-month pause (somehow), and then a decade-long pause because no amount of reasonable discussion will make sealions happy. Good, now we get to fight climate change with spoons and sticks.
agorathird t1_jebonvf wrote
Reply to There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
>This letter is basically the equivalent of the early 20th petition by scientists that asked to limit and regulate the proliferation of nuclear weapons. And yet, its being sold as a capitalist stratagem to gain time.
Oh, if this is what the media is saying, then they're right for once. Capitalist gain, trying to get more time to milk their accolades, whatever.
agorathird t1_je8y4c6 wrote
Reply to comment by Easyldur in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
>Literally a LLM as it is today cannot learn: "Knowledge cutoff September 2021".
It's kind of poetic; this was also the issue with symbolic AI. But hopefully, with the current pace of breakthroughs, having to touch base on "What is learning?" every once in a while won't be costly.
agorathird t1_je8vlkk wrote
Reply to comment by Mindrust in The Only Way to Deal With the Threat From AI? Shut It Down by GorgeousMoron
Eliezer is a crank. I see his posts, I scroll. Too bad, since LessWrong can be decent at times.
agorathird t1_jczydgw wrote
Reply to comment by Eleganos in A technical, non-moralist breakdown of why the rich will not, and cannot, kill off the poor via a robot army. by Eleganos
I think the surrounding context serves as quotations enough, honestly.
agorathird t1_jczxao0 wrote
Reply to A technical, non-moralist breakdown of why the rich will not, and cannot, kill off the poor via a robot army. by Eleganos
>their hard earned resources
As much as I like markets this makes me kek.
agorathird t1_iy0oqdd wrote
Shojo protag boyfriend
agorathird t1_iwnwyxp wrote
Reply to comment by lovesdogsguy in Cerebras Builds Its Own (1 Exaflop) AI Supercomputer - Andromeda - in just 3 days by Dr_Singularity
agorathird t1_iwmfs3j wrote
Reply to comment by AsuhoChinami in Cerebras Builds Its Own (1 Exaflop) AI Supercomputer - Andromeda - in just 3 days by Dr_Singularity
*points toward subreddit name*
agorathird t1_iwhnap7 wrote
Reply to comment by Otarih in The debate is over: Humans are machines by Otarih
Okay, I will probably take another look. I can't say I agree with the substance of your ideas but the writing was entertaining.
agorathird t1_iwh2kfc wrote
Reply to The debate is over: Humans are machines by Otarih
tl;dr
Is this a contentious idea among anyone but theists? We are biological machines, just as some are mechanical and others digital.
You should separate your blog posts into sections, and list them before the main content begins.
agorathird t1_jeh2s75 wrote
Reply to comment by Nanaki_TV in I have a potentially controversial statement: we already have an idea of what a misaligned ASI would look like. We’re living in it. by throwaway12131214121
>Again, assumption after assumption. More and new horizons will be created. What? I don't know. But electricity gave the ability for so much to exist on top of it once it was massively adopted. Once AGIs are massively adopted and in our homes, not requiring a supercomputer to train I mean, well, I can only hallucinate what that future will look like. If we are "not needed" then so be it, there's no use arguing. May we die quickly. But I doubt it very much.
Not assumptions; that's what AGI means, lol, as far as current jobs are concerned. Unless there's an issue they have with space travel? You can make a few edge cases assuming slow takeoff. I can give you a boon on new horizons, sure. Maybe we merge, whatever.
This doesn't mean we die, or that it's unaligned, or whatever. That's real speculation. Good luck with your twins.