
agorathird t1_jeh2s75 wrote

>Again, assumption after assumption. More and new horizons will be created. What? I don't know. But electricity gave the ability for so much to exist on top of it once it was massively adopted. Once AGIs are massively adopted and in our homes, not requiring a supercomputer to train I mean, well, I can only hallucinate what that future will look like. If we are "not needed" then so be it, there's no use arguing. May we die quickly. But I doubt it very much.

Those aren't assumptions; that's what AGI means, lol, at least as far as current jobs are concerned. Unless they have some issue with space travel? You can make a few edge-case arguments assuming a slow takeoff. And I'll grant you the point about new horizons, sure. Maybe we merge, whatever.

This doesn't mean we die or that it's unaligned or whatever. That's the real speculation. Good luck with your twins.

3

agorathird t1_jegwf73 wrote

It's not naive: you have not thought through the implications of what AGI means. You are also ignorant of what is doable with current technology. Artificial general intelligence is equal to us but also inherently superior due to its computational capacities. There is no need for us after that.

You are literally not describing any useful conception of AGI; in your responses you only describe the most surface-level uses of text-only LLMs.

The r/futurology work-week stuff you talk about is possible right now with the current public models of ChatGPT. It's been possible for a while. But it's not implemented, due to greed and bureaucrats being steadfast in their ways. Luckily, not implementing that change hasn't been critically dire for vast swaths of people thus far.

2

agorathird t1_jegunvj wrote

>Describe the world today in 1980. You cannot predict Reddit or Twitter. You cannot make the claims you’re making with any substantial certainty. Stop acting as if you know.

Both in this thread and in the other thread, you seem not to want to extrapolate from presently available information. That's, like, the best thing about being sentient, too. Or at least you don't want me to extrapolate, since you gave me an r/futurology-tier take on work.

You are acting like I'm describing hypothetical technology. It's already here. Look through the subreddit for direct sources. You seem to be working only from ChatGPT-like text models. Even those can be quite autonomous with plug-ins. You're like those people who don't know AI is starting to write functional code.

For as much as you love markets, which I also do, you don't seem to acknowledge the profit motive and how human-neutral it is.

---

On a side note, if I'd had access to books in the 1980s I might've predicted social media. A lot of singularitarians did. But really this is more like predicting social media in 2001 or 2007, depending on which sites you'd like to count. And I still think the analogy is flawed, as the tech is already here.

1

agorathird t1_jegpfqk wrote

You are describing some kind of 1950s atom-punk idea of the future. That future has been cancelled. LLMs, once perfected, embodied, and multi-modal (general or specialized), will cover the theorized 70, then 90, then 99% of human tasks. The only question is how long until companies feel like adopting them.

You will have capital owners and executives with machine employees. We are not in the picture as meaningful contributors. Hiring us will be like riding to work on horseback. No one will be going to work like George Jetson.

10 minutes of meaningful human labor to give WorkerGPT some extra oil sounds like a masquerade for what is really a society supported by UBI.

2

agorathird t1_jegjkel wrote

That proves my point. You're acting like UBI isn't a logically necessary idea in a hypothetical society that is massively unemployed but over-abundant. That's not just 'muh communism'. It's the default economic mode that almost every singularitarian across the economic spectrum recognizes as ideal.

It's really beyond anything we know right now.

5

agorathird t1_jeggpm6 wrote

>You may need to review your definitions. And I expect downvotes given this sub’s anti-capitalist stance. Shame

*Shits self* "I expect everyone to say that I stink. Shame"

Make a better argument before claiming unfairness.

8

agorathird t1_jeduq0e wrote

So, friends, what was that some of you said about Google having something much better hidden away? That Bard is just playing it safe, even though it's been an embarrassment since release?

Edit (juicy):

>shortly after leaving Google in January, Devlin joined OpenAI. Insider previously reported that Devlin was one of several AI researchers to leave Google at the beginning of the year for competitors.
>
>Devlin, who was at Google for over five years, was the lead author of a 2018 research paper on training machine learning models for search accuracy that helped initiate the AI boom. His research has since become a part of both Google and OpenAI's language models, Insider and The Information reported.
>
>OpenAI has hired dozens of former Alphabet staff over the years. Since the company's chatbot made headlines in November for its ability to do anything from write an essay to provide basic code, Google and OpenAI have been locked in an AI arms race.

7

agorathird t1_jecybw0 wrote


>who published at the conferences NeurIPS or ICML in 2021.

Who? Conferences are a meme. Also, they still don't know the internal workings of any of the companies that matter.

>I mean just exactly who do you want to tell you these things. I can pull quotes from people at OpenAI saying they are worried what might be coming in future.

I already addressed this with another commenter: no matter how capable they are, it freaks people out less if they appear concerned.

One of the participants is legit just a PhD student. I'm sorry, but I don't find your study credible.

[Got blocked :( Please don't spread disinformation if you can! I see you've linked that study in a lot of arguments.]

2

agorathird t1_jecwk6a wrote

What does "behind" mean? If it's not from someone who knows, holistically, how each arm of the company is functioning, then they're still working with incomplete information. Letting everyone know your safety protocols is an easy way for them to be exploited.

My criteria for what counts as a 'leading artificial intelligence company' would be quite strict. If you're some random senior dev at Numenta, then I don't care. A lot of people who work around ML think themselves far more impactful and important than they actually are. (See: Eliezer Yudkowsky.)

Edit: I'm starting to comb through the participants, and a lot of them look like randoms so far.

This is more like grabbing random engineers (some just professors) who may have worked on planes before and asking them to judge specifications they're completely in the dark about. It could be the safest plane known to man.

Edit 2: Participant Jongheon Jeong is literally just a PhD student who appears to have a few citations to his name.

[Got blocked :( Please don't spread disinformation if you can! I see you've linked that study in a lot of arguments.]

1

agorathird t1_jec1fq5 wrote

I never said any of that. I just don't think a sci-fi doomsday is what's incentivized, especially if you have all the data in the world for prediction. But alas, no amount of discussion or internal risk analysis will satisfy some people.

Being scared doesn't mean you think you're incapable. Even so, I think Sam Altman tends not to put on a disagreeable face. Your public face should be "I'm a bit scared," so as not to rock the boat. Being sure of yourself can, ironically, create more alarmism.

This whole discussion is pointless, though. The genie is out of the bottle; I'll probably get what I want, and you probably won't. The train continues.

0

agorathird t1_jebykbw wrote

Suuure, this would track. Except the same businessmen running these companies are also the scientists and the people developing the technology, lol. AI companies have the best of three worlds when it comes to the people at the helm. Also, social media is just an amplifier of the world we already live in. Most tech is neutral; thinking otherwise is silly. But I still don't think the example is comparable.

I'm not against capitalism. I love markets, and I stopped considering communism a long time ago, as most of its proponents conflict with my love of individualism. If you're a communist, then how do you not know the difference between the managerial parts of a company and the developers?

1

agorathird t1_jebwpvf wrote

There is consideration from the people working on these machines. The outsiders and theorists who whine all day saying otherwise are delusional, not to mention the armchair 'alignment experts'.

Also, we live in a capitalist society. You can frame anything as "the capitalist approach," but I don't think that framing applies here at its core.

Let's say we get a total six-month pause (somehow), and then a decade-long pause, because no amount of reasonable discussion will make sealions happy. Good, now we get to fight climate change with spoons and sticks.

−3

agorathird t1_jebonvf wrote

>This letter is basically the equivalent of the early 20th petition by scientists that asked to limit and regulate the proliferation of nuclear weapons. And yet, its being sold as a capitalist stratagem to gain time.

Oh, if that's what the media is saying, then they're right for once. Capitalist gain, trying to buy more time to milk their accolades, whatever.

1

agorathird t1_iwh2kfc wrote

tl;dr

Is this a contentious idea among anyone but theists? We are biological machines, just as some machines are mechanical and others digital.

You should separate your blog posts into sections and list them before the main content begins.

10