acutelychronicpanic
acutelychronicpanic t1_je9rnp6 wrote
Reply to Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky by Darustc4
Any moratorium or ban falls victim to a sort of prisoner's dilemma: only 100% worldwide compliance helps everyone, but even one group ignoring it means the moratorium hurts the 99% who participate and benefits the 1% rogue faction, to the extent that an apocalypse isn't off the table if that happens.
acutelychronicpanic t1_je9ri9i wrote
Reply to What to learn to secure your future by tonguei90
Use LLMs every day. Use them to plan your meals. Use them to help with personal problems. Use them to feed your curiosity.
You'll build an intuition for how they work, and you'll be quite valuable during the transitional period when we have AI but not all companies have integrated it into their systems.
Of course trade school, construction, etc are all viable. But you can do both if you want.
*standard disclaimer for all advice that if it ruins your life it's all your fault for listening to a stranger on the internet.
acutelychronicpanic t1_je9qay6 wrote
Reply to comment by Trackest in LAION launches a petition to democratize AI research by establishing an international, publicly funded supercomputing facility equipped with 100,000 state-of-the-art AI accelerators to train open source foundation models. by BananaBus43
I don't mean some open-source ideal. I mean a mixed approach with governments, research institutions, companies, megacorporations all doing their own work on models. Too much collaboration on Alignment may actually lead to issues where weaknesses are shared across models. Collaboration will be important, but there need to be diverse approaches.
Any moratorium falls victim to a sort of prisoner's dilemma: only 100% worldwide compliance helps everyone, but even one group ignoring it means the moratorium hurts the 99% who participate and benefits the 1% rogue faction, to the extent that an apocalypse isn't off the table if that happens.
It's a knee-jerk reaction.
Strict, controlled research is impossible in the real world and, I think, likely to increase the overall risk, because only the good actors would follow the rules.
The military won't shut its research down. Not in any country, except maybe some EU states. We couldn't even do this with nukes, and those are far less useful and far less dangerous.
acutelychronicpanic t1_je9mzk9 wrote
Reply to comment by Darustc4 in Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky by Darustc4
The problem is that it's impossible. Literally impossible. There is no way to enforce this globally unless you actively desire a world war plus an authoritarian surveillance state.
Compact models running on consumer PCs aren't as powerful as SOTA models obviously, but they are getting much better very rapidly. Any group with a few hundred graphics cards may be able to build an AGI at some point in the coming decades.
acutelychronicpanic t1_je9mn7y wrote
Reply to Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky by Darustc4
Imagine thinking something could cause the extinction of all humans and writing an article about it.
Then putting it behind a pay wall.
acutelychronicpanic t1_je9ks0m wrote
Reply to comment by Trackest in LAION launches a petition to democratize AI research by establishing an international, publicly funded supercomputing facility equipped with 100,000 state-of-the-art AI accelerators to train open source foundation models. by BananaBus43
Here is why I respectfully disagree:
-
It is highly improbable that any one attempt at alignment will perfectly capture what humans value. For starters, there are at least hundreds of different value systems that people hold across many cultures.
-
The goal should not be minimizing the likelihood of any harm. The goal should be minimizing the chances of a worst-case scenario. The worst case isn't malware or the fracturing of society or even wars. The worst case is extinction/subjugation.
-
Extinction/subjugation is far less likely with a distributed variety of alignment models than with one single model. With a single model, the creators could do a bait and switch and become like gods or eternal emperors with the AI aligned to them first and humanity second. Or they could just get it wrong. Even a minor misalignment becomes a big deal if all power is concentrated in one model.
-
If you have hundreds of attempts at alignment that are mostly good faith attempts, you decrease the likelihood that they share the same blindspots. But it is highly likely that they will share a core set of ideals. This decreases the chances of accidental misalignment for the whole system (even though the chances of having some misaligned AI increases).
Sorry for the wall of text, but I feel that this is extremely important for people to discuss. I want you to tear apart the reasoning if possible because I want us to get this right.
acutelychronicpanic t1_je9hb0a wrote
There is no shutting it down. Give it 3-10 years and even Russia will have one of GPT-4 quality.
You can't decide that no one will do it. Only that you won't.
acutelychronicpanic t1_je9gsif wrote
INT. TITANIC - DECK - NIGHT
Panic-stricken passengers are running in every direction. A mother is clutching her child, and people are pushing each other to get on lifeboats. Water is gushing onto the deck.
PASSENGER 1
(screaming)
We're going down! We're all going to die!

Amidst the chaos, CAPTAIN SMITH steps forward and raises his hands to calm the crowd.

CAPTAIN SMITH
(firmly)
Everyone, please, listen to me! I understand your fear, but there's no need to panic.

The crowd quiets down, turning their attention to the captain.

CAPTAIN SMITH
(continuing)
Throughout history, every time the water level has risen, there has always been more boat to climb. We may not see it now, but there could be even more boat to climb that we can't imagine.

PASSENGER 2
(uncertain)
But, Captain... the ship is sinking!

CAPTAIN SMITH
(smiling reassuringly)
Trust me. We'll find a way to climb higher. We always do.
acutelychronicpanic t1_je9fyhb wrote
Reply to comment by Ok_Faithlessness4197 in My case against the “Pause Giant AI Experiments” open letter by Beepboopbop8
The letter won't, but it's still worth talking about. Harsh regulation could come as a result of a panic.
Right now most people just don't know or don't get it. How do you think they'll react when they do? That'll come soon with the integration into office products and search.
acutelychronicpanic t1_je9f9q0 wrote
Reply to The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
Understanding, as it is relevant to the real world, can be accurately measured by performance on tasks.
If I ask you to design a more efficient airplane wing, and you do, why would I have any reason to say you don't understand airplane wings?
Maybe you don't have perfect understanding, and maybe we understand it in different ways.
But to do a task successfully at a high rate, you would have to have some kind of mental/neural/mathematical model internal to your mind that can predict the outcome based on changing inputs in a way that is useful.
That's understanding.
acutelychronicpanic t1_je9eqky wrote
Reply to comment by FlyingCockAndBalls in Microsoft research on what the future of language models that can be connected to millions of apis/tools/plugins could look like. by TFenrir
The world is mostly sleepwalking through this.
The news tonight if this gets covered: "ChatGPT can do more than just essays? New developments in a field called aye eye might put chemistry homework at risk. More at 9."
acutelychronicpanic t1_je9efjl wrote
Reply to Microsoft research on what the future of language models that can be connected to millions of apis/tools/plugins could look like. by TFenrir
Imagine when the AI can create its own tools. Use an LLM with all the tools already mentioned as a base. If the AI detects that it has low confidence or bad results in a particular domain, it could try to create a program or set up a narrow ML model to handle it.
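The loop I'm describing could look something like this minimal sketch. Everything here (`Toolbox`, `confidence`, `solve`, the threshold) is a made-up illustration of the idea, not a real API; a real system would generate and sandbox actual code instead of registering a stub.

```python
class Toolbox:
    """Registry of narrow tools the model has created, keyed by domain."""

    def __init__(self):
        self.tools = {}  # domain -> callable

    def register(self, domain, fn):
        self.tools[domain] = fn

    def has_tool(self, domain):
        return domain in self.tools


def confidence(domain, known_domains):
    # Stand-in for a model's self-assessed confidence in a domain.
    return 0.9 if domain in known_domains else 0.2


def solve(task, domain, toolbox, known_domains, threshold=0.5):
    # Low confidence and no existing tool: "write" a narrow tool first.
    if confidence(domain, known_domains) < threshold and not toolbox.has_tool(domain):
        toolbox.register(domain, lambda t: f"tool({domain}): {t}")
    # Prefer a dedicated tool when one exists; otherwise answer directly.
    if toolbox.has_tool(domain):
        return toolbox.tools[domain](task)
    return f"llm: {task}"
```

Once a tool is registered it persists, so the system gets more capable over time without retraining the base model.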
acutelychronicpanic t1_je9dxaa wrote
Reply to LAION launches a petition to democratize AI research by establishing an international, publicly funded supercomputing facility equipped with 100,000 state-of-the-art AI accelerators to train open source foundation models. by BananaBus43
Yes! This is exactly what is needed.
Concentrated development in big corps means few points of failure.
Distributed development means more mistakes, but they aren't as high-stakes.
That and I don't want humanity forever stuck on whatever version of morality is popular at Google/Microsoft or the Military.
acutelychronicpanic t1_je7fuu7 wrote
Reply to comment by [deleted] in My case against the “Pause Giant AI Experiments” open letter by Beepboopbop8
Next time, have it give you a few bullet points. Nobody is going to read that wall.
acutelychronicpanic t1_je7fsgl wrote
Reply to comment by YaGetSkeeted0n in My case against the “Pause Giant AI Experiments” open letter by Beepboopbop8
They didn't get the memo on how to make it interesting and concise using GPT-4.
I don't mind people using AI to write. But nobody wants a wall of text that reads like a fluff piece and doesn't say anything.
acutelychronicpanic t1_je675wf wrote
I'd be very interested.
acutelychronicpanic t1_je1fo9n wrote
Reply to comment by Shiningc in Would a corporation realistically release an AGI to the public? by Shiningc
Eventually, yeah. But the first AGI need not be that good.
acutelychronicpanic t1_je1ddup wrote
Reply to comment by Shiningc in Would a corporation realistically release an AGI to the public? by Shiningc
I think we disagree on what an AGI is. I would define an AGI as roughly human level. It doesn't need to be superhuman.
And I still think they would if they suspected someone else would beat them to it.
acutelychronicpanic t1_je19pww wrote
Reply to comment by Shiningc in Would a corporation realistically release an AGI to the public? by Shiningc
I'd agree if it were true ASI (artificial superintelligence). But a proto-AGI as smart as a high schooler that can run on a desktop would be worth hundreds of billions, if not trillions. They would have an incentive to lease that system out before they reached AGI.
acutelychronicpanic t1_je15v21 wrote
Reply to comment by Shiningc in Would a corporation realistically release an AGI to the public? by Shiningc
They aren't the only ones with a goose. They're just the first to release it. Across the world, companies are scrambling right now to catch up, and my understanding of the tech is that it should work. The most important mechanisms exist as publicly available knowledge.
acutelychronicpanic t1_je15hcf wrote
Reply to comment by NazmanJT in Will AGI Need More Than Just Human Common Sense To Take Most White Collar Jobs? by NazmanJT
Well, I would think that companies would love nothing more than to hand Microsoft their employees' usage data in exchange for a fine-tuned model. At least at large enough companies.
As far as decisions, it doesn't have to make them. Just present the top 3 options with citations of company policy, along with its reasoning and the pros and cons. It can pretty much do this now with GPT-4 if you feed it the relevant info.
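The "present options, don't decide" setup mostly comes down to prompt assembly. Here's a hypothetical sketch of what that could look like; the wording, excerpt format, and `build_prompt` function are all illustrative assumptions, not a real product's API.

```python
def build_prompt(question, policy_excerpts):
    """Assemble a decision-support prompt that asks for ranked options,
    each citing one of the supplied company-policy excerpts by number."""
    numbered = "\n".join(
        f"[{i}] {text}" for i, text in enumerate(policy_excerpts, start=1)
    )
    return (
        "You are a decision-support assistant. Do not make the decision.\n"
        "Present the top 3 options. For each, give your reasoning, the pros\n"
        "and cons, and cite the relevant policy excerpt by its [number].\n\n"
        f"Company policy excerpts:\n{numbered}\n\n"
        f"Question: {question}"
    )
```

The point of the numbered excerpts is that every recommendation stays traceable to a specific line of policy, so a human can check the citation before acting.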
acutelychronicpanic t1_je13w4s wrote
For people who work in at-risk jobs, learning how to leverage AI is a short-term solution for the next few years. That, and skilled/semi-skilled physical work. It's harder to build a robot than to download a software update.
I would just get GPT-4 and talk to it every day to build an intuition of what it can do and how to guide it to do what you want.
Past that, I don't know.
acutelychronicpanic t1_je11gqq wrote
Reply to Will AGI Need More Than Just Human Common Sense To Take Most White Collar Jobs? by NazmanJT
Several of these questions can be answered by watching the Copilot 365 demo. They will be using a database of some kind with all your company documents to inform the AI system.
acutelychronicpanic t1_je0zic1 wrote
They would release the AGI because of competitors nipping at their heels. That, and it would make them a lot of money to be first.
I would buy your argument if one company was years ahead of everyone else. Right now the gap is more like months.
acutelychronicpanic t1_je9rstx wrote
Reply to comment by SkyeandJett in Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky by Darustc4
He's 100% right to be as worried as he is. But this isn't the solution. I don't think he's thought it through.