
MassiveWasabi t1_jegl538 wrote

Just for reference, this paper showed why the safety testing was actually pretty important. The original GPT-4 would literally answer any question, with very useful solutions.

People would definitely be able to do some heinous shit if OpenAI had just released GPT-4 without any safety training. Not just political/ethical stuff, but literally asking how to kill the most people for cheap and getting a good answer, or asking where to get black-market guns and explosives and being pointed to the exact dark-web sites to buy from. Sure, you could technically figure these things out yourself, but this makes it so much more accessible to the people who might actually want to commit atrocities.

Also consider that OpenAI would actually be forced to pause AI development if people started freaking out over some terrible crime linked to GPT-4’s instructions. Look at the highest-profile crimes in America (like 9/11) and how much legislation changed because of them. I’m not saying you could literally do that kind of thing with GPT-4, but you can see what I’m getting at. So we’d end up waiting even longer for more advanced AI like GPT-5.

I definitely don’t want a “pause” on anything and I’m sure it won’t happen. But the alignment thing will make or break OpenAI’s ability to do this work unhindered, and they know it.

10

MassiveWasabi t1_jegjiot wrote

Yes, and for the better. I graduated with a STEM degree, and almost every class was PowerPoint slides ad nauseam. I believe that very soon you’ll be able to plug your entire textbook into an AI model and essentially “talk” to the textbook. Unlimited personal tutoring, which should substantially increase how deeply students actually understand the material.
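A minimal sketch of what “talking to a textbook” could look like, assuming a plain retrieve-then-prompt setup: chunk the book, pull the passages most relevant to a question, and hand them to a chat model. The chunk size, the word-overlap scoring, and the `ask_llm` stub are all illustrative assumptions, not any specific product’s API.

```python
# Toy "talk to your textbook" pipeline: retrieve relevant chunks, then ask an LLM.
# ask_llm is a hypothetical stand-in for whatever chat model API you use.

def chunk_text(text: str, size: int = 300) -> list[str]:
    """Split the textbook into chunks of roughly `size` words."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def relevance(chunk: str, question: str) -> int:
    """Crude relevance score: shared words (a real system would use embeddings)."""
    q_words = set(question.lower().split())
    return sum(1 for w in set(chunk.lower().split()) if w in q_words)

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in: plug in your chat model of choice here."""
    return f"[model answer based on a prompt of {len(prompt)} chars]"

def ask_textbook(textbook: str, question: str, top_k: int = 3) -> str:
    chunks = chunk_text(textbook)
    best = sorted(chunks, key=lambda c: relevance(c, question), reverse=True)[:top_k]
    prompt = ("Answer the question using only these textbook excerpts:\n\n"
              + "\n---\n".join(best)
              + f"\n\nQuestion: {question}")
    return ask_llm(prompt)
```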

24

MassiveWasabi t1_jeb67h2 wrote

I looked into that just now, and my conclusion is that there may be a translation issue between the researchers and the AI. The researchers are all Chinese, and I can see some other simple English mistakes, so I'm not sure whether they used a translation tool or just wrote directly in English. Maybe they did all of the research in Chinese and then translated the paper for us to read. I don't really know, though.

3

MassiveWasabi t1_jeb55ha wrote

Check out this paper that Microsoft researchers just released. Among a ton of other cool things, they talk about how the new model they're working on, TaskMatrix.AI, will be able to take control of "AI teammates" in team-based games, and how you can give each individual teammate a different task in order to carry out a complex strategy. This seems like the next step toward truly dynamic AI-controlled characters, hopefully so dynamic that they seem completely real.
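To picture the per-teammate idea, here's a toy sketch of the fan-out: one strategy becomes a different order for each agent. The `Teammate` class and the order format are made up for illustration; the paper doesn't publish an interface.

```python
from dataclasses import dataclass

@dataclass
class Teammate:
    """Hypothetical AI teammate that executes whatever task it's assigned."""
    name: str
    task: str = "hold position"

    def act(self) -> None:
        print(f"{self.name}: executing '{self.task}'")

def assign_tasks(team: list[Teammate], orders: dict[str, str]) -> None:
    # One strategy fans out into a different order per teammate.
    for mate in team:
        mate.task = orders.get(mate.name, mate.task)

team = [Teammate("Alpha"), Teammate("Bravo"), Teammate("Charlie")]
assign_tasks(team, {
    "Alpha": "flank left through the trees",
    "Bravo": "suppress fire from the ridge",
    "Charlie": "capture the objective",
})
for mate in team:
    mate.act()
```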

10

MassiveWasabi t1_je8atls wrote

This is really big: it’s basically a multimodal AI assistant that can work with images, text, audio, etc. I’m really underselling it, so at least skim the paper.
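The core pattern in the paper is a central foundation model choosing among a large platform of registered APIs (image, audio, text tools) and composing them. Here’s a rough sketch of that routing idea, assuming a simple registry; the keyword matching is my stand-in for the model’s actual API-selection step, and the tool functions are placeholders.

```python
from typing import Callable

# Hypothetical API platform: tools registered under natural-language descriptions.
registry: dict[str, Callable[[str], str]] = {}

def register(description: str):
    def wrap(fn: Callable[[str], str]) -> Callable[[str], str]:
        registry[description] = fn
        return fn
    return wrap

@register("generate an image from a text description")
def text_to_image(prompt: str) -> str:
    return f"<image for '{prompt}'>"  # placeholder for a real image API

@register("transcribe speech audio into text")
def speech_to_text(audio_path: str) -> str:
    return f"<transcript of {audio_path}>"  # placeholder for a real ASR API

def route(request: str) -> Callable[[str], str]:
    # Stand-in for the foundation model's API selection: pick the tool
    # whose description overlaps most with the request.
    words = set(request.lower().split())
    best = max(registry, key=lambda d: len(words & set(d.split())))
    return registry[best]

tool = route("generate an image of a castle from this text")
print(tool("a castle at sunset"))
```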

In terms of gaming, it can even control AI teammates individually, so you can give each one different orders to carry out complex strategies, which the authors say will make you feel like a team leader and increase the fun factor.

Most importantly:

All these cases have been implemented in practice and will be supported by the online system of TaskMatrix.AI, which will be released soon.

Sounds like this is something we will be able to play with sometime soon. Microsoft definitely wants to get these products into the hands of customers.

TL;DR: use ChatGPT

49