
LanchestersLaw t1_j3afzkc wrote

I think this is a valid question, and many of the responses here were quite snarky and unnecessarily rude. The short answer is yes and no. As with any regulation, there is no single solution everyone will agree on or be happy with. A particularly ironic way to answer this question is with ChatGPT itself, which gave me this response:

> “There is certainly a valid concern about the potential misuse of AI, and it is important for society to consider the risks and how to mitigate them. However, it is also important to consider the potential benefits of AI and not restrict research in a way that would prevent those benefits from being realized. It is a delicate balance to strike.

> There are already some regulations and guidelines in place for AI research and development, such as the Asilomar AI Principles and the EU's General Data Protection Regulation (GDPR). In addition, many companies and organizations have their own guidelines and best practices for responsible AI development.

> It is also important to note that AI is not a single monolithic technology, but rather a broad field with many different applications and use cases. The risks and benefits of AI can vary depending on the specific application, so a one-size-fits-all approach to regulation may not be effective.

> Overall, it is important for society to continue to have an open and honest dialogue about the risks and benefits of AI and how best to address them. It is also worth noting that AI is still in the early stages of development, and there are many technical and practical limitations to what it can currently do. While it is important to be mindful of the potential risks and to take steps to mitigate them, it is also important to keep these limitations in perspective and not let fear or anxiety about the potential misuse of AI obscure the many potential benefits it can bring.”

Edit: r/artificial is a better community for this question. This one is mostly about the technical details of how the complicated math works and “why is my model doing something stupid?”


Baturinsky OP t1_j3bh9kb wrote

Thanks.

I think people vastly underestimate what ChatGPT-like models can be used for. If it has learned from the entire(-ish) scraped internet, it's not just a language model; it's a model of all human knowledge available on the internet, neatly documented and cross-referenced for easy use by algorithms. Currently that data is used by fairly simple algorithms, but what if future algorithms use it to rewrite themselves? Or do something else we don't yet foresee?

And I don't even know how it's possible to contain the danger now, as the algorithm for "pickling" the internet like that is already widely known, so it could easily be done by anyone with a budget and internet access. So one of the necessary measures could be switching off the internet...


LanchestersLaw t1_j3dh4ws wrote

The key words to use for better answers are “control problem” and “AI safety”. In my personal opinion, ChatGPT/GPT-3.5 is an inflection point. GPT-3.5 can understand programming code well and does a passable job of generating it, including its own code. One of the beginner tutorials is using GPT to program its own API, as sketched below.
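For concreteness, here is a minimal sketch of that tutorial exercise: prompting the model, through the OpenAI API, to generate code that calls the same API. The specific model name and the 0.x-era `openai` package interface are assumptions for illustration, not something the tutorial necessarily uses.

```python
# Minimal sketch: ask GPT, via the OpenAI API, to write code that
# calls that same API. Assumes the 0.x-era `openai` Python package
# and an API key in the environment; model choice is illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

prompt = (
    "Write a short Python function that sends a prompt to the OpenAI "
    "completion API and returns the generated text."
)

response = openai.Completion.create(
    model="text-davinci-003",  # GPT-3.5-era completion model (assumed)
    prompt=prompt,
    max_tokens=300,
    temperature=0,             # deterministic output suits code generation
)

# The model's answer is itself code that calls the API it was served by.
print(response.choices[0].text.strip())
```

The point isn't that this is dangerous in itself; it's that the loop of "model writes code that invokes the model" is already a beginner-level exercise.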

That said, GPT-3.5 has many limitations. It isn't a threat. Future versions of GPT have the potential to be very disruptive.
