
TonyTalksBackPodcast t1_j5iblmx wrote

I think the worst possible idea is allowing a single person or a handful of people to have near-total control over the future of AI, which will be the future of humanity. The process should be democratized as much as possible. Open source is one way to accomplish that, though it brings its own dangers as well.

11

KvanteKat t1_j5j3ewk wrote

>I think the worst possible idea is allowing a single person or handful of people to have near-total control over the future of AI

I'm not sure regulation is the biggest threat to the field of AI being open. We already live in a world where a small handful of people (i.e. decision makers at Alphabet, OpenAI, etc.) have an outsized influence on the development of the field, because training large models is so capital-intensive that very few organizations can really compete with them (researchers at universities sure as hell can't). Neither compute (on the scale necessary to train a state-of-the-art model) nor well-curated large training datasets are cheap.

Since it is in the business interest of incumbents in this space to minimize competition (nobody likes to be disrupted), and since they already have an outsized influence, some degree of regulation to keep them in check may well be beneficial rather than detrimental to the development of AI and derived technologies, and to their integration into wider society (at least I believe so, although I'm open to other perspectives on this).

2

Historical-Coat5318 t1_j5jw8o8 wrote

I just can't even begin to comprehend this view. Of course democratizing something sounds good, but if AI has mass-destructive potential it is obviously safer for a handful of people to have that power than for eight billion to have it. Even if AI isn't mass-destructive, which it obviously isn't yet, it is already extremely socially disruptive, and if any given person has that power our governing bodies have basically no hope of steering it in the right direction through regulation (which they would try to do, since it would serve their best interests as individuals). The common person would still have a say in those regulations through the vote.

−1

GinoAcknowledges t1_j5kb95p wrote

A vast amount of technological knowledge (e.g. how to create poisons or manufacture bombs) has mass-destructive potential if it can be scaled. The difficulty, just as with AI, is scaling, and scaling mostly self-regulates (with help from the government).

For example, you can build dangerous explosive devices in your garage. That knowledge is widely available (google "Anarchists Handbook"). But if you try to build thousands of them (enough to cause mass destruction), the government will notice, and most likely you aren't going to have enough money and time to do it anyway.

The exact same thing will happen with "dangerous uses of AI". The only actors that have the hardware and capital to cause mass destruction with AI are the big tech firms developing it. Try running inference at scale on even a 30B-parameter model right now: it's extremely difficult unless you have access to multiple server-grade GPUs, which are expensive and hard to get hold of even if you have the money (a rough sketch of the memory arithmetic is below).
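To put rough numbers on that, here's a back-of-envelope Python sketch (purely illustrative, assuming dense weights and ignoring the KV cache, activations, and framework overhead) of how much memory just the weights of a 30B-parameter model occupy at common precisions:

```python
# Rough estimate of the memory needed just to hold the weights of a
# 30B-parameter model for inference, at different numeric precisions.
# Illustrative only: real deployments also need memory for the KV cache,
# activations, and framework overhead.

PARAMS = 30e9  # 30 billion parameters

bytes_per_param = {
    "fp32": 4,
    "fp16/bf16": 2,
    "int8": 1,
    "int4": 0.5,
}

for precision, nbytes in bytes_per_param.items():
    gib = PARAMS * nbytes / 1024**3
    print(f"{precision:>10}: ~{gib:,.0f} GiB of weights")
```

At fp16 that's roughly 56 GiB of weights alone, already more than a single 24 GiB consumer GPU can hold, which is why serving such models takes multiple server-grade accelerators or aggressive quantization.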

3