bitemenow999 t1_ja9dl6k wrote
The problem is that the AI ethics debate is dominated by people who don't directly develop or work with ML models (like Gary Marcus) and who have a very broad view of the subject, often taking the debate into science fiction.
Anyone who says ChatGPT or DALL-E models are dangerous needs to take an ML101 class.
AI ethics at this point is nothing but a balloon of hot gas... The only AI ethics that has any substance is data bias.
Making laws to limit AI/ML use or keeping it closed-source is going to kill the field. Not to mention that the resources required to train a decent model are already prohibitive for many academic labs.
EDIT: The idea of a "license" for AI models is stupid unless they plan to enforce license requirements on people buying graphics cards too.
admirelurk t1_ja9wy95 wrote
I counter that many ML developers have too narrow a definition of what constitutes danger. Sure, ChatGPT will not go rogue and start killing people, but the technology affects society in much subtler ways that are hard to predict.
MW1369 t1_ja9f29c wrote
Preach my man preach
OpeningVariable t1_jaa3ldd wrote
This is not about academic labs; it's about industry, governments, and startups. It is one thing that Microsoft doesn't mind rolling out a half-assed BingChat that can end up telling you ANYTHING at all - but should they be allowed to? What about Tesla? Should they be allowed to launch an unreliable piece of software that they know cannot be trusted and do not fully understand, and call it "Autopilot"? I think not.
bitemenow999 t1_jaa5b9n wrote
What are you saying, mate? You can't sue Google or Microsoft because they gave you wrong information... all software services come with limited or no warranty...
As for Tesla, there are the FMVSS and other regulations that already take care of that... AI ethics is BS, a buzzword for people to make themselves feel important...
AI/ML is a software tool, just like Python or C++... do you want to regulate Python too, on the off chance someone might use it to hack you or commit some crime?
>This is not about academic labs, but about industry, governments, and startups.
Most startups are offshoots of academic labs.
OpeningVariable t1_jaa8zp8 wrote
BingChat is generating information, not retrieving it, and I'm quite sure we will see lawsuits as soon as this feature becomes public and some teenager commits suicide over BS it spat out, or something like that.
Re the tool part - yes, exactly, and we should understand what that tool is good for, or more specifically, what it is NOT good for. No one writes an airplane's mission-critical software in Python; they use formally verifiable languages and algorithms because that is the right tool for the amount of risk involved. AI is being thrown at everything, but it isn't a good tool for everything. Depending on the amount of risk and exposure of each application, there should be different regulations and requirements.
>Most of the startups are off shoots of academic labs.
This was a really bad joke. First of all, why would anyone care about offshoots of academic labs? They are no longer academics; they are in business and can fend for themselves. Second, there is no way most startups are offshoots of academic labs; most startups are chasing easy money and throw in AI just to sound cooler and attract more investors.
VirtualHat t1_jaa4jwx wrote
An increasing number of academics are identifying significant potential risks associated with future developments in AI. Because regulatory frameworks take time to develop, it is prudent to start considering them now.
While it is currently evident that AI systems do not pose an existential threat, this does not necessarily apply to future systems. It is important to remember that regulations are commonly put in place and rarely result in the suppression of an entire field. For instance, despite the existence of traffic regulations, we continue to use cars.
PacmanIncarnate t1_jaafjl5 wrote
Don't regulate tools; regulate their products and the oversight of their use in decision-making. Don't let any person, institution, or corporation use AI as an excuse for why they committed a crime or acted unethically. The law should assume a priori that a human was responsible for decisions, regardless of whether the organization actually functioned that way, because the danger of AI is that it's left to make decisions and those decisions cause harm.
lukasz_lew t1_ja9jsrf wrote
Exactly.
Requiring a licence for "chatting with GPT-3" is silly.
It would be like requiring a licence to talk to a child (albeit a very knowledgeable child with a tendency to make stuff up). You would not let such a kid write your homework or thesis, would you?
Maybe requiring users to read a warning akin to "watch out, the cup is hot" would make more sense for this use case.
enryu42 t1_jaa1lru wrote
> The only AI ethics that has any substance is data bias
While the take in the tweet is ridiculous (but alas common among the "AI Ethics" people), I'd disagree with your statement.
There are many other concerns besides bias in the static data, e.g. feedback loops induced by ML models once they are deployed in real-life systems. One can argue that causality for decision-making models also falls into this category. But ironically, the field itself is too biased to do productive research in these directions...
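To make the feedback-loop point concrete, here is a minimal toy sketch (my own illustration, with made-up names and numbers): a loan-approval model that only observes repayment outcomes for the applicants it approves. Because its future training data is filtered by its own past decisions, an arbitrary early bias against one group never gets corrected, even though both groups are identical.

```python
# Hypothetical sketch of a deployment feedback loop ("selective labels"):
# the model only learns from applicants it approved, so a group it rejects
# never generates the data that would fix its estimate.
import random

random.seed(1)

TRUE_REPAY_RATE = {"A": 0.7, "B": 0.7}   # both groups repay at the same true rate
approved = {"A": 10, "B": 10}            # seed "historical" data the model starts from
repaid = {"A": 7, "B": 3}                # group B happened to look worse in the seed data

def estimated_rate(group):
    # The model's belief about a group, based only on outcomes it has observed.
    return repaid[group] / approved[group]

APPROVAL_THRESHOLD = 0.5

for _ in range(100_000):
    group = random.choice(["A", "B"])
    if estimated_rate(group) < APPROVAL_THRESHOLD:
        continue                         # rejected: repayment outcome is never observed
    approved[group] += 1
    if random.random() < TRUE_REPAY_RATE[group]:
        repaid[group] += 1

print({g: round(estimated_rate(g), 2) for g in ("A", "B")})
# Typical output: {'A': 0.7, 'B': 0.3} -- B's estimate stays frozen at the biased
# seed value because B is never approved again.
```

None of this is captured by "bias in a static dataset"; the harm comes from the closed loop between the model's decisions and the data it later trains on.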
WarmSignificance1 t1_ja9jnft wrote
You don’t have to understand the physics behind nuclear weapons to argue that they’re dangerous. Indeed, the people in the weeds are not always the best at taking a step back and surveying the big picture.
Of course making AI development closed source is ridiculous, though.
bitemenow999 t1_ja9p3n9 wrote
That is a very bad argument... I would suggest you read up on Oppenheimer's quote after the first nuclear test; meanwhile, it was the people surveying the "big picture" who decided to bomb Hiroshima...