Submitted by a4mula t3_zsu3af in singularity
a4mula OP t1_j1b22lg wrote
Reply to comment by SensibleInterlocutor in A Plea for a Moratorium on the Training of Large Data Sets by a4mula
Electricity usage is something that's easily monitored, and so are the sales of the TPUs and GPUs required to build these machines.
We're already shutting down China's ability to do this, and it will be effective because the US is determined to see it through.
Now it's just a matter of everyone getting on board. Not forever; I'm not intelligent or knowledgeable enough to suggest for how long. But at least until we've had time as a species to truly understand what it is we're agreeing to.
I keep seeing the same sentiment over and over: users are ultimately responsible for their interactions. This is hardcoded into the machine, and no amount of reasoning or logic has changed that perspective, which leads me to believe it's fundamentally being dictated by artificial prompting.
That's a dangerous perspective to have. These are machines capable of influencing people well below the threshold of conscious consent.
It's certainly not a perspective that benefits users. It benefits only the developers of these systems, since it gives them a legal loophole if interactions with users turn out poorly.
There are many red flags and considerations like this. This isn't anti-corporate, and it's not anti-government.
I understand that all stakeholders in these systems are important; they should be.
But we can all pause long enough to at least consider what some of the more impactful outcomes of these machines might be before we unleash them onto society.
It's important.