Submitted by a4mula t3_zsu3af in singularity
You might ask why I'd choose this particular sub to host my plea. It's because I've found this sub to contain a large percentage of users who share my personal values.
Values of honesty, fairness, minimizing bias, logic, and rationality: principles that define us as people willing to consider things in ways consistent with those beliefs.
And you're stakeholders. You're users of technology. Typically, this sub has a better understanding of the conversation I'm presenting.
So I present it to you, my fellow considerers, to do with as you see fit: accept, reject, share, promote, encourage, discourage. It's up to each reader to decide for themselves.
It's a long read, and for that I apologize. But I promise that it's a considered one, and one that I personally believe needs to be considered by us all.
Technology is an exciting and disruptive force with the potential to transform society in many positive ways. But it also carries risks and unintended consequences, and all stakeholders deserve a say in how it is developed and used. This post calls for a moratorium on the training of large data sets, to give humanity time to consider the direction we want to go as a species and to ensure we are making informed decisions about these technologies.
As a society, we are moving at an incredible pace in the development and deployment of machine learning technologies. These systems have the potential to shape the way we think and behave in ways that are completely unpredictable, and we owe it to ourselves to take the time to weigh those risks before racing ahead.
We don't fully understand the consequences of the machines we are building today. These systems can also influence the thoughts and behaviors of their users, and that influence carries risks of its own that deserve serious consideration.
In light of these concerns, we propose a moratorium on the training of large data sets. This would give us time to have open and honest discussions about the risks and benefits of machine learning, and to ensure that all stakeholders have a say in how these technologies are developed and used.
Technology is a transformative force, and it has the potential to shape the future of humanity for the better. But that potential is exactly why the choices we make now matter so much.
A moratorium on the training of large data sets would give us time to consider the direction we want to go as a species and to make informed decisions about these technologies. We call on all stakeholders - technology firms, governments, academics, and users - to support this moratorium and to work together toward the best choices for the future of humanity.
el_chaquiste t1_j19yo5b wrote
The problem with this proposal is that whoever doesn't follow the moratorium will soon have a decisive competitive advantage in several arenas, not only in business.
Companies can agree to halt research within a country, but rival nations have no reason to cooperate. And just one party breaking the deal puts the rest at a disadvantage, and makes them prone to break the deal too.
Legislation has been effective at restraining the biosciences and overly reckless genetic modification, owing to ethical concerns around human experimentation.
But this poses no immediate hazard to anyone except some people's jobs, and it will be a tough sell to countries outside the Western sphere of influence.