Submitted by a4mula t3_zsu3af in singularity
a4mula OP t1_j19zuwp wrote
Reply to comment by el_chaquiste in A Plea for a Moratorium on the Training of Large Data Sets by a4mula
Thank you for the consideration. I think it's very reasonable to assume that there would be those that would attempt to circumvent an agreement made at even the highest levels. But the technologies that offer the greatest impact are those that require large footprints of computation and storage. If we agreed as a species that this was the direction best to go, a system could be developed to ensure that any non-compliance would be evident.
This has to be above the level of any government. More than the UN. It has to be a hand reached out to every single human on this planet, with the understanding that what affects one, affects all in this regard.
I don't propose how that's accomplished. I'm just a rando redditor. But this idea, it needs to be discussed.
If it's a valid idea, it will spread. If it's just my own personal concerns going too far, it'll die with little notoriety and cause no problems.
And that's my only goal.
I would however strongly disagree that it's not an immediate hazard. ChatGPT is a very powerful tool. Very powerful, in ways most have not considered. The power to expand a user's thoughts and flesh out even the most confused of ideas. After all, it wrote the 2nd half of my Plea.
AsheyDS t1_j1a6yjk wrote
Wanting peace, cooperation, and responsible use of technology is admirable, but hardly a unique desire. If you figure out how to slow down the progress of humanity (without force) and get everybody to work together, you'll have achieved something more significant than any AI.
It's more likely that progress will continue, and we'll have to adapt or die, just like always.
a4mula OP t1_j1a7ek2 wrote
I'm doing what I can. I'm planting a seed, right here; right now. I don't have the influence to effect global change. I have the ability to share my considerations with like-minded individuals who might have a different sphere of influence than myself.
We can effect change. Not me, some rando redditor. Probably not you, though I don't know you. But our ideas certainly can.
Maleficent_Cloud5943 t1_j1eerhx wrote
As others have mentioned, I appreciate your goals and sentiments, but a moratorium isn't in the cards at this point. That's not to say it's impossible, but the people holding the cards would take longer to reach some kind of feasible agreement than it will most likely take to reach the singularity. The best thing that each and every person who cares can do at this point is GET INVOLVED. And by that, I mean in any way possible, with as many other people as possible. Educate others--anyone and everyone who is willing to listen. Continue to educate yourself: for instance, if you don't know Python, learn it. Start working with as many pieces of the puzzle as you can and become a stakeholder to whatever extent you can.
a4mula OP t1_j1egt24 wrote
I hear you. Again, if I were a betting man, that's the bet I'd make. I agree entirely. But stranger things have happened, and we live in a world today in which information spreads very quickly.
Things change faster today than ever before and that includes global plans.
So I'm going to keep having this conversation in the hopes that others will at least consider it. I'm not calling for action, I didn't form it as an ultimatum. I've no right to dictate anything.
So I only ask for consideration.
Ok_Garden_1877 t1_j1ag7gp wrote
That's hilarious. I thought it sounded a bit like ChatGPT. It's one of the human things that specific AI seems to be lacking: the natural disorganization of thought. When we talk as humans, sometimes we get excited and jump to a new thought without finishing the first. At least I do, but I have adhd so maybe that's a bad example. Either way, ChatGPT so far seems to break down its paragraphs in organized little blocks. It writes as though everything it says is a rehearsed speech.
Am I alone in this thought?
a4mula OP t1_j1agrnj wrote
No, I'd agree there certainly seems to be clear patterns in its outputs. It'll be interesting to see if users begin to mimic these styles.
I already know the answer for me, because I can see the clear shifts in my own.
Ok_Garden_1877 t1_j1anbah wrote
Ah, okay. So you're saying the more we use it, the more we will become like it? Like saying "art is not a reflection of society, but society is a reflection of art"?
a4mula OP t1_j1anxky wrote
Again, I'm not an expert. I'm a user with very limited exposure in the grand scheme. But what I see happening goes something like this.
The machine acts as a type of echo chamber. It's not biased, and it's not going to develop any strategies of its own that could be seen as harmful.
But its goal is to process the requests of user input.
And it's very good at that. Freakishly good. Superhumanly good. Whatever goal a user has, regardless of its ethics, morality, merit, or cost to society,
that machine will do its best to assist the user in accomplishing it.
In my particular interactions with the machine, I'd often prompt it to subtly encourage me to remember facts. To think more critically. To shave bias and opinion out of my language because it creates ambiguity and hinders my interaction with the machine.
And it had no problem delivering all of that to me through its outputs.
The machine amplifies what we bring to it. Good or Bad.
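The kind of steering a4mula describes, a standing instruction that nudges the model to encourage fact-checking and unambiguous language, can be sketched as a system message in the common OpenAI-style chat payload. This is a minimal illustrative sketch, not from the thread: the model name, prompt wording, and `build_request` helper are all assumptions.

```python
# Sketch of a standing "steering" instruction like the one described above:
# a system message asking the model to nudge the user toward fact-checking,
# critical thinking, and less ambiguous wording. The dict layout follows the
# widely used OpenAI-style chat schema; no API call is made here.

STEERING_PROMPT = (
    "In every reply, subtly encourage me to verify facts, think critically, "
    "and remove bias and opinion from my language, since ambiguity hinders "
    "our interaction."
)

def build_request(user_text: str) -> dict:
    """Assemble a chat-completion payload with the standing steering prompt."""
    return {
        "model": "gpt-3.5-turbo",  # hypothetical model choice
        "messages": [
            {"role": "system", "content": STEERING_PROMPT},
            {"role": "user", "content": user_text},
        ],
    }

request = build_request("Summarize my argument for a training moratorium.")
print(request["messages"][0]["role"])  # system
```

Because the system message rides along with every user turn, the model amplifies whatever intent the user encoded there, which is exactly the echo-chamber dynamic described above.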