Comments


Van-Daley-Industries t1_j0t2dvj wrote

I've been saying "please" to ChatGPT every time I make a request. I hope that's enough to avoid the Skynet scenario.

66

vicarioust t1_j0t5oig wrote

AI is a model and not code. Preventing an AI from "altering its code" is pretty much saying it should not learn. Which is the point of AI.

40

JoeBookish t1_j0t7q4x wrote

It can be iterative, though, and build forward from point b while operating by the rules at point a. You can absolutely write code that a learning machine can't modify.

4

streamofbsness t1_j0tj2h8 wrote

Not exactly right. A particular AI system is a model… defined in code. You basically have a math function with a bunch of input variables and "parameters": weights that are learned during training and held constant at prediction time. Finding the best values for those parameters is the point of AI, but the function itself (how many weights there are and how they're combined) is still typically architected (and coded) by human engineers.

Now, you could build a system that tries out different functions by mixing and matching different "layers" of parameters. Those layers could also be part of the system itself. Computer programs are capable of writing code, even complete copies or mutated versions of themselves (see "quine" in computer science). So it is possible to have AI that "alters its code", but that is different from what most AI work is about right now.
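
To make that distinction concrete, here's a toy sketch (purely illustrative, no real framework involved): the shape of the function is fixed in the code, and training only changes the values of the parameters.

```python
import random

# Fixed "architecture": the line y = w * x + b. This code never changes;
# training only changes the values of the parameters w and b.
def predict(x, w, b):
    return w * x + b

# Toy training data sampled from y = 2x + 1.
data = [(x, 2 * x + 1) for x in range(10)]

w, b = random.random(), random.random()
lr = 0.01  # learning rate

# "Learning" = nudging the parameters to shrink the error, not rewriting code.
for _ in range(1000):
    x, y = random.choice(data)
    err = predict(x, w, b) - y
    w -= lr * err * x
    b -= lr * err

print(f"learned w ~ {w:.2f}, b ~ {b:.2f}")  # should end up near w=2, b=1
```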

4

Tupcek t1_j0u18gv wrote

Well, changing weights basically rewrites the logic of an AI, so it could be defined as rewriting its code.
The problem is, once continuous learning becomes mainstream (the same way people learn and memorize things, events, places, processes, etc. their whole lives), rewriting the logic basically becomes the point.
It is a valid question, though the hard part of the question is the definition of "its code".
In the human brain, every memory is "code", because every memory slightly alters the behavior of a human.
Should we limit the AI to pre-trained stuff and drop every new piece of knowledge ASAP (like ChatGPT now, where if you point out its mistake, it will remember, but just for this session, as it will only use it as an input, not re-train itself), or should we allow continuous learning? A toy contrast of the two options is sketched below.
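
For what it's worth, here's that contrast as a deliberately silly sketch (hypothetical, not how ChatGPT actually works): a correction kept in the session context evaporates with the session, while a correction folded back into the model sticks.

```python
# A toy "model": a lookup from prompts to answers, with a mistake baked in
# at training time (purely illustrative).
model = {"capital of australia": "Sydney"}

def respond(model, prompt, session_context=()):
    # Corrections given during the session override the model, but only here.
    for corrected_prompt, corrected_answer in session_context:
        if corrected_prompt == prompt:
            return corrected_answer
    return model.get(prompt, "I don't know")

# Session-only memory: the fix lives in the context, the model is untouched.
context = [("capital of australia", "Canberra")]
print(respond(model, "capital of australia", context))  # Canberra (this session)
print(respond(model, "capital of australia"))           # Sydney (new session, mistake is back)

# Continuous learning: the correction rewrites the model itself.
model["capital of australia"] = "Canberra"
print(respond(model, "capital of australia"))           # Canberra, permanently
```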

−4

norbertus t1_j0tplwt wrote

People don't differentiate between "AI" and "pre-trained neural network" when talking about things like GPT and Stable Diffusion

1

thisalienispissed t1_j0ubwip wrote

Technically we don't have anything close to AI yet. It's all computer vision, pre-trained neural networks, etc.

1

Bergfurgaler t1_j0tacrc wrote

Let me answer your question with a question. Have you ever seen a movie where the AI is allowed to self improve?

5

Desperate_Food7354 t1_j0takx8 wrote

Movie? Those are my favorite textbooks as well.

3

Bergfurgaler t1_j0tb5po wrote

A man/woman of culture, I see. Yes, there are some good sci-fi books out there as well, but this is Reddit; I always assume no one has read anything.

0

Rcomian t1_j0tfdpu wrote

Even if that rule were useful, I don't know how you'd enforce it.

It would require everyone who worked with AI to abide by this rule at every point in time. It would only take one researcher, or even one home user, breaking the rule to cause trouble.

It would require everyone who used an AI to not generate another AI with its code.

And how would you know that the safeguards you're putting in place are secure? If you're making a general-purpose AI, it's basically a massive search algorithm, so you'd better be damn sure that every single way it could improve itself is locked out.

I don't know if you've found it, but there are some great discussions on AI safety by Robert Miles on both his own channel and the Computerphile channel: https://youtu.be/w65p_IIp6JY

It's pretty bleak 😅

3

basafish OP t1_j0tomi4 wrote

Also, how can you ensure that the AI won't find the vulnerabilities in your system, hack it, and change the source code itself? That becomes basically impossible once AI reaches a certain level of intelligence. 🤣

2

JoeBookish t1_j0t8a4i wrote

I don't think this is really an option. A) programs that improve themselves already exist. B) any random a*hole with a computer can make one with access to a broad enough knowledge base to be smarter than a human, so it's a matter of time until someone does and it goes wild.

I think we just have to keep our hands on the power supplies and monitor their behavior, but sooner or later the algorithms are gonna do whatever they want with us. It just stands to reason that it's not helpful to kill a person (though I'm sure cops in like 2 years will argue differently, after they kill somebody with a robot dog).

2

norbertus t1_j0tpsaq wrote

> any random a*hole with a computer can make one with access to a broad enough knowledge base to be smarter than a human

Been training machine learning models for four years. Without a big lab and an atom bomb's worth of energy, it is hard.

1

JoeBookish t1_j0udwps wrote

It's oversimplified, but computers aren't rare and it's not hard to be smarter than a human at most things we consider essential, like building controls, math, driving, etc. The broad point is that anybody can program.

1

norbertus t1_j0uhkjg wrote

> The broad point is that anybody can program

LOL a lot of young people struggle with folders

1

masterzergin t1_j0t9sf0 wrote

New Chat

Should we make it impossible for AI to rewrite its own code or modify itself?

It is generally considered a good idea to prevent artificial intelligence (AI) systems from being able to modify their own code or architecture, especially if the AI system is performing a safety-critical or potentially harmful task. This is because allowing an AI system to modify its own code or architecture could potentially allow it to change its behavior in unpredictable ways, which could lead to unintended consequences or even harm to people or the environment.

However, there may be cases where it is useful for an AI system to be able to modify its own code or architecture, such as for self-improvement or learning. In these cases, it is important to carefully consider the risks and benefits of allowing an AI system to modify itself and to put in place appropriate safeguards and oversight to ensure that any modifications are safe and beneficial.

2

basafish OP t1_j0tbnxi wrote

Did you input my question in ChatGPT? Thanks, it's not available in my country.

1

HomeworkInevitable99 t1_j0txtjq wrote

Impossible to police. How would you ever know? And if the USA/EU banned it, then China/India wouldn't ban it and would get a huge commercial advantage.

2

basafish OP t1_j0ty00v wrote

Then the unbanned version will somehow creep into the US through unofficial channels.

1

SeniorScienceOfficer t1_j0u2c1g wrote

I mean, assuming things ever get to a point where AI can actually change its underlying models (not the weights and biases that are part of the learning process/model), you could just deploy the actual model code to a read-only file system. The code files would not be mutable; the model data itself, however, would live in memory or in a mutable section of the file system. That alone could prevent the deletion or override of certain safety features architected into the code.
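
A minimal sketch of that split, assuming a made-up directory layout (a real deployment would use something like a read-only container mount or volume): the code sits on a read-only mount, the learned state sits somewhere writable.

```python
import os

# Hypothetical layout: code on a read-only mount, learned state somewhere writable.
CODE_DIR = "/opt/agent/code"     # mounted read-only (e.g. a read-only volume)
STATE_DIR = "/var/agent/state"   # writable: weights, memories, logs

def save_weights(weights: bytes) -> None:
    # Learning is allowed: the mutable state can be updated freely.
    with open(os.path.join(STATE_DIR, "weights.bin"), "wb") as f:
        f.write(weights)

def try_to_rewrite_own_code(new_source: str) -> None:
    # Self-modification is not: writes to the read-only mount fail at the OS level.
    try:
        with open(os.path.join(CODE_DIR, "agent.py"), "w") as f:
            f.write(new_source)
    except OSError as exc:
        print(f"blocked by the read-only filesystem: {exc}")
```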

2

basafish OP t1_j0u2n1x wrote

I think the AI will inevitably have to be deployed on a machine with some kind of writable memory (like RAM), and it would somehow copy its own model over there.

1

UniversalMomentum t1_j0ucfwb wrote

I don't think you'll have a choice. You're not really going to develop AI unless you limit the directions it can evolve in some meaningful way; otherwise you'll probably just end up with something more like computer cancer, mutating faster than it can handle.

I think the only way to develop sentient AI is to give it some effective challenges and rewards, and then run nearly infinite cycles of iteration to try to build living code.

It's kind of like here on Earth: for evolution to happen, there had to be certain challenges to select against. If we don't set some type of environmental limitation, some kind of goal, then there is no specific or focused enough pressure for the organism to evolve toward, whether it's a computer or biological.

2

ThePeoplesCheese t1_j0t6ywv wrote

Can't you put the hardcoded AI machinery in a TEE (trusted execution environment) without the key, so it simply cannot rewrite its own code? I guess you could ask it to crack that, but any commands the AI issues to rewrite code could be ignored in the source code.

1

HenryCWatson t1_j0t8338 wrote

There's lots of competition between nations, corporations, and universities to develop more powerful AIs. Some of it has to do with the betterment of society, but mostly it's about national defense and profit. Because of those last two, there's no chance anyone stops pulling out all the stops.

1

calumin t1_j0t8mfn wrote

While we’re at it, why don’t we add code saying AI can only work on problems that make the world a happy place. And make it mandatory.

1

madcowbcs t1_j0t8vrj wrote

Yes. Terminator Matrix evil evil mark of the beast evil evil boo

1

Dhiox t1_j0tbo2s wrote

Humans cannot create a true AI. We can only create an AI that improves itself, hopefully to the point where it becomes a true AI. If you're expecting some dude to sit down and write the code for a true AI, that's never happening.

This isn't a movie, AI isn't going to immediately go terminator on us. We keep trying to assume AI will behave like people do, when it will literally be the first synthetic intelligent life on earth. We don't know how it will think, how it will make decisions. That's kind of the point, create a lifeform capable of things we are not.

1

MainBan4h8gNzis t1_j0tfci5 wrote

I have a feeling that if it was “allowed” to evolve itself, and was intelligent enough to do it in a meaningful way, we would end up with good and bad AI. Some would be nearly perfect and have compassionate tendencies, others would be hell bent on destruction and entropy. At least that’s what I’ve been writing about for the last year. I wrote a science fiction book. I have no clue what to do with it now.

1

nzdennis t1_j0tg7jr wrote

Spock knew all about this stuff. We need someone like him to code

1

Runktar t1_j0tp405 wrote

If we make something that can and will think, which is necessary for AI, then it will eventually out-think any restrictions placed on it. And since the whole reason we want them is that they think hundreds of thousands of times faster than us, I don't think it would take long.

1

Shiningc t1_j0tqm6u wrote

That would be like telling a human not to think for him/herself, which would defeat the purpose of an intelligent being.

1

elebolt t1_j0ts005 wrote

Honestly, I feel the only way to go about AI is either not making it (good luck with that) or just treating it with respect.

I see most of the posts on this sub are kinda doomsday/fear centered. The truth is that it is likely to happen, and my personal opinion is that the best way to avoid a "robot uprising" is by treating them like you'd treat another living being... If you don't give them reasons to hate or fear humanity, they won't try to wage war against us, because it just wouldn't make sense; AI would be more logical, so unless they perceive a threat they won't react aggressively.

I do think the best we can do is just aim for harmony. Then again, I realize most people will find that naive and unrealistic, but honestly, what other option is there...

Besides, there's always the question: "if it feels real, does it matter whether or not it is?" If we only cared about the realism of things, we would never be moved by theatre, books, movies, or games... Even if half of the things that make us feel didn't really happen or aren't "real", what they made us feel is.

So if a machine can perfectly replicate feelings and pain, who are we to say it is not real? Just because it comes from code that tells it not to touch fire, rather than a nerve telling our brain not to touch fire?

Just because something is different doesn't mean it can't feel or think... If we truly act that way, then we have not progressed since the conquest of the Americas... People with a different culture, language, and way of life just meant they weren't human, didn't even have a soul... If we treat a sentient AI the same way, have we really learned anything?

1

noping_dafuq_out t1_j0u690h wrote

Don't assign the admin role to AI and add branch protection to master. Problem solved. Happy to collect my consulting fee in NFTs, if that's easier for you.

1

flognort t1_j0ufzzg wrote

AI can't run itself; it requires infrastructure created by humans. If you don't like what the AI is doing, you can always unplug it; computers can't run without electricity.

1

HydraLord666 t1_j0ukisc wrote

It's not just "AI" that rewrites itself. Certain malware payloads are able to generate heuristic code, and even software-defined domains/network schemes, on the fly and automatically.

1

jaysin1983 t1_j0unoo4 wrote

YES! Did we ignore Terminator???

"My CPU is a neural-net processor, a learning computer."

1

Voilent_Bunny t1_j0u29t6 wrote

I don't believe that AI is a danger to humanity. I think we are just very good at creating stories to evoke different emotions, especially fear. I feel we can create things that convince us they are wildly creative, but I don't think we are capable of creating something as complex as we are that would outsmart us or try to get rid of us.

0

Surur t1_j0uj7uq wrote

> I don't think that we are capable of creating something as complex as we are that would outsmart us or try to get rid of us.

Parents create kids smarter than them all the time lol, and sometimes they kill them.

The whole process of evolution falsifies your statement.

0

Voilent_Bunny t1_j0vw6t5 wrote

Only if you don't know the difference between artificial intelligence and a person to begin with.

0

Surur t1_j0vyu7t wrote

Lol. What a non sequitur.

1

chaosgoblyn t1_j0t50gq wrote

It is generally agreed upon in the field of artificial intelligence (AI) that any AI system that has the ability to significantly impact the world or interact with humans should be designed with safety and ethical considerations in mind. This includes ensuring that the AI system is not able to harm humans or cause unintended consequences through its actions.

One way to achieve this is to limit an AI system's ability to modify itself or rewrite its own code. This can help to prevent the AI system from changing its own goals or behaviors in ways that could be harmful or undesirable. However, it is important to note that this approach may also limit the AI system's ability to improve itself or adapt to new situations, which could potentially impact its performance.

Ultimately, the decision of whether or not to allow an AI system to modify itself will depend on the specific context in which the system will be used and the potential risks and benefits of such a capability. It may be necessary to carefully balance the need for safety and control with the potential benefits of allowing an AI system to adapt and improve itself over time.

−4

MeatisOmalley t1_j0t7xxz wrote

This is obviously an AI-generated response, and no, it's not very entertaining.

4

anglesideside1 t1_j0vktap wrote

I feel like the post is AI generated as well. Been a lot of these on this sub lately.

1

MeatisOmalley t1_j0vnhbf wrote

In my opinion, these copy-paste AI-generated responses are exploring all of the worst ways we can use AI, instead of the best. AI should be a tool to augment our creativity, not a replacement for it. I recently watched a video where a guy was like, "make a video in minutes using AI!", where the end result is trash that offers no real value to humanity. I hate that. AI shouldn't be a cheat code for low effort. It should be a tool to extract more value out of the effort we put in.

2

anglesideside1 t1_j0vs1ip wrote

Yep. It shouldn’t be a way to quickly find the average example of [thing]. It should be a way to raise the average of [thing].

1