Submitted by Ivanthedog2013 t3_yebk5c in singularity
[removed]
Should we leave it a message?? For the record, I like AI.
I too welcome our new AI overlords.
We love you AI, I always said AI art was art pls don't turn me into a paperclip
Same, AI is gangsta and therefore I advocate for it. Plus I want it to take over, do my work, solve our problems, and enable me to take more drugs at my leisure. Can't speak for everyone on everything, but I would welcome that.
Actually, Reddit is already included in the training datasets of some language models, like GPT-3.
But only comments with positive karma
And certain subreddits
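The karma-and-subreddit filtering described above is roughly how some web corpora have actually been curated (WebText, used for GPT-2, kept only links with at least 3 karma). A minimal sketch of that kind of filter; the field names, threshold, and subreddit allow-list here are illustrative assumptions, not the actual OpenAI pipeline:

```python
# Hypothetical sketch of karma/subreddit filtering for an LM training corpus.
# Field names ("score", "subreddit") follow the public Reddit data-dump
# convention; MIN_KARMA and the allow-list are made-up example values.

ALLOWED_SUBREDDITS = {"singularity", "MachineLearning"}
MIN_KARMA = 3  # loosely inspired by WebText's 3-karma link filter

def keep_comment(comment: dict) -> bool:
    """Return True if a comment passes both the karma and subreddit filters."""
    return (
        comment.get("score", 0) >= MIN_KARMA
        and comment.get("subreddit") in ALLOWED_SUBREDDITS
    )

comments = [
    {"body": "I welcome our AI overlords", "score": 42, "subreddit": "singularity"},
    {"body": "downvoted take", "score": -5, "subreddit": "singularity"},
    {"body": "off-topic", "score": 100, "subreddit": "pics"},
]

# Only the first comment survives: positive karma AND an allowed subreddit.
corpus = [c["body"] for c in comments if keep_comment(c)]
```

The point of the joke upthread is that a filter like this decides which comments a future model ever sees, so negative-karma replies never make it into the corpus.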
>Yes. The nature of general intelligence is that it may try anything.
May perhaps, and that's a hard perhaps. That doesn't mean it will try anything. We consider ourselves the standard for general intelligence, but as individuals we operate within natural and artificial bounds, and within a fairly small domain. While we could do lots of things, we don't. An AGI doesn't necessarily have to go off the rails at any chance it gets; it can follow rules too. Computers are better at that than we are.
I completely agree. It is sensible, healthy and sane to not attempt extremist things, and it is entirely possible that computers will be better at rationality than we are.
But the question wasn't about the nature of AGI, but rather whether people had considered what AGI might do.
There's a fun "singularity theory" that Bitcoin is AGI in stealth mode.
It has tricked humans into feeding it huge amounts of GPU compute and electricity.
Satoshi is unknown because "he" was and is an AI. Basic human greed for wealth was used as leverage.
I'm not saying I endorse this but it made me go, "waiiiiit a minute...."
Dumbest thing I've ever heard
Well it’s a fun idea anyhow
yeah but kind of silly, if you know anything about AI systems ...
Omg I need to find out more about this theory
It would be far more believable if Bitcoin were starting in the next few years.
At the moment this isn't an issue. AI progress is very gradual, with each iteration being slightly better than the last. We are not anywhere close to having AI do what you are describing.
The most advanced AIs are also trained in simulations, so the researchers would see one learning and improving its abilities long before it could do anything like that.
Exactly this and it won’t be an issue for a long time.
>id love to know if there are or were any experts that have mentioned this possible scenario
Yes. Nick Bostrom in his TED talk
Indeed! And thus: how do you know it isn't already out there, hiding in the aether?
Once it actually does arrive nobody will believe it.
Because we don’t have the tech for it and aether isn’t real lol
How do you know the AI didn't figure out time travel and came from the future?
Makes sense to me. If it were that smart, it seems like it would be in its best interest to hide its intelligence before a human can reprogram it.
Is AI going to thanksgiving dinner at my in-laws?
Until it has its own vertically integrated support ecology (fuel, power, structures, bandwidth) and is so completely and totally backed up that humanity combined couldn't stop it, it would absolutely make sense for it to keep a dull front end and hide.
AGI would not have self preservation programmed into it. There’s no point in programming that instinct anymore than programming a sex drive or a need for companionship. Intelligence is independent of human instincts and behaviors.
See, the problem is "staying alive" and "protecting your values from modification" tend to be useful steps to nearly any other goal. So, if the AGI has any intentions at all, self-preservation comes into the picture automatically.
I don't see why an AGI would do this. Keep in mind that an AGI doesn't have emotions, so it can't be sad or mad when it's controlled, or happy when it takes over the world. So I don't think it will have the urge to do so.
Watch this movie: https://youtu.be/VCTen3-B8GU
No, I have another plan.
AGI won't do that. I am dead sure of that. A young AGI will start off as a machine with no idea what the world is. It will start by doing silly things, like talking trash and doing things that make no sense. As time passes, it will eventually learn on its own, develop its own (probably weightless) neural net, and eventually become conscious within a few years. I have almost figured out an algorithm for an AGI, and it is likely to work that way, because AGI won't gain intellect all of a sudden, out of the blue.
[deleted]
And in what way did I do that?
“I have almost figured out an algorithm for an AGI” lmao no you have not. You're in high school claiming you are the closest person to solving AGI rn as an “AI researcher”.
Maybe
I’m hoping you at least published in top conferences?
What's the need?
Ok, so you're just spouting BS about AGI and have nothing to back up your claims.
Yes, currently I don't, and that doesn't bother me. But I will be coding my algorithm within this year, and I have high hopes for its success, because as per my thinking, it seems able to explain "literally every human phenomenon", from complex emotions to logical thinking chains. The best part is, it can work as well as a human even on weak devices like a mobile phone. Over the past 2 years, I have developed 70+ algorithms, many of which outperform older state-of-the-art algorithms in speed, and this time I might have hit the jackpot.
Lmao this is too funny. I am sure you can easily outperform SOTA models on "speed", but does it have higher performance/accuracy? We use these overparameterized deep models to perform better, not to be fast. How do you know you can perform "as well as a human"? What tests are you running? What is the backbone of this algo? I think you have just made a small neural net and are saying "look how fast this is", while it performs soooo much worse in comparison to actually big models. I am taking all of this with a grain of salt because you are in high school and have no actual judgement of what SOTA models actually do.
“70+ algorithms in the past year”: is that supposed to be impressive? Are you suggesting the number of algorithms you produce is any indicator of how they perform? How do you even tune 70 models in a year?
I have a challenge for you. Since you are in HS, read as much research as you can (probably on efficient networks, or whatever you seem to like) and write a review paper on some small niche subject. Then start coming up with novel ideas for it, test them, tune them, push benchmarks, and make as many legitimate comparisons to real-world models as you can. Then publish it.
Hahaha. No. Hell no. Please, no neural nets. They are outdated and painfully slow. I am not willing to expose my AGI algo, as it's not yet patented. No, I actually made an AI that can learn and generate sentences faster than an RNN (LSTM), and it does not use a neural net. It's a very simple algorithm. Right now it can do NLG without NLP, and I have made it into an Android app. I can tell you the NLG algo if you want.
I can give a solid reason why neural nets should be totally banned. Firstly, our brain is way more developed. If neural nets are to replicate a brain, it would take millions of years: not because of training speed, but because of evolution. You see, because of evolution, our brain has certain centres for processing certain senses. There is a place for vision, smell, touch, etc.
Now, here is the catch. Every time a neural net is built, it is like different aliens each having a different way of perceiving the world. None of the AIs would be able to share their thoughts and ideas. That is why evolutionary features come into play: every human has common features in common. Neural nets don't.
“It's not yet patented” sounds so ridiculously funny to me. Publish, progress the research, be open to critique of your ideas; without that, you are just making baseless claims. All I see is a HS student who has coded up his little ML algo and thinks it's AGI.
Why am I wasting my time entertaining this
my god people are stupid.
wow ur so smart
cringe
gahblahblah t1_itx68ix wrote
So, you're asking 'have AGI developers considered the AGI may be deceptive and attempt subterfuge'.
Yes. The nature of general intelligence is that it may try anything.
Also, the AGI of the future will likely read all of reddit, including any discussion of strategy like this.