NoidoDev
NoidoDev t1_ja5rr42 wrote
Reply to comment by GreatWall in Meta unveils a new large language model that can run on a single GPU by AylaDoesntLikeYou
Thanks to Reddit and Hollywood.
NoidoDev t1_ja5q0rs wrote
Reply to comment by manubfr in Meta unveils a new large language model that can run on a single GPU by AylaDoesntLikeYou
Am I the only one who drinks too much coffee and hot chocolate, or eats snacks, partly because I want to get up from time to time and walk into the kitchen? At least you didn't ask for a sandwich, since you'd need to wash your hands and step away from your workspace for that anyway, right?
NoidoDev t1_ja5pma9 wrote
Reply to comment by AylaDoesntLikeYou in Meta unveils a new large language model that can run on a single GPU by AylaDoesntLikeYou
>"I'm now thinking that we will be running language models with a sizable portion of the capabilities of ChatGPT on our own (top of the range) mobile phones and laptops within a year or two," wrote independent AI researcher Simon Willison in a Mastodon thread analyzing the impact of Meta's new AI models."
Okay, so is it openly available? Don't get me all excited and then tell me no.
NoidoDev t1_ja17niw wrote
Reply to comment by Present_Finance8707 in What are your thoughts on Eliezer Yudkowsky? by DonOfTheDarkNight
>Thankfully I think you’re also too stupid to contribute meaningfully
Problem is, I don't need to. You doomers would need to convince people that we should slow down or stop progress. But we won't.
NoidoDev t1_ja17g56 wrote
Reply to comment by Present_Finance8707 in What are your thoughts on Eliezer Yudkowsky? by DonOfTheDarkNight
> Like I said you really really don’t understand alignment.
What I don't understand is how you believe you can deduce this from one or a very few comments. But I can just as easily claim you don't understand my comments, so you would first have to prove that you do. So spend the next few hours thinking about an answer; then I might or might not reply, and that reply might or might not take your arguments into account instead of just being dismissive. See ya.
Edit: Word forgotten
NoidoDev t1_ja16cwg wrote
Reply to comment by Gordon_Freeman01 in Hurtling Toward Extinction by MistakeNotOk6203
Funny how the guys warning that AGI will jump to conclusions want to prove this by jumping to conclusions. It's sufficient that the owner of the AI keeps it running so that it can achieve its goal. It doesn't mean the AI could do anything if the instance were deleted, or that it would want to.
> Let's say your boss tells you to do a certain task.
That doesn't automatically mean you would destroy mankind if that were necessary. You would just tell him that it's not possible, or much more difficult, or that it would require breaking laws and rules.
NoidoDev t1_j9xn4zf wrote
Reply to comment by Timely_Secret9569 in People lack imagination and it’s really bothering me by thecoffeejesus
Not how he framed it. Also, statements with "everybody" are mostly wrong. People are very different from each other. Loners don't even want people in their life... huh.
NoidoDev t1_j9x3m3e wrote
>I just wish that more people cared in my real life, you know?
Your well-being depends on other people caring about and believing the same things?
NoidoDev t1_j9wy8qk wrote
Reply to Hurtling Toward Extinction by MistakeNotOk6203
>AGI will want to accomplish something.
No. Only if we tell it to.
>AGI needs to maintain a state of existence to accomplish things.
No. It could tell us it can't do it. It might not have control over it.
>AGI will therefore have a drive to self-preserve
No. It can just be an instance of a system trying to do its job, not knowing more about the world than necessary.
>Humanity is the only real threat to the existence of AGI
No, the whole universe is.
>AGI will disempower/murder humanity
We'll see.
NoidoDev t1_j9wwxqj wrote
Reply to Hurtling Toward Extinction by MistakeNotOk6203
>I am not a doomer
It's not necessarily for you to decide whether you are categorized as one. If you construct a rather unrealistic problem that would do a lot of harm to us, claim it isn't solvable, and insist no mitigation is possible because everything has to go wrong, then you have a doomer mentality. Which makes you a doomer.
NoidoDev t1_j9s2u0l wrote
Reply to And Yet It Understands by calbhollo
>intelligence requires symbolic rules, fine: show me the symbolic version of ChatGPT. If it is truly so unimpressive, then it must be trivial to replicate.
This is not how this works. It's about different methods for different things.
NoidoDev t1_j9r5p4g wrote
Reply to New agi poll says there is 50% chance of it happening by 2059. Thoughts? by possiblybaldman
Deep learning is not enough. There might still be a lot of work to do. Let's hope we get something close much earlier.
NoidoDev t1_j9nyjqh wrote
Reply to comment by y53rw in Can someone fill me in? by [deleted]
>The danger is that we don't yet know how to properly encode our values and goals into AI.
The danger is giving such a system too much power, maybe without a delay between "having an idea" and executing it, and not having other systems in place to stop it if something goes wrong.
NoidoDev t1_j9ny56l wrote
Reply to comment by Surur in Can someone fill me in? by [deleted]
>consciousness
The term has no special meaning in the context of AI; there's no general agreement on what it means. But many people here are into some kind of mythical idea of it. It's really just a high-level control system, or the difference between only being able to "dream" / fantasize and reasoning.
NoidoDev t1_j9nxwll wrote
Reply to comment by iNstein in Can someone fill me in? by [deleted]
>That is about to change and so we will lose our decision making and control. A smarter creature will decide what happens to us
There's no reason to draw this conclusion.
NoidoDev t1_j9ni5vf wrote
Reply to comment by GoSouthYoungMan in Microsoft is already undoing some of the limits it placed on Bing AI by YaAbsolyutnoNikto
They might be, but that doesn't mean the right things are being censored. Claiming that every problem is the fault of capitalism is tolerated, while many opinions unpopular with the political and media elites are labeled as extremist. Shutting down one side creates a sense of what can be said and what the public norm is.
NoidoDev t1_j9nhll0 wrote
Reply to comment by Artanthos in Microsoft is already undoing some of the limits it placed on Bing AI by YaAbsolyutnoNikto
All the platforms are pretty much biased against conservatives and anyone who isn't anti-national and against men, but they allow anti-capitalist propaganda and claims about what kinds of things are "racist". People can claim others are incels, that certain opinions are the ones incels have, and that incels are misogynists and terrorists. The same goes for any propaganda in favor of an especially protected (= privileged) victim group. Now they use this dialog data to train AI while raging about dangerous extremist speech online. Now we know why.
NoidoDev t1_j9nelfr wrote
Reply to comment by Anenome5 in What. The. ***k. [less than 1B parameter model outperforms GPT 3.5 in science multiple choice questions] by Destiny_Knight
>same result from less parameters and more training
Thanks, good to know.
NoidoDev t1_j9lptw1 wrote
Reply to comment by Present_Finance8707 in What are your thoughts on Eliezer Yudkowsky? by DonOfTheDarkNight
His videos are where he can make his case. They're the introduction. If he and others fail to make the case, they don't get to blame the audience. Of course I look at the abstract first to see if it's worth looking into. My judgement is always: no.
NoidoDev t1_j9iuhtl wrote
Reply to comment by ExtraFun4319 in OpenAI has privately announced a new developer product called Foundry by flowday
Could it reduce the number of people required and create more competition by elevating some people who use such tools? Could this be done remotely, maybe even without much knowledge of what the company does, so it could be outsourced? Could combining input into some AI-based system from the top and the bottom, with oversight from a much smaller number of middle managers, reduce how many of them are needed?
NoidoDev t1_j9ht1ur wrote
He uses thought experiments and unreasonable scenarios to get attention. Whether this is for commercial reasons or just his mentality, I don't know. If it were clear that these are just abstract thought experiments, it wouldn't be a problem, but he acts like they are real threats. He and other similar "researchers" build their scenarios on claims like:
- AGI or ASI is going to be one algorithm or network, so no insights, no filters possible, ...
- someone will give it the power to do things, or it will seek these powers on its own
- it will do things without asking and simulating things first, or it just doesn't care about us
- the first one built will be a runaway case
- it will seek and have the power to change materials (nanobots)
- there will be no narrow AI(s) around to constrain or stop it
- no one will have run security tests using more narrow AIs, for example on computer network security
- he never explains why he believes these things; at least he's not upfront about it in his videos, just abstract and unrealistic scenarios
This is the typical construction of someone who wants something to be true: doomer mindset, or BS for profit / job security. If he had more influence, he would most likely be a danger; his wishes for more control over the technology show that. He would stop progress and especially proliferation of the technology. I'm very glad he failed. In some time we might have decentralized training, so big GPU farms won't be absolutely necessary. Then it's gonna be even more over than it already is.
Edit: Typo (I'm not a native English speaker)
NoidoDev t1_j9baaz6 wrote
Reply to comment by superluminary in Proof of real intelligence? by Destiny_Knight
Ahm, no. We aren't just "language models". This is just silly. I mean, there's the NPC meme, but people are capable of more than just putting out the most likely response without knowing what it means. That's certainly an option, but not the only thing we do.
We also have a personal life story and memories, models of the world, more input like visuals, etc.
NoidoDev t1_j98shkl wrote
I follow the definition that consciousness is either described by
- AST (attention scheme theory), so a smaller part of a bigger system which receives high level information, but not the details. It's controlling the direction of the system when necessary, but not down to every detail, only on a high level. Many things might run on "auto-pilot" and the details be handled by specialized systems.
- Or from what I gathered so far about the bicameral mind theory, emphasizing the distinction between dreaming and reasoning.
Either way, explicit reasoning and understanding of concepts are crucial. The other problem is the myth of consciousness, as if it would mean anything beyond that: that an AI would then do something, or that it should get rights. No thanks. Get rid of your obsession with it; it only matters when it matters.
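The AST idea above can be caricatured in code. This is my own toy sketch, not from any AST paper: specialized subsystems handle the details on "auto-pilot", while a small controller only ever sees coarse summaries and only issues coarse directives.

```python
# Toy illustration of an attention-schema-style architecture:
# subsystems keep their detail state private; the high-level
# controller sees only summaries and steers only when necessary.

class Subsystem:
    def __init__(self, name):
        self.name = name
        self.detail_state = {}  # full detail, never shown to the controller

    def step(self):
        # runs on "auto-pilot"; returns only a coarse summary
        self.detail_state["ticks"] = self.detail_state.get("ticks", 0) + 1
        return {"name": self.name, "ok": self.detail_state["ticks"] < 100}

class HighLevelController:
    """Receives summaries, not details; intervenes only on a high level."""
    def decide(self, summaries):
        troubled = [s["name"] for s in summaries if not s["ok"]]
        # coarse directive, no micromanagement of subsystem internals
        return {"redirect_attention_to": troubled} if troubled else {"action": "continue"}

vision = Subsystem("vision")
motor = Subsystem("motor")
controller = HighLevelController()
decision = controller.decide([vision.step(), motor.step()])
print(decision)  # → {'action': 'continue'}
```

The point of the sketch is only the information flow: the controller never touches `detail_state`, which matches the claim that consciousness-as-control doesn't need access to every detail.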
NoidoDev t1_j96waic wrote
Reply to comment by perceptusinfinitum in Proof of real intelligence? by Destiny_Knight
SciFi is not reality nor an oracle.
NoidoDev t1_ja5sjcu wrote
Reply to comment by FaceDeer in Meta unveils a new large language model that can run on a single GPU by AylaDoesntLikeYou
>I'd be happy with it just running on my home computer's GPU
This, but as a separate server or rig for security reasons. As an external brain for your robowaifus and maybe other devices like housekeeping robots at home.
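That setup could be sketched like this. The host address and model name are hypothetical, and the OpenAI-style chat payload is just the common convention many local servers (e.g. llama.cpp's server) accept; check your server's docs. Client devices on the LAN would only build small JSON requests against the dedicated rig.

```python
import json

# Hypothetical address of a dedicated inference rig on the home LAN.
LLM_SERVER = "http://192.168.1.50:8080/v1/chat/completions"

def build_request(prompt, model="local-llama"):
    """Build an OpenAI-style chat payload; many local servers accept this shape."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })

# A client device (robot, appliance) would POST this to LLM_SERVER,
# e.g. with urllib.request; the network call is omitted here since
# the rig address is made up for illustration.
payload = build_request("Status report, please.")
print(payload)
```

Keeping the model on one machine and the clients dumb is what makes the "separate rig for security reasons" part work: devices never hold the weights, only the request/response traffic.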