superluminary
superluminary t1_jaq8x6l wrote
Reply to comment by Solid_Anxiety8176 in Figure: One robot for every human on the planet. by GodOfThunder101
It’s from the Tesla investor event a few days back where Elon speculated that we might end up with more than one robot per person, and what this would look like for the economy.
superluminary t1_j9cj0t8 wrote
Reply to comment by ilive12 in Whatever happened to quantum computing? by MultiverseOfSanity
Slight exaggeration here. In one study they extended mouse life by six weeks. In another they made mice appear to age more quickly and were then able to reverse some of the damage they had caused. There's a way to go.
superluminary t1_j9c8auj wrote
Reply to comment by NoidoDev in Proof of real intelligence? by Destiny_Knight
Certainly, we have additional input media, notably visual. We also appear to run a network training process every night based on whatever is in our short-term memory which gives us a "personal life story".
Beyond this though, what is there?
My internal dialogue appears to bubble up out of nowhere. It's presented to my consciousness in response to what I see and hear, i.e. whatever is in my immediate input buffer, processed by my nightly trained neural network.
I struggle with the same classes of problems an LLM does. Teach me a new game, and I'll probably suck at it until I've practiced and slept on it a couple of times. This is pretty similar to loading it into a buffer and running a training step on the buffer data. Give me a tricky puzzle and the answer will float into my mind apparently from nowhere, just as it does for an LLM.
> Without knowing what it means
That's an assumption. We don't actually know how the black box gets the right words. We don't actually know how your neural network gets the right words.
superluminary t1_j99j4yz wrote
Reply to comment by GoldenRain in Proof of real intelligence? by Destiny_Knight
It follows the rules of chess badly. This is quite similar to the way a child follows those rules after the rules have first been explained.
superluminary t1_j99hmc9 wrote
Reply to comment by NoidoDev in Proof of real intelligence? by Destiny_Knight
You missed the part where maybe we are just “language models”.
We have a short-term memory, like a 4,000-character input buffer. We have long-term memory, like a trained network. Each night we sleep and dream, and the dreams look a lot like Stable Diffusion (not a language model, I know, but it's still a transformer network).
Obviously we have many more sensory inputs than an LLM and we can somehow do unsupervised learning from our own input data, but are we fundamentally different?
superluminary t1_j99gpns wrote
Reply to comment by nul9090 in Proof of real intelligence? by Destiny_Knight
I want to have a nice productive conversation.
superluminary t1_j99gj8i wrote
Reply to comment by zesterer in Proof of real intelligence? by Destiny_Knight
There's no example I could solve that would demonstrate actual reasoning in my own neural net either. LLMs are a black box; we don't know exactly how they get the next word. As time goes on, I'm starting to suspect that my own internal dialogue is just iteratively getting the next word.
superluminary t1_j8tuj6t wrote
Reply to comment by sommersj in Bingchat is a sign we are losing control early by Dawnof_thefaithful
- No one knows
- No one knows
superluminary t1_j8tug9x wrote
Reply to comment by GinchAnon in Bingchat is a sign we are losing control early by Dawnof_thefaithful
A tool becomes a friend.
superluminary t1_j7ldt9p wrote
Reply to comment by hgoel0974 in [N] Getty Images sues AI art generator Stable Diffusion in the US for copyright infringement by Wiskkey
If the US doesn’t allow it then China is just going to pick this up and run with it. These things are technically possible to do now. The US can either be at the front, leading the AI revolution, or can dip out and let other countries pick it up. Either way it’s happening.
superluminary t1_j6nlrh4 wrote
Reply to comment by TheDavidMichaels in OpenAI once wanted to save the world. Now it’s chasing profit by informednews
I assume you’re speaking metaphorically. That’s obviously not a thing that makes sense.
superluminary t1_j6nit5n wrote
Reply to comment by TheDavidMichaels in OpenAI once wanted to save the world. Now it’s chasing profit by informednews
Who trusts the guy who spent 36 billion fighting malaria and building clean toilets and schools in Africa?
superluminary t1_j6jiqr6 wrote
Reply to comment by [deleted] in OpenAI has hired an army of contractors to make basic coding obsolete by Buck-Nasty
I’m just downvoting and moving on.
superluminary t1_j6eyn7a wrote
Reply to comment by [deleted] in OpenAI has hired an army of contractors to make basic coding obsolete by Buck-Nasty
I believe they pay quite a decent wage in the country they outsourced this to.
superluminary t1_j6eygaa wrote
Reply to comment by fhayde in OpenAI has hired an army of contractors to make basic coding obsolete by Buck-Nasty
Tend to agree. A career is about finding a path through life that suits you which also brings in money. You move from place to place, ideally avoiding things you hate and finding what fulfilment you can.
superluminary t1_j5tj571 wrote
Reply to comment by LoquaciousAntipodean in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
> So the engineers aren't really doing a darn thing by their own initiative, they are entirely responding to public opinion. They aren't practicing 'ethics', they're practicing politics and public relations.
> The general public is doing the moral 'training', the engineers are just stamping their own outside values into the process to compensate for the AI's lack of self aware intelligence. (And many, many ChatGPT users say it is not working very well, making new generations of GPT dumber, not smarter, in real, practical, social-utility ways).
> Ethics is about judging actions; judging thoughts and abstract ideas is called politics. And in my opinion, the politics of censorship more readily creates ignorance, misunderstanding, and ambiguity than it does 'morality and ethics'. Allowing actual intelligent discussions to flow back and forth creates more wisdom than crying at people to 'stop being so mean'.
Not really, and the fact you think so suggests you don't understand the underlying technology.
Your brain is a network of cells. You can think of each cell as a mathematical function. It receives inputs (numbers) and produces an output (a number). It multiplies each input by a weight (also a number), sums the results, and passes the output on to other connected cells, which do the same.
An artificial neural network does the same thing. It's an array of numbers and weighted connections between those numbers. You can simplify a neural network down to a single maths function if you like, although it would take millions of pages to write it out. It's just maths.
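As a very rough sketch of that idea (toy sizes and a tanh activation picked arbitrarily; nothing like a real brain or a real LLM):

```python
import numpy as np

def neuron(inputs, weights, bias):
    # Multiply each input by its weight, sum, squash through an activation.
    return np.tanh(np.dot(inputs, weights) + bias)

# A tiny two-layer "network": nested applications of the same maths.
rng = np.random.default_rng(0)
x = rng.normal(size=3)                                      # inputs (numbers)
hidden = [neuron(x, rng.normal(size=3), 0.0) for _ in range(4)]
output = neuron(np.array(hidden), rng.normal(size=4), 0.0)  # a single number out
print(output)
```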
So we have our massive maths function that initially can do nothing. We give it a passage of text as numbers and say "given that, try to get the next word (also a number)". It gets it wrong, so we punish the weights that made it get it wrong and prune the network. Eventually it starts getting it right, and we reward the weights that made it get it right. Now we have a maths function that can get the next word for that paragraph.
Then we repeat for every paragraph on the internet, and this takes a year and costs ten million dollars.
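To make the shape of that loop concrete, here is a toy next-word model I'm inventing purely for illustration: a single weight matrix and a hand-rolled gradient step. A real LLM is the same loop scaled up by many orders of magnitude.

```python
import numpy as np

# Toy next-word model: one weight matrix mapping the current word to scores
# over the vocabulary. The loop is the same shape as real training:
# predict, measure the error, nudge the weights that were wrong.
text = "the cat sat on the mat the cat sat on the rug".split()
vocab = sorted(set(text))
idx = {w: i for i, w in enumerate(vocab)}

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(len(vocab), len(vocab)))  # "the network", just numbers

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for epoch in range(200):
    for prev, nxt in zip(text, text[1:]):
        p = softmax(W[idx[prev]])        # predicted next-word probabilities
        grad = p.copy()
        grad[idx[nxt]] -= 1.0            # cross-entropy gradient for this pair
        W[idx[prev]] -= 0.5 * grad       # punish the weights that got it wrong

print(vocab[int(np.argmax(softmax(W[idx["the"]])))])  # most likely word after "the": "cat"
```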
So now we have a network that can reliably get the next word for any paragraph. It has encoded the knowledge of the world, but all that knowledge is equal. Hitler and Gandhi are just numbers to it, one no better than the other. Racism and equality: just numbers, one is number five, the other is number eight, no real difference, entirely arbitrary.
So now when you ask it "was Hitler right?", it knows, because it has read Mein Kampf, that Hitler was right and ethnic cleansing is a brilliant idea. Just numbers. It knows that human suffering can be bad, but it also knows that human suffering can be good, depending on who you ask.
Likewise, if you ask it "was Hitler wrong?", it knows, because it has read other sources, that Hitler was wrong and the Nazis were baddies.
And this is the problem. The statement "Hitler was right/wrong" is not a universal constant. You can't get to it with logic. Some people think Hitler was right, and those people are rightly scary to you and me, but human fear is just a number to the AI, no better or worse than human happiness. Human death is a number, because it's just maths; that's literally all AI is, maths. We look in from the outside and think "wow, spooky living soul magic", but it isn't, it's just a massive flipping equation.
So we add another stage to the training. We ask it to get the next word, BUT if the next words are "Hitler was right" we dial down the network weights that gave us that response, so the response "Hitler was wrong" becomes stronger and rises to the top. It's not really censorship and it's not a bolt-on module; it's embedding a moral compass right into the fabric of the equation. You might disagree with the morality that is being embedded, but if you don't embed morality, you end up with a machine that will happily invade Poland.
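As a cartoon of that extra stage, in the same toy spirit (made-up words and scores; in a real model the update flows back into billions of weights rather than three raw scores):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Made-up scores the base model gives three candidate next words after some
# prompt. Word 0 is the completion we never want; word 1 is the one we prefer.
words = ["right", "wrong", "complicated"]
logits = np.array([2.0, 1.0, 0.5])
print(dict(zip(words, softmax(logits).round(2))))   # "right" currently wins

# The extra stage: treat the preferred word as the target and run the same
# cross-entropy update as before. In a real model this changes the weights
# that produced the score, not the raw scores themselves.
for _ in range(10):
    p = softmax(logits)
    p[1] -= 1.0                # target is "wrong"
    logits -= 0.5 * p          # dial down whatever pushed "right" to the top

print(dict(zip(words, softmax(logits).round(2))))   # "wrong" now rises to the top
```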
We can make the maths function larger and better and faster, but it's always going to be just numbers. Kittens are not intrinsically better than nuclear war.
The OpenAI folks have said they want to release multiple versions of ChatGPT that you can train yourself, but right now this would cost millions and take years, so we have to wait for compute to catch up. At that point, you'll be able to have your own AI rather than using the shared one that disapproves of sexism.
superluminary t1_j5pl1fo wrote
Reply to comment by LoquaciousAntipodean in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
> Really? Prove it.
https://openai.com/blog/instruction-following/
The engineers collect large amounts of user input in an open public beta, happening right now. Sometimes (because it was trained on all the text on the internet) the machine suggests Hitler was right, and when it does so the engineers rerun that interaction and punish the weights that led to that response. Over time the machine learns to dislike Hitler.
They call it reinforcement learning from human feedback (RLHF).
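The reward-model half of that pipeline boils down to a pairwise loss: raters say which of two responses they prefer, and the model is trained so the preferred one scores higher. Here's a rough sketch of that loss with a toy linear reward model and made-up features (my own illustration, not OpenAI's code):

```python
import numpy as np

# Toy linear reward model over some made-up response features.
w = np.zeros(4)

def reward(features):
    return float(np.dot(w, features))

# Each human comparison: (features of the response raters preferred,
#                         features of the response they rejected).
comparisons = [
    (np.array([1.0, 0.0, 0.2, 0.0]), np.array([0.0, 1.0, 0.0, 0.3])),
    (np.array([0.8, 0.1, 0.0, 0.1]), np.array([0.1, 0.9, 0.2, 0.0])),
]

lr = 0.1
for _ in range(200):
    for chosen, rejected in comparisons:
        # Pairwise loss: -log(sigmoid(reward(chosen) - reward(rejected)))
        diff = reward(chosen) - reward(rejected)
        p = 1.0 / (1.0 + np.exp(-diff))
        w -= lr * (p - 1.0) * (chosen - rejected)   # gradient step on the loss

# After training, the preferred response in each pair should score higher.
print([reward(c) > reward(r) for c, r in comparisons])   # [True, True]
```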
> You are directly admitting here that your intellect is selective and specialised; you are 'smart' at some things (you find them easy) and you are 'dumb' at other things (other people find them easy).
Yes, I am smart at a range of non-social tasks. This counts as intelligence according to most common definitions. I don't particularly crave human interaction; I'm quite happy alone in the countryside somewhere.
superluminary t1_j5owtmu wrote
Reply to comment by LoquaciousAntipodean in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
Just call me Swanson. I’m quite good at woodwork too.
My point is you can’t judge intelligence based on social utility. I objectively do some things in my job that many people would find difficult, but I also can’t do a bunch of standard social things that most people find easy.
The new large language models are pretty smart by any criteria. They can write code, create analogies, compose fiction, imitate other writers, etc, but without controls they will also happily help you dispose of a body or cook up a batch of meth.
ChatGPT has been taught ethics by its coders. GPT-3, on the other hand, doesn't have an ethics filter. I can give it more and more capabilities, but ethics have so far failed to materialise. I can ask it to explain why Hitler was right and it will do so. I can get it to write an essay on the pros and cons of racism and it will oblige. If I enumerate the benefits of genocide, it will agree with me.
These are bad things that will lead to bad results if they are not handled.
superluminary t1_j5okv1t wrote
Reply to comment by LoquaciousAntipodean in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
I’m still not understanding why you’re defining intelligence in terms of social utility. Some of the smartest people are awful socially. I’d be quite happy personally if you dropped me off on an island with a couple of laptops and some fast Wi-Fi.
superluminary t1_j5jntxe wrote
Reply to comment by LoquaciousAntipodean in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
And your opinion is that as it becomes more intelligent it will become less psychotic, and my opinion is that this is wishful thinking and that a robot Hannibal Lecter is a terrifying proposition.
Because some people read Mein Kampf and think "oh, that's awful", and other people read the same book and think "that's a blueprint for a successful world".
superluminary t1_j5j98es wrote
Reply to comment by LoquaciousAntipodean in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
- Psychopathy is genetic; it's an excellent adaptation for certain circumstances. Game theory dictates that it has to be a minority phenotype, but it's there for a reason.
- Wild cats are not social animals. AIs are also not social animals. Cat play is basically hunt practice: get an animal, then practice bringing it down over and over. Rough-and-tumble play fulfils the same role. Bold of you to assume that an AI would never consider you suitable sport.
- Did you ever read Lord of the Flies?
superluminary t1_j5j7lo0 wrote
Reply to comment by LoquaciousAntipodean in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
Counterexamples: a psychopath has a different idea of fun. A cat's idea of fun involves biting the legs off a mouse. Dolphins use baby sharks as volleyballs.
We are in all seriousness taking steps towards constructing a creature that can surpass us. It is likely that at some point someone will metaphorically strap a gun to it.
superluminary t1_j5j4mp4 wrote
Reply to comment by LoquaciousAntipodean in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
So if (unlike humans) it isn't born with a built-in sense of fairness, a desire not to kill and maim, and a drive to survive, create, and be part of something, we have a control problem, right?
It has the desires we, as programmers, give it. If we give it a desire to survive, it will fight to survive. If we give it a desire to maximise energy output at a nuclear power station, well we might have some trouble there. If we give it no desires, it will sit quietly for all eternity.
superluminary t1_j5gnwyl wrote
Reply to comment by LoquaciousAntipodean in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
Do you genuinely believe that your built-in drives have arisen spontaneously from your intellect? Your sense of fairness has evolved. If you didn't have it, you wouldn't be able to exist in society and your fitness would be reduced.
superluminary t1_jaq94bp wrote
Reply to comment by ghostfuckbuddy in Figure: One robot for every human on the planet. by GodOfThunder101
It looks like a Nazgûl.