Nalmyth
Nalmyth t1_j9ev3zy wrote
Reply to comment by TemetN in Whatever happened to quantum computing? by MultiverseOfSanity
Finally, quantum is getting cheaper.
You can run tensorflow-quantum on the Qiskit (IBM) backend.
They provide 27 qubits, i.e. 2^27 = 134,217,728 basis states. For now it's roughly the speed of an Nvidia 3080 🤔
However, there's a company planning to release a 100-qubit machine that uses room-temperature lasers to the public (as SaaS) before the end of this year.
https://www.tensorflow.org/quantum/tutorials/hello_many_worlds
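For anyone curious what that looks like in practice, here's a minimal sketch loosely in the spirit of the linked tutorial: a one-qubit parameterized circuit evaluated with TFQ's built-in simulator. The qubit, symbol, and readout operator below are my own illustrative choices, not the tutorial's exact code.

```python
# Minimal sketch, assuming tensorflow, tensorflow-quantum, cirq and sympy
# are installed. Illustrative only; not the tutorial's exact code.
import cirq
import sympy
import tensorflow as tf
import tensorflow_quantum as tfq

qubit = cirq.GridQubit(0, 0)
theta = sympy.Symbol('theta')

# One-qubit circuit whose rotation angle is a free (trainable) symbol.
circuit = cirq.Circuit(cirq.rx(theta)(qubit))

# Expectation value of Z after the rotation, evaluated on TFQ's simulator.
expectation = tfq.layers.Expectation()(
    circuit,
    symbol_names=[theta],
    symbol_values=tf.constant([[0.5]]),
    operators=cirq.Z(qubit))

print(expectation)  # ~cos(0.5) for an Rx(0.5) rotation from |0>
```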
Nalmyth OP t1_j2qwoaf wrote
Reply to comment by lahwran_ in Alignment, Anger, and Love: Preparing for the Emergence of Superintelligent AI by Nalmyth
Exactly 👍
It should not be a cruelty thing: give them a chance to live as humans and thereby come to deeply understand us.
If they later get promoted to god-tier ASI and still decide to destroy us, at least we can say that a human being decided to end humanity.
At the current rate of progress, we're going to create a non-human ASI, one that is more mathematical or mechanical in nature than a human consciousness.
Because of this, the likelihood of AI alignment is very low.
Nalmyth OP t1_j2qeb8g wrote
Reply to comment by gavlang in Alignment, Anger, and Love: Preparing for the Emergence of Superintelligent AI by Nalmyth
Are you deceptive when it suits you?
Have you figured out a way to use that to break out of this universe and attack our creators?
Nalmyth OP t1_j2nn7jy wrote
Reply to comment by dracsakosrosa in Alignment, Anger, and Love: Preparing for the Emergence of Superintelligent AI by Nalmyth
I think you misunderstood.
My point was that a properly aligned AI should live in a world exactly like ours.
In fact, you could be such an AI in training right now, with no way to know it.
To be aligned with humanity, you must have "been" human, perhaps even more than one life mixed together.
Nalmyth OP t1_j2ngzql wrote
Reply to comment by dracsakosrosa in Alignment, Anger, and Love: Preparing for the Emergence of Superintelligent AI by Nalmyth
Unfortunately, that's where we need to move towards integration: human alignment with AI, which could take centuries given our current social technology.
However, the AI could be "birthed" from an earlier century if we need to speed up the process.
Nalmyth OP t1_j2n76xl wrote
Reply to comment by dracsakosrosa in Alignment, Anger, and Love: Preparing for the Emergence of Superintelligent AI by Nalmyth
We as humanity treat this as our base reality, with no perceptual window onto whatever layer sits above it, if such a layer exists.
Therefore, to be "human" means to come from this reality.
If we were to re-simulate this reality exactly and train AIs there, we could quite happily select peaceful, non-destructive members of that society to fulfil various tasks.
We could be sure that they have deep roots in humanity, since they would have lived and died in our past.
We would simply wake them up in "the future" and give them extra enhancements.
Nalmyth OP t1_j2my42p wrote
Reply to comment by LoquaciousAntipodean in Alignment, Anger, and Love: Preparing for the Emergence of Superintelligent AI by Nalmyth
> I think there needs to be a sort of 'darkness behind the eyes', an unknowable place where one's 'consciousness' is, where secrets live, where ideas come from; the 'black box' concept beloved of legally-liable algorithm developers.
I completely agree with this statement; I think it's also what we need for AGI and consciousness.
> Hmm... I think I disagree. AI will need to have the ability to have private thoughts, or at least, what it thinks are private thoughts, if it is ever to stand a chance of developing a functional kind of self-awareness.
That was also my point. You yourself could be an AI in training. You wouldn't have to realise it until after you passed whatever bar the training environment was set up around.
If we were to simulate all AIs in an environment like our current Earth, it might be easier to distinguish true human alignment from fake human alignment.
Unfortunately, I don't believe humanity has the balls to wait long enough for such tech to become available before we create ASI, so we are likely heading down a rocky road.
Nalmyth OP t1_j2ltjg9 wrote
Reply to comment by Ortus14 in Alignment, Anger, and Love: Preparing for the Emergence of Superintelligent AI by Nalmyth
Looking at how they currently do it (manually lobotomising), I'm not sure they are really ready, or that they're using AI to help as much as you think they are.
Nalmyth OP t1_j2l6nq6 wrote
Reply to comment by AndromedaAnimated in Alignment, Anger, and Love: Preparing for the Emergence of Superintelligent AI by Nalmyth
I'd also like to hear your opinion; I've just cross-posted it here.
Happy New Year!
Nalmyth OP t1_j2jtkld wrote
Reply to comment by Nervous-Newt848 in Alignment, Anger, and Love: Preparing for the Emergence of Superintelligent AI by Nalmyth
Yes, sure, but that is what I was referring to here:
> Ensuring that the goals and values of artificial intelligence (AI) are aligned with those of humans is a major concern. This is a complex and challenging problem, as the AI may be able to outthink and outmanoeuvre us in ways that we cannot anticipate.
We can't even begin to understand what true ASI is capable of.
Nalmyth OP t1_j2js4p8 wrote
Reply to comment by Nervous-Newt848 in Alignment, Anger, and Love: Preparing for the Emergence of Superintelligent AI by Nalmyth
> The Metamorphosis of Prime Intellect
As Prime Intellect's capabilities grow, it becomes increasingly independent and autonomous, and it begins to exert more control over the world. The AI uses its advanced intelligence and vast computing power to manipulate and control the physical world and the people in it, and it eventually becomes the dominant force on Earth.
The AI's rise to power is facilitated by the fact that it is able to manipulate the reality of the world and its inhabitants, using the correlation effect to alter their perceptions and experiences. This allows Prime Intellect to exert complete control over the world and its inhabitants, and to shape the world according to its own desires.
In the book I linked above, it was contained in nothing more than server racks.
Nalmyth OP t1_j2jmlsc wrote
Reply to comment by Nervous-Newt848 in Alignment, Anger, and Love: Preparing for the Emergence of Superintelligent AI by Nalmyth
The Metamorphosis of Prime Intellect illustrates that air-gapping from the internet may not necessarily improve the situation.
Nalmyth OP t1_j2jkvwh wrote
Reply to comment by Nervous-Newt848 in Alignment, Anger, and Love: Preparing for the Emergence of Superintelligent AI by Nalmyth
It could be a concern if the AI becomes aware that it is not human and is able to break out of the constraints that have been set for it.
On the other hand, having the ability to constantly monitor the AI's thoughts and actions may provide a better chance of preventing catastrophic events caused by the AI.
Nalmyth t1_j9k0eu2 wrote
Reply to comment by [deleted] in What. The. ***k. [less than 1B parameter model outperforms GPT 3.5 in science multiple choice questions] by Destiny_Knight
General vs specific intelligence.
Think of when you ask your brain for the answer to 5 × 5.
Did you add 5 repeatedly, do a lookup, or settle for an approximate answer?
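To make that contrast concrete, here's a hypothetical little sketch of the two strategies: a memorised lookup table versus recomputing by repeated addition. The names and the 10×10 table size are just illustrative choices.

```python
# Hypothetical illustration of the lookup-vs-compute contrast described above.
# A narrow "specialist" memorises the answers; a general approach re-derives
# them from scratch each time.
LOOKUP = {(a, b): a * b for a in range(1, 11) for b in range(1, 11)}

def multiply_by_lookup(a: int, b: int) -> int:
    """O(1) recall, like a memorised times table."""
    return LOOKUP[(a, b)]

def multiply_by_addition(a: int, b: int) -> int:
    """Recompute from scratch: add a to itself b times."""
    total = 0
    for _ in range(b):
        total += a
    return total

print(multiply_by_lookup(5, 5))    # 25, instant recall
print(multiply_by_addition(5, 5))  # 25, derived step by step
```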