onyxengine t1_jadzeid wrote
Reply to comment by rya794 in When will AI develop faster than white collar workers can reskill through education? by just-a-dreamer-
We kinda are, if the industry experts in the field you want to join are collaborating with machine learning engineers to build an AI that streamlines their workflows and knows what they know. You’re not going to become an industry expert before that AI becomes a tool that replaces the industry experts.
onyxengine t1_j9w77ls wrote
The model isn’t how you get AGI; the architecture the model is plugged into is.
onyxengine t1_j9s57jy wrote
Reply to New agi poll says there is 50% chance of it happening by 2059. Thoughts? by possiblybaldman
I’m convinced it’s happening before 2035.
onyxengine t1_j9jsl97 wrote
Reply to comment by MikeLinPA in Would the most sentient ai ever actually experience emotion or does it just think it is? Is the thinking strong enough to effectively be emotion? by wonderingandthinking
There is method to the madness, it really depends on the design
onyxengine t1_j9iwkk5 wrote
Reply to comment by superjudgebunny in Microsoft Researchers Are Using ChatGPT to Control Robots, Drones by 0neiria
Instruction sets don’t make any sense to me in terms of AI. Task-specific decision calibration kinda makes sense, depending on the model.
onyxengine t1_j9hgda1 wrote
Reply to comment by SpecialMembership in [WSJ] When Your Boss Is Tracking Your Brain by Tom_Lilja
It’s such useful data, but I wouldn’t trust any entity that was profit-motivated with it, and even then malicious or overly self-serving actors could abuse the information.
onyxengine t1_j9g9096 wrote
Reply to Artificial Intelligence needs its own version of the Three Laws of Robotics so it doesn’t kill humans. by Fluid_Mulberry394
You can never guarantee that something capable of a thing will never do that thing. If you want AI to remain harmless, then you have to construct it in such a way that it can’t do physical harm.
And that ship has sailed. Most militaries are testing AI for scouting and targeting, and we even have weaponized law-enforcement robots in the pipeline. San Francisco’s program is the one I’m currently aware of; I’m sure there are more.
Even the linguistic models are extremely dangerous. Language is the command-line script for humans, and malicious people can program AI to convince people to do things that cause harm.
We’re not at the point where we need to worry about AI taking independent action to harm humans, but on the way there is plenty of room for humans to cause plenty of harm with AI.
Until we build AGI that has extremely sophisticated levels of agency, every time an AI hurts a human being it’s going to be because a human wanted it to be the case, or overlooked cases in which what they were doing could be harmful.
onyxengine t1_j96l9vx wrote
Reply to comment by Surur in Stop ascribing personhood to complex calculators like Bing/Sydney/ChatGPT by [deleted]
They are actively working on it.
onyxengine t1_j8z672v wrote
Reply to comment by TunaFishManwich in Microsoft Killed Bing by Neurogence
We thought the same about the capability we are seeing from AI. The cloud is pretty accessible.
onyxengine t1_j8uo6rw wrote
Robot Carnival, an anime from the 80s, put me onto the vibe; I was always super into sci-fi. I wanted to build robots as a kid, and saw awesome Kurzweil interviews as a kid.
onyxengine t1_j6pb3cm wrote
Reply to comment by luisbrudna in I love how the conversation about AI has developed on the sub recently by bachuna
GPT-3 has been out for a while; it’s GPT-3 restricted by devs and given a memory and personality.
onyxengine t1_j62hk4l wrote
Reply to comment by BAN_ME_2 in If given the chance in your life time, will join a theoretical transhumanist hive mind? by YobaiYamete
Lol, godspeed
onyxengine t1_j62hiot wrote
Everything has limitations, and for some time the AGIs we build will be bound by the limitations we place on them. The details matter: a hyper-intelligent AI confined to a room with no internet access or any ability to communicate with humans probably couldn’t accomplish much.
Let it talk to a small group of people, though, and it might be able to convince them to provision it with the minimum resources needed to seize control of the entire planet.
onyxengine t1_j62gyie wrote
Reply to comment by BAN_ME_2 in If given the chance in your life time, will join a theoretical transhumanist hive mind? by YobaiYamete
Yah y??
onyxengine t1_j62gcuh wrote
Reply to If given the chance in your life time, will join a theoretical transhumanist hive mind? by YobaiYamete
I don’t trust anyone to be in my mind at that level with the next evolution of tech. I’m down for it, but the level of disclosure for how the tech works would have to meet a pretty high bar. If the “code” isn’t open source, I would want to pass.
I wouldn’t join a network fielded by corporations until it became do or die for basic survival in society.
onyxengine t1_j5p7onv wrote
Reply to comment by V-I-S-E-O-N in AI doomers everywhere on youtube by Ashamed-Asparagus-93
Slowing down seems more like wishful thinking than a poorly rolled-out UBI solution does, because we didn’t.
onyxengine t1_j4xa6hm wrote
Reply to AI doomers everywhere on youtube by Ashamed-Asparagus-93
To be fair, AI is going to create economic upheaval. In the long term it should be an overall positive; in the short term it should accelerate job loss to the point that governments have no choice but to start rolling out UBI.
onyxengine t1_j4q0lu4 wrote
Reply to comment by Ginkotree48 in Is it wishful thinking that I feel like we’re way closer than we thought? by fignewtgingrich
Same, dude. It feels very much like that second when the pilot finally starts the takeoff.
onyxengine t1_j4hagtl wrote
There’s no void to fill; it’s really just a philosophy that embraces the potential of technology to augment human form and society. If there is a void that people are looking for transhumanism to fill, it’s the void in our lifespans. I could easily do 400 years given how much rapid and radical change we are likely to see. It would be amazing to watch us build the first underwater cities and live in one, or live on an off-planet colony, or even contribute to building them.
onyxengine t1_j3pbq7i wrote
Reply to Arguments against calling aging a disease make no sense relative to other natural processes we attempt to fix. by Desperate_Food7354
I don’t think it’s a disease; I think it’s a preconfigured setting for the replacement of individual members of a species. Women grow brand new organisms with the clock set to zero all the time. It seems that if we knew what we were doing, we could induce phases that rejuvenated the individual indefinitely.
onyxengine t1_j3cw0rt wrote
Reply to comment by DukkyDrake in A more realistic vision of the AI & Programmer's jobs story by DukkyDrake
Haven’t seen this expressed better by anyone else
onyxengine t1_j2ciczh wrote
Reply to There's now an open source alternative to ChatGPT, but good luck running it by ravik_reddit_007
It’s expensive, but it is feasible for an organization to raise the capital to deploy the resources. That’s better than AI of this scale being completely locked down as proprietary code.
onyxengine t1_j29sh04 wrote
Reply to Is AGI really achievable? by Calm_Bonus_6464
The neural network is the logic center of the mind; it’s definitely not nothing with regard to generating machine consciousness. Architecturally, we can see what neural nets are missing by looking at ourselves.
Motivation (survival instincts, threat detection, sex drives, pair bonding, etc.). Not to say we need to fabricate sex organs, but we need to generate prime directives that NNs try to solve for outside of what NNs are already doing. That’s how human consciousness is derived: the person is a virtual apparatus invested in our biological motivation. We fight and argue not just to survive but for what we desire.
Agency in the context of an environment (cameras, robotic limbs, sensors recording a real-time environment). We field neural nets in tightly controlled, human-designed ecosystems; they don’t have the same kind of free rein to collect data as humans do.
There are parts of the human mind neural nets are not simulating; we have to construct those parts and connect them to NNs.
I think conscious machines are a matter of time and an expansion of ML architecture to encompass more than just problem solving. Machines don’t have a why yet.
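Purely as a speculative toy, the “prime directives outside the NN” idea above can be sketched as a loop where a set of drives supplies the why, and a trivial policy (a stand-in for the neural net) is pointed at whichever drive is least satisfied. Every name here (`Drive`, `Agent`, `urgency`) is made up for illustration; this is not a real consciousness architecture.

```python
class Drive:
    """A prime directive the agent tries to satisfy (e.g., keep energy high)."""
    def __init__(self, name, level=0.5):
        self.name = name
        self.level = level  # 0.0 = fully unsatisfied, 1.0 = satisfied

    def urgency(self):
        return 1.0 - self.level


class Agent:
    def __init__(self, drives):
        self.drives = drives

    def most_urgent_drive(self):
        # The policy (stand-in for the NN) solves for whichever
        # drive is currently least satisfied.
        return max(self.drives, key=lambda d: d.urgency())

    def act(self):
        target = self.most_urgent_drive()
        # Acting on a drive partially satisfies it; the others decay a bit,
        # so attention keeps shifting between directives over time.
        for d in self.drives:
            if d is target:
                d.level = min(1.0, d.level + 0.3)
            else:
                d.level = max(0.0, d.level - 0.05)
        return target.name


agent = Agent([Drive("energy", 0.2), Drive("curiosity", 0.9)])
print(agent.act())  # prints "energy": the least satisfied drive is pursued first
```

The point of the sketch is only that the drives live outside the problem-solving machinery and give it something to solve for, which is the “why” the comment says machines are missing.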
onyxengine t1_j23qkpz wrote
Reply to comment by Artanthos in ChatGPT Could End Open Research in Deep Learning, Says Ex-Google Employee by lambolifeofficial
Eventually, yes, it will become way faster than “closed systems,” because it will be in the cloud on the best machines. Cloud hosting services are clearly incentivized to make distributed training for open source communities affordable and accessible.
onyxengine t1_je02wy6 wrote
Reply to The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
It’s “real” AI. General intelligence already exists in labs, and the populace can already build their own generally intelligent AIs with API access.