Ortus14
Ortus14 t1_j8h7cxk wrote
Reply to Altman vs. Yudkowsky outlook by kdun19ham
They both have sound arguments.
Altman's argument is that weaker AIs on the road to AGI may solve alignment and prevent value drift.
But Yudkowsky should be required reading for everyone working in the field of AGI or alignment. He clearly outlines how the problem is not easy, and may be impossible. This should not be taken lightly by those working on AGI, because we don't get a second chance.
Ortus14 t1_j8dquxz wrote
Am I the only one who sees this as a positive? A search engine that appears to care about me and is sentimental makes me appreciate it more. Add a warm voice to it, and we have a simple version of "Her".
Ortus14 t1_j8cqf6w wrote
Reply to I Am So Starving [Story] by humvee911
This story is self contradictory. You say the last thing you ate was a lizard nine days ago, and you also say you had a Twix and half a bag of Fritos today.
You wouldn't be doing kickline tryouts if you were starving to death, nor would you turn down food that was offered to you because it wasn't good enough, or waste money on Diet Coke or anything else with zero calories.
The body does generate less heat after a period of caloric deprivation, but core body temperature is 98.6°F, so if the "sweltering heat" is higher than that, you wouldn't be shivering cold.
So it's obviously not all true. It's hyperbolic. What are you trying to communicate?
Ortus14 t1_j8801ak wrote
At the machine level, code and data are the same. The distinction is a conceptual one made for human programmers.
But an AI that experiences the world, learns, and upgrades itself with more memory and processing power is just as capable of producing an intelligence explosion as one that reprograms itself.
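A minimal sketch of the code-is-data point: in Python, a program's source can live in an ordinary string (data) until the moment it is executed, at which point it becomes code.

```python
# A function definition held as plain data (a string).
src = "def square(x):\n    return x * x\n"

namespace = {}
exec(src, namespace)  # the data is interpreted as code

# The string has become a callable function.
print(namespace["square"](7))  # → 49
```

At the hardware level the same thing holds: instructions are just bytes in memory, and nothing but convention separates them from the bytes they operate on.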
Ortus14 t1_j85mi1v wrote
Your "laziness" is your mind's way of telling you that you shouldn't rush into something else and wind up wasting even more of your life. You knew this, but got pressured by your parents into making the wrong decision and wasting your time.
Our emotions exist for a reason. Shut out distractions. Give your mind time. And you will find your new best path forward.
Ortus14 t1_j7z34ei wrote
Reply to comment by FC4945 in The copium goes both ways by IndependenceRound453
I'm getting sick of all these doomers like the OP coming to this sub and repeating the same debunked talking points. They know literally nothing about the AGI algorithms, trends, or "expert" consensus, but think they're geniuses who can talk on any subject despite their massive ignorance.
Ortus14 t1_j7z1i23 wrote
Reply to The copium goes both ways by IndependenceRound453
>despite the fact that most AI experts don't think we'll have a singularity-like event for at least a few decades, if not longer.
Completely ignorant people keep coming to this sub and repeating this lie.
Ortus14 t1_j7unrlz wrote
Reply to Based on what we've seen in the last couple years, what are your thoughts on the likelihood of a hard takeoff scenario? by bloxxed
Slow and gradual enough that we have a good chance of achieving a decent level of alignment, especially given the practices, algorithms, and methodologies developed for alignment by companies like Microsoft and OpenAI.
Ortus14 t1_j7s9cr2 wrote
Reply to comment by Temporyacc in I asked Microsoft's 'new Bing' to write me a cover letter for a job. It refused, saying this would be 'unethical' and 'unfair to other applicants.' by TopHatSasquatch
Like social media, it's a balancing act. We don't want videos describing how to do harmful or illegal activity, which is why the most popular social media platforms all have some level of censorship.
The same goes for AI. It should not aid in harmful or illegal activity. What constitutes "harm" is up to public opinion.
Ortus14 t1_j6m4xip wrote
Reply to A.I TIMELINE by Aze_Avora
When AGI reverses biological aging.
Ortus14 t1_j6hka8d wrote
Reply to comment by cjeam in A McDonald’s location has opened in White Settlement, TX, that is almost entirely automated. Since it opened in December 2022, public opinion is mixed. Many are excited but many others are concerned about the impact this could have on millions of low-wage service workers. by Callitaloss
This is what I was about to ask. If the kitchen isn't automated then it's not close to fully automated yet.
Ortus14 t1_j6hdw3z wrote
Reply to comment by Konmarty in [Discussion] How to get into action without having a goal yet? by Konmarty
Exactly. You aren't likely to make friends if you don't push yourself out of your comfort zone.
Ortus14 t1_j6h80wh wrote
Reply to comment by Konmarty in [Discussion] How to get into action without having a goal yet? by Konmarty
No pain no gain.
As far as reeking of desperation goes, that can be fixed by practicing non-attachment to outcome and focusing on the process.
So instead of telling yourself, "I'm going to make a friend," you tell yourself, "I'm going to introduce myself to three groups of people, and if we vibe I'll exchange contact info and suggest we all hang out." They could completely ignore you, but you should consider that a success, and maybe even reward yourself for doing it.
Then you try a different approach. Maybe work on your vibe, your style, your fitness, your grooming, or where you choose to go, then try again. But again, the success is that you did the experiment, not whether it worked. Either way you got new experiences and data that will help you in the future.
Ortus14 t1_j6cojdl wrote
Reply to opinion: more competition increased the speed of development but will decrease the priority of safety by truthwatcher_
Not prioritizing safety at this stage results in a PR nightmare.
We don't yet have the compute for civilization-ending ASI.
Ortus14 t1_j6cnl17 wrote
It sounds like your goal is to make friends. That's the only thing you wrote that you care about.
That's a good goal to focus on, and it will be more satisfying than most things you could chase. Now you just need to figure out how to achieve it.
For example, if you start a course with the goal of making friends and don't click with anyone, drop the course. Remember your goal. Then find another situation where you might meet someone, and repeat.
I think one issue you have is that you want to have tons of different goals, but when you spread yourself thin, you achieve nothing. Making a friend is a good first goal. Achieve that.
Ortus14 t1_j6bwabz wrote
If ChatGPT used a speech-to-text API like Whisper, plus the latest text-to-speech APIs, which sound like real people, I could see people talking to it like an assistant all day long.
Many people will still use it, but making it hands free makes it even more convenient and I could see even more people using it.
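The pipeline being described is just three stages chained together. Here's a hedged sketch; `transcribe`, `chat`, and `speak` are stand-in stubs, not real API calls, and you would swap in whatever speech-to-text, chat-model, and text-to-speech services you actually use.

```python
def transcribe(audio: bytes) -> str:
    # Stand-in for a speech-to-text call (e.g. a Whisper-style service).
    return audio.decode("utf-8")

def chat(prompt: str) -> str:
    # Stand-in for the chat model.
    return f"Echo: {prompt}"

def speak(text: str) -> bytes:
    # Stand-in for a text-to-speech call.
    return text.encode("utf-8")

def assistant_turn(audio_in: bytes) -> bytes:
    """One hands-free turn: audio in -> text -> reply -> audio out."""
    return speak(chat(transcribe(audio_in)))

print(assistant_turn(b"what time is it?"))
```

The point is that hands-free use is no extra modeling work, just plumbing around the existing chat loop.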
Ortus14 t1_j6btowi wrote
Reply to I’m ready by CassidyHouse
Even if it takes me a billion years to get a girlfriend I will never give up. Haha
Ortus14 t1_j6atdpa wrote
Reply to comment by sumane12 in I don't see why AGI would help us by TheOGCrackSniffer
👆 This guy also gets it.
Ortus14 t1_j6apv1z wrote
Reply to comment by questionasker577 in Why did 2003 to 2013 feel like more progress than 2013 to 2023? by questionasker577
All technology abides by S-curves, as does all life (including AI) and all evolution.
In evolution the start of a new S-curve is called punctuated equilibrium.
In computational theory it has to do with breaking out of a "local maximum". In game theory it may be referred to as breaking out of an "equilibrium".
It's important to note that these are all cascading S-curves. That is to say, smaller S-curves on top of larger S-curves, which themselves are on top of larger S-curves. If you ever think progress is slowing down, zoom out.
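The cascading idea can be sketched numerically: stack several logistic (S-shaped) curves so that as one plateaus, the next takes off. The midpoints below are arbitrary, purely for illustration.

```python
import math

def logistic(t, midpoint, rate=1.0):
    """A single S-curve: slow start, rapid middle, plateau."""
    return 1.0 / (1.0 + math.exp(-rate * (t - midpoint)))

def cascading(t, midpoints=(2.0, 6.0, 10.0)):
    """Stacked S-curves: each plateau is broken by the next curve's takeoff."""
    return sum(logistic(t, m) for m in midpoints)

# Between any two takeoffs, progress can look locally flat,
# but zoomed out the stacked total keeps climbing.
for t in range(0, 13, 4):
    print(t, round(cascading(t), 3))
```

This is also why "progress is slowing" claims are sensitive to the window you measure over: sample inside one plateau and growth looks stalled; widen the window and the next curve shows up.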
Ortus14 t1_j6apflo wrote
Reply to comment by GayHitIer in Why did 2003 to 2013 feel like more progress than 2013 to 2023? by questionasker577
For clarity, it's cascading S-curves: S-curves on top of S-curves, on top of S-curves, on top of the big daddy S-curve that started with the Big Bang, when complexity began increasing with the formation of elements, etc.
Ortus14 t1_j67kqld wrote
Reply to What does singularity look like to you? by [deleted]
Mid to late 2030s.
Impossible to predict what it will look like, because AI will be changing the world in ways we are not creative or intelligent enough to predict.
Ortus14 t1_j66ewl1 wrote
Reply to [Discussion] How can I best use my upcoming 4-weeks off work? I am very lazy and afraid I am going to waste it and end up doing nothing by Anthony_Delafino
Throw away all of your pot and destroy/give away all of your gaming systems and games.
Ortus14 t1_j6605zo wrote
Reply to comment by RobleyTheron in ⭕ What People Are Missing About Microsoft’s $10B Investment In OpenAI by LesleyFair
The control problem is still an issue. Compute costs decrease by roughly 1000x per decade, so while something like GPT-4 may be extremely expensive now, superhuman ASI could be ubiquitous this century, and the first ASI, built by whichever company can afford it, could be reached within the next few decades.
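To see why the cost trend matters, here's the arithmetic sketched out. The $100M starting cost is a hypothetical round number, and 1000x per decade is the stated rule of thumb, not a measured figure.

```python
# Illustrative projection: a fixed compute workload under a
# hypothetical ~1000x-per-decade cost decline.
initial_cost = 100_000_000  # hypothetical $100M training run today

for decade in range(4):
    cost = initial_cost / (1000 ** decade)
    print(f"after {decade * 10} years: ${cost:,.2f}")
```

At that rate, what costs $100M today would cost about $100K in a decade and roughly $100 in two, which is the whole "expensive frontier model now, ubiquitous later" argument in one line of math.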
Ortus14 t1_j62mw6p wrote
Reply to If given the chance in your life time, will join a theoretical transhumanist hive mind? by YobaiYamete
If the benefits outweigh the cons, I will.
If it's anything like joining the twitter hive mind, then no. Hive minds don't always benefit individual members.
Ortus14 t1_j8havmv wrote
Reply to The new Bing AI hallucinated during the Microsoft demo. A reminder these tools are not reliable yet by giuven95
Still more accurate than humans, most of whom are in a constant state of hallucination.