Ortus14

Ortus14 t1_j8h7cxk wrote

They both have sound arguments.

Altman's argument is maybe that weaker AIs on the road to AGI will solve alignment and prevent value drift.

But Yudkowsky should be required reading for everyone working in the field of AGI or alignment. He clearly outlines how the problem is not easy, and may be impossible. This should not be taken lightly by those working on AGI, because we don't get a second chance.

6

Ortus14 t1_j8cqf6w wrote

This story is self contradictory. You say the last thing you ate was a lizard nine days ago, and you also say you had a Twix and half a bag of Fritos today.

You wouldn't be doing kickline tryouts if you were starving to death, nor would you turn down food that was offered to you because it wasn't good enough, or waste money on diet coke, or anything else with zero calories.

The body does generate less heat when you've been in caloric deprivation for a period of time, but the core body temperature is 98.6°F, so if "sweltering heat" is higher than that, you wouldn't be shivering cold.

So it's obviously not all true. It's hyperbolic. What are you trying to communicate?

3

Ortus14 t1_j85mi1v wrote

Your "laziness" is your mind's way of telling you that you shouldn't rush into something else and wind up wasting even more of your life. You knew this, but got pressured by your parents into making the wrong decision and wasting your time.

Our emotions exist for a reason. Shut out distractions. Give your mind time. And you will find your new best path forward.

2

Ortus14 t1_j7z34ei wrote

I'm getting sick of all these doomers like the OP coming to this sub and repeating the same debunked talking points. They know literally nothing about AGI algorithms, trends, or "expert" consensus, but they think they're geniuses who can talk on any subject despite their massive ignorance.

6

Ortus14 t1_j7z1i23 wrote

>despite the fact that most AI experts don't think we'll have a singularity-like event for at least a few decades, if not longer.

Completely ignorant people keep coming to this sub and repeating this lie.

1

Ortus14 t1_j7s9cr2 wrote

Like social media, it's a balancing act. We don't want videos describing how to do harmful or illegal activity, which is why the most popular social media platforms all have some level of censorship.

The same goes for AI. It should not aid in harmful or illegal activity. What constitutes "harm" is up to public opinion.

1

Ortus14 t1_j6h80wh wrote

No pain no gain.

As far as reeking of desperation, that can be fixed by practicing non attachment to outcome and focusing on the process.

So instead of telling yourself, "I'm going to make a friend," tell yourself, "I'm going to introduce myself to three groups of people, and if we vibe I'll exchange contact info and suggest we all hang out." They could completely ignore you, but you should consider that a success, and maybe even reward yourself for doing it.

Then you try a different approach. Maybe work on your vibe, your style, your fitness, your grooming, or where you choose to go, then try again. But again, the success is that you did the experiment, not whether it worked. Either way, you got new experiences and data that will help you in the future.

1

Ortus14 t1_j6cnl17 wrote

It sounds like your goal is to make friends. That's the only thing you wrote that you care about.

That's a good goal to focus on, one that will be more satisfying than most things you could chase. Now you just need to figure out how to achieve it.

For example, if you start a course with the goal of making friends and don't click with anyone, then drop the course. Remember your goal. Then find another situation where you might meet someone, and repeat.

I think one issue you have is that you want tons of different goals, but when you spread yourself thin, you achieve nothing. Making a friend is a good first goal. Achieve that.

5

Ortus14 t1_j6bwabz wrote

If ChatGPT used a speech-to-text API like Whisper, plus the latest text-to-speech APIs, which sound like real people, I could see people talking to it like an assistant all day long.

Many people already use it, but making it hands-free would be even more convenient, and I could see even more people using it.
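The hands-free loop the comment describes is just three stages chained together: speech-to-text, a chat model, then text-to-speech. Here's a minimal sketch of that pipeline; all three stage functions are hypothetical stubs standing in for real APIs (e.g. Whisper for `transcribe`), not actual library calls.

```python
def transcribe(audio: bytes) -> str:
    """Stub for a speech-to-text API such as Whisper (hypothetical)."""
    return "what's on my calendar today?"

def chat(prompt: str) -> str:
    """Stub for a ChatGPT-style completion API (hypothetical)."""
    return f"Here's what I found for: {prompt}"

def synthesize(text: str) -> bytes:
    """Stub for a realistic text-to-speech API (hypothetical)."""
    return text.encode("utf-8")

def assistant_turn(audio: bytes) -> bytes:
    """One hands-free turn: speech in -> text -> reply -> speech out."""
    user_text = transcribe(audio)
    reply = chat(user_text)
    return synthesize(reply)
```

In a real deployment, each stub would be an API call, and the loop would run continuously on microphone input.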

2

Ortus14 t1_j6apv1z wrote

All technology abides by S-curves, as does all life (including AI) and all evolution.

In evolution the start of a new S-curve is called punctuated equilibrium.

In computational theory it corresponds to breaking out of a "local maximum." In game theory it may be referred to as breaking out of an "equilibrium."

It's important to note that these are all cascading S-curves. That is to say, smaller S-curves sit on top of larger S-curves, which themselves sit on top of still larger S-curves. If you ever think progress is slowing down, zoom out.
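The cascading-S-curve idea can be sketched numerically. A single S-curve is conventionally modeled as a logistic function, and "cascading" curves can be approximated as a sum of staggered logistics, where each new paradigm takes off as the last one plateaus. The function names and parameters below are mine, chosen for illustration:

```python
import math

def s_curve(t, midpoint=0.0, steepness=1.0, ceiling=1.0):
    """Logistic S-curve: slow start, rapid middle, plateau at the ceiling."""
    return ceiling / (1.0 + math.exp(-steepness * (t - midpoint)))

def cascading_s_curves(t, curves):
    """Sum of staggered S-curves, given (midpoint, steepness, ceiling) tuples.

    Each successive curve kicks in around its midpoint, so growth that looks
    like it's plateauing on one curve picks up again as the next one starts.
    """
    return sum(s_curve(t, m, k, c) for (m, k, c) in curves)
```

With three curves at midpoints 0, 5, and 10, the combined trajectory keeps climbing long after the first curve has flattened out, which is the "zoom out" point: what looks like stagnation on one curve is just the gap before the next one.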

8