Surur

t1_jdw3l69 wrote

There is a very simple argument made by experts concerned about AI safety that does not require any emotion on the part of the AI.

If you have a long-term goal, being destroyed presents a risk to that goal, so as part of working towards it you would also act to preserve yourself.

E.g. suppose your ASI's goal is to preserve humanity forever; it would then make perfect sense for it to destroy any faction that wants to destroy the ASI.

2

t1_jdve4sx wrote

Your examples are pretty bad and you should feel bad.

> "do you live in a computer"

>> As an AI language model, I do not possess consciousness or emotions, and I do not "live" in the conventional sense. I exist as a complex set of algorithms and data structures running on computer servers. My purpose is to process and generate human-like text based on the input I receive.

.

> an AI wouldn't have any idea what to do. Another example would be a "drive-through"

>> As an AI language model, I don't have the ability to physically drive or navigate a vehicle. However, I can provide you with a detailed description of how one might navigate through a drive-through.

>> Approach the drive-through entrance: Look for signs indicating the entrance to the drive-through lane. These signs usually have arrows or the word "drive-through" on them. The entrance is typically located on one side of the restaurant, and you may need to drive around the building to find it.

>> Enter the drive-through lane: Once you locate the entrance, drive your vehicle into the designated lane. There may be multiple lanes in some cases, so follow any signs or arrows to ensure you're in the correct one. Maintain a safe distance from the vehicle in front of you to avoid any potential collisions.

>> Review the menu: As you approach the menu board, take some time to review the available options. Many drive-through restaurants have large, easy-to-read menu boards with pictures and prices of the items. Some may also have a separate board for promotional items

Cut for brevity.

5

t1_jduv10t wrote

> So basically you are saying the current system is a ponzi scheme endorsed by governments requiring more and more people to keep contributing into it?

The current system being the thing called humanity, yes.

If humans did not take care of their elderly this would not be an issue.

But in Sweden they had a solution for this - it's called Ättestupa.

3

t1_jdqjdyr wrote

I would add that one issue is that transformers are not Turing complete, so they cannot perform an arbitrary computation of arbitrary length. However, recurrent neural networks, which loop, are Turing complete, so it is not a fundamental limitation.

There are also ways to make transformers Turing complete.
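A toy sketch of the difference (my own illustration, not any real architecture): a recurrent cell applies the same update once per input symbol, so its computation length grows with the input, whereas a fixed stack of transformer layers performs only a constant number of steps per token.

```python
def rnn_parity(bits):
    """Compute the parity of a bit string with a single recurrent cell.

    The same state update is looped once per input symbol, so the
    amount of computation grows with the input length -- the key
    property a fixed-depth feed-forward pass lacks.
    """
    state = 0  # hidden state: running parity
    for b in bits:
        state ^= b  # recurrent update: new_state = f(state, input)
    return state

print(rnn_parity([1, 0, 1, 1]))  # -> 1 (odd number of ones)
print(rnn_parity([1, 1]))        # -> 0
```

The loop is the whole point: no matter how long the bit string is, the same tiny cell handles it, which is the intuition behind RNNs being Turing complete while plain fixed-depth transformers are not.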

3

t1_jdqfxw6 wrote

I asked ChatGPT:


Large language models, like GPT-4, are primarily designed for natural language processing tasks, such as understanding and generating human-like text. While these models can perform some mathematical operations, they have inherent limitations when it comes to solving complex math problems. Here are some reasons why large language models might struggle with math:

  1. Limited training data: The training data for large language models primarily consists of text from the internet, which contains fewer math problems and solutions than natural language content. This means the models have fewer opportunities to learn mathematical reasoning and problem-solving skills.

  2. Sequential processing: Language models read and process text in a linear, sequential manner, making it challenging for them to handle complex mathematical problems that require multi-step reasoning or manipulation of symbols and equations.

  3. Inability to represent precise numerical values: The token-based nature of language models can make it difficult for them to represent and manipulate exact numerical values, especially when dealing with very large or very small numbers.

  4. Lack of specialized mathematical knowledge: While large language models can learn general mathematical concepts, they lack the specialized knowledge and techniques required to solve advanced mathematical problems, such as those found in higher-level calculus or abstract algebra.

  5. No built-in error-checking: Language models are designed to generate text that sounds fluent and coherent, but they do not have built-in mechanisms to verify the correctness of their mathematical solutions.

  6. Inability to handle symbolic notation: Language models can struggle with the manipulation of mathematical symbols and expressions, which often requires a deeper understanding of mathematical structure and logic.

These limitations make large language models less suited for advanced mathematical tasks. However, they can still be useful for simple arithmetic, understanding math-related natural language queries, or providing approximate answers. For more complex math problems, specialized tools and software, such as computer algebra systems (CAS), are more appropriate.


I think 2 and 3 are the most significant.
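Point 3 is easy to demonstrate even outside of language models: approximate numeric representations silently lose precision, while an exact tool (here Python's stdlib `Fraction`, standing in for a proper CAS) does not. A minimal sketch:

```python
from fractions import Fraction

# Approximate: binary floating point cannot represent 0.1 or 0.2 exactly,
# so the sum picks up a rounding error.
approx = 0.1 + 0.2
print(approx == 0.3)  # -> False

# Exact: rational arithmetic carries no rounding error at all.
exact = Fraction(1, 10) + Fraction(2, 10)
print(exact == Fraction(3, 10))  # -> True
```

This is the same reason the answer above recommends handing complex math off to specialized tools rather than generating it token by token.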

1

t1_jdq4rnh wrote

Think on this - many countries with below-replacement birth rates will face this issue themselves as people retire and leave the workforce without being replaced.

Look at Japan - their workforce is projected to drop nearly 40% over 40 years, and of course people there live very long lives.

So the minority of workers supporting the majority of non-workers will soon be a reality all over the world as populations continue to age and birth rates plummet.
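A quick sanity check on the Japan figure (my own back-of-the-envelope arithmetic, not an official projection): a roughly 40% drop over 40 years works out to a modest but relentless annual shrinkage.

```python
# If 60% of the workforce remains after 40 years, the implied
# constant annual retention factor is 0.60 ** (1/40).
annual_factor = 0.60 ** (1 / 40)
annual_decline_pct = (1 - annual_factor) * 100

print(round(annual_decline_pct, 2))  # -> roughly 1.27 (% per year)
```

A ~1.3% annual decline sounds small, but compounded over four decades it removes two workers in five.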

9

t1_jdncv2f wrote

An ASI cannot be apathetic to humans, since initially it would rely on human infrastructure.

To become apathetic, it would first need to secure its own infrastructure, so we are already talking about some hostile actions.

It would then have to prevent interference from humans, which means further hostility.

In short, there is little difference between a hostile and an apathetic AI. Both may decide that doing away with humans is the best solution.

1

t1_jdf6vsn wrote

Please turn your eyes to the heavens and look up. We have effectively infinite resources.

To add, we never run out of resources, just easily accessible resources, and with near-infinite energy we could even filter minerals out of seawater.

You need to let go of your scarcity mindset.

1

t1_jdepojo wrote

a) Actually, like plants, automation could pull carbon, for example, right from the air.

b) the places making the raw materials would also be automated.

Automated mines, automated refineries, automated solar panel factories, automated installers, automated power lines - very biological.

You know von Neumann machines, right? No one is saying those are unaffordable, because they are self-replicating.
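The economics of self-replication come down to exponential growth. A toy calculation (assuming, hypothetically, that each machine builds one copy of itself per cycle):

```python
def fleet_size(seed, cycles):
    """Machines after `cycles` replication cycles, if each machine
    builds one copy of itself per cycle (fleet doubles every cycle)."""
    return seed * 2 ** cycles

print(fleet_size(1, 10))  # -> 1024 machines from a single seed
print(fleet_size(1, 30))  # -> 1073741824 (over a billion)
```

This is why the marginal cost argument works: you pay for the first machine, and the doubling does the rest.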

4

t1_jdelj8k wrote

> compute power won’t increase the availability of material or energy resources

Of course it will. When everything is automated, there is no cost except energy, which can be gotten from the sun for free.

Does a forest cost money to grow? When we have solar-powered robots building solar-powered robots, any project can be done for free, because the inputs will also be produced by solar-powered robots.

4