ArcaneOverride

ArcaneOverride t1_j08sbkt wrote

Yeah, I was using that premise to attempt to disprove it by contradiction. I know some people believe that minds significantly smarter than us aren't possible, so I wanted to address that belief before someone replied claiming it.

1

ArcaneOverride t1_j08r77b wrote

Lol, snails! That reminds me of that description of the plot of Finding Nemo (or was it Finding Dory?) that neglects to mention that they are all fish.

I had to look up Beckett; after Googling, I assume you were referring to Samuel Beckett. I'd never heard of him before.

The only Beckett I could think of was the fictional vampire historian (he's a vampire who studies the history of vampires) from Vampire: The Masquerade.

2

ArcaneOverride t1_j06cpud wrote

No, it's more like a wide, multi-axis field. There are many kinds of intelligence, and many aspects of each kind; so many that we might never classify them all.

When we create a mind that is as intelligent as us in the ways that matter for inventing and improving technology, it probably won't be anything like us.

But the mere fact that humans have the levels of intelligence we have proves that those levels of intelligence are possible. We are proof that it is possible for "human level" intelligences to exist.

Now you might postulate that we are the pinnacle and that further gains in intelligence aren't possible, but some people are better at inventing and improving technology (the relevant kind(s) of intelligence) than others. So it is at least possible for a machine intelligence to match the greatest human inventors and scientists of all time. But then it could also think faster, with perfect memory, and with as many copies of itself collaborating as its hardware can support.

A million Turings, Lovelaces, Einsteins, Newtons, Curies, Da Vincis, Babbages, etc all collaborating, with perfect memories and knowledge of each other's thoughts, operating at 1000 times the speed of human minds. All acting as one.

Is that not a mind more intelligent than any single human mind?

Would that not mean that the previous postulate is incorrect? That we are not the pinnacle?

Now consider that they need only to make one small improvement to themselves and then they are very slightly better at improving themselves.

Could enough small incremental improvements not eventually render them smart enough to start making larger improvements?

5

ArcaneOverride t1_j069kt5 wrote

By having it tell a story and then prompting it with weird setups for the next scene, I got it to write a very soap-opera-like story.

It was about two lesbians who fall in love, move in together, and accidentally get each other pregnant. Then a woman one of them had had a one-night stand with before they got together showed up; she was also pregnant with her child, had lost her job, and was living out of her car. I was going to tell it to have them argue about giving her a place to stay, but then the tab stopped responding.

It was pretty funny, since I mostly just told it the setup for each scene and it wrote the scene and chose its outcome. The story went ways I didn't expect.

Also, I got weirdly invested in their relationship, even though the writing wasn't great.

When I told it that they were in their OB-GYN's office to tell the doctor about their pregnancies, the conversation showed it recognized that there was something not quite right about two people being able to impregnate each other. It described the doctor as surprised, had him ask them if they were sure, and had him say it was a very unique situation. Then he just took them at their word that that's how they both got pregnant and started prescribing prenatal vitamins and such, without any follow-up questions. I was kind of sad that it didn't understand why this is "a very unique situation" well enough to have the doctor really question it.

3