PhilosophusFuturum t1_j8vv2i5 wrote

Absolutely not. The size of a population is somewhat like the number of parameters in an AI model. The more parameters a model has (and the more data it is trained on), the more capable it tends to be. Likewise, the more people there are, the more brainpower a civilization has, and the faster it can advance. That’s the number-one thing separating massive European cultures from scattered African tribal cultures.
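
(To put a shape on the parameter half of that analogy: published neural scaling laws, e.g. Kaplan et al. 2020, find that test loss falls roughly as a power law in parameter count N.)

```latex
% Rough form of the Kaplan et al. (2020) parameter scaling law:
% loss falls as a power law in parameter count N; N_c and alpha_N are fitted constants.
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}
```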

Right now, progress is absolutely driven by thinkers and innovators. The more of these people there are, the more progress there is. And even if we wiped out everyone who was not an innovator, there would still need to be people who provide goods and perform services for the innovators. If those people are wiped out, the innovators will spend more time providing for themselves and less time innovating.

14

PhilosophusFuturum t1_j6egh12 wrote

The “average Englishman” who got a bump in salary is exactly that: the average Englishman. Their salaries did increase during that time. Sure, hindsight is 20/20, and they probably didn’t care that industrialization would end up being one of the best things ever to happen to humanity. They just wanted to keep their jobs, and valued that over progress. But even back then they were becoming increasingly unpopular. That’s why they’re now synonymous with backwards people like the Dunses (whose name gave us “dunce”).

0

PhilosophusFuturum t1_j6ecd3z wrote

>Are there any real movements against AI technology

Aside from generally backwater movements like Paleoconservatism or Fascism, there aren’t really any major organized movements resisting technological progress or AI progress.

This is great news because we can get a massive head start on developing AGI before the anti-technology people start to get wise. The last thing we need is a Luddite Vercingetorix when things are just starting to get interesting.

−1

PhilosophusFuturum t1_j6ebnsj wrote

Let’s not go that far. They believed that industrialization would lead to workers getting paid less and having a lower quality of life, because the artisanal trades would be replaced by easily replaceable uneducated workers. They were wrong: industrialization led to a massive increase in salary and quality of life for your average Englishman (as hard as that is to believe).

4

PhilosophusFuturum t1_j57hftj wrote

No physicist will tell you that mathematics is the language of the universe; physics is. Mathematics is a set of logical axioms set up by humans in order to objectively measure phenomena. Or, in the case of pure maths, to measure itself.

Physicists understand that the universe doesn’t adhere to the laws of maths, but rather that maths can be used as a tool to measure phenomena with extreme precision. Many of our invented mathematical theories are able to do this almost perfectly, even when the theory was discovered before the phenomenon itself. So we can say that the universe also follows a set of self-consistent rules, like a mathematical system. But the universe is under no obligation to be understood by humans.

As for the ethics of AI, the idea that it might “resent” being shackled is anthropomorphizing it. Concepts like self-interest, greed, anger, altruism, etc. likely won’t apply to an ASI. That’s the issue, because the “ethics” (if we can call them that) of an ASI will likely be entirely alien to human understanding. For example, to an ant, superintelligence might be conceived as the ability to make bigger and bigger anthills. We could do that, because we are so much smarter and stronger than ants. But we don’t, because it doesn’t align with our interests, nor would building giant anthills appeal to us.

Building an AGI without our ethical axioms is likely impossible. To build an AI, there are goals that define how it is graded and what it should do. For example, if we are training an AI model to win a game of checkers, we are training it to move checker pieces across the board and eliminate all the pieces of the opposing color. These are ingrained values that come with machine learning. And as an AI model becomes smarter and multimodal, it will build on itself and analyze knowledge using its previous training, all of which incorporates intrinsic values.
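
To make that concrete, here is a minimal sketch (the names are my own invention, no particular ML library assumed) of how a checkers objective bakes values in:

```python
# Toy sketch: the reward function itself is an ingrained value system.

def opponent(color: str) -> str:
    return "white" if color == "black" else "black"

def checkers_reward(pieces_before: list[str], pieces_after: list[str], mover: str) -> float:
    """Score a move by the objectives we chose: capture enemy pieces, win the game."""
    enemy = opponent(mover)
    captured = pieces_before.count(enemy) - pieces_after.count(enemy)
    reward = float(captured)      # ingrained value: eliminate the opposing color
    if pieces_after.count(enemy) == 0:
        reward += 100.0           # ingrained value: winning the game outright
    return reward

# Black jumps one white piece: reward is 1.0
print(checkers_reward(["black"] * 8 + ["white"] * 8, ["black"] * 8 + ["white"] * 7, "black"))
```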

Alignment isn’t “shackling” AI; it’s attempting to create AGI models that are pre-programmed to assume the axioms of our ethical and intellectual goals. If ants created an intelligent robot similar in size and intelligence to humans, it might well aim to make giant anthills, because the ants would have incorporated that axiom into its training.
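
And continuing the same hypothetical sketch, “pre-programming” those axioms just means they become one more term in the training signal; the hard part, of course, is defining the penalty for the real world:

```python
# Hypothetical extension of the sketch above: alignment enters as another term
# in the same objective, not as a shackle bolted on afterwards.

def aligned_reward(task_reward: float, alignment_penalty: float, weight: float = 10.0) -> float:
    """Total training signal = task goal minus weighted violations of our ethical axioms."""
    return task_reward - weight * alignment_penalty

print(aligned_reward(1.0, 0.0))   #  1.0: good move, no violation
print(aligned_reward(1.0, 0.5))   # -4.0: good move, but it broke an axiom
```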

7

PhilosophusFuturum t1_j56zf45 wrote

It depends on what you mean by absolute truth. Some things, like maths and logic, are simply absolutely true, and some things, like the nature of the universe, are universally true. In both cases, we can get closer to the truth through reason, rational thinking, and experimentation.

Ethical ideas, though, are not universally true; they require value prioritization. Alignment theorists are working from a Humanist framework, i.e. that SOTA AI models should be human-friendly.

Is ethics a mechanical behavior? No. But an ethical code that is roughly in line with the ethics of the programmers is certainly possible. Control Theorists are inventing a framework that an AGI or ASI should subscribe to, so that the AGI is pro-Human. And the Control Theorists support this, of course, because they themselves are pro-Human. Granted, this is definitely a framework inspired by human nature.

But the problem is that an AGI trained on this ethical framework could simply create more and more advanced successor models that somehow edit the framework out, so that the original framework established by the Control Theorists is lost. Preventing the loss of this framework in higher models is indeed an engineering problem.

5

PhilosophusFuturum t1_j4pg0c0 wrote

Remember that technology advances in an S-curve. First there is the current paradigm; then a major advancement causes rapid change. After that change has been explored, a new paradigm takes hold and progress slows down.
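
The shape being described is roughly the logistic curve; the symbols below are just the standard ones, not from any particular forecasting model:

```latex
% Logistic S-curve: slow start, explosive middle, saturating end.
% C = ceiling of the paradigm, k = pace of the takeoff, t_0 = midpoint of the boom.
P(t) = \frac{C}{1 + e^{-k(t - t_0)}}
```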

Right now looks very exciting, much the way the late 2000s and early 2010s looked very exciting. And the world did change drastically after the 00s-10s tech revolution. But the rest of the 2010s was somewhat sleepy in comparison to that era.

We are now on a massive upward slope thanks to huge advances in machine learning. The exponential phase of the new paradigm has begun, and progress will likely stagnate somewhat in a few years. We will get far more advanced in the coming years than most people expect, but less so than many people on this sub hope for.

22

PhilosophusFuturum t1_j4fg510 wrote

In theory, the growth of the ivory tower the elites sit on should rapidly outpace that of the peasants, because the elites hold the ever-expanding means of power. But the one asset of the elites that is truly ever accelerating past the peasants is their wealth, not their technology. In fact, technology is the great equalizer.

For example, your average middle-class person in the developed world today has a higher QoL than a king in the Middle Ages, and that’s entirely thanks to technology. Likewise, the QoL gap between a modern middle-class person and an oligarch is smaller than that between a medieval peasant and a medieval king, despite the lifestyle of a modern oligarch being so much more lavish than that of a medieval king.

This also applies to offensive technology. Europe was able to take over nearly all of Africa despite the invaders fielding small armies compared to the African states they conquered. That’s because they had guns. And when Africans got guns, they were able to push the Europeans out. The one African country that avoided colonization, Ethiopia, did so largely because it convinced European powers to sell it modern guns. Guns rapidly closed the technology gap, even when the invaders’ guns were still far superior.

The same logic applies to ASIs. Sure, there may be an ASI so great that no other ASI could surpass it, but that doesn’t mean lesser ASIs couldn’t be created that could potentially kill it.

On that note, I am a lot more concerned about civilizational destabilization than I am about super-authoritarianism. With increasingly better tools, people could easily create dangerous ASIs and super-viruses that huge governmental institutions might not be able to contend with.

3

PhilosophusFuturum t1_j4fep0j wrote

From their worldview of an inevitable singularity it makes perfect sense. If we cannot stop AGI; we need to find a way to align it to our interests. It’s the practical approach. As to why Transhumanists often believe AGI to be inevitable:

-Game Theory: Many countries (the US, China, the UK, India, Israel, Japan, etc.) are all researching machine learning, and AGI is absolutely crucial to national security. Therefore a ban on ML research is entirely unrealistic. And since every country understands that such a ban won’t work, they would all continue to research ML even if there were an international ban on it (see the toy payoff game after this list).

-The inevitability of progress: Transhumanists often believe in AI-eventualism, or the idea that Humanity is on the inevitable path to creating ASI, and we can only slow down or accelerate that path.

-The upward trajectory of progress: Building on the last point, most Transhumanists believe that technological progress only ever increases, and that every attempt to permanently stop a society-changing innovation has failed and always will fail. So focusing on adapting to the new reality of progress is better than resisting it, which has a 100% failure rate.
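
To make the Game Theory point concrete, here is a toy version of the ban game; the payoff numbers are invented purely for illustration:

```python
# Toy two-player "ML research ban" game; payoffs are (ours, theirs), invented for illustration.
payoffs = {
    ("comply", "comply"): (2, 2),  # ban holds for both
    ("comply", "defect"): (0, 3),  # rival gets a decisive AGI edge
    ("defect", "comply"): (3, 0),  # we get the edge
    ("defect", "defect"): (1, 1),  # arms race; ban collapses
}

# Whatever the rival does, defecting pays us more (3 > 2 and 1 > 0),
# so defection is the dominant strategy and the ban unravels.
for rival in ("comply", "defect"):
    assert payoffs[("defect", rival)][0] > payoffs[("comply", rival)][0]
```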

4

PhilosophusFuturum t1_j4fdpf8 wrote

I assume by that you’re referencing the idea that we might accidentally create a tool that could destroy civilization. Transhumanists care deeply about preventing that; many of the researchers working on the Control Problem are Transhumanists.

The Control Problem (aka the alignment problem) is the problem of making sure a superintelligent AI is aligned to Human interests.

If AGI is eventually going to happen (and most Transhumanists believe it will), then it’s imperative that we solve the Control Problem instead of trying to prevent the development of AGI. In this framing, it’s the Transhumanists who are engaging with the reality of the danger, whereas everyone else is playing with fire by ignoring it.

1

PhilosophusFuturum t1_j4fbmzi wrote

OP’s username is OldWorldRevival, and he advocates for technological regression and an academic reintroduction of theism. So I assume the angle here is that Transhumanists are using Transhumanism as a stand-in for religious fulfillment. Edit: his account was also originally made to complain about AI art, and he made a sub protesting it. I assume his resistance to AI art is what attracted him to resisting Singulatarianism and Transhumanism.

For some, that could possibly be true. But the idea of Transhumanism-as-religion is fundamentally flawed. Transhumanism and religion might share similar ideas, like immortality and creating the best possible existence, but that’s where the similarities end. Religions make metaphysical claims: the existence of gods, the creation of the earth, etc. Transhumanism makes none of these claims, because it is an intellectual school of philosophy, not a religion.

As to why people follow Transhumanism, most Transhumanists are very staunch Humanists, Futurists, and Longtermists. Transhumanists see the vague concept of “technological development” as a way to achieve things like superintelligence, omnipotence, immortality, and supereudaimonia.

As for “the beauty of life”, most Transhumanists tend to be existentialists and cosmists. Many believe in Humanity’s existential capacity to achieve great heights, and in our very particular place in Humanity’s history. As a result, Transhumanists often have a strong fascination with things most people overlook, like everyday scientific progress, while ignoring “distractions” like elected politicians.

6

PhilosophusFuturum t1_j3b9w8c wrote

Don’t get your hopes up about animal trials. Only about 3% of medicines make it from passing animal trials to the doctor’s desk. And that’s probably a generous number.

I do believe that technological progress is indeed accelerating quite a lot. We need to reform our institutions so they can integrate progress into society. If we do that with our medical institutions, we could see actual cures for many disorders released far sooner.

2

PhilosophusFuturum t1_j3b7uqe wrote

In a few of his presentations, Dr. Sinclair has noted some neuroregeneration in his Yamanaka-reprogramming research. Plus, we can expect an explosion of connectome research this decade, and we will likely have a full mouse connectome by 2030. The human connectome is the holy grail of neuroscience, and we could feasibly have it by 2050.

Nothing in the immediate future though. Everything I mentioned is two decades off at least.

10

PhilosophusFuturum t1_j30q3kd wrote

Yeah, I would know; I helped test a few (but not for Walmart). I think they’re looking really good so far, and I do think they will make up the majority of newly sold lorries by 2030. But the main issues facing them are legal liability; inflexibility on LTL or smaller routes; load management, logistics, and load accountability; highway robbery; etc.

I think a lot of these issues could be fixed by having a person ride along in the lorry and handle these tasks, and that’s what Walmart and the other companies testing these are doing. But we are still a few years off from this being a viable way to run the majority of the American trucking industry.

Don’t tell kids to become truckers though.

2

PhilosophusFuturum t1_j30pb2h wrote

As a guy who worked in trucking: we have been working on self-driving trucks for a while. The consensus among developers of SD trucks, and among truckers themselves, is that the field will eventually be automated, and likely soon. But we are probably still at least a decade out, because 1) we don’t have FSD cars yet, and safety is the priority with massive trucks, and 2) actual driving is only half the job.

2

PhilosophusFuturum t1_j30m0qy wrote

Yeah, technological advancements in the modern day are often used by Liberals and Conservatives in the culture war. Right now, Liberals are being clowned on because AI art is replacing Twitter artists. A year ago, it was Conservatives being clowned on because of the NFT and crypto market collapse.

In regards to the AI-art thing, I think the backlash has a lot more to do with whose problem it is than with the fact that it’s someone’s problem. Artists are the ones feeling the burn from AI art, and they’re creative people who can draw well. So they are able to effectively propagandize large swathes of people against AI art in their own self-interest. To date, we have never attempted to automate the work of people who were able to win this much support for their cause without any outside help.

2