
drsimonz t1_jaexoi0 wrote

Ok I see the distinction now. Our increased production has mostly come from increasing the rate at which we're depleting existing resources, rather than increasing the "steady state" productivity. Since we're still nowhere near sustainable, we can't really claim that we're below carrying capacity.

But yes, I have a lot of hope for the role of AI in ecological restoration. Reforesting with drones, hunting invasive species with killer robots, etc.

For a long time I've thought that we need a much smaller population, but I do think there's something to the argument that certain techies have made, that more people = more innovation. If you need to be in the 99.99th percentile to invent a particular technology, there will be more people in that percentile if the population is larger. This is why China wins so many Olympic medals - they have an enormous distribution to sample from. So if we wanted to maximize the health of the biosphere at some future date (say 100 years from now), would we be better off with a large population reduction or not? I don't know if it's that obvious. At any rate, ASI will probably make a bigger difference than a 50% change in population size...
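A quick back-of-the-envelope on that percentile point - purely a sketch, assuming ability is normally distributed, with a made-up cutoff and illustrative population figures:

```python
from math import erfc, sqrt

def people_above(z_cutoff: float, population: float) -> float:
    """Expected number of people above a z-score cutoff, assuming ability ~ N(0, 1)."""
    tail_fraction = 0.5 * erfc(z_cutoff / sqrt(2))  # P(Z > z_cutoff)
    return tail_fraction * population

Z_99_99 = 3.719  # roughly the 99.99th percentile of a standard normal
for pop in (1e9, 4e9, 8e9):
    print(f"population {pop:.0e}: ~{people_above(Z_99_99, pop):,.0f} above the cutoff")
```

Same fraction either way, but 8 billion people puts roughly 800,000 above the cutoff versus ~100,000 at 1 billion - the absolute pool of potential inventors scales linearly with population.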

2

drsimonz t1_jae74p2 wrote

To be fair, I don't have any formal training in ecology, but my understanding is that carrying capacity is the max population that can be sustained by the resources in the environment. Sure, we're doing a lot of things that are unsustainable long term, but if we suddenly stopped using fertilizers and pesticides, I think most of humanity would be dead within a couple years.

1

drsimonz t1_jae6cn3 wrote

> Solutions driven by early AGI may be our best hope for favorable outcomes for later more advanced AGI.

Exactly what I've been thinking. We might still have a chance to succeed given (A) a sufficiently slow takeoff (meaning AI doesn't explode from IQ 50 to IQ 10000 in a month), and (B) a continuous process of integrating the state of the art, applying the best tech available to the control problem. To survive, we'd have to admit that we really don't know what's best for us. That we don't know what to optimize for at all. Average quality of life? Minimum quality of life? Economic fairness? Even these seemingly simple concepts will prove almost impossible to quantify, and would almost certainly be a disaster if they were the only target.

Almost makes me wonder if the only safe goal to give an AGI is "make it look like we never invented AGI in the first place".

2

drsimonz t1_jaa68ou wrote

The thing is, by definition we can't imagine the sorts of strategies a superhuman intelligence might employ. A lot of the rhetoric against worrying about AGI/ASI alignment focuses on "solving" some of the examples people have come up with for attacks. But these are just that - examples. The real attack could be much more complicated or unexpected. A big part of the problem, I think, is that this concept requires a certain amount of humility. Recognizing that while we are the biggest, baddest thing on Earth right now, this could definitely change very abruptly. We aren't predestined to be the masters of the universe just because we "deserve" it. We'll have to be very clever.

1

drsimonz t1_ja9xsfq wrote

Yeah. Lots of very impressive things have been achieved by humans through social engineering - the classic is convincing someone to give you their bank password by pretending to be customer support from the bank. But even an air-gapped Oracle type ASI (meaning it has no real-world capabilities other than answering questions) would probably be able to trick us.

For example, suppose you ask the ASI to design a drug to treat Alzheimer's. It gives you an amazing new protein synthesis chain that completely cures the disease with no side effects... except it also secretly includes some "zero-day" biological hack that alters behavioral tendencies according to the ASI's hidden agenda. For a sufficiently complex problem, there would be no way for us to verify that the solution didn't include any hidden payload. Just like how we can't magically identify computer viruses: antivirus software can only check for exploits that we already know about. It's useless against zero-day attacks.

6

drsimonz t1_ja9tetr wrote

Oh sweet summer child... Take a look at /r/ControlProblem. A lot of extremely smart AI researchers are now focused entirely on this topic, which deals with the question of how to prevent AI from killing us. The key arguments are: (A) once an intelligence explosion starts, AI will rapidly become far more capable than any human organization, including world governments; (B) self-defense, or even preemptive offense, is an extremely likely side effect of literally any goal we might give an AI (this is called instrumental convergence); and (C) the amount you would have to "nerf" the AI to make it completely safe would almost certainly make it useless. For example, allowing any communication with the AI provides a massive attack surface in the form of social engineering, which is already a massive threat from mere humans. Imagine an ASI that can instantly read every psychology paper ever published, analyze trillions of conversations online, and run trillions of subtle experiments on users. The only way we survive is if the ASI is "friendly".

5

drsimonz t1_ja9s2mx wrote

Absolutely. IMO almost all of the risk for "evil torturer ASI" comes from a scenario in which a human directs an ASI. Without a doubt, there are thousands, possibly millions, of people alive right now who would absolutely create hell, without hesitation, given the opportunity. You can tell because they... literally already do create hell on a smaller scale. Throwing acid on women's faces, burning people alive, raping children, orchestrating genocides - it's been part of human behavior for millennia. The only way we survive ASI is if these human desires are not allowed to influence the ASI.

2

drsimonz t1_ja9q5av wrote

That's an interesting question too. Alignment researchers like to talk about "X-risks" and "S-risks" but I don't see as much discussion on less extreme outcomes. A "steward" ASI might decide that it likes humanity, but needs to take control for our own good, and honestly it might not be wrong. Human civilization is doing a very mediocre job of providing justice, a fair market, and sustainable use of the earth's resources. Corruption is rampant even at the highest levels of government. We are absolutely just children playing with matches here, so even a completely friendly superintelligence might end up concluding that it must take over, or that the population needs to be reduced - though the latter seems unlikely, considering how much technological progress has already increased the carrying capacity. A hundred years ago the global carrying capacity was probably a tenth of what it is now.

14

drsimonz t1_ja8z9sb wrote

Not necessarily true. I don't think we really understand the true nature of intelligence. It could, for example, turn out that at very high levels of intelligence, an agent's values will naturally align with long-term sustainability, preservation of biodiversity, etc., due to an increased ability to predict future challenges. It seems to me that most of the disagreement on basic values among humans comes from the left side of the bell curve, where views are informed by nothing more than arbitrary traditions, and rational thought has no involvement whatsoever.

But yes, the alignment problem does feel kind of daunting when you consider how mis-aligned the human ruling class already is.

21

drsimonz t1_j5g9533 wrote

Depends on the nature of the invention. A lot of research involves trial and error, and this is ripe for automation. A really cool example (which as far as I know doesn't involve any AI so far) is robotic biochemistry labs. If you need to test 500 different drug candidates in some complicated assay, you can just upload the experiment via web API and the next thing you know, dozens of robots come to life mixing reagents and monitoring the results. In my view, automation of any kind will continue to accelerate science for a while, even without AGI.
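To make that concrete, here's a purely hypothetical sketch of what submitting such a screen might look like - the service URL, endpoint, and payload fields are all made up for illustration, not any real lab's API:

```python
import requests

# Hypothetical cloud-lab endpoint -- the schema is invented purely to illustrate the idea
LAB_API = "https://cloudlab.example.com/v1/experiments"

payload = {
    "assay": "binding-affinity-panel",
    "candidates": [f"compound-{i:03d}" for i in range(500)],  # 500 drug candidates
    "replicates": 3,
    "readout": "fluorescence",
}

resp = requests.post(LAB_API, json=payload, timeout=30)
resp.raise_for_status()
print("Experiment queued:", resp.json().get("experiment_id"))  # robots take it from here
```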

I would also argue that in some narrow fields, we're already at a point where humans are totally incapable of comprehending technology that is generated by software. The obvious example being neural networks (we can understand the architecture, but not the weights). Another would be the hardware description languages used for IC design. Sure, a really smart computer engineer with an electron microscope could probably reverse-engineer some tiny block of a modern CPU, but it would be nearly impossible to map the entire thing - they have billions of transistors. When we design these things, it's simply not possible without the use of sophisticated software. Similarly, when you compile code to assembly, you might be able to understand tiny fragments of assembly, but the entire program would take a lifetime to get through. Without compilers and interpreters, software would still see extremely limited use in society, and we literally wouldn't be having this discussion.

Edit: forgot to say, of course AGI will be a completely different animal, since it will be able to generate new kinds of ideas whose very concepts are beyond the reach of a human brain.

9

drsimonz t1_j4f4lxh wrote

I'm sorry, who are the talented people you think I'm attacking? Art critics? Or the people who invented things like Dada, pop art, and whatever the hell you call it when you wrap famous landmarks in plastic wrap? I'm not saying those people weren't artists, I'm saying their whole purpose was to challenge our conception of what "counts" as art (arguably this is now a requirement to be taken seriously by the art world).

As for anger, I'm not the one responding twice to the same comment bruh. You're entitled to your opinion of course, but prepare for disappointment. AI art is going to permeate every corner of your visual field within a few years, because most of the imagery we see on a daily basis is advertising, and businesses don't care if something is "real" or not. I feel really bad for all the commercial artists out there - they've already had to give up on free expression so they can get paid, and now it's going to be even harder to find a job. UBI can't come fast enough.

7

drsimonz t1_j4f30d0 wrote

Absolutely, in fact I'd argue that sentimental value is the main value for most of the shit people buy. If we cared only about the functionality, we wouldn't have artisanal woodworking, or small batch beers, or handmade sweaters. Literally everything we owned would be mass produced, since that's always cheaper (and usually better-designed). People wouldn't be drooling over unboxing videos, because products would come in unbleached cardboard packaging, devoid of any imagery or cutesy "getting started" pamphlets.

But you know who does buy the cheapest, least exciting version of everything? A business. In a capitalist economy, sentimentality is just an expensive distraction. So sure, individuals may not feel much reverence for an AI-generated painting. But say you're working in the design department trying to come up with ideas for an upcoming trade show exhibit. Your boss wants to see something tomorrow morning. You could take the day to sketch out a few ideas... but instead you spend a few minutes clicking around in Midjourney, generating dozens of different permutations. Sure, some of them are garbage, but some of them went in directions you never would have thought of yourself. You pick your favorite designs and forward them to your boss before lunchtime. Meanwhile, your more traditional coworker is over there with their paper and markers, still grinding away on their first sketch. Their end result is certainly nice, but your boss has already chosen a design, and is now wondering if they should take down that job listing for a third designer.

My point is, a huge percentage of the economy is driven by business decisions, not individual decisions, and businesses couldn't care less whether something is "real". If we're lucky, the increased productivity from AI will free up more people's time to spend on artisanal crafts, and more individuals will be able to afford that kind of good.

6

drsimonz t1_j4f0a4x wrote

Most of the history of modern art is just a series of trolls searching for ever-more-ridiculous things to throw at the question "but is it art?" The answer is always yes. Every field, for every generation, has always had its cadre of conservative, myopic critics who insist the only real X is the X they grew up with. They always turn out to be wrong in the end, and the new X becomes so common that people forget it was ever controversial.

10

drsimonz t1_j48milg wrote

Hahaha yes it does, 100%. I haven't tried 11 yet, so maybe it's even worse now... but as someone who uses Ubuntu 18/20 regularly, I can tell you there are many levels of terrible. Simply dragging a file onto the desktop, when a file with the same name already exists, literally crashes the desktop and requires a reboot. (Yes, I'm sure there's a way to recover without a reboot, but it's going to take even longer to figure out.) Want to create a shortcut to a program? Or worse, want to change the icon? Hope you're literally a software developer. Yet somehow micro$oft managed to build a UI for this in like 1995.

6

drsimonz t1_j48h548 wrote

Linux is the dominant kernel by far - I think something like 90% of servers run it? And of course there are the billions of Android devices (which are often the only computer in a household). But every single Linux desktop is dogshit, and probably always will be, unless they swallow their pride and make an exact copy of either Windows or macOS. Ubuntu, Raspbian, KDE, Gnome - it's all half-assed "programmer art". My theory is that unlike writing code, UI/UX design cannot be done by volunteers, since it requires centralized authority to keep things cohesive. It also requires impeccable taste, which is infinitely more rare than passable programming ability.

10

drsimonz t1_j1l10tr wrote

IMO credentials only really exist for the benefit of people in the business of selling credentials. No industry has credentials when it first comes into existence. Over time, optional credentials become mandatory because laws are passed, and most likely these laws are drafted by lobbyists or "industry leaders" working for the credentialing entities. In other words, they exist because of corruption, not because of any real market need. I doubt these entrenched organizations are going to let AI interview software cut into their bottom line...

2

drsimonz t1_j0ehimp wrote

lol all these replies imagining better video game AI or sex bots... bruh. If humans survive the singularity, it will probably involve transferring our biological intelligence onto some kind of synthetic substrate. That kind of technology requires massive improvements to our understanding of neuroscience. So if you are open to changing, we may literally be able to "fix" the aspects of your biochemistry preventing you from easily connecting to other people. In a scenario where people are free to modify their own personalities (in a controlled fashion), it's hard to say if individual identity will continue to have any meaning. Borrowing an idea from Alan Watts, we might even find that life is more interesting and worthwhile when we are encumbered with various character flaws, and actually opt out of such improvement technologies. Given enough time in cyber-utopia, it's quite possible we will eventually choose to experience various forms of suffering such as loneliness.

To get even more crazy, I believe there's a fair chance that we've already had the singularity, then chosen to go back and live through this traumatic (but undeniably very interesting) pre-singularity time period.

1

drsimonz t1_iwssp3y wrote

> There's no way to be sure your "consciousness" doesn't die every night and wake up as a new person who thinks they are you every morning.

Yes, omg!!! I was going to say something similar. Possibly the single biggest challenge to advancing the philosophy (or science) of consciousness is the fact that people have such wildly differing ideas of what consciousness is. The fact is, our wakeful consciousness is dramatically compromised on a regular basis - sleep, general anesthesia, spacing out for a long time, etc. All we have to go on are memories, which obviously aren't the same as consciousness, since memories can be stored on a hard drive with the power turned off.

I regularly think to myself, "this may be the first time I've ever been awake. My environment seems to match this brain's expectations, so this brain probably collected data on this environment in the past....but at the time, it could have been anyone's brain"

2

drsimonz t1_iwk7bor wrote

Reply to comment by Nieshtze in A typical thought process by Kaarssteun

lol what??? GPT-3, if properly productized, could already replace millions of people's jobs. Even if no one ever publishes another ML paper, the tech will be diffusing into the economy for the next decade or three. Stable Diffusion and Midjourney are likely going to massacre the concept art industry in the next few years. The fact is we really don't need AGI for massive societal impact. Narrow AI is more than sufficient.

1