
RabidHexley t1_jbb8lgk wrote

I'd be surprised, given this isn't isolated to the States. Any place that can acquire GPUs can theoretically perform AI research, and the potential bad outcomes of AI development don't care about geographic location, so there's no real benefit to stopping the research being done here.

2

RabidHexley t1_jbb7h2z wrote

It being open seems unequivocally better in my eyes, even setting aside optimism about technological progress.

It's better for lots of actors to actually know where the cutting edge is. More eyes mean more solutions and more scrutiny. We want all the best minds possible looking at this stuff.

Short of actively outlawing ALL development on machine learning and neural networks (basically tracking down anything that looks remotely like neural network development and sending people to prison), and going to war with nations that don't comply, this isn't the kind of tech you can stop; you can only slow it down and push it into the shadows or into other people's hands. And if you're concerned about uncontrollable AI agents, that's not a remotely better situation to be in, even if you've slowed the tech's progress by however many years.

2

RabidHexley t1_jaf0j2e wrote

Reply to comment by V_Shtrum in Is style the next revolution? by nitebear

I feel like a lot of the malaise that comes from unemployment/underemployment is due to employment being the standard structure of society: the constant fear and anxiety of failure and poverty hanging over your head while you ponder how you actually want to live your life. Without employment you're not a functioning member of society; our cities are entirely built around there being places to work.

There would certainly be a transition; we and everyone else currently alive were born into this world, and accepting change is always difficult. But I don't see why society wouldn't be able to structure itself around different systems: clubs, associations, societies, performance, athletics, childcare (we're not gonna have robots overseeing kindergarteners), education (people still want to learn things that are already known), friends, family. Hell, join a farmstead.

Structures and systems that could replace obligatory employment have already been conceived; they're just limited by the need to function within a capitalist system. They remain marginal because almost everyone requires employment to function within society.

There'd probably be a meteoric rise in virtuosos and elite athletes in less financially rewarding sports, given there'd be no fear of failure and poverty preventing talented people from pursuing their chosen craft to the utmost. It doesn't matter what AI or a computer can do; we'd still want to push human capabilities to the limit, and there'd still be prestige associated with such pursuits. (People still care about chess and Go in a post-Deep Blue, post-AlphaGo world.)

The generation leading into this world would certainly have members who struggle without the societal structure of employment, and a UBI/welfare-based society would encounter challenges, since we'd still be talking about a world organized around economic (un)employment.

But I can't imagine the people born into and growing up in a truly post-employment world would view ours - riddled as it is with poverty and tedious busywork - with anything but horror.

That's along with all of the intangible benefits that come from children no longer starving, people no longer living in eternal debt, and the elimination of the crime and instability that come with systemic, generational poverty.

2

RabidHexley t1_jaemizx wrote

I think this would definitely be the case. We already prize handcrafted items that machines can make just as well or significantly better; the craft is a means of connecting with other people and the world around us. AI or machines being able to do it just as well doesn't replace that dynamic.

3

RabidHexley t1_jaemcgf wrote

Reply to comment by V_Shtrum in Is style the next revolution? by nitebear

People still act and work in the absence of a need to work, even in the current world (e.g. people who can afford to retire early but don't). People also take on additional tasks, hobbies, and trades in their lives that have no practical benefit.

Gardening, musical instruments, hiking, fan fiction, all manner of crafts. Most hobbies take a lot of work and offer no practical return. An AI (or a supermarket, Amazon, MIDI software, etc.) being able to do something for you doesn't replace the desire to do and experience things yourself.

Many people's actual jobs already don't serve any practical function outside the narrow scope of something like a corporate structure: middle managers, bureaucrats, many accounting roles, and all of the people in support positions for those roles, completely divorced from any fruit of their labor besides a paycheck.

3

RabidHexley t1_jaef09w wrote

>Everyone keeps screaming "Dey tooker jerbs!" but the market simply won't allow it in the big bang everyone's expecting.

This is the thing that gets me. Societal/economic collapse isn't some fun thing the rich can just "ride out" by hoarding their imaginary pennies.

One thing I also feel people don't discuss enough: rich people and the "elites" in power don't necessarily want to live in a dystopian hellscape either, despite their greed. A thriving population creates a world that you actually want to live in. There are forces of self-interest that work in our favor, not just altruism.

4

RabidHexley t1_jae2c7j wrote

>The AI will have to take the abstract and resolve to something concrete. Either we tell it how to do that or we leave that decision up to the AI which brings us back to the whole concept of AI safety. How much agency does the AI have and what will happen.

This is only the case in a hard (or close to hard) take-off scenario, where an AI is trying to figure out how to reshape the world into an egalitarian society from the ground up, given its current state.

It's possible that we achieve advanced AI but global change happens much more slowly, trending towards effective pseudo-post-scarcity via highly efficient renewable energy and automated food production.

Individual (already highly socialized) nation-states start instituting policies that trend those societies towards egalitarian structures. These social policies get exported throughout the Western and eventually Eastern worlds. Generations pass, and social unrest in totalitarian and developing nations leads to technological adoption and similar policies and social structures forming.

Socialized societal structures and the use of automation increase over time, driving economic conflict towards zero. In the very long term (over centuries), certain national boundaries begin to dissolve as the reason for those structures' existence is forgotten.

I'm not advocating this as a likely outcome, just as a hypothetical, barely-reasonable scenario for how the current world could trend towards an egalitarian, post-scarcity society over a long time-span via technological progress and AI, without the need for an AGI to take over the world and restructure everything. It's just to illustrate that there are any number of ways history can play out besides "AGI takes over and either fixes or destroys the world."

2

RabidHexley t1_jadyhsb wrote

>Once you optimize hard enough for any utility curve you get either complete utopia or complete dystopia the vast majority of times.

Yeah, if we assume the future is guaranteed to trend towards optimizing a utility curve. That isn't necessarily how the development and use of AI will actually play out. You're picking out data points that are really only a subset of a much larger distribution.

1

RabidHexley t1_jadwxc5 wrote

I'm not trying to actually define utopia. The word is just being used as shorthand for "generally very good outcome for most people", which is possible even in a world of conflicting viewpoints; that's why society exists at all. Linguistic shorthand, not literal.

The actual definition of utopia in the literary sense is unattainable in the real world, yes. But our general wants and needs on a large scale aren't so divorced from each other that a positive outcome for humanity is inconceivable.

7

RabidHexley t1_jadsn49 wrote

Utopia in this context doesn't mean "literary" utopia, but rather the idea of a world where we've solved most or all of the largest existential problems causing struggle and suffering for humanity as a whole (energy scarcity, climate catastrophe, resource distribution, slave labor, etc.), not all possible individual struggle.

That doesn't mean we've created a literal perfect world for everyone. But an "effective" utopia.

7

RabidHexley t1_jad8r8t wrote

> for example if someone asked you a trick question, and the predictable false answer pops into your head immediately - that's what a single call to an LLM is

Yep. This is the biggest issue with current consumer LLM implementations. We basically force the AI to word-vomit the first thing it thinks of. It's very good at getting things right in spite of that, but if it gets something wrong the system has no recourse. Coming to a correct conclusion, a well-reasoned response, or even just the conclusion that we don't know something requires multiple passes.
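Rough sketch of what I mean by "multiple passes" - `llm()` here is a hypothetical stand-in for a single call to whatever completion API you're using, not a real library:

```python
def llm(prompt: str) -> str:
    """Hypothetical stand-in for a single completion call; wire up a real API here."""
    raise NotImplementedError

def answer_with_recourse(question: str, max_passes: int = 3) -> str:
    # Pass 1: the "word-vomit" draft a single LLM call produces.
    draft = llm(f"Answer this question: {question}")
    for _ in range(max_passes):
        # Extra pass: have the model critique its own draft.
        critique = llm(
            f"Question: {question}\nDraft answer: {draft}\n"
            "Is this draft correct and well-reasoned? "
            "Reply CORRECT, or explain what's wrong."
        )
        if critique.strip().upper().startswith("CORRECT"):
            return draft
        # Revision pass: the recourse a single call doesn't have.
        draft = llm(
            f"Question: {question}\nDraft: {draft}\nCritique: {critique}\n"
            "Write a corrected answer, or say 'I don't know' if unsure."
        )
    return draft
```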

3

RabidHexley t1_jaa3go2 wrote

> purely to see what if any hidden underlying structures humanity has collectively missed

This is one of the things I feel has real potential, even for "narrow" AI, as far as expanding human knowledge goes. Something may very well be within the scope of known human science without humans ever realizing it. If you represented all human knowledge as a sphere, its composition would probably be as porous as a sponge.

AI doesn't necessarily need to be able to reason "beyond" current human understanding to expand upon known science, but simply make connections we're unable to see.
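As a toy illustration of that kind of connection-finding: the vectors below are made-up stand-ins for concept embeddings a system might learn from existing literature (purely hypothetical names and data).

```python
import numpy as np

# Made-up 3-d "embeddings" of concepts from bodies of literature that
# rarely cite each other. Real systems would learn these from text.
concepts = {
    "thermoelectric material A": np.array([0.90, 0.10, 0.30]),
    "unrelated technique B":     np.array([0.15, 0.85, 0.40]),
    "unstudied compound C":      np.array([0.88, 0.15, 0.32]),
}

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Rank everything by similarity to a query concept; a high score for a pair
# never co-mentioned in the literature is a candidate "hidden" connection.
query = "thermoelectric material A"
for name, vec in concepts.items():
    if name != query:
        print(f"{name}: {cosine(concepts[query], vec):.3f}")
```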

2

RabidHexley t1_j9u5q7t wrote

Hallucinating seems like a byproduct of the need to always provide output straight away, rather than ruminating on a response before giving the user an answer - almost like being forced to always word-vomit. "I don't know" seems obvious, but it's usually the result of multiple recursive thoughts beyond the first thing that comes to mind.

It's sort of like how we can experience visual and auditory hallucinations simply by messing with our sensory input or removing it altogether (optical illusions, or a sensory deprivation tank). Our brain constantly makes assumptions based on input to maintain functional continuity, and thus has no qualms about simply fudging things a bit in the name of keeping things moving. External input is processed in real time, so that's where it's easiest to notice when our brain is fucking around with the facts.

LLMs simply do this in text form, because text is the medium they operate on. It's definitely a big problem. It seems like there needs to be a means for an LLM platform to ask "Does this answer seem reasonable based on known facts? Is this answer based on conjecture or hypotheticals?" prior to outputting the first thing it thinks of, since the model does seem at least somewhat capable of identifying issues with its own answers when asked. Though any attempt to implement this sort of behavior would be difficult with current publicly available models.
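A minimal sketch of that kind of pre-output check, again with `llm()` as a hypothetical stand-in for a single model call:

```python
def llm(prompt: str) -> str:
    """Hypothetical stand-in for a single model call; not a real library API."""
    raise NotImplementedError

def guarded_answer(question: str) -> str:
    # The first thing the model "thinks of".
    draft = llm(question)
    # Before surfacing it, run the reasonableness check described above.
    grade = llm(
        f"Question: {question}\nProposed answer: {draft}\n"
        "Is this answer based on known facts, or on conjecture/hypotheticals? "
        "Reply with one word: FACTUAL or CONJECTURE."
    )
    if "CONJECTURE" in grade.upper():
        return "I don't know - I can't verify an answer to that."
    return draft
```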

3

RabidHexley t1_j9oxvjl wrote

Not a plan I'd be on board with. It disincentivizes increasing efficiency/productivity, hurts competitiveness in a bad way, encourages further regulatory avoidance, and encourages maintaining human-performed jobs for their own sake, which I think is a detrimental mentality long-term. Tax the profits, plug loopholes, hold corporations to account - the things that should be done anyway.

2

RabidHexley t1_j8ezwsb wrote

>usually follow up with something like "anyway I don't like to think about that kinda stuff."

I think for most people who aren't techie types or otherwise nerdy enthusiasts, this is very much the common factor. Unless it's a specific interest of yours, thinking about a rapidly changing future on the macro scale tends to be more of an exercise in anxiety.

2

RabidHexley t1_j7gkk7o wrote

If we had automated systems capable of even Level 1, it would completely change the nature of our economy. Forget it being a thing you buy; such a robot would be replacing most manual labor.

Beyond that, tons of individual AIs housed in discrete, humanoid bodies isn't really a great design for a lot of reasons, and doesn't reflect a realistic use of this tech in my opinion. An AI capable of your Level 2/3 tasks would already be changing the fabric of how our world works. In your hypothetical about staffing, it'd be more like having a distributed system for the building that operates all of the local units.

Otherwise, this would be something incredibly expensive, bought only by the wealthy and some of the upper-middle class - think $50k-$500k+ at the very least (just look at how much it costs to get one of those robot-arm camera operators, for instance). For everyone else it would be something they'd interact with as a service, like an app where you can hire a cleaning service, fast food workers, or the robot that picks up the trash.

1

RabidHexley t1_j6o5ji0 wrote

That's my thought as well, though it could mean that the AI can pass the Turing test "continuously": changing topics and returning to previous topics without any oddities occurring.

Because yeah, a properly pre-prompted ChatGPT without hard topic limits (so no "I'm afraid I can't do that" moments) put against an unaware subject could definitely fool a lot of people for at least a short conversation.

I feel like a true capital-P "Pass" of the Turing test would be something like a model that can be provided with a persona, background data, and history (or come up with one on the fly), and carry on a conversation of arbitrary length consistent with that persona, with the subject believing it to be human.

And then have that same subject come back on a following day and continue conversing with the model in a manner consistent with time having passed in the life of the simulated persona.

Even if there were still some limitations, that would be the point where I'd pretty much consider conversational AI a "solved" problem, since it would just be a matter of degree: the point where something like an AI assistant can provide a consistent experience of "personhood" (even if that person is an AI).
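A minimal sketch of what that persistent persona could look like, assuming a generic `llm()` call and a made-up persona/state file (all hypothetical):

```python
import datetime
import json
import os

def llm(prompt: str) -> str:
    """Hypothetical stand-in for a model call; not a real API."""
    raise NotImplementedError

STATE_FILE = "persona_state.json"  # hypothetical local path

def load_state() -> dict:
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)
    # Fresh persona with a backstory the model has to stay consistent with.
    return {"persona": "Sam, 34, a ferry mechanic in Seattle", "history": []}

def chat(user_msg: str) -> str:
    state = load_state()
    today = datetime.date.today().isoformat()
    # Feed the persona, past conversations, and today's date back in, so the
    # model can behave as if time has passed in the persona's life.
    prompt = (
        f"You are {state['persona']}. Today is {today}.\n"
        f"Past conversations: {json.dumps(state['history'][-50:])}\n"
        f"User says: {user_msg}\n"
        "Reply in character, consistent with your past conversations."
    )
    reply = llm(prompt)
    state["history"].append({"date": today, "user": user_msg, "reply": reply})
    with open(STATE_FILE, "w") as f:
        json.dump(state, f)
    return reply
```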

By the time that problem is solved, though, we will almost certainly be capable of making multimodal pseudo-AGIs work at the very least. So it's hard to say how many years it will take to solve the problems with current models that prevent this capability.

4

RabidHexley t1_j6k9ko7 wrote

We'll see - I'm not an oracle - but things rarely develop that quickly in the real world. Name a technology that was invented (i.e. reached the point where it became actually viable to use) and didn't have a pretty lengthy turnaround before being implemented in widely used products or industries. Many fields may be downsized in the coming decade, but it's impossible to predict the degree.

If anything could be considered a good bet, it's that "menial mental labor" tasks will be phased out first: typing up and evaluating reports, rote clerical tasks, data entry, etc. These are the types of tasks that near-term AI tech will have the easiest time actually replacing humans for.

I'm talking specifically about the types of tasks we'd typically want someone highly trained or educated for (since the question is about someone going to college), not about labor in general.

2

RabidHexley t1_j6k3jse wrote

Agreed - not only because the technology doesn't yet exist, but because once it does, it's impossible to accurately predict how it will be adopted and implemented by various industries, on wildly unpredictable timelines, or what the actual impact will be once they do.

There's simply far too much to speculate on for any practical advice to be meaningful in this regard.

I think even when people aren't overestimating the rate at which AI tech will progress, they do overestimate how rapidly its effects will be felt in our actual lives. Once new tech is actually viable there are still significant delays before it's implemented, and even AI is subject to this. Even if an AGI came out tomorrow, it would likely still be at least a decade before our lives were drastically changed by its development.

2