RabidHexley
RabidHexley t1_j64eeox wrote
If there is any worry I have about AGI/ASI, it would be about it being in the hands of malicious actors/nation-states. The assumption that we're going to hand the reins over to an algorithm that just decides to kill us (for some reason) strikes me as largely baseless speculation.
RabidHexley t1_j5usxxx wrote
Reply to comment by dasnihil in it seems like the tipping point is coming soon... by captain_gumpy
> We will be soon desensitized with art as we know it today (pretty looking photos, beautiful landscapes, renaissance work), because when things become too cheap to produce without requiring any expertise, it automatically diminishes in value and society will come up with new trend
I already kind of feel like this is the case. Not on an individual level, I can still be impressed and enjoy work, and I don't think that will really change.
But what were previously virtuosic displays of skill in many domains aren't remotely as novel as they used to be. Simply due to the mass proliferation of skill and material access, the number of people who possess the technical skill of those previously considered masters is fairly huge.
If you're an aspiring artist or creative in any domain, you already have to assume there are thousands and thousands of people who are as good or better than you will likely ever be. AI changes the economic paradigm to be sure, but I'm not sure how much AI will change the emotional paradigm of why people pursue creation.
RabidHexley t1_j5u29qf wrote
Reply to comment by KSRandom195 in Anyone else kinda tired of the way some are downplaying the capabilities of language models? by deadlyklobber
>In that context you are implying that LLMs will be like the Industrial Revolution and replace our need to think.
To play devil's advocate, there are a lot of applications specifically for LLMs (and other AI applications) that could easily end up replacing a lot of "thinking human"-type jobs or tasks. Typing up reports, contract evaluation, code translation, etc. There are plenty of jobs that today require human thought and intuition that are the mental equivalent of manual labor. The kind of tasks that would previously go to "junior" positions in a lot of fields.
There would obviously still be people involved, but the AI in question is replacing a lot of the (thinking) manpower that previously would have been required. The same way a few farm workers can till hundreds of acres of fields with the assistance of industrial machinery.
Or the way computers replaced the rooms filled with dozens upon dozens of women running manual calculations for accounting firms.
Even if we never moved beyond the current types of AI tech we're seeing today, and only continued making them better and more efficient (without any kind of "AGI revolution"), the implications as far as force multiplication do seem fairly similar to many previous revolutionary technologies.
GPT-3 has been around for a couple of years, but it's also only been a couple of years: long in tech terms, but not long at all for human-scale development of brand-new stuff. It's also an early version of tech that has only in recent years become sophisticated enough to actually be useful (that the public knows about).
Most importantly, it's also not a complete product, but the backbone for a potential product (ChatGPT being an early alpha of something like an actual product). Even if GPT-3 itself were ready for prime time (which I don't think it is), it would still take years before products built on it began to actually change the game.
The iPhone was conceptualized many years before actually reaching its final design and being released. It was also built on mobile technology that existed before it and on the backs of many previous mobile touchscreen devices. And even then it only became widely recognized as the truly revolutionary product it was (as opposed to just a really cool phone) once the smartphone revolution actually kicked off a few years later.
This applies to AIs working in other verticals as well: making what was previously only possible (or not possible at all) with a ton of people or computational power achievable with far, far less. We don't yet have the insight to understand the full scope of the implications.
RabidHexley t1_j5onv6d wrote
Reply to comment by LeIAmNeeson in Can humanity find purpose in a world where AI is more capable than humans? by IamDonya
My interpretation of "life has no purpose" is that we make our own purpose. There is no prescribed purpose native to the universe in and of itself; we make it. As the world changes around us we change the things we find purpose in, a lot of that being a means of adapting to the lives we live (whether we chose to live that life or not).
My point about technology overtaking us is that humans still partake in activities that could objectively be performed better or far more easily through the assistance of machines. We willingly forgo machine assistance in pursuit of a lived experience.
Just look at /r/mightyharvest. These folks aren't providing for anyone from the fruits of their labor, and practically speaking a small home garden for produce is inefficient to the highest degree, but joy is still found from the mere pursuit. Should they instead use that time to try and become doctors, scientists, athletes or paradigm shifting artists of renown? Would that be a truer pursuit of purpose?
This kind of stuff wouldn't go anywhere.
AI won't replace our experience of life, nothing can. What it can do is hopefully create a future in which more people can choose what they want their purpose to be based on the life they want to live.
Edit: The problem with AI art, for instance, isn't that it replaces artists. It's that it makes it economically less feasible to be an artist, because it's harder to use your art to provide for yourself. It's not that it replaces the human desire to create art.
RabidHexley t1_j5oerqt wrote
Reply to comment by Baturinsky in Can humanity find purpose in a world where AI is more capable than humans? by IamDonya
Live happily knowing my loved ones are safe and taken care of?
RabidHexley t1_j5oeiqg wrote
Reply to comment by LeIAmNeeson in Can humanity find purpose in a world where AI is more capable than humans? by IamDonya
I'm sorry. I get what you're saying, and it may apply to some people, but I really don't think it does to many. And I don't see why saying otherwise makes someone an asshole. This also is an unnecessarily depressing post.
A key point here is what counts as "useless" or "worthless" for a human. In the current world we already have to accept that, for 99.99% of us, what we do could easily be accomplished by one of the other millions and billions of people on this planet. How different is an AI in this regard?
Everything that I personally get joy from in life, I have literally zero care if an AI is able to do it better, I'm still getting something out of it. And I'm not talking about hedonistic pleasures, I'm talking about genuine pursuits and passions. The only thing I care about is having the freedom to pursue those passions, not a need to express my unique ability as a human to perform a task.
AI doing what I can do better doesn't take away my desire to explore, to experience life, to enjoy the world and the people around me, to enjoy creating for its own sake (and not in an attempt to be the "best" at it).
We already have planes and cars, we already have computers that can realistically simulate virtuosic instrumental playing with programming, we have weapons that have invalidated human strength, massive machines that cultivate our food.
The domain of human superiority has already shrunk by magnitudes, but people still keep being humans.
RabidHexley t1_j5obf5s wrote
Reply to comment by YobaiYamete in Google AI's Great Comeback of 2023 - Will it be able to Respond to ChatGPT? by BackgroundResult
If their tech is around as good as ChatGPT, I understand their apprehension. ChatGPT reaches the level of being just good enough to be dangerous, and Google is already dealing with an abundance of regulatory scrutiny. An AI that can easily instruct you in criminal activity, or that confidently spits out misinformation as fact, was likely a can of worms they'd rather kick down the road.
People really do want to use this technology though, so their hand is being forced.
RabidHexley t1_j5m1nq0 wrote
Reply to comment by IamDonya in Can humanity find purpose in a world where AI is more capable than humans? by IamDonya
> there is no positive difference left to make.
I think this is the core of where your problem lies, in holding the belief that the key to a meaningful, fulfilling existence is by contributing to works. And by making a "difference".
We make our own meaning, most of us find meaning in our lives while doing tasks that millions of other people could do just as well, or by finding personal fulfillment in hobbies or passions that don't provide a practical benefit to anyone other than a personal sense of joy and self-expression.
There isn't any amount of skill an AI could hold that could take that away from us. Would AI farmers take away the joy of cultivating my own garden? Or of painting the perfect image of a sunset just as it feels in my own mind? Do I feel less fulfilled having climbed a mountain when a helicopter could get there in a fraction of the time? Could an AI prevent me from finding fulfillment in time spent with friends and family? Did potters give up their passion when manufacturing started producing high-quality pots and bowls by the millions? Did piano players stop learning when you could program realistic sounds in MIDI? We could go on, of course...
Our ability to find fulfillment in our lives isn't tied to any intrinsic need to complete practical tasks that only we can do. It comes entirely from within and from each other, and is something that is discovered by living in a way that fulfills our sense of self.
Trust me when I say that that isn't going anywhere.
In a world where AI replaced almost all practical tasks, there would certainly be individuals who feel crestfallen when the career or passion they pursued is no longer needed in the practical sense. But that drive was developed by living in a world where people were still needed to do those things, not because it was the only path to a fulfilling existence.
RabidHexley t1_j5lu71s wrote
Reply to comment by Original_Ad_1103 in People are already working on a ChatGPT + Wolfram Alpha hybrid to create the ultimate AI assistant (things are moving pretty fast it seems) by lambolifeofficial
I don't think they're saying that actual factory workers are unintelligent, but that an AI wouldn't need to simulate a great deal of intelligence in order to perform a lot of the menial tasks humans are made to do. Even many complex jobs or trades are largely task-oriented, demanding skill, but not necessarily great leaps of intuition to perform. Your average human is well above the necessary intelligence to perform the average job, but we do them because somebody has to (and because we need jobs, but that's a whole other thing).
RabidHexley t1_j5lthcl wrote
Reply to comment by TinyBurbz in People are already working on a ChatGPT + Wolfram Alpha hybrid to create the ultimate AI assistant (things are moving pretty fast it seems) by lambolifeofficial
> While I know art AI's make beautiful renderings, to me, their potential is squandered on the lazy. Getting more into this, AI art could be so much more if used as a tool. It could do amazing things like generating real-world textures allowing every tree in a game to be unique. But as it stands people seem so much more interested in letting AI do the work for them, instead of letting AI enhance the work they already have done.
This is the main thing that sticks out to me about the AI art revolution, in terms of how it'll really change the game. People are laser-focused on the idea of AI creating bespoke art pieces in their entirety. But a lot of art, be it illustration, animation, game design, comics, etc., contains a lot of tedious, repetitive "art work" that is only tangential to the artist's creative vision and could be automated by tech like this.
Another example would be something like a comic book, manga, or animated series, where the artist designs the world and art style, draws out the characters and their unique looks, etc., but is then able to use AI to rapidly generate backdrops or background characters that fit their specific style. This allows them to focus on the more specific, key, creative segments of the work.
This could drop the cost and massively increase the accessibility for mediums that currently require numerous tedious hours to produce an incredibly small amount of content, or huge teams of creatives made to do grunt work.
RabidHexley t1_j5l12ui wrote
Reply to comment by BackgroundResult in Google AI's Great Comeback of 2023 - Will it be able to Respond to ChatGPT? by BackgroundResult
Exciting stuff. If one thing can be attributed to ChatGPT, at the very least it's actually kicking off a ton of renewed interest in the field. Without the public splash and interest from Microsoft, the timeline likely would have been pushed back quite a bit.
RabidHexley t1_j56s60f wrote
Reply to comment by dasnihil in I was wrong about metaculus, (and the AGI predicted date has dropped again, now at may 2027) by blueSGL
> that kind of ask will take at least a 100 years to be implemented on our society. this is a big change.
I personally have come around to the thought that something like UBI being implemented due to automation won't be from compassionate, socialist ideals, but simply because it will become necessary for capitalism to continue functioning.
Reaching a point where you can produce arbitrary amounts of goods without needing to pay nearly anyone across numerous economic sectors is a recipe for rapid deflation. UBI would become one of the only practical methods of keeping the wheels turning and the money flowing.
Maybe after years of it being the norm it would lead to a cultural shift towards some sort of a properly egalitarian society, but it would start because hyper-efficiency resulting in economic collapse isn't good for anyone including the wealthy.
RabidHexley t1_j65xa8n wrote
Reply to comment by turnip_burrito in Superhuman Algorithms could “Kill Everyone” in Due Time, Researchers Warn by RareGur3157
I have no idea. I'm not saying "if bad people get it we're for sure screwed", just that if we do end up screwed it will most likely be because of bad or misguided people, rather than some inclination to destroy humanity on the AI's part.