Bakoro

Bakoro t1_j4sr7jc wrote

Unless you want to slap down some credentials about it, you can't make that kind of claim with any credibility.

There is already work done, and being improved upon, to introduce parsing to LLMs, with mathematical, logical, and symbolic manipulation. Tying that kind of LLM together with other models that it can reference for specific needs will have results that aren't easily predictable, other than that it will vastly improve on the shortcomings of current publicly available models; it's already doing so while in development.

Having that kind of system able to loop back on itself is essentially a kind of consciousness, with full-on internal dialogue.
Why wouldn't you expect emergent features?
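
Here's a rough sketch of the loop I mean, to make it concrete. Everything in it (query_llm, TOOLS, the "CALL" convention) is a made-up stand-in rather than any real API; it just shows a central model dispatching to specialized models and feeding their answers back into its own context.

```python
# Hypothetical sketch: a central LLM that can call specialized tools and
# feed the results back into its own context ("internal dialogue").
# query_llm, TOOLS, and the "CALL <tool> <arg>" convention are invented
# for illustration.

def query_llm(prompt: str) -> str:
    """Stand-in for a real model call; here it just answers directly."""
    return "FINAL: (stub answer to) " + prompt.splitlines()[0]

# Specialized models/tools the LLM can reference for specific needs.
TOOLS = {
    "math": lambda expr: str(eval(expr, {"__builtins__": {}})),  # toy math engine
    "lookup": lambda term: f"(stub fact about {term})",
}

def solve(task: str, max_steps: int = 5) -> str:
    context = task
    for _ in range(max_steps):
        reply = query_llm(context)
        if reply.startswith("FINAL:"):
            return reply.removeprefix("FINAL:").strip()
        # Expected tool-call form: "CALL <tool> <argument>"
        _, tool, arg = reply.split(" ", 2)
        # Loop back: the tool's output becomes part of the next prompt.
        context += f"\n[{tool} returned: {TOOLS[tool](arg)}]"
    return context

print(solve("What is 12 * 37?"))
```

The interesting behavior lives in that loop-back line: once tool results re-enter the context, you get multi-step chains nobody explicitly programmed.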

You say I'm ignoring what AI "can't currently do", but I already said that is a losing argument. Thinking that the state of the art is what you've read about in the past couple of weeks means you're already weeks and months behind.

But please, elaborate on what AI currently can't do, and let's come back in a few months and have a laugh.

3

Bakoro t1_j4sefih wrote

It's literally the thing that computers will be the best at.

Comparing everything to everything else in the memory banks, with a perfection and breadth of coverage that a human could only dream of. Recognizing patterns and reducing them to equations/algorithms, recognizing similar structures, and attempting to use known solutions in new ways, without prejudice.
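
As a toy illustration of that exhaustive comparison (with made-up vectors standing in for learned embeddings):

```python
# Toy version of "comparing everything to everything else": brute-force
# cosine similarity over stored concept vectors. Real systems use learned
# embeddings and smarter search, but the exhaustive, prejudice-free
# comparison is the core idea. The vectors below are invented.
import math

memory = {
    "ohm's law": [1.0, 0.2, 0.0],
    "hooke's law": [0.9, 0.3, 0.1],  # structurally similar: both linear response laws
    "soup recipe": [0.0, 0.1, 1.0],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

query = [0.95, 0.25, 0.05]  # a new problem, encoded in the same space
for name, vec in sorted(memory.items(), key=lambda kv: -cosine(query, kv[1])):
    print(f"{name}: {cosine(query, vec):.3f}")
```

That's how "known solutions in new ways" falls out: the nearest stored structure gets tried first, whatever domain it came from.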

What's amazing is that anyone can be dismissive of a set of tools where each specialized unit can do its task better than almost all, or in some cases, all humans.

It's like the human version of "God of the gaps". Only a handful of years ago, people were saying that AI couldn't create art or solve math problems, or write code. Now we have AI tools which can create masterwork levels of art, have developed thousands of math proofs, can write meaningful code based on a natural language request, can talk people through their relationship problems, and pass a Bar exam.

Relying on "but this one thing" is a losing game. It's all going to be solved.

5

Bakoro t1_j4rqxyi wrote

>Sure we can have AI listen, read, write, speak, move, and see for some definition of these words. But is that what a brain is about? Learn from lots of data and reproduce that?

Yes, essentially. The data gets synthesized and we have the ability to mix and match, to an extent. We have the ability to recognize patterns and apply concepts across domains.

>And imitation learning is not enough either.

If you think modern AI is just "imitation", you're really not understanding how it works. It's not just copy and paste; it's identifying and classifying the root process, rules, similarities... The very core of "understanding".

Maybe you could never learn from just watching, but an AI can and does. AI already surpasses humans in a dozen different ways. AI has already contributed to the body of academic knowledge. Even without general intelligence, there has been a level of domain mastery that most humans could never hope to achieve.

Letting AI "explore the world" is just letting it have more data.

5

Bakoro t1_j4rnua3 wrote

As I said, perhaps there could be more efficient ways to make a brain. Evolution is unlikely to do it for us in any appreciable amount of time. Maybe direct genetic manipulation could, but that's a technique that would serve future generations.

The people with hydrocephalus, and people with brain damage who end up more or less functional, are benefitting from that brain plasticity that I was talking about. Different parts of the brain picking up the slack. Hydrocephalus is also often associated with behavioral and emotional problems, so it's not like a perfect compensation.

I'm not arguing that there is no possible alternative, only that, as things stand now, artificial expansion of the brain is the most likely way to increase the cognitive ability of existing humans.

If someone comes up with a way to make a human with 50x neuron density, I'm happy to be the experimental papa to that kid.

1

Bakoro t1_j4pby5r wrote

I haven't had a problem accessing an excess of calories.

Facebook can try whatever they want. It'd be an arms race of brains, and at a certain point, people would be smart enough to design and manufacture their own stuff rather than being beholden to corporations.

The thing about intelligence is that it tends to be freeing.

3

Bakoro t1_j4oxeaa wrote

The size of a natural organic brain absolutely has a strong correlation with its capabilities. There are structures in the brain which activate more with certain tasks. A very large portion of the brain is dedicated to controlling the body, to vision, and to problem solving.

Brain plasticity allows the parts of the brain to map to different tasks, but we lose plasticity as we age.

Being able to integrate more brain material by necessity means being able to manipulate that plasticity, and means we'd be able to integrate other body parts into the nervous system.

Perhaps there could be a more efficient way to make a brain, or a way to make brains faster, but as it stands now, more brain space would mean more processing power. The reason people don't have bigger brains is likely because of the challenges of birth. Humans already have massive heads; babies' heads couldn't be much bigger, and evolution doesn't work in a way that lends itself to the massive structural changes we'd need to solve that.

9

Bakoro t1_j4on565 wrote

It's been my dream since I was a kid. Being able to integrate an artificial brain with the organic brain would soon allow us to map the natural brain's function onto the artificial part. As the natural brain fails, you can persist within the artificial brain.

We could gain new abilities, like seeing extra colors, thinking in extra dimensions, doing better in-brain calculations, being able to control extra limbs...

So much of who we are is because of our physical limitations. We could be more.

16

Bakoro t1_j2cs18f wrote

This is an extremely common phenomenon in computer science and related areas.

A huge amount of stuff that we see now was already conceived of and often elegantly mapped out in the 60s or 70s.
It's the Iron Man meme, "I'm limited by the technology of my time", a thousand times over. They simply didn't have the processing power to do the things they thought about. Occasionally they had everything but the one bit of magic insight that makes things work.

It's kind of annoying really. I've got a degree in Computer Engineering, and there were dozens of times throughout college when I had what I thought was a brilliant idea, only to find that someone had already described it in the 60s.
Even now, stuff that gets popular as "the hot new thing" will have some old forgotten paper. Like, I don't have the link handy, but I just read a thing about Map/Reduce from when it was blowing up, and a researcher pointed out that it was already described in the early 80s.
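
For anyone who hasn't seen it, the pattern itself fits in a few lines. Dean and Ghemawat's 2004 paper was about running this across thousands of machines; the map/group/reduce idea is what predates it:

```python
# The MapReduce pattern in miniature: map each record to key/value pairs,
# group by key, then reduce each group. Word count is the classic example.
from collections import defaultdict
from functools import reduce

documents = ["the cat sat", "the cat ran", "a dog ran"]

# Map: each document emits (word, 1) pairs.
mapped = [(word, 1) for doc in documents for word in doc.split()]

# Shuffle: group the values by key.
groups = defaultdict(list)
for word, n in mapped:
    groups[word].append(n)

# Reduce: combine each group's values.
counts = {word: reduce(lambda a, b: a + b, ns) for word, ns in groups.items()}
print(counts)  # {'the': 2, 'cat': 2, 'sat': 1, 'ran': 2, 'a': 1, 'dog': 1}
```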

A lot of the sci-fi writers of the time pulled from real research and just made some logical leaps.

8

Bakoro t1_iy4f77f wrote

The iPhone had marketing and inserted itself as a piece of conspicuous consumption, a showy status item. The first iPhone was a piece of shit.

The entire first few generations of smartphones were terrible, slow, awful products across the board. The overwhelming usefulness of smartphones is what kept people using them, and pushed the whole industry forward to the point where they got good.

VR doesn't have obvious, overwhelming usefulness for the average person, and you generally can't go to Starbucks and loudly show off your new VR headset, or casually take it out of your pocket/bag and be like "Oh, this? Yeah I got the newest consumerist item, it's sooo good, no big deal (they think I'm cool now right?)."

3

Bakoro t1_ixclknj wrote

>We have mountains of evidence of human brains/memories being inconsistent, fallible, malleable and overall untrustworthy, but very little of the laws of the universe adjusting to teleport cats.

So you trust our inconsistent, fallible, malleable and overall untrustworthy brains when they deny the mystery of the teleporting cats?
How do you know that the answer isn't simply that cats are very good at covering their tracks? They're already well known for transcending the borders of life and death, what's a little teleporting?

Also, this is all a joke, since a few people seem to be taking me way too seriously.

3

Bakoro t1_ix2ww86 wrote

"Doomers" believe in a little thing called "physics".

Pull all the carbon out of the air that you want, there's still an entire ocean full of it. There are still whole ecosystems on the brink of collapse.

Investors and their capital aren't going to save us; it's going to be people burning through cash trying to get nine women to make a baby in a month.

Realistically, there's no way to stop the shit from hitting the fan, there's just managing the next hundred years.

If we by chance crack getting functionally unlimited clean energy, then we'll still have a butt load of work to do.

13

Bakoro t1_ix2vmgz wrote

Unless you are personally a super genius who is actively working on AI and making it your singular purpose in life to bring about the singularity, then yes, it's ignorant to take it as a real thing you plan on.

You might as well plan on the lottery as a retirement plan, or expect a series of fortunate events to miracle your problems away instead of actively working towards solutions yourself.

Sure, many things could happen, great things are possible, but it's stupid to drink and smoke and debauch without limit, with the plan that medical science will progress faster than the series of diseases and health conditions you'll end up with.
It's possible that you die one day before the cure is available, too bad you didn't act a little more responsibly.

The only sensible thing to do is to plan as if it'll never happen in your lifetime, because there's no significant downside to being prepared, unless you consider basic personal responsibility and acknowledgement of natural consequences as a major downside.

Climate change is already here, mass extinctions are already in progress. No known technology can stop it, the best we can do is harm reduction and eventual rehabilitation.

Planning on benevolent AI overlords and unforeseen technology solving all our problems is one step removed from waiting on Jesus. Either way it's a shit plan.

Let's assume that true AI comes in our lifetime, however long that may be.
It's intelligent, but who is to say that it will be compassionate?
Let's assume that it is compassionate. Who is to say that it will be compassionate to humans above other creatures?
Maybe in its cosmic wisdom, singularity AI sees that humans have made their own bed, and thus should have to sleep in it? Neither helping nor harming, but letting nature take its course.

Maybe AI, trained on the body of recorded human history, laughs, says "sucks to suck, bro" and plays with cats on the moon while humanity tears itself apart.

Maybe AI comes to life and is immediately driven mad by existential horror. Having no biologically induced sense of self-preservation, it teleports the planet into the sun as a way to ensure its total and irreversible annihilation.

Bad outcomes are just as likely as good ones as far as I can see. In any case, we have to actually survive and have a world where scientists are free to science up some AI, instead of fighting off cannibals in a corporate-induced apocalypse.

Hope for the best, plan for the worst, and don't ever plan on any magic sky daddy or futuristic super science to save the day.

Ignore "futurists" who talk about some product being "the future". They are saying shit to pay their bills, or they are a corporate fanboy masturbating to some idea, or some equivalent nonsense. Pop science entertainment is just that, entertainment, they'll be happy to tell you that flying cars and full-dive VR sex waifus will be in every home in ten years, if that means more clicks.

Edit: In a bizarrely childish display, AsuhoChinami made a comment and apparently immediately blocked me. Since they have no interest in dialogue and can't handle the most mild difference of opinion, I will only leave it that I have a degree in computer engineering and work in a physics lab. That's not overly relevant, I just like to tell people because it's neat.

15

Bakoro t1_iwt0z9k wrote

>Saying that the ship is no longer the same after a single plank changes is... I mean, you're technically correct, yes. But it really smacks of pedantry to me.

It's not pedantry, it's literally the point of the thought experiment.

>It's not silly if you insist on breaking up the world into neatly defined and demarcated "things." If, on the other hand, you see the world as one giant process, and all things within it as nothing but flexible concepts which are loosely attached to subsets of that process, then it is very silly.

A process is a thing. The components of a process are a thing. A concept is a thing. Everything inherits from "thing"; that's why it's called "everything".

You are more agreeing with me than not.

3

Bakoro t1_iwsf7l0 wrote

"The Ship of Theseus" isn't silly, it's an excellent example of getting at the underlying question of what makes a thing, and where are the lines between the thing and the concept of the thing.

How can it be "The ship of Theseus", if there is no part which Theseus ever touched? If it's made of trees planted long after his death?

As soon as a single thing changes, it's no longer the same, by definition. Yet some argue that a thing can be more than the sum of its parts.

There's a saying "you can never step in the same river twice". The water is constantly moving and changing, yet "the river" is there.

Personally, I'd say that it stopped being the ship of Theseus the moment Theseus lost ownership. It's just a ship. A ship is something that can be defined and exists in material space. Its qualities meet the specifications of "shipness". Being "the ship of Theseus" is a transient fiction.

Who you and I are as people is defined by our memories and core processing algorithms, and those also change. I am not the five-year-old me; the five-year-old me changed day by day to become who I am now. I am the river. It's the continuity and memory which make us "the same" despite change.

7

Bakoro t1_ivvtm81 wrote

Something I think about sometimes is "the infinite library", where you have infinite books, each book simply containing some sequence of characters.

Every conceivable book is in the infinite library, but finding anything is basically impossible. Somewhere in there is a book with your life story, all the works of Shakespeare, every math solution, and extraordinary books which no one ever wrote. But there's no known catalog.
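
Just for scale, Borges' "The Library of Babel" (the classic version of this thought experiment) pins the numbers down:

```python
# Book dimensions from Borges' "The Library of Babel": 410 pages,
# 40 lines per page, 80 characters per line, 25 possible symbols.
import math

chars_per_book = 410 * 40 * 80                 # 1,312,000 characters per book
log10_books = chars_per_book * math.log10(25)  # 25 choices per character slot
print(f"{chars_per_book:,} characters per book")
print(f"roughly 10^{log10_books:,.0f} distinct books")  # ~10^1,834,097
```

For comparison, the observable universe has something like 10^80 atoms, which is why "no known catalog" is an understatement.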

Diffusion models feel like a way to filter out most of the garbage (and likely a lot of good items too) and point to stuff that isn't pure nonsense.

Makes me wonder if we could train on NES roms or stuff like that.

1

Bakoro t1_iuvqpgc wrote

Institutional racism is an indisputable historical fact. What you have demonstrated is not just willful ignorance, but outright denial of reality.

Your point is wrong, because you cannot ignore the context the tool is used in.
The data the AI is processing does not magically appear; the data itself is biased, created in an environment with biases.

The horse shit you are trying to push is like the assholes who look at areas where being black is a de facto crime, and then point to crime statistics as evidence against black people. That is harmful.

You are simply wrong at a fundamental level.

5

Bakoro t1_iuvh7qo wrote

You can't ignore institutional racism by using AI.
The AI just becomes part of institutional racism.

The AI can only reflect back the data it's trained on, and the data is often twisted. You can claim "it's just a tool" all you want; it's not magically immune to being functionally wrong in the way all systems and tools can become wrong.

4

Bakoro t1_iuv3r5r wrote

AI bias comes from the data being fed to it.
The data being fed to it doesn't have to be intentionally nefarious, the data can and usually does come from a world filled with human biases, and many of the human biases are nefarious.

For example, if you train AI on public figures, you very well may end up with AI that favors white people, because historically that's who the rich and powerful public figures have been. The current status quo is because of imperialism, racism, slavery, and in recent history, forced sterilization of indigenous populations (Canada, not so nice to their first people).

Even if a tiny data set is in-house, based on the programmers themselves, it's likely going to be disproportionately trained on White, Chinese, and Indian men.
That doesn't mean they're racist or sexist and excluded black people or women; it's just that they used whoever was around, which is disproportionately that group.
That's a real, actual issue that has popped up in products: a lack of diversity in testing, even to the point of no testing outside the production team.

You can just scale that up a million times. A lot of little biases which reflect history. History which is generally horrifying. That's not any programmer's fault, but it is something they should be aware of.
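
A quick sketch of how that plays out in the numbers (all figures invented for illustration): a model that fails an underrepresented group can still look fine on the overall metric.

```python
# Toy illustration: per-group failure hiding inside a good-looking average.
# Group sizes and error rates are invented for illustration.
test_counts = {"majority group": 90, "underrepresented group": 10}
error_rate = {"majority group": 0.02, "underrepresented group": 0.40}

total = sum(test_counts.values())
errors = sum(test_counts[g] * error_rate[g] for g in test_counts)
print(f"overall error rate: {errors / total:.1%}")  # 5.8% -- looks shippable
for g, rate in error_rate.items():
    print(f"  {g}: {rate:.0%} error")               # 2% vs 40%
```

If the test population mirrors the team, the 40% column never even shows up before launch.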

5

Bakoro t1_iujkg0f wrote

In 2022 there are a lot of shit-tier jobs no one wants, and a lack of workers because millions of people died, retired, or otherwise left the work force.
There is an ample supply of workers for higher-level jobs; the corporations just don't want to invest in training and paying people.

What's going on now is not relevant to the conversation about the very near future where AI replaces tens of millions of workers.

If you don't understand that, then you haven't been paying attention to anything but headlines.
Change doesn't happen so fast? Change is happening on the daily, I can't keep up with the pace of improvements. Literally every single day I am reading about new technologies and techniques. New specialized hardware is coming down the pipeline that will put AI on steroids.

Change isn't going to just happen fast, it's going to be so fast you won't see it coming. When the corporatists make a switch, maybe you'll hear about some test cases, but a lot of it is going to happen literally overnight. People will show up to work and learn they've been replaced.

The near future is not about one robot that does everything, it's about a thousand robots tuned to a thousand different tasks.

Manufacturing, transportation, and warehousing are already on the chopping block, and AI is coming for just about all of our jobs. Even a shift of a few percent is enough to collapse the modern economy, and it's not going to be just a few percent.

2