Thebadmamajama

Thebadmamajama t1_j9kta95 wrote

Right. For code I think it's OK if 90% is accurate, and you can clean up the rest. Something secondary has to compile it and verify it does what you intended.
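That "secondary check" can be as simple as a script that compiles the generated snippet and runs a few assertions against it before anyone trusts it. A minimal sketch (the sample snippet, the `verify` helper, and its checks are all hypothetical):

```python
# Minimal "secondary check" for generated code: compile it,
# then verify its behavior with assertions before trusting it.
GENERATED = """
def add(a, b):
    return a + b
"""

def verify(source: str) -> bool:
    try:
        code = compile(source, "<generated>", "exec")  # does it even parse?
    except SyntaxError:
        return False
    ns: dict = {}
    exec(code, ns)  # load the function into a scratch namespace
    # Behavioral checks: does it do what we intended?
    return ns["add"](2, 3) == 5 and ns["add"](-1, 1) == 0

print(verify(GENERATED))  # → True only if it parses AND behaves
```

The point of the comment is exactly this asymmetry: code gets a mechanical gate like the one above, while a factual claim dropped into a conversation gets nothing.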

No such secondary check exists for transferring knowledge. It goes into someone's brain, and they either accept it as fact or decide to research it and realize it's wrong. That's fundamentally unhelpful.

3

Thebadmamajama t1_j8l41cx wrote

In many ways we haven't really consciously decided what is at the core of being human. We've been willing to automate a lot and we're seeing limits.

We have largely used technology to ease our lives, and later automate things. We're more efficient at transportation, farming, manufacturing and communicating with each other. That has made things faster, helped us produce more, and create abundance.

Here's where we see limits.

If our communications are automated with avatars/AI, what's the purpose of f2f communication? Turns out, avoiding depression and having the ability to resolve conflict.

If our entertainment is entirely solitary due to hyper-personalization, what's the point of shared events and experiences? Turns out, building real relationships and social connections.

If our economy is mostly automatic, what's the purpose of jobs that produce things? Turns out, teaching, coaching, therapy and other human experiences are the things we can uniquely produce.

So to me, technologies will try to commercialize automating everything. They will hit these limits, create shitty consequences, and guardrails will arise.

But history tells us that we need to see the shitty side of things to care and establish the guardrails.

1

Thebadmamajama t1_j87056q wrote

Reply to comment by Grotto-man in Open source AI by rretaemer1

Working in a few related fields, they are already being combined to some extent. We have machine perception, where the bot can often find objects in the world around it, and do things like pick them up and move them around. On the other end you have all these deep learning methods that can help simplify large data sets, which makes it easier to find things more reliably. The problem is they are all probabilistic... The machine will easily confuse objects (a dog for a loaf of bread), and then it can misjudge the world around it and unintentionally break things or hurt people.

There are also practical issues: power and sensors are all still in the early days, and largely inefficient and otherwise expensive. Most of the bots only have minutes of runtime before they need to charge again.

"Intelligent" and "helpful" are tall orders given all that... combining the above is still wildly far from intelligently working side by side with a human.

I think a whole new operating system needs to be invented that sits above all this and orchestrates things... receiving commands without confusing intent, interacting with the world without serious mistakes, and working with objects it can reliably identify.
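One way to picture that orchestration layer is a thin gate that refuses to act on low-confidence perception instead of plowing ahead. A toy sketch (the labels, scores, and the 0.9 threshold are all made up for illustration; a real system would get scores from a vision model):

```python
# Toy orchestrator: only act on perception results we trust.
CONFIDENCE_FLOOR = 0.9

def decide(detections: list[tuple[str, float]]) -> str:
    """Pick an action, or refuse when perception is ambiguous."""
    label, score = max(detections, key=lambda d: d[1])
    if score < CONFIDENCE_FLOOR:
        # Dog-vs-bread territory: don't touch anything, escalate instead.
        return f"ask_human (best guess '{label}' at {score:.2f})"
    return f"pick_up({label})"

print(decide([("dog", 0.55), ("loaf_of_bread", 0.52)]))  # ambiguous → refuse
print(decide([("coffee_mug", 0.97), ("bowl", 0.21)]))    # confident → act
```

The gate doesn't make perception less probabilistic; it just keeps the probabilistic layer from driving the actuators when it's guessing.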

1

Thebadmamajama t1_j6ggi8f wrote

💯. Passing tests notwithstanding, the error rate and limits start to show themselves quickly. I've also found cases where there's repetitive information that leaves you believing there aren't alternative options.

2

Thebadmamajama t1_j6ff9ok wrote

This is wrong imo. An enormous number of companies can't make software to compete with big tech. Software engineers who are more efficient could feasibly be hired across small, medium sized businesses, and throw in nonprofits and government too. They all struggle to deliver services as good as big tech.

This will dramatically reduce the number of them in big tech companies, but unleash many more across an economy who views software as magic and too expensive.

2

Thebadmamajama t1_j6bajh2 wrote

I think this is why ChatGPT (and LLM transformer models generally) is dangerous. It is a probability machine, not some form of generalized intelligence.

You give it a question or instruction, and it's highly capable of producing the most probable response based on billions of articles, forum posts, and writings across the internet. Nothing more, no magic. It doesn't understand what you are asking, and it can't reason about the words. It's just picking the highest-probability words that come next.
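The "picking the highest-probability words that come next" part can be sketched in a few lines. This toy uses hand-made bigram counts instead of a trained transformer, but the loop has the same shape: look at the context, take the most probable next token, repeat.

```python
# Toy next-token predictor: greedy decoding over bigram counts.
# A real LLM replaces this lookup table with a neural net trained on
# billions of documents, but the core is still "most probable next token".
BIGRAMS = {
    "the": {"cat": 5, "dog": 3},
    "cat": {"sat": 4, "ran": 1},
    "sat": {"down": 2},
}

def generate(token: str, max_len: int = 5) -> list[str]:
    out = [token]
    while token in BIGRAMS and len(out) < max_len:
        # Greedy: pick whichever continuation was seen most often.
        token = max(BIGRAMS[token], key=BIGRAMS[token].get)
        out.append(token)
    return out

print(generate("the"))  # → ['the', 'cat', 'sat', 'down']
```

Nothing in that loop checks whether the output is true; it only checks what was statistically likely to follow, which is the whole point of the comment.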

Now, could a realtime AI be created to look for the probability of fake news? Maybe. The issue with fake news is that the truth is not always immediately available. So an AI (like humans) might be in a position to say "I can't confirm this is real or fake" for a while before the lies spread out of control. Solve that problem, and we can automate it later.

1

Thebadmamajama t1_j41pbd4 wrote

Yes. To me there are two waves that have hit a wall.

  1. p2p protocols. At the height of the file-sharing craze, there was a real sense that we could leverage the internet as a big hive mind. Media companies freaked out and sued with every tool they could, which pushed all the innovation to the cloud, where centralized policies were pitched as easier. So we've been pulled in the direction of the cloud since then, but at least we have a lot of comp sci on how to build world-scale p2p networks.

  2. Blockchain, which is dominated by really inefficient distributed protocols, but it focused on how to prevent tampering in a distributed environment. That's huge, because in the first wave there was no solid way to prevent poisoning the network with bad data (also a tactic media companies used). But this wave has hit a wall because the tech has been used predominantly for classic scams, with people losing billions of dollars worldwide. That hasn't fully played out, but there's a real risk regulation comes in, because we can't have a shadow finance system that can tank the economy. And I suspect it's going to encourage centralization all over again.
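The tamper-prevention piece of that second wave boils down to hash chaining: each record's digest covers the previous record's digest, so changing any earlier record breaks every digest after it. A bare-bones sketch (the record strings are made up; real chains add signatures, consensus, etc.):

```python
import hashlib

def chain(records: list[str]) -> list[str]:
    """Hash-chain records: each digest covers the record AND the previous digest."""
    digests, prev = [], "0" * 64  # genesis: all-zero previous hash
    for rec in records:
        prev = hashlib.sha256((prev + rec).encode()).hexdigest()
        digests.append(prev)
    return digests

honest = chain(["alice pays bob 5", "bob pays carol 2"])
tampered = chain(["alice pays bob 50", "bob pays carol 2"])

# Changing the first record changes EVERY later digest, so tampering
# is evident to anyone who holds the chain tip.
print(honest[-1] != tampered[-1])  # → True
```

This is the property the first (p2p) wave lacked: a cheap way for peers to detect poisoned data without trusting each other.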

A different thought exercise is what happens after this. Communication and trust are somewhat solved, but there are properties of centralized systems we keep getting yanked back to. That gap is the next wave of innovation imo.

5

Thebadmamajama t1_j1jiz0e wrote

More bullshit. Activision has significantly more games than CoD, and this concentration of IP across console, PC and mobile games is unprecedented.

This is like when Facebook bought Whatsapp, and consolidated billions of users to their platform.

I can see the downvoting brigade on each of these posts. Don't get fooled. This is ridiculous market concentration.

45