KSRandom195

KSRandom195 t1_j39fo3z wrote

John Carmack is a very smart person, but he’s pulling this prediction out of his ass. We have no idea how much code would actually be required. Let’s also be clear that he’s trying to run an AI startup that requires funding, so he has every reason to be rosy about what can be accomplished. Maybe he’s onto something revolutionary in the realm of AGI, and I hope he is, but maybe he isn’t. Until he builds it end to end, it’s hypothetical.

Some scientists believe that what gives us consciousness (something some argue is required for AGI) is that parts of our brain are quantum entangled with other parts, but we have no idea how or why. Writing small pieces of code that might help toward that isn’t going to be super useful if quantum-entanglement hardware turns out to be required; that’s fundamentally different from anything you would build on a classical computer.

Yes people should experiment and play around with it. But they’re not going to get something that looks like intelligence in their basement.

2

KSRandom195 t1_j398ztl wrote

The problem is it is expensive.

You want an AGI, but you don’t just want an AGI, you likely want it to be at least as smart as a person.

A commonly cited estimate is that a single human brain holds about 2.5 petabytes of information. Backblaze, which specializes in storage, charges roughly $35,000 per petabyte. So that’s $87,500 in storage alone, and that’s not redundant or fast storage, that’s just raw capacity.

You need redundancy at that scale, so multiply by roughly 2: about $175,000, again only in storage.

Now you need compute. Estimates put the human brain at roughly 1 exaFLOPS. The world’s fastest supercomputer currently manages only ~1.1 exaFLOPS. It cost about $600,000,000 to build, and that doesn’t include the cost to maintain and run it.

And that’s assuming a 1:1 match with the brain’s speed is all we need.
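A quick back-of-the-envelope sketch of that math, in Python. The storage price, brain capacity, and FLOPS figures are the rough estimates quoted above, not hard facts:

```python
# Back-of-the-envelope cost of "brain-scale" hardware using the figures above.
# Every number here is a rough estimate quoted in the comment, not a measurement.

BRAIN_STORAGE_PB = 2.5                # estimated information content of a human brain
STORAGE_COST_PER_PB = 35_000          # raw (non-redundant) storage, $ per petabyte
REDUNDANCY_FACTOR = 2                 # simple 2x replication

BRAIN_COMPUTE_EXAFLOPS = 1.0          # estimated compute of a human brain
TOP_SUPERCOMPUTER_EXAFLOPS = 1.1      # roughly the fastest machine today
TOP_SUPERCOMPUTER_COST = 600_000_000  # build cost only, excludes running it

raw_storage = BRAIN_STORAGE_PB * STORAGE_COST_PER_PB   # -> $87,500
redundant_storage = raw_storage * REDUNDANCY_FACTOR    # -> $175,000
compute_share = BRAIN_COMPUTE_EXAFLOPS / TOP_SUPERCOMPUTER_EXAFLOPS

print(f"Raw storage:       ${raw_storage:,.0f}")
print(f"Redundant storage: ${redundant_storage:,.0f}")
print(f"Compute needed:    {compute_share:.0%} of a ${TOP_SUPERCOMPUTER_COST:,.0f} machine")
```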

This isn’t something you can just do in your basement, not with the tech we have today.

10

KSRandom195 t1_j28nork wrote

Robot tax.

You have to create incentives that discourage replacing workers with automation. A robot tax is one answer, and it could fund a UBI.

It’s a delicate balance, because you want to encourage automation but discourage full worker replacement, at least until the robot tax raises enough revenue to fund a UBI for all the displaced workers.

The UBI is needed because the economy doesn’t work without workers receiving income. If workers have no income they stop buying things, and then the robots aren’t needed either, because there’s no one left to buy what they make.
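As a toy illustration of that break-even math, every number below is invented purely for the example:

```python
# Toy sketch: how big a per-robot tax would need to be to fund a UBI for the
# workers those robots displace. All inputs are hypothetical placeholders.

DISPLACED_WORKERS = 1_000_000     # hypothetical number of displaced workers
UBI_PER_WORKER_PER_YEAR = 24_000  # hypothetical UBI, $ per year
ROBOTS_DEPLOYED = 500_000         # hypothetical robots doing the displaced work

ubi_bill = DISPLACED_WORKERS * UBI_PER_WORKER_PER_YEAR
tax_per_robot = ubi_bill / ROBOTS_DEPLOYED

print(f"Annual UBI bill:    ${ubi_bill:,.0f}")
print(f"Required robot tax: ${tax_per_robot:,.0f} per robot per year")
# If the tax exceeds what a robot saves over a worker, automation stops being
# attractive -- that is the delicate balance described above.
```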

15

KSRandom195 t1_j1fxhww wrote

It’s a complicated issue.

The existence of immigrant workers drives down wages for people who could fill those jobs locally. For instance, the current worker shortage is somewhere around 3 million people, and a full two-thirds of that is from people who didn’t immigrate because of the immigration policies put in place during the Trump administration. Because of that, we’re now seeing wages and job mobility increase: there is less labor supply for the same demand, so the price of labor goes up and workers have more power.

I’m not saying what he did was good, but for local workers it’s one of the fuels for the current power struggle over wages and workplace conditions, which may end up improving the worker’s situation.

The same applies to H-1B visas, which are popular at large tech companies but are designed for hiring specialists who aren’t available locally. Could tech companies find software engineers locally? Absolutely. Could they find enough? Probably. But there would be decidedly fewer tech employees, and that would have a similar effect: less supply, which means higher wages. With H-1B visas there is more supply, and that depresses local wages relative to what they would be without it.

Now, that leads to a fun question: why should we try to protect the wages of local employees at all? After all, these are global companies. The reality is that these companies chose to start or be headquartered in the US because it’s the best place to do their business. Something specific about US policy makes the US best able to host these companies, so logically those companies should give back through taxes and through wages to their local employees.

So I disagree with the notion that Americans should feel bad about this program. It’s a complicated issue with lots of inputs and outputs. Our system is more lenient than some countries’ and stricter than others’. As long as we have nation states we’ll have to deal with this kind of policy question.

9

KSRandom195 t1_j1etoa3 wrote

My favorite aspect of this is that if the future humans hadn’t done what they did with Cooper, the future humans wouldn’t have existed.

It’s my favorite oddity in time-travel stories: someone has to develop time travel and go back in time to change something in order to be able to develop time travel in the first place.

16

KSRandom195 t1_ixs9rzi wrote

The fun part about this is that it doesn’t show the AI actually understood or predicted the other players’ actions. It just means the AI was able to find optimal paths through the decision matrix to reach a winning game.

It’s more likely exploiting underlying flaws in the rules that the AI was able to discover and a human was not.
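For a sense of what “finding optimal paths through the decision matrix” means in the simplest case, here’s a minimal minimax sketch over a tiny hand-made game tree. This is not the system being discussed, just an illustration of reward-driven search, and the tree values are made up:

```python
# Minimal minimax search over a toy game tree: the "AI" picks the move sequence
# with the best guaranteed payoff. It never models what the opponent is thinking,
# only what the rules (here, the tree) allow. Purely illustrative.

def minimax(node, maximizing):
    if isinstance(node, (int, float)):  # leaf: payoff for the maximizing player
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# A tiny hand-made tree: each list is a choice point, numbers are final outcomes.
game_tree = [
    [3, [5, 1]],   # first move leads to these counter-moves
    [[6, 2], 4],
]

print(minimax(game_tree, maximizing=True))  # best value achievable under optimal play
```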

2

KSRandom195 t1_ixfiesa wrote

Yep, the easiest way to do this is probably the brain-in-a-vat hypothesis.

We know that our eyes and brains lie to us and play tricks to explain, or even “fix,” our perception of the world through our bodies’ senses. So if the simulated input messes up for a few frames, we’re already wired to just ignore it and correct it.

For instance, there are stories that when European ships first arrived in the Americas, the natives just… couldn’t see them. It’s not that their eyes didn’t process the information; their brains decided it wasn’t possible and so never registered that the ships existed.

1

KSRandom195 t1_iv5bwon wrote

Our laws of physics are based on our current understanding of the universe. People used to think the Earth was flat; that’s what the best science of the time said.

Further, we already know we have some pretty big unknowns. Dark matter and dark energy, for instance, exist solely to fill gaps in our understanding. We also lean on some big assumptions, such as that the same laws of physics apply everywhere.

For the applications we typically use these assumptions for, stuff we’re doing around Earth and the Sol system, that’s fine. For principles of the whole universe, those assumptions and unknowns are a much bigger deal.

2

KSRandom195 t1_iud5mnr wrote

> I live in Australia and we get paid less because our dollar is weaker. Are you suggesting companies should constantly adjust salaries to account for currency fluctuations?

What is the value of your labor to the company? That’s what you should be compensated for, and it doesn’t depend on currency exchange rates or cost of living.

> I’d also get paid less if I move to a city with less demand for my skill set. If I want to maximise my pay I need to go where the demand is, regardless of which company I work for.

Industries like software don’t have “local” demand. A software engineer in Sydney can do the job as well as a software engineer in San Francisco, so there isn’t “less demand” in Sydney vs San Francisco; there is a global demand for software engineers.

Yes, some jobs have local demand; a factory can’t have its workers scattered around the world. But Microsoft is a software company, with a few hardware exceptions.

1

KSRandom195 t1_iuau3sg wrote

Yes.

Because if the hiring process is biased, the compensation outcomes will reflect that bias. It’s not like people of color or women didn’t exist, or weren’t required to be treated equally under the law, five years ago when most of their current employees were hired.

So if the compensation outcomes aren’t roughly equal, then somewhere along the line there was a bias. It could have been in hiring, in promotions, in raises, or in which opportunities were offered to whom. But the bias is there if the outcomes are not the same.

0