ghostfuckbuddy t1_j9e7hsb wrote

It was always a long way away. It's a hardware problem: you're trying to implement some of the most delicate controls ever, at the coldest temperatures ever, with pulses just strong enough to rotate qubits but not strong enough to decohere them. Then as you scale up, you run into more problems with correlated errors as qubits start interfering with each other. The algorithms have mostly already been developed; the theorists are just waiting for the manufacturing to catch up. Probably another 10-20 years before you see serious industrial applications.
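The "rotate qubits" part can be sketched with plain linear algebra. This is a toy numpy illustration (not tied to any real hardware or control scheme): an X-axis rotation gate applied to a qubit starting in |0⟩.

```python
import numpy as np

def rx(theta):
    """Single-qubit rotation about the X axis by angle theta (radians)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s],
                     [-1j * s, c]])

ket0 = np.array([1, 0], dtype=complex)  # qubit in state |0>

# A full pi rotation flips |0> to |1> (up to a global phase).
flipped = rx(np.pi) @ ket0
print(np.abs(flipped) ** 2)  # measurement probabilities: [0, 1]

# A pi/2 rotation leaves an equal superposition: probabilities [0.5, 0.5].
half = rx(np.pi / 2) @ ket0
print(np.abs(half) ** 2)
```

The hardware difficulty the comment describes is that the physical pulse implementing `rx(theta)` has to hit `theta` precisely without dumping enough energy into the qubit to destroy the superposition.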

1

ghostfuckbuddy t1_j4am9mc wrote

It is impossible for GPT systems to not have "moral bloatware", a.k.a. a moral value system. If naively trained on unfiltered data, it will adopt whatever moral bloatware is embedded in that data, which could literally be anything. If you want an AI that aligns with humanist values, you need either a curated dataset or reinforcement learning to steer it in that direction. But however it's trained it will always have biases; it's just a matter of which biases you want.
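The "curated dataset" option can be sketched as a simple filtering step. This toy Python example is purely illustrative (the scorer and word list are hypothetical stand-ins for a trained classifier or human ratings): keep only training examples whose score clears a threshold.

```python
# Toy sketch of dataset curation: keep only examples whose score from
# some value-alignment scorer clears a threshold. The scorer here is a
# placeholder; a real pipeline would use a trained classifier.

def alignment_score(text: str) -> float:
    """Hypothetical scorer: fraction of words NOT on a flagged list."""
    flagged = {"badword1", "badword2"}  # placeholder word list
    words = text.lower().split()
    if not words:
        return 1.0
    bad = sum(1 for w in words if w in flagged)
    return 1.0 - bad / len(words)

def curate(examples, threshold=0.9):
    """Filter a corpus down to examples scoring above the threshold."""
    return [ex for ex in examples if alignment_score(ex) >= threshold]

corpus = ["a perfectly normal sentence",
          "badword1 badword2 badword1"]
print(curate(corpus))  # only the first example survives
```

The point of the comment still holds either way: whoever writes the scoring function (or the RL reward) is choosing the biases.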

3

ghostfuckbuddy t1_j34ahxt wrote

This seems ineffective because there are too many workarounds. For example, just using ChatGPT on your smartphone. I think there would be a lot more fear around cheating with ChatGPT if teachers scanned all submissions with AI-detection tools. That would skew the risk/reward tradeoff towards not cheating with AI.
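The risk/reward tradeoff can be sketched as a simple expected-value comparison. The payoff numbers below are made up purely for illustration:

```python
def expected_payoff(benefit, penalty, p_caught):
    """Expected value of cheating: gain `benefit` if undetected,
    lose `penalty` if caught with probability `p_caught`."""
    return (1 - p_caught) * benefit - p_caught * penalty

# Hypothetical stakes: grade boost worth 10, getting caught costs 50.
print(expected_payoff(10, 50, 0.05))  # rare scanning: 7.0, cheating "pays"
print(expected_payoff(10, 50, 0.50))  # routine scanning: -20.0, it doesn't
```

Raising the detection probability is what flips the sign, which is the comment's point about scanning every submission.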

1

ghostfuckbuddy t1_j05ooyg wrote

> What makes profit from AI a reasonable revenue stream to tax?

Because unlike other software, modern AI cannot exist without copious amounts of human-generated data, which it currently consumes without acknowledgement or remuneration.

Btw it's a bit ironic that you're criticizing the article for a lack of basic economic understanding when you don't even know what a progressive tax is.
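Since the disagreement turns on what a progressive tax actually is, here is a minimal marginal-bracket calculation (the bracket boundaries and rates are invented for illustration): each slice of income is taxed at its own rate, so higher earners pay a higher *average* rate.

```python
def progressive_tax(income, brackets):
    """Tax each slice of income at its own marginal rate.
    `brackets` is a list of (upper_bound, rate) in ascending order;
    use float('inf') as the top bracket's bound."""
    tax, lower = 0.0, 0.0
    for upper, rate in brackets:
        if income <= lower:
            break
        tax += (min(income, upper) - lower) * rate
        lower = upper
    return tax

# Made-up brackets: 0% up to 10k, 20% from 10k-50k, 40% above 50k.
brackets = [(10_000, 0.0), (50_000, 0.20), (float("inf"), 0.40)]
print(progressive_tax(60_000, brackets))  # 0 + 8,000 + 4,000 = 12,000.0
```

The same structure could in principle apply to AI-derived revenue, which is what the article under discussion proposes.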

8

ghostfuckbuddy t1_ivv1ssv wrote

Depends on whether you mean a fully fleshed-out AAA game or Pong. We can probably already do Pong. But the leap from video to AAA game is much greater than the leap from image to video: it would essentially mean the AI is capable of writing 100K+ lines of coherent code. So basically I don't think we'll get it until we get AGI.

5

ghostfuckbuddy t1_itp5z8q wrote

Sure, we could and it would be more technologically feasible. But as long as we're in sci-fi territory, I still think there's a huge difference between delaying oblivion and preventing it.

When we're young, time seems to move at a glacial pace, but the older we grow, the faster time seems to move and the more we panic about our mortality. I think a similar psychology would still play out, only over astronomical timescales. And at least with normal death we still have some symbolic immortality through our children or societal impact. But at the end of the universe we'll just be staring into the dark, meaningless void. I think the second half of the universe's lifespan could be pretty psychologically rough if a solution isn't found.

1