ghostfuckbuddy t1_japf3uj wrote
Could they have made a more sinister-looking robot? It already looks like the I, Robot nightmare edition.
ghostfuckbuddy t1_j9t2fqk wrote
Reply to comment by royalemate357 in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
Corporations are paperclip maximizers, sometimes literally.
ghostfuckbuddy t1_j9nwjcg wrote
> Penalizing businesses for transitioning to AI workers will slow the process of becoming a fully automated economy with UBI
Wouldn't this tax be one of the things that fund UBI though?
ghostfuckbuddy t1_j9jijau wrote
Reply to Researchers were able to uniquely identify VR users with 94% accuracy from only 100 seconds of motion data, using anonymized data from 50K+ Beat Saber players by Tom_Lilja
Oh well back to being a featureless gray cube
ghostfuckbuddy t1_j9j93rq wrote
Reply to comment by ground__contro1 in Two Deans suspended after using ChatGPT to write email to students by Neurogence
I think if you're getting them to proofread after it's been written, then no, but if you're getting them to write the whole thing for you, then yes.
ghostfuckbuddy t1_j9iwqjf wrote
Reply to comment by hijirah in Two Deans suspended after using ChatGPT to write email to students by Neurogence
The email's content is not as important as what it is supposed to represent - that the person writing it cared enough to invest time to personally craft a message. The whole point is completely undermined by outsourcing it to an AI.
ghostfuckbuddy t1_j9e7hsb wrote
It was always a long way off. It's a hardware problem: implementing some of the most delicate controls ever devised, at some of the coldest temperatures ever achieved. Strong enough to rotate qubits, but not enough to decohere them. Then as you scale up, you run into more problems with correlated errors as qubits start interfering with each other. The algorithms have mostly already been developed; the theorists are just waiting for the manufacturing to catch up. Probably another 10-20 years before you see serious industrial applications.
ghostfuckbuddy t1_j9e6r0j wrote
It might depend on who you surround yourself with, but my impression was that ChatGPT had gone fully mainstream, even in non-technical fields.
ghostfuckbuddy t1_j58g6tn wrote
Reply to comment by alexiuss in Google to relax AI safety rules to compete with OpenAI by Surur
I mean they're kind of solving it with reinforcement learning, aren't they? Just because it's a hard problem doesn't mean it's unsolvable, and it doesn't have anything to do with sentience.
ghostfuckbuddy t1_j58by7k wrote
AI safety is for wusses. Move fast and break things.
ghostfuckbuddy t1_j4am9mc wrote
Reply to Don't add "moral bloatware" to GPT-4. by SpinRed
It is impossible for GPT systems not to have "moral bloatware", a.k.a. a moral value system. If naively trained on unfiltered data, they will adopt whatever moral bloatware is embedded in that data, which could be literally anything. If you want an AI that aligns with humanist values, you need either a curated dataset or reinforcement learning to steer it in that direction. But however it is trained, it will always have biases; it's just a matter of which biases you want.
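For the reinforcement learning route, the steering signal usually comes from a reward model trained on human preference pairs. Here's a toy sketch of the standard Bradley-Terry objective used for that (not any specific lab's implementation; the function name is mine):

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry loss for training an RLHF reward model.

    The loss is small when the reward model scores the human-preferred
    response higher than the rejected one, so minimizing it teaches the
    model to rank outputs the way the human labelers did.
    """
    margin = reward_chosen - reward_rejected
    # -log(sigmoid(margin)): near 0 for large positive margins,
    # grows without bound as the ranking inverts.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

Which is also the point: the "values" you get out are exactly the values of whoever produced the preference labels.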
ghostfuckbuddy t1_j46eikm wrote
Reply to comment by chimp73 in [D] Bitter lesson 2.0? by Tea_Pearce
The compute is cheap but the data may not be easily accessible.
ghostfuckbuddy t1_j34ahxt wrote
This seems ineffective because there are too many workarounds - for example, just using ChatGPT on your smartphone. I think there would be a lot more fear around cheating with ChatGPT if teachers scanned all submissions with AI-detection tools. That would skew the risk/reward tradeoff towards not cheating with AI.
ghostfuckbuddy t1_j0tfnrr wrote
Reply to How far off is an AI like ChatGPT that is capable of being fed pdf textbooks and it being able to learn it all instantly. by budweiser431
You could make one! That sounds like ordinary finetuning but with a clean user interface.
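The pipeline would be something like: extract the text (with a PDF library such as pypdf), chunk it, and write it out in whatever JSONL shape your finetuning API expects. A stdlib-only sketch of the chunk-and-format step - the function names and the `{"text": ...}` record shape are my assumptions, not any particular API's:

```python
import json

def chunk_text(text: str, max_chars: int = 2000) -> list[str]:
    """Split extracted textbook text into roughly max_chars-sized chunks,
    breaking on paragraph boundaries so no example is cut mid-thought."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) > max_chars:
            chunks.append(current.strip())
            current = ""
        current += para + "\n\n"
    if current.strip():
        chunks.append(current.strip())
    return chunks

def to_finetune_jsonl(chunks: list[str]) -> str:
    """Format chunks as JSONL records, one training example per line."""
    return "\n".join(json.dumps({"text": c}) for c in chunks)
```

The "clean user interface" part is just a file-upload form sitting in front of that.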
ghostfuckbuddy t1_j0tfite wrote
Reply to comment by Kaarssteun in Is progress towards AGI generally considered a hardware problem or a software problem? by Johns-schlong
If we magically stumbled across the right algorithm, then it's only a software problem. But if we need to test a bunch of different approaches before we get there, then hardware becomes the limiting factor in progress.
ghostfuckbuddy t1_j0o1smk wrote
Reply to When AI automates all the jobs what are you going to do with your life? by TrainquilOasis1423
Research. Even if AI can do it better than me, I still want to be at the frontlines of technological progress.
ghostfuckbuddy t1_j05ooyg wrote
Reply to comment by green_meklar in The problem isn’t AI, it’s requiring us to work to live by jamesj
> What makes profit from AI a reasonable revenue stream to tax?
Because unlike other software, modern AI cannot exist without copious amounts of human-generated data, which it currently consumes without acknowledgement or remuneration.
Btw it's a bit ironic that you're criticizing the article for a lack of basic economic understanding when you don't even know what a progressive tax is.
ghostfuckbuddy t1_izutix1 wrote
Reply to AGI will not precede Artificial Super Intelligence (ASI) - They will arrive simultaneously by __ingeniare__
Arguably you could consider ChatGPT a pretty dumb AGI, since it has been measured to have an IQ of 86. I mean, there's no way you can consider ChatGPT a 'narrow' AI anymore, right?
ghostfuckbuddy t1_iw7b0mt wrote
Reply to Experimental Cancer Vaccine Yields Promising Results: NIH Finds Significant Tumor Regression by Shelfrock77
Why is it called a vaccine instead of a cure if it's causing tumors to regress? I normally think of vaccines as solely preventative.
ghostfuckbuddy t1_ivv1ssv wrote
Reply to Will Text to Game be possible? by Independent-Book4660
Depends on whether you mean a fully fleshed-out AAA game or Pong. We can probably already do Pong. But the leap from video to a AAA game is much greater than the leap from image to video - it essentially means the AI is capable of writing 100K+ lines of coherent code. So basically I don't think we'll get it until we get AGI.
ghostfuckbuddy t1_iu1ru3w wrote
Reply to comment by RavenWolf1 in The Great People Shortage is coming — and it's going to cause global economic chaos | Researchers predict that the world's population will decline in the next 40 years due to declining birth rates — and it will cause a massive shortage of workers. by Shelfrock77
Bold prediction. We need enough workers to build all that automation first.
ghostfuckbuddy t1_itp5z8q wrote
Reply to comment by Anenome5 in Is anything better than FTL as a future? by ribblle
Sure, we could and it would be more technologically feasible. But as long as we're in sci-fi territory, I still think there's a huge difference between delaying oblivion and preventing it.
When we're young, time seems to move at a glacial pace, but the older we grow, the faster it seems to move and the more we panic about our mortality. I think a similar psychology would still play out, only over astronomical timescales. And at least with normal death we still have some symbolic immortality through our children or societal impact, but at the end of the universe we'll just be staring into the dark, meaningless void. I think the second half of the universe's lifespan could be pretty psychologically rough if a solution isn't found.
ghostfuckbuddy t1_itoc4qc wrote
Reply to Is anything better than FTL as a future? by ribblle
Yeah, some way to reverse entropy. That's the ultimate problem.
ghostfuckbuddy t1_isf4mnb wrote
Reply to We've all heard the trope that to be a billionaire you essentially have to be a sociopath; Could we cure that? Is there hope? by AdditionalPizza
I don't think they necessarily need to be sociopaths; they just need to be so laser-focused on their goals that they ignore everything and everyone else. And if those traits are what made them successful, why would they want to give them up?
ghostfuckbuddy t1_jbwobij wrote
Reply to comment by shahaff32 in [D] Is Pytorch Lightning + Wandb a good combination for research? by gokulPRO
That just sounds like a bug. It might take a lot less effort to report it for patching than to rewrite all that code yourself.