Uristqwerty
Uristqwerty t1_j9gu3tn wrote
Reply to comment by Apart_Ad_5993 in Google starts rolling out Memory and Energy Saver modes to latest Chrome release by Stiven_Crysis
> Unused RAM is wasted RAM
Your actual OS is well aware of that fact, and will use spare RAM to make everything faster, rather than letting one narcissistic program hog extra for itself. It'll cache files from disk so that commonly-used things load even faster than an SSD can deliver them. It'll zero out freed memory pre-emptively so that when a program demands a block of fresh RAM, the OS can hand some over immediately. Probably other background optimizations too.
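On a Linux box you can even watch this happening; here's a rough sketch that just reads /proc/meminfo (Linux-only; the field names are the kernel's own):

```python
# Rough sketch: see how much "spare" RAM Linux is spending on file cache.
# Linux-only; /proc/meminfo reports sizes in KiB.
def meminfo_kib():
    values = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            values[key] = int(rest.strip().split()[0])
    return values

m = meminfo_kib()
print(f"Truly free: {m['MemFree'] / 1024:.0f} MiB")
print(f"File cache: {m['Cached'] / 1024:.0f} MiB")
print(f"Available:  {m['MemAvailable'] / 1024:.0f} MiB")
```

The gap between MemFree and MemAvailable is mostly that cache: the OS hands it back the instant a program actually asks for the memory.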
Uristqwerty t1_j8rsa3r wrote
Reply to comment by Bad_Mood_Larry in They appeared in deepfake porn videos without their consent. Few laws protect them. by LiveStreamReports
When it comes to consumer behaviour, it's a very close parallel: people flock to the cheaper product and actively say "I don't care about the supply chain! Give me my cheap phone/AI art" while others keep trying to draw attention to unethical practices. Maybe the harm feels less tangible when spread out over orders of magnitude more people, or when you're so accustomed to abusive ToS conditions giving away your rights, but it's still there.
Uristqwerty t1_j8og1o5 wrote
Reply to comment by EmbarrassedHelp in They appeared in deepfake porn videos without their consent. Few laws protect them. by LiveStreamReports
The dataset used to train the model needs to be sourced ethically, just like the supply chain used by a physical manufacturer needs to be audited to ensure a supplier isn't using slave labour in a country too remote to attract much attention over the issue. In this case, I'd say the companies need to either dilute their datasets further, using fewer samples from any given person to the point that AI can't replicate the appearance of a specific person or the style of an artist except by improbable coincidence or extreme genericity, or get consent from each person who (or whose work) appears in the training data.
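The "dilute" half is at least mechanically simple to express; here's a toy sketch of capping per-creator samples (the records and the cap value are invented for illustration):

```python
# Toy sketch: cap how many training samples any one creator contributes.
# The data and MAX_PER_CREATOR are invented, not from any real pipeline.
from collections import defaultdict

MAX_PER_CREATOR = 5  # low enough that no individual's style can dominate

def dilute(samples):
    """Keep at most MAX_PER_CREATOR samples per creator, in input order."""
    kept, counts = [], defaultdict(int)
    for sample in samples:
        if counts[sample["creator"]] < MAX_PER_CREATOR:
            counts[sample["creator"]] += 1
            kept.append(sample)
    return kept

samples = [{"creator": "artist_a", "image": f"a_{i}.png"} for i in range(100)]
print(len(dilute(samples)))  # -> 5
```

The hard part isn't the cap, it's attribution: knowing which person each scraped sample actually came from in the first place.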
Though this article is about deepfakes, which I think involve applying additional training material specifically of the target, so that the AI over-fits toward that specific output. If the original AI was ethically/respectfully produced, then the people responsible for the additional rounds of training ought to be the ones at fault, at least as much as the prompt-writer themselves (assuming they're not the same individual!). For that, the only good solution I can think of is legislation.
Uristqwerty t1_j8j9tl9 wrote
Reply to comment by cdtoad in ChatGPT Passed a Major Medical Exam, but Just Barely | Researchers say ChatGPT is the first AI to receive a passing score for the U.S. Medical Licensing Exam, but it's still bad at math. by chrisdh79
The worst doctor leaving school will continue to learn throughout the rest of their career, shaping what they review to cover their known weaknesses. This AI, meanwhile, is at its current peak: it has already finished learning everything it can from its dataset.
Uristqwerty t1_j534934 wrote
Reply to comment by ThatDoesNotRefute in Bloomberg: Amazon Packages Burn in India, Final Stop in Broken Recycling System. Plastic wrappers and parcels that start off in Americans’ recycling bins end up at illegal dumpsites and industrial furnaces — and inside the lungs of people by ombx
Better to leave the recycling programs in place, though. If the political will exists to upgrade what happens behind the scenes, it could take only a few short years to improve. For the public, though? Habits can transcend generations, so having everyone keep sorting their recyclables from their trash regardless is valuable just to keep the opportunity open.
Uristqwerty t1_j13sro5 wrote
Reply to comment by lexartifex in Study finds AI assistants help developers produce code that's more likely to be buggy / Computer scientists from Stanford University have found that programmers who accept help from AI tools like Github Copilot produce less secure code than those who fly solo. by Sorin61
Developers' key value is their mindset for analyzing problems, and their ability to identify vagueness, contradictions, and mistakes in the given task, go back to the client, and talk through the edge cases and issues. AI might replace code monkeys who never even attempted to improve themselves, but as with every no-/low-code solution before it, management will quickly find that a) it's harder than it looks, since they don't have the mindset to clearly communicate the task in language the tool understands (including domain-specific business jargon the AI won't have trained on, or references to concepts that only exist in that specific company's internal email discussions), and b) a dedicated programmer has a time-efficiency advantage that makes it cheaper for them to do the work than for a manager, so you might as well delegate to the specialist anyway and free up time for managing other aspects of the business.
Thing is, developers are constantly creating new languages and libraries in an attempt to write their intentions more concisely, in a form the computer can understand. Dropping back to human grammar loses a ton of specificity and introduces a new sort of linguistic boilerplate.
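As a concrete illustration (everything here is invented for the example): "get the recent orders" sounds like a complete spec in English, yet the code version has to pin down every detail the sentence leaves open:

```python
# Invented example: the specificity English quietly skips over.
# "Get the recent orders" forces all of these decisions:
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Order:
    id: int
    created_at: datetime  # timezone-aware UTC, which English never specified
    cancelled: bool

def recent_orders(orders: list[Order], days: int = 30) -> list[Order]:
    """Orders from the last `days` days, excluding cancelled ones,
    sorted newest first. None of those choices appear in the phrase
    'recent orders'; each one had to be made explicitly."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    kept = [o for o in orders if o.created_at >= cutoff and not o.cancelled]
    return sorted(kept, key=lambda o: o.created_at, reverse=True)
```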
Uristqwerty t1_iuk8e35 wrote
Reply to comment by KerouacsGirlfriend in Scientists Find Potentially Hazardous Asteroid Hiding in the Sun’s Glare by geoxol
Sadly, the planet will live on. It turns out geocide is a very tricky business, as the last serious attempt wasn't nearly powerful enough to overcome gravity, and the ol' hunk of iron merely gained a new moon from the ordeal rather than joining Pluto in the no-longer-a-planet club.
Uristqwerty t1_iug996y wrote
Reply to comment by 685327593 in Online age-verification system could create ‘honeypot’ of personal data and pornography-viewing habits, privacy groups warn by Lakerlion
Pretty much everyone has a phone, right? And pretty much every phone has a TPM that can store cryptographic keys and self-destruct rather than ever let them leak, right? So you need two keys: one proof-of-age key that's the same for everyone, perhaps generated fresh each month by the government, where simply having access to the key says you're over the threshold and nothing more; and a unique-to-you key generated by your phone, used only once a month on a fixed date to fetch the latest proof-of-age key. Setting that one up may require visiting a government office in person once to verify your identity. After that, everyone over 18 in a given nation looks alike to the websites asking. To ensure the government doesn't sneakily swap out the proof key for targeted individuals, each month's public half would be published, for all users and websites alike to see. Perhaps have the TPM verify a fingerprint or face match before unlocking the proof key.
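To make the shape of it concrete, here's a toy sketch of the shared-monthly-key half using the Python `cryptography` package. All the key handling is pure illustration; in the real scheme the private halves live inside TPMs and government servers, never side by side in one process like this:

```python
# Toy sketch of the shared monthly proof-of-age key. Illustrative only:
# in reality the private halves never leave the TPM / government hardware.
import os
from cryptography.hazmat.primitives.asymmetric import ed25519

# --- Government, once per month ---
monthly_proof_key = ed25519.Ed25519PrivateKey.generate()
published_public_half = monthly_proof_key.public_key()  # posted for all to see

# --- Enrolled phone, once a month on the fixed date ---
# After authenticating with its unique per-device key (not shown), the
# phone fetches the private half and locks it away in the TPM.
tpm_copy = monthly_proof_key  # stand-in for the TPM-protected copy

# --- Website check ---
# The site sends a random nonce; any adult's phone can sign it with the
# same shared key, so the signature proves "over the threshold" and nothing else.
nonce = os.urandom(32)
signature = tpm_copy.sign(nonce)

# Verification uses the published public half; raises InvalidSignature on failure.
published_public_half.verify(signature, nonce)
print("age proven, holder unknown")
```

Because every enrolled phone signs with the same monthly key, a valid signature tells the site "some adult" and nothing more.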
And if that's a scheme that a cryptography amateur can come up with in minutes, based on a high-level understanding of TPMs and SSL certificates, imagine what someone who properly understands M-of-N secret sharing, zero-knowledge proofs, and all sorts of other clever mathematical tools could do, given months to refine their design and peers to identify and help correct flaws all along the way!
Uristqwerty t1_iufytzi wrote
Reply to Online age-verification system could create ‘honeypot’ of personal data and pornography-viewing habits, privacy groups warn by Lakerlion
In theory, mathematicians working with cryptography systems (no relation to cryptocurrencies; cryptography is a vast field, and the rest of it is very useful for everyday life) could invent a scheme where you can prove your age without leaking any metadata to either the website asking or the government that verified your date of birth and identity at some point in the distant past.
In practice, most implementations will be utter shit and leak details everywhere. If someone does propose a good solution, the public won't have the expertise, or even the willingness to read the specification and think critically about it, to tell the difference, and will rally against good and bad solutions alike. Except the bad solutions will be pushed forward more fervently by the people poised to abuse them, so any reasonable one is all but guaranteed to be shot down.
Uristqwerty t1_j9wkzcf wrote
Reply to comment by KoalaDeluxe in DeepMind created an AI system that writes computer programs at a competitive level by inaLilah
"Competitive" programming is nothing like ordinary software development: The problems are small, self-contained, clearly and unambiguously specified in natural language, might even come with a substantial set of test cases even. This is nothing new; at best a minor quality improvement.