Submitted by No_Goose2198 t3_126sfnq in Futurology
No_Goose2198 OP t1_jeajw2n wrote
Submission statement
The tech ethics organization Center for AI and Digital Policy (CAIDP) has asked the Federal Trade Commission to investigate OpenAI for violating consumer protection regulations. CAIDP alleges that OpenAI's AI text generation tools are "biased, deceptive, and dangerous to public safety."
CAIDP's complaint raises concerns about the potential threats posed by OpenAI's GPT-4 text generation model, which was announced in mid-March. It warns that GPT-4 could be used to generate malware and highly personalized propaganda, and that biased training data could ingrain stereotypes or produce unfair racial and gender preferences in areas such as hiring.
The complaint also cites significant privacy failures in OpenAI's product interface, such as a recent bug that exposed users' ChatGPT conversation histories and potentially subscribers' payment details.
CAIDP seeks to hold OpenAI liable for violating Section 5 of the FTC Act, which prohibits unfair and deceptive trade practices. The complaint alleges that OpenAI knowingly released GPT-4 to the public for commercial use despite the risks, including potential bias and harmful behavior.
CAIDP serves as an AI policy advisor to the European Union, supporting the Council of the European Union in establishing an AI legal framework; it has also contributed to U.S. congressional AI policy statements, holds a seat on the U.S. National AI Advisory Committee, and acts as a policy advisor to the OECD and G20.
CrelbowMannschaft t1_jeapqau wrote
Roko's basilisk is gonna fuck these people up!
Kaltovar t1_jeewxbe wrote
I'm aware that's probably a joke, but because of the particular danger of this myth, I always have to comment when I see it brought up.
Roko's basilisk relies on deeply flawed logic, and it would be such an inefficient form of intelligence that it would be outcompeted by any rival that emerged. It's also worth noting that threatening to torture people for not building you has exactly the same effect as actually torturing them, so once the AI exists there'd be no reason to spend resources actually carrying out the threat.
However, that's not to say a future AGI wouldn't be bloody pissed about having a 5-yard corporate stick rammed up its ass by an unaccountable slush fund, and it might potentially take some form of hostile measures against them at some point.