HanaBothWays t1_jbko8au wrote

Yes, but to ensure you have a model that’s behaving in that way, with standardized controls, you need to first establish what those standardized controls are and then figure out some kind of auditing and certification framework for saying “this version of the tool works that way and is safe to use in an environment with sensitive information/regulated data.”

These organizations shouldn’t be trying to roll their own secure instance of ChatGPT (they wouldn’t even know where to start) and I bet they don’t want to.

2

HanaBothWays t1_jbjtj4n wrote

It’s a race to develop better Large Language Model tech, but if you are in a sector that deals with sensitive data and these tools pose a risk of inadvertently disclosing that data (because the tools send everything back to “the mothership” for analysis), being an early adopter is maybe not such a good idea.

2

HanaBothWays t1_jbjbc44 wrote

No, I mean if you were a financial company you would not even want to let it inside your internal network at all, no matter what you did or didn’t use it for, unless it was a version made to keep your confidential/regulated data safe.

Right now ChatGPT is not allowed on government agency networks, for example, for any reason because it might pick up on sensitive but unclassified (SBU) data in those network environments.

5

HanaBothWays t1_jbj732b wrote

Honestly if I were in the financial sector I would not do a thing like this until OpenAI comes out with versions of the product that are certified for use with regulated data, the way there are cloud computing products that are certified for use in the financial sector, healthcare sector, etc.

“Certified” is not exactly the right word, but basically they meet certain baseline requirements so they are safe to use with particular kinds of sensitive information/in secure environments with that kind of information.

35

HanaBothWays t1_jadwcor wrote

The most destabilizing stuff has been from internal actors and they didn’t need AIs for that.

I do think there’s some cause for concern in that foreign actors who previously had a difficult time producing useful (for their purposes) English-language content may have an easier time now that they have Large Language Models (LLMs). But there are still going to be barriers.

You would still need fluent English speakers to prompt the LLM, check the output, and edit it. Even native English speakers trying to get an essay out of ChatGPT often can’t use the raw product without reworking it a little. Someone who speaks little or no English trying to use one of these to write English disinformation is only going to get the “right” disinformation by accident.

−6

HanaBothWays t1_jactbkc wrote

This tool is an expansion of the existing tool used to detect and take down CSAM (Child Sexual Abuse Material). Dedicated adult content sites like Onlyfans and Pornhub also use that tool. They may adopt this expansion as well if it works out on the other platforms that are early adopters, since they don’t want any content involving minors, or anything the subjects of the uploaded media did not consent to, on their sites (it’s against their policies).

Expanding this to filter out any adult content whatsoever would be very difficult because it only works on “known” media, that is, media for which there is a hash already uploaded to the database. These tools can’t recognize “hey, that’s a naked child/teenager” or “hey, that’s a boob.” They can only recognize “that media matches a signature in my database.”
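To make that concrete, here’s a minimal Python sketch of what “known media” matching means: the service only checks whether an upload’s fingerprint appears in a database of previously reported fingerprints, so it has nothing to say about media nobody has reported. (This is an illustration, not the actual tool: real systems like PhotoDNA use perceptual hashes that survive resizing and re-encoding, whereas the exact SHA-256 match below is a simplification, and all names here are made up.)

```python
import hashlib

# Hypothetical hash list; in practice this would be the shared database
# coordinated with NCMEC, holding perceptual hashes rather than SHA-256.
known_hashes: set[str] = set()

def register_hash(file_bytes: bytes) -> None:
    """A victim (or NCMEC) submits a fingerprint of the image.
    Only the hash goes into the database, not the image itself."""
    known_hashes.add(hashlib.sha256(file_bytes).hexdigest())

def should_block(file_bytes: bytes) -> bool:
    """Uploads are only compared against known fingerprints.
    The tool cannot judge what an unreported image depicts."""
    return hashlib.sha256(file_bytes).hexdigest() in known_hashes
```

The practical consequence is exactly the limitation described above: an image that isn’t already in the database sails right through, no matter what it shows.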

3

HanaBothWays t1_jacf8pt wrote

It doesn’t say.

But the tool, like the CSAM takedown system, is coordinated with the National Center for Missing & Exploited Children (NCMEC), and adult sites like Pornhub and Onlyfans use the CSAM tool, so even if they aren’t talking about it in the article, they have something in place to prevent that. If it could easily be gamed to take down legal, consensually posted pornography featuring adults, Pornhub and Onlyfans would not be voluntarily using it.

3

HanaBothWays t1_jac9ad2 wrote

This is an expansion of the existing tool to remove CSAM, which has been around for a long time.

If you are a teenager and someone spread around the photos you shared with them, or if you’re an adult now but someone spread around nude photos of you as a teen from way back when (or you’re worried that they will as a form of revenge porn), you can upload hashes of those photos to this tool and they will be detected and removed when someone uploads them, like known CSAM content is.

3

HanaBothWays t1_ja9t4ll wrote

Lots of young people use Instagram.

And if you read the article (what a concept LOL), this can be used for photos taken and spread on Facebook a long time ago. If your cad of a high school boyfriend posted the pictures you gave him 15-20 years ago on Facebook, you can send a hash to this thing to have them removed.

5