
HanaBothWays t1_jbko8au wrote

Yes, but to ensure you have a model that behaves that way, with standardized controls, you first need to establish what those standardized controls are and then work out some kind of auditing and certification framework for saying "this version of the tool works that way and is safe to use in an environment with sensitive information/regulated data."

These organizations shouldn't be trying to roll their own secure instance of ChatGPT (they wouldn't even know where to start), and I bet they don't want to.

2