habichuelacondulce OP t1_jbjastp wrote

If they don't use it for trading, they can use it for copying/editing news, blogs, and articles to frame a narrative for their algos to pick up and trade on.

24

andylowenthal t1_jbjsaf4 wrote

More specifically, and immediately, they can use it to post comments on social media, including Reddit, to shift the narrative based on a false majority consensus. It’s already happening now; they just pay people minimum wage for the comments. This would just make creating those false narratives cheaper and faster.

42

DebateGullible8618 t1_jblbhcc wrote

Yeah, and the bots will eventually be able to respond just like people do, reinforcing certain views. AI is going to be the biggest evolution in tech since the smartphone.

5

HanaBothWays t1_jbjbc44 wrote

No, I mean if you were a financial company you would not even want to let it inside your internal network at all, no matter what you did or didn’t use it for, unless it was a version made to keep your confidential/regulated data safe.

Right now ChatGPT is not allowed on government agency networks, for example, for any reason because it might pick up on sensitive but unclassified (SBU) data in those network environments.

5

thecookie93 t1_jbkzru6 wrote

Yeah, I don't think they would let it touch their systems. They just buy the license and run it on an off-site server where it can do its thing, writing targeted blog posts and "news" articles.

3

HanaBothWays t1_jbl07ni wrote

Something like that. They just shouldn’t let it near the financial and transaction records or correspondence.

1

stuffitystuff t1_jbkh5m6 wrote

They can afford to get a copy of the model and run it on their own systems, though. Just like Microsoft.

1