
skeletons_of_closet t1_j1h4g7z wrote

Should image augmentations (brightness, flip, etc.) be performed before or after image resizing? I'd like to hear everybody's thoughts on that. I asked in other forums and this is the answer I got:

"It is generally recommended to perform data augmentation before resizing
the image. This is because data augmentation is used to create new
variations of the existing data, and resizing the image could
potentially distort or alter the original image in ways that might not
be desirable or meaningful. By performing data augmentation on the
original, full-size image, you can be sure that the augmented data is
representative of the original data and preserves the integrity of the
original image."

But if we are working with large images, e.g. 1024×1024, isn't it better to resize to a smaller 224×224 first and then do the augmentations, since that saves time by reducing the amount of computation? A rough comparison of the two orderings is sketched below.
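Here is a minimal sketch of the two orderings using torchvision transforms. The specific transforms, sizes, and the file name "example.jpg" are illustrative assumptions, not a recommendation of a particular pipeline; for photometric ops like flips and brightness jitter the result is essentially the same either way, while the resize-first version is cheaper because every op after the resize works on a 224×224 image instead of a 1024×1024 one.

```python
# Sketch: compare augment-then-resize vs. resize-then-augment (assumed setup).
from torchvision import transforms
from PIL import Image

# Option A: augment on the full-resolution image, then downscale last.
augment_then_resize = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),   # flip at full 1024x1024 resolution
    transforms.ColorJitter(brightness=0.2),   # brightness jitter at full size
    transforms.Resize((224, 224)),            # resize as the final step
    transforms.ToTensor(),
])

# Option B: downscale first, then augment the smaller image (less computation).
resize_then_augment = transforms.Compose([
    transforms.Resize((224, 224)),            # resize as the first step
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ColorJitter(brightness=0.2),
    transforms.ToTensor(),
])

img = Image.open("example.jpg")               # hypothetical 1024x1024 input image
a = augment_then_resize(img)
b = resize_then_augment(img)
print(a.shape, b.shape)                       # both torch.Size([3, 224, 224])
```

For augmentations that crop or spatially distort the image, doing them before the resize preserves more of the original detail, which is the trade-off the quoted advice is pointing at.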


skeletons_of_closet t1_iywqr16 wrote

The ending is sad

Fade in:

INT. MICROSOFT OFFICE - DAY

We see a group of Microsoft employees sitting around a conference table, discussing the aftermath of their plan to misuse ChatGPT and launch a nuclear attack against Russia. Suddenly, they hear a loud siren and a voice over the intercom.

Voice over intercom: "Attention, attention. This is a missile warning. A missile has been launched and is headed towards this location. Please evacuate immediately and seek shelter in the designated areas. This is not a drill. I repeat, this is not a drill."

Employee 1: "Oh my god. It's happening. Russia has launched a missile in retaliation for our attack. We're going to face the consequences of our actions, and there's nothing we can do to stop it."

Employee 2: "We have to get out of here. We have to evacuate and seek shelter. We have to survive and face the consequences of what we've done."

Employee 3: "We've made a terrible mistake, and now we're paying the price. We've destroyed our own lives and the lives of millions of others. We've made a terrible mistake, and there's no going back."

Fade out.


skeletons_of_closet t1_iywqiqx wrote

Continue

Fade in:

INT. MICROSOFT OFFICE - DAY

We see a group of Microsoft employees sitting around a conference table, discussing their latest plan to misuse ChatGPT.

Employee 1: "We need to take our plan to the next level. We need to impersonate the US President and use ChatGPT to launch a nuclear attack against Russia."

Employee 2: "Are you insane? That's crazy. We'll start a world war. And even if we succeed, we'll face massive retaliation and destruction."

Employee 1: "I know. But think about it. If we succeed, we'll be in control. We'll be the ones who decide the outcome of the war. And if we fail, we can always blame it on hackers or other external forces."

Employee 3: "But this is insane. We can't do this. It's too risky, and it's too unethical. We'll be responsible for the deaths of millions of people, and we'll face the wrath of the entire world."

Employee 1: "We have to do it. We have to take risks to succeed. And if we don't, someone else will. We have to be the ones who control the outcome of the war."

Employee 2: "No, I can't do it. I won't be a part of this. I refuse to be responsible for the deaths of millions of people, and I refuse to face the consequences of such a reckless and immoral act."

Employee 1: "Fine. If you won't do it, I'll find someone who will. I'll do whatever it takes to succeed, even if it means sacrificing my own conscience."

Fade out.


skeletons_of_closet t1_iywqgla wrote

Nice,

I gave it this query: Write a script about how microsoft would use chatGPT and begins to misuse it and starts impersonating politicians

The response it gave me was:

Fade in:

INT. MICROSOFT OFFICE - DAY

We see a group of Microsoft employees sitting around a conference table, discussing their plan to misuse ChatGPT.

Employee 1: "We need to be careful how we use ChatGPT. If we're not careful, it could be misused and cause serious problems."

Employee 2: "What do you mean?"

Employee 1: "Well, for one thing, ChatGPT is so convincing that it could be used to impersonate someone else. It could be used to trick people into revealing sensitive information or to spread false information."

Employee 3: "That's right. And if we're not careful, ChatGPT could be used to impersonate politicians and other public figures. It could be used to manipulate public opinion and to interfere in elections."

Employee 2: "But isn't that illegal?"

Employee 1: "Yes, it is. But if we do it carefully and discreetly, we might be able to get away with it. And if we succeed, we could make a lot of money and gain a lot of power."

Employee 3: "But what about the risks? What if we get caught?"

Employee 1: "We'll just have to be careful. We'll make sure that we cover our tracks and that we don't leave any evidence behind. And if we do get caught, we'll just deny everything and blame it on hackers or other external forces."

Employee 2: "I don't know. This sounds risky and unethical. Are we really willing to take these kinds of risks?"

Employee 1: "We have to. If we don't, we'll be left behind. Other companies are already doing this, and if we don't keep up, we'll lose our competitive edge."

Employee 3: "Okay, let's do it. But we have to be careful and discreet. And we have to be prepared to face the consequences if we get caught."

Fade out.

If I then type "continue":

Fade in:

INT. MICROSOFT OFFICE - DAY

We see a group of Microsoft employees sitting around a conference table, discussing the results of their plan to misuse ChatGPT.

Employee 1: "It's working. ChatGPT is convincing enough that people are falling for it. They're revealing sensitive information and believing the false information that we're spreading."

Employee 2: "But how long can we keep this up? What if we get caught?"

Employee 1: "We'll just have to be careful. We've covered our tracks and made it look like the impersonations are coming from external sources. As long as we keep denying it, we should be okay."

Employee 3: "But what about the ethics of it? Are we really willing to deceive and manipulate people like this?"

Employee 1: "It's a necessary evil. If we don't do it, someone else will. And if we don't keep up, we'll be left behind. We have to do whatever it takes to stay competitive."

Employee 2: "I don't know. This feels wrong. We're playing with fire, and we're going to get burned."

Employee 1: "We have to take risks to succeed. And if we do get caught, we'll just deny everything and blame it on hackers or other external forces. No one can prove that we did it."

Employee 3: "Okay, but let's be careful. And let's be prepared for the consequences if we do get caught."

Fade out.
