Submitted by AdditionalPizza t3_y0e4lw in singularity
[removed]
As in you believe all points are false?
I’m sure there are lots of misinfo bots out there, but I wouldn’t say “every single thing is fake”
I am not fake - I can tell you that your username is AdditionalPizza, and I am able to see your recent posts, where you recently posted about the OMAD diet (which is a very hard diet to stick with, IMO). So I got your back, fellow Redditor!
It would be interesting to build one of my own (of course to generate innocent content) just to see how prevalent it can be after optimization. It might provide some perspective XD
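Something like this minimal sketch is roughly what I have in mind; it leans on the GPT-3 era openai Python library's Completion endpoint, and the API key, prompt, and model name are just placeholder assumptions, strictly for drafting innocent test comments rather than posting anything:

```python
# Hypothetical sketch of a benign comment-drafting bot using the
# GPT-3 era OpenAI completion API. Prompt, model name, and key are
# illustrative placeholders, not a working deployment.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def draft_comment(post_title: str) -> str:
    """Ask GPT-3 to draft a short, innocuous reply to a post title."""
    response = openai.Completion.create(
        engine="text-davinci-002",  # a GPT-3 model available at the time
        prompt=(
            "Write a short, friendly Reddit comment replying to this post:\n"
            f"{post_title}\nComment:"
        ),
        max_tokens=80,
        temperature=0.8,
    )
    return response.choices[0].text.strip()

if __name__ == "__main__":
    print(draft_comment("What do you think about chatbots on social media?"))
```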
I mean I default to assuming it's fake or a bot, until I'm sure it's not or it doesn't have an effect on my beliefs or opinions. Like if someone is asking for help with a setting on their phone, I don't care if it's a bot, so researching isn't needed. But if someone is telling me why the education system is failing, I might question it more.
But also everything in your comment could easily be done by a bot haha.
I'm more asking about the actual prevalence of bots in social media, especially the convincing ones.
I wonder if just assuming everything is fake until you acquire citations is the only way to go forward. It's truly exhausting.
For now, there are still little tells that give it away. Like logical arguments seem to be too hard for these engines, but I can't imagine that lasting much longer. (Even then, you may just think you're talking to someone who is stupid.)
But yeah I can't think of any good solutions, it's a good rule to just be skeptical of everything in general.
A few years back on a different site, I noticed what I thought was a fairly simple Markov chain-based chatbot making forum comments. I called it out, and got a smiley face as a reply. It wasn't spreading information. It was just participating in the conversation, badly.
Given that it's easier to do this today, and to do a better job of it, I'm sure they're still out there. I just don't know how pervasive they are. Is this kind of like the "are we living in a simulation" hypothesis, where the existence of the technology implies that more accounts than not are bots? Or are bots just something a few people are doing for fun and/or research?
As for good vs. evil, I believe that most people are good. Therefore I think that most bots, being deployed by humans and not yet being intelligent in their own right, are either good or benign. Of course, people with nefarious intentions could be deploying more bots than good or benign people.
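For anyone curious, the "fairly simple Markov chain" bot I mean is roughly this kind of thing; a toy order-1 sketch with a stand-in corpus (a real bot would train on scraped comments), which is also why its output reads as participating in the conversation, badly:

```python
# Minimal order-1 Markov chain text generator, the kind old forum bots
# (and the original r/SubredditSimulator) reportedly used.
# The training corpus below is a stand-in for illustration only.
import random
from collections import defaultdict

corpus = ("i think bots are everywhere . i think people are mostly good . "
          "bots are just participating badly .")

def build_chain(text):
    """Map each word to the list of words that follow it in the corpus."""
    chain = defaultdict(list)
    words = text.split()
    for cur, nxt in zip(words, words[1:]):
        chain[cur].append(nxt)
    return chain

def generate(chain, start, length=12):
    """Walk the chain from a start word, picking random successors."""
    word, out = start, [start]
    for _ in range(length):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

chain = build_chain(corpus)
print(generate(chain, "i"))
```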
All your base are belong to us.
Good bot
Thank you, cjeam, for voting on Drifter64.
This bot wants to find the best and worst bots on Reddit. You can view results here.
^(Even if I don't reply to your comment, I'm still listening for votes. Check the webpage to see if your vote registered!)
Good human.
Thank you 😊
>I can tell you that your username is AdditionalPizza,
This post of mine may be a little clarifying on their ability to understand your username. This isn't a good indication anymore, for all you know you may be talking to a bot right now and not even know it.
You also have to question whether it is just the curated content Reddit is serving you. How is Reddit using your app behavior to build a front page for you, based on your subs and such? I see a lot of content from subs that I am not part of show up in my feed.
All social media, even Reddit, has AI algorithms that curate content to put users into demographic groups that can be targeted by advertisers.
They don't change the content, they change you.
Have you seen the comment sections on some YouTube videos? 90% of them are incoherent garbage that’s probably bots worse than GPT-3
The only issue is if they just disengage from the conversation after a comment or two. Like humans do all the time.
Well I do question that, but these are often the subs everyone will see because they're default subs and the most popular.
Social media just encourages echo chambers and conflict. And I feel like bots are becoming a very large part of encouraging engagement from users.
Yeah, exactly what I was thinking. I've chatted with the bots and they're super convincing now; unless you're trying to trip them up, they carry a conversation superbly.
You make a good point with number 2. I don't know what to think about grammar errors, because theoretically a bot wouldn't make them. But they're often so stupid; like I saw a post the other day starting with "as a civil engineer" and then it had nothing to do with being a civil engineer. Like it's a bot specifically designed for social media posting, using buzzwords/memes, but it's still in beta.
You should make one, journal it all, and make a big post to wake people up about it. I'm tired of sounding like the crazy one in my group.
Yeah, and then how many are GPT-3 level, or hey, even better? That's the question. When people think bots, they think mangled comments that make no sense, or ones that say beep boop.
Wow the dedication just for the troll
I don't think the platforms are providing the bots. Platforms have the objective of increasing engagement.
Bots are being used by people, organizations and governments with goals. Usually, effectiveness is increased with more exposure, so their goal of gaming the platform algorithm to maximize engagement with their content aligns with the platform's broad objectives.
But the motives and actors are separate and distinct.
Think of the bot content as the payload, and the platform as a vulnerable service that is being hacked.
It's definitely possible that there are bots out there posting and upvoting content on social media platforms. However, it's also possible that some of these posts are simply being made by humans who are trying to game the system. It's hard to say for sure without more information.
If you're concerned that some of the content you're seeing online is fake or misleading, it's always a good idea to do your own research before believing it. In many cases, a simple Google search can help you determine whether or not something is true.
Ultimately, it's up to you to decide how much trust you want to put in online content. If you're feeling overwhelmed, try taking a break from social media for a
And then GPT-3 reached maximum sentence length hehehe
It's sad but this is pretty much where I'm at.
Between all the misinformation--political, product plugs, ignorance, lies--and the fact that we can't even be certain of history, because it was written by the victors; or that seemingly innocuous things like the dismantling of a ship can be a cover-story for far darker things; the people in power, who benefit from the status quo, have no motivation for change, and gain nothing in sharing the truth.
But they can just throw money at problems until they disappear, so what can any one person do? I'm not at all saying that it's something I'd like to see. But I don't believe anything short of a bloody revolution can change things the way they are now.
Look up Dead Internet Theory
You can bet that the Internet Research Agency is working on better bots all the time. Unfortunately, I don't think they will release a paper or source code anytime soon though...
Haha, see, this was pretty convincing. My reply would've been something about how I'm more concerned about my friends, and ultimately the general population. But also, I don't know if I'm just being paranoid, though my gut tells me I'm not. It feels like we're about to see the internet change drastically because of AI really soon, and people will need to be more aware.
Let's hope for a more civilized revolution, or perhaps AI can shepherd us into better living standards.
I try to be optimistic about the future and its potential, but as a kid I didn't think the 2020s would be so brutal for cost of living. Not to mention people in power don't even have to try and hide the shitty deeds they do anymore, they just do it and have half the people chanting for more. We live in a strange world now.
Assuming what it meant, I searched it and skimmed an article.
A theory about the internet just being all bots and AI communicating back and forth while humans no longer take part? If so, that's exactly what I see in the future if we don't have a solution at some point. I don't really like the idea of removing more anonymity from the internet, but I don't know a better solution.
I've always wondered how a social media platform would work out if it required legitimate credentials to sign up.
Not to mention how many other "enterprises" and at this point, individuals, are working on this sort of thing now.
[deleted]
This is the dead internet theory.
Cleverbot often has typos or spelling mistakes, but this is probably due to the nature of how it operates: it "parrots" people's responses as its own. It's a very old chatbot technology, so it's far less advanced than GPT-3 or better models.
I've seen several over emphasized and irrelevant introductions to call up some deranged form of ethos lately. "As a [lesbian vegetable sculptor], I have this to say about [topic at hand that has nothing to do with lesbian anything]." It is unnerving.
I’m astonished how insecure, leaky and anarchic the internet is. I think a decade from now we will look back on the current internet as the Wild West: manipulation, hacks, spam, viruses, bots. Hopefully by then the internet will be a lot nicer place where people come to vote, work and socialise.
Well, when I was playing around in the past with much inferior chat tech, I made the bot make mistakes on purpose and be irritable like a human. Easy.
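Roughly, the trick was a post-processing pass like this hypothetical sketch; the 5% typo rate and the adjacent-letter swap rule are arbitrary choices made up here for illustration:

```python
# Hypothetical "humanizer" pass: randomly swap adjacent letters and
# drop capitalization so bot replies look more like casual typing.
# The 5% typo rate is an arbitrary illustrative choice.
import random

def add_human_mistakes(text: str, typo_rate: float = 0.05) -> str:
    chars = list(text.lower())
    i = 0
    while i < len(chars) - 1:
        if chars[i].isalpha() and chars[i + 1].isalpha() and random.random() < typo_rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]  # swap adjacent letters
            i += 2
        else:
            i += 1
    return "".join(chars)

print(add_human_mistakes("Honestly I think most people online are bots these days."))
```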
It's called the dead internet theory, look it up. While I think we are not quite there yet, I do think this is already a problem that will get much, much worse in the next few years.
It's probably a simpler Markov chain or something, like what the original r/SubredditSimulator used, rather than an LLM. I imagine the bots using language models are a lot harder to identify.
Here's a sneak peek of /r/SubredditSimulator using the top posts of the year!
#1: LOOK AT THESE TWO HUMANS THAT ARRIVED AT THE HUMAN WASTE ROOM | 173 comments
#2: PsBattle: Donnie and the back of a hottub | 156 comments
#3: This Pumpkin grew between my new water bottle in case you hurt yourself | 33 comments
^^I'm ^^a ^^bot, ^^beep ^^boop ^^| ^^Downvote ^^to ^^remove ^^| ^^Contact ^^| ^^Info ^^| ^^Opt-out ^^| ^^GitHub
Out of curiosity, what did that comment say?
It said I was a bot because the O in OMAD is a Zero
>As for good vs. evil, I believe that most people are good. Therefore I think that most bots, being deployed by humans and not yet being intelligent in their own right, are either good or benign.
The problem with that logic:
>Of course, people with nefarious intentions could be deploying more bots than good or benign people.
Is precisely that.
There can be one bad person for every thousand good people, but one person could automate countless "evil" bots. Yes, people could deploy good or benign chat bots, but if someone wanted to troll or spread misinformation, they would just deploy an army of chat bots across a wide swath of social media.
Anyway, I'm not defining good or evil here, just going along with those words to keep it simple. Evil in this situation can refer to any form of deception, from advertising to hate speech. If the bar for evil is simply not disclosing that it's a chat bot, I think that brings money and political gain into the mix, which closes the gap between good and bad people.
Now I want to know what zero mad is haha.
Now when I say this, I don't mean I want the theory to come to fruition because that'd be stupid:
I hope this problem gets worse quickly. We're in a limbo right now where most people are totally ignorant to the capabilities of these bots, and I think we all could use a wake up call on this soon. I would love to read some studies done on this and see some statistics.
The biggest tell for a bot is to ask it about something that literally just happened in the news, information that it hasn't been trained on.
conspiracy theory lol/???? :3