
override367 t1_j9kl948 wrote

I mean, they do under 230, they absolutely fucking do, until SCOTUS decides they can't

Even pre-230 the algorithm wouldn't be the problem. After all, bookstores were not liable for the content of every book they sold, even though they clearly had to decide which books were front-facing.

The algorithm front-facing a video that should be removed is no different from a bookstore putting a book ultimately found to be libelous on a front-facing endcap. The bookstore isn't expected to have actually read the book and vetted its content; it merely has a responsibility to remove it once the complaint is made known.

48

seaburno t1_j9ks54q wrote

It's not like a book store at all. First, Google/YouTube aren't being sued because of the content of the videos (which is protected under 230); they're being sued because they are promoting radicalism (in this case from ISIS) to susceptible users in order to sell advertising. They know those users are susceptible because of their search history and other discrete data that they have. Instead of the bookstore analogy, it's more like a bar that keeps serving the drunk at the counter more and more alcohol, even without being asked, and then handing the drunk his car keys to drive home.

The purpose of 230 is to allow ISPs to remove harmful/inappropriate content without facing liability, and to allow them to make good-faith mistakes in not removing harmful/inappropriate content without facing liability. What the Content Providers are saying is that they can show anything without facing liability, and that it is appropriate for them to push harmful/inappropriate content to people they know are susceptible, in order to increase user engagement and, with it, advertising revenue.

The Google/YouTube algorithm actively pushes content to the user that it thinks the user should see to keep the user engaged in order to sell advertising. Here, the Google/YouTube algorithm kept pushing more and more ISIS videos to the guy who committed the terrorism.

What the Google/YouTube algorithm should be doing is saying "videos in categories X, Y and Z will not be promoted." Not remove them. Not censor them. Just not promote them via the algorithm.
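To make the "don't promote, don't remove" distinction concrete, here is a minimal sketch of filtering at the recommendation layer rather than the hosting layer. All the names here (the category labels, `engagement_score`, the video fields) are hypothetical; this is an illustration of the idea, not YouTube's actual system.

```python
# Minimal sketch: "host but don't promote" (all names are hypothetical).
# Videos in the excluded categories remain hosted and searchable; they are
# simply never surfaced by the recommendation step.

DO_NOT_PROMOTE = {"terrorist_propaganda", "violent_extremism"}  # assumed labels

def engagement_score(video, user_profile):
    # Stand-in for whatever engagement model the platform actually uses.
    return video.get("predicted_watch_time", 0.0)

def build_recommendations(candidate_videos, user_profile):
    """Rank candidates for the feed, skipping do-not-promote categories."""
    eligible = [v for v in candidate_videos if v.get("category") not in DO_NOT_PROMOTE]
    # Hosting is unchanged: an excluded video can still be reached by direct
    # link or search; it just never enters this ranking.
    return sorted(eligible, key=lambda v: engagement_score(v, user_profile), reverse=True)
```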

44

somdude04 t1_j9l13bt wrote

If a Barnes & Noble buys and front-faces more copies of a book at store #723 because it's been selling a bunch there, they're not obligated to read it and verify it's not libelous or whatever. It's their choice, but stocking a book isn't an explicit endorsement of it. So why would YouTube, which is effectively like a chain of stores selling videos (with one store per person), be liable if they advertise videos to someone (as they're effectively the sole customer at an individual store)?

12

seaburno t1_j9l34e6 wrote

The difference is because B&N is making the money on the sale of the book, not on the advertising at other locations in the store (or in the middle of the book).

YT isn't selling videos. They do not make money on the sale of a "product" to the end user. Instead, they are selling advertising. To increase their revenue via advertising, they are pushing content to increase the time on site.

The YouTube/Google algorithm is like saying: "Oh, you're interested in a cup of coffee? Here, try some meth instead."

15

RyanBlade t1_j9lyl2e wrote

Just curious, as I completely get where you are coming from, but would you consider the same standard for a search engine? The algorithm requires your input to search for something. Should Yahoo! be liable for the websites in the search results if they are organized by an algorithm that tries to bring the most relevant results to your query?

7

seaburno t1_j9m0rzo wrote

I probably would not hold search engines to the same standard, but with more understanding of how the search algorithms work, I could change my mind. Even if YT removed the ISIS videos at issue in the case that was heard yesterday from its algorithm, if someone just searched "ISIS videos" and the videos came up, then I think it falls within 230's safe harbor, because they are merely hosting, not promoting, the videos.

Again, using the bookstore analogy, search is much more like saying to the employee "I'm looking for information on X" and being told it's in "aisle 3, row 7, shelf 2." In that instance, it's just a location. What you do with that location is up to you. Just because you ask Yahoo! where your nearest car dealership and nearest bar are doesn't mean that Yahoo! is liable because you were driving under the influence.

When you add in "promoted" search results, it gets stickier, because they're selling the advertising. So, if you asked where the nearest car dealership is, and they gave you that information and then also sent you a coupon for 12 free drinks that are good only on the day you purchased a new (to you) vehicle, that's a different story, and they may be liable.

7

RyanBlade t1_j9mpncd wrote

Gotcha, so then say you keep going back to the same bookstore and asking about books that are all in aisle 3, row 7 (not always shelf 2, or whatever, just sticking with the analogy). Is it not okay if the cashier sees you come in and mentions that they just got a new book in that section?

Clearly they are promoting it if this is your first time, and probably still promoting it if it is your second time, but eventually it becomes just good service. They got a book in that section, and they know you keep asking for stuff in that area. They want to sell books; is it not okay for them to let you know about the new item?

I am not trying to slippery-slope this, as I agree the line between a publisher and a distributor is very fuzzy with things like search engines, YouTube, TikTok, etc. I am just curious where you think the line is, as I agree there probably should be one but don't know where.

5

bremidon t1_j9nsyx7 wrote

>The purpose of 230 is to allow ISPs to remove harmful/inappropriate content without facing liability

Ding ding ding. Correct.

This was and is the intent, and is clear to anyone who was alive back when the problem came up originally.

However, a bunch of court cases kept moving the goalposts on what ISPs and other hosts were allowed to do as part of "removing harmful/inappropriate content". Now it does not resemble anything close to what Congress intended when 230 was created.

If you are doing a good-faith best effort to remove CP, and you accidentally take down a site that has Barney the Dinosaur on it, you should be fine. If you somehow get most of the bad guys, but miss one or two, you should also be fine. That is 230 in a nutshell.

The idea that they can use it to increase engagement is absolutely ludicrous. As /u/Brief_Profession_148 said, they have it both ways now. They can be as outspoken through their algorithms as they like, but get to be protected as if it is a neutral platform.

It's time to take 230 back to the roots, and make it clear that if you use algorithms for business purposes (marketing, sales, engagement, whatever), you are not protected by 230. You are only protected if you are making good faith efforts to remove illegal and inappropriate content. And "inappropriate" needs to be clearly enumerated so that the old trick of taking something away with the reason "for reasons we won't tell you in detail" does not work anymore.

Why any of this is controversial is beyond me.

10

g0ing_postal t1_j9m4sbe wrote

Then the big problem is: how do you categorize the video? Content creators will not voluntarily categorize their content in a way that reduces visibility. Text filtering can only go so far, and content creators will find ways around it.

The only certain way to do so is via manual content moderation. 500 hours of video are uploaded to YouTube per minute. That's a massive task. Anything else will allow some videos to get through.

Maybe eventually we can train AI to do this, but currently we need people to do it. Let's say it takes 3 minutes to moderate 1 minute of video, to allow moderators time to analyze, research, and take breaks.

500 hours/min × 60 min/hour × 24 hours/day = 720,000 hours of video uploaded per day

Multiply by 3 to get 2.16 million man-hours of moderation per day. For a standard 8-hour shift, that requires 270,000 full-time moderators to moderate just YouTube content.
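For reference, the same back-of-the-envelope calculation written out, using the commonly cited 500 hours/minute upload figure, the 3:1 review-to-runtime ratio, and the 8-hour shift assumed above:

```python
# Back-of-the-envelope moderator headcount, using the assumptions above.
upload_hours_per_minute = 500      # hours of video uploaded per minute (commonly cited figure)
review_ratio = 3                   # minutes of moderator time per minute of video
shift_hours = 8                    # hours per full-time moderator per day

video_hours_per_day = upload_hours_per_minute * 60 * 24       # 720,000 hours/day
review_hours_per_day = video_hours_per_day * review_ratio     # 2,160,000 man-hours/day
moderators_needed = review_hours_per_day / shift_hours        # 270,000 full-time moderators

print(f"{video_hours_per_day:,.0f} video hours/day -> {moderators_needed:,.0f} moderators")
```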

That's an unfeasible amount. That's not factoring in how brutal content moderation is

Even with moderation, you'll still have some videos slipping through

I agree that something needs to be done, but it must be understood the sheer scale that we're dealing with here means that a lot of "common sense" solutions don't work

1

seaburno t1_j9mc2jz wrote

Should we, as the public, be paying for YouTube's private costs? It's my understanding that AI already does a lot of the categorization. It also isn't about being perfect, but about being good enough. My understanding is that even with everything they do to keep YouTube free of porn, some still slips through, but it is taken down as soon as it is reported.

But the case isn't about categorizing the content; it's about how that content is promoted and monetized by YouTube/Google and their algorithms. And then there's the ultimate issue of the case: is the algorithm promoting the complained-of content protected under 230, which was written to give safe harbor to companies that act in good faith to take down material that violates the company's terms of service?

2

takachi8 t1_j9mp1r9 wrote

As someone whose primary source of entertainment is YouTube, and who has been on YouTube a long time, I can say their video filter is not perfect in any sense. I have seen videos that should have been pulled down for violating their terms and conditions stay up for a long time. I have also seen "perfectly good" (for lack of a better word) videos get pulled down or straight up demonetized for a variety of reasons that made zero sense but were flagged by their AI. Improper flagging causes content creators to lose money, which in turn hurts YouTube and its creators.

I have been on YouTube a long time, and everything that was ever recommended to me has been closely related to what I have watched or am actively watching. I would say their algorithm for recommending videos to a person who actually has an account with them is pretty spot on. The only time I've seen off-the-wall stuff is when I watch YouTube from a device I'm not logged into, or in incognito mode, and the same goes for advertisements. My question is: what are people looking up that causes YouTube to recommend this kind of stuff? Because I've never seen it on YouTube or in Google ads. Usually I find it on Reddit.

3

g0ing_postal t1_j9md1wp wrote

I'm not saying that the public should pay for it. I'm just saying that it would be a massive undertaking to categorize the videos. Porn seems to me like it would be easier to detect automatically; there are specific image features that can be used to detect such content.

General content is more difficult because it's hard for AI to distinguish, say, legitimate discussion of trans inclusion from transphobic hate speech disguised with bad-faith arguments.

And in order to demonetize and not promote those videos, we need to first figure out which videos those are

1

Bacch t1_j9n11ze wrote

I feel like there's one key difference. When I buy a book, take it home, and read it, another one doesn't magically appear in my hands open to the first page and with my eyes already reading it faster than I can slam it shut. With a video online? That's typically how it goes. You've got about 8 seconds to click whatever button stops it from dumping you onto the next "suggested" video.

2

override367 t1_j9nji5j wrote

you can turn off autoplay you know, you don't have to burn the entire internet down

shit you can just not use youtube if you want

2

Bacch t1_j9njwqk wrote

Sure, you can, I can, hell, most of Reddit can figure that out.

Now consider that the people I just mentioned are in the top, let's say, 10% of the "internet savvy" bell curve. Maybe that's generous. Move that number in either direction as wildly as you like, and it's still a stunning number of people who will go to their graves without it ever occurring to them that the option you just mentioned is right there, even when it's on their screen.

People are dumb. We make an awful lot of laws to accommodate them, in some cases because dumb people do even dumber things when they don't know better. These folks are too dumb to know better, and they wind up doing dumb, dangerous, or worse things. If there's any link that can be tied back to something that lawmakers or the courts think they can fix from their own Dunning-Kruger perspective, they'll generally tie it and then fix it in the most obtuse, worst conceivable way.

−1

RedBaret t1_j9o2yoj wrote

People being dumb is still no reason to take down the internet…

1

override367 t1_j9oplum wrote

Your argument is asinine. If you buy something from Target and get automatically enrolled in their mailing list, that isn't a good reason to go to the Supreme Court and demand retail stores be banned from existing. It's fucking insane they're even hearing this case.

In the case of YouTube, autoplay is a feature that comes with it; just don't use YouTube.

1

Mikanea t1_j9odm2s wrote

It's not exactly like the bookstore example because you don't independently browse through Google/YouTube like you do a book store. It's more like if you join a membership to a bookstore where they offer you a reading list every week. If that reading list has racist, sexist, or otherwise inappropriate recommendations should the bookstore be responsible? When a company creates a curated list of content should they be responsible for the contents of the list?

I don't think there is a simple yes or no answer for this. Like all things, life resists simplicity. This is a complicated issue with complicated answers.

1

skillywilly56 t1_j9lhcnn wrote

Terrorism 101: how to be the very best terrorist you can be! From constructing your very own IED to Mass Shootings, we can help you kill some innocent people! Written by Khalid Sheik Mohammed

And blazoned across the front of the bookstore, and in ads at bus stops, on billboards, on radio and TV: "New York Times best seller!" "10/10" (some random book reviewer). "The ultimate guide to help you up your terrorist game" (Goodreads). "If you read this…"

Terrorist-type activity increases… could this be linked to the sales of this book, which you advertised heavily?

No we just sell books, not content, the content is the problem not the advertising or the sale of the book.

But you wouldn’t have been able to make all those sales without advertising…

We take no responsibility for the content.

But you made money from the content?

Yes

But no one would’ve known about the book if you hadn’t advertised it and marketed it heavily.

We can't know that for sure, but we have a responsibility to our shareholders to make profit any way possible…

Even by advertising harmful material?

Yes

0

override367 t1_j9mccsz wrote

What the hell are you talking about? Google removes terrorist content as soon as it is reported. The case before us is more like a book in the back (one that isn't even illegal) containing a bunch of pictures of US soldiers who've been tortured by the Viet Cong, which is against the bookstore's internal code of conduct to sell, and which offended someone who sued even though they had a button to permanently remove the book and others like it from their own personally curated section of the bookstore.

I also want to point out that a good deal of terrorist content is legal and covered under the First Amendment. Not bomb-making instructions or whatever, but their ideology can absolutely be spoken aloud in America. Google gets plenty of pressure from its advertisers to remove such content.

Now, right-wing hate speech, not so much; the algorithm encourages it because it favors engagement, and highly emotional rage bait drives engagement. None of this has anything to do with Section 230, however, and yet here we are.

1

skillywilly56 t1_j9mmymr wrote

Dear lord, have you never heard of a metaphor? One cannot just wash one's hands of something like Pontius Pilate and make money off of it just because they didn't make the content or control what users watch, because they ARE controlling it.

Especially when they use an "algorithm" to deliberately and constantly feed content to users, such as the right-wing bullshittery and misinformation, because the most controversial stuff gets the most views and gives them the most ad revenue. They aren't giving you the content you want; they are feeding you content that sells ads.

It's like a bookstore that says "we have millions of books to choose from" while the only books on display are books about Nazis and all the recommended reading is about how to become a Nazi. And then, once you've gone and bought a book about something else entirely and come back a week later: "You wanna read something about Nazis?" "We really think you'd like stuff about Nazis." Because every time you read or buy something about Nazis, they get more money than when you buy any other book.

They don't have an algorithm; they have a hate generator, and the key factor is that it is deliberate. It deliberately aims content to generate ad revenue. It's not an "accident", and that's the sticking point.

4

Zacajoowea t1_j9n5t81 wrote

If you go to YouTube in your browser right now, is it full of Nazis and right-wing hate? Cause mine is full of sketch comedy from the 90s and Kurzgesagt videos; if your homepage is full of Nazi stuff… well… that's a you thing. I have never been fed completely irrelevant content that I'm not searching for. You need to adjust your metaphorical bookstore to offer individualized recommendations based on your previous book purchases and the purchasing habits of people who purchased the same books.
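That last sentence is roughly describing neighbor-based ("people who bought the same books") collaborative filtering. A toy sketch of the idea, with entirely made-up data and function names:

```python
# Toy sketch of "recommend what people with the same purchases also bought"
# (neighbor-based collaborative filtering); all data and names are made up.
from collections import Counter

purchase_history = {                  # user -> set of items they bought/watched
    "alice": {"sketch_comedy", "kurzgesagt", "history_docs"},
    "bob":   {"kurzgesagt", "history_docs", "gaming"},
    "carol": {"gaming", "4x4_offroading"},
}

def recommend(user, history):
    """Score items favored by users whose tastes overlap with this user's."""
    mine = history[user]
    scores = Counter()
    for other, items in history.items():
        if other == user:
            continue
        overlap = len(mine & items)   # shared interests weight this neighbor
        if overlap == 0:
            continue
        for item in items - mine:     # only suggest things the user lacks
            scores[item] += overlap
    return [item for item, _ in scores.most_common()]

print(recommend("alice", purchase_history))   # -> ['gaming']
```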

0

skillywilly56 t1_j9n8nqw wrote

I watch YouTube videos for gaming stuff, historical docuseries, and 4x4ing, that's it. About a year or two ago my feed went from gaming content to videos about "guy owns feminist" and Andrew Tate/Jordan Peterson-type horseshit, along with freaking Sky News bullshit (I don't even watch the news!), and all my gaming stuff just disappeared from my recommended list, so I actively have to search and go to the YouTuber's channel to get to the vids I want.

It's not a me thing. I even tried downvoting, "giving feedback", and liking videos, but nup, it did nothing to change the algorithm; still just right-wing anti-female bullshit.

So either the algorithm thinks that because I like video games and four-wheel driving I'm an incel and is deliberately pumping stuff that will generate controversy, or it thinks that other people who like those things will also like right-wing incel shit and pumps it to you.

Maybe it’s cause I watch it through my television app and not my computer or phone but I sure as shit never went looking for it and I can’t seem to get rid of it.

I've considered just deleting my account to see what happens, but even when I watch through a VPN without signing in: boom, horseshit right-wing propaganda.

3

nanocyto t1_j9lpkr9 wrote

>they do under 230

I disagree. One of the requirements for 230 is that it isn't your content, but the page you serve is content provided by your servers. If your server were just a corridor that relayed the information, I'd agree (and I think that's the intent of the law), but it created a page. That organization is a form of content.

−2

override367 t1_j9mc1ut wrote

Are you just going to ignore everything else I typed?

There is no way to present content that doesn't favor some weighted position, and with 3.7 million videos a day the service can't exist if you're just blindly putting it out alphabetically

that would be, again, like a bookstore being forced to just put books out front in the order they are received, without being able to sort them by section

3

nanocyto t1_j9pjukr wrote

I'm suggesting that bookstores can be held liable for what books they put out. I can think of all sorts of material you wouldn't want them to curate, like a section dedicated to people trying to figure out how to start trafficking.

1

override367 t1_j9pxu61 wrote

They... literally can't unless a complaint is filed, like holy shit this is the core of the case law around section 230

They can't knowingly put out material that is illegal or would get them in trouble, but they bear no liability if they don't know, until such time as they are made aware of it

The reason 230 was created was that this standard only applied to websites that exercised no moderation. I.e., if the algorithm were literally a random number generator and you had an equal chance of it recommending you a cooking video or actual child pornography, YouTube would be 100% in the clear without 230 as long as they removed the latter after being notified. 230 was necessary because Prodigy, like YouTube, had moderation and content filtering, and any moderation at all meant that they were tacitly endorsing whatever was on their service; therefore, they were liable.

This is the entire reason the liability shield was created. Section 230 means websites bear no liability in essentially any circumstance other than willful negligence, as long as they didn't upload the content. SCOTUS is only considering this case because they aren't judges, they are mad wizards, and this is Calvinball, not law.

1