Comments


vwb2022 t1_j9kecro wrote

Ars Technica is reporting that the oral arguments don't look to have gone well for the plaintiffs, and that the justices seem to view it as more of a problem with the legislation itself (a Congress problem) than with the interpretation of the legislation (a Court problem).

212

bohreffect t1_j9kkdrp wrote

You can safely bet most rulings from a conservative court will be "Congress needs to make a law"

79

Youvebeeneloned t1_j9km8vb wrote

Except when it requires them to overturn law that has stood for multiple decades...

102

bohreffect t1_j9kqwtr wrote

Usually as the result of "congress needs to create new legislation for this to be constitutional"

34

Youvebeeneloned t1_j9ksrpm wrote

Except when we go reinterpreting the Constitution despite very clear case law saying what is constitutional and what is allowed, like what happened with the First Amendment and prayer in school.


We can go all day about this, man... the Constitution is clearly being reinterpreted by "originalists" who are anything but. It's a tale as old as time, and something that even in the early 19th century was rejected, as "originalism" goes against the clear intent of the Constitution.

13

patricide1st t1_j9l0t4d wrote

I think the person you are arguing with is describing reality, not endorsing it.

28

bohreffect t1_j9l9eb2 wrote

It's amazing how quickly people impute value judgement.

15

Delightful_Debutant t1_j9oczju wrote

Happened to me in another post. We have quick-to-react people, unthinking people, and disingenuous people. None of those people are who I want informing my opinion or others'. So I see it as a blessing. You can block the jerks and eventually have a better experience.

1

bohreffect t1_j9ktmj8 wrote

I'm not disputing this?

I'm literally just taking the stance of "if you're taking bets on the current Justice cohort"

16

Real-Problem6805 t1_j9oe038 wrote

No, dear boy, it's being read literally, with the original intent behind it as written.

0

wbsgrepit t1_j9l4e54 wrote

That only flies on topics that haven't been upheld a dozen times over 50 years before this seated court. When they revisit extremely well-established and reaffirmed law and conflict with clear stare decisis, it is something totally different.

1

Maximus0314 t1_j9outq9 wrote

And they would be correct. The Supreme Court should not be creating law, only interpreting it.

3

Zetavu t1_j9oi9bs wrote

Right, the headline is misleading clickbait; this is already old news.

2

Brief_Profession_148 t1_j9kc6xo wrote

They curate with their algorithms. They want protections as passive hosts of information, they should have to turn off their algorithms. If they want to curate what you watch to maximize their profit, they should be responsible for what that algorithm directs people to. They don’t get to be passive hosts and active curators like a publisher at the same time.

146

override367 t1_j9kl948 wrote

I mean, they do under 230, they absolutely fucking do, until SCOTUS decides they can't

Even pre-230 the algorithm wouldn't be the problem; after all, bookstores were not liable for the content of every book they sold, even though they clearly had to decide which books were front-facing

The algorithm surfacing a video that should be removed is no different from a bookstore putting a book ultimately found to be libelous on a front-facing endcap: the bookstore isn't expected to have actually read the book and vetted its content; it merely has a responsibility to remove it should that complaint be made known

48

seaburno t1_j9ks54q wrote

It's not like a bookstore at all. First, Google/YouTube aren't being sued because of the content of the videos (which is protected under 230); they're being sued because they are promoting radicalism (in this case from ISIS) to susceptible users in order to sell advertising. They know those users are susceptible because of their search history and other discrete data they hold. Instead of the bookstore analogy, it's more like a bar that keeps serving the drunk at the counter more and more alcohol, even without being asked, and then hands the drunk his car keys to drive home.

The purpose of 230 is to allow ISPs to remove harmful/inappropriate content without facing liability, and allow them to make good faith mistakes in not removing harmful/inappropriate content and not face liability. What the Content Providers are saying is that they can show anything without facing liability, and that it is appropriate for them to push harmful/inappropriate content to people who they know are susceptible to increase user engagement to increase advertising revenue.

The Google/YouTube algorithm actively pushes content to the user that it thinks the user should see to keep the user engaged in order to sell advertising. Here, the Google/YouTube algorithm kept pushing more and more ISIS videos to the guy who committed the terrorism.

What the Google/YouTube algorithm should be doing is saying "videos in categories X, Y and Z will not be promoted." Not remove them. Not censor them. Just not promote them via the algorithm.

44

somdude04 t1_j9l13bt wrote

If a Barnes and Noble buys and front-faces more of a book at store #723 because it's been selling a bunch there, they're not obligated to read it and verify it's not libelous or whatever. It's their choice, but having a book isn't an explicit endorsement of it. So why would YouTube, which is effectively like a chain of stores selling videos (with one store per person), be liable if they advertise videos to someone (as they're effectively the sole customer at an individual store).

12

seaburno t1_j9l34e6 wrote

The difference is because B&N is making the money on the sale of the book, not on the advertising at other locations in the store (or in the middle of the book).

YT isn't selling videos. They do not make money on the sale of a "product" to the end user. Instead, they are selling advertising. To increase their revenue via advertising, they are pushing content to increase the time on site.

The YouTube/Google algorithm is like saying: "Oh, you're interested in a cup of coffee? Here, try some meth instead."

15

RyanBlade t1_j9lyl2e wrote

Just curious, as I completely get where you are coming from, but would you apply the same standard to a search engine? The algorithm requires your input to search for something. Should Yahoo! be liable for the websites in the search results if they are organized by an algorithm that tries to bring the most relevant results to your query?

7

seaburno t1_j9m0rzo wrote

I probably would not hold search engines to the same standard, though with more understanding of how search algorithms work I could change my mind. Even if YT removed from its algorithm the ISIS videos at issue in the case heard yesterday, if someone just searched "ISIS videos" and the videos came up, then I think it falls within the 230 exception, because they are merely hosting, not promoting, the videos.

Again, using the bookstore analogy, search is much more like saying to the employee: "I'm looking for information on X" and being told it's on "aisle 3, row 7, shelf 2." In that instance, it's just a location. What you do with that location is up to you. Just because you ask Yahoo! where your nearest car dealership and nearest bar are doesn't mean that Yahoo! is liable because you were driving under the influence.

When you add in "promoted" search results, it gets stickier, because they're selling the advertising. So, if you asked where the nearest car dealership is, and they gave you that information and then also sent you a coupon for 12 free drinks that are good only on the day you purchased a new (to you) vehicle, that's a different story, and they may be liable.

7

RyanBlade t1_j9mpncd wrote

Gotcha, so then say you keep going back to the same bookstore and asking about books that are all in aisle 3, row 7 (not always shelf 2, or whatever, just sticking with the analogy). Is it not okay if the cashier sees you come in and mentions they just got a new book in that section?

Clearly they are promoting it if this is your first time, probably promoting it if it's your second, but eventually it becomes just good service. They got a book in that section, and they know you keep asking for stuff in that area. They want to sell books; is it not okay for them to let you know about the new item?

I am not trying to slippery-slope, as I agree the line between a publisher and a distributor is very fuzzy with stuff like search engines, YouTube, TikTok, etc. I am just curious where you think the line is, as I agree there probably should be one, but I don't know where.

5

bremidon t1_j9nsyx7 wrote

>The purpose of 230 is to allow ISPs to remove harmful/inappropriate content without facing liability

Ding ding ding. Correct.

This was and is the intent, and is clear to anyone who was alive back when the problem came up originally.

However, a bunch of court cases kept moving the goalposts on what ISPs and other hosts were allowed to do as part of "removing harmful/inappropriate content". Now it doesn't resemble anything close to what Congress intended when 230 was created.

If you are doing a good-faith best effort to remove CP, and you accidentally take down a site that has Barney the Dinosaur on it, you should be fine. If you somehow get most of the bad guys, but miss one or two, you should also be fine. That is 230 in a nutshell.

The idea that they can use it to increase engagement is absolutely ludicrous. As /u/Brief_Profession_148 said, they have it both ways now. They can be as outspoken through their algorithms as they like, but get to be protected as if it is a neutral platform.

It's time to take 230 back to the roots, and make it clear that if you use algorithms for business purposes (marketing, sales, engagement, whatever), you are not protected by 230. You are only protected if you are making good faith efforts to remove illegal and inappropriate content. And "inappropriate" needs to be clearly enumerated so that the old trick of taking something away with the reason "for reasons we won't tell you in detail" does not work anymore.

Why any of this is controversial is beyond me.

10

g0ing_postal t1_j9m4sbe wrote

Then the big problem is: how do you categorize the video? Content creators will not voluntarily categorize their content in a way that reduces visibility. Text filtering can only go so far, and content creators will find ways around it.

The only certain way to do so is via manual content moderation. 500 hours of video are uploaded to YouTube per minute. That's a massive task; anything else will let some videos get through.

Maybe eventually we can train AI to do this, but currently we need people. Let's say it takes 3 minutes to moderate 1 minute of video, to give moderators time to analyze, research, and take breaks:

500 hrs/min × 60 min/hr × 24 hr/day = 720,000 hours of video uploaded per day

Multiply by 3 to get 2.16 million man-hours of moderation per day. For a standard 8-hour shift, that requires 270,000 full-time moderators for YouTube content alone.
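That arithmetic can be checked in a few lines of Python. Note the inputs are the commenter's assumptions (the widely cited 500 hours/minute upload figure and a 3:1 review ratio), not official statistics:

```python
# Back-of-the-envelope check of the moderation numbers above
# (assumes the commonly cited 500 hours uploaded per minute).
UPLOAD_HOURS_PER_MINUTE = 500
REVIEW_RATIO = 3   # assumed: 3 moderator-hours per 1 hour of video
SHIFT_HOURS = 8    # standard full-time shift

hours_uploaded_per_day = UPLOAD_HOURS_PER_MINUTE * 60 * 24
moderation_hours_per_day = hours_uploaded_per_day * REVIEW_RATIO
moderators_needed = moderation_hours_per_day // SHIFT_HOURS

print(hours_uploaded_per_day)     # 720000
print(moderation_hours_per_day)   # 2160000
print(moderators_needed)          # 270000
```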

That's an infeasible number, and it doesn't factor in how brutal content moderation work is.

Even with moderation, you'll still have some videos slipping through.

I agree that something needs to be done, but it must be understood that the sheer scale we're dealing with here means a lot of "common sense" solutions don't work.

1

seaburno t1_j9mc2jz wrote

Should we, as the public, be paying for YouTube's private costs? It's my understanding that AI already does a lot of the categorization. It also isn't about being perfect, but good enough. Even with everything they do to keep YouTube free of porn, some still slips through, but it is taken down as soon as it is reported.

But the case isn't about categorizing content; it's about how content is promoted and monetized by YouTube/Google and their algorithms. And then there's the ultimate issue of the case: is the algorithm's promotion of the complained-of content protected under 230, which was written to give safe harbor to companies that act in good faith to take down material violating their terms of service?

2

takachi8 t1_j9mp1r9 wrote

As someone whose primary source of entertainment is YouTube, and who has been on YouTube a long time, I can say their video filter is not perfect in any sense. I have seen videos that should have been pulled down for violating the terms and conditions stay up for a long time. I have also seen "perfectly good" (for lack of a better word) videos get pulled down or straight-up demonetized for a variety of reasons that made zero sense, flagged by their AI. Improper flagging causes content creators to lose money, which in turn hurts YouTube and its creators.

In all my time on YouTube, everything ever recommended to me has been closely related to what I have watched or am actively watching. I would say their algorithm for recommending videos to a person who actually has an account is pretty spot-on. The only time I've seen off-the-wall stuff is when I watch YouTube from a device I'm not logged into, or in incognito mode, and the same goes for advertisements. My question is: what are people looking up that causes YouTube to recommend this kind of stuff? Because I've never seen it on YouTube or in Google ads. Usually I find it on Reddit.

3

g0ing_postal t1_j9md1wp wrote

I'm not saying the public should pay for it. I'm just saying it would be a massive undertaking to categorize the videos. Porn seems like it would be easier to detect automatically; there are specific images that can be used to detect such content.

General content is more difficult because it's hard for ai to distinguish, say, legitimate discussion over trans inclusion vs transphobic hate speech disguised using bad faith arguments

And in order to demonetize and not promote those videos, we need to first figure out which videos those are

1

Bacch t1_j9n11ze wrote

I feel like there's one key difference. When I buy a book, take it home, and read it, another one doesn't magically appear in my hands open to the first page and with my eyes already reading it faster than I can slam it shut. With a video online? That's typically how it goes. You've got about 8 seconds to click whatever button stops it from dumping you onto the next "suggested" video.

2

override367 t1_j9nji5j wrote

you can turn off autoplay you know, you don't have to burn the entire internet down

shit you can just not use youtube if you want

2

Bacch t1_j9njwqk wrote

Sure, you can, I can, hell, most of Reddit can figure that out.

Now consider that the people I just mentioned are in the top, let's say, 10% of the "internet savvy" bell curve. Maybe that's generous. Move that number in either direction as wildly as you like, and it's still a stunning number of people who will go to their graves without it ever occurring to them that the option you just mentioned is right there, even when it's on their screen.

People are dumb. We make an awful lot of laws to accommodate them, in some cases because dumb people do even dumber things when they don't know better. These folks are too dumb to know better, and they wind up doing dumb, dangerous, or worse things. If there's any link that lawmakers or the courts think they can fix from their own Dunning-Kruger perspective, they'll generally make it, and then fix it in the most obtuse and worst conceivable way possible.

−1

RedBaret t1_j9o2yoj wrote

People being dumb is still no reason to take down the internet…

1

override367 t1_j9oplum wrote

Your argument is asinine. If you buy something from Target and get automatically enrolled in their mailing list, that isn't a good reason to go to the Supreme Court and demand retail stores be banned from existing. It's fucking insane they're even hearing this case.

In the case of YouTube, autoplay is a feature that comes with it. Just don't use YouTube.

1

Mikanea t1_j9odm2s wrote

It's not exactly like the bookstore example because you don't independently browse through Google/YouTube like you do a book store. It's more like if you join a membership to a bookstore where they offer you a reading list every week. If that reading list has racist, sexist, or otherwise inappropriate recommendations should the bookstore be responsible? When a company creates a curated list of content should they be responsible for the contents of the list?

I don't think there is a simple yes or no answer for this. Like all things, life resists simplicity. This is a complicated issue with complicated answers.

1

skillywilly56 t1_j9lhcnn wrote

Terrorism 101: How to Be the Very Best Terrorist You Can Be! From constructing your very own IED to mass shootings, we can help you kill some innocent people! Written by Khalid Sheikh Mohammed.

And emblazoned across the front of the bookstore, and in ads on bus stops, billboards, radio, and TV: "New York Times best seller!" "10/10" (some random book reviewer). "The ultimate guide to help you up your terrorist game" (Goodreads). "If you read this…"

Terrorist-type activity increases… Could this be linked to the sales of this book, which you advertised heavily?

No, we just sell books, not content. The content is the problem, not the advertising or the sale of the book.

But you wouldn’t have been able to make all those sales without advertising…

We take no responsibility for the content.

But you made money from the content?

Yes

But no one would’ve known about the book if you hadn’t advertised it and marketed it heavily.

We can't know that for sure, but we have a responsibility to our shareholders to make a profit any way possible…

Even by advertising harmful material?

Yes

0

override367 t1_j9mccsz wrote

What the hell are you talking about? Google removes terrorist content as soon as it is reported. The case before us is more like a book in the back (one that isn't even illegal) containing pictures of US soldiers tortured by the Vietcong, which is against the bookstore's internal code of conduct to sell, and which offended someone who sued, even though they had a button to delete the book and others like it from their own personally curated section of the bookstore forever.

I also want to point out that a good deal of terrorist content is legal and covered under the First Amendment. Not bomb-making or whatever, but their ideology can absolutely be spoken aloud in America. Google gets plenty of pressure from its advertisers to remove such content.

Right-wing hate speech, not so much; the algorithm encourages it because it favors engagement, and highly emotional rage bait drives engagement. None of this has anything to do with Section 230, however, and yet here we are.

1

skillywilly56 t1_j9mmymr wrote

Dear lord, have you never heard of a metaphor? One cannot just wash one's hands of something like Pontius Pilate and make money off of it just because they didn't make the content or control what users watch, because they ARE controlling it.

Especially when they use an "algorithm" to deliberately feed content to users constantly, such as the right-wing bullshitery and misinformation, because the most controversial stuff gets the most views and brings them the most ad revenue. They aren't giving you the content you want; they are feeding you content that sells ads.

Like a bookstore that says "we have millions of books to choose from," and then the only books on display are books about Nazis, all the recommended reading is about how to become a Nazi, and once you've bought a book about something else entirely and come back a week later, it's "you wanna read something about Nazis?" "We really think you'd like stuff about Nazis." Because every time you read or buy something about Nazis, they get more money than when you buy any other book.

They don't have an algorithm, they have a hate generator, and the key factor is that it is deliberate. It deliberately aims content to generate ad revenue; it's not an "accident," and that's the sticking point.

4

Zacajoowea t1_j9n5t81 wrote

If you go to YouTube in your browser right now is it full of Nazis and right wing hate? Cause mine is full of sketch comedy from the 90s and Kurzgesagt videos, if your homepage is full of Nazi stuff… well… that’s a you thing. I have never been fed complete irrelevant content that I’m not searching for. You need to adjust your metaphorical bookstore to be individualized recommendations based on the previous book purchases and the purchasing habits of people who purchased the same books.

0

skillywilly56 t1_j9n8nqw wrote

I watch YouTube videos for gaming stuff, historical docuseries, and 4x4ing, that's it. A year or two ago my feed went from gaming content to videos about "guy owns feminist" and Andrew Tate/Jordan Peterson-type horseshit, along with freaking Sky News bullshit (I don't even watch the news!). All my gaming stuff just disappeared from my recommended list, and I actively have to search and go to the YouTuber's channel to get to the vids I want.

It's not a me thing. I even tried downvoting, "giving feedback," and liking videos, but nope, it did nothing to change the algorithm. Still just right-wing anti-female bullshit.

So either the algorithm thinks that because I like video games and four-wheel driving I'm an incel, and deliberately pumps stuff that will generate controversy, or it thinks that other people who like those things will also like right-wing incel shit and pumps it to you.

Maybe it’s cause I watch it through my television app and not my computer or phone but I sure as shit never went looking for it and I can’t seem to get rid of it.

I've considered just deleting my account to see what happens, but even when I watch with a VPN without signing in: boom, horseshit right-wing propaganda.

3

nanocyto t1_j9lpkr9 wrote

>they do under 230

I disagree. One of the requirements for 230 is that it isn't your content, but the page you serve *is* content produced by your servers. If your server were just a corridor that simply relayed the information, I'd agree (and I think that's the intent of the law), but it created a page. That organization is a form of content.

−2

override367 t1_j9mc1ut wrote

Are you just going to ignore everything else I typed?

There is no way to present content that doesn't favor some weighted position, and with 3.7 million videos a day the service can't exist if you're just blindly putting it out alphabetically

that would be, again, like a book store being forced to just randomly put books out front in the order they are received and not being able to sort them by section

3

nanocyto t1_j9pjukr wrote

I'm suggesting that bookstores can be held liable for which books they put out. I can think of all sorts of material you wouldn't want them to curate, like a section dedicated to people trying to figure out how to start trafficking.

1

override367 t1_j9pxu61 wrote

They... literally can't unless a complaint is filed. Like, holy shit, this is the core of the case law around Section 230.

They can't knowingly put out material that is illegal or would get them in trouble, but they bear no liability if they don't know, until such time as they are made aware of it

The reason 230 was created is that this standard only applied to websites that exercised no moderation. I.e., if the algorithm were literally a random number generator and you had an equal chance of it recommending you a cooking video or actual child pornography, YouTube would be 100% in the clear without 230, as long as they removed the latter after being notified. 230 was necessary because Prodigy, like YouTube, had moderation and content filtering, and any moderation at all meant they were tacitly endorsing whatever was on their service; therefore, they were liable.

This is the entire reason the liability shield was created. Section 230 means websites bear no liability in essentially any circumstance other than willful negligence, as long as they didn't upload the content. SCOTUS is only considering this case because they aren't judges, they are mad wizards, and this is Calvinball, not law.

1

mcphilclan t1_j9khsmz wrote

How would the internet work if companies like Twitter couldn’t curate trending topics over porn?

5

lentshappening t1_j9ks473 wrote

They could function more like a publisher, with editorial review of the content that is shared on a feed. Or they could revert to something like MySpace, where everything is published and it’s on the user to find accounts they like and visit them.

7

Simonic t1_j9lny0y wrote

And most would just shut down. Sites like YouTube would become unusable without an algorithm, and curation/moderation costs could skyrocket to the point of no longer being profitable. Millions of minutes of new video are added to YouTube daily. I assume most of these social media sites are the same.

4

Aspirin_Dispenser t1_j9o4w9j wrote

The same way they functioned before the content served to users was algorithmically amplified or suppressed. It wasn't that long ago that Twitter and Facebook used chronological feeds, where content was served purely in the order it was posted. You kept pornography out of your feed simply by not following users who posted it. Back then, the argument that these sites were simply repositories of information posted by users was legitimate, because the companies running them were doing little more than serving it to users as it came in, with ads interspersed throughout to generate revenue.

Now, with social media sites choosing to amplify or suppress content, that argument doesn't hold water. The content is curated and editorialized in much the same way that a newspaper or book publisher curates its content. If they want to do that, that's fine, but they need to be held to the same legal standards as any other publisher. Or they need to stop acting like a publisher and start acting like the simple "repositories of information" they claim to be. They can't have all the financial benefits of providing curated content along with all the lax legal standards of providing un-curated content.

If being held to those standards means that their business model no longer works, then oh well. But the truth of the matter is that it does work, it just generates less money.

2

Anal_Forklift t1_j9kk5sr wrote

So is the solution just to not curate content at all? If that's the case, social media would be overrun with porn, conspiracy theories, etc.

1

Simonic t1_j9lo4ta wrote

Most are already curated/moderated to the best of their ability/capacity. And, that still isn't enough.

2

BigFitMama t1_j9osvm0 wrote

If you have ever set up ads in a big system like Amazon or Google, you can see exactly how it works: the more you pay, the more your ad gets seen, and the more you refine your demographic audience, the more specifically your content targets them.

Are the websites complicit? No, because they get paid to post content without bias, according to their pay scale and the customer's targeting. Their job is to rank and sell for whoever pays them, and we have to realize that.

And yes, because as corporations they have an image and an overall agenda they are allowed to promote as private entities, not journalistic entities. So if they allow content contrary to their corporate mission, content that invites attacks on their interests, they have to deal with the consequences.

That is why any search, shopping, or social media site is NOT a journalistic entity, and is simply a place to pay for advertisement of your product, content, or website.
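As a rough illustration of the pay-for-placement ranking described above, here is a toy bid-times-relevance sort in Python. The ad names, bids, and match scores are invented for illustration; real ad auctions are far more involved than this sketch:

```python
# Toy sketch of "the more you pay, the more your ad gets seen":
# ads are ordered by bid, weighted by how well each ad matches
# the targeted demographic. All numbers below are made up.
ads = [
    {"name": "ad_a", "bid": 2.00, "audience_match": 0.3},
    {"name": "ad_b", "bid": 0.50, "audience_match": 0.9},
    {"name": "ad_c", "bid": 1.00, "audience_match": 0.8},
]

def rank_ads(ads):
    # Higher bids and tighter demographic matches both raise the score.
    return sorted(ads, key=lambda ad: ad["bid"] * ad["audience_match"], reverse=True)

for ad in rank_ads(ads):
    print(ad["name"])   # ad_c, then ad_a, then ad_b
```

A high bid with poor targeting (ad_a) can still lose to a moderate bid with a strong match (ad_c), which is why refining the audience matters as much as paying more.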

1

idungiveboutnothing t1_j9kk0s2 wrote

So you're saying that if I went to a library, told them I'm doing research for a paper about gangs, and asked for any gang-related materials they had, and they gave me a list, the library should then be considered to be aiding and abetting gangs?

−4

Brief_Profession_148 t1_j9klri2 wrote

Not even comparable. The problem isn't that ISIS videos are being looked up, or are being missed and not removed in a timely fashion. It's that their reckless, poorly written algorithms are suggesting ISIS videos after people look at unrelated topics. Content gatekeeping isn't the sole issue here; it's that algorithms are promoting terrible, dangerous content because they decided it would keep engagement up to sell ads. They control that algorithm, so they bear responsibility for the content they promote, just like a normal publisher can be liable for its content. They crossed the line from host to publisher when they started using algorithms to curate content.

9

Simonic t1_j9lp30t wrote

But they aren't a normal publisher. They aren't the creators of the content. You are asking them to be responsible for millions of people who upload content daily, and no algorithm is going to remove all instances of "bad" suggestions. It would require staff covering just about every language on earth curating/moderating every single video posted, because that is the only way to remove videos before they "fall" into the algorithm.

Or remove the algorithm, and search videos by "newest first" or "most watched" etc.

−1

idungiveboutnothing t1_j9ko77n wrote

It's entirely comparable, and it's literally what's happening under the hood of the algorithms. Correlations are built between what you engage with and what others who engaged with that content also engaged with, so it would be exactly like asking for gang-related materials and the librarian saying, "Oh, you might also want to check out books on trafficking; people who look up stuff on gangs look at that too." Is the librarian speaking on behalf of the entire library because you engaged with them and they answered your question? It really sounds to me like you don't understand how software, servers, algorithms, etc. work.
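A minimal sketch of that "people who engaged with X also engaged with Y" co-occurrence idea, with made-up toy data. This illustrates the general technique, not YouTube's actual system:

```python
# Toy item-to-item co-occurrence recommender: count how often items
# appear together in user histories, then suggest the most frequent
# co-occurring items. The histories below are invented for illustration.
from collections import Counter
from itertools import permutations

histories = [                      # each inner list = one user's engagement history
    ["gangs", "trafficking"],
    ["gangs", "trafficking"],
    ["gangs", "trafficking", "history"],
    ["gangs", "history"],
    ["cooking", "history"],
]

co_counts = {}                     # item -> Counter of items engaged with alongside it
for history in histories:
    for a, b in permutations(set(history), 2):
        co_counts.setdefault(a, Counter())[b] += 1

def recommend(item, k=2):
    """Return up to k items most often engaged with alongside `item`."""
    return [other for other, _ in co_counts.get(item, Counter()).most_common(k)]

print(recommend("gangs"))   # ['trafficking', 'history']
```

The recommender never looks at what any item *is*, only at what it co-occurs with, which is exactly why it can surface "trafficking" to someone who only ever asked about "gangs."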

−4

dustofoblivion123 OP t1_j9kanx4 wrote

From the article:

"An upcoming Supreme Court case could answer one of the toughest questions of the internet age: Should online companies be held responsible for promoting harmful speech?

The case, Gonzalez v. Google, could upend the modern internet economy, sparing no online business. A ruling against Google will likely leave internet companies — from social media platforms to travel websites to online marketplaces — scrambling to reconfigure their businesses to avoid costly lawsuits.

The case, which will be argued Feb. 21, tests whether Google’s YouTube can be held liable for automated recommendations of Islamic State terrorism videos. The company is being sued by the family of Nohemi Gonzalez, a 23-year-old U.S. citizen who was among the at least 130 people killed in coordinated attacks by the Islamic State in Paris in November 2015."

16

wbsgrepit t1_j9l5jvq wrote

IMHO, if they destroy 230, it should be applied well outside the internet context too: hold owners responsible for people saying and discussing things on their property in general. To me it is equivalent to asking that Walgreens be held liable for two jihadists walking in its parking lot while plotting, or shouting their POV from its grass.

It should clearly be OK for them to remove or kick people off their property should they choose, but they should not be held liable (or expected to police) for the speech of another, or for not taking the action to trespass them. 230 just reaffirms, for internet platforms, the normal and usual exemption enjoyed in the physical world.

15

odinlubumeta t1_j9ljbyq wrote

No, it promoted the content. To use your analogy: someone walks into Walgreens and the cashier says, "Hey, there's a meeting in the back you should attend" (without knowing what the meeting is about). The person goes back, a bunch of Nazis are trying to convince people to kill Jews, and the person organizes with others and does it. It's a grey area because it has to be determined whether Walgreens is at fault for pointing the guy to a group it knew nothing about.

And it matters because hate groups have trouble recruiting people in public places, but not on the internet. The rise of this problem is definitely because of the internet, and the ability to organize is also made much easier because of it. So the question becomes: do you allow more freedom at the cost of more death? You may think freedom should always win, but there are plenty of times freedom is restricted, from nuclear weapons to not allowing people to bring weapons into certain places. The reason to not allow such things is often how people will use them, or their potential for misuse. Again, it is not a black and white area.

27

Simonic t1_j9lmth0 wrote

Except, from my understanding, YouTube/Google didn't expressly "promote" it; the algorithm suggested it. Under that reading your analogy doesn't exactly hold up, unless you have the cashier add: "I see that you've been attending and checking on a few of these meetings -- there's one in the back if you'd like to check it out."

The problem here is that they're taking a flamethrower to the problem when all they need is a match. And the reaction from the internet will be to simply curtail anything and everything that could get them a lawsuit. Many sites would simply cease to exist because they can't moderate millions of interactions.

And sites like YouTube would become unbearable without an algorithm.
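YouTube's actual recommender is proprietary, so as a purely illustrative sketch (every name, tag, and scoring rule below is invented), "suggesting based on what you've already watched" can be as simple as scoring candidates by tag overlap with a user's history:

```python
from collections import Counter

# Toy watch-history recommender: score candidate videos by how often
# their tags appear among videos the user has already watched.
# Purely illustrative -- real recommenders are vastly more complex.

def recommend(watch_history, candidates, top_n=3):
    # Tally how often each tag appears in the user's history.
    interest = Counter(tag for video in watch_history for tag in video["tags"])

    def score(video):
        # Counter returns 0 for unseen tags, so unrelated videos score 0.
        return sum(interest[tag] for tag in video["tags"])

    return sorted(candidates, key=score, reverse=True)[:top_n]

history = [
    {"id": "a", "tags": ["politics", "news"]},
    {"id": "b", "tags": ["politics", "debate"]},
]
candidates = [
    {"id": "c", "tags": ["cooking"]},
    {"id": "d", "tags": ["politics", "news"]},
]
picks = recommend(history, candidates, top_n=1)
print(picks[0]["id"])  # "d" -- the video matching past interests wins
```

The legal question in the case is essentially whether ranking `d` above `c` counts as the platform "recommending" content or merely organizing it.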

3

odinlubumeta t1_j9loo9d wrote

Okay, add the caveat if you want (it was a take on the other person's analogy). You think that somehow completely negates the argument?

That’s why it is a grey area. How responsible should they be. What do they need to do? They will need to address it and answer it.

It is also not the job of lawmakers to make sure YouTube is bearable. That's the worst way to approach a law. If your business can't adapt to the laws, then it should go out of business; it is weird to argue otherwise. Apply it to any business. Safety and well-being should come first, before entertainment. At least they should in these non-Roman-gladiator days.

2

Simonic t1_j9lsvvk wrote

YouTube, Facebook/Instagram, Twitter, Reddit, just about all of these "general services" that allow third party participation are on the chopping block. If the protections granted by Section 230 are removed/diminished we have a far more restrictive internet.

Another unintended consequence would be making it harder to track the "bad people." If you remove their presence from social platforms, they will continue to operate -- just harder to track. That was one of the unintended consequences of the law targeting websites used for human trafficking: the traffickers became a lot harder for law enforcement to track down.

5

odinlubumeta t1_j9lw5ji wrote

Again you don’t do this for any other business. You are ardent in your defense because you like one of them. That’s not how laws should be written. Again if they are incapable of adapting then they shouldn’t be in business. And I have yet to see you argue that. Just that they would go away.

We have plenty of history of catching bad guys before the internet existed. We also have plenty of mass shootings by guys with red flags all over the internet who weren't stopped. The FBI having to adapt to the times is not an argument that things would be worse if 230 were removed; that's you speculating. And if we just wanted it to be easier for the government to find bad people, we could allow it warrantless access to people's phones and computers. Laws are made with both the idea of freedoms and the idea of limits on those freedoms.

I am not saying what the laws should be by the way, I am saying that you cannot argue that things must stay the same simply because a company might go out of business or it is harder to track bad people.

0

SnooPuppers1978 t1_j9me40y wrote

If we lose an important service because of the companies going out of business that seems like a reasonable argument.

0

Iwasahipsterbefore t1_j9mibkp wrote

If it's a service, and passing laws threatens to affect the quality of life of the American people, it should be nationalized and run as a public utility.

So no, really not a good argument.

2

MINIMAN10001 t1_j9mve8t wrote

I mean, nothing is more critical or life-endangering than healthcare, yet the entire US political system is strictly against enacting nationalized healthcare.

Literally a matter of life and death and the whole nation turns a blind eye.

1

Iwasahipsterbefore t1_j9n3599 wrote

No arguments from me. My state has very limited single-payer healthcare, and people always say it's the absolute best healthcare they've ever gotten, and that they miss it when they make too much to qualify. Which is basically just having a job. At all.

1

wbsgrepit t1_j9qeu15 wrote

What state is this -- there is not an active single-payer healthcare state in the USA as far as I know. Vermont passed a very neutered version of one in 2011, but it was disabled in 2014 because there was not enough power at the state level to force the cost savings, and the cost became untenable.

2

Iwasahipsterbefore t1_j9qn87l wrote

Oregon. We've got two versions essentially, one for poor people and one for old people. Both are absolutely fantastic, and the only problem with the poor one is the drop-off limit should be like, tripled.

2

wbsgrepit t1_j9qobgm wrote

Ahh, that's not really single payer -- those are state-funded Medicare/Medicaid plans. Similar in concept but not in scope or savings (single payer fully locks out other players and forces them to negotiate costs or lose market access).

1

Iwasahipsterbefore t1_j9qozlw wrote

We do actually have some legislation in that direction, but it's all on the level of financial incentives rather than a true lockout. The incentives are strong enough, and healthcare companies greedy enough, that everyone generally plays ball, though.

1

SnooPuppers1978 t1_j9ml09n wrote

What if nationalising it made it run much worse? Governments are usually not very innovative.

0

Iwasahipsterbefore t1_j9mm3v1 wrote

And what if unicorns ate rainbows?

See, I can do non sequiturs too.

2

SnooPuppers1978 t1_j9mz6en wrote

Usually nationalising something like that wouldn't work, because the incentives to innovate and compete aren't there for the government.

1

Iwasahipsterbefore t1_j9n2uvt wrote

Can you take a moment, read what you wrote, and actually fucking think about it for a second?

We're in this situation because the "incentives to innovate and compete" directly led to YouTube recommending ISIS training videos to people susceptible to wanting to join ISIS, because THAT MADE YOUTUBE THE MOST MONEY.

1

odinlubumeta t1_j9mii9b wrote

First, it's entertainment. How people can just publicly put entertainment over human lives is so odd to me.

Second, why can't they adapt? We don't know what the rules would be, but we have all these algorithms and machine learning and soon-to-be AI -- and these billion (soon to be trillion) dollar companies can't find a way to adapt?

And yes, it's a stupid argument if your point is that corporations that can't adapt shouldn't come to an end. Are they also too big to fail? Seriously, I want you to make an argument that a company shouldn't have to adapt to the laws, and that laws should instead be written around the biggest companies.

0

SnooPuppers1978 t1_j9mkw82 wrote

People put entertainment over human lives every single day. Every action you do is a trade off. Any time you spend on entertainment could be spent on helping saving lives.

I am just saying that it should be considered based on trade offs.

1

odinlubumeta t1_j9mos8f wrote

I am not sure I understand your point. You are saying that the lawmakers should consider entertainment value when writing the laws?

1

SnooPuppers1978 t1_j9myv2z wrote

Yes, but in general a whole umbrella of different values, since it's also about practicality and productivity: search, auto-recommenders, and other types of AI systems.

1

odinlubumeta t1_j9n3gye wrote

I never said to ignore everything except safety. I said you don't make laws based on keeping a few companies (that can't adapt) afloat.

2

SnooPuppers1978 t1_j9n5qc2 wrote

The issue is that no company could provide such a service if there is no protection for algorithmic content filtering or suggestions.

1

odinlubumeta t1_j9n9xlq wrote

I don’t believe that they couldn’t survive without their current algorithm. Google and Facebook were profitable well before they came up with their current algorithms. Advertisers aren’t just going to disappear. But let’s say they just couldn’t, then they absolutely should go away and let a new company that can figure out how to survive under whatever laws exist. If you have a void someone will find a way to profit off it. You don’t have a viable business if you can only survive with one set of laws. Laws have changed so many times since Americas founding. Adapt.

2

SnooPuppers1978 t1_j9ncx0u wrote

YouTube, for example, wouldn't be what it is now. It would affect a whole ecosystem of things -- people's livelihoods, content creators' discovery, and so on -- because so much depends on it. You wouldn't be able to have a personalised experience on YouTube, or anywhere with third-party content. Or on Reddit, for that matter.

I for one want to have personalised content.

I hate the times of curated content like TV was or otherwise. I want to view content on demand, created by anyone and what is relevant to me.

But pretty sure it is going to be ruled in Google's favour anyway because of the sheer impracticality.

1

odinlubumeta t1_j9nksk5 wrote

Again, you are arguing for things you like, or for your own needs, it seems. YouTube existed before it had an algorithm. You act as if this stuff can't exist without its very predatory ad algorithm. People would also adapt. It's a poor argument. Technologies will come along that don't currently exist and you will adapt to them, but giant corporations can't?

And you are also arguing we can't make new laws because content creators would have to either evolve or go away? You know, we once had a giant book industry. Most people who worked in it had to find new jobs. We certainly don't make laws to keep everything static.

I am sure it will go Google's way. They have a massive lobby and billions to spend. That's not the argument. The fact that your whole argument seems to be that you like where things are is a poor argument. The Southerners loved having slaves, and change was so hard for them that they literally went to war to keep things the way they lived. That wasn't a good argument then, and it isn't now. You don't make laws for selfish wants.

1

MINIMAN10001 t1_j9mlokb wrote

I believe the same standards the DMCA operates under should apply here. Follow the safe-harbor model: take action on infringing material you learn about, without being required to actively seek out malfeasance, and you keep your protection against liability for other people's unauthorized use of copyrighted content on your platform.
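As a hedged sketch of the notice-and-takedown flow described above (17 U.S.C. § 512 style; the class and method names here are invented, not any real platform's API), the platform hosts without pre-screening and acts only once notified:

```python
# Illustrative sketch of a DMCA-style safe-harbor flow: host without
# proactive screening, but remove material expeditiously on notice.
# All names are invented for this example.

class Platform:
    def __init__(self):
        self.content = {}    # content_id -> hosted material
        self.takedowns = []  # notices that were acted upon

    def upload(self, content_id, material):
        # No proactive screening is required to keep safe-harbor status.
        self.content[content_id] = material

    def receive_notice(self, content_id, claimant):
        # Liability protection hinges on acting once actual knowledge arrives.
        if content_id in self.content:
            del self.content[content_id]
            self.takedowns.append((content_id, claimant))
            return True
        return False

site = Platform()
site.upload("v1", "user upload")
site.receive_notice("v1", "rights holder")
print("v1" in site.content)  # False: removed after notice, not by active search
```

The design point is that liability turns on what the host does after learning of a problem, not on whether it hunted for problems in advance.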

1

cubenz t1_j9l0ydh wrote

No way the current court agrees with this, there's too much money being made.

I like the bookstore argument: if a bookstore was shown to be consistently promoting Mein Kampf over other worthy anti-Hitler books, to the extent of keeping those other books only in the store room and making customers go find them, should it get in trouble?

7

Nagi21 t1_j9p64l5 wrote

I mean normally you’d go to another bookstore. The wrinkle here is there’s not another YouTube.

1

nerdyitguy t1_j9n1u0q wrote

So then they only offer YouTube by subscription and verified sign-up, and you have to agree that if you see videos that offend you, that's your tough titties. Problem solved.

5

Mutiu2 t1_j9lnzhu wrote

The internet as we know it is mostly click bait. Much like this thread.

3

ThePhotoLife_ t1_j9lrn26 wrote

I’m for this if it means the algorithm gets destroyed

3

golighter144 t1_j9m3zo4 wrote

YouTube, especially the shorts, thinks I'm really REALLY into guns, ufc, Andrew Tate, Jordan Peterson, and whatever bs Dana White has to say. I don't look up any of these things. Hell if anything I only get on YouTube to decide if I want to buy a game or not.

It's fucking annoying and pops up no matter how many times I dislike them.

5

Mikanea t1_j9oexdb wrote

Don't think about it as likes or dislikes; think of it as engagement or non-engagement. Engagement drives ad revenue whether or not it's positive. If you comment to say how terrible a video is, you've spent more time on the page and likely watched or seen an ad somewhere. If you dislike a video, you're spending a little more time on the page and might see an ad. Plus your negative engagement, even clicking dislike, might lead other people to positively engage and thereby watch an ad.

Your best bet is to leave the page, scroll to the next short as quickly as possible, or leave YouTube entirely. This tells the algorithm that you're not engaging and won't be seeing ads at all. It's most effective to leave a video within the first 30 seconds and interact with as few things on the page as possible.
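The point that likes and dislikes count the same can be shown with a toy scoring function (the weights and field names are invented for the example; real ranking systems are far more elaborate):

```python
# Toy engagement score: every interaction counts toward ranking,
# positive or negative. Weights are invented for the example.

def engagement_score(stats):
    return (stats.get("likes", 0)
            + stats.get("dislikes", 0)       # a dislike is still engagement
            + 2 * stats.get("comments", 0)   # comments weighted higher
            + stats.get("watch_seconds", 0) / 60)

loved = {"likes": 100, "comments": 10, "watch_seconds": 6000}
hated = {"dislikes": 100, "comments": 10, "watch_seconds": 6000}
ignored = {"likes": 2, "watch_seconds": 60}

print(engagement_score(loved) == engagement_score(hated))   # True: outrage ranks equally
print(engagement_score(ignored) < engagement_score(loved))  # True: only leaving fast suppresses
```

Under a metric like this, the only signal that actually demotes a video is walking away from it.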

4

KINGMARKOXIV t1_j9lz5ty wrote

And this, kids, is what happens if you give control over the internet to, like, three monopolies.

3

afedyuki t1_j9lzlbb wrote

The internet is interfering with the ruling class's ability to indoctrinate and misinform. It also allows people a certain degree of free association, something else they don't like. So of course they are doing everything they can to make it useless. This is the 21st-century equivalent of burning books (their favorite pastime), that's all.

3

Strawbrawry t1_j9oke78 wrote

Sensationalism. From the court transcripts/recordings, the justices think the plaintiffs are lunatics with barely a case to bring, let alone one that would decide anything major and overarching. This sub is a straight burning garbage pile of autoposts.

3

chippewaChris t1_j9mjy4l wrote

That article is a bit dramatic…

“upend the modern internet economy, sparing no online business”

Pretty sure a mom-and-pop restaurant on Main Street should be fine as long as they don't let customers post comments on their website (which basically just has their hours and an outdated menu).

2

94746382926 t1_ja2a1zj wrote

That's not really an online business though. That's a brick and mortar with a website.

1

FuturologyBot t1_j9kffjr wrote

The following submission statement was provided by /u/dustofoblivion123:



Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/1194caa/google_case_at_supreme_court_risks_upending_the/j9kanx4/

1

Elegant_Pressure_984 t1_j9kylzj wrote

Yeah it would probably be better if they had to choose between promotion and passive hosting, because what we have now is not ideal.

1

Jnoper t1_j9lko9o wrote

I'm confused. Hasn't it already been established that pages hosting information are not responsible for the content unless an issue is brought to their attention, like flagged copyright infringement? I guess Google's algorithm is responsible for promoting it, but they didn't put the content there.

1

1015267 t1_j9mqpvq wrote

There’s more nuance. My understanding is this case revolves around the promotion of ISIS videos.

1

First-Translator966 t1_j9lp35m wrote

My take — if the platform isn’t acting as an editor with moderation, then they shouldn’t be held liable.

1

mb25sf t1_j9ncgfv wrote

Shitty ‘profits at any cost’ tech companies vs shitty ‘donations at any cost’ politicians.

I don’t trust anyone involved here to do what’s best for society.

1

DA-ZACHYZACHY t1_j9nmfcx wrote

Lol, for once the US is bringing in more stupid rules than the EU

1

novelexistence t1_j9ojlfm wrote

There is no risk. The court will rule in favor of the tech companies. I'm surprised they were even willing to hear this case; it's almost like they have an agenda to give more power to tech companies.

The case should not have been heard to begin with, given the frame of reference in which it's being presented.

1

Dryandrough t1_j9optds wrote

I'm glad to hear Google has already made the decision.

1

Mundane-Ad-5355 t1_j9lmnuh wrote

Awwwwww too bad. Now you got to spend those billions of dollars to clean up / monitor the shit show you have created.

0

Kalel2319 t1_j9mg3ma wrote

I say fuck em. I’d like to see us revert back to when we had to watch what we say.

This free speech argument is bullshit anyway.

“I think women should not have abortions” is very different than “here’s the names of people who we should [redacted] join me my brothers!”

0

Syllabub_Cool t1_j9oyltb wrote

Have Congress make the law so we aren't responsible. Then we'll knock it down as unconstitutional.

Sorry. I can't read any of that OP without me also hearing the "judgment" I just made.

0

HeadLeg5602 t1_j9kdixj wrote

Something needs to be done to rein in these social media giants before the world gets ripped apart

−2

override367 t1_j9klp0t wrote

Don't be an idiot, the supreme court is not the tool for that

you're posting on reddit for fuck's sake, if you don't think 230 should exist, stop using things that only exist because of it

6

gjallerhorn t1_j9kn6rl wrote

This is not the way to do that. This would force them to censor anything that could possibly be thought of as contributing to harm to anyone.

2

HeadLeg5602 t1_j9knj6g wrote

We’ve tried going through legislation but they refuse to do anything remotely constructive… Something needs to be done

0

gjallerhorn t1_j9kocdi wrote

This would give them more incentive to control what's being said online, though. Because they'd have to or get sued

3

billetea t1_j9m0les wrote

Short answer is yes. Whether a commons, a billboard, or a bulletin board, all of these are policed and laws apply. If you trip on a manhole in a town square and break a leg, the council is liable. The days of the internet being the wild west are over, and frankly, good. Humans don't need a space to express themselves -- we need a tool to build better lives, spread equality, opportunity, and beneficial connection. Not flat earthers, terrorists, and other lowlife losers and criminals.

−2

joey_diaz_wings t1_j9p2c07 wrote

We don't need ideological curation, censors, or astroturfed conversations that limit the ideas people can discuss.

It's ultimately beneficial to humanity if people can communicate freely and naturally rather than in the tiny bubble of commercial discourse so ads can be sold. Repeating only permitted ideas is infantile.

2

billetea t1_j9qc22j wrote

What absolute bullshit. We already have controls on what we discuss -- terrorism, paedophilia, etc. It's not ideological to remove those discussions, it's civilisation. The problem with unfiltered social media is that it leads to the congregation of idiots and evil.

Zuckerberg keeps calling it the town hall, but towns have idiots and people who are very sick in the head, and they now congregate.

Previously they were ostracised outsiders in their communities, but now, thanks to the internet, the one moron in each town who thought the world was flat congregates with the village idiots from a million other towns across the world, making their numbers promotable by the stupid algorithms behind social media. Net result: we have a rise in flat Earth theory. Bravo for our civilisation.

I'm all for the free discourse of ideas that advance human civilisation, but Joe and Bob down on the corner sharing their love of snuff porn or photos of little kids should be excised from our society. Otherwise, we are bound for idiocracy.

0

joey_diaz_wings t1_j9ufmhp wrote

We're already bound for idiocracy: look who breeds and who abstains. Demographics are destiny.

Idiots are already discussing idiotic topics when not consuming idiotic media. Accepting this, why not also allow intelligent people to discuss topics without imposing moderators who only permit opinions and ideas that can be accompanied by advertising revenue?

The idea of online "community standards" is absurd for large sites like Facebook. There is no community. People should be allowed to talk about topics of interest and organize however they like. We've even seen PayPal insist they are a "community" with standards that can impose financial penalties when people discuss ideas contrary to their baseless rules.

The crazies will do whatever they do. We should preserve some space for sane adults too where censors cannot intrude and silence us all to infantile norms.

2

stephruvy t1_j9mvmxq wrote

But will I still be able to download stl models for printing or get... Pirated ebooks for my college classes? 🤔

1

CarCaste t1_j9nv8h7 wrote

sounds fascist

0

billetea t1_j9o0me9 wrote

Haha. Do you even know what fascism is?

I'm saying it's the end of liability-free behaviour by major social media and online news entities. Like a normal business, they should not be able to promote lies, incite violence or terrorism, or host criminal material. That's being civilised. Even the Wild West had rules. It's not like they're a bunch of teenagers working out of their parents' basement anymore; they're bigger and vastly more powerful than companies that are heavily regulated.

I think we are all getting to the end of endless self-actualisation... even paedophiles are trying to normalise their behaviour as a sexual deviation, or compare it to something relatively normal like being gay rather than an evil act. That's because they can congregate and normalise online.

−1

Shineliketheson t1_j9o57xe wrote

So, who gets to decide what is true, what opinion matters, what theory is a conspiracy, etc? What you are proposing is what leads to fascism. Only a few decide all of the above.

4

billetea t1_j9o5nqk wrote

We have a functional legal system and a functional democracy. Who said anything about one person making the decision? We have worked out regulations and laws for centuries -- why so little faith now? I don't get how that somehow ends in fascism. How do you think we worked out property laws, libel laws, any law for that matter?

What we currently confront is a small group of extremely wealthy and powerful people who think they stand above our legal and political system. That needs to end. They are not above either, and their platforms need to be brought to account, to the legal system and to the people. Their argument that what they've created is somehow a uniquely separate ecosystem from the one in which the rest of us operate is bullshit -- it's elitism. It's why we had revolutions: to devolve power to the people, away from kings and others who held individual power over us.

1

snowbirdnerd t1_j9ky5g2 wrote

This needs to be a keep the internet free or we will pack the courts moment.

−3

Manning88 t1_j9kse5d wrote

Luddites making decisions about technology, what could possibly go wrong?

−5

ABobby077 t1_j9lqkut wrote

You may have been joking, but this is also the case with Congress, and why any technology laws that actually move ahead seem to be written by lobbyists for the social media companies. I just don't think the companies can have it both ways. There has to be some place where you have freedom of speech and expression without promoting and harboring dangerous criminals. They can't have their media used for nefarious and illegal things, or to promote terrorism or other violence, and bear no responsibility for what has happened. There has to be some moderation that can strike a logical, legal balance without clear censorship or the end of the internet.

2