Submitted by dustofoblivion123 t3_1194caa in Futurology
g0ing_postal t1_j9m4sbe wrote
Reply to comment by seaburno in Google case at Supreme Court risks upending the internet as we know it by dustofoblivion123
Then the big problem is: how do you categorize the video? Content creators won't voluntarily categorize their content in a way that reduces its visibility. Text filtering only goes so far, and creators will find ways around it
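Just to illustrate what I mean by text filtering being easy to dodge, here's a toy Python sketch (the blocklist and video titles are made up for illustration):

```python
# Toy example: a naive keyword filter and how trivially it's evaded.
# The blocklist and sample titles are invented for illustration.
BLOCKLIST = {"graphic violence", "gore"}

def keyword_filter(title: str) -> bool:
    """Return True if the title trips the filter."""
    lowered = title.lower()
    return any(term in lowered for term in BLOCKLIST)

print(keyword_filter("Graphic violence compilation"))   # True  - caught
print(keyword_filter("Gr@phic v1olence compilation"))    # False - obfuscated spelling slips past
print(keyword_filter("You won't believe what happens"))  # False - vague title, nothing to match
```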
The only certain way to do it is manual content moderation. 500 hours of video are uploaded to YouTube every minute. That's a massive task. Anything else will let some videos get through
Maybe eventually we can train AI to do this, but currently we need people to do it. Let's say it takes 3 minutes to moderate 1 minute of video, to give moderators time to analyze, research, and take breaks
500 hrs/min × 60 min/hr × 24 hr/day = 720,000 hours of video uploaded per day
Multiply by 3 to get 2.16 million man-hours of moderation per day. At a standard 8-hour shift, that requires 270,000 full-time moderators to moderate just YouTube content
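Here's the same back-of-the-envelope math as a quick Python sketch, using the same assumptions (500 hours uploaded per minute, 3 minutes of review per minute of video, 8-hour shifts):

```python
# Back-of-the-envelope: moderator headcount needed to manually review all YouTube uploads.
UPLOAD_HOURS_PER_MINUTE = 500   # hours of video uploaded per real-time minute
REVIEW_RATIO = 3                # minutes of moderator time per minute of video
SHIFT_HOURS = 8                 # one full-time shift

hours_uploaded_per_day = UPLOAD_HOURS_PER_MINUTE * 60 * 24    # 720,000
review_hours_per_day = hours_uploaded_per_day * REVIEW_RATIO  # 2,160,000
moderators_needed = review_hours_per_day / SHIFT_HOURS        # 270,000

print(f"{hours_uploaded_per_day:,} hours uploaded per day")
print(f"{review_hours_per_day:,} moderator-hours per day")
print(f"{moderators_needed:,.0f} full-time moderators")
```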
That's an infeasible number. And that's not even factoring in how brutal content moderation work is
Even with moderation, you'll still have some videos slipping through
I agree that something needs to be done, but it must be understood that the sheer scale we're dealing with here means a lot of "common sense" solutions don't work
seaburno t1_j9mc2jz wrote
Should we, as the public, be paying for YouTube's private costs? It's my understanding that AI already does a lot of the categorization. It also isn't about being perfect, just good enough. It's my understanding that even with everything they do to keep YouTube free of porn, some still slips through, but it's taken down as soon as it's reported.
But the case isn't about categorizing the content; it's about how it is promoted and monetized by YouTube/Google and their algorithms. And then the ultimate issue of the case: is the algorithm promoting the complained-of content protected under Section 230, which was written to give safe harbor to companies that act in good faith to take down material that violates their terms of service?
takachi8 t1_j9mp1r9 wrote
As someone whose primary source of entertainment is YouTube, and who has been on YouTube a long time, I can say their video filter is not perfect in any sense. I have seen videos that should have been pulled down for violating their terms and conditions stay up for a long time. I have also seen "perfectly good" (for lack of a better word) videos get pulled down or straight-up demonetized for a variety of reasons that made zero sense but were flagged by their AI. Improper flagging causes content creators to lose money, which in turn hurts YouTube and its creators.
I have been on YouTube a long time, and everything that was ever recommended to me has been closely related to what I have watched or am actively watching. I would say their algorithm for recommending videos to a person who actually has an account with them is pretty spot on. The only time I've seen off-the-wall stuff is when I watch YouTube from a device I'm not logged into, or in incognito mode, and the same goes for advertisements. My question is: what are people looking up that causes YouTube to recommend this kind of stuff? Because I've never seen it on YouTube or in Google ads. Usually I find it on Reddit.
g0ing_postal t1_j9md1wp wrote
I'm not saying the public should pay for it. I'm just saying it would be a massive undertaking to categorize the videos. Porn, it seems to me, would be easier to detect automatically: there are specific known images that can be used to detect such content
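My rough understanding of how known-image detection works, as a sketch (the file paths and hash database here are placeholders, and real systems like PhotoDNA-style tools are far more robust, but the idea is the same: hash the frame, compare against known hashes):

```python
# Sketch: perceptual-hash matching of a video frame against known disallowed images.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Tiny average-hash: shrink to 8x8 grayscale, threshold each pixel on the mean."""
    pixels = list(Image.open(path).convert("L").resize((size, size)).getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

KNOWN_BAD_HASHES = {0x8F3C_0000_FF00_1234}  # placeholder values, not real hashes

def looks_like_known_content(frame_path: str, threshold: int = 5) -> bool:
    h = average_hash(frame_path)
    return any(hamming(h, bad) <= threshold for bad in KNOWN_BAD_HASHES)
```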
General content is more difficult because it's hard for AI to distinguish, say, a legitimate discussion of trans inclusion from transphobic hate speech disguised behind bad-faith arguments
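A toy example of why that's hard (the sentences are invented): a surface-level keyword or bag-of-words approach sees almost the same features in a good-faith discussion and a bad-faith framing of the same topic, so intent gives it very little to key on:

```python
# Toy illustration: bag-of-words features barely differ between good-faith
# discussion and bad-faith framing when the vocabulary is nearly identical.
from collections import Counter

good_faith = "a discussion of trans inclusion in sports and what the evidence says"
bad_faith  = "just asking questions about trans inclusion in sports and the evidence"

def bag_of_words(text: str) -> Counter:
    return Counter(text.lower().split())

overlap = bag_of_words(good_faith) & bag_of_words(bad_faith)
print(sum(overlap.values()), "shared tokens")
# Most tokens overlap, so a model keyed on surface vocabulary
# gets almost no signal about the intent behind the text.
```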
And in order to demonetize and not promote those videos, we need to first figure out which videos those are