YouTube demonetization algorithm leaks: Google’s artificial intelligence is artificially biased

Recently leaked screenshots show the questionnaires YouTube uses to subjectively determine which videos are acceptable for monetization, and consequently, which videos appear higher in search results and on trending lists.  After all, a video without ads makes no money, so demonetized videos are generally suppressed by the search and trending algorithms, as they represent negative revenue to YouTube.

The most incriminating section of the demonetization survey is shown below:

The problem with this overreaching “controversial” category is that it is subject to the whims and biases of the reviewer.  In fact, the survey itself encourages that bias: it persuades the reviewer to adopt the same postmodern identitarian perspective as the survey’s creator(s).

Despite its best efforts to masquerade as fair by saying “please don’t use personal biases,” the survey pushes for political bias on contentious topics. It assumes the postmodern stance on these issues is correct, thereby justifying the eradication of differing thoughts and opinions.

The language makes the intent obvious, as it includes abstract qualifiers devoid of meaning like “problematic” and “creepy.” How can a reviewer truly know the intent of the uploader, or whether their benchmark for a “respectful” presentation is universally shared?


How to prevent your YouTube videos from being demonetized


Do you want to stop having your videos demonetized right after uploading?

Think about the Lindsay Shepherd case, in which an open debate on a contentious topic was struck down by the postmodern kangaroo courts that rule over academia.  Then consider that the same class of authoritarians controls this survey, and that their biased input is fed into YouTube’s machine learning algorithms.

Consider how Google implements the majority of its AI.  Let’s look at two examples: ReCaptcha and Google Maps.

Human input is the cornerstone of Google’s AI.  Without a massive user base constantly feeding its algorithms, Google’s AI couldn’t tell the difference between a car and a street sign, and its maps couldn’t give you the fastest route:

If all ReCaptcha users in unison started identifying animals as street signs instead of actual street signs, Google’s AI would learn that animals are street signs.

If all Android users in unison started driving at 10% of the speed limit on highways all the time, Maps would always think the scenic side-road route was the fastest.
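The Maps example comes down to simple arithmetic over crowd-reported speeds. Here is a minimal sketch of the idea, not Google’s actual routing code; all function names, distances, and speeds are assumptions for illustration:

```python
# Hypothetical sketch: a router that picks the route with the lowest
# travel-time estimate, where speeds come from user-reported driving data.

def travel_time_minutes(distance_km, reported_speed_kmh):
    """Estimated travel time from crowd-reported average speed."""
    return distance_km / reported_speed_kmh * 60

def fastest_route(routes):
    """Pick the route with the lowest crowd-derived time estimate."""
    return min(routes, key=lambda r: travel_time_minutes(r["km"], r["speed"]))

# Normal conditions: highway drivers report speeds near the 100 km/h limit.
highway = {"name": "highway", "km": 30, "speed": 100}      # ~18 min
side_road = {"name": "side road", "km": 20, "speed": 50}   # ~24 min
print(fastest_route([highway, side_road])["name"])          # highway wins

# If every highway driver reported 10% of the limit, the estimate flips.
highway_biased = dict(highway, speed=10)                    # ~180 min
print(fastest_route([highway_biased, side_road])["name"])   # side road wins
```

The algorithm itself is neutral; only the crowd’s input changes, and the output changes with it.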

In other words, if all users of these learning algorithms started to lean in one direction, they would teach the AI to think along the same direction.  Much of Google’s AI doesn’t evolve from first principles — it is perpetually subject to argumentum ad populum.
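The same dynamic applies to crowd-sourced labeling generally: the “ground truth” fed to a learning algorithm is often just the majority opinion of the labelers, so a systematic lean in the crowd becomes the model’s lean. A toy sketch, with all names and numbers being illustrative assumptions rather than any real labeling pipeline:

```python
from collections import Counter

def aggregate_label(votes):
    """Majority vote over labeler opinions; the winner becomes 'truth'."""
    return Counter(votes).most_common(1)[0][0]

def build_training_labels(items):
    """Turn raw crowd votes into the labels a model would be trained on."""
    return {item: aggregate_label(votes) for item, votes in items.items()}

# A mostly honest crowd produces a sensible training label...
honest = {"stop_sign.jpg": ["sign", "sign", "sign", "animal"]}
print(build_training_labels(honest))    # {'stop_sign.jpg': 'sign'}

# ...but if the crowd leans in unison, the lean becomes ground truth,
# and anything trained on these labels inherits it.
leaning = {"stop_sign.jpg": ["animal", "animal", "animal", "sign"]}
print(build_training_labels(leaning))   # {'stop_sign.jpg': 'animal'}
```

Nothing in the aggregation step can tell a biased consensus apart from a correct one; it simply counts votes.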

The obvious problem with YouTube is that politically-charged postmodern identitarians are leading the evolution of its demonetization algorithm.  If you know how the AI thinks, which is equivalent to how the radical left thinks, then you can defeat the algorithm:

*     *     *

Leave a comment and share if you enjoyed this article.  This blog talks a lot about the deterioration of open dialogue and other disturbing trends in technology and society.  Keep up to date by subscribing to the RSS feed.
