YouTube Attempts to Curb Controversial Content
If you’ve ever signed up for a YouTube account, you’re probably familiar with the head-scratching video recommendations YouTube sometimes serves up. After mounting dissatisfaction with the recommendation algorithm, which produced too many near-identical recommendations and surfaced videos promoting misinformation, the YouTube team published a January 2019 post indicating that the algorithm is still a work in progress and announcing changes in response to recent feedback.
The updated algorithm is advertised as pulling recommendations from a wider range of topics than before: “on any given day, more than 200 million videos are recommended on the homepage alone.” Alongside this broadening, YouTube is working to keep videos out of its recommendations that violate the YouTube Community Guidelines or have the potential to misinform users, “…such as videos promoting a phony miracle cure for a serious illness, claiming the earth is flat, or making blatantly false claims about historical events like 9/11.” However, such videos will still be available for people to access, as YouTube believes in maintaining a balance between free speech and its responsibility to users. These videos will carry an “information pane,” displayed as an annotation that provides fact checks when users search certain terms or phrases.
The recommendation algorithm is built through a collaboration of human experts and a machine learning system developed with TensorFlow, an open source machine learning platform created by Google Brain, Google’s deep learning research team. The recommendation system consists of two neural networks. The first handles candidate generation: using collaborative filtering, it takes the user’s watch history as input and selects hundreds of potentially relevant videos. This network is trained on implicit feedback, namely which videos users actually watch; explicit signals like a thumbs up or thumbs down are far sparser and can be unreliable for unpopular videos. Videos the user watches through partner sites are also used to train the model. The second network ranks those hundreds of candidates, using “logistic regression to score each video and then A/B testing is continuously used for further improvement.” The score is based on the expected amount of time the user will spend watching the video; scoring on expected clicks instead could promote clickbait.
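To make the two-stage shape of this pipeline concrete, here is a minimal, purely illustrative Python sketch. It is not YouTube’s actual code: the embeddings, feature vectors, weights, and function names are all made up, the candidate stage stands in for the neural network’s nearest-neighbor lookup with a simple dot-product score, and the ranking stage applies a hand-set logistic-regression model where the real system learns its weights from watch-time data.

```python
import math

def dot(u, v):
    """Dot product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def candidate_generation(user_embedding, video_embeddings, k):
    """Stage 1 (hypothetical): score every video against an embedding of
    the user's watch history and keep the top-k as candidates. This stands
    in for the first neural network's collaborative-filtering retrieval."""
    scored = sorted(video_embeddings.items(),
                    key=lambda kv: dot(user_embedding, kv[1]),
                    reverse=True)
    return [video_id for video_id, _ in scored[:k]]

def rank(candidates, video_features, weights, bias):
    """Stage 2 (hypothetical): logistic regression over per-video features.
    In the real system the target is expected watch time, not clicks."""
    def score(video_id):
        logit = dot(weights, video_features[video_id]) + bias
        return 1.0 / (1.0 + math.exp(-logit))  # sigmoid
    return sorted(candidates, key=score, reverse=True)

# Toy data: 2-dimensional embeddings and features, invented for the demo.
user_embedding = [1.0, 0.0]
video_embeddings = {"a": [0.9, 0.1], "b": [0.1, 0.9],
                    "c": [0.8, 0.2], "d": [-0.5, 0.5]}
video_features = {"a": [0.2, 0.1], "c": [0.9, 0.5]}

candidates = candidate_generation(user_embedding, video_embeddings, k=2)
ranked = rank(candidates, video_features, weights=[1.0, 2.0], bias=0.0)
print(candidates)  # ['a', 'c'] — retrieval order by history similarity
print(ranked)      # ['c', 'a'] — the ranker reorders the candidates
```

The point of the split is scale: the retrieval stage cheaply narrows millions of videos to a few hundred, so the (much more expensive) feature-rich ranking model only runs on that short list.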
YouTube is arguably the internet’s biggest video platform by viewership, so a sizable share of the responsibility for not misleading the public falls on its shoulders. Despite pledging to fix its recommendations at the beginning of the year, the company was criticized as recently as this past February for failing to prevent its system from recommending videos that exploit children. Worse, some of those videos were being monetized by YouTube’s biggest advertising partners, such as McDonald’s and Disney. Struggles aside, it is encouraging to see YouTube take a first step toward combating misinformation and promoting fact checking. It will be interesting to see how YouTube and its partners fare in the upcoming presidential election, when the spread of misinformation will be a hotly debated topic.