Bringing the Kid back into YouTube Kids: Detecting Inappropriate Content on Video Streaming Platforms
With the advent of child-centric content-sharing platforms such as YouTube Kids, thousands of children across all age groups consume gigabytes of video content on a daily basis. With PBS Kids, Disney Jr., and countless others joining the fray, this consumption stands to grow further in both quantity and diversity. However, content unsuitable for children has increasingly been observed slipping through the cracks and landing on such platforms. To investigate this phenomenon in more detail, we collect a first-of-its-kind dataset of inappropriate videos hosted on such child-focused apps and platforms. Alarmingly, our study finds that a noticeable percentage of such videos is currently being watched by kids, with some inappropriate videos already having millions of views. To address this problem, we develop a deep learning architecture that flags such videos and reports them. Our results show that the proposed system can be successfully applied to various types of animation, cartoon, and CGI videos to detect inappropriate content within them.
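The abstract does not specify the architecture itself; the sketch below is only an illustration of one common way such a video-level flagger could be structured, a per-frame CNN backbone (a ResNet-18 is assumed here) with temporal mean pooling and a binary classification head. All module names, hyperparameters, and design choices in this snippet are assumptions for exposition, not the paper's actual model.

```python
# Illustrative sketch only (not the paper's architecture): sampled video frames are
# embedded by a per-frame CNN, mean-pooled over time, and classified as
# appropriate (0) vs. inappropriate (1). All names here are assumptions.
import torch
import torch.nn as nn
from torchvision import models


class FrameLevelVideoClassifier(nn.Module):
    """Classifies a clip of sampled frames as appropriate or inappropriate."""

    def __init__(self, embed_dim: int = 512):
        super().__init__()
        # Per-frame feature extractor: ResNet-18 with its final classifier removed.
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()
        self.backbone = backbone
        # Binary head over the clip-level (mean-pooled) embedding.
        self.head = nn.Sequential(
            nn.Linear(embed_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 2),
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, num_frames, 3, H, W); flatten frames for the CNN.
        b, t, c, h, w = frames.shape
        feats = self.backbone(frames.view(b * t, c, h, w))  # (b*t, embed_dim)
        clip_embedding = feats.view(b, t, -1).mean(dim=1)   # temporal mean pooling
        return self.head(clip_embedding)                    # (b, 2) class logits


if __name__ == "__main__":
    model = FrameLevelVideoClassifier()
    dummy_clip = torch.randn(2, 8, 3, 224, 224)  # 2 clips, 8 sampled frames each
    print(model(dummy_clip).shape)               # torch.Size([2, 2])
```

In practice, a flagging pipeline of this kind would sample frames from an uploaded video, run them through such a classifier, and report videos whose predicted probability of being inappropriate exceeds a chosen threshold; the specific backbone, pooling scheme, and threshold above are placeholders.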