Instagram has introduced "automatic reporting" and an "offensive comment filter," features that use artificial intelligence to block abusive content.
Built on machine learning, the features review photos and captions posted to the Instagram app and automatically report inappropriate content.
Reported posts are sent directly to Instagram's community moderators for review, and posts that violate the Instagram Community Guidelines are removed.
The "offensive comment filter" applies to the feed, Explore, and profile pages, and immediately blocks inappropriate language such as abusive comments.
The "automatic reporting" feature applies to Instagram Stories and Explore as well as to posts on private accounts, and is expected to help proactively prevent online bullying and harassment of teenagers.
Adam Mosseri, the new Head of Instagram, stressed that "there is no place for bullying on Instagram," adding that he is "proud to build on the commitment to making Instagram a kind and safe community."
The live broadcast comment filter added in this update currently supports only nine languages, including English, German, and French, with additional languages, including Korean, to follow.
Instagram said, "Although this update only focuses on photos and comments, we will be working to add protections for video, including IGTV."
Instagram and its users do benefit from the app’s ownership by Facebook, which invests tons in new artificial intelligence technologies. Now that AI could help keep Instagram more tolerable for humans. Today Instagram announced a new set of anti-cyberbullying features. Most importantly, it can now use machine learning to optically scan photos posted to the app to detect bullying and send the post to Instagram’s community moderators for review. That means harassers won’t be able to just scrawl out threatening or defamatory notes and then post a photo of them to bypass Instagram’s text filters for bullying.
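For illustration only, here is a minimal Python sketch of how a flag-and-review pipeline like this might be structured. The names `BullyingClassifier`, `ModerationQueue`, and the 0.8 threshold are assumptions for the sake of the example, not Instagram's actual components; the point is simply that the model flags posts while humans make the removal call.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Post:
    post_id: str
    image_bytes: bytes
    caption: str

@dataclass
class ModerationQueue:
    """Holds flagged posts for human moderators to review."""
    pending: List[Post] = field(default_factory=list)

    def enqueue(self, post: Post) -> None:
        self.pending.append(post)

class BullyingClassifier:
    """Stand-in for the ML model that scores photos and captions."""
    def score(self, post: Post) -> float:
        # A real system would run vision models plus text analysis on the
        # caption; this dummy score exists only to make the sketch runnable.
        return 0.0

def review_post(post: Post, clf: BullyingClassifier, queue: ModerationQueue,
                threshold: float = 0.8) -> None:
    # The classifier never removes content itself: posts scoring above the
    # threshold are routed to human moderators, who make the final decision.
    if clf.score(post) >= threshold:
        queue.enqueue(post)
```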
In his first blog post directly addressing Instagram users, the division’s newly appointed leader Adam Mosseri writes “There is no place for bullying on Instagram . . . As the new Head of Instagram, I’m proud to build on our commitment to making Instagram a kind and safe community for everyone.” The filter for photos and captions rolls out over the next few weeks.
Instagram launched text filtering for bullying in May, but that could have just pushed trolls to attack people through images. Now, its bullying classifier can identify harassment in photos including insults to a person’s character, appearance, well-being, or health. Instagram confirms the image filter will work in feed and Stories. “Although this update only focuses on photos, we will be working to add protections for video, including IGTV, very soon,” a spokesperson tells me.
Instagram users will see the “Hide Offensive Comments” setting defaulted on in their settings. They can also opt to manually list out words they want to filter out of their comments, and can choose to auto-filter the most commonly reported words. With text, it’s black and white so Instagram can just block keywords. With images, it won’t let the AI play executioner, and instead uses the filter to direct posts to human moderators who make the final call.
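As a rough illustration of that design split, the sketch below shows the text side only, using hypothetical `DEFAULT_BLOCKLIST` and `USER_BLOCKLIST` word sets and an `is_hidden` helper that are not Instagram's actual implementation. Keyword matching is deterministic enough to hide comments automatically, whereas image decisions, as described above, go to human moderators.

```python
import re
from typing import Iterable

# Hypothetical word lists: a built-in list plus words a user adds manually.
DEFAULT_BLOCKLIST = {"idiot", "loser"}
USER_BLOCKLIST = {"nerd"}

def is_hidden(comment: str, extra_words: Iterable[str] = ()) -> bool:
    """Return True if the comment contains any blocked keyword."""
    blocked = DEFAULT_BLOCKLIST | USER_BLOCKLIST | set(extra_words)
    tokens = re.findall(r"\w+", comment.lower())
    return any(token in blocked for token in tokens)

# Usage: matching comments are hidden automatically; everything else passes.
print(is_hidden("great photo!"))   # False
print(is_hidden("what a loser"))   # True
```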
Meanwhile, Instagram is expanding its proactive filter for bullying in comments from the feed, Explore, and profile to also protect Live broadcasts. It’s launching a “Kindness” camera effect in partnership with Maddie Ziegler, best known as the child dancer version of Sia from her music video “Chandelier”. The effect showers your image with hearts and positive comments in different languages while prompting you to tag a friend you care about. It’ll be visible in users’ camera effects tray if they follow Ziegler, or if they see a friend use it, they can try it themselves.
For Instagram to remain the favorite app of teens, it can’t let this vulnerable community be victimized. There’s been a lot of talk about Facebook interfering with Instagram after the photo app’s co-founders resigned. But the parent company’s massive engineering organization affords Instagram economies of scale that unlock tech like this bullying filter that an independent startup might not be able to develop.