Instagram Will Now Judge Posts That Can be Classified as Inappropriate, as it Adopts The Nanny Role
Artificially intelligent algorithms and machine learning may soon dictate morals, and perhaps more.

Facebook-owned Instagram is changing how posts surface in your recommendations and hashtag searches. The social network is reworking its algorithms to filter out posts that could be labeled “inappropriate” but may not actually break any rules or go against the Community Guidelines.

“We have begun reducing the spread of posts that are inappropriate but do not go against Instagram’s Community Guidelines, limiting those types of posts from being recommended on our Explore and hashtag pages,” says Instagram in an official post. But what sort of posts would these be?

Apparently, Instagram will judge the content of each post and decide whether or not it violates the Community Guidelines. If it doesn’t, but Instagram still doesn’t like the looks of it, the post will be classified as “inappropriate” and sent to sit on the naughty step. Instagram gives the example of a sexually suggestive post, which may be targeted under this new regime, where artificially intelligent algorithms and machine learning may end up dictating morals, and perhaps more.

Instagram says such a post will still appear in your feed if you follow the account that posted it. However, these posts will effectively be downranked: they may not appear on the Explore tab or on hashtag pages, or when a user searches for a specific hashtag.

Instagram hasn’t given a timeline for when these changes will be implemented, which could mean the tweaks are already active. It is perhaps a bit funny that while a post may not violate any actual policy or guideline, Instagram may still decide against showing it in your search results. The method of filtration isn’t exactly clear beyond the single example the social network illustrates. It will be interesting to understand the how and the why: the reasons for downranking a post, and what dictates those reasons.

At the moment, this seems to go against content democracy, which essentially means that if content isn’t breaking the Community Guidelines (in which case it is blocked, and rightly so), it should be given equal preference with every other piece of content on the platform. In a way, this points to a larger problem with the framing of the guidelines in the first place, but it may be easier to paper over the cracks with a strong morality pitch.
