Instagram will now reduce the visibility of ‘potentially harmful’ content

Instagram is taking new steps to make "potentially harmful" content less visible in its app. The company says the algorithm that determines how posts are ordered in users' feeds and Stories will now de-prioritize content that "may contain bullying, hate speech or may incite violence."

Although Instagram's policies already prohibit most of this type of content, the change could affect borderline posts, or content that hasn't yet reached the app's moderators. "To understand if something might violate our policies, we'll look at things like whether the caption is similar to a caption that previously violated our policies," the company explains in an update.
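Instagram hasn't said how that similarity check works under the hood. Purely as an illustration, a caption-similarity signal could be built by comparing text embeddings against embeddings of captions that previously broke the rules; everything in the sketch below (the cosine_similarity helper, looks_like_prior_violation, and the SIMILARITY_THRESHOLD value) is a hypothetical stand-in, not Instagram's actual system.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical threshold; Instagram has not published how it measures
# "similarity" between captions.
SIMILARITY_THRESHOLD = 0.85

def looks_like_prior_violation(caption_embedding: list[float],
                               violating_embeddings: list[list[float]]) -> bool:
    """Return True if a caption's embedding is close to any caption
    that previously violated the rules (illustrative sketch only)."""
    return any(cosine_similarity(caption_embedding, prior) >= SIMILARITY_THRESHOLD
               for prior in violating_embeddings)
```

In a sketch like this, a higher threshold makes the check more conservative, flagging only near-duplicates of past violations.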

Until now, Instagram has tried to hide potentially objectionable content from public parts of the app, such as Explore, but it hasn't changed how that content appears to users who follow the accounts posting it. The latest change means that posts considered "similar" to previously removed ones will be far less visible even to followers. A spokesperson for Meta confirmed that "potentially harmful" posts could still be removed entirely if they violate the community guidelines.

The update follows a similar change, when Instagram began down-ranking accounts that shared misinformation debunked by fact-checkers. However, unlike that change, Instagram says the latest policy will only affect individual posts, not "accounts in general."

In addition, Instagram says it will now take each individual user's reporting history into account when ordering their feed. "If our systems predict that you're likely to report a post based on your history of reporting content, we'll show the post lower in your Feed," Instagram says.
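Instagram hasn't explained how this per-user signal is weighed against the rest of its ranking. As a rough, hypothetical sketch: a post's regular relevance score could simply be scaled down by a model's predicted probability that the viewer would report it. The Post fields, REPORT_PENALTY weight, and rank_feed function below are made up for illustration and are not Instagram's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    base_score: float          # relevance score from the usual ranking signals
    report_probability: float  # predicted chance this user would report the post

# Hypothetical demotion weight; Instagram has not disclosed how strongly
# the predicted-report signal counts.
REPORT_PENALTY = 0.8

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order posts so that ones a user is predicted to report rank lower.

    Illustrative sketch only: the final score is the base relevance score
    scaled down in proportion to the predicted report probability.
    """
    def adjusted_score(post: Post) -> float:
        return post.base_score * (1.0 - REPORT_PENALTY * post.report_probability)

    return sorted(posts, key=adjusted_score, reverse=True)

if __name__ == "__main__":
    feed = [
        Post("a", base_score=0.90, report_probability=0.05),
        Post("b", base_score=0.95, report_probability=0.60),  # likely to be reported
        Post("c", base_score=0.70, report_probability=0.01),
    ]
    for post in rank_feed(feed):
        print(post.post_id)
```

In this toy example, post "b" ends up ranked last despite having the highest base score, because the high predicted report probability drags its adjusted score down.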
