Overview
Content moderation is an AI-powered tool that helps companies keep their intranet sites free of offensive feed posts and comments.
Note that as of now, history and analytics of content moderation are not available, but they will be added in an upcoming release.
Content moderation must be enabled for your org by Simpplr. Reach out to your account representative to have this feature enabled.
Content moderation engine
Simpplr's content moderation engine is an AI-powered algorithm that runs on every feed post, comment, and reply to screen for objectionable material. It's built to detect these types of objectionable content:
| Content type | Flagged in Simpplr as |
| --- | --- |
| hate | hateful content |
| harassment | harassment |
| hate/threatening | a threat |
| harassment/threatening | a threat |
| self-harm | self-harm related content |
| self-harm/intent | self-harm related content |
| self-harm/instructions | self-harm related content |
| sexual | sexually explicit content |
| sexual/minors | sexually explicit content |
| violence | violent content |
| violence/graphic | violent content |
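The table above is a straightforward lookup from engine category to the label shown in Simpplr. As an illustration only (the dictionary and function names below are hypothetical, not Simpplr's actual implementation), it can be sketched as:

```python
# Illustrative mapping of engine category keys to the labels Simpplr displays.
# Category names come from the table above; the code itself is a sketch.
FLAG_LABELS = {
    "hate": "hateful content",
    "harassment": "harassment",
    "hate/threatening": "a threat",
    "harassment/threatening": "a threat",
    "self-harm": "self-harm related content",
    "self-harm/intent": "self-harm related content",
    "self-harm/instructions": "self-harm related content",
    "sexual": "sexually explicit content",
    "sexual/minors": "sexually explicit content",
    "violence": "violent content",
    "violence/graphic": "violent content",
}

def flag_label(category: str) -> str:
    """Return the user-facing label for a flagged category."""
    return FLAG_LABELS[category]
```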
Note:
Content moderation applies only to feed posts, comments, and replies. It does not apply to content (pages, events, and albums), editable profile fields, or the Q&A feature.
How content moderation works
When a feed post (home or site), comment, or reply is submitted, it runs through the content moderation engine. If the engine doesn't flag anything, the content is posted as normal. If something is flagged, the poster is notified, told the reason for the flag, and given the option to either edit the post or post it anyway. Content posted unedited is sent to the moderation queue, where a content moderator decides whether to keep or hide it. Users can also report feed posts and comments as offensive and give a reason for the report; these reports go to the content moderator as well, who decides whether to keep or hide the content.
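The flow above can be sketched as a small state machine. Everything here is a hypothetical stand-in (the `check` keyword filter, the `Feed` class, and its method names are invented for illustration), not Simpplr's actual code:

```python
from dataclasses import dataclass, field

def check(text: str) -> list[str]:
    """Toy stand-in for the moderation engine: flag a couple of keywords."""
    return [w for w in ("hate", "violence") if w in text.lower()]

@dataclass
class Feed:
    posts: list = field(default_factory=list)   # published content
    queue: list = field(default_factory=list)   # moderation queue

    def submit(self, text: str, post_anyway: bool = False) -> str:
        flags = check(text)
        if not flags:                    # nothing flagged: post as normal
            self.posts.append(text)
            return "published"
        if not post_anyway:              # poster is notified and may edit
            return f"flagged: {flags}"
        self.posts.append(text)          # posted unedited...
        self.queue.append((text, flags)) # ...and queued for a moderator
        return "published-and-queued"
```

A moderator would then work through `queue`, keeping or hiding each entry.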
Enable content moderation
By default, the engine is turned off. To enable content moderation, go to Manage > Application > Setup > Privileges > Content moderation and click Use content moderation. Content moderators can be added here. All admin users have the ability to manage content moderation.
Note that as of now, content moderation only scans roughly the first 500 words (about one page of text) of any given post.
Content moderation queue
Once content moderation is enabled for your organization, content moderators can view their queue by going to your User menu > Content moderation and opening the Queue tab. App managers go to Manage > Content moderation. The latest reported content appears first in the queue.
Click Remove comment to remove a comment. If removed, a comment will remain visible to moderators in the analytics section of Content Moderation.
Note that content moderators can see all moderated activity across public, private, and unlisted sites within the queue. However, if they're not already a member of a site, they won't see any information beyond the literal content written; they cannot go to that site or make any changes.
Notifications
All content moderators receive actionable in-app notifications for any reported content. These notifications cannot be disabled.
Report inappropriate content
Users can also report inappropriate content. When they do, a modal opens prompting for a reason for the report, and the content is added to the moderators' queue.
Languages
Currently, content moderation supports any content written in the following languages:
- US English
- Spanish
- Danish
- German
- French
- Portuguese
- Italian
Note that a current known issue requires users' profile language to be set to US English for content moderation to flag content. If a user's profile language is set to anything else, content moderation will not flag content in that language. Our team is working on a fix for this.