Some people argue that bloggers have a responsibility to moderate hateful comments, but this abstraction often assumes that the blogger is an able-bodied, middle-to-upper-class, heterosexual, white, cis man who is not the target of the hateful comments. If the blogger is from a marginalized group, is she responsible for protecting her readers from hateful comments directed at her or her group?
When readers ask the blogger to moderate hateful comments, there seems to be an unquestioned assumption that if the hateful comments or trolls are not publicly visible, then they have ‘disappeared’. In most blog setups, however, the hateful comments go straight into the blogger’s Inbox and have to be processed along with her other e-mail.
Comment moderation requires time and energy. Reading a hateful comment closely enough to press the appropriate moderation button is more unpleasant and more time-consuming than skimming it and mentally skipping past it.
Moreover, banning trolls often intensifies their bigotry and redirects their bigoted (e.g., racist, sexist) personal attacks towards the blogger herself. For example, the only commenter I have banned so far goes by the names “goaler”, “Anonymous”, “brett weir”, and “jerky boy”; he is a White Canadian man living in Metropolitan Toronto, where I also live. Before I banned him, he at least tried to pretend he wasn’t racist. Now that he is silenced on my blog, I get racist comments in my Inbox calling me a “racist chink”; other comments containing the words “chink”, “sp**k”, and “sp*c”; and shameless declarations that white people are superior to people of other races. This White Canadian man also appears to have a predilection for fellatio, and has said, more than once, that he would perform sexual acts upon me.
(Dear Journalists: This is why bloggers from marginalized groups want to use pseudonyms. If I blogged under my real name, I would probably have quit blogging by now.)
Comment moderation is not wrong in all contexts. The comment moderation at Racialicious makes it a diverse community of voices from people of colour, rather than a space where the bloggers of colour merely fend off white detractors in the comments. However, comment moderation is not appropriate in all contexts, either.
An alternative to standard comment moderation is to crowdsource the moderation, an approach invented or at least popularized by Slashdot no later than 1999. Crowdsourced comment moderation requires less work from site administrators, and is common on technology news sites like Slashdot, Reddit, Digg, and Hacker News. However, crowdsourced moderation tends to reflect the culture of the site’s readers (i.e., groupthink), and tech news readers tend to boost the signals of sexist or misogynist comments.
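The core mechanism behind crowdsourced moderation can be sketched in a few lines: readers vote comments up or down, and comments scoring below a viewer-chosen threshold are hidden rather than deleted, so the blogger never has to read them. This is a hypothetical illustration only; the class and function names, the vote-clamping rule, and the default threshold are my assumptions, not any specific site’s implementation.

```python
from dataclasses import dataclass

@dataclass
class Comment:
    author: str
    text: str
    score: int = 0  # starts neutral; readers move it up or down

    def vote(self, delta: int) -> None:
        # Clamp each vote to the range [-1, +1] so no single reader
        # can swing a comment's score by more than one point per vote.
        self.score += max(-1, min(1, delta))

def visible_comments(comments, threshold=0):
    """Return only comments at or above the viewer's score threshold.

    Hidden comments still exist in the database -- nobody (least of all
    the blogger) is required to read or delete them, which is the
    labour-saving point of crowdsourcing the moderation.
    """
    return [c for c in comments if c.score >= threshold]

comments = [
    Comment("alice", "A thoughtful reply"),
    Comment("troll", "Hateful bile"),
]
comments[0].vote(+1)   # one reader upvotes alice
comments[1].vote(-1)   # two readers each downvote the troll
comments[1].vote(-1)

shown = visible_comments(comments, threshold=0)
# Only alice's comment remains visible at threshold 0.
```

Note the design choice: filtering happens per viewer at display time, much as Slashdot lets each reader set their own score threshold, so the community’s votes dampen hateful comments without any one person being forced to process them.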
Can comment moderation co-exist with the discouragement of groupthink? On blogs by members of marginalized groups, can bigoted comments be dampened without requiring the time, effort, and emotional resiliency of the blogger? What alternative comment moderation systems can you think of, assuming that you had the technology?
(The banned troll may contribute in the comments of this post.)
Creative Commons image by Kurt Löwenstein Educational Center International Team.