13 Comments

I was a moderator on one of the StackExchange communities for a year or two. The StackOverflow/StackExchange management team had put a lot of thought into moderation, and set up a framework that seemed reasonably successful.

They'd realised that moderation effort had to scale with community size, so as each community grew, volunteer moderators were drawn from its most responsible and dedicated users. Moderators across all Stack communities shared a single online chat-room ("The Teachers' Lounge"), where they could support each other and seek advice from their peers and from staff members. And the staff members acted as supervisors, and as appeal judges when a moderator's judgement was questioned.

And crucially, there was quantitative support too. Any user could raise a flag to bring content to a moderator's attention; the handling moderator would mark the flag as helpful or unhelpful, and the user's running proportion of helpful flags, weighted towards more recent outcomes, was used to prioritise that user's future flags.
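
That mechanism is easy to picture in code. Below is a minimal sketch, assuming an exponentially decayed helpful/total ratio feeding a priority queue; the class names, the decay constant, and the neutral prior for new flaggers are my own illustrative assumptions, not Stack's actual implementation:

```python
import heapq
import itertools

DECAY = 0.9  # assumed: how quickly older flag outcomes fade from the ratio

class Flagger:
    """One user's recency-weighted proportion of helpful flags."""
    def __init__(self):
        self.helpful = 0.0  # decayed count of flags marked helpful
        self.total = 0.0    # decayed count of all flags handled

    def record(self, helpful: bool) -> None:
        # Fold one moderator verdict into both running counts.
        self.helpful = self.helpful * DECAY + (1.0 if helpful else 0.0)
        self.total = self.total * DECAY + 1.0

    @property
    def score(self) -> float:
        # New flaggers get a neutral 0.5 until they build a track record.
        return self.helpful / self.total if self.total else 0.5

class FlagQueue:
    """Moderators pop flags from historically reliable flaggers first."""
    def __init__(self):
        self._heap = []
        self._order = itertools.count()  # tie-breaker for equal scores

    def raise_flag(self, flagger: Flagger, content_id: str) -> None:
        # heapq is a min-heap, so negate the score to surface high scorers.
        heapq.heappush(self._heap, (-flagger.score, next(self._order), content_id, flagger))

    def handle_next(self, helpful: bool) -> str:
        _, _, content_id, flagger = heapq.heappop(self._heap)
        flagger.record(helpful)  # the verdict reprioritises this user's future flags
        return content_id
```

Decaying both counts means one bad flag from a long-accurate user barely moves their score, while a new user's score responds quickly, which matches the "weighted towards more recent outcomes" behaviour described above.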

Sounds like a pretty smart approach

One other thought here, Travis. I liken comments to shopping. If you go to a discount retailer looking for clothes, you find a lot of gems in a mass of stuff, but it takes hours and hours to sift through. If you're time-constrained, sifting through stuff is something you just can't do, so you shop somewhere you're likely to find good stuff in a very short period of time, even if it's more expensive to shop there. If you take the same thought to comments, then Twitter is like the discount retailer, but one with tons of stuff you don't want. The key for Musk should be to make the 'shopping' for replies beneficial enough for power users to keep them on the platform, because I think a small subset of active non-company users drives engagement. And it's those users who are least likely to get quality mentions.

Absolutely love this analogy, Ed!

Good post, Travis. That proposal for a nominal charge for new users is interesting. I hadn't heard that one before.

Twitter clearly needs community moderation (and therefore a communities feature) a la reddit, for a few important reasons:

- Community moderation is easier to scale: it puts less demand on platform employees and is therefore cheaper to run (not to say that community moderators shouldn't be compensated -- that is a missing piece that needs to be solved).

- Since community moderators are not employed by the platform, most moderation behavior isn't seen as a shadowy platform cabal asserting its will, but rather a real person who knows the community. It's also possible to dialog directly with these mods, which helps to explain the reasons behind moderation actions and to hear and adjudicate appeals. More dialog is better; a key failing of Twitter's approach is that moderation actions are faceless and unidirectional, with decisions adjudicated in secret and dictated by fiat.

- Segmenting users into communities allows maximum "free speech" within the bounds of those communities, without it leaking into everyone else's timeline. In the context of your "host's privilege" metaphor, communities are like different bars with different rules. People can hang out in the community that suits them. Some are more biker bar, and some are classy whisky tasting rooms with a dress code.

- Likewise, communities engaging in egregious behavior can be disbanded (like reddit did with "TheDonald" when that subreddit started to advocate political violence). This keeps patterns of bad behavior from snowballing, without needing to permaban swaths of individual users (you don't have to go home, but you can't stay here).

AI filtering of spam/bots plus strong community moderation is the way to go for scalable social platforms. The missing pieces as I see them are:

- Community mods should really be compensated; there needs to be a mechanism for that.

- There needs to be a transparent and well-governed process for how moderators are selected and cycled. One issue I've seen on reddit is that a few people who created a popular subreddit ages ago now wield crazy power and sometimes abuse it (looking at you, r/politics). There needs to be a way for the community to hire & fire mods according to their performance, such as a vote-of-no-confidence system and poll-based elections (a minimal sketch follows this list). This is also an important safeguard against "sleeper agent" mods, where a commercial interest seeks to install a favorable moderator to advantage discussion of its products and disadvantage discussion of a competitor's. In these cases there must be a mechanism to remove the offending moderator.

- There should be a platform-provided community reporting framework and toolset that gives ongoing transparency into moderation behavior over time, quantitatively and qualitatively (also sketched below). That way the community can see the what, how, and why behind moderation decisions on an ongoing basis, hopefully building trust between moderators and their communities. The barkeep can then distinguish between actions that were the result of platform rules (e.g. the patron isn't of drinking age and there's a law against serving them) and those driven by community norms (e.g. Frank was being a drunk creep and we suspended him for a few days to cool off, because he was scaring all the ladies away).
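
On the hire & fire point, the no-confidence mechanism could be as simple as the decision rule below. This is a hypothetical sketch; the quorum and supermajority thresholds are assumptions chosen for illustration, not any real platform's rules:

```python
# Hypothetical vote-of-no-confidence decision rule for removing a moderator.
QUORUM_FRACTION = 0.10   # assumed: at least 10% of eligible members must vote
REMOVAL_FRACTION = 0.66  # assumed: two-thirds of votes cast needed to remove

def no_confidence_result(votes_remove: int, votes_keep: int, eligible: int) -> str:
    """Decide a no-confidence vote against a moderator."""
    cast = votes_remove + votes_keep
    if cast < QUORUM_FRACTION * eligible:
        return "invalid: quorum not met"
    if votes_remove >= REMOVAL_FRACTION * cast:
        return "removed: community lost confidence"
    return "retained"

# e.g. 700 of 1000 votes to remove, in a community of 9000 eligible members:
# no_confidence_result(700, 300, 9000) -> "removed: community lost confidence"
```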
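
And for the reporting toolset, one minimal shape it could take: an append-only log where every action records whether it rests on a platform rule or a community norm, plus an aggregate report anyone in the community can pull. All class and field names here are assumptions for illustration:

```python
from collections import Counter
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModAction:
    moderator: str
    target: str
    action: str   # e.g. "remove_post", "suspend_3d"
    basis: str    # "platform_rule" or "community_norm"
    reason: str   # the public-facing "why"
    when: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class TransparencyLog:
    """Append-only record of moderation actions, with aggregate reporting."""
    def __init__(self):
        self._actions: list[ModAction] = []

    def record(self, action: ModAction) -> None:
        self._actions.append(action)

    def report(self) -> dict:
        # The quantitative view: who is acting, how often, and on what basis.
        return {
            "total_actions": len(self._actions),
            "by_moderator": Counter(a.moderator for a in self._actions),
            "by_basis": Counter(a.basis for a in self._actions),
            "by_action": Counter(a.action for a in self._actions),
        }

# e.g. the Frank case above: a community-norm suspension, not a legal requirement
log = TransparencyLog()
log.record(ModAction("barkeep", "frank", "suspend_3d", "community_norm",
                     "drunk and scaring patrons away; a few days to cool off"))
```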

You should send Elon an invoice for consulting services. This piece is better advice than he appears to be getting from his current entourage.

My price would simply be him not killing Twitter.

If the value of the platform is based on size/scale and the subsequent monetization of that base, then moderation needs to focus on protecting the interests of that base.

Some platforms gravitate towards trolling-type behavior because the base may want that. Other platforms focus on different types of interests. So very different user engagements happen, and moderation is tailored accordingly. The issue with Twitter is that those who were screaming 'censorship, and we want back in' are an outgroup that wants access to the eyes of the base but is simultaneously damaging to that base. By allowing them back in, we are seeing the chaos of change and push-back from the formerly content base.

So if the goal is to stabilize Twitter into what it was before, the 3 steps make a lot of sense. If the goal is to transform Twitter's base into something else, ignoring the 3 steps until the community stabilizes and then applying them works better. However, from a business perspective I don't think it is easy to rebuild broken trust (advertisers have no reason to trust), so the base now needs to stabilize into something else, with a monetization strategy designed around that.

I think Twitter is too damn levered up to seek a new audience, not to mention it’s not clear what that audience would be since the platform is already at scale.

I agree with what you note here.

Twitter shouldn't have looked to chase a new audience, because the churn risk of the base was going to be high in doing so. The only outcome would be a smaller base that maybe looks different. A better strategy would have been to better protect the existing base (moderation) and then find a way to incrementally monetize it. Once the base is 'set' at scale, it's too late to go on some journey to 'find itself'. The trajectory is likely down.

At this point, even if the majority of the base somehow decides to stay, the chaos is hurting the ad revenue. So even if they save the base, the revenue shortfall will continue. I'm not sure how Twitter saves itself.

Me either. They’ve done quite a bit of damage to the existing model, without any coherent plan for what’s next.

Comment deleted (Nov 14, 2022)

I think so too, I mean... it could be a very pro-engagement feature if executed well.
