A Minute from the Moderators

About Hachyderm’s Moderation and Infrastructure Team Structures

Hello Hachydermia! It’s time for, you guessed it, the monthly Moderator Minute! Recently, our founder and former admin stepped down. As sad as we are to see her go, this does provide an excellent segue into the topics we wanted to cover this month!

The big question: will Hachyderm be staying online?

Yes!

At Hachyderm we have been scaling our Moderation and Infrastructure teams since the Twitter Migration started to land in November 2022. Everyone on both teams is a volunteer, so we intentionally oversized our teams to accommodate high fluctuations in availability. Each team has a team lead and 4 to 10 active members at any given time, meaning Hachyderm is a roughly 20-person org. What this means for the moderation team specifically is the topic of today’s blog post!

Moderation on a large instance

Large instances like ours are mostly powered by humans and processes. And computers, of course. But mostly humans.

And a large instance needs quite a few humans. In our case, most of our current mods volunteered as part of our Call for Volunteers back in December 2022.

How moderators are selected

Moderators must first and foremost be aligned with the ethos of our server. That means they must agree with our stances against racism, white supremacy, homophobia, transphobia, and so on. Beyond that baseline, moderators are also chosen for:

  • Their lived experiences and demographics
  • Their experience with community moderation

For demographics: it is important to ensure that a wide variety of lived experiences are represented on the moderation team. These collective experiences and voices allow us to discuss, build, and enforce the policies that govern our server. Having moderators from multiple backgrounds, including race, gender, orientation, country of origin, and language, helps ensure that multiple perspectives contribute.

For prior experience: we also built an intentional mix of experience levels into the moderation team. Having experienced mods means we have the bandwidth to onboard new, less experienced moderators. Being able to lay the groundwork for mentorship and onboarding is crucial for a self-sustaining organization.

Moderation onboarding and continuous improvement

Before moderators can begin acting in their full capacity, they must:

  • Agree to the Moderator Covenant
  • Be trained on our policies
    • This includes the server rules, account policies, etc.
  • Practice on inbound tickets
    • Practice tickets receive feedback from the Head Moderator and the group

The first two points go hand-in-hand. Our server rules outline the allowed and disallowed actions on our server. Our Moderator Covenant governs how we interpret and enforce those rules.

When practicing, new moderators are expected to write their analysis of each ticket: how they understand the situation, what action(s) they would or would not take in the given scenario, and why they are making that recommendation. For the first group of moderators, this portion of the process was intended to last a week, but it worked out so well that we have kept it. This means it is common for multiple moderators to see an individual ticket and discuss it asynchronously before taking action. This informal, consensus-style review lets the team keep learning from each individual’s experiences and expertise.

When moderators make a mistake

We do our best, but we are human and thus prone to mistakes. Whenever a mistake happens, we:

  • May follow up with the impacted user or users
  • Will review the policy that led to the error

When we make mistakes, we will always do our best by the user(s) directly impacted. This means that we will take ownership of our mistakes, apologize to the impacted user(s) if doing so will not cause further harm, and review any relevant policies to help ensure the same mistake doesn’t happen again.

Harm prevention and mitigation on a large instance

How we handle moderation reports is driven by our firm stance that moderation reports are harm already done. (We mentioned this stance in our recent postmortem as well.) Essentially, this means that if someone has filed a report because they experienced harm, we treat that harm as real and already done.

How we determine what to do next depends on several factors, including the scope and severity of what has been done. It also depends on the source of the harm (local vs remote).

Using research as a tool for harm prevention

To state it up front: in cases of egregious harm that originates on our server, the user responsible is suspended from our server.

Reports of abuse originating from one of our own users are exceedingly rare. More commonly, reports of egregious harm involve remote sources. The worst cases are what you’d likely expect, and suspending federation with those instances is an easy decision.

The Hachyderm Moderation team also does a lot of proactive research into the origins of abusive behavior. The goal of this research is to minimize, and perhaps one day fully prevent, abusive instances’ ability to interact with our instance either directly or indirectly. To achieve this, we research not only the instances that are the sources of abusive behavior and content, but also those actively federating with them. To be clear: active here means active participation. We take this research very seriously and do it continuously.

Nurturing safe spaces by requiring active participation in moderation

The previous section focused on how we handle the worst offenders. What about everyone else? What about when someone Well Actuallys, doesn’t listen, or doesn’t respond appropriately when a boundary has been set? (For information about setting and maintaining boundaries, please see our Mental Health doc.)

When the behavior being reported is not an egregious source of harm, the Hachyderm Moderation team makes heavy use of Mastodon’s freeze feature. We send the user a message that details what they were reported for, quoting their posts as needed, and we use the freeze to tell them what we need from them to restore normal activity on their account. To keep moderation issues from dragging on, users must respond within a given time frame and then perform the required action(s) within a given time frame.

Which actions we require depends on the situation. Most commonly, we request that users delete their own posts. We do this because we want to nurture a community where individuals are aware of, and accountable for, their actions. When moderators simply delete posts, the person who made the post is not required to give the situation even a second thought.
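As a purely illustrative aside, and not a description of our actual tooling or workflow: Mastodon exposes its account moderation actions through an admin API, and the sketch below shows how a freeze with an accompanying explanation might be applied programmatically. The instance URL, token, account ID, and message text are placeholders, and mapping the moderation interface’s “Freeze” to the API’s “disable” action type is our assumption here.

```python
# Illustrative sketch only: applying a "freeze" with an explanatory message via
# Mastodon's admin API. Assumes an admin access token with the
# admin:write:accounts scope; all values below are placeholders.
import requests

INSTANCE = "https://mastodon.example"  # placeholder instance URL
ADMIN_TOKEN = "REPLACE_ME"             # placeholder admin access token
ACCOUNT_ID = "REPLACE_ME"              # placeholder ID of the reported account

message = (
    "You were reported for <summary of the report>. "
    "Please delete the posts listed below and file an appeal within the "
    "stated time frame to restore normal activity on your account."
)

# POST /api/v1/admin/accounts/:id/action performs a moderation action against
# an account; "disable" is the action type we assume corresponds to the
# "Freeze" button in the moderation interface.
resp = requests.post(
    f"{INSTANCE}/api/v1/admin/accounts/{ACCOUNT_ID}/action",
    headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
    data={
        "type": "disable",               # freeze the account
        "text": message,                 # explanation delivered to the user
        "send_email_notification": "true",
    },
)
resp.raise_for_status()
```

In day-to-day moderation the same action is available directly in the moderation interface; the sketch is only meant to make the freeze-plus-message pattern described above concrete.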

Occasionally, we may nudge the reported user a little further. In these cases, we include some introductory information in our message and also request that the person do a brief search on the topic as a condition of reinstating the account. The request usually looks a bit like this:

We ask that you do a light search on these topics. Only 5-15 min is fine.

The goal of this request is that we can all help make our community a safer place just by taking small steps to increase our awareness of others in our shared space.

If you agree to the above by filing an appeal, we will unfreeze your account. If you have further questions, please reach out to us at admin@hachyderm.io.

– The Hachyderm Mods

We ask for such a brief search because we do not expect anyone to become an expert overnight. We do expect that all of us take small steps together to learn and grow.

As a point of clarification: we only engage in the freeze and restore pattern if the reported situation does not warrant a more immediate and severe action, such as suspending the user from Hachyderm.

And that’s it for this month’s Moderator Minute! Please feel free to ask the moderation team any questions about the above via Hachyderm’s Hachyderm account, email, or our Community Issues. We’ll see you next month! ❤️