Hachyderm Community Documentation

Welcome to Hachyderm!

Here we are trying to build a curated network of respectful professionals in the tech industry around the globe. We welcome anyone who follows the rules and needs a safe home or fresh start.

We are hackers, professionals, enthusiasts, and are passionate about life, respect, and digital freedom. We believe in peace and balance.

Safe space. Tech Industry. Economics. OSINT. News. Rust. Linux. Aurae. Kubernetes. Go. C. Infrastructure. Security. Black Lives Matter. LGBTQIA+. Pets. Hobbies.

Quick FAQs to Get Started

How do you pronounce “Hachyderm”?

Pronounced hack-a-derm like pachyderm. (Audio link.)

(What is a pachyderm?)

Running a service is expensive. How can I donate?

There are a few ways to support us, including through Hachyderm’s GitHub Sponsors page. For additional ways to support, including supporting via Nivenly, please take a look at our Funding and Thank You doc.

Is Hachyderm down or just me? Or: I think I’m having a service issue.

To see if Hachyderm itself is up and running, please look at:

To report a full or partial outage, or an issue with a particular feature not working as expected, please use our Community Issues on GitHub.

Where are all those great incident reports I keep hearing about?

On our blog.

Also: The famous “The Queues ☃️ down in Queueville” Incident Report

I’m new to Mastodon in general. Help?

If you’re new to Mastodon in general, we recommend taking a look at Mastodon.Help’s brief explainer about what Mastodon is and isn’t, as well as their How-To guides.

As for How to be a Hachydermian, take a look at the docs in our Hachyderm section.

Can I create a non-user account?

Accounts that are not for an individual person’s personal use are referred to as “Specialized Accounts” and are categorized as corporate, bots, curated, OSS Project, Community Event, and Influencer Accounts. Some accounts are restricted and/or invite-only and many have rules governing how they can interact with the platform. Unrecognized account types are suspended. Please read our Account Types documentation for more information.

I am the mod / admin of another instance and need to contact the Hachyderm mods / admins.

Please email us at

I have an issue that requires the moderation team, please help.

If the issue is regarding a user or post, please use the report feature in Mastodon. If you need to reach the mods another way, please look at our Reporting and Communication page.

I have been moderated and would like to know what to do.

Please take a look at our Moderation Actions and Appeals page.

I would like to know more about how Hachyderm is moderated, so that I can determine if Hachyderm is a safe space for me.

Please start by looking at both our Rule Explainer and our Moderator Covenant. The main rule of Hachyderm is “Don’t Be A Dick”, and the remaining rules explicitly call out common “whataboutisms”. The Rule Explainer dives into these different facets to provide additional clarity. The Moderator Covenant shows how the rules are enforced and what guides our decisions.

Our other rules govern the account types permitted on Hachyderm, how often those accounts can post, and how they can interact on the platform.

Our logo was designed by the lovely Ashton Rainbows. (website, Instagram)

Thank you Ashton!

1 - Funding Hachyderm

How to fund Hachyderm and our sponsor list.

🎉 Thank you to all our donors and sponsors! 🎉

Hachyderm is primarily funded by individual Hachydermians either directly to the Hachyderm project or to Hachyderm’s parent org, the Nivenly Foundation. Although corporate accounts assist with paying for Hachyderm’s infrastructure costs, they are not the primary source of funding for Hachyderm.

Note that we moved off Kris Nóva’s Ko-Fi as of March 2023. As we start receiving other sponsorship sources we will post updates here.

How to donate

One fast, easy way to donate to help finance Hachyderm is directly via Hachyderm’s GitHub Sponsors page:


Hachyderm’s GitHub sponsors page has a few benefits, including:

  • Sponsor icon on your GitHub profile
  • (Optional) A shout out on our #ThankYouThursday on Hachyderm’s Hachyderm account starting in April.
  • (Optional) Added to our Thank You list at the bottom of this page starting in April.

For both the shoutouts and Thank You list: we will use your GitHub username by default. If you would like this changed please either submit a PR or email us at

Donation Options

The three ways to support Hachyderm are:

  • Donating directly to the Hachyderm project
  • Donating to Hachyderm’s parent organization, the Nivenly Foundation
  • Purchasing swag

Regular Donations

(… and swag)

Please visit our swag store if you’re looking to update your awesome assortment of shirts, mugs, stickers, and so on:

The Nivenly Foundation and Membership

The Nivenly Foundation is the parent organization for Hachyderm. The non-profit co-op itself is being founded over the course of 2023. Currently, Nivenly is a recognized non-profit in the State of Washington, with upcoming milestones to be completed with the IRS. Please check out Nivenly’s webpage and Hachyderm account for updates as we reach different milestones.

As relevant to Hachyderm: since the Nivenly Foundation funds Hachyderm and other projects, Nivenly donations and memberships also support Hachyderm. The main difference between these paths is that Nivenly members take part in member elections, while non-member sponsors and donors do not. Regular donations do not count as or toward memberships, but you can change your preference from donor to member at any time.

General Nivenly Membership can be purchased through Nivenly’s new Open Collective page. If you are looking to join Nivenly as a project or trade member, you must email

Kris Nóva’s Ko-Fi

Originally, Hachyderm was primarily funded through donations to Kris Nóva’s Ko-Fi. Although her Ko-Fi page is still active for her Twitch stream, please donate to Hachyderm using one of the paths above.

Thank you everyone!

Our first set of Thank Yous will be added here in April, one month after our March 2023 release. Updates afterward will be quarterly in June, September, and December 2023. We will primarily pull from our public GitHub Sponsors, as we treat private GitHub Sponsors, Ko-fi, and Stripe donations as private. If you have sent us a donation via one of these, do not want it to be private, and do want to appear on this page, please contact us at

The amount each donor donated will not be listed on this page; name/handle only.

2 - Welcome

New to Hachyderm and Mastodon in general? These docs are for you! For deeper documentation about Mastodon features, please refer to the Mastodon section.

Hello Hachydermians

Hachydermians new and old, and recent migrants from other instances and platforms: hello and welcome!

This section is devoted to materials that help Hachydermians interact with each other and on the platform.

Quick FAQs for this section

I’m new to Mastodon and the Fediverse - help?

The two largest external resources we’d recommend are FediTips and Mastodon Help. FediTips is great for iterating as you become more and more familiar with engaging on the Fediverse. Mastodon Help is a great “101 guide” for just getting started.

I’m new to Hachyderm - help?

First of all: welcome! We encourage new users to post an introduction if and when they feel comfortable and ready. This can also help you engage with the community here, as people follow that hashtag.

Accessible content is important on the Fediverse. The quick asks of our server are outlined near the top of our Accessible Posting doc. The doc itself dives into those asks with more nuance, so the page is quite long. We recommend reading and understanding just the asks when you’re getting started, then coming back to dive into the deeper explanations when you’re ready.

Beyond that, you will need to understand our server rules and permitted account types (if you’re looking to create a non-general user account). The Rule Explainer doc, like the Accessible Posting doc, has the short list of rules at the top and then delves into them. Similar to Accessible Posting, we recommend that you familiarize yourself with the short list first, then read more once you’re in a place to read the nuance.

What is hashtag etiquette on the Fediverse?

Hashtag casing, and not misusing restricted hashtags, are important aspects of hashtag use on the Fediverse. For more, read our Hashtag doc.

What are content warnings and how/when do I use them?

Content warnings are a feature that allows users to click-through to opt-in to your content rather than be opted-in by default. Our Content Warnings doc explains this, and how to create an effective content warning, in more detail.

Content warnings are most commonly used for spoilers and for accessibility. For more about the latter, please read our Accessible Posting doc.

How does verification work on Mastodon?

Mastodon uses a concept of verification that works like identity verification (Keybase, KeyOxide). For information about how you can verify your domains and so on, please see our Mastodon Verification doc.

The world is on fire, how do I help support myself while enjoying Hachyderm and the Fediverse?

There are many tools that allow you to control what content you are exposed to and how visible you want your account and posts to be. For information about these features and more, please see our Mental Health doc.

2.1 - Accessible Posting

Introduction to posting accessibly for new users.

How to create accessible posts

This documentation page is an introductory guide to posting accessibly on Hachyderm. As an introductory guide, there are topics and sections that will need to be added and improved upon over time.

If you are looking for the short version of our asks here on Hachyderm, please read “What do we mean when we talk about accessibility” and “What you should know and our asks”, as well as the summary at the end of the document.

If you are looking for the underlying nuance and context, so you can apply the asks more effectively and take an active part in maintaining Hachyderm as a safe, active community, please read and reflect on each of the deeper sections.

What do we mean when we talk about accessibility?

Accessibility means that as many people as possible can access your content if they choose to. Accessibility does not mean that you cannot otherwise intentionally gate your content, for example via a content warning. Rather, accessibility refers to the many ways that people typically create unintentional gates around their content.

What you should know and our asks

No one on Hachyderm is expected to be an expert. Everyone on Hachyderm is asked to approach accessibility with a growth mindset and to iterate and change over time.

Whenever you receive a request from a group you are not yet familiar with, or who you do not interact with often enough to have cultural fluency, please take that request as a growth opportunity. This growth can happen with sustainable time and effort on your part.

Our asks

When posting accessibly:

  1. Include effective alt text for images.
    • Note that you cannot add or fix alt text by editing a post after publishing. This applies both to alt text that was neglected and to alt text that needs correcting. The common workaround is to reply to your own post with the alt text.
  2. Use PascalCase or camelCase for your hashtags.
  3. Learn how to write effective summaries for audio / video content.
  4. Prioritize audio content with captions and transcripts where available.
  5. Be aware of how often you post paywalled content; not everyone has the same purchasing power.
  6. Learn how and when to use effective content warnings.
  7. When writing posts for an international audience, minimize use of slang and metaphor and instead use literal, direct, phrasing that can be easily translated by translation tools.
  8. When someone makes a mistake regarding any of the above, please either help them if you have the emotional space to do so or move on. Do not shame them or sealion them.

Content warnings in particular are a useful feature that applies to many situations. As a reminder, we request and recommend content warnings as a general rule as opposed to requiring them. (Please see this document’s summary for more information about why this is.)

The remainder of this introductory doc page will supply context and nuance for the above asks. The use of content warnings will come up heavily in the interpretive section.

Interactions on the internet

Breaking down the different ways that we send, receive, and interpret content on the internet can help when building an internal framework for “what is accessible”.

When receiving content on the internet, that content is typically:

  • Visual
    The text on this page, static or animated images, video
  • Auditory
    Non-visual audio content like podcasts, or audio content with visuals like a video.
  • Tactile
    How we interact with visual content by “clicking here” or otherwise interacting with the content we are receiving.
  • Economic
    Content that requires individual purchase or subscription to access.

When sending content on the internet, the content is typically:

  • Visual
    Same as the above, but something we are sending rather than receiving.
  • Auditory
    Same as the above, but something we are sending rather than receiving.
  • Tactile
    Same as the above, but something we are sending rather than receiving.

When interpreting content on the internet, we are using:

  • Our available senses
  • Our neurodiversity
  • Our lived experiences, including but not limited to our socialization and culture
  • Our moral compasses and ethical alignments
  • Our primary language(s), spoken and signed
  • And so on.

Generating accessible content is the combinatorics problem of the above. Most commonly, accessibility is implemented by creating a “sensory backup” of the primary delivery of the content. For example, if the content is audio, it will have a (visual) transcript. If the content is visual, it will have descriptive text that can be audibly read. And so on.

The interpretive aspect of content is where “sensory backups” alone fall short. If someone is a trauma survivor, having a “sensory backup” of the content does not solve the particular difficulty they are having. If someone is experiencing sensory overwhelm, pivoting to a different sense may solve the particular difficulty they are having, but it may not. To dive into that a little deeper: if the difficulty they are having is that the web page is visually noisy, a transcript that deeply describes all that visual noise and simply makes it auditory doesn’t necessarily solve the difficulty. In fact, it might not even be desirable.

Mastodon and Hachyderm

For the rest of this article, we will describe the ways that Hachydermians can begin to maximize the accessibility of their posts within the context of Mastodon. We will not describe how the Mastodon software itself can be improved. This is only because that exceeds the scope of this page and our influence, not because it is unimportant. For those of you who have ideas for how Mastodon itself can be more accessible, we recommend making PRs or opening GitHub issues on the Mastodon project repo.

To restate, this is an introduction to some of what you will want to learn and internalize in order to create posts that are more accessible. Note that we didn’t say “posts that are accessible”, only “posts that are more accessible”. The reason for this is that the scope of humanity is broad, and learning about others is a lifelong journey. Being truly accessible, not only in posts but in software design and in general life, is an end goal you should strive toward even if it can’t be fully achieved.


This first set of “things to consider” when you are creating content is based on the senses we described above that are used when others are receiving the content you are creating.


Visual content

This section will be the longest one and will interplay with other sections below. That is because a lot of the content on Mastodon is visual in nature, whether it is plain text, memes, or animated GIFs. Some common examples:

  • Images, static and animated
  • Videos
  • “Fancy Text” and special characters
  • Emoji
  • Hashtags

The direct asks for each of these:

  • Include effective alt text
    Include a summary, similar in function to alt text
  • Minimize usage of “fancy text” and special characters
  • Favor longer, complete emoji names over shorter names
  • Use PascalCase or camelCase
The context

How do people who do not see, or do not see clearly, interact with the content above? Typically, via screen readers. Screen readers are designed not only to read plaintext documents, like this page, but also to read any text associated with a visual. For images and video:

But what about “fancy text”, emoji, and hashtags? In fact, what do we mean by “fancy text”?

“Fancy text” / special characters actually warrant an article of their own, and Scope has a lovely 2021 article titled How special characters and symbols affect screen reader accessibility. The article shows how different special character “fonts”, typically used to create italics or other visual effects, are read by screen readers for those who use them.

The case is similar for emoji. While in the standard emoji set there is associated text for a screen reader to read, like a thumbs up 👍, when reading the text for a custom emoji the only text available is the name that is supplied between the colons.

Importantly, this is why here on Hachyderm we favor emoji names like “:verified:” rather than “:v:”, even though the latter is shorter. When a screen reader encounters the text “Jayne Cobb :v: :gh:” it will read “Jayne Cobb v g h”. On the other hand when a screen reader encounters “Jayne Cobb :verified: :github:” it will read “Jayne Cobb verified github”. One of these is significantly more accessible than the other.
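As a rough illustration (not the exact behavior of any particular screen reader, and the function name here is our own), the fallback can be sketched like this: when a custom emoji has no dedicated spoken text, the shortcode name between the colons is all there is to voice.

```python
import re

def spoken_form(display_name: str) -> str:
    """Approximate how a screen reader might voice custom emoji:
    each :shortcode: is replaced by its bare shortcode name, so
    longer, descriptive names produce more intelligible speech."""
    return re.sub(
        r":([A-Za-z0-9_]+):",
        lambda m: m.group(1).replace("_", " "),
        display_name,
    )

print(spoken_form("Jayne Cobb :v: :gh:"))             # hard to parse by ear
print(spoken_form("Jayne Cobb :verified: :github:"))  # clear and meaningful
```

The longer shortcodes cost nothing to type once, but every listener using a screen reader benefits on every read.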

Hashtags are the last heavily used “type” in the visual section. Many screen readers are aware of and able to read hashtags, but only when they use alternating case (PascalCase, camelCase). For those unfamiliar, that means you should use the hashtag #SaturdayCaturday, not #saturdaycaturday. To show the difference, Belong AU has an excellent 17-second clip showing how screen readers read hashtags.
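To make the casing rule concrete, here is a small hypothetical helper (the function name and splitting rules are our own illustration) that turns a free-text phrase into a PascalCase hashtag:

```python
import re

def to_pascal_case_hashtag(phrase: str) -> str:
    """Turn a free-text phrase into a PascalCase hashtag.

    Splits on anything that is not a letter or digit, capitalizes
    the first letter of each word, and joins the words after '#'.
    """
    words = re.findall(r"[A-Za-z0-9]+", phrase)
    return "#" + "".join(w[:1].upper() + w[1:] for w in words)

print(to_pascal_case_hashtag("saturday caturday"))  # → #SaturdayCaturday
```

The word boundaries are what give screen readers (and sighted readers) something to parse; the all-lowercase form erases them.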


Audio and audiovisual content

Common sources of audio or audiovisual content on social media are:

  • Podcasts, recorded messages, and so on.
  • Audiovisual content like YouTube, TikTok, etc.

The asks for these:

  • When the content is your own, please have a transcript or similar available.
  • When the content is not your own, for longer content please favor sources that include a transcript whenever possible.
  • In either case, when posting the content include a short summary (similar to the function of alt text).
The context

Due to the sizes of audio files in posts, most audio content, or audiovisual content in the case of video, is not hosted on Hachyderm. Linked content comes from various news pages, podcast pages, Twitch streams, YouTube, TikTok, and so on. Unless you are the streamer, this also means that you don’t have as much control over how the content is displayed or rendered, as you would for embedding a GIF or meme (with the alt text, etc.). For this reason, the biggest ask here is that you summarize audio or video when you post it, so that someone can get the gist of what is posted even if they cannot directly use the content. It also helps to start to be aware of what sources have captions (many video sites offer automated captions) as well as transcriptions. If you would like an example of a podcast that has a transcript, take a look at any of the episode pages for PagerDuty’s Page It to the Limit podcast.


Audiovisual noise

“Noise” in this sense applies to:

  • “Too much” audio and/or visual content
  • “Too loud” audio and/or visual content

The asks for these:

  • Please call out in your post if your linked content fits either of the above.
The context

For an example of what might be generating audiovisual noise, try navigating the internet without an adblocker or script blocker. Risk of malware aside, there are a lot of audio and/or visual ads placed all over web pages and there are frequently pop-ups, notifications, and cookie consent windows as well.

These situations are usually frustrating when you’re trying to navigate the situation as-is, let alone what happens when you’re trying to convert the page to one particular sense (auditory or visual).

Most of these situations do not apply on the Fediverse directly; they appear via links to other pages and content. To be clear, on Hachyderm we do not ask you to be responsible for the entirety of the internet. That said, if you are posting content that might be “noisy”, it is worth mentioning that in the post that supplies the link.


Interpretive accessibility

Interpretive accessibility is about how our minds understand presented information. This is a very broad set of topics, as our minds use a lot of data to process information. As an introduction, some common areas to consider for making interpretation more accessible are listed below.

Almost exclusively, the ask for assisting with accessible interpretation is: use content warnings where relevant.

Since the ask is almost always the same, unlike the section above this one will not follow the “common examples” and “direct asks” pattern. As an introduction, we’re calling out some of the most common barriers to interpretation, offering suggestions for handling them, and reminding everyone that we do not request or require anyone to become experts. Our main ask is that you continue to learn and grow in awareness.


Neurodiversity

Neurodiversity is the umbrella term for “the range of differences in individual brain function and behavioral traits, regarded as part of the normal variation in the human population”. (Oxford Dictionary) A few common attributes that are part of neurodiversity are:

  • ADHD
  • Dyslexia, Dyscalculia
  • Autism / Spectrum

There are more than these. The main underlying factors that define different aspects of neurodiversity are things like verbal / written skills, hyper/hypofocus, sensory interpretation (e.g. overwhelm when there’s too much sensory input), mental visualization, and so on. The Web Content Accessibility Guide article on Digital Accessibility and Neurodiversity has some excellent tips on the software design level that can also help you build your mental model while interacting with others.

Within the context of Mastodon, these will usually come up via links to shared content rather than anything hosted on the platform itself. That means that what you can do is include the relevant information when you are posting a link to other content. This can either be via a description in the post itself or, where relevant, crafting a content warning for the post.


Medical conditions

One of the most common medical conditions that can cause issues with audiovisual content is seizures. Photosensitive seizures can be triggered by strobing, flickering, and similar visual effects. This would only come up if/when a user posted an animated image or video that contained effects similar to these. If you would like to read about this in more depth, please take a look at Mozilla’s Web accessibility for seizures and physical reactions page.

The other two primary medical conditions that come up when interacting with social media are eating disorders and addictions. The former can be triggered by images of food, discussions of weight gain or loss, and so forth. The latter can be triggered by images and discussion around any addictive substance, which includes but is not limited to: food, alcohol, various recreational drugs, and gambling.

We do not ask that Hachydermians be medical experts in order to interact on the platform. The main ask is to be aware of situations like these and to use content warnings when posting content that might be triggering to these groups.

Traumas and phobias

Trauma is a very broad category, and the nuance of what can trigger trauma varies between individuals. That said, there are some common examples of posting patterns that can be assumed to be generally traumatic:

  • If posting about trauma to an individual member of a community, either via a news cycle or personal experience, in all likelihood the trauma for the collective group will be triggered.
  • If posting about any sort of violence, it can be assumed to be traumatic even to those who have never experienced that type of violence. This includes various forms of violent trauma humans can inflict on each other as well as animal abuse and abuse to our environment.
  • If posting about wealth and poverty, and the topics in-between, it can be assumed that this will trigger the trauma of the many who have had to interact with economic systems from a place of disadvantage.

There are many more traumas than these. There are also common phobias where the response patterns in the mind and body directly mirror those of a traumatized person who has been triggered. Common categories of phobias include:

  • Death
  • Disease
  • Enclosed spaces
  • Heights

We do not ask that Hachydermians be experts in trauma and phobias in order to interact on the platform. We do ask that users use content warnings when discussing heavy topics like the above. This is because, while there is a lot to be gained from discussion, those most impacted will see the same traumatic conversations over and over again, especially if it’s the topic du jour (or of the week), or something in a recent news cycle has prompted many simultaneous discussions.

Language accessibility and ease of translation

The main goal here is to ensure that both plain text and text descriptions of media are copy/pasteable so they can be translated into a different language than they were composed in. This allows users that may not be fluent, or fluent enough, in the language the text was written in to use translation tools for assistance.

For clarity: we do not expect any individual to be a hyperpolyglot. We do not expect Hachydermians to post translations of their posts either. What we are asking is for you to be aware of the issue and to be aware if you are posting something that cannot be copy/pasted into a third party tool for translation assistance if someone needs to do so.

Some examples:

  • Video content with captions in any language: can another language tool be used to translate the captions and/or does the video host support multiple languages for their captions?
  • Video content with transcript: can that transcript be copy/pasted into a translation tool?
  • Plain-text post: can the post be copy/pasted into a translation tool?
  • Slang: most regional slang doesn’t translate well when using tools. If you’re making a post that you want others to be able to easily translate, minimize the use of slang.


Economic accessibility

Within the context of Mastodon, this appears when posts are made that link to paywalled content. The paywall may be a direct purchase for that specific piece of content, or the content may be hosted by an entity that requires a subscription to access.

From an accessibility and equity mindset: while people should be paid for their work, it is important to remember that not everyone can pay for the access to that work. They may be disadvantaged overall, or may live outside the country or countries that are allowed to pay for access to it.

Another common pattern is for user data to be a type of payment. In this situation, someone must typically supply their email and some demographic information for free (as in currency) access to the content. Similar to the above, this can be an accessibility issue for those who have reason to share their information only cautiously. This is especially true in light of increasingly common data breaches, where supplied data can be used to target individuals and groups.

Here on Hachyderm we do not moderate you for posting paywalled content. Within the context of accessibility, we ask that you be aware of (and call out) when you do, and that you manage what you choose to share with care.


Summary

The length of this particular document should tell you that being accessible requires time and effort. As only an intro guide, it should also tell you that there is a lot happening on our biodiverse sphere.

Diversity is one of the primary reasons we request, not require, use of content warnings in most cases. This is because there are many ways two or more groups may be in a state of genuine conflict without anyone being in the wrong. One quick example could be if someone was posting about weight loss or gain as a response to recovery from a medical issue that triggered someone else’s eating disorder. Another might be someone who needs to scream about how transphobia hurts them, while someone else needs to not be reminded that’s still happening today.

Hachyderm needs to be able to accommodate all of these situations and more. To do so, we try to create space for disparate needs to co-exist. For situations where instance-level policy wouldn’t be beneficial to the community, we ask individuals to create and maintain their personal boundaries in a public space. We also ask everyone to use common keywords and hashtags so that those who are looking to filter that content can do so easily. As always, please report malicious and manipulative individual users and instances to the moderation team.

As you learn and grow you may want to help others as well. This is great! Remember to do so only when you have the emotional space to help with grace. Different people are at different stages in different journeys, which means that the person who you are frustrated with for not understanding one facet of accessibility might be very adept with a facet you know very little of.

If you run into situations where your needs and another’s come into a state of conflict, please approach each other with compassion and respect. Please also remember that you can always walk away from disrespectful conversations for any reason. If the other person does not respect your boundaries and/or the space you are creating for yourself, you can also request moderator intervention by sending us a report.

2.2 - Content Warnings

How to use content warnings on Hachyderm.

This document describes how to use content warnings and how they are moderated. To understand the feature itself, please look at our content warning feature doc. In this document:

What is a content warning?

Content warnings are the text that displays first instead of the text and/or media that you have included in your post. Since the goal of the content warning is to put a buffer between a passerby and your underlying content, it is important to have something that is descriptive enough to be helpful for someone to decide if they should click through the content warning or not.

When to use content warnings

The two main use cases for content warnings:

  • A content warning should be used to protect the psychological safety of others in a responsible way
  • Spoilers

A central concept to content warnings is “opt-in”. By using a content warning, you are creating a situation where other users are opted-out of the detail of your post by default. They must consent to opt-in. In an ideal scenario, other users are able to make an informed decision to opt-in, i.e. informed consent.
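For those posting through the API rather than the web client, the content warning corresponds to the `spoiler_text` field of a status. A minimal sketch, assuming a placeholder instance URL and access token (the helper names here are our own):

```python
import json
import urllib.request

def build_cw_payload(body: str, warning: str) -> dict:
    # `spoiler_text` becomes the content warning shown before the body.
    return {"status": body, "spoiler_text": warning}

def post_with_cw(instance: str, token: str, body: str, warning: str) -> dict:
    # POST /api/v1/statuses publishes a status on a Mastodon server.
    req = urllib.request.Request(
        f"{instance}/api/v1/statuses",
        data=json.dumps(build_cw_payload(body, warning)).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Whatever you put in `spoiler_text` is exactly what a passerby sees before opting in, which is why the wording guidance below matters as much for API clients as for the web UI.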

Common examples of content warnings

There are a few situations where content warnings are commonly used on the Fediverse (these are not exclusive to Hachyderm):

  1. Images of faces with eye contact
  2. Images of food and alcohol, even if they are not being consumed in excess
    • Though they will also be behind a content warning if they are
  3. Text or media describing or showcasing violence or weapons (including but not limited to guns, knives, and swords)
    • This is common when discussing news around war violence or shootings
    • Also common when sharing stories of personal trauma
    • This is also common when sharing news about personal safety, domestic abuse, or sexual assault
  4. NSFW content
  5. Fandom-specific spoilers for various forms of entertainment like TV shows, movies, and books.

Content warnings feature heavily in our Accessible Posting doc, especially in the Interpretive Accessibility sections.

How to structure a good content warning

The goal of a content warning is to communicate what another person needs to know so they can determine whether clicking through the warning is something they want to do or should be doing, especially if the content describes a situation that may be a traumatic lived experience or news event. The question you should ask yourself once you have written your content warning is:

If I were a user who did not want to engage with this post, would I know to avoid it with the information provided?

The follow-up question you then need to ask is:

If I were a user who did not want to engage with this post, but did so by mistake or because of an unclear content warning, what is the impact?

Quick example

Before going further, take a look at these two options to be used as a content warning for the same post about the first episode of the most recent season of The Mandalorian:

  1. “Spoilers”
  2. “Spoilers for the New Season of The Mandalorian”

Which of these two content warnings lets you know whether you want to interact with this post from the perspective of:

  1. A non-Star Wars fan
  2. A fan of The Mandalorian who does not care about spoilers
  3. A fan of The Mandalorian who does care about spoilers

Also: what is the impact to each of these groups if they click through your content warning and see content they did not want to see?

Remember: The goal is for the content warning text to be descriptive enough that all of these prospective audiences can decide to opt in or opt out. The impact in this example is intentionally low: someone will see spoilers. But there are situations where the impact will be higher.

“Spoilers, Sweetie”

(Quote attributed to River Song of Doctor Who fame.)

Preventing the spread of spoilers is an excellent “off label” use of the content warning feature. “Spoilers” most commonly refers to current books, television, and movies. One quick, easy example would be using a content warning when discussing Game of Thrones while it was actively airing. Another would be discussing the results of a football / soccer match that people may not have caught up on yet. Different fandoms have different expectations around what should be a protected spoiler, so the general rule of thumb here is to treat other fans the way you’d want to be treated.

As for moderation, we do not moderate spoiler tags. That said, and as always, please Don’t Be A Dick.

Protecting Psychological Safety

This topic is covered second because it requires more explanation of the implementation details. In fact, protecting psychological safety is a life-long journey and involves understanding people unlike yourself. The short version of when to use this content warning:

You should use a content warning whenever the psychologically safest option is to opt into a conversation rather than to default into that conversation.

Let’s start with a hopefully clear example. If you want to share news about human rights abuses around the world, you will likely be sharing links and media that shows the reality of those situations. This could include a lot of violent and traumatic video and images. Due to the need to raise awareness, you may want to make sure that people can see the content to avoid turning a blind eye.

That said: you cannot control the reach of your information. This means if you do not use a content warning, not only will the people you wish would pay attention potentially see it (or not, depending on how their home instance federates), but you could also be exposing victims of that same violence to content that triggers their trauma. So what’s the psychologically safest option? To protect the most vulnerable, in this case the people with trauma, and that means use a content warning. The post will still have the same reach, but it allows people to opt into that conversation. In these cases it is to your benefit to use clear content warnings, e.g. “article about the war in Ukraine, includes images of physical violence in the war zone”. A content warning that is clear like this serves the dual purpose of labeling the topic of the post while also explaining why it is behind a content warning.

Similar cases to the above would be any posts that include text descriptions of, or images and/or video of, violent actions. To be clear: our definition of violence includes physical, psychological, or sexual violence. Animal abuse also counts as violence. (For more clarification on the various rules, like “No violence”, please see our Rule Explainer.)

Composing content warnings for psychological safety

Being conscientious about composing a content warning that is intended to preserve the psychological safety of others is important. The impact is significantly higher than if someone sees a spoiler before they have a chance to experience the “unspoiled” content.

When you are composing this type of content warning, ask yourself two questions:

  1. Why am I putting this behind a content warning?
  2. What differs between this and other posts about this topic that I would not put behind a content warning?

The answer to your first question will likely be short. “It’s about war”, “it’s about yet another shooting”, “domestic violence”, “eating disorders”, and so forth.

The answer to the second question is what will provide the nuance. In the example near the top of the post, it is the difference between “it’s a spoiler” and “it’s about the latest episode of The Mandalorian”. For psychological safety, you might find yourself providing answers like:

  • 1) It’s about the war in Ukraine
    2) It shows images of people in the aftermath of an explosion.
    • Example content warning text:
      “Ukraine War - images and video of bomb injuries and death”
  • 1) It’s about another shooting in the US
    2) It was at a school and children are scared and crying in the images / video in the news report.
    • Example content warning text:
      “School Shooting - images / video of traumatized children but no injuries shown”

Asking the questions in this way allows you to supply the broad topic as well as describe the nuance that informed your decision to put the post behind a content warning, especially if you wouldn’t necessarily do so for the broad topic area by itself.

Nuance and growth

Areas that might take more learning and growth to understand and adopt are normally those that involve understanding the intersectionality of users on the platform. For example, you may see people use content warnings on images with faces (especially with eye contact) or on images of food. Using a content warning for faces and eye contact helps neurodiverse users on the platform who struggle with these and can have strong adverse reactions to them; content warnings over food can help those who have, or are recovering from, eating disorders. (Similarly, content warnings over images of alcohol can help users who struggle with, or are recovering from, alcohol addiction.)

Taking an active part in keeping Hachyderm a psychologically safe place means doing the work of understanding others and applying that knowledge to how you interact with each other. There is a lot to be said here, and it would exceed this document’s scope, but a good place to start is learning about anti-racism, accessibility, and anti-ableism.

What Do the Moderators Enforce?


  • Moderators will not take moderation action against spoilers, but it really is in poor form to openly share spoilers.
  • Moderators will protect the psychological safety of users and prioritize the most vulnerable.
  • Moderators will not, in general, take moderator action due to content warnings (or their lack).
    • To rephrase: moderators will request and recommend effective ways to use content warnings as opposed to requiring them.
    • Any exceptions to this are called out on a case-by-case basis.
  • Using a content warning as a workaround is not actually a workaround.
    • This means that rule violations are not less severe or otherwise mitigated by putting the offending content behind a content warning.

2.3 - Hashtags on the Fediverse and Hachyderm

How to use hashtags on the Fediverse, including reserved hashtags.

Hashtags are useful ways of connecting with other users. When users follow hashtags they can see posts that might otherwise be buried in the fire hose of their local and/or federated timelines.

If you’re looking for some popular hashtags to follow, please scroll down to the Popular Hashtags section at the bottom. Note that some of the popular hashtags are reserved. To understand what that means, please read the Don’t misuse reserved hashtags section.

Hashtag Etiquette

How to have good hashtag etiquette on Hachyderm and the Fediverse:

Use common hashtags

Just like living languages, hashtags change. Cultural references to a meaning or phrase may shift over time, may increase or decrease in usage, or may just be an in-joke in the post. Noting which hashtags are commonly associated with post topics, either specific or broad, and using them can help others who are either opting in to or out of the type of content you are posting. (See filtering content on our mental health doc.) As a quick example, someone looking to connect with other fountain pen users might follow hashtags like pen, ink, or FountainPen. Likewise, someone who needs to filter out food and/or drink related content, for any reason, might filter out hashtags like food, drink, DiningOut, and so forth.

Use an alternating case pattern for your hashtags

You should always use an alternating case pattern, like PascalCase or camelCase, for your hashtags so that screen readers can read them properly. (Please see our Accessible Posts doc.) Changing the case will not change the posts that are attributed to the tag. This means whether someone is filtering in or out a specific tag, the same posts will be displayed or hidden.

If you are using a hashtag and a non-cased option is offered to autocomplete, please complete with the correct casing. This is because Mastodon will offer the most common tag or tags associated with what you are writing. The more users manually type the cased version, the sooner the cased tag is offered for autocomplete. That said, one of the big reasons that we do not moderate for tag casing is because of this feature: users will frequently not catch that their tags have been overwritten while composing their post.

For completely new tags, the tag is stored with the case you compose it with. This is when you should take additional care: the next person who types the tag after you will be offered the version you typed as the autocomplete suggestion (as the only one that exists).
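As an illustration of the casing guidance above, here is a small sketch (the helper names are ours) showing how to build a PascalCase hashtag and why casing does not change which posts a tag matches, since tags are compared case-insensitively:

```python
def to_pascal_case_hashtag(phrase):
    """Turn a phrase into a screen-reader-friendly PascalCase hashtag."""
    words = phrase.replace("-", " ").split()
    return "#" + "".join(w[:1].upper() + w[1:] for w in words)

def same_tag(a, b):
    """Tags resolve case-insensitively: #fountainpen and #FountainPen
    are the same tag, so changing the case never changes the posts
    that are filtered in or out."""
    return a.lstrip("#").casefold() == b.lstrip("#").casefold()

print(to_pascal_case_hashtag("fountain pen"))   # #FountainPen
print(same_tag("#FountainPen", "#fountainpen")) # True
```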

Don’t overuse hashtags

How many tags are “too many” is ultimately reader discretion. One or two will likely be fine, and having every word be a separate hashtag is a chore to read. As a loose rule, around five tags every two paragraphs is a reasonable upper bound.

We do not, in general, moderate hashtag volume in a post. That said, posts with a high number of hashtags may be reported and moderated as spam.

Don’t misuse reserved hashtags

What to know about reserved hashtags:

  • Can only be used for their stated purpose.
  • Can be restricted across the Fediverse or instance-specific.
  • Misuse of reserved hashtags may warrant moderator action, this will be on a case-by-case basis.

This means that these reserved hashtags cannot be used the same as general purpose hashtags. For example, you can include #cats and #caturday on any cat-relevant post you wish. You cannot use #FediBlock for any other reason than to notify other instance admins of a malicious instance or user.

Fediverse reserved hashtags

In order for hashtags to be considered reserved in the Fediverse context, they must be shown to be:

  • Used across the Fediverse
  • Extremely limited or narrow in scope

Due to the ever-changing nature of hashtags, we will not track and moderate all Fediverse reserved hashtags, only an extremely limited subset of them. That said, if someone has notified you that you’ve misused a hashtag that is intended for a specific use, please remove it from your post whether it is on this list or not.

The following are the reserved hashtags that we will moderate for misuse. In every case, misuse is “any use of the hashtag other than how it is specified below”.

  • FediBlock
    • Should only be used for updating the Fediverse about users and/or instance domains that need to be suspended via the instance admin tools or for directly responding to a thread/discussion about a reported user or instance.
    • This tag is followed by instance admins across the Fediverse and as such is not limited to Hachyderm or even Mastodon instances in general.
    • Existing posts using the hashtag can be used as a guide for what details to include in your post.
      • As a general rule, you’ll want to include the user or domain, the reason why, and screenshots to substantiate. The use of screenshots may vary, though, depending on the nature of the content being reported.
  • FediHire
    • Should only be used for job posting and job seeking.
    • This tag is followed by both those posting and applying for jobs across the Fediverse.
    • Existing posts using the hashtag can be used as a guide for what details to include in your posts.

Hachyderm reserved hashtags

Similar to the Fediverse reserved hashtags, the following are reserved on Hachyderm.

  • HachyBots
    • This tag is required for all bot posts on Hachyderm to allow users to easily opt in or out of bot content.
    • Use of this hashtag for any other purpose will warrant moderator action.

General use hashtags

General use hashtags are “normal use”. This means you have broader discretion over how/when/where you use them. The general rule here is “have fun, don’t be confusing”.

If you notice that there are topic areas that you post about relatively frequently, it is a good idea to become familiar with the hashtags in that space. This is so that people filtering out content can do so more effectively.

If you’re new to Hachyderm and/or the Fediverse, here are some high traffic hashtags you may want to look at, grouped roughly by topic. Note that general and reserved hashtags are both listed below.

2.4 - Preserving Your Mental Health

How to use the Mastodon tools to help preserve your mental health on the Fediverse.

Social media and your mental health

There are many articles available online about social media, general online engagement, and mental health. This document focuses on a subset of these topics: helping you use the Mastodon tools to uphold your personal needs and boundaries.

Moderation action and individual action

As you navigate the Fediverse, you will frequently run into situations where you need to decide if a situation can or should be handled by you individually, by the moderation team, or both. When you are deciding which action(s) to take, the main question you should ask is:

Is the discussion / interaction that I encountered something that I need to avoid for my own mental health, or is it something that poses a community risk that needs to be handled by moderation?

A couple examples of discussions or interactions that would have a negative impact on the community:

  • Any posts that are supporting or perpetuating racist, homophobic, transphobic, and other *ist / *phobic viewpoints.
  • Any interactions where one or more users are targeting an individual or group for stalking and/or harassment.

A couple examples of discussions that can have a positive impact on the community as a whole but can have a negative impact on individuals:

  • Posts sharing and discussing news cycles around individual or state violence,
    even if they are condemning that violence.
    • Individuals and communities that are the current or historic target of that violence may be triggered.
  • Posts about individual exposure to, or recovery from, various forms of trauma.
    • Individuals who have been exposed to the same, or similar, situations may be triggered.

The above examples are not comprehensive, but do show situations where you may need to protect your own individual mental health even as communities across the Fediverse discuss and interact in ways that show collective growth.

Mastodon tools at your disposal

Being online is a bit like being in a crowded room or stadium, depending on the audience size. That means when you go to take a step back, you may need to use some of the Mastodon tools to help you maintain the space you’re trying to create for yourself either temporarily or permanently.

Unwanted content

These are the features that Mastodon built and highlighted for dealing with unwanted content:

  • Filtering
    • You can filter by hashtag and/or keyword
    • You can filter permanently or set an expiration date for the filter
    • This feature is most useful when you want to completely opt-in or opt-out of content.
      • opt-in: perhaps you want to follow all Caturday posts in a separate panel or follow cat posts generally.
      • opt-out: you do not want to be subject to the unpredictable news cycles around events that impact a group you’re a member of. Even if people are expressing support for you, you just do not want to see any of it at all ever.
  • Other users
    • Following/Unfollowing
      • Following: explicitly opt-in to another user’s posts, boosts, etc.
      • Unfollowing / not following: a passive opt-out. You will still see posts / etc. if a user you are following interacts with them.
    • Muting / Blocking
      • Mute: You will not see another user’s posts, boosts, etc. but they can see and interact with yours. (Note: you will not see them doing so.) Users are not notified when they are muted.
      • Block: You will not see another user’s posts, boosts, etc. and they cannot see yours. Users are not notified when they are blocked, but as your content “disappears” for them most users can tell if they’re blocked.
      • Hiding boosts
        • Less commonly used tool, mainly helpful if you feel another account’s boosts are “noisy” but their content is otherwise fine.
  • Other instances
    • Block
      • This is the only action available to individual users. Moderation tools allow for more nuanced instance-level implementations including allowing posts but hiding media. (Please see Mastodon’s instance moderation documentation page for the complete list.)
      • When you block another server you will not see any activity from any user on that instance. Instance admins and other users are not notified when you have blocked their instance.

How to implement all of these features is covered on Mastodon’s Dealing with unwanted content doc page. Please refer to that page for the implementation details of each.
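To make the opt-in / opt-out model concrete, here is a conceptual sketch of how a keyword filter with an expiration date behaves. This models the behavior described above; the class and method names are ours, not Mastodon’s:

```python
from datetime import datetime, timedelta, timezone

class KeywordFilter:
    """Conceptual model of a Mastodon-style keyword/hashtag filter."""

    def __init__(self, keywords, expires_in=None):
        # Matching is case-insensitive, like hashtag resolution.
        self.keywords = [k.casefold() for k in keywords]
        now = datetime.now(timezone.utc)
        # expires_in=None models a permanent filter.
        self.expires_at = now + expires_in if expires_in else None

    def active(self, now=None):
        now = now or datetime.now(timezone.utc)
        return self.expires_at is None or now < self.expires_at

    def hides(self, post_text):
        """True if this post should be hidden from your timelines."""
        text = post_text.casefold()
        return self.active() and any(k in text for k in self.keywords)

# Temporary opt-out: hide election content for one week, then let it back in.
f = KeywordFilter(["#election", "polling"], expires_in=timedelta(days=7))
print(f.hides("Big #Election news today"))  # True
print(f.hides("Caturday photo thread"))     # False
```

A permanent opt-out is the same object without an expiration; opting back in is simply deleting the filter.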

Additional features

There are many more Mastodon features that you can use to help improve your experience of the Fediverse, in particular in your account profile and preferences settings. These features mostly control the visibility of your profile, your posts, and the posts of others.

  • Limiting how other accounts interact with yours (or not)
    • Follow requests
      • When enabled, you need to explicitly approve a potential follower. Only approved accounts can follow you.
    • Hide your social graph
      • When enabled, other accounts cannot see your followers or who you’re following.
    • Suggest your account to others.
      • Disabled by default. When enabled your account is suggested to others to follow, increasing your visibility.
    • DMs, important notes:
      • DMs cannot be disabled and you cannot restrict who is able to DM you (e.g. Followers Only).
      • When you include a new user in an existing DM, they can see the entire DM history. This is more like adding a new user to a private Slack channel than like other tools’ DMs, which create a new group thread with no history.
  • Limiting post / account visibility (or not)
    • Posting privacy
      • Can be set to public, followers only, or “unlisted”. The latter means that someone can see your activity by going to your profile, but cannot see your activity in any of their individual, local, or federated timelines.
    • Opting out of search engine indexing
    • How media displays
      • Can be set to always show, always hide, or to only hide when marked sensitive. If you find that media such as static and/or animated images, video, etc. have a negative impact on your mental health, you might want to enable Always Hide.
    • Disable content warnings
      • If you find that content warnings create more barriers than access, you may want to enable “always expand posts marked with content warnings”.
    • Slow mode
      • Enabling this feature means you will need to refresh for new posts to be added to the timeline you’re currently viewing.

The above are all set in your profile and preferences. We have documented how to configure these settings on our Mastodon User Profile and Preferences doc page.

Patterns for a better Fediverse experience

Here we are detailing patterns that you can implement and iterate to improve your Fediverse experience. The tools that can assist are directly referenced in each section. For how to implement, please see the docs linked in the above Mastodon tools at your disposal section.

Control what content you are exposed to

The mental model that helps the most here is to realize that social media by design opts everyone into all content by default. Additional tooling constrains this default opt in, either by creating a new default of opt-out or by creating situational opt-outs.

Deciding which opt-outs you want to implement will inform what tools you choose to use for your benefit.

Proactively filter content

There will be times that you will want and need to filter content. This might be permanent or temporary.

  • Active opt-in
    Following keywords and hashtags can provide relief when you need to decompress. Cats, math, stomatopoda - anything that helps.
  • Active opt-out
    You can filter out keywords and hashtags to either temporarily or permanently remove yourself from certain content and discussions.

We recommend filtering out words, phrases, and/or hashtags that:

  • You do not want to be exposed to, for any reason.
    • This includes not wanting to be exposed to messages of support, which can have the unintended, added consequence of reminding you why support is needed. “For any reason” is for any reason.
  • You do not want to be exposed to even via reference in a content warning.

Filters will filter out posts with content warnings as well. When you filter out content you will still be able to see other posts from other users you do not actively block in your follows, local, and federated timelines.

Note that you can choose to filter or block content as a separate choice from choosing to report it to the mods. We will still see the report.

Proactively obscure media

When changing the default opt-in for media, you can either:

  • Opt in to all media
    This will display all media, regardless of whether the media is marked as sensitive.
  • Obscure media marked as sensitive
    This will only obscure media if the person posting it marked it as sensitive.
  • Obscure all media
    This will obscure all media, regardless of whether the person posting it marked it as sensitive.

Please visit the Mastodon preferences doc to see how obscured / hidden media appears.

We recommend obscuring media:

  • If images and other media are a frequent trigger of any kind.
  • If you do not want to be exposed to media that you do not explicitly opt-in to.

Control access to your posts and account

The mental model here is: anything you post or include in a public facing space should be assumed to be public. There are ways to reduce how public it is and how long the information is available. Tactics here are particularly useful if you find your account attracting a disproportionate amount of “trolls” or other types of malicious activity. (You should also report this behavior to the mods.)

Protect your personal information

The usual advice about not disclosing personally identifying information for yourself or others still applies here. Beyond that, you can also:

  • Hide your social graph
    This will hide your followers and who you’re following. If you have reason to suspect that someone may use your account to find others to target, you should definitely enable this feature.
  • Change the default visibility of your posts
    By default all posts are public, but you can limit your posting reach by unlisting your posts or restricting them to followers only. When you unlist your posts, that means they are still public and visible if someone navigates to your profile, but your posts will not display in any local or federated timelines.
    Combining “followers only” for your posts and enabling “follow requests” so you can control who follows you is a way to maximize how private your account is.
  • Auto-delete posts
    This is enabled and configured via the “Automated post deletion” option on the left nav menu. You can set the threshold from as low as one week to as long as two years. You can also choose types of posts to exclude from the auto-delete like pinned posts, polls, and so forth.
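As a conceptual sketch of the behavior just described (the field names are ours, not Mastodon’s), automated post deletion boils down to an age threshold plus excluded categories:

```python
from datetime import datetime, timedelta, timezone

def should_auto_delete(post, max_age, now=None):
    """Sketch of automated post deletion logic.

    Mastodon lets you set the age threshold from one week up to two
    years, and lets you exclude categories such as pinned posts and
    polls from deletion entirely.
    """
    now = now or datetime.now(timezone.utc)
    if post.get("pinned") or post.get("is_poll"):
        return False  # excluded categories survive regardless of age
    return now - post["created_at"] > max_age

old_post = {
    "created_at": datetime.now(timezone.utc) - timedelta(days=400),
    "pinned": False,
    "is_poll": False,
}
print(should_auto_delete(old_post, max_age=timedelta(days=365)))  # True
```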

Limit how and who can interact with your account as needed

By default, everyone and anyone can follow your account from any other Fediverse instance that your home instance is federating with. To restrict that behavior, the main feature is to enable follow requests for your account. This will require you to explicitly approve everyone who follows your account.

Be aware of how the DM feature is implemented

There is not currently the ability to do much to constrain DMs, so you will need to be aware of how they work so you can choose how you want to engage with them (or not).

  • You cannot currently limit or disable DMs
    This means that any account that you have not explicitly blocked can DM you.
  • When you add someone to a group DM, they see the history of that group DM
    This means that the group DM feature functions more like a private Slack / Discord channel: when you add a new user, they can see the entire history. To change this, you’ll need to create a new group DM with the new user(s).

Collective action requires a collective

It is common to feel the impulse to respond to content that we see that has a negative impact on ourselves and/or others. When these engagements go well, it is/was an opportunity to help others grow on their journey to be better humans. When they do not, however, they can be a source of stress at best or result in harassment at worst.

Improving the experiences of yourself and others on social media is a form of collective action. There are many, many articles about maintaining or improving your mental health while taking part in collective action of some kind.

While the research around self-care in that broader context exceeds the scope of this article, what everyone should remember is this:

It is important for everyone to do their part to create and maintain safe communities (and build a better world). The reason this is called collective action is because it takes more than one person to accomplish group-level and societal changes. As with any effort to build something better, individuals that are doing their part for others must also remember to do their part for themselves.

Give yourself permission

Questions you should ask yourself when you are deciding whether or not to engage in a thread / conversation that has a negative impact on yourself and/or others:

  • Do you, specifically, need to engage at this moment or can another member of the community (collective) do it?
  • If you need to be the one that engages, either due to expertise or some other reason, do you have the capacity to do so in the current moment? If not, can you defer until a time when you do?


  • The Fediverse is vast and if you see an issue, especially one that the community commonly catches, it is likely someone else will say / do something.
  • Even if you do not have the capacity at the moment and no one else does, protecting your mental health will ensure your ability to continue to sustainably help others while growing yourself.

Being able to make contributions over long periods of time will almost always yield more and better results than doing short bursts of high volume.


  • You can leave conversations at any time for any reason. You do not need to justify or inform when you do, especially if doing so will continue to inflame the situation.
  • You can opt-out of any content at any time for any reason. You do not need to justify or inform when you do, especially if doing so would be negatively impactful to yourself.
  • You can also opt-in to content at any time. Any time you feel the need to opt-out of content, the ability is always there to opt back in when you’re ready.

Opting in and out of topics, especially those that might be personally relevant and soul straining for you, is a very useful tactic for creating personal boundaries. It limits when and how you are exposed to that type of content, especially if / when there is a news cycle or other event increasing the discussion frequency.

Create and maintain interpersonal boundaries

First and foremost: it is not ok for someone to violate your boundaries or for you to violate someone else’s. To be clear, this section is not referring to stalking and other forms of harassment. If you are experiencing these, please report it to the moderation team.

The common patterns you’ll see here:

  • When someone is asked to stop engaging in a topic of conversation, thread, or with someone else overall and they do not. e.g.:
    • “Please don’t talk about TOPIC in this thread.”
    • “Please don’t continue to engage with this conversation.”
    • “Please don’t continue to engage with me.”
  • When a person who is a member of a community posts and clearly identifies an in-group thread and non-members / out-group persons respond. e.g.:
    • “Black Mastodon users…” -> If you are not Black, do not respond.
    • “Non-native English speakers…” -> If your native language is English, do not respond.

Both of these patterns involve having a healthy understanding of public spaces and consent. A conversation thread being on social media does not necessarily mean it is for everyone to engage with. This concept should feel familiar: if a group is going for a walk and talking together in a park it does not mean that their walk and conversation is for everyone in that park at that time. That said, there are also public performances that happen in public spaces which are intended for everyone. Similar to the examples just provided, there are frequently communicated boundaries and expectations around private and public engagements happening in public spaces.

When someone violates your boundaries

If you have asked someone to stop engaging with you and/or your thread, the main tools at your disposal are:

  • Mute the user
    They will still be able to follow you and see your posts. They will not be notified they are muted.
  • Block the user
    They will not be able to follow you, see your posts, etc. They will not be notified that they are blocked, but they will be able to tell that they are blocked.
  • Report the user
    Remember you can always escalate to the moderation team.

Which of these actions you take is entirely up to you. Your account is how you navigate the Fediverse and you are the one that will need to decide which of these are warranted for the situation you are in.

Note that we do not recommend announcing when you have muted / blocked someone, as that defeats the purpose of removing yourself from the conflict. There are reasonable exceptions to this involving the safety of others and/or the overall community. In this case we request that you report the situation to the mods, whether you post about it or not, as it is the job of moderation to handle instance-level policy such as this.

When you violate someone else’s boundaries
  • Step away / adhere to the request. This is true regardless of who is “in the right”. The conversation is not effective and should cease.
  • Apologize, as appropriate. You will need to use good judgement here and should only apologize if and when you are not emotionally charged in a way that prevents you from using good judgement.
    • Depending on the impact to the person setting the boundary, if they’ve asked you to stop engaging, there are going to be times when an apology results in further corrective action from them, including muting, blocking, or reporting to the moderators.
    • If this happens this is not a flaw in them or necessarily a flaw in you. Allow them to meet their own needs without judgement or further engagement.
    • If you are a Hachydermian, refusing to let someone walk away regardless of whether they accepted or rejected your apology, or if/when you don’t have clarity on whether they’ve accepted or rejected your apology, is a violation of our Don’t Be A Dick policy at a minimum.

3 - Open Source Infrastructure

Description of our public infrastructure that keeps Hachyderm online.

This page is still being built. Check back often.

Hachyderm is taking the first steps toward what we are calling Open Source Infrastructure: operating a secure and resilient service completely in public.

Hachyderm Today

Today Hachyderm runs a global topology that is distributed across the following service providers:

  • On-premises servers in Seattle, Washington, also known as “The Watertower”.
  • Small lightweight Linode VMs operating around the world.
  • Core infrastructure operating in Hetzner in Falkenstein, Germany.
  • Object storage sponsored by Digital Ocean.

Experimenting in the Public

Hachyderm deeply believes there is untapped value left in computer science. We intend to approach our infrastructure as an opportunity for safe and thoughtful experiments, similar to how the International Space Station conducts experiments in orbit. We intend to prototype new technology, operational models, SRE organizational structures, follow-the-sun patterns, and open source collaborative workstreams for our infrastructure. In the coming months we will be sharing ways in which the broader Hachyderm community can volunteer to support our infrastructure, as well as register hypothesis-backed experiments to run with our data and our services.

Your user data will never be leveraged for an experiment. All user profile data, direct messages, post content, access metrics, demographic detail, and personal information will be restricted from any form of experiment. To be even more direct about it: you, as a Hachyderm user, will never be leveraged in an experiment. We will not be experimenting on you. We are experimenting on the tools and services that support Hachyderm’s public services, such as prototyping databases, HTTP(S) servers, and compute runtimes.

4 - Moderation at Hachyderm

How we moderate here at Hachyderm

Here you will find everything you need to know about what to do if you’ve been moderated, including:

  • The specific actions we take and what they mean or imply
  • How to appeal and how we work around limitations with the appeal process
  • What we promise as moderators to moderate Hachyderm

4.1 - Moderation Actions and Appeals Process

Information about the different actions that moderators take and what to do if you’ve been moderated.

Here you will find everything you need to know about what to do if you’ve been moderated, including:

  • The specific actions we take and what they mean or imply
  • How to appeal and how we work around limitations with the appeal process

First things first

We acknowledge that being moderated can be stressful. We do our best to intervene only when necessary and in the interest of preserving Hachyderm as a safe space. We acknowledge that we are human and that we can make errors. We ask for your patience and understanding that when we approach a situation, we are doing so as strangers moderating strangers.

For more information about what goes into how we interpret and enforce our rules, please take a look at our Moderator Covenant and Rule Explainer. Some of the language used below will come from the Moderator Covenant in particular.


The moderation actions and information about them

Although there are a few actions we can take as moderators, the most common are:

  • Warn
  • Freeze
  • Suspend

Two additional, less commonly used, actions are:

  • Limit
  • Delete Posts


Warn

The warning feature is a tool in the Mastodon admin tools that allows us to send a message directly to you.

When you receive a warning, that means that the moderation team has decided:

  • The impact of what you were reported for did not require a more significant intervention, and …
  • …based on the interaction(s), we believe that you will respond to the warning with a growth mindset.

Essentially, warnings are the feature that allow us to respond to a report with a gentle nudge in the right direction. Warnings do not change your login, use, or ability to federate as a Hachyderm user.

It is important to understand that, unlike some other systems, a “warning” is not something that is tracked for later punishment. This means that you are not accruing “strikes”, or similar, with every warning you receive. That said, please do keep in mind that we choose our actions to prevent recurrence of the same actions that have caused the community harm. So while there is not a strike system, this is not leeway to continue to do what you were reported for without change.

When the moderation team sends a warning, we always send a message with what we need from you. This may be a reminder to leave interactions that are not going how you hope or intend, or it may be a reminder to disengage if you’ve been asked to leave a conversation for whatever reason.

Freeze and Suspend

These are both actions that prevent you from using your Hachyderm account normally.

  • Freeze
    You will be able to log in, but will only be able to access your account settings, along with a message explaining that your account is otherwise still online. Basically this means that to an outside observer your account is still “up”.
  • Suspend
    When your account is suspended it will be taken offline. Your data, including media, posts, followers, and following, will remain intact for 30 days. After 30 days, it is automatically purged from the server.

In all except the most egregious situations, whether a moderator suspends or freezes an account is based on what we are seeing in the reported interaction. Specifically:

  • How severe is the impact of the reported interaction on our community and other communities?
  • Is the situation continuing to escalate, or at risk of doing so?
    • Which action, freeze or suspend, reduces the risk to our community and others?

If your account was frozen, you should always file an appeal. If your account was suspended, the situation of the suspension will determine if an appeal is possible. (Skip to The Appeals Process)

We want to call out that there are times that we temporarily suspend an account even if the rule broken is not severe. This would only happen when there is an impact to the server or community that would warrant and benefit from an immediate, visible, action and forced cool-down period.

Limit and Delete Post

Being limited means that you are hidden on our instance. Your posts are all still visible to your followers and can still be discovered when directly searched for. Your posts will not otherwise show up in the Local feed. You will be able to federate with other instances normally.

Delete Post is what it sounds like - we can delete the reported posts and associated media (if that media is uploaded to Hachyderm).

We do not typically Limit accounts or Delete Posts. We have a couple of reasons for this:

  • We do not want the moderation process to be passive; essentially, if you acted in a way that required intervention, we want to see that you are willing and able to rectify the situation without further intervention.
  • In the case of Limit in particular, we do not want the process to go on without resolution.

This means that, in general, we will use one of the other actions and communicate what we’d need from you in an appeal process to reinstate normal functioning of your account.

Reminder: not all moderation action is visible

Of the above actions, the only moderation actions that are visible are if a moderator deletes a post or suspends an account. When an issue is closed without action or when a user is warned, frozen, or limited, the action is not visible to the reporting user or other users.

The Appeals Process

You may respond to any moderator action with an appeal. In all cases except 1) severe rule violations or 2) creating a banned account type, it is in your interest to file an appeal to start or continue a conversation about what you were reported for.

When not to appeal

There are only two situations where filing an appeal will not be helpful. The first is when the harm done to the community was repeated (before it was caught) and the impact and risk to the community are high; in that case there is no benefit to you filing an appeal. This includes, but is not limited to, being in favor of systems of oppression, posting illegal content, harassment, etc.

The other situation is if the account is a type that we ban on our server and it was correctly flagged (please file an appeal if not). Common types of banned accounts are those that don’t abide by our NSFW Policy and Monetary Policy.

For clarity: if your account was suspended (or frozen) due to being either 1) an unrecognized special account type or 2) not following the rules for your account type (bots, companies, etc.), then you should file an appeal.

When to appeal

If the reason that your account was flagged for a rules violation was incorrect, you should file an appeal. We again ask for patience and understanding as we work with you to correct our mistake in that case.

Some cases for something like this:

  • An account that is flagged as a company or business (a corporate account), but is not.
  • An account that is a specialized account type, but one that is not directly allowed by our Account Types. Currently, accounts that are not specifically called out are not allowed. We request and recommend that anyone interested in creating an account in a grey area reach out to us at

There are other cases where we use freeze and suspend as tools to help de-escalate or otherwise resolve a situation in progress, but the freeze or suspension does not need to be permanent. Example situations for this:

  • A specialized account type not following the rules for that account type. (e.g. bot posting more than the limit, not using hashtags, etc.)
  • If a situation is escalating in a way that cannot be resolved without stopping what is happening while in progress. (e.g. A user accidentally spamming the server due to something misconfigured.)

Where possible and applicable, we try to include whatever needs to happen to unfreeze or unsuspend an account. Commonly, with frozen accounts, we may need you to agree to delete posts or similar to re-instate your account.

Whenever you file an appeal you should always email us due to limitations in the appeals process. (See next section.)

Limitations of the appeals process

Whenever you want to file an appeal you should both file the appeal and email us at

The reason for this is that the Appeal feature in the admin interface does not allow us to continue a conversation with you. When we receive an appeal we can only Approve or Reject. We cannot send an outbound communication when we take either action, nor can we request additional information to help us decide which action to take.

In order for us to know if your account was flagged incorrectly and why, or for us to be able to reinstate your account if it was frozen or suspended pending action we requested of you, we will need you to email us at Please also include a summary of the situation in the appeal itself, as that will remain tied to the appeal in the admin UI; this sets the initial context and the expectation that there are emails corresponding to the appeal.

4.2 - Blocklists

Community information about Blocklists and how Blocklists are built.

The Basics of Blocklists

General Purpose

Blocklists are one way that a Mastodon instance can handle unwanted content on the instance level. When a Mastodon domain is on the Blocklist, this means that the server administrators have limited or completely suspended activity with the server at that domain. The specific actions are:

  • Suspend
    This is the most commonly known one, as it prevents all instance-to-instance activity.
  • Limit
    Previously known as “silencing”. This means that accounts on the home server can follow accounts on the limited server, but that general content from that server does not show up on the Federated timeline.
  • Reject Media
    Media from the moderated server will not display on the home server; this includes not only media in the posts themselves but avatars and headers as well.

Mastodon describes these features in its documentation.

Moderator vs User Actions

Users can also take individual action to prevent themselves from seeing unwanted content by blocking or limiting other accounts on the user level. This is recommended for cases where the content is perhaps unwanted by an individual user, but that content does not violate the home server’s ethos.

Mastodon has additional documentation about actions that individual users can take on their Dealing With Unwanted Content documentation page.

How Hachyderm’s Blocklist is Built

At Hachyderm, we do our best to balance what actions should be taken at the instance level and what should be handled at the user level - both when it comes to our own users and when we receive reports of users on other instances or the instances themselves. The vast majority of the moderator action we have taken on servers on our Blocklist is to either silence/limit or suspend/block. The domains that are included on this list are:

  • Curated from a variety of published Blocklists on other Mastodon instances
  • Added based on user reports. This can mean either:
    • A domain has been reported, researched, and moderated individually, or
    • A high volume of reports regarding several users on an instance have led to an instance being researched and moderated

Although suspension is the most well-known and discussed moderation action, domains may be limited as well. For example, we might limit a server that has several bots that are taking over the Federated timeline but not suspend so users can continue to follow individual accounts.

Concerns that go into building the Blocklist

There are a few top-level concerns that go into deciding whether to add a server to the Blocklist:

  1. Does the ethos of a specific server violate the ethos of this server?
    A few easily understood examples would be if a server is anti-Black, anti-queer, or endorses either direct or dog-whistled hate speech and content.
  2. Is there a security concern around this server?
    As a broad example, if we see malicious traffic coming from a specific server or servers.
  3. What risk does the server pose to our users?
    We prioritize the safety and experience of our users that are in historically underrepresented groups.
  4. What do our user reports look like? Are they user level on a given server or are they indicative of malicious patterns server-wide?
  5. What level of moderation is needed? Limit/silence or block/suspend?

It is our opinion that it is in our users’ best interest to federate with as much of the Fediverse as possible so that we can all share our joys, sorrows, growth, learning, etc. with each other.

Our goal with the maintenance of the Blocklist is to ensure that all of our users are safe on Hachyderm. That means when we move forward with taking moderation action on a server, that we will take the best course of action to ensure that safety. We will prioritize the safety of our marginalized users over the broader experience of a completely open and unmoderated (at the server level) Fediverse.

When we take moderation action against a server, we consider:

  • The items called out in the numbered list above
  • Balancing open participation with curating a safe space

Concerns that go into transparency around the Blocklist

Some servers have reasons attached to their moderation action and others do not. In addition, we may or may not announce when we limit/silence or block/suspend individual instances. Why is this?

When we choose what level of notification to send, and how transparent to be, with moderation actions we consider:

  1. The impact of the change
  2. The interest level in the change
  3. The risk of publicizing/being transparent about the change

For example, if we were to take moderation action against any of the large, popular Mastodon instances we would err on the side of transparency as this would be a significant and user impacting event.

Whenever we take action on a server that has malicious activity, and this can be in the form of attacks on our server or in the form of social attacks like stalking and harassment, we err on the side of safety. This means what level of information we provide and how loudly (notifications, etc.) we provide it will be based on what is safest for all of our users.

The vast, vast majority of instances fall between these two extremes and thus the resulting decisions do not fall perfectly in the “fully transparent” or “completely silent” buckets. This means:

  • Servers may not have reasons attached to their moderation decisions, but that doesn’t mean they are a security or safety concern.
    If we attached reasons everywhere except where a security concern is involved, the absence of a reason would immediately out a server as a security concern.
  • Not all changes to individual server status will be loudly announced, but some will.
    We will use the decision-making process outlined above when announcing.
  • Not announcing that a server has changed status does not mean that server is or was a cause of concern. It can also, and frequently will, mean that the server size doesn’t impact enough of our users to make a large announcement.

Requesting Moderation Changes for a Server

What to do if there is a domain on the Blocklist in error

We are all human and are prone to mistakes. If there is a domain that is moderated on our Blocklist that seems to be in error, please open a GitHub Issue in our Community repo to request that we take another look at the domain. Please include as much relevant context as you can to help us make our decision. Note that depending on the circumstances, and as outlined above, we may not be able to be fully transparent with our decision - but we commit to erring on the side of transparency with these reports as often as possible. For more information about how to file a report in our community repo, please take a look at our Reporting Documentation.

What to do if you would like us to moderate a server

If there is a server that is not currently moderated, i.e. either limited/silenced or banned/suspended, then please file a report via the Hachyderm (Mastodon) UI or GitHub Issue in our Community repo for us to take a look at that domain or domains. As before, please include as much context as possible. If there is a concern around the domain(s) you would like to report that would be risky to report in our GitHub Issue tracker, please email us at For more information about filing reports and how to choose between the Mastodon UI and the GitHub Issue tracker, please look at our Reporting Documentation.

What not to do in either of these cases

There are far, far more Hachydermians than moderators. We do not follow tags, posts, etc. to make changes to our Blocklist - we only use the sources outlined above.

4.3 - Moderator Covenant

How Hachyderm moderators moderate Hachyderm.

This is the set of principles that Hachyderm moderators agree to inform their decisions and judgment calls when creating and maintaining Hachyderm as a safe space and enforcing server rules. This is because first and foremost:

Hachyderm moderators acknowledge the importance of server rules / Codes of Conduct that are complete and clear. Hachyderm moderators also acknowledge that the entirety of human behavior cannot be captured by an itemized list, no matter how many subsections it has, and therefore use the following principles to ensure that we are always able to take action even in situations where a reported infraction “falls between the cracks”.

  1. We will prioritize the vulnerable.
    All actions we take will prioritize the most vulnerable, full stop.
  2. We acknowledge that we will make mistakes.
    We acknowledge that we are not infallible. We will constantly be learning and growing and will respond to our mistakes with acknowledgement, care, and do what is necessary to undo or mitigate the harm done.
  3. We will moderate with respect.
    We will handle our communications with users and accounts that have been reported for moderation with respect.
  4. We acknowledge and understand that we are strangers on this pale blue dot.
    The reality is that the vast majority of user interactions are between strangers, even if familiarity increases with time. This means that in almost all reported situations, the reported user is a stranger to us as moderators and a stranger to the other users in the interaction (on a personal level, even if the account is recognizable). So while we will look at the reported user on the surface to try and understand possible intent, we acknowledge that it is not possible to use intent, or presumed / guesstimated intent, alone to inform what moderation action(s), if any, to take.
  5. We will prioritize impact over intent.
    Whenever we look into a reported interaction, we look at as much of the situation we can see. This means we do due diligence on seeing what, if any, factors are contributing to the situation and if that situation is escalating or at risk of escalating. Since we acknowledge that We Are Strangers, that means we are doing this based on an understanding of people, in general, and the intersectionalities at play. Regardless of intent or whether actions and words were purposeful, the targeted or affected person is still harmed. That’s why it is critical to prioritize impact and acknowledge the harm that was caused.
  6. We will trust, but verify.
    There is a saying that you need to believe someone when they tell you who they are. Individuals and communities make use of the reporting feature to tell us about other individuals and/or communities who have announced who they are in some way so we can take appropriate action. There are also rare occasions where individuals will use the reporting feature(s) as a vector of harassment or oppression against a targeted user and/or demographic. We balance these two realities by trusting that reports are filed with good intention, but verifying every time.
  7. We will hold Hachyderm users accountable for their actions.
    This is specific to the moderation context of when a reported user is a Hachydermian. When we communicate rule violation(s), we will also communicate what (if any) actions are needed on your part. To put it another way: if you acted in a way that requires moderator attention, you must take action to un-require that attention. The most common pattern here will be asking you to delete problematic posts or similar. Note that this will not be done in situations where it comes into conflict with Prioritizing the Vulnerable or Making Safety the Sustainable State. Also note that sometimes the action isn’t deleting posts, but changing a behavior. Two common patterns here are:
    • Asking a reported user to do some light research into the topic area that caused them to be reported. Small steps iterating over time increase our collective knowledge and our community’s ability to be safe and open.
    • Reminding a reported user that they can always walk away from an interaction that is not going the way they intend.
  8. We will steward safe spaces to allow for the range of human expression and experience.
    Since people are more likely to report negative emotions and perspectives than positive, this one will be explained by relevant examples:
    • We do not moderate people for being angry at systems of oppression functioning as designed, because that design is traumatic.
    • We do not moderate people for existing in public. This includes, but is not limited to, “acting Black”, “acting gay”, being visibly a member of a particular religion, and so on.
  9. We will not create the Paradox of Tolerance.
    Whenever there is a choice that needs to be made between the impact of individual actions and community safety, we will choose community safety.
  10. We will only take moderation action where doing so increases community safety and/or decreases community risk.
    For every report, we do an analysis to determine whether or not taking moderator action will improve community safety and/or decrease community risk. If the best action to take is to not react, then we will not react.
    For off-server users in particular we also recognize the limits of what we are able to moderate. Users on the Fediverse who did not agree to our server rules are not subject to them. In these cases we are solely evaluating what, if any, moderation action will protect our community and its members rather than evaluating if a user who never agreed to our specific rules is abiding by them.
  11. We understand that people need space and safety to grow.
    We understand that it is impossible for everyone to know everything, and that includes us. We do not expect our community to be experts on every facet of life, or experts in every form of social interaction.
  12. We will prioritize making safety the sustainable state.
    We will take actions to prevent users from being reported for the same, or similar, infractions.
  13. We will take actions to prevent learning at the community’s expense.
    • We will proactively learn and grow to prevent our growth as individuals and moderators from coming at the community’s expense.
    • We acknowledge when a user has been reported specifically for being harmful in the community, they have already caused that harm. While we Understand That People Need to Grow, we will not allow that growth to happen at the expense of the community. That means that when a user is reported for harmful action(s) and we determine there is a risk of future behavior, and/or that the user is not displaying a growth mindset when already prompted, that we will choose action(s) that Prioritize Making Safety the Sustainable State.

4.4 - Reporting Issues and Communicating with Moderators

How to report issues and interact with the moderation team.

There are three ways to correspond with the Hachyderm Moderation team:

  • The report feature in the Mastodon UI
  • The GitHub Community Issue tracker
  • Email

In general, the Mastodon UI (i.e. the “report” feature on is used for reporting specific posts, users, and domains. The GitHub Community Issue tracker is for other types of reports as well as raising other questions and conversations with the Hachyderm Moderation Team. Optionally, users may also send us information via email if neither Mastodon reports nor the GitHub Community Issues are appropriate for the conversation.


Our server rules still apply when filing a report or otherwise communicating with the moderation and infrastructure teams.

How and When to use Email

The moderation team should, in general, only be contacted via email to:

  • Supplement a report in the Mastodon UI
  • Provide a report or other communication that cannot be in a public forum, like the GitHub Issue tracker, and cannot be submitted via the Mastodon UI.
  • Request information about creating a specialized account.

In short: please prioritize using the Mastodon UI and/or GitHub Issues as often as possible. That said, if you need to reach out to the admin team for any of the above situations or another grey area, please use

How and When to use the Mastodon UI

The Mastodon UI, i.e. what you see when you’re using or your home Mastodon instance of choice, should generally be used for reporting issues that can be reported via reporting individual posts. This typically is used for:

  • Reporting individual posts but not the user overall
  • Reporting a user via their posts
  • Reporting a domain via the posts of their users

For information about the report feature, and what we see when you send us a report, please look at our Report Feature doc page.

What you should include in your report

  • Include specific posts where relevant
    • This includes if you’re reporting a specific user as an individual or as a general representation of a server dedicated to that type of behavior.
    • Note that it is important to include posts, where relevant, as the report feature keeps the posts even if the user or their server admin deletes the posts or suspends the user’s account. So if a user has been posting and deleting those posts, we won’t see them by looking at the timeline, but the UI will keep them if you include them.
  • Always include the context when you are prompted for Additional Comments, even if it is obvious. This can be as succinct as “spam”.
    • If you are sending us a report for a server violation and the posted content is not in English, please supply the translation and relevant context. In many cases, online translation tools can only directly translate the words but not the commonly understood (or dogwhistled) meaning. If you run out of characters, please submit the report and tell us you have emailed us, and email us at
    • If you are sending us a report with a short / one-word description, please make sure it correctly captures the situation. If the reported description does not align with what is included with the report, we will close the report.
    • If you are sending us a report of problematic content where the visuals may be traumatizing in and of themselves, you can choose not to include the posts but please always include what we will see when we look at the reported user’s account or the reported server. We have moderators opt-in to tasks like these when they appear.

What to know about the Additional Comments:

The most important limitation you should know is that the Additional Comments field has a character limit of 1000 characters (as of this writing). If you need to supply more context, or the translation takes more than 1000 characters, please:

  • File the report with what you can
  • Make sure to leave enough space to tell us there is a supplementary email
  • Email us at

Please note: if we receive an empty report and cannot see a clear cause, we will close the report without moderator action.

Limitations of the Mastodon Admin Interface

When we receive a report, we cannot follow up with the reporting user to ask for additional information using the admin tools.

How this impacts you:

If you are reporting an issue and do not include enough information and/or a way for us to get in touch with you to clarify, we might not be able to take the appropriate action. So please do make sure to include posts as needed, comments and context, and email us at as needed.

How and When to use the GitHub Issue Tracker

The Community’s GitHub Issues, a.k.a. Issue Tracker, is for communicating with the moderation and infrastructure teams, as needed. To create an issue:

  1. Go to
  2. Click on “New Issue” in the upper right side.
  3. Select one of the issue templates that applies to you / your situation
  4. Enter the information needed on the Issue. Depending on the template, there may be some prompts for what information should be included.

The Community Issues can still be used to report domains, as you would do in the UI. It can also be used to request emoji, report a service outage (you can also use for this), request updates / changes to the docs, and so on. There are issue templates for the most common issues that prompt users for the information we need to respond to requests efficiently. Depending on the nature of the request / discussion, a member of the infrastructure team and/or the moderation team will respond.

4.5 - Exceptions and Rule Changes

Steps to take to request an exception to an existing rule or a rule change.

The Rules

In accordance with the Moderator Covenant:

Hachyderm moderators acknowledge the importance of server rules / Codes of Conduct that are complete and clear. Hachyderm moderators also acknowledge that the entirety of human behavior cannot be captured by an itemized list, no matter how many subsections it has, and therefore use [a set of] principles to ensure that we are always able to take action even in situations where a reported infraction “falls between the cracks”.

This means that we will always do our best to take action on reports, but we also acknowledge that there are situations where the rules may not be clear or may not apply. In these cases, we will do our best to take action that is in the spirit of the rules and the spirit of the Hachyderm community. In addition, we will work to expand and clarify the rules as the community grows and the needs change.


If you would like to request an exception to the rules, please email us at with the following information:

  • Your username
  • A link to or description of the rule that you are impacted by
  • A description of why you believe you are an exception

Rule Changes

There are two routes for changing rules:

  • Create a new issue in the Hachyderm Community’s GitHub Issues
    • Select “Documentation Request” if you believe that the documentation associated with a specific rule needs to be updated
    • Select “General Suggestions” if you would like to propose a new rule or a modification to an existing rule
  • Create a pull request with the proposed update
    1. When viewing the documentation, click the “Edit this page” link in the right-hand menu of the page (only visible on desktop)
    2. Make the changes you would like to see
    3. Commit the changes to your fork
    4. Create a pull request with the changes

5 - Rule Explainer

Expanded explanation of the rules that govern user conduct on Hachyderm.

The Rule Explainer

The Rules / Code of Conduct are the most important thing to understand when on Hachyderm. A quick recap of the rules:

  1. Don’t be a dick
  2. No hacking
  3. No violence
  4. No fascism
  5. No colonialism
  6. No white supremacy
  7. No religious extremism
  8. No nationalism
  9. No racism
  10. No homophobia
  11. No transphobia
  12. Safe space: LGBTQIA+
  13. Safe space: neurodivergent (ADHD, Autism, etc.)

As a general rule to keep in mind:

If your intended post is in a grey area for one of these rules, then err on the side of caution and reach out to the mods via our GitHub issue tracker.

Don’t be a dick

Hachyderm and its surrounding community have found safety and value in a primary guiding principle that is often hard to interpret.

Don’t be a dick.

We believe that everyone knows when they are “being a dick”, and we do not tolerate this level of aggression towards our community.

In short, we believe that you know if you are being a dick, and therefore you should be able to stop. If you do not stop, you will no longer be welcome on Hachyderm.

Being a dick is measured by “self control” and “intent”.

Are you able to restrain yourself? What are your intentions?

If your intentions are to hurt, brandish, slander, diminish, insult, or offend someone you are likely being a dick. If you are commenting on anyone’s “tone” or whether or not they are “having a civilized” discussion you are also being a dick. We believe you should be able to restrain yourself in these situations. If you do not restrain yourself, we believe that you are likely being a dick.

Being a dick is the opposite of respect. We believe all “bad takes” on a topic can be voiced with respect. We hold each member of the community accountable for managing themselves and finding a respectful way of communicating their contrarian views.

We expect all members to be respectful and thus, not be a dick.

What about “shitposting”?

The Oxford dictionary defines shitposting as:

the activity of posting deliberately provocative or off-topic comments on social media, typically in order to upset others or distract from the main conversation.

In short, shitposting is allowed on Hachyderm.

However, shitposting can easily violate our “don’t be a dick policy” if it turns into offensive trolling at the expense of others.

The key factor in distinguishing between shitposting and being a dick is “at the expense of others”.

We believe that any time clout, attention, credibility, fortune, or fame is gained “at the expense of others” it is likely close to a violation of our “no colonialism” rule or a violation of our “don’t be a dick” policy.

We don’t want to be the “shitpost” police. We encourage you to shitpost, just not at the expense of others.

No hacking

All usage of Hachyderm as a service should fall under “expected use”. Explicitly, users should not attempt to hack, expose vulnerabilities on our infrastructure, attack our infrastructure, or use our infrastructure to do these actions toward other servers or entities.

In terms of discussions, users should not discuss the specific mechanics of how to attack or compromise our own or other services. Generic discussions in the infosec space are welcome, especially if they are formatted to educate e.g. “here is how to protect against XSS attacks” or “here’s what you should know about the OWASP Top Ten”. Discussions around publicly known attacks and vulnerabilities are welcome, again only if the intent is to educate. “Here’s what you should know about Big Data Breach Du Jour ™️ “.

No one should ever discuss infosec knowledge to which they have privileged access, e.g. an unpublished vulnerability that an entity or community hasn’t had the chance to address, whether or not they have been notified. The same applies to other responsible disclosure best practices in this space.

As the moderator team is made up of volunteers, here is what this means for reports that come through as violating this rule: sharing privileged information and compromising other users or services, whether via information or action, are activities that will warrant account moderation. We may not always be able to verify whether reported information is “privileged”, so we will always err on the side of caution with these reports and handle them from that posture.

What you should do if reporting this type of activity

Make sure to include as much information as possible in the report, including any and all posts or links to other relevant information so we are able to quickly and easily see the severity and scope of the issue being reported.

What you should do if you were reported and you believe it was in error

If you were reported for hacking as defined above, make sure to include as much information in your response to us as possible. For example, if you were reported for disclosing privileged information that is in fact public, include links to that effect. Just as we need the report to make it as easy as possible for us to “trust but verify”, we need you to make your appeal as easy as possible for us to “trust but verify”.

What you should do if you are interested in helping with security

If you are a security specialist and interested in joining the volunteer mods to help us keep our service secure, please reach out to us in our GitHub issue tracker to let us know your intent and we can figure out further conversations from there.

No violence

Violence can include words and actions, and can be between adult humans, adult humans abusing children, or humans of any age abusing animals. Specifically the following are not tolerated:

  • Causing intentional injury or threatening to do so
  • Any form of harassment
  • Any form of stalking
  • Behaviors that result in emotional distress or fear
  • Encouraging violent behavior
  • Posts in favor of domestic or state violence
  • Posts in favor of animal abuse or cruelty
  • Posts in favor of child abuse or cruelty
  • Images, videos, or descriptions of the above

No fascism

Toots in favor of fascism either abstractly or concretely are not allowed. This includes sharing material and media that is in favor of fascism.

No colonialism

The history of colonialism worldwide is layer upon layer of suffering. Discussions that whitewash or otherwise diminish or deny this history will not be tolerated.

No white supremacy

White supremacy and anti-Blackness are not welcome on this server. Not all forms of white supremacy are easily filtered out by the No Violence rule, although there are obvious overlaps. Systems of oppression rely on multiple layers of support, starting with more covert forms and escalating to more overt ones.

To clarify, this is frequently represented by a pyramid that represents the more “socially acceptable/practiced” (note: not here) and “less socially acceptable” forms of racism:

Pyramid of covert and overt racism, listing many supporting forms at the bottom, including colorblind racism, and moving to acts of violence at the top. Pyramid fully described in linked text below.

This image is borrowed from the Religion and Race hub article on Overt and Covert Racism. The article fully describes each of the terms and concepts listed on the pyramid. To be clear: all forms of white supremacy, be they overt or covert are unwelcome on this server and will result in being banned.

No religious extremism

Religious extremism is not tolerated on this server. Just like white supremacy, there are layers to religious extremism that start with supporting forms and escalate toward violence. Also like white supremacy, we do not tolerate supporting forms of religious extremism any more than we do overt religious extremism. There isn’t a handy infographic to share for this one yet, but what this generally means is:

  • Attacking / demeaning others’ religion is not tolerated
  • Supporting violence towards the participants of a religion is not tolerated
  • Spreading hateful speech regarding religion is not tolerated
  • To clarify potential grey areas: atheism, as the absence of religion, as well as agnosticism is covered by the above. Other philosophies that could be considered “religion adjacent”, like secular humanism, are also protected by this rule.
  • Spreading support for a subset of one religion spreading hate, violence, etc. against other religions or philosophies is not tolerated.
  • Supporting or spreading support of one religion being “dominant” over others
  • Supporting or spreading support for religiously motivated hate speech or violence

As a point of clarity: this does not mean that people cannot analyze or otherwise discuss religion in this space. It simply cannot cross the line into causing harm, and requires good boundaries and healthy respect by all active participants. All participants must also keep in mind that their conversation will be visible to many, many non-participants in the Fediverse.

No nationalism

What we are trying to distinguish here is, similar to above, extremist ideology. As a quick example, you might feel pride that your country has universal healthcare. Awesome! This is fine. If you have made your national identity your whole identity, and use that identity to harm others either by direct action or proxy action, then that would be the “extremism” aspect that is disallowed on this server.

No racism

A lot of this rule overlaps with No White Supremacy; however as an extension and clarification from that rule: discrimination on the basis of race or ethnicity is not tolerated on this server. Neither will hate speech or propagating hateful ideas on the basis of race or ethnicity be tolerated.

  • Attacking / demeaning others’ race or ethnicity is not tolerated
  • Supporting violence on the basis of race or ethnicity is not tolerated
  • Spreading hateful speech regarding race or ethnicity is not tolerated
  • Spreading support for dominance of one race or ethnicity over others is not tolerated
  • Supporting or spreading support for racially or ethnically motivated hate speech or violence is not tolerated
  • Supporting or spreading current or past historical dehumanizing tropes on the basis of race or ethnicity will not be tolerated

No homophobia

Homophobia in any form is not tolerated on this server. Some specifics:

  • Attacking / demeaning others’ sexual orientation is not tolerated
  • Supporting violence towards people on the basis of sexual orientation is not tolerated
  • Spreading hateful speech regarding sexual orientation is not tolerated
  • Supporting or spreading support indicating one sexual orientation as “the only correct” one, or “only natural” one, etc. is not tolerated
  • Supporting or spreading support for hate speech or violence against a specific sexual orientation, is not tolerated
  • As a few points of clarity:
    • All those who are not heterosexual are protected by this rule, even if they do not identify as specifically gay
    • Those who identify as asexual, whether this is considered a sexual orientation or the absence of one, are protected by this rule
    • Romantic attraction, which is separate from sexual orientation, is also protected by this rule

No transphobia

Transphobia in any form is not tolerated on this server. Some specifics:

  • Attacking / demeaning others’ gender is not tolerated
  • Supporting violence towards specific genders is not tolerated
  • Spreading gender-based hate speech is not tolerated
  • Supporting or spreading support of misinformation around gender identity will not be tolerated.
  • Supporting or spreading support of cruelty to transgender children and adults will not be tolerated
  • The promotion, endorsement, or suggestion of conversion therapy or any related programs or practices is not tolerated
  • Deadnaming, which involves referring to a transgender or gender-diverse person by a name they used before their gender transition, is not tolerated
  • Those on the gender spectrum, who may not identify as trans but who may be non-binary, genderfluid, etc. are protected by this rule.

Safe Space: LGBTQIA+ and Neurodivergent

These are being worked on to expand on the above.

The three grey areas

Current events, past events, and personal experience

Hachydermians may need to discuss some topics on the above “No” list from time to time. Whether it is sharing current news, feelings about current news, personal experiences and/or impacts, or making historical connections the fact is that the world is a bit on fire. Conversations from these perspectives are welcome, however: use content warnings on text and media whenever discussing or sharing anything that might fall under these rules. If you are unsure, use a content warning as that is why the feature is there. This is how we as a community can give space to as many as we can: both those who need to talk about certain topics and spread knowledge about them as well as protect those who are actively experiencing, or recovering from, the topic in the discussion. To put it another way: one person’s case study is another person’s life and trauma, treat these conversations with respect.

6 - Sexual Content

Policies and thoughts on sexual content, NSFW content, 18+ content, sexually charged content, sexual imagery, and its relationship to Hachyderm users and marginalized communities.

Sex is everywhere, and we have a pragmatic and mature relationship with sex and sexual content at Hachyderm. We want Hachyderm to be a home for science, technology, and collaboration while also embracing the beauty and depth of all walks of sexual life.

Modern society functions with deep-rooted, historical sexual norms that are exploitative, unfair, and harmful towards marginalized people, and the danger to these individuals is often overlooked. Product marketing, sales tactics, and even corporate policies are often structured in ways that allow harmful culture to thrive, often in the name of hetero-normative traditions and expectations.

Many parts of the industry operate with a bias towards keeping traditional and hetero-normative sexual culture protected while demonizing sex work, kink culture, queer culture, polyamory, ethical non-monogamy, furry, lesbian, gay, bisexual, pansexual, transgender, or any other sexual area of what we refer to as “The Alphabet”.

We understand that sex work is real work, and we view sex workers as just as professional as the rest of our users. We deeply believe in validating, legitimizing, and confirming many sexually marginalized cultures on our platform. We understand that silencing sexual content is typically used as a weapon to target gender minorities and sexual minorities.

Our Policy

To be candid, our general policy on sexual content is:

keep it legal, and practice consent.

Absolutely under no circumstance is 18+ content allowed without a content warning.

We believe the first step in understanding how to navigate sexual content is to be a good steward of the content. We expect all Hachydermians to lead by example with regard to consent.

In other words we expect all Hachydermians to have a mature understanding of consent, and to practice consent in their daily usage of the service by offering their content with a content warning.

The first step in practicing consent is learning how to listen to the needs of others, and be respectful and cognizant of their needs. It is important that everyone understands that their own personal definition of sexually charged content will likely be different from others. We encourage users to be respectful, and ask for consent before sharing images, topics, or content that is potentially outside the scope for other users.

Additionally, we encourage anyone who finds something they deem out of scope or jarring to consider how they manage the situation.

NSFW Content

Each corporation has unique rules and policies in place to manage content that is not safe to access while on the job. It is impossible for Hachyderm to police this content on behalf of all the corporations in the world.

Here we draw on our general advice to practice consent by putting any content that you suspect to not be safe for work behind a content warning, such that users can opt in to the content.

The Law

There are many global laws in place that attempt to measure and determine what is and isn’t sexually acceptable. We acknowledge that many of these laws are structured in ways to unfairly harm marginalized communities.

These laws vary drastically by country, and in some cases by state, or city codes. It is every user’s responsibility to adhere to their local laws to the best of their ability.

Do not, under any circumstance, deliberately violate the law, or your account and your content will be permanently destroyed. Hachyderm, and our governing body and legal entity, assume no accountability, liability, or responsibility for our users’ content.

The general advice we have for our users is to keep any content that is potentially legally restricted to 18+ audiences well protected and well advertised. We want your sexual content to be obnoxiously “loud” about how well it is protected from the general public, and how well you practice opt-in and consensual techniques.

To be clear, we ask that any sexual content, imagery, videos, language, or suggestive content be clearly marked “18+”.

In the event we find a user is sharing content that puts Hachyderm at legal risk, we will step in and moderate accordingly. The best way to prevent your account from being moderated is to practice marking any questionable content as “18+” as clearly and often as possible.

In the event there are noticeable instances or accounts that post 18+ content without consent, we will likely step in and moderate accordingly.

Limitations with Mastodon

Currently, Mastodon does not allow users to:

  1. Specify that they are 18 or over
  2. Mark that content is specifically 18+ or otherwise NSFW

Content Warnings, as a feature, do help by allowing users to make use of warnings to obscure text and visual media with an explanation. That said, by itself the Content Warning feature does not comply with what is legally required to prevent minors from seeing content the law considers to be 18+.

Since Hachyderm is geared for adult professionals, in order to be legally compliant and accommodate the limitation in the Mastodon tools, servers deemed to be 18+ will be either Limited or set to Reject Media.

Staying on Topic

We put the decision-making power in the hands of the community regarding what content they choose to share while operating an account on Hachyderm. However, we do want to remind every user that Hachyderm is a place for technical professionals who will likely be looking at their federated timelines at the office.

If you are interested in sharing high volumes of sexually charged imagery, pornography, or pictures, videos, and language specifically about sex, having sex, or fantasizing about sex, Hachyderm might not be the best home for you. Ultimately “how much is too much” will be up to our moderators and our users.

Many Hachyderm mods, operators, and volunteers operate alternative accounts on other instances where they isolate their off-topic hobbies and interests such as full nudity, kink culture, sexual content, BDSM, and more. Additionally many Hachyderm mods, operators, and volunteers openly bring their sex work, dom/sub dynamics and kink articles to their work on the project. We believe we can stay on topic, while also being respectful of the vast array of sexual dynamics in the world.

General Advice

Our general advice is to not post adult 18+ content that is intended to provoke sexual arousal or openly invite behavior that is legally considered to be 18+.

In other words, don’t post adult content unless you are confident you are practicing consent and that your content is relevant to the mission, topics, and goals of Hachyderm.

We can’t guarantee that our users are 18+. If you continually post 18+ adult content, you are likely putting the service at risk, and you will likely be moderated or asked to leave Hachyderm.

7 - Monetary Posts

Policies and thoughts on monetarily focused posts (including but not limited to: mutual aid, charity fundraisers, affiliate links), and their relationship to Hachyderm users and marginalized communities.

All of us are part of a larger capitalistic society. As a result, we acknowledge that conversations will arise that are centered around money or the need for money. We also acknowledge that many of our users are marginalized and may need financial assistance. We want Hachyderm to be a home for science, technology, and collaboration while also embracing the incredible capacity for us as humans to care for one another.

In order to support this philosophy, we have a few guidelines for how we expect users to engage with one another when it comes to money.

The Platinum Rule

Treat others as they wish to be treated.

For many people, money is a sensitive topic. We ask that you be mindful of this when engaging with others. If you are unsure of how to engage with someone, ask them. If you are unsure of how to engage with a group, ask them. If you are unsure of how to engage with a community, ask them.

In the case of Hachyderm, we ask that you be considerate of people’s timelines by:

  • Monitoring the frequency of your posts
  • Using summary sentences to describe your post
  • Using content warnings to hide the details of your post when appropriate

Mutual Aid


Mutual aid is a form of solidarity-based support that is centered around the idea that we all have something to offer and we all have something we need. Mutual aid is not charity. Mutual aid is not a handout. Mutual aid is a way for us to support one another in a way that is not hierarchical or exploitative.

Hachyderm unequivocally supports all calls for mutual aid. -Nóva

As a community that focuses on being a safe space and supporting the needs of marginalized people, we support mutual aid efforts. The rules below help to ensure individuals are not exploiting the needs of marginalized people to the benefit of others.

Things to keep in mind when posting about mutual aid:

  • Mutual aid is not charity. Mutual aid is not a handout. Mutual aid is a way for us to support one another without demanding status or favors over another being.
  • Mutual aid typically involves a one-to-one interaction. However, please remember your mental health as you interact with the community and only share what you are comfortable with discussing.

Posting about Mutual Aid

  • You should include “#MutualAid” either at the beginning or end of your post.
  • Mutual aid posts should still be appropriately marked with content warnings when applicable, but you can add mutual aid to the title to help with engagement. For example, if the mutual aid need is related to domestic abuse, you could use the Mutual Aid: Domestic Abuse content warning.

Charity Fundraisers


Charity fundraisers are a way for us to support organizations that are doing work that we believe in. Charity fundraisers are not typically a way for us to support individuals.

Posting about Charity Fundraisers

While we are appreciative of the incredible work that large organizations can do for individuals in need, specialized accounts that are focused on charity fundraisers are not permitted at this time as we do not have the capacity to vet each organization.

All other user posts related to charity fundraisers should include “#FediFundraising” either at the beginning or end of the post.


Affiliate Links

Affiliate links allow an individual to earn a commission when someone purchases a product or service through their link. Affiliate links are a way for you to earn money without having to create a product or service.

Typically users that are posting affiliate links are doing so from a business capacity.

Posting about Affiliate Links

  • You must include “#AffiliateLinks” either at the beginning or end of your post.
  • Accounts that are utilizing affiliate links will be moderated based on the Specialized Account Expectations. Should you choose to utilize affiliate links, you will be expected to follow the guidelines outlined in the Specialized Account Expectations.
  • Posts that include affiliate links may not be made through an automation. All posts must be made by a human.
  • Posts that include affiliate links must be made by the individual that is receiving the commission. You may not post affiliate links on behalf of another individual.



Crowdfunding

Crowdfunding is a way for individuals to raise money for a specific project or cause. Crowdfunding is typically done through a platform such as Kickstarter, Indiegogo, or GoFundMe.

Posting about Crowdfunding

Please use the below matrix to determine how to interact with crowdfunding posts.

Mutual aid crowdfunding:
  • If the poster financially benefits: Crowdfunding for a mutual aid effort is allowed. Please review the mutual aid section above.
  • If the poster does not financially benefit: Please ensure that the individual(s) impacted are okay with the distribution of their information.

Crowdfunding that is not mutual aid:
  • If the poster financially benefits: As a community that focuses on science and technology, we want to support the incredible work that is being done by our users. However, we ask that you be mindful of the frequency of your posts and review the Specialized Account Expectations for more information. For non-tech crowdfunding, we recommend an alternative platform.
  • If the poster does not financially benefit: We are excited that you are excited!

8 - Account Types

Permissible Hachyderm account types and features, with an account FAQ.

Hachyderm generalizes accounts into one of two account types for moderation: User Accounts and Specialized Accounts.

Specialized accounts are, broadly, any account where the account is not for the purpose of an individual user who represents themselves on the platform. Some examples of specialized accounts include, but aren’t limited to: novelty accounts, bot accounts, corporate accounts, event accounts, project accounts, and so on.

For specialized accounts: we recognize, and have rules around, some types. If an account is not a user account and not one of the recognized specialized account types below, then it is not an account type permitted on this server. If you are interested in creating an account that is not a user account or a recognized specialized account type, then you need to reach out to us at

Unrecognized specialized accounts will be removed. Recognized account types that do not follow the rules of their account type will be moderated as needed.

Note: The Hachyderm Moderation Team does not try to proactively determine your account type.

In most cases, unless you’ve reached out to us via, then the first time we encounter your account is when it has been reported for a rules violation based on your perceived account type. In general, unless we see a discrepancy that indicates otherwise, we will moderate based on that perceived type.

For all account types: username parking is prohibited on this server. Please only create an account that is in active use, even if only lurking. Accounts that appear to be “parked” will be suspended.

User Accounts

User accounts are also referred to as General Accounts, which are briefly described on our General Accounts page.

Similar to all other accounts, General Accounts must follow server rules.

There are some exceptions that would result in General Accounts being moderated more strictly:

  • Accounts Causing Confusion
  • Influencer Accounts

Please review the below sections for details about each of these exceptions.

Accounts Causing Confusion

If your account has habitual posts that result in confusion about the purpose of the account, your account could be subject to moderation. It’s recommended that you review the list of Recognized Specialized Account Categories and the list of Disallowed Specialized Account Categories to confirm that a significant portion of your posts don’t fall into these categories.

In addition, if you are an individual user then you shouldn’t have a username or an about page that might indicate that you are a company, project, or other form of specialized account. As a specific example, if you have an account name like MyUserCompany or MyPodcast, then you will be considered that specialized account type based on name or about page and be bound by those rules.

As always, our group of volunteer moderators will do their best to abide by the Moderator Covenant.

Influencer Accounts = Large Userbase Accounts

Anyone who has more than 10K followers on an existing platform is what we refer to as an Influencer Account. The only special rule here is that we request that you reach out to us in advance at before migrating as we will likely need to assist with the migration to prevent follows getting “stuck”, etc.

Specialized Accounts

The common rules for all specialized account types are:

  • Your username and display name must clearly indicate what you are.
    This means for companies, OSS projects, community events, podcasts, newsletters, etc. then the username must match the company / project name. For most other account types this means “describe what you are” via username, e.g. “CuratedTechJobPosts”.
  • You must be verified with your company, project, etc. page.
    If your company, project, newsletter, etc. has a web page, then that web page must be verified on your profile.
  • All specialized accounts are welcome to become verified with Hachyderm.

Guiding Principles for All Specialized Accounts

Similar to the way that Don't Be A Dick is the guiding principle for user accounts, with other rules calling out common "Whataboutisms" for that rule, specialized accounts also have three main guiding principles.

We don’t want a digital shouting match on Hachyderm.

What does this mean? Essentially, it means we recognize that some of the motivation behind creating highly repetitive posts is because Entity A posts N times, so Entity B posts M times to be visible over Entity A. And so on. The result is a posting race so that an entity’s posts can be “heard” above the “cacophony” of other posts.

The goal of some of the rules and restrictions we impose on specialized account types is to curtail a discordant symphony of competition that drowns out collaboration and connection.

We don’t want to host an ad server.

The goal of the remaining rules and restrictions that we impose on specialized account types is to prevent hosting an abundance of ad and marketing focused content. The world is plastered in ads, but we believe what is needed and craved is connection and collaboration. With Hachyderm, we seek to nurture a space where these can thrive.

The goal of the rules related to this principle is not to prevent accounts that also have services and ideas to sell from interacting on our platform. Instead, it is our hope and design that the accounts this is relevant to reimagine how they can interact with our community in a way that fosters connection and collaboration with Hachydermians.

We want community consent to be the default.

There is a lot of content on the internet that can be either harmful or fun, depending on its frequency and presentation. Bots, newsletters, and so on are excellent examples of this: having a bot respond with "honk" when summoned can get a laugh; a bot that auto-responds with gas prices whenever you mention certain keywords cannot. When we are moderating specialized accounts, and drafting policies around them, the goal is to put that account type in alignment with user consent so that people in our community can effortlessly opt into and out of the content as often as each person requires.

This doesn’t mean that you can’t post about what is relevant to your account type. This does mean we ask you do so in a way that engages with humans the way you’d wish to be interacted with.

These principles are outlined further in our Specialized Account Expectations. Some specialized accounts need to agree to the Specialized Account Expectations as part of account creation; others, only if they become verified.

Recognized Specialized Account Categories

  • Corporate - For businesses, companies, and similar.
  • Events - For community events, including conferences and meetups.
  • Projects - For Open Source Projects.
  • Curated - These can be accounts that are re-posting / hosting other content, for example a curated newsletter, or for individuals that are creating and hosting their own content. Examples of curated content can include, but aren’t limited to: newsletters, podcasts, and streamers.
  • Automated - Most commonly, but not exclusively, bot accounts. Any posts that are programmatic and/or scheduled in nature.
  • Hybrid - This is when an account falls under more than one category. Typically, but not exclusively, this is companies, events, projects, etc. making use of automated posts. In this case, the account must adhere to all the rules for the different types they house.

Of the above, only the corporate accounts are invite-only / require approval. That said, we do recommend that all specialized accounts reach out to us prior to creating an account so we may assist with the expectation setting for your account type.

Disallowed Specialized Account Categories

The following are disallowed as general content on the server, thus are also disallowed as a dedicated account type.

  • NSFW / 18+ - This is mostly due to our need to be able to comply with 18+ requirements, which isn't supported in the Mastodon tooling at this time. (See our Sexual Content Policy for more information.)
  • Fundraising - No accounts can fundraise on Hachyderm.
  • Unofficial company / corporate accounts - As this account type is invite only, unapproved accounts will be banned.
  • Unrecognized specialized account - Essentially, if you are not a regular user and not one of the explicitly allowed types, you should reach out to us prior to making an account.

Frequently Asked Questions

Below are the FAQ for all Hachyderm accounts.

Can I post about my company?

Yes. Posting about your company is different from a corporate account. Feel free to say whatever you want about your company.

Can I post about my project?

Yes. Posting about your project is different from an open source account. Feel free to say whatever you want about your project.

Can I create a bot account?

Not without working directly with the moderators.

Can I create an anonymous account?

Yes. However, we do not want a single person to operate a high number of accounts.

We understand the importance of being anonymous from a security, gender, safety, and mental health perspective.

Can I create multiple accounts?

Yes, within reason. We expect you to use your best judgement and not flood our servers. We understand you might need more than one account; however, it is not a free-for-all. We understand that security, gender, safety, mental health, and keeping a safe space separate from work are all reasons to create multiple or anonymous accounts.

Can I create accounts for pets, characters, or imaginary friends?

We prefer if you didn’t. We are trying to build a curated network of professionals on Hachyderm and would encourage you to move these accounts to another instance.

8.1 - Specialized Account Expectations

Formerly referred to as the Corporate Covenant. We've renamed it, as account types other than corporations must now adhere to these expectations, but we have preserved much of the original intent.

The following account types should adhere to the expectations as part of signing up for Hachyderm:

Expectations for Specialized Accounts

If you are a corporate account, or you are a specialized account that is going to be verified, we need you to understand and adhere to our guiding principles in addition to the rules governing your account type.

1. Be a Hachydermian: Fines Doubled In Work Zone

We expect you to set the bar for what it means to be a Hachydermian, and the rules apply extra to you.

I agree to be a steward and role model in the Hachyderm community. I understand that my presence as a specialized account implies the rules will be more strictly applied to me than normal accounts.

The Hachyderm community is a diverse community comprised of many people with many economic beliefs and situations.

For companies in particular: We expect you to have a mature understanding of various economic structures around the world, and find a strong balance in your position as a corporation.

In the Hachyderm community, you will find strong capitalists and even anti-capitalists. We expect you to consider these different perspectives while creating content on Hachyderm.

As a specialized account, both our rules and the rules governing your account type apply more to you than to the average user. This means that the consequences of a rule violation apply more to you than to others. In other words, your "fines will be doubled" if you end up violating our rules. We will warn you first that you have violated a guiding principle, and if it continues, we will ask you to leave.

Ultimately, you are setting the bar for Hachyderm users, the exclusive corporate accounts, as well as other specialized accounts.

2. Be Who You Want To Work With

We expect you to do unto the community as you would have done unto you.

“Create the market you would want to shop at”.

I agree to build a market that I would want to shop at. I agree to set the bar for having world-class engagement on Hachyderm.

We find value in drawing the analogy to forced advertising at the gas station.

As a gas station owner, forced messaging makes sense because of the higher return on advertising.

As a gas station customer, there is nothing more annoying than being forced to listen to an ad while pumping gas.

We want people to want to shop at your gas station, which means you need to consider their experience first.

We have high expectations for our community filling their cars with your metaphorical gas.

3. Say It Once: “The Wikipedia” policy

Measuring spam is hard. We expect you to stay away from repetitive content.

I agree to share new content and do my best to prevent repetitive content. I understand that words can be changed but messaging can remain consistent. I agree to only share my message once.

We don’t have time to chase you around.

We find value in what Wikipedia calls Use our own words.

The best practice is to research the most reliable sources on the topic and summarize what they say in your own words.

We understand that while technically not repeating the exact same words, you can deliver the same message.

We expect you to say your message once, and not repeat the same message time and time again.

We are explicitly trying to avoid the “cron job” effect that we see on Twitter with promotional accounts.

When in doubt, craft new words and form a new opinion. Otherwise, it likely is repetitive content.

In short, we hold you to a high bar to be “the first of your kind”, otherwise we expect you to be contributing to the existing resource when applicable.

We understand that this impacts some specialized account types more so than others. We expect all specialized accounts to use good judgement when they need to approach this rule.

We don’t want Hachyderm to turn into a promotional recycling center. We want you to be either sharing new content, or mentioning the original source and adding to it.

4. “We Know It When We See It” Policy

Ultimately, the community comes first and the community’s response to your content will be our guiding light.

I understand that policies are difficult to enforce. If it is made obvious to me that Hachyderm is no longer benefiting from my presence, I will politely step down as a specialized account and move to another server.

If your content becomes “invasive” or “unwanted” or seems to be causing problems, we will ask you to move your account to another server.

Basically, we look at “bad actors” like we look at pornography. We know it when we see it. We reserve the right to ask you to leave at any time.

5. Mods are Volunteers, Not Hall Monitors

Your account needs to be a win for everyone involved, including our moderators.

I understand that it is my job to self-moderate. If moderation from Hachyderm is involved, we have failed twice: first in self-moderation, and second in violating a specific policy.

Ideally, a specialized account should be helping to keep bad actors out while also setting an extremely high bar of what it means to be a good community member.

If at any time the Hachyderm moderators are spending more time chasing your account around than our community is getting out of your presence, it’s time to go.

Our moderators are here to keep the community safe.

8.2 - Specialized Account Verification with Hachyderm

How to become a verified, specialized, account with Hachyderm.

Who should verify?

Currently, only a subset of our specialized accounts are required to verify. That said, we recommend that all specialized accounts (influencer accounts on a case-by-case basis) go through the verification process so that Hachyderm users know that your account is officially recognized and that you’ve agreed to the Specialized Account Expectations.

Required and optional verification

While we recommend all accounts that are not user accounts apply to be verified with us, only the following are required to be verified:

Accounts that will likely want to consider being verified:

  • Community Events
  • Open Source Projects
  • Any account that may appear similar to a corporate account (but is not). This may include:
    • Some of the non-profits we’ve welcomed on our instance
  • Some of our Influencer Accounts (though not all)
    • Note that verification is not open to general users at this time.
    • In general, this is only if your personal brand or posting style could be conflated with one of the recognized specialized account types. If you’re using Hachyderm as a general user and happen to have a large following, you likely would not want to be verified at this time.

When to verify?

This process was launched in February 2023. We ask existing accounts that need to be verified to do so by the end of March 2023.

What does a verified account look like?

All specialized accounts that have been verified by the Hachyderm moderation team will have a verified link in their profile links. This link should be titled "Approved" on the account bio:

Screenshot of the approved links section of a profile. There are two verified links: one pointing to the company's website, the other being the "Approved" link.

The Application Process

Steps to getting approved

This is only for specialized accounts. We are not accepting requests for individual users at this time.

Part 1: Get Pre-Approved

To get started, you will create an issue that tracks the full process from the creation of your account to the completion of the approval process.

  1. Read the Specialized Account Expectations
  2. Open a “Specialized Account Approval” issue from our Community Issue Tracker
  3. Include your agreement to the Specialized Account Expectations in the GitHub issue
  4. Be patient (we are a group of volunteer mods and will reach out as soon as we can)
  5. We make a decision
    • If you are not approved, we will provide a reason and close the GitHub issue.
    • If you are approved, you can now create your account and then move on to Part 2 below
      • Note: you should leave the GitHub issue open.

Do not pre-emptively put the Approval URL in your profile metadata. Due to the way that verification works, Mastodon will only try to verify a URL once per server, at the moment you add it. This means that Hachyderm's instance would check the URL before your approval is live and would not re-check it later. You will be prompted to add the URL at the appropriate step below so the URL is live when Hachyderm checks.

If you’ve pre-emptively added the field to the account profile, you’ll need to re-save after the approval URL is live with your account listed in Part 2 below.

Part 2: Finishing account setup

  1. Ensure you’re already verified with your official domain
    • Do not remove the reference link. Moderators will be using this during their validation process.
  2. We will add your verification tag under the appropriate header in the approved file. (Please be patient, we are a group of volunteers and will update as soon as we can.)
  3. We will update the GitHub issue when this is done.
  4. Using the verification steps, add the following to your profile metadata
  5. Verify that you can see the appropriate field highlighted in green and the green check mark
  6. Once you have added the above to your profile metadata and verification is successful, please update and close the GitHub issue.
    • Do not remove the approved link once you have completed the process. This field lets our users know you’re an approved account and will prevent your account from being reported as an unapproved account.
  7. Optional, but recommended: share that you're an approved account, and mention that this means you've agreed to the Specialized Account Expectations!
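For the curious: Mastodon's link verification works by fetching the page behind a profile metadata link and looking for a link back to your profile that carries rel="me". Below is a minimal sketch of that back-link check using only the Python standard library; the function names are our own, not Mastodon's, and the actual page fetch is omitted.

```python
from html.parser import HTMLParser


class RelMeFinder(HTMLParser):
    """Collect href values of <a>/<link> elements that carry rel="me"."""

    def __init__(self):
        super().__init__()
        self.rel_me_links = []

    def handle_starttag(self, tag, attrs):
        if tag not in ("a", "link"):
            return
        attrs = dict(attrs)
        # rel can hold multiple space-separated tokens, e.g. rel="me nofollow"
        rels = (attrs.get("rel") or "").lower().split()
        if "me" in rels and attrs.get("href"):
            self.rel_me_links.append(attrs["href"])


def page_verifies_profile(page_html: str, profile_url: str) -> bool:
    """True if the page links back to profile_url with rel="me"."""
    finder = RelMeFinder()
    finder.feed(page_html)
    return profile_url in finder.rel_me_links
```

For example, a page containing `<a rel="me" href="https://hachyderm.io/@example">Mastodon</a>` would verify the profile `https://hachyderm.io/@example`. This is why re-saving your profile after the approval page goes live matters: verification only succeeds if the back-link exists at the moment the check runs.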

8.3 - Bot and Post Automation

For all types of automated and scheduled content.

If you are running a bot account and/or plan on making use of post scheduling - this article is for you!


  • Automated posts and bot accounts
  • Posting rules for automated content
  • Posting rules and behavior restrictions for bots
  • All bots must be verified with Hachyderm
  • Where applicable, accounts should be verified with their relevant domain(s).

A bit about post automation

Not all posting automation is bot, or bot-like, in nature. Many specialized accounts make use of post scheduling to help communicate with their intended userbase. This can include, but is not limited to, sign-up windows for an event that must happen in a specific time frame, automated posts when a blog is updated, and so on.

The line between automated posts and bot posts is a thin one, and mostly dependent on whether a human composed a post (whether it is scheduled for now or later) and whether or not that post is intended to repeat (the “say it once” policy on the Specialized Account Expectations).

As with many of our rules on Hachyderm, the rules and regulations for bot and automated posting are about their impact on the server. We want that impact to be positive and non-invasive. It is our belief that collaboration thrives where there is genuine connection. Genuine connection can happen via bot and automated posts, but cannot thrive if those posts become invasive or compete with each other negatively.

When we enforce the rules and regulations on bots and automated content, as outlined in the next section, we do so with the goal of increasing the connected, collaborative nature of the Hachyderm community.

Posting Rules

All accounts that make use of automated and/or scheduled posts must adhere to the following:

  • There is an upper limit of 5 curated/scheduled/automated posts per day. Lower is better.
  • Posts cannot violate other rules
    • This includes server rules disallowing spam, fundraising, NSFW/18+ content (et al)
    • Hybrid accounts, those that make use of scheduled posts and are also another specialized type, are bound to all sets of rules for their combined, hybrid, account type.
    • Posts cannot violate the Specialized Account Expectations
  • There is one exception: automated posts can skirt the “say it once” policy, unless it becomes spam-worthy (community discretion). So, if you need to post about your event, blog, etc. and stay within the total number of posts per day, this is fine.

Accounts that violate the above may be limited or suspended.
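If your scheduler is home-grown, one simple way to stay under the cap is to count how many automated posts have already gone out in the current day before publishing another. A hypothetical sketch (the 5-per-day limit is the rule above; everything else, including the function name, is our own):

```python
from datetime import datetime

# Upper limit on curated/scheduled/automated posts per day (lower is better).
DAILY_LIMIT = 5


def may_post_now(post_times: list[datetime], now: datetime) -> bool:
    """True if publishing one more automated post stays within the daily cap."""
    posted_today = sum(1 for t in post_times if t.date() == now.date())
    return posted_today < DAILY_LIMIT
```

A scheduler would call this guard with the timestamps of its posts so far and skip (or defer) any post that would exceed the limit.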

Bot Specific Rules

Account Configuration

  • All bot accounts must be verified with Hachyderm, which means they agree to the Specialized Account Expectations.
  • All bots must select the bot checkbox in their profile settings. (Open image in new window to enlarge.)
    Screenshot of four checkboxes in account settings: require follow requests, this is a bot account, suggest account to others, and hide your social graph.
  • Bots are required to put the #hachybots hashtag in all posts to allow users to opt into, or out of, bot posts.

Restrictions on Behavior

All bot accounts, and bot-like posts, must respect consent (i.e. opt-in). This means:

  • Bots may respond directly to posts that they have been tagged in.
  • Reactionary bots may only be triggered / “summoned” by posts that include their handle.
    • But they cannot "doom spiral". Essentially: they cannot continue to respond to every response in a thread just because they were tagged once and are thus auto-tagged in all subsequent responses.
  • Bots cannot respond to hashtags, keywords, etc. without being tagged - e.g. bots that respond to user posts based on keywords and similar.
  • Bots designed to consume, use, store, or otherwise handle data (even public data) can only do so with consent (opt-in).
  • Bots designed for fun can skirt the “say it once” policy, unless it becomes spam-worthy (community discretion). So, if your bot only responds with “honk”, and only when summoned, this is fine.

Bot accounts that violate any of the above may be limited or banned.
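The opt-in rules above can be approximated in code: reply only when the notification is an explicit mention whose post body actually contains the bot's handle (not merely the auto-carried mention list of a reply chain), reply at most once per conversation, and tag every reply with #hachybots. A hypothetical sketch follows; the dictionary fields mirror the shape of Mastodon API notification/status objects, but the helpers themselves are our own and a real bot would need more careful thread tracking.

```python
def should_reply(notification: dict, bot_handle: str, replied_threads: set) -> bool:
    """Opt-in check: explicit summon only, and no doom-spiraling in a thread."""
    if notification.get("type") != "mention":
        return False
    status = notification.get("status", {})
    # Explicit summon: the handle must appear in the post text itself, not
    # just in the mention list Mastodon auto-carries into replies.
    if bot_handle not in status.get("content", ""):
        return False
    # Doom-spiral guard: reply at most once per conversation. Using the
    # parent status id as a stand-in for the thread is a simplification.
    thread = status.get("in_reply_to_id") or status.get("id")
    if thread in replied_threads:
        return False
    replied_threads.add(thread)
    return True


def compose_reply(text: str) -> str:
    """All bot posts must carry the opt-out hashtag."""
    return f"{text} #hachybots"
```

With this guard, a "honk" bot would answer a post that explicitly tags it once, then stay silent for the rest of that thread even though every subsequent reply auto-mentions it.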

8.4 - Corporate Accounts

Who should make corporate accounts, how to make them, and their rules and restrictions.


What is a Corporate Account

Corporate accounts are specialized accounts that apply to those running accounts for businesses and business-like entities. This can include certain types of projects, conferences / events, and non-profit organizations as well as traditional for-profit companies of any size.

Some corporate accounts may be affiliated with Nivenly Trade Members, although this is not a requirement. (Clarification: this is a requirement we removed prior to the 1 Mar 2023 launch.) For a quick summary of Nivenly, please see the end of this article.

Rules and restrictions

Corporate accounts:

Corporate account application information and cost

In order to apply to create a corporate account, please email us. We will go over the rules and expectations with you and help you be successful interacting with members of our community.


Corporate account standard pricing is tiered based on size and is as follows:

  • <25 employees: $650/year
  • 25-49 employees: $1000/year
  • 50-99 employees: $1500/year
  • 100-149 employees: $2000/year
  • 150-999 employees: $2500/year

Entities with 1000+ employees or more than 50,000 followers will have customized High Volume pricing based on resource usage. If you believe your entity would fall in this range, please mention that you will need High Volume pricing when you email us.

If you are a small startup (<25 employees) and $650/year would be a burden: please still reach out to us! We want to help the industry as much as possible and work with you.


The Nivenly Foundation

The Nivenly Foundation will facilitate Hachyderm’s governance as well as support other open source projects. The governance model will be democratic in nature, and all Nivenly Trade members will be given a vote in member level elections. For more information, see

How to become a Nivenly Trade Member

If you are interested in starting the conversation about the Nivenly Foundation and Trade Membership, please email us.

Current corporate accounts

Listed here for historical purposes. Please note that as of 1 Mar 2023 all corporate accounts are verified / approved with Hachyderm.

8.5 - Curated Accounts and Content

Curated accounts are accounts that post content on a regular basis that the account owner(s) have previously vetted. This can include, but is not limited to:

  • Job postings for a specific profession, industry, or from a particular source
  • Newsletter or newsletter-like postings relevant to a particular profession or industry
  • Podcasts or other streaming focused content
  • Conference aggregators for CFPs, tickets, sign-up date(s), etc.

Restrictions on behavior

Curated posts, whether they are a dedicated account type or an account that features them prominently, have restrictions on this server.

  • Curated accounts must be “on topic”. To put it another way, they must be directly relevant to the tech industry.
  • Curated accounts must follow the same posting rules as automated posts. Notably this includes:
    • There is an upper limit of 5 curated/scheduled/automated posts per day. Lower is better.
    • Posts cannot violate other rules
      • This includes server rules disallowing spam, fundraising, NSFW/18+ content (et al)
      • Hybrid accounts, those that make use of curated content and are also another specialized type, are bound to all sets of rules for their combined, hybrid, account type.
      • Posts cannot violate the Specialized Account Expectations
    • Similar to bots, curated accounts must use the #hachygrator hashtag (“Hachyderm” + “Aggregator”) in their posts to allow users to opt-in or opt-out.
  • Curated accounts must be verified with Hachyderm
  • Where applicable, accounts should be verified with their relevant domain(s).

8.6 - General Accounts

For the vast majority of Hachyderm users:

And most of all:

Have fun.

8.7 - Influencer Accounts

In short, we welcome all influencers and have an open door policy for any large influencer account who would like to make Hachyderm a home.


  • We welcome influencer accounts
  • If you have a following of more than 10K on any platform, please reach out to us before announcing your account to your followers.
  • You may want to verify with Hachyderm, depending on your content.

Self Promotion

We believe that all influencers should be free to promote their brand and their content on our site, however we have a high bar with regard to product marketing.

We draw the line between “personal brand” and “corporate account”. The latter would need to adhere to our corporate account policy.

In short, if you are representing yourself, then you should be fine to self-promote on Hachyderm. If your content is close to one of our other account types, and potentially not only your own personal brand, or there's high overlap between your personal brand and a "brand-brand", please reach out to us so we can see if verification would be beneficial for your account. (We do not have this process open to general user accounts.)

Tech Twitch Streamers

Twitch streamers are welcome!


Large Accounts - More than 10k on any platform

We’ve had a couple of instances where accounts with large followings needed assistance with their account migrations. Early in the Twitter migration, Ian Coldwater joined our community and brought with them roughly 600 new users in less than an hour. This unexpected rise in traffic took our team of volunteers by surprise and ended up slowing our service down for the evening. (With our infrastructure migrated off The Watertower and into more resilient architecture, this should no longer be a concern.)

The other main instance is when Scott Hanselman migrated over from his previous Mastodon home. Due to the large number of followers that came with him, we had tickets opened on both our own Community Issue Tracker and on Mastodon’s issues, to help us re-prompt remote servers that had not responded to the initial migration request.

So essentially, we want to make sure that we're aware when you migrate over, so we are prepared to handle any backend processes or potentially re-prompt remote servers (i.e. servers that are neither Hachyderm nor your original Mastodon home) to migrate followers if they become stuck during your migration.

As a rule, if you have more than 10K followers on an existing platform, we ask that you reach out to us directly, or via the Nivenly Discord (if you're already in there), before announcing your arrival to your followers so we can prepare to assist as needed.

8.8 - Open Source Project and Community Event Accounts

This section is for anyone who is interested in creating an account specifically for an open source project and/or community driven event.

If you are a maintainer of a project or event organizer and are representing yourself, this page does not apply to you. For example:

I, Kris Nóva, am a maintainer of Aurae.

This documentation only applies to me if I were attempting to create a 2nd account in addition to my primary personal account that would be specifically for “Aurae”.

This documentation does not apply to my personal account, where I would be free to talk about Aurae as I wish.

Open Source Projects and Community Events

We welcome open source and collaborative projects to create accounts on Hachyderm. We also welcome community events to create accounts on Hachyderm.

If your project is an extension of, or resembles, a corporate account, it is bound by the Corporate Account rules and restrictions. If your event is a corporate event and/or meetup, it is also bound by those same rules and restrictions.

If you are unsure whether your project and/or event would be considered corporate, please reach out to us.

Understanding Open Source Projects and Community Events

The following are the types of projects that are welcome to come and go from Hachyderm as they please.

  • ✔️ Open source software projects with MIT, Apache 2.0, GPL, Creative Commons or similar licenses.
  • ✔️ Open collaborative projects such as Project Gutenberg, or collaborative wiki-style projects like Wikipedia and Wikimedia.
  • ✔️ Community driven events, open organizations, and volunteer driven events like FOSDEM.
  • ✔️ Open source organizations such as Matrix.

OSS projects and industry events that are company-like or business-like in their implementation will be considered “corporate” (in the account sense) and require a corporate account. They will also be bound by the account rules and restrictions for that account type. Criteria that would meet this type:

  • Projects that are a product / service of a parent entity that would meet the criteria for a corporate account or
    events that are for a specific company or company-like organization or
    events for a specific product / service of that company or company-like organization.
  • Projects with either a single sponsor or owner that is a company or is company-like / business-like or
    events that meet this definition.
  • Projects that resemble paid services for a company or company-like / business-like entity or
    events that meet this definition.
  • Projects that have “free tier” and “paid tier” models that resemble company-like or business-like implementation.

Some specific examples that would not fall under the OSS project / event definition (in terms of account type) and would require a corporate account:

  • ✖️ Large trade organizations and governing bodies such as the CNCF or Cloud Foundry or their subsequent projects such as Istio or Helm.
  • ✖️ Product ecosystem event like AWS re:Invent
  • ✖️ Open source projects with a single corporate sponsor/owner such as Google’s Go Programming language and HashiCorp’s Terraform.
  • ✖️ Open source projects with structured sponsorship models that resemble paid services. Some examples include unlocking essential features, increasing quotas, and so forth via donation or payment tiers.
  • ✖️ “up-sell” or “upgrade to pro” or “free trial” model projects that resemble a paid service such as pfSense Community Edition or Wolfram Alpha.

What is financial bias?

Financial bias is defined as “the ability for a specific vendor, project, or organization to pay for a competitive advantage” or sometimes referred to as “pay-to-play” vendor spaces.

Why do we care about financial bias?

We acknowledge the system that we're in: projects and events need money to run and to continue producing great work and results. What we are specifically concerned with is when financial bias creates unhealthy relationships. In particular, financial bias can taint the relationship between a project and/or event and its intended goals. As that grows and spreads, it in turn drives unhealthy relationships between the people building/running the project and/or event and the very people they want to bond, collaborate, and share with.

Our goal with the guidance, rules, and restrictions that we have in place for these and other account types is to protect our community by promoting and nurturing the conditions that allow healthy relationships to thrive by default.

Frequently Asked Questions

Can I create an account for my event? Like DevopsDays?

Yes, the goal is to support community focused events. If the goal of the event is the community, then this is the correct account type for your event. If your event is business oriented, or for businesses to network with each other (e.g. what is sometimes referred to as a tradeshow), your event will need a corporate account.

What about DevopsDays for my specific city?

Yes, as long as your subsidiary account isn’t repeating the same content as the parent account. We expect each account to have relatively independent content.

Can I create a support/help/fan/parody account?

No. Accounts like “Linux Tips” or “Kubernetes Memes” are not in alignment with our mission to create a curated group of professionals. We aim to have accounts that represent real people.

Can I create an account if a similar one already exists?

No. We want to have one account for each project; we do not want “Wikipedia tips” and “Wikipedia facts” as separate accounts. We believe access to a shared account should be coordinated at the project level. You should join the project and ask for access to its Mastodon account.

Can I create a bot account for our open source project?

No. We know it is fun to automate pull requests, build status, and more. However, we try to keep our content based around real words written by real people. That said, you are welcome to include some of that data in your project’s account posts, it just can’t be dedicated to only publishing that type of content.

9 - Mastodon for Users and Moderators

Supplementary documentation for Hachyderm users and moderators.

This section documents features and processes maintained by Mastodon. For issues related to these features and/or processes, please reach out to the Mastodon team directly on the Mastodon project’s GitHub.

For issues with this doc page itself, please reach out to us on Hachyderm’s Community Issue tracker.

The documentation in this section is meant to supplement, not supplant, Mastodon’s own documentation. We are choosing to write clarifications around the Mastodon UI and features that are either frequently used by the Hachyderm community or that provide added clarity for any features that we require our users to make use of.

9.1 - Mastodon's UI and Features for Moderators

Subset of supplemental documentation for Mastodon moderators.

This section documents features and processes maintained by Mastodon. For issues related to these features and/or processes, please reach out to the Mastodon team directly on the Mastodon project’s GitHub.

For issues with this doc page itself, please reach out to us on Hachyderm’s Community Issue tracker.

The documentation in this section is meant to supplement, not supplant, Mastodon’s own documentation. This section of the documentation is specifically for the Mastodon UI and features that are used by the moderation team here at Hachyderm.

9.1.1 - Mastodon Report Feature

How to use the Mastodon report feature.

This section documents features and processes maintained by Mastodon. For issues related to these features and/or processes, please reach out to the Mastodon team directly on the Mastodon project’s GitHub.

For issues with this doc page itself, please reach out to us on Hachyderm’s Community Issue tracker.


About the Report Feature

Mastodon’s report feature is a way for Mastodon users to send reports to a Mastodon instance’s admins or moderators. If you are reporting a user to your own instance’s moderators, then only they will see the report. If you are reporting a user on a remote server, then your home instance’s admins still see the report. In the case of a remote user, you can also choose whether or not to forward the report to that instance’s admins. The nuance here is capturing whether you are reporting a user to their own home instance for their admins to take action, to your own instance admins for them to take action, or both.

On Hachyderm, we specifically request Hachydermians to use the report feature for the following scenarios:

  • Reporting individual posts but not the user overall
  • Reporting a user via their posts
  • Reporting a domain via the posts of a user on that domain

When submitting a report, it is important to include all relevant information. This includes supporting information (even if it seems obvious), any relevant posts, and your own comments.

Please note: if we receive an empty report and cannot see a clear cause, we will close the report without moderator action.

For more information about this, please see our doc on Reporting Issues and Communicating with Moderators.

How to create a report

  1. Click on the meatballs (⋯) menu below the post and select “report”
    Example post with the text This is a blob fox
appreciation post and three blob fox emoji. The UI menu is
expanded to show copy link to post, embed, mention the user,
mute the user, block the user, or report the user. The
following steps are for reporting the user.
  2. Select why you’re reporting.
    Dialog box with four options to select for reporting a
user. The options are I don't like it, It's spam, It violates
server rules, and It's something else. In the example, It
violates server rules is selected.
  3. If you selected Server Rules for the reason, as we did in this example, then you’ll be asked to select which rule(s) were violated; you can select more than one:
    Additional dialog box to allow user to select which rules
are violated. All the server rules, including Don't be a
dick, No hacking, No violence, etc. are shown. You can select
one or more server rules as being violated. Here only Don't
be a dick is selected.
  4. Please select any and all additional posts to include in the report
    Dialog that allows you to select other user posts that
may be related to the report. The posts shown are not all in
one thread, but all of the user's recent posts. In this case,
an additional two posts about a form announcement and a
response window are shown. Only the Blob Fox Appreciation
post is selected.
  5. Please include all relevant context to help us process the report:
    Additional Comments dialog box, with a limit of 1000
characters, to allow users to supply additional information
with their report.
  6. You can optionally choose to unfollow, mute, or block the user before you click “Done”.
    You can optionally unfollow, mute, or block the user you are
reporting before selecting Done. You do not need to do any of
these actions for the report to be submitted.
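For the curious, the dialog steps above map directly onto Mastodon’s REST API: clients submit reports via POST /api/v1/reports with fields for the reported account, the selected posts and rules, the comment, and whether to forward to a remote instance. A sketch of the sort of payload the UI builds (all of the IDs here are made up for illustration):

```python
import json

# Hypothetical IDs; the real values come from the account, posts,
# and rules you selected in the dialog.
report_payload = {
    "account_id": "109348203",      # the reported account
    "status_ids": ["110239847"],    # posts attached in step 4
    "category": "violation",        # "It violates server rules" in step 2
    "rule_ids": ["1"],              # rules ticked in step 3
    "comment": "Context for the moderators, from step 5.",
    "forward": False,               # whether to forward to a remote instance
}

print(json.dumps(report_payload, indent=2))
```

The `forward` flag is what controls the local-versus-remote reporting nuance described earlier: it only matters when the reported account lives on another server.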

The Additional Comments step is very important. To help us process reports efficiently, there should always be additional context in the Additional Comments field; the more the better. This should be done even if the report seems self-explanatory. For reports of posts, users, and domains that are in languages other than English, we will need an English translation supplied.

The most important limitation you should know at this stage is that the Additional Comments field has a character limit of 1000 characters (as of this writing). If you need to supply more context, or the translation takes more than 1000 characters, please:

  • File the report with what you can
  • Make sure to leave enough space to tell us there is a supplementary email
  • Email us at

What a Filed Report Looks Like

As an example, I had Björn’s account file a report against the Blob Fox Appreciation Post used for the screenshots above. When a member of the moderation team reviews the report, it looks like this:

Zoomed out view of what the report looks like for a
Mastodon admin. The text is too small to read, but the
sections show the user stats at top, report reason in the
middle, any comments provided by the user filing the report,
and then a section for moderator actions. These are described
in detail below.

Since that is a very zoomed out view, let’s look at each of the sections. At the top is the same user information you’d see if you navigated to the top of a user’s profile page:

This is the top of the same report. The user is user
quintessence with 1.01K posts, 3.61K followers, and 115
following. The first line of her bio is visible, as is her
header and avatar. The report date and time as well as the
reporting user, Björn, are visible. The Category of the
report is shown as violates server rules - Don't be a dick.

This section also shows the moderator team why the report was filed, in this case the “Don’t Be A Dick” rule is selected.

Underneath that is the comment the user supplied when they filed the report, as well as all the posts they selected to include with it. If the reporting user is a Hachydermian, their username is shown, as you can see here. If they are off our server, only the source server is shown.

This section shows the additional information provided
by the user. In this case, only the text This is what was in
the additional comments field for the reporting user comment
as well as the post that was included in the report.

At the bottom is the section where moderators can leave comments and choose what action to take:

Moderator actions are shown here as Mark As Resolved -
no action, Delete posts, Limit user, Suspend User, and
Custom. There is also an additional field for moderator comments.

Moderators can choose to close the issue with only an explanatory comment, or to take one of the shown actions and close the issue. For visibility, the moderation actions are:

  1. Mark as Resolved (No moderator action)
  2. Delete posts (Moderator resolves by deleting the offending post(s))
  3. Limit (Formerly known as “silence”. The user can still participate but they will not show in Local or Federated feeds.)
  4. Freeze (User can log into their account but cannot interact.)
  5. Warn (Moderators send a note through the interface to the reported user. Note this option is not visible on the screenshot.)
  6. Suspend (Also known as “ban”. If the user is a Hachydermian then their account is removed from our server. If they are on a different server that user is banned from interacting with our server.)

Of the above, the only moderation actions that are visible to others are deleting a post and suspending an account. When an issue is closed without action, or when a user is warned, frozen, or limited, the action is not visible to the reporting user or other users. This means that we / your instance moderators may have taken action as the result of your report, even though that action is not publicly visible.

Please look at our Actions and Appeals doc for more information about how we use the moderation tools to moderate Hachyderm.

Who can see moderation reports

If you are reporting a user on the same instance as you are (local user):

  • The instance moderators can see your report

If you are reporting a user not on the same instance as you are (remote user):

  • Your instance moderators can see your report
  • Remote instance moderators / the moderators for the reported user’s instance can only see the report if you forward the report.

Regular users do not have access to moderation reports.

Limitations of the Mastodon Admin Interface

When we receive a report in the admin tools, there are two main drawbacks:

  • We cannot follow up with the reporting user to ask for additional information
  • We cannot follow up with the reported user, even if they file an appeal

How this impacts you:

  • If you are reporting an issue and do not include enough information and/or a way for us to get in touch with you to clarify, we might not be able to take the appropriate action.
  • If you are appealing a moderation decision and do not include enough information for us to make a decision and a way to contact you, we might not have enough information to reverse the decision and no way to request more information from you.

One Last Reminder

If we receive an empty report and cannot see a clear cause, we will close the report without moderator action.

9.2 - Mastodon's UI and Features for Users

Subset of supplemental documentation for Mastodon users.

This section documents features and processes maintained by Mastodon. For issues related to these features and/or processes, please reach out to the Mastodon team directly on the Mastodon project’s GitHub.

For issues with this doc page itself, please reach out to us on Hachyderm’s Community Issue tracker.

The documentation in this section is meant to supplement, not supplant, Mastodon’s own documentation. This section of the documentation is specifically for the Mastodon UI and features that are used by our general, non-moderator, users.

9.2.1 - Content Warnings

Describes what the content warning feature is and how it operates.

This page documents features and processes maintained by Mastodon. For issues related to these features and/or processes, please reach out to the Mastodon team directly on the Mastodon project’s GitHub.

For issues with this doc page itself, please reach out to us on Hachyderm’s Community Issue tracker.

What are content warnings?

Content warnings are a feature that allows you to obscure your content in such a way that it is hidden by default in other users’ timelines. Instead, only the text of the content warning is displayed. To put it another way, if you were to put a content warning on one of your posts that read “descriptions of war violence” while discussing current or past wars, users would only see that description and could then choose to click through the content warning to view the content (or not).

How to apply a content warning

In order to apply a content warning, use the “CW” button in the post field:

Screenshot of the post field with the CW circled and with
an arrow pointing to it

An example post with a content warning on the text and image looks like this:

Screenshot of a post with content warning 'Politics (CO)' and the
blurred out content labeled 'Sensitive Content'

Threads with a content warning

Whenever someone replies to a post with a content warning, by default their response will carry the same content warning. Here is an example:

Screenshot of a thread with an example content warning,
that reads Example Content Warning, showing that when a user
replies that content warning is pre-populated

This is the default due to the nature of the content warning: if the top-most post of a thread needs a content warning due to what’s being shown or discussed, then the rest of that thread probably needs the same content warning. This is true both for when you respond to your own posts or when other users respond to your posts.

When you reply to a post with a content warning you can manually disable and delete the content warning, but please do so with care. When you remove the content warning from your post all replies to you will also no longer be behind a content warning by default, even if they should be.

When are content warnings used

Content warnings are generally used to either hide spoilers or to provide a buffer when it is psychologically safer to choose to opt in to a conversation as opposed to being opted in by default (seeing it in a timeline). For more information on the nuance for how and when to use content warnings, please take a look at our doc on how to use content warnings.
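For client and bot developers: in Mastodon’s REST API, a content warning is just the spoiler_text field on a status submitted to POST /api/v1/statuses. A minimal Python sketch, assuming a hypothetical access token (the request is built but intentionally not sent, so it can run without credentials):

```python
import json
import urllib.request

INSTANCE = "https://hachyderm.io"   # your home instance
TOKEN = "YOUR_ACCESS_TOKEN"         # hypothetical OAuth token

# The content warning text travels as "spoiler_text"; the body as "status".
payload = {
    "status": "Discussion of current events inside.",
    "spoiler_text": "descriptions of war violence",
    "visibility": "public",
}

req = urllib.request.Request(
    f"{INSTANCE}/api/v1/statuses",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(req) would actually submit the post; it is left
# out here so the sketch runs without a real token.
print(payload["spoiler_text"])  # → descriptions of war violence
```

Readers then see only the spoiler_text until they click through, exactly as with a content warning applied in the web UI.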

9.2.2 - Mastodon User Profile and Preferences

What Mastodon’s user profile and preferences are and how to configure them.

This page documents features and processes maintained by Mastodon. For issues related to these features and/or processes, please reach out to the Mastodon team directly on the Mastodon project’s GitHub.

For issues with this doc page itself, please reach out to us on Hachyderm’s Community Issue tracker.


This documentation page covers the customization options available to you via Profile and Preferences settings. The majority of the article length is due to screenshots that show the main view for each settings page. All of the settings shown in the screenshots are the default settings, done for the purposes of this article. This document only provides explanation text where the feature being enabled / disabled is not self-descriptive.

Profile Settings

In order to edit your profile settings, you can click the “Edit Profile” text beneath your handle on the left side, like so:

Screenshot of left Mastodon navigation bar with search field,
user's avatar, user's handle, and the text Edit Profile. The
Edit Profile text has a large, red arrow pointing to it.

Alternatively you can go to the settings/profile endpoint directly in your browser.


When you go to settings/profile, by default you will be taken to the settings for Profile Appearance:

Screenshot of the complete profile settings configuration options, including but not limited to
the bio text area, the uploads for avatar and page banner, check boxes to control follower and
following activity, metadata for URLs, and so forth.

To recap the above, here is where you can:

  • Set your display name
  • Write your bio
  • Upload your header and avatar images
  • Enable or disable follow requests
    • Disabled by default, meaning accounts can follow your account without explicit approval.
      When enabled, you will be prompted to approve or deny follow requests.
  • Set if you’re a bot
    • Disabled by default. When enabled, your account displays with the word “bot” next to your handle and display handle on your profile. There are no other changes to your account.
  • Suggest your account to others
    • Disabled by default. When enabled, your account will be recommended to other accounts as an account to follow.
  • Hide your social graph
    • Disabled by default. When enabled, other users will not be able to see who you follow or who follows you.
  • Write your profile metadata
    • This allows you to input information that is rendered like “key: value” tags.
      Relevantly, when you add URLs here, the field highlights green once the URL is verified.
  • Migrate to a different account or link to an account you’re migrating from
  • Delete your account
    • ⚠️ Account deletion works differently than account suspension. When you delete your account, your data is deleted immediately, not after a 30 day window. Also, there are inconsistent reports about what happens when deleted accounts are re-created: the account may not federate correctly, depending on whether and how remote servers update their information about the deleted account.

Featured hashtags let you choose which hashtags appear on your main view. They are configured by the Featured Hashtags settings located on the nav menu underneath Profile Appearance. This page can also be accessed directly via settings/featured_tags.

Featured hashtags settings with descriptive text that states What are featured hashtags as well as the
text area field that allows you to input hashtags. Two example hashtags of Hachyderm and Nivenly are shown.

When you enter hashtags to feature on your main view, they appear in the lower right:

Screenshot of the profile / post view for Hachyderm's Hachyderm account. In the lower right reads the
text Hachyderm's Featured Hashtags with the Hachyderm and Nivenly tags underneath.

Note in this case the descriptive text “Hachyderm’s Featured Hashtags” refers to Hachyderm as the account display handle and not Hachyderm at the instance-level.

Preferences Settings

The Preferences settings are located immediately under the Profile settings in the left navigation menu.

Screenshot of left navigation menu, including Profile top level option and Preferences with
its sub-items Appearance, Notifications, and Other.


When selected, the Appearance settings (not to be confused with the Profile Appearance settings above) display by default. This page can also be accessed via settings/preferences/appearance.

Screenshot of the appearances settings page including checkboxes for the advanced web interface,
slow media, auto-play, and so on.

To recap the above, here is where you can:

  • Set your interface language
  • Set your site theme (light or dark)
  • Enable the advanced web interface
  • Enable Slow mode
    • This stops the timeline view from refreshing automatically.
  • Enable Auto-play animated GIFs
  • Set to reduce motion in animations
    • This is specific to reducing the motion of animated GIFs.
  • Disable swiping motions
    • This is specific to using browsers with devices that have a swiping feature. When enabled, it stops you from accidentally switching timelines when using a swipe motion to “go back”.
  • Use the system font
    • When enabled, this uses your system’s default font instead of Mastodon’s default font (Roboto).
  • Enable crop image size
  • Show trends
  • Enable the confirmation before following someone
  • Enable the confirmation before boosting
  • Enable the confirmation before deleting a post
  • Set how you want media to display
    • Default is “hide media marked as sensitive”. Specifically allows you to either always show media, always hide media, or hide media only where the poster marked that media as sensitive.
  • Enable if you want a color gradient for hidden media
  • Enable if you want posts expanded with content warnings


Accessible via Preferences → Notifications. Can also be directly accessed via settings/preferences/notifications. These preferences are separate from the Notifications preferences you can set in the Timeline view.

Screenshot of notification preferences that control email notifications and blocking preferences.

To recap the above, here is where you can:

  • Enable or disable email notifications for:
    • Someone following you
    • Someone requesting to follow you
    • Someone boosting you
    • Someone favoriting a post
    • Someone mentioning you
  • Override the above, in cases where:
    • The account is not following you
    • You are not following the account
  • Block direct messages from people you don’t follow

For the two features in the “override” section above, this means if you have enabled notifications for boosts and also enabled “block notifications from non-followers”, then you will only see notifications for when your followers boost you.


Located under Preferences → Other, or directly via settings/preferences/other.

Screenshot of the Other preferences, which include boost grouping and supported language(s).

To recap the above, here is where you can:

  • Opt-out of search engine indexing
  • Group boosts in timelines
    • Enabled by default. When enabled, if multiple people have boosted the same post in a short timeframe, you will only see one of the boosts rather than all of the boosts separately.
  • Set your default posting privacy
    • Posting privacy options are Public, Unlisted, and Followers-only.
  • Set your default posting language
  • Set if you want all of your media marked as sensitive
  • Disclose what application you use to post from
  • Filter languages
    • Note: not all languages are captured in the screenshot, due to length.

9.2.3 - How to Verify with Mastodon

How to verify with Mastodon.

This page documents a process maintained by Mastodon. For verification failures, please reach out to the Mastodon team directly on the Mastodon project’s GitHub.

For issues with this doc page itself, please reach out to us on Hachyderm’s Community Issue tracker.

What is verification?

Verification on Mastodon works less like Twitter and more like an identity service. That is, you do not need to prove your association with your brand to an entity or pay a fee; you are only showing that you own one (or more) domains or accounts on separate services to substantiate your digital identity.

Here’s what it looks like when a profile has verified via their GitHub identity:

Screenshot of profile for user quintessence, showing avatar, header
and relevantly the verified GitHub URL field which is highlighted in
green and has a green checkmark next to the URL.

GitHub shows as verified with a green checkmark and complete URL, including username / handle.

What domains or accounts can you verify?

You can verify via:

  • Any domain that you can edit pages for
  • Any online service that recognizes, supplies, or allows you to supply the rel="me" attribute (see below).

How to verify

Verifying with domains

In general, when you verify you will do so by placing the following HTML on a page you control, like a personal site or blog (the profile URL shown is an example; replace it with your own):

<a rel="me" href="https://hachyderm.io/@username">Hachyderm</a>

If you would like to avoid using a visible link, like the above, you can instead put the following in the page headers (again substituting your own profile URL):

<link rel="me" href="https://hachyderm.io/@username">

After doing either of the above, you will need to add the URL of the site to your Hachyderm / Mastodon profile. You will do that by:

  1. Go to Edit Profile
  2. In one of the four fields of Profile Metadata, add the URL of the destination you are verifying
  3. Save changes
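Mastodon’s verifier simply fetches the URL you added in step 2 and looks for a link back to your profile carrying rel="me". If verification isn’t turning green, a quick local check can confirm the link is actually in your page. A Python sketch, where the page HTML and profile URL are stand-ins for your own:

```python
from html.parser import HTMLParser

class RelMeFinder(HTMLParser):
    """Collect href values of <a>/<link> tags carrying rel="me"."""
    def __init__(self):
        super().__init__()
        self.rel_me_links = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # rel can hold multiple space-separated values, e.g. "me nofollow".
        if tag in ("a", "link") and "me" in (attrs.get("rel") or "").split():
            self.rel_me_links.append(attrs.get("href"))

# Stand-in for your own page's HTML; the profile URL is an example.
page = '<html><head><link rel="me" href="https://hachyderm.io/@username"></head></html>'
finder = RelMeFinder()
finder.feed(page)
print(finder.rel_me_links)  # → ['https://hachyderm.io/@username']
```

If the list comes back empty when fed your real page source, the rel="me" link is missing or malformed, which is the most common reason verification fails.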

Here is an example profile with two separate sources of identity verification:

Screenshot of profile for user Matty Stratton, used with permission,
showing two sources of URL verification. One is his own website, and the other is his Keyoxide account. Both are
highlighted in green with green checkmarks next to the URL to show
they are verified.

Screenshot of Hachyderm user profile taken with permission.

Verifying with services

We will add more services as requested by the Community; requests can be made either by creating an issue on our Community Issue Tracker or via a direct pull request on the Community repo.

Instructions for current, commonly requested, services are below.

As with the verification process itself: when verifying with a service, that service is responsible for assisting with errors or issues with verification. If you experience issues with the verification process, please reach out to the relevant service for assistance.


In early 2023 GitHub announced support for multiple social URLs, including adding support for Mastodon specifically. In order to verify via GitHub:

  1. Go to your GitHub profile page and click “Edit”
  2. Provide your Hachyderm account URL, of the format https://hachyderm.io/@username
  3. Click “Save”

Once you have saved, your GitHub profile should now render your Mastodon account in the format @username@hachyderm.io.


When you edit, it will look like this:

Screenshot of editable fields on Quintessence's GitHub
profile. Specifically, under social fields, the full URL for
her Hachyderm account, of the pattern https://hachyderm.io/@username,
is shown.
Once saved, your GitHub profile will look like this:

Screenshot of only the rendered Mastodon handle after
saving the change, of the format @username@hachyderm.io.