The debate around "platforming Nazis" has re-emerged in light of a recent article in The Atlantic, "Substack Has a Nazi Problem," and a campaign by Substack writers to have offensive accounts removed from the platform. However, calls for more content suppression are short-sighted and wrong.

This is not the first time a major social media platform has faced pressure to ban bigoted or offensive accounts, and Substack, an email-based content delivery platform, has previously drawn criticism for its moderation policies, or lack thereof. Unlike traditional blogging systems, Substack is geared toward subscription-based content delivery and allows creators to monetize their work without advertising. Readers subscribe to receive content from specific creators and do not receive random content they did not sign up for. Substack's moderation policies are broad, restricting only illegal, violent, plagiarized, or spam content. This approach irks those who believe tech companies should determine which viewpoints are heard and which are suppressed.

However, taking action against "Nazis" and those with toxic views would be difficult in practice: the term is so overused that enforcement would rest on subjective judgment calls. Expanding moderation criteria also opens the door to censoring a wide range of viewpoints, and it would make monitoring and countering Nazi activity more difficult. Additionally, banning accounts does not necessarily prevent their activities. Lastly, Substack's co-founders have previously emphasized their commitment to light-handed moderation and their avoidance of heavy-handed censorship.