
The key here is manual moderation. At NodeBB we've also had our fair share of spam companies trying to build scripts to post things, and the only foolproof solution is manual moderation via a post queue for new users.
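The gist of a new-user post queue can be sketched in a few lines. This is a hypothetical illustration, not NodeBB's actual API; the names (`POST_THRESHOLD`, `submit_post`, `approve`) and the reputation rule are assumptions for the example.

```python
# Sketch: posts from accounts below an approval threshold are held
# in a moderation queue instead of being published directly.

POST_THRESHOLD = 3  # approved posts needed before a user bypasses the queue

def submit_post(user, content, queue, published):
    """Queue posts from unproven users; publish the rest immediately."""
    post = {"user": user["name"], "content": content}
    if user["approved_posts"] < POST_THRESHOLD:
        queue.append(post)
        return "held for review"
    published.append(post)
    return "published"

def approve(index, queue, published, users):
    """A moderator approves a queued post, crediting the author."""
    post = queue.pop(index)
    users[post["user"]]["approved_posts"] += 1
    published.append(post)
```

The threshold means the moderation burden is front-loaded: once an account has a few approved posts, it stops costing moderator time.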

The downside, of course, is that it takes effort to maintain, and is a barrier to entry for new accounts.



Do you get a lot of people reposting the same thing because they think it didn’t work the first time?

I wonder if you treated new content from unverified users like a temporary shadowban (they see it, nobody else does) how that would affect behavior.
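The visibility rule being proposed fits in one function. A minimal sketch, with assumed field names (`author`, `author_verified`) chosen for the example:

```python
def visible_posts(viewer, posts):
    """Return the posts a given viewer should see.

    Posts by unverified authors are shown only to their own author,
    so from the author's side everything looks published (a temporary
    shadowban). `viewer` is a username, or None for logged-out visitors.
    """
    return [
        p for p in posts
        if p["author_verified"] or (viewer is not None and viewer == p["author"])
    ]
```

The spammer's own session looks normal, while logged-out visitors (and everyone else) never see the post.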


Great idea!

Edit: Why does shadow banning feel like such an elegant solution? Everything has tradeoffs and I feel like shadow banning has tons of upside and very little downside. What am I missing?


There are cases in which explicit and obvious moderation results in retaliation, including DDoS and hacking attacks.

Various soft fail mechanisms, including shadowbanning, degraded performance, errors, authentication failures, etc., may help avoid this.

The question is what adversarial model you're facing: is it some random pr0n / SEO / affiliate / fraudster, or is it a "nice website you gots heyah, be a shame if anyting happen' to it" squad?

Former can be modded away. Latter takes nuance.


The only risk I can think of is that someone will upload the video and then livestream its playback on your platform with your branding using a separate livestreaming tool.


I.e. shadow banning semantically cannot work with a site that is geared toward publishing content, because the content is supposed to be visible to visitors who are not authenticated. If it isn't visible, that is painfully obvious.

Shadow banning only works when only authenticated users can see any content at all; then we arrange for only the offender to see content they have created. This works as long as the offender doesn't create multiple accounts.

(You need something slightly more clever, like allowing the content to be viewed from the last known IP address for that offender, plus some surrounding range (e.g. the same IPv4 /24, the old "class C" size).)


The only one I can think of is that accounts may have extremely objectionable content and it remains there. That could be a legal liability.


It’s kind of easy to check whether your posts are visible to logged-out users, so you aren’t fooling bots and spammers, just real people. And the bots don’t care about moderation drama, but the real people do, no matter whether the moderation is proper or not. It always looks really unfair.


That's actually a wonderful idea. I'll see if I can get that implemented, and cc you on the commit.

We're always looking for little things that set our software apart!

As for your question: no, with the appropriate messaging, we don't often have repeated submission attempts.


That would result in more spam to moderate. The spammers would see their own content, think it was working, and mark your site as a target.

One point of manual moderation is that spammers know it is happening.


This invites a couple of questions. One, do the spammers care that the content is actually visible? Two, does spam come from more accounts or the same accounts?


I am assuming the checks are rudimentary: post some spam, see if the spam is visible, add the site to your script for mass-posting spam.


Unrelated, but seeing NodeBB reminded me of the old days of installing forums for communities: phpBB2, vBulletin, downloading plugins and themes, etc. Oh, the nostalgia for the old web is amazing...


I recently got a Raspberry Pi and set up phpBB using Docker just for a fun side project.

I looked at NodeBB; might give that a go next.



