
It seems like it's really just a matter of personal semantic interpretation to me. Did Hacker News not drive some traffic to TechCrunch as a result of this post? How much bigger must Hacker News be relative to TechCrunch before it counts as "driving" traffic rather than some other verb? According to whom? Either way, call it whatever you'd like, but the point remains that driving/sending/adding/creating/dispatching/routing traffic and/or increasing visibility is what these "parasites" have going for them. Whether that's worth the cost of letting them "leech" is debatable, but I'm going to go with yes, given that people keep cranking out APIs.

P.S. Since you didn't seem to notice, "perhaps" was sarcasm on my part. FriendFeed will most certainly create some additional traffic and/or visibility for the sites it "leeches" off of. It is important, however, to note that traffic != visibility. Sometimes visibility detracts from traffic because there's no need to visit the source (e.g. Google Maps, YouTube, Scribd), but other times it creates traffic because the source provides added value (e.g. thumbnails -> larger images, HN links -> stories). It's also worth noting that the former case (increased visibility, decreased traffic) still has benefits, since your service becomes better known (creating future traffic). I'll leave you with this: http://friendfeed.com/e/d1ffcc73-040a-5609-6168-993c4549591a



There's one rather large difference between Hacker News driving traffic to TechCrunch and FriendFeed driving traffic to Twitter: Hacker News doesn't actually scrape TechCrunch or pull its content via an API.

From a practical viewpoint (and this is why I used the word parasite - not to be pejorative, but to indicate that FF is entirely dependent on its hosts), FriendFeed could be shut down tomorrow by its sources. If it's using APIs, just about every API out there is licensed for non-commercial use only; and if it's scraping, an appropriate robots.txt, backed up by IP blocking and lawyers, does the trick.
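For the scraping case, the robots.txt side of that would look something like the sketch below. Note that "FriendFeedBot" is an assumed user-agent string for illustration, not a documented one, and robots.txt is only honored by well-behaved crawlers - hence the IP blocking and lawyers as backup.

```
# Hypothetical robots.txt: deny the assumed FriendFeed crawler
# site-wide while leaving all other crawlers unaffected.
User-agent: FriendFeedBot
Disallow: /

# Everyone else: no restrictions (an empty Disallow allows all paths).
User-agent: *
Disallow:
```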


