Researcher sets up illegal 420,000 node botnet for IPv4 Internet map (theregister.co.uk)
254 points by amadeuspzs on March 20, 2013 | hide | past | favorite | 63 comments


Just saw this posted here yesterday: https://news.ycombinator.com/item?id=5395009

Interestingly, I fell for the sensationalist headline of the Register instead of clicking the original HN link entitled "Port scanning /0 using insecure embedded devices" (which I saw and ignored).

I wonder if it will be the same here on HN!


I did the same. Network programming and embedded devices are both not my forte, but this headline grabbed me. Can't take back my upvote now!


I think it raises an interesting point: how many interesting links get overlooked because of poor titles? And how many titles are more interesting than the links?

Coming from a technical/academic background, good copy is not something that is emphasised - are we missing a trick?


A specific problem I see a lot on HN is blog post titles that do not make sense out of context. Titles like "Thoughts on music" or "An update on performance" might make sense if I'm reading on your site, but it gives no incentive to click through in the first place. This problem is exacerbated by HN's recent zero tolerance policy for headline changes.


I think I'm going to write up a nice study of comparative multitenant app-mesh queuing performance or something--and then, just for HN, I'll set the actual blog post title to:

> This Is A Bad Title With No Relevance; If You Revert To It You're A Bad Editor And You Should Feel Bad

...and then give the link a more sensible (but slightly editorializing) title when I submit it to HN, and see what happens. :)


Don't worry, even if you pick a good title a HN mod will come and change it to whatever they fancy.


Well when you think about it pragmatically, I don't have the time - and I'm sure you don't either - to look at every single intellectually stimulating title on the internet. And frankly, when I'm browsing HN, I'm not exactly looking for the epitome of academic integrity. I'm looking for things that I'm interested in, and that I care about. And frankly, portscanning /0 is not something that I care about very often. Researchers setting up illegal botnets are.

Copy matters.


Titles matter.

The exponential suppression of new articles is deliberately harsh. The first few early upvotes are critical. Getting them early and often requires either the right hook or a voting ring.
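To make that concrete, here's the widely circulated community reconstruction of the ranking formula - an approximation, not the official algorithm, and the 1.8 "gravity" exponent is a reverse-engineered guess:

```python
# Community-reconstructed approximation of HN's ranking; not official.
def hn_rank_score(points, age_hours, gravity=1.8):
    """Score divides net votes by a power of age, so early votes dominate."""
    return (points - 1) / (age_hours + 2) ** gravity

# The same ten net upvotes are worth far less a day after submission:
assert hn_rank_score(11, 1) > 4 * hn_rank_score(11, 24)
```

Under this model, a story's score decays steeply with age, which is why the first hour's votes matter so much.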

Copy matters in the real world too.


Yes. For an excellent introduction to writing copy, I highly recommend everything on www.copyhackers.com


I've been looking for a good copywriting blog to replace Copyblogger after it started hard selling questionable widgets.


In general articles from The Register are i) awful, ii) given link-bait headlines, and iii) available elsewhere.

The HN guideline about linking to the original source would be lovely if it meant people stopped linking to the Reg.


Wait, why did you upvote an article without reading it or the comments?


Probably to save the article for later reading.


Jesus. If getting that many bots in an evening is so easy, it's a wonder botherders spend so much effort on exploiting vulnerabilities in apps or engaging in phishing. Granted, Linux boxes probably aren't going to have the banking or email info you could grab off a Windows machine, but at the very least you could generate a decent number of bitcoins for very little effort. Actually, for that matter, 420k bots could launch a pretty nasty DDoS were the herder so inclined. That's enough firepower to take out some fairly large services, which I'm sure someone would be willing to pay for.
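A hedged back-of-the-envelope on that firepower claim (the per-bot uplink below is my assumption, not a figure from the article - most of these were embedded devices on consumer links):

```python
# All inputs here are assumptions for illustration, not measured values.
bots = 420_000
uplink_mbps = 0.5                      # assumed sustained uplink per bot
aggregate_gbps = bots * uplink_mbps / 1000
print(aggregate_gbps)                  # 210.0, i.e. ~210 Gbps
```

Even at half a megabit per device, the aggregate lands in the range of the largest publicly reported DDoS attacks of the era.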


> If getting that many bots in an evening is so easy

According to the report, designing the botnet took six months.


Given the number of these that are some form of router, why waste time bitcoin mining? Just MITM everyone.


HTTPS would tend to rob you of anything interesting, though.


For many users and many sites, SSL stripping is still quite a significant vulnerability.

http://www.thoughtcrime.org/software/sslstrip/

Assuming DNSSEC isn't in use, controlling someone's DNS settings is enough to carry out an SSL stripping attack!


Which is why everyone needs to learn about HSTS (my new favorite thing). There's room for a stripping attack upon first connecting and being redirected from http to https, but from then on you've mitigated most of the problem.
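For reference, the header itself is tiny; a minimal sketch (the helper name is mine, for illustration only):

```python
def hsts_header(max_age_seconds, include_subdomains=True):
    """Build the Strict-Transport-Security header an HTTPS server sends."""
    value = "max-age=%d" % max_age_seconds
    if include_subdomains:
        value += "; includeSubDomains"
    return ("Strict-Transport-Security", value)

# A one-year policy is common:
print(hsts_header(31536000))
# ('Strict-Transport-Security', 'max-age=31536000; includeSubDomains')
```

Once a browser has seen this header over HTTPS, it refuses plain-http connections to the host for the max-age window, which is what closes the sslstrip gap after the first visit.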


You use webmitm[1] or ettercap[2] to get the traffic. Sure, users will get a cert warning, but you're going to catch at least a few people who click through it.

[1] http://manned.org/webmitm/8187425a

[2] http://samiux.blogspot.com/2011/05/howto-sniffing-ssl-with-e...


Amusingly the general classes of "users who leave default passwords" and "users who click through a warning" are going to have a big intersection.


Well done, sir. Even so, I think once a user is shown a cert warning it becomes their problem, not the site owner's (in the sense of who's responsible, that is). When it comes to security, everyone has their part to play.


Only if anyone actually looks at the SSL cert warnings instead of just clicking through them.


This has to be some of the best research done on the public Internet itself in the past 10 years. I'm not sure what I'm going to try and do with the dump of data, but I can't imagine not finding something to extract from the ~600GB.

Highly illegal, yet highly intriguing that building a 400k+ node botnet is this damn simple.


That's ~600GB compressed - over 9TB when decompressed... extremely slowly, by ZPAQ. Seriously, at current rates the entire data set would take over 100 days to decompress on my computer. I'm only downloading a subset, so I expect decompression will only take a couple of weeks. -_-
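Sanity-checking that figure with my own arithmetic (using only the numbers in the comment, so treat it as a rough estimate):

```python
# 9 TB decompressed over 100 days implies roughly 1 MB/s sustained.
total_bytes = 9 * 1024**4       # ~9 TB
seconds = 100 * 24 * 3600       # 100 days
rate_mb_s = total_bytes / seconds / 1024**2
print(round(rate_mb_s, 2))      # 1.09
```

About 1 MB/s is plausible for a single-threaded run of ZPAQ's heavier compression levels on 2013 hardware.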


Are you extracting TO another device?


I predict more and more "interesting research" will be done anonymously, like this. This one was blatantly illegal, but even simply releasing security vulnerabilities might be prudent to do anonymously, if you're not willing to comply fully with a vendor's requests.


This speaks about Internet scale and the vast problems of security we shall face in the coming decades.

I mean, rtm's Internet worm was supposed to be the great wake-up call - and twenty-five years later, look!


"This speaks about Internet scale and the vast problems of security we shall face in the coming decades"

Exactly... And the very fact that people don't see anything wrong with "autonomous cars", probably "because this time the company in charge is going to create a separate network which is really secure", is precisely part of the problem.

We live in a world where people who are supposed to be smart enough to program all these machines and set up all this infrastructure are thinking: "Nothing can ever go wrong with this".

As a result we have more and more insecurity and the problem is only going to get worse.

And I've got a bridge to sell to people who think that autonomous cars are never going to get hacked...


Some autonomous cars will be hacked in the future. But that doesn't mean they are not a great idea. If we cut car accidents to half of what we have now, it would be amazing. Imagine being able to drink and then return home safely in your own car. Imagine being able to pick up your kids without having to go yourself, or playing with them instead of driving. Or having sex with your significant other instead of driving, or lurking on the internet instead of driving, etc. Autonomous cars are a huge deal. Some of them being hacked is not a reason to not have them.


What are you talking about 'hacked in the future'? Cars have been hacked for years.

https://www.google.com/?q=tadayoshi+kohno+automobile

EDIT: Sorry, I didn't mean to sound like such a jerk about it :-)


Don't worry, I did not take it personally. Besides the link is interesting :).

OTOH, I was talking about autonomous cars. AFAIK none of those was hacked yet (but regular cars were hacked, as proven by your link).

Peace!


> And the very fact that people don't see anything wrong with "autonomous cars", probably "because this time the company in charge is going to create a separate network which is really secure" is precisely part of the problem.

Why do autonomous cars need to be on a network at all? The ones I built were never remotely accessible...


So how is your autonomous car going to receive traffic updates, for example? (OK, not 100% technically necessary, but commercially I'd say it's pretty essential...)


Pull only would make it a lot harder to crack.


> After releasing the code overnight the report's writer found 420,000 suitable botnet endpoints ... The botnet was able to spread quickly and efficiently just using the four login combinations and was soon reporting back in healthy numbers.

That doesn't actually say he infected 420,000 machines, does it?


  To further verify our sample data, we developed a small binary that could be uploaded to insecure devices.
https://news.ycombinator.com/item?id=5395009

http://internetcensus2012.bitbucket.org/paper.html


From http://internetcensus2012.bitbucket.org/paper.html

> Our binary ran on approximately 420 thousand devices.


Github repository which includes the raw data dump: https://github.com/InternetCensus2012/InternetCensus2012


Can somebody here ballpark how much this would cost if you rented servers instead of using a botnet?


The problem with renting is that your provider will probably shut you down if your scan rate is high. At least that is the problem with large dedicated-server hosters.


Funnily enough, the post immediately above yours (posted a bit later) mentions:

"I had to spend USD15 to run 8 medium EC2 instances for 16 hours to only find 200++ million hosts"

https://news.ycombinator.com/item?id=5406233

and another study was done using EC2 instances:

https://news.ycombinator.com/item?id=5405483


Using the botnet to run security patches is what fascinates me.

> But it soon found it was getting competition from a malicious botnet dubbed Aidra and the researcher adapted the binary to block this competitor where possible[...]


My understanding is that it's actually not that uncommon for botnet malware to patch the exploit that it came in on.

I too wonder if this is some deep principle at work or just something obvious.


You don't want other malware running on your botnet. They take up resources and may make the owner realise that their computer is infected. If you patch their biggest security holes, their computer keeps on running smoothly and nobody suspects a thing.


My take is it's obvious: they are already in and don't need to use that exploit again. This makes sure no one else gets access as well.

(On a related note: I think I remember HP demonstrating a remote mitigation tool in '07 that would use exploits to pop up messages to logged-in users or even shut down the machine.)


Hmm...I wonder if this has anything to do with https://factorable.net/weakkeys12.extended.pdf


The paper says that they compromised machines by guessing credentials -- not by breaking crypto. ("Four simple stupid default telnet passwords" is how the paper refers to the scope of the vulnerability.)


Agreed. I just happened to skim the paper to spot any similarities, but there appear to be none; in fact the paper's authors used EC2 instances - a legal means. Another noteworthy difference is the paper's limited attention to SSH/TLS ports.


Thanks for the list of 1.3 billion potentially in-use IP addresses! I had to spend USD15 to run 8 medium EC2 instances for 16 hours to only find 200++ million hosts.
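Those numbers roughly check out against 2013 on-demand pricing (about $0.12/hour for an m1.medium, from memory, so treat this as approximate):

```python
# Implied cost per instance-hour from the figures in the comment.
instance_hours = 8 * 16                  # 8 instances for 16 hours
cost_per_instance_hour = 15 / instance_hours
print(round(cost_per_instance_hour, 3))  # 0.117
```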


So the same results can be achieved for less than a hundred bucks without the risk of spending the rest of your life in jail?


Not quite. My method was far from what this guy is doing. I only did a minimal ping, one packet per IP with a Python script:

  import socket, struct
  # Minimal 8-byte ICMP echo request (type 8, code 0, id 1, seq 1);
  # 0xf7fd is the precomputed RFC 1071 checksum for this header.
  pkt = struct.pack('!BBHHH', 8, 0, 0xf7fd, 1, 1)
  s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.getprotobyname("icmp"))
  for address in addresses:  # addresses: the list of target IPs
    s.sendto(pkt, (address, 1))
  s.close()
At the same time, I had tcpdump running, i.e. "tcpdump -i en1 icmp[icmptype] == icmp-echoreply" to capture the replies.
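If you wanted the capture side in Python as well, rather than tcpdump, here's a rough sketch (assumes plain IPv4 headers; `parse_echo_reply` is a name I made up for illustration):

```python
import socket

def parse_echo_reply(raw):
    """Return the sender IP if a raw IPv4 packet is an ICMP echo reply."""
    ihl = (raw[0] & 0x0f) * 4                # IP header length in bytes
    if raw[ihl] == 0:                        # ICMP type 0 = echo reply
        return socket.inet_ntoa(raw[12:16])  # IPv4 source address field
    return None

# Fed from a raw socket in practice (needs root):
#   r = socket.socket(socket.AF_INET, socket.SOCK_RAW,
#                     socket.getprotobyname("icmp"))
#   ip = parse_echo_reply(r.recvfrom(65535)[0])
```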


...what value does this information have?


"Potentially risks thousands of years in jail". Did I miss the post on the invention of cryogenic jails?


Well, he would be sentenced to thousands of years in jail, not actually made to serve them all in defiance of nature and death. When someone is given several consecutive life sentences, no one goes looking for them when they're reborn!


this bit is much more interesting: http://seclists.org/fulldisclosure/2013/Mar/166


> A reboot of the infected system would wipe the binary completely and...

Why in the heavens would you reboot obviously badly administered machines quite literally halfway across the globe, if you otherwise took every initiative not to harm the target machines and to keep your footprint as small as possible (lowest possible priority, a watchdog)? Killing the process and removing the files should have been more than enough, and you just don't know what a reboot could do to these systems, regardless of how much the admins of those machines are to blame.


It doesn't say that he reboots the compromised machines, just that if the sysadmin happens to reboot then the bot would disappear.


I hope they did not infect any medical device e.g. a ventilator.


I hope there are no medical devices (e.g. a ventilator) accessible from the public internet with a default password.


Hope, or expect?


Unfortunately, only hope.


Doh, now that you point that out... I misunderstood it. Thank you.



