Synopsis: When choosing between multiple A records, RFC 3484 changes the old "choose at random" behavior to a deterministic "choose the one with the longest matching address prefix". Given the huge proportion of home machines at 192.168.x.x, it will favor some A records over others.
Avoid the issue by not returning multiple A records with different address-prefix lengths when compared against 192.168 and 10. That will make host selection a little trickier.
The internet will be pronounced dead in 60 seconds.
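The prefix rule in question can be sketched in a few lines of Python. This is a minimal illustration of the "longest matching prefix" tie-breaker, not Vista's actual implementation, and the addresses are made up for the example:

```python
import ipaddress
import random

def common_prefix_len(a, b):
    """Number of leading bits shared by two IPv4 addresses."""
    xor = int(ipaddress.IPv4Address(a)) ^ int(ipaddress.IPv4Address(b))
    return 32 - xor.bit_length()

def pick_rfc3484(source, candidates):
    """New behavior: the record sharing the longest prefix with us wins."""
    return max(candidates, key=lambda c: common_prefix_len(source, c))

def pick_round_robin(candidates):
    """Old behavior: effectively a random record each time."""
    return random.choice(candidates)

# A NAT'd home machine at 192.168.1.10 choosing between two A records:
local = "192.168.1.10"
records = ["128.30.52.37", "192.0.2.10"]
print(pick_rfc3484(local, records))  # always "192.0.2.10" (8 matching bits vs 1)
```

Every 192.168.x.x client makes the same deterministic choice, so one A record absorbs all of that traffic while the other sits idle.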
@m_eiman: and/or allow users to subscribe to other HN users' synopses, such that I would only see my favorite commenters' synopses. And why use a mouseover? Simply place it to the right of the link, taking up all that space that is not being used yet; put the synopsis in a more transparent color than the other text, like the <0 down-modded comments.
Putting on the right would mean it can't be more than one line (give or take), and I think that's too short to be really useful. At least a few paragraphs are needed if the article isn't very short.
A mouseover might not be the best way, maybe a hidden div beneath the current metadata?
Ok. So the space is too short ( for a full synopsis ). But we ( as synopsis authors ) would invent shortcuts for rating and summarizing the content that would fit to the right or be truncated. And when a user clicks on comments, the full synopsis is always at the top of the comments.
And maybe the descriptors that different users would implement ( like their own tags/ratings for the content ) would become unique to that user, eventually, and have contextual meaning to that author. Each synopsis author would have their own tags that they could reuse. Only people who subscribed to that author could see their synopses, so their silly or cryptic or worthless synopsis descriptors would not clutter HN readers' experience unless they subscribed to a particular s-author ( synopsis author ).
So each user could have a customized right-hand HN site by subscribing to other synopsis authors and each synopsis author would have their own ways ( tags, most likely to start ) for communicating concise summaries/likes/dislikes of the content posted on HN.
But wouldn't a feature such as this take all the mystery out of HN headlines? Or would it add more mystery :) ?
Or would it add too much sub-culture to HN? How much would be just enough? ( I hope I am not too off-topic by now )
Opinion on these synopses seems to be strongly divided. Sometimes they get lots of upvotes and "that was helpful" replies. Sometimes they get snide comments about what a waste of time they are and how their authors are just looking for cheap karma. There doesn't seem to be much correlation between which of those happens and how helpful the synopsis actually is.
I think having two different voting mechanisms would cause more confusion than the benefit would justify. What about showing the first bit of (1) the original author's text, where present, or (2) the highest-rated toplevel comment, as a mouseover? Even when the highest-rated comment isn't "representative", it probably gives some idea of what the article is about.
When you make such dire claims, you kind of owe it to your audience to have a brief summary of what you mean by "death of the net" at the top of the article.
Found it at the end:
"So we're going to have to take a slight hit to our resilience and reduce the number of A records we return for a DNS lookup to one instead of two."
A dangerous trend. If 100% of provocative linkbait titles lead to crappy articles, then I can safely ignore them, but if, say, 85% of them lead to crappy articles and 15% to good ones, then I have to waste resources dereferencing pointers.
Or maybe I could just boycott linkbait titles altogether. If everyone does that the new way to get people to read your article would be... drumroll... to write an accurate title! Now that would be scary wouldn't it? The end of an era.
That doesn't save you the dereferencing, though. It'll actually add another dereferencing for 15% of the articles, although by scanning the comments instead of the article you don't give the writer the satisfaction of a higher page view count (and possibly related ad income). Which in turn, in an ideal world, would make linkbait titles less popular (in favor of some other scheme, of course).
> If 100% of provocative linkbait titles lead to crappy articles, then I can safely ignore them, but if, say, 85% of them lead to crappy articles and 15% to good ones, then I have to waste resources dereferencing pointers.
That's an interesting and practical idea. To take your suggestion one step further, I propose that submissions of good articles should from now on include [Good article] in the title; crappy articles should include [Crappy article]. That way, I can only click on the links tagged [Good article] and ignore the crappy ones. (If the idea turns out to be popular, this can be automated with two checkboxes in the submission form.) What do you think?
So we're going to have to take a slight hit to our resilience and reduce the number of A records we return for a DNS lookup to one instead of two.
How does that improve matters? Even if more or less every Vista user used the IP that was closest to 192.x, wouldn't other users still see the benefit of multiple records? In fact, they could make non-Vista systems prefer the other addresses by effectively using duplicates of the other addresses. It costs a few extra IPs, but surely that's not a problem for a site with millions of page views?
In case that wasn't clear, say they have three A entries (call them A, B, and C). Vista will always use A; other systems will use each with 1/3 probability. If servers B & C were each aliased to IPs very similar to the ones they already have, Vista will still prefer A, but other systems will choose B or C twice as often as A, compensating for some of the extra load. You could play this game further if you wanted to.
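A rough simulation of the aliasing trick shows how it shifts non-Vista load. All the IPs here are hypothetical placeholders, and a non-Vista resolver is modeled as a uniform random pick over the returned records:

```python
import random
from collections import Counter

# Hypothetical addresses: server A holds the IP that Vista's prefix rule
# favors; servers B and C each get a second alias on a nearby IP.
backend = {
    "198.51.100.1": "A",
    "203.0.113.10": "B", "203.0.113.11": "B",  # B's alias
    "192.0.2.20": "C", "192.0.2.21": "C",      # C's alias
}
records = list(backend)

# A non-Vista resolver picks uniformly among all five records, so B and
# C each now serve twice as many of those clients as A does.
random.seed(0)
hits = Counter(backend[random.choice(records)] for _ in range(100_000))
print(hits)  # roughly 20% A, 40% B, 40% C
```

Vista clients still pile onto A, but the duplicated records steer roughly 80% of everyone else toward B and C instead of the 67% they would get otherwise.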
An orthogonal solution is to get hold of some more servers that have an IP with the same "distance" as A. Vista should distribute traffic evenly across these; combined with the duplication suggested above to draw other OSes to the non-vista servers, it should be possible to make this less of a problem.
Still, if big companies are affected by this, I'm sure they can pressure Microsoft into abandoning this method. Actually, the author mentions that Windows 7 seems to have a more sane policy, so you'd just need such hacks until Vista fades back into insignificance.
I would argue that this is still an implementation issue with Vista's networking stack. The RFC assumes that the DNS resolver (the client) has a public IP. This is usually not the case with most clients, and Microsoft (et al.) knows this. It's just an edge case that has now been brought to light.
The solution to this is pretty simple (at least in the case of IPv4): identify and use the public IP of a session instead of the private one. I know that public IP identification behind a NAT is problematic, and certainly not that robust (SOAP-based UPnP IGD is pretty ooky), but if it doesn't work, fall back to round robin. Not too hard.
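A minimal sketch of that fallback logic, assuming a discovery helper that may fail. `discover_public_ip` is a hypothetical stand-in here (a real implementation would query the gateway, e.g. via UPnP IGD), not an actual API:

```python
import ipaddress
import random

def common_prefix_len(a, b):
    """Number of leading bits shared by two IPv4 addresses."""
    xor = int(ipaddress.IPv4Address(a)) ^ int(ipaddress.IPv4Address(b))
    return 32 - xor.bit_length()

def discover_public_ip():
    """Hypothetical stand-in for asking the NAT gateway for its external
    address; returns None when discovery fails."""
    return None

def order_destinations(records, public_ip=None):
    """Prefer records closest to our *public* address; if we can't learn
    it, fall back to the old round-robin (random) ordering."""
    public = public_ip or discover_public_ip()
    if public is not None:
        return sorted(records, key=lambda r: common_prefix_len(public, r),
                      reverse=True)
    shuffled = records[:]
    random.shuffle(shuffled)
    return shuffled
```

With the public address known, prefix matching actually reflects network topology; without it, you at least recover the old load-spreading behavior instead of a bogus deterministic pick.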
Of course, while the solution is not too hard, it's up to a handful of large enterprises to deal with (people who make operating systems). And the rest of us are the ones having to hack around the problem in the meantime, which kinda sucks.
Showing that even international standards can be poorly thought out, and fail in real-world application. And when they do, they are hard to go back and fix.
Respectfully speaking, I might have a different take on waterfall. It reminds me of what Churchill said about democracy, that "it is the worst form of government except for all those other forms that have been tried from time to time". Sure, waterfall is not perfect, and bad design, like in the case here with IP address selection, leads to bad results. But its competitors like agile and so on really have not sold me.

Agile leads to a perpetual mentality of being in beta mode, code does not get frozen when it should, and customers don't have a clear idea of what goes into what release. It is murky, and that means more work ends up being done for less money. Not to mention, from a QA point of view, QA is an equal part of the process in waterfall, whereas it tends to be a red-headed stepchild in agile environments. In waterfall, at least you have a clear idea of what is going to the customer, and what the bugs are, and the customer knows it too.

Of course sometimes poor requirements end up going out and that is just life, but rigor is the key. Rigorous marketing requirements lead to rigorous technical specifications, which lead to rigorous architectures, unit tests, and actual code. Likewise, rigorous requirements and technical specifications lead to properly designed test cases, which reduces bugs. Perfection does not exist of course, but clearly defined goals in a rigorous process help towards that end.

But this case is really not so much a failure of waterfall; it more brings up the issue of which technical standards to adhere to. Companies as big as Microsoft should be able to afford an R&D type department filled with PhD types to figure these things out, so when the time comes to turn marketing requirements into technical requirements, the people doing that will have a clear idea of which standards to use for what tasks. Ad hoc choosing of standards is what created this debacle. The answer in my opinion is not switching from waterfall to agile.
The answer is to earmark a team tasked solely with researching, experimenting, and so forth with industry standards, like in this case address-selection standards, and have that team interface with the design team. Again, big companies that make operating systems ought to be able to do that.