Hacker News | aragonite's comments

Claude Code now hides thinking as well unless you turn on an undocumented setting:

https://github.com/anthropics/claude-code/issues/31326#issue...

https://x.com/nummanali/status/2032451025500528687


Do long sessions also burn through token budgets much faster?

If the chat client is resending the whole conversation each turn, then once you're deep into a session every request already includes tens of thousands of tokens of prior context. So a message at 70k tokens into a conversation is much "heavier" than one at 2k (at least in terms of input tokens). Yes?


That's correct. Input caching helps, but even then, at e.g. 800k tokens with all of them cached and a cached-input price of $0.50 per million tokens, that's $0.50 * 0.8 = $0.40 per request, which adds up really fast. A "request" can be e.g. a single tool call response, so you can easily end up making many $0.40 requests per minute.

Interesting, so a prompt that causes a couple dozen tool calls will end up costing in the tens of dollars?

It essentially depends on how many back-and-forth calls are required. If the model returns a request for multiple calls at once, then the reply can contain all responses and you only pay once.

If the model requests tool calls one-by-one (e.g. because it needs to see the response from the previous call before deciding on the next) then you have to pay for each back-and-forth.
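To put numbers on it, here's a rough sketch of the input-token cost (the $0.50 per million cached input tokens is just the price assumed in the example above, not any particular provider's actual pricing):

```python
# Rough cost model for an agentic session where every tool-call
# round trip resends the whole (cached) conversation prefix.
# Price assumption (hypothetical): $0.50 per million cached input tokens.
CACHED_PRICE_PER_MTOK = 0.50

def session_cost(context_tokens: int, round_trips: int) -> float:
    """Input-token cost when each round trip resends the full context."""
    per_request = context_tokens / 1_000_000 * CACHED_PRICE_PER_MTOK
    return per_request * round_trips

# One batched request covering 24 tool calls: pay for the context once.
batched = session_cost(800_000, 1)       # $0.40
# 24 sequential one-at-a-time tool calls: pay for the context 24 times.
sequential = session_cost(800_000, 24)   # $9.60
```

This is why batching tool calls matters so much: the context you pay for dwarfs the tool responses themselves.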

If you look at popular coding harnesses, they all use careful prompting to try to encourage models to do the former as much as possible. For example opencode shouts "USING THE BATCH TOOL WILL MAKE THE USER HAPPY" [1] and even tells the model it did a good job when it uses it [2].

[1] https://github.com/anomalyco/opencode/blob/66e8c57ed1077814c... [2] https://github.com/anomalyco/opencode/blob/66e8c57ed1077814c...


Not necessarily. Take a look at e.g. the OpenAI Responses API: you can get multiple tool calls in one response and of course reply with multiple results.

If you use context caching, it saves quite a lot on costs/budgets. You can cache 900k tokens if you want.

> I guess it's hard for me to edit things that I don't see right in front of me or aren't super simple changes (like name changes). Or at least, basic things I can reason about (such as finding by regex then deleting by textobject or something).

This is actually what's nice about tools like ast-grep. The pattern language reads almost like the code itself so you can see the transformation right in front of you (at least for small-scale cases) and reason about it. TypeScript examples:

  # convert guard clauses to optional chaining
  ast-grep --pattern '$A && $A.$B' --rewrite '$A?.$B' --lang ts

  # convert self-assignment to nullish coalescing assignment
  ast-grep --pattern '$X = $X ?? $Y' --rewrite '$X ??= $Y' --lang ts

  # convert arrow functions to function declarations
  # (needs separate patterns for async & return-type-annotated ones, though)
  ast-grep --pattern 'const $NAME = ($$$PARAMS) => { $$$BODY }' --rewrite 'function $NAME($$$PARAMS) { $$$BODY }' --lang ts

  # convert indexOf checks to .includes()
  ast-grep --pattern '$A.indexOf($B) !== -1' --rewrite '$A.includes($B)' --lang ts
The $X, $A etc. are metavariables that match any AST node. If the same metavariable appears twice (e.g. $X = $X ?? $Y), both occurrences must bind to the same code, so `x = x ?? y` will match but `x = y ?? z` won't. You can do far more sophisticated things via YAML rules, but those are less visually intuitive.
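For reference, the first rewrite above can also be expressed as a project YAML rule (a minimal sketch; the rule id and filename are made up):

```yaml
# prefer-optional-chaining.yml
id: prefer-optional-chaining
language: TypeScript
rule:
  pattern: $A && $A.$B
fix: $A?.$B
```

which you'd then run with something like `ast-grep scan --rule prefer-optional-chaining.yml`.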

Sadly coding agents are still pretty bad at writing ast-grep patterns probably due to sparse training data. Hopefully that improves. The tool itself is solid!


While we're at it, https://github.com/semgrep/semgrep has been around for several years, too.

Came here to recommend ast-grep too!

If your editor of choice supports an extension (VSCode does, for example), it's a very easy on-ramp to better search/replace than regex offers. It's syntax-aware, so you don't need to care about whitespace, indentation, etc. Very easy to dip your toes in where a regex would get complex fast or require multiple passes.

I converted a codebase from commonjs to esm trivially with a few commands right after first installing it. Super useful.

I hope LLMs eventually start operating at this level rather than on raw text, and likewise leverage the language server to take advantage of built-in refactorings etc.


I don't know which editors support this, but there's also, for lack of a better word, context-aware grep: for example, search and replace foo with bar, but only inside strings, or only inside comments, or only variables, or only methods, not class names (maybe ast-grep does this).

> Do websites want to prevent automated tooling, as indicated by everyone putting everything behind Cloudfare and CAPTCHAs since forever, or do websites want you to be able to automate things? Because I don't see how you can have both.

The proposal (https://docs.google.com/document/d/1rtU1fRPS0bMqd9abMG_hc6K9...) draws the line at headless automation. It requires a visible browsing context.

> Since tool calls are handled in JavaScript, a browsing context (i.e. a browser tab or a webview) must be opened. There is no support for agents or assistive tools to call tools "headlessly," meaning without visible browser UI.


That really just increases the processing power required to automate it. VM running Chrome to a virtual frame buffer, point agent at frame buffer, automate session. It's clunky, but probably not that much more memory intensive than current browser automation. You could probably ditch the frame buffer as well, except for giving the browser something to write out to. It can probably be /dev/null.


Claude Code by default auto-deletes local chat/session logs after 30 days, so the claim that this tool can recover "any file Claude Code ever read/edited/wrote" is only true within that retention window, unless you've explicitly changed the setting ("cleanupPeriodDays", see [1]).

Speaking as someone who's derived a lot of value from these logs, it's a bit shocking that the default is to wipe them automatically!

[1] https://simonwillison.net/2025/Oct/22/claude-code-logs/


Yes, as soon as I noticed that, I changed the setting to 9999 days. Luckily I was still within that 30-day window. But true, the retention window is indeed a factor in the chances of recovery.


Wow I had been trying to find an old session for quite some time, thanks for this.


Hope you got what you needed!


Good to know. I'm thinking about making an MCP tool/skill for searching them, and I guess that same tool should be archiving them properly too.


IME more likely cpptools (which comes with vscode) than clangd.

Relevant: https://news.ycombinator.com/item?id=43788332


That's correct. Clangd doesn't churn nearly as hard as cpptools, but it's also not nearly as good as cpptools.


In the preprint they write:

> ... we observe extreme inequality in attention distribution. The Gini coefficient of 0.89 places HN among the most unequal attention economies documented in the literature. For comparison, Zhu & Lerman (2016) reported Gini coefficients of 0.68–0.86 across Twitter metrics. ... The bottom 80% of posts [on HN] receive less than 10% of total upvotes. (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5910263)

This could probably be explained by HN's unique exposure mechanism. Every post starts on /newest, and unless it gets picked up by the smaller group of users who browse /newest, it never reaches the front page where the main audience is. In most forums/subreddits, by contrast, a new post (unless it gets flagged as spam) usually gets some baseline exposure to the main audience before it sinks. On HN the main audience is downstream of an early gate, and missing that gate leaves a post effectively invisible. IMO this fact alone could explain why "attention inequality" looks more extreme on HN.
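For reference, the Gini coefficient cited in the preprint can be computed directly from per-post upvote counts; a minimal sketch:

```python
def gini(values):
    """Gini coefficient of a distribution (0 = perfect equality,
    approaching 1 = one item receives everything)."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    # Standard rank-weighted formula over the sorted values.
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2 * weighted / (n * total) - (n + 1) / n

gini([10] * 100)        # 0.0  -- every post gets equal attention
gini([0] * 99 + [100])  # 0.99 -- one post takes everything
```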


This also explains how early performance can be predictive despite the lack of preferential attachment.


Some time ago I noticed that in Chrome, every time you click "Never translate $language", $language quietly gets added to the Accept-Language header that Chrome sends to every website!

My header ended up looking like a permuted version of this:

  en-US,en;q=0.9,zh-CN;q=0.8,de;q=0.7,ja;q=0.6
I never manually configured any of those extra languages in the browser settings. All I had done was tell Chrome not to translate a few pages on some foreign news sites. Chrome then turned those one-off choices into persistent signals attached to every request.
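Each "never translate" choice appends another language tag with a lower q-value, and that ordered list is exactly what every server sees. A sketch of how such a header decomposes (a simplified parser that ignores edge cases like wildcards and whitespace variants):

```python
def parse_accept_language(header: str):
    """Split an Accept-Language header into (tag, quality) pairs,
    highest preference first. q defaults to 1.0 when omitted."""
    langs = []
    for part in header.split(","):
        piece = part.strip().split(";q=")
        tag = piece[0]
        q = float(piece[1]) if len(piece) > 1 else 1.0
        langs.append((tag, q))
    # sorted() is stable, so equal q-values keep their original order.
    return sorted(langs, key=lambda pair: -pair[1])

parse_accept_language("en-US,en;q=0.9,zh-CN;q=0.8,de;q=0.7,ja;q=0.6")
# [('en-US', 1.0), ('en', 0.9), ('zh-CN', 0.8), ('de', 0.7), ('ja', 0.6)]
```

Five tags in a specific order is already a lot of distinguishing information compared to the typical one- or two-entry header.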

I'd be surprised if anyone in my vicinity shares my exact combination of languages in that exact order, so this seems like a pretty strong fingerprinting vector.

There was even a proposal to reduce this surface area, but it wasn't adopted:

https://github.com/explainers-by-googlers/reduce-accept-lang...


This is a general problem: software trying to guess what you mean. (It is not specific to this feature; it applies to computer programs in general, and this is one specific case of it.) Just because you do not want it to translate a language automatically does not necessarily mean that you can read it or that you want to request documents written in it. Fingerprinting is not the only issue with this.


Is Chrome assuming that, since you don’t want it to translate those pages/languages, you can read them and want them declared in your header? Interesting


I'd read it more generously than that. I think Chrome is trying to stop the server from choosing the language for you. By sending an Accept-Language header (which your browser does regardless of which one you use; it's not a Chrome thing), you let the server return the page in a language you've said you'll accept. By adding the languages you've told Chrome not to translate, it's attempting to show you pages in languages you want.

I imagine Chrome is really adding the language to your browser preferences when you choose not to translate a page, and the HTTP client in the browser is generating request headers based on your preferred languages. A small (and largely unimportant) semantic point, but it's possible that the Google translate team weren't aware of how adding a preferred language might impact user privacy. That isn't to excuse the behaviour; they should have checked.


PSA: Don't use Chrome.


Translating pages is literally the only thing I use Chrome for. The built-in translation works way better than other browsers, even though they also use Google Translate.


Firefox does not use Google Translate and performs the translation locally, which works great for the most common languages out there. For the less common ones you still have to go to Google Translate, but IME it's definitely not worth changing the browser to Chrome over.


Yeah I really like the Firefox translate. A rare win for recent Firefox.


I don't really like Firefox translate, despite having made the switch many years ago. For a long time it didn't have the (European) language of the country I live in. Now it does. But every time I want it to translate, I have to manually find both languages in the insanely long dropdowns. It will not save the direction I want, yet impressively seems to always save it in the other direction...


> works great for the most common languages out there

Most of the time when I tried it the Firefox translations were obviously wrong or nonsense.


Ditching Chrome is something we need to teach everyone.

The DOJ is totally spineless and refuses to squash Google's absurd monopoly on the internet. We are literally the last line of defense, even though we really don't amount to much.

Perhaps we could start a grassroots movement.


You don’t need a grassroots movement when other movements doing this exact thing already exist; in fact, starting a new one is likely counterproductive. The Mozilla Foundation is the organization you want to support, or the EFF.


> Mozilla Foundation is the organization you want to support

Mozilla Foundation is rudderless. I'm convinced the leadership are all Google plants who are keeping the "antitrust litigation sponge" from doing anything damaging to Chrome.


The new built-in translation in Firefox works pretty well! I never need to fall back to the others, although forcing it to translate has weird UX.


Sorry, but you're using a Google browser and a Google translation service when excellent alternatives to both exist. What did you expect regarding privacy?

A clueless person might not know any better, but you clearly do, and also you seemingly care. So why do you use Google all the same?


Safari does not use Google Translate and it works well. It even translates text on images BTW!


I don’t think safari uses google translate


There is an extension called twp or something like that for firefox. IME it is pretty good


PSA: only use Mullvad Browser or Tails, which are set up to be as bland and uniform as possible.


As uniform as possible is exactly the wrong way to go. It only takes one overlooked or newly discovered data point to make everyone trying to look identical suddenly distinct. New fingerprinting techniques are being implemented all the time, so why take chances when it's far easier to randomly change a browser's fingerprint for each site/connection, making it much harder to track any one browser over time?


Except I don't want to be flagged as a bot when I'm just visiting some website in my browser. (I also don't want to be flagged as a bot when I'm scraping some website with a bot).


Definitely a good first step, but it's not like Firefox and Safari are fingerprinting-proof.


Firefox does pretty damn well though, especially with privacy.resistFingerprinting set to true


Every time I manually touched the "fingerprinting" about:config settings, my entropy went up. I used the EFF site to test: https://coveryourtracks.eff.org/

AFAIK some of these options are there to be used by the Tor browser, which comes with strict configuration assumptions, and it doesn't translate well to normal Firefox usage. Especially if you change the window size on a non-standardized device. Mind you, the goal is not to block fingerprinting, but to not stand out. Safari on a macbook is probably harder to fingerprint than Firefox on your soldering iron.

However, judging by the fact that every data hungry website seemingly has a huge problem with VPN usage, I'd presume they are pretty effective and fingerprinting is not.


I've had good success with tracking-tool tests and resistFingerprinting. Granted, I usually use it with uMatrix/NoScript, which cuts down on the available data a lot and maybe makes it an unfair test. One issue, I expect, is simply that not enough people use resistFingerprinting to add variation to the mix. It's off by default, only a small % of users use Firefox, and an even tinier percentage of those use resistFingerprinting (unlike Tor, where probably most people on the network use the Tor Browser), so it's likely that simply blocking things is a fingerprint all on its own. The solution there would be to get more people using it :)

I will say one downside to using it is that far more bot-detection websites freak out over the generic information being returned to them, causing some sites to break (some of its settings break WebGL games too, due to the low values reported). Using a different profile avoids this, as does explicitly whitelisting certain sites in privacy.resistFingerprinting.exemptedDomains. Obviously, if a site uses a generic tracking service for bot detection, that kills a fair amount of the benefit of the flag, so a separate profile might be best. I wish Firefox had a container option for this.

... And I'm not too sure what you mean by changing the window size on a non-standardised device. They do try to ensure window sizes land at standard intervals, as if fullscreened at typical widths, to reduce fingerprinting, but surely that applies to using Tor too? I mean, people don't use Tor on dedicated monitors at standard sizes.


Oh, and a bit of followup. I tried the EFF cover your tracks on a Firefox profile with resist fingerprinting, and almost all the bits of identifying information came from the window size (which EFF considers "brittle") and the UA (I was testing in Firefox Nightly).

Apparently you need to add the hidden pref: privacy.resistFingerprinting.letterboxing

Enabling letterboxing knocked off 5 bits of identifying information. Apparently my 1800px wide letterbox was still pretty identifiable, but, an improvement.

Setting a Chrome user-agent string using a user-agent string manager dropped that one from ~12 bits to <4 bits. Of course, that has the disadvantage of reducing Firefox's visibility online further, and of probably being more recognisable in combination with the other values (like Mozilla in the WebGL info). Using Firefox stable for Windows was <5 bits, so probably best to use that if on Linux, although it might conflict with the font list unless a Windows font list was pulled in.
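For context, the "bits of identifying information" these tests report are surprisal: a trait shared by a fraction p of browsers contributes -log2(p) bits, and bits from independent traits add up. A quick sketch:

```python
from math import log2

def identifying_bits(fraction_sharing_trait: float) -> float:
    """Bits of identifying information carried by a single trait
    shared by the given fraction of browsers (surprisal)."""
    return -log2(fraction_sharing_trait)

identifying_bits(1 / 4096)  # 12.0 -- e.g. a distinctive user agent
identifying_bits(1 / 2)     # 1.0  -- a trait half of all browsers share
# ~33 bits total from independent traits is enough to single out
# one browser among ~8.6 billion.
```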


privacy.resistFingerprinting has potentially-unwanted side-effects, like wiping out most of your browser history (instead of the more sensible approach of just disabling purple links). I also recall something about it getting removed or nerfed, though I'm not sure whether that was a mere proposal.


It does not wipe your browser history. I can definitely attest to that since my generic JS active + resistFingerprinting profile has a history going back years. It does set your timezone to UTC in JS on websites. I've mostly encountered that when playing Wordle ;)


It also does (or at least used to) mess with dates, due to it attempting to hide what time zone you're in.


The browser should reasonably know what time zone you're in and what time zone you're reporting to the website and translate between them automatically.


Yeah, "should". Too bad it's infeasible. As soon as you e.g. print the current date as part of a paragraph somewhere, the browser loses track of it, and the website can just read the element's content and parse it back.


What about DuckDuckGo? We need a simple chart: 1. which browsers are good at resisting fingerprinting; 2. for each browser, whether it works on Android, iOS, macOS, Windows, and Linux; 3. what settings are needed to achieve this.

For bonus points: is there any way to strip all headers in Chrome, or to control them better?


This is my question also. I tend to not use apps, use DuckDuckGo browser.

I sometimes do use Safari which is a more convenient browser - it would be ironic if DDG browser is less private than Safari.


Modern Safari is pretty damned good at randomizing fingerprints with Intelligent Tracking Prevention. With iOS 26 and macOS 26, it's enabled in both private and non-private browser windows (it used to be private mode only).

All "fingerprint" tests I've run have returned good results.


Unfortunately, it's closed source and only available on Apple devices.


I haven’t tried 26, but I remember it didn’t used to be so great.


Tor Browser (based on Firefox) is.


That will just make you stand out more.


You can change the reported UA header independently of the UA you use.


If I were a fingerprinting company, I'd be cross-referencing signals between browsers for sure.

If the browser header says Windows but the available fonts say Linux, that's a very distinctive signal.

And if the UA says Chrome but some other signal says not-Chrome, that's very distinctive as well.


Surely this is true, but if you’re a fingerprinting company aren’t you making so much money violating the privacy of the masses that it’s not worth your time going after the tiny set of Freedom Nerds trying to evade you?


They aren't specifically going after you... they just try to create a unique hash from everything they can, and by doing weird things to your system you are making that hash all the more unique.


Yeah, and my passwords are so obvious and stupid, nobody's gonna guess them!

I think you are falling for a technical fallacy. It's not costing them any more time.


You said it better than I did.


You can change the header, but browser developers are not that dumb: they added properties like "navigator.platform" which do not change and immediately give you away. Consider writing a browser extension to patch these properties. Also, I think the DRM module (Widevine) that is bundled with browsers can report the actual software version too. Sadly it is undocumented, so I don't know what information it can provide, but I notice warnings from Firefox about attempts to use DRM on various sites like Yandex Market.


The article also mentions this, and suggests the UA is not a silver bullet. That said, they didn’t go into specifics. I’m assuming there are other details that correlate with particular browsers and will betray a false UA. Plus, having a UA that says Chrome while including an extension that’s exclusive to Safari (for example) will not only contradict the UA, it will also be a highly distinctive datapoint for fingerprinting in and of itself.


Don't use the same browser regardless; the key is to compartmentalise.


I only use it when I want to be tracked.


Using Chrome and caring about privacy? I thought, after Google killed uBlock Origin, it had become beyond clear these two things were incompatible, https://news.ycombinator.com/item?id=41905368


Most people using chrome are also using Google's DNS servers too which hands them a list of every single domain you visit.


uBlock Origin just got replaced with uBlock Origin Lite for most people.


Which, by design, doesn't protect you from actual spying, https://github.com/uBlockOrigin/uBOL-home/wiki/Frequently-as...


There's a way to force-load uBO in Chromium, but you need to download the extension by hand (git clone it from GitHub) and load it in "developer mode" in the extension settings. Also, you need to enable some legacy extension-related options in about:flags.


Which really puts a massive spotlight on you.


How does it determine the order?

Clearly it thinks you prefer Chinese to German. Was that correlated with the frequency of your requests on Google Translate? With your browsing history? With your shopping history?


$lang_header = $lang_header + $the_lang_choice_that_was_just_made


Hmmm... YouTube has been getting confused about the language and displaying random languages for the closed captions on videos. This was happening to me across smart TVs, but I access YouTube randomly from various devices and browsers... mostly Chrome when using a browser.


> There was even a proposal to reduce this surface area, but it wasn't adopted:

>> Instead of sending a full list of the users' preferred languages from browsers and letting sites figure out which language to use, we propose a language negotiation process in the browser, which means in addition to the Content-Language header, the site also needs to respond with a header indicating all languages it supports

Who thought that made sense? Show me the website that (1) is available in multiple languages, and also (2) can't display a list of languages to the user for manual selection.


What language do you put that list in? Would you still want to show it to every visitor when you know most of them speak a particular language?

I used to do some work in this area. The first question is difficult, and the answer to the second is no. We had the best results when we used various methods to detect the preferred language and then put up a language selector with a welcome message in that language. After the visitor made a selection, it would stick on return visits.


> What language do you put that list in? Would you still want to show it to every visitor when you know most of them speak a particular language?

Judging by... a large number of websites, you make the list available in a topbar, and each language is named in itself. You don't apply one language to the entire list.

Here's the first page that popped into my head as one that would probably offer multiple languages (and it does!):

https://www.dyson.com/en

They've got the list in a page footer instead of a header, but otherwise it's an absolutely standard language selector. It does technically identify countries rather than languages. The options range from Azərbaycan to Україна. They are -- of course -- displayed to every visitor.

Why would you want to force someone to consume your website in the wrong language?

And why would the list be in a single language, again?


You’re looking at it from the perspective of someone who understands the language the site defaults to. Most non-native speakers have a hard time finding the link, and they leave.


No, I'm looking at it from the perspective of someone who has needed to use that language selector in the past. Understanding the language the site defaults to wouldn't help, because the selector doesn't use that language anyway.

> Most non-native speakers have a hard time finding the link

You might notice the colorful flag right next to it.


Flags are a terrible way to indicate language. At best, they are unclear. At worst, they can be offensive.

Assuming you are a US company catering to non-English speakers in the US, which flag would you use for Spanish? Which flags would you use to differentiate between Mandarin and Cantonese? What would you do in Canada where they speak English and French? Show a French flag?


Except they're recognizable across languages. Faced with a UI in a language I don't know, going to settings -> languages -> my preferred language is a total guessing game. Meanwhile, if I'm confronted by a UI that has a tiny flag icon at the top, I know I can click on it and get to something familiar. Yes, someone looking to get offended can nitpick your flag choice, but a Spanish flag vs. a Mexican flag for Spanish will at least let the user get to something closer to what they know, even though there's quite a bit of difference on the ground between Spanish in Spain and Spanish in Mexico. If your internationalization team is well funded enough to offer both, then show both flags. Same for UK English and American English; Simplified Chinese, Traditional Chinese, and Cantonese; and yes, Québécois French and the French of France. Offer as many flags as you actually have translations for. If you can have both a Chinese flag and a Hong Kong flag, users will appreciate it. A two-level menu is also an option: click on the Canadian flag, which then offers Français and English.


Well, one of us has done research and work in this area. I don’t know what you’ve been doing. All of your suggestions perform poorly in the real world.


You can determine a user's language from the IP address location. Of course, there are users with VPNs, but they are probably used to seeing foreign content. For example, YouTube shows me advertisements in a language I don't understand despite my language header saying I only understand "en-US" and "en". So this header is unnecessary; even YouTube ignores it.

Also, when using a VPN, Google typically uses a language based on IP address, not my language header. I assume the header is only useful for fingerprinting today.


> You can determine user's language from IP address location.

There are reasons why it might not work (VPN is only one of them; there are others such as places with multiple languages, people traveling to foreign countries, and others), although it is also a bad idea for other reasons as well.

If the user specifies the language then you should use that one. I think it would probably be better to use the following order of figuring out which language you should want:

1. If the URL specifies the language to use, then use the language specified by the URL.

2. If the language is not specified by the URL, use the language specified by any cookies that are set for the purpose of selecting the language.

3. If the language is not specified by URL or cookies, but the user is logged in and the user account has a language setting, use the language specified by the user account. (If TLS client authentication is being used, then you might consider adding an extension into the client's X.509 certificate to select the language.)

4. If the language is not specified by URL or cookies or the user's account, or the user is not logged in, use the Accept-Language header.

5. If the language is not specified by URL or cookies or the user's account, or the user is not logged in, or the Accept-Language header is not present or cannot be parsed or does not specify any language that the request file is available in, then use the default, such as the language that it was originally written in.
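That fallback order can be sketched as follows (all function and parameter names here are made up for illustration):

```python
def pick_language(url_lang, cookie_lang, account_lang, accept_header,
                  available, default):
    """Resolve the page language using the priority order above:
    URL > cookie > account setting > Accept-Language > site default."""
    # Steps 1-3: explicit choices, in priority order, if the site has them.
    for choice in (url_lang, cookie_lang, account_lang):
        if choice in available:
            return choice
    # Step 4: first Accept-Language entry the site actually supports.
    for tag in accept_header:
        if tag in available:
            return tag
    # Step 5: fall back to the language the page was written in.
    return default

pick_language(None, "de", None, ["en-US", "en"], {"de", "en"}, "en")  # 'de'
pick_language(None, None, None, ["fr", "en"], {"de", "en"}, "de")     # 'en'
```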


> You can determine user's language from IP address location.

I live in Hyderabad, Telangana, India. I do not yet speak enough Telugu or Hindi or Urdu to be useful, and cannot read Hindi or Urdu at all; but I’m a foreigner who grew up on English only, rather rare around here, so let’s consider native Indians instead. Many can speak these languages but not read them in their native scripts, only romanised (in which case they can probably speak English tolerably). And many (many) come from other parts of India (or even Nepal) and can’t speak Telugu. Or are Muslim and at least prefer to deal in Hindi, often not having very good Telugu. And so on. It’s messy.

Some IP geolocation doesn’t even get the city right—I’ve seen Noida suggested, which is up north in Hindi territory.


More and more websites with international audiences literally do this themselves, putting up a language (sometimes even currency) select box when they detect your settings don't match the page you're on.

Why not have this negotiation implemented at the browser level?


Because that prevents all of your users from selecting the language they want. It's a terrible idea with no upside and a not-high-but-still-not-zero downside.


It doesn't, because that's an optional negotiation. Try Apple.com in a different country/locale than yours, you'll see how it behaves.


It has a remarkably inconspicuous language selector, also using the names of countries rather than languages, located in the page footer. Compared to Dyson, Apple's list of country names is much more willing to use English in preference to whatever someone from that country would call it. This isn't consistent; many countries are rendered in their own language (日本 / Ελλάδα) and many aren't (Georgia / Kazakhstan).

The page defaults to the locale that you request in the URL. https://www.apple.com/ shows up in English, regardless of your country;† https://www.apple.com/bg/ shows up in Bulgarian. Switching your preferred location simply takes you to the page for that location. (Dyson does the same thing.) Some locations support more than one language; there's https://www.apple.com/lae/ for Latin America (English) and https://www.apple.com/la/ for Latin America (Spanish). If you're on the page for a location like this, a language selector (with language names) displays next to the location selector. In the case of Latin America, only two languages are supported, and the language selector automatically displays "Español" if you're on the English site and "English" if you're on the Spanish site, which makes sense but won't generalize.

Apple's selector is inconspicuous because it refuses to display flags, which I would guess is due to much higher political exposure than Dyson. So it's lower-quality in two ways, but fundamentally the same approach. The user asks for a language, and the site honors that.

Given that I presented Dyson as an example of doing language selection correctly, I'm confused about what you wanted me to see on apple.com. They're trying to do the right thing, but less effectively.

† I tested this by accessing the site(s) from Mongolia, Vietnam, and Morocco using ExpressVPN.


That was my point. Not comparing Apple/Dyson/whatever, but showing that websites do have this need.

If this was designed and implemented as a standard at the browser level, we would get something better in the end, rather than re-implementations on each and every website.


No, you wouldn't. Having it done by the browser means it sucks. That is a very, very, very bad idea. You need to do it on the website.


Sure, if that suits you...


>In other words, I believe the reason this code is hard to read for many who are used to more "normal" C styles is because of its density; in just a few dozen lines, it creates many abstractions and uses them immediately, something which would otherwise be many many pages long in a more normal style.

I also spent some time with the Incunabulum and came away with a slightly different conclusion. I only really grokked it after going through and renaming the variables to colorful emojis (https://imgur.com/F27ZNfk). That made me think that, in addition to informational density, a big part of the initial difficulty is orthographic. IMO two features of our current programming culture make this coding style hard to read:

(1) Most modern languages discourage or forbid symbol/emoji characters in identifiers, even though their highly distinctive shapes would make this kind of code much more readable, just as they do in mathematical notation (there's a reason APL looked the way it did!).

(2) When it comes to color, most editors default to "syntax highlighting" (each syntactic category gets a different color), whereas what's often most helpful (especially here) is token-based highlighting, where each distinct identifier gets its own color. (This was pioneered, afaik, by Sublime Text, which calls it "hashed syntax highlighting"; it's sometimes called "semantic highlighting," though that term was later co-opted by VSCode to mean something quite different.)

Once I renamed the identifiers so that it became easier to recognize them at a glance by shape and/or color, the whole thing became much easier to follow.
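For the curious, the "hashed syntax highlighting" idea is simple to sketch: derive a stable color from a hash of the identifier's name, so the same name always gets the same color everywhere it appears. The hash and hue mapping below are illustrative choices, not what Sublime Text actually uses:

```javascript
// Map an identifier name to a stable hue on the HSL color wheel.
function identifierHue(name) {
  let hash = 0;
  for (const ch of name) {
    hash = (hash * 31 + ch.codePointAt(0)) >>> 0; // simple rolling hash
  }
  return hash % 360;
}

// Fixed saturation/lightness keeps every color readable on a light
// background; only the hue varies per identifier.
function identifierColor(name) {
  return `hsl(${identifierHue(name)}, 70%, 40%)`;
}
```

Because the color is a pure function of the name, two occurrences of the same variable always match, which is exactly the "recognize it at a glance" property described above.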


I've experimented a few times with coloring my variables explicitly (using a prefix like R for red, hiding the letters, etc) after playing with colorforth. I agree getting color helps with small shapes, but I think the colors shouldn't be arbitrary: every character Arthur types is a choice about how the code should look, what he is going to need, and what he needs to see at the same time, and it seems like a missed opportunity to turn an important decision about what something is named (or colored) over to a random number generator.


> (1) Most modern languages discourage or forbid symbol/emoji characters in identifiers

> (2) When it comes to color,

Call me boomer if you wish, but if you can't grasp the value of having your code readable on a 24-row by 80-column, black-and-white screen, you are not a software developer. You are not even a programmer: at most, you are a prompt typist for ChatGPT.


While I agree that, if the function at hand can’t fit in a 25x80 window, it most likely should be broken into smaller functions, there are kinder ways to say that.

I also joke God made the VT100 with 80 columns for a reason.


... For the reason that IBM made their 1928 card with 80 columns, in an attempt to increase the storage efficiency of Hollerith’s 45-column card without increasing its size?

That said, ~60 characters per printed line has been the typographer’s recommendation for much longer. Which is why typographers dislike Times and derivatives when used on normal-sized single-column pages, as that typeface was made to squeeze more characters into narrow newspaper columns (it’s in the name).


The fact that the claim is wrong on multiple levels (IBM punchcards, VT100 did 132 columns as well) is part of the fun.


23x75 to allow for a status bar and the possibility that the code may be quoted in an email. Also, it’s green on black. Or possibly amber.

And yet I still have a utility named "~/bin/\uE43E"


\uExxx is in the private use area. What is it?


That’s private, obviously.


Fun fact: both HN and (no doubt not coincidentally) paulgraham.com ship no DOCTYPE and are rendered in Quirks Mode. You can see this in devtools by evaluating `document.compatMode`.

I ran into this because I have a little userscript I inject everywhere that helps me copy text in hovered elements (not just links). It does:

[...document.querySelectorAll(":hover")].at(-1)

to grab the innermost hovered element. It works fine on standards-mode pages, but it's flaky on quirks-mode pages.

Question: is there any straightforward & clean way as a user to force a quirks-mode page to render in standards mode? I know you can do something like:

document.write("<!DOCTYPE html>" + document.documentElement.innerHTML);

but that blows away the entire document & introduces a ton of problems. Is there a cleaner trick?


I wish `dang` would take some time to go through the website and make some usability updates. HN still uses a font-size value that usually renders to 12px by default as well, making it look insanely small on most modern devices, etc.

At quick glance, it looks like they're still using the same CSS that was made public ~13 years ago:

https://github.com/wting/hackernews/blob/5a3296417d23d1ecc90...


I trust dang a lot; but in general I am scared of websites making "usability updates."

Modern design trends are going backwards. Tons of spacing around everything, super low information density, designed for touch first (i.e. giant hit-targets), and tons of other things that were considered bad practice just ten years ago.

So HN has its quirks, but I'd take what it is over what most 20-something designers would turn it into. See old.reddit Vs. new.reddit or even their app.


There's nothing trendy about making sure HN renders like a page from 15 years ago should. Relative font sizes are just so basic they should count as a bug fix and not "usability update".


Overall I would agree, but I also agree with the above commenter. It’s ok for mobile, but in a desktop view it’s very small at anything larger than 1080p. Zoom works but doesn’t stick. A simple change to the font size in CSS would make it legible for mobile, desktop, terminal, or space… font-size: 2vw or something that scales.


It’s not ok for mobile. Misclicks all around if you don’t first pinch zoom to what you are trying to click.


Indeed, the vast majority of things I've flagged or hidden have been the accidental result of skipping that extra step of zooming.


> Zoom works but doesn’t stick.

perhaps try using a user agent that remembers your settings? e.g. firefox


Perhaps don’t recommend workarounds for a failure to use standards.


Setting aside the relative merits of 12pt vs 16pt font, websites ought to respect the user's browser settings by using "rem", but HN (mostly[1]) ignores this.

To test, try setting your browser's font size larger or smaller and note which websites update and which do not. And besides helping to support different user preferences, it's very useful for accessibility.

[1] After testing, it looks like the "Reply" and "Help" links respect large browser font sizes.


Side note: pt != px. 16px == 12pt.
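For reference, CSS pins both units to fixed ratios: 96px per inch and 72pt per inch, so 1px = 0.75pt. A quick sketch of the arithmetic (in JavaScript, just to show the conversion):

```javascript
// CSS reference units: 96px per inch, 72pt per inch → 1px = 0.75pt.
const pxToPt = (px) => (px * 72) / 96;
const ptToPx = (pt) => (pt * 96) / 72;

console.log(pxToPt(16)); // → 12  (i.e. 16px == 12pt)
```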


You are correct, it should have been "px".


Please don’t. HN has just the right information density with its small default font size. In most browsers it is adjustable. And you can pinch-zoom if you’re having trouble hitting the right link.

None of the ”content needs white space and large fonts to breathe“ stuff or having to click to see a reply like on other sites. That just complicates interactions.

And I am posting this on an iPhone SE while my sight has started to degrade from age.


Yeah, I'm really asking for tons of whitespace and everything to breathe sooooo much by asking for the default font size to be a browser default (16px) and updated to match most modern display resolutions in 2025, not 2006 when it was created.

HN is the only site I have to increase the zoom level, and others below are doing the same thing as me. But it must be us with the issues. Obviously PG knew best in 2006 for decades to come.


On the flipside, HN is the only site I don't have to zoom out of to keep it comfortable. Most sit at 90% with a rare few at 80%.

16px is just massive.


Sounds like your display scaling is a little out of whack?


Yeah, this is like keeping a sound system equalized for one album and asserting that modern mastering is always badly equalized. Tune the system to the standard, and adjust for the oddball until it's remastered.


Except we all know what happened to the "standard" with the Loudness War.


I'm not a fan of extreme compression and limiting, but doing so in a multiband fashion (as occurs due to the loudness war) actually does result in more consistent EQ from album to album, label to label, genre to genre, etc., which virtually eliminates the need to adjust EQ at playback time between each post-war selection.


You're obviously being sarcastic, but I don't think that it's a given that "those are old font-size defaults" means "those are bad font-size defaults." I like the default HN size. There's no reason that my preference should override yours, but neither is there any reason that yours should override mine, and I think "that's how the other sites are" intentionally doesn't describe the HN culture, so it need not describe the HN HTML.


On mobile at least, I find that I can frequently zoom in, but can almost never zoom out, so smaller text allows for more accessibility than bigger text.


Browser (and OS) zoom settings are for accessibility; use that to zoom out if you've got the eyes for it. Pinching is more about exploring something not expected to be readily seen (and undersized touch targets).


Don't do this.


I agree, don't set the default font size to ~12px equiv in 2025.




Do you think that "Don't do this" as a reply comment is following the spirit of the guidelines? It doesn't seem very thoughtful or substantive to me.


Content does need white space.

HN has a good amount of white space. Much more would be too much, much less would be not enough.


No kidding. I've set the zoom level so long ago that I never noticed, but if I reset it on HN the text letters use about 2mm of width in my standard HD, 21" display.


> but if I reset it on HN the text letters use about 2mm of width in my standard HD, 21" display.

1920x1080 24" screen here, .274mm pitch which is just about 100dpi. Standard text size in HN is also about 2mm across, measured by the simple method of holding a ruler up to the screen and guessing.

If you can't read this, you maybe need to get your eyes checked. It's likely you need reading glasses. The need for reading glasses kind of crept up on me because I either work on kind of Landrover-engine-scale components, or grain-of-sugar-scale components, the latter viewed down a binocular microscope on my SMD rework bench and the former big enough to see quite easily ;-)


Shameless plug: I made this userstyle to make HN comfortable to handle both on desktop and mobile. Minimal changes (font size, triangles, tiny bits of color), makes a huge difference, especially on a mobile screen.

https://userstyles.world/style/9931/


Thanks for that, it works well, and I like the font choice! Though personally I found the font-weight a bit light and changed it to 400.


> HN still uses a font-size value that usually renders to 12px by default as well, making it look insanely small on most modern devices, etc.

On what devices (or browsers) does it render "insanely small" for you? CSS pixels are not physical pixels; they're scaled to 1/96th of an inch on desktop computers, and for smartphones etc. the scaling takes into account the shorter typical distance between your eyes and the screen (to make the angular size roughly the same), so one CSS pixel can span multiple physical pixels on a high-PPI display. A font size specified in px should look the same on various devices. The HN font size feels the same to me on my 32" 4k display (137 PPI), my 24" display with 94 PPI, and on my smartphone (416 PPI).


On my MacBook it's not "insanely small", but I zoom to 120% for a much better experience. I can read it just fine at the default.


On my standard 1080p screen I gotta set it to 200% zoom to be comfortable. Still LOTS of content on the screen and no space wasted.


> At quick glance, it looks like they're still using the same CSS that was made public ~13 years ago:

It has been changed since then for sure though. A couple of years ago the mobile experience was way worse than what it is today, so something has clearly changed. I think also some infamous "non-wrapping inline code" bug in the CSS was fixed, but can't remember if that was months, years or decades ago.

On another note, they're very receptive to emails, and if you have specific things you want fixed, and maybe even ideas on how to do in a good and proper way, you can email them (hn@ycombinator.com) and they'll respond relatively fast, either with a "thanks, good idea" or "probably not, here's why". That has been my experience at least.


I hesitate to want any changes, but I could maybe get behind dynamic font sizing. Maybe.

On mobile it’s fine, on Mac with a Retina display it’s fine; the only one where it isn’t is a 4K display rendering at native resolution - for that, I have my browser set to 110% zoom, which is perfect for me.

So I have a workaround that’s trivial, but I can see the benefit of not needing to do that.


The font size is perfect for me, and I hope it doesn’t get a “usability update”.


“I don’t see any reason to accommodate the needs of others because I’m just fine”


I bet 99.9% of mobile users' hidden posts are accidentally hidden


12 px (13.333 px when in the adapted layout) is a little small - and that's a perfectly valid argument without trying to argue we should abandon absolute sized fonts in favor of feels.

There is no such thing as a reasonable default size if we stop calibrating to physical dimensions. If you choose to use your phone at a scaling where what is supposed to be 1" is 0.75" then that's on you, not on the website to up the font size for everyone.


I find it exactly the right size on both PC and phone.

There's a trend to make fonts bigger but I never understood why. Do people really have trouble reading it?

I prefer seeing more information at the same time, when I used Discord (on PC), I even switched to IRC mode and made the font smaller so that more text would fit.


I'm assuming you have a rather small resolution display? On a 27" 4k display, scaled to 150%, the font is quite tiny, to the point where the textarea I currently type this in (which uses the browsers default font size) is about 3 times the perceivable size in comparison to the HN comments themselves.


Agreed. I'm on an Apple Thunderbolt Display (2560x1440) and I'm also scaled up to 150%.

I'm not asking for some major, crazy redesign. 16px is the browser default and most websites aren't using tiny, small font sizes like 12px any longer.

The only reason HN is using it is because `pg` made it that in 2006, at a time when it was normal and made sense.


Yup, and these days we have relative units in CSS (em, rem), so we no longer need to hardcode pixels and everyone wins. That way people get usability according to the browser's defaults, which makes the whole thing user-configurable.


1920x1080 and 24 inches

Maybe the issue is not scaling according to DPI?

OTOH, people with 30+ inch screens probably sit a bit further away to be able to see everything without moving their head so it makes sense that even sites which take DPI into account use larger fonts because it's not really about how large something is physically on the screen but about the angular size relative to the eye.


Yeah, one of the other cousin comments mentions 36 inches away. I don't think they realize just how much of an outlier they are. Of course you have to make everything huge when your screen is so much further away than normal.


I have HN zoomed to 150% on my screens that are between 32 and 36 inches from my eyeballs when sitting upright at my desk.

I don't really have to do the same elsewhere, so I think the 12px font might be just a bit too small for modern 4k devices.


I'm low vision and I have to zoom to 175% on HN to read comfortably, this is basically the only site I do to this extreme.


I have mild vision issues and have to blow up the default font size quite a bit to read comfortably. Everyone has different eyes, and vision can change a lot with age.


Even better: it scales nicely with the browser’s zoom setting.


Text size is easily fixed in your browser with the zoom setting. Chrome will remember the level you use on a per site basis if you let it.


I'm sure they accept PRs, although it can be tricky to evaluate the effect a CSS change will have on a broad range of devices.


The text looks perfectly normal-sized on my laptop.


Really? I find the font very nice on my Pixel XL. It doesn't take too much space unlike all other modern websites.


A uBlock filter can do it: `||news.ycombinator.com/*$replace=/<html/<!DOCTYPE html><html/`


You could also use Tampermonkey to do that, and to perform the same function as the OP's userscript.


There is a better option, but generally the answer is "no"; the best solution would be for WHATWG to define document.compatMode as a writable property instead of a readonly one.

The better option is to create and hold a reference to the old nodes (as easy as `var old = document.documentElement`) and then after blowing everything away with document.write (with an empty* html element; don't serialize the whole tree), re-insert them under the new document.documentElement.

* Note that your approach doesn't preserve the attributes on the html element; you can fix this by either pro-actively removing the child nodes before the document.write call and rely on document.documentElement.outerHTML to serialize the attributes just as in the original, or you can iterate through the old element's attributes and re-set them one-by-one.
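Putting that together, a rough sketch of the better option described above. Browser-only, and hedged: edge cases like event listeners on the html element or scripts re-running aren't handled here.

```javascript
// Force a quirks-mode page into standards mode by rewriting the
// document with a DOCTYPE while preserving the live DOM nodes,
// instead of re-serializing and re-parsing the whole tree.
function forceStandardsMode() {
  const oldHtml = document.documentElement;

  // Hold live references so the nodes survive document.write,
  // and capture the <html> attributes, which write() won't preserve.
  const children = [...oldHtml.childNodes];
  const attrs = [...oldHtml.attributes];

  // Blow the document away with a DOCTYPE and an *empty* <html>;
  // serializing the original tree here would re-run its scripts.
  document.open();
  document.write("<!DOCTYPE html><html></html>");
  document.close();

  // Restore the attributes and re-adopt the original nodes.
  const newHtml = document.documentElement;
  for (const { name, value } of attrs) newHtml.setAttribute(name, value);
  for (const node of children) newHtml.appendChild(document.adoptNode(node));
}
```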


On that subject, I would be fine if the browser always rendered in standards mode, or offered a user configuration option to do so.

No need to have the default be compatible with a dead browser.

Further thoughts: I just read the MDN quirks page, and perhaps I will start shipping Content-Type: application/xhtml+xml, as I don't really like putting the doctype in. It is the one screwball tag and requires special-casing in my otherwise elegant HTML output engine.

