Arbitrary demarcations can still be valuable! Just because something is arbitrary doesn't mean it's not helpful. Working in chunks lets you take more time to review each callsite individually, increasing your confidence in the changes.
In the future, I would definitely encourage you to explore a more iterative approach: fix the first 50 occurrences, or maybe all the occurrences of a handful of functions. For example, if you have utility functions A, B, C, and D, fix A and B first, then C and D.
Ultimately, it's going to depend on how much code you're touching. If you're only touching 100 library calls, it's probably fine to do them in one PR. But if you're updating 1,000 library calls, you'll need a more iterative approach. Building those skills now will serve you well when you work on bigger codebases and harder refactors.
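To make the chunking concrete, a first pass can be as simple as collecting call sites and splitting them into reviewable batches. A rough sketch (the `"old_api("` pattern and the `grep` invocation are illustrative, not from your codebase):

```python
# Sketch: enumerate call sites of a function being migrated, then chunk
# them so each chunk can become its own small, reviewable PR.
import subprocess

def find_callsites(pattern, root="src"):
    """Return (file, lineno) pairs for every grep match of `pattern` under `root`."""
    out = subprocess.run(
        ["grep", "-rn", pattern, root],
        capture_output=True, text=True,
    ).stdout
    hits = []
    for line in out.splitlines():
        path, lineno, _ = line.split(":", 2)  # grep output: path:lineno:content
        hits.append((path, int(lineno)))
    return hits

def batches(hits, size=50):
    """Chunk the hits so each batch maps to one PR."""
    for i in range(0, len(hits), size):
        yield hits[i:i + size]

# e.g. for chunk in batches(find_callsites("old_api(")): open one PR per chunk
```

The point isn't the tooling, it's that each batch gets a human review pass before the next one starts.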
Well, another problem is that there was another developer working on those functions at the same time. So, like the recent post on a 25 million LoC reformat done in a weekend, it seemed better to do it in one fell swoop. If it's good enough at 25 million lines, I'm sure it's good enough at a few thousand.
Autoformatters are deterministic tools with extensive test suites, and this one went through a very long process of production usage and review before the reformat:
> We also built a tool to diff ripper trees across formatted files, accounting for things like rubyfmt converting single quotes to double quotes. Combined with our extensive test suite, we built confidence slowly and deliberately.
If an autoformatter is working correctly, it only changes whitespace, not the code that actually executes. Swapping one implementation of a function for another is very different from moving whitespace around.
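To make that distinction concrete, here's a minimal sketch (in Python, not Ruby) of the same idea as the ripper-tree diff quoted above: two sources are "the same program" if they parse to identical trees, so a formatter that only moves whitespace passes and a behavior change fails.

```python
# Sketch: check that a reformat preserved semantics by comparing parse
# trees. ast.dump() omits line/column info by default, so pure layout
# changes produce identical dumps.
import ast

def same_semantics(before: str, after: str) -> bool:
    """True if both sources parse to the same AST, i.e. only layout differs."""
    return ast.dump(ast.parse(before)) == ast.dump(ast.parse(after))
```

Note this deliberately ignores comments and string-quote style, which is exactly the kind of allowance the rubyfmt team had to build for single vs. double quotes.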
They also didn't do the entire thing in one weekend—that was just clickbait in the article title. They did it file by file, incrementally, over the course of months:
> Rolling out a novel autoformatter to 25 million lines of code has two big risks: merge conflicts and correctness. A bug affecting just 0.01% of lines would still touch tens of thousands of files. To manage both, we built in a per-file opt-in so rubyfmt would only format files that explicitly asked for it. Following the Developer Productivity org’s typical pattern, we started with systems we owned and could observe closely, then expanded coverage gradually as our confidence grew.
^ See how they talk about the incremental changes it took? This is what mature refactors look like. And they were only changing whitespace!
Exams were previously proctored, and that led to an "us vs. them" mentality: students banded together to protect cheaters from the faculty.
The Honor Code system, with proctors removed, was a way to route around that: it made all of the students responsible for catching cheaters and turned the "Students vs. Faculty" mentality into an "Honor vs. Cheaters" mentality among the students.
Unfortunately, it seems the "Students vs. Faculty" mentality has seen too much of a resurgence due to outside factors, and the Honor Code is no longer a match for the current climate. That's what the article is about.
coppsilgold is the one who drew a hard-line, clear-cut dichotomy by saying "it's easy to do harm [but] it's all but impossible to do any good". bglazer referenced several interventions known to increase IQ, which challenge that dichotomy. Saying that it's difficult to separate "doing good" from "stopping harm" is agreeing that coppsilgold created a distinction without a difference.
No, debt isn't "parts of your software that could have been written better". Any part of your software can always be written better. Debt is the cost you have to pay monthly to keep your application working—it's the parts of your codebase that make it harder to work on new features.
Is the AROUND(n) one real? I've never seen it before, and trying "climate AROUND(3) policy" as mentioned in the article just gives me results where "Around 3" is in the body:
> European Central Bank
> Climate, Nature and Monetary Policy
> 1 day ago — ECB research has found that four years after a drought or flood, regional output remains depressed by around 3 percentage points on average
The ability to make information private fundamentally conflicts with how ATProto is designed. All records have to be sent to all Relays and AppView nodes on the network to provide a "global view" of the network. So there's no way to keep records private without locking out some users' servers from viewing them, and since AppViews are centralized indexing services, they won't function without being able to see the entire network.
Yeah, apps wouldn't be able to only listen to the firehose.
There are some proposals for private files, but I'm outside the AtProto world, so I'm not sure what the suggested implementations are. I just hope they give enough control.
I think the technology could be used for far more than microblogging. I would love to use web apps that store data on my devices and share it with specific people, with the data and access under my control.
> Sync is pull-based. Applications are responsible for staying in sync with all member PDSes. PDSes assist by sending lightweight write notifications to prompt pulls when new data is written.
It looks like this basically reinvents ActivityPub (local servers can pull from or push to remote servers). So it gives up all of the "benefits" of Bluesky's firehose-based approach anyway, except that Bluesky assumes you're using their AppView, which will always have access to your private data.
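For illustration, the pull-based model in that quote reduces to something like this. All the names here (`Pds`, `pull_since`, the per-PDS cursor) are hypothetical, not an actual ATProto API; it's just the shape of "notification prompts a pull":

```python
# Sketch: an application keeps a cursor per PDS and pulls only when a
# lightweight write notification arrives, instead of consuming a firehose.
class Pds:
    def __init__(self):
        self.log = []          # append-only record log

    def write(self, record):
        self.log.append(record)

    def pull_since(self, cursor):
        return self.log[cursor:]

class Application:
    def __init__(self, pdses):
        self.cursors = {pds: 0 for pds in pdses}  # last-seen position per PDS
        self.records = []

    def on_write_notification(self, pds):
        """A PDS pinged us; pull everything newer than our cursor."""
        new = pds.pull_since(self.cursors[pds])
        self.records.extend(new)
        self.cursors[pds] += len(new)
```

Note how the PDS only ever hands records to applications that ask for them, which is where the privacy control comes from—and also why there's no global view.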