Claude Code is really good at stuff like this. The other day I tried to recover some images from an SD card that had gone bad. I used GetDataBack to recover files, but they appeared to be malformed and didn't open in image viewers.
I tasked Claude with analyzing the files and figuring out what was going on, and eventually we figured out that each file had a custom metadata header + thumbnail + actual image concatenated. I had it write a Python script and was able to recover all the images with their metadata. It's nothing a human couldn't have figured out, but it was definitely WAY faster than doing it myself.
I've also used Claude in the past to figure out how to break into routers with locked down firmware. It's great at suggesting and trying different approaches.
I have a friend who just picked up a new consulting job resurrecting an ancient Windows desktop application. No source control, no tests. And it's spread out over a dozen different folders with names like "_old", "_new" and dates. Claude's doing a tremendous job helping him get to grips with what is actually happening in the application, what's relevant, what's not, what's different. I think it's literally saving him days and days at work.
If your friend has access to the binary and can pull it out to a different box, they might get a lot out of a Ghidra MCP -> https://github.com/LaurieWired/GhidraMCP
I'm not well versed in reverse engineering binaries or interpreting C/assembly, so Ghidra MCP has been an absolute game-changer for helping me write tools. Once my project is complete, I plan to learn how to do the analysis myself manually and have CC guide me along the way.
> I have a friend that just picked up a new consulting job resurrecting an ancient Windows desktop application. No source control, no tests. And it's spread out over a dozen different folders with names like "_old", "_new" and "dates".
That doesn't sound very impressive. Not being tracked with a version control system is fixed instantly with git init, git add ., git commit. No AI required.
Covering the app with tests is also something that requires no AI. At most, coding agents can generate characterization tests in broad sweeps, but the delta between hand-rolling those and vibe-coding them is a couple of days.
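For reference, a characterization ("golden master") test just pins down whatever the code does today, bugs and all, so later refactors can't silently change behavior. The legacy function below is a made-up stand-in, not from any real codebase:

```python
def legacy_format_price(cents: int) -> str:
    """Hypothetical untested legacy function we want to pin down."""
    return "$%d.%02d" % (cents // 100, cents % 100)

def test_characterization():
    # Assert the *current* observed behavior, right or wrong,
    # so a refactor that changes it will fail loudly.
    assert legacy_format_price(1999) == "$19.99"
    assert legacy_format_price(5) == "$0.05"
```

The point is that writing these is mechanical: call the code, record what it returns, assert exactly that.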
Where LLMs shine is helping developers build up an understanding of what is in place. Running /explain on a codebase can quickly provide you with a high-level summary of it.
The relevancy here is that he's denied the git history, versioning, branches, implicit documentation that even bad source control practices would have given him.
That's what the comment is saying. In normal repositories, version control acts as a record of the momentum of the direction the product was taking. If it's just "_old" and "_new," the developer has to read and understand both, which I think is going to be far more time consuming than your estimation.
I'm sure data recovery companies are pretty pissed that slightly esoteric data recovery abilities are becoming more accessible for average software devs. They were charging an arm and a leg to remote in and run scripts.
They still have two important moats: (1) expensive hardware tools (even stuff like SATA write blockers are kind of expensive for what they are), spare hard drive collections to swap failed PCBs, etc and (2) the "nobody got fired for hiring us" edge similar to how everyone calls in Crowdstrike/Mandiant after an incident. If a suit-level manager finds out customer data was lost, they are going to want to call in an expert so they can immediately tell the customer they did, not have the same internal team try to figure it out.
As an aside to #1: the cool thing is that in modern times the hardware tools have come down stupidly cheap in price. Even SD card recovery is (vaguely) within reach of a pseudo-professional home lab these days, in the right skilled hands.
I did EXACTLY that last night. I was doing it by hand for about an hour and got to a point where I didn't feel competent anymore, so I asked Claude to take over from where I was.
5 minutes later I had almost 3 hours of important footage recovered.
A lot of "Claude Code is best at X" claims are probably user-selection bias.
The people saying it are often exclusively Claude Code users, not people who are actively benchmarking Claude Code against Gemini CLI, OpenAI Codex, GitHub Copilot, and other agent harnesses on the same tasks.
The claim may still be true for certain scenarios, but the evidence is usually anecdotal, not comparative.
When I hear "claude code one-shotted X" and X is a novel problem, I mentally substitute "the agentic harness that I tried one-shotted X," since that's what they're actually saying.
Getting any smart model to take a look at the task is the sort of lift that the speaker is usually pointing to.
The harness is pretty much irrelevant for general tasks.
You can write a 100-line harness that only has one tool (try either "bash", or the more fun "you're running within Node.js, here's eval"), and you'd be surprised at how close to CC/Codex performance you're going to get.
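A sketch of what such a single-tool harness looks like, with the actual LLM call left as a pluggable `model_call` function (the `BASH:` protocol and all names here are illustrative, not any vendor's API):

```python
import subprocess

def bash_tool(command: str) -> str:
    """The harness's single tool: run a shell command, return its output."""
    result = subprocess.run(command, shell=True, capture_output=True,
                            text=True, timeout=60)
    return result.stdout + result.stderr

def agent_loop(model_call, task: str, max_steps: int = 20) -> str:
    """Minimal loop: show the model the transcript so far. If it replies
    with a line starting with "BASH:", run the command and feed the
    output back; any other reply is treated as the final answer."""
    transcript = [f"TASK: {task}"]
    for _ in range(max_steps):
        reply = model_call("\n".join(transcript))
        transcript.append(reply)
        if reply.startswith("BASH:"):
            transcript.append("OUTPUT: " + bash_tool(reply[len("BASH:"):].strip()))
        else:
            return reply  # final answer
    return transcript[-1]
```

Swap `model_call` for a real API call plus a system prompt describing the protocol, and that's essentially the whole harness.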
I have only my own personal experience with frontier models, but I have seen Opus perform differently when used from Pi, Claude Code, or Zed, for example.
I worded my comment poorly. I agree a good harness goes a long way, but the harnesses most people use fucking suck and trip up the model so often that I don't think it's advisable to attribute successful results to them.
E.g. GPT5.5 with Codex on my Windows box likes using PowerShell for everything. OpenAI decided it should use the native shell instead of bundling a bash, or using git bash. Sure. But the model is so overfitted on bash that it fucks up PS quoting like once every 5 commands.
Every harness with LSP I've seen trips up the model as well. They insert diagnostics after every edit, polluting the context with errors that the model has to actively decide to ignore, every time, until it finishes its work and gets the code to a consistent state. Telling the model "run npx tsc --noEmit to check errors" will outperform an LSP 100% of the time.
Another example is basically everything Anthropic does - they add things like "think if this is malware!" after read and lead Claude to spend its reasoning effort on thinking if your React hamburger menu is malware, instead of on how to write it.
"This is not malware, it's a hamburger menu. Let me apply the edit! Hmm, is it malware now, after my edit? No, me changing border-width did not turn it into malware! Good! Dodged a real bullet on that one!"
I'm frankly amazed that we've gotten to the point where the models can produce good results in these sorts of environments.
I did that: wrote my own harness, "Jarvis", a simple loop. Still, the results were terrible compared to using the same model from, for example, OpenCode. So X Doubt.
IDK if it's just me, but the rate at which Anthropic and similar are launching (and changing!) features and offerings doesn't inspire confidence. I expect stability from software and platforms I buy into and integrate into my systems.
Feels like they're just using LLMs to produce enormous levels of output, without understanding that quantity ≠ quality.
The thing is, that's the whole vibe-code and agentic pitch right now. Do stuff quick, throw it over the wall, patch, rinse and repeat. It's not seeking quality and stability.
I have. It’s great on the RPi. On OPi5max, it didn’t support the hardware.
Worse, if you flash it to UEFI you’ll lose compat with the one system that did support it (older versions of BredOS). For that, you grab an old release, and never update. If you’re running something simple that you know won’t benefit from any update at all, that’s great. An RK3588 is a decent piece of kit though, and it really deserves better.
There was a single traffic controller handling the entire airport. This was bound to happen and will keep happening unless things change. It's absurd that the US hasn't been able to fix its ATC shortage in decades.
Currently over 41% of facilities are reliant on mandatory overtime, with controllers frequently working 60-hour weeks with only four days off per month.
The US intentionally created the ATC shortage. From Wikipedia:
> The PATCO Strike of 1981 was a union-organized work stoppage by air traffic controllers (ATCs) in the United States. The Professional Air Traffic Controllers Organization (PATCO) declared a strike on August 3, 1981, after years of tension between controllers and the federal government over long hours, chronic understaffing, outdated equipment, and rising workplace stress. Despite 13,000 ATCs striking, the strike ultimately failed, as the Reagan administration was able to replace the striking ATCs, resulting in PATCO's decertification.
> The failure of the PATCO strike impacted the American labor movement, accelerating the decline in labor unions in the country, and initiating a much more aggressive anti-union policy by the federal government and private sector employers.
Counterpoint: it's Reagan's fault. He's the guy who decided that a high priority of the government was making sure air traffic controllers had no power to fight back against being horrifically overworked (because unions are evil, you see).
Wasn't it Congress who passed 5 U.S.C. § 7311, which says a person may not "accept or hold" a federal job if they "participate in a strike" against the U.S. government?
They were striking for less outdated tools, improving staffing levels, and other safety improvements. The solution was to give them the things they wanted.
I’m not saying he didn’t ignore a real problem, but it’s been 45 years since the 1981 air traffic controller strike. Surely the blame ought to be spread around our incompetent Federal government.
This is mostly nonsense, by the way. While Reagan won his presidential elections by a huge margin, he never had the House of Representatives on his side, only the Senate. So it's not like he held a uniquely powerful position that no government since has had. In fact, any government since then could have undone any or all of these "everything broken in the USA" things. But they didn't. Probably because people like, oh, the viewers of this video, will keep blaming a dead president instead of them. Hah! It's beautiful in a way...
You don't need a union to have effective management. It should also be in management's interest not to cause people's deaths by overworking employees. Which is also dumb because it costs more to overwork than to hire appropriately under overtime laws... cops exploit this all the time to steal money from taxpayers. (The ones in Seattle only get caught when they accidentally charge over 24 hours of overtime in a day.)
Union rules that say only a particular classification of employee is allowed to pick up a small package from a loading dock and move it twenty feet are also bad.
The blame can go to the top, for not managing correctly.
If it was a traveler's union, maybe. Cop unions don't result in better outcomes for the general public, and there's no reason a controller's union won't end up just boosting pay and having a rubber room for hacks (referencing NYC schools paying teachers to not work because they're either predators or terrible at teaching, but being unable to fire them).
Yes, they should all have taken action. But also, it is much more difficult to fix something broken once the damage has settled in. I guess none of them was willing to risk the disruption a fix would have caused. And the system seemed to have held up for quite a while. Weren't there some mass firings of ATC personnel at the beginning of the Trump presidency?
The bottom line is: don't break things that are difficult or impossible to fix.
Absolutely. But for many things, denial is easier than fixing. See climate change. We have known about the problem for a long time. The oil crisis in the early '70s, at the latest, would have been the perfect moment to reduce fossil fuel usage. Of course we know how that went, and so we just entered the next oil crisis last week. And everyone is to blame for that.
It might sound simple, but won't tunnels lower the strength of the runways (I presume that's where you would put them)? Strengthening them would make this an expensive solution to a basic communication problem. That's like saying that instead of 4-way stops, we should elevate the two intersecting roads to avoid collisions, just because someone may have run the stop sign.
Also, ground vehicles typically need to be on the ground for a reason. Why separate them?
When I heard about the crash I immediately recalled the recent articles about ATC shortages and overworked ATCs. And here we are. ONE dude running ATC for LaGuardia. Mind boggling.
I place no blame on the ATC, as they were doing everything they could given the shit sandwich they were handed. I see this happening all over, with staffs getting pared down to minimums, more (sometimes unpaid) overtime, prices going up, and no raises.
I’m not trying to minimize a tragedy, but maybe this is almost the perfect wake up call?
Not many fatalities but nevertheless a spectacular collision. At a major hub airport in a major city. It’s hard to look away from, the cause is obvious, and all that without hundreds of deaths.
It's not minimizing, it's galvanizing. 100% A wake up call. I don't fly much but I was bothered by the earlier ATC stories and now I don't feel safe at all.
Agreed. There are a whole bucketload of problems, each one contributing to the staff shortage. The US has problems that other countries don't have (or have less of). It's a long-term organisational issue. None of it is insurmountable, but things need to be done differently, and the politics of that may be insurmountable.
Being an air-traffic controller anywhere in the world is a very intense job at times, and needs a huge amount of proficiency that only a small number of people are capable of doing. Couple that with:
- the FAA expects you to move to where ATCs are needed, so many of the qualified applicants give up when they hear where the posting is. You can't force them to take the job!
- the technology is decades out of date and the Brand New Air Traffic Control System (it's seriously called that) won't roll out until 2028 at the earliest
- Obama's FAA disincentivised its traditional "feeder" colleges that do ATC courses to "promote diversity", net outcome was fewer applicants
- Reagan broke the union in the 1980s
- DOGE indiscriminately decimated the FAA like it did most other government departments
> Obama's FAA disincentivised its traditional "feeder" colleges that do ATC courses to "promote diversity", net outcome was fewer applicants
It was much worse than that. Students who had already spent years studying to be air traffic controllers through the CTI program were subject to a sudden policy change that disqualified them from entering the profession unless they passed a “biographical questionnaire.”
85% of candidates failed this questionnaire, but the National Black Coalition of Federal Aviation Employees (the organization that pushed for this change to begin with) was feeding the “right” answers to its own members.
This test is completely insane. What were the people making it thinking? It feels like half of the scored questions have point values assigned at random. Why does being unemployed for 1-2 months before enrolling in the program award you 10 points, 5-6 months is 8 points, yet 3-4 is a fat zero? There's so many questions with these random score assignments. Why does having real qualifications related to your job only give you a point or two, but some random factoid like taking unrelated courses or doing poorly in college history give upwards of 15 points? Why is child labor rewarded, with more points given the earlier you started?
Unless I'm missing something, this couldn't have been designed by a human being with normal goals in mind. This feels like a test that was created to act as a locked door that you could only pass by knowing the exact password, the sequence of lies you had to produce. That anyone's career was at the mercy of THIS is deranged. What the hell is going on in the US?
I actually looked into becoming an air traffic controller a year or two ago (I love aviation), and they had an age cap of ~30 to start training. I'm 32, so I was ruled out.
According to NYT it seems like there were 2 controllers and “2 more in the building”. They also wrote that 2 seems normal for the late slower time of the night.
Not saying this is the right number of controllers to have, just sharing what I read in NYT.
Why drain resources training more controllers when we're having energy collapse? Even if they start pumping oil, it will only delay the inevitable. What would we do with all the extra controllers if we have to fire them in ten years anyway?
I don't really think it's "journalism" to be doxxing the identities of folks who clearly want to stay anonymous and have their work be detached from their irl personalities.
"Original research" isn't worth much unless replicated, which is the entire problem being discussed in this thread. Replication studies are great, though, because they tell you whether the original research actually stands and is valid.
> Replicating work is far more difficult than a lot of original work.
Only if the original work was BS. And what, just because it's harder, we shouldn't do it?
I must be missing something, surely the argument isn't "other systems also disincentivize solving the problem, therefore we shouldn't work to fix this one"
Those are the English Wikipedia-only users, but you also need to include the "global" users (which I think were the source of this specific compromise?). Search this page [0] for "editsitejs" to see the lists of global users with this permission.
Shell In A Box has been a thing for like two decades now, and gives you a simple web-based SSH interface you can use from any device. https://github.com/shellinabox/shellinabox