kittikitti's comments | Hacker News

Thank you for this insight. Even as a developer, I can easily lose track of all the trackers I've included in a webpage. Usually, if I see a tracker in the code, it's already obfuscated, and I give it the benefit of the doubt and leave it in.

It's only when I jump back into the ads management page that I'm able to get a better picture. Even then, the specific trackers are hidden behind a variety of menu items that can change every time. This post made me realize that I need a better strategy, as things are getting ridiculous with ads.

I used to be someone who didn't use ad blockers because some of them are botnets. It's just not the same anymore; at this point I'd trust the botnets with my data over the advertisers.


Even if it's obfuscated, there should be a comment above it saying what it is. This is bad developer hygiene.

I have a friend whose nearest grocery store is surrounded by Flock "safety" cameras. The police and security staff at retail and grocery stores regularly share data and logins, and this extends across multiple states. He says it's been brought up in mundane traffic court and has affected his ability to enroll his children in schools. Not only that, but his ability to seek legal guidance is hindered, since the state can easily produce suspect evidence against him on a whim.

It seems like anyone with even a peripheral role can access this information and abuse it. It's ridiculous that this is happening. I think a sizable number of people on Hacker News actually support these systems, and if you're one of them, please keep yourself safe.


Flock cameras prevent someone from enrolling a child in school?

I'd be curious to hear how those dots connect.


Last week’s episode of enshittening dystopia:

https://news.ycombinator.com/item?id=47351239


You'll have to elaborate on some of the details.

This is great, congratulations to the Mistral team! I'm looking forward to the code arena benchmark results. Thanks for sharing.

People in Florida, when I tell them about my background working with data, often scoff and claim that the data can be changed to spread lies. They have a government that arrested a data scientist after she published information about the coronavirus. This attitude is prevalent across all of America, especially after DOGE, which encourages fraud so that the data supports its political interests.

I think the reliability problem is very bad. It's not just that the US government is encouraging fraud, it's also that the average American hates AI and data science. Usually, the public would prefer reliable data, but in this case, Americans seem to prefer corruption just to spite the AI.

We're certainly living in a post-truth country. With higher education vilified, the assumption that Americans can interpret data no longer holds. As a result, Americans consume biased information in their online bubbles, because their media is comfortable with fraudulent data.

A concrete example of what happens when US economic data becomes unreliable is employment numbers. At the end of 2025, the government couldn't produce any data because of the government shutdown. Most quants and analysts used ADP numbers instead. A few years ago, the ADP payroll numbers and the government's projections were perceived as aligned. This is no longer the case, and most traders now rely more on ADP indicators for things like the unemployment rate.

Speculating on what other data is fraudulent, I suspect that real Gross Domestic Product (GDP) will become meaningless. It was supposed to be an indicator of economic wellbeing but now best describes wealth inequality. Nominal GDP is a slightly better measure because it doesn't rely on inflation adjustments; the deflator used to compute real GDP is itself government-produced data.
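To make the dependency concrete: real GDP is nominal GDP divided by a price deflator, and the deflator is a government-produced series. A tiny sketch with purely made-up numbers:

    # Illustrative only; the point is the dependency, not the figures.
    nominal_gdp = 25_000  # billions of dollars, from observed transactions
    deflator = 125.0      # price index (base year = 100), government-produced

    real_gdp = nominal_gdp / (deflator / 100)
    print(real_gdp)  # 20000.0, in base-year dollars

    # Understate the deflator by 10% and "real growth" appears from nowhere:
    fudged = nominal_gdp / (deflator * 0.9 / 100)
    print(round(fudged, 1))  # 22222.2, roughly an 11% overstatement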

Lastly, there is widespread fraud in climate data in order to deny climate change. That data feeds into economic models and affects property values and insurance rates. I have personally received gag orders from government agencies in both the US and Europe for publishing environmental data.


>They have a government that arrested a data scientist after she published information about the coronavirus

That was fake news:

In May 2020, Jones was terminated from her position managing the team that created Florida's ArcGIS COVID-19 dashboard after being repeatedly reprimanded for sharing the department's work online without authorization. Jones alleged instead that she was told to manipulate the dashboard's data and that her firing was retaliation for her refusal. The OIG exonerated state health officials, finding her allegations to be unsubstantiated and unfounded. Jones later posted on social media a forgery of the dismissal letter from the Florida Commission on Human Relations, such that it appeared that her complaint had been validated.

In December 2022, she signed a deferred prosecution agreement admitting guilt to unauthorized use of the state's emergency alert system on November 10, 2020, which resulted in her home being searched under warrant by state police in December 2020. The execution of the warrant with armed police, widely referred to as a raid, was due to a 2016 battery charge against Jones by the Louisiana State University police. In 2023, Jones pled no-contest to a 2019 charge of cyberstalking a former Florida State University student. She was fired from both institutions.

https://en.wikipedia.org/wiki/Rebekah_Jones


Citing Rebekah Jones in your argument is the opposite of convincing. She forged documents related to her firing to make her appear more sympathetic. She has been adjudicated guilty of cyberstalking and misuse of the state’s emergency notification system, and I haven’t seen a credible defense against those accusations. She’s a fraud, and many in the media uncritically boosted her claims because they shared her political aims. That people still cite her is proof of the old adage that a lie can travel across the world before the truth can lace its boots.

> It's not just that the US government is encouraging fraud, it's also that the average American hates AI and data science.

When all they see is it being used to push narratives, they'd rather not have it at all.

Also, "There Are Three Kinds of Lies: Lies, Damned Lies, and Statistics" dates to the 1800s. This is nothing new.


It is impressive how much damage that phrase has actually done to human progress over the last two centuries.

Nothing compared to what the misuse of statistics to give bunk a veneer of mathy authority has done though.

Ah yes and the alternative of just engaging in magical thinking is working out so well.

That quote is solely used to lend thin validation to rejecting critical consideration of any evidence.


I have no idea what you are talking about. I'm firmly in favor of fact based thinking, and (I suspect) share your disdain for magical thinking. My objection is to the selective use of statistics to shape the facts to support a predefined narrative.

Statistics are, first and foremost, a set of techniques for summarizing and simplifying data by reducing a large amount of raw facts to a few easily grasped parameters. They can be very powerful when used for good (e.g. to help you answer your own questions about the data) but that very power can even more readily be abused for evil when they are used to persuade others. This is what the quote refers to. Statistics are a powerful way to lie. That's what it says, and it is true.

Examples: p-hacking, Anscombe's quartet, all manner of chart crimes, the numerology of quants (there's some magical thinking for you), the isolated, uncontextualized "significant numbers" so loved by journalists, etc.
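To make the Anscombe's quartet example concrete, here's a quick check (needs Python 3.10+ for statistics.correlation; the values are Anscombe's published 1973 data):

    # Four datasets with near-identical summary statistics whose scatter
    # plots look nothing alike. The summaries hide the differences.
    from statistics import mean, variance, correlation

    x123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
    x4 = [8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8]
    ys = {
        "I": [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68],
        "II": [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74],
        "III": [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73],
        "IV": [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89],
    }
    for name, y in ys.items():
        x = x4 if name == "IV" else x123
        print(name, round(mean(y), 2), round(variance(y), 2), round(correlation(x, y), 3))
    # Each line prints roughly: 7.5  4.13  0.816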

As for your claim that it is "solely used to add thin validation to simply rejecting critical consideration of every evidence"... do you have anything to back that up? Note that as worded it is clearly false, since I am using it in the original sense and it only takes a single exception to refute such a broad claim.


Can you tell us a bit more about the gag orders? I find it fascinating that all the discussion about climate change has largely disappeared after LLMs became mainstream, and the idea that state actors may be suppressing data is equally fascinating/terrifying.

>Can you tell us a bit more about the gag orders?

They're LARPing. There are no gag orders.

>I find it fascinating that all the discussion about climate change has largely disappeared after LLMs became mainstream, and the idea that state actors may be suppressing data is equally fascinating/terrifying.

It's just not as popular to virtue signal over. Everyone is discussing the wars.


I really don’t like people bashing my state, especially when they’re repeating made-up bullshit. Do you just believe anything negative you read as long as it fits your views?

If you start all your conversations with a blatant lie that is incredibly easy to prove as such, why are you surprised people scoff at you?

“Never trust any data you haven’t manipulated yourself”.

"One such test for Python code, called a pytest"

The author's brain rot is such that they couldn't even come up with "unit test".
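For anyone unfamiliar: "a pytest" presumably means a unit test written for the pytest runner, which is just a plain function whose name starts with test_ in a file named like test_example.py (the function under test here is invented for illustration):

    # test_example.py; run with `pytest test_example.py`
    def slugify(title: str) -> str:
        return title.lower().strip().replace(" ", "-")

    def test_slugify():
        assert slugify("Hello World") == "hello-world"
        assert slugify("  Trim Me ") == "trim-me"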


Why would you expect a reporter to magically know what a "unit test" is? Sounds like a simple miscommunication with one of his sources. Not perfect but not "brain rot".

This article is ragebaiting people and it's an embarrassing piece from the NYT.

NYT has it out for digital advertisers, who directly compete with them. I do sense some schadenfreude here that the tech nerds who work at these places might be in trouble.

"Silicon Valley panjandrums spent the 2010s lecturing American workers in dying industries that they needed to “learn to code."

To copywriters at the NYT, LLMs look far better at stringing together natural-language prose than at producing large amounts of valid software. Get ready to supervise LLMs all day if you're not already.


LLMs are much better at coding now than at writing prose that doesn't sound like slop.

The code is also recognizable as slop to those who know how to look. Not the tropey "Not X, but Y" kind that's super easy to spot. But tons of repetition, deeply nested code, etc.

A counterpoint is that (maybe) nobody cares if the code is understandable, clean and maintainable. But NYT is explicitly in the business of selling ads surrounded by cheap copy just good enough to attract eyeballs. I suspect getting LLMs to write that is going to be far easier than getting LLMs to maintain large code bases autonomously.


>But tons of repetition, deeply nested code, etc.

If you explicitly make it go over the code file by file to clean up, fix duplication and refactor, it'll look much better, while no amount of "fix this slop" prompting can fix AI prose.


> no amount of "fix this slop" prompting can fix AI prose

What's the proof for that? What fundamental limitation of these large language models makes them unable to produce natural language? A lot of people see the high likelihood of ever increasing amounts of generated, no-effort content on the web as a real threat. You're saying that's impossible.


>What fundamental limitation of these large language models makes them unable to produce natural language?

LLMs can get indefinitely good at coding problems by training in a reinforcement learning loop on randomly generated coding problems with compiler/unit tests to verify correctness. On the other hand, there's no way to automatically generate a "human thinks this looks like slop" signal; it fundamentally requires human time, severely limiting throughput compared to fully automatable training signals.
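A minimal sketch of that verifiable-reward idea, not any lab's actual pipeline; the candidate solution and tests here are placeholder strings, and the reward is simply "did the tests pass":

    import subprocess
    import sys
    import tempfile

    def verifiable_reward(candidate_code: str, test_code: str) -> float:
        """1.0 if the candidate passes its unit tests, else 0.0; fully automatic."""
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(candidate_code + "\n\n" + test_code)
            path = f.name
        try:
            result = subprocess.run([sys.executable, path],
                                    capture_output=True, timeout=10)
            return 1.0 if result.returncode == 0 else 0.0
        except subprocess.TimeoutExpired:
            return 0.0

    candidate = "def add(a, b):\n    return a + b"
    tests = "assert add(2, 3) == 5\nassert add(-1, 1) == 0"
    print(verifiable_reward(candidate, tests))  # 1.0

    # No analogous program returns 1.0 for "this prose doesn't read like
    # slop"; that signal still requires scarce human judgment.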


Another trash article from the New York Times, which financially benefits from this type of content because of its ongoing litigation against OpenAI. I think the assumption that developers don't code is wrong. Most software engineers don't even want to code; they are opportunists looking to make money. I have yet to experience this cliff of coding. These people aren't asking hard enough questions. I have a bunch of things I want AI to build that it completely fails on.

The article could have been written from a very different perspective. Instead, the "journalists" likely interviewed a few insiders from Big Tech and generalized. They don't get it. They never will.

Before the advent of ChatGPT, maybe 2 in 100 people could code. I was actually hoping AI would increase programming literacy, but it didn't; coding became even rarer. Many journalists could have come at it from this perspective, but instead they painted doom and gloom for coders and computer programming.

The New York Times should look in the mirror. With the advent of the iPad, most experts agreed that they would go out of business because a majority of their revenue came from print media. Look what happened.

Understand this: most professional software and IT engineers hate coding. It was a flex to say you no longer code professionally before ChatGPT. It's still a flex now. But it's corrupt journalism when there is a clear conflict of interest: the NYT is suing the hell out of AI companies.


Agreed - just like the Fortune article about (Edit: Morgan Stanley, not GS) saying "the AI revolution is coming next year, and will decimate tons of industries, and no one is ready for it". They quote Altman and Musk. Gee - what did you expect from those two snake-oil salesmen?

Also the fact that NYT gives all their devs licenses to Cursor and Claude

I agree that the article is a poor take on AI in programming. However, I wouldn't blame NYT for corrupt journalism. This is an op-ed, not something written by NYT staff.

I think it's going to be more about how many people have access to the surveillance and might use it for needless things or personal reasons, at large scale.

I'm going to guess the problems are warrantless searches of all of our data, the retention policies, and, worst of all, who gets access to search through it. Basically, I speculate that anyone under a loosely defined classification would be able to access it legally. I also think there's a bunch of information and password sharing among people who don't even have clearance for it. Perhaps sprinkle in abuse of this system for personal or political reasons.

My word of caution is if you do have access to these systems or a shared password, tread very carefully.


The intent of the idea is there, and I agree that there should be more precise syntax instead of colloquial English. However, it's difficult to take CodeSpeak seriously, as it looks AI-generated and misses key background knowledge.

I'm hoping for a framework that expands upon Behavior Driven Development (BDD) or a similar project-management concept. Here's a promising example that is ripe for an agentic AI implementation: https://behave.readthedocs.io/en/stable/philosophy/#the-gher...
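For reference, a minimal behave setup looks like the sketch below (adapted from behave's tutorial; the step names are illustrative). The Gherkin scenario would live in features/example.feature and the step definitions in features/steps/example_steps.py:

    # features/example.feature (Gherkin, reproduced here as comments):
    #   Feature: showing off behave
    #     Scenario: run a simple test
    #       Given we have behave installed
    #       When we implement a test
    #       Then behave will test it for us!

    # features/steps/example_steps.py
    from behave import given, when, then

    @given("we have behave installed")
    def step_installed(context):
        context.installed = True

    @when("we implement a test")
    def step_implemented(context):
        context.implemented = True

    @then("behave will test it for us!")
    def step_verified(context):
        assert context.installed and context.implemented

Running `behave` from the project root executes the scenario, which is exactly the kind of machine-checkable specification an agent could target.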

