
When correctness is important I much prefer having strong types for most primitives, such that the name is focused on describing semantics of the use, and the type on how it is represented:

    struct FileNode {
        parent: NodeIndex<FileNode>,
        content_header_offset: ByteOffset,
        file_size: ByteCount,
    }
Where `parent` can then only be used to index a container of `FileNode` values via the `std::ops::Index` trait.

Strong typing of primitives also helps prevent bugs like mixing up parameter ordering etc.
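A minimal sketch of how this can look in Rust. Note that the `Arena` container, the `PhantomData`-based `NodeIndex`, and the `name` field below are hypothetical illustrations, not from a specific crate:

```rust
use std::marker::PhantomData;
use std::ops::Index;

// Typed index: a NodeIndex<T> can only be used to index containers of T.
// PhantomData ties the index to T without storing a T.
#[derive(Clone, Copy, Debug, PartialEq)]
struct NodeIndex<T> {
    raw: usize,
    _marker: PhantomData<T>,
}

impl<T> NodeIndex<T> {
    fn new(raw: usize) -> Self {
        NodeIndex { raw, _marker: PhantomData }
    }
}

// Hypothetical container of T values, indexable only by NodeIndex<T>.
struct Arena<T>(Vec<T>);

impl<T> Index<NodeIndex<T>> for Arena<T> {
    type Output = T;
    fn index(&self, idx: NodeIndex<T>) -> &T {
        &self.0[idx.raw]
    }
}

struct FileNode {
    name: &'static str,
}

fn main() {
    let nodes = Arena(vec![
        FileNode { name: "root" },
        FileNode { name: "etc" },
    ]);
    let parent: NodeIndex<FileNode> = NodeIndex::new(0);
    assert_eq!(nodes[parent].name, "root");

    // Using `parent` on a container of a different element type,
    // e.g. Arena<String>, is rejected at compile time.
}
```

The same newtype approach extends to `ByteOffset` and `ByteCount`, where one can additionally restrict which arithmetic operations are allowed between them.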


I agree. Including the unit in the name is a form of Hungarian notation; useful when the language doesn't support defining custom types, but looks a little silly otherwise.


Depends on what variant of Hungarian you're talking about.

There's Systems Hungarian as used in the Windows header files or Apps Hungarian as used in the Apps division at Microsoft. For Apps Hungarian, see the following URL for a reference - https://idleloop.com/hungarian/

For Apps Hungarian, the variable name incorporates the type as well as the intent of the variable - in the Apps Hungarian link above, these are called qualifiers.

So the grandparent example, rewritten in C, would be something like:

    struct FileNode {
        struct FileNode *pfnParent;
        DWORD ibHdrContent;
        DWORD cb;
    };
For Apps Hungarian, one would know that the ibHdrContent and cb fields share the same base type 'b'. ib represents an index/offset in bytes - HdrContent is just descriptive - while cb is a count of bytes. The pfnParent field is a pointer (p) to an fn type (FileNode) with the name Parent.

One wouldn't mix an ib with a pfn since the base types don't match (b != fn). But you could mix ibHdrContent and cb since the base types match and presumably in this small struct, they refer to index/offset and count for the FileNode. You'd have only one cb for the FileNode but possibly one or more ibXXXX-related fields if you needed to keep track of that many indices/offsets.


- Certain pages load but fail to load their content, e.g. https://npmx.dev/package/@storybook/addon-docs fails with:

> `[nuxt] Cannot load payload /package/@storybook/addon-docs/_payload.json?c459501f-8eb7-49c9-be9c-4a197fa35a39 Error: Invalid input`

- Scrolling fast on Firefox + Chrome is broken and resets the search results page to the start.

- Pressing up/down arrows should navigate search item results instead of focusing individual tag elements.


This is something I noticed in the xAI All Hands hiring promotion this week as well. None of the 9 teams presented is a safety team - and safety was mentioned 0 times in the presentation. "Immense economic prosperity" got 2 shout-outs though. Personally I'm doubtful that truthmaxxing alone will provide sufficient guidance.

https://www.youtube.com/watch?v=aOVnB88Cd1A


xAI is infamous for not caring about alignment/safety though. OpenAI always paid a lot more lip service.


Their flagship product is child porn MechaHitler, it’s not exactly a surprise that safety is not a priority.


One bothersome aspect of generative assistance for personal and public communication, not mentioned here, is that it introduces a lazy hedge: a person can always claim "Oh, but that was not really what I meant" or "Oh, but I would not express myself in that way" - and use it as a tool to later modify or undo their positions, effectively reducing honesty instead of increasing it.


> where a person can always claim that "Oh, but that was not really what I meant"

That already happens today - previously people claimed autocorrect or spell check instead of AI.

I don't accept these excuses as valid (even if they were real). It does not give them a valid out to change their mind regardless of the source of the text.


Arguably, excusing oneself because of autocorrect is comparable to the classic "Dictated but not read" [0] disclaimer of old. Excusing oneself because an LLM wrote what was ostensibly your own text is more akin to confessing that your assistant wrote the whole thing and you tried to pass it off as your own without even bothering to read it.

[0] https://en.wikipedia.org/wiki/Dictated_but_not_read


Yep! However, the problem will increase by many orders of magnitude as the volume of generated content far surpasses what autocorrect mechanisms produce. Autocorrect is also a far more local modification that does not generate entire paragraphs or segments of content, which makes it harder to use as an excuse for large changes in meaning.

I agree that they make for poor excuses - but as generative content seeps into everything I fear it will become more commonly invoked.


> I fear it will become more commonly invoked.

Yep, but invoking it doesn't force you to accept it. The only thing you get to control is your own personal choices. That's why I am telling you not to accept it, and I hope that people reading this will consider this their default stance.


Never in my life would I accept that as a valid excuse. If you sent the mail, committed the code or whatever, you take responsibility for it. Anything else is just pathetic.


Are you embracing the fundamental attribution error?


Good question. I certainly commit that error sometimes, like everyone else. But the issue here is people using LLMs to write e.g. emails and then not taking responsibility for what they write. That has nothing to do with attribution, only accountability.

"I was having a bad day, my mother had just died" is a very valid explanation for a poorly worded email. "It was AI" is not.


You must be a delightful person to work with.

> If you sent the mail, committed the code or whatever, you take responsibility for it. Anything else is just pathetic.

Have you discussed this with your therapist?


I mean, he put it in what is IMO too harsh a way (e.g. "pathetic"), but I do think it raises the point: if you don't own up to your actions, then how can you be held accountable for anything?

Unless we want to live in a world where accountability is optional, I think taking responsibility for your actions is the only choice.

And to be honest, today I don’t know where we stand on this. It seems a lot of people don’t care enough about accountability but then again a lot of people do. That’s just my take.


I mean we're only human. We all make mistakes. Sure, some mistakes are worse than others but in the abstract, even before AI, who hasn't sent an email that they later regretted?


Yes, we all make mistakes. But when I make mistakes when sending an email you can be damn sure that they are my own mistakes which I take full accountability for.


Making mistakes and regretting is of course perfectly ok!

What I reacted to was blaming the LLM: "I am sorry, I meant it like this ..." versus "it wasn't me, it was AI".


Yes, thank you. I used "pathetic" in the meaning of something which makes me feel sorry for them, not something despicable. I fully expect people to stand by what they write and not blame AI etc, but my comment came across as too aggressive.


> in the meaning of something which makes me feel sorry for them

I've been speaking English as a second language since I was 12, but I completely overlooked that one could use it that way. I guess they don't say it like that a lot in Hollywood, video games or... the internet.

Until now! :D

Thanks for explaining it :)


Therapists are also supposed to take responsibility for their work.


I guess you got hung up on the word "pathetic". See my comment below; I used it not as "despicable" but rather "something to feel sorry for". Indeed, people writing emails using LLMs and then blaming the AI for the consequences - that makes me feel sorry for them.

Implying mental health issues? That makes me think you were triggered by my comment.


The tax authority in Norway alone employs 500 full-time software developers. If all of Europe followed France's example in adopting the UN Open Source Principles for all publicly funded development - and prioritized open formats + protocols + interoperability - it would within only a few years be possible to greatly improve software reliability for all nations.


UK government standards say that government software should be open source by default https://www.gov.uk/service-manual/service-standard/point-12-...


That is a document. Show me reality.


They have 1.5k public repositories here at least: https://github.com/orgs/alphagov/repositories?type=all

Collection of RFCs: https://github.com/alphagov/govuk-rfcs

Open design system: https://design-system.service.gov.uk


And individual departments can/do have their own GitHub org. Eg the Office of National Statistics. Some work I did ~10 years ago can be found there! https://github.com/onsdigital


The next thing we need is for them to host their own Git infra, to avoid dependency on the US-based GitHub.


Lol remind me who owns Github again?


Why do you think that is a substantial objection? The software built in these repos is not tied to GitHub.

Git is decentralized and using a self-hosted instance of Gitea / Forgejo will give you a replacement for the essential parts of GitHub. GitHub is absolutely replaceable.


> followed France's example to adopt the UN Open Source Principles

Has this actually produced any tangible results?

I'm all in for interoperability, open source and such but the primary purpose of software is that it should work and actually achieve its task. I'm always skeptical of such top-down mandates where engineering principles or ideas are being pushed over tangible outcomes, as it usually leads to endless bikeshedding and "design by committee", while the resulting solution (if any is delivered before the budget runs out) is ultimately not fit for purpose.


I'm hopeful that it can work if:

- The top-down mandate is very general: e.g. "default to using or contributing to open standards, protocols, file formats, and interoperability".

- It's applied across many nations and organizations that can themselves choose how they wish to allocate their resources to achieve their specific objective. Meaning that the tax authority in Norway can contribute to a specific tax-reporting software project and collaborate with nations X + Y + Z on this specific project as long as it is fit for their specific purpose and mandate.

Ideally this helps incentivize a diverse ecosystem of projects that all contribute to maximize public utility, without forcing specific solutions at the highest level.

One example of a recent French software project is Garage which is an open-source object storage service. It's received funding from multiple EU entities and provides excellent public utility: https://garagehq.deuxfleurs.fr/


EU countries are great at adopting principles. And saying things. And writing documents. And passing regulations.

Meanwhile, every country still runs on Microsoft and IBM.


I wonder if it would work if the governments provide some tax incentives for open source contributions similar to charity donations as well.


Prompt: generate 15k in tax-deductible open source code contributions.

Result: all of our charities are being held hostage by ransomware.


I meant something like a deduction from payroll taxes proportional to the hours an employee works on open source projects. Obviously not perfect, but I don't think it's much worse than the existing R&D-type schemes.


Soon: Github is filled with even more garbage in order to collect tax refunds. lol


French gov open source is a joke: a single one-time repo dump from a zip file provided by the contractor, and then nothing. And that's when the source is provided at all; France Identité is closed source and Play Integrity dependent.


> the contractor

If there is a single policy change I could pick for public spending on IT it would be to forbid outsourcing to “contractors” and thinking of software delivery as “projects”


Just disabled the "Smart features" setting in Gmail. Disabling this feature force-disables these other settings:

- Grammar

- Spelling

- Auto-correct

Pretty dark UX to force users into an all AI or nothing situation.

PS1: Yes, I've paid for Google One for years and I'm not just a free user.

PS2: Yes, these features are entirely possible to provide without training on your specific user data.


These don't disable Gemini. You can disable it completely if you pay for Workspace, but you can also use a Hide Gemini extension to hide most pop-ups and nags.


It used to be that you had to buy a premium version of software to get more features, now you have to buy the premium version to disable features you don’t want…


That sucks. I've been running the paid version for years - however it's clear that it hasn't been properly maintained for a while and it suffers from sporadic crashes.

Any recommendations for launchers that are functionally similar? The launchers mentioned in this thread so far are quite different.


Lawnchair is similar, but it does have some bugs that they're still working through.

If you're not set in the traditional page/app drawer launcher, I'd recommend Kvaesitso. It's a FOSS search based launcher. A bit of a learning curve but it is very performant and feature rich.


You could also flip that and talk about the risks of when your gasoline supply gets shut down due to some event. With an EV stack you can generate your power locally and add resilience that way.


One of the big risks for manufacturers seems to be that EVs are fundamentally more compatible with automated production and allow simplifications to the car stack. It would seem that the costs and risks of keeping the ICE stack alive will keep increasing over time as it loses relevance to EVs.


I believe this is wrong for many topics. The news media is strongly incentivized to sensationalize and continuously produce content for their readers and viewers. Wikipedia is able to cover many topics that are less contested in a slower and more tempered manner, as the content does not need to be marketable or immediately available. As an example, for STEM topics I'd trust Wikipedia far more than any news media.


>as the content does not need to be marketable

For a reputable secondary source to consider writing about something, that something does need to be marketable. This can result in situations where, for a given event, only the sensationalist pieces were deemed marketable enough to write, meaning that the writers of the Wikipedia page do not have the option of using non-sensationalist sources.


I'm struggling to make sense of this. Parent is saying news media has a financial incentive to grab attention, Wikipedia does not. Best I can make out, you've moved the target by suggesting it's not about how the content of the article itself is written, but rather about the sources it supposedly has to use.


My original comment is about cases where only biased secondary sources exist, due to the story not being notable enough to be picked up by other authors. What appears to you as moving the target is clarifying that the situation the replies commented on won't happen in the case I am referring to.


Thanks for clarifying.

So one can surely imagine cases where the only references are sensationalised/biased news media reporting. However:

1) Isn't this confined to a pretty small proportion of articles, given that the breadth of topics Wikipedia covers extends well beyond the purview of news media? E.g. any basic physics or math articles, like Electromagnetism or Linear Algebra - a lot of the sources for these seem to be textbooks.

2) Can we not assume any editorial leeway on the part of contributors to try and contextualise such sensationalism/bias? No examples are coming to mind now, but I'm pretty sure I've seen qualifiers in articles at least hinting that the cited source could be potentially problematic.


A wikipedia article has to attribute a source, and their sources are biased af.


Wikipedia does not accept primary sources. News media are acceptable to them so if they are sensationalist, then it follows that Wikipedia is sensationalist. Having said that, Wikipedia bans outlets which don't follow the former's world view, which then reinforces its lack of credibility in non-STEM topics.

