Which is a pretty abominable tool TBH. It breaks on just about every little edge and isn't very useful unless you like to see pretty colors and nested outputs + learn its terrible output-formatting spec. When using JSON I still use jansson for quick one-offs.
It's not about trusting user input. It's about asking a function to create a 100-byte substring of a very long string and expecting it to take time ~100, not time ~len(src).
In fairness, that is probably just not the target "market" for strlcpy. Presumably, it is meant for "please copy this whole string which I expect to fit in the target (but catch me in the rare case that it does not)".
It does trust the user input. If you are being attacked and the input has lost its null terminator then your strlcpy might core dump even if it doesn't leak data.
They made it this way to make it more of a drop-in replacement for strncpy, but IMHO they should have changed the return value to be the number of characters copied: if the return value is less than n, your string was copied completely; if it is equal to n, your string was truncated.
There are good reasons to use strlcpy over memcpy. For example, you have a text parser where the most common case is that each string is only a few bytes long, but you need to be able to handle odd cases where they are much longer. So you have a large buffer that you only use a tiny chunk of most of the time. With strlcpy it will be quick, but memcpy will be chugging through the whole buffer each time.
Of course, but everyone wants to overdo the complexity of the time-worn solution. OMG, you need to null-terminate the string after all the _other_ gymnastics... gee, C sure does suck! Why don't we use rust|go|c++, ad nauseam.
bzero(buf, sz);             /* memset nazis here */
strncpy(buf, src, sz - 1);
It doesn't make sense to have to null-terminate the string by hand (maybe) after doing a "safe" STRING copy. That means it's easy to forget to do, and that's a dangerous wart in the design.
Really, stop using C. To quote Hayao Miyazaki, C was a mistake.
I've never understood complaints like this. If you are proficient in the unix environment any sort of gymnastics can be handled via generation of some 'object' from plaintext and command generation on the fly.
find $PATH -name '*.c' -exec grep -l socket {} \; | awk '{printf "mv %s %s\n", $0, sprintf("%s.old", $0)}'
find $PATH -name '*.c' -exec grep -l socket {} \; | awk 'BEGIN {n=0} {printf "{\"items\": %s", sprintf("[\"%d\",\"%s\"]}\n", n++, $0)}'
If you aren't proficient, or have an aesthetic or religious aversion to the unix userland and traditional tools, you'll play some other game, I guess. Reinventing the wheel without understanding the model and the power is a next-gen game. I don't have time for it.
Just because I built ripgrep doesn't mean I've reinvented a wheel without understanding the existing model/power, so your criticism feels a bit disingenuous to me.
To be clear, with the current release of ripgrep, you cannot create structured objects from its output as easily as you might think. I get that it's fun to show how to do it with long shell pipelines for simple cases, but the current release of ripgrep would actually require you to parse color escape sequences in order to find all of the match boundaries in each line. This is what tools like VS Code do, for example. The --json output format rectifies that. There are other solutions that might be closer to the text format, but they're just more contortions on the line oriented output format that aren't clearly useful for human consumption, and it's much simpler to just give people what they want: JSON.
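For reference, each match in ripgrep's --json output is one line of JSON, shaped roughly like the sample below (field layout from memory, and the values here are invented for illustration -- check the ripgrep docs for the authoritative schema; there are also `begin`/`end`/`summary` message types):

```json
{"type":"match","data":{"path":{"text":"src/main.c"},"lines":{"text":"int s = socket(AF_INET, SOCK_STREAM, 0);\n"},"line_number":42,"absolute_offset":1337,"submatches":[{"match":{"text":"socket"},"start":8,"end":14}]}}
```

The `submatches` start/end offsets are exactly the match boundaries that the colored text format forces tools like VS Code to recover from escape sequences.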
Wasn't referring specifically to you, but to the gist of the article and the post previous to yours, I believe. On the ANSI escape sequences to find matches, etc... yes, I get what your tool does, but having to tokenize against ANSI escape codes and other ad-hoc env artifacts is something I'm glad to leave to authors who enjoy it... not that it is terribly difficult, unless you decide to reinvent the wheel and optimize everything.
He is pointing to a phenomenon I have witnessed: copy-pasta and a mediocre understanding of the domain are enough to pass as a programmer these days.
How much of that (Github) code is based on du jour languages and only a casual attempt (if any) to reuse or adapt an existing codebase? I'd say at least 40% but I may be pessimistic.
I would argue that 'abuse' is the most significant characteristic of an open system. What is one person's abuse is absolute freedom to another.
The proficient design subsystems to isolate themselves from the 'abuse' and attract customers of their design. These become walled gardens. Eventually the closed system becomes oppressive.
As many others note, it is a sort of dialectic that can be traced in historical political movements and other human endeavors.
Not so much these days with the devops reaction. Now you need to package your work and a good remote worker can figure out what is needed in a few minutes. Not a bad thing for either party TBH. Sysadmins can chase butterflies when they burn out and 3rd worlders get a remote gig to pay for food.
Responsibilities:
* Take over everything technical I was good at and hired for.
* Take over policy and progress on everything I was good at.
* Be on call 24/7.
* Do whatever the CEO/CIO didn't want to do.
I think this is pretty standard, from my reading of other startups. The one lesson I should have learned and didn't is that, as first engineer, you should get 10% or more equity if you stay > 5 years.
"...It was a world that is now extinct. People don’t know that vi was written for a world that doesn’t exist anymore..."
And every time I use it I'm reminded that there is this devops/automation movement against interactivity with systems, and that this editor is symbolic of (and best suited to) an operator culture... which is also dying, if not defunct.