I do that with my Pinephone (a powered USB-C hub with ethernet, HDMI, keyboard and mouse; I also plug a proper set of speakers+subwoofer into headphone jack).
Both Phosh and PlasmaMobile turn into a "proper" desktop when "docked" (Gnome-like and KDE-like, respectively).
True, but you can't have complete tests without 100% coverage. It's a necessary, but not a sufficient condition; as long as it doesn't become the sole goal, it's still a useful metric.
I agree. The same can be said of tests too: their main purpose is to find mistakes (with secondary benefits of documenting behaviour, etc.). Whenever I see my tests fail, I'm happy that they caught a problem in my understanding (manifested either as a bug in my implementation, or a bug in my test statement).
This ultimately is what shapes my view of what a good test is vs a bad test.
An issue I have with a lot of unit tests is that they are too strongly coupled to the implementation, which means any change to the implementation forces you to change the tests as well.
IMO, good tests are relatively immutable. You should be able to have multiple valid implementations. You should add new tests to describe the new functionality of that implementation, however, the old tests should remain relatively untouched.
If it turns out that a single change to an implementation requires you to change and update 20 tests, those are bad tests.
What I want as a dev is to immediately think "I must have broken something" when a test fails, not "I need to go fix 20 tests".
For example, let's say you have a method which sorts data.
A bad test will check "did you call this `swap` function 5 times". A good test will say "I gave the method this unsorted data set; is the data set sorted?". Heck, a good test can even say something like "was this large data set sorted in under x time". That's trickier to do well, but still a better test than "did you call swap the right number of times", or even worse, "did you invoke this sequence of swap calls".
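To make the contrast concrete, here's a minimal sketch; `sort_data` is a hypothetical function under test, and any correct sorting implementation should pass these checks:

```python
def sort_data(data):
    """Hypothetical function under test; any correct sort will do."""
    return sorted(data)

# Good tests: assert on observable behaviour, so they survive a
# rewrite from, say, insertion sort to quicksort.
assert sort_data([3, 1, 2]) == [1, 2, 3]
assert sort_data([]) == []

# The output should be a sorted permutation of the input:
data = [5, 3, 5, 1]
assert sort_data(data) == sorted(data)

# A "bad" test would instead mock out a swap() helper and assert it
# was called exactly 5 times, coupling the test to one implementation.
```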
> IMO, good tests are relatively immutable. You should be able to have multiple valid implementations. You should add new tests to describe the new functionality of that implementation, however, the old tests should remain relatively untouched.
Taken to extreme this would mean getting rid of unit tests altogether in favor of functional and/or end-to-end testing. Which is... a strategy. I don't know if it is a good or bad strategy, but I can see it being viable for some projects.
If you can't tell, I actually think functional tests have a lot more value than most unit tests :)
Kent Dodds agrees with me. [1]
This isn't to say I see no value in unit tests, just that they should tend towards describing the function of the code under test, not the implementation.
> Taken to extreme this would mean getting rid of unit tests altogether in favor of functional and/or end-to-end testing.
The dirty little secret in CS is that unit, functional, and end-to-end tests are all the exact same thing. Watch next time someone tries to come up with definitions to separate them and you'll soon notice that they didn't actually find a difference or they invent some kind of imagined way of testing that serves no purpose and nobody would ever do.
Regardless, even if you want to believe there is a difference, the advice above isn't invalidated by any of them. It is only saying test the visible, public interface. In fact, the good testing frameworks out there even enforce that — producing compiler errors if you try to violate it.
Yep, the 'unit' is whatever size one chooses to use. The exact same thing happens when trying to discuss microservices vs monolith.
Really it all comes down to agreeing to what terms mean within the context of a conversation. Unit, functional, and end-to-end are all weasel words, unless defined concretely, and should raise an eyebrow when someone uses them.
> The dirty little secret in CS is that unit, functional, and end-to-end tests are all the exact same thing.
I agree that the boundaries may be blurred in practice, but I still think that there is a distinction.
> visible, public interface
Visible to whom? A class can have public methods available to other classes, a module can have public members available to other modules, a service can have a public API that other services can call over the network, etc.
I think that the difference is the level of abstraction we operate on:
unit -> functional -> integration -> e2e
Unit is the lowest level of abstraction and e2e is the highest.
The user. Your tests are your contract with the user. Any time there is a user, you need to establish the contract with the user so that it is clear to all parties what is provided and what will not randomly change in the future. This is what testing is for.
Yes, that does mean any of classes, network services, graphical user interfaces, etc. All of those things can have users.
> Unit is the lowest level of abstraction and e2e is the highest.
There is only one 'abstraction' that I can see: Feed inputs and evaluate outputs. How does that turn into higher or lower levels?
It took me a bit of time (and two or three different views) to finally get this. That is mostly why I hardcode the values in my tests: it makes them simpler. If something fails, either the values are wrong or the algorithm of the implementation is wrong.
Comparing actual outputs against expected ones is the ideal situation, IMHO. My own preference is for property-checking; but hard-coding a few well-chosen values is also fine.
That's made easier when writing (mostly) pure code, since the output is all we have (we're not mutating anything, or triggering other processes, etc. that would need extra checking).
I also think it's important to make sure we're checking the values we actually care about; since those might not be the literal return value of the "function under test". For example, if we're testing that some function correctly populates a table cell, I would avoid comparing the function's result against a hard-coded table, since that's prone to change over time in ways that are irrelevant. Instead, I would compare that cell of the result against a hard-coded value. (Rather than thinking about the individual values, I like to think of such assertions as relating one piece of code to another, e.g. that the "get_total" function is related to the "populate_total" function, in this way...).
The reason I find this important, is that breaking a test requires us to figure out what it's actually trying to test, and hence whether it should have broken or not; i.e. is it a useful signal that requires us to change our approach (the table should look like that!), or is it noise that needs its incidental details updated (all those other bits don't matter!). That can be hard to work out many years after the test was written!
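A small sketch of the "compare the cell, not the whole table" idea above; `populate_table` and `get_total` are hypothetical names standing in for the functions being related:

```python
def get_total(items):
    """Hypothetical helper: total of the line items."""
    return sum(items)

def populate_table(items):
    """Hypothetical function under test: builds a report table."""
    return {
        "rows": [{"item": i, "price": p} for i, p in enumerate(items)],
        "generated_by": "report-v2",  # incidental detail, prone to change
        "total": get_total(items),
    }

# Fragile: comparing the whole result against a hard-coded table
# breaks whenever an irrelevant field (like "generated_by") changes.

# Focused: relate populate_table's "total" cell to get_total, which
# is the property this test actually cares about.
result = populate_table([1, 2, 3])
assert result["total"] == get_total([1, 2, 3])
```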
Also agree. There are also diminishing returns with test cases, which is why I focus mainly on what I do not want to fail. The goal is not really to prove that my code works (formal verification is the tool for that), but to verify that certain failure cases will not happen. If one does, the code is not merged in.
> I believe that if your definition of a choice stops working when we assume a deterministic Universe, then you need a better definition of a choice. In a deterministic Universe it becomes glaringly obvious that the whole framework of free will and choice is just an abstraction, one that abstracts away things that are not really needed to make a decision.
Indeed, I think of concepts like "agency", "choice", "free will", etc. as aspects of a particular sort of scientific model. That sort of model can make good predictions about people, organisations, etc. which would be intractable to many other approaches. It can also be useful in situations that we have more sophisticated models for, e.g. treating a physical system as "wanting" to minimise its energy can give a reasonable prediction of its behaviour very quickly.
That sort of model has also been applied to systems where its predictive powers aren't very good; e.g. modelling weather, agriculture, etc. as being determined by some "will of the gods", and attempting to infer the desires of those gods based on their observed "choices".
It baffles me that some people think a model of this sort has any relevance at a fundamental level.
It eschews angles entirely, sticking to ratios, and avoids square roots by sticking to "quadrances" (squared distances; i.e. Pythagoras/Euclidean distance without taking the square root).
He's quite contrarian, so I'd take his informal statements with a pinch of salt (e.g. that there's no such thing as Real numbers; the underlying argument is reasonable, but the grand statements lose all that nuance); but he ends up approaching many subjects from an interesting perspective, and presents lots of nice connections e.g. between projective geometry, linear algebra, etc.
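For the curious, a tiny sketch of the two replacement quantities from rational trigonometry: quadrance (squared distance) and spread (a rational stand-in for the angle between two lines, 0 for parallel and 1 for perpendicular). The formulas below are the standard ones for 2D vectors; everything stays in exact rational arithmetic:

```python
from fractions import Fraction

def quadrance(a, b):
    """Squared distance between points a and b: no square roots."""
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def spread(u, v):
    """Spread between lines with direction vectors u and v:
    0 for parallel lines, 1 for perpendicular lines."""
    cross = u[0] * v[1] - u[1] * v[0]
    return Fraction(cross * cross,
                    (u[0] ** 2 + u[1] ** 2) * (v[0] ** 2 + v[1] ** 2))

assert quadrance((0, 0), (3, 4)) == 25           # the 3-4-5 triangle
assert spread((1, 0), (0, 1)) == 1               # perpendicular
assert spread((1, 0), (2, 0)) == 0               # parallel
assert spread((1, 0), (1, 1)) == Fraction(1, 2)  # 45 degrees
```

Note how the spread of a 45-degree angle is exactly 1/2, with no irrational numbers anywhere.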
I also invented this! There is cool stuff like angle adding and angle doubling formulas, but the main downside is that you can only directly encode 180 degrees of rotation. I use it for FOV in my games internally! (With degrees as user input of course.) In order to actually use it to replace angles, I assume you'd want to use some sort of half angle system like quaternions. Even then you still have singularities, so it does have its warts.
With all due respect, no, it isn't. His drivel against set theory shows that he didn't even read the basic axiomatic set theory texts. In one of his papers, he rants against the axiom of infinity, saying that 'there exists an infinite set' is not a precise mathematical statement. However, the axiom of infinity does not say any such thing! It precisely states the existence of some object that can be thought of as infinite, but does not assign any semantics to it. Ironically, if he looked deeper, he would realize that the most interesting set theoretic proofs (independence results) are really results in basic arithmetic (although covered in a lot of abstractions), and thus no less 'constructive' than his rational trigonometry.
Almost every critique of the axiom of infinity is philosophical. I don't think you can just say "the axiom is sound, so what's your point". And you don't even get to claim that, because of Gödel's incompleteness theorem.
The axioms were not handed to us from above. They were a product of a thought process anchored to intuition about the real world. The outcomes of that process can be argued about. This includes the belief that the outcomes are wrong even if we can't point to any obvious paradox.
An axiomatic system containing undecidable propositions is not the same as a contradictory axiomatic system, i.e. one in which you can prove that a proposition is both true and false.
An undecidable proposition is neither true nor false, it is not both true and false.
A system with undecidable propositions may be perfectly fine, while a contradictory system is useless.
Thus what the previous poster said has nothing to do with what Gödel proved.
Ensuring that the system of axioms you use is non-contradictory has remained as useful today as it was in the time of Euclid, and basing your reasoning on clearly stated non-contradictory axioms has remained equally important, even if we are now aware that there may be undecidable things (which are normally irrelevant in practice anyway).
The results of Gödel may be interpreted as a demonstration that the use of ternary logic is unavoidable in mathematics, like it already was in real life, where it cannot always be determined whether a claim is true or false.
Indeed. Soundness and completeness are different things.
There are two well accepted definitions of soundness. One of them is the inability to prove true == false, that is, one cannot prove a contradiction from within that axiomatic system.
> It precisely states the existence of some object that can be thought of as infinite but does not assign any semantics to it
Can you elaborate on this? I think many understand that the "existence of some object" implies there is some semantic difference even if there isn't a practical one.
I really enjoyed Wildberger's take back in high school and college. It can be far more intuitive to avoid unnecessary invocation of calculation and abstraction when possible.
I think the main thrust of his argument was that if we're going to give in to notions of infinity, irrationals, etc. it should be when they're truly needed. Most students are being given the opposite (as early as possible and with bad examples) to suit the limited time given in school. He then asks if/where we really need them at all, and has yet to be answered convincingly enough (probably only because nobody cares).
Stuff like this is what really interests me in trying to imagine how differently aliens might use things that we consider to be immutable fundamentals.
personal theory: I think there's going to turn out to be a parallel development of math that is basically strictly finitist and never contends with the concept of an infinite set, much less the axiom of choice or any of its ilk. Which would require the foundation being something other than set theory. You basically do away with referring to the real numbers or the set of all natural numbers or anything like that, and skip all the parts of math that require them. I suspect that for any real-world purpose you basically don't lose anything. (This is a stance that I keep finding reinforced as I learn more math, but I don't really feel like I can defend it... it's a hunch I guess.)
any particular reference to what you're thinking of? I am aware of some writings on finitist or constructivist mathematics but they have not quite seemed to get at what I want (in particular doing away with explicit infinities does not require doing away with excluded middle at all, which is what most of that literature seems to be concerned with).
I think it's just a perspective shift. The main idea is that you can't ever measure a real number, only an approximation to one, so if two values differ by less than the resolution of your measurement they are effectively the same. For example consider the derivative f(x+dx) = f(x) + f'(x) dx + O(dx^2). The analysis version of the derivative says that in the limit dx -> 0 the O(dx^2) part vanishes and so the limit [f(x+dx)-f(x)]/dx = f'(x). The 'finitist' version would be something like: for a sufficiently small dx, the third term is of order dx^2, so pick a value of dx small enough that dx^2 is below your 'resolution', and then the derivative f'(x) is indistinguishable from [f(x+dx)-f(x)]/dx, without a reference to the concept of a limit.
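The 'finitist' version above is easy to check numerically; here's a minimal sketch, with the resolution and the choice of f being illustrative only:

```python
def finite_derivative(f, x, dx):
    """'Finitist' derivative: a plain difference quotient with a
    small-but-finite dx, with no reference to a limit."""
    return (f(x + dx) - f(x)) / dx

# For f(x) = x^2 the exact derivative at x=3 is 6; the error term
# is O(dx), so once dx is small enough the difference quotient is
# indistinguishable from the 'true' derivative at our resolution.
f = lambda x: x * x
approx = finite_derivative(f, 3.0, 1e-6)
assert abs(approx - 6.0) < 1e-4  # below our chosen 'resolution'
```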
Yes, but I was thinking more about how you'd do any kind of "and it vanishes", or even "becomes sufficiently small", with a gappy number system, since it would have to pass through gaps where "undefined" non-rationals exist.
I guess my stance (which is not very well-developed or anything) is that you try to learn to live with the gaps: define everything in terms of only what you can measure and it no longer matters whether a number is rational or irrational, or infinitesimal vs small-but-finite, because you can't tell. Instead of saying "it vanishes" as an absolute statement you say "it appears to vanish from my perspective".
I had this feeling of alien math when I went thru his videos on ancient Babylonian math. They were very serious about the everything divided by sixty stuff. Good times.
> I think having a globally shared buffer state, etc. is an antifeature.
As someone who mostly lives in Emacs, I like it. If I'm away from a machine, I can SSH into it and carry on with whatever I was in the middle of.
It's also nice to set emacsclient as EDITOR, so that e.g. running `git commit` will open up a buffer in the existing Emacs session. This is especially useful since I use shell-mode, and it would be confusing/weird to have new Emacs instances popping up when I'm already in an editor! (They open in a "window" (i.e. pane) in the existing "frame" (i.e. window) instead)
Emacs can certainly be sluggish, but I'm not sure how much that's e.g. inherent to ELisp, or due to synchronous/single-threaded code, or choosing slow algorithms for certain tasks, etc.
For me, the best performance improvement has been handling long lines; e.g. Emacs used to become unusable if it was given a line of around 1MB. Since I run lots of shell-mode buffers, that would happen frustratingly-often. My workaround was to make my default shell a script that pipes `bash` through a pared-down, zero-allocation copy of GNU `fold`, to force a newline after hitting a certain length (crossing a lower threshold would break at the next whitespace; hitting an upper threshold would force a break immediately). That piping caused Bash to think it wasn't interactive, which required another work-around using Expect.
Thankfully the last few versions of Emacs have fixed long-line handling enough for me to get rid of my awful Rube-Goldberg shell!
I've been using magit for years, and it's the reason I avoided giving the jujutsu VCS a try: the `jj` workflow/UI is supposedly much nicer than the `git` workflow/UI; but since I use magit more than bare `git` commands, that wasn't enough to sell me.
I finally gave it a try when I came across the majutsu package, which is a magit-like interface for jujutsu. I recommend it for Emacs/magit users wanting to try `jj`!
one thing I'm missing in jjui which jj cmdline does natively is rebase onto multiple heads - using this for quickly testing my branch on some other pr and latest main. other than that agreed, helps a lot with tedious noting down of change id prefixes.
This is something I've never done before. Are you just repeating -o, creating a merge commit?
If that's the case, it also seems like you can do jj duplicate and repeat -o if you just want to create a branch to temporarily test against another branch and main.
yes, exactly this, multiple -o. I sometimes have two or three branches which I keep a single merge branch on top of and being able to switch out the parents is super convenient.
I'm still learning jj, so I'm not sure about jj things that majutsu might be missing, or what git/magit things seem "missing" but are just done differently in jj.
A couple of things I tend to notice:
- In magit, I can run a raw git shell command by pressing `:`; majutsu doesn't seem to have that, so I use Emacs' ordinary `M-!`
- The default view in majutsu (log) isn't as slick as magit's. With magit, I'll routinely open it up to look at the repo status, browse through the diffs (expanding/collapsing), staging/unstaging, etc. With majutsu, most of that requires first opening up the log, then opening up the diff of a commit.
- Staging/unstaging in magit is quite nice. The analogous workflow in jj seems to be splitting/squashing, but that feels clunkier in majutsu.
I've not opened bugs or PRs for these things, since it's mostly vibes and I don't have actual solutions to offer ;-)
EDIT: Oh, I also remembered that `jj` ignores $PAGER and uses its own built-in paging by default. That tries to act like `less`, and doesn't work well in Emacs. It can't use env vars either, unless we set its pager to something like `sh -c "$PAGER"` (see https://docs.jj-vcs.dev/latest/config/#pager ). Since my $PAGER is always `cat`, I've just set that as jj's pager directly too.
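Per the jj config docs linked above, the relevant settings look something like this (the file path and the commented-out alternative are illustrative):

```toml
# ~/.config/jj/config.toml
[ui]
# To respect $PAGER despite jj not expanding env vars directly:
# pager = ["sh", "-c", "$PAGER"]
# Since my $PAGER is always `cat`, I just set that directly:
pager = "cat"
```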
I think XML fit well into the turn-of-the-millennium zeitgeist: GUIs would hide the verbosity; the proliferation of bespoke tags would map cleanly to OOP representations; middleware could manipulate and transform data generically, allowing anything to plug into anything else (even over the Internet!).
Whilst lots of impressive things were built, the overall dream was always just out of reach. Domain-specific tooling is expensive to produce and maintain, and often gives something that's not quite what we want (as an extreme example, think of (X)HTML generated by Dreamweaver or FrontPage); generic XML processors/editors don't offer much beyond avoiding syntax/nesting errors; so often it was simplest to interact directly with the markup, where the verbosity, namespacing, normalisation, etc. wouldn't be automated-away.
XML's tree model was also leaky: I've worked with many data formats which look like XML, but actually require various (bespoke!) amounts of preprocessing, templating, dereferencing, etc. which either don't fit the XML model (e.g. graphs or DAGs), or just avoid it (e.g. sprinkling custom grammar like `${foo.bar}` in their text, rather than XML elements like `<ref name="foo.bar" />`). Of course, it was hard to predict how those systems would interact with XML features like namespaces, comments, etc. which made generic processing/transforming middleware less plug-and-play. That, plus billion-laughs mitigations, etc. contributed to a downward spiral of quality, where software would not bother supporting the full generality of XML, and only allowed its own particular subset of functionality, written in one specific way that it expected. That made the processors/transformers even less useful; and so on until eventually we just had a bunch of bespoke, incompatible formats again. At which point, many just threw up their hands and switched to JSON, since at least that was simpler, less verbose and easier to parse... depending whether you support comments... and maybe trailing commas...; or better yet, just stick to YAML. Or TOML.....
(My favourite example: at an old job, I maintained an old server that sent orders from our ecommerce site to third party systems, using a standard "cXML" format. Another team built a replacement for it, I helped them by providing real example documents to test with, and eventually the switch was made. Shortly after, customers were receiving dozens of times what they ordered! It turned out that a third-party was including an XML declaration like `<?xml>` at the start of their response, which caused the new system to give a parse failure: it treated that as an error, assumed the order had failed, and retried; over and over again!)