Hacker News | new | past | comments | ask | show | jobs | submit | phao's comments

Maybe this is one of those things like cold reading.

IMO, the text may be seen as a piece of advertising for a fictional "de-programming" (as they call it) treatment for leaving a cult. However, it's written for an abstract, general kind of cult, described only in terms of what several cults have in common. It's not specific about anything. Just like well-done cold reading: it's right, but unspecific.

I believe the idea is that you identify some of the things you're doing in your life as cult-like (guided by the general features given in the text) and pay more attention to them. The text, however, is vague enough to classify a lot of things as cults, even though you/I/we may believe they aren't.


I've just spent the last 30min-1h looking this up. I think I might have a form of this... Mostly, I can't really picture stuff. I thought that was just how it was for everyone.

Gotta find out more about this!


He is mostly talking about computational linear algebra problems at large scale, due to large matrices: the "computational intensity" comes from having really large matrices (kxk for k in the hundreds, thousands, tens of thousands, hundreds of thousands, millions, ...).

In computer graphics, the situation is often different. Usually, you have small matrices (kxk for k = 2, 3, 4), a huge number of vectors, and you want to apply your matrix to all of those vectors. Very often, these matrices have well-known forms with known, well-behaved inverses. There isn't really a significant computational cost in computing the inverse (you'll very often write down its formula by hand), and conditioning is usually not an issue (consider a rotation matrix, for example, or undoing translations with homogeneous coordinates).
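A minimal sketch of this (assuming numpy; the numbers and names are just illustrative): for a rotation matrix the inverse is known in closed form, it's simply the transpose, so "inverting" and applying it to a big batch of vectors costs essentially nothing beyond the matrix multiply itself.

```python
import numpy as np

# A 2D rotation matrix. Rotations are orthogonal, so the
# closed-form inverse is just the transpose -- no solver needed.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
R_inv = R.T

# Graphics-style workload: one tiny matrix, a huge batch of vectors.
points = np.random.default_rng(0).random((100_000, 2))
rotated = points @ R.T          # apply R to every point
restored = rotated @ R_inv.T    # undo it with the known inverse

assert np.allclose(restored, points)
```

The same idea applies to undoing a translation in homogeneous coordinates: you write the inverse down by hand (negate the translation column) rather than calling a general solver.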


Thank you for the clarification. I didn't realize we were talking about that kind of scale.


With Newton's method, you'll be solving Hx = g (H = Hessian of f, g = gradient of f) at each iteration. For a large number of variables N, building H is of order N^2 and solving Hx = g is of order N^3 with a typical dense solver. N^2 and N^3 are really large for an already large N. I believe the reason is as simple as that. It isn't that it's tedious or difficult to write down the formulas or the code to compute H. It's just too costly, computationally speaking. There is also an increased memory cost (possibly having to store all of H).

When people do have ways to get around this problem, they do use Newton's method for large-scale problems.
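To make the cost concrete, here is a hedged sketch of one Newton step (assuming numpy; the quadratic objective is a made-up example). Storing H takes O(N^2) memory and np.linalg.solve is an O(N^3) dense solve, which is exactly what blows up for large N.

```python
import numpy as np

# Toy objective f(x) = 0.5 x^T A x - b^T x, so the gradient is
# g = A x - b and the Hessian is H = A (constant for a quadratic).
N = 200
rng = np.random.default_rng(0)
G = rng.standard_normal((N, N))
A = G @ G.T + N * np.eye(N)     # symmetric positive definite Hessian
b = rng.standard_normal(N)

x = np.zeros(N)
g = A @ x - b                    # gradient at the current iterate
step = np.linalg.solve(A, g)     # solve H * step = g  -- the O(N^3) part
x = x - step                     # Newton update

# For a quadratic, a single Newton step lands on the exact minimizer.
assert np.allclose(A @ x, b)
```

Double N and the solve costs roughly 8x as much, which is why large-scale codes reach for quasi-Newton or matrix-free methods instead.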


By the way... which font is being used in the readme screenshot?

https://github.com/helix-editor/helix/blob/master/screenshot...


Looks a lot like JetBrains Mono but I’m not 100% sure


Directly, I believe this has more to do with people interested in the language seeing it used in actual projects.

Indirectly (very indirectly), sure. Maybe the software performs better or has fewer bugs because programming language X or Y is understood to help with those things. Maybe it showcases a more flexible architectural design, which leads to more potential features in future versions, etc.


It also hints at the mindset and values of the creator. Communities around languages value things that may or may not line up with the qualities of the language they form around, but the simple fact that they value those things is reflected in what they produce. If people think a language is performant, then performance will be something their community values. If it is known for having a steep learning curve, their product might have one too.


A bit like seeing "car built in carbon fiber". It may pique interest in the materials, or imply different quality attributes and design intent.


Super sincere question.

> Google's reputation for not supporting things long term

I didn't know Google had such a reputation. I mostly use Drive and Gmail, so things have been fine for me.

Does Google really have such a reputation? Is there anywhere I can read more on this?


https://killedbygoogle.com/ lists the lifetime and EOL of everything Google has killed, along with a short blurb about each termination.


Very sincerely, you must read just about zero tech news. Google has been infamous for this ever since it shut down Google Reader in 2013. For about the past 10 years it has been a company adrift that can no longer launch new products without getting absolutely ridiculed. Everyday consumers have lost their faith in the company because they are so used to getting jerked around by anything Google. People I know won't touch a Google chat app because they assume it won't last six months.


> Very sincerely, you must read just about zero tech news.

Not 0, but really not much, that is true.

I remember the shutdown of google reader. I tried it once, but let go of it before it was shut down.

But I really didn't know Google had this kind of reputation.


https://killedbygoogle.com/

I miss Google Reader and Google Wave the most.


I really miss Google Play Music. For my needs, it was the perfect streaming service.

YouTube Music is a huge step back. Spotify is far too playlist- and recommendation-happy; I want to listen to albums, not curated lists. Tidal is decent, but similar to Spotify. Apple Music is the one I haven't tried for more than a couple of days, and I don't recall what I didn't like about it.


Same here. GPM's recommendations were fantastic and the interface was very simple and nice to use. When they moved the service to YouTube Music, half my playlists were filled with poor-quality songs uploaded to YouTube; it's a mess.

Spotify is okay and does have some nice features in the way casting works and multiple devices can join one account, but it's certainly not as enjoyable to use.


It made a great offline music player, too!


Apple Music is my choice, exactly because it is still album-focused. That said, I'm not surprised you bounced off… the UI isn't very good.


I keep getting free trials and financially it makes sense to me with Apple One etc, but every time I've tried it I've ended up back on Spotify.

I've concluded the price saving isn't enough to use a product whose UX I enjoy less, even if it works well for other users.


I don't understand this comment. You can listen to whole Albums on Spotify.


Yes, you can. Easily!

The point I was making about Spotify is that even if I solely listen to music as full albums, I only get recommendations for playlists. I rarely want to listen to a playlist. There are a number of other things I don't like about Spotify, but it works well enough.


Gmail and Drive are pretty much the only safe havens. The rest… how many of its own chat apps will Google kill this year?



Yes, they do. I personally would never put a single service on GCP, just on principle.



> In my view, because pi crops up unavoidably in math, if you concoct a "unit" to get rid of pi in one place, it will simply crop up somewhere else, perhaps in a denominator.

That doesn't mean you shouldn't try to put it in a convenient place.

One way to think of the post is: where do you want pi to come up?

With arc length parametrization f(r) = (cos(r), sin(r)), it comes up in the parameter space (one turn: 0 <= r <= 2 pi). If you had the whole thing in terms of turns, you'd instead have (as a primitive) some kind of function g(t), with one full turn for 0 <= t <= 1. It'd then have to be true that

f(2 pi t) = g(t) = (cos(2 pi t), sin(2 pi t)).

Pi would come up in the velocity:

f'(r) = (-sin(r), cos(r)) = i f(r)

(where i u means: rotate the vector u by 90 degrees counter-clockwise)

g'(t) = 2 pi f'(2 pi t) = 2 pi (i f(2 pi t)) = 2 pi (i g(t))

Before, you had |f'| = 1. Now you have |g'| = 2 pi.
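The two speeds above are easy to check numerically. A small sketch (plain Python with numpy; the evaluation point 0.3 is arbitrary), using central finite differences to approximate f' and g':

```python
import numpy as np

def f(r):
    """Arc length parametrization of the unit circle."""
    return np.array([np.cos(r), np.sin(r)])

def g(t):
    """Turns parametrization: g(t) = f(2*pi*t)."""
    return f(2 * np.pi * t)

# Central finite differences to approximate the velocities.
h = 1e-6
fp = (f(0.3 + h) - f(0.3 - h)) / (2 * h)
gp = (g(0.3 + h) - g(0.3 - h)) / (2 * h)

# |f'| = 1 (radians / arc length), |g'| = 2*pi (turns).
assert abs(np.linalg.norm(fp) - 1.0) < 1e-5
assert abs(np.linalg.norm(gp) - 2 * np.pi) < 1e-4
```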

For classical physics (kinematics and dynamics) applications and classical geometrical applications (curvature, etc), it's really convenient to have that speed term (|f'|) being 1. This is one of the major motivations for arc length parametrization.

By the way, this can't be overstated. It really simplifies kinematics, dynamics, geometry, etc., having |f'| = 1 throughout. It's not just for circles: this can be done for an extremely large class of curves, and it makes the related math much more understandable and easier to deal with.

For a lot of computer graphics (I believe this is where Casey comes from), you care less about the traditional mathematics of physics and geometry. So you'd rather (maybe) take this pi appearing in the parameter space and push it into the velocity.


The major motivation for radians is arc length parametrization, really. Meaning that on a circle of radius 1 unit (in whatever measurement unit you've chosen), an arc subtended by a k-rad angle measures k units. There is an intentional coincidence of angle and arc measurements.
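That coincidence can be spelled out in two lines (stdlib only; the specific angles are just examples): arc length is r * theta when theta is in radians, so for r = 1 the arc length *is* the angle, while degrees always drag a conversion factor along.

```python
import math

# On a circle of radius r, a theta-radian angle subtends an arc r * theta.
r = 1.0
theta = 2.5                  # radians
arc = r * theta
assert arc == theta          # radius 1: arc length equals the angle

# With degrees you'd carry the pi/180 conversion everywhere:
assert math.isclose(math.radians(180.0) * r, math.pi)
```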


I wonder how much of (2) is speculative and how much of it is a real need in actual projects.


The negative performance impact of GC in performance-engineered code is neither small nor controversial; it is a mechanical consequence of the architecture choices available. Explicit locality and schedule control make a big difference on modern silicon. Especially for software that is expressly engineered for maximum performance, the GC equivalent won't be particularly close to a non-GC implementation. Some important optimization techniques in GC languages amount to effectively disabling the GC.

While some applications written in C++ are not performance sensitive, performance tends to be a major objective when choosing C++ for many applications.


When people complain about the "negative performance impact of GC", often they're actually bothered by badly designed languages like Java that force heap allocation of almost everything.

I think this might be addressed in recent versions of Java, though; I'm not sure whether value types (Project Valhalla) are already in the language or just coming soon.

Aside from that, it's my understanding that GC can be both a blessing and a curse for performance (throughput), that is, an advanced-enough GC implementation should (theoretically?) be faster than manual memory management.


In theory, a GC should never be faster than manual memory management. Anything a GC can do can be done manually, but manual management has much more context about appropriate timing, locality, and resource utilization than a GC can ever have. A large aspect of performance in modern systems is how effectively you can pipeline and schedule events through the CPU cache hierarchy.

There are a few different ways a GC impacts code performance. First, even low-latency GCs have a latency similar to a blocking disk op or worse on modern hardware. In high-performance systems we avoid blocking disk ops entirely specifically because it causes a significant loss in throughput, instead using io_submit/io_uring. Worse, we have limited control over when a GC occurs; at least with blocking disk ops we can often defer them until a convenient time. To fit within these processing models, worst case GC latency would need to be much closer to microseconds.

Second, a GC operation tends to thrash the CPU cache, the contents of which were carefully orchestrated by the process to maximize throughput before being interrupted. This is part of the reason high-performance software avoids context-switching at all costs (see also: thread-per-core software architecture). It is also an important and under-appreciated aspect of disk cache replacement algorithms, for example; an algorithm that avoids thrashing the CPU cache can have a higher overall performance than an algorithm that has a higher cache hit rate.

Lastly, when there is a large stall (e.g. a millisecond) in the processing pipeline outside the control of the process, the effects of that propagate through the rest of the system. It becomes very difficult to guarantee robust behavior, safety, or resource bounds when code can stop running at arbitrary points in time. While the GC is running, finite queues are filling up. Protecting against this requires conservative architectures that leave a lot of performance on the table. If all non-deterministic behavior is asynchronous, we can optimize away many things that can never happen.

A lot of modern performance comes down to exquisite orchestration, scheduling, and timing in complex processes. A GC is like a giant, slow chaos monkey that randomly destroys the choreography that was so carefully created to produce that high-performance.
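The "pause outside the program's control" point can be made visible even in CPython, whose cycle collector is stop-the-world. A hedged illustration (stdlib only; the object count is arbitrary and the measured pause will vary wildly by machine): build a pile of cyclic garbage with collection disabled, then time one full collection.

```python
import gc
import time

class Node:
    def __init__(self):
        self.ref = self  # reference cycle: refcounting alone can't free it

# Disable automatic collection so the garbage piles up...
gc.disable()
for _ in range(200_000):
    Node()

# ...then pay for it all at once. gc.collect() is a blocking,
# stop-the-world operation in CPython.
t0 = time.perf_counter()
collected = gc.collect()
pause = time.perf_counter() - t0
gc.enable()

print(f"collected {collected} objects in {pause * 1000:.1f} ms")
assert collected > 0
```

This is only a toy, of course: production collectors (ZGC, Shenandoah, Go's GC) are far more sophisticated, but the scheduling point stands: the program doesn't choose when the pause lands.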


> performance tends to be a major objective

My comment is about the thinking behind making this decision, C++ or not. It wasn't "is it speculative that GC will add a cost?" or something like that.

I wonder how much of the thinking that leads one to conclude "I need so much performance here that I can't afford a managed language", for example, is real, careful thought vs. speculation.


In my experience, 99% speculative and WRONG. Who said "early optimization is the root of all evil"? :) Today it is more and more possible to have a GC without terrible performance issues. Some weeks ago I read an article here on HN about Lisp used for safety-critical systems. That bad reputation of GC comes from the early versions of Java... but I've been using GC languages a LOT, and I've never had those "stop the world" moments.


The expression is "premature optimization". And, Donald Knuth.

GC overhead is always hard to measure except end-to-end, because it is distributed over everything else that happens: cache misses, TLB shoot-downs. Mitigations are very difficult to place.

Practically, you usually just have to settle for lower performance, and most people do. Double or triple your core count and memory allocation, and bull ahead.


Not my bailiwick, but I feel like early Java's problem was a combination of everything that isn't a simple primitive being an object that goes on the heap, plus a GC optimized for batch throughput rather than latency. Bonus: it's Java all the way down, to the bitter end.

I'm with you. One should look at latency requirements and the ratio of profit vs. server costs when making the decision. I.e., when your product generates $250k/month, you're paying three programmers $40k/month, and your AWS bill is $500/month, that isn't the time to try to shave pennies.

