Also, saying Apple reinvented the cursor is a bit much. Highlighting elements, snapping, and changing the cursor's shape have all existed for years; none of it was just invented. The author really bought into Apple's marketing here.
I feel like the quality of TechCrunch has really gone down as a whole over the years, and this article is a great example. It is essentially a fluff and marketing piece for Apple that can be summarised in three bullet points:
* Apple invented the cursor (untrue)
* On iPad, the cursor is a circle
* They brought over the same Apple TV effect to iPad
Most of the words and sentences in this article are meaningless, and if reading it has made me feel anything, it's that I'll never sign up for their subscription service.
Care to share a couple examples of the meaningless sentences in this piece? I found it more interesting and engaging than the Wired piece, precisely because it dived deeper into the theory behind the interaction design choices that Apple made here.
> Honestly, the thinking could have stopped there and that would have been perfectly adequate. A rough finger facsimile as pointer. But the concept is pushed further. As you approach an interactive element, the circle reaches out, smoothly touching then embracing and encapsulating the button.
> The idea of variable cursor velocity is pushed further here too. When you’re close to an object on the screen, it changes its rate of travel to get where you want to go quicker, but it does it contextually, rather than linearly, the way that macOS or Windows does.
Could be shortened significantly. The passage repeats each point twice, including the main point, and is mostly flowery language that doesn’t add anything.
> The thinking could have stopped there, but instead of being a rough finger facsimile, the circle accelerates, then reaches out to elements as you approach them, embracing and encapsulating the button. This is unlike macOS and Windows, which vary velocity linearly.
The entire article is like this. I think the author was padding for word count or SEO or something. I didn't read the Wired piece.
I don't read much of this language as "flowery" at all, and I think removing almost any of the words that you're decrying as adding nothing changes the meaning and tone.
For example, your rewritten version is not an accurate representation of what the author was trying to convey, and mangles the two separate points into one. The contextual acceleration the author is describing is not the same thing as the circle "reaching out" as it approaches buttons.
> Unlike the text entry models of before, which placed character after character in a facsimile of a typewriter, this was a tether that connected us, embryonic, to the aleph.
They never say that Apple invented the cursor. What they do say is that Apple brought the mouse-controlled cursor to the mass market, which is true. They are actually careful to explain its origins at Xerox PARC, and they even show a video from that time.
I lived with a journalist for a while. Basically it can be summed up as "now they pay freelancers to write for basically nothing except ~exposure~". Typically there's a word count and a monthly article minimum, and if you don't hit your monthly minimum you get nothing for any of the articles you wrote that month. Like just $0 for all your work. This isn't just blogs like TC, BTW; my roommate worked for Forbes and I think Bloomberg, but the second one might have been a real job, idk. Forbes was definitely freelance, though. And this was business news, not some BS side column. If you follow the industry he wrote about closely, there's a good chance you have read at least one of his articles.
Fwiw the writers (at least, the one I knew) hate it too because it floods the job market with ultra low paid competition, and since it's "just the way things are", a lot of those underpaid people are very talented and are glad to accept very little pay
Forbes is one of the saddest cases. Under Malcolm Forbes the biweekly print magazine was arguably the real class of the business periodicals. But, at some point, the online version just sold its soul for page views with all the "influencer" blogs, many of them written by people with conflicts of interest a mile long. Overall, their "native advertising" was/is as egregious as anyone's.
And that's not even getting into just how annoying it is to try to read anything on Forbes because of all the pop-ups etc.
Yeah, it's fairly simple to crudely replicate the effect in CSS. It gets rid of the possibility that your cursor is blocking a portion of the button label, but creates uncertainty in the position of the cursor -- jiggling the mouse lightly won't let you locate it anymore. What iPadOS 'invented' was to transition the effect in a smoother and snappier way.
I've found Chrome OS to provide the most Mac like experience outside of Mac. Its Linux app support is still in development, but I successfully made a Unity game on my Chromebook!
Also, using a Swift-like ARC implementation server side doesn't seem like a good idea. Memory leaks are very common in iOS development; it's just that they're usually small enough, and an iOS application's lifetime short enough, that it doesn't matter. An ARC implementation that handles cyclical references seems like a more stable solution for a server.
Note: I'm not saying one is better than the other. ARC works great on iOS for creating applications.
I have written servers in Kotlin that run on the JVM and are thus practically the same. IMO I am far more productive than when using Swift, plus you have access to a huge legacy of stable, well-thought-out, and documented libraries.
I work in Swift on iOS. I think the "Swift on a server" idea is way overhyped. There are so many issues with the whole Swift stack that adopting it server side would be a mistake, IMO. One of the issues with Swift on a server is ARC vs. a Go-like GC. If a company is legitimately concerned about runtime efficiency, there are already great solutions, Go being one of them.
I would love to hear your perspective but this is not a well supported argument. ARC is one of the most interesting things about Swift on the server. Historically one of the greatest challenges with servers is the impact of GC on the long tail of performance and latency. So, so much work has gone and continues to go into addressing the problem of low-overhead GC. You can appear to get great performance but then when you look at your stats you see that some significant percentage of your users are exceeding your maximum latency goals due to GC kicking in. Go and Java have made great strides but sometimes at the cost of memory inefficiency and/or optimizing for specific cases.
Reference counting, by contrast, is entirely predictable. It doesn't defer any work. I would argue that CTOs are a lot more interested in consistently low latency than in requests per second. So it's very interesting to see an approachable, performant language take the ARC route on the server. It is early days though.
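To make the predictability point concrete, here's a minimal, hypothetical Swift sketch (the `Connection` class and `handleRequest` are illustrative, not from any real server framework): under ARC, an object's `deinit` runs at a deterministic point, when its last strong reference goes away, rather than whenever a collector gets around to it.

```swift
// Minimal illustration of deterministic reclamation under ARC.
// `events` records the order in which things happen.
var events: [String] = []

final class Connection {
    init() { events.append("open") }
    deinit { events.append("closed") }  // runs the moment the last strong ref dies
}

func handleRequest() {
    let conn = Connection()
    _ = conn  // use the connection
    // ARC releases `conn` before this function returns; no deferred work
}

handleRequest()
events.append("done")
// events == ["open", "closed", "done"]: cleanup happened inline,
// not at some later, unpredictable GC pause
```

The point isn't that this is faster overall, only that the cost is paid at a known place, which is what matters for tail latency.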
My main issue with Swift's implementation of ARC is that it commonly leads to memory leaks. Even experienced developers cause unintentional leaks; it's extremely common in iOS development. It's also hard to even detect and eliminate them: you just see memory usage climb as your app ages. It's not as much of an issue on iOS because apps are commonly very small and have such short lifecycles.
ARC would be just one of my issues with using Swift outside of iOS.
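For readers who haven't hit this, here's a minimal sketch of the kind of leak being described (the `Parent`/`Child` names are made up): two objects holding strong references to each other survive even after nothing else points at them, because each keeps the other's reference count above zero.

```swift
// Classic ARC retain cycle: Parent and Child strongly reference each other.
var deinitCount = 0

final class Parent {
    var child: Child?
    deinit { deinitCount += 1 }
}

final class Child {
    var parent: Parent?          // strong back-reference creates the cycle
    // weak var parent: Parent?  // the usual fix: break the cycle with `weak`
    deinit { deinitCount += 1 }
}

func makePair() {
    let p = Parent()
    let c = Child()
    p.child = c
    c.parent = p
    // both locals go out of scope here, but the cycle keeps the counts at 1
}

makePair()
// deinitCount is still 0: both objects leaked.
// With the `weak` variant, both deinits run and deinitCount == 2.
```

A tracing GC collects this pair automatically, which is exactly the trade-off being debated here.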
I think you are overstating the pause duration of a modern GC. I have used both Go and the JVM server side, not at a huge scale but enough to see GC affecting response times. They add some fluctuation, but nothing compared to network latency or the multitude of other factors that fluctuate heavily. It was never significantly relevant for response times. I'm looking at my server logs right now and there's not even a real correlation between GC and response time, unless you are counting a 0-10ms fluctuation.
If you are interested in using ARC server side, I know Kotlin Native uses ARC; however, their implementation eliminates the cyclical reference issue.
Nah, I’d say Swift’s good support for value type semantics helps cut down on leak problems. Depends on the patterns you use of course but I can see that being a good approach for server apps.
Funny you say that because just this week there has been an investigation into what seemed to be “leaks” but turns out it’s memory fragmentation. It’s a fascinating read into how to debug a server problem if you’re into that sort of thing.[1]
Agree that modern tracing GC can be very good in a wide range of cases but there are some where it’s not. Very dependent on the case. Ultimately you are deferring work and hoping to find some time in the future to squeeze it in unnoticed. ARC is a cool paradigm to explore on the server as it doesn’t have this problem to begin with.
Value types are great, but they don't help cut down on cyclical references, as the kind of code that causes memory leaks lives in objects, typically ones with complex dependencies and inheritance hierarchies.
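A common concrete case of that object-side pattern, sketched hypothetically (the `ViewModel` name is made up): an object stores a closure that strongly captures the object itself, forming a reference cycle that value types never participate in.

```swift
// Object-plus-closure cycle: `onUpdate` strongly captures `self`,
// so the ViewModel keeps itself alive.
var deallocated = false

final class ViewModel {
    var onUpdate: (() -> Void)?
    deinit { deallocated = true }

    func bind() {
        onUpdate = { self.refresh() }                     // strong capture: leak
        // onUpdate = { [weak self] in self?.refresh() }  // the idiomatic fix
    }

    func refresh() {}
}

func run() {
    let vm = ViewModel()
    vm.bind()
    // vm goes out of scope, but the closure it owns still retains it
}

run()
// `deallocated` stays false with the strong capture;
// the `[weak self]` variant lets deinit run and flips it to true.
```

This is the sort of leak that's easy to write and hard to spot in a large codebase, on iOS or on a server.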
Memory fragmentation is another legitimate concern, I guess; as far as I can remember, iOS has no memory compaction. Again, not necessarily an issue for a short-lived user-space application, though it is a larger one than memory leaks, at least at my company. In some hot spots of our app we specifically slow down the reading of some queries to reduce memory fragmentation.
Frankly, Chris Lattner's claim that a GC leads to 2x-3x memory consumption over ARC is unfounded, and sorta shocking coming from someone held in such high esteem. It's something that's continually been shown to be untrue.
It always seems Swift's biggest selling point is that it uses ARC instead of a GC, which is either not a large issue or a case where a GC is actually more beneficial. Other than that, you still have to deal with the toxic "Swifty" community, the terrible tooling situation, the immature libraries and frameworks, etc., etc.
There seem to be so many better solutions for writing server side code. This is all coming from someone who uses the language on a daily basis.
Glad to see Vapor/Perfect have toned down their "we are going to change the world by releasing a new web framework" rhetoric on their sites. When they both first announced them, it was pretty cringe.
I have been using C# a lot recently, and I can't stand it. It feels like it lacks a coherent design. I'm not a huge Java fan either, but at least Java feels coherent to me. I have used a lot of different programming languages and can see the value in their different approaches, but I have a hard time seeing it in C#. Also, C#'s tooling is only good on Windows.
What don't you find coherent about the design of C#? On HN there is far more value in being specific about your particular concerns than in giving no reasons. There is a lot of stuff written on the design of C#.
Those are 3 options with different nuances. They were also added to the language in later versions, but you can use whatever works. I find it hard to see how that makes the language incoherent, as if having multiple options were bad.
What exactly changed with a for loop?
The runtime is different from the c# programming language. Originally when this all started, Windows was the focus and Linux/open-source were not an option, so saying it should be there from v1 doesn't fit when there was only one OS that it was planned to run on.
Thing is, there isn't just one version of Windows, and updating the CLR in place can be lots of fun, to the point that many enterprises postpone it ad infinitum.
This was the original driving idea for .NET Core when the ASP team started it: making it easy for IT to update Windows production servers to newer versions. Porting to other platforms came afterwards.
Can you say how those make it incoherent? Three different ways to define a function, and a breaking change in the language a while back. And only Microsoft's runtime implementation was tied to the OS; Mono came along and provided another.
They are not the same features, delegates and lambdas are very different even if they look similar, and go far beyond just event handling. Also what language is perfect at version 1? It's good that the language keeps evolving and gets better and beginners can learn just fine.
Are you stuck on .NET Framework? If you aren't, then .NET Core already solves all of the maintenance problems by being deployed completely self-contained. It's a similar problem with many other language toolchains, and .NET Framework at least saves work by being a single install that can be automated away with group policies. Enterprises postponing software updates is a problem with the enterprise, not the framework.
I'm sorta in the same boat, but after Google announced Chromebooks will be getting first party support for Linux apps that is tempting me. I occasionally work on Linux, but I would love something more polished like ChromeOS.
Basically, serialization will throw if there is a BigInt present. But, the .toJSON() callout is supported if a particular environment wants to opt-in to supporting it in some manner.
I have run into the issue a few times. I've run into it most frequently when doing binary operations on 64bit numbers. When I first ran into the issue with going over `Number.MAX_SAFE_INTEGER`, it took me a long time to figure out that was the issue.
I can imagine this is an especially important advancement for server side JS. Not something I personally care about tho.