I've used Julia for quite a few years now. Its biggest flaws, in my opinion, are cultural rather than technological. It's been adopted mostly by serious domain experts rather than typical software engineers or more 'normal' users (I don't know, say, junior or senior scientists). This has led to amazing results, but it also has its own detriments.
Some portions of the ecosystem are rock solid, especially the parts where JuliaComputing makes money from consulting (not all of them, but some). Other parts are beds of sand, permanent research projects. The median experience is that someone points you to a package, it doesn't really do what you hoped it would, and you end up adapting it and rolling your own solution to the problem. Maybe you try to make a PR and it gets rejected because of "not invented here"/academia mindsets; either way, you made a fix and your code works for you.
What makes this barrier hard to overcome for adoption is trust, and blind spots. People who aren't experts in a casual work area (maybe computer vision) realize they can't use a tool to do something `basic` and run away to easier ecosystems (R/Python). People who are experts in other areas check a package's credentials, see that an Ivy League lead researcher made it, and assume it's great and usable for a general audience. So you'll get a lot of "there's a package for that", but when you go to use it you might find the package is barren for common and foreseeable use cases in industry (or even hobbies).
This makes Julia best positioned as a research tool, or as a teaching tool. Unfortunately, where Julia actually shines is as a practical tool for accomplishing tasks very quickly and cleanly. So there's this uncomfortable mismatch between what Julia could be and what it's being used for today (yes, Julia can do both; I'm not arguing against that). The focus on getting headlines far outpaces stable, useful work. In fact, very often after a paper gets published using Julia, the package's syntax will completely change, so no one really benefits except the person who made the package.
Interestingly, one person (with some help, of course) fleshed out the majority of the ecosystem's needs for interchange-format support (JSON), database connections, etc. It's not like that person is jobless, spending all their days on it; it was a manageable task for a single smart person to kick off and work hard to accomplish. Why? Because Julia is amazing for quickly developing world-class software. That is also kind of its detriment right now.
Because it's so easy to create these amazing packages, you'll find that a lot of them have become deprecated or are undocumented. Some researcher just needed a one-off really quickly to graduate, and maybe the base language (or other parts of the ecosystem) has changed many times since its release. Furthermore, if you try to revitalize one of these packages you'll sometimes find a rat's nest of brilliance. The code is written very intelligently, but unpacking the design decisions needed to maintain world-class performance can be prickly at best.
One of Julia's strengths is that it's easy and clean to write fast-enough code. One of its downsides is that this attracts people who focus on shaving nanoseconds off a runtime (sometimes needlessly) at the expense of (sometimes) intense code complexity. Performance is important, but stable and correct features/capabilities mean more to the average person. After all, this is why people use, pay for, and hire for Matlab, Python, and R in the first place, right?
Most people don't want to have to figure out which ANOVA package they should use. Or find out the hard way that one of them has a weird bug and be forced to switch. Meanwhile, in R: aov(...).
Do I blame Torch for not using Julia? No. Should they consider using it? Yes, absolutely. Does Julia's cultural issue need attention before we risk Python (or anything else) reinventing a flavor of Julia that's more widely used for stability reasons alone? In my opinion, yes (see Numba, Pyjion, etc.). I still love the language, because technologically it's sound, but there are blemishes. I'd chalk it up to growing pains.
This is a great comment; I've had exactly the same experience. For a simple, concrete example of the fragmentation issue: the canonical JSON parser seems to be JSON3.jl, but there's also JSON.jl, which is slower and has other subtly different behavior. Neither mentions the other in its documentation, and neither is deprecated, but if you search for "json julia" only JSON.jl comes up on the first page of results. Meanwhile, if you ask a question about JSON.jl on Discourse or Slack, they'll probably tell you to use JSON3.jl instead.
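To make the "subtly different behavior" concrete, here's a minimal sketch of how the same document comes back differently from the two packages (assuming both are installed; the JSON string is made up for illustration):

```julia
using JSON, JSON3

s = """{"name": "julia", "fast": true}"""

# JSON.jl parses into base containers: a Dict with String keys.
d = JSON.parse(s)
d["name"]            # "julia"

# JSON3.jl parses into a lazy JSON3.Object with Symbol keys
# and property-style access.
j = JSON3.read(s)
j.name               # "julia"
j[:fast]             # true
```

Code written against one package's return type (say, indexing with `String` keys) won't necessarily work against the other's, which is exactly the kind of subtle incompatibility that makes switching painful.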
(To be fair, Postgres has an extremely similar issue with JSON data types and it's doing fine.)
The state of tabular data formats is similar, but instead of 2 libraries there are 20, and some of them are effectively deprecated. They're not marked as deprecated, though, so the only way to find out you shouldn't be using them is, again, to ask a question about them on Discourse or Slack. You can check the commit history, but sometimes they'll have had minor commits recently, and (to Julia's immense credit) there are some libraries that are actively maintained and work fine but haven't had any commits for 3 years because they don't need them. I assume this will get worse before it gets better as the community tries to decide between wrapping Polars and sticking with DataFrames.jl, hopefully without chopping the baby in half.
I feel like the "not invented here" mindset contributes a lot to that fragmentation. Multiple dispatch makes it easy to write your own methods for types from other Julia libraries, which seems to have produced a community expectation that if you want some functionality a core package doesn't have, you should implement it yourself and release your own package. So we have packages like DataFramesMeta.jl and SplitApplyCombine.jl, not to mention at least 3 different, independent packages that try (unsuccessfully, IMO) to make piping data frames through functions as ergonomic as it is in R's dplyr.
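For anyone unfamiliar with why this is so frictionless in Julia: multiple dispatch lets downstream code add methods to another module's functions for its own types, with no subclassing or forking required. A minimal sketch, using a made-up `Shapes` module standing in for an upstream package:

```julia
# A hypothetical upstream package.
module Shapes
export Circle, area
struct Circle
    r::Float64
end
area(c::Circle) = π * c.r^2
end

using .Shapes

# Downstream code: a new type plus a new method on the upstream
# function, without touching Shapes at all.
struct Square
    side::Float64
end
Shapes.area(sq::Square) = sq.side^2

area(Circle(1.0))   # ≈ 3.14159
area(Square(2.0))   # 4.0
```

That flexibility is great for interoperability, but it also lowers the barrier to shipping yet another standalone package instead of contributing the functionality upstream.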
Despite all of this, I still like the language a lot and enjoy using it, and I'm bullish on its future. Maybe the biggest takeaway is how impactful Guido was in steering Python away from many of these issues. (The people at the helm of Julia development are probably every bit as capable, but by design they're far less, um, dictatorial.)
Kindred spirits, it seems. Yeah, I think there is a serious future for Julia. It's my R&D and prototyping workhorse by preference :).
Again, I completely agree about the sometimes confusing state of the ecosystem. Sometimes I wish a bit of democracy existed, but people are people. I proposed some solutions to that problem a while ago, but that's a story for another year.
Academia does create a very different kind of reward system, one that is often counter to community progress: get there first, publish, obfuscate to thwart competition, abandon for new funding. It tends to reward people most for not giving credit or sharing progress.
Meanwhile, people relying on alternatives to Julia are more like: load in trusty xyz, use it in the trusty way, upgrade when it makes sense, and check the docs (not the code) when unsure of something.
Not to say industry is much better (I keep saying `academia`), but industry projects do tend to appreciate and honor free labor a little more kindly. That, or they close the OSS gate and you get what you get.
Novelty is a driving force, but too much entropy and not playing well together can destroy a meaningful future quickly. It'll work itself out one way or another, but only because the technology is good :D.