Hacker News — borodi's comments

Fun fact: Julia's parser and part of its compiler are implemented in femtolisp, and you can access it via a not-so-secret option in the Julia CLI.


We are slowly replacing this stuff with implementations written in pure Julia.

Currently the femtolisp parser is only used while bootstrapping the core system, so that we can parse the pure-Julia parser; after that we switch over to the Julia parser. The same process is now happening with the femtolisp implementation of the lowering pass.


So Julia will no longer be a LISP? :'(


My nights-and-weekends project for the last month or so has been creating a package that provides a pure s-expression syntax for Julia, lowering directly to Julia's AST. Lately I've been churning through (mostly Opus is doing the actual churning) all the problems of building an automatic transpiler from Julia to this other syntax.

Not ready to share just yet, but I'm nearly at the point where there are no Julia-syntax fallbacks in the entire base/stdlib, and a femtolisp parser for the s-exp syntax can build a complete Julia sysimage from the transpiled files. I've already verified that I can transpile the .jl source of the syntax package into the s-exp syntax, use that transpiler to transpile itself again and load the result into the running s-exp REPL, then run it on the source once more and get byte-identical code. Along the way I'm testing that the entire Julia test suite passes in the sysimage being built.

So, with any luck, soon I'll have an s-exp syntax for Julia that builds from raw transpiled s-exp source, uses the s-exp syntax natively in the REPL, and can transpile and load any Julia code. Fingers crossed.

I'm aware of --lisp but it's not very good imo lol.


Having some components written in Lisp was never the lispy part of Julia. What makes Julia lispy is its semantics and features.


I agree. I was making a tongue-in-cheek comment about how the Julia/Lisp discussion over the years often has someone point to `julia --lisp` as an argument for Julia being a Lisp dialect.


    $ julia --lisp
    ;  _
    ; |_ _ _ |_ _ |  . _ _
    ; | (-||||_(_)|__|_)|_)
    ;-------------------|-----    ------------------------------    -----------------------
    > (+ 1 2)
    3


Chaos mode is an option when invoking rr (`rr record --chaos`) that can expose some concurrency issues. Basically it frequently switches which thread is executing, to simulate multiple cores running concurrently. It has found some race conditions for me, but it's of course limited.


Unfortunately that only works for coarse-grained races, and not, say, one instruction on a thread interleaving with an instruction on another thread without proper synchronization. `-fsanitize=thread` probably works for that, though (and you could then combine said sanitizer with rr, probably to some effect).


One option would be to combine chaos mode with a dynamic race detector to try to focus chaos mode on specific fine-grained races. Someone should try that as a research project. Not really the same thing as rr + TSAN.

There's still the fundamental limitation that rr won't help you with weak memory orderings.


I haven't tried TSAN with rr, but MSan and ASan work quite well with it (it's quite slow when doing this). Seeing the sanitizer trigger and then tracing back what caused it is very useful.


Yeah, the reason it only works for these coarser race conditions is that rr only has one thread executing at a time. Chaos mode randomizes the duration of time allotted to each thread before it is preempted. (This may be out of date; I believe I read it in the extended technical report from 2017: https://arxiv.org/pdf/1705.05937)


There is; it's called count_ones. I wouldn't be surprised if LLVM could optimize some of these loops into a popcnt, but I'm sure it would be brittle.


It was a bit more pervasive than that. The issue is that flushing subnormals (values very close to 0) to 0 is controlled by a register, so if a library built with the fast-math flags gets loaded, it sets that register, causing the whole process to flush its subnormals. See https://github.com/llvm/llvm-project/issues/57589


btw, there has been a pretty nice effort to reimplement the tidyverse in Julia: https://github.com/TidierOrg/Tidier.jl. It seems quite nice to work with, if you were missing that from R at least.


LLVM's API is the C++ one. The C one, while more stable, doesn't support everything. Keeping up with LLVM is annoying, but it's not a source of bugs or anything of the sort. (PS: the C API isn't actually stable either, because if the C++ code it calls is removed, it just gets removed from the C API too.)

I say this as one of the devs who usually does the work of keeping up with the latest LLVM.


Yeah, well, I'm just guessing as an outsider, so I guess I'll have to fold.


Julia follows semver, so code written for 1.0 should work on the latest release (packages notwithstanding).


It tries to show you the type information on the source itself, instead of on the IR.


I do believe this is an issue of not having explicit dependencies. Julia takes the approach of building and shipping everything for every OS, which means Pkg (the package manager) knows about binary dependencies as well, making things more reproducible in-language.


Linux distros often do things to force packages to declare all their dependencies: Nix and Guix use unique prefixes and sandboxed builds, openSUSE builds all its packages in blank-slate VMs that only install what is declared, Fedora's standard tooling runs builds in minimal chroot environments, etc.

I'm not aware of any language ecosystem package managers taking similar measures to ensure that dependency declarations in their packages are complete.


Julia does have really nice GPU support, being able to compile Julia code directly to CUDA, ROCm, Metal, or other accelerators. (Being GPU code, it's limited to a subset of the main language.)

