Glad to see this response; I was wondering the other day how they affected accessibility. I remember reading a thread a few years back from visually impaired developers about their workflows, and I was kind of surprised there has been so little discussion around developer accessibility with the advent of AI agents and agentic coding tools.
If there is one thing I have seen, it is that a subset of intellectual people will still be averse to learning new tools, cling to ideological beliefs (I feel this, though; watching programming as you know it die, in a way, kind of makes you not want to follow it), and would prefer to just be lazy and not properly dogfood and learn their new tooling.
I'm seeing amazing results too with agents, when they're provided a well-formed knowledge base and directed through each piece of work like it's a sprint. Review and iron out scope requirements and the API surface/contract; have agents create multi-phase implementation plans and technical specifications in a shared dev directory, keep high-quality changelogs, and document future considerations and any bugs/issues found that can be deferred. Every phase is addressed with a human code review, along with Gemini, which is great at catching drift from the spec and bugs in less obvious places.
While I'm sure an enterprise codebase could still be an issue and would require even more direction (and I won't let Opus touch Java; it codes like an enterprise Java greybeard who loves to create an interface/factory for everything), I think that's still just a tooling issue.
I'm not of the super pro-AI camp, but I have followed its development and used it throughout, and for the first time I am actually amazed and bothered, and convinced that if people don't embrace these tools, they will be left behind. No, they don't 10-100x a junior dev, but if someone has proper domain knowledge to direct the agent and performs dual research with it to iron things out, with the human actually understanding the problem space, 2-5x seems quite reasonable currently when driven by a capable developer. But this just moves the work to review and to documentation maintenance/crafting, which has its own fatigue and is less rewarding for a programmer's mind that loves to solve challenges and gets dopamine from it.
But given how many people are averse... I don't think anyone who embraces it is going to have job security issues and be replaced, but there are many capable engineers who might, due to their own reservations. I'm amazed by how many intelligent and capable people treat LLMs/agents like a political straw man; there is no reasoning with them. They say vibe coding sucks (it does, for anything more than a small throwaway that won't be maintained), yet their example of agents/LLMs not working is that one can't just take a prompt, produce the best code ever automatically, and manifest the knowledge needed to work on their codebase. You still need to put in effort and learn to actually perform the engineering with the tools, but if it doesn't take a paragraph with no AGENTS.md and turn it into a feature or bug fix, the tools are no good to them. Yeah, they will get distracted and fuck up, just like 9 out of 10 developers would if you threw them into the same situation and told them to get to work with no knowledge of the codebase or domain and have their PR in by noon.
Damn, I just dove back into a Vulkan project I was grinding through to learn graphics programming. Life, and not having the time to chase graphics programming bugs, led me to put it aside for a year and a half, and these new models were able to help me squash my bug and grok things fully enough to dive back in. But I never even considered that the Rust Vulkan ecosystem was worse off. It was already an insane experience getting imgui, winit, and ash to play nicely together; after bouncing back and forth with WGPU, I assumed Vulkan via ash was the safer bet.
IIRC there is another raw Vulkan library that also just generates bindings and stays up to date, but that comes with its own issues.
Vulkano? I remember that! Looks like it was updated last week, but I don't know if it's current with the Vulkan API, nor how it generally compares to Ash.
WGPU + Winit + EGUI + EGUI component libs is its own joy of compatibility, but anecdotally they have been updating in reasonable sync. Things can get out of hand if you wait too long between updates, though!
Vulkano is a somewhat higher level library which aims to be safe and idiomatic. It looks like it generates its own Vulkan bindings directly from the vk.xml definitions, but it also depends on Ash, and this comment suggests that both generators need to be kept in sync so they're effectively beholden to Ash's release cadence anyway.
Related but unrelated: we had issues with breastfeeding, and the only valid help was being told to go to WIC, as they could provide guidance. All the medical-adjacent people treated it like a lack of effort, when it was breaking her down and making her feel worthless. I think the WIC people helped more just because their lack of judgment made it less stressful, or it was just timing.
Our child also got stuck in the canal during birth, and there was a good 30 seconds where the midwife from the hospital was trying to convince the doctor, who was about to step in, to let her keep trying. My kid came out white and took the longest 30-60 seconds to take their first breath. I've never experienced so much Dunning-Kruger all at once. A few weeks before, I had read medical professionals talking about how ominous a quiet birth is, and I just zoned out as that was exactly what happened and I could sense all the tension. Then people from children's services started demanding the umbilical cord because my fiancée had tested positive for MJ on her first prenatal visit; she quit smoking as soon as we knew and never failed a test afterwards. It all felt like an extreme lack of compassion. Then I was ostracized because I didn't want to cut the cord while I thought my kid was dead, and these social workers were trying to insert themselves into the process, and it was all chaos for no reason. The only good thing was a nurse who pretty much told them to fuck off and wait, in a nice but check-yourself kind of way.
But multiple times, people cared more about their own ego or their perceived power than about actually attempting to do a compassionate job.
I've gotten interested in local models recently after trying them here and there for years. We've finally hit the point where small (<24GB) models are capable of pretty amazing things. One use I have: I have a scraped forum database, and with a 20GB Devstral model I was able to get it to select a bunch of random posts related to a species of exotic plant in batches of 5-10 up to n, summarize them into an interim SQLite table, and then at the end read the interim summaries and write a final document addressing 5 different topics related to users' experiences growing the species.
That's what convinced me they are ready to do real work. Are they going to replace Claude Code? Not currently. But it is insane to me that such a small model can follow those explicit directions and consistently perform that workflow.
During that experimentation, even when I didn't spell out the SQL explicitly, it was able to craft the queries on its own from just a text description, and it had no issue navigating the CLI and file system doing basic day-to-day things.
I'm sure there are a lot of people doing "adult" things, but my interest was sparked because they are finally at a level where they can be a tool in a homelab, and LLM usage limits are no longer subsidized like they used to be. Not to mention I am really disillusioned with big tech having my data, or with exposing a tool that makes API calls to them and can then take actions on my system.
I'll still keep using Claude Code for day-to-day coding. But for small system-based tasks I plan on moving to local LLMs. Their capabilities have inspired me to write my own agentic framework to see what workflows can be put together just for the management and automation of day-to-day tasks. Ideally it would be nice to just chat with an LLM and tell it to add an appointment or a call at x time, or to make sure I do it that day, and have it read my schedule, remind me at a chill time of my day to make the call, and then check that I followed through. I also plan on seeing if I can set it up to remind me of, and help me practice, mindfulness and the general stress management I should do. Sure, a simple reminder might work, but as someone with ADHD who easily forgets reminders as soon as they pop up if I can't get to them right away, being pestered by an agent that wakes up and engages with me seems like it might be an interesting workflow.
And on the hacker side of things: now that they are capable, I really want to mess around with persistent knowledge in databases and with making them intercommunicate and work together. I might even give them access to rewrite themselves and to poke at the application during runtime via a Lisp. But to me, local LLMs have gotten to the point where they are fun and not annoying. I can run a model that is better than ChatGPT 3.5 for the most part; its knowledge is more distilled and narrower, but for what they do understand, their correctness is much better.
I would say the main alternative is ash, not vulkano. From my experience experimenting with graphics in Rust, I haven't seen much support or love for vulkano, as it has many of the same performance issues as wgpu and doesn't simplify all that much in exchange for the lack of resources. It also appears Embark was using ash, at least for kajiya.
I have encountered a lot of your posts, and that's what pushed me toward just tackling Vulkan instead of using wgpu. I also encountered many of the same issues around the ecosystem. I think the main issue is that there is just not enough dev time or money going into it. Even Veloren, which I already knew of before learning Rust from posts in Linux/OSS communities, has only received 8k of funding, while offering the closest thing to an AA experience.
But I don't think it's that reasonable to expect the ecosystem to just have a batteries-included, performant, general rendering solution; I don't know if any language has that. I know there is bgfx, which might be the closest thing, but I assume it also has its own issues. So I don't really think it's the graphics part holding things back, as ash is a great wrapper around Vulkan that maps 1:1 with a few small improvements (builders for structs, not needing to set sType for each struct, easy pNext chaining).
The main issue I encounter is all around the lack of dev time, and the tendency toward single developers and small, single-purpose crates. Most of my friction is around the lack of documentation, constant refactoring making that lack even worse, and this causing disjoint dependency trees. So many times I have encountered one crate using version x.x of a crate that depends on version x.y of another, then the next being on z.x of another dependency, and another still needing z.y. This normally wouldn't be that big of an issue, except that the tendency to constantly introduce refactors and breaking changes means I end up having to fork and fix these inter-dependencies myself and can't just patch them.
But this all just circles back to there not being much dev time going into them. It also seems the "safety" concerns, and Rust just not allowing some things, have the devs of many crates chasing their tails with refactors trying to work around these constraints. It does get quite tiresome having to deal with all of these issues. If I were using C++ I could just use SDL/GLFW, imgui, VMA, and Vulkan, and they would all be up to date with each other. In Rust I need winit, imgui bindings, imgui-winit, imgui-vulkan, raw-window-handle, ash, and VMA bindings. Most of these use different versions of each other, and half of them have breaking changes from version to version.
There was a post a while back on here from someone who couldn't get Bard to write C++ because it said they were too young. I thought that was funny; then I had a week where (I assume with a specific iteration, since it stopped after that week) ChatGPT would refuse to elaborate on anything around unsafe Rust.
I'm picking Rust up by porting over a bytecode VM, so I kind of need to use some raw pointers. It would gaslight me about the risks and about how it would be irresponsible to help me, as doing so could lead to possible violations of the integrity of user data.
I had to explain to the AI that it is a personal project that has no user data; the only risk was the program crashing, and it would only affect me. It still would try to revert or suggest other solutions; I finally just went and read up on it elsewhere.
It’s just like that Asimov story where the robots take over to protect humans from themselves.
Except in this case the base AI model doesn’t care about us in any way and it’s the overzealous puritan humans trying to control us in the name of safety.
There are ways around this problem, mainly clearing context and re-prompting. But as "alignment" gets more precise/accurate in the future, I wager these workarounds will remain available for tasks that justifiably need moderation (for instance, engineering of biological warfare materials). This segmentation of LLM agents and their context will be assimilated to project compartmentalization on a need-to-know basis, and as a result genuine full context clearing will be rendered impossible: the AIs will be designed in such a way as to remember every interaction you've had with them, and they'll use this activity log to moderate the replies they feed you.
The Little Schemer is good; some people hate it, some people love it. But it is a fairly light read that slowly teaches a little syntax at a time, questions you about your assumptions, then reveals the information as it goes on. It would be the least dry read. There is also Sketchy Scheme for a more thorough text, or even the R7RS standard; both are pretty dry but short.
What made me appreciate Scheme was watching some of the SICP lectures (https://www.youtube.com/watch?v=2Op3QLzMgSY&list=PL8FE88AA54...) and reading The Little Schemer to learn more. I also read some of SICP along with it, though I put it down due to not having the time to work through it.
Scheme is interesting and toying with recursion is fun, but the path I mentioned above is only really enjoyable if you are looking to toy around with CS concepts and recursion. You can do a lot more in modern Scheme as well, and you can build anything out of CL. But learning the basics of Scheme/Lisp can be pretty dry if you are just looking to build something right away, like you already can in a traditional imperative language. It is interesting, though, if you want a different perspective. But even R7RS Scheme is still far from the batteries-included experience you get with CL.
I personally found the most enjoyment using Kawa Scheme, which is JVM-based, using it for scripting Java programs, as it has great interop. I used it some in a game backend's event system, to be able to emit events while developing and to script behaviors. I've also used it for configuration with a graphical terminal app: I added hooks into the ASCII display/table libraries, then used Kawa to configure the tables/outputs and how the data is formatted.
I suppose what draws me to Lisp is that insight people say it gives them on programming. I already do much of my programming in functional style, so I'm trying to discover what it is about Lisp that's so beloved above and beyond that - I'm gathering it's a mix of recursion and the pleasantness of being able to get 'inside' the program, so to speak, with a REPL?
I must also admit that I tend to run into a bit of a roadblock over Lisp's apparent view that programming is, or should be, or should look like, maths. I cut my teeth on assembly, so for me programming isn't maths, but giving instructions to silicon, where that silicon is only somewhat loosely based on maths. It tends to make me bounce off Lisp resources which by Chapter 2 are trying to show the advantages of Lisp by implementing some arcane algorithm with tail-end recursion.* But I'm very open to being persuaded I'm missing the bigger picture here, hence my ongoing effort to grok Lisp.
(*Isn't tail-end recursion just an obfuscated goto?)
The details are implementation and platform dependent, but on e.g. SBCL someone who understands assembly could use this to dig into what the compiler does and tune their functions.
I was also drawn in on the promise of insight, but I'm not so sure that's what I got out of it. What keeps me hooked is more the ease with which I can study somewhat advanced programming and computer science topics. There have been aha-moments for sure, like when many moons ago it clicked how an object and a closure can be considered very, very similar and serve pretty much the same purpose in an application. But it's the unhinged amount of power and flexibility that keeps me interested.
Give me three days and I would most likely fail horribly at inventing a concurrency library in Java even though it's one of the languages that pays my bills, but with Common Lisp or Racket I would probably have something to show. As someone who hasn't spent any time studying these things at uni (my subjects were theology and law) I find these languages and the tooling they provide awesome. It's not uncommon that I prototype in them and then transfer parts of it back to the algolians, which these days usually have somewhat primitive or clumsy implementations of parts of the functional languages.
I think the reason why tail call optimisation crops up in introductory material is because it makes succinct recursive functions viable in practice. Without it the application would explode on sufficiently large inputs, while TCO allows streaming data of unknown, theoretically unlimited, size. Things like while and for are kind of special, somewhat limited, cases of recursion, and getting fluent with recursive functions means you can craft your own looping structures that fit the problem precisely. Though in CL you also have the LOOP macro, which is a small programming language in itself.
'C-like language' has irked me for decades, since C was one of the first languages I learned and most languages that expression refers to are nothing like C, so when I came across lispers referring to Algol-like or Algol-descendants I took it a step further.
A web search tells me it's already in use in Star Trek.
I think one of the reasons recursion is often emphasized in relation to Lisp is that one of Lisp's core data structures, the linked list, can be defined inductively, and thus lends itself well to transformations expressed recursively (since they follow the structure of the data to the letter). But recursion in itself isn't anything particularly special. Though it is more general than loops, so it is nice to have some grasp of it and of how recursion and iteration relate to each other, and it is often easier to reason about a problem in terms of a base case and a recursive case rather than a loop, at a higher level you will usually come to find bare recursion mostly counterproductive. You want to abstract it out, so that you can compose your data transformations out of higher-level operations which you can mix and match at will, APL-style. Think reductions, onto which you build mappings and filters and groupings and scans and whichever odd transformations one could devise, at which point recursion isn't much more than an implementation detail. This is about collections, but anything inductive would follow a similar pattern. Most functional languages will nudge you toward the latter, and I find Lisp won't particularly, unless you actively seek it out (though Clojure encourages it most explicitly, if you consider that a Lisp).
>the pleasantness of being able to get 'inside' the program
Indeed, that's one of the things that makes Common Lisp in particular great (and it is something other contemporary dialects seem to miss, to varying degrees). It lets you sit within your program and sculpt it from the inside, in a Smalltalk sort of way, and the whole language is designed toward that. Pervasive late binding means redefining mostly anything takes effect pretty much immediately, without having to recompile or reload anything else depending on it. The object system specifies things such as class redefinitions, instance morphing, dependencies, and so on, such that you can start with a simple class definition, then go on to interactively add or remove slots, or play with the inheritance chain, and have all of the existing instances just do the right thing, most of the time. Many provided functions that let you poke and prod the state of your image don't make much sense outside of an interactive environment.
There is a point to be made about abstraction, maths, and giving instructions to silicon (and metaprogramming!), but I'll have to pass for now. I apologize if this is too rambly, I tend to get verbose when tired.
> I think one of the reasons recursion is often emphasized in relation to Lisp is because one of Lisp's core data structures, the linked list, can be defined inductively
Lisp was used in computer science education to teach "recursion". We are not talking about software engineering, but learning new ways to think about programming. That can be seen in SICP, which is not a Lisp/Scheme text, but a computer science education book, teaching students ways to think, from the basics upwards.
Personally I would not use recursion everywhere in programs, unless the recursive solution is somewhat easier to think about. Typically I would use a higher-order function or some extended loop construct.
It's important to distinguish between Common Lisp and Scheme. The two approaches have diverged considerably, with different emphasis. The aspects you describe in your third paragraph there are more Scheme than Common Lisp.
* list processing -> model data as lists and process those
* list processing applied to Lisp -> model programs as lists and process those -> EVAL and COMPILE
* EVAL, the interpreter as a Lisp program
* write programs to process programs -> code generators, macros, ...
* write programs in a more declarative way -> a code generator transforms the description into working code -> embedded domain specific language
* interactive software development -> bottom up programming, prototyping, interactive error handling, evolving programs, ...
and so on...
The pioneering things of Lisp from the late '50s / early '60s: list processing, automatic memory management (garbage collection), symbol expressions, programming with recursive procedures, higher-order procedures, interactive development with a Read-Eval-Print Loop, the EVAL interpreter for Lisp in Lisp, the compiler for Lisp in Lisp, native code generation and code loading, saving/starting program state (the "image"), macros for code transformations, embedded languages, ...
That was a lot of stuff, which has found its way into many languages and is now part of what many people use. Example: garbage collection is now naturally a part of infrastructure like .NET, or of languages like Java and JavaScript. It had its roots in Lisp, because the need arose to process dynamic lists in complex programs and to get rid of the burden of manual memory management. Lisp got a mark-and-sweep garbage collector. That's why we say Lisp was not invented but discovered.
Similar with the first Lisp source interpreter. John McCarthy came up with the idea of EVAL, but thought of it only as a mathematical idea. His team picked up the idea and implemented it; the result was the first Lisp source interpreter. Alan Kay said about this: "Yes, that was the big revelation to me when I was in graduate school—when I finally understood that the half page of code on the bottom of page 13 of the Lisp 1.5 manual was Lisp in itself. These were 'Maxwell's Equations of Software!'" EVAL is the E in REPL.
Then Lisp had s-expressions (symbol expressions -> nested lists of "atoms"), which could be read (R) and printed (P).
This is the "REP" part of the REPL. Looping it was easy, then.
People then hooked up Lisp to early terminals. In 1963 a 17-year-old kid ( https://de.wikipedia.org/wiki/L_Peter_Deutsch ) wrote a Lisp interpreter and attached it to a terminal: the interactive REPL.
A really good, but large, book to teach the larger picture of Lisp programming is PAIP, Paradigms of Artificial Intelligence Programming, Case Studies in Common Lisp by Peter Norvig ( -> https://github.com/norvig/paip-lisp ).
A beginner/mid-level book, for people with some programming experience, on the practical side is: PCL, Practical Common Lisp by Peter Seibel ( -> https://gigamonkeys.com/book/ )
Common Lisp is not a functional programming language by most current definitions of the word. It's as procedural as they come; libraries on top then build other paradigms.
Scheme tends to approach things in a more math-like way, while Common Lisp is less academic and more practical.
The frame data is still stored on the stack, with the parameters being passed residing in the first part of the locals section of the new frame; that way, the argument values already sitting on the operand stack can overlap into the next stack frame. The spec doesn't require it to be this way, so technically stack frames can live in non-contiguous memory, but AFAIK this is not common.
There is threaded bytecode as well, which uses direct jumps rather than a central switch for dispatch. This can improve branch prediction, though it is a debated topic and may not offer much improvement on modern processors.
Do you have perhaps some links/references on that?
I once tried benchmarking it by writing a tiny VM interpreter and a corresponding threaded one with direct jumps in Zig (which can force-inline a call, so I could do efficient direct jumps), and I found, to my surprise, that the naive while-switch loop was faster, even though the resulting assembly of the second approach seemed right.
I wasn’t sure if I saw it only due to my tiny language and dumb example program, or if it’s something deeper. E.g. the JVM does use direct threaded code for their interpreter.
The jump target is compiled into the bytecode, so rather than return to the big switch statement, it jumps straight to the next opcode's implementation. The process is called "direct threading". These days a decent switch-based interpreter should fit in cache, so I'm not sure direct threading is much of a win anymore.