One interesting detail: In previous years, Joscha Bach gave a talk on AI, consciousness, and related topics (see e.g. [0]). A similar talk was planned for this year as well, but after emails between him and Epstein were made public (see his comment on this in [1]), his talk was canceled. Instead, there appears to have been an event that critically addressed the situation [2]. Unfortunately it was not recorded. Did anyone attend? A discussion between Joscha and his critics would have been really interesting.
Well, that discussion event is not an open discourse about the situation...
He quoted what he believed was scientific evidence in a private conversation that later became public; his comments on fascism being efficient are clearly anti-fascist, and he believed he was describing a gender stereotype he had observed. No matter whether the claims were true, it should be possible to discuss such things (especially those you think are facts) in private without getting canceled, even if they would play into the hands of racism or sexism if made as public statements.
I found his apology a bit weak, but I also don't see his offense, even though the messages, now public, read as offensive and possibly harmful.
If you are going to defend someone you have no or only a very distant association with, as you stated in another reply, then maybe, just maybe, read what everyone else is talking about; in this chain, that would be his email exchange with Epstein. Thanks for making ME read that pseudo-intellectual shit again so YOU don't have to.
"too many people, so many mass executions of the elderly and infirm make sense is the fundamental fact that everyone dies at some time .make it imporrisbole to ask so why not earilier. if the brain discards unused neurons, why shold socieity keep their equivalent."
"too many people, so many mass executions of the elderly and infirm make sense is the fundamental fact that everyone dies at some time .make it imporrisbole to ask so why not earilier. if the brain discards unused neurons, why shold socieity keep their equivalent
The radical idea of treating individuals in a society as cells and the society itself as a well-organized organism is fascism, or course. Probably the most efficient and rationally stringent way of governance, if someone could pull it off in a sustainable way; and if it is aggressive and expansive, its efficiency makes it a virus that everybody will want to stomp out. Fascism makes romantic doo-gooders like me very uncomfortable"
He dares to explore radical taboo ideas and concludes that it would be fascism, which he is not comfortable with.
So... I see nothing where he is intolerant of anything. But you seem intolerant of people daring to explore certain thoughts in general? Even if they reach the conclusion that this is not the way to go.
(And maybe it was even an attempt to dissuade the other person from those concepts.)
That's why I didn't want to quote anything: it just deteriorates into a debate club about hypotheticals.
To extend your "full" quote: "The radical idea of treating individuals in a society as cells and the society itself as a well-Organized organism is fascism, or course. Probably the most efficient and rationally stringent way of governance, if someone could pull it off in a sustainable way… I rather like the treatment Fascism gets in the Amazon Series ‘The Man in the High Castle’, which explores what would have happened if the Germans and Japanese had won the war: A society that tries to function as a brutal and ruthlessly efficient machine, eliminating all social and evolutionary slack. It is very dark, but not a flat caricature of pointless evil for its own sake."
Let's stay away from killing people; how about the misogyny?:
"You cannot learn what does not attract your attention. Women tend to find abstract systems, conflicts and mechanisms intrinsically boring."
"Let's stay away from killing people how about the misogyny?:
"You cannot learn what does not attract your attention. Women tend to find abstract systems, conflicts and mechanisms intrinsically boring.""
I am not an expert, but that is not misogyny in my book. I'm not sure about the part about conflicts, but in general it matches my observation as well: women tend to find abstract things boring. That does not mean ALL women are like this, or that ALL men like abstract things, but on average that is the trend. And when you compare the ratio of men to women who go into abstract scientific fields, it seems backed up by real-world data (even when accounting for existing sexism in the field).
In general, Joscha is indeed a weird guy; the main thing I remember from him as a guest on an Alternativlos podcast episode is:
He was always excited for AGI, so that he would finally have someone smart enough to talk to.
Well, I don't subscribe to that, nor to his openness to certain other positions, but he is definitely not a fascist. And I believe I am sort of an expert here, as I have exposed and confronted quite a few of those who tried to infiltrate alternative groups I am part of. (Also, I live in Saxony. I know crypto-Nazi talk.) So yes, I do see some signs that are worth debating, while giving him a chance to clarify and reconsider.
But canceling and blocking him will just push him to that side for good. And that would be a shame.
To add some context and to spare readers who, like me, know nothing about Joscha Bach and only a little about Epstein from having to go through all the linked material:
The allegations do not appear to involve abuse or moral complicity with Epstein. Instead, they seem to focus on emails Bach exchanged with Epstein concerning IQ, race, and possibly sex. Bach denies these allegations of racism and sexism.
That is at least how I understand the material based on the provided links.
"The main part of the workshop consists of a moderated deliberative discussion with the audience."
I think it is a bit ironic that Joscha got canceled because of a private conversation, while the debate about it is not recorded, so in effect people are more free to express their opinions without getting canceled.
Disappointing to me. Joscha seems to have points of view I find debatable (I don't know much about him), but canceling him so as not to have to stand his opinions? That is very much against the hacker spirit to me, and he is a smart guy who knows a lot about AI.
In my day job I often run the tech for events, nearly once a week. In my experience, known recording/publication tends to make discussions worse, not better, than closed-room discussions, especially if the topic is controversial. I'd love it if that weren't the case, but that is not what I have observed.
That is because with published recordings it often becomes purely performative: people aren't actually interested in honestly engaging with each other's thoughts, but instead (ab)use the recording as a stage to make a public statement. It essentially becomes a thinly veiled PR battle with multiple actors trying to control the narrative, and the ones who prepared well (so not the general audience) tend to dominate the discussion. In my experience that is the opposite of good discourse.
In the closed-room case, the audience is only the people who are already present, and they are part of the discussion. If everything goes well, a feeling of "we need to resolve this issue" is established, with a collective mood emerging in the room. There is no guarantee that this happens or that there is a result, but in my experience (with well over 400 events) the tendency speaks for the closed room, especially with touchy subjects.
"the tendency speaks for the closed room, especially with touchy subjects."
I do agree with that.
I just would have preferred a closed-room debate with him invited to address those issues, not the cancel mentality and then talking about him in a closed room.
"All of the people I know who were friends with this sociopathic child-trafficking pedophile told me he was reformed now" is certainly something to put out there.
I'm a fan of antlr-ng. It's a solid upgrade if you're already using antlr. In my experience, they're fully compatible. antlr's ALL(*) parsing is relatively powerful for a parser generator, but it lacks support for incremental parsing. antlr-ng might improve things enough to be usable interactively in smaller settings, even if you need to reparse the document each time. It also comes with useful extensions like https://github.com/mike-lischke/antlr4-c3, which generates syntactic and semantic completions directly from the grammar.
Is there a general solution to this problem? I assume you can only start buffering tokens once you see a construct for which there are continuations that, once completed, would lead to the previous text being rendered differently. Of course you don't want to keep buffering for too long, since that would defeat the purpose of streaming. And you never know if the potential construct will actually be completed. Also, the solution probably has to be more context sensitive: within code blocks, for example, you never want to render links for []() constructs.
EDIT: One library I found is https://github.com/thetarnav/streaming-markdown which seems to combine incremental parsing with optimistic rendering; that works well enough in practice, I guess.
There are a few things in our implementation that make a more general solution unnecessary. We only need the output to support a limited set of markdown: typically text, bullet points, and links. So we don't need code blocks (yet).
However, the second thing (not mentioned in the post) is that we are not rendering the markdown to HTML on the server, so []() markdown is sent to the client as []() markdown, not converted into <a href=...>. So even if a []() type link exists in a code block, that text will still be sent to the client as []() text, only sent in a single chunk and perhaps with the link URL replaced. The client has its own library to render the markdown to HTML in React.
Also, the answers are typically short, so even if OpenAI outputs some malformed markdown links, the worst case is that we end up buffering more than we need to and the user experiences a pause, after which the entire response is visible at once (the last step is to flush any buffered text to the client).
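To make that concrete, here is a minimal sketch of the buffering idea in Python (not our actual code; the function name, the regex, and the example chunks are made up for illustration). Everything from an unmatched "[" is held back until the link completes or the stream ends:

    import re

    LINK = re.compile(r"\[[^\]]*\]\([^)]*\)")   # a complete [text](url) link

    def stream_with_link_buffering(chunks):
        # Yield text as it arrives, but hold back everything from an
        # unmatched "[" until the link completes or the stream ends.
        buf = ""
        for chunk in chunks:
            buf += chunk
            start = buf.rfind("[")
            if start == -1 or LINK.search(buf, start):
                # No pending link, or the pending link just completed
                # (this is also where a URL rewrite could happen).
                yield buf
                buf = ""
            else:
                if start > 0:
                    yield buf[:start]   # flush everything before the pending "["
                buf = buf[start:]
        if buf:
            yield buf   # end of stream: flush whatever is left, complete or not

    # "Hello " goes out immediately; the link is held back until its ")" arrives.
    chunks = ["Hello [exa", "mple](https://e", "xample.com) bye"]
    print(list(stream_with_link_buffering(chunks)))

Buffering only from the last unmatched "[" keeps the held-back window small, which matches the worst case described above: a short pause, then everything at once.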
Yes. You can define a regex matching what you want, and every regex can be compiled into a state machine (https://en.wikipedia.org/wiki/Nondeterministic_finite_automa...). Then at each character you make a step in your state machine, and you pause the output while the regex has started matching but has not yet finished.
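For the []() links discussed above, a hand-rolled version of such a character-level state machine could look roughly like this (a sketch only; the state names are made up, and nesting and escaping are ignored):

    def stream_with_automaton(chars):
        # Emit characters immediately, except while the automaton is part-way
        # through a potential [text](url) link, which is held back until it
        # either completes or turns out not to be a link.
        TEXT, BRACKET, AFTER_BRACKET, PAREN = range(4)
        state, held = TEXT, ""
        for ch in chars:
            if state == TEXT:
                if ch == "[":
                    state, held = BRACKET, ch   # possible link: start holding output
                else:
                    yield ch
            elif state == BRACKET:              # inside [link text]
                held += ch
                if ch == "]":
                    state = AFTER_BRACKET
            elif state == AFTER_BRACKET:        # a "(" must follow the "]"
                if ch == "(":
                    held += ch
                    state = PAREN
                else:                           # not a link after all: release everything
                    yield held + ch
                    state, held = TEXT, ""
            else:                               # PAREN: inside (url)
                held += ch
                if ch == ")":                   # full match: release the whole link at once
                    yield held
                    state, held = TEXT, ""
        if held:
            yield held                          # stream ended mid-match: flush as-is

    print("".join(stream_with_automaton("see [docs](https://example.com) here")))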
That was my association as well! Dune even uses similar vocabulary. For example, someone mentioned "pranayama" in this thread, which sounds a lot like Dune's "Prana-bindu". It really makes me wonder what Frank Herbert's own experience with all of this was.
Aren't LLMs much more limited in the number of output tokens than input tokens? For example, GPT-4o seems to support only up to 16K output tokens. I'm not completely sure what the reason is, but I wonder how that interacts with Chain-of-Thought reasoning.
There's no fundamental difference between input and output tokens technically.
The internal model state is exactly the same after evaluating a given sequence of tokens, no matter which of them were produced by the prompter and which by the model.
The 16k output token limit is just an arbitrary limit of the ChatGPT interface.
Maybe you're already aware of it, but there is difftastic [0], a syntax-aware diff tool that can also be used with git. Its understanding of syntax is based on tree-sitter, so it works for most languages. Although I haven't tried, I think most IDEs should also be able to use it.
This takes n^2 time and memory in the naive implementation.
But clearly, the memory could be reduced to O(n) with the right "fusing" of the operations.
KANs are similar. This is the forward code for KANs:
x = einsum("bi,oik->boik", x, w1) + b1
x = einsum("boik,oik->bo", relu(x), w2) + b2
This is the forward code for an Expansion / Inverse Bottleneck MLP:
x = einsum("bi,iok->bok", x, w1) + b1
x = einsum("bok,okp->bp", relu(x), w2) + b2
Both take nd^2 time, but Inverse Bottleneck only takes nd memory.
For KANs to match the memory usage, the two einsums must be fused.
Which is to say, a big part of the difference is lack of optimization.
Personally, I think this is fine in context: it is a new formulation, and the optimization is difficult and non-obvious. It shouldn't be expected that every researcher can recognize and solve all optimization problems.
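To make the comparison concrete, here is a self-contained version of the two snippets above (a sketch assuming PyTorch; the sizes b, d, k and the weight names are illustrative, not taken from any particular KAN implementation):

    import torch
    from torch import einsum, relu

    b, d, k = 32, 64, 8                          # batch, width, expansion factor
    x = torch.randn(b, d)

    # KAN-style forward: the intermediate activation h is [b, o, i, k],
    # i.e. O(b * d^2 * k) memory unless the two einsums are fused.
    w1 = torch.randn(d, d, k); b1 = torch.randn(d, d, k)
    w2 = torch.randn(d, d, k); b2 = torch.randn(d)
    h = einsum("bi,oik->boik", x, w1) + b1
    y_kan = einsum("boik,oik->bo", relu(h), w2) + b2

    # Expansion / inverse-bottleneck MLP: the intermediate g is only [b, o, k],
    # i.e. O(b * d * k) memory, for the same O(b * d^2 * k) compute.
    v1 = torch.randn(d, d, k); c1 = torch.randn(d, k)
    v2 = torch.randn(d, k, d); c2 = torch.randn(d)
    g = einsum("bi,iok->bok", x, v1) + c1
    y_mlp = einsum("bok,okp->bp", relu(g), v2) + c2

    print(y_kan.shape, y_mlp.shape)              # both torch.Size([32, 64])

The KAN path materializes an intermediate that is d times larger than the inverse bottleneck's; fusing its two einsums would avoid ever storing h, which is exactly the missing optimization mentioned above.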
Mindless browsing is one of the lowest-effort activities, but the influx of information is highly rewarding for the brain; that's why it's so addictive. Programming and OS installation are more work, but there is direct progress. Filing taxes is just work, but again it's a very direct way to feel productive. All of these activities are immediately rewarding.
Reading, on the other hand, requires a lot of concentration without much immediate reward. And I think that effort-to-reward ratio is highly subjective for most people.
Thank you! I have noticed that I read the last chapters with increased focus and at times rush through them (while trying not to skim). Finishing the book must be perceived as a reward by the brain, unlike the completion of a page, a chapter, or even a section.
Let's say you want to show a modal, which fetches some data and modifies the state. Based on this, new children are rendered which again fetch data. The problem of "spaghetti fetching" becomes worse the more levels of recursive fetching there are. If I understand you correctly, you argue for fetching all data upfront and then rendering the modal and all its children at once. This way you ensure "UI = f(state)" by removing side effects from "f".
On the other hand, I can also see some drawbacks:
1. This goes against the idea of fetching data close to where it is used, which is what basically promotes modularization.
2. From the POV of the children, you have to backtrack to find where their data comes from.
3. If a component always uses the same data, you have to duplicate that fetching everywhere you want to use it.
4. You can't show children partially; you have to wait until every child has its data before rendering any of them.
[0] https://media.ccc.de/v/38c3-self-models-of-loving-grace
[1] https://joscha.substack.com/p/on-the-jeffrey-epstein-affair
[2] https://events.ccc.de/congress/2025/hub/en/event/detail/tech...