> LLMs do not encode nor encrypt their training data. The fact they can recite training data is a defect not a default.
About this specific point, it is unclear how much of a defect memorization actually is - there are also reasons to see it as necessary for effective learning. This link explains it well:
> Wasm should be able to just talk to the browser directly.
Web APIs are designed for JavaScript, though, which makes this hard. For example, APIs that receive or return JS Typed Arrays, or objects with flags, etc. - wasm can't operate on those things.
You can add a complete new set of APIs which are lower-level, but that would be a lot of new surface area and a lot of new security risk. NaCl did this back in the day, and WASI is another option that would have similar concerns.
There might be a middle ground with some automatic conversion between JS objects and wasm. Say that when a Web API returns a Typed Array, it would be copied into wasm's linear memory. But that copy might actually make this slower than JS.
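To make the copy concrete, here's a minimal sketch of what JS glue code does today (all names here are illustrative, not from any real toolchain): a Web API hands back a Typed Array, and the glue writes it into the wasm module's linear memory byte by byte.

```javascript
// Sketch of the copy step in hand-written or toolchain-generated JS glue.
// In a real toolchain the wasm module exports its memory and an allocator;
// here we stand in a bare WebAssembly.Memory for simplicity.
const memory = new WebAssembly.Memory({ initial: 1 }); // one 64 KiB page

// Pretend a Web API (e.g. crypto.getRandomValues) returned this Typed Array.
const result = new Uint8Array([1, 2, 3, 4]);

// The copy: write the JS Typed Array into wasm's linear memory at an
// offset the wasm side would normally have allocated (0 for simplicity).
const offset = 0;
new Uint8Array(memory.buffer).set(result, offset);

// The wasm code would then read the bytes at `offset`. This extra copy
// on every API call is the overhead mentioned above.
const view = new Uint8Array(memory.buffer, offset, result.length);
console.log(Array.from(view).join(",")); // bytes as seen from the wasm side
```

Every crossing of the JS/wasm boundary pays this cost in one direction or the other, which is why automatic conversion isn't obviously a win over hand-tuned glue.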
Another option is to give wasm a way to operate on JS objects without copying. Wasm has GC support now so that is possible! But it would not easily help non-GC languages like Rust and C++.
Anyhow, these are the sorts of reasons that previous proposals here didn't pan out, like Wasm Interface Types and Wasm WebIDL bindings. But hopefully we can improve things here!
At least the DOM APIs are ostensibly designed to work in multiple languages, and are used by XML parsers in many languages.
Some of the newer Web APIs would be difficult to port. But the majority of APIs have quite straightforward equivalents in any language with a defined struct type (which, admittedly, you do have to define for WASM, and whether that interface would end up being zero-copy depends on the language you are compiling to wasm).
There is no solution without tradeoffs here, but the only reason JS glue code is winning out is that the complexity is moved from browsers to each language or framework that wants to work with wasm.
> There is no solution without tradeoffs here, but the only reason JS glue code is winning out is that the complexity is moved from browsers to each language or framework that wants to work with wasm.
Correct, but this has been one of wasm's guiding principles since the start: move complexity from browsers to toolchains.
Wasm is simple to optimize in browsers, far simpler than JavaScript. It does require a lot more toolchain work! But that avoids browser exploits.
This is the reason we don't support the wasm text format in browsers, or wasm-ld, or wasm-opt. All those things would make toolchains easier to develop.
You are right that this sometimes causes duplicated effort among toolchains, each one needing to do the same thing, and that is annoying. But we can also share that effort, and we already do in things like LLVM, wasm-ld, wasm-opt, etc.
Maybe we could share the effort of making JS bindings as well. In fact there is a JS polyfill for the component model, which does exactly that.
1. This is a kind of fuzzer. In general it's just great to have many different fuzzers that work in different ways, to get more coverage.
2. I wouldn't say LLMs are "better" than other fuzzers. Someone would need to measure findings/cost for that. But many LLMs do work at a higher level than most fuzzers, as they can generate plausible-looking source code.
At the very least, computers are still getting faster. Models will get faster and cheaper to run over time, allowing them more time to "think", and we know that helps. Might be slow progress, but it seems inevitable.
I do agree that exponential progress to AGI is speculation.
> AlphaGo or AlphaZero didn’t need to model human cognition. It needed to see the current state and calculate the optimal path better than any human could.
I don't think this is right: to calculate the optimal path, you do need to model human cognition.
At least, in the sense that finding the best path requires figuring out human concepts like "is the king vulnerable", "material value", "rook activity", etc. We have actual evidence of AlphaZero calculating those things in a way that is at least somewhat like humans do:
What I think you are referring to is hidden state in the sense of internal representations. I was referring to hidden state in the game-theoretic sense: private information that only one party has. I think we both agree AlphaZero has hidden state in the first sense.
Concepts like king safety are objectively useful for winning at chess, so AlphaZero developed them too - no surprise there, and a great example of convergence. However, AlphaZero did not need to know what I am thinking or how I play in order to beat me. In poker, by contrast, you must model a player's private cards and beliefs.
So what Anthropic are reporting here is not unprecedented. The main thing they are claiming is an improvement in the amount of findings. I don't see a reason to be overly skeptical.
I'm not sure the volume here is particularly different from past examples. I think the main difference is that there was no custom harness, tooling, or fine-tuning. It's just the out-of-the-box capabilities of a generally available model and a generic agent.
To be honest, I didn't find the historical parallels in this article very convincing. I'm glad the author did recognize that we are in uncharted waters, but I think another potential reason our current fascist government is a little more restrained than earlier ones is the same forces that allowed it to rise in the first place - that is, social media and instantly viral videos.
What has happened since the Alex Pretti shooting was simply impossible in previous fascist governments. The administration can tell all the lies they want about it, but most of us have eyeballs, and we can see the multiple videos with frame-by-frame analysis. In the past, government propaganda would have been more effective in cases like this - it would have been a case of "who do you believe, team A or team B?" I don't have to believe either team, I just have to believe my own eyes.
> The administration can tell all the lies they want about it, but most of us have eyeballs […] In the past, […] it would have been a case of "who do you believe, team A or team B?"
Damn, I wish I could share your optimism. If anything, social media has induced more division and generalised the idea that "if you are not with me, you are against me". We are at a point where many are demonstrably more comfortable staying in their bubble of lies than willing to seek the truth outside of it. And truth is unfortunately overrated.
> I researched every Democratic attempt to stop fascism in history. the success rate after fascists were elected was 0%.
Ergo Trump isn't a fascist, since he was already elected once and democracy removed him before. Otherwise they would have to say that there has been one successful attempt by democracy to remove a fascist. The only reason Trump won the last election was that the Democrats failed so badly at coming up with good candidates; if they had someone as good as Joe Biden before his dementia, Trump would have lost. Trying to hide that dementia is why Trump rules today.
Well he did try to overturn that election, but he failed. So I guess that makes him a failed fascist last time around. This time he’s trying much harder. Let’s make sure he fails again.
What are examples of such applications? Honest question - I'm curious to learn more about issues such applications have in production.
> But we really shouldn't be requiring everyone to become an expert to benefit from wasm.
If the toolchain does it for them, they don't need to be experts, any more than people need to be DWARF experts to debug native applications.
I agree tools could be a lot better here! But as I think you know, my position is that we can move faster and get better results on the tools side.