Yes, it appears your personal data IS being sent to OpenRouter and the model provider here. The problem, I think, is that a lot of people (especially in the openclaw community) mistake "I run it on my mac mini" for their data being private. Meanwhile all the data is being shipped off to Anthropic via OpenRouter for training, and both of those parties see everything.
I guess you could theoretically plug in a local model, but the README should be more precise when it talks about privacy.
The attestation report is produced ahead of time and verified on each connection, before the prompt is sent. Every time a client connects to make an inference request via one of the Tinfoil SDKs, the attestation report is checked against a known-good public configuration to ensure the connection is to a server running the right model.
The attestation is tied to the Modelwrap root hash (the root hash is included in the attestation report), so you know the machine serving the model has the right model weights.
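A rough sketch of the client-side check described above: the report carries the enclave measurement and the model's root hash, and the client compares both against pinned, known-good values before sending any prompt. All names and the report format here are hypothetical illustrations, not the actual Tinfoil SDK API.

```python
import hashlib
import hmac

# Pinned known-good values (in practice these come from a published,
# auditable configuration; the values below are placeholders).
EXPECTED_MEASUREMENT = hashlib.sha256(b"known-good enclave image").hexdigest()
EXPECTED_MODEL_ROOT = hashlib.sha256(b"published model weights manifest").hexdigest()

def verify_attestation(report: dict) -> bool:
    """Accept the connection only if the report matches the pinned config."""
    measurement_ok = hmac.compare_digest(report["measurement"], EXPECTED_MEASUREMENT)
    model_ok = hmac.compare_digest(report["model_root_hash"], EXPECTED_MODEL_ROOT)
    return measurement_ok and model_ok

# Simulated report from a server running the right code and the right weights.
report = {
    "measurement": EXPECTED_MEASUREMENT,
    "model_root_hash": EXPECTED_MODEL_ROOT,
}
assert verify_attestation(report)  # safe to send the prompt

# A server with tampered weights fails the check, so no prompt is ever sent.
bad = dict(report, model_root_hash=hashlib.sha256(b"other weights").hexdigest())
assert not verify_attestation(bad)
```

The point is simply that the verification is a byte-for-byte comparison against published values, so the operator can't silently swap in different weights.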
The absence of solutions for LLM privacy on that list is telling. We've figured out how to have private communications with other humans via end-to-end encryption, but arguably we leak more about ourselves to a chatbot in a few sessions than we do to even our closest friends and family over WhatsApp.
It uses confidential computing primitives like Intel TDX and NVIDIA CC, available on the latest generations of GPUs. Secure hardware like this is a building block to enable verifiably private computation without having to trust the operator. While Confer hasn’t released the technical details yet, you can see in the web inspector that they use TDX in the backend by examining the attestation logs. This is a similar architecture to what we’ve been developing at Tinfoil (https://tinfoil.sh) if you’re curious to learn more!
Not to mention the privacy concerns of connecting my entire life to OpenAI or Anthropic. If you have the memory feature enabled, it's scary how much ChatGPT already knows about you; it can even infer implicit thoughts and patterns about you as a person.
I am sure it already knows a lot regardless of the memory feature, as long as you're sharing your chat history / have history enabled, but I agree, memory would only make it worse.
This is really neat! Didn’t realize it could be this simple to run RL on models. Quick question: How would I specify the reward function for tool use? or is this something you automatically do for me when I specify the available tools and their uses?
Thanks! Our goal is to make RL "just work" with completely automated GPU provisioning, algorithm selection, and SFT warm-up, while giving people the ability to switch away from the defaults if they want to.
The way tools currently work in the beta: you add tools via MCP to the configuration, and they get passed in as additional context for the model. The model might then choose to use a tool during inference; the tool is automatically called and its output is returned as a tool message. If you really want to, you could parse the tool output as part of the reward calculation, but I expect you'd usually base the reward just on the model's completion. I can give more details if there's a specific tool setup you're envisioning!
To add to this, you can currently manually parse tool calls in your environment's step function, but we'll be rolling out a UI that makes this easier soon.
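To make the "manually parse tool calls for reward" idea concrete, here is a minimal sketch of a reward function that walks a message trajectory. The message schema, field names (`tool_calls`, `role`, `content`), and the `search` tool are all assumptions for illustration, not the product's actual API.

```python
import json

def tool_use_reward(messages: list[dict]) -> float:
    """Hypothetical reward: credit correct tool use plus a final answer."""
    reward = 0.0
    for msg in messages:
        # +0.5 if the model invoked the tool we expected it to use.
        if msg["role"] == "assistant" and msg.get("tool_calls"):
            if any(call["name"] == "search" for call in msg["tool_calls"]):
                reward += 0.5
        # +0.2 if the tool output the model received was valid JSON.
        if msg["role"] == "tool":
            try:
                json.loads(msg["content"])
                reward += 0.2
            except json.JSONDecodeError:
                pass
    # +0.3 if the trajectory ends with a non-empty assistant answer.
    if messages and messages[-1]["role"] == "assistant" and messages[-1].get("content"):
        reward += 0.3
    return reward

# Example trajectory: tool call, tool result, then a final answer.
trajectory = [
    {"role": "user", "content": "What's the capital of France?"},
    {"role": "assistant", "content": "",
     "tool_calls": [{"name": "search", "arguments": '{"q": "capital of France"}'}]},
    {"role": "tool", "content": '{"result": "Paris"}'},
    {"role": "assistant", "content": "Paris."},
]
```

Running `tool_use_reward(trajectory)` on this example yields the full reward, since the model called the expected tool, received valid JSON, and answered. In practice you'd usually score only the final completion, as noted above.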
> Far easier said than done. If it were that easy why even go for the salt?
I think it's more gross to pluck it out. That's why people just pour salt on it, or maybe let it finish feeding? Leeches can be really viscerally repulsive to look at and touch.
I was once hiking in the dark in the south of Japan. I crossed a few streams but didn't think much of it. A while later, I felt like my socks were sticking to my shoes and unusually warm. I looked down to see my ankles and shoes covered in blood. Totally drenched in dark red blood with 10 leeches attached. This was truly the most confused and terrified I ever felt in my life - seeing that much blood without feeling any pain was utterly disorienting.