Yeah, there could also be an issue with learning how to hand-hold an AI vs. learning how to actually engineer good solutions. Maybe one feeds into the other, since we're not getting off the AI train...
Currently working on training language models steered towards certain "states of consciousness".
I have a model trained on public datasets tying brainwaves and/or eye tracking to text comprehension (this works well enough to experiment with). Now I'm training an adapter for various LLM architectures to generate text steered toward certain neural oscillation patterns (let's call them "states of consciousness" for brevity). I also have a 'rephraser' that rephrases text to elicit these states. Overall I'm experimenting with building a suite of tools from my findings on how text relates to the eigenmodes of consciousness. My theory is that once I do this I'll be able to do some...interesting things with "AI" agents. lmk if you want to talk about it if you're someone with knowledge in neuroscience/ML. My background is as a Software/ML Engineer, so I could use additional thoughts. I wish I could send a GitHub link/docs, and I will soon, but this is currently a private project seeking investment for various research/public/private sector applications.
Awesome! I write LLM-powered scrapers and stuff all the time, and one of the biggest pain points is that HTML is full of so much crap that isn't meaningful and overwhelms the context. And being a data science guy, idk how to solve this.
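fwiw, a rough sketch of the kind of pruning I mean, using just the stdlib `html.parser` (the tag whitelist and the keep-only-`href` choice are my own assumptions, not anyone's actual pipeline):

```python
# Prune HTML before handing it to an LLM: drop <script>/<style>, strip
# attributes (keep only href on links), discard tags outside a small whitelist.
from html.parser import HTMLParser

KEEP_TAGS = {"a", "p", "h1", "h2", "h3", "ul", "ol", "li", "table", "tr", "td", "th"}
SKIP_TAGS = {"script", "style", "noscript", "svg"}  # drop contents entirely

class Pruner(HTMLParser):
    def __init__(self):
        super().__init__()
        self.out = []
        self.skip_depth = 0  # >0 while inside a SKIP_TAGS subtree

    def handle_starttag(self, tag, attrs):
        if tag in SKIP_TAGS:
            self.skip_depth += 1
            return
        if self.skip_depth or tag not in KEEP_TAGS:
            return
        # keep only href so links survive; classes/ids/data-* are noise
        href = dict(attrs).get("href")
        self.out.append(f'<a href="{href}">' if tag == "a" and href else f"<{tag}>")

    def handle_endtag(self, tag):
        if tag in SKIP_TAGS:
            self.skip_depth = max(0, self.skip_depth - 1)
        elif not self.skip_depth and tag in KEEP_TAGS:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        if not self.skip_depth and data.strip():
            self.out.append(data.strip())

def prune(html: str) -> str:
    p = Pruner()
    p.feed(html)
    return " ".join(p.out)
```

e.g. `prune('<div class="x"><script>var a=1</script><p>Hello</p></div>')` keeps just `<p> Hello </p>` — wrapper divs and scripts gone, text and structure kept. Obviously a real version would need to handle more tags and malformed markup.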
awesome, that's the same reason I use it. It's basically a balance between the full HTML and the markdown-type scrapers that are better for just text. Do you mind if I reach out to you once I set up the GitHub?
Sooo make your HTML extremely convoluted: randomized semantics and a ton of hidden interactions (+1 for only using custom web elements). Basically make it like YouTube. After spending way too much time building browser agents, I can assure you this will defeat Operator as well.
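To make the "randomized semantics" bit concrete, here's a toy sketch (my own illustration, not the commenter's actual approach): rewrite every tag into a per-build random custom-element name, so scrapers and agents can't key on stable selectors like `div` or `.price`:

```python
# Rewrite tags into randomized custom-element names (custom elements must
# contain a hyphen). A real build step would also randomize classes/ids and
# handle attributes; this only handles bare <tag> / </tag> pairs.
import random
import re
import string

def rand_name(rng: random.Random) -> str:
    return "x-" + "".join(rng.choices(string.ascii_lowercase, k=8))

def obfuscate(html: str, seed=None) -> str:
    rng = random.Random(seed)
    mapping = {}  # original tag -> random name, so open/close still match
    def swap(m: re.Match) -> str:
        slash, tag = m.group(1), m.group(2)
        mapping.setdefault(tag, rand_name(rng))
        return f"<{slash}{mapping[tag]}>"
    return re.sub(r"<(/?)(\w+)>", swap, html)
```

Each deploy gets a fresh seed, so yesterday's selectors are useless today. The page still renders text fine (unknown elements are inline by default, so you'd reapply styling via equally-randomized CSS).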