Hacker News | Barneyhill's comments

Embedded the AI translations/commentary (or human translations where present) - the main purpose of the AI translations was to improve the embeddings.


Hi everyone! A couple of weeks ago I stumbled upon ISicily (https://github.com/ISicily/ISicily), a project to encode all ancient Sicilian inscriptions in a strict XML language called EpiDoc. While the existing website is great, I wanted to make an interface that makes it easy to explore the human stories contained within the inscribed texts. To aid searching for concepts/terms I've also filled in missing translations/commentary with Claude 3.5 Sonnet translations (this is meant as a recreational tool - these translations are not to be trusted...)

Hopefully I'll be able to add more datasets in the coming weeks - if you know any do shout!


Ugh, that's so annoying... Was originally hosted on a Cloudflare static site - now switched to R2 bucket storage. That might help things?


I agree. Here’s a discovery tool I made to traverse NTS tracklists linked by common tracks ;)

https://www.barneyhill.com/pages/nts-tracklists/


Yeah... I definitely share the feeling that we've hit a wall in terms of new sounds. Currently I think some of the work happening in the audio/ML space is the most promising in terms of emerging possibilities, but things are still moving much slower than I thought they would. Here are a couple of projects we've done developing novel audio interfaces that leverage ML: https://github.com/vroomai - I'm still confident we'll see a breakout use case in this area soon.


I went to an interesting hackathon the other day focused on building tools for exploring timbre in sound: https://comma.eecs.qmul.ac.uk/timbre-tools-hackathon/.

Was brilliant, a lot of groups focusing on the use of ML to characterise the "unexplainable" in sound synthesis.

We ended up submitting a tool for interacting directly with Ableton using LLM agents after becoming disenchanted with text2audio models. Wrote about it here: https://montyanderson.net/writing/synthesis


Very cool. I will have to join "mod wiggler" lol.

I have been out of it so long muff wiggler is now mod wiggler. Come on, that is absurd. Muff Wiggler was the best name ever.


Yeah.... no. Why audio-nerd forums were ever so infantile as to brand themselves with "muff" and "slutz" isn't much of a mystery, but we haven't lost anything of value by seeing those 'cute' names off. I'd like to chat about synthesizers with fellow nerds without feeling rightfully embarrassed about the name in the header.


Those cutesy names were also a barrier to getting girls and young women involved in this stuff too. Changing it to modwiggler was absolutely the right thing to do.


Yep, I came to similar conclusions w/ text-to-audio models - in terms of creative work, the ability to iterate is really lacking with the current interfaces. We've stopped working on text-to-audio models and are instead targeting a lower level of abstraction by directly exposing an Ableton environment to LLM agents.

We just published a blog today discussing this - https://montyanderson.net/writing/synthesis


Recently I've been thinking about new ways to discover music and made this page to traverse all NTS Radio (https://www.nts.live/) sets as an interconnected network.

It works by displaying all sets containing the selected track. You can then click through white nodes to traverse the network or search for a specific song/artist. All the data is downloaded to the client and traversal is done locally so the page is static.
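The underlying idea can be sketched as a bipartite graph between tracks and sets: invert the tracklist mapping once, then every traversal is a local lookup. A minimal sketch in Python, with hypothetical set/track IDs standing in for the real NTS data:

```python
from collections import defaultdict

# Hypothetical tracklist data; the real page ships a similar mapping to the client.
tracklists = {
    "set_a": ["track_1", "track_2"],
    "set_b": ["track_2", "track_3"],
    "set_c": ["track_3", "track_4"],
}

# Invert the mapping so each track points at every set that played it.
sets_by_track = defaultdict(set)
for set_id, tracks in tracklists.items():
    for track in tracks:
        sets_by_track[track].add(set_id)

def sets_containing(track):
    """All sets that include the selected track."""
    return sorted(sets_by_track[track])

def neighbouring_sets(set_id):
    """Sets reachable in one hop via a shared track."""
    neighbours = set()
    for track in tracklists[set_id]:
        neighbours |= sets_by_track[track]
    neighbours.discard(set_id)
    return sorted(neighbours)

print(sets_containing("track_2"))   # sets that played track_2
print(neighbouring_sets("set_b"))   # one hop from set_b via shared tracks
```

Because both lookups are plain dictionary reads, shipping the whole inverted index to the browser keeps traversal local and the page static, with no server round-trips.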

