Hacker News: mongrelion's comments

Apparently there are a few more communities similar to the one from the post:

https://tildeverse.org/members/


Pi ships with powerful defaults but skips features like sub-agents and plan mode

Does anyone have an idea as to why this would be a feature? Don't you want to have a discussion with your agent to iron out the details before moving on to the implementation (build) phase?

In any case, looks cool :)

EDIT 1: Formatting. EDIT 2: Thanks, everyone, for your input. I was not aware of the extensibility model that Pi had in mind, or that you can also iterate on your plan in a PLAN.md file. Very interesting approach. I'll have a look and give it a go.


I plan all the time. I just tell Pi to create a Plan.md file, and we iterate on it until we are ready to implement.

Agreed. I rarely find the guardrails of plan to be necessary; I basically never use it on opencode. I have some custom commands I use to ask for plan making, discussion.

As for subagents, Pi has sessions. And it has a full session tree with forking. This is one of my favorite features across all harnesses: build the thing with half the context, then keep using that point as a checkpoint, branching off to do new work from it. It means still having a very usable, lengthy context window while having good fundamental project knowledge loaded.


Check https://pi.dev/packages

There are already multiple implementations of everything.

With a powerful and extensible core, you don't need everything prepackaged.


See my comment in the thread, but there is an intuitive extension architecture that makes integrating these types of things feel native.

https://github.com/badlogic/pi-mono/tree/main/packages/codin...


I agree with you, especially with this:

> They paid for the access the same as any other.

If anything, this makes them more legit than Anthropic because they are paying for the content, whereas Anthropic just stole *all* the data they got a hold of. So, in this case the Chinese AI labs stand on higher moral ground LOL.


The article touches a bit on how Sega basically lost. There is literally a whole documentary about it, Console Wars, that goes deep into how Sega lost the battle: https://en.wikipedia.org/wiki/Console_Wars_(film)

Hello. I am happy to take this for a spin.

I see that not all models available in my GitHub subscription show up (all of them should be visible).

Further, is it possible to use OpenRouter with the current implementation? I couldn't figure it out from the documentation alone.

Thank you!


This is definitely a cool finding.

Have you investigated this topic further? Like, anything similar in concept that competes with Serena? If so, have you tested it/them? What are your thoughts?


I actually just enhanced my `codescan` project to exceed Serena in some ways

https://github.com/pmarreck/codescan

Essentially zero-install, no MCP: just tell your agent about its CLI, have Ollama running with a particular embeddings model, and boom.

Now I just need to set up GitHub Actions (ugh) so people can actually download artifacts.
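For the curious, the core idea behind this kind of embeddings-based code search can be sketched in a few lines. This is a hypothetical illustration, not codescan's actual code: the Ollama `/api/embeddings` endpoint exists, but the model name, payload details, and ranking approach here are assumptions.

```javascript
// Sketch: embed a query and each code chunk with a local Ollama server,
// then rank chunks by cosine similarity to the query.
async function embed(text, model = "nomic-embed-text") {
  // Ollama's embeddings endpoint; assumes the server runs on the default port
  // and the model has already been pulled (e.g. `ollama pull nomic-embed-text`).
  const res = await fetch("http://localhost:11434/api/embeddings", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, prompt: text }),
  });
  return (await res.json()).embedding;
}

// Plain cosine similarity over two equal-length numeric vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}
```

The appeal of the approach is that the index is just vectors on disk; the only stateful dependency is the local Ollama process serving embeddings.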


@pmarreck, Serena developer here. We invite you to contribute to Serena in order to make it better. Serena is free & open-source, and it already robustly addresses the key issues preventing coding agents from being truly efficient even in complex software development projects (while being highly configurable).

We don't believe CLI is the way to go though, because advanced code intelligence simply cannot be spawned on the fly and thus benefits from a stateful process (such as a language server or an IDE instance).


Curious question: why would they check for installed extensions on one's browser?


Fingerprinting. There are a few reasons you'd do it:

1. Bot prevention. If the bots don't know that you're doing this, you might have a reliable bot detector for a while. The bots will quite possibly have no extensions at all, or, even better, a specific exact combination they always use. Noticing bots means you can block them from scraping your site or spamming your users. If you wanna be very fancy, you could provide fake data or quietly ignore the stuff they create on the site.

2. Spamming/misuse evasion. Imagine an extension called "Send Messages to everybody with a given job role at this company." LinkedIn would prefer not to allow that, probably because they'd want to sell that feature.

3. User tracking.
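One common way a site probes for extensions is to try loading a web-accessible resource that a known extension exposes: if the load succeeds, the extension is installed. A hedged sketch of the idea follows; the extension ID, resource path, and scoring rule are all made up for illustration, and real detectors ship curated probe lists (modern extension manifests can also restrict which resources are probeable at all).

```javascript
// Hypothetical probe list: { name, url } pairs where url points at a
// resource a known extension exposes via web_accessible_resources.
const PROBES = [
  { name: "some-scraper", url: "chrome-extension://aaaabbbbccccddddeeeeffffgggghhhh/icon.png" },
];

// Try to fetch each probe URL from the page; a successful load means
// the corresponding extension is present in this browser.
async function detectExtensions(probes) {
  const found = [];
  for (const probe of probes) {
    try {
      await fetch(probe.url, { method: "HEAD" });
      found.push(probe.name); // resource loaded -> extension installed
    } catch {
      // fetch rejects when the extension is absent (or blocks probing)
    }
  }
  return found;
}

// Pure scoring step: flag visitors whose detected extensions overlap a
// list of tooling associated with automation or scraping.
function classifyVisitor(detected, knownBotExtensions) {
  const hits = detected.filter((name) => knownBotExtensions.includes(name));
  return hits.length > 0 ? "suspect" : "unknown";
}
```

The detection half has to run in the browser; the scoring half is ordinary server-side logic fed by whatever the probe reported.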


> The bots will quite possibly have no extensions at all

I imagine most users will also have no extensions at all, so this would not be a reliable metric for tracking bots. Maybe that's hard to imagine for someone whose first move after installing a web browser is to install the extensions they absolutely can't live without (uBlock Origin, Privacy Badger, Dark Reader, NoScript, Vimium C, whatever). But I imagine the majority of casual users do not install any extensions or even know they exist (maybe besides some people using something like Grammarly or Honey, since those advertise aggressively on YouTube).

I do agree with the rest of your reasons though, like if bots used a specific exact combination of extensions, or if there was an extension specifically for LinkedIn scraping/automation they want to detect, and of course, user tracking.


I wrote some automation scripts that are not triggered via browser extensions (e.g., open all my sales colleagues’ profiles and like their 4 most recent unliked posts to boost their SSI[1], which is probably the most ‘innocent’ of my use-cases). It has random sleep intervals. I’ve done this for years and have never faced the ban hammer.

Wonder if, with things like Moltbot coming onto the scene, a form of “undetectable LinkedIn automation” will start to manifest. At some point they won’t be able to distinguish between a chronically online seller adding 100 people per day with personalized messages and an AI doing it with the same mannerisms.

[1] https://business.linkedin.com/sales-solutions/social-selling...
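The random-sleep pacing described above can be sketched roughly like this. This is hypothetical code with no actual LinkedIn calls; the function names and interval bounds are made up for illustration:

```javascript
// Pick a random delay between two bounds (milliseconds). Randomized
// gaps between actions make the traffic pattern look less mechanical
// than a fixed interval would.
function randomDelayMs(minMs = 3000, maxMs = 15000) {
  return minMs + Math.floor(Math.random() * (maxMs - minMs));
}

// Run a list of async actions (e.g. "open profile", "like post"),
// sleeping a random interval between each one.
async function paced(actions) {
  for (const action of actions) {
    await action();
    await new Promise((resolve) => setTimeout(resolve, randomDelayMs()));
  }
}
```

The actions themselves would be whatever drives the browser (a headless framework, a userscript, etc.); the point is only that the timing is jittered.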


Most automations for sales and marketing use browser extensions... LinkedIn wants you using their tools, not third-party ones.


Their own tools suck, that’s the issue.


Third-party tools don't bring money to LinkedIn, that's the issue. Rather than try to compete, it's much easier to force you to use their tools! Reddit did the same thing.


Easy solution is to sell a plan that explicitly allows third-party tool usage. Then they get the money and the users get the tooling LinkedIn is incapable of building themselves.

(except they won't, because they're not after money but engagement, and their built-in tools suck on purpose to maximize wasted time)


For a social network, more information about their users = better ad targeting. It likely gets plumbed into models to inform user profiles.


Look at the actual list. It's primarily questionable AI tools, scrapers, lead generation tools, and other plugins in that vein.

I would guess this is for rate limiting and abuse detection.


An attempt at fingerprinting, I suppose?


I had the dilemma of choosing between Bazzite, CachyOS, and Manjaro for my workstation, which is also my gaming rig.

The whole immutable-distro thing felt like a hindrance for my workflow (running Docker containers, etc.), so that's why I went with CachyOS.


I think you should take another look, especially at the “Bazzite developer experience” edition: container-based development is pretty much what it’s centred around. Alternatively, Bluefin, which is much more dev-focused.


I ran Arch Linux as my main driver on both PC and laptop for more than a decade, but after having the opportunity to use a Windows machine with WSL and eventually WSL2, I felt like I had access to the best of both worlds: a Linux terminal for development (bash + tmux + vim, now bash + zellij + neovim) without the hassle of updates breaking things every few months, and an out-of-the-box native gaming experience.

But with the enshittification of Windows (first all the spam and ads on the Start menu, then Microsoft forcing you to have an account to be able to use the machine, and the expensive license for Windows Professional if you want access to Hyper-V, which I did), I did some research, tried a few new distros (Manjaro, Bazzite, and CachyOS) and settled on CachyOS (gaming support was the main driver; being based on Arch Linux was secondary).

I do everything I did on Windows and some more: all the terminal stuff plus browsing, CAD modeling, 3D printing / slicing, Office stuff... I miss nothing. No more double partition to boot into Windows when I want to game.

My RX 9070 XT runs smoothly with no driver issues whatsoever. I even have tested the waters running some LLMs with LM Studio and that also worked out of the box.

The only things that have been a bit meh are Teams and Slack, and I believe that has to do with the fact that I ran them in Firefox. Once I ran Slack in Chromium, noise canceling was available again.

2009 was the year of Linux on the desktop for me. 17 years later, after going back and forth between macOS and Windows, it feels good to be back home.

One last note in my random ramble: I do not have as much spare time as before. I had heard this from other people back in the day whenever I'd say I ran Arch Linux on my machines, so I am going to repeat what others have said to me: it's really nice to not have to worry about much and to be able to sit down and get productive right away. To me, CachyOS and KDE have made that idea my actual experience, and for that I am grateful.


> without the hassle of updates breaking things every few months

That's not so much a Linux issue as an Arch issue.


My experience with generating code with AI is very limited, across a limited set of programming languages, but whenever it has produced low-quality code, it has been able to better itself with further instructions. Like "oh no, that is not the right naming convention. Please use ${convention} instead" or "the choice of design pattern here is not great because ${reasons}. Produce 2 alternative solutions using x or y", and in nearly every case it produces satisfactory results.

Has this also been your experience?


Kinda, yes, though my experience has also been that it takes too long that way. I've been using AI for tasks other than ones that require high-quality code.

