ivan_gammel's comments | Hacker News

I owned a Salesforce setup with 4 engineers and 500+ licenses. I don't see how I could have replaced our SF setup with an in-house product on the same budget within a reasonable timeline. We won the local competition within a few years, because our sales team could use a good CRM from day 1 and our competitor, according to the rumors I heard, could not properly calculate sales agent commissions. Vendor lock-in is not always a stupid thing. Sometimes it's the bet that wins you a market.

Zoom out a little though. I've always felt the main reason most companies use Salesforce is that most companies use Salesforce.

I'll give you an example. At a previous employer, we used Google Analytics. We paid for Google Analytics. I feel positive that as a mid-size company, we shouldn't have paid for Google Analytics. The free product with 50 events in GA4 should be plenty for us. But why do we use Google Analytics in the first place? Because everyone uses Google Analytics.

I agree that sometimes Salesforce might be a good idea. However, it should be part of an overall strategy, not adopted just because everyone does it. This kind of deliberate tooling strategy is difficult, though, because from what I understand, the way Google Analytics or Salesforce works is to make marketing folks feel they are specialized in Google Analytics or Salesforce, so they feel they have to keep using it or their skill will become useless.

It is like resume driven development but for the whole business.


>I've always felt the main reason most companies use Salesforce is that most companies use Salesforce.

It's like this for most software, but as a salaryman it's better for you if you use the common software. In an interview you can then say "I know how to use the thing that most people use" instead of "Actually, we had an in-house system, so if you hire me I'll need to be onboarded for 3 months".

I got hired for my 2nd job in large part because I knew how to use Broadridge Paladyne (back then it was pretty good once you got over the pretty bad UI/UX; by today's standards it's not great).


I think it's common knowledge by now that Salesforce is very expensive, so it is not the go-to choice for most startups or people with no CRM experience. You are more likely to start with Hubspot today than with anything else, but those low-effort CRMs are also quite easy to migrate from. Google Analytics too, so it's not exactly a "lock-in". The lock-in happens when you struggle with your current setup, or the risks associated with it become unacceptable, but you do not have the budget and a competent team or external partner to execute the migration.

"Everyone does that" is definitely part of the decision-making process almost everywhere, but I personally have not seen companies where it's just a cargo cult rather than a reasonable strategic choice. The obvious benefits are that it's easier to find implementation partners, the costs are predictable, and your users may already know the system, so you won't have unnecessary friction in your ops.


Decided to cancel my personal Miro subscription, so I'm vibe-coding* a diagram/vector graphics tool with a UX I would enjoy rather than tolerate.

* assisted coding, not full code generation


This is the only correct way to do it: choose an infrastructure provider that can help you deliver. AWS is good, just not for everyone. It stands somewhere between services like Heroku and bare metal, abstracting away a lot of maintenance while offering some control over scaling architecture. Which means that as a cloud provider it helps you scale, not build the cheapest and simplest setup possible. If you have VC money and pitch growth, AWS might be a safe choice: the 2 years of startup credits they offer via accelerator programs let you not worry too much about your infra budget and just build for the first 18 months before you start optimizing spending (by which point you know your usage and have good forecasting, etc.). If you are bootstrapped or an indie developer, choose what you can afford and choose something simple. Hetzner, DO, etc. will work fine.

Technically, the kernel team is sufficiently competent to design and build bespoke tools for themselves. It's probably a question of risk assessment and priorities.

>A spotty connection hasn’t loaded the dependencies correctly

Either they load or they don't. How would the dependencies load "incorrectly"?

Let's say you have 5-7 dependencies to load, but 3 of them timed out because your train entered the tunnel. Your app ends up in an incorrect state, fails silently, and UX degrades unpredictably. This is where conversion often drops visibly, and it's the reason SSR is now the go-to solution for any marketing website.
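A minimal sketch of handling that failure mode, with hypothetical integration names (chatbot, tracker, abTesting) and simulated network calls; the point is to time out and degrade explicitly rather than fail silently:

```javascript
// Sketch: load third-party dependencies with a timeout and an explicit
// fallback, so a dropped connection cannot leave the page half-initialized.
// The dependency names are made up; real loads would fetch scripts from CDNs.

const withTimeout = (promise, ms, name) =>
  Promise.race([
    promise.then(() => ({ name, ok: true })),
    new Promise((resolve) => setTimeout(() => resolve({ name, ok: false }), ms)),
  ]);

// Simulated loads: the tracker "hangs", as if the train entered the tunnel.
const deps = [
  withTimeout(Promise.resolve(), 200, "chatbot"),
  withTimeout(new Promise(() => {}), 200, "tracker"), // never resolves
  withTimeout(Promise.resolve(), 200, "abTesting"),
];

async function init() {
  const results = await Promise.all(deps); // withTimeout never rejects
  const failed = results.filter((r) => !r.ok).map((r) => r.name);
  // Degrade explicitly: core UX stays functional, and missing integrations
  // are skipped instead of leaving the app in an inconsistent state.
  return { loaded: results.length - failed.length, failed };
}

init().then((state) => console.log(state.failed)); // which integrations were skipped
```

The same idea works with `Promise.allSettled` when the underlying loaders can reject; the key design choice is that every dependency resolves to an explicit success/failure record the app can inspect.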


Why am I loading dependencies from 5-7 places? Why is my website not using a bundler if it has so many varied dependencies? Why do we not expect the user to understand that they are in a tunnel without internet?

Regardless, this isn't really restricted to the usage of JavaScript. The website would likely have pretty bad UX if only half of the CSS loaded correctly, but no one programs defensively around it being absent.


Have you ever developed enterprise-scale frontend applications optimized for conversion targets? It feels like you have not. You may ship your own code in a bundle, yes. All integrations come on top of that. That chatbot, tracker, A/B testing logic, etc. are all loaded separately from your service provider's CDN.

A user opening a web page is not expecting a full-blown app with multi-second loading times. If that happens, they bounce, and you lose revenue. The web is supposed to have a very short time to first contentful paint and a very short time to interactive; the shorter the better, and less than 0.5s is the goal. It can deliver that, if built properly. Many SPAs and bulky JS apps are built for developer convenience, not for end users. The only real use case for an SPA is when you deal with a lot of local data: a spreadsheet, a document or image editor, a diagram tool (but then wasm is probably a better choice).

You may say you are not building an enterprise-grade frontend. But if you are small enough, you don't need an SPA either.


Go on. How do I have no idea what I'm talking about? Why is it okay for a website to break simply because the analytics don't load? Why do you think that's good design? How is my personal, lived experience less valuable than yours?

Is it just that you're ashamed that you have made such poorly designed web apps that can't handle a few broken HTTP calls?

Is it just that you can't simply accept that JavaScript is a requirement for the modern web which is what this entire discussion is hinged upon?


>How do I have no idea what I'm talking about?

You dismissed A/B testing as unnecessary. That is sufficient for this judgement. A/B tests mostly run on the happy-path scenario of a customer: if an A/B test breaks, the company loses money at light speed.

The loading-related issues overall may eat 0.5-1% of revenue. That is not something that should be an afterthought.


Lol, okay. I didn't know that every single customer was going to go through a tunnel as they loaded the page.

I didn't dismiss A/B testing. I'm just saying that, if the analytics don't load on the client, you should already have variant A loaded and ready to render. It's literally just a matter of a try/catch, and you shouldn't be waiting to load this stuff client-side anyway if this is truly supposed to be the "happy path".

Yes, I know that legacy software like Google Tag Manager requires client-side integration, but I would argue that is an orthogonal concern. You don't need to use that for your A/B testing. It's pretty easy to integrate this stuff into SSR-- especially if you stream in the HTML. This is why cookies exist.
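As a sketch of that approach: assign the variant server-side and pin it with a cookie, so a complete page for A or B is rendered before any client script runs. The names here (`ab_variant`, `handleRequest`) are illustrative, not from any real framework:

```javascript
// Sketch of server-side A/B assignment via a cookie. The variant is chosen
// (or read back) before the HTML is sent, so no client-side script has to
// load for the test to work.

function pickVariant() {
  return Math.random() < 0.5 ? "A" : "B";
}

// Parse a Cookie header like "ab_variant=B; theme=dark" into an object.
function parseCookies(header = "") {
  return Object.fromEntries(
    header.split(";").map((p) => p.trim().split("=")).filter((kv) => kv[0])
  );
}

function handleRequest(cookieHeader) {
  const cookies = parseCookies(cookieHeader);
  const variant = cookies.ab_variant || pickVariant(); // sticky across visits
  const setCookie = cookies.ab_variant
    ? null
    : `ab_variant=${variant}; Path=/; Max-Age=2592000`;
  // The server renders the chosen variant directly; even if every optional
  // integration fails, the user still gets a complete page for A or B.
  const html = variant === "A" ? "<h1>Old headline</h1>" : "<h1>New headline</h1>";
  return { variant, setCookie, html };
}

console.log(handleRequest("ab_variant=B").variant); // "B" — returning visitor keeps their bucket
```

In a real setup the variant would also be forwarded to the analytics backend, but the bucketing itself never depends on a client-side script loading.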

And, again, none of this changes the central concept of this comment thread: JavaScript is necessary for the modern web experience.


> chatbot, tracker, A/B testing logic etc

Literally none of those things are necessary for a working website. If your site breaks when your analytics don't load, then that's just horrible design at any scale.


Apparently you have no idea what you are talking about.

A normal person would immediately think "dang, page didn't load before I entered the tunnel. Guess I'll wait til I'm out again and refresh".

And if they're deliberately going somewhere where there's no signal for an extended period of time, and really want it to work, they'll ensure they've loaded everything before doing so.

And I say this as someone who is developing a PWA for people with low-end phones and very inconsistent and/or slow connections. I'm very cognizant of and empathetic to their situation.


Anecdotal evidence does not beat statistics and user research. Bounce rate correlates inversely with loading speed. People with low intent do not refresh; they simply don't come back and look elsewhere, or just move on. Telling you this as someone who built his first commercial website in 1999 and was CTO of a hyperscale B2C startup. Let's not measure the length of credentials.

To clarify, you're saying we should be jumping through convoluted hoops - full page navigation + js to rewrite history, all so that you can avoid a very minimal amount of js to show/hide a nav menu - for low intent people who are frequently entering tunnels?

Something like Datastar would enable this with like two html attributes, and only require 10kb of js (and would also allow for endless other things via declarative html).

https://data-star.dev/reference/attributes#data-show
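For illustration, a show/hide nav menu in that declarative style might look roughly like this (attribute syntax sketched from the linked docs; details may differ between Datastar versions):

```html
<!-- One signal, two declarative attributes, no hand-written JS:
     the button toggles the signal, the nav shows while it is true. -->
<div data-signals-open="false">
  <button data-on-click="$open = !$open">Menu</button>
  <nav data-show="$open">
    <a href="/">Home</a>
    <a href="/docs">Docs</a>
  </nav>
</div>
```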


> To clarify, you're saying we should be jumping through convoluted hoops

Good that you asked. Did I say anything like that in my comment above?


That's certainly what you implied, given the chain of comments.

Lidl doesn't do that. It is just a grocery discounter, one of many companies in that corporate structure, and one of the users of that cloud.


Schwarz Gruppe includes Schwarz Digits, which includes StackIT. 7500 is the number of employees at Digits, which also covers online marketplaces like Kaufland e-commerce, so definitely not all of them work on the sovereign cloud.


> Technically you don't have to be an employed developer to become a senior developer.

Outside of a sufficiently large organization, "seniority" of a developer doesn't make any practical sense. So technically you can assign yourself any label, but that would be a weird thing to do.

A freelancer is measured by portfolio, a computer scientist in academia by publications, an OSS contributor by the volume and impact of contributions. In each case, it's proportional to the effort spent on learning and building.

Anyway, regardless of employment status, the measure of your professionalism is not defined only by what you can learn from books. Experience matters a lot: it's nearly impossible to succeed in stakeholder management or in presenting your solutions just by reading. You need practice and feedback. Senior engineers aren't those who excel at writing code: fresh CS graduates are supposed to know algorithms better. Senior engineers can contribute across the full scale of the SDLC themselves and support others. That is much easier to achieve in a professional environment than by working on amateur projects.


All of this is possible on a corporate track. The ability to build frameworks and tools does qualify a person as at least a solid mid-level professional; not having corporate experience and the associated skills can be a pretty big gap in a CV.


Possibilities are not results.


That’s quite obvious, ontologically speaking.


This of course cannot be generalized, but the withdrawal has a quite noticeable positive effect on personal well-being.

