FinOps is definitely becoming a hot topic. And, yeah, a lot of companies are realizing they have some pretty low-hanging fruit where they can cut spend significantly.
I've wondered how much of the cloud spend in the world is just junk like this.
I don't mean that cynically or sarcastically. The cloud makes it legitimately easier to deploy resources; I've experienced this in my own teams. But the flip side is that it's also easier to deploy things without thinking about the costs, or to deploy things and forget about them, and management can just shrug and say "yes, it's cloud, it must be cheaper and better".
This is by no means a cloud-unique problem. Watch any datacenter a company has been at for a few years get shut down and moved to the cloud. The stream of "hey, who owns this?" and "are we still using this?" and "I just yanked the network cable, has anyone noticed?" is morbidly amusing. Even if you think you're tracking everything, you quite likely aren't, because the owners don't update their own tracking info very well.
But cloud makes this worse.
So I just find myself wondering how much of AWS profit is on this sort of stuff.
I myself have S3 buckets full of crap. Since I do watch my spend, I know said buckets are in the ~$10-50/month range. At that price point it's difficult to justify engineering time to clean them out. But it's not hard to imagine that scaling up by three or more orders of magnitude.
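To put those numbers in rough perspective, here's a back-of-the-envelope sketch. The $0.023/GB-month figure is the published S3 Standard price for the first tier in us-east-1 and is an assumption here; actual pricing varies by region, storage class, and volume.

```python
# Back-of-the-envelope: how much forgotten data a monthly S3 bill implies.
# $0.023/GB-month is an assumed S3 Standard price (us-east-1, first tier);
# real pricing varies by region and storage class.

S3_STANDARD_PER_GB_MONTH = 0.023

def gb_for_monthly_spend(dollars: float) -> float:
    """How many GB of S3 Standard storage a given monthly bill implies."""
    return dollars / S3_STANDARD_PER_GB_MONTH

# A $25/month junk bucket is roughly a terabyte of forgotten data...
print(round(gb_for_monthly_spend(25)))       # ~1087 GB

# ...and three orders of magnitude up, the same neglect is a $25k/month problem.
print(round(gb_for_monthly_spend(25_000)))   # ~1086957 GB, i.e. ~1 PB
```

The point of the arithmetic: at $25/month nobody will prioritize cleanup, and the incentive doesn't change until the same pattern has quietly multiplied across an org.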
We saw this one iteration back with VMWare. Once it became easy to spin up a VM without getting a PO for a server and finding rack space, the number of "servers" in the org exploded. And of course we lost track of what they all did, and who they belonged to... but there was actually a healthy ecosystem of tools built for (and by) VMWare to manage this situation. The cloud providers seem incentivised to make tracking real costs as difficult as possible. At least with VMWare once you paid for it, it was yours.
There’s great tooling on cloud providers for tracking costs. The pain point is that if you never set it up it’s incredibly difficult and time consuming to add it after the fact through tagging.
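A minimal sketch of why retroactive tagging is so painful: without something like an "owner" tag, you can't even say whose spend is whose, so every cleanup conversation starts from zero. The inventory, tag key, and costs below are made-up illustrations, not any provider's real API.

```python
# Illustration of unattributable spend when resources were never tagged.
# All resource IDs, costs, and the "owner" tag key are hypothetical.

inventory = [
    {"id": "i-0abc",   "monthly_cost": 310.0, "tags": {"owner": "data-eng"}},
    {"id": "i-0def",   "monthly_cost": 145.0, "tags": {}},
    {"id": "vol-9xyz", "monthly_cost": 40.0,  "tags": {}},
]

def unattributed_spend(resources, key="owner"):
    """Sum the monthly cost of resources missing the given tag."""
    return sum(r["monthly_cost"] for r in resources if key not in r["tags"])

orphaned = unattributed_spend(inventory)
total = sum(r["monthly_cost"] for r in inventory)
print(f"${orphaned:.0f} of ${total:.0f}/month ({orphaned / total:.0%}) has no owner")
```

If tags are enforced at creation time (e.g. via policy), that orphaned fraction stays near zero; bolting them on later means chasing down every resource's history by hand.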
Yeah, exactly. I feel that recklessly opening new projects and then abandoning them, or refactoring them multiple times across different teams, quickly adds up to a pile of resources no one owns.
Over the last couple of weeks the whole data org underwent a forced upgrade of certain components on Google Cloud, and we turned up quite a few clusters that no one knows who owns.
I don't know, but I'm itching to start finding ways to build new projects, because that's the best way for techy people to grab ownership and promotion.
Feels like there is a startup in there? Give it your account number and read-only access keys, and it tells you what and where and how to change things.
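The core heuristic such a tool might run with read-only access is simple: flag resources that haven't been touched in N days and total the potential savings. A minimal sketch, where the field names, resource IDs, and 90-day threshold are all illustrative assumptions rather than any provider's API:

```python
# Sketch of a stale-resource scanner: flag anything idle past a cutoff
# and sum the potential monthly savings. Data and threshold are hypothetical.

from datetime import datetime, timedelta

def find_stale(resources, now, max_idle_days=90):
    """Return (id, monthly_cost) pairs for resources idle past the cutoff."""
    cutoff = now - timedelta(days=max_idle_days)
    return [(r["id"], r["monthly_cost"])
            for r in resources if r["last_used"] < cutoff]

now = datetime(2024, 6, 1)
resources = [
    {"id": "bucket-old-etl", "monthly_cost": 42.0,
     "last_used": datetime(2023, 2, 14)},   # untouched for over a year
    {"id": "vm-ci-runner", "monthly_cost": 210.0,
     "last_used": datetime(2024, 5, 30)},   # actively used
]

stale = find_stale(resources, now)
print(stale)  # [('bucket-old-etl', 42.0)]
print(f"potential savings: ${sum(cost for _, cost in stale):.0f}/month")
```

The hard part isn't this logic; it's getting trustworthy "last used" signals and cost attribution out of each provider, which is exactly the data the thread says is painful to assemble after the fact.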
Just curious, what do you think about the service? Is it a lot deeper than what Google itself provides? We are just using Google's own service to turn down costs.
So Apptio is a larger application for finance people, with some cloud cost management built in. They are the main users; I just have read-only access and log in perhaps monthly to view budgets and cloud spend.
I just have access to small bits and pieces, but what I have seen I like.
Just a note that Google itself is doing that, so one has to be better than that. I think the ideal candidate is a senior data architect/ops person with extensive hands-on experience. Sadly I'm far from it.
GCP puts automatic analysis recommendations in the UI right next to the resources in the table of VM/etc. names, and IMO the recommendations (for both upsizing and downsizing) are dramatically more accurate than what AWS gives you.
Yeah. I can imagine consultants making $$ giving full analyses of cloud costs and ways to reduce them. It's actually not very technically challenging, TBH. All that low-hanging fruit can be grabbed by a junior engineer, or even a non-technical senior person who is a deep user of such services.