Vercel's runtime must be able to access the values (so customers' apps can use them), but nobody else ever should. This is typical amateur-hour security, but on the other hand, who was naive enough to expect any better from Vercel?
If they had required SSO sign-in to their admin panel (trusted-device checks notwithstanding), the OAuth access would have been useless.
Vercel is understandably trying to shift all the blame onto the third party, but the fact that their admin panel can be accessed with Gmail/Drive/whatever OAuth scopes is irresponsible.
That's a low-leverage place to intervene. Whether or not the internal admin system was directly OAuth-linked to Google, by the time the attacker was trying that, they already had a ton of sensitive and valuable info from the employee's Google Workspace account.
If you can only fix one thing (ideally you'd do both, but working in infosec has taught me that you can usually do one thing at most before the breach-urgency political capital evaporates), fix the Google token scopes and expiry, or fix the environment-variable storage system.
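To make "fix the token scope/expiry" concrete: Google's tokeninfo endpoint reports a token's granted scopes and remaining lifetime, so you can audit for over-broad grants. Here's a minimal offline sketch using a canned sample payload (the `BROAD_SCOPES` list and thresholds are my own illustrative choices, not anything Vercel uses):

```python
# Hypothetical audit of a Google OAuth token's scopes and expiry.
# In production you'd GET https://oauth2.googleapis.com/tokeninfo?access_token=...
# and feed the JSON response into audit_token(); a canned sample keeps this
# sketch self-contained.

BROAD_SCOPES = {
    "https://mail.google.com/",               # full Gmail access
    "https://www.googleapis.com/auth/drive",  # full Drive access
}

def audit_token(info: dict, max_lifetime_s: int = 3600) -> list:
    """Return findings for a tokeninfo-style payload (scope string + expires_in)."""
    findings = []
    granted = set(info.get("scope", "").split())
    for scope in sorted(granted & BROAD_SCOPES):
        findings.append(f"overly broad scope granted: {scope}")
    if int(info.get("expires_in", 0)) > max_lifetime_s:
        findings.append(f"token lifetime {info['expires_in']}s exceeds {max_lifetime_s}s")
    return findings

sample = {
    "scope": "https://www.googleapis.com/auth/drive openid email",
    "expires_in": "3599",
}
print(audit_token(sample))
# → ['overly broad scope granted: https://www.googleapis.com/auth/drive']
```

The point of the sketch: a phished token with only `openid email` and a short lifetime is a nuisance; one with full Drive scope and a long-lived refresh token is a breach.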
I don't know why you are being downvoted. The article is AI blogspam; it doesn't have any more factual information than e.g. https://www.darkreading.com/application-security/vercel-empl... and is full of empty LLM-isms. It's depressing that people are willing to read this.
I don't have an LLM radar like you, but I felt some anxiety reading through it. I can't explain why, but the logic wasn't linear, and that strained me as a reader. It didn't have the obvious LLM-isms I see in YouTube videos ("not this, but that").
My natural instinct is to make sense of what I read, and being presented with a word salad strains me. What are the empty LLM-isms, so I can calibrate my radar? These are some giveaways I could spot:
> The timeline is genuinely absurd
> The timeline sequence description (Feb/March/April) is abstract and does not depict specifics reflecting human understanding.
The article you linked didn't mention that Context.ai, where this mess originated, is a Y Combinator company. Its founders are most probably on this very web forum.
I was trying to look it up (basically https://developers.google.com/identity/protocols/oauth2/java... -- the consent screen shows the app name), but it now says "Error 401: invalid_client; The OAuth client was not found", so the OAuth client was probably deleted by its owner.
Why use AI-generated pictures for the letters? A lot of the site's wording is clearly AI too, but the pictures of the final product are its most important aspect.
I'm genuinely asking, since it just makes the site look untrustworthy. The only thing balancing that out is the real pictures of the Peltier stamp process.
IP filtering is a valuable security factor. I know which IPs belong to my organisation, and those can be a useful factor in gating access.
I've written rules which say that access should only be allowed when the client has both password and MFA and comes from a known IP address.
Why shouldn't I do that?
And there are systems which only support single-factor (password) authentication, so I've configured IP filtering as a second factor. I'd love for them to have more options, but pragmatically this works.
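The rule I described above can be sketched in a few lines (a minimal Python illustration with made-up example ranges from RFC 5737 documentation space, not any particular product's policy syntax):

```python
import ipaddress

# Hypothetical policy: allow access only when password + MFA both succeeded
# AND the client IP falls inside the organisation's known ranges.
ORG_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),   # example office range
    ipaddress.ip_network("198.51.100.0/24"),  # example VPN range
]

def access_allowed(password_ok: bool, mfa_ok: bool, client_ip: str) -> bool:
    if not (password_ok and mfa_ok):
        return False
    ip = ipaddress.ip_address(client_ip)
    return any(ip in net for net in ORG_NETWORKS)

print(access_allowed(True, True, "203.0.113.42"))  # inside an org range → True
print(access_allowed(True, True, "192.0.2.10"))    # outside all ranges → False
```

For the single-factor systems, the same check runs with only `password_ok`, so the network membership effectively stands in for the missing second factor.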
Why are you (re-)implementing client security on the provider end? If a client requires that only requests from a particular network are permitted... peer in some way.
I do understand the value of blocking unwanted networks/addresses, but that's a somewhat different problem space.