
A reason for such requirements is similar to the reason software engineers need to grind hard leetcode problems: supply and demand. Prestigious companies get hundreds, if not thousands, of applications every day. The companies can afford to look for candidates with raw talent, such as the capability of mastering many concepts and being able to solve hard mathematical problems in a short time. Case in point: you may not need to use eigenvectors directly on the job, but the concept is so essential to linear algebra that I, as a hiring manager, would expect a candidate to be able to explain and apply it in their sleep. That is, knowing eigenvectors is an indirect filter for people who are deeply geeky. Is it the best strategy for a company? That's up for discussion. I'm just explaining the motives behind such requirements.


I can’t help but think there’s been a ton of filters used in the past to figure out if someone is deeply geeky, and we’ll continue to invent more in the future.

It’s really looking like another rat race. Especially since there’s no central authority, every hiring manager has the potential to invent their own filter, and make it arbitrarily harder or easier based on supply and demand (and then the filter drifts away from the intended purposes).


It becomes a rat race when there are so many interview books and courses and websites. It was not a rat race before 2005, when there were only two reasons one could solve problems like Pirate Coins or Queen Killing Infidel Husbands: the person was so mathematically mature that such problems were easy for them, or the person was so geeky that they read Scientific American or Gardner's columns and remembered everything they read.
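For reference, the "Pirate Coins" puzzle (assuming its standard form: pirates propose splits of 100 coins in seniority order, a proposal passes with at least half the votes including the proposer's own, and a rejected proposer is thrown overboard) falls to straightforward backward induction. A minimal sketch:

```python
def pirate_split(n, coins=100):
    """Backward-induction solution to the classic pirate game.

    Pirates are indexed 0 (most junior) .. n-1 (the current proposer).
    A proposal passes with at least half the votes, proposer included;
    pirates prefer survival, then gold, then seeing others thrown overboard.
    """
    if n == 1:
        return [coins]
    prev = pirate_split(n - 1, coins)   # outcome if this proposer dies
    extra_votes = (n + 1) // 2 - 1      # votes to buy beyond the proposer's own
    # Buy the cheapest votes: a pirate flips only for strictly more gold
    # than they would get in the (n-1)-pirate scenario.
    cheapest = sorted(range(n - 1), key=lambda i: prev[i])[:extra_votes]
    alloc = [0] * n
    for i in cheapest:
        alloc[i] = prev[i] + 1
    alloc[n - 1] = coins - sum(alloc)   # proposer keeps the rest
    return alloc

print(pirate_split(5))  # → [1, 0, 1, 0, 98]
```

The famous answer for five pirates: the proposer keeps 98 coins and buys two votes for one coin each.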


You're missing the third category: people like myself who absolutely love this kind of riddle and destroy them in a few minutes, without that saying anything about their actual work abilities.

I don't think I'm a bad engineer, and I'm certainly not the rock star you absolutely need for your team, but when it comes to this kind of “cleverness” test, I'm really, really good.

I had the “Queen Killing Infidel Husbands" problem (under another name) in an interview last year and I aced it in a few minutes. I didn't know about "Pirate Coins", but when I read your comment HN said it was posted "35 minutes ago" and now it says "40 minutes", which means I googled the problem, figured out the solution, and then found the correct answer online to check myself in less than 6 minutes, all while putting my son to bed!

It's really sad, because there are many engineers much better at their jobs than me who will get rejected because of pointless tests like this…


If somebody asked me logic/brainteaser questions like that, I would politely stop them, explain that if they're asking me that question I'm not a good match for the company, and if they would like to ask a better question, I'm open to it, but otherwise, we can end the application process now. I did that recently with a junior eng who asked me a leetcode question with literally the same test data as the leetcode page. I ended up explaining to the CEO that at the very least his engineers should be creative enough to come up with different test data, but that realistically, if "recognize the need for, and implement, binary search in 45 minutes" is your go-to question, I'm not gonna be a match at your company.

I had to fight my way into google by doing every bit of prep and practice to solve stupid questions and code quicksort, but when I joined, nothing I did in the 12 years I was there required any of that. And I wrote high performance programs that ran on millions of cores (I did know some folks who needed that skill, like the search engine developers, or the maps engine, or the core scheduling algorithms in borg). The entire time I was there I tried to get people to understand that the questions they're asking are just not good indicators of programming ability, but it was repeatedly pointed out that the goal is to minimize false-positive hires.

I do admire your ability to solve problems like that quickly, always wished I could.


> If somebody asked me logic/brainteaser questions like that, I would politely stop them, explain that if they're asking me that question I'm not a good match for the company

This is exactly what I started to do after I was asked a leetcode-based question for a SRE manager position.

It turned out that by making my "profile" clear, I stopped getting bullshit interviews and started getting ones more aligned with actual daily work.


The Queen problem first showed up in a Putnam Math Contest. If you solved it in no time, then you're mathematically talented, which puts you in the first category.


I'm not questioning the fact that I'm kind of gifted when it comes to mathematics (I actually ranked #72 in a nation-wide math contest in France when I was 10), but you were talking about “maturity”, not innate skill. Since I don't have a math degree and haven't done math in more than a decade, I'm definitely far from “mature” from any mathematical perspective that could matter for a job.

And after ten years working in the industry, I can assure you that it is not a skill I can leverage a lot in my job…


But if there is an abundance of supply, the company has to use some kind of filter.

Testing for geekiness and the ability to solve tricky coding and math problems seems like a rational way to do that.

If companies were starving for talent because 'nobody could pass the test' - it would be another thing.

But they have to set the bar on something, somewhere.

I can't speak to AI/ML but I would imagine it might be hard to hire there, given the very deep and broad concepts, alongside grungy engineering.

I've rarely had such fascination and interest in a field that I would never actually want to work in.


There’s an abundance of supply of people with masters degrees in machine learning? How’s that possible? I thought this shit was supposed to be hard.

Has humanity just scaled way too hard or something? Because if we're having an abundance of supply in difficult cutting-edge fields, to the point where they also have their own version of Leetcode, then what hope do average people have of getting any job in this world?

Or, is it at all possible that companies are disrespecting the candidate pool by being stingy and picky?

Maybe the truth is gray.


I currently work as an ML engineer and have interviewed on both sides for some well known companies.

The absolute demand, in number of people, is small compared to the field's popularity. It would not surprise me at all if in many computer science master's programs a majority of the students were studying machine learning. I remember in undergrad we had to ration computer science classes due to too much demand from students; I think the school roughly tripled its number of CS majors over a couple of years.

The number of ML engineers needed is much smaller than the total number of software engineers. When a lot of students decide ML is the coolest, we get an imbalanced CS pool with too many wanting to do ML. Especially since, for ML to work, you normally need good data engineering, backend engineering, and infra, and the actual ML is only a small subset of the service using it.

At the same time, the supply of experienced ML engineers is still low due to the recent growth of the field. Hiring ML engineers with 5+ years of professional experience is more challenging. The main place where supply is excessive is new graduates.


> There’s an abundance of supply of people with masters degrees in machine learning? How’s that possible? I thought this shit was supposed to be hard.

I think it's just a matter of proliferation of these types of programs, as well as a large supply of students.

Also, the average qualification of people working in ML is probably no longer a Ph.D., as it used to be. This is arguably because deep learning techniques require less involved math to understand and are more focused on computational methods that work well.

So the field has probably saturated. When I got involved with ML for the first time (well, really, statistical signal processing) in the mid-2000s, the field was kind of dead, and very highly qualified postdocs had a tough time finding jobs.


> There’s an abundance of supply of people with masters degrees in machine learning? How’s that possible?

I don't know about ML specifically, but almost 12k CS master's degrees and 1.1k PhDs are awarded per year. If my university is any indication, a good portion of those are in ML or involve some sort of ML in their research. But even if it were just 10%, that's a lot of people being added per year. This is just the US, btw.

https://datausa.io/profile/cip/computer-science-110701


> Case in point, you may not need to use eigenvectors directly in the job, but the concept is so essential in linear algebra and I as a hiring manager would expect a candidate to explain and apply it in their sleep.

Exactly. Whenever eigenvectors come up during interviews, it’s usually in the context of asking a candidate to explain how something elementary like principal components analysis works. If they claim on their CV to understand PCA, then they’d better understand what eigenvectors are. If not, it means they don’t actually know how PCA works, and the knowledge they profess on their CV is superficial at best.

That said, if they don’t claim to know PCA or SVD or other analysis techniques requiring some (generalized) form of eigendecomposition, then I won’t ask them about eigenvectors. But given how fundamental these techniques are, this is rare.
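For anyone rusty, the connection described above is compact: the principal components are the eigenvectors of the data's covariance matrix, ordered by eigenvalue (explained variance). A minimal numpy sketch on synthetic data (not any particular library's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data with one dominant direction of variance.
X = rng.normal(size=(500, 3)) * np.array([5.0, 1.0, 0.2])

Xc = X - X.mean(axis=0)                  # center the data
cov = Xc.T @ Xc / (len(Xc) - 1)          # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)   # eigh: for symmetric matrices, ascending order

order = np.argsort(eigvals)[::-1]        # sort descending by explained variance
components = eigvecs[:, order]           # columns = principal components

Z = Xc @ components[:, :2]               # project onto the top-2 components
print(eigvals[order])                    # explained variance per component
```

Equivalently, the components are the right singular vectors of the centered data matrix, which is how most libraries actually compute PCA for numerical stability.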


Given that PCA is heavily antiquated these days, I'd say that asking your candidates to know algebraic topology (the basis behind many much more effective non-linear DR algorithms like UMAP) would be far better… But in spite of the field having long ago advanced beyond PCA, you're still using it to gatekeep.


The initialization strategy for UMAP is important enough that asking about it is probably a more useful interview question than anything out of Ghrist's book.

cf. https://twitter.com/hippopedoid/status/1356906342439669761


UMAP (and t-SNE) aren't the same as PCA. UMAP is pretty close to t-SNE, and I think expanding PCA (Principal Component Analysis) and t-SNE (t-distributed Stochastic Neighbor Embedding) explains the difference. Neighbor embedding is a visualization technique, not a way of determining principal components. PCA preserves global properties, while t-SNE and UMAP don't. They are good techniques for _visual_ dimensionality reduction, but they aren't going to tell you the dominant eigenvectors of the data, i.e., do _dimensionality reduction_ in the usual sense. This is a bit of a pet peeve of mine.

There's some more in this SE post https://stats.stackexchange.com/questions/238538/are-there-c...
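The "preserves global properties" claim has a precise form: keeping all components, PCA is just an orthogonal change of basis (a rotation), so every pairwise distance is preserved exactly, and truncation only discards the lowest-variance directions. Nonlinear neighbor embeddings make no such guarantee. A quick numpy check on synthetic data (variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))
Xc = X - X.mean(axis=0)

# Principal components = eigenvectors of the (unnormalized) covariance.
_, eigvecs = np.linalg.eigh(Xc.T @ Xc)
Z = Xc @ eigvecs                  # full PCA transform: an orthogonal rotation

def pairwise(A):
    """All pairwise Euclidean distances between rows of A."""
    diff = A[:, None, :] - A[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

# Rotations are isometries: global geometry is untouched.
print(np.allclose(pairwise(Xc), pairwise(Z)))  # → True
```

A t-SNE or UMAP embedding of the same data would generally distort these inter-point distances, since both methods deliberately trade global geometry for local neighborhood structure.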


> asking your candidates to know algebraic topology

Congratulations, you've eliminated 99% of the ML research community.


And yet we're also told that tech companies can't get enough people.



