My biggest problem with leetcode type questions is that you can't ask clarifying questions. My mind just doesn't work like most do, and leetcode to some extent seems to rely on people memorizing leetcode type answers. On a few, there's enough context that I can relate the problem to real understanding, such as the coin example in the article... for others I've seen, there's not enough there for me to "get" the question/assignment.
Because of this, I've just started rejecting leetcode/AI interview steps outright... I'll do homework, shared screen, 1:1, etc., but won't do the above. I tend to fail them about half the time. It feels even worse because I wouldn't even mind studying on leetcode-type sites if they actually had decent explainers for the questions and working answers to go through. I know this kind of defeats the challenge aspect, but learning is about 10x harder without it.
It's not a matter of skill, it's just my ability to take in certain types of problems doesn't work well. Without any chance of additional info/questions it's literally a setup to fail.
edit: I'm mostly referring to the use of AI/Automated leetcode type questions as a pre-interview screening. If you haven't seen this type of thing, good for you. I've seen too much of it. I'm fine with relatively hard questions in an actual interview with a real, live person you can talk to and ask clarifying questions.
The LC interviews are like testing how fast people can run 100m after practice, while the real job is a slow arduous never ending jog with multiple detours and stops along the way.
But yeah that's the game you have to play now if you want the top $$$ at one of the SMEGMA companies.
I wrote (for example) my 2D game engine from scratch (3rd party libs excluded)
but would not be able to pass a LC type interview that requires multiple LC hard solutions and a couple of backflips on top. But that's fine, I've accepted that.
Did you mean to type 25? 5 years ago, LC challenges were as prevalent as, if not more prevalent than, they are today. And a single interview for a job is not something I have ever seen in 15 years in the space (and with a bunch of successful OSS projects I can showcase).
I actually have the feeling it's not as hardcore as it used to be on average. E.g. OpenAI doesn't have a straight-up LC interview even though they are probably the most sought-after company. Google and MS and others still do it, but it feels like it has less weight in the final feedback than it did before. Most en-vogue startups have also ditched it for real-world coding exercises.
Probably due to the fact that LC has been thoroughly gamed and is an even less useful signal than it was before.
Of course some still do, like Anthropic, where you have to get a perfect score on 4 leetcode questions, automatically judged with no human contact, the worst kind of interview.
There's an entire planet of jobs that have nothing to do with leetcode. I was talking about those, not FAANG stuff. Unfortunately I am not FAANG royalty.
>Of course some still do, like Anthropic, where you have to get a perfect score on 4 leetcode questions, automatically judged with no human contact, the worst kind of interview.
Only if there is enough evidence. Yes, I can say that the inability to account for things like the ADA in the US can place an employer in hot water; however, since LC doesn't make those decisions, they are immune. The accountability is placed upon the employer. Don't hate the players or the game. Maybe just figure out how to fix it without harming everyone, be popular enough to make said idea into law, and get into a position of power that allows you to do so. If that sounds hard, congrats, welcome to the reason why I never got into politics. Don't even get me started on all the people you will never realize you are hurting by fixing that one single problem.
I can't imagine this kind of entitlement. If you don't want to work for them, don't study leetcode. If you want to work for them (and get paid tons of money), study leetcode. This isn't a difficult Aristotelian ethics/morals question.
I read this, and intentionally did not read the replies below. You are so wrong. You can write a library, even an entirely new language from scratch, and you will still be denied employment for that library/language.
> 5 years ago you'd have a project like that, talk to someone at a company for like 30m-1hr about it, and then get an offer.
Based on my own experiences, that was true 25 years ago. 20 years ago, coding puzzles had become a standard part of interviewing, but they were pretty lightweight. 5 years ago (covid!), everything was leetcode just to get to the interview stage.
I guess I'm lucky I'm in the frontend webdev sphere instead of being a pure backend guy. I've had a couple of those live ones and just declined them. I did manage to implement a "snake" algorithm once but got rejected because I wasn't able to talk about time/space complexity.
As someone who's hired tens of engineers across multiple companies, it's bullshit on the hiring side too.
It was humbling having to explain to fellow adult humans that when your test question is based on an algorithm solving a real business problem that we work on every day, a random person is not going to implement a solution in one hour as well as we can.
I've seen how the FAANGs' interview processes account for those types of bias and mental blindness and are actually effective, but their solutions require time and/or money, so everywhere I've been implements the first 80% that's cheap and then skimps on the rest that makes it work.
>As someone who's hired tens of engineers across multiple companies
Any way to reach out? :)
I think it boils down to companies not wanting to burn money and time on training, and trying to come up with all sorts of optimized (but ultimately contrived) interview processes. Now both parties are screwed.
>It was humbling having to explain to fellow adult humans that when your test question is based on an algorithm solving a real business problem that we work on every day, a random person is not going to implement a solution in one hour as well as we can.
Tell me about it! Who were you explaining this to?
>The LC interviews are like testing how fast people can run 100m after practice
Ah, but, the road to becoming good at Leetcode/100m sprint is:
>a slow arduous never ending jog with multiple detours and stops along the way
Hence Leetcode is a reasonably good test for the job. If it didn't actually work, it would've been discarded by companies long ago.
Barring a few core library teams, companies don't really care if you're any good at algorithms. They care if you can learn something well enough to become world-class competitive. If you can show that you can become excellent at one thing, there's a good chance you can become excellent at another thing.
That's basically also the reason that many Law and Med programs don't care what your major in undergrad was, just that you had a very high GPA in whatever you studied. A decent number of Music majors become MDs, for example.
LC interviews were made popular by companies that were started by CS students because they like feeling that this stuff is important. They're also useful when you have massive numbers of applicants to sift through because they can be automated and are an objective-seeming way to discard loads of applicants.
Startups that wanted to emulate FAANGs then cargo-culted them, particularly if they were also founded by CS students or ex-FAANG (which describes a lot of them). Very, very few of these actually try any other way of hiring and compare them.
Being able to study hard and learn something well is certainly a great skill to have, but leetcode is a really poor one to choose. It's not a skill that you can acquire on the job, so it rules out anyone who doesn't have time to spend months studying something in their own time that's inherently not very useful. If they chose to test skills that are hard and take effort to learn, but are also relevant to the job, then they can also find people who are good at learning on the job, which is what they are actually looking for.
But why stop there? Why not test candidates with problems they have never seen before? Or problems similar to those of the hiring organization? Leetcode mostly relies on memorizing patterns with a shallow understanding, and mostly shows that candidates are good at gaming the format. Does that imply quality in any way? Some people argue that being willing to study for leetcode shows some virtue. I very much disagree with that.
I think you have a misunderstanding. Most companies that do LC-style interviews usually pose problems the candidate hasn't seen before.
Memorizing the Top 100 list from Leetcode only works for a few companies (notably and perplexingly, Meta) but doesn't for the vast majority.
Also, just solving the problem isn't enough to perform well on the interview. Getting the optimal solution is just the table stakes. There's communication, tradeoffs between alternative solutions, coding style, follow-up questions, opportunities to show off language trivia etc.
Memorizing problems is wholly not the point of Leetcode grinding at all.
In terms of memorizing "patterns", in mathematics and computer science all new discovery is just a recombination of what was already known. There's virtually no information coming from outside the system like in, say, biology or physics. The whole field is just memorized patterns being recombined in different ways to solve different problems.
It’s not about memorizing individual problems per se, but rather recognizing overall patterns and turning the process into a gameable endeavor. This can give candidates an edge, but it doesn’t necessarily demonstrate higher-level ability beyond surface familiarity with common patterns and the expectations around them. I’d understand the value if the job actually involved work similar to what's reflected in LeetCode-style problems, but in most cases, that couldn’t be further from reality. LeetCode serves little purpose beyond measuring a candidate’s willingness to invest time and effort. That’s the only real virtue it rewards. But ultimately, I believe LeetCode-style interviews are measuring the wrong metric.
>a candidate’s willingness to invest time and effort
I guess it's a matter of opinion but my point is, this is probably the right metric. Arguably, the kind of people who shut up and play along with these stupid games because that's where the money is make better team players in large for-profit organizations than those who take a principled stance against ever touching Leetcode because their efforts wouldn't contribute anything to the art.
Maybe yes, maybe not; I'm leaning not, but it's just an opinion. As a company, though, be careful what you wish for: these same candidates are often skilled at gaming systems and may leave your team as soon as they've extracted the benefits. They're likely more interested in playing the game than in seriously solving real-world problems.
Because chess is more unrelated to the job? It is easy to see that LeetCode problems are closer to a programmer's job than chess is.
But yeah, people used to ask that level of unrelated questions to programmers, and they were happy with the results. "Why are manhole covers round" etc. LeetCode style questions do produce better results than those, so that is why they use them.
To play the devil's advocate: being able to memorize patterns and recognize which patterns apply to a given problem is extremely valuable. Tons of software dev is knowing the subset of algorithms, data structures, and architecture that applies to a similar problem and being able to adapt it.
That's literally what CS teaches you too. Which is what "leetcode" questions are: fundamental CS problems that you'd learn about in a computer science curriculum.
It's called "reducing" one problem to another. We had a mandatory semester-long class that spent a lot of time on reducing problems: figuring out how you can solve a new type of question/problem with an algorithm or two that you already know.
Like showing that "this is just bin packing". And there are algorithms for that, which "suck" in the CS kind of sense but there are real world algorithms that are "good enough" to be usable to get shit done.
Or showing that something "doesn't work, period" by showing that the halting problem can be reduced to it (assuming that nobody has solved the halting problem yet - oh, and good luck btw. if you want to try ;) )
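To make the "this is just bin packing" point concrete, here is a minimal sketch of first-fit decreasing, the classic "good enough" heuristic for it (the item sizes are invented for illustration):

```python
# First-fit decreasing: not optimal (bin packing is NP-hard), but it provably
# uses at most roughly 11/9 of the optimal number of bins.
def first_fit_decreasing(sizes, capacity):
    """Pack item sizes into as few bins of the given capacity as possible."""
    bins = []  # remaining free space per open bin
    for size in sorted(sizes, reverse=True):
        for i, free in enumerate(bins):
            if size <= free:
                bins[i] = free - size   # fits: use the first bin with room
                break
        else:
            bins.append(capacity - size)  # no bin fits: open a new one
    return len(bins)

print(first_fit_decreasing([4, 8, 1, 4, 2, 1], capacity=10))  # -> 2 (8+2, 4+4+1+1)
```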
I did quite a bit of competitive programming in school, and pretty much all the world-class competitive problems are reduced to well-known algorithms. It's quite hard to come up with something new (not proven to be unsolvable for its constraints). I believe problem setters just try to disguise a known algorithm as much as possible.
Then comes the ability/memorization to actually code it, e.g. if I knew it needed coding a red-black tree I wouldn't even start.
In math, you usually need to prove said simplifications. So just memorizing is not enough. As you get more advanced, you then start swapping out axioms.
When I look at the messy Android code, Fuchsia's commercial failure, Dart being almost killed by politics, Go's marvellous design, WinUI/UWP's catastrophic failure, how C++/CX got replaced with C++/WinRT, the ongoing issues with macOS Tahoe, ...
I am glad that apparently I am not good enough for such projects.
> If it didn't actually work, it would've been discarded by companies long ago
You're assuming that something else works better. Imagine if we were in a world where all interviewing techniques had a ton of false positives and negatives without a clear best choice. Do you expect that companies would just give up, and not hire at all, or would they pick based on other factors (e.g. minimizing the amount of effort needed on the company side to do the interviews)? Assuming you accept the premise that companies would still be trying to hire in that situation, how can you tell the difference between the world we're in now and that (maybe not-so) hypothetical one?
I never made any claims about optimality. It works (for whatever reason), hence companies continue to use it.
If it didn't work, these companies wouldn't be able to function at all.
It must be the case that it works better than running an RNG on everyone who applied.
Does it mean some genius software engineer who wrote a fundamental part of the Linux kernel but never learned about Minimum Spanning Trees got filtered out? Probably. But it's okay. That guy would've been a pain in the ass anyway.
> If it didn't actually work, it would've been discarded by companies long ago.
This that I've singled out above is a very confident statement, considering that inertia in large companies is proverbial at this point. Further, "work" could conceivably mean many things in this context, from "merely narrows our massive applicant pool" to "selects for factor X," X being clear only to certain management in certain sectors. Regardless, I agree with those who find it obvious that LC does not ensure fitness for almost any real-world job.
> Hence Leetcode is a reasonably good test for the job. If it didn't actually work, it would've been discarded by companies long ago.
I see it differently. I wouldn't say it's reasonably good, I'd say it's a terrible metric that's very tenuously correlated with on the job success, but most of the other metrics for evaluating fresh grads are even worse. In the land of the blind the one eyed man is king.
> If you can show that you can become excellent at one thing, there's a good chance you can become excellent at another thing.
Eh. As someone who did tech and then medicine, a lot great doctors would make terrible software engineers and vice versa. Some things, like work ethic and organization, are going to increase your odds of success at nearly any task, but there's plenty other skills that are not nearly as transferable. For example, being good at memorizing long lists of obscure facts is a great skill for a doctor, not so much for a software engineer. Strong spatial reasoning is helpful for a software developer specializing in algorithms, but largely useless for, say, an oncologist.
> Hence Leetcode is a reasonably good test for the job. If it didn't actually work, it would've been discarded by companies long ago.
This is an appeal to tradition and a form of survivorship bias. Many successful companies have ditched LeetCode and have found other ways to effectively hire.
> If you can show that you can become excellent at one thing, there's a good chance you can become excellent at another thing.
My company uses LeetCode. All I want is sane interfaces and good documentation. What we get is far more likely to be something clever, broken and poorly documented than something "excellent", so something is missing from this correlation.
Mistakenly read this as you wrote that 2D game engine (which looks awesome btw) for a job interview to get the job: "I can't compete with this!!! HOW CAN I COMPETE WITH THESE TYPES OF SUBMISSIONS!?!?! OH GAWD!!!"
And nowadays people are blatantly using AI to answer questions like this (https://www.finalroundai.com/coding-copilot). Even trying to stumble through design questions using AI
100%. I just went through an interview process where I absolutely killed the assignment (had the best one they'd seen), had positive signal/feedback from multiple engineers, CEO liked me a lot etc, only to get sunk by a CTO who thought it would be cool to give me a surprise live test because of "vibe coding paranoia". 11 weeks in the process, didn't get the role. Beyond fucking stupid.
It's funny because this repo really does seem vibe-coded. Obviously I have no reason not to believe you, but man! All those emojis in the install shell script - I've never seen anyone other than an AI do that :) Maybe you're the coder that the AI companies trained their AI on.
There's even a rocket emoji in server console.logs... There are memes with ChatGPT and rocket emojis as a sign of AI use. The whole repo looks super vibe-coded, emojis, abundance of redundant comments, all in perfect English and grammar, and the readme also has that "chatty" feel to it.
I'm not saying that using AI for take-home assignments is bad/unethical overall, but you need to be honest about it. If he was lying to them about not using any AI assistance to write all those emojis and the folder-structure map in the repo, then the CTO had a good nose and rightfully caught him.
As a big believer in documentation and communication in general, there's this inevitable double-bind that people hate whatever you give them and also hate it if you give them nothing. LLMs have made this worse.
No emojis and any effort to be comprehensive? Everyone complains "what is this wall of text", or "this is industry not grad school so cut it out with the fancy stuff" or "no one spends that much time on anything and it must be AI generated". (Frequently just a way of saying that they hate to read, and naively believe that even irreducibly complex stuff is actually simple).
Stuff that's got emojis, a friendly casual tone and isn't information dense? Well that's very chatty and cute, it also has to be AI and can't be valuable.
Since you can't win with docs, the best approach is to produce high-quality diagrams that are simultaneously useful for a wide audience from novice to expert. The only problem is that even a ratio of 1 diagram per 1k lines of code is still very time consuming if you're putting lots of thought into it, doubly so if you're fighting the diagramming tools, or if you want something that's easy for multiple stakeholders with potentially very different job descriptions to take in. Everyone will call it inadequate, ask why it took so long, and ask for the missing docs that they will hate anyway!
On the bright side, LLMs are pretty great at generating mermaid, either from code or from natural language descriptions of data-flows. Diagrams-as-code, without needing a whole application UI or one of your org's limited lucid-chart licenses, makes "Don't like it? Submit a PR" a pretty small ask. Skin in the game helps to curb endless bike-shedding criticism.
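For anyone who hasn't tried it, a usable data-flow diagram really is just a few lines of text. A minimal mermaid sketch (the component names are illustrative, not from any real system):

```mermaid
flowchart LR
    Client -->|HTTP request| API
    API -->|query| DB[(database)]
    API -->|enqueue job| Queue
    Queue --> Worker
    Worker -->|write results| DB
```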
> No emojis and any effort to be comprehensive? Everyone complains "what is this wall of text", or "this is industry not grad school so cut it out with the fancy stuff" or "no one spends that much time on anything and it must be AI generated". (Frequently just a way of saying that they hate to read, and naively believe that even irreducibly complex stuff is actually simple).
> Stuff that's got emojis, a friendly casual tone and isn't information dense? Well that's very chatty and cute, it also has to be AI and can't be valuable.
As a counterpoint, I can confidently say that I've never once had anyone give any feedback to me on the presence or absence of emojis in code I've written, whether for an interview, work, or personal projects, and I've never had anyone accuse my documentation of being AI generated or gotten feedback in an interview that my code didn't have enough documentation. There's a pretty wide spectrum between "indistinguishable from what I get when I give an LLM the same assignment as my interviewee" and "lacking any sort of human-readable documentation whatsoever".
If you're using AI for an interview, you are basically telling them "you could just not bother with hiring me and use AI yourself" which is neither good for you nor them.
Oh my god Becky, there's even a rocket emoji in the server console logs!
Should I also be "honest" about tab-completion? Where do you draw the line? Maybe I should be punished for having an internet connection too. Using AI for docker/readmes/simple scaffolding I would have done anyways? Oh the horror!
There was no lying because there was no discussion or mention of AI at all. Had they asked me, I'd have happily told them yes I obviously use AI to help me save time on grunt-work, I've been doing this stuff for like 15 years.
It's an unpaid take-home assignment. You'd have to be smoking crack to think that I would be rawdogging this. Imagine if I had a family or a wife or an existing job? I'd dump them after getting linked their assignment document.
Honestly at this point in the AI winter if you are a guy who has AI-inspired paranoia then I don't want to work for you because you are not "in the know".
Your Hacker News profile says you're the founder of an AI company, and your take-home looks completely vibe-coded. Why in the world are you surprised that a hiring manager is a little suspicious about your coding skills?
Given what you’ve said in your other comments, it seems like you used AI in a way that I wouldn’t have a problem with but just briefly looking through I can see how it would look suspicious.
That's all well and good. Totally ask me about AI, I can talk a lot about it. Don't however, make me go through 99% of the interview process up until the very last stage (spanning weeks), and throw a live test in my face, and then have the hiring manager clarify that it's about "vibe coding paranoia". It negates the entire reason I did the take-home assignment.
> Should I also be "honest" about tab-completion? Where do you draw the line?
I'd probably draw it somewhere in the miles-long gap between tab completion and generating code with an LLM. It sounds like that's where the company drew it too.
I used AI for the Docker setup which I've already done before. I'm not wasting time on that. Yeah you can vibe code basic backend and frontend and whatnot, but you're not going to vibe code your way to a full inverse kinematics solution.
I'm not a math/university educated guy so this was truly "from the ground up" for me despite the math being simple. I was quite proud of that.
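(For the curious, the core loop of a 2D FABRIK-style solve really is just a few vector operations. A hedged sketch; the three-link arm and target below are invented for illustration, and it omits real-world concerns like joint limits and unreachable targets:)

```python
import math

def _dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def _lerp(a, b, t):
    # point at fraction t of the way from a to b
    return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))

def fabrik(joints, lengths, target, tol=1e-4, max_iter=100):
    """2D FABRIK: drag the chain onto the target, then re-anchor the base."""
    base = joints[0]
    for _ in range(max_iter):
        joints[-1] = target  # backward pass: pin the end effector to the target
        for i in range(len(joints) - 2, -1, -1):
            t = lengths[i] / _dist(joints[i + 1], joints[i])
            joints[i] = _lerp(joints[i + 1], joints[i], t)
        joints[0] = base     # forward pass: pin the base back where it belongs
        for i in range(len(joints) - 1):
            t = lengths[i] / _dist(joints[i], joints[i + 1])
            joints[i + 1] = _lerp(joints[i], joints[i + 1], t)
        if _dist(joints[-1], target) < tol:
            break
    return joints

# a three-link arm lying along the x-axis, reaching for a point within reach
arm = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
print(fabrik(arm, lengths=[1.0, 1.0, 1.0], target=(1.5, 1.5)))
```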
So what was the issue the CTO had with vibe coding? Had you disclosed to them that you used LLMs for coding "basic" features outside the math and whatnot?
The hiring manager told me that they were getting a lot of noise in their hiring: they'd bring someone on-site who had submitted a good assignment and, apparently more often than not, these candidates would shit the bed in a live environment. So the CTO made a surprise live test and didn't tell anyone. I was told that he did this to weed out the low-signal people they had dealt with recently.
>Had you disclosed to them that you used LLMs for coding "basic" features outside the math and whatnot?
No, it seems completely immaterial. I'll happily talk about it if asked, but it's just another tool in the shed. Great for scaffolding, but it makes me want to rip my hair out more often than not. If it doesn't one-shot something simple for me, it has no use, because it's infuriating to use. I didn't get into programming because I liked writing English.
Hah, I feel you there. Around 2 years ago I did a take-home assignment for a hiring manager (a scientist) at Merck. Part B of the assignment was to decode binary data, and there were 3 challenges: easy, medium and hard.
I spent around 40 hours of time and during my second interview, the manager didn't like my answer about how I would design the UI so he quickly wished me luck and ended the call. The first interview went really well.
For a couple of months, I kept asking the recruiter if anyone successfully solved the coding challenge and he said nobody did except me.
Out of respect, I posted the challenge and the solution on my github after waiting one year.
Part 2 is the challenging part; it's mostly a problem solving thing and less of a coding problem
That doesn't look too challenging for anyone who has experience in low-level programming, embedded systems, and reverse engineering. In fact, for me it'd be far easier than part 1, as I've done plenty of work similar to part 2, but not part 1.
That sucks so hard man, very disrespectful. We should team up and start our own company. I tried checking out your repo but this stuff is several stops past my station lol.
A surprise live test is absolutely the wrong approach for validating whether someone's done the work. IMO the correct approach is to go through the existing code with the applicant and have them explain how it works. Someone who used AI to build it (or in the past had someone else build it for them) wouldn't be able to do a deep dive into the code.
We did go into the assignment after I gently bowed out of the goofy live test. The CTO seemed uninterested and unfamiliar with it after returning from a 3-week vacation in the middle of the whole process. I waited. I was happy to run him through it all. We talked about how to extend this to a real-world scenario and all that, and I did fantastically well at it.
I feel your pain. This isn't a question of AI or not. It's about whether you can do the work and do it well. This kind of nonsense happened before AI too. If you can't win the game of Jeopardy you don't get the job, even though the job has nothing to do with being a Jeopardy contestant!
It isn't impressive to spend a lot of time on a hiring problem; you shouldn't do that. If you can't do it in a few hours then just move on and apply for another job, you aren't the person they are looking for.
Doing it slowly over many days only costs you your time and probably won't get you the job anyway, since the solution will be a hard-to-read mess compared to that of someone who solves it quickly because they are familiar with the domain.
No. Should I invoice them? I'm still livid about it. The kicker is the position pays 60-120k euros, the top of that range being what I made 5 years ago.
Damn... that's WAY more than I'll do for an interview process assignment... I usually time-box myself to an hour or two max. I think the most I did was a tic-tac-toe engine, but I ran out of time before I could make a UI for it.
I put absolutely every egg into that basket. The prospect of working in Europe (where I planned to return to eventually) working on cool robot stuff was enticing.
The fucking CTO thought I vibe-coded it and dismissed me. Shout-out to the hiring manager though, he was real.
Yes. I put a ton of work into it. I had about 60 pages worth of notes: on inverse kinematics, FABRIK, cyclic algorithms used in robotics, A*/RRT for real-world scenarios, etc. I was super prepared. Talked to the CEO for about two hours. Took notes on all the videos I could find of team members on YouTube and on their company.
Luckily the hiring manager called me back and levelled with me, nobody kept him in the loop and he felt terrible about it.
Some stupid contrived dumbed down version of this crane demo was used for the live test where I had to build some telemetry crap. Nerves took over, mind blanked.
At this rate I'm probably going to starve to death before I get a job. Should I write a blog post about my last 2 years of experiences? They are comically bad.
This was for monumental.co - found them in the HN who's hiring threads.
This never happened to me in a job interview before I turned 40. But once I knew I was too old to look the part, and therefore had to knock it out of the park, mind blank came roaring in. I have so much empathy now for anyone it ever happened to back when I was the one giving the job interview. Performing under that kind of pressure has nothing to do with actual ability to do the job.
I feel bad for you, and I support you in naming and shaming this company. It's just horseshit to jerk people around like that.
I hope you can at least leverage this demo. Maybe remove the identifying details and put it in your CV as a "hobby project"? It looks pretty good for that.
They probably think they are geniuses who "weeded out another AI guy!" High fives all around! It was a great process (for me) right up until it wasn't.
I find it's less about the salary than it is the type of company. Any startup doing anything they consider remotely "cutting edge" is going to probably be a shit show.
It's not really memorizing solutions. Yes, you can get quite far by doing so, but follow-ups will trip people up. However, if you have memorized it and can answer the follow-ups, I don't see a problem with Leetcode-style problems. Problem solving is about pattern matching, and the more patterns you know and can match against, the better your ability to solve problems.
It's a learnable skill, and better to pick it up now. Personally, I've solved Leetcode-style problems in interviews which I hadn't seen before, and some of them were dynamic programming problems.
These days it's a highly learnable skill, since GPT can solve many of the problems while also coming up with very good explanations of the solutions. Better to pick it up than not.
It is and isn't. I'd argue it's not memorizing exact solutions (think copy-paste) but memorizing the fastest algos to accomplish X.
And some people might say, well, you should know that anyways. The problem for me is, and I'm not speaking for every company of course, you never really use a lot of this stuff in most run of the mill jobs. So of course you forget it, then have to study again pre interview.
Problem solving is the best way to think of it, but it's awkward for me (and probably others) to spend minutes thinking while feeling pressured as someone just stares at you. And that's where memorizing the hows of typical problems helps.
That said, I just stopped doing them altogether. I'd passed a few doing the 'memorizing' described above, only to start the job and realize it wasn't at all relevant to the work we were actually doing. In that way I guess it's a bit of a two-way filter now.
The only part of memorizing the fastest algorithms that the vast majority needs is whatever name they go by in your library. Generic reusable code works very well for algorithms in almost any language.
Even if you are an exception, either you are writing the library, meaning you write that algorithm once for the hundreds of other users, or the algorithm was written once (long ago) and you are just spending months with a profiler trying to squeeze out a few more CPU cycles of optimization.
There are more algorithms than anyone can memorize that are not in your library, but either it is good enough to use a similar one that already is in your library, or you will build it once and, once again, it works so you never go back to it.
Which is to say, memorizing how to implement an algorithm is a negative: it means you don't know how to write/use generic reusable code. This lack is costing your company hundreds of thousands of dollars.
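To illustrate with Python's standard library (my choice of language; the point holds almost anywhere): the "fastest algos" people grind to hand-code mostly already have a name you can call:

```python
import heapq                      # binary heap / priority queue
import bisect                     # binary search over a sorted sequence
from functools import lru_cache   # memoization, the engine behind much "DP"

nums = [9, 4, 7, 1]
heapq.heapify(nums)               # O(n) heap construction
smallest = heapq.heappop(nums)    # O(log n) extract-min

idx = bisect.bisect_left([1, 4, 7, 9], 7)  # O(log n) binary search

@lru_cache(maxsize=None)
def fib(n):                       # memoized recursion: O(n), not O(2^n)
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(smallest, idx, fib(50))     # 1 2 12586269025
```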
I’d say it’s not even problem solving; it’s more pattern recognition.
I actually love LC and have been doing a problem a week for years. Basically I give myself 30 minutes and see what I can do. It’s my equivalent to the Sunday crossword. After a while the signals and patterns became obvious, to me anyway.
I also love puzzlerush at chess.com. In chess puzzles there are patterns and themes. I can easily solve a 1600 rated problem in under 3 seconds for a chess position I’ve never seen before not because I solve the position by searching some move tree in my mind, I just recognize and apply the pattern. (It also makes it easier to trick the player when rushing but even the tricks have patterns :)
That said, in our group we will definitely have one person ask the candidate a LC style question. It will probably be me asking and I usually just make it up on the spot based on the resume. I think it’s more fun when neither one of us know the answer. Algorithm development, especially on graphs, is a critical part of the job so it’s important to demonstrate competency there.
Software engineering is a hugely diverse field now. Saying you’re a programmer is kinda like saying you’re an artist. It does give some information but you still don’t really know what skill set that person uses day to day.
I don't think most LC problems require you to do that. Actually, most of the ones I've seen only require basic concepts taught in Introduction to Algorithms, like shortest path, dynamic programming, binary search, etc. I think the only reason LC problems stress people out is the time limit.
I've never seen a leetcode problem that requires you to know how to hand code an ever so slightly exotic algorithm / data structure like Fibonacci heap or Strassen matrix multiplication. The benefit of these "fastest algos" is too small to be measured by LC's automatic system anyway. Has that changed?
My personal issue with LC is that it has a very narrow view of what "fast" programs look like, like most competitive programming problem sets. In real world fast programs are fast usually because we distribute the workload across machines, across GPU and CPU, have cache-friendly memory alignment or sometimes just design clever UI tricks that make slow parts less noticeable.
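For a sense of scale, the "basic concepts" ceiling mentioned above looks like this: breadth-first search for a shortest path in an unweighted graph (the toy graph is my own example):

```python
from collections import deque

def shortest_path_len(graph, start, goal):
    """Number of edges on a shortest path, or -1 if the goal is unreachable."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for nbr in graph.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, dist + 1))
    return -1

g = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(shortest_path_len(g, "a", "d"))  # -> 2
```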
> you never really use a lot of this stuff in most run of the mill jobs. So of course you forget it, then have to study again pre interview.
I'm wondering how software devs explain this to themselves. What they train for vs what they actually do at their jobs differ more and more with time. And this constant cycle of forgetting and re-learning sounds like a nightmare. Perhaps people burn out not because of their jobs but the system they ended up in.
"Fastest algos" very rarely solve actual business problems, which is what most of us are here to do. There's some specialized fields and industries where extreme optimization is required. Most of software engineer work is not that.
I'm fine with that in an interview... I'm not fine with it in a literally AI-graded assignment where you cannot ask clarifying questions. In those cases, if you don't have a memorized answer, a lot of the time I can't even grasp the question at hand.
I've been at this for 30+ years now, I've built systems that handle millions of users and have a pretty good grasp at a lot of problem domains. I spent about a decade in aerospace/elearning and had to pick up new stuff and reason with it all the time. My issue is specifically with automated leetcode pre-interview screening, as well as the gamified sites themselves.
I'd say that learning to solve tough LeetCode problems has very little (if not precisely zero) value in terms of you as a programmer learning to do something useful. You will extremely rarely need to solve these types of tougher select-the-most-efficient-algorithm problems in most real-world S/W dev jobs, and nowadays if you do, then just ask AI.
Of course you may need to pass an interview LeetCode test, in which case you may want to hold your nose and put in the grind to get good at them, but IMO it's really not saying anything good about the kind of company that thinks this is a good candidate filter (especially for more experienced ones), since you'd have to be stupid not to use AI if actually tasked with needing to solve something like this on the job.
If a position needed from-scratch low-level code so performance-critical, and needed it so quickly, that the developer had to recall all of this stuff from memory, a suitable candidate likely wouldn't be asked to sit a technical interview at all, let alone some gotcha test.
That was because the parent complained about not having good write-ups. You can use GPT, which has already been trained on publicly available solutions, to generate a very good explanation. Like a coaching buddy. Keeping in mind there are paid services that charge 15k USD for this type of thing, being able to upskill for just 20 bucks a month is an absolute steal.
Been in software development for 30 years. I have no idea what "Leetcode" is. As far as I know I've never been interviewed with "Leetcode", and it seems like I should be happy about that.
And when someone uses "leet" when talking about computing, I know that they aren't "elite" at all and it's generally a red flag for me.
Leetcode with no prep is a pretty decent coding skill test
The problem is that it is too amenable to prep
You can move your score by something like 2 standard deviations with practice, which makes the test almost useless in many cases
On good tests, your score doesn't change much with practice, so the system is less vulnerable to Goodharting and people don't waste/spend a bunch of time gaming it
I think LC is used mostly as a metric of how much tolerance you have for BS and unpaid work: if you are willing to put in unpaid time to prepare for something with realistically zero relevance to the day-to-day duties of the position, then you are ripe enough to be squeezed.
Cynical, but correct. I've long maintained that these trials, much like those we encounter in the school system, are only partially meant to test aptitude. Perhaps more importantly, they measure submissive compliance.
> On good tests, your score doesn't change much with practice, so the system is less vulnerable to Goodharting and people don't waste/spend a bunch of time gaming it
This framing of the problem is deeply troubling to me. A good test is one that evaluates candidates on the tasks that they will do at the workplace and preferably connects those tasks to positive business outcomes.
If a candidate's performance improves with practice, then so what? The only thing we should care about is that the interview performance reflects well on how the candidate will do within the company.
Skill is not a univariate quantity that doesn't change with time. Also it's susceptible to other confounding variables which negatively impact performance. It doesn't matter if you hire the smartest devs. If the social environment and quality of management is poor, then the work performance will be poor as well.
Leetcode just shows why interviews are broken. As a former senior dev (retired now, thanks to almost dying) I can tell you that the ability to write code is like 5% of the job. Every interview I've ever attended has wasted gazillions of dollars and robbed the company of 10X that amount.
Until companies can focus on things like problem solving, brainstorming, working as a team, etc. the situation won't improve. If I am wrong, why is it that the vast majority of my senior dev and dev management career involved the things I just mentioned?
(I had to leave the field, sadly, due to disability)
Oh, and HR needs to stop using software to filter. Maybe ask for ID or something; as it stands, the filters are flagging everyone and the software is sinking the ship with all of you on it.
> My biggest problem with leetcode type questions is that you can't ask clarifying questions.
What is there to clarify? Leetcode-type questions are usually clear, much clearer than in real life projects. You know the exact format of the input, the output, the range for each value, and there are often examples in addition to the question. What is expected is clear: given the provided example inputs, give the provided example outputs, but generalized to cover all cases of the problem statement. The boilerplate is usually provided.
One may argue that this is one of the reasons why leetcode-style questions are unrealistic: they are too well specified compared to real-life problems, which are often incomplete or even wrong and require you to fill in the gaps. Also, in real life, you may not always get to ask for clarification: "here, implement this", "but what about this part?", "I don't know, and the guy who knows won't be back before the deadline, do your best"
The "coin" example is a simplification, the actual problem statement is likely more complete, but the author of the article probably felt these these details were not relevant to the article, though it would be for someone taking the test.
These interviews seem designed to filter out applicants with active jobs. In fact, I'd say that they seem specifically for selecting new CS graduates and H1B hires.
But isn't that the main skill actually being tested? How the candidate goes about solving problems? I mean, if all we did was measure people's skill at making sweeping assumptions, we'd likely end up with people who oversimplify problems, and all of software would go to shit and get insanely complex... Is the hard part writing the lines of code or solving the problem?
> My biggest problem with leetcode type questions is that you can't ask clarifying questions. My mind just doesn't work like most do, and leetcode to some extent seems to rely on people memorizing leetcode type answers. On a few, there's enough context that I can relate the problem to real understanding, such as the coin example in the article... for others I've seen, there's not enough there for me to "get" the question/assignment.
The issue is that leetcode is something you end up with after discovery + scientific method + time, but there's no space in the interview process for any of that.
Your mind slides off leetcode problems because it reverses the actual on-the-job process and loses any context that'd give you a handle on the issue.
Where I interviewed, you had effectively 1 or 2 LC questions, but the interviewer offered clarifications, making for a real-time discussion and coding exercise.
This solves one problem, but having to live-code does add performance anxiety to the mix.
1. People can be hired to take the test for you - surprise surprise
2. It is akin to deciding if someone can write a novel from reading a single sentence.
Hiring people to take the test for you is only viable for an online assessment. For an onsite, it's very obvious if the candidates have cheated on the OA. I've been on the other side and it's transparent.
> It is akin to deciding if someone can write a novel from reading a single sentence.
For most decent companies, the hiring process involves multiple rounds of these challenges along with system design. So it's like judging writing ability by having candidates actually write and come up with sample plots. Not a bad test.
If they are on site, why not interview them? If the purpose of these online assessments is to be the mouth of the funnel, that process is starting to fail.
Personally I feel software development has become more or less like assembly line work. If I was starting out today I would seriously consider other options.
> My biggest problem with leetcode type questions is that you can't ask clarifying questions.
Huh? Of course you can. If you're practicing on leetcode, there's a discussion thread for every question where you can ask questions till the cows come home. If you're in a job interview, ask the interviewer. It's supposed to be a conversation.
> I wouldn't even mind studying on leetcode-type sites if they actually had decent explainers
If you don't find the hundreds of free explanations for each question to be good enough, you can pay for Leetcode Pro and get access to editorial answers which explain everything. Or use ChatGPT for free.
> It's not a matter of skill, it's just my ability to take in certain types of problems doesn't work well.
I don't mean to be rude, but it is 100% a matter of skill. That's good news! It means if you put in the effort, you'll learn and improve, just like I did and just like thousands and thousands of other humans have.
> Without any chance of additional info/questions it's literally a setup to fail.
Well with that attitude you're guaranteed to fail! Put in the work and don't give up, and you'll succeed.
Last year, I saw a lot of places do effectively AI/automated pre-interview screenings with a leetcode web editor and video capture... This is what I'm talking about.
I'm fine with hard questions in an actual interview.
> My biggest problem with leetcode type questions is that you can't ask clarifying questions.
Yeah, this one confused me. Not asking clarifying questions is one of the surest ways of failing an interview. Kudos if the candidate asks something that the interviewers haven't thought of, although that's rare, as most problems go through a vetting process (along with leak detection).
How does asking clarifying questions work when a non-programmer is tasked with performing the assessment, because their programmers are busy doing other things, or find it degrading and pointless?
Many interviews now involve automated exercises on websites that track your activity (don't think about triggering a focus change event on your browser, it gets reported).
Also, the reviewer gets an AI report telling them whether you copied the solution from somewhere (expressed as a % probability).
You have a few minutes and you're on your own.
If you pass that abomination, then maybe you get the in-person ones.
It's ridiculous what software engineers impose on their peers when hiring; ffs, lawyers, surgeons, and civil engineers get NO practical or theoretical test, none.
The major difference between software devs and lawyers, surgeons, and civil engineers is that the latter three have fairly rigorous standards to pass to become a professional (bar, boards, and PE).
That could exist for software too, but I'm not sure HN folks would like that alternative any better. Like if you thought memorizing leetcode questions for 2 weeks before an interview was bad, well I have some bad news.
Maybe in 50-100 years software will have that, but things will look very different.
Accountants have to sit for the CPA exams (four of them), and depending on the state may have a required graduate course load. And also you should interview your CPA, because a lot are not very good at whatever specific section of accounting you need (e.g. tax filing).
Plumber is probably the closest to what you're getting at. They are state licensed typically, with varying levels of requirement. But the requirement is often just like "have worked for 2-4 years as a trainee underneath a certified plumber" or whatever. That would be closest to what I'm guessing you would be recommending?
Also relevantly: the accountant and plumber jobs that are paying $300k-$500k+ are very rare. There exist programming jobs that pay what a typical plumber makes, but don't have as many arcane interview hoops to jump through.
At least in the US, lawyers, surgeons, & civil engineers all have accredited testing to even enter the profession, in the form of the bar exam, boards, and FE & PE tests respectively. So they do have such theoretical tests, but only when they want to gain their license to practice in a given state. Software doesn't have any such centralized testing accreditation, so we end up with a mess.
One can type in devtools without having focus on devtools, but indeed, to track down the event, one has to lose focus for a while. But after you find out what line of JS is needed, you can just inject it without devtools, with Greasemonkey for instance.
A general solution probably exists, too... and there are actually extensions that will do this in general.
I feel like if I'm being asked this in an interview, they're not asking me to use a constraint solver, they're asking me to _write_ a constraint solver. Just for a specific constraint problem, not a more general constraint solver.
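(And to be clear about what that means in practice: such a problem-specific "solver" is usually plain backtracking. A sketch using N-queens as a stand-in constraint problem; my example, not one from the thread:)

```python
def solve_n_queens(n, cols=()):
    """Yield solutions as tuples where cols[r] is the queen's column in row r."""
    row = len(cols)
    if row == n:
        yield cols
        return
    for col in range(n):
        # constraint check: no shared column or diagonal with earlier rows
        if all(col != c and abs(col - c) != row - r for r, c in enumerate(cols)):
            yield from solve_n_queens(n, cols + (col,))

print(next(solve_n_queens(8)))  # one valid 8-queens placement
```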
You're right, but that just shows how fundamentally silly this interview approach is.
In any real engineering situation I can solve 100% of these problems. That's because I can get a cup of coffee, read some papers, look in a textbook, go for a walk somewhere green and think hard about it... and yes, use tooling like a constraint solver. Or an LLM, which knows all these algorithms off by heart!
In an interview, I could solve 0% of these problems, because my brain just doesn't work that way. Or at least, that's my expectation: I've never actually considered working somewhere that does leetcode interviews.
I was told to use ANY language in an interview. I asked them if they were sure, and then solved it with J. They were not too pleased and asked me if I could use another language, so I did Prolog and we moved on to the next question. Then the idiot had the audacity to say I should not use "J and Prolog" but any commonly known language. I asked if assembly was fine, and they said no. Perhaps Python or JavaScript. I did the rest in Python; needless to say, I didn't get the job. :-)
If the candidate asks if you're sure you want them to use any language and you say "yes", and then get pissy when they do, the candidate isn't the one who sabotaged anything and they're dodging a bullet if they "fail".
I feel like I'm entering a whole different universe on HN. Maybe things are this equal and fair on the senior, high-paying part of the spectrum that most people here seem to occupy, but in general there's a huge power imbalance in job interviews. Unless you're special and the company wants you in particular, it costs them nothing to turn you down in favor of the other 10000 perfect applicants, while you must find a job to survive.
As someone just starting out, the general feeling among my peers is that I must bend to the interviewer's whims, any resistance or pushback will get you rejected. If this is dodging a bullet, then the entire junior field is a WW1 trench, at least where I am. Why would a company hire someone who gets 9/10 on the behavioral portion when they have a dozen other 10/10 candidates? Of course when the interviewer asks me to use "any language", I'll assume they want Python or Java or C++ or Rust, not Bash or ALGOL 68. Stepping out of line would just be performatively asking them to reject me.
> I'll assume they want Python or Java or C++ or Rust, not Bash or ALGOL 68.
I've solved interview questions with one line of Bash before and gotten an offer. The question was something like "count all the files in this folder with a name ending in X". The interviewer was happy I had a quick solution and they could move on to talking about something more interesting.
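(In Python, for comparison, it's also nearly a one-liner; "X" stands in for whatever suffix was actually asked about:)

```python
from pathlib import Path

# count regular files in the current folder whose names end in "X"
print(sum(1 for p in Path(".").iterdir() if p.is_file() and p.name.endswith("X")))
```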
I agree that doing that without asking if they really mean "any" would in fact demonstrate traits that might be bad for a co-worker.
If the candidate reads that this may be the case, asks (for, obviously, that reason) whether they really mean "any", and the interviewer confirms it, then the interviewer getting upset over the choice is a red flag about them, at least as a co-worker, unless the candidate is obviously taking the piss, like using Brainfuck (the later suggestion of assembly probably counts as this, but at that point the interviewer[s] had already failed the interviewee's test of them, so, whatever)
But yes, if you're desperate for a job you should indeed just ignore any red flags and do your best to fit the perfect-cog mold and do whatever emotional labor is required to seem the way you think they want you to be, and take whatever abuse they offer with a smile. That's true.
Yeah, I don't mean to justify the actions of the interviewer, they were likely in the wrong here. It's just that, to someone in my position, it seems almost funny to be willing to throw the entire interview over something like that. It's them who gets to decide your fate.
Also, we can't know what exactly was said, so maybe miscommunication could be partly to blame. Like, "Are you sure I can use any language? (Are you really so gracious as to give me this option?)" vs. "Are you sure I can use any language? (Can I use something you definitely don't know?)"
> Of course when the interviewer asks me to use "any language", I'll assume they want Python or Java or C++ or Rust, not Bash or ALGOL 68.
When I did interviews, I used to ask for “any imperative language”. Most people chose C or Java, some chose e.g. Python and the best solutions looked very different from the C/Java ones. I did not deduct points for either; a good solution is a good solution.
I once had a candidate that chose Oberon, because it was the only language they felt comfortable with (by their own account). They fell through on the interview for other reasons, but this seriously made me consider to what degree they had any programming experience at all outside a few select school assignments.
Independent of that, if someone came with a solution in a constraint solver, my next question would be (as it usually was, regardless of approach) “and what is the runtime complexity of your solution?” and I'd be impressed if they had any nonobvious thoughts about that!
> the general feeling among my peers is that I must bend to the interviewer's whims
This is just conflict avoidance and naivety. After a while you start to realize that there's a whole world of people just like on HN and *we hire people too*. No matter what you do, you'll end up in the place you deserve. If you try to be sneaky, you will end up working for people who are either easily fooled or see right through how to exploit you. If you let your nerd shine, you'll end up with people who love your nerdiness.
> After a while you start to realize that there's a whole world of people just like on HN and we hire people too. No matter what you do, you'll end up in the place you deserve.
I mean, I'm hoping for that too. But it also feels like this only applies as long as there's a balance of likeminded people who are already in the industry vs. the people looking to get a job. For someone like me, without a real network, meeting a person like the kind you mention is extremely unlikely. Even then, most of these people are looking for more qualified candidates, since there's an overabundance of juniors and seniority is a good predictor for being really passionate about their field. So, maybe I'll figure that out someday, but right now I just need a job, and what people in my cohort do is a way to try and get a job at all costs.
When I say "any language" when interviewing candidates, I mean it. I would be stoked if someone busted out J in an interview.
Of course, my team also writes SDKs in a bunch of different languages, so it makes sense. Even if that weren't the case though, I'd be stoked. To your point though, early in your career, I get your viewpoint. It's hard out there to get a foot in the door and you have to seize opportunities.
> As someone just starting out, the general feeling among my peers is that I must bend to the interviewer's whims, any resistance or pushback will get you rejected.
But interviews are bidirectional. The company is deciding if they want me, and I’m deciding if I want them. If I chose to use Self or Forth as the whiteboard context for the conversation we’re having, it’s deliberately to make the interviewer think, and hopefully learn. If the experience of thinking differently about a problem (that they chose!) and learning something new is a negative signal to them, that’s fine —- it being a negative signal to them is a negative signal to me, and I don’t want to be there anyway! If they’re excited, and intrigued, and give “12 o’clock” feedback — well, that’s the team I want to work with. So I’ve helped us both accomplish our goals (making accurate assessments about fit), and aligned our metrics along the way.
> Unless you're special and the company wants you in particular, it costs them nothing to turn you down in favor of the other 10000 perfect applicants, while you must find a job to survive.
This is not what you see in practice. When you're the one trying to hire, the view is very different, in my experience. Every candidate has strengths and flaws; it's much more of a... constraint problem!
The idea that there even exists a perfect candidate is one of the biggest issues with hiring practices in tech these days.
I, for one, would be extremely impressed by a candidate breaking out J or Prolog for a constraint problem. But I'm also not a typical hiring manager for sure.
That is what people miss about interviews. Often when you interview you don't have reasonable leads on any other job, so you don't feel like there is a choice, since you likely need a job (unemployment rarely pays as well as one). However, interviews are not only about the company deciding whether they will hire you; they are also about whether you want to work there, and about convincing you to take the job if one is offered.
So make sure you use that "do you have any questions" time to ask questions! What is it really like to work there? How much notice do you need to give before taking vacation? Do they really give pay raises? How often do they lay people off? What is the dress code? Do they let you take time for your kids' school activities? And so on - these questions should be things that are important to you - find out.
In the best cases the interview is only about convincing you to take the offer - generally because someone who you worked with at a previous job said "hire this person" and they trust that person enough to not need any other interview. So keep your network open.
People don't miss that about interviews, they just know that the balance of power is so skewed that the interests of the employer become the only relevant part. The employer can keep going through hundreds of applicants until they find someone who's literally perfect in every single way, they have nearly unlimited time. Meanwhile, the applicants need a job now, any job at all, they're on a hard time limit until their money runs out.
I feel like in practice, unless you're an established, senior professional in a high-paying, in-demand field with a network to rely on, this would go something like:
> What is it really like to work there? How much notice do you need to give before taking vacation? Do they really give pay raises? How often do they lay people off? What is the dress code? Do they let you take time for your kids' school activities?
"Candidate ABC seems too demanding and picky, constantly inquiring about irrelevant specifics. They would be a bad fit for our company culture. I advise going with candidate XYZ instead."
> they just know that the balance of power is so skewed that the interests of the employer become the only relevant part
That happens since people only apply to very well paying jobs. If you apply to shit enough jobs they won't be asking hard questions, and those who offer shit jobs will say "all the power lies with the employees, I have no power to make them stay or apply, I am social and nice to them and they still reject my job offer!".
Just give the companies what they want and they all will want you, it is that easy. If you try to give them something they don't care about, like a hiring manager giving you a smile and minimum wage, of course you will get rejected a lot. Give them what they ask for, not what you think they should want.
Maybe in some companies. No interviewer I've talked to has ever considered those a negative. Most don't even think of them at all once the interview is over. Of course I've always worked in companies where people work their 8 hours and go home to their family, and so you would be a good fit (depending on what you asked).
I know applicants need the job more than the company needs them. However you still have options if you don't get this one - you should always be following several leads until you finally get a job. Odds are your other leads are not anywhere close to as advanced as this one, but if you can wait a couple more months you have a chance.
Unless you are really desperate to find a job, there are definitely workplaces you would want to avoid. While a power imbalance does in principle exist, that doesn't mean you usually have no choice at all. Of course that is less the case when you are just starting out, but in general people can go around doing interviews and negotiating positions rather than just accepting the first offer.
I have to push back on the unlimited amount of time thing. Maybe in FAANG that’s true but in the places I’ve worked for, hiring is something that comes down from on high - someone tells us they need N bodies for some project, and we need to have a team hired by some deadline. We really can’t interview endlessly.
I don't mean that you're literally allowed to run interviews for years. I mean that companies can, if they choose to, interview people indefinitely until they find a suitable candidate. The company won't collapse if they don't find an employee by the deadline; it's not imperative to their existence, it's just a nice-to-have, a goal. Maybe some project or initiative doesn't pan out or gets pushed back if no one gets hired, but the impact of all that seems rather limited. On the other hand, my existence is fully contingent on finding a job, and if I overrun the deadline I have to find one by, I won't be able to eat or pay rent. My time limit is existential; their time limit is artificial and fully in the realm of planning.
> So make sure you use those "do you have any questions" time to ask questions!
I started giving interviews again and I'm surprised how many people don't ask anything. I'm an IC, not a hiring manager, only evaluating a specific thing (technical assessment), and still nothing really.
It just goes to show how skewed the power balance is right now. People are probably afraid to make an extra move that can deduct points for any obscure reason.
When I interview people I encourage them to ask any question they want and I make damned sure it doesn't reflect in my report to the higher-ups! Just imagine being in their shoes, you could be in the same position tomorrow!
This kind of tradeoff discussion is good to explicitly call out in an interview. I often say things like "if this were my own project I'd use X, but on a team I would probably try to find a library in a language the team already uses".
Bringing the team up on Prolog and integrating it into your CI/CD system and finding some way to connect it with other services is often going to be a poor choice, even if in isolation it's the very best tool for the job. And that's the best case solution - more likely the tests will be limited and not automated, the code review will be rubber stamp because only the author knows the language, and the code and deploy process will be a black box that everyone is afraid to touch once the author moves on.
Obviously in an interview none of the code should make it into production, but being openly pragmatic is still a good idea. And if you use an obscure language, you'd better have better than usual communication skills to concisely explain how the code works for someone who hasn't used that language before. I've seen it done well but it's difficult.
Why would you ever want to work somewhere that clearly employs such unqualified individuals? And not only that, but allows those individuals to be the face of their company to prospective hires?
A company's interview process tells you a lot about how the company thinks and operates. This one was surely a dumpster fire.
It goes without saying that someone needing money that badly wouldn't do what the OP here did. Stop trying to be right and start trying to see the world for what it is. It'll help you do better.
Sabotaging? The candidate learned that their interviewers, and probably the company as a whole, isn't curious about languages or stuff that is outside of their wheelhouse.
What if the interviewers decided to ask the candidate about their language choice and trade-offs between different languages? Wouldn't that actually give them more signals into the skill of the engineer, rather than just blindly following their script?
I haven't been asked leetcode questions in a while and when I was asked, it was an easy level problem. I don't know where they ask hard leetcode problems, I also never solved a hard leetcode problem on my own.
The purpose of coding questions should be a problem that you can solve in about 20 minutes; then they ask another, and then you get 20 minutes to either finish or talk about other things. If you ask questions where either someone knows the trick and passes, or they don't and fail, you don't learn much. You need to watch the person write code to see if they are reasonable about it.
I interviewed at an investment bank in London and they asked me pretty hard questions. One was to implement some multithreaded producer consumer thing in C++. I can't remember the details but it was... well you know how writing multithreaded C++ is. I was allowed to look up references at least. Took me maybe 20 minutes and the whole time the interviewer was just sitting on his phone while I wrote it.
Weird experience. Didn't get that job (probably for the best tbf).
If you wrote an MPSC queue (standard question) with a multithreaded demo in 20 minutes in C++, you're pretty hot shit, mate. Their loss. It's not that it's hard. But that speed without error is just really good. C++ is particularly unforgiving too.
I can't remember the exact problem or how long it took but it was definitely some awkward multithreading. I'd rate my C++ as pretty good but probably not hot shit!
I'm routinely asked LC Hard questions in interviews. Sometimes more than one in one 45 minute interview.
That said, I interview in silicon valley and I'm a mixed race American. (extremely rare here) I think a lot of people just don't want me to pass the interview and will put up the highest bar they can. Mind you, I often still give optimal solutions to everything within good time constraints. But I've practiced 1000+ problems and done several hundred interviews.
This is not how it works. The interviewer knows 1-2 problems and there is no time for profiling since they are rushing through their day, probably focused on their day to day work. You are the least of their concern, believe me.
Depends on your experience and what you’re interviewing for. At a high enough level, the questions are pulled from the easier side, and the interviewer doesn’t want you to fail.
More exactly, you can't invent algorithms on the spot that took others who knows how many years to invent. I.e. the question ends up being more about whether you know a specific algorithm, which turns into "invent it if you don't know about it". It's absolutely silly to test for the ability to invent one on the spot, so it's a pretty pointless interview question really.
I hate when it asks for a memorized specific problem, but most of the hard ones I've found need a clever twist on a well-known algorithm, and I still struggle at that too for hard LC.
This. Literally every problem in NP can be cast as a constraint problem. The question of whether a solver is the right solution varies a lot depending on the application, and in an interview, it's almost by definition not the right solution.
They can also be dreadfully slow (and typically are) compared to just a simple dynamic program.
This will be true in some interviews, but not in all.
I'm generally against using leetcode in interviews, but wherever I've seen it used it's usually for one reason & one reason alone: known dysfunctional hiring processes. These are processes where the participants in the hiring process are aware of the dysfunction in their process but are either powerless or - more often - too disorganised to properly reform the process.
Sometimes this is semi-technical director level staff leveraging HR to "standardise" interview techniques by asking the same questions across a wide range of teams within a large corp. Other times this is a small underresourced team cobbling together interview questions from online resources in a hurry, not having the cycles to write a tailored process for themselves.
In these cases, you're very likely to be dealing with a technical interviewer who is not an advocate of leetcode interviewing & is attempting to "look around" the standardised interview scoring approach to identify innovative stand out candidates. In a lot of cases I'd hazard even displaying an interest in / some knowledge of solvers would count significantly in your favour.
If someone solves a leetcode hard with a constraint solver and you don't hire them, you are an idiot.
Do you know how few people in this world even know what a constraint solver is, let alone how to correctly translate a problem into one?
I used a constraint solver to solve a homework problem once in my CS degree 3rd year. My god just writing the damn constraints was a huge cognitive load!
I did this: wrote an Essence-prime program to generate Minion solver code for a simple instance of the knapsack problem, as part of a startup's "solve one of these and get an interview" challenges. Because I had used those tools recently for a contract job (and had written/presented a paper at the invitation of the solver authors), I thought it would be fun and didn't really want the job. Got an interview, but every dev was like "why did you use a cannon to swat a fly?" and was clearly concerned that without strict supervision I would create baroque towers of garbage for them to clean up.
I would like to believe that most people capable of writing a solver would appreciate simple code. It's like when looking at ffmpeg or some physics engine code. You know you'll forget the details easily, so you make sure everything is as simple as it can be.
> If someone solves a leetcode hard with a constraint solver and you don't hire them, you are an idiot.
I do hope you're exaggerating here, but in case you aren't: this is an extremely simplistic view of what (software) engineers have to do, and thus of what hiring managers should optimize for. I'd put "ability to work in a team" above "raw academic/reasoning ability" for the vast majority of engineering roles, any day.
Not that the latter doesn't matter, of course, but it's by no means the one and only measure.
I don't. I do easy code interviews because there are people who work great on a team and know enough buzzwords to sound like they know how to write code, but cannot. Something that isn't hard to solve in about 20 minutes (I can solve in 5 - but I've seen a solution several times and so don't have to think about the solution), but is different enough that you haven't memorized the solution. If you can't solve an easy problem then you can't code.
I've won a couple hackathons with just CP-SAT & Linear Programming which led to my first jobs. I'm surprised not more people know/use it. Very inefficient compared to the "correct" answer but the development speed is much faster.
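To make that concrete, here's roughly what the article's coin change example looks like in CP-SAT (a sketch, not gospel: the function name and variable bounds are my own choices, and it assumes ortools is installed):

from ortools.sat.python import cp_model

def min_coins(denominations, target):
    # One integer variable per denomination: how many of that coin we use.
    model = cp_model.CpModel()
    counts = [model.NewIntVar(0, target, "n_%d" % d) for d in denominations]
    # The chosen coins must sum to exactly the target amount.
    model.Add(sum(c * d for c, d in zip(counts, denominations)) == target)
    # Objective: as few coins as possible.
    model.Minimize(sum(counts))
    solver = cp_model.CpSolver()
    status = solver.Solve(model)
    if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
        return sum(solver.Value(c) for c in counts)
    return None  # exact change is impossible

print(min_coins([25, 10, 5, 1], 37))  # 4 (quarter, dime, 2 pennies)

The selling point, as others note downthread, is that a new requirement like "at most two of any one coin" is a single extra model.Add() line rather than a rewrite.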
> If someone solves a leetcode hard with a constraint solver and you don't hire them, you are an idiot
Sometimes you just don't want someone that takes these shortcuts. I think being able to solve the problem without a constraint solver is much more impressive
This - the only downside to a constraint solver is it's usually slower. If you want them to write a fast algorithm, then specify that. Have an actual metric for it, if they can pass it with the declarative language, then great. If not, they should have written a more complicated algorithm.
Yes and no: I've asked questions like this in interviews, and I'd count it as a plus if the candidate reached for a constraint solver. They're criminally underused in real-world software engineering and this would show the candidate probably knows how to get the right answer faster instead of wasting a bunch of time.
Now, if they did answer with a constraint solver, I'd probably ask some followup whiteboard questions to make sure they do actually know how to code. But just giving a constraint solver as an answer definitely wouldn't be bad.
Yes, especially if the interviewee said something like 'this may not be asymptotically optimal, but if it's not a known bottleneck, then I might start with a constraint solver to get something working quickly and then profile later.' Especially if it's a case where even the brute-force solution is tricky.
Otherwise penalizing interviewees for suggesting quick-and-dirty solutions reinforces bad habits. "Premature optimization is the root of all evil," after all.
Using a bad algorithm when a good algorithm that is known to exist is premature pessimization and should be avoided.
There is some debate about what premature optimization is, but I consider it to be about micro-optimizations that are often doing things a modern compiler will do for you better than you can. All too often such attempts result in unreadable code that is slower because the optimizer would have done something different but now it cannot. Premature optimization is done without a profiler; if you have a profile of your code and can show a change really makes a difference, then it isn't premature.
On the other hand, job interviews imply time pressure. If someone isn't 100% sure how to implement the optimal algorithm without looking it up, brute force is faster and should be chosen. In the real world, if I'm asked to do something, I can spend days researching algorithms at times (though the vast majority of the time what I need is already in my language's standard library).
Sure, if a good algorithm exists and is simple to implement, then go for it. But if it is non-trivial, then you have to make a judgement call whether it is worth the trouble to solve in a more optimal way. You acknowledge yourself that this can take days.
Personally I really have to be disciplined about choosing what to optimize vs what to code up quick-and-dirty. There's always a temptation to write clean, custom solutions because that's more interesting, but it's just not a good use of time for non-performance critical code.
IMO premature optimization is normally one of two things:
1. Any optimization in a typical web development file where the process is not expected to be particularly complex. Usually a good developer will not write something very inefficient and usually bottlenecks come from other areas
2. Doing stuff like replacing a forEach with a for loop to be 0.5% faster
A general constraint solver would be terribly inefficient for problems like these. It's a linear problem, and a constraint solver just can't handle O(10^6) variables without some beefy machine.
FWIW, the OP's problem is not linear. It's an integer programming problem.
If you can't come up with the custom algorithm and using a library is not allowed during the interview, one trick is to be ready to roll your own DPLL-based solver (it can be done in ~30 LOC).
Less elegant, but it's a one-size-fits-all solution.
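For the curious, here's a minimal sketch of that idea in Python: the textbook recursive DPLL with naive unit propagation. Clauses are lists of signed ints (3 means "variable 3 is true", -3 means false); it's a toy, not an optimized solver.

def simplify(clauses, lit):
    # Assume lit is true: drop satisfied clauses, strip the negated literal.
    out = []
    for c in clauses:
        if lit in c:
            continue  # clause satisfied
        reduced = [l for l in c if l != -lit]
        if not reduced:
            return None  # empty clause: conflict
        out.append(reduced)
    return out

def dpll(clauses):
    # Returns a list of satisfying literals (free vars omitted), or None if UNSAT.
    if not clauses:
        return []
    # Unit propagation: a one-literal clause forces that literal.
    for c in clauses:
        if len(c) == 1:
            rest = simplify(clauses, c[0])
            if rest is None:
                return None
            sub = dpll(rest)
            return None if sub is None else [c[0]] + sub
    # Otherwise branch on the first literal of the first clause.
    lit = clauses[0][0]
    for choice in (lit, -lit):
        rest = simplify(clauses, choice)
        if rest is not None:
            sub = dpll(rest)
            if sub is not None:
                return [choice] + sub
    return None

dpll([[1, 2], [-1, 2]]) gives [1, 2]; dpll([[1, 2], [-1], [-2]]) gives None (unsatisfiable).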
Okay, but who says you need to use a simple constraint solver? There are various sophisticated constraint solvers that know how to optimize.
At this point, job interviews are so far removed from actual relevance. Experience and aptitude still matter a lot, but too much experience at one employer can ground people in rigid and limiting ways of thinking and solving problems.
The O stands for "Ordnung", the German word for order. So it does literally mean that, except mathematicians think that the order of f(x)=1 is the same as the order of f(x)=10^6, because "clearly" f(x)=x gets way bigger than any constant function.
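For reference, the formal definition is what washes the constants out: f is in O(g) iff there exist C > 0 and x_0 such that |f(x)| <= C * |g(x)| for all x > x_0. Take g(x) = 1 and every constant function qualifies, which is why O(1) and O(10^6) are the same class.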
In physics "order of" means "approximately" using something like a taylor series, which typically start with a constant, then move to higher polynomial terms which add smaller and smaller corrections. Similar, but different, I think...
Great insight. But this is sadly not applicable to interviews.
> It's easy to do in O(n^2) time, or if you are clever, you can do it in O(n). Or you could be not clever at all and just write it as a constraint problem
This nails it. The point of these problems is to test your cleverness. That's it. Presenting a not-clever solution of using constraint solvers shows that you have experience and your breadth of knowledge is great. It doesn't show any cleverness.
>The point of these problems is to test your cleverness.
In my experience, interviewers love going to the Leetcode "Top Interview 150" list and using problems in the "Array String" category. I'm not a fan of these problems for the kind of jobs I've interviewed for (backend Python mostly), as they are almost always a "give me an O(n) runtime, O(1) memory algorithm over this array" type challenge that really doesn't resemble my day to day work at all. I do not regularly do in-place array algorithms in Python because those problems are almost always handled by other languages (C, Rust, etc.) where performance is critical.
I wish interviewers would go to the "Hashmap" section for interviews in Python, JavaScript, etc., type of languages. They are much less about cleverness and more about whether you can demonstrate using the appropriate tools in your language to solve problems that actually do resemble ones I encounter regularly.
There's also the problem of difficulty tuning on some of these. Problem 169 (Majority Element) being rated "Easy" for getting a O(n) runtime O(1) memory solution is hilarious to me. The algorithm first described in 1981 that does it (Boyer–Moore majority vote algorithm) has a Wikipedia page. It's not a difficult to implement or understand algorithm, but its correctness is not obvious until you think about it a bit, at which point you're at sufficient "cleverness" to get a Wikipedia page about an algorithm named after you. Seems excessive for an "Easy" problem.
Interviews should not be about cleverness. They should test that you can code. I almost never write an algorithm because all the important algorithms are in my standard library already. Sure back in school I did implement a red-black tree - I don't remember if it worked, but I implemented it: I can do that again if you need me to, but it will take me several days to get all the details right (most of it looking up how it works again). I use red-black trees all the time, but they are in the language.
You need to make sure a candidate can program, so asking programming questions makes sense. However, the candidate should not be judged on whether they finish or get an optimal or even correct answer. You need to know if they write good code that you can understand, and are on a path such that, given a reasonable amount of time on a realistic story, they would finish it and get it correct. If someone has seen the problem before they may get the correct answer, but if they have not seen it they won't know it and shouldn't be expected to get the right answer in an hour.
These tests are programming tests, but also effectively IQ and conscientiousness tests in the same way that most of what people learn in college is pointless, but graduating with a 4.0 GPA is still a strong signal.
I will say, IME, it's pretty obvious when people have seen a problem before, and unless you work at a big company that has a small question pool, most people are not regurgitating answers to these questions but actually grappling with them in realtime. I say this as someone who has been on both ends of this, these problems are all solvable de novo in an hour by a reasonable set of people.
Leetcode ability isn't everything, but I have generally found a strong correlation between Leetcode and the coding aspects of on the job performance. It doesn't test everything, but nothing in my experience of hiring has led me to wanting to lower the bar here as much as raise the bar on all other factors that influence job performance.
Majority Element is rated easy because it can be trivially solved with a hashmap in O(N) space and that's enough to pass the question on Leetcode. The O(1) space answer is probably more like a medium.
Yeah it just depends on whether your interviewer considers that "solved". To test this out, I wrote a one-liner in Python (after imports) that solves it with a hashmap (Counter is a hashmap under the hood; most_common uses a heap queue to find the most common one):
return Counter(nums).most_common(1)[0][0]
And that's 50th percentile for runtime and memory usage. Here's another one-liner that's 87th percentile for time, because it uses built-in Python sorting, but 20th percentile for memory:
return sorted(nums)[len(nums) // 2]
But the interviewer might be looking for the best approach, which beats "100%" of other solutions in runtime per Leetcode's analysis:
m, c = -1, 0
for x in nums:
    if not c:
        m = x
        c = 1
    elif m == x:
        c += 1
    else:
        c -= 1
return m
If I were interviewing, I'd be happy with any of these except maybe the sorted() one, as it's only faster because of the native code doing the sort, which doesn't change that it's O(n log n) time and O(n) space. But I've had interviews where I gave answers that were "correct" to the assumptions and constraints I outlined but they didn't like them because they weren't the one from their rubric. I still remember a Google interview, in which we're supposed to "design to scale to big data", in which they wanted some fiddly array manipulation algorithm like this. I gave one that was O(n log n) but could be done in place with O(1) memory, and the interviewer said it was "incorrect" in favor of a much simpler O(n) one using dicts in Python that was O(n) memory. Had the interviewer specified O(n) memory was fine (not great for "big data" but ok) I would have given him the one liner that did it with dicts lol
I guess my point is that interviewers should be flexible and view it as a dialogue rather than asking for the "right answer". I much prefer "identify the bug in this self contained code snippet and fix it" type problems that can be completed in <15-30 minutes personally, but Leetcode ones can be fine if you choose the right problems for the job.
Honestly in day to day programming I find data types & associated APIs are so so much more important than algorithms.
I would rather work with a flexible data type with suboptimal performance than a brittle data type that maybe squeezes out some extra performance.
Your example of in-place array mutation feels like a good example of such a thing. I feel like there should be a category of interviewing questions for "code-safety" not just performance.
I would rather work with persistent data structures, the least brittle of all, which would also in many cases trivially allow me to parallelize the work. But as far as I can see, all the leetcode problems are low-level mutation-based problems with no clue about functional data structures. Clueless interviewers look to these problems as if they alone epitomized great programming, while they are often inflexible single-core stuff that may not even be appropriate for this day and age any longer.
> The point of these problems is to test your cleverness.
Last round I did at Meta it was clearly to test that you grinded their specific set of problems, over and over again, until you could reproduce them without thinking. It's clear because the interviewers are always a bit surprised when you answer with whatever is not the text-book approach on both leetcode and on the interview guide they studied.
Cleverness is definitely not high on the list of things they're looking for.
Cheekily using counting sort ended things the one and only time I agreed to interview with Meta. Definitely improved my inbox for a couple years though.
Bottom up dynamic programming algorithms require some cleverness.
All of the ones listed can be solved with a top-down dynamic programming algorithm. Which just means "write recursive solution, add caching to memoize it".
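For the coin change problem, for instance, that recipe comes out to something like this sketch (functools.lru_cache supplies the caching):

from functools import lru_cache

def min_coins(denominations, target):
    @lru_cache(maxsize=None)
    def solve(remaining):
        if remaining == 0:
            return 0  # exact change made
        # Brute force: try every coin and keep the best; the cache collapses
        # the exponential recursion tree down to O(target) distinct states.
        best = float("inf")
        for d in denominations:
            if d <= remaining:
                best = min(best, 1 + solve(remaining - d))
        return best

    result = solve(target)
    return result if result != float("inf") else None

print(min_coins((25, 10, 5, 1), 37))  # 4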
For some of these, you can get cleverer. For example the coin change problem is better solved with an A* search.
Still, very few programmers will actually need these algorithms. The top thing we need is to recognize when we accidentally wrote a quadratic algorithm. A quick scan of https://accidentallyquadratic.tumblr.com/ shows that even good people on prominent projects make that mistake on a constant basis. So apparently being able to produce an algorithm on the test, doesn't translate to catching an algorithmic mistake in the wild.
For the life of me I still can't consistently solve dynamic programming problems. Because "write a clever brute force solution that can be cached" is so broad that there are tons of variations out there, and a slight twist can throw you off fast.
Project Euler 18. I tried 3 heuristic approaches before accepting that, to get the real answer without brute-forcing it (because it comes back later in a non-brute-forceable version anyway), I needed to find another way. I came up with an optimal solution, but it is still not dynamic programming, which I would also consider inferior to the bottom-up solution I found.
When I interview with problem solving problems, the point is to understand how the candidate thinks, communicates, and decomposes problems. Critically, problem solving questions should have ways to progressively increase and decrease difficulty/complexity, so every candidate "gets a win" and no candidate "dunks the ball".
Interviewers learn nothing from an instant epiphany, and they learn next to nothing from someone being stumped.
Unfortunately, this is why we can't have nice things. Problem solving questions in interviews can be immensely useful tools that, sadly, are rarely used well.
> the point is to understand how the candidate thinks, communicates, and decomposes problems.
100% and it's a shame that over time this has become completely lost knowledge, on both sides of the interview table, and "leetcode" is now seen as an arbitrary rote memorization hurdle/hazing ritual that software engineers have to pass to enter a lucrative FAANG career. Interviewees grind problems until they've memorized every question in the FAANG interview bank, and FAANG interviewers will watch a candidate spit out regurgitated code on a whiteboard in silence, shrug, and say "yep, they used the optimal dynamic programming solution, they pass."
If somebody writes the optimal algorithm, that should be a negative unless their resume indicates they write that algorithm often. The only reason you should know any algorithm well enough to get it right is if your job is implementing the optimal version for every single language. Of course nobody maintains one algorithm in many different languages/libraries (say libc++, Python, Rust, Ada, Java - each has different maintainers), so I can safely say the number of people who should be able to implement your clever algorithm is zero. Now if your clever algorithm is in the language standard library (or another library they often use) they should be able to call/use it, though even then I expect them to look up the syntax in most languages.
I've probably implemented first-order Markov-chain text generation more than a dozen times in different languages, and earlier this week I implemented Newton–Cotes adaptive quadrature just because it sounded awesome (although I missed a standard trick because I didn't know about Richardson extrapolation). I've also recently implemented the Fast Hadamard Transform, roman numerals, Wellons–NRK hash tries, a few different variants of Quicksort (which I was super excited to get down to 17 ARM instructions for the integer case), an arena allocator with an inlined fast path, etc. Recently I wrote a dumb constrained-search optimizer to see if I could get a simpler expression of a word-wrap problem. I learned about the range-minimum-query algorithm during a job interview many years ago and ad-libbed a logarithmic-time solution, and since then I've found a lot of fascinating variants on the problem.
I've never had a job doing this kind of thing, and I don't expect to get one, just like I don't expect to get a job playing go, rendering fractals, reading science fiction, or playing video games. But I think there's a certain amount of transferable skill there. Even if what I need to do this week is figure out how to configure Apache to reverse proxy to the MediaWiki Docker container.
(I know there are people who have jobs hacking out clever algorithms on different platforms. I even know some of them personally. But there are also people who play video games for a living.)
> Critically, problem solving questions should have ways to progressively increase and decrease difficulty/complexity, so every candidate "gets a win" and no candidate "dunks the ball".
Absolutely agree. When I interview, I start with a simple problem and add complexity as they go. Can they write X? Can they combine it with Y? Do they understand how Z is related?
Same. I'm never doing a fail/pass type interview. Instead I try to assess where the candidate is on the beginner/intermediate/expert axis and match that with the expectations of the role I'm interviewing for.
> the point is to understand how the candidate thinks, communicates, and decomposes problems
Interviewers always say this, but consider: would you endorse a candidate who ultimately is unable to solve the problem you've presented them, even if they think, communicate, and decompose problems well? No interview in this industry prizes those things over getting the answer right.
Note how I structure my problem solving questions to be progressive and adjustable, both up and down. This gives me room to simplify and get the candidate to a place where they can show me something (candidates who truly come up goose eggs on everything functional but still show solid fundamentals may be showing that the interview is for the wrong job family). It also means that it is virtually impossible to get all the way to "the end" and "finish" the problem, as I leave room for extension and modification. I had one question that I thought was long enough, and, of maybe ~120 interviews with it, exactly two people dunked on it, one writing out code for solutions with and without libraries. That guy was a complete jerk, and I wasn't at all surprised when the entire panel came back not-inclined.
My first boss (a CTO at a start-up) drilled this into us. What you know is far less valuable than how you learn/think and how you function on a team.
Interesting. Sounds like you and other HN commentators from firms that interview better than the industry Leetcode convention oughta be on one of those workplace lists on GitHub (like this one: https://github.com/poteto/hiring-without-whiteboards) for applicants who want to go through a more interesting process.
Every interview I know of is severely time limited. I don't care if you can solve the problem, so long as you are clearly making progress and have proven you could solve the problem if given longer.
Now I give you problems I expect to take 20 minutes if you have never seen them before so you should at least solve 1. I have more than once realized someone was stuck on the wrong track and redirection efforts were not getting them to a good track so I switched to a different problem which they were then able to solve. I've also stopped people when they have 6 of 10 tests passing because it is clear they could get the rest passing but I wouldn't learn anything more so it wasn't worth wasting their time.
In the real world I'm going to give people complex problems that will take days to solve.
Would a good answer be "I can do it as a constraint problem, but since I guess you are not asking for this, the solution is..." and then proceed as usual?
I'd probably stop the candidate, dig into how they'd use constraint-based solvers, and how they might expect that to fall apart. Applicability and judgment are worth way more than raw algorithmic questions.
One way to think about this is:
Is a fresh graduate more likely to provide a solid answer to this than a strategic-thinking seasoned engineer? If so, just be conscious of what your question is actually probing.
And, yes, interview candidates are often shocked when I tell them that I’m fine with them using standard libraries or tools that fit the problem. It’s clear that the valley has turned interviewing into a dominance/superiority model, when it really should be a two-way street.
We have to remember that the candidate is interviewing us, too. I’ve had a couple of interviews as the interviewee where the way the interview was conducted was why I said “no” to an offer (no interest in a counter, just a flat “no longer interested” to the recruiter, and, yes, that surprises recruiters, too).
Constraint solvers are also often not applicable to the real world either.
Many formulations scale in a way that is completely unusable in practice.
Knowing how to get tools like Z3 or Gurobi to solve your problems is its own skill, and one that some companies will hire for, but it's not a general purpose technology you can throw at everything.
This post is the unironic version of "FizzBuzz in TensorFlow", where just because you have a big hammer doesn't mean everything is a nail. And I say that as an enjoyer of big hammers, including SMT solvers.
>The point of these problems is to test your cleverness.
No, it's just memorization of 12 or so specific patterns. The stakes are so high that virtually nobody going in will stake passing on their own inherent problem solving ability. LeetCode has been so thoroughly gamified that it has lost all power to differentiate beyond willingness to prepare.
Yeah, it tests if the candidate enjoys the programming-adjacent puzzle game of LeetCode, which is a perfectly decent game to play, but it is just a signal.
If somebody grinds LeetCode while hating it, it signals they are really desperate for a job and willing to jump through hoops for you.
If somebody actually enjoys this kind of stuff, that is probably a signal that they are a rare premium nerd and you should hire them. But they probably play Project Euler as well (is that still up?).
If somebody figures out a one-trick to minmax their LeetCode score… I dunno, I guess it means they are aware of the game and want to solve it efficiently. That seems clever to me…
I think this is one of the more true answers but can you be more specific?
Like in race? Like in wealth? Like in defection willingness? Like in corruption?
Asking for a friend who is regularly identified as among the most skilled but feels their career has been significantly derailed by this social phenomenon.
People decide who they like. I know some people who would never work with some group, but who have no problem with some other group.
In this case the group is people good at leetcode - the people I know of in that group are perfectly fine with any race so long as they can solve leetcode. There are people who care about race, but I've never had much to do with them so I can't guess how they think.
That is the acceptable public answer of course but it is a mind stopper. Obviously the definition comes from some person with some set of motivations and this seems to ignore that real and pertinent question.
Things like age, class, education and educational institution, willingness to work long hours doing something you hate for a goal you don't care about except that it feeds and houses you.
Line engineers running interviews have stopped having any say in the corporate policies of tech firms years ago. They are cogs, not rockstars.
You are right, this definition does come from some person with some set of motivations, but that person is some mid/high-level manager who probably hasn't ever written a line of code in their life.
It's just tradition for the sake of tradition. When cargo cult practice becomes industry culture. Like a much milder version of why medical residents are put through extreme sleepless wringers just because William Halsted was a cocaine addict.
But what is it differentiating? And is it really the best evidence of willingness to prepare? My MSc and BA on the topics, my open source contributions, two decades of industry experience... Aren't those evidence of not only willingness but actual execution of preparation?
The papers and open source indicate that you can build stuff. That's not what it's testing for.
Will you put up with very long hours of insane grindy nonsense in the spirit of being a team player for a team that doesn't really remember what game they're playing?
Are you sufficiently in need of income to be fighting through this interview dance in preference to other things, such that once you join you'll be desperate to stay?
Those are extremely important questions, and a willingness to have spent a thousand hours memorising leetcode correlates strongly with the attributes sought.
It is a differentiator when you are hiring straight from college. The fact we use this beyond entry level roles is a sign the company has lost the thread and is cargo culting.
That they would ask me to prepare for that is a signal as well.
In no case is it a useful signal as to whether I can do my job better than someone else. Some people like this type of problem and are good at it anyway, which is a good signal compared to average - but there are also above-average people who don't enjoy this type of problem and so don't practice it. Note that in both cases the people I'm talking about did not memorize the problem and solution.
That willingness to prepare doesn't reconcile with the realities of parenthood and all of the other responsibilities someone in their thirties may have. Consistently finding that time will be a huge ask, especially if you haven't worked on those problems in a while.
I mean, it would be illegal for them to state it outright, but most companies would prefer not to hire people with kids and other responsibilities. That's the whole reason there are specific discrimination laws for that.
LeetCode questions neatly solve the problem of not wanting to hire people who won't, or can't, spend hours of their free time doing things they hate for a goal they don't care about except to the extent that it will feed and house them.
No, it's not a measure of cleverness. It's about whether you can break down problems and apply common methods. That's the entire job. It's a learnable skill, and honestly, resisting learning because of personal biases is a red flag in my book.
Yes. Common LC patterns such as 1D and 2D dynamic programming. I'm not defending leetcode style interviews, in fact I think they are actually bad, I'm simply stating their intent as observed by me.
In my notes I have roughly 30 patterns to leetcode questions btw.
Most interviews are based on the premise that if a diabetic can't synthesize their own insulin in their basement, they are somehow cheating at the game of life.
If my wife's blood sugar is high, she takes insulin. If you need to solve a constraint problem, use a constraint solver.
If your company doesn't make and sell constraint solving software, why do you need me to presume that software doesn't exist and invent it from scratch?
It’s explicitly not testing if you can synthesize insulin in a crisis, it’s a general aptitude test for “if we tell you you need to cram this textbook on how to synthesize insulin by next week and then ask you how to do it on a call, can you coherently repeat that back to us?”
If you can figure out that a problem can be efficiently solved with a constraint solver then you can also write the two for loops and maybe some auxiliary recursive function to solve the given toy instance.
Whenever constraint programming languages come up, you have to mention Håkan Kjellerstrand. He's put together an amazing collection of problems and examples, including plenty for MiniZinc, on his site: https://www.hakank.org/minizinc/
> Now if I actually brought these questions to an interview the interviewee could ruin my day by asking "what's the runtime complexity?"
This completely undermines the author's main point. Constraint solvers don't solve hard leetcode problems if they can't solve large instances quickly enough.
Many hard leetcode problems can be solved fairly simply with more lax runtime requirements -- coming up with an efficient solution is a large part of the challenge.
> coming up with an efficient solution is a large part of the challenge
More of my work tends to be "rapidly adapting a solution to additional and changing requirements" than "coming up with the most efficient solution", so why are we interviewing for something where in practice we just throw a couple extra instances at it? (Your specific job role may vary, of course, but I usually just increase the scaling factor.)
Author's point is that coming up with the most efficient solution might not actually be a good measure of your real-world performance.
And that's been a long-running critique of leetcode, of course. However, this is a neat framing where you can still use the same problems but give solutions that perform better when measured by "how adaptable is this to new requirements?"
It would have been worthwhile if this article had briefly touched upon how constraint solvers are implemented, rather than avoiding this altogether.
A loonnngggg time ago when I was green, and wasn't taught about constraint solving in my State University compsci program, I encountered the problem when trying to help a friend with his idea.
He wanted to make an app to help sports club owners schedule players for the day based on a couple simple rules. I thought this was going to be easy, and failed after not realizing what I was up against. At the time I didn't even know what I didn't know.
I often look back on that as a lesson of my own hubris. And it's helped me a lot when discussing estimates and timelines and expectations.
This might be a dumb question (as I'm not familiar with constraint solvers) but would a linear optimization approach be better? I've used linear optimization for scheduling in the past. The nice thing is that linear optimization handles rule conflicts well, because you just set weights on all your rules and the optimizer will find the "least bad" solution to the conflicts.
Well, if you're using MiniZinc you're free to use a CP solver, MIP solver, SAT solver, or CP-SAT-LP solver. In general the model is roughly the same, even though some formulations work better for some solvers than others.
But CP (and CP-SAT) solvers tend to do very well on scheduling problems.
> I thought this was going to be easy, and failed after not realizing what I was up against. At the time I didn't even know what I didn't know
This reminds me of high school ~25 years ago when I just started learning TI-Basic on my calculator and was dabbling in VB6 on my PC, and I was flipping burgers at Steak n Shake as my part time job. The manager moaned about how hard it was to write the employee schedules out each week (taking into account requested days off, etc) and I thought “ooh, I know how to write software now, I’ll make a scheduling program!” I told the manager I bet I could do it.
… it took a very short time for 16 year old me to realize writing scheduling software to solve for various constraints is pretty damned hard. I never brought it up after that.
- Constraint solvers? That's a nice concept, I heard about this once. However, for the purposes of the interview, let's just write some Python code, I wanna see your way of thinking...
(I think it's almost impossible to convince your interviewer into constraint solvers, while the concept itself is great)
Long time ago, just for fun, I wrote a constraint-solver program that could figure out which high-yield banks to put money into, from those recommended on Doctor of Credit (https://www.doctorofcredit.com/high-interest-savings-to-get/): given <= `X` money and <= `Y` transactions on debit cards, maximize the yield subject to other constraints (boolean and real-valued).
I played with it for a while when interest rates were really low and used the thing for my own rainy day savings (I did get tired of changing accounts all the time).
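For flavor, a toy version of that kind of model with PuLP (every number and account name here is made up, and the real version also had boolean qualification constraints, e.g. minimum debit transaction counts, which are omitted):

from pulp import LpProblem, LpMaximize, LpVariable, lpSum

# name: (APY, per-account deposit cap) -- hypothetical data
accounts = {"bank_a": (0.05, 10000), "bank_b": (0.04, 25000)}
total_cash = 20000

prob = LpProblem("max_yield", LpMaximize)
deposit = {name: LpVariable("deposit_" + name, 0, cap)
           for name, (_, cap) in accounts.items()}
# Objective: total interest earned across all accounts.
prob += lpSum(rate * deposit[name] for name, (rate, _) in accounts.items())
# Can't deposit more money than we have.
prob += lpSum(deposit.values()) <= total_cash
prob.solve()
for name in accounts:
    print(name, deposit[name].value())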
SAT, SMT, and constraint solvers are criminally underutilized in the software industry. We need more education about what they are, how they work, and what sorts of problems they can solve.
At least personally, I've been very underwhelmed by their performance when I've tried using them. Usually past a few dozen variables or so is when I start hitting unacceptable exponential runtimes, especially for problem instances that are unsatisfiable or barely-satisfiable. Maybe their optimizations are well-suited for knapsack problems and other classic OR stuff, but if your problem doesn't fit the mold, then it's very hit-or-miss.
I'm surprised to hear this. Modern SAT solvers can easily handle many problems with hundreds of thousands of variables and clauses. Of course, there are adversarial problems where CDCL solvers fail, but I would be fascinated if you can find industrial (e.g. human written for a specific purpose) formulas with "dozens of variables" that a solver can't solve fairly quickly.
One thing that I spent a particularly long time trying to get working was learning near-minimum-size exact languages from positive and negative samples. DFAMiner [0] has a relatively concise formulation for this in terms of variables and clauses, though I have no way to know if some other reformulation would be better suited for SAT solvers (it uses CaDiCaL by default).
It usually starts taking a few seconds around the ~150-variable mark, and hits the absolute limit of practicality by 350–800 variables; the number of clauses is only an order of magnitude higher. Perhaps something about the many dependencies in a DFA graph puts this problem near the worst case.
The annoying thing is, there do seem to be heuristics people have written for this stuff (e.g., in FlexFringe [1]), but they're all geared toward probabilistic automata for anomaly detection and similar fuzzy ML stuff, and I could never figure out how to get them to work for ordinary automata.
In any case, I eventually figured out that I could get a rough lower bound on the minimum solution size, by constructing a graph of indistinguishable strings, generating a bunch of random maximal independent sets, and taking the best of those. That gave me an easy way to filter out the totally hopeless instances, which turned out to be most of them.
I've worked on a model with thousands of variables and hundreds of thousands of parameters with a hundred constraints. There are pitfalls you need to avoid, like reification, but it's definitely doable.
Of course, NP-hard problems blow up exponentially, but that doesn't change if you use another exact solving technique.
Using local search is very useful for scaling, but at the cost of proven optimality.
I think this hits the nail on the head: performance is the obstacle, and you can't get good performance without some modeling expertise, which most people don't have.
I wish I knew better how to use them for these coding problems, because I agree with GP they're underutilized.
But I think if you have a constraint problem that has an efficient algorithm but chokes a general constraint solver, that should be treated as a bug in the solver. It means that the solver uses bad heuristics somewhere.
I'm pretty sure that due to Rice's theorem, etc., any finite set of heuristics will always miss some constraint problems that have an efficient solution. There's very rarely a silver bullet when it comes to generic algorithms.
Rice's theorem is about decidability, not difficulty. But you are right that assuming P != NP there is no algorithm for efficient SAT (and other constraint) solving.
I think they're saying that the counter-examples are so pathological that, in most cases, anything doing auto-generation of constraints - for example, a DSL backed by a solver - should have good enough heuristics.
Like it might even be the case that certain types of pretty powerful DSLs just never generate "bad structures". I don't know, I've not done research on circuits, but this kind of analysis shows up all the time in other adjacent fields.
Idk, I also thought so once upon a time. "Everyone knows that you can usually do much better than the worst case in NP-hard problems!" But at least for the non-toy problems I've tried using SAT/ILP solvers for, the heuristics don't improve on the exponential worst case much at all. It's seemed like NP-hardness really does meet the all-or-nothing stereotype for some problems.
Your best bet using them is when you have a large collection of smaller unstructured problems, most of which align with the heuristics.
> Your best bet using them is when you have a large collection of smaller unstructured problems, most of which align with the heuristics.
Agreed. An algorithm right now in our company turns a directed graph problem, which to most people would seem crazy, into roughly ~m - n (m edges, n nodes) SAT checks that are relatively small. Stuffing all the constraints into an ILP solver would be super inefficient (and honestly undefined). Instead, by defining the problem statement properly and carving out the right invariants, you can decompose the problem to smaller NP-complete problems.
Well, they aren’t magic. You have to use them correctly and apply them to problems that match how they work. Proving something is unsat is worst case NP. These solvers don’t change that.
Of course they aren't magic, but people keep talking about them as if they're perfectly robust and ready-to-use for any problem within their domain. In reality, unless you have lots of experience in how to "use them correctly" (which is not something I think can be taught by rote), you'd be better off restricting their use to precisely the OR/verification problems they're already popular for.
Hence my statement about education. All tools must be used correctly in their proper domain, that is true. Don’t try to drive screws with a hammer. But I'm curious what problems you tried them on and found them wanting and what your alternative was? I actually find that custom solutions work better for simple problems and that solvers do a lot better when the problem complexity grows. You’re better off solving the Zebra puzzle and its ilk with brute force code, not a solver, for instance.
SAT solvers are used daily to generate solutions for problems that have literally millions of variables. So what you said is just wrong on its face. Yes, some talented people can write custom code that solves specific problems faster than a general purpose solver, particularly for easy special cases of the general problem, but most of the time that results in the programmer recreating the guts of a solver customized to a specific problem. There's sort of a corollary of Greenspun's Tenth Rule: every sufficiently complicated program also contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of a SAT or SMT solver.
I mean right tool for the right job. Plenty of formulations and problems (our job has plenty of arbitrarily hard graph algorithms) that have 90% of the problem just being a very clever reduction with nice structure.
Then the final 10% is either NP-hard, or we want to add some DSL flexibility, which introduces halting-problem issues. Once you lower it enough, then come the SMT solvers.
I find this post interesting independent of the question of whether leetcode problems are a good tool for interviews. It's: here are some kinds of problems constraint solvers are useful for. I can imagine a similar post about non-linear least-squares solvers like Ceres.
Yeah, especially for learning how to use a solver!
> Most constraint solving examples online are puzzles, like Sudoku or "SEND + MORE = MONEY". Solving leetcode problems would be a more interesting demonstration.
He's exactly right about what tutorials are out there for constraint programming (I've messed around with it before, and it was pretty much Sudoku). Having a large body of existing problems to practice against is great.
> The real advantage of solvers, though, is how well they handle new constraints.
Well said. One of the big benefits of general constraint solvers is their adaptability to requirements changes. Something I learned well when doing datacenter optimization for Google.
I agree with the other comments here that using a constraint solver defeats the purpose of the interview. But this seems like a good case for learning how to use a constraint solver! Instead of spending hours coding a custom solution to a tricky problem, you could use a constraint solver at first and only write a custom solution if it turns out to be a bottleneck.
Here's an easy ad-hoc Prolog program for the first problem:
% Given a set of coin denominations,
% find the minimum number of coins
% required to make change.
% i.e. for USA coinage and 37 cents,
% the minimum number is four
% (quarter, dime, 2 pennies).
% Q, D, P are counts of quarters, dimes, pennies (0..5 each):
num(0). num(1). num(2).
num(3). num(4). num(5).
?- num(Q), num(D), num(P),
   37 is Q * 25 + D * 10 + P.
You can just paste it into [1] to execute it in the browser. Using 60 as the target sum is more interesting, as you can enumerate two solutions.
(Posting again what I already posted two days ago [2] here)
I've actually used pseudo-prolog to explain how to solve leetcode problems to a friend. Write the facts, then write the constraints, and then state your problem. Close to the last part, they've already understood how to solve it, or at least how to write the program that can answer the question.
Of course, the challenge is that the next question after solving a leetcode problem is often to explain and optimize the performance characteristics, which in prolog can get stupidly hairy.
As an interviewer, I gave one pretty simple task (people solved it in as little as 8 minutes) that didn't use any real CS, even though I'm good at it.
The reason was that about 70% of candidates couldn't write a simple loop -- the task was there to filter those out. The actual solution didn't matter much; it fed a binary decision. The actual conversation matters more.
This. Main point of giving candidates CS problems was always to weed out those who couldn't program at all, but somehow were still in the industry. I worked with such people - it's unpleasant.
Somehow someone figured that giving harder problems should result in better candidates. Personally, despite having passed most of the tests I've been subjected to, I don't see the connection.
Here’s my empirical evidence based on several recent “coding session” interviews with a variety of software companies. Background: I have been developing software for over 30 years, I hold a few patents, I’ve had a handful of modestly successful exits. I kind of know a little bit about what I am doing. At this stage in my career, I am no longer interested in the super early stage startup lifestyle, I’m looking at IC/staff engineer type roles.
The mature, state-of-the-art software companies do not give me leetcode problems to solve. They give me interesting & challenging problems that force me to both a) apply best practices of varying kinds and yet b) be creative in some aspects of the solution. And these problems are very amenable to “talking through” what I’m doing, how I’m approaching the solution, etc. Overall, I feel like they are effective and give the company a good sense of how I develop software as an engineer. I have yet to “fail” one of these.
It is the smaller, less mature companies that give me stupid leetcode problems. These companies usually bluntly tell me their monolithic codebase (always in a not-statically-typed language) is a total mess and that they are “working on domain boundaries”.
I fail about 50% of these leetcode things because I don’t know the one “trick” to yield the right answer. As a seasoned developer, I often push back on the framing and tell them how I would do a better solution by changing one of the constraints, where the change would actually better match the real world problem they’re modeling.
And they don’t seem to care at all. I wonder if they realize that their bullshit interviewing process has both a false positive and a false negative problem.
The false negatives exclude folks like myself who could actually help to improve their codebase with proper, incremental refactorings.
The false positives are the people who have memorized all the leetcode problems. They are hired and write more shitty monolithic hairball code.
Their interviewing process reinforces the shittiness of their codebase. It’s a spiral they might never get out of.
The next time I get one of these, I think I’m going to YOLO it, pull the ripcord early and politely tell them why they’re fucked.
There is something to be said for being senior in a way where the people interviewing you are junior enough that they don't necessarily have the experience to "click" with the nuance that comes with said problems.
That being said, from a stoicism point of view, the interview ends up becoming a meta-challenge on how you approach a problem that is not necessarily appropriately framed, and how you'd go about doing and/or gently correcting things as well.
And if they're not able to appreciate it, then success! You have found that it is not the right organization for you. No need to burn the door down on the way out, just feel relief in that you dodged a bullet (hopefully).
Yes, it is a death spiral; if you are to lead them, you have to know what to fix when, to avoid making things worse.
The solution is typically not just to fix their code. They got in over their heads by charging ahead and building something they'll regret, but their culture (and likely the interviewer's personal self-regard) depends on believing in their (current) tech leaders.
So yes, the interviewer is most comfortable if you chase and find the ball they're hiding.
But the leadership question is whether you can relieve them of their ignorance without also stripping their dignity and future prospects.
I've found (mostly with luck) that they often have a sneaking suspicion that something isn't right, but didn't have the tools or pull to isolate and address it. As a leader if you can elicit that, and then show some strategies for doing so, you'll improve them and the code in a way that encourages them that what was hard to them is solvable with you, which helps them rely on you for other knotty problems.
It's not really that you only live once; it's that this opportunity is here now and should have your full attention, and to be a leader you have to address it directly but from everyone's perspective.
Even if you find you'd never want to work with them, you'd still want to leave them feeling clearer about their code and situation.
Clarifying my "YOLO" usage: I was being a little flippant, in the sense that when ending an interview early with direct critical feedback, the most likely outcome is a "burned bridge" with that company (you're never coming back).
Which reminds me one of my favorite twisted idioms: We'll burn that bridge when we get to it!
I guess I've finally found an acceptable real-world use case for this phrase :)
MiniZinc is a really great modeling language for constraint programming. Back in August I gave a talk at NordConstNet25 on how we used it to build a product configurator in what's (probably) the world's largest MiniZinc model.
An interesting meta-problem is to determine an antagonistic set of denominations, like the [10, 9, 1] example given in the post, to maximize the number of coins selected by the greedy method.
Been working on a calendar scheduling app that uses a constraint solver to auto schedule events based on scheduling constraints (time of day preferences and requirements, recurrence rules), and track goal progress (are you slipping on your desired progress velocity? Get a notification). It’s also a meal planner: from a corpus of thousands of good, healthy recipes, schedule a meal plan that reuses ingredients nearing expiration, models your pantry, estimates grocery prices, meets your nutritional goals. Constraint solvers are black magic.
It's insane how many of these new "AI" companies don't let you use AI or even your own IDE for coding interviews. And most questions from such companies are LC-type problems, which they know any AI tool can one-shot.
I discourage it but I let them use it and then give them a specific problem that I know your average Claude 4 or GPT 5 will just not get it right.
Actually people perform worse in an interview using AI because they spend time trying to understand what the tool is proposing and then time to figure out why that doesn’t work.
My experience has been quite different. With Cursor/Claude Code, I've ended up writing full-fledged solutions (running CLIs/web servers with loggers and unit tests for each piece of functionality). We're talking crawlers, a cab-booking service like Uber, search engines with seed data. All within the hour.
Definitely not insane. Ironic is the correct term. The field is evolving, a lot of these companies talk about replacing outdated practices using AI. Asking software engineers to not use their own tools to solve problems falls under the same bucket.
> Given an array of integers heights representing the histogram's bar height where the width of each bar is 1, return the area of the largest rectangle in the histogram.
Maybe it's my graphics programmer brain firing on all cylinders, but isn't this just a linear scan, maintaining a list of open rectangles?
Yes, you just need to maintain a stack of rectangles ordered from lowest to highest. You only ever have to push and pop the top of the stack, so the runtime is O(n).
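To make that concrete, here's a minimal sketch of that scan in Python (my own illustration; the function name and sample input are made up):

def largest_rectangle(heights):
    # Stack of (start_index, height), kept ordered lowest to highest.
    stack = []
    best = 0
    for i, h in enumerate(heights):
        start = i
        # Bars taller than the current one can't extend past it: close them.
        while stack and stack[-1][1] > h:
            idx, height = stack.pop()
            best = max(best, height * (i - idx))
            start = idx  # the current, shorter bar extends left to here
        stack.append((start, h))
    # Whatever is still open extends to the right edge.
    for idx, height in stack:
        best = max(best, height * (len(heights) - idx))
    return best

print(largest_rectangle([2, 1, 5, 6, 2, 3]))  # -> 10

Each bar is pushed and popped at most once, hence the O(n) bound.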
I tried a couple of times, a long time ago, to solve them with CP/integer programming.
The interviewers were clueless so after 10 minutes of trying to explain to them I quit and fell back to just writing the freaking algo they were expecting to see.
Terrible question for an interview, and further highlights how our interviews are broken.
Greedy algorithms tell you nearly nothing about the candidate's ability to code. What are you going to see? A single loop, some comparisons, and an equality check. Nearly every problem that can be solved with a greedy algorithm is largely a math problem disguised as programming. The entire question hinges on the candidate finding the right comparison to conduct.
The author himself finds that these are largely math problems:
> Lots of similar interview questions are this kind of mathematical optimization problem
So we're not optimizing to find good coders, we're optimizing to find mathematicians who have 5 minutes of coding experience.
At the risk of self-promotion, I'm fairly opinionated on this subject. I have a podcast episode where I discuss exactly this problem (including greedy algorithms), and make some suggestions about where we could go as an industry to avoid these kinds of bad-signal interviews:
This is how I prefer to interview. I don’t understand the mindset of LeetCode interviewers. It’s a weak signal because it’s easily gamed (false positives), and it misses too many strong candidates who have better things to do in their spare time (false negatives, plus a bias towards one type of candidate -> lack of diversity in experience).
One level of nested for loop for each type of coin. (Run them until i*coin is larger than the input.)
Populate a lookup array indexed by amount. $7.50 becomes arr[750] = [7,1,0,0,0,0], which represents [7x100, 1x50, 0x25, 0x10, 0x5, 0x1].
With each loop, check if the array entry exists; if so, keep whichever entry uses fewer coins. [7,1,0...] is better than [7,0,2...] because 8 coins is a better solution than 9!
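That's essentially bottom-up dynamic programming. A compact Python sketch of the same idea, tracking just the coin count rather than the full breakdown (my simplification):

def min_coins(denoms, target):
    # dp[v] = fewest coins that make value v; inf means unreachable.
    INF = float('inf')
    dp = [0] + [INF] * target
    for v in range(1, target + 1):
        for c in denoms:
            if c <= v:
                dp[v] = min(dp[v], dp[v - c] + 1)
    return dp[target] if dp[target] != INF else None

print(min_coins([10, 9, 1], 37))  # -> 4 (10+9+9+9), where greedy needs 10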
> This was a question in a different interview (which I thankfully passed):
> Given a list of stock prices through the day, find maximum profit you can get by buying one stock and selling one stock later.
It was funny to see this, because I give that question in our interviews. If someone suggested a constraint solver... I don't know what I'd have done before reading this post (since I had only vaguely even heard of a constraint solver), but after reading it...
Yeah, I would still expect them to be able to produce a basic algorithm, but even if their solution was O(n^2) I would take it as a strong sign we should hire them, since I know there are several different use cases for our product that require generalized constraint solving (though I know it by other names) and having a diverse toolset on hand is more important in our domain than writing always-optimal code.
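For reference, the non-quadratic answer to that question is a single pass. A sketch, assuming the usual formulation where you must buy before you sell:

def max_profit(prices):
    # Track the cheapest buy price so far and the best sell-now profit.
    best, lowest = 0, float('inf')
    for p in prices:
        lowest = min(lowest, p)
        best = max(best, p - lowest)
    return best

print(max_profit([7, 1, 5, 3, 6, 4]))  # -> 5 (buy at 1, sell at 6)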
Something that works poorly is often better than something that doesn't work in an instant. This is what I have to tell myself every time I step into a massive, excessively complex mess of a codebase. Many business rules aren't clearly defined ahead of time in a way that always translates well to the code and starting over is a mistake more often than not imo.
Update... refactor... update... break off... etc. A lot of times, I'm looking at something where the tooling is 8+ years old, and the first order of business should be to get it working on a current and fully patched version of whatever is in place... replacing libraries that are no longer supported, etc. From there, refactor what you can, break off what makes sense into something new, refactor again. This process, in my experience, has been far more successful than ground up, new versions.
I say this while actively working on a "new" version of a piece of software. The new version is web-based; the "old" version is a WinForms VB.Net app from over a decade ago. The old version has bespoke auth; the new version will rely on Azure Entra... Sometimes starting over is the answer, but definitely not always.
Reminder that the research says the interview process should match the day-to-day expectations as closely as possible, even up to a trial day/week/month. All these brain teasers are low on signal, not to mention bad for women and minorities.
I've never heard of a "dynamic programming algorithm". I read Wikipedia and it seems to mean... use a recursive function? The coin problem is an easy recursive problem (I just wrote the code for it to make sure my old brain can still do it).
It's usually covered in a first or second year algorithms course. It's a recursive problem definition paired with tabling to eliminate redundant work. Critically, the recursive subproblems have to be overlapping (they'll do some of the same work as the other recursive steps) to see any benefit. You can implement it top-down and add a cache (memoization) or you can implement it bottom-up and fill out the table iteratively rather than through recursion.
If you just implement it recursively without tabling then you end up re-doing work and it's often an exponential runtime instead of polynomial.
To clarify on overlapping, consider Fibonacci:
F(n) = F(n-1) + F(n-2) # and the base cases
F(n-1) includes F(n-2) in its definition, and both F(n-2) and F(n-1) include F(n-3). If you implement this naively it produces an exponential runtime. Once you add the table, the single initial recursive call to F(n-1) will end up, through its chain of calls, storing the result of F(n-2) and now the implementation is linear instead of exponential.
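In Python the top-down version is nearly free to write; a sketch using the standard library's memoization decorator:

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Each fib(k) is computed once and cached, so the exponential
    # call tree collapses to n distinct subproblems: linear time.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(90))  # instant; the naive version would run for centuries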
> I read Wikipedia and it seems to mean... use a recursive function?
Yes, that's one (common) approach to dynamic programming. The recursive function calls are memoized so that previous calculations are remembered for future function calls. Overlapping subproblems become trivial if you can reuse previously computed values. Recursion with memoization is top-down dynamic programming.
The hard part is realizing that the problem you're solving efficiently maps to a dynamic programming algorithm. You have to spot the opportunity for sub-problem reuse, or else the solution looks something like cubic or exponential (etc.)
Most leetcode problems fall into the same ~15 patterns, and hard problems most of the time require you to use a combination of two patterns to solve them.
I think LeetCode tests two things. First, your willingness to grind to pass an exam, which is actually a good proxy for some qualities you need to thrive in a corporate environment: work is often grungy and you need to push through without getting distracted or discouraged.
Second, it's a covert test for culture fit. Are you young (and thus still OK with grinding for tests)? Are you following industry trends? Are you in tune with the Silicon Valley culture? For the most part, a terrible thing to test, but also something that a lot of "young and dynamic" companies want to select for without saying so publicly. An AI startup doesn't want people who have a family life and want to take 6 weeks off in the summer. You can't put that in a job req, but you can come up with a test regime that drives such people away.
It has very little to do with testing the skills you need for the job, because quite frankly, probably fewer than 1% of the SWE workforce is solving theoretical CS problems for a living. Even if that's you, that task is more about knowing where to look for answers or what experiments to try, rather than being able to rattle off some obscure algorithm.
No. They use sophisticated algorithms called propagators to prune invalid values from the variables' domains, in conjunction with a search strategy like branch and bound.
I've always maintained that solving LeetCode is more about finding the hidden "trick" that makes the solution, if not easy, one that is already "solved" in the general sense. Look at the problem long enough and realize "oh, that's a sliding window problem" or some such known solution, and do that.
Everyone misunderstands what LC focuses on. It focuses on - did you grind like everyone else that did to get into this company/region/tech? It allows for people who didn't go to the most specific schools (e.g. Cal, Stanford, etc.) to still get into silicon valley companies if they show they are willing to fit the mold. It's about showing you are a conformist and are willing to work very hard to do something that you won't realistically use much in your day to day job.
It's about signaling. That's all it is. At least it's not finance where it's all dictated by if you were born into the right family that got you into the elite boarding schools for high school, etc. I would've never made it into finance unless I did a math phd and became a quant.
All problems cited are about testing if you can write ifs, loops, and recursion (or use a stack/queue).
They aren't testing if you can write a solver. They are testing if you can use bricks that solvers are built out of because other software when it gets interesting is built out of the same stuff.
> The "smart" answer is to use a dynamic programming algorithm, which I didn't know how to do. So I failed the interview.
Really? This kind of interview needs to go away.
However, coding interviews are useful. It's just that "knowing the trick" shouldn't be the point. The point is whether the candidate knows how to code (without AI), can explain themselves and walk through the problem, explain their thought processes, etc. If they do a good enough reasoning job but fail to solve the problem (they run out of time, or they go on an interesting tangent that ultimately proves fruitless) it's still a "passed the test" situation for me.
Failure would mean: "cannot code anything at all, not even a suboptimal solution. Cannot reason about the problem at all. Cannot describe a single pitfall. When told about a pitfall, doesn't understand it nor its implications. Cannot communicate their thoughts."
I agree with this approach. With the exception of testing for specific domain knowledge relevant to the work role, the coding interview should just be about testing the applicant's problem-solving skills and grasp of their language of choice. I would even prefer a take-home style problem that we can review in-person over some high-pressure puzzle. The leetcode interview doesn't seem to correspond to anything a developer actually does day to day.
The bar is so high nowadays that simply being able to talk intelligently about the problem, ask clarifying questions, get to an inefficient solution, and code it up well does not pass muster.
Even getting an efficient algorithm basically right is no guarantee.
In some cases there might be alternative solutions with different tradeoffs, and you might have to come up with those as well.
Miss a counterexample? Even if you get it after a single hint? Fuck you, you're out. I can find someone who doesn't need the hint.
I implemented the simple greedy algorithm and immediately fell into the trap of the question: the greedy algorithm only works for "well-behaved" denominations. If the coin values were [10, 9, 1], then making 37 cents would take 10 coins in the greedy algorithm but only 4 coins optimally (10+9+9+9).
That's a bad algorithm, then, not a greedy algorithm. Wouldn't a properly-implemented greedy algorithm use as many coins as possible of a given large denomination before dropping back to the next-lower denomination?
If a candidate's only options are to either use a constraint solver or to implement a naïver-than-usual greedy algorithm, well, sorry, but that's a no-hire.
> Wouldn't a properly-implemented greedy algorithm use as many coins as possible of a given large denomination before dropping back to the next-lower denomination?
Yes, and it won't work on the problem described. The greedy algorithm only works on certain sets of coins (US coin denominations are one of those sets), and fails in at least some cases with other coin sets (as illustrated in the bit you quoted).
The algorithm they're using must be "Until you hit the limit, take the highest denomination coin that fits beneath the limit. If you can't hit the limit, fall back one step."
That fits your definition of "use as many coins as possible of a given large denomination before dropping back to the next-lower denomination" but will find 10-10-10-1-1-1-1-1-1-1 and stop before it even tries 10-9-anything.
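Right; spelled out, the greedy loop in question is something like this sketch (my own illustration):

def greedy_change(denoms, target):
    # Always take the largest coin that still fits; never backtrack.
    coins = []
    for c in sorted(denoms, reverse=True):
        while target >= c:
            target -= c
            coins.append(c)
    return coins if target == 0 else None

print(greedy_change([10, 9, 1], 37))  # -> [10, 10, 10, 1, 1, 1, 1, 1, 1, 1]

Ten coins, and it never even considers 10+9+9+9.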
My beef with someone using a constraint solver here is that they almost certainly wouldn't be able to guarantee anything about their solution other than that, if it produces an output, it will be correct. They won't be able to guarantee running time, space usage, or (probably for most tools) even a useful progress indicator. The problem isn't merely that they used another tool - the problem is that they abstracted away critical details. Had they provided a handwritten solution from scratch with the same characteristics, it would've exhibited the same problems.
This doesn't mean they can't provide a constraint solver solution, but if they do, they'd better be prepared to address the obvious follow-ups. If they're prepared to give an efficient solution afterward in the time left, then more power to them.
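For what it's worth, the solver version really is short, which is why the follow-ups carry all the signal. A minimal sketch with Z3's Python bindings (my choice of solver, not necessarily the one from the post):

from z3 import Ints, Optimize, sat

q, d, p = Ints('q d p')  # counts of quarters, dimes, pennies
opt = Optimize()
opt.add(q >= 0, d >= 0, p >= 0)
opt.add(25 * q + 10 * d + p == 37)
opt.minimize(q + d + p)  # fewest coins

if opt.check() == sat:
    m = opt.model()
    print(m[q], m[d], m[p])  # -> 1 1 2

It illustrates both sides of the argument: adding a new requirement (say, opt.add(d <= 1)) is one line, but nothing here bounds the running time the way an explicit DP table does.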
I don't think someone with an account from 2012 and 20k karma would be posting LLM-generated comments. It also doesn't read as one. It doesn't even use the "it's not x it's y" formula, it contraposes things against each other. Like I just did.
My biggest problem with leetcode type questions is that you can't ask clarifying questions. My mind just doesn't work like most do, and leetcode to some extent seems to rely on people memorizing leetcode type answers. On a few, there's enough context that I can relate real understanding of the problem to, such as the coin example in the article... for others I've seen there's not enough there for me to "get" the question/assignment.
Because of this, I've just started rejecting outright leetcode/ai interview steps... I'll do homework, shared screen, 1:1, etc, but won't do the above. I tend to fail them about half the time. It only feels worse in instances, where I wouldn't even mind the studying on leetcode types sites if they actually had decent explainers for the questions and working answers when going through them. I know this kind of defeats the challenge aspect, but learning is about 10x harder without it.
It's not a matter of skill, it's just my ability to take in certain types of problems doesn't work well. Without any chance of additional info/questions it's literally a setup to fail.
edit: I'm mostly referring to the use of AI/Automated leetcode type questions as a pre-interview screening. If you haven't seen this type of thing, good for you. I've seen too much of it. I'm fine with relatively hard questions in an actual interview with a real, live person you can talk to and ask clarifying questions.
The LC interviews are like testing people how fast they can run 100m after practice, while the real job is a slow arduous never ending jog with multiple detours and stops along the way.
But yeah that's the game you have to play now if you want the top $$$ at one of the SMEGMA companies.
I wrote (for example) my 2D game engine from scratch (3rd party libs excluded)
https://github.com/ensisoft/detonator
but would not be able to pass a LC type interview that requires multiple LC hard solutions and a couple of backflips on top. But that's fine, I've accepted that.
5 years ago you'd have a project like that, talk to someone at a company for like 30m-1hr about it, and then get an offer.
Did you mean to type 25? 5 years ago LC challenge were as, if not more, prevalent than they are today. And a single interview for a job is not something I have seen ever after 15 years in the space (and a bunch of successful OSS projects I can showcase).
I actually have the feeling it’s not as hardcore as it used to be on average. E.g. OpenAI doesn’t have a straight up LC interview even though they probably are the most sought after company. Google and MS and others still do it, but it feel like it has less weight in the final feedback than it did before. Most en-vogue startup have also ditched it for real world coding excercices.
Probably due to the fact that LC has been thoroughly gamed and is even less a useful signal than it was before.
Of course some still do, like Anthropic were you have to have a perfect score to 4 leetcode questions, automatically judged with no human contact, the worst kind of interview.
There's an entire planet of jobs that have nothing to do with leetcode. I was talking about those, not FAANG stuff. Unfortunately I am not FAANG royalty.
>Of course some still do, like Anthropic were you have to have a perfect score to 4 leetcode questions, automatically judged with no human contact, the worst kind of interview.
Should be illegal honestly.
5 years ago non-FAANG companies were fully in leetcode mode for interviews. Maybe 10-15 years ago you could totally avoid it without much problem.
In most European companies that isn't a thing.
Thankfully not everything from SV culture gets adoption.
It might be illegal; certainly if you can show that LC is biased against a protected class, then there would be grounds for a lawsuit.
Only if there is enough evidence. Yes, I can say that the inability to account for things like the ADA in the US can place an employer in hot water, however, since LC doesn't make those decisions, they are immune. The accountability is placed upon the employer. Don't hate the players or the game. Maybe just figure out how to fix it without harming everyone, be popular enough to make said idea into law, and get into a position of power that allows you to do so. If that sounds hard, congrats, welcome to the reason why I never got into politics. Don't even get me started on all the people you will never realize you are hurting by fixing that one single problem.
I never meant to imply that LC would be violating the law.
Good legal disclaimer!
> certainly if you can show that LC is biased against a protected class, then there would be grounds for a lawsuit.
That wouldn't be hard to do. Given the disparate impact standard, everything is biased against a protected class.
> Should be illegal honestly.
I can't imagine this kind of entitlement. If you don't want to work for them, don't study leetcode. If you want to work for them (and get paid tons of money), study leetcode. This isn't a difficult Aristotelian ethics/morals question.
I meant no human-in-the-loop wrt hiring, which is what I thought you were getting at.
It's the same exact thing - if some company makes you jump through hoops to get hired that you find distasteful, just don't apply to that company.
Not all of us are market extremists. The “invisible hand of the market” doesn’t care about human rights.
You don't know their interview process unless it's one of the big tech companies though.
No. Certain things just harm basic human dignity and should be outlawed. Judgement comes from our peers, not from machines.
But sometimes also machines. ACLs are enforced by machines, and everyone is fine with that.
I literally got my first real job, at a fintech firm, 26 years ago by talking about my game engine.
Not sure if that's a typo. 5 years ago was also pretty LC-heavy.
Ten years ago it was more based on Cracking the Coding Interview.
So I'd guess what you're referring to is even older than that.
Talking about general jobs, not FAANG-adjacent ones.
Nearly everyone is FAANG adjacent
Apart from those companies where social capital counts for more ...
I rarely apply for or interview at FAANG or adjacent companies...
I read this, and intentionally did not read the replies below. You are so wrong. You can write a library, even an entirely new language from scratch, and you will still be denied employment for that library/language.
> 5 years ago you'd have a project like that, talk to someone at a company for like 30m-1hr about it, and then get an offer.
Based on my own experiences, that was true 25 years ago. 20 years ago, coding puzzles were now a standard part of interviewing, but it was pretty lightweight. 5 years ago (covid!) everything was leet-code to get to the interview stage.
I have been getting grilled on leet-code-style questions since the beginning of my career over 12 years ago.
The faangs jump and then the rest of the industry does some dogshit imitation of their process
I'm lucky I'm in the frontend webdev sphere then, I guess, instead of being a pure backend guy. I've had a couple of those live ones and just declined them. I did manage to implement a "snake" algorithm once but got denied because I wasn't able to talk about time/space complexity.
As someone who’s hired 10s of engineers across multiple companies, it’s bullshit on the hiring side too.
It was humbling having to explain to fellow adult humans that when your test question is based on an algorithm solving a real business problem that we work on every day, a random person is not going to implement a solution in one hour as well as we can.
I’ve seen how the FAANGs' interview processes account for those types of bias and mental blindness and are actually effective, but their solutions require time and/or money, so everywhere I’ve been implements the first 80% that’s cheap and then skips the rest that makes it work.
>As someone who’s hired 10s of engineers across multiple companies
Any way to reach out? :)
I think it boils down to companies not wanting to burn money and time on training, and trying to come up with all sorts of optimized (but ultimately contrived) interview processes. Now both parties are screwed.
>It was humbling having to explain to fellow adult humans that when your test question is based on an algorithm solving a real business problem that we work on every day, a random person is not going to implement a solution in one hour as well as we can.
Tell me about it! Who were you explaining this to?
>how fast they can run 100m after practice, while the real job is a slow arduous never ending jog with multiple detours and stops along the way
I've always explained it as demonstrating your ping pong skills to get on the basketball team.
>The LC interviews are like testing people how fast they can run 100m after practice
Ah, but, the road to becoming good at Leetcode/100m sprint is:
>a slow arduous never ending jog with multiple detours and stops along the way
Hence Leetcode is a reasonably good test for the job. If it didn't actually work, it would've been discarded by companies long ago.
Barring a few core library teams, companies don't really care if you're any good at algorithms. They care if you can learn something well enough to become world-class competitive. If you can show that you can become excellent at one thing, there's a good chance you can become excellent at another thing.
That's basically also the reason that many Law and Med programs don't care what your major in undergrad was, just that you had a very high GPA in whatever you studied. A decent number of Music majors become MDs, for example.
LC interviews were made popular by companies that were started by CS students because they like feeling that this stuff is important. They're also useful when you have massive numbers of applicants to sift through because they can be automated and are an objective-seeming way to discard loads of applicants.
Startups that wanted to emulate FAANGs then cargo-culted them, particularly if they were also founded by CS students or ex-FAANG (which describes a lot of them). Very, very few of these actually try any other way of hiring and compare them.
Being able to study hard and learn something well is certainly a great skill to have, but leetcode is a really poor one to choose. It's not a skill that you can acquire on the job, so it rules out anyone who doesn't have time to spend months studying something in their own time that's inherently not very useful. If they chose to test skills that are hard and take effort to learn, but are also relevant to the job, then they can also find people who are good at learning on the job, which is what they are actually looking for.
But why stop there? Why not test candidates with problems they have never seen before? Or problems similar to the problems of the organization hiring? Leetcode mostly relies on memorizing patterns with a shallow understanding; it mostly shows that candidates can game the system. Does that imply quality in any way? Some people argue that willingness to study for leetcode shows some virtue. I very much disagree with that.
I think you have a misunderstanding. Most companies that do LC-style interviews usually show unknown problems.
Memorizing the Top 100 list from Leetcode only works for a few companies (notably and perplexingly, Meta) but doesn't for the vast majority.
Also, just solving the problem isn't enough to perform well in the interview. Getting the optimal solution is just table stakes. There's communication, tradeoffs between alternative solutions, coding style, follow-up questions, opportunities to show off language trivia, etc.
Memorizing problems is wholly not the point of Leetcode grinding at all.
In terms of memorizing "patterns", in mathematics and computer science all new discovery is just a recombination of what was already known. There's virtually no information coming from outside the system like in, say, biology or physics. The whole field is just memorized patterns being recombined in different ways to solve different problems.
It’s not about memorizing individual problems per se, but rather recognizing overall patterns and turning the process into a gameable endeavor. This can give candidates an edge, but it doesn’t necessarily demonstrate higher-level ability beyond surface familiarity with common patterns and the expectations around them. I’d understand the value if the job actually involved work similar to what's reflected in leetCode style problems, but in most cases, that couldn’t be further from reality. leetCode serves little purpose beyond measuring a candidate’s willingness to invest time and effort. That’s the only real virtue it rewards. But ultimately, I believe leetCode style interviews are measuring the wrong metric.
>a candidate’s willingness to invest time and effort
I guess it's a matter of opinion but my point is, this is probably the right metric. Arguably, the kind of people who shut up and play along with these stupid games because that's where the money is make better team players in large for-profit organizations than those who take a principled stance against ever touching Leetcode because their efforts wouldn't contribute anything to the art.
Maybe yes maybe not, I'm leaning not but it's just an opinion. But as a company be careful what you wish for, these same candidates are often skilled at gaming systems and may leave your team as soon as they've extracted the benefits. They’re likely more interested in playing the game than in seriously solving real-world problems.
Then what if the test was how well you play chess? That takes time to study to become good. But would it be a good metric for hiring programmers?
Because chess is more unrelated to the job? It is easy to see that LeetCode problems are closer to a programmers job than what chess is.
But yeah, people used to ask that level of unrelated questions to programmers, and they were happy with the results. "Why are manhole covers round" etc. LeetCode style questions do produce better results than those, so that is why they use them.
To play the devils advocate, being able to memorize patterns and recognize which patterns apply to a given problem is extremely valuable. Tons of software dev is knowing the subset of algorithms, data structures, and architecture that apply to a similar problem and being able to adapt it.
It's funny you mention that.
That's literally what CS teaches you too. Which is what "leetcode" questions are: fundamental CS problems that you'd learn about in a computer science curriculum.
It's called "reducing" one problem to another. We had an entire semester's mandatory class spend a lot of time on reducing problems. Like figuring out how you can solve a new type of question/problem with an algorithm or two that you already know from before.
Like showing that "this is just bin packing". And there are algorithms for that, which "suck" in the CS kind of sense but there are real world algorithms that are "good enough" to be usable to get shit done.
Or showing that something "doesn't work, period" by showing that it can be reduced to the halting problem (assuming that nobody has solved that yet - oh and good luck btw. if you want to try ;) )
I did quite a bit of competitive programming in school, and pretty much all the world-class competitive problems are reduced to well-known algorithms. It's quite hard to come up with something new (not proven to be unsolvable for its constraints). I believe problem setters just try to disguise a known algorithm as much as possible.
Then comes the ability/memorization to actually code it; e.g. if I knew it needed coding a red-black tree, I wouldn't even start.
> Or problems similar to the problems of the organization hiring?
People complain, rightly so in some cases, that their "interview" is really doing some (unpaid) work for the company.
> Leetcode mostly relies on memorizing patterns
Math is like that as well though. It's about learning all the prior axioms, laws, knowing allowed simplifications, and so on.
In the same way that writing and performing a new song is "just memorizing prior patterns and laws", or that writing a new book is the same.
I.e. it's not about that. Like sure it helps to have a base set of shared language, knowledge, and symbols, but math is so much more than just that.
Programming competition problems are also much more than just memorizing patterns, that was the point of his post.
In math, you usually need to prove said simplifications. So just memorizing is not enough. As you get more advanced, you then start swapping out axioms.
In programming the simplifications have to be correct even if you don't prove them, and being correct isn't that easy.
Does it work though?
When I look at the messy Android code, Fuchsia's commercial failure, Dart being almost killed by politics, Go's marvellous design, WinUI/UWP catastrophical failure, how C++/CX got replaced with C++/WinRT, ongoing issues with macOS Tahoe,....
I am glad that apparently I am not good enough for such projects.
Zero of those failures are of a technical nature.
The fact that they fail is not evidence that leetcode interviews fail to select for high-quality engineers.
On the contrary, it shows that hiring high-quality engineers, by whatever measure that happens to be, does not correlate with product quality.
> If it didn't actually work, it would've been discarded by companies long ago
You're assuming that something else works better. Imagine if we were in a world where all interviewing techniques had a ton of false positives and negatives without a clear best choice. Do you expect that companies would just give up, and not hire at all, or would they pick based on other factors (e.g. minimizing the amount of effort needed on the company side to do the interviews)? Assuming you accept the premise that companies would still be trying to hire in that situation, how can you tell the difference between the world we're in now and that (maybe not-so) hypothetical one?
I never made any claims about optimality. It works (for whatever reason) hence companies continue to use it
If it didn't work, these companies wouldn't be able to function at all.
It must be the case that it works better than running an RNG on everyone who applied.
Does it mean some genius software engineer who wrote a fundamental part of the Linux kernel but never learned about Minimum Spanning Trees got filtered out? Probably. But it's okay. That guy would've been a pain in the ass anyway.
> If it didn't actually work, it would've been discarded by companies long ago.
What I've singled out above is a very confident statement, considering that inertia in large companies is a byword at this point. Further, "work" could conceivably mean many things in this context, from "per se narrows our massive applicant pool" to "selects for factor X," X being clear only to certain management in certain sectors. Regardless, I agree with those who find it obvious that LC does not ensure a job fit for almost any real-world job.
It's also a filter for people who are ok with working hard on something completely pointless for many months in order to get a job.
> Hence Leetcode is a reasonably good test for the job. If it didn't actually work, it would've been discarded by companies long ago.
I see it differently. I wouldn't say it's reasonably good, I'd say it's a terrible metric that's very tenuously correlated with on the job success, but most of the other metrics for evaluating fresh grads are even worse. In the land of the blind the one eyed man is king.
> If you can show that you can become excellent at one thing, there's a good chance you can become excellent at another thing.
Eh. As someone who did tech and then medicine, a lot great doctors would make terrible software engineers and vice versa. Some things, like work ethic and organization, are going to increase your odds of success at nearly any task, but there's plenty other skills that are not nearly as transferable. For example, being good at memorizing long lists of obscure facts is a great skill for a doctor, not so much for a software engineer. Strong spatial reasoning is helpful for a software developer specializing in algorithms, but largely useless for, say, an oncologist.
> Hence Leetcode is a reasonably good test for the job. If it didn't actually work, it would've been discarded by companies long ago.
This is an appeal to tradition and a form of survivorship bias. Many successful companies have ditched LeetCode and have found other ways to effectively hire.
> If you can show that you can become excellent at one thing, there's a good chance you can become excellent at another thing.
My company uses LeetCode. All I want is sane interfaces and good documentation. It is far more likely to get something clever, broken and poorly documented than something "excellent", so something is missing for this correlation.
> If it didn't actually work, it would've been discarded by companies long ago
That makes the assumption that company hiring practices are evidence based.
How many companies continue to use pseudo-scientific Myers-Briggs-style tests?
Mistakenly read this as you wrote that 2D game engine (which looks awesome btw) for a job interview to get the job: "I can't compete with this!!! HOW CAN I COMPETE WITH THESE TYPES OF SUBMISSIONS!?!?! OH GAWD!!!"
> SMEGMA companies
Microsoft, Google, Meta, Amazon, I'm guessing... but, what are the other two?
"Startups" and "Enterprise"? I guess that basically covers everything
I prefer AGAMEMNON: Apple, Google, Amazon, Microsoft, Ebay, Meta, NVIDIA, OpenAI, Netflix
Lol :)
"SMEGMA companies." :D
And nowadays people are blatantly using AI to answer questions like this (https://www.finalroundai.com/coding-copilot), even trying to stumble through design questions using AI.
100%. I just went through an interview process where I absolutely killed the assignment (had the best one they'd seen), had positive signal/feedback from multiple engineers, CEO liked me a lot etc, only to get sunk by a CTO who thought it would be cool to give me a surprise live test because of "vibe coding paranoia". 11 weeks in the process, didn't get the role. Beyond fucking stupid.
This was the demo/take-home (for https://monumental.co): https://github.com/rublev/monumental
It's funny because this repo really does seem vibe-coded. Obviously I have no reason not to believe you, but man! All those emojis in the install shell script - I've never seen anyone other than an AI do that :) Maybe you're the coder that the AI companies trained their AI on.
Sorry about the job interview. That sucks.
There's even a rocket emoji in server console.logs... There are memes with ChatGPT and rocket emojis as a sign of AI use. The whole repo looks super vibe-coded, emojis, abundance of redundant comments, all in perfect English and grammar, and the readme also has that "chatty" feel to it.
I'm not saying that using AI for take-home assignments is bad/unethical overall, but you need to be honest about it. If he was lying to them about not using any AI assistance to write all those emojis and folder structure map in the repo, then the CTO had a good nose and rightfully caught him.
As a big believer in documentation and communication in general, there's this inevitable double-bind that people hate whatever you give them and also hate it if you give them nothing. LLMs have made this worse.
No emojis and any effort to be comprehensive? Everyone complains "what is this wall of text", or "this is industry not grad school so cut it out with the fancy stuff" or "no one spends that much time on anything and it must be AI generated". (Frequently just a way of saying that they hate to read, and naively believe that even irreducibly complex stuff is actually simple).
Stuff that's got emojis, a friendly casual tone and isn't information dense? Well that's very chatty and cute, it also has to be AI and can't be valuable.
Since you can't win with docs, the best approach is to produce high-quality diagrams that are simultaneously useful to a wide audience, from novice to expert. The only problem is that even producing high-quality diagrams at a ratio of 1 diagram per 1k lines of code is still very time-consuming if you're putting lots of thought into it, doubly so if you're fighting the diagramming tools, or if you want something that's easy for multiple stakeholders with potentially very different job descriptions to take in. Everyone will call it inadequate, ask why it took so long, and ask for the missing docs that they will hate anyway!
On the bright side, LLMs are pretty great at generating Mermaid, either from code or from natural-language descriptions of data flows. Diagrams-as-code, without needing a whole application UI or one of a limited number of your org's Lucidchart licenses, makes "Don't like it? Submit a PR" a pretty small ask. Skin in the game helps to curb endless bike-shedding criticism.
> No emojis and any effort to be comprehensive? Everyone complains "what is this wall of text", or "this is industry not grad school so cut it out with the fancy stuff" or "no one spends that much time on anything and it must be AI generated". (Frequently just a way of saying that they hate to read, and naively believe that even irreducibly complex stuff is actually simple).
> Stuff that's got emojis, a friendly casual tone and isn't information dense? Well that's very chatty and cute, it also has to be AI and can't be valuable.
As a counterpoint, I can confidently say that I've never once had anyone give any feedback to me on the presence or absence of emojis in code I've written, whether for an interview, work, or personal projects, and I've never had anyone accuse my documentation of being AI generated or gotten feedback in an interview that my code didn't have enough documentation. There's a pretty wide spectrum between "indistinguishable from what I get when I give an LLM the same assignment as my interviewee" and "lacking any sort of human-readable documentation whatsoever".
If you're using AI for an interview, you are basically telling them "you could just not bother with hiring me and use AI yourself" which is neither good for you nor them.
Oh my god Becky, there's even a rocket emoji in the server console logs!
Should I also be "honest" about tab-completion? Where do you draw the line? Maybe I should be punished for having an internet connection too. Using AI for docker/readme's/simple scaffolding I would have done anyways? Oh the horror!
There was no lying because there was no discussion or mention of AI at all. Had they asked me, I'd have happily told them yes I obviously use AI to help me save time on grunt-work, I've been doing this stuff for like 15 years.
It's an unpaid take-home assignment. You'd have to be smoking crack to think that I would be rawdogging this. Imagine if I had a family or a wife or an existing job? I'd dump them after getting linked their assignment document.
Honestly at this point in the AI winter if you are a guy who has AI-inspired paranoia then I don't want to work for you because you are not "in the know".
> It's an unpaid take-home assignment
It's not defensible in any case.
That being said, I think the CTO's "vibe coding paranoia" after seeing this repo is 100% justified.
Your Hacker News profile says you're the founder of an AI company, and your take-home looks completely vibe-coded. Why in the world are you surprised that a hiring manager is a little suspicious of your coding skills?
Given what you’ve said in your other comments, it seems like you used AI in a way that I wouldn’t have a problem with but just briefly looking through I can see how it would look suspicious.
That's all well and good. Totally ask me about AI, I can talk a lot about it. Don't however, make me go through 99% of the interview process up until the very last stage (spanning weeks), and throw a live test in my face, and then have the hiring manager clarify that it's about "vibe coding paranoia". It negates the entire reason I did the take-home assignment.
> Should I also be "honest" about tab-completion? Where do you draw the line?
I'd probably draw it somewhere in the miles-long gap between tab completion and generating code with an LLM. It sounds like that's where the company drew it too.
I used AI for the Docker setup which I've already done before. I'm not wasting time on that. Yeah you can vibe code basic backend and frontend and whatnot, but you're not going to vibe code your way to a full inverse kinematics solution.
I'm not a math/university educated guy so this was truly "from the ground up" for me despite the math being simple. I was quite proud of that.
So what was the issue the CTO had with vibe coding? Had you disclosed to then that you used LLMs for coding "basic" features outside the math and whatnot?
CTO's previous job was at Palantir, perhaps he has some reasons to be paranoid
The hiring manager told me that they were getting a bad signal-to-noise ratio in their hiring: they'd bring someone on-site who had a good assignment and, apparently more often than not, these candidates would shit the bed in a live environment. So the CTO made a live version of the assignment and didn't tell anyone. I was told that he did this to weed out the low-signal people they'd dealt with recently.
>Had you disclosed to then that you used LLMs for coding "basic" features outside the math and whatnot?
No it seems completely immaterial. I'll happily talk about it if asked but it's just another tool in the shed. Great for scaffolding but makes me want to rip my hair out more often than not. If it doesn't one-shot something simple for me it has no use because it's infuriating to use. I didn't get into programming because I liked writing English.
Hah I feel you there. Around 2 years ago I did a take home assignment for a hiring manager (scientist) for Merck. The part B of the assignment was to decode binary data and there were 3 challenges: easy, medium and hard.
I spent around 40 hours of time and during my second interview, the manager didn't like my answer about how I would design the UI so he quickly wished me luck and ended the call. The first interview went really well.
For a couple of months, I kept asking the recruiter if anyone successfully solved the coding challenge and he said nobody did except me.
Out of respect, I posted the challenge and the solution on my github after waiting one year.
Part 2 is the challenging part; it's mostly a problem solving thing and less of a coding problem: https://github.com/jonnycoder1/merck_coding_challenge
> Part 2 is the challenging part; it's mostly a problem solving thing and less of a coding problem
That doesn't look too challenging for anyone who has experience in low-level programming, embedded systems, and reverse engineering. In fact for me it'd be far easier than part 1, as I've done plenty of work similar to the latter, but not the former.
That sucks so hard man, very disrespectful. We should team up and start out own company. I tried checking out your repo but this stuff is several stops past my station lol.
A surprise live test is absolutely the wrong approach for validating whether someone's done the work. IMO the correct approach is to go through the existing code with the applicant and have them explain how it works. Someone who used AI to build it (or in the past had someone else build it for them) wouldn't be able to do a deep dive into the code.
We did go into the assignment after I gently bowed out of the goofy live test. The CTO seemed uninterested & unfamiliar with it after returning from a 3 week vacation during the whole process. I waited. Was happy to run him through it all. Talked about how to extend this to a real-world scenario and all that, which I did fantastically well at.
I feel your pain. This isn't a question about AI or not. It's about whether you can do the work and do it well. This kind of nonsense happened before AI. If you can't win the game of Jeopardy you don't get the job, even though the job has nothing to do with being a Jeopardy contestant!
That is an insane amount of work for a job application. Were you compensated for it at all?
It isn't impressive to spend a lot of time on a hiring problem, you shouldn't do that. If you can't do it in a few hours then just move on and apply for another job, you aren't the person they are looking for.
Doing it slowly over many days is only taking your time and probably won't get you the job anyway, since the solution will be a hard-to-read mess compared to that of someone who solves it quickly because they are familiar with the domain.
The other comments here note, and the author even stated directly, that it was vibe-coded.
No. Should I invoice them? I'm still livid about it. The kicker is the position pays a max of 60-120k euros, the maximum being what I made 5 years ago.
Probably too late now unfortunately.
The job market is brutal right now, and you have my sympathy. I hope you can find a good fit soon.
Much appreciated.
TBF that's a pretty top tier salary for Europe.
Right but we both know nobody is being offered the 120 right out the gate, so it's more like 100 max.
Damn... that's WAY more than I'll do for an interview process assignment... I usually time box myself to an hour or two max. I think the most I did was a tic-tac-toe engine but ran out of time before I could make a UI over it.
I put absolutely every egg into that basket. The prospect of working in Europe (where I planned to return to eventually) working on cool robot stuff was enticing.
The fucking CTO thought I vibe-coded it and dismissed me. Shout-out to the hiring manager though, he was real.
This repo has enough red flags to warrant some suspicion.
You have also not attempted to hide that, which is interesting.
Wait, what.. you did this as a take home for a position? Damn that looks excessive.
Yes. I put a ton of work into it. I had about 60 pages worth of notes on inverse kinematics, FABRIK, cyclic algorithms used in robotics, A*/RRT for real-world scenarios, etc. I was super prepared. Talked to the CEO for about two hours. Took notes on all the videos I could find of team members on YouTube and of their company.
Luckily the hiring manager called me back and levelled with me, nobody kept him in the loop and he felt terrible about it.
Some stupid contrived dumbed down version of this crane demo was used for the live test where I had to build some telemetry crap. Nerves took over, mind blanked.
Here's the take-home assignment requirements, btw: https://i.imgur.com/HGL5g8t.png
Here's the live assignment requirements: [1] https://i.imgur.com/aaiy7QR.png & [2] https://i.imgur.com/aaiy7QR.png
At this rate I'm probably going to starve to death before I get a job. Should I write a blog post about my last 2 years of experiences? They are comically bad.
This was for monumental.co - found them in the HN who's hiring threads.
> Nerves took over, mind blanked.
This never happened to me in a job interview before I turned 40. But once I knew I was too old to look the part, and therefore had to knock it out of the park, mind blank came roaring in. I have so much empathy now for anyone it ever happened to when I was giving them a job interview. Performing under that kind of pressure has nothing to do with actual ability to do the job.
> Here's the live assignment requirements: [1] https://i.imgur.com/aaiy7QR.png & [2] https://i.imgur.com/aaiy7QR.png.
These are the same link
I feel bad for you, and I support you in naming and shaming this company. It's just horseshit to jerk people around like that.
I hope you can at least leverage this demo. Maybe remove the identifications of it and shove it into your CV as a "hobby project"? It looks pretty good for that.
Best!
Thanks man, I'm pretty much forced to do exactly that.
Their hiring process seems absolutely absurd.
They probably think they are geniuses who "weeded out another AI guy!" High fives all around! It was a great process (for me) right up until it wasn't.
how much did this job pay?
60k-120k euros. The upper 20k probably being entirely inaccessible so in reality probably like 70-100k euros.
It's always these low pay jobs that have the sloppiest interview experiences
I find it's less about the salary than it is about the type of company. Any startup doing anything they consider remotely "cutting edge" is probably going to be a shit show.
In at least parts of Europe, 70k-100k is pretty good for a mid/senior developer.
It’s the market rate in my city in Germany (not Berlin not Munich). I pivoted from non CS academia and entered software at 73k
It's not really memorizing solutions. Yes, you can get quite far by doing so, but follow-ups will trip people up. However, if you have memorized it and can answer follow-ups, I don't see a problem with Leetcode-style problems. Problem solving is about pattern matching, and the more patterns you know and can match against, the better your ability to solve problems.
It's a learnable skill and better to pick it up now. Personally I've solved Leetcode-style problems in interviews which I hadn't seen before, and some of them were dynamic programming problems.
These days it's a highly learnable skill, since GPT can solve many of the problems while also coming up with very good explanations of the solution. Better to pick it up than not.
It is and isn't. I'd argue it's not memorizing exact solutions (think copy-paste) but memorizing the fastest algos to accomplish X.
And some people might say, well, you should know that anyway. The problem for me is, and I'm not speaking for every company of course, you never really use a lot of this stuff in most run-of-the-mill jobs. So of course you forget it, then have to study again pre-interview.
Problem solving is the best way to think of it, but it's awkward for me (and probably others) to spend minutes thinking, feeling pressured as someone just stares at you. And that's where memorizing the hows of typical problems helps.
That said, I just stopped doing them altogether. I'd passed a few doing the 'memorizing' described above, only to start the job and realize it wasn't at all relevant to the work we were actually doing. In that way I guess it's a bit of a two-way filter now.
The only part of memorizing the fastest algorithm that the vast majority needs is whatever name it goes by in your library. Generic reusable code works very well in almost any language for algorithms.
Even if you are an exception either you are writing the library meaning you write that algorithm once for the hundreds of other users, or the algorithm was written once (long ago) and you are just spending months with a profiler trying to squeeze out a few more CPU cycles of optimization.
There are more algorithms than anyone can memorize that are not in your library, but either it is good enough to use a similar one that already is your library, or you will build it once and once again it works so you never go back to it.
Which is to say memorizing how to implement an algorithm is a negative: it means you don't know how to write/use generic reusable code. This lack is costing your company hundreds of thousands of dollars.
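To make that concrete, a minimal Python sketch (my example, not the parent's): the "fastest algo" knowledge most of us actually exercise is knowing which library call to reach for.

    import bisect, heapq

    scores = [12, 40, 41, 55, 73]          # already sorted
    # Binary search: an interview staple, one library call in practice.
    idx = bisect.bisect_left(scores, 41)   # -> 2

    latencies = [87, 3, 19, 42, 5]
    # "Top-k" without writing a selection algorithm yourself.
    fastest_three = heapq.nsmallest(3, latencies)   # -> [3, 5, 19]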
I’d say it’s not even problem solving and it’s more pattern recognition.
I actually love LC and have been doing a problem a week for years. Basically I give myself 30 minutes and see what I can do. It's my equivalent to the Sunday crossword. After a while the signals and patterns became obvious, to me anyway.
I also love puzzlerush at chess.com. In chess puzzles there are patterns and themes. I can easily solve a 1600 rated problem in under 3 seconds for a chess position I’ve never seen before not because I solve the position by searching some move tree in my mind, I just recognize and apply the pattern. (It also makes it easier to trick the player when rushing but even the tricks have patterns :)
That said, in our group we will definitely have one person ask the candidate a LC style question. It will probably be me asking and I usually just make it up on the spot based on the resume. I think it’s more fun when neither one of us know the answer. Algorithm development, especially on graphs, is a critical part of the job so it’s important to demonstrate competency there.
Software engineering is a hugely diverse field now. Saying you’re a programmer is kinda like saying you’re an artist. It does give some information but you still don’t really know what skill set that person uses day to day.
> memorizing fastest algos
I don't think most LC problems require you to do that. Actually, most of the ones I've seen only require basic concepts taught in Introduction to Algorithms, like shortest path, dynamic programming, and binary search. I think the only reason LC problems stress people out is the time limit.
I've never seen a leetcode problem that requires you to hand-code an ever-so-slightly exotic algorithm or data structure like a Fibonacci heap or Strassen matrix multiplication. The benefit of these "fastest algos" is too small to be measured by LC's automatic grading anyway. Has that changed?
My personal issue with LC is that it has a very narrow view of what "fast" programs look like, like most competitive programming problem sets. In the real world, programs are fast usually because we distribute the workload across machines, across GPU and CPU, have cache-friendly memory layouts, or sometimes just design clever UI tricks that make slow parts less noticeable.
> you never really use a lot of this stuff in most run of the mill jobs. So of course you forget it, then have to study again pre interview.
I'm wondering how software devs explain this to themselves. What they train for vs. what they actually do at their jobs diverges more and more with time. And this constant cycle of forgetting and re-learning sounds like a nightmare. Perhaps people burn out not because of their jobs but because of the system they ended up in.
"Fastest algos" very rarely solve actual business problems, which is what most of us are here to do. There's some specialized fields and industries where extreme optimization is required. Most of software engineer work is not that.
I'm fine with that in an interview... I'm not fine with it in a literally AI-graded assignment where you cannot ask clarifying questions. In those cases, if you don't have a memorized answer, a lot of the time I can't even grasp the question at hand.
I've been at this for 30+ years now, I've built systems that handle millions of users and have a pretty good grasp at a lot of problem domains. I spent about a decade in aerospace/elearning and had to pick up new stuff and reason with it all the time. My issue is specifically with automated leetcode pre-interview screening, as well as the gamified sites themselves.
I'd say that learning to solve tough LeetCode problems has very little (if not precisely zero) value in terms of learning to do something useful as a programmer. You will extremely rarely need to solve these types of tougher select-the-most-efficient-algorithm problems in most real-world S/W dev jobs, and nowadays if you do, then just ask AI.
Of course you may need to pass an interview LeetCode test, in which case you may want to hold your nose and put in the grind to get good at them. But IMO it really doesn't say anything good about the kind of company that thinks this is a good candidate filter (especially for more experienced candidates), since you'd have to be stupid not to use AI if actually tasked with solving something like this on the job.
If a position needs performance-critical, low-level, from-scratch code, and needs it so quickly that the developer must recall all of this stuff from memory, the candidate likely wouldn't be put through a generic technical interview at all, let alone some gotcha test.
Ironic that you’re touting these puzzles as useful interviewing techniques while also admitting that ChatGPT can solve them just fine.
If you’re hiring software engineers by asking them questions that are best answered by AI, you’re living in the past.
That was because the parent complained about not having good write-ups. You can use GPT, which has already been trained on publicly available solutions, to generate a very good explanation, like a coaching buddy. Keeping in mind there are paid services that charge 15k USD for this type of thing, being able to upskill for just 20 bucks a month is an absolute steal.
Few people are in both circles of "can memorize answers" and "don't understand what they are doing".
You would need "photographic" memory
It's bizarre because I see the opposite.
Most people memorize and cargo cult practices with no deeper understanding of what they are doing.
Been in software development for 30 years. I have no idea what "Leetcode" is. As far as I know I've never been interviewed with "Leetcode", and it seems like I should be happy about that.
And when someone uses "leet" when talking about computing, I know that they aren't "elite" at all and it's generally a red flag for me.
Leetcode with no prep is a pretty decent coding skill test
The problem is that it is too amenable to prep
You can move your score by something like 2 standard deviations with practice, which makes the test almost useless in many cases
On good tests, your score doesn't change much with practice, so the system is less vulnerable to Goodharting and people don't waste/spend a bunch of time gaming it
I think LC is used mostly as a metric of how much tolerance you have for BS and unpaid work: if you are willing to put in unpaid time to prepare for something with realistically zero relevance to the day-to-day duties of the position, then you are ripe enough to be squeezed out.
Cynical, but correct. I've long maintained that these trials, much like those we encounter in the school system, are only partially meant to test aptitude. Perhaps more importantly, they measure submissive compliance.
It selects for age and childlessness.
And experience selects for age as well, doesn't make it a bad signal.
> On good tests, your score doesn't change much with practice, so the system is less vulnerable to Goodharting and people don't waste/spend a bunch of time gaming it
This framing of the problem is deeply troubling to me. A good test is one that evaluates candidates on the tasks that they will do at the workplace and preferably connects those tasks to positive business outcomes.
If a candidate's performance improves with practice, then so what? The only thing we should care about is that the interview performance reflects well on how the candidate will do within the company.
Skill is not a univariate quantity that doesn't change with time. Also it's susceptible to other confounding variables which negatively impact performance. It doesn't matter if you hire the smartest devs. If the social environment and quality of management is poor, then the work performance will be poor as well.
leetcode just shows why interviews are broken. As a former senior dev (retired now, thanks to almost dying) I can tell you that the ability to write code is like 5% of the job. Every interview I've ever attended has wasted gazillions of dollars and has robbed the company of 10X that amount.
Until companies can focus on things like problem solving, brainstorming, working as a team, etc. the situation won't improve. If I am wrong, why is it that the vast majority of my senior dev and dev management career involved the things I just mentioned?
(I had to leave the field, sadly, due to disability)
Oh, and HR needs to stop using software to filter. Maybe ask for ID or something; as it stands, the filters are flagging everyone and the software is sinking the ship, with all of you on it.
> My biggest problem with leetcode type questions is that you can't ask clarifying questions.
What is there to clarify? Leetcode-type questions are usually clear, much clearer than in real life projects. You know the exact format of the input, the output, the range for each value, and there are often examples in addition to the question. What is expected is clear: given the provided example inputs, give the provided example outputs, but generalized to cover all cases of the problem statement. The boilerplate is usually provided.
One may argue that this is one of the reasons why leetcode-style questions are unrealistic: they are too well specified compared to real-life problems, which are often incomplete or even wrong and require you to fill in the gaps. Also, in real life, you may not always get to ask for clarification: "here, implement this", "but what about this part?", "I don't know, and the guy who knows won't be back before the deadline, do your best".
The "coin" example is a simplification, the actual problem statement is likely more complete, but the author of the article probably felt these these details were not relevant to the article, though it would be for someone taking the test.
These interviews seem designed to filter out applicants with active jobs. In fact, I'd say that they seem specifically for selecting new CS graduates and H1B hires.
Skill? LC is testing rote memorization of artificial problems you most likely never encounter in actual work.
> My biggest problem with leetcode type questions is that you can't ask clarifying questions. My mind just doesn't work like most do, and leetcode to some extent seems to rely on people memorizing leetcode type answers. On a few, there's enough context that I can relate real understanding of the problem to, such as the coin example in the article... for others I've seen there's not enough there for me to "get" the question/assignment.
The issue is that leetcode is something you end up with after discovery + scientific method + time, but there's no space in the interview process for any of that.
Your mind slides off leetcode problems because it reverses the actual on-the-job process and loses any context that'd give you a handle on the issue.
Where I interviewed, you had effectively 1 or 2 LC questions, but the interviewer invited clarifying questions, making for a real-time discussion and coding exercise.
This solves one problem, but having to live-code does add performance anxiety to the mix.
IMO leetcode has multiple problems.
1. People can be hired to take the test for you - surprise surprise.
2. It is akin to deciding if someone can write a novel from reading a single sentence.
Hiring people to take the test is only viable for an online assessment. For an onsite, it's very obvious if the candidates have cheated on the OA. I've been on the other side and it's transparent.
> It is akin to deciding if someone can write a novel from reading a single sentence.
For most decent companies, the hiring process involves multiple rounds of these challenges along with system design. So it's like judging writing ability by having candidates actually write and come up with sample plots. Not a bad test.
If they are on site, why not interview them? If the purpose of these online assessments is to be the mouth of the funnel, that process is starting to fail.
https://www.reddit.com/r/leetcode/comments/1mu3qjt/breaking_...
There are funded companies set up just to help you get past this stuff.
https://www.reddit.com/r/leetcode/comments/1iz6xcy/cheating_...
Personally I feel software development has become more or less like assembly line work. If I was starting out today I would seriously consider other options.
> My biggest problem with leetcode type questions is that you can't ask clarifying questions.
Huh? Of course you can. If you're practicing on leetcode, there's a discussion thread for every question where you can ask questions till the cows come home. If you're in a job interview, ask the interviewer. It's supposed to be a conversation.
> I wouldn't even mind the studying on leetcode types sites if they actually had decent explainers
If you don't find the hundreds of free explanations for each question to be good enough, you can pay for Leetcode Pro and get access to editorial answers which explain everything. Or use ChatGPT for free.
> It's not a matter of skill, it's just my ability to take in certain types of problems doesn't work well.
I don't mean to be rude, but it is 100% a matter of skill. That's good news! It means if you put in the effort, you'll learn and improve, just like I did and just like thousands and thousands of other humans have.
> Without any chance of additional info/questions it's literally a setup to fail.
Well with that attitude you're guaranteed to fail! Put in the work and don't give up, and you'll succeed.
Last year, I saw a lot of places do effectively AI/automated pre-interview screenings with a leetcode web editor and video capture... This is what I'm talking about.
I'm fine with hard questions in an actual interview.
> My biggest problem with leetcode type questions is that you can't ask clarifying questions.
Yeah, this one confused me. Not asking clarifying questions is one of the surest ways of failing an interview. Kudos if the candidate asks something that the interviewers haven't thought of, although that's rare, as most problems go through a vetting process (along with leak detection).
How does asking clarifying questions work when a non-programmer is tasked with performing the assessment, because their programmers are busy doing other things, or find it degrading and pointless?
Many interviews now involve automated exercises on websites that track your activity (don't think about triggering a focus change event on your browser, it gets reported).
Also, the reviewer gets an AI report telling them whether you copied the solution from somewhere (expressed as a % probability).
You have a few minutes and you're on your own.
If you pass that abomination, maybe, you have in person ones.
It's ridiculous what software engineers impose on their peers when hiring; ffs, lawyers, surgeons, and civil engineers get NO practical nor theoretical test, none.
The major difference between software devs and lawyers, surgeons, and civil engineers is that the latter three have fairly rigorous standards to pass to become a professional (bar, boards, and PE).
That could exist for software too, but I'm not sure HN folks would like that alternative any better. Like if you thought memorizing leetcode questions for 2 weeks before an interview was bad, well I have some bad news.
Maybe in 50-100 years software will have that, but things will look very different.
You ain't interviewing your plumber or accountant, come on, and I have millions of other examples.
Accountants have to sit for the CPA exams (four of them), and depending on the state may have required graduate course load. And also you should interview your CPA, because a lot are not very good at whatever specific section of accounting you need (e.g. tax filing).
Plumber is probably the closest to what you're getting at. They are state licensed typically, with varying levels of requirement. But the requirement is often just like "have worked for 2-4 years as a trainee underneath a certified plumber" or whatever. That would be closest to what I'm guessing you would be recommending?
Also relevantly: the accountant and plumber jobs that are paying $300k-$500k+ are very rare. There exist programming jobs that pay what a typical plumber makes, but don't have as many arcane interview hoops to jump through.
At least in the US, lawyers, surgeons, & civil engineers all have accredited testing to even enter the profession, in the form of the bar exam, boards, and FE & PE tests respectively. So they do have such theoretical tests, but only when they want to gain their license to practice in a given state. Software doesn't have any such centralized testing accreditation, so we end up with a mess.
"don't think about triggering a focus change event on your browser, it gets reported)."
So .. my approach would be to just open dev tools and deactivate that event.
Show of practical skill or cheating?
Switching to devtools also triggers a focus change and is detectable by other means (such as repeatedly invoking a debugger statement).
One can type in devtools without having the focus on devtools, but indeed, to track down the event, one has to lose focus for a while. But after you find out what line of JS is needed, you can just inject it without devtools, with Greasemonkey for instance.
But probably a general solution exists ... and there are actually extensions that will do that in general.
The ones I've gotten have all seemed more like tests of my puzzle-solving skills than coding.
The worst ones I've had, though, had extra problems:
One I was only told about when I joined the interview, and that they would be watching live.
One where they wanted me streaming my face the whole time (maybe some people are fine with that).
And one that would count it against me if I tabbed to another page. So no documentation, because they assume I'm just googling it.
Still, it's mostly on me to prepare for and expect this stuff now.
You can make up API calls which you can say you'd implement later. As long as these are not tricky blocks, you'll be fine.
For Google, Facebook and Amazon, yes. At least last I interviewed there a few years ago. They're more interested in the data structure/algorithm
But I have also been to places that demand actual working code, which is compiled and tested against test cases
Usually there the problem is simpler, so there's that
I feel like if I'm being asked this in an interview, they're not asking me to use a constraint solver, they're asking me to _write_ a constraint solver. Just for a specific constraint problem, not a more general constraint solver.
You're right, but that just shows how fundamentally silly this interview approach is.
In any real engineering situation I can solve 100% of these problems. That's because I can get a cup of coffee, read some papers, look in a textbook, go for a walk somewhere green and think hard about it... and yes, use tooling like a constraint solver. Or an LLM, which knows all these algorithms off by heart!
In an interview, I could solve 0% of these problems, because my brain just doesn't work that way. Or at least, that's my expectation: I've never actually considered working somewhere that does leetcode interviews.
I was told to use ANY language in an interview. I asked them if they were sure, and then solved it with J. They were not too pleased and asked me if I could use another language, so I did Prolog and we moved on to the next question. Then the idiot had the audacity to say I should not use "J and Prolog" but any commonly known language. I asked if assembly was fine, and they said no, perhaps Python or JavaScript. I did the rest in Python; needless to say I didn't get the job. :-)
You're a hero!
Reminds me of https://aphyr.com/posts/340-reversing-the-technical-intervie... (and the follow-ups to it)
We had a programming language class at college and wrote the same program in everything from Java to Lisp. The lisp was way nicer.
I find it hilarious when people brag about stupid shit like that. Congrats on sabotaging your own interview process I guess??
If the candidate asks if you're sure you want them to use any language and you say "yes", and then get pissy when they do, the candidate isn't the one who sabotaged anything and they're dodging a bullet if they "fail".
I feel like I'm entering a whole different universe on HN. Maybe things are this equal and fair on the senior, high-paying part of the spectrum that most people here seem to occupy, but in general there's a huge power imbalance in job interviews. Unless you're special and the company wants you in particular, it costs them nothing to turn you down in favor of the other 10000 perfect applicants, while you must find a job to survive.
As someone just starting out, the general feeling among my peers is that I must bend to the interviewer's whims, any resistance or pushback will get you rejected. If this is dodging a bullet, then the entire junior field is a WW1 trench, at least where I am. Why would a company hire someone who gets 9/10 on the behavioral portion when they have a dozen other 10/10 candidates? Of course when the interviewer asks me to use "any language", I'll assume they want Python or Java or C++ or Rust, not Bash or ALGOL 68. Stepping out of line would just be performatively asking them to reject me.
> I'll assume they want Python or Java or C++ or Rust, not Bash or ALGOL 68.
I've solved interview questions with one line of Bash before and gotten an offer. The question was something like "count all the files in this folder with a name ending in X". The interviewer was happy I had a quick solution and they could move on to talking about something more interesting.
I agree that doing that without asking if they really mean "any" would in fact demonstrate traits that might be bad for a co-worker.
If the candidate suspects this may be the case, asks precisely for that reason, and the interviewer confirms that they mean "any", then it's a red flag about that interviewer, at least as a co-worker, if they go on to get upset over your choice; unless it's something where you're obviously taking the piss, like Brainfuck (the later suggestion of assembly probably counts as this, but at that point the interviewer[s] had already failed the interviewee's test of them, so, whatever).
But yes, if you're desperate for a job you should indeed just ignore any red flags and do your best to fit the perfect-cog mold and do whatever emotional labor is required to seem the way you think they want you to be, and take whatever abuse they offer with a smile. That's true.
Yeah, I don't mean to justify the actions of the interviewer, they were likely in the wrong here. It's just that, to someone in my position, it seems almost funny to be willing to throw the entire interview over something like that. It's them who gets to decide your fate.
Also, we can't know what exactly was said, so maybe miscommunication could be partly to blame. Like, "Are you sure I can use any language? (Are you really so gracious as to give me this option?)" vs. "Are you sure I can use any language? (Can I use something you definitely don't know?)"
> Of course when the interviewer asks me to use "any language", I'll assume they want Python or Java or C++ or Rust, not Bash or ALGOL 68.
When I did interviews, I used to ask for “any imperative language”. Most people chose C or Java, some chose e.g. Python and the best solutions looked very different from the C/Java ones. I did not deduct points for either; a good solution is a good solution.
I once had a candidate that chose Oberon, because it was the only language they felt comfortable with (by their own account). They fell through on the interview for other reasons, but this seriously made me consider to what degree they had any programming experience at all outside a few select school assignments.
Independent of that, if someone came with a solution in a constraint solver, my next question would be (as it usually was, regardless of approach) “and what is the runtime complexity of your solution?” and I'd be impressed if they had any nonobvious thoughts about that!
> the general feeling among my peers is that I must bend to the interviewer's whims
This is just conflict avoidance and naivety. After a while you start to realize that there's a whole world of people just like on HN and *we hire people too*. No matter what you do, you'll end up in the place you deserve. If you try to be sneaky, you will end up working for people who are either easily fooled or see right through how to exploit you. If you let your nerd shine, you'll end up with people who love your nerdiness.
> After a while you start to realize that there's a whole world of people just like on HN and we hire people too. No matter what you do, you'll end up in the place you deserve.
I mean, I'm hoping for that too. But it also feels like this only applies as long as there's a balance of likeminded people who are already in the industry vs. the people looking to get a job. For someone like me, without a real network, meeting a person like the kind you mention is extremely unlikely. Even then, most of these people are looking for more qualified candidates, since there's an overabundance of juniors and seniority is a good predictor for being really passionate about their field. So, maybe I'll figure that out someday, but right now I just need a job, and what people in my cohort do is a way to try and get a job at all costs.
When I say "any language" when interviewing candidates, I mean it. I would be stoked if someone busted out J in an interview.
Of course, my team also writes SDKs in a bunch of different languages, so it makes sense. Even if that weren't the case though, I'd be stoked. To your point though, early in your career, I get your viewpoint. It's hard out there to get a foot in the door and you have to seize opportunities.
> As someone just starting out, the general feeling among my peers is that I must bend to the interviewer's whims, any resistance or pushback will get you rejected.
But interviews are bidirectional. The company is deciding if they want me, and I'm deciding if I want them. If I choose to use Self or Forth as the whiteboard context for the conversation we're having, it's deliberately to make the interviewer think, and hopefully learn. If the experience of thinking differently about a problem (that they chose!) and learning something new is a negative signal to them, that's fine: it being a negative signal to them is a negative signal to me, and I don't want to be there anyway! If they're excited, and intrigued, and give "12 o'clock" feedback, well, that's the team I want to work with. So I've helped us both accomplish our goals (making accurate assessments about fit), and aligned our metrics along the way.
> Unless you're special and the company wants you in particular, it costs them nothing to turn you down in favor of the other 10000 perfect applicants, while you must find a job to survive.
This is not what you see in practice. Trying to hire, the view is very much different, in my experience. Every candidate has strengths and flaws, it's much more of a... constraint problem!
The idea that there even exists a perfect candidate is one of the biggest issues with hiring practices in tech these days.
I, for one, would be extremely impressed by a candidate breaking out J or Prolog for a constraint problem. But I'm also not a typical hiring manager for sure.
Interviews go both ways ... I don't think they lost out on anything they wanted.
That is what people miss about interviews. Often when you interview, you don't have reasonable leads on any other job, so it doesn't feel like there is a choice, since you likely need a job (unemployment rarely pays as well as a job). However, interviews are not only about the company deciding whether to hire you; they are also about whether you want to work there, and about convincing you to take the job if one is offered.
So make sure you use those "do you have any questions" time to ask questions! What is it really like to work there. How much notice do you need to give before taking vacation? Do they really give pay raises? How often do they lay people off? What is the dress code? Do they let you take time for your kids school activities? And so on - these questions should be things that are important to you - find out.
In the best cases the interview is only about convincing you to take the offer - generally because someone who you worked with at a previous job said "hire this person" and they trust that person enough to not need any other interview. So keep your network open.
People don't miss that about interviews, they just know that the balance of power is so skewed that the interests of the employer become the only relevant part. The employer can keep going through hundreds of applicants until they find someone who's literally perfect in every single way, they have nearly unlimited time. Meanwhile, the applicants need a job now, any job at all, they're on a hard time limit until their money runs out.
I feel like in practice, unless you're an established, senior professional in a high-paying, in-demand field with a network to rely on, this would go something like:
> What is it really like to work there. How much notice do you need to give before taking vacation? Do they really give pay raises? How often do they lay people off? What is the dress code? Do they let you take time for your kids school activities?
"Candidate ABC seems too demanding and picky, constantly inquiring about irrelevant specifics. They would be a bad fit for our company culture. I advise going with candidate XYZ instead."
> they just know that the balance of power is so skewed that the interests of the employer become the only relevant part
That happens when people only apply to very well-paying jobs. If you apply to shit enough jobs they won't be asking hard questions, and those who offer shit jobs will say "all the power lies with the employees, I have no power to make them stay or apply; I am social and nice to them and they still reject my job offer!".
Just give the companies what they want and they will all want you; it is that easy. If you try to give them something they don't care about, like a hiring manager offering a smile and minimum wage, of course you will get rejected a lot. Give them what they ask for, not what you think they should want.
Maybe in some companies. No interviewer I've talked to has ever considered those questions a negative. Most don't even think about them at all once the interview is over. Of course, I've always worked in companies where people work their 8 hours and go home to their family, so you would be a good fit (depending on what you asked).
I know applicants need the job more than the company needs them. However, you still have options if you don't get this one; you should always be following several leads until you finally get a job. Odds are your other leads are nowhere near as advanced as this one, but if you can wait a couple more months you have a chance.
Unless you are really desperate to find a job, there are definitely workplaces you would want to avoid. While a power imbalance does in principle exist, that doesn't mean you usually have no choice at all. Of course that is less the case when you are just starting out, but in general people can go around doing interviews and negotiating positions rather than just accepting the first offer.
I have to push back on the unlimited amount of time thing. Maybe in FAANG that’s true but in the places I’ve worked for, hiring is something that comes down from on high - someone tells us they need N bodies for some project, and we need to have a team hired by some deadline. We really can’t interview endlessly.
I don't mean that you're literally allowed to run interviews for years. I mean that companies can, if they choose to, interview people indefinitely until they find a suitable candidate. The company won't collapse if they don't find an employee by the deadline; it's not imperative to their existence, it's just a nice-to-have, a goal. Maybe some project or initiative doesn't pan out or gets pushed back if no one gets hired, but the impact of all that seems rather limited. On the other hand, my existence is fully contingent on finding a job, and if I overrun my deadline for finding one, I won't be able to eat and pay rent. My time limit is existential; their time limit is artificial and fully in the realm of planning.
It's also very expensive to interview, since you're typically paying people who make over $100 an hour to interview people and review their code.
> So make sure you use those "do you have any questions" time to ask questions!
I started giving interviews again and I'm surprised how many people don't ask anything. I'm an IC, not a hiring manager, and only evaluating a specific thing (the technical assessment), and still, nothing really.
It just goes to show how skewed the power balance is right now. People are probably afraid to make an extra move that can deduct points for any obscure reason.
When I interview people I encourage them to ask any question they want and I make damned sure it doesn't reflect in my report to the higher-ups! Just imagine being in their shoes, you could be in the same position tomorrow!
Use the right tool for the job. That's engineering.
Instead you insist we should solve a niche problem with an ill-suited tool, inventing a custom solution when a standard solution exists.
This kind of tradeoff discussion is good to explicitly call out in an interview. I often say things like "if this were my own project I'd use X, but on a team I would probably try to find a library in a language the team already uses".
Bringing the team up on Prolog and integrating it into your CI/CD system and finding some way to connect it with other services is often going to be a poor choice, even if in isolation it's the very best tool for the job. And that's the best case solution - more likely the tests will be limited and not automated, the code review will be rubber stamp because only the author knows the language, and the code and deploy process will be a black box that everyone is afraid to touch once the author moves on.
Obviously in an interview none of the code should make it into production, but being openly pragmatic is still a good idea. And if you use an obscure language, you'd better have better than usual communication skills to concisely explain how the code works for someone who hasn't used that language before. I've seen it done well but it's difficult.
They dodged a bullet. It would have been hell working there.
Why would you ever want to work somewhere that clearly employs such unqualified individuals? And not only that, but allows those individuals to be the face of their company to prospective hires?
A company's interview process tells you a lot about how the company thinks and operates. This was was surely a dumpster fire.
> Why would you ever want to work somewhere that clearly employs such unqualified individuals
Because you're unemployed and need to work to get some money.
Do you think you're a super intelligent person when you couldn't even figure that out?
It goes without saying that someone needing money that badly wouldn't do what the OP here did. Stop trying to be right and start trying to see the world for what it is. It'll help you do better.
What's the point of doing well if you already determined you wouldn't even look at their offer?
Sabotaging? The candidate learned that their interviewers, and probably the company as a whole, isn't curious about languages or stuff that is outside of their wheelhouse.
What if the interviewers decided to ask the candidate about their language choice and trade-offs between different languages? Wouldn't that actually give them more signals into the skill of the engineer, rather than just blindly following their script?
I haven't been asked leetcode questions in a while and when I was asked, it was an easy level problem. I don't know where they ask hard leetcode problems, I also never solved a hard leetcode problem on my own.
The purpose of coding questions should be a problem that you can solve in about 20 minutes; then they ask another, and then you get 20 minutes to either finish or talk about other things. If you ask questions where someone either knows the trick and passes, or doesn't and fails, you don't learn much. You need to watch the person write code to see if they are reasonable about it.
I interviewed at an investment bank in London and they asked me pretty hard questions. One was to implement some multithreaded producer consumer thing in C++. I can't remember the details but it was... well you know how writing multithreaded C++ is. I was allowed to look up references at least. Took me maybe 20 minutes and the whole time the interviewer was just sitting on his phone while I wrote it.
Weird experience. Didn't get that job (probably for the best tbf).
If you wrote an MPSC queue (standard question) with multithreaded demo in 20 minutes in C++ you’re pretty hot shit, mate. Their loss. It’s not that it’s hard. But that speed without error is just really good. C++ is particularly unforgiving too.
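For reference, a sketch of the shape of that exercise, though in Python rather than C++, where the standard library's thread-safe queue does the synchronization the interview presumably wanted built by hand:

    import threading, queue

    q = queue.Queue(maxsize=16)     # bounded, thread-safe FIFO
    DONE = object()                 # sentinel to stop the consumer

    def producer(n):
        for i in range(n):
            q.put(i)                # blocks while the queue is full
        q.put(DONE)                 # single producer here; multiple
                                    # producers would need one sentinel
                                    # each, or a shared counter

    def consumer():
        while (item := q.get()) is not DONE:
            print("consumed", item)

    threads = [threading.Thread(target=producer, args=(5,)),
               threading.Thread(target=consumer)]
    for t in threads: t.start()
    for t in threads: t.join()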
He didn't get the job, so chances are it wasn't correct.
I can't remember the exact problem or how long it took but it was definitely some awkward multithreading. I'd rate my C++ as pretty good but probably not hot shit!
I was once asked fizz buzz in an interview and it made me sad that some people don't pass it.
I guess when you're brand new you don't know about the mod operator?
Yeah I've interviewed people who didn't.
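For anyone who hasn't seen it, the canonical fizz buzz is just the mod operator plus branching (a minimal sketch):

    for i in range(1, 101):
        if i % 15 == 0:          # divisible by both 3 and 5
            print("FizzBuzz")
        elif i % 3 == 0:
            print("Fizz")
        elif i % 5 == 0:
            print("Buzz")
        else:
            print(i)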
I'm routinely asked LC Hard questions in interviews. Sometimes more than one in one 45 minute interview.
That said, I interview in silicon valley and I'm a mixed race American. (extremely rare here) I think a lot of people just don't want me to pass the interview and will put up the highest bar they can. Mind you, I often still give optimal solutions to everything within good time constraints. But I've practiced 1000+ problems and done several hundred interviews.
This is not how it works. The interviewer knows 1-2 problems and there is no time for profiling candidates, since they are rushing through their day, probably focused on their day-to-day work. You are the least of their concerns, believe me.
Source: I am a hiring manager.
Do you interview at startups?
Yes. I interview at everything from pre-seed to FAANG.
Depends on your experience and what you’re interviewing for. At a high enough level, the questions are pulled from the easier side, and the interviewer doesn’t want you to fail.
More exactly, you can't invent on the spot algorithms that took others who knows how many years to invent. I.e., the question ends up being about whether you know a specific algorithm, which turns into "invent it if you don't know it". It's absolutely silly to test for the ability to invent one on the spot, so it's a pretty pointless interview question really.
You can for simple algorithms. It's just really easy for interviewers to overestimate how simple an algorithm is when they have been told the answer.
Yeah, that's exactly the point. These kinds of algorithms are far from easy to invent, even if they look simple once they are known.
I hate when a question asks for a memorized specific problem, but most of the hard ones I've found need a clever twist on a well-known algorithm, and I still struggle with that too for hard LC.
This. Literally every problem in NP can be cast as a constraint problem. Whether a solver is the right solution varies a lot depending on the application, and in an interview, it's almost by definition not the right solution.
They can also be dreadfully slow (and typically are) compared to just a simple dynamic program.
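For a concrete point of comparison (my stand-in example, not the parent's): minimum-coin change is the kind of "simple dynamic program" meant here.

    def min_coins(coins, amount):
        # dp[a] = fewest coins summing to a; O(amount * len(coins)) time.
        INF = float("inf")
        dp = [0] + [INF] * amount
        for a in range(1, amount + 1):
            for c in coins:
                if c <= a and dp[a - c] + 1 < dp[a]:
                    dp[a] = dp[a - c] + 1
        return dp[amount] if dp[amount] != INF else -1   # -1: unreachable

    print(min_coins([1, 5, 12], 16))   # 4  (5 + 5 + 5 + 1)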
LeetCode problems typically are in P. The challenge is finding out why.
Yeah, I think the trivial solution always has worse complexity and the main challenge is to lower it, either from NP to P or from n^2 to n log n.
This will be true in some interviews, but not in all.
I'm generally against using leetcode in interviews, but wherever I've seen it used it's usually for one reason & one reason alone: known dysfunctional hiring processes. These are processes where the participants in the hiring process are aware of the dysfunction in their process but are either powerless or - more often - too disorganised to properly reform the process.
Sometimes this is semi-technical director level staff leveraging HR to "standardise" interview techniques by asking the same questions across a wide range of teams within a large corp. Other times this is a small underresourced team cobbling together interview questions from online resources in a hurry, not having the cycles to write a tailored process for themselves.
In these cases, you're very likely to be dealing with a technical interviewer who is not an advocate of leetcode interviewing & is attempting to "look around" the standardised interview scoring approach to identify innovative stand out candidates. In a lot of cases I'd hazard even displaying an interest in / some knowledge of solvers would count significantly in your favour.
If someone solves a leetcode hard with a constraint solver and you don't hire them, you are an idiot.
Do you know how few people in this world even know what a constraint solver is, let alone how to correctly define the problem into one?
I used a constraint solver to solve a homework problem once in my CS degree 3rd year. My god just writing the damn constraints was a huge cognitive load!
I did this: wrote an Essence-prime program to generate Minion solver code for a simple instance of the knapsack problem, as part of a startup's "solve one of these and get an interview" challenges. Because I had used those tools recently for a contract job (and wrote/presented a paper on invitation of the solver authors), I thought it would be fun, and I didn't really want the job. Got an interview, but every dev was like "why did you use a cannon to swat a fly?" and they were clearly concerned that without strict supervision I would create baroque towers of garbage for them to clean up.
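For the curious, the same exercise with a different toolchain: a minimal sketch using OR-Tools' CP-SAT from Python (my substitution; the parent used Essence-prime/Minion) on a made-up toy knapsack instance:

    from ortools.sat.python import cp_model

    weights, values, capacity = [3, 4, 5, 9], [2, 3, 4, 10], 10

    model = cp_model.CpModel()
    # One boolean decision variable per item: take it or not.
    take = [model.NewBoolVar(f"take_{i}") for i in range(len(weights))]
    model.Add(sum(w * t for w, t in zip(weights, take)) <= capacity)
    model.Maximize(sum(v * t for v, t in zip(values, take)))

    solver = cp_model.CpSolver()
    if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
        print([i for i, t in enumerate(take) if solver.Value(t)])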
I would like to believe that most people capable of writing a solver would appreciate simple code. It's like when looking at ffmpeg or some physics engine code: you know you'll forget the details easily, so you make sure everything is as simple as it can be.
> If someone solves a leetcode hard with a constraint solver and you don't hire them, you are an idiot.
I do hope you're exaggerating here, but in case you aren't: this is an extremely simplistic view of what (software) engineers have to do, and thus of what hiring managers should optimize for. I'd put "ability to work in a team" above "raw academic/reasoning ability" for the vast majority of engineering roles, any day.
Not that the latter doesn't matter, of course, but it's by no means the one and only measure.
> I'd put "ability to work in a team" above "raw academic/reasoning ability" for the vast majority of engineering roles, any day.
In this hypothetical, why do you do leetcode hard interviews?
> why do you do leetcode hard interviews?
I don't. I do easy code interviews, because there are people who work great on a team and know enough buzzwords to sound like they know how to write code, but cannot. I pick something that isn't hard to solve in about 20 minutes (I can solve it in 5, but only because I've seen a solution several times and don't have to think about it), yet different enough that you can't have memorized the solution. If you can't solve an easy problem, then you can't code.
> In this hypothetical, why do you do leetcode hard interviews?
I thought I already answered that:
>> Not that the latter doesn't matter, of course, but it's by no means the one and only measure.
One can be gifted while still producing code that the rest of the team can read.
Maybe because they are simpler to practice than working in a team?
OK, but obviously this presupposes a job where the hiring process is focused on leetcode.
Hey I'm with you 100% about the idea of code-interviews/leetcode being a problem and the importance of culture-fit and ability to work on a team.
I should have said "if you deemed this a fail on the code interview, you are an idiot".
I've won a couple of hackathons with just CP-SAT & Linear Programming, which led to my first jobs. I'm surprised more people don't know/use it. Very inefficient compared to the "correct" answer, but the development speed is much faster.
> If someone solves a leetcode hard with a constraint solver and you don't hire them, you are an idiot
Sometimes you just don't want someone who takes these shortcuts. I think being able to solve the problem without a constraint solver is much more impressive.
This - the only downside to a constraint solver is that it's usually slower. If you want them to write a fast algorithm, then specify that, and have an actual metric for it. If they can pass it with the declarative approach, great; if not, they should have written a more specialized algorithm.
Yes and no: I've asked questions like this in interviews, and I'd count it as a plus if the candidate reached for a constraint solver. They're criminally underused in real-world software engineering and this would show the candidate probably knows how to get the right answer faster instead of wasting a bunch of time.
Now, if they did answer with a constraint solver, I'd probably ask some followup whiteboard questions to make sure they do actually know how to code. But just giving a constraint solver as an answer definitely wouldn't be bad.
Yes, especially if the interviewee said something like "this may not be asymptotically optimal, but if it's not a known bottleneck, then I might start with a constraint solver to get something working quickly and then profile later." Especially if it's a case where even the brute-force solution is tricky.
Otherwise penalizing interviewees for suggesting quick-and-dirty solutions reinforces bad habits. "Premature optimization is the root of all evil," after all.
Using a bad algorithm when a good algorithm that is known to exist is premature pessimization and should be avoided.
There is some debate about what premature optimization is, but I consider it to be about micro-optimizations that are often doing things a modern compiler will do for you, better than you can. All too often such attempts result in unreadable code that is slower, because the optimizer would have done something different but now it cannot. Premature optimization is done without a profiler; if you have a profile of your code and can show a change really makes a difference, then it isn't premature.
On the other hand, job interviews imply time pressure. If someone isn't 100% sure how to implement the optimal algorithm without looking it up, brute force is faster and should be chosen. In the real world, if I'm asked to do something, I can spend days researching algorithms at times (though the vast majority of the time what I need is already in my language's standard library).
> when a good algorithm that is known to exist
Sure, if a good algorithm exists and is simple to implement, then go for it. But if it is non-trivial, then you have to make a judgement call whether it is worth the trouble to solve in a more optimal way. You acknowledge yourself that that this can take days.
Personally I really have to be disciplined about choosing what to optimize vs what to code up quick-and-dirty. There's always a temptation to write clean, custom solutions because that's more interesting, but it's just not a good use of time for non-performance critical code.
IMO premature optimization is normally one of two things:
1. Any optimization in a typical web development file where the process is not expected to be particularly complex. Usually a good developer will not write something very inefficient, and bottlenecks usually come from other areas.
2. Doing stuff like replacing a forEach with a for loop to be 0.5% faster
Constraint solvers (or MILP solvers) while not asymptotically optimal are often as fast or faster than other methods.
It’d be a positive in my book if they used a constraint solver.
A general constraint solver would be terribly inefficient for problems like these. It's a linear problem, and a constraint solver just can't handle O(10^6) variables without some beefy machine.
FWIW, the OP's problem is not linear. It's an integer programming problem.
A trick, if you can't recall a custom algorithm and using a library is not allowed during the interview, could be to be ready to roll your own DPLL-based solver (it can be done in about 30 LOC).
Less elegant, but it's a one-size-fits-all solution.
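A minimal sketch of what that might look like in Python (my own toy version: plain SAT only, naive unit propagation, DIMACS-style integer literals; no SMT):

    def dpll(clauses, assignment=None):
        # Clauses are lists of non-zero ints (DIMACS-style literals);
        # returns a satisfying {var: bool} dict, or None if unsatisfiable.
        if assignment is None:
            assignment = {}
        simplified = []
        for clause in clauses:
            if any(assignment.get(abs(lit)) == (lit > 0) for lit in clause):
                continue                      # clause already satisfied
            rest = [lit for lit in clause if abs(lit) not in assignment]
            if not rest:
                return None                   # clause falsified: backtrack
            simplified.append(rest)
        if not simplified:
            return assignment                 # every clause satisfied
        unit = next((c[0] for c in simplified if len(c) == 1), None)
        lit = unit if unit is not None else simplified[0][0]
        branches = [lit > 0] if unit is not None else [True, False]
        for value in branches:
            result = dpll(simplified, {**assignment, abs(lit): value})
            if result is not None:
                return result
        return None

    # (x1 or x2) and (not x1 or x2) and (not x2 or x3)
    print(dpll([[1, 2], [-1, 2], [-2, 3]]))  # {1: True, 2: True, 3: True}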
You can implement DPLL in 30 lines of code? Not for SMT, I assume.
You'd need a fancy encoding for SAT to use a small DPLL implementation.
Otherwise, customize DPLL for this particular problem.
Okay, but who says you need to use a simple constraint solver? There are various sophisticated constraint solvers that know how to optimize.
At this point, job interviews are so far removed from actual relevance. Experience and aptitude still matter a lot, but too much experience at one employer can ground people in rigid and limiting ways of thinking and solving problems.
O(10^6) = O(1)
no, the "O" here is "on the order of", not Big O notation.
I believe NoahZuniga is perfectly aware of the intent and denouncing an abuse of (unneeded) notation.
What is "Big O" if not literally "order of"?
The O stands for "Ordnung", the German word for order. So it does literally mean that, except mathematicians think that the order of f(x)=1 is the same as the order of f(x)=10^6, because "clearly" f(x)=x gets way bigger than any constant function.
In physics "order of" means "approximately" using something like a taylor series, which typically start with a constant, then move to higher polynomial terms which add smaller and smaller corrections. Similar, but different, I think...
Great insight. But this is sadly not applicable to interviews.
> It's easy to do in O(n^2) time, or if you are clever, you can do it in O(n). Or you could be not clever at all and just write it as a constraint problem
This nails it. The point of these problems is to test your cleverness. That's it. Presenting a not-clever solution of using constraint solvers shows that you have experience and your breadth of knowledge is great. It doesn't show any cleverness.
>The point of these problems is to test your cleverness.
In my experience, interviewers love going to the Leetcode "Top Interview 150" list and using problems in the "Array / String" category. I'm not a fan of these problems for the kinds of jobs I've interviewed for (backend Python mostly), as they are almost always a "give me an O(n) runtime, O(1) memory algorithm over this array" type challenge that really doesn't resemble my day-to-day work at all. I do not regularly write in-place array algorithms in Python, because those problems are almost always handled by other languages (C, Rust, etc.) where performance is critical.
I wish interviewers would go to the "Hashmap" section for interviews in Python, JavaScript, and similar languages. Those problems are much less about cleverness and more about whether you can demonstrate using the appropriate tools in your language to solve problems that actually resemble ones I encounter regularly.
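For example (my illustration, not the parent's), the classic two-sum, which is essentially just a dict lookup:

    def two_sum(nums, target):
        # One pass with a dict mapping value -> index seen so far.
        seen = {}
        for i, x in enumerate(nums):
            if target - x in seen:
                return [seen[target - x], i]
            seen[x] = i
        return []   # no pair found

    print(two_sum([2, 7, 11, 15], 9))   # [0, 1]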
There's also the problem of difficulty tuning on some of these. Problem 169 (Majority Element) being rated "Easy" for the O(n) runtime, O(1) memory solution is hilarious to me. The algorithm that achieves it, first described in 1981 (the Boyer–Moore majority vote algorithm), has its own Wikipedia page. It's not difficult to implement or understand, but its correctness is not obvious until you think about it a bit, at which point you're at sufficient "cleverness" to get a Wikipedia page about an algorithm named after you. Seems excessive for an "Easy" problem.
Interviews should not be about cleverness. They should test that you can code. I almost never write an algorithm from scratch, because all the important algorithms are in my standard library already. Sure, back in school I implemented a red-black tree - I don't remember if it worked, but I implemented it. I can do that again if you need me to, but it will take me several days to get all the details right (most of it looking up how it works again). I use red-black trees all the time, but they are in the language.
You need to make sure a candidate can program, so asking programming questions makes sense. However, the candidate should not be judged on whether they finish or get an optimal or even correct answer. You need to know whether they write good code that you can understand, and whether they are on a path such that, given a reasonable amount of time on a realistic story, they would finish it and get it correct. If someone has seen the problem before, they may get the correct answer; if they have not, they won't know it and shouldn't be expected to get the right answer in an hour.
These tests are programming tests, but also effectively IQ and conscientiousness tests in the same way that most of what people learn in college is pointless, but graduating with a 4.0 GPA is still a strong signal.
I will say, IME, it's pretty obvious when people have seen a problem before, and unless you work at a big company that has a small question pool, most people are not regurgitating answers to these questions but actually grappling with them in realtime. I say this as someone who has been on both ends of this, these problems are all solvable de novo in an hour by a reasonable set of people.
Leetcode ability isn't everything, but I have generally found a strong correlation between Leetcode and the coding aspects of on the job performance. It doesn't test everything, but nothing in my experience of hiring has led me to wanting to lower the bar here as much as raise the bar on all other factors that influence job performance.
Majority Element is rated easy because it can be trivially solved with a hashmap in O(N) space and that's enough to pass the question on Leetcode. The O(1) space answer is probably more like a medium.
Yeah it just depends on whether your interviewer considers that "solved". To test this out, I wrote a one-liner in Python (after imports) that solves it with a hashmap (under the hood, Counter's most_common uses a heap queue to find the top element):
return Counter(nums).most_common(1)[0][0]
And that's 50th percentile for runtime and memory usage. Another one-liner is at the 87th percentile for time, because it uses builtin Python sorting, but at the 20th percentile for memory:
return sorted(nums)[len(nums) // 2]
But the interviewer might be looking for the best approach, which beats "100%" of other solutions in runtime per Leetcode's analysis:
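The snippet that followed is presumably the Boyer–Moore vote mentioned above; a minimal sketch of it, assuming a majority element actually exists:

    def majority_element(nums):
        # Boyer-Moore majority vote: O(n) time, O(1) extra space.
        # Keep a candidate and a counter; matching elements vote it up,
        # others vote it down; at zero, adopt the current element.
        candidate, count = None, 0
        for x in nums:
            if count == 0:
                candidate = x
            count += 1 if x == candidate else -1
        return candidate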
If I were interviewing, I'd be happy with any of these except maybe the sorted() one, as it's only faster because of the native code doing the sort, which doesn't change that it's O(n log n) time and O(n) space. But I've had interviews where I gave answers that were "correct" under the assumptions and constraints I outlined, and they didn't like them because they weren't the one from their rubric. I still remember a Google interview, where we were supposed to "design to scale to big data", in which they wanted some fiddly array manipulation algorithm like this. I gave one that was O(n log n) but could be done in place with O(1) memory, and the interviewer said it was "incorrect" in favor of a much simpler O(n) one using dicts in Python that was O(n) memory. Had the interviewer specified O(n) memory was fine (not great for "big data", but ok) I would have given him the one-liner that did it with dicts lol.

I guess my point is that interviewers should be flexible and view it as a dialogue rather than asking for the "right answer". I much prefer "identify the bug in this self-contained code snippet and fix it" type problems that can be completed in 15-30 minutes personally, but Leetcode ones can be fine if you choose the right problems for the job.
Honestly in day to day programming I find data types & associated APIs are so so much more important than algorithms.
I would rather work with a flexible data type with suboptimal performance than a brittle data type that maybe squeezes out some extra performance.
Your example of in-place array mutation feels like a good example of such a thing. I feel like there should be a category of interviewing questions for "code-safety" not just performance.
I would rather work with persistent data structures, the least brittle of all, which would also in many cases trivially allow me to parallelize the work. But as far as I can see, all the leetcode problems are low-level, mutation-based problems with no notion of functional data structures. Clueless interviewers look to these problems as if they alone epitomized great programming, while they are often inflexible single-core stuff that may not even be appropriate for this day and age any longer.
> The point of these problems is to test your cleverness.
Last round I did at Meta it was clearly to test that you grinded their specific set of problems, over and over again, until you could reproduce them without thinking. It's clear because the interviewers are always a bit surprised when you answer with whatever is not the text-book approach on both leetcode and on the interview guide they studied.
Cleverness is definitely not high on the list of things they're looking for.
Cheekily using counting sort ended things the one and only time I agreed to interview with Meta. Definitely improved my inbox for a couple years though.
Bottom up dynamic programming algorithms require some cleverness.
All of the ones listed can be solved with a top-down dynamic programming algorithm. Which just means "write the recursive solution, add caching to memoize it".
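To make that recipe concrete, here is a minimal sketch (mine, not the commenter's) of top-down coin change, where the cache is a single decorator:

    from functools import lru_cache

    def min_coins(denominations, target):
        # "Write recursive solution, add caching to memoize it":
        # go(n) = fewest coins summing to n, or None if impossible.
        @lru_cache(maxsize=None)
        def go(n):
            if n == 0:
                return 0
            subresults = [go(n - c) for c in denominations if c <= n]
            feasible = [s for s in subresults if s is not None]
            return min(feasible) + 1 if feasible else None
        return go(target)

    min_coins((10, 9, 1), 37)  # -> 4, via 10+9+9+9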
For some of these, you can get cleverer. For example the coin change problem is better solved with an A* search.
Still, very few programmers will actually need these algorithms. The top thing we need is to recognize when we've accidentally written a quadratic algorithm. A quick scan of https://accidentallyquadratic.tumblr.com/ shows that even good people on prominent projects make that mistake on a regular basis. So apparently, being able to produce an algorithm on the test doesn't translate to catching an algorithmic mistake in the wild.
For the life of me, I still can't consistently solve dynamic programming problems. "Write a clever brute force solution whose results can be cached" is so broad that there are tons of variations out there, and a slight twist can throw you off fast.
Project Euler 18. I tried 3 heuristic approaches before accepting that, to get the real answer without brute-forcing it (it comes back later in a non-brute-forceable version anyway), I needed to find another way. I came up with an optimal solution, but it is still not dynamic programming, and I would consider it inferior to the bottom-up solution I found.
When I interview with problem solving problems, the point is to understand how the candidate thinks, communicates, and decomposes problems. Critically, problem solving questions should have ways to progressively increase and decrease difficulty/complexity, so every candidate "gets a win" and no candidate "dunks the ball".
Interviewers learn nothing from an instant epiphany, and they learn next to nothing from someone being stumped.
Unfortunately, this is why we can't have nice things. Problem solving questions in interviews can be immensely useful tools that, sadly, are rarely used well.
> the point is to understand how the candidate thinks, communicates, and decomposes problems.
100% and it's a shame that over time this has become completely lost knowledge, on both sides of the interview table, and "leetcode" is now seen as an arbitrary rote memorization hurdle/hazing ritual that software engineers have to pass to enter a lucrative FAANG career. Interviewees grind problems until they've memorized every question in the FAANG interview bank, and FAANG interviewers will watch a candidate spit out regurgitated code on a whiteboard in silence, shrug, and say "yep, they used the optimal dynamic programming solution, they pass."
If somebody writes the optimal algorithm, that should be a negative unless their resume indicates they write that algorithm often. The only reason you should know any algorithm well enough to get it right is if your job is implementing the optimal version for every single language. Of course nobody maintains one algorithm across many different languages/libraries (say libc++, python, rust, ada, java - each has different maintainers), so I can safely say the number of people who should be able to implement your clever algorithm is zero. Now if your clever algorithm is in the language standard library (or another library they often use) they should be able to call/use it, though even then I expect them to look up the syntax in most languages.
What if we just really enjoy clever algorithms?
I've probably implemented first-order Markov-chain text generation more than a dozen times in different languages, and earlier this week I implemented Newton–Cotes adaptive quadrature just because it sounded awesome (although I missed a standard trick because I didn't know about Richardson extrapolation). I've also recently implemented the Fast Hadamard Transform, roman numerals, Wellons–NRK hash tries, a few different variants of Quicksort (which I was super excited to get down to 17 ARM instructions for the integer case), an arena allocator with an inlined fast path, etc. Recently I wrote a dumb constrained-search optimizer to see if I could get a simpler expression of a word-wrap problem. I learned about the range-minimum-query algorithm during a job interview many years ago and ad-libbed a logarithmic-time solution, and since then I've found a lot of fascinating variants on the problem.
I've never had a job doing this kind of thing, and I don't expect to get one, just like I don't expect to get a job playing go, rendering fractals, reading science fiction, or playing video games. But I think there's a certain amount of transferable skill there. Even if what I need to do this week is figure out how to configure Apache to reverse proxy to the MediaWiki Docker container.
(I know there are people who have jobs hacking out clever algorithms on different platforms. I even know some of them personally. But there are also people who play video games for a living.)
I guess I'd fail your interview process?
It's usually fairly obvious when people have just seen the solution before.
But also, interviews are fuzzy and not at all objective, false negatives happen as well as false positives.
If you want people to know about these things you should put them in your resume though. People can't read your mind.
> Critically, problem solving questions should have ways to progressively increase and decrease difficulty/complexity, so every candidate "gets a win" and no candidate "dunks the ball".
Absolutely agree. When I interview, I start with a simple problem and add complexity as they go. Can they write X? Can they combine it with Y? Do they understand how Z is related?
Same. I'm never doing a fail/pass type interview. Instead I try to assess where the candidate is on the beginner/intermediate/expert axis and match that with the expectations of the role I'm interviewing for.
> the point is to understand how the candidate thinks, communicates, and decomposes problems
Interviewers always say this, but consider: would you endorse a candidate who ultimately is unable to solve the problem you've presented them, even if they think, communicate, and decompose problems well? No interview in this industry prizes those things over getting the answer right.
Note how I structure my problem solving questions to be progressive and adjustable, both up and down. This gives me room to simplify and get the candidate to a place where they can show me something (candidates who truly come up with goose eggs on everything functional but still show solid fundamentals may be showing that the interview is for the wrong job family). It also means that it is virtually impossible to get all the way to "the end" and "finish" the problem, as I leave room for extension and modification. I had one question that I thought was long enough, and, of maybe ~120 interviews with it, exactly two people dunked on it, one writing out code for solutions with and without libraries. That guy was a complete jerk, and I wasn't at all surprised when the entire panel came back not-inclined.
My first boss (a CTO at a start-up) drilled this into us. What you know is far less valuable than how you learn/think and how you function on a team.
Interesting. Sounds like you and other HN commentators from firms that interview better than the industry Leetcode convention oughta be on one of those workplace lists on GitHub (like this one: https://github.com/poteto/hiring-without-whiteboards) for applicants who want to go through a more interesting process.
Every interview I know of is severely time limited. I don't care if you can solve the problem, so long as you are clearly making progress and have shown you could solve the problem if given longer.
Now, I give problems I expect to take 20 minutes if you have never seen them before, so you should solve at least one. I have more than once realized someone was stuck on the wrong track and redirection efforts were not getting them to a good track, so I switched to a different problem, which they were then able to solve. I've also stopped people when they had 6 of 10 tests passing, because it was clear they could get the rest passing and I wouldn't learn anything more, so it wasn't worth wasting their time.
In the real world I'm going to give people complex problems that will take days to solve.
Would a good answer be "I can do it as a constraint problem, but since I guess you are not asking for this, the solution is..." and then proceed as usual?
I'd probably stop the candidate, dig into how they'd use constraint-based solvers, and how they might expect that to fall apart. Applicability and judgment are worth way more than raw algorithmic questions.
One way to think about this is:
Is a fresh graduate more likely to provide a solid answer to this than a strategic-thinking seasoned engineer? If so, just be conscious of what your question is actually probing.
And, yes, interview candidates are often shocked when I tell them that I’m fine with them using standard libraries or tools that fit the problem. It’s clear that the valley has turned interviewing into a dominance/superiority model, when it really should be a two-way street.
We have to remember that the candidate is interviewing us, too. I’ve had a couple of interviews as the interviewee where the way the interview was conducted was why I said “no” to an offer (no interest in a counter, just a flat “no longer interested” to the recruiter, and, yes, that surprises recruiters, too).
I see, thank you
Constraint solvers are also often not applicable to the real world either.
Many formulations scale in a way that is completely unusable in practice.
Knowing how to get tools like Z3 or Gurobi to solve your problems is its own skill, and one that some companies will hire for, but it's not a general-purpose technology you can throw at everything.
This post is the unironic version of "FizzBuzz in TensorFlow": just because you have a big hammer doesn't mean everything is a nail. And I say that as an enjoyer of big hammers, including SMT solvers.
>The point of these problems is to test your cleverness.
No, it's just memorization of 12 or so specific patterns. The stakes are so high that virtually nobody going in will stake passing on their own inherent problem-solving ability. LeetCode has been so thoroughly gamified that it has lost all utility as a differentiator beyond willingness to prepare.
Yeah, it tests if the candidate enjoys the programming-adjacent puzzle game of LeetCode, which is a perfectly decent game to play, but it is just a signal.
If somebody grinds LeetCode while hating it, it signals they are really desperate for a job and willing to jump through hoops for you.
If somebody actually enjoys this kind of stuff, that is probably a signal that they are a rare premium nerd and you should hire them. But they probably play Project Euler as well (is that still up?).
If somebody figures out a one-trick to minmax their LeetCode score… I dunno, I guess it means they are aware of the game and want to solve it efficiently. That seems clever to me…
Given this consider that LeetCode solving is rarely ever part of your work. So then, what are they selecting for with the habit?
Selecting for people like themselves.
I think this is one of the more true answers but can you be more specific?
Like in race? Like in wealth? Like in defection willingness? Like in corruption?
Asking for a friend who is regularly identified as among the most skilled but feels their career has been significantly derailed by this social phenomenon.
People decide what "like" means. I know some people who would never work with some group, but they have no problem with some other group.
In this case the group is people good at leetcode - the people I know of in that group are perfectly fine with any race so long as they can solve leetcode. There are people who care about race, but I've never had much to do with them so I can't guess how they think.
Like in 'can solve a leetcode question quickly', because that's what the interview rubric asks them to test for.
That is the acceptable public answer of course but it is a mind stopper. Obviously the definition comes from some person with some set of motivations and this seems to ignore that real and pertinent question.
Things like age, class, education and educational institution, willingness to work long hours doing something you hate for a goal you don't care about except that it feeds and houses you.
Line engineers running interviews have stopped having any say in the corporate policies of tech firms years ago. They are cogs, not rockstars.
You are right, this definition does come from some person with some set of motivations, but that person is some mid/high-level manager who probably hasn't ever written a line of code in their life.
It's just tradition for the sake of tradition. When cargo cult practice becomes industry culture. Like a much milder version of why medical residents are put through extreme sleepless wringers just because William Halsted was a cocaine addict.
In defense of questions like this, “willingness to prepare” is a significant differentiator
But what is it differentiating? And is it really the best evidence of willingness to prepare? My MSc and BA on these topics, my open source contributions, two decades of industry experience... aren't those evidence not just of willingness but of actual preparation?
The papers and open source indicate that you can build stuff. That's not what it's testing for.
Will you put up with very long hours of insane grindy nonsense in the spirit of being a team player for a team that doesn't really remember what game they're playing?
Are you sufficiently in need of income to be fighting through this interview dance in preference to other things, such that once you join you'll be desperate to stay?
Those are extremely important questions, and a willingness to have spent a thousand hours memorising leetcode correlates strongly with the attributes sought.
It is a differentiator when you are hiring straight from college. The fact we use this beyond entry level roles is a sign the company has lost the thread and is cargo culting.
That they would ask me to prepare for that is a signal as well.
In no case is it a useful signal of whether I can do my job better than someone else. Some people like this type of problem and are good at it anyway, which is a good signal compared to average - but there are also above-average people who don't enjoy this type of problem and so don't practice it. Note that in both cases the people I'm talking about did not memorize the problem and solution.
It also means "I don't have money for food, and at this point I am desperate".
That willingness to prepare doesn't reconcile with the realities of parenthood and all of the other responsibilities someone in their thirties may have. Consistently finding that time will be a huge ask, especially if you haven't worked on those problems in a while.
I mean, it would be illegal for them to state it outright, but most companies would prefer not to hire people with kids and other responsibilities. That's the whole reason there are specific discrimination laws for that.
LeetCode questions neatly solve the problem of not wanting to hire people who won't, or can't, spend hours of their free time doing things they hate for a goal they don't care about except to the extent that will feed and house them.
I always prefer the "The Soviets used a pencil" type engineers. Simplicity is quite close to greatness.
No, it's not a measure of cleverness. It's about whether you can break down problems and apply common methods. That's the entire job. It's a learnable skill, and honestly, resisting learning because of personal biases is a red flag in my book.
The point is to test whether or not you put in the time to sharpen common patterns and also to test your communication ability
Super common patterns like dynamic programming?
Yes. Common LC patterns such as 1D and 2D dynamic programming. I'm not defending leetcode style interviews, in fact I think they are actually bad, I'm simply stating their intent as observed by me.
In my notes I have roughly 30 patterns to leetcode questions btw.
Yes. It is common on leetcode.
Most interviews are based on the premise that if a diabetic can't synthesize their own insulin in their basement, they are somehow cheating at the game of life.
If my wife's blood sugar is high, she takes insulin. If you need to solve a constraint problem, use a constraint solver.
If your company doesn't make and sell constraint solving software, why do you need me to presume that software doesn't exist and invent it from scratch?
It’s explicitly not testing if you can synthesize insulin in a crisis, it’s a general aptitude test for “if we tell you you need to cram this textbook on how to synthesize insulin by next week and then ask you how to do it on a call, can you coherently repeat that back to us?”
If you can figure out that a problem can be efficiently solved with a constraint solver then you can also write the two for loops and maybe some auxiliary recursive function to solve the given toy instance.
In defense of coding tests, most people who can't solve simple dynamic programming problems generally turn out to be pretty poor programmers IRL.
At least that's been my experience. I'm sure there are exceptions.
What?
I've never used constraint solvers, seems like black magic. Need to fill that gap in my knowledge.
But how do they work, what is the complexity of the solution, for example for the stock prices, is it O(n^2)?
Whenever constraint programming languages come up, you can’t miss mentioning Håkan Kjellerstrand. He’s put together an amazing collection of problems and examples—including plenty for MiniZinc—on his site: https://www.hakank.org/minizinc/
Not only has he made a great website, he's also a super nice guy
> Now if I actually brought these questions to an interview the interviewee could ruin my day by asking "what's the runtime complexity?"
This completely undermines the author's main point. Constraint solvers don't solve hard leetcode problems if they can't solve large instances quickly enough.
Many hard leetcode problems can be solved fairly simply with more lax runtime requirements -- coming up with an efficient solution is a large part of the challenge.
> coming up with an efficient solution is a large part of the challenge
More of my work tends to be "rapidly adapting solutions to additional and changing requirements" than "coming up with the most efficient solution", so why are we interviewing for something where in practice we just throw a couple extra instances at it? (Your specific job role may vary, of course, but I usually just increase the scaling factor.)
Author's point is that coming up with the most efficient solution might not actually be a good measure of your real-world performance.
And that's been a longrunning critique of leetcode, of course. However, this is a neat framing where you can still use the same problems but give solutions that perform better when measured by "how adaptable is this to new requirements?"
It would have been worthwhile if this article had briefly touched upon how the constraint solvers are implemented, rather than avoiding this altogether
A loonnngggg time ago when I was green, and wasn't taught about constraint solving in my State University compsci program, I encountered the problem when trying to help a friend with his idea.
He wanted to make an app to help sports club owners schedule players for the day based on a couple simple rules. I thought this was going to be easy, and failed after not realizing what I was up against. At the time I didn't even know what I didn't know.
I often look back on that as a lesson of my own hubris. And it's helped me a lot when discussing estimates and timelines and expectations.
This might be a dumb question (as I'm not familiar with constraint solvers) but would a linear optimization approach be better? I've used linear optimization for scheduling in the past. The nice thing is that linear optimization handles rule conflicts well, because you just set weights on all your rules and the optimizer will find the "least bad" solution to the conflicts.
This is what major sports leagues use for season scheduling (source: https://mathstodon.xyz/@j2kun/108975072813565989)
Well, if you're using MiniZinc you're free to use a CP solver, MIP solver, SAT solver, or CP-SAT-LP solver. In general the model is roughly the same, even though some formulations work better for some solvers than others.
But CP (and CP-SAT) solvers tend to do very well on scheduling problems
> I thought this was going to be easy, and failed after not realizing what I was up against. At the time I didn't even know what I didn't know
This reminds me of high school ~25 years ago when I just started learning TI-Basic on my calculator and was dabbling in VB6 on my PC, and I was flipping burgers at Steak n Shake as my part time job. The manager moaned about how hard it was to write the employee schedules out each week (taking into account requested days off, etc) and I thought “ooh, I know how to write software now, I’ll make a scheduling program!” I told the manager I bet I could do it.
… it took a very short time for 16 year old me to realize writing scheduling software to solve for various constraints is pretty damned hard. I never brought it up after that.
I would be blown away if a candidate solved it using DP and then said “but let me show you how to use a constraint solver”. Immediate hire.
This. Jump through their hoops first.
+1
- Constraint solvers? That's a nice concept, I heard about this once. However, for the purposes of the interview, let's just write some Python code, I wanna see your way of thinking...
(I think it's almost impossible to convince your interviewer into constraint solvers, while the concept itself is great)
import z3
from ortools.sat.python import cp_model
Have you ever tried, or is this your assumption of what the interviewer would say?
A long time ago, just for fun, I wrote a constraint problem that could figure out which high-yield banks to put money into, from those recommended on Doctor of Credit (https://www.doctorofcredit.com/high-interest-savings-to-get/), given <= `X` money and <= `Y` # of transactions on debit cards, maximizing the yield subject to other constraints (boolean and real valued).
I played with it for a while when interest rates were really low and used the thing for my own rainy day savings (I did get tired of changing accounts all the time).
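I don't have their code, but the shape of such a model in OR-Tools CP-SAT might look something like this (every bank, rate, and cap below is invented for illustration; CP-SAT wants integer coefficients, hence cents and basis points):

    from ortools.sat.python import cp_model

    # Hypothetical data: (name, rate in basis points,
    # balance cap in cents, debit transactions required).
    accounts = [
        ("BankA", 500, 1_000_000, 10),
        ("BankB", 450, 2_500_000, 0),
        ("BankC", 400, 5_000_000, 15),
    ]
    TOTAL_CENTS = 3_000_000  # X: money available
    MAX_TXNS = 20            # Y: debit transactions tolerated

    model = cp_model.CpModel()
    deposit = [model.NewIntVar(0, cap, name) for name, _, cap, _ in accounts]
    used = [model.NewBoolVar("use_" + name) for name, _, _, _ in accounts]
    for d, u in zip(deposit, used):
        model.Add(d == 0).OnlyEnforceIf(u.Not())  # unused => no deposit
    model.Add(sum(deposit) <= TOTAL_CENTS)
    model.Add(sum(u * a[3] for u, a in zip(used, accounts)) <= MAX_TXNS)
    model.Maximize(sum(d * a[1] for d, a in zip(deposit, accounts)))
    solver = cp_model.CpSolver()
    solver.Solve(model)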
Repo?
The documentation for Google's OR-Tools comes with many interesting examples of constraint problems, e.g.
https://developers.google.com/optimization/lp/stigler_diet
SAT, SMT, and constraint solvers are criminally underutilized in the software industry. We need more education about what they are, how they work, and what sorts of problems they can solve.
At least personally, I've been very underwhelmed by their performance when I've tried using them. Usually past a few dozen variables or so is when I start hitting unacceptable exponential runtimes, especially for problem instances that are unsatisfiable or barely-satisfiable. Maybe their optimizations are well-suited for knapsack problems and other classic OR stuff, but if your problem doesn't fit the mold, then it's very hit-or-miss.
I'm surprised to hear this. Modern SAT solvers can easily handle many problems with hundreds of thousands of variables and clauses. Of course, there are adversarial problems where CDCL solvers fail, but I would be fascinated if you can find industrial (e.g. human written for a specific purpose) formulas with "dozens of variables" that a solver can't solve fairly quickly.
One thing that I spent a particularly long time trying to get working was learning near-minimum-size exact languages from positive and negative samples. DFAMiner [0] has a relatively concise formulation for this in terms of variables and clauses, though I have no way to know if some other reformulation would be better suited for SAT solvers (it uses CaDiCaL by default).
It usually starts taking a few seconds around the ~150-variable mark, and hits the absolute limit of practicality by 350–800 variables; the number of clauses is only an order of magnitude higher. Perhaps something about the many dependencies in a DFA graph puts this problem near the worst case.
The annoying thing is, there do seem to be heuristics people have written for this stuff (e.g., in FlexFringe [1]), but they're all geared toward probabilistic automata for anomaly detection and similar fuzzy ML stuff, and I could never figure out how to get them to work for ordinary automata.
In any case, I eventually figured out that I could get a rough lower bound on the minimum solution size, by constructing a graph of indistinguishable strings, generating a bunch of random maximal independent sets, and taking the best of those. That gave me an easy way to filter out the totally hopeless instances, which turned out to be most of them.
[0] https://github.com/liyong31/DFAMiner
[1] https://github.com/tudelft-cda-lab/FlexFringe
I've worked on a model with thousands of variables and hundreds of thousands of parameters with a hundred constraints. There are pitfalls you need to avoid, like reification, but it's definitely doable.
Of course, NP hard problems become complex at an exponential rate but that doesn't change if you use another exact solving technique.
Using local search is very useful for scaling, but at the cost of proven optimality.
I think this hits the nail on the head: performance is the obstacle, and you can't get good performance without some modeling expertise, which most people don't have.
Hence my call for more education.
I wish I knew better how to use them for these coding problems, because I agree with GP they're underutilized.
But I think if you have a constraint problem that has an efficient algorithm but chokes a general constraint solver, that should be treated as a bug in the solver. It means the solver uses bad heuristics somewhere.
I'm pretty sure that due to Rice's theorem, etc., any finite set of heuristics will always miss some constraint problems that have an efficient solution. There's very rarely a silver bullet when it comes to generic algorithms.
Rice's theorem is about decidability, not difficulty. But you are right that assuming P != NP there is no algorithm for efficient SAT (and other constraint) solving.
I think they're saying that the counter-examples are so pathological that in most cases, if you're doing any kind of auto-generation of constraints - for example, a DSL backed by a solver - good enough heuristics should cover you.
Like it might even be the case that certain types of pretty powerful DSLs just never generate "bad structures". I don't know, I've not done research on circuits, but this kind of analysis shows up all the time in other adjacent fields.
Idk, I also thought so once upon a time. "Everyone knows that you can usually do much better than the worst case in NP-hard problems!" But at least for the non-toy problems I've tried using SAT/ILP solvers for, the heuristics don't improve on the exponential worst case much at all. It's seemed like NP-hardness really does meet the all-or-nothing stereotype for some problems.
Your best bet using them is when you have a large collection of smaller unstructured problems, most of which align with the heuristics.
> Your best bet using them is when you have a large collection of smaller unstructured problems, most of which align with the heuristics.
Agreed. An algorithm right now in our company turns a directed graph problem, which to most people would seem crazy, into roughly ~m - n (m edges, n nodes) SAT checks that are relatively small. Stuffing all the constraints into an ILP solver would be super inefficient (and honestly undefined). Instead, by defining the problem statement properly and carving out the right invariants, you can decompose the problem to smaller NP-complete problems.
Definitely a balancing act of design.
For some problems, there is not much you can do. But for many, it works.
Well, they aren’t magic. You have to use them correctly and apply them to problems that match how they work. Proving something is unsat is worst case NP. These solvers don’t change that.
Of course they aren't magic, but people keep talking about them as if they're perfectly robust and ready-to-use for any problem within their domain. In reality, unless you have lots of experience in how to "use them correctly" (which is not something I think can be taught by rote), you'd be better off restricting their use to precisely the OR/verification problems they're already popular for.
Hence my statement about education. All tools must be used correctly in their proper domain, that is true. Don’t try to drive screws with a hammer. But I'm curious what problems you tried them on and found them wanting and what your alternative was? I actually find that custom solutions work better for simple problems and that solvers do a lot better when the problem complexity grows. You’re better off solving the Zebra puzzle and its ilk with brute force code, not a solver, for instance.
In what way? They're useful for toy problems like this but they're very slow on larger problems.
SAT solvers are used daily to generate solutions for problems that have literally millions of variables. So, what you said is just wrong on its face. Yes, some talented people can write custom code that solves specific problems faster than a general-purpose solver, particularly for easy special cases of the general problem, but most of the time that results in the programmer recreating the guts of a solver customized to a specific problem. There's sort of a corollary of Greenspun's Tenth Rule that every sufficiently complicated program also contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of a SAT or SMT solver.
I mean right tool for the right job. Plenty of formulations and problems (our job has plenty of arbitrarily hard graph algorithms) that have 90% of the problem just being a very clever reduction with nice structure.
Then the final 10% is either NP hard, or we want to add some DSL flexibility which introduces halting problem issues. Once you lower it enough, then comes the SMT solvers.
Define large. We've written a model which solves real business issues in 8K lines of MiniZinc, and it wasn't slow.
The conventional wisdom is that the larger you make an NP-hard problem, the slower it's going to get, regardless of algorithm.
SAT & CSP are criminally underutilized in CS classes, because profs have no clue about them.
That's why in so many industries they prefer to hire engineers and OR grads and teach them Python, rather than hire SWEs and teach them modeling.
I find this post interesting independent of the question of whether leetcode problems are a good tool for interviews. It's: here are some kinds of problems constraint solvers are useful for. I can imagine a similar post about non-linear least-squares solvers like Ceres.
Yeah, especially for learning how to use a solver!
> Most constraint solving examples online are puzzles, like Sudoku or "SEND + MORE = MONEY". Solving leetcode problems would be a more interesting demonstration.
He's exactly right about what tutorials are out there for constraint programming (I've messed around with it before, and it was pretty much Sudoku). Having a large body of existing problems to practice against is great.
> The real advantage of solvers, though, is how well they handle new constraints.
Well said. One of the big benefits of general constraint solvers is their adaptability to requirements changes. Something I learned well when doing datacenter optimization for Google.
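A toy illustration of that adaptability, using the coin problem in z3 (my sketch; the post itself uses MiniZinc): a new business rule is one added constraint, not a redesigned algorithm.

    import z3

    coins = [100, 50, 25, 10, 5, 1]
    counts = [z3.Int("c%d" % v) for v in coins]
    opt = z3.Optimize()
    opt.add([c >= 0 for c in counts])
    opt.add(z3.Sum([c * v for c, v in zip(counts, coins)]) == 750)
    # New requirement "at most four of any one coin" is a single line:
    opt.add([c <= 4 for c in counts])
    opt.minimize(z3.Sum(counts))
    if opt.check() == z3.sat:
        print(opt.model())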
I agree with the other comments here that using a constraint solver defeats the purpose of the interview. But this seems like a good case for learning how to use a constraint solver! Instead of spending hours coding a custom solution to a tricky problem, you could use a constraint solver at first and only write a custom solution if it turns out to be a bottleneck.
There's an easy ad-hoc Prolog program for the first problem, which I posted two days ago [2]. You can just paste it into [1] to execute in the browser. Using 60 as the target sum is more interesting, as you can enumerate two solutions.
[1]: https://quantumprolog.sgml.net/browser-demo/browser-demo.htm...
[2]: https://news.ycombinator.com/item?id=45205030
I've actually used pseudo-prolog to explain how to solve leetcode problems to a friend. Write the facts, then write the constraints, and then state your problem. Close to the last part, they've already understood how to solve it, or at least how to write the program that can answer the question.
Of course, the challenge is that the next question after solving a leetcode problem is often to explain and optimize the performance characteristics, which in prolog can get stupidly hairy.
As an interviewer, I gave one pretty simple task (people solved it in as little as 8 minutes) that didn't use any real CS, even though I'm good at it.
The reason was that about 70% of candidates couldn't write a simple loop -- the task existed to filter those out. The actual solution didn't matter much; it was a binary decision. The actual conversation matters more.
This. Main point of giving candidates CS problems was always to weed out those who couldn't program at all, but somehow were still in the industry. I worked with such people - it's unpleasant.
Somehow someone figured that giving harder problems should result in better candidates. Personally, despite having passed most of the tests I've been subjected to, I don't see the connection.
Here’s my empirical evidence based on several recent “coding session” interviews with a variety of software companies. Background: I have been developing software for over 30 years, I hold a few patents, I’ve had a handful of modestly successful exits. I kind of know a little bit about what I am doing. At this stage in my career, I am no longer interested in the super early stage startup lifestyle, I’m looking at IC/staff engineer type roles.
The mature, state-of-the-art software companies do not give me leetcode problems to solve. They give me interesting & challenging problems that force me to both a) apply best practices of varying kinds and yet b) be creative in some aspects of the solution. And these problems are very amenable to “talking through” what I’m doing, how I’m approaching the solution, etc. Overall, I feel like they are effective and give the company a good sense of how I develop software as an engineer. I have yet to “fail” one of these.
It is the smaller, less mature companies that give me stupid leetcode problems. These companies usually bluntly tell me their monolithic codebase (always in a not-statically-typed language), is a total mess and they are “working on domain boundaries”.
I fail about 50% of these leetcode things because I don’t know the one “trick” to yield the right answer. As a seasoned developer, I often push back on the framing and tell them how I would do a better solution by changing one of the constraints, where the change would actually better match the real world problem they’re modeling.
And they don’t seem to care at all. I wonder if they realize that their bullshit interviewing process has both a false positive and a false negative problem.
The false negatives exclude folks like myself who could actually help to improve their codebase with proper, incremental refactorings.
The false positives are the people who have memorized all the leetcode problems. They are hired and write more shitty monolithic hairball code.
Their interviewing process reinforces the shittiness of their codebase. It’s a spiral they might never get out of.
The next time I get one of these, I think I’m going to YOLO it, pull the ripcord early and politely tell them why they’re fucked.
There is something to be said for being senior in a way where the people interviewing you are junior enough that they don't necessarily have the experience to "click" with the nuance that comes with said problems.
That being said, from a stoicism point of view, the interview ends up becoming a meta-challenge on how you approach a problem that is not necessarily appropriately framed, and how you'd go about doing and/or gently correcting things as well.
And if they're not able to appreciate it, then success! You have found that it is not the right organization for you. No need to burn the door down on the way out, just feel relief in that you dodged a bullet (hopefully).
In a few cases, I really liked the company and what they were doing, got along wonderfully with the hiring manager. Then bombed their leetcode BS.
So when I say I’d politely tell them why they’re fucked, it’s actually out of a genuine desire to help the company.
But you’re right, I’m also thankful that they showed their red flag so visibly, early enough, and I’m happy to not move forward!
Yes, it is a death spiral; if you are to lead them, you have to know what to fix when, to avoid making things worse.
The solution is typically not just to fix their code. They got in over their heads by charging ahead and building something they'll regret, but their culture (and likely the interviewer's personal self-regard) depends on believing their (current) tech leaders.
So yes, the interviewer is most comfortable if you chase and find the ball they're hiding.
But the leadership question is whether you can relieve them of their ignorance without also stripping their dignity and future prospects.
I've found (mostly with luck) that they often have a sneaking suspicion that something isn't right, but didn't have the tools or pull to isolate and address it. As a leader if you can elicit that, and then show some strategies for doing so, you'll improve them and the code in a way that encourages them that what was hard to them is solvable with you, which helps them rely on you for other knotty problems.
It's not really that you only live once; it's that this opportunity is here now and should have your full attention, and to be a leader you have to address it directly but from everyone's perspective.
Even if you find you'd never want to work with them, you'd still want to leave them feeling clearer about their code and situation.
I agree with everything you've written.
Clarifying my "YOLO" usage: I was being a little flippant, in the sense that when ending an interview early with direct critical feedback, the most likely outcome is a "burned bridge" with that company (you're never coming back).
Which reminds me one of my favorite twisted idioms: We'll burn that bridge when we get to it!
I guess I've finally found an acceptable real-world use case for this phrase :)
may the bridges I burn light the way.
Maybe the process works as designed. It's just that "hiring the best developer" isn't necessarily the goal here
me neither
Whoever agrees to do LC problems during an interview has zero dignity.
Interview:
> We can solve this with a constraint solver
Ok, using your favorite constraint solver, please write a solution for this.
> [half an hour later]
Ok, now how would you solve it if there were more than 100 data points? E.g. 10^12?
Maybe some preprocessing, maybe column generation, depends on the problem.
MiniZinc is a really great modeling language for constraint programming. Back in August I gave a talk at NordConstNet25 on how we used it to build a product configurator in what's (probably) the world's largest MiniZinc model:
https://pierre-flener.github.io/research/NordConsNet/NordCon...
I avoided all this just by becoming a contractor: I ship solutions, and nobody tests me for leetcode ability.
> faangguyindia
> contractor
Do FAANG hire contractor in India?
I mean, yeah, they do.
apex predator of grug is complexity
No me no nice
An interesting meta-problem is to determine an antagonistic set of denominations, like the [10,9,1] example given in the post, that maximizes the number of coins selected by the greedy method.
Isn't it trivially [1]?
Been working on a calendar scheduling app that uses a constraint solver to auto schedule events based on scheduling constraints (time of day preferences and requirements, recurrence rules), and track goal progress (are you slipping on your desired progress velocity? Get a notification). It’s also a meal planner: from a corpus of thousands of good, healthy recipes, schedule a meal plan that reuses ingredients nearing expiration, models your pantry, estimates grocery prices, meets your nutritional goals. Constraint solvers are black magic.
Which solver do you use?
Would love to know how to actually assess the runtime complexity of constraint solvers like this.
Nice post, I wasn't aware that there were so many dedicated constraint solving systems.
It's insane how many of these new "AI" companies don't let you use AI or even your own IDE for coding interviews. And most questions from such companies are LC type problems so they know any AI tool can one shot it.
I discourage it but I let them use it and then give them a specific problem that I know your average Claude 4 or GPT 5 will just not get it right.
Actually people perform worse in an interview using AI because they spend time trying to understand what the tool is proposing and then time to figure out why that doesn’t work.
My experience has been quite different. With Cursor/Claude Code, I've ended up writing full-fledged solutions (running CLIs/web servers with loggers and unit tests for each piece of functionality). We're talking crawlers, a cab-booking service like Uber, search engines with seed data. All within the hour.
Why is that insane? Seems logical to me.
Definitely not insane. Ironic is the correct term. The field is evolving, a lot of these companies talk about replacing outdated practices using AI. Asking software engineers to not use their own tools to solve problems falls under the same bucket.
> Given an array of integers heights representing the histogram's bar height where the width of each bar is 1, return the area of the largest rectangle in the histogram.
Maybe it's my graphics programmer brain firing on all cylinders, but isn't this just a linear scan, maintaining a list of open rectangles?
Yes, you just need to maintain a stack of rectangles ordered from lowest to highest. You only ever have to push and pop the top of the stack, so the runtime is O(n).
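A minimal sketch of that stack approach (my code, with a sentinel bar of height 0 appended to flush the stack at the end):

    def largest_rectangle(heights):
        stack, best = [], 0  # stack holds (start_index, height), increasing
        for i, h in enumerate(heights + [0]):
            start = i
            while stack and stack[-1][1] > h:
                start, height = stack.pop()  # close rectangles taller than h
                best = max(best, height * (i - start))
            stack.append((start, h))
        return best

    largest_rectangle([2, 1, 5, 6, 2, 3])  # -> 10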
So LeetCode has fallen into the same trap as ProjectEuler (anyone remember that?)
I tried a couple of times long time ago to solve them with cp/integer programming.
The interviewers were clueless so after 10 minutes of trying to explain to them I quit and fell back to just writing the freaking algo they were expecting to see.
Use the right tool for the right job!
Does using a constraint solver actually solve the question under the time ... constraints?
If not, how can you claim you have solved the problem?
https://codeforces.com/problemset/problem/1889/D
Terrible question for an interview, and further highlights how our interviews are broken.
Greedy algorithms tell you nearly nothing about the candidate's ability to code. What are you going to see? A single loop, some comparison and an equality. Nearly every single solution that can be solved with a greedy algorithm is largely a math problem disguised as programming. The entire question hinges on the candidate finding the right comparison to conduct.
The author himself finds that these are largely math problems:
> Lots of similar interview questions are this kind of mathematical optimization problem
So we're not optimizing to find good coders, we're optimizing to find mathematicians who have 5 minutes of coding experience.
At the risk of self-promotion, I'm fairly opinionated on this subject. I have a podcast episode where I discuss exactly this problem (including discuss greedy algorithms), and make some suggestions where we could go as an industry to avoid these kind of bad-signal interviews:
https://socialengineering.fm/episodes/the-problem-with-techn...
My best interview consisted of:

- what projects you have done
- what tech you worked with, and some questions about decisions
- debugging an issue they had encountered before
- talking about interests and cultural fit
Instant green flag for me. Too bad that after receiving my offer covid happened and they had a hiring freeze.
This is how I prefer to interview. I don't understand the mindset of LeetCode interviewers. It's a weak signal because it's easily gamed (false positives), and it misses too many strong candidates who have better things to do in their spare time (false negatives, and a bias towards one type of candidate -> lack of diversity in experience).
I've seen some cases where someone bragged about projects they participated in but then struggled to write a simple loop.
You can try to suss out a smooth-talking faker, or just tell them to write a thing and then talk only after they demonstrate basic comprehension.
You: Oh I know, I can use a constraint solver for this problem!
Interviewer: You can't use a constraint solver
Any problem can be solved by a sufficient number of nested for loops.
(if you have enough time)
One level of nested for loop for each type of coin. (Run them until i*coin is larger than the input)
Populate a 2d lookup array. $7.50 becomes arr[750] = [7,1,0,0,0,0], which represents [7x100, 1x50, 0x25, 0x10, 0x5, 0x1].
With each loop, check if the array entry exists; if so, keep whichever entry uses fewer coins. [7,1,0... is better than [7,0,2... because 8 coins is a better solution than 9!
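A sketch of the idea, with itertools.product standing in for the hand-written nesting (mine, and strictly of the "if you have enough time" variety, since the combination count explodes):

    from itertools import product

    def fewest_coins(target, coins):
        # One nested loop per coin type: run each count until
        # count * coin exceeds the target.
        best = None
        ranges = [range(target // c + 1) for c in coins]
        for counts in product(*ranges):
            if sum(n * c for n, c in zip(counts, coins)) == target:
                if best is None or sum(counts) < sum(best):
                    best = counts
        return best

    fewest_coins(37, (10, 9, 1))  # -> (1, 3, 0): one 10 and three 9s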
And a stack.
> This was a question in a different interview (which I thankfully passed):
> Given a list of stock prices through the day, find maximum profit you can get by buying one stock and selling one stock later.
It was funny to see this, because I give that question in our interviews. If someone suggested a constraint solver... I don't know what I'd have done before reading this post (since I had only vaguely even heard of a constraint solver), but after reading it...
Yeah, I would still expect them to be able to produce a basic algorithm, but even if their solution was O(n^2) I would take it as a strong sign we should hire them, since I know there are several different use cases for our product that require generalized constraint solving (though I know it by other names) and having a diverse toolset on hand is more important in our domain than writing always-optimal code.
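(For reference, the non-clever O(n) answer to the quoted stock question is a single pass; a minimal sketch:)

    def max_profit(prices):
        best, low = 0, float("inf")
        for p in prices:
            low = min(low, p)          # cheapest buy seen so far
            best = max(best, p - low)  # best sale at today's price
        return best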
Something that works poorly is often better than something that doesn't work in an instant. This is what I have to tell myself every time I step into a massive, excessively complex mess of a codebase. Many business rules aren't clearly defined ahead of time in a way that always translates well to the code and starting over is a mistake more often than not imo.
Update... refactor... update... break off... etc. A lot of times, I'm looking at something where the tooling is 8+ years old, and the first order of business should be to get it working on a current and fully patched version of whatever is in place... replacing libraries that are no longer supported, etc. From there, refactor what you can, break off what makes sense into something new, refactor again. This process, in my experience, has been far more successful than ground up, new versions.
I say this while actively working on a "new" version of a piece of software. The new version is web-based; the "old" version is a WinForms VB.Net app from over a decade ago. The old version has bespoke auth; the new version will rely on Azure Entra... Sometimes starting over is the answer, but definitely not always.
Reminder that the research says the interview process should match the day to day expectations as closely as possible, even to a trial day/week/month. All these brain teasers are low on signal, not to mention bad for women and minorities.
I've never heard of a "dynamic programming algorithm". Read wikipedia and it seems to mean....use a recursive function? The coin problem is an easy recursive problem (I just wrote the code for it to make sure my old brain can still do it).
It's usually covered in a first or second year algorithms course. It's a recursive problem definition paired with tabling to eliminate redundant work. Critically, the recursive subproblems have to be overlapping (they'll do some of the same work as the other recursive steps) to see any benefit. You can implement it top-down and add a cache (memoization) or you can implement it bottom-up and fill out the table iteratively rather than through recursion.
If you just implement it recursively without tabling then you end up re-doing work and it's often an exponential runtime instead of polynomial.
To clarify on overlapping, consider Fibonacci:
F(n-1) includes F(n-2) in its definition, and both F(n-2) and F(n-1) include F(n-3). If you implement this naively it produces an exponential runtime. Once you add the table, the single initial recursive call to F(n-1) will end up, through its chain of calls, storing the result of F(n-2), and now the implementation is linear instead of exponential.
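In code, the entire fix is one cache (a minimal sketch):

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def fib(n):
        # Without the cache this recursion is exponential; with it,
        # each fib(k) is computed once, so the whole thing is linear.
        return n if n < 2 else fib(n - 1) + fib(n - 2)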
> Read wikipedia and it seems to mean....use a recursive function?

Yes, that's one (common) approach to dynamic programming. The recursive function calls are memoized so that previous calculations are remembered for future calls. Overlapping subproblems become trivial if you can reuse previously computed values. Recursion with memoization is top-down dynamic programming.
So, all in all, pretty basic stuff. Why would anyone worth their salt have a problem with that?
The hard part is realizing that the problem you're solving efficiently maps to a dynamic programming algorithm. You have to spot the opportunity for sub-problem reuse, or else the solution looks something like cubic or exponential (etc.)
The coin problem is like the intro to it. Try some of the codeforces ones :skull:
Most leetcode problems fall into the same ~15 patterns, and hard problems most of the time require you to use a combination of two patterns to solve them.
I think LeetCode tests two things. First, your willingness to grind to pass an exam, which is actually a good proxy for some qualities you need to thrive in a corporate environment: work is often grungy and you need to push through without getting distracted or discouraged.
Second, it's a covert test for culture fit. Are you young (and thus still OK with grinding for tests)? Are you following industry trends? Are you in tune with the Silicon Valley culture? For the most part, a terrible thing to test for, but also something that a lot of "young and dynamic" companies want to select for without saying so publicly. An AI startup doesn't want people who have a family life and want to take 6 weeks off in the summer. You can't put that in a job req, but you can come up with a test regime that drives such people away.
It has very little to do with testing the skills you need for the job, because quite frankly, probably fewer than 1% of the SWE workforce is solving theoretical CS problems for a living. Even if that's you, that task is more about knowing where to look for answers or what experiments to try, rather than being able to rattle off some obscure algorithm.
Internally, do constraint solvers just do brute force?
It's interesting how powerful constraint solvers are (I've never used one).
But actually all of these problems are fairly simple if we allow brute force solutions. They just become stacked loops.
No. They use sophisticated algorithms called propagators to prune invalid values from the domains of the variables, in conjunction with a search strategy like branch and bound.
I've always maintained that solving LeetCode is more about finding the hidden "trick" that makes the solution, if not easy, one that is already "solved" in the general sense. Look at the problem long enough and realize "oh that's a sliding window problem" or somesuch known solution, and do that.
Everyone misunderstands what LC focuses on. It focuses on - did you grind like everyone else that did to get into this company/region/tech? It allows for people who didn't go to the most specific schools (e.g. Cal, Stanford, etc.) to still get into silicon valley companies if they show they are willing to fit the mold. It's about showing you are a conformist and are willing to work very hard to do something that you won't realistically use much in your day to day job.
It's about signaling. That's all it is. At least it's not finance where it's all dictated by if you were born into the right family that got you into the elite boarding schools for high school, etc. I would've never made it into finance unless I did a math phd and became a quant.
All the problems cited are about testing whether you can write ifs, loops, and recursion (or a stack/queue).
They aren't testing if you can write a solver. They are testing if you can use bricks that solvers are built out of because other software when it gets interesting is built out of the same stuff.
> The "smart" answer is to use a dynamic programming algorithm, which I didn't know how to do. So I failed the interview.
Really? This kind of interview needs to go away.
However, coding interviews are useful. It's just that "knowing the trick" shouldn't be the point. The point is whether the candidate knows how to code (without AI), can explain themselves and walk through the problem, explain their thought processes, etc. If they do a good enough reasoning job but fail to solve the problem (they run out of time, or they go on an interesting tangent that ultimately proves fruitless) it's still a "passed the test" situation for me.
Failure would mean: "cannot code anything at all, not even a suboptimal solution. Cannot reason about the problem at all. Cannot describe a single pitfall. When told about a pitfall, doesn't understand it nor its implications. Cannot communicate their thoughts."
An interview shouldn't be a university exam.
I agree with this approach. With the exception of testing for specific domain knowledge relevant to the work role, the coding interview should just be about testing the applicant's problem-solving skills and grasp of their language of choice. I would even prefer a take-home style problem that we can review in-person over some high-pressure puzzle. The leetcode interview doesn't seem to correspond to anything a developer actually does day to day.
The bar is so high nowadays that simply being able to talk intelligently about the problem, ask clarifying questions, arrive at an inefficient solution, and code it up well does not pass muster.
Even getting an efficient algorithm basically right is no guarantee.
In some cases there might be alternative solutions with different tradeoffs, and you might have to come up with those as well.
Miss a counterexample? Even if you get it after a single hint? Fuck you, you're out. I can find someone who doesn't need the hint.
It might be true in the general case; I haven't interviewed for a job in some years, so I may be out of touch.
All I can say is that I do conduct interviews, and that I follow the above philosophy (at least for my round).
“Many hard problems are, to me, quite easy”
I implemented the simple greedy algorithm and immediately fell into the trap of the question: the greedy algorithm only works for "well-behaved" denominations. If the coin values were [10, 9, 1], then making 37 cents would take 10 coins in the greedy algorithm but only 4 coins optimally (10+9+9+9).
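A quick sketch of that trap, assuming Python, if you want to watch it fail:

    def greedy_coins(coins: list[int], target: int) -> list[int]:
        # Repeatedly take the largest denomination that still fits.
        result = []
        for c in sorted(coins, reverse=True):
            while target >= c:
                result.append(c)
                target -= c
        return result

    print(greedy_coins([10, 9, 1], 37))
    # [10, 10, 10, 1, 1, 1, 1, 1, 1, 1] -- 10 coins,
    # versus the optimal 4: [10, 9, 9, 9]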
That's a bad algorithm, then, not a greedy algorithm. Wouldn't a properly-implemented greedy algorithm use as many coins as possible of a given large denomination before dropping back to the next-lower denomination?
If a candidate's only options are to either use a constraint solver or to implement a naïver-than-usual greedy algorithm, well, sorry, but that's a no-hire.
There is no greedy solution to the problem. A greedy algorithm would start by taking three 10-cent coins to make 37, which is wrong.
> Wouldn't a properly-implemented greedy algorithm use as many coins as possible of a given large denomination before dropping back to the next-lower denomination?
Yes, and it won't work on the problem described. The greedy algorithm only works on certain sets of coins (US coin denominations are one of those sets), and fails in at least some cases with other coin sets (as illustrated in the bit you quoted).
The algorithm they're using must be "Until you hit the limit, take the highest denomination coin that fits beneath the limit. If you can't hit the limit, fall back one step."
That fits your definition of "use as many coins as possible of a given large denomination before dropping back to the next-lower denomination" but will find 10-10-10-1-1-1-1-1-1-1 and stop before it even tries 10-9-anything.
By the way, ChatGPT was able to solve this problem and give the correct solution.
It's in numerous algorithms textbooks and probably a lot of code repositories, so that's not surprising.
D'oh, that makes sense, I didn't consider the case where it would keep returning 10.
"Follow-up question since you solved that so quickly: implement a constraint solver."
A little off topic, but I don't know much about greedy algorithms or dynamic programming. I got curious. This conversation was very insightful and now it's very clear in my mind: https://chatgpt.com/share/68c46d0b-8858-8004-aa03-f7ce321988...
My beef with someone using a constraint solver here is that they almost certainly wouldn't be able to guarantee anything about their solution other than that, if it produces an output, it will be correct. They won't be able to guarantee running time, space usage, or (probably for most tools) even a useful progress indicator. The problem isn't merely that they used another tool - the problem is that they abstracted away critical details. Had they provided a handwritten solution from scratch with the same characteristics, it would've exhibited the same problems.
This doesn't mean they can't provide a constraint solver solution, but if they do, they'd better be prepared to address the obvious follow-ups. If they're prepared to give an efficient solution afterward in the time left, then more power to them.
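For reference, the efficient answer here is the bottom-up dynamic programming table, which comes with exactly the guarantees a black-box solver call hides: O(len(coins) * amount) time and O(amount) space. A sketch, assuming the usual "fewest coins" formulation:

    def min_coins_dp(coins: list[int], amount: int) -> int:
        # table[a] = fewest coins needed to make amount a
        INF = float("inf")
        table = [0] + [INF] * amount
        for a in range(1, amount + 1):
            for c in coins:
                if c <= a:
                    table[a] = min(table[a], table[a - c] + 1)
        return -1 if table[amount] == INF else table[amount]

    print(min_coins_dp([10, 9, 1], 37))  # 4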
> First of all, Nice ChatGPT response
What the heck are you talking about? I didn't even visit ChatGPT today.
That's an en-dash, not an em-dash
"It's not X, It's Y"
I don't think someone with an account from 2012 and 20k karma would be posting LLM-generated comments. It also doesn't read as one. It doesn't even use the "it's not X, it's Y" formula; it contraposes things against each other. Like I just did.