This whole post-developer idea is a red herring fueled by investor optics.
The reality is AI will change how software is built, but it's still just a tool that requires the same type of precise thinking that software developers do. You can remove all need to understand syntax, but the need for translating vague desires from laypeople into a precise specification will remain—there's nothing on the AI horizon that will solve for this, and savvy tech leaders know this. So why are we hearing this narrative from tech leaders who should know better? Two reasons:
First is that the majority of investor gains in the US stock market have been fueled heavily by AI hype. Every public tech CEO is getting questions from analysts about what their AI strategy is, and they have to tell a compelling story. There's very little space or patience for nuance here because no one really knows how it will play out, but in the meantime investors are making decisions based on what is said. It's no surprise that the majority of execs are just jumping on the bandwagon so they have a story to keep their stock price propped up.
Second, and perhaps more importantly, regardless of AI, software teams across the industry are just too big. Headcount at tech companies has ballooned over the last couple of decades due to the web, the smartphone revolution and ZIRP. In that environment the FAANGs of the world were hoarding talent just to be ready to capitalize on whatever came next. But the ugly truth is that a lot of the juice has already been squeezed, and the actual business needs don't justify that headcount over the long term. AI is a convenient cover story for RIFs they would have done anyway; it just ties everything up with a nice bow for investors.
> a red herring fueled by investor optics
Exactly. Plus we kind of want to believe it. The "extrapolate to infinity" bias writ large. It's seductive. Step 1: AI that genuinely does some amazing things. Step 2: handwave, but look out Step 3: Super Intelligence and AI that does it all. "This changes everything" etc. And there are just enough green shoots to go all in on this idea (and I mean cult-level all in).
In practice it plays out much closer to the author's sentiment. A useful tool. Perhaps even paradigm defining.
> First is that the majority of investor gains in the US stock market have been fueled heavily by AI hype. Every public tech CEO is getting questions from analysts about what their AI strategy is, and they have to tell a compelling story. There's very little space or patience for nuance here because no one really knows how it will play out, but in the meantime investors are making decisions based on what is said. It's no surprise that the majority of execs are just jumping on the bandwagon so they have a story to keep their stock price propped up.
This is also the only place where LLMs have had a tangible impact on product offerings with some path to utility, so it must be sold this way.
The broader public (in my experience, unprovoked, in conversations with the non-technical) is even aware of this--a neighbor of mine mentioned "prompting for code" to me the other day while "AI" was a topic we discussed.
Programmers have been well-compensated and I suspect there's some sort of public dissatisfaction with the perception of "day in the life" types making loads of comp to drink free lattes or whatever; no-one will cry for us.
Meanwhile, there are a billion and six "AI startups" "revolutionizing healthcare/insurance/whatever with AI", but nothing the public has seen at any scale that can even be sold as a plausible demo.
Image/music gen and chatbots writing code are basically all of it, and the former isn't even often sold as a profitable path.
When COBOL came out it was hyped as ending the need for software developers because it looked sorta like normal English, but it still required someone who could think like a programmer. The need to be able to think like a developer is somewhat reduced, but I don't see it totally going away.
> The reality is AI will change how software is built
Yup, now instead of blindly using tab to autocomplete I have to check the "AI" is not fucking up before pressing the key.
You can set a hotkey to disable completions; it's very useful for the 'no my little AI friend, I don't think you quite get it' situations that would otherwise have you spending more brain cycles discarding broken suggestions than actually coding.
I disable AI autocomplete entirely and map it to ctrl-; so I can call it up intentionally.
> the need for translating vague desires from laypeople into a precise specification will remain—there's nothing on the AI horizon that will solve for this
LLMs are very good at exactly that. What they aren't good at (I'll add the "yet" as an opinion) is larger systems thinking: having the context of multiple teams, multiple systems, infra, business priorities, security, etc.
> the need for translating vague desires from laypeople into a precise specification will remain
What makes you think LLMs will never be able to do that?
Have you tried any of the various DeepResearch products? The workflow is that you make a request for a research project; and then it asks various clarifying questions to narrow down the specific question, acceptable sources, etc.; and only then does it do all the research and collate a report for you.
> ... you make a request ...
Sounds like the work of mustering up instructions... like programming...
So how do these LLMs completely remove us from having to do this work of mustering up instructions? Seems to me someone still has to instruct LLMs on what to do, and the only way that reality ceases to exist entirely is if humanity stops wanting computers to do things for it. I don't think that's happening anytime soon.
However, maybe fewer programmers will be needed, but then again, the same was said of Fortran and COBOL, and look where we are today: more programmers than ever...
> > ... you make a request ...
> Sounds like the work of mustering up instructions... like programming...
Again, try Deep Research. You make a vague request, and it works with you to make it specific enough that it can deliver a product with some confidence that it will meet your requirements. Like a product manager, business analyst, requirements engineer, or whatever they call it these days.
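For anyone who hasn't seen that workflow, here is a minimal sketch of the clarify-before-you-act pattern being described -- not Deep Research itself, just a toy loop assuming the OpenAI Python client; the model name, prompts, and the READY convention are all placeholders:

    # Toy "clarify first, then execute" loop, in the spirit of the workflow
    # described above. Assumes the OpenAI Python client and an API key in the
    # environment; model name and prompts are placeholders, not product internals.
    from openai import OpenAI

    client = OpenAI()

    def clarify_then_run(vague_request: str) -> str:
        messages = [
            {"role": "system", "content": (
                "Before doing any work, ask the user clarifying questions, one "
                "at a time, until the request is specific enough to act on. "
                "When it is, reply with the single word READY.")},
            {"role": "user", "content": vague_request},
        ]
        while True:
            reply = client.chat.completions.create(model="gpt-4o", messages=messages)
            text = reply.choices[0].message.content
            if text.strip() == "READY":
                break
            # The model asked a clarifying question; relay it to the human.
            answer = input(f"{text}\n> ")
            messages += [{"role": "assistant", "content": text},
                         {"role": "user", "content": answer}]
        # Only now do the actual work, against the refined request.
        messages.append({"role": "user", "content": "Now produce the deliverable."})
        final = client.chat.completions.create(model="gpt-4o", messages=messages)
        return final.choices[0].message.content

The point of the sketch is only that the "translate vague desires into a spec" step becomes a dialogue the model drives, which is exactly the part the grandparent says will remain human work.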
Not OP,
AI may or may not -- that remains to be seen -- but LLMs of today certainly won't.
The other day I was reviewing the work of a peer on a relatively easy task (using an SDK to manage some remote resources given a static configuration).
Several times I was like "why did you do that this way? This is so contrived." And I should have known better, because of course the answer started with "... I know, but Copilot did this or that...". Of course, no tests to properly validate the implementation.
The sentiment expressed in the article, that developers won't even bother to validate the output of coding assistants, is real. And that's one of the biggest shames of this current hype: quality has already been declining for the past 10 years, and the signs indicate it will only go downhill from here.
Stack overflow was never a good excuse for crappy work and neither is AI.
Your real problem is a people problem: hiring, management, feedback, etc.
Problem is, big corporations' top-level management thinks GenAI will turn crappy developers into superstars.
Happy to be independent and not an employee, so I don't have to put up with this crap.
What's funny is that I have had good experience using the opposite workflow to said coworker's:
Do the implementation with minimal (not none, tbf) AI support, then ask Copilot if there are any obvious issues with it and otherwise have it check my work.
This workflow has helped me catch quite a few gotchas I would have otherwise missed.
Coding assistants are really helpful for validating output; I've had much more mixed results trying to use them to generate novel outputs.
That's because it's pretty safe to say you have experience and have seen havoc in the past.
Less experienced developers are the primary vector of propagation for this "low quality" output, with seniors trying to educate and review the mess (if time permits).
I was thinking about this while reading another story about AI code review.
Having an LLM write the code for me? Blecch, it doesn't do it right.
Have an LLM make suggestions about my code? That's fine. If some of them are asinine I just get to laugh and feel smart while ignoring them. But if 1/5 of the suggestions are actually good? That's a win.
But if only 1/5 of the answers I get from an LLM are correct, that's a waste of time. Funny how the accuracy of the model matters a different amount depending on the task at hand!
It's actually amazing news: worse software quality means more opportunities for hackers and better tools.
Or people just become satisfied with lower quality. Hand-crafted chairs from a woodworker used to be the only way to get a chair. You'd get an amazing, life-long chair with no flaws. Now that chairs are cheaply made in factories, we can have multiple different chairs with many flaws. And just toss them out and go on to the next one when needed. But... those sturdy beautiful fine-woodworker-made chairs are still being built. They are just more rare, expensive, and special than ever before.
Yeah that's where I think it's going. Quality software will have no space but a niche, and so your customers will be the top 1% that can afford it.
You actually believe that not a single woodworker said to himself: there are only so many wealthy people, I might do better if I make some chairs from cheap wood in a hurry, not doing so much polishing or ornamentation, and I can make twice as many and sell to the not-so-rich people? You really don't think that existed?
One of the reasons for the downfall of Florence was that it didn't enter the low-quality textile market and stayed with the high-end stuff.
I have to disagree here based on experience--the majority of software projects I've worked on really didn't care much for "quality" in any sense.
A HUGE portion of work in F500 companies that are non-software firms is outsourced--these companies spend loads of cash on consulting companies (yes, those ones) that often produce results for which "sub-par" would be a compliment. Various cases have been made public, but more often it's routine, poor-quality, mundane work that's way over budget and barely meets requirements.
If anything, the prevalence and adoption of LLM-generated code will increase the quality in a lot of places.
If you've never had to wade through something that one of Those Companies wrote, you have no idea how bad it gets and how frequently this is the case.
We just saw DOGE-related cuts to Accenture and Deloitte--these companies have huge contracts all over the place, not just public sector.
There's a massive amount of crap out there.
Most companies do not care, do not understand "quality" outside of some cargo-culted notion of "clean", have only adopted modern practices as some sort of ritual* of Things You Do without any assessment of what tangibly works or doesn't, and have no understanding of how to attract skilled professionals, nor assess them, nor nurture them. Often their definition of skilled is a resume that contains the proper "experience" with whatever Java/Angular/React/we-love-containers thing they use. With the right number of "years" next to it.
*a recent example--a company has GH Actions with CI/CD for every commit on all branches but routinely suffers delays due to misconfigurations, runner issues, and other headaches. Code reviews for PRs exist but are basically pedantry that just slows down deployments ("change this if-else to a switch") while missing crucial bugs; major dependencies get randomly updated over vague security warnings, breaking interfaces and other APIs. All this and more--and this is one of the better examples I can think of, in terms of "quality."
I'm saying this as someone who worked at a Pretty Good consulting company whose business model changed to sub-outsourcing some of the labor to some of These Companies, and who often had to fix, assist, push back on, or work extra to clean up the various decisions made in these cases.
I also "inherited" and worked on fixes for things left behind by some of these places.
The good thing is there was often low-hanging fruit: extra and overprovisioned cloud resources wasting $$ and the like, absurd architectures with extra dbs and queues, and all sorts of nightmares.
The "AGI is right around the corner" argument is effectively corporate malpractice. No, shareholders don't want you to wait around and do nothing (other than a few layoffs here and there) while you wait for AGI.
The layoff mass hysteria that ran through the tech industry established doing less or nothing as a corporate virtue.
If the market gave your company a PE ratio of 20+ and you're flush with cash, it is borderline fiduciary negligence to be slashing projects and doing layoffs. Your shareholders didn't invest in you for the capital preservation portfolio-management skills of your finance department.
Not necessarily. Should the market action bump up a company's PE very high (hello, Costco at PE 53 today), that's not a reason to splurge on projects. On the contrary, many shareholders would expect the company to behave responsibly, save cash, pause share buybacks and build up a war chest for leaner times, when buying a competitor could be way cheaper. My 2c.
If the PE ratio is high enough, the company can raise money by simply issuing more shares. Giving the company a money-printer is the whole point of the stock market. That's what makes you an investor -- you're offering the company the value of your shares by exposing yourself to "inflation" whenever the company needs to raise some money.
On paper. In reality, issuing shares (or debt) takes time and is a complex process, the mere hint of which can dampen share prices.
> That's what makes you an investor -- you're offering the company the value of your shares by exposing yourself to "inflation" whenever the company needs to raise some money.
I beg to differ. What makes me an investor is buying a share of future company profits, directly via dividends or indirectly via a future sale of shares. Many of the companies I invested in over the last 30 years were (over time) reducing share counts via buybacks, not issuing shares like candy.
You're not an "investor" if you aren't actually funding the company. Trading scrips that entitle you to a share of the profit is not "funding" the company.
Except that you are -- every time that the company prints shares, they've taken some value from you to fund their operations. If companies weren't allowed to issue shares from thin air, there wouldn't be much point to the stock market.
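To put toy numbers on the dilution mechanism this sub-thread is arguing about (purely illustrative figures, not any real company's):

    # Back-of-envelope dilution math: the company issues new shares, raises cash,
    # and existing holders' ownership share shrinks. Illustrative numbers only.
    shares_outstanding = 100_000_000
    share_price = 50.0                                   # price implied by a rich multiple
    market_cap = shares_outstanding * share_price        # $5.0B

    new_shares = 10_000_000                              # a 10% secondary issuance
    cash_raised = new_shares * share_price               # $500M goes to the company

    ownership_after = shares_outstanding / (shares_outstanding + new_shares)
    print(f"existing holders now own {ownership_after:.1%} of the company")   # ~90.9%
    print(f"dilution: {1 - ownership_after:.1%}, cash raised: ${cash_raised:,.0f}")

Whether that trade counts as "funding the company" or "taking value from you" is the disagreement above; the arithmetic itself isn't in dispute.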
Everything you've said sounds like next quarter's problem.
”We’ve become so efficient and are so forward-thinking, we just don’t need all these pesky employees” sounds better than ”We overhired and it turns out money isn’t free anymore”.
I think the crux of this post is spot-on: we’re nowhere near a “post-developer” era. LLMs are great accelerators, but they still need a competent human in the loop.
That said, I do think he understates how much the value of certain developer skills is shifting. For example, pixel-perfect CSS or esoteric algorithmic showmanship used to signal craftsmanship — now they’re rapidly becoming commoditized. With today’s tools, I’d rather paste a screenshot into an LLM and tweak the output than spend hours on handcrafted layouts for a single corporate device.
Are you suggesting pixel perfect CSS is not needed any more, or that an LLM can fix any CSS problem presented to it?
The client sends the new menu as a Word document. Select all > copy > paste this wall of text into an LLM > please proofread > please turn into semantic HTML > please translate into German, French and Italian. That takes the 3 minutes a developer needs to formulate a prompt that will get a result with only 2 bugs.
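A rough sketch of that copy-paste routine as a script, for anyone who wants to see the shape of it -- assuming the OpenAI Python client; the model name and prompt wording are placeholders, and a human still has to catch the "2 bugs" at the end:

    # The menu workflow above as a sequential prompt chain: proofread, convert to
    # semantic HTML, translate. Placeholder model and prompts; assumes the OpenAI
    # Python client with an API key in the environment.
    from openai import OpenAI

    client = OpenAI()

    STEPS = [
        "Proofread this restaurant menu text and fix typos only.",
        "Turn the corrected menu into semantic HTML (headings and lists, no styling).",
        "Translate the HTML content into German, French and Italian, keeping the markup.",
    ]

    def run_chain(menu_text: str) -> str:
        current = menu_text
        for instruction in STEPS:
            resp = client.chat.completions.create(
                model="gpt-4o",  # placeholder
                messages=[{"role": "user", "content": f"{instruction}\n\n{current}"}],
            )
            current = resp.choices[0].message.content
        return current  # still needs a human review pass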
Know if it actually is your tool.
Over the past 30 years, computers and software have dramatically transformed our world. Yet many sectors remain heavily influenced by their analog history. My understanding is that the HN community has always recognized that the future is already here, just not evenly distributed across work environments, administration, and general processes. Didn't many of us believe that numerous jobs could be replaced by a few lines of code if inputs and outputs were properly standardized? The fact that this hasn't happened or has occurred very slowly due to institutional inertia is another story altogether.

Whether software development will become a "bullshit job" or how the world will look in a few years remains unknown. But those who constantly praise their work as software developers while simultaneously acknowledging that other non-physical jobs and processes could be fundamentally overhauled are living in a cognitive bubble—something I wouldn't have expected in this community.
I thought we were supposed to have reached AGI since, like, the 1970s. Every time there's an advancement in AI, there's always speculation that "robots are gonna take our jobs in 10 years".
That said, at least we have reached the phase where AI tools are commercialized, so that's another +1 I guess.
After some experience, starting to use a new programming language is not such a big challenge; mastering the new language's ecosystem is. AI might help you generate new code faster, but I feel like crunching out code has never been much of an issue. Bigger-picture system design, something that scales and is maintainable, is the challenge.
A big part of coding is understanding the code and making decisions on what needs to be added/removed/changed. LLMs can code, but even if they generated perfect code every single time, someone would still need to read the code and make decisions. Others speak about understanding business logic, interfacing with clients and stakeholders, etc... I get that, but even without that, someone will always need to decide how things should be done, improved, etc. LLMs are not going to benchmark your code and they will never understand the developer/client's intent perfectly.
Why are LLMs in this context being viewed as more than really powerful auto-complete/intellisense? I mean, correct me if I'm wrong, but aren't LLMs still years away from auto-generating a complete complex codebase that works without fixing a lot of bugs/assumptions?
It's funny how we'd like to chop off the head of the junior developer behind a pull request of perfectly working but not-quite-clean-enough code, yet we happily welcome spaghetti and buggy LLM code because sisterhood and brotherhood claim it's trendy. Herd mentality.
There's an uncomfortable truth backing this, at least for me: revising the code of an LLM and changing my cursorrules is easier than fixing up a junior's code and teaching them _why_ I'd want to do something a particular way. After all, it's a tool, not a person, and you can be much rougher with your tools.
In my experience, you kinda get used to other people's mindset and after a while it becomes second nature to look for their mistakes or to understand their approaches.
Well I, for one, have infinitely more patience for my juniors than whatever nonsense an LLM spits out. Far more pleasant to teach a person something new than to fix autocomplete output.
> LLMs are not going to benchmark your code
If I may focus on this particular line without invalidating the rest of the content -- why aren't they going to benchmark the code?
You hardly need more than a simple script to just run a generic benchmark tool. But to measure the performance of your code and figure out what needs to be tuned, I don't think LLMs are there, am I wrong? They can't understand the code they generated well enough to make small adjustments and improve the benchmark score. For a dev to do that on LLM-generated code, they would have to understand the codebase as the LLM intended it, at least.
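To be fair, the "simple script" half really is simple -- something like this generic timeit micro-benchmark (the two functions are just stand-ins) is about all it takes to get numbers; knowing what to change when the numbers are bad is the part in question:

    # A generic micro-benchmark of the "simple script" variety mentioned above:
    # time a couple of candidate implementations and print the results. The two
    # functions are stand-ins; deciding what to tune still needs someone who
    # understands the code.
    import timeit

    def naive_sum(n: int) -> int:
        total = 0
        for i in range(n):
            total += i
        return total

    def builtin_sum(n: int) -> int:
        return sum(range(n))

    if __name__ == "__main__":
        for fn in (naive_sum, builtin_sum):
            t = timeit.timeit(lambda: fn(100_000), number=200)
            print(f"{fn.__name__}: {t:.3f}s for 200 runs")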
There used to be a job of a "Typist". Now everyone does their own typing.
In the near future we'll probably see a lot more subject matter experts creating their own tools instead of requiring a dedicated person to translate their requirements.
I'm not sure subject matter experts will still be a thing. I worry that we'll just see expectations gradually lower to the point where kinda shitty is the baseline, and we just trust or accommodate the machine for everything.
It's fine, there will be new jobs.
The post developer era will be the post white collar era.
Surely the level of intellectual difficulty in software engineering is similar to practicing law, medicine, banking… ?
Saying AI writes code is like saying your clarinet plays music.
Truth is we've been post-developer for a while; powerful machines and high-level abstractions have made good developers into managers, translating business objectives into software products and managing the outcomes with much less typing of code -- they're still called software engineers for legacy and cultural reasons, of course.
Sadly most shops greatly overlook this reality, stuck in 1991, and continue to add redundant layers and staff thinking it makes them 'agile'.
Someone kill that muppet that jumps out of the margin halfway down. Please.
The site doesn't work in my browser (Safari), cannot scroll. Now I know why...
Yes, that manages to be both super annoying and condescending at the same time.
Funny how a foreign country got America to compromise on its core value of free speech that we used to lecture Europeans on.
With great worry I observe recent developments in the U.S., but I do not catch what event are you referring to exactly. Would you kindly hint?
ahh wrong thread. I meant to comment here LOL
https://news.ycombinator.com/item?id=43684536
> It’s like a tag team wrestling match; when I hit a task that Claude would excel at, I tap out and let him tackle it.
Please do not anthropomorphize the AI, it's not a he.
This "AGI" definition is extremely loose depending on who you talk to. Ask "what does AGI mean to you" and sometimes the answer is:
1. Millions of layoffs across industries due to AI with some form of questionable UBI (not sure if this works)
2. 100BN in profits. (Microsoft / OpenAI definition)
3. Abundance in slopware. (VCs' definition)
4. Raise more money to reach AGI / ASI.
5. Any job that a human can do which is economically significant.
6. Safe AI. (Researchers' definition)
7. All the above that AI could possibly do better.
I am sure there must be an industry-aligned and concrete definition that everyone can agree on, rather than these goalpost-moving definitions.
Anyone who thinks AI will replace programmers isn’t doing much AI assisted coding.
AI is at best very helpful.
It’s a very very long way from making programmers obsolete.
The developer is not going to become obsolete.
However, LLMs might already be good enough to replace teams of 10 software developers with a domain expert and an expert developer armed with good LLMs.
That's enough for a fundamental, very violent change to the software industry.
They definitely are not for projects of any serious complexity
Americans inexplicably re-elected a wildly incompetent conman to be president
In fact, this was totally and easily explicable after the dominant political party tried to convince Americans that their current leader was not suffering from severe cognitive decline and was "sharp as a tack" (a Biden admin talking point), after NPCs lied to Americans about Trump being "easy to beat" by Kamala and also lied to them about Trump calling Charlottesville protestors "very fine people", thinking that those fake attempts to make Trump look like he was endorsing Nazis wouldn't backfire explosively.
So, no, it was not "inexplicable" at all; it was rather a whiplash effect initiated by media narratives originating in the Democrat ecosystem. And don't forget: Trump, Elon and Rogan are all ex-Democrats. I wonder, too, if the author was one of those people who deluded himself into thinking "Kamala is a great candidate" against all the evidence.
So the "Russiagate" hoax failed, "Kamala is awesome" failed, elites orienting themselves around appeasing far left pressure groups failed, smug contempt for middle America failed (astonishingly) and yet it is "inexplicable" why Orange Man Bad won the election!
I consider the relative ease with which many people can imitate him to be the reason he is in office
You guys are so triggered by that "fine people" line lmao
Always bewildering to me when people look at the demographics of conservatives, leaning old and rich, and think that they just fell out of the coconut tree. No. Those people were sometimes "progressives" in their time and later became conservatives.
Same as it ever was. No further explanation needed. Not a new phenomenon.
ed: "No, it must be the fault of the left." must be a reassuring thought, but let me occams razor that for you: when these people were younger, they were selfish, and as they weren't yet the status quo, they were progressive. When these people got older, they were still selfish, and as they were the status quo, they were conservatives. Simple.
"If you’re passionate about software development, or if you see it as the best opportunity for you to earn a high salary that’ll lift you into the upper middle class, I really hope you won’t let yourself get discouraged by AI hype."
+100 to that