spiralcoaster 11 hours ago

> The amount of cognitive overhead in this deceptively simple log is several levels deep: you have to first stop to type logger.info (or is it logging.info? I use both loguru and logger depending on the codebase and end up always getting the two confused.) Then, the parentheses, the f-string itself, and then the variables in brackets. Now, was it your_variable or your_variable_with_edits from five lines up? And what’s the syntax for accessing a subset of df.head again?

What you're describing is called: programming. This can't be serious. What about the cognitive overhead of writing a for loop? You have to remember what's in the array you're iterating over, how that array interacts with maybe other parts of the code base, and oh man, what about those pesky indices! Does it start at 0 or 1? I can't take it! AI save me!

  • btown 11 hours ago

    One of the things I love about computing and computer science is how the wide variety of tools available, built over multiple generations, provide people with the leverage to bring their highly complex ideas to life. No matter how they work best, they can use those tools as a way to keep their mind focused on larger goals with broader context without yak shaving every hole punched in a punchcard.

    You see a person whose conception of programming is different from yours; I see a person who's finding joy in the act of creating computer programs, and who will be able to bring even more of their ideas to life than they would have beforehand. That's something to celebrate, I think.

    • bravetraveler 4 hours ago

      Calling aptitude "yak shaving" is certainly something to consider

    • gopalv 7 hours ago

      > who will be able to bring even more of their ideas to life than they would have beforehand.

      This is the core part of what's changing - the most important people around me used to be "People who know how".

      We're slowly shifting to "Knowing what you want" is beating the Know-how.

      People without any know-how are able to experiment because they know what they want, and can keep saying "No, that's not what I want" to a system that listens to them without complaining while supplying the know-how.

      From my perspective, my decades of accumulated know-how are entirely pointless, wiped away in the last 2 years.

      Adapt or fall behind, there's no way to ignore AI and hope it passes by without a ripple.

      • raffraffraff 4 hours ago

        I've found that if you're a novice coder you don't know what to ask for.

        Your decades of experience are probably a bit like mine: you sense the cause of a problem in an almost psychic way, based on knowledge of the existing codebase, the person who wrote the last update, the "smell" of the problem. I've walked into a major incident, looked at a single alert on a dashboard, and almost with a smile on my face identified the root cause immediately. It's decades of knowledge that allow you to know what to ask for.

        Same with vibe coding: I've been having tremendous fun with it but occasionally a totally weird failure will occur, something that "couldn't happen" if you wrote it yourself. Like, you realise that the AI didn't refactor your code when you added the last feature, so it added "some tiny thing" to three separate functions. Then over several refactors it only updated one of those "tiny things", so now that specific feature only breaks in very specific cases.

        Or let's say you want to get it to write something that AI seems to have problems with. Example, assemble and manage a NAS array using mdadm. I've been messing with that recently and Google Gemini has lost the whole array twice, and utterly failed to figure out how to rename an array device name. It's a hoot. Just to see if it would ever figure it out I kept going. Pages and pages of over-and-back, repeating the same mistakes, missing the obvious. Maybe it's been trained on 10 years of Muppets online giving terrible advice on how to manage mdadm?

        • johnisgood 3 hours ago

          > I've found that if you're a novice coder you don't know what to ask for.

          And this is the reason why I think I am productive with LLMs, and why people who know nothing about the underlying concepts are not going to be as productive.

        • fakedang an hour ago

          As a counterexample, there's someone who vibe-coded a subscription adult website, complete with payments and all, despite having zero computer science experience, while living in an RV. I can't find the link now, although last I saw on X, she was complaining about being blocked by trad finance after Bill Ackman's campaign.

          So yeah, it's absolutely possible. From personal experience, I was able to implement a basic scan and go application complete with payment integrations without going through a single piece of documentation.

          As long as you're ready to wrestle in frustration with an AI for a bit, you can make something monetizable once you've identified an underserved market.

      • oldenlessons 3 hours ago

        What you are talking about here is accidental vs essential complexity as described by Brooks in the 80s.

        Your claim that LLMs do away entirely with accidental complexity and manage essential complexity for you is not supported by reality. Adding these tools to workflows adds a tonne of accidental complexity, and they still cannot shield you from all the essential complexity because they are often wrong.

        There has been endless noise made over semantics, but the plain fact is that LLMs very often render output that is incongruent with reality. And now we are trying to remedy that with what amounts to expert systems.

        There is no silver bullet. You have to painstakingly get rid of accidental complexity and tackle essential complexity in order to build complex and useful systems.

        I don't understand what's so abhorrent about it that people invent layers and layers of accidental complexity trying to avoid facing simple facts. We need to understand computers and domains with high accuracy to build any useful software and that's how it's always been and how it's always gonna be.

        There is no silver bullet.

      • goosejuice 6 hours ago

        > From my perspective, my decades of accumulating know-how is entirely pointless and wiped away in the last 2 years.

        I find this very difficult to believe, but I have no idea what you do. I'm a generalist and this isn't even close to true for me with state of the art llms.

        • oldenlessons 3 hours ago

          Nothing has served me better over the past few decades than accumulating ever more detailed and accurate knowledge of what it is exactly that computers do under the hood.

          All the layers of abstraction are well intended and often useful. But they by no means eliminate the need to understand in detail the hard facts underlying computer engineering if you want to build performant and reliable software.

          You can trade that off at different rates for different circumstances but the notion that you can do away entirely with the need to know these details has never been true.

          More people being enabled to think less about these details necessitates more expertise to exist to support them, never less.

      • anon7000 7 hours ago

        I agree at least a little bit, but let’s be honest: the history of software engineering is a history of higher and higher levels of abstraction wrapping the previous levels.

        So part of this is just another abstraction. But another part, which I agree with, is that abstracting how you learn shit is not good. For me, I use AI in a way that helps me learn more and accomplish more. I deliberately don’t cede my thinking process away, and I deliberately try to add more polish and quality since it helps me do it in less time. I don’t feel like my know-how is useless — instead, I’m seeing how valuable it is to know shit when a junior teammate is opening PRs with critical mistakes because they don’t know any better (and aren’t trying to learn)

        • oldenlessons 3 hours ago

          I like to read books on computers from the 70s and 80s. No trite analogies, just hard facts and diagrams. And explanations that start from scratch, requiring no previous knowledge - because there was none.

          The thing about these layers of abstraction is that they add load and thus increase the demand for people and teams and organizations that command these lower levels. The idea that, on a systemic level, higher abstraction levels can diminish the importance, size, complexity or expertise needed overall or even keep it at current levels is entirely misguided.

          As we add load on top, the base has to become stronger and becomes more important, not less.

  • BoiledCabbage 11 hours ago

    > What you're describing is called: programming.

    Is that the part of programming that you enjoy? Remembering logger vs logging?

    For me, I enjoyed the technical challenges, the design, solving customer problems: all of that.

    But in the end, focus on the parts you love.

    • okayishdefaults 11 hours ago

      This is a sign that the user hasn't taken the time to set up their tools. You should be able to type log and have it tab complete because your editor should be aware of the context you're in. You don't need a fuzzy problem solver to solve non-fuzzy problems.

      • bitpush 11 hours ago

        > user hasn't taken the time to set up their tools

        The user, in fact, has set up a tool for the task - an "AI model" - unless you're saying one tool is better than others.

        • sodality2 11 hours ago

          Then it's a real bad case of using the LLM hammer thinking everything is a nail. If you're truly using transformer inference to auto fill variables when your LSP could do that with orders of magnitude less power usage, 100% success rate (given it's parsed the source tree and knows exactly what variables exist, etc), I'd argue that that tool is better.

          Of course LLMs can do a lot more than variable autocomplete. But all of the examples given are things that are removing cognitive overhead that probably won't exist after a little practice doing it yourself.

          • chrisweekly 10 hours ago

            This. Set up your dev env and pay attention to details and get it right. Introducing probabilistic codegen before doing that is asking for trouble before you even really get started accruing tech debt.

            • bitpush 9 hours ago

              You say "probabilistic" as if it's some kind of gotcha. The binary rigidity is merely an illusion that computers put up. At every layer, there are probabilistic events going on.

              - Your hot path functions get optimized, probabilistically

              - Your requests to a webserver are probabilistic, and most of the systems have retries built in.

              - Heck, 1s and 0s operate in a range, with error bars built in. It isn't really 5V = 1 and 0V = 0.

              Just because YOU don't deal with probabilistic events while programming in Rust or Python doesn't mean it is inherently bad. Embrace it.

              • sodality2 8 hours ago

                We’re comparing this to an LSP or IntelliSense type of system; how exactly are those probabilistic? Maybe they crash or leak memory every once in a while, but that’s true of any software, including an inference engine… I’m much more worried about the fact that I can’t guarantee that if I type in half of a variable name, it’ll know exactly what I’m trying to type. It would be like preparing to delete a line in vim and having it predict you want to delete the next three. Even if it’s right 90% of the time, you have to verify its output. It’s nothing like a compiler, spurious network errors, etc. (which still exist even with another layer of LLM on top).

              • AdieuToLogic 8 hours ago

                >> Introducing probabilistic codegen ...

                > Just because YOU dont deal with probabilistic events while programming in ...

                Runtime events such as what you enumerate are unrelated to "probabilistic codegen" the GP references, as "codegen" is short for "code generation" and in this context identifies an implementation activity.

                • bitpush 7 hours ago

                  The scheduler that puts your program on a CPU works probabilistically. There are no rigid guarantees of workloads in Linux. Those only exist in real-time operating systems.

                  • AdieuToLogic 7 hours ago

                    > The scheduler that puts your program on a CPU works probabilistically. There are no rigid guarantees of workloads in Linux. Those only exist in real-time operating systems.

                    Again, the post to which you originally replied was about code generation when authoring solution source code.

                    This has nothing to do with Linux, Linux process scheduling, RTOS[0], or any other runtime concern, be it operating system or otherwise.

                    0 - https://en.wikipedia.org/wiki/Real-time_operating_system

            • motorest 7 hours ago

              > This. Set up your dev env and pay attention to details and get it right. Introducing function declarations before knowing what assembly instructions you need to generate is asking for trouble before you even really get started accruing tech debt.

              Old heads cling to their tools and yell at kids walking on lawns, completely unaware that the world already changed right under their noses.

              • wizzwizz4 an hour ago

                We know the "world has changed": that's why we're yelling. The Luddites yelled when factories started churning out cheap fabric that'd barely last 10 years, turning what was once a purchase into a subscription. The villagers of Capel Celyn yelled when their homes were flooded to provide a reservoir for the Liverpool Corporation – a reservoir used for drinking water, in which human corpses lie.

                This change is good for some people, but it isn't good for us – and I suspect it's not really good for you, either.

          • motorest 7 hours ago

            > Then it's a real bad case of using the LLM hammer thinking everything is a nail. If you're truly using transformer inference to auto fill variables when your LSP could do that with orders of magnitude less power usage, 100% success rate (given it's parsed the source tree and knows exactly what variables exist, etc), I'd argue that that tool is better.

            I think you're clinging to low-level thinking, whereas today you have tools at your disposal that let you focus on higher-level details while eliminating the repetitive work required by, say, the shotgun surgery of adding individual log statements to a chain of function calls.

            > Of course LLMs can do a lot more than variable autocomplete.

            Yes, they can.

            Managing log calls is just one of them. LLMs are a tool that you can use in many, many applications. And they're faster and more efficient than LSPs at accomplishing higher-level tasks such as "add logs to this method/these methods in this class/module". Why would anyone avoid using something that is just there?

          • bayesianbot 10 hours ago

            Honestly, I've used a fully set up Neovim for the past few years, and I recently tried Zed and its "edit prediction," which predicts what you're going to modify next. I was surprised by how nice that felt — instead of remembering the correct keys to surround a word or line with quotes, I could just type either quotation mark, and the edit prediction would instantly suggest that I could press Tab to jump to the location for the other quote and add it. And it wasn't only surrounding quotes; it worked for everything similar, with the same keys and workflow.

            I still prefer my Neovim, but it really made me realize how much cognitive load all the keyboard shortcuts and other features add, even if they feel like muscle memory at this point.

          • scubbo 9 hours ago

            I have seen people suggesting that it's OK that our codebase doesn't support deterministically auto-adding the import statement of a newly-referenced class "because AI can predict it".

            I mean, sure, yes, it can. But drastically less efficiently, and with the possibility of errors. Where the problem is easily soluble, why not pick the solution that's just... right?

        • lmm 10 hours ago

          In my experience consistency from your tools is really important, and AI models are worse at it than the more traditional solutions to the problem.

        • rendaw 9 hours ago

          I don't want to wade into the debate here, but by "their tools" GP probably meant their existing tools (i.e. before adding a new tool), and by "a fuzzy problem solver" was referring to an "AI model".

      • paulmooreparks 10 hours ago

        I know old timers who think auto-completion is a sign of a lazy programmer. The wheel keeps turning....

      • goosejuice 5 hours ago

        Or the user works with user-hostile tools. Some stacks, cloud providers, etc are absolutely horrible to use.

        There are many people out there that have absolutely no idea how horrible or great they have it.

      • motorest 7 hours ago

        > This is a sign that the user hasn't taken the time to set up their tools.

        You are commenting on a blog post about how a user set up his tools. It's just that it's not your tool that is being showcased.

        > You should be able to type log and have it tab complete because your editor should be aware of the context you're in.

        ...or, hear me out, you don't have to. Think about it. If you have a tool where you type "add logs" and it's aware of best practices, context, and your own internal usage... I mean, why are you bothering with typing "log" at all?

        • anon22981 3 hours ago

          To be perfectly fair, saying "it's aware of the best practices, context and internal usage" is very misleading. It's aware of none of those things (as it is not "aware" of anything), and that is perfectly clear when it produces nonsensical results. Often the results are fine, but I see nonsensical results often enough in my more LLM-dependent coworkers' PRs.

          I’m not saying not to use them, but you putting it like that is very dishonest and doesn’t represent the actual reality of it. It doesn’t serve anyone but the vendors to be a shill about LLMs.

    • endymion-light 2 hours ago

      Yeah - a lot of these complaints feel like what I heard very early in my career about how you shouldn't learn Python. The C learning I did was still useful, but I appreciate not artisanally crafting all my memory management by hand, so that I can ship something I've created faster.

      • eulgro an hour ago

        I definitely would never use Python in production. But it remains an amazing tool for prototyping and writing quick dirty scripts.

        I can reasonably expect Python to be installed on every Linux system, the debugging experience is amazing (e.g. runtime evaluation of anything), there's a vast amount of libraries and bindings available, the amount of documentation is huge and it's probably the language LLMs know best.

        If there were two languages I would suggest anyone to start with, it would be C and Python. One gives a comprehensive overview of low-level stuff, the other gives you actual power to do stuff. From there on you can get fancy and upgrade to more advanced languages.

        • endymion-light an hour ago

          Oh 100%, there's definitely an important trade-off, and it's important to know. But there was definitely a cultural disdain and judgment for "why are you using python to create a simple side project - C is superior".

          It's still important to know both, and especially when I began working on aspects like multithreading I found my basis in C helped me learn far more easily, but I'm definitely more supportive of the ship-it mindset.

          It's better to have a bad side project online than to have none - you learn far more by creating things than by never making anything, and if you need LLMs and Python to do that, fine!

          I think it depends on how you approach these tools; personally, I still focus on learning general, repeatable concepts from LLMs, as I'm an idiot who needs different terms repeated 50 times in similar ways to understand them properly!

    • sodapopcan 9 hours ago

      > Is that the part of programming that you enjoy? Remembering logger vs logging?

      If you're proficient in a programming language then you don't need to remember these things, you just do it, much like spoken language.

      • bubblyworld 8 hours ago

        This isn't a language thing, it's a project thing. Language things I can do fluently (like the example of a for loop in the OP comment... lol). But I work on so many different projects that it's impossible to keep this kind of dependency context fresh in my head. And I think that's fine? I'm more than happy to delegate that kind of stuff.

      • simonw 9 hours ago

        I find there is a limit to the number of programming languages I can stay actively proficient in at any given time.

        I am using a much wider range of languages now that I have LLM assistance, because I am no longer incentivized to stick to a small number that are warm in my mental cache.

        • MangoToupe 3 hours ago

          Really? At some point syntax became kind of a vague sense of color on top of the data flow, which is ultimately the same in any language. I don't even recall what it means to be proficient in one language versus another—surely most career programmers can ramp up on any given syntax or runtime in a relatively short period of time. The hard part is laying out the data flow.

          Granted, AI can definitely ease that ramp-up time at the cost of lengthening it.

          • kaibee an hour ago

            > At some point syntax became kind of a vague sense of color on top of the data flow

            Thank you for this analogy.

    • dakiol 2 hours ago

      If you remove all the tiny details that you detest because you think you should better spend your time on the “important stuff”, be careful; you may wake up one day and not care enough about anything because you have been discarding stuff bit by bit.

    • andy99 11 hours ago

      I like building stuff - I mean like construction, renovations. I like figuring out how I need to frame something, what order, what lengths and angles to cut. Obviously I like making something useful, but the mechanics are fun too.

    • noisy_boy 9 hours ago

      > Is that the part of programming that you enjoy? Remembering logger vs logging?

      No, but I genuinely like writing informative logs. I have been in production support roles, and boy does the lack of good logging (or barely any logs at all!) suck. I prefer print-style debugging and want my colleagues on the support side to have the same level of convenience.

      Not to mention the advantages of being able to search through past logs for troubleshooting and analysis.

    • MangoToupe 3 hours ago

      Enjoying any part of this seems a little odd to me. The enjoyable part is using the thing you built. Regardless, programming (remembering logger vs logging, syntax, debugging) is certainly the easier end of things.

    • bowsamic 5 hours ago

      It's not about focussing on what you love but about basic competence and being able to deal with simple programming problems

    • morkalork 11 hours ago

      I actually take pride in the logs I write because I write good ones with exactly the necessary context to efficiently isolate and solve problems. I derive a little bit of satisfaction from closing bugs faster than my colleagues who write poor logs.

    • AdieuToLogic 8 hours ago

      >> What you're describing is called: programming.

      > Is that the part of programming that you enjoy? Remembering logger vs logging?

      If a person cannot remember what to use in order to define their desired solution logic (how do I make a log statement again?), then they are unqualified to implement same.

      > But in the end, focus on the parts you love.

      Speaking only for myself, I love working with people who understand what they are doing when they do it.

      • pyridines 8 hours ago

        It's not unreasonable to briefly forget details like that, especially when you're dealing with a multi-language codebase where "how do I make a log statement?" requires a different pattern in each one.

        • AdieuToLogic 8 hours ago

          > It's not unreasonable to briefly forget details like that, especially when you're dealing with a multi-language codebase where "how do I make a log statement?" requires a different pattern in each one.

          You make my point for me.

          When I wrote:

            ... I love working with people who understand what they
            are doing when they do it.
          
          This is not a judgement about coworker ability, skill, or integrity. It is instead a desire to work with people who ensure they have a reasonable understanding of what they are about to introduce into a system. This includes coworkers who reach out to team members in order to achieve said understanding.

  • _alternator_ 11 hours ago

    Yeah, it’s also surprising because the user really shouldn’t be using f-strings for logging, since they get interpolated whether or not the log level is set to INFO. This matters more when the user is writing, say, debug logs that run inside hot loops, which will incur a significant performance penalty by converting lots of data to its string representation.
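
    A minimal sketch of the difference, using only the stdlib logging module (the Expensive class is a made-up stand-in for anything with a costly repr, such as a large DataFrame):

```python
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger(__name__)

calls = {"repr": 0}

class Expensive:
    """Stands in for a large object (say, df.head()) whose repr is costly."""
    def __repr__(self):
        calls["repr"] += 1
        return "<expensive>"

obj = Expensive()

# f-string: interpolation happens eagerly, before logging checks the level,
# so repr() runs even though INFO messages are discarded.
log.info(f"value: {obj}")

# %-style: logging defers formatting until the record is actually emitted,
# so with the level at WARNING the costly repr() never runs.
log.info("value: %s", obj)
```

    This deferral is also why linters flag f-strings in logging calls (pylint's logging-fstring-interpolation, ruff's G004).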

    But sure, vibe away.

    • d0mine 9 hours ago

      f""-strings for logging are an example of "practicality beats purity".

      Yes, f""-strings may be evaluated unnecessarily (perhaps t-strings could solve this). But in practice they are too convenient. Unless a profiler says otherwise, it may be OK to use them in many circumstances.

      • jackblemming 7 hours ago

        Heh, I get to totally dunk on this guy by calling him a vibe coder for not using lazy evaled string interpolation.

        Wait a second... if I do ANY ACTUAL engineering and actually measure the time savings, they're completely negligible, and the "fix" just makes the code harder to read?

        It is complete insanity to me that literally every piece of programming literature over the past sixty years has been drilling in the concept of code readability over unnecessary optimization, and yet I still constantly read completely backwards takes like this.

    • astrange 10 hours ago

      Format strings are very useful, so I'd suggest fixing the language to let you use them. You don't have to live with it interpreting them too early!

      Even better, you should be interpreting them at time of reading the log, not when writing it. Makes them a lot smaller.
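
      That read-time idea can be sketched roughly like this (write_log/read_log and the record format are hypothetical names for illustration; message-template structured-logging libraries work on a similar principle):

```python
import json

# Write side: persist the template and raw arguments, not the rendered text.
# Templates repeat across millions of lines, so they dedupe/compress well.
def write_log(records, template, *args):
    records.append(json.dumps({"t": template, "a": args}))

# Read side: render only when someone actually looks at the log.
def read_log(records):
    for line in records:
        rec = json.loads(line)
        yield rec["t"] % tuple(rec["a"])

records = []
write_log(records, "user %s fetched %d rows", "alice", 1000)

print(list(read_log(records)))  # ['user alice fetched 1000 rows']
```

      A side benefit: since the template survives verbatim, you can filter logs by message "shape" instead of regexing rendered strings.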

      • WD-42 9 hours ago

        The thing is, the logging calls already accept variable arguments that do pretty much what people use f-strings in logging calls for, except better. People see f-strings, they like f-strings, and they end up in logs; that's really all there is to it.

    • 2muchcoffeeman 6 hours ago

      Good news. You can get AI to refactor this sort of stuff away easily.

    • blibble 11 hours ago

      yep, putting user input into the message to be interpolated is asking for trouble

      in C this leads to remote code execution (%n and friends)

      in java (with log4j) this previously led to remote code execution (despite being memory safe)

      why am I not surprised the slop generator suggests it
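
      The Python analogue of the footgun looks like this (CONFIG and the values are made up for illustration):

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)

CONFIG = {"db_password": "hunter2"}  # hypothetical sensitive state

user_input = "%(db_password)s"  # attacker-controlled string

# Unsafe: if user input ever becomes the format string, its directives
# are honored during interpolation (log4j's ${jndi:...} lookups were
# this same idea taken all the way to remote code execution).
leaked = user_input % CONFIG  # "hunter2"

# Safe: user input is only ever an argument, never the template,
# so it is logged as the literal text "%(db_password)s".
log.info("received input: %s", user_input)
```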

  • bigstrat2003 11 hours ago

    One thing that has become abundantly clear from the AI craze is how many people - who do programming for a living - really don't like programming. I don't really understand why they got into the field; to be honest, it seems kind of like someone who doesn't like playing the guitar embarking on a career as a guitarist. But regardless of the reasons they seem to be pretty happy for a chance to not have to program any more.

    • a_bonobo 11 hours ago

      Do you like 'solving problems' or do you like 'getting into the weeds'? Both are valid, and both are common uses of programming.

      When I was younger, I loved 'getting into the weeds'. 'Oh, the audio broke? That gives me a great chance to learn more about ALSA!' Now that I'm older, I don't want to learn more about ALSA; I've seen enough. I'm now more in camp 'solving problems': I want the job done and the task successfully finished to a reasonable level of quality, and I don't care which library or data structure was used to get the job done. (Both camps obviously overlap; many issues require getting into the weeds.)

      In this framework, the promise of AI is great for camp 'solving problems' (yes yes hallucinations etc.), but horrible for camp 'getting into the weeds'. From your framing you sound like you're from camp 'getting into the weeds', and that's fine too. But I can't say camp 'solving problems' doesn't like programming. Lot of carpenters out there who like to build things without caring what their hammer does.

      • Dilettante_ 8 hours ago

        >I'm now more in camp 'solving problems', I want the job done and the task successfully finished to a reasonable level of quality

        To split the definitions one step further: That actually sounds not like you 'enjoy solving problems'(the process), but rather you 'enjoy not having the problem anymore'(the result).

        Meaning you don't like programming for itself(anymore?), but merely see it as a useful tool. Implying that your life would be no less rich if you had a magical button that completely bypassed the activity.

        I don't think someone who would stop doing it if given the chance can be said to "like programming", and certainly not in the way GP means.

      • astrange 10 hours ago

        I spent a very long time getting into the weeds to learn everything about computer architecture, because at the time it seemed like it was the only way to do it and I wanted to have a career. In the meantime social media / cloud hosting / StackOverflow were invented, it became much easier for people to write online, and it turned out I didn't need to do any of that because the actual authors have all explained themselves on it.

        Though, doing this is still the right way to learn how to debug things!

        nb I actually just realized I never understood a specific bit of image processing math after working on ffmpeg for years, asked a random AI, and got a perfectly clear explanation of it.

      • skydhash 8 hours ago

        I like solving problems. But I also want the problem to stay solved. And if I happen to see a common pattern between problems, then I build a solution generator.

        Maybe because I don’t think in terms of code. I just have this mental image that is abstract, but is consistent. Code is just a tool to materialize it, just like words are a tool to tell a story. By the time I’m typing anything, I’m already fully aware of my goals. Designing and writing are two different activities.

        Reading LLM code is jarring because it changes the pattern midway. Like an author smashing the modern world and Middle-earth together. It's like writing an urban fantasy and someone keeps interrupting you with hard science-fiction ideas.

      • paulmooreparks 10 hours ago

        Exactly this. I still "get into the weeds" without AI if I really need to dig into learning something new or if I want to explore some totally new idea (LLMs don't really do "totally new"). If I'm debugging a CRUD app, though... eh, it's sunny outside and I only have a couple more hours of daylight, so, AI it is.

    • latexr 3 hours ago

      > I don't really understand why they got into the field; to be honest

      Money. Startup hype. Thinking they’ll be the next Zuckerberg (as if that’d be a good thing).

    • simonw 9 hours ago

      One thing that has become abundantly clear from the AI craze is how many people who do programming for a living are actively hostile to fascinating new applications of computer science that open up entirely new capabilities and ways of working.

      • probably_wrong 3 hours ago

        I think you're painting with too wide a brush.

        Someone may say, like in this post, "AI makes my work easier", and that may be a valid point.

        Someone else may say "AI has made my work much harder" [1], and that's a valid point too.

        It may very well be that we'll find a place in the middle, but in the meantime it seems disingenuous to me to accept one side without acknowledging that the other is making valid points too.

        [1] https://news.ycombinator.com/item?id=44558665

    • closewith 6 hours ago

      Most software developers work in software because it's well paid and much easier to break into than other similarly paid jobs.

      Most software developers never write software outside of education or employment. That's completely normal.

      Even most recreational programming is to solve a problem. Very few developers write software for the pleasure of it; most would happily use a magical "solve my problem" button instead.

      This is also true for all employers and customers.

      Honestly, it sounds to me that you fundamentally misunderstand the industry in which you work.

      • gonzalohm 16 minutes ago

        Why would you work in a field that you have no interest in? Seems like a good way to get depressed

    • sneak 8 hours ago

      I love programming, but 90% of it is crappy toil due to language or tool design. I especially hate about 90% of the stuff one has to do to work around bad design decisions when writing significant amounts of python or javascript.

      Disliking toil is not the same as disliking programming.

  • phendrenad2 11 hours ago

    Yes but cognitive load is a real thing. Being free to not think about the proper format to log some generic info about the state of the program might seem like a small thing, but remember, that frees up your mind to hold other concerns. See well-trodden research that the human mind can hold roughly three to five meaningful items in working memory at once. When in the flow of programming, you probably have a complicated unconscious process of kicking things out of working memory and re-acquiring them by looking at the code in front of you. I think the author is correctly observing that they are getting the benefit of not having to evict something from their mental cache to remember how logging works for this particular project (especially egregious if you work on 10 codebases and they each use a different logger).

    • fleebee 2 hours ago

      How often do we print debug something that isn't just a single variable? I have defined a macro in my editor that expands into a nicely formatted logger call when I type "dbg" and the variable name. I can do it without thinking or losing focus.

      On the other hand, if I used an LLM to guess what I wanted to get logged (as the author did), I'd have to read and verify the LLM's output. That's cognitive overhead I normally wouldn't have.

      Maybe I'm missing something, but I just don't find myself in situations where I only vaguely know what I want to debug and would benefit from an LLM guessing what that could be.
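
      For the single-variable case, Python itself has a similar no-thought affordance: the f-string `=` specifier (3.8+), which expands to name=value much like a "dbg" editor macro would. A sketch (variable names invented):

```python
# The `=` specifier expands to "expression=value" --
# roughly what a "dbg" editor macro would generate by hand.
response_code = 404
retries = 3
print(f"{response_code=} {retries=}")
# response_code=404 retries=3
```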

  • ghurtado 8 hours ago

    > What you're describing is called: programming.

    And once you have enough experience, you realize that maintaining your focus and managing your cognitive workload are the key levers that affect productivity.

    But, it looks like you are still caught up with iterating over arrays, so this realization might still be a few years away for you.

    • jychang 8 hours ago

      Yeah. This is a really weird complaint to be honest.

      By that standard, python is not real programming because you're not managing your own memory. Is python considered AI now?

  • conradev 11 hours ago

    The author clearly dislikes writing logging code enough to put the work into creating a fine-tuned model for the purpose.

    I thought “making tools to automate work” was one of the key uses of a computer but I might be wrong

  • webstrand 11 hours ago

    I can totally write the logging code myself, but it's tedious formatting the log messages "nicely". In my experience AI will write a nice log message and capture relevant variables automatically, unlike handwritten statements where I inevitably have to make a second pass to include a critical value I missed.

    • skydhash 8 hours ago

      I think we need to abandon this idea of writing code like a scribe copying a book. There’s a plethora of tools ready to help you take advantage of the facts

      - that the code itself is an interlinked structure (LSPs and code navigation),

      - that the syntax is simple and repetitive (snippets and generators),

      - that you are using a very limited set of symbols (grep, find and replace, contextual docs, completion)

      - and that files are tools for organization (emacs and vim buffers, split layout in other editors)

      Your editor should be a canvas for your thinking, not an assembly line workspace where you only type code out.

  • xdfgh1112 11 hours ago

    For me it's the opposite, I know exactly how to write log lines. It's just tedious. AI auto completes pretty much what I would have written.

    • lmm 6 hours ago

      If it's really that tedious and mechanical, there should be a code-level affordance for it (e.g. a macro, or something like https://github.com/lancewalton/treelog). Code is read more than it's written, and code that can be autocompleted isn't worth reading.

      • danielbln 4 hours ago

        That's just a different type of automation. We can argue all day about determinism and repeatability, but at the end of the day we are all in the business of automation. We are arguing about the specifics.

        • lmm 4 hours ago

          > That's just a different type of automation. We can argue all day about determinism and repeatability, but at the end of the day we are all in the business of automation. We are arguing about the specifics.

          Why choose to use something unreliable instead of using or writing something reliable? Isn't reliability the whole point of automation in the first place?

          And I think it's fairly widely accepted that you shouldn't check compiled binaries or generated code into your source control, you should check in the configuration that generates it and work with that. (Of course this presupposes that your generator is reliable)

  • mym1990 7 hours ago

    Abstraction has always been a part of programming. Are you going to deride someone for not remembering how to write a sorting algorithm from scratch when .sort() is available? This is just another instance of that, trivial as it may be. The next abstraction level for programming is just natural language, if you want to try to gate keep that, have fun.

    • skydhash 41 minutes ago

      Natural language to formal language is not a matter of implementation details, it's a matter of ambiguity, where the definition of a term is contextual. Formalism is removing the dependency on context so that the semantics of a term or a proposition is embedded in itself.

      Formalism is so essential that we use it to create spontaneous forms of programming language (what we call pseudocode) to express ideas clearly.

  • chvid 6 hours ago

    The section you refer to is a justification for code completion and possibly providing (visual) feedback on whether various constructs are spelled or used correctly. It is fairly well established that this sort of thing increases programmer productivity (as in writing a Java program using Notepad vs. writing it using IntelliJ).

    In the old days, we used to do this by using static type inference. This is harder to do in dynamic languages (such as Python), so now we try to do it with LLMs.

    It is not obvious to me that LLMs are a better solution; you may be able to do more, but you lose the predictability of the classic approach.

  • bravesoul2 10 hours ago

    Also logging is important! Send structured logs if possible. Make sure structure is consistent between logs. You may have to reach for some abstraction or metaprogramming to do this.

    All logs can be a message plus an object, with no need to format anything.

    That said, AI saves typing time.

  • paulmooreparks 10 hours ago

    I like having muscles. I hate lifting weights. I like being fit. I hate running. I like being able to play guitar and piano. I hate practicing. I like having food in my pantry. I hate grocery shopping. I like having custom software that fits my needs. I hate writing code.

    • bowsamic 5 hours ago

      Do you like doing anything? Sorry but it's just straight up bad if you only like results and hate the process to get there. That is not a fulfilling life

    • therein 10 hours ago

      But this is using a machine to do the lifting for you so you don't develop the muscles. You are actually not strong through technology but left weak and helpless when left on your own.

      It is just a bunch of people that don't take pride in self-sufficiency. It is a muscle that has atrophied for them.

      • Dilettante_ 7 hours ago

        I do not (confidently) know how to make fire from "scratch". I do not know how to butcher or skin an animal. I do not know how to spin cloth, nor how to stitch it into clothing. My "finding the perfect kind of stone for knapping into a handaxe" muscles have fully atrophied.

        We live in a society. That means giving up self-sufficiency in exchange for bigger leverage in our chosen specialisation. I am 110% confident that when electric power became widespread, people were making the exact same argument against using it as you are making now.

      • Disposal8433 8 hours ago

        "I'm a chef, I hate cooking, I buy readymade meals in the supermarket."

        You're right about the pride of writing actually good code. I think a lot about why I'm still writing software, and while I don't have an answer, it feels like the root cause is that LLMs deprive us of thoughts and decisions, our humanity actually.

        I have never felt threatened by an LSP or a text editor. But LLMs remove every joy, and their output is bad or may not be what you wanted. If I hated programming, I would actually buy software, as I don't have such precise needs as to require tools tailored perfectly to them.

        No need to enjoy a good meal, AI will chew food for you and inject it into your bloodstream. No need to look at nature, AI will take pictures and write a PDF report.

        Tools help because they are useful. AI is in a weird position of replacing every job, activity, and feeling. I don't know who enjoys that, but it's very strange. Do they think living in a lounge chair like in the Wall-E spaceship is good?

        As for the article, it's yet another developer not using their tools properly. The free JetBrains code completion is bad, and using f-strings in logs is bad. I would reject that in a merge request, sorry. But thinking too much about it makes me sad about the state of software development, and sad about the pride and motivation of some (if not most) developers nowadays.

  • adamhartenz 10 hours ago

    You don't use auto-complete for for-loops? Wait... You use a compiled language, rather than writing machine code by hand? Some would call THAT programming.

  • CafeRacer 4 hours ago

    Yeah! And actually writing this inefficient code in python? Seriously? Be a man, write assembly.

interroboink 12 hours ago

The thing I like here is that it runs locally. I use Vim keyword completion[1] a lot for next-word completion. It does a broadly similar sort of "look at surrounding code to offer good suggestions" thing (no LLM stuff, of course). It's wrong often, but it's useful enough that it saves me time overall, I feel.

So, this sounds to me like an expanded version of that, more or less.

I think I'd prefer an AI future with lots of little focused models running locally like this rather than the "über models in the cloud" approach. Or at least having such options is nice.

[1] https://vim.fandom.com/wiki/Any_word_completion

There's also omni-completion, a bit more advanced: https://vim.fandom.com/wiki/Omni_completion

andrelaszlo 2 hours ago

I would have flagged that they're logging their Redis URL, if I was reviewing this. Most of the time this includes credentials.

Normally I think it's a bit rude to criticize the code of blog posts, but I thought it was relevant here for these reasons:

"I often don’t even remove when I’m done debugging because they’re now valuable in prod" - think about where your production credentials end up. Most of the time, logging them won't hurt, just like keeping your password on a post-it doesn't hurt most of the time.

The argument about letting an AI reduce the mental overhead is compelling, but this shows one of the (often-mentioned) risks: you didn't write it, so you didn't consider the implications.

Or maybe the author did consider it, and has a lot of good arguments for why logging it is perfectly safe. I often get pushback from other devs about stuff like this, for example:

- We're the only ones with access to the logs (still, no reason to store credentials in the logs)

- The Redis URL only has an IP, no credentials. (will we remember to update this log line when the settings.redis_url changes?)

- We only log warnings or higher in production (same argument as above)

Maybe I should stop worrying and learn to love AI? Human devs do the same thing, after all?
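
For what it's worth, scrubbing the credential before it reaches the logs is cheap with the standard library. A sketch (the helper name and URL are invented):

```python
from urllib.parse import urlparse

def redact_url(url: str) -> str:
    """Replace any password in a URL with '***' before it reaches the logs."""
    parts = urlparse(url)
    if parts.password is None:
        return url
    netloc = f"{parts.username or ''}:***@{parts.hostname}"
    if parts.port:
        netloc += f":{parts.port}"
    return parts._replace(netloc=netloc).geturl()

print(redact_url("redis://user:s3cret@cache.internal:6379/0"))
# redis://user:***@cache.internal:6379/0
```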

voidUpdate 5 hours ago

Visual Studio added the single-line autocomplete thing a while back and it lasted about 2 days before I turned it off because it was frequently wrong and just got in the way. I often use the double-tab template insert feature, but that would often accept the incorrect autocomplete instead, so I had to delete it and try again.

  • specproc 4 hours ago

    Totally agree, this behaviour is maddening. I set a hotkey to toggle completions, made it useful again.

cosmicgadget 12 hours ago

Automates a tedious, time-consuming task. Easy to catch and correct failures. Love it. My only concern (other than maybe verbosity) is that my log-writing moments are opportunities to briefly reflect on what I have written, and maybe catch problems early.

  • specproc 4 hours ago

    Yeah, the areas AI has provided real value to me are the simple things: logging, docstrings and cli stuff being the best examples.

    I can throw up a basic sketch, focusing on the code, and get it to add the quality stuff after.

physicles 7 hours ago

Tab autocomplete in Cursor is surprisingly good at guessing what I want to log when I type slog.Info. I’m enjoying that time save.

I agree with K&R about debuggers: when writing services that you need to debug in prod, you live and die by your logs. That said, sometimes an interactive debugger is faster than adding logs, like when you’re not sure about the precise call stack that leads to a given point, or there’s a bunch of stuff you need to keep track of and adding logs for all of it is tedious. But pretty quickly you can hit a point where you’re wasting more time in the debugger than it would’ve taken to add those logs…

  • nxpnsv 6 hours ago

    It does well, but also sometimes gets it completely wrong while looking plausible. Together, we managed to make several quite intricate bugs. About two months in, I actually don't think my coding speed increased much from using Cursor - and I just can't take "Perfect, ..." "You're absolutely right, ..." - even with custom rules to suppress it, it just can't help itself.

Animats 7 hours ago

Usually, people use AI for reading logs, looking for interesting events or patterns. That goes way back. Microsoft was using it to classify and group similar crash dumps back in the 1990s.

Rust lets you write a default debug print for each struct, and will generate one if asked. So you don't have to write out all the fields yourself. That's enough to do the annoying part of the job.
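
The closest Python analogue is probably `@dataclass`, which likewise generates the field-by-field repr so you don't write the fields out yourself. A sketch (the class is invented):

```python
from dataclasses import dataclass

@dataclass
class Job:
    # A __repr__ listing every field is generated automatically,
    # much like Rust's #[derive(Debug)].
    name: str
    retries: int

print(Job("ingest", 3))
# Job(name='ingest', retries=3)
```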

  • djmips 6 hours ago

    Microsoft was using 'AI' in the 90s?

    • closewith 5 hours ago

      Expert systems and decision trees were AI in the '90s.

      Remember, it's only AI until it works.

WhyNotHugo 2 hours ago

You should not use f-strings for logging, because that performs all the formatting _before_ the logging module determines if the log needs to be printed. You want the formatting overhead to happen only for logs that will get printed. You can’t achieve that with f-strings, just use normal logging calls.

It kinda speaks badly of the auto-completion that it suggests such an anti-pattern.
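
The cost difference is easy to demonstrate with a value whose repr is expensive. A sketch (the `Expensive` class is invented for illustration):

```python
import logging

logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger("demo")

class Expensive:
    """Counts how many times its repr is computed."""
    count = 0
    def __repr__(self):
        Expensive.count += 1
        return "<expensive>"

obj = Expensive()

# %-style: formatting is deferred, so the filtered-out DEBUG call never touches obj
logger.debug("state: %r", obj)
print(Expensive.count)  # 0 - repr was never computed

# f-string: formatting happens eagerly, before the level check
logger.debug(f"state: {obj!r}")
print(Expensive.count)  # 1 - repr ran even though nothing was logged
```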

andutu 7 hours ago

On the one hand writing logs can be tedious. Essentially logs are breadcrumbs signifying when significant state changes have logically taken place. From what I've seen, logs are added after every few lines and I always fantasized about creating a language where logging is "automatic".

On the other hand, writing logs is a skill worth mastering. Write too many and when a service crashes you have to sift through lots of noise and potentially miss the signal. I once went down a rabbit hole trying to root-cause an issue in a Gunicorn application that had custom health check logic in it. An admin worker would read health check state from a file that worker threads wrote to. Sometimes a race condition would occur, in which case an error log would be emitted. The thing was, this error wasn't fatal and was a red herring for why the service actually crashed. If instead it had been logged at the debug level, a lot of time would have been saved.

Fine let LLMs write code but take logging seriously!!!

Guid_NewGuid 9 hours ago

The use of AI for this seems somewhat overkill, one could just use a language and environment that is aware of whether `logger` or `logging` is available, and what variables are in scope. We have tools that allow us to treat programming as more than guessing the next character at random. Rather they allow you to offload tracking this context to the machine in a reliable way, typed languages and auto complete.

I do wonder if this is part of the divide in how useful one finds LLMs currently. If you're already using languages which can tell you what expressions are syntactically valid while you type, rather than blowing up at runtime, the idea that a computer can take on some of the cognitive overhead is less novel.

  • gbalduzzi 6 hours ago

    I think we are overcomplicating this a bit.

    Typing "log" and tab to accept the auto complete is faster than writing the whole log yourself.

    When you need to add a bunch of log statements, this tool makes the activity faster and less tedious.

    Is it a technological breakthrough? No

    Does it save hours of developer time each day? No

    Is it nice to have? Hell yeah

i_niks_86 3 hours ago

Could full-line code completion and smarter log inference change how developers approach debugging and monitoring altogether? If so, how?

arscan 10 hours ago

An aside, but it’s quite unfortunate that logger.info and logging.info are automatically linkified because of the .info TLD in this case. I don’t recommend clicking on those links.

MagicMoonlight 5 hours ago

I thought this was going to be about an AI which actually writes useful logs for your program, e.g. if it crashes, it explains in normal language what happened.

Spivak 10 hours ago

Python programmers, don't use f-strings for your logs.

    # yes
    logger.info("Super log %s", var)
    # no
    logger.info(f"Super log {var}")
I know, it's not as nice looking. But the advantage is that the logging system knows that regardless of what values var takes, it's the same log. This is used by Sentry to aggregate the same logs together. Also, if the variable being logged happens to contain a %s itself, the f-string version can trip up anything downstream that %-formats the message. Speed doesn't matter here because f-strings are so fast, but the % method is also lazy and doesn't interpolate/format if the record isn't going to be logged. Maybe in the future we'll get to use template strings for this.

  • bbkane 10 hours ago

    Thanks for the reasons! I've been using f-strings because they're easy to keep track of, but you make really good arguments (I hadn't thought of Sentry/similar systems using %s to aggregate logs)

    • WD-42 9 hours ago

      They aren't just arguments, they're facts. f-strings in logs, especially in a hot code path, can be really bad for performance.

  • nxpnsv 6 hours ago

    Well if you use loguru (and really you should, it is awesome) you would use, `logger.info("Super log {var}", var=var)` (possibly without the keys). It is also lazy and works better for structured logs.

spencer-p 10 hours ago

I appreciate how the author highlighted the python domain-specific tricks like dropping imports and rewriting tabs/spaces. It's good to be reminded that even with "large" language models you can get better results with quality over quantity.

  • lblume 5 hours ago

    From how I read it dropping imports happens for every language.

fudged71 9 hours ago

Tangential, but making the logs understandable by the LLM is also very useful

BiteCode_dev an hour ago

I have a /addlogs claude command just for that. Logging is mostly boilerplate; I'm glad I can outsource it.

spapas82 8 hours ago

This is interesting, I like the way JetBrains is using local models for auto completion.

Do you know if there's a similar solution for vscode?

neuroelectron 6 hours ago

An example of the output would be very helpful

suriya-ganesh 9 hours ago

This tracks with how I've been doing my debug logs.

I ask the model to create tons of logs for a specific function and make sure there's an emoji in the beginning to make it unique (I know HN hates emojis).

Best of all, I can just say "delete all logs with emojis" and my patch is ready. Magical usage of LLMs.

pandemic_region 5 hours ago

> I’ve been a happy JetBrains customer for a long time now, and it’s because they ship features like this.

The Goland and Pycharm experience must be radically different from the Java experience then.

nvader 11 hours ago

I'd like to consult the HN hive mind on a tangential point.

Does anyone else here dislike loguru on appearance? I don't have a well-articulated argument for why I don't like it, but it subconsciously feels like a tool that is not sharp enough.

Was looking for evidence, either way, honestly. The author is using loguru here and I've run into it for a number of production deployments.

Anyone have experiences to share?

  • nxpnsv 6 hours ago

    It looks great out of the box and has fancy colors (if you want them). It is easier to configure than logging and you don't need a lot of get_logger stuff. It does log rotation, and it has lazy {}-formatted messages: `logger.info("x={x:.2f}", x=x)`. It is better for threading/multiprocessing, handles exceptions better. And it is fast. I like it on appearance.

swiftcoder 6 hours ago

Programmer apparently never learned about "import ... as" syntax, adopts fuzzy LLM autocomplete instead. News at 11

  • swiftcoder 4 hours ago

    And look... I do get it. I have to switch between rust/typescript/python on the regular, and 3x sets of quirky syntax is a lot to keep in working memory.

    I'd rather simplify our software stacks than accept that an autocomplete that lies to me is the only tractable solution, though

greatgib 11 hours ago

Loguru sucks very badly! I would advise you not to use it. It's like trying not to do what everyone else does, but still doing it in a terribly wrong way...

For example, the backtraces try to look cooler for display but are awful, and totally inappropriate to pipe into monitoring systems like Sentry.

In the same way, as you can see in the article, it's the only logging library that doesn't accept the standard %-string syntax, using instead its own shitty syntax based on format().

  • nxpnsv 6 hours ago

    The {} syntax is better, and more similar to f-strings. And you don't have to use that backtrace. I use it to send json for some batch jobs, or have it write to google cloud logger for gcp jobs. Are you holding it wrong?

    • greatgib 5 hours ago

      It's totally subjective that it is better. It's not like it can "execute code" inline like an f-string. Just the replacement character is the same; end of the comparison. And anyway, the point is more that it totally breaks the general convention for no valid reason. It's not even optional.

      For gcp jobs you can just output the structured log in the standard output so I don't see the purpose here. There are other structured logging libraries if this is what you need.

      For the backtrace, yes, because logging.exception will be broken by default, along with all the tools that process the traceback automatically, like Sentry. And again, for no valid reason other than "look, my traceback looks so much cooler on my screen because I added crap and new lines when printing it"...

      • nxpnsv 4 hours ago

        I don’t care too much about % vs {}. I have stuff where I need nice logs locally and structured logs in batch, and for me loguru made that easy. I don’t think it’s a terrible thing at all, but you do you :)

TZubiri 9 hours ago

I once wrote a simple Python program that logged all execution, self-inspected its code, and printed code and variables as output.

Then I learned why programs don't do that by default.

Not to be snarky, but when you become more experienced you will figure out that logging is just writing to permanent storage, one of the most basic building blocks of programming. You don't need a dependency for that; writing to disk should be as natural as breathing air. You can do print("var", var). That's it.

If you are really anal you can force writing to a file with a print argument or just a posix open and write call. No magic, no remembering, no blogpost. Just done, and next.
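
For completeness, the print-argument version alluded to above, as a sketch (the filename is invented):

```python
# print() writes anywhere a file object points; no logging
# dependency is needed for this style of breadcrumb.
var = 42
with open("debug.log", "a") as f:
    print("var", var, file=f, flush=True)
```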

cratermoon 11 hours ago

AOP solved this 30 years ago, though.

whyenot 8 hours ago

> My favorite use-case for AI is writing logs

You mean like this?

    [2025-07-17 18:05] Pallet #3027 stacked: Coast‐live oak, 16" splits, 0.22 cord.
    [2025-07-17 18:18] Moisture check → 14 % (prime burning condition).
    [2025-07-17 18:34] Special request tagged: “Larina—aromatic madrone, please!”
    [2025-07-17 18:59] Squirrel incident logged: one (1) cheeky Sciurus griseus absconded with wedge.

...or something more like this?

    Base-10 (log10) handy values
    --------------------------------
    x        log10(x)
    ------------------
    2        0.3010
    e        0.4343
    10       1.0000
    42       1.6232
    1000     3.0000
    1e6      6.0000