> It’s like my whole computer is a toddler screaming “LET ME DO IT!” every time I try to create something.
Every autocorrect or auto-suggestion ever has felt like this to me, but the volume has been turned up to 11. The otherwise drab Adobe Reader is covered with colorful sparkly buttons and popups suggesting I need not even read the document because it can give me "insights." First, no, you may not read my proprietary document, nor do I suspect most people using this particular software - I only have it for digital signatures - have permission to share IP with a third party. But mostly: it can sometimes be a useful tool, and the fact that everyone is shoving it in my face reeks of desperation.
Thing is, we've been here before in a much more limited way; people _hated_ it when Microsoft's demonic paperclip did this in Office. "It looks like you're writing a letter". _Hated_ it.
It is unclear what the industry thinks has changed, that people will now welcome "It looks like you're [whatever]".
This forum (HN) attracts a certain population that wants to do things, to understand, to share relatively well-founded opinions and have a discussion.
But look around, look at the new hires in the other departments. And by new I mean young, in their 20s. A lot of them welcome this kind of thing; they evaluate by popularity and likes. The marketing behind the AI bubble knows this, and so it pushes for it. Making it popular is more important than making it useful, because there is a tipping point where it's popular enough that we capitulate.
My sister is ripping her hair out dealing with the interns at her job being extremely tech-illiterate with anything that's not app-ified. Many don't know what files are, and need to be run through computer basics because everything they have used has anything technical abstracted away. The post-iPhone generation just wants their hand held, and anything technical scares them. Microsoft Bob was just too far ahead of the curve.
Did they, though? Polling fairly consistently shows that people don’t _like_ this stuff, and there’s some evidence that the more familiar with it they become the less they like it. I think Microsoft et al were betting on people liking it (that was certainly their thinking with Clippy, too) but that doesn’t seem to be working out for them.
Not only have the people changed but it is the belief of the elites at the top that humanity is entering a new era of hack-ability. They want to use these AI systems to rewrite humanity into their vision of the future.
Yuval Noah Harari talking about how the new "gods" are the data-centers and how free will is dead in the age of AI.
https://youtu.be/QuL3wlodJC8
Well, for the purpose of this conversation the people at the top of the food chain believe free will exists. They also believe that they can eliminate it with AI and Biomedical manipulation.
The goal with most of these AI features is not to solve a real problem users are having, it's to add a feature that uses AI. This will not change because it's not wrong of the individuals making the decision. The project manager gets to say he shipped a cutting-edge AI project. The developers all get to put experience working with very hireable technologies at a serious company on their resume. There will be no adverse impact to the bottom line, because the cost to develop the shitty AI feature is a drop in the bucket, and the cost to create a competing product that accomplishes the core thing users are using that product for but without feature bloat would be very high, and probably unsuccessful since "less feature bloat" has never been sufficient to break the static friction threshold for users to switch.
So it won't change, because there is no lesson to learn. No individual involved acted irrationally.
It's a design that's in companies' best interests. You can have a computer that's a "friend." One that you trust but ultimately has a mind of its own. This contrasts with a computer that's merely a tool, one that serves you exclusively at your pleasure and has zero agency of its own.
Which approach gives companies more control over users? Which one allows companies to sell that access to the highest bidder?
Based on the experience of 20 years ago, though, users are _extremely_ turned off by it. There's little reason to think this has changed (if anything it is likely more pronounced because Clippy came in kinda without baggage, whereas LLMs have a lot of baggage and most of it ain't great).
> It's a design that's in companies' best interests.
I really don't think it is. Clippy was reputationally damaging to Microsoft and they had to get rid of it. There's little reason to think this will be different.
Modern Big Tech doesn't particularly care what users think. They know they have network effects on their side and that switching costs are high. So what if it's "reputationally damaging?" What are users going to do? They're just resources to be exploited. Microsoft, Google, and their ilk can treat users with contempt if it means more control and more shareholder value.
Third option: the computer is your enemy, which will follow any sufficiently clever adversary’s orders.
Thinking of a computer as a tool seems reasonable, but thinking of your computer as your friend is clownish (which, I think you agree with based on your last comment).
Slightly offtopic, but I have a friend with synesthesia who sees inanimate objects like people, and they call their computer "macbook friend" (since it's a MacBook Air).
Clippy (and his predecessors; he wasn't the first avatar for the feature) might not have been so bad, but marketing got hold of it and decided it didn't pop up often enough for them to really make a big thing of it, so it was tuned up to an irritating level.
> It is unclear what the industry thinks has changed
The demographics of computer (and other device) use have changed massively since the late 90s, and the suggestion engines are much more powerful.
I still want it all to take a long walk off a short pier, but a lot of people seem happy with it bothering them.
I remember when software would ask you on first start what your level of experience was. "Novice, Intermediate, Expert" and would tune the UI to respect that.
Sometimes different modes like that can be more hassle than they are worth, from the dev point of view. You can end up with many more paths to test in order to try to make sure your product is bug-free.
If the automation is much better at the task than I am, then I am happy to delegate the responsibility to it: it's a matter of accuracy. Clippy kind of sucked even when he was right about what I was trying to do. For many things, the LLMs are getting good enough to outperform me.
The customer base for computing has expanded probably three- or four-fold or more from those Windows XP days in the US. Maybe for the subset of the population that was word processing back then it was annoying. But now we are looking at a different pie entirely, where that subset of annoyed power users is but a tiny sliver. There are people today who have no experience even with a desktop OS.
This wasn't the dark ages; in highly developed countries the "computer on every desk" thing had just about come true. I doubt there are that many more regular word-processor users now than in the late Clippy era, at least in the developed world.
Thing is, Gmail's been doing this ~forever with quick replies to emails, now it's just doing longer replies instead of "that's great, thanks" level of replies.
But Clippy didn't write the letter for me. If I can be lazy and AI formats what I'm communicating in a way that is accessible to other people, then why should I care?
After a recent Show HN, I got an email from someone saying that they'd set up a page for my 'product' on their product showcase startup site. I followed the link and saw my open-source project pitched as ChatGPT slop. It felt like a violation because it wasn't just an aggregated link, but a rewrite of my readme with an associated 'pitch'.
I recommend reporting this to dang at hn@ycombinator.com. I imagine that he'd be interested in someone crawling HN in order to send automated lead generation spam.
I don't think dang can do anything about it, I'm sure HN gets scraped all the time. I routinely get spam from cryptocurrency startups to (obfuscated) email addresses I have posted on HN years ago on "Who Wants to be Hired" threads, and from my commit messages on github.
Github is definitely quite the source of spam. I ended up realizing that when I was tired of having two .gitconfigs with two email addresses (work/personal) and switched to just putting my @users.noreply.github.com in there. No more spam.
(Plus I get to show off my 4 digit Github userid. I was the 2367th person to sign up! ;)
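For anyone wanting to do the same, the change is a one-liner (substitute your own GitHub username; the noreply address keeps your real email out of your commits): `git config --global user.email "username@users.noreply.github.com"`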
Neat. I was pretty late to the party and have a userid halfway into the 6 digits, but somehow I got extremely lucky and landed a two-character username (my initials).
I get a ton of emails from people on HN and I dunno, it's really my fault for putting my email in my profile. I don't blame HN for this; I don't think it's particularly supportive of this kind of "abuse".
> no, you may not read my proprietary document, nor do I suspect most people using this particular software - I only have it for digital signatures - have permission to share IP with a third party.
This is a massive liability that almost everybody seems to be ignoring. My employer has a ban on using AI on IP until this is properly resolved, because we actually care about it leaking.
Maybe an Information Commissioner will get round to issuing a directive some time in the mid-2030s about how none of this complies with GDPR.
> My employer has a ban on using AI on IP until this is properly resolved, because we actually care about it leaking.
Yet I can almost guarantee you that someone has put something they shouldn't through ChatGPT, because they either feel like it's a dumb rule that shouldn't apply to them, or they were in a hurry, and what are the odds of getting caught?
I think in general, no major liability issue will come up:
- if everyone is doing it, you can't really fault anyone
- on some level we are, or will be, kinda dependent on that AI and opting out will probably be made unpleasant via dark patterns as usual
- no pushback to every piece of software, including at the operating system level, slurping all the keystrokes and data, let alone the data that's already in the cloud
- big tech knows everything about us but to my surprise no major public leak has happened, i.e. one where you really can see your neighbor's private data without buying leaked data from someone on the dark web or wherever
- things are moving too fast, and you don't know if you can afford to have your programmers not use tomorrow's AI, for example, so your "bans" will have to be soft, etc.; this limits the potential pushback and outrage
Open source has been looking better and better lately because it's not in a mad rush to bolt "AI" features onto it (an LLM will do something) and then shove a huge amount of interface in your face to try to get you to use it.
On some level it's enormously baffling that this was the thing they decided they needed to do... Conversely, Adobe Reader on my phone won't shut up about liquid mode either (which uploads to Adobe servers), and Microsoft and Google's solution to "people don't want to use our AI assistants" was to ensure they literally can't be disabled or removed.
I will simply point you to the iTerm2 AI kerfuffle (https://news.ycombinator.com/item?id=40458135) as proof that some in open source _are_ in a mad rush to bolt on completely unnecessary "features".
It was a bad choice that never should have been implemented as "enabled but not configured", and I have moved away from iTerm2 as a result. I am sure that others have as well. (The grudging move to make it a separately downloadable plugin was good, but too late IMO.)
One of the rare times when the slow pace of open source innovation is actually a benefit, because all innovation that's occurring is making things worse.
Obviously the AI version is bland and terrible, but arguably more importantly it has also completely changed the meaning of the message. The AI version:
- apologizes
- implies the recipient was "promised" this email as a "response" to something
- blames a hectic schedule
- invites questions
None of this was in or was even implied in the original. This is not a "polished" version, it's just a straight-up different email. I thought that style transfer while maintaining meaning was one of the few things LLMs can be good at, but this example fails even that low bar.
A lot of people who want to replace most human interactions with LLMs assume that there is some objective set of cultural values true in all contexts, and that it is good and easy to encode these as axioms into an AI.
Yes. As an Old now (GenX), I feel like moving all interactions to text and now having AI as a man in the middle is just reinventing ways to get in a situation where a decade down the line you reconnect with someone who used to be a friend and both discover "Hey, that wasn't what I meant at all!"
As ever, T.S. Eliot was right: "It is impossible to say just what I mean!"
And those objective set of cultural values are, apparently... a sort of parody of 90s corporate culture, a sort of polite version of Michael Scott. Like, no-one ever _actually_ wrote like LLMs tend to write; it reads as a parody of a now slightly obsolete corporate-speak.
And those ideas seem far more in line with millennial Silicon Valley culture. It's weird when they expect Germans to fake that sort of overly formal, overly cheery tone. People just don't talk like that.
“OK Fine. But could you at least yell at me in corp speak?”
It's no surprise LLMs are using corp speak and vapid marketing prose as a template. There is so much of it out there.
This is from that Autodesk post last week where they admitted their mistake and... nope, it's corp speak:
“We are excited to share some important updates regarding Archiving and our Idea Boards and Forums that aim to enhance your experience and ensure valuable content remains accessible. Please read the details below to understand how these changes might impact you.”
Barf. But to an LLM this looks like a human communicating in a meaningful way.
> Look, we need to align on language here. If you’re not speaking in scalable, results-driven terminology, you’re slowing down the team. We don’t “talk about things”—we sync and strategize. We don’t “try something new”—we leverage data-driven insights to drive innovation.
There is no way in hell anyone who knows me would get that email and not think I’d been abducted
This person cares about not putting up a fake identity. That's pretty cool, but social media has exposed that a large number of people are perfectly fine presenting an illusion. People will have no shame passing off well written things as an output of their talent and hard work. Digital makeup has no bounds.
If you care about putting up a fake identity, this is still bad. Social media is all about being distinct and grabbing attention. Getting samified into a bland, featureless identity isn't the same as carefully crafting a persona to maximize clicks.
> People will have no shame passing off well written things as an output of their talent and hard work.
Sometimes I don't want to waste my time crafting a professional e-mail to a bunch of jerks full of themselves. Maybe I want to write it as it comes off my brain, and let my digital scribe reformulate it so that the people reading it feel respected/validated/flattered. Am I putting up a fake identity then? Am I presenting an illusion of professionalism? Maybe writing "Best regards" instead of "Bye" is the facade of professionalism in the first place.
I mean, I hear that. I was asked to be "nicer" in emails once, and when pressed for specific changes, was finally asked to occasionally say "Thanks!" as my sign-off instead of "Thanks,".
The "bunch of jerks full of themselves" likely aren't reading the emails now; we're burning immense amounts of energy for your politeness to be generated, and distilled out at the other end into a no-nonsense summary missing all the niceties another AI just added.
It's obviously a personal thing, but I even feel a little guilty clicking the autosuggested "thanks" when responding to a text. Everyone has the threshold they're comfortable with.
"Best Regards" vs "Bye" is one thing, but unless you're the owner of the company, sending a client "fuck you, pay me" just isn't professional and is probably going to get you fired.
With the normalization of a default workflow of chucking all comms through an LLM filter settling in these days, I don't think it's even people trying to pass off illusions as their own persona. All it takes is a copy-paste and hitting the Make-Me-Some-Text button. I'm sure the responses would be frustratingly amusing if you were to press them and call them out on it (including trying to pass off the illusion).
Many people didn't think about what they are trying to convey (or self-analysed how they present themselves) when drafting correspondence in the past; now, many people think just as not-hard and often continue, like before, to neglect to meaningfully proofread whatever they had the LLMs generate for them before hitting Send.
Of course, I don't like it. But in some ways, it's just not a whole lot different from what it was before in that you can often still tell apart the people who care to be articulate from those who don't. Though, I feel bad for people disproportionately waylaid by the new paradigm like the bug/security responders on the curl project.
At a high level I see convergence of styles, topics, and behaviors to a generic form, both in "AI" and social media. Which to me suggests that the "AI" solutions are doing exactly what we would do ourselves, just faster.
Why such a hostile response to someone who cares about presenting themselves in a way they see fit? Your image is very important and it's logical that they want to control theirs.
The whole AI discussion reminds me of David Graeber's "bullshit jobs" book. If the content doesn't matter, why not have an AI generate something polite and meaningless? Why not save effort managing your inbox full of meaningless emails by having the AI summarize them? It might lose some of the details, but they didn't matter. And so more and more of the white collar world gets replaced by AI .. until it actually disrupts away all the bullshit jobs entirely.
1. Born into plague
2. Never learns maths or writing
3. Nor spelling
4. Half the time your life ends before childhood does
5. Nothing happens in your life
There are alternate explanations here. Men do not put as much effort into their presentation as women do. Men are putting out profiles that show themselves as anti-woman, and that's unattractive. Men absolutely should be pickier - honestly, they would do better dating only people compatible with them, but instead they try to cast the widest net rather than finding people they could vibe with.
Men in the dating pool have become toxic, and women are doing their best to weed those out. Don't be toxic. Don't endorse people who are looking to make women's lives worse. Put effort into your presentation in your profile pictures - have a woman friend take them. Put effort into your profile to show that you are not someone who would endanger women - have other women in your life help you present your profile. (That probably means don't have a fishing photo, even if you enjoy fishing!)
Much of this is predicated on having women in your life that you trust and that trust you, and who want to see you happy. This means listening to them and trusting them.
And then, you might be a little pickier in your profile too.
No matter how much polishing a man does to his appearance/profile, the biological reality is that women are sought after and men have to compete for their attention. There's some statistic along the lines of "the average female has about as much 'pull' as a male A-list celebrity."
This means that women will receive far more attention from the opposite sex, and therefore have an "easier" time finding potential partners. This problem is just exacerbated by dating apps and social media making women more aware of their position and options - 100 years ago women could barely date outside their village. Nowadays they receive 100s of messages from men around the world competing for her attention. The opposite happens to men, they become more aware of how "undesired" they are so they start casting a wider net since no fish are caught in their smaller nets.
Blaming the men here just reeks of those toxic standards - since again it is on the men to improve. Not the women who should continue being hyper selective (most attractive, wealthy, etc.).
(Note I'm not blaming any side just showing the reality men face - really this is a problem both sexes need to tackle)
"easier" is in quotes because I appreciate women have to filter out abusers and creeps. But from the male perspective they'd kill to even have a chance with an abuser or creep.
Is it so hard to believe that going onto a digital marketplace with an order of magnitude more humans than you would ever physically deal with is probably what's leading to a type of thalassophobia (the fear of deep or vast bodies of water)?
Life is simple. He or she lived in the town, or your school, or was someone you pass by on the way to work daily. It's not some complicated digital dance where the universe presents to you all possible mates.
Date the person you wouldn't. Give it a chance, because you just don't know how this love stuff happens.
Completely agree - on top of the thalassophobia there's also decision paralysis, and I'm not sure what you call it, but: when you reject the good in hopes of something perfect down the line.
Safety is super important and I share your concern. I think online dating is similar to the internet. When the Internet started, there was a niche group that were really into it (early adopters). This was also true for dating apps in their infancy. It was safer simply because the pool was self selected (you had to be open minded to even try it).
Then the internet became for everyone and so did online dating. When there are that many people involved, things get more unsafe. It's not the same pool of early adopters on the same wavelength. It's everyone now, and everyone includes every wavelength.
> It's "do you make their life better than being alone?"
This is a really lovely sentiment and I wish more people approached it like this. I've never set up a dating profile and I don't plan to. But hiding stuff like fish pictures seems a bit over the top, surely if fishing is one of your passions you'd want to end up with someone who at least doesn't judge you for it.
Also keep in mind this approach of making sure your dating profile is "optimised" only works until a large enough percentage of men's profiles are "optimised" - then it's back to square one to figure out how to optimise further to put yourself at the front of the queue again. We've seen what the SEO arms race has done to search engines; do we really want to do the same thing to dating?
The issue is that you rarely look your best in those pictures, and unless you are explicitly looking to date someone who loves to fish, you aren't showing them how they would integrate into your life.
In my experience women are not statistically hyper selective. The less selective women are just out of the dating pool faster than the hyper selective ones. A lot of the women I know are happy to marry a man who grooms himself, respects basic boundaries, and has income. A lot of religious women in particular are socially pressured to drop the "respects boundaries" category also.
I think this is mainly a problem with dating apps - I have seen what you're talking about myself when the pair meets "naturally" or are set up by mutual friends.
People are far more forgiving in real life than they are online.
But part of the problem here is that the dating apps are like junk food: easy, satisfying, but ultimately unhealthy. And unfortunately because of this ease a lot of people reach for them instead of doing the more difficult leg work.
But I can also see it from the female perspective: if almost everyone I swipe right on matches with me, I'd start becoming WAY more selective about who I swipe right on. This leads to fewer matches, but now the guys are also getting even fewer matches, and they can't so easily fix it. The only remedy for the males is to start swiping right on anything they're not put off by. And again, the men doing this exacerbates the problem - a copulatory ouroboros, if you will.
Again you're coming at this with such intense anti-male rhetoric - take a step back and look at the problems beyond your own.
Yes I completely agree women DO need to be more careful as they are far more vulnerable and men far more likely to take advantage.
HOWEVER, playing off bad experiences for men as "a bad couple of hours" is just disingenuous. Women can absolutely ruin a man's life and reputation in those "couple of hours" - even worse if they get married and she decides to take him to the cleaners.
Look, I'm not coming at this as some anti-women basher - I see a societal issue that is hurting our men, women, and future. If we want to actually solve that problem the way forward is not increasing the hatred towards men and isolationism of women. We need to come together and not push each other further apart.
> A bad experience for you is a bad couple of hours and a few hundred hours. Ask a woman, what is a bad experience for them?
Playing the situation as though it's 0% risk for men and 100% risk for women - making it seem as though only men can be harmful and that they are by nature dangerous. Making it sound like no male has ever had a bad experience with women and the onus is completely on the males to fix this situation.
Meanwhile the only men who will listen to this advice are the ones who are already "safe" for women.
I probably shouldn't have used the word "intense", and maybe I'm reading into it too deeply.
The female perspective is slightly different here. Like I said, the less selective women have already been out of the dating pool a long time. Additionally, women on dating apps experience additional pressure to be selective because men who are dangerous/abusive/predators stay in the dating pool longer. This is explicitly unfair to normal men, but normal men are also out of the dating pool faster on a statistical level. This means that a dating pool selects for more and more picky women, and more and more dangerous men thereby validating the selectivity of the existing women.
[This is not to say that there are not dangerous/abusive/predatory women. There certainly are. But one of the greatest causes of death of pregnant women is the father of the child-to-be murdering her, and one of the greatest causes of death of recently single women is their now-ex murdering her. There is no similar reciprocation i.e. one of the leading causes of death of fathers-to-be is not the mother-to-be murdering him. Maybe financial ruin in child support, but explicitly not death.]
> men who are dangerous/abusive/predators stay in the dating pool longer
Not only this but they can use these apps to find their victims faster and easier than ever before.
> This is explicitly unfair to normal men
I think it's also unfair to women that they have to pick up the extra work that was done by the "community" before. And a loss for social cohesion too.
I am wondering though how someone who just ended a relationship could deal with this scenario? As a male you arguably look like one of the abusers because you're older and looking for a partner and as a female you're seen as less attractive. At the same time you're probably also more picky because of the break-up. Seems like they're the most shit-out-of-luck here regardless of what's between their legs.
I also wonder if this will push us towards being with our "high-school sweethearts" for longer than we would otherwise?
> Maybe financial ruin in child support, but explicitly not death.
Yeah absolutely, I'm personally not aware of the exact numbers for this but I wouldn't be shocked if you're right. Also I'd include things like false accusations, or getting her brothers/father to "take care of" the ex. Also we need to account for things like baby entrapment.
Also men tend to express their aggression physically whereas women tend to express it through reputation destruction. This also partially explains where we are today: women are afraid of male physical repercussions, men are afraid of female societal repercussions. Interestingly, it also mimics a lot of the rhetoric you see about "all males being bad" since it's a form of reputation destruction - whereas the male "comebacks" tend to focus on physical acts of violence that they "could do but won't".
> I think it's also unfair to women that they have to pick up the extra work that was done by the "community" before. And a loss for social cohesion too.
I would argue the community explicitly did not do the work of ousting predators. We are only in this generation or so no longer allowed to rape women if we're married to them. It is still legal for adults to marry children, not in an 18-to-17 way but a 38-to-15 way. Child marriages are overwhelmingly older-male-younger-female.
As for being with high-school sweethearts, that imo is also less and less likely, because more adults are forced to travel for employment (e.g. landing a job in a big city and moving).
I'd like to clarify when I said "community" I wasn't talking exclusively about friend groups (or your mother's friend groups) helping with recommendations and whatnot. I mean the community as a whole - men and women, the bailiffs and the blacksmiths: When you know everyone in your town and most of the people in your neighbouring towns it becomes much easier to know who is a bad person that'll abuse you (or to find out that information).
Yes, I'm pushing back on that because in the past it was explicitly legal and normal to abuse women. Women were not allowed to be selective in the first place. So the community explicitly did not perform the protection you are describing. It was only recently made illegal to rape your wife.
What I'm trying to say is that the people around us have always shaped who we pair up with - for better or worse - I think losing that is going to have consequences for social cohesion.
It's impossible to say who historically had it worse in general and I don't really believe such discussion is important except for historians. The only thing that dwelling on it will cause is resentment towards people who never perpetrated those crimes.
You pointed out a historical inaccuracy I wished to correct. You were the one that brought up the past and then incorrectly depicted it. You're reading a little too much on this.
This is assuming all women and all men are equal parties here in terms of candidate quality. I've used these apps and men are not socialized to present themselves in a visual medium the way women are. I'm not talking men should be putting on makeup, but a shocking number of men don't understand grooming their facial hair to fit their face shape, framing a photo of themselves, etc.
I was with you until you mentioned “brand”. You don’t need your identity to be a “brand”. Even more, it tends to be detrimental, because “story” implies inauthenticity.
I've only recently started using AI, and have discovered my use or rejection of it is predicated on my feelings for the task. This argument of "authenticity" really resonates.
I'm a manager, so when I'm sending emails to a customer or talking with one of my reports, I care deeply - so you might get some overwrought florid prose, but it's my overwrought florid prose.
On the other hand, I have to lead a weekly meeting that exists solely to provide evidence for compliance reasons, something out of the CIA's sabotage field manual that David Graeber has probably written about. But it's now a thirty-second exercise in uploading a transcript to ChatGPT, prompting for three evidentiary bullet points, and pasting the output in a wiki no human will ever read.
That's also what I do. I hand-write every email because these words have my name under them. On the other hand, if I'm asking the tax office to issue a specific document, I let AI handle it.
I wonder how people feel about "dumber" tools like hemingway.app that make mechanical suggestions for readability, like suggesting simpler synonyms and highlighting sentences that are too long. I've used it for writing documents that were important and that I knew a lot of people would read.
I was thinking about the authenticity of my writing earlier this week and wondering why I have no problem accepting code from an AI and committing it, but I find the idea of passing off an AI's writing as my own feels not just wrong, but immoral on the level of purposeful plagiarism. I feel a distinct difference, but I'm not particularly clear on why. I'm okay with sharing AI writing, but only when I've clearly communicated it was written by AI.
Probably related to why I can copy a piece of code from elsewhere (with sufficient work to verify it does what I expect and only what I expect) but I don't copy a quote and use it as my own. My words are my words. My code doesn't have the same guarantee.
Code uses a simplified set of instructions to instruct a computer to do things. Hopefully these instructions can be understood and maintained by a human.
Writing uses the entire breadth of human language to convey information between human beings with unique and complex understandings of the universe. If those words come from a machine that is not you - that is not someone - you ought to disclose it.
It's probably because communication is a complex dance between humans, where you're constantly signaling that you're part of some group with the other person. Think of any profession or team, where members share common ways of speaking: jargon, inside jokes, terms of art, terms of endearment, etc. It's useful for cohesion, trust, and efficiency because you're assured that the person you're talking to is indeed "one of us."
If you use an AI to communicate, then you either fail to mimic those group membership signals and you look like an idiot. Or you succeed and show that a machine can fool humans at this game. Any grifter can come along and establish trust in a group by relying on this tech. This dance that humans have been doing since the dawn of time suddenly breaks down, and that doesn't feel good.
AI "polishing" tools are essentially a form of anti-compression. Lets take some information represented concisely and needlessly pad it with formalities and waffle so it appears more "professional" (whilst also throwing away useful "metadata" like message tone).
No doubt the recipient will also be using some form of AI summarization that strips away all that added "polish" - making the whole exercise entirely redundant!
> No doubt the recipient will also be using some form of AI summarization that strips away all that added "polish" - making the whole exercise entirely redundant!
Not entirely, there’s still the energy usage and stock price increases. All because everyone’s too anxious to just talk to each other directly.
My (evidently mentally disabled) previous manager was so proud of being able to use AI to generate the bullshit he sent out to clients. What the morons are really doing is proving they're useless, let them.
I use Grammarly to check for errors I make when writing more serious stuff (English is not my native language), but any suggestion it sends my way changes the tone of the text so much that it sounds like it was written by a PR agency with a fake, forced attitude, sounding bland and colorless.
So no, thank you. Correct my textbook punctuation mistakes, and leave my wordy and "not positive enough" sentences to me.
I'm getting increasingly irritated by Grammarly's attempts to boringify my writing. I've even considered doing away with it entirely, even if it means I have to do my own spell-checking.
I'm working on a dystopia where the resistance is using text-in-text steganography to coordinate, so unpolished communication is flagged for extra scrutiny because all those stylistic choices might be hiding something.
I love The Freeze Frame Revolution, and yeah I was planning on doing something similar. I figure I'll publish an app that OCRs the text and displays the hidden message, and make the book read differently based on whether you can or can't see the messages.
Then readers can use the app for hiding their own messages to each other also.
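For the curious, the underlying trick doesn't have to be stylistic at all. Here's a toy sketch of text-in-text steganography using zero-width Unicode characters (my own illustrative choice, not the stylistic-choice encoding the story imagines; it assumes the cover text contains no zero-width characters already):

```python
# Toy sketch: hide a message as zero-width characters appended to ordinary text.
# Assumes the cover text has no zero-width characters of its own; a real scheme
# would interleave the bits and encrypt the payload.
ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space / zero-width non-joiner

def hide(cover: str, secret: str) -> str:
    bits = "".join(f"{ord(c):08b}" for c in secret)  # 8 bits per ASCII char
    return cover + "".join(ZW0 if b == "0" else ZW1 for b in bits)

def reveal(text: str) -> str:
    bits = "".join("0" if ch == ZW0 else "1" for ch in text if ch in (ZW0, ZW1))
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

stego = hide("Hope this email finds you well!", "midnight")
print(reveal(stego))  # -> "midnight"; the stego text renders identically
```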
(realizing this is veering off into tangents of tangents)...
An idea I discussed with someone while tossing story ideas around: a mystery built on a reliable/unreliable narrator, mixed with the ability to view immutable past events (lock a display screen to a geographic location, with a knob to rewind/fast-forward time up to the present), where the narrator finds something amiss. Upon the "something's not right..." moment, the narrator reviews their own events of the previous day and finds that the events don't match their memory. Is the narrator reliable? Is the narrator questioning their own reliability the real narrator, or an external agent trying to cast doubt on the reliability of the actual narrator?
That does sound interesting--to give the reader a bit more tooling to decide whether they're being lied to and by whom. You might enjoy The Quantum Thief, which isn't published in such a unique way, but which has a fun story that would fit right in.
I wouldn't say the author's style is unique or individual in any way. Every single Tumblr blog sounds like that. You could easily create a "make edgy" function: feed it your formal writing and it turns it into that kind of prose. Is it better or worse than "polish"? There's no substantial difference. The "polish" version sure sounds less exhausting than the original.
It's personal. "Unique" and "individual" might not be the best words to describe it, but it's clearly a style they've intentionally adopted. They appear to have been quite successful for it, too!
Digital culture was fake and performative and insincere enough before Turboclippy: fuck that with something sharp.
It feels like the whole world is turning into an HR department premised on the ideological axiom that killing one man is a murder but killing a million is a statistic.
I think LLMs are transformative but it's incredible to me how unimaginative most product managers have been. It reminds me of the 90s when people discovered GIFs can be put on web pages so every page had to have a hundred of them. It was tacky, as is most embedded AI.
> Kill me if I ever write "Regards" non-ironically.
One time, I couldn't remember the name of the album I wanted SIRI to play, so it kept picking the wrong one. After the fifth or so time, I told Siri, "you're killing me Siri" and it called 911.
You must be extraordinarily polite to say something that mild after that many failed tries. I mutter expletives at ChatGPT much sooner. AIR (AI-induced rage) is going to be a thing. I just hope it doesn't start to bleed back into how I behave with people.
I need an AI detector which just straight up sends any generated emails to /dev/null. If you can't be arsed to write something, why should I waste my time reading the slop?
Yes yes, languages and translations are a potentially valid use case, but irrelevant for the vast majority of people I communicate with.
I actually use LLMs to unpolish dictation results from MacWhisper[0] to match the way I write in chats. MacWhisper lets you set a prompt to automatically post-process your transcribed output. Here is my prompt:
'You are a professional proofreader and editor. Your task is to rewrite this dictation with more casual capitalization and punctuation so it can be used in a chat setting.
- Always capitalize "I" and derived words like "I'm"
- Sentences should start with lower case letters unless they start with “I”
- Replace explicit punctuation callouts with the actual punctuation
- No period at the end of the message, but leave periods between sentences
- Do not remove commas
- Do not change sophisticated words to more common ones
Return only the result. Do not add any explanations or comments about your edit.'
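MacWhisper runs the post-processing itself; if you wanted to replicate the step outside it, a rough sketch against an OpenAI-style chat API could look like this (the model name and client usage are my assumptions, not anything MacWhisper exposes):

```python
# Hypothetical sketch of "unpolishing" a transcript with a prompt like the one
# above, via the openai Python client. Model choice is an assumption.
from openai import OpenAI

UNPOLISH_PROMPT = """You are a professional proofreader and editor. Your task is
to rewrite this dictation with more casual capitalization and punctuation so it
can be used in a chat setting. Return only the result."""

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def unpolish(transcript: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": UNPOLISH_PROMPT},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

print(unpolish("Hello comma I will be there at five period"))
```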
The AI polish feels like it was written by someone with only an elementary schooler's understanding of how to write a letter.
Even in stuffy institutions like banks and insurance companies, corporate email style is very straightforward and, idk, undecorated? You learn pretty quick that people only skim emails and every word beyond your core message creates opportunity for your core message to be skipped over. NOBODY is writing salutations or 'i hope this finds you well' or whatever other garbage chatgpt thinks the first paragraph should be.
When I read historical items on lettersofnote I find that even super old business messages are pretty streamlined and only have some crunch because some runner has to physically find the dude you are writing to.
AI takes the personality out of written communication, and swaps it with a bland corporate persona. I want to know that I am working with actual human beings. It reassures me to know that if something goes wrong, I can appeal to those human beings, and not to soulless corporate automatons working off of a flowchart.
I feel like this is what corporate wants. A single way of speaking. A single way of thinking. Round off all the sharp edges and smooth out all the wrinkles. Humanity filtered through a sycophantic psychotic AI tuned to perfection by psychopaths who want to change humanity into their image.
"The idea of a Soul, Free Will -- these are Over!" - Technocracy explained by Prof Yuval Harari
https://youtu.be/NV0CtZga7qM
I get the frustration, but I think there’s a hidden assumption in this discussion: that everyone can write well in English.
Only about 5% of the world’s population are native English speakers, but more than twice that number use it daily. For many, AI rewriting isn’t about losing personal style—it’s about making sure they can communicate clearly in a language that isn’t their first.
It’s fair to dislike how AI flattens unique voices, but for a huge part of the world, it’s not erasing personality—it’s making participation possible.
If you can't translate your email into a foreign language, how is the AI going to rewrite your incorrect prose into correct prose? It's working on something incorrect. And then, after the AI rewrites it, how can you tell if it's saying what you want it to?
When I'm communicating with a non-native speaker, I intentionally use shorter / easier to translate words and sentences, and I give them more leeway with word usage / don't expect them to use the right words all the time. And that's fine! Communication still happens! We manage!
But if a non-native speaker starts running their text through an AI it makes communication harder, not easier. I can't tell if their word choice is intentional or if the AI did it. A tiny mistake I can understand gets expanded into multiple incorrect sentences.
> If you can't translate your email into a foreign language, how is the AI going to rewrite your incorrect prose into correct prose? It's working on something incorrect. And then, after the AI rewrites it, how can you tell if it's saying what you want it to?
Absolutely this. "Accessibility" and "participation" are great goals on paper, but the tools at hand are likely to introduce confusion because the user fundamentally isn't in a position to judge the quality of the output.
Last year I worked with someone who used AI tools like this, to compensate for their lack of English. It was dreadful beyond belief and basically unworkable.
Lack of comprehension on what other people said was a big issue. But also having four incomprehensible paragraphs thrown at me for what could be six words (not infrequently based on a misunderstanding of a very basic simple sentence).
I'm not a native speaker either, but the only way to learn a language is to actually use it. For better or worse, English is the modern world's lingua franca.
How are you supposed to communicate clearly if you are relying on an AI to communicate for you? How could you even tell if it properly communicates your ideas if you couldn't communicate them properly in the first place?
AI translation is definitely a great enabler, both for written material and things like live subtitles, but people are already aware that translations are imperfect and can be disputed. Something that anime fans can get very heated about.
English is not my native language yet somehow I share this sentiment towards AI. I'm fine with a spell checker, I don't need whatever I write completely rewritten, thank you very much.
The proper solution is to work with an editor that asks clarifying questions, not to rewrite the whole thing into something totally different.
For published work, if it's not worth editing then it's not worth reading. (I would go further personally and say that most published, edited, and peer-reviewed work in your area of interest isn't worth reading anyway.)
For unpublished work, like an email, ask the AI to translate the passage while maintaining style and tone. It will still flatten it, but not as much as the complete dogshit I read in the article.
Communication is a job requirement, faking it with AI is going to go about as well as someone faking programming skills. Not very!
Why is the author mad? The "polish" button does exactly what you would expect it to do if you understand "polish" as "make polite". If you want to keep your words true and "unpolished", maybe at the risk of sounding rude, then don't use that feature.
It is actually one of the first things I used LLMs for, and I still do. Latest example: I received an email from an administration asking me to sign some document, but it contained some incorrect information. I used an LLM to write a nice letter telling them to fix it. I could have just replied "it is not A, it is B", but without some formalism, it sounds rude, and I didn't want to sound rude; I also didn't want to make the effort for that formalism. It may be one of the things LLMs do best; they are like supercharged model letters.
Of course I don't do that in informal situations or when I want some emotional connection. By the way, no LLM touched this comment, I don't consider posting on HN a formal situation.
EDIT: typos, but maybe I should have left them in, to show that it wasn't the work of an LLM ;)
I think it's interesting how this has changed. If you go back and look at politically correct corpo-speak from decades past, it was bland and impersonal but aimed to be direct, clear, and reasonably concise.
These days, it's still impersonal but now couched in platitudes, apologies, and excessively polite language in an attempt to avoid possibly offending anyone in any conceivable fashion.
I'm worried that we'll get to a point in regular work communications where responding to a question with a simple and succinct "yes" instead of, "Great idea, I think that is an absolutely perfect plan and you surely deserve a call-out at the next all-hands meeting for coming up with the idea to fix that problem in the team's code!!!" will get you put on an HR watchlist for being difficult to get along with.
From my experience with LLMs, it matches your description of "corpo-speak". I think it is the right balance of politeness. The result is usually rather simple, and when it is not, that's because it wants me to add extra information, which is often a good idea, not because of too much politeness.
It is not at all like you describe at the end; it's way more boring, and that's the point. But out of curiosity, I asked ChatGPT for an appropriate formula for a use case like yours and I got "Sounds good to me! I accept your suggestion—let's move forward with it." Of course, there is some formalism here; in day-to-day communication with people you know, I've found it to become less formal over the years. "ok" is fine now when it wasn't before. I think because text-based instant messaging is now commonplace.
My mother is a first-generation immigrant. Her writing isn't ideal, but AI allows her to communicate articulately in a way she couldn't before, when she might have been discriminated against.
Yes, amusing, since the polishing was unsolicited, but I sincerely doubt this was forced on our fine humorist. English-to-English editing, or "translation" from snarky to suave, is straightforward to understand and fix.
But what about translation by professional human translators between languages? Is it really any better at preserving the intent and subtext?
Kundera has an amazing essay, "The Sentence," in his collection of essays "Testaments Betrayed," on the woes of translating, yes, a single sentence from Kafka's "The Castle".
As he (or his translator Linda Asher) says: the sentence "describes the coition of K. and Frieda".
Under the bar too ;-)
Not a topic I tend to associate with Franz Kafka.
When AI systems can match Milan Kundera (translated from the French), then we will have definitely gotten to Pro level. "Write me a brilliant essay in the style of xxxx, on topic yyyy, in French, and translate it into Mandarin for grins." I would be a wreck if I had to translate Kundera.
What's really annoying is how its idea of "polishing" is just making the email worse. Who could possibly benefit from the recipient needing to sift through more boilerplate to find the actual content of the email?
15 years ago I might have imagined that an email sent by a robot would be extremely efficient, including only the relevant information, in order of importance. In 2025 it's apparently the opposite; we're trying our best to communicate effectively and an AI is nagging us to include an extra paragraph of flirtation.
While I find AI to be quite useful in many contexts, I 100% agree with this assessment, and I loathe all these SV bubble-dwellers who think our lives should be airbrushed and "polished" for presentation.
I think a bunch of Google's Gemini ads highlighted how clueless they were on this subject. Like one of their very first Gemini announcements highlighted how you can get AI to write a caption for your social media photos of your dog. God forbid you actually use your own brain to write an actual human caption (flaws and all) for photos of your dog. Google also highlighted how their Pixel camera AI features can "correct" photos of your kids making funny faces in family photos, because how awful would it be if you didn't look like some sort of Stepford family.
I was glad Google finally got a ton of pushback against their ad about using AI for a young girl to write a fan letter to her Olympic idol. Seriously, fuck off with that. Makes me wonder if these marketing folks are actual humans in the first place.
Soon everyone will be texting, emailing etc with an AI secretary doing it all.
"AI take a letter and polish it for me..."
Maybe even video conferences will use "Polish" to portray a recreated lifelike sophisticated avatar with real time grammar correction. When folks finally do meet in real life, they discover a socially awkward, an illiterate or tactless boor instead of the suave intellect portrayed by the AI.
Please forgive my comment, I didn't have any polish for it. LOL
Since the standard thing for the marketing side of the industry is to promise blatant absurdities, I'm sure the next big thing will be "Let us maintain your authentic voice by learning from your messages!"
Even though I am not a fan of it, I can see how some people might want to use AI to improve their grammar, maybe because they are afraid to leave a bad impression.
But why would anyone put AI generated images into an Email?
I was playing with a self-hosted model a while back and instructed it to only give answers that were unhelpful, vague, and borderline rude.
It worked surprisingly well a lot of the time! But most of the time it also kinda broke the model in terms of coherent answers because it was obviously trained for the exact opposite thing.
I'd be interested in a "polish" feature if it had a drop down where I could select which of my aliases it makes me sound like. It's too easy to use the wrong one's voice here or there.
Is there anyone like me out there who has yet to use AI? I think it's interesting but I have no strong feelings about it. Aside from a few images generated with early image generators, I have not used anything else, not even chatgpt. The AI search results I mostly scroll past are an exception.
I was an LLM skeptic for a long time. I still have a hard time trusting it to the same extent that most HN'ers appear to. (I would never use an LLM as a substitute for my own "voice" when writing, or put any AI-generated code into production.) But I think I have reached a middle ground: I basically use it as a first approximation when I am exploring something entirely new to me. For example, if I'm learning a programming language, I might ask it for ways to unpack an array into separate variables. Or if I'm reading an ingredients list, I'll ask what psyllium husk is. Basically anything that's moderately easy to verify if I get any suspicion that the LLM is hallucinating again.
These are things that I _used_ to simply ask a search engine, before Google results became 99% SEO-optimized blogspam and therefore useless for actual knowledge-seeking.
I think an important part of overcoming AI skepticism is to understand (at a very high level) how it all works so that you understand its limitations and know when you can and cannot trust it.
I haven't used any LLMs and I'm not missing anything. At some point in the last couple of years I entered a couple of prompts in chatbotgpt using my friend's account just to see it with my own eyes (funny enough, asking it for slop, before that was even the word, and hallucinations). I also ran a prompt through llama.cpp just to see if it would work. It did, great, I don't really care.
I've done a few experiments on a project I've been putting off; one involves parsing some email. There are certainly some interesting use cases, but I've yet to figure out how to actually go about deploying a solution.
It does seem, to me, that the benefits are there, but not to the extent that the AI companies would have me believe.
I consider myself principally a software engineer, and like you I have avoided using AI aside from a few "haha look at that" images produced with DALL-E. Very rarely the forced AI content at the top of a search results page has helped me refine the term or concept I was trying to look up.
I like doing the things that AI is supposedly good at. I like learning, I like understanding what I am doing, I like the satisfaction of finally getting to a solution after banging my head against the wall for hours. I like the occasional sensation of being hopelessly lost and finding my way to the light. I like writing text and code. I'm not terribly bored by boilerplate. I like going down rabbit holes and experiencing happy accidents. I can't in good conscience sign my name to something that I did not create and do not understand deeply enough to explain to anybody who asks. I like doing the hard thing.
The immediate response I'm sure that I will get is some variant of "well everybody else is using it as a force multiplier and the way you do it is making the job take too long." And maybe that's true. Maybe I don't care. I am a person who takes pride in my craft. I enjoy the act of making things. Some people don't, I guess.
The day may come where I am viewed as a dinosaur, where the way I work becomes fundamentally incompatible with the way the industry works. And if that day comes, bluntly, fuck the industry. I'll go fix air conditioners instead.
Good news is that AI is equally adept at turning long polite business speak emails into short, blunt, and sweary emails, so you can choose what you prefer.
AI creep is like a leaky sewage pipe oozing gray water into everything you'd want to keep it out of. Disabling it is increasingly difficult on every app.
Nadella and the board approved billions of dollars in spending for AI. It's going to be in every single thing whether you want it or not. Otherwise, there will be a shareholder lawsuit.
A quick glance suggests that, because the site is pretty far afield of usual HN fare, folks here don't realize that Lawson is the author of several books.
I personally do not want you -- for any value of "you" -- to use an AI to "polish" any writing sent to ME. I want the author's actual thoughts, not their thoughts massaged by an LLM.
While not as longstanding as, say, Lileks, thebloggess is someone from the blog era of the Internet, and publishes an RSS feed. She's well known in other internet communities.
The obsessive need for AI developers to make everything into the most banal, inoffensive version of said thing will probably end up being the biggest roadblock for AI taking over human jobs.
You phrase it well. It's not the AIs themselves. You can as easily get an AI to take something fundamentally banal and write it into something exciting, dynamic, exotic, or strange.
If you analyze these systems as a work of art, using the postmodern toolset, the people and systems taking one of the most potent technologies humanity has ever created and using it to craft a banality machine is just... very revealing. Like the great-uncle who can't even finish protesting how non-racist he is without using a racial slur in the process.
"We're innovative! We're hip! We're on the cutting edge! We're setting trends! Now here, let me help you turn your text into the grayest corporate sludge imaginable."
It's art. It's probably not art they intended to make, but it's art.
It is, isn't it. It's the Sokal Hoax on a society-wide automated scale: rather than being concerned with what a "text" might "mean", generate millions upon millions of "text" sequences and mechanically separate those which pass as real enough. Baudrillard's simulacrum.
> It's art. It's probably not art they intended to make, but it's art.
People seem to be very insistent that the output of AI is not capital-A art, because that threatens their worldview, ignoring how artists had previously pushed to expand "art" away from any concerns of technique, intent, legibility etc.
I mean the system itself is art. The output is some kind of anti-art, but the system that uses the world's most amazing technology to extract the art out of everything that passes through it is an amazing piece of inadvertent outsider art on its own.
"The obsessive need for AI developers (...)" - Of product product/business people trying to fit into the "AI era". Developers, if given a chance, wouldn't probably built this.
Honestly the only spot-on use case I can think of for an eternally calm, friendly, totally bland and generic customer service AI is for interacting with the Karens of the world and only because it would spare the nerves of human employees and because the logs would no doubt be hilarious to read.
I feel like nothing would turn me into a Karen more than interacting with AI customer service on a regular basis.
It’s already severely frustrating that at many companies, you have to talk to several people (separated by copious waiting) to get your problem solved after handling a severely outdated and underdeveloped phone system.
I’d rather get the bland polished version than the “I’m so silly/sarcastic/eccentric” version. Although ideally there’d be an AI setting that makes it both polished and terse.
As a tangent, the other day I read a post by a person who used to be close to Elon Musk, giving their account of that relationship, and it was translated to Polish via DeepSeek.
It was not idiomatic Polish - very far from it really - so I guess, depending on the model used here, the result could have been equally disappointing if it was really trying to Polish that email.
I simply do not understand the end game of these sorts of features.
Presumably, the idea is to make the other person think you've written the email yourself, and you're a "better"/more corporate/more professional writer than you actually are.
But once everyone starts using this kind of transformer, so nobody assumes that anyone else is actually writing the content they're sending out -- what's the point?! Who benefits?
The team that implemented the feature and hit their quarterly goal and got their full quarterly bonus. The execs who set the quarterly goals will be rewarded by Wall Street for shipping AI features. In 2 years, when reality has struck and the market has moved on from the AI fad, another team will receive their bonus for lowering COGS by removing this feature.
Users' needs are irrelevant to this cycle.
Big tech companies need to be broken up into teeny, tiny pieces.
The vendor's share price. This is nothing to do with the _users_; it is about cramming 'AI' into as many press releases as possible because it gives the markets the warm fuzzies. No-one wants to use this, and that's really beside the point; for the time being it makes stock price go up.
Presumably in a year or so there'll be a new fad, and most LLM-based 'features' will go the way of Clippy.
I have noticed many news outlets presenting articles with an "AI summary" on top. Often the article is copied right off of a newswire service, with some elaborations woven in. I have a feeling the thing the LLM is summarising is increasingly something another LLM has previously expanded.
What if we just wrote articles well, such that (a) they were not filled with fluff, and (b) the summary would be to only read the first paragraph of the article?
OneNote now shows the Copilot icon right where I start writing on a blank page. That's infuriating (as much as I see some uses for AI). Fortunately you can turn it off in the 'connected experiences' setting.
If you're already an effective writer, obviously AI will usually only make things worse.
But for people where English is not their first language (or whatever language you're writing in), polish is a godsend. It fixes your grammatical errors, vocabulary errors, spelling errors, punctuation errors, and all the rest.
Obviously it can go overboard in changing your meaning (as any flesh-and-blood editor also might), and presumably over time these tools will get more refined.
But for most people who write intelligent comments here on HN, you are not the target audience for this feature. Well, until you have to dust off your high-school French or Spanish or whatever for an e-mail exchange.
Its owners, however, definitely want you to fall into lockstep that cooking the Earth for their billion$ is perfectly fine and that waking up to a life of low-impact, serviceful compassion to the Earth and its peoples is just virtue-signalling when, in fact, it is simply virtuous in the face of demented psychosociopaths.
Note that this all started with all that digital coin-mining. Just more burning the Earth for fakeass money so callous bastards can get richer.
Not for geopolitical stability, though. What did we learn about making the entirety of human civilization dependent on a constant supply of rare materials hostile countries dig out of the ground?
And what happens when they run out? Back to carbon fossil fuels?
They weren't using modern GPUs and multiprocessor machines to cook the Earth, which has been happening on a large scale since the turn of this century.
No, we were burning coal and cutting down trees. I'm not saying that what we're doing now is fine, if anything I'm agreeing with you but pointing out that profit-driven environmental destruction has been happening since way before the invention of cryptocurrency.
I had the same initially confusing experience as the author, confusing "polish" for the language "Polish". The product manager should have used AI to get feedback on this feature...
I put the following into chatGPT: "I am a developer for an email client. I want to incorporate some AI features to help users write more polished emails. I'm thinking I will have an option that appears under the email body where a user can click "Polish". Can you see any issues with this UX?"
ChatGPT: There are a few potential issues or points of confusion with a button labeled “Polish”:
Ambiguity:
Some users might confuse “Polish” (as in “to make shiny or more refined”) with “Polish” (the language). If your product is used internationally or by non-native English speakers, that label can be unintentionally confusing.
Even among English speakers, “Polish” may not be immediately clear as a writing enhancement feature—people might wonder, “What exactly does it do?”
Lack of Context:
If users don’t already know what the feature does, they won’t know what to expect when they click “Polish.”
You may need a tooltip, hover text, or short description like “Improve your text with AI suggestions” so users understand it’s about refinement, grammar, tone, etc.
User Control:
How much will it change their text? Will it rewrite in a new style or just fix grammar?
If it makes heavier changes, users might worry about losing their personal style or voice. Be clear about the scope of the changes.
Trust & Consent:
Users may have privacy concerns if their email content is being sent to an AI for processing. Even if that’s already in your privacy policy, the feature’s label (just “Polish”) doesn’t convey what happens behind the scenes.
You might consider adding a “What will happen to my text?” link or a discreet explanation about how data is handled.
Alternative Wording:
Using a label like “Improve,” “Refine,” “Enhance,” or “Polish Text” might provide clearer expectations.
You might also consider dynamic or contextual labeling like “Improve Grammar & Clarity.”
How to address these concerns:
Add clarity in the UI: A short descriptive text or tooltip under the “Polish” button can briefly explain what the feature does: “AI-powered text improvements.”
Offer user choice: Present a preview of the changes before they commit. Let them accept or reject individual suggestions so they feel in control.
Signal the feature’s AI: Some users will want to know that an AI is involved, especially if they’re concerned about style changes or privacy. Labeling it as “Polish with AI” or something similar can help set expectations.
Overall, the idea of a quick “one-click to refine” feature is appealing, but make sure the button’s purpose and functionality are crystal clear—both to avoid confusion and to build user trust.
The tech industry is in real trouble.
The people. The people changed.
Turns out that Idiocracy is not that far off (https://www.imdb.com/title/tt0387808/)
When was free will proven to be a thing to begin with?
So it won't change, because there is no lesson to learn. No individual involved acted irrationally.
It's a design that's in companies' best interests. You can have a computer that's a "friend." One that you trust but ultimately has a mind of its own. This contrasts with a computer that's merely a tool, one that serves you exclusively at your pleasure and has zero agency of its own.
Which approach gives companies more control over users? Which one allows companies to sell that access to the highest bidder?
Based on the experience of 20 years ago, though, users are _extremely_ turned off by it. There's little reason to think this has changed (if anything it is likely more pronounced because Clippy came in kinda without baggage, whereas LLMs have a lot of baggage and most of it ain't great).
> It's a design that's in companies' best interests.
I really don't think it is. Clippy was reputationally damaging to Microsoft and they had to get rid of it. There's little reason to think this will be different.
Modern Big Tech doesn't particularly care what users think. They know they have network effects on their side and that switching costs are high. So what if it's "reputationally damaging?" What are users going to do? They're just resources to be exploited. Microsoft, Google, and their ilk can treat users with contempt if it means more control and more shareholder value.
Third option: the computer is your enemy, which will follow any sufficiently clever adversary’s orders.
Thinking of a computer as a tool seems reasonable, but thinking of your computer as your friend is clownish (which, I think you agree with based on your last comment).
> but ultimately has a mind of its own
Kind of. Ask a Chinese AI about Tiananmen Square historical military events.
> You can have a computer that's a "friend."
Slightly offtopic, but I have a friend with synesthesia who sees inanimate objects like people, and they call their computer "macbook friend" (since it's a MacBook Air).
Clippy (and his predecessors; he wasn't one of the first avatars for the feature) might not have been so bad, but marketing got hold of it and decided it didn't pop up often enough for them to really make a big thing of it, so it was tuned up to an irritating level.
> It is unclear what the industry thinks has changed
The demographics of computer (and other device) use have changed massively since the late 90s, and the suggestion engines are much more powerful.
I still want it all to take a long walk off a short pier, but a lot of people seem happy with it bothering them.
I remember when software would ask you on first start what your level of experience was. "Novice, Intermediate, Expert" and would tune the UI to respect that.
Sometimes different modes like that can be more hassle than they are worth, from the dev point of view. You can end up with many more paths to test in order to make sure your product is bug-free.
If the automation is much better at the task than I am, then I am happy to delegate the responsibility to it: it's a matter of accuracy. Clippy kind of sucked even when he was right about what I was trying to do. For many things, the LLMs are getting good enough to outperform me.
Kind of silly to compare LLMs to Clippy...
The customer base for computing has expanded probably 3 or 4 fold or more from those Windows XP days in the US. Maybe for the subset of the population that was word processing back then it was annoying. But now we are looking at a different pie entirely, where that subset of annoyed power users is but a tiny sliver. There are people today who have no experience even with a desktop OS.
This wasn’t the dark ages; in highly developed countries the ‘computer on every desk’ thing had just about come true. I doubt there are that many more regular word processor users now than in the late Clippy era, at least in the developed world.
Thing is, Gmail's been doing this ~forever with quick replies to emails, now it's just doing longer replies instead of "that's great, thanks" level of replies.
But Clippy didn't write the letter for me. If I can be lazy and AI formats what I'm communicating in a way that is accessible to other people, then why should I care?
After a recent Show HN, I got an email from someone saying that they'd set up a page for my 'product' on their product showcase startup site. I followed the link and saw my open-source project pitched as ChatGPT slop. It felt like a violation because it wasn't just an aggregated link, but a rewrite of my readme with an associated 'pitch'.
I recommend reporting this to dang at hn@ycombinator.com. I imagine that he'd be interested in someone crawling HN in order to send automated lead generation spam.
I don't think dang can do anything about it, I'm sure HN gets scraped all the time. I routinely get spam from cryptocurrency startups to (obfuscated) email addresses I have posted on HN years ago on "Who Wants to be Hired" threads, and from my commit messages on github.
Github is definitely quite the source of spam. I ended up realizing that when I was tired of having two .gitconfigs with two email addresses (work/personal) and switched to just putting my @users.noreply.github.com in there. No more spam.
(Plus I get to show off my 4 digit Github userid. I was the 2367th person to sign up! ;)
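If anyone wants to copy that setup: GitHub's noreply addresses look like USERNAME@users.noreply.github.com for older accounts, or ID+USERNAME@users.noreply.github.com for newer ones, and it's two lines of ~/.gitconfig (the values below are made-up placeholders):

    # ~/.gitconfig - commit email that routes nowhere (placeholder values)
    [user]
        name = Your Name
        email = 1234567+yourhandle@users.noreply.github.com

Commits authored with that address still get linked to your GitHub profile, but anyone harvesting emails out of commit metadata gets nothing deliverable.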
Neat. I was pretty late to the party and have a userid halfway into the 6 digits, but somehow I got extremely lucky and landed a two-character username (my initials).
I get a ton of emails from people on HN and I dunno, it's really my fault for putting my email in my profile. I don't blame HN for this; I don't think it's particularly supportive of this kind of "abuse".
> no you may not ready my proprietary document, nor do I suspect most people using this particular software - I only have it for digital signatures - have permission to share IP with a third party.
This is a massive liability that almost everybody seems to be ignoring. My employer has a ban on using AI on IP until this is properly resolved, because we actually care about it leaking.
Maybe an Information Commissioner will get round to issuing a directive some time in the mid-2030s about how none of this complies with GDPR.
> My employer has a ban on using AI on IP until this is properly resolved, because we actually care about it leaking.
Yet I can almost guarantee you that someone has put something they shouldn't through ChatGPT, because they either feel like it's a dumb rule that should not apply to them, or they were in a hurry and what are the odds of them getting caught?
I think in general, no major liability issue will come up:
- if everyone is doing it, you can't really fault anyone
- on some level we are, or will be, kinda dependent on that AI and opting out will probably be made unpleasant via dark patterns as usual
- there's been no pushback against every piece of software, including at the operating system level, slurping up all the keystrokes and data, let alone the data that's already in the cloud - big tech knows everything about us, but to my surprise no major public leak has happened, i.e. one where you really can see your neighbor's private data without buying leaked data from someone on the dark web or wherever
- things are moving too fast, and you don't know if you can afford to have your programmers not use tomorrow's AI, for example, so your "bans" will have to be soft, etc.; this limits the potential pushback and outrage
A blanket ban on AI seems like shooting yourself in the foot. What about local models, on prem, or using private Azure instances?
Somehow I don’t have this problem with notepad.exe or vim or pandoc or imagemagick or textedit.app or resolve or blender.
Maybe it isn’t the tech industry, and just consumer-facing apps.
Notepad.exe is getting sloppified: https://support.microsoft.com/en-us/windows/use-rewrite-in-n...
I read that and saw:
I have too much to say about that, so I'll just leave it. Notepad++ is the way.
... I did not know that one could uninstall Notepad
Since Windows 10 at least: https://www.majorgeeks.com/content/page/remove_or_restore_no...
jesus fucking christ
https://github.com/windows-source/MS-Notepad
Open source has been looking better and better lately because it's not in a mad rush to bolt "AI" features onto it (an LLM will do something) and then shove a huge amount of interface in your face to try to get you to use it.
On some level it's enormously baffling that this was the thing they decided they needed to do. Conversely, Adobe Reader on my phone won't shut up about liquid mode either (which uploads to Adobe servers), and Microsoft and Google's solution to "people don't want to use our AI assistants" was to ensure they literally can't be disabled or removed.
I will simply point you to the iTerm2 AI kerfuffle (https://news.ycombinator.com/item?id=40458135) as proof that some in open source _are_ in a mad rush to bolt on completely unnecessary "features".
It was a bad choice that never should have been implemented as "enabled but not configured", and I have moved away from iTerm2 as a result. I am sure that others have as well. (The grudging move to make it a separately downloadable plugin was good, but too late IMO.)
One of the rare times when the slow pace of open source innovation is actually a benefit, because all innovation that's occurring is making things worse.
Consumer-facing apps are made by the tech industry, so it is an industry problem.
Notepad is attempting to fix spelling without asking.
Obviously the AI version is bland and terrible, but arguably more importantly it has also completely changed the meaning of the message. The AI version:
- apologizes
- implies the recipient was "promised" this email as a "response" to something
- blames a hectic schedule
- invites questions
None of this was in or was even implied in the original. This is not a "polished" version, it's just a straight-up different email. I thought that style transfer while maintaining meaning was one of the few things LLMs can be good at, but this example fails even that low bar.
The AI has some ... "ideas" ... of its own on what workplace relationships apparently need to be like.
A lot of people who want to replace most human interactions with LLMs assume that there is some objective set of cultural values true in all contexts, and that it is good and easy to encode these as axioms into an AI.
Yes. As an Old now (GenX), I feel like moving all interactions to text and now having AI as a man in the middle is just reinventing ways to get in a situation where a decade down the line you reconnect with someone who used to be a friend and both discover "Hey, that wasn't what I meant at all!"
As ever, T.S. Eliot was right: "It is impossible to say just what I mean!"
And that objective set of cultural values is, apparently... a sort of parody of 90s corporate culture, a sort of polite version of Michael Scott. Like, no-one ever _actually_ wrote like LLMs tend to write; it reads as a parody of a now slightly obsolete corporate-speak.
Have you read the papers on how they optimize these LLMs for demeanor?
AI exists in a Matrix where toxic positivity is enforced with electric shocks.
And those ideas seem far more in line with millennial Silicon Valley culture. It's weird when they expect Germans to fake that sort of overly formal, overly cheery tone. People just don't talk like that.
Correct. This is called the production of subjectivity.
(tryna be funny not patronizing. but the machinery of subjectivity production is ofc very real)
this is like when my manager once yelled at me for not writing in corp speak enough
“OK Fine. But could you at least yell at me in corp speak?”
It's no surprise LLMs are using corp speak and vapid marketing prose as a template. There is so much of it out there.
This is from that Autodesk post last week where they admitted their mistake and… Nope it's corp speak:
“We are excited to share some important updates regarding Archiving and our Idea Boards and Forums that aim to enhance your experience and ensure valuable content remains accessible. Please read the details below to understand how these changes might impact you.”
Barf. But to an LLM this looks like a human communicating in a meaningful way.
> Look, we need to align on language here. If you’re not speaking in scalable, results-driven terminology, you’re slowing down the team. We don’t “talk about things”—we sync and strategize. We don’t “try something new”—we leverage data-driven insights to drive innovation.
It's just shitty prompt design
Well... that's a very 2025 sentence...
No, it's because the AI makes shit up. No amount of prompting will fix this.
There is no way in hell anyone who knows me would get that email and not think I’d been abducted
This person cares about not putting up a fake identity. That's pretty cool, but social media has exposed that a large number of people are perfectly fine presenting an illusion. People will have no shame passing off well written things as an output of their talent and hard work. Digital makeup has no bounds.
If you care about putting up a fake identity this is still bad. Social media is all about being distinct and grabbing attention. Getting same-ified into a bland featureless identity isn't the same as carefully crafting a persona to maximize clicks.
> People will have no shame passing off well written things as an output of their talent and hard work.
Sometimes I don't want to waste my time crafting a professional e-mail to a bunch of jerks full of themselves. Maybe I want to write it as it comes off my brain, and let my digital scribe reformulate it so that the people reading it feel respected/validated/flattered. Am I putting up a fake identity then? Am I presenting an illusion of professionalism? Maybe writing "Best regards" instead of "Bye" is the facade of professionalism in the first place.
> Am I putting up a fake identity then?
When you did it manually you were putting up a fake identity. ofc using an AI to fake you being fake for work would still be fake.
The idea that our work personas aren't at least a little fake is toxic. Depending on where you work it might be a lot fake.
Wear your character as lightly as a cap, don't get tricked into method acting.
I mean, I hear that. I was asked to be "nicer" in emails once, and when pressed for specific changes, was finally asked to occasionally say "Thanks!" as my sign-off instead of "Thanks,".
The "bunch of jerks full of themselves" likely aren't reading the emails now; we're burning immense amounts of energy for your politeness to be generated, and distilled out at the other end into a no-nonsense summary missing all the niceties another AI just added.
It's obviously a personal thing, but I even feel a little guilty clicking the autosuggested "thanks" when responding to a text. Everyone has the threshold they're comfortable with.
"Best Regards" vs "Bye" is one thing, but unless you're the owner of the company, sending a client "fuck you, pay me" just isn't professional and is probably going to get you fired.
I see no problem, assholes deserve bullshit.
With the normalization of a default workflow that chucks all comms through an LLM filter settling in these days, I don't think it's even people trying to pass off illusions as their own persona. All it takes is a copy-paste and hitting the Make-Me-Some-Text button. I'm sure the responses would be frustratingly amusing if you were to press them and call them out on it (including trying to pass off the illusion).
Many people didn't think about what they were trying to convey (or self-analyse how they presented themselves) when drafting correspondence in the past; now, many people think just as not-hard and often continue, like before, neglecting to meaningfully proofread whatever they had the LLMs generate for them before hitting Send.
Of course, I don't like it. But in some ways, it's just not a whole lot different from what it was before in that you can often still tell apart the people who care to be articulate from those who don't. Though, I feel bad for people disproportionately waylaid by the new paradigm like the bug/security responders on the curl project.
indeed!
At a high level I see convergence of styles, topics, behaviors to a generic form, both in "AI" and social media. Which to me suggests that the "AI" solutions are doing exactly what we would do ourselves, just faster.
[flagged]
Speak for yourself, dawg, you sound miserable. We're not all that way.
Why such a hostile response to someone who cares about presenting themselves in a way they see fit? Your image is very important and it's logical that they want to control theirs.
The whole AI discussion reminds me of David Graeber's "bullshit jobs" book. If the content doesn't matter, why not have an AI generate something polite and meaningless? Why not save effort managing your inbox full of meaningless emails by having the AI summarize them? It might lose some of the details, but they didn't matter. And so more and more of the white collar world gets replaced by AI... until it actually disrupts away all the bullshit jobs entirely.
Edit: comments describing exactly this https://news.ycombinator.com/item?id=42865225
If the content doesn’t matter, why do anything to it?
Because the person who realized it doesn't matter isn't in a position to eliminate it.
It amuses me how everyone always thinks someone else has the "bullshit job."
A natural consequence of jobs optimising for productivity while abstracting all actual power away to some mythical C-suite job.
You have to look at it holistically.
This is the current modern Human:
1. Born into the beast, enters social media by 5-7 years old
2. All math and writing is done for you (becoming illiterate)
3. You don't have to spell ever again (becoming illiterate)
4. You cultivate a virtual persona since childhood (becoming vain)
5. Did your life really happen if it wasn't posted online (the digital existentialist crisis -- do you even exist if you aren't part of the hive?)
Then add more shit to this like, I dunno, no jobs, swipe left and right to find your soul mate ...
Yeah and then you finally get ... you and me, the whole fuck it, I don't care anymore.
Feeding apathy with AI will not end well, we have to care. Look at our kids today, we screwed them.
Keiichi Matsuda, HYPER REALITY
http://hyper-reality.co/
The old human:
1. Born into plague
2. Never learns maths or writing
3. Nor spelling
4. Half the time your life ends before childhood does
5. Nothing happens in your life
I can't believe that people unironically believe that humans lived unfulfilling lives until mass consumerism started. Incredible.
Just because we don't know what daily life looked like doesn't mean "nothing happened".
Maybe we can have modern medicine AND no social media.
[flagged]
Yes, because those are clearly the only two choices.
Touché.
So what’s the takeaway, life’s a bitch and then you die?
> swipe left and right to find your soul mate ...
> Women only like 4% of the men they see on the app.
> Men like more than 60% of the women they see on the app.
This ends with 20 women with the same soul mate and 19 men without any. You need to add
> no kids, no grandkids
to your litany.
There are alternate explanations here. Men do not put as much effort into their presentation as women do. Men are putting out profiles that present themselves as anti-woman, and that's unattractive. Men absolutely should be pickier - ideally they'd date only people compatible with them, but instead they try to cast the widest net rather than finding people they could vibe with.
Men in the dating pool have become toxic, and women are doing their best to weed those out. Don't be toxic. Don't endorse people who are looking to make women's lives worse. Put effort into your presentation in your profile pictures - have a woman friend take them. Put effort into your profile to show that you are not someone who would endanger women - have other women in your life help you present your profile. (That probably means don't have a fishing photo, even if you enjoy fishing!)
Much of this is predicated on having women in your life that you trust and that trust you, and who want to see you happy. This means listening to them and trusting them.
And then, you might be a little pickier in your profile too.
No matter how much polishing a man does to his appearance / profile the biological reality is that women are sought after and men have to compete for their attention. There's some statistic along the lines of "the average female has about as much 'pull' as a male A list celebrity"
This means that women will receive far more attention from the opposite sex, and therefore have an "easier" time finding potential partners. This problem is just exacerbated by dating apps and social media making women more aware of their position and options - 100 years ago women could barely date outside their village. Nowadays they receive 100s of messages from men around the world competing for her attention. The opposite happens to men, they become more aware of how "undesired" they are so they start casting a wider net since no fish are caught in their smaller nets.
Blaming the men here just reeks of those toxic standards - since again it is on the men to improve. Not the women who should continue being hyper selective (most attractive, wealthy, etc.).
(Note I'm not blaming any side just showing the reality men face - really this is a problem both sexes need to tackle)
"easier" is in quotes because I appreciate women have to filter out abusers and creeps. But from the male perspective they'd kill to even have a chance with an abuser or creep.
Is it so hard to believe that going onto a digital marketplace with an order of magnitude more humans than you would ever physically deal with is probably what's leading to a type of thalassophobia (the fear of deep or vast bodies of water)?
Life is simple. He or she lived in the town, or your school, or was someone you pass by on the way to work daily. It's not some complicated digital dance where the universe presents to you all possible mates.
Date the person you wouldn't. Give it a chance, because you just don't know how this love stuff happens.
Completely agree - on top of the thalassophobia there’s also decision paralysis, and, I’m not sure what you call it, rejecting the good in hopes of something perfect down the line.
So, if a man has a bad experience, they're out a few hundred bucks and a few hours, in general.
Ask a woman who trusts you about their worst experiences sometime.
Safety is super important and I share your concern. I think online dating is similar to the internet. When the Internet started, there was a niche group that were really into it (early adopters). This was also true for dating apps in their infancy. It was safer simply because the pool was self selected (you had to be open minded to even try it).
Then the internet became for everyone, and so did online dating. When there are that many people involved, things get more unsafe. It's not the same pool of early adopters on the same wavelength. It's everyone now, and everyone includes every wavelength.
Stay safe!
I think it's instructive to sit down with a woman who trusts you and watch how they evaluate dating profiles. It was certainly eye-opening for me.
Your biggest competition is not the other men on the app. It's "do you make their life better than being alone?"
> It's "do you make their life better than being alone?"
This is a really lovely sentiment and I wish more people approached it like this. I've never set up a dating profile and I don't plan to. But hiding stuff like fish pictures seems a bit over the top; surely if fishing is one of your passions you'd want to end up with someone who at least doesn't judge you for it.
Also keep in mind this approach of making sure your dating profile is "optimised" only works until a large enough percentage of men's profiles are "optimised" - then it's back to square one to figure out how to optimise further and put yourself at the front of the queue again. We've seen what the SEO arms race has done to search engines; do we really want to do the same thing for dating?
You absolutely can put it in your hobbies!
The issue is that you rarely look your best in those pictures, and unless you are explicitly looking to date someone who loves to fish, you aren't showing them how they would integrate into your life.
In my experience women are not statistically hyper selective. The less selective women are just out of the dating pool faster than the hyper selective ones. A lot of the women I know are happy to marry a man who grooms himself, respects basic boundaries, and has income. A lot of religious women in particular are socially pressured to drop the "respects boundaries" category also.
I think this is mainly a problem with dating apps - I have seen what you're talking about myself when the pair meets "naturally" or are set up by mutual friends.
People are far more forgiving in real life than they are online.
But part of the problem here is that the dating apps are like junk food: easy, satisfying, but ultimately unhealthy. And unfortunately because of this ease a lot of people reach for them instead of doing the more difficult leg work.
But I can also see it from the female perspective: if almost everyone I swipe right on matches with me, I'd start becoming WAY more selective about who I swipe right on. This leads to fewer matches, but now the guys are also getting even fewer matches, and they can't so easily fix it. The only remedy for the males is to start swiping right on anything that you're not put off by. And again, the men doing this exacerbates the problem - a copulatory ouroboros if you will.
If you've been set up by mutual friends, that means you have someone vouch for you. That means you're more likely to be safe.
A bad experience for you is a bad couple of hours and a few hundred dollars. Ask a woman what a bad experience is for them.
Again you're coming at this with such intense anti-male rhetoric - take a step back and look at the problems beyond your own.
Yes I completely agree women DO need to be more careful as they are far more vulnerable and men far more likely to take advantage.
HOWEVER, playing off bad experiences for men as "a bad couple of hours" is just disingenuous. Women can absolutely ruin a man's life and reputation in those "couple of hours" - even worse if they get married and she decides to take him to the cleaners.
Look, I'm not coming at this as some anti-women basher - I see a societal issue that is hurting our men, women, and future. If we want to actually solve that problem the way forward is not increasing the hatred towards men and isolationism of women. We need to come together and not push each other further apart.
Incredibly confused about what is "anti-male" about GP's post?
> A bad experience for you is a bad couple of hours and a few hundred dollars. Ask a woman what a bad experience is for them.
Playing the situation as though it's 0% risk for men and 100% risk for women - making it seem as though only men can be harmful and that they are by nature dangerous. Making it sound like no male has ever had a bad experience with women, and that the onus is completely on the males to fix this situation.
Meanwhile the only men who will listen to this advice are the ones who are already "safe" for women.
Probably the word "intense" shouldn't have been used by me and maybe I'm reading into it too deep.
The female perspective is slightly different here. Like I said, the less selective women have already been out of the dating pool a long time. Additionally, women on dating apps experience additional pressure to be selective because men who are dangerous/abusive/predators stay in the dating pool longer. This is explicitly unfair to normal men, but normal men are also out of the dating pool faster on a statistical level. This means that a dating pool selects for more and more picky women, and more and more dangerous men thereby validating the selectivity of the existing women.
[This is not to say that there are not dangerous/abusive/predatory women. There certainly are. But one of the greatest causes of death of pregnant women is the father of the child-to-be murdering her, and one of the greatest causes of death of recently single women is their now-ex murdering her. There is no similar reciprocation i.e. one of the leading causes of death of fathers-to-be is not the mother-to-be murdering him. Maybe financial ruin in child support, but explicitly not death.]
> men who are dangerous/abusive/predators stay in the dating pool longer
Not only this but they can use these apps to find their victims faster and easier than ever before.
> This is explicitly unfair to normal men
I think it's also unfair to women that they have to pick up the extra work that was done by the "community" before. And a loss for social cohesion too.
I am wondering, though, how someone who just ended a relationship deals with this scenario. As a male you arguably look like one of the abusers, because you're older and looking for a partner; as a female, you're seen as less attractive. At the same time, you're probably also more picky because of the break-up. Seems like they're the most shit-out-of-luck here regardless of what's between their legs.
I also wonder if this will push us towards being with our "high-school sweethearts" for longer than we would otherwise?
> Maybe financial ruin in child support, but explicitly not death.
Yeah absolutely, I'm personally not aware of the exact numbers for this but I wouldn't be shocked if you're right. Also I'd include things like false accusations, or getting her brothers/father to "take care of" the ex. Also we need to account for things like baby entrapment.
Also men tend to express their aggression physically whereas women tend to express it through reputation destruction. This also partially explains where we are today: women are afraid of male physical repercussions, men are afraid of female societal repercussions. Interestingly, it also mimics a lot of the rhetoric you see about "all males being bad" since it's a form of reputation destruction - whereas the male "comebacks" tend to focus on physical acts of violence that they "could do but won't".
> I think it's also unfair to women that they have to pick up the extra work that was done by the "community" before. And a loss for social cohesion too.
I would argue the community explicitly did not do the work of ousting predators. We are only in this generation or so no longer allowed to rape women if we're married to them. It is still legal for adults to marry children, not in an 18-to-17 way but a 38-to-15 way. Child marriages are overwhelmingly older-male-younger-female.
As for being with high-school sweethearts, that imo is also less and less likely, because more adults are forced to travel for employment (e.g. landing a job in a big city and moving).
I'd like to clarify when I said "community" I wasn't talking exclusively about friend groups (or your mother's friend groups) helping with recommendations and whatnot. I mean the community as a whole - men and women, the bailiffs and the blacksmiths: When you know everyone in your town and most of the people in your neighbouring towns it becomes much easier to know who is a bad person that'll abuse you (or to find out that information).
Yes, I'm pushing back on that because in the past it was explicitly legal and normal to abuse women. Women were not allowed to be selective in the first place. So the community explicitly did not perform the protection you are describing. It was only recently made illegal to rape your wife.
What I'm trying to say is that the people around us have always shaped who we pair up with - for better or worse - I think losing that is going to have consequences for social cohesion.
It's impossible to say who historically had it worse in general and I don't really believe such discussion is important except for historians. The only thing that dwelling on it will cause is resentment towards people who never perpetrated those crimes.
You pointed out a historical inaccuracy I wished to correct. You were the one that brought up the past and then incorrectly depicted it. You're reading a little too much into this.
Or don't, leave the games to the fucking morons and get a life.
> (That probably means don't have a fishing photo, even if you enjoy fishing!)
not a good start to a relationship :)
This is assuming all women and all men are equal parties here in terms of candidate quality. I've used these apps and men are not socialized to present themselves in a visual medium the way women are. I'm not talking men should be putting on makeup, but a shocking number of men don't understand grooming their facial hair to fit their face shape, framing a photo of themselves, etc.
My work is running a business where I sell my time and skills.
If I don’t bring my identity, I don’t make sales. My business is an extension of who I am (and my decades of experience).
My identity helps customers understand that. It’s part of a brand story.
I was with you until you mentioned “brand”. You don’t need your identity to be a “brand”. Even more, it tends to be detrimental, because “story” implies inauthenticity.
A story is just a sequence of events, it does not necessarily have to be fictional (i.e. inauthentic).
Uh sounds like you need therapy, not an AI personality makeover
[dead]
I've only recently started using AI, and have discovered my use or rejection of it is predicated on my feelings for the task. This argument of "authenticity" really resonates.
I'm a manager, so when I'm sending emails to a customer or talking with one of my reports, I care deeply - so you might get some overwrought florid prose, but it's my overwrought florid prose.
On the other hand, I have to lead a weekly meeting that exists solely to provide evidence for compliance reasons, something out of the CIA's sabotage field manual that David Graeber has probably written about. But it is now a thirty-second exercise in uploading a transcript to ChatGPT, prompting for three evidentiary bullet points, and pasting the output in a wiki no human will ever read.
That's also what I do. I hand-write every email because these words have my name under them. On the other hand, if I'm asking the tax office to issue a specific document, I let AI handle it.
I wonder how people feel about "dumber" tools like hemingway.app that make mechanical suggestions for readability, like suggesting simple synonyms and highlighting sentences that are too long. I've used it for writing documents that were important and that I knew a lot of people would read.
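Those mechanical checks are simple enough that you can sketch the core of one yourself; here's a rough Python version of the "sentence too long" highlight (the 25-word threshold is my arbitrary pick, not hemingway.app's actual rule):

    import re

    def long_sentences(text, max_words=25):
        # Naive split on terminal punctuation, then flag wordy sentences.
        sentences = re.split(r"(?<=[.!?])\s+", text.strip())
        return [s for s in sentences if len(s.split()) > max_words]

    sample = ("Short sentence. This one, by contrast, rambles on and on through "
              "clause after clause, piling up qualifiers and asides until any "
              "reader would be forgiven for losing the thread before the end.")
    for s in long_sentences(sample):
        print(f"too long ({len(s.split())} words): {s}")

The real tools layer on synonym dictionaries and passive-voice detection, but the spirit is the same: dumb, deterministic, and it never rewrites anything for you.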
I’m hoping part of the AI revolution will be to eliminate overwrought florid prose. The excuse can be “it’s terse because AI wrote it”.
I was thinking about the authenticity of my writing earlier this week and wondering why I have no problem accepting code from an AI and committing it, but find that passing off an AI's writing as my own feels not just wrong, but immoral on the level of purposeful plagiarism. I feel a distinct difference, but I'm not particularly clear on why. I'm okay with sharing AI writing, but only when I've clearly communicated it was written by AI.
Probably related to why I can copy a piece of code from elsewhere (with sufficient work to verify it does what I expect and only what I expect) but I don't copy a quote and use it as my own. My words are my words. My code doesn't have the same guarantee.
Code uses a simplified set of instructions to instruct a computer to do things. Hopefully these instructions can be understood and maintained by a human.
Writing uses the entire breadth of human language to convey information between human beings with unique and complex understandings of the universe. If those words come from a machine that is not you - that is not someone - you ought to disclose it.
It's probably because communication is a complex dance between humans, where you're constantly signaling that you're part of some group with the other person. Think of any profession or team, where members share common ways of speaking: jargon, inside jokes, terms of art, terms of endearment, etc. It's useful for cohesion, trust, and efficiency because you're assured that the person you're talking to is indeed "one of us."
If you use an AI to communicate, then you either fail to mimic those group membership signals and you look like an idiot. Or you succeed and show that a machine can fool humans at this game. Any grifter can come along and establish trust in a group by relying on this tech. This dance that humans have been doing since the dawn of time suddenly breaks down, and that doesn't feel good.
AI "polishing" tools are essentially a form of anti-compression. Lets take some information represented concisely and needlessly pad it with formalities and waffle so it appears more "professional" (whilst also throwing away useful "metadata" like message tone).
No doubt the recipient will also be using some form of AI summarization that strips away all that added "polish" - making the whole exercise entirely redundant!
It just feels absurd.
AI has simultaneously created two industries: One around using AI to pad out documents and another one around summarising with AI.
The more the first pads, the more the second is needed.
If AI really were Intelligent, I'd fear it's an organism making sure it's needed in the ecosystem.
https://marketoonist.com/2023/03/ai-written-ai-read.html
I haven't laughed this hard in a while. Thanks
Why read something someone couldn't be bothered to write?
Yeah, and once that happens, why read anything from them ever again?
> No doubt the recipient will also be using some form of AI summarization that strips away all that added "polish" - making the whole exercise entirely redundant!
Not entirely, there’s still the energy usage and stock price increases. All because everyone’s too anxious to just talk to each other directly.
My (evidently mentally disabled) previous manager was so proud of being able to use AI to generate the bullshit he sent out to clients. What the morons are really doing is proving they're useless, let them.
You have an individual and unique way of speaking and writing? We're going to have to polish that out with the slop machine, citizen.
I use Grammarly to check for errors I make when writing more serious stuff (English is not my native language), but any suggestion it sends my way changes the tone of the text so much that it sounds like it's written by a PR agency with a fake, forced attitude; bland and colorless.
So no, thank you. Correct my textbook punctuation mistakes, and leave my wordy and "not positive enough" sentences to me.
I'm getting increasingly irritated by Grammarly's attempts to boringify my writing. I've even considered doing away with it entirely, even if it means I have to do my own spell-checking.
I'm working on a dystopia where the resistance is using text-in-text steganography to coordinate, so unpolished communication is flagged for extra scrutiny because all those stylistic choices might be hiding something.
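For anyone curious, the cheapest real-world text-in-text trick isn't stylistic at all: zero-width Unicode characters. A toy Python sketch (the stylistic-choice encoding in the story would be subtler, but it's the same idea of smuggling bits through an innocuous carrier):

    # Toy steganography: zero-width space = 0 bit, zero-width non-joiner = 1 bit.
    ZW0, ZW1 = "\u200b", "\u200c"

    def hide(cover, secret):
        bits = "".join(f"{b:08b}" for b in secret.encode("utf-8"))
        payload = "".join(ZW1 if bit == "1" else ZW0 for bit in bits)
        first, _, rest = cover.partition(" ")
        return f"{first}{payload} {rest}"  # the payload renders as nothing

    def reveal(stego):
        bits = "".join("1" if c == ZW1 else "0" for c in stego if c in (ZW0, ZW1))
        return bytes(int(bits[i:i+8], 2) for i in range(0, len(bits), 8)).decode("utf-8")

    assert reveal(hide("meet at the usual place", "dawn")) == "dawn"

Which is exactly why a regime scrubbing "unpolished" text for hidden payloads is plausible: normalize everything through the slop machine and the side channels die.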
Ever read The Freeze Frame Revolution?
Though, its steganography is a bit more obvious, given the "you've got to be able to read it" requirement.
The kindle version of the book starts with https://imgur.com/uIBjwlQ
This would give you the opportunity to have another ending to the book.
I love The Freeze Frame Revolution, and yeah I was planning on doing something similar. I figure I'll publish an app that OCRs the text and displays the hidden message, and make the book read differently based on whether you can or can't see the messages.
Then readers can use the app for hiding their own messages to each other also.
(realizing this is veering off into tangents of tangents)...
An idea I discussed with someone while tossing story ideas around: a mystery built on the reliable/unreliable narrator, mixed with the ability to view immutable past events (lock a display screen to a geographic location, with a knob to rewind/fast-forward time up to the present), where the viewer finds something amiss. Upon the "something's not right..." moment, the narrator reviews their own events of the previous day and finds that the events don't match their memory. Is the narrator reliable? Is the narrator questioning their own reliability the actual narrator, or an external agent trying to cast doubt on the reliability of the actual narrator?
That does sound interesting--to give the reader a bit more tooling to decide whether they're being lied to and by whom. You might enjoy The Quantum Thief, which isn't published in such a unique way, but which has a fun story that would fit right in.
Your thoughts will be replaced by <thoughts></thoughts>. For your convenience.
What a strange complaint about AI. This already happens and happened without AI.
>You have an individual and unique way of speaking and writing? You're going to wish your e-mail finds people well, corporate-monkey.
Is it really so strange to complain about the nudging towards the phenomenon of which you speak?
Newspeak.
This 100%
https://www.youtube.com/watch?v=NV0CtZga7qM
Wtf is that comment section
I wouldn't say the author's style is unique or individual in any way. Every single tumblr blog sounds like that. You could easily create a "make edgy" function that you could feed your formal writings to have them turned into that kind of prose. Is it better or worse than "polish"? There's no substantial difference. The "polish" version sure sounds less exhausting than the original.
your writing style doesn't have to be "unique" or "original" to be yours
It's personal. "Unique" and "individual" might not be the best words to describe it, but it's clearly a style they've intentionally adopted. They appear to have been quite successful for it to!
Digital culture was fake and performative and insincere enough before Turboclippy: fuck that with something sharp.
It feels like the whole world is turning into an HR department premised on the ideological axiom that killing one man is a murder but killing a million is a statistic.
I truly appreciate the term Turboclippy and will be using it from now on
Well, that's just corporatism taken to the extreme, which is what everything is these days.
I think LLMs are transformative but it's incredible to me how unimaginative most product managers have been. It reminds me of the 90s when people discovered GIFs can be put on web pages so every page had to have a hundred of them. It was tacky, as is most embedded AI.
> I think LLMs are transformative
So is a landmine.
> Kill me if I ever write "Regards" non-ironically.
One time, I couldn't remember the name of the album I wanted Siri to play, so it kept picking the wrong one. After the fifth or so time, I told Siri, "you're killing me, Siri" and it called 911.
The future sucks.
Getting swatted by your smartphone sounds like a Black Mirror episode.
You must be extraordinarily polite to say something that mild after that many failed tries. I mutter expletives at ChatGPT much sooner. AIR (AI-induced rage) is going to be a thing. I just hope it doesn't start to spill over into how I behave with people.
Actually that's pretty cool. Takes you literally and seriously.
What did you expect, that the machine would understand subtleties and respond like an informal friend?
I'm fine with AI being 100% serious
I need an AI detector which just straight up sends any generated emails to /dev/null. If you can't be arsed to write something, why should I waste my time reading the slop?
Yes yes, languages and translations are a potentially valid use case, but irrelevant for the vast majority of people I communicate with.
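The plumbing for that filter would be trivial; the detector is the unsolved part. A tongue-in-cheek sketch where is_probably_llm() is a hypothetical placeholder (reliable AI-text detection doesn't really exist), flagging only the stock slop phrases:

    # Sketch: route suspected slop to /dev/null. is_probably_llm() is a
    # hypothetical placeholder heuristic, not a real detector.
    import mailbox

    TELLS = ("I hope this email finds you well", "I wanted to reach out")

    def is_probably_llm(body: str) -> bool:
        return any(tell in body for tell in TELLS)

    inbox = mailbox.mbox("inbox.mbox")
    for key in list(inbox.keys()):
        payload = inbox[key].get_payload()
        if isinstance(payload, str) and is_probably_llm(payload):
            inbox.remove(key)  # effectively /dev/null
    inbox.flush()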
I actually use LLMs to unpolish dictation results from MacWhisper[0] to match the way I write in chats. MacWhisper lets you set a prompt to automatically post-process your transcribed output. Here is my prompt:
'You are a professional proofreader and editor. Your task is to rewrite this dictation with more casual capitalization and punctuation so it can be used in a chat setting.
- Always capitalize "I" and derived words like "I'm"
- Sentences should start with lower case letters unless they start with “I”
- Replace explicit punctuation callouts with the actual punctuation
- No period at the end of the message, but leave periods between sentences
- Do not remove commas
- Do not change sophisticated words to more common ones
Return only the result. Do not add any explanations or comments about your edit.'
[0]: https://goodsnooze.gumroad.com/l/macwhisper
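For the curious, the same post-processing step can be reproduced outside MacWhisper with any chat-completion API. A minimal sketch, assuming the openai Python package with an OPENAI_API_KEY set; the model name is illustrative, and the prompt is abbreviated from the full version quoted above:

    # Sketch: "unpolish" a dictation transcript with the prompt above.
    from openai import OpenAI

    UNPOLISH_PROMPT = (
        "You are a professional proofreader and editor. Rewrite this dictation "
        "with more casual capitalization and punctuation for a chat setting. "
        "Return only the result."
    )  # abbreviated; the full rule list is quoted above

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def unpolish(transcript: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative; any chat model works
            messages=[
                {"role": "system", "content": UNPOLISH_PROMPT},
                {"role": "user", "content": transcript},
            ],
        )
        return resp.choices[0].message.content

    print(unpolish("Hello comma I'm running five minutes late period"))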
Do you know if there is a Windows equivalent? I can't seem to find one that is as well done as MacWhisper.
Nope. I see a few in search results but none look that great.
The AI polish feels like it was written by someone with only an elementary schooler's understanding of how to write a letter.
Even in stuffy institutions like banks and insurance companies, corporate email style is very straightforward and, idk, undecorated? You learn pretty quick that people only skim emails and every word beyond your core message creates opportunity for your core message to be skipped over. NOBODY is writing salutations or 'i hope this finds you well' or whatever other garbage chatgpt thinks the first paragraph should be.
When I read historical items on lettersofnote I find that even super old business messages are pretty streamlined and only have some crunch because some runner has to physically find the dude you are writing to.
AI takes the personality out of written communication, and swaps it with a bland corporate persona. I want to know that I am working with actual human beings. It reassures me to know that if something goes wrong, I can appeal to those human beings, and not to soulless corporate automatons working off of a flowchart.
I feel like this is what corporate wants. A single way of speaking. A single way of thinking. Round off all the sharp edges and smooth out all the wrinkles. Humanity filtered through a sycophantic psychotic AI tuned to perfection by psychopaths who want to change humanity into their image.
"The idea of a Soul, Free Will -- these are Over!" - Technocracy explained by Prof Yuval Harari https://youtu.be/NV0CtZga7qM
Double-plus-good.
(People tend to forget it, and it's only seen in passing, but the Ministry of Truth has machines for writing terrible books in the novel!)
It's the end game of consumerism.
Everything is a product.
Every choice is a product purchasing choice.
We ourselves are products.
An endless sea of products selecting from other products.
I get the frustration, but I think there’s a hidden assumption in this discussion: that everyone can write well in English.
Only about 5% of the world’s population are native English speakers, but more than twice that number use it daily. For many, AI rewriting isn’t about losing personal style—it’s about making sure they can communicate clearly in a language that isn’t their first.
It’s fair to dislike how AI flattens unique voices, but for a huge part of the world, it’s not erasing personality—it’s making participation possible.
If you can't translate your email into a foreign language, how is the AI going to rewrite your incorrect prose into correct prose? it's working on something incorrect. And then after the ai rewrites it, how can you tell if it's saying what you want it to?
When I'm communicating with a non-native speaker, I intentionally use shorter / easier to translate words and sentences, and I give them more leeway with word usage / don't expect them to use the right words all the time. And that's fine! Communication still happens! We manage!
But if a non-native speaker starts running their text through an AI it makes communication harder, not easier. I can't tell if their word choice is intentional or if the AI did it. A tiny mistake I can understand gets expanded into multiple incorrect sentences.
>If you can't translate your email into a foreign language, how is the AI going to rewrite your incorrect prose into correct prose? it's working on something incorrect. And then after the ai rewrites it, how can you tell if it's saying what you want it to?
Absolutely this. "Accessibility" and "participation" are great goals on paper, but the tools at hand are likely to introduce confusion because the user fundamentally isn't in a position to judge the quality of the output.
Last year I worked with someone who used AI tools like this, to compensate for their lack of English. It was dreadful beyond belief and basically unworkable.
Lack of comprehension on what other people said was a big issue. But also having four incomprehensible paragraphs thrown at me for what could be six words (not infrequently based on a misunderstanding of a very basic simple sentence).
I'm not a native speaker either, but the only way to learn a language is to actually use it. For better or worse, English is the modern world's lingua franca.
You somehow assume that being a native English speaker is correlated with being able to write well in English.
It's how well you know English that correlates with being able to write well in English.
How are you supposed to communicate clearly if you are relying on an AI to communicate for you? How could you even tell whether it properly communicates your ideas if you couldn't communicate them properly in the first place?
AI translation is definitely a great enabler, both for written material and things like live subtitles, but people are already aware that translations are imperfect and can be disputed. Something that anime fans can get very heated about.
copy and pasting chatgpt code won't make people better programmers and copy and pasting english snippets won't make people fluent in english
it just makes them better at copy and pasting
English is not my native language yet somehow I share this sentiment towards AI. I'm fine with a spell checker, I don't need whatever I write completely rewritten, thank you very much.
The proper solution is to work with an editor that asks clarifying questions, not one that rewrites the whole thing into something totally different.
For published work: if it's not worth editing, it's not worth reading (I would go further personally and say that most published, edited, and peer-reviewed work in your area of interest isn't worth reading anyway).
For unpublished work, like an email, ask the AI to translate the passage while maintaining style and tone. It will still flatten it, but not as much as the complete dogshit I read in the article.
Communication is a job requirement, faking it with AI is going to go about as well as someone faking programming skills. Not very!
Forget Newspeak, we’ll just have to get everyone on VR and polish their language until it’s identically upbeat no matter the situation.
I might forgive AI once it starts taking my video calls, nothing important ever happens and no one listens so it's a pretty perfect application.
Why is the author mad? The "polish" button does exactly what you would expect it to do if you understand "polish" as "make polite". If you want to keep your words true and "unpolished", maybe at the risk of sounding rude, then don't use that feature.
It is actually one of the first things I used LLMs for, and I still do. Latest example: I received an email from an administration asking me to sign some document, but it contained some incorrect information. I used an LLM to write a nice letter telling them to fix it. I could have just replied "it is not A, it is B", but without some formalism that sounds rude, and I didn't want to sound rude; I also didn't want to make the effort for that formalism. It may be one of the things LLMs do best: they are like supercharged model letters.
Of course I don't do that in informal situations or when I want some emotional connection. By the way, no LLM touched this comment, I don't consider posting on HN a formal situation.
EDIT: typos, but maybe I should have left them in, to show that it wasn't the work of an LLM ;)
> and I didn't want to sound rude
I think it's interesting how this has changed. If you go back and look at politically correct corpo-speak from decades past, it was bland and impersonal but aimed to be direct, clear, and reasonably concise.
These days, it's still impersonal but now couched in platitudes, apologies, and excessively polite language in an attempt to avoid possibly offending anyone in any conceivable fashion.
I'm worried that we'll get to a point in regular work communications where responding to a question with a simple and succinct "yes" instead of, "Great idea, I think that is an absolutely perfect plan and you surely deserve a call-out at the next all-hands meeting for coming up with the idea to fix that problem in the team's code!!!" will get you put on an HR watchlist for being difficult to get along with.
From my experience with LLMs, it matches your description of "corpo-speak". I think it is the right balance of politeness. The result is usually rather simple, and when it is not, that's because it wants me to add extra information, which is often a good idea, not because of too much politeness.
It is not at all like what you describe at the end; it's way more boring, and that's the point. But out of curiosity, I asked ChatGPT for an appropriate formula for a use case like yours and got "Sounds good to me! I accept your suggestion—let's move forward with it." Of course, there is some formalism here; in day-to-day communication with people you know, I've found it to become less formal over the years. "ok" is fine now when it wasn't before, I think because text-based instant messaging is now commonplace.
My mother is a first-generation immigrant. Her writing isn’t ideal, but AI allows her to communicate articulately where before she might have been discriminated against.
Awesome! Better hurry up to graduate out of the slop academy though, the style is becoming a red flag real fast now.
That's one of my favorite optimistic applications of AI-assisted writing.
Our society is very openly discriminatory against people who aren't able to produce written communication that fits a huge number of unwritten rules.
LLMs know those rules. Helping people with ESL better interact with a world that requires them is a huge win in my book.
I’m pretty close to wildly unprofessional raging at coworkers sending me LLM-written emails/Slack messages.
So far the 21st century seems to be a mad dash to see who can turn off their brain the most.
You had me at
> "why would I want to translate my email into Polish?"
You know that someone brought that up internally and so they added the sparkles to totally clear that up.
Yes, amusing, given the unsolicited polishing, but I sincerely doubt this was forced on our fine humorist. English-to-English editing, or “translation” from snarky to suave, is straightforward to understand and fix.
But what about translation by professional human translators between languages? Is it really any better at preserving the intent and subtext?
Kundera has an amazing essay, “The Sentence”, in his collection of essays “Testaments Betrayed”, on the woes of translating, yes, a single sentence from Kafka’s “The Castle”.
As he (or his translator Linda Asher) says: the sentence “describes the coition of K. and Frieda”.
Under the bar too ;-) Not a topic I tend to associate with Franz Kafka.
When AI systems can match Milan Kundera (translated from the French), then we will have definitely gotten to Pro level. “Write me a brilliant essay in the style of xxxx, on topic yyyy, in French, and translate it into Mandarin for grins.” I would be a wreck if I had to translate Kundera.
That opening sentence is magical.
I ran into this exact situation when I went to post something on reddit: I thought it was too "rude" and ran it through ChatGPT to make it more polite.
After I posted, I immediately regretted it. It sounded too structured and should have been more whine-y. Weird, but that would have been more likeable.
What's really annoying is how its idea of "polishing" is just making the email worse. Who could possibly benefit from the recipient needing to sift through more boilerplate to find the actual content of the email?
15 years ago I might have imagined that an email sent by a robot would be extremely efficient, including only the relevant information, in order of importance. In 2025 it's apparently the opposite; we're trying our best to communicate effectively and an AI is nagging us to include an extra paragraph of flirtation.
While I find AI to be quite useful in many contexts, I 100% agree with this assessment, and I loathe all these SV bubble-dwellers who think our lives should be airbrushed and "polished" for presentation.
I think a bunch of Google's Gemini ads highlighted how clueless they were on this subject. Like one of their very first Gemini announcements highlighted how you can get AI to write a caption for your social media photos of your dog. God forbid you actually use your own brain to write an actual human caption (flaws and all) for photos of your dog. Google also highlighted how their Pixel camera AI features can "correct" photos of your kids making funny faces in family photos, because how awful would it be if you didn't look like some sort of Stepford family.
I was glad Google finally got a ton of pushback against their ad about using AI for a young girl to write a fan letter to her Olympic idol. Seriously, fuck off with that. Makes me wonder if these marketing folks are actual humans in the first place.
Soon everyone will be texting, emailing, etc. with an AI secretary doing it all.
"AI take a letter and polish it for me..."
Maybe even video conferences will use "Polish" to portray a recreated, lifelike, sophisticated avatar with real-time grammar correction. When folks finally do meet in real life, they discover a socially awkward, illiterate, or tactless boor instead of the suave intellect portrayed by the AI.
Please forgive my comment, I didn't have any polish for it. LOL
Person 1: AI, take these bullet points and write an email to Person 2
Person 2: AI, please take this email from Person 1 and convert it into bullet points
The first rule of working on Gmail is DO NOT change the interface.
The second rule is if you insist, please let us turn off your dumb AI feature.
Since the standard thing for the marketing side of the industry is to promise blatant absurdities, I'm sure the next big thing will be "Let us maintain your authentic voice by learning from your messages!"
Oops, probably too late.
Even though I am not a fan of it, I can see how some people might want to use AI to improve their grammar, maybe because they are afraid to leave a bad impression.
But why would anyone put AI generated images into an Email?
I want a tool that would reverse the content back to the original prompt.
> "Write a thank you note that sounds sincere to that ahole"
> "Some deep musing on the meaning of life tied to b2c marketing for a LinkedIn post"
I was playing with a self-hosted model a while back and instructed it to only give answers that were unhelpful, vague, and borderline rude.
It worked surprisingly well a lot of the time! But most of the time it also kinda broke the model in terms of coherent answers because it was obviously trained for the exact opposite thing.
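This is easy to reproduce with any local setup; a minimal sketch, assuming llama.cpp's OpenAI-compatible server is running locally (the port and model alias are illustrative):

    # Sketch: steer a self-hosted model toward unhelpful answers via a
    # system prompt, using llama.cpp's OpenAI-compatible server.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8080/v1", api_key="unused")

    resp = client.chat.completions.create(
        model="local-model",  # illustrative alias
        messages=[
            {"role": "system", "content": "Only give answers that are "
             "unhelpful, vague, and borderline rude."},
            {"role": "user", "content": "How do I exit vim?"},
        ],
    )
    print(resp.choices[0].message.content)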
I'd be interested in a "polish" feature if it had a drop down where I could select which of my aliases it makes me sound like. It's too easy to use the wrong one's voice here or there.
Is there anyone like me out there who has yet to use AI? I think it's interesting but I have no strong feelings for it. Aside from a few images generated with early image generators I have not used anything else, not even chatgpt. AI search results I mostly scroll past are an exception.
I was an LLM skeptic for a long time. I still have a hard time trusting it to the same extent that most HN'ers appear to. (I would never use an LLM as a substitute for my own "voice" when writing, or put any AI-generated code into production.) But I think I have reached a middle ground: I basically use it as a first approximation when I am exploring something entirely new to me. For example, if I'm learning a programming language, I might ask it for ways to unpack an array into separate variables. Or if I'm reading an ingredients list, I'll ask what psyllium husk is. Basically anything that's moderately easy to verify if I get any suspicion that the LLM is hallucinating again.
These are things that I _used_ to simply ask a search engine, before Google results became 99% SEO-optimized blogspam and therefore useless for actual knowledge-seeking.
I think an important part of overcoming AI skepticism is to understand (at a very high level) how it all works so that you understand its limitations and know when you can and cannot trust it.
I haven't used any LLMs and I'm not missing anything. At some point in the last couple of years I entered a couple of prompts into ChatGPT using my friend's account, just to see it with my own eyes (funny enough, asking about slop before that was even the word, and about hallucinations). I also ran a prompt through llama.cpp just to see if it would work. It did, great, I don't really care.
I've done a few experiments on a project I've been putting off; one involves parsing some email. There are certainly some interesting use cases, but I've yet to figure out how to actually go about deploying a solution.
It does seem, to me, that the benefits are there, but not to the extent that the AI companies would have me believe.
I consider myself principally a software engineer, and like you I have avoided using AI aside from a few "haha look at that" images produced with DALL-E. Very rarely the forced AI content at the top of a search results page has helped me refine the term or concept I was trying to look up.
I like doing the things that AI is supposedly good at. I like learning, I like understanding what I am doing, I like the satisfaction of finally getting to a solution after banging my head against the wall for hours. I like the occasional sensation of being hopelessly lost and finding my way to the light. I like writing text and code. I'm not terribly bored by boilerplate. I like going down rabbit holes and experiencing happy accidents. I can't in good conscience sign my name to something that I did not create and do not understand deeply enough to explain to anybody who asks. I like doing the hard thing.
The immediate response I'm sure that I will get is some variant of "well everybody else is using it as a force multiplier and the way you do it is making the job take too long." And maybe that's true. Maybe I don't care. I am a person who takes pride in my craft. I enjoy the act of making things. Some people don't, I guess.
The day may come where I am viewed as a dinosaur, where the way I work becomes fundamentally incompatible with the way the industry works. And if that day comes, bluntly, fuck the industry. I'll go fix air conditioners instead.
Good news is that AI is equally adept at turning long polite business speak emails into short, blunt, and sweary emails, so you can choose what you prefer.
I don't think Gmail/Outlook offer that yet. (But I might actually use it if they did!)
AI creep is like a leaky sewage pipe oozing gray water into everything you'd want to keep it out of. Disabling it is increasingly difficult on every app.
Nadella and the board approved billions of dollars in spending for AI. It's going to be in every single thing whether you want it or not. Otherwise, there will be a shareholder lawsuit.
A quick glance suggests that, because the site is pretty far afield of usual HN fare, folks here don't realize that Lawson is the author of several books.
I personally do not want you -- for any value of "you" -- to use an AI to "polish" any writing sent to ME. I want the author's actual thoughts, not their thoughts massaged by an LLM.
While not as longstanding as, say, Lileks, thebloggess is someone from the blog era of the Internet, and publishes an RSS feed. She's well known in other internet communities.
"NYT bestselling author" too. Whatever value that holds.
But I think it's a safe assumption that she can write well when she wants to.
The obsessive need for AI developers to make everything into the most banal, inoffensive version of said thing will probably end up being the biggest roadblock for AI taking over human jobs.
You phrase it well. It's not the AI themselves. You can as easily get an AI to take something fundamentally banal and write it into something exciting, dynamic, exotic, or strange.
If you analyze these systems as a work of art, using the postmodern toolset, the people and systems taking one of the most potent technologies humanity has ever created and using it to craft a banality machine is just... very revealing. Like the great-uncle who can't even finish protesting how non-racist he is without using a racial slur in the process.
"We're innovative! We're hip! We're on the cutting edge! We're setting trends! Now here, let me help you turn your text into the grayest corporate sludge imaginable."
It's art. It's probably not art they intended to make, but it's art.
> postmodern
It is, isn't it. It's the Sokal Hoax on a society-wide automated scale: rather than being concerned with what a "text" might "mean", generate millions upon millions of "text" sequences and mechanically separate those which pass as real enough. Baudrillard's simulacrum.
> It's art. It's probably not art they intended to make, but it's art.
People seem to be very insistent that the output of AI is not capital-A art, because that threatens their worldview, ignoring how artists had previously pushed to expand "art" away from any concerns of technique, intent, legibility etc.
I mean the system itself is art. The output is some kind of anti-art, but the system that uses the world's most amazing technology to extract the art out of everything that passes through it is an amazing piece of inadvertent outsider art on its own.
"The obsessive need for AI developers (...)" - Of product product/business people trying to fit into the "AI era". Developers, if given a chance, wouldn't probably built this.
Honestly the only spot-on use case I can think of for an eternally calm, friendly, totally bland and generic customer service AI is for interacting with the Karens of the world and only because it would spare the nerves of human employees and because the logs would no doubt be hilarious to read.
I feel like nothing would turn me into a Karen more than interacting with AI customer service on a regular basis.
It’s already severely frustrating that at many companies, you have to talk to several people (separated by copious waiting) to get your problem solved after handling a severely outdated and underdeveloped phone system.
Yes, of course that’s how it will go in reality. But some customers are terrible people from the start and definitely don’t deserve any human service.
An empowered customer service team could do this by refusing service.
It's an offense to justice that it's so easy to get away with being mean to service workers.
The alternative is being mean to more expensively paid workers. That's what the support staff is ultimately paid for.
I definitely don't disagree.
I think it would be more successful if the tool was labeled "Make more persuasive" or "Manipulate"
Please stop to gtvn me ai - hilarious.
I’d rather get the bland polished version than the “I’m so silly/sarcastic/eccentric” version. Although ideally there’d be an AI setting that makes it both polished and terse.
A huge problem with AI generated content is a reduction to the mean.
As a tangent, the other day I read a post of a person who used to be close to Elon Musk and was giving their account of that relationship, and it was translated to Polish via DeepSeek.
It was not idiomatic Polish - very far from it really, so I guess depending on the model used here, the result could have been equally disappointing if it was really trying to Polish that email.
I simply do not understand the end game of these sorts of features.
Presumably, the idea is to make the other person think you've written the email yourself, and you're a "better"/more corporate/more professional writer than you actually are.
But once everyone starts using this kind of transformer, so nobody assumes that anyone else is actually writing the content they're sending out -- what's the point?! Who benefits?
> Who benefits?
The team that implemented the feature and hit their quarterly goal and got their full quarterly bonus. The execs who set the quarterly goals will be rewarded by wall street for shipping AI features. In 2 years, when reality has struck and the market has moved on from the AI fad, another team will receive their bonus for lowering COGS by removing this feature.
Users' needs are irrelevant to this cycle.
Big tech companies need to be broken up into teeny, tiny pieces.
> Who benefits?
The vendor's share price. This is nothing to do with the _users_; it is about cramming 'AI' into as many press releases as possible because it gives the markets the warm fuzzies. No-one wants to use this, and that's really beside the point; for the time being it makes stock price go up.
Presumably in a year or so there'll be a new fad, and most LLM-based 'features' will go the way of Clippy.
> Who benefits?
Phishers.
Polishing you means snapping back to the mean. AI will not make you extraordinary.
The irony is that someone would just use an AI to summarize the long-winded polished version.
I have noticed many news outlets presenting articles with an "AI summary" on top. Often the article is copied right off of a newswire service, with some elaborations woven in. I have a feeling the thing the LLM is summarising is increasingly something another LLM has previously expanded.
What if we just wrote articles well, such that (a) they were not filled with fluff, and (b) the summary would be to only read the first paragraph of the article?
News outlets rely on you scrolling down the whole article so they can show you more ads.
OneNote now shows the Copilot icon right where I start writing on a blank page. That's infuriating (as much as I see some uses for AI). Fortunately you can turn it off in the 'connected experiences' setting.
I think a lot of people miss the point.
If you're already an effective writer, obviously AI will usually only make things worse.
But for people where English is not their first language (or whatever language you're writing in), polish is a godsend. It fixes your grammatical errors, vocabulary errors, spelling errors, punctuation errors, and all the rest.
Obviously it can go overboard in changing your meaning (as any flesh-and-blood editor also might), and presumably over time these tools will get more refined.
But for most people who write intelligent comments here on HN, you are not the target audience for this feature. Well, until you have to dust off your high-school French or Spanish or whatever for an e-mail exchange.
AI does not want to do anything, whatsoever.
Its owners, however, definitely want you to fall into lockstep: that cooking the Earth for their billion$ is perfectly fine, and that waking up to a life of low-impact, serviceful compassion for the Earth and its peoples is just virtue-signalling when, in fact, it is simply virtuous in the face of demented psychosociopaths.
Note that this all started with all that digital coin-mining. Just more burning the Earth for fakeass money so callous bastards can get richer.
> Note that this all started with all that digital coin-mining. Just more burning the Earth for fakeass money so callous bastards can get richer.
Always strikes me as interesting that "Nvidia" sounds like the syllables in the middle of "invidious"
invidia = envy in Latin
If AI gets us back to nuclear power, LLMs will have done more for climate change than all climate activists, ever, combined.
Not for geopolitical stability, though. What did we learn about making the entirety of human civilization dependent on a constant supply of rare materials hostile countries dig out of the ground?
And what happens when they run out? Back to carbon fossil fuels?
Oh come on - we've been doing this since way before crypto was a thing. We knew about climate change in the 80s.
We knew about climate change in the 1880s.
There, fixed it for you.
Thanks; I don't think I said that we didn't know about it before the 1980s but extra information is rarely a bad thing.
We knew about global warming in the 1880s?
Yes.
got a source for that?
So then it's ok to not only continue ignoring it as the evidence mounts, but to actually accelerate it?
No. No, it's not. It doesn't matter how many fools do something, they're still fools. The root word of ignorance is "to ignore".
You said
"Note that this all started with all that digital coin-mining"
But it didn't. That's what I'm saying.
They weren't using modern GPUs and multiprocessor machines to cook the Earth, which has been happening on a large scale since the turn of this century.
That's what I was and still am saying.
No, we were burning coal and cutting down trees. I'm not saying that what we're doing now is fine, if anything I'm agreeing with you but pointing out that profit-driven environmental destruction has been happening since way before the invention of cryptocurrency.
I had the same initially confusing experience as the author, confusing "polish" for the language "Polish". The product manager should have used AI to get feedback on this feature...
I put the following into chatGPT: "I am a developer for an email client. I want to incorporate some AI features to help users write more polished emails. I'm thinking I will have an option that appears under the email body where a user can click "Polish". Can you see any issues with this UX?"
ChatGPT: There are a few potential issues or points of confusion with a button labeled “Polish”:
- Ambiguity: Some users might confuse “Polish” (as in “to make shiny or more refined”) with “Polish” (the language). If your product is used internationally or by non-native English speakers, that label can be unintentionally confusing. Even among English speakers, “Polish” may not be immediately clear as a writing enhancement feature—people might wonder, “What exactly does it do?”
- Lack of Context: If users don’t already know what the feature does, they won’t know what to expect when they click “Polish.” You may need a tooltip, hover text, or short description like “Improve your text with AI suggestions” so users understand it’s about refinement, grammar, tone, etc.
- User Control: How much will it change their text? Will it rewrite in a new style or just fix grammar? If it makes heavier changes, users might worry about losing their personal style or voice. Be clear about the scope of the changes.
- Trust & Consent: Users may have privacy concerns if their email content is being sent to an AI for processing. Even if that’s already in your privacy policy, the feature’s label (just “Polish”) doesn’t convey what happens behind the scenes. You might consider adding a “What will happen to my text?” link or a discreet explanation about how data is handled.
- Alternative Wording: Using a label like “Improve,” “Refine,” “Enhance,” or “Polish Text” might provide clearer expectations. You might also consider dynamic or contextual labeling like “Improve Grammar & Clarity.”
How to address these concerns:
- Add clarity in the UI: A short descriptive text or tooltip under the “Polish” button can briefly explain what the feature does: “AI-powered text improvements.”
- Offer user choice: Present a preview of the changes before they commit. Let them accept or reject individual suggestions so they feel in control.
- Signal the feature’s AI: Some users will want to know that an AI is involved, especially if they’re concerned about style changes or privacy. Labeling it as “Polish with AI” or something similar can help set expectations.
Overall, the idea of a quick “one-click to refine” feature is appealing, but make sure the button’s purpose and functionality are crystal clear—both to avoid confusion and to build user trust.