Show HN: An MCP server that gives LLMs temporal awareness and time calculation
This is an open‑source Model Context Protocol (MCP) server that gives any LLM a sense of the passage of time.
Most MCP demos wire LLMs to external data stores. That’s useful, but MCP is also a chance to give models perception — extra senses beyond the prompt text.
Six functions (`current_datetime`, `time_difference`, `timestamp_context`, etc.) give Claude/GPT real temporal awareness: It can spot pauses, reason about rhythms, and even label a chat’s “three‑act structure”. Runs locally in <60 s (Python) or via a hosted demo.
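For anyone who wants to see the shape of it, here is a minimal sketch of the simplest of these tools, assuming the official MCP Python SDK's FastMCP helper (the repo's actual implementation may differ; the function body is illustrative):

```python
from datetime import datetime, timezone

from mcp.server.fastmcp import FastMCP

# Hypothetical reimplementation of the simplest of the six tools.
mcp = FastMCP("passage-of-time")

@mcp.tool()
def current_datetime() -> str:
    """Return the current date and time as an ISO 8601 string (UTC)."""
    return datetime.now(timezone.utc).isoformat()

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
```

Once the server is registered with a client like Claude Desktop, the model can call `current_datetime` whenever it wants a fresh reading instead of relying on a stale prompt timestamp.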
If time works, what else could we surface?
- Location / movement (GPS, speed, “I’m on a train”)
- Weather (rainy evening vs clear morning)
- Device state (battery low, poor bandwidth)
- Ambient modality (user is dictating on mobile vs typing at desk)
- Calendar context (meeting starts in 5 min)
- Biometric cues (heart‑rate spikes while coding)
Curious what other signals people think would unlock better collaboration.
Full back story: https://medium.com/@jeremie.lumbroso/teaching-ai-the-signifi...
Happy to discuss MCP patterns, tool discovery, or future “senses”. Feedback and PRs welcome!
This title really doesn't fit what the submission actually did.
The submitter made a basic MCP function that returns the current time, so... Claude knows the current time. There is nothing about sundials and Claude didn't somehow build a calendar in any shape or form.
I thought this was something original or otherwise novel, but it's not: it's not complex or even moderately challenging code, nor did it result in anything surprising. It's just a clickbaity title.
Fair point on the metaphor—let me be concrete.
What’s new here isn’t just exposing `current_datetime()`. The server also gives the model tools to reason about time, like `time_difference` and `timestamp_context`.
I also ask Claude to check the time at every turn, which creates a time series parallel to our interactions. When Claude calls these tools every turn, it starts noticing patterns (it independently labelled our chat as a three-act structure). That was the surprise that prompted the title. Ask Claude “what patterns do you see so far?” after a few exchanges.
If you still find it trivial after trying, happy to hear why—genuinely looking for ways to push this further. Thanks for the candid feedback.
Finding a good title is really hard. I'd appreciate any advice on that. You'll notice I wrote the article several weeks ago, and that's how long it took me to figure out how to pitch on HN. I'd appreciate any feedback to improve. Thanks!
Clearly an honest mistake but yeah a metaphor probably shouldn't be used in a title like this, since many readers will take it literally. I've changed the title now to language from the article.
(Submitted title was "Show HN: I gave Claude a sundial and it built a calendar")
Thanks so much for the title change! I completely understand.
I apologize to the community for the mistake. I appreciate this feature of this community's discourse. I'll remember to use literal, precise language in the future.
Your reworded title fits perfectly — thank you!
That's MCP/AI libraries for ya.
Give it a picture of the Sun at the same time every day, and let's see if it comes up with a calendar from that.
Agreed. I’m tired of these ridiculous claims by people just trying to hype up LLMs. Flagging this article.
I'm sorry for choosing an inappropriate title — that was my bad, and fortunately @dang helped correct this mistake.
Aside from the title, what claims do I make that you find ridiculous?
Physical/mental health and personal journaling?
I just finished some changes to my own little project that provides MCP access to my journal stored in Obsidian, plus a few CLI tools for time tracking, and today I added recursive yearly/monthly/weekly/daily automatic retrospectives. It can be adapted for other purposes (e.g. project tracking) by tweaking the templates.
https://github.com/robertolupi/augmented-awareness
Hey, thanks so much for sharing, your repo is really cool, including the GEMINI.md context engineering file!
I am curious: You say "offline-first or local-first, quantified self projects", what models do you use with your projects?
I find LLMs like the Claude and GPT families incredibly impressive for integration and metacognition — however, I am not sure yet which LMs are best for that purpose, if any are.
Your "Augmented Awareness" framework seems to be metacognition-on-demand. In practice, how has it helped you recently? Is it mostly automated, or does it require a lot of manual data transfers?
I am assuming that the MCP server is plugged into a model, and that in the model you run prompts to run retrospectives.
Have you written about this?
I was looking for the calendar app that was built but I guess it's metaphorical.
"We made an API for time so now the AI has the current time in it's context" is the bulk of it, yes?
One‑shot timestamps (the kind hard‑coded into Claude’s system prompt or passed once at chat‑start) go stale fast. In a project I did with GPT‑4 and Claude during a two‑week programming contest, our chat gaps ranged from 10 seconds to 3 days. As the deadline loomed I needed the model to shift from “perfect” suggestions to “good‑enough, ship it” advice, but it had no idea how much real time had passed.
With an MCP server the model can call now(), diff it against earlier turns, and notice: "you were away 3 h, shall I recap?" or "deadline is 18 h out, let’s prioritise". That continuous sense of elapsed time simply isn’t possible with a static timestamp stuffed into the initial prompt; you'd have to create a new chat to update the time, and every fresh query would require re‑injecting the entire conversation history. MCP gives the model a live clock instead of a snapshot.
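To make "diff it against earlier turns" concrete, here is a rough sketch of the kind of ready-made answer a `time_difference` tool can hand back so the model never does the subtraction itself (the function name matches the server's tool list above, but the return format here is my illustration, not necessarily the repo's exact API):

```python
from datetime import datetime

def time_difference(timestamp1: str, timestamp2: str) -> dict:
    """Compare two ISO 8601 timestamps and return both raw seconds
    and a human-readable summary the model can quote directly."""
    t1 = datetime.fromisoformat(timestamp1)
    t2 = datetime.fromisoformat(timestamp2)
    total = int(abs((t2 - t1).total_seconds()))
    hours, rem = divmod(total, 3600)
    return {
        "seconds": total,
        "human_readable": f"{hours} h {rem // 60} min",
    }

# time_difference("2024-03-17T14:00:00", "2024-03-17T17:12:00")
# -> {"seconds": 11520, "human_readable": "3 h 12 min"}
```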
The current time (and the user's location; looking at you, Google Gemini) is injected in most LLM chats now, isn't it?
At the start. But the human may perceive the rest of the conversation as taking many minutes or hours, while the LLM never gets any signal that later text is chronologically divided from earlier text. It needs a polling API like this.
Again and again, your code lacks the basics of engineering. Where is your package manager and requirements? Your code would never pass any test in a professional context. It's like you haven't gone past a Python tutorial and feel the AI output is acceptable.
The docs are pictures, and what is a Pipfile in any context? It looks like a requirements file, but you never bothered to follow the news about pip or uv.
Every AI project is like that and I'm really scared for the future of programming.
You can program just for fun, without having to make it a professional project. Just like you can do some woodworking without having the goal of becoming a professional carpenter.
Yes you can. Until managers and CEOs demand that you use those tools or you're fired. Whenever I see such a bad project, I think of what may happen in the next 5 years, and it's dreadful. We're professionals after all.
And BTW it's already happening, it's not a fantasy.
You can both write hacky projects in your free time and write good, well-tested code in your professional life. It’s not that deep.
This is how I've always coded. My own projects are like freeform doodles on scrap paper. My professional work is completed, polished commissions.
What someone builds privately using AI has nothing to do with what expectations organizations decide to put on their employees. This isn't something that will make it into a professional context so who cares if it is in fact shit?!
Imagine a woodworking forum and someone being called out for showing off their little 6 piece tool box and someone saying how this doesn't adhere to residential building code and what this does for the profession of woodworkers...
Disposal8433, I am not unsympathetic to your point, but I think that bad managers and CEOs are bad managers and CEOs.
For instance at Boeing, the fault for the software problems lies entirely with the managers: they made the decision to subcontract software engineering to a third party to cut costs, and they didn't provide the contractor with enough context and support to do a good job. It's not subcontracting that was bad — subcontracting can be the solution in some circumstances, with proper scoping and oversight — it was the management.
The MCP protocol is changing every few weeks; it doesn't make sense (to me at least) to professionalize a technical demo, and I appreciate that LLMs allow for faster iteration and exploration.
This really isn't dissimilar to any work I've seen in a professional setting, minus the screenshot docs. I agree those are bad. Everything useful is in the README.
`uv` is great but `pipenv` is a perfectly well-tested Python dependency manager (albeit slow). Down in the instructions it explicitly asks you to use `pipenv` to manage dependencies. I also do not think your assertion of "what is a Pipfile in any context" is fair, as I don't think I've ever seen a project list a dependency manager and then explicitly call out artifacts that the dependency manager may require to function.
I am giving a lecture on context-sensitive systems. One place where all this context awareness failed was getting it into higher-level reasoning and adapting program logic (think, for example, of the Android Activity API). I was just telling the students that with MCPs as the interface to all the context sources (like sensor-based activity classifiers, but definitely also time) we might overcome that challenge soon. Cool to see people starting to implement that kind of stuff...
That's exactly what I've been thinking too!
MCP + LLMs = our solution to data integration problems, which include context awareness limitations.
It's an exciting development and I am glad you see it too!
Not really anything in there regarding the sundial. I'm guessing that was put in there metaphorically for clickbait reasons.
Knowing quite a bit about sundials, I was genuinely curious about how that would work, as a typical (horizontal) sundial doesn't have enough information to make a calendar. It's a time-of-day device, rather than a time-of-year device. You could teach the model about the Equation of Time or the Sun's declination, but it wouldn't need the sundial at that point. There are sundials like a spider sundial, or nodus sundial, that encode date information too. But there's overlap/ambiguity between the two solstices as the sun goes from highest to lowest, then back to its highest declination. Leap years add some challenges too. There are various ways to deal with those, but I think you can see why I was curious how producing a calendar from a sundial would work (without giving it some other information that makes the sundial unnecessary).
I'm sorry for the misleading title about a sundial, it was a metaphor, and based on the feedback here, if I had to do it again I would pick a different one. :-)
My only worry with these MCP "sensors" is that they add to the token cost — and more importantly to the context-window cost. It would be great to have the models regularly poll for new data and factor it into their inferences. But I think the models (at least with current attention) will always face a trade-off between how much they are given and what they can focus on. I am afraid that if I provide Claude numerous senses, it will lower its attention to our conversation.
But your exciting comment (and again, I apologize for disappointing you!) makes me think about creating an MCP server that provides, say, the position of the sun in the sky for the current location, or maybe some vectorized representation of a specific sundial.
I think the digitized information that we experience is more native to models (i.e., it requires fewer processing steps to extract insights from), but it's possible that providing them this kind of input would result in unexpected insights. They may notice patterns, e.g., that I'm grumpier when the sun is in a certain phase.
Thanks for your thoughtfulness!
If it helps, I have several methods of computing the Sun's position at varying degrees of accuracy/complexity, and some sundial code at https://www.celestialprogramming.com/
Noted, thank you!
Why a tool though, why not just append these details onto the context, literally just append "current epoch" timestamp into the context between updates?
Great question! Injecting a raw epoch each turn can work for tiny chats, but a tool call solves three practical problems:
1. *Hands‑free integration*: ChatGPT, Claude, etc. don’t let you auto‑append text, so you’d have to do it manually. Here, a server call happens behind the scenes—no copy‑paste or browser hacks.
2. *Math & reliability*: LLM core models are provably not able to do math (without external tools); this is a theoretical limitation that will not change. The server not only returns now() but also time_difference(), time_since(), etc., so the model gets ready‑made numbers instead of trying to subtract 1710692400‑1710688800 itself.
3. *Extensibility*: Time is just one "sense." The same MCP pattern can stream location, weather, typing‑vs‑dictation mode, even heart‑rate. Each stays a compact function call instead of raw blobs stuffed into the prompt (see the sketch below).
So the tool isn’t about fancy code—it’s about giving the model a live, scalable, low‑friction sensor instead of a manual sticky note.
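To illustrate point 3: a tool like `timestamp_context` (one of the six functions in the server) can return compact labels rather than raw numbers. Here is a rough sketch; the field names and thresholds are my invention for illustration, not the repo's exact output:

```python
from datetime import datetime

def timestamp_context(timestamp: str) -> dict:
    """Wrap a raw timestamp in human-scale context labels.
    (Illustrative sketch; the real tool's fields may differ.)"""
    t = datetime.fromisoformat(timestamp)
    part_of_day = (
        "morning" if 5 <= t.hour < 12
        else "afternoon" if 12 <= t.hour < 18
        else "evening" if 18 <= t.hour < 22
        else "night"
    )
    return {
        "iso": t.isoformat(),
        "day_of_week": t.strftime("%A"),
        "part_of_day": part_of_day,
        "is_weekend": t.weekday() >= 5,  # Saturday=5, Sunday=6
    }
```

A compact payload like this costs a handful of tokens per turn, which matters once several "senses" are polling at once.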
I love the basic point. Timing-based association is fundamental to thinking, across species. How does the bunny know that you're stalking it? Because your eyes move when it moves. I had no idea that LLMs missed all this. Plus the political reference is priceless.
Glad the little political wink landed with at least one reader!
You’re right: Stripping away all ambient context is both a bug and a feature. It lets us rebuild “senses” one at a time—clean interfaces instead of the tangled wiring in our own heads.
Pauses are the first step, but I’m eager to experiment with other low‑bandwidth signals:
• where the user is (desk vs. train)
• weather/mood cues (“rainy Sunday coding”)
• typing vs. speech (and maybe sentiment from voice)
• upcoming calendar deadlines
If you could give an LLM just one extra sense, what would you pick—and why?
Claude can run code. Add to your custom instructions to check the time regularly and you're done. Why do you need an MCP?
It's a good idea. I didn't think of it because this project came about as a "let's try to write a remote MCP server now that the standard has stabilized" exercise.
But there are some issues:
1. Cheaper + deterministic: Having the model generate code is much more costly, both in tokens and context window. (Generating the code takes many more tokens than making a tool call.) And there can be variability in the generated query, like issues with timezones.
2. Portability: Code execution is not portable; not all LLM or LM environments have access to a code interpreter. An MCP tool call is a much lower resource requirement.
3. Extensibility: This approach is extensible; it allows us to expand the toolkit with additional cognitive scaffolds that contextualize for the model how we experience time. (This is a fancy way of saying: the code only gives the timestamp, but building an MCP allows us to contextualize this information — "this is the time I'm sleeping, this is the time I'm eating or commuting, etc.")
4. Security: Ops teams are happier approving a read-only REST call than arbitrary code execution.
One last thing I will say: the MCP server specification is unclear about how much of the initial "instructions" (the server's README for the model) actually gets discovered. In the "passage-of-time" MCP server, the instructions tell the model about each available tool, as well as the requirement to poll the time at each message.
In practice, this hasn't really worked. I've had to add a custom instruction to "call current_datetime" at each message to get Claude to do it consistently over time.
Still, it is meaningful that I ask the model to make a single quick query rather than generate code.
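For reference, the official Python SDK does let a server ship those instructions at initialization; how prominently clients surface them to the model is the part that's unclear. A sketch, assuming the SDK's `instructions` parameter:

```python
from mcp.server.fastmcp import FastMCP

# Server-level instructions travel with the handshake; whether the
# client injects them into the model's context is client-dependent.
mcp = FastMCP(
    "passage-of-time",
    instructions=(
        "Call current_datetime at the start of every message so you "
        "can track the passage of time across turns."
    ),
)
```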
Because LLMs are notoriously unreliable, especially over long context. Telling it something like "check the time before every turn" is going to fail after enough interactions. An MCP call is more reliable for programmatic, specific queries, like retrieving the time.
I would argue that "that gives any LLM a sense of the passage of time" is but a suspension of disbelief and metaphorical hope.
For those looking for "a calendar", here is one[0] I made from a stylized orrery. No AI. Should be printable to US Letter paper. Enjoy.
EDIT: former title asserted that the LLM built a calendar
[0] https://ouruboroi.com/calendar/2026-01-01
The sycophancy from Claude is incredibly jarring. I agree with Ethan Mollick that this could turn out to have a more disastrous impact than AI hallucination.
https://www.linkedin.com/posts/emollick_i-am-starting-to-thi...
It's even a blocker for some design patterns. I.e., it's difficult to discuss options and choose the best one when the AI agrees with you no matter what. If you ask "But what about X?" it is more likely to reverse course and agree with your new position entirely.
It's really frustrating. I've come to loathe the agreeable tone because every time I see it I remember the times when I've hit this pain point in design.
I absolutely hate this too. And the only way around it is to manipulate it into cheerfully pointing out all the problems with something in a similarly sycophantic way.
I found that three words help: "critical hat on". Then you get the real talk.
What a ridiculous world we live in.
In my ChatGPT customization prompt I have:
"I want an intelligent agent (or one that pretends to be) that answers the question rather than something that I chat with."

As an aside, I like the further prompt exploration approach.
An example of this from the other day - https://chatgpt.com/share/68767972-91a8-8011-b4b3-72d6545cc5... and https://chatgpt.com/share/6877cbe9-907c-8011-91c2-baa7d06ab4...
One part of this, in comparison with the LinkedIn post, is that I try to avoid delegating choices or judgement to it in the first place. It is an information source and reference librarian (that needs to be double-checked; I like that it links its sources now).
However, that's a me thing - something that I do (or avoid doing) with how I interact with an LLM. As noted with the stories of people following the advice of an LLM, it isn't something that is universal.
I'm struck by how often Claude responds with "You're right! Now let me look at the file..." when it can't know whether I'm right until after it looks at the file in question.
What am I missing? I am not seeing this particular example as sycophantic. Claude is saying something like: the user's assertion is improbable, but if it were the case, the user needs to show/prove some of the things in this table.
They have introduced a beta 'Preferences' feature recently under Custom Instructions. I've had good results from this preference setting in GPT:
I just copied it into Claude's preferences field; we'll see if it helps.