Show HN: An MCP server that gives LLMs temporal awareness and time calculation
Most MCP demos wire LLMs to external data stores. That’s useful, but MCP is also a chance to give models perception — extra senses beyond the prompt text.
Six functions (`current_datetime`, `time_difference`, `timestamp_context`, etc.) give Claude/GPT real temporal awareness: It can spot pauses, reason about rhythms, and even label a chat’s “three‑act structure”. Runs locally in <60 s (Python) or via a hosted demo.
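For readers curious what tools like these boil down to, here is a minimal sketch in plain Python. The tool names match the post, but the signatures and return shapes are my guesses, and the real server wraps these as MCP tools rather than bare functions:

```python
from datetime import datetime, timezone

def current_datetime() -> str:
    """Return the current UTC time as an ISO-8601 string."""
    return datetime.now(timezone.utc).isoformat()

def time_difference(start_iso: str, end_iso: str) -> dict:
    """Compare two ISO-8601 timestamps and describe the gap between them."""
    start = datetime.fromisoformat(start_iso)
    end = datetime.fromisoformat(end_iso)
    seconds = (end - start).total_seconds()
    return {
        "seconds": seconds,
        "human": f"{int(seconds // 60)} min {int(seconds % 60)} s",
    }
```

In an MCP server these would be registered as tools so the model can call them on demand instead of relying on whatever date was baked into its training data.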
If time works, what else could we surface?

- Location / movement (GPS, speed, "I'm on a train")
- Weather (rainy evening vs clear morning)
- Device state (battery low, poor bandwidth)
- Ambient modality (user is dictating on mobile vs typing at desk)
- Calendar context (meeting starts in 5 min)
- Biometric cues (heart-rate spikes while coding)
Curious what other signals people think would unlock better collaboration.
Full back story: https://medium.com/@jeremie.lumbroso/teaching-ai-the-signifi...
Happy to discuss MCP patterns, tool discovery, or future “senses”. Feedback and PRs welcome!
The submitter made a basic MCP function that returns the current time, so... Claude knows the current time. There is nothing about sundials and Claude didn't somehow build a calendar in any shape or form.
I thought this was something original or otherwise novel, but it's not... the code isn't complex or even moderately challenging, and it didn't result in anything surprising... it's just a clickbaity title.
What’s new here isn’t just exposing `current_datetime()`. The server also gives the model tools to reason about time (differences between timestamps, the context around them, and so on).
I also ask Claude to request the time at every turn, which creates a timeseries parallel to our interactions. When Claude calls these tools every turn, it starts noticing patterns (it independently labelled our chat as a three-act structure). That was the surprise that prompted the title. Ask Claude “what patterns do you see so far?” after a few exchanges.
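To illustrate the kind of pattern that per-turn timestamps enable, here is a hypothetical helper (not part of the server) that flags long pauses in a conversation:

```python
from datetime import datetime

def find_pauses(turn_times_iso, threshold_s=300):
    """Return (turn_index, gap_seconds) for each gap between consecutive
    turns that exceeds the threshold (default: 5 minutes)."""
    times = [datetime.fromisoformat(t) for t in turn_times_iso]
    pauses = []
    for i in range(1, len(times)):
        gap = (times[i] - times[i - 1]).total_seconds()
        if gap > threshold_s:
            pauses.append((i, gap))
    return pauses
```

The model does something like this implicitly once the timestamps are in context; the point of the sketch is just that the raw material for "the user stepped away for 20 minutes" is trivially recoverable from the timeseries.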
If you still find it trivial after trying, happy to hear why—genuinely looking for ways to push this further. Thanks for the candid feedback.
Finding a good title is really hard. I'd appreciate any advice on that. You'll notice I wrote the article several weeks ago, and that's how long it took me to figure out how to pitch on HN. I'd appreciate any feedback to improve. Thanks!
(Submitted title was "Show HN: I gave Claude a sundial and it built a calendar")
I just finished some changes to my own little project that provides MCP access to my journal stored in Obsidian, plus a few CLI tools for time tracking, and today I added recursive yearly/monthly/weekly/daily automatic retrospectives. It can be adapted for other purposes (e.g. project tracking) by tweaking the templates.
https://github.com/robertolupi/augmented-awareness
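For anyone curious about the shape of the recursive roll-up idea, here is a toy sketch (my own simplification, not the project's actual layout or templates): daily notes group into weekly retrospectives, and the same grouping applied again yields monthly and yearly ones:

```python
from datetime import date

def week_key(d: date) -> str:
    """ISO week label for a date, e.g. 2025-W02."""
    year, week, _ = d.isocalendar()
    return f"{year}-W{week:02d}"

def roll_up(daily_notes: dict) -> dict:
    """Group daily notes (date -> text) into weekly retrospectives
    by concatenating each week's entries in date order."""
    weekly = {}
    for d, text in sorted(daily_notes.items()):
        weekly.setdefault(week_key(d), []).append(text)
    return {k: "\n".join(v) for k, v in weekly.items()}
```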
"We made an API for time so now the AI has the current time in its context" is the bulk of it, yes?
The docs are pictures, and what is a Pipfile in any context? It looks like a requirements file, but you apparently never followed the news about pip or uv.
Every AI project is like that and I'm really scared for the future of programming.
`uv` is great, but `pipenv` is a perfectly well-tested Python dependency manager (albeit slow). Down in the instructions it explicitly asks you to use `pipenv` to manage dependencies. I also don't think your "what is a Pipfile in any context" complaint is fair: I don't think I've ever seen a project list a dependency manager and then explicitly call out the artifacts that manager needs in order to function.
And BTW it's already happening, it's not a fantasy.
Imagine a woodworking forum where someone shows off their little six-piece toolbox and gets called out because it doesn't adhere to residential building code, complete with worries about what this means for the profession of woodworking...
For instance at Boeing, the fault for the software problems lies entirely with the managers: they made the decision to subcontract software engineering to a third party to cut costs, but they also didn't provide the contractor with enough context and support to do a good job. The subcontracting itself wasn't the problem (it can be the right call in some circumstances, with proper scoping and oversight); the management was.
The MCP protocol is changing every few weeks, it doesn't make sense (to me at least) to professionalize a technical demo, and I appreciate that LLMs allow for faster iteration and exploration.
MCP + LLMs = our solution to data integration problems, which include context awareness limitations.
It's an exciting development and I am glad you see it too!
Knowing quite a bit about sundials, I was genuinely curious how that would work, since a typical (horizontal) sundial doesn't carry enough information to make a calendar: it's a time-of-day device, not a time-of-year device. You could teach the model about the Equation of Time or the Sun's declination, but then it wouldn't need the sundial. There are sundials, like a spider sundial or a nodus sundial, that encode date information too, but there's overlap/ambiguity between the two solstices as the sun goes from its highest declination to its lowest and back. Leap years add some challenges too. There are various ways to deal with all of that, but I think you can see why I was curious how producing a calendar from a sundial would work (without giving the model some other information that makes the sundial unnecessary).
My only worry with these MCP "sensors" is that they add to the token cost, and more importantly to the context-window cost. It would be great to have the models regularly poll for new data and factor it into their inferences. But I think the models (at least with current attention mechanisms) will always face a trade-off between how much they are provided and what they can focus on. I am afraid that if I give Claude numerous senses, it will pay less attention to our conversation.
But your comment (and again, I apologize for disappointing you!) makes me think about creating an MCP server that provides, say, the position of the sun in the sky for the current location, or maybe some vectorized representation of a specific sundial.
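For what it's worth, a crude version of that sun-position server would only need textbook approximations. The formulas below are the standard low-accuracy ones (good to roughly a degree and a minute, respectively), not what an astronomer would use:

```python
import math

def solar_declination(day_of_year: int) -> float:
    """Approximate solar declination in degrees for a given day of year."""
    return -23.44 * math.cos(2 * math.pi / 365 * (day_of_year + 10))

def equation_of_time(day_of_year: int) -> float:
    """Approximate equation of time in minutes (sundial time minus clock time)."""
    b = 2 * math.pi * (day_of_year - 81) / 364
    return 9.87 * math.sin(2 * b) - 7.53 * math.cos(b) - 1.5 * math.sin(b)
```

Combined with a latitude/longitude, these give sun altitude/azimuth and the sundial correction, which is exactly the kind of slowly varying ambient signal a model could poll.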
I think digitized information is more native to models (i.e., it requires fewer processing steps to extract insights from), but it's possible that providing this kind of input would produce unexpected insights. They may notice patterns, e.g., that someone is grumpier when the sun is in a certain position.
Thanks for your thoughtfulness!
For those looking for "a calendar", here is one[0] I made from a stylized orrery. No AI. Should be printable to US Letter paper. Enjoy.
EDIT: former title asserted that the LLM built a calendar
[0] https://ouruboroi.com/calendar/2026-01-01
https://www.linkedin.com/posts/emollick_i-am-starting-to-thi...
As an aside, I like the further prompt exploration approach.
An example of this from the other day - https://chatgpt.com/share/68767972-91a8-8011-b4b3-72d6545cc5... and https://chatgpt.com/share/6877cbe9-907c-8011-91c2-baa7d06ab4...
One difference from the LinkedIn post is that I try to avoid delegating choices or judgement to it in the first place. It is an information source and reference librarian (one that needs to be double-checked; I like that it links its sources now).
However, that's a me thing - something that I do (or avoid doing) with how I interact with an LLM. As noted with the stories of people following the advice of an LLM, it isn't something that is universal.
It's really frustrating. I've come to loathe the agreeable tone, because every time I see it I remember the times I've hit this pain point in design.