Show HN: HN top 30 summarized by Gemini 2.5 Pro
5 points by mbm | 12 comments | 5/1/2025, 8:12:01 PM | tinysums.ai
Fun little project, had Gemini 2.5 Pro summarize HN's top 30 each hour, both the stories and comment sections.
Pretty impressed with Gemini 2.5 Pro. It's probably the first model other than Claude 3.7 Sonnet whose output I actually find readable. I normally use 3.7 Sonnet for coding, but used Gemini for the codegen on this one as well. Using Cursor, it seemed to follow instructions better than Claude generally does, and to remain lucid during very long agent sessions.
Thanks for your feedback!
--
But I would implement an "if landscape, then present a list of titles in a left pane, with <a href='#pos'> positional links to the article stubs in the right pane" layout (no JS needed).
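A minimal sketch of that idea as a Rails-style ERB template (the story data and CSS class names here are illustrative, not from the actual app): the left pane lists titles as anchor links, the right pane holds the stubs, and a CSS `@media (orientation: landscape)` rule would place the panes side by side, so no JS is needed.

```ruby
require "erb"

# Hypothetical story list; in the real app these would come from the DB.
stories = [
  { id: "story-1", title: "Show HN: HN top 30 summarized" },
  { id: "story-2", title: "Another front-page story" },
]

# Left pane: titles as positional links. Right pane: the article stubs.
# A `@media (orientation: landscape)` CSS rule would lay these out side by side.
template = ERB.new(<<~HTML)
  <nav class="left-pane">
    <% stories.each do |s| %>
      <a href="#<%= s[:id] %>"><%= s[:title] %></a>
    <% end %>
  </nav>
  <main class="right-pane">
    <% stories.each do |s| %>
      <article id="<%= s[:id] %>"><h2><%= s[:title] %></h2></article>
    <% end %>
  </main>
HTML

html = template.result(binding)
puts html
```

Clicking a title then just scrolls the right pane to the matching `id`, which the browser handles natively.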
- Rails 8
- Embedded React frontend w/ Tailwind
- Sidekiq for cron jobs
- Perplexity API for story info extraction via URL
- Gemini 2.5 Pro for classifying story extraction results as valid/invalid
- Gemini 2.5 Pro for summarizing stories and HN comment threads (tried Claude but preferred the personality of Gemini surprisingly)
- Cursor with Gemini 2.5 Pro + Claude 3.7 Sonnet for codegen
For example, the "story info extraction" and its "results evaluation" deserve a more in-depth explanation.
How do you have the LLM summarize the article and the HN submission page: do you put the whole text in the input (as in: "Summarize the following text: <page text here>")?
For the story summary: send a prompt to the Perplexity API (which can access URLs) requesting an extraction of the article's content (a sort of very detailed technical brief). Then use Gemini to classify the Perplexity results as valid or invalid (sometimes Perplexity can't access the URL, and when that happens it doesn't reliably return the string I've asked it to return). Finally, send a more detailed prompt to Gemini requesting the story summary, generated with a specific style and format (Markdown).
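That three-step flow (extract, classify, summarize) could be sketched roughly like this. The clients are injected as lambdas so the control flow can be shown without real API calls; in the actual app they would wrap the Perplexity and Gemini HTTP APIs, and all names here are illustrative, not the author's code.

```ruby
# Sketch of the story-summary pipeline: extract -> classify -> summarize.
class StorySummarizer
  def initialize(extractor:, classifier:, summarizer:)
    @extractor  = extractor    # Perplexity: URL -> detailed technical brief
    @classifier = classifier   # Gemini: was the extraction actually valid?
    @summarizer = summarizer   # Gemini: brief -> styled Markdown summary
  end

  def call(url)
    brief = @extractor.call(url)
    # Gemini acts as a gate: skip summarization when extraction failed.
    return nil unless @classifier.call(brief) == "valid"
    @summarizer.call(brief)
  end
end

# Stand-in lambdas instead of live API clients:
pipeline = StorySummarizer.new(
  extractor:  ->(url)   { "Brief for #{url}" },
  classifier: ->(brief) { brief.start_with?("Brief") ? "valid" : "invalid" },
  summarizer: ->(brief) { "## Summary\n\n#{brief}" }
)

puts pipeline.call("https://example.com/article")
```

The validity gate is the interesting design choice: rather than trusting Perplexity's own error string, a second model checks whether the extraction looks like real article content.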
For the comments summary: use the HN API to pull all comments, then prepare a Markdown document containing either all of them or, if a given count is exceeded, a selected subset (based on user karma, replies, and thread depth). Annotate each comment with an index (e.g., 1.1) to indicate its nesting to the LLM. Along with some formatting and stylistic guidance, send that to the LLM requesting summarization.
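The indexing step above might look roughly like this (a sketch, assuming a simple nested comment tree; the field names and shape are hypothetical, not the actual code): walk the tree depth-first and prefix each comment with a dotted index so the nesting survives in a flat Markdown document.

```ruby
# Annotate a nested comment tree with dotted indices (1, 1.1, 1.2, 2, ...)
# so the LLM can infer reply structure from a flat list of lines.
def annotate(comments, prefix = [])
  comments.each_with_index.flat_map do |comment, i|
    path  = prefix + [i + 1]
    index = path.join(".")
    ["#{index}. #{comment[:text]}"] + annotate(comment[:children] || [], path)
  end
end

thread = [
  { text: "Great project!", children: [
    { text: "Agreed.", children: [] },
    { text: "How does it scale?", children: [] },
  ] },
  { text: "What stack is this?", children: [] },
]

puts annotate(thread).join("\n")
# 1. Great project!
# 1.1. Agreed.
# 1.2. How does it scale?
# 2. What stack is this?
```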
While the story summaries are generally static once generated, regenerate the comment summaries on an interval (this could be optimized).
Hopefully that helps a little!
> [The] announcement ... stirred up the usual HN mix of excitement, skepticism, and security paranoia
"Ah, the concerns of the involved... Bakers and their care towards flour"