Show HN: 1 Million Rows

23 points by ankitchhatbar | 40 comments | 8/11/2025, 12:14:44 PM | 1mrows.pages.dev

Comments (40)

timacles · 2h ago
One of the rare times HN can come together and agree on something, which is that this is a loose weekend project thrown together quickly that doesn't solve any problem in particular.
stanmancan · 3h ago
Not 100% sure what I'm looking at here? Am I missing something, or is it just a table of data with an "infinite scroll" that loads 200 records at a time?
ankitchhatbar · 3h ago
Loading more than a few thousand rows on a web page will make it unusably slow, especially when you add a lot more features to it.

This is built so that only what's seen or about to be seen is put on the page. The rest is kept ready on the server, in local memory, depending on what the user is doing.

This allows for a scalable solution that lets you view thousands of records and interact with them.
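
A minimal sketch of that windowing idea, assuming a fixed row height and a plain scrollable div; all names and numbers here are illustrative, not the actual implementation:

    // Render only the rows near the viewport; a tall spacer keeps the scrollbar proportions right.
    const ROW_HEIGHT = 32;   // assumed fixed row height in px
    const OVERSCAN = 20;     // extra rows rendered above/below the viewport

    function mountVirtualList(container: HTMLElement, rows: string[][]) {
      const spacer = document.createElement('div');
      spacer.style.position = 'relative';
      spacer.style.height = `${rows.length * ROW_HEIGHT}px`;
      container.appendChild(spacer);

      const draw = () => {
        const first = Math.max(0, Math.floor(container.scrollTop / ROW_HEIGHT) - OVERSCAN);
        const visible = Math.ceil(container.clientHeight / ROW_HEIGHT) + 2 * OVERSCAN;
        const last = Math.min(rows.length, first + visible);
        spacer.replaceChildren();  // rows that scrolled out of range are simply dropped
        for (let i = first; i < last; i++) {
          const row = document.createElement('div');
          row.style.cssText = `position:absolute; top:${i * ROW_HEIGHT}px; height:${ROW_HEIGHT}px;`;
          row.textContent = rows[i].join(' | ');
          spacer.appendChild(row);
        }
      };

      container.addEventListener('scroll', () => requestAnimationFrame(draw));
      draw();
    }

Whatever the real implementation does, the DOM only ever holds the handful of rows near the viewport; the rest stay in memory or on the server.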

pmontra · 1h ago
We can't view thousands of records at the same time; that is why we have pagination, filters and sorting. Any library that can display 50 rows per page is good to go. The real work is on the backend.

1M rows in memory, with pagination or infinite scroll, is interesting only if we load that data and go offline, and all of the filtering and sorting is up to the browser. I'd say that it's a niche use case. Furthermore, 1M rows x 1 kB each is 1 GB, so we enter an order of magnitude ridden with trouble.

mzajc · 2h ago
> This is such that only what's seen or about to be seen is put on the page

This reads like it's a lazy loading library, but then the roadmap has features like

> Assign Items to Users

> Kanban View

> Collaborative Editing

Which read like something you'd have in a project management solution. How do these two concepts form a cohesive product and who is the target audience? I've seen my fair share of Jira and Trello hellscapes, but I doubt Kanban boards with more entries than memory can handle are very common.

runlaszlorun · 2h ago
I'm not so sure that loading "more than a few thousand rows" is as bad as it used to be.

I did some quick benchmarks a couple of years back. It's been a while, but I want to say Chrome was drawing 10k rows of a decent size (10 columns of real-world data and about 500 B/row, IIRC) in about 300 ms on a 10-year-old MBP.

I'll do a little benchmarking later today if I get a chance.
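
For anyone who wants to repeat that kind of measurement, here's a rough sketch with synthetic data; the numbers will vary a lot by machine, browser, and styling:

    // Time how long it takes to build and attach 10k rows x 10 columns of synthetic data.
    const ROWS = 10_000, COLS = 10;
    const table = document.createElement('table');
    const t0 = performance.now();
    for (let r = 0; r < ROWS; r++) {
      const tr = table.insertRow();
      for (let c = 0; c < COLS; c++) {
        tr.insertCell().textContent = `row ${r} col ${c}`;
      }
    }
    document.body.appendChild(table);
    requestAnimationFrame(() => {
      // the first frame after attaching roughly includes layout + paint
      console.log(`10k rows: ${(performance.now() - t0).toFixed(0)} ms`);
    });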

zdragnar · 2h ago
10 columns is pretty tame. Add in another 5 or 10 to introduce horizontal scrolling, add some fixed-position columns, HTML inputs like a checkbox in each row, apply styling or custom rendering to individual cells, and the cost starts to show a little more quickly.
stanmancan · 2h ago
I know why you're doing it; I'm just not sure what I'm looking at. I'm not sure if this is supposed to be a product, but right now it's literally just a paginated HTML table?
vlan121 · 3h ago
Looks like someone is rebranding pagination.
burkaman · 3h ago
In Phase 3 there will be AI Features though.
leftnode · 3h ago
The company names look like Amazon merchants.
djfobbz · 1h ago
You basically reinvented something Elixir Streams already nails out of the box.

Streams in Elixir are lazy, chunked, and backpressure-friendly, so you can process any size dataset without loading it all into memory...whether it's a million or a trillion rows. The trick is you never try to render them all in the browser (that's where virtualization comes in).

So yeah...neat work, but battle-tested versions of this have been around for a while.
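
The same lazy, chunked idea, translated into JS terms purely as an illustration (the /rows endpoint and chunk size are made up; Elixir's Stream module is the real thing being described above):

    // Pull rows in fixed-size chunks; the consumer never holds more than one chunk in memory.
    async function* rowChunks(url: string, chunkSize = 1000): AsyncGenerator<unknown[]> {
      for (let offset = 0; ; offset += chunkSize) {
        const res = await fetch(`${url}?offset=${offset}&limit=${chunkSize}`);
        const rows: unknown[] = await res.json();
        if (rows.length === 0) return;  // source exhausted
        yield rows;                     // next chunk is only fetched when the consumer asks
      }
    }

    async function countAllRows(): Promise<number> {
      let total = 0;
      for await (const chunk of rowChunks('/rows')) {
        total += chunk.length;          // stand-in for real per-chunk work
      }
      return total;
    }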

dustingetz · 1h ago
Above 50k the UI needs to change, because 1) you can’t count the collection, and 2) the scroll height becomes too unwieldy to use with the mouse: slight adjustments to the handle will skip forward many pages. And if you try, at some point you exceed the pixel height browser scroll bars can support, needing a custom non-native scroll bar. Anyway, well before that point, roughly at 50k records, you need to switch to a “search” UX (much smaller result sets), because there is no way to actually access page 99910 of your million-record collection.
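
One common workaround for that scroll-height limit, sketched loosely: cap the fake scrollable height and map the scroll position back to a row index proportionally. The 10M px cap and row height below are assumptions; real browser limits vary.

    // When the true list height would exceed what the browser can scroll,
    // compress it into a capped spacer and map scrollTop -> row index proportionally.
    const MAX_SPACER_PX = 10_000_000;  // assumed cap; actual limits differ per browser
    const ROW_HEIGHT = 32;

    function firstVisibleRow(scrollTop: number, viewportPx: number, totalRows: number): number {
      const realHeight = totalRows * ROW_HEIGHT;
      if (realHeight <= MAX_SPACER_PX) {
        return Math.floor(scrollTop / ROW_HEIGHT);  // normal 1:1 mapping
      }
      const maxScroll = MAX_SPACER_PX - viewportPx;
      const ratio = maxScroll > 0 ? scrollTop / maxScroll : 0;
      return Math.min(totalRows - 1, Math.floor(ratio * totalRows));
    }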

tldr: “show me the demo”

notachatbot123 · 3h ago
Congrats on your vibe-coded website!
zoba · 3h ago
Is this... a database? A set of React components? An app? It should be much more immediately understandable.
merelysounds · 2h ago
Possible bug report: on Safari mobile, when I grab the scroll bar and move it down a bit, the website reloads; if I do it again, I get an error message (“a problem repeatedly occurred…”).
NoboruWataya · 1h ago
I had hoped it might be like one of those "1 million pixels" websites, where anyone can run queries to update the rows.
mockingloris · 2h ago
I'd trim the columns down to a few, show the user a filter with the displayed columns toggled on, and let them know they can toggle more on.

The column text should also be trimmed to a uniform max length except when clicked on. You could make it pop out on the page with CSS.

A better color scheme wouldn't hurt either.

Had the same idea when I saw https://github.com/rowyio/rowy.

Stumbled on an idea while reading an HN entry a few days back, and now I will merge them into a niche product idea.

└── Dey well

arjonagelhout · 58m ago
I get “A problem repeatedly occurred on <url>” on iOS Safari.
turblety · 3h ago
Nice one!

Coincidentally I worked on a large table renderer too this weekend: https://github.com/markwylde/react-massive-table

I noticed you didn't quite get to a million rows. For me, it cut off at 671,088.

Same thing happened when I built mine.

I came across the same thing. In the end I just manually made the rows appear at their absolute position. Seemed to work well.

ankitchhatbar · 3h ago
What browser are you using? Some browsers cut off early due to scroll limitations. I could get Firefox to about 300,000.
betageek · 3h ago
By "Lightning fast management tool" you mean "virtualised table"?
hn111 · 2h ago
Having the column widths jump around while scrolling, and the absence of in-page search, doesn't seem like ‘rock-solid reliability’ to me.

It seems this is just a minimal implementation of a ‘virtual list’.

pdyc · 1h ago
I dabbled in this last year and wrote about the challenge here: https://newbeelearn.com/blog/million-rows-csv-debug-story/
jasonjmcghee · 2h ago
I tried the live demo with synthetic data.

To whom it may concern: I scrolled a bit with the scrollbar on iOS and the page immediately crashed.

OsrsNeedsf2P · 2h ago
On Android the rows just didn't load
pmontra · 2h ago
On Android Firefox they do load. I'm on my tablet. It fills the first half of the page with rows. The bottom half is empty.
spicybright · 3h ago
What is it though? Management tool is so vague.
wwdx · 3h ago
Good idea, but it flashes/blinks when I scroll, which doesn't feel very smooth.
bram2w · 3h ago
When I started working on Baserow (this seems similar based on the roadmap) a couple of years ago, I thought it would be a big challenge to quickly render a million rows in the browser. Introducing a system that fetches a page of rows based on the scroll offset, with a small debounce, did the trick. We only had a couple of field types, and it was all incredibly fast.
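
A minimal sketch of that fetch-a-page-on-scroll-offset-with-a-debounce approach (the endpoint, page size, and delay are made up, not Baserow's actual code):

    // Debounced page fetch keyed off the scroll offset; rendering is left out.
    const ROW_HEIGHT = 32, PAGE_SIZE = 100, DEBOUNCE_MS = 50;
    let timer: ReturnType<typeof setTimeout> | undefined;

    function onScroll(container: HTMLElement, render: (rows: unknown[], startIndex: number) => void) {
      clearTimeout(timer);
      timer = setTimeout(async () => {
        const firstRow = Math.floor(container.scrollTop / ROW_HEIGHT);
        const page = Math.floor(firstRow / PAGE_SIZE);
        const res = await fetch(`/api/rows?page=${page}&size=${PAGE_SIZE}`);
        render(await res.json(), page * PAGE_SIZE);  // draw the fetched page at its row offset
      }, DEBOUNCE_MS);
    }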

The thing that makes performance complicated for a no-code database is when you have 30 interconnected tables, some with 200 fields, containing many formulas or other computed fields like lookups or rollups. Updating a single cell can result in thousands of other rows that must be updated across different tables. If there are 30 users making constant changes, locking PostgreSQL rows under the hood while the formulas are recalculated, and then a couple of n8n workflows making many API requests to those tables, that's when things get interesting. Especially in combination with features like webhooks, real-time updates, 100+ filters, grouping, 26 field types, date dependencies, aggregations, and importing/exporting whole databases.

When implementing a new feature, I've heard users say it's not complicated because it's just adding a checkbox. Making it run at scale and keeping things performant is what makes it complicated.

captn3m0 · 4h ago
Firefox/iOS - attempting to scroll the demo after zooming in a bit, just refreshes the page.
oksurewhynot · 1h ago
I can only see 670k rows
oulipo · 2h ago
Nobody wants to "view" 1M rows... people want analytics about which subset of the rows they should look at for a given task.
forshaper · 4h ago
what can I do with these really big tables?
whalesalad · 3h ago
I really wish 1M rows was impressive.
xnx · 4h ago
"lightening fast"? Probably meant "lightning fast".
huqedato · 1h ago
What the heck is this app doing?
myflash13 · 3h ago
oh dear, another Airtable clone. Also see Baserow.
bestest · 2h ago
This is terrible and not worthy of the HN front page.

Terrible from the front-end side of the implementation:
- performs worse than your average arbitrary-amount-of-rows-that-won't-fit-on-the-screen library (it should perform the same no matter whether it's 1k, 1m, or 1mm rows)
- is seemingly buggy
- is pointless on its own, because THIS demo is a client-side demo, and no one loads that much data on the client side.

Revisit this when this demo is performant AND data is loaded from the backend.

Ignoring that, every front-end JS developer should explore these kinds of libs and also try to implement them themselves, because they're basically front-end 101.