Microsoft Python Driver for SQL Server

35 kermatt 12 9/17/2025, 3:28:25 PM github.com ↗

Comments (12)

denis_dolya · 36m ago
I’ve been working with SQL Server from Python on various platforms for several years. The new Microsoft driver looks promising, particularly for constrained environments where configuring ODBC has historically been a source of friction.

For large data transfers (for example, Pandas or Polars DataFrames with millions of rows), performance and reliability are critical. In my experience, fast_executemany in combination with SQLAlchemy helps, but bulk operations via OPENROWSET or BCP are still the most predictable in production, provided the proper permissions are set.
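
A minimal sketch of that fast_executemany path with SQLAlchemy and pyodbc; server, credentials, and table name are placeholders:

    # Sketch: fast_executemany batches row parameters into a single ODBC
    # array call instead of one round trip per row. All names are placeholders.
    import pandas as pd
    from sqlalchemy import create_engine

    engine = create_engine(
        "mssql+pyodbc://user:password@myserver/mydb"
        "?driver=ODBC+Driver+18+for+SQL+Server",
        fast_executemany=True,
    )

    df = pd.DataFrame({"id": range(1_000_000), "val": "x"})

    # chunksize keeps client memory bounded on multi-million-row frames
    df.to_sql("target_table", engine, if_exists="append", index=False,
              chunksize=50_000)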

It’s worth noting that even with a new driver, integration complexity often comes from platform differences, TLS/SSL requirements, and corporate IT policies rather than the library itself. For teams looking to simplify workflows, a driver that abstracts these nuances while maintaining control over memory usage and transaction safety would be a strong improvement over rolling your own ODBC setup.

th0ma5 · 27m ago
This is the correct perspective. Often driver issues transcend technical and political boundaries. My old team dropped a vendor who changed the features of a driver, then spent several years trying to find another, as well as making that vendor reapply and present a new case, which didn't work out for them.

zurfer · 1h ago
This is really timely. I just needed to build a connector to Azure Fabric and it requires ODBC 18 which in turn requires openssl to allow deprecated and old versions of TLS. Now I can revert all of that and make it clean :)
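
For reference, a minimal sketch of connecting through the new driver, assuming the package exposes a DB-API-style connect() that accepts an ODBC-style connection string; server and credentials are placeholders:

    # Assumption: mssql-python provides a DB-API connect() taking a
    # connection string. Everything below is illustrative, not confirmed API.
    from mssql_python import connect

    conn = connect(
        "Server=myfabric.example.net;Database=mydb;"
        "Uid=user;Pwd=secret;Encrypt=yes;"
    )
    cursor = conn.cursor()
    cursor.execute("SELECT @@VERSION")
    print(cursor.fetchone()[0])
    conn.close()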
__mharrison__ · 1h ago
Very cool. It used to be a huge pain to connect to SQL Server from Python (especially on non-Windows platforms).
qsort · 52m ago
I do expect this package to make connecting easier, but it was okay even before. ODBC connectivity via pyodbc has always worked quite well and wasn't really any different from any other ODBC source. I'm more on the data engineering side and very picky about this kind of stuff; I don't expect the average user would even notice, beyond the initial pain of configuring ODBC from scratch.
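
For the record, the pyodbc baseline looks like this once the ODBC driver is installed; connection details are placeholders:

    # Plain pyodbc against MS ODBC Driver 18; all names are placeholders.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 18 for SQL Server};"
        "SERVER=myserver;DATABASE=mydb;UID=user;PWD=secret;"
        "Encrypt=yes;TrustServerCertificate=no;"
    )
    cur = conn.cursor()
    cur.execute("SELECT TOP 1 name FROM sys.tables")
    print(cur.fetchone())
    conn.close()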
tracker1 · 8m ago
IIRC, I had trouble when I installed the MS ODBC driver and some of the Ubuntu (WSL) updates out of order. I generally prefer a language driver package where available.

Would be nice if MS and Deno could figure things out to get SQL working in Deno.

abirch · 1h ago
What my work self would love is to easily dump Pandas or Polars DataFrames to tables in SQL Server as fast as possible. I see bcp mentioned, but I don't see an example of uploading a large Pandas DataFrame to SQL Server.
A4ET8a8uTh0_v2 · 1h ago
Honestly, what I find myself fighting more often than not lately isn't the actual data, code, or schema, but layers of bureaucracy, restrictions, data leakage prevention systems, and the specific file limitations imposed by all of the above...

There are times I miss being a kid and just doing things.

qsort · 45m ago
How large? In many cases dumping to a file and bulk loading is good enough. SQL Server in particular has OPENROWSET with bulk options, which is especially handy if you're transferring data over the network.
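
Sketched from Python with pyodbc, the pattern is roughly this; the share path must be readable by the SQL Server service account, and every name here is a placeholder. OPENROWSET(BULK ...) in a SELECT works similarly when you need per-column control:

    # Dump-to-file-then-bulk-load sketch; paths, DSN, and table are placeholders.
    import pandas as pd
    import pyodbc

    df = pd.DataFrame({"id": range(1_000_000), "val": "x"})
    # header=False because BULK INSERT reads raw rows
    df.to_csv(r"\\fileshare\staging\rows.csv", index=False, header=False)

    conn = pyodbc.connect("DSN=mydsn", autocommit=True)
    conn.execute(
        r"""
        BULK INSERT dbo.target_table
        FROM '\\fileshare\staging\rows.csv'
        WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', TABLOCK);
        """
    )
    conn.close()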
abirch · 38m ago
Millions of rows. I tried the OPENROWSET approach but ran into permission issues with the shared directory. Using fast_executemany with SQLAlchemy has helped, but sometimes it still takes a few minutes. I tried bcp locally as well, but IT hasn't wanted to deploy it to production.
sceadu · 49m ago
You might be able to do it with ibis. Don't know about the performance though
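
Something like this, if the API matches; a heavily hedged sketch that assumes ibis-framework's MSSQL backend (pip install 'ibis-framework[mssql]') and uses placeholder connection details:

    # Assumption: ibis's MSSQL backend and create_table() behave as below.
    import ibis
    import pandas as pd

    con = ibis.mssql.connect(
        host="myserver", user="user", password="secret", database="mydb"
    )
    df = pd.DataFrame({"id": [1, 2, 3], "val": ["a", "b", "c"]})
    con.create_table("target_table", df, overwrite=True)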
abirch · 43m ago
Thank you, I'll look into this. Yes, performance is the main driver when some DataFrames have millions of rows.