Le Chat: Custom MCP Connectors, Memories

352 Anon84 147 9/4/2025, 11:04:36 AM mistral.ai ↗

Comments (147)

barrell · 11h ago
I recently upgraded a large portion of my pipeline from gpt-4.1-mini to gpt-5-mini. The performance was horrible - after some research I decided to move everything to mistral-medium-0525.

Same price, but dramatically better results, way more reliable, and 10x faster. The only downside is when it does fail, it seems to fail much harder. Where gpt-5-mini would disregard the formatting in the prompt 70% of the time, mistral-medium follows it 99% of the time, but the other 1% of the time inserts random characters (for whatever reason, normally backticks... which then causes its own formatting issues).

Still, very happy with Mistral so far!

mark_l_watson · 11h ago
It is such a common pattern for LLMs to surround generated JSON with ```json … ``` that I check for this at the application level and fix it. Ten years ago I would do the same sort of sanity checks on formatting when I used LSTMs to generate synthetic data.
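A minimal sketch of that application-level fix (the helper name is made up; it assumes the model wraps its output in standard markdown fences):

```python
import json
import re

def strip_code_fences(text: str) -> str:
    """Remove a leading ```json (or bare ```) fence and the trailing ```
    so the payload can be parsed as plain JSON; pass other text through."""
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", text, re.DOTALL)
    return match.group(1) if match else text.strip()

raw = '```json\n{"name": "mistral", "ok": true}\n```'
data = json.loads(strip_code_fences(raw))
print(data)  # {'name': 'mistral', 'ok': True}
```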
mpartel · 8h ago
Some LLM APIs let you give a schema or regex for the answer. I think it works because LLMs give a probability for every possible next token, and you can filter that list by what the schema/regex allows next.
hansvm · 7h ago
Interestingly, that gives a different response distribution from simply regenerating while the output doesn't match the schema.
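A toy illustration of that difference, with a made-up two-step model: per-token masking keeps the model's step-one preferences intact, while regenerating until valid (rejection sampling) conditions the whole-sequence distribution on validity, so the two disagree:

```python
# Toy model: two steps, next-token probabilities conditioned on the prefix.
step1 = {"a": 0.9, "b": 0.1}
step2 = {"a": {"x": 0.1, "y": 0.9}, "b": {"x": 1.0}}
valid = {("a", "x"), ("b", "x")}  # the "schema" only allows x as the 2nd token

# Rejection sampling: regenerate until the full output is valid,
# i.e. condition the sequence distribution on the valid set.
joint = {(t1, t2): p1 * step2[t1][t2]
         for t1, p1 in step1.items() for t2 in step2[t1]}
z = sum(p for s, p in joint.items() if s in valid)
rejection = {s: p / z for s, p in joint.items() if s in valid}

# Constrained decoding: mask invalid tokens at each step and renormalize.
# Step 1 keeps its original weights (both prefixes can still reach a valid
# string); step 2 is forced to "x", so it gets probability 1 either way.
constrained = {("a", "x"): 0.9, ("b", "x"): 0.1}

print(rejection)    # ('a','x') ~0.474, ('b','x') ~0.526
print(constrained)  # ('a','x') 0.9,    ('b','x') 0.1
```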
Rudybega · 3h ago
This is true, but there are methods to greatly reduce the effect of this and generate results that match or even improve overall output accuracy:

e.g. DOMINO https://arxiv.org/html/2403.06988v1

joshred · 4h ago
It sounds like they are describing a regex filter being applied to the model's beam search. LLMs generate the most probable words, but they are frequently tracking several candidate phrases at a time and revising their combined probability. It lets them self correct if a high probability word leads to a low probability phrase.

I think they are saying that if the highest-probability phrase fails the regex, the LLM is able to substitute the next most likely candidate.

stavros · 1h ago
You're actually applying a grammar to the token stream. If you're outputting, for example, JSON, you know which characters are valid next (because of the grammar), so you just filter out the tokens that don't fit the grammar.
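A brute-force sketch of that filtering idea (the grammar, vocabulary, and helper names are all made up; real engines compile the grammar to an automaton instead of enumerating completions):

```python
import re

# A deliberately tiny "grammar": the full output must match this pattern.
PATTERN = re.compile(r'\{"answer": (?:yes|no)\}')

def allowed_next(prefix: str, vocab: list[str]) -> list[str]:
    """Keep only the tokens that can still lead to a string matching PATTERN.
    A token survives if prefix+token is a prefix of some full match."""
    completions = ['{"answer": yes}', '{"answer": no}']  # language of PATTERN
    return [t for t in vocab
            if any(c.startswith(prefix + t) for c in completions)]

vocab = ['{"', 'answer', '": ', 'yes', 'no', 'maybe', '}']
print(allowed_next('{"answer": ', vocab))  # ['yes', 'no']
```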
fumeux_fume · 8h ago
Very common struggle, but a great way to prevent it is prefilling the assistant response with "{", or with as much of the JSON output as you know ahead of time, like '{"response": ['.
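A sketch of the parsing side of that prefill trick (the completion string here is simulated, not a real API response): since APIs that support prefilling typically return only the continuation, the prefill has to be glued back on before parsing:

```python
import json

def parse_prefilled(prefill: str, completion: str) -> dict:
    """Reattach the prefill to the model's continuation, then parse."""
    return json.loads(prefill + completion)

# Pretend the model continued our '{"response": [' prefill like this:
completion = '"bonjour", "hola"]}'
print(parse_prefilled('{"response": [', completion))
# {'response': ['bonjour', 'hola']}
```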
XenophileJKO · 7h ago
Just to be clear for anyone reading this, the optimal way to do this is schema enforced inference. You can only get a parsable response. There are failure modes, but you don't have to mess with parsing at all.
psadri · 7h ago
Haven’t tried this. Does it mix well with tool calls? Or does it force a response where you might have expected a tool call?
fumeux_fume · 6h ago
It'll force a response that begins with an opening brace. So if you might need a response with a tool call that doesn't start with "{", then it might not fit your workflow.
viridian · 10h ago
I'm sure the reason is the plethora of markdown data it was trained on. I personally use ``` stuff.txt ``` extremely frequently, in a variety of places.

In Slack/Teams I do it with anything someone might copy and paste, to ensure the chat client doesn't do something horrendous like replace my ASCII double quotes with the fancy Unicode ones that cause syntax errors.

In readme files any example path, code, yaml, or json is wrapped in code quotes.

In my personal (text file) notes I also use ``` {} ``` to denote a code block I'd like to remember, just out of habit from the other two above.

accrual · 9h ago
Same. For me it's almost a symbiotic thing. After using LLMs for a couple of years I noticed I use code blocks/backticks a lot more often. It's helpful for me as an inline signal like "this is a function name or hostname or special keyword", but it's also helpful for other people, Teams/Slack, and LLMs alike.
OJFord · 7h ago
I'm the opposite, always been pretty good about doing that in Slack etc. (or even here where it doesn't affect the rendering) but I sometimes don't bother in LLM chat.
Alifatisk · 10h ago
I think this is the first time I stumbled upon someone who actually mentions LSTMs in a practical way instead of just theory. Cool!

Would you like to elaborate further on how the experience was with it? What was your approach for using it? How did you generate synthetic data? How did it perform?

p1esk · 8h ago
10 years ago I used LSTMs for music generation. Worked pretty well for short MIDI snippets (30-60 seconds).
freehorse · 8h ago
I had similar issues with local models, ended up actually requesting the backticks because it was easier this way, and parsed the output accordingly. I cached a prompt with explicit examples how to structure data, and reused this over and over. I have found that without examples in the prompts some llms are very unreliable, but with caching some example prompts this becomes a non-issue.
mejutoco · 9h ago
Funny, I do the same. Additionally, one can define a JSON schema for the output and try to load the response as JSON, retrying a number of times. If it is not valid JSON or the schema is not followed, we discard it and retry.

It also helps to have a field of the JSON be the confidence, or a similar pattern, to act as a cutoff for which responses are accepted.
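A minimal sketch of that validate-and-retry loop with a confidence cutoff (`call_llm`, the field names, and the stubbed replies are all stand-ins, not a real client):

```python
import json

def generate_checked(call_llm, prompt, *, min_confidence=0.7, retries=3):
    """Call the model, parse the reply as JSON, and retry on invalid
    output or low self-reported confidence."""
    for _ in range(retries):
        try:
            data = json.loads(call_llm(prompt))
        except json.JSONDecodeError:
            continue  # not valid JSON: discard and retry
        if data.get("confidence", 0.0) >= min_confidence:
            return data
    raise ValueError("no acceptable response after retries")

# Stub model: fails once, then answers with enough confidence.
replies = iter(['```json oops', '{"answer": "42", "confidence": 0.9}'])
print(generate_checked(lambda p: next(replies), "question?"))
# {'answer': '42', 'confidence': 0.9}
```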

tosh · 8h ago
I think most mainstream APIs by now have a way for you to conform the generated answer to a schema.
Alifatisk · 10h ago
I do use backticks a lot when sharing examples in different formats when using LLMs, and I have instructed them to do likewise; I also upvote whenever they respond in that manner.

I got this format from writing markdown files, it’s a nice way to share examples and also specify which format it is.

barrell · 10h ago
Yeah, that’s infuriating. They’re getting better now with structured data, but it’s going to be a never ending battle getting reliable data structures from an LLM.

This is maybe more, maybe less insidious. It will literally just insert a random character into the middle of a word.

I work with an app that supports 120+ languages though. I give the LLM translations, transliterations, grammar features etc and ask it to explain it in plain English. So it’s constantly switching between multiple real, and sometimes fake (transliterations) languages. I don’t think most users would experience this

epolanski · 10h ago
I had a similar experience on my pipeline.

Was looking to both decrease costs and experiment out of OpenAI offering and ended up using Mistral Small on summarization and Large for the final analysis step and I'm super happy.

They have also a very generous free tier which helps in creating PoCs and demos.

WhitneyLand · 4h ago
Were you using structured output with gpt-5 mini?

Is there an example you can show that tended to fail?

I’m curious how token constraint could have strayed so far from your desired format.

barrell · 3h ago
Here is an example of the formatting I desired: https://x.com/barrelltech/status/1963684443006066772?s=46&t=...

Yes I use(d) structured output. I gave it very specific instructions and data for every paragraph, and asked it to generate paragraphs for each one using this specific format. For the formatting, I have a large portion of the system prompt detailing it exactly, with dozens of examples.

gpt-5-mini would normally use this formatting maybe once, and then just kinda do whatever it wanted for the rest of the time. It also would freestyle and put all sorts of things in the various bold and italic sections (using the language name instead of the translation was one of its favorites) that I’ve never seen mistral do in the thousands of paragraphs I’ve read. It also would fail in some other truly spectacular ways, but to go into all of them would just be bashing on gpt-5-mini.

Switched it over to mistral, and with a bit of tweaking, it’s nearly perfect (as perfect as I would expect from an LLM, which is only really 90% sufficient XD)

fkyoureadthedoc · 10h ago
Same, my project has a step that selects between many options when a user is trying to do some tasks. The test set for the workflow that supports this has a success rate about 7% better on gpt-4.1-mini vs gpt-5 and gpt-5-mini (with minimal thinking)
brcmthrowaway · 6h ago
What are you actually making
barrell · 3h ago
https://phrasing.app

I’m making an app to learn multiple languages. This portion of the pipeline is about explaining everything I can determine about a word in a sentence in specifically formatted prose.

Example: https://x.com/barrelltech/status/1963684443006066772?s=46&t=...

viridian · 10h ago
I'm curious what your prompts look like, as this is the opposite of my experience. I use lmarena for many of the random one shot questions I have, and I've noticed that mistral-medium is almost always the worse of the two after I blind vote. Feels like it consistently takes losses from qwen, llama, gemini, gpt, you name it. I find it overwhelmingly the most likely to produce factually untrue information to an inquiry.

Would you be willing to share an example prompt? I'm curious to see what it's responding well to.

barrell · 9h ago
I provide it with data and ask it to convert it to prose in specific formats.

Mistral medium is ranked #8 on lmsys arena IIRC, so it’s probably just not your style?

I’m also comparing this to gpt-5-mini, not the big boy

viridian · 4h ago
I think input strategy probably accounts for the difference. Usually I'm just asking a short question with no additional context, and usually it's not the sort of thing that has one well defined answer. I'm really asking it to summarize the wisdom of the crowd, so to speak.

For example, I ask, what are the most common targets of removal in magic: the gathering? Mistral's answer is so-so, including a slew of cards you would prioritize removing, but also several you typically wouldn't, including things like mox amber, a 0 cost mana rock. Gemini flash gave far fewer examples, one for each major card type, but all of them are definitely priority targets that often defined an entire metagame, like Tarmogoyf.

barrell · 3h ago
Ah yeah. I’m only grading it on its prose, formatting, ability to interpret data, and instruction following. I do not use it as a store of knowledge
FranklinMaillot · 8h ago
You may be aware of that, but they released mistral-medium-2508 a few days ago.
barrell · 7h ago
I did not! It’s not on azure yet and I’ve still got some credits to burn. That’s exciting though, hopefully it will iron out this weird ghost character issue.
thijsverreck · 11h ago
Any chance of fixing it with regex parsing, or redoing inference when the results are below a certain threshold?
barrell · 10h ago
It’s user facing, so there will just be an option for users to regenerate the explanation. It happens rarely enough that it’s not a huge issue, and normally doesn’t affect content (I think once I saw it go a little wonky and end the sentence with a few random words). Just sometimes switches to monospace font in the middle of a paragraph, or it “spells” a word wrong (spell is in quotes because it will spell `chien` as `chi§en`).

It’s pretty rare though. Really solid model, just a few quirks
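A hedged sketch of how those stray characters could be detected before showing output to users (the allowed set here is a guess; an app covering 120+ languages would tune it per script):

```python
import re

def flag_stray_chars(text: str, extra: str = "’'\u00e9-") -> list[str]:
    """Return characters that shouldn't appear in prose output: anything
    that isn't a letter, digit, whitespace, or expected punctuation.
    A 'regenerate' option can then be offered only when this is non-empty."""
    allowed = re.compile(rf"[\w\s.,;:!?(){re.escape(extra)}]")
    return [ch for ch in text if not allowed.match(ch)]

print(flag_stray_chars("chi§en"))  # ['§']
print(flag_stray_chars("chien"))   # []
```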

noreplydev · 10h ago
mistral speed is amazing
mickael-kerjean · 10h ago
If someone from Mistral comes around: is there a way for a third-party MCP implementation to be in the connector directory? I built an MCP connector allowing people to connect to every possible file transfer protocol, from S3 to FTP(S), SFTP, SMB, NFS, Gdrive, Dropbox, Azure Blob, OneDrive, SharePoint, etc. It has a couple of layers to delegate authentication, enforce authorisation, support RBAC, and create chroots so the LLM can't go haywire, plus tools to visualise and/or edit hundreds of file formats. Would be awesome to get it listed, and it's open source: https://github.com/mickael-kerjean/filestash
beernet · 9h ago
Mistral being valued at $14 billion in its latest funding round looks like a steal to me, especially compared to the Anthropic and OAI valuations. It would be interesting to compare revenues and growth rates as well to put these valuations into better perspective.

Apart from that, Mistral appears to remain the only really relevant new player with European ties in the Gen AI space. Aleph Alpha isn't heard of anymore and is essentially steered by the Schwarz Group now, so at best that was an acquihire, I guess.

riedel · 8h ago
Without creating so much buzz, there is also still DeepL. They just announced an agent framework: https://www.heise.de/en/news/DeepL-presents-its-own-AI-agent...

I think AI in Europe is doable in general.

FinnLobsien · 7h ago
What's their unique value? How are they differentiated vs. OAI/Anthropic/etc. who have way more money/distribution/etc.?
meesles · 5h ago
Not being based in the US is quite a differentiator for a lot of the world
DetroitThrow · 5h ago
I think the obvious question is whether they provide any differentiation beyond merely their HQ jurisdiction, since I'm sure we can all agree Turkmenistan AI would be very important for Turkmenistani government agencies too..
saubeidl · 38m ago
The difference with Turkmenistan is that the EU is the world's second largest economy. Having a near-monopoly on that is better than fighting over the largest economy.
riedel · 5h ago
They get translation in many languages right (which is important in Europe). They do not offer general-purpose GenAI yet. But as they provide models for translation and text editing, they have gained the trust of many companies. If they now move towards agentic AI for administrative tasks, they for sure have chances in procurement.
ljlolel · 9h ago
Also Lovable
hansonkd · 6h ago
Lovable has the worst moat I have ever seen for a company.

Our engineer used Lovable for about a day, then just cloned the repo and used Cursor, since it was much more productive.

hashbig · 6h ago
I just couldn't love it, and frankly I don't get the hype around it. I recently found that all my use cases can be served by either:

1. A general purpose LLM chat interface with high reasoning capacity (GPT-5 thinking on web is my go to for now)

2. An agent that has unrestricted token consumption running on my machine (Claude Code with Opus and Amp are my go to for now).

3. A fine-tuned, single purpose LLM like v0 that is really good at one thing, in this case at generating a specific UI component with good design aesthetics from a wireframe in a sandbox.

Everything else seems like getting the worst of all worlds.

echelon · 8h ago
Aren't there a billion Lovable clones now that do the exact same thing?

I could never get anything useful out of Lovable and was frustrated with the long editing and waiting process.

I'd prefer a site builder template with dropdowns. Lovable feels like that type of product, just with an LLM facade.

I don't hate AI, I just wasn't getting into the groove with Lovable.

brulard · 7h ago
Yeah for me Lovable was not really lovable.
mark_l_watson · 11h ago
I pay to use ProtonMail’s privacy preserving Lumo LLM Chat with good web_search tooling. Lumo is powered by Mistral models.

I use Lumo a lot and usually results are good enough. To be clear though, I do fall back on gemini-cli and OpenAI’s codex systems for coding a few times a week.

I live in the US, but if I were a European, I would be all in on supporting Mistral. Strengthen your own country and region.

coolspot · 36m ago
Note that Proton is sketchy about their code being open-source and available for anyone to review: https://news.ycombinator.com/item?id=44665398
g-mork · 8h ago
I wonder what ProtonMail are doing internally? Mistral's public API endpoints route via CloudFlare, just like apparently every other hosted LLM out there, even any of the Chinese models I've checked
ac29 · 9m ago
Mistral small and large are open weight, so they are likely self hosting?
fauigerzigerk · 6h ago
>I live in the US, but if I were a European, I would be all in on supporting Mistral. Strengthen your own country and region

That's a bit of a double edged sword. My support goes as far as giving local offerings a try when I might not have done otherwise. But at that point they need to be able to compete on merit.

basisword · 2h ago
>> I live in the US, but if I were a European, I would be all in on supporting Mistral. Strengthen your own country and region.

The problem is that if it's actually successful it'll just be bought by one of the big US based competitors.

saubeidl · 37m ago
I don't think France would allow that to happen - they would block it on national interest grounds.
cramsession · 9h ago
I've never used their models, but I love that design. Kudos to the Mistral design team, the modern pixel art look with orange colors is very cool.
kjgkjhfkjf · 8h ago
Why would I want to use Mistral's MCP services instead of official MCP services from Notion, Stripe, etc.? It seems to me that the official MCP services would be strictly better, e.g. because I don't have to grant access to my resources to Mistral.
signatoremo · 10h ago
Related: Mistral is closing in on a funding round at a $14 billion valuation

https://www.bloomberg.com/news/articles/2025-09-03/mistral-s...

SilverElfin · 7h ago
Doesn’t seem like much. Anthropic raised nearly as much the other day in funding as what Mistral is being valued at. Can they really survive?
saubeidl · 6h ago
I find that American AI companies are being incredibly wasteful - they've famously been shown up by frugal Deepseek, but generally smarter architectures outweigh more raw resources.
santiagobasulto · 3h ago
I think the "frugality" has been challenged by several sources, right? Nobody can prove if they were so inexpensive as they claimed.
aargh_aargh · 11h ago
> Directory of 20+ secure connectors

What does secure mean in this context? I didn't see it explained here.

Perhaps they mean this?

> Admin users can confidently control which connectors are available to whom in their organization, with on-behalf authentication, ensuring users only access data they’re permitted to.

oezi · 10h ago
Yeah, and what kind of features do MCPs by Stripe and Paypal offer? Currency conversion? Fees? API docs?
ffsm8 · 10h ago
Maybe also transaction search (helpful for customer support), currency conversion rates or balances (helpful for accounting), etc. Lots of read-only use cases there.
amelius · 11h ago
> Everything available on the Free plan

Cool!

samuel · 8h ago
Custom connectors are cool and a good selling point, but they have to be remote (AFAIK there is no Le Chat Desktop), so using them with local resources isn't impossible, but it's hard to set up and not very practical (you need Tailscale Funnel or equivalent).
Alifatisk · 10h ago
I use Qwen Chat as my daily, it’s good and does the job very well. I have never thought about trying out Mistral, is their model good at anything? Any area it excels at? Or is it far behind all the other models?
saratogacx · 7h ago
I pay for Mistral Pro; it is cheaper than other options ($15/mo) and you get some free metered API usage which works with a lot of different AI coding products. They frequently add features with little fanfare (this time is a bit of an exception).

I find them to be a pretty good overall model, although not at the bleeding edge. Their responses are very fast. Qwen is better with code/log analysis in my experience, but general coding questions haven't presented any problems.

Mistral's agent framework is pretty good too. You can make agents very easily on the Le Chat side, or if you want deeper control, you get La Plateforme access, and agents you make there can be used in Le Chat without counting against API usage.

Of the AI products I've been working with, and I've been trying a lot of them, Mistral is one I plan on keeping when I reduce myself down to 2-3 I want to stick around.

epolanski · 10h ago
Qwen is good for reasoning/feedback/creative work. I use it a lot when reviewing documentation, prose and some code, it's the one I like the most.

But when it comes to researching information, it's consistently among the worst performers in my comparisons.

Generally the order is Opus 4.1 > Perplexity > Gemini Pro >> GPT 5 >> Qwen.

Alifatisk · 5h ago
I got Perplexity Pro one year for free, what makes Opus 4.1 search so good? Is it worth switching?
epolanski · 4h ago
I don't know whether I can give you a conclusive answer, due to how query-dependent the results are and the lack of determinism.

I really like Perplexity, and if you get it free it's hard to justify spending $100/month.

Alifatisk · 3h ago
I understand, I thought maybe there was a couple of cases you stumbled upon that Opus 4.1 was better at, which made you rank it higher than Perplexity!
powerapple · 9h ago
By research do you mean web search/deep research, or only using knowledge embedded in the LLM? I use ChatGPT most of the time and didn't find Claude worked better for me; maybe I should switch if there is a big gap in performance.
epolanski · 7h ago
Deep research, I don't trust LLMs with anything really.
joshwarwick15 · 11h ago
Collection of Remote only MCP Servers here: https://github.com/jaw9c/awesome-remote-mcp-servers
Adrig · 7h ago
I want to love Mistral but never really used their products. What are they good enough for / great at?
nop_slide · 7h ago
All I can think about is "La Chat" from Three 6 Mafia
b33j0r · 4h ago
That’s a good one! Haha

From a similar generation all I can think about is Homer Simpson trying to put together a BBQ grill, from only the french instructions: “Le Grille?? What the hell is Le Grille??”

bigmattystyles · 4h ago
There’s a great Belgian comic strip called ‘Le Chat’ by Philippe Geluck. It’s for adults, and it is great.
crowcroft · 7h ago
How effective are Mistral's models at this point?

My perception using them is that they have comparable models to OpenAI and others when it comes to general use 'chat' tasks, but they don't match something like gpt-5 Pro or high thinking when you need something more powerful.

Perhaps the problem is that you need to be competing right at the frontier, or not at all. I put Cohere in a similar bucket.

vintagedave · 2h ago
I don’t know why you’re downvoted, this is an excellent question.

A year ago I wanted to use and like Mistral, mostly because it was an upstart competitor to OpenAI. Yet I found its coding ability sadly lacking. I haven’t tried since. I also have seen them on the HN front page very rarely. I’m curious too how well they stand up these days.

moralestapia · 11h ago
With all the recent news, I can see Mistral raising billions soon. Not that I endorse it, though.
jdross · 11h ago
They just did
moralestapia · 10h ago
You're right!
apwell23 · 11h ago
I don't use any of those products except github. which i don't need mcp for since i use claude code with gh cli installed.
pembrook · 9h ago
It feels like the only path Mistral has to win is targeting risk-averse European enterprises, waving the EU banner and having them force it on their employees. Even then, if they fall far enough behind they won't be able to do that either. Seems like a sad outcome for European tech given all the talent Europe has. It's frustrating.

You watch the OpenAI launch videos and it's a surprising variety of European accents talking about all the value they're creating in the US, instead of back home, simply due to the more favorable business/investment policies of the US.

My pet theory is, outside of the silly regulatory stance, the real reason Europe can never compete in each wave of tech (mainframe > pc > internet > mobile > social > AI > etc.) is government pension systems hoovering up all private capital and investing it into european governments (bonds) instead of european businesses (equities).

Centralizing the financial assets of an entire country and subjecting them to the whims of politics, thus requiring they be invested in extremely low-risk bonds instead of a larger portion in European equity indexes or even a tiny portion in venture capital, has created this situation: https://i.redd.it/fxks3skmvt4e1.png

Yes, a vast majority of VC funds lose money. Hence why it's bucketed in 'alternatives' and never a major part of pension portfolios. But the small group of winners literally create the future tax base to fund the social welfare system to continue existing (not to mention the future military tech which it turns out is useful when your neighbors get hostile). Not taking the risk means you never get the reward.

If Europe put even 1-2% of their $5T in pension assets into venture...even grossly mismanaged Softbank style...I find it hard to imagine you wouldn't accidentally create a few $100+ Billion companies in 10-20 years. More important would be creating the startup ecosystem for taking the rest of the worlds capital into these ventures as a multiplier.

cheeseface · 9h ago
Government pension funds are a part of it, but it’s a combination of many things, like:

- The US has been a single market for a much longer time than the EU, and the EU still is not a single market, primarily due to language barriers (Germany, France, and Italy are large enough markets to have their own localized, but slightly worse SaaS options)

- European societies are more arranged around the common good and have lower income differences between people and super-wealthy individuals by design. The US is built around being the place where talented people can make the most money out of their skills, which results in many people worldwide choosing it as the place to go to, as the talent market is a global one.

- Europeans tend to place less value on making as much money as possible, or on competing and being the winner, which results in people grinding less and being happy once they become rich enough to focus on other things.

pembrook · 8h ago
Yes on the single market being a huge issue, but the other "cultural" differences are total BS, and just modern media narratives.

Western Europe and the US had essentially the same level of government safety nets (and government spending and economic growth) from the 1950s to the 1990s.

Who do you think started all of the European industrial giants that are still globally competitive today? Europe had no problem competing in the industrial revolution, if these were actually European values there would be no competitive European industry like there is no competitive European tech. It's only the digital revolution that Europe has struggled with.

dadoum · 2h ago
> Europe can never compete in each wave of tech (mainframe > pc > internet > mobile > social > AI > etc.)

I would not dismiss the contribution of European companies to each of these domains so quickly, though. Especially on the mobile side, there was a time when Nokia/Siemens/Ericsson/Alcatel were big names in that industry.

frabcus · 8h ago
I tried Mistral for a bit, and it is so fast everything else feels bad now by comparison. I think there's lots of opportunity for OpenAI, Anthropic to stumble on features and performance.
Scene_Cast2 · 9h ago
If you think that this is sad, take a look at Canada and Cohere. I think it's not that Europe is lacking something, but rather the US and China are the only ones that are able to pull something off.
stripe_away · 4h ago
huh.

Can your theory explain then the difference between San Francisco/Bay area and the rest of the United States? Perhaps it is California's generous tax policies compared to say Texas?

arisAlexis · 9h ago
Not true. I switched to Le Chat for my everyday stuff just because it is good enough and I like the French twist.
littlestymaar · 9h ago
> simply due to the more favorable business/investment policies of the US.

It has nothing to do with policies, or pension systems or whatever, and everything to do with market size: when building an American company, you have access to the whole US from the start, then you can build an international product (with all the hassle that comes with it). If you're “European”, you have 27 different markets to address and, except for your own, none of them is easier for you to enter than for an American company.

The second hottest tech market after the US (and not that much behind) is China, and don't tell me that's because they have favorable business policies, ask Jack Ma! It's literally a totalitarian state where CEOs can get abducted if the CCP thinks they're getting too powerful. Talk about incentivizing risk taking. But that's a market of a billion people, the second highest GDP on the planet, and American companies can't monopolize every market because they are being heavily restrained by the government.

The only way for Europe to thrive technologically, would be to close the doors to the American corporations, that's how you can have Alibaba or VKontakte.

I'm not holding my breath though.

pembrook · 8h ago
China is communist in name only. In fact they are the most capitalist major country on earth right now.

Government spending makes up a smaller % of Chinese GDP than in the US, so definitionally their economy is more privatized than the US economy. China is at 33%, US at 36%, Europe is at 50%.

For every Jack Ma, there's a million other Chinese businesses flourishing in every niche imaginable with very little oversight from the CCP.

littlestymaar · 8h ago
The world is much more complex than a dichotomy between “capitalism” and “communism”, and it doesn't make any sense to have a hierarchy of what is the “most capitalist country”.

Also, “government spending” isn't a good proxy for how much a government intervenes in the economy, especially when the said government can just order businesses to do this or that without handing money to them.

pembrook · 8h ago
Of course, but it is literally the best proxy available in 1 metric.

Definitionally, it's the percentage of economic activity that is dictated by decentralized private market actors vs. centralized government ones.

All stats are imperfect reflections of reality. But name a better one for this particular issue.

DetroitThrow · 3h ago
State owned enterprises as a percentage of GDP, seems definitionally a better metric than government spending per GDP for comparing "decentralized private vs centralized government actors". This shows China at 29% and USA at 19% for 2024. So even discounting legal frameworks for state intervention, regulations, or supply-side policy it seems very contrived to reach your conclusion in terms of, again, "decentralized private market actors vs. centralized government ones" - and I'm aware of the difference it would mean wrt government spending as a % of GDP.

But I'd just like to point out how silly it is to dismiss the person's concerns by claiming we should all just agree to be reductive because it's easiest to discuss a single metric. It's certainly easiest to use this single metric to make the discussion about your conclusion, though, if that's what you were aiming for. I hope not, though.

pembrook · 2h ago
SOE % is definitely a great challenger for this. But I'd argue it misrepresents the picture by not including things like regulated monopolies in the US and poorly capturing taxation-driven redistribution.

Chinese utilities are all counted as SOEs, regulated utility monopolies in the US aren't, even though defacto they are government entities. The US likes to brand everything as more capitalist (just as China likes to brand everything as more communist), so this distorts the picture.

If we're just trying to capture the full picture of money flows in an economy, and whether each incremental currency unit is responding to market signals or not, % of GDP that is government spending is more reliable imo.

It's far easier to compare internationally and less fuzzy to calculate, given there's much more data on it globally.

bgwalter · 10h ago
It looks like Mistral wants to collect data before people are able to try out the sacred "AI". Too bad. If all the sites Mistral scraped for training data had done the same, Mistral would not exist.

Naturally, Cloudflare is in the business, too:

https://docs.mistral.ai/deployment/self-deployment/cloudflar...

mistral.ai name servers:

  Name Server: ivan.ns.cloudflare.com
  Name Server: ada.ns.cloudflare.com
raffael_de · 11h ago
Every time I tried a Mistral model I was left rather underwhelmed and just went back to the usual options. Seems like their only USP at this point is Made in EU.
epolanski · 10h ago
Not my experience and I have compared OpenAI/Anthropic/Mistral quite some.

Speed and cost are relevant factors. I have pipelines that need to execute tons of completions and produce summaries. Mistral Small is great at it and the responses are lightning fast.

For that use case if you went with US models it would be way more expensive and slow while not offering any benefit at all.

tormeh · 9h ago
If you give them money, Mistral will help you set their models up in your basement. Also they're really cheap. That's their USP, I think.
greyb · 6h ago
I'm curious how relevant this actually is as a USP with the proliferation of open weight models and a glut of technical consultants.
simion314 · 10h ago
>Seems like their only USP at this point is Made in EU.

They are also releasing model weights for most of their models, where companies like Antropic and until recently OpenAI were FUDing the world that open source will doom us all.

Mistral's smartest model is still behind Google and Antropic, but they will catch up.

swores · 10h ago
Not a big deal, but FYI there's an 'h' in the company name "Anthropic".

Inspired by the Greek word for human: Anthropos / ἄνθρωπος, the same etymology as English words like anthropology, the study of humans.

(I'd hazard a guess that your first language is something like a Romance language such as French, where people would pronounce that "anthro..." as if there is no h? So a particularly reasonable letter to forget when typing!)

lbreakjai · 6h ago
We generally kept the traces of the original latin/greek in the french spelling. It's "anthropologie" in french, but "antropología" in spanish, or "antropologia" in italian and portuguese.

Which makes it particularly hard to write, compared to other latin languages.

swores · 5h ago
Interesting! French is the only one I'm familiar with, and just assumed it was representative of the others. Thanks for the extra context
simion314 · 7h ago
yes, my first language is Romanian, a romance language, and add to that my complete disrespect for the company's anti open source FUD, so I never waste my time double-checking my spelling.
baq · 11h ago
they're also fast.
raffael_de · 11h ago
So are Gemini Flash (Lite) and GPT mini/nano.
threeducks · 10h ago

    - 1100    tokens/second Mistral Flash Answers https://www.youtube.com/watch?v=CC_F2umJH58
    -  189.9  tokens/second Gemini 2.5 Flash Lite https://openrouter.ai/google/gemini-2.5-flash-lite
    -   45.92 tokens/second GPT-5 Nano https://openrouter.ai/openai/gpt-5-nano
    - 1799    tokens/second gpt-oss-120b (via Cerebras) https://openrouter.ai/openai/gpt-oss-120b
    -  666.8  tokens/second Qwen3 235B A22B Thinking 2507 (via Cerebras) https://openrouter.ai/qwen/qwen3-235b-a22b-thinking-2507
Gemini 2.5 Flash Lite and GPT-5 Nano seem to be comparatively slow.

That being said, I cannot find non-marketing numbers for Mistral Flash Answers. Real-world tokens/second are likely lower, so this comparison chart is not very fair.
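If you want your own numbers rather than vendor-quoted ones, the arithmetic is simple once you record per-token arrival times from a streaming response. A minimal sketch (the function name and the synthetic timestamps are mine, not from any vendor SDK):

```python
def streaming_tps(timestamps):
    """Decode throughput in tokens/second from per-token arrival times.

    Excludes time-to-first-token, so prompt processing doesn't skew the
    number -- vendors usually quote decode speed the same way.
    """
    if len(timestamps) < 2:
        raise ValueError("need at least two tokens to measure throughput")
    elapsed = timestamps[-1] - timestamps[0]
    return (len(timestamps) - 1) / elapsed

# With a real streaming API you'd append time.monotonic() as each chunk
# arrives; here, synthetic timestamps: 101 tokens arriving 10 ms apart.
ts = [i * 0.01 for i in range(101)]
print(round(streaming_tps(ts)))  # → 100
```

Run it against the same prompt across providers and the marketing gap usually shrinks.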

baalimago · 11h ago
Strongest argument I see for Mistral is that it's European. Which isn't a very good argument.
weweersdfsd · 10h ago
It's a good argument if you care about privacy and geopolitical risk, like a dictator suddenly deciding that citizens of your country should no longer have access, or should be monitored when using the service.
rsynnott · 10h ago
I mean, if the US had just had a succession of Obamas (or even an alternating succession of Obamas and GWBushes, really), then that wouldn't be a great argument. Given the current instability and capriciousness of the US regime, though... If you were a European company who wanted to use an LLM, it's going to at least factor into your risk planning.
mkreis · 11h ago
It is with regard to the GDPR. If you're a European vendor and process PII, you must ensure some level of data protection. If you want to be on the safe side, you'll pick European providers instead of US hyperscalers (who have EU data centers, but are still US owned).
mseri · 10h ago
True, but we should also remember that some services, like fast responses and image generation, (may?) run in US data centres even for Mistral. So that part of the data could, in principle, end up in the hands of other, extra-European countries.

This said, I am really supportive of Mistral, like their work, and hope that they will get more recognition and more EU-centric institutional support.

apwell23 · 10h ago
how does the 'memory' feature in mistral work wrt GDPR if i type in my personal information ?
dax_ · 10h ago
GDPR doesn't stop personal data from being stored. It governs whom it can be shared with and when it has to be deleted, and requires collecting only as much data as needed. It also gives users transparency about how their data is used.

And if I were to give over personal information to an AI company, then absolutely I'll prefer a company who actually complies with GDPR.

apwell23 · 10h ago
yea i mean. how would they know how to remove it from 'memory', since they have no way to know with 100% accuracy which parts of my chat are PII.
rsynnott · 10h ago
The cautious approach on their part would be to just delete the whole thing on any subject access deletion request.
swores · 10h ago
As a metaphor (well, a simile) think of it like if they were providing you with an FTP server or cloud storage. It's your choice what, if any, personal data you put into the system, and your responsibility to manage it, not theirs.

As to what to do if you, with a customer's permission, put their PD (PII being an American term) into the system, and then get a request to delete it... I'm not sure, sorry I'm not an expert on LLMs. But it's your responsibility to not put the PD into the system unless you're confident that the company providing the services won't spread it around beyond your control, and your responsibility not to put it into the system unless you know how to manage it (including deleting it if and when required to) going forwards.

Hopefully somebody else can come along and fill in my gaps on the options there - perhaps it's as simple as telling it "please remove all traces of X from memory", I don't know.

edit: Of course, you could sign an agreement with an AI provider for them to be a "data controller", giving them responsibility for managing the data in a GDPR-compliant way, but I'm not aware of Mistral offering that option.

edit 2: Given my non-expertise on LLMs, and my experience dealing with GDPR issues, my personal feeling is that I wouldn't be comfortable using any LLM for processing PD that wasn't entirely under my control, privately hosted. If I had something I wanted to do that required using SOTA models and therefore needed to use inference provided by a company like Mistral, I'd want either myself or my colleagues to understand a hell of a lot more about the subject than I currently do before going down that road. Thankfully it's not something I've had to dig into so far.

t0lo · 11h ago
[flagged]
dang · 3h ago
"Eschew flamebait. Avoid generic tangents."

https://news.ycombinator.com/newsguidelines.html

loudmax · 10h ago
I'm glad to see a European company succeeding here, especially since Mistral has released open weights models. But you're deluding yourself if you think Mistral is any more moral than its American counterparts.
troyvit · 7h ago
Taking the morality of a company on its own isn't how I look at it, it's also the context. You might be right that if Mistral was born in the U.S. instead of France it would do the same shady stuff Anthropic and OpenAI are doing, but it wasn't and therefore it isn't. As a result it's a company I personally can work with for now.
ljlolel · 9h ago
Or if Europe is
apwell23 · 10h ago
how come you are helping ycombinator by posting here then?
o_m · 10h ago
An all-or-nothing mentality won't get you very far. Your digital device is probably made in China, but that doesn't mean you'd want to store your personal data in a Chinese data center. I try to choose European whenever possible and avoid or limit the use of American, Chinese, and Russian tech.
saubeidl · 10h ago
It is good to provide viewpoints opposing the imperialist propaganda that is frequently being spread here.
apwell23 · 7h ago
thank you for your service.
esafak · 10h ago
Activity here costs ycombinator too.
pembrook · 8h ago
[flagged]
dang · 3h ago
Please don't respond to a bad comment by breaking the site guidelines yourself. That only makes things worse.

https://news.ycombinator.com/newsguidelines.html

Barrin92 · 7h ago
>It's easy to look virtuous when you have other countries handle the 'immoral' military stuff,[...] Europe is a 24-year old trust fund kid working in a vegan commune

Europe has a higher industrial output than the US. In Unterlüß a town of 3500 people, Rheinmetall makes about 50% as many 155mm shells as the entire US makes annually. There's a reason your trust fund metaphor takes place in Brooklyn.

You might also want to remember that article 5 was invoked once, and it wasn't by Europe.

pembrook · 4h ago
Ahhh apparently Europe is more immoral than OP thought! Good job guys.

Kidding aside, if Europe is this hidden powerhouse as you claim, then it's even more odd to be begging the Americans for defense support/leadership from across the Atlantic, and still be importing natural gas from the "evil" Russians while supposedly in a fight with them. Seems to undermine your point, no?

troyvit · 7h ago
> If we pretend history begins at 1991...ignore the colonialism, world wars, fascism, communism, genocide, and body counts collectively in the hundred millions...

Uhhh it isn't Europe taking all its bad news out of its museums, friend. That's the good ol' U.S.A. attempting to hide from its own history.

> Europe is a 24-year old trust fund kid working in a vegan commune while living in a $2M Brooklyn apartment paid for by her dad who is an executive at Exxon.

What a very ... American analogy.

pembrook · 4h ago
Most of the "europe-is-utopia" voices on this site are actually American euro-fetishists in SF/LA/NYC. So I felt the American analogy was more fitting.

If I'm wrong though, the irony that the European tech community has to resort to a US message board to voice their opinions, only serves to further underline my point.

apwell23 · 7h ago
> Uhhh it isn't Europe taking all its bad news out of its museums, friend.

Yes, they proudly fill their museums to the brim with colonial loot, so Europeans can reminisce about the good old days when they were top dog.

epolanski · 10h ago
It's fast. Very fast. It's even faster than Gemini flash, which is fast.
htrp · 6h ago
Is that due to usage (or lack thereof)?
drno123 · 11h ago
Also, speed.