Why are there so many English-first AI models from China? Are they not interested in serving their own population? Or is it that if they publish Chinese-first models it won't get publicity in the West?
whynotmaybe · 16m ago
Haven't we reached a situation where English is the de facto language of scientific research, especially AI benchmarks?
It's clearly impossible for me to try anything in Chinese; I'd need a translation.
chvid · 5m ago
All LLMs are trained on the same basic blob of data - mostly in English, mostly pirated books and stuff.
enlyth · 6m ago
I assume a large portion of high quality training material is in English
Little rice? Yes. But it's more complicated than that. Here is the meaning of the name, described here (in Chinese): https://finance.sina.cn/tech/2020-11-26/detail-iiznctke33979...
在后来的讨论中，我突然想到了我最喜欢的一句话——“佛观一粒米，大如须弥山”。
Translated into English, it means: “In the later discussions, I suddenly thought of one of my favorite sayings — ‘A Buddha sees a single grain of rice as vast as Mount Sumeru.’”
This expression emphasizes the idea that even something seemingly small (like a grain of rice) can hold immense significance or value when viewed from a different perspective.
Thanks to chatgpt for translating this.
Also related: https://en.wikipedia.org/wiki/Xiaomi#Name_etymology
Jotalea · 2h ago
I wonder if they will use this model for their AI assistant on their Xiaomi 15 series phones. They most likely will. I'm not really sure what to expect from it.
m4r1k · 54m ago
My Chinese friend told me MiMo doesn’t have a meaning in Chinese (though of course Mi 米 = rice). Anybody have a clue what it stands for?
gandalfgreybeer · 40m ago
A lot of Xiaomi products have the prefix Mi. My initial guess is Mo is for model.
I think it's funny that everything from Xiaomi is "Mi", because for me "mi" is "rice". So all their stuff is "Rice-" this or that, hahaha.
fredwu · 1h ago
Not sure why it would be "funny" as this is literally why they named the company Xiaomi.
For me it's funny that all the products are called "Rice-something", hahaha! :)
bojan · 15m ago
Not that different from "apple" something.
amazingamazing · 1h ago
just as funny as an Apple, for sure.
keepamovin · 48m ago
That's a good point hahahaha! :)
cruzcampo · 2h ago
Does Xiaomi literally mean Little Rice? That's what my very limited Mandarin would suggest.
keepamovin · 2h ago
That is what my also rather limited Chinese would suggest, haha.
But with many single characters in Chinese, if you ask what a single character means, a Chinese person will tell you something like, "Well, it's not so easy to pin down the meaning of that one. Sometimes we use it like this, and sometimes like that."
Sure, some characters have an easy meaning (for me, the rice in Mi is one of them!), but there are plenty where you cannot get a Chinese person to easily tell you what a single character means. I guess it's a little like, but not the same as, asking an English speaker what any given "morpheme" (word part, like fac-) means. Hahaha. Not a perfect analogy tho! :)
Here's a list of morphemes I found just now while thinking about this: https://www.fldoe.org/core/fileparse.php/16294/urlt/morpheme...
It seems incomplete when you consider that the etymology of English words is often composed of parts from ages past! :)
Wow, that's interesting. I guess that's like a US company being called "MRE". We would view that as a veteran-owned and operated company. Interesting.
And all the products would be "MRE-Phone", "MRE-Pod", hehehe :)
...and searching for things related to multiple antennae just got harder.
They could've called it Xiaomimo.
arghwhat · 2h ago
multiple-input, multiple-output was horribly generic to begin with. Terms like multipath propagation and spatial multiplexing will do just fine.
ramesh31 · 3h ago
These benchmark numbers cannot be real for a 7B model
strangescript · 2h ago
The smaller models have been creeping upward. They don't make headlines because they aren't leapfrogging the mainline models from the big companies, but they are all very capable.
I loaded up a random 12B model on Ollama the other day and couldn't believe how competent it seemed and how fast it was, given the machine I was on. A year or so ago, that would not have been the case.
apples_oranges · 2h ago
Exactly, it seems to validate my assumption from some time ago that we will mostly use local models for everyday tasks.
pzo · 2h ago
Yeah, especially since this simplifies things for third-party developers building mobile apps: no extra cost, no need to set up a proxy server or monitor usage to detect abuse, and no need for a complicated per-usage subscription plan.
We just need Google or Apple to provide their own equivalents of Ollama and OpenRouter, so users either run inference for free with local models, or bring their own key and pay for the tokens/electricity bill themselves. We then just charge a smaller fee for renting or buying our apps.
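Roughly the shape of that, as a Python sketch (assuming Ollama's local HTTP API and an OpenRouter-style bring-your-own-key endpoint; the model names are just placeholders):

    import requests

    def complete(prompt, user_api_key=None):
        # Try free local inference first, via Ollama's HTTP API.
        try:
            r = requests.post(
                "http://localhost:11434/api/generate",
                json={"model": "qwen3:4b", "prompt": prompt, "stream": False},
                timeout=120)
            r.raise_for_status()
            return r.json()["response"]
        except requests.ConnectionError:
            pass  # no local runtime available, fall through to BYOK
        if user_api_key is None:
            raise RuntimeError("no local model and no API key provided")
        # Bring-your-own-key: the user pays for their own tokens.
        r = requests.post(
            "https://openrouter.ai/api/v1/chat/completions",
            headers={"Authorization": f"Bearer {user_api_key}"},
            json={"model": "openai/gpt-4o-mini",
                  "messages": [{"role": "user", "content": prompt}]},
            timeout=120)
        r.raise_for_status()
        return r.json()["choices"][0]["message"]["content"]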
jillesvangurp · 2h ago
Including figuring out which more expensive models to use when needed, instead of using them by default. Early LLMs were not great at reasoning, at using tools, or at reproducing knowledge. Small models are too small to reliably reproduce knowledge, but when trained properly they are decent enough for simple reasoning tasks, like deciding whether to hand off to a smarter/slower/more expensive model.
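A minimal sketch of that routing idea (the verdict prompt and model names are made up, and the "expensive" branch here is a local stand-in for what would really be a paid API call):

    import requests

    ROUTER_PROMPT = (
        "Decide if the following request needs a large, expensive model.\n"
        "Answer with exactly one word: SIMPLE or COMPLEX.\n\nRequest: {q}")

    def ask_local(model, prompt):
        r = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": model, "prompt": prompt, "stream": False},
            timeout=120)
        r.raise_for_status()
        return r.json()["response"]

    def answer(question):
        # Let the small model triage the request first.
        verdict = ask_local("qwen3:4b", ROUTER_PROMPT.format(q=question))
        if "COMPLEX" in verdict.upper():
            # Stand-in for the smarter/slower/more expensive model;
            # in a real app this would hit a paid frontier-model API.
            return ask_local("qwen3:32b", question)
        return ask_local("qwen3:4b", question)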
wg0 · 2h ago
But who will keep them updated, and what incentive would they have? That I can't imagine; it's a bit vague.
ebiester · 7m ago
Eventually? Microsoft and Copilot, and Apple and Siri - even if they have to outsource their model making. It will be a challenge to desktop Linux.
cruzcampo · 2h ago
Who keeps open source projects maintained and what incentive do they have?
jsheard · 2h ago
Most open source projects don't need the kinds of resources that ML development does. Access to huge GPU clusters is the obvious one, but it's easy to forget that the big players are also using huge amounts of soul-crushing human labor for data acquisition, cleaning, labeling and fine tuning, and begrudgingly paying for data they can't scrape. People coding in their free time won't get very far without that supporting infrastructure.
I think ML is more akin to open source hardware, in the sense that even when there are people with the relevant skills willing to donate their time for free, the cost of actually realizing their ideas is still so high that it's rarely feasible to keep up with commercial projects.
cruzcampo · 1h ago
That's a fair point. I think GPU clusters are the big one; the rest sounds like a good fit for volunteer work.
simiones · 1h ago
For the bigger open source projects, it's the companies who use that code to make money: Microsoft, Google, and IBM (and many others) support Linux because they use it extensively. The same answer may end up applying to these models, though. If they really become something that gets integrated into products and internal workflows, there will be a market for companies to collaborate on maintaining a good implementation rather than competing needlessly.
nickip · 2h ago
What model? I have been using APIs mostly, since Ollama was too slow for me.
patates · 1h ago
I really like Gemma 3. Some quantized version of the 27B will be good enough for a lot of things. You can also take some abliterated version[0] with zero (like zero zero) guardrails and make it write you a very interesting crime story without having to deal with the infamous "sorry but I'm a friendly and safe model and cannot do that and also think about the children" response.
[0]: https://huggingface.co/mlabonne/gemma-3-12b-it-abliterated
Qwen3 and some of the smaller Gemmas are pretty good and fast. I have a gist with my benchmark #'s here: https://gist.github.com/estsauver/a70c929398479f3166f3d69bce... (run on my M4 Pro Max with a whole ton of RAM, but most small models will fit on a well-spec'ed dev Mac).
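If you want rough numbers like that yourself: Ollama's non-streaming /api/generate response includes eval_count and eval_duration fields, which is enough for a quick tokens-per-second figure. A sketch (the model name is just an example):

    import requests

    def tokens_per_second(model, prompt):
        r = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": model, "prompt": prompt, "stream": False},
            timeout=600)
        r.raise_for_status()
        data = r.json()
        # eval_count = tokens generated, eval_duration = nanoseconds spent.
        return data["eval_count"] / (data["eval_duration"] / 1e9)

    print(tokens_per_second("qwen3:4b", "Write a haiku about rice."))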
Last time I did that I was also impressed, at first.
The problem was that, of a top ten book recommendation list, only the first three books existed; the rest were a casually blended hallucination delivered in perfect English without skipping a beat.
"You like magic? Try reading the Harlew Porthouse series by JRR Marrow, following the orphan magicians adventures in Hogwesteros"
And the further it goes towards the context limit, the deeper this descent into creative derivative madness gets.
It's entertaining but limited in usefulness.
omnimus · 2h ago
LLMs are not search engines…
Philpax · 54m ago
An interesting development to look forward to will be hooking them up to search engines. The proprietary models already do this, and the open equivalents are not far behind; the recent Qwen models are not as great at knowledge, but are some of the best at agentic functionality. Exciting times ahead!
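A toy version of that hookup, to make the idea concrete (the SEARCH[...] convention and the web_search helper are hypothetical, and the model name is just an example):

    import re
    import requests

    def ask(prompt):
        r = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": "qwen3:4b", "prompt": prompt, "stream": False},
            timeout=120)
        r.raise_for_status()
        return r.json()["response"]

    def web_search(query):
        # Hypothetical stand-in for whatever search API you have access to.
        return f"(search results for: {query})"

    def answer_with_search(question, max_rounds=3):
        prompt = ("Answer the question. If you need facts you don't know, "
                  "reply with exactly SEARCH[your query] and nothing else.\n\n"
                  f"Question: {question}")
        for _ in range(max_rounds):
            reply = ask(prompt)
            m = re.search(r"SEARCH\[(.+?)\]", reply)
            if not m:
                return reply  # the model answered directly
            # Feed the results back in and let it try again.
            prompt += (f"\n\nSearch results for '{m.group(1)}':\n"
                       f"{web_search(m.group(1))}\n\nNow answer the question.")
        return ask(prompt)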
mirekrusin · 1h ago
Exactly. I think all those base models should have this nonsense weeded out of them; Kardashian-like labyrinths of knowledge just make them dumber by taking up space and compute time. If you can google some nonsense news, it should stay in the search engine for retrieval. Models should be good at using search tools, not at trying to replicate their results. They should start from logic, math, programming, physics and so on, similar to what the education system is supposed to equip you with. IMHO small models can give this speed advantage (faster to experiment, e.g. with parallel diverging results, the ability to munch through more data, etc.). Stripped to this bare minimum, they can likely be much smaller with impressive results, tunable, and allow for huge context.
justlikereddit · 43m ago
They are generalists; being search engines is a subset of that.
bearjaws · 1h ago
My guess is that it is overfitted to the tests.
mirekrusin · 1h ago
Today's best models will be the worst models you'll use for the rest of your life.
Go look at the benchmark numbers of Qwen3-4B if you think these are unrealistic.
andrepd · 2h ago
Every LLM is basically being trained on benchmarks, so "benchmark" as applied to LLMs is a pretty meaningless term.
xmorse · 41m ago
Xiaomi is an amazing company
w4yai · 3h ago
Anyone tried it ?
Alifatisk · 3h ago
No, where can I try it? I saw a huggingface link, but I wonder if they host it themselves somewhere too, like how Alibaba does with Qwen Chat.
It will probably be released within a few hours. But yeah, waiting is the easier option.
yorwba · 2h ago
There is a HuggingFace space (probably not official) at: https://huggingface.co/spaces/orangewong/xiaomi-mimo-7b-rl You might have to wait a minute to get a response. Also, the space doesn't seem to have turn-taking implemented, so after giving the Assistant's response, it kept on generating the Human's next message and so on and so forth.
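That runaway generation is what stop sequences are for. A rough sketch of enforcing turn-taking with transformers, assuming XiaomiMiMo/MiMo-7B-RL is the right repo id and that the space uses plain Human:/Assistant: turn markers:

    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              StoppingCriteria, StoppingCriteriaList)

    class StopOnText(StoppingCriteria):
        """Stop as soon as the generated continuation contains stop_text."""
        def __init__(self, tokenizer, stop_text, prompt_len):
            self.tokenizer = tokenizer
            self.stop_text = stop_text
            self.prompt_len = prompt_len
        def __call__(self, input_ids, scores, **kwargs):
            new_text = self.tokenizer.decode(input_ids[0][self.prompt_len:])
            return self.stop_text in new_text

    repo = "XiaomiMiMo/MiMo-7B-RL"  # assumed repo id, check before use
    tok = AutoTokenizer.from_pretrained(repo)
    model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto")

    prompt = "Human: What does 'xiaomi' mean?\nAssistant:"
    inputs = tok(prompt, return_tensors="pt")
    stop = StoppingCriteriaList(
        [StopOnText(tok, "\nHuman:", inputs.input_ids.shape[1])])
    out = model.generate(**inputs, max_new_tokens=256, stopping_criteria=stop)
    reply = tok.decode(out[0][inputs.input_ids.shape[1]:],
                       skip_special_tokens=True)
    print(reply.split("\nHuman:")[0])  # trim any spilled-over next turn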
CodeCompost · 2h ago
Open Source or Open Weights?
NitpickLawyer · 1h ago
MIT - so open source
Davidzheng · 1h ago
Weights
ilrwbwrkhv · 53m ago
At this point everybody will open-source their models or weights. The only one that won't is OpenAI.