Pangu's Sorrow: The Sorrow and Darkness of Huawei's Noah Pangu LLM R&D Process

10 guardiangod 2 7/7/2025, 12:27:51 AM github.com ↗

Comments (2)

yms_hi · 4h ago
Calling a paper that has already been determined to be AI-generated an "incident"? This is a major point of suspicion in the entire text.
nirui · 1h ago
Is the article a translation from Chinese? You need some deep knowledge of Chinese net slang and Huawei slang to correctly understand it.

And all those unnecessary emotional expressions made the article hard to read.

Here are the takeaways I extracted:

1. The author claims to be "an employee of the Pangu Large Model Team and Huawei Noah's Ark Laboratory", a lower-ranking "small worker". The first four bullet points are supposed to prove that they have insider knowledge, which should authenticate the claims that follow. Why Huawei named their teams in this odd way is unexplained, but it does invite some psychiatric analysis.

2. "At the beginning, our (Huawei, editor's note) computing power was very limited..." (detail followed), "...At the same time, other domestic companies such as Alibaba (which published Qwen, editor's note) and Zhipu were training on GPUs and had already figured out the right method. The gap between Pangu and its competitors was getting bigger and bigger"

3. "In this situation, Wang Yunhe ('the current director of Noah', editor's note) and his small model laboratory took action. They claimed that they inherited and transformed from the old 135B parameters, and through training a short few hundred B of data, the average improvement of various indicators was about ten points. In fact, this was their first masterpiece of applying the shell to the large model. Huawei's laymen led the experts, which made the leaders completely unaware of this nonsense. They only thought that there must be some algorithm innovation. After internal analysis, they actually used Qwen (which is published by Alibaba, editor's note) 1.5 110B for continued training.", "By adding layers, expanding the ffn dimension, and adding some mechanisms from the Pangu pi paper, they gathered about 135B parameters. In fact, the old 135B has 107 layers, while this model has only 82 layers, and the various configurations are also different. After training, the distribution of many parameters of the new 135B of unknown origin is almost exactly the same as that of Qwen 110B. Even the class name of the model code was Qwen at the time, and they were too lazy to even change the name. The subsequent model is the so-called 135B V2. This model was also provided to many downstreams at the time, even including external customers."
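For context on the parameter arithmetic in point 3: a back-of-the-envelope count shows how a transformer with fewer layers but a wider FFN can land on roughly the same total parameter count as a deeper, narrower one. Here is a minimal sketch; all the configuration numbers (hidden size, vocab size, FFN widths) are hypothetical illustrations, not the actual Pangu or Qwen configs:

```python
def dense_transformer_params(d_model, n_layers, d_ffn, vocab, gated_mlp=True):
    """Rough parameter count for a dense decoder-only transformer.

    Ignores small terms (layer norms, biases) and assumes tied
    input/output embeddings; an estimate, not an exact count.
    """
    attn = 4 * d_model * d_model                      # Q, K, V, O projections
    mlp = (3 if gated_mlp else 2) * d_model * d_ffn   # gated MLP uses 3 matrices
    emb = vocab * d_model
    return n_layers * (attn + mlp) + emb

# Hypothetical configs: deep/narrow vs. shallower/wider, same hidden size.
deep = dense_transformer_params(d_model=12288, n_layers=107, d_ffn=17408, vocab=151_936)
wide = dense_transformer_params(d_model=12288, n_layers=82,  d_ffn=28160, vocab=151_936)

print(f"107 layers: {deep / 1e9:.1f}B")  # ~135.2B
print(f" 82 layers: {wide / 1e9:.1f}B")  # ~136.5B
```

The point is only that total parameter count alone pins down very little: an 82-layer model with a wider FFN can match a 107-layer model's size while having a completely different layer configuration, which is exactly the discrepancy the author claims to have found.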

And that's about it.

Also, yeah, the article was indeed a translation from Chinese. The [original post] was written in Chinese and then translated to English by github.com/moonlightelite. That's why it feels odd to read.

[original post]: https://web.archive.org/web/20250706034203/https://github.co...

After reading the article, I feel this is less whistleblowing and more an attack against Wang Yunhe. That's why there are so many emotional expressions: to (maybe) appeal to Huawei and/or this individual's future employer. But that's just my personal feeling/hunch.