Launch HN: Uplift (YC S25) – Voice models for under-served languages

43 points by zaidqureshi | 17 comments | 8/19/2025, 12:10:09 PM
Hi HN, we are Zaid, Muhammad and Hammad, the co-founders of Uplift AI (https://upliftai.org). We build models that speak underserved languages — today: Urdu, Sindhi, and Balochi.

A billion people worldwide can't read. In countries like Pakistan – the 5th most populous country – 42% of adults are illiterate. This holds back the entire economy: patients can't read medical reports, parents can't help with homework, banks can't go fully digital, farmers can't research best practices, and people memorize smartphone app button sequences. Voice AI interfaces can fix all of this, and we think this will perhaps be one of the great benefits of modern AI.

Right now, existing voice models barely work for these languages, and big tech is moving slowly.

Uplift AI was originally a side project to make datasets for translation and voice models. For us it was a "cool side-thing" to work on, not an "important full-time thing" to work on. With some initial data we hacked together an Urdu voice bot on WhatsApp and gave it to one domestic worker. Within two days, 800 people were using it. When we dug deeper into understanding the users, we learned that text interfaces just don't work for so many people. So we started Uplift AI to solve this problem full-time.

The most challenging part is that all the building blocks needed for great voice models are broken for these languages. For example, if you are creating a speech synthesis model, you would normally scrape a lot of data from YouTube and auto-label it using a transcription model… all very easy to do in English. But that doesn't work for under-served languages, because the transcription models are not accurate.
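To make that concrete, here's a rough sketch of the scrape-and-auto-label loop described above. The helper names (download_audio, transcribe) are hypothetical placeholders rather than our actual tooling; the point is just to show where the loop breaks when the transcription model itself is unreliable.

```python
# Rough sketch of the scrape-then-auto-label loop described above.
# download_audio and transcribe are hypothetical placeholders, not
# Uplift AI's actual tooling.
from pathlib import Path


def download_audio(video_id: str, out_dir: Path) -> Path:
    """Placeholder: fetch the audio track for one video, return the file path."""
    raise NotImplementedError


def transcribe(audio_path: Path) -> tuple[str, float]:
    """Placeholder: run a transcription model, return (text, confidence)."""
    raise NotImplementedError


def build_dataset(video_ids: list[str], out_dir: Path, min_conf: float = 0.9) -> list[dict]:
    """Keep only clips whose auto-transcription clears a confidence bar.

    In English this filter works well. For under-served languages the
    transcription model is itself inaccurate, so the "labels" it produces
    are too noisy to train on, which is the core problem described above.
    """
    dataset = []
    for vid in video_ids:
        audio = download_audio(vid, out_dir)
        text, conf = transcribe(audio)
        if conf >= min_conf:
            dataset.append({"audio": str(audio), "text": text})
    return dataset
```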

There are many other challenges. For example, when you hire human transcribers to label the data, they often don't have spell checkers for their languages, and this creates a lot of noise in the data… making it hard to train models with limited data. There are many more challenges around phonemes, silence detection, diacritization, etc.

We solve these problems by building great internal tooling to help with data labeling. We also source our own data rather than buying it. This is counterintuitive, but it's a big advantage over companies that buy data and then train. By sourcing our own data we create the right data distributions and get much better models with much less data. By doing the entire thing in-house (data, labeling, training, deploying), we are able to make much faster progress.

Today we publicly offer a text-to-speech API for Urdu, Sindhi, and Balochi. Here's a video which shows this: https://www.loom.com/share/dcd5020967444c228e9c127151e7a9f5.
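For a feel of the API, here's a minimal sketch of a request from Python. The endpoint path, field names, and voice id below are illustrative assumptions, not the exact request format; https://docs.upliftai.org has the real details.

```python
# Minimal sketch of calling a hosted TTS API over HTTP.
# The endpoint URL, field names, and voice id are illustrative
# assumptions; see https://docs.upliftai.org for the actual format.
import os

import requests

API_KEY = os.environ["UPLIFTAI_API_KEY"]  # hypothetical env var

resp = requests.post(
    "https://api.upliftai.org/v1/synthesis/text-to-speech",  # assumed path
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "voiceId": "urdu-female-1",        # hypothetical voice id
        "text": "آپ کی رپورٹ تیار ہے",      # "Your report is ready"
        "outputFormat": "mp3",             # assumed format name
    },
    timeout=30,
)
resp.raise_for_status()

with open("report.mp3", "wb") as f:
    f.write(resp.content)  # save the synthesized audio
```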

Khan Academy is using our tech to dub videos to Urdu (https://ur.khanacademy.org).

Our models excel at informational use cases (like AI bots) but need more work on emotive use cases like poetry.

We have been giving a lot of people private access in beta mode, and today are launching our models publicly. We believe this will be the fastest way for us to learn about areas that are not performing well so we can fix them quickly.

We'd love to hear from all of you, especially around your experiences with under-served languages (not just the Pakistani ones we're starting with) and your comments in general.

Comments (17)

_waqas_ali_ · 5m ago
As a Sindhi speaker myself, amazing stuff. The output is so good. This unlocks the vastness of the internet for millions of people. I am imagining something like NotebookLM but for under-served languages, or a hotline where people can call and talk/learn about anything. Do you guys have plans to create B2C products yourselves?
pavlov · 1h ago
Nice! Clearly a big and underserved market for voice AI solutions.

Would be nice to have some code examples for using your TTS API with Pipecat.

zaidqureshi · 1h ago
I have to make that.. I did make one for LiveKit which uses our WebSocket API designed for real-time conversation:

https://docs.upliftai.org/tutorials/livekit-voice-agent
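If you want a rough picture of the real-time path before opening the tutorial, the sketch below streams text to a WebSocket TTS endpoint and writes the audio chunks it sends back. The endpoint URL, auth scheme, and message fields here are illustrative only; the actual protocol is in the tutorial above.

```python
# Rough sketch of streaming TTS over a WebSocket for a real-time agent.
# The endpoint URL, auth scheme, and message fields are illustrative;
# the actual protocol is documented in the LiveKit tutorial above.
import asyncio
import json
import os

import websockets  # pip install websockets


async def stream_tts(text: str, out_path: str = "reply.mp3") -> None:
    token = os.environ["UPLIFTAI_API_KEY"]  # hypothetical env var
    uri = f"wss://api.upliftai.org/v1/synthesis/stream?token={token}"  # assumed endpoint

    async with websockets.connect(uri) as ws:
        # Ask the server to synthesize the text (field names are hypothetical).
        await ws.send(json.dumps({"type": "synthesize", "text": text}))

        with open(out_path, "wb") as f:
            async for message in ws:
                if isinstance(message, bytes):
                    f.write(message)  # binary frames carry audio chunks
                elif json.loads(message).get("type") == "done":
                    break  # server signals end of synthesis


asyncio.run(stream_tts("سلام، آپ کیسے ہیں؟"))
```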

zaidqureshi · 33m ago
btw I did try to make it with Pipecat first, but was having some annoying Windows issues getting the libraries installed for Daily etc., so I posted something that was easily reproducible for the tutorial...
nojs · 42m ago
Nice, this is really needed. Would be cool to see some of the less common regional Chinese dialects, which are widely spoken and often the only language older people speak. And even just more accurate regional accents for Mandarin.
zaidqureshi · 38m ago
wow, did not know that! Do you feel there is a gap in speech understanding here, or is personalization missing with current TTS?
moinism · 47m ago
Congrats on the launch! Having support for regional voices is going to open up so many opportunities.
zaidqureshi · 39m ago
Agreed!
akshayp29 · 2h ago
Pretty cool! Do you think the model would be good at other under-served languages as well? Or is it hypertuned to just these?
zaidqureshi · 2h ago
The model itself can work well for new languages, it's just the process of data gathering and maintaining high quality of data is what we have to figure out as we scale across languages.

Currently the model is only given data for these languages so it doesn't know anything else.

mandeepj · 24m ago
> just the process of data gathering and maintaining high quality of data is what we have to figure out as we scale across languages.

A crawler and data ingestion pipeline will not help with that?

zaidqureshi · 19m ago
Gathering audio data online is not that hard, but getting it accurately labelled is challenging, as the speech understanding systems for those languages aren't there either, so we can't do that automatically.
akshayp29 · 1h ago
Cool - makes sense!
sanman8119 · 1h ago
Would love to see Malayalam here one day!
zaidqureshi · 1h ago
Yes! I will keep track of this comment for the day we do :P
yorwba · 1h ago
Unless that happens within a week or so, this thread will be locked and you won't be able to reply anymore.

It would be good to have a company blog with an RSS feed that people can subscribe to for updates.

zaidqureshi · 1h ago
ah, created a quick Google Form for language requests! https://forms.gle/XA6nZbmBNK5K7GJv5