Show HN: Inworld TTS – high-quality, affordable, and low-latency TTS

18 points by rogilop | 11 comments | 6/26/2025, 5:11:11 PM | inworld.ai
Hi HN, Igor here, one of the engineers behind this project.

High-quality voice APIs are usually either expensive, slow, or both. Cheaper and faster solutions very often lack realism. We decided to build Inworld TTS to bridge this gap.

We just released two multilingual models. Our small model, TTS-1, is on par with SOTA models on objective quality metrics (WER, SIM, DNSMOS). The larger model, TTS-1-Max, is even better: it produces more nuanced speech and has ~3.5% lower WER averaged across all 11 supported languages. Both models also support markup tags (e.g. prepend "[happy]" to the text to make the generation sound more enthusiastic).
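The markup-tag convention can be wrapped in a tiny helper. A hypothetical sketch — only "[happy]" is confirmed above; other tag names and the exact spacing rules are assumptions:

```python
def with_emotion(text: str, tag: str = "happy") -> str:
    """Prepend an emotion markup tag to the input text.

    Only the "[happy]" tag is confirmed by the post; other tag
    names passed here are hypothetical.
    """
    return f"[{tag}] {text}"

print(with_emotion("Welcome back!"))  # [happy] Welcome back!
```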

The models are built on LLaMA 1B and 8B as the SpeechLM backbones for TTS-1 and TTS-1-Max, respectively. We up-trained both models on a mixture of text and audio, then fine-tuned them on text-audio pairs, and polished the final checkpoints with GRPO on a small high-quality dataset. Our Speech Lab team (4 MLEs) started collecting audio data and exploring audio codec architectures in late February. Inspired by the simplicity of the single-vector-quantization design of the Xcodec2 neural audio codec, we decided to use a similar idea. Training began in early April; once the codec was ready, the SpeechLMs took another month and a half to train, finishing in mid-June. All of this ran on 32 H100 GPUs.
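For readers unfamiliar with the codec side: single vector quantization, the idea borrowed from Xcodec2, maps each frame embedding to the index of its nearest codebook entry, so audio becomes a stream of discrete tokens a SpeechLM can predict. A toy pure-Python sketch of that lookup (the codebook values here are made up; the real codec learns its codebook end-to-end):

```python
import math

def quantize(vec, codebook):
    """Return (index, code) of the nearest codebook vector by L2 distance.

    This is the core of single-vector-quantization codecs: each frame
    embedding is replaced by one discrete token, the index of its
    nearest codebook entry.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    idx = min(range(len(codebook)), key=lambda i: dist(vec, codebook[i]))
    return idx, codebook[idx]

codebook = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
print(quantize([0.9, 0.1], codebook))  # (1, [1.0, 0.0])
```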

To make the models real-time-ready in serving, we collaborated with Modular to migrate from a vanilla vLLM setup to the Mojo-written MAX server. Our bet on keeping the serving architecture as simple as possible paid off: both models turned out to be really fast. TTS-1, which can be accessed via a streaming API, has ~500ms p90 latency for returning the first ~2 seconds of audio. The pricing is simple: $5 per 1M characters. API access to the larger model will open soon. We'll share more details about the serving performance optimizations in the coming weeks.
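At $5 per million characters, cost estimation is a one-liner. A quick sketch (the function name and rounding are mine, not part of the API):

```python
PRICE_PER_MILLION_CHARS = 5.00  # USD, per the stated pricing

def tts_cost_usd(text: str) -> float:
    """Estimate TTS cost for a string at $5 / 1M characters."""
    return len(text) * PRICE_PER_MILLION_CHARS / 1_000_000

# A 2,000-character passage costs one cent:
print(f"${tts_cost_usd('x' * 2000):.4f}")  # $0.0100
```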

We are also about to release all the training, modeling, and benchmarking code on GitHub to be transparent about how we made it. The repo is very flexible and can easily be adapted to train arbitrary neural nets, but we'll release it with a focus on speech modeling. By the way, we used PyTorch Lightning as the framework for multi-node/multi-GPU training, as it proved to be easy to use and reliable.

--

Check the TTS out at https://inworld.ai/tts

Happy to answer any questions you have!

Comments (11)

cremaster_ · 3h ago
I've used Inworld for AI characters in the past. Are you pivoting to a TTS company?

Also, can these voices be plugged into the Unreal/Unity SDKs?

rogilop · 2h ago
Not really, we aren't pivoting: TTS is part of our strategy of making great AI solutions accessible to as many developers as possible. We don't have official plugins for UE/Unity yet, but we'll have something to share soon. For now, feel free to use it directly via the API.
jsx888 · 2h ago
Love it! Can't wait to try this out and cut down the costs we incur using other services.
audi0917 · 1h ago
The voices are realistic and lively. I'll try it in my app. Thanks for the great launch!
rogilop · 1h ago
Oh, that's cool, please share the app)
RohanPanda99 · 3h ago
Kudos on the launch! The price-point along with superior quality compared to peer models would make it a go-to solution for TTS!
igh · 3h ago
Thank you for sharing the details!
rogilop · 2h ago
Sure! We plan to release a detailed tech report alongside the repo too. We have a lot of interesting lessons to share.
kalacoffee · 3h ago
The TTS Playground is easy to use and impressive. Voice cloning was intuitive.
feifan123 · 3h ago
This is amazing! It unblocks many potential AI applications with voices.
fr25 · 3h ago
Interesting approach... thanks for sharing