This is a good release if they're not too cherry picked!
I say this every time it comes up, and it's not as sexy to work on, but in my experiments voice AI is really held back by transcription, not TTS. Unless that's changed recently.
ianbicking · 10h ago
FWIW in my recent experience I've found LLMs are very good at reading through the transcription errors
(I've yet to experiment with giving the LLM alternate transcriptions or confidence levels, but I bet they could make good use of that too)
vunderba · 10h ago
Pairing speech recognition with a LLM acting as a post-processor is a pretty good approach.
I put together a script a while back which takes any audio file (wav, mp3, etc.), normalizes the audio, passes it to ggerganov's whisper.cpp for transcription, and then forwards the result to an LLM to clean the text. I've used it with a pretty high rate of success on some of my very old and poorly recorded voice dictation recordings from over a decade ago.
Public gist in case anyone finds it useful:
https://gist.github.com/scpedicini/455409fe7656d3cca8959c123...
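A minimal sketch of that kind of pipeline (not the gist above), assuming ffmpeg and whisper.cpp's `whisper-cli` are on PATH and a local Ollama server is running; the model names and file paths are placeholders:
```
# normalize -> transcribe -> LLM cleanup
import json
import subprocess
import urllib.request

def transcribe_and_clean(audio_path: str) -> str:
    # Normalize to 16 kHz mono WAV, the input format whisper.cpp expects.
    subprocess.run(
        ["ffmpeg", "-y", "-i", audio_path, "-ar", "16000", "-ac", "1", "norm.wav"],
        check=True,
    )
    # Transcribe; -otxt writes the transcript to norm.wav.txt.
    subprocess.run(
        ["whisper-cli", "-m", "ggml-base.en.bin", "-otxt", "-f", "norm.wav"],
        check=True,
    )
    with open("norm.wav.txt") as f:
        raw = f.read()
    # Ask a local LLM to repair likely mistranscriptions without paraphrasing.
    body = json.dumps({
        "model": "llama3",
        "prompt": "Fix obvious speech-to-text errors in this transcript, "
                  "changing as little as possible:\n\n" + raw,
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```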
An LLM step also works pretty well for diarization. You get a transcript with speaker-segmentation (with whisper and pyannote for example), SPEAKER_01 says at some point „Hi I’m Bob. And here’s Alice“, SPEAKER_02 says „Hi Bob“ and now the LLM can infer that SPEAKER_01 = Bob and SPEAKER_02 = Alice.
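A sketch of that name-inference step; pyannote/whisper produce the labeled segments, and `llm()` is a hypothetical stand-in for whatever chat-completion client you use:
```
segments = [
    ("SPEAKER_01", "Hi, I'm Bob. And here's Alice."),
    ("SPEAKER_02", "Hi Bob."),
]
transcript = "\n".join(f"{spk}: {text}" for spk, text in segments)
prompt = (
    "Given this diarized transcript, return JSON mapping each SPEAKER_XX "
    "label to the speaker's real name, or null if unknown:\n\n" + transcript
)
mapping = llm(prompt)  # expected: {"SPEAKER_01": "Bob", "SPEAKER_02": "Alice"}
```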
soulofmischief · 4h ago
Yep, the agent I built years ago worked very well with this approach, using a whisper-pyannote combo. The fun part is knowing when to end transcription in noisy environments like a coffee shop.
Tokumei-no-hito · 10h ago
thanks for sharing. are some local models better than others? can small models work well or do you want 8B+?
vunderba · 10h ago
So in my experience smaller models tend to produce worse results, BUT I actually got really good transcription cleanup with chain-of-thought (CoT) models like Qwen, even quantized down to 8B.
mikepurvis · 10h ago
I was going to say, ideally you’d be able to funnel alternates to the LLM, because it would be vastly better equipped to judge what is a reasonable next word than a purely phonetic model.
ianbicking · 9h ago
If you just give the transcript, and tell the LLM it is a voice transcript with possible errors, then it actually does a great job in most cases. I mostly have problems with mistranscriptions saying something entirely plausible but not at all what I said. Because the STT engine is trying to make a semantically valid transcription it often produces grammatically correct, semantically plausible, and incorrect transcriptions. These really foil the LLM.
Even if you can just mark the text as suspicious I think in an interactive application this would give the LLM enough information to confirm what you were saying when a really critical piece of text is low confidence. The LLM doesn't just know what are the most plausible words and phrases for the user to say, but the LLM can also evaluate if the overall gist is high or low confidence, and if the resulting action is high or low risk.
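One way to produce that kind of "suspicious" marking, assuming the openai-whisper package; the 0.6 cutoff and the [?...?] bracket convention are arbitrary choices the downstream LLM prompt would need to explain:
```
import whisper

model = whisper.load_model("base.en")
result = model.transcribe("clip.wav", word_timestamps=True)

marked = []
for segment in result["segments"]:
    for w in segment["words"]:
        token = w["word"].strip()
        # Bracket words the STT engine itself was unsure about.
        marked.append(f"[?{token}?]" if w["probability"] < 0.6 else token)
print(" ".join(marked))
# e.g. "please order the [?ceiling?] fan" -- the LLM can now ask for
# confirmation before acting on the flagged word.
```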
throwawaymaths · 10h ago
do you know if any current locally hostable public transcribers are good at diarization? for some tasks having even crude diarization would improve QOL by a huge factor. i was looking at a whisper diarization python package for a bit but it was a bitch to deploy.
yeah as i said, i couldn't figure out how to deploy whisper-diarization.
iainmerrick · 10h ago
Deepgram does it.
throwawaymaths · 10h ago
sorry i meant locally hostable public. ill edit parent.
pinter69 · 11h ago
Right you are.
I've used speechmatics, they do a decent jon with transcription
theyinwhy · 10h ago
1 error every 78 characters?
pinter69 · 2h ago
The way to measure transcription accuracy is word error and not character error.
I have not really checked (or trusted) speechmatics' accuracy benchmarks.
But from my experience and personal impression it looks good; I haven't done a quantitative benchmark.
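For reference, word error rate (WER) is (substitutions + deletions + insertions) divided by the number of reference words; a quick check with the jiwer package:
```
from jiwer import wer

reference = "the quick brown fox jumps over the lazy dog"
hypothesis = "the quick brown fox jumped over a lazy dog"
print(wer(reference, hypothesis))  # 2 substitutions / 9 words ~= 0.22
```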
theyinwhy · 2h ago
Thanks for your constructive reply on my bad joke. I was referring to your original comment where you had a typo. I just couldn't resist, sorry.
causal · 10h ago
Played with the Huggingface demo, and I'm guessing this page is a little cherry-picked? In particular I am not getting that kind of emotion in my responses.
backnotprop · 8h ago
It is hard to get consistent emotion with this. There are some parameters, and you can go a bit crazy, but it gets weird…
lvl155 · 8h ago
Can’t you get around that by synthetic data?
echelon · 9h ago
I absolutely ADORE that this has swearing directly in the demo. And from Pulp Fiction, too!
> Any of you fucking pricks move and I'll execute every motherfucking last one of you.
I'm so tired of the boring old "miss daisy" demos.
People in the indie TTS community often use the Navy Seals copypasta [1, 2]. It's refreshing to see Resemble using swear words themselves.
They know how this will be used.
[1] https://en.wikipedia.org/wiki/Copypasta
[2] https://knowyourmeme.com/memes/navy-seal-copypasta
How does it work from the privacy standpoint? Can they use recorded samples for training?
I created an API wrapper that also makes installation easier (Dockerized as well): https://github.com/travisvn/chatterbox-tts-api/
Best voice cloning option available locally by far, in my experience.
Quarrel · 5h ago
Fun to play with.
It makes my Australian accent sound very English though, in a posh RP way.
Very natural sounding, but not at all recreating my accent.
Still, amazingly clear and perfect for most TTS uses where you aren't actually impersonating anyone.
echelon · 7h ago
Sadly they don't publish any training or fine tuning code, so this isn't "open" in the way that Flux or Stable Diffusion are "open".
If you want better "open" models, these all sound better for zero shot:
Zeroshot TTS: MaskGCT, MegaTTS3
Zeroshot VC: Seed-VC, MegaTTS3
Granted, only Seed-VC has training/fine tuning code, but all of these models sound better than Chatterbox. So if you're going to deal with something you can't fine tune and you need a better zero shot fit to your voice, use one of these models instead. (Especially ByteDance's MegaTTS3. ByteDance research runs circles around most TTS research teams except for ElevenLabs. They've got way more money and PhD researchers than the smaller labs, plus a copious amount of training data.)
xnx · 7h ago
Great tip. I hadn't heard of MegaTTS3.
teraflop · 10h ago
> Every audio file generated by Chatterbox includes Resemble AI's Perth (Perceptual Threshold) Watermarker - imperceptible neural watermarks that survive MP3 compression, audio editing, and common manipulations while maintaining nearly 100% detection accuracy.
Am I misunderstanding, or can you trivially disable the watermark by simply commenting out the call to the apply_watermark function in tts.py? https://github.com/resemble-ai/chatterbox/blob/master/src/ch...
I thought the point of this sort of watermark was that it was embedded somehow in the model weights, so that it couldn't easily be separated out. If you're going to release an open-source model that adds a watermark as a separate post-processing step, then why bother with the watermark at all?
jchw · 10h ago
Possibly a sort of CYA gesture, kinda like how original Stable Diffusion had a content filter IIRC. Could also just be to prevent people from accidentally getting peanut butter in the toothpaste WRT training data, too.
throw101010 · 3h ago
Stable Diffusion, or rather Automatic1111 (initially the UI of choice for SD models), had a joke/fake "watermark" setting too, which deliberately did nothing besides poke fun at people who thought open source projects would really waste time developing something that could easily be stripped/reverted by virtue of being open source anyway.
vunderba · 10h ago
Yeah, there's even a flag to turn it off in the parser `--no-watermark`. I assumed they added it for downstream users pulling it in as a "feature" for their larger product.
echelon · 9h ago
1. Any non-OpenAI, non-Google, non-ElevenLabs player is going to have to aggressively open source or they'll become 100% irrelevant. The TTS market leaders are obvious and deeply entrenched, and Resemble, Play(HT), et al. have to aggressively cater to developers by offering up their weights [1].
2. This is CYA for that. Without watermarking, there will be cries from the media about abuse (from anti-AI outfits like 404Media [2] especially).
[1] This is the right way to do it. Offer source code and weights, offer their own API/fine tuning so developers don't have to deal with the hassle. That's how they win back some market share.
[2] https://www.404media.co/wikipedia-pauses-ai-generated-summar...
>Without watermarking, there will be cries from the media about abuse (from anti-AI outfits like 404Media [2] especially).
it is highly amusing that they still believe they can put that genie back in the bottle with their usual crybully bullshit.
nine_k · 6h ago
Some measures like that still sort of work. Try loading a scanned picture of a dollar bill into Photoshop. Try printing it on a color printer. Try printing anything on a color printer without the yellow tracking pixels.
A lock need not be infinitely strong to be useful; it just needs to take more resources to crack than the locked thing is worth.
echelon · 8h ago
Nevermind, this is just ~3/10 open, or not really open at all [1]:
https://github.com/resemble-ai/chatterbox/issues/45#issuecom...
> For now, that means we’re not releasing the training code, and fine-tuning will be something we support through our paid API (https://app.resemble.ai). This helps us pay the bills and keep pushing out models that (hopefully) benefit everyone.
Big bummer here, Resemble. This is not at all open.
For everyone stumbling upon this, there are better "open weights" models than Resemble's Chatterbox TTS:
Zeroshot TTS: MaskGCT, MegaTTS3
Zeroshot VC: Seed-VC, MegaTTS3
These are really good robust models that score higher in openness.
Unfortunately only Seed-VC is fully open. But all of the above still beat Resemble's Chatterbox in zero shot MOS (we tested a lot), especially the mega-OP Chinese models.
(ByteDance slaps with all things AI. Their new secretive video model is better than Veo 3, if you haven't already seen it [2]!)
You can totally ignore this model masquerading as "open". Resemble isn't really being generous at all here, and this is some cheap wool over the eyes trickery. They know they retain all of the cards here, and really - if you're just going to use an API, why not just use ElevenLabs?
Shame on y'all, Resemble. This isn't "open" AI.
The Chinese are going to wipe the floor with TTS. ByteDance released their model in a more open manner than yours, and it sounds way better and generalizes to voices with higher speaker similarity.
Playing with open source is a path forward, but it has to be in good faith. Please do better.
[1] "10/10" open includes: 1. model code, 2. training code, 3. fine tuning code, 4. inference code, 5. raw training data, 6. processed training data, 7. weights, 8. license to outputs, 9. research paper, 10. patents. For something to be a good model, it should have 7/10 or above.
The weights are indeed open (both accessible and licensing-wise): you don't need to put that in scare quotes. Training code is not. You can fine-tune the weights yourself with your own training code. Saying that isn't open is like saying ffmpeg isn't open because it doesn't do everything I need it to do and I have to wrap it with my own code to achieve my goals.
echelon · 4h ago
Machine learning assets are not binary "open" or "closed". There is a continuum of openness.
To make a really poor analogy, this repo is like a version of Linux that you can't cross-compile or port.
To make another really poor (but fitting) analogy, this is like an "open core" SaaS platform that you know you'll never be able to run the features that matter on your own.
This repo scores really low on the "openness" continuum. In this case, you're very limited in what you can do with Chatterbox TTS. You certainly can't improve it or fit it to your data.
> You can fine-tune the weights yourself with your own training code.
This will never be built by anyone, and they know that. If it could be, they'd provide it themselves.
If you're considering Chatterbox TTS, just use MegaTTS3 [1] instead. It's better by all accounts.
[1] https://github.com/bytedance/MegaTTS3
This can be cross-compiled/ported in the Linux analogy. The Linux analogy would be more like: a kernel dev wrote code for some part of the Linux kernel using JetBrains' CLion. He used features of CLion that made this process much easier than if he had written the code using `nano`. By your logic, the resulting kernel code is not "open" because the tooling used to create it is not open. This is, of course, nonsense.
I agree that the project as a whole is less open than it could be, but the weights are indeed as open as they can be, no scare quotes required.
echelon · 51m ago
I really don't think your analogy fits the absurdity of lacking the tooling. It's more like you have to decompile an N64 cartridge ROM and don't have the tools. But I don't want to play that game.
I'll up the ante. I'll bet you money that nobody forks this and adds fine tuning for at least a year.
And someone else fine-tuned it for German: https://huggingface.co/SebastianBodza/Kartoffelbox-v0.1
If you're going to drop weights on unsuspecting developers (who might not be familiar with TTS) and make them think that they'll fit their use case, that's a bit of a bait-and-switch.
Chatterbox TTS is only available over API for fine tunes. That's an incredibly saturated market, and there are better quality and cheaper models for this.
Chatterbox TTS is equivalent to already-released semi-open weights from ByteDance and other labs, and those models already sound and perform better.
It'd be truly exciting if Chatterbox fine tunes could be done as open weights, similar to how Flux operates. Black Forest Labs has an entire open weights ecosystem built around them. While they do withhold their pro / highest quality variants, they always release open weights with training code for each commercial release. That's a much better model for courting open source developers.
Another company doing "open weights" right is Lightricks with LTX-1. They have a commercial studio, but they release all of their weights and tuning code in the open.
I don't see how this is a carrot for open source. It's an ad for the hosted API.
gcr · 7h ago
not a single top-tier lab has a "10/10 open" model for any model type for any learning application since ResNet, it's not fair to shit on them solely for this
audiala · 3h ago
What is the current state of the art for open source multilingual TTS? I have found Kokoro to be great at English as well, but am still searching for a good solution for French, Japanese, German...
tevon · 3h ago
I just tested it out locally, really excellent quality, the server was easy to set up and well documented.
I'd love to get to real-time generation if that's in the pipeline? Would like to use it along with Home Assistant.
philipkiely · 8h ago
Example implementation with sample inference code + voice cloning example: https://github.com/basetenlabs/truss-examples/tree/main/chat...
Still working on streaming.
But if the model is any good someone will probably find a way to optimize it to run on even less.
Edit: Got it running on an old Nvidia 2060, I'm seeing ~5 GB VRAM peak.
magicalhippo · 8h ago
Looking at the issues page, it seems it's not well optimized[1] currently.
So out of the box it seems quite beefy consumer hardware will be needed for it to perform reasonably. However it seems like there's significant potential for improvements, though I'm no expert.
[1]: https://github.com/resemble-ai/chatterbox/issues/127
It's not a silly question, it's the best question!
If something can be run for free but it's cheaper to rent, it voids the DIY aspect of it.
01HNNWZ0MV43FF · 9h ago
I was going to report how it runs on an old CPU but after fussing with it for about 30 minutes, I can't even get it to run.
Listing the issues in case it helps anyone:
- It doesn't work with Python 3.13, luckily `uv` makes it easy to build a venv with 3.12
- It said numpy 1.26.4 doesn't exist. It definitely does, but `uv pip` was searching for it on the pytorch repo. I passed an `--index-strategy` flag so it would check other repos. This could just be a bug in uv, but when I see "numpy 1.26.4 doesn't exist" and numpy is currently on 2.x, my brain starts to cramp up.
- The `pip install chatterbox-tts` version has a bug in CPU-only mode, so I cloned the Git repo
- The version at the tip of main requires `protobuf-compiler` installed on Debian
- I got a weird CMake error that I can't decipher. I think maybe it's complaining that the Python dev headers are not installed. Why would they be, I'm trying to do inference, not compile Python...
I know anger isn't productive but this is my experience almost any time I'm running Somebody Else's Python Project. Hit an issue, back up, hit another issue, back up, after an hour it still doesn't run.
thorum · 8h ago
We’ll know AGI has arrived when it can figure out Python dependency conflicts
blharr · 8h ago
Maybe this wasn't here when you looked at it, but maybe try Python 3.11?
> We developed and tested Chatterbox on Python 3.11 on Debain 11 OS; the versions of the dependencies are pinned in pyproject.toml to ensure consistency.
bityard · 10h ago
Not a silly question, I came here to ask too. Curious to know whether I need a GPU costing 4 digits or if it will run on my 12-year-old thinkpad shitbox. Or something in between.
nmstoker · 10h ago
I've found it excellent with really common accents but with other accents (that are pretty common too) it can easily get stuck picking a different accent.
For instance, several Scottish recordings ended up Australian; likewise a fairly mild Yorkshire accent.
a_wild_dandan · 6h ago
I think this says more about Scottish than the model.
Quarrel · 5h ago
> For instance several Scottish recordings ended up Australian
Funnily enough, it made my Australian accent sound very English RP. I was suddenly very posh.
m3sta · 6h ago
Like a professional actor!
abraxas · 11h ago
Are these things good enough to narrate a book convincingly or does the voice lose coherence after a few paragraphs being spoken?
vunderba · 10h ago
Most of these TTS systems tend to fall apart the longer the text gets - it's a good idea to just wrap any longform text into separate paragraph-segmented batches and then stitch them back together again at the end.
I've also found that if your one-shot sample wave isn't really clean that sometimes Chatterbox produces random unholy whooshing sounds at the end of the generated audio which is an added bonus if you're recording Dante's Inferno.
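A minimal sketch of that chunk-and-stitch approach, using the ChatterboxTTS calls shown in the project README (from_pretrained / generate / model.sr); verify the exact signatures against the repo:
```
import torch
import torchaudio as ta
from chatterbox.tts import ChatterboxTTS

model = ChatterboxTTS.from_pretrained(device="cuda")
long_text = open("chapter.txt").read()

# Generate one short clip per paragraph, then concatenate along time.
paragraphs = [p.strip() for p in long_text.split("\n\n") if p.strip()]
chunks = [model.generate(p) for p in paragraphs]
wav = torch.cat(chunks, dim=-1)
ta.save("narration.wav", wav, model.sr)
```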
Once it's good enough Audible will be flooded with AI-narrated books so we'll know soon. (The only question is whether Amazon would disclose it, ofc)
landl0rd · 10h ago
Flip side: a solution where I can have an audiobook auto-generated for a book without one (or use an existing ebook rather than paying Audible $30 for their version) that's "good enough" is a legit improvement. AI generated isn't as good, but it's better than nothing. Also, being able to interrupt and ask for more detail/context would be pretty nice. Like I'm reading some Pynchon and I have to stop sometimes and look up the name of a reference to some product nobody knows now, stuff like that.
skygazer · 8h ago
If you're willing to forgo the interactive LLM bit, kokoro-tts (just a script using Kokoro-ONNX) takes epubs and outputs a series of wavs or mp3s that need to be stitched together into chapters or audiobook m4a with some ffmpeg fu. I've listened to several generated audiobooks, and found them pretty good. Some nice generic narration-like prosody. It uses espeak-ng to generate phonemes and passes those to the model to render voice, so it generally pronounces things quite well. It comes with a handful of nice voices and several can be blended, but no easy voice cloning, like chatterbox, that I'm aware of.
https://github.com/nazdridoy/kokoro-tts/blob/main/kokoro-tts
I've used this repo and it's great. It was one of many things that inspired me in building a similar tool. I built https://desktop.with.audio
It was important to me that it be 100% private and local, and I wanted it to be a one-time payment solution. Because it processes your data locally, it can be a one-time payment text-to-speech app.
If you are interested in creating audiobooks from epubs, check this demo: https://www.youtube.com/watch?v=pOHzo6Oq0lQ
If you are interested in listening while reading with text highlighting, check these demos:
- https://www.youtube.com/watch?v=8yJ-lsbzAuw
- https://www.youtube.com/watch?v=y8wi4d8xmnw
Audible has already flooded their store with generated audio books. Go to the "Plus Catalog" and it's filled with them. The quality at the moment is complete trash, but I can't imagine it won't get better quickly.
The whole audiobook business will eventually disappear - probably within the decade. There will only be ebooks and on-device AI assistants will read it to you on demand.
I imagine it'll go like this: First pre-generated audiobooks as audio files. Next, online service to generate audio on demand with hyper customizable voices which can be downloaded. Next, a new ebook format which embeds instructions for narration and pronunciation to be read on-device. Finally, AI that's good enough to read it like a storyteller instantly without hints.
satvikpendem · 5h ago
> There will only be ebooks and on-device AI assistants will read it to you on demand.
Honestly I read (or rather, listen to) a lot of books already by getting the epubs onto my phone then using a very basic TTS to read it out. Yes, they're definitely not as lifelike as even the most common AI TTS systems but they're good enough to listen to at high speed. Moon+ Reader is pretty good for Android, not sure about iOS.
I consult for a company in the space (not Resemble) and I can definitely say it can narrate a book.
wsintra2022 · 10h ago
A year ago, for fun, I made a friend a Carl Rogers therapy audiobook with an Attenborough-esque reading. It was pretty good even then, so it should be better now.
ineedasername · 9h ago
The emotional exaggeration is interesting, though I don't think I've come across anything quite so versatile and easy to "sculpt" as ElevenLabs and its ability to generate a voice from a description of how you want the voice to sound. SparkTTS allows some additional parameters, and its project on GitHub has placeholders in its code that indicate the model might be refined for more fine-grained emotional control. As it is, I've had some success with it and other models by trying to influence prosody and tonality with some heavy-handed cues in the text, which can then be used with VC to get closer to desired results, but it's a much more cumbersome process than Eleven.
stevage · 9h ago
Interesting demo. A few observations, having uploaded a snippet of my own voice, and testing with some of my own text:
- the output had some of the qualities of my voice, but wasn't super similar. (Then again, the fact it could even do this from such a tiny snippet was impressive)
- increasing "CFG/pace" (whatever CFG is) even a little bit often just breaks down into total gibberish
- it was very inconsistent whether it would come out with a kind of British accent or an American one. (My accent is Australian...)
- the emotional exaggeration was interesting, but it seemed to vary a lot exactly what kind of emotion would come out
racecar789 · 4h ago
I’d sign up for a service that calls a pharmacy on my behalf to refill prescriptions. In certain situations, pharmacies will not list prescriptions on their websites, even though they have the prescriptions on file, which forces the customer to call by phone — a frustrating process.
I do feel bad for pharmacists, their job is challenging in so many ways.
jeroenhd · 1h ago
Didn't Google already demo that with Google Duplex? It's not available here so I can't test it, but I think that's exactly the kind of thing duplex was designed to do.
Although, from a risk avoidance point of view, I'd understand if Google wanted to stay as far away from having AI deal with medication as possible. Who knows what it'll do when it starts concocting new information while ordering medicine.
ipsum2 · 3h ago
The voice cloning is okay, not as good as Eleven Labs. There's a Rick (from Rick and Morty) voice example, and the generated audio sounds muffled and low quality. I appreciate that it's open source though.
iambateman · 8h ago
Just a regular reminder to tell your friends and family to be extra skeptical about phone conversations.
It’s becoming much more likely that the friend who desperately needs a gift card to Walmart isn’t the friend at all. :(
probably_wrong · 1h ago
My family members speak Spanish with an Argentinean accent. From what I've seen in the space it looks like I'm safe.
jeroenhd · 1h ago
Public research and well-intentioned AI companies are all focusing on (white) American English, but that doesn't mean the technology isn't being refined elsewhere. The scamming industry is massive and already goes to depths like slavery to get the job done.
I wouldn't assume you're safe just because the tech in your phone can't speak your language.
chii · 3h ago
the easiest way to defeat phone fraud is to decide ahead of time on a verbal password between family members (and close friends, if they're close enough that you'd lend them money).
In a real scenario, they'd know the verbal password and you can authenticate them. Drum it into them that this password will prevent other people from impersonating you in this brave new world of ai voices and even video.
jimjimwii · 1h ago
That is more or less what I did with my parents, but this approach is still susceptible to active MITM attacks.
2-factor authentication through a secure app or a trusted family member is probably also needed, though I haven't tackled that part with them yet.
chii · 1h ago
> 2 factor authentication through a secure app
the problem is that the sort of emergency scenario in which a family member would need help often can't go through a secured app. It's often just a telephone, with a number that you cannot recognize - imagine getting that phone call from a police station in the middle of nowhere when arrested; you don't have access to any of your personal belongings as they're confiscated. The phone is a landline from the police station!
Therefore, a verbal password is needed, as this scenario is exactly how a scammer would present as the emergency that they need help (usually, wire some dollars to this account to bail out).
mattigames · 6h ago
My bet is that the government will at some point have to put pressure on Walmart and others to stop selling those gift cards completely; impersonation is getting too easy and too cheap for there not to be a flood of those scam calls in the near future.
MrThoughtful · 4h ago
How do you set the voice?
On the Huggingface demo, there seems to be no option for it.
It has a female voice. Any way to set it to a male voice?
ipsum2 · 3h ago
It's voice cloning. Maybe not available in the demo, but you just provide a different input.
Also, a deployable model: https://lightning.ai/bhimrajyadav/ai-hub/temp_01jwr0adpqf055...
You failed to mention that this is an ad for the company you work at. Also, the links don't even work without signing up for some shitty service.
pzo · 7h ago
It's only for English sadly
darccio · 2h ago
Are there any good options for non-English languages?
jeroenhd · 1h ago
It's not on the same level in terms of emotion, but I believe the research https://github.com/CorentinJ/Real-Time-Voice-Cloning was based on is mostly oriented around Chinese first (and then English). It seems to work well enough if you and the voice you're cloning speak the same language though I haven't tested it much.
internet_points · 13m ago
> Supported Lanugage
> Currenlty only English.
meh
andy_xor_andrew · 10h ago
in my experience, TTS has been a "pick two" situation:
- fast / cheap to run
- can clone voices
- sounds super realistic
from what I can tell, Chatterbox is the first that apparently lets you pick 3! (have not tried it myself yet, this is just what I can deduce)
CGamesPlay · 8h ago
Can you share one that is fast/cheap to run and sounds super realistic? I'm very interested in finding a good TTS and not really concerned about cloning any particular voice (but would like a "distinctive" voice that isn't just a preset one).
pzo · 7h ago
It's also about whether you want multilingual support and whether you want to run on edge devices. Chatterbox only supports English.
benob · 3h ago
Watermarking is easily disabled in the code - the `watermarked_wav = self.watermarker.apply_watermark(...)` call is just a post-processing step. I am wondering when they will release model weights with embedded watermarking.
Shopper0552 · 9h ago
Anyone know a good free open source speech to text? Looking for something for my laptop which is running Fedora KDE plasma.
pzo · 7h ago
Whisper large-v3-turbo if you need support for many languages and want something fast enough for deployment even on smartphones (WhisperKit). Can also try lite-whisper on HF if you need even smaller weights and slightly faster speed.
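A quick local test, assuming the openai-whisper package, which ships a "turbo" alias for large-v3-turbo (for phones, faster-whisper or WhisperKit are better fits):
```
import whisper

model = whisper.load_model("turbo")  # alias for large-v3-turbo
result = model.transcribe("meeting.mp3")
print(result["text"])
```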
They should put the meaning of "TTS" in the readme somewhere, probably near the top. Or their website.
byteknight · 10h ago
TTS is a very common initialism for Text-to-Speech going back to at least the 90s.
stevage · 9h ago
Yeah, it's a very common initialism for people who work in the space, and have some context.
j2kun · 10h ago
So? Acronym soup is bad communication.
aquariusDue · 10h ago
I miss glossaries.
dylan604 · 10h ago
Good writing rules can still be used even for repo READMEs: the first time an acronym appears, spell it out to show what it means. Too many assumptions are being made that everyone is going to know it. Sometimes the author is too inside baseball and assumes anyone reading their README will already know the subject. Not all devs are literature majors and probably just never think about these things.
rapfaria · 7h ago
An AI-powered browser extension that shows on hover the most likely acronym meaning, based on context you say?
sdenton4 · 10h ago
Table Top Simulator.
It's obviously an AI for playing wargames without having to bother painting all the miniatures, or finding someone with the same weird interest in Balkan engagements during the Napoleonic era.
kiririn7 · 9h ago
definitely worse than the new ElevenLabs model (v3). that model is really good
plangary123 · 8h ago
I disagree
pradeepodela · 3h ago
What is the latency?
az226 · 10h ago
How does one train a TTS model with an LLM backbone? Practically, how does this work?
cyanf · 10h ago
you use a neural audio codec to encode audio into codebooks
then you could treat the codebook entries as tokens and treat audio generation as a next token prediction task
you then take the codebook entries generated and run it through the codec’s decoder and yield audio
it works surprisingly well
speech text models (tts model with an llm as backbone) is the current meta
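A sketch of that codec round trip using Meta's EnCodec as one real neural audio codec; the LM in the middle is elided, but it would be trained to predict the integer codes from text:
```
import torch
import torchaudio
from encodec import EncodecModel
from encodec.utils import convert_audio

codec = EncodecModel.encodec_model_24khz()
codec.set_target_bandwidth(6.0)

wav, sr = torchaudio.load("speech.wav")
wav = convert_audio(wav, sr, codec.sample_rate, codec.channels).unsqueeze(0)

with torch.no_grad():
    frames = codec.encode(wav)                      # list of (codes, scale)
codes = torch.cat([c for c, _ in frames], dim=-1)   # [batch, n_codebooks, T]

# <- this is where next-token prediction over `codes` would happen,
#    conditioned on text, instead of this identity round trip.

with torch.no_grad():
    audio = codec.decode(frames)                    # tokens -> waveform
torchaudio.save("roundtrip.wav", audio.squeeze(0), codec.sample_rate)
```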
decide1000 · 10h ago
How does it perform on multi-lingual tasks?
yjftsjthsd-h · 10h ago
The readme says it only supports English
causality0 · 9h ago
Anyone know how this compares to Kokoro? I've found Kokoro very useful for generating audiobooks, but it almost always pronounces words with paired vowels incorrectly. Daisy becomes die-zee, leave becomes lay-ve, etc.
nmstoker · 22m ago
If you're running Kokoro yourself then it might be worth checking your phonemizer / espeak-ng installs in case they are messing up the phonemes for those words (which are then passed on as inputs to Kokoro itself)
BigBananaGuy · 8h ago
Chatterbox sounds much more natural. The zero shot voice cloning and exaggeration feature is sick!
tuananh · 8h ago
for this, what does it take to support another language?
hsavit1 · 6h ago
another TTS that only supports English. This really irritates me.
nmstoker · 13m ago
Maybe that irritation could be channelled into contributing to one that supports more than English? Even small steps help: tweaking docs, adding missing/extra examples, fielding a few issues in GH (most are usually simple misunderstandings where a quick pointer can easily help a beginner).
jeroenhd · 1h ago
For what it's worth, there are also a whole bunch of models that speak Chinese.
So far the US and China are spearheading AI research, so it makes sense that models optimize for languages spoken there. Spanish is an interesting omission on the US part, but that's probably because most AI researchers in the US speak English even if their native tongue is Spanish.