Launch HN: Miyagi (YC W25) turns YouTube videos into online, interactive courses
We use LLMs to automatically generate quizzes, practice questions, and real-time feedback from any educational video or resource—turning passive watching into active learning. Here’s a short demo: https://youtu.be/alO7FaorHOY.
Improving education has always been tricky. Bloom’s 2-sigma problem (the finding that students with a high-quality personal tutor perform about two standard deviations better than those taught conventionally) has persisted even as technology has advanced.
We met at MIT as CS majors and have always been passionate about education. Over the years, we’ve become teachers and experts in subjects like chess, algorithms, math, languages, and ninja warrior. A common theme was that we both heavily relied on YouTube to learn.
YouTube has incredible content for learning pretty much anything, but it’s buried in a lot of distractions. Also, passively watching videos is far less effective than taking notes, asking questions, and doing practice problems, which is the kind of active learning we aim to enable with Miyagi Labs.
Our solution is essentially a multi-step function that takes in a YouTube playlist (or a list of any resources) and outputs an entire course with summaries, questions, answers, and more. The pipeline is roughly: video/resource -> transcript/text -> chunks -> summaries and questions -> answers to those questions, with some other features along the way.
We mostly use prompting and different models at each step to make the course as useful as possible. Certain topics call for more practice problems versus comprehension questions, and we use reasoning models for highly technical subjects.
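For the curious, here’s roughly what that looks like in code. Everything below (function names, prompts, chunk size, model routing) is an illustrative sketch rather than our production pipeline:

```python
# Illustrative sketch of the course-generation pipeline described above.
# Function names, prompts, chunk size, and model routing are all hypothetical.
from openai import OpenAI

client = OpenAI()

def ask(model: str, prompt: str) -> str:
    resp = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

def build_course(transcript: str, technical: bool) -> list[dict]:
    # Reasoning-heavy model for highly technical subjects, cheaper model otherwise
    model = "o3-mini" if technical else "gpt-4o-mini"
    chunks = [transcript[i:i + 4000] for i in range(0, len(transcript), 4000)]
    sections = []
    for chunk in chunks:
        summary = ask(model, f"Summarize this lecture segment:\n{chunk}")
        question = ask(model, f"Write one practice question about:\n{chunk}")
        answer = ask(model, f"Answer this question using only the segment.\n"
                            f"Question: {question}\nSegment: {chunk}")
        sections.append({"summary": summary, "question": question, "answer": answer})
    return sections
```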
We launched about three months ago and currently have 400+ courses and partnerships with some businesses and awesome creators. Some of our popular courses include 3Blue1Brown’s linear algebra course, a botany course on plants and ecology, and YC’s How to Start a Startup series.
Our product resembles classical MOOC-style course platforms in terms of UI, but is more interactive: it’s really easy to ask a question or get custom feedback compared to a static course on Coursera. It’s also comparable to AI tutor sites, but we try to build more of a community and require less activation energy from the learner. We’re basically betting that AI can hugely improve education, but that students still want to learn from their favorite creators and want baseline shared resources for standard topics, augmented with personalized features.
You can try it here: https://miyagilabs.ai (no login required for most courses—but if you sign up you can also create your own course).
We’d love your feedback on what kinds of videos/resources you’d like to learn from, what’s missing from current learning tools, and if you know any creators or educators who would like to collaborate. Happy to hear any feedback and answer any questions!
I’m still coming up to speed on the full scope of what your product does, but I’m curious what you’d say to someone like pal2tec, who has some fairly strong, and I think reasonable, views about the impact of content summarization [0].
Getting direct buy-in and sharing revenue is great. But it’s not clear to me that this is the only thing that creators care about, i.e. are you still summarizing content you’re not monetizing without creator buy-in?
- [0] https://m.youtube.com/watch?v=ULUSS1-G3do
Just watched the video. I don't completely agree with his take, but I totally respect the viewpoint, and a payment split to the creator whenever someone summarizes their video makes sense.
Yes, we do offer the option to summarize content without creator buy-in, although our case seems a bit different: we're also augmenting the content with questions and other exercises, which should drive users to watch the video more rather than skip it and just read the summary.
But you're right, it's not perfect. If we ever have creators who don't want their stuff on our site we'd totally respect their wishes, but that hasn't happened so far, so this seems like the best approach.
Do note that this behavior of "opting creators into a program without their consent, justifying it via revenue share, and CYA with a 'they can opt out if they want to!' shield" is still... awful optics.
The whole Brave scandal (https://news.ycombinator.com/item?id=18736888) is a good case study on how laypeople will perceive this. It's not popular at all.
From a creator’s point of view, I think the concern would be about how true this remains as the product grows/evolves.
But as long as there’s an opt-out, that seems like a reasonable approach.
But in summary, YouTube is rolling out AI summarization features on some content without giving creators any say in the matter.
Concerns include:
- Low-quality summarization of high-quality content devalues it, and in many cases is just a worse version of the original
- Reduced watch time on the channel can hurt the channel's success over time
- YouTube is not doing anything to compensate creators for that reduced watch time, such as sharing revenue from viewers who primarily interact with the AI summary
But I think he articulates this much better than I did. Much better to watch the video.
Association with that brand would be very valuable.
The referenced video is from a photographer who has some pretty strong and reasonable thoughts on this - specifically the features YouTube itself is experimenting with.
Depending on the nature of the AI product, it has the potential to completely sideline creators.
Not saying that’s what Miyagi is doing, and it sounds like they’re actually working with creators on this, which is good. But the broader point is that such tools need to be thoughtfully implemented.
- [0] https://m.youtube.com/watch?v=ULUSS1-G3do
I’m curious why you didn’t use multiple choice for the exercises? I feel like those would be easier than typing out full answers and closer to MOOC-style homework. Maybe have a longer written question at the end of a section.
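To make the suggestion concrete, something like this is the item shape I have in mind (hypothetical, obviously not your actual format), which would also make grading instant and deterministic:

```python
# Hypothetical shape for an auto-gradable multiple-choice item; grading needs
# no LLM call, so the red/green feedback is instant.
from dataclasses import dataclass

@dataclass
class MCQItem:
    prompt: str
    choices: list[str]
    answer_index: int  # index into choices

    def grade(self, selected: int) -> bool:
        return selected == self.answer_index

item = MCQItem(
    prompt="In Bloom's study, tutored students scored how far above the class average?",
    choices=["0.5 sigma", "1 sigma", "2 sigma", "3 sigma"],
    answer_index=2,
)
assert item.grade(2)
```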
The exercises work pretty well; I like the red-for-wrong vs. green-for-right highlighting. It does feel a bit like MOOC-style discussions. The tutor doesn’t just tell you the answers, which is cool, but something about talking with it feels a bit flat. And the flashcards weren’t very helpful for the course I picked.
I could see myself doing some courses like this with some more gamification. Being able to filter by course provider (Y Combinator or MIT) would be cool too.
Anything specific we could improve about talking to the tutor? We’ll definitely add some of those features and improve the gamification.
Something I’ve been doing more and more lately is asking ChatGPT to create a detailed description of a topic that can be read aloud for whatever duration I plan on driving. This works exceptionally well, even for short 5-minute drives.
I wonder if the same can be done for video-based content. Sometimes I’m short on time but still want to learn something.
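For anyone who wants to try it, here's a minimal sketch using the OpenAI Python SDK; the 150 words-per-minute spoken pace is just my rough assumption:

```python
# Minimal sketch of the "read aloud for the length of my drive" trick.
# The ~150 words-per-minute spoken pace is a rough assumption.
from openai import OpenAI

client = OpenAI()

def drive_lesson(topic: str, minutes: int) -> str:
    words = minutes * 150  # approximate spoken-word budget
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": f"Write a clear spoken explanation of {topic}, "
                       f"about {words} words, no headings or lists.",
        }],
    )
    return resp.choices[0].message.content

print(drive_lesson("Bloom's 2-sigma problem", minutes=5))
```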
Here are some of the very best in the category, it would be really cool if you partnered with any of these.
https://www.youtube.com/@hungryhorsepoker
https://www.youtube.com/@CarrotCornerPoker
https://www.youtube.com/@PokerCoaching
UPDATE: " This course failed to generate. Please try again or contact us."
I really like a lot of the components of your idea, but the execution is underwhelming. Right now it feels like you're providing middling tools for too many components without nailing any of them. Alternatively, I could watch the YT video (it already has a transcript), take notes in any tool, and ask questions of any LLM; the missing piece is context, so that's where it feels like you should focus.
Re: assessments; it feels like you're being distracted here; I'm not convinced that's how your natural target market learns in this modality. We generate quizzes in our product, but it's typically used in the "internal compliance" segment - think mandatory training like food safety for food preparers - not the external (typically adult) self-improvement market (which is huge!). If you're going to do assessments you need a lot of non-AI boilerplate around tracking, validation, and certification/credentials. My two cents: quizzes in your app are a cool demo feature with little real value.
Blows my mind that 1:1 tutoring dwarfs the impact of other factors such as socioeconomic status, reinforcement, assigned homework, classroom morale, etc. (at least according to the researchers).
Does anyone know if this thesis has been replicated? Or if these results hold in modern times (original study was 40 years ago)?
[1] https://www.science.org/doi/full/10.1126/scirobotics.aat5954 [2] https://www.tandfonline.com/doi/abs/10.1080/00461520.2011.61... [3] https://journals.sagepub.com/doi/abs/10.3102/0013189X2091279...
Learning languages seems a bit different in that there's more focus on repetition than on comprehension questions, but certain topics (like grammar concepts) could work well in our current structure. Also, there are some really popular YouTube channels for learning just about any language, so we definitely see potential to augment those videos so people can learn more accurately and effectively.
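For those repetition-heavy topics we'd likely pair the generated questions with a spaced-repetition scheduler. A minimal sketch of the classic SM-2 algorithm (the one behind most flashcard apps; this is illustrative, not our production code):

```python
# Minimal SM-2-style spaced-repetition scheduler; illustrative sketch only.
def sm2_review(quality: int, reps: int, interval: int, ease: float):
    """quality: 0-5 self-rating; returns (reps, interval_days, ease)."""
    if quality < 3:                     # failed recall: restart the card
        return 0, 1, ease
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    if reps == 0:
        interval = 1
    elif reps == 1:
        interval = 6
    else:
        interval = round(interval * ease)
    return reps + 1, interval, ease

# Example: three successful reviews of one vocabulary card
state = (0, 0, 2.5)
for q in (5, 4, 5):
    state = sm2_review(q, *state)
    print("next review in", state[1], "days")   # 1, 6, 16
```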
For self-created courses, it's generally been quite accurate and we're playing around with some eval metrics to make it as good as possible, but it's definitely a concern.
I am curious if you are using any methodologies from the digital learning space, like knowledge tracing to help ensure that learners are actually retaining knowledge and improving over time, or knowledge mapping to understand the gaps that might exist in your content?
Do you maintain your own skills taxonomy? Are you tagging your questions or assessment events with knowledge components or skills of any kind to understand what you are testing your students for?
All of this is really cool; I’m just curious what level you’ve gotten to on some of this, because there is a very fine line in online educational content between making the student’s life more difficult and actually helping them learn, especially when you get into auto-generating content, and especially if you aren’t following solid principles to verify your content. (I work for an online education company, particularly in the space of training LLMs and verifying their outputs for use in educational contexts.)
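To make the knowledge-tracing part concrete, this is the textbook Bayesian Knowledge Tracing update I mean; the parameters here are illustrative defaults that you would normally fit per skill:

```python
# Textbook Bayesian Knowledge Tracing update (Corbett & Anderson style).
# Parameter values are illustrative defaults, not fitted to any real data.
def bkt_update(p_know: float, correct: bool,
               p_slip: float = 0.1, p_guess: float = 0.2,
               p_learn: float = 0.15) -> float:
    if correct:
        cond = p_know * (1 - p_slip) / (
            p_know * (1 - p_slip) + (1 - p_know) * p_guess)
    else:
        cond = p_know * p_slip / (
            p_know * p_slip + (1 - p_know) * (1 - p_guess))
    # Chance the student learned the skill on this practice opportunity
    return cond + (1 - cond) * p_learn

p = 0.3                      # prior mastery estimate for one tagged skill
for obs in (True, True, False, True):
    p = bkt_update(p, obs)
print(f"estimated mastery: {p:.2f}")
```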
- For official courses, the creators are doing some quality control and make necessary fixes.
- For self-created courses, there is zero human supervision or quality control.
Is that correct?