Show HN: BinaryRPC – Lightweight WebSocket-based RPC framework in modern C++
I’m a recent CS graduate. Over the past few months I wrote BinaryRPC, an open-source RPC framework in modern C++20 focused on low-latency, binary WebSocket messaging.
Why I built it:
* Wanted first-class session support, pluggable QoS levels and a simple middleware chain (global, route-specific, multi-handler) without extra JSON/XML parsing.
* Easy developer experience.
A quick feature list:
* Binary WebSocket frames – minimal overhead
* Built-in session layer (login / reconnect / heartbeat)
* QoS1 / QoS2 with automatic ACK & retry
* Plugin system – rooms, msgpack, etc. can be added in one line
* Thread-safe core: RAII + folly
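To make the middleware-chain idea concrete, here is a tiny generic sketch of how global plus route-specific middleware can compose. This is just the pattern in isolation; Router, useGlobal, dispatch, etc. are made-up names for illustration, not BinaryRPC's actual API:

    #include <functional>
    #include <iostream>
    #include <string>
    #include <unordered_map>
    #include <vector>

    // Sketch of a global + route-specific middleware chain (the pattern only;
    // these names are not BinaryRPC's real API).
    using Request    = std::string;
    using Next       = std::function<void()>;
    using Middleware = std::function<void(Request&, Next)>;
    using Handler    = std::function<void(Request&)>;

    class Router {
    public:
        void useGlobal(Middleware m)                     { global_.push_back(std::move(m)); }
        void use(const std::string& route, Middleware m) { perRoute_[route].push_back(std::move(m)); }
        void handle(const std::string& route, Handler h) { handlers_[route] = std::move(h); }

        void dispatch(const std::string& route, Request req) {
            // Full chain = global middleware first, then route-specific ones.
            std::vector<Middleware> chain = global_;
            if (auto it = perRoute_.find(route); it != perRoute_.end())
                chain.insert(chain.end(), it->second.begin(), it->second.end());
            run(chain, 0, req, handlers_[route]);
        }

    private:
        void run(std::vector<Middleware>& chain, size_t i, Request& req, Handler& h) {
            if (i == chain.size()) { if (h) h(req); return; }    // end of chain: handler
            chain[i](req, [&] { run(chain, i + 1, req, h); });   // middleware calls next()
        }

        std::vector<Middleware> global_;
        std::unordered_map<std::string, std::vector<Middleware>> perRoute_;
        std::unordered_map<std::string, Handler> handlers_;
    };

    // Usage:
    //   Router r;
    //   r.useGlobal([](Request& q, Next next) { std::cout << "log: " << q << "\n"; next(); });
    //   r.handle("chat.say", [](Request& q)   { std::cout << "say: " << q << "\n"; });
    //   r.dispatch("chat.say", "hello");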
Still early (solo project), so any feedback on design, concurrency model or missing must-have features would help a lot.
Thanks for reading!
Also see "Chat Server in 5 Minutes with BinaryRPC": https://medium.com/@efecanerdem0907/building-a-chat-server-i...
I built this project because I needed a simple but fast WebSocket-based RPC layer for my own real-time side projects. Existing options felt heavy or JSON-only, so I wrote something binary-focused and plugin-friendly.
I’d really appreciate any feedback on:
• Overall architecture / design smells
• Concurrency model (thread-pool vs async IO)
• “Must-have” features before this is production-ready
Design notes and a 5-minute chat-server demo are in this short post: https://medium.com/@efecanerdem0907/building-a-chat-server-i...
Any comments, suggestions or PRs are welcome. Thanks again!
* Works everywhere today (browsers, LB, PaaS) with zero extra setup.
* One upgrade -> binary frames; no gRPC/proto toolchain or HTTP/3 infra needed.
* Simple reliability: TCP handles ordering; I add optional QoS2 on top.
* Lets me focus on session/room/middleware features first; transport is swappable later.
QUIC / gRPC-HTTP/3 is on the roadmap once the higher-level API stabilises.
- gRPC is not a library I would trust with safety or privacy. It's used a lot but isn't a great product. I have personally found several fuckups in gRPC and protobuf code resulting in application crashes or risks of remote code execution. Their release tagging is dogshit, their implementation makes the standard library and Boost look easy to read by comparison, and neither project takes the SDLC seriously, since there aren't sanitizer builds nor fuzzing regime nor static analysis running against new commits last time I checked.
- HTTP/3 over UDP sends performance into a crater, generally requiring _every_ packet to reach the CPU in userspace instead of being handled in the kernel or even directly by the network interface hardware
- multiplexing isn't needed by most websocket applications
I am a recent CS graduate and I work on this project alone. I chose WebSocket instead of raw TCP because it is small, easy to read, and works everywhere without extra tools. gRPC + HTTP/3 is powerful but adds many libraries and more code to learn.
When real users need QUIC or multiplexing, I can change the transport later. Your feedback helps me a lot.
I totally understand your reasoning behind leaning on websockets. You can test with a data channel in a browser app. But if we are talking low-latency, Superman fast, modern C++, RPC and forgeddaboutit. Look into handling an initial payload with credential negotiation outside of HTTP 1.1.
My current roadmap is:
1. Keep WebSocket as the “zero-config / browser-friendly” default.
2. Add a raw-TCP transport with a single-frame handshake: [auth-token | caps] → ACK → binary stream starts (see the sketch after the next list).
3. Later, test a QUIC version for mobile / lossy networks.
So users can choose:
* plug-and-play (WebSocket)
* ultra-low-latency (raw TCP)
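For illustration, the single-frame handshake in step 2 could be laid out like this on the wire. This is a sketch under assumed conventions (little-endian lengths, a uint32 caps bitmask, a 1-byte ACK); none of it is a finalized format:

    #include <cstdint>
    #include <string>
    #include <vector>

    // Hypothetical wire layout (all sizes are assumptions):
    //   [uint16 tokenLen][token bytes][uint32 caps bitmask]
    // Server answers with a 1-byte ACK (0x00 = accepted), then both sides
    // switch to the normal binary RPC framing.
    std::vector<uint8_t> buildHandshakeFrame(const std::string& authToken, uint32_t caps) {
        std::vector<uint8_t> frame;
        frame.reserve(2 + authToken.size() + 4);

        const auto tokenLen = static_cast<uint16_t>(authToken.size());
        frame.push_back(static_cast<uint8_t>(tokenLen & 0xFF));      // length, little-endian
        frame.push_back(static_cast<uint8_t>(tokenLen >> 8));

        frame.insert(frame.end(), authToken.begin(), authToken.end());

        for (int i = 0; i < 4; ++i)                                  // caps: e.g. QoS level,
            frame.push_back(static_cast<uint8_t>(caps >> (8 * i)));  // compression, msgpack
        return frame;
    }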
Thanks for the nudge; this will go on the transport roadmap.
It would be great if you reported such remote code executions to the authors/Google. I am sure they handle CVEs etc. There has been a security audit: https://github.com/grpc/grpc/tree/master/doc/grpc_security_a...
> there aren't sanitizer builds nor fuzzing regime nor static analysis running against new commits last time I checked.
Are you making shit up as you go? I randomly picked a recently merged commit and this is the list of test suites run on the pull request. As far as I recall, this has been the practice for at least 8 years (note the MSAN, ASAN, TSAN etc.)
I can see various fuzzers in the code base, so that claim is also unsubstantiated: https://github.com/grpc/grpc/tree/f5c26aec2904fddffb70471cbc...
Basically what I am saying is - you need to place more abstraction between your code and the end-user API.
Take this line:
std::string sayMessage = payload["message"].template get<std::string>();
Why not make a templated getString<"message"> that pulls from payload? So that would instead just be:
auto sayMessage = payload["message"].as_string() or
auto sayMessage = payload.getString<"message">() or
std::string sayMessage = payload["message"] // We infer type from the assignment!!
It’s way cleaner. Way more effective. Way more intuitive.
When working on this kind of stuff, end-developer experience should always drive the process. Look at your JSON library. Well known and loved. Imagine if instead of:

message["code"] = "JOIN";

it was instead something like:

message.template set<std::string, std::string>("CODE", "JOIN");
Somehow I don’t think the latter would have seen any level of meaningful adoption. It’s weird, obtuse and overly complex. You need to hide all that.
Thank you for the detailed feedback—this is exactly the kind of input that helps the project grow.
You’re right: developer experience needs to be better. Right now there is too much boilerplate and not enough abstraction. Your `payload["message"].as_string()` example is the direction I want to take. I’ll add a thin wrapper so users can write `payload["key"].as_string()` or even rely on assignment type-inference. Refactoring the basic chat demo to be much shorter is now my next task.

About C++20 modules: I agree they are the future. The single-header client was a quick MVP, but module support is on the roadmap as compiler tooling matures.
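A minimal sketch of such a wrapper, assuming the payload is backed by nlohmann::json as the `.template get<std::string>()` call in the demo suggests (`PayloadView` / `FieldView` are invented names, not existing types):

    #include <string>
    #include <nlohmann/json.hpp>

    // Thin DX wrapper sketch over a raw JSON payload.
    class FieldView {
    public:
        explicit FieldView(const nlohmann::json& node) : node_(node) {}

        std::string as_string() const { return node_.get<std::string>(); }

        // Conversion operator enables type inference from the assignment:
        //   std::string s = payload["message"];
        template <typename T>
        operator T() const { return node_.get<T>(); }

    private:
        const nlohmann::json& node_;
    };

    class PayloadView {
    public:
        explicit PayloadView(const nlohmann::json& j) : j_(j) {}
        FieldView operator[](const std::string& key) const { return FieldView(j_.at(key)); }
    private:
        const nlohmann::json& j_;
    };

    // Usage:
    //   PayloadView payload(rawJson);
    //   std::string sayMessage = payload["message"];   // inferred from assignment
    //   auto text = payload["message"].as_string();    // explicit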
If you have more DX ideas or want to discuss API design, please open an issue on GitHub; I’d be happy to collaborate.
Thanks again for the valuable feedback!
You've all made it very clear that from a user's perspective, a single-header library is still the gold standard for ease of use and integration. The ideal scenario is for a developer to just #include "binaryrpc.hpp" and have everything work without touching their build system, and I now see that as a crucial goal for the project. My framework isn't there yet, and the feedback has been a wake-up call that the current multi-header approach creates too much friction for new users.
So, my path forward is clear:
1. First, focus on simplifying the core API based on the initial feedback (e.g., creating wrapper objects for payloads).
2. Then, work towards providing a single-header distribution for maximum compatibility and ease of use.
I agree that modules are the future. But for now, delivering the most practical and frictionless developer experience seems to be the most important priority.
Thanks again for guiding me on this.
It follows the MQTT-style two-phase (four-packet) QoS2 handshake:
1. Sender → `PUBLISH(id, data)`
2. Receiver → `PUBREC(id)` // stored as “seen but not completed”
3. Sender → `PUBREL(id)`
4. Receiver → `PUBCOMP(id)` // marks id as done, then passes data to the app layer
While an id is in “seen” state the receiver drops duplicates, so the message is delivered to user code exactly once per session even if the socket retries.
If the client reconnects with the same session-key, the server reloads the in-flight id table, so duplicates are still filtered. If the session is lost (no session-key), we fall back to at-least-once because there is no common store.
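The receiver side of that state machine looks roughly like this in C++ (a sketch only; `QoS2Receiver` and its method names are illustrative, not the actual classes):

    #include <cstdint>
    #include <functional>
    #include <string>
    #include <unordered_map>
    #include <unordered_set>

    // Receiver-side QoS2 dedup sketch. inFlight_ is the per-session id table
    // that would be persisted/reloaded via the session-key.
    class QoS2Receiver {
    public:
        using Deliver = std::function<void(uint64_t, const std::string&)>;
        explicit QoS2Receiver(Deliver deliver) : deliver_(std::move(deliver)) {}

        // PUBLISH(id, data): first sight -> store; duplicate -> drop the data.
        void onPublish(uint64_t id, std::string data) {
            if (inFlight_.count(id) == 0) {
                inFlight_.insert(id);            // "seen but not completed"
                pending_[id] = std::move(data);  // held back from the app layer
            }
            // transport sends PUBREC(id) in both cases
        }

        // PUBREL(id): complete the exchange, deliver to the app exactly once.
        void onPubRel(uint64_t id) {
            if (auto it = pending_.find(id); it != pending_.end()) {
                deliver_(id, it->second);        // hand data to user code
                pending_.erase(it);
            }
            inFlight_.erase(id);                 // id is done
            // transport sends PUBCOMP(id)
        }

    private:
        Deliver deliver_;
        std::unordered_set<uint64_t> inFlight_;
        std::unordered_map<uint64_t, std::string> pending_;
    };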
So: “exactly once” within a persisted session; “effectively once” otherwise, as long as the application is idempotent. I’ll update the docs to state this more precisely. Thanks for pointing it out!
Sounds like RabbitMQ/AMQP/similar over WebSocket?
* BinaryRPC = direct request/response calls with optional QoS (per session).
  – No exchanges/queues, no routing keys.
  – One logical stream, messages mapped to handlers.
* RabbitMQ / AMQP = full message-broker with persistent queues, fan-out, topic routing, etc.
So you could say BinaryRPC covers the transport/QoS part of AMQP, but stays lightweight and broker-less. If an app later needs full queueing we can still bridge to AMQP, but the core idea here is “RPC first, minimal deps”.