I have all of my side projects in a big monorepo managed with Bazel. It's got a little bit of everything in it.
All of my UIs are written in TypeScript, most using Angular, but I am switching to a homegrown framework for smaller "finished" projects. I have been looking into adding a GTK desktop app, but haven't settled on a language for it yet.
Web servers are predominantly Go, but I also have C++, Java, and Rust servers. Everything is gRPC, with my own frameworks layered over that.
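As a minimal sketch of the shape, using a gRPC unary interceptor as a stand-in for that shared framework layer (illustrative only; the real services and registrations are omitted):

    package main

    import (
        "context"
        "log"
        "net"

        "google.golang.org/grpc"
    )

    // loggingInterceptor stands in for the kind of cross-cutting glue a
    // shared framework adds on top of plain gRPC.
    func loggingInterceptor(
        ctx context.Context,
        req interface{},
        info *grpc.UnaryServerInfo,
        handler grpc.UnaryHandler,
    ) (interface{}, error) {
        log.Printf("rpc=%s", info.FullMethod)
        return handler(ctx, req)
    }

    func main() {
        lis, err := net.Listen("tcp", ":50051")
        if err != nil {
            log.Fatal(err)
        }
        s := grpc.NewServer(grpc.ChainUnaryInterceptor(loggingInterceptor))
        // Generated service registrations (pb.RegisterFooServer(s, ...)) go here.
        if err := s.Serve(lis); err != nil {
            log.Fatal(err)
        }
    }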
I put Envoy in front of all of my web servers and use custom filters for shared functionality across them. Request telemetry, gRPC-to-JSON transcoding, auth, and WAF all live in this layer. Keeping it in the proxy avoids reimplementing these functions in each server language.
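For illustration only, here is roughly what one of these shared filters could look like if written as a Wasm module with the proxy-wasm Go SDK (not my actual filter code; the header check is a made-up example):

    package main

    import (
        "github.com/tetratelabs/proxy-wasm-go-sdk/proxywasm"
        "github.com/tetratelabs/proxy-wasm-go-sdk/proxywasm/types"
    )

    func main() {
        proxywasm.SetVMContext(&vmContext{})
    }

    type vmContext struct{ types.DefaultVMContext }

    func (*vmContext) NewPluginContext(contextID uint32) types.PluginContext {
        return &pluginContext{}
    }

    type pluginContext struct{ types.DefaultPluginContext }

    func (*pluginContext) NewHttpContext(contextID uint32) types.HttpContext {
        return &httpContext{}
    }

    type httpContext struct{ types.DefaultHttpContext }

    // OnHttpRequestHeaders runs for every request passing through Envoy,
    // which is where shared concerns like telemetry and auth checks can live.
    func (*httpContext) OnHttpRequestHeaders(numHeaders int, endOfStream bool) types.Action {
        auth, err := proxywasm.GetHttpRequestHeader("authorization")
        if err != nil || auth == "" {
            proxywasm.LogInfo("request without authorization header")
            // A real filter would reject the request here instead of just logging.
        }
        return types.ActionContinue
    }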
For CLIs, virtually everything is in Go, but I also have Rust, Zig, and Bash.
ETL stuff is all Go. I've looked into Apache Beam, but my homegrown framework does what I need, so I haven't made the switch.
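For a rough idea of the shape (not the actual framework, just a toy extract/transform/load pipeline in Go):

    package main

    import (
        "fmt"
        "strings"
    )

    // extract emits raw rows; a real job would read from an actual source.
    func extract() <-chan string {
        out := make(chan string)
        go func() {
            defer close(out)
            for _, row := range []string{"alice,42", "bob,17"} {
                out <- row
            }
        }()
        return out
    }

    // transform parses each row into fields.
    func transform(in <-chan string) <-chan []string {
        out := make(chan []string)
        go func() {
            defer close(out)
            for row := range in {
                out <- strings.Split(row, ",")
            }
        }()
        return out
    }

    func main() {
        for rec := range transform(extract()) {
            // "Load": a real job would write to a database or warehouse.
            fmt.Println(rec)
        }
    }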
Deployments are all to Knative. I run it in a k8s cluster at home and use the managed version on GCP (Cloud Run). Deploys are handled by custom Bazel rules.
Infra is all managed with Terraform (driven from Bazel).
I use a few different types of databases. Postgres (or Cloud SQL) for the application's relational data. For most other things I can use a graph-style model, and I use Firestore for that; it's much cheaper to run than Cloud SQL for small amounts of data. BigQuery is my data warehouse.
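A minimal sketch of the Firestore side with the Go client, storing graph edges as documents (project ID, collection names, and the edge-per-document layout are all just placeholders, one way to model it):

    package main

    import (
        "context"
        "log"

        "cloud.google.com/go/firestore"
    )

    func main() {
        ctx := context.Background()
        // "my-project" is a placeholder project ID.
        client, err := firestore.NewClient(ctx, "my-project")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // One simple graph encoding: one document per edge.
        _, err = client.Collection("edges").Doc("a->b").Set(ctx, map[string]interface{}{
            "from": "a",
            "to":   "b",
        })
        if err != nil {
            log.Fatal(err)
        }
    }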
Model training is done in JAX/TF, and inference is all served with TF Serving.
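TF Serving exposes a REST predict endpoint alongside gRPC, so calling a model is a plain HTTP request; a rough sketch from Go (host, port, model name, and input shape are placeholders):

    package main

    import (
        "bytes"
        "fmt"
        "io"
        "log"
        "net/http"
    )

    func main() {
        // TF Serving REST API: POST /v1/models/<name>:predict
        // "my_model" and localhost:8501 are placeholders.
        body := bytes.NewBufferString(`{"instances": [[1.0, 2.0, 3.0]]}`)
        resp, err := http.Post(
            "http://localhost:8501/v1/models/my_model:predict",
            "application/json",
            body,
        )
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()

        out, err := io.ReadAll(resp.Body)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(string(out)) // {"predictions": [...]}
    }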
Previously: Rails/React + Postgres + Heroku
React + Tailwind
PhaserJS
VSCode + Cursor