I don't get `.env` files. It never made sense to me that we'd delegate managing environment variables to the processes that consume them.
https://direnv.net is a better solution IMO. Once you set it up in your shell, it automatically loads environment variables from the `.envrc` file in the current directory. It includes a rich standard library (https://direnv.net/man/direnv-stdlib.1.html) for manipulating PATH, etc.
For secrets, I just add this line to `.envrc`:
source_env_if_exists .envrc.private
And add `.envrc.private` to `.gitignore`. Now that just works everywhere, whether or not the authors of whatever tool I'm using officially support `.env` files.
awestroke · 1d ago
This is about formalizing required secrets for dev.
Many repos have a ".env.sample", which is better than nothing - you copy it to .env and fill in the blanks.
If this is done automatically, even better.
If the automatic .env setup also loads secrets from your password manager (with prompting), even better.
judofyr · 1d ago
With direnv you can also run commands directly in the .envrc:
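For example, something like this (a sketch - assumes the 1Password `op` CLI and an illustrative item path):

# .envrc - fetch the secret at shell-entry time instead of keeping it on disk
export API_TOKEN="$(op read "op://dev-vault/my-service/api-token")"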
Every time you cd into the directory it will execute the command. Isn’t this even better than copying the secret into a local file?
conception · 1d ago
1Password actually has environments now, so you can load them directly.
candiddevmike · 1d ago
It's such a shame this isn't built into shells. There isn't really a security issue here, since you have to trust the directory (via `direnv allow`) before it fires.
__jonas · 1d ago
I agree that loading environment variables should not be the responsibility of the process that consumes them; I'm just not sure what the benefit of renaming .env to .envrc is.
I use mise to load environment variables from .env, since I also use it to manage tool versions.
When I don't have that available I just do
set -a; source .env; set +a  # set -a marks every variable the sourced file assigns for export
direnv is definitely another good option.
mananaysiempre · 1d ago
Unlike .env, .envrc is not a series of key=value assignments, it’s a full Bash script such that any variables that are changed at the end of its execution are entered into your environment. (The idea is to approximate it being sourced into your shell, except it always uses Bash while you may be using a different shell.)
For example, I can write a Nix flake, put “use flake” (and nothing else) in my .envrc in the same directory, and have whatever PATH, PYTHONPATH, etc. changes that are needed to develop against the flake’s dependencies automatically applied when I enter the directory. You could almost certainly use this with virtualenv, nvm, or the like as well, I just haven’t tried.
jelder · 1d ago
direnv also has `dotenv` and `dotenv_if_exists` macros, which nicely get around the fact that `.env` files typically omit `export` statements.
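So a one-line `.envrc` (file name illustrative) is enough to pull a dotenv file into the environment with everything exported:

# .envrc
dotenv_if_exists .env.local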
throw-the-towel · 1d ago
And I don't get storing your configuration in env variables, period. They're not discoverable at all, there's no namespacing nor any other isolation, there's no structure to them, and if you want a list you'll have to come up with an ad-hoc encoding. Environment variables are a mess, you'd be better off with a config file.
w0m · 1d ago
I assume environment variables became popular because they're an 'easy' way to inject secrets without hardcoding them in a config file.
theozero · 1d ago
Whether you like them or not, env vars are ubiquitous because they're easy. Anything else ends up tightly coupled to your application code, or usable only within one environment. Often you have a set of config needed both by your application and by your surrounding dev tools / scripts. This tries to provide a universal solution.
At the end of the day you could imagine varlock loading your config, and injecting it using a method that is not env vars - a file, sidecar process, etc.
0xbadcafebee · 1d ago
Most tech people don't grok the abstraction between an executable program and the environment it is executed in.
The thing that runs your code is responsible for adding the environment variables from wherever they are kept for that environment. The execution environments running your code are not checking out your Git repository and reading files from it before they execute your program; that would introduce a chicken-and-egg problem (and make it much harder to run your program in new environments).
Below is an example of why you can't just load all your variables in a single ".env" file.
Environments:
- *development*:
    host: "my laptop"
    variables: [FOO=dev]
- *staging*:
    host: "a ci/cd server"
    variables: [FOO=stage]
- *production*:
    host: "a kubernetes server"
    variables: [FOO=prod]
I want to run my program in development!
->> open a terminal on my macOS laptop
->> change to directory with my code
->> load environment variables into shell
$ source .env
->> run my program
$ ./MyProgram.EXE
->> the shell takes the variables it has, and adds them to the program at runtime
char *const argv[] = {"/Users/me/app_src/MyProgram.EXE", NULL};
char *const envp[] = {"FOO=dev", NULL};
execve("/Users/me/app_src/MyProgram.EXE", argv, envp);
->> the program is loaded, executed, and accesses environment variables passed to it
I want to run my program in staging!
->> start a job on a ci/cd server
->> the ci/cd server starts a Docker container
->> the ci/cd server begins running commands in the Docker container
->> checkout latest code and change to new directory
->> load environment variables into shell
->> retrieves environment variables set in ci/cd job configuration
->> sets those variables in the shell
->> run my program
$ ./MyProgram.EXE
->> the shell takes the variables it has, and adds them to the program at runtime
char *const argv[] = {"/Users/me/app_src/MyProgram.EXE", NULL};
char *const envp[] = {"FOO=stage", NULL};
execve("/Users/me/app_src/MyProgram.EXE", argv, envp);
->> the program is loaded, executed, and accesses environment variables passed to it
I want to run my program in production!
->> start a job on a kubernetes server
->> a deployment scheduler schedules a pod to start
->> it looks up a configmap with environment variables
->> it looks up a secrets object with secrets
->> it sets environment variables in the pod based on the configmaps and secrets
->> it starts a container in the pod
->> it begins running commands in the container
->> run my program
$ ./MyProgram.EXE
->> the container takes the variables it has, and adds them to the program at runtime
char *const argv[] = {"/Users/me/app_src/MyProgram.EXE", NULL};
char *const envp[] = {"FOO=prod", NULL};
execve("/Users/me/app_src/MyProgram.EXE", argv, envp);
->> the program is loaded, executed, and accesses environment variables passed to it
You have to add the environment variables to the execution environment before your program ever gets run. Otherwise (for example) you could never pull a Git repository to load variables, because where would the credentials to pull your Git repository come from? They have to be added beforehand. So that beforehand step is where you add all your environment variables for that environment. Your program merely reads them when it is executed; no need for your program to read additional files.
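In shell terms, the injection always happens one level above the program, e.g. (illustrative commands and image name):

# the execution environment sets the variables before exec'ing the program:
env FOO=stage ./MyProgram.EXE
# or, in a container-based environment:
docker run --env FOO=stage my-image ./MyProgram.EXE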
You should not do something like keep ".env.dev", ".env.stage", and ".env.prod" files packaged up in a container. Each execution environment may need slightly different settings at run time. And secrets should not be kept in your Git repo; they should be loaded as needed, at runtime, by your execution environment.
All this is covered neatly in The Twelve Factor App (https://12factor.net/config). As someone who's been doing this for two decades, I highly recommend everyone follow their guide to the letter.
theozero · 1d ago
Even if you don't set any values in your env files (though why not set some non-sensitive defaults?), you can still benefit from the schema and validation that varlock provides.
Imagine if every public docker container had an .env.schema that was validated on startup, instead of info about the available env vars scattered through the readme.
0xbadcafebee · 1d ago
Environment variables are a non-standard, limited form of transferring data. Depending on the platform, they may be case-insensitive; there are reserved words, size limits, naming rules, restricted charsets, etc. And their values are always strings. So having a schema on an environment variable isn't very useful, unless you do things like dynamic typing and type assumptions (which run into bugs). It's not a bad idea to have an opinionated library that defines some restrictions on environment variables, but the applications will always need to add more restrictions than your library has thought of.
Schemas are application-specific. Applications all deal with data types differently. Some data types (defined in schemas) even use transforms and complex custom algorithms before their data is validated. So it's better to let the application handle schemas directly on an as-needed basis.
All of that can be done independent of environment variables. Just make a library that validates data types, and pass your environment variables (or any data, from anywhere) to it. This is better not only because you can validate any kind of data, but you can load your data from places other than environment variables (from disk, from database, etc). This kind of general abstraction is more useful for general purpose computing, rather than a complex solution tailored for only one use case.
Finally: A schema isn't a replacement for documentation. Just because you have a technical document that defines what data is allowed in a variable, doesn't mean that somebody then knows what the hell that thing does - what it affects, when it should or shouldn't be used, etc. Documentation is for humans, schemas are for computers.
debarshri · 1d ago
You have to see the `.env` file in the context of where it gets deployed.
For example, when we [1] deploy applications in Kubernetes, we have built an admission controller that fetches secrets from Vault and converts them into env variables or secrets for the application at runtime. This way, the application only carries a reference, in the form of an annotation.
If you hand over an `.env` as is, people will extract the values and start using them. You will end up leaking secrets.
Another approach we have been exploring is injecting secrets via a sidecar for the application, or via an SDK, but the lift seems a bit too much.
I think the deployment environment should be responsible for injecting the credentials for the best posture.
[1] https://adaptive.live
Use case for non-deterministic processing of .env files?
ks2048 · 1d ago
What makes it "AI-friendly"?
theozero · 1d ago
Tons of folks have plaintext secrets in their env files, which are leaked via AI assistants every day. By getting those out of plaintext, the risk is removed entirely.
The schema itself (and the automatic types it can generate) also gives AI more context about what configuration is available, and what each item is for.
jaredcwhite · 1d ago
> plaintext secrets in their env files which are leaked via AI assistants
So…we're not just talking about secrets then. Any text in any file could be leaked. The solution isn't simply moving secrets out of env files, the solution is, um,
*not leaking the contents of local files*
My god. Have we forgotten all semblance of how security & privacy in computing should work?
xena · 1d ago
Buzzwords!
0xbadcafebee · 1d ago
I couldn't tell what this was at all for most of the page, but finally at the bottom it shows it's just a library for loading variables into a Node.js application.
theozero · 1d ago
We're still working on the copy. Any specific feedback is appreciated.
Note that you can load env vars into anything via varlock run, not just javascript. The JS integration is a bit deeper, providing automatic type generation, log redaction, and leak prevention.
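e.g., something along these lines (sketch - check the varlock docs for the exact flags):

# load + validate env vars, then run any program with them injected
varlock run -- ./my-server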
mathfailure · 1d ago
Why not integrate with the KeePass(*) family of secret stores, or HashiCorp Vault?
theozero · 1d ago
Right now, you can use any provider that has a CLI to fetch a single secret. In the near future, we'll be adding native plugins to make these integrations even easier.
Vault can be a huge lift and doesn't make sense for many projects - we wanted to build a tool that makes sense from day one, even when there is no backing provider, but can grow with your team and change providers seamlessly.
moltar · 1d ago
Oh great, the anti-pattern now has a schema
vergessenmir · 1d ago
I don't get it, what does this do?
theozero · 1d ago
On most projects you end up wiring up a bunch of custom logic to handle your config, which is often injected as environment variables - think loading from a secure source, validation logic, type safety, pre-commit git scanning, etc.
It's annoying to do it right, so people often take shortcuts - skipping validation, sending files over Slack, not writing docs, etc...
The common pattern of using a .env.example file leads to constant syncing problems, and we often have many sources of truth for our config (.env.example, hand-written types, validation code, comments scattered throughout the codebase).
This tool lets you express additional schema info about the config your application needs via decorators in a .env file, and optionally set values, either directly if they are not sensitive, or via calls to an external service. This shouldn't be something we need to recreate when scaffolding out every new project. There should be a single source of truth - and it should work with any framework/language.
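A sketch of what that can look like (decorator names illustrative - see the varlock docs for the real spec):

# @required @sensitive - value resolved from an external provider, not stored here
STRIPE_SECRET_KEY=
# @required @type=url
API_BASE_URL=http://localhost:3000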
oulipo · 1d ago
After having so many issues with .env files, I now just do a simple .env at the root of the repo with just `ENV=prod/staging/dev`, and then everything is generated (using 1password, etc.) from a `config.ts` file in `pkgs/config` in my monorepo
the file is basically a big

```
const config = {
  http: {
    server_url: ENV === "prod" ? "https://myserver.com" : "http://localhost:3000",
    ...
  }
}
```
and this lets me type variables, add comments, many niceties, etc
theozero · 1d ago
This is exactly the kind of thing we are trying to replace. No more hacky custom solutions needed. With varlock you get type safety, validation, and the ability to compose together your config however you need to, without having to reinvent the wheel.
oulipo · 22h ago
Well, in order to have type safety, validation, etc., you might as well... write it directly in TypeScript?
I'd argue that *you* are reinventing the wheel with a custom format...
theozero · 8h ago
Personally I don't want to write custom code from scratch or glue together 5 tools to do this on every project. I've done it - many, many times... If your project is very simple with a few env vars, it may not be a big deal, but on large projects it quickly becomes a burden. Maybe not one you deal with every day, but regularly - when new teammates join, when new external services are added, when new deployment environments come into use.
Also a big goal here is to have a single tool that will work for all frameworks _and languages_. In monorepo projects, it's very common to have a different custom env var loading and validation setup in each child project, and sharing config across the project is very awkward.
Doing this stuff well is hard, which often means people take shortcuts. So many times dev teams get stopped in their tracks because someone added a new env var and forgot to distribute it to everyone, or only added the new key to staging and forgot production. We just want to make doing the right thing the easiest thing.
Imagine if this were just a standard: when you look at a public docker container, there is a schema of the env vars it takes, rather than digging through the readme. When it boots up, if the env vars are invalid, it gives you a clear error message. Imagine when you clone some template project, you get all this validation and documentation for free, without having to trace through some custom code.
Yes - it is something new, but our hope is that it is intuitive enough to grasp even if it's new to you.
We think the decorator comments (it's an open spec we call @env-spec - the RFC is here: https://github.com/dmno-dev/varlock/discussions/17) are an intuitive addition to .env files, which are already ubiquitous.
tylergetsay · 1d ago
It's an old library, but convict is great for this in the Node.js ecosystem
theozero · 1d ago
Hey all! Co-creator of varlock here. Varlock is the next evolution of what we learned building https://dmno.dev. We wanted to make things simpler (no more proprietary TypeScript schemas) and meet folks where they already are: .env files.
The hope is to remove the papercuts of dealing with env vars (and configuration more generally) by introducing an easy to understand system for validation, and type-safety, with the flexibility to use any third party provider to persist your secrets. We found ourselves reimplementing this stuff on every project, wiring together many tools and custom code, only to end up with a mediocre outcome.
The very common pattern of using `.env.example` leads to constant syncing problems, with many folks resorting to sharing .env files and individual secrets over slack, even when they know they shouldn’t. By turning that example into a schema and involving it in the loading process, it can never get out of sync. With validations built in, if something is wrong, you’ll know right away with a helpful error instead of an obscure runtime crash.
Because the system is aware of whether things are sensitive or not, we can do things like log redaction and leak prevention on the application side. Many tools try to do scanning but use regexes, while varlock knows the actual values to look for. We felt these were problems especially worth solving in a world where more frameworks are running the same code on both the server and client.
We intended to share this here ourselves next week, but you beat us to the punch. We're in the midst of shipping the drop-in Next.js integration (hopefully just merged today).
I also see a few comments about the “AI friendly” part. Right now tons of folks have sensitive keys in their .env files that are being leaked to AI assistants left and right. By removing plaintext secrets from env files, it entirely removes this problem. We also want to highlight the fact that with this DSL on top of .env we’re making it much easier for LLMs to write safer code. Part of creating devtools is trying to understand how they will be used in the wild so we’ve been trying to work with common tools (Cursor, Windsurf, Claude, Gemini, etc) to make sure they can coherently write @env-spec for varlock.
We’re literally just getting started so all of your feedback is super valuable.
We’ll continue to expand support for non-js languages (which already work via `varlock run`) as well as add more integrations, and eventually some CI/CD features for teams to help track and manage config.
Nullabillity · 1d ago
"Made for sharing", but suggests depending on 1Password?
It also seems... irresponsible to claim that @sensitive values "will be always be redacted in CLI output", when the whole point of something like Varlock is to configure some external application that it doesn't control.
And what does "AI-friendly" mean here anyway... beyond, I suppose, varlock being AI slop itself. [0]
[0]: https://github.com/dmno-dev/varlock/tree/514917f4228d49d4404...
theozero · 1d ago
We say "made for sharing" because .env.schema replaces .env.example which always drifts from reality - and often requires insecurely sharing secrets manually.
Even if you don't set values within your files - relying entirely on env vars set in the platform where the code runs - you still benefit from the validation varlock provides.
Right now we give 1Password as an example, but you can use any provider that has a CLI. We are also working on a plugin system that should make it easier to integrate with any provider.
As for redaction - that note is about how we redact your secrets from _our_ CLI output. However, we also provide tools to redact within your application. Right now this works only in JavaScript, by patching global console methods. We will also hook into stdout for varlock run, similar to what the 1Password CLI does.
The leak detection is much more interesting - especially in hybrid client/server frameworks, where you can easily shoot yourself in the foot.
By removing plaintext secrets from env files, we totally remove the risk of them leaking via AI code assistants, which I guarantee is happening millions of times a day right now. Also the schema itself and autogenerated types give AI much more context about your env.
halfcat · 1d ago
I don’t know about varlock, but 1Password’s `op` CLI tool seems to hook the STDOUT pipe and find/replace any instances of the secrets with “concealed by 1Password”. It works even if I drop into a REPL and try every way I can think of to print it out to the console.
So it would seem, on that front, that 1Password is doing the heavy lifting.
Using 1Password in this way has proven way better than storing .env files in plain text on dev machines, where the .env files get picked up if a company does backups, or someone stores a repo in their Dropbox folder and the file gets flagged as potential malware and uploaded somewhere for further analysis, etc.
theozero · 1d ago
Exactly. We will do that to stdout - and can patch JS itself too.
The goal here is to just make it dead simple to do the right thing with minimal effort. Get secrets out of plaintext, avoid the need to send them around insecurely, and help make sure you don't shoot yourself in the foot, which is surprisingly easy to do in hybrid server/client frameworks like Next.js.
Can you set up validations, syncing with various backends, and all of these protections yourself by wiring together a bunch of tools and custom code? Of course... But here's one that will do it all with minimal effort.