Maybe I missed it but the article doesn't mention the even easier way to see this: the activity tab.
It has everything. Any force push to hide ugly prototype code is kept forever, which annoys me. I wish we were able to remove stuff from there, but the only way to do it seems to be emailing support?
Here it is for the test repo mentioned:
https://github.com/SharonBrizinov/test-oops-commit/activity
Looking at some of my projects, it's entirely empty, or only has a few items, so I suspect it was introduced "recently" and doesn't have data from before then.
Picking https://github.com/jellyfin/jellyfin/activity?sort=ASC as a busy example, the Activity page has no data prior to 7th March 2023. So it has existed for 2 of GitHub's 17 years of existence.
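A hedged sketch of pulling the same force-push signal out of the public events feed (the repo name is just the test repo above as a placeholder; the events API only covers recent activity, so it won't recover old force pushes):

```python
import json
import urllib.request

# Placeholder repo; the public events feed only covers recent activity.
REPO = "SharonBrizinov/test-oops-commit"
url = f"https://api.github.com/repos/{REPO}/events"

with urllib.request.urlopen(url) as resp:
    events = json.load(resp)

for event in events:
    if event["type"] != "PushEvent":
        continue
    payload = event["payload"]
    # After a force push, "before" can point at a commit that no longer
    # belongs to any branch but is often still fetchable on GitHub.
    print(event["created_at"], payload["ref"], payload["before"], "->", payload["head"])
```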
Thank you. I think that section has consisted of links to READMEs and stuff for so long I just stopped paying attention to it.
3abiton · 11h ago
Funny thing, we had a similar issue with one of our deployments in the past. It's like accidentally leaking your password into your bash history. It happens more often than it should.
emmelaich · 11h ago
I guess it's possible to delete these forever by deleting the entire repo and re-uploading. As long as there are no forks.
Unfortunately, that is impossible: https://trufflesecurity.com/blog/anyone-can-access-deleted-a...
oefrha · 12h ago
> GitHub keeps these dangling commits, from what we can tell, forever.
Not if you contact customer support and ask them to garbage collect your repo.
What I do when I accidentally push something I don’t want public:
- Force push;
- Immediately rotate if it’s something like a secret key;
- Contact customer support to gc the repo (and verify the commit is gone afterwards).
(Of course you should consider the damage done the moment you pushed it. The above steps are meant to minimize potential further damage.)
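For the last step, a small sketch of how one might verify it (owner, repo and SHA are placeholders), using the commits endpoint of the GitHub REST API:

```python
import urllib.error
import urllib.request

# Placeholders for illustration only.
OWNER, REPO = "example-org", "example-repo"
SHA = "0123456789abcdef0123456789abcdef01234567"

url = f"https://api.github.com/repos/{OWNER}/{REPO}/commits/{SHA}"
req = urllib.request.Request(url, headers={"Accept": "application/vnd.github+json"})

try:
    with urllib.request.urlopen(req):
        print("Commit is still served by GitHub -- keep chasing support.")
except urllib.error.HTTPError as err:
    if err.code in (404, 422):
        print("GitHub no longer serves that commit.")
    else:
        raise
```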
whyever · 12h ago
If you rotated the secret, why do anything else? I don't think there is any potential further damage (except maybe reputational).
oefrha · 11h ago
1. Not all secrets can be rotated. E.g. I can't just "rotate" my home address, which I prefer to be private.
2. Even for rotatable secrets, "I don't think there is any potential further damage" rests on the assumption that the secret is 100% invalidated everywhere. What if there are obscure and/or neglected systems, possibly outside of your control, that still accept that secret? No system is bug-free. If I can take steps to minimize access to an invalidated secret, I will.
jofzar · 10h ago
> 1. Not all secrets can be rotated. E.g. I can't just "rotate" my home address, which I prefer to be private.
Reporter can sell their current house and move to another home as a workaround
Closing ticket as workaround provided.
AppleBananaPie · 6h ago
Here's your promotion!
Thanks for being a great team player!
matsemann · 9h ago
Also avoids false positives in the future from automated scanners, bounty hunters etc. if you clean up now.
chickenzzzzu · 12h ago
Anyone who puts weight on digging through a project to see if they've ever leaked a secret is guilty of encouraging an antipattern: the guaranteed outcome is an organization petrified of shipping anything, in case someone interprets it as bad or a security risk, etc.
mk89 · 12h ago
You can see it that way; however, there are automated tools to scan for secrets. Even GitHub does it. In my opinion, this educates developers to be more careful and slightly more security-oriented, rather than afraid of shipping code.
I would also like to point out that a leaked AWS secret can cost an organization hundreds of thousands of dollars. And AWS won't help you there.
It can literally break your company and put people out of work, depending on the secret/SaaS.
chickenzzzzu · 9h ago
While I am not suggesting that people should go out and leak their secret keys or push a buffer overflow, the fastest way to learn that you have this problem is by pushing that code to the internet on a project that isn't important. The AWS secret key example doesn't hold up here, you really just shouldn't do that, but how about an EC2 SSH key or passwords in plaintext? How did I learn about parameterized queries for SQL injection and XML escape vulnerabilities? By waking up to a Russian dude attacking my Java MySpace clone.
No amount of internal review, coding standards, etc. will catch all of these things. You can only hope that you build the muscle memory to catch most of them, and that muscle memory is forged through being punched in the face.
Lastly, any pompous corporate developer making $200k a year or more who claims they've never shipped a vuln and that they write perfect code the first time is just a liar.
fisf · 9h ago
> No amount of internal review, coding standards, etc. will catch all of these things. You can only hope that you build the muscle memory to catch most of them, and that muscle memory is forged through being punched in the face.
Everything you mentioned is security 101, widely known, and can be caught by standard tools. Shrugging that off as a learning experience does not really hold much water in a professional context.
chickenzzzzu · 5h ago
"In a professional context". Spare me. Don't act like every company on earth has a free, performant, 100% accurate no false positive linter hooked up to their magical build pipeline. Have you seen the caliber of companies that have been affected by CVEs and password/PII leaks since just covid? It's everyone
The responsibility is on the programmer to learn and remember these things. Period, end of story. Just as smart pointers are a bandaid on a bigger problem with real consequences (memory fragmentation and cache misses), so too is a giga-linter that serves as permanent training wheels for so called programmers.
cedws · 11h ago
Git doesn’t clone those orphaned refs though right?
edverma2 · 13h ago
All devs should run open-source trufflehog as a precommit hook for all repositories on their local system. It’s not a foolproof solution, but it’s a small time investment to get set up and gives me reasonable assurance that I will not accidentally commit a secret. I’m unsure why this is not more widely considered standard practice.
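Not trufflehog itself, but a minimal sketch of the same idea as a client-side hook: a Python script saved as .git/hooks/pre-commit (and made executable) that blocks the commit if the staged diff matches a couple of obvious secret patterns. The patterns are illustrative, not exhaustive.

```python
#!/usr/bin/env python3
"""Minimal pre-commit secret check -- a sketch, not a trufflehog replacement."""
import re
import subprocess
import sys

# Illustrative patterns only: AWS access key IDs and PEM private key headers.
PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),
    re.compile(r"-----BEGIN (?:RSA |EC |OPENSSH )?PRIVATE KEY-----"),
]

# Only inspect what is actually about to be committed.
diff = subprocess.run(
    ["git", "diff", "--cached", "--unified=0"],
    capture_output=True, text=True, check=True,
).stdout

hits = [line for line in diff.splitlines()
        if line.startswith("+") and any(p.search(line) for p in PATTERNS)]

if hits:
    print("Possible secret in staged changes, aborting commit:", file=sys.stderr)
    for line in hits:
        print("  " + line[:120], file=sys.stderr)
    sys.exit(1)  # any non-zero exit makes git abort the commit
```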
ramon156 · 12h ago
If I'm honest, I don't know how much this happens at work, and even if it does it's not the end of the world. Just scratch the commit from existence.
In my head, the people who accidentally share secrets are also the people who couldn't set up trufflehog with a pre-commit hook.
Arainach · 12h ago
This isn't true in practice. Even among well educated high performing professionals, mistakes happen. Checklists save lives - in medicine, in aircraft maintenance, in all fields.
People who believe they know what they're doing get overconfident, move fast, and make mistakes. Seasoned woodworkers lose fingers. Experienced doctors lose patients to preventable mistakes. Senior developers wipe the prod database or make a commit they shouldn't.
https://hsph.harvard.edu/news/fall08checklist/
>In a study of 100 Michigan hospitals, he found that, 30 percent of the time, surgical teams skipped one of these five essential steps: washing hands; cleaning the site; draping the patient; donning surgical hat, gloves, and gown; and applying a sterile dressing. But after 15 months of using Pronovost’s simple checklist, the hospitals “cut their infection rate from 4 percent of cases to zero, saving 1,500 lives and nearly $200 million,”
xlii · 11h ago
Aye.
I made the shameful mistake of committing a private key (a development one, so harmless) only because it wasn't gitignored and the pre-hook script crashed without deleting it. More of a political/audit problem than a real one.
I guess I'm old enough to remember Murphy's Laws, including the one saying "a safety system will, upon failure, bring the protected system down first".
IshKebab · 11h ago
It's crazy how many people don't know this, despite it being fairly obvious.
I guess it's hubris: "I don't make stupid mistakes." You see it a lot in discussions around Rust.
Pre-commit hooks are client-side only and opt-in; I've always been a big proponent of pre-commit hooks, as the sooner you find an issue the cheaper it is to fix, but over time pre-commit hooks that e.g. run unit tests tend to take longer and longer, and some people want to do rapid-fire commits instead of being a bit more thoughtful about it.
bapak · 12h ago
pre-commits require discipline:
- enforce them on CI too; not useful for secrets, but at least you're eventually alerted
- do not run tasks that take more than a second; I want my commit commands to be instant.
- do not prevent bad code from being committed, just enforce formatting; running tests on pre-commit is ridiculous, imagine Word stopping you from saving a file until you fixed all your misspellings.
ali_piccioni · 12h ago
I moved all my pre-commit hooks to pre-push hooks. I don't need a spellchecker disrupting my headspace when I'm deep into a problem.
My developer environments are set up to reproduce the CI tests locally, but if I need to resort to "CI driven development" I can bypass pre-push hooks with --no-verify.
pxc · 3h ago
CI driven development results in so many shitty commits, though, and it's so slow. I find it very miserable.
Pre-commit hooks should be much, much faster than most CI jobs; they should collectively run in less than a second if possible.
SAI_Peregrinus · 4h ago
A CI system can run the precommit hooks, and fail if any files are changed or the hooks don't exit successfully.
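A sketch of that CI step, assuming the pre-commit framework is the hook runner (substitute your own if not): run everything, then fail if the hooks errored or rewrote files.

```python
#!/usr/bin/env python3
"""CI guard: run the hooks, then fail if they failed or modified anything."""
import subprocess
import sys

# Assumes the pre-commit framework is installed in the CI image.
hooks = subprocess.run(["pre-commit", "run", "--all-files"])

# Formatters often "fix" files instead of failing; treat a dirty tree as a failure too.
dirty = subprocess.run(
    ["git", "status", "--porcelain"],
    capture_output=True, text=True, check=True,
).stdout.strip()

if hooks.returncode != 0 or dirty:
    if dirty:
        print("Hooks modified files:\n" + dirty, file=sys.stderr)
    sys.exit(1)
```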
emmelaich · 11h ago
One good (and obviously bad) thing about Subversion was the ability to change history. As admin I was asked numerous times to change a commit message. To point to the correct Jira issue, for instance.
Also easier to enforce pre-commit, since it was done server side.
UnreachableCode · 12h ago
What I've never understood is, how is this an issue with private repos? Aside from open source projects I can't see the problem with accidentally doing this, even though it is a smell.
Thorrez · 9h ago
Different employees in the company have different permissions. If an employee with a lot of access commits a secret, then employees who shouldn't have that much access can take the secret and use it.
dspillett · 12h ago
Anything that makes the repo less private later (a deliberate public release, a hack (not just of the repo but of anything that can connect to it), etc.) means the secret is now in the open.
Always cycle credentials after an accident like committing them to source control. Do it immediately, you will forget later. Even if you are 100% sure the repo will never be more public, it is a good habit to form.
froobius · 12h ago
It's a bad idea...
- commit secret in currently private repo
- 3 years later share / make public
- forget the secret is in the commit history and still valid (and relatedly, having long-lived secrets is less secure)
Sure, that might not happen for you, but the chances increase dramatically if you make a habit of committing secrets.
yard2010 · 12h ago
At a large messaging app I worked for, we self-hosted a GitLab instance for this exact reason. I thought it was over the top, but now I get it: you can never be too sure.
lqet · 11h ago
Many years ago at my first job after university, I accidentally committed a private key into our internal Git repository. We removed it, because we could not completely rule out the possibility that this repository would be made public to a customer, or to the world, in the future. I think we used filter-repo to get the key out of everywhere.
cess11 · 12h ago
It's called private but actually shared with a very large corporation you don't control, likely running on infrastructure they don't control. Due to the CLOUD Act it's also shared with the US government.
Cthulhu_ · 12h ago
Exactly; you should fully expect the NSA to have a copy of these logs as well. It can be very valuable to have secret keys from companies in adversarial countries (including your own).
For example, there's an ICE reporting app now where people can anonymously report ICE sightings... but how anonymous is it really? Users report a location, which can be cross-referenced with location histories and quickly lead back to an individual. There may be retaliation against users of this app if the spiral into authoritarianism in the US continues.
cess11 · 11h ago
Right, so, some activists and freedom fighters have been doing stuff in environments they know to be hostile for a long time, while the US has just started growing some movements like that after a hiatus from sometime in the seventies and eighties until somewhat recently.
For now they're going to be making a lot of basic mistakes but eventually they'll grugq up and learn from people that are already used to dealing with the violence of their government.
bapak · 12h ago
Secrets gotta live somewhere. Are you supplying them every time you deploy or run CI?
larntz · 11h ago
Yes. Either via a secret manager (e.g. Vault) or configured as repo secrets if that kind of infra isn't available.
https://docs.github.com/en/actions/how-tos/security-for-gith...
Never commit secrets for any reason.
Repo secrets are just stored on someone's computer and they obviously have the keys. This is what I mean.
Same for your vault. The vault might be encrypted, but at some point you have to give the keys to the vault.
Your secrets are not safe from someone if someone needs them to run your code.
larntz · 6h ago
> Your secrets are not safe from someone if someone needs them to run your code.
This is true. I don't disagree with that or your assessment of repo secrets.
My comment was in the context of the grandparent committing secrets to a private repo, which is a bad practice (regardless of visibility). You could do that for tests, sure (I would suggest creating random secrets for each test when you can), but then you're creating a bad habit. If you can't use random secrets for tests, repo secrets would be acceptable, but I wouldn't use them beyond that.
For CI and deploys I would opt for some kind of secret manager. CI can be run on your own infrastructure, secret managers can be run on your own infrastructure, etc...
But somewhere in the stack secret(s) will be exposed to _someone_.
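For the Vault route, a hedged sketch with the hvac client (the address, token env vars, secret path and key names are placeholders; assumes the default KV v2 mount):

```python
import os
import hvac  # HashiCorp Vault API client

# Placeholders: point these at your own Vault and secret path.
client = hvac.Client(
    url=os.environ.get("VAULT_ADDR", "https://vault.example.internal:8200"),
    token=os.environ["VAULT_TOKEN"],
)

# KV v2 read; the secret itself never has to live in the repo.
resp = client.secrets.kv.v2.read_secret_version(path="myapp/deploy")
db_password = resp["data"]["data"]["db_password"]
```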
UltraSane · 6h ago
I like to encrypt secrets with a master secret stored in a TPM. This makes it impossible to accidentally leak the secret.
cess11 · 11h ago
I'm not telling you what you should or should not do, especially not in the abstract. I commented on the deceptive terminology employed by a very large corporation with deep connections to rather distasteful activities and organisations.
frollogaston · 5h ago
For a long time, and probably still today, Google AppEngine kinda encouraged storing secrets in the YAML, which is easy to accidentally git-commit. There's no easy way to pass secrets to your services otherwise, unlike Heroku etc. where it's always been a single command to put them into env vars on the jobs.
Last time I tried, the default suggestion was Cloud KMS (yeah), now there's some new secret manager that also looks annoying: https://stackoverflow.com/questions/58371905/how-to-handle-s...
And can we talk about the predatory pricing model? In AWS, the secrets service prices a secret at $0.40 a month.
I was appalled when I first saw it; are you going to charge me $5 a year for storing my 12 bytes?
bdcravens · 2h ago
If all you're doing is storing, and not using advanced features like auto rotation, Parameter Store is free for most use cases.
bob1029 · 6h ago
I got tired of "oops" over time and started abusing environment variables. If you have enough discipline to spend 10 seconds configuring them, you'll never have to worry about magic strings accidentally getting sucked up into source control.
The other upside with environment variables is that they work across projects. Set & forget, assuming you memorized the name. Getting at tokens for OpenAI, AWS, GH, etc., is already a solved problem on my machine.
I understand why a lot of developers don't do this though. Especially on Windows, it takes a somewhat unpleasant # of clicks to get to the UI that manages these things. It's so much faster (relatively speaking) to paste the secret into your code. This kind of trivial laziness can really stack up on you if you aren't careful.
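The consuming side is then just a lookup, e.g. in Python (the variable name is whatever you picked when you set it):

```python
import os
import sys

# Fail loudly if the variable was never set; never fall back to a hardcoded string.
try:
    openai_key = os.environ["OPENAI_API_KEY"]
except KeyError:
    sys.exit("OPENAI_API_KEY is not set; configure it in the environment, not in code.")
```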
frollogaston · 5h ago
Abusing? I thought this is exactly what envvars are for.
UltraSane · 6h ago
I encrypt any secret strings with a master password that lives either in a TPM module or a file named MASTER_SECRET that is absolutely not added to the Git repo. My standard new project script adds this file to .gitignore and I use a pre-commit hook that stops this file from being committed by accident.
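A minimal sketch of the file-based variant with the cryptography package's Fernet (a TPM-backed key would replace the file read; the names and the example secret are made up):

```python
from pathlib import Path
from cryptography.fernet import Fernet

MASTER = Path("MASTER_SECRET")  # listed in .gitignore, never committed

# One-time setup: generate the master key and keep it out of version control.
if not MASTER.exists():
    MASTER.write_bytes(Fernet.generate_key())

fernet = Fernet(MASTER.read_bytes())

# Only the ciphertext is kept alongside the code.
token = fernet.encrypt(b"postgres://app:hunter2@db.internal/app")
print(token.decode())

# At runtime, decrypt with the master key that never entered the repo.
print(fernet.decrypt(token).decode())
```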
ggm · 13h ago
Maybe a default secure delete option could be made a lower bar event?
Checkout to event, commit in clean state with prior log history, overlay the state after the elision and replace git repo?
When I had to retain log and elide state I did things like this in RCS. Getting date/time info right was tricky.
Sayrus · 13h ago
If you push a secret publicly, you should consider it leaked. On GitHub, you have 5 minutes on a non-watched repository (due to the delay) and less than 30 seconds on a watched repository to revoke it before it's been cloned and archived by a third party. Whether that party is malicious or not, rewriting the Git history will not change the fact that the secret is leaked. And you can already rewrite the Git history and garbage collect commits that aren't part of the tree anymore on most providers.
ggm · 13h ago
Yes I can see my off-line experience doesn't apply. Thanks.
volemo · 13h ago
If something got out to the internet, you won't get it back. There is little point in rewriting repo history if you have already made a secret public. Just change the secret as soon as you can.
gghffguhvc · 13h ago
The person who leaked it and the person/team that can rotate it might be in different silos or timezones etc. Rewriting the history is prudent but not sufficient.
orthoxerox · 12h ago
That's why key revocation, like credit card blocking, should be a separate service that is available 24x7. Like, if you know the value of an AWS token, this should be sufficient data for you to call an AWS API that revokes it.
badmintonbaseba · 11h ago
That doesn't help if revocation without renewal means an immediate outage.
jbverschoor · 13h ago
Yet people complain that Netflix/Youtube pull certain content ;)
tobyhinloopen · 12h ago
Yes, because paying customers will have the content removed but it will continue to be available for pirates.
tobyhinloopen · 12h ago
Anything pushed is to be considered leaked. You might as well leave the commit in and invalidate the secret.
Prickle · 12h ago
I am guilty of this one.
I was 30 minutes from a presentation, and couldn't figure out why my code couldn't get the key from the hosting service.
So I just hard coded the key. The key was rotated after the presentation.
Does not look very good on a repo.
alkonaut · 10h ago
So the question is: after I orphaned a commit how do I _truly_ make sure it's not visible anywhere on github? Is there no way short of contacting customer support to GC a repo? Shouldn't this just basically be a button on the repo, in the "danger zone" area of the repo maintenance?
SAI_Peregrinus · 4h ago
Assume you can't, even if you contact support someone will have archived it by the time anything gets done. Murphy's Law of Internet Data Storage applies: if you post something to the internet it'll be public forever; if there's something on the internet you remember seeing once and want to find again it will have link-rotted and been lost forever.
abhisek · 11h ago
The thing that people miss is that Git is really content-addressed storage. This means all commits, even the ones not linked to any refs, are still stored and addressable.
p.s.: If you run an OSS project, please use GitHub Advanced Security and enable Push Protection against secrets.
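That content addressing is easy to see by hand: a blob's ID is just the SHA-1 of a short header plus the bytes, which is why any copy of an object, referenced or not, keeps the same address. A quick sketch:

```python
import hashlib

def git_blob_id(content: bytes) -> str:
    """Object ID git assigns a blob: sha1 of b"blob <size>\\0" followed by the bytes."""
    header = f"blob {len(content)}\0".encode()
    return hashlib.sha1(header + content).hexdigest()

# Same result as: echo 'hello world' | git hash-object --stdin
print(git_blob_id(b"hello world\n"))
```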
exceptione · 11h ago
Are you talking about the local branch and the local reflog?
I thought garbage collection should get rid of all dangling stuff. But even without that, I am curious if pushing a branch would push the dangling commits as well.
john2go3 · 12h ago
Unfortunately for those of us without a Google account, it seems one is required to download the mentioned SQLite database (force_push_commits.sqlite3).
gen6acd60af · 11h ago
Concerning.
It's interesting research, but will Truffle Security use the email addresses for lead gen or marketing purposes, like how they mined users' pingbacks from their XSS Hunter fork for stats?
https://portswigger.net/daily-swig/new-xss-hunter-host-truff...
An interesting look at one of the consequences of using git and public repos.
Does leave me wondering how long before someone has a setup which detects and tries to exploit these in real-time, which feels like it could be nasty.
Also a challenge with these posts is that they were unlikely to have been able to contact all of the affected developers with exposed secrets, meaning any who were uncontactable/non-responsive are likely still vulnerable now. I'd guess that means we're about to see what happens when those secrets get abused, as people start exploring this more...
matsemann · 13h ago
There are hundreds of setups like that already. If you push an AWS key or similar publicly, you may have a bitcoin miner or botnet running on your cloud in a matter of minutes.
raesene9 · 11h ago
The point here being the blog is about looking for oops commits to spot keys that would otherwise not necessarily be picked up automatically...
sunbum · 12h ago
Nope. Because if you push an AWS key then it gets automatically revoked by AWS.
matsemann · 11h ago
AWS was just an example, but it kinda proves my point though, that people are already monitoring this ;)
larntz · 11h ago
I wouldn't rely on anything other than rotating leaked credentials.
hboon · 13h ago
There are already people scanning git repos for Bitcoin/Ethereum/crypto keys and exploiting them immediately.
raesene9 · 11h ago
There's a lot of secret classes that aren't necessarily automatically scanned for. The Oops commit is a good signal that something shouldn't have been committed, even if automated scanners don't get it.
2OEH8eoCRo0 · 10h ago
Not just Git either. Push a container to Docker Hub and you'll get instant downloads. Presumably people scanning containers for secrets.
diogolsq · 12h ago
One more reason to activate key rotation.
NoahZuniga · 13h ago
I find it hard to believe that they could have made $25k with this. There are companies that scan all commits on gh for secrets, using similar techniques for finding secrets in files.
Sayrus · 13h ago
"70% of secrets leaked in 2022 remain valid today"[1] is a quote that should help understand the situation.
This is specifically about deleted commits, which even if deleted locally are not deleted on GH, hence why he was able to find deleted .envs etc.
[1] https://blog.gitguardian.com/the-state-of-secrets-sprawl-202...
bashwizard · 11h ago
I'm surprised that it's not more. A couple of years ago I spent a few months basically GitHub dorking for leaked API keys and made more than that.
wordofx · 13h ago
Congrats on commenting without reading the article.
kristopolous · 12h ago
I wonder if you can honeypot this.
xlii · 11h ago
Probably worth mentioning that a force push is a ref-related activity, not a snapshot-related activity. Garbage collection might remove unreferenced commits.
This should be done through history rewrites but as other commenters mention - GitHub has its own rights (and GitHub != git).
I'd recommend looking at simpler alternatives. IMO Jujutsu is mature enough for daily usage, and Fossil is a neat alternative if one wants to drop GitHub completely (albeit not very easy to use).
xyst · 11h ago
One of the reasons I keep `.env` and `.env.*` files in a global ignore file.
v3ss0n · 13h ago
Daily reminder:
- Once it is on the internet, it is always there, so rotate the key/secrets FIRST.
- Never think secrets are gone just because you have re-committed.
- Deleting a commit is not enough: use BFG Repo-Cleaner - https://rtyley.github.io/bfg-repo-cleaner/ - and force push to change history.
Edit: Forgot to add the most important thing - rotating the key.
weird-eye-issue · 12h ago
I think you mean "rotate the keys"
GrandaPanda · 12h ago
Had it correct in the first two points, then contradicted yourself with the last. Rotate your secrets.
v3ss0n · 11h ago
Yeah, good point. Rotating secrets is a point I forgot to add.
hnlmorg · 13h ago
The problem here is that GitHub keeps the ref logs even for commits that no longer exist.
I don’t see how BFG helps here
v3ss0n · 11h ago
It rewrites the history. Isn't that really enough?
You can remove all the keys from the git history.
And I agree, I forgot the point about rotating the key, which I always do first.
hnlmorg · 5h ago
No it’s not enough. Read the article and it will explain why.
Also, if you’re going to rotate your secrets (which you absolutely should do regardless) then everything else is pointless because it’s now just an invalid credential.
Timwi · 10h ago
It might remove it from your local repo, but not from GitHub, that's the point.
SillyUsername · 13h ago
Git never forgets, this isn't really a shocking revelation.
tux3 · 13h ago
Git does forget, it has a gc mechanism specifically for forgetting.
GitHub can't use the native git gc, and apparently doesn't have their own fork-aware and weird-cross-repo-merge-aware gc, so they might just not have built a way to track which commits are dangling.
But that's not obvious at all.
Ah right, so forking the codebase, then deleting the original repo forces git to forget all copies. Gotcha thanks for the enlightenment.
tossandthrow · 13h ago
Git is not point-in-time backups. It is versioning.
You are free to organize your version history as you see fit, and you can certainly rewrite history.
The only issue you might have is signed commits from collaborators, which you cannot re-sign.
lloeki · 12h ago
> and you can certainly rewrite history.
But you can't coerce everyone in the world to remove all traces of the alternate history that was a thing before being rewritten.
So while you can make git forget something in your local repo, you can't make git forget across the decentralised set of repos, which is part of git's core design.
So in that sense, yes, git never forgets, by design.
eviks · 13h ago
What specific property of git mandates a website to not clean up those dangling commits?
orthoxerox · 12h ago
Git has no de jure hierarchy of repositories. We de facto treat the GH repo as the primary one (and call it "origin"), but mechanically it's a peer repo. Even though it lets other repos push it around a bit and obeys commands like "change the branch to point to another commit", there are no commands to force it to delete the data.
eviks · 12h ago
> Even though it lets other repos push it around
So there is hierarchy
> there are no commands to force it to delete the data.
That's just the current state, the question was how git prevents "de facto" deletion on a server? How is it anti-git to ask the server to execute git garbage collection commands, for example?