I agree this was a security concern and it was reported and addressed appropriately. With that said, as these things go, this is pretty minor; perhaps a medium-severity issue. Information disclosures like this may be leveraged by attackers with existing access to the lower environment, in conjunction with other issues, to escalate their privileges. By itself, without that existing access, it is not usable.
Moreover, the issue wasn’t that AWS recommended or automatically set up the environment insecurely. Their documentation simply left the commonly known best practice of disallowing trusts from lower to prod environments implicit, rather than explicitly recommending users follow it when deploying the solution.
I don’t think over-hyping smaller issues, handled appropriately, helps anyone.
liquidpele · 4h ago
Sounds like typical hyperbole. Worked at a place once where some “security researcher” trashed the product because they could do bad things on the appliance… if logged in as root.
placardloop · 5h ago
This so called “security risk” is a role in a nonprod that can list metadata about things in your production accounts. It can list secret names, list bucket names, list policy names, and similar.
Listing metadata is hardly a security issue. The entire reason these List* APIs are distinct from Get* APIs is that they don’t give you access to the object itself, just metadata. And if you’re storing secret information in your bucket names, you have bigger problems.
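The List*/Get* split placardloop describes can be made concrete as IAM policy statements. A hedged sketch (the bucket name is a placeholder, not from the thread):

```python
import json

# One statement grants only metadata enumeration; the other grants
# access to object contents. The split is deliberate in AWS's API design.
metadata_only = {
    "Effect": "Allow",
    "Action": ["s3:ListAllMyBuckets", "s3:ListBucket"],  # names and keys, not contents
    "Resource": "*",
}
object_access = {
    "Effect": "Allow",
    "Action": ["s3:GetObject"],  # the object bytes themselves
    "Resource": "arn:aws:s3:::example-bucket/*",
}
policy = {"Version": "2012-10-17", "Statement": [metadata_only, object_access]}
print(json.dumps(policy, indent=2))
```

A principal holding only the first statement can learn what exists, but not read any of it.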
voytec · 11m ago
> And if you’re storing secret information in your bucket names, you have bigger problems.
Yeah but the design should be made on the assumption that some customers will do stupid things, and protect them.
Not an identical case, but I once bought a Cisco router for home lab/learning and it appeared to be a hardware decommissioned by one of European banks, not flashed before being handed over to some asset disposal contractor. It eventually landed on an auctioning portal with bank's configuration. The bank was very meticulous with documenting stuff like the address of the branch where it was installed in device's config and ACL names/descriptions included employees' names and room numbers. You could easily extract the names of people granted extended access to internal systems.
So while I agree with you in principle, even financial institutions do stupid things, lack procedures, or don't always follow the ones they have. A cloud provider's design should assume customers won't follow best practices.
gwbas1c · 5h ago
Depending on what the metadata is, it can be a huge security risk.
For example, some US government agencies consider computer names sensitive, because the computer name can identify who works in what government role, which is very sensitive information. Yet, depending on context, the computer name can be considered "metadata."
placardloop · 5h ago
AWS does not treat metadata with the same level of sensitivity as other data. The docs explicitly say that sensitive information should not be stored in eg tags or policies. If you are attempting to do so, you’re fighting against the very tool you’re using.
whs · 1h ago
To add on this point, in my interaction with AWS employees it seems that
- The account manager and the enterprise support TAM can view a list of all resources on the account, including metadata like resource names, instance types, and cost explorer tags. Enterprise support routinely presents a monthly cost review with us, so it is clear that they can always access this information without our explicit consent. They do not have the ability to view detailed internal information about resources though, such as internal logs.
- When opening a support case, the ticketing system asks for a resource ARN, which may contain the name. It seems that the support team can view some data about that object, including monitoring data and internal logs, but accessing "customer data" (such as ssh-ing into an RDS instance) requires explicit, one-off consent.
- I never opened any issues about IAM policy, so I don't know whether they can see IAM role policy documents.
- It seems that the account ID and account name are also often used by both AWS' sales side and the reseller's side. I think I read somewhere that it is possible to retrieve the AWS account ID if you know an S3 bucket name or something, and when exchanging data with an external partner via AWS (e.g. S3, VPC peering) you're required to share your account ID with the partner.
vlovich123 · 4h ago
I invite you to consider the possibility that even though that’s the case, it’s Amazon’s fault for this design choice and one that can be critiqued especially since metadata disclosure can be paired with other exploits. For example, if I know a bucket name then I know the bucket’s domain name since buckets are by default created open to the public.
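For illustration, the name-to-endpoint derivation vlovich123 alludes to is purely mechanical; knowing a bucket's name immediately yields its well-known HTTPS endpoints (whether a request succeeds still depends on the bucket's policy). A sketch with a placeholder name:

```python
def bucket_endpoints(bucket: str, region: str = "us-east-1") -> list[str]:
    """Derive the standard virtual-hosted-style S3 endpoints for a
    bucket name. No AWS call is needed; the mapping is deterministic."""
    return [
        f"https://{bucket}.s3.amazonaws.com",
        f"https://{bucket}.s3.{region}.amazonaws.com",
    ]

print(bucket_endpoints("example-bucket"))
```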
There’s no inherent reason for treating metadata as less sensitive and there would be fewer problems if it were treated with the same sensitivity as normal data.
Said another way, some users expect the metadata to be treated sensitively and Amazon’s subversion of this is an Amazon problem not a user problem since this user expectation is rather reasonable.
bigstrat2003 · 3h ago
> Said another way, some users expect the metadata to be treated sensitively and Amazon’s subversion of this is an Amazon problem not a user problem since this user expectation is rather reasonable.
It's an Amazon problem to the extent that they lose business over it. But if people choose to use AWS, despite having different requirements for data security than AWS provides, that is a user problem. At some point the onus is on the user to understand what a tool does and doesn't do, and not choose a tool that doesn't meet their requirements.
The globally unique names of S3 could be problematic with just the metadata of name.
You could figure out how a company names their S3 buckets. It's subtle, but you could create a bunch of typo'd variants of the buckets and sit around waiting for s3 server logs/cloudtrail to tell you when someone hits one of the objects.
When that happens, you could get the accessing AWS Account # (which isn't inherently private, but something that you wouldn't want to tell the world about), IAM user accessing object, and which object was attempted to be accessed.
Say the IAM user is a role with terribly insecure assume role policy... Or one could put an object where the misconfigured service was looking and it'd maybe get processed.
This kind of attack is preventable but I doubt most people are configuring SCPs to the level of detail you'd need to completely prevent this.
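The typo'd-variant probe described above can be sketched with pure string edits, no AWS calls. Defenders could run the same generator to pre-register or monitor the variants themselves:

```python
def typo_variants(bucket: str) -> set[str]:
    """Generate simple one-edit typo variants of a bucket name
    (single-character deletions and adjacent transpositions) -- the
    kind an attacker might squat on while watching access logs."""
    variants = set()
    for i in range(len(bucket)):
        variants.add(bucket[:i] + bucket[i + 1:])  # drop one character
        if i + 1 < len(bucket):
            chars = list(bucket)
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
            variants.add("".join(chars))           # swap neighbors
    variants.discard(bucket)
    return variants
```

This is only a sketch of the enumeration step; catching a hit still requires server access logging or CloudTrail data events on the squatted buckets, as the comment notes.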
coredog64 · 3h ago
That’s why Amazon recommends the use of the expected owner parameter for S3 operations.
ISTR it’s also possible to apply an SCP that limits S3 reads and writes outside your organization. If not via an SCP then via a permission boundary at the least.
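A hedged sketch of the kind of SCP coredog64 recalls, using the `aws:ResourceOrgID` condition key to deny S3 reads/writes against buckets outside your own Organization. The org ID is a placeholder:

```python
import json

# Deny-list SCP: any s3:GetObject/PutObject against a resource whose
# owning account is NOT in this Organization is refused, regardless of
# what identity policies allow. "o-example123" is a placeholder org ID.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyS3OutsideOrg",
        "Effect": "Deny",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "*",
        "Condition": {
            "StringNotEquals": {"aws:ResourceOrgID": "o-example123"}
        },
    }],
}
print(json.dumps(scp, indent=2))
```

The expected-owner parameter is the per-call analogue, e.g. `s3.get_object(Bucket="b", Key="k", ExpectedBucketOwner="111122223333")` in boto3, which fails if the bucket is owned by any other account.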
vlovich123 · 3h ago
I know because it was one of the key decisions we made with R2 and pushed this point in the community.
The majority of S3 buckets, especially valuable ones, were created back when that was the default, and thus the metadata sensitivity of bucket names remains (and that isn’t the only metadata issue).
belter · 2h ago
S3 buckets were never public by default. From the link you posted:
"...Amazon S3 buckets are and always have been private by default. Only the bucket owner can access the bucket or choose to grant access to other users..."
The feature and announcement you linked was about enabling an additional safety feature that blocks buckets from becoming public, even if you intentionally (or accidentally) configure them with public access.
The well known accidents in the past, of Facebook or the Pentagon having private data in public S3 buckets, I can only attribute to the modern practices of self-paced learning, skipping videos on Udemy courses or deciding formal training is no longer necessary because I can Google it...
everfrustrated · 3h ago
AWS S3 buckets have always been default private since forever.
stogot · 1h ago
> since buckets are by default created open to the public.
This is false
gitremote · 2h ago
"We kill people based on metadata."
- Ex-NSA chief Michael Hayden
Metadata is data. In a large corporation, metadata can also reveal projects under NDA that only a select few employees are supposed to know about.
dangus · 4h ago
That sounds more like the government’s fault for putting a secret in the name.
Make the computer name a random string or random set of words, no relation to the person or department who uses it. Problem solved.
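A sketch of the naming scheme suggested above; the word list and format are arbitrary, the point is only that the name carries no hint of who uses the machine or where:

```python
import secrets

# Illustrative word list -- in practice you'd use a larger dictionary.
WORDS = ["maple", "otter", "granite", "comet", "birch", "lagoon"]

def random_hostname(n_words: int = 2) -> str:
    """Build a hostname from random words plus a short hex suffix,
    with no relation to the person or department using the machine."""
    words = [secrets.choice(WORDS) for _ in range(n_words)]
    return "-".join(words + [secrets.token_hex(2)])

print(random_hostname())  # e.g. "otter-comet-9f3a"
```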
pixl97 · 3h ago
And more problems created.
Now you have to have another system that decodes the random words to human usable words. Is that information going to be stored all in one system? Is each team going to be responsible for the translation? How is that going to be protected from information loss?
I work with systems like this so, yea, it can be done. But it cannot be done trivially.
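A minimal sketch of the translation layer pixl97 describes; the notable consequence is that the mapping itself becomes the sensitive, access-controlled asset (names and types here are hypothetical):

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class Owner:
    team: str
    contact: str

# In practice this would be a protected, audited datastore, not a dict:
# the registry now holds exactly the linkage the random names removed.
registry: Dict[str, Owner] = {}

def register(hostname: str, owner: Owner) -> None:
    registry[hostname] = owner

def lookup(hostname: str) -> Optional[Owner]:
    # Every call here is a disclosure event worth logging.
    return registry.get(hostname)
```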
Mbwagava · 4h ago
I don't think the US government is representative of any kind of advisable behavior. Perhaps if they weren't doing stuff that makes people want to murder them we wouldn't have to light piles of cash on fire to protect the perpetrators.
LPisGood · 4h ago
Whether or not it’s advisable doesn’t really change the fact that if the US government is commonly doing something, then it is not correct to describe a security impact to those SOPs as “hardly a security risk”.
philipwhiuk · 5h ago
At the end of the day if you deploy a tool that can access production data, you need to treat it like production. That's the reality here.
placardloop · 5h ago
No, that’s not the reality. “Production data” isn’t as black and white as that.
Metadata about your account, regardless of if you call it “production” or not, is not guaranteed to be treated with the same level of sensitivity as other data. Your threat model should assume that things like bucket names, role names, and other metadata are already known by attackers (and in fact, most are, since many role names managed by AWS have default names common across accounts).
EliavLivneh · 5h ago
Hey, author of the blog here :)
Just wanted to point out that it is not just names of objects in sensitive accounts exposed here - as I wrote, the spoke roles also have iam:ListRoles and iam:ListPolicies, which is IMO much more sensitive than just object names. These contain a whole lot of information about who is allowed to do what, and can point at serious misconfigurations that can then be exploited onwards (e.g. misconfigured role trust policies, or knowing about over-privileged roles to target).
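For context, the enumeration the author mentions is a couple of paginated calls. A sketch in which the client is injected (e.g. `boto3.client("iam")`), so the function itself needs no credentials:

```python
def list_role_names(iam_client) -> list:
    """Enumerate IAM role names via the iam:ListRoles permission.
    Accepts any object exposing get_paginator(), so a stub client
    works in tests and a real boto3 client works in production."""
    names = []
    for page in iam_client.get_paginator("list_roles").paginate():
        names.extend(role["RoleName"] for role in page["Roles"])
    return names
```

With only this permission an attacker gets the role inventory; each `ListRoles` entry also includes the role's trust policy, which is where the onward-exploitation concern comes from.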
placardloop · 5h ago
ListPolicies does not show the contents of policies, so the information you mentioned isn’t possible to obtain from there.
Things like GetKeyPolicy do, but as I mentioned in my comments already, the contents of policies are not sensitive information, and your security model should assume they are already known by would-be attackers.
“My trust policy has a vulnerability in it but I’m safe because the attacker can’t read my policy to find out” is security by obscurity. And chances are, they do know about it, because you need to account for default policies or internal actors who have access to your code base anyway (and you are using IaC, right?)
You’re right to raise awareness about this because it is good to know about, but your blog hyperbolizes the severity of this. This world of “every blog post is a MAJOR security vulnerability” is causing the industry to think of security researchers as the boy who cried wolf.
marcusb · 4h ago
> “My trust policy has a vulnerability in it but I’m safe because the attacker can’t read my policy to find out”
The goal in preventing enumeration isn't to hide defects in the security policy. The goal is to make it more difficult for attackers to determine what and how they need to attack to move closer to their target. Less information about what privileges a given user/role have = more noise from the attacker, and more dwell time, all other things being equal. Both of which increase the likelihood of detection prior to full compromise.
zmgsabst · 3h ago
iam:ListRoles tells you ARNs and policy rules — at least, in their example response.
I don’t think this is a major or severe issue — but it certainly would provide information for pivots, eg, ARNs to request and information about from where.
ImPostingOnHN · 4h ago
I disagree with your opinion here: The contents of security policies can easily be sensitive information.
I think what you mean to say is, "Amazon has decided not to treat the contents of security policies as sensitive information, and told its customers to act accordingly", which is a totally orthogonal claim.
It's extremely unlikely that every decision Amazon makes is the best one for security. This is an example of where it likely is not.
placardloop · 4h ago
It’s not orthogonal. The foundation of good security is using your tools correctly. AWS explicitly tells users to not store sensitive information in policies. If you’re doing so, it’s not AWS making the mistake.
ImPostingOnHN · 4h ago
AWS is evidently not using their own tools correctly to build AWS then, because we know that the contents of security policies can easily contain sensitive information.
Just because Amazon tells people not to put sensitive information in a security policy, doesn't mean a security policy can't or shouldn't contain sensitive information. It more likely means Amazon failed to properly implement security policies (since they CAN contain sensitive information), and gives their guidance as an excuse/workaround. The proper move would be to properly implement security policies such that the access is as limited as expected, because again, security policies can contain sensitive information.
An analogy would be a car manufacturer that tells owners to not put anything in the car they don't want exploded. "But they said don't do it!" -- Obviously this is still unreasonable: A reasonable person would expect their car to not explode things inside it, just like a reasonable person would expect their cloud provider to treat customer security policies as sensitive data. Don't believe me here? Call up a sample of randomly-selected companies and ask for a copy of their security policies.
This is key to understand here: What Amazon says is best security given their existing decisions is not the best security for a cloud provider to provide customers. We're discussing the latter: Not security given a tool, but security of the tool itself, and the decisions that went into designing the tool. It's certainly not the case that the tool is perfect and can't be improved, and it's not a given that the tool is even good.
bulatb · 5h ago
Someone invented the two-sentence clickline. Now even blogs do it.
y-curious · 5h ago
Hackers Hate Him: The Weird Trick that Keeps Users Clicking in 2025
smallnix · 5h ago
Are HackerNews Comments the Next Victim of this Crazy Trend?
myself248 · 5h ago
It's more likely than you think!
abhisek · 4h ago
IAM is complex. More so with federation and cross account trust. Not sure every weakness can be considered as a vulnerability.
In this case, I was looking for a threat model within which this is a vulnerability, but was unable to find one.
jonfw · 4h ago
The security industry, unfortunately, is awash with best practice violations masquerading as vulnerabilities
18172828286177 · 2h ago
This is just incompetence on the part of the person deploying this solution. Just because AWS say “don’t deploy to the management account” doesn’t mean you should deploy something with access to all your accounts into a dev account.
tasuki · 58m ago
Without reading this article: of course!
Most "security tools" introduce security risks. An antivirus is usually a backdoor to your computer. So are various "endpoint protection" tools.
The whole security industry is a sham.
gitroom · 2h ago
Man, this got me thinking hard about where the line really is for what counts as a real risk versus hype. Everyone draws it differently. You ever catch yourself worrying too much about stuff that probably isn't even a threat?
ahoka · 5h ago
The link is ironically blocked by my company's security suite.
MortyWaves · 5h ago
Blocking pseudo-“security research” like this one is probably a safe bet.
dangus · 5h ago
As an AWS-focused practitioner, I started doing Google Cloud training and it blew my mind when I found out that the multiple account sub-account mess that AWS continues to use just doesn’t exist there. GCP sensibly uses a folder and project system that provides a lot of flexibility and IAM control.
It also blew my mind that Google Cloud VPCs and autoscaling groups are global, so that you don’t have to jump through hoops and use the Global Accelerator service to architect a global application.
After learning just those two things I’m downright shocked that Google is still in 3rd place in this market. AWS could really use a refactor at this point.
thinkindie · 5h ago
I think Google scares a lot of people away with their approach of not letting you talk to any human whatsoever unless you spend a lot of money on a monthly basis.
I read a lot of horror stories about people getting into trouble with GCP and not being able to reach a human, whereas with AWS you would get access to some human presence.
Things might have changed, but I guess a lot of people still have this in the back of their minds.
phinnaeus · 5h ago
I’m scared of Google realising that maintaining a given cloud product just isn’t fun anymore and sending it to the Google graveyard.
MortyWaves · 5h ago
Or sell it off to nefarious companies like SquareSpace
bigstrat2003 · 3h ago
Yeah I think anyone who chooses to do business with Google at this point is taking a needless risk. I wouldn't trust them to continue to provide anything except perhaps the ad business.
stackskipton · 4h ago
Like IoT service? I had friends working in that field that scrambled for a year when that happened and they will never touch Google Cloud again.
Xunjin · 4h ago
Ahhhh it wasn't nice, but we at Echo54[0] had a plan/execution (6 months before the due date) to migrate to RabbitMQ (with the MQTT plugin), which works way better and is cheaper than GCP's IoT service. We did all of that in less than 2 months.
That may be true, but a lot of cloud customers are in that category of spending a lot of money on a monthly basis.
Google’s poor support reputation is deserved, but I’m not sure I’d want to architect extra stuff over that issue. After I found out those facts about GCP I was pretty sure I could have gotten 6 months of my professional life back because of the architecture of GCP being superior.
sumitkumar · 5h ago
So this is about customer support. Google supports the customer with a better product but minimal manual support for issues later.
AWS has an organically evolved bad product, designed by a long line of six-page memos, but manual support in case things get too confusing or the customer just needs emotional support.
icedchai · 4h ago
I've worked with both AWS and GCP off and on for 15 years. In general, I find GCP easier to work with: better developer experience, services that are simpler to configure (Cloud Run vs ECS/Fargate), etc. However, AWS is like the new IBM: nobody ever got fired for going with AWS...
philipwhiuk · 5h ago
One of the stand-out things at AWS Summit London was the number of talks basically saying:
"Yes accounts is a mess but they're what we have".
candiddevmike · 5h ago
Google's resource management + AWS's IAM + Azure's... nah == best of everything.
arccy · 5h ago
AWS IAM terrible, GCP's is much better
candiddevmike · 4h ago
In AWS, everything is in one place and uses a fairly expressive policy syntax. For GCP, you have "global IAM" in one place, contextual IAM in another (VPC-SC), per-resource IAM under the resource (GCS buckets), roles in another spot that require using the most sluggish docs website in the world to decode, and user/group management in an entirely separate app (cloud identity/workspace).
How is GCP much better? FWIW I use/evangelize GCP everyday. Their IAM setup is just very naive and seems like it has had things bolted on as an afterthought. AWS is much more well designed and future proof.
trallnag · 3h ago
It's not as clean in AWS as you make it out to be. Service control policies, resources policies in services like S3 and SNS...
bbarnett · 5h ago
Sort of the same with anything Amazon. Look at their retail website! It used to be the most ground breaking, impressive product search engine out there.
Now it's weird in a dozen different ways, and it endlessly spews ridiculous results at you. It's like a gorgeous mansion from the 1900s, which received no upkeep. It's junk now.
For example, if I want to find new books by an author I've bought from before, I have to go to: returns & orders, digital orders, find book and click, then author's name, all books, language->english, format->kindle, sort by->publication date.
There's no way to set defaults. No way to abridge the process. Mysteriously, you cannot click on the author's name in "returns & orders". It's simply quite lame.
Every aspect of Amazon is like this now. There are weird workflows throughout the site. It's living on inertia.
shermantanktop · 2h ago
We all say “Microsoft”, “Google”, “Amazon” as though each is a single monolithic entity with a consistency of culture, mission, and behavior. And yet I bet the company you work for does things in marketing that don’t reflect how engineering thinks.
Your observations imply a root cause. But public information about Amazon’s corporate structure shows that AWS is almost a separate company from the website. Same is true for Google’s search vs YouTube or Apple hardware design vs their iMessages group.
gwbas1c · 5h ago
My experience with GCP was that the support staff was rude.
belter · 2h ago
What is the "account sub-account" you are referring to? Does it blow your mind Google Availability Zones are firewalls across the SAME data center?
I don't know GCP but my experiences with Azure were also way smoother than AWS. It's like the Amazon folks are not even trying to work on less friction...
datadrivenangel · 5h ago
Which parts of Azure have you used that have less friction than AWS?
stackskipton · 4h ago
Azure Ops type, IAM and organizing is much better.
MikeIndyson · 4h ago
It depends on your implementation; you should not store sensitive data in metadata.
atoav · 3h ago
A fundamental problem that plagues many security solutions can be understood by analogy:
Imagine an incredibly secure castle. There are thick, unclimbable walls, moats, trap rooms, everything compartmentalized; an attacker who gains control of one section hasn't achieved much in terms of the whole castle; the men in each section are carefully vetted and are not allowed to have contact or family relationships with men stationed in other sections, so they cannot easily be bribed or forced to open doors. Everything is fine.
But the king is furious: the attackers shouldn't control any part of the castle! As a matter of principle! The architects reassure the king that everything is fine and there is no need to worry. The king is unconvinced, fires them, and searches for architects who will do his bidding. So the newly hired architects scramble together and come up with secret hallways and tunnels, connecting all parts of the castle so the defenders can clear the building section by section. The special guards in charge of this get high privileges, so they can even fight attackers who reach the king's bedroom. The guard is also tasked with keeping in touch with the attackers, so they are extra prepared for when the attack comes and understand the attackers' mindset inside out.
The king is pleased, the castle is safe. One night one of those guards turns against the king and the attackers are sneaked into the castle. The enemy is suddenly everywhere and they kill the king. A battle that should have been fought in stages going inwards is now fought from the inside out and the defenders are suddenly trapped in the places that were meant for the very enemies they are fighting. The kingdom has fallen.
The problem with many security solutions – including AV solutions – is that you give the part of your system that comes into contact with the "enemy" the keys to your kingdom, usually with full, unchecked privileges (how else to read everything that is going on in the system?). Actual security is the result of strict compartmentalization and a careful, continuous vetting of how each section can be abused and leveraged once it has fallen. Just like in mechanical engineering, where each new moving part can add a new failure point, in security every new privileged thing adds a lot of attack surface that wasn't previously there. And if that attack surface gives you the keys to the kingdom, it isn't the security solution, it is the target.
swisniewski · 3h ago
The article is bullshit.
AWS has a pretty simple model: when you split things into multiple accounts those accounts are 100% separate from each other (+/- provisioning capabilities from the root account).
The only way cross account stuff happens is if you explicitly configure resources in one account to allow access from another account.
If you want to create different subsets of accounts under your org with rules that say subset a (prod) shouldn’t be accessed by another subset (dev), then the onus for enforcing those rules are on you.
Those are YOUR abstractions, not AWS abstractions. To them, it’s all prod. Your “prod” accounts and your “dev” accounts all have the same prod SLAs and the same prod security requirements.
The article talks about specific text in the AWS instructions:
“Hub stack - Deploy to any member account in your AWS Organization except the Organizations management account."
They label this as a “major security risk” because the instructions didn’t say “make sure that your hub account doesn’t have any security vulnerabilities in it”.
AWS shouldn’t have to tell you that, and calling it a major security risk is dumb.
Finally, the access given is to be able to enumerate the names (and other minor metadata) of various resources and the contents of IAM policies.
None of those things are secret, and every dev should have access to them anyways. If you are using IAC, like terraform, all this data will be checked into GitHub and accessible by all devs.
Making it available from the dev account is not a big deal. Yes, it’s ok for devs to know the names of IAM roles, the names of encryption key aliases, and the contents of IAM policies. This isn’t even an information disclosure vulnerability.
It’s certainly not a “major risk”, and is definitely not a case of “an AWS cross account security tool introducing a cross account security risk”.
This was, at best, a mistake by an engineer that deployed something to “dev” that maybe should have been in “prod” (or even better in a “security tool” environment).
But the actual impact here is tiny.
The set of people with dev access should be limited to your devs, who should have access to source control, which should have all this data in it anyways.
Presumably dev doesn’t require multiple approvals for a human to assume a role, and probably doesn’t require a bastion (and prod might have those controls), so perhaps someone who compromises a dev machine could get some prod metadata.
However someone who compromises a dev machine also has access to source control, so they could get all this metadata anyways.
More over, the issue wasn’t that AWS recommended or automatically setup the environment insecurely. Their documentation simply left the commonly known best practice of disallowing trusts from lower to prod environments implicit, rather than explicitly recommending users follow that best practice in using the solution.
I don’t think over-hyping smaller issues, handled appropriately, helps anyone.
Listing metadata is hardly a security issue. The entire reason these List* APIs are distinct from Get* APIs is that they don’t give you access to the object itself, just metadata. And if you’re storing secret information in your bucket names, you have bigger problems.
Yeah but the design should be made on the assumption that some customers will do stupid things, and protect them.
Not an identical case, but I once bought a Cisco router for home lab/learning and it appeared to be a hardware decommissioned by one of European banks, not flashed before being handed over to some asset disposal contractor. It eventually landed on an auctioning portal with bank's configuration. The bank was very meticulous with documenting stuff like the address of the branch where it was installed in device's config and ACL names/descriptions included employees' names and room numbers. You could easily extract the names of people granted extended access to internal systems.
So while I agree with you in principal, even financial institutions do stupid things, lack procedures or their processes don't always follow them. Cloud provider's design should assume their customers not following best practices.
For example, some US government agencies consider computer names sensitive, because the computer name can identify who works in what government role, which is very sensitive information. Yet, depending on context, the computer name can be considered "metadata."
- The account manager and the enterprise support TAM can view a list of all resources on the account, including metadata like resource name, instance type and cost explorer tags. Enterprise support routinely present a monthly cost review with us, so it is clear that they can always access this information without our explicit consent. They do not have the ability to view detailed internal information about it though, such as internal logs.
- When opening support case, the ticketing system ask for resource ARN which may contains the name. It seems that the support team can view some data about that object including monitoring data and internal logs, but potentially accessing "customer data" (such as ssh-ing into an RDS instance) requires explicit, one off consent.
- I never opened any issues about IAM policy, so I don't know if they see IAM role policy document
- It seems that the account ID and account name is also often used by both AWS' sales side and reseller's side. I think I read somewhere that it is possible to retrieve the AWS account ID if you know S3 bucket or something, and when exchanging data with external partner via AWS (eg. S3, VPC peering) you're required to exchange account ID to the partner.
There’s no inherent reason for treating metadata as less sensitive and there would be fewer problems if it were treated with the same sensitivity as normal data.
Said another way, some users expect the metadata to be treated sensitively and Amazon’s subversion of this is an Amazon problem not a user problem since this user expectation is rather reasonable.
It's an Amazon problem to the extent that they lose business over it. But if people choose to use AWS, despite having different requirements for data security than AWS provides, that is a user problem. At some point the onus is on the user to understand what a tool does and doesn't do, and not choose a tool that doesn't meet their requirements.
longer if using the console
You could figure out how a company names their S3 buckets. It's subtle, but you could create a bunch of typo'd variants of the buckets and sit around waiting for s3 server logs/cloudtrail to tell you when someone hits one of the objects.
When that happens, you could get the accessing AWS Account # (which isn't inherently private, but something that you wouldn't want to tell the world about), IAM user accessing object, and which object was attempted to be accessed.
Say the IAM user is a role with terribly insecure assume role policy... Or one could put an object where the misconfigured service was looking and it'd maybe get processed.
This kind of attack is preventable but I doubt most people are configuring SCPs to the level of detail you'd need to completely prevent this.
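A rough sketch of the name-generation step described above (purely illustrative; the bucket name is invented, and actually detecting hits would require creating the candidate buckets and then watching S3 server access logs or CloudTrail for requests):

```python
def typo_variants(bucket: str) -> set[str]:
    """Generate simple typo'd variants of a bucket name:
    single-character deletions, doubled characters, and adjacent swaps."""
    variants = set()
    for i in range(len(bucket)):
        variants.add(bucket[:i] + bucket[i + 1:])                  # deletion
        variants.add(bucket[:i] + bucket[i] * 2 + bucket[i + 1:])  # doubling
    for i in range(len(bucket) - 1):
        chars = list(bucket)
        chars[i], chars[i + 1] = chars[i + 1], chars[i]
        variants.add("".join(chars))                               # adjacent swap
    variants.discard(bucket)  # never include the real name
    return variants

# "acme-prod-logs" is a made-up example name
candidates = typo_variants("acme-prod-logs")
print(len(candidates), "candidate bucket names")
```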
ISTR it’s also possible to apply an SCP that limits S3 reads and writes outside your organization. If not via an SCP then via a permission boundary at the least.
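The deny side of that can (I believe) be written against the aws:ResourceOrgID global condition key. A sketch of what such an SCP might look like, expressed as a Python dict for readability — treat it as an unverified illustration, not a tested policy:

```python
import json

# Hypothetical SCP: deny S3 object reads/writes against any resource that
# does not belong to the caller's own AWS Organization. aws:ResourceOrgID
# and aws:PrincipalOrgID are real condition keys, but verify the exact
# policy against AWS docs before relying on it.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyS3OutsideOrg",
            "Effect": "Deny",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "aws:ResourceOrgID": "${aws:PrincipalOrgID}"
                }
            },
        }
    ],
}

print(json.dumps(scp, indent=2))
```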
The majority of S3 buckets, especially valuable ones, were created back when that was the default, and thus the metadata sensitivity of bucket names remains (and that isn’t the only metadata issue).
"...Amazon S3 buckets are and always have been private by default. Only the bucket owner can access the bucket or choose to grant access to other users..."
The feature and announcement you linked were about activating an additional safety feature that blocks buckets from becoming public, even if you intentionally (or accidentally) configured them with public access.
The well-known accidents of the past, like Facebook or the Pentagon having private data in public S3 buckets, I can only attribute to the modern practices of self-paced learning: skipping videos on Udemy courses, or deciding formal training is no longer necessary because you can Google it...
This is false
"We kill people based on metadata." - Ex-NSA chief Michael Hayden
Metadata is data. In a large corporation, metadata can also reveal projects under NDA that only a select few employees are supposed to know about.
Make the computer name a random string or random set of words, no relation to the person or department who uses it. Problem solved.
Now you have to have another system that decodes the random words into human-usable names. Is all that information going to be stored in one system? Is each team going to be responsible for the translation? How is that going to be protected from information loss?
I work with systems like this, so yeah, it can be done. But it cannot be done trivially.
Metadata about your account, regardless of whether you call it “production” or not, is not guaranteed to be treated with the same level of sensitivity as other data. Your threat model should assume that things like bucket names, role names, and other metadata are already known by attackers (and in fact, most are, since many role names managed by AWS have default names common across accounts).
Just wanted to point out that it is not just names of objects in sensitive accounts exposed here - as I wrote, the spoke roles also have iam:ListRoles and iam:ListPolicies, which is IMO much more sensitive than just object names. These contain a whole lot of information about who is allowed to do what, and can point at serious misconfigurations that can then be exploited onwards (e.g. misconfigured role trust policies, or knowing about over-privileged roles to target).
Things like GetKeyPolicy do, but as I mentioned in my comments already, the contents of policies are not sensitive information, and your security model should assume they are already known by would-be attackers.
“My trust policy has a vulnerability in it but I’m safe because the attacker can’t read my policy to find out” is security by obscurity. And chances are, they do know about it, because you need to account for default policies or internal actors who have access to your code base anyway (and you are using IaC, right?)
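To make the "misconfigured trust policy" case concrete, here's a hypothetical pair of trust policy documents (account IDs and role names are invented): the first lets any principal in the named account assume the role — the classic over-broad misconfiguration — while the second is scoped to one specific role plus an ExternalId:

```python
# Both documents are illustrative sketches, not copied from any real account.
overly_broad = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        # ":root" means ANY principal in account 111122223333 (all users
        # and roles) may assume this role -- the common misconfiguration.
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
        "Action": "sts:AssumeRole",
    }],
}

scoped = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        # Only one specific role may assume, and only with a matching
        # ExternalId (a made-up value here).
        "Principal": {"AWS": "arn:aws:iam::111122223333:role/deploy"},
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": "example-id"}},
    }],
}
```

Spotting the first pattern via iam:ListRoles output is exactly the kind of pivot-enabling information being discussed.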
You’re right to raise awareness about this because it is good to know about, but your blog hyperbolizes the severity of this. This world of “every blog post is a MAJOR security vulnerability” is causing the industry to think of security researchers as the boy who cried wolf.
The goal in preventing enumeration isn't to hide defects in the security policy. The goal is to make it more difficult for attackers to determine what and how they need to attack to move closer to their target. Less information about what privileges a given user/role has = more noise from the attacker, and more dwell time, all other things being equal. Both of which increase the likelihood of detection prior to full compromise.
https://docs.aws.amazon.com/IAM/latest/APIReference/API_List...
I don’t think this is a major or severe issue, but it certainly would provide information for pivots, e.g., ARNs to request and information about from where.
I think what you mean to say is, "Amazon has decided not to treat the contents of security policies as sensitive information, and told its customers to act accordingly", which is a totally orthogonal claim.
It's extremely unlikely that every decision Amazon makes is the best one for security. This is an example of where it likely is not.
Just because Amazon tells people not to put sensitive information in a security policy, doesn't mean a security policy can't or shouldn't contain sensitive information. It more likely means Amazon failed to properly implement security policies (since they CAN contain sensitive information), and gives their guidance as an excuse/workaround. The proper move would be to properly implement security policies such that the access is as limited as expected, because again, security policies can contain sensitive information.
An analogy would be a car manufacturer that tells owners to not put anything in the car they don't want exploded. "But they said don't do it!" -- Obviously this is still unreasonable: A reasonable person would expect their car to not explode things inside it, just like a reasonable person would expect their cloud provider to treat customer security policies as sensitive data. Don't believe me here? Call up a sample of randomly-selected companies and ask for a copy of their security policies.
This is key to understand here: What Amazon says is best security given their existing decisions is not the best security for a cloud provider to provide customers. We're discussing the latter: Not security given a tool, but security of the tool itself, and the decisions that went into designing the tool. It's certainly not the case that the tool is perfect and can't be improved, and it's not a given that the tool is even good.
In this case, I was looking for a threat model within which this is a vulnerability, but was unable to find one.
Most "security tools" introduce security risks. An antivirus is usually a backdoor to your computer. So are various "endpoint protection" tools.
The whole security industry is a sham.
It also blew my mind that Google Cloud VPCs and autoscaling groups are global, so that you don’t have to jump through hoops and use the Global Accelerator service to architect a global application.
After learning just those two things I’m downright shocked that Google is still in 3rd place in this market. AWS could really use a refactor at this point.
I read a lot of horror stories about people getting in trouble with GCP and not being able to talk to a human, whereas with AWS you would get access to actual human support.
Things might have changed, but I guess a lot of people still have this in the back of their minds.
Google’s poor support reputation is deserved, but I’m not sure I’d want to architect extra stuff around that issue. After I found out those facts about GCP, I was pretty sure I could have gotten six months of my professional life back, given how superior GCP’s architecture is.
AWS has an organically evolved, bad product, designed by a long line of six-page memos, but with human support in case things get too confusing or the customer just needs emotional support.
"Yes accounts is a mess but they're what we have".
How is GCP much better? FWIW I use/evangelize GCP every day. Their IAM setup is just very naive and seems like it has had things bolted on as an afterthought. AWS is much more well designed and future-proof.
Now it's weird in a dozen different ways, and it endlessly spews ridiculous results at you. It's like a gorgeous mansion from the 1900s, which received no upkeep. It's junk now.
For example, if I want to find new books by an author I've bought from before, I have to go to: returns & orders, digital orders, find book and click, then author's name, all books, language->english, format->kindle, sort by->publication date.
There's no way to set defaults. No way to abridge the process. Mysteriously, you cannot click on the author's name in "returns & orders". It's simply quite lame.
Every aspect of Amazon is like this now. There are weird workflows throughout the site. It's living on inertia.
Your observations imply a root cause. But public information about Amazon’s corporate structure shows that AWS is almost a separate company from the website. The same is true for Google’s search vs. YouTube, or Apple’s hardware design vs. their iMessage group.
https://youtu.be/mDNHK-SzXEM?t=560
Imagine an incredibly secure castle. There are thick, unclimbable walls, moats, trap rooms, everything compartmentalized; an attacker who gains control of one section hasn't achieved much in terms of the whole castle, and the men in each section are carefully vetted and not allowed to have contact or family relationships with men stationed in other sections, so they cannot be easily bribed or forced to open doors. Everything is fine.
But the king is furious: the attackers shouldn't control any part of the castle, as a matter of principle! The architects reassure the king that everything is fine and there is no need to worry. The king is unconvinced, fires them, and searches for architects who will do his bidding. So the newly hired architects scramble together and come up with secret hallways and tunnels connecting all parts of the castle, so the defenders can clear the building section by section. The special guards in charge of this get high privileges, so they can even fight attackers who reach the king's bedroom. The guard is also tasked with keeping in touch with the attackers, so they are extra prepared for when the attack comes and understand the attackers' mindset inside out.
The king is pleased; the castle is safe. One night, one of those guards turns against the king and sneaks the attackers into the castle. The enemy is suddenly everywhere, and they kill the king. A battle that should have been fought in stages going inwards is now fought from the inside out, and the defenders are suddenly trapped in the very places that were meant for the enemies they are fighting. The kingdom has fallen.
The problem with many security solutions – including AV solutions – is that you give the part of your system that comes into contact with the "enemy" the keys to your kingdom, usually with full, unchecked privileges (how else can it read everything going on in the system?). Actual security is the result of strict compartmentalization and a careful, continuous vetting of how each section can be abused and leveraged once it has fallen. Just like in mechanical engineering, where each new moving part can add a new failure point, in security each new privileged thing adds a lot of new attack surface that wasn't previously there. And if that attack surface gives you the keys to the kingdom, it isn't the security solution; it is the target.
AWS has a pretty simple model: when you split things into multiple accounts those accounts are 100% separate from each other (+/- provisioning capabilities from the root account).
The only way cross account stuff happens is if you explicitly configure resources in one account to allow access from another account.
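For example, "explicitly configure" typically means writing a resource policy by hand, like this hypothetical bucket policy in account A granting object reads to one role in account B (all ARNs are invented for illustration):

```python
# Sketch of an explicit cross-account grant: nothing in account B can read
# this bucket unless account A's owner writes a statement like this.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        # A specific role in the OTHER account (account B, made-up ID)
        "Principal": {"AWS": "arn:aws:iam::222233334444:role/reader"},
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-bucket/*",
    }],
}
```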
If you want to create different subsets of accounts under your org, with rules that say one subset (prod) shouldn’t be accessed by another subset (dev), then the onus for enforcing those rules is on you.
Those are YOUR abstractions, not AWS abstractions. To them, it’s all prod. Your “prod” accounts and your “dev” accounts all have the same prod SLAs and the same prod security requirements.
The article talks about specific text in the AWS instructions:
“Hub stack - Deploy to any member account in your AWS Organization except the Organizations management account."
They label this as a “major security risk” because the instructions didn’t say “make sure that your hub account doesn’t have any security vulnerabilities in it”.
AWS shouldn’t have to tell you that, and calling it a major security risk is dumb.
Finally, the access given is to be able to enumerate the names (and other minor metadata) of various resources and the contents of IAM policies.
None of those things are secret, and every dev should have access to them anyways. If you are using IAC, like terraform, all this data will be checked into GitHub and accessible by all devs.
Making it available from the dev account is not a big deal. Yes, it’s OK for devs to know the names of IAM roles, the names of encryption key aliases, and the contents of IAM policies. This isn’t even an information disclosure vulnerability.
It’s certainly not a “major risk”, and is definitely not a case of “an AWS cross account security tool introducing a cross account security risk”.
This was, at best, a mistake by an engineer that deployed something to “dev” that maybe should have been in “prod” (or even better in a “security tool” environment).
But the actual impact here is tiny.
The set of people with dev access should be limited to your devs, who should have access to source control, which should have all this data in it anyways.
Presumably dev doesn’t require multiple approvals for a human to assume a role, and probably doesn’t require a bastion (and prod might have those controls), so perhaps someone who compromises a dev machine could get some Prod metadata.
However someone who compromises a dev machine also has access to source control, so they could get all this metadata anyways.
The article is just sensationalism.