S3: "Block Public Access is now enabled by default on new buckets."
On the one hand, this is obviously the right decision. The number of giant data breaches caused by incorrectly configured S3 buckets is enormous.
But... every year or so I find myself wanting to create an S3 bucket with public read access so I can serve files out of it. And every time I need to do that I find something has changed and my old recipe doesn't work any more and I have to figure it out again from scratch!
sylens · 2h ago
The thing to keep in mind with the "Block Public Access" setting is that it's a redundancy built in to save people from making really big mistakes.
Even if you have a terrible and permissive bucket policy or ACLs (legacy but still around) configured for the S3 bucket, if you have Block Public Access turned on - it won't matter. It still won't allow public access to the objects within.
If you turn it off but you have a well scoped and ironclad bucket policy - you're still good! The bucket policy will dictate who, if anyone, has access. Of course, you have to make sure nobody inadvertently modifies that bucket policy over time, or adds an IAM role with access, or modifies the trust policy for an existing IAM role that has access, and so on.
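For anyone wrestling with it today, the two knobs look roughly like this - a boto3 sketch, with a made-up bucket name, assuming account-level Block Public Access isn't also switched on:

    import json
    import boto3

    s3 = boto3.client("s3")
    bucket = "my-public-assets-example"  # hypothetical bucket name

    # Relax Block Public Access just enough to allow a public bucket policy,
    # while keeping legacy ACLs locked down.
    s3.put_public_access_block(
        Bucket=bucket,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": False,
            "RestrictPublicBuckets": False,
        },
    )

    # The bucket policy is what actually grants anonymous read on objects.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "PublicRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }],
    }
    s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))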
simonw · 54m ago
I think this is the key to why I find it confusing: I need a very clear diagram showing which rules override which other rules.
andrewmcwatters · 2h ago
This sort of thing drives me nuts in interviews, when people are like, are you familiar with such-and-such technology?
Yeah, what month?
tester756 · 7m ago
If you're aware of changes, then explain that there were changes over time, that's it
crinkly · 2h ago
I just stick CloudFront in front of those buckets. You don't need to expose the bucket at all then and can point it at a canonical hostname in your DNS.
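The gist of the wiring, for anyone curious - a rough boto3 sketch where the OAC name, bucket, account ID and distribution ARN are all placeholders; the bucket itself stays private and only CloudFront's service principal may read it:

    import json
    import boto3

    cf = boto3.client("cloudfront")
    s3 = boto3.client("s3")

    # Origin Access Control lets the distribution sign its requests to S3.
    cf.create_origin_access_control(
        OriginAccessControlConfig={
            "Name": "example-oac",
            "OriginAccessControlOriginType": "s3",
            "SigningBehavior": "always",
            "SigningProtocol": "sigv4",
        }
    )

    # The bucket policy trusts only CloudFront, scoped to one distribution.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "cloudfront.amazonaws.com"},
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-bucket/*",
            "Condition": {"StringEquals": {
                "AWS:SourceArn": "arn:aws:cloudfront::111122223333:distribution/EDFDVBD6EXAMPLE"
            }},
        }],
    }
    s3.put_bucket_policy(Bucket="example-bucket", Policy=json.dumps(policy))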
herpderperator · 50m ago
For the sake of understanding, can you explain why putting CloudFront in front of the buckets helps?
hnlmorg · 2h ago
That’s definitely the “correct” way of doing things if you’re writing infra professionally. But I do also get that more casual users might prefer not to incur the additional costs or complexity of having CloudFront in front. Though at that point, one could reasonably ask if S3 is the right choice for casual users.
tayo42 · 1m ago
>S3 is the right choice for casual users.
It's so simple for storing and serving a static website.
Are there good and cheap alternatives?
gchamonlive · 1h ago
S3 + cloudfront is also incredibly popular so you can just find recipes for automating that in any technology you want, Terraform, ansible, plain bash scripts, Cloudformation (god forbid)
gigatexal · 1h ago
Yeah holy crap why is cloud formation so terrible?
SteveNuts · 1h ago
Last time I tried to use CF, the third party IAC tools were faster to release new features than the functionality of CF itself. (Like Terraform would support some S3 bucket feature when creating a bucket, but CF did not).
I'm not sure if that's changed recently, I've stopped using it.
gchamonlive · 40m ago
It's designed to be a declarative language, but then you have to do all sorts of filters and maps in any group of resources, and suddenly you are programming in YAML with both hands tied behind your back.
crinkly · 1h ago
It's actually incredibly cheap. I think our software distribution costs, in the account I run, are around $2.00 a month. That's pushing out several thousand MSI packages a day.
SOLAR_FIELDS · 3h ago
I honestly don't mind that you have to jump through hurdles to make your bucket publicly available and that it's annoying. That to me seems like a feature, not a bug.
dghlsakjg · 3h ago
I think the OP's objection is not that hurdles exist but that they move them every time you try to run the track.
simonw · 3h ago
Sure... but last time I needed to jump through those hurdles I lost nearly an hour to them!
I'm still not sure I know how to do it if I need to again.
general1726 · 58m ago
I think there are more of us who have kind of degenerated from doing it the AWS way - API Gateway, serverless Lambdas, messing around with IAM roles until it works, ... - to: give me an EC2 / LightSail VPS instance, maybe an S3 bucket, let's set the domain up through Route53, and go away with the rest of your orchestrion, AWS.
calmbonsai · 52m ago
There are entire industries that have largely devolved their clouds, primarily for footprint flexibility (not all AWS services are in all regions) and billing consistency.
JCM9 · 39m ago
Some good stuff here. I wish AWS would just focus on these boring, but ultimately important, things that they’re good at instead of all the current distractions trying to play catch up on “AI.” AWS leadership missed the boat there big time, but that’s OK.
Ultimately AWS doesn’t have the right leadership or talent to be good at GenAI, but they do (or at least used to) have decent core engineers. I’d like to see them get back to basics and focus there. Right now leadership seems panicked about GenAI and is just throwing random stuff at the wall desperately trying to get something to stick. That's really annoying to customers.
Everything you know is wrong.
Weird Al. https://www.youtube.com/watch?v=W8tRDv9fZ_c
Firesign Theatre. https://www.youtube.com/watch?v=dAcHfymgh4Y
You know what's still stupid? That if you have an S3 bucket in the same region as your VPC, you will get billed on your NAT Gateway to send data out to the public internet and right back into the same datacenter. There is simply no reason not to make that behavior opt-out rather than opt-in (via a VPC endpoint), beyond AWS profiting off people's lack of knowledge in this realm. The number of people who would want the current opt-in behavior is... if not zero, infinitesimally small.
solatic · 2h ago
It's a design that is secure by default. If you have no NAT gateway and no VPC Gateway Endpoint for S3 (and no other means of Internet egress) then workloads cannot access S3. Networking should be closed by default, and it is. If the user sets up things they don't understand (like NAT gateways), that's on them. Managed NAT gateways are not the only option for Internet egress and users are responsible for the networks they build on top of AWS's primitives (and yes, it is indeed important to remember that they are primitives, this is an IaaS, not a PaaS).
bilalq · 2h ago
Fine for when you have no NAT gateway and have a subnet with truly no egress allowed. But if you're adding a NAT gateway, it's crazy that you need to set up the gateway endpoint for S3/DDB separately. And even crazier that you have to pay for private links per AWS service endpoint.
solatic · 41m ago
There's very real differences between NAT gateways and VPC Gateway Endpoints.
NAT gateways are not purely hands-off: you can attach additional IP addresses to a NAT gateway to help it scale to more instances behind it, which is a fundamental part of how NAT works, because of the limit on the number of ports that can be opened through a single IP address. A VPC Gateway Endpoint doesn't use up ports or IP addresses attached to a NAT gateway at all. And what about metering? You pay per GB for traffic passing through the NAT gateway, but presumably not for traffic to an implicit built-in S3 gateway - so do you expect AWS to show you different meters for billed and non-billed traffic, while performance still depends on the sum total of the traffic (S3 plus Internet egress) passing through it? How is that not confusing?
It's also worth noting that not all NAT gateways are used for Internet egress; there are many enterprise networks with nested layers of private networks where NAT gateways help deal with overlapping private IP CIDR ranges. In such cases, having some kind of implicit built-in S3 gateway violates assumptions about how network traffic is controlled and routed, since the assumption is that the traffic stays completely private. So even if it were supported, it would need to be disabled by default (for secure defaults), and you're right back at the equivalent of the situation you have today, where the VPC Gateway Endpoint is a separate resource to be configured.
Not to mention that VPC Gateway Endpoints allow you to define policy on the gateway describing what may pass through, e.g. permitting read-only traffic through the endpoint but not writes. Not sure how you expect that to work with NAT gateways. This is something that AWS and Azure have very similar implementations for that work really well, whereas GCP only permits configuring such controls at the Organization level (!)
They are just completely different networking tools for completely different purposes. I expect closed-by-default secure defaults. I expect AWS to expose the power of different networking implements to me because these are low-level building blocks. Because they are low-level building blocks, I expect for there to be footguns and for the user to be held responsible for correct configuration.
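For reference, that read-only control is just an endpoint policy on the gateway - a boto3 sketch with a made-up endpoint ID and a deliberately minimal policy:

    import json
    import boto3

    ec2 = boto3.client("ec2")

    # Allow reads through the endpoint, but no writes or deletes.
    read_only_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": "*",
        }],
    }
    ec2.modify_vpc_endpoint(
        VpcEndpointId="vpce-0123456789abcdef0",  # placeholder
        PolicyDocument=json.dumps(read_only_policy),
    )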
otterley · 2h ago
This is the intended use case for S3 VPC Gateway Endpoints, which are free of charge.
https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpo...
(Disclaimer: I work for AWS, opinions are my own.)
darkwater · 2h ago
I think they know it. They are complaining it's not enabled by default (and so do I).
hylaride · 22m ago
As others have pointed out, this is by design. If VPCs have access to AWS resources (such as S3, DynamoDB, etc), an otherwise locked down VPC can still have data leaks to those services, including to other AWS accounts.
It's a convenience VS security argument, though the documentation could be better (including via AWS recommended settings if it sees you using S3).
otterley · 2h ago
AWS VPCs are secure by default, which means no traffic traverses their boundaries unless you intentionally enable it.
There are many IaC libraries, including the standard CloudFormation VPC template and CDK VPC class, that can create them automatically if you so choose. I suspect the same is also true of commonly-used Terraform templates.
SOLAR_FIELDS · 2h ago
The problem is that the default behavior for this is opt-in, rather than opt-out. No one prefers opt-in. So why is it opt-in?
This has been a common gotcha for over a decade now: https://www.lastweekinaws.com/blog/the-aws-managed-nat-gatew...
icedchai · 21m ago
If it were opt-out someone would accidentally leave it on and eventually realize that entire systems had been accidentally "backed up" and exfiltrated to S3.
SOLAR_FIELDS · 14m ago
What? The same is possible whether it's opt-in or opt-out. It's just that if you have the gateway as opt-out you wouldn't also have this problem AND a massive AWS bill. You would just have this problem.
otterley · 1h ago
AWS VPCs are secure by default, which means no traffic traverses their boundaries unless you intentionally enable it.
SOLAR_FIELDS · 23m ago
"The door is locked, so instead of suggesting to the end user that they should unlock the door with this key that we know how to give the end user deterministically, we instead tell them to drive across town and back on our toll roads and collect money from it"
Your job depends upon you misunderstanding the problem.
afandian · 3h ago
Having experienced the joy of setting up VPC, subnets and PrivateLink endpoints the whole thing just seems absurd.
They spent the effort of branding private VPC endpoints "PrivateLink". Maybe it took some engineering effort on their part, but it should be the default out of the box, and an entirely unremarkable feature.
In fact, I think if you have private subnets, the only way to use S3 etc is Private Link (correct me if I'm wrong).
It's just baffling.
time0ut · 2h ago
You can provision gateway endpoints for S3 and DynamoDB. They are free and considered best practice. They are opt-in though, but easy to enable.
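Enabling one is a single call per VPC - a boto3 sketch where the VPC ID, region and route table ID are placeholders:

    import boto3

    ec2 = boto3.client("ec2")

    # Gateway endpoints attach to route tables; no ENIs, no hourly charge.
    ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId="vpc-0123456789abcdef0",
        ServiceName="com.amazonaws.us-east-1.s3",  # adjust the region
        RouteTableIds=["rtb-0123456789abcdef0"],
    )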
dmart · 3h ago
VPC endpoints in general should be free and enabled by default. That you need to pay extra to reach AWS' own API endpoints from your VPC feels egregious.
otterley · 2h ago
Gateway endpoints are free. Network endpoints (which are basically AWS-managed ENIs that can tunnel through VPC boundaries) are not free.
S3 can use either, and we recommend establishing VPC Gateway endpoints by default whenever you need S3 access.
(Disclaimer: I work for AWS, opinions are my own.)
Hikikomori · 2h ago
Why don't you have gateway endpoints for all your APIs?
torginus · 30m ago
If you had an ALB inside the VPC that routed the requests to something that lives inside the VPC, which called the AWS PutObject api on the bucket, would that still be the case?
tux3 · 3h ago
That is price segmentation. People who are price insensitive will not invest the time to fix it
People who are price sensitive probably shouldn't be on AWS - but they usually have to be for unrelated reasons, and they will work to reduce their bill.
nbngeorcjhe · 3h ago
> People who are price insensitive will not invest the time to fix it
This just sounds like a polite way of saying "we're taking peoples' money in exchange for nothing of value, and we can get away with it because they don't know any better".
robertlagrant · 2h ago
It's more like: we made loads of stuff super cheap but here's where we make some money because it scales with use.
amenghra · 3h ago
Price segmentation happens all the time in pretty much every industry.
hinkley · 1h ago
There’s an entire Pandora’s box of shitty things that happen in pretty much every industry. I don’t think you want to use that defense.
happytoexplain · 3h ago
>People who are price insensitive will not invest the time to fix it
Hideous.
kbolino · 3h ago
The problem is that VPC endpoints aren't free.
They should be, of course, at least when the destination is an AWS service in the same region.
[edit: I'm speaking about interface endpoints, but S3 and DynamoDB can use gateway endpoints, which are free to the same region]
otterley · 2h ago
Gateway endpoints are free. Network endpoints (which are basically AWS-managed ENIs that can tunnel through VPC boundaries) are not free.
S3 can use either, and we recommend establishing VPC Gateway endpoints by default whenever you need S3 access.
(Disclaimer: I work for AWS, opinions are my own.)
JoshTriplett · 45m ago
That's fascinating! I hadn't found that in the documentation; everything seems to steer people towards PrivateLink, not gateway endpoints.
Would you recommend using VPC Gateway even on a public VPC that has an Internet gateway (note: not a NAT gateway)? Or only on a private VPC or one with a NAT gateway?
otterley · 38m ago
I recommend S3 Gateways for all VPCs that need to access S3, even those that already have routes to the Internet. Plus they eliminate the need for NAT Gateway traversal for requests that originate from private subnets.
kbolino · 2h ago
Fair point, and valid for S3 (the topic at hand) and DynamoDB.
Other AWS services, though, don't support gateway endpoints.
paulddraper · 2h ago
Well yeah, that's the point... why route through the public internet?
kbolino · 2h ago
I doubt the traffic ever actually leaves AWS. Assuming it does make it all the way out to their edge routers, the destination ASN will still be one of their own. Not that the pricing will reflect this, of course.
The other problem with (interface) VPC endpoints is that they eat up IP addresses. Every service/region permutation needs a separate IP address drawn from your subnets. Immaterial if you're using IPv6, but can be quite limiting if you're using IPv4.
immibis · 2h ago
A company making revenue is not stupid.
chisleu · 3h ago
> You don’t have to randomize the first part of your object keys to ensure they get spread around and avoid hotspots.
As of when? According to internal support, this is still required as of 1.5 years ago.
laurent_du · 55m ago
He's not talking about the prefix, just the beginning of the object key.
causal · 17m ago
This is super helpful. I would read a yearly summary like this.
aaronblohowiak · 3h ago
>VPC peering used to be annoying; now there are better options like Transit Gateway, VPC sharing between accounts, resource sharing between accounts, and Cloud WAN.
TGW is... twice as expensive as vpc peering?
klysm · 1h ago
VPC sharing is the sleeper here. You can do cross account networking all in the same VPC and skip all the expensive stuff.
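Under the hood it's AWS RAM sharing the subnets into the participant accounts - a boto3 sketch, with placeholder ARN and account IDs:

    import boto3

    ram = boto3.client("ram")

    # Share a subnet from the VPC-owning account with a participant account.
    ram.create_resource_share(
        name="shared-subnets-example",
        resourceArns=[
            "arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0123456789abcdef0"
        ],
        principals=["444455556666"],    # participant account ID
        allowExternalPrincipals=False,  # keep it inside the organization
    )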
aaronblohowiak · 1h ago
as long as your VPCs aren't too big, yea.
alFReD-NSH · 3h ago
And vpc sharing is free. Cost and architecture are tied.
Hikikomori · 3h ago
More than twice, since same-AZ traffic is free with peering. But if you're big enough you can get better deals on cost.
But unlike peering TGW traffic flows through an additional compute layer so it has additional cost.
topher200 · 1h ago
I have a preempt-able workload for which I could use Spot instances or Savings Plans.
Does anyone have experience running Spot in 2025? If you were to start over, would you keep using Spot?
- I observe with pricing that Spot is cheaper
- I am running on three different architectures, which should limit Spot unavailability
- I've been running about 50 Spot EC2 instances for a month without issue. I'm debating turning it on for many more instances
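For context, the request side is the easy part - a rough boto3 sketch with placeholder AMI and instance type; the real work is handling the two-minute interruption notice and rescheduling:

    import boto3

    ec2 = boto3.client("ec2")

    # A plain launch, flipped to the Spot market.
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder AMI
        InstanceType="m7g.large",          # placeholder type
        MinCount=1,
        MaxCount=1,
        InstanceMarketOptions={
            "MarketType": "spot",
            "SpotOptions": {
                "SpotInstanceType": "one-time",
                "InstanceInterruptionBehavior": "terminate",
            },
        },
    )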
stevejb · 1h ago
I just saw Weird Al in concert, and one of my favorite songs of his is "Everything You Know is Wrong." This is the AWS version of that song! Nice work Corey!
havefunbesafe · 1h ago
Weird AL or Weird A.I.?
gurjeet · 3h ago
It would've been nice if each of those claims in the article also linked to either the relevant announcement or to the documentation. If I'm interested in any of these headline items, I'd like to learn more.
TheP1000 · 2h ago
API gateway timeout increase has been nice.
coredog64 · 1h ago
It was always there but it required much more activity to get it done (document your use case & traffic levels and then work with your TAM to get the limit changed).
bob1029 · 3h ago
> Glacier restores are also no longer painfully slow.
Wouldn't this always depend on the length of the queue to access the robotic tape library? Once your tape is loaded it should move really quickly:
https://www.ibm.com/docs/en/ts4500-tape-library?topic=perfor...
> Once upon a time Glacier was its own service that had nothing to do with S3. If you look closely (hi, billing data!) you can see vestiges of how this used to be, before the S3 team absorbed it as a series of storage classes.
Your assumption holds if they still use tape. But this paragraph hints at it not being tape anymore. The eternal battle between tape versus drive backup takes another turn.
bob1029 · 1h ago
I am also assuming that Amazon intends for the Deep Archive tier to be a profitable offering. At $0.00099/GB-month, I don't see how it could be anything other than tape.
simonw · 52m ago
I wonder if it's where old S3 hard drives go to die? Presumably AWS have the world's single largest collection of used storage devices - if you RAID them up you can probably get reliable performance out of them for Glacier?
lijok · 29m ago
You don’t RAID old drives, as it creates cascading failures: recovering from a failed drive adds major wear to the other drives.
bob1029 · 38m ago
I still don't know if it's possible to make it profitable with old drives in this kind of arrangement, especially if we intend to hit their crazy durability figures. The cost of keeping drives spinning is low, but at these prices it's a double-digit percentage of margin. You can't leave drives unpowered in a warehouse for years on end and say you have 11+ nines of durability.
hinkley · 9m ago
Unpowered in a warehouse is a huge latency problem.
For storage especially, we now build enough redundancy into systems that we don't have to jump on every fault. That reduces the chance of human error when trying to address it, and avoids pushing the hardware harder during recovery (resilvering, catching up in a distributed consensus system, etc.).
When the entire box gets taken out of the rack due to hitting max faults, then you can piece out the machine and recycle parts that are still good.
You could in theory ship them all off to the backend of nowhere, but it seems that Glacier is in all the places where AWS data centers are, so it's not that. But Glacier being durable storage, with a low expectation of data out versus data in, they could be - and probably are - cutting the aggregate bandwidth to the bone.
How good do your power backups have to be to power a pure Glacier server room? Can you use much cheaper in-rack switches? Can you use old in-rack switches from the m5i era?
Also most of the use cases they mention involve linear reads, which has its own recipe book for optimization. Including caching just enough of each file on fast media to hide the slow lookup time for the rest of the stream.
Little's Law would absolutely kill you in any other context but we are linear write, orders of magnitude fewer reads here. You have hardware sitting around waiting for a request. "Orders of magnitude" is the space where interesting solutions can live.
Is tape even cost competitive anymore? The market would be tiny.
scubbo · 3h ago
I've had two people tell me in the last week that SQS doesn't support FIFO queues.
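They've existed since late 2016. A minimal sketch, with a made-up queue name:

    import boto3

    sqs = boto3.client("sqs")

    # FIFO queues need the .fifo suffix and the FifoQueue attribute.
    queue = sqs.create_queue(
        QueueName="orders-example.fifo",
        Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
    )

    # Ordering is guaranteed per message group.
    sqs.send_message(
        QueueUrl=queue["QueueUrl"],
        MessageBody="hello",
        MessageGroupId="customer-42",
    )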
digianarchist · 1h ago
Would love an AWS equivalent to Cloud Run but the lambda changes are welcome nonetheless.
cldcntrl · 4h ago
> You don’t have to randomize the first part of your object keys to ensure they get spread around and avoid hotspots.
Not strictly true.
rthnbgrredf · 4h ago
Elaborate.
cldcntrl · 3h ago
The whole auto-balancing thing isn't instant. If you have a burst of writes with the same key prefix, you'll get throttled.
hnlmorg · 3h ago
Not the OP but I’ve had AWS-staff recommend different prefixes even as recently as last year.
If key prefixes don’t matter much any more, then it’s a very recent change that I’ve missed.
williamdclt · 3h ago
Might just be that the AWS staff wasn't up to date on this
time0ut · 2h ago
I have had the same experience within the last 18 months. The storage team came back to me and asked me to spread my ultra high throughput write workload across 52 (A-Za-z) prefixes and then they pre-partitioned the bucket for me.
S3 will automatically do this over time now, but I think there are/were edge cases still. I definitely hit one and experienced throttling at peak load until we made the change.
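For anyone hitting the same edge case, the client-side spreading is trivial - a sketch where the hash and key layout are illustrative, not the storage team's exact prescription:

    import hashlib
    import string

    PREFIXES = string.ascii_uppercase + string.ascii_lowercase  # 52 partitions, A-Z then a-z

    def partitioned_key(key: str) -> str:
        # Deterministically pick one of the 52 single-letter prefixes.
        index = int(hashlib.md5(key.encode()).hexdigest(), 16) % len(PREFIXES)
        return f"{PREFIXES[index]}/{key}"

    # e.g. "events/2024/01/02/abc.json" -> something like "q/events/2024/01/02/abc.json"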
hnlmorg · 2h ago
That sounds like the problem we were having. Lots of writes to a prefix over a short period of time and then low activity to it after about 2 weeks.
hnlmorg · 3h ago
That’s possible but they did consult with the storage team prior to our consultation.
But I don’t know what conversations did or did not happen behind the scenes.
rthnbgrredf · 2h ago
By the way, that happens quite frequently. I regularly ask them about new AWS technologies or recent changes, and most of the time they are not aware. They usually say they will call back later after doing some research.
cldcntrl · 3h ago
That's right, same for me as of only a few months ago.
Ayesh · 3h ago
CloudFront also has 1TB of free data transfer a month under the forever-free perks.
nodesocket · 1h ago
I'll add: when doing instance-to-instance communication (in the same AZ), always use private IPs. If you use public IP routing (even in the same AZ), it's charged as regional data transfer.
Even worse, if you run self-hosted NAT instance(s), don't use an EIP attached to them. Just use an auto-assigned public IP (no EIP).
NAT instance with EIP
- AWS routes it through the public AWS network infrastructure (hairpinning).
- You get charged $0.01/GB regional data transfer, even if in the same AZ.
NAT instance with auto-assigned public IP (no EIP)
- Traffic routes through the NAT instance’s private IP, not its public IP.
- No regional data transfer fee — because all traffic stays within the private VPC network.
- auto-assigned public IP may change if the instance is shut down or re-created, so have automation to handle that. Though you should be using the network interface ID reference in your VPC routing tables.
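Roughly, the route entry that survives public-IP churn looks like this - a boto3 sketch, IDs are placeholders:

    import boto3

    ec2 = boto3.client("ec2")

    # Point the private subnet's default route at the NAT instance's ENI,
    # not at its (changeable) public IP.
    ec2.create_route(
        RouteTableId="rtb-0123456789abcdef0",
        DestinationCidrBlock="0.0.0.0/0",
        NetworkInterfaceId="eni-0123456789abcdef0",
    )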
themafia · 28m ago
> You get charged $0.01/GB regional data transfer, even if in the same AZ.
My understanding is that transfer gets charged on both sides as well. So if you own both sides you'll pay $0.02/GB.