DNS piracy blocking orders: Google, Cloudflare, and OpenDNS respond differently
170 points by DanAtC 5/11/2025, 3:26:47 PM (torrentfreak.com) 156 comments
Who would have thought that Cisco would be on the side of the good guys for once?!
As for Cloudflare, what they do is scary. The screenshot clearly shows a valid HTTPS certificate, so either they don't do DNS blocking but instead implement the block on their load-balancer side, or they mis-issue HTTPS certificates. The former is only possible when the target site is also served by Cloudflare (which leaves the question of what Cloudflare does for domains that are targeted by a court order but not using Cloudflare load balancing); the latter would be a serious breach of how HTTPS certificates should be issued.
And in the end I believe that courts need to be educated on how the Internet works. Companies should not be allowed to target DNS, they should be forced to target the actual entities doing the infringement - and if the target isn't in the scope of Western jurisdictions (that have various legal-assistance treaties), it's either tough luck (e.g. if the pirates are in Russia, China or other hostile nations) or they should get their respective government involved to use diplomatic means.
This is not an education issue. Rights holders want to use every tool in the box to add friction and barriers to piracy, courts offer pushback only when that would result in a marked loss in utility for ordinary users. They do not care about the sanctity of DNS or whatever engineer-brained ideals are being violated.
Thankfully, in this case the issue at hand is entirely unrelated to TLS, rogue CAs etc. Or even DNS record manipulation (for now)...
Cloudflare put up a 'You're blocked' page on the web server that Cloudflare is already running for its customer - the customer being the website that Cloudflare has been ordered to block (for users in certain countries).
Cloudflare's statement in that screenshot:
> Given the extraterritorial effect as well as the different global approaches to DNS-based blocking, Cloudflare has pursued legal remedies before complying with requests to block access to domains or content through the 1.1.1.1 Public DNS Resolver or identified alternate mechanisms to comply with relevant court orders. To date, Cloudflare has not blocked content through the 1.1.1.1 Public DNS Resolver.
I interpret this part of what Cloudflare said to mean that so far, every domain they've been asked to block has either been appealed successfully or was using Cloudflare's CDN, DDoS mitigation & WAF services, so they could just selectively block those visitors with HTTP 451. If they were asked to block a domain that wasn't using Cloudflare, I'm sure that would be the first instance of them having to modify the DNS response - but they would have to, or stop doing business in that jurisdiction like OpenDNS did.
Cloudflare is quite notorious for not policing the content fronted by their service, and is quite popular with less-than-legal (but still clearnet) sites.
In the example cases, they already had TLS certificates issued and were using them for the legitimate traffic of that domain as it was fronted by Cloudflare.
This is not an acceptable outcome in the court's view.
Cloudflare is a public CA. Your browser or OS trusts it implicitly. If you don’t trust Cloudflare, remove it from that list I guess.
Very important distinction here, the people being 'impacted' by this court order are end-users who decide to use Cloudflare's recursive DNS resolver (1.1.1.1 / 1.0.0.1 etc).
There's also the topic of what authoritative nameserver a domain uses. And also if a domain uses Cloudflare's WAF/CDN services to front their website.
A website can use Cloudflare's WAF/CDN without using their authoritative nameserver, and vice versa.
In this case, every domain that's been ordered to be blocked was already using Cloudflare's WAF/CDN service. So Cloudflare did the block at that level, rather than changing how Cloudflare's recursive DNS resolver responds to DNS queries.
No additional TLS certificates were issued - they already had valid certs because they're fronting the domain.
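As a concrete illustration of what that kind of block looks like to a client: the front-end keeps its valid TLS certificate and simply answers with HTTP 451 (Unavailable For Legal Reasons, RFC 7725). A minimal local sketch, using a stand-in server rather than anything Cloudflare-specific:

```python
import http.server
import threading
import urllib.error
import urllib.request

class BlockHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # 451 Unavailable For Legal Reasons (RFC 7725)
        self.send_response(451)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Blocked for legal reasons in your region.\n")

    def log_message(self, *args):
        pass  # keep the demo quiet

# Serve on an ephemeral local port in a background thread
srv = http.server.HTTPServer(("127.0.0.1", 0), BlockHandler)
threading.Thread(target=srv.serve_forever, daemon=True).start()

status = None
try:
    urllib.request.urlopen(f"http://127.0.0.1:{srv.server_port}/")
except urllib.error.HTTPError as e:
    status = e.code
srv.shutdown()
print(status)  # → 451
```

The point being: nothing about the TLS layer or DNS changes here; the block lives entirely in the HTTP response.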
Is this true for the free accounts? My understanding was that only enterprise and possibly pro accounts permitted this. I thought that people using free accounts had to point their entire zone entirely to CF to be managed only by CF. I could be wrong.
I don't use them for either, they've got too much market share for my comfort.
So in other words, instead of having the root servers provide NS records pointing at my bare-metal servers running NSD, I would have to tell the root servers - via the registrars I use - to publish the NS names/IPs of Cloudflare's DNS servers. The domains stay on the dozen registrars I use, but CF has to be authoritative for them on the free accounts; at least that's the way it was when I first played with CF, after they stopped being honeypots that I contributed to. I would say 'custom DNS', but nowadays that means 50 different things to 50 different DNS admins on HN. It's just apex NS records in the root anycast clusters.
https://developers.cloudflare.com/dns/zone-setups/partial-se...
Look at how much bullshit we tolerate from just Entrust: https://wiki.mozilla.org/CA/Entrust_Issues
For me it’s Cloudflare circumventing its transparency reporting. That’s lying. If they’re willing to lie about something like this, I wonder what else they found a technical workaround for.
There is no bypassing of certificate transparency: no additional TLS certificate was issued, as the existing one was already in use.
If Cloudflare was demanded to block a different site that did not use Cloudflare's WAF service, they would have to do something else at the recursive DNS resolver level. So far that hasn't happened, because Cloudflare is incredibly popular, especially so for less-than-legal sites.
Transparency as in their reporting, not the technical details of certificate issuance.
FTA: “Interestingly, Cloudflare maintains in its transparency report that it is not blocking content through its public DNS resolver. Instead, it points out that it uses ‘alternate mechanisms’.”
> 5. Cloudflare has never modified the intended destination of DNS responses at the request of law enforcement or another third party.
That's accurate: the DNS responses for these domains previously did, and still do, point to Cloudflare's WAF/CDN.
They haven't said anything like '...never blocked access to customer content...'.
It’s accurate in that bullshit isn’t technically a lie. If they’re willing to do this, they’re potentially willing to use their CDN to MITM DNS requests. Because after all, they’d be leaving the DNS request unmolested while doing the dirty work on their CDN.
Uhh no it’s not?
Supported TLS certs via Cloudflare: https://developers.cloudflare.com/ssl/reference/certificate-...
Those public CAs have to verify domain ownership via the methods outlined in the CA/Browser Forum's baseline requirements - none of which Cloudflare would be able to complete on behalf of the domains in question if those domains didn't use either Cloudflare's authoritative nameservers or its WAF/CDN.
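For context on what those validation methods involve: the ACME DNS-01 method (RFC 8555 §8.4), for example, requires publishing a TXT record derived from the challenge token and the account key thumbprint - something only whoever controls the zone's DNS can do. A sketch of the record computation; the token and thumbprint below are placeholder values, not real challenge material:

```python
import base64
import hashlib

def dns01_record(token: str, account_thumbprint: str) -> str:
    # Key authorization = token "." thumbprint (RFC 8555 §8.1)
    key_auth = f"{token}.{account_thumbprint}"
    digest = hashlib.sha256(key_auth.encode("ascii")).digest()
    # TXT record value: unpadded base64url of the SHA-256 digest
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# Placeholder inputs for illustration only
value = dns01_record("evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA",
                     "9jg46WB3rR_AHD-EBXdN7cBkH1WOu0tA3M9fm21mqTI")

# The CA expects this value in a TXT record at _acme-challenge.<domain>
print(len(value))  # 43 chars of base64url for a 32-byte SHA-256
```

Similarly, HTTP-01 requires serving a token from the domain's web root, which a CDN fronting the domain can trivially do.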
Now, if Cloudflare were a public CA, they would still have to behave correctly and follow the baseline requirements otherwise they would be distrusted by clients.
Note that Cloudflare has a certificate authority called 'Origin CA' (https://blog.cloudflare.com/cloudflare-ca-encryption-origin/), though it is not publicly trusted. It doesn't need to be: it's for website operators to install on their own web servers before they get fronted by Cloudflare, rather than just running a self-signed cert or serving plaintext.
Trusted root certs:
Apple: https://support.apple.com/en-gb/121672
Mozilla: https://ccadb.my.salesforce-sites.com/mozilla/CAInformationR...
Microsoft: https://ccadb.my.salesforce-sites.com/microsoft/IncludedCACe...
Chrome: https://chromium.googlesource.com/chromium/src/+/main/net/da...
It doesn't look like they are a sponsor of Let's Encrypt though, so I doubt they have any kind of special arrangement with them.
The customer grants Cloudflare a TLS certificate for their site either by uploading a cert manually, or letting Cloudflare issue a cert via the ACME protocol. They use that to present the site to the world. Cloudflare connects back to the origin site, and the origin either uses HTTP (bad! but possible), HTTPS with a self signed cert, HTTPS with another publicly trusted cert, or a cert that Cloudflare issues with their own (not publicly trusted) CA called Origin CA.
As the visitor, there's no big sign saying 'Cloudflare can read this content as well as the origin website'. They're trusted not to be malicious, sure, but there's a massive risk in using any sort of service like this that you don't control.
One of those massive risks turned reality with Cloudbleed in 2016/2017: https://en.wikipedia.org/wiki/Cloudbleed
https://project-zero.issues.chromium.org/issues/42450151
https://blog.cloudflare.com/incident-report-on-memory-leak-c...
https://blog.cloudflare.com/quantifying-the-impact-of-cloudb...
They can but they're not allowed to, that's the entire point.
> 4.17. Extended DNS Error Code 16 - Censored
> The server is unable to respond to the request because the domain is on a blocklist due to an external requirement imposed by an entity other than the operator of the server resolving or forwarding the query. Note that how the imposed policy is applied is irrelevant (in-band DNS filtering, court order, etc.).
Which would be relevant at least for Google DNS's "Query refused". Although I guess it's possible they do support it and Windows/Chromium just don't surface it...
[1] https://www.rfc-editor.org/rfc/rfc8914.html#section-4.17
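For the curious, the Extended DNS Error option has a simple wire format (RFC 8914: option code 15, a 16-bit INFO-CODE, then optional UTF-8 text). A stdlib-only sketch that hand-encodes an EDE with INFO-CODE 16 (Censored) and decodes it back; real resolvers carry this inside the EDNS OPT record:

```python
import struct

OPTION_CODE_EDE = 15  # EDNS option code for Extended DNS Errors (RFC 8914)
EDE_CENSORED = 16     # INFO-CODE 16: Censored

def encode_ede(info_code: int, extra_text: str) -> bytes:
    # OPTION-DATA: 2-byte INFO-CODE followed by optional UTF-8 EXTRA-TEXT
    data = struct.pack("!H", info_code) + extra_text.encode("utf-8")
    # Prefix with OPTION-CODE and OPTION-LENGTH
    return struct.pack("!HH", OPTION_CODE_EDE, len(data)) + data

def decode_ede(wire: bytes):
    code, length = struct.unpack("!HH", wire[:4])
    assert code == OPTION_CODE_EDE
    info_code = struct.unpack("!H", wire[4:6])[0]
    return info_code, wire[6:4 + length].decode("utf-8")

wire = encode_ede(EDE_CENSORED, "blocked by court order")
print(decode_ede(wire))  # → (16, 'blocked by court order')
```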
Most of the eggs are in one basket. Same as trying to get individual ISPs to censor something: reaching out to each of the hundreds of registrars is time-consuming and prone to being ignored depending on the country. If on the other hand a government can get cooperation from even 3 of the biggest "free" resolvers, then it's a big win for them. It's also easier to monitor people when they choose to use corporate resolvers like Cloudflare, Google, OpenDNS, etc...
That advice made sense in the plain-text HTTP era, but it's no longer viable; attempting to do that nowadays would only lead to an "invalid certificate" error page. The only ones who can make that work are the site itself, or a CDN in front of it (which, as others have noted, often means Cloudflare can do it, but not other DNS providers like Google).
It is probably quite a bit slower, though, needing round trips at each stage of the resolution - which is also likely a reason these public resolvers get so much use (latency improvement via caching).
The average load time for a website is 2.5 seconds. The added load time from running your own recursive resolver, which only applies the first time a site is loaded, would be around 50ms - a 2% increase in load time.
DNS resolution is not a major part of a typical website's load time. If you want to speed things up, run a local proxy with locally cached versions of all the popular web frameworks and fonts, and have it be constantly repopulated by a script running in the background. That will save you much more than 2% on first load.
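The arithmetic spelled out (the 2.5s and 50ms figures are the estimates from this comment, not measurements):

```python
page_load_ms = 2500      # average page load time cited above
resolver_added_ms = 50   # estimated one-time cost of full recursive resolution

overhead = resolver_added_ms / page_load_ms
print(f"{overhead:.0%}")  # → 2%
```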
Recursively resolve bbc.com: 18ms https://pastebin.com/d94f1Z7P
Recursively resolve ethz.ch: 17ms https://pastebin.com/x6jSHgDn
Recursively resolve admin.ch: 39ms https://pastebin.com/DUTg8Rit
Page load in Firefox:
bbc.com: DOMContentLoaded ~40ms, page loaded ~300ms
reuters.com: DOMContentLoaded ~200ms, page loaded ~300ms
google.com: DOMContentLoaded ~160ms, page loaded ~290ms
So it's quite reasonable to do full recursive resolution, and you'll still benefit from caching after the first time it's loaded. One other idea I had but never looked into it was instead of throwing out entries after TTL expiry to just refresh it and keep it cached, no idea if BIND/Unbound can do that but you can probably build something with https://github.com/hickory-dns/hickory-dns to achieve that.
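On the refresh-instead-of-evict idea: Unbound does ship related knobs, though whether they match the exact behavior described is worth checking against the docs. A sketch of the relevant unbound.conf options:

```
server:
    prefetch: yes             # re-fetch popular entries shortly before TTL expiry
    serve-expired: yes        # answer from stale cache while refreshing in the background
    serve-expired-ttl: 86400  # how long (seconds) expired entries may still be served
```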
Google PageSpeed (https://pagespeed.web.dev/analysis/https-bbc-com/yxcpaqmphq?...) uses two other terms: First Contentful Paint, the first point in the page-load timeline where the user can see anything on the screen, and Largest Contentful Paint, the render time of the largest image, text block, or video visible in the viewport, relative to when the user first navigated to the page. For bbc.com, those sit around the 1-second mark.
Another metric is Time to First Byte: the time between the request for a resource and when the first byte of the response begins to arrive. For bbc.com that is 300ms.
My experience does not align with this. My Unbound instances cache only what I am requesting, and I have full control over the cache memory allocation, min-ttl, zero-TTL serving and re-fetching, cron jobs that look up my most common requests hourly, etc... I do not have to share memory with anyone outside of my home. Just about anything I request on a regular basis resolves in microseconds and always shows as 0 milliseconds in dig. I've run performance tests against Unbound and all the major recursive DNS providers, and my setup always wins for anything I use more than a few times a month, or more than a dozen times a year.
For the cases where I am requesting a domain for the first time, the delay is a tiny fraction of the overall page loading of the site, as belorn mentioned. I keep query response logs, which also record the response time for every DNS server I have queried. I also use those logs to build a table of domains whose NS and A records I look up hourly, to build the infrastructure cache in addition to the resource-record cache.
Now, where there would be latency is if I had to enable local Unbound -> DoT over a Tinc VPN -> rented-server Unbound -> root servers. That would only occur if my ISP decided to block anyone talking to the root servers directly, and my DoT setup would only be in place while my legal team got ready to roast my ISP and I started putting up billboards. That would of course be a waste of time and money when I could just get the IPs of censored sites from a cron job running on multiple VMs and shove them into my hosts file. This could even be a public contribution in a git repo, automated on everyone's machines.
I used to do that, but it caused some odd issues at my former ISP, which I suspect were due to connection tracking state table exhaustion on their CGNAT box; running your own recursive server means a lot of UDP connections, and unlike with TCP, there's no well-defined point at which the connection tracking state can be released, which can lead to it accumulating. Making unbound use DoT to cloudflare made things much more stable (since DoT uses TCP, the connection tracking state can be released immediately when each connection is closed).
Needless to say, the bar is way lower. Anybody willing to pirate stuff can easily change their dns to any public dns service and access any website. You don’t even need a vpn.
1. I don't think anyone has been prosecuted for accessing/using pirated materials. The people who have been prosecuted for torrenting were liable because torrent clients also upload, thereby making them go beyond merely accessing/using.
2. Claiming that those sites (i.e. live soccer streams) are for "learning" is a stretch. Moreover, no such "learning" exemption exists, at least in the US. The closest you have is fair use, which weighs four factors. Educational purpose figures into one factor, but isn't a sole determinant. Photocopying textbooks wholesale is obviously illegal, even if it's for "learning".
It's not clear why this would be a relevant distinction. If the use in question is fair use then copying is permitted. Why wouldn't this be the case for the person uploading the data as well as the person downloading it? Suppose you have a physical copy of a book and your friend wants a copy of a page for a use which is indisputably fair use, so you make a copy for them and give it to them for that purpose. How is that any different?
> Claiming that those sites (ie. live soccer streams) is "learning" is a stretch.
Wouldn't that depend on what the user is actually doing with it? If you're just watching the game with your friends, presumably not. If you're doing scientific research on sporting events and you need to run the video of every sporting event in the last 10 years through a computer for your study, maybe it is.
The server operator isn't the one using it.
Is it possible for anyone to make any fair use of GameOfThrones.mp4? Presumably yes, under some set of circumstances. And then the server operator has put it there for the people who want to use it in that way.
Some people might then use it in some other way, but some people might borrow a book from the library and then use it to author an infringing derivative work. Why should that be the responsibility of the library rather than the party using it in an infringing way?
Of course, IPs are recycled across many users, and merely connecting to these BT resources is not proof of piracy. The practice of monitoring connectivity alone is not undeniable proof of piracy, so monitors have to offer fake BT pieces, then request a download to confirm data is actually being moved, and argue that this indicates intent.
Meanwhile you have to argue that the buffer content of a video player is not a download, and that there is no right to access those memory ranges on your own system.
I know it flies in the face of how the bittorrent protocol should operate, but there's a technical possibility.
Another is using a so-called "seedbox" in a safe country, or torrenting only via VPN.
Maybe I want to be better at soccer and learn by observation.
Take a look here for a good start: https://www.techradar.com/news/best-dns-server
> When OpenDNS was first ordered to block pirate sites in France, the company made a simple but drastic decision to leave the country entirely, effectively affecting all French users. Last week, it repeated this response in Belgium following a similar court order.
$ dig kernel.org @208.67.220.220

; <<>> DiG 9.18.33-1~deb12u2-Debian <<>> kernel.org @208.67.220.220
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 12644
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 2

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1410
; EDE: 16 (Censored)
;; QUESTION SECTION:
;kernel.org. IN A

;; ADDITIONAL SECTION:
kernel.org. 0 IN TXT "The OpenDNS service is currently unavailable in France and some French territories due to a court order under Article L.333-10 of the French Sport Code. See https://support.opendns.com/hc/en-us"

;; Query time: 8 msec
;; SERVER: 208.67.220.220#53(208.67.220.220) (UDP)
;; WHEN: Sun May 11 22:56:23 UTC 2025
A DNS resolver... resolves (recursively). unbound[0] would be an example.
A proxy instead only forwards to a trusted DNS server (or servers) and may cache their responses but won't do any resolution by themselves. dnsmasq[1] would be an example.
My guess is a simple proxy is less vulnerable to UDP amplification attacks (and also vastly simpler to implement and maintain).
The drawback is that you need a resolver you trust, but that might be okay if you actually do have one. E.g. some DNS server that you know is safe but is not operating in your country (you might just want to proxy it so it's closer to you, for lower latency).
[0] https://en.m.wikipedia.org/wiki/Unbound_(DNS_server)
[1] https://en.m.wikipedia.org/wiki/Dnsmasq
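A minimal dnsmasq forwarding-proxy sketch along those lines (the upstream address is just an example of "a resolver you trust"):

```
# dnsmasq as a caching forwarder: it does no recursion of its own
no-resolv            # don't read upstreams from /etc/resolv.conf
server=9.9.9.9       # forward everything to the trusted upstream resolver
cache-size=10000     # cache responses locally for lower latency
```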
https://mullvad.net/en/help/dns-over-https-and-dns-over-tls
.... On second thought, that is a bad analogy; that is more like running your own search engine. The DNS equivalent to Yandex would be 77.88.8.8
https://gist.github.com/mutin-sa/5dcbd35ee436eb629db78725810...
If they can serve your site with https normally, they can serve any content they want under it.
In some other cases I suppose they could downgrade the connection to HTTP in order to show their 451 page, but if the domain is HSTS'ed that wouldn't work either. They'd have to just black-hole the query like Google does.
"Cloudflare Reverse Proxies Are Dumping Uninitialized Memory" - https://news.ycombinator.com/item?id=13718752
The laws that apply on the internet are very desperate attempts by people with no technical knowledge to control something that can't be controlled. They work only because ways to circumvent them are not yet easy to use by the masses.
I’d argue Silicon Valley pretending there is a natural arc of digital history towards freedom and enlightenment, if we just leave everything alone, is distinctly reminiscent of 90s free-trade optimism. And like that philosophy, this one too finds its tombstone in China.
Using the word "activities" implies something different from what's really happening.
Ask the question like this: Should countries have the right to control information inside their borders? The answer to that question is no.
> Opposing this on the basis of "they'll extend it to political opposition ..." makes as much sense as opposing the arrest of criminals because "they'll extend it to political opposition ...".
If you make it less expensive to do something, you make it more likely that it happens. Incarcerating murderers and rapists is very important and is an effective deterrent against serious violent crimes, so even though creating prisons makes it easier to incarcerate political prisoners - which is bad - the thing prisons are necessary for is more important.
Blocking streaming sites isn't nearly as important and it's also less effective for its intended purpose than it is for the ulterior purpose, because users will go out of their way to bypass censorship of streaming sites whereas inconvenient political content is censored not just with respect to its content but also its existence, and then if you create a censorship apparatus it allows people to be kept in the dark as to what is even being censored. So in that case the badness of the censorship apparatus existing far exceeds its value in being able to inconvenience some minor offenders.
Do you really believe that the interests of the people are inferior to the interests of capital? Do you actually believe that the interests of each group are aligned?
In what country is there actually a "right to private, encrypted communication"? At best there's rights for "privacy", which is a pretty woolly concept, but generally don't cover copyright infringement. More to the point, unless you reject the concept of copyright entirely, you have to accept that free speech rights will have to be "violated" to enforce it.
I recognize that this is not a popular opinion, but IMHO IT SHOULD BE covered by the "secure in their papers" section in the 4th Amendment in the US, and/or with established precedent regulating encryption export as armaments, by the 2nd & the Heller decision granting the rights afforded by the 2nd to the individual
at least that's the correct interpretation of the founding document as far as I can tell. not that it matters anymore.
You'd agree with say, <insert country> mass blocking sites for their people just because the sites say something about democracy?
The number of things which have been laundered under “prevent child porn” is absurd. No, it is not a problem which warrants a global panopticon state.
As for drug sales, I’m not sure what to say if you think that’s the pitch that’s gonna land here.
Giving up civil rights for the perennial boogiemen like terrorism or CSAM never results in less terrorism or CSAM, but does erode the rights of individuals.
The goal is to establish an undemocratic method of control over and coercion of individuals and the means of communication. This has been borne out time and time again.
But, no, some censorship is acceptable and necessary. We do have to be aware of, and appropriately guard against, the other kind, though, and sometimes that means having less capacity for the acceptable kind than you would want if there were no downside attached to it.
Dropping 1,000 nukes that would destroy humanity would be a mechanism to stop drugs and pornography. But it would be the wrong tool.
My understanding is that DNS resolves a domain to an IP address. If any process prohibits that, then it's not working as designed.
Thankfully there are many resolvers that will always resolve no matter what 'legal' may throw at it. This is fundamental despite what content lies on the other side.
There will always be cat and mouse with speech and rights to access, and any protocols will be challenged. Thankfully, others will say 'no thank you' and refuse to listen to any order, legal or otherwise. And thankfully, they cannot be touched (VPNs, TOR et al).
Even the most censorship heavy countries in the world have to resort to physically shutting the internet down, because if there is a pathway, it will be found. It's just human nature.
Would moving domain registration into a public Blockchain allow for a more resilient and democratized internet?
If you had only said "more democratized" I might lean towards yes, with some caveats, but you included "resilient", and DNS is not just people's workstations and cell phones. It is used by very big and complex systems that make vast numbers of changes every second. Trying to force all of that through a blockchain would require a complete re-thinking of how blockchain and the internet work, in my opinion. I would be happy to be proven wrong. Someone could try it, but that someone would have to be a very big organization for any kind of canary test. The devil would be in the implementation details: how this monster would scale and handle a myriad of failure scenarios. People would also need to be able to troubleshoot complex misconfigurations. It would take some serious battle hardening before a production revenue-generating company would take a chance on it.
IPNS is a similar project that already exists.
No, you’d just get the protocol blocked and sanctioned.