I feel the new oniux command is doing both the right thing and the wrong thing:
- right thing: catch every network access and redirect to Tor
- wrong thing: create the user expectation that (if you remember to prepend "oniux") it'll catch every network access and redirect to Tor
It is essentially a moral hazard. What happens when you accidentally forget "oniux"? Or think you've booted up a Tails environment but it's not? Or mistake the Tor Browser window for a Firefox window? You only have to resolve a DNS name _once_ for the world to know you're interested in accessing it.
I like the idea that oniux should not only intercept gethostbyname(), but also always set standard environment variables pointing to its SOCKS proxy. That way curl can do the right thing - refuse to pass .onion names to gethostbyname() - but automatically pass them on to a proxy instead. If it's a non-Tor proxy, that proxy should also do the right thing and refuse to resolve .onion addresses. That leaves only one safe path forward: hand name resolution to whichever proxy is configured, and the only proxy that will resolve .onion addresses is the Tor proxy.
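Roughly, the convention being proposed looks like this (a sketch, assuming Tor's default SocksPort of 9050):

```shell
# "socks5h" (as opposed to plain "socks5") tells curl-style tools to
# send the hostname to the proxy unresolved, so a .onion name never
# touches gethostbyname()/the local DNS resolver.
export ALL_PROXY="socks5h://127.0.0.1:9050"

# With this set, `curl https://example.onion/` hands the name to Tor;
# with no proxy configured at all, modern curl refuses .onion lookups
# outright (per RFC 7686).
echo "$ALL_PROXY"
```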
0points · 7h ago
oniux is completely new to me, but this is not at all a new idea.
torsocks has been available doing the same thing since 2008.
captainmuon · 8h ago
I feel like most people only use Tor via the Tor browser or a socks proxy, and the developers in the ecosystem cater only to these users. But there are a bunch of other creative uses of Tor around.
A couple of years ago, I used the TransPort feature of Tor combined with an iptables rule to redirect certain applications, like a web browser, over Tor. The goal was a poor man's VPN: access some websites without your local network admin knowing about it, and without the website knowing who you are. Back then there were Java applets and Flash, and unlike other solutions, this approach hid their network requests too. Later iptables removed the feature that allowed filtering on PID and broke my workflow. I changed it to use a dedicated unix user for tor, but that broke at some point too, and I just got a commercial VPN.
Tor discouraged my use case, and I guess if you are afraid of being tracked or recognized as a returning user, then you should stick to Tor browser. But everybody has their own use cases.
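For reference, that kind of setup can be sketched like this. TransPort and DNSPort are real torrc options; the user name `torified` and the port numbers are illustrative, and the uid-owner match is what replaced the removed per-PID filtering:

```shell
# torrc fragment: accept transparently redirected TCP and DNS
#   TransPort 9040
#   DNSPort 5353

# Redirect all TCP from a dedicated user through Tor's TransPort
# (requires root). Anything run as user "torified" gets torified.
iptables -t nat -A OUTPUT -m owner --uid-owner torified \
    -p tcp --syn -j REDIRECT --to-ports 9040
# Send that user's DNS queries to Tor's DNSPort as well.
iptables -t nat -A OUTPUT -m owner --uid-owner torified \
    -p udp --dport 53 -j REDIRECT --to-ports 5353
```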
Joker_vD · 8h ago
> redirect certain applications over Tor, like a web browser
I personally use a proxy.pac file (which both Firefox and Chrome support) with roughly the following contents:
function FindProxyForURL(url, host) {
    var httpProxy = "PROXY localhost:3128";
    var onionProxy = "SOCKS5 localhost:9050";
    if (host.endsWith(".onion")) {
        return onionProxy;
    }
    var proxiedDomains = [
        "example.com",
        ...
    ];
    for (var proxied of proxiedDomains) {
        if (shExpMatch(host, proxied) || shExpMatch(host, "*." + proxied)) {
            return httpProxy;
        }
    }
    return "DIRECT";
}
The only inconvenient part is that Chrome, for some stupid reason, can't read this file from a file:// URL, so I have to host it on my localhost; oh well.
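Any local HTTP server works for hosting the file; one minimal option (port 8000 is arbitrary, and the PAC file is assumed to sit in the current directory):

```shell
# Serve the current directory over HTTP on localhost only, so Chrome
# can fetch the PAC file from an http:// URL instead of file://.
python3 -m http.server 8000 --bind 127.0.0.1 &
# Then point Chrome at it, e.g.:
#   --proxy-pac-url=http://127.0.0.1:8000/proxy.pac
```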
geocar · 8h ago
Take care with this. Some people are putting sneaky code in pages that detects whether your regular non-proxied access takes a different network path via a .onion domain. It is not clear to me what exactly they are doing with this knowledge.
loa_in_ · 6h ago
Is that anecdotal, or is there something to confirm it?
iaaan · 4h ago
Not the person you replied to, but theoretically, it's easy for me to imagine how that would work, so I'd definitely be wary of using a solution like this.
knome · 9h ago
if they're going to be arbitrarily against env vars, like CURL_HOME, CURL_SSL_BACKEND, CURL_CA_BUNDLE, or the other dozen-ish variables curl already checks, an option in .curlrc could seem reasonable.
of course, having a CURL_ALLOW_ONION env var would allow the oniux program to set it, which would be very easy and straightforward for both sides.
alternatively, oniux could itself run a proxy and set the appropriate proxying environment variable, like HTTPS_PROXY. This would have the advantage that curl wouldn't have to do anything, but would add a rather ugly bit of complication to oniux.
seeing as the ability to run and inform curl of a proxy means oniux can already bypass the onion blocking with an envvar, adding one specifically to do that is convenient for callers, and does not expose the user to parent programs controlling onion exposure any more than it already does.
at best you could argue that requiring a full proxy makes it slightly harder for naive users to accidentally expose themselves, since it raises the bar from what curl knows (the env var) to what curl has (an available proxy endpoint), but this isn't really a great excuse not to implement a CURL_ALLOW_ONION env var.
it's nice that curl blocks by default, but requiring the user to jump through hoops to unblock .onion is a bit much.
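the decision curl makes today can be sketched in a few lines (heavily simplified; the function name is illustrative, not a curl internal — and note CURL_ALLOW_ONION is a proposal, not an existing option):

```shell
# A .onion host may only be handed to a name-resolving (socks5h) proxy,
# never to the local resolver (RFC 7686: .onion must not leak to DNS).
may_resolve_locally() {
    case "$1" in
        *.onion|onion) return 1 ;;  # refuse local resolution
        *) return 0 ;;              # ordinary DNS is fine
    esac
}

may_resolve_locally example.com && echo "example.com: local DNS ok"
may_resolve_locally example.onion || echo "example.onion: proxy only"
```

today the only way through is the proxy route, e.g. `curl --proxy socks5h://127.0.0.1:9050 http://example.onion/`.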
remram · 8h ago
This doesn't really fix the problem. Curl is not the only tool to have implemented this block, many tools have, this was the point of Tor requesting this mechanism via an RFC. Is oniux going to set hundreds of environment variables to deactivate the block in all programs they know about? And cause users to send bug reports to all programs complying with their RFC that their tool doesn't yet know the workaround for?
The fix is much simpler: have oniux set $http_proxy (and drop non-tor traffic). This is the mechanism that makes the most sense and is in line with their own RFC.
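The env-var half of that is tiny. A sketch (the function name is made up; 9050 is Tor's default SocksPort, and the drop-non-tor-traffic half would still need oniux's namespace/firewall machinery):

```shell
# Export the standard proxy variables, then run the wrapped command.
# Any tool honoring http_proxy/https_proxy/ALL_PROXY gets torified
# without knowing anything oniux-specific.
torwrap() {
    env http_proxy="socks5h://127.0.0.1:9050" \
        https_proxy="socks5h://127.0.0.1:9050" \
        ALL_PROXY="socks5h://127.0.0.1:9050" \
        "$@"
}
```

Usage would be e.g. `torwrap curl https://example.onion/`.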
immibis · 9h ago
One of the things on my cool ideas list is AF_ONION. getaddrinfo should be able to translate a .onion DNS name into an AF_ONION address immediately, and then you should be able to open an AF_ONION socket to that address. Tor would instantly be compatible with every program that doesn't assume IPv4/6 (which is shockingly few, but automatic Tor support would be a good reason to fix that). Same with I2P.
Prior to that, .onion blocking in getaddrinfo would also make sense - it would apply to a large swath of apps - and could be overridden with nsswitch.conf, perhaps.
Props to Daniel for recognizing that the situation is impossible to solve in a way that pleases everyone. Some people would just change it to meet the demands of the last person who asked, without thinking deeper.