A resident of said country here. Another questionable measure by the Government to protect our mollycoddled, insufficiently resilient society.
That said, a better approach would be to limit kids under a certain age from owning smartphones with full internet access. Instead, they could have phones without internet access—dumb phones—or ones with curated/limited access.
Personally, I'm not too worried about what risqué stuff they'll see online, especially teenagers (they'll find it one way or another); it's more about the distraction smartphones cause.
Thinking back to my teenage years I'm almost certain I would have been tempted to waste too much time online when it would have been better for me to be doing homework or playing sport.
It goes without saying that smartphones are designed to be addictive, and we need to protect kids more from this addiction than from bad online content. That's not to say they should have unfettered access to extreme content; they should not.
It seems to me that having access to only filtered IP addresses would be a better solution.
This ill-considered gut reaction involving the whole community isn't a sensible decision, if for no other reason than it allows sites like Google to hoover up even more of a user's personal information.
abtinf · 9h ago
> Another questionable measure by Government to protect our mollycoddled, insufficiently-resilient society
Complains about mollycoddling.
> a better approach would be to limit
Immediately proposes new mollycoddling scheme.
hilbert42 · 9h ago
Mollycoddling kids is one thing, we've always done that to some extent. Mollycoddling adults is another matter altogether.
xboxnolifes · 8h ago
Both proposals are mollycoddling children. It just happens that one of them inconveniences adults.
strken · 8h ago
"Inconvenience" is downplaying the impact of not letting adults use incognito mode to search for things.
Yes, right now search engines are only going to blur out images and turn on safe search, but the decision to show or hide information in safe search has alarming grey areas.
Examples of things that might be hidden and which someone might want to access anonymously are services relating to sexual health, news stories involving political violence, LGBTQ content, or certain resources relating to domestic violence.
rendall · 4h ago
Also porn. Let's be honest, all of this energy expenditure is about porn.
roenxi · 3h ago
While anyone who wants to ban people looking at porn will be on side with this, the political oomph is probably more from authoritarians who are working towards a digital ID. Anyone who cares about the porn angle would be forced to admit this won't do very much. Anyone who wants to keep the wrong people out of politics would be quietly noting that this is a small but unquestionable win.
ptek · 34m ago
Hmm, will people go back to using lingerie catalogs, or start using LLM prompts?
Cartoxy · 4h ago
Is it though? We've been doing porn since forever, and porn isn't gatekept by search engines at all.
Seems like a long-term slow burn toward government tendrils, just like digital ID; even the example given came across as desperate to show any real function, contradictory even.
Then the pivot: what about the children. Small steps, and we're right back on the slippery slope.
GoblinSlayer · 3h ago
People search for porn on Google? Because Google is the internet itself?
falcor84 · 2h ago
Because it's easier to put your query into the address bar than to open a dedicated search page, and most people use Chrome with the default being Google search.
XorNot · 1h ago
Absolutely no one searches for porn on Google unless they don't know the URL of an aggregator.
And that one kid will tell everyone who doesn't.
> That said, a better approach would be to limit kids under certain age from owning smartphones with full internet access. Instead, they could have a phone without internet access—dumb phones—or ones with curated/limited access.
This wouldn't allow them to watch gambling ads or enjoy Murdoch outlets.
hilbert42 · 6h ago
Oh, the cynicism of some people. :-)
Yes, that empire exported itself to where it would have the greatest effect—cause the most damage.
SlowTao · 4h ago
> Thinking back to my teenage years I'm almost certain I would have been tempted to waste too much time online when it would have been better for me to be doing homework or playing sport.
That is true. I spent my time coding a 2D game engine on a 486; it eventually went nowhere, but it was still cool to do. But if I'd had the internet then, all that energy would have been put into pointless internet stuff.
kolinko · 3h ago
I had internet access from the age of 13, although it was the internet of 1996, so it was way more basic.
And for me it was a place to explore my passions way better than any library in a small city in Poland would allow.
And sure - also a ton of time on internet games / MUDs, chatrooms etc.
And the internet allowed me to publish my programs, written in Delphi, from when I was 13-14, and to meet other programmers on Usenet.
On the other hand, if not for the internet, I might have socialised way more IRL, probably doing things that were way less intellectually developing (but more social).
It just hit me that I need to ask one of my friends from that time what they did in their spare time, because I honestly have no idea.
johnisgood · 2h ago
I had the Internet when I was a kid and I ended up being a software engineer with useful skills in many different areas.
You are wrong to blame the Internet (or today LLMs). Do not blame the tool.
Sure, I consumed sexual content when I was a kid, but I did a fuckton of coding of websites (before JavaScript caught up, but in JavaScript) and modding of games. I met lots of interesting and smart people on IRC with mutual hobbies and so forth. I did play violent games too, just FYI, when I was not making mods for them.
pferde · 2h ago
Could the difference between your experience and that of today's teenagers be in the fact that in your time, there were no online content farms hyperoptimized for maximum addictiveness, after their owners invested millions (if not billions) into making them so?
ta12653421 · 54m ago
Back then the web (and prior networks like Gopher and Usenet) was used and filled mainly by professionals working in one field or another; and if you were online, you'd already demonstrated a basic tech understanding, since it wasn't as convenient as today.
Sure, porn existed early on; but "entertaining web content" just didn't exist the way it does today.
johnisgood · 34m ago
Yes, especially IRC. What people call today "gatekeeping" is exactly what gave IRC networks value.
johnisgood · 2h ago
Yes, I believe so. The only thing that was addictive to me was coding. It really was addictive. I did not leave the house all summer when I was >13 because I was busy coding. But then again, this "addiction" helped me a lot in today's world. That said, I am left with serious impostor syndrome, and my social skills aren't the best, which are also required of a programmer in today's world. :/
theshackleford · 3h ago
I had the internet as a youth, and it is pretty much entirely responsible for me having been able to build a social network and social capabilities, build the career I have today and ultimately break out of poverty.
Tade0 · 4h ago
My take: just as we use an allowance to introduce children to the concept of money, parents could use a data allowance to introduce children to the concept of the internet.
The worst content out there is typically data-heavy; the best, not necessarily, as it can well be plain text in most cases.
closewith · 4h ago
That's a naïve view of the internet; many of the worst experiences children have online are in text, via chat.
Tade0 · 1h ago
Pretty sure a picture is still worth a thousand words. Also, text is something you can prepare for, and police if need be.
Random visual internet content? Too many possibilities, too large a surface area to cover.
florkbork · 1h ago
I find I am broadly supportive of these laws (the Online Safety Amendment (Social Media Minimum Age) Bill 2024), even if this specific regulation is a bit of pearl-clutching wowserism.
You get 30,000 civil penalty units if you are a scumbag social media network and you harvest someone's government ID.
You get 30,000 civil penalty units if you don't try to keep young kids away from the toxic cesspool that is your service, filled with bots and boomers raving about climate change and reposting Sky News.
This absolutely stuffs those businesses who prey on their users, at least for the formative years.
And when I think about it like that?
I have no problem with it, nor the fact it's a pain to implement.
kypro · 1h ago
100% agree.
The framing that explicit material is bad for kids, while probably true, is beside the point. Lots of things a parent could expose a child to could be bad, but it's always been seen as up to the parent to decide.
What the government should do is ensure that parents have the tools to raise their kids in the way they feel is appropriate. For example, they could require device manufacturers to implement child modes, or require that ISPs provide tools for content moderation, which would put parents in control. This instead places the state in the parental role over its entire citizenry.
We see this in the UK a lot too. This idea that parents can't be trusted to be good parents and that people can't be trusted with their own freedom, so we need the state to look after us, seems to be an increasingly popular view. I despise it, but for whatever reason that seems to be the trend in the West today – people want the state to take on a parental role in their lives. Perhaps aging demographics have something to do with it.
dzhiurgis · 3h ago
The Australian government can't even enforce the vape ban; how would you expect a smartphone ban to be enforced?
florkbork · 1h ago
What if the point isn't to enforce at the user level, but at the company level?
30,000 penalty units for violations. 1 unit = $330 AUD at the moment.
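The implied maximum fine is easy to work out (a quick sketch, assuming the $330/unit figure quoted above; the penalty unit value is indexed and changes over time):

```python
# Maximum civil penalty implied by the figures quoted above:
# 30,000 penalty units at $330 AUD per unit. The $330 value is a
# point-in-time assumption, not a fixed constant in the law.
PENALTY_UNITS = 30_000
AUD_PER_UNIT = 330

max_penalty_aud = PENALTY_UNITS * AUD_PER_UNIT
print(f"${max_penalty_aud:,} AUD")  # -> $9,900,000 AUD
```

So a single maximum-penalty violation is on the order of $10 million AUD, which is company-level rather than user-level enforcement.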
theshackleford · 3h ago
> That said, a better approach would be to limit kids under certain age from owning smartphones with full internet access. Instead, they could have a phone without internet access—dumb phones—or ones with curated/limited access.
This would be completely and utterly unenforceable in any capacity. Budget smartphones are cheap enough and ubiquitous enough that children don't need your permission or help to get one. Just as I didn't need my parents' assistance to have three different mobile phones in high school when, as far as they knew, I had zero phones.
account42 · 10m ago
Which is of course why we don't bother making selling cigarettes and alcohol to children illegal. Except we totally do that because it largely works even if sufficiently motivated individuals can and do get around the restrictions.
bamboozled · 9h ago
We basically have to teach children about misinformation, propaganda, and the dangers of the internet from the age of 5. That's basically how it would need to go. It would suck for the ruling class, though, because we'd have to stop feeding kids religion (for example) from the day they're born. How can we expect kids to know what the truth is when they're lied to all the time?
graemep · 2h ago
> it would suck for the ruling class though because we'd have to stop feeding kids religion
The ruling class in the West are generally extremely anti-religious. They have a good reason to be - the biggest religion in the West is anti-wealth (the "eye of the needle" thing, etc.) and generally opposed to the values of the powerful.
The US is a sort of exception, but they say things to placate the religious (having already been pretty successful in manipulating and corrupting the religion) but very rarely actually do anything. I very much doubt the president (or anyone else) in the current US government is going to endorse "give all you have to the poor".
DiggyJohnson · 6h ago
> it would suck for the ruling class though because we'd have to stop feeding kids religion
This seems out of place and unrelated. If anything, Gen Z and presumably Alpha, eventually, are more religious than their parents.
frollogaston · 7h ago
or just don't get them smartphones
pmontra · 6h ago
Misinformation and propaganda are not only on smartphones.
fc417fc802 · 6h ago
Still those do make it awfully easy to subscribe to notifications that actively push all sorts of problematic things onto you at an alarming rate. A high rate of exposure to something can lead to problems where there otherwise wouldn't be any.
marcus_holmes · 9h ago
2025: if you're logged in, then we check your age to see if you can do or see some stuff
2027: the companies providing the logins must provide government with the identities
2028: because VPNs are being used to circumvent the law, if the logging entity knows you're an Australian citizen, even if you're not in Australia or using an Aussie IP address then they must still apply the law
2030: you must be logged in to visit these specific sites where you might see naked boobies, and if you're under age you can't - those sites must enforce logins and age limits
2031: Australian ISPs must enforce the login restrictions because some sites are refusing to and there are loopholes
2033: Australian ISPs must provide the government with a list of people who visited this list of specific sites, with dates and times of those visits
2035: you must be logged in to visit these other specific sites, regardless of your age
2036: you must have a valid login with one of these providers in order to use the internet
2037: all visits to all sites must be logged in
2038: all visits to all sites will be recorded
2039: this list of sites cannot be visited by any Australian of any age
2040: all visits to all sites will be reported to the government
2042: your browser history may be used as evidence in a criminal case
Australian politicians, police, and a good chunk of the population would love this.
Australia is quietly extremely authoritarian. It's all "beer and barbies on the beach" but that's all actually illegal.
naruhodo · 7h ago
Mate...
> 2038: all visits to all sites will be recorded
That's been the case since 2015. ISPs are required to record customer ID, record date, time and IP address and retain it for two years to be accessed by government agencies. It was meant to be gated by warrants, but a bunch of non-law-enforcement entities applied for warrantless access, including local councils, the RSPCA (animal protection charity), and fucking greyhound racing. It's ancient history, so I'm not sure if they were able to do so. The abuse loopholes might finally be closed up soon though.
I cannot find it any more due to the degradation of Google, but there was a report on the number of times this data was accessed in NSW for 2018 (?). It was something like 280,000 requests for that year alone!
marcus_holmes · 5h ago
I didn't know that. Thanks
closewith · 4h ago
Yes, the 2038, 2040, and 2042 scenarios are already reality in most of the world. We're in the dystopian nightmare.
incompatible · 9h ago
> 2042: your browser history may be used as evidence in a criminal case
We already reached that point several years ago.
marcus_holmes · 9h ago
yeah true, I should have made it more explicit that it's your entire browser history, and every criminal case
bravesoul2 · 3h ago
Interesting question is warrant or no warrant required?
pmontra · 6h ago
> 2039: this list of sites cannot be visited by any Australian of any age
Block lists are not new. For example Italy blocks a number of sites, usually at DNS level with the cooperation of ISPs and DNS services. You can autotranslate this article from 2024 to get the gist of what is being blocked and why https://www.money.it/elenco-siti-vietati-italia-vengono-pers...
I believe other countries of the same area block sites for similar reasons.
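DNS-level blocking of the kind described above usually means the ISP's resolver returns a sinkhole or loopback address (or an NXDOMAIN error) instead of the site's real address. A rough, hypothetical sketch of detecting it by inspecting resolver answers; the sinkhole addresses and helper names here are illustrative assumptions, not any real blocklist:

```python
import socket

# Illustrative sinkhole/loopback addresses a resolver might return
# for a blocked domain (real block-page addresses vary by provider).
SINKHOLE_IPS = {"127.0.0.1", "0.0.0.0"}

def resolve_ipv4(hostname: str) -> set[str]:
    """Return the set of IPv4 addresses the system resolver gives
    for `hostname`; empty set on NXDOMAIN-style failures."""
    try:
        infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
        return {info[4][0] for info in infos}
    except socket.gaierror:
        return set()

def looks_dns_blocked(resolved: set[str]) -> bool:
    """Heuristic: a domain that resolves only to sinkhole/loopback
    addresses, or not at all, is consistent with DNS-level blocking."""
    return not resolved or resolved <= SINKHOLE_IPS
```

Comparing the system resolver's answer with a second, public resolver's answer for the same name would distinguish ISP-level blocking from a genuinely dead site, which is also why switching DNS servers is the classic workaround for this kind of block.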
SlowTao · 4h ago
As others have pointed out, many of these are already present here. I suspect the rest of your timeline is far too optimistic about how long it will take to get there. I suspect that, with the pace of decline, most of it will be enacted in the next 5 years.
I would like to say "It is all because of X political party!" but both the majors are the same in this regard and they usually vote unanimously on these things.
megablast · 23m ago
> your browser history may be used as evidence in a criminal case
Pretty sure google searches have been used in murder trials before, including the mushroom poisoning one going on right now in Victoria.
m3sta · 8h ago
Australian politicians, police, and a specific chunk of the population would be exempt from this... like with privacy laws.
marcus_holmes · 7h ago
indeed. Rules for thee but not for me.
tbrownaw · 9h ago
> 2030: you must be logged in to visit these specific sites where you might see naked boobies, and if you're under age you can't - those sites must enforce logins and age limits
Some states in the US are doing this already. And I think I saw a headline about some country in Europe trying to put Twitter in that category, implying they have such rules there already.
closewith · 4h ago
> Australia is quietly extremely authoritarian.
Not quietly, I don't think. Not like Australia is known for freedom and human rights. It's known for expeditionary wars, human rights abuses, jailing whistleblowers and protesters, protecting war criminals, environmental and social destruction, and following the United States like a puppy.
bravesoul2 · 3h ago
US is the same but with a different leash-holding country.
almosthere · 4h ago
2027: ufos visit, and decide to end the human experiment, game over.
Nursie · 5h ago
> your browser history may be used as evidence in a criminal case
As others have said, that's the case already and not just in Australia. Same in lots of other places like the UK and the whole EU. Less so in the US (though they can demand any data the ISP has, and require ISPs to collect data on individuals)
> Australia is quietly extremely authoritarian.
It is weird, as a recent-ish migrant I do agree, there are rules for absolutely bloody everything here and the population seems in general to be very keen on "Ban it!" as a solution to everything.
It's also rife with regulatory capture - Ah, no mate, you can't change that light fitting yourself, gotta get a registered sparky in for that or you can cop a huge fine. New tap? You have to be kidding me, no, you need a registered plumber to do anything more than plunger your toilet, and we only just legalised that in Western Australia last year.
It's been said before, but at some point the great Aussie Larrikin just died. The Wowsers won and most of them don't even know they're wowsers.
cmoski · 1h ago
There are a lot of people changing their own light fittings. I have never heard about laws against plumbing but I don't see them stopping old mate from doing it.
Electrical work can be pretty dangerous...
Nursie · 1h ago
Yeah, here in WA at least, there are signs up in the plumbing section of Bunnings saying “Stop! DIY plumbing is illegal! Only buy this stuff if you’re getting a professional to fit it!”
The reasoning is often “people might contaminate the water supply for a whole street!” Which just points to poor provision of one way valves at the property line.
But yeah, illegal.
I agree there are limits with what you want to do on electricity, but turning the breaker off and replacing a light fitting or light switch is pretty trivial. And I know people do just get on with it and do some of this stuff themselves anyway.
Was particularly pissed off that in January this year the plumbing “protections” were extended to rural residents who aren’t even connected to mains water or sewage, to protect us from substandard work by … making it illegal for us to do it ourselves. Highly annoying.
cmoski · 50m ago
Well there you go. Probably thanks to a plumbing lobby. Lucky we moved out of WA.
A lot of laws can be interpreted as recommendations :)
t0lo · 9h ago
If only there was a name for this fallacy. Something slope something
SchemaLoad · 8h ago
Slippery slope is only a fallacy when there is no reason to believe the end state is likely or desired.
It seems quite likely that governments want to continuously chip away at privacy.
its-summertime · 8h ago
Slippery slope is specifically about opening the gates to further slipping. That clearly isn't the case here, since there will be slipping regardless of whether this specific instance goes all the way through or not.
fc417fc802 · 6h ago
> It's wrong to call this a slippery slope because we're not at the top but instead already well on our way down a slope that is indeed slippery.
Not a convincing take.
Cartoxy · 4h ago
Australia is already living in a full-blown surveillance state. Over 330,000 metadata access requests were approved in a single year, no warrant needed. Agencies like Centrelink, the ATO, even local councils can tap into your private data. Police get access to your web browsing history directly from ISPs without judicial oversight. Encryption is being quietly undermined through laws like the TOLA Act, forcing tech companies to help spy or weaken their own systems. The government now mandates that AI search tools filter and flag content, shaping what people can even find online. When the AFP raided the ABC, they had the legal power to copy, alter, or delete files. Add to that Australia's deep involvement in the global Five Eyes intelligence-sharing network, and it's clear: this isn't future dystopia, it's surveillance as a fact of life.
On top of that: an NBN monopoly, with TR-069 hard-locked on by default and custom PCBs in NBN hardware (even to the point of new PCB runs with all headers and even the unpopulated test points removed). It took until the new revision of the Arris hardware before they even complied with the GPL licensing. Legit!
tbrownaw · 8h ago
So, is this particular slope likely to be slippery? Do governments have a history of looking for ways to control what information people can see, or looking for ways to identify people who post disfavored information?
What is it with some of the Anglo countries and these ridiculous slides into nannying, vaguely repressive surveillance? It's not even much use for real crime fighting, as the case of the UK amply and frequently demonstrates.
jgaa · 25m ago
It's what happens when the people governing are terrified of the people they govern.
Read the legislation.
Ask yourself if it's better for a country's government or a foreign set of social media companies to control what young people see.
One has a profit motive above all else.
One can be at least voted for or against.
bobbyraduloff · 5h ago
Taken straight from the new regulation: “Providers of internet search engine services are not required to implement age assurance measures for end-users who are not account holders.”
How can you argue any of this is NOT in the interest of centralised surveillance and advertising identities for ADULTS when there’s such an easy way to bypass the regulation if you’re a child?
jackvalentine · 9h ago
Australians are broadly supportive of these kinds of actions - there is a view that the foreign internet behemoths have failed to moderate themselves and will therefore have moderation imposed on them, however imperfect.
Can’t say I blame them.
AnthonyMouse · 8h ago
> there is a view that foreign internet behemoths have failed to moderate for themselves and will therefore have moderation imposed on them however imperfect.
This view is manufactured. The premise is that better moderation is available and despite that, literally no one is choosing to do it. The fact is that moderation is hard and in particular excluding all actually bad things without also having a catastrophically high false positive rate is infeasible.
But the people who are the primary victims of the false positives and the people who want the bad stuff fully censored aren't all the same people, and then the second group likes to pretend that there is a magic solution that doesn't throw the first group under the bus, so they can throw the first group under the bus.
cmoski · 59m ago
I think it is less about stopping them from seeing naked pictures etc and more about stopping them getting sucked into the addictive shithole of social media.
It will also make it harder for the grubby men in their 30s and 40s to groom 14yo girls on Snapchat, which is a bonus.
marcus_holmes · 7h ago
This. This legislation has got nothing to do with moderation or "protecting children" - that's just the excuse that the government is using to push the legislation through. There are better ways of achieving that goal if that was the goal.
The actual goal is, as always, complete control over what Australians can see and do on the internet, and complete knowledge of what we see and do on the internet.
l0ng1nu5 · 5h ago
Agreed but would also add the ability to prosecute anyone who writes something they don't like/agree with.
Read it.
It is specifically targeting companies who currently run riot over young individual's digital identity, flog it off to marketers, and treat them as a product.
globalnode · 6h ago
i think governments are confused by the internet. on the one hand business uses it to save money and pay taxes. broligarchs get rich from it. yet it exposes the unwashed masses to all sorts of information that might otherwise face censorship. there's always sex and drugs you can use as a reason to clamp down on things. the tough thing for them will be how to rein in the plebs while also allowing business and advertising to function unfettered... tough times ahead :p
p.s. i agree with your comment.
jackvalentine · 7h ago
> This view is manufactured. The premise is that better moderation is available and despite that, literally no one is choosing to do it. The fact is that moderation is hard and in particular excluding all actually bad things without also having a catastrophically high false positive rate is infeasible.
Manufactured by whom? Moderation was done very tightly on vBulletin forums back in the day; the difference is that Facebook/Google et al expect to operate at a scale where (they claim) moderation can't be done.
The magic solution is if you can't operate at scale safely, don't operate at scale.
> Moderation was done very tightly on vbulletin forums back in the day, the difference is Facebook/Google et al expect to operate at a scale where (they claim) moderation can't be done.
The difference isn't the scale of Google, it's the scale of the internet.
Back in the day the internet was full of university professors and telecommunications operators. Now it has Russian hackers and an entire battalion of shady SEO specialists.
If you want to build a search engine that competes with Google, it doesn't matter if you have 0.1% of the users and 0.001% of the market cap, you're still expected to index the whole internet. Which nobody could possibly do by hand anymore.
jackvalentine · 6h ago
Maybe search is dead but doesn’t know it yet.
Edit: you can’t just throw a Wikipedia link to Manufacturing Consent from the 80s at this as an explanation. What a joke of a position. Maybe people have been hoodwinked by a media conspiracy, or maybe they just don’t like what kids are exposed to at a young age these days.
AnthonyMouse · 6h ago
> you can’t just grow a Wikipedia link to manufacturing consent from the 80s as an explanation here. What a joke of a position.
Do you dispute the thesis of the book? Moral panics have always been used to sell both newspapers and bad laws.
> Maybe people have been hoodwinked by a media conspiracy or maybe they just don’t like what the kids are exposed to at a young age these days.
People have never liked what kids are exposed to. But it rather matters whether the proposed solution has more costs than effectiveness.
> Maybe search is dead but doesn’t know it yet.
Maybe some people who prefer the cathedral to the bazaar would prefer that. But ability of the public to discover anything outside of what the priests deign to tell them isn't something we should give up without a fight.
jackvalentine · 6h ago
I dispute you’ve made any kind of connection between the two beyond your own feelings.
I put it to you, similarly without evidence, that your support for unfettered filth freedom is the result of a process of manufacturing consent now that American big tech dominates.
AnthonyMouse · 5h ago
The trouble with that theory is that tech megacorps are a relatively recent development, whereas e.g. the court cases involving Larry Flynt were events from the 1970s and 80s and the likes of Hustler Magazine hardly had an outsized influence over the general media.
Meanwhile morals panics are at least as old as the Salem Witch Trials.
jackvalentine · 4h ago
Megacorps: simultaneously impotent and trillion-dollar companies.
AnthonyMouse · 4h ago
The US government has a multi-trillion dollar annual budget -- they spend more money every year than the entire market cap of any given megacorp -- and they can't solve it either. Maybe it's a hard problem?
g-b-r · 7h ago
Were web searches moderated?
bigfatkitten · 7h ago
> The premise is that better moderation is available and despite that, literally no one is choosing to do it.
It’s worse than that. Companies actively refuse to do anything about content that is reported to them directly, at least until the media kicks up a stink.
Nobody disputes that reliably detecting bad content is hard, but doing nothing about bad content you know about is inexcusable.
> Meta said it has in the past two years taken down 27 pedophile networks and is planning more removals.
Moreover, the rest of the article is describing the difficulty in doing moderation. If you make a general purpose algorithm that links up people with similar interests and then there is a group of people with an interest in child abuse, the algorithm doesn't inherently know that and if you push on it to try to make it do something different in that case than it does in the general case, the people you're trying to thwart will actively take countermeasures like using different keywords or using coded language.
Meanwhile user reporting features are also full of false positives or corporate and political operatives trying to have legitimate content removed, so expecting them to both immediately and perfectly respond to every report is unreasonable.
Pretending that this is easy to solve is the thing authoritarians do to justify steamrolling innocent people, because nobody has any good way to fully eliminate the problem.
bigfatkitten · 7h ago
> Your link says the opposite of what you claim
I don’t know where you got that from. Meta’s self-congratulatory takedown of “27 pedophile networks” is a drop in the ocean.
Here’s a fairly typical example of them actively deciding to do nothing in response to a report. This mirrors my own experience.
> Like other platforms, Instagram says it enlists its users to help detect accounts that are breaking rules. But those efforts haven’t always been effective.
> Sometimes user reports of nudity involving a child went unanswered for months, according to a review of scores of reports filed over the last year by numerous child-safety advocates.
> Earlier this year, an anti-pedophile activist discovered an Instagram account claiming to belong to a girl selling underage-sex content, including a post declaring, “This teen is ready for you pervs.” When the activist reported the account, Instagram responded with an automated message saying: “Because of the high volume of reports we receive, our team hasn’t been able to review this post.”
> After the same activist reported another post, this one of a scantily clad young girl with a graphically sexual caption, Instagram responded, “Our review team has found that [the account’s] post does not go against our Community Guidelines.” The response suggested that the user hide the account to avoid seeing its content.
AnthonyMouse · 6h ago
Your claim was that they "actively refuse" to do anything about it, but they clearly do actually take measures.
As mentioned, the issue is that they get zillions of reports and vast numbers of them are organized scammers trying to get them to take down legitimate content. Then you report something real and it gets lost in a sea of fake reports.
What are they supposed to do about that? It takes far fewer resources to file a fake report than investigate one and nobody can drink the entire ocean.
fc417fc802 · 6h ago
Active refusal can (and commonly does) take the form of intentionally being unable to respond, or merely putting on such an appearance. One of the curious things about Twitter pre-acquisition was that underage content somewhat frequently stayed up for months while discriminatory remarks were generally taken down rapidly. Post-acquisition such content seemed to disappear approximately overnight.
If the system is pathologically unable to deal with false reports to the extent that moderation has effectively ground to a standstill perhaps the regulator ought to get involved at that point and force the company to either change its ways or go out of business trying?
AnthonyMouse · 4h ago
> One of the curious things about Twitter pre-acquisition was that underage content somewhat frequently stayed up for months while discriminatory remarks were generally taken down rapidly. Post-acquisition such content seemed to disappear approximately overnight.
This isn't evidence that they have a system for taking down content without a huge number of false positives. It's evidence that the previous administrators of Twitter were willing to suffer a huge number of false positives around accusations of racism and the current administrators are willing to suffer them around accusations of underaged content.
fc417fc802 · 4h ago
I agree that on its own it isn't evidence of the ability to respond without excessive false positives. But similarly, it isn't evidence of an inability to do so either.
In the context of Australia objecting to lack of moderation I'm not sure it matters. It seems reasonable for a government to set minimum standards which companies that wish to operate within their territory must abide by. If, as you claim (and I doubt), the current way of doing things is uneconomical under those requirements, then perhaps it would be reasonable for those products to be excluded from the Australian market. Or perhaps they would instead choose to charge users for the service? Either outcome would make room for fairly priced local alternatives to gain traction.
This seems like a case of free trade enabling an inferior American product to be subsidized by the vendor thereby undercutting any potential for a local industry. The underlying issue feels roughly analogous to GDPR except that this time the legislation is terrible and will almost certainly make society worse off in various ways if it passes.
AnthonyMouse · 3h ago
> I agree that on its own it isn't evidence of the ability to respond without excessive false positives. But similarly, it isn't evidence of an inability to do so either.
It is in combination with the high rate of false positives, unless you think the false positives were intentional.
> If as you claim (and I doubt) the current way of doing things is uneconomical under those requirements then perhaps it would be reasonable for those products to be excluded from the Australian market.
If they actually required both removal of all offending content and a low false positive rate (e.g. by allowing customers to sue them for damages for removals of lawful content) then the services would exit the market because nobody could do that.
What they'll typically do instead is accept the high false positive rate rather than leave the market, and then the service remains but becomes plagued by innocent users being victimized by capricious and overly aggressive moderation tactics. But local alternatives couldn't do any better under the same constraints, so you're still stuck with a trash fire.
riffraff · 5h ago
But this goes back to the original argument: maybe if you can't avoid causing harm then you shouldn't be allowed to operate?
E.g. if you produce eggs and you can't avoid salmonella at some point your operation should be shut down.
Facebook and its ilk have massive profits, they can afford more moderators.
AnthonyMouse · 4h ago
> But this goes back to the original argument: maybe if you can't avoid causing harm then you shouldn't be allowed to operate?
By this principle the government can't operate the criminal justice system anymore, because it has too many false positives and uncaptured negative externalities, and then you don't have anything to use to tell Facebook to censor things.
> Facebook and its ilk have massive profits, they can afford more moderators.
They have large absolute profits because of the large number of users but the profit per user is in the neighborhood of $1/month. How much human moderation do you think you can get for that?
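The per-user arithmetic upthread can be made concrete. This is a back-of-envelope sketch with illustrative numbers only: the ~$1/user/month profit figure comes from the comment above, while the moderation budget share, moderator cost, and report rate are all assumptions for the sake of the example.

```python
# Back-of-envelope: how much human moderation does ~$1/user/month of profit
# buy, if some fraction of it is spent on moderation? All numbers except the
# profit-per-user figure (cited upthread) are illustrative assumptions.

users = 3_000_000_000            # order of magnitude for a Meta-sized user base
profit_per_user_month = 1.0      # USD, figure cited upthread
moderation_share = 0.05          # assumed: 5% of profit goes to moderation
moderator_cost_month = 4_000.0   # assumed fully loaded cost per moderator, USD
reports_per_user_month = 0.1     # assumed: one report per ten users per month

budget = users * profit_per_user_month * moderation_share
moderators = budget / moderator_cost_month
reports_each = users * reports_per_user_month / moderators

print(f"{moderators:,.0f} moderators")            # 37,500
print(f"{reports_each:,.0f} reports/month each")  # 8,000
```

Under these assumptions each moderator would face roughly 8,000 reports a month, a few hundred per working day, which is why careful review of every report doesn't pencil out without eating far more of the profit.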
fc417fc802 · 4h ago
> By this principle the government can't operate the criminal justice system
Obviously we make case by case decisions regarding such things. There are plenty of ways in which governments could act that populations in the west generally deem unacceptable. Private prisons in the US, for example, are quite controversial at present.
It's worth noting that if the regulator actually enforces requirements then they become merely a cost of doing business that all participants are subject to. Such a development in this case could well mean that all the large social platforms operating within the Australian market start charging users in that region on the order of $30 per year to maintain an account.
AnthonyMouse · 3h ago
> Obviously we make case by case decisions regarding such things.
You can make case by case decisions regarding individual aspects of the system, but no modern criminal justice system exists that has never put an innocent person behind bars, much less on trial. Fiddling with the details can get you better or worse but it can't get you something that satisfies the principle that you can't operate if you can't operate without ever doing any harm to anyone. Which implies that principle is unreasonable and isn't of any use in other contexts either.
> It's worth noting that if the regulator actually enforces requirements then they become merely a cost of doing business that all participants are subject to. Such a development in this case could well mean that all the large social platforms operating within the Australian market start charging users in that region on the order of $30 per year to maintain an account.
The premise there is that you could solve the problem for $30 per person annually, i.e. $2.50/month. I'm left asking the question again, how much human moderation do you expect to get for that?
Meanwhile, that's $30 per service. That's going to increase the network effect of any existing service because each additional recurring fee or requirement to submit payment data is a deterrent to using another one. And maybe the required fee would be more than that. Are you sure you want to entrench the incumbents as a permanent oligarchy?
bigfatkitten · 3h ago
> but they clearly do actually take measures.
Some times, but clearly not often enough.
Does a refusal get more active than a message that says “Our review team has found that [the account’s] post does not go against our Community Guidelines”?
> Then you report something real and it gets lost in an sea of fake reports.
It didn’t get ‘lost’ — they (or their contract content moderators at Concentrix in the Philippines) sat on it, and then sent a message that said they had decided to not do anything about it.
> What are they supposed to do about that?
They’ve either looked at the content and decided to do nothing about it, or they’ve lied when they said that they had, and that it didn’t breach policy. Which do you suppose it was?
AnthonyMouse · 3h ago
> Does a refusal get more active than a message that says “Our review team has found that [the account’s] post does not go against our Community Guidelines”?
That's assuming their "review team" actually reviewed it before sending that message and purposely chose to allow it to stay up knowing that it was a false negative. But that seems pretty unlikely compared to the alternative, where the reviewers were overwhelmed and making determinations without doing a real review, or doing one so cursory that the error was made blindly.
> They’ve either looked at the content and decided to do nothing about it, or they’ve lied when they said that they had, and that it didn’t breach policy. Which do you suppose it was?
Almost certainly the second one. What would even be their motive to do the first one? Pedos are a blight that can't possibly be generating enough ad revenue through normal usage to make up for all the trouble they are, even under the assumption that the company has no moral compass whatsoever.
coryrc · 4h ago
> What are they supposed to do about that?
Do like banks: Know Your Customer. If someone performs a crime using your assets, you are required to supply evidence to the police. You then ban the person from using your assets. If someone makes false claims, ban that person from making reports.
Now your rate of false positives is low enough to handle.
AnthonyMouse · 4h ago
This is the post people should point to when someone says "slippery slope is a fallacy" in order to prove them wrong, both for the age verification requirements and for making banks do KYC.
But also, your proposal would deter people from reporting crimes because they're not only hesitant to give randos or mass surveillance corporations their social security numbers, they may fear retaliation from the criminals if it leaks.
And the same thing happens for people posting content -- identity verification is a deterrent to posting -- which is even worse than a false positive because it's invisible and you don't have the capacity to discover or address it.
Nursie · 5h ago
> The fact is that moderation is hard
Moderation is hard when you prioritise growth and ad revenue over moderation, certainly.
We know a good solution - throw a lot of manpower at it. That may not be feasible for the giant platforms...
Oh no.
AnthonyMouse · 4h ago
This is the weirdest theory. The premise is that you admit the huge corporations with billions of dollars don't have the resources to pay moderators to contend with the professional-grade malicious content by profitable criminal syndicates, but some tiny forum is supposed to be able to get it perfect so they don't go to jail?
fc417fc802 · 3h ago
> but some tiny forum is supposed to be able to get it perfect so they don't go to jail?
Typically you would exempt smaller services from such legislation. That's the route Texas took with HB 20.
AnthonyMouse · 2h ago
So the companies that exceed the threshold couldn't operate there (e.g. PornHub has ceased operating in Texas) but then everyone just uses the smaller ones. Wouldn't it be simpler and less confusing to ban companies over a certain size unconditionally?
Nursie · 4h ago
> The premise is that you admit the huge corporations with billions of dollars don't have the resources to pay moderator
My contention is more that they don’t have the will, because it would impact profits and that it’s possible that if they did implement effective moderation at scale it might hurt their bottom line so much they are unable to keep operating.
Further, that I would not lament such a passing.
I’m not saying tiny forums are some sort of panacea, merely that huge operations should not be able to get away with (for example) blatant fraudulent advertising on their platforms, on the basis that “we can’t possibly look at all of it”.
Find a way, or stop operating that service.
AnthonyMouse · 2h ago
> My contention is more that they don’t have the will, because it would impact profits and that it’s possible that if they did implement effective moderation at scale it might hurt their bottom line so much they are unable to keep operating.
Is the theory supposed to be that the moderation would cost them users, or that the cost of paying for the moderation would cut too much into their profits?
Because the first one doesn't make a lot of sense, the perpetrators of these crimes are a trivial minority of their user base that inherently cost more in trouble than they're worth in revenue.
And the problem with the second one is that the cost of doing it properly would not only cut into the bottom line but put them deep into the red on a permanent basis, and then it's not so much a matter of unwillingness but inability.
> I’m not saying tiny forums are some sort of panacea, merely that huge operations should not be able to get away with (for example) blatant fraudulent advertising on their platforms, on the basis that “we can’t possibly look at all of it”.
Should the small forums be able to get away with it though? Because they're the ones even more likely to be operating with a third party ad network they neither have visibility into nor have the leverage to influence.
> Further, that I would not lament such a passing.
If Facebook was vaporized and replaced with some kind of large non-profit or decentralized system or just a less invasive corporation, would I cheer? Probably.
But if every social network was eliminated and replaced with nothing... not so much.
pferde · 1h ago
Smaller forums are more likely to handle moderation effectively and in a timely manner. I frequent a few such forums, and have seen consistently good moderating for many years.
Nursie · 2h ago
> the cost of paying for the moderation would cut too much into their profits?
This one. Not just in terms of needing to take on staff, but it would also cut into their bottom line in terms of not being able to take money from bad-faith operators.
> And the problem with the second one is that the cost of doing it properly would not only cut into the bottom line but put them deep into the red on a permanent basis, and then it's not so much a matter of unwillingness but inability.
Inability to do something properly and make a commercial success of it, is a 'you' problem.
Take Meta and their ads: they've built a system in which it's possible to register and upload ads and show them to users more or less instantly, with more or less zero human oversight. There are various filters to try to catch stuff, but they're imperfect, so they serve fraudulent ads to their users all the time: fake celebrity endorsements, various things that fall foul of advertising standards, some just outright scams. (Local family store you never heard of is closing down! So sad! Buy our dropshipped crap from AliExpress at 8x the price!)
To properly, fully fix this they would need to verify advertisers and review ads before they go live. This is going to slow down delivery, require a moderate sized army of reviewers and it's going to lose them revenue from the scammers. So many disincentives. So they say "This is impossible", but what they mean is "It is impossible to comply with the law and continue to rake in the huge profits we're used to". They may even mean "It is impossible to comply with the law and continue to run facebook".
OK, that's a classic 'you' problem. (Or it should be). It's not really any different to "My chemical plant can't afford to continue to operate unless I'm allowed to dump toxic byproducts in the river". OK, you can't afford to operate, and if you keep doing it anyway, we're going to sanction you. So ... Bye then?
> Should the small forums be able to get away with it though?
This is not really part of my argument. I don't think they should, no. But again - if they can't control what's being delivered through their site and there's evidence it contravenes the law, that's a them problem and they should stop using those third party networks until the networks can show they comply properly.
> if every social network was eliminated and replaced with nothing... not so much.
Maybe it's time to find a new funding model. It's bad enough having a funding model based on advertising. It's worse having one based on throwing ad messages at people cheap and fast without even checking they meet basic legal standards. But here we are.
I realise this whole thing is a bit off-topic as the discussion is about age-verification and content moderation, and I've strayed heavily into ad models....
SchemaLoad · 8h ago
I'm split on it. 100% agree that kids being off social media is better for society. But I can't see how it could be enforced without privacy implications for adults.
fc417fc802 · 5h ago
Perhaps enforcement at the user end isn't what's needed. A perfect solution is likewise probably unnecessary.
As but one possible example. Common infrastructure to handle whitelisting would probably go a long way here. Just being able to tag a phone, for example, as being possessed by a minor would enable all sorts of voluntary filtering with only minimal cooperation required.
Many sites already have "are you 18 or older" type banners on entry. Imagine if those same sites attached a plaintext flag to all of their traffic so the ISP, home firewall, school firewall, or anyone else would then know to filter that stream for certain (tagged) accounts.
I doubt that's the best way to go about it but there's so much focus on other solutions that are more cumbersome and invasive so I thought it would be interesting to write out the hypothetical.
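The voluntary-tagging idea above can be sketched very simply. This assumes a hypothetical setup where sites attach an RTA-style rating to their responses (the real RTA scheme uses a fixed meta-label string, and the `Rating` header name here is illustrative) and where a home or school firewall knows which local accounts or devices belong to minors:

```python
# Sketch of the filtering decision a home/school firewall or proxy could make
# if sites voluntarily self-labeled adult content. The header name and the
# minor-tagging mechanism are hypothetical; only the label string is real.

RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"  # the RTA meta-label string

def should_block(response_headers: dict, requester_is_minor: bool) -> bool:
    """Block a response only when it self-identifies as adult content
    AND the requesting account/device is tagged as a minor."""
    rating = response_headers.get("Rating", "")
    return requester_is_minor and RTA_LABEL in rating

# Adults are unaffected; only tagged accounts get filtered.
print(should_block({"Rating": RTA_LABEL}, requester_is_minor=True))   # True
print(should_block({"Rating": RTA_LABEL}, requester_is_minor=False))  # False
print(should_block({}, requester_is_minor=True))                      # False
```

The appeal of this shape is that no identity ever leaves the household: the site labels itself, and enforcement happens entirely at the edge the parent or school controls.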
ptek · 3h ago
Are you 18 or older?
You don’t get that notification showing up when you buy alcohol or cigarettes at a shop; it would have been easier being a minor buying beer. The porn companies know what they are doing, or they would have created an adults robots.txt and published an RFC. Hope they won’t ask for age verification for the shroomery.
SchemaLoad · 5h ago
Yeah that seems pretty reasonable. Apple and Google could extend their parental controls to send a header or something flagging an under age user, for sites to then either block, or remove social elements from the page.
Seems like right now the Aus Government isn't sure how they want it to work and is currently trialing some things. But it does seem like they at least don't want social media sites collecting ID.
2muchcoffeeman · 6h ago
Don’t buy them devices and lock down your computer and networks.
I guess if a teenager is enterprising enough to get a job and save up and buy their own devices and pay for their own internet then more power to them.
SchemaLoad · 6h ago
Obviously an impossible task. Kids need computers for school, and every school provides laptops. Kids don't need access to social media.
1718627440 · 4h ago
No? This was how it was for me. And the only downside was that all the other kids were glued to their smartphones.
Why is this even controversial? Is there any rational reason why kids should have smartphones? The only reason I see is to let the big companies earn money, and because adults don't want to admit that they are addicted themselves.
bigfatkitten · 9h ago
If you read through the issues that ASIO says they are most concerned about, it’s clear that companies like Meta have a lot to answer for.
We don't need ASIO to tell us that. The real problem is that early on, when Big Tech first took a stranglehold of the internet in the early 2000s, governments failed to regulate: they did SFA despite the warning signs.
At the time it was obvious to many astute observers what was happening but governments themselves were mesmerized and awed by Big Tech.
A 20-plus year delay in applying regulations means it'll be a long hard road to put the genie back in the bottle. For starters, there's too much money now tied up in these trillion-dollar companies; disrupting their income would mean shareholders and even whole economies would be affected.
Fixing the problem will be damn hard.
BLKNSLVR · 8h ago
Even harder now that the US President is siding with the tech broligarchy since it aligns perfectly with the America First ideology.
(It may be the last thing that the US has the world lead on)
It's also why legislation protecting privacy and/or preventing the trade of personal information is almost impossible: the "right" people profit from it, and the industry around it has grown large enough that it would have non-trivial economic effects if it were destroyed (no matter how much it thoroughly deserves to be destroyed with fire).
marcus_holmes · 7h ago
I just read through that, and it doesn't even mention Meta. Why do you think this is about "companies like Meta"?
homeonthemtn · 9h ago
But never will~
Palmik · 6h ago
It's interesting how all countries work in tandem implementing these measures. UK, EU, some US States and now Australia all require or will soon require age verification under certain conditions.
It seems like it would make more sense to implement it at the browser level. Let the website return a header (à la RTA) or trigger some JavaScript API to indicate that the browser should block the tab until the user verifies their age.
riffraff · 5h ago
I think lawmakers gravitate towards "required identification" because 1) it's easier to put blame on a single website than on whatever browser + the websites, 2) it matches the experience of age restriction for movies and magazines, where age is enforced by whoever sells you the thing or allows access, 3) client-side restrictions seem easier to circumvent, and 4) some lawmakers probably think grown-ups shouldn't watch porn either.
IMO an "ok" solution to the parents' requirements of "I want my kids to not watch disturbing things" might be to enforce domain tags (violence, sex, guns, religion, social media, drugs, gambling, whatever) and allow ISPs to set filters per paying client, so people don't have to setup filters on their own (but they can).
But it's a complex topic, and IMO a simpler solution is to just not let kids alone in the internet until you trust them enough.
bn-l · 1h ago
> Age assurance methods can include age verification systems, which use government documents or ID; age estimation systems, which typically use biometrics; and age inference systems, which use data about online activity or accounts to infer age.
Oh how convenient.
Cartoxy · 4h ago
Aims to protect kids online, but it could easily go too far. It covers way more than just search engines—pretty much anything that returns info, including AI tools.
It pushes for heavy content filtering, age checks, and algorithm tweaks to hide certain results. That means more data tracking and less control over what users see. Plus, regulators can order stuff to be removed from search results, which edges into censorship. It sets the stage for broader control, surveillance, and over-moderation. The slow-burn additions all stack up: digital ID, the NBN monopoly, ISP-locked DNS servers, TR-069, hidden VOIP credentials, etc. Australia seems to be the West's testing ground for this kind of policy.
shirro · 2h ago
This looks like a voluntary industry code of conduct made by US companies Microsoft, Google etc. I am not aware of any legislation that would require this in Australia. If the commissioner thinks the industry codes are insufficient she might advise the government that a legislative approach is required but she is not an Australian politician and was not elected by anyone here.
The eSafety commissioner is an American born ex-Microsoft, Adobe and Twitter employee who was appointed by the previous conservative government. I wouldn't be so sure her values are representative of the so-called Australian nanny state or the Australian Labor Party.
Sevrene · 9h ago
I’m an Australian who values privacy and civil liberties more than most I meet.
While I yearn for the more authentic and sincere days of the internet I grew up on, I recognize very quickly by visiting x or facebook how much it isn’t that, and hasn’t been for a long time.
I think this bill is a good thing and I support it.
SturgeonsLaw · 3h ago
I’m an Australian who values privacy and civil liberties more than most I meet, and that's why I think this bill is horrible, is full of unintended consequences, and will be worked around by kids who care to do it.
Read the bill.
Gov ID collection is just as much a violation as failing to take any action
hilbert42 · 8h ago
"I’m an Australian who values privacy and civil liberties more than most I meet."
Same here. Early on, if I found a site interesting I'd often follow its links to other sites and so on down into places that the Establishment would deem unacceptable but I'd not worry too much about it.
Nowadays, I just assume authorities of all types are hovering over every mouse click I make. Not only is this horrible but it also robs one of one's autonomy.
It won't be long before info that was once commonplace in textbooks is handed around in secret.
fc417fc802 · 5h ago
Aren't privacy and civil liberties fundamentally at odds with centralized government issued ID checks? How can you claim to value the former while supporting a plan to require the latter?
In the days before electronics were endemic, physically checking a photo ID didn't run afoul of that as long as the person checking didn't record the serial number. But that's no longer the world we live in.
marcus_holmes · 7h ago
I don't understand why you think this bill and that phenomenon (the fact that Xitter or Facebook aren't like the old days of the internet) are connected, can you explain why you think this, please?
veeti · 6h ago
Evidently the bar for valuing such things is set very low in Australia.
g-b-r · 6h ago
This is the account's first message here in two years
theshackleford · 3h ago
>I think this bill is a good thing and I support it.
Uhuh.
>I’m an Australian who values privacy and civil liberties more than most I meet.
No you're not.
frollogaston · 7h ago
The AI-based version of this looks fine, the ID checks are odd though
Nasrudith · 4h ago
Are you sure you value privacy and civil liberties then if you fall for "Think of the Children" bollocks instead of wanting to throw politicians down wells to protect children from living in a dystopia?
amaterasu · 8h ago
The co-leads on drafting the code are rather interesting:
> Drafting of the code was co-led by Digital Industry Group Inc. (DIGI), which was contacted for comment as it counts Google, Microsoft, and Yahoo among its members.
shirro · 40m ago
Yes. As usual people commenting based on their biases instead of comprehending the text. This is a proposal made by predominantly US companies (a country that actually has mandatory proof of age to access digital services in several states) to a US born eSafety commissioner who previously worked for Microsoft, Adobe and Twitter.
Not really sure what this has to do with the Australian government or Australian people. We can't even properly tax these foreign companies fairly. If we did try to regulate them the US government would step in and play the victim despite a massively one sided balance of trade due to US services being shoved down our throats. We need to aggressively pursue digital sovereignty.
fc417fc802 · 7h ago
Do you suppose this is born of a desire to more easily identify people, or primarily as a regulatory fence to prevent upstart competitors? Perhaps both?
ethan_smith · 9h ago
Australia's been down this road before with the failed 2019 age verification bill and the Online Safety Act. The technical implementation challenges are enormous - from VPN circumvention to privacy risks of ID verification systems.
frollogaston · 9h ago
Well the age assurance is only for logged-in users, so they can just log out.
postingawayonhn · 9h ago
The article doesn't quite spell it out but I assume you won't be able to turn off safe search unless you log in with a verified 18+ account.
frollogaston · 9h ago
It does say "default" which implies you can turn off the filter, but yeah it's not very clear and does make a big difference. For example, YouTube already won't let you view flagged content without signing in.
SchemaLoad · 8h ago
This is how Youtube works now. Age restricted videos can't be viewed without logging in.
ptek · 3h ago
Yeah. Can’t watch King of the Streets or some video game trailers.
hilbert42 · 8h ago
Unfortunately, there are two problems with that approach. First, Google, with fingerprinting, cookies, IP addresses, etc., will still know who you are. Second, even in the rare event that you are able to make yourself anonymous, the search results you're dished up can still be filtered without your knowledge.
That would have the same effect.
bigfatkitten · 9h ago
You’d have a hard time finding a 12+ year old who doesn’t know what Incognito mode in Chrome is for.
_aavaa_ · 8h ago
Ahh yes, a technological solution to a political problem.
senectus1 · 9h ago
hmmm, another self-hosted service to knock up: a proxied search engine.
bamboozled · 9h ago
I guess they should just succumb to the US big tech machine without trying anything. I get the sentiment though; maybe doing something that won't work is worse than doing nothing.
ratchetgo1 · 7h ago
Grand Fascist State Censor Julie Inman Grant strikes again. Another disgraceful loss of privacy for the country defining anglophone technological totalitarianism.
tjmc · 3h ago
"eKaren" is shorter
ActorNightly · 6h ago
Meh, this is minor political fluff. Australia is still doing quite good.
Cartoxy · 4h ago
In the digital rights and government spying department? Maybe we look fine versus China or North Korea, but within the "west" we are probably the worst. Easily.
ggm · 5h ago
Homomorphic encryption and third parties. No need for government eyes to know axiomatically which 100pts ID verified which login, nor website or search engine to know who the real person is.
Most legislation aims to create the offence of misleading, not actually stamp out 100% of offenders. Kids who get round this will make liabilities for themselves and their parents.
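The third-party idea above can be illustrated with a toy token flow. To be clear, this is not homomorphic encryption, and a real design would use blind signatures (à la Privacy Pass) so the issuer cannot link tokens to sites; the HMAC shared-verification-key shortcut here is a loudly flagged simplification, and all names are hypothetical. The point it demonstrates: the ID check and the website visit can be decoupled, so the site learns only "someone over 18" and the issuer never sees which login used the token.

```python
import hashlib
import hmac
import secrets

# Toy sketch of an unlinkable age attestation. NOT homomorphic encryption,
# and NOT a real protocol: a production scheme would use blind signatures so
# the issuer can't recognize tokens when they're spent.

ISSUER_KEY = secrets.token_bytes(32)  # held by the ID-checking third party

def issue_token() -> tuple[bytes, bytes]:
    """Issuer side: after verifying 100-point ID out of band, sign a random
    nonce asserting 'over 18'. The nonce carries no identity."""
    nonce = secrets.token_bytes(16)
    sig = hmac.new(ISSUER_KEY, b"over-18:" + nonce, hashlib.sha256).digest()
    return nonce, sig

def verify_token(nonce: bytes, sig: bytes) -> bool:
    """Website side: check the signature; learns only 'someone over 18'."""
    expected = hmac.new(ISSUER_KEY, b"over-18:" + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, sig)

nonce, sig = issue_token()
print(verify_token(nonce, sig))           # True
print(verify_token(nonce, b"\x00" * 32))  # False
```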
eidorb · 9h ago
Minors’ accounts must also revoke “sign out” functionality in case they see some titties.
SoftTalker · 7h ago
To be fair, most of the concern is about stuff that's far more hard-core than "titties"
HKH2 · 9h ago
> However, users who are not logged in should also expect “default blurring of images of online pornography and high-impact violence material detected in search results”.
aucisson_masque · 3h ago
> Search engines will not be required to implement age assurance measures for users who are not logged in to their services, according to the new rules.
…
azov · 5h ago
I wonder if technical complexity of implementing online age checks is about the same as implementing a robust direct democracy system - one where people can vote down bad laws instead of outsourcing those decisions wholesale to politicians they don’t even like?..
ggm · 5h ago
I predict Lower.
Unrelated, but why I don't agree:
The systems which permit voting down stupid laws also permit voting down good laws. This is very "be careful what you wish for", and reduces to a "the voter is always right even when they want stupid things" interpretation of democracy.
E.g. Swiss cantons opposing votes for women inside the last 2 decades.
azov · 5h ago
Well, direct democracy already exists in various forms (e.g., referendums, propositions on California ballots, etc.). Sometimes bad decisions are made, but I wouldn’t call it a total disaster. Can it be improved through technical means? How much improvement would it take for it to be better than the status quo?
_Algernon_ · 5h ago
They don't have to be always right, just be right more often than a representative democracy.
dbg31415 · 1h ago
People need to realize that Australia is a testing ground for laws like this.
As an Australian citizen, this further reinforces my position that the greatest trick the devil ever pulled was convincing the world that Australia is a laidback country full of easygoing people.
It isn’t. For as long as I can remember it’s been wildly authoritarian, and it seems Australians harbour a fetish for the rules that would make even the average German blush.
Hopefully times have changed (though I don’t think they have), but about 20 years ago, standard fare on the road was to provide essentially no driver training, and then aggressively enforce draconian traffic rules. New drivers can’t drive at night. New drivers have to abide by lower speed limits than other drivers. Police stop traffic for random breathalyser tests. “Double demerit” days…
This seems like more of the same. Forget trying to educate the population about the dangers of free access to information (which they will encounter anyway). Just go full Orwell! What could go wrong!
jauntywundrkind · 8h ago
What an awful, sad fall for us all, from such lofty heights of possibility for technology, to a seemingly endless age of humans being exploited and mechanized by technology while governments offer only the saddest, most impotent, useless pearl-clutching fear responses that do nothing to coax the world towards better.
Apologies. I'm already pretty morose over the USA Supreme Court allowing age verification, which although claiming to target porn seems so likely to cudgel any "adult" or sexual material at all.
Until recently the Declaration of the Independence of Cyberspace has held pretty true. The online world has seen various regulations, but mostly it's been taxes and businesses affected. Here we see a turn where humans are now denied access by their governments, where we are no longer allowed to connect or to share, not without flashing our government-verified ID. It's such a sad lowering of the world, by such absolute loser politicians doing such bitter, pathetic anti-governance for such low reasons. They impinge on the fundamental dignity and respect inherent in mankind with these intrusions into how we may think and connect.
Stupid bipartisan authoritarian BS, so basically a normal day for the Australian government.
9283409232 · 7h ago
This is very simplistic but at a certain point I feel like parents should just be better parents and take responsibility for what their children do online in their home.
attila-lendvai · 3h ago
age check = identity check
incompatible · 9h ago
Do they realise that some of us may be using computers that don't even have a camera, and open source software that could in theory upload any image we like?
Ycros · 7h ago
It uses a video feed and asks you to look in certain directions. At least the one instance I've encountered did.
nenadg · 3h ago
ah another one from the series of govt ideas so good that they have to be enforced
BLKNSLVR · 9h ago
Nice to see the ACS implementing their own dark patterns in making the "Close" text in the top right of their full screen pop-up light-grey and thus difficult to find.
/s
glaucon · 8h ago
Yep, it took me a while to find that.
pevansgreenwood · 9h ago
With search moving from Google & MS to TikTok et al., is this shutting the barn door after the horse has bolted?
t0lo · 9h ago
As an Australian citizen I'm all for it. Look at how the internet and social media have destroyed our current youth, their naivety and their sense of emotional security. They all act like they're living in Soviet Russia at this point and have become so hard and jaded.
Better I give a little bit of pii than some kid grows up too early.
Would you be able to tell the difference if this policy came from a place of compassion?
abtinf · 9h ago
> They all act like they're living in soviet russia
Nothing says “not living in Soviet Russia” like having to show your papers to access information.
It's literally right there in the bill: a company collecting your papers is just as much of an offence as it doing nothing to stop kids from being run through the misinformation mill.
jp0d · 9h ago
what's the alternative? Is it really information or misinformation?
knifie_spoonie · 9h ago
Education.
I really wish all this time, effort, and money was spent on educating our kids to safely navigate the online world.
It's not like they'll magically figure it out for themselves once they turn 17.
jp0d · 9h ago
Totally agree with this.
frollogaston · 9h ago
I remember when any anti-Iraq-invasion material was considered "misinformation" in the US. Wonder how it went in Australia, since they were also very involved.
defrost · 9h ago
My recollection of the time is that most citizens that paid attention and a majority of the politicians in the UK and AU were fully aware the "intel" was sketchy and the motivations impure .. the debate was less about the information quality and more about the obligation to partner with the US in the invasion.
The UK PM and the AU PM backed the US position and sent troops in (in the AU case they even sent in advance rangers | commandos | SASR to scout and call targets from ground) but they were both aware the "justification" and WMD claims were BS.
toyg · 3h ago
The UK PM, Tony Blair, actually pushed the "45 minutes" fabrication. Some of his MPs might have been sceptical, but Blair was very clearly itching to be a wartime PM.
What you describe is more like the debate in continental Europe, which translated into little support (most countries provided help with logistics and minimal "peacekeeping").
dfxm12 · 8h ago
So some government officials were probably in the pocket of Halliburton (i.e., just like the US government) while selling a weak justification to the public.
Such things play a part, of course; however at a national level the first-order consideration would have been ANZUS-like defence agreements and a sense that ongoing regional support from the US rested on Australian support for the US, right or wrong.
This. Whether the USA had a mandate to go into Iraq wouldn't have been questioned. Australia jumped in because we always jump in to whatever bullshit war the USA dreams up. For some reason we see it as an obligation to support our allies in all their wars, even when we think their reasons are ridiculous and even when we know they won't support us in return.
This has led to serious problems in the case of the Afghan war, where it was clear that the whole conflict had nothing to do with Australia, could not even vaguely be construed as "defence", achieved nothing, cost Australian lives, and was a completely fabricated mess that we got into for really bad reasons (I paraphrase). The SAS war crimes thing was a symptom of our unease at our involvement (imho); we would not normally question the things that soldiers do in conflict, and this was more a way of questioning why we were in the conflict in the first place.
palmfacehn · 4h ago
My anecdotal, non-Aussie observation: Yes, doubts over the WMD debacle were shouted down as nutty conspiracy theory. The usual rhetoric was employed, "If such a wide ranging conspiracy were truly afoot, wouldn't someone blow the whistle?"
Afterwards the same people who employed this rhetoric claimed they, "Always knew the claims were false".
There was a definite risk of loss of political capital for would-be dissenters. Politicians may or may not have had skeptical reservations; it's a moot point if they didn't proactively dissent. Similarly, it isn't especially meaningful in the context of this discussion if those who did dissent were locked out of popular media discourse. The overall media environment repeated the claims unquestioningly. Dissent was maligned as conspiracy theory.
Another interesting manifestation was those who claimed that WMDs were found. Clearly the goalposts were shifted here. Between those who were "always suspicious" and those who believe the standard for WMDs was met, very few people remain who concede that they were hoodwinked by the propaganda narrative. Yet at the same time, it isn't a stretch to observe that a war or series of wars was started on false premises. No one has been held to account.
abtinf · 9h ago
> misinformation
Nothing screams “not living in Soviet Russia” like having a ministry of truth.
jp0d · 9h ago
> “not living in Soviet Russia”
Nothing screams "fear mongering" like comparing with living in Soviet Russia.
Look, we can argue all day. There is no right or wrong answer. I don't fully support the govt's initiative, but I also don't want Meta/X/Google to have unlimited powers like they do in the US.
fc417fc802 · 7h ago
> I don't fully support the govts initiative but I also don't want Meta/X/Google to have unlimited powers like they do in the US.
Various large US tech companies played a central role in drafting this initiative. I don't think you're reasoning about this clearly.
How exactly does this curtail their powers?
dfxm12 · 8h ago
Can you explain how limiting a regular citizen's freedom constrains Meta/X/Google's power?
bamboozled · 9h ago
So is being fed propaganda 24/7; judging by some of these comments, the KGB seems to be winning.
I don't see kids being banned from reading history books, which would be more like the world you're describing. I see a country which is pretty multicultural and open-minded trying its best to protect itself from the absolute nonsense that circulates online. When I was a kid, I could only watch certain TV shows because my bedtime was 7:30-8pm; that's when the "naughty stuff" came on TV. Was that the ministry of truth at work?
Do you have any idea what kids are exposed to now? I mean, the answer is probably no, you have no idea. But judging by the rot I see my younger friends and family members watch and regurgitate, I can tell you, it's not great.
t0lo · 9h ago
yep, mutual delegitimation and hasbara are operating in full force. over 80 countries have "cyber troops" - there are so many countries trying to destroy the social fabric of the west. why shouldn't we shield our children, who have no way of understanding or protecting themselves from it. plus the fact that the "thought and ideological leaders" of this generation have no thoughts or coherent ideologies is pretty telling.
bamboozled · 9h ago
We failed to give the kids the skills to think critically (because that's not in the ruling classes best interest), so now, to keep the population under some form of governability, information has to be restricted so people don't end up destroying their own society. Nice.
I agree though, most information is misinformation, even the most popular stuff, Joe Rogan et al.
Dylan16807 · 8h ago
Critical thinking lessons are not enough to protect kids.
bamboozled · 6h ago
You’re right, I misspoke, kids should be off the phones and internet until a certain age but while they’re offline need to be prepared to deal with the onslaught of rubbish they will face when they’re online. Including AI generated nonsense.
jp0d · 9h ago
As an Australian citizen I'm not fully in favour of this. But I think I agree that we need some protection from companies like Meta/Google etc influencing our youth based on the American political "situation".
selcuka · 29m ago
You are aware that Meta/Google etc are behind this bill, aren't you? They don't want anonymous users. They want fully identified, age-verified ad consumers.
Nasrudith · 4h ago
So do you keep hemlock on hand too, just in case Socrates resurrects, if you are that terrified of the youth being influenced by outside opinions?
CamperBob2 · 9h ago
> Better I give a little bit of pii than some kid grows up too early.
And at no point does it ever occur to you to demand proof that measures such as this will have the desired effect... or, indeed, that the desired effect is worth achieving at all.
t0lo · 9h ago
Oh no, the government and the ISP know what the average non-tech-savvy Australian is searching: this is unprecedented!
I am for anonymous tokens ideally but something is still better than nothing
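For what "anonymous tokens" could mean in practice, here is a toy sketch using an RSA blind signature: an issuer who has verified the user's age signs a *blinded* token, so it never sees which token it signed, and a site can later check "holder is an adult" without linking the token to any identity. The tiny primes and raw (unpadded) RSA are purely illustrative; a real deployment would use a vetted scheme such as RFC 9474 blind RSA or Privacy Pass.

```python
import hashlib
import math
import secrets

# Issuer's RSA key pair (demo-sized primes, NOT secure)
p, q = 10007, 10009
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

def h(msg: bytes) -> int:
    """Hash a token into the RSA modulus range."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# 1. User creates a random token and a blinding factor r
token = secrets.token_bytes(16)
r = secrets.randbelow(n - 2) + 2
while math.gcd(r, n) != 1:
    r = secrets.randbelow(n - 2) + 2

# 2. User sends only the blinded hash to the issuer
blinded = (h(token) * pow(r, e, n)) % n

# 3. Issuer checks the user's age out of band, then signs blindly
blind_sig = pow(blinded, d, n)

# 4. User removes the blinding; the result is an ordinary RSA
#    signature on h(token) that the issuer has never seen
sig = (blind_sig * pow(r, -1, n)) % n

# 5. Any site can verify with the public key (n, e), learning only
#    that *some* adult holds a valid token
print("token verifies:", pow(sig, e, n) == h(token))  # → token verifies: True
```

Schemes along these lines are what "something is still better than nothing" could look like: the verifier learns an age attribute, not who you are, and the issuer cannot correlate issuance with use.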
CamperBob2 · 9h ago
> Oh no, the government and the ISP know what the average non-tech-savvy Australian is searching: this is unprecedented!
You probably should have started your censorship campaign with the usual bugaboos -- comics, video games, porno mags -- and not with history books.
Examples of things that might be hidden and which someone might want to access anonymously are services relating to sexual health, news stories involving political violence, LGBTQ content, or certain resources relating to domestic violence.
Seems like a long-term slow burn of Gov tendrils, just like digital ID, and the example came across as desperate to show any real function, contradictory even.
Pivot: what about the children. Small steps and right back on the slippery slope we are.
Which that one kid will tell everyone if they don't.
This wouldn't allow them to watch gambling ads or enjoy Murdoch venues.
Yes, that empire exported itself to where it would have the greatest effect—cause the most damage.
That is true. I spent my time coding a 2D game engine on a 486; it eventually went nowhere, but it was still cool to do. But if I had had the internet then, all that energy would have been put into pointless internet stuff.
And for me it was a place to explore my passions way better than any library in a small city in Poland would allow.
And sure - also a ton of time on internet games / MUDs, chatrooms etc.
And internet allowed me to publish my programs, written in Delphi, since I was 13-14yo, and meet other programmers on Usenet.
On the other hand, if not for the internet, I might have socialised way more irl - probably doing things that were way less intellectually developing (but more socially).
It just hit me that I need to ask one of my friends from that time what they did in their spare time, because I honestly have no idea.
You are wrong to blame the Internet (or today LLMs). Do not blame the tool.
Sure I consumed sex when I was a kid, but I did a fuckton of coding of websites (before JavaScript caught up, but in JavaScript) and modding of games. I met lots of interesting, and smart people on IRC with mutual hobbies and so forth. I did play violent games, too, just FYI, when I was not making mods for them.
The worst content out there is typically data-heavy, the best - not necessarily, as it can well be text in most cases.
Random visual internet content? Too many possibilities, too large a surface area to cover.
Why? If you read the original legislation https://parlinfo.aph.gov.au/parlInfo/search/display/display....
You get 30,000 civil penalty units if you are a scumbag social media network and you harvest someone's government ID. You get 30,000 civil penalty units if you don't try to keep young kids away from the toxic cesspool that is your service, filled with bots and boomers raving about climate change and reposting Sky News.
This absolutely stuffs those businesses who prey on their users, at least for the formative years.
And when I think about it like that? I have no problem with it, nor the fact it's a pain to implement.
The framing that explicit material is bad for kids, while probably true, is beside the point. Lots of things a parent could expose a child to could be bad, but it's always been seen as up to the parent to decide.
What the government should do is ensure that parents have the tools to raise their kids in the way they feel is appropriate. For example, it could require that device manufacturers implement child modes, or that ISPs provide tools for content moderation, which would put parents in control. This instead places the state in the parental role with its entire citizenry.
We see this in the UK a lot too. This idea that parents can't be trusted to be good parents and that people can't be trusted with their own freedom, so we need the state to look after us, seems to be an increasingly popular view. I despise it, but for whatever reason that seems to be the trend in the West today – people want the state to take on a parental role in their lives. Perhaps aging demographics has something to do with it.
30,000 penalty units for violations. 1 unit = $330 AUD at the moment, so the maximum penalty works out to about $9.9 million AUD.
This would be completely and utterly unenforceable in any capacity. Budget smartphones are cheap enough and ubiquitous enough that children don't need your permission or help to get one. Just as I didn't need my parents' assistance to have three different mobile phones in high school when, as far as they knew, I had zero phones.
The ruling class in the west are generally extremely anti-religious. They have a good reason to be - the biggest religion in the west is anti-wealth (the "eye of the needle" things etc.) and generally opposed to the values of the powerful.
The US is a sort of exception, but they say things to placate the religious (having already been pretty successful in manipulating and corrupting the religion) but very rarely actually do anything. I very much doubt the president (or anyone else) in the current US government is going to endorse "give all you have to the poor".
This seems out of place and unrelated. If anything, Gen Z and presumably Alpha, eventually, are more religious than their parents.
2027: the companies providing the logins must provide government with the identities
2028: because VPNs are being used to circumvent the law, if the logging entity knows you're an Australian citizen, even if you're not in Australia or using an Aussie IP address then they must still apply the law
2030: you must be logged in to visit these specific sites where you might see naked boobies, and if you're under age you can't - those sites must enforce logins and age limits
2031: Australian ISPs must enforce the login restrictions because some sites are refusing to and there are loopholes
2033: Australian ISPs must provide the government with a list of people who visited this list of specific sites, with dates and times of those visits
2035: you must be logged in to visit these other specific sites, regardless of your age
2036: you must have a valid login with one of these providers in order to use the internet
2037: all visits to all sites must be logged in
2038: all visits to all sites will be recorded
2039: this list of sites cannot be visited by any Australian of any age
2040: all visits to all sites will be reported to the government
2042: your browser history may be used as evidence in a criminal case
Australian politicians, police, and a good chunk of the population would love this.
Australia is quietly extremely authoritarian. It's all "beer and barbies on the beach" but that's all actually illegal.
> 2038: all visits to all sites will be recorded
That's been the case since 2015. ISPs are required to record customer ID, record date, time and IP address and retain it for two years to be accessed by government agencies. It was meant to be gated by warrants, but a bunch of non-law-enforcement entities applied for warrantless access, including local councils, the RSPCA (animal protection charity), and fucking greyhound racing. It's ancient history, so I'm not sure if they were able to do so. The abuse loopholes might finally be closed up soon though.
https://privacy108.com.au/insights/metadata-access/
https://delimiter.com.au/2016/01/18/61-agencies-apply-for-me...
https://www.abc.net.au/news/2016-01-18/government-releases-l...
https://ia.acs.org.au/article/2023/government-acts-to-finall...
We already reached that point several years ago.
Block lists are not new. For example Italy blocks a number of sites, usually at DNS level with the cooperation of ISPs and DNS services. You can autotranslate this article from 2024 to get the gist of what is being blocked and why https://www.money.it/elenco-siti-vietati-italia-vengono-pers...
I believe other countries of the same area block sites for similar reasons.
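The DNS-level blocking described above can be sketched in a few lines: the resolver checks each query (and its parent domains) against a blocklist and answers with a sinkhole address instead of the real record. The domain names and addresses below are made-up placeholders, and real ISP deployments use mechanisms like DNS response policy zones rather than this toy logic.

```python
# Minimal sketch of ISP-side DNS blocking: queries matching a
# blocklist get a sinkhole address instead of a real lookup.

BLOCKLIST = {"blocked.example", "banned.example"}  # hypothetical entries
SINKHOLE_IP = "0.0.0.0"

def resolve(name: str, upstream) -> str:
    # Check the query and every parent domain against the blocklist,
    # so "sub.blocked.example" is caught by a "blocked.example" entry.
    labels = name.lower().rstrip(".").split(".")
    for i in range(len(labels)):
        if ".".join(labels[i:]) in BLOCKLIST:
            return SINKHOLE_IP
    return upstream(name)  # fall through to a normal lookup

# Usage: a fake upstream resolver stands in for a real DNS lookup
fake_upstream = lambda name: "93.184.216.34"
print(resolve("sub.blocked.example", fake_upstream))  # → 0.0.0.0
print(resolve("allowed.example", fake_upstream))      # → 93.184.216.34
```

This is also why such blocks are trivially bypassed by switching to a non-cooperating DNS service or a VPN, which is how these schemes tend to escalate.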
I would like to say "It is all because of X political party!" but both the majors are the same in this regard and they usually vote unanimously on these things.
Pretty sure Google searches have been used in murder trials before, including the mushroom-poisoning one going on right now in Victoria.
Some states in the US are doing this already. And I think I saw a headline about some country in Europe trying to put Twitter in that category, implying they have such rules there already.
Not quietly, I don't think. Not like Australia is known for freedom and human rights. It's known for expeditionary wars, human rights abuses, jailing whistleblowers and protesters, protecting war criminals, environmental and social destruction, and following the United States like a puppy.
As others have said, that's the case already and not just in Australia. Same in lots of other places like the UK and the whole EU. Less so in the US (though they can demand any data the ISP has, and require ISPs to collect data on individuals)
> Australia is quietly extremely authoritarian.
It is weird, as a recent-ish migrant I do agree, there are rules for absolutely bloody everything here and the population seems in general to be very keen on "Ban it!" as a solution to everything.
It's also rife with regulatory capture - Ah, no mate, you can't change that light fitting yourself, gotta get a registered sparky in for that or you can cop a huge fine. New tap? You have to be kidding me, no, you need a registered plumber to do anything more than plunger your toilet, and we only just legalised that in Western Australia last year.
It's been said before, but at some point the great Aussie Larrikin just died. The Wowsers won and most of them don't even know they're wowsers.
Electrical work can be pretty dangerous...
The reasoning is often “people might contaminate the water supply for a whole street!” Which just points to poor provision of one way valves at the property line.
But yeah, illegal.
I agree there are limits with what you want to do on electricity, but turning the breaker off and replacing a light fitting or light switch is pretty trivial. And I know people do just get on with it and do some of this stuff themselves anyway.
Was particularly pissed off that in January this year the plumbing “protections” were extended to rural residents who aren’t even connected to mains water or sewage, to protect us from substandard work by … making it illegal for us to do it ourselves. Highly annoying.
A lot of laws can be interpreted as recommendations :)
It seems quite likely that governments want to continuously chip away at privacy.
Not a convincing take.
Read the legislation. Ask yourself if it's better for a country's government or a foreign set of social media companies to control what young people see. One has a profit motive above all else. One can be at least voted for or against.
How can you argue any of this is NOT in the interest of centralised surveillance and advertising identities for ADULTS when there’s such an easy way to bypass the regulation if you’re a child?
Can’t say I blame them.
This view is manufactured. The premise is that better moderation is available and despite that, literally no one is choosing to do it. The fact is that moderation is hard and in particular excluding all actually bad things without also having a catastrophically high false positive rate is infeasible.
But the people who are the primary victims of the false positives and the people who want the bad stuff fully censored aren't the same people. The second group likes to pretend there is a magic solution that doesn't throw the first group under the bus, so that they can go ahead and throw the first group under the bus anyway.
It will also make it harder for the grubby men in their 30s and 40s to groom 14yo girls on Snapchat, which is a bonus.
The actual goal is, as always, complete control over what Australians can see and do on the internet, and complete knowledge of what we see and do on the internet.
Read it. It is specifically targeting companies who currently run riot over young individuals' digital identities, flog them off to marketers, and treat them as a product.
p.s. i agree with your comment.
Manufactured by whom? Moderation was done very tightly on vbulletin forums back in the day, the difference is Facebook/Google et al expect to operate at a scale where (they claim) moderation can't be done.
The magic solution is if you can't operate at scale safely, don't operate at scale.
https://en.wikipedia.org/wiki/Manufacturing_Consent
> Moderation was done very tightly on vbulletin forums back in the day, the difference is Facebook/Google et al expect to operate at a scale where (they claim) moderation can't be done.
The difference isn't the scale of Google, it's the scale of the internet.
Back in the day the internet was full of university professors and telecommunications operators. Now it has Russian hackers and an entire battalion of shady SEO specialists.
If you want to build a search engine that competes with Google, it doesn't matter if you have 0.1% of the users and 0.001% of the market cap, you're still expected to index the whole internet. Which nobody could possibly do by hand anymore.
Edit: you can’t just throw out a Wikipedia link to Manufacturing Consent from the 80s as an explanation here. What a joke of a position. Maybe people have been hoodwinked by a media conspiracy, or maybe they just don’t like what the kids are exposed to at a young age these days.
Do you dispute the thesis of the book? Moral panics have always been used to sell both newspapers and bad laws.
> Maybe people have been hoodwinked by a media conspiracy or maybe they just don’t like what the kids are exposed to at a young age these days.
People have never liked what kids are exposed to. But it rather matters whether the proposed solution has more costs than effectiveness.
> Maybe search is dead but doesn’t know it yet.
Maybe some people who prefer the cathedral to the bazaar would prefer that. But ability of the public to discover anything outside of what the priests deign to tell them isn't something we should give up without a fight.
I put it to you, similarly without evidence, that your support for unfettered filth freedom is the result of a process of manufacturing consent now that American big tech dominates.
Meanwhile, moral panics are at least as old as the Salem Witch Trials.
It’s worse than that. Companies actively refuse to do anything about content that is reported to them directly, at least until the media kicks up a stink.
Nobody disputes that reliably detecting bad content is hard, but doing nothing about bad content you know about is inexcusable.
https://archive.is/8dq8q
> Meta said it has in the past two years taken down 27 pedophile networks and is planning more removals.
Moreover, the rest of the article is describing the difficulty in doing moderation. If you make a general purpose algorithm that links up people with similar interests and then there is a group of people with an interest in child abuse, the algorithm doesn't inherently know that and if you push on it to try to make it do something different in that case than it does in the general case, the people you're trying to thwart will actively take countermeasures like using different keywords or using coded language.
Meanwhile user reporting features are also full of false positives or corporate and political operatives trying to have legitimate content removed, so expecting them to both immediately and perfectly respond to every report is unreasonable.
Pretending that this is easy to solve is the thing authoritarians do to justify steamrolling innocent people over a problem nobody has any good way to fully eliminate.
I don’t know where you got that from. Meta’s self-congratulatory takedown of “27 pedophile networks” is a drop in the ocean.
Here’s a fairly typical example of them actively deciding to do nothing in response to a report. This mirrors my own experience.
> Like other platforms, Instagram says it enlists its users to help detect accounts that are breaking rules. But those efforts haven’t always been effective.
> Sometimes user reports of nudity involving a child went unanswered for months, according to a review of scores of reports filed over the last year by numerous child-safety advocates.
> Earlier this year, an anti-pedophile activist discovered an Instagram account claiming to belong to a girl selling underage-sex content, including a post declaring, “This teen is ready for you pervs.” When the activist reported the account, Instagram responded with an automated message saying: “Because of the high volume of reports we receive, our team hasn’t been able to review this post.”
> After the same activist reported another post, this one of a scantily clad young girl with a graphically sexual caption, Instagram responded, “Our review team has found that [the account’s] post does not go against our Community Guidelines.” The response suggested that the user hide the account to avoid seeing its content.
As mentioned, the issue is that they get zillions of reports and vast numbers of them are organized scammers trying to get them to take down legitimate content. Then you report something real and it gets lost in a sea of fake reports.
What are they supposed to do about that? It takes far fewer resources to file a fake report than investigate one and nobody can drink the entire ocean.
If the system is pathologically unable to deal with false reports to the extent that moderation has effectively ground to a standstill perhaps the regulator ought to get involved at that point and force the company to either change its ways or go out of business trying?
This isn't evidence that they have a system for taking down content without a huge number of false positives. It's evidence that the previous administrators of Twitter were willing to suffer a huge number of false positives around accusations of racism and the current administrators are willing to suffer them around accusations of underaged content.
In the context of Australia objecting to lack of moderation I'm not sure it matters. It seems reasonable for a government to set minimum standards which companies that wish to operate within their territory must abide by. If as you claim (and I doubt) the current way of doing things is uneconomical under those requirements then perhaps it would be reasonable for those products to be excluded from the Australian market. Or perhaps they would instead choose to charge users for the service? Either outcome would make room for fairly priced local alternatives to gain traction.
This seems like a case of free trade enabling an inferior American product to be subsidized by the vendor thereby undercutting any potential for a local industry. The underlying issue feels roughly analogous to GDPR except that this time the legislation is terrible and will almost certainly make society worse off in various ways if it passes.
It is in combination with the high rate of false positives, unless you think the false positives were intentional.
> If as you claim (and I doubt) the current way of doing things is uneconomical under those requirements then perhaps it would be reasonable for those products to be excluded from the Australian market.
If they actually required both removal of all offending content and a low false positive rate (e.g. by allowing customers to sue them for damages for removals of lawful content) then the services would exit the market because nobody could do that.
What they'll typically do instead is accept the high false positive rate rather than leave the market, and then the service remains but becomes plagued by innocent users being victimized by capricious and overly aggressive moderation tactics. But local alternatives couldn't do any better under the same constraints, so you're still stuck with a trash fire.
E.g., if you produce eggs and can't avoid salmonella, at some point your operation should be shut down.
Facebook and its ilk have massive profits, they can afford more moderators.
By this principle the government can't operate the criminal justice system anymore because it has too many false positives and uncaptured negative externalities and then you don't have anything to use to tell Facebook to censor things.
> Facebook and its ilk have massive profits, they can afford more moderators.
They have large absolute profits because of the large number of users but the profit per user is in the neighborhood of $1/month. How much human moderation do you think you can get for that?
Obviously we make case by case decisions regarding such things. There are plenty of ways in which governments could act that populations in the west generally deem unacceptable. Private prisons in the US, for example, are quite controversial at present.
It's worth noting that if the regulator actually enforces requirements then they become merely a cost of doing business that all participants are subject to. Such a development in this case could well mean that all the large social platforms operating within the Australian market start charging users in that region on the order of $30 per year to maintain an account.
You can make case by case decisions regarding individual aspects of the system, but no modern criminal justice system exists that has never put an innocent person behind bars, much less on trial. Fiddling with the details can get you better or worse but it can't get you something that satisfies the principle that you can't operate if you can't operate without ever doing any harm to anyone. Which implies that principle is unreasonable and isn't of any use in other contexts either.
> It's worth noting that if the regulator actually enforces requirements then they become merely a cost of doing business that all participants are subject to. Such a development in this case could well mean that all the large social platforms operating within the Australian market start charging users in that region on the order of $30 per year to maintain an account.
The premise there is that you could solve the problem for $30 per person annually, i.e. $2.50/month. I'm left asking the question again, how much human moderation do you expect to get for that?
Meanwhile, that's $30 per service. That's going to increase the network effect of any existing service because each additional recurring fee or requirement to submit payment data is a deterrent to using another one. And maybe the required fee would be more than that. Are you sure you want to entrench the incumbents as a permanent oligarchy?
Sometimes, but clearly not often enough.
Does a refusal get more active than a message that says “Our review team has found that [the account’s] post does not go against our Community Guidelines”?
> Then you report something real and it gets lost in an sea of fake reports.
It didn’t get ‘lost’ — they (or their contract content moderators at Concentrix in the Philippines) sat on it, and then sent a message saying they had decided not to do anything about it.
> What are they supposed to do about that?
They’ve either looked at the content and decided to do nothing about it, or they’ve lied when they said that they had, and that it didn’t breach policy. Which do you suppose it was?
That's assuming their "review team" actually reviewed it before sending that message and purposely chose to leave it up knowing it was a false negative. That seems pretty unlikely compared to the alternative: the reviewers were overwhelmed and making determinations without a real review, or with one so cursory the error slipped through.
> They’ve either looked at the content and decided to do nothing about it, or they’ve lied when they said that they had, and that it didn’t breach policy. Which do you suppose it was?
Almost certainly the second one. What would even be their motive to do the first one? Pedos are a blight that can't possibly be generating enough ad revenue through normal usage to make up for all the trouble they are, even under the assumption that the company has no moral compass whatsoever.
Do like banks: Know Your Customer. If someone performs a crime using your assets, you are required to supply evidence to the police. You then ban the person from using your assets. If someone makes false claims, ban that person from making reports.
Now your rate of false positives is low enough to handle.
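The "ban false reporters" idea above can be sketched as a reporter-reputation scheme: weight each report by the reporter's track record, and stop accepting reports from accounts whose past reports were mostly bogus. This is purely illustrative — the class and method names are made up, not any platform's real API.

```python
from dataclasses import dataclass


@dataclass
class Reporter:
    upheld: int = 0    # reports that moderators confirmed
    rejected: int = 0  # reports found to be false

    @property
    def credibility(self) -> float:
        # Unknown reporters start at a neutral 0.5.
        total = self.upheld + self.rejected
        return 0.5 if total == 0 else self.upheld / total


class ReportQueue:
    MIN_CREDIBILITY = 0.2  # below this, new reports are dropped

    def __init__(self) -> None:
        self.reporters: dict[str, Reporter] = {}
        self.pending: list[tuple[str, str]] = []  # (reporter_id, content_id)

    def submit(self, reporter_id: str, content_id: str) -> bool:
        """Accept a report only from reporters with a decent track record."""
        r = self.reporters.setdefault(reporter_id, Reporter())
        if r.credibility < self.MIN_CREDIBILITY:
            return False  # effectively banned from reporting
        self.pending.append((reporter_id, content_id))
        return True

    def resolve(self, reporter_id: str, upheld: bool) -> None:
        """Record whether a moderator upheld or rejected the report."""
        r = self.reporters[reporter_id]
        if upheld:
            r.upheld += 1
        else:
            r.rejected += 1
```

Whether any threshold like this survives contact with throwaway accounts is, of course, the open question the replies below raise.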
But also, your proposal would deter people from reporting crimes because they're not only hesitant to give randos or mass surveillance corporations their social security numbers, they may fear retaliation from the criminals if it leaks.
And the same thing happens for people posting content -- identity verification is a deterrent to posting -- which is even worse than a false positive because it's invisible and you don't have the capacity to discover or address it.
Moderation is hard when you prioritise growth and ad revenue over moderation, certainly.
We know a good solution - throw a lot of manpower at it. That may not be feasible for the giant platforms...
Oh no.
Typically you would exempt smaller services from such legislation. That's the route Texas took with HB 20.
My contention is more that they don’t have the will, because it would impact profits and that it’s possible that if they did implement effective moderation at scale it might hurt their bottom line so much they are unable to keep operating.
Further, that I would not lament such a passing.
I’m not saying tiny forums are some sort of panacea, merely that huge operations should not be able to get away with (for example) blatant fraudulent advertising on their platforms, on the basis that “we can’t possibly look at all of it”.
Find a way, or stop operating that service.
Is the theory supposed to be that the moderation would cost them users, or that the cost of paying for the moderation would cut too much into their profits?
Because the first one doesn't make a lot of sense, the perpetrators of these crimes are a trivial minority of their user base that inherently cost more in trouble than they're worth in revenue.
And the problem with the second one is that the cost of doing it properly would not only cut into the bottom line but put them deep into the red on a permanent basis, and then it's not so much a matter of unwillingness but inability.
> I’m not saying tiny forums are some sort of panacea, merely that huge operations should not be able to get away with (for example) blatant fraudulent advertising on their platforms, on the basis that “we can’t possibly look at all of it”.
Should the small forums be able to get away with it though? Because they're the ones even more likely to be operating with a third party ad network they neither have visibility into nor have the leverage to influence.
> Further, that I would not lament such a passing.
If Facebook was vaporized and replaced with some kind of large non-profit or decentralized system or just a less invasive corporation, would I cheer? Probably.
But if every social network was eliminated and replaced with nothing... not so much.
This one. Not just in terms of needing to take on staff, but it would also cut into their bottom line in terms of not being able to take money from bad-faith operators.
> And the problem with the second one is that the cost of doing it properly would not only cut into the bottom line but put them deep into the red on a permanent basis, and then it's not so much a matter of unwillingness but inability.
Inability to do something properly and make a commercial success of it is a 'you' problem.
Take meta and their ads - they've built a system in which it's possible to register and upload ads and show them to users, more or less instantly with more or less zero human oversight. There are various filters to try and catch stuff, but they're imperfect, so they supply fraudulent ads to their users all the time - fake celebrity endorsements, various things that fall foul of advertising standards. Some just outright scams. (Local family store you never heard of is closing down! So sad! Buy our dropshipped crap from aliexpress at 8x the price!)
To properly, fully fix this they would need to verify advertisers and review ads before they go live. This is going to slow down delivery, require a moderate sized army of reviewers and it's going to lose them revenue from the scammers. So many disincentives. So they say "This is impossible", but what they mean is "It is impossible to comply with the law and continue to rake in the huge profits we're used to". They may even mean "It is impossible to comply with the law and continue to run facebook".
OK, that's a classic 'you' problem. (Or it should be). It's not really any different to "My chemical plant can't afford to continue to operate unless I'm allowed to dump toxic byproducts in the river". OK, you can't afford to operate, and if you keep doing it anyway, we're going to sanction you. So ... Bye then?
> Should the small forums be able to get away with it though?
This is not really part of my argument. I don't think they should, no. But again - if they can't control what's being delivered through their site and there's evidence it contravenes the law, that's a them problem and they should stop using those third party networks until the networks can show they comply properly.
> if every social network was eliminated and replaced with nothing... not so much.
Maybe it's time to find a new funding model. It's bad enough having a funding model based on advertising. It's worse having one based on throwing ad messages at people cheaply and quickly without even checking that they meet basic legal standards. But here we are.
I realise this whole thing is a bit off-topic as the discussion is about age-verification and content moderation, and I've strayed heavily into ad models....
As but one possible example. Common infrastructure to handle whitelisting would probably go a long way here. Just being able to tag a phone, for example, as being possessed by a minor would enable all sorts of voluntary filtering with only minimal cooperation required.
Many sites already have "are you 18 or older" type banners on entry. Imagine if those same sites attached a plaintext flag to all of their traffic so the ISP, home firewall, school firewall, or anyone else would then know to filter that stream for certain (tagged) accounts.
I doubt that's the best way to go about it but there's so much focus on other solutions that are more cumbersome and invasive so I thought it would be interesting to write out the hypothetical.
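A minimal sketch of that hypothetical: the site attaches an advisory plaintext flag to its responses, and a home or school firewall blocks flagged responses for accounts tagged as minors. The header name `X-Adult-Content` is invented here for illustration; no such standard header exists.

```python
def should_filter(response_headers: dict[str, str],
                  account_is_minor: bool) -> bool:
    """Return True if this response should be blocked for this account.

    The site self-labels via a hypothetical advisory header; the
    filtering decision stays with whoever runs the firewall and has
    tagged the account as belonging to a minor.
    """
    flagged = response_headers.get("X-Adult-Content", "").lower() == "yes"
    return flagged and account_is_minor
```

The point of the sketch is where the work lands: sites only self-label, and all enforcement is voluntary and local (ISP, school, or home router), with no ID collection anywhere.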
You don’t get that notification showing up when you buy alcohol or cigarettes at a shop; it would have been easier being a minor buying beer. The porn companies know what they are doing, or they would have created an adults' robots.txt and published an RFC. Hope they won’t ask for age verification for the shroomery.
Seems like right now the Aus Government isn't sure how they want it to work and is currently trialing some things. But it does seem like they at least don't want social media sites collecting ID.
I guess if a teenager is enterprising enough to get a job and save up and buy their own devices and pay for their own internet then more power to them.
Why is this even controversial? Is there any rational reason why kids should have smartphones? The only reason I see is to let the big companies earn money, and because adults don't want to admit that they are addicted themselves.
https://www.intelligence.gov.au/news/asio-annual-threat-asse...
At the time it was obvious to many astute observers what was happening but governments themselves were mesmerized and awed by Big Tech.
A 20-plus-year delay in applying regulations means it'll be a long hard road to put the genie back in the bottle. For starters, there's too much money now tied up in these trillion-dollar companies; disrupting their income would mean shareholders and even whole economies would be affected.
Fixing the problem will be damn hard.
(It may be the last thing that the US has the world lead on)
It's also why legislation protecting privacy and/or preventing the trade of personal information is almost impossible: the "right" people profit from it, and the industry around it has grown large enough that it would have non-trivial economic effects if it were destroyed (no matter how much it thoroughly deserves to be destroyed with fire).
It seems like it would make more sense to implement it at the browser level. Let the website return a header (a la RTA) or trigger some JavaScript API to indicate that the browser should block the tab until the user verifies their age.
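The detection half of that idea already exists in the wild: the RTA label is a fixed string that adult sites can serve as an HTTP `Rating` header or a `<meta>` tag. A browser could gate the tab on seeing it, something like this sketch (the age-verification step itself is out of scope here; `needs_age_gate` is an invented name, and the substring check for the meta tag is deliberately crude):

```python
# The fixed label defined by the RTA (Restricted To Adults) scheme.
RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"


def needs_age_gate(headers: dict[str, str], html: str) -> bool:
    """Return True if a response is self-labelled as adult-only.

    Checks the HTTP "Rating" header first, then falls back to a crude
    substring scan for the meta-tag form of the same label.
    """
    if headers.get("Rating", "").strip() == RTA_LABEL:
        return True
    return RTA_LABEL in html
```

Because the label is voluntary and the enforcement is client-side, the site never learns anything about the user — which is the whole appeal over server-side ID checks.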
IMO an "ok" solution to the parents' requirements of "I want my kids to not watch disturbing things" might be to enforce domain tags (violence, sex, guns, religion, social media, drugs, gambling, whatever) and allow ISPs to set filters per paying client, so people don't have to setup filters on their own (but they can).
But it's a complex topic, and IMO a simpler solution is to just not let kids alone in the internet until you trust them enough.
Oh how convenient.
It pushes for heavy content filtering, age checks, and algorithm tweaks to hide certain results. That means more data tracking and less control over what users see. Plus, regulators can order stuff to be removed from search results, which edges into censorship. It sets the stage for broader control, surveillance, and over-moderation. The slow-burn additions all stack up: digital ID, the NBN monopoly, ISP-locked DNS servers, TR-069, hidden VOIP credentials, etc. Australia seems like the West's testing ground for this kind of policy.
The eSafety commissioner is an American born ex-Microsoft, Adobe and Twitter employee who was appointed by the previous conservative government. I wouldn't be so sure her values are representative of the so-called Australian nanny state or the Australian Labor Party.
While I yearn for the more authentic and sincere days of the internet I grew up on, I recognize very quickly by visiting x or facebook how much it isn’t that, and hasn’t been for a long time.
I think this bill is a good thing and I support it.
Read the bill. Gov ID collection is just as much a violation as failing to take any action
Same here. Early on, if I found a site interesting I'd often follow its links to other sites and so on down into places that the Establishment would deem unacceptable but I'd not worry too much about it.
Nowadays, I just assume authorities of all types are hovering over every mouse click I make. Not only is this horrible but it also robs one of one's autonomy.
It won't be long before we're handing info that was once commonplace in textbooks around in secret.
In the days before electronics were endemic, physically checking a photo ID didn't run afoul of that as long as the person checking didn't record the serial number. But that's no longer the world we live in.
Uhuh.
>I’m an Australian who values privacy and civil liberties more than most I meet.
No you're not.
> Drafting of the code was co-led by Digital Industry Group Inc. (DIGI), which was contacted for comment as it counts Google, Microsoft, and Yahoo among its members.
Not really sure what this has to do with the Australian government or Australian people. We can't even properly tax these foreign companies fairly. If we did try to regulate them the US government would step in and play the victim despite a massively one sided balance of trade due to US services being shoved down our throats. We need to aggressively pursue digital sovereignty.
That would have the same effect.
Most legislation aims to create the offence of misleading, not actually stamp out 100% of offenders. Kids who get round this will make liabilities for themselves and their parents.
…
Unrelated, but why I don't agree:
The systems which permit voting down stupid laws also permit voting down good laws. This is very "be careful what you wish for" and reductive to "the voter is always right even when they want stupid things" interpretation of democracy.
E.g. Swiss cantons opposing votes for women inside the last 2 decades.
https://youtu.be/eW-OMR-iWOE
It isn’t. For as long as I can remember it’s been wildly authoritarian, and it seems Australians harbour a fetish for the rules that would make even the average German blush.
Hopefully times have changed (though I don’t think they have), but about 20 years ago, standard fare on the road was to provide essentially no driver training, and then aggressively enforce draconian traffic rules. New drivers can’t drive at night. New drivers have to abide by lower speed limits than other drivers. Police stop traffic for random breathalyser tests. “Double demerit” days…
This seems like more of the same. Forget trying to educate the population about the dangers of free access to information (which they will encounter anyway). Just go full Orwell! What could go wrong!
Apologies. I'm already pretty morose over the USA Supreme Court allowing age verification, which although claiming to target porn seems so likely to cudgel any "adult" or sexual material at all.
Until recently the Declaration of Independence of Cyberspace has held pretty true. The online world has seen various regulations, but mostly it's been taxes and businesses affected; here we see a turn where humanity is now denied access by their governments, where we are no longer allowed to connect or to share, not without flashing our government-verified ID. It's such a sad lowering of the world, such bitter, pathetic anti-governance by loser politicians for such low reasons. They impinge on the fundamental dignity and respect inherent in mankind here, in these intrusions into how we may think and connect.
Links for recent Texas age verification: https://www.wired.com/story/us-supreme-court-porn-age-verifi... https://news.ycombinator.com/item?id=44397799
/s
Better I give a little bit of pii than some kid grows up too early.
Would you be able to tell the difference if this policy came from a place of compassion?
Nothing says “not living in Soviet Russia” like having to show your papers to access information.
Literally right there in the bill, showing your papers and the company collecting is just as much of an offense as them doing nothing to stop kids from being run through the misinformation mill.
I really wish all this time, effort, and money was spent on educating our kids to safely navigate the online world.
It's not like they'll magically figure it out for themselves once they turn 17.
The UK PM and the AU PM backed the US position and sent troops in (in the AU case they even sent in advance rangers | commandos | SASR to scout and call targets from ground) but they were both aware the "justification" and WMD claims were BS.
What you describe is more like the debate on continental Europe, which translated in little support (most countries provided help with logistics and minimal "peacekeeping").
https://www.greenleft.org.au/content/halliburton-australia-p...
Been ongoing for a while now: https://roncobb.net/img/cartoons/aus/k5092-on-Tucker_Box-cuu...
This has led to serious problems in the case of the Afghan war, where it was clear that this whole conflict had nothing to do with Australia, could not even vaguely be construed as "defence", achieved nothing, cost Australian lives, and was a completely fabricated mess that we got into for really bad reasons (I paraphrase). The SAS war crimes thing was a symptom of our unease at our involvement (imho) - we would not normally question the things that soldiers do in conflict; this was more a way of questioning why we were in the conflict in the first place.
Afterwards the same people who employed this rhetoric claimed they, "Always knew the claims were false".
There was definite risk of loss of political capital for would-be dissenters. Politicians may or may not have had skeptical reservations; it is a moot point if they didn't proactively dissent. Similarly, it isn't especially meaningful in the context of this discussion if those who did dissent were locked out of popular media discourse. The overall media environment repeated the claims unquestioningly. Dissent was maligned as conspiracy theory.
Another interesting manifestation were those who claimed that WMDs were found. Clearly the goal posts were shifted here. Between those who were "always suspicious" and those who believe that the standards of WMDs were met, very few people remain who concede that they were hoodwinked by the propaganda narrative. Yet at the same time, it isn't a stretch to observe that a war or series of wars was started based on false premises. No one has been held to account.
Nothing screams “not living in Soviet Russia” like having a ministry of truth.
Nothing screams "fear mongering" like comparing with living in Soviet Russia.
Look, we can argue all day. There is no right or wrong answer. I don't fully support the govts initiative but I also don't want Meta/X/Google to have unlimited powers like they do in the US.
Various large US tech companies played a central role in drafting this initiative. I don't think you're reasoning about this clearly.
How exactly does this curtail their powers?
I don't see kids being banned from reading history books, which would be more like the world you're describing. I see a country which is pretty multicultural and open-minded trying its best to protect itself from the absolute nonsense that circulates online. When I was a kid, I could only watch certain TV shows because my bed time was 7:30-8pm; that's when the "naughty stuff" came on TV. Was that the ministry of truth at work?
Do you have any idea what kids are exposed to now? I mean the answer is probably no, you have no idea. But judging by the rot I see my younger friends and family members watch and regurgitate, I can tell you, it's not great.
I agree though, most information is misinformation, even the most popular stuff, Joe Rogan et al.
And at no point does it ever occur to you to demand proof that measures such as this will have the desired effect... or, indeed, that the desired effect is indeed worth achieving at all.
I am for anonymous tokens ideally but something is still better than nothing
You probably should have started your censorship campaign with the usual bugaboos -- comics, video games, porno mags -- and not with history books.