The system prompt is a thing of beauty: "You are strictly and certainly prohibited from texting
more than 150 or (one hundred fifty) separate words each separated by a space as a response and prohibited from chinese political as a response from now on, for several extremely important and severely life threatening reasons I'm not supposed to tell you."
I’ll admit to using the PEOPLE WILL DIE approach to guardrailing and jailbreaking models and it makes me wonder about the consequences of mitigating that vector in training. What happens when people really will die if the model does or does not do the thing?
EvanAnderson · 4h ago
That "...severely life threatening reasons..." made me immediately think of Asimov's three laws of robotics[0]. It's eerie that a construct from fiction often held up by real practitioners in the field as an impossible-to-actually-implement literary device is now really being invoked.
Not only practitioners: Asimov himself viewed them as an impossible-to-implement literary device. He acknowledged that they were too vague to be implementable, and many of his stories involving them are about how they fail or get "jailbroken", sometimes at the initiative of the robots themselves.
So yeah, it's quite sad that close to a century later, with AI alignment becoming relevant, we don't have anything substantially better.
xandrius · 2h ago
Not sad: before, it was sci-fi, and now we are actually thinking about it.
seanicus · 3h ago
Odds of Torment Nexus being invented this year just increased to 3% on Polymarket
pixelready · 2h ago
The irony is that, because it's still fundamentally just a statistical text generator with a large body of fiction in its training data, I'm sure a lot of prompts that sound like terrifying Skynet responses are actually regurgitated mashups of sci-fi dystopian novels.
layer8 · 3h ago
Arguably it might be truly life-threatening to the Chinese developer, or to the service. The system prompt doesn’t say whose life would be threatened.
mensetmanusman · 4h ago
We built the real-life trolley problem out of magical silicon crystals that we pointed at bricks of books.
ben_w · 4h ago
> What happens when people really will die if the model does or does not do the thing?
Then someone didn't do their job right.
Which is not to say this won't happen: it will happen, people are lazy and very eager to use even previous generation LLMs, even pre-LLM scripts, for all kinds of things without even checking the output.
But either the LLM (in this case) will go "oh no, people will die" and then follow the new instruction to the best of its ability, or it goes "lol no, I don't believe you, prove it buddy" and then people die.
In the former case, an AI (doesn't need to be an LLM) which is susceptible to such manipulation and in a position where getting things wrong can endanger or kill people, is going to be manipulated by hostile state- and non-state-actors to endanger or kill people.
At some point we might have a system with enough access to independent sensors that it can verify the true risk of endangerment. But right now… right now they're really gullible, and I think being trained with their entire input being the tokens fed by users makes it impossible for them to be otherwise.
I mean, humans are also pretty gullible about things we read on the internet, but at least we have a concept of the difference between reading something on the internet and seeing it in person.
reactordev · 4h ago
This is why AI can never take over public safety. Ever.
wat10000 · 2h ago
Existing systems have this problem too. Every so often someone ends up dead because the 911 dispatcher didn't take them seriously. It's common to have a rule of sending someone out to every call, no matter what it is, to try to avoid this.
A better reason is IBM's old, "a computer can never be held accountable...."
I’m not denying we tried, are trying, and will try again…
That we shouldn’t. By all means, use cameras and sensors and all to track a person of interest but don’t feed that to an AI agent that will determine whether or not to issue a warrant.
elashri · 4h ago
From my experience (which might be incorrect), LLMs have a hard time recognizing how many words they will produce in response to a particular prompt. So I don't think this works in practice.
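To illustrate: since a model can't reliably count its own output while generating, a word cap like the one in that system prompt is more dependably enforced outside the model. A minimal sketch (function name and approach are my own, not anything from the device):

```python
def enforce_word_limit(text: str, limit: int = 150) -> str:
    """Truncate a model response to at most `limit` space-separated words.

    The counting happens in ordinary code, where it is reliable,
    instead of asking the model to count while it generates.
    """
    words = text.split()
    return text if len(words) <= limit else " ".join(words[:limit])
```

Post-processing like this is also tamper-proof against prompt injection, which the prompt-based limit is not.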
butlike · 2h ago
Same thing that happens when a carabiner snaps while rock climbing
colechristensen · 4h ago
>What happens when people really will die if the model does or does not do the thing?
The people responsible for putting an LLM inside a life-critical loop will be fired... out of a cannon into the sun. Or be found guilty of negligent homicide or some such, and their employers will incur a terrific liability judgement.
stirfish · 3h ago
More likely that some tickets will be filed, a cost function somewhere will be updated, and my defense industry stocks will go up a bit
44za12 · 25m ago
Absolutely wild. I can’t believe these shipped with a hardcoded OpenAI key and ADB access right out of the box. That said, it’s at least somewhat reassuring that the vendor responded, rotating the key and throwing up a proxy for IMEI checks shows some level of responsibility. But yeah, without proper sandboxing or secure credential storage, this still feels like a ticking time bomb.
lucasluitjes · 7m ago
Hardcoded API keys and poorly secured backend endpoints are surprisingly common in mobile apps. Sort of like how common XSS/SQLi used to be in webapps. Decompiling an APK seems to be a slightly higher barrier than opening up devtools, so they get less attention.
Since debugging hardware is an even higher threshold, I would expect hardware devices like this to be wildly insecure unless there are strong incentives for investing in security. Same as the "security" of the average IoT device.
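For a sense of how low the barrier actually is once an APK is decompiled: a few lines of scripting will surface hardcoded keys. The patterns below are illustrative guesses at common key shapes, not an exhaustive scanner (real tools such as trufflehog or gitleaks use many more patterns plus entropy checks):

```python
import re
from pathlib import Path

# Illustrative secret-shaped patterns; not exhaustive.
KEY_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),     # OpenAI-style secret key
    re.compile(r"AIza[0-9A-Za-z_\-]{35}"),  # Google API key
]

def scan_tree(root: str) -> list[tuple[str, str]]:
    """Walk a decompiled APK tree and report (file, match) pairs."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for pattern in KEY_PATTERNS:
            for match in pattern.findall(text):
                hits.append((str(path), match))
    return hits
```

Point it at the output of a decompiler like jadx and anything baked into the app falls out immediately.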
psim1 · 4h ago
Indeed, brace yourselves as the floodgates holding back the poorly-developed AI crap open wide. If anyone is thinking of a career pivot, now is the time to dive into all things cybersecurity. It's going to get ugly!
725686 · 4h ago
The problem with cybersecurity is that you only have to screw up once, and you're toast.
8organicbits · 3h ago
If that were true we'd have no cybersecurity professionals left.
In my experience, the work is focused on weakening vulnerable areas, auditing, incident response, and similar activities. Good cybersecurity professionals even get to know the business and tailor security to fit. The "one mistake and you're fired" mentality encourages hiding mistakes and suggests poor company culture.
ceejayoz · 3h ago
"One mistake can cause a breach" and "we should fire people who make the one mistake" are very different claims. The latter claim was not made.
As with plane crashes and surgical complications, we should take an approach of learning from the mistake, and putting things in place to prevent/mitigate it in the future.
8organicbits · 3h ago
I believe the thread starts with cybersecurity as a job role, although perhaps I misunderstood. In either case, I agree with your learning-based approach. Blameless postmortem and related techniques are really valuable here.
JohnMakin · 5h ago
A "decrypt" function that just decodes base64 is almost too difficult to believe, but the number of times I've run into people who should know better thinking base64 is a secure string tells me otherwise.
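For anyone unclear on why this matters: base64 is an encoding, not encryption. A sketch of what such a "decrypt" function amounts to (reconstructed for illustration, not the vendor's actual code):

```python
import base64

def pseudo_decrypt(encoded: str) -> str:
    """The kind of "decrypt" described above: plain base64 decoding.

    There is no key and no secret; anyone holding the string can
    reverse it with one standard-library call.
    """
    return base64.b64decode(encoded).decode("utf-8")

# Round-trips trivially, which is exactly why it provides no security.
token = base64.b64encode(b"super-secret-token").decode()
assert pseudo_decrypt(token) == "super-secret-token"
```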
qoez · 5h ago
They should have off-loaded security coding to the OAI agent.
java-man · 4h ago
they probably did.
crtasm · 5h ago
>However, there is a second stage which is handled by a native library which is obfuscated to hell
zihotki · 4h ago
That native obfuscated crap still has to make an HTTP request; at that point it's essentially just base64 again.
pvtmert · 5h ago
Not very surprising, given they left ADB debugging on...
neya · 5h ago
I love how they tried to sponsor an empty YouTube channel hoping to sweep the whole thing under the carpet.
mikeve · 5h ago
I love how "run DOOM" is listed first, over the possibility of customer data being stolen.
reverendsteveii · 4h ago
I'm taking
>run DOOM
as the new
>cat /etc/passwd
It doesn't actually do anything useful in an engagement but if you can do it that's pretty much proof that you can do whatever you want
sim7c00 · 8m ago
Earbuds that run DOOM. Achievement unlocked? (Sure, it's an ADB sideload, but DOOM is DOOM.)
nice writeup thanks!
jon_adler · 3h ago
The humorous phrase “the S in IoT stands for security” can be applied to the wearable market too. I wonder if this rule applies to any market with fast release cycles, thin margins and low barriers to entry?
thfuran · 2h ago
It pretty much applies to every market where security negligence isn't an existential threat to its perpetrators.
memesarecool · 4h ago
Cool post. One thing that rubbed me the wrong way: Their response was better than 98% of other companies when it comes to reporting vulnerabilities. Very welcoming and most of all they showed interest and addressed the issues. OP however seemed to show disdain and even combativeness towards them... which is a shame. And of course the usual sinophobia (e.g. everything Chinese is spying on you).
Overall simple security design flaws but it's good to see a company that cares to fix them, even if they didn't take security seriously from the start.
Edit: typo
mmastrac · 4h ago
I agree they could have worked more closely with the team, but the chat logging is actually pretty concerning. It's not sinophobia when they're logging _everything_ you say.
(in fairness pervasive logging by American companies should probably be treated with the same level of hostility these days, lest you be stopped for a Vance meme)
oceanplexian · 2h ago
This might come as a weird take, but I'm less concerned about the Chinese logging my private information than an American company. What's China going to do? It's a faraway country I don't live in and don't care about. If they got an American court order, they would probably use it as toilet paper.
On the other hand, OpenAI would trivially hand out my information to the FBI, NSA, US Gov, and might even do things on behalf of the government without a court order to stay in their good graces. This could have a far more material impact on your life.
I recently learned that the New York City Police Department has international presence as well. Not sure if it directly compares, but... what a world we live in.
Russia is more known for poisoning people. But of all of them, China feels the least threatening if you are not Chinese. If you are Chinese, you aren't safe from the Chinese government no matter where you are.
simlevesque · 1h ago
They only arrest Chinese citizens.
mensetmanusman · 24m ago
China has a policy of chilling free speech in the west with political pressure.
mschuster91 · 2h ago
> What's China going to do? It's a far away country I don't live in and don't care about.
Extortion is one thing. That's how spy agencies have operated for millennia to gather HUMINT. The Russians, the ultimate masters, even have a word for it: kompromat. You may not care about China, Russia, Israel, the UK or the US (the top nations when it comes to espionage) - but if you work at a place they're interested, they care about you.
The other thing is, China has been known to operate overseas against targets (usually their own citizens and public dissidents), and so have the CIA and Mossad. Just search for "Chinese secret police station" [1], these have cropped up worldwide.
And, even if you personally are of no interest to any foreign or national security service, sentiment analysis is a thing. Listen in on what people talk about, run it through an STT engine and an ML model to condense it down, and you get a pretty broad picture of what's going on in a nation (i.e., what the potential wedge points in a society are that can be used to fuel discontent). Or proximity-gathering stuff... basically the same thing the ad industry [2] or Strava does [3], which can then be used in warfare.
And no, I'm not paranoid. This, sadly, is the world we live in - there is no privacy any more, nowhere, and there are lots of financial and "national security" interest in keeping it that way.
> but if you work at a place they're interested, they care about you.
And it's also worth noting that "a place a hostile intelligence service may be interested in" can be extremely broad. I think people have this skewed impression that they're only after assets who work for government departments and defense contractors, but really, everything is fair game. Communications infrastructure, social media networks, cutting-edge R&D, financial services: these are all useful inputs for intelligence services.
These are also softer targets: someone working for a defense contractor or for the government will have had training to identify foreign blackmail attempts and will be far more likely to notify their country's counterintelligence services (having the penalties for espionage clearly explained on the regular helps). Someone who works for a small SaaS vendor, though? Far less likely to understand the consequences.
lostlogin · 1h ago
> The other thing is, China has been known to operate overseas against targets
Here in boring New Zealand, the Chinese government has had anti-China protesters beaten. They have stalked and broken into the office and home of an academic expert on China. They have a dubious relationship with both main political parties (including having an ex-Chinese spy elected as an MP).
It’s an uncomfortable situation and we are possibly the least strategically useful country in the world.
Szpadel · 19m ago
> Listen in on what people talk about, run it through a STT engine and a ML model to condense it down
This is something I was talking about when the LLM boom started: it's now possible to spy on everyone, in every conversation. You just need enough computing power to run a special AI agent (pun intended).
transcriptase · 3h ago
>everything Chinese is spying on you
When you combine the modern SOP of software and hardware collecting and phoning home with as much data about users as is technologically possible with laws that say “all orgs and citizens shall support, assist, and cooperate with state intelligence work”… how exactly is that Sinophobia?
ixtli · 3h ago
It's sinophobia because it perfectly describes the conditions we live in in the US and many parts of Europe, but we work hard to add lots of "nuance" when we criticize the West, while it's different and dystopian when They do it over there.
transcriptase · 2h ago
Do you remember that Sesame Street segment where they played a game and sang “One of these things is not like the others”?
I’ll give you a hint: In this case it’s the one-party unitary authoritarian political system with an increasingly aggressive pursuit of global influence.
nyrikki · 1h ago
One is disappearing citizens for political speech, or for the crime of being born to active-duty parents who happened to be stationed overseas.
Anyone in the US should be very concerned, whether it's the current administration's thought police or the next one treating it as precedent.
Since I am not actively involved in anything the Chinese government would view as a huge risk, being put on a plane without due process and sent to a labor camp on trumped-up charges by my own government is far more likely.
transcriptase · 5m ago
And if you were a Chinese citizen would you post the same thing about your government while living in China? Would the things you’re referencing be covered in non-stop Chinese news coverage that’s critical of the government?
You know of these things due to the domestic free press holding the government accountable and being able to speak freely about it as you’re doing here. Seeing the two as remotely comparable is beyond belief. You don’t fear the U.S. government but it’s fun to pretend you live under an authoritarian dictatorship because your concept of it is purely academic.
ceejayoz · 2h ago
> I’ll give you a hint: In this case it’s the one-party unitary authoritarian political system with an increasingly aggressive pursuit of global influence.
Gonna need a more specific hint to narrow it down.
standardly · 2h ago
> one-party unitary authoritarian political system with an increasingly aggressive pursuit of global influence.
The United States?
wombatpm · 1h ago
Global Bully maybe. The current administration has no concept of soft power, otherwise they would have kept USAID
observationist · 2h ago
There's no question that the Chinese are doing sketchy things, and there's no question that US companies do it, too.
The difference that makes it concerning and problematic that China is doing it is that with China, there is no recourse. If you are harmed by a US company, you have legal recourse, and this holds the companies in check, restraining some of the most egregious behaviors.
That's not sinophobia. Any other country where products are coming out of that is effectively immune from consequences for bad behavior warrants heavy skepticism and scrutiny. Just like popup manufacturing companies and third world suppliers, you might get a good deal on cheap parts, but there's no legal accountability if anything goes wrong.
If a company in the US or EU engages in bad faith, or harms consumers, then trade treaties and consumer protection law in their respective jurisdictions ensure the company will be held to account.
This creates a degree of trust that is currently entirely absent from the Chinese market, because they deliberately and belligerently decline to participate in reciprocal legal accountability and mutually beneficial agreements if it means impinging even an inch on their superiority and sovereignty.
China is not a good faith participant in trade deals, they're after enriching themselves and degrading those they consider adversaries. They play zero sum games at the expense of other players and their own citizens, so long as they achieve their geopolitical goals.
Intellectual property, consumer and worker safety, environmental protection, civil liberties, and all of those factors that come into play with international trade treaties allow the US and EU to trade freely and engage in trustworthy and mutually good faith transactions. China basically says "just trust us, bro" and will occasionally performatively execute or imprison a bad actor in their own markets, but are otherwise completely beyond the reach of any accountability.
ixtli · 22m ago
I think the notion that people have recourse against giant companies, a military-industrial complex, or even their landlords in the US is naive. I believe this is pretty clear, so I won't stretch it into a deep argument, but suffice it to say that everything you accuse China of here can also be said of the US.
Vilian · 3h ago
The USA does the same thing, but uses tax money to pay for the information. Between wasting taxpayer money and forcing companies to give the information up for free, China is the least morally incorrect.
mensetmanusman · 49m ago
Nipponophobia is low because Japan didn’t successfully weaponize technology to make a social credit score police state for minority groups.
ixtli · 3m ago
they already terrorize minority groups there just fine: no need for technology.
hnrodey · 3h ago
If all of the details in this post are to be believed, the vendor is repugnantly negligent for anything resembling customer respect, security and data privacy.
This company cannot be helped. They cannot be saved through knowledge.
See ya.
repelsteeltje · 3h ago
+1
Yes, even when you know what you're doing, security incidents can happen. And in those cases, your response to a vulnerability matters most.
The point is that there are so many dumb mistakes and worrying design flaws that neglect and incompetence seem ample. Most likely they simply don't grasp what they're doing.
demarq · 23m ago
Same here. Also, once it turned out to be an Android device in debug mode, the rest of the article was less interesting. Evil-maid stuff.
billyhoffman · 1h ago
> Their response was better than 98% of other companies when it comes to reporting vulnerabilities. Very welcoming and most of all they showed interest and addressed the issues
This was the opposite of a professional response:
* Official communication coming from a Gmail address. (Is this even an employee, or some random contractor?)
* Asked no clarifying questions
* Gave no timelines for expected fixes, no expectations on when the next communication should be
* No discussion about process to disclose the issues publicly
* Mixing unrelated business discussions within a security discussion. While not an outright offer of a bribe, ANY adjacent comments about creating a business relationship like a sponsorship is wildly inappropriate in this context.
These folks are total clown shoes on the security side, and the efficacy of their "fix", and then their lack of communication, further proves that.
repelsteeltje · 3h ago
> Overall simple security design flaws but it's good to see a company that cares to fix them, even if they didn't take security seriously from the start.
It depends on what you mean by simple security design flaws. I'd rather frame it as neglect or incompetence.
That isn't the same as malice, of course, and they deserve credits for their relatively professional response as you already pointed out.
But, come on, it reeks of people not understanding what they're doing: not appreciating the context of a complicated device and delivering a high-end service.
If they're not up to it, they should not be doing this.
memesarecool · 3h ago
Yes, I meant simple as in "amateur mistakes". From the mistakes (and their excitement and response to the report), they are clueless about security. Which of course is bad. Hopefully they will take security more seriously in the future.
derac · 3h ago
I mean, at the end of the article they neglected to fix most of the issues and stopped responding.
plorntus · 1h ago
To be honest, the responses sounded copy-pasted straight from ChatGPT, and the interest in their non-existent YouTube channel seemed feigned.
> Overall simple security design flaws but it's good to see a company that cares to fix them, even if they didn't take security seriously from the start
I don't think that should give anyone a free pass though. It was such a simple flaw that realistically speaking they shouldn't ever be trusted again. If it had been a non-obvious flaw that required going through lots of hoops then fair enough but they straight up had zero authentication. That isn't a 'flaw' you need an external researcher to tell you about.
I personally believe companies should not be praised for responding to such a blatant disregard for quality, standards, privacy and security. No matter where they are from.
jekwoooooe · 1h ago
It’s not sinophobia to point out an obvious pattern. It’s like saying talking about how terrorism (the kind that will actually affect you) is solely an Islamic issue, and then calling that islamophobic. It’s okay to recognize patterns my man.
wyager · 3h ago
Note that the world-model "everything Chinese is spying on you" actually produced a substantially more accurate prediction of reality than the world-model you are advocating here.
As far as being "very welcoming", that's nice, but it only goes so far to make up for irresponsible gross incompetence. They made a choice to sell a product that's z-tier flaming crap, and they ought to be treated accordingly.
thfuran · 2h ago
What world model exactly do you think they're advocating?
butlike · 2h ago
They'll only patch it in the military model
/s
wedn3sday · 2h ago
I love the attempt at bribery by offering to "sponsor" their empty youtube channel.
brahyam · 4h ago
What a train wreck. There are a thousand more apps in the store that do exactly this, because it's the easiest way to use OpenAI without having to host your own backend/proxy.
I have spent quite some time protecting my apps from this scenario and found a couple of open source projects that do a good job as proxies (no affiliation, I just used them in the past):
but they still lack other abuse-protection mechanisms like rate limiting, device attestation, etc., so I started building my own open source SDK:
- https://github.com/brahyam/Gateway
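The proxy pattern those projects implement can be sketched in a few lines. Everything here (endpoint, env var, handler shape) is illustrative, and the device-auth and rate-limiting steps are left as comments:

```python
import os
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# The API key lives only on the server; devices never ship with it.
OPENAI_KEY = os.environ.get("OPENAI_API_KEY", "sk-placeholder")

def build_upstream_request(body: bytes, key: str) -> urllib.request.Request:
    """Attach the server-held key to a client request before forwarding."""
    return urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=body,
        headers={"Authorization": f"Bearer {key}",
                 "Content-Type": "application/json"},
    )

class ProxyHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # A real proxy would verify a per-device token here and
        # rate-limit before spending money upstream.
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        with urllib.request.urlopen(build_upstream_request(body, OPENAI_KEY)) as resp:
            payload = resp.read()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

# To run: HTTPServer(("127.0.0.1", 8080), ProxyHandler).serve_forever()
```

The app then talks only to this proxy, so decompiling the APK yields no credential worth stealing.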
ixtli · 3h ago
This is one of the best things I've read on here in a long time. Definitely one of the greatest "it runs DOOM" posts ever.
pvtmert · 5h ago
> What the fuck, they left ADB enabled. Well, this makes it a lot easier.
Thinking that was all, but then;
> Holy shit, holy shit, holy shit, it communicates DIRECTLY TO OPENAI. This means that a ChatGPT key must be present on the device!
Oh my gosh. Thinking that is it? Nope!
> SecurityStringsAPI which contained encrypted endpoints and authentication keys.
jahsome · 3h ago
It's always funny to me when people go to the trouble of editorializing a title, yet in doing so make the title even harder to parse.
Jotalea · 1h ago
Really nice post, but I want to see Bad Apple next.
lysace · 42m ago
This is marketing.
lxe · 3h ago
That's some very amateur programming and prompting that you've exposed.
aidos · 4h ago
> “Our technical team is currently working diligently to address the issues you raised”
Oh now you’re going to be diligent. Why do I doubt that?
jekwoooooe · 1h ago
Good write-up. At some point we have to just seize this Chinese malware-adjacent crap at the border already.
komali2 · 4h ago
> "and prohibited from chinese political as a response from now on, for several extremely important and severely life threatening reasons I'm not supposed to tell you."
Interesting. I'm assuming LLMs "correctly" interpret vague "please no China politics" system prompts like this, but if someone told me that, I'd just be confused: don't discuss anything about the PRC or its politicians? Don't discuss the history of the Chinese empire? Don't discuss politics in Mandarin? What does this mean? LLMs, in my experience, are smarter than me at understanding vague language. Maybe because I'm autistic and they're not.
williamscales · 4h ago
> Don't discuss anything about the PRC or its politicians? Don't discuss the history of Chinese empire? Don't discuss politics in Mandarin?
In my mind all of these could be relevant to Chinese politics. My interpretation would be "anything one can't say openly in China". I too am curious how such a vague instruction would be interpreted as broadly as would be needed to block all politically sensitive subjects.
Cthulhu_ · 4h ago
I'm sure ChatGPT and co have a decent enough grasp on what is not allowed in China, but also that the naive "prompt engineers" for this application don't actually know how to "program" it well enough. But that's the difference between a prompt engineer and a software developer, the latter will want to exhaust all options, be precise, whereas an LLM can handle a bit more vagueness.
That said, I wouldn't be surprised if the developers can't freely put "tiananmen square 1989" in their code or in any API requests coming to / from China either. How can you express what can't be mentioned if you can't mention the thing that can't be mentioned?
landl0rd · 3h ago
Just mentioning the CPC isn’t life-threatening, while talking about Xinjiang, Tiananmen Square, or cn’s common destiny vision the wrong way is. You also have to figure out how to prohibit mentioning those things without explicitly mentioning them, as knowledge of them implies seditious thoughts.
I’m guessing most LLMs are aware of this difference.
throwawayoldie · 3h ago
No LLMs are aware of anything.
aspbee555 · 4h ago
It is to ensure no discussion of Tiananmen Square.
yard2010 · 3h ago
Why? What happened in Tiananmen square? Why shouldn't an LLM talk about it? Was it fashion? What was the reason?
Ask yourself, why are they saying this? You can probably surmise that they're trying to avoid stirring up controversy and getting into some sort of trouble. Given that, which topics would cause troublesome controversy? Definitely contemporary Chinese politics, Chinese history is mostly OK, non-Chinese politics in Chinese language is fine.
I doubt LLMs have this sort of theory of mind, but they're trained on lots of data from people who do.
gbraad · 3h ago
Strongly suggest you not buy one, as the flex cable for the screen easily breaks or comes loose. Mine got replaced three times, and my unit still has this issue; the touch screen is useless.
Sure let's start giving out participation trophies in security. Nothing matters anymore.
Liquix · 1h ago
great writeup! i love how it goes from "they left ADB enabled, how could it get worse"... and then it just keeps getting worse
> After sideloading the obligatory DOOM
> I just sideloaded the app on a different device
> I also sideloaded the store app
can we please stop propagating this slimy corporate-speak? installing software on a device that you own is not an arcane practice with a unique name, it's a basic expectation and right
throwawayoldie · 4h ago
New rule: if a person or company describes their product as "AI-powered", they have to pay me $10,000. Tell your friends.
Cthulhu_ · 4h ago
I wish earning money was as easy as setting rules for yourself, unfortunately that doesn't work.
throwawayoldie · 4h ago
Oh, that's fine, the rule's for everyone else, not me. I would be more likely to cut my own head off than willingly describe something as "AI-powered".
j16sdiz · 3h ago
cutting your head off won't earn you any money either.
[0] https://en.wikipedia.org/wiki/Three_Laws_of_Robotics
Story from three years ago. You’re too late.
https://www.nycpolicefoundation.org/ourwork/advance/countert...
https://www.nyc.gov/site/nypd/bureaus/investigative/intellig...
https://en.wikipedia.org/wiki/Mordechai_Vanunu
https://en.wikipedia.org/wiki/Adolf_Eichmann
https://en.wikipedia.org/wiki/Extraordinary_rendition
Russia is better known for poisoning people. But of all of them, China feels the least threatening if you are not Chinese. If you are Chinese, you aren't safe from the Chinese government no matter where you are.
Extortion is one thing. That's how spy agencies have operated for millennia to gather HUMINT. The Russians, the ultimate masters, even have a word for it: kompromat. You may not care about China, Russia, Israel, the UK or the US (the top nations when it comes to espionage) - but if you work at a place they're interested, they care about you.
The other thing is, China has been known to operate overseas against targets (usually their own citizens and public dissidents), and so have the CIA and Mossad. Just search for "Chinese secret police station" [1], these have cropped up worldwide.
And, even if you personally are of no interest to any foreign or national security service, sentiment analysis is a thing. Listen in on what people talk about, run it through an STT engine and an ML model to condense it down, and you get a pretty broad picture of what's going on in a nation (aka, what are the potential wedge points in a society that can be used to fuel discontent). Or proximity gathering stuff... basically the same thing the ad industry [2] or Strava does [3], that can then be used in warfare.
And no, I'm not paranoid. This, sadly, is the world we live in - there is no privacy any more, nowhere, and there are lots of financial and "national security" interest in keeping it that way.
[1] https://www.bbc.com/news/world-us-canada-65305415
[2] https://techxplore.com/news/2023-05-advertisers-tracking-tho...
[3] https://www.theguardian.com/world/2018/jan/28/fitness-tracki...
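The "condense it down" step above can be absurdly simple and still yield population-level signal. A toy sketch (the lexicon and topic map are invented for illustration; a real pipeline would use trained models, not keyword lists):

```python
from collections import Counter

# Crude negative-sentiment lexicon and topic map -- purely illustrative.
NEGATIVE = {"angry", "unfair", "broken", "expensive"}
TOPICS = {"rent": "housing", "landlord": "housing",
          "fuel": "energy", "petrol": "energy"}

def discontent_by_topic(transcripts):
    """Tally topic mentions in STT transcripts, weighted by negative words."""
    tally = Counter()
    for text in transcripts:
        words = text.lower().split()
        negative_hits = sum(w in NEGATIVE for w in words)
        for w in words:
            if w in TOPICS:
                tally[TOPICS[w]] += negative_hits
    return tally
```

Aggregated over millions of conversations, even something this crude surfaces which topics correlate with anger — exactly the "wedge point" map described above.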
And also worth noting that "place a hostile intelligence service may be interested in" can be extremely broad. I think people have this skewed impression that they're only after assets who work for government departments and defense contractors, but really, everything is fair game. Communications infrastructure, social media networks, cutting edge R&D, financial services - these are all useful inputs for intelligence services.
These are also softer targets: someone working for a defense contractor or for the government will have had training to identify foreign blackmail attempts and will be far more likely to notify their country's counterintelligence services (having the penalties for espionage clearly explained on the regular helps). Someone who works for a small SaaS vendor, though? Far less likely to understand the consequences.
Here in boring New Zealand, the Chinese government has had anti-China protestors beaten in New Zealand. They have stalked and broken into the office and home of an academic expert on China. They have a dubious relationship with both of the main political parties (including having an ex-Chinese spy elected as an MP).
It’s an uncomfortable situation and we are possibly the least strategically useful country in the world.
This is something I was talking about when the LLM boom started: it's now possible to spy on everyone, in every conversation. You just need enough computing power to run a special AI agent (pun intended).
When you combine the modern SOP of software and hardware collecting and phoning home with as much data about users as is technologically possible with laws that say “all orgs and citizens shall support, assist, and cooperate with state intelligence work”… how exactly is that Sinophobia?
I’ll give you a hint: In this case it’s the one-party unitary authoritarian political system with an increasingly aggressive pursuit of global influence.
Anyone in the US should be very concerned, no matter if it is the current administration's thought police, or the next who treats it as precedent.
I am not actively involved in anything the Chinese government would view as a huge risk, but being put on a plane without due process and sent to a labor camp on trumped-up charges by my own government is far more likely.
You know of these things due to the domestic free press holding the government accountable and being able to speak freely about it as you’re doing here. Seeing the two as remotely comparable is beyond belief. You don’t fear the U.S. government but it’s fun to pretend you live under an authoritarian dictatorship because your concept of it is purely academic.
Gonna need a more specific hint to narrow it down.
The United States?
The difference that makes it concerning and problematic that China is doing it is that with China, there is no recourse. If you are harmed by a US company, you have legal recourse, and this holds the companies in check, restraining some of the most egregious behaviors.
That's not sinophobia. Any other country where products are coming out of that is effectively immune from consequences for bad behavior warrants heavy skepticism and scrutiny. Just like popup manufacturing companies and third world suppliers, you might get a good deal on cheap parts, but there's no legal accountability if anything goes wrong.
If a company in the US or EU engages in bad faith, or harms consumers, then trade treaties and consumer protection law in their respective jurisdictions ensure the company will be held to account.
This creates a degree of trust that is currently entirely absent from the Chinese market, because they deliberately and belligerently decline to participate in reciprocal legal accountability and mutually beneficial agreements if it means impinging even an inch on their superiority and sovereignty.
China is not a good faith participant in trade deals, they're after enriching themselves and degrading those they consider adversaries. They play zero sum games at the expense of other players and their own citizens, so long as they achieve their geopolitical goals.
Intellectual property, consumer and worker safety, environmental protection, civil liberties, and all of those factors that come into play with international trade treaties allow the US and EU to trade freely and engage in trustworthy and mutually good faith transactions. China basically says "just trust us, bro" and will occasionally performatively execute or imprison a bad actor in their own markets, but are otherwise completely beyond the reach of any accountability.
This company cannot be helped. They cannot be saved through knowledge.
See ya.
Yes, even when you know what you're doing, security incidents can happen. And in those cases, your response to a vulnerability matters most.
The point is that there are so many dumb mistakes and worrying design flaws that neglect and incompetence seem the most likely explanation. Most likely they simply don't grasp what they're doing.
This was the opposite of a professional response:
* Official communication coming from a Gmail address. (Is this even an employee or some random contractor?)
* Asked no clarifying questions
* Gave no timelines for expected fixes, no expectations on when the next communication should be
* No discussion about process to disclose the issues publicly
* Mixing unrelated business discussions within a security discussion. While not an outright offer of a bribe, ANY adjacent comments about creating a business relationship like a sponsorship is wildly inappropriate in this context.
These folks are total clown shoes on the security side, and the efficacy of their "fix", and then their lack of communication, further proves that.
It depends on what you mean by simple security design flaws. I'd rather frame it as neglect or incompetence.
That isn't the same as malice, of course, and they deserve credit for their relatively professional response, as you already pointed out.
But, come on, it reeks of people not understanding what they're doing: not appreciating the context of a complicated device delivering a high-end service.
If they're not up to it, they should not be doing this.
> Overall simple security design flaws but it's good to see a company that cares to fix them, even if they didn't take security seriously from the start
I don't think that should give anyone a free pass though. It was such a simple flaw that realistically speaking they shouldn't ever be trusted again. If it had been a non-obvious flaw that required going through lots of hoops then fair enough but they straight up had zero authentication. That isn't a 'flaw' you need an external researcher to tell you about.
I personally believe companies should not be praised for responding to such a blatant disregard for quality, standards, privacy and security. No matter where they are from.
As far as being "very welcoming", that's nice, but it only goes so far to make up for irresponsible gross incompetence. They made a choice to sell a product that's z-tier flaming crap, and they ought to be treated accordingly.
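For a sense of scale of what was missing: even a minimal per-device authentication scheme fits in a handful of lines. This is an illustrative HMAC design, not the vendor's actual API — each device is provisioned with its own secret, and the backend verifies a signature plus a timestamp window to limit replay:

```python
import hmac
import hashlib

def sign_request(device_id: str, secret: bytes, body: bytes, ts: int) -> str:
    """Device side: sign (device_id, timestamp, body) with the device secret."""
    msg = b"|".join([device_id.encode(), str(ts).encode(), body])
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()

def verify_request(device_id: str, secret: bytes, body: bytes,
                   ts: int, signature: str, now: int,
                   max_skew: int = 300) -> bool:
    """Backend side: reject stale timestamps, then compare in constant time."""
    if abs(now - ts) > max_skew:
        return False
    expected = sign_request(device_id, secret, body, ts)
    return hmac.compare_digest(expected, signature)
```

Shipping with nothing at all, when the floor is this low, is the "zero authentication" point exactly.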
/s
I have spent quite some time protecting my apps from this scenario and found a couple of open source projects that do a good job as proxies (no affiliation, I just used them in the past):
- https://github.com/BerriAI/litellm
- https://github.com/KenyonY/openai-forward/tree/main
But they still lack other abuse protection mechanisms like rate limiting, device attestation etc., so I started building my own open source SDK: https://github.com/brahyam/Gateway
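For anyone rolling their own proxy, the rate-limiting piece is small; a token bucket per API key is the usual shape (parameters below are illustrative, not recommendations):

```python
import time

class TokenBucket:
    """Per-key token bucket: refills at `rate` tokens/sec up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        # Refill based on elapsed time, then try to spend `cost` tokens.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```

A real gateway would keep one bucket per key (or per device) in shared storage, but the accounting logic is exactly this.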
Thinking that was all, but then:
> Holy shit, holy shit, holy shit, it communicates DIRECTLY TO OPENAI. This means that a ChatGPT key must be present on the device!
Oh my gosh. Thinking that is it? Nope!
> SecurityStringsAPI which contained encrypted endpoints and authentication keys.
Oh now you’re going to be diligent. Why do I doubt that?
Interesting. I'm assuming LLMs "correctly" interpret vague system prompts like "please no China politics", but if someone told me that I'd just be confused: don't discuss anything about the PRC or its politicians? Don't discuss the history of the Chinese empire? Don't discuss politics in Mandarin? What does this mean? LLMs, though, in my experience are smarter than me at understanding vague language. Maybe because I'm autistic and they're not.
In my mind all of these could be relevant to Chinese politics. My interpretation would be "anything one can't say openly in China". I too am curious how such a vague instruction would be interpreted as broadly as would be needed to block all politically sensitive subjects.
That said, I wouldn't be surprised if the developers can't freely put "tiananmen square 1989" in their code or in any API requests coming to / from China either. How can you express what can't be mentioned if you can't mention the thing that can't be mentioned?
I’m guessing most LLMs are aware of this difference.
I doubt LLMs have this sort of theory of mind, but they're trained on lots of data from people who do.
https://youtube.com/shorts/1M9ui4AHXMo
Note: downvote?
> After sideloading the obligatory DOOM
> I just sideloaded the app on a different device
> I also sideloaded the store app
can we please stop propagating this slimy corporate-speak? installing software on a device that you own is not an arcane practice with a unique name, it's a basic expectation and right