I hacked a dating app (and how not to treat a security researcher)

468 points by bearsyankees | 270 comments | 5/12/2025, 4:39:10 PM | alexschapiro.com ↗

Comments (270)

michaelteter · 9h ago
Not excusing this in any way, but this app is apparently a fairly junior effort by university students. While it should make every effort to follow good security (and communication) practices, I'd not be too hard on them considering how some big VC-funded "adult" companies behave when presented with similar challenges.

https://georgetownvoice.com/2025/04/06/georgetown-students-c...

tmtvl · 7h ago
I vehemently disagree. 'Well, they didn't know what they were doing, so we shouldn't judge them too harshly' is a silly thing to say. They didn't know what they were doing _and still went through with it_. That's an aggravating, not extenuating, factor in my book. Kind of like if a driver kills someone in an accident and then turns out not to have a license.
michaelteter · 5h ago
Still not excusing them, but these HN responses are very hypocritical.

US tech is built on the "go fast, break things" mentality. Companies with huge backers routinely fail at security, and some of them actually spend money to suppress those who expose the companies' poor privacy/security practices.

If anything, college kids could at least reasonably claim ignorance, whereas a lot of HN folks here work for companies who do far worse and get away with it.

Some companies, some unicorns, knowingly and wilfully break laws to get ahead. But they're big, and people are getting rich working for them, so we don't crucify them.

mianos · 3h ago
It’s also why other regulatory zones outside the US, with much stronger privacy laws like the EU, don’t seem to produce as much innovation, while the US and China keep churning out new stuff.

It’s a trade-off between shipping fast and courting risk. I’m not judging one over the other; it comes down to what you’re willing to accept, not what you wish for.

mmanfrin · 5h ago
> They didn't know what they were doing _and still went through with it_

You don't know what you don't know; sometimes people think they know what they're doing and simply haven't yet hit the situations that prove otherwise. We were all new to programming once; no one would ever become a solid engineer if fear of making mistakes they lacked the experience to anticipate stopped them from building anything.

dmitrygr · 7h ago
+1: if you cannot do security, you have no business making dating apps. The kind of data those collect can ruin lives overnight. This is not a theory, here is a recent example: https://www.bbc.com/news/articles/c74nlgyv7r4o
steeeeeve · 5h ago
I would agree with you. Dating app data might not be legally protected like some PII out there, but there are easily foreseeable bad consequences from compromised dating app data of any kind. Security should be accounted for from the very beginning.
satanfirst · 7h ago
The claim that it should have come up in a government vetting process seems like proof that one should publish one's own dating information oneself before entrusting it to a site that might lose it or, worse, might provide it to a government specifically.
burnt-resistor · 6h ago
If you cannot do security, you have no business making any app people use in significant numbers containing Personally Identifiable Information (PII).

Perhaps, like GDPR, HIPAA, and similar, any web or platform apps that hold login details and/or PII should have to thoroughly distance themselves from haphazard, organic, amateurish processes and technologies and conform to trusted, proven patterns, processes, and technologies that are tested, audited, and preferably formally verified for correctness. Without formalization and professional standards, there are no standards, and these preventable, reinvent-the-wheel-badly hacks will keep doing the same thing and expecting a different result™. Massive hacks, circumvention, scary bugs, and other attacks will continue. And I think this means a proper amount of accreditation, routine auditing, and (the scary word, but applied smartly) regulation to drag the industry (kicking and screaming if need be, with appropriate leadership on the government/NGO side) from an under-structured wild west™ into professionalism.

LadyCailin · 6h ago
This is exactly why I think software engineering should have a licensing requirement, much like civil engineering. I get that people will complain about that destroying all sorts of things, and it might, yes, but fight me. Crap like this is exactly why it should be a requirement, and why you won't convince me that the idea is not, in general, a good one.
viraptor · 5h ago
While the idea is good, I'm not sure how this would get implemented realistically. The industry standards/audits are silly checkbox exercises rather than useful security. The biggest companies are often terrible as far as secure design goes. The government security rules lag years behind the state of the art. For example, how long did it take NIST to stop recommending periodic password changes?

Civil engineering works well because we mostly figured it out anyway. But looking at PCI, SOX and others, we'd probably just require people to produce a book's worth of documentation and audit trail that comes with their broken software.

Spooky23 · 4h ago
I worked on a project that was using federal tax information and had IRS 1075 compliance requirements. Those follow some version of NIST that was out of date at the time.

We had two security teams: security and compliance. It was not possible to be both secure and compliant, so the compliance team had to document every deviation from the IRS standard and why, then self-report us and the customer and audit the areas where we were outside the lines. That took a dozen people almost a year to do.

All of that existed because a US state (S Carolina iirc) was egregiously incompetent and ended up getting breached. Congress “did something” about it.

ikiris · 3h ago
This is why delegated authorities should be managing things instead of congress itself. Because congress has no idea what they're doing on technical topics generally.
Spooky23 · 41m ago
That is the 20th century innovation. Unfortunately, the king doesn’t like it.
no_wizard · 3h ago
There's no governing body that continually researches, vets, and updates security standards. There should be, honestly, but there isn't. That's not true of professional engineering organizations, or medical boards, or the Bar Association, etc.

They all update their recommendations and standards routinely, and do a reasonably good job at being professional organizations.

The current state of this in the tech sector doesn't mean it's impossible to implement.

That's why all the usual standards (PCI and SOC 2 in particular) are performative in practice. There's nothing that holds the industry accountable to be better, and there is nothing, from a legal standpoint, that backs up members of the association if they flag an entity or individual for what would effectively be malpractice.

socalgal2 · 3h ago
I feel like people who suggest governing bodies for this kind of stuff always imagine some perfect unicorn organization that makes perfect recommendations, whereas I usually imagine every UX turning into the worst possible 20-step process because of "regulations" that are actually just theater and don't solve whatever problems they claim to.
LadyCailin · 2h ago
I mean, bridges collapse sometimes. It’s not really about making things perfect from the get go, it’s about making sure that the industry as a whole learns from mistakes. And I agree that some of the existing standards and audits are checkboxes at best, and actively suggesting problems at worst. But, we need to be evolving those actively anyways, that has to be baked into the DNA of whatever this licensing scheme ends up being.

Anyways, I’m not the one who should be deciding the specifics here, it should be a collaboration between lots of different parties, even if I may have a seat at that table. But we have got to get away from the notion (as seen in other comments in this thread) that any sort of attempt to prevent this kind of harm before it happens is authoritarianism.

Implicated · 6h ago
Agreed. My stance on this changed over the course of some years after a close family member married an actual engineer (structural) and I got a lot of insight into that world.

It's astonishing to me the ease with which software developers can wreak _real_, measurable damage on billions of lives and bear no real liability for it.

Software developers shouldn't call themselves engineers unless they're licensed, insured and able to be held liable for their work in the same way a building engineer is.

Spooky23 · 4h ago
Some engineers like to go on about this, but the reality is they offload the work to marginally qualified techs and unlicensed engineers and stamp the document, just like in software.

There are all sorts of failures in the structural space. How many pumped reinforced concrete buildings are being built in Miami right now? How many of them will be sound in 50-75 years? How likely is the architect/PE’s ghost to get sued?

PEs are smart professionals and provide a valuable service. But they aren't magic, and they all have a boss.

tonyhart7 · 4h ago
Well, software is generally harmless until you integrate it into your car (see: Tesla).

I think there's definitely a line between a bug in your puzzle app, which shouldn't need a license, and the AI that drives your $50k+ Tesla.

Anon1096 · 5h ago
I'm curious how you think this would be implemented. Do you think you should need a license to publish on GitHub? Write code on your own computer and run it? Because this was just a startup that some kids founded so saying that a license would have to be a prerequisite to hiring somebody would not cut it. You'd have to cut off their ability to write/run code entirely.
GuinansEyebrows · 4h ago
I mean, kind of? You can't really start any kind of trade business without credentials (other than low-paying under the table work for people who don't care).

You can't stop someone from doing electrical repairs on their own home but if the house burns down as a result, the homeowners' insurance will probably just deny the claim, and then they risk losing their mortgage. Basically, if you make it bureaucratically difficult to do the wrong thing, you'll encourage more of the right thing.

motorest · 6h ago
> This is exactly why I think software engineering should require a licensing requirement, much like civil engineering.

Civil engineering requires licensing because there are specific activities that are reserved for licensed engineers, namely things that can result in many people dying.

If a major screwup doesn't even motivate victims to sue a company then a license is not justified.

alpaca128 · 5h ago
I would say the risk of identity theft for over 150 million people justifies some preventative measures. And yes, there also were hundreds of lawsuits.

https://en.wikipedia.org/wiki/2017_Equifax_data_breach

Or how about four suicides and 900+ wrongful convictions?

https://en.wikipedia.org/wiki/British_Post_Office_scandal

Not to mention the various dating app leaks that led to extortion, suicides, and the leaking of medical information like HIV status. And let's not forget the famous Therac-25, which killed people as a direct result of a race condition.

Where's the threshold for you?

tonyhart7 · 4h ago
I mean, this is the tech industry; everyone here gathers data, big tech or not.

I'm not saying I'm pro identity theft or data breaches or anything, but the industry culture is vastly different.

People here are pro move-fast-break-things to some degree; I think you just can't change that, tbh.

ikiris · 3h ago
Everyone in business is move fast and break things, and let people die if it's cheaper, until regulations force them not to be. Software is just new enough that those regulations mostly don't exist yet.


LordDragonfang · 5h ago
Conversely, it's the scale, not magnitude. A single physical infrastructure failure can usually only harm a very limited number of people. A digital infrastructure breach can trivially harm millions.

Observing that each individual harm may not be worth the effort of suing over is evidence that the justice system is not effective at addressing harm in the aggregate, not evidence of lack of major harm.

hackable_sand · 6h ago
Yes, I will happily fight against authoritarian takes cloaked in vagueness.
jmb99 · 6h ago
I don't believe engineering licensing is authoritarian, and I'd be interested in hearing why you believe that to be the case (especially considering most "real" engineering fields have had licensing requirements for a century, without any real complaints against that process).
ikiris · 3h ago
They believe any regulation is authoritarian overreach, so I doubt you're gonna get anywhere.

Check their comments; there are screeds about compelling labor over, like, basic concepts.

s1artibartfast · 5h ago
There are pretty major exceptions to what requires an engineering license, and it is pretty unclear where software should fall.

You can sign a liability waiver and do all sorts of dangerous things.

>most “real” engineering field have had licensing requirements for a century, without any real complaints against that process).

Most newer engineering fields are trending away from licensing, not towards it. For example, medical device and drug engineering doesn't use it at all.

degamad · 2h ago
> medical device and drug engineering

is a special-case exception, where rather than requiring licensing for the engineers building the product, we put detailed restrictions and regulations on what needs to be done (extensive testing, detailed evidence, monitoring programs, etc.) before the product can be sold or marketed.

That is hardly an example of a field where risk-taking is encouraged and unlicensed persons are able to unleash their half-developed ideas on the public.

Do you have any other examples of fields which are "trending away" from licensing?

theultdev · 5h ago
You don't see how gate-keeping who can create software is authoritarian?

The distinction between creating virtual software and physical structures is fairly obvious.

Of course physical engineers that create buildings and roads need to be regulated for safety.

And there are restrictions already for certain software industries, such as healthcare.

Many other forms of software do not have the same hazards, so no license should be needed, as it would be prone to abuse.

alpaca128 · 5h ago
I agree creating software in general shouldn't be gatekept, but requiring that app developers who process PII have more to show than vibe-coding experience would probably be beneficial.

I don't think anyone is proposing that Flappy Bird or Python scripts on Github should be outlawed. Just like you can still build a robot at home but not a bridge in the town center.

theultdev · 4h ago
OP didn't qualify the statement "This is exactly why I think software engineering should require a licensing requirement".

No mention of PII or any specifics.

SWE already has regulations. I see no need for a license requirement...

Concerning PII, it's kind of hypocritical for the government to regulate it when the NSA was proven to be collecting data on everyone without their consent or knowledge.

LadyCailin · 2h ago
I’m happy to discuss specifics, so long as they don’t start with the premise “regulation is authoritarianism” and also are in good faith. Kids don’t have to have an engineering license to build a bridge out of popsicle sticks, I doubt you think that someone saying “building a bridge should require a civil engineering license” should apply to that. I’m not unreasonable. I just think there has been entirely too much demonstrated harm to start with the premise of “anyone can build any software they want at any time, with zero liability”.

These students may be liable for things after the fact, but that is hardly any consolation to the people that may have had their intimate personal data leaked. Even if they are successfully sued by everybody on the site, how much money could they possibly squeeze out of a bunch of college students? I don’t know how you can prevent this without some up front thing, such as a license, rather than making them liable after the fact.

jasonfarnon · 3h ago
" Crap like this is exactly why it should be a requirement, and why you won’t convince me that the idea is not in general a good one."

If you're looking for a regulatory fix, I would prefer something like a EU-style requirement on handling PII. Even the US model--suing in cases of privacy breaches--seems like it could be pretty effective in theory, if only the current state of privacy law was a little less pro-corporate. Civil suits could make life miserable for the students who developed this app.

LadyCailin · 2h ago
I can buy that. If I were dictator of the world, I wouldn't say "making pong clones requires a license". Even if you grossly negligently screw up the scoring system in your clone, I wouldn't say you should be liable for anything. I think there are probably more cases where liability should exist, even without processing of personal data of any sort. I don't have an easy "one size fits all" regulation in mind either; it's surely not going to be that easy, and I fully acknowledge that. I just wish we as an industry would start having that conversation in good faith.
johnfn · 3h ago
But no one was killed here, so your comparison really falls flat to me - there’s a reason we have a sliding scale of punishments that scale to the crime, and security issues are nowhere near the same level of severity as murder. It feels more like fining kids for putting up a lemonade stand without a business license.
voytec · 9h ago
I've also hit this link trying to get any info on "Cerca". It's from April 2025 and praises an app created two months earlier. It looks like LLM-hallucinated garbage. OP's entry mentions contacting the Cerca team in February. So either this entry is about a flaw detected at launch, or it's some weird scheme.

Nonetheless: a "two-month-old vulnerability" and a "two-month-old student-made app/service".

michaelteter · 9h ago
Ah that's a shame.

It's hard to tell these days what is real.

LinkedIn shows it was founded in 2024, with 2-10 employees. And that same LinkedIn page has a post which directly links to this blurb: https://www.readfeedme.com/p/three-college-seniors-solved-th...

The date of this article is May 2025, and it references an interview with the founders.

bearsyankees · 9h ago
I think the date there is March 25
barbazoo · 8h ago
How is one supposed to know that it's just a bunch of script kiddies we shouldn't be too hard on, if their apps get released under "Cerca Applications, LLC"?
yard2010 · 5h ago
These guys should probably study something else.
selcuka · 2h ago
Fair point, but come on. Not returning the OTP (which is supposed to be a secret) in the response payload is common sense, whether you are a seasoned developer or a high school student.

It is also a commercial product, not something they made for fun:

    In-App Purchases
    - Cerca App $9.99
    - Cerca App 3 month $9.99
    - 10 Swipes $2.99
    - 3 Swipes $0.99
    - 5 swipes $1.99
    - 3 Searches $1.99
    - 10 Searches $3.99
    - 5 Searches $2.99
peterldowns · 9h ago
I hear you but if you're processing passports and sexual preferences you have to at least respond to the security researcher telling you how you're leaking them to absolutely anyone. This is a total clusterfuck and there are zero excuses for the lack of security here.
genewitch · 9h ago
I have an idea: if you don't know anything about app security, don't make an app. "Whataboutism" notwithstanding, this actually made me feel a little ill, and your comment didn't help. I have younger friends that use dating sites, and having their information exposed to whoever wants it is gross; the people who made it should feel bad.

They should feel bad about not communicating with the "researcher" after the fact, too. If I had been blown off by a "company" after telling them everything was wide open to the world for the taking, the resulting "blog post" would not be so polite.

STOP. MAKING. APPS.

dylan604 · 9h ago
Stop pushing POCs into PROD.

There's nothing wrong with making your POC/MVP with all of the cool logic that shows what the app will do. That's usually done to gain funding of some sort, but it comes before releasing. Part of the release stage should be a revamped/hardened version of the POC, not the damn POC itself. The hardened version should have the security stuff added.

That's much better than telling people stop making apps.

genewitch · 8h ago
These "devs" released an app to prod that took passport information and who knows what else. They had no business asking for any of that PII.

If all of the developers were named and shamed, would you, as a hiring manager, ever hire them to develop an app for you? Or would you, in fact, tell them to stop making apps?

They enabled stalkers. There's no possible way to argue that they didn't: some random person looked into it just because their friends mentioned the app, and found all of this. I guarantee that if anyone with a modicum of security knowledge looks the platform over, they're going to find a lot more issues.

It's one thing to be curious and develop something. It's another to seek VC/investments to "build out the service" by collecting PII and not treating it as such. Stop. Making. Apps.

dylan604 · 8h ago
If I were ever a hiring manager, hell would have frozen over. But I'm not one for immediately firing someone for making mistakes. Correct the mistake and move on. Some mistakes will never be forgotten, and that dev will forever remember that some things need extra attention.

Also, if we're talking about a company that had a hiring manager in the process of making an app and did not hire employees with security knowledge somewhere in the process, then the entire company is rotten.

Let me flip this on its head though with your same logic. If you're the type of person that would be willing to provide an app your passport information. Stop. Using. Apps.

genewitch · 8h ago
These apps are for people who are looking for mates, temporary or otherwise. There may be more nuance than "dummy gave passport info to app"
dylan604 · 8h ago
Not once ever in my quest of looking for a mate did the potential mate ask to see my passport. There are times when common sense must be used. If an app is asking for invasive data that just feels out of place, just stop. The juice isn't worth the squeeze
genewitch · 8h ago
I had a feeling some would get hung up on the "passport" thing. The "private" intimate chats were leaked, too. And full name, city, university, phone numbers, sexual preferences, and geolocation. And photographs, obviously. I assume the passport/ID stuff was for "verified accounts", but again, none of that crap should be saved in a database - a boolean default false "VERIFIED" in the user table should suffice.

The disclosure didn't show every API endpoint, just a few dealing with auth and profiles. They also mentioned only a few pieces of PII; you can tell because there were multiple screenshots spread throughout the post. I'm harping on the passport for the reason you specify, too, but mostly because that information shouldn't be stored at all...
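To make that concrete, here is a minimal sketch of the user table shape I mean, SQLAlchemy-style (table and column names are hypothetical, not Cerca's actual schema):

    from sqlalchemy import Boolean, Column, Integer, String
    from sqlalchemy.orm import declarative_base

    Base = declarative_base()

    class User(Base):
        __tablename__ = "users"
        id = Column(Integer, primary_key=True)
        phone = Column(String, nullable=False)
        # Store only the *outcome* of ID verification. No passport image
        # URLs, no document numbers: nothing sensitive to leak if an
        # endpoint misbehaves.
        verified = Column(Boolean, nullable=False, default=False)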

zdragnar · 6h ago
Setting aside all of the other info that was leaked, knowing that the only profiles you see are actual, real people would be nice.

Way back when I last used a dating site, a significant percentage of profiles ended up being placeholders for scams of some sort.

In fact, several texted me a link to some bogus "identity verification" site under the guise of "I get too many fake bot profile hits"... Read the fine print, and you're actually signing up for hundreds of dollars worth of pron subscriptions.

If the dating app itself verified people were real, AND took reports of spam seriously, AND kept that information in a way that wasn't insecure, it'd be worth it.

imiric · 8h ago
You're shouting into the void. The people making this type of product have zero regard for their users' data, and best engineering or security practices. They're using AI to pump out a product as quickly as possible, and if it doesn't work (i.e. makes them money), they'll do it again with something else.

This can only be solved by regulation.

ghssds · 8h ago
Programming should require a government-issued license reserved for alumni of duly certified schools. Possession of a Turing-complete compiler or interpreter without permission should be a felony.
motorest · 6h ago
> Programming should require a government-issued license reserved for alumni of duly certified schools.

Nonsense. I've met PhDs in computer science who were easily out-performed by kids fresh out of coding bootcamps. Do you think that spending 5 years taking a few written exams makes you competent at cyber security? Absurd.

dyslexit · 6h ago
I'm pretty sure the comment was sarcastic. The grandparent comment was so over the top with its moral outrage that sarcasm feels like about the only appropriate response.
yamazakiwi · 4h ago
I am now realizing that it was most likely sarcastic after reading your comment and am now wondering how I didn't take the extreme speech as obvious sarcasm before.

Should've known when they said interpreters and compilers.

Incidentally I replied with sarcasm to theirs as well so it all works out.

yamazakiwi · 7h ago
You’ve successfully contributed 20 pts to your institutional privilege score; Impressive! You're just one step away from your next badge:

"Class Immobility" (95% of users unlock this without trying!)

How to unlock: Be denied access to an accredited education. Work twice as hard for half the recognition. Watch opportunities pass you by while gatekeepers congratulate themselves!

pixl97 · 7h ago
While the previous comment is an overreaction, the wild-west free-for-all we have is also a problem.

At the end of the day the masses will finally get tired of the fuckery of programmers doing whatever they want and start putting laws in place, and the laws will be passed by the stupidest people among us.

Programmers should start looking into standards of professional behavior now, before they are forced on them by law.

yamazakiwi · 7h ago
The problem isn't that anyone has access to programming, it's that corporate incentives prioritize profit over quality, security, and ethics.

And sure, if your follow-up is "that won’t change," I get it, but that doesn’t mean the open nature of programming is the problem.

>At the end of the day the masses will finally get tired of the fuckery of programmers doing whatever they want and start putting laws in place, and the laws will be passed by the stupidest people among us.

I agree laws will pass eventually but it won't start from the people. They rarely even think or hear about software security as something other than an amorphous boogie man, and there are no repercussions so any voices are easily forgotten. Eventually, it will be some big tech corp executive or politician moving into government convincing them to create a security auditing authority to extract money from these companies and/or shut them down.

I'm sure we can find some holier than thou types to fill chairs with security auditors for the new "SSC" once it's greenlit.

GuinansEyebrows · 4h ago
We could probably stand to stop treating software engineering like some holy calling for geniuses only and start treating it for what it is: a skilled trade that can be regulated and accredited like all the rest of them. It's really not a wild idea and it wouldn't stop kids (or anyone, really) from learning on their own. My parents taught me how to use tools as a kid and I learned how to fix my own toilet, but I didn't decide that made me qualified to go start plumbing professionally without apprenticing first.
yamazakiwi · 4h ago
I completely agree! Thank you for this
rs186 · 5h ago
There is a point to your comment, but I am afraid you are shouting at the wrong thing.

Instead, I think this is the fair approach: anyone is free to make a website/app/VR world whatever, but if it stores any kind of PII, you had better know what you are doing. The problem is not security. The problem is PII. If someone's AWS key got hacked, leaked and used by others, well it's bad, but that's different from my personal information getting leaked and someone applying for a credit card on my behalf.

yibg · 6h ago
End of the day it's an ROI analysis (using the term loosely here, more of a gut feel). What is the cost and benefits of making an app more secure vs pushing out an insecure version faster. Unfortunately in today's business and funding climate, the latter has better pay off (for most things anyways).

Until the balance of incentives changes, I don't see any meaningful change in behavior unfortunately.

imiric · 8h ago
That sounds like you're excusing them.

You know what else was an app built by university students? The Facebook. We're all familiar with the "dumb fucks" quote, with Meta's long history of abusing their users' PII, and their poor security practices that allowed other companies to abuse it.

So, no. This type of behavior must not be excused, and should ideally be strongly regulated and fined appropriately, regardless of the age or experience of the founders.

SpaceL10n · 8h ago
I worry about my own liability sometimes as an engineer at a small company. So many businesses operate outside of regulated industries, where PCI or HIPAA don't apply. For smaller organizations, security is just an engineering concern, not an organizational mandate. The product team is focused on the features, the PM is focused on the timeline, QA is focused on finding bugs, and it goes on and on, but rarely is there a voice of reason speaking about security. Engineers are expected to deliver tasks on the board and little else. If the engineers can make the product secure without hurting the timeline, then great. If not, the engineers end up catching heat from the PM or whomever.

They'll say things like...

"Well, how long will that take?"

or, "What's really the risk of that happening?"

or, "We can secure it later, let's just get the MVP out to the customer now"

So, as an employee, I do what my employer asks of me. But, if somebody sues my employer because of some hack or data breach, am I going to be personally liable because I'm the only one who "should have known better"?

SoftTalker · 7h ago
You're not really an engineer. You won't be signing any design plans certifying their safety, and you won't be liable when it's proven that they aren't safe.
kohbo · 5h ago
Depends on your industry. Even if SWEs aren't out here getting PEs, there is absolutely someone signing off on all things safety-related.
marcellus23 · 2h ago
> engineer

> noun

> a person who designs, builds, or maintains engines, machines, or public works.

pixl97 · 7h ago
If it's an LLC/Corp you should be protected by the corporate veil unless you've otherwise documented you're committing criminal behavior.

But yea, the lack of security standards across organizations of all sizes is pitiful. Releasing new features always seems to come before ensuring good security practices.

sailfast · 8h ago
I would personally want to know the law enough to protect myself, push back on anything illegal in writing, and then get written approval to disregard to be totally covered - but I understand that even this can be hard if you’re one or two devs deep at a startup or whatever. Personally, if I didn’t think they were pursuing legal work I’d leave.
remus · 8h ago
As an engineer in a small org, I think it's our responsibility to educate the rest of the team about these risks and push to make sure engineering time gets allocated to mitigate these issues. It's not easy, but it's important stuff that could sink the business if it's not taken seriously.
kelnos · 8h ago
As much as I despise the "I was just following orders" defense, do make sure you get anything like that in writing: an email trail where you raise your concerns about the lack of security, with a response from a boss saying not to bother with it.

Not sure where you are located, but I don't know of any case where an individual rank-and-file employee has been held legally responsible for a data breach. (Hell, usually no one suffers any consequences for data breaches. At most the company pays a token fine and moves on without caring.)

hnlmorg · 7h ago
> do make sure you get anything like that in writing: an email trail where you raise your concerns about the lack of security, with a response from a boss saying not to bother with it.

A few years ago I was put in the situation where I needed to do this and it created a major shitstorm.

“I’m not putting that in writing” they said.

However it did have the desired effect and they backed down.

You do need to be super comfortable with your position in the company to pull that stunt though. This was for a UK firm and I was managing a team of DevOps engineers. So I had quite a bit of respect in the wider company as well as stronger employment rights. I doubt I’d have pulled this stunt if I was a much more replaceable software engineer in an American startup. And particularly not in the current job climate.

hiatus · 8h ago
Are you an officer of the company? If not I would not think you could be personally liable.
yieldcrv · 7h ago
not in my experience
andrelaszlo · 8h ago
Oops! Nice find!

To limit his legal exposure as a researcher, I think it would have been enough to create a second account (or ask a friend to create a profile and get their consent to access it).

You don't have to actually scrape the data to prove that there's an enumeration issue. Say your id is 12345, and your friend signs up and gets id 12357 - that should be enough to prove that you can find the id and access the profile of any user.

As others have said, accessing that much PII of other users is not necessary for verifying and disclosing the vulnerability.
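Something like this minimal check would demonstrate it (endpoint path, IDs, and token are hypothetical, purely for illustration):

    import requests

    BASE = "https://api.example-dating.app"  # placeholder, not the real API
    MY_ID, FRIEND_ID = 12345, 12357          # both accounts created with consent
    HEADERS = {"Authorization": "Bearer <my-token>"}  # auth for *my* account only

    # Fetching my own profile should succeed (200).
    me = requests.get(f"{BASE}/users/{MY_ID}", headers=HEADERS)

    # Fetching my friend's profile with *my* token should fail (403/404).
    # A 200 with their data proves the IDOR: sequential IDs plus a missing
    # authorization check. No mass scraping needed.
    friend = requests.get(f"{BASE}/users/{FRIEND_ID}", headers=HEADERS)
    print(me.status_code, friend.status_code)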

ofjcihen · 8h ago
This is the standard and obvious way to go about things that most security researchers ignore.

While you can definitely want PII protected, scraping the data to prove a point is unnecessary and hypocritical.

strunz · 51m ago
Eh, part of assessing the vulnerability is how deep it goes. Showing that there were no gates or roadblocks to accessing all the data is a valid thing to research; otherwise they can later say "oh, we had rate limiting in place" or "we had network vulnerability scanners which would've prevented a wholesale leak".
mtlynch · 9h ago
This is a pretty confusing writeup.

>First things first, let’s log in. They only use OTP-based sign in (just text a code to your phone number), so I went to check the response from triggering the one-time password. BOOM – the OTP is directly in the response, meaning anyone’s account can be accessed with just their phone number.

They don't explain it, but I'm assuming that the API is something like api.cercadating.com/otp/<phone-number>, so you can guess phone numbers and get OTP codes even if you don't control the phone numbers.

>The script basically just counted how many valid users it saw; if after 1,000 consecutive IDs it found none, then it stopped. So there could be more out there (Cerca themselves claimed 10k users in the first week), but I was able to find 6,117 users, 207 who had put their ID information in, and 19 who claimed to be Yale students.

I don't know if the author realizes how risky this is, but this is basically what weev did to breach AT&T, and he went to prison for it.[0] Granted, that was a much bigger company and a larger breach, but I still wouldn't boast publicly about exploiting a security hole and accessing the data of thousands of users without authorization.

I'm not judging the morality, as I think there should be room for security researchers to raise alarms, but I don't know if the author realizes that the law is very much biased against security researchers.

[0] https://en.wikipedia.org/wiki/Goatse_Security#AT&T/iPad_emai...

lima · 8h ago
> They don't explain it, but I'm assuming that the API is something like api.cercadating.com/otp/<phone-number>, so you can guess phone numbers and get OTP codes even if you don't control the phone numbers.

They mention guessing phone numbers, and then the API call for sending the OTP... literally just returns the OTP.

mtlynch · 8h ago
Yeah, I guess there's no reason for the API to ever return the OTP, but the severity depends on how you call the API. If the API is `api.cercadating.com/otp/<unpredictable-40-character-token>`, then that's not so bad. If it's `api.cercadating.com/otp/<guessable four-digit number>` that's a different story.

From context, I assume it's closer to the latter, but it would have been helpful for the author to explain it a bit better.

bearsyankees · 8h ago
Hi, author here! My bad if that was not clear. The endpoint was just a POST request where the body was the phone number, so that is all you needed to know to take over someone's account.
joshstrange · 5h ago
I think it could be a tad clearer. I understand what you are saying, but this thread requires reading multiple messages, parsing out the wrong parts, and putting together the correct ones to fully understand.

Put very simply, they exposed an endpoint that took a phone number as input and sent an OTP code. That's reasonable, and many companies do this without issue. The problem is that instead of just sending the OTP code via SMS, they _returned the code to the client_ as well.

There is never a good reason to do this; it defeats the entire purpose. The only reason you send a code to a phone is for the user to enter it and prove they "own" that phone number.

It's like having a secure vault but leaving a post-it note with the combination stuck to it.
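A minimal sketch of the mistake and the fix, assuming a Flask-style handler (all names here are illustrative, not Cerca's actual code):

    import secrets
    from flask import Flask, request, jsonify

    app = Flask(__name__)
    otp_store = {}  # phone -> code; a real app needs expiry and rate limiting

    def send_sms(phone, code):
        # Stub: a real app would call an SMS provider here.
        print(f"(sms to {phone}): your code is {code}")

    @app.post("/auth/request-otp")
    def request_otp():
        phone = request.json["phone"]
        code = f"{secrets.randbelow(10**6):06d}"
        otp_store[phone] = code
        send_sms(phone, code)
        # Vulnerable version: echoing the code back means anyone who knows
        # a phone number can log in as that user without seeing the SMS:
        #   return jsonify({"phone": phone, "otp": code})
        # Correct version: confirm the send, reveal nothing.
        return jsonify({"status": "sent"})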

tptacek · 8h ago
Read the original complaint in the Auernheimer case. Prosecutors had (extensive) intent evidence that is unlikely to exist here. The defendants in that case were also accused of disclosing the underlying PII, which is not something that appears to have happened here.
SoftTalker · 7h ago
I was going to say the headline of the post, "I hacked..." could almost be taken as a confession. But that's not the actual title of the linked article. I'm almost tempted to flag this submission for clickbait embellishment in the title.
lcnPylGDnU4H9OF · 6h ago
It was submitted by the author: https://news.ycombinator.com/item?id=43966279.
mtlynch · 8h ago
Yeah, I agree Auernheimer was a much more attractive target for prosecution, but do you think this student is legally safe in what they're doing here?
tptacek · 8h ago
I would personally not scrape the endpoint to collect statistics and inform the severity estimation, but I'm a lot more risk averse than most. But prosecution of good-faith security research is disfavored, so as long as you don't do anything to breach the assumption of good faith (as defendants in the trial you mentioned repeatedly did) I think you're probably fine.

The bigger thing is just that there's no actual win in scraping here. It doesn't make the vulnerability report any more interesting; it just reads like they're trying to make the whole thing newsier. Some (very small) risk, zero reward.

shayanbahal · 8h ago
I had a similar experience with another dating app, although they never got back to me. When I tried to get the founder's attention by changing his bio to "contact me" text, they restored a backup, lol.

Years later I saw their Instagram ad and tried to see if the issue still existed, and yes, it did. Basically, anyone with knowledge of their API endpoints (which are easy to find by proxying the app's traffic) had full-on admin capabilities and access to all messages, matching, etc.

I wonder if I should go back and try again... :-?

cobalt60 · 7h ago
Why not disclose it as a responsible dev with contacts and move on?
pixl97 · 7h ago
If a company is not responsible enough to follow up on security reports, you should not keep following up either, but instead disclose it to the world.
flutas · 7h ago
tbh, I agree.

I've reported 2 big bugs like this, one for Funimation and one for a dating app.

With Funimation you could access anyone's PII and shop orders; they ignored me until I sent a LinkedIn message to their CTO with his own PII (CC number) in it.

The "dating" app well they were literally spewing private data (admin/mod notes, reports, private images, bcrytped password, ASIN, IP, etc) via a websocket on certain actions. I figured out those actions that triggered it, emailed them and within 12 hours they had fixed it and made a bug bounty program to pay me out of as a thank you.

Importantly, I also didn't use anyone else's data or account; I simply made another account that I attacked to prove the point. Yes, it cost me a ~$10 monthly sub to do so. But they also refunded that.

shayanbahal · 7h ago
I think it took so long that I moved on, but you are right and I should have done that. Probably I'll take a look again to see if I can do it now :)
evantbyrne · 6h ago
Been there. Nagged the city of Seattle for nearly two years about fixing their insecure digital wallets, and in return they just acted weird to me and never really fixed the problem. They wouldn't tell me anything, not even the vendor, so I could communicate to them that this issue could exist elsewhere. The goal of these tactics is to delay long enough that you give up on publishing. So publish. Just be ethical and stay within the bounds of the law on what you access and release.
shayanbahal · 6h ago
I did a quick test, and it seems like the full admin access I used to get has been somewhat fixed/changed. I'm wondering: given that there was an issue, and I have enough data to show all users' data was fully compromised, but it has since changed (it might still be vulnerable, but let's say it's not), should I still release something? They should have notified their users of such an issue, right?
evantbyrne · 6h ago
Sounds worthy of a blog post to me
nixpulvis · 9h ago
People need to be forced to think twice before taking in such sensitive information as a passport or even just addresses. This sort of thing cannot be allowed to be brushed off as just a bunch of kids making an app.
VBprogrammer · 8h ago
The UK government are trying really hard to mandate IDs for access to porn sites. Can't wait for that to blow up in their faces.
pixl97 · 7h ago
"They" don't care, the entire point of many of these laws is to increase the friction and fear of being disclosed that you don't visit these sites in the first place.
kelnos · 8h ago
And for things like passport or other ID details, there's also no reason to expose them publicly at all after they've been entered. If you want an API available to fetch the data so you can display it in the UI, there's no need to include the full passport/ID number; at the very least it can be obscured with only the last few digits sent back via the API.

But for something like a dating site, it's enough for the API to just return a boolean verified/not-verified for the ID status (or an enum of something like 'not-verified', 'passport', 'drivers-license', etc.). There's no real need to display any of the details to the client/UI.

(In contrast with, say, an airline app where you need to select an identity document for immigration purposes, where you'd want to give the user more details so they can make the choice. But even then, as they do in the United app, they only show the last few digits of the passport number... hopefully that's all that's sent over their internal API as well.)
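Sketched out, the profile response only needs to carry something like this (field names are made up for illustration):

    from dataclasses import dataclass
    from enum import Enum

    class IdVerification(str, Enum):
        NOT_VERIFIED = "not-verified"
        PASSPORT = "passport"
        DRIVERS_LICENSE = "drivers-license"

    @dataclass
    class ProfileResponse:
        user_id: int
        display_name: str
        # The client only ever sees this enum; document images and numbers
        # stay server-side, or are deleted once the check completes.
        id_verification: IdVerification = IdVerification.NOT_VERIFIED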

jonny_eh · 8h ago
There should be some kind of government-operated identity confirmation service that is secure/private.

Or by someone "government-like" such as Apple or Google.

steeeeeve · 5h ago
Government is the worst possible solution to every problem.

(not an attack on you. I have to say that every time I see someone say anything along the lines of "the government should do it")

GuinansEyebrows · 3h ago
Government is made the worst possible solution thanks to lobbying and lawyers. It doesn't have to be this way.
clifflocked · 8h ago
OAuth exists and can be used to confirm someone's identity by linking their Google account.
nixpulvis · 8h ago
To be fair, I wouldn't want my google account linked to my dating profile. Aggregating services has risks too.
knicholes · 8h ago
Maybe a secondary Google account.
kelnos · 8h ago
Linking a Google account doesn't confirm your identity, though. It just confirms that you created a Google account with a particular name.
smt88 · 8h ago
A Google account does nothing to prove identity
behringer · 8h ago
When I worked for the government, within 2 months they had leaked all of my data to the black market.

Governments should not be confirming shit.

pixl97 · 7h ago
The government already has all your data so I'm not sure who you think should be confirming identity.
behringer · 5h ago
and my point is that they leak it. So it's hardly useful to have them both house and confirm the data when they can't house it properly.

They'll be confirming data that is publicly available.

jonny_eh · 5h ago
But what additional data are you worried about them having?
koakuma-chan · 9h ago
Were they not using some kind of third party identity verification service? That's what I usually see apps do. Don't tell me those third party services still share your ID with the app (like the actual images)?
nixpulvis · 9h ago
Read the article. They clearly have their own OTP setup.

But if they are asking for your passport, then they have access to it. It's not a third party asking and providing them with some checkmark or other reduced risk data.

koakuma-chan · 9h ago
I have read the article and OTP has nothing to do with identity verification. I'm asking because every single time I went through identity verification the app used a third party service that is supposed to be trustworthy.
nixpulvis · 9h ago
I see what you mean. But they literally had passport front/back URLs, so they aren't using a third party for that either.
blantonl · 9h ago
Returning the OTP in the request API response is wild. Like why?
MBCook · 8h ago
So the UI can check if what they enter is correct.

It’s very sensible and an obvious solution if you don’t think about the security of it.

A dating app is one of the most dangerous kinds of app to make, due to all the necessary PII. This is horrible.

ryanisnan · 8h ago
> if you don’t think about the security of it.

This is big brain energy. Why bother needing to make yet another round trip request when you can just defer that nonsense to the client!

joelhaasnoot · 8h ago
No one would ever hack my app!
benmmurphy · 7h ago
I’ve seen banks where the OTP code is generated on the client and then sent to the server.
pydry · 8h ago
Smacks of vibe coding
MBCook · 7h ago
Could be. Somewhere else in these comments someone was saying they found evidence that the app was coded that way.

But they also said it was a project by two students. And I could absolutely see students (or even normal developers) who aren’t used to thinking about security make that mistake. It is a very obvious way to implement it.

In retrospect I know that my senior project had some giant security issues. There were more things to look out for than I knew about at that time.

bitbasher · 8h ago
I don't think a language model is that stupid. This smacks of pure human stupidity and/or offshoring.
orphea · 8h ago
But LLMs are that stupid. Do you remember that guy who vibe-coded a cheating tool for interviews and literally leaked all his API keys/secrets to GitHub, because neither he nor the LLM knew better?
bitbasher · 8h ago
Fair enough. Since it's trained on human stupidity, I suppose it would reflect that stupidity as well.
immibis · 7h ago
Is that the same guy who had his degree revoked for creating a cheating tool for interviews and is now a millionaire for creating a cheating tool for interviews?
matja · 9h ago
Eliminate your database costs with this one easy trick!
hectormalot · 9h ago
One reason I could think of is that they may return the database (or cache, or something else) response after generating and storing the OTP. Quick POCs/MVPs often use their storage models for API responses to save time, and then it is an easy oversight...
oulu2006 · 2h ago
That's my first thought as well: like a basic CRUD operation that returns the row that was created as the response.
gwbas1c · 5h ago
It appears that the OTP is sent from "the response from triggering the one-time password".

I suspect it's a framework thing; they're probably directly serializing an object that's put in the database (ORM or other storage system) to what's returned via HTTP.

ceejayoz · 9h ago
Save a HTTP request, and faster UX! What's not to love?

When Pinterest's new API was released, they were spewing out everything about a user to any app using their OAuth integration, including their 2FA secrets. We reported and got a bounty, but this sort of shit winds up in big companies' APIs, who really should know better.

mooreds · 9h ago
I too am bewildered.

Maybe to make it easier to build the form accepting the OTP? Oversight?

I can't think of any other reasons.

Vuska · 9h ago
Oversight. Frameworks tend to make it easy to make an API endpoint by casting your model to JSON or something, but it's easy to forget you need to make specific fields hidden.
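For example, with a Pydantic v2-style model (hypothetical names), the secret leaks in any response built straight from the model unless the field is explicitly excluded:

    from pydantic import BaseModel, Field

    class OtpRecord(BaseModel):
        phone: str
        # Without exclude=True, this field would appear in every
        # serialized response built directly from the model.
        code: str = Field(exclude=True)

    rec = OtpRecord(phone="+15555550123", code="482913")
    print(rec.model_dump())  # {'phone': '+15555550123'}; the code stays server-side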
Alex-Programs · 8h ago
I assume that whoever wrote it just has absolutely no mental model of security, has never been on the attacking side or realised that clients can't be trusted, and only implemented the OTP authentication because they were "going through the motions" that they'd seen other people implement.
pixl97 · 7h ago
Everyone that programs should take blackhat classes of some kind. I talk to so many programmers that really don't understand what hackers/attackers can actually do.
ksala_ · 9h ago
My best guess would be some form of testing before they added the "send a message" part to the API. Build the OTP logic and the scaffolding... and add a way to make sure it returns what you expect. But yes, absolutely wild.
ungreased0675 · 8h ago
I would like to see laws that make storing PII as dangerous as storing nuclear waste. Leaks should result in near-certain bankruptcy for the company and legal jeopardy for the people responsible.

That’s the best way I can think of to align incentives correctly. Right now there’s very little downside to storing as much user information as possible. Data breach? Just tweet an apology and keep going.

hiatus · 8h ago
> I would like to see laws that make storing PII as dangerous as storing nuclear waste.

This is a little extreme IMO. PII encompasses a lot of data, including benign things like email address stored only for authentication and contact purposes.

pixl97 · 6h ago
I mean, we could consider email like low-level waste: you can't dump it in the environment like plastic trash, but if you handle it correctly, with cheap disposal methods, it will be OK.

Things like photos of IDs/passports should be considered yellowcake.

gwbas1c · 5h ago
White collar jail?

That might be the only way to give the issue the attention it deserves.

edm0nd · 9h ago
> I have been met with radio silence.

That's when it's time to inform them that you are disclosing the vuln to the public in 90 days due to their silence.

hbn · 9h ago
That's more of a punishment to innocent users than the business
nick238 · 9h ago
Disclosure is good for the 'innocent users' as they are made aware that their data may have been leaked (who knows if the company can do the sufficient auditing and forensics to detect total scraping), rather than just being oblivious because the company just didn't bother to tell them.
maxverse · 8h ago
Is there any reason to not just privately email the users? "Hey, I'm so and so, a security researcher. I was able to gather your data from <Company>, which has not responded to any inquiries from me. Please be aware that your data is mismanaged and vulnerable, and I encourage you to voice your concern directly to <Company>."
Ajedi32 · 7h ago
Seems like a reasonable idea, though depending on how many users are affected that may effectively amount to going public. Also only works if the vulnerability gives you access to all customer emails, and you're willing to exploit it to get that info (which might not be a good idea legally speaking).
yard2010 · 5h ago
Make it better: find a lawyer who would sue and send them the details. You can find, like, 10 people out of 10k who would love to sue, and you get your bounty from the lawyer.
kube-system · 8h ago
> Disclosure is good for the 'innocent users' as they are made aware that their data may have been leaked

Presuming perfect communication, which is never the case for security vulnerabilities in a consumer application.

ericmcer · 8h ago
This is a rare case where the leak is so egregious he could actually reach out to all the users themselves to let them know. Especially the ones with passport info.
kenjackson · 9h ago
True. Maybe let them know you will be directly contacting each user and letting them know that this service has exposed their personal information to hackers.
nick238 · 9h ago
I'd definitely not do that. POCing a scraper to check is fine, but you shouldn't save any PII from that data. You're also saying you're the "hacker", as you don't know if it's actually been revealed to others without the forensics that (hopefully) only the business can do.
kenjackson · 6h ago
Yeah. Not good practical advice on my part.
OutOfHere · 9h ago
There is no vulnerability here. It's just out in the open.
myself248 · 8h ago
Imagine if they tried to claim that. "Everything was just out on the front lawn, you can't blame us for not locking the door because we didn't even have a door!"
9283409232 · 9h ago
Good way to get yourself sued and possibly have criminal charges brought against you.
Buttons840 · 9h ago
Yeah. Security researchers face the threat of lawsuits constantly, while those who build insecure apps face no consequences.

We are literally sacrificing national security for the convenience of wealthy companies.

SoftTalker · 7h ago
Well it's kind of like "I walked around the neighborhood trying everyone's front door, I found one unlocked and I could even enter the house and rummage through their personal effects. Just trying to improve the security of the neighborhood!"
Buttons840 · 7h ago
Yes, but the house also has like 250 million people's precious possessions inside, including your own. And foreigners who are not subject to our laws are testing the door constantly. Yes, in this situation it would be like 1 honest researcher also approaching to test the door--seems fine to me.

On second thought, maybe physical buildings are not a good analogy.

yard2010 · 5h ago
If you keep PII of 10k people in your house - LOCK YOUR GODDAMN DOOR
b8 · 9h ago
Which has never happened before, and if it does, then the EFF would presumably back you.
9283409232 · 9h ago
This is a completely uninformed comment. Security researchers get sued or threatened all the time. Bunnie was threatened by Microsoft for publishing his research on Xbox vulnerabilities, the city of Columbus sued David Ross for his reporting on data exposed during a ransomware attack, Google has threatened action against a few security researchers if memory serves and that is just what I can remember off the top of my head.
retrac · 9h ago
The government of Nova Scotia, Canada used to host its FOIA releases (similar to American freedom of information laws) on a website, with a URL along the lines of server.example.gov.ns.ca/foiadoc?=00031337

They are public and intended to be publicly accessed. A clever teenager [1] noticed: hey, is that a sequential serial number? Well, yes it was. And so he downloaded all the FOIA documents. It turns out not all of them were actually public. The government hosted all the FOIA documents that way, including self-disclosures (which contain sensitive information and are only released to the person the information is about). They never intended to publicly release that small subset of the URLs, even though they were transparently guessable.

Unauthorized access of a computer system carries up to 10 years in prison. The charges were eventually dropped [2] and I don't think a conviction was ever likely. Poor fellow still went through the whole process of being dragged out of bed by armed police.

[1] https://www.cbc.ca/news/canada/nova-scotia/freedom-of-inform...

[2] https://www.techdirt.com/2018/05/08/police-drop-charges-file...

uneekname · 7h ago
Genuine question, how could a well-formed HTTP request for a URL ever be considered unauthorized access? If I request something and someone responds...shouldn't it be their responsibility not to share important information?

Edit: should have read the linked article before commenting. It totally wasn't, and the charges were dropped...after thoroughly harassing the kid.

Alex-Programs · 5h ago
The mental and moral model used by programmers ("you own the backend; I own the frontend; if your backend returns stupid stuff to the frontend without me actively breaking into it, that's your fault") is not, as far as I can tell, shared by broader society.
koakuma-chan · 9h ago
Why did they charge the teen and not the government of NS?
9283409232 · 4h ago
Why did the government of Nova Scotia not charge itself?
tptacek · 9h ago
I've spent my entire career doing this, have been personally "threatened" several times, and until relatively recently kept track of researchers dealing with legal threats. The concern is overblown. In cases that go beyond a nastygram from a lawyer, it is almost always the fact pattern that some aggravating factor is present: a consulting agreement that initiated the testing and forecloses disclosure, or the preservation and/or publication of the PII itself, or attempts to pivot and persist access after finding a vulnerability.

It's an especially superficial argument on this story, where the underlying vulnerability has essentially already been disclosed.

secalex · 9h ago
What he actually did to enumerate that database, and whether he downloaded all that PII, changes the risk profile, I think.
tgsovlerkhgsel · 9h ago
Threats with the goal to prevent publication are incredibly common.

Following up on the threat is much less common, and the best way to prevent that (IMO) is to remove the motivation to do so: once the vuln is public, further threats cannot prevent publication and would just draw more negative attention to the company, so the company has far fewer incentives to threaten or to follow up on threats already made.

It's not a guarantee, you can always hit a vindictive and stupid business owner, but usually publishing in response to threats isn't just the right thing to do (to discourage such attempts) but also the smart thing to do (to protect yourself).

secalex · 9h ago
Agreed. I've been doing this for 25+ years and personally know a dozen people who have been threatened and several who have been sued or faced potential prosecution for legitimate security research. I've experienced both situations!

That doesn't make it right, and the treatment of the researcher here was completely inappropriate, but telling young researchers to just go full disclosure without being careful about documentation, legal advice and staying within the various legal lines is itself irresponsible.

chickenzzzzu · 9h ago
Imagine banking your physical and financial security on a presumption that the EFF can help you XD
edm0nd · 9h ago
most certainly not (at least in the US).

I'm so tired of researchers who bring a serious vuln to a company being ignored or met with silence and/or resistance, on top of the company never alerting its users about it.

gwbas1c · 5h ago
FYI: This is more common than you think.

I briefly worked with a company where I had to painfully explain to the lead engineer that you can't trust anything that comes from the browser, because a hacker can curl whatever they want.
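
The point generalizes: client-side checks are UX, not security, because nothing obliges an attacker to use your client. A toy sketch (hypothetical endpoint and field names):

    import requests

    # The page's JavaScript might refuse to submit a negative quantity,
    # but the server only ever sees the request, which anyone can hand-craft:
    resp = requests.post(
        "https://api.example.com/orders",            # hypothetical endpoint
        json={"item": "sticker", "quantity": -100},  # value the UI would block
    )
    print(resp.status_code)

Anything enforced only in the browser is effectively not enforced.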

Our relationship deteriorated from there. Needless to say, I don't list the experience on my resume.

xutopia · 10h ago
It's crazy that they didn't respond to his repeated requests!
benzible · 10h ago
As someone managing a relatively low-profile SaaS app, I get constant reports from "security researchers" who just ran automated vulnerability scanners and are seeking bounties on minor issues. That said, it's inexcusable - they absolutely need to take these reports seriously and distinguish between scanner spam and legitimate security research like this.

Update: obviously I just skimmed this, per responses below.

nick238 · 9h ago
Pardon sir, I see you have:

* Port 443 exposed to the internets. This can allow attackers to gain access to information you have. $10k fee for discovery

* Your port 443 responds with "Server: AmazonS3" header. This can allow attackers to identify your hosting company. $10k fee for discovery.

Please remit payment and we will offer instructions for remediation.

sshine · 9h ago
They already met with him and acknowledged the problem. So their lack of follow-up is an attempt to sweep things under the rug. Users deserve to know that their data was compromised. In some parts of the world it is a crime not to report a data leak.
bee_rider · 10h ago
It sounds like they actually met with him, patched the issues, and then didn’t respond afterwards. IMO that is quite rude of them toward him, but they do seem to have taken the issue itself somewhat seriously.
benzible · 9h ago
Ah, sorry, I need to actually read things before I react :)
moonlet · 10h ago
Not really if they don’t have any security or even devsecops yet… if they just have devs and those devs are people who are relatively junior / just out of school, I could unfortunately absolutely see this happening
mytailorisrich · 9h ago
A company has no duty to report back to you just because you kindly notified them of a vulnerability in their software.

> During our conversation, the Cerca team acknowledged the seriousness of these issues, expressed gratitude for the responsible disclosure, and assured me they would promptly address the vulnerabilities and inform affected users.

Well, that was the decent thing to do and they did it. Beyond that it is their internal problem, and they did fix the issue, according to the article.

Engineers can be a little too open and naive. Perhaps his first contact was with the technical team, but then management and the legal team got hold of the issue and shut it down.

kadoban · 9h ago
> > During our conversation, the Cerca team acknowledged the seriousness of these issues, expressed gratitude for the responsible disclosure, and assured me they would promptly address the vulnerabilities and inform affected users.

> Well, that was the decent thing to do and they did it. Beyond that it is their internal problem, and they did fix the issue, according to the article.

They didn't inform anyone, as far as I can tell. Users especially need(ed) to be informed.

It's also at least good practice to let security researchers know the schedule for when it's safe to inform the public; otherwise, future disclosures will be chaotic.

sakjur · 8h ago
Taking Yale as a starting point, they seem to have failed their legal obligation to inform their Connecticut users within 60 days (assuming the author of the post would have received a copy of such a notification).

https://portal.ct.gov/ag/sections/privacy/reporting-a-data-b...

I doubt this is an engineering team's naivete meeting a rational legal team's response. I'd guess it's rather marketing or management naively believing that sticking your head in the sand is the correct way to deal with a potential data-leak story.

mytailorisrich · 8h ago
Companies won't inform users of vulnerabilities. They may/should inform users if they think their data was breached, which is different.

Not clear why "the public" should be informed, either.

Ultimately they thanked the researcher and fixed the issue, job done.

kadoban · 1h ago
> Companies won't inform users of vulnerabilities. They may/should inform users if they think their data was breached, which is different.

This is a company that shipped an API with extremely basic security flaws and didn't know until someone came and told them. Let's be honest: they have _no_ idea if anyone's data was breached. Users should know so they can be extra cautious; the data in question can ruin lives.

> Not clear why "the public" should be informed, either.

The public will be informed because why would the security researcher keep quiet? They also _should_ know because it's important information for someone considering trusting that company with sensitive information.

> Ultimately they thanked the researcher and fixed the issue, job done.

Hard disagree. It's not the worst possible response, but it's not good and it wasn't done.

pixl97 · 6h ago
>Not clear why "the public" should be informed, either.

Because it's the law in some states now.

Furthermore, mandated reporting requirements are how you keep companies from making stupid security decisions in the first place. Mishandling data this way should be a business-ending event.

autoexec · 5h ago
Instead it seems like business as usual. Without laws with teeth sharp enough to hurt it'll just continue to be like this.
pixl97 · 6h ago
>A company has no duty to report to you about just because you kindly notified them of a vulnerability in their software.

Then you have no duty to report the vuln to the company and instead should feel free to disclose it to the world.

A little politeness goes a long ways on both sides.

hamish-b · 3h ago
I'm not sure how I hadn't heard about Charles Proxy for iPhone before! I've done some light pentesting before and had to manually resort to grepping for strings throughout the app binary. Glad to have found out about this, especially for when apps are only on iOS.
bearsyankees · 3h ago
Awesome, glad to help!! It is a pretty good tool (unless apps use SSL pinning).
12_throw_away · 9h ago
Hot take: just like real engineers, there should be a Software Engineer licensing exam that's legally required before you can handle PII ... because this is the alternative.

Before I was allowed to hand out juice cups at my kids' preschool, I had to do a 2 hour food safety course and was subject to periodic inspections. That is infinity% more oversight than I received when storing highly sensitive information for ~10^5 users.

aDyslecticCrow · 9h ago
In a few European countries, a "master of computer science" is just a normal engineering degree with a focus on software for the specialty credits. I can call myself an "engineer", even though my software profession does not value the distinction.

Though I'm sceptical it would help. API design is generally not taught in university courses, and perhaps shouldn't be (too specific).

I instead feel that GDPR has already done a lot of heavy lifting. By raising the price of "find out", people got a bit more careful about the "fuck around" part. It seems to push companies to take it seriously.

Step two is forcing companies to take security breaches and security disclosures seriously, which the CRA (Cyber Resilience Act) may help with... at the cost of the swamps of bureaucratic overhead that are also included, of course.

pixl97 · 6h ago
Bureaucracy is the cost of human laziness...

I mean, do you trust that the chemical industry will self regulate and keep dangerous chemicals out of your drinking water?

Then why do we trust software companies to keep you and your data safe?

We will get more regulations over time no matter how much we complain about it because people are rather lazy at the end of the day and more money for less work is a powerful motivator.

gwbas1c · 5h ago
Your comment should be the top post in this thread. Unfortunately, there is a group of HN readers who downvote all comments that suggest we (software developers) should be licensed, even though plenty of other fields require it.

I think we'll need to start pushing on lawmakers.

andoando · 6h ago
If they're sending the OTP back to the client, it's because the OTP is being checked client side, so you might have been able to just call the authentication endpoint directly.
joshstrange · 4h ago
More likely it's misconfiguration of some kind.

Perhaps a holdover from testing (where you don't always want to send the SMS). Maybe just the habit/pattern of returning the item you just created in the DB and not remembering to mark the field as private. There are a whole slew of easy foot-guns. I'm not defending it, but I doubt it's to do client-side validation; that would be insanity. It's easy enough not to notice a body on a response that you don't care about client side: "200? Cool, keep moving." It's still crazy they were returning the OTP, and I sure hope it wasn't on purpose.
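
For anyone who hasn't seen that particular foot-gun, it looks roughly like this; a hypothetical Flask handler for illustration, not Cerca's actual code:

    import secrets
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    def save_and_text(record):
        """Stub standing in for: persist the OTP, send the SMS."""

    @app.post("/auth/send-otp")
    def send_otp():
        record = {
            "phone": request.json["phone"],
            "otp": f"{secrets.randbelow(10**6):06d}",  # six-digit code
        }
        save_and_text(record)
        return jsonify(record)  # BUG: echoes the whole record, OTP included
        # Fix: return jsonify({"status": "sent"})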

swyx · 10h ago
> Since then, I have reached out multiple times (on March 5 and March 13) seeking updates on remediation and user notification plans. Unfortunately, as of today’s publication date (April 21, 2025), I have been met with radio silence. To my knowledge, Cerca has not publicly acknowledged this incident or informed users about this vulnerability, despite their earlier assurances to me. They also never followed up with me following our call and ignored all my follow up emails.

there can always be another side to this story but also wtf. this kind of shit makes me want to charles-proxy every new app i run because who knows what security any random startup has

genewitch · 9h ago
I'd not heard of Charles Proxy nor gobuster.

Years ago there was a firmware for the GL.iNet Mango travel routers that let you MITM anything connected to them. I bought two, and then the information about how to set it up disappeared (I can't find it). I have one wireguarded, with the switch set to shut off access or WireGuard only; the other one is for IoT devices and is connected via 10mbit, so even if someone managed to hack one of the two IoT things here they couldn't exfil very much, and I'd notice the blinking.

andrewmcwatters · 9h ago
Charles Proxy has been in the industry for many years now. It's a common tool for basic reverse engineering.
nerdsniper · 8h ago
Somewhat downplaying it. Charles is easily the most popular tool for reverse engineering client-server communications in mobile apps.

Certificate pinning frustrates Charles by hampering MITM attempts. It can be difficult to extract/replace pinned certificates from the latest versions of Android/iOS apps. Often you can extract them from older versions using specialized tools, if old-enough versions exist and those certificates are still valid for API endpoints of interest.
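
The concept itself is small: the client refuses to talk unless the server's certificate hashes to a value baked into the app. A rough sketch of the check in Python (the pin is a placeholder value):

    import hashlib
    import socket
    import ssl

    PINNED_SHA256 = "0123abcd..."  # placeholder: hex digest baked into the app

    def cert_fingerprint(host, port=443):
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)  # raw DER cert bytes
        return hashlib.sha256(der).hexdigest()

    if cert_fingerprint("api.example.com") != PINNED_SHA256:
        raise ssl.SSLError("pin mismatch -- possible MITM proxy in the path")

Charles works by presenting its own certificate, so a pinned client computes a different fingerprint and refuses to connect, which is exactly the frustration described above.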

andrewmcwatters · 8h ago
Yeah, I definitely did. lol

It's like saying IDA Pro is just an interesting piece of software for looking at binaries, but the grandparent comment is surely from someone who doesn't look at these utilities, so I guess that's why I didn't press it.

sillywabbit · 6h ago
There's probably some benefit to having people who will tell you about security issues rather than exploit them. You can't really blame businesses / app devs for wanting to be left alone though.
9283409232 · 9h ago
There's no penalty for failing at privacy and security so companies would rather play the odds that they will be fine than invest in proper practices. Alex says Cerca is being misleading when it comes to encryption but it seems to me they are outright lying and will likely face no consequences for it. In a more just world, this would trigger so many regulatory and compliance audits.
thesuitonym · 9h ago
> Alex says Cerca is being misleading when it comes to encryption but it seems to me they are outright lying and will likely face no consequences for it.

Transmitting information via HTTPS is usually enough to say your app uses "encryption and other industry-standard measures to protect your data."

mooreds · 9h ago
> There's no penalty for failing at privacy and security

I wouldn't say there's no penalty (they might have to pay for a year of identity theft protection or a fine).

I agree that the consequences are not in line with the damage to the public or customer base.

MaKey · 9h ago
The GDPR allows for huge fines, so for companies operating in Europe there is an incentive to take privacy and security seriously.
brazzy · 9h ago
Namely, up to 20 million EUR or 4% of the previous year's revenue, whichever is larger.
camcil · 9h ago
In a data conscious world, the complete and utter disregard for PII and lack of competency in design and implementation would result in catastrophic business failure.

They may have "patched" the ability to exploit it in this way, but the plaintext data is still there in that same fragile architecture and still being handled by the same org that made all of these same fundamental mistakes in the first place. Yikes.

hiatus · 8h ago
> In a data conscious world, the complete and utter disregard for PII and lack of competency in design and implementation would result in catastrophic business failure.

As you are probably well aware, we do not live in that world. Companies like Equifax can suffer breaches exposing the personal information of millions and stock still goes up.

CobrastanJorji · 8h ago
Sorry about that. Please fill out this class action postcard, and, if approved, you will receive up to two years of identity protection services provided by Equifax (to be served concurrently with any other court-ordered two years of identity protection services), or, if you have financial damages you can conclusively prove are directly linked to this specific identity disclosure, you may mail your evidence to the provided address for up to $1000 in restitution, pending arbitration.
nickpeterson · 8h ago
Maybe they’ll send you duplicate $1000 checks if you claim to be the other people in the leak.
baxtr · 8h ago
PII data breaches, especially of PHI data, can lead to high financial losses, mostly in the US through litigation. Fines in the EU are low in comparison.

Companies don’t like to talk about this, and they bury these costs deep down in their financial statements. But trust me, they’re quite substantial.

senderista · 8h ago
If that's true, then stock prices should reflect that. But that's not what we see after major PII breaches at publicly traded companies.
baxtr · 7h ago
So you have seen the failure of Apple’s car project in their stock price?
ngangaga · 8h ago
It's worth noting that companies that are too big to fail (as I assume credit bureaus are considered) are great places to park money.
AlienRobot · 8h ago
>the OTP is directly in the response

I forgot my password.

Type your username:

Your password is hunter2.

Vibes.

orphea · 7h ago
Many, many moons ago I saw a forum website that would tell you:

"Sorry, you can't use password qwerty123. This password is already used by user SweetLemon13115"

koakuma-chan · 9h ago
I thought Apple was checking apps? How did this go through? In any sane jurisdiction exposing PII like that is illegal.
gagik_co · 9h ago
Apple mainly checks for obvious malware API calls on device; it would really be out of its scope to check the backend security of every app ever.
nixpulvis · 9h ago
This is why Apple security theater leads people to a false sense of security.

Better to make secure operating systems that inform users of bad access patterns and let the developers be free to produce.

Nothing protects you from giving info to a broken backend though, so people should be more cautious and repercussions for insecure backends should be higher.

Lucasoato · 9h ago
I don't think the App Store provider should be blamed for a security posture that is just wrong by design. The company is responsible to guarantee a meaningful degree of security for their user data.
efdee · 9h ago
Remind me what the 30% markup is for, again?
lcnPylGDnU4H9OF · 6h ago
Marketing and distribution.
hackan · 9h ago
nobody does a free security checkup xD not even apple
tough · 9h ago
free? I thought they still had their 30% racket and a "wait while we review" three-month pipeline going for their walled gardens
hackan · 9h ago
yeah, no. security evaluations cost a ton, and they take time. apple is doing nothing at all, just charging for being in a premium market.
nixpulvis · 9h ago
"Premium"

Apps on the app store are hardly much better than anywhere else.

tough · 9h ago
The "premium" presumably is the market apple's commands and nobody else's does

iOS users still spend more dollars on average in apps than Android ones, even if Android has more users, I think?

nixpulvis · 9h ago
Also true. But it's just sad to see the average quality of an app has gone way down over the years.
koakuma-chan · 9h ago
Didn't cost a ton for this article's author.
j45 · 5h ago
A real issue is how much data, and how much functionality for accessing that data, is exposed to the front end when it could just as well be held server side.
gxs · 8h ago
FYI the Hinge app works the same way

I requested my data and all the image URLs are publicly accessible - and the URLs provided include both your own images and the images of anyone who'd ever viewed your profile
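
The standard fix for permanently public image URLs is a private bucket plus short-lived signed links. A sketch with boto3, bucket and key names hypothetical:

    import boto3

    s3 = boto3.client("s3")

    # The object stays private; the link works for 10 minutes, then dies.
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "profile-photos", "Key": "users/123/photo-1.jpg"},
        ExpiresIn=600,
    )
    print(url)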

reliablereason · 5h ago
The security is in the fact that you can't just list every single user and all their info. You would need to scrape the info.

A URL with a cryptic file name is theoretically just as secure as a random password.
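
That only holds if the name comes from a CSPRNG with real entropy; a short or patterned name is just a sequential ID in costume. Sketch:

    import secrets

    # ~256 bits of entropy: infeasible to enumerate, unlike /img/000123.jpg
    name = secrets.token_urlsafe(32)
    print(f"https://cdn.example.com/img/{name}.jpg")  # hypothetical CDN path

Even then, such URLs leak through referrer headers, logs, and sharing, so they're a weaker control than an actual authorization check.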

yieldcrv · 7h ago
this is useful! I am considering building a dating app with its own twist, and seeing the API endpoints this team went with is instructive

under the hood they're all the same, just with different theming and market segmentation

tuwtuwtuwtuw · 10h ago
I'm not sure I understand properly. Did he try to hack a random service he encountered? Is that even legal? Where I live (Sweden) it's definitely not legal.
secalex · 9h ago
IANAL and this is not legal advice, but you're probably fine reverse engineering a mobile app and intercepting your own network traffic. He was doing OK until he started enumerating IDs in their database, at which point he started venturing into the territory that got weev 3.5 yrs.

https://www.wired.com/2013/03/att-hacker-gets-3-years/

I am not endorsing this interpretation of the CFAA, but this kid needs a lawyer.

tptacek · 8h ago
I mean, he ventured in that direction, but until he discloses PII and leaks evidence of his intent that's the extent of the similarity: directional. People on message boards drastically underrate the importance of intent evidence in criminal cases; they all want there to be some hard-and-fast rule like "if you can see it in the URL, and you don't use a single-quote character to break SQL with it, it's fair game", which is not at all how it works.
tuwtuwtuwtuw · 8h ago
His blog post seems to make it clear that his intent was to gain access to data in a computer system he did not have permission to access. Why would "disclose PII" be relevant?
tptacek · 8h ago
CFAA cases turn on the "why" as much as the "how", and "because I wanted to find and disclose security vulnerabilities for the good of the public" is a disfavored "why". Read the sentencing filings in the case you're talking about to see more about the implication of disclosure.
bink · 10h ago
It's become a bit of a grey area thanks to most large companies having bug bounty programs now. I think some researchers just assume that all companies are OK with testing against their production services. IMHO it's almost certainly illegal, but simply won't be enforced unless the hacker/researcher does something malicious.
janalsncm · 9h ago
If only all hackers lived in jurisdictions which enforce anti-hacking laws. If I am making an app, I’m not going to rely on the police to enforce cybersecurity.
charcircuit · 9h ago
It's not legal in America either. And he is posting with what may be his real name, which adds extra risk.
bee_rider · 9h ago
I’m not in security (thank goodness, it sounds like a legal minefield). It sounds like this system was so wildly insecure that… I actually kinda wonder what laws specifically he broke.

If you just text out passwords to anybody who asks, are they really doing unauthorized access? Lol.

I’m sure it was illegal somehow, though.

carefulfungi · 8h ago
It is very likely illegal, but prosecution is also discouraged. From the federal government's guidelines on prosecuting unauthorized computer access (https://www.justice.gov/jm/jm-9-48000-computer-fraud):

> "The attorney for the government should decline prosecution if available evidence shows the defendant’s conduct consisted of, and the defendant intended, good-faith security research."

soco · 10h ago
It doesn't even look like they tried to secure anything initially. Security by design? Ha.
sherdil2022 · 10h ago
They might not have a playbook for how to handle such reports. Doesn't mean they shouldn't respond. They are also probably sh*t scared about legal ramifications, but not responding only makes them look even worse. Nonetheless, it is amazing how many of these products and services don't put security and user privacy first.

Open for discussion - What would make them pay attention?

bravoetch · 9h ago
I think most companies have a weak playbook for this kind of interaction. I once bought a product (and I'm going to be deliberately vague) from a company whose customers are mostly very famous people around the globe. The URL for my order included the order number, and that page showed everything about my order and my PII. Naturally I tried changing the order number, and wowzers I was able to see emails, phones, addresses, contacts for the PA/agent and sometimes direct contact info for the ordering party.

When I contacted the company about this, they didn't thank me or really acknowledge the problem. They fixed it about a month later by requiring login to view order URLs. I feel like they should have let their customers know all their PII data was exposed; I know they didn't, because I never got such a notification.
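
The missing piece there is an ownership check on the lookup; requiring login alone isn't enough if any logged-in user can still fetch any order number. A minimal sketch with Flask, all names hypothetical:

    from flask import Flask, abort, session

    app = Flask(__name__)
    app.secret_key = "replace-me"  # placeholder; required for sessions

    ORDERS = {1001: {"owner": "alice", "items": ["widget"]}}  # toy datastore

    @app.get("/orders/<int:order_id>")
    def get_order(order_id):
        order = ORDERS.get(order_id)
        if order is None:
            abort(404)
        if order["owner"] != session.get("user"):  # the check that was missing
            abort(403)
        return order  # Flask serializes dicts to JSON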

camcil · 10h ago
Affecting their bottom-line via litigation, less usage, whatever...
squigz · 9h ago
Fines that are enough to actually hurt, and regulations that force companies to actually secure PII.

If they're scared of such things, then maybe they shouldn't be making and marketing a dating app. It's not 2003 anymore, and this isn't some innocent app - they're collecting information on passports and sexual preferences for thousands of people. They should be aware of the responsibility that ought to come with that.

voytec · 9h ago
I'm flagging this submission. Look at the author[0], at the "Georgetown students..." (won't backlink again) post linked below stating that Cerca was 2 months old in April, and OP's post from April stating that they hacked this thing two months earlier.

It's some self-promo or whatever scheme/scam bullshit.

[0] https://news.ycombinator.com/from?site=alexschapiro.com

bearsyankees · 9h ago
Hi author here! Not exactly sure what you are talking about — I think I found this vulnerability pretty close to when the app first went public but not sure why that makes it a scam

And I posted this blog because I think people will find it interesting!

Happy to answer any other questions when I get back to my computer :)

nik_0_0 · 8h ago
Posting the same link 4 times in 18 days, by the author, certainly seems like self-promo, but somehow allowed? I don't see any URL manipulation, and it certainly took off today. (I found it interesting!)

A/B testing of post names seems to yield some useful information ;)

I don't see your reference to "Georgetown students..." in either the website link or the user's submissions? Was it modified?

bearsyankees · 8h ago
Glad you found it interesting, yeah I was experimenting with different names and obviously this one was the best. Not trying to self-promo as I am not like selling any product but just thought people would enjoy the article! Sorry if I violated any of the unwritten HN norms... but glad people are reading it now and having interesting discussions
tptacek · 8h ago
You definitely shouldn't do what you did here, gaming your submissions this way. You can post your own stuff, of course.
phyzix5761 · 9h ago
Imagine every time you entered a specific physical location you would increase your exposure to a detrimental disease. After only entering a couple of times you've contracted this disease and each subsequent visit to this place makes the illness worse.

A few people try to warn you but you choose not to listen and, in fact, you recruit the government to make it easier to enter such places with safeguards that don't actually protect you from the disease and encourage you to enter more frequently.

You're then surprised that you're ill to the brink of death and blame the location as the sole cause of your ailments. Yes, the location is to blame, but so are you for continuing to enter even after getting sick.

Why do you do this? Because you want something. Convenience, pleasure, a distraction, etc. But you refuse to acknowledge that it's killing you.

This is how we should view optional services that require us to give our PII data in exchange for hours of attention-grabbing content. They're designed to sell your eyeballs and data to advertisers. You know this already but you can't say no. You're sick and refuse to acknowledge it.

RussianCow · 8h ago
> This is how we should view optional services that require us to give our PII data in exchange for hours of attention-grabbing content.

This is a nice fantasy, but realistically it means you shouldn't use probably 90% of services out there, which isn't reasonable for most people. Plus, there are plenty of companies with treasure troves full of data on you that have equally questionable data security/privacy practices that you've never even directly interacted with.

We need regulation. There is no other alternative. And we need to stop blaming victims of data breaches for companies not putting basic security measures in place. I don't think it's unreasonable to expect every company you interact with to securely store your sensitive data. If a place was physically making people ill like in your thought experiment, they wouldn't be around for very long; I think we should demand the same for our data.

phyzix5761 · 8h ago
No one is blaming the victims. Please read my comment again. What I'm saying is that regulation puts in guardrails that don't actually do anything to protect your data.
pixl97 · 5h ago
>What I'm saying is that regulation puts in guardrails that don't actually do anything to protect your data.

Right, and when you go to the grocery store you catch listeria every time? Oh wait, food handling is rather safe because of well enforced regulation.

The problem with libertarians is they don't think of the wide spread public effects of their behaviors. Trash piles up outside their house and suddenly bears are eating the neighbors.

phyzix5761 · 3h ago
Food regulations work. Data security regulations don't. Why? Because food safety is a fairly static practice; it doesn't change that often. But software is dynamic. New vulnerabilities and breach techniques emerge faster than politicians can regulate them. It's a cat-and-mouse game, and government is a really slow, fat cat.
bongodongobob · 9h ago
But a dating app specifically needs all kinds of personal info. That's like, what it's for.
phyzix5761 · 8h ago
But it's your choice to use it, right?
kube-system · 8h ago
Yes, it is your choice to contract with another party who agrees to keep your information secure. However, it is also their fault when they do not uphold their agreement.