Copilot broke audit logs, but Microsoft won't tell customers

335 Sayrus 112 8/20/2025, 12:18:00 AM pistachioapp.com ↗

Comments (112)

TheRoque · 3h ago
In my opinion, using AI tools for programming at the moment, unless in a sandboxed environment and on a toy project, is just ludicrous. The amount of shady things going on in this domain (AI trained on stolen content, no proper attribution, no proper way to audit what's going out to third-party servers, etc.) should be a huge red flag for any professional developer.
AdieuToLogic · 2h ago
> In my opinion, using AI tools for programming at the moment, unless in a sandboxed environment and on a toy project, is just ludicrous.

Well put.

The fundamental flaw is in trying to employ nondeterministic content generation based on statistical relevance over an unknown training data set (which is what commercial LLM offerings are) in an effort to repeatably produce content satisfying a strict mathematical model (program source code).

ozim · 6m ago
Sounds like we are in the end game for „move fast and break things”. Doesn’t feel like we can invent something that moves even faster and breaks more.
mlyle · 2h ago
Nearly as bad: trying to use systems made out of meat, evolved from an unrelated background and trained on an undocumented and chaotic corpus of data, to try and produce content satisfying a strict mathematical model.
AdieuToLogic · 1h ago
Except that the "systems made out of meat" are the entities which both define the problem needing to be solved and are the sole determiners if said problem has been solved.

Of note too is that the same "systems made out of meat" have been producing content satisfying the strict mathematical model for decades and continue to do so beyond the capabilities of the aforementioned algorithms.

mlyle · 7m ago
It's usually not the same pile of meat defining the problem and solving the problem.

Yes, humans exceed the capability of machines, until they don't. Machines exceed humans in more and more domains.

The style of argument you made about the nature of the machinery used applies just as well (maybe better) to humans. To get a valid argument, we'll need to be more nuanced.

ygritte · 10m ago
+1. Also, those systems made out of meat were the ones that discovered the strict mathematical models in the first place and the rules by which they work.
cookiengineer · 1h ago
The difference: Meatbags created something like an education system, where the track record in it functions as a ledger for hiring companies.

There is no such thing for AI. No ledger, no track record, no reproducibility.

Gud · 9m ago
On the other hand, some of the most capable meat bags said fuck you to your record keeping system and dropped out.
ozim · 10m ago
Better: meatbags created a liability system where, if a meatbag harms others, they go to jail.
jdiff · 1h ago
Meaty feet can be held to a fire. To quote IBM, "A computer can never be held accountable."
MobiusHorizons · 1h ago
This is the question I keep asking leaders (I literally asked a VP this question once in an all-hands). How do we approach the risk associated with mistakes made by AI? (Process, legal, security, insurance, etc.) We have processes and legal agreements in place to deal with humans who work for a business making mistakes. We need analogs for AI if we want to use it in similar ways.
galaxyLogic · 7m ago
My question is if I get some code from AI, save it to a file, then modify it or add some functions to it, can I still claim the copyright for it at the top of the file? Do I need to give the AI any credit?

I'm asking because I read somewhere that "AI produced output cannot be copyrighted". But what if I modify that output myself? I am then a co-creator, right, and I think I should have a right to some copyright protection.

Covenant0028 · 52m ago
I suspect the analog will be that the "human in the loop" will bear all the consequences. Perhaps even if they did nothing wrong and are in fact the victim in that situation.

Take the case of Linda Yaccarino. Ordinarily, if a male employee publicly and sexually harassed his female CEO on Twitter, he would (and should) be fired immediately. When Grok did that though, it's the CEO who ended up quitting.

jpcosta · 1h ago
What was the answer? Asking for a vp friend
AdieuToLogic · 25m ago
>>> Meaty feet can be held to a fire. To quote IBM, "A computer can never be held accountable."

>> This is the question I keep asking leaders (I literally asked a VP this question once in an all hands). How do we approach the risk associated mistakes made by AI?

> What was the answer? Asking for a vp friend

This is a difficult issue to tackle, no doubt. What follows drifts into the philosophical realm by necessity.

Software exists to provide value to people. Malicious software qualifies as such due to the desires of the actors who produce it, but it will not be considered further here as it is not germane.

AI is an umbrella term for numerous algorithms with wide-ranging problem-domain applicability, which can often approximate near-optimal solutions using significantly fewer resources than other approaches. But they are still algorithms, capable of only one thing: executing their defined logic.

Sometimes this logic can produce results similar to what a person would produce in a similar situation. Sometimes the logic will produce wildly different results. Often there is significant value when the logic is used appropriately.

In all cases AI algorithms do not possess the concept of understanding. This includes derivatives of understanding such as:

  - empathy
  - integrity
  - morals
  - right
  - wrong
Which brings us back to part of the first quoted post:

  To quote IBM, "A computer can never be held accountable."
Accountability requires justification of actions taken or lack thereof, which demands the ability to explain why said actions were undertaken relative to other options, and implies a potential consequence be imposed by an authority.

Algorithms can partially "justify their output" via strategic logging, but that's about it.

Which is why "a computer can never be held accountable." Because it is a machine, executing instructions ultimately initiated by one or more persons who can be held accountable.

jp0d · 1h ago
Countless bodies consisting of the said meat have been responsible for the advancement of technology so far. If these meat brains don't contribute to any new advancements then the corpus of data will stay stagnant!
nyc_data_geek · 32m ago
Where do you think training data comes from
ygritte · 12m ago
I'm getting so tired of this dumb kind of non-argument. You can't defend LLMs on their own merits, so you try to make them look smarter by throwing shade on humans. That's a non sequitur and whataboutism.
mlyle · 10m ago
Nah-- I feel like I have my eyes pretty wide open about the shortcomings of LLMs (but still find them useful often).

But any argument seeking to dunk on LLMs needs to not also apply equally to the alternative (humans).

bcrosby95 · 48m ago
Those systems of meat were trained over hundreds of millions of years compared to mere months or years.
hitarpetar · 15m ago
I find this take to be purely misanthropic. We are more than stochastic parrots.
devjab · 48m ago
Unless you turn telemetry off (and believe they respect it), your entire file structure, errors, and metadata will be shipped to Microsoft with no audit log available, simply by using VSCode. Which is frankly what Copilot is doing here, except it's doing it on your 365 documents.

I'm personally less concerned about Microsoft's impact on safety in terms of software development than I am with how all my data is handled by the public sector in Denmark. At this point they shouldn't be allowed to use Windows.

TheRoque · 34m ago
Sending some telemetry metadata to private servers is vastly different from sending code chunks and, in some cases, private environment variables. On top of that, there are already many exploits and failures related to these tools, which once again don't compare to simple telemetry. And I'm not even talking about the ethics of reproduced code without proper attribution, which is a different subject.
matt3210 · 1h ago
Use AI to audit what’s produced by AI. Problem solved! /sarcasm
ThrowawayTestr · 2h ago
Companies won't use open source software because of licensing concerns, but if you launder it through an LLM it's hunky-dory.
neuroelectron · 3h ago
The icing on the shit cake is a text editor programmed in TypeScript with an impossible-to-secure plugin architecture.
Sparkyte · 1h ago
AI is a pump and dump scheme promoted by large companies who can't innovate in order to drive up sales. It isn't even AI, it is just weighted variables.
t1E9mE7JTRjf · 43m ago
I imagine when people started using the typewriter, some people writing on paper said similar things. Ultimately, 'shady' things are irrelevant. Shady is subjective, and people don't drop a technology over related things they might not agree with, let alone over what others dislike. They want to get things done, and said technology gets things done.
lokar · 4h ago
Wait, copilot operates as some privileged user (that can bypass audit?), not as you (or better, you with some restrictions)

That can’t be right, can it?

Hilift · 4m ago
In Windows, if a process has the Backup privilege it can bypass any permissions, and this is not audited by default because it would create too much audit volume from actual backup applications. Any process that has this privilege can use it, but the privilege is disabled by default, so it would require deliberate enablement. It is fairly easy to enable in managed code like C#. The same goes for the Restore privilege.
catmanjan · 4h ago
As someone else mentioned, the file isn't actually accessed by Copilot; rather, Copilot is reading the pre-indexed contents of the file in a search engine...

Really, Microsoft should be auditing the search that Copilot executes. It's actually a bit misleading to audit the file as accessed when Copilot has only read the indexed content of the file; I don't say I've visited a website when I've found a result from it in Google

internetter · 1h ago
> I don't say I've visited a website when I've found a result of it in Google

I mean, it depends on how large the index window is, because if Google returned the entire webpage content without leaving (AMP moment), you did visit the website. Fine line.

tomrod · 4h ago
Sure sounds like, for Microsoft, an audit log is optional when it comes to cramming garbage AI integrations in places they don't belong.
ceejayoz · 4h ago
dhosek · 3h ago
That was a laugh-out-loud moment in that film.
lokar · 4h ago
lol. I’ve avoided MS my entire (30+ year) career. Every now and then I’m reminded I made the right choice.
trinsic2 · 29m ago
I woke up to MS in 2023[0]. Never again.

[0]: https://www.scottrlarson.com/publications/publication-transi...

tomrod · 4h ago
Brilliant.
jjkaczor · 4h ago
So... basically like when Delve was first introduced and was improperly security-trimming the things it suggested and its search results.

... Or... a very long time ago, when SharePoint search would display results and synopses for search terms where a user couldn't open the document, but could see that it existed and could get a matching paragraph or two... The best example I would give people of the problem was users searching for things like "Fall 2025 layoffs": if the document existed, then things were being planned...

Ah Microsoft, security-last is still the thing, eh?

ocdtrekkie · 3h ago
I would say "insecure by default".

I talked to some Microsoft folks around the Windows Server 2025 launch, where they claimed they would be breaking more compatibility in the name of their Secure Future Initiative.

But Server 2025 will load malicious ads on the Edge start screen[1] if you need to access the web interface of an internal thing from your domain controller, and they gleefully announced including winget, a wonderful malware delivery tool with zero vetting or accountability, in Server 2025.

Their response to both points was I could disable those if I wanted to. Which I can, but was definitely not the point. You can make a secure environment based on Microsoft technologies, but it will fight you every step of the way.

[1] As a fun fact, this actually makes Internet Explorer a drastically safer browser than Edge on servers! By default, IE's ESC mode on servers basically refused to load any outside websites.

beart · 2h ago
I've always felt that Microsoft's biggest problem is the way it manages all of the different teams, departments, features, etc. They are completely disconnected and have competing KPIs. I imagine the edge advertising team has a goal to make so much revenue, and the security team has a goal to reduce CVEs, but never the twain shall meet.

Also you probably have to go up 10 levels of management before you reach a common person.

ValveFan6969 · 2h ago
I can only assume that Microsoft/OpenAI have some sort of backdoor privileges that allows them to view our messages, or at least analyze and process them.

I wouldn't be surprised.

faangguyindia · 3h ago
I've disabled Copilot; I don't even find it useful. I think most people who use Copilot have not seen "better".
Spooky23 · 4h ago
No, it accesses data with the users privilege.
gpm · 3h ago
Are you telling me I, a normal unprivileged user, have a way to read files on windows that bypasses audit logs?
Spooky23 · 2h ago
If there is a product defect? Sure.

The dude found the bug, reported the bug, they fixed the bug.

This isn’t uncommon; there are bugs like this frequently in complex software.

gpm · 2h ago
I think you just defined away the entire category of vulnerability known as "privilege escalation".
p_ing · 2h ago
This isn’t an example of escalation. Copilot is using the user’s token similar to any other OAuth app that needs to act on behalf of the user.
lokar · 2h ago
If that is true, then how did it not get logged? The audit should not be under the control of the program making the access.
lokar · 3h ago
I'm guessing they are making an implicit distinction between access as the user, vs with the privs of the user.

In the second case, the process has permission to do whatever it wants; it elects to restrain itself. Which is obviously subject to many more bugs than the first approach.
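That distinction can be made concrete with a toy sketch (purely hypothetical code, not any real Windows mechanism):

```python
# Toy contrast of the two access models described above. All names and
# structures here are invented for illustration.

PERMS = {"alice": {"notes.txt"}}          # files each user may read
FILES = {"notes.txt": "a", "hr.xlsx": "b"}

def read_as_user(user, path):
    """Model 1: access *as* the user. The enforcement layer checks the
    user's rights on every call; the calling program cannot skip it."""
    if path not in PERMS.get(user, set()):
        raise PermissionError(path)
    return FILES[path]

def read_with_privs_of_user(user, path, self_restrain=True):
    """Model 2: a privileged process that merely *elects* to restrain
    itself. One forgotten check (a bug) and the restriction is gone."""
    if self_restrain and path not in PERMS.get(user, set()):
        raise PermissionError(path)
    return FILES[path]          # the privileged read always succeeds here

# Model 1 fails closed: the unauthorized read raises.
try:
    read_as_user("alice", "hr.xlsx")
    leaked = True
except PermissionError:
    leaked = False
assert not leaked

# Model 2 fails open the moment the self-check is skipped.
assert read_with_privs_of_user("alice", "hr.xlsx", self_restrain=False) == "b"
```

The point is structural: in the first model a bug in the application cannot widen access, while in the second model every missed check is a vulnerability.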

jeanlucas · 4h ago
A better title would be: Microsoft Copilot isn't HIPAA compliant

A title like this will get it fixed faster.

rst · 4h ago
It already is fixed -- the complaint is that customers haven't been notified.
stogot · 1h ago
Haven’t and “they actively chose not to”.
samename · 47m ago
Active vs passive language strikes again
fulafel · 1h ago
> CVEs are given to fixes deployed in security releases when customers need to take action to stay protected. In this case, the mitigation will be automatically pushed to Copilot, where users do not need to manually update the product and a CVE will not be assigned.

Is this a feature of CVE or of Microsoft's way of using CVE? It would seem this vulnerability would still benefit from having a common ID to be referenced in various contexts (e.g. vulnerability research). Maybe there needs to be another numbering system that will enumerate these kinds of cases and doesn't depend on the vendor.

Foobar8568 · 25m ago
We have had cases where Purview was missing logs. Fun stuff when we tried to put together a postmortem at my work.

Microsoft tools can't be trusted anymore; something really broke since COVID...

degamad · 2h ago
One thing that's not clear in the write-up here: *which* audit log is he talking about? Sharepoint file accesses? Copilot actions? Purview? Something else?
RachelF · 1h ago
Lots of things aren't clear.

Copilot is accessing the indexed contents of the file, not the file itself, when you tell it not to access the file.

The blog writer/marketer needs to look at the index access logs.

internetter · 1h ago
> The blog writer/marketer needs to look at the index access logs.

How can you say this if Microsoft is issuing a fix?

usr1106 · 42m ago
How does their auditing even work? Auditing should happen at the kernel level, and I sure hope they don't have Copilot in their kernel. So how can any access go unaudited?

Well, the article did not say whether the unaudited access was possible in the opposite order from a fresh start: first ask without a reference and get the content without an audit log entry, then ask without any limitation and get an audit log entry.

Did Copilot just keep a buffer/copy/context of what it had read earlier in the sequence described? I guess that would go without a log entry for any program. So what did MS change or fix? Producing extra audit log entries from user space?

catmanjan · 38m ago
In this scenario Copilot is performing RAG, so the auditing occurs when Copilot returns hits from the vector search engine it's connected to; it seems there was a bug where it would only audit when Copilot referenced the hits in its result.

The correct thing to do would be to have the vector search engine do the auditing (it probably already does; it just isn't exposed via Copilot), because it sounds like Copilot is deciding if/when to audit the things that it does...
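The layering catmanjan proposes can be sketched in a few lines (all names invented; this is not Copilot's actual architecture):

```python
# Sketch of auditing placed inside the search engine itself, so the
# component calling it (the assistant) cannot decide whether an access
# gets logged. Hypothetical names throughout.

class AuditedVectorSearch:
    def __init__(self, index):
        self.index = index          # doc_id -> text, standing in for vectors
        self.audit_log = []

    def search(self, query, user):
        # Every hit handed back is logged at this layer, unconditionally,
        # regardless of what the caller later does with it.
        hits = [doc_id for doc_id, text in self.index.items() if query in text]
        for doc_id in hits:
            self.audit_log.append((user, doc_id))
        return hits

engine = AuditedVectorSearch({"hr/medical.docx": "patient history notes"})

# Even if the assistant never cites the hit in its reply, the read
# is already recorded by the retrieval layer.
hits = engine.search("patient", "mallory")
assert hits == ["hr/medical.docx"]
assert engine.audit_log == [("mallory", "hr/medical.docx")]
```

With this design the "summarize without a reference" trick described in the article cannot suppress the log entry, because the assistant never touches the log.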

nzeid · 4h ago
Hard to count the number of things that can go wrong by relying directly on an LLM to manage audit/activity/etc. logs.

What was their bug fix? Shadow prompts?

jsnell · 4h ago
> Hard to count the number of things that can go wrong by relying directly on an LLM to manage audit/activity/etc. logs.

Nothing in this post suggests that they're relying on the LLM itself to append to the audit logs. That would be a preposterous design. It seems far more likely the audit logs are being written by the scaffolding, not by the LLM, but they instrumented the wrong places. (I.e. emitting on a link or maybe a link preview being output, rather than e.g. on the document being fed to the LLM as a result of RAG or a tool call.)

(Writing the audit logs in the scaffolding is probably also the wrong design, but at least it's just a bad design rather than a totally absurd one.)
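The instrumentation difference jsnell describes can be shown in a toy scaffold (hypothetical code, not Microsoft's real design):

```python
# Two places a scaffold might emit an audit record. Logging at retrieval
# captures every document the model saw; logging at citation misses any
# document the model used but did not cite. All names are illustrative.

audit_log = []

def retrieve(doc_id, user, docs):
    """Correct placement: record the read when the document is fed to
    the model, whether or not the answer ends up citing it."""
    audit_log.append(("read", user, doc_id))
    return docs[doc_id]

def answer_with_citation_audit(doc_id, user, docs, cite=True):
    """Buggy placement: the record is emitted only when a citation link
    appears in the output."""
    content = docs[doc_id]                       # document reaches the model...
    if cite:
        audit_log.append(("read", user, doc_id)) # ...but is logged only here
    return f"summary of {content}"

docs = {"salary.xlsx": "confidential numbers"}

# Correct placement: access is logged unconditionally.
audit_log.clear()
retrieve("salary.xlsx", "alice", docs)
assert ("read", "alice", "salary.xlsx") in audit_log

# Buggy placement: asking for "no reference" silently drops the entry.
audit_log.clear()
answer_with_citation_audit("salary.xlsx", "bob", docs, cite=False)
assert audit_log == []   # the data was read, yet nothing was recorded
```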

nzeid · 4h ago
Heard, but since the content or its metadata must be surfaced by the LLM, what's the fix?
nzeid · 4h ago
Thinking about this a bit - you'd have to isolate any interaction the LLM has with any content to some sort of middle end that can audit the LLM itself. I'm a bit out of my depth here, though. I don't know what Microsoft does or doesn't do with Copilot.
verandaguy · 4h ago
I'm very sceptical of using shadow prompts (or prompts of any kind) as an actual security/compliance control or enforcement mechanism. These things should be done using a deterministic system.
ath3nd · 3h ago
I bet you are a fan of OpenAI's groundbreaking study mode feature.
gpm · 4h ago
I'd hope that if a tool the LLM uses reveals any part of the file to the LLM it counts as a read by every user who sees any part of the output that occurred after that revelation was added to the context.
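gpm's rule can be modeled in a short sketch (purely illustrative, with invented names):

```python
# Toy model: once a tool call reveals part of a file into the conversation
# context, every user who later sees output from that context is recorded
# as having read the file.

class Conversation:
    def __init__(self):
        self.context_files = set()  # files whose content entered the context
        self.read_log = []

    def tool_reads_file(self, path):
        self.context_files.add(path)

    def show_output_to(self, user):
        # Everything in context may have influenced this output, so every
        # context file is attributed as a read by this viewer.
        for path in self.context_files:
            self.read_log.append((user, path))

conv = Conversation()
conv.show_output_to("alice")           # nothing in context yet: no reads logged
conv.tool_reads_file("budget.xlsx")
conv.show_output_to("bob")             # bob sees post-revelation output
assert conv.read_log == [("bob", "budget.xlsx")]
```

This is conservative by design: it attributes reads even when the file did not visibly influence the output, which is the safe direction for an audit trail.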
downrightmike · 3h ago
Shadow copies
dmitrijbelikov · 36m ago
Nobody usually bothers with logging actions on files; it is like that almost everywhere. Downloading files is not a joke, there are many nuances, for example:

  - format
  - where to store
  - logging
  - info via headers
zavec · 2h ago
Just to make sure I'm understanding footnote one correctly: it shows up (sometimes before and hopefully every time now) as a copilot event in the log, and there's no corresponding sharepoint event?

From a brief glance at the O365 docs, it seems like the `AISystemPluginData` field indicates that the event in the screenshot showing the missing access is a Copilot event (or maybe they all get collapsed into one event; I'm not super familiar with O365 audit logs), and I'm inferring from the footnote that there's not another SharePoint event somewhere in either the old or new version. But if there is one, that could at least be a mitigation if you needed to do such a search on the activity before the fix.

jayofdoom · 4h ago
Generally speaking, anyone can file a CVE. Go file one yourself and force their response. This blogpost puts forth reasonably compelling evidence.
fulafel · 18m ago
Not exactly.

There are several CVE numbering authorities, and some of them (including the original MITRE, national CERTs, etc.) accept submissions from anyone, but there's evaluation and screening. Since Microsoft is its own CNA, most of them probably wouldn't issue an MS CVE without some kind of exceptional reason.

thombles · 2h ago
Is there value in requesting a CVE for a service that only Microsoft runs? What's a user supposed to do with that?
aspenmayer · 4h ago
It’s true. The form is right here. When they support PGP, I suspect they know what they’re doing and why, and have probably been continuously doing so for longer than I have been alive. Just look at their sponsors and partners.

https://cveform.mitre.org/

Please only use this for legitimate submissions.

db48x · 4h ago
Fun, but it doesn’t deserve a CVE. CVEs are for vulnerabilities that are common across multiple products from multiple sources. Think of a vulnerability in a shared library that is used in most Linux distributions, or is statically linked into multiple programs. Copilot doesn’t meet that criterion.

Honestly, the worst thing about this story is that apparently the Copilot LLM is given the instructions to create audit log entries. That’s the worst design I could imagine! When they use an API to access a file or a URL, the API should create the audit log. This is just engineering 101.

gpm · 4h ago
Huh, there are CVEs for windows components all the time, random example: https://msrc.microsoft.com/update-guide/vulnerability/CVE-20...

Including for end user applications, not libraries, another random example: https://msrc.microsoft.com/update-guide/vulnerability/CVE-20...

ecb_penguin · 3h ago
> CVEs are for vulnerabilities that are common across multiple products from multiple sources.

This is absolutely not true. I have no idea where you came up with this.

> Honestly, the worst thing about this story is that apparently the Copilot LLM is given the instructions to create audit log entries.

That's not at all what the article says.

> That’s the worst design I could imagine!

Ok, well, that's not how they designed it.

> This is just engineering 101.

Where is the class for reading 101?

HelloImSteven · 2h ago
CVEs aren’t just for common dependencies. The “Common” part of the name is about having standardized reporting that over time helps reveal common issues occurring across multiple CVEs. Individually they’re just a way to catalog known vulnerabilities and indicate their severity to anyone impacted, whether that’s a hundred people or billions. There are high severity CVEs for individual niche IoT thermostats and light strips with obscure weaknesses.

Technically, CVEs are meant to only affect one codebase, so a vulnerability in a shared library often means a separate CVE for each affected product. It’s only when there’s no way to use the library without being vulnerable that they’d generally make just one CVE covering all affected products. [1]

Even ignoring all that, people are incorporating Copilot into their development process, which makes it a common dependency.

[1]: https://www.redhat.com/en/topics/security/what-is-cve

immibis · 2h ago
More accurately, CVEs are for vulnerabilities that may be present on many systems. Then, the CVE number is a reference point that helps you when discussing the vulnerability, like asking whether it's present on a particular system, or what percentage of systems are patched. This vulnerability was only present on one system, so it doesn't need a CVE number. It could have a Microsoft-assigned bug number, but it doesn't need a CVE.
fulafel · 12m ago
This may be a stated reason but it's questionable logic. There are of course many cases where people need to reference and discuss this vulnerability and its impact.
heywire · 4h ago
I am so tired of Microsoft cramming Copilot into everything. Search at $dayjob is completely borked right now. It shows a page of results, but then immediately pops up some warning dialog you cannot dismiss that Copilot can’t access some file “” or something. Every VSCode update I feel like I have to turn off Copilot in some new way. And now apparently it’ll be added to Excel as well. Thankfully I don’t have to use anything from Microsoft after work hours.
troad · 3h ago
> Every VSCode update I feel like I have to turn off Copilot in some new way.

This has genuinely made me work on switching to neovim. I previously demurred because I don't trust supply chains that are random public git repos full of emojis and Discords, but we've reached the point now where they're no less trustworthy than Microsoft. (And realistically, if you use any extensions on VS Code you're already trusting random repos, so you might as well cut out the middle man with an AI + spyware addiction and difficulties understanding consent.)

TheRoque · 3h ago
Same. Actually made me switch to neovim more and more. It's a great time to do so, with the new native package manager (now working in nightly 0.12)
candiddevmike · 4h ago
RE: VSCode copilot, you're not crazy, I'm seeing it too. And across multiple machines, even with settings sync enabled, I have to periodically go on each one and uninstall the copilot extension _again_. I'll notice the Add to chat... in the right click context menu and immediately know it got reinstalled somehow.

I'd switch to VSCodium but I use the WSL and SSH extensions :(

userbinator · 4h ago
> Thankfully I don’t have to use anything from Microsoft after work hours.

There are employers where you don't have to use anything from Microsoft during work hours either.

keyle · 4h ago
Everything except the best thing they could have brought back: Clippy! </3
fragmede · 2h ago
So Louis Rossmann put out a YouTube video encouraging internet users to change their profile pictures to an image of Clippy, as a form of silent protest against unethical conduct by technology companies, so it's making a comeback!
sgentle · 2h ago
The coercion will continue until metrics improve.
troad · 3h ago
Microsoft's ham-fisted strategy for trying to build a moat around its AI offering, by shoving everyone's documents in it without any real informed consent, genuinely beggars belief.

It will not successfully create a moat - turns out files are portable - but it will successfully peeve a huge number of users and institutions off, and inevitably cause years of litigation and regulatory attention.

Are there no adults left at Microsoft? Or is it now just Copilot all the way up?

p_ing · 2h ago
Copilot pulls from the substrate, like many other apps. No files are stored in Copilot. They’re usually in ODSP but could be in Dataverse or a non-Microsoft product like Confluence (there goes your moat!).
QuadmasterXLII · 4h ago
This seems like a five alarm fire for HIPPA, is there something I’m missing?
Spooky23 · 4h ago
It’s a bug. He reported it, they fixed it.

It is not a five alarm fire for HIPAA. HIPAA doesn’t require that all file access be logged at all. HIPAA also doesn’t require that a CVE be created for each defect in a product.

End of the day, it’s a hand-wavy, “look at me” security blog. Don’t get too crazy.

waffleiron · 2h ago
I am more on the privacy side of things like HIPAA, but I would like to link the following.

https://www.hhs.gov/sites/default/files/january-2017-cyber-n...

loeg · 4h ago
It's HIPAA.
adzm · 4h ago
The HIPAA hippo certainly encourages this confusion
ivewonyoung · 4h ago
It's HIPPA now for all intensive purposes.
overgard · 2h ago
I don’t know much about audit logs, but the more concerning thing to me is it sounds like it’s up to the program reading the file to register an access? Shouldn’t that be something at the file system level? I’m a bit baffled why this is a copilot bug instead of a file system bug unless copilot has special privileges? (Also to that: ick!)
IcyWindows · 2h ago
I suspect this might be typical RAG, where there is a vector index or chunked data it looks at.
conartist6 · 19m ago
Lie, cheat.
xet7 · 4h ago
sub7 · 43m ago
Windows and any software coming out of Redmond today is pure spyware with little to zero utility.

This Clippy 2.0 wave of apps will obviously be rejected by the market but it can't come soon enough.

The higher $msft gets, the more pressure they have to be invasive and shittify everything they do.

Josh5 · 4h ago
Are they even sure that the AI accessed the content that second time? LLMs are really good at making up shit. I have tested this by asking various LLMs to scrape data from my websites while watching access logs. Many times they don't, and just rely on some sort of existing data or spout a bunch of BS. Gemini is especially bad like this. I have not used Copilot myself, but my experience with other AI makes me curious about this.
bongodongobob · 4h ago
This is it. M365 uses RAG on your enterprise data that you allow it to access. It's not actually accessing the files directly in the cases he provided. It's working as intended.
albert_e · 2h ago
If this is indeed how Copilot is architected, then it needs clear documentation -- that it is a non-audited data store.

But how then did MS "fix" this bug? Did they stop pre-ingesting, indexing, and caching the content? I doubt that.

Pushing (defaulting) organizations to feed all their data to Copilot and then not providing an audit trail of data access on that replica data store -- feels like a fundamental gap that should be caught by a security 101 checklist.

crooked-v · 4h ago
If that's the case, then as noted in the article, the 'as intended' is probably violating liability requirements around various things.
sailfast · 2h ago
Correct. It is precisely that a user can ask about someone’s medical history (or whatever else) without it being reported that would be a violation in any heavily audited system. LLM summaries break the compliance.
thenaturalist · 4h ago
Rarely have I seen corporate incentives so aligned to overhype the capabilities of a technology while it is as raw and unpolished as this one.

The bubble bursting will be epic.

stogot · 1h ago
Remember when CISA called Microsoft’s security culture deficient?

https://www.cisa.gov/sites/default/files/2025-03/CSRBReviewO...

And remember when the Microsoft CEO responded that they will care about security above all else?

https://blogs.microsoft.com/blog/2024/05/03/prioritizing-sec...

Doesn’t seem they’re doing that does it?

userbinator · 1h ago
They do care about security --- they care a lot about telling you about it.
micromacrofoot · 4h ago
AI induced hysteria is probably wider spread than initially thought, these people are absolutely insane