AI Malware Is Here: New Report Shows How Fake AI Tools Are Spreading Ransomware

38 points by karlperera · 16 comments · 6/1/2025, 4:36:03 PM · blog.talosintelligence.com ↗

Comments (16)

neilv · 23h ago
> malicious actors are exploiting its popularity by distributing a range of malware disguised as AI solutions’ installers and tools.

This sounds like the ordinary problem of ordinary malware in programs downloaded and installed off the Internet.

"AI malware is here" makes me think malware in the behavior of the "AI", or malware developed by AI.

Noumenon72 · 1d ago
"AI Malware Is Here" implies the AI is writing malware. You wouldn't say "IRS Malware Is Here" if people were distributing fake tax-filing tools with malware, which is all that's shown here.
AIorNot · 1d ago
I think what worries me is that AI lowers the barrier to entry for fraud - malware is malware, but an AI-built website with a landing page looks like some startup. Now with Veo 3 you can even generate talking-head video testimonials in an instant to go with a fake but authentic-looking website.


tough · 1d ago
Fake AI tools aren't AI Malware

I won't even click the link and give them a visit.

tokai · 1d ago
It's just BonziBuddy and browser toolbars once again, it seems.
tough · 1d ago
Malware is just a niche of adware anyway; the ads are mostly vectors for malware to get into your computer.

Use an adblocker, use Firefox, be free.

Terr_ · 22h ago
I suspect the advertising space would be very different if they were strongly liable for scams/viruses being distributed through their networks.

But somehow everything these days "needs" to be monopoly-scale, which means moderation/vetting must fall by the wayside.

e2le · 1d ago
How will websites survive if they can't serve malvertisements to users?
Avicebron · 1d ago
Ideally by the natural selection of providing something valuable (to the people who visit the site).
antonkar · 1d ago
Sadly, it's almost inevitable that we'll have another Bredolab-sized botnet (30 million computers, 1% of all machines at the time) built by cybercriminals, but this time it'll be an AI botnet.

As soon as AI components are useful to attackers, they'll be in botnets. And with such a size, you have more compute than OpenAI and can train a frontier misaligned model or modify an open source one.

The solution is to have not just ~10% of compute in clouds, as we do now, but at least 50% in SOTA clouds, so the clouds have more compute than the botnets. Add an AI-model App Store; NVIDIA will probably become like Apple:

it will have at least the most minimal checks on what runs on its hardware.

We'll have to do it eventually, but it's better to do it early than late.

It could be a unicorn startup that NVIDIA will want to buy (motivating gamers to put their GPUs in clouds is easy - same with others, really - since you can share $30-1500/month with them by renting their GPUs out from your cloud to corporations and other customers).

There is Salad, but they don't secure the hardware and software, so they can't really take on corporate clients. Amazon AWS, Azure, and others show that it's a real business - unicorn-sized just in the USA (even bigger if global) if you do the math.
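A rough back-of-envelope in Python, where every figure (number of enrolled GPUs, hourly rental rate, utilization, revenue split) is an assumption picked for illustration, not real market data:

    # All numbers below are assumptions for illustration, not real market data.
    gpus = 1_000_000          # assumed number of enrolled US gamer GPUs
    rate_per_hour = 0.20      # assumed rental price for a consumer GPU, $/hr
    utilization = 0.5         # assumed fraction of hours actually rented out
    owner_share = 0.6         # assumed revenue share paid to the GPU owner
    hours_per_month = 730

    gross_per_gpu = rate_per_hour * hours_per_month * utilization   # ~$73/month
    owner_payout = gross_per_gpu * owner_share                      # ~$44/month
    platform_cut = gross_per_gpu - owner_payout                     # ~$29/month

    print(f"Owner payout:     ~${owner_payout:.0f} per GPU per month")
    print(f"Platform revenue: ~${platform_cut * gpus / 1e6:.0f}M per month")

With these guesses the owner payout lands near the low end of the $30-1500/month range above, and the platform clears on the order of $350M/year; the top of that range would need much pricier cards, higher rates, and near-full utilization.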

christianqchung · 1d ago
>And with such a size, you have more compute than OpenAI and can train a frontier misaligned model or modify an open source one

That doesn't seem like a reasonable conclusion based on OpenAI and leading-lab expenditures. They need training data too; would one group actually be able to amass all of this? If you hijack 30 million random computers, that's not nearly as useful as 300K B200 GPUs for model training. Am I missing something?
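A rough back-of-envelope, where every figure is an assumption (per-GPU throughput, and the fraction of infected hosts that even have a usable GPU), just to make the comparison concrete:

    # All figures are assumptions for illustration, not measured specs.
    botnet_hosts = 30_000_000
    frac_with_usable_gpu = 0.05     # assumed: most infected machines lack a capable GPU
    consumer_gpu_flops = 25e12      # assumed ~25 TFLOP/s FP16 for a mid-range gaming card

    b200_count = 300_000
    b200_flops = 2e15               # assumed ~2 PFLOP/s FP16 per B200-class accelerator

    botnet_peak = botnet_hosts * frac_with_usable_gpu * consumer_gpu_flops
    cluster_peak = b200_count * b200_flops

    print(f"Botnet peak (GPU hosts only): {botnet_peak / 1e18:.0f} EFLOP/s")   # ~38
    print(f"Dedicated cluster peak:       {cluster_peak / 1e18:.0f} EFLOP/s")  # ~600
    # And the raw FLOP/s gap understates it: the cluster has fast interconnects
    # and stays online, while residential links and host churn make synchronized
    # training across a botnet far harder still.

So under these guesses the dedicated cluster is more than an order of magnitude ahead before interconnect or data are even considered.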

antonkar · 23h ago
It's a lot, but you're right that the more realistic scenario is just running an existing open source model - sadly, those are trivial to misalign.

So we'd better have SOTA clouds: people don't update their computers for months, and as Bredolab shows, their GPUs are up for grabs.

Ideally an international org would do it, but they are slow and countries don't cooperate enough, so a startup is more realistic.

karlperera · 1d ago
AI-generated threats to cybersecurity are real. AI tools are being used to generate code faster, and little or no technical knowledge is needed to carry out an attack - this was not possible before. Automated attacks are being driven with the help of AI too. APTs, deepfakes, and many other forms of threat are now common. See: https://www.cshub.com/threat-defense/articles/cyber-security...
TZubiri · 23h ago
"Chatgpt is just the tip of the iceberg, here's 47 tools that.. "

https://x.com/JuvidX/status/1664756189689774080

One of the disadvantages of openness: everyone can fork or wrap the API and MITM themselves in, adding little or no value.