# adobe
j
Saw this today: "Threat Actors Exploit Adobe ColdFusion CVE-2023-26360 for Initial Access to Government Servers" https://www.cisa.gov/news-events/cybersecurity-advisories/aa23-339a
s
Patches have been available since at least March: https://helpx.adobe.com/security/products/coldfusion/apsb23-25.html
d
Patch your servers. The patches for these exploits have been available for months. Also, one of the servers on that list was a CF2016 server. That server should have been upgraded ages ago.
s
(at work we run CVE checkers regularly and we update all libraries and servers pretty much as soon as a new version appears -- does CFML have an automated CVE checker?)
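For illustration, here's a rough sketch of one way to poll the public NVD API for ColdFusion advisories. The endpoint is the NVD 2.0 REST API; the response-field names are assumptions based on its published schema, so double-check them before relying on this:

```python
import requests

# Query the public NVD CVE API (v2.0) for Adobe ColdFusion advisories.
# The "vulnerabilities" / "cve" / "descriptions" field names below are an
# assumption about the NVD 2.0 response schema -- verify against the docs.
NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def coldfusion_cves(max_results=20):
    resp = requests.get(
        NVD_URL,
        params={"keywordSearch": "Adobe ColdFusion", "resultsPerPage": max_results},
        timeout=30,
    )
    resp.raise_for_status()
    for item in resp.json().get("vulnerabilities", []):
        cve = item.get("cve", {})
        desc = next(
            (d["value"] for d in cve.get("descriptions", []) if d.get("lang") == "en"),
            "",
        )
        yield cve.get("id"), cve.get("published"), desc

if __name__ == "__main__":
    for cve_id, published, desc in coldfusion_cves():
        print(f"{cve_id}  {published}  {desc[:80]}")
```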
j
I just posted for information. It's an interesting read.
b
Interesting read - thanks for sharing. These incident summaries are both from compromises that occurred after patches were available, but what's been top-of-mind for me is that both CVE-2023-26360 and CVE-2023-29300 were reported by Adobe as having been exploited/discovered in the wild prior to patch availability. So that's pretty good confirmation that there are some threat actors who are actively looking for / exploiting zero-day ACF vulnerabilities against some targets. Realistically, lots of organizations probably don't need to worry about zero-day vulnerabilities, but it's good to proactively take steps to make compromise and exploitation harder. And anecdotally there appears to be a sizable footprint of past-EOL versions of ColdFusion out there. If you're running past-EOL ColdFusion (and you really shouldn't), the following, done at a WAF or similar, can go a long way (rough sketch below):

- normalize the request URI
- block any requests with `..` in the URI path (which could be attempts to exploit directory traversal vulnerabilities)
- block any requests in which the normalized URI path starts with a case-insensitive `/CFIDE`
In addition to that, strict outbound network filtering can go a long way to stopping a full exploit chain and minimizing impact.
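For illustration only, here's a rough sketch of those filtering rules as a small Python WSGI middleware. The class name and the 403 behaviour are hypothetical, and in practice this belongs at the WAF / reverse-proxy layer rather than in application code:

```python
import posixpath
from urllib.parse import unquote

# Hypothetical middleware sketching the rules above; real deployments would
# do this at a WAF or reverse proxy, not inside the application.
class LegacyColdFusionFilter:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        raw_path = environ.get("PATH_INFO", "")
        # Many servers hand PATH_INFO already decoded; unquote() strips one
        # extra layer of percent-encoding in case it isn't.
        decoded = unquote(raw_path)
        # Collapse ./ and ../ segments to get a normalized path.
        normalized = posixpath.normpath(decoded)

        # Rule 1: any ".." in the URI path looks like traversal probing.
        # Rule 2: normalized path starting with /CFIDE (case-insensitive).
        if ".." in decoded or normalized.lower().startswith("/cfide"):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Forbidden"]

        return self.app(environ, start_response)
```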
👍 3
❤️ 1
Based on the incident summaries and other sources, it also sounds like an EDR agent on the server may have been bacon-saving technology, in terms of detection or prevention
s
As a background task these days, I'm constantly working on botnet detection and blocking -- and we're slowly building up a long list of problematic IPs around the world that are running active vulnerability scanners... most of them are targeting WordPress/Django and similar CMS systems that are often left unpatched/unsecured, but we see all sorts of bizarre patterns across requests (such as crawlers in Singapore and Moscow that are scanning legacy URLs that we haven't had active for over three years!).
b
Hmmmm... interesting @seancorfield. Maybe crawlers working from localized/international search engine indices (not Google) with stale data? And especially on public cloud hosts, I always wonder how much scanning is actively malicious vs. just a huge volume of bug bounty reconnaissance. (Though still problematic from an IP reputation pov)
s
With us it could be scraping public profile data (since we're a dating site). But the scripting botnets looking for vulnerabilities... I read that there are 1,000-1,500 such botnets active at any time, all busily scanning every website they can, just looking for ways in...
👍 1
d
@Brian Reilly IMO there's zero chance that the volume of bug bounty research is anywhere near the volume of malicious probing.
b
🤷‍♂️ I’ve tended to see huge volumes of “Fuzz Faster U Fool” and similar user agents at times, which is a pretty common tool for BB recon and crawling. I have a hunch there’s a large amount of redundant (different people) continual probing of public cloud address space, then trying to map any findings back to owner orgs/programs, mixed in with large amounts of genuinely malicious probing. But rate-limits and bot detection will block indiscriminately 😃