Heads up, global outage for all CrowdStrike Window...
# _general
j
1. Click on See advanced repair options on the Recovery screen.
2. In the Advanced Repair Options menu, select Troubleshoot.
3. Next, choose Advanced options.
4. Select Startup Settings.
5. Click on Restart.
6. After your PC restarts, you will see a list of options. Press 4 or F4 to start your PC in Safe Mode.
7. Open Command Prompt in Safe Mode.
8. In the Command Prompt, navigate to the drivers directory: cd \windows\system32\drivers
9. To rename the CrowdStrike folder, use: ren CrowdStrike CrowdStrike_old
1. Boot Windows into Safe Mode or the Windows Recovery Environment.
2. Navigate to the C:\Windows\System32\drivers\CrowdStrike directory.
3. Locate the file matching “C-00000291*.sys” and delete it (rough command sketch below).
4. Boot the host normally.
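From Safe Mode or WinRE, that deletion is roughly this in PowerShell (a sketch, not CrowdStrike's exact wording; assumes Windows is on C:, and under WinRE the offline volume may use a different drive letter):

```powershell
# Sketch: remove the bad channel file (cmd equivalent: del C-00000291*.sys in that folder).
Set-Location C:\Windows\System32\drivers\CrowdStrike
Get-ChildItem -Filter "C-00000291*.sys" | Remove-Item -Force
```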
m
This is a total shit show. Almost every one of our endpoints and VDAs was sitting at a blue screen.
j
Workaround Steps:
• Reboot the host to give it an opportunity to download the reverted channel file. If the host crashes again, then:
  ◦ Boot Windows into Safe Mode or the Windows Recovery Environment
  ◦ Navigate to the C:\Windows\System32\drivers\CrowdStrike directory
  ◦ Locate the file matching “C-00000291*.sys”, and delete it.
  ◦ Boot the host normally.
Note: Bitlocker-encrypted hosts may require a recovery key.

Latest Updates
• 2024-07-19 05:30 AM UTC | Tech Alert Published.
• 2024-07-19 06:30 AM UTC | Updated and added workaround details.
• 2024-07-19 08:08 AM UTC | Updated
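On the Bitlocker note: if your recovery keys are escrowed to on-prem AD, a domain admin can pull them with something like this (a hedged sketch; assumes the ActiveDirectory RSAT module and AD-backed key escrow rather than Entra ID/Intune, and the hostname is a placeholder):

```powershell
# Sketch: list BitLocker recovery passwords stored in AD for one computer object.
Import-Module ActiveDirectory

$computer = Get-ADComputer -Identity "VDA-HOSTNAME"   # placeholder hostname
Get-ADObject -SearchBase $computer.DistinguishedName `
             -Filter "objectClass -eq 'msFVE-RecoveryInformation'" `
             -Properties msFVE-RecoveryPassword |
    Select-Object Name, msFVE-RecoveryPassword
```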
m
yeah the endpoints are what's going to hurt us the worst. non-persistent VDAs on an image from 2 weeks ago are easy to fix
j
Updated 42 minutes ago. Looks like they've reverted the bad channel file, and the revert is somehow reaching boxes before the bad .sys crashes them again.
We're having mixed success with the double reboot, but more successes than failures at this point.
n
This caused a global ground stop at AA, United, and Delta. We're hit with it, too, though I don't know how bad. Hopefully enough that we kick them to the curb, if I'm being honest.
s
Anyone seen any way to recover Azure based VMs?
n
If the workaround is to detach the old disk and attach it as a secondary to another VM, wouldn't that work? Shut down the Azure VM, snapshot the disk, convert to managed disk, attach to new VM, then vice versa?
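In Az PowerShell that swap might look roughly like this (a sketch with placeholder names; assumes managed disks and an existing healthy "rescue" VM):

```powershell
# Sketch only: copy a crashed VM's OS disk, attach the copy to a healthy VM, clean it, swap it back.
$rg       = "rg-vdas"          # placeholder resource group
$badVm    = "vda-broken-01"    # placeholder VM names
$rescueVm = "vda-rescue-01"
$location = "eastus2"

Stop-AzVM -ResourceGroupName $rg -Name $badVm -Force

# Snapshot the broken OS disk and turn the snapshot into a new managed disk.
$vm   = Get-AzVM -ResourceGroupName $rg -Name $badVm
$snap = New-AzSnapshot -ResourceGroupName $rg -SnapshotName "$badVm-snap" -Snapshot (
            New-AzSnapshotConfig -SourceUri $vm.StorageProfile.OsDisk.ManagedDisk.Id `
                                 -Location $location -CreateOption Copy)
$disk = New-AzDisk -ResourceGroupName $rg -DiskName "$badVm-repair" -Disk (
            New-AzDiskConfig -SourceResourceId $snap.Id -Location $location -CreateOption Copy)

# Attach the copy as a data disk on the rescue VM, then from inside that VM delete
# <driveletter>:\Windows\System32\drivers\CrowdStrike\C-00000291*.sys on the attached volume.
$helper = Get-AzVM -ResourceGroupName $rg -Name $rescueVm
Add-AzVMDataDisk -VM $helper -Name $disk.Name -ManagedDiskId $disk.Id -Lun 1 -CreateOption Attach
Update-AzVM -ResourceGroupName $rg -VM $helper

# After the cleanup: detach the copy, swap it in as the broken VM's OS disk, and start it.
Remove-AzVMDataDisk -VM $helper -DataDiskNames $disk.Name
Update-AzVM -ResourceGroupName $rg -VM $helper
Set-AzVMOSDisk -VM $vm -ManagedDiskId $disk.Id -Name $disk.Name
Update-AzVM -ResourceGroupName $rg -VM $vm
Start-AzVM -ResourceGroupName $rg -Name $badVm
```

That's essentially the manual version of what the az vm repair tooling automates (it comes up again further down).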
s
I tried but may have screwed something up as it fails to boot with: \Windows\system32\winload.efi error code 0xc000000e
m
FYI, Microsoft is impacted by another issue
affecting M365
l
My current place is in the clear because we use S1, but my prior place, oof - major impact, even servers.
n
90% of our Cloud Connectors are currently down, and that's just the tip of the iceberg.
m
if someone knows the proper way to detach a disk from one VM and get it attached to another VM, please let me know. i've had to power down a bad vm, snapshot it, convert the snap to a disk, attach the disk as a secondary drive to a working VM, delete the shit file, detach and re-attach. really fun and speedy process ........
n
I'm curious if there's a better way, because that's precisely what I do when I want to restore a snapshot when working on my master images. It's awful.
n
Honestly wonder how CS survives this.
m
QA is so overrated....
l
Their stock is already down 20%, they might not.
n
lol, I was literally just checking their stock.
m
just think about it: worldwide there are maybe a million machines running CrowdStrike, with an outage spanning multiple hours
n
We have over 11k persistent VDAs currently unregistered...
j
This is what happens when people use an agile approach to code and it's not tested correctly. Most orgs just accept the vendor pushing updates like this, with auto-update on in production, without enough testing.
m
people never learn
seen this so many times
n
I just think it's nuts that this went out globally all at once. No phased approach whatsoever.
m
a reason why we used Shavlik before
didn't trust patches
j
It's being referred to as the worst IT disaster ever with several governments setting up task forces to help manage it.
Crazy
n
lolol...
😁 2
You don't say.
l
oof
Or even separate industries. Maybe don't do all 4 major airlines at once?
m
Or banks or health
n
It was way more than 4, lol
Hospitals are the big one. No surgeries will happen if they can't monitor anesthesia, etc.
And I imagine some people will pass away, and lawsuits will be brought against CS.
l
image.png
I worked with a guy that did Desktop Support and that was ALL HE EVER DID for over a decade. Guess he was on to something.
n
Reminiscent of post-9/11.
j
This was a tweet I saw earlier in the day.
😆 1
😅 1
r
I can't wrap my mind around how this update could have been released. It seems like even the most basic testing would have revealed the problem and stopped the rollout.
p
it's Clownstrike. users do the testing. Also, did they not have the same exact issue a couple of weeks ago? Or was I dreaming that?
j
It is rather mind-blowing, isn't it? It's like the IT pandemic.
j
image.png
n
So how is everyone validating VMs in Azure? With no local console access, we're basically blind as to why VMs aren't coming online. We may wind up provisioning new VMs for these users because manually remediating them with disk swaps is insanity.
p
rebooting them a bunch of times.
n
That has worked for you?
p
shockingly so. MS recommended it also.
j
I'd say rebooting up to 10 times has worked for maybe 60-75% of our endpoints. We had a backlog of BitLocker key requests, and while desktop support was on the floor waiting for a DA to fetch said keys, they were told to just reboot.
p
"We have received feedback from customers that several reboots (as many as 15 have been reported) may be required, but overall feedback is that reboots are an effective troubleshooting step at this stage." https://azure.status.microsoft/en-gb/status
n
Are you rebooting them in Azure or through Citrix Cloud?
p
I've been doing via Azure. But the only affected machines i have are infra, as the VDI are non persistent.
n
Hmm, okay. These are persistent VDAs, not infrastructure.
p
Mine is a really small sample also.
n
600+ here
p
probably easier for you to do via cloud console.
good luck.
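If anyone wants to fire those reboots off in bulk instead of clicking through the portal, a rough Az PowerShell loop (placeholder resource group name; assumes the affected VDAs sit in one resource group):

```powershell
# Sketch: kick off restarts for every allocated VM in a resource group without waiting on each one.
# Crashed-but-allocated VMs still report "VM running" from Azure's point of view.
$rg = "rg-citrix-vdas"   # placeholder resource group name

Get-AzVM -ResourceGroupName $rg -Status |
    Where-Object { $_.PowerState -eq "VM running" } |
    ForEach-Object { Restart-AzVM -ResourceGroupName $rg -Name $_.Name -NoWait }
```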
s
All I have been doing is letting the VDIs crash and reboot in Azure, and they have slowly been coming back. I really do not want to have to mount the OS drive and delete the file on every one of them.
n
I'm on reboot 14 of some Azure VMs and no dice so far.
s
It is crazy how random it is for one to come back. I had ~200 this morning when I was called in and it is now down to 54.
n
One of the guys on another team ran this script inside Azure CLI and it resolved the issue. Basically automates removing the file. Took under 10 minutes to fix a single VM: https://github.com/Azure/repair-script-library/blob/main/src/windows/win-crowdstrike-fix-bootloop.ps1
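For anyone else going that route, it's driven through the az vm-repair extension, roughly like this (resource group and VM names are placeholders, and the run-id is an assumption based on the script name, so verify it against the repo first):

```powershell
# Sketch: run the repair-script-library bootloop fix against a broken VM via the vm-repair extension.
az extension add --name vm-repair

# Creates a temporary repair VM with a copy of the broken VM's OS disk attached.
# (There is also an --unlock-encrypted-vm flag for BitLocker-encrypted OS disks.)
az vm repair create --resource-group rg-vdas --name vda-broken-01 `
    --repair-username repairadmin --repair-password '<strong-password>' --verbose

# Run the fix script on the repair VM; run-id assumed to match the linked script name.
az vm repair run --resource-group rg-vdas --name vda-broken-01 `
    --run-id win-crowdstrike-fix-bootloop --run-on-repair --verbose

# Swap the repaired OS disk back onto the original VM and clean up the repair resources.
az vm repair restore --resource-group rg-vdas --name vda-broken-01 --verbose
```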
s
So I tested this and it does work, but the new disk it creates has none of the Citrix tags. Will that cause issues later?
n
Honestly, not sure. We remediated 700+ VDAs over the weekend using that method.
In other news, CrowdStrike stock is down 31% since Thursday's close.
r
Time to buy. 😁
n
Buying the dip doesn't work if the dip continually gets deeper, lol.