# _general
j
Local is typically better for VDA registration. You should also probably look at the Advanced Health Check stuff for StoreFront and make sure all your STA config is in a happy place: https://docs.citrix.com/en-us/storefront/current-release/configure-manage-stores/advanced-store-settings.html#advanced-health-check
This might also be pertinent: https://docs.citrix.com/en-us/tech-zone/learn/tech-briefs/local-host-cache-ha-daas.html#ha-mode-for-resource-locatio[…]ishing-different-appsdesktops
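If you want a quick read on whether the STA/XML side is even reachable and how it's responding, a rough Python sketch like this can help - the connector hostnames are placeholders, and the /scripts/ctxsta.dll GET is just a reachability probe, not a substitute for the StoreFront-side health check:

```python
# Rough reachability/latency probe for Cloud Connector STA/XML endpoints.
# Hostnames below are placeholders - swap in your own Cloud Connectors.
import http.client
import socket
import time

CONNECTORS = ["cc1.east.example.com", "cc2.east.example.com", "cc3.east.example.com"]

def tcp_latency_ms(host, port, timeout=5):
    """Return TCP connect time in ms, or None if the port is unreachable."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.perf_counter() - start) * 1000
    except OSError:
        return None

for host in CONNECTORS:
    for port in (80, 443):  # XML/STA traffic is typically served on 80 and/or 443
        ms = tcp_latency_ms(host, port)
        print(f"{host}:{port} -> " + ("unreachable" if ms is None else f"{ms:.1f} ms"))
    # Optional: hit the STA path itself (assumes HTTP on 80; adjust if HTTPS-only)
    try:
        conn = http.client.HTTPConnection(host, 80, timeout=5)
        conn.request("GET", "/scripts/ctxsta.dll")
        print(f"{host} STA GET -> HTTP {conn.getresponse().status}")
        conn.close()
    except OSError as exc:
        print(f"{host} STA GET failed: {exc}")
```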
c
I should mention we aren't publishing Desktops, just Apps. I will look at the Advanced Health Check. I am worried there is a latency or routing problem, but I have no idea where to start troubleshooting that.
j
What breaks in your DR scenario - as in, which components are gone that are causing the pain?
c
Backend database for a custom application. The app does a lot of DB calls so the app server needs to be reeeaaal close.
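Just to put rough numbers on why it has to be close - the call count and RTTs below are totally made up, but the multiplication is the point:

```python
# Made-up numbers: shows how per-query round trips stack up once the app
# server and the database are far apart.
queries_per_user_action = 500      # assumption: a chatty app
rtt_same_subnet_ms = 0.5           # assumption: app and DB co-located
rtt_cross_country_ms = 40.0        # assumption: east coast <-> Denver

for label, rtt in [("same subnet", rtt_same_subnet_ms),
                   ("cross-country", rtt_cross_country_ms)]:
    total_s = queries_per_user_action * rtt / 1000
    print(f"{label}: ~{total_s:.1f} s of pure network wait per user action")
```

Roughly a quarter of a second on the same subnet versus about 20 seconds cross-country with those assumptions, which is why the app server has to follow the database.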
j
I meant more around your registration scenario and the connection failures. Sounds like in a prod scenario you have two resource locations with Cloud Connectors in each, but the NetScaler Gateway is only in one DC? Where is StoreFront in this scenario - is it a stretched deployment or two StoreFront deployments, one per site?
c
Everything but the VDAs is on the east coast: NetScaler, StoreFront, Cloud Connectors. The DR was set up as a standalone site. They aren't linked, from what I can tell.
Looking at a report: we would see about 100 Connection Timeouts or Logon Timeouts before the DR cutover. After the cutover there are 900 timeouts. I am pretty lost.
j
Think I am kind of lost on what your setup looks like here/what DR looks like - can't pull a picture into my brain based on the above
c
I can try to clarify. We have our full prod on the east coast: that is where our NetScaler HA pair lives, doing Gateway for external access and a VIP to the StoreFronts internally. There are 3 Cloud Connectors we use for STA and XML. The VDAs live in the same data center as everything else, but the main application our users use, call it Custom1, has a requirement to live on the same subnet as its database. Last week that database went belly up, so they had to fail over to Denver, which has a replicated database for Custom1, and I had to move the app VDAs to Denver to follow the database. Now users are still coming in through the NetScalers on the east coast while the main app servers for Custom1 are in Denver. I think that is adding too much latency or something, which is causing users to hit timeout errors and not get the app they need.
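To put a number on the latency theory, I am going to time TCP connects from one of the east-coast boxes to a couple of the Denver VDAs on the HDX ports - hostnames below are placeholders:

```python
# Quick RTT sample from an east-coast box to the Denver VDAs on the HDX ports.
# Hostnames are placeholders; 1494 = ICA, 2598 = session reliability (CGP).
import socket
import statistics
import time

DENVER_VDAS = ["custom1-vda01.denver.example.com", "custom1-vda02.denver.example.com"]
SAMPLES = 5

def connect_ms(host, port, timeout=5):
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        return (time.perf_counter() - start) * 1000

for host in DENVER_VDAS:
    for port in (1494, 2598):
        times = []
        for _ in range(SAMPLES):
            try:
                times.append(connect_ms(host, port))
            except OSError:
                pass
        if times:
            print(f"{host}:{port} avg {statistics.mean(times):.1f} ms "
                  f"(min {min(times):.1f} ms over {len(times)} samples)")
        else:
            print(f"{host}:{port} unreachable")
```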
j
Ah, I see. So in your "broken state" your NetScaler, StoreFront, and Cloud Connectors are still on the east coast, but your application and the VDAs have been moved to the Denver site. So all functions associated with brokering and VDA registration still take the same initial registration/access/enumeration path, but then traverse your network between the east coast and Denver to actually connect to the VDA?
👍 1
c
So this is where my gap in knowledge is. There is infrastructure in Denver, but it seems to be completely separate. They have standalone Cloud Connectors, StoreFront, and NetScalers, with their own DNS, so I don't think they are set up right for this.
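One thing I am going to check is whether the two sites' DNS hand out different answers for the StoreFront/Gateway names - resolver IPs and FQDNs below are placeholders:

```python
# Compare what the east vs. Denver DNS servers return for the key FQDNs.
# All names and resolver IPs are placeholders - substitute your own.
import subprocess

RESOLVERS = {"east": "10.1.0.10", "denver": "10.2.0.10"}
NAMES = ["storefront.example.com", "gateway.example.com"]

for name in NAMES:
    for site, resolver in RESOLVERS.items():
        # "nslookup <name> <server>" queries that specific DNS server
        result = subprocess.run(["nslookup", name, resolver],
                                capture_output=True, text=True, timeout=10)
        print(f"--- {name} via {site} ({resolver}) ---")
        print(result.stdout.strip())
```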
j
It sort of sounds like a full multi-site scenario has been set up (the above sounds like a solid base), but it doesn't look like you are actually using that second set of infrastructure if you are just failing over the VDAs and making no other changes anywhere... which means everything else in the second site is set up, but more for a full DR. If your VDAs point back to the Cloud Connectors in the east even when they are sitting in Denver, then you need the network connectivity in place (including between the NetScaler in the east and the VDAs in Denver) etc. When you fail over the VDAs, are they changing hosting connections etc.?
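Quick way to see what a given VDA is actually pointed at, assuming ListOfDDCs is set locally in the registry rather than pushed by policy - run it on the VDA itself (Windows only):

```python
# Read the controllers/Cloud Connectors this VDA is configured to register with.
# Run on the VDA. If the value is pushed by GPO/Citrix policy instead, it may not
# live under this key.
import winreg

KEY_PATH = r"SOFTWARE\Citrix\VirtualDesktopAgent"

try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        list_of_ddcs, _ = winreg.QueryValueEx(key, "ListOfDDCs")
        print("ListOfDDCs:", list_of_ddcs)
except FileNotFoundError:
    print("No ListOfDDCs under", KEY_PATH, "- check policy-based configuration instead")
```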
c
The network connectivity exists. I am starting to think the issue might be the RDS license server being far away. I see users getting new RDS licenses, and most of the ones having issues are the ones getting new licenses or reissues. Could be a coincidence though. The Denver VDAs are registering with the Denver Cloud Connectors, but the users are hitting the East Cloud Connectors to fulfill the XML/auth requirements. I saw this article and I am going to test it next week. Thoughts? https://support.citrix.com/article/CTX207038/application-not-launching-and-the-session-is-stuck-at-the-prelogon-state
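To sanity-check the licensing theory I am going to time the round trip from a Denver VDA to the license server - the hostname is a placeholder, and port 135 (RPC endpoint mapper) is only a reachability/latency proxy since RD Licensing itself uses RPC over dynamic ports:

```python
# Rough check from a Denver VDA: is the east-coast RDS license server reachable,
# and how long does a TCP handshake take? Hostname is a placeholder.
import socket
import time

LICENSE_SERVER = "rds-lic01.east.example.com"

start = time.perf_counter()
try:
    with socket.create_connection((LICENSE_SERVER, 135), timeout=5):
        print(f"{LICENSE_SERVER}:135 reachable in "
              f"{(time.perf_counter() - start) * 1000:.1f} ms")
except OSError as exc:
    print(f"{LICENSE_SERVER}:135 unreachable: {exc}")
```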
j
If your Gateway object in the east is using the east Cloud Connectors for STA, and StoreFront is using the east Cloud Connectors for enumeration, but your VDAs are using the Denver Cloud Connectors for registration, that can cause some weirdness for sure. That's where the advanced health check with StoreFront can come into play, along with ADC-based XML load balancing etc - you are in the fun stuff now 🙂 A quick test would be to see if things are any better when your VDAs in a failover state route via the Cloud Connectors in the east. Still wondering how the catalog looks from a power management standpoint - if you are using power management, that is.
c
Do you suggest I switch the Denver ones to the east coast?
j
Only for testing purposes - there are a few other considerations from a proper design perspective, but it will point you in the right direction (lots of unknowns in the mix currently).
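If you do try routing the Denver VDAs via the east Cloud Connectors for that test, something along these lines is the usual approach when ListOfDDCs is registry-based - connector names are placeholders, skip it if a Citrix policy/GPO owns the value, and change it back afterwards:

```python
# Test-only sketch: point a VDA's ListOfDDCs at the east Cloud Connectors and
# restart the Citrix Desktop Service (service name BrokerAgent) so it re-registers.
# Run elevated on the VDA. Connector FQDNs are placeholders. If policy/GPO manages
# ListOfDDCs, change it there instead - this local value would just be overwritten.
import subprocess
import winreg

EAST_CONNECTORS = "cc1.east.example.com cc2.east.example.com cc3.east.example.com"
KEY_PATH = r"SOFTWARE\Citrix\VirtualDesktopAgent"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "ListOfDDCs", 0, winreg.REG_SZ, EAST_CONNECTORS)

# "net stop/start" waits for the service state change before returning.
subprocess.run(["net", "stop", "BrokerAgent"], check=False)
subprocess.run(["net", "start", "BrokerAgent"], check=False)
```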