# cypress-cloud
  • a

    adventurous-dream-20049

    12/06/2022, 6:06 PM
    For SSO issues, please reach out to our support team at support@cypress.io; support is included in your paid plan.
  • a

    adventurous-beach-9295

    12/06/2022, 7:30 PM
    Thx
  • b

    broad-potato-69393

    12/06/2022, 8:34 PM
    Hi guys - is the Dashboard down for a lot (or all) of users? We've been consistently getting timeouts (10 mins) on all test runs all day today. Our test runs are usually around 6 mins.
  • a

    adventurous-dream-20049

    12/06/2022, 9:11 PM
    We are currently experiencing a partial outage affecting analytics, which you can check here: https://www.cypressstatus.com/ However, I'm not aware of anything else being affected. Happy to investigate though. Can you send an email to support@cypress.io? Support is included in your paid plan.
  • b

    broad-potato-69393

    12/06/2022, 9:34 PM
    Thanks Shawn - I noticed the partial outage. Just happened to coincide with our issues so I thought it might be related. Have filed a support ticket
  • i

    important-fish-42817

    12/07/2022, 10:35 AM
    Hi 🙂 This error happens when trying to start cloud tests from Visual Studio Code
  • m

    magnificent-finland-58048

    12/07/2022, 8:53 PM
    Here's a very ambiguous CI issue during parallelization. It happens because Google rolls out varying versions of Chrome, and if the parallelized CI machines do not all get the same version, you get the error below. Sporadic issue; it's the first time I saw it externally:
    You passed the --parallel flag, but we do not parallelize tests across different environments.
    
    This machine is sending different environment parameters than the first machine that started this parallel run.
    
    The existing run is: https://cloud.cypress.io/projects/7mypio/runs
    
    In order to run in parallel mode each machine must send identical environment parameters such as:
    
     - specs
     - osName
     - osVersion
     - browserName
     - browserVersion (major)
    
    This machine sent the following parameters:
    
    {
      "osName": "linux",
      "osVersion": "Ubuntu - ",
      "browserName": "Chrome",
      "browserVersion": "107.0.5304.121",
    https://github.com/muratkeremozcan/tour-of-heroes-react-cypress-ts/actions/runs/3642676076/jobs/6150068190
    To work around it, we have to ensure the same Chrome version on all CI machines (https://github.com/muratkeremozcan/tour-of-heroes-react-cypress-ts/pull/187/commits/f0d37f6fd9bea8d124a30135d2193a8d6c04abc0):
    - name: Install Specific Chrome Version
      run: |
        sudo apt-get install -y wget
        sudo wget -q https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
        sudo apt-get install ./google-chrome-stable_current_amd64.deb
    This adds CI minutes, though. It would be ideal if Cypress had a built-in solution for this when parallelizing.
  • a

    adventurous-dream-20049

    12/07/2022, 9:18 PM
    Thanks for reporting, Murat. The current workaround is to pin a specific version as you did, or to use one of the Cypress Docker images: https://github.com/cypress-io/cypress-docker-images/tree/master/browsers Currently this typically happens whenever a new browser version is released, since CI grabs the latest. I've also seen this happen when using ubuntu-latest 😢 The good news is we are actively investigating, so I'll loop in your CSM to keep you posted on any updates.
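    A minimal sketch of the Docker-image route, assuming GitHub Actions; the cypress/browsers image tag below is an assumption, so pick a current one from the cypress-docker-images repo:
    jobs:
      cypress-run:
        runs-on: ubuntu-22.04
        # Pin the whole job to a cypress/browsers container so every parallel
        # machine gets an identical Chrome build (image tag is an assumption).
        container:
          image: cypress/browsers:node18.12.0-chrome107
        strategy:
          # example: three parallel containers
          matrix:
            containers: [1, 2, 3]
        steps:
          - uses: actions/checkout@v3
          - uses: cypress-io/github-action@v5
            with:
              record: true
              parallel: true
              browser: chrome
            env:
              CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}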
  • a

    adventurous-dream-20049

    12/07/2022, 9:19 PM
    Hi 🙂 This error happens when trying to
  • m

    magnificent-finland-58048

    12/07/2022, 9:20 PM
    Thank you, thank you! The former (pinning a specific Chrome version) is preferable even though it costs CI minutes, because the latter requires a yml update whenever the Cypress or browser version changes.
  • b

    blue-battery-71202

    12/08/2022, 1:27 PM
    Hello guys, is there any way to group Cypress runs together on the Dashboard? We are using a GHA workflow that runs 4 parallel jobs, all running the cypress-github-action on different specs and env vars. I want all of them combined into a single run in the dashboard. Any tip to achieve parallel execution with different folders/env vars would be highly appreciated.
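    A hedged sketch of how jobs like these can be stitched into one Dashboard run with the cypress-io/github-action: give each job its own group name but the same ci-build-id. The job names, spec folders and env vars below are assumptions; only two of the four jobs are shown:
    jobs:
      ui-tests:
        runs-on: ubuntu-22.04
        steps:
          - uses: actions/checkout@v3
          - uses: cypress-io/github-action@v5
            with:
              record: true
              group: ui-tests                      # distinct group per job
              ci-build-id: ${{ github.run_id }}    # identical across all jobs
              spec: cypress/e2e/ui/**              # assumed spec folder
            env:
              CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}
              CYPRESS_environment: staging         # assumed example env var
      api-tests:
        runs-on: ubuntu-22.04
        steps:
          - uses: actions/checkout@v3
          - uses: cypress-io/github-action@v5
            with:
              record: true
              group: api-tests
              ci-build-id: ${{ github.run_id }}
              spec: cypress/e2e/api/**             # assumed spec folder
            env:
              CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}
    Because the ci-build-id matches, the Dashboard shows the groups under a single recorded run rather than as separate runs.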
  • d

    dry-noon-83565

    12/14/2022, 2:19 AM
    Hi all! I plan to onboard my web team (15 people) to Cypress Cloud. In the past we have been using the Cypress library for integration tests, and debugging test failures on GitHub PRs has been a pain point for us. We tested Cypress Cloud and really liked the feature. However, I'm concerned about the billing. We have about 100 integration tests, and we trigger the test workflow on every commit. The enterprise plan asks for $5 per 1000 test results, so if a PR has 10 commits on average, we will pay $5 per PR. This is quite expensive. I'm curious if the Cypress support team has any suggestions to reduce this cost. Thanks!
  • f

    fresh-doctor-14925

    12/14/2022, 7:11 AM
    I'd say you're better off speaking to sales@cypress.io regarding costs. That said, you could also consider moving to running your tests per merge rather than per commit.
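    For what it's worth, the difference is just the workflow trigger; a sketch, assuming GitHub Actions, a main base branch, and these file names (two separate workflow files):
    # .github/workflows/e2e-per-commit.yml (assumed name): runs on every push to an open PR
    on:
      pull_request:
        branches: [main]

    # .github/workflows/e2e-per-merge.yml (assumed name): runs once when the change lands on main
    on:
      push:
        branches: [main]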
  • d

    dry-noon-83565

    12/14/2022, 7:12 AM
    Thanks, I scheduled a meeting with them for tomorrow. Running tests per merge is too late though. We'd like to make sure no integration test is broken before the PR is merged.
  • d

    dry-noon-83565

    12/14/2022, 7:13 AM
    For example, if I made a commit and someone requested a change, should we not rerun the integration tests after the change? For unit tests, I would definitely rerun them.
  • f

    fresh-doctor-14925

    12/14/2022, 7:16 AM
    Sure. Though on the other hand, do you really want to run your full integration suite every time someone fixes a typo in a comment? It's a balancing act.
  • d

    dry-noon-83565

    12/14/2022, 7:53 AM
    How can the tests tell whether it's a comment update or a one-line config change that can take down the entire Facebook? If the test infrastructure isn't smart enough to detect dependencies, always rerunning the tests is the safest bet. That being said, I think a good balance would be to run the tests when we open a PR and again right before we merge it. I don't know if there is an event for the latter.
  • d

    dry-noon-83565

    12/14/2022, 8:56 AM
    Looks like a merge queue would be a good candidate for running the integration tests: https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/configuring-pull-request-merges/managing-a-merge-queue
  • d

    dry-noon-83565

    12/14/2022, 8:56 AM
    not yet publicly available though...
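    For reference, once merge queues are available on a repo, a workflow can run against the queued (speculative) merge commit via the merge_group event; a sketch assuming GitHub Actions and the cypress-io/github-action:
    on:
      merge_group:
        types: [checks_requested]   # fired when the queue needs checks on the merge commit

    jobs:
      e2e:
        runs-on: ubuntu-22.04
        steps:
          - uses: actions/checkout@v3
          - uses: cypress-io/github-action@v5
            with:
              record: true
            env:
              CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}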
  • f

    fresh-doctor-14925

    12/14/2022, 9:01 AM
    > How can the tests tell whether it's a comment update or a one-line config change that can take down the entire Facebook? It can't. But if you're that worried about a commit causing a catastrophic outage, and all 100 of your specs are absolutely necessary, then it's not going to be cheap. If you're testing at such a scale, you can probably negotiate on price.
  • d

    dry-noon-83565

    12/14/2022, 9:05 AM
    Our baseline requirement is making sure all tests pass before the PR is merged. Once per PR is fine as long as it checks the latest change against the base branch. Besides the new GitHub merge queue, is there any suggestion for setting this up with the existing GitHub features?
  • f

    fresh-doctor-14925

    12/14/2022, 9:06 AM
    No. I have my tests running on merge, but I also have two environments that changes are deployed to before they go anywhere near production
  • d

    dry-noon-83565

    12/14/2022, 9:07 AM
    By running on merge, do you mean checking 'push' events on main? What do you do if the integration tests fail after the PR is merged? Do you auto-revert?
  • d

    dry-noon-83565

    12/14/2022, 9:08 AM
    do you not run the tests if your PR introduces new test suites?
  • f

    fresh-doctor-14925

    12/14/2022, 9:10 AM
    1) We have a release step that releases the changes to the given branch/environment. The tests wait for that to be completed via the workflow_run event. 2) The tests are in the same repo as the code, so yes, they will be run once they have been released on the branch.
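    A minimal sketch of that workflow_run chaining, assuming GitHub Actions; the release workflow name and branch are assumptions:
    # e2e workflow that waits for a "Release" workflow (name assumed) to finish
    on:
      workflow_run:
        workflows: ["Release"]        # assumed name of the deploy workflow
        types: [completed]
        branches: [main]              # assumed branch

    jobs:
      e2e:
        runs-on: ubuntu-22.04
        # only run the tests if the release actually succeeded
        if: ${{ github.event.workflow_run.conclusion == 'success' }}
        steps:
          - uses: actions/checkout@v3
          - uses: cypress-io/github-action@v5
            with:
              record: true
            env:
              CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}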
  • d

    dry-noon-83565

    12/14/2022, 9:21 AM
    Thanks for the insights. Unfortunately, I don't think running tests after the PR is merged would be acceptable for our team. We have a similar release process too, but even if it doesn't affect production users, it will break the main branch and disrupt the workflow of other engineers on the team when they rebase onto the broken main code. We also experienced issues in the past where an integration test passed locally but failed on GitHub due to bad environment configuration, so I'm concerned that PRs would get reverted frequently. It would save a lot of cost; comparing that with the engineering time for reverting and debugging, it's an interesting tradeoff 🙂
  • f

    fresh-doctor-14925

    12/14/2022, 9:26 AM
    No problem, happy to help. Yeah, I think you've got a good case there for testing per commit. Engineering time can be harder to quantify, but if so many people are reliant on your dev environment then I can see why you'd want to test on a more granular basis and prevent too much time being wasted. Presumably if you're in a bigger org you have a procurement team that can negotiate you a better deal?
  • a

    adventurous-dream-20049

    12/14/2022, 2:06 PM
    @fresh-doctor-14925 is correct. The best people to talk to will be sales@cypress.io. I will reach out and provide all the context you have provided here prior to your call.
  • t

    thousands-house-85089

    12/14/2022, 2:12 PM
    You can set up different scripts in your package.json, one that records to the Cypress Dashboard and one that doesn't. You could run the one that doesn't on every commit and output results somewhere else, e.g. we have our GitLab integration set up with MS Teams so it spits out a message into a Teams channel. Then you can save the dashboard costs for Release Candidate runs once you've got a stable RC, or for live-only smoke tests etc. Think about who needs visibility on the dashboard, and whether others can gain visibility elsewhere. Also think about scheduled runs and how often you actually need results captured in the dashboard.
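    In package.json terms that split is just two scripts, one running cypress run --record and one running plain cypress run. A hedged sketch of the same split at the CI level, assuming the cypress-io/github-action; the branch pattern and step names are assumptions:
    steps:
      - uses: actions/checkout@v3
      - name: e2e (recorded, release-candidate branches only)
        if: startsWith(github.ref, 'refs/heads/release/')
        uses: cypress-io/github-action@v5
        with:
          record: true
        env:
          CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}
      - name: e2e (not recorded, every other commit)
        if: ${{ !startsWith(github.ref, 'refs/heads/release/') }}
        uses: cypress-io/github-action@v5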
  • b

    best-flower-17510

    12/15/2022, 8:17 PM
    cc @acceptable-flag-91551