# contributing-to-airbyte
Is there a reason we’re excluding check connection failure tests from the standard source tests?
u
```java
// /**
//  * Verify that when given invalid credentials, check connection returns a failed response.
//  * Assume that the {@link TestSource#getFailCheckConfig()} is invalid.
//  */
// @Test
// public void testCheckConnectionInvalidCredentials() throws Exception {
//   final OutputAndStatus<StandardCheckConnectionOutput> output = runCheck();
//   assertTrue(output.getOutput().isPresent());
//   assertEquals(Status.FAILED, output.getOutput().get().getStatus());
// }
```
u
@charles
u
i have no problem with adding it.
u
i think the original reason i pulled it out was that there was a lot of uncertainty around how the interface with python was going to work.
u
and i wasn't sure that i wanted to ask a user to give invalid configuration.
u
like it seemed like another confusing thing that would add friction without providing much value, since we do very little useful stuff when logging this sort of error anyway.
u
now that the python side is more stable, i'm fine adding it back if you think it would be valuable.
u
also fine if we leave this out until we do another iteration on the standard tests and/or revamp error handling on check connection.
u
I think it would be valuable. Already ran into one case where an integration wasn’t returning a valid response due to this.
u
I’ll add it back in after the underlying issue is fixed
u
The specific problem was an error within the logging inside the catch
u
ah
u
I think generally the error handling in check connections is good
u
okay.
u
you may need to add it optionally for now so that you don't need to edit every single source.
u
just as a heads up.
u
If it isn’t working for a source, it probably needs to be fixed.
u
It means the error message won’t correctly propagate to the UI
u
sorry. i was unclear.
u
So I’m fine adding it and then retesting all
u
And fixing
u
i'm assuming you are implementing this by adding a `get_invalid_config` method to the python iface.
u
which means you need to implement it for every source.
u
i'm just suggesting you may not want to have to do that in one go.
u
and instead figure out a way to only run that test if that is implemented.
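The opt-in approach described above could be sketched roughly like this. All of the names here (`Source`, `get_invalid_config`, `check_connection`, the statuses) are illustrative stand-ins, not the actual Airbyte interface: the point is only that the failure test runs when the hook is implemented and is skipped otherwise, so sources can be migrated one at a time.

```python
# Hypothetical sketch of an optional invalid-config hook.
# A source that doesn't implement get_invalid_config simply
# doesn't get the check-connection failure test.
from abc import ABC


class Source(ABC):
    def get_config(self) -> dict:
        """Return a valid config for this source."""
        return {"api_key": "valid-key"}

    def get_invalid_config(self) -> dict:
        """Optional: return a config that should fail check connection."""
        raise NotImplementedError


def check_connection(config: dict) -> str:
    # Stand-in for the real check; here any config without
    # the valid key is treated as failed credentials.
    return "SUCCEEDED" if config.get("api_key") == "valid-key" else "FAILED"


def run_check_connection_failure_test(source: Source) -> str:
    """Run the invalid-credentials test only if the hook is implemented."""
    try:
        invalid_config = source.get_invalid_config()
    except NotImplementedError:
        return "skipped"  # source hasn't opted in yet
    assert check_connection(invalid_config) == "FAILED"
    return "ran"
```

In a real pytest suite the `NotImplementedError` branch would call `pytest.skip(...)` instead of returning a string, but the gating logic is the same.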
u
oh
u
hmm
u
i'm trying to be really careful about not doing changes that require every integration to change in one commit.
u
and always have a strategy for how i can do one or a few at a time.
u
because we're hitting the number of integrations where doing anything for all of them in one go is just not feasible.
u
Originally I was thinking it could just modify the correct config
u
ah. you could try it 🤷 . i think it's possible to modify that config in such a way that it remains valid.
u
we could modify every secret field
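The "magic" variant being floated here could look something like the sketch below: derive an invalid config from the valid one by scrambling every field the spec marks as a secret. This assumes a JSON-schema-style spec with an `airbyte_secret` flag on secret properties; as noted in the thread, it's not guaranteed that a scrambled secret is actually rejected by every API, which is the risk of this approach.

```python
# Hedged sketch: build an invalid config by corrupting secret fields.
# Assumes spec properties carry an "airbyte_secret": true annotation.
import copy


def invalidate_secrets(config: dict, spec_properties: dict) -> dict:
    """Return a copy of config with every secret field replaced by garbage."""
    invalid = copy.deepcopy(config)
    for field, schema in spec_properties.items():
        if schema.get("airbyte_secret") and field in invalid:
            invalid[field] = "deliberately-invalid-" + str(invalid[field])
    return invalid
```

Non-secret fields are left intact so the config still passes schema validation and fails only on credentials.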
u
are there any cases where a salted hash of the secret is still valid?
u
:-p
u
i hate that we are doing more and more magic in these tests. every time we do, it bites us. @s is tearing his hair out right now on incremental over some of this stuff.
u
Also once we add a standard test for exchange rate we’ll need to disable this test for that case specifically
u
> are there any cases where a salted hash of the secret is still valid?
as long as you hash it twice 😉
u
haha
u
But I do feel like this is a useful test because it’ll just make setup harder if the error message for bad credentials isn’t valid.
u
yeah
u
i think you have now described all of the cases i was worried about, so at least we are on the same page. 😂
u
i think i'd lean a little bit on the side of doing it explicitly. user provided invalid config, and if it is not provided we don't test it.
u
but if you want to try the magic way, i don't think i have ground to stand on since i've committed my own magic sins in that tests suite. 😬
u
hmm
u
I feel like the Dockerization of standard tests made it really hard to edit them
u
like there’s a separate multistep build when (I think) we don’t even have a use case that isn’t just mapping a raw file?
u
hmm
u
I guess there are exceptions
u
like google sheets
u
which is a major exception
u
lol yeah.
u
i think you could argue that maybe we dockerized too soon. e.g. we've given up early velocity where we shouldn't have at the beginning.
u
but i think it's keeping us honest.
u
it's the only way i can think of to force us to keep our tests so they can run on any connector regardless of language implementation.
u
Yeah :/
u
Feels bad to either have to do something hacky or spend hours to add a trivial test case
u
Going to do the hacky one and see if it works