# dev-frontend
a
Guys, 1 more error:
Request URL: <http://localhost:8001/api/v1/source_implementations/discover_schema>
Body: { sourceImplementationId: "60b7efbe-bae4-4094-a154-380a6274a76d" }
Status Code: 500 Internal Server Error
s
what's the exception in the server logs?
a
dataline-server | 2020-09-10 20:54:35 DEBUG UncaughtExceptionMapper:42 - Uncaught exception
dataline-server | java.lang.RuntimeException: Terminal job does not have an output
dataline-server | 	at io.dataline.server.handlers.SchedulerHandler.lambda$discoverSchemaForSourceImplementation$0(SchedulerHandler.java:120) ~[dataline-server-dev.jar:?]
dataline-server | 	at java.util.Optional.orElseThrow(Optional.java:401) ~[?:?]
dataline-server | 	at io.dataline.server.handlers.SchedulerHandler.discoverSchemaForSourceImplementation(SchedulerHandler.java:120) ~[dataline-server-dev.jar:?]
dataline-server | 	at io.dataline.server.apis.ConfigurationApi.lambda$discoverSchemaForSourceImplementation$11(ConfigurationApi.java:190) ~[dataline-server-dev.jar:?]
dataline-server | 	at io.dataline.server.apis.ConfigurationApi.execute(ConfigurationApi.java:291) ~[dataline-server-dev.jar:?]
dataline-server | 	at io.dataline.server.apis.ConfigurationApi.discoverSchemaForSourceImplementation(ConfigurationApi.java:190) ~[dataline-server-dev.jar:?]
dataline-server | 	at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
dataline-server | 	at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?]
dataline-server | 	at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
dataline-server | 	at java.lang.reflect.Method.invoke(Method.java:564) ~[?:?]
dataline-server | 	at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:52) ~[jersey-server-2.31.jar:?]
dataline-server | 	at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:124) ~[jersey-server-2.31.jar:?]
c
i think we’re in a state right now where these endpoints aren’t going to work because the backend worker doesn’t actually work yet:
source_implementations/discover_schema
source_implementations/check_connection
destination_implementations/check_connection
connections/sync
jobs/*
s
that shouldn't be the case though
acceptance tests are running discover
c
ah. okay. didn’t realize any of the workers were working at this point.
my bad. i’ll shut up.
🔇 3
s
haha didn't mean that
🤣 1
just meant that some workers work
c
lolol. no offense taken.
you want to work with artem to debug the discover_schema stuff?
s
artem, have you started the whole app via
VERSION=dev docker-compose up
?
👍 1
yeah we'll have a go at it
💪 1
can you run
docker-compose down
to turn off anything currently running, then run
./tools/build/acceptance_tests.sh
- does that pass?
c
(just a thought: michel added the integrations to the docker hub account, so they should be pull-able now; i don’t know if artem has ever pulled them before though)
a
oh. Missed your messages. Yep I started everything via
VERSION=dev docker-compose up
yep let me check acceptance tests
hm I haven’t got those files locally. Maybe I am on some older version of the branch. I will check tomorrow on a fresh master whether I still have problems with it
@s when I run tests:
> Task :dataline-api:generateApiClient
Output directory does not exist, or is inaccessible. No file (.openapi-generator-ignore) will be evaluated.
Successfully generated code to /Users/jamakase/Projects/dataline/dataline-api/build/generated/api/client
> Task :dataline-api:openApiGenerate
Output directory does not exist, or is inaccessible. No file (.openapi-generator-ignore) will be evaluated.
Successfully generated code to /Users/jamakase/Projects/dataline/dataline-api/build/generated/api/server
> Task :dataline-commons:compileJava
/Users/jamakase/Projects/dataline/dataline-commons/src/main/java/io/dataline/commons/resources/MoreResources.java:54: error: cannot find symbol
    Preconditions.checkArgument(!name.isBlank());
                                     ^
  symbol:   method isBlank()
  location: variable name of type String
/Users/jamakase/Projects/dataline/dataline-commons/src/main/java/io/dataline/commons/resources/MoreResources.java:65: error: cannot find symbol
        searchPath = Path.of(url.toURI());
                         ^
  symbol:   method of(URI)
  location: interface Path
/Users/jamakase/Projects/dataline/dataline-commons/src/main/java/io/dataline/commons/io/IOs.java:37: error: cannot find symbol
      Files.writeString(filePath, contents, StandardCharsets.UTF_8);
           ^
  symbol:   method writeString(Path,String,Charset)
  location: class Files
/Users/jamakase/Projects/dataline/dataline-commons/src/main/java/io/dataline/commons/io/IOs.java:46: error: cannot find symbol
      return Files.readString(path.resolve(fileName), StandardCharsets.UTF_8);
                  ^
  symbol:   method readString(Path,Charset)
  location: class Files
/Users/jamakase/Projects/dataline/dataline-commons/src/main/java/io/dataline/commons/env/Env.java:35: error: cannot find symbol
      Env.valueOf(Objects.requireNonNullElse(System.getenv("ENV"), "test").toUpperCase());
                         ^
  symbol:   method requireNonNullElse(String,String)
  location: class Objects
5 errors
> Task :dataline-commons:compileJava FAILED
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':dataline-commons:compileJava'.
> Compilation failed; see the compiler error output for details.
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.
* Get more help at <https://help.gradle.org>
BUILD FAILED in 2m 17s
s
🤯
how are you running these? are you running
./gradlew build
?
a
For the tests I ran the command you mentioned. As for the app, I just use docker-compose up to pull from the docker registry and run it
s
and this happened when running tests?
via
./tools/bin/acceptance_tests.sh
?
a
Actually that file is in the bin directory, but yes.
s
with a fresh pull from master, can you run
./gradlew build
once then run the acceptance test script again?
it may be that java deps don't get pulled properly when running the acceptance tests since it's a separate project
and that might be what's causing the
cannot find symbol
errors
a
But the idea is not to build anything locally. I just deleted all the images locally and ran compose up, and it pulled all the images.
s
acceptance tests aren't running in a container, they're the only thing running locally. but the server that they're testing against is running in a container
a
Ok, I will test. But anyway I do not see how it can change the env, as I expect the backend to work fine without running any tests. So I do not think failing tests will affect the default setup in any way. Will try to check on another PC
s
Do you want to have a call to screen share through this?
Things are working on master for me so it might be that there is some env-specific issue we're not seeing locally that's easier to find that way
a
@s just tested it on another PC and it worked fine. I am going to try to update brew and see whether the problem is in some specific outdated package
ok @s now the tests are passing, though I still have the problem with schema discovery, with the logs I posted in the channel
s
that problem is happening when you call it from the UI?
a
yes
s
can you share the payload you're sending to the API?
a
the one that is first in this thread 🙂
s
good point 🙂
@charles will help out here
👍 1
c
what sort of source are you using?
and does check_connection for this source work?
a
but how can I check the connection, since to create a connection I need to get the schema first?
or do you mean to do it after we create the source?
c
after we create the source.
a
what sort of source are you using?
we have only postgres right now. Am I wrong?
👍 1
c
there’s one other one now, but postgres is a good one to use for testing!
so i meant: does this api request succeed for the source
/v1/source_implementations/check_connection:
a
Maybe that’s just because I enter random values and expect discover schema to work 🤪
🤣 1
c
yeah. so what discover schema is doing is trying to actually query a db.
i’ll pass you some credentials for a public postgres db. one sec.
a
I mean I create a source, specify a destination, and then I need to discover_schema
ok. Could you please give me one so I can check with a real one? And then let's discuss what to do with check_connection and when to call it 🙂
c
yup
sent you some credentials to a test database.
a
Charles, if you expect me to call check_connection after creating a source, then there is one problem: what if I create the source and then the connection check fails? Then I will have a broken connection and the user won’t be able to delete it. So I suppose for this case we need some backend call that will be responsible for it: it will create the source, check the connection, and if that's failing, delete the source and return that something bad is happening
c
here’s the order in which i imagine api calls happening when everything works perfectly:
/v1/source_implementations/create
/v1/source_implementations/check_connection (this will return true if dataline was able to connect to the connection you just created)
/v1/source_implementations/discover_schema (gets the schema)
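A rough sketch of how the frontend might chain those three calls, assuming a thin `post` helper around fetch; the endpoint paths and the `sourceImplementationId` field come from this thread, while the helper and response shapes are placeholders:
```
// minimal helper: POST JSON to the API and parse the JSON response
async function post<T>(path: string, body: unknown): Promise<T> {
  const res = await fetch(`/api/v1${path}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`${path} returned ${res.status}`);
  return res.json();
}

async function onboardSource(sourceConfig: unknown): Promise<unknown> {
  // 1. create the source implementation
  const source = await post<{ sourceImplementationId: string }>(
    "/source_implementations/create",
    sourceConfig,
  );

  // 2. check that dataline can actually connect to it
  await post("/source_implementations/check_connection", {
    sourceImplementationId: source.sourceImplementationId,
  });

  // 3. only then ask it to discover the schema
  return post("/source_implementations/discover_schema", {
    sourceImplementationId: source.sourceImplementationId,
  });
}
```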
a
actually I can try to put it in web_backend so it will be something like
// create the source, then roll it back if the connection check fails
const source = await createSource(sourceDto);
const check = await checkConnection(source);

if (!check.success) {
  await deleteSource(source);
}
c
i’m okay with that.
a
yeah, but the first 2 will be very difficult to manage on the UI. I would even say almost impossible as long as our logic is based on whether the connection was created or not. Currently, if the source is created, the user is routed to the second step.
c
the other option is if it fails then the user inputs new data and then you call
source_implementations/update
and try again.
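A sketch of that update-and-retry loop; every function name here is a placeholder, and only the source_implementations/update endpoint and the idea (check, let the user correct the config, update, check again) come from the conversation:
```
type ConnectionCheck = { success: boolean };

// keep letting the user correct the config until check_connection passes
async function ensureSourceConnects(
  checkConnection: () => Promise<ConnectionCheck>,
  promptUserForNewConfig: () => Promise<unknown>,
  updateSource: (config: unknown) => Promise<void>, // would call source_implementations/update
): Promise<void> {
  while (!(await checkConnection()).success) {
    const newConfig = await promptUserForNewConfig();
    await updateSource(newConfig);
  }
}
```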
a
also, what do we do if the user created a source but then discover_schema somehow fails? Like if the psql db starts failing?
the other option is if it fails then the user inputs new data and then you call
but it's not possible, as the source will already be created and the user will be routed to the second step where they need to specify destinations, and we do not store that much info about onboarding, so we won't be able to implement it
c
okay. if that’s the easier thing for you to do that’s okay. how would you handle this case in that approach?
also what to do, if user created source, but then somehow discover_schema fails? Like psql db started to fail?
a
anyway, I suppose the only possible way here is a BFF (backend for frontend). I don't think we will be able to do it in a good way without one. But even with it, I can already see that in some rare cases the user can end up in a strange situation where something has failed, or wasn't created, or the connection was lost, etc.
`okay. if that's the easier thing for you to do that's okay. how would you handle this case in that approach?`
no idea how to achieve it with the API we have now. It is still possible that a user creates a connection today, opens it tomorrow, and discover schema starts failing. That may not be a big deal when they are creating a second connection, but it's pretty annoying in the onboarding flow.
c
this isn’t a good option, but it’s all i can think of for now. how about this: in the onboarding flow, if discover_schema fails, you just use an empty schema object on the frontend? at least it’ll allow you to complete the onboarding flow.
definitely open to a better idea if you have one.
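A small sketch of that fallback, assuming a hypothetical discoverSchema helper and schema shape; the only grounded part is "if discovery fails during onboarding, continue with an empty schema":
```
type Schema = { streams: unknown[] };

async function discoverSchemaOrEmpty(
  discoverSchema: (sourceImplementationId: string) => Promise<Schema>,
  sourceImplementationId: string,
): Promise<Schema> {
  try {
    return await discoverSchema(sourceImplementationId);
  } catch (e) {
    // let the user finish onboarding even if discovery is currently broken
    console.warn("discover_schema failed, falling back to an empty schema", e);
    return { streams: [] };
  }
}
```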
a
probably the discovered schema should somehow be remembered when the connection is created. So when we create a source, we also check that the connection is working and discover the schema. Then even if the connection is lost, we can still use that schema. Plus have a control somewhere in the UI to refresh the schema manually when necessary, or refresh it when the connection is restored. But of course that's more complex than what we have now
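A sketch of that caching idea, with placeholder names and an in-memory map standing in for wherever the schema would really be stored:
```
type Schema = { streams: unknown[] };

const schemaCache = new Map<string, Schema>();

async function getSchema(
  sourceImplementationId: string,
  discoverSchema: (id: string) => Promise<Schema>,
  forceRefresh = false,
): Promise<Schema> {
  const cached = schemaCache.get(sourceImplementationId);
  if (cached && !forceRefresh) {
    // even if the connection is down right now, show the last known schema
    return cached;
  }
  const schema = await discoverSchema(sourceImplementationId);
  schemaCache.set(sourceImplementationId, schema);
  return schema;
}
```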
how about if in the onboarding flow if discover_schema fails
I think that's perfect as a fast workaround. Any other fix that comes to mind is much more complex 🙂
c
👍
okay. lmk if you’re still having trouble with getting discover_schema to work with the creds that i gave you.
a
sure will test asap
👍 1
Also, do you think the job will start working after this fix? Maybe that's why I had no jobs 🙂
c
yeah. hopefully 🤞
a
Ok, discover_schema works now 👍
c
Woah. As in you get back actual data, right?
a
yes. I tested it just now, without your fix for the empty schema
c
Woohoo!