# prisma-whats-new
s
@eliezedeck and everyone else interested in using the latest version of the subscriptions protocol. Graphcool is now compatible with 0.7.x as well as older versions of the protocol. As a temporary workaround you need to manually configure the timeout as detailed in https://github.com/apollographql/subscriptions-transport-ws/issues/149#issuecomment-307622783
👍🏻 2
a
@sorenbs Will you update the existing samples to reflect that change?
s
I am hoping Apollo will change the timeout behaviour in a point release soon. When that happens we will update all examples to use 0.7.x without the custom timeout config
👍🏻 2
a
Good point @sorenbs
e
@sorenbs Thanks for the update. Does this mean Queries + Mutations over WebSocket work now too? Or just that it doesn't crash anymore?
🙏🏻 1
s
Queries + Mutations over WebSocket are not supported yet.
😭 1
e
Thanks. Hope those will be implemented soon.
👍🏻 1
s
Hehe - to you, what are the main benefits that queries + mutations over WebSocket provide?
a
Speed
e
Well, for places like Europe or the US it's not very apparent at first, because the network is already fast there. But for countries like mine (Madagascar), or other underdeveloped countries, WebSocket is a huge plus, mostly because of the speed and latency characteristics of the network. In my analysis, one connection that lives longer is better than a possible HTTPS handshake every few minutes or seconds, especially if you are building an application that will stay connected for an 8-hour work day.

The first advantage is bandwidth cost. For each HTTPS API call there is some amount of data you always need to send, and this adds up to a lot; for mobile, this is a great data saving. I have tested with Graphcool, and a typical call will be at least 2 - 4 KBytes. Because WebSocket can maintain state, it only needs to exchange that information once, and in most cases the remaining data fits in a single packet, even less than 512 Bytes in most scenarios. This is what Meteor does, because they are aware of this advantage. That means REST can be 3x more expensive in terms of bandwidth compared to WebSocket, especially for small changes (most cases).

Second, latency. It goes without saying that higher bandwidth usage directly means more latency. Moreover, every time you issue an HTTP API call there is a series of checks, like authentication, protocol checks, etc., and that overhead adds up. With WebSocket, you take care of most of the conditional checks at the initial handshake; once connected, the network stack ensures the security (I mean the TCP and TLS layers). You save server CPU cycles, customers get faster response times, and that's quite a big deal. I am not sure how much overhead there is at the Graphcool level, but in my tests a simple insertion mutation takes about 400 - 500 ms, while the ping to the Graphcool endpoints is actually just about 275 ms (that's the Europe servers).

So, even in the best case, there is at least 100 ms of overhead from the perspective of the users, and latency from 300 ms to 500 ms is actually very noticeable. Compared to that, with a Meteor app based in the US my ping is about 375 - 400 ms, and mutations get answered in around the same time with only minor jitter (less than 20 ms), as if there was no overhead at all. I believe this is mainly because of WebSocket. And let's not forget that the Meteor implementation is mostly pure JavaScript on old Node 4.x. The servers don't need to spend time and CPU cycles on the same HTTPS boilerplate for every subsequent call; only the first WebSocket establishment takes the same time as a normal HTTPS request plus a few millis. That's quite a big performance gain.

Also, with Firebase, mutations take about 375 ms; I know they have been around longer and have done a lot of optimization. Given that the Graphcool servers are by default in Europe, which is about 250 ms away from me, I would expect performance that beats Firebase, which has all its Realtime Database instances in the Central US. Instead I get about 450 ms with Graphcool, which is closer to me, and only 375 ms with Firebase, which is much further away. I'm doing this comparison for the greater good here, not to undervalue Graphcool in any way; the very fact that I'm saying all this already shows that I think Graphcool has great value. But there is a lot of room for performance, speed, and latency to improve, and WebSocket is one of those ways. That's why I'm eager for Queries + Mutations over WebSocket. 🙂
@sorenbs sorry for the above very long response
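[Editor's note] The "3x" bandwidth claim in the message above can be sanity-checked with back-of-envelope arithmetic. The figures below are rough assumptions lifted from the estimates in the message (≈2 KB of per-request HTTP/TLS overhead, sub-512-byte WebSocket frames), not measurements:

```python
# Rough per-call sizes, based on the estimates in the message above
# (assumptions, not measurements): a small GraphQL mutation with ~2 KB of
# HTTP header/TLS framing overhead per request, versus a lightweight frame
# on an already-open WebSocket.
PAYLOAD_BYTES = 1024        # the GraphQL document + variables themselves
HTTP_OVERHEAD_BYTES = 2048  # headers, cookies, TLS record framing per request
WS_OVERHEAD_BYTES = 64      # WebSocket frame header + protocol envelope

http_call = PAYLOAD_BYTES + HTTP_OVERHEAD_BYTES
ws_call = PAYLOAD_BYTES + WS_OVERHEAD_BYTES

ratio = http_call / ws_call
print(f"per call: HTTP {http_call} B vs WebSocket {ws_call} B ({ratio:.1f}x)")

# Over an 8-hour work day at 4 calls per minute, the difference adds up:
calls = 4 * 60 * 8
print(f"daily: HTTP {calls * http_call // 1024} KiB vs "
      f"WebSocket {calls * ws_call // 1024} KiB")
```

With these assumed numbers the ratio lands near the 3x mentioned above; the real factor depends on cookie size, TLS session reuse, and (on HTTP/2) header compression.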
d
@eliezedeck wow, thanks -- I learned a lot from that
💯 1
e
@dk0r happy it was useful to you
a
@sorenbs With regards to speed, I don't see a Content-Encoding: gzip anywhere in the responses. That might save quite some bandwidth... @nilan any details on this, otherwise it's a bug 🙂
e
@agartha I think not having GZipped responses is a deliberate choice. If you look up the IPs for api.graph.cool, there are actually 8 IPv4 addresses (most likely backed by 8 distinct servers as well). Assuming those are the frontends for the APIs we call, they will get hit by thousands, probably millions of requests a second. We developers can create as many free-tier projects as we need, which can be a lot, and on top of that there are the commercial tiers, which I'm sure are numerous as well. Gzipping everything would cost them a significant amount of CPU, and I for one would not like to see latencies increase even more.
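[Editor's note] The CPU side of the gzip trade-off being debated here is easy to measure with the standard library. The response shape below is hypothetical, chosen only to resemble a list-heavy GraphQL result; repetitive JSON like this is the kind of payload gzip shrinks dramatically:

```python
import gzip
import json
import time

# Hypothetical list-heavy GraphQL-style response (assumed shape, not a real
# Graphcool payload).
response = json.dumps({
    "data": {"allPosts": [
        {"id": str(i), "title": "Hello world", "published": True}
        for i in range(200)
    ]}
}).encode("utf-8")

start = time.perf_counter()
compressed = gzip.compress(response, compresslevel=6)
elapsed_ms = (time.perf_counter() - start) * 1000

print(f"raw: {len(response)} B, gzipped: {len(compressed)} B "
      f"({len(compressed) / len(response):.0%} of original), "
      f"{elapsed_ms:.2f} ms to compress")
```

Whether that per-response CPU time is affordable at millions of requests per second is exactly the operational question raised above; many deployments sidestep it by offloading compression to a load balancer or CDN so the API servers never pay for it.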
s
Thank you for the very detailed answer @eliezedeck - that’s exactly what I was hoping for 🙂 We have some interesting work in the pipeline that will improve server-side latency quite dramatically. Hopefully I will be able to announce the results of this in a week or two.
❤️ 2
a
Server performance is something they can affect; bandwidth from the client, they cannot. A lot of users are targeting mobile as well. I really believe saving bandwidth is equally important, and in terms of performance it outweighs the server hit
e
Good point. And once again, WebSocket to the rescue 😉
a
If you don't need the bandwidth saving, don't send the Accept-Encoding header: you get lower latency at the cost of increased bandwidth usage, which can be a good trade on high-speed connections. For mobile, send the header, take the latency hit, lower the bandwidth. Everybody happy 🙂
👍 1
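[Editor's note] The negotiation described in the last message is standard HTTP behaviour: a server may gzip a body only when the client advertises support via the Accept-Encoding request header (and in practice sends it uncompressed when the header is absent). A minimal sketch of that client-side policy; the function name and the metered-connection flag are illustrative, not part of any Graphcool API:

```python
def request_headers(on_metered_connection: bool) -> dict:
    """Build request headers, opting into gzip only when bandwidth is scarce.

    Per HTTP content negotiation, servers generally send the body
    uncompressed when Accept-Encoding is absent (lower server CPU and
    latency, more bytes on the wire); including "gzip" asks for a smaller,
    compressed payload instead.
    """
    headers = {"Content-Type": "application/json"}
    if on_metered_connection:  # e.g. mobile data: prefer the smaller payload
        headers["Accept-Encoding"] = "gzip"
    return headers

print(request_headers(on_metered_connection=True))
print(request_headers(on_metered_connection=False))
```

This keeps the decision entirely on the client, which is the point being made: desktop clients on fast links skip the header, mobile clients send it, and the server only compresses when asked.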