tls malloc error
# help
j
@erikcorry moving the thread here 🙂 See the code below. Is there more cleanup I need to do to avoid the malloc error? Already on the second loop iteration I get a MALLOC_FAILED error.
import net
import http
import encoding.json

check_repos_test:

  while true:
    https_exec := catch --trace:
      
      // Open network
      network_interface := net.open
      client := http.Client.tls network_interface
        --root_certificates=CERTIFICATES
      
      // Set headers for auth
      headers/http.Headers := http.Headers
      headers.set "PRIVATE-TOKEN" GL_TOKEN

      // Make get request
      response := client.get "gitlab.com" GL_PATH --headers=headers
      
      // Check response
      if response.status_code != 200:
        print "FAILED_WITH $response.status_code"
      else:
        result := json.decode_stream response.body
        print "RESULT: $response.status_code :: $result[0]["status"]"
        if result[0]["status"] == "success":
          print "BUILD_STATUS: SUCCESS"
        else:
          print "BUILD_STATUS: $result[0]["status"]"

      // close network interface
      network_interface.close

    if https_exec:
      print "ERROR: $https_exec"

    sleep --ms=CALL_SERVER_TIMEOUT
Interesting! I put in several print_objects calls before, during, and after to monitor object counts, and now there is no out-of-memory error... is print_objects doing garbage collection or somehow freeing up memory?
A single print_objects before net.open is enough; then it works every time...
Having a single print_objects after network_interface.close results in an out-of-memory error every SECOND time??
e
Yes, print_objects causes a full GC.
I think you need to add client.close, and I think you can move the net.open outside the loop and reuse it with no close.
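Roughly like this (an untested sketch; it assumes your version of the HTTP library has a Client.close method and reuses the imports and constants from the snippet above):

check_repos_test:
  // Open the network once, outside the loop, and reuse it.
  network_interface := net.open

  while true:
    https_exec := catch --trace:
      client := http.Client.tls network_interface --root_certificates=CERTIFICATES

      // Set headers for auth.
      headers := http.Headers
      headers.set "PRIVATE-TOKEN" GL_TOKEN

      // Make the GET request and check the response.
      response := client.get "gitlab.com" GL_PATH --headers=headers
      if response.status_code != 200:
        print "FAILED_WITH $response.status_code"
      else:
        result := json.decode_stream response.body
        status := result[0]["status"]
        print "BUILD_STATUS: $status"

      // Close the client explicitly so the TLS socket is released
      // right away instead of waiting for a garbage collection.
      client.close

    if https_exec:
      print "ERROR: $https_exec"

    sleep --ms=CALL_SERVER_TIMEOUT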
It's possible that you don't have a close method on the connection in your version of the HTTP library 😦 It's supposed to auto-close when you have read the data, but it doesn't due to a bug.
Thanks for the report!
If you insert response.body.read, that should trigger the auto-close.
I'll think about how to fix this.
(Without the auto-close, it is left to the garbage collector to close the connection. That works, but it releases with a delay, so you can still get OOM.)
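For your snippet that would look something like this (sketch; same variables as above):

result := json.decode_stream response.body
// One extra read after the JSON has been decoded should hit end-of-input
// and let the HTTP library initiate its auto-close.
response.body.read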
j
Thanks!!
I just tried calling response.body.read directly, but it had no effect... by the way, json.decode_stream already called the read method
I don't have the client.close method 😕
e
OK I'm not sure why that doesn't help you. The idea is that calling read one more time after the json deserialization makes the HTTP library realize there's nothing left and initiate auto-close.
We are working on a couple of things to help with the OOM issues. The next version of the HTTP library will be able to make multiple HTTP requests on a single TLS socket, so that you don't have to reconnect all the time. This is in the https://github.com/toitlang/pkg-http repo. You can check it out and use a local import to try it.
jag pkg uninstall http
jag pkg install --local --name=http ../../path/to/repo
The reason it is not yet rolled out is that it is not backwards compatible. You must call close on the client now; it does not auto-close. This could make memory leaks worse if people don't call close. We will likely release it soon with a major version bump to reflect the changed API.
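With that version, the loop could reuse a single client for all requests and look roughly like this (untested sketch; it reuses network_interface, headers, and the constants from the snippet above):

client := http.Client.tls network_interface --root_certificates=CERTIFICATES
try:
  while true:
    // The same client (and its TLS socket) is reused for every request.
    response := client.get "gitlab.com" GL_PATH --headers=headers
    result := json.decode_stream response.body
    status := result[0]["status"]
    print "BUILD_STATUS: $status"
    sleep --ms=CALL_SERVER_TIMEOUT
finally:
  // This version does not auto-close, so the client must be closed explicitly.
  client.close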
The other thing we are doing is moving a lot of the TLS connection logic into Toit, so that we don't have to allocate such big buffers on the C++ side. This will also mean that lost connections can be cleaned up with a single GC. The situation right now, where you sometimes need more than one GC, is caused by the hybrid C++/Toit solution for MbedTLS. If you are interested in seeing how the sausage is made, the code review for this new feature is at: https://github.com/toitlang/toit/pull/1263
k
The initial TLS-in-Toit work is part of SDK v2.0.0-alpha.45, which ships in Jaguar v1.7.13.
j
The latest version seems to have fixed this issue, thanks!
k
w00t!