# ask-for-help
j
Would you mind sharing the api endpoint snippets here?
n
sure,
```python
import bentoml
from bentoml.io import JSON

svc = bentoml.Service("my_model")

@svc.api(input=JSON(), output=JSON(), route="/a/b")
@auth.authorize
def predict(input_data, request_context: bentoml.Context):
    pred = get_pred(input_data)
    nr_logger.logMessage("status", "success")  # New Relic logger
    return pred
```
Also sharing the entire traceback for the error.
I have edited the entrypoint in the Dockerfile using the following; I think the change succeeded, but I'm getting this error post deployment:
```
{% extends bento_base_template %}
{% block SETUP_BENTO_ENTRYPOINT %}
{{ super() }}
ENTRYPOINT [ "{{ bento__entrypoint }}", "ddtrace-run", "bentoml", "serve", "{{ bento__path }}", "--production" ]
{% endblock %}
```
We are using New Relic and Datadog for logging.
j
Is `get_pred` an async function?
n
no
j
It seems that the error happens inside `ddtrace/contrib/asgi/middleware`. How did you inject that middleware into BentoML?
n
I have installed the ddtrace library and then added it to the entrypoint as mentioned here: https://bentoml.slack.com/archives/CKRANBHPH/p1671784996428719?thread_ts=1671780417.642689&cid=CKRANBHPH
j
I'm afraid that will not work the way the ddtrace developers expected.
n
The same thing works for FastAPI in my other application. Instructions: https://docs.datadoghq.com/tracing/trace_collection/dd_libraries/python/?tab=containers
One interesting thing: when I comment out the New Relic logging from the app, it seems to be working.
j
`python app.py` in FastAPI is just for the debug use case. In production you also need `uvicorn` or another production server to run the app. `bentoml serve` works like `uvicorn`: it starts a production-ready supervisor too. That could be the reason ddtrace doesn't work.
And BentoML already has a built-in tracing feature: https://docs.bentoml.org/en/latest/guides/tracing.html. It supports all mainstream tracing protocols and standards.
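For reference, a hedged sketch of what enabling the built-in tracing might look like in a BentoML configuration YAML. The key names follow the BentoML 1.0-era schema and the endpoint value is an assumption (the Datadog Agent's default OTLP gRPC port); verify both against the linked tracing guide for your version:

```yaml
# Hypothetical BentoML configuration fragment enabling built-in tracing.
# Key names and values below are assumptions -- check the tracing guide.
tracing:
  type: otlp          # exporter: otlp / jaeger / zipkin
  sample_rate: 1.0    # trace every request (illustrative)
  otlp:
    protocol: grpc
    url: http://localhost:4317   # assumed Datadog Agent OTLP gRPC endpoint
```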
n
So for my FastAPI app I have been running `ddtrace-run uvicorn app`.
It seems to be working.
j
https://www.datadoghq.com/blog/ingest-opentelemetry-traces-metrics-with-datadog-agent/ It seems that Datadog supports the OTLP protocol natively. We suggest using them together.
n
Sure, will take a look. But as of right now, what do you think we should try to resolve this?
j
Since ddtrace is a monkey-patching solution, it would be their responsibility to ensure it works with other frameworks.
n
If I look at the traceback, I can see the error originating from New Relic and then going into ddtrace.
Do you think we should be doing something on the New Relic side?
j
It seems like a really common error: someone called an async function's return value rather than `await`ing it.
Just FYI.
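The error class described above can be reproduced in a few lines. This is a minimal illustration, not code from ddtrace or New Relic: an async factory returns an async handler, the way ASGI middleware wrappers often do, and forgetting to await the factory produces exactly the `TypeError: 'coroutine' object is not callable` seen in the traceback:

```python
import asyncio

async def make_handler():
    """Async factory returning an async handler (illustrative names)."""
    async def handler(x):
        return x * 2
    return handler

def buggy_call():
    # Bug: forgetting to await the factory leaves a coroutine object,
    # and calling that object raises the TypeError from the traceback.
    coro = make_handler()
    try:
        coro(21)
    except TypeError as exc:
        coro.close()  # avoid the "coroutine ... was never awaited" RuntimeWarning
        return str(exc)

async def correct_call():
    handler = await make_handler()  # await the factory first
    return await handler(21)

print(buggy_call())                 # 'coroutine' object is not callable
print(asyncio.run(correct_call()))  # 42
```

Note that without the `coro.close()`, garbage-collecting the unawaited coroutine also emits a `RuntimeWarning: coroutine ... was never awaited`, which matches the warning in the log output further down this thread.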
n
Yeah, looks like it, but I'm not using any async methods explicitly anywhere.
What does setting the BENTOML_CONFIG environment variable do?
We have seen one weird case where we were getting this coroutine error, and when we set this env variable it seemed to go away.
Wanted to check if BentoML has some default settings that get overwritten by setting this config.
j
All default configs here: BentoML/default_configuration.yaml at main · bentoml/BentoML (github.com)
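As a hedged sketch of the mechanism: `BENTOML_CONFIG` points BentoML at a user configuration YAML whose keys are merged over the defaults in that file, so only the keys you set change. The path and value below are illustrative:

```yaml
# bentoml_configuration.yaml -- passed via BENTOML_CONFIG=/path/to/this/file
# Keys present here override the corresponding entries in
# default_configuration.yaml; omitted keys keep their default values.
api_server:
  workers: 2
```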
n
Do you think any of these default logging settings involve async operations?
In our configuration file we have set all logging to false.
j
Maybe you can try without the `--production` flag.
n
Will try. May I ask how it would make a difference? Just curious.
Sharing our configuration file:
```yaml
api_server:
  http:
    port: 5000
  workers: 2
  metrics:
    enabled: false
  logging:
    access:
      enabled: false
      request_content_length: false
      request_content_type: false
      response_content_length: false
      response_content_type: false
```
Getting the same issue without `--production` as well.
```
{"message": "Exception in 'lifespan' protocol\n", "exc_info": "Traceback (most recent call last):\n  File \"/usr/local/lib/python3.10/site-packages/uvicorn/lifespan/on.py\", line 86, in main\n    await app(scope, self.receive, self.send)\n  File \"/usr/local/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py\", line 78, in __call__\n    return await self.app(scope, receive, send)\n  File \"/usr/local/lib/python3.10/site-packages/uvicorn/middleware/message_logger.py\", line 86, in __call__\n    raise exc from None\n  File \"/usr/local/lib/python3.10/site-packages/uvicorn/middleware/message_logger.py\", line 82, in __call__\n    await self.app(scope, inner_receive, inner_send)\n  File \"/usr/local/lib/python3.10/site-packages/starlette/applications.py\", line 112, in __call__\n    await self.middleware_stack(scope, receive, send)\n  File \"<string>\", line 5, in wrapper\n  File \"/usr/local/lib/python3.10/site-packages/starlette/middleware/errors.py\", line 146, in __call__\n    await self.app(scope, receive, send)\n  File \"/usr/local/lib/python3.10/site-packages/ddtrace/contrib/asgi/middleware.py\", line 102, in __call__\n    return await self.app(scope, receive, send)\n  File \"/usr/local/lib/python3.10/site-packages/ddtrace/contrib/asgi/utils.py\", line 69, in new_application\n    return await instance(receive, send)\nTypeError: 'coroutine' object is not callable", "levelname_bracketed": "[ERROR]", "component": "[dev_api_server]", "trace_msg": "", "request_id": null}
/usr/local/lib/python3.10/site-packages/uvicorn/lifespan/on.py:-1: RuntimeWarning: coroutine 'middleware_wrapper' was never awaited
```
@Jiang, @Bo any thoughts?
Update on this: looks like it was an issue on the New Relic side.
g
relief
b
Joke aside, good to hear that. Keep us updated.
n
Hey @Bo, the integration of BentoML and Datadog is working just fine now that we've moved away from New Relic 🕺
b
yaya