# ask-for-help
u
It may be related to this issue.
I tried to deploy the model via the Web UI, but it also failed with 'Non Deployed' status... 😞
j
Is it possible to upgrade yatai and yatai-deployment in your cluster?
There is an official guide in the yatai doc to upgrade them
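Something like this should work (a rough sketch; the repo alias, release names, and namespaces here are the defaults assumed from the install guide, so adjust to match your setup, and follow the doc for the exact commands):
Copy code
$ helm repo add bentoml https://bentoml.github.io/helm-charts
$ helm repo update
$ helm upgrade yatai bentoml/yatai -n yatai-system
$ helm upgrade yatai-deployment bentoml/yatai-deployment -n yatai-deployment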
u
Thank you for answering. I followed the upgrade guideline, and my yatai and yatai-deployment versions are the latest ones...
j
Oh, I see. Let me take a look at the doc. I remember it should be hpa_config rather than autoscaling.
โค๏ธ 1
u
Thanks a lot!!! Now I'm following this doc and verifying each step, but the same error occurs again at the end when running
kubectl -n yatai-deployment logs -f deploy/yatai-deployment
as below
Copy code
$ kubectl -n yatai-deployment logs -f job/yatai-deployment-default-domain
time="2023-04-14T06:34:15Z" level=info msg="Creating ingress default-domain- to get a ingress IP automatically"
time="2023-04-14T06:34:15Z" level=info msg="Waiting for ingress default-domain-2xp4m to be ready"
time="2023-04-14T06:34:45Z" level=info msg="Ingress default-domain-2xp4m is ready"                                                                                                                                           
time="2023-04-14T06:34:45Z" level=info msg="you have not set the domain-suffix in the network config, so use magic DNS to generate a domain suffix automatically: `***.***.***.**.<http://sslip.io|sslip.io>`, and set it to the network config"


$ kubectl -n yatai-deployment logs -f deploy/yatai-deployment                                                                                                          
Found 2 pods, using pod/yatai-deployment-84b5584ff9-mhwb7
1.6814581142693202e+09  INFO    setup   starting manager        {"bento deployment namespaces": ["yatai"]}                                                                                                                   
1.6814581145232673e+09  INFO    controller-runtime.metrics      Metrics server is starting to listen    {"addr": "127.0.0.1:8080"}
...
...
...
1.6814581299666548e+09  ERROR   controller-runtime.source       if kind is a CRD, it should be installed before calling Start   {"kind": "HorizontalPodAutoscaler.autoscaling", "error": "no matches for kind \"HorizontalPodAutoscaler\" in version \"autoscaling/v2beta2\""}
<http://sigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start.func1.1|sigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start.func1.1>
        /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.13.0/pkg/source/source.go:139
<http://k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext|k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext>
...
...
...
j
It IS autoscaling. The issue is:
Copy code
runners:
        - name: iris_clf
          resources:
              limits:
                  cpu: "1000m"
                  memory: "1Gi"
              requests:
                  cpu: "500m"
                  memory: "512m"
              autoscaling:
                  maxReplicas: 4
                  minReplicas: 1
The autoscaling should be at the same indentation level as resources
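So the fixed snippet would look roughly like this (also note the request unit: it should almost certainly be 512Mi rather than 512m, since a plain m suffix means milli-units in k8s):
Copy code
runners:
        - name: iris_clf
          resources:
              limits:
                  cpu: "1000m"
                  memory: "1Gi"
              requests:
                  cpu: "500m"
                  memory: "512Mi"
          autoscaling:
              maxReplicas: 4
              minReplicas: 1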
u
Oh, I got it. Then the example YAML file in the README may be wrong
I ran it again after removing the last autoscaling part, but it failed with CrashLoopBackOff...
Copy code
yatai-deployment      yatai-deployment-84b5584ff9-mhwb7                            0/1     CrashLoopBackOff   4 (56s ago)   14m
yatai-deployment      yatai-deployment-default-domain-jlmq7                        0/1     Completed          0             79m
And it may be caused by a similar error:
Copy code
$ kubectl logs yatai-deployment-84b5584ff9-mhwb7 --namespace=yatai-deployment
...
1.681458946660936e+09   ERROR   controller-runtime.source       if kind is a CRD, it should be installed before calling Start   {"kind": "HorizontalPodAutoscaler.autoscaling", "error": "no matches for kind \"HorizontalPodAutoscaler\" in version \"autoscaling/v2beta2\""}
j
Ah you are correct.
As for the new issue, which version of k8s are you using?
u
I'm using this version
Copy code
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.0", GitCommit:"a866cbe2e5bbaa01cfd5e969aa3e033f3282a8a2", GitTreeState:"clean", BuildDate:"2022-08-23T17:44:59Z", GoVersion:"go1.19", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.3", GitCommit:"9e644106593f3f4aa98f8a84b23db5fa378900bd", GitTreeState:"clean", BuildDate:"2023-03-15T13:33:12Z", GoVersion:"go1.19.7", Compiler:"gc", Platform:"linux/amd64"}
j
Sorry, I was AFK for a while. I noticed that you are using 1.26, and yatai doesn't support k8s 1.26 yet due to the API changes.
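Kubernetes 1.26 removed the autoscaling/v2beta2 API, which is exactly what the error above is complaining about. You can check which autoscaling APIs your cluster serves with:
Copy code
$ kubectl api-versions | grep autoscaling
On 1.26 you'll only see autoscaling/v1 and autoscaling/v2 listed, so the v2beta2 watch fails.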
u
Oh, it's okay! Then, which version should I use?
j
Versions 1.20 to 1.24 are verified. 1.25 should also work.
👍 1
u
I'm gonna try it right now! Thank you for answering 🙂
I finally solved the error! It was caused by the k8s version, as you mentioned. I downgraded k8s to 1.20.0 and restarted minikube with the argument
--kubernetes-version=v1.20.0
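For anyone hitting the same issue, the full restart was roughly this (a sketch; switching Kubernetes versions means recreating the minikube cluster, and your driver/profile flags may differ):
Copy code
$ minikube delete
$ minikube start --kubernetes-version=v1.20.0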
Thank you for your support! I really appreciate it :)
j
That's great. I also opened two PRs for the yatai doc.
👍 1