# ask-for-help
k
It looks like it is related to https://github.com/golang-migrate/migrate/issues/282, but did anyone come up with a solution?
x
This is because yatai was accidentally killed during the migration. You can use the following script to remove yatai completely and then reinstall it: https://github.com/bentoml/Yatai/blob/main/scripts/delete-yatai.sh
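For reference, a minimal sketch of fetching and running that cleanup script, assuming `curl` is available and your kubeconfig points at the affected cluster (the raw URL is inferred from the GitHub link above):

```shell
# Download the official cleanup script and inspect it before running.
curl -fsSL https://raw.githubusercontent.com/bentoml/Yatai/main/scripts/delete-yatai.sh -o delete-yatai.sh
less delete-yatai.sh   # review what will be deleted
bash delete-yatai.sh   # removes yatai resources from the current cluster
```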
k
Hi @Xipeng Guan thanks for the quick response. Where are the instructions we should follow to install yatai v1.0.0? What was our mistake in the first place?
x
I think it was an accident: yatai's process was shut down while it was running the migration. Clear the dirty data left by the migration and try again.
k
x
yes
k
Thanks!
hi @Xipeng Guan, we tried deleting and reinstalling everything but we still have the same issue. Let me give a little bit of context:
1. We deleted everything with the script.
2. Created the DB and checked Postgres and the blob storage according to the instructions (everything is okay).
3. When we get to step 4.1 it prints the message above and keeps restarting indefinitely; the pod never comes up.
The only difference is that our cluster domain is not `cluster.local` but our own cluster name. What do you suggest? Thanks!
k8s version: 1.21.5
x
which step did it fail at?
k
final step 4.1 - Install Yatai. The pod cannot start - it keeps restarting and displays this message:
```
2022-10-18T13:28:39+03:00 INFO[0000] migrate up...
2022-10-18T13:28:39+03:00 Error: migrate up db: cannot migrate up: Dirty database version 1. Fix and force version.
2022-10-18T13:28:39+03:00 Usage:
2022-10-18T13:28:39+03:00   yatai-api-server serve [flags]
2022-10-18T13:28:39+03:00
2022-10-18T13:28:39+03:00 Flags:
2022-10-18T13:28:39+03:00   -c, --config string    (default "./yatai-config.dev.yaml")
2022-10-18T13:28:39+03:00   -h, --help            help for serve
2022-10-18T13:28:39+03:00
2022-10-18T13:28:39+03:00 Global Flags:
```
the `helm upgrade --install` one
x
Can you check whether the PVC your Postgres uses is an old one? `kubectl -n yatai-system get pvc`
k
We have an external Postgres.
```
➜  k8s-cluster git:(main) ✗ kubectl -n postgres get pvc
NAME                                       STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
data-postgres-postgresql-ha-postgresql-0   Bound     pvc-0cf3cb2e-4df2-4881-8128-7bde76f37fea   100Gi      RWO            rook-ceph-block   347d
data-postgres-postgresql-ha-postgresql-1   Bound     pvc-115ba34b-1959-49d0-a272-6c82e5c240b7   100Gi      RWO            rook-ceph-block   347d
data-postgresql-ha-postgresql-0            Pending                                                                                          7d2h
data-postgresql-ha-postgresql-1            Pending                                                                                          7d2h
data-postgresql-ha-postgresql-2            Pending                                                                                          7d2h
```
x
If you are using an external PostgreSQL, please delete the yatai database and recreate it.
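If it helps, a hedged sketch of recreating the database from the shell; the database/role name `yatai` and the connection variables are placeholders, substitute your own:

```shell
# Illustrative only: drop and recreate the yatai database as a superuser.
# Requires that no clients are still connected to the database.
psql -h "$PG_HOST" -p "$PG_PORT" -U postgres -c 'DROP DATABASE IF EXISTS yatai;'
psql -h "$PG_HOST" -p "$PG_PORT" -U postgres -c 'CREATE DATABASE yatai OWNER yatai;'
```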
k
we did that already
we deleted PG_DATABASE and created it again.
Any other suggestions?
x
Can you connect to PostgreSQL and run the following SQL?
```sql
delete from schema_migrations;
```
Then restart the yatai deployment:
```shell
kubectl -n yatai-system rollout restart deploy/yatai
```
k
```
➜  k8s-cluster git:(main) ✗ kubectl -n yatai-system delete pod postgresql-ha-client 2> /dev/null || true; \
kubectl run postgresql-ha-client --rm --tty -i --restart='Never' \
    --namespace yatai-system \
    --image docker.io/bitnami/postgresql-repmgr:14.4.0-debian-11-r13 \
    --env="PGPASSWORD=$PG_PASSWORD" \
    --command -- psql -h $PG_HOST -p $PG_PORT -U $PG_USER -d $PG_DATABASE -c "delete from schema_migrations;"
DELETE 1
pod "postgresql-ha-client" deleted
➜  k8s-cluster git:(main) ✗ kubectl -n yatai-system rollout restart deploy/yatai
deployment.apps/yatai restarted
```
x
I found a permission error in your postgresql account:
```
2022-10-24T13:44:25.209058889Z  (details: pq: permission denied to create extension "pgcrypto")
```
k
So are we supposed to install it with a superuser role?
x
If you don't want to assign the superuser role to this user, you can create the extension manually as a superuser and then restart yatai.
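A minimal sketch of creating the extension as a superuser from the shell; the `postgres` superuser name and the connection variables are assumptions, adjust to your setup:

```shell
# Create pgcrypto once as a superuser so the yatai user does not need
# superuser rights itself; yatai only needs to use the extension.
psql -h "$PG_HOST" -p "$PG_PORT" -U postgres -d "$PG_DATABASE" \
  -c 'CREATE EXTENSION IF NOT EXISTS "pgcrypto";'
```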
k
We fixed that issue, but now we have another one:
```
relation "epoch_seq" already exists
```
It seems we keep hitting a series of small bugs. We deleted the DB and created it again.
```
2022-10-24T17:12:36.285728980Z INFO[0000] migrate up...
2022-10-24T17:12:36.903884550Z INFO[0000] [2022-10-24 17:12:36.903664343 +0000 UTC m=+0.749205908] 1/u initialize_tables (606.438794ms)
2022-10-24T17:12:36.950121149Z INFO[0000] [2022-10-24 17:12:36.949887406 +0000 UTC m=+0.795428971] 2/u add_deployment_namespace (652.646809ms)
2022-10-24T17:12:36.993766325Z INFO[0000] [2022-10-24 17:12:36.993608196 +0000 UTC m=+0.839149781] 3/u drop_github_username (696.370044ms)
2022-10-24T17:12:37.016661057Z Error: migrate up db: cannot migrate up: migration failed: ALTER TYPE ... ADD cannot run inside a transaction block in line 0: ALTER TYPE "resource_type" ADD VALUE 'yatai_component';
2022-10-24T17:12:37.016685252Z
2022-10-24T17:12:37.016689039Z CREATE TABLE IF NOT EXISTS "yatai_component" (
2022-10-24T17:12:37.016692626Z     id SERIAL PRIMARY KEY,
2022-10-24T17:12:37.016695943Z     uid VARCHAR(32) UNIQUE NOT NULL DEFAULT generate_object_id(),
2022-10-24T17:12:37.016699219Z     name VARCHAR(128) NOT NULL,
2022-10-24T17:12:37.016702355Z     description TEXT,
2022-10-24T17:12:37.016705561Z     version VARCHAR(128),
2022-10-24T17:12:37.016708747Z     cluster_id INTEGER NOT NULL REFERENCES "cluster"("id") ON DELETE CASCADE,
2022-10-24T17:12:37.016712233Z     organization_id INTEGER NOT NULL REFERENCES "organization"("id") ON DELETE CASCADE,
2022-10-24T17:12:37.016715750Z     kube_namespace VARCHAR(128) NOT NULL,
2022-10-24T17:12:37.016719597Z     manifest JSONB,
2022-10-24T17:12:37.016723134Z     creator_id INTEGER NOT NULL REFERENCES "user"("id") ON DELETE CASCADE,
2022-10-24T17:12:37.016726601Z     latest_heartbeat_at TIMESTAMP WITH TIME ZONE DEFAULT NULL,
2022-10-24T17:12:37.016729907Z     latest_installed_at TIMESTAMP WITH TIME ZONE DEFAULT NULL,
2022-10-24T17:12:37.016733193Z     created_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT CURRENT_TIMESTAMP,
2022-10-24T17:12:37.016736489Z     updated_at TIMESTAMP WITH TIME ZONE,
2022-10-24T17:12:37.016739575Z     deleted_at TIMESTAMP WITH TIME ZONE
2022-10-24T17:12:37.016742651Z );
2022-10-24T17:12:37.016745677Z
2022-10-24T17:12:37.016748722Z CREATE UNIQUE INDEX "uk_yataiComponent_clusterId_name" ON "yatai_component" ("cluster_id", "name");
2022-10-24T17:12:37.016752399Z  (details: pq: ALTER TYPE ... ADD cannot run inside a transaction block)
2022-10-24T17:12:37.017030986Z Usage:
2022-10-24T17:12:37.017042668Z   yatai-api-server serve [flags]
2022-10-24T17:12:37.017047458Z
2022-10-24T17:12:37.017051866Z Flags:
2022-10-24T17:12:37.017056164Z   -c, --config string    (default "./yatai-config.dev.yaml")
2022-10-24T17:12:37.017060302Z   -h, --help            help for serve
2022-10-24T17:12:37.017083756Z
2022-10-24T17:12:37.017088465Z Global Flags:
2022-10-24T17:12:37.017092713Z   -d, --debug   debug mode, output verbose output
```
x
I found a related issue in the golang-migrate repo and fixed it with a newer golang-migrate version. Can you swap the yatai image to test it?
```shell
kubectl -n yatai-system patch deploy yatai --patch '{"spec": {"template": {"spec": {"containers": [{"name": "yatai", "image": "quay.io/bentoml/yatai:1.0.0-d9", "imagePullPolicy": "Always"}]}}}}'
```
k
```
INFO[0000] opening db...
INFO[0000] migrate up...
Error: migrate up db: cannot migrate up: migration failed: relation "epoch_seq" already exists in line 0:
CREATE SEQUENCE epoch_seq INCREMENT BY 1 MAXVALUE 9 CYCLE; (details: pq: relation "epoch_seq" already exists)
```
I can see that it is dropped in `000001_initialize_tables.down.sql` before the new tables are created: https://github.com/bentoml/Yatai/blob/271a6b5f01c1e586acc8baa8d1e83ba41a78e3ba/api-server/db/migrations/000001_initialize_tables.down.sql#L29-L33 Is there anything we can do to fix that?
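One hedged manual workaround under the assumption that `epoch_seq` is simply a leftover from the earlier failed migration: drop the sequence yourself so the first migration can recreate it, then restart yatai. The connection variables are placeholders.

```shell
# Hypothetical cleanup: remove the orphaned sequence left by the failed
# migration, then let the migration run again from a clean state.
psql -h "$PG_HOST" -p "$PG_PORT" -U "$PG_USER" -d "$PG_DATABASE" \
  -c 'DROP SEQUENCE IF EXISTS epoch_seq;'
kubectl -n yatai-system rollout restart deploy/yatai
```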
Hi @Xipeng Guan, any suggestion on this issue?
x
I suggest you roll back the yatai image to the original version and enable PostgreSQL autocommit: https://www.postgresql.org/docs/current/ecpg-sql-set-autocommit.html
k
What do you mean by the original version? This is a fresh install of v1.0.0.
x
yes, I mean v1.0.0