# all-things-deployment
q
Hello guys, I am facing the following problem deploying DataHub v0.8.15 on Kubernetes. I am running the elasticsearch-init job and these are the logs I am getting:
2021/10/13 11:17:40 Waiting for: https://<my-es-url>:443
2021/10/13 11:17:45 Received 200 from https://<my-es-url>:443
creating datahub_usage_event_policy
{
  "policy": {
    "policy_id": "datahub_usage_event_policy",
    "description": "Datahub Usage Event Policy",
    "default_state": "Rollover",
    "schema_version": 1,
    "states": [
      {
        "name": "Rollover",
        "actions": [
          {
            "rollover": {
              "min_index_age": "1d"
            }
          }
        ],
        "transitions": [
          {
            "state_name": "ReadOnly",
            "conditions": {
              "min_index_age": "7d"
            }
          }
        ]
      },
      {
        "name": "ReadOnly",
        "actions": [
          {
            "read_only": {}
          }
        ],
        "transitions": [
          {
            "state_name": "Delete",
            "conditions": {
              "min_index_age": "60d"
            }
          }
        ]
      },
      {
        "name": "Delete",
        "actions": [
          {
            "delete": {}
          }
        ],
        "transitions": []
      }
    ],
    "ism_template": {
      "index_patterns": [
        "datahub_usage_event-*"
      ],
      "priority": 100
    }
  }
}{"_id":"datahub_usage_event_policy","_version":1,"_primary_term":1,"_seq_no":0,"policy":{"policy":{"policy_id":"datahub_usage_event_policy","description":"Datahub Usage Event Policy","last_updated_time":1634123866020,"schema_version":1,"error_notification":null,"default_state":"Rollover","states":[{"name":"Rollover","actions":[{"rollover":{"min_index_age":"1d"}}],"transitions":[{"state_name":"ReadOnly","conditions":{"min_index_age":"7d"}}]},{"name":"ReadOnly","actions":[{"read_only":{}}],"transitions":[{"state_name":"Delete","conditions":{"min_index_age":"60d"}}]},{"name":"Delete","actions":[{"delete":{}}],"transitions":[]}],"ism_template":[{"index_patterns":["datahub_usage_event-*"],"priority":100,"last_updated_time":1634123866020}]}}}
creating datahub_usage_event_index_template
{
  "index_patterns": ["datahub_usage_event-*"],
  "mappings": {
    "properties": {
      "@timestamp": {
        "type": "date"
      },
      "type": {
        "type": "keyword"
      },
      "timestamp": {
        "type": "date"
      },
      "userAgent": {
        "type": "keyword"
      },
      "browserId": {
        "type": "keyword"
      }
    }
  },
  "settings": {
    "index.opendistro.index_state_management.rollover_alias": "datahub_usage_event"
  }
2021/10/13 11:17:47 Command finished successfully.
}{"acknowledged":true}{"acknowledged":true,"shards_acknowledged":true,"index":"datahub_usage_event-000001"}
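The concatenated JSON on the last line is two separate curl responses: one acknowledging the index template PUT, one acknowledging creation of the bootstrap index `datahub_usage_event-000001`. A quick sanity check on that captured line (the response string is copied verbatim from the log):

```shell
#!/bin/sh
# Both the template PUT and the bootstrap index creation should be acknowledged.
resp='{"acknowledged":true}{"acknowledged":true,"shards_acknowledged":true,"index":"datahub_usage_event-000001"}'

# Count the acknowledgements; expect two.
printf '%s' "$resp" | grep -o '"acknowledged":true' | wc -l
```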
So it seems fine. But then, when I try to access DataHub, I get the following errors from GMS:
11:47:09.155 [qtp544724190-13] ERROR c.l.metadata.dao.search.ESSearchDAO - Search query failed:Elasticsearch exception [type=index_not_found_exception, reason=no such index [corpuserinfodocument]]
11:47:09.155 [qtp544724190-15] ERROR c.l.metadata.dao.search.ESSearchDAO - Search query failed:Elasticsearch exception [type=index_not_found_exception, reason=no such index [dataflowdocument]]
11:47:09.155 [qtp544724190-11] ERROR c.l.metadata.dao.search.ESSearchDAO - Search query failed:Elasticsearch exception [type=index_not_found_exception, reason=no such index [dashboarddocument]]

11:47:09.129 [qtp544724190-11] ERROR c.l.metadata.dao.search.ESSearchDAO - Search query failed:Elasticsearch exception [type=index_not_found_exception, reason=no such index [datajobdocument]]
11:47:09.129 [qtp544724190-12] ERROR c.l.metadata.dao.search.ESSearchDAO - Search query failed:Elasticsearch exception [type=index_not_found_exception, reason=no such index [chartdocument]]
11:47:09.129 [qtp544724190-9] ERROR c.l.metadata.dao.search.ESSearchDAO - Search query failed:Elasticsearch exception [type=index_not_found_exception, reason=no such index [datasetdocument]]
Does anyone know why this is happening? I am running the elasticsearch job, but it looks like some indices are still missing. I am using AWS OpenSearch and I am passing these env variables into the elasticsearch init job: `DATAHUB_ANALYTICS_ENABLED: true` and `USE_AWS_ELASTICSEARCH: true`
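For anyone else debugging this: the index names GMS expects can be pulled straight out of those error lines and then compared against what the cluster actually has. A minimal sketch (a few error lines are copied from the log above; the commented `_cat/indices` call assumes your OpenSearch endpoint, shown with the same `<my-es-url>` placeholder as in the logs):

```shell
#!/bin/sh
# GMS error lines, copied from the log above.
cat <<'EOF' > /tmp/gms-errors.log
11:47:09.155 [qtp544724190-13] ERROR c.l.metadata.dao.search.ESSearchDAO - Search query failed:Elasticsearch exception [type=index_not_found_exception, reason=no such index [corpuserinfodocument]]
11:47:09.155 [qtp544724190-15] ERROR c.l.metadata.dao.search.ESSearchDAO - Search query failed:Elasticsearch exception [type=index_not_found_exception, reason=no such index [dataflowdocument]]
11:47:09.129 [qtp544724190-9] ERROR c.l.metadata.dao.search.ESSearchDAO - Search query failed:Elasticsearch exception [type=index_not_found_exception, reason=no such index [datasetdocument]]
EOF

# Pull out the index name between the brackets after "no such index".
sed -n 's/.*no such index \[\([a-z]*\)\].*/\1/p' /tmp/gms-errors.log | sort -u

# Then list what actually exists in the cluster (placeholder endpoint):
# curl -s "https://<my-es-url>:443/_cat/indices?v"
```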
e
This is very weird. Seems like you are using a very old version of GMS.
Can you confirm the tag you are using for datahub-gms? (And possibly all other datahub pods.)
I have a feeling you are using “latest”, which has an old version of DataHub; “head” has the latest version of DataHub.
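Checking the tags is quick with kubectl. A sketch below, with a hypothetical pod/image list inlined so the filter itself is runnable; in a real cluster the commented `kubectl` line produces the same shape:

```shell
#!/bin/sh
# Hypothetical "pod image" list; in a real cluster the same shape comes from:
#   kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.spec.containers[*].image}{"\n"}{end}'
images='datahub-gms linkedin/datahub-gms:latest
datahub-frontend linkedin/datahub-frontend-react:v0.8.15'

# Flag any pod still running the "latest" tag (the tag is the field after the last colon).
printf '%s\n' "$images" | awk -F: '$NF == "latest" { print $0 }'
```

From there, pinning an explicit tag in the Helm values (e.g. `image.tag`) and upgrading the release is the usual fix.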
q
You are probably right. I am hitting another issue; I'll start a new thread for that. Thanks a lot for your help!