
Back-off restarting failed container check-db #866

Open · 2 tasks done
stephen-lazarionok opened this issue Jun 2, 2024 · 3 comments
Labels
kind/bug kind - things not working properly

Comments

@stephen-lazarionok

stephen-lazarionok commented Jun 2, 2024

Checks

Chart Version

8.9.0

Kubernetes Version

Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.3", GitCommit:"434bfd82814af038ad94d62ebe59b133fcb50506", GitTreeState:"clean", BuildDate:"2022-10-12T10:47:25Z", GoVersion:"go1.19.2", Compiler:"gc", Platform:"darwin/arm64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.10", GitCommit:"0fa26aea1d5c21516b0d96fea95a77d8d429912e", GitTreeState:"clean", BuildDate:"2024-01-17T13:38:41Z", GoVersion:"go1.20.13", Compiler:"gc", Platform:"linux/amd64"}

Helm Version

version.BuildInfo{Version:"v3.12.3", GitCommit:"3a31588ad33fe3b89af5a2a54ee1d25bfe6eaa5e", GitTreeState:"clean", GoVersion:"go1.20.7"}

Description

I was trying to redeploy Airflow, but:

  • the newly created Airflow pods keep restarting with "Back-off restarting failed container check-db"
  • eventually I got "{db.py:1758} INFO - Connection successful."

Relevant Logs

airflow-db-migrations-6867d67f79-b5hl9                            0/1     Init:0/1    7 (5m14s ago)   18m
airflow-scheduler-59fd578c45-xv7hl                                0/2     Init:0/2    7 (5m26s ago)   18m
airflow-scheduler-65c666bcb9-75rwj                                2/2     Running     0               145m
airflow-sync-connections-7b99456945-clh52                         0/1     Init:0/2    7 (5m21s ago)   18m
airflow-sync-pools-8967f946d-mwt4h                                1/1     Running     0               18m
airflow-sync-users-7cd5c575cb-5v9gg                               0/1     Init:0/2    7 (5m30s ago)   18m
airflow-web-6f5fffb68d-gqss5                                      1/1     Running     1 (112m ago)    145m
airflow-web-cb655b569-77l4g                                       0/1     Init:0/2    7 (5m20s ago)   18m

  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  12m                   default-scheduler  Successfully assigned airflow2/airflow-web-cb655b569-77l4g to zing-staging-k8spool-2c16-ram-ji8a2
  Normal   Pulling    12m                   kubelet            Pulling image "z/z-airflow-dags:0.45.0-staging"
  Normal   Pulled     12m                   kubelet            Successfully pulled image "z/z-airflow-dags:0.45.0-staging" in 1.69428358s (1.69430606s including waiting)
  Normal   Created    6m46s (x5 over 12m)   kubelet            Created container check-db
  Normal   Started    6m46s (x5 over 12m)   kubelet            Started container check-db
  Normal   Pulled     6m46s (x4 over 11m)   kubelet            Container image "z/z-airflow-dags:0.45.0-staging" already present on machine
  Warning  BackOff    2m15s (x19 over 10m)  kubelet            Back-off restarting failed container check-db in pod airflow-web-cb655b569-77l4g_airflow2(d2347d9f-3f7c-462d-bd7d-b18ae2c85fea)

Custom Helm Values

No response

@stephen-lazarionok stephen-lazarionok added the kind/bug kind - things not working properly label Jun 2, 2024
@thesuperzapper
Member

@stephen-lazarionok almost certainly this is a connectivity or credentials issue between your cluster and the Postgres DB, and not an issue with the chart.

You should check the logs of the container itself (rather than just the events) to get more context as to why it's failing.
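
For example, something like the following, using the pod name and namespace from the events above (both are taken from your report; adjust to your deployment):

# view logs from the failing init container
kubectl logs airflow-web-cb655b569-77l4g -c check-db -n airflow2

# if the init container has already restarted, inspect the previous attempt's logs
kubectl logs airflow-web-cb655b569-77l4g -c check-db -n airflow2 --previous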

@harsh0522

Hey, I am facing this problem too. Even after checking many resources, I am unable to solve it.

[screenshot]

[screenshot]

Even after applying these changes in values.yaml, it is still the same:

[screenshot]

Please help.

@thesuperzapper
Member

@harsh0522 in your image, the postgres container is "Pending"; this is probably due to some kind of PVC issue.

Check that your cluster has a default StorageClass (or use the values to set a specific one).

Also, you can confirm this by checking the events of the Postgres pod (with kubectl describe).
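
A minimal sketch of those checks (the release name `airflow`, the chart reference `airflow-stable/airflow`, and the `postgresql.persistence.storageClass` value path are assumptions based on a typical install of this chart; adjust to yours):

# confirm the cluster has a default StorageClass
kubectl get storageclass

# inspect the Postgres pod's events for PVC or scheduling errors
kubectl describe pod <postgres-pod-name> -n <namespace>
kubectl get pvc -n <namespace>

# if no default StorageClass exists, set one explicitly via the values
helm upgrade airflow airflow-stable/airflow --set postgresql.persistence.storageClass=<storage-class-name>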
