Sync "olm" failed: no catalog sources available #740
@deniskril thanks for letting us know about your issue! Could you provide some more information about:

I just created only https://github.com/operator-framework/operator-lifecycle-manager/blob/master/deploy/upstream/quickstart/olm.yaml and the packageserver cannot start :(
@deniskril could you check if the

logs from operatorhubio-catalog-zdmdl: kubectl -n olm get CatalogSource operatorhubio-catalog -o yaml
In the olm namespace, the packageserver deployment isn't created, and
@deniskril Do

kubectl -n olm logs olm-operators-pp52b
I am having the same issue. Events show: 12m Warning Unhealthy Pod Liveness probe failed: timeout: failed to connect service "localhost:50051" within 1s. Is there a way to point the service registry at an external repo instead of the local one?
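For the question about using an external repo: a CatalogSource with sourceType grpc can point at a registry image instead of the bundled in-cluster catalog. This is a hedged sketch, not a confirmed fix for this issue; the resource name `external-catalog` is hypothetical, and the image shown is the public OperatorHub catalog image, which you may want to swap for your own.

```shell
# Sketch: create a CatalogSource backed by an external registry image.
# "external-catalog" is a hypothetical name; the image is an assumption.
cat <<'EOF' | kubectl apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: external-catalog
  namespace: olm
spec:
  sourceType: grpc
  image: quay.io/operatorhubio/catalog:latest
  displayName: External Catalog
EOF
```

OLM should then launch a registry pod for this source in the olm namespace; check it with `kubectl -n olm get pods` and `kubectl -n olm get catalogsource external-catalog -o yaml`.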
I suspect something about the OLM install may have gone awry. Could you try with the newer install instructions and see if you still have an issue? https://github.com/operator-framework/operator-lifecycle-manager/releases/tag/0.10.0
I have seen the same issue as well. This happens on the latest 0.10.0 release.

$ kubectl get pods -n olm
$ kubectl describe pod olm-operators-6h76b

Normal Scheduled 8m45s default-scheduler Successfully assigned olm/olm-operators-6h76b to node1.example.com

Any ideas?
Verified that this also happens on 0.10.1; it has not been fixed. Steps to reproduce:

Symptoms are identical to those described above.
The same problem reproduces on v0.10.1 with a two-node K8s cluster.
Em... I have fixed it; it was caused by my own setup. I launched 2 K8s nodes in an OpenStack cluster and hadn't configured the security group, which caused a network problem between the master and the minion. It worked once I fixed the network problem.
Hi, I have a similar problem. It seems related to the way the liveness check is configured. If I run the liveness check manually with localhost as the hostname, as per the pod configuration, I receive the same error.
If I run the liveness check manually without a hostname specified, it works.
I looked at the liveness checks for all the other pods: while they are all http-get, none of them specify a hostname, e.g.
Is it possible to configure OLM not to specify localhost as the hostname for the liveness checks on the registry-server?
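The manual check described above can be sketched as follows. This assumes the probe uses grpc_health_probe (the "failed to connect service" wording in the event matches that tool's output format); the pod name is taken from earlier in this thread, and the exact probe command inside your pod may differ.

```shell
# Run the liveness check by hand inside the registry pod, with and
# without the hostname. Pod name and probe binary are assumptions.
kubectl -n olm exec olm-operators-pp52b -- \
  grpc_health_probe -addr=localhost:50051   # as configured: reported to time out

kubectl -n olm exec olm-operators-pp52b -- \
  grpc_health_probe -addr=:50051            # no hostname: reported to succeed
```

If the localhost form fails while the bare-port form succeeds, the registry server is likely listening on an address that does not resolve to localhost inside the pod's network namespace.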
Hi, were you able to resolve this issue? I have the same problem.
Hello!
I installed OLM on my kubernetes cluster and I see errors in catalog-operator
time="2019-03-05T13:52:24Z" level=info msg="retrying olm"
E0305 13:52:24.962371 1 queueinformer_operator.go:155] Sync "olm" failed: no catalog sources available
time="2019-03-05T13:52:25Z" level=info msg="building connection to registry" currentSource="{olm-operators olm}" id=8q7aJ source=olm-operators
time="2019-03-05T13:52:25Z" level=info msg="client hasn't yet become healthy, attempt a health check" currentSource="{olm-operators olm}" id=8q7aJ source=olm-operators
and operators can't be installed.
Please help.
kubernetes 1.12.1
olm - https://github.com/operator-framework/operator-lifecycle-manager/blob/master/deploy/upstream/quickstart/olm.yaml
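The suggestions scattered through this thread amount to a short diagnostic checklist, sketched below. The namespace and resource names come from the comments above; the `olm.catalogSource` label selector is an assumption about how OLM labels its registry pods, so adjust to what `kubectl -n olm get pods --show-labels` actually reports.

```shell
# Diagnostic steps collected from this thread; adjust names to your cluster.
kubectl -n olm get pods                                # are all OLM pods Running?
kubectl -n olm get catalogsource olm-operators -o yaml # catalog source spec/status
kubectl -n olm logs deploy/catalog-operator            # "no catalog sources available"?
kubectl -n olm describe pod -l olm.catalogSource=olm-operators  # liveness probe events
```

If the describe output shows liveness probe timeouts against localhost:50051, check pod-to-pod and node-to-node networking first (one commenter traced the identical symptom to a missing OpenStack security group rule).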