We're experiencing a problem with cert-manager related to TLS certificates. When we deploy an application using Helm, with all the required annotations, the TLS secret is not created.
The Ingress shows an error: from the Kubernetes dashboard, when I open the details of the Ingress resource and follow its secret, I get a 404. The Ingress resource is created referencing a secret that doesn't exist.
Looking at the cert-manager namespace, I found what appear to be two deployments:
The year-old one doesn't seem to trigger at all. The 4-month-old one does trigger but fails continuously with the errors below, and it shows red for an evicted pod that failed, even though the deployment itself is running.
E1209 19:46:28.340854 1 reflector.go:138] external/io_k8s_client_go/tools/cache/reflector.go:167: Failed to watch *v1.Certificate: failed to list *v1.Certificate: the server could not find the requested resource (get certificates.cert-manager.io)
E1209 19:46:41.726643 1 reflector.go:138] external/io_k8s_client_go/tools/cache/reflector.go:167: Failed to watch *v1.CertificateRequest: failed to list *v1.CertificateRequest: the server could not find the requested resource (get certificaterequests.cert-manager.io)
E1209 19:46:42.842402 1 reflector.go:138] external/io_k8s_client_go/tools/cache/reflector.go:167: Failed to watch *v1.Issuer: failed to list *v1.Issuer: the server could not find the requested resource (get issuers.cert-manager.io)
E1209 19:46:43.581019 1 reflector.go:138] external/io_k8s_client_go/tools/cache/reflector.go:167: Failed to watch *v1.ClusterIssuer: failed to list *v1.ClusterIssuer: the server could not find the requested resource (get clusterissuers.cert-manager.io)
E1209 19:46:51.205804 1 reflector.go:138] external/io_k8s_client_go/tools/cache/reflector.go:167: Failed to watch *v1.Challenge: failed to list *v1.Challenge: the server could not find the requested resource (get challenges.acme.cert-manager.io)
E1209 19:46:51.819486 1 reflector.go:138] external/io_k8s_client_go/tools/cache/reflector.go:167: Failed to watch *v1.Order: failed to list *v1.Order: the server could not find the requested resource (get orders.acme.cert-manager.io)
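Given the "could not find the requested resource" errors above, it's worth checking which cert-manager CRDs and API versions the cluster actually serves (a generic sketch, nothing specific to this setup):
kubectl get crd -o name | grep cert-manager.io       # are the CRDs installed at all?
kubectl api-resources --api-group=cert-manager.io    # which API versions does the server expose?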
This is a new cluster I'm working with. I found a total of 473 evicted pods in the cert-manager namespace (I have an urge to clean those up; I should, right?).
Anyway, the main issue is that the TLS secret is not being created by cert-manager. I can provide a ton of additional information, but everything else looks fine.
In the end, I resolved this issue by scaling the replica set for cert-manager1. That caused the pods to be restarted, and everything worked correctly.
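For reference, a minimal sketch of that restart, assuming the deployment behind the replica set is named cert-manager1 (substitute whatever kubectl -n cert-manager get deploy shows):
kubectl -n cert-manager scale deployment cert-manager1 --replicas=0   # stop the old pods
kubectl -n cert-manager scale deployment cert-manager1 --replicas=1   # start a fresh one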
However, upon further investigation, having two deployments is not correct, as they are likely conflicting with each other. Part of the solution is to remove one of them and keep only one running, and also to update to the latest version:
NAME                    NAMESPACE     REVISION  UPDATED                                    STATUS    CHART                 APP VERSION
cert-manager            cert-manager  1         2020-07-24 18:49:08.541265133 +0530 +0530  deployed  cert-manager-v0.15.1  v0.15.1
cert-manager1-113bce5f  cert-manager  1         2021-08-03 14:54:54.351112781 +0000 UTC    deployed  cert-manager-0.1.10   1.4.2
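A sketch of that cleanup, assuming the release names from the helm list above and the official Jetstack chart repository (check them against your own helm ls output first):
helm -n cert-manager uninstall cert-manager1-113bce5f   # remove the duplicate release
helm repo add jetstack https://charts.jetstack.io && helm repo update
# installCRDs makes the chart install the v1 CRDs the logs above complain about
helm -n cert-manager upgrade cert-manager jetstack/cert-manager --version v1.4.2 --set installCRDs=true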
I followed the guide here.
At the next step (verify the installation), when I create a deployment in cluster2 (the remote cluster), the sidecar injector does not work, as shown below:
kubectl get event -n sample
LAST SEEN TYPE REASON OBJECT MESSAGE
42m Warning FailedCreate replicaset/helloworld-v2-79bf565586 Error creating: Internal error occurred: failed calling webhook "namespace.sidecar-injector.istio.io": failed to call webhook: Post "https://istiod.istio-system.svc:443/inject/cluster/cluster2/net/network1?timeout=10s": unsupported service type "ExternalName"
2m45s Warning FailedCreate replicaset/helloworld-v2-79bf565586 Error creating: Internal error occurred: failed calling webhook "namespace.sidecar-injector.istio.io": failed to call webhook: Post "https://istiod.istio-system.svc:443/inject/cluster/cluster2/net/network1?timeout=10s": unsupported service type "ExternalName"
53m Normal ScalingReplicaSet deployment/helloworld-v2 Scaled up replica set helloworld-v2-79bf565586 to 1
41m Normal ScalingReplicaSet deployment/helloworld-v2 Scaled up replica set helloworld-v2-79bf565586 to 1
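The message suggests the istiod service the webhook targets is of type ExternalName. A quick way to confirm what type it actually is (a sketch, assuming the default istio-system namespace from the error):
kubectl -n istio-system get svc istiod -o jsonpath='{.spec.type}{"\n"}'   # prints e.g. ExternalName or ClusterIP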
Is this issue related to firewalld? (I'm using AWS EKS v1.24.)
When I searched for this, other people's reports here were about context deadline exceeded instead...
Can anybody help me? :(
I've searched the internet and even reinstalled Istio from scratch...
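For what it's worth, before reinstalling, Istio's built-in analyzer can often name an injection problem directly (a sketch; use an istioctl matching your control-plane version):
istioctl analyze -n sample   # reports injection and webhook misconfigurations for the sample namespace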
I'm using Hyperledger Fabric, and now I'm trying to back up the current state and restore it on a different computer.
I'm following the procedure found in hyperledger-fabric-backup-and-restore.
The main steps being:
Copy the crypto-config and the channel-artifacts directory
Copy the content of all peers and orderer containers
Modify the docker-compose.yaml to map the container volumes to the local directory where I have the backup copy.
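For the copy steps, a rough sketch of what this looks like (container names and paths follow the standard BYFN sample and are illustrative):
cp -r crypto-config channel-artifacts /backup/   # the generated artifacts
docker cp peer0.org1.example.com:/var/hyperledger/production /backup/peer0.org1   # one peer's ledger data; repeat per peer/orderer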
Yet it's not working properly in my case: when I restart the network with ./byfn.sh up, all the containers initially come up correctly, but then whatever operation I try to execute on the channel (peer channel create, peer channel join, peer channel update) fails with this error:
Error: got unexpected status: BAD_REQUEST -- error applying config update to existing channel 'mychannel': error authorizing update: error validating ReadSet: proposed update requires that key [Group] /Channel/Application be at version 0, but it is currently at version 1
Reading the error, the proposed update expects /Channel/Application to be at version 0, but the restored channel already has it at version 1, i.e. the channel had already been created and configured before the backup. Is there anything I should do that is not mentioned in hyperledger-fabric-backup-and-restore?
I got the same error while trying to create a channel. Bringing the network down and then back up solved my problem.
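A sketch of that reset with the BYFN scripts from the question (note that byfn.sh down also removes the containers and generated artifacts):
./byfn.sh down   # tear everything down
./byfn.sh up     # bring the network back up from scratch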
I recently installed an OpenShift instance with 2 brokers, 2 nodes, and 3 MongoDB/ActiveMQ nodes.
I used the openshift origin Puppet module, and it is mostly working OK.
I can create, move, and deploy normal and scalable applications, but when I push changes to my gear (using git) I get the following error message:
Failed to report deployment to broker. This will be corrected on the next git push. Message: Connection reset by peer - SSL_connect
The push itself is successful:
remote: Git Post-Receive Result: success
remote: Activation status: success
remote: Deployment completed with status: success
But I always get this error message when I push.
I checked the node and broker logs, ran tcpdump on the servers and used Wireshark to inspect the communication between node and broker, and googled the error, mostly coming up with nothing.
I also went over the deployment guide and checked the installation and everything seems to be in order.
When I run:
curl https://MyBroker/broker/rest/api
I get an api response and not an SSL error:
{"api_version":1.7,"data":{"API": {"href":"https://MyBroker/broker/rest/api","method":"GET","optional_params":[] ..
Any help will be appreciated.
Thank you
Keren
After the fixpack in the subject, I cannot manage to direct update my application.
It notifies me that there is a new version available, but when I hit Update it keeps saying that the download failed (both on iOS and Android).
I attached my Android device to adb and noticed these lines in the console:
Authentication error: Unable to respond to any of these challenges:
{wl-composite-challenge=WWW-Authenticate: WL-Composite-Challenge}
java.io.IOException: Error downloading update file The following
message has been received from the server instead of the expected
application update zip file: HTTP/1.1 401 Unauthorized
/-secure-{"challenges":{"wl_deviceNoProvisioningRealm":{"token":"1or0tj7gnoev1rn06s188j4u9h"},"wl_antiXSRFRealm":{"WL-Instance-Id":"np8c8o3c4dk1k7s79i2ikddfab"}}}/
at
com.worklight.androidgap.plugin.WebResourcesDownloaderPlugin$WebResourcesDownloader.downloadZipFile(WebResourcesDownloaderPlugin.java:364)
To complete the picture: I deployed the IBM_Worklight_Console with no security role in its web.xml, and we have Worklight 6.1.0.1 installed on WAS Network Deployment 7.0.0.23 running on AIX.
Before the fixpack, everything worked well.
Thank you
EDIT: here you can see my application-descriptor.xml and my server configuration:
I forgot about this question; however, I found the problem.
When I open my application, the WL framework checks for a newly available update. At the same time, my application asks an adapter for some data, before the WL server can respond about the update. So if the adapter's response arrives first, the update response will carry a different requestID, causing a 403 Forbidden.
I don't know if I explained it clearly, but the temporary fix is to set updateSilently: true, so nothing disturbs the direct update.
A lot of people have asked this question, but that was 2 months ago with another GitLab version.
I'm using GitLab 5.2 on a fresh Debian 7.0 server.
Everything looks okay on the website, but when I run /home/git/gitlab-shell/bin/check I get this error:
Check GitLab API access: FAILED. code: 302
Check directories and files:
/home/git/repositories: OK
/home/git/.ssh/authorized_keys: OK
I'm running on a custom SSH port, but I'm able to connect.
When pushing, I get this error:
git push -vu origin master
Pushing to ssh://git@apps.ndd.fr:2232/Users/test.git
fatal: The remote end hung up unexpectedly
Thanks for your answers!
I just got the same error and went looking at the code.
What I found is that the gitlab_net module asks for its answer at #{host}/check (gitlab-shell/lib/gitlab_net.rb).
The host method is defined as "#{config.gitlab_url}/api/v3/internal", while at the same time config.gitlab_url in ./gitlab-shell/config.yml "should end with a slash". So the resulting URL contains a double slash, and my web server simply returns a 302 on the request to remove it.
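You can reproduce the redirect by hand (a sketch; the hostname is illustrative, taken from the push URL above, and the double slash is the point):
curl -i 'https://apps.ndd.fr//api/v3/internal/check'   # expect a 302 pointing at the single-slash URL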
FYI: that failure concerns the API, not the web service, so in many cases it's non-critical anyway.
I think it's a minor bug in the code, and there is a closely related issue: https://github.com/gitlabhq/gitlabhq/issues/3483