IntelliJ Kubernetes View does not show any cluster information - intellij-idea

The Kubernetes View is not working; it only shows the path to the certificate for the active context and a message in the toolbar indicating an unknown cluster.
I tried setting the path to the config file, but without success.
LOGS:
2020-01-07 12:07:53,303 [15549238] WARN - lij.kubernetes.model.ModelData - Unable to read OpenAPI specification from C:\Users\ondra\.kube\config\admin.conf
com.google.gson.JsonSyntaxException: java.lang.IllegalStateException: Expected BEGIN_OBJECT but was STRING at line 1 column 1 path $
at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$Adapter.read(ReflectiveTypeAdapterFactory.java:226)
at com.google.gson.Gson.fromJson(Gson.java:927)
at com.google.gson.Gson.fromJson(Gson.java:865)
But that config file was originally generated by Kubernetes.

I found the problem. The Kubernetes admin.conf file has relative paths to the certificates (e.g. ..\certs...). I changed them to absolute paths and the view now shows the information, but it appears unauthorized.
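For illustration, the change is of this kind (a sketch only; the cluster and user names and the certificate file names below are made up, not taken from the real admin.conf):

apiVersion: v1
kind: Config
clusters:
- name: kubernetes
  cluster:
    server: https://<api-server>:6443
    # previously a relative path such as ..\certs\ca.crt
    certificate-authority: C:\Users\ondra\.kube\certs\ca.crt
users:
- name: kubernetes-admin
  user:
    client-certificate: C:\Users\ondra\.kube\certs\admin.crt
    client-key: C:\Users\ondra\.kube\certs\admin.key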

Related

Log file specified by gemfire.log-file never gets created

I have a web application hosted in Tomcat that uses a Geode cache, but I cannot get Geode to produce a log file. The cache and properties, including the log-file property, are created programmatically. I see some Geode logging in Tomcat's stdout, and it seems to confirm that the log-file property has been set:
....
13:30:43,149 | INFO | [LoggingSession] | Startup Configuration:
### GemFire Properties defined with system property ###
conserve-sockets=false
### GemFire Properties defined with api ###
....
log-disk-space-limit=0
log-file=/local/install/user1/config/Lev1/Web/WebAppServer/Server1_1/logs/gemfire.log
log-file-size-limit=0
log-level=config
....
However, the file specified never gets created.
I have tried setting the file permissions to 777 on that directory, as well as setting the log-level to 'fine', but neither made a difference. The Geode output only shows up in stdout, which I believe is the default.
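For reference, the cache and its properties are built programmatically, roughly along these lines (a minimal sketch assuming a peer Cache created with CacheFactory; the real application may use ClientCacheFactory or a different setup):

import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;

public class GeodeCacheBootstrap {
    public Cache createCache() {
        // Property values mirror the startup configuration in the log above;
        // log-file must point to a path the Tomcat process can write to.
        return new CacheFactory()
                .set("conserve-sockets", "false")
                .set("log-file", "/local/install/user1/config/Lev1/Web/WebAppServer/Server1_1/logs/gemfire.log")
                .set("log-file-size-limit", "0")
                .set("log-disk-space-limit", "0")
                .set("log-level", "config")
                .create();
    }
}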
Why isn't the log file specified by the log-file property getting created?

RKE2 Authorized endpoint configuration help required

I have a Rancher 2.6.67 server and an RKE2 downstream cluster. The cluster was created without an authorized cluster endpoint. The article "How to add an authorized cluster endpoint to an RKE2 cluster created by Rancher" describes how to add it to an existing cluster, and although the answer looks promising, I must still be missing some detail, because it does not work for me.
Here is what I did:
Created the /var/lib/rancher/rke2/kube-api-authn-webhook.yaml file with the following contents:
apiVersion: v1
kind: Config
clusters:
- name: Default
  cluster:
    insecure-skip-tls-verify: true
    server: http://127.0.0.1:6440/v1/authenticate
users:
- name: Default
  user:
    insecure-skip-tls-verify: true
current-context: webhook
contexts:
- name: webhook
  context:
    user: Default
    cluster: Default
and added
"kube-apiserver-arg": [
    "authentication-token-webhook-config-file=/var/lib/rancher/rke2/kube-api-authn-webhook.yaml"
]
to the /etc/rancher/rke2/config.yaml.d/50-rancher.yaml file.
After restarting rke2-server, I found the network configuration tab in Rancher and was able to enable the authorized endpoint. Here is where my success ends.
I created a service account and obtained its secret for token authorization, but authentication failed when connecting directly to the API endpoint on the master.
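For reference, the service account and its token secret were created roughly like this, with the secret's token then used as a bearer token against the endpoint (a sketch only; the names are illustrative, not the real ones):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: direct-access
  namespace: default
---
apiVersion: v1
kind: Secret
metadata:
  name: direct-access-token
  namespace: default
  annotations:
    kubernetes.io/service-account.name: direct-access
type: kubernetes.io/service-account-token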
kube-api-auth pod logs this:
time="2022-10-06T08:42:27Z" level=error msg="found 1 parts of token"
time="2022-10-06T08:42:27Z" level=info msg="Processing v1Authenticate request..."
Also the log is full of messages like this:
E1006 09:04:07.868108 1 reflector.go:139] pkg/mod/github.com/rancher/client-go@v1.22.3-rancher.1/tools/cache/reflector.go:168: Failed to watch *v3.ClusterAuthToken: failed to list *v3.ClusterAuthToken: the server could not find the requested resource (get clusterauthtokens.meta.k8s.io)
E1006 09:04:40.778350 1 reflector.go:139] pkg/mod/github.com/rancher/client-go@v1.22.3-rancher.1/tools/cache/reflector.go:168: Failed to watch *v3.ClusterAuthToken: failed to list *v3.ClusterAuthToken: the server could not find the requested resource (get clusterauthtokens.meta.k8s.io)
E1006 09:04:45.171554 1 reflector.go:139] pkg/mod/github.com/rancher/client-go@v1.22.3-rancher.1/tools/cache/reflector.go:168: Failed to watch *v3.ClusterUserAttribute: failed to list *v3.ClusterUserAttribute: the server could not find the requested resource (get clusteruserattributes.meta.k8s.io)
I found that SA tokens will not work this way, so I tried to use a Rancher user token, but that failed as well:
time="2022-10-06T08:37:34Z" level=info msg=" ...looking up token for kubeconfig-user-qq9nrc86vv"
time="2022-10-06T08:37:34Z" level=error msg="clusterauthtokens.cluster.cattle.io \"cattle-system/kubeconfig-user-qq9nrc86vv\" not found"
Checking the cattle-system namespace, there are no ServiceAccount and Secret entries corresponding to the users created in Rancher; however, I found related ServiceAccount and Secret entries in cattle-impersonation-system.
I tried creating a new user, but that, too, only resulted in new entries in the cattle-impersonation-system namespace, so I presume kube-api-auth wrongly assumes the secrets are located in the cattle-system namespace.
Now the questions:
Can I authenticate with the downstream RKE2 cluster using normal SA tokens (not ones created through the Rancher server)? If so, how?
What did I do wrong when adding the webhook authentication configuration? How can I make it work?
I noticed that since I made the modifications described above, I cannot download the kubeconfig file from the Rancher UI for this cluster. What went wrong there?
Thanks in advance for any advice.

Getting an error when I try to apply my CA certificate to Apache Solr

I am trying to apply my CA certificate to Solr. I have already reached Solr over HTTP and with a self-signed certificate, following their own recipe here: Enabling SSL.
But when I try to apply my CA certificate, I get an error: "HTTP ERROR 404 javax.servlet.UnavailableException: Error processing the request. CoreContainer is either not initialized or shutting down."
This is the full error message I get in the browser.
My solr.in.sh config is:
SOLR_SSL_ENABLED=true
SOLR_SSL_KEY_STORE=/etc/default/mykeystore
SOLR_SSL_KEY_STORE_PASSWORD=********
SOLR_SSL_TRUST_STORE=/etc/default/mykeystore
SOLR_SSL_TRUST_STORE_PASSWORD=********
SOLR_SSL_NEED_CLIENT_AUTH=false
# SOLR_SSL_WANT_CLIENT_AUTH=false
#SOLR_SSL_CLIENT_HOSTNAME_VERIFICATION=true
SOLR_SSL_CHECK_PEER_NAME=false
SOLR_SSL_KEY_STORE_TYPE=JKS
SOLR_SSL_TRUST_STORE_TYPE=JKS
I followed these two links to convert my PEM file to a keystore: first:1, then:2 (I applied only the fourth step in the second link), and then named the resulting file mykeystore.
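For reference, the conversion follows the usual openssl and keytool route, roughly like this (a sketch only; file names, alias, and passwords are placeholders, not the exact commands from those links):

# 1. Bundle the CA-signed certificate and private key into a PKCS12 store.
openssl pkcs12 -export -in signed-cert.pem -inkey private-key.pem \
  -name solr-ssl -out mykeystore.p12

# 2. Convert the PKCS12 store into a JKS keystore (the type configured in solr.in.sh).
keytool -importkeystore -srckeystore mykeystore.p12 -srcstoretype PKCS12 \
  -destkeystore mykeystore -deststoretype JKS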
I have tried a lot of solutions, some of them from Stack Overflow, but none of them solved my problem. Any help or idea would be very useful; I'm totally stuck. What can I do or check?

I am not able to set System Properties from EAR or WAR in JBOSS EAP 7.1

I need to set trustStore and trustStorePassword from within the EAR or WAR using the System.setProperty() method. I printed System.getProperties() and found in the logs that javax.net.ssl.trustStore and javax.net.ssl.trustStorePassword were being set to exactly the location and password they need to be, but I am still not able to validate the server certificate from the trustStore.
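The in-deployment setting is roughly of this form (a sketch only; the path and password are placeholders, not the real values):

public class TrustStoreInit {
    public static void init() {
        // Set the trust store properties from inside the deployment,
        // before any outbound TLS connection is attempted.
        System.setProperty("javax.net.ssl.trustStore", "/path/to/truststore.jks");
        System.setProperty("javax.net.ssl.trustStorePassword", "changeit");
    }
}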
Getting this error:
sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
But if I set these two parameters from the startup script, i.e.
$JBOSS_HOME/bin/standalone.sh -c standalone-full-ha.xml -Djavax.net.debug=none -Djavax.net.ssl.trustStore=path to truststore.jks -Djavax.net.ssl.trustStorePassword=password
it was able to validate the server's certificate successfully.
Are there any restrictions in JBoss EAP 7.1 on setting system properties from deployments? Or is there some configuration that I am missing?
System properties for certificates are read at JVM startup, so setting them in a deployment is too late.

AlfrescoRuntimeException:GetModelsDiff return status is 403 and api/solr/aclchangesets return status:403

I installed Alfresco on Windows 7 with the default executable installer. My installation is Alfresco Community version (5.0.d).
I tried to configure the SSL link. I changed the file named generate_keystores.bat located in D:\Alfresco\alf_data\keystore, which let me generate my self-signed certificates.
Then I replaced all the .keystore and .truststore files with my certificates, and I also imported the certificates into Java's keystore, which is named cacerts.
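The import into cacerts was done with keytool, roughly like this (a sketch only; the alias and certificate file name are placeholders, and changeit is the default cacerts password):

keytool -importcert -alias alfresco-repo -file repo-cert.cer ^
  -keystore "%JAVA_HOME%\jre\lib\security\cacerts" -storepass changeit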
I configured the Tomcat server to serve my /share only over HTTPS.
When I run it, everything looks perfect, but I cannot search for users and sites with it.
It seems the indexing is broken, and solr.log outputs ERROR logs:
2015-10-13 21:11:15,007 ERROR
[org.alfresco.solr.tracker.AbstractTracker] Tracking failed
org.alfresco.error.AlfrescoRuntimeException: 09132881
api/solr/aclchangesets return status:403
at org.alfresco.solr.client.SOLRAPIClient.getAclChangeSets(SOLRAPIClient.java:159)
at org.alfresco.solr.tracker.AclTracker.checkRepoAndIndexConsistency(AclTracker.java:347)
at org.alfresco.solr.tracker.AclTracker.trackRepository(AclTracker.java:313)
at org.alfresco.solr.tracker.AclTracker.doTrack(AclTracker.java:104)
at org.alfresco.solr.tracker.AbstractTracker.track(AbstractTracker.java:153)
at org.alfresco.solr.tracker.TrackerJob.execute(TrackerJob.java:47)
at org.quartz.core.JobRunShell.run(JobRunShell.java:216)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:563)
2015-10-13 21:11:15,012 ERROR [org.alfresco.solr.tracker.AbstractTracker] Tracking failed
org.alfresco.error.AlfrescoRuntimeException: 09132882 GetModelsDiff return status is 403
at org.alfresco.solr.client.SOLRAPIClient.getModelsDiff(SOLRAPIClient.java:1091)
at org.alfresco.solr.tracker.ModelTracker.trackModelsImpl(ModelTracker.java:249)
at org.alfresco.solr.tracker.ModelTracker.trackModels(ModelTracker.java:207)
at org.alfresco.solr.tracker.ModelTracker.doTrack(ModelTracker.java:167)
at org.alfresco.solr.tracker.AbstractTracker.track(AbstractTracker.java:153)
at org.alfresco.solr.tracker.TrackerJob.execute(TrackerJob.java:47)
at org.quartz.core.JobRunShell.run(JobRunShell.java:216)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:563)
Could anybody tell me the reason for this issue?