Knox does not work after Hive service restart - hive

I use SQL Developer and some third-party JAR files to access Hive.
Whenever there is a Hive service restart, my connection object won't let me connect to Hive afterwards. My admin team needs to restart the metastore too, then make a few more config changes, and then I need to remove the cacerts file and add the certificates to cacerts again using Apache Knox.
Have any of you faced similar problems and managed to fix them?
Thanks
LNC

Sorry for the late response here. This sounds like an issue that has since been resolved, related to HiveServer2 using a random key for signing the cookie that is used to optimize authentication of each HTTP request for a given session. When HS2 is restarted, a new key is created, and the Knox server continues to send the previously cached cookie, which was signed with the previous random key. There should be no reason to mess with cacerts and the like; a simple, if annoying, restart of Knox should suffice. You may also turn off cookie-based authentication, but that will degrade performance.
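If you do choose to turn off cookie-based authentication, the HiveServer2 setting is, as far as I know, hive.server2.thrift.http.cookie.auth.enabled; a minimal hive-site.xml sketch (please confirm the property name for your Hive version):

<!-- hive-site.xml sketch: disable cookie-based auth for the HTTP transport (verify for your Hive version) -->
<property>
  <name>hive.server2.thrift.http.cookie.auth.enabled</name>
  <value>false</value>
</property>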

Related

Could anyone connect Cloud SQL with cloud sql proxy pod

I'm trying to set up a very basic WordPress installation as explained in this document: https://cloud.google.com/kubernetes-engine/docs/tutorials/persistent-disk
And the Cloud SQL proxy is giving me certificate errors:
esonika#cloudshell:~ (esonika)$ k logs wordpress-8d7998ccd-xnfn9 -c cloudsql-proxy
2022/12/30 10:43:38 using credential file for authentication; email=cloudsql-proxy#esonika.iam.gserviceaccount.com
2022/12/30 10:43:38 Listening on 127.0.0.1:3306 for esonika:europe-west9:mysql-wordpress-instance
2022/12/30 10:43:38 Ready for new connections
2022/12/30 10:44:01 New connection for "esonika:europe-west9:mysql-wordpress-instance"
2022/12/30 10:44:02 couldn't connect to "esonika:europe-west9:mysql-wordpress-instance": x509: certificate is valid for 38-968d77ed-a928-4b25-97d3-5451b5f3c670.europe-west9.sql.goog, not esonika:mysql-wordpress-instance
I don't know why a certificate such as "38-968d77ed-a928-4b25-97d3-5451b5f3c670.europe-west9.sql.goog" is created, or where.
I tried resetting the SSL configuration and it didn't work.
Usually, if you don't explicitly set up SSL connections on your Cloud SQL instance, the communication with the database is in plain text, except when you create a tunnel with the Cloud SQL proxy. In that case a secure connection is created and the data is encrypted; the encryption is ensured by an ephemeral certificate created automatically by the proxy.
Here is a doc that might help you connect to Cloud SQL from GKE using sidecar pods.
Thanks. The document doesn't list anything that I haven't tried. I think there is an internal issue with cloud_sql_proxy, which is why I decided to switch Cloud SQL to a private network only; the WordPress pod now connects directly to the Cloud SQL private IP.
I was running into the same issue around the time you posted this question, and I also reset the SSL configuration on the DB like you did. My solution was upgrading the proxy from version 1.11 to 1.33.2, which resolved all of the x509 errors. No clue why it suddenly stopped working.
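For anyone hitting the same error, here is a minimal sketch of the proxy sidecar container from the tutorial's Deployment, pinned to a newer proxy release. The image tag, flags and secret volume name are assumptions based on the tutorial; the instance connection name is the one from the logs above.

# Cloud SQL proxy sidecar container (sketch; adjust image tag, instance name and secret to your setup)
- name: cloudsql-proxy
  image: gcr.io/cloudsql-docker/gce-proxy:1.33.2
  command: ["/cloud_sql_proxy",
            "-instances=esonika:europe-west9:mysql-wordpress-instance=tcp:3306",
            "-credential_file=/secrets/cloudsql/credentials.json"]
  volumeMounts:
    - name: cloudsql-instance-credentials
      mountPath: /secrets/cloudsql
      readOnly: true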

Service Fabric certificate swap. Apps failing to activate

We have 5 Service Fabric nodes running 2 applications in 4 environments in Azure. Our Network team wanted to switch out our cluster certificate so they generated a new one (I believe in the key vault) and swapped it to primary. We updated our project to use the new cert's thumbprint. We successfully authenticated to the cluster and deployed (via Visual Studio) using the new cert, but now the application will not activate. The error we are seeing is:
System.Hosting' reported Error for property 'Activation:1.0:1.0:131965678558388988'.
There was an error during activation.There is already a certificate with thumbprint 123oldCertNumber bound to port 200appPort. New certificate thumbprint specified: 321newCertNumber
Additionally, we tried deleting the old cert, which is now in the secondary slot, but it just processes for hours saying "Cluster is updating user certificate." and eventually fails to delete the cert.
Any help would be greatly appreciated!
Here's what worked for me:
I deleted all applications, but did not unprovision them. I then reset all nodes to clear them out. Then I deleted the old cert, and this time it worked. I redeployed and voilà, it activated with no problem. Well, almost: I have one node that is stuck with the same error message. I've tried deleting its data and resetting it, but haven't been able to clear it yet.
If both of your applications were using the old certificate, then you may have encountered the problem described in the documentation under "Upgrading multiple applications with HTTPS endpoints".
When the first application goes to update it will fail to configure the HTTPS port with the new certificate, since the second application is still running and has already configured the HTTPS port with the old certificate. The only path forward is to remove both applications that are sharing the port and then upgrade.
For this reason you may want to consider approaches to prevent this problem in the future. You could:
Combine services from the two applications into a single application, or
Run each application on a different port.
I just finished upgrading my applications to use a new cert and here is what you must do.
I have 3 applications using the one cert.
Instructions
In the Application Manifest, I removed the binding from 2 of my 3 applications. It is important that you do not remove the binding from all of your apps at once.
I then redeployed the 2 apps with the binding removed.
I updated the 3rd application with the new cert and redeployed.
I then added back the binding in the other 2 applications, updated them to use the new cert, and redeployed.
That was all that was needed.
Here is a link to the solution: Renew endpoint certificate
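For reference, the binding referred to in the steps above is the EndpointBindingPolicy / EndpointCertificate pair in ApplicationManifest.xml, roughly like this (a sketch only, with hypothetical service, endpoint and certificate names, and the thumbprint placeholder taken from the question):

<!-- ApplicationManifest.xml sketch; names are hypothetical and the thumbprint is a placeholder -->
<ServiceManifestImport>
  <ServiceManifestRef ServiceManifestName="WebServicePkg" ServiceManifestVersion="1.0.0" />
  <Policies>
    <!-- Binds the certificate declared below to the service's HTTPS endpoint -->
    <EndpointBindingPolicy EndpointRef="ServiceEndpointHttps" CertificateRef="MySslCert" />
  </Policies>
</ServiceManifestImport>
<Certificates>
  <!-- Point X509FindValue at the new certificate's thumbprint when re-adding the binding -->
  <EndpointCertificate X509FindValue="321newCertNumber" Name="MySslCert" />
</Certificates>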

How to update Let's Encrypt SSL after changing domain A record?

I am new to cPanel and using CloudLinux 7.4.
When adding a new site to cPanel, a Let's Encrypt SSL certificate is created in the background, which is great. However, I have an issue when creating a site whose A records are not pointed to the server at the time the site is created (for instance, I am setting up a site that is currently on another server and will only point to the Linux server once ready).
The SSL certificate is created but marked as self-signed, which is logical since the IP can't be verified. How can I force the SSL certificate to update after I have pointed the A records to the Linux server?
I am working on a large site that currently uses SSL, and I would like to avoid as much downtime as possible when transferring over.
(Posted on behalf of the question author).
After much searching I finally found out how to sort out the above. Firstly, I was wrong about the type of SSL certificate: it is not Let's Encrypt but Comodo, though I don't think this makes any difference.
On my server AutoSSL is set to run at 3 am, but if the change is urgent: go to WHM >> Manage AutoSSL. Under the Manage Users tab there is a blue button to the left of the user labelled Check "username". Clicking it sends the user back into the queue; in total it took a couple of minutes to update, and the SSL certificate is now fine.
I was originally looking at the cPanel site login rather than WHM, and there is little info on the subject on the web.
I recommend enabling auto-renewal so the SSL certificate is renewed when it expires; this will help you keep the SSL certificate active for your website.

Cannot get Azure WCF service to work with Client Certificates

I have a WCF service that I want to secure with Client Certificates but I cannot get it to work on Azure.
I removed Azure from the equation by hosting the service on a standard Windows Server on Amazon. I installed both the service and client certificates (none are self-signed) into the Local Machine 'Personal' store on this server, including the chained certificates, and it all worked as expected when called from my local PC with the client cert set against the binding/behavior. It did not work without the certificate being specified, so it definitely worked correctly this way.
I then deployed the service to Azure. The client and server certificates are uploaded to the portal and set in the config against "Local Machine/My", and the CA and root certificates are uploaded; I have tried them in various stores including "My", "Trusted" and "CA". With every variation I try, I continue to get "The HTTP request was forbidden with client authentication scheme 'Anonymous'", called from exactly the same program locally, with the only change being the client endpoint address.
As another detail, I can get it to work without certificates, so there is no problem with the web service itself, but I am unsure how to work out what is actually happening in the certificate handshake between client and service.
I have finally got it to work, and have written a guide here: Blogspot.co.uk
I'm not sure what I had got wrong before since I have not done anything too weird to make it work. I think perhaps I had a small defect somewhere in configuration that I eventually fixed by starting again. Anyway, it DOES work and provides some useful security on Azure.
See my answer to this SO post. Bottom line: put the cert in LocalMachine/My and run with elevated privileges; in the csdef file add:
<Runtime executionContext="elevated" />
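For context, a rough sketch of where that sits in a classic cloud service definition, assuming a web role and a certificate name I've made up for illustration (the actual thumbprint is mapped to that name in the .cscfg):

<!-- ServiceDefinition.csdef sketch; the role and certificate names are hypothetical -->
<WebRole name="WcfWebRole" vmsize="Small">
  <Runtime executionContext="elevated" />
  <Certificates>
    <!-- Installs the uploaded certificate into LocalMachine\My on the role instance -->
    <Certificate name="ClientCert" storeLocation="LocalMachine" storeName="My" />
  </Certificates>
</WebRole>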

How can I work with Novell eDirectory services in J2SE?

How can I work with Novell eDirectory services in J2SE? Will JNDI work with eDirectory? What are some resources I can use to learn about whatever library or libraries you suggest?
I just want to play around with retrieving information via LDAP for right now, and if I get things working the way I want, I will probably need to be able to modify objects later on.
Thanks!
JNDI should work with eDirectory.
Try http://developer.novell.com/wiki/index.php/Jldap and http://developer.novell.com/wiki/index.php/Novell_LDAP_Extended_Library
I used it successfully with OpenLDAP and it should suffice for eDirectory as well.
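As a starting point, here is a minimal JNDI search from plain J2SE; the host, base DN and credentials are placeholders, not anything from the question:

// Minimal JNDI LDAP search against eDirectory (sketch; host, base DN and credentials are placeholders)
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

public class EdirSearch {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://edir.example.com:389");
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, "cn=admin,o=acme");
        env.put(Context.SECURITY_CREDENTIALS, "secret");

        DirContext ctx = new InitialDirContext(env);
        SearchControls controls = new SearchControls();
        controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
        NamingEnumeration<SearchResult> results =
                ctx.search("o=acme", "(objectClass=inetOrgPerson)", controls);
        while (results.hasMore()) {
            // Print the full DN of each matching entry
            System.out.println(results.next().getNameInNamespace());
        }
        ctx.close();
    }
}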
Any LDAP interface you want to use should work fine against eDirectory.
Be aware that the configuration of the LDAP server may not allow clear-text passwords, in which case you must bind to port 636 via SSL (where you have the certificate imported into the keystore already) or via TLS (retrieving the tree CA's public key on the fly).
If you have administrative access to the eDirectory server, you can easily change that, but it is still best to confirm that you can get it to work over SSL/TLS (aka LDAPS).
If you really need it, you can ask the admins for a server with only a replica of some test partition (and thus no real user data in its view) and test via cleartext against that.
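If clear-text binds are disabled, the same JNDI code sketched above can connect over LDAPS by changing only the environment, assuming you have imported the tree CA's certificate into a truststore (the path and host below are placeholders):

// Switch the JNDI environment to LDAPS (sketch; truststore path and host are placeholders)
System.setProperty("javax.net.ssl.trustStore", "/path/to/truststore.jks");
env.put(Context.PROVIDER_URL, "ldaps://edir.example.com:636");
env.put(Context.SECURITY_PROTOCOL, "ssl");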
It is very easy in eDirectory to add a new replica of a partition, or to carve off or merge a partition, and all of it can be done live.
It is similarly very easy to host replicas of many partitions on one server. (Officially there is no limit on the number of partitions in a tree or replicas on a server, but it used to be 256 in older versions, before 8.x.)
If you are allowed access to the eDirectory server, you will want to ask for access to Dstrace (there are several versions of this; see Many Faces of Dstrace). There is a web interface (server:8008 on Netware, 8010 on Windows, 8028 on Unix/Linux usually) as well as other interfaces. If you enable the LDAP trace option (and turn off all the others), you can fairly completely debug what is going on at the server side: see the errors, the communication or lack thereof, and so on.