Trying to deserialize a session but no signature validation key specified - anypoint-studio

I am facing the following issue:
org.mule.session.SerializeAndEncodeSessionHandler - Trying to deserialize a session but no signature validation key specified
which results in the session variables not being deserialized, so I can't access them.
The issue occurs when I run the project on Mule Kernel 3.9.0.
It works fine on the 3.9.0 runtime in Anypoint Studio.

You need to specify a secret key via the Java system property mule.session.sign.secretKey to sign the session variables created by the collection splitter. This is a consequence of this security patch: https://help.mulesoft.com/s/article/Runtime-Security-Patch-31-October-2019
For example:
-Dmule.session.sign.secretKey=REPLACE_BY_SECRET_VALUE
See the 'Patch Configuration' section of the above article for details.
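For a standalone Mule Kernel installation, the property can also be set in conf/wrapper.conf so it survives restarts (a sketch — the index 4 below is an assumption; use the next free wrapper.java.additional.<n> slot in your file):

```
# Hypothetical slot number; pick the next unused wrapper.java.additional.<n>
wrapper.java.additional.4=-Dmule.session.sign.secretKey=REPLACE_BY_SECRET_VALUE
```

When running in Anypoint Studio, the equivalent is adding the -D argument to the run configuration's VM arguments, which is presumably why it already works there.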

Related

JSF MAC did not verify! error on clustered environment [duplicate]

I have a JSF application that uses Mojarra 2.2.9
and is deployed on WebSphere 8.5.5.4 in a clustered environment,
and javax.faces.STATE_SAVING_METHOD is set to client.
Even though all my application beans are request scoped, sometimes when the user session is still valid and the user performs a POST request on a page, he gets a ViewExpiredException. What may be causing this issue and how can I solve it?
Will changing javax.faces.STATE_SAVING_METHOD to server solve it? If so, what is the impact of doing this on memory?
Also, does this have anything to do with the clustered environment? Is there perhaps some missing configuration on WebSphere that would solve the issue?
This will happen if the client side state is encrypted by one server and decrypted by another server, and the servers don't use the same AES key. Normally, you should also have seen the warning below in the server log:
ERROR: MAC did not verify
You need to ensure that you have set jsf/ClientSideSecretKey in web.xml with a fixed AES key, otherwise each server will (re)generate its own AES key during startup/restart (which is used during encrypting view state).
<env-entry>
<env-entry-name>jsf/ClientSideSecretKey</env-entry-name>
<env-entry-type>java.lang.String</env-entry-type>
<env-entry-value>[AES key in Base64 format]</env-entry-value>
</env-entry>
You can use this snippet to generate a random AES-256 (32-byte) key in Base64 format.
import java.util.Base64;
import javax.crypto.KeyGenerator;

KeyGenerator keyGen = KeyGenerator.getInstance("AES");
keyGen.init(256); // Use 128 for a 16-byte AES-128 key.
String key = Base64.getEncoder().encodeToString(keyGen.generateKey().getEncoded());
System.out.println(key); // Prints the AES key in Base64 format.
In case you get a "Java Security: Illegal key size or default parameters?" error, install the cryptography extension as instructed in the link, or else generate a random AES-128 (16-byte) key instead.
After generating the key, make absolutely sure you don't publish/open-source it.
Further, you also need to ensure you have added the <distributable /> tag to web.xml so JSF will perform more aggressive session dirtying and the HTTP sessions (including the view scoped beans themselves!) are properly synced across servers.
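Concretely, that's a single empty element directly under the web-app root of web.xml (a minimal sketch; your web-app version and schema declarations may differ):

```xml
<web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="3.1">
    <!-- Marks the application as safe to run on multiple nodes,
         enabling cross-server session replication. -->
    <distributable />
</web-app>
```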
Another probable cause of ViewExpiredException with client side state saving is that you've set the Mojarra-specific context param com.sun.faces.clientStateTimeout in web.xml which represents the time in seconds before an incoming client side state is considered expired. This is however unlikely the case here as that context param has a rather self-explaining name which you would have spotted by just glancing over web.xml.
See also:
com.sun.faces.ClientStateSavingPassword - recommendations for actual password?
javax.faces.application.ViewExpiredException: View could not be restored
You must have the <distributable /> tag in your web.xml, as mentioned by BalusC.

Not a valid value of the atomic type 'xs:ID'

Trouble getting FusionAuth as an IdP to pass samltest.id.
FusionAuth installed on test.example.com upstream of NGINX with SSL, all on Ubuntu 18.04.
Create application in FusionAuth
Name: SamlTest
Id: 1214aabe-5697-44bd-a271-511d43b63913
In SAML tab set [1]
Issuer: https://samltest.id/saml/sp
ACS: https://samltest.id/Shibboleth.sso/SAML2/POST
View application, under SAML v2 Integration details
Metadata URL: https://test.example.com/samlv2/metadata/63326230-3433-3661-3939-626632386436
Provide the Metadata URL to samltest.id [2] and get the following errors:
moment.metadata:1: element EntityDescriptor: Schemas validity error : Element '{urn:oasis:names:tc:SAML:2.0:metadata}EntityDescriptor', attribute 'ID': '64643134-3530-3365-6433-393236336261' is not a valid value of the atomic type 'xs:ID'.
moment.metadata:1: element IDPSSODescriptor: Schemas validity error : Element '{urn:oasis:names:tc:SAML:2.0:metadata}IDPSSODescriptor': The attribute 'protocolSupportEnumeration' is required but missing.
moment.metadata fails to validate
Is it possible that the ID needs to start with something other than a number [3]?
EDIT 1 - start
Fairly certain that the issue identified in [3] is what's triggering the first error. I manually modified the XML file, prepended the ID with an _ (underscore), and submitted it to a local Shibboleth SP install; that got rid of the 'xs:ID' error.
I don't think we can resolve the 'protocolSupportEnumeration' missing error.
EDIT 1 - end
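For illustration, the manual fix described above amounts to making the ID attribute start with a letter or underscore, as xs:ID (an XML NCName) requires — it may not begin with a digit. The values below are shortened examples, not the real ones:

```xml
<!-- Invalid: xs:ID may not start with a digit. -->
<EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata"
                  ID="64643134-3530-3365" entityID="https://test.example.com">
</EntityDescriptor>

<!-- Valid: a leading underscore makes it a legal NCName. -->
<EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata"
                  ID="_64643134-3530-3365" entityID="https://test.example.com">
</EntityDescriptor>
```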
Any help would be appreciated.
[1] https://samltest.id/download/#SAMLtest_Metadata
[2] https://samltest.id/upload.php
[3] https://docs.secureauth.com/pages/viewpage.action?pageId=6226279
The issue was fixed by the developer with a patch to fusionauth-samlv2-X.Y.Z.jar.
See the discussion here: GitHub

How to use kafka-avro-console-producer ?

If I use 'kafka-console-producer', it automatically picks up the JAAS file and runs normally (it can produce to a remote topic).
If I use 'kafka-avro-console-producer' with the exact same configuration but with an added schema property, it complains about the JAAS configuration:
'Could not find a 'KafkaClient' entry in the JAAS configuration. System property 'java.security.auth.login.config' is not set'
How can I make it work?
Please look into these blogs; they explain how to update your security configuration and how to pass the required properties to the producer.
Authentication using SASL
secure-kafka-java-producer-with-kerberos
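One plausible cause (an assumption, since launcher scripts vary by distribution): kafka-console-producer reads -Djava.security.auth.login.config from the KAFKA_OPTS environment variable, while kafka-avro-console-producer is launched through the Schema Registry's run script, which may honor SCHEMA_REGISTRY_OPTS instead. A sketch with hypothetical paths and credentials:

```shell
# Hypothetical JAAS file providing the KafkaClient entry the error complains about;
# adjust the login module and credentials to your environment.
cat /etc/kafka/kafka_client_jaas.conf
# KafkaClient {
#   com.sun.security.auth.module.Krb5LoginModule required
#   useKeyTab=true
#   keyTab="/etc/security/keytabs/client.keytab"
#   principal="client@EXAMPLE.COM";
# };

# Export the JAAS location through both variables so whichever
# launcher script runs picks it up.
export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/kafka_client_jaas.conf"
export SCHEMA_REGISTRY_OPTS="-Djava.security.auth.login.config=/etc/kafka/kafka_client_jaas.conf"

kafka-avro-console-producer --broker-list broker:9093 --topic test \
  --property schema.registry.url=http://schemaregistry:8081 \
  --property value.schema='{"type":"record","name":"r","fields":[{"name":"f","type":"string"}]}'
```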

SCOPES_WARNING in BigQuery when accessed from a Cloud Compute instance

Every time I use bq on a Cloud Compute instance, I get this:
/usr/local/share/google/google-cloud-sdk/platform/bq/third_party/oauth2client/contrib/gce.py:73: UserWarning: You have requested explicit scopes to be used with a GCE service account.
Using this argument will have no effect on the actual scopes for tokens
requested. These scopes are set at VM instance creation time and
can't be overridden in the request.
warnings.warn(_SCOPES_WARNING)
This is a default f1-micro instance with Debian 8. I gave this instance access to all Cloud APIs, and its service account is also an owner of the project. I ran gcloud init, but the warning persists.
Is there something wrong?
I noticed that this warning did not appear on an older instance running SDK version 0.9.85; however, I now get it when creating a new instance or upgrading to the latest Google Cloud SDK.
The scopes warning can be safely ignored, as it's just telling you that the only scopes that will be used are the ones specified at instance creation time, which is the expected behavior of the default GCE service account.
It seems the 'bq' tool doesn't distinguish between the default service account on GCE and a regular service account and always tries to set the scopes explicitly. The warning comes from oauth2client, and it looks like it didn't display this warning in versions prior to v2.0.0.
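If you want to confirm which scopes were actually baked in at instance creation time, you can ask the GCE metadata server from inside the VM (standard metadata path; the output list varies per instance):

```shell
# Lists the OAuth scopes granted to the default service account
# when the VM was created — the only ones tokens will ever carry.
curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/scopes"
```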
I've created a public issue to track this, which you can star to get updates:
https://code.google.com/p/google-bigquery/issues/detail?id=557

Does MsBuild SonarQube Runner support encryption?

I see on the page Settings Encryption a section about the MSBuild.SonarQube.Runner configuration file.
But the documented portion has a format which doesn't match the example SonarQube.Analysis.xml file.
However, I have tried putting these properties in the SonarQube.Analysis.xml file, in a format corresponding to the current one. Long story short, msbuild.sonarqube.runner puts the {aes}-encrypted password in the Basic authorization field of the HttpRequest sent to SonarQube.
I guess that the client should decrypt the password before putting it in the Authorization header. Otherwise, the SonarQube server won't allow the user to query the properties URI (something like /api/properties?resource=projectKey).
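For reference, the property format the scanner reads from SonarQube.Analysis.xml looks along these lines (a sketch based on the sample file shipped with the scanner; verify the namespace and property names against your copy, and the encrypted value shown is a placeholder):

```xml
<SonarQubeAnalysisProperties xmlns="http://www.sonarsource.com/msbuild/integration/2015/1">
  <Property Name="sonar.login">myuser</Property>
  <!-- This is the value I tried to supply in encrypted form. -->
  <Property Name="sonar.password">{aes}ENCRYPTED_VALUE</Property>
</SonarQubeAnalysisProperties>
```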
You're right, I've fixed the documentation to reflect the effective format of the SonarQube Scanner for MSBuild.
Regarding the encryption of the sonar.password property, this is currently not supported by the MSBuild Scanner: it does not (yet?) know about the encryption logic, and therefore sees only the raw encrypted value. However, some other properties can be encrypted: the ones that are read during the end step of the SonarQube Scanner for MSBuild, which under the hood launches the sonar-runner.
I've created the following ticket to keep track of this limitation: https://jira.sonarsource.com/browse/SONARMSBRU-192