What is the difference between credential-store and secret-key-credential-store?

The following table lists which credential types each credential store implementation supports:

Credential Type     | KeyStoreCredentialStore | PropertiesCredentialStore
PasswordCredential  | Supported               | Unsupported
KeyPairCredential   | Supported               | Unsupported
SecretKeyCredential | Supported               | Supported
I still do not quite understand the difference between KeyStoreCredentialStore (credential-store) and PropertiesCredentialStore (secret-key-credential-store) in the WildFly Elytron subsystem. If KeyStoreCredentialStore supports SecretKeyCredential, why would one need the PropertiesCredentialStore type?

The official documentation describes the differences between the credential store implementations in detail. However, for someone new to this topic, it can be confusing. Hence, I thought I would briefly describe the differences and practical benefits based on my experience:
KeyStoreCredentialStore (i.e. credential-store) and PropertiesCredentialStore (i.e. secret-key-credential-store) are the two default credential store implementations that WildFly Elytron contains.
The KeyStoreCredentialStore implementation is backed by a Java KeyStore and is protected using the mechanisms provided by the KeyStore implementations. As the table above shows, it supports the credential types PasswordCredential, KeyPairCredential, and SecretKeyCredential.
PropertiesCredentialStore is another implementation, dedicated to storing SecretKeyCredential instances in a properties file; its primary purpose is to provide an initial key to a server environment. It does not offer any protection for the credentials it stores, although access to it can still be restricted at the filesystem level to just the application server process.
In my case, I needed a SecretKeyCredential to encrypt expressions (i.e. passwords in clear text) in the server configuration file, and I added my SecretKey to a KeyStoreCredentialStore protected by a password rather than using a PropertiesCredentialStore.


HttpRequest Host Vulnerabilities

I would like your advice regarding any security vulnerabilities that could arise from extracting the domain name from the Host property of an HttpRequest.
I have developed a PWA using ASP.NET Core that is multi-tenant and I extract the domain (i.e. The tenant) name from the host (HttpRequest.Host) which I use to look up information in a database.
For example, if I had a URL like www.JoeBloggs.com the extracted domain would be 'JoeBloggs'. Using this I then retrieve the information I require for that tenant.
Information is always sent over an HTTPS connection.
Can the Host value be faked or potentially used in a SQL Injection attack if I am using the domain name as part of a database lookup? 
Thanks in advance.
Historically there have been a slew of HTTP Host header attacks in which target webservers implicitly trust the Host header value with no/improper whitelist checking or sanitization. In short, it is possible to fake this value in certain contexts/configurations.
For concerns regarding SQL injection specifically, you should already be using prepared statements and parameterized queries to mitigate such risks; if you aren't, you should absolutely be working to refactor your SQL interaction code to do so. Even if the Host value sent by a malicious client is intended to exploit a SQL injection vulnerability somewhere downstream from the HTTP server, such a value shouldn't be able to trigger any unintended functionality or data exposure, because parameters passed to queries via mechanisms like prepared statements/parameterized queries aren't interpretable as SQL statements.
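To make the parameterized-query point concrete, here is a minimal sketch in Go (chosen just for illustration; the same principle applies in ASP.NET Core via SqlParameter or an ORM). The extractTenant helper, the tenants table, and the column names are all hypothetical:

```go
package main

import (
	"database/sql"
	"fmt"
	"strings"
)

// extractTenant pulls the tenant name out of a host like "www.JoeBloggs.com".
// Hypothetical helper: it ignores ports and assumes a "www.<tenant>.<tld>" shape.
func extractTenant(host string) string {
	parts := strings.Split(host, ".")
	if len(parts) < 2 {
		return ""
	}
	return parts[len(parts)-2]
}

// lookupTenant shows the shape of a parameterized query: the tenant name is
// bound as a query parameter, never concatenated into the SQL string, so a
// hostile Host header cannot change the statement's structure.
func lookupTenant(db *sql.DB, host string) (id int64, err error) {
	err = db.QueryRow(
		"SELECT id FROM tenants WHERE name = ?", // placeholder, not string concatenation
		extractTenant(host),
	).Scan(&id)
	return id, err
}

func main() {
	fmt.Println(extractTenant("www.JoeBloggs.com"))
}
```

Note that the extracted value is still untrusted input: parameterization protects the SQL layer, but anything else that consumes the value (logging, redirects, cache keys) should treat it with the same suspicion.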
Relatedly, if you're using the value of the Host header to determine whether a client should receive any sort of "privileged" information in the response from your server - don't. A Host header value is not a stand-in for a proper authentication/authorization flow and should absolutely never be used as one, considering it can be manipulated rather trivially. You can certainly use it in conjunction with other, more secure methods of authentication/authorization, but using it by itself is a big security no-no.
This advice does not preclude there being a separate exploitable flaw/bug in your database, database driver, or anywhere else in your stack that examines the contents of the Host header.

Should the schema be altered if the server does not handle all attributes?

If our SCIM server only handles a small subset of the attributes in the core User schema and ignores most other attributes:
Should the server return a reduced schema that reflects what is supported on the schemas endpoint?
Or should it return the full default core schema definition?
And if the schema is altered to reflect what the server actually supports, should it still be named urn:ietf:params:scim:schemas:core:2.0:User, or does it need to get a different name?
Should the server return a reduced schema that reflects what is supported on the schemas endpoint?
Yes.
Or should it return the full default core schema definition?
No.
Service providers are free to omit attributes and change attribute characteristics, provided doing so does not violate any other requirements outlined in the RFC and does not redefine the attributes. The purpose of the discovery endpoints, including "/Schemas", is to give service providers the ability to specify their schema definitions.
And if the schema is altered to reflect what the server actually supports, should it still be named urn:ietf:params:scim:schemas:core:2.0:User, or does it need to get a different name?
Provided you meet the above criteria, the schema should continue to be named urn:ietf:params:scim:schemas:core:2.0:User. But, you should use custom resources and/or extensions for new attributes/resources not defined in the RFC.
I agree that the RFC could perhaps be more clear about this, but there are some hints throughout, such as the following from Section 2:
SCIM's support of schema is attribute based, where each attribute may have different type, mutability, cardinality, or returnability. Validation of documents and messages is always performed by an intended receiver, as specified by the SCIM specifications. Validation is performed by the receiver in the context of a SCIM protocol request (see [RFC7644]). For example, a SCIM service provider, upon receiving a request to replace an existing resource with a replacement JSON object, evaluates each asserted attribute based on its characteristics as defined in the relevant schema (e.g., mutability) and decides which attributes may be replaced or ignored.
Additional references:
https://www.ietf.org/mail-archive/web/scim/current/msg02851.html
SCIM (System for Cross-domain Identity Management) core supported attributes

Multi-tenancy in Golang

I'm currently writing a service in Go where I need to deal with multiple tenants. I have settled on using the one-database, shared-tables approach with a 'tenant_id' discriminator for tenant separation.
The service is structured like this:
gRPC server      -> gRPC Handlers ---\
                                      \_ Managers (SQL)
                                      /
HTTP/JSON server -> Handlers --------/
Two servers, one gRPC (administration) and one HTTP/JSON (public API), each running in their own goroutine and with their own respective handlers that can make use of the functionality of the different managers. The managers (let's call one 'inventory-manager') all live in different root-level packages. These are, as far as I understand it, my domain entities.
In this regard I have some questions:
I cannot find any ORM for Go that supports multiple tenants out there. Is writing my own on top of perhaps the sqlx package a valid option?
Other services in the future will require multi-tenant support too, so I guess I would have to create some library/package anyway.
Today, I resolve the tenants by using a ResolveTenantBySubdomain middleware for the public API server. I then place the resolved tenant id in a context value that is sent with the call to the manager. Inside the different methods in the manager, I get the tenant id from the context value. This is then used with every SQL query/exec call, or an error is returned if the tenant id is missing or invalid. Should I even use context for this purpose?
Resolving the tenant on the gRPC server, I believe I have to use the UnaryInterceptor function for middleware handling. Since the gRPC API interface will only be accessed by other backend services, I guess resolving by subdomain is unnecessary here. But how should I embed the tenant id? In the header?
Really hope I'm asking the right questions.
Regards, Karl.
I cannot find any ORM for Go that supports multiple tenants out there. Is writing my own on top of perhaps the sqlx package a valid option?
ORMs in Go are a controversial topic! Some Go users love them, others hate them and prefer to write SQL manually. This is a matter of personal preference. Asking for specific library recommendations is off-topic here, and in any event, I don't know of any multi-tenant ORM libraries – but there's nothing to prevent you using a wrapper of sqlx (I work daily on a system which does exactly this).
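As a rough illustration of what such a wrapper might look like – this is a hypothetical sketch, not sqlx's API; the TenantDB type and Scope method are invented names:

```go
package main

import "fmt"

// TenantDB sketches a thin wrapper you could build over sqlx (or plain
// database/sql): it carries the tenant ID and scopes every query to it,
// so individual managers cannot forget the filter.
type TenantDB struct {
	tenantID string
	// db *sqlx.DB would live here in a real implementation
}

// Scope appends the tenant predicate to a base query; the tenant ID is
// appended last in the argument slice to match its placeholder position.
func (t *TenantDB) Scope(base string, args ...interface{}) (string, []interface{}) {
	return base + " AND tenant_id = ?", append(args, t.tenantID)
}

func main() {
	tdb := &TenantDB{tenantID: "acme"}
	q, args := tdb.Scope("SELECT * FROM inventory WHERE sku = ?", "widget-1")
	fmt.Println(q)
	fmt.Println(args)
}
```

A real version would also wrap Exec/Query methods so callers never see the raw handle, which is the property that makes the tenant filter hard to bypass.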
Other services in the future will require multi-tenant support too, so I guess I would have to create some library/package anyway.
It would make sense to abstract this behavior from those internal services in a way which suits your programming and interface schemas, but there's no further details here to answer more concretely.
Today, I resolve the tenants by using a ResolveTenantBySubdomain middleware for the public API server. I then place the resolved tenant id in a context value that is sent with the call to the manager. Inside the different methods in the manager, I get the tenant id from the context value. This is then used with every SQL query/exec call, or an error is returned if the tenant id is missing or invalid. Should I even use context for this purpose?
context.Context is mostly about cancellation, not request propagation. While your use is acceptable according to the documentation for the WithValue function, it's widely considered a bad code smell to use the context package as currently implemented to pass values. Rather than use implicit behavior, which lacks type safety and many other properties, why not be explicit in the function signature of your downstream data layers by passing the tenant ID to the relevant function calls?
Resolving the tenant on the gRPC server, I believe I have to use the UnaryInterceptor function for middleware handling. Since the gRPC API interface will only be accessed by other backend services, I guess resolving by subdomain is unnecessary here. But how should I embed the tenant id? In the header?
The gRPC library is not opinionated about your design choice. You can use a header value (to pass the tenant ID as an "ambient" parameter to the request) or explicitly add a tenant ID parameter to each remote method invocation which requires it.
Note that passing a tenant ID between your services in this way creates external trust between them – if service A makes a request of service B and annotates it with a tenant ID, you assume service A has performed the necessary access control checks to verify a user of that tenant is indeed making the request. There is nothing in this simple model to prevent a rogue service C asking service B for information about some arbitrary tenant ID. An alternative implementation would implement a more complex trust-nobody policy whereby each service is provided with sufficient access control information to make its own policy decision as to whether a particular request scoped to a particular tenant should be fulfilled.

Apache Ignite defining pluggable hashing algorithm

On the page https://ignite.apache.org/features/datagrid.html I have found the following information:
"Unlike other key-value stores, Ignite determines data locality using a pluggable hashing algorithm. Every client can determine which node a key belongs to by plugging it into a hashing function, without a need for any special mapping servers or name nodes. "
How can I define my own hashing algorithm?
In order to do this, you can implement the AffinityFunction interface and provide the implementation via the CacheConfiguration#affinity configuration property.

Authentication Providers, design pattern to adopt

I have a Windows Phone 8.1 client. This client connects to a Web API (ASP.NET) and fetches the supported authentication providers. At the moment it's Google and Twitter. The user (WP 8.1) can select which provider he wants to use for authentication.
Based on the provider selected on the phone, the underlying authentication flow differs; in other words, Google has one flow and Twitter has another. Because of this I have switch statements in my client that look like the following:
switch (authProvider)
{
    case "Google":
        GoogleAuthProvider.PerformAuthentication();
        break;
    case "Twitter":
        TwitterAuthProvider.PerformAuthentication();
        break;
}
My main problem with this is that I am now hard-coding the provider. The rest of my phone app uses IoC (MVVMLight), and in this case I am hard-coding. How do I get rid of this without explicitly referring to the container? Plus, let's say at a later point in time an additional auth provider is supported; based on the current implementation I would need to modify the client code as well. How do I minimize this?
From the example you provided, the State GoF pattern will be up to the task, assuming that the authentication interface is uniform (consists of one method – PerformAuthentication – and potentially other methods that are common across all the other possible providers). So you have to create the interface IAuthenticationProvider and inject its implementation into the logic that actually gets executed (the logic that previously contained the switch).
In fact, it is very similar to a Strategy that is injected via DI, but Strategy just encapsulates the algorithm, whereas State is more powerful and suitable for the domain of authentication (it might have …state and other methods/properties – not that bad, right? :)
If you face providers with different functional capabilities and interfaces, you might want to choose the Bridge pattern, which unites the heterogeneous authentication interfaces under the umbrella of a single interface. But it seems to me that using Bridge here would be overengineering.
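To illustrate the switch-free shape, here is a sketch in Go for brevity (in C# the equivalent is an IAuthenticationProvider interface whose implementations are registered with your IoC container; all names below are invented):

```go
package main

import "fmt"

// AuthProvider is the uniform interface the answer describes; each concrete
// provider (Google, Twitter, ...) implements it.
type AuthProvider interface {
	PerformAuthentication() string
}

type googleProvider struct{}

func (googleProvider) PerformAuthentication() string { return "google-token" }

type twitterProvider struct{}

func (twitterProvider) PerformAuthentication() string { return "twitter-token" }

// registry replaces the switch: supporting a new provider means registering
// one entry, with no change to the calling code.
var registry = map[string]AuthProvider{
	"Google":  googleProvider{},
	"Twitter": twitterProvider{},
}

func authenticate(name string) (string, error) {
	p, ok := registry[name]
	if !ok {
		return "", fmt.Errorf("unknown provider %q", name)
	}
	return p.PerformAuthentication(), nil
}

func main() {
	tok, err := authenticate("Google")
	fmt.Println(tok, err)
}
```

The client code only ever sees the interface, so a newly supported provider from the Web API just needs a registration (in an IoC container, a named/keyed binding) rather than an edit to every call site.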