On the page https://ignite.apache.org/features/datagrid.html I have found the following information:
"Unlike other key-value stores, Ignite determines data locality using a pluggable hashing algorithm. Every client can determine which node a key belongs to by plugging it into a hashing function, without a need for any special mapping servers or name nodes. "
How can I define my own hashing algorithm?
To do this, you can implement the AffinityFunction interface and provide the implementation via the CacheConfiguration#affinity configuration property.
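As a minimal sketch of what that could look like: rather than implementing all of AffinityFunction's methods from scratch, this example extends the built-in RendezvousAffinityFunction and overrides only how a key is hashed to a partition. The class name, cache name, and hashing scheme are placeholders for illustration:

```java
import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
import org.apache.ignite.configuration.CacheConfiguration;

// Hypothetical custom affinity: inherits the partition-to-node assignment
// logic from the default rendezvous function and overrides only the
// key-to-partition hashing.
public class MyAffinityFunction extends RendezvousAffinityFunction {
    @Override
    public int partition(Object key) {
        // Your own hashing scheme goes here; this sketch just masks the
        // sign bit and maps the hash code onto the partition count.
        return (key.hashCode() & Integer.MAX_VALUE) % partitions();
    }
}
```

It is then plugged in via the configuration property mentioned above:

```java
CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>("myCache");
cfg.setAffinity(new MyAffinityFunction());
```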
The following table lists which credential types each of the credential store implementations supports.

| Credential Type     | KeyStoreCredentialStore | PropertiesCredentialStore |
|---------------------|-------------------------|---------------------------|
| PasswordCredential  | Supported               | Unsupported               |
| KeyPairCredential   | Supported               | Unsupported               |
| SecretKeyCredential | Supported               | Supported                 |
I still do not quite understand the difference between KeyStoreCredentialStore (credential-store) and PropertiesCredentialStore (secret-key-credential-store) in the WildFly Elytron subsystem. If KeyStoreCredentialStore supports SecretKeyCredential, why does one need the PropertiesCredentialStore type?
The official documentation describes the differences between the credential store implementations in detail very well. However, for someone starting out with this topic, it can be confusing. Hence, I thought of briefly describing the differences and the practical benefits based on my experience:
KeyStoreCredentialStore (i.e. credential-store) and PropertiesCredentialStore (i.e. secret-key-credential-store) are the two default credential store implementations that WildFly Elytron contains.
The KeyStoreCredentialStore implementation is backed by a Java KeyStore and is protected using the mechanisms provided by the KeyStore implementation. As listed in the table above, it supports the credential types PasswordCredential, KeyPairCredential and SecretKeyCredential.
PropertiesCredentialStore is another implementation, dedicated to storing SecretKeyCredentials in a properties file; its primary purpose is to provide an initial key to a server environment. It does not offer any protection of the credentials it stores, but its access can still be restricted at the filesystem level to just the application server process.
In my case, for example, I needed a SecretKeyCredential to encrypt expressions (i.e. clear-text passwords) in the server configuration file, and I added my SecretKey to a password-protected KeyStoreCredentialStore rather than using a PropertiesCredentialStore.
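For illustration, here is a rough sketch of that last approach using the WildFly Elytron credential store API. The store file name, password, alias and key size below are all made up; treat this as an outline under those assumptions, not a definitive recipe:

```java
import java.security.Security;
import java.util.HashMap;
import java.util.Map;

import javax.crypto.SecretKey;

import org.wildfly.security.auth.server.IdentityCredentials;
import org.wildfly.security.credential.PasswordCredential;
import org.wildfly.security.credential.SecretKeyCredential;
import org.wildfly.security.credential.store.CredentialStore;
import org.wildfly.security.credential.store.WildFlyElytronCredentialStoreProvider;
import org.wildfly.security.encryption.SecretKeyUtil;
import org.wildfly.security.password.interfaces.ClearPassword;

public class SecretKeyInKeyStoreCredentialStore {
    public static void main(String[] args) throws Exception {
        Security.addProvider(WildFlyElytronCredentialStoreProvider.getInstance());

        // The password that protects the KeyStoreCredentialStore itself
        // ("storePassword" is a made-up value).
        CredentialStore.ProtectionParameter protection =
                new CredentialStore.CredentialSourceProtectionParameter(
                        IdentityCredentials.NONE.withCredential(new PasswordCredential(
                                ClearPassword.createRaw(ClearPassword.ALGORITHM_CLEAR,
                                        "storePassword".toCharArray()))));

        CredentialStore store = CredentialStore.getInstance("KeyStoreCredentialStore");
        Map<String, String> attributes = new HashMap<>();
        attributes.put("location", "my-store.cs"); // made-up file name
        attributes.put("create", "true");          // create the store if absent
        store.initialize(attributes, protection);

        // Generate a 256-bit secret key and store it under an alias, so it
        // can later be referenced when encrypting configuration expressions.
        SecretKey secretKey = SecretKeyUtil.generateSecretKey(256);
        store.store("my-secret-key", new SecretKeyCredential(secretKey));
        store.flush();
    }
}
```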
The page on Service Discovery using Apache Curator (https://github.com/Netflix/curator/wiki/Service-Discovery) introduces the following concepts:
The main abstraction class is ServiceProvider. It encapsulates the discovery service for a particular named service along with a provider strategy. A provider strategy is a scheme for selecting one instance from a set of instances for a given service. There are three bundled strategies: Round Robin, Random and Sticky (always selects the same one). ServiceProviders are allocated by using a ServiceProviderBuilder.
Each of the above query methods calls ZooKeeper directly. If you need more than occasional querying of services you can use the ServiceCache. It caches in memory the list of instances for a particular service. It uses a Watcher to keep the list up to date. You allocate a ServiceCache via the builder returned by ServiceDiscovery.serviceCacheBuilder().
I can see how to use the Provider strategies with a ServiceProviderBuilder, but there's no equivalent method on the ServiceCacheBuilder, and the only relevant method available on the ServiceCache class itself is getInstances(), which gets all instances.
How can I use a provider strategy with a ServiceCache?
@simonalexander2005 I was just looking in the code, and it turns out that ServiceProvider internally already uses a serviceCacheBuilder. TBH - I've either forgotten about this or it got put in by another committer - I'm not sure. Anyway, I'm very sorry about the runaround here. Also, the documentation must be updated to reflect this - I'll open an issue for it today. I'm sure this has been maddening for you; again, sorry for this. The good news, though, is that with ServiceProvider you automatically get caching.
Frankly, the docs on this are really bad. It would be fantastic if someone could give a pull request with better docs...
Notice that ServiceCache implements InstanceProvider. Also notice that ProviderStrategy.getInstance() has as its argument InstanceProvider. Therefore, you can pass a ServiceCache instance to whichever ProviderStrategy you want to use.
I hope this helps.
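To make this concrete, here is a minimal sketch under the assumptions above (the ZooKeeper connection string, base path and service name are placeholders). The first part takes the easy route the answer recommends, letting ServiceProvider do the caching; the second part passes a ServiceCache directly to a strategy, relying on the observation that ServiceCache implements InstanceProvider:

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;
import org.apache.curator.x.discovery.ServiceCache;
import org.apache.curator.x.discovery.ServiceDiscovery;
import org.apache.curator.x.discovery.ServiceDiscoveryBuilder;
import org.apache.curator.x.discovery.ServiceInstance;
import org.apache.curator.x.discovery.ServiceProvider;
import org.apache.curator.x.discovery.strategies.RoundRobinStrategy;

public class CachedDiscoveryExample {
    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "localhost:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();

        ServiceDiscovery<Void> discovery = ServiceDiscoveryBuilder.builder(Void.class)
                .client(client)
                .basePath("/services")     // placeholder base path
                .build();
        discovery.start();

        // Option 1: ServiceProvider already caches internally, so the
        // strategy is simply configured on the builder.
        ServiceProvider<Void> provider = discovery.serviceProviderBuilder()
                .serviceName("my-service") // placeholder service name
                .providerStrategy(new RoundRobinStrategy<Void>())
                .build();
        provider.start();
        ServiceInstance<Void> fromProvider = provider.getInstance();

        // Option 2: per the answer above, a ServiceCache is an
        // InstanceProvider, so it can be handed to a strategy directly.
        ServiceCache<Void> cache = discovery.serviceCacheBuilder()
                .name("my-service")
                .build();
        cache.start();
        ServiceInstance<Void> fromCache =
                new RoundRobinStrategy<Void>().getInstance(cache);

        System.out.println(fromProvider + " / " + fromCache);
    }
}
```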
Can Redis be used as a self-populating cache (or pull-through cache)?
In other words, is it able to create an entry on the fly if this entry is not cached yet ?
Redis is just a store: you add things to it and retrieve them back again. It has no awareness of what you are using it for (caching) or knowledge of the backend it would perform lookups from, that will depend on the application handling the request and using Redis to cache.
Can Redis be used as a self-populating cache (or pull-through cache)?
Yes! But Redis doesn't have an implementation for self-population.
So you just have to implement it yourself, and it is easy too:

1. Define a wrapper class that extends (an is-a relation) a Redis client of your choice.
2. Define factory interfaces to create the values.
3. Override the methods that need pull-through behavior (see the sketch below):
   3.1. If the key already exists, return the cached value.
   3.2. Otherwise, use the factory interfaces to create the value, cache it, and return it.

Hope this answer is generic enough for any Redis client.
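As one possible sketch of those steps in Java with the Jedis client (the class name and TTL are made up for illustration, and it wraps the client via composition rather than extending it, but the pull-through logic is the same):

```java
import java.util.function.Function;

import redis.clients.jedis.Jedis;

// A minimal pull-through cache: reads go to Redis first, and on a miss
// the value is built by the supplied factory, cached, then returned.
public class PullThroughCache {
    private final Jedis jedis;
    private final Function<String, String> valueFactory; // backend lookup
    private final long ttlSeconds;

    public PullThroughCache(Jedis jedis, Function<String, String> valueFactory,
                            long ttlSeconds) {
        this.jedis = jedis;
        this.valueFactory = valueFactory;
        this.ttlSeconds = ttlSeconds;
    }

    public String get(String key) {
        String cached = jedis.get(key);
        if (cached != null) {
            return cached;                      // step 3.1: cache hit
        }
        String value = valueFactory.apply(key); // step 3.2: build the value
        jedis.setex(key, ttlSeconds, value);    // cache it with a TTL
        return value;
    }
}
```

Usage could look like `new PullThroughCache(jedis, k -> loadFromBackend(k), 300).get("user:42")`, where loadFromBackend stands for whatever lookup populates the cache on a miss.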
I'm new to ExtJS. Which is the right place to apply my proxy: the store or the model? What is the difference between them, and which is the better place to gain more advantage?
Proxy can now be attached to either a Store or a Model. Proxies can be configured with Readers and Writers which decode and encode communications with your server.
What is the difference between adding it to a Model versus a Store?
Store:
We configured our Store to use an Ajax Proxy, telling it the URL to load data from and the Reader used to decode the data. In this case our server is returning JSON, so we've set up a Json Reader to read the response. On top of that, a Store supports additional features such as filtering, sorting and grouping, which we cannot do in the Model class.
Model:
A Model is just a set of fields and their data. The four principal parts of a Model are Fields, Proxies, Associations and Validations, so it is clear that apart from proxies it also supports associations and validations. Its main benefit is that we can easily load and save Model data without creating a Store.
On the whole, it depends on your requirements which one to use. Most people prefer putting the proxy on the Store rather than the Model, to take advantage of the Store's many features.
To put the question into some context, the system exposing the web service uses GUIDs internally as identifiers for all entities.
In such case, when designing a public facing data integration web service (used mainly for importing or exporting data from the system to other, external systems), what would you consider as pros and cons of using the internal identifiers in the interface of the service?
With such a solution, the export methods of the web service would return DTOs identified by GUIDs, and the import methods would accept similar DTOs - I assume the external system would be responsible for generating new GUIDs for new entities.
What are the potential issues one might face with this approach?
The technology stack for the system and web service is .NET / WCF / SOAP
First, let's look at the more generic "how do I set up a public API" question. My first exercise is determining what information is needed by the consumer of the service. I also look and see if there is company-specific naming in the object model. I then create a service model (data contract, if you want WCF specifics) that matches the view I want to show the consumer. This includes a unique key, which is more often a SKU string (a human-readable key) than a GUID/int (the actual derived primary key), as the SKU is public and the means of storing in the database is not. So, in general, I would not expose these primary key concepts, if that is what the GUID is.
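To illustrate that distinction (sketched in Java for brevity even though the question's stack is .NET/WCF; the type and field names are invented), the internal entity keeps the GUID primary key while the data contract exposed to consumers carries only the public SKU:

```java
import java.util.UUID;

// Internal entity: the GUID is the derived primary key and never leaves
// the system.
class ProductEntity {
    UUID id;      // internal surrogate key, not exposed
    String sku;   // public, human-readable key, e.g. "WDG-1001"
    String name;
}

// Service model (data contract): identified by SKU only, so the internal
// key scheme can change without breaking external consumers.
class ProductDto {
    String sku;
    String name;
}
```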
Now to the question of "do you see problems with this approach". I will focus on more general concepts so you can make a more informed decision, as there is no 100% right/wrong answer.
As long as this is machine-to-machine and the use of the GUID is something both systems are aware of, I see nothing particularly scary about this approach. If this ultimately ends up in a human-readable system where the GUID has to be interacted with, then you have an issue.
One potential issue with this approach is that it exposes your own primary key information to customer or client systems, which shouldn't have to understand this level of detail. If this is actually "semi-public", with a select list of vendors, the "risk" might be less. This is the primary issue I see.
One could argue the weight of the GUID (128 bits) versus a smaller identifier, but this is a bogus answer, IMO, as the network latency more than outweighs sending a few more bytes as a request parameter.