Spymemcached hashing algorithm for multiple membase servers

Platform: spymemcached-2.7.3.jar, 64-bit Windows 7
We have two membase servers (non-clustered environment) and we are using the spymemcached Java client for setting and getting data from memcache. We are not using any replication between the two membase servers.
We are using the following code to set data in memcache. It looks like MemcachedClient always tries to put/get data on server1 first if it is available; if server1 is down, then MemcachedClient puts/gets from server2.
Does spymemcached use a hashing algorithm to decide which server it should set/get data from? Is there any documentation available that explains how this works?
Code:
import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;
import java.util.ArrayList;
import java.util.concurrent.TimeUnit;

import net.spy.memcached.MemcachedClient;

public class Main {
    public static void main(String[] args) throws IOException, URISyntaxException {
        MemcachedClient client;
        URI server1 = new URI("http://192.168.100.111:8091/pools");
        URI server2 = new URI("http://127.0.0.1:8091/pools");
        ArrayList<URI> serverList = new ArrayList<URI>();
        serverList.add(server1);
        serverList.add(server2);
        client = new MemcachedClient(serverList, "default", "");
        client.set("spoon", 50, "Hello World!");
        client.shutdown(10, TimeUnit.SECONDS);
        System.exit(0);
    }
}

The constructor MemcachedClient(List, String, String) will connect to the first URI in the list to obtain information about the entire cluster. This means that if you had 10 servers in your cluster you could specify one IP address to connect to all of them. The reason a list of URIs is allowed is so that, if the server you are getting cluster information from goes down, you can try to get the cluster information from another server in the cluster.
The hashing algorithm used by spymemcached in this case is determined by Membase when the cluster configuration is received. If you look through the JSON that is sent to spymemcached during the configuration phase you will see the hash algorithm is CRC. Look at the DefaultHashAlgorithm class for more information on CRC.
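For a rough illustration of the idea only (this is not spymemcached's exact code, and with Membase the client actually maps keys to vbuckets from the cluster configuration rather than straight to servers), a CRC-style hash can be used to pick a server for a key like this:
import java.util.zip.CRC32;

public class CrcPickExample {
    // Hash the key with CRC32 and pick one of n servers; a simplified sketch of
    // CRC-based key distribution, not the vbucket mapping Membase really uses.
    static int pickServer(String key, int serverCount) {
        CRC32 crc = new CRC32();
        crc.update(key.getBytes());
        return (int) (crc.getValue() % serverCount);
    }

    public static void main(String[] args) {
        System.out.println("\"spoon\" maps to server index " + pickServer("spoon", 2));
    }
}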
Also, I'm curious why you're using Membase as described.

Related

Can I allow multiple http clients to consume a Flowable stream of data with resteasy-rxjava2 / quarkus?

Currently I am able to see the streaming values exposed by the code below, but only one HTTP client receives the continuous stream of values; the others do not.
The code, a modified version of the Quarkus quickstart for Kafka reactive streaming, is:
#Path("/migrations")
public class StreamingResource {
private volatile Map<String, String> counterBySystemDate = new ConcurrentHashMap<>();
#Inject
#Channel("migrations")
Flowable<String> counters;
#GET
#Path("/stream")
#Produces(MediaType.SERVER_SENT_EVENTS) // denotes that server side events (SSE) will be produced
#SseElementType("text/plain") // denotes that the contained data, within this SSE, is just regular text/plain data
public Publisher<String> stream() {
Flowable<String> mainStream = counters.doOnNext(dateSystemToCount -> {
String key = dateSystemToCount.substring(0, dateSystemToCount.lastIndexOf("_"));
counterBySystemDate.put(key, dateSystemToCount);
});
return fromIterable(counterBySystemDate.values().stream().sorted().collect(Collectors.toList()))
.concatWith(mainStream)
.onBackpressureLatest();
}
}
Is it possible to make any modification that would allow multiple clients to consume the same data, in a broadcast fashion?
I guess this implies letting go of backpressure, because that would imply keeping state per consumer?
I saw that Observable is not accepted as a return type in resteasy-rxjava2 for the Server-Sent Events media type.
Please let me know any ideas,
Thank you
Please find the full code in Why in multiple connections to PricesResource Publisher, only one gets the stream?
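As a hedged sketch of one way to multicast the stream (not from the original post): RxJava2's share() operator, i.e. publish().refCount(), lets several concurrent subscribers observe the same upstream Flowable, so each connected SSE client could receive the live values. The field and annotation names below are taken from the question; the @PostConstruct wiring is an assumption:
    // Sketch only: multicast the channel once, then hand each SSE client the shared stream.
    private Flowable<String> shared;

    @PostConstruct
    void initSharedStream() {
        shared = counters
            .doOnNext(dateSystemToCount -> {
                String key = dateSystemToCount.substring(0, dateSystemToCount.lastIndexOf("_"));
                counterBySystemDate.put(key, dateSystemToCount);
            })
            .share(); // publish().refCount(): all current subscribers see the same live values
    }

    @GET
    @Path("/stream")
    @Produces(MediaType.SERVER_SENT_EVENTS)
    @SseElementType("text/plain")
    public Publisher<String> stream() {
        // replay the counters seen so far, then continue with the shared live stream
        return Flowable.fromIterable(counterBySystemDate.values().stream().sorted().collect(Collectors.toList()))
                .concatWith(shared)
                .onBackpressureLatest();
    }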

User destinations in a multi-server environment? (Spring WebSocket and RabbitMQ)

The documentation for Spring WebSockets states:
4.4.13. User Destinations
An application can send messages targeting a specific user, and Spring’s STOMP support recognizes destinations prefixed with "/user/" for this purpose. For example, a client might subscribe to the destination "/user/queue/position-updates". This destination will be handled by the UserDestinationMessageHandler and transformed into a destination unique to the user session, e.g. "/queue/position-updates-user123". This provides the convenience of subscribing to a generically named destination while at the same time ensuring no collisions with other users subscribing to the same destination so that each user can receive unique stock position updates.
Is this supposed to work in a multi-server environment with RabbitMQ as broker?
As far as I can tell, the queue name for a user is generated by appending the simpSessionId. When using the recommended client library stomp.js, this results in the first user getting the queue name "/queue/position-updates-user0", the next "/queue/position-updates-user1", and so on.
This in turn means the first users to connect to different servers will subscribe to the same queue ("/queue/position-updates-user0").
The only reference to this I can find in the documentation is this:
In a multi-application server scenario a user destination may remain unresolved because the user is connected to a different server. In such cases you can configure a destination to broadcast unresolved messages to so that other servers have a chance to try. This can be done through the userDestinationBroadcast property of the MessageBrokerRegistry in Java config and the user-destination-broadcast attribute of the message-broker element in XML.
But this only makes it possible to communicate with a user from a different server than the one where the WebSocket is established.
I feel like I'm missing something. Is there any way to configure Spring so that MessagingTemplate.convertAndSendToUser(principal.getName(), destination, payload) can safely be used in a multi-server environment?
If the users need to be authenticated (I assume their credentials are stored in a database), you can always use their unique database user ID as the destination they subscribe to.
What I do is, when a user logs in, they are automatically subscribed to two topics: an account|system topic for system-wide broadcasts and an account|<userId> topic for user-specific broadcasts.
You could try something like notification|<userId> for each person to subscribe to, then send messages to that topic and they will receive them.
Since user IDs are unique to each user, you shouldn't have an issue in a clustered environment as long as every server is hitting the same database.
Here is my send method:
public static boolean send(Object msg, String topic) {
    try {
        String destination = topic;
        String payload = toJson(msg); // jsonify the message
        Message<byte[]> message = MessageBuilder.withPayload(payload.getBytes("UTF-8")).build();
        template.send(destination, message);
        return true;
    } catch (Exception ex) {
        logger.error(CommService.class.getName(), ex);
        return false;
    }
}
My destinations are preformatted, so if I want to send a message to the user with ID 1 the destination looks something like /topic/account|1, for example:
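Purely as an illustration of calling the helper above (CommService is inferred from the logger call in the send method; the payload object is a placeholder):
public class NotificationExample {
    // Illustrative caller, not from the original answer: sends a payload to the
    // per-user topic for the user with database ID 1 via the send(...) helper above.
    public static void notifyUserOne(Object payload) {
        String destination = "/topic/account|1"; // preformatted destination, as described above
        boolean delivered = CommService.send(payload, destination);
        if (!delivered) {
            System.err.println("Delivery failed for " + destination);
        }
    }
}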
I've created a ping-pong controller that tests WebSockets for users who connect, to see if their environment allows WebSockets. I don't know if this will help you, but it does work in my clustered environment.
/**
 * Play ping pong between the client and server to see if web sockets work
 * @param input the ping pong input
 * @return the return data to check for connectivity
 * @throws Exception exception
 */
@MessageMapping("/ping")
@SendToUser(value = "/queue/pong", broadcast = false) // send only to the session that sent the request
public PingPong ping(PingPong input) throws Exception {
    int receivedBytes = input.getData().length;
    int pullBytes = input.getPull();
    PingPong response = input;
    if (pullBytes == 0) {
        response.setData(new byte[0]);
    } else if (pullBytes != receivedBytes) {
        // create random byte array
        byte[] data = randomService.nextBytes(pullBytes);
        response.setData(data);
    }
    return response;
}

Ignite and Kafka Integration

I am trying the Ignite and Kafka integration to bring Kafka messages into an Ignite cache.
My message key is a random string (to work with Ignite, the Kafka message key can't be null), and the value is a JSON string representation of Person (a Java class).
When Ignite receives such a message, it looks like Ignite uses the message's key (the random string in my case) as the cache key.
Is it possible to change the cache key to the person's ID, so that I can put the Person into the cache keyed by its ID?
It looks like streamer.receiver(new StreamReceiver...) could work:
streamer.receiver(new StreamReceiver<String, String>() {
    public void receive(IgniteCache<String, String> cache, Collection<Map.Entry<String, String>> entries) throws IgniteException {
        for (Map.Entry<String, String> entry : entries) {
            Person p = fromJson(entry.getValue());
            // ignore the message key and use the person ID as the cache key
            cache.put(p.getId(), p);
        }
    }
});
Is this the recommended way? I am also not sure whether calling cache.put in a StreamReceiver is correct, since the receiver is only meant as a pre-processing step before writing to the cache.
The data streamer will map all your keys to cache affinity nodes, create batches of entries and send the batches to the affinity nodes. After that, the StreamReceiver will receive your entries, get the Person's ID and invoke cache.put(K, V). Putting an entry leads to mapping your key to the corresponding cache affinity node and sending an update request to that node.
Everything looks good, but the result of mapping your random key from Kafka and the result of mapping the Person's ID will be different (most likely different nodes). As a result you will get poor performance due to redundant network hops.
Unfortunately, the current KafkaStreamer implementation doesn't support stream tuple extractors (see e.g. the StreamSingleTupleExtractor class). But you can easily create your own Kafka streamer implementation using the existing one as an example.
Also, you can try using KafkaStreamer's keyDecoder and valDecoder to extract the Person's ID from the Kafka message. I'm not sure, but it might help.
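Purely as a rough sketch of the "write your own streamer" idea (this is not Ignite's KafkaStreamer API): consume the topic with a plain KafkaConsumer and feed an IgniteDataStreamer keyed by the Person's ID, so the affinity mapping is done on the right key from the start. Person and fromJson(...) are taken from the question; the cache name "personCache", the topic name "persons", the broker address and a String-typed getId() are assumptions; a recent kafka-clients with poll(Duration) is assumed as well.
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PersonLoader {

    // Consume the topic directly and stream entries keyed by the Person's ID,
    // so the key mapped to an affinity node is the same key stored in the cache.
    public static void load(Ignite ignite) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "person-loader");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
             IgniteDataStreamer<String, Person> streamer = ignite.dataStreamer("personCache")) {
            consumer.subscribe(Collections.singletonList("persons"));
            while (!Thread.currentThread().isInterrupted()) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
                    Person p = fromJson(record.value()); // ignore the random Kafka key entirely
                    streamer.addData(p.getId(), p);      // key by the Person's ID up front (assumes getId() is a String)
                }
            }
        }
    }
}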

RavenDB failover scenario. How to know the actual server?

I'm setting up a project with replication and failover for RavenDB (server and client 3.0), and now I'm testing with a replica DB.
The failover behavior is very simple: I have two servers, one on 8080 and one on 8081. The configuration is basically this:
store.FailoverServers.ForDatabases = new Dictionary<string, ReplicationDestination[]>
{
{
"MyDB",
new[]
{
new ReplicationDestination
{
Url = "http://localhost:8080"
},
new ReplicationDestination
{
Url = "http://localhost:8081"
}
}
}
};
The failover IS working well: I've tried shutting down the first server (the one used in the DocumentStore configuration) and the second one responds as expected.
What I want to know is: is there a way to find out which failover server is currently responding to the queries? If, inside the session, I navigate the DocumentSession properties (such as session.Advanced.DocumentStore.Identifier) I cannot find any reference to the second server; I only see a reference to the first one, the one used for the configuration.
Am I missing something?
You can use the ReplicationInformer.FailoverStatusChanged event to get notified of failovers.
You can access the replication informer using DocumentStore.GetReplicationInformerForDatabase().

Options for creating dynamic filters (xpath) in a Camel route

I have the following static route that is loaded at server startup. It listens for UDP messages on a port and pushes these messages to the seda queue defined in the route below.
from("mina:udp://hostipaddress:9998?sync=false").wireTap(
"seda:sometag?size=100&blockWhenFull=true&multipleConsumers=true");
Now I can have multiple clients that want to receive/subscribe to these messages. They also want to dynamically select which feeds they need.
Each client sends a subscription request (REST) to the server (implemented using Spring-MVC, Jetty and Camel).
As soon as the server receives a request, I create a new Camel route that looks like:
from("seda:sometag?multipleConsumers=true")
.routeId(RouteIdCreator.createRouteId(toIP, toPort, "sometag"))
.filter()
.xpath(this.xpathFilter).unmarshal().jaxb("sometag").marshal()
.json().wireTap("mina:udp://client_ip_address:20001?sync=false");
Once this route is deployed it will start to send UDP messages to client_ip_address:20001 (as specified in the dynamic route above).
The client can send different filters to the server.
When the server receives a new filter it does the following:
1. It checks if there is a route running (based on the client IP and port).
2. If there is a route running, it stops that route and deletes the route with the older filter.
3. It then recreates a new route which differs from the previous route only in the XPath filter.
My issue is that step 2 takes a lot of time (to stop and restart).
Is there a way to resolve this issue?
Basically I want to change the XPath expression in the route without stopping/recreating the route.
PS: I've also posted this on the official Camel mailing list.
You can try storing the XPath filter in a database (basically a simple table with the IP and the associated filter) when you receive a new subscription. Then you can read this filter from the database in the route and use it as the filter.
from("seda:sometag?multipleConsumers=true")
.routeId(RouteIdCreator.createRouteId(toIP, toPort, "sometag"))
.setHeader("ip").constant(client_ip_adresse)
.filter().xpath(simple("${bean:xpathFilterComponent?methode=find}"))
.unmarshal().jaxb("sometag").marshal()
.json().wireTap("mina:udp://client_ip_address:20001?sync=false");
And your bean should look like
public class XpathFilterComponent {

    public void save(String ip, String filter) {
        // store the filter for an ip in the database, when a subscription is received
    }

    public String find(@Header("ip") String ip) {
        String filter = ... // retrieve the filter for this ip from the database
        return filter;
    }
}
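For completeness, a sketch of wiring this up outside Spring (assumption: a plain-Java Camel 2.x setup; with Spring you would instead declare the bean with the id xpathFilterComponent). The registry name must match the one used in ${bean:xpathFilterComponent?method=find} above; the IP and filter values passed to save(...) are illustrative only.
import org.apache.camel.CamelContext;
import org.apache.camel.impl.DefaultCamelContext;
import org.apache.camel.impl.SimpleRegistry;

public class SubscriptionWiring {

    public static void main(String[] args) throws Exception {
        // register the filter bean under the name the simple expression looks up
        SimpleRegistry registry = new SimpleRegistry();
        XpathFilterComponent filterComponent = new XpathFilterComponent();
        registry.put("xpathFilterComponent", filterComponent);

        CamelContext context = new DefaultCamelContext(registry);
        // ... add the seda/mina routes shown above to the context ...
        context.start();

        // when a REST subscription arrives, persist the client's filter; the running
        // route picks it up on the next message without being stopped or recreated
        filterComponent.save("192.168.1.50", "/sometag[priority='high']");
    }
}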