RavenDB failover scenario: how to know which server is actually responding?

I'm setting up a project with replication and failover for RavenDB (server and client 3.0), and I'm now testing with a replica DB.
The failover setup is very simple: I have two servers, one on 8080 and one on 8081. The configuration is basically this:
store.FailoverServers.ForDatabases = new Dictionary<string, ReplicationDestination[]>
{
    {
        "MyDB",
        new[]
        {
            new ReplicationDestination
            {
                Url = "http://localhost:8080"
            },
            new ReplicationDestination
            {
                Url = "http://localhost:8081"
            }
        }
    }
};
The failover IS working: when I shut down the first server (the one used in the DocumentStore configuration), the second one responds as expected.
What I want to know is: is there a way to find out which failover server is currently answering the queries? If I look through the DocumentSession properties inside the session (such as session.Advanced.DocumentStore.Identifier), I find no reference to the second server, only to the first one, the one used in the configuration.
Am I missing something?

You can use the ReplicationInformer.FailoverStatusChanged event to get notified of failovers.
You can access the replication informer using DocumentStore.GetReplicationInformerForDatabase().
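A minimal sketch of wiring that event up, assuming the RavenDB 3.0 .NET client and a store configured as in the question (the exact property names on the event args may vary between client builds):

// Assumes an initialized DocumentStore named "store".
var informer = store.GetReplicationInformerForDatabase("MyDB");
informer.FailoverStatusChanged += (sender, args) =>
{
    // The args identify the destination whose status changed,
    // so you can log which URL is (or is no longer) failing.
    Console.WriteLine("Failover status changed: " + args.Url +
                      " failing=" + args.Failing);
};

Tracking the last URL reported here tells you which server your queries are being routed to after a failover.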

Related

My SQL Server Can Only Handle 2 players?

I am developing a game using TCP. The clients send to and listen to the server over TCP. When the server receives a request, it consults the database (SQL Server Express / Entity Framework) and sends a response back to the client.
I'm trying to make an MMORPG, so I need to know all the players' locations frequently; I use a System.Timer to ask the server for the locations of the players around me.
The problem:
If I configure the timer to fire a method every 500 ms that asks the server for the current player locations, I can open 2 instances of the client app, but it's laggy. If I configure it to fire every 50 ms, then when I open the second instance SQL Server often throws this exception:
"The connection was not closed. The connection's current state is open."
I mean, what the hell? I know I am requesting A LOT from the database in a short period, but how do real games deal with this?
Here is one method that throws the error when SQL Server seems to be overloaded (on the second line of the method):
private List<CharacterDTO> ListAround()
{
    List<Character> characters = new List<Character>();
    characters = ObjectSet.Character.AsNoTracking().Where(x => x.IsOnline).ToList();
    return GetDto(characters);
}
Your real problem is that ObjectSet is not thread-safe. You should create a new database context inside ListAround and dispose of it when you are done, instead of re-using the same context over and over:
private List<CharacterDTO> ListAround()
{
    List<Character> characters = new List<Character>();
    using (var objectSet = new TheNameOfYourDataContextType())
    {
        characters = objectSet.Character.AsNoTracking().Where(x => x.IsOnline).ToList();
        return GetDto(characters);
    }
}
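Creating a context per call is cheap: Entity Framework contexts are lightweight to construct, and the underlying connections come from ADO.NET's connection pool, so you are not opening a new physical connection each time.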
I resolved the problem by changing the strategy. I no longer update the players' positions to the database in real time. Instead, I keep a list in the server's RAM and manage only that list, eventually flushing the information to the database.
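A minimal sketch of that in-memory approach (the Position type, the flush interval, and the persistence hook are all hypothetical, not from the question):

// C# sketch: keep the latest positions in RAM and flush periodically.
using System;
using System.Collections.Concurrent;
using System.Timers;

public struct Position
{
    public float X;
    public float Y;
}

public class PositionCache
{
    // Latest known position per player id, kept entirely in memory.
    private readonly ConcurrentDictionary<int, Position> positions =
        new ConcurrentDictionary<int, Position>();
    private readonly Timer flushTimer;

    public PositionCache()
    {
        // Persist on a slow cadence instead of once per movement packet.
        flushTimer = new Timer(5000);
        flushTimer.Elapsed += (sender, e) => Flush();
        flushTimer.Start();
    }

    // Called for every movement packet; no database work happens here.
    public void Update(int playerId, Position position)
    {
        positions[playerId] = position;
    }

    private void Flush()
    {
        foreach (var entry in positions)
        {
            // Hypothetical persistence hook; a real server would use a
            // fresh EF context per batch, as in the answer above.
            // SavePosition(entry.Key, entry.Value);
        }
    }
}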

Current MongoDB server time in VB.Net

How do I get MongoDB's server time, or use it in a query, from VB.NET?
For example, in the Mongo shell I would do:
db.Cookies.find({ expireOn: { $lt: new Date() } });
In PHP I can easily do something like this:
$model->expireOn = new MongoDate();
How do I approach this in VB.Net? I don't want to use the local machine's time. This obviously doesn't work...
MongoDB.Driver.Builders.Query.LT("expireOn", "new Date()")
If you merely want to remove expired cookies from your collection, you could use the TTL collection feature, which will automatically remove expired entries using a background worker on the server, hence using the server's time:
db.Cookies.ensureIndex( { "expireOn": 1 }, { expireAfterSeconds: 0 } )
If you really need to query, use a service program that runs on the server, or ensure your clocks are reasonably synchronized: clocks that are considerably off can cause a plethora of problems, especially for web servers and email servers (consider HTTP headers like Date, Last-Modified and If-Modified-Since; email timestamps; HMAC/timestamp validation against replay attacks; etc.).
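Going back to the TTL option, here is a hedged VB.Net sketch using the legacy 1.x MongoDB .NET driver builders (IndexKeys/IndexOptions) that were current for this question; the connection details are placeholders:

' Assumes the legacy MongoDB .NET driver 1.x API.
Imports MongoDB.Driver
Imports MongoDB.Driver.Builders

Module TtlIndexDemo
    Sub Main()
        Dim server = MongoServer.Create("mongodb://localhost")
        Dim db = server.GetDatabase("test")
        Dim cookies = db.GetCollection("Cookies")

        ' expireAfterSeconds = 0: documents expire at the time stored
        ' in their expireOn field, using the server's clock.
        cookies.EnsureIndex(IndexKeys.Ascending("expireOn"),
                            IndexOptions.SetTimeToLive(TimeSpan.Zero))
    End Sub
End Module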

cxf failover recovery

I have a CXF JAX-WS client to which I added the failover strategy. The question is: how can the client recover from the backup and use the primary URL again? Right now, after the client switches to the secondary URL it stays there; it will not use the primary URL even when that server becomes available again.
The code for the client part is:
JaxWsProxyFactoryBean factory = new JaxWsProxyFactoryBean();
factory.setServiceClass(GatewayPort.class);
factory.setAddress(this.configFile.getPrimaryURL());

FailoverFeature feature = new FailoverFeature();
SequentialStrategy strategy = new SequentialStrategy();
List<String> addList = new ArrayList<String>();
addList.add(this.configFile.getSecondaryURL());
strategy.setAlternateAddresses(addList);
feature.setStrategy(strategy);

List<AbstractFeature> features = new ArrayList<AbstractFeature>();
features.add(feature);
factory.setFeatures(features);

this.serviceSoap = (GatewayPort) factory.create();
Client client = ClientProxy.getClient(this.serviceSoap);
if (client != null)
{
    HTTPConduit conduit = (HTTPConduit) client.getConduit();
    HTTPClientPolicy policy = new HTTPClientPolicy();
    policy.setConnectionTimeout(this.configFile.getTimeout());
    policy.setReceiveTimeout(this.configFile.getTimeout());
    conduit.setClient(policy);
}
You may add the primary URL to the alternate addresses list instead of setting it on the JaxWsProxyFactoryBean. That way, since you are using SequentialStrategy, the primary URL is tried first for every service call; only if it fails is the secondary URL tried.
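A minimal sketch of that rearrangement, reusing the names from the question (this is the suggestion above expressed in code, not a verified configuration):

FailoverFeature feature = new FailoverFeature();
SequentialStrategy strategy = new SequentialStrategy();

// Put the primary first in the alternate list so the sequential
// strategy always tries it before the backup.
List<String> addresses = new ArrayList<String>();
addresses.add(this.configFile.getPrimaryURL());
addresses.add(this.configFile.getSecondaryURL());
strategy.setAlternateAddresses(addresses);
feature.setStrategy(strategy);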
You might also try an alternative CXF failover feature that supports failback:
https://github.com/jaceko/cxf-circuit-switcher

Options for creating dynamic filters (xpath) in a Camel route

I have the following static route, loaded at my server's startup. It listens for UDP messages on a port and pushes these messages to the seda queue defined in the route below.
from("mina:udp://hostipaddress:9998?sync=false").wireTap(
"seda:sometag?size=100&blockWhenFull=true&multipleConsumers=true");
Now I can have multiple clients that want to receive/subscribe to these messages, and they also want to select dynamically which feeds they need.
Each client sends a subscription request (REST) to the server (implemented using Spring MVC, Jetty, and Camel).
As soon as the server receives a request, I create a new Camel route that looks like:
from("seda:sometag?multipleConsumers=true")
.routeId(RouteIdCreator.createRouteId(toIP, toPort, "sometag"))
.filter()
.xpath(this.xpathFilter).unmarshal().jaxb("sometag").marshal()
.json().wireTap("mina:udp://client_ip_address:20001?sync=false");
Once this route is deployed, it starts sending UDP messages to client_ip_address:20001 (as specified in the dynamic route above).
The client can send different filters to the server.
When the server receives a new filter, it does the following:
1. Checks whether a route is already running (based on client IP and port).
2. If one is running, stops that route and deletes the route with the older filter.
3. Recreates a new route that differs from the old one only in the XPath filter.
My issue is that step 2 takes a lot of time (stopping and restarting).
Is there a way to resolve this?
Basically, I want to change the XPath expression in the route without stopping/recreating the route.
PS: I've also posted this on the official Camel mailing list.
You can store the XPath filter in a database (basically a simple table with the IP and the associated filter) when you receive a new subscription. Then the route can read the filter for the current client from the database and use it to decide whether to forward the message, so the route itself never has to be restarted. For example:
from("seda:sometag?multipleConsumers=true")
.routeId(RouteIdCreator.createRouteId(toIP, toPort, "sometag"))
.setHeader("ip").constant(client_ip_adresse)
.filter().xpath(simple("${bean:xpathFilterComponent?methode=find}"))
.unmarshal().jaxb("sometag").marshal()
.json().wireTap("mina:udp://client_ip_address:20001?sync=false");
And your bean should look like
public class XpathFilterCompnent {
public void save(String ip, String filter){
//store a filter for an ip in database, when a subscription is received
}
public void find(#Header("ip") String ip){
String filter = ... //retreive filter from database
return filter;
}
}

Spymemcached hashing algorithm for multiple membase server

Platform: spymemcached-2.7.3.jar, 64 bit Windows 7 OS
We have two Membase servers (non-clustered environment), and we use the spymemcached Java client to set and get data from memcache. We are not using any replication between the two Membase servers.
We use the following code to set data in memcache. It looks like MemcachedClient always tries to put/get data on server1 first if it is available; if server1 is down, MemcachedClient puts/gets on server2.
Does spymemcached use a hashing algorithm to decide which server to set/get data from? Is there any documentation that explains how this works?
Code:
public class Main {
    public static void main(String[] args) throws IOException, URISyntaxException {
        MemcachedClient client;
        URI server1 = new URI("http://192.168.100.111:8091/pools");
        URI server2 = new URI("http://127.0.0.1:8091/pools");
        ArrayList<URI> serverList = new ArrayList<URI>();
        serverList.add(server1);
        serverList.add(server2);
        client = new MemcachedClient(serverList, "default", "");
        client.set("spoon", 50, "Hello World!");
        client.shutdown(10, TimeUnit.SECONDS);
        System.exit(0);
    }
}
The constructor MemcachedClient(List<URI>, String, String) connects to the first URI in the list to obtain information about the entire cluster. This means that if you had 10 servers in your cluster, you could specify a single address to connect to all of them. The reason a list of URIs is allowed is so that, if the server you are getting cluster information from goes down, you can get the cluster information from another server in the cluster.
The hashing algorithm used by spymemcached in this case is determined by Membase when the cluster configuration begins. If you look through the JSON sent to spymemcached during the configuration phase, you will see that the hash algorithm is CRC. Look at the DefaultHashAlgorithm class for more information on CRC.
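As a rough illustration (assuming a spymemcached build that ships the DefaultHashAlgorithm enum; the 2.7.x jars expose similar constants through the older HashAlgorithm type), this is how a key is hashed before being mapped onto the list of active servers:

import net.spy.memcached.DefaultHashAlgorithm;

public class HashDemo {
    public static void main(String[] args) {
        // CRC is the algorithm Membase advertises in its configuration
        // JSON; the resulting hash, mapped onto the number of active
        // servers, picks the destination node for the key.
        long hash = DefaultHashAlgorithm.CRC_HASH.hash("spoon");
        System.out.println("spoon hashes to " + hash);
    }
}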
Also, I'm curious why you're using Membase this way.