SQL Server - AlwaysOn Redirect Writes with Readable Secondary Enabled - sql-server-2016

I have an AlwaysOn availability group set up between two nodes running SQL Server 2016 Enterprise Edition. The Readable Secondary option is set to Read-intent only.
When a connection uses the parameter ApplicationIntent=ReadOnly, read operations are always redirected to the secondary. But our application also performs writes on the database, which cannot happen on the secondary because of that parameter.
Is there any other parameter or setting that redirects reads to the secondary and writes to the primary? Or do we need to maintain two connection strings in the application, one for writes and the other for reads?

You do need two types of connection string in your application: one for read-only work, and one for reads and writes.
For the read-only connection, the connection string needs to include "ApplicationIntent=ReadOnly".
For the read-write connection, the connection string should not include "ApplicationIntent".
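A minimal sketch of what maintaining the two connection strings might look like; the listener, database, and security settings below are placeholders, not values from the question:

```python
# Two connection strings: the read-only one adds ApplicationIntent=ReadOnly
# so the AG listener routes it to the readable secondary.
# "ag-listener" and "AppDb" are placeholder names.
READ_WRITE = (
    "Server=tcp:ag-listener,1433;"
    "Database=AppDb;Integrated Security=SSPI;"
)
READ_ONLY = READ_WRITE + "ApplicationIntent=ReadOnly;"

def connection_string(read_only: bool) -> str:
    """Pick the read-only string for reads, the default one for writes."""
    return READ_ONLY if read_only else READ_WRITE
```

The application then asks for `connection_string(read_only=True)` on its read paths and the plain string everywhere it writes.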

Related

Connection strings for SQL Azure with geo-replica

With read-only routing, we can have a Failover Group listener direct the connection to a read-only secondary automatically, which can provide additional capacity.
I have set this up but I am confused about the fact that the FG provides two different FQDNs for the connection, one is servername.database.windows.net and the other servername.secondary.database.windows.net. These work as expected when the system is up and running but what is not clear is what happens to the secondary connection if the primary goes offline and a failover takes place. Would the secondary connection automatically route to the new primary/only server or would it simply stop working because there would be no secondaries available?
I would test it but I can't find a way to take the secondary offline to simulate it being unavailable.
Alternatively, when I tried using the primary connection with ApplicationIntent=ReadOnly, it seems to send all traffic to the primary server, so that doesn't work either.
what happens to the secondary connection if the primary goes offline and a failover takes place?
Auto-failover groups provide read-write and read-only listener end-points that remain unchanged during geo-failovers. This means you do not have to change the connection string for your application after a geo-failover, because connections are automatically routed to the current primary. Whether you use manual or automatic failover activation, a geo-failover switches all secondary databases in the group to the primary role.
would it simply stop working because there would be no secondaries available?
If you add a single database to the failover group, it automatically creates a secondary database using the same edition and compute size on secondary server.
If you add a database that already has a secondary database in the secondary server, that geo-replication link is inherited by the group.
When you add a database that already has a secondary database in a server that is not part of the failover group, a new secondary is created in the secondary server.
Refer: Auto-failover groups overview & best practices
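To make the two endpoints concrete, here is a small sketch of the FQDNs a failover group exposes; "myserver" stands in for the failover-group name from the question:

```python
# A failover group exposes two listener endpoints whose names do not
# change across geo-failovers. "myserver" is a placeholder group name.
fog = "myserver"
read_write_listener = f"{fog}.database.windows.net"
read_only_listener = f"{fog}.secondary.database.windows.net"

# Connections to the read-only listener should also declare read intent:
read_only_conn = (
    f"Server=tcp:{read_only_listener},1433;"
    "ApplicationIntent=ReadOnly;"
)
```

Because both names stay fixed, the application keeps the same two strings before and after a failover; the service reroutes them to whichever server currently holds each role.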

Hangfire using multiple connection string and DbContext

I'm having trouble using Hangfire with multiple Entity Framework connection strings. I have a single server that stores Hangfire jobs, and each job must run with a different connection string. For example: I have 5 stored jobs, and each job that launches must use a specific connection in its DbContext. In my API application's requests I use HttpContext to indicate which database the connection string should target. I am unable to pass an HttpContext to Hangfire, so I can't reuse the logic that already works. I am using dependency injection, so the instances are created as soon as the job triggers the method. I could pass the name of the database as a parameter of the method that Hangfire should trigger, but I can't do anything with that information, since with dependency injection the DbContext instances have already been created by that point, without the connection string. Has anyone ever needed something like this?
If you go through the Hangfire documentation you'll find your answer.
From the Hangfire documentation:
It is possible to run multiple server instances inside a process, machine, or on several machines at the same time. Each server uses distributed locks to perform the coordination logic.
Each Hangfire Server has a unique identifier that consists of two parts to provide default values for the cases written above. The last part is a process id to handle multiple servers on the same machine. The first part is the server name, which defaults to the machine name, to handle uniqueness for different machines. Examples: server1:9853, server1:4531, server2:6742.
Since the default values provide uniqueness only at the process level, you should handle it manually if you want to run different server instances inside the same process:
var options = new BackgroundJobServerOptions
{
    ServerName = String.Format(
        "{0}.{1}",
        Environment.MachineName,
        Guid.NewGuid().ToString())
};
var server = new BackgroundJobServer(options);
// or
app.UseHangfireServer(options);

Membase caching pattern when one server in cluster is inaccessible

I have an application that runs a single Membase server (1.7.1.1) that I use to cache data I'd otherwise fetch from our central SQL Server DB. I have one default bucket associated to the Membase server, and follow the traditional data-fetching pattern of:
1. When specific data is requested, look up the relevant key in Membase.
2. If data is returned, use it.
3. If no data is returned, fetch the data from the DB.
4. Store the newly returned data in Membase.
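The steps above are the standard cache-aside pattern; a minimal sketch, using a plain dict in place of a real Membase client and a stub in place of the SQL Server fetch:

```python
# Cache-aside sketch. The dicts below stand in for the real systems:
# `db` for SQL Server, `cache` for the Membase bucket.
db = {"user:1": "alice"}
cache = {}

def get(key):
    value = cache.get(key)      # 1. look up the key in the cache
    if value is not None:
        return value            # 2. cache hit: use it
    value = db.get(key)         # 3. cache miss: fetch from the DB
    if value is not None:
        cache[key] = value      # 4. store the result back in the cache
    return value
```

With a real client, step 1 can also fail with a connectivity error rather than a miss, which is exactly the distinction the answer below discusses.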
I am looking to add an additional server to my default cluster, and rebalance the keys. (I also have replication enabled for one additional server).
In this scenario, I am curious as to how I can use the current pattern (or modify it) to make sure that I am not getting data out of sync when one of my two servers goes down in either an auto-failover or manual failover scenario.
From my understanding, if one server goes down (call it Server A), during the period that it is down but still attached to the cluster, there will be a cache key miss (if the active key is associated to Server A, not Server B). In that case, in the data-fetching pattern above, I would get no data returned and fetch straight from SQL Server. But, when I attempt to store the data back to my Membase cluster, will it store the data in Server B and remap that key to Server B on the next fetch?
I understand that once I mark Server A as "failed over", Server B's replica key will become the active one, but I am unclear about how to handle the intermittent situation when Server A is inaccessible but not yet marked as failed over.
Any help is greatly appreciated!
That's a pretty old version. But several things to clarify.
If you are performing caching you are probably using a memcached bucket, and in this case there is no replica.
Nodes are always considered attached to the cluster until they are explicitly removed by administrative action (autofailover attempts to automate this administrative action for you by attempting to remove the node from the cluster if it's determined to be down for n amount of time).
If the server is down (but not failed over), you will not get a "Cache Miss" per se, but some other kind of connectivity error from your client. Many older memcached clients do not make this distinction and simply return a NULL, False, or similar value for any kind of failure. I suggest you use a proper Couchbase client for your application which should help differentiate between the two.
As far as Couchbase is concerned, data routing for any kind of operation remains the same. So if you were not able to reach the item on Server A because it was not available, you will encounter this same issue upon attempting to store it back again. In other words, if you tried to get data from Server A and it was down, attempting to store data to Server A will fail in the exact same way, unless the server was failed over between the last fetch and the current storage attempt -- in which case the client will determine this and route the request to the appropriate server.
In "newer" versions of Couchbase (> 2.x) there is a special get-from-replica command available for use with couchbase (or membase)-style buckets which allow you to explicitly read information from a replica node. Note that you still cannot write to such a node, though.
Your overall strategy seems very sane for a cache; except that you need to understand that if a node is unavailable, then a certain percentage of your data will be unavailable (for both reads and writes) until the node is either brought back up again or failed over. There is no way around this.

Is there a log file of running processes in Server Advantage

My name is Josue.
I need your help with this:
Is there any way to audit or monitor the server processes that connect to the Advantage Database Server?
Is there a log of running processes?
Thanks!
There is no existing log of processes that use Advantage Database Server. Because it is a client/server architecture, there is no mechanism that I am aware of that can easily associate a connection on the server to a specific process.
However, it would be possible to use the system procedure sp_mgGetConnectedUsers() to obtain some of this information. It might be possible to use it to obtain the information you are looking for at a given point in time (a snapshot).
The output of that procedure includes three fields that you might be interested in. The Address column gives the address of the machine that connected to Advantage. It is typically the IP address of the client application. But it can also be of the form "IPC Connection N", which indicates that it is using shared memory for communications; this means that the client process is running on the same machine as the server.
The TSAddress column might also be of interest. If the connection is made by a client that is running through terminal services (e.g., a remote desktop), then that column contains the IP address of the client machine. If you are interested in knowing processes that originate from the server machine itself, then you would need this field to differentiate between those and clients that connected through terminal services.
The other column of potential interest would be ApplicationID. By default, that field contains the process name (e.g., the executable) of the client application. This could help identify the actual process. It is not guaranteed, though. The application itself can change that value through mechanisms such as sp_SetApplicationID.
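Given a snapshot of those rows, identifying server-local processes can be a simple filter; a sketch using illustrative hard-coded rows (a real script would fetch them from sp_mgGetConnectedUsers() via your database client), with the Address, TSAddress, and ApplicationID columns described above:

```python
# Illustrative snapshot of sp_mgGetConnectedUsers() output.
# These rows are made up for the example.
rows = [
    {"Address": "IPC Connection 1", "TSAddress": None, "ApplicationID": "report.exe"},
    {"Address": "10.0.0.7",         "TSAddress": None, "ApplicationID": "app.exe"},
    {"Address": "IPC Connection 2", "TSAddress": "10.0.0.9", "ApplicationID": "rdp_app.exe"},
]

# Shared-memory ("IPC Connection N") rows with no terminal-services
# address are processes running on the server machine itself.
local = [
    r for r in rows
    if r["Address"].startswith("IPC Connection") and not r["TSAddress"]
]
```

Polling this periodically would give a rough log of connected processes over time, with the caveats about ApplicationID noted above.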

Stored Connection Strings per user

In the past I've used a Singleton Pattern to load the connection string when the application starts via the global.asa file.
I have a project now where each user has a unique connection string to the database. I would like to load this connection string once. The issue is that the singleton pattern will not work for me, since each user has their own connection string. Basically, the connection string is created dynamically.
I do not want to store it in session. If anyone has a clever way of doing this in .NET, let me know.
Connections to the database are quite expensive in terms of resources, and I personally would suggest that you reconsider your requirement of having one per user, unless you can guarantee that the total number of users will be very small (say, no more than 5-10).
Having said that, you can just store the connection in the User object that represents your user. Or have a global dictionary that maps user ids to connection strings.
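A sketch of the global-dictionary approach, building each user's string once and reusing it afterwards; the server, database, and user-id format are placeholders:

```python
# Global map from user id to that user's connection string,
# built lazily so each string is constructed only once.
_conn_strings: dict = {}

def connection_string_for(user_id: str) -> str:
    cs = _conn_strings.get(user_id)
    if cs is None:
        # Built dynamically per user, as in the question.
        # "dbserver" and "AppDb" are placeholder names.
        cs = f"Server=dbserver;Database=AppDb;User Id={user_id};"
        _conn_strings[user_id] = cs
    return cs
```

In ASP.NET the same idea would live in a static dictionary (with locking) or on the User object, keyed by the authenticated user's id.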
If the only difference between the connection strings is the username/password, you could consider impersonating the client and using Windows authentication in SQL Server instead.