Circuit breakers per server in Traefik

According to the documentation, circuit breakers operate at the backend level. I was wondering if it's possible to set circuit breakers per server.
If I understand correctly, when the circuit breaker trips (e.g., a server starts returning 500 errors), the entire backend is disabled. With per-server circuit breakers, only the problematic server would be removed from the pool.
Am I missing something here?
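For illustration, here is a minimal sketch of the per-server behaviour described above, written as a hand-rolled TypeScript breaker rather than anything Traefik provides; every name in it is invented for this example.

    // Hypothetical per-server circuit breaker: only the failing server is
    // taken out of rotation, instead of tripping the whole backend.
    type ServerState = { failures: number; openUntil: number };

    class PerServerBreaker {
      private states = new Map<string, ServerState>();
      constructor(private threshold = 5, private cooldownMs = 30_000) {}

      // Return the first server whose breaker is not currently open.
      pick(servers: string[]): string | undefined {
        const now = Date.now();
        return servers.find((s) => (this.states.get(s)?.openUntil ?? 0) <= now);
      }

      reportFailure(server: string): void {
        const st = this.states.get(server) ?? { failures: 0, openUntil: 0 };
        st.failures += 1;
        if (st.failures >= this.threshold) {
          st.openUntil = Date.now() + this.cooldownMs; // open this breaker only
          st.failures = 0;
        }
        this.states.set(server, st);
      }

      reportSuccess(server: string): void {
        this.states.delete(server); // healthy again: reset its state
      }
    }

A request loop would call pick(), forward the request, and report the outcome; the rest of the pool keeps serving while one server's breaker is open.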

Related

Edit SQL Requests in Transit

I am trying to update a legacy system's SQL solution to use the cloud.
Today's solution involves a customer's Windows SQL Server installed on-site; various machines are then configured to connect to that IP address / port / server name. When they connect, the machines set up any tables that are missing and regularly send their data. Data rates are low for an individual machine: roughly one write request every 10 seconds (it varies a lot), with no more than 2-3 KB of data per write request.
Moving this to the cloud is tricky, mostly because the machines do not have unique identifiers. The good news is that each legacy machine is connected to an IoT gateway (think Raspberry Pi) that knows a unique machineId. The IoT gateway is a full-fledged computer, though not a powerful one, and its disk is an SD card.
New and Old Network Layout
So far I have had a few things fall on their face.
1) Pointing the machine at the IoT gateway instead of the DB: I set the machine to think the DB's IP/port is that of the gateway, set up an Express server on the gateway to listen, and planned to inject the unique ID into the queries before proxying them up to the cloud. I may have had a bug, but for some reason I couldn't even see the requests coming in on the port. Even if I could, I'd still have to figure out how to decode them. Shouldn't I at least be able to see these requests coming in? (A sketch of this idea appears below.)
2) Looking into SQLite: the idea was to have SQLite listen on the port as an actual DB, then have a process on the gateway query data out of SQLite, append the unique ID, and send it to the cloud. Unfortunately, SQLite does not listen on a port.
I am starting to look at installing a full SQL Server on the device, but I'd really like to avoid that. I'm pretty sure it's fairly large, and heavy disk writes are inadvisable on a small embedded system like the one I'm running.
Generally my questions boil down to:
1) Should I be able to see SQL queries arriving at an Express server?
2) Should I be using a different tech? I couldn't find a more SQL-specific proxy.
3) Am I correct to think that the SQLite path is dead? Even if I could find a way to attach it to a port, SQLite still won't produce any sort of response when the clients try to make a connection.
4) Am I wrong to fear the local server? Diving into the documentation for making Express work with DBs led me to https://www.microsoft.com/en-us/sql-server/developer-get-started/node/ubuntu/ , which suggests 4 GB of memory; we're working with 0.5 GB.
Any other thoughts on how to approach this would be great.
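Regarding the first idea and question 1: SQL Server clients speak TDS, a binary protocol carried directly over TCP, not HTTP, so an Express server will never surface that traffic as requests. A raw TCP listener will at least show the bytes. Below is a minimal pass-through sketch in TypeScript (Node), where the cloud host and port are placeholders, and rewriting queries would still require parsing TDS packets:

    // Raw TCP pass-through on the gateway: the machines are pointed at
    // the gateway's IP/port, and everything is forwarded to the cloud DB.
    import * as net from "net";

    const CLOUD_HOST = "sql.example.com"; // placeholder
    const CLOUD_PORT = 1433;              // default SQL Server port

    const server = net.createServer((machine) => {
      const upstream = net.connect(CLOUD_PORT, CLOUD_HOST);

      // TDS is binary: you can observe the traffic here, but injecting a
      // machineId into queries means parsing TDS, not doing string edits.
      machine.on("data", (chunk) => {
        console.log(`machine -> cloud: ${chunk.length} bytes`);
      });

      machine.pipe(upstream);
      upstream.pipe(machine);
      machine.on("error", () => upstream.destroy());
      upstream.on("error", () => machine.destroy());
    });

    server.listen(1433, () => console.log("gateway proxy listening on 1433"));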

LDAP - write concern / guaranteed write to replicas prior to return

Is OpenLDAP (or any of LDAP's flavors) capable of providing write concern? I know it's an eventually consistent model, but there are more than a few DBs that offer eventual consistency plus write concern.
After doing some research, I'm still not able to figure out whether or not it's a thing.
The UnboundID Directory Server provides support for an assured replication mode in which you can request that the server delay the response to an operation until it has been replicated in a manner that satisfies your desired constraints. This can be controlled on a per-operation basis by including a special control in the add/delete/modify/modify DN request, or by configuring the server with criteria that can be used to identify which operations should use this assured replication mode (e.g., you can configure the server so that operations targeting a particular set of attributes are subjected to a greater level of assurance than others).
Our assured replication implementation allows you to define separate requirements for local servers (servers in the same data center as the one that received the request from the client) and nonlocal servers (servers in other data centers). This allows you to tune the server to achieve a balance between performance and behavior.
For local servers, the possible assurance levels are:
Do not perform any special assurance processing. The server will send the response to the client as soon as it's processed locally, and the change will be replicated to other servers as soon as possible. It is possible (although highly unlikely) that a permanent failure that occurs immediately after the server sends the response to the client but before it gets replicated could cause the change to be lost.
Delay the response to the client until the change has been replicated to at least one other server in the local data center. This ensures that the change will not be lost even in the event of the loss of the instance that the client was communicating with, but the change may not yet be visible on all instances in the local data center by the time the client receives the response.
Delay the response to the client until the result of the change is visible in all servers in the local data center. This ensures that no client accessing local servers will see out-of-date information.
The assurance options available for nonlocal servers are as follows (a short sketch of these levels appears after the list):
Do not perform any special assurance processing. The server will not delay the response to the client based on any communication with nonlocal servers, but a change could be lost or delayed if an entire data center is lost (e.g., by a massive natural disaster) or becomes unavailable (e.g., because it loses network connectivity).
Delay the response to the client until the change has been replicated to at least one other server in at least one other data center. This ensures that the change will not be lost even if a full data center is lost, but does not guarantee that the updated information will be visible everywhere by the time the client receives the response.
Delay the response to the client until the change has been replicated to at least one server in every other data center. This ensures that the change will be processed in every data center even if a network partition makes a data center unavailable for a period of time immediately after the change is processed. But again this does not guarantee that the updated information will be visible everywhere by the time the client receives the response.
Delay the response to the client until the change is visible in all available servers in all other data centers. This ensures that no client will see out-of-date information regardless of the location of the server they are using.
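All of the levels above reduce to one question: how many replica acknowledgements does the server wait for before answering the client? Here is a generic sketch of that decision in TypeScript; the Replica interface is invented for illustration and is not an LDAP API.

    // "none": answer immediately; "one": wait for the first replica ack;
    // "all": wait until every replica has applied the change.
    type AssuranceLevel = "none" | "one" | "all";

    interface Replica {
      apply(change: string): Promise<void>; // resolves once applied
    }

    async function writeWithAssurance(
      change: string,
      replicas: Replica[],
      level: AssuranceLevel,
    ): Promise<void> {
      const acks = replicas.map((r) => r.apply(change));
      if (level === "none") return;          // replicate in the background
      if (level === "one") {
        await Promise.race(acks);            // durable on at least one peer
        return;
      }
      await Promise.all(acks);               // visible everywhere
    }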
The UnboundID Directory Server also provides features to help ensure that clients are not exposed to out-of-date information under normal circumstances. Our replication mechanism is very fast, so changes generally appear everywhere in a matter of milliseconds. Each server constantly monitors its own replication backlog and can take action if the backlog becomes too great (e.g., mild action like alerting administrators, or more drastic measures like rejecting client requests until replication has caught up).
Because most replication backlogs are encountered when a server has been taken offline for some reason, the server also has the ability to delay accepting connections from clients at startup until it has caught up with all changes processed in the environment while it was offline. If you further combine this with the advanced load-balancing and health-checking capabilities of the UnboundID Directory Proxy Server, you can ensure that client requests are only forwarded to servers that don't have a replication backlog or any other undesirable condition that may cause the operation to fail, take an unusually long time to complete, or encounter out-of-date information.
From reviewing RFC 3384's discussion of replication requirements for LDAP, it looks as though LDAP only requires eventual consistency, not transactional consistency. Therefore any products that support this feature are likely to do so with vendor-specific implementations.
CA Directory supports a proprietary replication model called MULTI-WRITE, which guarantees that the client receives write confirmation only after all replicated instances have been updated. In addition, it supports the standard X.525 shadowing protocol, which provides weaker consistency guarantees and better performance.
With typical LDAP implementations, an update request normally returns as soon as the DSA handling the request has been updated, not when the replica instances have been updated. This is the case with OpenLDAP, I believe. The benefit is speed; the downside is the lack of any guarantee that an update has been applied to all replicas.
CA's directory product uses a memory-mapped system, and writes are fast enough that this is generally not a concern.

Best practice: handling errors in linked servers

I am using SQL Server 2008 R2 to connect to a number of other servers of the same type from within triggers and stored procedures. These servers are geographically distributed around the world and it is vital that any errors in communication between the servers are logged along with the data that was supposed to be sent so the communication may be re-attempted at a later time. The servers are participating in an Observer pattern with one of the servers acting as the observer and handling routing of messages between the other servers.
I am looking for specific advice on how best to handle errors in this situation, particularly connectivity errors and any potential pitfalls to look out for when performing queries on remote servers.
If you are using a linked server and sending data to the other server over the linked-server connection, there is no inherent way to log these requests unless you add application logic to do so.
With a linked server, if one of the servers goes down, an error is thrown in the application logic; in your case, the stored procedure or the trigger will fail, saying the server does not exist or is down.
To avoid this, we tend to use Service Broker, which implements queuing: you can always keep the logging and also ensure that messages will be delivered irrespective of server downtime (if a server is down, the message waits until it is read).
http://technet.microsoft.com/en-us/library/ms166104%28v=sql.105%29.aspx
Hope this helps
Linked servers may not be the best solution for the model you're trying to implement, since the resilience you require is very difficult to achieve in the case of a linked server communication failure.
The fundamental problem is that in the case of a linked server communication failure the database engine raises an error with a severity of 20, which is high enough to abort the currently executing batch - bypassing any error handling code in the batch (for example TRY...CATCH).
SQL 2005 and later include the procedure sp_testlinkedserver, which enables the availability of a linked server to be tested before attempting to execute commands. However, this doesn't get around problems caused by communication errors encountered during a command.
There are a couple of more robust options you could consider. One is Service Broker, which provides an asynchronous message-queuing model. This isn't a perfect fit for the observer pattern, but the activation feature provides a means to implement push notifications from a central point. Since you mention messaging, the conversation model employed by Service Broker might suit your aims.
The other option is transactional replication; this might be more suitable if the data flow is purely from the central server to the observers.
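Whichever transport is used, the resilient pattern both answers point toward is store-and-forward: persist the outbound message locally first, attempt delivery, and keep anything that fails for a later retry. A language-agnostic sketch follows, in TypeScript purely for illustration; send() and the message shape are invented for this example.

    // Store-and-forward: failures are logged with their payload so the
    // communication can be re-attempted later, as the question requires.
    interface OutboundMessage {
      id: number;
      target: string;   // which remote server this is destined for
      payload: string;  // the data that was supposed to be sent
      attempts: number;
    }

    const retryQueue: OutboundMessage[] = []; // persist in a local table in practice

    async function deliver(
      msg: OutboundMessage,
      send: (m: OutboundMessage) => Promise<void>,
    ): Promise<void> {
      try {
        await send(msg); // the remote call that may fail on a connectivity error
      } catch (err) {
        msg.attempts += 1;
        retryQueue.push(msg); // nothing is lost while the remote is down
        console.error(`delivery to ${msg.target} failed; queued for retry`, err);
      }
    }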

How can I protect my server from repeated queries on port 80?

I have a very simple server running WAMP on a Windows machine, with PHP code providing a simple API that returns XML to my clients. The thing is, the hardware is very modest, and if a user calls the API link and hits F5 many times (calls the link repeatedly), server performance drops a little (response time goes up). Is there a way to limit the queries on port 80?
I know how to limit this in the PHP code, but I don't think that's good practice: even if you limit the queries in PHP, the request has already been made, and I'm consuming resources just checking whether the user is making too many queries.
Well, if you want to catch it before it reaches PHP, an Apache module would be one approach, e.g. mod_cband. Other than that, your firewall might help you, but I don't know whether the default Windows one is up to that.
That said, handling it in your PHP code wouldn't be that bad. Yes, checking a DB consumes time, but it's still faster than collecting and returning the XML.
Implement access control to the resources, keep track of active sessions and don't initiate heavy tasks while that particular user has a task open...?
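For what it's worth, the in-application check dismissed in the question is usually cheap if it runs in memory before any heavy work. Here is a minimal fixed-window limiter, sketched as Express-style middleware in TypeScript; the limits are arbitrary examples, and the same bookkeeping ports to PHP:

    // Fixed-window rate limiting per client IP: cheap map lookups happen
    // before any XML is collected or returned.
    const WINDOW_MS = 10_000; // 10-second window
    const MAX_HITS = 20;      // allowed requests per IP per window

    const hits = new Map<string, { count: number; windowStart: number }>();

    function rateLimit(
      req: { ip: string },
      res: { statusCode: number; end: (body?: string) => void },
      next: () => void,
    ): void {
      const now = Date.now();
      const entry = hits.get(req.ip);
      if (!entry || now - entry.windowStart > WINDOW_MS) {
        hits.set(req.ip, { count: 1, windowStart: now }); // new window
        return next();
      }
      if (++entry.count > MAX_HITS) {
        res.statusCode = 429; // Too Many Requests
        return res.end("rate limit exceeded");
      }
      next();
    }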

Stop Monitoring SQL Services for Registered Servers in SSMS

Question: Is it possible to stop SSMS from monitoring the service status of registered servers?
Details:
SSMS 2008 monitors the service status of every registered server. From what I have seen, it reaches out to every registered server every minute or so to check its status; in my case that is over 100 servers. This has raised issues with our Security and Network departments. Network initially flagged it as suspicious traffic because it appeared that an unknown utility was scanning the network for SQL Servers. Security was concerned because the Security event logs on each server are being filled with my logon events.
I have looked all over for a setting but can't seem to find one. Am I missing it somewhere?
TIA,
Brian
I finally found an answer!!
While it does not appear possible (at least as far as I've found) to stop SSMS from checking the service status of registered servers, it is possible to change the interval at which it checks.
The short version is to create the following registry keys (DWORD):
(SQL Server 2008)
HKLM\Software\Microsoft\Microsoft SQL Server\100\Tools\Shell | PollingInterval = 600 (decimal)
(SQL Server 2005)
HKLM\Software\Microsoft\Microsoft SQL Server\90\Tools\Shell | PollingInterval = 600 (decimal)
This will make SSMS connect automatically every minute instead of every few seconds.
See this MS Connect Post for details.
Since it doesn't appear that there's any way to stop these status checks by SSMS, can you focus on helping them to see their harmlessness?
Can the network group allow certain exceptions to this particular rule (pinging servers on port 1433) in their scanning software, which would allow you and your group to monitor SQL Server uptime? Even if you weren't using SSMS, this type of sweeping monitoring activity is pretty common, and you'll know the requests will only ever come from a handful of workstations.
I don't think these SQL status checks generate any more events in the security log than any other activity, so maybe they were just concerned because it was something they weren't expecting. Could the security group be convinced that these events aren't dangerous, again as long as they're coming from certain approved workstations?
If neither of these is an option (or even if it is), you could help mitigate the problem by not connecting to all your SQL servers at once. Maybe just connect to the ones you need at the time - it looks like loading the entire list actively connects to each of them, but just connecting to the ones you intend to use in that session might help reduce the number of network sessions open.
I hope this helps - if it doesn't, or you've got some additional input that might help find a workaround, please post it!