Why are Network Security Groups created automatically in Azure, and why is no Subnet bound to them? - azure-virtual-network

Some pools in my Batch accounts were created using this VNet and Subnet. Usually when we create a VNet, we bind an NSG to its Subnet, but this Subnet has no corresponding NSG attached.
So I don't understand why the irregular NSG in the picture was created, and why it has no Subnet associated with it.
Why did the 'deployIfNotExists' Policy action fail?
Does anyone know about these?

When you create an Azure Batch pool in a virtual network, you don't have to specify NSGs at the virtual network subnet level, because Batch configures its own NSGs.
Batch automatically allocates additional networking resources in the
resource group containing the VNet.
For each 100 dedicated or low-priority nodes, Batch allocates: one
network security group (NSG), one public IP address, and one load
balancer. These resources are limited by the subscription's resource
quotas. For large pools, you might need to request a quota increase
for one or more of these resources.
You can also check the activity log of the network security group; the events are initiated by Azure Batch.
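If you would rather confirm this programmatically than through the portal, a rough sketch along these lines should work (assuming the azure-identity and azure-mgmt-monitor Python packages; the subscription ID and resource group name are placeholders):

```python
# Sketch: list activity-log events for the resource group that holds the VNet,
# to see that the NSG operations were initiated by Azure Batch.
from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

subscription_id = "<subscription-id>"        # placeholder
resource_group = "<vnet-resource-group>"     # placeholder

client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

# Activity-log entries from the last 7 days for that resource group.
start = (datetime.now(timezone.utc) - timedelta(days=7)).isoformat()
flt = f"eventTimestamp ge '{start}' and resourceGroupName eq '{resource_group}'"

for event in client.activity_logs.list(filter=flt):
    # Only look at events that touched a network security group.
    if "networkSecurityGroups" in (event.resource_id or ""):
        print(event.event_timestamp, event.operation_name.localized_value, event.caller)
```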

Related

CloudSQL Replicas Load Balance

When a read replica is created, two IPs are assigned: one to the master and one to the read replica.
So when an application is connected to Cloud SQL using the master IP, does it use only the master instance, or is it connected to both instances?
Does Cloud SQL load balance the traffic among the replicas, or does the application have to connect to the replicas manually?
Is there a way to achieve this without manually connecting to each instance?
So when an application is connected to the CloudSQL using master IP,
does it only use the master instance or is it connected to both
instances?
When the client is connected to the IP address of the master, it is only connected to the master.
Does the CloudSQL load balance the traffic among the replicas or do
the application have to manually connect to the replicas?
Google Cloud SQL does not load balance. If you wish to distribute read-only traffic, the client must perform that function.
Is there a way to achieve this without manually connecting to each
instance?
No. The client must connect to the master and the replicas itself to distribute read-only traffic. Logic must be present to send write traffic to the master only.
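Since that routing logic has to live in the application, here is a minimal sketch of what it can look like (the host IPs, credentials, and the simple round-robin policy are illustrative only; it assumes the mysql-connector-python driver):

```python
# Sketch: send writes to the master, rotate reads across the replicas.
import itertools
import mysql.connector

MASTER = {"host": "10.0.0.2", "user": "app", "password": "***", "database": "appdb"}
REPLICAS = [
    {"host": "10.0.0.3", "user": "app", "password": "***", "database": "appdb"},
    {"host": "10.0.0.4", "user": "app", "password": "***", "database": "appdb"},
]
_replica_cycle = itertools.cycle(REPLICAS)

def get_connection(readonly: bool):
    """Writes always go to the master; reads rotate across the replicas."""
    cfg = next(_replica_cycle) if readonly else MASTER
    return mysql.connector.connect(**cfg)

# Write path: master only.
conn = get_connection(readonly=False)
cur = conn.cursor()
cur.execute("UPDATE accounts SET active = 1 WHERE id = 42")
conn.commit()
conn.close()

# Read path: any replica.
conn = get_connection(readonly=True)
cur = conn.cursor()
cur.execute("SELECT id, name FROM accounts WHERE active = 1")
print(cur.fetchall())
conn.close()
```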
I wrote an in-depth article on this topic:
Google Cloud SQL for MySQL – Connection Security, High Availability and Failover

How to better utilize local cache with load balancing strategies?

I have an Authentication service where I need to cache some user information for better performance. I chose to use a local cache because the Authentication service will probably be called on each request, so I want it to be super fast. Compared to remote cache options a local cache is a lot faster (local cache access is below 1 ms, while remote cache access is around 25 ms).
The problem is that I cannot cache as much information as a distributed cache could without running out of memory (we are talking about millions of users). I can either leave it as it is, and when the local cache reaches its memory limit it will evict some other data, but that would make poor use of the cache. Or I can use some kind of load-balancing strategy where users are routed to the same Authentication service instance based on their IP address or some other criterion, so cache hits will be a lot higher.
It kind of defeats the purpose of having stateless services, but I think I can compromise slightly on this principle at the network layer if I want both consistency and availability. And for Authentication both are crucial for full security (user info always has to be up to date and available).
What kind of load balancing techniques are out there for solving this kind of problem? Can there be other solutions?
Note: Even though this question is specific to Authentication, I think many other services that are frequently accessed and require speed can benefit a lot from using local caches.
So - to answer the question here - load balancers can handle this with their hashing algorithms.
I'm using Azure a lot so I'm giving Azure Load Balancer as an example:
Configuring the distribution mode
Load balancing algorithm
From the docs:
Hash-based distribution mode
The default distribution mode for Azure
Load Balancer is a five-tuple hash.
The tuple is composed of the:
Source IP
Source port
Destination IP
Destination port
Protocol type
The hash is used to map traffic to the available servers. The
algorithm provides stickiness only within a transport session. Packets
that are in the same session are directed to the same datacenter IP
behind the load-balanced endpoint. When the client starts a new
session from the same source IP, the source port changes and causes
the traffic to go to a different datacenter endpoint.
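Beyond the default five-tuple hash, the general idea you are after is a stable hash of some client key (user ID, source IP, etc.) onto the set of instances, so the same user keeps hitting the same warm cache. A toy sketch of that idea (the instance names are made up; real load balancers use consistent hashing so that only a small share of keys move when instances are added or removed):

```python
# Sketch: a simple hash ring mapping a key to a backend instance.
import hashlib
from bisect import bisect

class HashRing:
    def __init__(self, backends, vnodes=100):
        # Several virtual nodes per backend smooth out the key distribution.
        self._ring = sorted(
            (self._hash(f"{b}#{i}"), b) for b in backends for i in range(vnodes)
        )
        self._keys = [h for h, _ in self._ring]

    @staticmethod
    def _hash(value: str) -> int:
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def backend_for(self, key: str) -> str:
        idx = bisect(self._keys, self._hash(key)) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing(["auth-1", "auth-2", "auth-3"])
print(ring.backend_for("user:12345"))   # the same user always maps to the same instance
print(ring.backend_for("203.0.113.7"))  # or hash on the client's source IP
```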

LDAP Fault-tolerance configuration (e.g SunOne)

Does anybody know how to configure fault tolerance for LDAP, e.g. SunOne LDAP?
I searched via Google without any useful results.
Thanks
Assuming that by "fault tolerance" you mean "high availability (HA)", I would say it can be achieved through redundancy, and it is not peculiar to SunOne or to directory server software from any other vendor.
There are different ways to solve this. It depends on the business requirements and the affordability. One method that comes to mind is to have the LDAP software installed on an HA pair. This requires hardware and OS capabilities for fail-over and it requires two servers (in a world of virtualization, "server" can mean different things [physical box, frame, LPAR, etc.]; so, I'll just leave the interpretation to the reader). When one server fails, the other server takes over and assumes the primary role in the pair. This is the fault-tolerance part. In this approach, the machine/server with the secondary role is passive (i.e., it's not serving clients) until the primary goes down. You will need to implement LDAP data replication between two servers. They can be two LDAP masters in a P2P replication topology.
Another method is to have multiple LDAP servers (i.e., masters, replicas) and cluster them using a network dispatcher (ND) software/appliance/etc., which distributes the incoming traffic to the individual servers (usually replicas) in the cluster. If you lose one replica in the cluster, the ND will not send any traffic to that replica until it comes back; the other replicas will still receive load and serve the incoming traffic. This is the fault-tolerance part in this method.
The degree of availability you want will also dictate what can be done in a clustered environment. You can have a single LDAP master (to which the organization's applications make updates) and keep it out of the cluster, but pair it with another server for fail-over, so you don't lose availability for updates from the applications. This also gives you the freedom to do maintenance on the master without interrupting your applications (well, they need to be written to be able to write to more than one LDAP master if the primary one is not available). You would have to have the secondary server receive replication from the primary in any case.
If the budget doesn't let you have more servers/replicas, you can put the master server in the cluster along with the replicas to help with the read traffic. Instead of an HA pair in which one of the servers is passive, you can also have two masters configured in a P2P replication topology and put them both in the cluster to help with the traffic. There are different ways to approach this method depending on the level of redundancy wanted or that can be afforded.
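On the client side, the applications have to take advantage of that redundancy themselves. A minimal sketch with the Python ldap3 package (hostnames, bind DN, and search base are placeholders) of pointing an application at a pool of replicas instead of a single server:

```python
# Sketch: client-side failover/round-robin across LDAP replicas.
from ldap3 import Server, ServerPool, Connection, ROUND_ROBIN

pool = ServerPool(
    [Server("ldap-replica1.example.com"), Server("ldap-replica2.example.com")],
    pool_strategy=ROUND_ROBIN,  # spread reads; an unreachable server is skipped
    active=True,                # keep re-checking servers for availability
    exhaust=False,
)

conn = Connection(pool, user="cn=app,ou=svc,dc=example,dc=com",
                  password="***", auto_bind=True)
conn.search("ou=people,dc=example,dc=com", "(uid=jdoe)", attributes=["cn", "mail"])
print(conn.entries)
```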

How to load balance url request to a dedicated weblogic node?

For performance reasons, I need to process one kind of request on a dedicated node. For example, I need to process all requests like http://hostname/report* on node1. So I added a rule in the load balancer to redirect http://hostname/report* to http://node1name/report*. But node1 asks me to log in again, even though I was already logged in at http://hostname/. How can I access it directly without logging in again?
As @JoseK mentioned, it looks like you don't have session replication and failover configured between the servers. You will need all of your application servers to be inside the same WebLogic cluster, and you will also have to pick the dedicated node as the secondary session-replication target for in-memory replication. You can dictate this by assigning the dedicated node to a specific machine, which is then selected as the secondary replication target for all cluster members.
Also, for session replication to work, all objects within your session have to implement Serializable.

Web App: High Availability / How to prevent a single point of failure?

Can someone explain to me how high availability ("HA") works for a web application? I assume HA means that there is no single point of failure.
However, even if a load balancer is used, isn't that itself a single point of failure?
I have found this article on the subject:
http://www.tenereillo.com/GSLBPageOfShame.htm
Basically, if you do not require long-lasting sticky sessions, you can configure your DNS servers to return multiple A records (IP addresses) for your website.
Web browsers are smart enough to try all the addresses until they find one that works.
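For illustration, a small sketch of that client-side behaviour (the hostname is a placeholder; browsers do this, plus smarter tricks like happy eyeballs, internally):

```python
# Sketch: resolve every address for a name and try each until one answers.
import socket

def connect_any(hostname: str, port: int = 443, timeout: float = 3.0) -> socket.socket:
    last_error = None
    for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
        hostname, port, type=socket.SOCK_STREAM
    ):
        try:
            return socket.create_connection(sockaddr[:2], timeout=timeout)
        except OSError as exc:   # this address is down, move on to the next one
            last_error = exc
    raise last_error or OSError(f"no reachable address for {hostname}")

sock = connect_any("www.example.com")
print("connected to", sock.getpeername())
sock.close()
```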
In simple words, high availability can be defined as running a system 24x7 without downtime even when there are hardware or software failures; in other words, a fault-tolerant application. This helps ensure uninterrupted use of the application for its intended users.
Read more on High Availability Deployment Architecture
It works the following way: you set up two HAProxy servers with heartbeat, so when one fails (stops responding to queries), it is removed from the cluster.
Requests from HAProxy can be forwarded to web servers in round-robin fashion, and if one web server fails, the HAProxy servers do not try to contact it until it is alive again.
Web servers store all dynamic information in a database, which is replicated across two MySQL instances.
As you can see, HAProxy and clustered MySQL (or simply MySQL replication), as well as IP clustering, are the key here.
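In toy form, the forwarding behaviour described above boils down to round-robin plus a health check that skips dead backends (the backend URLs and the /healthz endpoint are made-up placeholders, and HAProxy of course does this natively and far more efficiently; the sketch uses the requests package):

```python
# Sketch: round-robin over backends, skipping any that fail a health check.
import itertools
import requests

BACKENDS = ["http://web1.internal:8080", "http://web2.internal:8080"]
_cycle = itertools.cycle(BACKENDS)

def is_healthy(base_url: str) -> bool:
    try:
        return requests.get(f"{base_url}/healthz", timeout=1).ok
    except requests.RequestException:
        return False

def forward(path: str) -> requests.Response:
    # Try each backend at most once per request, in round-robin order.
    for _ in range(len(BACKENDS)):
        backend = next(_cycle)
        if is_healthy(backend):
            return requests.get(f"{backend}{path}", timeout=5)
    raise RuntimeError("no healthy backends available")

print(forward("/index.html").status_code)
```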
Sure it is, when operated alone. The usual highly available setup includes two or more load balancers running in a cluster, in either active/active or active/passive configuration. To further increase availability you can have two different Internet Service Providers (or geo-distributed datacenters), each running a pair of clustered load balancers. Then you configure DNS A records resolving to two distinct public IP addresses, which guarantees round-robin processing that splits DNS requests evenly (CloudFlare is very fast and reliable at this). There is also the possibility of returning the IP address of the datacenter closest to the originating geo location by using something like PowerDNS dnsdist.
This is what big players do to make their services highly available.
Please read https://docs.oracle.com/cd/E23824_01/html/821-1453/gkkky.html for more clarity. Actually, both load balancers use the same VIP (Virtual IP Address: https://techterms.com/definition/vip).
HA architecture is an entire field, and multiple books have been written on it, so it is hard to answer in a short paragraph.
To sum up the ideal situation: you would be using multiple servers, interconnected to a layer of multiple load balancers. The nodes and load balancers would be located in a few different data centers and connected to different network backbones. Ideally the data centers would be located all over the world.
In short, every component would have redundancy, including the load balancers.
For a starting point, see Wikipedia for High Availability Cluster