I'm reading about different strategies for load balancers, and one of them is hashing by the source IP (or the session ID). As far as I understand, the idea behind it is to tie every user to a specific server, which allows storing data about a particular user on the matching server. The examples I found are game servers (in case of disconnection, the user will be reconnected to the same server) and e-commerce websites (to keep cart items for users who aren't logged in).
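For concreteness, this is roughly how I picture the selection step (just my own sketch, not any particular load balancer's implementation):

```typescript
import { createHash } from "crypto";

// My mental model: hash the client's source IP (or session ID) and map it
// onto one of the backends, so the same client always lands on the same server.
function pickServer(clientIp: string, servers: string[]): string {
  const digest = createHash("sha256").update(clientIp).digest();
  const index = digest.readUInt32BE(0) % servers.length;
  return servers[index];
}

const backends = ["10.0.0.1", "10.0.0.2", "10.0.0.3"];
console.log(pickServer("203.0.113.7", backends)); // same IP -> same backend every time
// If servers.length changes (e.g. a node fails), this modulo mapping shifts
// for most clients - which is exactly the part I'm unsure about.
```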
However, this strategy looks off to me from the availability perspective. What happens if a node fails over? Will all users whose IP addresses hash to that node be unable to use the service?
If not - how is the fallback implemented (maybe you can reference a nice article)?
If yes - in what use cases is such a strategy justified? At first glance, using it for game servers and e-commerce doesn't seem like a good idea to me.
I'd like to implement a simple video chat system for students to tutor each other. I'm a one-man show, and would like a system I can run in a cost-effective way, starting with 10 users and hopefully scaling up as needed.
WebRTC seems like a great, low-latency, and cheap option for building this feature. However, if clients communicate directly, they must know each other's public IP. Is this a significant privacy or security issue?
What is the worst case scenario of somebody getting my IP address? Wouldn't any malicious actor have to get through my ISP to get my specific location?
Thanks!
If you host it yourself, WebRTC can be extremely cost-effective. I've been running the SFU at galene.org (disclaimer: I'm the main developer), which is used for multiple lectures with up to a hundred students. Even though this is a full-fledged SFU (and not a mere TURN server), hosting amounts to just over €6/month.
If your tutoring sessions involve just two or three people, then peer-to-peer WebRTC might be enough, but even then a TURN server will be required, especially if some of your users are on university networks. For larger groups, you will need to push your traffic through an SFU.
If you do peer-to-peer WebRTC, then any user can learn the IP of any user they are communicating with; this is most probably not an issue, since the IP addresses are likely already being disclosed elsewhere (e.g. in mail headers). If you go through an SFU, then the IP addresses are not deliberately disclosed, but they might still leak; for example, the SFU implementation mentioned above (Galene) discloses IP addresses when a user initiates a file transfer, since file transfers happen directly between clients, in a peer-to-peer fashion. (It may be possible to avoid this disclosure by setting the iceTransportPolicy field to relay in the PeerConnection constructor, but I haven't tested how effective it is.)
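For reference, forcing relay-only candidates would look roughly like this on the client (untested sketch; the TURN URL and credentials are placeholders):

```typescript
// Gather only relayed (TURN) candidates so peers never see each other's
// host or server-reflexive addresses. Untested; TURN details are placeholders.
const pc = new RTCPeerConnection({
  iceServers: [
    {
      urls: "turn:turn.example.org:3478",
      username: "user",
      credential: "secret",
    },
  ],
  iceTransportPolicy: "relay",
});
```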
WebRTC doesn't have to be P2P. You could run an SFU. Each user will upload their video to your server, and the server will distribute it via WebRTC. Then the users will never know each other's IPs.
I don't have any exact numbers, but it isn't expensive either. Your biggest expense will probably be bandwidth. Lots of open-source SFUs exist; this is a good list to get started.
I'm looking at using traffic mirroring with Istio to dark-test releases.
The mirrored traffic means that write APIs (order, payment, etc.) are called multiple times, which I don't want, or else I'll be charging the customer twice and sending them a duplicate product.
Is there a standard way to prevent this (stubbing seems an odd thing to do in production), or is mirroring only really applicable to read APIs?
Issue
There is a diagram of the mirroring setup with the traffic flows.
Although these requests are mirrored as "fire and forget" and the reply from the mirrored service is simply dropped (by the Envoy proxy sidecar) to /dev/null and not returned to the caller, it still calls this API.
Solution
As mentioned in the comments:
In my opinion, you should add a path for your testing purposes with some custom header, so this could be tested only by you or your organization, and the customer wouldn't be involved in that.
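A rough sketch of what such a guard could look like inside the order/payment service; the x-dark-test header name is made up for this example, and the -shadow check assumes Envoy's documented behaviour of appending that suffix to the Host/Authority header of mirrored requests (worth verifying for your Istio version):

```typescript
import { createServer, IncomingMessage, ServerResponse } from "http";

// Skip real side effects when the request is mirrored or marked as dark-test
// traffic. "x-dark-test" is a made-up header; the "-shadow" host suffix is
// what Envoy is documented to append to mirrored requests.
function isDarkTraffic(req: IncomingMessage): boolean {
  const host = req.headers.host ?? "";
  return req.headers["x-dark-test"] === "true" || host.includes("-shadow");
}

createServer((req: IncomingMessage, res: ServerResponse) => {
  if (req.url === "/payments" && req.method === "POST") {
    if (isDarkTraffic(req)) {
      // Pretend the charge happened; don't call the real payment provider.
      res.writeHead(202);
      res.end("accepted (dark test, no charge)");
      return;
    }
    // ... real payment logic would go here ...
    res.writeHead(201);
    res.end("charged");
    return;
  }
  res.writeHead(404);
  res.end();
}).listen(8080);
```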
This topic is described in detail here by Christian Posta.
When we deploy a new version of our service and mirror traffic to the test cluster, we need to be mindful of impact on the rest of the environment. Our service will typically need to collaborate with other services (query for data, update data, etc). This may not be a problem if the collaboration with other services is simply reads or GET requests and those collaborators are able to take on additional load. But if our service mutates data in our collaborators, we need to make sure those calls get directed to test doubles and not the real production traffic.
There are a few approaches you may consider, all of them are described in the link above:
Stubbing out collaborating services for certain test profiles
Synthetic transactions
Virtualizing the test-cluster’s database
Materializing the test-cluster’s database
In practice, mirroring production traffic to our test cluster (whether that cluster exists in production or in non-production environments) is a very powerful way to reduce the risk of new deployments. Big webops companies like Twitter and Amazon have been doing this for years. There are some challenges that come along with this approach, but there exist decent solutions as discussed in the patterns above.
I am working on an embedded system that should support SNMPv3, and I am wondering how to let the user add new USM users.
Is it reasonable to let the user add them via SNMP? Via HTTPS?
Thanks
Avner
It depends on where you are aiming with your product:
There are users out there that will only expect to receive some traps from your SNMPv3 device - those users might not necessarily have access to tooling that allows them to add users via SNMP (or don't want to be bothered with having to install and operate free tooling).
Whether it makes sense to implement a user-administration front end using HTTPS strongly depends on the amount of muscle your platform has. To me, that sounds a bit heavyweight.
Most SNMPv3 devices I have come across so far had a simple ssh command-line-based interface for user management, so I would expect that is common practice in the market. Remember, adding users is a one-time activity in most environments.
What you must allow in any case is changing users' encryption and authentication keys using the USM MIB - the whole SNMPv3 security concept ceases to be of any value when keys cannot be changed frequently (ideally, in an automated way).
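For context, the value a manager writes to usmUserAuthKeyChange is not the new key in the clear; it is derived from the old key, a random string, and the new key. Below is a rough sketch of the single-block case (key length equal to the digest length), as I read the KeyChange textual convention in RFC 3414; it is illustrative only, not a tested SNMP implementation, so verify it against the RFC:

```typescript
import { createHash, randomBytes } from "crypto";

// Single-block KeyChange computation (key length == digest length, as with
// MD5/SHA-1 localized keys). The agent recovers the new key as
// Hash(oldKey || random) XOR delta.
function keyChangeValue(oldLocalizedKey: Buffer, newLocalizedKey: Buffer): Buffer {
  const random = randomBytes(oldLocalizedKey.length);
  const temp = createHash("md5") // or "sha1", matching the user's auth protocol
    .update(Buffer.concat([oldLocalizedKey, random]))
    .digest();
  const delta = Buffer.alloc(temp.length);
  for (let i = 0; i < temp.length; i++) {
    delta[i] = temp[i] ^ newLocalizedKey[i];
  }
  // The value written to usmUserAuthKeyChange is random || delta.
  return Buffer.concat([random, delta]);
}
```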
I have come across the term SDN (software-defined networking). I have gone through some related webpages and understood that it is basically related to network virtualisation. I want to understand SDN from an application developer/programmer's perspective. For example, if I have created a set of websites and web services (in .NET), what would be different with SDN compared to a conventional network, in terms of development and deployment?
I would appreciate it if somebody could explain this with an example.
Thanks a lot.
Before I say anything, I should mention that everything I say here might change in the future. The reason is that this field is still an active research area and might see some modifications; even some use cases may be removed or added.
One thing I should maybe add is that SDN is not just network virtualization. It could be more than that. I think it is better to say that network virtualization is one of the applications/usages of SDN.
Let's consider ONOS:
ONOS implements a controller which Internet service providers would use to create paths and links. ISPs just specify the two endpoints, and then ONOS goes and sends flows to the different switches in order to make a path between them. ONOS calls this an "intent", that is, an intent is created between two points.
Imagine datacenter (DC) A wants to back up its data at 3 am tomorrow. With SDN, they can call their ISP and ask for a "path" at 3 am tomorrow with a bandwidth of 50 Gb/s. The two DCs may have a full view of the network or just a subset of it (i.e. a virtualized view). The ISP's SDN controller would then go and program the required switches using OpenFlow (OF) commands to create that path between the two DCs.
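To give a feel for what "sending flows" looks like from the outside: with ONOS you would typically just submit the intent to the controller's northbound REST interface and let it work out the flow rules. This is a hedged sketch; the endpoint, JSON shape, host IDs, and credentials are from memory of the ONOS REST API and are placeholders, so check the docs for your ONOS version:

```typescript
// Submit a host-to-host intent to an ONOS controller over its REST API.
// Endpoint, payload shape, host IDs, and credentials are assumptions/placeholders.
async function createIntent(): Promise<void> {
  const body = {
    type: "HostToHostIntent",
    appId: "org.onosproject.cli",
    one: "00:00:00:00:00:01/-1", // endpoint A (host id)
    two: "00:00:00:00:00:02/-1", // endpoint B (host id)
  };
  const res = await fetch("http://onos-controller:8181/onos/v1/intents", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: "Basic " + Buffer.from("onos:rocks").toString("base64"),
    },
    body: JSON.stringify(body),
  });
  console.log("intent submitted:", res.status); // ONOS compiles it into flow rules
}

createIntent().catch(console.error);
```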
SDN is mostly used for programming the network. For example:
As you said, network virtualization
Network access control -> kinda security
Datacenter optimization
Dynamic interconnects -> creating paths and stuff
Another possible usage is the following: let's say you have your website up, and a user/client wants to connect to your website and browse it. One could say the user needs to create a path from the client to your server using their ISP. However, I think this may be done by the browser for you. I doubt this is feasible, but I've heard of this usage too.
In general, I think SDN would not make a big change in terms of web browsing and web servers. Web browsing usually consists of just short connections and small amounts of packets. I think SDN makes more sense for long-lived connections like backing up a whole DC or streaming video. The reason is that configuring a path from client to host is a bit expensive and a bit time-consuming (at least with the current technology), so it is only worth it for connections that last a long time.
I hope it helps.
I know this is a difficult question but here it is, in context:
Our company has a request to build a WordPress website for a certain client. The caveat is that, on one day per year, for a period of about 20 minutes, 5,000 - 10,000 people will attempt to access the home page of this website. Their purpose: Only to acquire an outbound link to another site.
My concern is that, no matter what kind of hosting we provide, the server may reject connections once a certain number of concurrent connections is reached.
Any ideas on this?
This does not depend on WordPress. WordPress is basically software to render webpages: it helps you to quickly modify the content of a page. Other software, for instance Apache, accepts the connections and forwards the calls to, for instance, WordPress.
Apache can be configured to accept more connections; I think the default is about 200. Whether that is bad really depends on the workload. If the purpose is only to give out another URL, connections will be terminated fast, so that's not really an issue. If, on the other hand, you want to generate an entire page using PHP and MySQL, it can take some time before a client is satisfied; in that case 200 connections are perhaps not sufficient.
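To make that concrete: if the home page's only job is to hand out an outbound link, each request can be as cheap as a redirect. A minimal sketch in plain Node (nothing WordPress-specific; the target URL is a placeholder):

```typescript
import { createServer } from "http";

// Redirect every request to the outbound link; each connection is served and
// closed almost immediately, so even the default connection limit goes a long way.
const TARGET_URL = "https://example.org/"; // placeholder for the real outbound link

createServer((req, res) => {
  res.writeHead(302, { Location: TARGET_URL });
  res.end();
}).listen(8080, () => console.log("redirecting on :8080"));
```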
As B-Lat points out, you can use cloud computing platforms like Google App Engine or Microsoft Azure that provide a lot of server power but only bill their clients for the resources actually consumed. In other words, you can accept thousands of connections at once, but you don't need to pay for the other days, when clients visit your website less often.