Hangfire dashboard on several web servers

We have two servers and would like to deploy the Hangfire dashboard on both, so we can take down one of the machines for maintenance if/as needed. Both will access shared tables in the same database. They will not be load balanced, but they should both be accessible from separate internal URLs at the same time.
Are there any problems with deploying the dashboard on multiple IIS servers? Should each instance run its own BackgroundJobServer, or will this cause issues?
I have moved a related issue onto the Hangfire forums: https://discuss.hangfire.io/t/scheduled-job-starts-processing-twice/3076

Related

Build an app stack that has no single point of failure and is fault tolerant

Consider a three-tier app (web server, app server and database):
[Apache web server -> Tomcat app server -> database]
How do you build an app stack (leaving out the database part) that has no single point of failure and is fault tolerant?
IMHO, this is quite an open-ended question. How specific is a single point of failure - single app server, single physical server, single data centre, network?
A starting point would be to run your Tomcat and Apache servers in clusters. Alternatively, you can run separate instances, fronted with a load balancer such as HAProxy - except, to avoid a single point of failure, you will need redundancy on the load balancer as well. I recently worked on a project where we had two instances of a load balancer, fronted with a virtual IP (VIP). The load balancers communicated with two different app server instances, using a round-robin approach. Clients connected to the VIP in order to use the application, and they were completely oblivious to the fact that there were multiple servers behind it.
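A minimal sketch of that layout as an HAProxy configuration, with hypothetical addresses (the VIP itself would typically be managed by something like keepalived running on the pair of load balancers):

```
frontend app_vip
    bind 10.0.0.100:80            # the virtual IP that clients connect to
    default_backend tomcat_nodes

backend tomcat_nodes
    balance roundrobin            # alternate requests between app servers
    server app1 10.0.0.11:8080 check
    server app2 10.0.0.12:8080 check
```

The `check` keyword makes HAProxy health-check each app server and stop routing to one that goes down, which is what gives you the fault tolerance.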
As an additional comment, you may also want to look at space-based architecture - https://en.wikipedia.org/wiki/Space-based_architecture.

Can I place views, models, controllers on different EC2 server?

I am working on a very big social networking project in the Yii framework, where load balancing is an important issue.
What I need is:
I want to keep the three layers (models, views, controllers) on different Amazon EC2 servers so that load balancing can be done efficiently.
What can I do for that in Yii?
Any help?
For load balancing you should not separate the application across 3 different instances.
You should have the same app (with all the models, views and controllers) on several servers, and then, depending on each server's CPU and RAM usage, the load balancer will redirect the end user to the appropriate server.
I don't even know if separating the app is doable, and if it is, the user will have to wait much longer:
The front controller will call some models => one or several calls to the model server = some time
The front controller has to send the data to the view => more time
In the end the user will have waited longer than on a loaded server!
I'd highly recommend Amazon's Elastic Beanstalk service, as I'm using it for a project I'm developing which is also based on the Yii Framework.
The solution I use is to deploy my application on 3 servers and keep them in sync from a deployment server with rsync. My static content comes from a 4th server, but that setup puts all your code on 3 servers as 3 exact clones. IMO this is best, because with Amazon you can just spawn more clones if you need to scale.
Load balancing means that you serve a portion of the users on one server, another portion on a second server, and so on.
If you split up your models/controllers/views, you have misunderstood what load balancing is about.

Resque Workers from other hosts registered and active on my system

The Rails application I'm currently working on is hosted on Amazon EC2 servers. It uses Resque for running background jobs, and there are 2 such instances (a would-be production and a stage). I've also mounted the Resque monitoring web app at the /resque route (on stage only).
Here is my question:
Why are there workers from multiple hosts registered within my stage system, and how can I avoid this?
Some additional details:
I see workers from apparently 3 different machines, but I only managed to identify 2 of them: the stage (obviously) and the production. The third has a different hostname format (it starts with domU) and I have no clue what it could be.
It looks like you're sharing a single Redis server across multiple resque server environments.
The best way to do this safely is to use separate Redis servers, separate Redis databases, or separate namespaces. The redis-namespace gem can be used with Resque to isolate each environment's Resque queues and worker data.
I can't really help you with what the unknown one is, but I had something similar happen when moving hosts and having DNS names change. The only way I found to clear out the old ones was to stop all workers on the machine, fire up IRB, require 'resque', and look at Resque.workers. This will list all the workers Resque knows about, which in your case will include about 20 bogus ones. You can then do:
Resque.workers.each { |worker| worker.unregister_worker }
This should prune all the not-really-there workers and get you back to a proper display of the real workers.
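If you'd rather not unregister everything, worker ids encode their origin host ("hostname:pid:queues"), so you can filter for the foreign ones first. A small sketch of that idea (stale_worker_ids is a hypothetical helper, not part of Resque):

```ruby
require 'socket'

# Resque worker ids look like "hostname:pid:queue1,queue2". Keep only the
# ids registered from some other machine, which are the candidates to prune.
def stale_worker_ids(worker_ids, this_host = Socket.gethostname)
  worker_ids.reject { |id| id.split(':').first == this_host }
end

# Usage against a live Resque (not run here):
#   stale = stale_worker_ids(Resque.workers.map(&:to_s))
```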

AppFabric crashes when used on specific IIS site

I'm setting up an AppFabric caching cluster on a small webfarm (5 web servers).
The caching cluster is installed on the same servers that run IIS, if that matters.
I only use the AppFabric cache for my Model layer, meaning mostly business logic objects created from database queries. No page caching or similar.
This works just fine when enabled on the main website.
However on one of the 5 web servers there's a second IIS site, which hosts a couple of services, amongst others 3 WCF endpoints, as well as 2 old-school ASMX webservices.
When I enable AppFabric caching for this site, it tears the whole cluster down. A call to Get-CacheClusterHealth shows all 5 hosts are completely gone (100% in Unallocated named cache fractions).
The Model code is the exact same set of DLLs we use for the main website, so I doubt it's anything in the code (since the main site works).
I noticed this error in IIS -> AppFabric Dashboard: Error occurs while parsing service file myendpoint.svc
So that got me thinking: Could this be caused by the WCF endpoints somehow ?
There is a related question to this here:
AppFabric Cache server and web application on same physical machine
Microsoft doesn't recommend cache nodes being dual-use (i.e. also hosting applications). This could be the cause of your problem. We use an AppFabric cache cluster, but we dedicate those servers to AppFabric and nothing else. See the article from MS here:
AppFabric Caching Physical Architecture

Web Server being used as File Storage - How to improvise?

I am making a DR plan for a web application hosted on a production web server. That web server also acts as file storage, holding the feed upload files (used by the web application as input) and the report files (the output of the web application's processing). If the web server goes down, the file data is lost as well, so I need to design a solution and give recommendations that eliminate this single point of failure.
I have thought of some recommendations, as follows:
1) Use a separate file server; however, this requires new resources.
2) Attach a data volume to the web server, mapped to a network filer (network storage), which can be used to store the feeds and reports. If the web server goes down, the network filer can be mounted on and attached to the contingency web server.
3) There is one more web server, which is load balanced but not currently used as file storage. If we implement a feature that regularly backs up the file data to that second load-balanced web server, we can start using it if the first web server goes down. The backup can be done through a backup script, a separate Windows service, or a job that schedules the backup every night.
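The nightly copy in option 3 can be as simple as mirroring the upload/report directory to the second server's storage. A hedged sketch in Ruby (mirror_files and the paths are made up for illustration; a scheduled task would call it nightly):

```ruby
require 'fileutils'

# Copy every regular file under src into dest, preserving the relative
# layout, so the second web server holds an up-to-date copy of the files.
def mirror_files(src, dest)
  Dir.glob(File.join(src, '**', '*')).each do |path|
    next unless File.file?(path)
    relative = path.delete_prefix(src).delete_prefix('/')
    target = File.join(dest, relative)
    FileUtils.mkdir_p(File.dirname(target))
    FileUtils.cp(path, target)
  end
end

# e.g. mirror_files('/var/www/app/files', '/mnt/web2_backup/files')
```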
Please help me review the above, or suggest new recommendations to help eliminate this single point of failure on the web server. It would be highly appreciated.
Regards
Kapil
I've successfully used Amazon's S3 to store the "output" data of web and non-web applications. Using a service like that is beneficial from the single-point-of-failure perspective because then any other instance of that web application, or a different type of client, on the same server or in a completely different datacenter still has access to the same output files. Another similar option is Rackspace's CloudFiles.
Both of these services are very redundant, and you could use them as the backup and keep the primary storage on your server, or use them as the primary and keep a backup on your other web server. There are lots of options! Hope this info helps.