Job scheduler to be used across multiple web applications - asp.net-core

I have multiple web applications (.NET Core 3.1) that need to use one resource (another, third-party, legacy application) via its API. The servers are really terrible and I can only make a few concurrent requests at the same time (assume I can't change that).
I'm looking for a scheduler that can be used across all applications to manage the requests made to that legacy application.
For now each application uses its own Quartz.NET instance for various maintenance purposes. From what I know there is no way to "chain" those instances so that only a certain number of requests is made to that API at the same time across multiple Quartz schedulers (correct me if I'm wrong).
I was also looking at Hangfire, but I don't think it is possible there either. According to this github extension I can actually create one Hangfire server that is used by multiple applications, each one utilizing a different queue, but from what I can gather I cannot limit the number of workers per queue to control the total number of requests made.
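The closest partial workaround I can see (sketched below, with illustrative queue name, worker cap, and connection string) is to give the legacy API its own queue in a shared Hangfire storage and run a single dedicated process as the only server listening on that queue, with its total worker count capped -- but that means standing up one more process rather than truly limiting workers per queue:

```csharp
using Hangfire;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    // Runs in the ONE dedicated worker process; the web apps only enqueue.
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddHangfire(config =>
            config.UseSqlServerStorage("<shared Hangfire connection string>"));

        services.AddHangfireServer(options =>
        {
            options.ServerName  = "legacy-api-worker";
            options.Queues      = new[] { "legacy-api" }; // listen on this queue only
            options.WorkerCount = 3; // hard cap on concurrent legacy-API calls
        });
    }
}

public class LegacyApiJobs
{
    // The attribute routes the job to the capped queue.
    [Queue("legacy-api")]
    public void CallLegacyApi(string payload) { /* call the legacy API here */ }
}

// From any of the web applications (pointing at the same storage):
// BackgroundJob.Enqueue<LegacyApiJobs>(j => j.CallLegacyApi("..."));
```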
Any ideas?

Related

How to architect scheduled API to API integration

My organization moves data for customers between systems. These integrations are built in BizTalk and are mostly file-based, sometimes to/from APIs. More and more customers are switching to APIs, so we are facing more and more API-to-API integrations.
I'm mostly a backend developer but have been tasked with finding a more generic pattern or system for building these integrations; we are talking close to a thousand integrations.
But not thousands of different APIs: many customers use the same sort of systems.
What I want is a solution that:
Fetches data from the source API
Transforms the data to the format for the target API
Sends the data to the target API
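To make the shape concrete, here is a minimal sketch of what I mean (all interface and class names here are hypothetical, not an existing product):

```csharp
using System.Threading;
using System.Threading.Tasks;

// Hypothetical building blocks: one implementation pair per kind of system,
// reused across every customer that runs that system.
public class SourceData { public string Payload { get; set; } }
public class TargetData { public string Payload { get; set; } }

public interface ISourceApi   { Task<SourceData> FetchAsync(CancellationToken ct); }
public interface ITransformer { TargetData Transform(SourceData data); }
public interface ITargetApi   { Task SendAsync(TargetData data, CancellationToken ct); }

// One scheduled integration = (source, transformer, target);
// an external scheduler decides when RunAsync is invoked.
public class IntegrationJob
{
    private readonly ISourceApi _source;
    private readonly ITransformer _transformer;
    private readonly ITargetApi _target;

    public IntegrationJob(ISourceApi source, ITransformer transformer, ITargetApi target)
    {
        _source = source;
        _transformer = transformer;
        _target = target;
    }

    public async Task RunAsync(CancellationToken ct)
    {
        var data = await _source.FetchAsync(ct);   // 1. fetch from the source API
        var mapped = _transformer.Transform(data); // 2. transform to the target format
        await _target.SendAsync(mapped, ct);       // 3. send to the target API
    }
}
```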
Another requirement is that it should be possible to set a schedule for when these jobs should run.
This is easily done in BizTalk, but as mentioned there will be thousands of integrations, and if we need to change something in one of the steps it will be a lot of work.
My vision is something that holds interfaces to all the APIs we communicate with and also contains the scheduled jobs we want to run between them. Preferably with logging/tracking.
There must be something out there that does this?
Suggestions?
NOTE: No cloud-based solutions since they are not allowed in our organization.
You can easily implement this using the temporal.io open source project. You can code your integrations using a general-purpose programming language. Temporal ensures that the integration runs to completion in the presence of all sorts of intermittent failures. Scheduling is also supported out of the box.
Disclaimer: I'm a founder of the Temporal project.
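For a flavor of what this could look like, here is a minimal sketch assuming Temporal's .NET SDK (the Temporalio package); the shape follows Temporal's published samples, but treat the exact attribute and option names as assumptions to verify against the current SDK docs:

```csharp
using System;
using System.Threading.Tasks;
using Temporalio.Activities;
using Temporalio.Workflows;

// Activities perform the actual I/O; Temporal retries them on intermittent failures.
public class IntegrationActivities
{
    [Activity]
    public Task<string> FetchFromSource(string customerId)
        => Task.FromResult("raw data");   // call the source API here

    [Activity]
    public Task SendToTarget(string payload)
        => Task.CompletedTask;            // call the target API here
}

[Workflow]
public class IntegrationWorkflow
{
    [WorkflowRun]
    public async Task RunAsync(string customerId)
    {
        var options = new ActivityOptions { StartToCloseTimeout = TimeSpan.FromMinutes(5) };

        var data = await Workflow.ExecuteActivityAsync(
            (IntegrationActivities a) => a.FetchFromSource(customerId), options);

        // Transformation runs as deterministic workflow code; delivery is another activity.
        await Workflow.ExecuteActivityAsync(
            (IntegrationActivities a) => a.SendToTarget(Transform(data)), options);
    }

    private static string Transform(string data) => data; // placeholder mapping
}
```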

Any downside of running multiple hosted services within a .NET Core Windows Service?

Currently, we have a .NET Framework 4.7-based Windows service that we install through an MSI built using WiX. During install, we register multiple Windows services for the same exe, with the difference being the arguments passed to each service. It would look like Myapp.exe -instance 1, Myapp.exe -instance 2, and so on. Each instance uses a different configuration based on the instance number and polls a different IBM MQ queue to process messages. We install around 14 such instances.
Now that we are looking to migrate to .NET Core, we are wondering if it's worth changing this deployment model and instead moving to multiple instances of hosted services. With this, we would simply register the hosted service multiple times, but with different constructor parameters (sketched below). So I am trying to understand what the potential downsides of this approach could be. So far, I could think of a couple of them.
Since these run as independent processes, we currently have the ability to stop/start a specific instance of the Windows service. We would potentially lose that ability.
Since these run as independent processes, we can easily identify a memory spike in a specific instance of the Windows service, so for troubleshooting we can just focus on that instance. With a single executable, we lose this ability as well.
Apart from these, what other potential pitfalls might I come across with this approach?
Also, for the above two points, is there any workaround when using multiple hosted services?
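For reference, a minimal sketch of the registration I have in mind (MqPollingService is a hypothetical name). Note that AddHostedService<T>() registers by type and ignores duplicate registrations of the same type, so each instance is registered through an IHostedService factory instead:

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

public class Program
{
    public static void Main(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .UseWindowsService() // requires Microsoft.Extensions.Hosting.WindowsServices
            .ConfigureServices(services =>
            {
                // One hosted service per MQ instance, replacing the 14 exe instances.
                for (var instance = 1; instance <= 14; instance++)
                {
                    var number = instance; // capture for the factory closure
                    services.AddSingleton<IHostedService>(sp =>
                        ActivatorUtilities.CreateInstance<MqPollingService>(sp, number));
                }
            })
            .Build()
            .Run();
}

// Hypothetical worker: loads its instance-specific configuration and polls
// the corresponding IBM MQ queue inside ExecuteAsync.
public class MqPollingService : BackgroundService
{
    private readonly int _instanceNumber;
    public MqPollingService(int instanceNumber) => _instanceNumber = instanceNumber;

    protected override Task ExecuteAsync(CancellationToken stoppingToken)
        => Task.CompletedTask; // poll MQ here until stoppingToken is cancelled
}
```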
I'm not sure specifically about Windows services, but I had the same question for microservices. I think in general there isn't much in it either way, but some things to consider:
All services go down if you need to deploy a new one (but if they are all the same, you are more likely to update all of them at the same time)
Coordinating between them (if necessary) might be easier (locks, transactions, etc.) if they are together, but likewise might allow you to do things that break encapsulation, just because you can
They would all start and stop at the same time in a single service; if you want to control them separately, you will need either an external enable/disable mechanism or separate Windows services.
If you will ever need to separate them e.g. onto separate machines, you will have to do the risky work of separating them later.
It sounds like they are largely identical, just targeting different data, so I can't think of anything that would be a problem.

Use IronWorker while reusing my existing work

My website is hosted on AWS Elastic Beanstalk (PHP). I use the Yii Framework as an MVC.
A while ago I wanted to run a SQL query every day. I looked up how to run crons on Beanstalk, and it seemed complicated to merge the concepts of cloud and cron. I ran into IronWorker (http://www.iron.io/worker) and managed to create a worker that is currently doing its job fine.
Today I want to run a more complex cron: look for notifications in my database, decide whether to send an email, build an email template, and send the email (via AWS SES).
From what I understand, worker files are supposed to be self-contained items, with everything they need to work.
However, I have invested a lot of time and effort in building my MVC. I have complex models, verifications, an email templating engine, etc...
It seems very difficult to use the work I've done to create an Iron Worker. Even if I managed to port all of my code to a worker (which seems like a great deal of work), it means anytime I make changes to my main code I need to make sure the worker also has those changes. It means I would have a "branch" of my code. Even more so if I want to create more workers in the future.
What is the correct approach?
Short-term, you could likely just use the scheduling capabilities in IronWorker and have the worker hit an endpoint in your application. The endpoint will then trigger the operations to run within your app environment.
Longer-term, we do suggest you look at more of a service-oriented approach, whereby you break your application up to be more loosely coupled and distributed. Here's a post on the subject. The advantages are many, especially around scalability and development agility.
https://blog.heroku.com/archives/2013/12/3/end_monolithic_app
You can also take a look at this Yii extension.
http://www.yiiframework.com/extension/yiiron/
We certainly don't want you to rewrite your app unnecessarily, but there are likely areas where you can look to decouple. I suggest creating a worker directory and making an effort to write the workers to be self-contained; that way, you could run them in a different environment and just pass payloads to the workers. (Push queues can also be used to push to these workers.) Once you get used to distributed async processing, it's a pretty easy process to manage.
(Note: I work at Iron.io)

Rails "sub-environment" -- still production (or test, etc.) but different

How should we best handle code that is part of a single Rails app, but is used in several different "modes"?
We have several different cases of an app that is driven from the same data sources (MySQL, MongoDB, SOLR) and shares core logic, assets, etc. across multiple different uses.
Background/details:
HTML vs REST API
A common scenario is that we have HTML and REST interfaces. These differences are handled through routing (e.g. /api/v1/user/new vs /user/new) -- with minor differences they provide the same functions. This seems reasonably clean to me.
Multi-tenant
Another common scenario is that the app is "multi-tenant", determined mainly by the subdomain of the URL, e.g. partner1.example.com and partner2.example.com (or a query-string parameter for API customers); each tenant has a number of features or properties that differ. This is handled by a filter in ApplicationController, using data largely stored in a set of tenant-specific database tables, with tenant-specific functionality encapsulated by methods. This also seems reasonably clean to me.
Offline Tasks
One scenario is that a great deal of the data is acquired through a very large number of tasks, running pretty much continuously: feed loaders, scrapers, crawlers, and other tasks of this sort ... the kinds of things you would find in a search engine, which is a large part of what we do. These tasks are launched on idle server instances and run periodically ... but are just rake tasks that are part of the app.
These tasks are characteristically different from our front-end code -- they update data, run calculations, do maintenance tasks and so on -- and some run for days (e.g. updating 30M documents from an external web service). In the end, these tasks create and keep fresh the core data that our front-end app uses.
This one doesn't seem as clean to me; in particular, in some cases these tasks are running and doing data updates at the same time as our application is using that data, so they occasionally need to defer to the front-end app when we're under peak loads.
Major Variants of the App
This last case is clearly wrong -- we have made major customizations of our app -- 15% or 20% different -- by making branches and then running them as entirely separate apps, sharing some of the core data sometimes, but using their own data other times. We have mostly fixed this now, as it was, of course, untenable.
OK, there's a question in here somewhere, right?
So in particular for the offline tasks I feel like the app really needs to be launched in a "mode" or perhaps "sub-environment". But we still have normal development, test, qa, demo, pre_release, production environments that have their own isolated data and other configuration parameters. For each of these, we want to be able to run, develop, test and deploy the various "modes" of the application.
Can anyone suggest an appropriate architecture that is similar to the declarative notions of standard Rails environments?
If the number of modes is ever-increasing:
Perhaps the offline tasks could be separated from the main app into their own application (or a parent abstract task, with actual tasks inheriting from it and deployed individually).
If the number of modes is relatively small and won't be changing often:
You could put the per-mode configuration into a config file, logically separate from the rest of the code. Then during the deployments, you would be able to provide a combination of (environment, mode, set of hosts) and get a good level of control of your environments while using the same codebase.
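As an illustration (the file name and keys are made up), that per-mode configuration could be as small as a YAML file keyed by mode and selected by an environment variable at boot, orthogonal to Rails.env:

```yaml
# config/modes.yml -- hypothetical file; one key per "mode", independent of Rails.env.
# Selected at boot with something like:
#   APP_MODE=crawler RAILS_ENV=production bundle exec rake crawl:run
# and read in an initializer via:
#   MODE = YAML.load_file(Rails.root.join("config/modes.yml")).fetch(ENV.fetch("APP_MODE", "web"))
web:
  run_offline_tasks: false
crawler:
  run_offline_tasks: true
  worker_threads: 8
  defer_to_front_end_under_load: true
```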

WebLogic work manager

I am new to WebLogic Server and I am using a work manager. I want to know what a work manager is and why we need it. What is the difference between a normal request without a work manager and one with a work manager?
I think the documentation is rather good on this subject.
WebLogic Server prioritizes work and allocates threads based on an execution model that takes into account administrator-defined parameters and actual run-time performance and throughput. Administrators can configure a set of scheduling guidelines and associate them with one or more applications, or with particular application components. For example, you can associate one set of scheduling guidelines for one application, and another set of guidelines for other application. At run-time, WebLogic Server uses these guidelines to assign pending work and enqueued requests to execution threads.
Essentially, with work managers you can attach a scheduling policy to an application to, for example, make sure that a specific application gets a fair share of the available computing resources in a heavy-load situation. Or you might want to restrict the maximum number of threads that can be allocated to an application, to prevent a buggy/untested application from bringing the whole application server to its knees. (But surely all apps have been tested not to do anything like that.... ;) )
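As an illustration, here is a sketch of an application-scoped work manager with a max-threads constraint in a deployment descriptor (the names and the cap are illustrative; check the WebLogic docs for the exact placement in your version):

```xml
<!-- In weblogic-application.xml: the max-threads-constraint keeps this
     application from monopolizing the server's thread pool. -->
<work-manager>
  <name>MyAppWorkManager</name>
  <max-threads-constraint>
    <name>MyAppMaxThreads</name>
    <count>10</count>
  </max-threads-constraint>
</work-manager>

<!-- Referenced from weblogic.xml so the web app's requests are scheduled by it: -->
<!-- <wl-dispatch-policy>MyAppWorkManager</wl-dispatch-policy> -->
```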
Outside of modifying the default allocation algorithms, the Work Manager is also useful if you are using a Foreign JMS Provider (such as IBM MQ) and need to process more than 16 messages at a time.