Concurrent deployment of multiple MobileFirst Server artifacts - ibm-mobilefirst

I use a batch procedure for deploying MFP v7 artifacts (wlapps and adapters).
The procedure is based on the standard ant tasks defined in worklight-ant-deployer.jar.
The MFP environment runs on a WAS cell and consists of a single AdminService application managing multiple WLRuntimes.
Is it possible to run two (or more) deploy tasks concurrently against different WLRuntime targets?
Furthermore, sticking to a single WLRuntime, is it possible to deploy multiple different artifacts concurrently?
Thanks in advance for any answer/comment.
Ciao, Stefano.

For a single WL runtime, all deployments are internally done sequentially. You can start the deployments concurrently, but internally one deployment is executed after the other, due to a transaction locking mechanism. If you start too many deployments in parallel, timeouts may occur, although this is rare. By default, a deployment transaction waits for 20 minutes before it may time out.
Note: starting deployments in parallel here means using the ant tasks, the wladm tool, or the REST service directly. In the MobileFirst Admin Console UI, the deploy buttons are disabled while another deployment transaction is ongoing, so from the UI it is not easily possible to start deployments in parallel; the UI tries to prevent that.
Note 2: the 20 minutes mentioned above applies to the locking mechanism itself. Ant/wladm have their own timeout parameters, which may be lower, so ant tasks might time out sooner than 20 minutes. See here.
For multiple WL runtimes, deployments can run concurrently. The locking mechanism is per runtime, so deployments in one WL runtime will not influence any other WL runtime.
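For illustration, a build driven by the ant tasks could parallelize across runtimes roughly like this. This is a sketch only: it assumes the wladm Ant task from worklight-ant-deployer.jar is already taskdef'd, and the runtime names, file names, and properties are made up; check the wladm Ant task reference for the exact element and attribute names.

<target name="deploy-two-runtimes">
  <parallel>
    <!-- Each branch targets a different runtime, so the per-runtime
         transaction locks never contend with each other. -->
    <wladm url="${mfpadmin.url}" user="${mfpadmin.user}" password="${mfpadmin.password}">
      <deploy-app runtime="runtimeA" file="appA.wlapp"/>
    </wladm>
    <wladm url="${mfpadmin.url}" user="${mfpadmin.user}" password="${mfpadmin.password}">
      <deploy-adapter runtime="runtimeB" file="adapterB.adapter"/>
    </wladm>
  </parallel>
</target>

Within a single runtime the same pattern still works, but the branches simply serialize on that runtime's transaction lock.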

Related

Scheduling a task in a Distributed Environment like Kubernetes

I have a FastAPI-based REST application with scheduled tasks that need to run every 30 minutes. The application runs on Kubernetes, so the number of instances is not fixed. I want the scheduled jobs to trigger from only one of the available instances, not from all running instances (which would create a race condition), so I need some kind of locking mechanism that prevents the schedulers from firing if one is already running. My app connects to a MySQL-compatible Aurora DB running on AWS. Can I achieve this with APScheduler, and if not, are there alternatives available?
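One common way to get the single-runner behavior described here is to let the database arbitrate via a MySQL named lock (GET_LOCK), which Aurora's MySQL compatibility includes. A minimal sketch, where the connection URL, lock name, and job body are illustrative placeholders:

from apscheduler.schedulers.background import BackgroundScheduler
from sqlalchemy import create_engine, text

engine = create_engine("mysql+pymysql://user:pass@aurora-host/mydb")

def scheduled_task():
    with engine.connect() as conn:
        # GET_LOCK returns 1 only for the session that wins the lock;
        # a timeout of 0 means losing instances skip this run instead of waiting.
        got = conn.execute(text("SELECT GET_LOCK('report_job', 0)")).scalar()
        if got != 1:
            return
        try:
            do_the_work()  # hypothetical job body
        finally:
            conn.execute(text("SELECT RELEASE_LOCK('report_job')"))

scheduler = BackgroundScheduler()
scheduler.add_job(scheduled_task, "interval", minutes=30)
scheduler.start()

Every instance still fires the scheduler, but only the lock winner does the work. Note that an instance whose timer fires slightly later could re-run the job after the winner releases the lock, so a "last run" timestamp check is often combined with the lock.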

Flink on yarn use yarn-session or not?

There are two methods to deploy Flink applications on YARN. The first is to use a yarn-session, with all Flink applications deployed into that session. The second is to deploy each Flink application on YARN as its own YARN application.
My question is: what's the difference between these two methods? Which one should be chosen in a production environment?
I can't find any material about this.
I think the first method saves resources, since only one JobManager (the YARN application master) is needed. But that is also its disadvantage, since the single JobManager can become a bottleneck as the number of Flink applications grows.
Both modes have their uses in production environments.
Session mode generally makes sense when you will be running a bunch of short-lived jobs, and want to avoid the overhead of starting up a cluster for each one. On the other hand, there are security implications, as any credentials available to any of the jobs will be accessible to all of the jobs. Cluster-per-job mode may use more resources overall, but is, in some sense, more straightforward.
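To make the two modes concrete, the submission commands differ roughly like this (flag names are from Flink 1.x; newer releases spell the per-job mode -t yarn-per-job):

# Session mode: one long-lived cluster on YARN; jobs share its JobManager
./bin/yarn-session.sh -d
./bin/flink run ./path/to/job-a.jar
./bin/flink run ./path/to/job-b.jar

# Per-job mode: each submission brings up, and later tears down, its own cluster
./bin/flink run -m yarn-cluster ./path/to/job-a.jar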

Bamboo remote agent pool

In Jenkins, we can define labels and group a number of build slaves under a label. The label can then be mapped to a job, so Jenkins automatically picks an available build slave from the pool and executes the job. Is something similar available in Bamboo to create a remote agent pool?
I hope I understood your question correctly, but anyway... There is a similar concept in Bamboo. There are two types of agents:
Local agents, which run as threads in the Bamboo server. Generally not recommended for bigger Bamboo instances, for performance and security reasons.
Remote agents, which are basically separate processes running the builds, ideally on a different machine, so the Bamboo server doesn't suffer from the extra hardware load.
The matching between jobs and agents is based on job requirements and agent capabilities, e.g.:
An agent defines capabilities, which effectively state what it can build and which tools are installed, e.g. .NET or a JDK.
A job/deployment environment defines requirements, which are needed to successfully accomplish the task, e.g. Git and Maven.
In the end, Bamboo tries to find an agent that provides the full set of capabilities a job/deployment environment requires.
Special rules apply if an agent is dedicated to a job or an environment, or if the agent is an elastic agent (running in EC2).
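To reproduce Jenkins-style labels, one approach is to give every agent in the intended pool the same custom capability and add a matching requirement to the job. As a sketch, capabilities of a remote agent can be declared in its bamboo-capabilities.properties file (the agent.pool key and its value below are illustrative custom entries, not Bamboo built-ins):

# bamboo-capabilities.properties on each agent in the pool
agent.pool=linux-builders
system.git.executable=/usr/bin/git

A job that requires agent.pool equals linux-builders will then only be routed to agents from that pool.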
More reading:
https://confluence.atlassian.com/bamkb/difference-between-local-agents-and-remote-agents-457703602.html
https://confluence.atlassian.com/bamboo/configuring-a-job-s-requirements-289277064.html
https://confluence.atlassian.com/bamboo/requirements-for-deployment-environments-838427584.html
https://confluence.atlassian.com/bamboo/dedicating-an-agent-629015108.html
https://confluence.atlassian.com/bamboo/managing-your-elastic-image-configurations-289277147.html

Deploying ASP.NET Core application to ElasticBeanstalk without temporary HTTP 404

Currently, ElasticBeanstalk supports ASP.NET Core applications only on Windows platforms (when using the web role), and with the Windows-based platform, you can't have Immutable updates or even RollingWithAdditionalBatch, for whatever reason. If the application is running with a single instance, you end up in the situation where the only running instance is being updated. (Possible reasons for running a single instance: saving cost because it is just a small backend service, or a service that needs a lot of RAM compared to CPU time, so it makes more sense to run one larger instance instead of multiple smaller ones.)
As a result, during deployment of a new application version, for a period of up to 30 seconds, you first get HTTP 503, then HTTP 404, and later HTTP 502 Bad Gateway, before the new application version actually becomes available. Obviously this is much worse than, e.g., using WebDeploy against a single server in a "classic" environment.
Possible workarounds I can think of:
Blue/green deployments: slow (because they depend on DNS changes), and they seem more suitable for "supervised" deployments than for automated deploy pipelines.
Modify the autoscaling group to enforce 2 active instances before deployment (so that EB can do its normal rolling update), then change back. However, it is far from ideal to mess with resources created and managed by EB (like the autoscaling group), and it requires a fairly complex script (you need to wait for the second instance to become active, wait for the rolling deployment, etc.).
I can't believe that these are the only options. Any other ideas? The minimal viable workaround for me would be to at least get rid of the temporary 404s, because they could seriously mislead API clients (or think of the SEO effect in the case of a website, if a search engine spider gets a 404 for every URL). As long as it is a 5xx, at least everybody knows it is just a temporary error.
Finally, in February 2019, AWS released Elastic Beanstalk Windows Server platform v2, which supports Immutable and Rolling with additional batch deployments and platform updates (as their Linux-based stacks have for ages):
https://docs.aws.amazon.com/elasticbeanstalk/latest/relnotes/release-2019-02-21-windows-v2.html
This solves the problem even for environments that (normally) run just one instance.
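With the v2 platform, the deployment policy can be selected through the usual option settings, for example in an .ebextensions config file. A sketch using the standard aws:elasticbeanstalk:command namespace (the file name is arbitrary):

# .ebextensions/deploy.config
option_settings:
  aws:elasticbeanstalk:command:
    DeploymentPolicy: Immutable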

NServiceBus Outbox with RavenDB persistence clean up / DeduplicationDataCleanup design

I'm looking into an issue where we're seeing CPU usage maxed out on a RavenDB instance used by an NServiceBus outbox implementation.
The design currently has all outbox workers configuring the deduplication data cleanup settings in their startup configuration, i.e. these settings:
// keep deduplication records for 7 days
endpointConfiguration.SetTimeToKeepDeduplicationData(TimeSpan.FromDays(7));
// run the cleanup task every minute
endpointConfiguration.SetFrequencyToRunDeduplicationDataCleanup(TimeSpan.FromMinutes(1));
If you've got multiple workers processing messages for the system, should the cleanup be implemented on each of them, or should it be treated like a cron job, where the cleanup process runs on just one of the workers, or on a dedicated system in the environment that is not a worker but serves more of a utility role?
I would imagine the latter; otherwise, if you scale out the workers, all of them will try to run the cleanup process every minute. Or am I misunderstanding how this configuration executes the cleanup?
Thanks
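If the intent is to run the cleanup from a single designated endpoint, the RavenDB persistence reportedly allows disabling the cleanup task on the other instances by passing an infinite timespan; a sketch, assuming the same configuration API as shown above:

// on all endpoints except the designated cleanup runner:
endpointConfiguration.SetFrequencyToRunDeduplicationDataCleanup(System.Threading.Timeout.InfiniteTimeSpan);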