Flink on YARN: use yarn-session or not? - hadoop-yarn

There are two ways to deploy Flink applications on YARN. The first is to start a yarn-session and deploy all Flink applications into that session. The second is to deploy each Flink application on YARN as its own YARN application.
My question is: what is the difference between these two methods? Which one should be chosen in a production environment?
I can't find any material about this.
I think the first method saves resources, since only one JobManager (YARN application master) is needed. But that is also its disadvantage, since that single JobManager can become a bottleneck as the number of Flink applications grows.

Both modes have their uses in production environments.
Session mode generally makes sense when you will be running a bunch of short-lived jobs, and want to avoid the overhead of starting up a cluster for each one. On the other hand, there are security implications, as any credentials available to any of the jobs will be accessible to all of the jobs. Cluster-per-job mode may use more resources overall, but is, in some sense, more straightforward.

Related

How to redirect the Apache log in Kubernetes

I have one namespace and one deployment (replica set). My Apache logs should be written outside the pod; how is that possible in Kubernetes?
You should specify more precisely what exactly you mean by "outside the pod", but as David Maze has already suggested in his comment, take a closer look at the Logging Architecture section in the official Kubernetes documentation. Depending on what you mean, a different solution may be optimal in your case.
As you can read there:
Kubernetes provides no native storage solution for log data, but you can integrate many existing logging solutions into your Kubernetes cluster ... Cluster-level logging architectures are described in assumption that a logging backend is present inside or outside of your cluster.
The three most popular cluster-level logging architectures mentioned there are:
Use a node-level logging agent that runs on every node.
Include a dedicated sidecar container for logging in an application pod.
Push logs directly to a backend from within an application.
The second solution is widely used. Unlike the third one, where pushing the logs has to be handled by your application container, the sidecar approach is application-independent, which makes it a much more flexible solution. To complicate matters slightly, it can itself be implemented in two different ways (a sketch of the first follows the list below):
Streaming sidecar container
Sidecar container with a logging agent
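A minimal sketch of the streaming-sidecar variant, assuming Apache is configured to write its access log to a file on a shared emptyDir volume; the pod name, image tags and log path are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: apache-with-log-sidecar        # hypothetical name
spec:
  containers:
  - name: apache
    image: httpd:2.4
    # Assumes Apache writes its access log to a file under the shared
    # volume instead of (or in addition to) stdout.
    volumeMounts:
    - name: logs
      mountPath: /usr/local/apache2/logs
  - name: log-streamer
    image: busybox:1.36
    # Stream the log file to this container's stdout so `kubectl logs`
    # and any node-level logging agent can pick it up.
    args: [/bin/sh, -c, 'tail -n+1 -F /usr/local/apache2/logs/access_log']
    volumeMounts:
    - name: logs
      mountPath: /usr/local/apache2/logs
  volumes:
  - name: logs
    emptyDir: {}
```

In the second variant, the busybox container would be replaced by a logging agent (e.g. fluentd) that ships the file to a backend outside the cluster.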

Mirror canary deployment in RTL?

I am new to canary deployments. We are going to start doing canary deployments via Istio.
I was assuming this would just be a deployment mechanism, probably with some Istio routing testing in a pre-prod env, but that in earlier test envs we'd ring-fence a version being tested as we do today.
It's been suggested that the canary concept be applied to all test environments, so we effectively run all versions we expect to canary test in prod throughout the Route To Live.
Wondering what approach others are taking?
Mirroring
As mentioned here
Using Istio, you can use traffic mirroring to duplicate traffic to another service. You can incorporate a traffic mirroring rule as part of a canary deployment pipeline, allowing you to analyze a service's behavior before sending live traffic to it.
If you're looking for best practices, I would recommend starting with this tutorial on Medium, because it is explained very well there.
How Traffic Mirroring Works
Traffic mirroring works using the steps below:
You deploy a new version of the application and switch on traffic mirroring.
The old version responds to requests like before but also sends an asynchronous copy to the new version.
The new version processes the traffic but does not respond to the user.
The operations team monitor the new version and report any issues to the development team.
As the application processes live traffic, it helps the team uncover issues that they would typically not find in a pre-production environment. You can use monitoring tools, such as Prometheus and Grafana, for recording and monitoring your test results.
Additionally, there is an example with nginx that shows nicely how it should work.
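To make the steps above concrete, here is a minimal sketch of an Istio VirtualService that mirrors traffic, modeled on the Istio mirroring task linked at the end of this answer; the httpbin host and the v1/v2 subsets (which would be defined in a separate DestinationRule) are illustrative:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: httpbin                 # hypothetical service
spec:
  hosts:
  - httpbin
  http:
  - route:
    - destination:
        host: httpbin
        subset: v1              # live traffic keeps going to the old version
      weight: 100
    mirror:
      host: httpbin
      subset: v2                # the new version gets an async copy of each request
    mirrorPercentage:
      value: 100.0              # mirror everything; lower this to sample
```

Responses from the mirrored subset are discarded, which is why the new version can process live traffic without ever answering the user.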
Canary deployment
As mentioned here
One of the benefits of the Istio project is that it provides the control needed to deploy canary services. The idea behind canary deployment (or rollout) is to introduce a new version of a service by first testing it using a small percentage of user traffic, and then if all goes well, increase, possibly gradually in increments, the percentage while simultaneously phasing out the old version. If anything goes wrong along the way, we abort and rollback to the previous version. In its simplest form, the traffic sent to the canary version is a randomly selected percentage of requests, but in more sophisticated schemes it can be based on the region, user, or other properties of the request.
Depending on your level of expertise in this area, you may wonder why Istio’s support for canary deployment is even needed, given that platforms like Kubernetes already provide a way to do version rollout and canary deployment. Problem solved, right? Well, not exactly. Although doing a rollout this way works in simple cases, it’s very limited, especially in large scale cloud environments receiving lots of (and especially varying amounts of) traffic, where autoscaling is needed.
These are the differences between a Kubernetes canary deployment and an Istio canary deployment:
k8s
As an example, let’s say we have a deployed service, helloworld version v1, for which we would like to test (or simply rollout) a new version, v2. Using Kubernetes, you can rollout a new version of the helloworld service by simply updating the image in the service’s corresponding Deployment and letting the rollout happen automatically. If we take particular care to ensure that there are enough v1 replicas running when we start and pause the rollout after only one or two v2 replicas have been started, we can keep the canary’s effect on the system very small. We can then observe the effect before deciding to proceed or, if necessary, rollback. Best of all, we can even attach a horizontal pod autoscaler to the Deployment and it will keep the replica ratios consistent if, during the rollout process, it also needs to scale replicas up or down to handle traffic load.
Although fine for what it does, this approach is only useful when we have a properly tested version that we want to deploy, i.e., more of a blue/green, a.k.a. red/black, kind of upgrade than a “dip your feet in the water” kind of canary deployment. In fact, for the latter (for example, testing a canary version that may not even be ready or intended for wider exposure), the canary deployment in Kubernetes would be done using two Deployments with common pod labels. In this case, we can’t use autoscaling anymore because it’s now being done by two independent autoscalers, one for each Deployment, so the replica ratios (percentages) may vary from the desired ratio, depending purely on load.
Whether we use one deployment or two, canary management using deployment features of container orchestration platforms like Docker, Mesos/Marathon, or Kubernetes has a fundamental problem: the use of instance scaling to manage the traffic; traffic version distribution and replica deployment are not independent in these systems. All replica pods, regardless of version, are treated the same in the kube-proxy round-robin pool, so the only way to manage the amount of traffic that a particular version receives is by controlling the replica ratio. Maintaining canary traffic at small percentages requires many replicas (e.g., 1% would require a minimum of 100 replicas). Even if we ignore this problem, the deployment approach is still very limited in that it only supports the simple (random percentage) canary approach. If, instead, we wanted to limit the visibility of the canary to requests based on some specific criteria, we still need another solution.
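For illustration, a minimal sketch of the two-Deployment approach described above, using the helloworld example: both Deployments carry the common app: helloworld label that the Service selects on, so the canary's traffic share is tied purely to the replica ratio (the images are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: helloworld
spec:
  selector:
    app: helloworld              # matches pods of both versions
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-v1
spec:
  replicas: 9                    # ~90% of traffic, via replica ratio only
  selector:
    matchLabels: {app: helloworld, version: v1}
  template:
    metadata:
      labels: {app: helloworld, version: v1}
    spec:
      containers:
      - name: helloworld
        image: example/helloworld:v1   # hypothetical image
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-v2
spec:
  replicas: 1                    # ~10% of traffic; 1% would need ~100 replicas total
  selector:
    matchLabels: {app: helloworld, version: v2}
  template:
    metadata:
      labels: {app: helloworld, version: v2}
    spec:
      containers:
      - name: helloworld
        image: example/helloworld:v2   # hypothetical image
```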
istio
With Istio, traffic routing and replica deployment are two completely independent functions. The number of pods implementing services are free to scale up and down based on traffic load, completely orthogonal to the control of version traffic routing. This makes managing a canary version in the presence of autoscaling a much simpler problem. Autoscalers may, in fact, respond to load variations resulting from traffic routing changes, but they are nevertheless functioning independently and no differently than when loads change for other reasons.
Istio’s routing rules also provide other important advantages; you can easily control fine-grained traffic percentages (e.g., route 1% of traffic without requiring 100 pods) and you can control traffic using other criteria (e.g., route traffic for specific users to the canary version). To illustrate, let’s look at deploying the helloworld service and see how simple the problem becomes.
There is an example.
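By contrast, a minimal Istio sketch of weight-based routing for the same helloworld service (a DestinationRule defining the v1 and v2 subsets is assumed), where the canary share is set declaratively and is independent of how many replicas each version runs:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
  - helloworld
  http:
  - route:
    - destination:
        host: helloworld
        subset: v1
      weight: 99                # 99% of requests stay on v1
    - destination:
        host: helloworld
        subset: v2
      weight: 1                 # 1% canary traffic without needing 100 pods
```

Each Deployment (and its autoscaler) can then scale up or down without changing this 99/1 split.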
There are additional resources you may want to check about traffic mirroring in Istio:
https://istio.io/latest/docs/tasks/traffic-management/mirroring/
https://itnext.io/use-istio-traffic-mirroring-for-quicker-debugging-a341d95d63f8
https://dev.to/peterj/mirroring-traffic-with-istio-service-mesh-2cm4
https://livebook.manning.com/book/istio-in-action/chapter-5/v-7/130
https://istio.io/latest/docs/tasks/traffic-management/traffic-shifting/#apply-weight-based-routing

Service Fabric - Local Cluster - Queuing

I am in a situation where I can use Service Fabric (locally) but cannot leverage Azure Service Bus (or anything "cloud"). What would be the corollary for queuing/pub-sub? Service Fabric is allowed since it is able to run in a local container, and is "free". Other 3rd party messaging infrastructure, like RabbitMQ, are also off the table (at the moment).
I've built systems using a locally grown bus, built on MSMQ and WCF, but I don't see how to accomplish the same thing in SF. I suspect I can have SF services use a custom ICommunicationListener that exposes msmq, but that would only be available inside the cluster (the way I understand it). I can build an HTTPBridge (in SF) in front of those to make them available outside the cluster, but then I'd lose the lifetime decoupling (client being able to call a service, using queues, even if that service isn't online at the time) since the bridge itself wouldn't benefit from any of the aspects of queuing.
I have a few possibilities but all suffer from some malady that only exists because of SF, locally. Also, the same code needs to easily deploy to full Azure SF (where I can use ASB and this issue disappears) so I don't want to build two separate systems just because of where I am hosting it in some instances.
Thanks for any tips.
You can build this yourself, for example like this. This uses a BrokerService that will distribute message-data to subscribed services and actors.
You can also run a containerized queuing platform like RabbitMQ with volumes.
By running the queue system inside the cluster you won't introduce an external dependency.
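As a rough sketch of that option, here is what a containerized RabbitMQ with a persistent volume could look like in Docker Compose form (Service Fabric can deploy compose files, or the same container and volume can be described in a ServiceManifest); the image tag and ports are the RabbitMQ defaults, the volume name is illustrative:

```yaml
version: "3.8"
services:
  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "5672:5672"        # AMQP port used by the services in the cluster
      - "15672:15672"      # management UI
    volumes:
      - rabbitmq-data:/var/lib/rabbitmq   # persist queue data across container restarts
volumes:
  rabbitmq-data:
```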
The problem is not SF. The main issue with your design is that you are coupling architectural requirements to implementations. SF runs on top of virtual machines; in the end, the only difference is that SF places the services on those machines, whereas with another solution you would have an agent deploying those services there, or you would deploy them manually. The challenges are the same.
It is clear from the description that your design requires a message queue. The concept of a queue is the same whether it is Service Bus, RabbitMQ or MSMQ; each of them has the basic foundations of queuing plus implementation-specific features: some add transactions, some implement multiple messaging patterns, and so on.
If you design against a specific implementation, you will couple your solution to it, make it hard to maintain, and face challenges like the ones you described.
Solutions like NServiceBus and MassTransit remove a lot of this coupling from your code, and if you think they are not enough, you can create your own abstraction. You then use configuration to tie your business logic to a concrete implementation.
Despite the above advice, I would not recommend using different solutions per environment, because, as said previously, each solution has its own implementation details and they do not behave identically. For example, you might face issues in production because you developed against MSMQ in the DEV and TEST environments but use Service Bus in production; they have different limitations, such as message size, retention period, and so on.
If you are willing to use MSMQ, you can add MSMQ to the VMs running your cluster and connect from your services without any issue. Take a look at this SO question first: How can I use MSMQ in Azure Service Fabric

Should all pods using a redis cache be constrained to the same node as the rediscache itself?

We are running one of our services in a newly created kubernetes cluster. Because of that, we have now switched them from the previous "in-memory" cache to a Redis cache.
Preliminary tests on our application, which exposes an API, show that we experience timeouts from our application to the Redis cache. I have no idea why, and the issue pops up very irregularly.
So I'm thinking maybe the reason for these timeouts is actually network-related. Is it a good idea to add affinity so we always run the Redis cache on the same nodes as the application, to prevent network issues?
The issues have not arisen during "very high load" situations, so it's concerning me a bit.
This is an opinion question so I'll answer in an opinionated way:
As you mentioned, I would try to put the Redis and application pods on the same node; that would rule out networking issues over the wire. You can accomplish that with Kubernetes pod affinity (see the sketch at the end of this answer). You can also try a nodeSelector; that way you always pin your Redis and application pods to a specific node.
Another way to do this is to taint your nodes where you want to run your workloads and then add a toleration to the Redis and your application pods.
Hope it helps!
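A minimal pod-affinity sketch for the co-location approach above, assuming the Redis pods carry the label app: redis-cache; the deployment name, labels and image are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api                        # hypothetical application deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: redis-cache   # co-schedule with pods carrying this label
            topologyKey: kubernetes.io/hostname   # "same node"
      containers:
      - name: api
        image: example/api:latest  # hypothetical image
```

Using preferredDuringSchedulingIgnoredDuringExecution instead makes the co-location a soft preference, so the application can still be scheduled when no node with a Redis pod has free capacity.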

Multiple Mobilefirst-Server artifacts concurrent deploy

I use a batch procedure for deploying MFP v7 artifacts (wlapps and adapters).
The procedure is based on the standard ant tasks defined in worklight-ant-deployer.jar.
The MFP environment runs onto a WAS cell, and consists of a single AdminService application managing multiple WLRuntimes.
Is it possible to run two (or more) deploy tasks concurrently against different WLRuntime targets?
Furthermore, sticking to a single WLRuntime, is it possible to deploy multiple different artifacts concurrently?
Thanks in advance for any answer/comment.
Ciao, Stefano.
For a single WL runtime, all deployments are internally done sequentially. You can start the deployments concurrently, but internally only one deployment is executed after the other, due to a transaction-locking mechanism. If you start too many deployments in parallel, you may run into timeout situations, though this is seldom the case. By default, a deployment transaction waits for 20 minutes before it may time out.
Note: starting deployments in parallel here means using the ant tasks, the wladm tool, or the REST service directly. In the MobileFirst Admin Console UI, you will see the deploy buttons disabled while another deployment transaction is ongoing, hence in the UI it is not really possible to start deployments in parallel; the UI tries to prevent that.
Note 2: the 20 minutes mentioned above is for the locking mechanism itself. Ant/wladm has its own timeout parameters that may be lower, hence with ant tasks you might get timeouts sooner than 20 minutes. See here.
For multiple WL runtimes, deployments can run concurrently. The mentioned locking mechanism is per runtime, so deployments in one WL runtime will not influence any other WL runtime.