How to get the session ID of a Quick Migration of a VM triggered through Veeam VBR?

I triggered a Quick Migration for one VM using Veeam VBR, from one vCenter to another vCenter. Currently it returns the session only once the migration completes. I need the session, or a session ID, to monitor or poll for the status/completion of the migration task once the Quick Migration is triggered.
Please let me know: is there a way to get the running session?

Related

AWS EMR: Run Job Flow where is the driver and Application Master located?

Where do the driver and application master run on EMR 6.9 with boto3.client('emr').run_job_flow(...), with regard to MASTER/CORE/TASK nodes?
This question is not about SSH'ing into the master node and executing spark-submit, as described in this blog by AWS. I think it is clear which process runs where in that case.
AWS documentation, probably for good reason, says the same thing Spark says about where the driver and application master run in both client and cluster mode. EMR's default resource manager is YARN, so this answer is accurate about how it works:
In client mode, the driver runs on the machine where the application was submitted, and that machine has to stay available on the network until
the application completes.
In cluster mode, the driver runs in the application master (one per Spark application), and the machine submitting the application need
not stay on the network after submission.
Okay, but I am submitting via the boto3 API, so on which node do the driver and AM reside? I would have thought the MASTER node, but this documentation by AWS makes it sound like the AM could run on the CORE or TASK nodes in EMR 6.x+.
What I am trying to understand with this question: I have an on-demand MASTER node that is a decent size, and Spot TASK nodes that are really small. If either the driver or the AM runs on a TASK node, I would upgrade that instance type.
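For context, the deploy mode is chosen per step in the spark-submit arguments passed to run_job_flow; whether the driver lives on the submitting side or inside the YARN application master follows from that flag. A minimal sketch (the step name, jar path, and other values are illustrative placeholders, not taken from the question):

```python
# Sketch: where --deploy-mode is set when submitting Spark via run_job_flow.
# All concrete values (step name, S3 path) are illustrative placeholders.
def spark_step(jar_path, deploy_mode="cluster"):
    """Build an EMR step dict. With --deploy-mode cluster, the driver runs
    inside the YARN application master; with client, it runs on the node
    that executes spark-submit (the MASTER node for EMR steps)."""
    return {
        "Name": "spark-app",
        "ActionOnFailure": "CONTINUE",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": [
                "spark-submit",
                "--deploy-mode", deploy_mode,
                jar_path,
            ],
        },
    }

# Passed as e.g.:
#   boto3.client("emr").run_job_flow(..., Steps=[spark_step("s3://bucket/app.jar")])
```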

The concurrent snapshot for publication xxx is not available because it has not been fully generated

I'm having trouble running replication on SQL Server 2019. In Replication Monitor, in the Distributor to Subscriber history section, I get this message:
The concurrent snapshot for publication xxx is not available because
it has not been fully generated or the Log Reader Agent is not running
to activate it. If the generation of the concurrent snapshot was
interrupted, the Snapshot Agent for the pub.
Is this message the cause of the replication I'm working on not running? I have tried various fixes that I found on the internet, and nothing worked.
Does anyone have a solution?

Scheduling a task in a Distributed Environment like Kubernetes

I have a FastAPI-based REST application where I need some scheduled tasks that run every 30 minutes. The application will run on Kubernetes, so the number of instances is not fixed. I want the scheduled jobs to trigger from only one of the available instances, not from all running instances (which would create a race condition), so I need some kind of locking mechanism that prevents a scheduler from firing if one is already running. My app connects to a MySQL-compatible Aurora DB running on AWS. Can I achieve this with APScheduler? If not, are there alternatives available?
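One common pattern here, since the app already talks to a MySQL-compatible database, is to guard each scheduled job with a MySQL advisory lock (GET_LOCK): every instance fires the job on schedule, but only the one that wins the lock actually runs it. A minimal sketch (the function name, lock name, and connection handling are my assumptions; GET_LOCK/RELEASE_LOCK are standard MySQL functions):

```python
# Sketch: run a scheduled job on exactly one instance using a MySQL
# advisory lock. The connection object is any DB-API connection to the
# MySQL-compatible database; names here are illustrative.
import contextlib

def run_if_leader(conn, lock_name, job):
    """Run `job` only if this instance wins the named advisory lock.

    GET_LOCK(name, 0) returns 1 if the lock was acquired immediately,
    0 if another session holds it; instances that lose simply skip.
    """
    with contextlib.closing(conn.cursor()) as cur:
        cur.execute("SELECT GET_LOCK(%s, 0)", (lock_name,))
        if cur.fetchone()[0] != 1:
            return False  # another instance is running this job; skip
        try:
            job()
            return True
        finally:
            cur.execute("SELECT RELEASE_LOCK(%s)", (lock_name,))
```

Each APScheduler trigger would then call something like run_if_leader(conn, "my_30min_job", do_work) instead of do_work directly, so concurrent instances coordinate through the database rather than through Kubernetes.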

What is the programmatic way to disconnect an agent in Bamboo?

Jenkins has a programmatic mechanism for taking nodes offline; is there an equivalent for Bamboo? Ideally I could trigger a disconnect after the agent finishes any currently executing jobs.
What is the programmatic way to disconnect a node in Jenkins?
You can achieve this using the Bamboo REST API. Here is a link to the specific call: DELETE /agent/{agentId}.
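A rough sketch of invoking that call from a script (only the DELETE /agent/{agentId} endpoint comes from the answer above; the /rest/api/latest path prefix, base URL, and basic-auth handling are my assumptions):

```python
# Sketch: calling the Bamboo REST API to remove an agent.
# Base URL, credentials, and path prefix are illustrative assumptions.
import base64
import urllib.request

def agent_url(base_url, agent_id):
    """Build the REST URL for a Bamboo agent resource."""
    return f"{base_url}/rest/api/latest/agent/{agent_id}"

def remove_agent(base_url, agent_id, user, password):
    """Issue the DELETE call with HTTP Basic auth."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(agent_url(base_url, agent_id), method="DELETE")
    req.add_header("Authorization", f"Basic {token}")
    with urllib.request.urlopen(req) as resp:
        return resp.status
```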

How to properly restart a Kafka S3 sink connector?

I started a Kafka S3 sink connector (the bundled connector from the Confluent package) on 1 May. It worked fine until 8 May. Checking the status, it reports that an AWS exception crashed the connector. This should not be a big problem, so I want to restore it.
I tried the following steps:
I POSTed /connectors/s3sink/restart. Then I saw the connector in RUNNING state, but the task was still FAILED.
Then I POSTed /connectors/s3sink/tasks/0/restart. OK, now the task is in RUNNING state.
But then, tailing the log, I found it had started to rewrite old data, such as data from 3 May, and it messed up the old data!
So, does the Connect restart REST API reset the offset? I thought it would save the offset and just resume from where it failed.
And how do I restart a failed connector task correctly? By deleting the pods (we run on Kubernetes), or via REST /tasks/0/restart? When should I use /connectors/s3sink/restart?
/connectors/:name/restart is a rolling restart operation coordinated by the worker leader that has to propagate to all the workers' tasks asynchronously, so you need to ensure network connectivity between the leader worker and all the others.
/connectors/:name/tasks/:num/restart sends the request straight to the worker running that task, restarting its thread.
A restart should not reset the offsets, since they are stored in the consumer offsets topic for that Connect cluster. If anything, the tasks were unable to commit offsets back to the __consumer_offsets topic, but you should see logs for that.
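The two restart endpoints discussed above can be scripted like this (the Connect host and connector name are placeholders; the endpoint paths are the standard Kafka Connect REST API routes):

```python
# Sketch: the two Kafka Connect restart endpoints, built and called
# via the stdlib. Host "connect:8083" and name "s3sink" are placeholders.
import urllib.request

def restart_connector_url(base, name):
    """Rolling restart of the whole connector (goes through the leader)."""
    return f"{base}/connectors/{name}/restart"

def restart_task_url(base, name, task_id):
    """Direct restart of a single failed task's thread."""
    return f"{base}/connectors/{name}/tasks/{task_id}/restart"

def post(url):
    """Both restart endpoints take an empty POST."""
    req = urllib.request.Request(url, method="POST")
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

For a single FAILED task, the usual flow is to check /connectors/s3sink/status first, then post(restart_task_url("http://connect:8083", "s3sink", 0)); the connector-level restart is for when the connector itself (not just a task) is in a bad state.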