How to send a customized test status in an automatically triggered e-mail after a GitLab pipeline job passes or fails - gitlab-ci

I have created a GitLab pipeline job that runs daily and automatically sends an e-mail with the build status, but I want to customise the e-mail content with the specific scenario number, status, and reason for the script failure.
How can I achieve this? Any suggestions, please.
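One common approach is to skip the built-in notification and send the mail yourself from a pipeline stage, so you control the body. Below is a minimal Python sketch of that idea; `SCENARIO` and `FAIL_REASON` are hypothetical variables your test stage would export, while `CI_JOB_STATUS` and `CI_PIPELINE_ID` are GitLab's predefined CI variables:

```python
import os
import smtplib
from email.message import EmailMessage

def compose_body(scenario, status, reason):
    """Build a custom e-mail body from test results."""
    lines = [
        f"Scenario : {scenario}",
        f"Status   : {status}",
    ]
    if status != "success":
        lines.append(f"Reason   : {reason}")
    return "\n".join(lines)

def send_report(smtp_host, sender, recipient):
    # CI_JOB_STATUS / CI_PIPELINE_ID are predefined GitLab CI variables;
    # SCENARIO and FAIL_REASON are assumed to be exported by your tests.
    status = os.environ.get("CI_JOB_STATUS", "unknown")
    body = compose_body(os.environ.get("SCENARIO", "n/a"), status,
                        os.environ.get("FAIL_REASON", "n/a"))
    msg = EmailMessage()
    msg["Subject"] = f"Pipeline {os.environ.get('CI_PIPELINE_ID', '?')}: {status}"
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content(body)
    with smtplib.SMTP(smtp_host) as smtp:
        smtp.send_message(msg)
```

A final pipeline stage with `when: always` could run this script so it fires on both success and failure.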

Related

TWS job failing. ERROR jobmon was unable to retrieve user information

I am configuring a TWS job on one of our application Windows servers, but when the job is triggered it throws the exception below. I am using a domain user to execute the job.
AWSBDW079E Jobman could not run the job because the system call used to launch the job failed: Jobmon was unable to retrieve user information.
Do I need any special permissions for the user to run the job?
Any help?
On Windows, TWS needs a userid and password in order to impersonate the user.
You have to specify the USER object with the appropriate workstation name, userid, and password.
With Dynamic Agents there are also other options using executable job types; e.g. you can store the password locally on the agent with the parm utility.
I have figured out the issue: it was the user permissions. The user that runs the job on the workstation must have the following permissions:
Allow log on locally
Log on as a batch job

azure application gateway alert automation

I need a PowerShell script to trigger an alert whenever an Azure Application Gateway backend health turns red. I have 2 subscriptions with around 150+ Application Gateways provisioned under them, so the script should be reusable across all subscriptions. It would be great if a sample script were available for reference.
Thanks
Suri
You can use this command:
Get-AzureRmApplicationGatewayBackendHealth
https://learn.microsoft.com/en-us/powershell/module/azurerm.network/get-azurermapplicationgatewaybackendhealth?view=azurermps-6.13.0
With this command, you can create a runbook inside an Automation Account, and if the output of the command matches a specific value, it can trigger an alert. A few minutes on Google will show you how to combine these techniques ;)
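The decision logic itself ("is anything red?") can be kept separate from the Azure calls, which makes it reusable across subscriptions. Here is a Python sketch of that check; the field names (`backendAddressPools`, `servers`, `health`) are assumptions modelled on the shape of the backend-health report, not verified API output:

```python
def unhealthy_backends(backend_health):
    """Return (pool name, server address) pairs whose health is not 'Healthy'.

    `backend_health` is assumed to be a dict following the backend-health
    report shape: a list of pools, each with HTTP-settings groups whose
    servers carry a 'health' field ('Healthy'/'Unhealthy'/'Unknown').
    """
    bad = []
    for pool in backend_health.get("backendAddressPools", []):
        name = pool.get("name", "<unnamed>")
        for http_settings in pool.get("backendHttpSettingsCollection", []):
            for server in http_settings.get("servers", []):
                if server.get("health") != "Healthy":
                    bad.append((name, server.get("address")))
    return bad
```

A runbook would loop over both subscriptions and every gateway, fetch each health report, and raise an alert whenever the returned list is non-empty.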

Splunk Alert with run a script action

Is there any way to run an external script with the source IP address (the source IP of the device that sent the alert to Splunk, the host= value in the event) as a variable?
The Splunk documentation lists a few variables, but none of them is the host.
I need to trigger a config download from SolarWinds upon a config change. All syslog messages are sent to Splunk, so when the alert is triggered it would run the script ./update $SOURCE_HOST.
You can trigger an alert on anything you like. If you want the alert to run a script, just parse out the information you need into a field so you can pass it to your script.
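As a sketch of that idea: legacy Splunk alert scripts receive the path to a gzipped CSV of the triggering search results as their eighth argument (that calling convention is the assumption here), so the script can read the `host` column itself and call `./update` once per host:

```python
import csv
import gzip
import subprocess
import sys

def hosts_from_results(rows):
    """Extract unique 'host' values from the alert's result rows."""
    seen = []
    for row in rows:
        host = row.get("host")
        if host and host not in seen:
            seen.append(host)
    return seen

def main():
    # In a legacy Splunk alert script, argument 8 is assumed to be the
    # path to the gzipped CSV of the search results that fired the alert.
    results_file = sys.argv[8]
    with gzip.open(results_file, mode="rt", newline="") as fh:
        hosts = hosts_from_results(csv.DictReader(fh))
    for host in hosts:
        # ./update is the config-download script from the question.
        subprocess.run(["./update", host], check=True)

if __name__ == "__main__" and len(sys.argv) > 8:
    main()
```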

Execute internal server script in grafana

I would like to know how to execute a script in response to an alert in Grafana.
I want to execute the script in a shell when the temperature is greater than 25 °C. The script connects to an ESX server and turns off all VMs.
I've created the script that connects to the ESX server, but I'm not sure how to call it from Grafana.
Use the Alert Webhook notifier. It sends a JSON document to the webhook URL every time an alert is triggered.
You will need to build some sort of backend service (in any language/web framework) that listens for HTTP requests. This service takes in the JSON document, parses it, and then shells out to execute your script.
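A minimal sketch of such a backend in Python, using only the standard library. The payload shape (a `state` field set to `"alerting"`) follows Grafana's legacy alert webhook, and the script path is hypothetical:

```python
import json
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

SHUTDOWN_SCRIPT = "/opt/scripts/shutdown_vms.sh"  # hypothetical path

def command_for(payload):
    """Decide what to run for a webhook payload.

    Assumes the legacy Grafana alert payload, which carries a 'state'
    field ('alerting', 'ok', ...). Returns the command to execute,
    or None if nothing should happen.
    """
    if payload.get("state") == "alerting":
        return [SHUTDOWN_SCRIPT]
    return None

class AlertHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        cmd = command_for(payload)
        if cmd:
            subprocess.run(cmd, check=False)
        self.send_response(200)
        self.end_headers()

def serve(port=8080):
    HTTPServer(("0.0.0.0", port), AlertHandler).serve_forever()
```

Run `serve()` on the host and point Grafana's webhook notification channel at `http://<host>:8080/`.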

How to solve the logon issue in sql job?

I have configured a SQL job that backs up the databases and then, in another step, transfers them to a remote location. On the command prompt my command works fine, but when I schedule it in a job I get this error:
Executed as user: administrator. Logon failure: unknown user name or bad password. 0 File(s) copied. Process exit code 0. The step succeeded.
I want to solve this issue, and I also want the job to report failure if the files do not get transferred, but it doesn't show any such message.
I just want the job to notify failure when no files get copied, i.e. "0 File(s) copied".
Thanks
Nitesh Kumar
One way is to use a Script Task to check whether there is a file to copy. If there is, the process can proceed; if not, the step can result in an error. You do this by adding
Dts.TaskResult = (int)ScriptResults.Failure;
to the end of the Script Task logic.
Anyway, I don't know your package design, so there might be more suitable ways.
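Another option, closer to what the question asks, is to wrap the copy command in a small script that parses the "N File(s) copied" summary and exits non-zero when nothing was copied, so the job step itself reports failure. A Python sketch (the summary wording is an assumption based on the English output of the Windows copy commands):

```python
import re
import subprocess
import sys

def files_copied(copy_output):
    """Parse the 'N File(s) copied' summary that copy/xcopy print.

    Returns the count, or None if the summary line is missing.
    (The exact wording is an assumption based on the English output
    of the Windows copy commands.)
    """
    match = re.search(r"(\d+)\s*File\(s\) copied", copy_output)
    return int(match.group(1)) if match else None

def main(cmd):
    result = subprocess.run(cmd, capture_output=True, text=True, shell=True)
    count = files_copied(result.stdout)
    if not count:
        # Nothing copied (or summary missing): fail the job step.
        sys.exit(1)

if __name__ == "__main__" and len(sys.argv) > 1:
    main(" ".join(sys.argv[1:]))
```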
The issue has been solved. The remote folder was shared and accessible to everyone; my command worked fine from the command prompt, and any user was able to create their own files in that location and delete files from it.
The issue was related to the user. My job was being executed as servername\administrator, and the remote location's administrator password had been changed, which caused the bad-password error. I told my IT team about the problem, they reset the server password to the old one, and my job began to work fine.
I just want to know how my SQL job authenticates the server login, as I went through the script of my job and found nothing helpful regarding authentication.
Can anyone explain it to me?
Thanks
Nitesh Kumar