Azure Application Gateway alert automation - azure-powershell

I need a PowerShell script that triggers an alert whenever an Azure Application Gateway backend health status turns red. I have 2 subscriptions with around 150+ Application Gateways provisioned across them, so the script should be reusable across both subscriptions. It would be great if a sample script is available for reference.
Thanks
Suri

You can use this command:
Get-AzureRMApplicationGatewayBackendHealth
https://learn.microsoft.com/en-us/powershell/module/azurerm.network/get-azurermapplicationgatewaybackendhealth?view=azurermps-6.13.0
With this command, you can create a runbook inside an Automation account, and if the output of the command reports an unhealthy backend, the runbook can trigger an alert.
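As a starting point, here is a rough sketch of such a runbook. It assumes an Automation Run As connection named AzureRunAsConnection and the AzureRM modules imported into the account, and it triggers the alert simply by failing the job so that an Azure Monitor alert on failed runbook jobs can notify you. Property names on the backend-health output may differ slightly between module versions, so treat it as a sketch rather than a finished script (the newer Az module equivalent of the cmdlet is Get-AzApplicationGatewayBackendHealth):

# Sketch only: enumerate every subscription, check every Application Gateway's backend
# health, and fail the runbook if anything is not "Healthy".
# The Run As service principal needs at least Reader access on every subscription.
$conn = Get-AutomationConnection -Name "AzureRunAsConnection"
Connect-AzureRmAccount -ServicePrincipal `
    -TenantId $conn.TenantId `
    -ApplicationId $conn.ApplicationId `
    -CertificateThumbprint $conn.CertificateThumbprint | Out-Null

$unhealthy = @()

foreach ($sub in Get-AzureRmSubscription) {
    Set-AzureRmContext -SubscriptionId $sub.Id | Out-Null

    foreach ($gw in Get-AzureRmApplicationGateway) {
        $health = Get-AzureRmApplicationGatewayBackendHealth `
                      -Name $gw.Name -ResourceGroupName $gw.ResourceGroupName

        foreach ($pool in $health.BackendAddressPools) {
            foreach ($settings in $pool.BackendHttpSettingsCollection) {
                foreach ($server in $settings.Servers) {
                    if ($server.Health -ne "Healthy") {
                        $unhealthy += "$($sub.Name) / $($gw.Name): $($server.Address) is $($server.Health)"
                    }
                }
            }
        }
    }
}

if ($unhealthy.Count -gt 0) {
    # Failing the job lets an Azure Monitor alert on failed runbook jobs notify you.
    throw ("Unhealthy Application Gateway backends found:`n" + ($unhealthy -join "`n"))
}

Write-Output "All Application Gateway backends are healthy."

Schedule the runbook on whatever interval you need and attach an alert rule to failed runs of this runbook.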

Related

Google Cloud Dataflow permission issues

Beginner in GCP here. I'm testing GCP Dataflow as part of an IoT project to move data from Pub/Sub to BigQuery. I created a Dataflow job from the topic page's "Export to BigQuery" button.
Apart from the issue that I can't delete a Dataflow job, I am hitting the following problem:
As soon as the dataflow starts, I get the error:
Workflow failed. Causes: There was a problem refreshing your credentials. Please check: 1. Dataflow API is enabled for your project. 2. Make sure both the Dataflow service account and the controller service account have sufficient permissions. If you are not specifying a controller service account, ensure the default Compute Engine service account [PROJECT_NUMBER]-compute@developer.gserviceaccount.com exists and has sufficient permissions. If you have deleted the default Compute Engine service account, you must specify a controller service account. For more information, see: https://cloud.google.com/dataflow/docs/concepts/security-and-permissions#security_and_permissions_for_pipelines_on_google_cloud_platform. , There is no cloudservices robot account for your project. Please ensure that the Dataflow API is enabled for your project.
Here's where it's funny:
Dataflow API is definitely enabled, since I am looking at this from the Dataflow portion of the console.
Dataflow is using the default Compute Engine service account, and that account exists. The link it points to says this account is created automatically and has broad access to the project's resources. Well, does it?
Dataflow jobs elude me. How can I tell a Dataflow job to restart, or edit or delete it?
Please verify the checklist below:
The Dataflow API should be enabled; check under APIs & Services. If you only just enabled it, wait a little while for the change to propagate.
The [project-number]-compute@developer.gserviceaccount.com and service-[project-number]@dataflow-service-producer-prod.iam.gserviceaccount.com service accounts should both exist. If the dataflow-service-producer-prod account did not get created, you can contact Dataflow support, or create it yourself and assign it the Cloud Dataflow Service Agent role. If you are using a Shared VPC, create it in the host project and assign the Compute Network User role.
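If you would rather check these from the command line than the console, the gcloud commands below cover the same checklist. This is only a sketch: the project ID and project number are placeholders, and it assumes the gcloud CLI is installed (shown here in PowerShell, but the gcloud commands themselves are the same in any shell).

# Placeholders -- substitute your own project ID and number.
$project       = "my-project-id"
$projectNumber = "123456789012"

# 1. Confirm the Dataflow API is enabled (and enable it if it is not).
gcloud services list --enabled --project $project | Select-String dataflow
gcloud services enable dataflow.googleapis.com --project $project

# 2. Confirm the default Compute Engine service account exists.
gcloud iam service-accounts describe "${projectNumber}-compute@developer.gserviceaccount.com" --project $project

# 3. The Dataflow service agent lives in a Google-managed project, so look for it in the
#    project's IAM policy rather than in the service account list.
gcloud projects get-iam-policy $project --flatten="bindings[].members" `
    --filter="bindings.members:dataflow-service-producer-prod.iam.gserviceaccount.com" `
    --format="table(bindings.role,bindings.members)"

# 4. If it is missing, bind the Cloud Dataflow Service Agent role to it.
gcloud projects add-iam-policy-binding $project `
    --member="serviceAccount:service-${projectNumber}@dataflow-service-producer-prod.iam.gserviceaccount.com" `
    --role="roles/dataflow.serviceAgent"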

Execute a Spoon job from other software

I have a job built in Spoon which runs without problems from the command line, but I would like to know if there is any software with which I can execute these jobs and watch the execution visually. The idea is to give the operations team a more pleasant way to run these tasks.
You have two solutions:
Carte:
Use the Carte server which ships with PDI. Install PDI on any server, launch Carte (specifying the port), and then you can execute/view/stop/restart jobs and transformations from any browser. Documentation is here.
Of course you can also launch a job/transformation from your own PDI: just define a new Slave server (left panel, View tab; default username/password = cluster/cluster), then each time you run a job/transformation choose the Carte server instead of Pentaho/local in the Run configuration.
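Carte also exposes simple HTTP endpoints, so you can check it from a script as well as from a browser. A minimal sketch, assuming Carte was started on port 8081 with the default cluster/cluster account (the host name is made up, and the XML element names may vary slightly between PDI versions):

# Query Carte's status servlet and list the jobs it knows about.
$password = ConvertTo-SecureString "cluster" -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential ("cluster", $password)

[xml]$status = (Invoke-WebRequest -Uri "http://my-carte-host:8081/kettle/status/?xml=Y" `
                    -Credential $cred).Content

# Each <jobstatus> entry carries the job name and its current state.
$status.serverstatus.jobstatuslist.jobstatus |
    Select-Object jobname, status_desc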
Logging:
If you just want to follow a job/transformation, you can use database logging: right-click anywhere, Parameters, Logging, Job/Transformation, then define a database, a table and a logging interval of 2 seconds.
Then every two seconds the line_read, line_written, errors, and log_field values are written to the database. This database can be read by an external process and displayed on screen or in a browser.
This method is used in the github/ETL-pilot project, which uses Tomcat (because you probably already have Tomcat running with a Pentaho server), but it can easily be adapted to Node.js or any other server. (If you do it and open-source it, please add a link to your work on our GitHub.)
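For a quick command-line view of that log table, something like the following can act as the "external process". It is only a sketch: it assumes the logging table was created in SQL Server as dbo.JOB_LOG with the default column names PDI proposes, so adjust the server, database, table and columns to whatever you configured.

# Poll the PDI job log table every two seconds and show the most recent entries.
Import-Module SqlServer

while ($true) {
    Invoke-Sqlcmd -ServerInstance "my-sql-server" -Database "pdi_logs" -Query @"
SELECT TOP (20) JOBNAME, STATUS, LINES_READ, LINES_WRITTEN, ERRORS, LOGDATE
FROM dbo.JOB_LOG
ORDER BY LOGDATE DESC
"@ | Format-Table -AutoSize

    Start-Sleep -Seconds 2   # match the 2-second logging interval
}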

Web Services - How to get failed backup jobs from NetBackup

I work with SharePoint. I was given a project where I need to call NetBackup web services and download all the failed backup jobs (Backup Status = failed, or something like it).
All I know is that they (the backup team) gave me a URL: http://netbk004/Operation/opscenter.home.landing.action? I have worked with asmx before, but I have no clue how to consume exceptions from NetBackup. Is there an API that comes with NetBackup that I can use to populate a SharePoint list? Or web services; it doesn't matter, as long as I can download the exceptions to a SharePoint list.
I'm not sure about doing it through the web service, but I know you can access the state of backup jobs by running the bpdbjobs command and parsing the output.
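A rough sketch of that approach is below, run on the master server. The executable path and the field positions are assumptions based on the comma-delimited -most_columns output (job ID, type, state, status, policy, schedule, client, ...); check the bpdbjobs entry in the NetBackup Commands Reference for your version before relying on them.

# Parse bpdbjobs output and keep only completed jobs with a non-zero (failed) status.
$bpdbjobs = "C:\Program Files\Veritas\NetBackup\bin\admincmd\bpdbjobs.exe"   # typical Windows path

$failed = & $bpdbjobs -report -most_columns |
    Where-Object { $_ -match '^\d' } |            # skip anything that is not a job row
    ForEach-Object {
        $f = $_ -split ','
        [pscustomobject]@{
            JobId  = $f[0]
            State  = [int]$f[2]                   # 3 = done (assumed)
            Status = [int]$f[3]                   # 0 = success, non-zero = failed
            Policy = $f[4]
            Client = $f[6]
        }
    } |
    Where-Object { $_.State -eq 3 -and $_.Status -ne 0 }

$failed | Format-Table -AutoSize
# From here each row could be written into a SharePoint list,
# e.g. with PnP PowerShell's Add-PnPListItem.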
Go to the NetBackup Activity Monitor, then filter the "Status" column with ">1".
This will give you all the failed jobs.

SQL 2012 - SSIS Package not populating Text file when scheduled

I'm working on SQL 2012 Enterprise and I have a set of SSIS package exports which push data out to text files on a shared network folder. The packages aren't complex and under most circumstances they work perfectly. The problem I'm facing is that they do not work when scheduled, despite reporting that they have succeeded.
Let me explain the scenarios:
1) When run manually from within BIDS, they work correctly: txt files are created and populated with data.
2) When deployed to the SSISDB and run from the Agent job, they also work as expected - files are created and populated with data.
3) When the Agent job is scheduled to run in the evening, the job runs and reports success. The files are created, but no data is written to them.
I've checked the reports in the Integration Services Catalogs and compared the messages line by line from the OnInformation events. Both runs report that the Flat File Destination wrote xxxx rows.
The data is there, the Agent account has the correct access. I cannot fathom why the job works when started manually, but behaves differently when scheduled.
Has anyone seen anything similar? It feels like a very strange bug....
Kind Regards,
James
Make sure that the account you have set up as the proxy for the SSIS task has read/write access to the file.
In my experience, when you run a SQL Agent job manually, it appears to use the context of the user who initiates it in some way. I always assumed it was a side effect of impersonation. It's only when the job actually runs on its schedule that everything uses the assigned security rights.
Additionally, I think when the user starts the job, the user is impersonating the proxy, but when the job is run via the schedule, the agent's account is impersonating the proxy. Make sure the service account has the right to impersonate the proxy. Take a look at sp_grant_login_to_proxy and sp_enum_login_for_proxy.
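For reference, those procedures live in msdb and can be driven from PowerShell as well. A small sketch, where the server, proxy name and login are placeholders for your own:

# Check who is allowed to use the proxy, then grant the Agent service account's login access to it.
Invoke-Sqlcmd -ServerInstance "MYSQLSERVER" -Database "msdb" -Query @"
EXEC dbo.sp_enum_login_for_proxy @proxy_name = N'SSIS_FileShare_Proxy';

EXEC dbo.sp_grant_login_to_proxy
     @proxy_name = N'SSIS_FileShare_Proxy',
     @login_name = N'DOMAIN\sqlagent_svc';
"@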
Here's a link that roughly goes through the process:
http://www.mssqltips.com/sqlservertip/2163/running-a-ssis-package-from-sql-server-agent-using-a-proxy-account/
I also recall this video being useful:
http://msdn.microsoft.com/en-us/library/dd440761(v=SQL.100).aspx
I had the same problem with Excel files. It was a permissions issue.
What worked for me was adding the SERVICE account to the folder's Security tab. Then the SQL Agent could access the files.
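The command-line equivalent of that Security-tab change is icacls. A sketch, with an assumed path and assuming a default instance whose Agent runs under the virtual account; for a remote share you would grant the Agent's domain service account (or the proxy account) instead:

# Give the SQL Server Agent account Modify rights on the export folder (inherited by new files).
icacls "D:\Exports" /grant "NT SERVICE\SQLSERVERAGENT:(OI)(CI)M"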

Mac terminal restriction

I'm building a web API that uses a command-line tool on the server.
This command does certain build tasks and it has many optional arguments.
I'm not keen on building a REST API that handles all the arguments and escapes/validates every security risk there is...
Is there any way to whitelist only one command so that it can be run only by a certain process/program or user?
Or can it be as easy as validating the command: if it contains ;, consider it unsafe to run?
Or can you create a little sandbox?
You could create a user (login account) on your server that runs your whitelisted command instead of a regular shell when they log in, if that's what you mean.
For example, I have worked at some sites where there is a user account called "reset" and another called "mountall". If you know the password of that account, then when you log in, certain databases get reset and you get logged out, or all filesystems in /etc/fstab get mounted and then you get logged out.
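On macOS that setup amounts to pointing the account's login shell at a wrapper script that just runs the whitelisted command. A rough sketch of the idea is below (shown in PowerShell 7 on the Mac, but it only shells out to the standard macOS tools; the account name and wrapper path are made up):

# Create a dedicated account whose "shell" is the whitelisted command, not a real shell.
$account = "buildrunner"
$wrapper = "/usr/local/bin/run-build.sh"   # hypothetical wrapper that execs your build command with fixed arguments

sudo sysadminctl -addUser $account         # create the account
sudo passwd $account                       # give it a password
sudo dscl . -create "/Users/$account" UserShell $wrapper

# Anyone who logs in (or SSHes in) as this account runs only the wrapper
# and is logged out as soon as it exits.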