Is there any way to show the status of a particular CodeBuild project as "Running" on a CloudWatch dashboard while its CloudWatch Logs stream is still running?

Is there any way to show the status of a particular CodeBuild project as "Running" on a CloudWatch dashboard while its CloudWatch Logs stream is still running?
I tried filtering for a keyword in the logs, but it isn't helping.
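Since CloudWatch dashboards have no native "build is running" widget for CodeBuild (as far as I know), one workaround is a small poller that looks up the latest build's status and publishes it as a custom metric, which a dashboard number widget can then display as a running indicator. A hedged sketch using boto3; the namespace and metric name are placeholder assumptions, not established conventions:

```python
# Sketch, assuming a scheduled poller (e.g. a Lambda on a 1-minute timer):
# publish 1 while the latest build is IN_PROGRESS, 0 otherwise, so a
# dashboard widget on the metric reads as a "Running" status.

def status_to_metric_value(build_status: str) -> int:
    """Map a CodeBuild buildStatus to 1 (running) or 0 (not running)."""
    return 1 if build_status == "IN_PROGRESS" else 0

def publish_build_status(project_name: str) -> None:
    import boto3  # deferred so the pure helper above needs no AWS SDK

    codebuild = boto3.client("codebuild")
    cloudwatch = boto3.client("cloudwatch")

    # Build ids for the project, newest first.
    ids = codebuild.list_builds_for_project(
        projectName=project_name, sortOrder="DESCENDING"
    )["ids"]
    if not ids:
        return
    build = codebuild.batch_get_builds(ids=[ids[0]])["builds"][0]

    # "Custom/CodeBuild" and "BuildRunning" are arbitrary names I chose.
    cloudwatch.put_metric_data(
        Namespace="Custom/CodeBuild",
        MetricData=[{
            "MetricName": "BuildRunning",
            "Dimensions": [{"Name": "ProjectName", "Value": project_name}],
            "Value": status_to_metric_value(build["buildStatus"]),
        }],
    )
```

Wiring this to an EventBridge schedule keeps the metric fresh without touching the logs at all, which sidesteps the keyword-filtering approach entirely.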

Related

View Spark UI for Jobs executed via Azure ADF

I am not able to view the Spark UI for Databricks jobs executed through a notebook activity in Azure Data Factory.
Does anyone know which permissions need to be added to enable this?
Update:
Ensure you have cluster-level permissions in the PROD environment.
Job permissions
There are five permission levels for jobs: No Permissions, Can View, Can Manage Run, Is Owner, and Can Manage. Admins are granted the Can Manage permission by default, and they can assign that permission to non-admin users.
Next:
I am able to view completed jobs in the Spark UI without any additional setup. It may be that what you are doing in the notebook does not constitute a Spark job.
Refer to Web UI and Monitoring and Instrumentation for more details.
Check out the Spark Glossary:
Note: a Job is a parallel computation consisting of multiple tasks that gets spawned in response to a Spark action (e.g. save, collect); you'll see this term used in the driver's logs.
View cluster information in the Apache Spark UI
You can get details about active and terminated clusters. If you restart a terminated cluster, the Spark UI displays information for the restarted cluster, not the historical information for the terminated cluster.
So, if I run a notebook with a task that simply displays my mounts, or with a task that failed with an exception, it is not listed in the Spark UI: #job/83 is not seen.
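The action-vs-transformation distinction above can be made concrete with a tiny illustrative helper. This is not a Databricks or Spark API, and the operation sets are a hand-picked subset from the Spark glossary, not an exhaustive classification:

```python
# Illustrative only: the Spark UI lists *jobs*, and a job is only spawned by
# a Spark action. Lazy transformations and non-Spark calls like
# dbutils.fs.mounts() never reach the scheduler, so a notebook that runs
# only such calls shows nothing in the Spark UI.
SPARK_ACTIONS = {"collect", "count", "save", "take", "first", "reduce", "show"}
NON_JOB_CALLS = {"map", "filter", "select", "dbutils.fs.mounts"}

def spawns_spark_job(operation: str) -> bool:
    """True if the operation is a Spark action (and hence UI-visible)."""
    return operation in SPARK_ACTIONS
```

In other words, a notebook that only inspects mounts or fails before reaching an action will legitimately leave no trace in the Spark UI.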

Permission denied while running azure databricks notebook as job scheduler

I am trying to run a job scheduler in Azure Databricks. While it runs various notebooks, it fails with the error below.
As mentioned by @santoznma in a comment, you don't have jobs access control enabled.
Enabling access control for jobs allows job owners to control who can view job results or manage runs of a job.
To enable it follow below steps:
1. Go to the Admin Console.
2. Click the Workspace Settings tab.
3. Click the Cluster, Pool and Jobs Access Control toggle.
4. Click Confirm.
Refer - Enable jobs access control for your workspace - Azure Databricks

Unable to run the "AzureActivity" query using Azure Monitor Logs

I am facing an issue while running the "AzureActivity" query in Azure Monitor Logs. I am using a free trial subscription; I created a VM and am trying to run the query, but it fails. Please find the screenshot below.
You can create a Log Analytics workspace, then:
1. In the Azure portal, go to your VM and open the Activity log page.
2. Click the Diagnostic settings button, then click Add diagnostic setting.
3. Send all the logs to the Log Analytics workspace.
Finally, try the query in that Log Analytics workspace.
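Once a diagnostic setting routes the Activity log into a workspace, the AzureActivity table exists there and the query should start returning rows. A hedged sketch of running it programmatically with the azure-monitor-query package (the workspace ID is a placeholder you must supply):

```python
# Sketch, assuming `pip install azure-monitor-query azure-identity` and that
# the Activity log is already flowing into the workspace; on a fresh
# free-trial workspace with no diagnostic setting, AzureActivity is empty
# or missing, which is the likely cause of the original failure.
from datetime import timedelta

QUERY = "AzureActivity | take 10"

def run_activity_query(workspace_id: str):
    # Deferred imports: non-stdlib Azure SDK packages.
    from azure.identity import DefaultAzureCredential
    from azure.monitor.query import LogsQueryClient

    client = LogsQueryClient(DefaultAzureCredential())
    return client.query_workspace(
        workspace_id=workspace_id,   # placeholder: your workspace's GUID
        query=QUERY,
        timespan=timedelta(days=1),
    )
```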

batch job occasionally gets "does not have bigquery.jobs.create permission" error after multiple successful queries

I have a python batch job running as a service account using bq.cmd to load multiple datastore backups.
It has been running successfully for 2 years, but recently in the middle of some runs (after multiple successful loads by the same user into the same dataset) it fails, continuously returning : "does not have bigquery.jobs.create permission".
Restarting the job, with no changes, usually succeeds.
bq.cmd load --quiet --source_format=DATASTORE_BACKUP --project_id=blah-blah --replace project-name:data_set_name.TableName gs://project-datastore-backup/2018-08-30-03_00_01/blahblah.TableName.backup_info
gcloud components are up to date.
Any suggestions welcome
There's a public bug with a similar issue which was resolved by recreating the service account. If you don't see any actual changes to the IAM permissions occurring in the logs, then I'd try with a new service account.
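If recreating the service account is not an option, a stopgap consistent with "restarting the job usually succeeds" is to retry the load only when this specific error appears. A sketch, assuming the error really is transient; the retry heuristic, attempt count, and delay are my assumptions, not documented behavior:

```python
# Sketch: wrap the bq load invocation and retry with a fixed backoff when
# stderr mentions the intermittent bigquery.jobs.create permission error.
import subprocess
import time

def is_transient_permission_error(stderr: str) -> bool:
    """Heuristic: treat the intermittent IAM error as retryable."""
    return "bigquery.jobs.create permission" in stderr

def run_load_with_retries(cmd: list, attempts: int = 3, delay: float = 30.0) -> int:
    for attempt in range(1, attempts + 1):
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode == 0:
            return 0
        if attempt < attempts and is_transient_permission_error(result.stderr):
            time.sleep(delay)  # back off before retrying the same load
            continue
        return result.returncode
    return 1  # only reachable if attempts < 1
```

`cmd` would be the same bq.cmd load argument list as in the question; non-permission failures still fail fast so real errors surface immediately.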

Why did my Elastic MapReduce job flow fail in AWS?

I created a job flow in AWS Elastic MapReduce using the Contextual Advertising (Hive Script) sample: I chose 'Start interactive Hive Session', selected m1.small instances, proceeded without a VPC subnet ID, and set Configure Hadoop in the Configure Bootstrap Actions step.
The job flow goes into the Starting state and, after 15-20 minutes, goes into the Failed state; it never reaches the Waiting state.
It shows: "Last State Change Reason: User account is not authorized to call EC2"
I gave myself PowerUserAccess through IAM, and I have also attached the policies below:
1. AmazonEC2FullAccess
2. AmazonElasticMapReduceFullAccess
3. IAMFullAccess
After attaching all of these policies, it still shows "User account is not authorized to call EC2".
Please guide me. Thanks.
EMR builds on other AWS services that you also need to be subscribed to. Granting IAM privileges to call EC2, S3, and EMR is not sufficient.
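One way to narrow down whether IAM is really the blocker is to simulate the calls EMR makes on your behalf. A hedged boto3 sketch; the action list is a plausible subset of what EMR needs, and note that a missing EC2/EMR service subscription on an older account would not show up in this simulation at all:

```python
# Sketch: use the IAM policy simulator to see which actions would be denied
# for the principal that launches the job flow.

def denied_actions(evaluation_results: list) -> list:
    """Pick out actions whose simulated decision is not 'allowed'."""
    return [
        r["EvalActionName"]
        for r in evaluation_results
        if r["EvalDecision"] != "allowed"
    ]

def check_emr_ec2_access(principal_arn: str) -> list:
    import boto3  # deferred so denied_actions stays testable without AWS

    iam = boto3.client("iam")
    resp = iam.simulate_principal_policy(
        PolicySourceArn=principal_arn,  # e.g. your IAM user's ARN
        ActionNames=[
            "ec2:RunInstances",
            "ec2:DescribeInstances",
            "elasticmapreduce:RunJobFlow",
        ],
    )
    return denied_actions(resp["EvaluationResults"])
```

An empty result from `check_emr_ec2_access` would suggest the policies are fine and the failure lies outside IAM, e.g. in the account's service subscriptions.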