Is it possible to get build logs using an API call?
gcloud builds log BUILD_ID
I have to do it from my Node.js app.
Thanks,
Yes.
The CLI command would be of the form:
BUILD_ID=[[SOME-BUILD-ID]]
gcloud logging read "resource.type=\"build\" resource.labels.build_id=\"${BUILD_ID}\" " \
--project=${PROJECT} ...
NB: If you augment the above command with the global --log-http flag, the output will include details of the underlying API methods. This is a good way to map gcloud commands to APIs.
The underlying API is logging.googleapis.com/v2
A good approach is to build the filter using Logs Viewer:
https://console.cloud.google.com/logs/viewer?project=${PROJECT}&advancedFilter=resource.type%3D%22build%22
Or, if like me, you like playing with jq:
BUILD_ID=...
gcloud logging read "resource.type=\"build\" resource.labels.build_id=\"${BUILD_ID}\" " \
--project=${PROJECT} \
--limit=50 \
--format="json" \
| jq -r .[].textPayload
You may interact with any Google API using the wonderful and understated APIs Explorer. Here's APIs Explorer pre-selected with Logging:
https://developers.google.com/apis-explorer/#search/logging/logging/v2/logging.entries.list
You mentioned using Node.js. Google provides SDKs for all its services in a bunch of popular languages and runtimes; here's a page describing the Logging API with Node.js examples:
https://cloud.google.com/logging/docs/reference/libraries#client-libraries-install-nodejs
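Since you're in Node.js, here is a minimal sketch of reading those build log entries with the @google-cloud/logging client library (npm install @google-cloud/logging). The project ID and build ID below are placeholders, and the filter is the same one used with gcloud logging read above:
// Minimal sketch: list Cloud Build log entries for one build via the Logging API.
const {Logging} = require('@google-cloud/logging');

async function getBuildLogs(projectId, buildId) {
  const logging = new Logging({projectId});
  const filter = `resource.type="build" resource.labels.build_id="${buildId}"`;
  // Same call that gcloud logging read makes (logging.googleapis.com/v2 entries.list).
  const [entries] = await logging.getEntries({filter, pageSize: 50});
  // For build logs the payload is text, exposed as entry.data.
  return entries.map((entry) => entry.data);
}

getBuildLogs('my-project', 'some-build-id')
  .then((lines) => lines.forEach((line) => console.log(line)))
  .catch(console.error);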
I have deployed the smart contract (Go) and the Fablo network, and I also have my own Fablo REST APIs, but I need to integrate this API with my web app. How can I do it?
I'm working with this repository: https://github.com/fablo-io/fablo-rest
I followed these steps from the readme file:
1- Run the script ./hyperledger-citc.sh to install some necessary software
2- Run the command sudo ./fablo recreate to start the network
3- You may open the file fablo-config.json to view the network components. It also includes the installation of the chaincode "asset-transfer-basic" in Golang
4- You may use the commands sudo ./fablo [down | start | stop | up | prune | reset] to interact with the network
5- We use Fablo Rest API (https://github.com/fablo-io/fablo-rest) to interact with chaincodes and execute Transactions.
6- Create an authorization token using the below command. This token expires in a few minutes, so it needs to be regenerated
I have learned that I can use the fetch API, but it only takes a URL as a parameter, and I don't have the URL of my API.
In the Fabric samples, check the rest-api-typescript project, which is available under asset-transfer-basic, or search for "Asset Transfer REST API Sample" in the Fabric samples to find the project info.
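If you want to stay with Fablo REST and fetch, the URL is simply wherever the fablo-rest container is exposed (port 8000 by default in a Fablo setup). Here is a rough sketch assuming the endpoints described in the fablo-rest README; the base URL, channel name, chaincode name, and enrollment credentials are placeholders you would adapt to your fablo-config.json:
// Rough sketch against Fablo REST; adjust BASE_URL, channel and chaincode names.
const BASE_URL = 'http://localhost:8000'; // wherever fablo-rest is exposed

// Get a (short-lived) authorization token, as in step 6 of the readme.
async function enroll() {
  const res = await fetch(`${BASE_URL}/user/enroll`, {
    method: 'POST',
    headers: {'Authorization': 'Bearer', 'Content-Type': 'application/json'},
    body: JSON.stringify({id: 'admin', secret: 'adminpw'}),
  });
  const {token} = await res.json();
  return token;
}

// Invoke a chaincode transaction with that token.
async function invokeChaincode(token) {
  const res = await fetch(`${BASE_URL}/invoke/my-channel1/asset-transfer-basic`, {
    method: 'POST',
    headers: {'Authorization': `Bearer ${token}`, 'Content-Type': 'application/json'},
    body: JSON.stringify({method: 'CreateAsset', args: ['asset1', 'blue', '5', 'Tom', '300']}),
  });
  return res.json();
}

enroll().then(invokeChaincode).then(console.log).catch(console.error);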
I am trying to use Google Cloud Functions to run whenever an insert happens on a table within a specific dataset in my GCP project. From what I've seen in other Stack Overflow questions, I know it is possible to use Eventarc (2nd gen) to listen to cloud events and trigger Cloud Functions. From looking at my BigQuery logs, I think what I want is for the Cloud Function to trigger when the logs match:
protoPayload.methodName:"google.cloud.bigquery.v2.JobService.InsertJob"
resource.labels.dataset_id:"the_specific_dataset"
However, after attempting to follow multiple guides, I'm hitting perplexing errors in cloud shell. Sources I've already tried to use: Google Codelabs and This Blogpost.
What I Ran
In Cloud Shell, I enabled everything needed to run it:
gcloud config set project org-internal
PROJECT_ID=org-internal
gcloud services enable run.googleapis.com
gcloud services enable eventarc.googleapis.com
gcloud services enable logging.googleapis.com
gcloud services enable cloudbuild.googleapis.com
REGION=us-central1
gcloud config set run/region $REGION
gcloud config set run/platform managed
gcloud config set eventarc/location $REGION
Then I configured my service account in Cloud Shell to have the roles eventarc.eventReceiver and pubsub.publisher:
PROJECT_NUMBER=$(gcloud projects describe $PROJECT_ID \
  --format='value(projectNumber)')
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member serviceAccount:$PROJECT_NUMBER-compute@developer.gserviceaccount.com \
  --role roles/eventarc.eventReceiver
Then I deployed to Cloud Run:
SERVICE_NAME=hello
gcloud run deploy $SERVICE_NAME \
  --image=gcr.io/cloudrun/hello \
  --allow-unauthenticated
I was able to successfully create a trigger for Cloud Pub/Sub, and it ran without issue. However, when I tried to apply the event filters specific to table inserts, I ran into issue after issue. Here's what I tried, with the errors:
gcloud eventarc triggers create $TRIGGER_NAME --destination-run-service=$SERVICE_NAME --destination-run-region=$REGION --event-filters="serviceName=bigquery.googleapis.com" --event-filters="protoPayload.methodName:google.cloud.bigquery.v2.JobService.InsertJob" --event-filters="resource.labels.dataset_id:the_specific_dataset"
The error: ERROR: (gcloud.eventarc.triggers.create) argument --event-filters: Bad syntax for dict arg: [protoPayload.methodName:google.cloud.bigquery.v2.JobService.InsertJob].
gcloud eventarc triggers create $TRIGGER_NAME --destination-run-service=$SERVICE_NAME --destination-run-region=$REGION --event-filters="type=google.cloud.audit.log.v1.written” --event-filters="serviceName=bigquery.googleapis.com" --event-filters methodName=google.cloud.bigquery.v2.JobService.InsertJob --event-filters resource.labels.dataset_id=the_specific_dataset
The error: ERROR: (gcloud.eventarc.triggers.create) INVALID_ARGUMENT: The request was invalid: invalid value for attribute 'type' in trigger.event_filters:
I tried a few other formatting variations (e.g. with quotes and without), but nothing seems to work. I guess my questions are: can I filter on "resource.labels.dataset_id" and on "methodName=google.cloud.bigquery.v2.JobService.InsertJob"? If so, what am I doing wrong?
In our small startup we use GitLab for development and Telegram for internal communication between developers and the PO. Since the PO would like to see progress immediately, we have set up the GitLab pipeline so that the preview version is deployed on the web server after each commit. Now we want to expand the pipeline so that, after the deployment, a notification is sent to the Telegram group.
So the question - is that possible, and if so, how?
EDIT: since I've already implemented that, that's not a real question. I wanted to post the answer here so that others can use it as well.
So, we'll go through it step by step:
Create a Telegram bot
Add bot to Telegram group
Find out Telegram group Id
Send message via GitLab Pipeline
1. Create a Telegram bot
There are good instructions from Telegram itself for this:
https://core.telegram.org/bots#6-botfather
The instructions do not say it explicitly, but to generate the token you have to go into a chat with the BotFather.
At the end you get a bot token, something like 110201543:AAHdqTcvCH1vGWJxfSeofSAs0K5PALDsaw
2. Add bot to Telegram group
Switch to the Telegram group, and add the created bot as a member (look for the bot by name).
3. Find out Telegram group Id
Get the update status for the bot in the browser:
https://api.telegram.org/bot<YourBOTToken>/getUpdates
Find the chat-id in the response:
... "chat": {"id": <YourGroupID>, ...
See this for more details: Telegram Bot - how to get a group chat id?
4. Send message via GitLab Pipeline
Send the message with a curl command. For example, an existing stage in the GitLab pipeline can be extended for this purpose:
upload:
  stage: deploy
  image: alpine:latest
  script:
    - 'apk --no-cache add curl'
    - 'curl -X POST -H "Content-Type: application/json" -d "{\"chat_id\": \"<YourGroupID>\", \"text\": \"CI: new version was uploaded, see: https://preview.startup.com\"}" https://api.telegram.org/bot<YourBOTToken>/sendMessage '
  only:
    - main
Remember to adapt the YourBOTToken and YourGroupID, and the text for the message.
*) We use the alpine Docker image here, so curl has to be installed ('apk --no-cache add curl'). With other images this may have to be done in a different way.
One easy way to send notifications (particularly if you're using multiple services or chats) is to use apprise.
To send to one telegram channel:
apprise -vv --body="Notify telegram chat" \
  tgram://bottoken/ChatID1
This makes it easy to notify many services from your pipeline all at once without needing to write code against the API of each service (apprise handles this for you).
image: python:3.9-slim # or :3.9-alpine if you prefer a smaller image
before_script:
  - pip install apprise # consider caching PIP_CACHE_DIR for performance
script: |
  # Send a notification to multiple telegram chats, a yahoo email account,
  # Slack, and a Kodi server with a bit of added verbosity:
  apprise -vv --body="Notify more than one service" \
    tgram://bottoken/ChatID1/ChatID2/ChatIDN \
    mailto://user:password@yahoo.com \
    slack://token_a/token_b/token_c \
    kodi://example.com
I have the latest version of Thruk installed with Naemon and Livestatus. I want to be able to post commands from a Python script to cmd.cgi on the same server without authentication getting in the way. I have tried the settings:
use_authentication=0
default_user_name=thrukadmin
but it doesn't seem to work in the Thruk GUI. When trying to post to the CGI from the Thruk GUI I get the error "I'm sorry Dave...".
Any thoughts on why this is not working right? The Apache server on that system uses LDAP to authenticate to the GUI; could this be an issue?
Other thoughts?
It's much easier than that; you don't even need Thruk in the middle. You can simply write to Naemon's command_file.
The external command list at https://www.naemon.org/documentation/developer/externalcommands/ contains an example for every possible command.
Here is a shell snippet which schedules a host downtime:
printf "[%lu] SCHEDULE_HOST_DOWNTIME;host1;1478648441;1478638441;1;0;3600;naemonadmin;This is an example comment.\n" `date +%s` > /var/lib/naemon/naemon.cmd
When using Thruk, you can use Thruk's CLI script to send commands:
thruk r -d comment_data=test /hosts/localhost/cmd/schedule_host_downtime
Authentication is only required if you want to send commands by HTTP.
Whether I have the Azure agent plugin for Jenkins create my container, or I create it manually, it seems like either way it never enters a running state.
az container create \
--os-type Windows \
--location eastus \
--registry-login-server SERVER.azurecr.io \
--registry-password PASSWORD \
--registry-username USERNAME \
--image namespace/image \
--name jenkins-permanent \
--resource-group devops-aci \
--cpu 2 \
--memory 3.5 \
--restart-policy Always \
--command-line "-jnlpUrl http://host:8080/computer/NAME/slave-agent.jnlp -secret SECRET -workDir \"C:\\jenkins\""
I've gone through all the troubleshooting steps that apply, tried a different region, but to no avail.
Here's a current event that I got which seems to be the most progress I've had yet:
{
  "count": 1,
  "firstTimestamp": "2017-12-07T03:02:56+00:00",
  "lastTimestamp": "2017-12-07T03:02:56+00:00",
  "message": "Failed to pull image \"MYREPO.azurecr.io/my-company/windows-agent:latest\": Error response from daemon: {\"message\":\"Get https://MYREPO.azurecr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\"}",
  "name": "Failed",
  "type": "Warning"
}
The funny thing is, this event happens before and after one case of the instance working (but unfortunately my entrypoint command was wrong, so it never started).
I really feel like Azure is punting on this and I just have no way to change the order I do anything. It's simply one command.
Alexander, here's a lead to check what could be causing the delay, or whether the deployment has failed in the background; this information would be critical for narrowing down the issue: https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-troubleshoot-tips#determine-error-code
From the article above, check the deployment logs:
Enable debug logging. In PowerShell, set the DeploymentDebugLogLevel parameter to All, ResponseContent, or RequestContent:
New-AzureRmResourceGroupDeployment -ResourceGroupName examplegroup -TemplateFile c:\Azure\Templates\storage.json -DeploymentDebugLogLevel All
Or with the Azure CLI:
az group deployment operation list --resource-group ExampleGroup --name vmlinux
Also check the deployment sequence: many deployment errors happen when resources are deployed in an unexpected sequence. These errors arise when dependencies are not set correctly. When a needed dependency is missing, one resource attempts to use a value from another resource that does not yet exist.
The above link contains more details. Let me know if this helps.
Figured it out: it turns out the backslashes in the executable path in my command were not having their escapes honoured, either because I was calling az from bash, or because something on the Azure side isn't handling the escaping correctly, or isn't escaping them itself.
My solution has been to just use forward slashes in the paths. Windows seems to handle them correctly, and I'd prefer not to be bothered with its odd preference for backslashes.
Related to my issue is that the speed of the service makes troubleshooting very difficult. It takes a long time to go round trip with any fixes. So if you're using Azure Container Instances and want better performance, go upvote this feedback item that I've created.
How big is your image? You can always debug with two steps.
Run az container show -g devops-aci -n jenkins-permanent. The output should contain a list of container events in the container JSON object. The event message should give you a hint about what's going on.
Run az container logs -g devops-aci -n jenkins-permanent. It should give you the logs of your container. If there's a problem within your image, you should be able to see some error output.