Waiting with gcloud on "detect-text-pdf" operation - google-colaboratory

I'm running a detect-text-pdf with gcloud:
$ gcloud ml vision detect-text-pdf gs://my-bucket/pdfs/D.pdf gs://my-bucket/pdfs/D
which results in an output like:
{
"name": "projects/PROJECT_ID/operations/OPERATION_ID"
}
Where PROJECT_ID is the project and OPERATION_ID is a hex number.
How can I wait for the operation to complete?
I've tried:
gcloud services operations wait OPERATION_ID
But I get the error:
ERROR: gcloud crashed (ArgumentTypeError): Invalid value 'OPERATION_ID': Operation format should be operations/namespace.id
I'm running from a Colab notebook, if that helps.

TL;DR How can I wait for the operation to complete?
You cannot use the gcloud services operations wait command, but you can get the operation status as shown in the documentation Working with long-running operations.
From the gcloud ml vision detect-text-pdf reference:
API REFERENCE
This command uses the vision/v1 API. The full documentation for this API can be found at: https://cloud.google.com/vision/
Also, as shown in the documentation Detect text in files (PDF/TIFF):
A successful asyncBatchAnnotate request returns a response with a single name field:
{
"name": "projects/usable-auth-library/operations/**1efec2285bd442df**"
}
This name represents a long-running operation with an associated ID (for example, 1efec2285bd442df), which can be queried using the v1.operations API.
To retrieve your Vision annotation response, send a GET request to the v1.operations endpoint, passing the operation ID in the URL:
GET https://vision.googleapis.com/v1/operations/operation-id
As shown in both the reference and the documentation, these are long-running operations that use the v1.operations API. Therefore, you cannot use the gcloud services operations wait command, but you can get the operation status as shown in the documentation Working with long-running operations:
Several operations you request are long-running, ... These types of requests will return a JSON with an operation ID that you can use to get the status of the operation.
In the section Get an operation, there are code samples with instructions for requesting the status of your long-running operations.
HTTP method and URL:
GET https://automl.googleapis.com/v1/projects/project-id/locations/us-central1/operations/operation-id
To send your request using curl, execute the following command:
curl -X GET \
-H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
"https://automl.googleapis.com/v1/projects/project-id/locations/us-central1/operations/operation-id"
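Note that the sample above targets the AutoML endpoint from that documentation page; for the Vision operation in the question you would GET https://vision.googleapis.com/v1/operations/OPERATION_ID instead. As a rough sketch (poll_operation is a hypothetical helper, not part of any Google SDK; the token would come from gcloud auth application-default print-access-token), a loop that checks the operation's done field could look like:

```python
import json
import time
import urllib.request

def poll_operation(url, token=None, fetch=None, interval=5, max_tries=60):
    """Poll a long-running-operation URL until its JSON carries "done": true."""
    if fetch is None:
        def fetch(u):
            # Default fetcher: GET with the bearer token
            req = urllib.request.Request(
                u, headers={"Authorization": "Bearer " + token})
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)
    for _ in range(max_tries):
        op = fetch(url)
        if op.get("done"):
            return op
        time.sleep(interval)
    raise TimeoutError("operation did not complete in time")
```

Called as poll_operation("https://vision.googleapis.com/v1/operations/OPERATION_ID", token), it returns the finished operation JSON, including the annotation response.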

Related

GCP REST api authentication missing

I have created a JDBC-to-BigQuery job using the web interface and it worked just fine.
Now I want to create the same job from the GCP REST API, so I took the REST equivalent of the request from the site and tried to send it from Postman.
I'm sending a POST request to the following URL:
https://dataflow.googleapis.com/v1b3/projects/test-data-308414/templates:launch?gcsPath=gs://dataflow-templates/latest/Jdbc_to_BigQuery
which I got from the example in the GCP documentation.
I also pass the JSON that GCP gave me in the body, and the API key as a GET parameter in the format "?key=[API_KEY]".
I'm getting a 401 response from the server with the following message:
Request is missing required authentication credential. Expected OAuth
2 access token, login cookie or other valid authentication credential.
See
https://developers.google.com/identity/sign-in/web/devconsole-project.
With a status of:
UNAUTHENTICATED
I looked at the link and found a tutorial on how to set up Google authentication on the front end, which is not helpful to me.
I'm pretty sure I'm passing the API key in the wrong format and that this is the reason it fails to authenticate, but I couldn't find any documentation that says how to do it correctly.
PS: I have also tried passing it in the headers, as I saw in one place, in the format
Authorization: [API_KEY]
but it failed with the same message.
A few days back I was trying to integrate GCP into MechCloud and struggling to figure out how to invoke a microservice (which acts as a proxy to GCP) with credentials for different projects, passed to the microservice on the fly. I was surprised that, despite spending a good amount of time, I could not figure out how to achieve it, because the GCP documentation focuses on working with one project's credentials at a time using application default credentials.
Another frustrating thing is that the API explorer shows both OAuth 2.0 and API Key by default for all the APIs, when in fact an API key is hardly supported by any of them. Finally I found the solution to this problem here.
Here are the steps to invoke a GCP rest api -
Create a service account for your project and download the json file associated with it.
Note down the values of the client_email, private_key_id and private_key attributes from the service account JSON file.
Define the following environment variables using the above values:
GCP_SERVICE_ACCOUNT_CLIENT_EMAIL=<client_email>
GCP_SERVICE_ACCOUNT_PRIVATE_KEY_ID=<private_key_id>
GCP_SERVICE_ACCOUNT_PRIVATE_KEY=<private_key>
Execute the following Python code to generate the JWT (requires the PyJWT package; RS256 signing also needs the cryptography package):
import time, jwt, os  # jwt is the PyJWT package

iat = time.time()
exp = iat + 3600

client_email = os.getenv('GCP_SERVICE_ACCOUNT_CLIENT_EMAIL')
private_key_id = os.getenv('GCP_SERVICE_ACCOUNT_PRIVATE_KEY_ID')
private_key = os.getenv('GCP_SERVICE_ACCOUNT_PRIVATE_KEY')

# Standard service-account JWT claims; the token is valid for one hour
payload = {
    'iss': client_email,
    'sub': client_email,
    'aud': 'https://compute.googleapis.com/',
    'iat': iat,
    'exp': exp
}

# Environment variables store the key with literal "\n", so restore real newlines
private_key1 = private_key.replace('\\n', '\n')
additional_headers = {'kid': private_key_id}
signed_jwt = jwt.encode(
    payload,
    private_key1,
    headers=additional_headers,
    algorithm='RS256'
)
print(signed_jwt)
Use the JWT generated in the previous step as a bearer token to invoke any GCP REST API, e.g.:
curl -X GET --header 'Authorization: Bearer <jwt_token>' 'https://compute.googleapis.com/compute/v1/projects/{project}/global/networks'
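If you would rather stay in Python after generating the JWT, the standard library can issue the same request. This is only a sketch (bearer_request is a hypothetical helper, and the Compute URL is the example from above, not a requirement):

```python
import urllib.request

def bearer_request(url, token):
    # GET request carrying the JWT (or an OAuth access token) as a bearer token
    return urllib.request.Request(
        url, headers={"Authorization": "Bearer " + token})

# Usage sketch (signed_jwt comes from the previous step; the URL is an example):
# req = bearer_request(
#     "https://compute.googleapis.com/compute/v1/projects/my-project/global/networks",
#     signed_jwt)
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())
```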
The best practice for authenticating a request is to use your application default credentials. Just make sure you have the Google Cloud SDK installed:
curl -X POST \
-H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
https://dataflow.googleapis.com/v1b3/projects/PROJECT_ID/templates:launch?gcsPath=gs://dataflow-templates/latest/Jdbc_to_BigQuery

How to run a rundeck job like a rest call from application?

I created a simple mkdir job in Rundeck. Now I want to run that job from a Node application. How can I get the API endpoint for the job so that I can make the REST call from my application and run the job?
I just tried a POST call from Postman, but it didn't work:
http://rundeckhost:4440/api/1/job/uuid/run
Gave the following error:
(unauthenticated) is not authorized for: /api/1/job/ec0852b7-222a-4372-ad4b-808892777019/executions
Can someone point me to any references or info on how we can run the job through a REST call from our application? Basically, how do I get the REST URL to run the job?
You have two ways to authenticate: http://rundeck.org/docs/api/#authentication
For your purpose it will probably be easier to use the authtoken type. See here for your choices for authtoken types: http://rundeck.org/docs/administration/access-control-policy.html#api-token-authorization
Roughly, you will do something like this:
curl -H "X-Rundeck-Auth-Token: $API_TOKEN" \
--data-urlencode "${NODEFILTER:-}" \
--data-urlencode "argString=${JOB_OPTIONS:-}" \
-X POST "${RD_URL}/api/12/job/$JOB_UUID/run"
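From an application, the same call is just an HTTP POST. Here is a sketch of how the request from the curl example could be assembled in Python (run_job_request is a hypothetical helper; adjust the API version, URL, and token to your instance):

```python
import urllib.parse
import urllib.request

def run_job_request(rd_url, api_token, job_uuid, arg_string=None, api_version=12):
    """Build the POST request that triggers a Rundeck job run.

    Mirrors the curl call above: the token goes in the X-Rundeck-Auth-Token
    header, and job options go in the form-encoded argString field.
    """
    url = "%s/api/%d/job/%s/run" % (rd_url.rstrip("/"), api_version, job_uuid)
    data = None
    if arg_string:
        data = urllib.parse.urlencode({"argString": arg_string}).encode()
    return urllib.request.Request(
        url,
        data=data,
        headers={"X-Rundeck-Auth-Token": api_token},
        method="POST",
    )
```

The returned request can then be sent with urllib.request.urlopen, or the same URL and headers reused with any HTTP client.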

Not able to download log files using following API

We are not able to download log files. When we run the command below, we get an "unable to parse object filter" error.
curl -X GET -u : -g https://api.service.softlayer.com/rest/v3/SoftLayer_Event_Log/getAllObjects.json
I'm able to retrieve the Event Logs through SoftLayer's private network, I used the same request:
curl -X GET -u $user:$apiKey -g https://api.service.softlayer.com/rest/v3/SoftLayer_Event_Log/getAllObjects.json
According to the exception, it seems that you are sending something else in the request.
Are you able to make other API calls? Can you try again, please? Did you have success with this request before?
Also, can you try the public endpoint https://api.softlayer.com instead of https://api.service.softlayer.com?
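Since the error complains about the object filter, one thing worth checking is that any objectFilter query parameter you send is valid JSON and URL-encoded. A sketch (event_log_url is a hypothetical helper; the filter shown is an arbitrary example, not a required shape):

```python
import json
import urllib.parse

def event_log_url(base="https://api.softlayer.com", object_filter=None):
    """Build the SoftLayer_Event_Log/getAllObjects URL.

    The objectFilter is serialized as JSON and URL-encoded; a malformed
    or unencoded filter is one way to trigger "unable to parse object filter".
    """
    url = base + "/rest/v3/SoftLayer_Event_Log/getAllObjects.json"
    if object_filter is not None:
        url += "?objectFilter=" + urllib.parse.quote(json.dumps(object_filter))
    return url
```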

gitlab api accept merge request fails

I have created a shell script to create and accept a merge request.
The first action (creating) is working fine, but the second action (accepting) fails.
This is my code:
curl -X PUT -H "PRIVATE-TOKEN: abc123" -d id=86 -d merge_request_id=323 https://gitlab/api/v3/projects/86/merge_requests/323/merge
API feedback:
{"message":"404 Not found"}
GitLab API documentation on merge requests:
https://gitlab.com/help/api/merge_requests.md#accept-mr
If you are using GitLab API v4
In this API version you should use the iid in MR API calls, i.e. the same id that you see in the MR URL in the web UI.
Source: https://docs.gitlab.com/ee/api/v3_to_v4.html :
API uses merge request IIDs (internal ID, as in the web UI) rather
than IDs. This affects the merge requests, award emoji, todos, and
time tracking APIs.
If you are using GitLab API v3
Use the globally unique internal id of the MR, NOT its local id.
So if you have a web UI URL of the MR you want to accept, like this:
https://gitlab.domain.com/group/project/merge_requests/11/commits
^^--- ..then *DON'T* use this id!
Instead if you create the MR with the API (or get a list of them in a project) and see its JSON:
{
"id": 16393, # <--- ...then USE THIS id...
"iid": 11,
"project_id": 1162,
"title": "MR title...",
(...)
}
...in your MR accept request:
curl -X PUT https://gitlab.domain.com/api/v3/projects/1162/merge_requests/16393/merge
# the globally unique internal id of the MR ---^^^^^
(In my opinion this is kind of misleading, because as of March 2017 the GitLab API docs call the global one iid (internal id)...)
Shortly after my previous post I found out that the URL was incorrect.
The structure is not kept consistent across the API.
The request that works is:
https://gitlab/api/v3/projects/86/merge_request/323/merge (notice REQUEST, not REQUESTS)
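For current GitLab installations (API v4), the accept call is built from the iid alone. A small hypothetical helper to make the v4 convention explicit:

```python
def mr_merge_url(base_url, project_id, mr_iid):
    # v4 addresses merge requests by iid -- the number shown in the web UI URL
    return "%s/api/v4/projects/%s/merge_requests/%s/merge" % (
        base_url.rstrip("/"), project_id, mr_iid)
```

With the example MR JSON from the first answer (iid 11, project 1162), this yields the URL you would PUT to with your PRIVATE-TOKEN header.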

How can a Jenkins user authentication details be "passed" to a script which uses Jenkins API to create jobs?

I have a script that deletes and re-creates jobs through curl HTTP calls, and I want to get rid of any hard-coded "username:password".
E.g. curl -X POST $url --user username:password
Considerations:
Jenkins CLI (probably not an option).
One should be able to achieve the same with the CLI as with the Jenkins API (creating jobs etc.), but as far as I understand the Jenkins CLI is not a good alternative for me, since jobs created with it will only appear in Jenkins after a restart or a "Reload Configuration from Disk", and that would cancel any other running jobs.
API token. I can't find out how to get the user's token and then pass it as a parameter to the script, but that may be a solution.
Try this way: (for example delete the job)
curl --silent --show-error http://<username>:<api-token>@<jenkins-server>/job/<job-name>/doDelete
The api-token can be obtained from http://<jenkins-server>/user/<username>/configure.
This worked for me:
curl -u $username:$api_token -FSubmit=Build 'http://<jenkins-server>/job/<job-name>/buildWithParameters?environment='
API token can be obtained from Jenkins user configuration.
With the Jenkins CLI you do not have to reload everything - you can just load the job (update-job command). You can't use tokens with the CLI, AFAIK - you have to use a password or a password file.
The API token for a user can be obtained via http://<jenkins-server>/user/<username>/configure - push the 'Show API token' button.
Here's a link on how to use API tokens (it uses wget, but curl is very similar).
I needed to explicitly add POST in the CURL command:
curl -X POST http://<user>:<token>@<server>/safeRestart
I also have the SafeRestart Plugin installed, in case that makes a difference.
If you want to write a script to automate creation of jobs using the Jenkins API, you can use one of the API clients to do that. A ruby client for Jenkins is available at https://github.com/arangamani/jenkins_api_client
gem install jenkins_api_client
require "rubygems"
require "jenkins_api_client"
# Initialize the client by passing in the server information
# and credentials to communicate with the server
client = JenkinsApi::Client.new(
  :server_ip => "127.0.0.1",
  :username => "awesomeuser",
  :password => "awesomepassword"
)

# The following block will create 10 jobs in Jenkins:
# test_job_0, test_job_1, test_job_2, ...
10.times do |num|
  client.job.create_freestyle(:name => "test_job_#{num}")
end

# The jobs in Jenkins can be listed using
client.job.list_all
The API client can be used to perform a lot of operations.
From the API's point of view, the API token is the same as a password; see the source code, which uses the token in place of a password for the API.
See the related answer from @coffeebreaks to my question python-jenkins or jenkinsapi for jenkins remote access API in python.
Other options, described in the docs, use the HTTP basic authentication model.
In order to use API tokens, users will have to obtain their own tokens, each from https://<jenkins-server>/me/configure or https://<jenkins-server>/user/<user-name>/configure. It is up to you, as the author of the script, to determine how users supply the token to the script. For example, in a Bourne Shell script running interactively inside a Git repository, where .gitignore contains /.jenkins_api_token, you might do something like:
api_token_file="$(git rev-parse --show-cdup).jenkins_api_token"
api_token=$(cat "$api_token_file" 2>/dev/null || true)
if [ -z "$api_token" ]; then
  echo
  echo "Obtain your API token from $JENKINS_URL/user/$user/configure"
  echo "After entering it here, it will be saved in $api_token_file; keep it safe!"
  read -p "Enter your Jenkins API token: " api_token
  echo "$api_token" > "$api_token_file"
fi
curl -u $user:$api_token $JENKINS_URL/someCommand
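The cache-or-prompt logic of that shell script can be expressed the same way in Python if your tooling lives there. A sketch (load_or_prompt_token is a hypothetical helper; keep the token file gitignored, as above):

```python
import os

def load_or_prompt_token(path, prompt=input):
    """Return a cached Jenkins API token, prompting once and caching it.

    Mirrors the shell snippet above: if the file exists and is non-empty,
    reuse it; otherwise ask the user and save the answer for next time.
    """
    try:
        with open(path) as f:
            token = f.read().strip()
        if token:
            return token
    except OSError:
        pass
    token = prompt("Enter your Jenkins API token: ").strip()
    with open(path, "w") as f:
        f.write(token + "\n")
    os.chmod(path, 0o600)  # keep the token private
    return token
```

The token can then be passed to curl (or any HTTP client) exactly as in the last line of the shell script.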