Why Elasticsearch not_analyzed doesn't work

I am trying to work out why a field mapped as not_analyzed is still being analyzed.
The mapping is visible here: http://i.stack.imgur.com/dGj7A.png
curl -XGET 'http://localhost:9200/gb/_analyze?field=tag?pretty' -d 'Black-cats'
curl -XGET 'http://localhost:9200/gb/_analyze?field=tweet?pretty' -d 'Black-cats'
The results are the same for both:
{
"tokens": [{
"token": "black",
"start_offset": 0,
"end_offset": 5,
"type": "<ALPHANUM>",
"position": 1
}, {
"token": "cats",
"start_offset": 6,
"end_offset": 10,
"type": "<ALPHANUM>",
"position": 2
}]
}
Your URL is wrong, i.e. you have two ? characters; the one before pretty should be an ampersand &. Try this one:
curl -XGET 'http://localhost:9200/gb/_analyze?field=tag&pretty' -d 'Black-cats'
                                                       ^
                                                       |
                                      this should be a &, not a ?
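With the URL fixed, the two fields should no longer analyze the same. For a field that really is not_analyzed, the _analyze API falls back to the keyword analyzer, so the input should come back as a single token, something like this (a sketch of the expected shape, not output from your index):

{
    "tokens": [{
        "token": "Black-cats",
        "start_offset": 0,
        "end_offset": 10,
        "type": "word",
        "position": 1
    }]
}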
I am trying to create a contact point in Grafana for PagerDuty using the Grafana API.
I tried with the help of these URLs: AlertProvisioning HTTP_API,
the API call reference,
and the YAML reference (I converted the YAML example data to JSON and tried it that way).
But I am getting this error:
{"message":"invalid object specification: type should not be an empty string","traceID":"00000000000000000000000000000000"}
My API call is below, with a dummy integration key substituted for security.
curl -X POST --insecure -H "Authorization: Bearer XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" -H "Content-Type: application/json" -d '{
"contactPoints": [
{
"orgId": 1,
"name": "test1",
"receivers": [
{
"uid": "test1",
"type": "pagerduty",
"settings": {
"integrationKey": "XXXXXXXXXXXXXXXX",
"severity": "critical",
"class": "ping failure",
"component": "Grafana",
"group": "app-stack",
"summary": "{{ `{{ template \"default.message\" . }}` }}"
}
}
]
}
]
},
"overwrite": false
}' http://XXXXXXXXXXXXXXXX.us-east-2.elb.amazonaws.com/api/v1/provisioning/contact-points
I would recommend enabling the Grafana Swagger UI. You will see the POST /api/v1/provisioning/contact-points model there.
Example:
{
"disableResolveMessage": false,
"name": "webhook_1",
"settings": {},
"type": "webhook",
"uid": "my_external_reference"
}
In my BitBucket+Bamboo setup, I'm trying to get a list of email addresses of people having access to a particular repository. This is the output from the BitBucket API:
{
"size": 3,
"limit": 25,
"isLastPage": true,
"values": [
{
"user": {
"name": "name1",
"emailAddress": "name1.lastname1#domain.com",
"id": 1,
"displayName": "Name1 Lastname1",
"active": true,
"slug": "name1",
"type": "NORMAL",
"links": {
"self": [
{
"href": "https://bitbucket.com/stash/users/name1"
}
]
}
},
"permission": "REPO_WRITE"
},
{
"user": {
"name": "name2",
"emailAddress": "name2.lastname2#domain.com",
"id": 2,
"displayName": "Name2 Lastname2",
"active": true,
"slug": "name2",
"type": "NORMAL",
"links": {
"self": [
{
"href": "https://bitbucket.com/stash/users/name2"
}
]
}
},
"permission": "REPO_WRITE"
},
{
"user": {
"name": "name3",
"emailAddress": "name3.lastname3#domain.com",
"id": 3,
"displayName": "Name3 Lastname3",
"active": true,
"slug": "name3",
"type": "NORMAL",
"links": {
"self": [
{
"href": "https://bitbucket.com/stash/users/name3"
}
]
}
},
"permission": "REPO_WRITE"
}
],
"start": 0
}
Is there an easy way to, say, put all 3 email addresses into an array or a comma-separated variable within a bash script? I tried using grep and splitting the API output somehow (e.g. by 'permission'), but no luck so far. Let me note that I may be forced to use standard tools like grep, sed or awk, meaning I may not be able to use tools like jq (to process JSON in bash) since I cannot really tamper with the available build agents.
Any help would be much appreciated!
Consider using jq (or another JSON query tool). It will handle any valid JSON, even one that is not pretty-printed or formatted in a specific way. It can be combined with readarray to build the array in bash.
readarray -t emails <<< "$(jq -r '.values[].user.emailAddress' < file)"
This will produce an array 'emails':
declare -p emails
declare -a emails=([0]=$'name1.lastname1#domain.com' [1]=$'name2.lastname2#domain.com' [2]=$'name3.lastname3#domain.com')
Note 2020-07-22: Added '-t' to strip trailing new lines from result array
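If you also want the comma-separated variable mentioned in the question, the array can be joined with plain bash (no extra tools; a small sketch assuming the emails array from above):

# Join the array elements with commas; the subshell keeps the IFS change local
csv=$(IFS=,; echo "${emails[*]}")
echo "$csv"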
Assuming your input is always that regular, this will work using any awk in any shell on every UNIX box:
$ awk -F'"' '$2=="emailAddress"{addrs=addrs sep $4; sep=","} END{print addrs}' file
name1.lastname1#domain.com,name2.lastname2#domain.com,name3.lastname3#domain.com
Save the output in a variable or a file as you see fit, e.g.:
$ var=$(awk -F'"' '$2=="emailAddress"{addrs=addrs sep $4; sep=","} END{print addrs}' file)
$ echo "$var"
name1.lastname1#domain.com,name2.lastname2#domain.com,name3.lastname3#domain.com
Take a look at Python.
You can access your API directly like this:
import urllib.request
import json

with urllib.request.urlopen('http://your/api') as url:
    data = json.loads(url.read().decode())
or, as an example, with a local file containing the same data you provided:
import json

with open('./response.json') as f:
    data = json.load(f)

result = {}
for x in data['values']:
    node = x['user']
    result[node['emailAddress']] = x['permission']
result is {'name1.lastname1#domain.com': 'REPO_WRITE', 'name2.lastname2#domain.com': 'REPO_WRITE', 'name3.lastname3#domain.com': 'REPO_WRITE'}
$ grep -oP '(?<="emailAddress": ).*' file |
tr -d '",' |
paste -sd,
name1.lastname1#domain.com,name2.lastname2#domain.com,name3.lastname3#domain.com
or
$ grep '"emailAddress":' file |
cut -d: -f2 |
tr -d '", ' |
paste -sd,
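which should again produce:
name1.lastname1#domain.com,name2.lastname2#domain.com,name3.lastname3#domain.com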
I am using Microsoft's TFS 2018 and I have started writing some Selenium test cases using Python 3.7 in Visual Studio 2018.
I have managed to use the REST API of TFS to return my TFS projects and create new test cases.
What I couldn't find is how to use this API to pass a list of all the test steps for a test case. I am not sure how, and whether, you can add them in the body of the request as a string or an array.
At the moment I am trying to make this work on Postman first and then I am going to try in python as well.
This is the request:
curl -X POST \
'https://TFSLINK:443/DefaultCollection/TFS/_apis/wit/workitems/$Test%20Case?api-version=4.1' \
-H 'Authorization: Basic MYKEY' \
-H 'Content-Type: application/json-patch+json' \
-H 'cache-control: no-cache' \
-d '[
{
"op": "add",
"path": "/fields/System.Title",
"from": null,
"value": "Sample task 2"
}
]'
Is there a way to add the steps? The API docs don't mention anything about this.
The response I get after creating a test case includes a section called 'fields', which should have included the steps, but I can't see them there:
{
"id": 731,
"rev": 1,
"fields": {
"System.AreaPath": "TFS",
"System.TeamProject": "TFS",
"System.IterationPath": "TFS",
"System.WorkItemType": "Test Case",
"System.State": "Design",
"System.Reason": "New",
"System.AssignedTo": "Marialena <TFS\\marialena>",
"System.CreatedDate": "2019-01-09T08:00:50.51Z",
"System.CreatedBy": "Marialena <TFS\\marialena>",
"System.ChangedDate": "2019-01-09T08:00:50.51Z",
"System.ChangedBy": "Marialena <TFS\\marialena>",
"System.Title": "Sample task 2",
"Microsoft.VSTS.Common.StateChangeDate": "2019-01-09T08:00:50.51Z",
"Microsoft.VSTS.Common.ActivatedDate": "2019-01-09T08:00:50.51Z",
"Microsoft.VSTS.Common.ActivatedBy": "Marialena <TFS\\marialena>",
"Microsoft.VSTS.Common.Priority": 2,
"Microsoft.VSTS.TCM.AutomationStatus": "Not Automated"
},
"_links": {
"self": {
"href": "https://TFSLINK/DefaultCollection/_apis/wit/workItems/731"
},
"workItemUpdates": {
"href": "https://TFSLINK/DefaultCollection/_apis/wit/workItems/731/updates"
},
"workItemRevisions": {
"href": "https://TFSLINK/DefaultCollection/_apis/wit/workItems/731/revisions"
},
"workItemHistory": {
"href": "https://TFSLINK/DefaultCollection/_apis/wit/workItems/731/history"
},
"html": {
"href": "https://TFSLINK/web/wi.aspx?pcguid=07b658c4-97e5-416f-b32d-3dd48d7f56cc&id=731"
},
"workItemType": {
"href": "https://TFSLINK/DefaultCollection/18ca0a74-cf78-45bf-b163-d8dd4345b418/_apis/wit/workItemTypes/Test%20Case"
},
"fields": {
"href": "https://TFSLINK/DefaultCollection/_apis/wit/fields"
}
},
"url": "https://TFSLINK/DefaultCollection/_apis/wit/workItems/731"
}
I have tried creating this PATCH request to update the steps, but it didn't work:
curl -X PATCH \
'https://TFSLINK:443/DefaultCollection/TFS/_apis/wit/workItems/730?api-version=4.1' \
-H 'Authorization: Basic MYKEY' \
-H 'Content-Type: application/json-patch+json' \
-d '[
{
"op": "add",
"path": "/fields/Microsoft.VSTS.TCM.Steps",
"from": null,
"value": "Test"
},
{
"op": "add",
"path": "/fields/Steps",
"from": null,
"value": "Test"
}
]'
And maybe this is another topic, but if the above is achievable, can you also pass the results after you run the test, and perhaps update the test plan? If this is unrelated, please help me only with the test steps and ignore this question.
Many thanks.
This is the way to add test steps to a Test Case with the REST API:
{
"op": "add",
"path": "/fields/Microsoft.VSTS.TCM.Steps",
"value": "<steps id=\"0\" last=\"1\"><step id=\"2\" type=\"ValidateStep\"><parameterizedString isformatted=\"true\">Input step 1</parameterizedString><parameterizedString isformatted=\"true\">Expectation step 1</parameterizedString><description/></step></steps>"
}
For multiple steps (3 in this example):
{
"op": "add",
"path": "/fields/Microsoft.VSTS.TCM.Steps",
"value": "<steps id=\"0\" last=\"4\"><step id=\"2\" type=\"ValidateStep\"><parameterizedString isformatted=\"true\"><P>step 1 \"Action\"</P></parameterizedString><parameterizedString isformatted=\"true\"><P>step 1 \"Expected\"<BR/></P></parameterizedString><description/></step><step id=\"3\" type=\"ValidateStep\"><parameterizedString isformatted=\"true\"><P>step 2 \"Action\"<BR/></P></parameterizedString><parameterizedString isformatted=\"true\"><P>step 2 \"Expected\"<BR/></P></parameterizedString><description/></step><step id=\"4\" type=\"ValidateStep\"><parameterizedString isformatted=\"true\"><P>step 3 \"Action\"<BR/></P></parameterizedString><parameterizedString isformatted=\"true\"><P>step 3 \"Expected\"<BR/></P></parameterizedString><description/></step></steps>"
}
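Putting it together with the PATCH request from the question (TFSLINK, the work item id 730 and the key are placeholders from the question; a sketch along the lines of your own call, not a verified command):

curl -X PATCH \
  'https://TFSLINK:443/DefaultCollection/TFS/_apis/wit/workItems/730?api-version=4.1' \
  -H 'Authorization: Basic MYKEY' \
  -H 'Content-Type: application/json-patch+json' \
  -d '[
  {
    "op": "add",
    "path": "/fields/Microsoft.VSTS.TCM.Steps",
    "value": "<steps id=\"0\" last=\"1\"><step id=\"2\" type=\"ValidateStep\"><parameterizedString isformatted=\"true\">Input step 1</parameterizedString><parameterizedString isformatted=\"true\">Expectation step 1</parameterizedString><description/></step></steps>"
  }
]'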
I'm using puppet version 5.3.6.
I'm able to query the puppetdb and get lots of useful information like this:
$ curl -s -X GET http://localhost:8080/pdb/query/v4/facts --data-urlencode 'query=["extract", [["function","count"],"value"],["=","name","operatingsystem"],["group_by", "value"]]' | python -mjson.tool
[
{
"count": 339,
"value": "OracleLinux"
},
{
"count": 73,
"value": "RedHat"
}
]
AND:
$ curl -s -X GET http://localhost:8080/pdb/query/v4/facts --data-urlencode 'query=["extract", [["function","count"],"value"],["=","name","operatingsystemmajrelease"],["group_by", "value"]]' | python -mjson.tool
[
{
"count": 38,
"value": "5"
},
{
"count": 217,
"value": "6"
},
{
"count": 157,
"value": "7"
}
]
How can I combine the two and get each Oracle Linux/Red Hat release and major release grouped together in an easy-to-read view? I've tried a few different approaches, but I'm not able to find any examples or docs that explain how to do it.
Other useful combinations would be all Red Hat servers in a particular DC running operatingsystemmajrelease 6 (or show all of them?). This would involve combining three facts.
This would be very useful.
Thanks for your help!
Regards
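One possible approach (a sketch, assuming your PuppetDB ships the v4 inventory endpoint, which accepts dotted fact paths) is to combine the facts in a single filter, e.g. all Red Hat servers on major release 6:

curl -s -X GET http://localhost:8080/pdb/query/v4/inventory \
  --data-urlencode 'query=["and", ["=", "facts.operatingsystem", "RedHat"], ["=", "facts.operatingsystemmajrelease", "6"]]' \
  | python -mjson.tool

Adding a third clause such as ["=", "facts.datacenter", "dc1"] would cover the three-fact case (the datacenter fact name here is hypothetical; substitute whatever fact actually holds your DC). For the grouped-count view, I'm not aware of a supported multi-fact group_by on the facts endpoint, so post-processing the inventory results client-side may be the pragmatic route.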
So I've submitted a request following the Travis API v3 guide and got a response like this:
{
"#type": "pending",
"remaining_requests": 10,
"repository": {
"#type": "repository",
"#href": "/repo/111111111",
"#representation": "minimal",
"id": 111111111,
"name": "my-111111111",
"slug": "me/my111111111"
},
"request": {
"repository": {
"id": 222222,
"owner_name": "me",
"name": "my-111111111"
},
"user": {
"id": 333333
},
"id": 444444,
"message": "Cool message",
"branch": "master"
},
"resource_type": "request"
}
So what is the way to get the status of those jobs now? I suppose I need to use id 444444, but I am getting the error below and I'm not sure what I am doing wrong:
curl -s -X POST \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "Travis-API-Version: 3" \
-H "Authorization: token mycooltoken" \
https://api.travis-ci.org/repo/111111111/requests/444444
{
"#type": "error",
"error_type": "not_found",
"error_message": "resource not found (or insufficient access)"
}
Can somebody point me to examples, or suggest anything else I might try?
Oh, sorry for the confusion - somehow I overlooked that I must change POST to GET; with that simple fix everything worked fine.
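For completeness, the working request is the same call with GET (ids and token redacted as in the question):

curl -s -X GET \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -H "Travis-API-Version: 3" \
  -H "Authorization: token mycooltoken" \
  https://api.travis-ci.org/repo/111111111/requests/444444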