I have an ElasticSearch cluster and am trying to query it using the RESTful Search API. My query returned the oldest results, but I wanted the newest, so I added a range filter:
curl -XGET 'https://cluster.com/_search' -d '{
  "from": 0, "size": 10000,
  "range" : {
    "@timestamp" : {
      "gt": "now-1h"
    }
  }
}'
But I get the following error:
"error":"SearchPhaseExecutionException[Failed to execute phase [query],.....Parse Failure [Failed to parse source.........Parse Failure [No parser for element [range]]]
I've tried using @timestamp, timestamp, and _timestamp as field names, but that didn't work. I've also confirmed that it is the range option that is causing the request to fail.
Any help would be appreciated.
Your query is not structured correctly: you are missing a "query" level:
curl -XGET 'https://cluster.com/_search' -d '{
  "from": 0, "size": 10000,
  "query": {
    "range" : {
      "@timestamp" : {
        "gt": "now-1h"
      }
    }
  }
}'
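Note that the range query only restricts results to the last hour; it does not reorder them. If you also want the newest documents first, you can add a sort on the same field. A minimal sketch, assuming the @timestamp field from above:
curl -XGET 'https://cluster.com/_search' -d '{
  "from": 0, "size": 10000,
  "query": {
    "range" : {
      "@timestamp" : {
        "gt": "now-1h"
      }
    }
  },
  "sort": [
    { "@timestamp": { "order": "desc" } }
  ]
}'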
I'm trying to send data to an ElasticSearch server using CURL. There is an index called 'datastream2' which has a lot of fields sorta like this:
"datastream2": {
"mappings": {
"properties": {
"UA": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 512
}
}
},
"accLang": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}...
I'd like to use CURL to send data to this index.
I've been using CURL for the attempted POST like this:
curl -v -X POST http://66-228-66-111.ip.linodeusercontent.com:9200/datastream2/newdocname -H "Content-type: application/json" --user elastic:u34XXXc2qYNGnVS4XXXA -d '{"UA":"Mozilla","acclang":"eng"}'
but it's failing with the message:
{"error":"no handler found for uri [/datastream2/newdocname] and method [POST]"}%
I will admit that I'm not sure what to put after the index name '/datastream2/', but I've tried various values. Some documentation says to list the type (which I'm not sure where to find), and some docs say that this is no longer necessary in ElasticSearch 8+.
Any ideas how I can get this data posted into ElasticSearch?
You just need to replace newdocname with _doc and it will work:
curl -v -X POST http://66-228-66-111.ip.linodeusercontent.com:9200/datastream2/_doc
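For example, a complete call reusing the header, credentials, and body from the question would look like this (a sketch; in Elasticsearch 8+ mapping types are gone, so documents are posted to the _doc endpoint):
curl -v -X POST http://66-228-66-111.ip.linodeusercontent.com:9200/datastream2/_doc -H "Content-type: application/json" --user elastic:u34XXXc2qYNGnVS4XXXA -d '{"UA":"Mozilla","acclang":"eng"}'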
I'm trying to get data back from this Withings endpoint: https://developer.withings.com/api-reference/#operation/measure-getmeas
But every combination of things I've tried simply returns:
status body error
------ ---- -----
   503      Invalid Params
This is the most recent body that isn't working: action=getmeas&meastype=meastype&meastypes=11&category=1&startdate=1641168000&enddate=1641254399
For reference: https://developer.withings.com/api-reference/#operation/measure-getmeas
Based on what you posted, the problem is the parameter meastype=meastype. If you remove it, the call should run fine.
Assuming you have followed the procedure to get an access token your call from PowerShell would look like this:
Invoke-RestMethod -Method 'Post' -Headers @{ "Authorization" = "Bearer XXXXXXXXXXXXXXXXXX" } -Body "action=getmeas&meastypes=11&category=1&startdate=1641168000&enddate=1641254399" -Uri 'https://wbsapi.withings.net/measure'
This will return a JSON structure as per the docs you link to in the question e.g.
{
  "status": 0,
  "body": {
    "updatetime": "string",
    "timezone": "string",
    "measuregrps": [
      {
        "grpid": 12,
        "attrib": 1,
        "date": 1594245600,
        "created": 1594246600,
        "category": 1594257200,
        "deviceid": "892359876fd8805ac45bab078c4828692f0276b1",
        "measures": [
          {
            "value": 65750,
            "type": 1,
            "unit": -3,
            "algo": 3425,
            "fm": 1,
            "fw": 1000
          }
        ],
        "comment": "A measurement comment"
      }
    ],
    "more": 0,
    "offset": 0
  }
}
If your "measuregrps" is empty (like mine is below) then it means there is no data available for the time period you selected so either your device doesn't record that parameter or the data has not been synchronised to your Withings account.
What I get when I run it (my device doesn't record HR):
status body
------ ----
0 @{updatetime=1641470158; timezone=Europe/London; measuregrps=System.Object[]}
Another option is to use Windows Subsystem for Linux to run curl commands. You essentially get the same thing:
curl --header "Authorization: Bearer XXXXXXXXXXXXXXXXXXXXXX" --data "action=getmeas&meastype=11&category=1&startdate=1609925332&enddate=1641461360" 'https://wbsapi.withings.net/measure'
gives
{
  "status":0,
  "body":{
    "updatetime":1641470640,
    "timezone":"Europe\/London",
    "measuregrps":[]
  }
}
I am creating a test case in TFS 2018 using Postman, like this:
curl -X POST \
  'https://TFSURL:443/DefaultCollection/PROJECT/_apis/wit/workitems/$Test%20Case?api-version=4.1' \
  -H 'Authorization: Basic MYKEY' \
  -H 'Content-Type: application/json-patch+json' \
  -d '[
    {
      "op": "add",
      "path": "/fields/System.AreaPath",
      "from": null,
      "value": "TEST\\Automation"
    },
    {
      "op": "add",
      "path": "/fields/System.IterationPath",
      "from": null,
      "value": "TEST\\Sprint 8"
    },
    {
      "op": "add",
      "path": "/fields/System.Title",
      "from": null,
      "value": "Sample task"
    },
    {
      "op": "add",
      "path": "/fields/Microsoft.VSTS.TCM.Steps",
      "value": "<steps id=\"0\">
        <step id=\"1\" type=\"ValidateStep\"><parameterizedString isformatted=\"true\">Input step 1</parameterizedString><parameterizedString isformatted=\"true\">Expectation step 1</parameterizedString><description/></step>
        <step id=\"2\" type=\"ValidateStep\"><parameterizedString isformatted=\"true\">Input step 2</parameterizedString><parameterizedString isformatted=\"true\">Expectation step 2</parameterizedString><description/></step>
        <step id=\"3\" type=\"ValidateStep\"><parameterizedString isformatted=\"true\">Input step 3</parameterizedString><parameterizedString isformatted=\"true\">Expectation step 3</parameterizedString><description/></step>
        <step id=\"4\" type=\"ValidateStep\"><parameterizedString isformatted=\"true\">Input step 4</parameterizedString><parameterizedString isformatted=\"true\">Expectation step 4</parameterizedString><description/></step></steps>"
    }
  ]'
How can I check, before sending this request, that this test exists, so that I can update it instead of creating a new one every time?
I was expecting to do it by:
searching for the name of the automation, which should be unique in TFS (the method name from the automation script), and seeing if it returns something
It seems like in TFS there is a way to request all work items, but I am not sure if you can filter based on the title field of the item. (Help page on how to do this request)
I tried to use the search API to return results if anything with that name exists in TFS, but I am getting an error message which I don't know how to resolve:
curl -X POST \
  'https://TFSURL:443/DefaultCollection/PROJECT/_apis/search/workitemsearchresults?api-version=4.1-preview' \
  -H 'Authorization: Basic MYKEY' \
  -H 'Content-Type: application/json' \
  -d '[
    {
      "searchText": "Sample task",
      "$skip": 0,
      "$top": 1,
      "filters": {
        "System.AreaPath": [
          "TEST\\Automation"
        ]
      },
      "$orderBy": [
        {
          "field": "system.id",
          "sortOrder": "ASC"
        }
      ],
      "includeFacets": true
    }
  ]'
Response:
{
  "count": 1,
  "value": {
    "Message": "An error has occurred."
  }
}
Just to answer my own question in case someone else needs it: from what I have read, you can't do this with just one or two API calls. For the following solution I created multiple REST API requests to TFS and called/processed them with Python 3.
What I did in order to update a test case if it exists, or create a new one if it does not:
Create in TFS a query which returns all test cases
Update the query using an API request and the ID of the query you created (I specified the iteration and team explicitly in the body of the request) (help)
Create an API request to get the results of that query (help)
In a loop, iterate over all the test case IDs the query returned
In the same loop, use the test case ID to request that specific work item (help)
Then compare the title of the returned item with the title you want to create from your automation script (ClassName_MethodName)
If the titles are the same, send a request to update that test case using the test ID you got, and break from the loop
If the titles are different, create a new test case and then break from the loop
I know this is not the fastest solution, but at least it is working!
If there is an easier and faster way, I am more than happy to follow it.
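For reference, here is a minimal shell sketch of the flow above using curl and jq. The stored query ID, URLs, and credentials are hypothetical placeholders; the endpoints are the standard TFS work item APIs used in the requests above:
#!/bin/bash
# Hypothetical values: substitute your collection URL, project, stored query ID and credentials.
BASE='https://TFSURL:443/DefaultCollection/PROJECT/_apis'
AUTH='Authorization: Basic MYKEY'
WANTED_TITLE='ClassName_MethodName'

# Run the stored "all test cases" query and collect the returned work item IDs.
ids=$(curl -s -H "$AUTH" "$BASE/wit/wiql/QUERY_ID?api-version=4.1" | jq -r '.workItems[].id')

for id in $ids; do
  # Fetch each work item and read its title.
  title=$(curl -s -H "$AUTH" "$BASE/wit/workitems/$id?api-version=4.1" | jq -r '.fields["System.Title"]')
  if [ "$title" = "$WANTED_TITLE" ]; then
    # Matching test case found: update it with a json-patch body and stop.
    curl -s -X PATCH -H "$AUTH" -H 'Content-Type: application/json-patch+json' \
      -d '[{"op":"add","path":"/fields/System.IterationPath","value":"TEST\\Sprint 8"}]' \
      "$BASE/wit/workitems/$id?api-version=4.1"
    exit 0
  fi
done

# No match found: create a new test case exactly as in the question's POST above.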
I'm trying the Elasticsearch (6.4.2) SQL REST API on the Yelp dataset:
read -r -d '' sql <<'EOF'
{
  "query": "select city, COUNT(*) AS c from \"yelp.business\" group by city"
}
EOF
curl -v -XPOST "http://$host/_xpack/sql?format=txt" -H'Content-Type: application/json' -d"$sql"
I get the right response:
city | c
----------------------------+---------------
Cleveland |2977
Cleveland Heights |179
Cleveland Hghts. |1
East Cleveland |4
Mayfield Heights (Cleveland)|1
But when I translate the SQL to DSL:
query=`curl -v -XPOST "http://$host/_xpack/sql/translate?format=json" -H'Content-Type: application/json' -d"$sql"`
I get the following:
{
  "_source" : false,
  "size" : 0,
  "aggregations" : {
    "groupby" : {
      "composite" : {
        "sources" : [
          {
            "2467" : {
              "terms" : {
                "order" : "asc",
                "field" : "city.keyword",
                "missing_bucket" : false
              }
            }
          }
        ],
        "size" : 1000
      }
    }
  },
  "stored_fields" : "_none_"
}
Then I execute the translated DSL as a search request:
curl -XGET "http://$host/antkrill.event/_search" -H 'Content-Type: application/json' -d"$query"
and get the error:
failed to find field [city.keyword] and [missing_bucket] is not set
Why is the search with SQL OK, but the translated DSL gives an error?
My own problem! The second query was executed with the translated DSL on a different index!
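Running the translated DSL against the index the SQL actually targeted fixes it; a minimal sketch, reusing the $query variable from above:
curl -XGET "http://$host/yelp.business/_search" -H 'Content-Type: application/json' -d"$query"
The error appeared because antkrill.event has no city.keyword field, and missing_bucket is not set in the generated aggregation.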
I want to configure my elasticsearch 0.19.11 to delete indexed documents every 60s. My elasticsearch config has these lines:
node.name: "Saurajeet"
index.ttl.disable_purge: false
index.ttl.interval: 60s
indices.ttl.interval: 60s
And it's not working.
I have 2 docs indexed and would expect them to be gone after 60s.
$ curl -XGET http://localhost:9200/twitter/_settings?pretty=true
{
  "twitter" : {
    "settings" : {
      "index.version.created" : "191199",
      "index.number_of_replicas" : "1",
      "index.number_of_shards" : "5"
    }
  }
}
Also, if I try to do the following, it does not have any effect:
$ curl -XPUT http://localhost:9200/twitter/_settings -d '
> { "twitter": {
> "settings" : {
> "index.ttl.interval": "60s"
> }
> }
> }
> '
{"ok":true}~/bin/elasticsearc
$ curl -XGET http://localhost:9200/twitter/_settings?pretty=true
{
  "twitter" : {
    "settings" : {
      "index.version.created" : "191199",
      "index.number_of_replicas" : "1",
      "index.number_of_shards" : "5"
    }
  }
}
I have indexed 2 documents and they are still showing up after 1 hr.
$ curl -XPUT 'http://localhost:9200/twitter/tweet/1' -d '
{
  "user": "kimchy",
  "postDate": "2009-11-15T13:12:00",
  "message": "Trying out Elastic Search, so far so good?"
}'
$ curl -XPUT 'http://localhost:9200/twitter/tweet/2' -d '
{
  "user": "kimchy",
  "postDate": "2009-11-15T13:12:00",
  "message": "Trying out Elastic Search, so far so good?"
}'
What did I do wrong?
P.S. I want to deploy this config with Logstash, so any other alternative can be suggested; for scaling reasons I don't want this autopurge to be a script.
I believe the indices.ttl.interval setting is only to tweak the cleanup process timing.
You would need to set the _ttl field for the index/type in order to expire it. It looks like this:
{
  "tweet" : {
    "_ttl" : { "enabled" : true, "default" : "60s" }
  }
}
http://www.elasticsearch.org/guide/reference/mapping/ttl-field/
Finally figured it out myself. I upgraded elasticsearch to 1.2.0. You can set TTLs via the Mapping API -> Put Mapping -> TTL.
Enabling TTL at the type level on an index:
$ curl -XPOST http://localhost:9200/abc/a/_mapping -d '
{
  "a": {
    "_ttl": {
      "enabled": true,
      "default": "10000ms"
    }
  }
}'
$ curl -XPOST http://localhost:9200/abc/a/a1 -d '{"test": "true"}'
$ curl -XGET http://localhost:9200/abc/a/a1?pretty
{
  "_index" : "abc",
  "_type" : "a",
  "_id" : "a1",
  "_version" : 1,
  "found" : true,
  "_source" : {"test": "true"}
}
$ # After 10s
$ curl -XGET http://localhost:9200/abc/a/a1?pretty
{
  "_index" : "abc",
  "_type" : "a",
  "_id" : "a1",
  "found" : false
}
Note:
The mapping applies only to docs created after the mapping itself was created.
Also, the mapping was created for type a, so if you post to type b and expect it to expire via TTL, that's not gonna happen.
If you need an expiring index, you can also supply the mapping at index-creation time, to precreate indexes from your application logic, as sketched below.
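A minimal sketch of that, assuming the same index and type names as above (the _ttl mapping is supplied in the create-index call itself):
$ curl -XPUT http://localhost:9200/abc -d '
{
  "mappings": {
    "a": {
      "_ttl": {
        "enabled": true,
        "default": "10000ms"
      }
    }
  }
}'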