Elasticsearch DSL translated from SQL is not working - sql

I'm trying the Elasticsearch (6.4.2) SQL REST API on the Yelp dataset:
read -r -d '' sql <<'EOF'
{
"query":"select city, COUNT(*) AS c from \"yelp.business\" group by city"
}
EOF
curl -v -XPOST "http://$host/_xpack/sql?format=txt" -H'Content-Type: application/json' -d"$sql"
and get the correct response:
city | c
----------------------------+---------------
Cleveland |2977
Cleveland Heights |179
Cleveland Hghts. |1
East Cleveland |4
Mayfield Heights (Cleveland)|1
But when I translate the SQL to DSL,
query=`curl -v -XPOST "http://$host/_xpack/sql/translate?format=json" -H'Content-Type: application/json' -d"$sql"`
I get the following:
{
"_source" : false,
"size" : 0,
"aggregations" : {
"groupby" : {
"composite" : {
"sources" : [
{
"2467" : {
"terms" : {
"order" : "asc",
"field" : "city.keyword",
"missing_bucket" : false
}
}
}
],
"size" : 1000
}
}
},
"stored_fields" : "_none_"
}
Then I execute the translated DSL as a search request:
curl -XGET "http://$host/antkrill.event/_search" -H 'Content-Type: application/json' -d"$query"
and get this error:
failed to find field [city.keyword] and [missing_bucket] is not set
Why does the search via SQL work, but the translated DSL return an error?

My own mistake! The second query ran the translated DSL against a different index (antkrill.event instead of yelp.business)!
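For anyone else who hits this: the translated DSL has to be run against the same index the SQL statement queried. A minimal sketch, reusing $query from the translate call above:
# Run the translated DSL against the index the SQL targeted (yelp.business),
# not against a different one like antkrill.event
curl -XGET "http://$host/yelp.business/_search" -H 'Content-Type: application/json' -d"$query"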

Related

Fulfillment Shopify API not working (2023-01)

I am trying to use the /fulfillments.json API in Shopify but I am getting the error {"errors":"Not Found"}.
My code is here:
curl --location --request POST 'https://logixgrid-save.myshopify.com/admin/api/2023-01/fulfillments.json' \
--header 'X-Shopify-Access-Token: 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' \
--header 'Content-Type: application/json' \
--data-raw '{
"fulfillment": {
"message": "The package was shipped this morning.",
"notify_customer": false,
"tracking_info": {
"number": 1562678,
"url": "https://www.my-shipping-company.com",
"company": "my-shipping-company"
},
"line_items_by_fulfillment_order": [
{
"fulfillment_order_id": 5247929286964,
"fulfillment_order_line_items": [
{
"id": 1058737495,
"quantity": 1
}
]
}
]
}
}'
Here are my test store details; you can try this, I will delete all of it after 2 days.
I am getting this response:
{"errors":"Not Found"}
Most likely this is because the id 1058737495 was taken directly from the documentation and you don't have such a line_item in your order.
EDIT: Your fulfillment order id is wrong. You can get the correct one by fetching /admin/api/2023-01/orders/{{ order.id }}/fulfillment_orders.json. This will return an array of fulfillment orders. Try one of the ids - it should work.
curl -X GET "https://redacted.myshopify.com/admin/api/2023-01/orders/5250054553908/fulfillment_orders.json" \
-H "X-Shopify-Access-Token: shpca_redacted"

Missing parameters - unable to load CSV from S3 to Neptune

I'm trying to load files from S3 to Neptune using the curl command from the Neptune documentation:
curl -X POST \
-H 'Content-Type: application/json' \
https://your-neptune-endpoint:port/loader -d '
{
"source" : "s3://bucket-name/object-key-name",
"format" : "format",
"iamRoleArn" : "arn:aws:iam::account-id:role/role-name",
"region" : "region",
"failOnError" : "FALSE",
"parallelism" : "MEDIUM",
"updateSingleCardinalityProperties" : "FALSE",
"queueRequest" : "FALSE"
}'
I entered all the requested parameters and am receiving the following error: missing required parameters.
I tried to load the CSV from S3 to Neptune; I expected to receive a message with the load id but got an error instead.
I tried to change the source as the comment suggested and still get the same error.
I was able to take your command (substituting my server and credentials) and it worked fine. This is the command I used:
curl -X POST \
-H 'Content-Type: application/json' \
https://my-cluster.cluster-aaaabbbbcccc.us-east-1.neptune.amazonaws.com:8182/loader -d '
{
"source" : "s3://bucket/prefix/myfile.csv",
"format" : "csv",
"iamRoleArn" : "arn:aws:iam::111122223333:role/NeptuneLoadFromS3",
"region" : "us-east-1",
"failOnError" : "FALSE",
"parallelism" : "MEDIUM",
"updateSingleCardinalityProperties" : "FALSE",
"queueRequest" : "FALSE"
}'
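If the POST is accepted, Neptune returns a loadId; as a quick sanity check you can then poll the loader for that id. A sketch, reusing the same hypothetical cluster endpoint as above (replace LOAD_ID with the value from the response):
# Check bulk-load progress and any per-record errors for a given load
curl "https://my-cluster.cluster-aaaabbbbcccc.us-east-1.neptune.amazonaws.com:8182/loader/LOAD_ID?details=true&errors=true"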

ICINGA2 API not making host modification

I have a problem with the Icinga2 API.
I'm trying to add new variables with a POST call; I'm getting the expected result,
but Icinga2 doesn't actually add the new var.
According to the documentation:
http://docs.icinga.org/icinga2/latest/doc/module/icinga2/chapter/icinga2-api
With the following API call, I create all our hosts in Vienna:
curl -k -s -u root:icinga -H 'Accept: application/json' -X PUT 'https://localhost:5665/v1/objects/hosts/server.example.com' \
-d '{ "templates": [ "generic-host" ], "attrs": { "zone": "Vienna", "address": "180.33.1.123", "check_command": "hostalive", "vars.os" : "Linux", "vars.agent" : "ssh" } }' \
| python -m json.tool
While this part works as expected, the problem is that once a host is created, I need to add various vars for different servers.
For example, adding the variable: "vars.servicename" : "DHCP_Servers"
Going back to the documentation, the call below is the one that needs to be executed:
curl -k -s -u root:icinga -H 'Accept: application/json' -X POST 'https://localhost:5665/v1/objects/hosts/server.example.com' \
-d '{ "templates": [ "generic-host" ], "attrs": { "zone": "Vienna", "address": "180.33.1.123", "check_command": "hostalive", "vars.os" : "Linux", "vars.agent" : "ssh", "vars.servicename" : "DHCP_Servers" } }' \
| python -m json.tool
When I run the call, as expected I get back:
{
"results": [
{
"code": 200.0,
"name": "server.example.com",
"status": "Attributes updated.",
"type": "Host"
}
]
}
But no changes take place in Icinga / the host file.
Obviously the same user as in my inbox and on the forums (https://monitoring-portal.org/index.php?thread/37160-adding-vars-with-api/&postID=234885#post234885) lately. Leaving this as a note here as it might help others to see why it does not work. That feature is just not implemented, as it involves storing the applied changes, doing a rollback, and re-applying. Not as simple as it sounds.
https://dev.icinga.org/issues/11501
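A quick way to see what the API actually reports for the host afterwards is to read the object back and inspect its vars; a sketch in the same style as the calls above:
# Read the host back and inspect its vars as the API currently sees them
curl -k -s -u root:icinga -H 'Accept: application/json' \
'https://localhost:5665/v1/objects/hosts/server.example.com?attrs=vars' \
| python -m json.tool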

How to get the most recent Elasticsearch results using the search API

I have an Elasticsearch cluster and am trying to query it using the RESTful search API. My query would return the oldest results, but I wanted the newest, so I added a range filter:
curl -XGET 'https://cluster.com/_search' -d '{
"from": 0, "size": 10000,
"range" : {
"#timestamp" : {
"gt": "now-1h"
}
}
}'
But I get the following error
"error":"SearchPhaseExecutionException[Failed to execute phase [query],.....Parse Failure [Failed to parse source.........Parse Failure [No parser for element [range]]]
I've tried using #timestamp, timestamp, and _timestamp as well for variable names but that didn't work. I've also confirmed that it is the range option that is causing the request to fail.
Any help would be appreciated.
Your query is not formatted correctly; you are missing a "query" level:
curl -XGET 'https://cluster.com/_search' -d '{
"from": 0, "size": 10000,
"query": {
"range" : {
"#timestamp" : {
"gt": "now-1h"
}
}
}
}'
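Since the goal is the most recent results, it may also help to sort on the timestamp field in descending order. A sketch, assuming the field really is named #timestamp in your mapping:
curl -XGET 'https://cluster.com/_search' -d '{
"from": 0, "size": 100,
"sort": [ { "#timestamp": { "order": "desc" } } ],
"query": {
"range" : {
"#timestamp" : { "gt": "now-1h" }
}
}
}'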

Elasticsearch: Auto Indices Deletion/Expiry

I want to configure my Elasticsearch 0.19.11 to delete indexes every 60s. My elasticsearch config has these lines:
node.name: "Saurajeet"
index.ttl.disable_purge: false
index.ttl.interval: 60s
indices.ttl.interval: 60s
And it's not working.
I have 2 docs indexed and would expect them to be gone after 60s.
$ curl -XGET http://localhost:9200/twitter/_settings?pretty=true
{
"twitter" : {
"settings" : {
"index.version.created" : "191199",
"index.number_of_replicas" : "1",
"index.number_of_shards" : "5"
}
}
}
Also, if I try to do the following, it does not have any effect:
$ curl -XPUT http://localhost:9200/twitter/_settings -d '
> { "twitter": {
> "settings" : {
> "index.ttl.interval": "60s"
> }
> }
> }
> '
{"ok":true}~/bin/elasticsearc
$ curl -XGET http://localhost:9200/twitter/_settings?pretty=true
{
"twitter" : {
"settings" : {
"index.version.created" : "191199",
"index.number_of_replicas" : "1",
"index.number_of_shards" : "5"
}
}
}
I have indexed 2 documents and they are still showing up after 1 hour:
$ curl -XPUT 'http://localhost:9200/twitter/tweet/1' -d '
{
"user": "kimchy",
"postDate": "2009-11-15T13:12:00",
"message": "Trying out Elastic Search, so far so good?"
}'
$ curl -XPUT 'http://localhost:9200/twitter/tweet/2' -d '
{
"user": "kimchy",
"postDate": "2009-11-15T13:12:00",
"message": "Trying out Elastic Search, so far so good?"
}'
What did I do wrong?
P.S. I want to deploy this config with Logstash, so any other alternative can also be suggested.
For scaling reasons I don't want this autopurge to be a script.
I believe the indices.ttl.interval setting is only to tweak the cleanup process timing.
You would need to set the _ttl field for the index/type in order to expire it. It looks like this:
{
"tweet" : {
"_ttl" : { "enabled" : true, "default" : "60s" }
}
}
http://www.elasticsearch.org/guide/reference/mapping/ttl-field/
Finally figured it out myself. I upgraded Elasticsearch to version 1.2.0. You can set TTLs via the Mapping API -> Put Mapping -> TTL.
Enabling TTL at the type level of an index:
$ curl -XPOST http://localhost:9200/abc/a/_mapping -d '
{
"a": {
"_ttl": {
"enabled": true,
"default": "10000ms"
}
}
}'
$ curl -XPOST http://localhost:9200/abc/a/a1 -d '{"test": "true"}'
$ curl -XGET http://localhost:9200/abc/a/a1?pretty
{
"_index" : "abc",
"_type" : "a",
"_id" : "a1",
"_version" : 1,
"found" : true,
"_source":{"test": "true"}
}
$ # After 10s
$ curl -XGET http://localhost:9200/abc/a/a1?pretty
{
"_index" : "abc",
"_type" : "a",
"_id" : "a1",
"found" : false
}
Note:
The mapping only applies to docs created after the mapping itself was created.
Also, the mapping was created for type a, so if you post to type b and expect it to expire via TTL, that's not going to happen.
If you need to expire an index, you can also create the type mappings at index-creation time, pre-creating indexes from your application logic.
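A rough sketch of that on 1.2.0 (myindex is a placeholder index name; _ttl only exists in these old versions):
$ # Hypothetical example: create the index with the _ttl mapping for type a up front
$ curl -XPUT http://localhost:9200/myindex -d '
{
"mappings": {
"a": {
"_ttl": { "enabled": true, "default": "10000ms" }
}
}
}'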