Unable to create index in Elasticsearch using API

I am trying to create an index in Elasticsearch via the API, using the following mapping in the Kibana Dev Tools console. Once the index is created, I want to use the Reindex API to copy documents from an existing index.
PUT /ipflow-logs
{
  "ipflow-logs" : {
    "mappings" : {
      "properties" : {
        "conn_state" : { "type" : "keyword" },
        "content_length" : { "type" : "long" },
        "content_type" : { "type" : "keyword" },
        "createdDate" : { "type" : "keyword" },
        "dst_ip" : { "type" : "ip" },
        "dst_port" : { "type" : "long" },
        "duration" : { "type" : "long" },
        "history" : { "type" : "keyword" },
        "local_orig" : { "type" : "keyword" },
        "missed_bytes" : { "type" : "long" },
        "orig_bytes" : { "type" : "long" },
        "orig_ip_bytes" : { "type" : "long" },
        "orig_pkts" : { "type" : "long" },
        "protocol" : { "type" : "keyword" },
        "resp_bytes" : { "type" : "long" },
        "resp_ip_bytes" : { "type" : "long" },
        "resp_pkts" : { "type" : "long" },
        "service" : { "type" : "keyword" },
        "src_ip" : { "type" : "ip" },
        "src_port" : { "type" : "long" },
        "timestamp" : { "type" : "date", "format" : "yyyy-MM-dd 'T' HH:mm:ss.SSS" },
        "uid" : { "type" : "keyword" }
      }
    }
  }
}
I am getting the below error when I try to create the index.
"type": "parse_exception", "reason": "unknown key [ipflow-logs] for create index", "status": 400
Any help is appreciated. Thanks

You need to do it this way, i.e. without repeating the index name inside the request body, so that mappings sits at the top level:
PUT /ipflow-logs
{
  "mappings": {
    "properties": {
      "conn_state": { "type": "keyword" },
      "content_length": { "type": "long" },
      "content_type": { "type": "keyword" },
      "createdDate": { "type": "keyword" },
      "dst_ip": { "type": "ip" },
      "dst_port": { "type": "long" },
      "duration": { "type": "long" },
      "history": { "type": "keyword" },
      "local_orig": { "type": "keyword" },
      "missed_bytes": { "type": "long" },
      "orig_bytes": { "type": "long" },
      "orig_ip_bytes": { "type": "long" },
      "orig_pkts": { "type": "long" },
      "protocol": { "type": "keyword" },
      "resp_bytes": { "type": "long" },
      "resp_ip_bytes": { "type": "long" },
      "resp_pkts": { "type": "long" },
      "service": { "type": "keyword" },
      "src_ip": { "type": "ip" },
      "src_port": { "type": "long" },
      "timestamp": { "type": "date", "format": "yyyy-MM-dd 'T' HH:mm:ss.SSS" },
      "uid": { "type": "keyword" }
    }
  }
}
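Once the index is created correctly, the documents can be copied over with the Reindex API. A minimal sketch, where old-ipflow-logs is a placeholder for the name of your existing source index:
POST _reindex
{
  "source": { "index": "old-ipflow-logs" },
  "dest": { "index": "ipflow-logs" }
}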

Related

Does JSON Schema have a switch-like structure?

Consider the following example:
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "Animal",
  "type": "object",
  "properties": {
    "type": {
      "type": "string",
      "description": "Type of animal."
    },
    "data": {
      "$ref": "#/definitions/cat"
    }
  },
  "definitions": {
    "cat": {
      "properties": {
        "meow": { "type": "string" }
      }
    }
  },
  "required": ["type"]
}
and a valid JSON instance is:
{
  "type" : "cat",
  "data" : {
    "meow" : "OK"
  }
}
Now I have an enum of animal types, and the data ref will vary based on the type of animal.
I have tried if/else, but it seems inefficient, as the conditions will keep on growing.
I also used anyOf, but how will I make sure that meow always belongs to the animal type cat and not dog?
Can we have something like:
cat : { "$ref" : "#/definitions/cat" },
dog : { "$ref" : "#/definitions/dog" }
EDIT: Or a dynamic value in the ref, like #/definitions/{type-value}?
Thanks in advance.
I have tried if/else, but it seems inefficient, as the conditions will keep on growing.
Can we have something like...
No. JSON Schema (2019-09 and previous) doesn't have a "switch".
You'll need to use allOf to combine multiple if/then conditions, as sketched below.
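A minimal sketch of that pattern, reusing the cat/dog definitions from the question:
{
  "allOf": [
    {
      "if": { "properties": { "type": { "const": "cat" } } },
      "then": { "properties": { "data": { "$ref": "#/definitions/cat" } } }
    },
    {
      "if": { "properties": { "type": { "const": "dog" } } },
      "then": { "properties": { "data": { "$ref": "#/definitions/dog" } } }
    }
  ]
}
Each if checks the discriminating type value, and only the matching then branch constrains data.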
After @Relequestual's response and some more digging, I found there is no such way, at least in this version. Fingers crossed for future releases.
Here is my solution, feel free to suggest improvements.
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "Animal",
  "type": "object",
  "properties": {
    "type": {
      "type": "string",
      "description": "Type of animal."
    }
  },
  "definitions": {
    "cat": {
      "properties": {
        "meow": { "type": "string" }
      },
      "required": ["meow"]
    },
    "dog": {
      "properties": {
        "bhow": { "type": "string" }
      },
      "required": ["bhow"]
    }
  },
  "oneOf": [
    {
      "properties": {
        "type": { "const": "cat" },
        "data": { "$ref": "#/definitions/cat" }
      }
    },
    {
      "properties": {
        "type": { "const": "dog" },
        "data": { "$ref": "#/definitions/dog" }
      }
    }
  ],
  "required": ["type"]
}
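For example, {"type": "cat", "data": {"meow": "OK"}} satisfies exactly the first oneOf branch and validates, while {"type": "cat", "data": {"bhow": "OK"}} satisfies neither branch and is rejected.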

Convert JSON to Avro with NiFi

I'm trying to read a RabbitMQ queue and transfer the data to Hive.
My flow is: ConsumeAMQP -> ConvertJSONToAvro -> PutHiveStreaming.
I get an error on the ConvertJSONToAvro processor.
JSON:
{
  "bn": "/27546/0",
  "bt": 48128.94568269015,
  "e": [
    { "n": "1000", "sv": "8125333b8-5cae-4c8d-a5312-bbb215211dab" },
    { "n": "1001", "v": 57.520565032958984 },
    { "n": "1002", "v": 22.45258230712891 },
    { "n": "1003", "v": 1331.0 },
    { "n": "1005", "v": 53.0 },
    { "n": "1011", "v": 50.0 },
    { "n": "5518", "t": 44119.703412761854 },
    { "n": "1023", "v": 0.0 },
    { "n": "1024", "v": 48128.94568269015 },
    { "n": "1025", "v": 7.0 }
  ]
}
Record schema:
{
  "type": "record",
  "namespace": "nifi",
  "fields": [
    { "name": "bn", "type": "string" },
    { "name": "bt", "type": "number" },
    {
      "name": "e",
      "type": "array",
      "items": {
        "type": "record",
        "fields": [
          { "name": "n", "type": "string" },
          { "name": "sv", "type": "string" },
          { "name": "v", "type": "number" },
          { "name": "t", "type": "number" }
        ]
      }
    }
  ]
}
Error:
Record schema validated against '{"type":"record"...
I could not figure out what was wrong.
"items" : {
"type" : "record",
You need to add a name to this new record type. Avro doesn't allow "anonymous" record types.
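For illustration, here is a sketch of the schema with names added. The record names Event and Entry are placeholders (any valid Avro name works). Note also that "number" is not an Avro primitive type, so double is assumed for the numeric fields, and sv/v/t are modeled as nullable unions because not every element of e carries all of them:
{
  "type": "record",
  "name": "Event",
  "namespace": "nifi",
  "fields": [
    { "name": "bn", "type": "string" },
    { "name": "bt", "type": "double" },
    {
      "name": "e",
      "type": {
        "type": "array",
        "items": {
          "type": "record",
          "name": "Entry",
          "doc": "Avro requires a name here; 'Entry' is only a placeholder",
          "fields": [
            { "name": "n", "type": "string" },
            { "name": "sv", "type": [ "null", "string" ], "default": null },
            { "name": "v", "type": [ "null", "double" ], "default": null },
            { "name": "t", "type": [ "null", "double" ], "default": null }
          ]
        }
      }
    }
  ]
}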

Can we declare the stage name inside the serverless.template file?

We are creating an AWS Serverless Lambda function using .NET Core. When we deploy this Lambda function, a "Prod" stage suffix is automatically added to the URL, but we want to change it to "dev". Can we declare the stage name inside the serverless.template file?
Here is my serverless.template file:
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Transform" : "AWS::Serverless-2016-10-31",
  "Description" : "An AWS Serverless Application that uses the ASP.NET Core framework running in Amazon Lambda.",
  "Parameters" : {},
  "Conditions" : {},
  "Resources" : {
    "Get" : {
      "Type" : "AWS::Serverless::Function",
      "Properties": {
        "Handler": "F2C.MAP.API.AWSLambda.PublicAPI::F2C.MAP.API.AWSLambda.PublicAPI.LambdaEntryPoint::FunctionHandlerAsync",
        "Runtime": "dotnetcore2.0",
        "CodeUri": "",
        "MemorySize": 256,
        "Timeout": 30,
        "Role": null,
        "Policies": [ "AWSLambdaFullAccess" ],
        "Environment" : {
          "Variables" : {}
        },
        "Events": {
          "PutResource": {
            "Type": "Api",
            "Properties": {
              "Path": "/{proxy+}",
              "Method": "GET"
            }
          }
        }
      }
    },
    "POST" : {
      "Type" : "AWS::Serverless::Function",
      "Properties": {
        "Handler": "F2C.MAP.API.AWSLambda.PublicAPI::F2C.MAP.API.AWSLambda.PublicAPI.LambdaEntryPoint::FunctionHandlerAsync",
        "Runtime": "dotnetcore2.0",
        "CodeUri": "",
        "MemorySize": 256,
        "Timeout": 30,
        "Role": null,
        "Policies": [ "AWSLambdaFullAccess" ],
        "Environment" : {
          "Variables" : {}
        },
        "Events": {
          "PutResource": {
            "Type": "Api",
            "Properties": {
              "Path": "/{proxy+}",
              "Method": "POST"
            }
          }
        }
      }
    }
  },
  "Outputs" : {}
}
We are using the AWS Toolkit for Visual Studio 2017 to deploy the AWS serverless Lambda (https://aws.amazon.com/blogs/developer/preview-of-the-aws-toolkit-for-visual-studio-2017).
The only way I could find to make this work is to specify an AWS::Serverless::Api resource to use. The following example sets both the StageName and the ASPNETCORE_ENVIRONMENT variable to the EnvironmentName parameter.
serverless.template:
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Transform" : "AWS::Serverless-2016-10-31",
  "Description" : "An AWS Serverless Application that uses the ASP.NET Core framework running in Amazon Lambda.",
  "Parameters" : {
    "EnvironmentName" : {
      "Type" : "String",
      "Description" : "Sets the ASPNETCORE_ENVIRONMENT variable as well as the API's StageName to this.",
      "MinLength" : "0"
    }
  },
  "Resources" : {
    "ProxyFunction" : {
      "Type" : "AWS::Serverless::Function",
      "Properties": {
        "Handler": "PeopleGateway::PeopleGateway.LambdaEntryPoint::FunctionHandlerAsync",
        "Runtime": "dotnetcore2.0",
        "CodeUri": "",
        "MemorySize": 256,
        "Timeout": 30,
        "Role": null,
        "Policies": [ "AWSLambdaFullAccess", "AWSLambdaVPCAccessExecutionRole" ],
        "Environment" : {
          "Variables" : {
            "ASPNETCORE_ENVIRONMENT": { "Ref" : "EnvironmentName" }
          }
        },
        "Events": {
          "PutResource": {
            "Type": "Api",
            "Properties": {
              "Path": "/{proxy+}",
              "Method": "ANY",
              "RestApiId": { "Ref": "APIGateway" }
            }
          }
        }
      }
    },
    "APIGateway": {
      "Type" : "AWS::Serverless::Api",
      "Properties": {
        "StageName": { "Ref" : "EnvironmentName" },
        "DefinitionBody": {
          "swagger": "2.0",
          "info": {
            "title": { "Ref": "AWS::StackName" }
          },
          "paths": {
            "/{proxy+}": {
              "x-amazon-apigateway-any-method": {
                "x-amazon-apigateway-integration": {
                  "httpMethod": "POST",
                  "type": "aws_proxy",
                  "uri": {
                    "Fn::Sub": "arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${ProxyFunction.Arn}/invocations"
                  }
                },
                "responses": {}
              }
            }
          }
        }
      }
    }
  },
  "Outputs" : {
    "ApiURL" : {
      "Description" : "API endpoint URL for the specified environment",
      "Value" : { "Fn::Sub" : "https://${APIGateway}.execute-api.${AWS::Region}.amazonaws.com/${EnvironmentName}/" }
    }
  }
}
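With the EnvironmentName parameter in place, the stage is chosen at deploy time. As a sketch, with the Amazon.Lambda.Tools CLI (its --template-parameters flag takes key=value pairs; the exact invocation may vary with toolkit version):
dotnet lambda deploy-serverless --template-parameters "EnvironmentName=dev"
The AWS Toolkit for Visual Studio deployment wizard should also prompt for template parameters, so EnvironmentName can be set there as well.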

AWS API Gateway Query parameter validation

I have been trying to validate my request parameters using x-amazon-apigateway-request-validator, but unfortunately it is not working. Below is the Swagger file:
{
  "swagger": "2.0",
  "info": {
    "title": "API Gateway - Request Validation Demo"
  },
  "schemes": [ "https" ],
  "produces": [ "application/json" ],
  "x-amazon-apigateway-request-validators" : {
    "full" : {
      "validateRequestBody" : true,
      "validateRequestParameters" : true
    },
    "body-only" : {
      "validateRequestBody" : true,
      "validateRequestParameters" : false
    }
  },
  "x-amazon-apigateway-request-validator" : "full",
  "paths": {
    "/orders": {
      "post": {
        "x-amazon-apigateway-request-validator": "body-only",
        "parameters": [
          {
            "in": "body",
            "name": "CreateOrders",
            "required": true,
            "schema": { "$ref": "#/definitions/CreateOrders" }
          }
        ],
        "responses": {
          "200": { "schema": { "$ref": "#/definitions/Message" } },
          "400": { "schema": { "$ref": "#/definitions/Message" } }
        },
        "x-amazon-apigateway-integration": {
          "responses": {
            "default": {
              "statusCode": "200",
              "responseTemplates": {
                "application/json": "{\"message\" : \"Orders successfully created\"}"
              }
            }
          },
          "requestTemplates": {
            "application/json": "{\"statusCode\": 200}"
          },
          "passthroughBehavior": "never",
          "type": "mock"
        }
      },
      "get": {
        "x-amazon-apigateway-request-validator": "full",
        "parameters": [
          {
            "in": "header",
            "name": "Account-Id",
            "required": true
          },
          {
            "in": "query",
            "name": "type",
            "required": false,
            "schema": { "$ref": "#/definitions/InputOrders" }
          }
        ],
        "responses": {
          "200": { "schema": { "$ref": "#/definitions/Orders" } },
          "400": { "schema": { "$ref": "#/definitions/Message" } }
        },
        "x-amazon-apigateway-integration": {
          "responses": {
            "default": {
              "statusCode": "200",
              "responseTemplates": {
                "application/json": "[{\"order-id\" : \"qrx987\",\n \"type\" : \"STOCK\",\n \"symbol\" : \"AMZN\",\n \"shares\" : 100,\n \"time\" : \"1488217405\",\n \"state\" : \"COMPLETED\"\n},\n{\n \"order-id\" : \"foo123\",\n \"type\" : \"STOCK\",\n \"symbol\" : \"BA\",\n \"shares\" : 100,\n \"time\" : \"1488213043\",\n \"state\" : \"COMPLETED\"\n}\n]"
              }
            }
          },
          "requestTemplates": {
            "application/json": "{\"statusCode\": 200}"
          },
          "passthroughBehavior": "never",
          "type": "mock"
        }
      }
    }
  },
  "definitions": {
    "CreateOrders": {
      "$schema": "http://json-schema.org/draft-04/schema#",
      "title": "Create Orders Schema",
      "type": "array",
      "minItems": 1,
      "items": {
        "type": "object",
        "$ref": "#/definitions/Order"
      }
    },
    "Orders": {
      "type": "array",
      "$schema": "http://json-schema.org/draft-04/schema#",
      "title": "Get Orders Schema",
      "items": {
        "type": "object",
        "properties": {
          "order_id": { "type": "string" },
          "time": { "type": "string" },
          "state": {
            "type": "string",
            "enum": [ "PENDING", "COMPLETED" ]
          },
          "order": { "$ref": "#/definitions/Order" }
        }
      }
    },
    "Order": {
      "type": "object",
      "$schema": "http://json-schema.org/draft-04/schema#",
      "title": "Schema for a single Order",
      "required": [ "account-id", "type", "symbol", "shares", "details" ],
      "properties": {
        "account-id": {
          "type": "string",
          "pattern": "[A-Za-z]{6}[0-9]{6}"
        },
        "type": {
          "type": "string",
          "enum": [ "STOCK", "BOND", "CASH" ]
        },
        "symbol": {
          "type": "string",
          "minLength": 1,
          "maxLength": 4
        },
        "shares": {
          "type": "number",
          "minimum": 1,
          "maximum": 1000
        },
        "details": {
          "type": "object",
          "required": [ "limit" ],
          "properties": {
            "limit": { "type": "number" }
          }
        }
      }
    },
    "InputOrder": {
      "type": "object",
      "$schema": "http://json-schema.org/draft-04/schema#",
      "title": "Schema for a Input Order",
      "required": [ "type" ],
      "properties": {
        "type": {
          "type": "string",
          "enum": [ "STOCK", "BOND", "CASH" ]
        }
      }
    },
    "Message": {
      "type": "object",
      "properties": {
        "message": { "type": "string" }
      }
    }
  }
}
I am trying to validate my request parameters against some regex and enum values.
I am not sure if this is even possible. Can anybody please help me with this?
For HTTP parameter validation, API Gateway only supports marking a parameter as 'required'. It does not support regex or enum validation for parameters.

Elasticsearch: aggregate nested objects

I have data in the following structure:
"mappings" : {
  "PERSON" : {
    "properties" : {
      "ADDRESS" : {
        "type": "nested",
        "properties" : {
          "STREET" : {
            "type": "nested",
            "properties" : {
              "street": { "type": "string" },
              "number": { "type": "integer" }
            }
          },
          "CITY" : {
            "type": "nested",
            "properties" : {
              "name": { "type": "string" },
              "size": { "type": "integer" }
            }
          },
          "country": { "type": "string" }
        }
      },
      "INFORMATION" : {
        "type": "nested",
        "properties" : {
          "age": { "type": "integer" },
          "sex": { "type": "string" }
        }
      },
      "name" : { "type": "string" }
    }
  }
}
I want to aggregate the nested objects dynamically, in the form
[object type: count of records with this type].
E.g. for PERSON I want to get something like
[ADDRESS: 1000, INFORMATION: 1230]
and for ADDRESS:
[STREET: 200, CITY: 100]
Is this possible?
You can first filter based on PERSON or ADDRESS and then use a cardinality aggregation to get the count.
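A rough sketch of what that can look like (the index name person-index is a placeholder, and country is just an example field): a nested aggregation scopes the request to a given path, its doc_count in the response is the number of nested objects under that path, and a cardinality sub-aggregation counts distinct values within that scope.
GET /person-index/_search
{
  "size": 0,
  "aggs": {
    "addresses": {
      "nested": { "path": "ADDRESS" },
      "aggs": {
        "distinct_countries": {
          "cardinality": { "field": "ADDRESS.country" }
        }
      }
    },
    "information": {
      "nested": { "path": "INFORMATION" }
    }
  }
}
In the response, addresses.doc_count and information.doc_count give the per-path counts, i.e. something like the [ADDRESS: 1000, INFORMATION: 1230] shape from the question.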