Infinispan 14.0.1 Final cli.sh batch mode does not work anymore

I'm playing around with the new Infinispan version 14.0.1.Final, but it seems that
the cli.sh batch mode does not work anymore since this release.
I have tried the following:
echo describe | ${INFINISPAN_HOME}/bin/cli.sh -c http://****:*****#localhost:11222 -f - | grep RUNNING
command terminated with exit code 1
Without batch mode it works:
${INFINISPAN_HOME}/bin/cli.sh -c http://****:*****#localhost:11222
[f314abe638e2-57726#cluster//containers/default]> describe
{
"version" : "14.0.1.Final",
"name" : "default",
"coordinator" : true,
"cache_configuration_names" : [ "org.infinispan.REPL_ASYNC","___protobuf_metadata","org.infinispan.DIST_SYNC","org.infinispan.LOCAL","org.infinispan.INVALIDATION_SYNC","respCache","org.infinispan.REPL_SYNC","example.PROTOBUF_DIST","org.infinispan.SCATTERED_SYNC","org.infinispan.INVALIDATION_ASYNC","___script_cache","org.infinispan.DIST_ASYNC" ],
"cluster_name" : "cluster",
"physical_addresses" : "[10.88.0.35:7800]",
"coordinator_address" : "f314abe638e2-57726",
"cache_manager_status" : "RUNNING",
"created_cache_count" : 1,
"running_cache_count" : 1,
"node_address" : "f314abe638e2-57726",
"cluster_members" : [ "f314abe638e2-57726" ],
"cluster_members_physical_addresses" : [ "10.88.0.35:7800" ],
"cluster_size" : 1,
"defined_caches" : [ {
"name" : "___script_cache",
"started" : true
},{
"name" : "___protobuf_metadata",
"started" : true
},{
"name" : "respCache",
"started" : true
} ],
"local_site" : null,
"relay_node" : false,
"relay_nodes_address" : [ ],
"sites_view" : [ ],
"rebalancing_enabled" : true
}
With version 13.0.12 it still works:
echo describe | ${INFINISPAN_HOME}/bin/cli.sh -c http://****:*****#localhost:11222 -f - | grep RUNNING
"cache_manager_status" : "RUNNING",
bash-4.4$ echo describe | ${INFINISPAN_HOME}/bin/cli.sh -c http://cliuser:changeit#localhost:11222 -f -
{
"version" : "13.0.12.Final",
"name" : "default",
"coordinator" : true,
"cache_configuration_names" : [ "org.infinispan.REPL_ASYNC","___protobuf_metadata","org.infinispan.DIST_SYNC","org.infinispan.LOCAL","org.infinispan.INVALIDATION_SYNC","org.infinispan.REPL_SYNC","example.PROTOBUF_DIST","org.infinispan.SCATTERED_SYNC","org.infinispan.INVALIDATION_ASYNC","___script_cache","org.infinispan.DIST_ASYNC" ],
"cluster_name" : "cluster",
"physical_addresses" : "[10.88.0.36:7800]",
"coordinator_address" : "437060144b6b-64461",
"cache_manager_status" : "RUNNING",
"created_cache_count" : 0,
"running_cache_count" : 0,
"node_address" : "437060144b6b-64461",
"cluster_members" : [ "437060144b6b-64461" ],
"cluster_members_physical_addresses" : [ "10.88.0.36:7800" ],
"cluster_size" : 1,
"defined_caches" : [ {
"name" : "___protobuf_metadata",
"started" : true
},{
"name" : "___script_cache",
"started" : true
} ],
"local_site" : null,
"relay_node" : false,
"relay_nodes_address" : [ ],
"sites_view" : [ ],
"rebalancing_enabled" : true
}
Can someone reproduce this?

This is a bug where the CLI doesn't work with piped input: https://issues.redhat.com/browse/ISPN-14256
It will be fixed in Infinispan 14.0.3.Final.
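Until the fix lands, a possible workaround (an untested sketch, assuming the regression only affects input piped to -f - via stdin) is to pass a real batch file instead:
# Untested sketch: write the commands to a temporary batch file and pass it to -f directly
echo describe > /tmp/batch.txt
${INFINISPAN_HOME}/bin/cli.sh -c http://****:*****#localhost:11222 -f /tmp/batch.txt | grep RUNNING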

Related

Elasticsearch: POST bulk on Elastic X-Pack roles

I have an Elastic cluster with X-Pack enabled.
I'd like to make a backup of all the X-Pack roles that have been created:
GET _xpack/security/role
=> I get a big JSON document, for example:
{
"kibana_dashboard_only_user": {
"cluster": [],
"indices": [
{
"names": [
".kibana*"
],
"privileges": [
"read",
"view_index_metadata"
]
}
],
"run_as": [],
"metadata": {
"_reserved": true
},
"transient_metadata": {
"enabled": true
}
},
"watcher_admin": {
"cluster": [
"manage_watcher"
],
"indices": [
{
"names": [
".watches",
".triggered_watches",
".watcher-history-*"
],
"privileges": [
"read"
]
}
],
"run_as": [],
"metadata": {
"_reserved": true
},
"transient_metadata": {
"enabled": true
}
},
....
}
And now I'd like to put it back into the cluster (or another one). I cannot just PUT it to _xpack/security/role. If I understand correctly, I have to use bulk:
$ curl --user elastic:password https://elastic:9200/_xpack/security/_bulk?pretty -XPOST -H 'Content-Type: application/json' -d '
{"index":{"_index": "_xpack/security/role"}}
{"ROOOOLE" : {"cluster" : [ ],"indices" : [{"names" : [".kibana*"],"privileges" : ["read","view_index_metadata"]}],"run_as" : [ ],"metadata" : {"_reserved" : true},"transient_metadata" : {"enabled" : true}}}
'
But I get an error:
{
"took" : 3,
"errors" : true,
"items" : [
{
"index" : {
"_index" : "_xpack/security/role",
"_type" : "security",
"_id" : null,
"status" : 400,
"error" : {
"type" : "invalid_index_name_exception",
"reason" : "Invalid index name [_xpack/security/role], must not contain the following characters [ , \", *, \\, <, |, ,, >, /, ?]",
"index_uuid" : "_na_",
"index" : "_xpack/security/role"
}
}
}
]
}
Is there a way to do this easily? Or do I have to parse the JSON and PUT each role one by one to:
_xpack/security/role/rolexxx
_xpack/security/role/roleyyy
...
More globally, is there a way to get all the data of an index (a config index), then upload it back or push it into another cluster?
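For what it's worth, the role-by-role route can be scripted. A sketch, assuming jq is installed; it skips the reserved built-in roles and strips the metadata fields before the PUT, since metadata keys beginning with "_" are reserved for system use:
# Export all roles, then re-import each non-reserved role one by one (sketch, assumes jq)
curl -s --user elastic:password 'https://elastic:9200/_xpack/security/role' > roles.json
for role in $(jq -r 'to_entries[] | select(.value.metadata._reserved != true) | .key' roles.json); do
  jq ".[\"$role\"] | del(.metadata, .transient_metadata)" roles.json \
    | curl --user elastic:password -XPUT "https://elastic:9200/_xpack/security/role/$role" \
        -H 'Content-Type: application/json' -d @-
done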

Kubernetes configmap with Redis

I was following this tutorial to set up a ConfigMap for a redis.conf. After I create the Redis deployment, I check to ensure that the redis.conf file is in each of the pods, and it is. The problem is that when I go into redis-cli and check the configuration there, the redis.conf values aren't used. The default values are being used, as if Redis did not start up with the redis.conf file.
redis.conf
maxclients 2000
requirepass "test"
redis-config configmap
{
"apiVersion": "v1",
"data": {
"redis-config": "maxclients 2000\nrequirepass \"test\"\n\n"
},
"kind": "ConfigMap",
"metadata": {
"creationTimestamp": "2018-03-07T15:28:19Z",
"name": "redis-config",
"namespace": "default",
"resourceVersion": "2569562",
"selfLink": "/api/v1/namespaces/default/configmaps/redis-config",
"uid": "29d250ea-221c-11e8-969f-06c0c8d545d2"
}
}
k8 redis manifest.json
{
"kind" : "Deployment",
"apiVersion" : "extensions/v1beta1",
"metadata" : {
"name" : "redis-master",
"creationTimestamp" : null
},
"spec" : {
"replicas" : 2,
"template" : {
"metadata" : {
"creationTimestamp" : null,
"labels" : {
"app" : "redis",
"role" : "master",
"tier" : "backend"
}
},
"spec" : {
"hostNetwork" : true,
"nodeSelector" :{ "role": "cache"},
"containers" : [{
"name" : "master",
"image" : "redis",
"ports" : [{
"containerPort" : 6379,
"hostPort" : 6379,
"protocol" : "TCP"
}
],
"volumeMounts" : [{
"mountPath" : "/redis-master",
"name": "config"
}
],
"resources" : {},
"terminationMessagePath" : "/dev/termination-log",
"imagePullPolicy" : "IfNotPresent"
}],
"volumes" : [{
"name" : "config",
"configMap" : {
"name" : "redis-config",
"items": [{
"key": "redis-config",
"path": "redis.conf"
}]
}
}
],
"restartPolicy" : "Always",
"terminationGracePeriodSeconds" : 30,
"dnsPolicy" : "ClusterFirst",
"securityContext" : {}
}
}
},
"status" : {}
}
Now I know the tutorial uses a Pod kind, and I am using a Deployment kind, but I don't think that is the issue here.
It looks like you are pulling the default redis container. If you check the redis Dockerfiles, for example https://github.com/docker-library/redis/blob/d53b982b387634092c6f11069401679034054ecb/4.0/alpine/Dockerfile, at the bottom they have:
CMD ["redis-server"]
which starts Redis with the default configuration.
Per the Redis documentation (https://redis.io/topics/quickstart), under the "Starting Redis" section, if you want to provide a different configuration you need to start Redis with:
redis-server <config file>
Additionally, the example in the Kubernetes documentation uses a different Redis container:
image: kubernetes/redis
And from its Dockerfile (https://github.com/kubernetes/kubernetes/blob/master/examples/storage/redis/image/Dockerfile), it seems that one starts Redis with the provided configuration.
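With the stock redis image, the fix is therefore to override the container command in the manifest so that Redis starts with the mounted file. A minimal sketch against the Deployment above, where the ConfigMap item is mounted at /redis-master/redis.conf:
"containers" : [{
"name" : "master",
"image" : "redis",
"command" : [ "redis-server", "/redis-master/redis.conf" ],
...
}]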

Elasticsearch Not Returning Document By Field Name

Elasticsearch newb here. I seem to be having an issue selecting documents by a certain field. It feels like a corrupt index to me, but I'm not sure.
Here is a document that I can retrieve, and get the fields event.type and event.accountId:
$ curl -XGET 'http://127.0.0.1:9200/events-2015.04.08/event/AUyYpkl-r99VdGrSLpIX?pretty=1&fields=event.type,event.accountId'
{
"_index" : "events-2015.04.08",
"_type" : "event",
"_id" : "AUyYpkl-r99VdGrSLpIX",
"_version" : 1,
"found" : true,
"fields" : {
"event.type" : [ "USER_LOGIN" ],
"event.accountId" : [ 10399 ]
}
}
Notice the event.type: USER_LOGIN. Now I want to find all documents that have this field/value combination:
curl -XGET 'http://127.0.0.1:9200/events-2015.04.08/_search?q=event.type:USER_LOGIN&pretty=1'
{
"took" : 2,
"timed_out" : false,
"_shards" : {
"total" : 5,
"successful" : 5,
"failed" : 0
},
"hits" : {
"total" : 0,
"max_score" : null,
"hits" : [ ]
}
}
No results. I can find the document by event.accountId though:
$ curl -XGET 'http://127.0.0.1:9200/events-2015.04.08/_search?q=event.accountId:10399&pretty=1'
{
"took" : 2,
"timed_out" : false,
"_shards" : {
"total" : 5,
"successful" : 5,
"failed" : 0
},
"hits" : {
"total" : 2,
"max_score" : 1.0,
"hits" : [ {
"_index" : "events-2015.04.08",
"_type" : "event",
"_id" : "AUyYpkjCr99VdGrSLpIW",
"_score" : 1.0,
"_source": {...}
}, {
"_index" : "events-2015.04.08",
"_type" : "event",
"_id" : "AUyYpkl-r99VdGrSLpIX", # <-- This is the doc I want
"_score" : 1.0,
"_source": {...}
} ]
}
}
So is this field corrupt or something? How do I check? I expect to be able to find this document by event.type.
UPDATE
The document is being indexed with the SQS plugin to Logstash. Here is the relevant part of logstash.conf:
input {
sqs {
queue => "the_queue"
region => "us-west-2"
type => "event"
}
}
filter {
json {
source => "Message"
target => "event"
remove_field => [ "Message" ]
}
mutate {
rename => { "Type" => "EventType" }
}
date {
match => [ "Timestamp", "ISO8601" ]
}
}
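One way to check how the field behaves (a sketch; both are standard Elasticsearch 1.x endpoints) is to look at the mapping for event.type and at the tokens the analyzer produces for the stored value:
# Inspect the mapping to see whether event.type is analyzed or not_analyzed
curl -XGET 'http://127.0.0.1:9200/events-2015.04.08/_mapping/event?pretty=1'
# See which tokens the default analyzer produces for the value
curl -XGET 'http://127.0.0.1:9200/events-2015.04.08/_analyze?pretty=1' -d 'USER_LOGIN'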

restart jobtracker through cloudera manager API

I am trying to restart the MapReduce JobTracker through the Cloudera Manager API. The status for the JobTracker is as follows:
local-iMac-399:$ curl -u 'admin:admin' 'http://hadoop-namenode.dev.com:7180/api/v6/clusters/Cluster%201/services/mapreduce/roles/mapreduce-JOBTRACKER-0675ebab2b87e3869e0d90167cf4bf86'
{
"name" : "mapreduce-JOBTRACKER-0675ebab2b87e3869e0d90167cf4bf86",
"type" : "JOBTRACKER",
"serviceRef" : {
"clusterName" : "cluster",
"serviceName" : "mapreduce"
},
"hostRef" : {
"hostId" : "24259373-7e71-4089-8251-faf055e42ad7"
},
"roleUrl" : "http://hadoop-namenode.dev.com:7180/cmf/roleRedirect/mapreduce-JOBTRACKER-0675ebab2b87e3869e0d90167cf4bf86",
"roleState" : "STARTED",
"healthSummary" : "GOOD",
"healthChecks" : [ {
"name" : "JOB_TRACKER_FILE_DESCRIPTOR",
"summary" : "GOOD"
}, {
"name" : "JOB_TRACKER_GC_DURATION",
"summary" : "GOOD"
}, {
"name" : "JOB_TRACKER_HOST_HEALTH",
"summary" : "GOOD"
}, {
"name" : "JOB_TRACKER_LOG_DIRECTORY_FREE_SPACE",
"summary" : "GOOD"
}, {
"name" : "JOB_TRACKER_SCM_HEALTH",
"summary" : "GOOD"
}, {
"name" : "JOB_TRACKER_UNEXPECTED_EXITS",
"summary" : "GOOD"
}, {
"name" : "JOB_TRACKER_WEB_METRIC_COLLECTION",
"summary" : "GOOD"
} ],
"configStalenessStatus" : "STALE",
"haStatus" : "ACTIVE",
"maintenanceMode" : false,
"maintenanceOwners" : [ ],
"commissionState" : "COMMISSIONED",
"roleConfigGroupRef" : {
"roleConfigGroupName" : "mapreduce-JOBTRACKER-BASE"
}
}
local-iMac-399:$
I don't know how to use the API to restart just the JobTracker.
I tried to restart the Hive service using the following command but got an error:
local-iMac-399:$curl -X POST -u 'admin:admin' 'http://hadoop-namenode.dev.com:7180/api/v6/clusters/Cluster%201/services/hive/roleCommands/restart'
{
"message" : "No content to map due to end-of-input\n at [Source: org.apache.cxf.transport.http.AbstractHTTPDestination$1#4169c499; line: 1, column: 1]"
}
I would appreciate it if someone could help me understand how to use the Cloudera Manager API.
Based on the information provided, this is how you'd invoke the JobTracker restart via the CM API. The roleCommands/restart endpoint expects a JSON body listing the role names to restart, which is why the earlier request without a body failed with "No content to map due to end-of-input":
curl -u 'admin:admin' -X POST -H "Content-Type:application/json" -d '{"items":["mapreduce-JOBTRACKER-0675ebab2b87e3869e0d90167cf4bf86"]}' 'http://hadoop-namenode.dev.com:7180/api/v6/clusters/Cluster%201/services/mapreduce/roleCommands/restart'
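The POST returns the commands it kicked off; each has an id that can be polled to confirm the restart finished (a sketch, assuming the response contained a command with id 245):
curl -u 'admin:admin' 'http://hadoop-namenode.dev.com:7180/api/v6/commands/245'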

How to reduce the amount of data in a Neo4j query?

When requesting data from Neo4j, as in, say,
curl -i -XPOST -d'{ "query" : "start n=node(*) return n" }' \
  -H "accept:application/json;stream=true" \
  -H content-type:application/json \
  http://localhost:7474/db/data/cypher
I get, as documented, a response like this:
{
"columns" : [ "n" ],
"data" : [ [ {
"outgoing_relationships" : "http://localhost:7474/db/data/node/0/relationships/out",
"data" : {
},
"traverse" : "http://localhost:7474/db/data/node/0/traverse/{returnType}",
"all_typed_relationships" : "http://localhost:7474/db/data/node/0/relationships/all/{-list|&|types}",
"property" : "http://localhost:7474/db/data/node/0/properties/{key}",
"self" : "http://localhost:7474/db/data/node/0",
"properties" : "http://localhost:7474/db/data/node/0/properties",
"outgoing_typed_relationships" : "http://localhost:7474/db/data/node/0/relationships/out/{-list|&|types}",
"incoming_relationships" : "http://localhost:7474/db/data/node/0/relationships/in",
"extensions" : {
},
"create_relationship" : "http://localhost:7474/db/data/node/0/relationships",
"paged_traverse" : "http://localhost:7474/db/data/node/0/paged/traverse/{returnType}{?pageSize,leaseTime}",
"all_relationships" : "http://localhost:7474/db/data/node/0/relationships/all",
"incoming_typed_relationships" : "http://localhost:7474/db/data/node/0/relationships/in/{-list|&|types}"
} ], [ {
"outgoing_relationships" : "http://localhost:7474/db/data/node/1/relationships/out",
"data" : {
"glyph" : "δΈ€",
"~isa" : "glyph"
},
"traverse" : "http://localhost:7474/db/data/node/1/traverse/{returnType}",
"all_typed_relationships" : "http://localhost:7474/db/data/node/1/relationships/all/{-list|&|types}",
"property" : "http://localhost:7474/db/data/node/1/properties/{key}",
"self" : "http://localhost:7474/db/data/node/1",
"properties" : "http://localhost:7474/db/data/node/1/properties",
"outgoing_typed_relationships" : "http://localhost:7474/db/data/node/1/relationships/out/{-list|&|types}",
"incoming_relationships" : "http://localhost:7474/db/data/node/1/relationships/in",
"extensions" : {
},
"create_relationship" : "http://localhost:7474/db/data/node/1/relationships",
"paged_traverse" : "http://localhost:7474/db/data/node/1/paged/traverse/{returnType}{?pageSize,leaseTime}",
"all_relationships" : "http://localhost:7474/db/data/node/1/relationships/all",
"incoming_typed_relationships" : "http://localhost:7474/db/data/node/1/relationships/in/{-list|&|types}"
} ], [ {
"outgoing_relationships" : "http://localhost:7474/db/data/node/2/relationships/out",
"data" : {
"~isa" : "LPG",
"LPG" : "1"
},
"traverse" : "http://localhost:7474/db/data/node/2/traverse/{returnType}",
"all_typed_relationships" : "http://localhost:7474/db/data/node/2/relationships/all/{-list|&|types}",
"property" : "http://localhost:7474/db/data/node/2/properties/{key}",
"self" : "http://localhost:7474/db/data/node/2",
"properties" : "http://localhost:7474/db/data/node/2/properties",
"outgoing_typed_relationships" : "http://localhost:7474/db/data/node/2/relationships/out/{-list|&|types}",
"incoming_relationships" : "http://localhost:7474/db/data/node/2/relationships/in",
"extensions" : {
},
"create_relationship" : "http://localhost:7474/db/data/node/2/relationships",
"paged_traverse" : "http://localhost:7474/db/data/node/2/paged/traverse/{returnType}{?pageSize,leaseTime}",
"all_relationships" : "http://localhost:7474/db/data/node/2/relationships/all",
"incoming_typed_relationships" : "http://localhost:7474/db/data/node/2/relationships/in/{-list|&|types}"
} ], [ {
and so on and on. The URLs delivered with each node are certainly well meant, but they occupy a major portion of the data transmitted, they're highly redundant, and they're not what I'm after with my query. Is there any way to drop all of that traverse, all_typed_relationships, property, self, properties, outgoing_typed_relationships, incoming_relationships, extensions, create_relationship, paged_traverse, all_relationships, incoming_typed_relationships jazz?
The only way is to specify the properties you want returned in the return statement, like:
return id(n), n.glyph;
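Applied to the request above, that would look like this (a sketch; on older Cypher versions, use n.glyph? if some nodes lack the property, so it is treated as optional):
curl -i -XPOST -d'{ "query" : "start n=node(*) return id(n), n.glyph" }' \
  -H "accept:application/json;stream=true" \
  -H content-type:application/json \
  http://localhost:7474/db/data/cypher
The response then contains only the two requested columns per row instead of the full node representation.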