How to disable elasticsearch 5.0 authentication? - authentication

I've just started using Elasticsearch. I created an index with the default settings (5 shards, 1 replica), then indexed roughly 13 GB of text files with the attachment plugin. As a result, searching in Kibana Discover became very slow. However, searching from the console is fast:
GET /mytext/_search
{
  "fields": [ "file.name" ],
  "query": {
    "match": {
      "file.content": "foobar"
    }
  },
  "highlight": {
    "fields": {
      "file.content": {}
    }
  }
}
To investigate why it is so slow, I installed X-Pack. The documentation does not seem comprehensive, and I could not work out the security configuration.
A default Elasticsearch install does not require a login, but it does after the X-Pack plugin is installed. I am confused by the security settings of Elasticsearch, Kibana, and X-Pack: do they share user accounts? In the end, I got authentication working with:
curl -XPUT -uelastic:changeme 'localhost:9200/_shield/user/elastic/_password' -d '{ "password" : "newpass1" }'
curl -XPUT -uelastic:newpass1 'localhost:9200/_shield/user/kibana/_password' -d '{ "password" : "newpass2" }'
Here comes the problem: I cannot log in using the Java client with org.elasticsearch.plugin:shield. It is likely that the latest version of the Shield dependency (2.3.3) is mismatched with the Elasticsearch dependency (5.0.0-alpha).
So, can I just disable authentication?
From the node config:
GET http://localhost:9200/_nodes
"nodes" : {
  "v_XmZh7jQCiIMYCG2AFhJg" : {
    "transport_address" : "127.0.0.1:9300",
    "version" : "5.0.0-alpha2",
    "roles" : [ "master", "data", "ingest" ],
    ...
    "settings" : {
      "node" : {
        "name" : "Apache Kid"
      },
      "http" : {
        "type" : "security"
      },
      "transport" : {
        "type" : "security",
        "service" : {
          "type" : "security"
        }
      },
    ...
So, can I modify these settings, and what are the possible values?

In a test environment, I added the following option to elasticsearch.yml and/or kibana.yml:
xpack.security.enabled: false
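As a sketch of how that might be applied (the ./elasticsearch.yml and ./kibana.yml paths here are placeholders for this example; real installs usually keep them under /etc/elasticsearch and /etc/kibana):

```shell
# Append the X-Pack security toggle to both config files.
# The relative paths are placeholders; adjust them to your install.
for f in ./elasticsearch.yml ./kibana.yml; do
  touch "$f"
  echo 'xpack.security.enabled: false' >> "$f"
done
```

Both services need a restart before the setting takes effect.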

If you run Docker, you can do the following (assuming your container name is elasticsearch; you can use the container ID if you prefer).
Open a shell inside the container:
docker exec -i -t elasticsearch /bin/bash
then remove X-Pack:
elasticsearch-plugin remove x-pack
exit the container:
exit
and restart it:
docker restart elasticsearch
Disclaimer: solution inspired by Michał Dymel.

When using Docker (in local development), instead of removing X-Pack you can simply disable it:
docker pull docker.elastic.co/elasticsearch/elasticsearch:5.5.3
docker run -p 9200:9200 \
-p 9300:9300 \
-e "discovery.type=single-node" \
-e "xpack.security.enabled=false" \
docker.elastic.co/elasticsearch/elasticsearch:5.5.3

I've managed to get past authentication with xpack.security.enabled=false, but I'm still getting some authentication errors in my Kibana log.
elasticsearch:
  image: elasticsearch:1.7.6
  ports:
    - ${PIM_ELASTICSEARCH_PORT}:9200
    - 9300:9300
kibana:
  image: docker.elastic.co/kibana/kibana:5.4.1
  environment:
    SERVER_NAME: localhost
    ELASTICSEARCH_URL: http://localhost:9200
    XPACK_SECURITY_ENABLED: 'false'
  ports:
    - 5601:5601
  links:
    - elasticsearch
  depends_on:
    - elasticsearch
This is my current setup; in Kibana I can see some errors:
[screenshot: Kibana dashboard]
In the Kibana logs I can see:
kibana_1 | {"type":"log","#timestamp":"2017-06-15T07:43:41Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
kibana_1 | {"type":"log","#timestamp":"2017-06-15T07:43:42Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://localhost:9200/"}
So it seems Kibana is still trying to connect with authentication.
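One likely cause: inside the Kibana container, localhost refers to the Kibana container itself, not to Elasticsearch. Since the compose file already links the elasticsearch service, pointing ELASTICSEARCH_URL at the service name (a sketch, assuming the service keeps the name elasticsearch) may resolve the "No living connections" warnings:

```yaml
kibana:
  image: docker.elastic.co/kibana/kibana:5.4.1
  environment:
    SERVER_NAME: localhost
    # the compose service name resolves via Docker's internal DNS
    ELASTICSEARCH_URL: http://elasticsearch:9200
    XPACK_SECURITY_ENABLED: 'false'
```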

I had the same X-Pack issue, but with Kibana; I fixed it with the following command:
docker run docker.elastic.co/kibana/kibana:5.5.1 /bin/bash -c 'bin/kibana-plugin remove x-pack ; /usr/local/bin/kibana-docker'
This starts the container, removes X-Pack, and then starts the normal Kibana process. The same can be done with Elasticsearch and Logstash.

Related

How to add wsl command line arguments to Windows Terminal configuration?

I have the following .json configuration for my Windows Terminal:
{
    "guid": "{926758ba-8c4a-5c36-a9c6-0c4943cd78a1}",
    "hidden": false,
    "name": "Fedora-33",
    "source": "Windows.Terminal.Wsl"
},
This was generated automatically from the WSL database.
I would like to add the wsl command-line option -u user, as it currently starts as root. I tried adding
"user" : "hxv454"
to no avail. How can I configure Windows Terminal to start my WSL instance as a specific user?
Learning from
How do I get Windows 10 Terminal to launch WSL?
and searching for "wsl", I found and used
"commandline": "wsl -d Fedora-33 -u hxv454"
and it worked.
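Putting it together, the profile entry would look something like this (GUID and names copied from the question above; depending on the Windows Terminal version, you may also need to remove the "source" line so the commandline takes effect):

```json
{
    "guid": "{926758ba-8c4a-5c36-a9c6-0c4943cd78a1}",
    "hidden": false,
    "name": "Fedora-33",
    "commandline": "wsl -d Fedora-33 -u hxv454"
},
```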

MobileFirst Analytics dashboard does not show any data

Our MFP Analytics dashboard was working fine until last week; now no data is shown in it. Restarting the server does not seem to help either. The cluster status on the server is RED. What can I do to resolve this?
I've learned that when the cluster status is red, it is likely due to one or more unassigned shards. The following commands are quite handy and solved my issue.
To find the list of all shards, use this CURL command:
curl -XGET http://localhost:9500/_cat/shards
To find the list of unassigned shards, use this CURL command:
curl -XGET http://localhost:9500/_cat/shards | grep UNASSIGNED
Once you get the list of unassigned shards, you can initialize them
with the following command:
for shard in $(curl -XGET http://localhost:9500/_cat/shards | grep UNASSIGNED | awk '{print $2}'); do
  curl -XPOST 'localhost:9500/_cluster/reroute' -d '{
    "commands" : [ {
      "allocate" : {
        "index" : "worklight",
        "shard" : '$shard',
        "node" : "worklightNode_1234",
        "allow_primary" : true
      }
    } ]
  }'
  sleep 5
done
You need to replace 'node' with the node that the initialized shards should be allocated to; in my case, worklightNode_1234. You can find this in the output from step 1.
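For reference, _cat/shards output is plain whitespace-separated columns (index, shard number, primary/replica, state, ...), so the shard numbers can be extracted with grep and awk. A quick sketch against made-up sample output (the row contents and node name are illustrative):

```shell
# Made-up sample of what _cat/shards might print; only the column layout matters.
sample='worklight 0 p STARTED    1200 10.2kb 127.0.0.1 worklightNode_1234
worklight 1 p UNASSIGNED
worklight 2 p UNASSIGNED'

# Keep only UNASSIGNED rows and print column 2, the shard number.
echo "$sample" | grep UNASSIGNED | awk '{print $2}'   # prints 1 then 2
```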
Run the following command to check the status:
curl -XGET http://localhost:9500/_cluster/health?pretty
The server status should be green when all the shards are initialized and assigned.
On MobileFirst Platform 7.1, I solved the problem by changing the configuration in the server.xml file of the Analytics server, reducing the number of shards:
<jndiEntry jndiName="analytics/shards" value="5" />
<jndiEntry jndiName="analytics/replicas_per_shard" value="1" />
For default values refer to: http://www.ibm.com/support/knowledgecenter/SSHSCD_7.1.0/com.ibm.worklight.monitor.doc/monitor/c_op_analytics_properties.html

In Docker, is there a way to retrieve the image or layer UUID?

I am looking for a way to retrieve a Docker image's UUID and the UUIDs of its layers. I looked at the 'docker images' and 'docker history' commands, but they did not give me what I wanted for an individual image. Is there a command to do this?
You can find more information about an image by performing the following commands:
$ docker images
REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
mysql        latest   2fd136002c22   8 weeks ago   378.4 MB
Inspect by image ID or image name:
$ docker inspect 2fd136002c22
output:
"RootFS": {
  "Type": "layers",
  "Layers": [
    "sha256:4dcab49015d47e8f300ec33400a02cebc7b54cadd09c37e49eccbc655279da90",
    "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
    "sha256:47bce276c5783a6cfc88e0ac368af70909144d04780222d134090dbf08f897aa",
    "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
    "sha256:093c117bc4d3e1cd6e597a89b1648ebb3543be581c61aba80fc41ff6f7ae8e6d",
    "sha256:1028156f10f1a0f79dba5be05e935d5f4588ebe7c25a3581843f7a759a2d7bfb",
    ...
and a lot more information like:
"Id": "sha256:2fd136002c22c9017ea24544fc15810aad7d88ab9d53da1063d2805ba0f31e9a",
"Volumes": {
  "/var/lib/mysql": {}
...
This is how I did it with a running container (note that /sys/class/dmi/id/product_uuid is the UUID of the underlying host machine, not of the image):
$ docker exec youthful_pike cat /sys/class/dmi/id/product_uuid
E2F747AF-0000-0000-AA77-B856C9D179D8

Bluemix: service bound to container does not appear in VCAP_SERVICES

I'm trying to use IBM Containers for Bluemix to deploy a container and bind it to a Bluemix service.
I start with an existing Bluemix app, which is bound to the MongoDB service I want. I verify that its VCAP_SERVICES environment variable is correctly populated:
$ cf env mamacdon-app
Getting env variables for app mamacdon-app in org mamacdon@ca.ibm.com / space dev as mamacdon@ca.ibm.com...
OK
System-Provided:
{
  "VCAP_SERVICES": {
    "mongodb-2.4": [
      {
        "credentials": { /*private data hidden*/ },
        "label": "mongodb-2.4",
        "name": "mongodb-1a",
        "plan": "100",
        "tags": [ "nosql", "document", "mongodb" ]
      }
    ]
  }
}
...
Then I run my image in Bluemix using the ice command, with the --bind mamacdon-app argument to bind it to my CF app:
$ ice run --name sshparty \
    --bind mamacdon-app \
    --ssh "$(cat ~/.ssh/id_rsa.pub)" \
    --publish 22 \
    registry-ice.ng.bluemix.net/ibmliberty:latest
(The --ssh and --publish 22 flags are there for SSH access.)
As the name suggests, the image is a trivial example based on the IBM Websphere Liberty docker image -- just enough to let me SSH in and poke around.
At this point, the Containers dashboard tells me that the service has been bound to my container.
But when I finally ssh into the container, the environment does not contain the VCAP_SERVICES variable:
$ ssh -i ~/.ssh/id_rsa root@129.41.232.212
root@instance-000123e2:~# env
TERM=xterm
SHELL=/bin/bash
SSH_CLIENT=[private data hidden]
SSH_TTY=/dev/pts/0
USER=root
LS_COLORS=[omitted]
MAIL=/var/mail/root
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/root
LANG=en_CA.UTF-8
SHLVL=1
HOME=/root
LOGNAME=root
SSH_CONNECTION=[private data hidden]
LESSOPEN=| /usr/bin/lesspipe %s
LESSCLOSE=/usr/bin/lesspipe %s %s
_=/usr/bin/env
root@instance-000123e2:~#
I expected the VCAP_SERVICES variable to be injected. What am I doing wrong?
I think there is an issue with the way the SSH daemon is launched: it does not have visibility into the VCAP_SERVICES environment variable.
However, you can confirm that the container's command does see the variable with the following test:
ice run registry-ice.ng.bluemix.net/ibmliberty --bind mamacdon-app --name vcap_services_party printenv; sleep 60
Then confirm it in the printenv output with ice logs vcap_services_party.
Could you give the following a try:
ice run registry-ice.ng.bluemix.net/lintest/tradelite --bind yourappname --name yournewcontainername
Once the image comes up, run the following:
# echo $VCAP_SERVICES
For more information, check out the Containers docs.

How to Install and configure Redis on ElasticBeanstalk

How do I install and configure Redis on AWS ElasticBeanstalk? Does anyone know how to write an .ebextension script to accomplish that?
The accepted answer is great if you are using ElastiCache (like RDS, but for Memcached or Redis). But, if what you are trying to do is tell EB to provision Redis into the EC2 instance in which it spins up your app, you want a different config file, something like this gist:
packages:
  yum:
    gcc-c++: []
    make: []
sources:
  /home/ec2-user: http://download.redis.io/releases/redis-2.8.4.tar.gz
commands:
  redis_build:
    command: make
    cwd: /home/ec2-user/redis-2.8.4
  redis_config_001:
    command: sed -i -e "s/daemonize no/daemonize yes/" redis.conf
    cwd: /home/ec2-user/redis-2.8.4
  redis_config_002:
    command: sed -i -e "s/# maxmemory <bytes>/maxmemory 500MB/" redis.conf
    cwd: /home/ec2-user/redis-2.8.4
  redis_config_003:
    command: sed -i -e "s/# maxmemory-policy volatile-lru/maxmemory-policy allkeys-lru/" redis.conf
    cwd: /home/ec2-user/redis-2.8.4
  redis_server:
    command: src/redis-server redis.conf
    cwd: /home/ec2-user/redis-2.8.4
IMPORTANT: The commands are executed in alphabetical order by name, so if you pick different names than redis_build, redis_config_xxx, redis_server, make sure they are such that they execute in the way you expect.
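Since the ordering is plain lexicographic sorting of the command names, you can check it for any set of names you pick:

```shell
# sort shows the order Elastic Beanstalk will run these command keys in.
printf '%s\n' redis_server redis_build redis_config_001 | sort
# prints: redis_build, redis_config_001, redis_server
```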
Your other option is to containerize your app with Redis using Docker, then deploy your app as some number of Docker containers, instead of whatever language you wrote it in. Doing that for a Flask app is described here.
You can jam it all into one container and deploy that way, which is easier, but doesn't scale well, or you can use AWS' Elastic Beanstalk multi-container deployments. If you have used docker-compose, you can use this tool to turn a docker-compose.yml into the form AWS wants, Dockerrun.aws.json.
AWS Elastic Beanstalk provides resource configuration via the .ebextensions folder: essentially, you tell Elastic Beanstalk what you would like provisioned in addition to your application. To provision into a default VPC, you need to:
create an .ebextensions folder
add an elasticache.config file
include the following contents:
Resources:
  MyCacheSecurityGroup:
    Type: "AWS::EC2::SecurityGroup"
    Properties:
      GroupDescription: "Lock cache down to webserver access only"
      SecurityGroupIngress:
        - IpProtocol: "tcp"
          FromPort:
            Fn::GetOptionSetting:
              OptionName: "CachePort"
              DefaultValue: "6379"
          ToPort:
            Fn::GetOptionSetting:
              OptionName: "CachePort"
              DefaultValue: "6379"
          SourceSecurityGroupName:
            Ref: "AWSEBSecurityGroup"
  MyElastiCache:
    Type: "AWS::ElastiCache::CacheCluster"
    Properties:
      CacheNodeType:
        Fn::GetOptionSetting:
          OptionName: "CacheNodeType"
          DefaultValue: "cache.t1.micro"
      NumCacheNodes:
        Fn::GetOptionSetting:
          OptionName: "NumCacheNodes"
          DefaultValue: "1"
      Engine:
        Fn::GetOptionSetting:
          OptionName: "Engine"
          DefaultValue: "redis"
      VpcSecurityGroupIds:
        - Fn::GetAtt:
            - MyCacheSecurityGroup
            - GroupId
Outputs:
  ElastiCache:
    Description: "ID of ElastiCache Cache Cluster with Redis Engine"
    Value:
      Ref: "MyElastiCache"
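The Fn::GetOptionSetting lookups above fall back to their DefaultValue unless you define the options yourself. A sketch of how they could be overridden in the same config file (the aws:elasticbeanstalk:customoption namespace is where custom options live; the values shown simply repeat the defaults):

```yaml
option_settings:
  "aws:elasticbeanstalk:customoption":
    CachePort: "6379"
    CacheNodeType: cache.t1.micro
    NumCacheNodes: "1"
    Engine: redis
```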
Referenced from: "How to add ElasticCache resources to Elastic Beanstalk VPC"
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-environment-resources-elasticache.html