How to install and configure Redis on Elastic Beanstalk

How do I install and configure Redis on AWS Elastic Beanstalk? Does anyone know how to write an .ebextensions script to accomplish that?

The accepted answer is great if you are using ElastiCache (like RDS, but for Memcached or Redis). But, if what you are trying to do is tell EB to provision Redis into the EC2 instance in which it spins up your app, you want a different config file, something like this gist:
packages:
  yum:
    gcc-c++: []
    make: []
sources:
  /home/ec2-user: http://download.redis.io/releases/redis-2.8.4.tar.gz
commands:
  redis_build:
    command: make
    cwd: /home/ec2-user/redis-2.8.4
  redis_config_001:
    command: sed -i -e "s/daemonize no/daemonize yes/" redis.conf
    cwd: /home/ec2-user/redis-2.8.4
  redis_config_002:
    command: sed -i -e "s/# maxmemory <bytes>/maxmemory 500MB/" redis.conf
    cwd: /home/ec2-user/redis-2.8.4
  redis_config_003:
    command: sed -i -e "s/# maxmemory-policy volatile-lru/maxmemory-policy allkeys-lru/" redis.conf
    cwd: /home/ec2-user/redis-2.8.4
  redis_server:
    command: src/redis-server redis.conf
    cwd: /home/ec2-user/redis-2.8.4
IMPORTANT: The commands are executed in alphabetical order by name, so if you pick names other than redis_build, redis_config_xxx, and redis_server, make sure they sort in the order you expect them to run, as in the sketch below.
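For example, a common convention is to prefix the command names with numbers so the intended order is explicit. A minimal sketch reusing the commands above; the numeric prefixes are just an assumed naming choice, not something Elastic Beanstalk requires:

commands:
  01_redis_build:
    command: make
    cwd: /home/ec2-user/redis-2.8.4
  02_redis_daemonize:
    command: sed -i -e "s/daemonize no/daemonize yes/" redis.conf
    cwd: /home/ec2-user/redis-2.8.4
  03_redis_server:
    command: src/redis-server redis.conf
    cwd: /home/ec2-user/redis-2.8.4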
Your other option is to containerize your app together with Redis using Docker, and then deploy it as one or more Docker containers rather than as a plain app in whatever language you wrote it in. Doing that for a Flask app is described here.
You can jam it all into one container and deploy that way, which is easier but doesn't scale well, or you can use AWS's Elastic Beanstalk multi-container deployments. If you have used docker-compose, you can use this tool to turn a docker-compose.yml into the form AWS wants, Dockerrun.aws.json.
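For reference, a minimal sketch of what a multi-container Dockerrun.aws.json might look like for an app plus Redis; the container names, images, ports, and memory values here are assumptions, not taken from the question:

{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "web",
      "image": "yourorg/yourapp:latest",
      "essential": true,
      "memory": 256,
      "portMappings": [{ "hostPort": 80, "containerPort": 5000 }],
      "links": ["redis"]
    },
    {
      "name": "redis",
      "image": "redis:2.8",
      "essential": true,
      "memory": 256
    }
  ]
}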

AWS Elastic Beanstalk does provide resource configuration via the .ebextensions folder. Essentially you need to tell Elastic Beanstalk what you would like provisioned in addition to your application. For provisioning into a default VPC, you need to:
create an .ebextensions folder
add an elasticache.config file
and include the following contents:
Resources:
  MyCacheSecurityGroup:
    Type: "AWS::EC2::SecurityGroup"
    Properties:
      GroupDescription: "Lock cache down to webserver access only"
      SecurityGroupIngress:
        - IpProtocol: "tcp"
          FromPort:
            Fn::GetOptionSetting:
              OptionName: "CachePort"
              DefaultValue: "6379"
          ToPort:
            Fn::GetOptionSetting:
              OptionName: "CachePort"
              DefaultValue: "6379"
          SourceSecurityGroupName:
            Ref: "AWSEBSecurityGroup"
  MyElastiCache:
    Type: "AWS::ElastiCache::CacheCluster"
    Properties:
      CacheNodeType:
        Fn::GetOptionSetting:
          OptionName: "CacheNodeType"
          DefaultValue: "cache.t1.micro"
      NumCacheNodes:
        Fn::GetOptionSetting:
          OptionName: "NumCacheNodes"
          DefaultValue: "1"
      Engine:
        Fn::GetOptionSetting:
          OptionName: "Engine"
          DefaultValue: "redis"
      VpcSecurityGroupIds:
        - Fn::GetAtt:
            - MyCacheSecurityGroup
            - GroupId

Outputs:
  ElastiCache:
    Description: "ID of ElastiCache Cache Cluster with Redis Engine"
    Value:
      Ref: "MyElastiCache"
Referenced from: "How to add ElastiCache resources to Elastic Beanstalk VPC"
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-environment-resources-elasticache.html
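The Fn::GetOptionSetting calls above read values from option_settings in your Elastic Beanstalk configuration. A minimal sketch of how you might override the defaults in the same .ebextensions config file (the values shown are just examples):

option_settings:
  "aws:elasticbeanstalk:customoption":
    CacheNodeType: cache.t2.micro
    NumCacheNodes: "1"
    Engine: redis
    CachePort: "6379"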


BitBucket deployment using SSH keys to remote server

I am trying to write a YAML pipeline script to deploy files that have been altered in my Bitbucket repository to my remote server using SSH keys. The file I have in place at the moment was copied from Bitbucket itself and has errors:
pipelines:
  default:
    - step:
        name: Deploy to test
        deployment: test
        script:
          - pipe: atlassian/sftp-deploy:0.3.1
          - variables:
              USER: $USER
              SERVER: $SERVER
              REMOTE_PATH: $REMOTE_PATH
              LOCAL_PATH: $LOCAL_PATH
I am getting the following error
Configuration error
There is an error in your bitbucket-pipelines.yml at [pipelines > default > 0 > step > script > 1]. To be precise: Missing or empty command string. Each item in this list should either be a single command string or a map defining a pipe invocation.
My ssh public and private keys are setup in bitbucket along with the fingerprint and host. The variables have also been setup.
How do I go about setting up my YAML deploy script to connect to my remote server via ssh and transfer the files?
Try updating the variables section to become:
- variables:
    - USER: $USER
    - SERVER: $SERVER
    - REMOTE_PATH: $REMOTE_PATH
    - LOCAL_PATH: $LOCAL_PATH
Here is an example of how to set variables: https://confluence.atlassian.com/bitbucket/configure-bitbucket-pipelines-yml-792298910.html#Configurebitbucket-pipelines.yml-ci_variablesvariables
Your - step directive has to be indented.
I have a bitbucket-pipelines.yml like this (using rsync instead of ssh):
# This is a sample build configuration for PHP.
# Check our guides at https://confluence.atlassian.com/x/e8YWN for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
image: php:7.2.1-fpm

pipelines:
  default:
    - step:
        script:
          - apt-get update
          - apt-get install zip -y
          - apt-get install unzip -y
          - apt-get install libgmp3-dev -y
          - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
          - composer install
          - cp .env.example .env
          #- vendor/bin/phpunit
          - pipe: atlassian/rsync-deploy:0.2.0
            variables:
              USER: $DEPLOY_USER
              SERVER: $DEPLOY_SERVER
              REMOTE_PATH: $DEPLOY_PATH
              LOCAL_PATH: '.'
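Applied back to the original question, a sketch of how the sftp step might look once the indentation is fixed and variables is nested under the pipe (names and values taken from the question):

pipelines:
  default:
    - step:
        name: Deploy to test
        deployment: test
        script:
          - pipe: atlassian/sftp-deploy:0.3.1
            variables:
              USER: $USER
              SERVER: $SERVER
              REMOTE_PATH: $REMOTE_PATH
              LOCAL_PATH: $LOCAL_PATH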
I suggest using their online editor in the repository for editing bitbucket-pipelines.yml; it checks the whole formal YAML structure and you can't commit an invalid file.
Even if you check the file in some other YAML editor, it may look fine but still not conform to the Bitbucket specification. Their online editor does a fine job.
Also, I suggest visiting the Atlassian community forums, as they are very active and staff members sometimes provide answers.
However, I struggle with the many dependencies needed to run tests properly (the actual bitbucket-pipelines.yml is becoming bigger and bigger).
Maybe there is some nicely prepared Docker image for this job.

How to install schema registry

I am looking at options to install the Confluent Schema Registry. Is it possible to download and install the registry alone and make it work with an existing Kafka setup?
Thanks
Assuming you have ZooKeeper/Kafka running already, you can easily run the Confluent Schema Registry using Docker by running the following command:
docker run -p 8081:8081 \
  -e SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL=host.docker.internal:2181 \
  -e SCHEMA_REGISTRY_HOST_NAME=localhost \
  -e SCHEMA_REGISTRY_LISTENERS=http://0.0.0.0:8081 \
  -e SCHEMA_REGISTRY_DEBUG=true \
  confluentinc/cp-schema-registry:5.3.2
Parameters:
-p 8081:8081 - maps port 8081 from the container to your machine
SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL - your ZooKeeper host and port; I'm using host.docker.internal to resolve the local machine that is hosting ZooKeeper (outside of the container)
SCHEMA_REGISTRY_HOST_NAME - the hostname advertised in ZooKeeper. This is required if you are running Schema Registry with multiple nodes, because it defaults to the Java canonical hostname for the container, which may not always be resolvable in a Docker environment.
SCHEMA_REGISTRY_LISTENERS - the Schema Registry host and port to listen on
SCHEMA_REGISTRY_DEBUG - run in debug mode
Note: the command above uses version 5.3.2; make sure this version is aligned with your Kafka version.
Yes, you can use your existing Kafka setup; just match it to the compatible version of the Confluent Platform. Here are the docs on getting started:
https://docs.confluent.io/current/schema-registry/docs/intro.html#installation
tl;dr: download the platform to pull out the pieces you need, or get the Docker image and point it at your Kafka cluster.
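If you'd rather not use Docker, here is a rough sketch of a bare-metal setup against an existing cluster; the broker host and file paths are assumptions, adjust them to your installation:

# etc/schema-registry/schema-registry.properties
listeners=http://0.0.0.0:8081
kafkastore.bootstrap.servers=PLAINTEXT://your-kafka-broker:9092
kafkastore.topic=_schemas

# start Schema Registry against the existing Kafka cluster
bin/schema-registry-start etc/schema-registry/schema-registry.properties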

Azure ACS: Azure file volume not working

I've been following the instructions on GitHub to set up an Azure Files volume.
apiVersion: v1
kind: Secret
metadata:
  name: azure-files-secret
type: Opaque
data:
  azurestorageaccountname: Yn...redacted...=
  azurestorageaccountkey: 3+w52/...redacted...MKeiiJyg==
I then in my pod config have:
...stuff
volumeMounts:
  - mountPath: /var/ccd
    name: openvpn-ccd
...more stuff
volumes:
  - name: openvpn-ccd
    azureFile:
      secretName: azure-files-secret
      shareName: az-files
      readOnly: false
Creating the containers then fails:
MountVolume.SetUp failed for volume "kubernetes.io/azure-file/007adb39-30df-11e7-b61e-000d3ab6ece2-openvpn-ccd" (spec.Name: "openvpn-ccd") pod "007adb39-30df-11e7-b61e-000d3ab6ece2" (UID: "007adb39-30df-11e7-b61e-000d3ab6ece2") with: mount failed: exit status 32 Mounting command: mount Mounting arguments: //xxx.file.core.windows.net/az-files /var/lib/kubelet/pods/007adb39-30df-11e7-b61e-000d3ab6ece2/volumes/kubernetes.io~azure-file/openvpn-ccd cifs [vers=3.0,username=xxx,password=xxx,dir_mode=0777,file_mode=0777] Output: mount error(13): Permission denied Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
I was previously getting password errors, as I hadn't base64 encoded the account key, but that is resolved now, and I get the more generic Permission denied error, which I suspect may be on the mount point rather than the file storage. In any case, I need advice on how to troubleshoot further, please.
This appears to be an auth error to your storage account. Un-base64 your password, and then validate using an Ubuntu image in the same region as the storage account.
Here is a sample script to validate the Azure Files share correctly mounts:
if [ $# -ne 3 ]
then
  echo "you must pass arguments STORAGEACCOUNT STORAGEACCOUNTKEY SHARE"
  exit 1
fi
ACCOUNT=$1
ACCOUNTKEY=$2
SHARE=$3
MOUNTSHARE=/mnt/${SHARE}
apt-get update && apt-get install -y cifs-utils
mkdir -p /mnt/$SHARE
mount -t cifs //${ACCOUNT}.file.core.windows.net/${SHARE} ${MOUNTSHARE} -o vers=2.1,username=${ACCOUNT},password=${ACCOUNTKEY}
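One common source of the base64/auth problems mentioned above is a stray newline creeping into the hand-encoded values. A sketch of letting kubectl do the encoding instead of building the Secret YAML by hand (the placeholder values are, of course, assumptions):

kubectl create secret generic azure-files-secret \
  --from-literal=azurestorageaccountname=<storage-account-name> \
  --from-literal=azurestorageaccountkey=<storage-account-key>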

How to disable elasticsearch 5.0 authentication?

I'm just starting to use Elasticsearch. I created an index with default settings (5 shards, 1 replica) and then indexed ~13G of text files with the attachment plugin. As a result, searching in Kibana Discover became very slow. However, searching in the console is fast:
GET /mytext/_search
{
  "fields": [ "file.name" ],
  "query": {
    "match": {
      "file.content": "foobar"
    }
  },
  "highlight": {
    "fields": {
      "file.content": {}
    }
  }
}
To investigate why it's so slow, I installed X-Pack. The guide documentation doesn't seem comprehensive; I couldn't work out the security config.
A default install of Elasticsearch doesn't require logging in, but it does after installing the X-Pack plugin. I'm confused by the security settings of Elasticsearch, Kibana, and X-Pack; do they share the same user accounts? In the end, I got authentication working with:
curl -XPUT -uelastic:changeme 'localhost:9200/_shield/user/elastic/_password' -d '{ "password" : "newpass1" }'
curl -XPUT -uelastic:newpass1 'localhost:9200/_shield/user/kibana/_password' -d '{ "password" : "newpass2" }'
Here comes the problem: I can't log in using the Java client with org.elasticsearch.plugin:shield. It's likely that the latest version of the shield dependency (2.3.3) is mismatched with the elasticsearch dependency (5.0.0-alpha).
Well, can I just disable the authentication?
From the node config:
GET http://localhost:9200/_nodes

"nodes" : {
  "v_XmZh7jQCiIMYCG2AFhJg" : {
    "transport_address" : "127.0.0.1:9300",
    "version" : "5.0.0-alpha2",
    "roles" : [ "master", "data", "ingest" ],
    ...
    "settings" : {
      "node" : {
        "name" : "Apache Kid"
      },
      "http" : {
        "type" : "security"
      },
      "transport" : {
        "type" : "security",
        "service" : {
          "type" : "security"
        }
      },
      ...
So, can I modify these settings, and what are the possible values?
In a test environment I added the following option to elasticsearch.yml and/or kibana.yml:
xpack.security.enabled: false
If you run Docker, you can use this (assuming your container name is elasticsearch; you can use the container ID instead of the name).
Open a bash shell in the container:
docker exec -i -t elasticsearch /bin/bash
then remove X-Pack:
elasticsearch-plugin remove x-pack
exit the container:
exit
and restart the Docker container:
docker restart elasticsearch
Disclaimer: Solution inspired by Michał Dymel.
When using Docker (in local dev), instead of removing X-Pack, you can simply disable it:
docker pull docker.elastic.co/elasticsearch/elasticsearch:5.5.3
docker run -p 9200:9200 \
  -p 9300:9300 \
  -e "discovery.type=single-node" \
  -e "xpack.security.enabled=false" \
  docker.elastic.co/elasticsearch/elasticsearch:5.5.3
I've managed to authenticate using this xpack.security.enabled=false setting, but I'm still getting some authentication errors in my Kibana log.
elasticsearch:
  image: elasticsearch:1.7.6
  ports:
    - ${PIM_ELASTICSEARCH_PORT}:9200
    - 9300:9300
kibana:
  image: docker.elastic.co/kibana/kibana:5.4.1
  environment:
    SERVER_NAME: localhost
    ELASTICSEARCH_URL: http://localhost:9200
    XPACK_SECURITY_ENABLED: 'false'
  ports:
    - 5601:5601
  links:
    - elasticsearch
  depends_on:
    - elasticsearch
This is my current setup. On the Kibana dashboard I can see some errors:
[Kibana dashboard screenshot]
In the Kibana logs I can see:
kibana_1 | {"type":"log","#timestamp":"2017-06-15T07:43:41Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
kibana_1 | {"type":"log","#timestamp":"2017-06-15T07:43:42Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://localhost:9200/"}
So it seems it is still trying to connect using authentication.
I had the same X-Pack issue but with Kibana; it was fixed by the following command:
docker run docker.elastic.co/kibana/kibana:5.5.1 /bin/bash -c 'bin/kibana-plugin remove x-pack ; /usr/local/bin/kibana-docker'
It starts the container, then removes X-Pack, and after that starts the normal process. The same can be done with Elasticsearch and Logstash.

Running Redis on Travis CI

I just included a Redis Store in my Express application and got it to work.
I wanted to include this Redis Store in Travis CI for my code to keep working there. I read in the Travis documentation that it is possible to start Redis with the factory settings.
In my project, I don't use the factory settings, I wrote my own redis.conf file which specifies the port and the password.
So I added the following line to my .travis.yml file:
services:
  - redis-server --port 6380 --requirepass 'secret'
But this returns the following on Travis CI:
$ sudo service redis-server\ --port\ 6380\ --requirepass\ \'secret\' start
redis-server --port 6380 --requirepass 'secret': unrecognized service
Is there any way to fix this?
If you want to customize the options for Redis on Travis CI, I'd suggest not using the services section, but rather doing this:
before_script: sudo redis-server /etc/redis/redis.conf --port 6380 --requirepass 'secret'
The services section runs services using their init/upstart scripts, which may not support the options you've added there. The command is also escaped for security reasons, hence the documentation only hints that you can list normal service names in that section.
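For context, a minimal .travis.yml sketch combining that suggestion with a Node/Express build like the one in the question; the Node version and test command are assumptions:

language: node_js
node_js:
  - "8"
before_script:
  - sudo redis-server /etc/redis/redis.conf --port 6380 --requirepass 'secret'
script:
  - npm test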