I want to create an ElastiCache instance using Redis.
I think I should use "cluster mode disabled", because everything will fit on one server.
To avoid a SPOF, I want to create a read replica that AWS will promote if the master fails.
If possible, it would be great to balance read-only operations between master and slave, but that is not mandatory.
I created a working master/read-replica setup using the AWS console, then used CloudFormer to generate the CloudFormation JSON configuration. CloudFormer created two unlinked AWS::ElastiCache::CacheCluster resources, and from reading the docs I can't work out how to link them... For now I have this configuration:
{
  "cachehubcache001": {
    "Type": "AWS::ElastiCache::CacheCluster",
    "Properties": {
      "AutoMinorVersionUpgrade": "true",
      "AZMode": "single-az",
      "CacheNodeType": "cache.t2.small",
      "Engine": "redis",
      "EngineVersion": "3.2.4",
      "NumCacheNodes": "1",
      "PreferredAvailabilityZone": { "Fn::FindInMap": [ "RegionMap", { "Ref": "AWS::Region" }, "Az1B" ] },
      "PreferredMaintenanceWindow": "sun:04:00-sun:05:00",
      "CacheSubnetGroupName": { "Ref": "cachesubnethubprivatecachesubnetgroup" },
      "VpcSecurityGroupIds": [
        { "Fn::GetAtt": [ "sgiHubCacheSG", "GroupId" ] }
      ]
    }
  },
  "cachehubcache002": {
    "Type": "AWS::ElastiCache::CacheCluster",
    "Properties": {
      "AutoMinorVersionUpgrade": "true",
      "AZMode": "single-az",
      "CacheNodeType": "cache.t2.small",
      "Engine": "redis",
      "EngineVersion": "3.2.4",
      "NumCacheNodes": "1",
      "PreferredAvailabilityZone": { "Fn::FindInMap": [ "RegionMap", { "Ref": "AWS::Region" }, "Az1A" ] },
      "PreferredMaintenanceWindow": "sun:02:00-sun:03:00",
      "CacheSubnetGroupName": { "Ref": "cachesubnethubprivatecachesubnetgroup" },
      "VpcSecurityGroupIds": [
        { "Fn::GetAtt": [ "sgiHubCacheSG", "GroupId" ] }
      ]
    }
  }
}
I know that this is wrong, but I can't figure out how to create a correct replica. I can't make sense of the AWS docs; for a start, I can't figure out which Type I should use between:
http://docs.aws.amazon.com/fr_fr/AWSCloudFormation/latest/UserGuide/aws-resource-elasticache-replicationgroup.html
http://docs.aws.amazon.com/fr_fr/AWSCloudFormation/latest/UserGuide/aws-properties-elasticache-cache-cluster.html
Since CloudFormer created AWS::ElastiCache::CacheCluster I'll go with that, but I have the feeling it should have created only one resource and used the NumCacheNodes parameter to create the two nodes.
However, with the redis engine you can't use:
- NumCacheNodes
- AZMode and PreferredAvailabilityZones
so I don't know how to make this solution multi-AZ...
I managed to do this using AWS::ElastiCache::ReplicationGroup; its NumCacheClusters parameter makes it possible to have several servers. Beware: it seems you have to handle the connection to master/slave yourself (but if the master fails, AWS should detect it and repoint the primary DNS name to a promoted slave, so you don't have to change your configuration). Here is a sample:
"hubElastiCacheReplicationGroup" : {
"Type" : "AWS::ElastiCache::ReplicationGroup",
"Properties" : {
"ReplicationGroupDescription" : "Hub WebServer redis cache cluster",
"AutomaticFailoverEnabled" : "false",
"AutoMinorVersionUpgrade" : "true",
"CacheNodeType" : "cache.t2.small",
"CacheParameterGroupName" : "default.redis3.2",
"CacheSubnetGroupName" : { "Ref": "cachesubnethubprivatecachesubnetgroup" },
"Engine" : "redis",
"EngineVersion" : "3.2.4",
"NumCacheClusters" : { "Ref" : "ElasticacheRedisNumCacheClusters" },
"PreferredMaintenanceWindow" : "sun:04:00-sun:05:00",
"SecurityGroupIds" : [ { "Fn::GetAtt": ["sgHubCacheSG", "GroupId"] } ]
}
},
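Once the replication group exists, other resources and stack outputs can pick up its endpoints; here is a minimal sketch of an Outputs section (the output names are made up, but PrimaryEndPoint.Address and ReadEndPoint.Addresses are the Fn::GetAtt attributes CloudFormation exposes for AWS::ElastiCache::ReplicationGroup):
"Outputs": {
  "RedisPrimaryEndpoint": {
    "Description": "Writer endpoint; AWS repoints this DNS name to the promoted replica on failover",
    "Value": { "Fn::GetAtt": [ "hubElastiCacheReplicationGroup", "PrimaryEndPoint.Address" ] }
  },
  "RedisReadEndpoints": {
    "Description": "Comma-separated replica endpoints, usable for spreading read-only traffic",
    "Value": { "Fn::GetAtt": [ "hubElastiCacheReplicationGroup", "ReadEndPoint.Addresses" ] }
  }
}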
I'm learning OpenTelemetry and I wonder how dotnet-monitor is connected with OpenTelemetry (Meter). Are the two somehow connected, or is dotnet-monitor just a custom Microsoft tool that doesn't use the OpenTelemetry standards (API, SDK and exporters)?
If you run dotnet-monitor on your machine, it exposes the .NET metrics in Prometheus format, which means you can set up an OpenTelemetry Collector to scrape them.
For example, in an opentelemetry-collector-contrib configuration:
receivers:
  prometheus_exec:
    exec: dotnet monitor collect
    port: 52325
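A receiver on its own does nothing until it is wired into a pipeline; here is a minimal sketch of the rest of the collector configuration, using the logging exporter just to inspect the scraped metrics:
exporters:
  logging:

service:
  pipelines:
    metrics:
      receivers: [prometheus_exec]
      exporters: [logging]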
Please note that for dotnet-monitor to run, you need to create a settings.json in this path:
$XDG_CONFIG_HOME/dotnet-monitor/settings.json
If $XDG_CONFIG_HOME is not defined, create the file in this path:
$HOME/.config/dotnet-monitor/settings.json
If you want to identify the process by its PID, write this into settings.json (change Value to your PID):
{
  "DefaultProcess": {
    "Filters": [{
      "Key": "ProcessId",
      "Value": "1"
    }]
  }
}
If you want to identify the process by its name, write this into settings.json (change Value to your process name):
{
  "DefaultProcess": {
    "Filters": [{
      "Key": "ProcessName",
      "Value": "iisexpress"
    }]
  }
}
In my example I used this configuration:
{
  "DefaultProcess": {
    "Filters": [{
      "Key": "ProcessId",
      "Value": "1"
    }]
  },
  "Metrics": {
    "Providers": [
      { "ProviderName": "System.Net.Http" },
      { "ProviderName": "Microsoft-AspNetCore-Server-Kestrel" }
    ]
  }
}
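Before pointing the collector at it, you can verify that dotnet-monitor is actually serving metrics by querying the endpoint directly; this assumes the default metrics port of 52325 (the same port used in the receiver example above):
# should print Prometheus-format counters/gauges for the filtered process
curl http://localhost:52325/metrics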
I was following this tutorial to set up a ConfigMap for a redis.conf. After I create the Redis deployment, I check that the redis.conf file is in each of the pods, and it is there. The problem is that when I go into redis-cli and check the configuration, the redis.conf values aren't used. The default values are in effect, as if Redis did not start up with the redis.conf file.
redis.conf
maxclients 2000
requirepass "test"
redis-config configmap
{
  "apiVersion": "v1",
  "data": {
    "redis-config": "maxclients 2000\nrequirepass \"test\"\n\n"
  },
  "kind": "ConfigMap",
  "metadata": {
    "creationTimestamp": "2018-03-07T15:28:19Z",
    "name": "redis-config",
    "namespace": "default",
    "resourceVersion": "2569562",
    "selfLink": "/api/v1/namespaces/default/configmaps/redis-config",
    "uid": "29d250ea-221c-11e8-969f-06c0c8d545d2"
  }
}
k8s redis manifest.json
{
  "kind": "Deployment",
  "apiVersion": "extensions/v1beta1",
  "metadata": {
    "name": "redis-master",
    "creationTimestamp": null
  },
  "spec": {
    "replicas": 2,
    "template": {
      "metadata": {
        "creationTimestamp": null,
        "labels": {
          "app": "redis",
          "role": "master",
          "tier": "backend"
        }
      },
      "spec": {
        "hostNetwork": true,
        "nodeSelector": { "role": "cache" },
        "containers": [{
          "name": "master",
          "image": "redis",
          "ports": [{
            "containerPort": 6379,
            "hostPort": 6379,
            "protocol": "TCP"
          }],
          "volumeMounts": [{
            "mountPath": "/redis-master",
            "name": "config"
          }],
          "resources": {},
          "terminationMessagePath": "/dev/termination-log",
          "imagePullPolicy": "IfNotPresent"
        }],
        "volumes": [{
          "name": "config",
          "configMap": {
            "name": "redis-config",
            "items": [{
              "key": "redis-config",
              "path": "redis.conf"
            }]
          }
        }],
        "restartPolicy": "Always",
        "terminationGracePeriodSeconds": 30,
        "dnsPolicy": "ClusterFirst",
        "securityContext": {}
      }
    }
  },
  "status": {}
}
Now I know the tutorial uses a Pod kind, and I am using a Deployment kind, but I don't think that is the issue here.
It looks like you are pulling the default redis container. If you check the redis Dockerfiles, for example https://github.com/docker-library/redis/blob/d53b982b387634092c6f11069401679034054ecb/4.0/alpine/Dockerfile, at the bottom they have:
CMD ["redis-server"]
which will start redis with the default configuration.
Per redis documentation:
https://redis.io/topics/quickstart
under "Starting Redis" section, if you want to provide a different configuration, you would need to start redis with:
redis-server <config file>
Additionally, the example in the Kubernetes documentation uses a different redis container:
image: kubernetes/redis
And from its Dockerfile, https://github.com/kubernetes/kubernetes/blob/master/examples/storage/redis/image/Dockerfile, it seems that that one does start Redis with the provided configuration.
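So with the stock redis image, you have to point the server at the mounted file yourself. Here is a minimal sketch of the change to the container spec in the manifest above (the ConfigMap item is mounted at /redis-master/redis.conf, so that path is passed to redis-server):
"containers": [{
  "name": "master",
  "image": "redis",
  "command": [ "redis-server", "/redis-master/redis.conf" ],
  ...
}]
After redeploying, CONFIG GET maxclients in redis-cli should return 2000 instead of the default.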
The task seems simple: I want to take scheduled EBS snapshots of my EBS volumes on a daily basis. According to the documentation, CloudWatch Events seems to be the right place to do that:
http://docs.aws.amazon.com/AmazonCloudWatch/latest/events/TakeScheduledSnapshot.html
Now I want to create such a scheduled rule when launching a new stack with CloudFormation. For this there is a resource type AWS::Events::Rule:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-events-rule.html
But now comes the tricky part: how can I use this resource type to create a built-in target that creates my EBS snapshot, as described in the scenario above?
I'm pretty sure there is a way to do it, but I can't figure it out. My resource template looks like this at the moment:
"DailyEbsSnapshotRule": {
"Type": "AWS::Events::Rule",
"Properties": {
"Description": "creates a daily snapshot of EBS volume (8 a.m.)",
"ScheduleExpression": "cron(0 8 * * ? *)",
"State": "ENABLED",
"Targets": [{
"Arn": { "Fn::Join": [ "", "arn:aws:ec2:", { "Ref": "AWS::Region" }, ":", { "Ref": "AWS::AccountId" }, ":volume/", { "Ref": "EbsVolume" } ] },
"Id": "SomeId1"
}]
}
}
Any ideas?
I found the solution to this over on a question about how to do the same thing in Terraform. The solution in plain CloudFormation JSON seems to be:
"EBSVolume": {
"Type": "AWS::EC2::Volume",
"Properties": {
"Size": 1
}
},
"EBSSnapshotRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Version" : "2012-10-17",
"Statement": [ {
"Effect": "Allow",
"Principal": {
"Service": ["events.amazonaws.com", "ec2.amazonaws.com"]
},
"Action": ["sts:AssumeRole"]
}]
},
"Path": "/",
"Policies": [{
"PolicyName": "root",
"PolicyDocument": {
"Version" : "2012-10-17",
"Statement": [ {
"Effect": "Allow",
"Action": [
"ec2:CreateSnapshot"
],
"Resource": "*"
} ]
}
}]
}
},
"EBSSnapshotRule": {
"Type": "AWS::Events::Rule",
"Properties": {
"Description": "creates a daily snapshot of EBS volume (1 a.m.)",
"ScheduleExpression": "cron(0 1 * * ? *)",
"State": "ENABLED",
"Name": {"Ref": "AWS::StackName"},
"RoleArn": {"Fn::GetAtt" : ["EBSSnapshotRole", "Arn"]},
"Targets": [{
"Arn": {
"Fn::Join": [
"",
[
"arn:aws:automation:",
{"Ref": "AWS::Region"},
":",
{"Ref": "AWS::AccountId"},
":action/",
"EBSCreateSnapshot/EBSCreateSnapshot_",
{"Ref": "AWS::StackName"}
]
]
},
"Input": {
"Fn::Join": [
"",
[
"\"arn:aws:ec2:",
{"Ref": "AWS::Region"},
":",
{"Ref": "AWS::AccountId"},
":volume/",
{"Ref": "EBSVolume"},
"\""
]
]
},
"Id": "EBSVolume"
}]
}
}
Unfortunately, it is not (yet) possible to set up scheduled EBS snapshots via CloudWatch Events within a CloudFormation stack.
It is a bit hidden in the docs: http://docs.aws.amazon.com/AmazonCloudWatchEvents/latest/APIReference/API_PutTargets.html
Note that creating rules with built-in targets is supported only in the AWS Management Console.
And "EBSCreateSnapshot" is one of these so-called "built-in targets".
Amazon seems to have removed these "built-in" targets, and it is now possible to create CloudWatch rules to schedule EBS snapshots.
First you create a rule, to which the targets will be attached.
Replace XXXXXXXXXXXXX with your AWS account id:
aws events put-rule \
--name create-disk-snapshot-for-ec2-instance \
--schedule-expression 'rate(1 day)' \
--description "Create EBS snapshot" \
--role-arn arn:aws:iam::XXXXXXXXXXXXX:role/AWS_Events_Actions_Execution
Then you simply add your targets (up to 10 targets allowed per rule).
aws events put-targets \
--rule create-disk-snapshot-for-ec2-instance \
--targets "[{ \
\"Arn\": \"arn:aws:automation:eu-central-1:XXXXXXXXXXXXX:action/EBSCreateSnapshot/EBSCreateSnapshot_mgmt-disk-snapshots\", \
\"Id\": \"xxxx-yyyyy-zzzzz-rrrrr-tttttt\", \
\"Input\": \"\\\"arn:aws:ec2:eu-central-1:XXXXXXXXXXXXX:volume/<VolumeId>\\\"\" }]"
There's a better way to automate EBS snapshots these days: DLM (Data Lifecycle Manager). It's also available through CloudFormation. See these for details:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/snapshot-lifecycle.html
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dlm-lifecyclepolicy.html
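Here is a minimal sketch of such a DLM policy in the same CloudFormation JSON style (the role and tag names are illustrative, and the execution role is assumed to be defined elsewhere in the template; this policy snapshots every volume tagged Backup=true once a day and keeps the last 7 snapshots):
"DLMSnapshotPolicy": {
  "Type": "AWS::DLM::LifecyclePolicy",
  "Properties": {
    "Description": "Daily EBS snapshots via DLM",
    "State": "ENABLED",
    "ExecutionRoleArn": { "Fn::GetAtt": [ "DLMRole", "Arn" ] },
    "PolicyDetails": {
      "ResourceTypes": [ "VOLUME" ],
      "TargetTags": [ { "Key": "Backup", "Value": "true" } ],
      "Schedules": [{
        "Name": "DailySnapshots",
        "CreateRule": { "Interval": 24, "IntervalUnit": "HOURS", "Times": [ "08:00" ] },
        "RetainRule": { "Count": 7 },
        "CopyTags": true
      }]
    }
  }
}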
I am using Elasticsearch with Haystack and Django and want to search the following structure:
[
  {
    "title": "book1",
    "category": [ "Cat_1", "Cat_2" ],
    "key_values": [
      {
        "key_name": "key_1",
        "value": "sample_value_1"
      },
      {
        "key_name": "key_2",
        "value": "sample_value_12"
      }
    ]
  },
  {
    "title": "book2",
    "category": [ "Cat_3", "Cat_2" ],
    "key_values": [
      {
        "key_name": "key_1",
        "value": "sample_value_1"
      },
      {
        "key_name": "key_3",
        "value": "sample_value_6"
      },
      {
        "key_name": "key_4",
        "value": "sample_value_5"
      }
    ]
  }
]
Right now I have set up an index model using Haystack, with a "text" field that lumps all the data together and runs a full-text search! In my opinion this is not a well-designed search, because it ignores the structure of my data set, which feels odd.
As an example, say that for one object I have the key-value
{
"key_name": "key_1",
"value": "sample_value_1"
}
and for another object I have
{
"key_name": "key_2",
"value": "sample_value_1"
}
and when a query like "key_1 sample_value_1" comes in, I get a thoroughly mixed result of objects that contain these words anywhere in their fields, rather than results that respect the structure.
P.S. I am totally new to Elasticsearch, and frankly new to search technologies and their challenges. I have searched the web and SO but didn't find anything satisfying. Please let me know if there is something wrong with my thoughts and expectations of these search engines, or if there is an SO duplicate question! Also, is there a better approach to designing a database for this kind of search?
Read the ES docs on nested mappings and set up something like this:
"book_type" : {
"properties" : {
// title, cat mappings
"key_values" : {
"type" : "nested"
"properties": {
"key_name": {
"type": "string", "index": "not_analyzed"
},
"value": {
"type": "string"
}
}
}
}
}
Then query it using a nested query:
"nested" : {
"path" : "key_values",
"query" : {
"bool" : {
"must" : [
{
"term" : {"key_values.key_name" : "key_1"}
},
{
"match" : {"key_values.value" : "sample_value_1"}
}
]
}
}
}
I am trying to retrieve a file in my CloudFormation script. If I make the file publicly available, it works fine. If the file is private, the cfn script fails, logging a 404 error in /var/log/. Trying to retrieve the file via wget results in the appropriate 403 error.
How can I retrieve private files from S3?
My file clause looks like:
"files" : {
"/etc/httpd/conf/httpd.conf" : {
"source" : "https://s3.amazonaws.com/myConfigBucket/httpd.conf"
}
},
I added an authentication clause and appropriate parameter:
"Parameters" : {
"BucketRole" : {
"Description" : "S3 role for access to bucket",
"Type" : "String",
"Default" : "S3Access",
"ConstraintDescription" : "Must be a valid IAM Role"
}
}
"AWS::CloudFormation::Authentication": {
"default" : {
"type": "s3",
"buckets": [ "myConfigBucket" ],
"roleName": { "Ref" : "BucketRole" }
}
},
My IAM Role looks like:
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Resource": "*"
    }
  ]
}
The solution is to add an IamInstanceProfile property to the instance creation:
"Parameters" : {
...
"RoleName" : {
"Description" : "IAM Role for access to S3",
"Type" : "String",
"Default" : "DefaultRoleName",
"ConstraintDescription" : "Must be a valid IAM Role"
}
},
"Resources" : {
"InstanceName" : {
"Type" : "AWS::EC2::Instance",
"Properties" : {
"ImageId" : { "Fn::FindInMap" : [ "RegionMap", { "Ref" : "AWS::Region" }, "64"] },
"InstanceType" : { "Ref" : "InstanceType" },
"SecurityGroups" : [ {"Ref" : "SecurityGroup"} ],
"IamInstanceProfile" : { "Ref" : "RoleName" },
"KeyName" : { "Ref" : "KeyName" }
}
},
...
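One caveat with this approach: IamInstanceProfile takes the name of an instance profile, not of a role. Roles created through the IAM console automatically get a same-named instance profile, which is why referencing the role name works here; if you create the role inside CloudFormation instead, you also need an explicit instance-profile resource. Here is a hedged sketch, with an assumed role resource named "S3AccessRole":
"InstanceProfile": {
  "Type": "AWS::IAM::InstanceProfile",
  "Properties": {
    "Path": "/",
    "Roles": [ { "Ref": "S3AccessRole" } ]
  }
}
The instance would then reference it with "IamInstanceProfile": { "Ref": "InstanceProfile" }.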