Error adding a user in RabbitMQ using Ansible automation

I am trying to create a RabbitMQ node using Terraform and Ansible scripts. The other scripts execute successfully, but I get a warning and a failure while running the script that adds a user on the RabbitMQ node.
[WARNING]: Module did not set no_log for update_password
failed: [rabbit-node1] (item=admin) => {
"ansible_loop_var": "item",
"changed": false,
"cmd": "/usr/sbin/rabbitmqctl -q -n rabbit list_users",
"invocation": {
"module_args": {
"configure_priv": ".*",
"force": false,
"node": "rabbit",
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"permissions": [
{
"configure_priv": ".*",
"read_priv": ".*",
"vhost": "/",
"write_priv": ".*"
}
],
"read_priv": ".*",
"state": "present",
"tags": "administrator,admin",
"update_password": "on_create",
"user": "admin",
"vhost": "/",
"write_priv": ".*"
}
},
"item": "admin",
"msg": "Error:********#rabbit-node1.\n * Suggestion: start it with \"rabbitmqctl start_app\" and try again",
"rc": 70,
"stderr": "Error: rabbit application is not running on node rabbit#rabbit-node1.\n * Suggestion: start it with \"rabbitmqctl start_app\" and try again\n",
"stderr_lines": [
"Error: rabbit application is not running on node rabbit#rabbit-node1.",
" * Suggestion: start it with \"rabbitmqctl start_app\" and try again"
],
"stdout": "",
"stdout_lines": []
}
main.yml file for creating the user on the RabbitMQ node with Ansible:
- name: add user
  rabbitmq_user:
    user: "{{ item }}"
    password: "{{ ADMIN_PASS }}"
    tags: administrator,{{item}}
    vhost: /
    configure_priv: .*
    write_priv: .*
    read_priv: .*
    state: present
  with_items:
    - admin
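The failure itself (rc 70) is rabbitmqctl reporting that the rabbit application is not running on the node, and the output even suggests rabbitmqctl start_app. A minimal sketch of prerequisite tasks, assuming the broker is installed as the rabbitmq-server service, would make sure it is up before the user task runs:
- name: ensure the RabbitMQ service is running
  service:
    name: rabbitmq-server        # assumed service name
    state: started
    enabled: yes
- name: start the rabbit application, as the error message suggests
  command: rabbitmqctl start_app
  changed_when: false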

Related

Wait for file over ssh

I would like to do something like:
- name: Wait until the file is touched
  ansible.builtin.wait_for:
    path: 192.168.1.1:/home/test.txt
    timeout: 300
where 192.168.1.1 is some remote host I am connected to. Is this possible?
I don't believe it's possible with the wait_for module.
My workaround is to use until:
- name: Wait until file exists on remote.
  shell: ssh $USER@192.168.1.1 ls /home/test.txt
  register: filecheck
  until: filecheck.stdout == "/home/test.txt"
  retries: 10
  delay: 1
I copied the file to the host while the playbook was running, during the 9th attempt:
changed: [localhost] => {
"attempts": 9,
"changed": true,
"cmd": "ssh root#192.168.1.1 ls /home/test.txt",
"delta": "0:00:01.548091",
"end": "2022-08-11 15:31:05.276277",
"invocation": {
"module_args": {
"_raw_params": "ssh root#192.168.1.1 ls /home/test.txt",
"_uses_shell": true,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true,
"warn": false
}
},
"msg": "",
"rc": 0,
"start": "2022-08-11 15:31:03.728186",
"stderr": "",
"stderr_lines": [],
"stdout": "/home/test.txt",
"stdout_lines": [
"/home/test.txt"
]
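If the remote host is reachable as a normal inventory host, another sketch (assuming it appears in the inventory as, say, remote_host) is to delegate the check to it, so no extra ssh command is needed:
- name: Wait until the file exists on the remote host
  ansible.builtin.wait_for:
    path: /home/test.txt
    timeout: 300
  delegate_to: remote_host   # hypothetical inventory name for 192.168.1.1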

kubernetes e2e tests fails with spec.configSource: Invalid value

We are running Kubernetes (1.15.3) e2e tests via Sonobuoy, and 3 of them fail with the same error:
/go/src/k8s-tests/test/e2e/framework/framework.go:674
error setting labels on node
Expected error:
<*errors.StatusError | 0xc001e6def0>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
Status: "Failure",
Message: "Node \"nightly-e2e-rhel76-1vm\" is invalid: spec.configSource: Invalid value: core.NodeConfigSource{ConfigMap:(*core.ConfigMapNodeConfigSource)(nil)}: exactly one reference subfield must be non-nil",
Reason: "Invalid",
Details: {
Name: "nightly-e2e-rhel76-1vm",
Group: "",
Kind: "Node",
UID: "",
Causes: [
{
Type: "FieldValueInvalid",
Message: "Invalid value: core.NodeConfigSource{ConfigMap:(*core.ConfigMapNodeConfigSource)(nil)}: exactly one reference subfield must be non-nil",
Field: "spec.configSource",
},
],
RetryAfterSeconds: 0,
},
Code: 422,
},
}
Node "nightly-e2e-rhel76-1vm" is invalid: spec.configSource: Invalid value: core.NodeConfigSource{ConfigMap:(*core.ConfigMapNodeConfigSource)(nil)}: exactly one reference subfield must be non-nil
not to have occurred
/go/src/k8s-tests/test/e2e/apps/daemon_set.go:170
kubectl get nodes -o yaml gives these fields, and yes, we do have kubelet dynamic config enabled:
spec:
  configSource:
    configMap:
      kubeletConfigKey: kubelet
      name: kubelet-config-1.15.3-1581671888
      namespace: kube-system
[cloud-user@nightly-e2e-rhel76-1vm ~]$ kubectl get no nightly-e2e-rhel76-1vm -o json | jq .status.config
{
"active": {
"configMap": {
"kubeletConfigKey": "kubelet",
"name": "kubelet-config-1.15.3-1581671888",
"namespace": "kube-system",
"resourceVersion": "508",
"uid": "71961ed7-2fff-41f5-80b7-78167fb056fc"
}
},
"assigned": {
"configMap": {
"kubeletConfigKey": "kubelet",
"name": "kubelet-config-1.15.3-1581671888",
"namespace": "kube-system",
"resourceVersion": "508",
"uid": "71961ed7-2fff-41f5-80b7-78167fb056fc"
}
},
"lastKnownGood": {
"configMap": {
"kubeletConfigKey": "kubelet",
"name": "kubelet-config-1.15.3-1581671888",
"namespace": "kube-system",
"resourceVersion": "508",
"uid": "71961ed7-2fff-41f5-80b7-78167fb056fc"
}
}
}
What are we missing?
Thanks.

Docker: Cannot launch .Net Core 3.1 Web Api on browser after docker run

I'm new to Docker and, as a start, I'm trying to accomplish a basic task: dockerize a .NET Core 3.1 Web API and run it from the command line (not from Visual Studio, where it actually works).
I create my project image using the following Dockerfile and command:
docker build -t concepttest_crud1 .
Dockerfile (created by Visual Studio):
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
COPY ["Gfi_ConceptTest_CRUD1/Gfi_ConceptTest_CRUD1.csproj", "Gfi_ConceptTest_CRUD1/"]
RUN dotnet restore "Gfi_ConceptTest_CRUD1/Gfi_ConceptTest_CRUD1.csproj"
COPY . .
WORKDIR "/src/Gfi_ConceptTest_CRUD1"
RUN dotnet build "Gfi_ConceptTest_CRUD1.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "Gfi_ConceptTest_CRUD1.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "Gfi_ConceptTest_CRUD1.dll"]
I run my image with the following command:
docker run -d -p 8080:44390 --name crud1 concepttest_crud1
where 44390 is the port my API is configured on in Visual Studio.
When debugging, I usually access the API through:
https://localhost:44390/api/Authors
I'm trying to test my dockerized API in Chrome with this URL:
https://localhost:8080/api/Authors
to no avail. No matter which URL I try, my API doesn't respond, even though the docker run command executes with no errors.
docker ps output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8df6c92c6678 concepttest_crud1 "dotnet Gfi_ConceptT…" 22 minutes ago Up 22 minutes 0.0.0.0:8080->44390/tcp crud1
docker images output:
REPOSITORY TAG IMAGE ID CREATED SIZE
concepttest_crud1 latest 878e4c4845f6 About an hour ago 228MB
<none> <none> 36a29990113c About an hour ago 1GB
In the docker images output, I don't know why I see two images (the second with no name) when I expected only one.
I've also tried http://192.168.99.100:8080/api/Authors but it is not working either.
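A quick check that is not shown above, but may help in this situation, is to read the container's log and see which URLs Kestrel reports it is listening on:
docker logs crud1
The ASP.NET Core startup output normally includes a "Now listening on: ..." line, which tells you which container port should be published.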
Edit 1: Adding docker inspect.
[
{
"Id": "0f0bf4b4de30ef5dd222016db639abe45e5e70bc25270b74df45eec64e319b57",
"Created": "2019-12-23T12:30:06.3947619Z",
"Path": "dotnet",
"Args": [
"Gfi_ConceptTest_CRUD1.dll"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 83477,
"ExitCode": 0,
"Error": "",
"StartedAt": "2019-12-23T12:30:08.9701219Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:92b9217abf5dc6a7fc6c10aac2ac5549d6873adfc1b3ed30264d6e1fd4a971c7",
"ResolvConfPath": "/var/lib/docker/containers/0f0bf4b4de30ef5dd222016db639abe45e5e70bc25270b74df45eec64e319b57/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/0f0bf4b4de30ef5dd222016db639abe45e5e70bc25270b74df45eec64e319b57/hostname",
"HostsPath": "/var/lib/docker/containers/0f0bf4b4de30ef5dd222016db639abe45e5e70bc25270b74df45eec64e319b57/hosts",
"LogPath": "/var/lib/docker/containers/0f0bf4b4de30ef5dd222016db639abe45e5e70bc25270b74df45eec64e319b57/0f0bf4b4de30ef5dd222016db639abe45e5e70bc25270b74df45eec64e319b57-json.log",
"Name": "/crud1",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": null,
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "default",
"PortBindings": {
"443/tcp": [
{
"HostIp": "",
"HostPort": "8080"
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"Capabilities": null,
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": null,
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
28,
165
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": [
"/proc/asound",
"/proc/acpi",
"/proc/kcore",
"/proc/keys",
"/proc/latency_stats",
"/proc/timer_list",
"/proc/timer_stats",
"/proc/sched_debug",
"/proc/scsi",
"/sys/firmware"
],
"ReadonlyPaths": [
"/proc/bus",
"/proc/fs",
"/proc/irq",
"/proc/sys",
"/proc/sysrq-trigger"
]
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/3781dd694db551736c551397e9a0834ebd6fa653e0b78b6d6244e15ef7b8c291-init/diff:/var/lib/docker/overlay2/025232b693c4a6470e0b26e5302e5165757dbc4a44d5af5b7b8aa8dacd77ee03/diff:/var/lib/docker/overlay2/bcca2356fe199bbaa471c3590a3f55df343eecd84899cf570197d97ed8111d95/diff:/var/lib/docker/overlay2/5f2d77acbd50bc4a8cde011128fb43b2a0cff888e716c5ba882d4dd906bc1901/diff:/var/lib/docker/overlay2/ca57ac72ac1d48b622745bb473fd71c18d67392b02cdf938835f05cf9b547681/diff:/var/lib/docker/overlay2/0b894b31b26df6e40b57a86dc66bbd4eda71d0ebf6d5755c4c43bc296e9a24de/diff:/var/lib/docker/overlay2/b6ca0109d2718605ccba2585e52ae1718cbecd2fcd62197f1f7fd505cf2be358/diff:/var/lib/docker/overlay2/e0ff62d02fe725053674ce9e3b59a4004c101fd2e98f919e5037a412f1c3a4e8/diff",
"MergedDir": "/var/lib/docker/overlay2/3781dd694db551736c551397e9a0834ebd6fa653e0b78b6d6244e15ef7b8c291/merged",
"UpperDir": "/var/lib/docker/overlay2/3781dd694db551736c551397e9a0834ebd6fa653e0b78b6d6244e15ef7b8c291/diff",
"WorkDir": "/var/lib/docker/overlay2/3781dd694db551736c551397e9a0834ebd6fa653e0b78b6d6244e15ef7b8c291/work"
},
"Name": "overlay2"
},
"Mounts": [],
"Config": {
"Hostname": "0f0bf4b4de30",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"443/tcp": {},
"8080/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"ASPNETCORE_URLS=http://+:80",
"DOTNET_RUNNING_IN_CONTAINER=true"
],
"Cmd": null,
"Image": "crud1",
"Volumes": null,
"WorkingDir": "/app",
"Entrypoint": [
"dotnet",
"Gfi_ConceptTest_CRUD1.dll"
],
"OnBuild": null,
"Labels": {}
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "ed889f3303ca17ad4e79ae917d8b17c5e590af502adb8049dbd7e0fcd46b323c",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"443/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "8080"
}
],
"8080/tcp": null
},
"SandboxKey": "/var/run/docker/netns/ed889f3303ca",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "001cd49640c4211e5eeb1eedacedd6de07022cc3362acaca59b82f0e537f6238",
"Gateway": "172.17.0.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"MacAddress": "02:42:ac:11:00:02",
"Networks": {
"bridge": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID": "ccea227190a1f04d0043015cf3d9932ba41f4a96dfb9bc2a175a06483a9ca66e",
"EndpointID": "001cd49640c4211e5eeb1eedacedd6de07022cc3362acaca59b82f0e537f6238",
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:11:00:02",
"DriverOpts": null
}
}
}
}
]
Edit 2: Added curl.
docker exec -it 0f0bf4b4de30 /bin/bash
root@4237f947b2b0:/app# curl https://localhost:8080/api/Authors
.: .: Is a directory
Edit 3: New curl
docker exec -it 0f0bf4b4de30 /bin/bash
root@4237f947b2b0:/app# curl https://localhost/api/Authors
.: .: Is a directory
In my case it was due to having SSL enabled in the project; after removing everything related to HTTPS it started working as expected.
Having SSL enabled implies installing an SSL certificate inside the Docker container and enabling it, which I wasn't able to do since I don't have a certificate and I'm just learning Docker.
Steps I've done:
In Startup.cs I've disabled https redirection
//app.UseHttpsRedirection();
And in the project properties, under the "Debug" section, I unchecked "Enable SSL".
Then I was able to run the container as expected.
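For the port mapping itself, note that the base image sets ASPNETCORE_URLS=http://+:80 (visible in the docker inspect output above), so inside the container the app listens on port 80, not 44390. A sketch of a run command consistent with that, assuming plain HTTP after the changes described here:
docker run -d -p 8080:80 --name crud1 concepttest_crud1
With that mapping, http://localhost:8080/api/Authors (plain HTTP) should reach the API on a standard setup.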

Ansible aws s3 fails on directory checksum when getting an object

When getting an object from an S3 bucket using the following Ansible task:
- name: "copy object from s3://{{ s3_bucket }}/{{ s3_object }} to {{ dest }}"
s3:
bucket: "{{ s3_bucket }}"
object: "{{ s3_object }}"
dest: "{{ dest }}"
mode: get
I get the following error:
fatal: [som_fake_host]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_args": {
"aws_access_key": null,
"aws_secret_key": null,
"bucket": "some-fake-bucket",
"dest": "/some-fake-dest/",
"ec2_url": null,
"encrypt": true,
"expiry": "600",
"headers": null,
"ignore_nonexistent_bucket": false,
"marker": null,
"max_keys": "1000",
"metadata": null,
"mode": "get",
"object": "some_fake_file",
"overwrite": "always",
"permission": [
"private"
],
"prefix": null,
"profile": null,
"region": null,
"retries": 0,
"rgw": false,
"s3_url": null,
"security_token": null,
"src": null,
"validate_certs": true,
"version": null
}
},
"msg": "attempted to take checksum of directory: /some-fake-dest/"
}
Additional useful information:
The destination directory exists
The user that runs the playbook has permission on the destination directory
The file exists in the S3 bucket
Looking at the docs:
dest: The destination file path when downloading an object/key with a GET operation.
Try calling the module with a file path, not a directory. E.g.:
dest: "{{ dest }}/{{ s3_object }}"
or something.
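Putting that together, a sketch of the corrected task (reusing the variables from the question) might look like:
- name: "copy object from s3://{{ s3_bucket }}/{{ s3_object }} to {{ dest }}"
  s3:
    bucket: "{{ s3_bucket }}"
    object: "{{ s3_object }}"
    dest: "{{ dest }}/{{ s3_object }}"   # a file path, not just the directory
    mode: get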

Unable to connect to dockerized redis instance from outside docker

I have the latest docker installation (without boot2docker) and I am unable to connect to a dockerized redis instance running locally. Could you please tell me what I'm doing wrong here?
Created the container, mapping port 6379 to 127.0.0.1:6379:
bash-3.2$ docker run -p 127.0.0.1:6379:6379 --name webmonitor-redis -d redis
3291541d58ab16c362f9e0cd7017d179c0bc9aef3a1323e79f1e1ca075e171c9
docker ps output:
bash-3.2$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3291541d58ab redis "/entrypoint.sh redis" 14 seconds ago Up 6 seconds 127.0.0.1:6379->6379/tcp webmonitor-redis
Tried connecting from outside the container (but on the same host where the container is running); the connection failed:
bash-3.2$ ./src/redis-cli
Could not connect to Redis at 127.0.0.1:6379: Connection refused
not connected> exit
It works if I try to connect from another container, though:
bash-3.2$ docker run -it --link webmonitor-redis:redis --rm redis sh -c 'exec redis-cli -h "$REDIS_PORT_6379_TCP_ADDR" -p "$REDIS_PORT_6379_TCP_PORT"'
172.17.0.8:6379>
Here's the docker inspect for the container:
bash-3.2$ docker inspect 3291541d58ab
[
{
"Id": "3291541d58ab16c362f9e0cd7017d179c0bc9aef3a1323e79f1e1ca075e171c9",
"Created": "2015-10-03T15:48:17.818355794Z",
"Path": "/entrypoint.sh",
"Args": [
"redis-server"
],
"State": {
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 7769,
"ExitCode": 0,
"Error": "",
"StartedAt": "2015-10-03T15:48:17.954436198Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "2f2578ff984f013c9a5d6cbb6fe061ed3f73a17380a4c9b53b76d4b8da3eda7d",
"NetworkSettings": {
"Bridge": "",
"EndpointID": "b787e46d1219f36d4f1b1ea35c5f750f7174221137fb01889a26a3bc1e1c6aee",
"Gateway": "172.17.42.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"HairpinMode": false,
"IPAddress": "172.17.0.8",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:11:00:08",
"NetworkID": "30176c9c7c14a6a052af784014832a0c52b5966089d7bcfe535041569e6bb1c9",
"PortMapping": null,
"Ports": {
"6379/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "6379"
}
]
},
"SandboxKey": "/var/run/docker/netns/3291541d58ab",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null
},
"ResolvConfPath": "/mnt/sda1/var/lib/docker/containers/3291541d58ab16c362f9e0cd7017d179c0bc9aef3a1323e79f1e1ca075e171c9/resolv.conf",
"HostnamePath": "/mnt/sda1/var/lib/docker/containers/3291541d58ab16c362f9e0cd7017d179c0bc9aef3a1323e79f1e1ca075e171c9/hostname",
"HostsPath": "/mnt/sda1/var/lib/docker/containers/3291541d58ab16c362f9e0cd7017d179c0bc9aef3a1323e79f1e1ca075e171c9/hosts",
"LogPath": "/mnt/sda1/var/lib/docker/containers/3291541d58ab16c362f9e0cd7017d179c0bc9aef3a1323e79f1e1ca075e171c9/3291541d58ab16c362f9e0cd7017d179c0bc9aef3a1323e79f1e1ca075e171c9-json.log",
"Name": "/webmonitor-redis",
"RestartCount": 0,
"Driver": "aufs",
"ExecDriver": "native-0.2",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": null,
"ContainerIDFile": "",
"LxcConf": [],
"Memory": 0,
"MemorySwap": 0,
"CpuShares": 0,
"CpuPeriod": 0,
"CpusetCpus": "",
"CpusetMems": "",
"CpuQuota": 0,
"BlkioWeight": 0,
"OomKillDisable": false,
"MemorySwappiness": -1,
"Privileged": false,
"PortBindings": {
"6379/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "6379"
}
]
},
"Links": null,
"PublishAllPorts": false,
"Dns": null,
"DnsSearch": null,
"ExtraHosts": null,
"VolumesFrom": null,
"Devices": [],
"NetworkMode": "default",
"IpcMode": "",
"PidMode": "",
"UTSMode": "",
"CapAdd": null,
"CapDrop": null,
"GroupAdd": null,
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"SecurityOpt": null,
"ReadonlyRootfs": false,
"Ulimits": null,
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"CgroupParent": "",
"ConsoleSize": [
0,
0
]
},
"GraphDriver": {
"Name": "aufs",
"Data": null
},
"Mounts": [
{
"Name": "643a80cbd7a50cfd481acc48721b34030c8ce55ba64ac3bc161d5b330c9374d2",
"Source": "/mnt/sda1/var/lib/docker/volumes/643a80cbd7a50cfd481acc48721b34030c8ce55ba64ac3bc161d5b330c9374d2/_data",
"Destination": "/data",
"Driver": "local",
"Mode": "",
"RW": true
}
],
"Config": {
"Hostname": "3291541d58ab",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"6379/tcp": {}
},
"PublishService": "",
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"REDIS_VERSION=3.0.3",
"REDIS_DOWNLOAD_URL=http://download.redis.io/releases/redis-3.0.3.tar.gz",
"REDIS_DOWNLOAD_SHA1=0e2d7707327986ae652df717059354b358b83358"
],
"Cmd": [
"redis-server"
],
"Image": "redis",
"Volumes": {
"/data": {}
},
"VolumeDriver": "",
"WorkingDir": "/data",
"Entrypoint": [
"/entrypoint.sh"
],
"NetworkDisabled": false,
"MacAddress": "",
"OnBuild": null,
"Labels": {}
}
}
]
Am I missing something here?
Make sure port 6379 on the host is forwarded to port 6379 in the container.
Using -p 6379:6379 fixed the problem:
docker run -d --name redisDev -p 6379:6379 redis
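To verify from the host, a quick check (assuming redis-cli is installed locally) could be:
redis-cli -h 127.0.0.1 -p 6379 ping
It should print PONG if the port mapping works.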
The VM has its own IP, 192.168.99.100, so I was able to connect by binding to 192.168.99.100:6379:6379 and then connecting as below.
0c4de9a25467:redis-stable electron$ ./src/redis-cli -h 192.168.99.100
192.168.99.100:6379>
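If you are not sure of the VM's address, it can be looked up (assuming a Docker Machine VM named default) with:
docker-machine ip default
which prints something like 192.168.99.100.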
docker run -p 0.0.0.0:6379:6379 --name webmonitor-redis -d redis