Read LocalStack SES emails via API

As per the LocalStack documentation:
The sent messages can be retrieved via a service API endpoint (GET
/_localstack/ses) or from the filesystem.
Messages are also saved to the data directory (DATA_DIR, see
Configuration). If data directory is not available, the temporary
directory (TMPDIR) is used. The files are saved as JSON in the ses/
subdirectory and organised by message ID.
However, I am unable to get the "service API endpoint" method to work, and the documentation is not very specific: what port is this API listening on? I've tried to cURL the default port used by all LocalStack services (http://localhost:4566/_localstack/ses/), but this only returns a 404.
I have confirmed that the emails do appear in the data directory, but I specifically want to retrieve them via this API.
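For reference, this is the kind of call I expect to work once I find the right endpoint/port; the jq filter and the field names are guesses on my part, based on the docs saying the messages are stored as JSON:
curl -s http://localhost:4566/_localstack/ses | jq .
# if the messages sit under a key, something like this should pull out the interesting bits
curl -s http://localhost:4566/_localstack/ses | jq '.messages[]? | {Id, Source, Destination}'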
EDIT
I'm using docker-compose to start the container, but the same outcome happens when using the CLI:
~ localstack --version
0.14.3.3
~ SERVICES=ses localstack start -d
💻 LocalStack CLI 0.14.3.3
[17:02:56] starting LocalStack in Docker mode 🐳 localstack.py:135
[17:02:57] preparing environment bootstrap.py:729
configuring container bootstrap.py:737
starting container bootstrap.py:743
[17:02:58] detaching
~ aws --endpoint-url http://localhost:4566 ses verify-email-identity --email-address abc@mail.com
~ aws --endpoint-url http://localhost:4566 ses send-email --destination ToAddresses=abc@test.com --message file://$HOME/tmp/message.json --from abc@mail.com
{
"MessageId": "eorislettxoexeoc-xdcvrgnw-zggi-exmm-dfcc-dllwqdlnrrlu-thowhk"
}
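(For context, message.json is just a standard SES message payload along these lines; the subject and body values here are placeholders:)
{
    "Subject": {"Data": "placeholder subject", "Charset": "UTF-8"},
    "Body": {"Text": {"Data": "placeholder body", "Charset": "UTF-8"}}
}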
~ curl -v http://localhost:4566/_localstack/ses/
* Trying ::1:4566...
* connect to ::1 port 4566 failed: Connection refused
* Trying 127.0.0.1:4566...
* Connected to localhost (127.0.0.1) port 4566 (#0)
> GET /_localstack/ses/ HTTP/1.1
> Host: localhost:4566
> User-Agent: curl/7.77.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 404
...

Related

Mercure keeps binding to port 80

I'm using the Mercure hub 0.13. Everything works fine on my development machine, but on my test server the hub keeps trying to bind to port 80, resulting in an error, as nginx is already running on port 80.
run: loading initial config: loading new config: http app module: start: tcp: listening on :80: listen tcp :80: bind: address already in use
I'm starting the hub with the following command:
MERCURE_PUBLISHER_JWT_KEY=$(cat publisher.key.pub) \
MERCURE_PUBLISHER_JWT_ALG=RS256 \
MERCURE_SUBSCRIBER_JWT_KEY=$(cat publisher.key.pub) \
MERCURE_SUBSCRIBER_JWT_ALG=RS256 \
./mercure run -config Caddyfile.dev
Caddyfile.dev is as follows:
# Learn how to configure the Mercure.rocks Hub on https://mercure.rocks/docs/hub/config
{
    {$GLOBAL_OPTIONS}
}
{$SERVER_NAME:localhost:3000}
log
route {
    redir / /.well-known/mercure/ui/
    encode zstd gzip
    mercure {
        # Transport to use (default to Bolt)
        transport_url {$MERCURE_TRANSPORT_URL:bolt://mercure.db}
        # Publisher JWT key
        publisher_jwt {env.MERCURE_PUBLISHER_JWT_KEY} {env.MERCURE_PUBLISHER_JWT_ALG}
        # Subscriber JWT key
        subscriber_jwt {env.MERCURE_SUBSCRIBER_JWT_KEY} {env.MERCURE_SUBSCRIBER_JWT_ALG}
        # Permissive configuration for the development environment
        cors_origins *
        publish_origins *
        demo
        anonymous
        subscriptions
        # Extra directives
        {$MERCURE_EXTRA_DIRECTIVES}
    }
    respond /healthz 200
    respond "Not Found" 404
}
When I provide the SERVER_NAME as an environment variable without a domain, SERVER_NAME=:3000, the hub actually starts on port 3000, but it runs in HTTP mode, which only allows anonymous subscriptions and is not what I need.
Server:
Operating System: CentOS Stream 8
Kernel: Linux 4.18.0-383.el8.x86_64
Architecture: x86-64
Full output when trying to start the Mercure hub:
2022/05/10 04:50:29.605 INFO using provided configuration {"config_file": "Caddyfile.dev", "config_adapter": ""}
2022/05/10 04:50:29.606 WARN input is not formatted with 'caddy fmt' {"adapter": "caddyfile", "file": "Caddyfile.dev", "line": 3}
2022/05/10 04:50:29.609 INFO admin admin endpoint started {"address": "tcp/localhost:2019", "enforce_origin": false, "origins": ["localhost:2019", "[::1]:2019", "127.0.0.1:2019"]}
2022/05/10 04:50:29.610 INFO http enabling automatic HTTP->HTTPS redirects {"server_name": "srv0"}
2022/05/10 04:50:29.610 INFO tls.cache.maintenance started background certificate maintenance {"cache": "0xc0003d6150"}
2022/05/10 04:50:29.627 INFO tls cleaning storage unit {"description": "FileStorage:/root/.local/share/caddy"}
2022/05/10 04:50:29.628 INFO tls finished cleaning storage units
2022/05/10 04:50:29.642 INFO pki.ca.local root certificate is already trusted by system {"path": "storage:pki/authorities/local/root.crt"}
2022/05/10 04:50:29.643 INFO tls.cache.maintenance stopped background certificate maintenance {"cache": "0xc0003d6150"}
run: loading initial config: loading new config: http app module: start: tcp: listening on :80: listen tcp :80: bind: address already in use
I'm a bit late, but I hope this will help someone.
As mentioned here, you can specify the http_port manually in your Caddy configuration file.
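For example, a minimal sketch of the global options block in Caddyfile.dev (3001 is just an arbitrary free port, not a value from the original setup):
{
    {$GLOBAL_OPTIONS}
    # http_port sets the port Caddy uses for plain HTTP and for its automatic
    # HTTP->HTTPS redirects, so it stops trying to bind :80
    http_port 3001
}
Since the block already expands {$GLOBAL_OPTIONS}, exporting GLOBAL_OPTIONS='http_port 3001' before running ./mercure run -config Caddyfile.dev should achieve the same thing without editing the file.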

Authentication failure with OAuth to Traefik dashboard

I am trying to get Traefik set up in Docker and am having a heck of a time. Following this guide and using Cloudflare (DNS only for traefik.mydomain.com) to connect, I am getting "This site can't be reached: oauth.mydomain.com's server IP address could not be found".
wget https://traefik.mydomain.com/dashboard
--2020-09-26 19:19:38-- https://traefik.mydomain.com/dashboard
Resolving traefik.mydomain.com (traefik.mydomain.com)... <ip address>
Connecting to traefik.mydomain.com (traefik.mydomain.com)|<ip address>|:443... connected.
HTTP request sent, awaiting response... 307 Temporary Redirect
Location: https://accounts.google.com/o/oauth2/auth?client_id=6597174190-33npvgec044jtcrj4scmfgt561.apps.googleusercontent.com&prompt=select_account&redirect_uri=https%3A%2F%2Foauth.mydomain.com%2F_oauth&response_type=code&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.profile+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.email&state=fc4c85a1e11f4914247d1e7c95b031%3Agoogle%3Ahttps%3A%2F%2Ftraefik.mydomain.com%2Fdashboard [following]
--2020-09-26 19:19:38-- https://accounts.google.com/o/oauth2/auth?client_id=6597114190-33npkhvge44jtcrj4scmuafgt561.apps.googleusercontent.com&prompt=select_account&redirect_uri=https%3A%2F%2Foauth.mydomain.com%2F_oauth&response_type=code&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.profile+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.email&state=fc4c85a1e11f4914247d1e7c94a5b031%3Agoogle%3Ahttps%3A%2F%2Ftraefik.mydomain.com%2Fdashboard
Resolving accounts.google.com (accounts.google.com)... 172.217.1.205, 2607:f8b0:400f:805::200d
Connecting to accounts.google.com (accounts.google.com)|172.217.1.205|:443... connected.
HTTP request sent, awaiting response... 302 Moved Temporarily
Location: https://accounts.google.com/AccountChooser?oauth=1&continue=https%3A%2F%2Faccounts.google.com%2Fsignin%2Foauth%2Flegacy%2Fconsent%3Fauthuser%3Dunknown%26part%3DAJi8hAPSHE1cMVuaeTQ61pcXpMEfDhbxN02IAjg5jH0GYWG7yCVc5EC9ug_kuK3j7hUBGCiY_q_SGL-0xSPDtiliIthwpmOTvP_MV5upFvDdYTpTTlraXSxx_7f8vhJteA8UjJoKSqgeUvFWns_BdFn8z73XALchawMrWA1vVl0xJYpUYHUXxD3K0zl4TbcgpVOljSfZM0vkQAHwTm54OjNiw51GTMCJAwiGwh_ANodLXY1n07UrO6-AgJ1pEeRksrlKs-O2W2Az1Fj4QWMej3OJ8HiHfRVlBt8c8zStbROoFMIce9ldHm5FF-l54b3xQcBp4xLi6ABqcrciv_Y0TAFuuwwotfgqrl1_uMHfyX9KJogk_gntcEiG2489OMNwFinOVAPUCg1Z-gn-ps7g_oBl4MB-FzsIiVpfyy_qRD7SMyhvnVe4Bj-%26as%3DS-2012888342%253A160116957872%23 [following]
--2020-09-26 19:19:38-- https://accounts.google.com/AccountChooser?oauth=1&continue=https%3A%2F%2Faccounts.google.com%2Fsignin%2Foauth%2Flegacy%2Fconsent%3Fauthuser%3Dunknown%26part%3DAJi8hAPSk6oHE1cMVRfegIuaeTQ61pcXpMEfD2FXah02IAjg5GYWG7yCVc5EC9ug_kuK3j7hUBGCiY_q_SGL-0xSPDtiliIthwpmOTvP_MV5upFvDdYTpTTSxx_7f8vhJteA8UjJoKSqgeUvFWns_BdFn8z73XALchawMrWA1vVXbAl0xJYpUYHUXzl4TbcgpVOljSfZM0vkQAHwTmhD54OjNiw51GTMCJAwiGwh_ANodLXY1n07UrO6-AgJ1pEeRksrlKs-O2W2Az1Fj4QWMej3OJ8HiHfRVlBt8c8zStbROoFMIce9ldHm5FF-l54b3xQcBpABqcrciv_Y0TAFuuwwotfgqrl1_uMHfyX9KJogk_gntcEiG2489OMNwFAPUCg1Z-gn-ps7g_oBl4MB-FzsIiVpfyy_qRDBteJ7SMyhvnVe4Bj-%26as%3DS-2012888342%253A1601169578900872%23
Reusing existing connection to accounts.google.com:443.
HTTP request sent, awaiting response... 302 Moved Temporarily
Location: https://accounts.google.com/ServiceLogin?continue=https%3A%2F%2Faccounts.google.com%2Fsignin%2Foauth%2Flegacy%2Fconsent%3Fauthuser%3Dunknown%26part%3DAJi8hAPSkE1cMVRIuaeTQ61ppMEfD2FXahbxN02IAjg5jH0GYWG7yCVc5EC9ug_kuK3j7hUBGCiY_q_SGL-0xSPDtiliIthwpmOTvP_MV5upFvDdYTpTTlraXSxx_7f8vhJteA8UjJoKSqgs_BdFn8z73XALchawMrWA1vVXbAl0xJYpUYHUXxD3K0zl4TbcgpVOljSfZM0vkQAHwTmhD54OjNiw51GTMCJAwiGwh_ANodLXY1n07UrO6-AgJ1pEeRksrlKs-O2W2Az1Fj4QWMej3OJ8HiHBt8c8zStbROoFMIce9ldHm5FF-l54b3xQcBp4xLi6ABqcrciv_Y0TAFuuwwotfgqrl1_uMHfyX9KJntcEiG2489OMNwFinOVAPUCg1Z-gn-ps7g_oBl4MB-FzsIiVpfyy_qRDBteJ7SMyhvnVe4Bj-%26as%3DS-2012888342%253A1601178900872%23&sacu=1&oauth=1&rip=1 [following]
--2020-09-26 19:19:39-- https://accounts.google.com/ServiceLogin?continue=https%3A%2F%2Faccounts.google.com%2Fsignin%2Foauth%2Flegacy%2Fconsent%3Fauthuser%3Dunknown%26part%3DAJi8hAPSk6ogIuaeTQ61pcXpMEfD2FXahbxN02IAjg5jH0GYWG7yCVc5EC9ug_kuK3j7hUBGCiY_q_SGL-0xSPDtiliIthwpmOTvP_MV5upFvDdYTpTTlraXSxx_7f8vhJteA8UjJoKSqgeUvFWns_BdFn8z73XALchawMrWA1vVXbAl0xJYpUYHUXxD3K0zl4jSfZM0vkQAHwTmhD54OjNiw51GTMCJAwiGwh_ANodLXY1n07UrO6-AgJ1pEeRksrlKs-O2W2Az1Fj4QWMej3OVlBt8c8zStbROoFMIce9ldHm5FF-l54b3xQcBp4xLi6ABqcrciv_Y0TAFuuwwotfgqrl1_uKJogk_gntcEiG2489OMNwFinOVAPUCg1Z-gn-ps7g_oBl4MB-FzsIiVpfyy_qRDBteJ7SMyhvnVe4Bj-%26as%3DS-2012888342%253A1601169578900872%23&sacu=1&oauth=1&rip=1
Reusing existing connection to accounts.google.com:443.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: ‘dashboard.1’
dashboard.1 [ <=> ] 58.82K --.-KB/s in 0.03s
2020-09-26 19:19:39 (1.64 MB/s) - ‘dashboard.1’ saved [60236]
The Docker log says:
level=debug msg="Remote error http://oauth:4181. StatusCode: 307"
middlewareType=ForwardedAuthType middlewareName=middlewares-oauth@file
This is my docker-compose.yml file:
version: "3.3"
########################### NETWORKS
networks:
t2_proxy:
external:
name: t2_proxy
default:
driver: bridge
########################### SERVICES
services:
# All services / apps go below this line
# Traefik 2 - Reverse Proxy
traefik:
container_name: traefik
image: traefik:chevrotin # the chevrotin tag refers to v2.2.x but introduced a breaking change in 2.2.2
restart: unless-stopped
command: # CLI arguments
- --global.checkNewVersion=true
- --global.sendAnonymousUsage=false
- --entryPoints.http.address=:80
- --entryPoints.https.address=:443
# Allow these IPs to set the X-Forwarded-* headers - Cloudflare IPs: https://www.cloudflare.com/ips/
- --entrypoints.https.forwardedHeaders.trustedIPs=173.245.48.0/20,103.21.244.0/22,103.22.200.0/22,103.31.4.0/22,141.101.64.0/18,108.162.192.0/18,190.93.240.0/20,188.114.96.0/20,197.234.240.0/22,198.41.128.0/17,162.158.0.0/15,104.16.0.0/12,172.64.0.0/13,131.0.72.0/22
- --entryPoints.traefik.address=:8080
- --api=true
# - --api.insecure=true
# - --serversTransport.insecureSkipVerify=true
- --log=true
- --log.level=DEBUG # (Default: error) DEBUG, INFO, WARN, ERROR, FATAL, PANIC
- --accessLog=true
- --accessLog.filePath=/traefik.log
- --accessLog.bufferingSize=100 # Configuring a buffer of 100 lines
- --accessLog.filters.statusCodes=400-499
- --providers.docker=true
- --providers.docker.endpoint=unix:///var/run/docker.sock
- --providers.docker.defaultrule=Host(`{{ index .Labels "com.docker.compose.service" }}.$DOMAINNAME`)
- --providers.docker.exposedByDefault=false
- --providers.docker.network=t2_proxy
- --providers.docker.swarmMode=false
- --providers.file.directory=/rules # Load dynamic configuration from one or more .toml or .yml files in a directory.
# - --providers.file.filename=/path/to/file # Load dynamic configuration from a file.
- --providers.file.watch=true # Only works on top level files in the rules folder
# - --certificatesResolvers.dns-cloudflare.acme.caServer=https://acme-staging-v02.api.letsencrypt.org/directory # LetsEncrypt Staging Server - uncomment when testing
- --certificatesResolvers.dns-cloudflare.acme.email=$CLOUDFLARE_EMAIL
- --certificatesResolvers.dns-cloudflare.acme.storage=/acme.json
- --certificatesResolvers.dns-cloudflare.acme.dnsChallenge.provider=cloudflare
- --certificatesResolvers.dns-cloudflare.acme.dnsChallenge.resolvers=1.1.1.1:53,1.0.0.1:53
# networks:
# t2_proxy:
# ipv4_address: 192.168.90.254 # You can specify a static IP
networks:
- t2_proxy
security_opt:
- no-new-privileges:true
ports:
- target: 80
published: 80
protocol: tcp
mode: host
- target: 443
published: 443
protocol: tcp
mode: host
- target: 8080
published: 8080
protocol: tcp
mode: host
volumes:
- $DOCKERDIR/traefik2/rules:/rules
- /var/run/docker.sock:/var/run/docker.sock:ro
- $DOCKERDIR/traefik2/acme/acme.json:/acme.json # cert location - you must touch this file and change permissions to 600
- $DOCKERDIR/traefik2/traefik.log:/traefik.log # for fail2ban - make sure to touch file before starting container
- $DOCKERDIR/shared:/shared
environment:
- CF_API_EMAIL=$CLOUDFLARE_EMAIL
- CF_API_KEY=$CLOUDFLARE_API_KEY
labels:
- "traefik.enable=true"
# HTTP-to-HTTPS Redirect
- "traefik.http.routers.http-catchall.entrypoints=http"
- "traefik.http.routers.http-catchall.rule=HostRegexp(`{host:.+}`)"
- "traefik.http.routers.http-catchall.middlewares=redirect-to-https"
- "traefik.http.middlewares.redirect-to-https.redirectscheme.scheme=https"
# HTTP Routers
- "traefik.http.routers.traefik-rtr.entrypoints=https"
- "traefik.http.routers.traefik-rtr.rule=Host(`traefik.$DOMAINNAME`)"
- "traefik.http.routers.traefik-rtr.tls=true"
# - "traefik.http.routers.traefik-rtr.tls.certResolver=dns-cloudflare" # Comment out this line after first run of traefik to force the use of wildcard certs
- "traefik.http.routers.traefik-rtr.tls.domains[0].main=$DOMAINNAME"
- "traefik.http.routers.traefik-rtr.tls.domains[0].sans=*.$DOMAINNAME"
# - "traefik.http.routers.traefik-rtr.tls.domains[1].main=$SECONDDOMAINNAME" # Pulls main cert for second domain
# - "traefik.http.routers.traefik-rtr.tls.domains[1].sans=*.$SECONDDOMAINNAME" # Pulls wildcard cert for second domain
## Services - API
- "traefik.http.routers.traefik-rtr.service=api#internal"
## Middlewares
- "traefik.http.routers.traefik-rtr.middlewares=chain-oauth#file"
# Google OAuth - Single Sign On using OAuth 2.0
oauth:
container_name: oauth
image: thomseddon/traefik-forward-auth:latest
restart: unless-stopped
networks:
- t2_proxy
security_opt:
- no-new-privileges:true
environment:
- CLIENT_ID=$GOOGLE_CLIENT_ID
- CLIENT_SECRET=$GOOGLE_CLIENT_SECRET
- SECRET=$OAUTH_SECRET
- COOKIE_DOMAIN=$DOMAINNAME
- INSECURE_COOKIE=false
- AUTH_HOST=oauth.$DOMAINNAME
- URL_PATH=/_oauth
- WHITELIST=$MY_EMAIL
- LOG_LEVEL=debug
- LOG_FORMAT=text
- LIFETIME=2592000 # 30 days
- DEFAULT_ACTION=auth
- DEFAULT_PROVIDER=google
labels:
- "traefik.enable=true"
## HTTP Routers
- "traefik.http.routers.oauth-rtr.entrypoints=https"
- "traefik.http.routers.oauth-rtr.rule=Host(`oauth.$DOMAINNAME`)"
- "traefik.http.routers.oauth-rtr.tls=true"
## HTTP Services
- "traefik.http.routers.oauth-rtr.service=oauth-svc"
- "traefik.http.services.oauth-svc.loadbalancer.server.port=4181"
## Middlewares
- "traefik.http.routers.oauth-rtr.middlewares=chain-oauth#file"
Finally, the end of my middlewares.toml file looks like this:
[http.middlewares.middlewares-oauth]
  [http.middlewares.middlewares-oauth.forwardAuth]
    address = "http://oauth:4181" # Make sure you have the OAuth service in docker-compose.yml
    trustForwardHeader = true
    authResponseHeaders = ["X-Forwarded-User"]
I searched around and checked everything that has already been suggested, but no luck. It seems like it has to be something small, though.
In Cloudflare, I changed oauth.mydomain.com from "Proxied" to "DNS Only" and now I am no longer getting redirected.
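For anyone hitting the same "server IP address could not be found" message, a quick sanity check is to confirm that the oauth record actually resolves from outside (the hostname below is the placeholder from the question):
dig +short oauth.mydomain.com
# or
nslookup oauth.mydomain.com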

deepstream.io redis & rethink not ready

I am trying to set up deepstream.io. My goal is to have 4 Docker containers:
deepstream
the deepstream search
redis
rethink
Redis as well as RethinkDB are running and accepting connections. Starting deepstream now states that the cache as well as the storage are not ready. I do not get why, or what "no dependency description provided" is supposed to tell me.
Why does deepstream not accept the connection?
{
    "deepstreamVersion": "3.1.0",
    "gitRef": "2557412988b128b3331f6079ff1bd26b0b49302d",
    "buildTime": "Mon Sep 25 2017 14:42:10 GMT+0000 (UTC)",
    "platform": "linux",
    "arch": "x64",
    "nodeVersion": "v6.11.3",
    "libs": [
        "deepstream.io-cache-hazelcast:1.0.2",
        "deepstream.io-cache-memcached:1.0.0",
        "deepstream.io-cache-redis:1.1.0",
        "deepstream.io-logger-winston:1.1.0",
        "deepstream.io-storage-elasticsearch:1.0.1",
        "deepstream.io-storage-mongodb:1.1.0",
        "deepstream.io-storage-postgres:1.1.3",
        "deepstream.io-storage-rethinkdb:1.0.2"
    ]
}
Running deepstream start
===================== starting =====================
INFO | State transition (start): Stopped -> LoggerInit
INFO | logger ready: std out/err
INFO | State transition (logger-started): LoggerInit -> PluginInit
INFO | deepstream version: 3.1.0
INFO | configuration file loaded from /etc/deepstream/config.yml
INFO | library directory set to: /var/lib/deepstream
INFO | authenticationHandler ready: none
INFO | permissionHandler ready: valve permissions loaded from /etc/deepstream/permissions.yml
INFO | cache ready: no dependency description provided
INFO | storage ready: no dependency description provided
INFO | State transition (plugins-started): PluginInit -> ServiceInit
INFO | State transition (services-started): ServiceInit -> ConnectionEndpointInit
iconv-lite warning: javascript files use encoding different from utf-8. See https://github.com/ashtuchkin/iconv-lite/wiki/Javascript-source-file-encodings for more info.
INFO | Listening for websocket connections on 0.0.0.0:6020/deepstream
INFO | Listening for health checks on path /health-check
INFO | connectionEndpoint ready: WebSocket Connection Endpoint
INFO | Listening for http connections on 0.0.0.0:8080
INFO | Listening for health checks on path /health-check
INFO | connectionEndpoint ready: HTTP connection endpoint
INFO | State transition (connection-endpoints-started): ConnectionEndpointInit -> Running
INFO | Deepstream started
The config file:
# General
# Show the deepstream logo on startup (highly recommended)
showLogo: true
# Log messages with this level and above. Valid levels are DEBUG, INFO, WARN, ERROR, OFF
logLevel: DEBUG
# Directory where all plugins reside
libDir: /var/lib/deepstream
# Connectivity
# webfacing URL under which this client is reachable. Used for loadbalancing / failover
externalUrl: null
# SSL Configuration
sslKey: null
sslCert: null
sslCa: null
# Connection Endpoint Configuration
# to disable, replace configuration with null eg. `http: null`
connectionEndpoints:
  websocket:
    name: uws
    options:
      # port for the websocket server
      port: 6020
      # host for the websocket server
      host: 0.0.0.0
      # url path websocket connections connect to
      urlPath: /deepstream
      # url path for http health-checks, GET requests to this path will return 200 if deepstream is alive
      healthCheckPath: /health-check
      # the amount of milliseconds between each ping/heartbeat message
      heartbeatInterval: 30000
      # the amount of milliseconds that writes to sockets are buffered
      outgoingBufferTimeout: 0
      # Security
      # amount of time a connection can remain open while not being logged in
      # or false for no timeout
      unauthenticatedClientTimeout: 180000
      # invalid login attempts before the connection is cut
      maxAuthAttempts: 3
      # if true, the logs will contain the cleartext username / password of invalid login attempts
      logInvalidAuthData: false
      # maximum allowed size of an individual message in bytes
      maxMessageSize: 1048576
  http:
    name: http
    options:
      # port for the http server
      port: 8080
      # host for the http server
      host: 0.0.0.0
      # allow 'authData' parameter in POST requests, if disabled only token and OPEN auth is
      # possible
      allowAuthData: true
      # enable the authentication endpoint for requesting tokens/userData.
      # note: a custom authentication handler is required for token generation
      enableAuthEndpoint: false
      # path for authentication requests
      authPath: /auth
      # path for POST requests
      postPath: /
      # path for GET requests
      getPath: /
      # url path for http health-checks, GET requests to this path will return 200 if deepstream is alive
      healthCheckPath: /health-check
      # -- CORS --
      # if disabled, only requests with an 'Origin' header matching one specified under 'origins'
      # below will be permitted and the 'Access-Control-Allow-Credentials' response header will be
      # enabled
      allowAllOrigins: true
      # a list of allowed origins
      origins:
        - 'https://example.com'
# Logger Configuration
# logger:
#   # use either the default logger
#   name: default
#   options:
#     colors: true
#     # value of logLevel (line 4) will always overwrite this value
#     logLevel: INFO
#   # or the winston logger
#   name: winston
#   options:
#     # specify a list of transports (console, file, time)
#     -
#       type: console
#       options:
#         # value of logLevel (line 4) will always overwrite this value
#         level: info
#         colorize: true
#     -
#       type: time
#       options:
#         filename: ../var/deepstream
#   # or a custom logger
#   path: ./my-custom-logger
# Plugin Configuration
plugins:
  cache:
    name: redis
    options:
      host: Redis-Redis-1
      port: 6379
  storage:
    name: rethinkdb
    options:
      host: rethinkdb-rethinkdb-proxy-1
      port: 28015
      splitChar: /
# Storage options
# a RegExp that matches recordNames. If it matches, the record's data won't be stored in the db
storageExclusion: null
auth:
  type: none
  # getting permissions from a http webhook
  # type: http
  # options:
  #   # a post request will be send to this url on every incoming connection
  #   endpointUrl: http://localhost:6004
  #   # any of these will be treated as access granted
  #   permittedStatusCodes: [ 200 ]
  #   # if the webhook didn't respond after this amount of milliseconds, the connection will be rejected
  #   requestTimeout: 2000
# Permissioning
permission:
  # Only config or custom permissionHandler at the moment
  type: config
  options:
    # Path to the permissionFile. Can be json, js or yml
    path: ./permissions.yml
    # Amount of times nested cross-references will be loaded. Avoids endless loops
    maxRuleIterations: 3
    # PermissionResults are cached to increase performance. Lower number means more loading
    cacheEvacuationInterval: 60000
# Timeouts (in milliseconds)
# Timeout for client RPC acknowledgement
rpcAckTimeout: 1000
# Timeout for actual RPC provider response
rpcTimeout: 10000
# Maximum time permitted to fetch from cache
cacheRetrievalTimeout: 1000
# Maximum time permitted to fetch from storage
storageRetrievalTimeout: 2000
# Plugin startup timeout – deepstream init will fail if any plugins fail to emit a 'done' event within this timeout
dependencyInitialisationTimeout: 10000
# The amount of time to wait for a provider to acknowledge or reject a listen request
listenResponseTimeout: 500
# The amount of time a broadcast will wait (to allow broadcast coalescing). -1 means disabled.
broadcastTimeout: 0
# A list of prefixes that, when a record is updated via setData and it matches one of the prefixes
# it will be permissioned and written directly to the cache and storage layers
# storageHotPathPatterns:
#   - analytics/
#   - metrics/
Redis PING
ping Redis-Redis-1
PING redis-redis-1.rancher.internal (10.42.230.105): 56 data bytes
64 bytes from 10.42.230.105: icmp_seq=0 ttl=62 time=12.676 ms
64 bytes from 10.42.230.105: icmp_seq=1 ttl=62 time=12.751 ms
64 bytes from 10.42.230.105: icmp_seq=2 ttl=62 time=15.441 ms
64 bytes from 10.42.230.105: icmp_seq=3 ttl=62 time=12.838 ms
^C--- redis-redis-1.rancher.internal ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max/stddev = 12.676/13.427/15.441/1.164 ms
The message "no dependency description provided" just means that, under the hood, the connector has no description property; note that the log above does report "cache ready" and "storage ready".
I'd recommend setting some data via a deepstream client and checking whether it actually gets written to the database.
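One low-tech way to check that from the deepstream container (a sketch, assuming the Redis host name from the config above is reachable from there) is to watch Redis while a test record is written; if nothing shows up, the cache connector never actually talked to Redis:
# MONITOR echoes every command Redis receives; stop it with Ctrl-C
redis-cli -h Redis-Redis-1 -p 6379 monitor
# or simply list the keys after a test write
redis-cli -h Redis-Redis-1 -p 6379 keys '*'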

How to configure Knox to make it pass Authorization header to a backend service?

As discussed in my other question, there is no support for websocket authentication in Knox, but as a temporary solution we could handle authentication in our backend service. Our test has shown, however, that Knox does not pass the Authorization header to the backend.
[client]$ curl -i -u '<user>:<password>' https://knox-server/gateway/default/myservice/ping
# 8090 is our backend port
[knox-server]$ ngrep -W byline port 8090
interface: eth0
filter: ( port 8090 ) and ((ip || ip6) || (vlan && (ip || ip6)))
#
T <knox-server>:59118 -> <myservice>:8090 [AP]
GET /ping?doAs=<user> HTTP/1.1.
X-Forwarded-For: <client>.
X-Forwarded-Proto: https.
X-Forwarded-Port: 443.
X-Forwarded-Host: <knox-server>.
X-Forwarded-Server: <knox-server>.
X-Forwarded-Context: /gateway/default.
User-Agent: curl/7.54.0.
Accept: */*.
Host: <myservice>:8090.
Connection: Keep-Alive.
Accept-Encoding: gzip,deflate.
.
#
T <myservice>:8090 -> <knox-server>:59118 [AP]
HTTP/1.1 200 OK.
Date: Sat, 14 Oct 2017 14:27:58 GMT.
X-Application-Context: myservice:prod:8090.
Content-Type: text/plain;charset=utf-8.
Content-Length: 4.
.
PONG
How should I configure Knox (0.12.0 from HDP 2.6.2) to make it pass the Authorization header to the backend for websocket connections?
While writing this question I realised that there is a ticket KNOX-895 resolving the issue of passing cookies and headers to a backend service in Knox 0.14.0.
[EDIT]
I cloned the Knox git repo (commit 92b1505a), which includes KNOX-895 (2d236e78), and ran it locally with a websocket service added to the sandbox topology (sketched below).
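The topology change amounted to a service entry along these lines (the role name here is an assumption for illustration; the backend URL is the echo service used in the test below, and websocket support also has to be enabled via gateway.websocket.feature.enabled in gateway-site.xml):
<!-- hypothetical service entry in the sandbox topology -->
<service>
    <role>WEBSOCKET</role>
    <url>ws://echo.websocket.org</url>
</service>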
[tulinski]$ wscat -n --auth 'user:password' -c wss://localhost:8443/gateway/sandbox/echows
[tulinski]$ sudo ngrep -W byline host echo.websocket.org
#
T 192.168.0.16:59952 -> 174.129.224.73:80 [AP]
GET / HTTP/1.1.
Host: echo.websocket.org.
Upgrade: websocket.
Connection: Upgrade.
Sec-WebSocket-Key: Z4Qa9Dxwr6Qvq2QAicsT5Q==.
Sec-WebSocket-Version: 13.
Pragma: no-cache.
Cache-Control: no-cache.
Authorization: Basic dXNlcjpwYXNzd29yZA==.
.
##
T 174.129.224.73:80 -> 192.168.0.16:59952 [AP]
HTTP/1.1 101 Web Socket Protocol Handshake.
Connection: Upgrade.
Date: Mon, 16 Oct 2017 14:23:49 GMT.
Sec-WebSocket-Accept: meply+6cIyjbH+Vk2OsAqKJDWic=.
Server: Kaazing Gateway.
Upgrade: websocket.
.
Authorization header is passed to the backend service.

How to configure apache server to allow wget with proxy?

I'm totally new to the Apache httpd stuff.
I set up my host ServerHost1 as a file server with httpd:
# httpd -v
Server version: Apache/2.4.6 (Red Hat Enterprise Linux)
Server built: Dec 2 2014 08:09:42
I have put the file TestFile.txt under /var/www/html/TestDir/TestFile.txt
I modified part of the httpd.conf as follows:
<Directory />
    Order deny,allow
    Allow from all
</Directory>
On a test host TestHost1 with full Internet access, I can download my file with wget:
TestHost1]# wget http://ServerHost1/TestDir/TestFile.txt
--2016-03-17 13:39:12-- http://ServerHost1/TestDir/TestFile.txt
Resolving ServerHost1 (ServerHost1)... <IP address>
Connecting to ServerHost1 (ServerHost1)|<IP address>|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2859976598 (2.7G) [application/octet-stream]
Saving to: ‘TestFile.txt’
2% [> ] 60,645,376 24.0MB/s
On TestHost2, a host sitting on a semi-isolated network, I have to use a proxy for wget to work. It works fine with Google:
TestHost2]# wget google.ca
--2016-03-17 13:53:26-- http://google.ca/
Resolving proxy.com (proxy.com)... <ProxyIP>
Connecting to proxy.com (proxy.com)|<ProxyIP>|:3128... connected.
Proxy request sent, awaiting response... 301 Moved Permanently
Location: http://www.google.ca/ [following]
--2016-03-17 13:53:26-- http://www.google.ca/
Reusing existing connection to proxy.com:3128.
Proxy request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: ‘index.html’
[ <=> ] 19,928 --.-K/s in 0.1s
2016-03-17 13:53:27 (159 KB/s) - ‘index.html’ saved [19928]
However, when I try to get my file from ServerHost1, I get ERROR 503: Service Unavailable:
TestHost2]# wget http://ServerHost1/TestDir/TestFile.txt
--2016-03-17 13:57:13-- http://ServerHost1/TestDir/TestFile.txt
Resolving proxy.com (proxy.com)...<ProxyIP>
Connecting to proxy.com (proxy.com)|<ProxyIP>|:3128... connected.
Proxy request sent, awaiting response... 503 Service Unavailable
2016-03-17 13:57:13 ERROR 503: Service Unavailable.
So the questions are:
(1) Why am I seeing 503 Service Unavailable when the file is apparently available (since I can download it from TestHost1)?
(2) How do I configure my httpd.conf file so that TestHost2 can wget the file from ServerHost1?
Maybe try with ProxyRequests, as described in the Apache docs: https://httpd.apache.org/docs/2.4/mod/mod_proxy.html
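For reference, a minimal forward-proxy sketch based on that page (the allowed client range is a placeholder, and mod_proxy plus mod_proxy_http must be loaded):
# hypothetical forward-proxy section for httpd.conf
ProxyRequests On
ProxyVia On
<Proxy "*">
    # restrict who may use the proxy; adjust the range to your network
    Require ip 10.0.0.0/8
</Proxy>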