FTPS (TLS) connection throws a connection error (Pure-FTPd) - ssl

Error: I won't open a connection to 127.0.0.1 (only to 172.20.0.1) (code=500)
My Docker Compose file is below. I have also tried a passive port range, but I am not sure about --expose here. I am able to connect on the first attempt through my code, but on a second attempt, or when reusing the connection, it throws the above error.
I understand it has something to do with passive mode, and I have the configuration in place, but I am not sure what is missing. Please help.
version: '3'
services:
  ftpd_server:
    image: stilliard/pure-ftpd
    container_name: pure-ftpd
    privileged: true
    ports:
      - "21:21"
      - "30000-30099:30000-30099"
    volumes: # remember to replace /folder_on_disk/ with the path to where you want to store the files on the host machine
      #- "/tmp/test/data:/home/username/"
      #- "/tmp/test/passwd:/etc/pure-ftpd/passwd"
      - "/upload:/home/upload"
      #- "./certs:/etc/ssl/private"
    environment:
      PUBLICHOST: "localhost"
      FTP_USER_NAME: lktransfer
      FTP_USER_PASS: lktransfer
      FTP_USER_HOME: /home/upload/
      #FTP_PASSIVE_PORTS: "30000-30009"
      FTP_MAX_CLIENTS: 50
      FTP_PASSIVE_PORTS: 30000:30099
      FTP_MAX_CONNECTIONS: 50
      ADDED_FLAGS: "--tls=1"
      #ADDED_FLAGS: "--tls=2"
      TLS_USE_DSAPRAM: "true"
      TLS_CN: "localhost"
      TLS_ORG: "Ellkay"
      TLS_C: "IN"
    restart: always
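That error text is Pure-FTPd refusing a data connection to an address (127.0.0.1) that differs from the peer of the control connection (172.20.0.1), which usually points at the address advertised for the data channel. A guess, not a verified fix: with PUBLICHOST: "localhost" the container advertises 127.0.0.1, so make sure the client stays in passive mode and advertise an address it can actually reach (ftp.example.com below is a placeholder; use the hostname or IP your clients connect to):

environment:
  # advertise an address the client can reach, not "localhost"
  PUBLICHOST: "ftp.example.com"
  # keep the advertised passive range identical to the published ports above
  FTP_PASSIVE_PORTS: 30000:30099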

Related

Dockware with Traefik

I am trying to proxy a dockware container through Traefik.
The error is an internal server error (500).
Is it necessary to change the domain name at all? If so, how can I change it?
Docker Compose file for Shopware:
version: "3"
services:
shopwaretest:
image: dockware/play:latest
container_name: shopwaretest
restart: always
volumes:
- "db_shopwaretest:/var/lib/mysql"
- "shopwaretest:/var/www/html"
- ./hosts:/etc/hosts
networks:
- proxy
environment:
- XDEBUG_ENABLED=0
- PHP_VERSION=8.0
labels:
- "traefik.enable=true"
- "traefik.http.routers.shopwaretest-http.rule=Host(`example.com`)"
- "traefik.http.routers.shopwaretest-http.entrypoints=http"
- "traefik.http.routers.shopwaretest-http.service=shopwaretest-http-service"
- "traefik.http.services.shopwaretest-http-service.loadbalancer.server.port=80"
- "traefik.http.routers.shopwaretest-https.rule=Host(`example.com`)"
- "traefik.http.routers.shopwaretest-https.entrypoints=https"
- "traefik.http.routers.shopwaretest-https.service=shopwaretest-https-service"
- "traefik.http.services.shopwaretest-https-service.loadbalancer.server.port=80"
- "traefik.http.routers.shopwaretest-https.tls=true"
- "traefik.http.routers.shopwaretest-http.middlewares=redirect#file"
- "traefik.http.routers.shopwaretest-https.tls.certresolver=http"
volumes:
db_shopwaretest:
driver: local
shopwaretest:
driver: local
networks:
proxy:
external: true
If you get an internal server error, check the server logs.
You can manually change the domain name in the sales_channel_domain table.
The problem might be that SSL is terminated at Traefik and Shopware does not detect this. If that is the case, you may need to set the TRUSTED_PROXIES variable to the IP of your Traefik server/container.
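A minimal sketch of that last suggestion, assuming Shopware picks up a TRUSTED_PROXIES environment variable (the subnet below is a placeholder; narrow it to your Traefik container's actual IP or the subnet of the external proxy network):

services:
  shopwaretest:
    environment:
      - XDEBUG_ENABLED=0
      - PHP_VERSION=8.0
      # placeholder range covering Docker's default address pools;
      # replace with the IP of your Traefik container if known
      - TRUSTED_PROXIES=172.16.0.0/12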

Portainer doesn't show icons anymore since upgrading to v2 (Traefik Proxy)

Since upgrading to Portainer v2, the icons suddenly do not load anymore. I can still access Portainer (which is proxied by Traefik), but after a bit of testing I noticed that only / is forwarded. If a path is given, Traefik throws a 404 error. This is a problem because Portainer loads its fonts from paths like /b15db15f746f29ffa02638cb455b8ec0.woff2.
There is one issue about this on Github, but I don't really know what to do with that information: https://github.com/portainer/portainer/issues/3706
My Traefik configuration
version: "2"
# Manage domain access to services
services:
traefik:
container_name: traefik
image: traefik
command:
- --api.dashboard=true
- --certificatesresolvers.le.acme.email=${ACME_EMAIL}
- --certificatesresolvers.le.acme.storage=acme.json
# Enable/Disable staging by commenting/uncommenting the next line
# - --certificatesresolvers.le.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory
- --certificatesresolvers.le.acme.dnschallenge=true
- --certificatesresolvers.le.acme.dnschallenge.provider=cloudflare
- --entrypoints.http.address=:80
- --entrypoints.https.address=:443
- --global.sendAnonymousUsage
- --log.level=INFO
- --providers.docker=true
- --providers.docker.exposedbydefault=false
- --providers.docker.network=traefik_proxy
restart: always
networks:
- traefik_proxy
ports:
- "80:80"
- "443:443"
dns:
- 1.1.1.1
- 1.0.0.1
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- ./acme.json:/acme.json
# - ./acme-staging.json:/acme.json
environment:
CF_API_EMAIL: ${CLOUDFLARE_EMAIL}
CF_API_KEY: ${CLOUDFLARE_API_KEY}
labels:
- traefik.enable=true
- traefik.http.routers.traefik0.entrypoints=http
- traefik.http.routers.traefik0.rule=Host(`${TRAEFIK_URL}`)
- traefik.http.routers.traefik0.middlewares=to_https
- traefik.http.routers.traefik.entrypoints=https
- traefik.http.routers.traefik.rule=Host(`${TRAEFIK_URL}`)
- traefik.http.routers.traefik.middlewares=traefik_auth
- traefik.http.routers.traefik.tls=true
- traefik.http.routers.traefik.tls.certresolver=le
- traefik.http.routers.traefik.service=api#internal
# Declaring the user list
#
# Note: all dollar signs in the hash need to be doubled for escaping.
# To create user:password pair, it's possible to use this command:
# echo $(htpasswd -nb user password) | sed -e s/\\$/\\$\\$/g
- traefik.http.middlewares.traefik_auth.basicauth.users=${TRAEFIK_USERS}
# Standard middleware for other containers to use
- traefik.http.middlewares.to_https.redirectscheme.scheme=https
- traefik.http.middlewares.to_https_perm.redirectscheme.scheme=https
- traefik.http.middlewares.to_https_perm.redirectscheme.permanent=true
networks:
traefik_proxy:
external: true
And my Portainer configuration
version: "2"
# Manage docker containers
services:
portainer:
container_name: portainer
image: portainer/portainer-ce
restart: always
networks:
- traefik_proxy
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ./data/:/data/
labels:
- traefik.enable=true
- traefik.http.services.portainer.loadbalancer.server.port=9000
- traefik.http.routers.portainer0.entrypoints=http
- traefik.http.routers.portainer0.rule=Host(`${PORTAINER_URL}`)
- traefik.http.routers.portainer0.middlewares=to_https
- traefik.http.routers.portainer.entrypoints=https
- traefik.http.routers.portainer.rule=Host(`${PORTAINER_URL}`)
- traefik.http.routers.portainer.tls=true
- traefik.http.routers.portainer.tls.certresolver=le
networks:
traefik_proxy:
external: true
What do I have to change to make Traefik be able to forward the paths so that Portainer can load the icons?
Could you try flushing your DNS cache?
In Chrome, enter 'chrome://net-internals/#dns' into the URL bar and press Enter.
Then click on 'Clear host cache'.
Then refresh your Portainer page.
I noticed that there is also an Alpine version of Portainer.
After switching to that (image: portainer/portainer-ce:alpine), the icons seem to be working again. I don't know what the issue is with the regular image, but this solves it for now.
PS: I had tried to use the Access-Control header on Traefik, but that didn't help. I guess it's a problem with Portainer's code itself.
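For reference, the switch is a one-line change in the Portainer service from the configuration above (only the image tag differs):

services:
  portainer:
    # Alpine variant of the same image; the icons load again with this tag
    image: portainer/portainer-ce:alpine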
If someone else is facing this issue: I resolved it by deleting my browser cache, or just do a full refresh with Ctrl+Shift+R.

GCS Connector fails when enabling SASL, SSL, or SASL_SSL

I have been able to connect the GCS connector successfully without SASL or SSL enabled. When I enable SASL and SSL, connect-standalone does not seem to be able to communicate with the brokers.
The problem appears to be with the gcs-sink-license-manager. This is what I have found in the logs, but they aren't very helpful for actually figuring out what the issue is:
LOGS
[2018-12-19 16:29:05,645] INFO [AdminClient clientId=gcs-sink-license-manager] Metadata update failed (org.apache.kafka.clients.admin.internals.AdminMetadataManager:238)
org.apache.kafka.common.errors.TimeoutException: Timed out waiting to send the call.
[2018-12-19 16:29:05,647] ERROR WorkerConnector{id=gcs-sink} Error while starting connector (org.apache.kafka.connect.runtime.WorkerConnector:119)
org.apache.kafka.connect.errors.ConnectException: Timed out while checking for or creating topic(s) '_confluent-command'. This could indicate a connectivity issue, unavailable topic partitions, or if this is your first use of the topic it may have taken too long to create.
at org.apache.kafka.connect.util.TopicAdmin.createTopics(TopicAdmin.java:251)
at io.confluent.license.LicenseStore$1.run(LicenseStore.java:159)
at org.apache.kafka.connect.util.KafkaBasedLog.start(KafkaBasedLog.java:126)
at io.confluent.license.LicenseStore.start(LicenseStore.java:187)
at io.confluent.license.LicenseManager.<init>(LicenseManager.java:42)
at io.confluent.connect.gcs.GcsSinkConnector.checkLicense(GcsSinkConnector.java:80)
at io.confluent.connect.gcs.GcsSinkConnector.start(GcsSinkConnector.java:67)
at org.apache.kafka.connect.runtime.WorkerConnector.doStart(WorkerConnector.java:111)
at org.apache.kafka.connect.runtime.WorkerConnector.start(WorkerConnector.java:136)
at org.apache.kafka.connect.runtime.WorkerConnector.transitionTo(WorkerConnector.java:195)
at org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:241)
at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.startConnector(StandaloneHerder.java:297)
at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.putConnectorConfig(StandaloneHerder.java:206)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:107)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
[2018-12-19 16:29:05,649] INFO Finished creating connector gcs-sink (org.apache.kafka.connect.runtime.Worker:257)
[2018-12-19 16:29:05,650] INFO Skipping reconfiguration of connector gcs-sink since it is not running (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:329)
[2018-12-19 16:29:05,652] INFO Created connector gcs-sink (org.apache.kafka.connect.cli.ConnectStandalone:104)
Connector Properties
connector.class="io.confluent.connect.gcs.GcsSinkConnector"
storage.class="io.confluent.connect.gcs.storage.GcsStorage"
bootstrap.servers=kafka1:19092
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter.schemas.enable=false
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
offset.storage.file.filename=/tmp/connect.offsets
offset.flush.interval.ms=10000
plugin.path=/usr/share/java,/usr/share/confluent-hub-components
gcs.sasl.properties
#Connector
format.class=io.confluent.connect.gcs.format.json.JsonFormat
partitioner.class=io.confluent.connect.storage.partitioner.DefaultPartitioner
flush.size=3
# confluent.license=
#GCS
name=gcs-sink
connector.class=io.confluent.connect.gcs.GcsSinkConnector
gcs.bucket.name=kafka-bucket-4c
gcs.part.size=5242880
gcs.credentials.path=/usr/share/assets/gcs-key.json
confluent.topic.bootstrap.servers=kafka1:19092
topics=sandbox
confluent.topic.replication.factor=1
#Storage
storage.class=io.confluent.connect.gcs.storage.GcsStorage
client.id=gcs-standalone-sink
# Sink authentication settings
consumer.log4j.root.loglevel=DEBUG
consumer.bootstrap.servers=kafka1:19092
consumer.sasl.mechanism=PLAIN
consumer.security.protocol=SASL_PLAINTEXT
consumer.ssl.endpoint.identification.algorithm=
Dockerfile
FROM confluentinc/cp-kafka-connect
ADD assets /usr/share/assets
# ENV CONNECT_OPTS "-Djava.security.auth.login.config=/usr/share/assets/kafka_admin_account.conf -Djavax.net.ssl.trustStore=/usr/share/assets/secrets/kafka.client.truststore.jks -Djavax.net.ssl.trustStorePassword=changeit"
ENV KAFKA_OPTS "-Djava.security.auth.login.config=/usr/share/assets/secrets/kafka_admin_account.conf -Djavax.net.debug=all"
ENV CONNECT_OPTS "-Djava.security.auth.login.config=/usr/share/assets/secrets/kafka_admin_account.conf -Djavax.net.debug=all"
COPY assets/secrets/cacerts /usr/lib/jvm/zulu-8-amd64/jre/lib/security/cacerts
CMD ["/bin/bash", "-c", "connect-standalone ${CONNECT_PROPS} ${GCS_PROPS}"]
docker-compose file
kafka1:
  image: company-kafka-secure
  # build: ./
  depends_on:
    - zookeeper
  ports:
    - 19091:19091
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    KAFKA_ADVERTISED_LISTENERS: SASL_PLAINTEXT://kafka1:19092,EXT://localhost:19091
    KAFKA_LISTENERS: SASL_PLAINTEXT://:19092,EXT://:19091
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: SASL_PLAINTEXT:SASL_PLAINTEXT,EXT:SASL_PLAINTEXT
    KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAIN
    KAFKA_SECURITY_INTER_BROKER_PROTOCOL: SASL_PLAINTEXT
    KAFKA_SASL_ENABLED_MECHANISMS: PLAIN
    KAFKA_ZOOKEEPER_CONNECTION_TIMEOUT_MS: 6000
    ZOOKEEPER_SASL_ENABLED: "false"
    KAFKA_AUTHORIZER_CLASS_NAME: com.us.digital.kafka.security.authorization.KafkaAuthorizer
    CONFLUENT_METRICS_ENABLE: "false"
  volumes:
    - ./secrets:/etc/kafka/secrets
  networks:
    - message_hub
kafka_gcs_connect:
  build: ./kafka-connect
  ports:
    - 28082:28082
  depends_on:
    - kafka1
    - kafka3
    - kafka2
    - zookeeper
  environment:
    CONNECT_PROPS: /usr/share/assets/connect-standalone.sasl.properties
    CONNECT_REST_PORT: 28082
    GCS_PROPS: /usr/share/assets/gcs.sasl.properties
  networks:
    - message_hub
Here are all of the properties I found I needed to get SASL working with the GCS connector:
CONNECT_BOOTSTRAP_SERVERS=kafka1:19092,kafka2:29092,kafka3:39092
CONNECT_CONFLUENT_TOPIC_BOOTSTRAP_SERVERS=kafka1:19092,kafka2:29092,kafka3:39092
CONNECT_CONFLUENT_LICENSE=
CONNECT_KEY_CONVERTER=org.apache.kafka.connect.json.JsonConverter
CONNECT_VALUE_CONVERTER=org.apache.kafka.connect.json.JsonConverter
CONNECT_KEY_CONVERTER_SCHEMAS_ENABLE=false
CONNECT_VALUE_CONVERTER_SCHEMAS_ENABLE=false
CONNECT_CONFIG_STORAGE_TOPIC=connect-config
CONNECT_OFFSET_STORAGE_TOPIC=connect-offsets
CONNECT_STATUS_STORAGE_TOPIC=connect-status
CONNECT_REPLICATION_FACTOR=1
CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR=1
CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR=1
CONNECT_STATUS_STORAGE_REPLICATION_FACTOR=1
CONNECT_SECURITY_PROTOCOL=SASL_PLAINTEXT
CONNECT_SASL_MECHANISM=PLAIN
CONNECT_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM=
CONNECT_CONSUMER_BOOTSTRAP_SERVERS=kafka1:19092,kafka2:29092,kafka3:39092
CONNECT_CONSUMER_SECURITY_PROTOCOL=SASL_PLAINTEXT
CONNECT_CONSUMER_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM=
CONNECT_CONSUMER_SASL_MECHANISM=PLAIN
CONNECT_GROUP_ID=gcs-kafka-connector
CONNECT_INTERNAL_KEY_CONVERTER=org.apache.kafka.connect.json.JsonConverter
CONNECT_INTERNAL_VALUE_CONVERTER=org.apache.kafka.connect.json.JsonConverter
CONNECT_REST_PORT=28082
CONNECT_PLUGIN_PATH=/usr/share/java,/usr/share/confluent-hub-components
KAFKA_OPTS=-Djava.security.auth.login.config=/usr/share/assets/kafka_admin_account.conf
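For completeness, the JAAS file referenced by -Djava.security.auth.login.config (kafka_admin_account.conf above) needs a KafkaClient entry for SASL/PLAIN. A sketch with placeholder credentials, not the poster's actual file:

// standard Kafka SASL/PLAIN client login module; replace the
// placeholder credentials with the ones your brokers expect
KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="connect-user"
  password="connect-password";
};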

400 Error HTTP GET Request between Docker Containers with HTTPURLConnection

I have two containers defined in a docker-compose file:
tomcat_webserver_api:
  image: tomcat:8
  volumes:
    - ./API/Docker/API.war:/usr/local/tomcat/webapps/API.war
  ports:
    - "8080:8080"
  depends_on:
    - mysql_database
tomcat_webserver_anwendung:
  image: tomcat:8
  ports:
    - "8081:8080"
  volumes:
    - ./Anwendung/Docker/Anwendung.war:/usr/local/tomcat/webapps/Anwendung.war
  depends_on:
    - tomcat_webserver_api
  environment:
    API_HOST: tomcat_webserver_api
    API_PORT: 8080
Now I want to access the URL http://tomcat_webserver_api:8080/API/restaurants/Wochentag from inside the Java web application with an HttpURLConnection.
Issue: it returns a 400 error:
java.io.IOException: Server returned HTTP response code: 400 for URL: http://tomcat_webserver_api:8080/API/restaurants/Wochentag
The code looks like this (the headers are nearly the same as when I connect to the URL via curl, which works inside the container):
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import javax.ws.rs.core.UriBuilder; // JAX-RS UriBuilder (e.g. Jersey)

URL api = UriBuilder.fromUri("http://" + "tomcat_webserver_api" + ":" + "8080" + "/API/restaurants/RestaurantSpeisen").build().toURL();
System.setProperty("http.agent", "curl/7.52.1");
HttpURLConnection connection = (HttpURLConnection) api.openConnection();
connection.setRequestMethod("GET");
connection.setRequestProperty("Host", "localhost");
connection.setRequestProperty("User-Agent", "curl/7.52.1");
connection.connect();
BufferedReader in = new BufferedReader(new InputStreamReader(connection.getInputStream(), "UTF-8"));
If I try to connect to http://172.20.0.3:8080/API/restaurants/Wochentag I get a 200 OK HTTP response code and the JSON data.
If I exec into the API container and inspect the logs, I can see the 400 GET request.
Why is this happening?
http://172.20.0.3:8080/API/restaurants/Wochentag - works
http://tomcat_webserver_api:8080/API/restaurants/Wochentag - fails, though with a 400 rather than a 404 error
I have had the same issue as you; apparently underscores are not allowed in virtual host names. Try removing them, for example, use just tomcatwebserverapi - that should fix your problem. A sketch of the rename follows below.
See Can (domain name) subdomains have an underscore "_" in it? for more information about valid characters in hostnames.
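The rename applied to the compose file from the question (service names concatenated as suggested above; note that API_HOST must change to match):

tomcatwebserverapi:
  image: tomcat:8
  volumes:
    - ./API/Docker/API.war:/usr/local/tomcat/webapps/API.war
  ports:
    - "8080:8080"
  depends_on:
    - mysql_database
tomcatwebserveranwendung:
  image: tomcat:8
  ports:
    - "8081:8080"
  volumes:
    - ./Anwendung/Docker/Anwendung.war:/usr/local/tomcat/webapps/Anwendung.war
  depends_on:
    - tomcatwebserverapi
  environment:
    # the underscore-free hostname the application should now use
    API_HOST: tomcatwebserverapi
    API_PORT: 8080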
Please give explicit container names a try:
tomcat_webserver_api:
  image: tomcat:8
  container_name: tomcat_webserver_api
  volumes:
    - ./API/Docker/API.war:/usr/local/tomcat/webapps/API.war
  ports:
    - "8080:8080"
  depends_on:
    - mysql_database
tomcat_webserver_anwendung:
  container_name: tomcat_webserver_app
  image: tomcat:8
  ports:
    - "8081:8080"
  volumes:
    - ./Anwendung/Docker/Anwendung.war:/usr/local/tomcat/webapps/Anwendung.war
  depends_on:
    - tomcat_webserver_api
  environment:
    API_HOST: tomcat_webserver_api
    API_PORT: 8080
The "local only" configuration needs explicit container names to activate Docker's name lookup mechanism. In Swarm mode you wouldn't need to set container names.

How can I setup a proxy in a Selenium Chrome container?

I have a docker-compose.yml file with well-known environment variables to reach our corporate proxy:
---
version: '2.2'
# adapted from:
# https://github.com/SeleniumHQ/docker-selenium/wiki/Getting-Started-with-Docker-Compose
# docker-compose --force-recreate
services:
  chrome:
    privileged: true
    image: "selenium/standalone-chrome:3.11.0"
    ports:
      - "4444:4444"
    volumes:
      - /dev/shm:/dev/shm
    environment:
      - TZ="UT"
      - http_proxy=http://proxy.lan:8080
      - https_proxy=http://proxy.lan:8080
      - no_proxy=
      #- SE_OPTS=-Dhttp.proxyHost=proxy.lan -Dhttp.proxyPort=8080
    network_mode: host
When I run wget in the container, then the proxy is used as expected.
--2018-05-02 12:30:45-- http://google.com/
Resolving proxy.lan (proxy.lan)... 192.168.33.141
Connecting to proxy.lan (proxy.lan)|192.168.33.141|:8080... connected.
Proxy request sent, awaiting response... 301 Moved Permanently
Location: http://www.google.com/ [following]
--2018-05-02 12:30:45-- http://www.google.com/
Reusing existing connection to proxy.lan:8080.
Proxy request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: ‘/dev/null’
/dev/null [ <=> ] 10.81K --.-KB/s in 0s
2018-05-02 12:30:46 (310 MB/s) - ‘/dev/null’ saved [11073]
However, when I try to run the container with SE_OPTS="-Dhttp.proxyHost=proxy.lan -Dhttp.proxyPort=8080", then I see a stacktrace:
Exception in thread "main" com.beust.jcommander.ParameterException: Was passed main parameter '"-Dhttp.proxyHost=proxy.lan' but no main parameter was defined in your arg class.
There is an unmerged PR, but I fear the urgency of supporting a proxy for testing in corporate environments might not be felt by the Selenium developers. Maybe there is an alternative solution.
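One observation: the jcommander message contains the literal quote character ('"-Dhttp.proxyHost=proxy.lan'), which suggests the quotes in SE_OPTS reach the Java process verbatim. A guess, not a verified fix: set the variable without surrounding quotes, exactly as in the commented-out line above:

environment:
  - http_proxy=http://proxy.lan:8080
  - https_proxy=http://proxy.lan:8080
  # no embedded quotes, so the entrypoint can split the two JVM flags
  - SE_OPTS=-Dhttp.proxyHost=proxy.lan -Dhttp.proxyPort=8080

Failing that, the proxy can be configured per session from the test client through Selenium's Proxy capability, which sidesteps SE_OPTS entirely.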