Let's Encrypt: SSL Certificate is valid for the Domain but not valid for a Specific Port [closed] - ssl

I am using a VPS (Amazon EC2) and Let's Encrypt as the SSL certificate provider (through Certbot).
I have seen a similar question, but the answer is not useful for my situation.
I have a domain api.example.com that is configured and fully functioning on an Ubuntu server. I used Certbot to configure the domain with HTTPS; however, I also have APIs configured to be accessed on a specific port of that domain, say 8443.
When I access api.example.com, I see the lock in the browser that says the site is secure, but whenever I try to access my API at api.example.com:8443/v1/someAPI, it returns the appropriate result, yet the connection is not shown as secure. Because the main site is secure while the API port isn't, I am unable to make API calls from it, and they fail with net::ERR_SSL_PROTOCOL_ERROR.
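One way to see what the extra port is actually serving is to probe it directly. A minimal check, assuming the same domain and API path as above:
# Show the certificate (if any) presented on port 8443
openssl s_client -connect api.example.com:8443 -servername api.example.com </dev/null
# Make the same request the browser makes and print the TLS handshake details
curl -v https://api.example.com:8443/v1/someAPI
If the handshake fails or no certificate is returned, the application on 8443 is not serving TLS with the Let's Encrypt certificate, which would match the ERR_SSL_PROTOCOL_ERROR seen in the browser.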
MyApplication.java
package com.xxx.xxxx;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.orm.jpa.HibernateJpaAutoConfiguration;

@SpringBootApplication(exclude = HibernateJpaAutoConfiguration.class)
public class MyApplication {
    public static void main(String[] args) {
        // Force UTF-8 and a default port before bootstrapping Spring
        System.setProperty("file.encoding", "UTF-8");
        System.setProperty("server.port", "8080");
        SpringApplication.run(MyApplication.class, args);
    }
}
My application.properties:
# Database
db.driver: com.mysql.cj.jdbc.Driver
db.url: jdbc:mysql://123.123.123.123:123/ex?serverTimezone=UTC&useSSL=false
db.username: xx
#db.password: xxx
db.password: xxxxxx
# Hibernate
hibernate.dialect: org.hibernate.dialect.MySQL5Dialect
hibernate.show_sql: false
hibernate.hbm2ddl.auto: validate
hibernate.format_sql = false
entitymanager.packagesToScan: com.example
# GZIP Server compression
server.compression.enabled: true
server.compression.min-response-size: 2048
server.compression.mime-types: application/json,application/xml,text/html,text/xml,text/plain
# File Path
file.path: /home/ec2-user/
file.report.path: /home/ec2-user/
jpa.repositories.enabled=false
multipart.enabled=true
multipart.max-file-size=50MB
multipart.max-request-size=50MB
spring.servlet.multipart.max-file-size=50MB
spring.servlet.multipart.max-request-size=50MB
# server base path
base.path: https://api.example.com:8443
# Origins to allow requests from
origins: *
#Error Page Configuration
server.error.whitelabel.enabled=false
spring.autoconfigure.exclude=org.springframework.boot.autoconfigure.web.ErrorMvcAutoConfiguration
reportUrl:https://example.com/report/
email=contact@example.com
emails=sales#contact#example.com
# SMTP Configuration
spring.mail.enabled=true
spring.mail.from=sales#contact#example.com
##Amazon SES SMTP config
spring.mail.host=email-smtp
spring.mail.username=fsdfskfjsldfjf
spring.mail.password=ffdfsfdsfdsfsdfdsf
spring.mail.port=123
eds.users: sales@marketsresearcher.biz
eds.host: smtp.gmail.com
eds.port: 123
eds.fromname=example
##SSL details
server.port:8443
security.require-ssl=true
server.ssl.key-store:classpath:abc.p12
server.ssl.key-store-password:abc
server.ssl.keyStoreType:PKCS12
server.ssl.keyAlias:abc
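For the key-store settings above to serve the Let's Encrypt certificate, the PKCS12 file has to be built from the certificate and key that Certbot issued. A minimal sketch of that conversion, assuming the default Certbot paths and reusing the alias and password from the properties above:
# Bundle the Let's Encrypt certificate chain and private key into a PKCS12 keystore
openssl pkcs12 -export \
  -in /etc/letsencrypt/live/api.example.com/fullchain.pem \
  -inkey /etc/letsencrypt/live/api.example.com/privkey.pem \
  -out abc.p12 -name abc -passout pass:abc
The resulting abc.p12 then has to end up on the application classpath (for example under src/main/resources) so that classpath:abc.p12 resolves, and it needs to be regenerated whenever Certbot renews the certificate.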
I have also added a rule for port 8443 in the AWS EC2 instance security group.
I am getting an error in the server log:
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.4.1)
2021-08-28 15:47:04.463 INFO 4513 --- [ main] c.a.MarketResearcher.ApplicationWar : Starting ApplicationWar v0.0.1-SNAPSHOT using Java 1.8.0_302 on ip-172-31-17-203.ap-south-1.compute.internal with PID 4513 (/home/ec2-user/MarketResearcher-0.0.1-SNAPSHOT.jar started by root in /home/ec2-user)
2021-08-28 15:47:04.467 INFO 4513 --- [ main] c.a.MarketResearcher.ApplicationWar : The following profiles are active: prod
2021-08-28 15:47:06.924 INFO 4513 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.ws.config.annotation.DelegatingWsConfiguration' of type [org.springframework.ws.config.annotation.DelegatingWsConfiguration$$EnhancerBySpringCGLIB$$b39d77f] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2021-08-28 15:47:07.008 INFO 4513 --- [ main] .w.s.a.s.AnnotationActionEndpointMapping : Supporting [WS-Addressing August 2004, WS-Addressing 1.0]
2021-08-28 15:47:07.705 INFO 4513 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8443 (https)
2021-08-28 15:47:07.729 INFO 4513 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2021-08-28 15:47:07.730 INFO 4513 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.41]
2021-08-28 15:47:07.852 INFO 4513 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2021-08-28 15:47:07.852 INFO 4513 --- [ main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 3164 ms
2021-08-28 15:47:08.432 INFO 4513 --- [ main] org.hibernate.Version : HHH000412: Hibernate ORM core version 5.4.25.Final
2021-08-28 15:47:08.894 INFO 4513 --- [ main] o.hibernate.annotations.common.Version : HCANN000001: Hibernate Commons Annotations {5.1.2.Final}
2021-08-28 15:47:09.462 INFO 4513 --- [ main] org.hibernate.dialect.Dialect : HHH000400: Using dialect: org.hibernate.dialect.MySQL5Dialect
2021-08-28 15:47:09.613 INFO 4513 --- [ main] o.h.e.boot.internal.EnversServiceImpl : Envers integration enabled? : true
2021-08-28 15:47:12.758 INFO 4513 --- [ main] o.h.e.t.j.p.i.JtaPlatformInitiator : HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform]
2021-08-28 15:47:13.322 INFO 4513 --- [ main] o.s.s.concurrent.ThreadPoolTaskExecutor : Initializing ExecutorService 'applicationTaskExecutor'
2021-08-28 15:47:15.192 INFO 4513 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8443 (https) with context path ''
2021-08-28 15:47:15.223 INFO 4513 --- [ main] c.a.MarketResearcher.ApplicationWar : Started ApplicationWar in 11.771 seconds (JVM running for 12.677)
2021-08-28 15:52:41.387 INFO 4513 --- [nio-8443-exec-6] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring DispatcherServlet 'dispatcherServlet'
2021-08-28 15:52:41.388 INFO 4513 --- [nio-8443-exec-6] o.s.web.servlet.DispatcherServlet : Initializing Servlet 'dispatcherServlet'
2021-08-28 15:52:41.390 INFO 4513 --- [nio-8443-exec-6] o.s.web.servlet.DispatcherServlet : Completed initialization in 2 ms
2021-08-28 16:06:33.275 WARN 4513 --- [nio-8443-exec-4] org.hibernate.orm.deprecation : HHH90000022: Hibernate's legacy org.hibernate.Criteria API is deprecated; use the JPA javax.persistence.criteria.CriteriaQuery instead
2021-08-28 16:06:33.391 WARN 4513 --- [nio-8443-exec-4] org.hibernate.orm.deprecation : HHH90000022: Hibernate's legacy org.hibernate.Criteria API is deprecated; use the JPA javax.persistence.criteria.CriteriaQuery instead
2021-08-28 16:06:33.683 WARN 4513 --- [nio-8443-exec-4] org.hibernate.orm.deprecation : HHH90000022: Hibernate's legacy org.hibernate.Criteria API is deprecated; use the JPA javax.persistence.criteria.CriteriaQuery instead
2021-08-28 16:06:33.738 WARN 4513 --- [nio-8443-exec-1] org.hibernate.orm.deprecation : HHH90000022: Hibernate's legacy org.hibernate.Criteria API is deprecated; use the JPA javax.persistence.criteria.CriteriaQuery instead
2021-08-28 16:06:33.739 WARN 4513 --- [nio-8443-exec-3] org.hibernate.orm.deprecation : HHH90000022: Hibernate's legacy org.hibernate.Criteria API is deprecated; use the JPA javax.persistence.criteria.CriteriaQuery instead
2021-08-28 16:06:33.747 WARN 4513 --- [io-8443-exec-10] org.hibernate.orm.deprecation : HHH90000022: Hibernate's legacy org.hibernate.Criteria API is deprecated; use the JPA javax.persistence.criteria.CriteriaQuery instead
2021-08-28 16:06:33.845 WARN 4513 --- [nio-8443-exec-5] org.hibernate.orm.deprecation : HHH90000022: Hibernate's legacy org.hibernate.Criteria API is deprecated; use the JPA javax.persistence.criteria.CriteriaQuery instead
2021-08-28 16:06:33.866 WARN 4513 --- [nio-8443-exec-2] org.hibernate.orm.deprecation : HHH90000022: Hibernate's legacy org.hibernate.Criteria API is deprecated; use the JPA javax.persistence.criteria.CriteriaQuery instead
2021-08-28 16:06:34.021 WARN 4513 --- [nio-8443-exec-7] org.hibernate.orm.deprecation : HHH90000022: Hibernate's legacy org.hibernate.Criteria API is deprecated; use the JPA javax.persistence.criteria.CriteriaQuery instead
2021-08-29 19:08:38.141 INFO 4513 --- [nio-8443-exec-5] o.apache.coyote.http11.Http11Processor : Error parsing HTTP request header
Note: further occurrences of HTTP request parsing errors will be logged at DEBUG level.
java.lang.IllegalArgumentException: Invalid character found in the HTTP protocol [RTSP/1.00x0d0x0a0x0d...]
at org.apache.coyote.http11.Http11InputBuffer.parseRequestLine(Http11InputBuffer.java:559) ~[tomcat-embed-core-9.0.41.jar!/:9.0.41]
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:261) ~[tomcat-embed-core-9.0.41.jar!/:9.0.41]
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65) [tomcat-embed-core-9.0.41.jar!/:9.0.41]
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:888) [tomcat-embed-core-9.0.41.jar!/:9.0.41]
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1597) [tomcat-embed-core-9.0.41.jar!/:9.0.41]
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) [tomcat-embed-core-9.0.41.jar!/:9.0.41]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_302]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_302]
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) [tomcat-embed-core-9.0.41.jar!/:9.0.41]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_302]

There is no concept of a separate SSL setup per port. Whenever the user enters https://api.example.com, the browser translates it to api.example.com:443 and sends the request to the server.
Similarly, when you hit http://api.example.com, it is translated to api.example.com:80. The default port for HTTPS is 443, and the default port for HTTP is 80.
In your case, using port 8443 explicitly means the HTTPS configuration on 443 does not apply to it, so SSL won't work there out of the box. You can probably host everything on 443 and route the API internally.
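As an illustration of "route the API internally", one option is to terminate TLS on 443 in Apache and proxy the API path to the Spring Boot application over plain HTTP. This is only a sketch: it assumes the app is switched back to plain HTTP (for example on port 8080), that mod_ssl and mod_proxy_http are enabled, and that the certificate paths are the Certbot defaults:
<VirtualHost *:443>
    ServerName api.example.com
    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/api.example.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/api.example.com/privkey.pem
    # Forward API calls to the Spring Boot application listening locally on plain HTTP
    ProxyPreserveHost On
    ProxyPass /v1/ http://127.0.0.1:8080/v1/
    ProxyPassReverse /v1/ http://127.0.0.1:8080/v1/
</VirtualHost>
With this setup the browser only ever talks to https://api.example.com/v1/..., so no extra port or second certificate configuration is needed.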

Create a virtual host for port 8443:
<VirtualHost *:8443>
    ServerName example.com
    ServerAlias www.example.com
    DocumentRoot /xxx/xxx/xxx
    <Directory /xxx/xxx/xxx>
        Require all granted
    </Directory>
    SSLEngine on
    SSLCertificateFile "/xxx/xxx/xxx/xxx.pem"
    SSLCertificateKeyFile "/xxx/xxx/xxx/xxx.pem"
    SSLCertificateChainFile "/xxx/xxx/xxx/xxx.pem"
</VirtualHost>
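For Apache to accept connections on that non-standard port at all, it also needs a matching Listen directive (typically in ports.conf); a one-line sketch, assuming the same port as above:
# Tell Apache to listen for HTTPS on the extra port
Listen 8443 https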

Related

spring cloud config server cannot bind to gitlab

I am using Spring with springCloudVersion "2021.0.3" to set up a Config Server that uses my GitLab account for its files. I created a deploy token with the permissions:
read_repository, read_registry, write_registry, read_package_registry, write_package_registry
and then I use those in the spring server application.properties:
spring.application.name=config-server
spring.application.version=0.1.0
server.port=8012
spring.cloud.config.server.git.uri=https://gitlab.com/[account]/[repo]
spring.cloud.config.server.git.skip-ssl-validation=true
spring.cloud.config.server.git.clone-on-start=true
spring.cloud.config.server.git.default-label=main
spring.cloud.config.server.git.basedir=https://gitlab.com/[account]/[repo]
spring.cloud.config.server.git.username=[my-token-username]
spring.cloud.config.server.git.password=[my-token-password]
I was getting errors on startup about not being able to bind the base directory until I put the same URI there as in spring.cloud.config.server.git.uri (they are now identical, but it feels wrong).
When I try to startup my ConfigServerApplication I get the following:
ConfigServletWebServerApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'defaultEnvironmentRepository' defined in class path resource [org/springframework/cloud/config/server/config/DefaultRepositoryConfiguration.class]: Unsatisfied dependency expressed through method 'defaultEnvironmentRepository' parameter 1; nested exception is org.springframework.boot.context.properties.ConfigurationPropertiesBindException: Error creating bean with name 'multipleJGitEnvironmentProperties': Could not bind properties to 'MultipleJGitEnvironmentProperties' : prefix=spring.cloud.config.server.git, ignoreInvalidFields=false, ignoreUnknownFields=true; nested exception is org.springframework.boot.context.properties.bind.BindException: Failed to bind properties under 'spring.cloud.config.server.git.basedir' to java.io.File
2022-06-30 12:04:53.020 INFO 1594 --- [ main] o.apache.catalina.core.StandardService : Stopping service [Tomcat]
2022-06-30 12:04:53.026 INFO 1594 --- [ main] ConditionEvaluationReportLoggingListener :
Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
2022-06-30 12:04:53.033 ERROR 1594 --- [ main] o.s.b.d.LoggingFailureAnalysisReporter :
***************************
APPLICATION FAILED TO START
***************************
Description:
Failed to bind properties under 'spring.cloud.config.server.git.basedir' to java.io.File:
Property: spring.cloud.config.server.git.basedir
Value: https://gitlab.com/[account]/[repo]
Origin: class path resource [application.properties] - 9:40
Reason: failed to convert java.lang.String to java.io.File (caused by java.lang.IllegalStateException: Could not retrieve file for URL [https://gitlab.com/[account]/[repo]]: URL [https://gitlab.com/[account]/[repo]] cannot be resolved to absolute file path because it does not reside in the file system: https://gitlab.com/[account]/[repo]
I don't know what file it is trying to access - in that repo I have
README.md
application.properties
and I would think the properties file is what it is trying to access?
I have tried using both my account credentials and the deploy token, with the same results. I did not try the deploy key because this will not be accessed locally, but via the cloud.
Finally tracked it down - I had to add:
spring.profiles.active=native
and remove
spring.cloud.config.server.git.basedir
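For reference, a hypothetical sketch of what the resulting configuration could look like: with the native profile the Config Server serves property files from the local classpath or filesystem instead of cloning a Git repository, and the optional search-locations property (shown with a placeholder path) controls where it looks.
spring.application.name=config-server
server.port=8012
# Serve configuration from local files instead of cloning a Git repository
spring.profiles.active=native
# Optional: where native mode looks for property files (placeholder path, adjust as needed)
spring.cloud.config.server.native.search-locations=file:///path/to/config-files
This also matches the original error: basedir is bound to a java.io.File, so it has to be a local directory, not an https URL.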

Intellij Timezone does not correspond to my machine/country time

My IntelliJ timezone is not the same as my machine/country time. When I run my Spring Boot application, the run console shows an earlier hour than my machine time, and another problem is that when I persist to MySQL, a field of type java.util.Date saves the time shown in IntelliJ, not my local machine's time.
2022-03-19 23:53:35.636 INFO 14604 --- [ main] c.c.c.ConfigServicePropertySourceLocator : Fetching config from server at : http://localhost:8888
whereas it should be GMT+01:00:
2022-03-19 00:53:35.636 INFO 14604 --- [ main] c.c.c.ConfigServicePropertySourceLocator : Fetching config from server at : http://localhost:8888

o.s.cloud.stream.binding.Binding Service : Failed to create producer binding

I am trying to build a Spring Config client in a clustered environment; to achieve this I am using Kafka to connect the clients. My client project works with a local Kafka server, but when I try to connect to my remote Kafka server the application does not start up and throws the errors below:
2019-01-09 00:06:00.364 INFO 17892 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka version : 2.0.1
2019-01-09 00:06:00.364 INFO 17892 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka commitId : fa14705e51bd2ce5
2019-01-09 00:08:00.369 INFO 17892 --- [thread | client] o.a.k.c.a.i.AdminMetadataManager : [AdminClient clientId=client] Metadata update failed
org.apache.kafka.common.errors.TimeoutException: Timed out waiting to send the call.
2019-01-09 00:10:00.366 INFO 17892 --- [thread | client] o.a.k.c.a.i.AdminMetadataManager : [AdminClient clientId=client] Metadata update failed
org.apache.kafka.common.errors.TimeoutException: Timed out waiting to send the call.
2019-01-09 00:10:00.371 ERROR 17892 --- [ main] o.s.cloud.stream.binding.BindingService : Failed to create producer binding; retrying in 30 seconds
org.springframework.cloud.stream.provisioning.ProvisioningException: Provisioning exception; nested exception is java.util.concurrent.TimeoutException
In pom.xml:
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-bus-kafka</artifactId>
<version>2.0.0.RELEASE</version>
</dependency>
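For context, with spring-cloud-starter-bus-kafka the remote broker address is supplied through the Kafka binder properties; a minimal sketch with a placeholder host is shown below. A common cause of the timeout above is that the broker's advertised.listeners setting points at an address (for example localhost) that is not reachable from the client machine.
# application.properties of the client: point the Kafka binder at the remote broker (placeholder host/port)
spring.cloud.stream.kafka.binder.brokers=remote-kafka-host:9092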

Spring Cloud Config (Server) Monitor does not send events to clients via RabbitMQ

I'm using Spring Boot / 2.1.0.RELEASE and Spring Cloud Dependencies / Greenwich.M3; consider the following projects, with the following dependencies:
https://github.com/dnijssen/configuration
https://github.com/dnijssen/configurationclient, dependencies:
spring-boot-starter-actuator
spring-boot-starter-web
spring-cloud-starter-bus-amqp
spring-cloud-starter-config
https://github.com/dnijssen/configurationserver, dependencies:
spring-boot-starter-actuator
spring-cloud-config-monitor
spring-cloud-starter-stream-rabbit
spring-cloud-config-server
So my configurationserver fetches the properties for the configurationclient via a Git repository (namely https://github.com/dnijssen/configuration). When a change is made, I trigger the /monitor endpoint on my configurationserver manually (just for testing), which is supposed to send an event via RabbitMQ, which I started with the following Docker command:
docker run -d --hostname my-rabbit --name some-rabbit -p 5672:5672 -p 15672:15672 rabbitmq:3-management
This gives me the following response: ["*"]
With the following console output on my configurationserver:
2018-11-22 09:19:48.527 INFO 19316 --- [nio-8888-exec-6] o.s.c.c.monitor.PropertyPathEndpoint : Refresh for: *
2018-11-22 09:19:48.543 INFO 19316 --- [nio-8888-exec-6] o.s.a.r.c.CachingConnectionFactory : Attempting to connect to: [localhost:5672]
2018-11-22 09:19:48.550 INFO 19316 --- [nio-8888-exec-6] o.s.a.r.c.CachingConnectionFactory : Created new connection: rabbitConnectionFactory.publisher#79a8318:0/SimpleConnection#34a9bacb [delegate=amqp://guest#127.0.0.1:5672/, localPort= 55205]
2018-11-22 09:19:48.553 INFO 19316 --- [nio-8888-exec-6] o.s.amqp.rabbit.core.RabbitAdmin : Auto-declaring a non-durable, auto-delete, or exclusive Queue (springCloudBusInput.anonymous.cT0DOhixRX6H82A1zBDF9g) durable:false, auto-delete:true, exclusive:true. It will be redeclared if the broker stops and is restarted while the connection factory is alive, but all messages will be lost.
2018-11-22 09:19:48.880 INFO 19316 --- [nio-8888-exec-6] trationDelegate$BeanPostProcessorChecker : Bean 'configurationPropertiesRebinderAutoConfiguration' of type [org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration$$EnhancerBySpringCGLIB$$135b8961] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2018-11-22 09:19:49.162 INFO 19316 --- [nio-8888-exec-6] o.s.boot.SpringApplication : No active profile set, falling back to default profiles: default
2018-11-22 09:19:49.165 INFO 19316 --- [nio-8888-exec-6] o.s.boot.SpringApplication : Started application in 0.6 seconds (JVM running for 29.712)
2018-11-22 09:19:49.228 INFO 19316 --- [nio-8888-exec-6] o.s.cloud.bus.event.RefreshListener : Received remote refresh request. Keys refreshed []
However my configurationclient does not seem to receive any event, and is thus not refreshing itself.
What am I missing here?
Kind regards,
Dennis Nijssen

SCDF yarn connection error

I have deployed Spring Cloud Data Flow on a virtual YARN cluster. Starting the server with ./bin/dataflow-server-yarn executes correctly and returns:
2016-11-02 10:31:59.786 INFO 42493 --- [ main] o.s.i.endpoint.EventDrivenConsumer : Adding {logging-channel-adapter:_org.springframework.integration.errorLogger} as a subscriber to the 'errorChannel' channel
2016-11-02 10:31:59.787 INFO 42493 --- [ main] o.s.i.channel.PublishSubscribeChannel : Channel 'spring-cloud-dataflow-server-yarn:9393.errorChannel' has 1 subscriber(s).
2016-11-02 10:31:59.787 INFO 42493 --- [ main] o.s.i.endpoint.EventDrivenConsumer : started _org.springframework.integration.errorLogger
2016-11-02 10:31:59.896 INFO 42493 --- [ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 9393 (http)
2016-11-02 10:31:59.901 INFO 42493 --- [ main] o.s.c.d.server.yarn.YarnDataFlowServer : Started YarnDataFlowServer in 16.026 seconds (JVM running for 16.485)
I can then start ./bin/dataflow-shell; from there I can import apps and create and list streams without errors. However, if I try to deploy the created stream, the following connection error happens:
2016-11-02 10:52:58.275 INFO 42788 --- [nio-9393-exec-8] o.s.c.deployer.spi.yarn.YarnAppDeployer : Deploy request for org.springframework.cloud.deployer.spi.core.AppDeploymentRequest#23d59aea
2016-11-02 10:52:58.275 INFO 42788 --- [nio-9393-exec-8] o.s.c.deployer.spi.yarn.YarnAppDeployer : Deploying request for definition [AppDefinition#3350878c name = 'time', properties = map['spring.cloud.stream.bindings.output.producer.requiredGroups' -> 'ticktock', 'spring.cloud.stream.bindings.output.destination' -> 'ticktock.time']]
2016-11-02 10:52:58.275 INFO 42788 --- [nio-9393-exec-8] o.s.c.deployer.spi.yarn.YarnAppDeployer : Parameters for definition {spring.cloud.stream.bindings.output.producer.requiredGroups=ticktock, spring.cloud.stream.bindings.output.destination=ticktock.time}
2016-11-02 10:52:58.275 INFO 42788 --- [nio-9393-exec-8] o.s.c.deployer.spi.yarn.YarnAppDeployer : Deployment properties for request {spring.cloud.deployer.group=ticktock}
2016-11-02 10:52:58.276 INFO 42788 --- [rTaskExecutor-1] o.s.s.support.LifecycleObjectSupport : started org.springframework.statemachine.support.DefaultStateMachineExecutor#6d68eeb7
2016-11-02 10:52:58.276 INFO 42788 --- [rTaskExecutor-1] o.s.s.support.LifecycleObjectSupport : started RESOLVEINSTANCE WAITINSTANCE STARTCLUSTER CREATECLUSTER PUSHARTIFACT STARTINSTANCE CHECKINSTANCE PUSHAPP CHECKAPP WAITCHOICE STARTINSTANCECHOICE PUSHAPPCHOICE / / uuid=d7e5224f-c5f0-47c9-b2c2-066b117cc786 / id=null
2016-11-02 10:52:58.276 INFO 42788 --- [rTaskExecutor-1] o.s.c.d.s.y.DefaultYarnCloudAppService : Cachekey STREAMnull found YarnCloudAppServiceApplication org.springframework.cloud.deployer.spi.yarn.YarnCloudAppServiceApplication#79158163
2016-11-02 10:52:58.280 INFO 42788 --- [rTaskExecutor-1] o.s.c.d.s.y.DefaultYarnCloudAppService : Cachekey STREAMnull found YarnCloudAppServiceApplication org.springframework.cloud.deployer.spi.yarn.YarnCloudAppServiceApplication#79158163
2016-11-02 10:52:59.281 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-11-02 10:53:00.282 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-11-02 10:53:01.283 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-11-02 10:53:02.283 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-11-02 10:53:03.284 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-11-02 10:53:04.285 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-11-02 10:53:05.286 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-11-02 10:53:06.287 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-11-02 10:53:07.288 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-11-02 10:53:08.289 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-11-02 10:53:08.290 INFO 42788 --- [rTaskExecutor-1] o.s.c.d.s.y.DefaultYarnCloudAppService : Cachekey STREAMapp--spring.yarn.appName=scdstream:app:ticktock,--spring.yarn.client.launchcontext.arguments.--spring.cloud.deployer.yarn.appmaster.artifact=/dataflow//artifacts/cache/ found YarnCloudAppServiceApplication org.springframework.cloud.deployer.spi.yarn.YarnCloudAppServiceApplication#7d3bc3f0
2016-11-02 10:53:09.294 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-11-02 10:53:10.295 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-11-02 10:53:11.296 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-11-02 10:53:12.297 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-11-02 10:53:13.297 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-11-02 10:53:14.298 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-11-02 10:53:15.299 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-11-02 10:53:16.300 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-11-02 10:53:17.301 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-11-02 10:53:18.302 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.ipc.Client : Retrying connect to server: localhost/192.168.137.135:8032. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-11-02 10:53:18.361 INFO 42788 --- [rTaskExecutor-1] org.apache.hadoop.fs.TrashPolicyDefault : Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.
2016-11-02 10:53:18.747 INFO 42788 --- [rTaskExecutor-1] o.s.s.support.LifecycleObjectSupport : stopped org.springframework.statemachine.support.DefaultStateMachineExecutor#6d68eeb7
2016-11-02 10:53:18.747 INFO 42788 --- [rTaskExecutor-1] o.s.s.support.LifecycleObjectSupport : stopped RESOLVEINSTANCE WAITINSTANCE STARTCLUSTER CREATECLUSTER PUSHARTIFACT STARTINSTANCE CHECKINSTANCE PUSHAPP CHECKAPP WAITCHOICE STARTINSTANCECHOICE PUSHAPPCHOICE / / uuid=d7e5224f-c5f0-47c9-b2c2-066b117cc786 / id=null
2016-11-02 10:53:18.747 ERROR 42788 --- [rTaskExecutor-1] o.s.c.d.s.y.AbstractDeployerStateMachine : Passing through error state DefaultStateContext [stage=STATE_ENTRY, message=GenericMessage [payload=DEPLOY, headers={artifact=org.springframework.cloud.stream.app:time-source-kafka:jar:1.0.0.BUILD-SNAPSHOT, appVersion=app, groupId=ticktock, definitionParameters={spring.cloud.stream.bindings.output.producer.requiredGroups=ticktock, spring.cloud.stream.bindings.output.destination=ticktock.time}, count=1, clusterId=ticktock:time, id=f0f7b54e-e59d-69c0-a405-f0907fa46343, contextRunArgs=[--spring.yarn.appName=scdstream:app:ticktock, --spring.yarn.client.launchcontext.arguments.--spring.cloud.deployer.yarn.appmaster.artifact=/dataflow//artifacts/cache/], artifactDir=/dataflow//artifacts/cache/, timestamp=1478083978275}], messageHeaders={artifact=org.springframework.cloud.stream.app:time-source-kafka:jar:1.0.0.BUILD-SNAPSHOT, appVersion=app, groupId=ticktock, definitionParameters={spring.cloud.stream.bindings.output.producer.requiredGroups=ticktock, spring.cloud.stream.bindings.output.destination=ticktock.time}, count=1, clusterId=ticktock:time, id=f0f7b54e-e59d-69c0-a405-f0907fa46343, contextRunArgs=[--spring.yarn.appName=scdstream:app:ticktock, --spring.yarn.client.launchcontext.arguments.--spring.cloud.deployer.yarn.appmaster.artifact=/dataflow//artifacts/cache/], artifactDir=/dataflow//artifacts/cache/, timestamp=1478083978275}, extendedState=DefaultExtendedState [variables={artifact=org.springframework.cloud.stream.app:time-source-kafka:jar:1.0.0.BUILD-SNAPSHOT, appVersion=app, appname=scdstream:app:ticktock, definitionParameters={spring.cloud.stream.bindings.output.producer.requiredGroups=ticktock, spring.cloud.stream.bindings.output.destination=ticktock.time}, count=1, messageId=f0f7b54e-e59d-69c0-a405-f0907fa46343, clusterId=ticktock:time, error=org.springframework.yarn.YarnSystemException: Call From master/127.0.1.1 to localhost:8032 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused; nested exception is java.net.ConnectException: Call From master/127.0.1.1 to localhost:8032 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused}], transition=org.springframework.statemachine.transition.DefaultExternalTransition#77c2bbfc, stateMachine=UNDEPLOYMODULE DESTROYCLUSTER STOPCLUSTER DEPLOYMODULE RESOLVEINSTANCE WAITINSTANCE STARTCLUSTER CREATECLUSTER PUSHARTIFACT STARTINSTANCE CHECKINSTANCE PUSHAPP CHECKAPP WAITCHOICE STARTINSTANCECHOICE PUSHAPPCHOICE ERROR READY ERROR_JUNCTION UNDEPLOYEXIT DEPLOYEXIT / ERROR / uuid=436b73fe-991a-4c22-a418-334f895d41e5 / id=null, source=null, target=null, sources=null, targets=null, exception=null]
2016-11-02 10:53:18.748 WARN 42788 --- [nio-9393-exec-8] o.s.c.d.s.c.StreamDeploymentController : Exception when deploying the app StreamAppDefinition [streamName=ticktock, name=time, registeredAppName=time, properties={spring.cloud.stream.bindings.output.producer.requiredGroups=ticktock, spring.cloud.stream.bindings.output.destination=ticktock.time}]: java.util.concurrent.ExecutionException: org.springframework.yarn.YarnSystemException: Call From master/127.0.1.1 to localhost:8032 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused; nested exception is java.net.ConnectException: Call From master/127.0.1.1 to localhost:8032 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
Changing the IP address to localhost yielded the same results.
Here is my server.yml:
logging:
  level:
    org.apache.hadoop: INFO
    org.springframework.yarn: INFO

maven:
  remoteRepositories:
    springRepo:
      url: https://repo.spring.io/libs-snapshot

spring:
  main:
    show_banner: false
  # Configured for Hadoop single-node running on localhost. Replace with property values reflecting your
  # actual Hadoop cluster when running in a distributed environment.
  hadoop:
    fsUri: hdfs://192.168.137.135:8020
    resourceManagerHost: 192.168.137.135
    resourceManagerPort: 8032
    resourceManagerSchedulerAddress: 192.168.137.135:8030
  # Configured for Redis running on localhost. Replace at least host property when running in a
  # distributed environment.
  redis:
    port: 6379
    host: 192.168.137.135
  # Configured for an embedded in-memory H2 database. Replace the datasource configuration with properties
  # matching your preferred database to be used instead, if needed, or when running in a distributed environment.
  #rabbitmq:
  #  addresses: localhost:5672
  # for default embedded database
  datasource:
    url: jdbc:h2:tcp://localhost:19092/mem:dataflow
    username: sa
    password:
    driverClassName: org.h2.Driver
  # # for mysql/mariadb datasource
  # datasource:
  #   url: jdbc:mysql://localhost:3306/yourDB
  #   username: yourUsername
  #   password: yourPassword
  #   driverClassName: org.mariadb.jdbc.Driver
  # # for postgresql datasource
  # datasource:
  #   url: jdbc:postgresql://localhost:5432/yourDB
  #   username: yourUsername
  #   password: yourPassword
  #   driverClassName: org.postgresql.Driver
  cloud:
    stream:
      kafka:
        binder:
          brokers: localhost:9093
          zkNodes: localhost:2181
    config:
      enabled: false
      server:
        bootstrap: true
    deployer:
      yarn:
        app:
          baseDir: /dataflow
          streamappmaster:
            memory: 512m
            virtualCores: 1
            javaOpts: "-Xms512m -Xmx512m"
          streamcontainer:
            priority: 5
            memory: 256m
            virtualCores: 1
            javaOpts: "-Xms64m -Xmx256m"
          taskappmaster:
            memory: 512m
            virtualCores: 1
            javaOpts: "-Xms512m -Xmx512m"
          taskcontainer:
            priority: 10
            memory: 256m
            virtualCores: 1
            javaOpts: "-Xms64m -Xmx256m"
  # yarn:
  #   hostdiscovery:
  #     pointToPoint: false
  #     loopback: false
  #     preferInterface: ['eth', 'en']
  #     matchIpv4: 192.168.0.0/24
  #     matchInterface: eth\\d*
I found a workaround; it seems the port was the problem. I changed the port number in yarn-site.xml and made the matching change in the server.yml above, and voila!
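For reference, the ResourceManager address that the deployer keeps retrying above (port 8032) is controlled by yarn-site.xml. A minimal sketch of the relevant entry, using the host and port from the YAML above as placeholders; adjust to whatever port you settled on:
<!-- yarn-site.xml: RPC address that YARN clients use to reach the ResourceManager -->
<property>
  <name>yarn.resourcemanager.address</name>
  <value>192.168.137.135:8032</value>
</property>
Whatever value is used here has to match the resourceManagerHost and resourceManagerPort in the SCDF server configuration, otherwise the deployer ends up retrying against the wrong address, as in the log.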