I deployed a Spring Cloud Config Server on Cloud Foundry. To monitor its health in real time, I called its /health endpoint every minute. But after a few days, the config server crashed (out of memory). The memory-usage chart shows that memory consumption climbed steadily until it reached 100%. If I did not call the /health endpoint so frequently, the server ran normally. There seems to be a memory leak. Why did this happen?
This is the pom.xml of the config server Maven project:
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.5.4.RELEASE</version>
</parent>
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-config-server</artifactId>
    </dependency>
</dependencies>
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>Dalston.SR1</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
This is the only Java code in the config server project:
@EnableConfigServer
@SpringBootApplication
public class ConfigServerApplication {
    public static void main(String[] args) {
        SpringApplication.run(ConfigServerApplication.class, args);
    }
}
This is the application.properties file in the config server project:
spring.application.name=config-server
spring.cloud.config.server.git.uri=https://github.com/***/config-server-test
spring.cloud.config.server.git.username=zhu*****
spring.cloud.config.server.git.password=****
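For reference, the minute-by-minute /health polling described above was equivalent to the following minimal sketch (the URL is a placeholder for the actual Cloud Foundry route of the config server):

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class HealthPoller {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Hit the /health endpoint once per minute, as described above.
        scheduler.scheduleAtFixedRate(() -> {
            try {
                // Placeholder URL; substitute the real Cloud Foundry route.
                URL url = new URL("https://config-server.example.com/health");
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                System.out.println("/health returned HTTP " + conn.getResponseCode());
                conn.disconnect();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }, 0, 1, TimeUnit.MINUTES);
    }
}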
We are using Solace as the messaging system in our application, and while writing JUnit test classes for the listeners I would have to run Solace locally.
Instead, I tried to mock the broker (Apache ActiveMQ), using the AMQP protocol to send messages to the listeners, following this example:
https://github.com/apache/activemq/blob/activemq-5.15.x/activemq-amqp/src/test/java/org/apache/activemq/transport/amqp/AmqpTransformerTest.java
But when I try to build the Maven project I see the error:
package org.apache.activemq.transport.amqp.client does not exist
I have added the dependencies below, but I am still facing the same issue.
<dependency>
    <groupId>org.apache.activemq</groupId>
    <artifactId>activemq-broker</artifactId>
    <version>5.15.12</version>
    <!-- <scope>test</scope> -->
</dependency>
<!-- Testing Dependencies -->
<dependency>
    <groupId>org.apache.qpid</groupId>
    <artifactId>qpid-jms-client</artifactId>
    <version>0.51.0</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.apache.activemq</groupId>
    <artifactId>activemq-kahadb-store</artifactId>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.apache.activemq</groupId>
    <artifactId>activemq-jaas</artifactId>
    <version>5.15.12</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.apache.activemq</groupId>
    <artifactId>activemq-broker</artifactId>
    <version>5.15.12</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.apache.activemq</groupId>
    <artifactId>activemq-spring</artifactId>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.apache.activemq</groupId>
    <artifactId>activemq-http</artifactId>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.apache.activemq</groupId>
    <artifactId>activemq-mqtt</artifactId>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.apache.activemq</groupId>
    <artifactId>activemq-leveldb-store</artifactId>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.apache.activemq.tooling</groupId>
    <artifactId>activemq-junit</artifactId>
    <version>5.15.12</version>
    <scope>test</scope>
</dependency>
I am not able to resolve the following compilation issues:
org.apache.activemq.transport.amqp.client cannot be resolved, since the dependency providing this package is not found, even though I have added the dependencies above to the Maven project.
import org.apache.activemq.transport.amqp.client.AmqpClient;
import org.apache.activemq.transport.amqp.client.AmqpConnection;
import org.apache.activemq.transport.amqp.client.AmqpMessage;
import org.apache.activemq.transport.amqp.client.AmqpSender;
import org.apache.activemq.transport.amqp.client.AmqpSession;
Please suggest. Thank you, experts.
It's not entirely clear what your test is doing, but the classes it can't find belong to the AMQP test client implemented in the ActiveMQ 5.x AMQP module's test jar, so you definitely won't find them with the dependencies you have there.
The AMQP test client in the ActiveMQ broker is not meant for general use by anyone, as it was built specifically to test the AMQP stack in the broker. If you remove its usage from your tests you should have better luck.
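For example, the same flow can be covered with an embedded broker plus the Qpid JMS client that is already in your POM. The following is a minimal sketch, not the ActiveMQ-internal test client; it assumes the activemq-amqp module is also on the test classpath (so the broker can accept AMQP connections), that port 5672 is free, and that the queue name is a placeholder:

import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.broker.BrokerService;
import org.apache.qpid.jms.JmsConnectionFactory;

public class AmqpListenerSketch {
    public static void main(String[] args) throws Exception {
        // Embedded, non-persistent broker with an AMQP transport connector.
        BrokerService broker = new BrokerService();
        broker.setPersistent(false);
        broker.setUseJmx(false);
        broker.addConnector("amqp://localhost:5672");
        broker.start();

        // Plain JMS over AMQP via the Qpid JMS client (already a test dependency).
        Connection connection = new JmsConnectionFactory("amqp://localhost:5672").createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        // "test.queue" is a placeholder destination name.
        MessageProducer producer = session.createProducer(session.createQueue("test.queue"));
        TextMessage message = session.createTextMessage("hello");
        producer.send(message);

        connection.close();
        broker.stop();
    }
}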
Added dependency POM details:
<dependencies>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-clients_2.11</artifactId>
        <version>1.7.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-streaming-java_2.11</artifactId>
        <version>1.7.1</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-core</artifactId>
        <version>1.7.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-java</artifactId>
        <version>1.7.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-runtime_2.11</artifactId>
        <version>1.7.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-table_2.11</artifactId>
        <version>1.7.1</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-connector-kafka-0.10_2.11</artifactId>
        <version>1.7.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-connector-filesystem_2.11</artifactId>
        <version>1.7.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-hadoop-compatibility_2.11</artifactId>
        <version>1.7.1</version>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-s3-fs-hadoop</artifactId>
        <version>1.7.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-shaded-hadoop</artifactId>
        <version>1.7.1</version>
        <type>pom</type>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-aws</artifactId>
        <version>2.8.5</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs</artifactId>
        <version>2.8.5</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>2.8.5</version>
    </dependency>
    <dependency>
        <groupId>com.amazonaws</groupId>
        <artifactId>aws-java-sdk-s3</artifactId>
        <version>1.11.529</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-streaming-connectors</artifactId>
        <version>1.1.5</version>
        <type>pom</type>
    </dependency>
</dependencies>
java.lang.UnsupportedOperationException: Recoverable writers on Hadoop are only supported for HDFS and for Hadoop version 2.7 or newer
    at org.apache.flink.runtime.fs.hdfs.HadoopRecoverableWriter.<init>(HadoopRecoverableWriter.java:57)
    at org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.createRecoverableWriter(HadoopFileSystem.java:202)
    at org.apache.flink.core.fs.SafetyNetWrapperFileSystem.createRecoverableWriter(SafetyNetWrapperFileSystem.java:69)
    at org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.<init>(Buckets.java:112)
    at org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink$RowFormatBuilder.createBuckets(StreamingFileSink.java:242)
    at org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink.initializeState(StreamingFileSink.java:327)
    at org.apache.flink.streaming.util.functions.StreamingFunctionUtils.tryRestoreFunction(StreamingFunctionUtils.java:178)
    at org.apache.flink.streaming.util.functions.StreamingFunctionUtils.restoreFunctionState(StreamingFunctionUtils.java:160)
    at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.initializeState(AbstractUdfStreamOperator.java:96)
    at org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:278)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.initializeState(StreamTask.java:738)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:289)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:704)
    at java.lang.Thread.run(Thread.java:748)
Flink uses something called a ServiceLoader to load components needed to interface with pluggable File Systems. If you care to see where Flink does this in code, head over to org.apache.flink.core.fs.FileSystem. Take note of the initialize function, which makes use of the RAW_FACTORIES variable. RAW_FACTORIES is created by the function loadFileSystems, which you can see makes use of Java's ServiceLoader.
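In rough terms, the discovery mechanism works like the sketch below. This is a simplified illustration of the ServiceLoader pattern, not Flink's actual loading code, although FileSystemFactory is Flink's real SPI interface:

import java.util.ServiceLoader;

import org.apache.flink.core.fs.FileSystemFactory;

public class FileSystemDiscovery {
    public static void main(String[] args) {
        // ServiceLoader scans META-INF/services/org.apache.flink.core.fs.FileSystemFactory
        // entries on the classpath and instantiates every factory it finds there.
        ServiceLoader<FileSystemFactory> factories = ServiceLoader.load(FileSystemFactory.class);
        for (FileSystemFactory factory : factories) {
            // Each factory announces which URI scheme (s3, hdfs, file, ...) it serves.
            System.out.println(factory.getScheme() + " -> " + factory.getClass().getName());
        }
    }
}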
The file system components need to be set up before your application starts on Flink. This implies that your Flink application does not need to bundle these components; they should be provided for your application.
EMR does not provide, out of the box, the S3 file system components that Flink needs to use S3 as a streaming file sink. This exception is thrown not because the Hadoop version isn't high enough, but because Flink loaded the HadoopFileSystem in the absence of a FileSystem matching the s3 scheme.
You can check whether your file systems are loading by enabling the DEBUG logging level for your Flink application, which EMR lets you do in its configurations:
{
  "Classification": "flink-log4j",
  "Properties": {
    "log4j.rootLogger": "DEBUG,file"
  }
},
{
  "Classification": "flink-log4j-yarn-session",
  "Properties": {
    "log4j.rootLogger": "DEBUG,stdout"
  }
}
The relevant logs are available in the YARN Resource Manager, looking at the logs for an individual node. Searching for the string "Added file system" should help you locate all successfully loaded file systems.
Also handy in this investigation was to SSH into the master node and use the flink-scala REPL, where I could see which FileSystem Flink decided to load for a given file URI.
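The same check can be done from plain Java; a minimal sketch, assuming a hypothetical bucket name:

import java.net.URI;

import org.apache.flink.core.fs.FileSystem;

public class WhichFileSystem {
    public static void main(String[] args) throws Exception {
        // Asks Flink which FileSystem implementation it resolves for the s3 scheme.
        // If only HadoopFileSystem comes back, no s3-specific factory was loaded.
        FileSystem fs = FileSystem.get(new URI("s3://my-bucket/"));  // placeholder bucket
        System.out.println(fs.getClass().getName());
    }
}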
The solution is to drop the JAR for the S3 file system implementation into /usr/lib/flink/lib/ prior to starting your Flink application. This can be done with a bootstrap action that grabs the flink-s3-fs-hadoop or flink-s3-fs-presto (depending on which implementation you are using). My bootstrap action script looks something like this:
sudo mkdir -p /usr/lib/flink/lib
cd /usr/lib/flink/lib
sudo curl -L -O "https://search.maven.org/remotecontent?filepath=org/apache/flink/flink-s3-fs-hadoop/1.8.1/flink-s3-fs-hadoop-1.8.1.jar"
In order to use Flink's StreamingFileSink with exactly-once guarantees, you need to use Hadoop >= 2.7. Versions below 2.7 are not supported. Hence, please make sure that you are running an up-to-date Hadoop version on EMR.
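For context, the kind of sink that triggers the createRecoverableWriter() call seen in the stack trace looks roughly like the sketch below; the bucket and path are placeholders, and the job itself is a trivial stand-in:

import org.apache.flink.api.common.serialization.SimpleStringEncoder;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;

public class S3SinkJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Writing to an s3:// path is what makes Flink call createRecoverableWriter()
        // at startup; the bucket/path below is a placeholder.
        StreamingFileSink<String> sink = StreamingFileSink
                .forRowFormat(new Path("s3://my-bucket/output"), new SimpleStringEncoder<String>("UTF-8"))
                .build();

        env.fromElements("a", "b", "c").addSink(sink);
        env.execute("s3-sink-sketch");
    }
}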
I have an EAR created with ShrinkWrap. I'm trying to run Arquillian tests on a remote (dockerized) WildFly 12 container. WildFly is deployed properly, and the necessary ports are open and available. While trying to run the tests I get:
16:05:42,922 TRACE [listener] Invoking listener org.jboss.remoting3.remote.RemoteConnection$RemoteWriteListener@56babcc2 on channel org.xnio.conduits.ConduitStreamSinkChannel@34dcf94b
16:05:42,924 ERROR [listener] XNIO001007: A channel event listener threw an exception
java.lang.NoSuchMethodError: org.jboss.remoting3._private.Messages.tracef(Ljava/lang/String;J)V
at org.jboss.remoting3.remote.RemoteConnection$RemoteWriteListener.handleEvent(RemoteConnection.java:275)
at org.jboss.remoting3.remote.RemoteConnection$RemoteWriteListener.handleEvent(RemoteConnection.java:243)
at org.xnio.ChannelListeners.invokeChannelListener(ChannelListeners.java:92)
at org.xnio.conduits.WriteReadyHandler$ChannelListenerHandler.writeReady(WriteReadyHandler.java:65)
at org.xnio.nio.NioSocketConduit.handleReady(NioSocketConduit.java:94)
at org.xnio.nio.WorkerThread.run(WorkerThread.java:571)
and I have no clue how to deal with it.
I'm using:
<dependency>
    <groupId>org.wildfly.arquillian</groupId>
    <artifactId>wildfly-arquillian-container-remote</artifactId>
    <version>2.1.0.Final</version>
    <scope>test</scope>
</dependency>
...
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.jboss.arquillian</groupId>
            <artifactId>arquillian-bom</artifactId>
            <version>1.4.1.Final</version>
            <scope>import</scope>
            <type>pom</type>
        </dependency>
    </dependencies>
</dependencyManagement>
Any clues?
I have created a simple SeedStack web project following the guide at http://seedstack.org/docs/basics/
Undertow starts fine with seedstack:run.
However, when accessing the "hello" resource, Undertow throws the exception below:
ERROR 2018-07-25 21:37:34,468 XNIO-1 task-2 io.undertow.request
UT005023: Exception handling request to /api/seed-w20/application/configuration
null returned by binding at org.seedstack.w20.internal.W20Module.configure(W20Module.java:51)
  (via modules: com.google.inject.util.Modules$OverrideModule
   -> io.nuun.kernel.core.internal.injection.KernelGuiceModuleInternal
   -> org.seedstack.w20.internal.W20Module)
but the 3rd parameter of org.seedstack.w20.internal.FragmentManagerImpl.<init>(FragmentManagerImpl.java:32) is not @Nullable
  at org.seedstack.w20.internal.W20Module.configure(W20Module.java:51)
  (via modules: com.google.inject.util.Modules$OverrideModule
   -> io.nuun.kernel.core.internal.injection.KernelGuiceModuleInternal
   -> org.seedstack.w20.internal.W20Module)
  while locating org.seedstack.w20.internal.ConfiguredApplication
    for the 3rd parameter of org.seedstack.w20.internal.FragmentManagerImpl.<init>(FragmentManagerImpl.java:32)
  while locating org.seedstack.w20.internal.FragmentManagerImpl
  while locating org.seedstack.w20.FragmentManager
    for field at org.seedstack.w20.internal.rest.application.ApplicationConfigurationResource.fragmentManager(ApplicationConfigurationResource.java:38)
  while locating org.seedstack.w20.internal.rest.application.ApplicationConfigurationResource
Any help please?
This is a bug recently introduced into the w20-bridge, which occurs when no w20.app.json configuration file is present.
You can work around it by creating an empty-object w20.app.json file at the root of the classpath:
{}
You can also update all w20-bridge dependencies to version 3.2.4, which contains a fix. This can be done in the dependencyManagement section of your POM:
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.seedstack</groupId>
            <artifactId>seedstack-bom</artifactId>
            <version>18.4.3</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
        <dependency>
            <groupId>org.seedstack.addons.w20</groupId>
            <artifactId>w20-bridge-web</artifactId>
            <version>3.2.4</version>
        </dependency>
        <dependency>
            <groupId>org.seedstack.addons.w20</groupId>
            <artifactId>w20-bridge-web-bootstrap-3</artifactId>
            <version>3.2.4</version>
        </dependency>
        <dependency>
            <groupId>org.seedstack.addons.w20</groupId>
            <artifactId>w20-bridge-web-business-theme</artifactId>
            <version>3.2.4</version>
        </dependency>
        <dependency>
            <groupId>org.seedstack.addons.w20</groupId>
            <artifactId>w20-bridge-web-components</artifactId>
            <version>3.2.4</version>
        </dependency>
        <dependency>
            <groupId>org.seedstack.addons.w20</groupId>
            <artifactId>w20-bridge-rest</artifactId>
            <version>3.2.4</version>
        </dependency>
        <dependency>
            <groupId>org.seedstack.addons.w20</groupId>
            <artifactId>w20-bridge-specs</artifactId>
            <version>3.2.4</version>
        </dependency>
    </dependencies>
</dependencyManagement>
This fix will be included in the upcoming SeedStack 18.7.
I have a Spring Boot 1.5.12 app using the Edgware.SR3 Spring Cloud release.
The following code:
@Configuration
public class HmlConfig
{
    @Value("${jms.destination.name}")
    ...
}
...
@RestController
@RequestMapping("/api")
public class HmlRestController
{
    @Autowired
    private JmsTemplate jmsTemplate;
    ...
}
raises the following exception:
Caused by: java.lang.IllegalArgumentException: Could not resolve placeholder 'jms.destination.name' in value "${jms.destination.name}"
at org.springframework.util.PropertyPlaceholderHelper.parseStringValue(PropertyPlaceholderHelper.java:174)
at org.springframework.util.PropertyPlaceholderHelper.replacePlaceholders(PropertyPlaceholderHelper.java:126)
Here is my bootstrap.yml content:
spring:
  application:
    name: hml-core
  profiles:
    active: default
  cloud:
    config:
      uri: http://localhost:8888/hml
Going to http://localhost:8888/hml/hml-core/default correctly displays the properties. Did I miss anything?
Judging from your logs, your client can't reach the Config Server, which is why the jms.destination.name property can't be injected.
Adding the full stack trace from your client would be helpful.
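As a quick way to confirm that the failure is only a missing property (and not something else), you can give the placeholder a default value; the fallback name below is hypothetical:

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Configuration;

@Configuration
public class HmlConfigWithDefault {
    // ":fallback-queue" is a hypothetical default, used only when
    // ${jms.destination.name} cannot be resolved from any property source.
    @Value("${jms.destination.name:fallback-queue}")
    private String destinationName;
}

If the app then starts with the fallback value, the property sources (i.e. the config server) are simply not being reached.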
Do you happen to have spring.main.sources set somewhere?
We had the exact same issue and removing that line helped us. It seems that having it in bootstrap.yml prevents the app from connecting to the Cloud Config server, which results in missing properties.
Having the spring.main.sources parameter on the config server side also caused a server-side exception; see Spring cloud config /refresh crashes when spring.main.sources is set.
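If you genuinely need extra sources, one possible workaround is to register them programmatically instead of through spring.main.sources in bootstrap.yml; a hedged sketch, where the class names are hypothetical placeholders:

import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.context.annotation.Configuration;

@SpringBootApplication
public class HmlCoreApplication {

    // Hypothetical extra configuration that would otherwise be listed
    // in spring.main.sources.
    @Configuration
    static class ExtraConfig {
    }

    public static void main(String[] args) {
        // Registers the extra source in code, leaving bootstrap.yml untouched.
        new SpringApplicationBuilder(HmlCoreApplication.class)
                .sources(ExtraConfig.class)
                .run(args);
    }
}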
I'm updating this post with the results of my latest tests. It appears that including the Maven dependencies in each of the modules solves the issue, and the configuration is then found as expected.
In my original design I had a parent POM which factored out all the Spring Boot dependencies, as shown below:
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>Camden.SR5</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-actuator</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-config-server</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-config</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-config-client</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.security</groupId>
        <artifactId>spring-security-rsa</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-activemq</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
</dependencies>
With this layout the configuration client doesn't find the config server and the aforementioned exception is raised. Modifying the parent POM to include only the dependencyManagement section, as shown below:
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>Camden.SR5</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
and moving the dependencies into the config server POM as follows:
<dependencies>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-config-server</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-config</artifactId>
    </dependency>
</dependencies>
and into the config client as follows:
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-actuator</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-config</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-config-client</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.security</groupId>
        <artifactId>spring-security-rsa</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-activemq</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
</dependencies>
completely solves the problem, and everything works as expected. I don't have an explanation for this, as including all the common dependencies in the parent POM should have exactly the same effect as including them in each individual module. In my opinion this shows some instability and strange behavior in Spring Boot and Spring Cloud. I don't have time to dig deeper and try to understand what happens. In any case, given this kind of issue, as well as the lack of support, including on this site, we moved away from Spring.
But if someone has an explanation for this, I'm still interested to know.
Kind regards,
Nicolas