Hazelcast: No DataSerializerFactory registered for namespace: 0 on standalone process

I am trying to set up a Hazelcast cluster with TCP/IP enabled on a standalone process.
My class looks like this:
public class Person implements Serializable {
    private static final long serialVersionUID = 1L;
    int personId;
    String name;
    Person() {}
    // getters and setters
}
Hazelcast is loaded as follows:
final Config config = createNewConfig(mapName);
HazelcastInstance node = Hazelcast.newHazelcastInstance(config);

Config createNewConfig(String mapName) {
    final PersonStore personStore = new PersonStore();
    XmlConfigBuilder configBuilder = new XmlConfigBuilder();
    Config config = configBuilder.build();
    config.setClassLoader(LoadAll.class.getClassLoader());
    MapConfig mapConfig = config.getMapConfig(mapName);
    MapStoreConfig mapStoreConfig = new MapStoreConfig();
    mapStoreConfig.setImplementation(personStore);
    return config;
}
and my hazelcast config has this:
<tcp-ip enabled="true">
    <member>machine-1</member>
    <member>machine-2</member>
</tcp-ip>
Do I need to populate this tag in my xml?
I get this error when a second instance is brought up:
com.hazelcast.nio.serialization.HazelcastSerializationException: No DataSerializerFactory registered for namespace: 0
at com.hazelcast.nio.serialization.DataSerializer.read(DataSerializer.java:98)
at com.hazelcast.nio.serialization.DataSerializer.read(DataSerializer.java:39)
at com.hazelcast.nio.serialization.StreamSerializerAdapter.read(StreamSerializerAdapter.java:41)
at com.hazelcast.nio.serialization.SerializationServiceImpl.toObject(SerializationServiceImpl.java:276)
Any help is highly appreciated.

Solved my problem: my pom.xml only declared hazelcast-wm, so the actual hazelcast jar was not in my bundled jar. Including it fixed my issue.
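For anyone hitting the same issue with Maven, the fix is to declare the core artifact explicitly alongside hazelcast-wm. A sketch (the version shown is illustrative; align it with your hazelcast-wm version):

```xml
<!-- hazelcast-wm alone does not bundle the core Hazelcast classes,
     so the core artifact must be declared explicitly.
     The version below is illustrative. -->
<dependency>
    <groupId>com.hazelcast</groupId>
    <artifactId>hazelcast</artifactId>
    <version>3.2</version>
</dependency>
```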

Note that this same "No DataSerializerFactory registered for namespace: 0" error message can also occur in an OSGi environment when you're attempting to use more than one Hazelcast instance within the same VM, but initializing the instances from different bundles. The reason being that the com.hazelcast.util.ServiceLoader.findHighestReachableClassLoader() method will sometimes pick the wrong class loader during Hazelcast initialization (as it won't always pick the class loader you set on the config), and then it ends up with an empty list of DataSerializerFactory instances (hence causing the error message that it can't find the requested factory with id 0). The following shows a way to work around that problem by taking advantage of Java's context class loader:
private HazelcastInstance createHazelcastInstance() {
    // Use the following if you're only using the Hazelcast data serializers
    final ClassLoader classLoader = Hazelcast.class.getClassLoader();
    // Use the following if you have custom data serializers that you need
    // final ClassLoader classLoader = this.getClass().getClassLoader();
    final com.hazelcast.config.Config config = new com.hazelcast.config.Config();
    config.setClassLoader(classLoader);
    final ClassLoader previousContextClassLoader = Thread.currentThread().getContextClassLoader();
    try {
        Thread.currentThread().setContextClassLoader(classLoader);
        return Hazelcast.newHazelcastInstance(config);
    } finally {
        if (previousContextClassLoader != null) {
            Thread.currentThread().setContextClassLoader(previousContextClassLoader);
        }
    }
}
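The save/restore dance around the context class loader is generic, so it can be factored into a small reusable helper. A minimal stdlib-only sketch (the class and method names here are my own, not part of Hazelcast):

```java
import java.util.concurrent.Callable;

// Illustrative helper (not part of Hazelcast): runs a task with a given
// context class loader and restores the previous one afterwards, mirroring
// the try/finally pattern in createHazelcastInstance() above.
public class ContextClassLoaderRunner {

    public static <T> T callWith(ClassLoader classLoader, Callable<T> task) throws Exception {
        Thread current = Thread.currentThread();
        ClassLoader previous = current.getContextClassLoader();
        current.setContextClassLoader(classLoader);
        try {
            return task.call();
        } finally {
            // Restore unconditionally so the calling thread is left untouched.
            current.setContextClassLoader(previous);
        }
    }
}
```

With such a helper, the workaround reduces to something like `callWith(Hazelcast.class.getClassLoader(), () -> Hazelcast.newHazelcastInstance(config))`.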

Related

Pivotal GemFire cannot see cached data in Gfsh or Pulse

I have created a Spring Boot application with a Geode/GemFire cache. I would like to connect my Spring Boot application to a Gfsh-created Region. In my application-context.xml I use gfe:lookup-region with the id of the Gfsh-created Region.
In my Java configuration file I use a LookupRegionFactoryBean to get a reference to the externally defined Region.
In my SpringBootApplication bootstrap class I write to my Repository successfully, as I can read back all the objects that I saved. But using the Gfsh tool or the Pulse tool I cannot see my cached data records (or their count).
Could you provide some insight here? Also, I tried using a LocalRegionFactoryBean in my configuration file, but that approach did not work either.
Thanks.
application-context.xml:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:gfe-data="http://www.springframework.org/schema/data/gemfire"
       xmlns:gfe="http://www.springframework.org/schema/gemfire"
       xmlns:util="http://www.springframework.org/schema/util"
       xsi:schemaLocation="http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://www.springframework.org/schema/data/gemfire http://www.springframework.org/schema/data/gemfire/spring-data-gemfire.xsd http://www.springframework.org/schema/gemfire http://www.springframework.org/schema/gemfire/spring-gemfire.xsd">

    <context:component-scan base-package="com.example.geode"></context:component-scan>

    <util:properties id="gemfireProperties" location="context:geode.properties"/>

    <!-- <context:property-placeholder location="context:geode.properties"/>
    <bean id="log-level"><property name="log-level" value="${log-level}"/></bean>
    <bean id="mcast-port"><property name="mcast-port" value="${mcast-port}"/></bean>
    <bean id="name"><property name="name" value="${name}"/></bean>-->

    <gfe:annotation-driven/>

    <gfe-data:function-executions base-package="com.example.geode.config"/>

    <!-- Declare GemFire Cache -->
    <!-- <gfe:cache/> -->
    <gfe:cache properties-ref="gemfireProperties"/>

    <!-- Local region for being used by the Message -->
    <!-- <gfe:replicated-region id="employee" value-constraint="com.example.geode.model.Employee" data-policy="REPLICATE"/>-->
    <gfe:lookup-region id="employee" value-constraint="com.example.geode.model.Employee" data-policy="REPLICATE"/>
    <!-- <gfe:local-region id="employee" value-constraint="com.example.geode.model.Employee" data-policy="REPLICATE"/>-->

    <!-- Search for GemFire repositories -->
    <gfe-data:repositories base-package="com.example.geode.repository"/>
</beans>
GeodeConfiguration.java:
//imports not included
@Configuration
@ComponentScan
@EnableCaching
@EnableGemfireRepositories//(basePackages = "com.example.geode.repository")
@EnableGemfireFunctions
@EnableGemfireFunctionExecutions//(basePackages = "com.example.geode.function")
@PropertySource("classpath:geode.properties")
public class GeodeConfiguration {

    @Autowired
    private EmployeeRepository employeeRepository;

    @Autowired
    private FunctionExecution functionExecution;

    @Value("${log-level}")
    private String loglevel;

    @Value("${mcast-port}")
    private String mcastPort;

    @Value("${name}")
    private String name;

    Properties gemfireProperties() {
        Properties gemfireProperties = new Properties();
        gemfireProperties.setProperty(loglevel, loglevel);
        gemfireProperties.setProperty(mcastPort, mcastPort);
        gemfireProperties.setProperty(name, name);
        return gemfireProperties;
    }

    @Bean
    CacheFactoryBean gemfireCache() {
        return new CacheFactoryBean();
    }

    @Bean
    GemfireCacheManager cacheManager() {
        GemfireCacheManager cacheManager = new GemfireCacheManager();
        try {
            CacheFactoryBean cacheFactory = gemfireCache();
            //gemfireProperties();
            //cacheFactory.setProperties(gemfireProperties());
            cacheManager.setCache(cacheFactory.getObject()); //gemfireCache().getObject());
        } catch (Exception ex) {
            ex.printStackTrace();
        }
        return cacheManager;
    }

    @Bean(name="employee")
    //@Autowired
    LookupRegionFactoryBean<String, Employee> getRegion(final GemFireCache cache)
            throws Exception {
        //CacheTypeAwareRegionFactoryBean<String, Employee> region = new CacheTypeAwareRegionFactoryBean<>();//GenericRegionFactoryBean<> //LocalRegionFactoryBean<>();
        LookupRegionFactoryBean<String, Employee> region = new LookupRegionFactoryBean<>();//GenericRegionFactoryBean<> //LocalRegionFactoryBean<>();
        region.setRegionName("employee");
        try {
            region.setCache(gemfireCache().getObject());
        } catch (Exception e) {
            e.printStackTrace();
        }
        //region.setClose(false);
        region.setName("employee");
        //region.setAsyncEventQueues(new AsyncEventQueue[]{gemfireQueue});
        //region.setPersistent(false);
        //region.setDataPolicy(org.apache.geode.cache.DataPolicy.REPLICATE); //PRELOADED); //REPLICATE);
        region.afterPropertiesSet();
        return region;
    }
}
BasicGeodeApplication.java:
//imports not provided
@EnableGemfireRepositories
@SpringBootApplication
@ComponentScan("com.example.geode")
//@EnableCaching
@EnableGemfireCaching
@EnableEntityDefinedRegions(basePackageClasses = Employee.class)
@SuppressWarnings("unused")
//@CacheServerApplication(name = "server2", locators = "localhost[10334]",
//    autoStartup = true, port = 41414)
public class BasicGeodeApplication {

    @Autowired
    private EmployeeRepository employeeRepository;

    @Autowired
    private EmployeeService employeeService;

    private static ConfigurableApplicationContext context;

    public static void main(String[] args) {
        context = SpringApplication.run(BasicGeodeApplication.class, args);
        BasicGeodeApplication bga = new BasicGeodeApplication();
    }

    @Bean
    public ApplicationRunner run(EmployeeRepository employeeRepository) {
        return args -> {
            Employee bob = new Employee("Bob", 80.0);
            Employee sue = new Employee("Susan", 95.0);
            Employee jane = new Employee("Jane", 85.0);
            Employee jack = new Employee("Jack", 90.0);
            List<Employee> employees = Arrays.asList(bob, sue, jane, jack);
            employees.sort(Comparator.comparing(Employee::getName));
            for (Employee employee : employees) {
                //employeeService.saveEmployee(employee);
                employeeRepository.save(employee);
            }
            System.out.println("\nList of employees:");
            employees //Arrays.asList(bob.getName(), sue.getName(), jane.getName(), jack.getName());
                .forEach(person -> System.out.println("\t" + employeeRepository.findByName(person.getName())));
            System.out.println("\nQuery salary greater than 80k:");
            stream(employeeRepository.findBySalaryGreaterThan(80.0).spliterator(), false)
                .forEach(person -> System.out.println("\t" + person));
            System.out.println("\nQuery salary less than 95k:");
            stream(employeeRepository.findBySalaryLessThan(95.0).spliterator(), false)
                .forEach(person -> System.out.println("\t" + person));
            System.out.println("\nQuery salary greater than 80k and less than 95k:");
            stream(employeeRepository.findBySalaryGreaterThanAndSalaryLessThan(80.0, 95.0).spliterator(), false)
                .forEach(person -> System.out.println("\t" + person));
        };
    }

    @Service
    class EmployeeService {

        @Autowired
        private EmployeeRepository employeeRepository;

        @CachePut(cacheNames = "employee", key = "#id")
        //@PutMapping("/")
        void saveEmployee(Employee employee) {
            employeeRepository.save(employee);
        }

        Employee findEmployee(String name) {
            return null;
        }
        //employeeRepository.findByName(person.getName())));
    }
}
EmployeeRepository.java:
@Repository("employeeRepository")
//@DependsOn("gemfireCache")
public interface EmployeeRepository extends CrudRepository<Employee, String> {
    Employee findByName(String name);
    Iterable<Employee> findBySalaryGreaterThan(double salary);
    Iterable<Employee> findBySalaryLessThan(double salary);
    Iterable<Employee> findBySalaryGreaterThanAndSalaryLessThan(double salary1, double salary2);
}
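As an aside, Spring Data derives these queries from the method names, and their semantics boil down to simple predicates. A plain-Java sketch of those semantics over an in-memory list (illustrative only; no GemFire involved, and the class names are my own):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Illustrative only: the semantics of the derived query methods above,
// expressed as stream filters over an in-memory list.
public class DerivedQueryDemo {

    static class Employee {
        final String name;
        final double salary;
        Employee(String name, double salary) { this.name = name; this.salary = salary; }
    }

    // findBySalaryGreaterThan: strictly greater than
    static List<Employee> salaryGreaterThan(List<Employee> all, double salary) {
        return all.stream().filter(e -> e.salary > salary).collect(Collectors.toList());
    }

    // findBySalaryGreaterThanAndSalaryLessThan: exclusive on both ends
    static List<Employee> salaryBetween(List<Employee> all, double lo, double hi) {
        return all.stream().filter(e -> e.salary > lo && e.salary < hi).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Employee> employees = Arrays.asList(
            new Employee("Bob", 80.0), new Employee("Susan", 95.0),
            new Employee("Jane", 85.0), new Employee("Jack", 90.0));
        // Jane (85) and Jack (90) fall strictly between 80 and 95
        System.out.println(salaryBetween(employees, 80.0, 95.0).size()); // prints 2
    }
}
```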
Employee.java:
// imports not included
@Entity
@Region("employee")
public class Employee implements Serializable {

    private static final long serialVersionUID = 1L;

    @Id
    @GeneratedValue(strategy=GenerationType.AUTO)
    @javax.persistence.Id
    private Long id;

    public String name;
    public double salary;

    protected Employee() {}

    @PersistenceConstructor
    public Employee(String name, double salary) {
        this.name = name;
        this.salary = salary;
    }

    @Override
    public String toString() {
        return name + " salary is: " + salary;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public double getSalary() {
        return salary;
    }

    public void setSalary(int salary) {
        this.salary = salary;
    }
}
I'd be very surprised if your Spring Boot application was actually using the Region(s) created in Gfsh after attempting a lookup. This is also evident from the fact that you cannot see data in the Region(s) when using Gfsh or Pulse after running your application.
In addition, it is not entirely apparent from your configuration whether your Spring Boot application is joining a cluster started with Gfsh since you have not shared the contents of your geode.properties file.
NOTE: We will come back to your properties file in a moment, since how you reference it (i.e. with context: in the location attribute of the <util> element from the Spring Util schema) is not even correct. In fact, your entire XML file is not even valid! Bean definitions without a class are not valid unless they are abstract.
What version of Spring Boot and Spring Data Geode/GemFire (SDG) are you using?
Judging by the Spring XML schema references in your XML configuration file, you are using Spring 3.0!? You should not reference versions in your schema location declarations. An unversioned schema location resolves to the version of the Spring JARs on your application classpath (e.g. as imported via Maven).
Anyway, the reason I ask is, I have made many changes to SDG causing it to fail-fast in the event that the configuration is ambiguous or incomplete (for instance).
In your case, the lookup would have failed instantly if the Region(s) did not exist. And, I am quite certain the Region(s) do not exist because SDG disables Cluster Configuration on a peer cache node/application, by default. Therefore, none of the Region(s) you created in Gfsh are immediately available to your Spring Boot application.
So, let's walk through a simple example. I will mostly use SDG's new annotation configuration model mixed with some Java config, for ease of use and convenience. I encourage you to read this chapter in SDG's Reference Guide, given your configuration is all over the place and I am pretty certain you are confusing what is actually happening.
Here is my Spring Boot, Apache Geode application...
@SpringBootApplication
@SuppressWarnings("unused")
public class ClusterConfiguredGeodeServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(ClusterConfiguredGeodeServerApplication.class, args);
    }

    @Bean
    ApplicationRunner runner(GemfireTemplate customersTemplate) {
        return args -> {
            Customer jonDoe = Customer.named("Jon Doe").identifiedBy(1L);
            customersTemplate.put(jonDoe.getId(), jonDoe);
        };
    }

    @PeerCacheApplication(name = "ClusterConfiguredGeodeServerApplication")
    @EnablePdx
    static class GeodeConfiguration {

        @Bean("Customers")
        LookupRegionFactoryBean<Long, Customer> customersRegion(GemFireCache gemfireCache) {
            LookupRegionFactoryBean<Long, Customer> customersRegion = new LookupRegionFactoryBean<>();
            customersRegion.setCache(gemfireCache);
            return customersRegion;
        }

        @Bean("CustomersTemplate")
        GemfireTemplate customersTemplate(@Qualifier("Customers") Region<?, ?> customers) {
            return new GemfireTemplate(customers);
        }
    }

    @Data
    @RequiredArgsConstructor(staticName = "named")
    static class Customer {

        @Id
        private Long id;

        @NonNull
        private String name;

        Customer identifiedBy(Long id) {
            this.id = id;
            return this;
        }
    }
}
I am using Spring Data Lovelace RC1 (which includes Spring Data for Apache Geode 2.1.0.RC1). I am also using Spring Boot 2.0.3.RELEASE, which pulls in core Spring Framework 5.0.7.RELEASE, all on Java 8.
I omitted the package and import declarations.
I am using Project Lombok to define my Customer class.
I have a nested GeodeConfiguration class to configure the Spring Boot application as a "peer" Cache member capable of joining an Apache Geode cluster. However, it is not part of any cluster yet!
Finally, I have configured a "Customers" Region, which is "looked" up in the Spring context.
When I start this application, it fails because there is no "Customers" Region defined yet, by any means...
Caused by: org.springframework.beans.factory.BeanInitializationException: Region [Customers] in Cache [GemFireCache[id = 2126876651; isClosing = false; isShutDownAll = false; created = Thu Aug 02 13:43:07 PDT 2018; server = false; copyOnRead = false; lockLease = 120; lockTimeout = 60]] not found
at org.springframework.data.gemfire.ResolvableRegionFactoryBean.createRegion(ResolvableRegionFactoryBean.java:146) ~[spring-data-geode-2.1.0.RC1.jar:2.1.0.RC1]
at org.springframework.data.gemfire.ResolvableRegionFactoryBean.afterPropertiesSet(ResolvableRegionFactoryBean.java:96) ~[spring-data-geode-2.1.0.RC1.jar:2.1.0.RC1]
at org.springframework.data.gemfire.LookupRegionFactoryBean.afterPropertiesSet(LookupRegionFactoryBean.java:72) ~[spring-data-geode-2.1.0.RC1.jar:2.1.0.RC1]
...
This is expected!
Alright, let's go into Gfsh and start a cluster.
You know that you need to start a Locator with a Server to form a cluster, right? A Locator is used by other potential peers attempting to join the cluster so they can locate the cluster in the first place. The Server is needed in order to create the "Customers" Region since you cannot create a Region on a Locator.
$ echo $GEODE_HOME
/Users/jblum/pivdev/apache-geode-1.6.0
$ gfsh
_________________________ __
/ _____/ ______/ ______/ /____/ /
/ / __/ /___ /_____ / _____ /
/ /__/ / ____/ _____/ / / / /
/______/_/ /______/_/ /_/ 1.6.0
Monitor and Manage Apache Geode
gfsh>start locator --name=LocaorOne --log-level=config
Starting a Geode Locator in /Users/jblum/pivdev/lab/LocaorOne...
....
Locator in /Users/jblum/pivdev/lab/LocaorOne on 10.0.0.121[10334] as LocaorOne is currently online.
Process ID: 41758
Uptime: 5 seconds
Geode Version: 1.6.0
Java Version: 1.8.0_152
Log File: /Users/jblum/pivdev/lab/LocaorOne/LocaorOne.log
JVM Arguments: -Dgemfire.enable-cluster-configuration=true -Dgemfire.load-cluster-configuration-from-dir=false -Dgemfire.log-level=config -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /Users/jblum/pivdev/apache-geode-1.6.0/lib/geode-core-1.6.0.jar:/Users/jblum/pivdev/apache-geode-1.6.0/lib/geode-dependencies.jar
Successfully connected to: JMX Manager [host=10.0.0.121, port=1099]
Cluster configuration service is up and running.
gfsh>start server --name=ServerOne --log-level=config
Starting a Geode Server in /Users/jblum/pivdev/lab/ServerOne...
...
Server in /Users/jblum/pivdev/lab/ServerOne on 10.0.0.121[40404] as ServerOne is currently online.
Process ID: 41785
Uptime: 3 seconds
Geode Version: 1.6.0
Java Version: 1.8.0_152
Log File: /Users/jblum/pivdev/lab/ServerOne/ServerOne.log
JVM Arguments: -Dgemfire.default.locators=10.0.0.121[10334] -Dgemfire.start-dev-rest-api=false -Dgemfire.use-cluster-configuration=true -Dgemfire.log-level=config -XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /Users/jblum/pivdev/apache-geode-1.6.0/lib/geode-core-1.6.0.jar:/Users/jblum/pivdev/apache-geode-1.6.0/lib/geode-dependencies.jar
gfsh>list members
Name | Id
--------- | --------------------------------------------------------------
LocaorOne | 10.0.0.121(LocaorOne:41758:locator)<ec><v0>:1024 [Coordinator]
ServerOne | 10.0.0.121(ServerOne:41785)<v1>:1025
gfsh>list regions
No Regions Found
gfsh>create region --name=Customers --type=PARTITION --key-constraint=java.lang.Long --value-constraint=java.lang.Object
Member | Status
--------- | ------------------------------------------
ServerOne | Region "/Customers" created on "ServerOne"
gfsh>list regions
List of regions
---------------
Customers
gfsh>describe region --name=Customers
..........................................................
Name : Customers
Data Policy : partition
Hosting Members : ServerOne
Non-Default Attributes Shared By Hosting Members
Type | Name | Value
------ | ----------- | ---------
Region | size | 0
| data-policy | PARTITION
Now, even if I run the Spring Boot application again, it will still fail with the same Exception...
Caused by: org.springframework.beans.factory.BeanInitializationException: Region [Customers] in Cache [GemFireCache[id = 989520513; isClosing = false; isShutDownAll = false; created = Thu Aug 02 14:09:25 PDT 2018; server = false; copyOnRead = false; lockLease = 120; lockTimeout = 60]] not found
at org.springframework.data.gemfire.ResolvableRegionFactoryBean.createRegion(ResolvableRegionFactoryBean.java:146) ~[spring-data-geode-2.1.0.RC1.jar:2.1.0.RC1]
at org.springframework.data.gemfire.ResolvableRegionFactoryBean.afterPropertiesSet(ResolvableRegionFactoryBean.java:96) ~[spring-data-geode-2.1.0.RC1.jar:2.1.0.RC1]
at org.springframework.data.gemfire.LookupRegionFactoryBean.afterPropertiesSet(LookupRegionFactoryBean.java:72) ~[spring-data-geode-2.1.0.RC1.jar:2.1.0.RC1]
...
Why?
This is because: 1) the Spring Boot, Apache Geode peer Cache application is not part of the cluster (yet), and 2) by default, SDG does not allow a Spring configured/bootstrapped Apache Geode peer Cache application to get its configuration from the cluster (specifically, from the Cluster Configuration Service). Therefore, we have to configure/enable both things.
We can have our Spring Boot, Apache Geode peer Cache application join the cluster specifically by specifying the locators attribute of the #PeerCacheApplication annotation as localhost[10334].
@PeerCacheApplication(name = "...", locators = "localhost[10334]")
We can have our Spring Boot, Apache Geode peer Cache application get its configuration from the cluster by enabling the useClusterConfiguration property, which we do by adding the following Configurer bean definition to our inner, static GeodeConfiguration class, as follows:
@Bean
PeerCacheConfigurer useClusterConfigurationConfigurer() {
    return (beanName, cacheFactoryBean) -> cacheFactoryBean.setUseClusterConfiguration(true);
}
Now, when we run our Spring Boot, Apache Geode peer Cache application again, we see quite different output. First, see that our peer member (app) gets the cluster configuration...
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<cache xmlns="http://geode.apache.org/schema/cache" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" copy-on-read="false" is-server="false" lock-lease="120" lock-timeout="60" search-timeout="300" version="1.0" xsi:schemaLocation="http://geode.apache.org/schema/cache http://geode.apache.org/schema/cache/cache-1.0.xsd">
    <region name="Customers">
        <region-attributes data-policy="partition">
            <key-constraint>java.lang.Long</key-constraint>
            <value-constraint>java.lang.Object</value-constraint>
        </region-attributes>
    </region>
</cache>
Next, you may have noticed that I enabled PDX, using SDG's @EnablePdx annotation. This allows us to easily serialize our application domain model object types (e.g. Customer) without those types needing to implement java.io.Serializable. There are several reasons why you wouldn't necessarily want to implement java.io.Serializable anyway. Using SDG's @EnablePdx uses SDG's MappingPdxSerializer implementation, which is far more powerful than even Apache Geode's/Pivotal GemFire's own ReflectionBasedAutoSerializer.
As a result of serializing the application types (namely, Customer), you will see this output...
14:26:48.322 [main] INFO org.apache.geode.internal.cache.PartitionedRegion - Partitioned Region /Customers is created with prId=2
Started ClusterConfiguredGeodeServerApplication in 4.223 seconds (JVM running for 5.574)
14:26:48.966 [main] INFO org.apache.geode.pdx.internal.PeerTypeRegistration - Adding new type: PdxType[dsid=0, typenum=14762571
name=example.app.spring.cluster_config.server.ClusterConfiguredGeodeServerApplication$Customer
fields=[
id:Object:identity:0:idx0(relativeOffset)=0:idx1(vlfOffsetIndex)=-1
name:String:1:1:idx0(relativeOffset)=0:idx1(vlfOffsetIndex)=1]]
14:26:49.002 [main] INFO org.apache.geode.pdx.internal.TypeRegistry - Caching PdxType[dsid=0, typenum=14762571
name=example.app.spring.cluster_config.server.ClusterConfiguredGeodeServerApplication$Customer
fields=[
id:Object:identity:0:idx0(relativeOffset)=0:idx1(vlfOffsetIndex)=-1
name:String:1:1:idx0(relativeOffset)=0:idx1(vlfOffsetIndex)=1]]
Another reason I enable PDX is so that I do not need to add the Customer class to the Server (i.e. "ServerOne") started using Gfsh. It also allows me to query the "Customers" Region and see that Customer "Jon Doe" was successfully added...
gfsh>describe region --name=Customers
..........................................................
Name : Customers
Data Policy : partition
Hosting Members : ServerOne
Non-Default Attributes Shared By Hosting Members
Type | Name | Value
------ | ----------- | ---------
Region | size | 1
| data-policy | PARTITION
gfsh>
gfsh>query --query="SELECT c.name FROM /Customers c"
Result : true
Limit : 100
Rows : 1
Result
-------
Jon Doe
Bingo! Success!
I am not going to even begin to discuss everything that is wrong with your configuration. I implore you to read the docs (and Apache Geode's User Guide; i.e. appropriate sections), understand the concepts, look at examples, guides, ask concise questions, etc, etc.
Here is the example source code...
https://github.com/jxblum/contacts-application/blob/master/configuration-example/src/main/java/example/app/spring/cluster_config/server/ClusterConfiguredGeodeServerApplication.java
Hope this helps!
-j

In-memory H2 database, insert not working in SpringBootTest

I have a Spring Boot application which I wish to test.
Below are the details of my files.
application.properties
PRODUCT_DATABASE_PASSWORD=
PRODUCT_DATABASE_USERNAME=sa
PRODUCT_DATABASE_CONNECTION_URL=jdbc:h2:file:./target/db/testdb
PRODUCT_DATABASE_DRIVER=org.h2.Driver
RED_SHIFT_DATABASE_PASSWORD=
RED_SHIFT_DATABASE_USERNAME=sa
RED_SHIFT_DATABASE_CONNECTION_URL=jdbc:h2:file:./target/db/testdb
RED_SHIFT_DATABASE_DRIVER=org.h2.Driver
spring.datasource.platform=h2
ConfigurationClass
@SpringBootConfiguration
@SpringBootApplication
@Import({ProductDataAccessConfig.class, RedShiftDataAccessConfig.class})
public class TestConfig {
}
Main Test Class
@RunWith(SpringJUnit4ClassRunner.class)
@SpringBootTest(classes = {TestConfig.class, ConfigFileApplicationContextInitializer.class}, webEnvironment = SpringBootTest.WebEnvironment.NONE)
public class MainTest {

    @Autowired(required = true)
    @Qualifier("dataSourceRedShift")
    private DataSource dataSource;

    @Test
    public void testHourlyBlock() throws Exception {
        insertDataIntoDb(); // data successfully inserted
        SpringApplication.run(Application.class, new String[]{}); // No data found
    }
}
Data Access In Application.class;
try (Connection conn = dataSourceRedShift.getConnection();
     Statement stmt = conn.createStatement()) {
    // access inserted data
}
Please Help!
PS: for the Spring Boot application the test beans are being picked up, so bean instantiation is definitely not the problem. I think I am missing some properties.
I do not use Hibernate in my application, and the data goes away even within the same application context (child context), i.e. when I run a Spring Boot application that reads the data inserted earlier.
Problem solved: removing spring.datasource.platform=h2 from application.properties made my H2 data persist.
But I still wish to know: how is H2 starting automatically?

Autofac exception after upgrading NServiceBus

I just upgraded NServiceBus from 4.6 to 5.0
I did the steps suggested in the "4 to 5" document and am able to compile. Now I receive the following error:
None of the constructors found with
'Autofac.Core.Activators.Reflection.DefaultConstructorFinder' on type
'Nop.Web.Controllers.ShoppingCartController' can be invoked with the
available services and parameters: Cannot resolve parameter
'NServiceBus.IBus bus' of constructor 'Void .ctor(NServiceBus.IBus, ...
What has to be done?
(Update: My Configuration)
public static class ServiceBus
{
    public static void Init(ILifetimeScope scope)
    {
        var configuration = new BusConfiguration();
        configuration.EndpointName(ConfigurationManager.AppSettings["ServiceBusEndpointName"]);
        configuration.UseTransport<MsmqTransport>();
        configuration.UseSerialization<JsonSerializer>();
        configuration.UsePersistence<RavenDBPersistence>();
        configuration.DisableFeature<Sagas>();
        configuration.Transactions().Enable();
        configuration.AssembliesToScan(AllAssemblies
            .Matching("Nop.Services.dll")
            .And("TengoMessages.dll")
            .And("Partner.Pricing.Messages.dll")
            .And("Partner.Pricing.Infrastructure.dll"));
        configuration.UseContainer<AutofacBuilder>();
        configuration.PurgeOnStartup(false);
        var bus = Bus.Create(configuration);
        bus.Start();
        var newBuilder = new ContainerBuilder();
        newBuilder.RegisterInstance(bus);
        newBuilder.Update(Singleton<IContainer>.Instance);
    }
}
I don't use Autofac, so I'm not familiar with the ContainerBuilder concept, but it looks like you want to use an existing container with NServiceBus?
Create the instance of your container first, and then change your configuration code to use:
configuration.UseContainer<AutofacBuilder>(customizations =>
    customizations.ExistingContainer(container));
It looks like the second-to-last line of code is registering the bus; this should not be necessary, as the code above will ensure all NSB-related classes get properly registered.

How to find port of Spring Boot container when running a spock test using property server.port=0

Given this entry in application.properties:
server.port=0
which causes Spring Boot to choose a random available port, and testing a Spring Boot web application using Spock, how can the Spock code know which port to hit?
Normal injection like this:
@Value("${local.server.port}")
int port;
doesn't work with spock.
You can find the port using this code:
int port = context.embeddedServletContainer.port
which, for those interested, has this Java equivalent:
int port = ((TomcatEmbeddedServletContainer)((AnnotationConfigEmbeddedWebApplicationContext)context).getEmbeddedServletContainer()).getPort();
Here's an abstract class that you can extend; it wraps up the initialization of the Spring Boot application and determines the port:
abstract class SpringBootSpecification extends Specification {

    @Shared
    @AutoCleanup
    ConfigurableApplicationContext context

    int port = context.embeddedServletContainer.port

    void launch(Class clazz) {
        Future future = Executors.newSingleThreadExecutor().submit(
            new Callable() {
                @Override
                public ConfigurableApplicationContext call() throws Exception {
                    return (ConfigurableApplicationContext) SpringApplication.run(clazz)
                }
            })
        context = future.get(20, TimeUnit.SECONDS);
    }
}
Which you can use like this:
class MySpecification extends SpringBootSpecification {

    void setupSpec() {
        launch(MyLauncher.class)
    }

    String getBody(someParam) {
        ResponseEntity entity = new RestTemplate().getForEntity("http://localhost:${port}/somePath/${someParam}", String.class)
        return entity.body;
    }
}
The injection will work with Spock, as long as you've configured your spec class correctly and have spock-spring on the classpath. There's a limitation in Spock Spring which means it won't bootstrap your Boot application if you use @SpringApplicationConfiguration. You need to use @ContextConfiguration and configure it manually instead. See this answer for the details.
The second part of the problem is that you can't use a GString for the #Value. You could escape the $, but it's easier to use single quotes:
@Value('${local.server.port}')
private int port;
Putting this together, you get a spec that looks something like this:
@ContextConfiguration(loader = SpringApplicationContextLoader, classes = SampleSpockTestingApplication.class)
@WebAppConfiguration
@IntegrationTest("server.port=0")
class SampleSpockTestingApplicationSpec extends Specification {

    @Value("\${local.server.port}")
    private int port;

    def "The index page has the expected body"() {
        when: "the index page is accessed"
        def response = new TestRestTemplate().getForEntity(
            "http://localhost:$port", String.class);

        then: "the response is OK and the body is welcome"
        response.statusCode == HttpStatus.OK
        response.body == 'welcome'
    }
}
Also note the use of @IntegrationTest("server.port=0") to request a random port be used. It's a nice alternative to configuring it in application.properties.
You could do this too:
@Autowired
private org.springframework.core.env.Environment springEnv;
...
springEnv.getProperty("server.port");

Anyone got Spring Boot working with cucumber-jvm?

I'm using Spring Boot as it removes all the boring stuff and lets me focus on my code, but all the test examples use JUnit and I want to use Cucumber.
Can someone point me in the right direction to get Cucumber and Spring to start things up, do all the auto-configuration and wiring, and let my step definitions use autowired beans to do stuff?
Try to use the following on your step definition class:
@ContextConfiguration(classes = YourBootApplication.class,
    loader = SpringApplicationContextLoader.class)
@RunWith(SpringJUnit4ClassRunner.class)
public class MySteps {
    //...
}
Also make sure you have the cucumber-spring module on your classpath.
Jake - my final code had the following annotations in a superclass that each Cucumber step definition class extended. This gives access to web-based mocks, adds in various scopes for testing, and bootstraps Spring Boot only once.
@ContextConfiguration(classes = {MySpringConfiguration.class}, loader = SpringApplicationContextLoader.class)
@WebAppConfiguration
@TestExecutionListeners({WebContextTestExecutionListener.class, ServletTestExecutionListener.class})
where WebContextTestExecutionListener is:
public class WebContextTestExecutionListener extends AbstractTestExecutionListener {

    @Override
    public void prepareTestInstance(TestContext testContext) throws Exception {
        if (testContext.getApplicationContext() instanceof GenericApplicationContext) {
            GenericApplicationContext context = (GenericApplicationContext) testContext.getApplicationContext();
            ConfigurableListableBeanFactory beanFactory = context.getBeanFactory();
            Scope requestScope = new RequestScope();
            beanFactory.registerScope("request", requestScope);
            Scope sessionScope = new SessionScope();
            beanFactory.registerScope("session", sessionScope);
        }
    }
}
My approach is quite simple. In a Before hook (in env.groovy as I am using Cucumber-JVM for Groovy), do the following.
package com.example.hooks

import static cucumber.api.groovy.Hooks.Before
import static org.springframework.boot.SpringApplication.exit
import static org.springframework.boot.SpringApplication.run

def context

Before {
    if (!context) {
        context = run Application
        context.addShutdownHook {
            exit context
        }
    }
}
Thanks to @PaulNUK, I found a set of annotations that will work.
I posted the answer in my question here
My StepDefs class required the annotations:
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = DemoApplication.class, loader = SpringApplicationContextLoader.class)
@WebAppConfiguration
@IntegrationTest
There is also a repository with source code in the answer I linked.