Pivotal GemFire cannot see cached data in Gfsh or Pulse

I have created a Spring Boot application with a Geode/GemFire cache. I would like to connect my Spring Boot application to a Gfsh-created Region. In my application-context.xml I'm using a gfe:lookup-region with the id of the Gfsh-created Region.
In my Java configuration file I utilize a LookupRegionFactoryBean to get a reference to the externally defined Region.
In my SpringBootApplication bootstrap class I write to my Repository successfully, as I can read back all the objects that I saved. But using the Gfsh tool or the Pulse tool, I cannot see my cached data records (or their count).
Could you provide some insight here? Also, I tried using LocalRegionFactoryBean in my configuration file, but that approach did not work either.
Thanks.
application-context.xml:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:gfe-data="http://www.springframework.org/schema/data/gemfire"
       xmlns:gfe="http://www.springframework.org/schema/gemfire"
       xmlns:util="http://www.springframework.org/schema/util"
       xsi:schemaLocation="http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd
           http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
           http://www.springframework.org/schema/data/gemfire http://www.springframework.org/schema/data/gemfire/spring-data-gemfire.xsd
           http://www.springframework.org/schema/gemfire http://www.springframework.org/schema/gemfire/spring-gemfire.xsd">

    <context:component-scan base-package="com.example.geode"/>

    <util:properties id="gemfireProperties" location="context:geode.properties"/>

    <!-- <context:property-placeholder location="context:geode.properties"/>
    <bean id="log-level"><property name="log-level" value="${log-level}"/></bean>
    <bean id="mcast-port"><property name="mcast-port" value="${mcast-port}"/></bean>
    <bean id="name"><property name="name" value="${name}"/></bean> -->

    <gfe:annotation-driven/>

    <gfe-data:function-executions base-package="com.example.geode.config"/>

    <!-- Declare GemFire Cache -->
    <!-- <gfe:cache/> -->
    <gfe:cache properties-ref="gemfireProperties"/>

    <!-- Local region for being used by the Message -->
    <!-- <gfe:replicated-region id="employee" value-constraint="com.example.geode.model.Employee" data-policy="REPLICATE"/> -->
    <gfe:lookup-region id="employee" value-constraint="com.example.geode.model.Employee" data-policy="REPLICATE"/>
    <!-- <gfe:local-region id="employee" value-constraint="com.example.geode.model.Employee" data-policy="REPLICATE"/> -->

    <!-- Search for GemFire repositories -->
    <gfe-data:repositories base-package="com.example.geode.repository"/>

</beans>
GeodeConfiguration.java:
//imports not included
@Configuration
@ComponentScan
@EnableCaching
@EnableGemfireRepositories //(basePackages = "com.example.geode.repository")
@EnableGemfireFunctions
@EnableGemfireFunctionExecutions //(basePackages = "com.example.geode.function")
@PropertySource("classpath:geode.properties")
public class GeodeConfiguration {

    @Autowired
    private EmployeeRepository employeeRepository;

    @Autowired
    private FunctionExecution functionExecution;

    @Value("${log-level}")
    private String loglevel;

    @Value("${mcast-port}")
    private String mcastPort;

    @Value("${name}")
    private String name;

    Properties gemfireProperties() {
        Properties gemfireProperties = new Properties();
        gemfireProperties.setProperty("log-level", loglevel);
        gemfireProperties.setProperty("mcast-port", mcastPort);
        gemfireProperties.setProperty("name", name);
        return gemfireProperties;
    }

    @Bean
    CacheFactoryBean gemfireCache() {
        return new CacheFactoryBean();
    }

    @Bean
    GemfireCacheManager cacheManager() {
        GemfireCacheManager cacheManager = new GemfireCacheManager();
        try {
            CacheFactoryBean cacheFactory = gemfireCache();
            //gemfireProperties();
            //cacheFactory.setProperties(gemfireProperties());
            cacheManager.setCache(cacheFactory.getObject()); //gemfireCache().getObject());
        } catch (Exception ex) {
            ex.printStackTrace();
        }
        return cacheManager;
    }

    @Bean(name = "employee")
    //@Autowired
    LookupRegionFactoryBean<String, Employee> getRegion(final GemFireCache cache) throws Exception {
        //CacheTypeAwareRegionFactoryBean<String, Employee> region = new CacheTypeAwareRegionFactoryBean<>(); //GenericRegionFactoryBean<> //LocalRegionFactoryBean<>();
        LookupRegionFactoryBean<String, Employee> region = new LookupRegionFactoryBean<>();
        region.setRegionName("employee");
        try {
            region.setCache(gemfireCache().getObject());
        } catch (Exception e) {
            e.printStackTrace();
        }
        //region.setClose(false);
        region.setName("employee");
        //region.setAsyncEventQueues(new AsyncEventQueue[] { gemfireQueue });
        //region.setPersistent(false);
        //region.setDataPolicy(org.apache.geode.cache.DataPolicy.REPLICATE); //PRELOADED); //REPLICATE);
        region.afterPropertiesSet();
        return region;
    }
}
BasicGeodeApplication.java:
//imports not provided
@EnableGemfireRepositories
@SpringBootApplication
@ComponentScan("com.example.geode")
//@EnableCaching
@EnableGemfireCaching
@EnableEntityDefinedRegions(basePackageClasses = Employee.class)
@SuppressWarnings("unused")
//@CacheServerApplication(name = "server2", locators = "localhost[10334]",
//    autoStartup = true, port = 41414)
public class BasicGeodeApplication {

    @Autowired
    private EmployeeRepository employeeRepository;

    @Autowired
    private EmployeeService employeeService;

    private static ConfigurableApplicationContext context;

    public static void main(String[] args) {
        context = SpringApplication.run(BasicGeodeApplication.class, args);
        BasicGeodeApplication bga = new BasicGeodeApplication();
    }

    @Bean
    public ApplicationRunner run(EmployeeRepository employeeRepository) {
        return args -> {
            Employee bob = new Employee("Bob", 80.0);
            Employee sue = new Employee("Susan", 95.0);
            Employee jane = new Employee("Jane", 85.0);
            Employee jack = new Employee("Jack", 90.0);
            List<Employee> employees = Arrays.asList(bob, sue, jane, jack);
            employees.sort(Comparator.comparing(Employee::getName));
            for (Employee employee : employees) {
                //employeeService.saveEmployee(employee);
                employeeRepository.save(employee);
            }
            System.out.println("\nList of employees:");
            employees.forEach(person ->
                System.out.println("\t" + employeeRepository.findByName(person.getName())));
            System.out.println("\nQuery salary greater than 80k:");
            stream(employeeRepository.findBySalaryGreaterThan(80.0).spliterator(), false)
                .forEach(person -> System.out.println("\t" + person));
            System.out.println("\nQuery salary less than 95k:");
            stream(employeeRepository.findBySalaryLessThan(95.0).spliterator(), false)
                .forEach(person -> System.out.println("\t" + person));
            System.out.println("\nQuery salary greater than 80k and less than 95k:");
            stream(employeeRepository.findBySalaryGreaterThanAndSalaryLessThan(80.0, 95.0).spliterator(), false)
                .forEach(person -> System.out.println("\t" + person));
        };
    }

    @Service
    class EmployeeService {

        @Autowired
        private EmployeeRepository employeeRepository;

        @CachePut(cacheNames = "employee", key = "#id")
        //@PutMapping("/")
        void saveEmployee(Employee employee) {
            employeeRepository.save(employee);
        }

        Employee findEmployee(String name) {
            return null;
        }
    }
}
EmployeeRepository.java:
@Repository("employeeRepository")
//@DependsOn("gemfireCache")
public interface EmployeeRepository extends CrudRepository<Employee, String> {

    Employee findByName(String name);

    Iterable<Employee> findBySalaryGreaterThan(double salary);

    Iterable<Employee> findBySalaryLessThan(double salary);

    Iterable<Employee> findBySalaryGreaterThanAndSalaryLessThan(double salary1, double salary2);
}
Employee.java:
// imports not included
@Entity
@Region("employee")
public class Employee implements Serializable {

    private static final long serialVersionUID = 1L;

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    @javax.persistence.Id
    private Long id;

    public String name;
    public double salary;

    protected Employee() {}

    @PersistenceConstructor
    public Employee(String name, double salary) {
        this.name = name;
        this.salary = salary;
    }

    @Override
    public String toString() {
        return name + " salary is: " + salary;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public double getSalary() {
        return salary;
    }

    public void setSalary(double salary) {
        this.salary = salary;
    }
}

I'd be very surprised if your Spring Boot application was actually using the Region(s) created in Gfsh after attempting a lookup. This is also evident from the fact that you cannot see data in the Region(s) when using Gfsh or Pulse after running your application.
In addition, it is not entirely apparent from your configuration whether your Spring Boot application is joining a cluster started with Gfsh since you have not shared the contents of your geode.properties file.
NOTE: We will come back to your properties file in a moment, since how you reference it (i.e. with context: in the location attribute of the <util> element from the Spring Util schema) is not even correct. In fact, your entire XML file is not even valid! Bean definitions without a class are not valid unless they are abstract.
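For instance, classpath resources are referenced with the classpath: prefix, not context:. A corrected declaration (a sketch, assuming geode.properties sits at the root of your classpath) would look like:
<util:properties id="gemfireProperties" location="classpath:geode.properties"/>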
What version of Spring Boot and Spring Data Geode/GemFire (SDG) are you using?
Judging by the Spring XML schema references in your XML configuration file, you are using Spring 3.0!? You should not reference versions in your schema location declarations. An unversioned schema location resolves against the version of the Spring JARs you pulled onto your application classpath (e.g. using Maven).
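For example, the beans schema location would be declared without a version, like so:
xsi:schemaLocation="http://www.springframework.org/schema/beans
    http://www.springframework.org/schema/beans/spring-beans.xsd"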
Anyway, the reason I ask is, I have made many changes to SDG causing it to fail-fast in the event that the configuration is ambiguous or incomplete (for instance).
In your case, the lookup would have failed instantly if the Region(s) did not exist. And, I am quite certain the Region(s) do not exist because SDG disables Cluster Configuration on a peer cache node/application, by default. Therefore, none of the Region(s) you created in Gfsh are immediately available to your Spring Boot application.
So, let's walk through a simple example. I will mostly use SDG's new annotation configuration model mixed with some Java config, for ease of use and convenience. I encourage you to read this chapter in SDG's Reference Guide, given your configuration is all over the place and I am pretty certain you are confusing what is actually happening.
Here is my Spring Boot, Apache Geode application...
@SpringBootApplication
@SuppressWarnings("unused")
public class ClusterConfiguredGeodeServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(ClusterConfiguredGeodeServerApplication.class, args);
    }

    @Bean
    ApplicationRunner runner(GemfireTemplate customersTemplate) {
        return args -> {
            Customer jonDoe = Customer.named("Jon Doe").identifiedBy(1L);
            customersTemplate.put(jonDoe.getId(), jonDoe);
        };
    }

    @PeerCacheApplication(name = "ClusterConfiguredGeodeServerApplication")
    @EnablePdx
    static class GeodeConfiguration {

        @Bean("Customers")
        LookupRegionFactoryBean<Long, Customer> customersRegion(GemFireCache gemfireCache) {
            LookupRegionFactoryBean<Long, Customer> customersRegion = new LookupRegionFactoryBean<>();
            customersRegion.setCache(gemfireCache);
            return customersRegion;
        }

        @Bean("CustomersTemplate")
        GemfireTemplate customersTemplate(@Qualifier("Customers") Region<?, ?> customers) {
            return new GemfireTemplate(customers);
        }
    }

    @Data
    @RequiredArgsConstructor(staticName = "named")
    static class Customer {

        @Id
        private Long id;

        @NonNull
        private String name;

        Customer identifiedBy(Long id) {
            this.id = id;
            return this;
        }
    }
}
I am using Spring Data Lovelace RC1 (which includes Spring Data for Apache Geode 2.1.0.RC1). I am also using Spring Boot 2.0.3.RELEASE, which pulls in core Spring Framework 5.0.7.RELEASE, all on Java 8.
I omitted the package and import declarations.
I am using Project Lombok to define my Customer class.
I have a nested GeodeConfiguration class to configure the Spring Boot application as a "peer" Cache member capable of joining an Apache Geode cluster. However, it is not part of any cluster yet!
Finally, I have configured a "Customers" Region, which is looked up in the Spring context.
When I start this application, it fails because there is no "Customers" Region defined yet, by any means...
Caused by: org.springframework.beans.factory.BeanInitializationException: Region [Customers] in Cache [GemFireCache[id = 2126876651; isClosing = false; isShutDownAll = false; created = Thu Aug 02 13:43:07 PDT 2018; server = false; copyOnRead = false; lockLease = 120; lockTimeout = 60]] not found
at org.springframework.data.gemfire.ResolvableRegionFactoryBean.createRegion(ResolvableRegionFactoryBean.java:146) ~[spring-data-geode-2.1.0.RC1.jar:2.1.0.RC1]
at org.springframework.data.gemfire.ResolvableRegionFactoryBean.afterPropertiesSet(ResolvableRegionFactoryBean.java:96) ~[spring-data-geode-2.1.0.RC1.jar:2.1.0.RC1]
at org.springframework.data.gemfire.LookupRegionFactoryBean.afterPropertiesSet(LookupRegionFactoryBean.java:72) ~[spring-data-geode-2.1.0.RC1.jar:2.1.0.RC1]
...
This is expected!
Alright, let's go into Gfsh and start a cluster.
You know that you need to start a Locator with a Server to form a cluster, right? A Locator is used by other potential peers attempting to join the cluster so they can locate the cluster in the first place. The Server is needed in order to create the "Customers" Region since you cannot create a Region on a Locator.
$ echo $GEODE_HOME
/Users/jblum/pivdev/apache-geode-1.6.0
$ gfsh
    _________________________     __
   / _____/ ______/ ______/ /____/ /
  / /  __/ /___  /_____  / _____  /
 / /__/ / ____/  _____/ / /    / /
/______/_/      /______/_/    /_/    1.6.0
Monitor and Manage Apache Geode
gfsh>start locator --name=LocaorOne --log-level=config
Starting a Geode Locator in /Users/jblum/pivdev/lab/LocaorOne...
....
Locator in /Users/jblum/pivdev/lab/LocaorOne on 10.0.0.121[10334] as LocaorOne is currently online.
Process ID: 41758
Uptime: 5 seconds
Geode Version: 1.6.0
Java Version: 1.8.0_152
Log File: /Users/jblum/pivdev/lab/LocaorOne/LocaorOne.log
JVM Arguments: -Dgemfire.enable-cluster-configuration=true -Dgemfire.load-cluster-configuration-from-dir=false -Dgemfire.log-level=config -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /Users/jblum/pivdev/apache-geode-1.6.0/lib/geode-core-1.6.0.jar:/Users/jblum/pivdev/apache-geode-1.6.0/lib/geode-dependencies.jar
Successfully connected to: JMX Manager [host=10.0.0.121, port=1099]
Cluster configuration service is up and running.
gfsh>start server --name=ServerOne --log-level=config
Starting a Geode Server in /Users/jblum/pivdev/lab/ServerOne...
...
Server in /Users/jblum/pivdev/lab/ServerOne on 10.0.0.121[40404] as ServerOne is currently online.
Process ID: 41785
Uptime: 3 seconds
Geode Version: 1.6.0
Java Version: 1.8.0_152
Log File: /Users/jblum/pivdev/lab/ServerOne/ServerOne.log
JVM Arguments: -Dgemfire.default.locators=10.0.0.121[10334] -Dgemfire.start-dev-rest-api=false -Dgemfire.use-cluster-configuration=true -Dgemfire.log-level=config -XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /Users/jblum/pivdev/apache-geode-1.6.0/lib/geode-core-1.6.0.jar:/Users/jblum/pivdev/apache-geode-1.6.0/lib/geode-dependencies.jar
gfsh>list members
Name | Id
--------- | --------------------------------------------------------------
LocaorOne | 10.0.0.121(LocaorOne:41758:locator)<ec><v0>:1024 [Coordinator]
ServerOne | 10.0.0.121(ServerOne:41785)<v1>:1025
gfsh>list regions
No Regions Found
gfsh>create region --name=Customers --type=PARTITION --key-constraint=java.lang.Long --value-constraint=java.lang.Object
Member | Status
--------- | ------------------------------------------
ServerOne | Region "/Customers" created on "ServerOne"
gfsh>list regions
List of regions
---------------
Customers
gfsh>describe region --name=Customers
..........................................................
Name : Customers
Data Policy : partition
Hosting Members : ServerOne
Non-Default Attributes Shared By Hosting Members
Type | Name | Value
------ | ----------- | ---------
Region | size | 0
| data-policy | PARTITION
Now, even if I run the Spring Boot application again, it will still fail with the same Exception...
Caused by: org.springframework.beans.factory.BeanInitializationException: Region [Customers] in Cache [GemFireCache[id = 989520513; isClosing = false; isShutDownAll = false; created = Thu Aug 02 14:09:25 PDT 2018; server = false; copyOnRead = false; lockLease = 120; lockTimeout = 60]] not found
at org.springframework.data.gemfire.ResolvableRegionFactoryBean.createRegion(ResolvableRegionFactoryBean.java:146) ~[spring-data-geode-2.1.0.RC1.jar:2.1.0.RC1]
at org.springframework.data.gemfire.ResolvableRegionFactoryBean.afterPropertiesSet(ResolvableRegionFactoryBean.java:96) ~[spring-data-geode-2.1.0.RC1.jar:2.1.0.RC1]
at org.springframework.data.gemfire.LookupRegionFactoryBean.afterPropertiesSet(LookupRegionFactoryBean.java:72) ~[spring-data-geode-2.1.0.RC1.jar:2.1.0.RC1]
...
Why?
This is because 1) the Spring Boot, Apache Geode peer Cache application is not part of the cluster (yet), and 2) by default, SDG does not allow a Spring configured/bootstrapped Apache Geode peer Cache application to get its configuration from the cluster (specifically, from the Cluster Configuration Service). Therefore, we have to configure/enable both things.
We can have our Spring Boot, Apache Geode peer Cache application join the cluster by specifying the locators attribute of the @PeerCacheApplication annotation as localhost[10334]:
@PeerCacheApplication(name = "...", locators = "localhost[10334]")
We can have our Spring Boot, Apache Geode peer Cache application get its configuration from the cluster by enabling the useClusterConfiguration property, which we do by adding the following Configurer bean definition to the nested, static GeodeConfiguration class:
@Bean
PeerCacheConfigurer useClusterConfigurationConfigurer() {
    return (beanName, cacheFactoryBean) -> cacheFactoryBean.setUseClusterConfiguration(true);
}
Now, when we run our Spring Boot, Apache Geode peer Cache application again, we see quite different output. First, see that our peer member (app) gets the cluster configuration...
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<cache xmlns="http://geode.apache.org/schema/cache" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       copy-on-read="false" is-server="false" lock-lease="120" lock-timeout="60" search-timeout="300"
       version="1.0" xsi:schemaLocation="http://geode.apache.org/schema/cache http://geode.apache.org/schema/cache/cache-1.0.xsd">
    <region name="Customers">
        <region-attributes data-policy="partition">
            <key-constraint>java.lang.Long</key-constraint>
            <value-constraint>java.lang.Object</value-constraint>
        </region-attributes>
    </region>
</cache>
Next, you may have noticed that I enabled PDX, using SDG's @EnablePdx annotation. This allows us to easily serialize our application domain model object types (e.g. Customer) without our types needing to implement java.io.Serializable. There are several reasons why you wouldn't necessarily want to implement java.io.Serializable anyway. SDG's @EnablePdx plugs in SDG's MappingPdxSerializer implementation, which is far more powerful than even Apache Geode's/Pivotal GemFire's own ReflectionBasedAutoSerializer.
As a result of serializing the application types (namely, Customer), you will see this output...
14:26:48.322 [main] INFO org.apache.geode.internal.cache.PartitionedRegion - Partitioned Region /Customers is created with prId=2
Started ClusterConfiguredGeodeServerApplication in 4.223 seconds (JVM running for 5.574)
14:26:48.966 [main] INFO org.apache.geode.pdx.internal.PeerTypeRegistration - Adding new type: PdxType[dsid=0, typenum=14762571
name=example.app.spring.cluster_config.server.ClusterConfiguredGeodeServerApplication$Customer
fields=[
id:Object:identity:0:idx0(relativeOffset)=0:idx1(vlfOffsetIndex)=-1
name:String:1:1:idx0(relativeOffset)=0:idx1(vlfOffsetIndex)=1]]
14:26:49.002 [main] INFO org.apache.geode.pdx.internal.TypeRegistry - Caching PdxType[dsid=0, typenum=14762571
name=example.app.spring.cluster_config.server.ClusterConfiguredGeodeServerApplication$Customer
fields=[
id:Object:identity:0:idx0(relativeOffset)=0:idx1(vlfOffsetIndex)=-1
name:String:1:1:idx0(relativeOffset)=0:idx1(vlfOffsetIndex)=1]]
Another reason I enable PDX is so that I do not need to add the Customer class to the Server (i.e. "ServerOne") started using Gfsh. It also allows me to query the "Customers" Region and see that Customer "Jon Doe" was successfully added...
gfsh>describe region --name=Customers
..........................................................
Name : Customers
Data Policy : partition
Hosting Members : ServerOne
Non-Default Attributes Shared By Hosting Members
Type | Name | Value
------ | ----------- | ---------
Region | size | 1
| data-policy | PARTITION
gfsh>
gfsh>query --query="SELECT c.name FROM /Customers c"
Result : true
Limit : 100
Rows : 1
Result
-------
Jon Doe
Bingo! Success!
I am not going to even begin to discuss everything that is wrong with your configuration. I implore you to read the docs (and the appropriate sections of Apache Geode's User Guide), understand the concepts, look at examples and guides, and ask concise questions.
Here is the example source code...
https://github.com/jxblum/contacts-application/blob/master/configuration-example/src/main/java/example/app/spring/cluster_config/server/ClusterConfiguredGeodeServerApplication.java
Hope this helps!
-j

Related

No physical database known of type cassandra - AWS Keyspaces

We are connecting our microservices to AWS Keyspaces (Cassandra) through DBaaS and getting this error:
cloud.dbaas.client.exceptions.CreateDbException: MicroserviceRestClientResponseException{message=404 Not Found: "No physical database known of type cassandra
We see the same error in the DBaaS pod logs.
I have already configured the parameters below:
spring.data.cassandra.ssl
spring.data.cassandra.contact-points
spring.data.cassandra.local-datacenter
spring.data.cassandra.port
spring.data.cassandra.password
spring.data.cassandra.username
You will want to reference the external configuration. See the following Amazon Keyspaces Spring example:
https://github.com/aws-samples/amazon-keyspaces-spring-app-example/
@Configuration
public class AppConfig {

    private final String username = System.getenv("AWS_MCS_SPRING_APP_USERNAME");
    private final String password = System.getenv("AWS_MCS_SPRING_APP_PASSWORD");

    File driverConfig = new File(System.getProperty("user.dir") + "/application.conf");

    @Primary
    @Bean
    public CqlSession session() throws NoSuchAlgorithmException {
        return CqlSession.builder()
            .withConfigLoader(DriverConfigLoader.fromFile(driverConfig))
            .withAuthCredentials(username, password)
            .withSslContext(SSLContext.getDefault())
            .withKeyspace("keyspace_name")
            .build();
    }
}
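The application.conf referenced above is a DataStax Java driver (4.x) configuration file. A minimal sketch for Amazon Keyspaces might look like the following; the service endpoint, port, and datacenter here are placeholder assumptions, so substitute the values for your region:

datastax-java-driver {
    basic.contact-points = ["cassandra.us-east-1.amazonaws.com:9142"]
    basic.load-balancing-policy.local-datacenter = "us-east-1"
    basic.request.consistency = "LOCAL_QUORUM"
    advanced.ssl-engine-factory {
        class = DefaultSslEngineFactory
    }
}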

In-memory H2 database, insert not working in SpringBootTest

I have a Spring Boot application which I wish to test.
Below are the details of my files.
application.properties
PRODUCT_DATABASE_PASSWORD=
PRODUCT_DATABASE_USERNAME=sa
PRODUCT_DATABASE_CONNECTION_URL=jdbc:h2:file:./target/db/testdb
PRODUCT_DATABASE_DRIVER=org.h2.Driver
RED_SHIFT_DATABASE_PASSWORD=
RED_SHIFT_DATABASE_USERNAME=sa
RED_SHIFT_DATABASE_CONNECTION_URL=jdbc:h2:file:./target/db/testdb
RED_SHIFT_DATABASE_DRIVER=org.h2.Driver
spring.datasource.platform=h2
Configuration class:
@SpringBootConfiguration
@SpringBootApplication
@Import({ProductDataAccessConfig.class, RedShiftDataAccessConfig.class})
public class TestConfig {
}
Main test class:
@RunWith(SpringJUnit4ClassRunner.class)
@SpringBootTest(classes = {TestConfig.class, ConfigFileApplicationContextInitializer.class},
    webEnvironment = SpringBootTest.WebEnvironment.NONE)
public class MainTest {

    @Autowired(required = true)
    @Qualifier("dataSourceRedShift")
    private DataSource dataSource;

    @Test
    public void testHourlyBlock() throws Exception {
        insertDataIntoDb(); // data successfully inserted
        SpringApplication.run(Application.class, new String[]{}); // no data found
    }
}
Data access in Application.class:
try (Connection conn = dataSourceRedShift.getConnection();
     Statement stmt = conn.createStatement()) {
    // access inserted data
}
Please help!
PS: for the Spring Boot application, the test beans are being picked up, so bean instantiation is definitely not the problem. I think I am missing some properties.
I do not use Hibernate in my application, and the data goes away even within the same application context (child context), i.e. when I run a Spring Boot application which reads the data inserted earlier.
Problem solved: removing spring.datasource.platform=h2 from application.properties made my H2 data persist.
But I still wish to know how H2 is starting automatically.
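As a side note (an assumption on my part, not something confirmed in this thread): if you ever switch to a purely in-memory H2 URL, data only survives across application contexts when the database is kept open for the life of the JVM, which H2 supports via its DB_CLOSE_DELAY setting. A sketch of such a URL:

# hypothetical alternative: keep the named in-memory database alive for the
# whole JVM so a second context sees previously inserted data
PRODUCT_DATABASE_CONNECTION_URL=jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1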

Hazelcast No DataSerializerFactory registered for namespace: 0 on standalone process

Trying to set up a Hazelcast cluster with tcp-ip enabled on a standalone process.
My class looks like this:
public class Person implements Serializable {

    private static final long serialVersionUID = 1L;

    int personId;
    String name;

    Person() {}

    // getters and setters
}
Hazelcast is loaded as
final Config config = createNewConfig(mapName);
HazelcastInstance node = Hazelcast.newHazelcastInstance(config);

Config createNewConfig(String mapName) {
    final PersonStore personStore = new PersonStore();
    XmlConfigBuilder configBuilder = new XmlConfigBuilder();
    Config config = configBuilder.build();
    config.setClassLoader(LoadAll.class.getClassLoader());
    MapConfig mapConfig = config.getMapConfig(mapName);
    MapStoreConfig mapStoreConfig = new MapStoreConfig();
    mapStoreConfig.setImplementation(personStore);
    mapConfig.setMapStoreConfig(mapStoreConfig); // attach the store to the map config
    return config;
}
and my Hazelcast config has this:
<tcp-ip enabled="true">
    <member>machine-1</member>
    <member>machine-2</member>
</tcp-ip>
Do I need to populate this tag in my xml?
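For context, a tcp-ip join normally sits inside the <network><join> element with multicast explicitly disabled; a minimal sketch using the member names above:

<network>
    <join>
        <multicast enabled="false"/>
        <tcp-ip enabled="true">
            <member>machine-1</member>
            <member>machine-2</member>
        </tcp-ip>
    </join>
</network>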
I get this error when a second instance is brought up
com.hazelcast.nio.serialization.HazelcastSerializationException: No DataSerializerFactory registered for namespace: 0
    at com.hazelcast.nio.serialization.DataSerializer.read(DataSerializer.java:98)
    at com.hazelcast.nio.serialization.DataSerializer.read(DataSerializer.java:39)
    at com.hazelcast.nio.serialization.StreamSerializerAdapter.read(StreamSerializerAdapter.java:41)
    at com.hazelcast.nio.serialization.SerializationServiceImpl.toObject(SerializationServiceImpl.java:276)
Any help is highly appreciated.
Solved my problem: I had a pom.xml with hazelcast-wm, so I did not have the actual Hazelcast jar in my bundled jar. Including that fixed my issue.
Note that this same "No DataSerializerFactory registered for namespace: 0" error message can also occur in an OSGi environment when you're attempting to use more than one Hazelcast instance within the same VM, but initializing the instances from different bundles. The reason is that the com.hazelcast.util.ServiceLoader.findHighestReachableClassLoader() method will sometimes pick the wrong class loader during Hazelcast initialization (it won't always pick the class loader you set on the config), and it then ends up with an empty list of DataSerializerFactory instances (hence the error message that it can't find the requested factory with id 0). The following shows a way to work around that problem by taking advantage of Java's context class loader:
private HazelcastInstance createHazelcastInstance() {
    // Use the following if you're only using the Hazelcast data serializers
    final ClassLoader classLoader = Hazelcast.class.getClassLoader();
    // Use the following if you have custom data serializers that you need
    // final ClassLoader classLoader = this.getClass().getClassLoader();
    final com.hazelcast.config.Config config = new com.hazelcast.config.Config();
    config.setClassLoader(classLoader);
    final ClassLoader previousContextClassLoader = Thread.currentThread().getContextClassLoader();
    try {
        Thread.currentThread().setContextClassLoader(classLoader);
        return Hazelcast.newHazelcastInstance(config);
    } finally {
        if (previousContextClassLoader != null) {
            Thread.currentThread().setContextClassLoader(previousContextClassLoader);
        }
    }
}

ejb3.1 @Startup.. @Singleton .. @PostConstruct read from XML the Objects

I need to initialize a set of static String values stored in an XML file [I know this is against the EJB spec], as shown below, since the overall idea is to not hardcode the JNDI info within EJBs.
Utils.xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
    <entry key="jndidb">java:jdbc/MYSQLDB10</entry>
    <entry key="jndimdbque">java:jms/QueueName/remote</entry>
    <entry key="jndi1">DBConnections/remote</entry>
    <entry key="jndi2">AddressBean/remote</entry>
</properties>
The on-load EJB server startup code is as follows; inpstrem = clds.getClassLoaders(flename) reads Utils.xml and stores its contents in a Hashtable as key/value pairs.
package com.ejb.utils;

import java.io.InputStream;
import java.util.Enumeration;
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;
import java.util.Properties;
import java.util.TreeMap;
import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import javax.ejb.ConcurrencyManagement;
import javax.ejb.Singleton;
import javax.ejb.Startup;

@Singleton
@Startup
public class StartupUtils {

    private final String INITFILENAME = "/System/Config/Utils.xml";

    private static Hashtable HTINITFLENME = null, HTERRINITFLENME = null, HTCMMNFLENME = null;

    public StartupUtils() {
        HTINITFLENME = new Hashtable();
        HTERRINITFLENME = new Hashtable();
    }

    public void printAll(Hashtable htcmmnflenme) {
        Enumeration ENUMK = null;
        String KEY = "", VALUE = "";
        ENUMK = htcmmnflenme.keys();
        while (ENUMK.hasMoreElements()) {
            KEY = null; VALUE = null;
            KEY = ENUMK.nextElement().toString().trim();
            VALUE = htcmmnflenme.get(KEY).toString().trim();
            InitLogDisplay(KEY + " :::: " + VALUE);
        }
    }

    public static void InitLogDisplay(String Datadisplay) {
        System.out.println(Datadisplay);
    }

    public Hashtable getDataProp(String flename) {
        Map htData = null;
        InputStream inpstrem = null;
        ClassLoaders clds = null;
        Enumeration enumk = null, vals = null;
        String key = "", value = "";
        Properties props = null;
        Hashtable htx = null;
        try {
            clds = new ClassLoaders();
            inpstrem = clds.getClassLoaders(flename);
            props = new Properties();
            props.loadFromXML(inpstrem);
            enumk = props.keys();
            vals = props.elements();
            htData = new TreeMap();
            while (enumk.hasMoreElements()) {
                key = enumk.nextElement().toString().trim();
                value = vals.nextElement().toString().trim();
                htData.put(key, value);
            }
            inpstrem.close();
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            key = ""; value = "";
            enumk = null; vals = null;
            clds = null;
            props = null;
        }
        htx = new Hashtable();
        htx.putAll(htData);
        return htx;
    }

    public void setUtilsPropDetails() {
        HTINITFLENME = getDataProp(INITFILENAME);
        this.printAll(HTINITFLENME);
    }

    public static Hashtable getUtilsPropDetails() {
        return HTINITFLENME;
    }

    @PostConstruct
    public void startOnstartup() {
        this.setUtilsPropDetails();
    }

    @PreDestroy
    public void startOnshutdown() {
        try {
            this.finalize();
        } catch (Throwable e) {
            e.printStackTrace();
        }
    }
}
On startup of the EJB server, "this.printAll(HTINITFLENME);" prints the key values of the XML file; however, if an external call is made via any other EJB to the method "getUtilsPropDetails()", it does not return the key values.
Am I doing something wrong?
Have you considered using the deployment descriptor and having the container do this work for you?
There are of course <resource-ref>, <resource-env-ref>, <ejb-ref> and <env-entry> elements to cover externally configuring which things should be made available to the bean for lookup. For example:
<resource-ref>
    <res-ref-name>db</res-ref-name>
    <res-type>javax.sql.DataSource</res-type>
    <mapped-name>java:jdbc/MYSQLDB10</mapped-name>
</resource-ref>
I'm not sure how your vendor handles mapped-name (that particular element is vendor specific), but there will be an equivalent syntax to specify the datasource you want.
The singleton can then lookup java:comp/env/db and return the datasource to other EJBs.
If you are in a compliant Java EE 6 server, then you can change the name to <res-ref-name>java:app/db</res-ref-name> and then anyone in the app can lookup the datasource without the need to get it from the singleton. Global JNDI is a standard feature of Java EE 6 and designed for exactly this.
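For instance, with that name in place, any component could look up the datasource directly (a sketch; imports and exception handling omitted):

DataSource dataSource = (DataSource) new InitialContext().lookup("java:app/db");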
You can put those elements in the ejb-jar.xml, web.xml or application.xml. Putting them in the application.xml will make the one entry available to the entire application and give you one place to maintain everything.
Global resources can also be injected via:
@Resource(name = "java:app/db")
DataSource dataSource;
If for some reason you didn't want to use those, at the very least you could use the <env-entry> element to externalize the strings.
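A minimal <env-entry> sketch for one of the strings from the question's Utils.xml (name and value borrowed from there) would be:

<env-entry>
    <env-entry-name>jndidb</env-entry-name>
    <env-entry-type>java.lang.String</env-entry-type>
    <env-entry-value>java:jdbc/MYSQLDB10</env-entry-value>
</env-entry>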
EDIT
See this other answer for a much more complete description of JNDI as it pertains to simple types. This of course can be done where the name/value pairs are not simple types and instead are more complex types like DataSource and Topic or Queue
For example:
<resource-ref>
    <res-ref-name>myDataSource</res-ref-name>
    <res-type>javax.sql.DataSource</res-type>
</resource-ref>
<resource-ref>
    <res-ref-name>myJmsConnectionFactory</res-ref-name>
    <res-type>javax.jms.ConnectionFactory</res-type>
</resource-ref>
<resource-ref>
    <res-ref-name>myQueueCF</res-ref-name>
    <res-type>javax.jms.QueueConnectionFactory</res-type>
</resource-ref>
<resource-ref>
    <res-ref-name>myTopicCF</res-ref-name>
    <res-type>javax.jms.TopicConnectionFactory</res-type>
</resource-ref>
<resource-env-ref>
    <resource-env-ref-name>myQueue</resource-env-ref-name>
    <resource-env-ref-type>javax.jms.Queue</resource-env-ref-type>
</resource-env-ref>
<resource-env-ref>
    <resource-env-ref-name>myTopic</resource-env-ref-name>
    <resource-env-ref-type>javax.jms.Topic</resource-env-ref-type>
</resource-env-ref>
<persistence-context-ref>
    <persistence-context-ref-name>myEntityManager</persistence-context-ref-name>
    <persistence-unit-name>test-unit</persistence-unit-name>
</persistence-context-ref>
<persistence-unit-ref>
    <persistence-unit-ref-name>myEntityManagerFactory</persistence-unit-ref-name>
    <persistence-unit-name>test-unit</persistence-unit-name>
</persistence-unit-ref>
See the JNDI and simple types answer for look and injection syntax.
I see the name and type, but where's the value?
Configuring what actual things these names refer to has historically been done in a separate vendor specific deployment descriptor, such as sun-ejb-jar.xml or openejb-jar.xml or whatever that vendor requires. The vendor-specific descriptor and the standard ejb-jar.xml descriptor combined provide the guaranteed portability apps require.
The ejb-jar.xml file offers only standard things, like being able to say what types of resources the application requires and what names the application has chosen to use to refer to those resources. The vendor-specific descriptor fills the gap of mapping those names to actual resources in the system.
As of EJB 3.0/Java EE 5, we on the spec groups departed from that slightly and added the <mapped-name> element which can be used in the ejb-jar.xml with any of the references shown above, such as <resource-ref>, to the vendor-specific name. Mapped name will never be portable and its value will always be vendor-specific -- if it is supported at all.
That said, <mapped-name> can be convenient in avoiding the need for a separate vendor-specific file and achieves the goal of getting vendors-specific names out of code. After all, the ejb-jar.xml can be edited when moving from one vendor to another and for many people that's good enough.

How to deploy a Configuration to the JNDI tree of the GlassFish server

I want to deploy a HashMap of configuration to the JNDI tree of the GlassFish server. I am migrating a framework from WebLogic to GlassFish. Previously it was done via the following code, where Environment is weblogic.jndi.Environment:
public void deployConfiguration(HashMap configuration) throws GenericFrameworkException {
    Context ictx = null;
    String configParameter = null;
    Environment env = new Environment();
    env.setReplicateBindings(false);
    // get the NOT replicating initial context of this server
    ictx = ServiceLocator.getNotReplicatingInitialContext();
    if (ictx != null) {
        Set e = configuration.keySet();
        Iterator iter = e.iterator();
        while (iter.hasNext()) {
            configParameter = (String) iter.next();
            this.addParameter(
                ictx,
                Constants.JNDI_SUB_PATH,
                configParameter,
                configuration.get(configParameter));
        }
    }
}
Can anyone suggest how this can be achieved in GlassFish?
Thanks in advance.
It seems as if you are looking for custom jndi resources:
http://docs.oracle.com/cd/E26576_01/doc.312/e24930/jndi.htm#beanz
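As a sketch of that approach (the resource JNDI name and key/value pairs below are placeholders), a custom resource backed by GlassFish's PropertiesFactory can expose name/value pairs in the JNDI tree:

asadmin create-custom-resource \
  --restype=java.util.Properties \
  --factoryclass=org.glassfish.resources.custom.factory.PropertiesFactory \
  --property key1=value1:key2=value2 \
  config/frameworkConfig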