No physical database known of type cassandra - AWS Keyspaces - amazon-eks

We are connecting our microservices to AWS Keyspaces (Cassandra) through DBaaS and are getting this error:
cloud.dbaas.client.exceptions.CreateDbException: MicroserviceRestClientResponseException{message=404 Not Found: "No physical database known of type cassandra
The same error appears in the DBaaS pod logs.
I have already configured the parameters below:
spring.data.cassandra.ssl
spring.data.cassandra.contact-points
spring.data.cassandra.local-datacenter
spring.data.cassandra.port
spring.data.cassandra.password
spring.data.cassandra.username

You will want to reference the external driver configuration. See the following Amazon Keyspaces Spring example:
https://github.com/aws-samples/amazon-keyspaces-spring-app-example/
// Imports assumed: com.datastax.oss.driver.api.core.CqlSession,
// com.datastax.oss.driver.api.core.config.DriverConfigLoader,
// javax.net.ssl.SSLContext, java.io.File, java.security.NoSuchAlgorithmException,
// org.springframework.context.annotation.*
@Configuration
public class AppConfig {

    private final String username = System.getenv("AWS_MCS_SPRING_APP_USERNAME");
    private final String password = System.getenv("AWS_MCS_SPRING_APP_PASSWORD");

    // External DataStax driver configuration file
    private final File driverConfig = new File(System.getProperty("user.dir") + "/application.conf");

    @Bean
    @Primary
    public CqlSession session() throws NoSuchAlgorithmException {
        return CqlSession.builder()
            .withConfigLoader(DriverConfigLoader.fromFile(driverConfig))
            .withAuthCredentials(username, password)
            .withSslContext(SSLContext.getDefault())
            .withKeyspace("keyspace_name")
            .build();
    }
}
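The sample above reads the DataStax driver settings from an external application.conf. A minimal sketch of that file for Amazon Keyspaces might look like the following; the region endpoint, datacenter name, and truststore details are assumptions you must replace with your own:

# Hypothetical DataStax driver configuration for Amazon Keyspaces
datastax-java-driver {
    basic.contact-points = ["cassandra.us-east-1.amazonaws.com:9142"]
    basic.load-balancing-policy.local-datacenter = "us-east-1"
    advanced.ssl-engine-factory {
        class = DefaultSslEngineFactory
        truststore-path = "./cassandra_truststore.jks"
        truststore-password = "changeit"
    }
}

With this file in the working directory, the DriverConfigLoader.fromFile(driverConfig) call in the config class above will pick it up.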

Related

The way to use Redis in Apache Flink

I am using Flink and want to insert result values into Redis.
When I googled Redis, I found the redis-connector included in Apache Bahir.
So I am able to insert result values into Redis using the redis-connector from Apache Bahir.
However, I think I should also be able to connect to Redis using Jedis.
I ran an experiment showing that I could connect to Redis and see the inserted values using Jedis, as shown in the code below.
DataStream<String> messageStream = env
    .addSource(new FlinkKafkaConsumer<>(flinkParams.getRequired("topic"), new SimpleStringSchema(), flinkParams.getProperties()))
    .setParallelism(Math.min(hosts * cores, kafkaPartitions));

messageStream.keyBy(new KeySelector<String, String>() {
    @Override
    public String getKey(String s) throws Exception {
        return s;
    }
}).flatMap(new RedisConnector());
In the RedisConnector module, without the redis-connector from Apache Bahir, I also successfully connected to Redis and found the messages processed by Flink.
The example code is shown below:
public class ProcessorCommon {

    private static final Logger logger = LoggerFactory.getLogger(ProcessorCommon.class);

    private Jedis jedis;
    private Set<DummyPair> dummy;

    public ProcessorCommon(String redisServerHostName) {
        this.jedis = new Jedis(redisServerHostName);
    }

    public void writeToRedis(String key, String value) {
        this.jedis.set(key, value);
    }

    public String getFromRedis(String key) {
        return this.jedis.get(key);
    }

    public void close() {
        this.jedis.close();
    }
}
So I am wondering what the difference is between using the redis-connector from Bahir and using Jedis directly.
There is currently no real Redis connector maintained by the Flink community. The Redis connector in Bahir is rather outdated. There is a new Redis Streams connector in the works, which can be found at https://github.com/apache/flink-connector-redis-streams
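For context on the difference: the Bahir connector is a ready-made sink that manages the connection lifecycle for you, whereas with Jedis you manage it yourself. If you go the Jedis route, a minimal sketch (the class name, host, and key/value choice are placeholders, not code from the question) is to open and close the client in a RichFlatMapFunction so that each parallel task owns its connection:

import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.util.Collector;
import redis.clients.jedis.Jedis;

// Hypothetical sink-like function: writes each record to Redis keyed by itself.
public class JedisFlatMap extends RichFlatMapFunction<String, String> {

    private transient Jedis jedis;

    @Override
    public void open(Configuration parameters) {
        // One connection per parallel task instance
        jedis = new Jedis("redis-host"); // placeholder host
    }

    @Override
    public void flatMap(String value, Collector<String> out) {
        jedis.set(value, value);
        out.collect(value);
    }

    @Override
    public void close() {
        if (jedis != null) {
            jedis.close();
        }
    }
}

The same lifecycle could also wrap the ProcessorCommon class from the question, constructing it in open() and releasing it in close().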

Flink statefun and confluent schema registry compatibility

I'm trying to egress to Confluent Kafka from Flink Statefun. According to the Confluent git repo, in order to do a schema check and put data on a Kafka topic, all we need to do is use the Kafka client's ProducerRecord object with an Avro object.
But in Statefun we need to override the "ProducerRecord<byte[], byte[]> serialize" method for the Kafka egress. This causes the following error:
Caused by: org.apache.kafka.common.errors.SerializationException: Error registering Avro schema: "bytes"
The Schema Registry and the Statefun Kafka egress seem to be incompatible. Are there any workarounds?
It is possible to use the Confluent Schema Registry with a Statefun egress.
In order to do so, you first register your schema manually with the schema registry and then supply the KafkaEgressSerializer a byte[] serialized by a KafkaAvroSerializer instance.
The code below is the gist of it and follows the first of Igal's workaround suggestions:
public class SpecificRecordFromAvroSchemaSerializer implements KafkaEgressSerializer<SpecificRecordGeneratedFromAvroSchema> {

    private static String KAFKA_TOPIC = "kafka_topic";

    private static CachedSchemaRegistryClient schemaRegistryClient = new CachedSchemaRegistryClient(
        "http://schema-registry:8081",
        1_000
    );

    private static KafkaAvroSerializer kafkaAvroSerializer = new KafkaAvroSerializer(schemaRegistryClient);

    static {
        try {
            schemaRegistryClient.register(
                KAFKA_TOPIC + "-value", // assuming subject name strategy is TopicNameStrategy (default)
                SpecificRecordGeneratedFromAvroSchema.getClassSchema()
            );
        } catch (IOException e) {
            e.printStackTrace();
        } catch (RestClientException e) {
            e.printStackTrace();
        }
    }

    @Override
    public ProducerRecord<byte[], byte[]> serialize(SpecificRecordGeneratedFromAvroSchema specificRecordGeneratedFromAvroSchema) {
        byte[] valueData = kafkaAvroSerializer.serialize(
            KAFKA_TOPIC,
            specificRecordGeneratedFromAvroSchema
        );

        return new ProducerRecord<>(
            KAFKA_TOPIC,
            String.valueOf(System.currentTimeMillis()).getBytes(),
            valueData
        );
    }
}
The schema registry is not directly supported in this version of Stateful Functions, but a few workarounds are possible:
Connect to the schema registry yourself from the KafkaEgressSerializer class. In your linked example, that would need to happen here.
Provide your own instance of a FlinkKafkaProducer that is based on your own serialization schema (see AvroDeserializationSchema).
Manage the schemas outside of Stateful Functions, but serialize your Avro record to bytes; a sketch follows below. Make sure to remove the schema registry from the properties being passed to the KafkaProducer.
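For the third workaround, a minimal sketch of serializing a generated Avro record to bytes without the registry could look like this (the helper class and its name are hypothetical):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.EncoderFactory;
import org.apache.avro.specific.SpecificDatumWriter;
import org.apache.avro.specific.SpecificRecordBase;

public final class AvroBytes {

    // Serializes any generated Avro record using only the schema compiled into the class.
    public static <T extends SpecificRecordBase> byte[] toBytes(T record) throws IOException {
        SpecificDatumWriter<T> writer = new SpecificDatumWriter<>(record.getSchema());
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
        writer.write(record, encoder);
        encoder.flush();
        return out.toByteArray();
    }
}

Note that bytes produced this way lack the Confluent wire-format header (magic byte plus schema id), which is the point: the consumer must then know the schema by some means other than the registry.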

Pivotal GemFire cannot see cached data in Gfsh or Pulse

I have created a Spring Boot application with a Geode/GemFire cache. I would like to connect my Spring Boot application to a Gfsh-created Region. In my application-context.xml I'm using a gfe:lookup-region with the id of the Gfsh-created Region.
In my Java configuration file I use a LookupRegionFactoryBean to get a reference to the externally defined Region.
In my SpringBootApplication bootstrap class I write to my Repository successfully, as I can read back all the objects that I saved. But using the Gfsh tool or the Pulse tool I cannot see my cached data records (or even a count of them).
Could you provide some insight here? Also, I tried using the LocalRegionFactoryBean in my configuration file, but that approach did not work either.
Thanks.
application-context.xml:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:gfe-data="http://www.springframework.org/schema/data/gemfire"
       xmlns:gfe="http://www.springframework.org/schema/gemfire"
       xmlns:util="http://www.springframework.org/schema/util"
       xsi:schemaLocation="http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://www.springframework.org/schema/data/gemfire http://www.springframework.org/schema/data/gemfire/spring-data-gemfire.xsd http://www.springframework.org/schema/gemfire http://www.springframework.org/schema/gemfire/spring-gemfire.xsd">

    <context:component-scan base-package="com.example.geode"/>

    <util:properties id="gemfireProperties" location="context:geode.properties"/>

    <!-- <context:property-placeholder location="context:geode.properties"/>
    <bean id="log-level"><property name="log-level" value="${log-level}"/></bean>
    <bean id="mcast-port"><property name="mcast-port" value="${mcast-port}"/></bean>
    <bean id="name"><property name="name" value="${name}"/></bean> -->

    <gfe:annotation-driven/>

    <gfe-data:function-executions base-package="com.example.geode.config"/>

    <!-- Declare GemFire Cache -->
    <!-- <gfe:cache/> -->
    <gfe:cache properties-ref="gemfireProperties"/>

    <!-- Local region used by the Message -->
    <!-- <gfe:replicated-region id="employee" value-constraint="com.example.geode.model.Employee" data-policy="REPLICATE"/> -->
    <gfe:lookup-region id="employee" value-constraint="com.example.geode.model.Employee" data-policy="REPLICATE"/>
    <!-- <gfe:local-region id="employee" value-constraint="com.example.geode.model.Employee" data-policy="REPLICATE"/> -->

    <!-- Search for GemFire repositories -->
    <gfe-data:repositories base-package="com.example.geode.repository"/>
</beans>
GeodeConfiguration.java:
//imports not included
@Configuration
@ComponentScan
@EnableCaching
@EnableGemfireRepositories //(basePackages = "com.example.geode.repository")
@EnableGemfireFunctions
@EnableGemfireFunctionExecutions //(basePackages = "com.example.geode.function")
@PropertySource("classpath:geode.properties")
public class GeodeConfiguration {

    @Autowired
    private EmployeeRepository employeeRepository;

    @Autowired
    private FunctionExecution functionExecution;

    @Value("${log-level}")
    private String loglevel;

    @Value("${mcast-port}")
    private String mcastPort;

    @Value("${name}")
    private String name;

    Properties gemfireProperties() {
        Properties gemfireProperties = new Properties();
        gemfireProperties.setProperty(loglevel, loglevel);
        gemfireProperties.setProperty(mcastPort, mcastPort);
        gemfireProperties.setProperty(name, name);
        return gemfireProperties;
    }

    @Bean
    CacheFactoryBean gemfireCache() {
        return new CacheFactoryBean();
    }

    @Bean
    GemfireCacheManager cacheManager() {
        GemfireCacheManager cacheManager = new GemfireCacheManager();
        try {
            CacheFactoryBean cacheFactory = gemfireCache();
            //gemfireProperties();
            //cacheFactory.setProperties(gemfireProperties());
            cacheManager.setCache(cacheFactory.getObject()); //gemfireCache().getObject());
        } catch (Exception ex) {
            ex.printStackTrace();
        }
        return cacheManager;
    }

    @Bean(name = "employee")
    //@Autowired
    LookupRegionFactoryBean<String, Employee> getRegion(final GemFireCache cache) throws Exception {
        //CacheTypeAwareRegionFactoryBean<String, Employee> region = new CacheTypeAwareRegionFactoryBean<>(); //GenericRegionFactoryBean<> //LocalRegionFactoryBean<>
        LookupRegionFactoryBean<String, Employee> region = new LookupRegionFactoryBean<>();
        region.setRegionName("employee");
        try {
            region.setCache(gemfireCache().getObject());
        } catch (Exception e) {
            e.printStackTrace();
        }
        //region.setClose(false);
        region.setName("employee");
        //region.setAsyncEventQueues(new AsyncEventQueue[]{gemfireQueue});
        //region.setPersistent(false);
        //region.setDataPolicy(org.apache.geode.cache.DataPolicy.REPLICATE); //PRELOADED); //REPLICATE);
        region.afterPropertiesSet();
        return region;
    }
}
BasicGeodeApplication.java:
//imports not provided
@EnableGemfireRepositories
@SpringBootApplication
@ComponentScan("com.example.geode")
//@EnableCaching
@EnableGemfireCaching
@EnableEntityDefinedRegions(basePackageClasses = Employee.class)
@SuppressWarnings("unused")
//@CacheServerApplication(name = "server2", locators = "localhost[10334]",
//    autoStartup = true, port = 41414)
public class BasicGeodeApplication {

    @Autowired
    private EmployeeRepository employeeRepository;

    @Autowired
    private EmployeeService employeeService;

    private static ConfigurableApplicationContext context;

    public static void main(String[] args) {
        context = SpringApplication.run(BasicGeodeApplication.class, args);
        BasicGeodeApplication bga = new BasicGeodeApplication();
    }

    @Bean
    public ApplicationRunner run(EmployeeRepository employeeRepository) {
        return args -> {
            Employee bob = new Employee("Bob", 80.0);
            Employee sue = new Employee("Susan", 95.0);
            Employee jane = new Employee("Jane", 85.0);
            Employee jack = new Employee("Jack", 90.0);
            List<Employee> employees = Arrays.asList(bob, sue, jane, jack);
            employees.sort(Comparator.comparing(Employee::getName));
            for (Employee employee : employees) {
                //employeeService.saveEmployee(employee);
                employeeRepository.save(employee);
            }
            System.out.println("\nList of employees:");
            employees.forEach(person -> System.out.println("\t" + employeeRepository.findByName(person.getName())));
            System.out.println("\nQuery salary greater than 80k:");
            stream(employeeRepository.findBySalaryGreaterThan(80.0).spliterator(), false)
                .forEach(person -> System.out.println("\t" + person));
            System.out.println("\nQuery salary less than 95k:");
            stream(employeeRepository.findBySalaryLessThan(95.0).spliterator(), false)
                .forEach(person -> System.out.println("\t" + person));
            System.out.println("\nQuery salary greater than 80k and less than 95k:");
            stream(employeeRepository.findBySalaryGreaterThanAndSalaryLessThan(80.0, 95.0).spliterator(), false)
                .forEach(person -> System.out.println("\t" + person));
        };
    }

    @Service
    class EmployeeService {

        @Autowired
        private EmployeeRepository employeeRepository;

        @CachePut(cacheNames = "employee", key = "#id")
        //@PutMapping("/")
        void saveEmployee(Employee employee) {
            employeeRepository.save(employee);
        }

        Employee findEmployee(String name) {
            return null;
        }
    }
}
EmployeeRepository.java:
#Repository("employeeRepository")
//#DependsOn("gemfireCache")
public interface EmployeeRepository extends CrudRepository<Employee, String> {
Employee findByName(String name);
Iterable<Employee> findBySalaryGreaterThan(double salary);
Iterable<Employee> findBySalaryLessThan(double salary);
Iterable<Employee> findBySalaryGreaterThanAndSalaryLessThan(double salary1, double salary2);
}
Employee.java:
// imports not included
@Entity
@Region("employee")
public class Employee implements Serializable {

    private static final long serialVersionUID = 1L;

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    @javax.persistence.Id
    private Long id;

    public String name;
    public double salary;

    protected Employee() {}

    @PersistenceConstructor
    public Employee(String name, double salary) {
        this.name = name;
        this.salary = salary;
    }

    @Override
    public String toString() {
        return name + " salary is: " + salary;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public double getSalary() {
        return salary;
    }

    public void setSalary(double salary) {
        this.salary = salary;
    }
}
I'd be very surprised if your Spring Boot application was actually using the Region(s) created in Gfsh after attempting a lookup. This is also evident from the fact that you cannot see data in the Region(s) when using Gfsh or Pulse after running your application.
In addition, it is not entirely apparent from your configuration whether your Spring Boot application is joining a cluster started with Gfsh since you have not shared the contents of your geode.properties file.
NOTE: We will come back to your properties file in a moment, since how you reference it (i.e. with context: in the location attribute of the <util> element from the Spring Util schema) is not even correct. In fact, your entire XML file is not even valid! Bean definitions without a class are not valid unless they are abstract.
What version of Spring Boot and Spring Data Geode/GemFire (SDG) are you using?
Judging by the Spring XML schema references in your XML configuration file, you are using Spring 3.0!? You should not reference versions in your schema location declarations. An unversioned schema location will resolve to the version of the Spring JARs on your application classpath (e.g. pulled in by Maven).
Anyway, the reason I ask is, I have made many changes to SDG causing it to fail-fast in the event that the configuration is ambiguous or incomplete (for instance).
In your case, the lookup would have failed instantly if the Region(s) did not exist. And, I am quite certain the Region(s) do not exist because SDG disables Cluster Configuration on a peer cache node/application, by default. Therefore, none of the Region(s) you created in Gfsh are immediately available to your Spring Boot application.
So, let's walk through a simple example. I will mostly use SDG's new annotation configuration model mixed with some Java config, for ease of use and convenience. I encourage you to read this chapter in SDG's Reference Guide, given that your configuration is all over the place and I am pretty certain you are confusing what is actually happening.
Here is my Spring Boot, Apache Geode application...
@SpringBootApplication
@SuppressWarnings("unused")
public class ClusterConfiguredGeodeServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(ClusterConfiguredGeodeServerApplication.class, args);
    }

    @Bean
    ApplicationRunner runner(GemfireTemplate customersTemplate) {
        return args -> {
            Customer jonDoe = Customer.named("Jon Doe").identifiedBy(1L);
            customersTemplate.put(jonDoe.getId(), jonDoe);
        };
    }

    @PeerCacheApplication(name = "ClusterConfiguredGeodeServerApplication")
    @EnablePdx
    static class GeodeConfiguration {

        @Bean("Customers")
        LookupRegionFactoryBean<Long, Customer> customersRegion(GemFireCache gemfireCache) {
            LookupRegionFactoryBean<Long, Customer> customersRegion = new LookupRegionFactoryBean<>();
            customersRegion.setCache(gemfireCache);
            return customersRegion;
        }

        @Bean("CustomersTemplate")
        GemfireTemplate customersTemplate(@Qualifier("Customers") Region<?, ?> customers) {
            return new GemfireTemplate(customers);
        }
    }

    @Data
    @RequiredArgsConstructor(staticName = "named")
    static class Customer {

        @Id
        private Long id;

        @NonNull
        private String name;

        Customer identifiedBy(Long id) {
            this.id = id;
            return this;
        }
    }
}
I am using Spring Data Lovelace RC1 (which includes Spring Data for Apache Geode 2.1.0.RC1). I am also using Spring Boot 2.0.3.RELEASE, which pulls in core Spring Framework 5.0.7.RELEASE, all on Java 8.
I omitted package and import declarations.
I am using Project Lombok to define my Customer class.
I have a nested GeodeConfiguration class to configure the Spring Boot application as a "peer" Cache member capable of joining an Apache Geode cluster. However, it is not part of any cluster yet!
Finally, I have configured a "Customers" Region, which is "looked" up in the Spring context.
When I start this application, it fails because there is no "Customers" Region defined yet, by any means...
Caused by: org.springframework.beans.factory.BeanInitializationException: Region [Customers] in Cache [GemFireCache[id = 2126876651; isClosing = false; isShutDownAll = false; created = Thu Aug 02 13:43:07 PDT 2018; server = false; copyOnRead = false; lockLease = 120; lockTimeout = 60]] not found
at org.springframework.data.gemfire.ResolvableRegionFactoryBean.createRegion(ResolvableRegionFactoryBean.java:146) ~[spring-data-geode-2.1.0.RC1.jar:2.1.0.RC1]
at org.springframework.data.gemfire.ResolvableRegionFactoryBean.afterPropertiesSet(ResolvableRegionFactoryBean.java:96) ~[spring-data-geode-2.1.0.RC1.jar:2.1.0.RC1]
at org.springframework.data.gemfire.LookupRegionFactoryBean.afterPropertiesSet(LookupRegionFactoryBean.java:72) ~[spring-data-geode-2.1.0.RC1.jar:2.1.0.RC1]
...
This is expected!
Alright, let's go into Gfsh and start a cluster.
You know that you need to start a Locator with a Server to form a cluster, right? A Locator is used by other potential peers attempting to join the cluster so they can locate the cluster in the first place. The Server is needed in order to create the "Customers" Region since you cannot create a Region on a Locator.
$ echo $GEODE_HOME
/Users/jblum/pivdev/apache-geode-1.6.0
$ gfsh
_________________________ __
/ _____/ ______/ ______/ /____/ /
/ / __/ /___ /_____ / _____ /
/ /__/ / ____/ _____/ / / / /
/______/_/ /______/_/ /_/ 1.6.0
Monitor and Manage Apache Geode
gfsh>start locator --name=LocaorOne --log-level=config
Starting a Geode Locator in /Users/jblum/pivdev/lab/LocaorOne...
....
Locator in /Users/jblum/pivdev/lab/LocaorOne on 10.0.0.121[10334] as LocaorOne is currently online.
Process ID: 41758
Uptime: 5 seconds
Geode Version: 1.6.0
Java Version: 1.8.0_152
Log File: /Users/jblum/pivdev/lab/LocaorOne/LocaorOne.log
JVM Arguments: -Dgemfire.enable-cluster-configuration=true -Dgemfire.load-cluster-configuration-from-dir=false -Dgemfire.log-level=config -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /Users/jblum/pivdev/apache-geode-1.6.0/lib/geode-core-1.6.0.jar:/Users/jblum/pivdev/apache-geode-1.6.0/lib/geode-dependencies.jar
Successfully connected to: JMX Manager [host=10.0.0.121, port=1099]
Cluster configuration service is up and running.
gfsh>start server --name=ServerOne --log-level=config
Starting a Geode Server in /Users/jblum/pivdev/lab/ServerOne...
...
Server in /Users/jblum/pivdev/lab/ServerOne on 10.0.0.121[40404] as ServerOne is currently online.
Process ID: 41785
Uptime: 3 seconds
Geode Version: 1.6.0
Java Version: 1.8.0_152
Log File: /Users/jblum/pivdev/lab/ServerOne/ServerOne.log
JVM Arguments: -Dgemfire.default.locators=10.0.0.121[10334] -Dgemfire.start-dev-rest-api=false -Dgemfire.use-cluster-configuration=true -Dgemfire.log-level=config -XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /Users/jblum/pivdev/apache-geode-1.6.0/lib/geode-core-1.6.0.jar:/Users/jblum/pivdev/apache-geode-1.6.0/lib/geode-dependencies.jar
gfsh>list members
Name | Id
--------- | --------------------------------------------------------------
LocaorOne | 10.0.0.121(LocaorOne:41758:locator)<ec><v0>:1024 [Coordinator]
ServerOne | 10.0.0.121(ServerOne:41785)<v1>:1025
gfsh>list regions
No Regions Found
gfsh>create region --name=Customers --type=PARTITION --key-constraint=java.lang.Long --value-constraint=java.lang.Object
Member | Status
--------- | ------------------------------------------
ServerOne | Region "/Customers" created on "ServerOne"
gfsh>list regions
List of regions
---------------
Customers
gfsh>describe region --name=Customers
..........................................................
Name : Customers
Data Policy : partition
Hosting Members : ServerOne
Non-Default Attributes Shared By Hosting Members
Type | Name | Value
------ | ----------- | ---------
Region | size | 0
| data-policy | PARTITION
Now, even if I run the Spring Boot application again, it will still fail with the same Exception...
Caused by: org.springframework.beans.factory.BeanInitializationException: Region [Customers] in Cache [GemFireCache[id = 989520513; isClosing = false; isShutDownAll = false; created = Thu Aug 02 14:09:25 PDT 2018; server = false; copyOnRead = false; lockLease = 120; lockTimeout = 60]] not found
at org.springframework.data.gemfire.ResolvableRegionFactoryBean.createRegion(ResolvableRegionFactoryBean.java:146) ~[spring-data-geode-2.1.0.RC1.jar:2.1.0.RC1]
at org.springframework.data.gemfire.ResolvableRegionFactoryBean.afterPropertiesSet(ResolvableRegionFactoryBean.java:96) ~[spring-data-geode-2.1.0.RC1.jar:2.1.0.RC1]
at org.springframework.data.gemfire.LookupRegionFactoryBean.afterPropertiesSet(LookupRegionFactoryBean.java:72) ~[spring-data-geode-2.1.0.RC1.jar:2.1.0.RC1]
...
Why?
This is because 1) the Spring Boot, Apache Geode peer Cache application is not part of the cluster (yet), and 2) by default, SDG does not allow a Spring configured/bootstrapped Apache Geode peer Cache application to get its configuration from the cluster (specifically, from the Cluster Configuration Service). Therefore, we have to configure/enable both things.
We can have our Spring Boot, Apache Geode peer Cache application join the cluster by specifying the locators attribute of the @PeerCacheApplication annotation as localhost[10334]:
@PeerCacheApplication(name = "...", locators = "localhost[10334]")
We can have our Spring Boot, Apache Geode peer Cache application get its configuration from the cluster by enabling the useClusterConfiguration property, which we do by adding the following Configurer bean definition to our inner, static GeodeConfiguration class, as follows:
@Bean
PeerCacheConfigurer useClusterConfigurationConfigurer() {
    return (beanName, cacheFactoryBean) -> cacheFactoryBean.setUseClusterConfiguration(true);
}
Now, when we run our Spring Boot, Apache Geode peer Cache application again, we see quite different output. First, see that our peer member (app) gets the cluster configuration...
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<cache xmlns="http://geode.apache.org/schema/cache" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" copy-on-read="false" is-server="false" lock-lease="120" lock-timeout="60" search-timeout="300" version="1.0" xsi:schemaLocation="http://geode.apache.org/schema/cache http://geode.apache.org/schema/cache/cache-1.0.xsd">
<region name="Customers">
<region-attributes data-policy="partition">
<key-constraint>java.lang.Long</key-constraint>
<value-constraint>java.lang.Object</value-constraint>
</region-attributes>
</region>
</cache>
Next, you may have noticed that I enabled PDX, using SDG's @EnablePdx annotation. This allows us to easily serialize our application domain model object types (e.g. Customer) without our types unduly needing to implement java.io.Serializable. There are several reasons why you wouldn't necessarily want to implement java.io.Serializable anyway. SDG's @EnablePdx uses SDG's MappingPdxSerializer implementation, which is far more powerful than even Apache Geode's/Pivotal GemFire's own ReflectionBasedAutoSerializer.
As a result of serializing the application types (namely, Customer), you will see this output...
14:26:48.322 [main] INFO org.apache.geode.internal.cache.PartitionedRegion - Partitioned Region /Customers is created with prId=2
Started ClusterConfiguredGeodeServerApplication in 4.223 seconds (JVM running for 5.574)
14:26:48.966 [main] INFO org.apache.geode.pdx.internal.PeerTypeRegistration - Adding new type: PdxType[dsid=0, typenum=14762571
name=example.app.spring.cluster_config.server.ClusterConfiguredGeodeServerApplication$Customer
fields=[
id:Object:identity:0:idx0(relativeOffset)=0:idx1(vlfOffsetIndex)=-1
name:String:1:1:idx0(relativeOffset)=0:idx1(vlfOffsetIndex)=1]]
14:26:49.002 [main] INFO org.apache.geode.pdx.internal.TypeRegistry - Caching PdxType[dsid=0, typenum=14762571
name=example.app.spring.cluster_config.server.ClusterConfiguredGeodeServerApplication$Customer
fields=[
id:Object:identity:0:idx0(relativeOffset)=0:idx1(vlfOffsetIndex)=-1
name:String:1:1:idx0(relativeOffset)=0:idx1(vlfOffsetIndex)=1]]
Another reason I enable PDX is so that I do not need to add the Customer class to the Server (i.e. "ServerOne") started using Gfsh. It also allows me to query the "Customers" Region and see that Customer "Jon Doe" was successfully added...
gfsh>describe region --name=Customers
..........................................................
Name : Customers
Data Policy : partition
Hosting Members : ServerOne
Non-Default Attributes Shared By Hosting Members
Type | Name | Value
------ | ----------- | ---------
Region | size | 1
| data-policy | PARTITION
gfsh>
gfsh>query --query="SELECT c.name FROM /Customers c"
Result : true
Limit : 100
Rows : 1
Result
-------
Jon Doe
Bingo! Success!
I am not going to even begin to discuss everything that is wrong with your configuration. I implore you to read the docs (and Apache Geode's User Guide; i.e. appropriate sections), understand the concepts, look at examples, guides, ask concise questions, etc, etc.
Here is the example source code...
https://github.com/jxblum/contacts-application/blob/master/configuration-example/src/main/java/example/app/spring/cluster_config/server/ClusterConfiguredGeodeServerApplication.java
Hope this helps!
-j

Spring-data-redis with redis after a while get Exception: could not get a resource from the pool

I'm using spring-data-redis to access Redis (a single machine) with an XML config file. At the beginning everything is OK, but after some minutes, when I run my test again, I get a "could not get a resource from the pool" exception. I have searched for some answers; my guess is that connections are not being returned to the pool. How can I solve this problem, and why does it occur? I'm using redis-3.2.6, spring-data-redis 1.8, and jedis 2.9. Below is my config:
#Redis settings
redis.host=27.57.100.3
redis.port=6379
redis.pass=
maxTotal=5
maxIdle=3
minIdle=1
maxWaitMillis=10000
testOnBorrow=true
testOnReturn=true
testWhileIdle=true
timeBetweenEvictionRunsMillis=30000
numTestsPerEvictionRun=10
minEvictableIdleTimeMillis=60000
softMinEvictableIdleTimeMillis=10000
blockWhenExhausted=true
And here is my code:
@Autowired
StringRedisTemplate stringRedisTemplate;

@Test
public void test() {
    ValueOperations<String, String> vop = stringRedisTemplate.opsForValue();
    String k = "k";
    String v = "v";
    vop.set(k, v);
    String value = vop.get(k);
}
maxTotal=5 is too small; set it higher, e.g. to 20.
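Beyond raising maxTotal, make sure connections actually return to the pool; RedisTemplate releases connections itself, so leaks usually come from code that obtains connections manually. A minimal Java-config sketch of a larger pool, assuming spring-data-redis 1.8's JedisConnectionFactory (the host and port mirror the question's properties; the sizing values are suggestions):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;
import org.springframework.data.redis.core.StringRedisTemplate;
import redis.clients.jedis.JedisPoolConfig;

@Configuration
public class RedisConfig {

    @Bean
    JedisConnectionFactory jedisConnectionFactory() {
        JedisPoolConfig poolConfig = new JedisPoolConfig();
        poolConfig.setMaxTotal(20);          // was 5; too small under concurrent load
        poolConfig.setMaxIdle(10);
        poolConfig.setMinIdle(1);
        poolConfig.setMaxWaitMillis(10000);  // fail after 10s instead of blocking forever
        poolConfig.setTestOnBorrow(true);    // validate connections before handing them out

        JedisConnectionFactory factory = new JedisConnectionFactory(poolConfig);
        factory.setHostName("27.57.100.3");  // host/port from the question's properties
        factory.setPort(6379);
        return factory;
    }

    @Bean
    StringRedisTemplate stringRedisTemplate(JedisConnectionFactory factory) {
        return new StringRedisTemplate(factory);
    }
}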

In-memory H2 database, insert not working in SpringBootTest

I have a Spring Boot application which I wish to test.
Below are the details of my files.
application.properties
PRODUCT_DATABASE_PASSWORD=
PRODUCT_DATABASE_USERNAME=sa
PRODUCT_DATABASE_CONNECTION_URL=jdbc:h2:file:./target/db/testdb
PRODUCT_DATABASE_DRIVER=org.h2.Driver
RED_SHIFT_DATABASE_PASSWORD=
RED_SHIFT_DATABASE_USERNAME=sa
RED_SHIFT_DATABASE_CONNECTION_URL=jdbc:h2:file:./target/db/testdb
RED_SHIFT_DATABASE_DRIVER=org.h2.Driver
spring.datasource.platform=h2
Configuration class
@SpringBootConfiguration
@SpringBootApplication
@Import({ProductDataAccessConfig.class, RedShiftDataAccessConfig.class})
public class TestConfig {
}
Main Test Class
@RunWith(SpringJUnit4ClassRunner.class)
@SpringBootTest(classes = {TestConfig.class, ConfigFileApplicationContextInitializer.class}, webEnvironment = SpringBootTest.WebEnvironment.NONE)
public class MainTest {

    @Autowired(required = true)
    @Qualifier("dataSourceRedShift")
    private DataSource dataSource;

    @Test
    public void testHourlyBlock() throws Exception {
        insertDataIntoDb(); // data successfully inserted
        SpringApplication.run(Application.class, new String[]{}); // no data found
    }
}
Data access in Application.class:
try (Connection conn = dataSourceRedShift.getConnection();
     Statement stmt = conn.createStatement()) {
    // access inserted data
}
Please help!
PS: For the Spring Boot application the test beans are being picked up, so bean instantiation is definitely not the problem. I think I am missing some properties.
I do not use Hibernate in my application, and the data disappears even within the same application context (child context), i.e. when I run a Spring Boot application which reads the data inserted earlier.
Problem solved: removing spring.datasource.platform=h2 from application.properties made my H2 data persist. With that property set, Spring Boot picks up the platform-specific initialization scripts (schema-h2.sql / data-h2.sql) on startup, and re-running them can recreate the schema and wipe previously inserted rows, which likely explains the data loss.
But I still wish to know: how is H2 starting automatically?
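As for H2 starting automatically: H2 is an embedded, in-process database, so simply opening a connection against the jdbc:h2:... URL starts the engine inside your JVM; no separate server process is involved. If you ever need to keep spring.datasource.platform, a possible alternative (a sketch; which property applies depends on your Spring Boot major version) is to disable script-based initialization instead:

# Spring Boot 1.x: do not run schema-*.sql / data-*.sql on startup
spring.datasource.initialize=false
# Spring Boot 2.x equivalent
spring.datasource.initialization-mode=never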