How to deploy a configuration to the JNDI tree of the GlassFish server

I want to deploy a HashMap of configuration entries to the JNDI tree of the GlassFish server. I am migrating a framework from WebLogic to GlassFish. Previously this was done via the following code, where Environment is weblogic.jndi.Environment:
public void deployConfiguration(HashMap configuration)
        throws GenericFrameworkException {
    Context ictx = null;
    String configParameter = null;
    Environment env = new Environment();
    env.setReplicateBindings(false);
    // get the NOT replicating initial context of this server
    ictx = ServiceLocator.getNotReplicatingInitialContext();
    if (ictx != null) {
        Set e = configuration.keySet();
        Iterator iter = e.iterator();
        while (iter.hasNext()) {
            configParameter = (String) iter.next();
            this.addParameter(
                ictx,
                Constants.JNDI_SUB_PATH,
                configParameter,
                configuration.get(configParameter));
        }
    }
}
Can anyone suggest how this can be achieved in GlassFish?
Thanks in advance.

It seems as if you are looking for custom JNDI resources:
http://docs.oracle.com/cd/E26576_01/doc.312/e24930/jndi.htm#beanz
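If you'd rather keep the programmatic style of the original WebLogic code instead of declaring custom resources, a minimal sketch along these lines may work, assuming GlassFish permits runtime binds under a single-level custom subcontext and that GenericFrameworkException can wrap a cause (Constants.JNDI_SUB_PATH is the original framework's constant; there is no GlassFish counterpart to setReplicateBindings, as bindings are not replicated by default):

import java.util.HashMap;
import java.util.Map;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public void deployConfiguration(HashMap<String, Object> configuration)
        throws GenericFrameworkException {
    try {
        Context ictx = new InitialContext(); // GlassFish's default initial context
        Context sub;
        try {
            sub = (Context) ictx.lookup(Constants.JNDI_SUB_PATH);
        } catch (NamingException notBound) {
            // assumes JNDI_SUB_PATH is a single-level name, e.g. "frameworkConfig"
            sub = ictx.createSubcontext(Constants.JNDI_SUB_PATH);
        }
        for (Map.Entry<String, Object> entry : configuration.entrySet()) {
            sub.rebind(entry.getKey(), entry.getValue()); // overwrite any stale binding
        }
    } catch (NamingException e) {
        throw new GenericFrameworkException(e);
    }
}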

Related

Apache Ignite performance problem on Azure Kubernetes Service

I'm using Apache Ignite on Azure Kubernetes as a distributed cache.
I also have a web API on Azure based on .NET 6.
The Ignite service runs stably and very well on AKS.
But on the first request, the API tries to connect to Ignite, which takes around 3 seconds. After that, Ignite responses take around 100 ms, which is great. Here are my Web API performance outputs for the GetProduct function.
At first, I tried registering the Ignite service as a singleton, but it sometimes failed with 'connection closed'. How can I keep the Ignite connection open at all times? Or does anyone have a better idea?
Here is my latest GetProduct code:
[HttpGet("getProduct")]
public IActionResult GetProduct(string barcode)
{
Stopwatch _stopWatch = new Stopwatch();
_stopWatch.Start();
Product product;
CacheManager cacheManager = new CacheManager();
cacheManager.ProductCache.TryGet(barcode, out product);
if(product == null)
{
return NotFound(new ApiResponse<Product>(product));
}
cacheManager.DisposeIgnite();
_logger.LogWarning("Loaded in " + _stopWatch.ElapsedMilliseconds + " ms...");
return Ok(new ApiResponse<Product>(product));
}
Also, I'm adding my CacheManager class here:
public CacheManager()
{
    ConnectIgnite();
    InitializeCaches();
}

public void ConnectIgnite()
{
    _ignite = Ignition.StartClient(GetIgniteConfiguration());
}

public IgniteClientConfiguration GetIgniteConfiguration()
{
    var appSettingsJson = AppSettingsJson.GetAppSettings();
    var igniteEndpoints = appSettingsJson["AppSettings:IgniteEndpoint"];
    var igniteUser = appSettingsJson["AppSettings:IgniteUser"];
    var ignitePassword = appSettingsJson["AppSettings:IgnitePassword"];
    var nodeList = igniteEndpoints.Split(",");

    var config = new IgniteClientConfiguration
    {
        Endpoints = nodeList,
        UserName = igniteUser,
        Password = ignitePassword,
        EnablePartitionAwareness = true,
        SocketTimeout = TimeSpan.FromMilliseconds(System.Threading.Timeout.Infinite)
    };

    return config;
}
Make it a singleton. An Ignite node, even in client mode, is supposed to keep running for the entire lifetime of your application. All Ignite APIs are thread-safe. If you get a connection error, please provide more details (exception stack trace, how you create the singleton, etc.).
You can also try the Ignite thin client which consumes fewer resources and connects instantly: https://ignite.apache.org/docs/latest/thin-clients/dotnet-thin-client.
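For the singleton route, here is a minimal sketch using the built-in DI container (assumptions: .NET 6 minimal hosting, a cache named "products", and that GetIgniteConfiguration from the CacheManager above is made static; Ignition.StartClient returns an IIgniteClient, whose APIs are thread-safe and safe to share across requests):

// Program.cs -- create one thin-client connection for the application's lifetime
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddSingleton<IIgniteClient>(_ =>
    Ignition.StartClient(CacheManager.GetIgniteConfiguration()));

// In the controller, inject the shared client instead of new-ing a CacheManager per request
public class ProductController : ControllerBase
{
    private readonly IIgniteClient _ignite;

    public ProductController(IIgniteClient ignite) => _ignite = ignite;

    [HttpGet("getProduct")]
    public IActionResult GetProduct(string barcode)
    {
        var cache = _ignite.GetCache<string, Product>("products"); // hypothetical cache name
        return cache.TryGet(barcode, out var product)
            ? Ok(new ApiResponse<Product>(product))
            : NotFound(new ApiResponse<Product>(null));
    }
}

The connection then stays open for the application's lifetime instead of being re-established and disposed on every request.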

Pivotal GemFire cannot see cached data in Gfsh or Pulse

I have created a Spring Boot application with a Geode/GemFire cache. I would like to connect to a Gfsh-created Region from my Spring Boot application. In my application-context.xml I'm using a gfe:lookup-region with the id of the Gfsh-created Region.
In my Java configuration file I utilize a LookupRegionFactoryBean to get a reference to the externally defined Region.
In my SpringBootApplication bootstrap class I write to my Repository successfully, as I can read back all the objects that I saved. But using the Gfsh tool or the Pulse tool I cannot see my cached data records (or even a count of them).
Could you provide some insight here? I also tried using LocalRegionFactoryBean in my configuration file, but that approach did not work either.
Thanks.
application-context.xml:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:gfe-data="http://www.springframework.org/schema/data/gemfire"
       xmlns:gfe="http://www.springframework.org/schema/gemfire"
       xmlns:util="http://www.springframework.org/schema/util"
       xsi:schemaLocation="http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd
           http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
           http://www.springframework.org/schema/data/gemfire http://www.springframework.org/schema/data/gemfire/spring-data-gemfire.xsd
           http://www.springframework.org/schema/gemfire http://www.springframework.org/schema/gemfire/spring-gemfire.xsd">

    <context:component-scan base-package="com.example.geode"/>

    <util:properties id="gemfireProperties" location="context:geode.properties"/>

    <!-- <context:property-placeholder location="context:geode.properties"/>
    <bean id="log-level"><property name="log-level" value="${log-level}"/></bean>
    <bean id="mcast-port"><property name="mcast-port" value="${mcast-port}"/></bean>
    <bean id="name"><property name="name" value="${name}"/></bean> -->

    <gfe:annotation-driven/>

    <gfe-data:function-executions base-package="com.example.geode.config"/>

    <!-- Declare GemFire Cache -->
    <!-- <gfe:cache/> -->
    <gfe:cache properties-ref="gemfireProperties"/>

    <!-- Local region used by the Message -->
    <!-- <gfe:replicated-region id="employee" value-constraint="com.example.geode.model.Employee" data-policy="REPLICATE"/> -->
    <gfe:lookup-region id="employee" value-constraint="com.example.geode.model.Employee" data-policy="REPLICATE"/>
    <!-- <gfe:local-region id="employee" value-constraint="com.example.geode.model.Employee" data-policy="REPLICATE"/> -->

    <!-- Search for GemFire repositories -->
    <gfe-data:repositories base-package="com.example.geode.repository"/>
</beans>
GeodeConfiguration.java:
//imports not included
@Configuration
@ComponentScan
@EnableCaching
@EnableGemfireRepositories//(basePackages = "com.example.geode.repository")
@EnableGemfireFunctions
@EnableGemfireFunctionExecutions//(basePackages = "com.example.geode.function")
@PropertySource("classpath:geode.properties")
public class GeodeConfiguration {

    @Autowired
    private EmployeeRepository employeeRepository;

    @Autowired
    private FunctionExecution functionExecution;

    @Value("${log-level}")
    private String loglevel;

    @Value("${mcast-port}")
    private String mcastPort;

    @Value("${name}")
    private String name;

    Properties gemfireProperties() {
        Properties gemfireProperties = new Properties();
        gemfireProperties.setProperty(loglevel, loglevel);
        gemfireProperties.setProperty(mcastPort, mcastPort);
        gemfireProperties.setProperty(name, name);
        return gemfireProperties;
    }

    @Bean
    CacheFactoryBean gemfireCache() {
        return new CacheFactoryBean();
    }

    @Bean
    GemfireCacheManager cacheManager() {
        GemfireCacheManager cacheManager = new GemfireCacheManager();
        try {
            CacheFactoryBean cacheFactory = gemfireCache();
            //gemfireProperties();
            //cacheFactory.setProperties(gemfireProperties());
            cacheManager.setCache(cacheFactory.getObject()); //gemfireCache().getObject());
        } catch (Exception ex) {
            ex.printStackTrace();
        }
        return cacheManager;
    }

    @Bean(name = "employee")
    //@Autowired
    LookupRegionFactoryBean<String, Employee> getRegion(final GemFireCache cache)
            throws Exception {
        //CacheTypeAwareRegionFactoryBean<String, Employee> region = new CacheTypeAwareRegionFactoryBean<>(); //GenericRegionFactoryBean<> //LocalRegionFactoryBean<>();
        LookupRegionFactoryBean<String, Employee> region = new LookupRegionFactoryBean<>(); //GenericRegionFactoryBean<> //LocalRegionFactoryBean<>();
        region.setRegionName("employee");
        try {
            region.setCache(gemfireCache().getObject());
        } catch (Exception e) {
            e.printStackTrace();
        }
        //region.setClose(false);
        region.setName("employee");
        //region.setAsyncEventQueues(new AsyncEventQueue[]{gemfireQueue});
        //region.setPersistent(false);
        //region.setDataPolicy(org.apache.geode.cache.DataPolicy.REPLICATE); //PRELOADED); //REPLICATE);
        region.afterPropertiesSet();
        return region;
    }
}
BasicGeodeApplication.java:
//imports not provided
@EnableGemfireRepositories
@SpringBootApplication
@ComponentScan("com.example.geode")
//@EnableCaching
@EnableGemfireCaching
@EnableEntityDefinedRegions(basePackageClasses = Employee.class)
@SuppressWarnings("unused")
//@CacheServerApplication(name = "server2", locators = "localhost[10334]",
//    autoStartup = true, port = 41414)
public class BasicGeodeApplication {

    @Autowired
    private EmployeeRepository employeeRepository;

    @Autowired
    private EmployeeService employeeService;

    private static ConfigurableApplicationContext context;

    public static void main(String[] args) {
        context = SpringApplication.run(BasicGeodeApplication.class, args);
        BasicGeodeApplication bga = new BasicGeodeApplication();
    }

    @Bean
    public ApplicationRunner run(EmployeeRepository employeeRepository) {
        return args -> {
            Employee bob = new Employee("Bob", 80.0);
            Employee sue = new Employee("Susan", 95.0);
            Employee jane = new Employee("Jane", 85.0);
            Employee jack = new Employee("Jack", 90.0);
            List<Employee> employees = Arrays.asList(bob, sue, jane, jack);
            employees.sort(Comparator.comparing(Employee::getName));
            for (Employee employee : employees) {
                //employeeService.saveEmployee(employee);
                employeeRepository.save(employee);
            }
            System.out.println("\nList of employees:");
            employees //Arrays.asList(bob.getName(), sue.getName(), jane.getName(), jack.getName());
                .forEach(person -> System.out.println("\t" + employeeRepository.findByName(person.getName())));
            System.out.println("\nQuery salary greater than 80k:");
            stream(employeeRepository.findBySalaryGreaterThan(80.0).spliterator(), false)
                .forEach(person -> System.out.println("\t" + person));
            System.out.println("\nQuery salary less than 95k:");
            stream(employeeRepository.findBySalaryLessThan(95.0).spliterator(), false)
                .forEach(person -> System.out.println("\t" + person));
            System.out.println("\nQuery salary greater than 80k and less than 95k:");
            stream(employeeRepository.findBySalaryGreaterThanAndSalaryLessThan(80.0, 95.0).spliterator(), false)
                .forEach(person -> System.out.println("\t" + person));
        };
    }

    @Service
    class EmployeeService {

        @Autowired
        private EmployeeRepository employeeRepository;

        @CachePut(cacheNames = "employee", key = "#id")
        //@PutMapping("/")
        void saveEmployee(Employee employee) {
            employeeRepository.save(employee);
        }

        Employee findEmployee(String name) {
            return null;
        }
        //employeeRepository.findByName(person.getName())));
    }
}
EmployeeRepository.java:
@Repository("employeeRepository")
//@DependsOn("gemfireCache")
public interface EmployeeRepository extends CrudRepository<Employee, String> {

    Employee findByName(String name);

    Iterable<Employee> findBySalaryGreaterThan(double salary);

    Iterable<Employee> findBySalaryLessThan(double salary);

    Iterable<Employee> findBySalaryGreaterThanAndSalaryLessThan(double salary1, double salary2);
}
Employee.java:
// imports not included
@Entity
@Region("employee")
public class Employee implements Serializable {

    private static final long serialVersionUID = 1L;

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    @javax.persistence.Id
    private Long id;

    public String name;
    public double salary;

    protected Employee() {}

    @PersistenceConstructor
    public Employee(String name, double salary) {
        this.name = name;
        this.salary = salary;
    }

    @Override
    public String toString() {
        return name + " salary is: " + salary;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public double getSalary() {
        return salary;
    }

    public void setSalary(int salary) {
        this.salary = salary;
    }
}
I'd be very surprised if your Spring Boot application was actually using the Region(s) created in Gfsh after attempting a lookup. This is also evident from the fact that you cannot see data in the Region(s) when using Gfsh or Pulse after running your application.
In addition, it is not entirely apparent from your configuration whether your Spring Boot application is joining a cluster started with Gfsh since you have not shared the contents of your geode.properties file.
NOTE: We will come back to your properties file in a moment, since how you reference it (i.e. with context: in the location attribute of the <util:properties> element from the Spring Util schema) is not even correct. In fact, your entire XML file is not even valid! Bean definitions without a class are not valid unless they are abstract.
What version of Spring Boot and Spring Data Geode/GemFire (SDG) are you using?
Judging by the Spring XML schema references in your XML configuration file, you are using Spring 3.0!? You should not reference versions in your schema location declarations. An unqualified schema location will resolve to a version based on the Spring JARs you have on your application classpath (e.g. pulled in by Maven).
Anyway, the reason I ask is, I have made many changes to SDG causing it to fail-fast in the event that the configuration is ambiguous or incomplete (for instance).
In your case, the lookup would have failed instantly if the Region(s) did not exist. And, I am quite certain the Region(s) do not exist because SDG disables Cluster Configuration on a peer cache node/application, by default. Therefore, none of the Region(s) you created in Gfsh are immediately available to your Spring Boot application.
So, let's walk through a simple example. I will mostly use SDG's new annotation configuration model mixed with some Java config, for ease of use and convenience. I encourage you to read this chapter in SDG's Reference Guide, given your configuration is all over the place and I am pretty certain you are confusing what is actually happening.
Here is my Spring Boot, Apache Geode application...
@SpringBootApplication
@SuppressWarnings("unused")
public class ClusterConfiguredGeodeServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(ClusterConfiguredGeodeServerApplication.class, args);
    }

    @Bean
    ApplicationRunner runner(GemfireTemplate customersTemplate) {
        return args -> {
            Customer jonDoe = Customer.named("Jon Doe").identifiedBy(1L);
            customersTemplate.put(jonDoe.getId(), jonDoe);
        };
    }

    @PeerCacheApplication(name = "ClusterConfiguredGeodeServerApplication")
    @EnablePdx
    static class GeodeConfiguration {

        @Bean("Customers")
        LookupRegionFactoryBean<Long, Customer> customersRegion(GemFireCache gemfireCache) {
            LookupRegionFactoryBean<Long, Customer> customersRegion = new LookupRegionFactoryBean<>();
            customersRegion.setCache(gemfireCache);
            return customersRegion;
        }

        @Bean("CustomersTemplate")
        GemfireTemplate customersTemplate(@Qualifier("Customers") Region<?, ?> customers) {
            return new GemfireTemplate(customers);
        }
    }

    @Data
    @RequiredArgsConstructor(staticName = "named")
    static class Customer {

        @Id
        private Long id;

        @NonNull
        private String name;

        Customer identifiedBy(Long id) {
            this.id = id;
            return this;
        }
    }
}
I am using Spring Data Lovelace RC1 (which includes Spring Data for Apache Geode 2.1.0.RC1). I am also using Spring Boot 2.0.3.RELEASE, which pulls in core Spring Framework 5.0.7.RELEASE, all on Java 8.
I omitted package and import declarations.
I am using Project Lombok to define my Customer class.
I have a nested GeodeConfiguration class to configure the Spring Boot application as a "peer" Cache member capable of joining an Apache Geode cluster. However, it is not part of any cluster yet!
Finally, I have configured a "Customers" Region, which is "looked" up in the Spring context.
When I start this application, it fails because there is no "Customers" Region defined yet, by any means...
Caused by: org.springframework.beans.factory.BeanInitializationException: Region [Customers] in Cache [GemFireCache[id = 2126876651; isClosing = false; isShutDownAll = false; created = Thu Aug 02 13:43:07 PDT 2018; server = false; copyOnRead = false; lockLease = 120; lockTimeout = 60]] not found
at org.springframework.data.gemfire.ResolvableRegionFactoryBean.createRegion(ResolvableRegionFactoryBean.java:146) ~[spring-data-geode-2.1.0.RC1.jar:2.1.0.RC1]
at org.springframework.data.gemfire.ResolvableRegionFactoryBean.afterPropertiesSet(ResolvableRegionFactoryBean.java:96) ~[spring-data-geode-2.1.0.RC1.jar:2.1.0.RC1]
at org.springframework.data.gemfire.LookupRegionFactoryBean.afterPropertiesSet(LookupRegionFactoryBean.java:72) ~[spring-data-geode-2.1.0.RC1.jar:2.1.0.RC1]
...
This is expected!
Alright, let's go into Gfsh and start a cluster.
You know that you need to start a Locator with a Server to form a cluster, right? A Locator is used by other potential peers attempting to join the cluster so they can locate the cluster in the first place. The Server is needed in order to create the "Customers" Region since you cannot create a Region on a Locator.
$ echo $GEODE_HOME
/Users/jblum/pivdev/apache-geode-1.6.0
$ gfsh
_________________________ __
/ _____/ ______/ ______/ /____/ /
/ / __/ /___ /_____ / _____ /
/ /__/ / ____/ _____/ / / / /
/______/_/ /______/_/ /_/ 1.6.0
Monitor and Manage Apache Geode
gfsh>start locator --name=LocaorOne --log-level=config
Starting a Geode Locator in /Users/jblum/pivdev/lab/LocaorOne...
....
Locator in /Users/jblum/pivdev/lab/LocaorOne on 10.0.0.121[10334] as LocaorOne is currently online.
Process ID: 41758
Uptime: 5 seconds
Geode Version: 1.6.0
Java Version: 1.8.0_152
Log File: /Users/jblum/pivdev/lab/LocaorOne/LocaorOne.log
JVM Arguments: -Dgemfire.enable-cluster-configuration=true -Dgemfire.load-cluster-configuration-from-dir=false -Dgemfire.log-level=config -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /Users/jblum/pivdev/apache-geode-1.6.0/lib/geode-core-1.6.0.jar:/Users/jblum/pivdev/apache-geode-1.6.0/lib/geode-dependencies.jar
Successfully connected to: JMX Manager [host=10.0.0.121, port=1099]
Cluster configuration service is up and running.
gfsh>start server --name=ServerOne --log-level=config
Starting a Geode Server in /Users/jblum/pivdev/lab/ServerOne...
...
Server in /Users/jblum/pivdev/lab/ServerOne on 10.0.0.121[40404] as ServerOne is currently online.
Process ID: 41785
Uptime: 3 seconds
Geode Version: 1.6.0
Java Version: 1.8.0_152
Log File: /Users/jblum/pivdev/lab/ServerOne/ServerOne.log
JVM Arguments: -Dgemfire.default.locators=10.0.0.121[10334] -Dgemfire.start-dev-rest-api=false -Dgemfire.use-cluster-configuration=true -Dgemfire.log-level=config -XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /Users/jblum/pivdev/apache-geode-1.6.0/lib/geode-core-1.6.0.jar:/Users/jblum/pivdev/apache-geode-1.6.0/lib/geode-dependencies.jar
gfsh>list members
Name | Id
--------- | --------------------------------------------------------------
LocaorOne | 10.0.0.121(LocaorOne:41758:locator)<ec><v0>:1024 [Coordinator]
ServerOne | 10.0.0.121(ServerOne:41785)<v1>:1025
gfsh>list regions
No Regions Found
gfsh>create region --name=Customers --type=PARTITION --key-constraint=java.lang.Long --value-constraint=java.lang.Object
Member | Status
--------- | ------------------------------------------
ServerOne | Region "/Customers" created on "ServerOne"
gfsh>list regions
List of regions
---------------
Customers
gfsh>describe region --name=Customers
..........................................................
Name : Customers
Data Policy : partition
Hosting Members : ServerOne
Non-Default Attributes Shared By Hosting Members
Type | Name | Value
------ | ----------- | ---------
Region | size | 0
| data-policy | PARTITION
Now, even if I run the Spring Boot application again, it will still fail with the same Exception...
Caused by: org.springframework.beans.factory.BeanInitializationException: Region [Customers] in Cache [GemFireCache[id = 989520513; isClosing = false; isShutDownAll = false; created = Thu Aug 02 14:09:25 PDT 2018; server = false; copyOnRead = false; lockLease = 120; lockTimeout = 60]] not found
at org.springframework.data.gemfire.ResolvableRegionFactoryBean.createRegion(ResolvableRegionFactoryBean.java:146) ~[spring-data-geode-2.1.0.RC1.jar:2.1.0.RC1]
at org.springframework.data.gemfire.ResolvableRegionFactoryBean.afterPropertiesSet(ResolvableRegionFactoryBean.java:96) ~[spring-data-geode-2.1.0.RC1.jar:2.1.0.RC1]
at org.springframework.data.gemfire.LookupRegionFactoryBean.afterPropertiesSet(LookupRegionFactoryBean.java:72) ~[spring-data-geode-2.1.0.RC1.jar:2.1.0.RC1]
...
Why?
This is because 1) the Spring Boot, Apache Geode peer Cache application is not part of the cluster (yet) and 2) by default, SDG does not allow a Spring configured/bootstrapped Apache Geode peer Cache application to get its configuration from the cluster (specifically, from the Cluster Configuration Service); therefore, we have to configure/enable both things.
We can have our Spring Boot, Apache Geode peer Cache application join the cluster specifically by specifying the locators attribute of the #PeerCacheApplication annotation as localhost[10334].
@PeerCacheApplication(name = "...", locators = "localhost[10334]")
We can have our Spring Boot, Apache Geode peer Cache application get its configuration from the cluster by enabling the useClusterConfiguration property, which we do by adding the following Configurer bean definition to our inner, static GeodeConfiguration class, as follows:
@Bean
PeerCacheConfigurer useClusterConfigurationConfigurer() {
    return (beanName, cacheFactoryBean) -> cacheFactoryBean.setUseClusterConfiguration(true);
}
Now, when we run our Spring Boot, Apache Geode peer Cache application again, we see quite different output. First, see that our peer member (app) gets the cluster configuration...
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<cache xmlns="http://geode.apache.org/schema/cache" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" copy-on-read="false" is-server="false" lock-lease="120" lock-timeout="60" search-timeout="300" version="1.0" xsi:schemaLocation="http://geode.apache.org/schema/cache http://geode.apache.org/schema/cache/cache-1.0.xsd">
    <region name="Customers">
        <region-attributes data-policy="partition">
            <key-constraint>java.lang.Long</key-constraint>
            <value-constraint>java.lang.Object</value-constraint>
        </region-attributes>
    </region>
</cache>
Next, you may have noticed that I enabled PDX, using SDG's @EnablePdx annotation. This allows us to easily serialize our application domain model object types (e.g. Customer) without our types needing to implement java.io.Serializable. There are several reasons why you wouldn't necessarily want to implement java.io.Serializable anyway. SDG's @EnablePdx engages SDG's MappingPdxSerializer implementation, which is far more powerful than even Apache Geode's/Pivotal GemFire's own ReflectionBasedAutoSerializer.
As a result of serializing the application types (namely, Customer), you will see this output...
14:26:48.322 [main] INFO org.apache.geode.internal.cache.PartitionedRegion - Partitioned Region /Customers is created with prId=2
Started ClusterConfiguredGeodeServerApplication in 4.223 seconds (JVM running for 5.574)
14:26:48.966 [main] INFO org.apache.geode.pdx.internal.PeerTypeRegistration - Adding new type: PdxType[dsid=0, typenum=14762571
name=example.app.spring.cluster_config.server.ClusterConfiguredGeodeServerApplication$Customer
fields=[
id:Object:identity:0:idx0(relativeOffset)=0:idx1(vlfOffsetIndex)=-1
name:String:1:1:idx0(relativeOffset)=0:idx1(vlfOffsetIndex)=1]]
14:26:49.002 [main] INFO org.apache.geode.pdx.internal.TypeRegistry - Caching PdxType[dsid=0, typenum=14762571
name=example.app.spring.cluster_config.server.ClusterConfiguredGeodeServerApplication$Customer
fields=[
id:Object:identity:0:idx0(relativeOffset)=0:idx1(vlfOffsetIndex)=-1
name:String:1:1:idx0(relativeOffset)=0:idx1(vlfOffsetIndex)=1]]
Another reason I enable PDX is so that I do not need to add the Customer class to the Server (i.e. "ServerOne") started using Gfsh. It also allows me to query the "Customers" Region and see that Customer "Jon Doe" was successfully added...
gfsh>describe region --name=Customers
..........................................................
Name : Customers
Data Policy : partition
Hosting Members : ServerOne
Non-Default Attributes Shared By Hosting Members
Type | Name | Value
------ | ----------- | ---------
Region | size | 1
| data-policy | PARTITION
gfsh>
gfsh>query --query="SELECT c.name FROM /Customers c"
Result : true
Limit : 100
Rows : 1
Result
-------
Jon Doe
Bingo! Success!
I am not going to even begin to discuss everything that is wrong with your configuration. I implore you to read the docs (and the appropriate sections of Apache Geode's User Guide), understand the concepts, look at examples and guides, ask concise questions, etc.
Here is the example source code...
https://github.com/jxblum/contacts-application/blob/master/configuration-example/src/main/java/example/app/spring/cluster_config/server/ClusterConfiguredGeodeServerApplication.java
Hope this helps!
-j

Hazelcast No DataSerializerFactory registered for namespace: 0 on standalone process

Trying to set up a Hazelcast cluster with TCP/IP enabled on a standalone process.
My class looks like this:
public class Person implements Serializable {

    private static final long serialVersionUID = 1L;

    int personId;
    String name;

    Person() {}

    //getters and setters
}
Hazelcast is loaded as
final Config config = createNewConfig(mapName);
HazelcastInstance node = Hazelcast.newHazelcastInstance(config);

Config createNewConfig(String mapName) {
    final PersonStore personStore = new PersonStore();
    XmlConfigBuilder configBuilder = new XmlConfigBuilder();
    Config config = configBuilder.build();
    config.setClassLoader(LoadAll.class.getClassLoader());
    MapConfig mapConfig = config.getMapConfig(mapName);
    MapStoreConfig mapStoreConfig = new MapStoreConfig();
    mapStoreConfig.setImplementation(personStore);
    mapConfig.setMapStoreConfig(mapStoreConfig); // attach the store to the map config
    return config;
}
and my Hazelcast XML config has this:
<tcp-ip enabled="true">
    <member>machine-1</member>
    <member>machine-2</member>
</tcp-ip>
Do I need to populate this tag in my xml?
I get this error when a second instance is brought up
com.hazelcast.nio.serialization.HazelcastSerializationException: No DataSerializerFactory registered for namespace: 0
    at com.hazelcast.nio.serialization.DataSerializer.read(DataSerializer.java:98)
    at com.hazelcast.nio.serialization.DataSerializer.read(DataSerializer.java:39)
    at com.hazelcast.nio.serialization.StreamSerializerAdapter.read(StreamSerializerAdapter.java:41)
    at com.hazelcast.nio.serialization.SerializationServiceImpl.toObject(SerializationServiceImpl.java:276)
Any help is highly appreciated.
Solved my problem: my pom.xml declared hazelcast-wm, so the actual Hazelcast core jar was not in my bundled jar. Including it fixed my issue.
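For reference, a minimal sketch of the missing piece in the pom.xml (the core artifact; the version placeholder stands for whatever matches your cluster):

<dependency>
    <groupId>com.hazelcast</groupId>
    <artifactId>hazelcast</artifactId>
    <version>${hazelcast.version}</version>
</dependency>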
Note that this same "No DataSerializerFactory registered for namespace: 0" error message can also occur in an OSGi environment when you're attempting to use more than one Hazelcast instance within the same VM, but initializing the instances from different bundles. The reason being that the com.hazelcast.util.ServiceLoader.findHighestReachableClassLoader() method will sometimes pick the wrong class loader during Hazelcast initialization (as it won't always pick the class loader you set on the config), and then it ends up with an empty list of DataSerializerFactory instances (hence causing the error message that it can't find the requested factory with id 0). The following shows a way to work around that problem by taking advantage of Java's context class loader:
private HazelcastInstance createHazelcastInstance() {
    // Use the following if you're only using the Hazelcast data serializers
    final ClassLoader classLoader = Hazelcast.class.getClassLoader();
    // Use the following if you have custom data serializers that you need
    // final ClassLoader classLoader = this.getClass().getClassLoader();
    final com.hazelcast.config.Config config = new com.hazelcast.config.Config();
    config.setClassLoader(classLoader);
    final ClassLoader previousContextClassLoader = Thread.currentThread().getContextClassLoader();
    try {
        Thread.currentThread().setContextClassLoader(classLoader);
        return Hazelcast.newHazelcastInstance(config);
    } finally {
        if (previousContextClassLoader != null) {
            Thread.currentThread().setContextClassLoader(previousContextClassLoader);
        }
    }
}

How to list JBoss AS 7 datasource properties in Java code?

I'm running JBoss AS 7.1.0.CR1b. I've got several datasources defined in my standalone.xml e.g.
<subsystem xmlns="urn:jboss:domain:datasources:1.0">
    <datasources>
        <datasource jndi-name="java:/MyDS" pool-name="MyDS_Pool" enabled="true" use-java-context="true" use-ccm="true">
            <connection-url>some-url</connection-url>
            <driver>the-driver</driver>
            [etc]
Everything works fine.
I'm trying to access the information contained here within my code - specifically the connection-url and driver properties.
I've tried getting the Datasource from JNDI, as normal, but it doesn't appear to provide access to these properties:
// catches removed
InitialContext context;
DataSource dataSource = null;
context = new InitialContext();
dataSource = (DataSource) context.lookup(jndi);
The ClientInfo and DatabaseMetadata available from a Connection obtained from this DataSource don't contain these granular JBoss properties either.
My code will be running inside the container with the datasource specified, so everything should be available. I've looked at the IronJacamar interface org.jboss.jca.common.api.metadata.ds.DataSource and its implementing class, and these seem to have accessible hooks to the information I require, but I can't find any information on how to create such objects from the already deployed resources within the container (the only constructor on the impl involves supplying all properties manually).
JBoss AS 7's Command-Line Interface allows you to navigate and list the datasources as a directory system. http://www.paykin.info/java/add-datasource-programaticaly-cli-jboss-7/ provides an excellent post on how to use what I believe is the Java Management API to interact with the subsystem, but this appears to involve connecting to the target JBoss server. My code is already running within that server, so surely there must be an easier way to do this?
Hope somebody can help. Many thanks.
What you're really trying to do is a management action. The best way is to use the management APIs that are available.
Here is a simple standalone example:
// Assumed imports for this standalone example (from the jboss-dmr and
// jboss-as-controller-client artifacts):
import java.io.Closeable;
import java.io.IOException;
import java.net.InetAddress;
import java.util.List;

import org.jboss.as.controller.client.ModelControllerClient;
import org.jboss.as.controller.client.OperationBuilder;
import org.jboss.as.controller.client.helpers.ClientConstants;
import org.jboss.dmr.ModelNode;

public class Main {

    public static void main(final String[] args) throws Exception {
        final List<ModelNode> dataSources = getDataSources();
        for (ModelNode dataSource : dataSources) {
            System.out.printf("Datasource: %s%n", dataSource.asString());
        }
    }

    public static List<ModelNode> getDataSources() throws IOException {
        final ModelNode request = new ModelNode();
        request.get(ClientConstants.OP).set("read-resource");
        request.get("recursive").set(true);
        request.get(ClientConstants.OP_ADDR).add("subsystem", "datasources");
        ModelControllerClient client = null;
        try {
            client = ModelControllerClient.Factory.create(InetAddress.getByName("127.0.0.1"), 9999);
            final ModelNode response = client.execute(new OperationBuilder(request).build());
            reportFailure(response);
            return response.get(ClientConstants.RESULT).get("data-source").asList();
        } finally {
            safeClose(client);
        }
    }

    public static void safeClose(final Closeable closeable) {
        if (closeable != null) try {
            closeable.close();
        } catch (Exception e) {
            // no-op
        }
    }

    private static void reportFailure(final ModelNode node) {
        if (!node.get(ClientConstants.OUTCOME).asString().equals(ClientConstants.SUCCESS)) {
            final String msg;
            if (node.hasDefined(ClientConstants.FAILURE_DESCRIPTION)) {
                if (node.hasDefined(ClientConstants.OP)) {
                    msg = String.format("Operation '%s' at address '%s' failed: %s", node.get(ClientConstants.OP), node.get(ClientConstants.OP_ADDR), node.get(ClientConstants.FAILURE_DESCRIPTION));
                } else {
                    msg = String.format("Operation failed: %s", node.get(ClientConstants.FAILURE_DESCRIPTION));
                }
            } else {
                msg = String.format("Operation failed: %s", node);
            }
            throw new RuntimeException(msg);
        }
    }
}
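Since the question specifically asks for connection-url and driver, here is a hedged follow-up sketch for pulling those out of the returned nodes (assuming each list element is a property node keyed by pool name; connection-url and driver-name are the attribute names used by the datasources subsystem):

for (ModelNode dataSource : getDataSources()) {
    org.jboss.dmr.Property ds = dataSource.asProperty(); // pool-name -> attribute map
    System.out.printf("%s: url=%s, driver=%s%n",
            ds.getName(),
            ds.getValue().get("connection-url").asString(),
            ds.getValue().get("driver-name").asString());
}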
The only other way I can think of is to add a module that relies on the server's internals. It could be done, but I would probably use the management API first.

Changing Location of Velocity.Log File

Seems pretty straightforward. The documentation at http://velocity.apache.org/engine/devel/developer-guide.html#Configuring_Logging says to set the runtime.log property. Here's what I've got for all my properties:
velocityEngine.setProperty(RuntimeConstants.FILE_RESOURCE_LOADER_PATH, templatesPath);
velocityEngine.setProperty("runtime.log", "/path/to/my/file/velocity.log");
velocityEngine.setProperty("resource.loader", "string");
velocityEngine.setProperty("string.resource.loader.class", "org.apache.velocity.runtime.resource.loader.StringResourceLoader");
velocityEngine.setProperty("string.resource.loader.repository.class", "org.apache.velocity.runtime.resource.util.StringResourceRepositoryImpl");
I'm not finding any log file where I told it to be placed; instead, the new errors go to the old (initialization folder) location. Any ideas? :D
I had a similar problem when setting some options at runtime. I worked around it with a custom VelocityBuilder and an external velocity.properties file where you can put all the runtime properties.
Here is the code:
public class BaseVelocityBuilder implements VelocityBuilder {

    private VelocityEngine engine;

    private Log logger = LogFactory.getLog(getClass());

    @Autowired
    private WebApplicationContext webApplicationContext;

    public VelocityEngine engine() {
        if (engine == null) {
            engine = new VelocityEngine();
            Properties properties = new Properties();
            InputStream in = null;
            try {
                in = webApplicationContext.getServletContext().getResourceAsStream("/WEB-INF/velocity.properties");
                properties.load(in);
                engine.init(properties);
            } catch (IOException e) {
                e.printStackTrace();
                logger.error("Error loading velocity engine properties");
                throw new ProgramException("Cannot load velocity engine properties");
            }
            IOUtils.closeQuietly(in);
        }
        return engine;
    }
}
See these lines:
in = webApplicationContext.getServletContext().getResourceAsStream("/WEB-INF/velocity.properties");
properties.load(in);
engine.init(properties);
So I have a velocity.properties file in /WEB-INF where I put some configuration:
resource.loader = webinf, class
webinf.resource.loader.description = Framework Templates Resource Loader
webinf.resource.loader.class = applica.framework.library.velocity.WEBINFResourceLoader
webapp.resource.loader.class = org.apache.velocity.tools.view.servlet.WebappLoader
webapp.resource.loader.path =
file.resource.loader.description = Velocity File Resource Loader
file.resource.loader.class = org.apache.velocity.runtime.resource.loader.FileResourceLoader
file.resource.loader.path =
class.resource.loader.description = Velocity Classpath Resource Loader
class.resource.loader.class = org.apache.velocity.runtime.resource.loader.ClasspathResourceLoader
runtime.log=/pathYouWant/velocity.log
Finally, in your application.xml:
<bean class="applica.framework.library.velocity.BaseVelocityBuilder" />
This way you can have, for example, a different log file for each application, and when you hand the WAR over to production, the sysadmin can change the properties to match the environment configuration of the production server.