Can Ignite persist one cache to multiple stores? - ignite

I hope to use Ignite to sync records to multiple MySQL databases. For example, when records go into cacheA, they should be persisted to both db1 and db2.
Is this possible?
What I did is:
Write a PersonStore class, build it as a jar, and place it in the libs directory.
First, sample1.xml is configured as:
<bean class="org.springframework.jdbc.datasource.DriverManagerDataSource" id="dataSource">
<property name="driverClassName" value="com.mysql.jdbc.Driver"></property>
<property name="url" value="jdbc:mysql://111.xxx.xxx:3306/test"></property>
<property name="username" value="root"></property>
<property name="password" value="xxxx"></property>
</bean>
<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
<property name="peerClassLoadingEnabled" value="true"/>
<property name="cacheConfiguration">
<list>
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="personCache"></property>
<!-- Enable readThrough-->
<property name="readThrough" value="true"></property>
<property name="writeThrough" value="true"></property>
<!-- Set cacheStoreFactory-->
<property name="cacheStoreFactory">
<bean class="javax.cache.configuration.FactoryBuilder" factory-method="factoryOf">
<constructor-arg value="com.jguo.ignitepersistentstoredemo.PersonStore"></constructor-arg>
</bean>
</property>
<property name="queryEntities">
<list>
<bean class="org.apache.ignite.cache.QueryEntity">
<property name="keyType" value="java.lang.Long"></property>
<property name="valueType" value="com.jguo.ignitepersistentstoredemo.model.Person"></property>
<property name="fields">
<map>
<entry key="id" value="java.lang.Long"></entry>
<entry key="name" value="java.lang.String"></entry>
<entry key="orgId" value="java.lang.Long"></entry>
<entry key="salary" value="java.lang.Integer"></entry>
</map>
</property>
</bean>
</list>
</property>
</bean>
</list>
</property>
</bean>
Start one Ignite node: bin/ignite.sh config/sample1.xml
Create another XML file, sample2.xml, and modify only the datasource part:
<bean class="org.springframework.jdbc.datasource.DriverManagerDataSource" id="dataSource">
<property name="driverClassName" value="com.mysql.jdbc.Driver"></property>
<property name="url" value="jdbc:mysql://222.xxx.xxx:3306/test"></property>
<property name="username" value="root"></property>
<property name="password" value="xxxx"></property>
</bean>
Start a second Ignite node: bin/ignite.sh config/sample2.xml
Start a client and put some records in the cache personCache.
But only one DB got the data.

CacheConfiguration should be unified across all the nodes. That's why only one config is in effect.
If you need a CacheStore to operate against multiple DBs, you need to create a custom CacheStore that has multiple data sources referring to the different DBs and implements its methods appropriately.
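For illustration, here is a minimal sketch of such a dual-datasource store. The class name, constructor wiring, Person accessors/constructor, and the SQL are assumptions for this example, not code from the PersonStore mentioned above:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.cache.Cache;
import javax.cache.integration.CacheLoaderException;
import javax.cache.integration.CacheWriterException;
import javax.sql.DataSource;
import org.apache.ignite.cache.store.CacheStoreAdapter;

public class DualDbPersonStore extends CacheStoreAdapter<Long, Person> {
    private final DataSource db1; // e.g. the 111.xxx.xxx MySQL instance
    private final DataSource db2; // e.g. the 222.xxx.xxx MySQL instance

    public DualDbPersonStore(DataSource db1, DataSource db2) {
        this.db1 = db1;
        this.db2 = db2;
    }

    /** Write-through: persist the same entry to both databases. */
    @Override public void write(Cache.Entry<? extends Long, ? extends Person> e) {
        writeTo(db1, e.getKey(), e.getValue());
        writeTo(db2, e.getKey(), e.getValue());
    }

    private void writeTo(DataSource ds, Long key, Person p) {
        try (Connection c = ds.getConnection();
             PreparedStatement st = c.prepareStatement(
                 "INSERT INTO Person (id, name, orgId, salary) VALUES (?, ?, ?, ?) " +
                 "ON DUPLICATE KEY UPDATE name = VALUES(name), orgId = VALUES(orgId), salary = VALUES(salary)")) {
            st.setLong(1, key);
            st.setString(2, p.getName());  // assumes a getName() accessor on Person
            st.setLong(3, p.getOrgId());   // assumes a getOrgId() accessor
            st.setInt(4, p.getSalary());   // assumes a getSalary() accessor
            st.executeUpdate();
        }
        catch (SQLException ex) {
            throw new CacheWriterException("Failed to persist key " + key, ex);
        }
    }

    /** Read-through only needs one of the databases; db1 is used here. */
    @Override public Person load(Long key) {
        try (Connection c = db1.getConnection();
             PreparedStatement st = c.prepareStatement(
                 "SELECT id, name, orgId, salary FROM Person WHERE id = ?")) {
            st.setLong(1, key);
            try (ResultSet rs = st.executeQuery()) {
                // Assumes a Person(id, name, orgId, salary) constructor.
                return rs.next() ? new Person(rs.getLong(1), rs.getString(2), rs.getLong(3), rs.getInt(4)) : null;
            }
        }
        catch (SQLException ex) {
            throw new CacheLoaderException("Failed to load key " + key, ex);
        }
    }

    /** Delete-through: remove the row from both databases (body omitted for brevity). */
    @Override public void delete(Object key) {
        // Mirror writeTo() with a DELETE statement against db1 and db2.
    }
}
Note that a store with constructor arguments can no longer be created via FactoryBuilder.factoryOf(Class) as in the XML above; you would need a custom javax.cache.configuration.Factory (or inject the Spring data-source beans into the store, e.g. with Ignite's @SpringResource) to supply both data sources.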

Related

GridGain Near Cache Not storing data

I have a query regarding the setup of the GridGain near cache. We have a single server node with the config listed below and a single thick client connecting successfully to it:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd">
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
<!-- PEER CLASS LOADING -->
<property name="peerClassLoadingEnabled" value="true"/>
<!-- CACHE CONFIG-->
<property name="cacheConfiguration">
<list>
<!-- ENTER CACHE TEMPLATE-->
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="cache1"/>
<property name="cacheMode" value="PARTITIONED"/>
<property name="rebalanceMode" value="SYNC"/>
<property name="nearConfiguration">
<bean class="org.apache.ignite.configuration.NearCacheConfiguration">
<property name="nearEvictionPolicyFactory">
<bean class="org.apache.ignite.cache.eviction.lru.LruEvictionPolicyFactory">
<property name="maxSize" value="100000"/>
</bean>
</property>
</bean>
</property>
</bean>
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="cache2"/>
<property name="cacheMode" value="PARTITIONED"/>
<property name="rebalanceMode" value="SYNC"/>
<property name="nearConfiguration">
<bean class="org.apache.ignite.configuration.NearCacheConfiguration">
<property name="nearEvictionPolicyFactory">
<bean class="org.apache.ignite.cache.eviction.lru.LruEvictionPolicyFactory">
<property name="maxSize" value="100000"/>
</bean>
</property>
</bean>
</property>
</bean>
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="cache3"/>
<property name="cacheMode" value="PARTITIONED"/>
<property name="rebalanceMode" value="SYNC"/>
<property name="nearConfiguration">
<bean class="org.apache.ignite.configuration.NearCacheConfiguration">
<property name="nearEvictionPolicyFactory">
<bean class="org.apache.ignite.cache.eviction.lru.LruEvictionPolicyFactory">
<property name="maxSize" value="100000"/>
</bean>
</property>
</bean>
</property>
</bean>
</list>
</property>
<!-- DISCOVERY-->
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
<property name="namespace" value="gridgain"/>
<property name="serviceName" value="gridgain-service"/>
</bean>
</property>
</bean>
</property>
</bean>
</beans>
In setting the server up like this, my understanding, per the documentation here, was that "Once configured in this way, the near cache is created on any node that requests data from the underlying cache, including both server nodes and client nodes. When you get an instance of the cache, as shown in the following example, the data requests go through the near cache."
IgniteCache<Integer, Integer> cache = ignite.cache("myCache");
int value = cache.get(1);
Based on this, I do not believe I have any need to create the near cache config on our client, and have just implemented the code as:
IgniteCache<Object, Object> cache = ignite.cache(ourCacheName);
The issue I see is that when I peek at the local cache to try to find values in there, after searching for them:
cache_.localPeek(key, CachePeekMode.NEAR)
the objects are not found, despite being searched for several times, and it looks like they are not added to our near cache setup; everything just refers to the underlying cache. Previously we had programmatically created the near cache on the client and it had worked, but we would like to configure the solution on the server if possible. Our client node is just using the default config, if this makes a difference.
Any thoughts on why we are not seeing a near cache?
Thanks,
LS
In order to use the cache I suggest you create the near cache explicitly using the following syntax:
IgniteCache<Integer, Integer> clientCache = client.getOrCreateNearCache(cacheCfg.getName(), nearCfg);
...
clientCache.get(1);
System.out.println(clientCache.localPeek(1, CachePeekMode.NEAR));
There are some tickets like IGNITE-15960 or IGNITE-1163 with discussions about API improvements. I suppose the cache has to be declared on the servers first, and then you would be able to create it explicitly on the clients. I agree that the docs and API are confusing and have to be reworked.
Also, the near cache is local to a node, i.e. you might have it for some clients/servers and not want to create it for others.
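For reference, a minimal client-side sketch of that might look like this. The cache name "cache1" and the eviction size simply mirror the server XML above, and client is assumed to be the thick-client Ignite instance (the classes come from org.apache.ignite.*):
// Near-cache configuration mirroring the server-side LRU settings.
NearCacheConfiguration<Integer, Integer> nearCfg = new NearCacheConfiguration<>();
LruEvictionPolicyFactory<Integer, Integer> evictFactory = new LruEvictionPolicyFactory<>();
evictFactory.setMaxSize(100000);
nearCfg.setNearEvictionPolicyFactory(evictFactory);

// Explicitly create (or get) the near cache on the client node.
IgniteCache<Integer, Integer> clientCache = client.getOrCreateNearCache("cache1", nearCfg);
clientCache.get(1);
System.out.println(clientCache.localPeek(1, CachePeekMode.NEAR));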

WSO2 APIM 4.0.0 - Groups are not assigned to Users

I'm trying to configure API Manager 4.0.0 against OpenLDAP.
The users and groups are correctly fetched from the ldap and I can see them on carbon UI.
When I navigate to "View Users" of one group, I can see the users that are fetched using the attribute "uniqueMember" of the ldap.
But when I navigate to "View Roles" of one user, only "Internal/everyone" is displayed. The groups of the user are not assigned to him.
Is it normal to see the relationship in one direction only?
My OpenLDAP has no "memberOf" attribute in its schema. Maybe it is required?
I am using a fresh all-in-one install from wso2am-4.0.0.zip without modification.
Here is the configuration of the userstore:
<?xml version="1.0" encoding="UTF-8"?><UserStoreManager class="org.wso2.carbon.user.core.ldap.UniqueIDReadWriteLDAPUserStoreManager">
<Property name="ConnectionURL">ldap://xxx:389</Property>
<Property name="ConnectionName">cn=admin,dc=mycompany-dev,dc=fr</Property>
<Property encrypted="true" name="ConnectionPassword">xxx</Property>
<Property name="UserSearchBase">ou=Users,ou=wso2,dc=mycompany-dev,dc=fr</Property>
<Property name="UserEntryObjectClass">person</Property>
<Property name="UserNameAttribute">uid</Property>
<Property name="UserNameSearchFilter">(&(objectClass=person)(uid=?))</Property>
<Property name="UserNameListFilter">(objectClass=person)</Property>
<Property name="UserIDAttribute">scimId</Property>
<Property name="UserIdSearchFilter">(&(objectClass=person)(uid=?))</Property>
<Property name="UserDNPattern"/>
<Property name="DisplayNameAttribute"/>
<Property name="Disabled">false</Property>
<Property name="ReadGroups">true</Property>
<Property name="WriteGroups">true</Property>
<Property name="GroupSearchBase">ou=Groups,ou=wso2,dc=mycompany-dev,dc=fr</Property>
<Property name="GroupEntryObjectClass">groupOfUniqueNames</Property>
<Property name="GroupNameAttribute">cn</Property>
<Property name="GroupNameSearchFilter">(&(objectClass=groupOfUniqueNames)(cn=?))</Property>
<Property name="GroupNameListFilter">(objectClass=groupOfUniqueNames)</Property>
<Property name="RoleDNPattern"/>
<Property name="MembershipAttribute">uniqueMember</Property>
<Property name="MemberOfAttribute"/>
<Property name="BackLinksEnabled">true</Property>
<Property name="UserNameJavaRegEx">[a-zA-Z0-9._-|//]{3,30}$</Property>
<Property name="UserNameJavaScriptRegEx">^[\S]{3,30}$</Property>
<Property name="UsernameJavaRegExViolationErrorMsg">Username pattern policy violated.</Property>
<Property name="PasswordJavaRegEx">^[\S]{5,30}$</Property>
<Property name="PasswordJavaScriptRegEx">^[\S]{5,30}$</Property>
<Property name="PasswordJavaRegExViolationErrorMsg">Password pattern policy violated.</Property>
<Property name="RoleNameJavaRegEx">[a-zA-Z0-9._-|//]{3,30}$</Property>
<Property name="RoleNameJavaScriptRegEx">^[\S]{3,30}$</Property>
<Property name="LDAPInitialContextFactory">com.sun.jndi.ldap.LdapCtxFactory</Property>
<Property name="DateAndTimePattern">Date And Time Pattern</Property>
<Property name="CaseInsensitiveUsername">true</Property>
<Property name="BulkImportSupported">true</Property>
<Property name="EmptyRolesAllowed">true</Property>
<Property name="PasswordHashMethod">PLAIN_TEXT</Property>
<Property name="MultiAttributeSeparator">,</Property>
<Property name="MaxUserNameListLength">100</Property>
<Property name="MaxRoleNameListLength">100</Property>
<Property name="kdcEnabled">false</Property>
<Property name="defaultRealmName">WSO2.ORG</Property>
<Property name="UserRolesCacheEnabled">false</Property>
<Property name="ConnectionPoolingEnabled">false</Property>
<Property name="LDAPConnectionTimeout">5000</Property>
<Property name="ReadTimeout">5000</Property>
<Property name="RetryAttempts">0</Property>
<Property name="CountRetrieverClass"/>
<Property name="java.naming.ldap.attributes.binary"/>
<Property name="ClaimOperationsSupported">true</Property>
<Property name="MembershipAttributeRange">0</Property>
<Property name="UserCacheExpiryMilliseconds"/>
<Property name="UserDNCacheEnabled">true</Property>
<Property name="StartTLSEnabled">false</Property>
<Property name="ConnectionRetryDelay">120000</Property>
<Property name="ImmutableAttributes"/>
<Property name="TimestampAttributes"/>
<Property name="DomainName">CompanyUsers</Property>
<Property name="Description"/>
</UserStoreManager>
FYI, I found the answer after activating debug logs:
DEBUG
{org.wso2.carbon.user.core.ldap.UniqueIDReadOnlyLDAPUserStoreManager} - No UserID found for the property: uid, value: user1, in domain: COMPANYUSERS
My user was incorrectly fetched because scimId does not exist in my LDAP:
<Property name="UserIDAttribute">scimId</Property>
I changed it to uid and it is working now:
<Property name="UserIDAttribute">uid</Property>

Ignite QueryEntity without index

I want to set a QueryEntity with indexes in the Ignite configuration, but it is not allowing me to set them. I am using 2.9.0. If I don't set indexes, it doesn't create the tables. If I try to set them, it throws the exception mentioned in the link below.
Getting Caused by: class org.apache.ignite.binary.BinaryInvalidTypeException: in ignite 2.9.0
Can anyone tell me how to set query entity without indices?
Maybe I do not understand, but I am trying this straight from the documentation, and it works:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:util="http://www.springframework.org/schema/util"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/util
http://www.springframework.org/schema/util/spring-util.xsd">
<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
<property name="cacheConfiguration">
<list>
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="mycache"/>
<!-- Configure query entities -->
<property name="queryEntities">
<list>
<bean class="org.apache.ignite.cache.QueryEntity">
<!-- Setting indexed type's key class -->
<property name="keyType" value="java.lang.Long"/>
<!-- Setting indexed type's value class -->
<property name="valueType" value="org.apache.ignite.examples.Person"/>
<!-- Defining fields that will be either indexed or queryable. Indexed fields are added to 'indexes' list below.-->
<property name="fields">
<map>
<entry key="id" value="java.lang.Long"/>
<entry key="name" value="java.lang.String"/>
<entry key="salary" value="java.lang.Long "/>
</map>
</property>
<!-- Defining indexed fields.-->
<property name="indexes">
<list>
<!-- Single field (aka. column) index -->
<bean class="org.apache.ignite.cache.QueryIndex">
<constructor-arg value="id"/>
</bean>
<!-- Group index. -->
<bean class="org.apache.ignite.cache.QueryIndex">
<constructor-arg>
<list>
<value>id</value>
<value>salary</value>
</list>
</constructor-arg>
<constructor-arg value="SORTED"/>
</bean>
</list>
</property>
</bean>
</list>
</property>
</bean>
</list>
</property>
</bean>
</beans>
Can you show your configuration?
The issue got resolved. When setting a QueryEntity, both the key fields and the fields of the QueryEntity are required to be set.
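For example, a programmatic sketch with both fields and key fields set might look roughly like this (the value class and cache name are illustrative):
QueryEntity qe = new QueryEntity(Long.class.getName(), "org.apache.ignite.examples.Person");

// Queryable fields.
LinkedHashMap<String, String> fields = new LinkedHashMap<>();
fields.put("id", Long.class.getName());
fields.put("name", String.class.getName());
fields.put("salary", Long.class.getName());
qe.setFields(fields);

// Declare which of the fields belong to the key, per the resolution above.
qe.setKeyFields(Collections.singleton("id"));

CacheConfiguration<Long, Object> ccfg = new CacheConfiguration<>("mycache");
ccfg.setQueryEntities(Collections.singletonList(qe));
If the key is a POJO rather than java.lang.Long, the key fields would list the POJO's fields instead.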

Apache Ignite data loss when one node goes down

I'm new to Ignite and I'm trying to test the data quality and availability of an Ignite cluster.
I use the XML configuration below for setting up the cluster:
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="socketTimeout" value="50000" />
<property name="networkTimeout" value="50000" />
<property name="reconnectCount" value="5" />
<property name="ipFinder">
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
<property name="addresses">
<list>
<value>x.x.x.1:47500..47509</value>
<value>x.x.x.2:47500..47509</value>
</list>
</property>
</bean>
</property>
</bean>
</property>
</bean>
And the CacheConfiguration is:
<bean id="cache-template-bean" class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="CACHE_TEMPLATE*"/>
<property name="cacheMode" value="PARTITIONED" />
<property name="backups" value="1" />
<!-- <property name="backups" value="2" />
<property name="backups" value="3" /> -->
<property name="atomicityMode" value="TRANSACTIONAL" />
<property name="writeSynchronizationMode" value="PRIMARY_SYNC" />
<property name="rebalanceBatchSize" value="#{4 * 1024 * 1024}" />
<property name="rebalanceMode" value="ASYNC" />
<property name="statisticsEnabled" value="true" />
<property name="rebalanceBatchesPrefetchCount" value="4" />
<property name="defaultLockTimeout" value="5000" />
<property name="readFromBackup" value="true" />
<property name="queryParallelism" value="6" />
<property name="nodeFilter">
<bean class="org.apache.ignite.util.AttributeNodeFilter">
<constructor-arg>
<map>
<entry key="ROLE" value="data.compute"/>
</map>
</constructor-arg>
</bean>
</property>
</bean>
My scenario was:
1. Loaded 5 million records when all 3 nodes were up.
2. Brought one node down.
3. The count showed 3.75 million (data loss).
4. Bringing the node back up counts 5 million again.
I tried backups of 1, 2, and 3; all resulted in the same apparent data loss. As per the Ignite documentation, it appears this data loss should not happen. If this is fixed, I can try adding data while a node is down and check how it behaves.
Any suggestions, please?
Ash
The main idea of the baseline topology and persistence is to prevent unnecessary rebalancing and to store data only on the specified server nodes. When a baseline node is stopped, it is expected that it will come back soon, so the rebalance process is not triggered. You can exclude the node from the baseline using the API or the control.sh utility.
IgniteCache.size() returns the number of primary entries. So when a baseline node is stopped, size() shows a smaller number, indicating that some primary entries are not accessible.
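A quick way to see this from code, assuming cache is one of your IgniteCache instances, is to compare the counts per peek mode:
// Primary entries only -- this is what a plain size() call reports.
int primary = cache.size(CachePeekMode.PRIMARY);
// Primary plus backup copies held across the cluster.
int withBackups = cache.size(CachePeekMode.PRIMARY, CachePeekMode.BACKUP);
System.out.println("primary=" + primary + ", primary+backup=" + withBackups);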
In your case the data is not lost, for two reasons:
1. The data is persisted in backup entries on the alive baseline nodes.
2. The primary and backup entries located on the stopped node will come back to the cluster after the node is started.
[1] https://apacheignite.readme.io/docs/baseline-topology

EclipseLink is not logging at FINER or FINEST level, only FINE

I have this in persistence.xml:
<persistence-unit name="callrecunit">
...
<properties>
<property name="eclipselink.logging.level.sql" value="FINER"/>
<property name="eclipselink.logging.level" value="FINER"/>
<property name="eclipselink.logging.parameters" value="true"/>
</properties>
but the most detailed logging level I actually get is FINE. The entityManagerFactory is defined as:
<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean" >
<property name="persistenceUnitName" value="callrecunit"/>
<property name="dataSource" ref="dataSource"/>
<property name="jpaVendorAdapter">
<bean class="org.springframework.orm.jpa.vendor.EclipseLinkJpaVendorAdapter">
<property name="showSql" value="true"/>
<property name="databasePlatform" value="org.eclipse.persistence.platform.database.PostgreSQLPlatform"/>
</bean>
</property>
<property name="jpaDialect">
<bean class="org.springframework.orm.jpa.vendor.EclipseLinkJpaDialect"/>
</property>
<property name="loadTimeWeaver">
<bean class="org.springframework.instrument.classloading.InstrumentationLoadTimeWeaver"/>
</property>
</bean>
I know I have logging messages at FINER or FINEST, but they are never logged.
It turned out the problem is in the EclipseLinkJpaVendorAdapter definition. I had to comment out:
<!--<property name="showSql" value="true"/>-->
And it started logging at the more detailed levels.
I did some debugging: if showSql is enabled, it is treated as the FINE level, which is then not overwritten by the persistence.xml settings. Only null logging levels are overwritten, and since FINE is a valid value, the more detailed levels are never applied. Maybe a bug?
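For what it's worth, a Java-config sketch of the same adapter with showSql left unset (names taken from the XML above; this is an illustration, not a confirmed EclipseLink fix beyond removing showSql) would be:
EclipseLinkJpaVendorAdapter adapter = new EclipseLinkJpaVendorAdapter();
adapter.setDatabasePlatform("org.eclipse.persistence.platform.database.PostgreSQLPlatform");
// adapter.setShowSql(true);  // intentionally omitted: enabling it sets the level to FINE,
//                            // which the FINER/FINEST values in persistence.xml cannot override

LocalContainerEntityManagerFactoryBean emf = new LocalContainerEntityManagerFactoryBean();
emf.setPersistenceUnitName("callrecunit");
emf.setDataSource(dataSource);  // the dataSource bean from the surrounding context
emf.setJpaVendorAdapter(adapter);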