I have this in persistence.xml:
<persistence-unit name="callrecunit">
...
<properties>
<property name="eclipselink.logging.level.sql" value="FINER"/>
<property name="eclipselink.logging.level" value="FINER"/>
<property name="eclipselink.logging.parameters" value="true"/>
</properties>
but the most detailed logging level that actually takes effect is FINE. My Spring entityManagerFactory definition:
<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean" >
<property name="persistenceUnitName" value="callrecunit"/>
<property name="dataSource" ref="dataSource"/>
<property name="jpaVendorAdapter">
<bean class="org.springframework.orm.jpa.vendor.EclipseLinkJpaVendorAdapter">
<property name="showSql" value="true"/>
<property name="databasePlatform" value="org.eclipse.persistence.platform.database.PostgreSQLPlatform"/>
</bean>
</property>
<property name="jpaDialect">
<bean class="org.springframework.orm.jpa.vendor.EclipseLinkJpaDialect"/>
</property>
<property name="loadTimeWeaver">
<bean class="org.springframework.instrument.classloading.InstrumentationLoadTimeWeaver"/>
</property>
</bean>
I know there are logging messages at FINER or FINEST, but they are never logged.
So the problem is in the EclipseLinkJpaVendorAdapter definition. I had to comment out:
<!--<property name="showSql" value="true"/>-->
and then it started logging at the more detailed level.
I did some debugging: if showSql is enabled, it is treated as the FINE logging level, which is then not overwritten by the persistence.xml settings. Only null logging levels get overwritten, and since FINE is a valid level, the more detailed levels are never applied. Maybe a bug?
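For reference, one way to keep the detailed level without fighting showSql is to pass the logging properties through the factory bean's jpaPropertyMap; properties passed programmatically should take precedence over persistence.xml values per the JPA spec. A minimal sketch, assuming the same entityManagerFactory bean as above with showSql left out of the vendor adapter:
<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
    <property name="persistenceUnitName" value="callrecunit"/>
    <property name="dataSource" ref="dataSource"/>
    <!-- vendor adapter configured without showSql, so the level is not pinned to FINE -->
    <property name="jpaPropertyMap">
        <map>
            <entry key="eclipselink.logging.level" value="FINEST"/>
            <entry key="eclipselink.logging.level.sql" value="FINEST"/>
            <entry key="eclipselink.logging.parameters" value="true"/>
        </map>
    </property>
</bean>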
I have a question regarding the setup of the GridGain near cache. We have a single server node with the config listed below, and a single thick client connecting successfully to it:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd">
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
<!-- PEER CLASS LOADING -->
<property name="peerClassLoadingEnabled" value="true"/>
<!-- CACHE CONFIG-->
<property name="cacheConfiguration">
<list>
<!-- ENTER CACHE TEMPLATE-->
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="cache1"/>
<property name="cacheMode" value="PARTITIONED"/>
<property name="rebalanceMode" value="SYNC"/>
<property name="nearConfiguration">
<bean class="org.apache.ignite.configuration.NearCacheConfiguration">
<property name="nearEvictionPolicyFactory">
<bean class="org.apache.ignite.cache.eviction.lru.LruEvictionPolicyFactory">
<property name="maxSize" value="100000"/>
</bean>
</property>
</bean>
</property>
</bean>
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="cache2"/>
<property name="cacheMode" value="PARTITIONED"/>
<property name="rebalanceMode" value="SYNC"/>
<property name="nearConfiguration">
<bean class="org.apache.ignite.configuration.NearCacheConfiguration">
<property name="nearEvictionPolicyFactory">
<bean class="org.apache.ignite.cache.eviction.lru.LruEvictionPolicyFactory">
<property name="maxSize" value="100000"/>
</bean>
</property>
</bean>
</property>
</bean>
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="cache3"/>
<property name="cacheMode" value="PARTITIONED"/>
<property name="rebalanceMode" value="SYNC"/>
<property name="nearConfiguration">
<bean class="org.apache.ignite.configuration.NearCacheConfiguration">
<property name="nearEvictionPolicyFactory">
<bean class="org.apache.ignite.cache.eviction.lru.LruEvictionPolicyFactory">
<property name="maxSize" value="100000"/>
</bean>
</property>
</bean>
</property>
</bean>
</list>
</property>
<!-- DISCOVERY-->
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
<property name="namespace" value="gridgain"/>
<property name="serviceName" value="gridgain-service"/>
</bean>
</property>
</bean>
</property>
</bean>
</beans>
In setting the server up like this, it was my understanding, as per the documentation here, that "Once configured in this way, the near cache is created on any node that requests data from the underlying cache, including both server nodes and client nodes. When you get an instance of the cache, as shown in the following example, the data requests go through the near cache."
IgniteCache<Integer, Integer> cache = ignite.cache("myCache");
int value = cache.get(1);
Based on this, I do not believe I need to create the near cache config on our client, and I have just implemented the code as:
IgniteCache<Object, Object> cache = ignite.cache(ourCacheName);
The issue I see is that when I peek at the local cache to try to find values in there, after having searched for them:
cache_.localPeek(key, CachePeekMode.NEAR)
The objects are not found, despite having been searched for several times, and it looks like they are not added to our near cache; everything just refers to the underlying cache. Previously we had programmatically created the near cache on the client and it had worked, but we would like to configure the solution on the server if possible. Our client node is just using the default config, if that makes a difference.
Any thoughts why we are not seeing a near cache?
Thanks,
LS
In order to use the near cache I suggest you create it explicitly on the client using the following syntax:
IgniteCache<Integer, Integer> clientCache = client.getOrCreateNearCache(cacheCfg.getName(), nearCfg);
...
clientCache.get(1);
System.out.println(clientCache.localPeek(1, CachePeekMode.NEAR));
There are some tickets, like IGNITE-15960 or IGNITE-1163, with discussions about API improvements. I suppose the cache has to be declared on the servers first, and then you would be able to create it explicitly on the clients. Agreed, the docs and the API are confusing and need to be reworked.
Also, the near cache is local to a node, i.e. you might want one on some clients/servers and not want to create it on others.
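For completeness, a minimal sketch of the explicit client-side creation, reusing the eviction settings from the server template above (the cache1 name and the 100000 max size are taken from that config; client is the client-node Ignite instance from the snippet above):
NearCacheConfiguration<Object, Object> nearCfg = new NearCacheConfiguration<>();
// Match the server-side template: LRU eviction capped at 100000 entries.
nearCfg.setNearEvictionPolicyFactory(new LruEvictionPolicyFactory<>(100000));
IgniteCache<Object, Object> clientCache = client.getOrCreateNearCache("cache1", nearCfg);
clientCache.get(1);                   // goes through, and populates, the near cache
System.out.println(clientCache.localPeek(1, CachePeekMode.NEAR));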
I want to use a network shared folder as the persistent store path in DataStorageConfiguration, but Ignite gets stuck there.
Can anyone please tell me how to do this in Ignite?
I wouldn’t recommend putting Ignite’s persistent files on a network volume. The performance and locking characteristics often lead to problems. Fast, local disks are very much preferable.
But to directly answer your question, as per the documentation:
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
<property name="dataStorageConfiguration">
<bean class="org.apache.ignite.configuration.DataStorageConfiguration">
<property name="defaultDataRegionConfiguration">
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
<property name="persistenceEnabled" value="true"/>
</bean>
</property>
<property name="storagePath" value="/opt/storage"/>
<property name="walPath" value="/opt/wal"/>
<property name="walArchivePath" value="/opt/walarch"/>
</bean>
</property>
</bean>
I'm battling to configure Apache Ignite to distribute partitions in a zone-aware manner. I have Ignite 2.8.0 with 4 nodes running as StatefulSet pods in GKE 1.14, split across two zones. I followed the guide and the example:
Propagated the zone names into the pods under the AVAILABILITY_ZONE env var.
Then, using the Web Console, I verified that this env var was loaded correctly on each node.
I set up a cache template in the node XML config as shown below and created a cache from it using GET /ignite?cmd=getorcreate&cacheName=zone-aware-cache&templateName=zone-aware-cache (I can't see the affinityBackupFilter settings in the UI, but other parameters from the template got applied, so I assume it worked).
To simplify verification of the partition distribution, the partition count is set to just 2. After creating the cache I observed the following partition distribution:
Then I mapped the node IDs to the values of the AVAILABILITY_ZONE env var, as reported by the nodes, with the following results:
AA146954 us-central1-a
3943ECC8 us-central1-c
F7B7AB67 us-central1-a
A94EE82C us-central1-c
As one can easily see, partition 0's primary/backup copies reside on nodes 3943ECC8 and A94EE82C, which are both in the same zone. What am I missing to make it work?
Another odd thing is that when specifying a low partition count (e.g. 2 or 4), only 3 out of 4 nodes are used. When using 1024 partitions, all nodes are utilized, but the problem still exists: 346 out of 1024 partitions had their primary/backup colocated in the same zone.
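For reference, the colocation count above can be reproduced with the affinity API (org.apache.ignite.cache.affinity.Affinity); a minimal sketch, assuming an Ignite instance named ignite, the zone-aware-cache created above, and AVAILABILITY_ZONE visible as a node attribute (Ignite exposes environment variables as node attributes):
// Count partitions whose primary and backup(s) ended up in the same zone.
Affinity<Object> aff = ignite.affinity("zone-aware-cache");
int colocated = 0;
for (int part = 0; part < aff.partitions(); part++) {
    Collection<ClusterNode> copies = aff.mapPartitionToPrimaryAndBackups(part);
    Set<Object> zones = new HashSet<>();
    for (ClusterNode node : copies)
        zones.add(node.attribute("AVAILABILITY_ZONE"));
    if (zones.size() < copies.size()) // fewer distinct zones than copies => colocated
        colocated++;
}
System.out.println(colocated + " of " + aff.partitions() + " partitions are zone-colocated");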
Here is my node config XML:
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd">
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
<!-- Enabling Apache Ignite Persistent Store. -->
<property name="dataStorageConfiguration">
<bean class="org.apache.ignite.configuration.DataStorageConfiguration">
<property name="defaultDataRegionConfiguration">
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
<property name="persistenceEnabled" value="true"/>
</bean>
</property>
</bean>
</property>
<!-- Explicitly configure TCP discovery SPI to provide list of initial nodes. -->
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<!-- Enables Kubernetes IP finder and setting custom namespace and service names. -->
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
<property name="namespace" value="ignite"/>
</bean>
</property>
</bean>
</property>
<property name="cacheConfiguration">
<list>
<bean id="zone-aware-cache-template" abstract="true" class="org.apache.ignite.configuration.CacheConfiguration">
<!-- when you create a template via XML configuration, you must add an asterisk to the name of the template -->
<property name="name" value="zone-aware-cache*"/>
<property name="cacheMode" value="PARTITIONED"/>
<property name="atomicityMode" value="ATOMIC"/>
<property name="backups" value="1"/>
<property name="readFromBackup" value="true"/>
<property name="partitionLossPolicy" value="READ_WRITE_SAFE"/>
<property name="copyOnRead" value="true"/>
<property name="eagerTtl" value="true"/>
<property name="statisticsEnabled" value="true"/>
<property name="affinity">
<bean class="org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction">
<property name="partitions" value="2"/> <!-- for debugging only! -->
<property name="excludeNeighbors" value="true"/>
<property name="affinityBackupFilter">
<bean class="org.apache.ignite.cache.affinity.rendezvous.ClusterNodeAttributeAffinityBackupFilter">
<constructor-arg>
<array value-type="java.lang.String">
<!-- Backups must go to different AZs -->
<value>AVAILABILITY_ZONE</value>
</array>
</constructor-arg>
</bean>
</property>
</bean>
</property>
</bean>
</list>
</property>
</bean>
</beans>
Update: it turns out that excludeNeighbors false/true makes or breaks zone awareness. I'm not sure why it didn't work with excludeNeighbors=false previously for me. I made some scripts to automate my testing, and now it's definite that it's the excludeNeighbors setting. It's all here: https://github.com/doitintl/ignite-gke. Regardless, I also opened a bug in the Ignite Jira: https://issues.apache.org/jira/browse/IGNITE-12896. Many thanks to #alamar for his suggestions.
I recommend setting excludeNeighbors to false. It is true in your case, but it is not needed, and I get the correct partition mapping when I set it to false (of course, I also ran all four nodes locally).
The environment property was enough; there was no need to add it manually to the user attributes.
I've been following this documentation on how to connect to an external LDAP server from WSO2 Identity Server.
Now I am stuck at running the product. When I run WSO2 IS, I get an error saying that the admin user does not exist in the PRIMARY user store. If I use the existing configuration, everything works fine, so I think this might be caused by the changes I made in user-mgt.xml and tenant-mgt.xml.
This is my external LDAP configuration:
[Screenshot: LDAP user and group structure]
This is what my user-mgt.xml file looks like:
<UserManager>
<Realm>
<Configuration>
<AddAdmin>true</AddAdmin>
<AdminRole>admin</AdminRole>
<AdminUser>
<UserName>admin</UserName>
<Password>*****</Password>
</AdminUser>
<EveryOneRoleName>everyone</EveryOneRoleName> <!-- By default users in this role sees the registry root -->
<Property name="isCascadeDeleteEnabled">true</Property>
<Property name="initializeNewClaimManager">true</Property>
<Property name="dataSource">jdbc/WSO2CarbonDB</Property>
</Configuration>
<UserStoreManager class="org.wso2.carbon.user.core.ldap.ReadWriteLDAPUserStoreManager">
<Property name="TenantManager">org.wso2.carbon.user.core.tenant.CommonHybridLDAPTenantManager</Property>
<Property name="ConnectionURL">ldap://10.251.45.200:389</Property>
<Property name="ConnectionName">cn=ldap,dc=ei,dc=local</Property>
<Property name="ConnectionPassword">P#ssw0rd</Property>
<Property name="AnonymousBind">false</Property>
<Property name="UserSearchBase">ou=People,dc=ei,dc=local</Property>
<Property name="UserEntryObjectClass">identityPerson</Property>
<Property name="UserNameAttribute">uid</Property>
<Property name="UserNameSearchFilter">(&(objectClass=person)(uid=?))</Property>
<Property name="UserNameListFilter">(objectClass=person)</Property>
<Property name="DisplayNameAttribute"/>
<Property name="ReadGroups">true</Property>
<Property name="WriteGroups">true</Property>
<Property name="GroupSearchBase">ou=Group,dc=ei,dc=local</Property>
<Property name="GroupEntryObjectClass">groupOfNames</Property>
<Property name="GroupNameAttribute">cn</Property>
<Property name="GroupNameSearchFilter">(&(objectClass=groupOfNames)(cn=?))</Property>
<Property name="GroupNameListFilter">(objectClass=groupOfNames)</Property>
<Property name="MembershipAttribute">member</Property>
<Property name="SCIMEnabled">true</Property>
<Property name="IsBulkImportSupported">false</Property>
<Property name="EmptyRolesAllowed">false</Property>
</UserStoreManager>
</Realm>
</UserManager>
Please help me here, I am so stuck at this step. Any advice would be great!
I hope to use Ignite to sync records to multiple MySQL databases. For example, when records go into cacheA, they should be persisted to both db1 and db2.
Is this possible?
What I did is:
Write a PersonStore class, build it as a jar, and place it in libs\
First, sample1.xml is configured as:
<bean class="org.springframework.jdbc.datasource.DriverManagerDataSource" id="dataSource">
<property name="driverClassName" value="com.mysql.jdbc.Driver"></property>
<property name="url" value="jdbc:mysql://111.xxx.xxx:3306/test"></property>
<property name="username" value="root"></property>
<property name="password" value="xxxx"></property>
</bean>
<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
<property name="peerClassLoadingEnabled" value="true"/>
<property name="cacheConfiguration">
<list>
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="personCache"></property>
<!-- Enable readThrough-->
<property name="readThrough" value="true"></property>
<property name="writeThrough" value="true"></property>
<!-- Set cacheStoreFactory-->
<property name="cacheStoreFactory">
<bean class="javax.cache.configuration.FactoryBuilder" factory-method="factoryOf">
<constructor-arg value="com.jguo.ignitepersistentstoredemo.PersonStore"></constructor-arg>
</bean>
</property>
<property name="queryEntities">
<list>
<bean class="org.apache.ignite.cache.QueryEntity">
<property name="keyType" value="java.lang.Long"></property>
<property name="valueType" value="com.jguo.ignitepersistentstoredemo.model.Person"></property>
<property name="fields">
<map>
<entry key="id" value="java.lang.Long"></entry>
<entry key="name" value="java.lang.String"></entry>
<entry key="orgId" value="java.lang.Long"></entry>
<entry key="salary" value="java.lang.Integer"></entry>
</map>
</property>
</bean>
</list>
</property>
</bean>
</list>
</property>
</bean>
Start one Ignite node: bin/ignite.sh config/sample1.xml
Create another XML file, sample2.xml, and only modify the datasource part:
<bean class="org.springframework.jdbc.datasource.DriverManagerDataSource" id="dataSource">
<property name="driverClassName" value="com.mysql.jdbc.Driver"></property>
<property name="url" value="jdbc:mysql://222.xxx.xxx:3306/test"></property>
<property name="username" value="root"></property>
<property name="password" value="xxxx"></property>
</bean>
Start a second Ignite node: bin/ignite.sh config/sample2.xml
Start a client and put some records into the personCache cache.
But only one DB got the data.
CacheConfiguration has to be unified across all the nodes; that's why only one of your configs is in effect.
If you need a CacheStore that operates against multiple DBs, you need to create a custom CacheStore which holds multiple data sources referring to the different DBs and implements its methods accordingly.
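Here is a minimal sketch of such a store, based on the PersonStore from the question (the DualDbPersonStore name, the person table layout, the Person constructor/getters, and the hardcoded connection settings are illustrative assumptions). Because it has a no-arg constructor, it still works with the FactoryBuilder.factoryOf wiring shown above:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Arrays;
import java.util.List;
import javax.cache.Cache;
import javax.cache.integration.CacheLoaderException;
import javax.cache.integration.CacheWriterException;
import javax.sql.DataSource;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

public class DualDbPersonStore extends CacheStoreAdapter<Long, Person> {
    private final List<DataSource> dataSources;

    public DualDbPersonStore() {
        // Hardcoded for illustration; in practice inject the URLs/credentials.
        dataSources = Arrays.asList(
            new DriverManagerDataSource("jdbc:mysql://111.xxx.xxx:3306/test", "root", "xxxx"),
            new DriverManagerDataSource("jdbc:mysql://222.xxx.xxx:3306/test", "root", "xxxx"));
    }

    @Override public Person load(Long key) {
        // Read-through from the first DB only; both DBs are assumed to be in sync.
        try (Connection c = dataSources.get(0).getConnection();
             PreparedStatement ps = c.prepareStatement(
                 "SELECT id, name, org_id, salary FROM person WHERE id = ?")) {
            ps.setLong(1, key);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next()
                    ? new Person(rs.getLong(1), rs.getString(2), rs.getLong(3), rs.getInt(4))
                    : null;
            }
        }
        catch (SQLException e) {
            throw new CacheLoaderException(e);
        }
    }

    @Override public void write(Cache.Entry<? extends Long, ? extends Person> entry) {
        // Write-through: upsert the record into every configured DB.
        Person p = entry.getValue();
        for (DataSource ds : dataSources) {
            try (Connection c = ds.getConnection();
                 PreparedStatement ps = c.prepareStatement(
                     "REPLACE INTO person (id, name, org_id, salary) VALUES (?, ?, ?, ?)")) {
                ps.setLong(1, entry.getKey());
                ps.setString(2, p.getName());
                ps.setLong(3, p.getOrgId());
                ps.setInt(4, p.getSalary());
                ps.executeUpdate();
            }
            catch (SQLException e) {
                throw new CacheWriterException(e);
            }
        }
    }

    @Override public void delete(Object key) {
        // Remove the record from every configured DB.
        for (DataSource ds : dataSources) {
            try (Connection c = ds.getConnection();
                 PreparedStatement ps = c.prepareStatement("DELETE FROM person WHERE id = ?")) {
                ps.setLong(1, (Long) key);
                ps.executeUpdate();
            }
            catch (SQLException e) {
                throw new CacheWriterException(e);
            }
        }
    }
}
Since the store itself knows about both databases, sample1.xml and sample2.xml collapse into a single file with one cacheStoreFactory entry, which keeps the CacheConfiguration unified across the cluster.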