I'm trying to retrieve the cached value for every element in a JavaPairRDD. I'm using the LOCAL cache mode because I want to minimize shuffling of the cached data. The Ignite nodes are started in embedded mode within a Spark job. The following code works fine when I run it on a single node; however, when I run it on a cluster of 5 machines, I get zero results.
My first attempt used the IgniteRDD sql method:
dataRDD.sql("SELECT v.id,v.sub,v.obj FROM VPRow v JOIN table(id bigint = ?) i ON v.id = i.id",new Object[] {objKeyEntries.toArray()});
where objKeyEntries is a collected set of entries from an RDD. My second attempt used affinityRun:
JavaPairRDD<Long, VPRow> objEntries = objKeyEntries.mapPartitionsToPair(new PairFlatMapFunction<Iterator<Tuple2<Long, Boolean>>, Long, VPRow>() {
@Override
public Iterator<Tuple2<Long, VPRow>> call(Iterator<Tuple2<Long, Boolean>> tuple2Iterator) throws Exception {
ApplicationContext ctx = new ClassPathXmlApplicationContext("ignite-rdd.xml");
IgniteConfiguration igniteConfiguration = (IgniteConfiguration) ctx.getBean("ignite.cfg");
Ignite ignite = Ignition.getOrStart(igniteConfiguration);
IgniteCache<Long, VPRow> cache = ignite.getOrCreateCache("dataRDD");
ArrayList<Tuple2<Long,VPRow>> lst = new ArrayList<>();
while(tuple2Iterator.hasNext()) {
Tuple2<Long, Boolean> val = tuple2Iterator.next();
ignite.compute().affinityRun("dataRDD", val._1(),()->{
lst.add(new Tuple2<>(val._1(),cache.get(val._1())));
});
}
return lst.iterator();
}
});
The following is the ignite-rdd.xml configuration file:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd">
<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
<property name="memoryConfiguration">
<bean class="org.apache.ignite.configuration.MemoryConfiguration">
<property name="systemCacheInitialSize" value="#{100 * 1024 * 1024}"/>
<property name="defaultMemoryPolicyName" value="default_mem_plc"/>
<property name="memoryPolicies">
<list>
<bean class="org.apache.ignite.configuration.MemoryPolicyConfiguration">
<property name="name" value="default_mem_plc"/>
<property name="initialSize" value="#{5 * 1024 * 1024 * 1024}"/>
</bean>
</list>
</property>
</bean>
</property>
<property name="cacheConfiguration">
<list>
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<!-- Set a cache name. -->
<property name="name" value="dataRDD"/>
<!-- Set a cache mode. -->
<property name="cacheMode" value="LOCAL"/>
<!-- Index Integer pairs used in the example. -->
<property name="indexedTypes">
<list>
<value>java.lang.Long</value>
<value>edu.code.VPRow</value>
</list>
</property>
<property name="affinity">
<bean class="org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction">
<property name="partitions" value="50"/>
</bean>
</property>
</bean>
</list>
</property>
<!-- Explicitly configure TCP discovery SPI to provide list of initial nodes. -->
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
<property name="addresses">
<list>
<value>[IP5]</value>
<value>[IP4]</value>
<value>[IP3]</value>
<value>[IP2]</value>
<value>[IP1]</value>
</list>
</property>
</bean>
</property>
</bean>
</property>
</bean>
</beans>
Are you sure that you need to use LOCAL cache mode?
Most likely you filled the cache on only one node, and the LOCAL caches on the other nodes are still empty.
affinityRun doesn't work because you have a LOCAL cache, not a PARTITIONED one, so it's not possible to determine the owner node for a key with the AffinityFunction.
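Below is a minimal, standalone sketch of that suggestion (it is not the asker's Spark job): the cache is created as PARTITIONED with the same name and indexed types as above, and affinityCall() returns the looked-up value to the caller instead of adding it to a list that only exists on the remote node. The key value is hypothetical.

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class PartitionedLookupSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Same cache name and indexed types as above, but PARTITIONED so the
            // affinity function can map every key to an owner node.
            CacheConfiguration<Long, VPRow> ccfg = new CacheConfiguration<>("dataRDD");
            ccfg.setCacheMode(CacheMode.PARTITIONED);
            ccfg.setIndexedTypes(Long.class, VPRow.class);
            IgniteCache<Long, VPRow> cache = ignite.getOrCreateCache(ccfg);

            long key = 42L; // hypothetical key
            cache.put(key, new VPRow(key, "sub", "obj"));

            // Executed on the node that owns 'key'; the value is returned to the caller.
            VPRow row = ignite.compute().affinityCall("dataRDD", key, () -> {
                IgniteCache<Long, VPRow> local = Ignition.localIgnite().cache("dataRDD");
                return local.get(key);
            });
            System.out.println(key + " -> " + row);
        }
    }
}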
I have the following configuration file:
<bean abstract="true" id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
<property name="peerClassLoadingEnabled" value="true"/>
<property name="includeEventTypes">
<list>
<!--Task execution events-->
<util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_STARTED"/>
<util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_FINISHED"/>
<util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_FAILED"/>
</list>
</property>
<property name="metricsUpdateFrequency" value="10000"/>
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
<property name="addresses">
<list>
<!-- In distributed environment, replace with actual host IP address. -->
<value>127.0.0.1:47500..47509</value>
<value>127.0.0.1:48500..48509</value>
</list>
</property>
</bean>
</property>
</bean>
</property>
<!-- Enabling the required Failover SPI. -->
<property name="failoverSpi">
<bean class="org.apache.ignite.spi.failover.jobstealing.JobStealingFailoverSpi"/>
</property>
<property name="collisionSpi">
<bean class="org.apache.ignite.spi.collision.jobstealing.JobStealingCollisionSpi">
<property name="activeJobsThreshold" value="50"/>
<property name="waitJobsThreshold" value="0"/>
<property name="messageExpireTime" value="1000"/>
<property name="maximumStealingAttempts" value="10"/>
<property name="stealingEnabled" value="true"/>
</bean>
</property>
</bean>
The closure gets executed over the server nodes in the grid as expected.
When we add a new node to the grid by executing the below command while the closure is executing, the existing nodes acknowledge the addition of the new node, but the closure is not distributed to the newly added node.
Below is my closure implementation
@Override
public AccruedSimpleInterest apply(SimpleInterestParameter simpleInterestParameter) {
BigDecimal si = simpleInterestParameter.getPrincipal()
.multiply(new BigDecimal(simpleInterestParameter.getYears()))
.multiply(new BigDecimal(simpleInterestParameter.getRate())).divide(SimpleInterestClosure.HUNDRED);
System.out.println("Calculated SI for id=" + simpleInterestParameter.getId() + " SI=" + si.toPlainString());
return new AccruedSimpleInterest(si, simpleInterestParameter);
}
Below is the main class
public static void main(String... args) throws IgniteException, IOException {
Factory<SimpleInterestClosure> siClosureFactory = FactoryBuilder.factoryOf(new SimpleInterestClosure());
ClassPathResource ress = new ClassPathResource("example-ignite-poc.xml");
File file = new File(ress.getPath());
try (Ignite ignite = Ignition.start(file.getPath())) {
System.out.println("Started Ignite Cluster");
IgniteFuture<Collection<AccruedSimpleInterest>> igniteFuture = ignite.compute()
.applyAsync(siClosureFactory.create(), createParamCollection());
Collection<AccruedSimpleInterest> res = igniteFuture.get();
System.out.println(res.size());
    }
}
As far as my understanding goes, Job Stealing SPI requires you to implement some additional APIs in order to work.
Please see this discussion on user list:
Some remarks about the job stealing SPI:
1) You have some nodes that can process the tasks of a compute job.
2) Tasks are executed in the public thread pool by default:
https://apacheignite.readme.io/docs/thread-pools#section-public-pool
3) If a node's thread pool is busy, then a task of the compute job can be executed on another node.
It will not work in the following cases:
1) If you choose a specific node for your compute task
2) If you do an affinity call (the same as above, but the node is chosen by affinity mapping)
I am trying to enable Ignite native persistence in an Ignite YARN deployment.
The purpose is to have data written to disk when RAM overflows.
But when I try to add a large number of records to the Ignite grid, the node gets disconnected and I get the exception below.
Error :class org.apache.ignite.internal.NodeStoppingException: Operation has been cancelled (node is stopping).
javax.cache.CacheException: class org.apache.ignite.internal.NodeStoppingException: Operation has been cancelled (node is stopping).
ERROR com.project$$anonfun$startWritingToGrid$1: org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1287)
ERROR com.project$$anonfun$startWritingToGrid$1: org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.cacheException(IgniteCacheProxyImpl.java:1648)
ERROR com.project$$anonfun$startWritingToGrid$1: org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.putAll(IgniteCacheProxyImpl.java:1071)
ERROR com.project$$anonfun$startWritingToGrid$1: org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.putAll(GatewayProtectedCacheProxy.java:928)
Please find below the details.
Ignite Version : 2.3.0
Cluster details for Yarn Deployment:
IGNITE_NODE_COUNT=10
IGNITE_RUN_CPU_PER_NODE=5
IGNITE_MEMORY_PER_NODE=10096
IGNITE_VERSION=2.3.0
IGNITE_PATH=/tmp/ignite/2.3.0/apache-ignite-fabric-2.3.0-bin.zip
IGNITE_RELEASES_DIR=/tmp/ignite/2.3.0/releases
IGNITE_WORKING_DIR=/tmp/ignite/2.3.0/work
IGNITE_XML_CONFIG=/tmp/ignite/2.3.0/config/ignite-config.xml
IGNITE_USERS_LIBS=/tmp/ignite/2.3.0/libs
IGNITE_LOCAL_WORK_DIR=/local/home/ignite/2.3.0
Ignite Configuration for Yarn deployment:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:util="http://www.springframework.org/schema/util"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
http://www.springframework.org/schema/util
http://www.springframework.org/schema/util/spring-util-2.0.xsd">
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
<property name="clientMode" value="false"/>
<property name="dataStorageConfiguration">
<bean
class="org.apache.ignite.configuration.DataStorageConfiguration">
<property name="defaultDataRegionConfiguration">
<bean
class="org.apache.ignite.configuration.DataRegionConfiguration">
<property name="persistenceEnabled" value="true"/>
</bean>
</property>
</bean>
</property>
<property name="peerClassLoadingEnabled" value="true"/>
<property name="networkTimeout" value="10000000"/>
<property name="networkSendRetryCount" value="50"/>
<property name="discoverySpi">
<bean
class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<bean
class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
<property name="addresses">
<list>
<value><hosts>:47500</value>
</list>
</property>
</bean>
</property>
<property name="networkTimeout"
value="10000000"/>
<property name="joinTimeout"
value="10000000"/>
<property name="maxAckTimeout"
value="10000000"/>
<property name="reconnectCount" value="50"/>
<property name="socketTimeout" value="10000000"/>
</bean>
</property>
</bean>
</beans>
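For comparison, here is roughly the same storage setup expressed through the Java API (Ignite 2.3); this is only a sketch, not the actual YARN deployment. One detail worth noting: with native persistence enabled, a cluster starts in the inactive state and normally has to be activated before caches can be written to.

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class PersistentNodeSketch {
    public static void main(String[] args) {
        // Default data region with native persistence, mirroring the XML above.
        DataRegionConfiguration regionCfg = new DataRegionConfiguration();
        regionCfg.setPersistenceEnabled(true);

        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
        storageCfg.setDefaultDataRegionConfiguration(regionCfg);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDataStorageConfiguration(storageCfg);
        cfg.setPeerClassLoadingEnabled(true);

        Ignite ignite = Ignition.start(cfg);

        // With native persistence the cluster starts inactive; activate it
        // before performing cache operations.
        ignite.active(true);
    }
}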
Code to add data to the grid:
var cacheConf: CacheConfiguration[Long, Data] = new CacheConfiguration[Long, Data]("DataCache")
cacheConf.setCacheMode(CacheMode.PARTITIONED)
cacheConf.setIndexedTypes(classOf[Long], classOf[Data])
val cache = ignite.getOrCreateCache(cacheConf)
var dataMap = getDataMap()
cache.putAll(dataMap)
Code to count records:
val sql1 = "select * from DataCache"
val count = cache.query(new SqlFieldsQuery(sql1)).getAll.size()
I've been working on authenticating against an Active Directory server with Jasper 6.4.0 for a while now, and I keep getting the following error.
2017-10-16 13:39:35,145 WARN JSLdapAuthenticationProvider,http-apr-8080-exec-9:62 - [
LDAP: error code 49 - 80090308: LdapErr: DSID-0C09042F, comment: AcceptSecurityContext error, data 52e, v2580 ];
nested exception is javax.naming.AuthenticationException:
[LDAP: error code 49 - 80090308: LdapErr: DSID-0C09042F, comment: AcceptSecurityContext error, data 52e, v2580 ]
From what I've gathered, data 52e indicates invalid credentials. I tried changing my service account password in my configuration to bob to see if I would get the same error (showing that it's an error when binding to the ldaps server rather than the test account I was logging in with), and I did.
Here is the configuration for the service account in Jasper.
<bean id="ldapContextSource" class="com.jaspersoft.jasperserver.api.security.externalAuth.ldap.JSLdapContextSource">
<constructor-arg value="ldaps://MyLDAPSServer:636/"/>
<!-- manager user name and password (may not be needed) -->
<property name="userDn" value="dc=mydomain,dc=com,uid=MyServiceAccount"/>
<property name="password" value="SomePasswordWithSpecialCharacters"/>
</bean>
I'm confident that the username and password are correct. I'm able to authenticate against the same ldaps server using the following Python code.
from ldap3 import Server, \
Connection, \
AUTO_BIND_NO_TLS, \
SUBTREE, \
ALL_ATTRIBUTES
def get_ldap_info(u):
with Connection(Server('MyLDAPSServer', port=636, use_ssl=True),
auto_bind=AUTO_BIND_NO_TLS,
read_only=True,
check_names=True,
user='MyServiceAccount', password='SomePasswordWithSpecialCharacters') as c:
c.search(search_base='DC=mydomain,DC=com',
search_filter='(&(samAccountName=' + u + '))',
search_scope=SUBTREE,
attributes=ALL_ATTRIBUTES,
get_operational_attributes=True)
print(c.response_to_json())
print(c.result)
get_ldap_info('test.user')
The one thing I've been able to think of is that maybe Jasper doesn't like having special characters in the password?
Here is the remainder of the configuration for the Jasper server, in case I'm missing something.
<!--
~ Copyright (C) 2005 - 2014 TIBCO Software Inc. All rights reserved.
~ http://www.jaspersoft.com.
~ Licensed under commercial Jaspersoft Subscription License Agreement
-->
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.1.xsd">
<!-- ############ LDAP authentication ############
- Sample configuration of external authentication via an external LDAP server.
-->
<bean id="proxyAuthenticationProcessingFilter" class="com.jaspersoft.jasperserver.api.security.EncryptionAuthenticationProcessingFilter"
parent="mtAuthenticationProcessingFilter">
<property name="authenticationManager">
<ref local="ldapAuthenticationManager"/>
</property>
<property name="authenticationSuccessHandler" ref="externalAuthSuccessHandler" />
</bean>
<bean id="proxyAuthenticationSoapProcessingFilter"
class="com.jaspersoft.jasperserver.multipleTenancy.security.externalAuth.MTDefaultAuthenticationSoapProcessingFilter">
<property name="authenticationManager" ref="ldapAuthenticationManager"/>
<property name="authenticationSuccessHandler" ref="externalAuthSuccessHandler" />
<property name="filterProcessesUrl" value="/services"/>
</bean>
<bean id="proxyAuthenticationRestProcessingFilter" class="com.jaspersoft.jasperserver.multipleTenancy.security.externalAuth.MTDefaultAuthenticationRestProcessingFilter">
<property name="authenticationManager">
<ref local="ldapAuthenticationManager"/>
</property>
<property name="authenticationSuccessHandler" ref="externalAuthSuccessHandler" />
<property name="filterProcessesUrl" value="/rest/login"/>
</bean>
<bean id="proxyRequestParameterAuthenticationFilter"
class="com.jaspersoft.jasperserver.war.util.ExternalRequestParameterAuthenticationFilter" parent="requestParameterAuthenticationFilter">
<property name="authenticationManager">
<ref local="ldapAuthenticationManager"/>
</property>
<property name="externalDataSynchronizer" ref="externalDataSynchronizer"/>
</bean>
<bean id="externalAuthSuccessHandler"
class="com.jaspersoft.jasperserver.api.security.externalAuth.JrsExternalAuthenticationSuccessHandler" parent="successHandler">
<property name="externalDataSynchronizer">
<ref local="externalDataSynchronizer"/>
</property>
</bean>
<bean id="proxyBasicProcessingFilter"
class="com.jaspersoft.jasperserver.multipleTenancy.security.externalAuth.MTExternalAuthBasicProcessingFilter" parent="mtBasicProcessingFilter">
<property name="authenticationManager" ref="ldapAuthenticationManager"/>
<property name="externalDataSynchronizer" ref="externalDataSynchronizer"/>
</bean>
<bean id="ldapAuthenticationManager" class="com.jaspersoft.jasperserver.api.security.externalAuth.wrappers.spring.JSProviderManager">
<property name="providers">
<list>
<ref local="ldapAuthenticationProvider"/>
<ref bean="${bean.daoAuthenticationProvider}"/>
<!--anonymousAuthenticationProvider only needed if filterInvocationInterceptor.alwaysReauthenticate is set to true
<ref bean="anonymousAuthenticationProvider"/>-->
</list>
</property>
</bean>
<bean id="ldapAuthenticationProvider" class="com.jaspersoft.jasperserver.api.security.externalAuth.wrappers.spring.ldap.JSLdapAuthenticationProvider">
<constructor-arg>
<bean class="com.jaspersoft.jasperserver.api.security.externalAuth.wrappers.spring.ldap.JSBindAuthenticator">
<constructor-arg><ref local="ldapContextSource"/></constructor-arg>
<property name="userSearch" ref="userSearch"/>
</bean>
</constructor-arg>
<constructor-arg>
<bean class="com.jaspersoft.jasperserver.api.security.externalAuth.wrappers.spring.ldap.JSDefaultLdapAuthoritiesPopulator">
<constructor-arg index="0"><ref local="ldapContextSource"/></constructor-arg>
<constructor-arg index="1"><value></value></constructor-arg>
<property name="groupRoleAttribute" value="title"/>
<property name="groupSearchFilter" value="(uid={1})"/>
<property name="searchSubtree" value="true"/>
<!-- Can setup additional external default roles here <property name="defaultRole" value="LDAP"/> -->
</bean>
</constructor-arg>
</bean>
<bean id="userSearch"
class="com.jaspersoft.jasperserver.api.security.externalAuth.wrappers.spring.ldap.JSFilterBasedLdapUserSearch">
<constructor-arg index="0">
<value>cn=Users</value>
</constructor-arg>
<constructor-arg index="1">
<!--<value>(uid={0})</value>-->
<!--<value>(uid=test.user)</value>-->
<value>(sAMAccountName={0})</value>
</constructor-arg>
<constructor-arg index="2">
<ref local="ldapContextSource" />
</constructor-arg>
<property name="searchSubtree">
<value>true</value>
</property>
</bean>
<bean id="ldapContextSource" class="com.jaspersoft.jasperserver.api.security.externalAuth.ldap.JSLdapContextSource">
<constructor-arg value="ldaps://MyLDAPSServer:636/"/>
<!-- manager user name and password (may not be needed) -->
<property name="userDn" value="dc=mydomain,dc=com,uid=MyServiceAccount"/>
<property name="password" value="SomePasswordWithSpecialCharacters"/>
</bean>
<!-- ############ LDAP authentication ############ -->
<!-- ############ JRS Synchronizer ############ -->
<bean id="externalDataSynchronizer"
class="com.jaspersoft.jasperserver.multipleTenancy.security.externalAuth.MTExternalDataSynchronizerImpl">
<property name="externalUserProcessors">
<list>
<ref local="ldapExternalTenantProcessor"/>
<ref local="mtExternalUserSetupProcessor"/>
<!-- Example processor for creating user folder-->
<!--<ref local="externalUserFolderProcessor"/>-->
</list>
</property>
</bean>
<bean id="abstractExternalProcessor" class="com.jaspersoft.jasperserver.api.security.externalAuth.processors.AbstractExternalUserProcessor" abstract="true">
<property name="repositoryService" ref="${bean.repositoryService}"/>
<property name="userAuthorityService" ref="${bean.userAuthorityService}"/>
<property name="tenantService" ref="${bean.tenantService}"/>
<property name="profileAttributeService" ref="profileAttributeService"/>
<property name="objectPermissionService" ref="objectPermissionService"/>
</bean>
<!--
Multi-tenant configuration. For a JRS deployment with multiple
organizations, modify this bean to set up your organizations. For
single-organization deployments, comment this out and uncomment the version
below.
-->
<bean id="ldapExternalTenantProcessor" class="com.jaspersoft.jasperserver.multipleTenancy.security.externalAuth.processors.ldap.LdapExternalTenantProcessor" parent="abstractExternalProcessor">
<property name="ldapContextSource" ref="ldapContextSource"/>
<property name="multiTenancyService"><ref bean="internalMultiTenancyService"/></property>
<property name="excludeRootDn" value="false"/>
<!--only following LDAP attributes will be used in creation of organization hierarchy.
Eg. cn=Smith,ou=Developement,o=Jaspersoft will produce tanant Development as child of
tenant Jaspersoft (if excludeRootDn=false) as child of default tenant organization_1-->
<property name="organizationRDNs">
<list>
<value>dc</value>
<value>c</value>
<value>o</value>
<value>ou</value>
<value>st</value>
</list>
</property>
<property name="rootOrganizationId" value="organization_1"/>
<property name="tenantIdNotSupportedSymbols" value="#{configurationBean.tenantIdNotSupportedSymbols}"/>
<!-- User credentials are setup in js.externalAuth.properties-->
<property name="externalTenantSetupUsers">
<list>
<bean class="com.jaspersoft.jasperserver.multipleTenancy.security.externalAuth.processors.MTAbstractExternalProcessor.ExternalTenantSetupUser">
<property name="username" value="${new.tenant.user.name.1}"/>
<property name="fullName" value="${new.tenant.user.fullname.1}"/>
<property name="password" value="${new.tenant.user.password.1}"/>
<property name="emailAddress" value="${new.tenant.user.email.1}"/>
<property name="roleSet">
<set>
<value>ROLE_ADMINISTRATOR</value>
<value>ROLE_USER</value>
</set>
</property>
</bean>
</list>
</property>
</bean>
<!--
Single tenant configuration. For a JRS deployment with a single
organization, uncomment this bean and configure it to set up your organization.
Comment out the multi-tenant version of ldapExternalTenantProcessor above
-->
<!--<bean id="ldapExternalTenantProcessor" class="com.jaspersoft.jasperserver.multipleTenancy.security.externalAuth.processors.ldap.LdapExternalTenantProcessor" parent="abstractExternalProcessor">
<property name="ldapContextSource" ref="ldapContextSource"/>
<property name="multiTenancyService"><ref bean="internalMultiTenancyService"/></property>
<property name="excludeRootDn" value="true"/>
<property name="defaultOrganization" value="organization_1"/>
</bean>-->
<bean id="mtExternalUserSetupProcessor" class="com.jaspersoft.jasperserver.multipleTenancy.security.externalAuth.processors.MTExternalUserSetupProcessor" parent="abstractExternalProcessor">
<!--Default permitted role characters; others are removed. Change regular expression to allow other chars.
<property name="permittedExternalRoleNameRegex" value="[A-Za-z0-9_]+"/>-->
<property name="userAuthorityService">
<ref bean="${bean.internalUserAuthorityService}"/>
</property>
<property name="defaultInternalRoles">
<list>
<value>ROLE_USER</value>
</list>
</property>
<property name="organizationRoleMap">
<map>
<!-- Example of mapping customer roles to JRS roles -->
<entry>
<key>
<value>ROLE_ADMIN_EXTERNAL_ORGANIZATION</value>
</key>
<!-- JRS role that the <key> external role is mapped to-->
<value>ROLE_ADMINISTRATOR</value>
</entry>
</map>
</property>
</bean>
<!-- EXAMPLE Processor
<bean id="externalUserFolderProcessor"
class="com.jaspersoft.jasperserver.api.security.externalAuth.processors.ExternalUserFolderProcessor"
parent="abstractExternalProcessor">
<property name="repositoryService" ref="${bean.unsecureRepositoryService}"/>
</bean>
-->
<!-- ############ JRS Synchronizer ############ -->
</beans>
For anyone in the future who's struggling with this like I was, the problem was the userDn in my ldapContextSource.
The correct userDn was
<property name="userDn" value="CN=myserviceaccount,OU=my ou,DC=mydomain,DC=com"/>
I found it by running this Python script and looking at the output:
from ldap3 import Server, \
Connection, \
AUTO_BIND_NO_TLS, \
SUBTREE, \
ALL_ATTRIBUTES
def get_ldap_info(u):
with Connection(Server('MyLDAPSServer', port=636, use_ssl=True),
auto_bind=AUTO_BIND_NO_TLS,
read_only=True,
check_names=True,
user='MyServiceAccount', password='SomePasswordWithSpecialCharacters') as c:
c.search(search_base='DC=mydomain,DC=com',
search_filter='(&(samAccountName=' + u + '))',
search_scope=SUBTREE,
attributes=ALL_ATTRIBUTES,
get_operational_attributes=True)
print(c.response_to_json())
print(c.result)
get_ldap_info('myserviceaccount')
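If adding the ldap3 dependency is not convenient, the same lookup can be done from Java with plain JNDI. This is a sketch using the same placeholder host, credentials, and base DN as above; the UPN-style bind principal is an assumption and should be adjusted to however the service account normally authenticates.

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

public class FindServiceAccountDn {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldaps://MyLDAPSServer:636");
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        // UPN-style principal (assumption); the JVM must also trust the LDAPS certificate.
        env.put(Context.SECURITY_PRINCIPAL, "MyServiceAccount@mydomain.com");
        env.put(Context.SECURITY_CREDENTIALS, "SomePasswordWithSpecialCharacters");

        DirContext ctx = new InitialDirContext(env);
        SearchControls controls = new SearchControls();
        controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
        controls.setReturningAttributes(new String[] {"distinguishedName"});

        // Print the full DN of the service account, which is what userDn needs.
        NamingEnumeration<SearchResult> results =
            ctx.search("DC=mydomain,DC=com", "(sAMAccountName=MyServiceAccount)", controls);
        while (results.hasMore()) {
            System.out.println(results.next().getAttributes().get("distinguishedName"));
        }
        ctx.close();
    }
}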
I have a query that stalls/hangs for large argument inputs. The same code works well for smaller SQL argument inputs. The code is as follows:
subEntries = dataRDD.sql("SELECT v.id,v.sub,v.obj FROM VPRow v JOIN table(id bigint = ?) i ON v.id = i.id",new Object[] {subKeyEntries.toArray()});
LOG.debug("Reading : "+subEntries.count());
Please note that the Ignite documentation mentions that the input argument can be of any size: "Here you can provide object array (Object[]) of any length as a parameter". The parameter passed to the query in the stalling case was an array of 23,641 long values.
My spring configuration file is as follows:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd">
<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
<property name="cacheConfiguration">
<list>
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<!-- Set a cache name. -->
<property name="name" value="dataRDD"/>
<!-- Set a cache mode. -->
<property name="cacheMode" value="PARTITIONED"/>
<!-- Index Integer pairs used in the example. -->
<property name="indexedTypes">
<list>
<value>java.lang.Long</value>
<value>sample.VPRow</value>
</list>
</property>
<property name="backups" value="0"/>
</bean>
</list>
</property>
<!-- Explicitly configure TCP discovery SPI to provide list of initial nodes. -->
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
<property name="addresses">
<list>
<value>[IP1]</value>
<value>[...]</value>
<value>[IP5]</value>
</list>
</property>
</bean>
</property>
</bean>
</property>
</bean>
</beans>
The VPRow class is defined as follows:
public class VPRow implements Serializable {
@QuerySqlField
private long id;
@QuerySqlField
private String sub;
@QuerySqlField
private String obj;
public VPRow(long id,String sub, String obj) {
this.id = id;
this.sub = sub;
this.obj = obj;
}
...
}
Databases usually have a limitation on the "IN" operator.
You can divide this SQL query into parts and run them simultaneously.
Please see paragraph 2 here: https://apacheignite.readme.io/docs/sql-performance-and-debugging#sql-performance-and-usability-considerations
The IN operator doesn't use indexes, so with a long list like this the query will do too many scans. Changing the query as described should help.
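A rough sketch of the "divide the query into parts" suggestion, continuing the asker's code: split the argument array into smaller batches, run the same query per batch, and union the results. The batch size is made up, and this assumes the Spark 2.x API where IgniteRDD.sql(...) returns a Dataset<Row> (on older versions substitute DataFrame and unionAll); the batches run sequentially here but could also be submitted in parallel.

import java.util.Arrays;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

Object[] allKeys = subKeyEntries.toArray();  // same argument as in the original query
int batchSize = 1000;                        // hypothetical batch size
Dataset<Row> subEntries = null;

for (int from = 0; from < allKeys.length; from += batchSize) {
    Object[] batch = Arrays.copyOfRange(allKeys, from, Math.min(from + batchSize, allKeys.length));
    Dataset<Row> part = dataRDD.sql(
        "SELECT v.id,v.sub,v.obj FROM VPRow v JOIN table(id bigint = ?) i ON v.id = i.id",
        new Object[] {batch});
    subEntries = (subEntries == null) ? part : subEntries.union(part);
}
LOG.debug("Reading : " + subEntries.count());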
When I start a remote compute job with call() or affinityCall(), the remote server creates 6 threads, and these threads never exit, as the VisualVM snapshot shows:
[view VisualVM snapshot]
The threads named "utility-#153%null%" through "marshaller-cache-#14i%null%" never end.
If the client runs over and over again, the number of threads on the server node increases rapidly. As a result, the server node runs out of memory.
How can I close these threads when the client is closed?
Maybe I am not running the client the right way.
Client Code
String cacheKey = "jobIds";
String cname = "myCacheName";
ClusterGroup rmts = getIgnite().cluster().forRemotes();
IgniteCache<String, List<String>> cache = getIgnite().getOrCreateCache(cname);
List<String> jobList = cache.get(cacheKey);
Collection<String> res = ignite.compute(rmts).apply(
new IgniteClosure<String, String>() {
@Override
public String apply(String word) {
return word;
}
},
jobList
);
getIgnite().close();
System.out.println("ignite Closed");
if (res == null) {
System.out.println("Error: Result is null");
return;
}
res.forEach(s -> {
System.out.println(s);
});
System.out.println("Finished!");
getIgnite() returns the Ignite instance:
public static Ignite getIgnite() {
if (ignite == null) {
System.out.println("RETURN INSTANCE ..........");
Ignition.setClientMode(true);
ignite = Ignition.start(confCache);
ignite.configuration().setDeploymentMode(DeploymentMode.CONTINUOUS);
}
return ignite;
}
Server config:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd">
<!--
Alter configuration below as needed.
-->
<bean id="grid.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
<property name="peerClassLoadingEnabled" value="true"/>
<property name="peerClassLoadingMissedResourcesCacheSize" value="0"/>
<property name="publicThreadPoolSize" value="64"/>
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
<property name="addresses">
<list>
<value>172.22.1.72:47500..47509</value>
<value>172.22.1.100:47500..47509</value>
</list>
</property>
</bean>
</property>
</bean>
</property>
<property name="cacheConfiguration">
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<property name="cacheMode" value="PARTITIONED"/>
<property name="memoryMode" value="ONHEAP_TIERED"/>
<property name="backups" value="0"/>
<property name="offHeapMaxMemory" value="0"/>
<property name="swapEnabled" value="false"/>
</bean>
</property>
</bean>
</beans>
These thread pools are static, and the number of threads in them never depends on load (the number of executed operations, jobs, etc.). Having said that, I don't think they are the reason for the OOME, unless you somehow start a new node within the same JVM for each executed job.
I would also recommend always reusing an existing node that is already started in the JVM. Starting a new one and closing it for each job is bad practice.
Threads are created in thread pools, so you can set their sizes in IgniteConfiguration: setUtilityCachePoolSize(int), setMarshallerCachePoolSize(int) for Ignite 1.5 or setMarshallerCacheThreadPoolSize(int) for Ignite 1.7, and others.
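As a minimal sketch of the reuse pattern recommended above (the pool sizes are hypothetical, and the marshaller pool setter name depends on the Ignite version as noted): start the client node once, hand out the same instance to every job, and close it only when the application shuts down.

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class ReusableClient {
    private static volatile Ignite ignite;

    // Start the client node once and keep reusing it; starting and closing a node
    // per job is what makes the fixed-size server pools look like a thread leak.
    public static synchronized Ignite ignite() {
        if (ignite == null) {
            IgniteConfiguration cfg = new IgniteConfiguration();
            cfg.setClientMode(true);
            // Pool sizes are fixed at node start; shrink them if the defaults are too large.
            cfg.setPublicThreadPoolSize(16);
            cfg.setUtilityCachePoolSize(8);
            ignite = Ignition.start(cfg);
        }
        return ignite;
    }

    public static synchronized void shutdown() {
        if (ignite != null) {
            ignite.close(); // close once, when the application exits
            ignite = null;
        }
    }
}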