Plugin, authenticateNode NotSerializableException - ignite
I am having trouble implementing an authentication plugin: a NotSerializableException is raised on an instance created by my AuthPluginProcessor class. Disabling authentication by having enabled() return false makes the node start.
I think I am missing something (I tried adding parameters to authenticateNode without success); below is the minimum code reproducing the error.
AuthPluginProcessor:
public class AuthPluginProcessor implements IgnitePlugin, GridSecurityProcessor, Serializable {
    @Override
    public SecurityContext authenticateNode(ClusterNode clusterNode, SecurityCredentials securityCredentials) throws IgniteCheckedException {
        return new SecurityContext() {
            @Override
            public SecuritySubject subject() { return null; }

            @Override
            public boolean taskOperationAllowed(String s, SecurityPermission securityPermission) { return true; }

            @Override
            public boolean cacheOperationAllowed(String s, SecurityPermission securityPermission) { return true; }

            @Override
            public boolean serviceOperationAllowed(String s, SecurityPermission securityPermission) { return true; }

            @Override
            public boolean systemOperationAllowed(SecurityPermission securityPermission) { return true; }
        };
    }

    @Override
    public boolean isGlobalNodeAuthentication() { return true; }

    @Override
    public boolean enabled() { return true; }

    // other empty methods ...
}
Here is most of the log file:
[16:38:10,598][INFO][main][IgniteKernal] PID: 10152
[16:38:10,598][INFO][main][IgniteKernal] Language runtime: Java Platform API Specification ver. 1.8
[16:38:10,599][INFO][main][IgniteKernal] VM information: Java(TM) SE Runtime Environment 1.8.0_172-b11 Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 25.172-b11
[16:38:10,602][INFO][main][IgniteKernal] VM total memory: 0.96GB
[16:38:10,602][INFO][main][IgniteKernal] Remote Management [restart: on, REST: on, JMX (remote: on, port: 49186, auth: off, ssl: off)]
[16:38:10,603][INFO][main][IgniteKernal] Logger: JavaLogger [quiet=false, config=null]
[16:38:10,603][INFO][main][IgniteKernal] IGNITE_HOME=D:\db\Ignite2.4
[16:38:10,605][INFO][main][IgniteKernal] VM arguments: [-Xms1g, -Xmx1g, -XX:+AggressiveOpts, -XX:MaxMetaspaceSize=256m, -DIGNITE_QUIET=false, -DIGNITE_SUCCESS_FILE=D:\db\Ignite2.4\work\ignite_success_18756fda-d6de-4b6f-9ff2-5dc0bbe5f8b7, -Dcom.sun.management.jmxremote, -Dcom.sun.management.jmxremote.port=49186, -Dcom.sun.management.jmxremote.authenticate=false, -Dcom.sun.management.jmxremote.ssl=false, -DIGNITE_HOME=D:\db\Ignite2.4, -DIGNITE_PROG_NAME=ignite.bat]
[16:38:10,606][INFO][main][IgniteKernal] System cache's DataRegion size is configured to 40 MB. Use DataStorageConfiguration.systemCacheMemorySize property to change the setting.
[16:38:10,613][INFO][main][IgniteKernal] Configured caches [in 'sysMemPlc' dataRegion: ['ignite-sys-cache']]
[16:38:10,613][WARNING][main][IgniteKernal] Peer class loading is enabled (disable it in production for performance and deployment consistency reasons)
[16:38:10,617][WARNING][pub-#19][GridDiagnostic] This operating system has been tested less rigorously: Windows 10 10.0 amd64. Our team will appreciate the feedback if you experience any problems running ignite in this environment.
[16:38:10,620][INFO][main][IgniteKernal] 3-rd party licenses can be found at: D:\db\Ignite2.4\libs\licenses
[16:38:10,683][INFO][main][IgnitePluginProcessor] Configured plugins:
[16:38:10,684][INFO][main][IgnitePluginProcessor] ^-- AuthPlugin 0.1.0
[16:38:10,686][INFO][main][IgnitePluginProcessor] ^-- null
[16:38:10,686][INFO][main][IgnitePluginProcessor]
[16:38:10,725][INFO][main][TcpCommunicationSpi] Successfully bound communication NIO server to TCP port [port=47101, locHost=0.0.0.0/0.0.0.0, selectorsCnt=4, selectorSpins=0, pairedConn=false]
[16:38:11,230][WARNING][main][TcpCommunicationSpi] Message queue limit is set to 0 which may lead to potential OOMEs when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to message queues growth on sender and receiver sides.
[16:38:11,242][WARNING][main][NoopCheckpointSpi] Checkpoints are disabled (to enable configure any GridCheckpointSpi implementation)
[16:38:11,263][WARNING][main][GridCollisionManager] Collision resolution is disabled (all jobs will be activated upon arrival).
[16:38:11,265][INFO][main][IgniteKernal] Security status [authentication=on, tls/ssl=off]
[16:38:11,314][INFO][main][TcpDiscoverySpi] Successfully bound to TCP port [port=47501, localHost=0.0.0.0/0.0.0.0, locNodeId=8acfdf1f-1371-443a-a6b1-38189fcee66f]
[16:38:12,331][INFO][main][PdsFoldersResolver] Unable to acquire lock to file [D:\db\Ignite2.4\work\db\node00-3413bf9c-56ea-4041-ae02-6967e6118354], reason: Failed to acquire file lock during 1 sec, (locked by [4c07699d-cb8f-4e20-9531-9154eb4e8a12][]): D:\db\Ignite2.4\work\db\node00-3413bf9c-56ea-4041-ae02-6967e6118354\lock
[16:38:12,364][INFO][main][PdsFoldersResolver] Successfully locked persistence storage folder [D:\db\Ignite2.4\work\db\node01-e607d470-4b11-42c1-b054-2a60a5ddaa02]
[16:38:12,364][INFO][main][PdsFoldersResolver] Consistent ID used for local node is [e607d470-4b11-42c1-b054-2a60a5ddaa02] according to persistence data storage folders
[16:38:12,365][INFO][main][CacheObjectBinaryProcessorImpl] Resolved directory for serialized binary metadata: D:\db\Ignite2.4\work\binary_meta\node01-e607d470-4b11-42c1-b054-2a60a5ddaa02
[16:38:12,530][INFO][main][FilePageStoreManager] Resolved page store work directory: D:\db\Ignite2.4\work\db\node01-e607d470-4b11-42c1-b054-2a60a5ddaa02
[16:38:12,531][INFO][main][FileWriteAheadLogManager] Resolved write ahead log work directory: D:\db\Ignite2.4\work\db\wal\node01-e607d470-4b11-42c1-b054-2a60a5ddaa02
[16:38:12,531][INFO][main][FileWriteAheadLogManager] Resolved write ahead log archive directory: D:\db\Ignite2.4\work\db\wal\archive\node01-e607d470-4b11-42c1-b054-2a60a5ddaa02
[16:38:12,543][INFO][main][FileWriteAheadLogManager] Started write-ahead log manager [mode=LOG_ONLY]
[16:38:12,552][INFO][main][GridCacheDatabaseSharedManager] Read checkpoint status [startMarker=null, endMarker=null]
[16:38:12,565][INFO][main][PageMemoryImpl] Started page memory [memoryAllocated=100,0 MiB, pages=24936, tableSize=1,4 MiB, checkpointBuffer=100,0 MiB]
[16:38:12,566][INFO][main][GridCacheDatabaseSharedManager] Checking memory state [lastValidPos=FileWALPointer [idx=0, fileOff=0, len=0], lastMarked=FileWALPointer [idx=0, fileOff=0, len=0], lastCheckpointId=00000000-0000-0000-0000-000000000000]
[16:38:12,572][INFO][main][GridCacheDatabaseSharedManager] Applying lost cache updates since last checkpoint record [lastMarked=FileWALPointer [idx=0, fileOff=0, len=0], lastCheckpointId=00000000-0000-0000-0000-000000000000]
[16:38:12,574][INFO][main][GridCacheDatabaseSharedManager] Finished applying WAL changes [updatesApplied=0, time=0ms]
[16:38:12,633][INFO][main][ClientListenerProcessor] Client connector processor has started on TCP port 10801
[16:38:12,665][INFO][main][GridTcpRestProtocol] Command protocol successfully started [name=TCP binary, host=0.0.0.0/0.0.0.0, port=11212]
[16:38:13,969][INFO][main][IgniteKernal] Non-loopback local IPs: 124.0.0.9, fe80:0:0:0:15be:bcfe:4ff6:1422%eth10, fe80:0:0:0:49ae:87cd:b8a4:a6bb%eth2, fe80:0:0:0:7943:2fb2:1da3:e69b%eth1, fe80:0:0:0:859b:b174:7412:4eb7%wlan1, fe80:0:0:0:d1fb:422:9dfa:aada%eth17, fe80:0:0:0:dd7d:8d79:7438:1db7%eth0, fe80:0:0:0:e593:8733:6979:aa7f%wlan0
[16:38:13,969][INFO][main][IgniteKernal] Enabled local MACs: 00090FFE0001, 00FF20F9E686, 025041000001, 184F32F210F1, 1A4F32F210F1, 54587804FD10, 54C6D294231E, 54CCB184D71B, 64006A8BA7BF
[16:38:13,995][SEVERE][main][IgniteKernal] Failed to start manager: GridManagerAdapter [enabled=true, name=o.a.i.i.managers.discovery.GridDiscoveryManager]
class org.apache.ignite.IgniteCheckedException: Failed to start SPI: TcpDiscoverySpi [addrRslvr=null, sockTimeout=5000, ackTimeout=5000, marsh=JdkMarshaller [clsFilter=org.apache.ignite.internal.IgniteKernal$5@3f390d63], reconCnt=10, reconDelay=2000, maxAckTimeout=600000, forceSrvMode=false, clientReconnectDisabled=false]
at org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:300)
at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:892)
at org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1669)
at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:983)
at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1973)
at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1716)
at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1144)
at org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:1062)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:948)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:847)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:717)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:686)
at org.apache.ignite.Ignition.start(Ignition.java:347)
at org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:302)
Caused by: class org.apache.ignite.spi.IgniteSpiException: Failed to authenticate local node (will shutdown local node).
at org.apache.ignite.spi.discovery.tcp.ServerImpl.localAuthentication(ServerImpl.java:991)
at org.apache.ignite.spi.discovery.tcp.ServerImpl.joinTopology(ServerImpl.java:862)
at org.apache.ignite.spi.discovery.tcp.ServerImpl.spiStart(ServerImpl.java:364)
at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.spiStart(TcpDiscoverySpi.java:1930)
at org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:297)
... 13 more
Caused by: class org.apache.ignite.IgniteCheckedException: Failed to serialize object: test.ignite.AuthPluginProcessor$1@1af687fe
at org.apache.ignite.marshaller.jdk.JdkMarshaller.marshal0(JdkMarshaller.java:103)
at org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.marshal(AbstractNodeNameAwareMarshaller.java:70)
at org.apache.ignite.marshaller.jdk.JdkMarshaller.marshal0(JdkMarshaller.java:117)
at org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.marshal(AbstractNodeNameAwareMarshaller.java:58)
at org.apache.ignite.internal.util.IgniteUtils.marshal(IgniteUtils.java:9984)
at org.apache.ignite.spi.discovery.tcp.ServerImpl.localAuthentication(ServerImpl.java:985)
... 17 more
Caused by: java.io.NotSerializableException: test.ignite.AuthPluginProcessor$1
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1184)
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
at org.apache.ignite.marshaller.jdk.JdkMarshaller.marshal0(JdkMarshaller.java:98)
... 22 more
Here is the configuration file:
<bean abstract="true" id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
<!-- Set to true to enable distributed class loading for examples, default is false. -->
<property name="peerClassLoadingEnabled" value="true"/>
<!-- Enable task execution events for examples. -->
<property name="includeEventTypes">
<list>
<!--Task execution events-->
<util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_STARTED"/>
<util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_FINISHED"/>
<util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_FAILED"/>
<util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_TIMEDOUT"/>
<util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_SESSION_ATTR_SET"/>
<util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_REDUCED"/>
<!--Cache events-->
<util:constant static-field="org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_PUT"/>
<util:constant static-field="org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_READ"/>
<util:constant static-field="org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_REMOVED"/>
</list>
</property>
<property name="pluginConfigurations">
<bean class="test.ignite.AuthPluginConfiguration"/>
</property>
<!--<property name="binaryConfiguration">
<bean class="org.apache.ignite.configuration.BinaryConfiguration">
<property name="compactFooter" value="false"/>
<property name="idMapper">
<bean class="org.apache.ignite.binary.BinaryBasicIdMapper">
<property name="lowerCase" value="true"/>
</bean>
</property>
</bean>
</property>-->
<!-- Explicitly configure TCP discovery SPI to provide list of initial nodes. -->
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<!--
Ignite provides several options for automatic discovery that can be used
instead of static IP based discovery. For information on all options refer
to our documentation: http://apacheignite.readme.io/docs/cluster-config
-->
<!-- Uncomment static IP finder to enable static-based discovery of initial nodes. -->
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
<property name="addresses">
<list>
<value>124.0.0.127</value>
<value>124.0.0.125</value>
<value>124.0.0.9</value>
</list>
</property>
</bean>
</property>
</bean>
</property>
<property name="dataStorageConfiguration">
<bean class="org.apache.ignite.configuration.DataStorageConfiguration">
<property name="defaultDataRegionConfiguration">
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
<property name="persistenceEnabled" value="true"/>
</bean>
</property>
</bean>
</property>
</bean>
Most likely your SecurityContext doesn't implement Serializable: the anonymous class returned from authenticateNode is exactly what the JdkMarshaller chokes on (test.ignite.AuthPluginProcessor$1 in your stack trace).
Move it to a named class, ideally a static nested or top-level one so it doesn't capture a reference to the enclosing AuthPluginProcessor, and add implements Serializable.
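For illustration, a minimal sketch of that fix, assuming the same security interfaces as in the question (the class name AllowAllSecurityContext and the import paths are guesses from the stack trace, not the original code):

import java.io.Serializable;

import org.apache.ignite.internal.processors.security.SecurityContext;
import org.apache.ignite.plugin.security.SecurityPermission;
import org.apache.ignite.plugin.security.SecuritySubject;

// A named top-level (or static nested) class: it carries no hidden reference to
// the enclosing AuthPluginProcessor and is explicitly Serializable, so the
// JdkMarshaller used during node authentication can write it.
public class AllowAllSecurityContext implements SecurityContext, Serializable {

    private static final long serialVersionUID = 1L;

    @Override public SecuritySubject subject() { return null; }

    @Override public boolean taskOperationAllowed(String taskClsName, SecurityPermission perm) { return true; }

    @Override public boolean cacheOperationAllowed(String cacheName, SecurityPermission perm) { return true; }

    @Override public boolean serviceOperationAllowed(String srvcName, SecurityPermission perm) { return true; }

    @Override public boolean systemOperationAllowed(SecurityPermission perm) { return true; }
}

authenticateNode then simply returns new AllowAllSecurityContext() instead of the anonymous class.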
Related
Ignite Local Node Binary Configuration Not Equal to Remote Node Configuration
I use the Ignite Docker image to set up an Ignite cluster on my local machine. Things work fine as long as I use the default configuration. I tried overriding the config using the command

docker run -d --name my_ignite \
    -p 11211:11211 -p 47100:47100 -p 47500:47500 -p 49112:49112 \
    -e "OPTION_LIBS=ignite-indexing" \
    -e "CONFIG_URI=file:///Users/abc/Documents/ignite_configs/ignite-config.xml" \
    -v $(pwd):/Users/abc/Documents/ignite_configs apacheignite/ignite:2.7.0

Ignite starts up correctly, but when I try to write to the cache I get the exception below. My config file ignite-config.xml is:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">
    <bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="binaryConfiguration">
            <bean class="org.apache.ignite.configuration.BinaryConfiguration">
                <property name="idMapper">
                    <bean class="org.apache.ignite.binary.BinaryBasicIdMapper">
                        <property name="lowerCase" value="true"/>
                    </bean>
                </property>
            </bean>
        </property>
    </bean>
</beans>

How should I fix this? Stacktrace:

2020-05-20 14:34:49 [main] WARN o.a.i.i.p.c.d.d.t.PartitionsEvictManager - Logging at INFO level without checking if INFO level is enabled: Evict partition permits=4
2020-05-20 14:34:50 [tcp-client-disco-msg-worker-#4%fn_Instance_979931923%] ERROR ROOT - Critical system error detected. Will be handled accordingly to configured handler [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0, super=AbstractFailureHandler [ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED]]], failureCtx=FailureContext [type=SYSTEM_WORKER_TERMINATION, err=class o.a.i.IgniteException: GridWorker [name=tcp-client-disco-msg-worker, igniteInstanceName=svexecfn_Instance_979931923, finished=true, heartbeatTs=1590010490223]]]
org.apache.ignite.IgniteException: GridWorker [name=tcp-client-disco-msg-worker, igniteInstanceName=svexecfn_Instance_979931923, finished=true, heartbeatTs=1590010490223]
    at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance$2.apply(IgnitionEx.java:1831)
    at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance$2.apply(IgnitionEx.java:1826)
    at org.apache.ignite.internal.worker.WorkersRegistry.onStopped(WorkersRegistry.java:169)
    at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:153)
    at org.apache.ignite.spi.discovery.tcp.ClientImpl$1.body(ClientImpl.java:304)
    at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
2020-05-20 14:34:50 [main] ERROR o.a.i.i.IgniteKernal%svexecfn_Instance_979931923 - Failed to start manager: GridManagerAdapter [enabled=true, name=o.a.i.i.managers.discovery.GridDiscoveryManager]
org.apache.ignite.IgniteCheckedException: Failed to start SPI: TcpDiscoverySpi [addrRslvr=null, sockTimeout=5000, ackTimeout=5000, marsh=JdkMarshaller [clsFilter=org.apache.ignite.marshaller.MarshallerUtils$1@7ed9ae94], reconCnt=10, reconDelay=2000, maxAckTimeout=600000, forceSrvMode=false, clientReconnectDisabled=false, internalLsnr=null]
    at org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:300)
    at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:939)
    at org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1682)
    at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1066)
    at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2038)
    at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1730)
    at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1158)
    at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:678)
    at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:603)
    at org.apache.ignite.Ignition.start(Ignition.java:323)
    ...
Caused by: org.apache.ignite.spi.IgniteSpiException: Local node's binary configuration is not equal to remote node's binary configuration [locNodeId=4fc2317b-a7b5-43e0-bc37-20747fab2d73, rmtNodeId=a7ce1554-64cb-4de1-a71f-5c3ddb252ffa, locBinaryCfg=null, rmtBinaryCfg={globIdMapper=org.apache.ignite.binary.BinaryBasicIdMapper, compactFooter=true, globSerializer=null}]
    at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.checkFailedError(TcpDiscoverySpi.java:1946)
    at org.apache.ignite.spi.discovery.tcp.ClientImpl$MessageWorker.body(ClientImpl.java:1888)
    at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
    at org.apache.ignite.spi.discovery.tcp.ClientImpl$1.body(ClientImpl.java:304)
    at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
Take a look at the binary configuration of all your nodes: https://apacheignite.readme.io/docs/binary-marshaller#configuring-binary-objects

You have:

Local node's binary configuration is not equal to remote node's binary configuration [locNodeId=4fc2317b-a7b5-43e0-bc37-20747fab2d73, rmtNodeId=a7ce1554-64cb-4de1-a71f-5c3ddb252ffa, locBinaryCfg=null, rmtBinaryCfg={globIdMapper=org.apache.ignite.binary.BinaryBasicIdMapper, compactFooter=true, globSerializer=null}]

which means you are inserting from a node with a different binary configuration. Take a look at the binary config of node 4fc2317b-a7b5-43e0-bc37-20747fab2d73; it should be the same as the remote node's.
This is the binary configuration:

<bean class="org.apache.ignite.configuration.BinaryConfiguration">
    <property name="idMapper">
        <bean class="org.apache.ignite.binary.BinaryBasicIdMapper">
            <property name="lowerCase" value="true"/>
        </bean>
    </property>
</bean>

It should be the same in your server node's ignite_configs/ignite-config.xml file.
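For what it's worth, a minimal sketch of the programmatic equivalent on the inserting (client) side, so both sides carry the same binary configuration; the class name ClientStart is illustrative, not from the question:

import org.apache.ignite.Ignition;
import org.apache.ignite.binary.BinaryBasicIdMapper;
import org.apache.ignite.configuration.BinaryConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class ClientStart {
    public static void main(String[] args) {
        // Mirror the server's BinaryConfiguration so the discovery-time check
        // (locBinaryCfg vs rmtBinaryCfg) no longer sees null on one side.
        BinaryConfiguration binCfg = new BinaryConfiguration();
        binCfg.setIdMapper(new BinaryBasicIdMapper(true)); // lowerCase = true

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setClientMode(true);
        cfg.setBinaryConfiguration(binCfg);

        Ignition.start(cfg);
    }
}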
Job Stealing Configuration not working in Apache Ignite
I have the following configuration file:

<bean abstract="true" id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="peerClassLoadingEnabled" value="true"/>
    <property name="includeEventTypes">
        <list>
            <!--Task execution events-->
            <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_STARTED"/>
            <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_FINISHED"/>
            <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_FAILED"/>
        </list>
    </property>
    <property name="metricsUpdateFrequency" value="10000"/>
    <property name="discoverySpi">
        <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
            <property name="ipFinder">
                <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
                    <property name="addresses">
                        <list>
                            <!-- In distributed environment, replace with actual host IP address. -->
                            <value>127.0.0.1:47500..47509</value>
                            <value>127.0.0.1:48500..48509</value>
                        </list>
                    </property>
                </bean>
            </property>
        </bean>
    </property>
    <!-- Enabling the required Failover SPI. -->
    <property name="failoverSpi">
        <bean class="org.apache.ignite.spi.failover.jobstealing.JobStealingFailoverSpi"/>
    </property>
    <property name="collisionSpi">
        <bean class="org.apache.ignite.spi.collision.jobstealing.JobStealingCollisionSpi">
            <property name="activeJobsThreshold" value="50"/>
            <property name="waitJobsThreshold" value="0"/>
            <property name="messageExpireTime" value="1000"/>
            <property name="maximumStealingAttempts" value="10"/>
            <property name="stealingEnabled" value="true"/>
        </bean>
    </property>
</bean>

The closure gets executed over the server nodes in the grid as expected. When we add a new node to the grid during the execution of the closure (by executing the node startup command), the existing nodes acknowledge the addition of the new node, but the closure is not distributed to the newly added node. Below is my closure implementation:

@Override
public AccruedSimpleInterest apply(SimpleInterestParameter simpleInterestParameter) {
    BigDecimal si = simpleInterestParameter.getPrincipal()
        .multiply(new BigDecimal(simpleInterestParameter.getYears()))
        .multiply(new BigDecimal(simpleInterestParameter.getRate()))
        .divide(SimpleInterestClosure.HUNDRED);
    System.out.println("Calculated SI for id=" + simpleInterestParameter.getId() + " SI=" + si.toPlainString());
    return new AccruedSimpleInterest(si, simpleInterestParameter);
}

Below is the main class:

public static void main(String... args) throws IgniteException, IOException {
    Factory<SimpleInterestClosure> siClosureFactory = FactoryBuilder.factoryOf(new SimpleInterestClosure());
    ClassPathResource ress = new ClassPathResource("example-ignite-poc.xml");
    File file = new File(ress.getPath());
    try (Ignite ignite = Ignition.start(file.getPath())) {
        System.out.println("Started Ignite Cluster");
        IgniteFuture<Collection<AccruedSimpleInterest>> igniteFuture = ignite.compute()
            .applyAsync(siClosureFactory.create(), createParamCollection());
        Collection<AccruedSimpleInterest> res = igniteFuture.get();
        System.out.println(res.size());
    }
}
As far as I understand, the Job Stealing SPI requires you to implement some additional APIs in order to work. Please see this discussion on the user list:

Some remarks about job stealing SPI:
1) You have some nodes that can process the tasks of some compute job.
2) Tasks will be executed in the public thread pool by default: https://apacheignite.readme.io/docs/thread-pools#section-public-pool
3) If some node's thread pool is busy, then some task of the compute job can be executed on another node.

It will not work in the following cases:
1) If you choose a specific node for your compute task.
2) If you do an affinity call (the same as above, but the node is chosen by affinity mapping).
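As a side note, here is a minimal sketch of building that same job-stealing setup programmatically (values copied from the question's XML; this covers only the configuration side, and the caveats above about pinned nodes and affinity calls still apply):

import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.collision.jobstealing.JobStealingCollisionSpi;
import org.apache.ignite.spi.failover.jobstealing.JobStealingFailoverSpi;

public class JobStealingConfigFactory {
    public static IgniteConfiguration create() {
        JobStealingCollisionSpi colSpi = new JobStealingCollisionSpi();
        colSpi.setActiveJobsThreshold(50);     // max jobs executing concurrently per node
        colSpi.setWaitJobsThreshold(0);        // steal as soon as any job waits
        colSpi.setMessageExpireTime(1000);     // steal-request expiry, in ms
        colSpi.setMaximumStealingAttempts(10);
        colSpi.setStealingEnabled(true);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setCollisionSpi(colSpi);
        // The matching failover SPI is required so stolen jobs get re-routed.
        cfg.setFailoverSpi(new JobStealingFailoverSpi());
        return cfg;
    }
}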
Apache Ignite does not distribute an executing closure to a newly spawned node in the compute grid
Apache Ignite version: 2.1.0. I am using the default configuration for client & servers. The following is the client configuration; the server configuration does not have the "clientMode" property.

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:util="http://www.springframework.org/schema/util"
       xsi:schemaLocation="
        http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/util
        http://www.springframework.org/schema/util/spring-util.xsd">
    <bean abstract="true" id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
        <!-- Set to true to enable distributed class loading for examples, default is false. -->
        <property name="peerClassLoadingEnabled" value="true"/>
        <property name="clientMode" value="true"/>
        <!-- Enable task execution events for examples. -->
        <property name="includeEventTypes">
            <list>
                <!--Task execution events-->
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_STARTED"/>
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_FINISHED"/>
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_FAILED"/>
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_TIMEDOUT"/>
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_SESSION_ATTR_SET"/>
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_REDUCED"/>
                <!--Cache events-->
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_PUT"/>
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_READ"/>
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_REMOVED"/>
            </list>
        </property>
        <!-- Explicitly configure TCP discovery SPI to provide list of initial nodes. -->
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <!--
                        Ignite provides several options for automatic discovery that can be used
                        instead of static IP based discovery. For information on all options refer
                        to our documentation: http://apacheignite.readme.io/docs/cluster-config
                    -->
                    <!-- Uncomment static IP finder to enable static-based discovery of initial nodes. -->
                    <!--<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">-->
                    <!-- <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder"> -->
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                        <property name="addresses">
                            <list>
                                <!-- In distributed environment, replace with actual host IP address. -->
                                <value>xxx.1y4.1zz.91:47500..47509</value>
                                <value>xxx.1y4.1zz.92:47500..47509</value>
                            </list>
                        </property>
                    </bean>
                </property>
            </bean>
        </property>
    </bean>
</beans>

The closure gets executed over the server nodes in the grid as expected. When we add a new node to the grid during the execution of the closure, by running the command

.\ignite.bat ..\examples\config\example-ignite.xml

the existing nodes acknowledge the addition of the new node, but the closure is not distributed to the newly added node. Is there any configuration available to enable execution of a closure on a node added during the execution of that closure?
Edit 1: Below is the IgniteClosure implementation class:

public class SimpleInterestClosure implements IgniteClosure<SimpleInterestParam, AccruedSimpleInterest> {
    private static final long serialVersionUID = -5542687183747797356L;
    private static final BigInteger HUNDRED = new BigInteger("100".getBytes());
    private static Logger log = Logger.getLogger("SimpleInterestClosure");

    @Override
    public AccruedSimpleInterest apply(SimpleInterestParam e) {
        BigInteger si = e.getPrincipal()
            .multiply(new BigInteger(e.getDurationInYears().toString().getBytes()))
            .multiply(new BigInteger(e.getInterestRate().toString().getBytes()))
            .divide(SimpleInterestClosure.HUNDRED);
        log.info("Calculated SI for id=" + e.getId());
        return new AccruedSimpleInterest(e, si);
    }
}

Edit 2: Below is the method which invokes the IgniteClosure implementation:

public void method() throws IgniteException, IOException {
    Factory<SimpleInterestClosure> siClosureFactory = FactoryBuilder.factoryOf(new SimpleInterestClosure());
    ClassPathResource ress = new ClassPathResource("example-ignite.xml");
    File file = new File(ress.getPath());
    try (Ignite ignite = Ignition.start(file.getPath())) {
        log.info("Started Ignite Cluster");
        IgniteFuture<Collection<AccruedSimpleInterest>> igniteFuture = ignite.compute()
            .applyAsync(siClosureFactory.create(), createParamCollection());
        Collection<AccruedSimpleInterest> res = igniteFuture.get();
    }
}
This sounds like you're looking for job stealing: http://apacheignite.readme.io/docs/load-balancing#job-stealing Although it currently has a bug that may be an issue in this particular case: http://issues.apache.org/jira/browse/IGNITE-1267
Ignite Remote Server Thread Not exit, Cause Out Of Memory Finally
When I start a remote compute job, call() or affinityCall(), the remote server creates 6 threads, and these threads never exit. As the VisualVM snapshot shows, the threads named from "utility-#153%null%" to "marshaller-cache-#14i%null%" are never ended. If the client runs over and over again, the number of threads on the server node increases rapidly; as a result, the server node runs out of memory. How can I close these threads when the client is closed? Maybe I am not running the client the right way.

Client code:

String cacheKey = "jobIds";
String cname = "myCacheName";
ClusterGroup rmts = getIgnite().cluster().forRemotes();
IgniteCache<String, List<String>> cache = getIgnite().getOrCreateCache(cname);
List<String> jobList = cache.get(cacheKey);
Collection<String> res = ignite.compute(rmts).apply(
    new IgniteClosure<String, String>() {
        @Override
        public String apply(String word) {
            return word;
        }
    },
    jobList
);
getIgnite().close();
System.out.println("ignite Closed");
if (res == null) {
    System.out.println("Error: Result is null");
    return;
}
res.forEach(s -> {
    System.out.println(s);
});
System.out.println("Finished!");

getIgnite() gets the instance of Ignite:

public static Ignite getIgnite() {
    if (ignite == null) {
        System.out.println("RETURN INSTANCE ..........");
        Ignition.setClientMode(true);
        ignite = Ignition.start(confCache);
        ignite.configuration().setDeploymentMode(DeploymentMode.CONTINUOUS);
    }
    return ignite;
}

Server config:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
        http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd">
    <!-- Alter configuration below as needed. -->
    <bean id="grid.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="peerClassLoadingEnabled" value="true"/>
        <property name="peerClassLoadingMissedResourcesCacheSize" value="0"/>
        <property name="publicThreadPoolSize" value="64"/>
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                        <property name="addresses">
                            <list>
                                <value>172.22.1.72:47500..47509</value>
                                <value>172.22.1.100:47500..47509</value>
                            </list>
                        </property>
                    </bean>
                </property>
            </bean>
        </property>
        <property name="cacheConfiguration">
            <bean class="org.apache.ignite.configuration.CacheConfiguration">
                <property name="cacheMode" value="PARTITIONED"/>
                <property name="memoryMode" value="ONHEAP_TIERED"/>
                <property name="backups" value="0"/>
                <property name="offHeapMaxMemory" value="0"/>
                <property name="swapEnabled" value="false"/>
            </bean>
        </property>
    </bean>
</beans>
These thread pools are static, and the number of threads in them never depends on load (number of executed operations, jobs, etc.). That said, I don't think they are the reason for the OOME, unless you somehow start a new node within the same JVM for each executed job. I would also recommend always reusing the existing node that is already started in a JVM; starting a new one and closing it for each job is a bad practice.
Threads are created in thread pools, so you may set their size in IgniteConfiguration: setUtilityCachePoolSize(int), plus setMarshallerCachePoolSize(int) for Ignite 1.5 or setMarshallerCacheThreadPoolSize(int) for Ignite 1.7, and others.
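For illustration, a minimal sketch combining both suggestions, assuming Ignite 2.x setter names (the 1.x marshaller-pool setters named above are version-specific, and the pool sizes here are arbitrary examples): reuse one client node for the life of the JVM and size the pools once in IgniteConfiguration.

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class ClientIgniteHolder {
    private static Ignite ignite;

    // One client node per JVM, reused for every job; do not close() it per job.
    public static synchronized Ignite instance() {
        if (ignite == null) {
            IgniteConfiguration cfg = new IgniteConfiguration();
            cfg.setClientMode(true);
            // Pool sizes are illustrative; tune to your workload.
            cfg.setPublicThreadPoolSize(16);   // compute jobs run here
            cfg.setSystemThreadPoolSize(16);
            cfg.setUtilityCachePoolSize(8);
            ignite = Ignition.start(cfg);
        }
        return ignite;
    }
}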
Use JAAS for LDAP password with Spring security
I have a Java EE web application which uses LDAP authentication. I use Spring Security to connect to my LDAP with the following code:

<bean id="ldapContextSource" class="com.myapp.security.authentication.MySecurityContextSource">
    <constructor-arg index="0" value="${ldap.url}" />
    <constructor-arg index="1" ref="userConnexion" />
</bean>

<security:authentication-manager alias="authenticationManager">
    <security:authentication-provider ref="ldapAuthProvider" />
</security:authentication-manager>

<bean id="userConnexion" class="com.myapp.util.security.WebsphereCredentials">
    <constructor-arg value="${ldap.authJndiAlias}" />
</bean>

<bean id="ldapAuthProvider" class="org.springframework.security.ldap.authentication.LdapAuthenticationProvider">
    <constructor-arg>
        <bean class="org.springframework.security.ldap.authentication.BindAuthenticator">
            <constructor-arg ref="ldapContextSource" />
            <property name="userSearch" ref="userSearch" />
        </bean>
    </constructor-arg>
    <constructor-arg>
        <bean class="com.myapp.security.authentication.MyAuthoritiesPopulator" >
            <property name="userService" ref="userService" />
        </bean>
    </constructor-arg>
    <property name="userDetailsContextMapper" ref="myUserDetailsContextMapper"/>
    <property name="hideUserNotFoundExceptions" value="false" />
</bean>

Actually, my bean WebsphereCredentials uses a WebSphere private class, WSMappingCallbackHandlerFactory, as in this response: How to access authentication alias from EJB deployed to Websphere 6.1. We can see it in the official WebSphere documentation: http://pic.dhe.ibm.com/infocenter/wasinfo/v6r1/index.jsp?topic=%2Fcom.ibm.websphere.express.doc%2Finfo%2Fexp%2Fae%2Frsec_pluginj2c.html

But I don't want it because:

- I think my application can access all JAAS logins in my WebSphere instance (not sure).
- This class is defined in the HUGE IBM client library com.ibm.ws.admin.client-7.0.0.jar (42 MB) => compilation is slower, and it is not present in my enterprise Nexus.
- It's not portable, not standard.

For information, I define the WebsphereCredentials constructor as this:

Map<String, String> map = new HashMap<String, String>();
map.put(Constants.MAPPING_ALIAS, this.jndiAlias);
Subject subject;
try {
    CallbackHandler callbackHandler = WSMappingCallbackHandlerFactory.getInstance().getCallbackHandler(map, null);
    LoginContext lc = new LoginContext("DefaultPrincipalMapping", callbackHandler);
    lc.login();
    subject = lc.getSubject();
} catch (NotImplementedException e) {
    throw new EfritTechnicalException(EfritTechnicalExceptionEnum.LOGIN_CREDENTIAL_PROBLEM, e);
} catch (LoginException e) {
    throw new EfritTechnicalException(EfritTechnicalExceptionEnum.LOGIN_CREDENTIAL_PROBLEM, e);
}
PasswordCredential cred = (PasswordCredential) subject.getPrivateCredentials().toArray()[0];
this.user = cred.getUserName();
this.password = String.valueOf(cred.getPassword());

Is there a way to use just Spring Security and remove this dependency? I have no idea how to combine http://static.springsource.org/spring-security/site/docs/3.1.x/reference/jaas.html and http://static.springsource.org/spring-security/site/docs/3.1.x/reference/ldap.html. Maybe I must totally change my approach and use another way?
I assume your goal is to simply utilize the username/password that you configure in WebSphere to connect to the LDAP directory? If this is the case, you are not really trying to combine LDAP and JAAS based authentication. The JAAS support is really intended as a way of using JAAS LoginModules to authenticate a user instead of using the LDAP based authentication. If you want to obtain the username and password without having a compile-time dependency on WebSphere, you have a few options.

Eliminating Compile Time and Runtime Dependencies on WAS

One option is to configure the password in a different way. This could be as simple as using the password directly in the configuration file, as shown in the Spring Security LDAP documentation:

<bean id="ldapContextSource" class="org.springframework.security.ldap.DefaultSpringSecurityContextSource">
    <constructor-arg value="ldap://monkeymachine:389/dc=springframework,dc=org"/>
    <property name="userDn" value="cn=manager,dc=springframework,dc=org"/>
    <property name="password" value="password"/>
</bean>

You could also configure the username/password in JNDI. Another alternative is to use a .properties file. If you want to ensure the password is secured, you will probably want to encrypt it using something like Jasypt.

Eliminating Compile Time Dependencies and Still Configuring with WAS

If you need or want to use WebSphere's J2C support for storing the credentials, you can do so by injecting the CallbackHandler instance. For example, your WebsphereCredentials bean could be something like this:

try {
    LoginContext lc = new LoginContext("DefaultPrincipalMapping", this.callbackHandler);
    lc.login();
    subject = lc.getSubject();
} catch (NotImplementedException e) {
    throw new EfritTechnicalException(EfritTechnicalExceptionEnum.LOGIN_CREDENTIAL_PROBLEM, e);
} catch (LoginException e) {
    throw new EfritTechnicalException(EfritTechnicalExceptionEnum.LOGIN_CREDENTIAL_PROBLEM, e);
}
PasswordCredential cred = (PasswordCredential) subject.getPrivateCredentials().toArray()[0];
this.user = cred.getUserName();
this.password = String.valueOf(cred.getPassword());

Your configuration would then look something like this:

<bean id="userConnexion" class="com.myapp.util.security.WebsphereCredentials">
    <constructor-arg ref="wasCallbackHandler"/>
</bean>

<bean id="wasCallbackHandler"
      factory-bean="wasCallbackFactory"
      factory-method="getCallbackHandler">
    <constructor-arg>
        <map>
            <entry value="${ldap.authJndiAlias}">
                <key>
                    <util:constant static-field="com.ibm.wsspi.security.auth.callback.Constants.MAPPING_ALIAS"/>
                </key>
            </entry>
        </map>
    </constructor-arg>
    <constructor-arg>
        <null />
    </constructor-arg>
</bean>

<bean id="wasCallbackFactory"
      class="com.ibm.wsspi.security.auth.callback.WSMappingCallbackHandlerFactory"
      factory-method="getInstance" />

Disclaimer: CallbackHandler instances are not thread-safe and generally should not be used more than once, so it can be a bit risky to inject CallbackHandler instances as member variables. You may want to program in a check to ensure that the CallbackHandler is only used once.

Hybrid Approach

You could use a hybrid approach that always removes the compile-time dependency and lets you remove the runtime dependency in instances where you might not be running on WebSphere. This could be done by combining the two suggestions above and using Spring bean definition profiles to differentiate between running on WebSphere and a non-WebSphere machine.
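To make the hybrid approach concrete, here is a minimal sketch using bean definition profiles in Java config. The LdapCredentials holder and the profile names "websphere" and "standalone" are illustrative, not from the answer; the WebSphere branch reuses the LoginContext pattern shown above, and one of the two profiles must be activated (e.g. via spring.profiles.active).

import javax.resource.spi.security.PasswordCredential;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.login.LoginContext;
import javax.security.auth.login.LoginException;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

@Configuration
public class LdapCredentialsConfig {

    /** Illustrative holder for the LDAP bind user/password. */
    public static class LdapCredentials {
        public final String user;
        public final String password;

        public LdapCredentials(String user, String password) {
            this.user = user;
            this.password = password;
        }
    }

    // On WebSphere: resolve the credentials through the injected J2C callback handler.
    @Bean
    @Profile("websphere")
    public LdapCredentials wasCredentials(CallbackHandler wasCallbackHandler) throws LoginException {
        LoginContext lc = new LoginContext("DefaultPrincipalMapping", wasCallbackHandler);
        lc.login();
        PasswordCredential cred =
            (PasswordCredential) lc.getSubject().getPrivateCredentials().toArray()[0];
        return new LdapCredentials(cred.getUserName(), String.valueOf(cred.getPassword()));
    }

    // Anywhere else: plain (possibly Jasypt-encrypted) properties.
    @Bean
    @Profile("standalone")
    public LdapCredentials propertyCredentials(@Value("${ldap.user}") String user,
                                               @Value("${ldap.password}") String password) {
        return new LdapCredentials(user, password);
    }
}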