Ignite throws "Too many open files" even though ulimit shows "open files (-n) 1048576" - ignite

I stopped one Ignite server and restarted it again, and it throws an
exception with "Too many open files". I have changed the ulimit for open files with
ulimit -n 1048576
and checked that the number changed, but Ignite still could not start.
# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 15083
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1048576
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 15083
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
The error log:
>>> VM name: 7493#test-server-node1
>>> Local node [ID=07F093E4-BBEF-471C-9046-4D1A50B84087, order=9, clientMode=false]
>>> Local node addresses: [san011.fr.alcatel-lucent.com/0:0:0:0:0:0:0:1%lo, 10.0.2.15/10.0.2.15, /127.0.0.1, /192.168.100.11]
>>> Local ports: TCP:10800 TCP:11090 TCP:11211 TCP:47100 UDP:47400 TCP:47500
[19:13:57,052][INFO][main][GridDiscoveryManager] Topology snapshot [ver=9, servers=2, clients=1, CPUs=5, offheap=1.5GB, heap=4.0GB]
[19:13:57,052][INFO][main][GridDiscoveryManager] Data Regions Configured:
[19:13:57,052][INFO][main][GridDiscoveryManager] ^-- default [initSize=256.0 MiB, maxSize=758.3 MiB, persistenceEnabled=true]
[19:13:57,255][INFO][sys-#59][GridDhtPartitionDemander] Completed (final) rebalancing [fromNode=918b2b4e-f98e-4faf-bffd-8f9d1dd97bf3, cacheOrGroup=TxCoinLatestInfoCache, topology=AffinityTopologyVersion [topVer=9, minorTopVer=0], time=307 ms]
[19:13:57,256][INFO][sys-#59][GridDhtPartitionDemander] Starting rebalancing [mode=ASYNC, fromNode=918b2b4e-f98e-4faf-bffd-8f9d1dd97bf3, partitionsCount=512, topology=AffinityTopologyVersion [topVer=9, minorTopVer=0], updateSeq=1]
[19:13:57,490][INFO][sys-#45][GridDhtPartitionDemander] Completed (final) rebalancing [fromNode=918b2b4e-f98e-4faf-bffd-8f9d1dd97bf3, cacheOrGroup=TxCoinMinInfoCache, topology=AffinityTopologyVersion [topVer=9, minorTopVer=0], time=232 ms]
[19:13:57,491][INFO][sys-#45][GridDhtPartitionDemander] Starting rebalancing [mode=ASYNC, fromNode=918b2b4e-f98e-4faf-bffd-8f9d1dd97bf3, partitionsCount=512, topology=AffinityTopologyVersion [topVer=9, minorTopVer=0], updateSeq=1]
[19:13:57,828][SEVERE][sys-#57][NodeInvalidator] Critical error with null is happened. All further operations will be failed and local node will be stopped.
class org.apache.ignite.internal.processors.cache.persistence.file.PersistentStorageIOException: Could not initialize file: part-347.bin
at org.apache.ignite.internal.processors.cache.persistence.file.FilePageStore.init(FilePageStore.java:445)
at org.apache.ignite.internal.processors.cache.persistence.file.FilePageStore.read(FilePageStore.java:332)
at org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.read(FilePageStoreManager.java:322)
at org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.read(FilePageStoreManager.java:306)
at org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.acquirePage(PageMemoryImpl.java:655)
at org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.acquirePage(PageMemoryImpl.java:575)
at org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.getOrAllocatePartitionMetas(GridCacheOffheapManager.java:1132)
at org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.init0(GridCacheOffheapManager.java:1030)
at org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.updateCounter(GridCacheOffheapManager.java:1265)
at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.updateCounter(GridDhtLocalPartition.java:849)
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionDemander.handleSupplyMessage(GridDhtPartitionDemander.java:697)
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader.handleSupplyMessage(GridDhtPreloader.java:375)
at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$5.apply(GridCachePartitionExchangeManager.java:364)
at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$5.apply(GridCachePartitionExchangeManager.java:354)
at org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1060)
at org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:579)
at org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$700(GridCacheIoManager.java:99)
at org.apache.ignite.internal.processors.cache.GridCacheIoManager$OrderedMessageListener.onMessage(GridCacheIoManager.java:1609)
at org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1555)
at org.apache.ignite.internal.managers.communication.GridIoManager.access$4100(GridIoManager.java:126)
at org.apache.ignite.internal.managers.communication.GridIoManager$GridCommunicationMessageSet.unwind(GridIoManager.java:2751)
at org.apache.ignite.internal.managers.communication.GridIoManager.unwindMessageSet(GridIoManager.java:1515)
at org.apache.ignite.internal.managers.communication.GridIoManager.access$4400(GridIoManager.java:126)
at org.apache.ignite.internal.managers.communication.GridIoManager$10.run(GridIoManager.java:1484)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.nio.file.FileSystemException: /usr/share/apache-ignite/work/db/node00-d2cb44e3-b649-4e2e-b6f9-f08f9ae1b3af/cache-TxCoinMinInfoToDbCache/part-347.bin: Too many open files
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.newAsynchronousFileChannel(UnixFileSystemProvider.java:196)
at java.nio.channels.AsynchronousFileChannel.open(AsynchronousFileChannel.java:248)
at java.nio.channels.AsynchronousFileChannel.open(AsynchronousFileChannel.java:301)
at org.apache.ignite.internal.processors.cache.persistence.file.AsyncFileIO.<init>(AsyncFileIO.java:57)
at org.apache.ignite.internal.processors.cache.persistence.file.AsyncFileIOFactory.create(AsyncFileIOFactory.java:53)
at org.apache.ignite.internal.processors.cache.persistence.file.FilePageStore.init(FilePageStore.java:428)
... 26 more
[19:13:57,856][SEVERE][sys-#57][GridCacheIoManager] Failed processing message [senderId=918b2b4e-f98e-4faf-bffd-8f9d1dd97bf3, msg=GridDhtPartitionSupplyMessage [updateSeq=1, topVer=AffinityTopologyVersion [topVer=9, minorTopVer=0], missed=null, clean=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99... and 412 more], msgSize=16500, estimatedKeysCnt=1, size=512, parts=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99... and 412 more], super=GridCacheGroupIdMessage [grpId=-607232546]]]
class org.apache.ignite.IgniteException: Could not initialize file: part-347.bin
at org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.updateCounter(GridCacheOffheapManager.java:1271)
at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.updateCounter(GridDhtLocalPartition.java:849)
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionDemander.handleSupplyMessage(GridDhtPartitionDemander.java:697)
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader.handleSupplyMessage(GridDhtPreloader.java:375)
at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$5.apply(GridCachePartitionExchangeManager.java:364)
at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$5.apply(GridCachePartitionExchangeManager.java:354)
at org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1060)
at org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:579)
at org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$700(GridCacheIoManager.java:99)
at org.apache.ignite.internal.processors.cache.GridCacheIoManager$OrderedMessageListener.onMessage(GridCacheIoManager.java:1609)
at org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1555)
at org.apache.ignite.internal.managers.communication.GridIoManager.access$4100(GridIoManager.java:126)
at org.apache.ignite.internal.managers.communication.GridIoManager$GridCommunicationMessageSet.unwind(GridIoManager.java:2751)
at org.apache.ignite.internal.managers.communication.GridIoManager.unwindMessageSet(GridIoManager.java:1515)
at org.apache.ignite.internal.managers.communication.GridIoManager.access$4400(GridIoManager.java:126)
at org.apache.ignite.internal.managers.communication.GridIoManager$10.run(GridIoManager.java:1484)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: class org.apache.ignite.internal.processors.cache.persistence.file.PersistentStorageIOException: Could not initialize file: part-347.bin
at org.apache.ignite.internal.processors.cache.persistence.file.FilePageStore.init(FilePageStore.java:445)
at org.apache.ignite.internal.processors.cache.persistence.file.FilePageStore.read(FilePageStore.java:332)
at org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.read(FilePageStoreManager.java:322)
at org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.read(FilePageStoreManager.java:306)
at org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.acquirePage(PageMemoryImpl.java:655)
at org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.acquirePage(PageMemoryImpl.java:575)
at org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.getOrAllocatePartitionMetas(GridCacheOffheapManager.java:1132)
at org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.init0(GridCacheOffheapManager.java:1030)
at org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.updateCounter(GridCacheOffheapManager.java:1265)
... 18 more
Caused by: java.nio.file.FileSystemException: /usr/share/apache-ignite/work/db/node00-d2cb44e3-b649-4e2e-b6f9-f08f9ae1b3af/cache-TxCoinMinInfoToDbCache/part-347.bin: Too many open files
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.newAsynchronousFileChannel(UnixFileSystemProvider.java:196)
at java.nio.channels.AsynchronousFileChannel.open(AsynchronousFileChannel.java:248)
at java.nio.channels.AsynchronousFileChannel.open(AsynchronousFileChannel.java:301)
at org.apache.ignite.internal.processors.cache.persistence.file.AsyncFileIO.<init>(AsyncFileIO.java:57)
at org.apache.ignite.internal.processors.cache.persistence.file.AsyncFileIOFactory.create(AsyncFileIOFactory.java:53)
at org.apache.ignite.internal.processors.cache.persistence.file.FilePageStore.init(FilePageStore.java:428)
... 26 more
[19:13:57,874][SEVERE][upd-ver-checker][GridUpdateNotifier] Runtime error caught during grid runnable execution: GridWorker [name=grid-version-checker, igniteInstanceName=null, finished=false, hashCode=73805044, interrupted=false, runner=upd-ver-checker]
java.lang.ExceptionInInitializerError
at javax.crypto.JceSecurityManager.<clinit>(JceSecurityManager.java:65)
at javax.crypto.Cipher.getConfiguredPermission(Cipher.java:2586)
at javax.crypto.Cipher.getMaxAllowedKeyLength(Cipher.java:2610)
at sun.security.ssl.CipherSuite$BulkCipher.isUnlimited(CipherSuite.java:535)
at sun.security.ssl.CipherSuite$BulkCipher.<init>(CipherSuite.java:507)
at sun.security.ssl.CipherSuite.<clinit>(CipherSuite.java:614)
at sun.security.ssl.SSLContextImpl.getApplicableCipherSuiteList(SSLContextImpl.java:294)
at sun.security.ssl.SSLContextImpl.access$100(SSLContextImpl.java:42)
at sun.security.ssl.SSLContextImpl$AbstractTLSContext.<clinit>(SSLContextImpl.java:425)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at java.security.Provider$Service.getImplClass(Provider.java:1634)
at java.security.Provider$Service.newInstance(Provider.java:1592)
at sun.security.jca.GetInstance.getInstance(GetInstance.java:236)
at sun.security.jca.GetInstance.getInstance(GetInstance.java:164)
at javax.net.ssl.SSLContext.getInstance(SSLContext.java:156)
at javax.net.ssl.SSLContext.getDefault(SSLContext.java:96)
at javax.net.ssl.SSLSocketFactory.getDefault(SSLSocketFactory.java:122)
at javax.net.ssl.HttpsURLConnection.getDefaultSSLSocketFactory(HttpsURLConnection.java:332)
at javax.net.ssl.HttpsURLConnection.<init>(HttpsURLConnection.java:289)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.<init>(HttpsURLConnectionImpl.java:94)
at sun.net.www.protocol.https.Handler.openConnection(Handler.java:62)
at sun.net.www.protocol.https.Handler.openConnection(Handler.java:57)
at java.net.URL.openConnection(URL.java:979)
at org.apache.ignite.internal.processors.cluster.HttpIgniteUpdatesChecker.getUpdates(HttpIgniteUpdatesChecker.java:59)
at org.apache.ignite.internal.processors.cluster.GridUpdateNotifier$UpdateChecker.body(GridUpdateNotifier.java:268)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at org.apache.ignite.internal.processors.cluster.GridUpdateNotifier$1.run(GridUpdateNotifier.java:113)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.SecurityException: Can not initialize cryptographic mechanism
at javax.crypto.JceSecurity.<clinit>(JceSecurity.java:93)
... 29 more
Caused by: java.security.PrivilegedActionException: java.io.FileNotFoundException: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.171-7.b10.el7.x86_64/jre/lib/security/policy/unlimited/US_export_policy.jar (Too many open files)
at java.security.AccessController.doPrivileged(Native Method)
at javax.crypto.JceSecurity.<clinit>(JceSecurity.java:82)
... 29 more
Caused by: java.io.FileNotFoundException: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.171-7.b10.el7.x86_64/jre/lib/security/policy/unlimited/US_export_policy.jar (Too many open files)
at java.util.zip.ZipFile.open(Native Method)
at java.util.zip.ZipFile.<init>(ZipFile.java:225)
at java.util.zip.ZipFile.<init>(ZipFile.java:155)
at java.util.jar.JarFile.<init>(JarFile.java:166)
at java.util.jar.JarFile.<init>(JarFile.java:130)
at javax.crypto.JceSecurity.loadPolicies(JceSecurity.java:353)
at javax.crypto.JceSecurity.setupJurisdictionPolicies(JceSecurity.java:323)
at javax.crypto.JceSecurity.access$000(JceSecurity.java:50)
at javax.crypto.JceSecurity$1.run(JceSecurity.java:85)
... 31 more
[19:13:57,897][INFO][node-stopper][GridTcpRestProtocol] Command protocol successfully stopped: TCP binary
[19:13:57,929][INFO][node-stopper][GridJettyRestProtocol] Command protocol successfully stopped: Jetty REST
[19:13:57,935][INFO][node-stopper][GridDhtPartitionDemander] Cancelled rebalancing from all nodes [topology=AffinityTopologyVersion [topVer=9, minorTopVer=0]]
[19:13:57,939][SEVERE][db-checkpoint-thread-#40][GridCacheDatabaseSharedManager] Runtime error caught during grid runnable execution: GridWorker [name=db-checkpoint-thread, igniteInstanceName=null, finished=false, hashCode=1713594100, interrupted=false, runner=db-checkpoint-thread-#40]
class org.apache.ignite.IgniteException: Failed to perform WAL operation (environment was invalidated by a previous error)
at org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.beforeReleaseWrite(PageMemoryImpl.java:1490)
at org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.writeUnlockPage(PageMemoryImpl.java:1349)
at org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.writeUnlock(PageMemoryImpl.java:415)
at org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.writeUnlock(PageMemoryImpl.java:409)
at org.apache.ignite.internal.processors.cache.persistence.tree.util.PageHandler.writeUnlock(PageHandler.java:377)
at org.apache.ignite.internal.processors.cache.persistence.DataStructure.writeUnlock(DataStructure.java:198)
at org.apache.ignite.internal.processors.cache.persistence.freelist.PagesList.releaseAndClose(PagesList.java:359)
at org.apache.ignite.internal.processors.cache.persistence.freelist.PagesList.saveMetadata(PagesList.java:318)
at org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager.saveStoreMetadata(GridCacheOffheapManager.java:190)
at org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager.onCheckpointBegin(GridCacheOffheapManager.java:167)
at org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$Checkpointer.markCheckpointBegin(GridCacheDatabaseSharedManager.java:2986)
at org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$Checkpointer.doCheckpoint(GridCacheDatabaseSharedManager.java:2754)
at org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$Checkpointer.body(GridCacheDatabaseSharedManager.java:2679)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:748)
Caused by: class org.apache.ignite.internal.pagemem.wal.StorageException: Failed to perform WAL operation (environment was invalidated by a previous error)
at org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager.checkNode(FileWriteAheadLogManager.java:1354)
at org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager.access$7700(FileWriteAheadLogManager.java:130)
at org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$FileWriteHandle.addRecord(FileWriteAheadLogManager.java:2509)
at org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$FileWriteHandle.access$1900(FileWriteAheadLogManager.java:2419)
at org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager.log(FileWriteAheadLogManager.java:700)
at org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.beforeReleaseWrite(PageMemoryImpl.java:1486)
... 14 more
[19:13:57,952][INFO][tcp-disco-sock-reader-#8][TcpDiscoverySpi] Finished serving remote node connection [rmtAddr=/192.168.100.13:32777, rmtPort=32777
[19:13:57,978][INFO][node-stopper][GridCacheProcessor] Stopped cache [cacheName=ignite-sys-cache]
[19:13:57,979][INFO][node-stopper][GridCacheProcessor] Stopped cache [cacheName=TxCoinMinInfoToDbCache]
[19:13:57,980][INFO][node-stopper][GridCacheProcessor] Stopped cache [cacheName=TxCoinMinInfoCache]
[19:13:57,981][INFO][node-stopper][GridCacheProcessor] Stopped cache [cacheName=TxCoinLatestInfoCache]
[19:13:57,983][INFO][node-stopper][GridCacheProcessor] Stopped cache [cacheName=datastructures_ATOMIC_PARTITIONED_1#default-ds-group, group=default-ds-group]
[19:13:57,985][INFO][node-stopper][GridCacheProcessor] Stopped cache [cacheName=ignite-sys-atomic-cache#default-ds-group, group=default-ds-group]
[19:13:57,986][INFO][node-stopper][GridCacheProcessor] Stopped cache [cacheName=TradeCoinInfoCache]
[19:13:57,986][INFO][node-stopper][GridCacheProcessor] Stopped cache [cacheName=LvOneTxCache]
[19:13:57,986][INFO][node-stopper][GridCacheProcessor] Stopped cache [cacheName=CoinTypeListCache]
[19:13:57,987][INFO][node-stopper][GridCacheProcessor] Stopped cache [cacheName=MatchResultRecordCache]
[19:14:02,540][INFO][node-stopper][GridDeploymentLocalStore] Removed undeployed class: GridDeployment [ts=1525259633995, depMode=SHARED, clsLdr=sun.misc.Launcher$AppClassLoader#764c12b6, clsLdrId=fa6be802361-07f093e4-bbef-471c-9046-4d1a50b84087, userVer=0, loc=true, sampleClsName=org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionFullMap, pendingUndeploy=false, undeployed=true, usage=0]
[19:14:02,550][INFO][node-stopper][IgniteKernal]
>>> +---------------------------------------------------------------------------------+
>>> Ignite ver. 2.4.0#20180305-sha1:aa342270b13cc1f4713382a8eb23b2eb7edaa3a5 stopped OK
>>> +---------------------------------------------------------------------------------+
>>> Grid uptime: 00:00:05.510
But if I manually change it to
ulimit -n 65535
the node can be restarted normally and rejoins the cluster.
I have tried several times; it can always be reproduced.

Please get the count of open file handles with the command sudo lsof -u user | wc -l, where user is the user name.
Check the system configuration for file descriptors with sudo sysctl fs.file-nr. You can increase the limit in /etc/sysctl.conf.
Please also check your application for proper closing of file resources, and work out which process is consuming the file descriptors.
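If lsof is not available, one can also count descriptors per process through /proc (a sketch; the PID here defaults to the current shell purely for illustration, so substitute the Ignite node's PID, e.g. from pgrep -f IgniteKernal):

```shell
# Count file descriptors currently open by one process via /proc.
count_fds() {
  pid=${1:-$$}          # default: current shell, for illustration only
  ls "/proc/$pid/fd" | wc -l
}

count_fds               # prints the descriptor count for the chosen PID
```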
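Note also that a ulimit set in an interactive shell only affects processes started from that shell. If the Ignite node runs as a service, check the limit that actually applies to the node process (a sketch; the PID again defaults to the current shell for illustration, and the unit name ignite.service is a hypothetical example):

```shell
# Show the "Max open files" limit that actually applies to a running process.
show_nofile_limit() {
  pid=${1:-$$}                      # substitute the Ignite node's PID
  grep 'Max open files' "/proc/$pid/limits"
}

show_nofile_limit

# For a systemd-managed node the shell ulimit is ignored; set the limit in the
# unit instead, e.g.:
#   systemctl edit ignite.service
#     [Service]
#     LimitNOFILE=1048576
# To make the system-wide limit persistent, add to /etc/sysctl.conf:
#   fs.file-max = 1048576
```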

Related

Scrapy Pausing and resuming crawls, results directory

I have finished a scraping project using resume mode, but I don't know where the results are.
scrapy crawl somespider -s JOBDIR=crawls/somespider-1
I looked at https://docs.scrapy.org/en/latest/topics/jobs.html, but it does not say anything about this.
Where is the file with the results?
2020-09-10 23:31:31 [scrapy.core.engine] INFO: Closing spider (finished)
2020-09-10 23:31:31 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'bans/error/scrapy.core.downloader.handlers.http11.TunnelError': 22,
'bans/error/twisted.internet.error.ConnectionRefusedError': 2,
'bans/error/twisted.internet.error.TimeoutError': 6891,
'bans/error/twisted.web._newclient.ResponseNeverReceived': 8424,
'bans/status/500': 9598,
'bans/status/503': 56,
'downloader/exception_count': 15339,
'downloader/exception_type_count/scrapy.core.downloader.handlers.http11.TunnelError': 22,
'downloader/exception_type_count/twisted.internet.error.ConnectionRefusedError': 2,
'downloader/exception_type_count/twisted.internet.error.TimeoutError': 6891,
'downloader/exception_type_count/twisted.web._newclient.ResponseNeverReceived': 8424,
'downloader/request_bytes': 9530,
'downloader/request_count': 172,
'downloader/request_method_count/GET': 172,
'downloader/response_bytes': 1848,
'downloader/response_count': 170,
'downloader/response_status_count/200': 169,
'downloader/response_status_count/500': 9,
'downloader/response_status_count/503': 56,
'elapsed_time_seconds': 1717,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2015, 9, 11, 2, 31, 31, 32),
'httperror/response_ignored_count': 67,
'httperror/response_ignored_status_count/500': 67,
'item_scraped_count': 120,
'log_count/DEBUG': 357,
'log_count/ERROR': 119,
'log_count/INFO': 1764,
'log_count/WARNING': 240,
'proxies/dead': 1,
'proxies/good': 1,
'proxies/mean_backoff': 0.0,
'proxies/reanimated': 0,
'proxies/unchecked': 0,
'response_received_count': 169,
'retry/count': 1019,
'retry/max_reached': 93,
'retry/reason_count/500 Internal Server Error': 867,
'retry/reason_count/twisted.internet.error.TimeoutError': 80,
'retry/reason_count/twisted.web._newclient.ResponseNeverReceived': 72,
'scheduler/dequeued': 1722,
'scheduler/dequeued/disk': 1722,
'scheduler/enqueued': 1722,
'scheduler/enqueued/disk': 1722,
'start_time': datetime.datetime(2015, 9, 9, 2, 48, 56, 908)}
2020-09-10 23:31:31 [scrapy.core.engine] INFO: Spider closed (finished)
(Face python 3.8) D:\Selenium\Face python 3.8\TORBUSCADORDELINKS\TORBUSCADORDELINKS\spiders>
Your command,
scrapy crawl somespider -s JOBDIR=crawls/somespider-1
does not specify an output file path.
Because of that, your results were not written anywhere.
Use the -o command-line switch to specify an output path.
See also the Scrapy tutorial, which covers this. Or run scrapy crawl --help.
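For example, resuming the same job while writing the items to a feed file (the filename items.json is arbitrary):

```shell
# Resume the paused job and append scraped items to items.json.
# (-o appends; since Scrapy 2.0, -O overwrites the file instead.)
scrapy crawl somespider -s JOBDIR=crawls/somespider-1 -o items.json
```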

Julia and dbscan clustering: how extract elements from resulting structure?

Warning: this is from a Julia n00b!
I performed dbscan on a point coordinate array in Julia (note that this is not the 'distance based' method that returns 'assignments' as part of the result structure, but the 'adjacency list' method; documentation here). I attempt to access the vector containing the indices, but I am at a loss when trying to retrieve the members of individual clusters:
dbr = dbscan(pointcoordinates, .1, min_neighbors = 10, min_cluster_size = 10)
13-element Array{DbscanCluster,1}:
DbscanCluster(17, [4, 12, 84, 90, 94, 675, 676, 737, 873, 965], [27, 108, 177, 880, 954, 1050, 1067])
DbscanCluster(10, Int64[], [46, 48, 51, 57, 188, 225, 226, 228, 270, 542])
DbscanCluster(11, [48, 51, 228], [46, 49, 57, 188, 225, 226, 270, 542])
DbscanCluster(14, [418, 759, 832, 988, 1046], [830, 831, 855, 865, 989, 991, 996, 1021, 1070])
DbscanCluster(10, Int64[], [624, 654, 664, 803, 805, 821, 859, 987, 1057, 1069])
It is easy to retrieve a single cluster from the array:
> dbr[1]
DbscanCluster(17, [4, 12, 84, 90, 94, 675, 676, 737, 873, 965], [27, 108, 177, 880, 954, 1050, 1067])
But how do I get at the stuff inside DbscanCluster?
a = dbr[1]
DbscanCluster(17, [4, 12, 84, 90, 94, 675, 676, 737, 873, 965], [27, 108, 177, 880, 954, 1050, 1067])
In [258]:
a[1]
MethodError: no method matching getindex(::DbscanCluster, ::Int64)
Thank you for your help, and sorry if I am missing something glaring!
What makes you say that DbscanCluster is a subtype of Array?
julia> DbscanCluster <: AbstractArray
false
You might be confused by Array{DbscanCluster,1} in your result, but this just tells you that the object returned by the dbscan call is an Array whose elements are of type DbscanCluster; it does not tell you anything about whether those elements themselves are subtypes of Array.
As for how to get the indexes, the docs for DbscanResult show that the type has three fields:
seeds::Vector{Int}: indices of cluster starting points
assignments::Vector{Int}: vector of clusters indices, where each point was assigned to
counts::Vector{Int}: cluster sizes (number of assigned points)
each of which you can access with dot notation, e.g. dbr[1].assignments.
If you want to get, say, the counts for all 13 clusters in your results, you can broadcast getproperty like so:
getproperty.(dbr, :counts)
Note that counts does not exist in the case of the "adjacency lists" method of dbscan; in that case one can use:
getproperty.(dbr, :core_indices)

RabbitMQ - vhost '/' is down for user 'XYZ'. even after user has all access

I am using RabbitMQ version 3.7.17
My AWS hard disk was completely occupied (100% full), due to which all the services stopped working.
To solve this, I extended the AWS server storage and then tried to start all the API services; after that, they started throwing this error:
Connection.open: (541) INTERNAL_ERROR - access to vhost '/' refused for user 'XYZ': vhost '/' is down
I restarted the RabbitMQ server using the command below, but it still gave the error:
sudo service rabbitmq-server restart
I checked the permissions for my user using:
sudo rabbitmqctl list_permissions --vhost /
The response shows that the user has full access:
Listing permissions for vhost "/" ...
user configure write read
XYZ .* .* .*
Thank You.
As the disk was full, RabbitMQ's processing did not complete, which resulted in the vhost error.
When I tried to restart the vhost with sudo rabbitmqctl restart_vhost, it got an error:
ERROR:
Failed to start vhost '/' on node 'rabbit#ip-172-31-16-172'Reason: {:shutdown, {:failed_to_start_child, :rabbit_vhost_process, {:error, {{{:function_clause, [{:rabbit_queue_index, :journal_minus_segment1, [{{true, <<230, 140, 82, 5, 193, 81, 136, 75, 11, 91, 31, 232, 119, 30, 99, 112, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 4, 144>>, <<131, 104, 6, 100, 0, 13, 98, 97, 115, 105, 99, 95, 109, 101, 115, 115, 97, 103, 101, 104, 4, 100, 0, 8, 114, 101, 115, 111, 117, 114, ...>>}, :no_del, :no_ack}, {{true, <<230, 140, 82, 5, 193, 81, 136, 75, 11, 91, 31, 232, 119, 30, 99, 112, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 4, 144>>, <<131, 104, 6, 100, 0, 13, 98, 97, 115, 105, 99, 95, 109, 101, 115, 115, 97, 103, 101, 104, 4, 100, 0, 8, 114, 101, 115, 111, 117, ...>>}, :del, :no_ack}], [file: 'src/rabbit_queue_index.erl', line: 1231]}, {:rabbit_queue_index, :"-journal_minus_segment/3-fun-0-", 4, [file: 'src/rabbit_queue_index.erl', line: 1208]}, {:array, :sparse_foldl_3, 7, [file: 'array.erl', line: 1684]}, {:array, :sparse_foldl_2, 9, [file: 'array.erl', line: 1678]}, {:rabbit_queue_index, :"-recover_journal/1-fun-0-", 1, [file: 'src/rabbit_queue_index.erl', line: 915]}, {:lists, :map, 2, [file: 'lists.erl', line: 1239]}, {:rabbit_queue_index, :segment_map, 2, [file: 'src/rabbit_queue_index.erl', line: 1039]}, {:rabbit_queue_index, :recover_journal, 1, [file: 'src/rabbit_queue_index.erl', line: 906]}]}, {:gen_server2, :call, [#PID<10397.473.0>, :out, :infinity]}}, {:child, :undefined, :msg_store_persistent, {:rabbit_msg_store, :start_link, [:msg_store_persistent, '/var/lib/rabbitmq/mnesia/rabbit#ip-172-31-16-172/msg_stores/vhosts/628WB79CIFDYO9LJI6DKMI09L', [], {#Function<2.32138423/1 in :rabbit_queue_index>, {:start, [{:resource, "/", :queue, "xx_queue"}, {:resource, "/", :queue, "app_xxx_queue"}, {:resource, "/", :queue, "default"}, {:resource, "/", :queue, "xx_priority_queue"}, {:resource, "/", :queue, "xxx_queue"}, {:resource, "/", :queue, "xxxx_queue"}, {:resource, "/", :queue, 
"yyy_queue"}, {:resource, "/", :queue, "zzz_queue"}, {:resource, "/", :queue, "aaa_queue"}]}}]}, :transient, 30000, :worker, [:rabbit_msg_store]}}}}}
STEPS TO SOLVE IT
Stop your app node with the command below.
sudo rabbitmqctl stop_app
Reset your node with the command below. This removes the node from any cluster it belongs to, removes all data from the management database (such as configured users and vhosts) and deletes all persistent messages. (Be careful while using it.)
To back up your data before the reset, look here.
sudo rabbitmqctl reset
Start your node with the command below.
sudo rabbitmqctl start_app
Restart your vhost with the command below.
sudo rabbitmqctl restart_vhost
And if you are using an application that depends on RabbitMQ, such as Celery in my case, you will have to restart it as well.
This was the link that helped me to solve it.
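The steps above can be collected into one sequence (a sketch; rabbitmqctl reset is destructive, so take a backup first):

```shell
#!/bin/sh
# Recovery sequence from the steps above. WARNING: `reset` removes users,
# vhosts and all persistent messages; back up your data first.
set -e
sudo rabbitmqctl stop_app
sudo rabbitmqctl reset          # destructive: clears the management database
sudo rabbitmqctl start_app
sudo rabbitmqctl restart_vhost
```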

OSGi project with Kotlin dependency

I'm converting a Java project to Kotlin that gets deployed as an OSGi bundle. I've included the kotlin-osgi-bundle and kotlin-stdlib-jdk8 dependencies:
<dependencies>
  <dependency>
    <groupId>org.jetbrains.kotlin</groupId>
    <artifactId>kotlin-osgi-bundle</artifactId>
    <version>1.3.11</version>
  </dependency>
  <dependency>
    <groupId>org.jetbrains.kotlin</groupId>
    <artifactId>kotlin-stdlib-jdk8</artifactId>
    <version>1.3.11</version>
  </dependency>
</dependencies>
I've also included the kotlin-maven-plugin, and I can successfully package up the project.
However, when deployed, I get the following error:
Error while starting bundle: file:/D:/Esri/ArcGIS/Server/GeoEvent/deploy/test-0.0.1.jar
org.osgi.framework.BundleException: Unresolved constraint in bundle com.sample.test [464]: Unable to resolve 464.0: missing requirement [464.0] osgi.wiring.package; (&(osgi.wiring.package=kotlin)(version>=1.3.0)(!(version>=2.0.0)))
at org.apache.felix.framework.Felix.resolveBundleRevision(Felix.java:3974)[org.apache.felix.framework-4.2.1.jar:]
at org.apache.felix.framework.Felix.startBundle(Felix.java:2037)[org.apache.felix.framework-4.2.1.jar:]
at org.apache.felix.framework.BundleImpl.start(BundleImpl.java:955)[org.apache.felix.framework-4.2.1.jar:]
at org.apache.felix.fileinstall.internal.DirectoryWatcher.startBundle(DirectoryWatcher.java:1245)[16:org.apache.felix.fileinstall:3.4.2]
at org.apache.felix.fileinstall.internal.DirectoryWatcher.startBundles(DirectoryWatcher.java:1217)[16:org.apache.felix.fileinstall:3.4.2]
at org.apache.felix.fileinstall.internal.DirectoryWatcher.doProcess(DirectoryWatcher.java:509)[16:org.apache.felix.fileinstall:3.4.2]
at org.apache.felix.fileinstall.internal.DirectoryWatcher.process(DirectoryWatcher.java:358)[16:org.apache.felix.fileinstall:3.4.2]
at org.apache.felix.fileinstall.internal.DirectoryWatcher.run(DirectoryWatcher.java:310)[16:org.apache.felix.fileinstall:3.4.2]
I have tried specifying the Kotlin dependency in the maven-bundle-plugin:
<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <extensions>true</extensions>
  <configuration>
    <instructions>
      <Embed-Dependency>kotlin-osgi-bundle</Embed-Dependency>
    </instructions>
  </configuration>
</plugin>
This generates a fairly large JAR (4MB) compared to the original Java project (35KB), and when deployed I get the following error:
INFO | curator-framework - 3.1.0 | New config event received: [115, 101, 114, 118, 101, 114, 46, 49, 61, 76, 69, 65, 45, 51, 48, 53, 48, 57, 51, 46, 83, 69, 82, 86, 73, 67, 69, 83, 46, 69, 83, 82, 73, 65, 85, 83, 84, 82, 65, 76, 73, 65, 46, 67, 79, 77, 46, 65, 85, 58, 50, 49, 56, 50, 58, 50, 49, 57, 48, 58, 112, 97, 114, 116, 105, 99, 105, 112, 97, 110, 116, 59, 48, 46, 48, 46, 48, 46, 48, 58, 50, 49, 56, 49, 10, 118, 101, 114, 115, 105, 111, 110, 61, 49, 48, 48, 48, 48, 48, 48, 48, 48]
INFO | curator-client - 3.1.0 | Connection string changed to: LAPTOP-3050:2181
INFO | curator-framework - 3.1.0 | State change: SUSPENDED
INFO | curator-client - 3.1.0 | Connection string changed to: LAPTOP-3050:2181
INFO | curator-client - 3.1.0 | Connection string changed to: LAPTOP-3050:2181
INFO | curator-client - 3.1.0 | Connection string changed to: LAPTOP-3050:2181
INFO | curator-client - 3.1.0 | Connection string changed to: LAPTOP-3050:2181
INFO | curator-client - 3.1.0 | Connection string changed to: LAPTOP-3050:2181
INFO | curator-framework - 3.1.0 | State change: RECONNECTED
INFO | curator-client - 3.1.0 | Connection string changed to: LAPTOP-3050:2181
INFO | curator-client - 3.1.0 | Connection string changed to: LAPTOP-3050:2181
INFO | curator-client - 3.1.0 | Connection string changed to: LAPTOP-3050:2181
INFO | curator-client - 3.1.0 | Connection string changed to: LAPTOP-3050:2181
INFO | curator-client - 3.1.0 | Connection string changed to: LAPTOP-3050:2181
INFO | curator-client - 3.1.0 | Connection string changed to: LAPTOP-3050:2181
INFO | curator-framework - 3.1.0 | New config event received: [115, 101, 114, 118, 101, 114, 46, 49, 61, 76, 69, 65, 45, 51, 48, 53, 48, 57, 51, 46, 83, 69, 82, 86, 73, 67, 69, 83, 46, 69, 83, 82, 73, 65, 85, 83, 84, 82, 65, 76, 73, 65, 46, 67, 79, 77, 46, 65, 85, 58, 50, 49, 56, 50, 58, 50, 49, 57, 48, 58, 112, 97, 114, 116, 105, 99, 105, 112, 97, 110, 116, 59, 48, 46, 48, 46, 48, 46, 48, 58, 50, 49, 56, 49, 10, 118, 101, 114, 115, 105, 111, 110, 61, 49, 48, 48, 48, 48, 48, 48, 48, 48]
INFO | curator-client - 3.1.0 | Connection string changed to: LAPTOP-3050:2181
ERROR | com.esri.ges.persistence.zookeeper.zk-persistenceutility - 10.5.1 | KeeperErrorCode = ConnectionLoss for /geoevent/config/clusters/default/deploy/test-0.0.1.jar
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /geoevent/config/clusters/default/deploy/test-0.0.1.jar
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)[23:org.apache.zookeeper.zookeeper-geoevent:3.5.0]
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)[23:org.apache.zookeeper.zookeeper-geoevent:3.5.0]
at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:1155)[23:org.apache.zookeeper.zookeeper-geoevent:3.5.0]
at org.apache.curator.framework.imps.CreateBuilderImpl$17.call(CreateBuilderImpl.java:1040)[386:curator-framework:3.1.0]
at org.apache.curator.framework.imps.CreateBuilderImpl$17.call(CreateBuilderImpl.java:1023)[386:curator-framework:3.1.0]
at org.apache.curator.connection.StandardConnectionHandlingPolicy.callWithRetry(StandardConnectionHandlingPolicy.java:67)[387:curator-client:3.1.0]
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:99)[387:curator-client:3.1.0]
at org.apache.curator.framework.imps.CreateBuilderImpl.pathInForeground(CreateBuilderImpl.java:1020)[386:curator-framework:3.1.0]
at org.apache.curator.framework.imps.CreateBuilderImpl.protectedPathInForeground(CreateBuilderImpl.java:501)[386:curator-framework:3.1.0]
at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:491)[386:curator-framework:3.1.0]
at org.apache.curator.framework.imps.CreateBuilderImpl$4.forPath(CreateBuilderImpl.java:367)[386:curator-framework:3.1.0]
at org.apache.curator.framework.imps.CreateBuilderImpl$4.forPath(CreateBuilderImpl.java:309)[386:curator-framework:3.1.0]
at com.esri.ges.fabric.internal.ZKPersistenceUtility.copyInputStreamToPath(ZKPersistenceUtility.java:338)[77:com.esri.ges.persistence.zookeeper.zk-persistenceutility:10.5.1]
at Proxy73615e00_3b42_4973_8167_02adcb1d58c6.copyInputStreamToPath(Unknown Source)[:]
at Proxyca093291_fc2b_46b8_8c21_c244986150ba.copyInputStreamToPath(Unknown Source)[:]
at com.esri.ges.registry.deploy.internal.DeployFolderRegistryImpl.copyIntoZooKeeper(DeployFolderRegistryImpl.java:73)[169:com.esri.ges.registry.internal-deploy-registry:10.5.1]
at com.esri.ges.registry.deploy.internal.DeployFolderRegistryImpl.access$500(DeployFolderRegistryImpl.java:37)[169:com.esri.ges.registry.internal-deploy-registry:10.5.1]
at com.esri.ges.registry.deploy.internal.DeployFolderRegistryImpl$LookForChanges.run(DeployFolderRegistryImpl.java:141)[169:com.esri.ges.registry.internal-deploy-registry:10.5.1]
at java.lang.Thread.run(Thread.java:745)[:1.8.0_121]
This gets repeated a number of times.
Am I missing something, or does this simply mean that the application (Esri ArcGIS GeoEvent) doesn't support OSGi bundles written in Kotlin?
Your bundle simply has a dependency on the kotlin package and you need to ensure that this dependency is satisfied.
Breaking this down, it means that your bundle has Import-Package: kotlin, which has been derived from the fact that classfiles in the bundle have dependencies on the kotlin package. I know very little about Kotlin but clearly the kotlin package contains the standard library.
Where you have a bundle that imports a package there must be another bundle that exports that package. This simply means you need to find a bundle that has Export-Package: kotlin and ensure that it is deployed into your OSGi Framework alongside your own bundle.
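Concretely, the resolver is matching manifest headers. Your bundle's generated MANIFEST.MF contains something like this (the version range here is illustrative, derived from the error message, not copied from the actual artifact):

```
Import-Package: kotlin;version="[1.3,2)"
```

and it can only resolve if some other deployed bundle's manifest declares, for example:

```
Export-Package: kotlin;version="1.3.11"
```

So deploying kotlin-osgi-bundle as its own bundle into the framework, alongside yours, is what satisfies the requirement.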
An alternative "solution" is to embed the dependency in your own bundle but as you have discovered, this usually creates far more problems than it solves.

What is the difference between subquery and values when passed to NOT IN on Postgresql?

In Rails4 app (versions: rails 4.2.3, postgresql 9.3.5), I have model classes like below
class Message < ActiveRecord::Base
  belongs_to :receiver, class_name: 'User'
  belongs_to :sender, class_name: 'User'
  validates :receiver, presence: true
  validates :sender, presence: true
end
class User < ActiveRecord::Base
  has_many :received_messages, class_name: 'Message', foreign_key: :receiver_id
  has_many :sent_messages, class_name: 'Message', foreign_key: :sender_id
end
I want to get the collection of users who have NOT received a message from a specific user, so I wrote these scopes:
class User < ActiveRecord::Base
  ...
  scope :received_messages_from, -> (user) {
    includes(:received_messages).
      where('messages.sender_id': user.id).
      references(:received_messages)
  }
  scope :not_received_messages_from, -> (user) {
    includes(:received_messages).
      where.not(id: received_messages_from(user).select(:id)).
      references(:received_messages)
  }
end
I have these rows in the messages table:
message_00:
  sender_user_id: 11
  receiver_user_id: 12
message_01:
  sender_user_id: 11
  receiver_user_id: 12
message_02:
  sender_user_id: 12
  receiver_user_id: 11
message_11:
  sender_user_id: 17
  receiver_user_id: 11
message_12:
  sender_user_id: 11
  receiver_user_id: 17
message_13:
  sender_user_id: 18
  receiver_user_id: 12
message_14:
  sender_user_id: 12
  receiver_user_id: 18
message_15:
  sender_user_id: 17
  receiver_user_id: 12
message_16:
  sender_user_id: 17
  receiver_user_id: 13
message_17:
  sender_user_id: 17
  receiver_user_id: 14
So User.received_messages_from(User.find(17)).pluck(:id) returns [11, 12, 13, 14], and the result of User.not_received_messages_from(User.find(17)).pluck(:id) shouldn't contain those ids.
But the not_received_messages_from scope doesn't work: it returns users who have received messages from the given user. It generates SQL like this (in this example, the user's id is 17):
SELECT "users"."id" FROM "users"
LEFT OUTER JOIN "messages" ON "messages"."receiver_id" = "users"."id"
WHERE ("users"."id"
NOT IN (
SELECT "users"."id" FROM "users"
WHERE "messages"."sender_id" = 17))
User.not_received_messages_from(User.find(17)).pluck(:id) results:
[11, 12, 12, 12, 15, 16, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 17, 18, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100]
So I tried changing .select(:id) to .pluck(:id) in the not_received_messages_from scope, and this works.
scope :not_received_messages_from, -> (user) {
  includes(:received_messages).
    where.not(id: received_messages_from(user).pluck(:id)).
    references(:received_messages)
}
SQL:
SELECT "users"."id" FROM "users"
LEFT OUTER JOIN "messages" ON "messages"."receiver_id" = "users"."id"
WHERE ("users"."id" NOT IN (11, 12, 13, 14))
User.not_received_messages_from(User.find(17)).pluck(:id) results:
[15, 16, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 17, 18, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100]
I think the only difference between the two SQL statements is whether a subquery or a static array of ids is passed to NOT IN. Why do the results differ?
This is likely because your sub-select is not returning the expected response.
SELECT "users"."id" FROM "users"
LEFT OUTER JOIN "messages" ON "messages"."receiver_id" = "users"."id"
WHERE ("users"."id"
NOT IN (
SELECT "users"."id" FROM "users"
WHERE "messages"."sender_id" = 17))
It's been a while since I've looked at PostgreSQL joins, but look closely at what that sub-select does: it references "messages"."sender_id", a column from the outer query's join, which makes it a correlated subquery that is re-evaluated for each joined row. When a row's joined message was sent by user 17, the sub-select returns every user id, and NOT IN filters the row out. For every other joined row (and the LEFT JOIN produces one row per message, plus rows for users with no messages at all) the sub-select returns an empty set, and NOT IN over an empty set is always true, so the row passes. That is why users 11 and 12 still appear in the result, and why some ids appear more than once. The pluck version avoids this entirely by passing a fixed list of ids, evaluated once before the outer query runs.
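The difference can be reproduced in miniature. The sketch below uses SQLite via Python's sqlite3 in place of PostgreSQL (both engines resolve the outer-table reference in the subquery the same way), and the users/messages rows are invented for illustration:

```python
import sqlite3

# Tiny schema mirroring the question's column names (sender_id / receiver_id).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY);
CREATE TABLE messages (id INTEGER PRIMARY KEY, sender_id INT, receiver_id INT);
INSERT INTO users (id) VALUES (11), (12), (13), (17);
INSERT INTO messages (sender_id, receiver_id) VALUES
  (17, 11),  -- user 11 received from 17 ...
  (18, 11),  -- ... and also from 18
  (18, 12),  -- user 12 received from 18 only
  (17, 13);  -- user 13 received from 17
""")

# Variant 1: the sub-select references messages.sender_id from the OUTER
# query, so it is a correlated subquery, re-evaluated per joined row.
# For a joined row whose message has sender_id <> 17, the sub-select is
# empty, and "id NOT IN (empty set)" is true -- so user 11 still appears,
# via its (18, 11) message row, despite having received from 17.
correlated = conn.execute("""
SELECT users.id FROM users
LEFT OUTER JOIN messages ON messages.receiver_id = users.id
WHERE users.id NOT IN (
  SELECT users.id FROM users WHERE messages.sender_id = 17)
""").fetchall()

# Variant 2: a literal id list, which is what .pluck(:id) produces.
literal = conn.execute("""
SELECT users.id FROM users
LEFT OUTER JOIN messages ON messages.receiver_id = users.id
WHERE users.id NOT IN (11, 13)
""").fetchall()

print(sorted(r[0] for r in correlated))  # → [11, 12, 17] (11 leaks through)
print(sorted(r[0] for r in literal))     # → [12, 17]
```

In the first query, user 11 slips through via its second message row; in the second, the fixed list excludes it no matter which message row the join produced.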