Cypher: node creation issue

I am unable to create 10K nodes, either in the GUI interface or in cypher-shell. I followed the Neo4j Performance Tuning guide (https://neo4j.com/developer/guide-performance-tuning/). I am using Ubuntu 16.04.5 LTS with 16GB RAM on an Intel® Core™ i7 CPU @ 3.00GHz × 4, running Neo4j Community Edition 3.4.7. I set the parameters in neo4j.conf as:
dbms.memory.heap.initial_size=8g
dbms.memory.heap.max_size=8g
dbms.memory.pagecache.size=10g
dbms.jvm.additional=-Xss8G
dbms.jvm.additional=-Xmx8G
I am using an embedded installation. Even then, I am getting the following error in cypher-shell:
There is not enough stack size to perform the current task. This is generally considered to be a database error, so please contact Neo4j support. You could try increasing the stack size: for example to set the stack size to 2M, add `dbms.jvm.additional=-Xss2M' to in the neo4j configuration (normally in 'conf/neo4j.conf' or, if you are using Neo4j Desktop, found through the user interface) or if you are running an embedded installation just add -Xss2M as command line flag
The Cypher command file is attached as a text file (cypher-file).

You should either use LOAD CSV or execute each Cypher statement separately.
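For example, a batched import along those lines might look like the sketch below (the CSV path, column name and node label are placeholders; PERIODIC COMMIT keeps each transaction small instead of building one huge statement):
USING PERIODIC COMMIT 1000
LOAD CSV WITH HEADERS FROM 'file:///nodes.csv' AS row
CREATE (:Person {name: row.name});
If you would rather stay inside Cypher, a single UNWIND over a range also avoids one giant comma-separated CREATE:
UNWIND range(1, 10000) AS i
CREATE (:Person {id: i});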

Related

How to set JVM XMX parameter for SQL Workbench? (increase available memory)

I'm running SQL Workbench/J (on macOS) and keep running out of memory on a heavy query.
I just need to fetch this data once in a blue moon and would like to increase the available memory.
I found a workaround when running from the terminal, which is to use the command below, but I would like to set this from the SQL Workbench application itself:
java -Xmx4g -jar sqlworkbench.jar
Disclaimer: I am not a Mac user.
The memory setting used by the macOS launcher is stored in the file Info.plist, which should be inside the Contents sub-folder of the SQL Workbench/J "app" folder (I am not sure what it is called).
There is already an entry with -Xmx2048m present that you can change.

External script (R) not working

When I try to run an external script (an R script) from the Kognitio console, I get the error message below.
Error:external script vfork child: No such file or directory
Can someone please help me understand what this means?
This will be because you have not replicated the script environment to all the DB nodes which are eligible to run the script.
Chapter 10 of the Kognitio Guide (downloadable from http://www.kognitio.com/forums/viewtopic.php?t=3) explains in section 10.2 how the script environment must be installed identically, in the same location, on all nodes which will be used in processing, and section 10.6 explains how you can constrain this to a subset of nodes if for some reason you do not want the script environment to be on all nodes (e.g. if it has an expensive per-node licence).
You can use the wxsync tool to synchronise files across all nodes, or a remote deployment tool, such as HP's RDP, to ensure that the script environment is installed identically on all nodes.

Error initializing JVM

I am getting a "Could not reserve enough space for object heap" error when trying to start the hybris server.
I have set
wrapper.java.additional.1=-Xmx1G
wrapper.java.additional.2=-XX:MaxPermSize=1024M
My machine is a 64-bit Windows machine with 8GB RAM.
I faced the same problem once. In my case, the problem was that too many other applications were running on my system.
So go to the Task Manager and check the available memory.
Close some applications and try running it again.
Also, if you are using Eclipse, then in your eclipse.ini file (located beside the Eclipse executable), replace -Xmx256m with -Xmx1024m (or -Xmx512m).
This is not compulsory but in certain cases it works.
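For orientation, the heap flags in eclipse.ini sit below the -vmargs marker, so after the change the relevant lines would look roughly like this (the -Xms value shown is just whatever your file already contains):
-vmargs
-Xms256m
-Xmx1024m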
If you are using some extension, then:
Open the YOURPATH/config/local.properties file.
Add the following entry:
config/local.properties
build.parallel=true
Save the file.
(On machines with multiple cores, we can tell hybris to utilize them by building in parallel; in certain cases this too works.)
I too faced the same problem. I followed the steps below to set the max heap size.
Add the following content to local.properties
tomcat.generaloptions=-Xmx4G -ea -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dorg.tanukisoftware.wrapper.WrapperManager.mbean=true -Djava.endorsed.dirs="%CATALINA_HOME%/lib/endorsed" -Dcatalina.base=%CATALINA_BASE% -Dcatalina.home=%CATALINA_HOME% -Dfile.encoding=UTF-8 -Dlog4j.configuration=log4j_init_tomcat.properties -Djava.util.logging.config.file=jdk_logging.properties -Djava.io.tmpdir="${HYBRIS_TEMP_DIR}"
ant clean all
start hybrisserver
Reference
https://launchpad.support.sap.com/#/notes/0002437669

Strange Apache behaviour when launching an external binary called by a Perl script

I am currently setting up a web service powered by Apache and running on CentOS 6.4.
This service uses Perl scripts (cgi-bin) that launch, in particular, external home-made compiled Fortran binaries.
Here is the issue: when I boot my server, everything goes well except that one of my binaries crashes systematically (with a segfault logged by the kernel) when called by my Perl scripts.
If I manually restart the httpd service (at the command line: service httpd restart), the issue is totally fixed.
I examined apache/system logs and nothing suspicious can be found.
It appears that the problem occurs only when httpd is launched by the /etc/rc[0-6].d startup directives. I tried changing the launch order of httpd (S85httpd by default) to other positions, without success.
To summarize, my web service is only functional (with no external binary crash) when httpd is launched at the command line once the server has fully booted up!
[EDIT] This issue is now resolved:
My fortran binary handles very large arrays and complex functions requiring an unlimited stack size.
Although the stack size limit was defined system-wide (in /etc/security/limits.conf), for some reason the "apache/perl/fortran binary" chain was not aware of it (causing my binary to crash each time it was called).
On the contrary, when I manually restarted Apache at the shell prompt, the stack size limit was correctly passed (via .bashrc with 'ulimit -S -s unlimited').
As a workaround, I used the BSD::Resource module (http://metacpan.org/pod/BSD::Resource) to set the stack size directly in my Perl script, e.g. setrlimit(RLIMIT_STACK, $softlimit, $hardlimit);
Thus, this new stack size limit is now passed directly from my Perl script to my binary.
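For reference, a minimal sketch of that workaround (the binary path is a placeholder, and raising both limits to RLIM_INFINITY assumes the hard limit allows it for the web server user):
use strict;
use warnings;
use BSD::Resource;

# Raise the stack size limit of this process before launching the binary;
# the limit is inherited by the child started via system().
setrlimit(RLIMIT_STACK, RLIM_INFINITY, RLIM_INFINITY)
    or die "setrlimit failed: $!";

system('/path/to/fortran_binary') == 0
    or die "binary exited with status $?";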
I've run into similar problems before. Maybe one way to solve this is to put the binary on a 'delayed start', so that it starts after everything else on your system is running. One way to do this is to put an at job in your /etc/rc.local script, to start the binary in X minutes.
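As a rough sketch, assuming the at daemon (atd) is enabled and the CentOS 6 init scripts are in use, the last line of /etc/rc.local could be something like this (the two-minute delay is arbitrary):
# schedule a one-off httpd restart shortly after boot has finished
echo "service httpd restart" | at now + 2 minutes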

Initrd, Ramdisk, Initramfs, uclinux

I am working on porting uClinux to the ColdFire M5272C3 board. Right now I have the kernel running from RAM with romfs as my root filesystem.
I am not clear about a few terms: what they mean and when to use them.
Please explain them to me in the simplest possible manner:
Q1: What is initrd? Why do we need it?
Q2: What is a ramdisk? Why and where do we need it?
Q3: What is initramfs? Why and where do we use it?
Q4: What is ramfs? Why and where do we use it?
Also, please point me to documents/reference books for in-depth knowledge of these terms.
Thanks
Phogat
A ramdisk merely refers to an in-memory disk image. It is implemented using the ramfs VFS driver in the kernel. The contents of the ramdisk would be wiped on the next reboot or power-cycle.
I'll give you details about initrd and initramfs next.
In simple terms, both initrd and initramfs refer to an early-stage userspace root filesystem (aka rootfs) that lets you run a very minimal filesystem in memory.
The documentation at Documentation/filesystems/ramfs-rootfs-initramfs.txt in the Linux kernel source tree also gives a lengthy description of what these are.
What is initrd ?
One common case where there is the need for such an early-stage filesystem is to load driver modules for hard disk controllers. If the drivers were present on the hard drive, it becomes a chicken-and-egg problem. Having these drivers as part of this early-stage rootfs helps the kernel load the drivers for any detected hard disk controllers, before it can mount the actual root filesystem from the hard drive. Another solution to this problem would be to have all the driver modules built into the kernel, but you're going to increase the size of the kernel binary this way. This kind of filesystem image is commonly referred to as initrd. It is implemented using either ramfs or tmpfs. It is emulated using a loopback block device.
The bootloader loads the kernel image into a memory address, the initrd image into another memory address, and tells the kernel where to find the initrd, passes the boot arguments to the kernel, and passes control to the kernel to let it continue the boot process.
So how is it different from initramfs then ?
initramfs is an even earlier-stage filesystem compared to initrd, and it is built into the kernel (controlled by the kernel config, of course).
As far as I know, both initrd and initramfs are controlled by this single kernel config, but it could have been changed in the recent kernels.
config BLK_DEV_INITRD
I'm not going to go deep into how to build your own initramfs, but I can tell you it just uses the cpio format to store the files and can be configured using usr/Kconfig while building the kernel. Even if you do not specify your own initramfs image, but have turned on support for initramfs, the kernel automatically embeds a very simple initramfs containing /dev/console, /root and some other files/directories.
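If you do want to roll your own at some point, the usual recipe is to pack a staging directory with cpio in newc format and compress it; a sketch, assuming the tree you prepared lives in my-initramfs-root:
cd my-initramfs-root
# archive the whole tree in the newc format the kernel expects, then gzip it
find . -print0 | cpio --null -o -H newc | gzip -9 > ../initramfs.cpio.gz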
In addition there is also a newer tmpfs filesystem which is commonly used to implement in-memory filesystems. In fact newer kernels implement initrd using tmpfs instead of ramfs.