Off-heap not working when query parallelism is set - Ignite

We are trying the 1.9 query parallelism setting in the cache config, but it doesn't work with off-heap. If we comment out parallelism, everything works fine. Parallelism works fine with on-heap.
Are we missing anything?
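For illustration, a cache configuration along these lines reproduces the setup being described; the cache name, key/value types, off-heap size, and parallelism degree are placeholders, and the setter is assumed to be the CacheConfiguration.setQueryParallelism property the question refers to:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.CacheMemoryMode;
    import org.apache.ignite.configuration.CacheConfiguration;
    import org.apache.ignite.configuration.IgniteConfiguration;

    public class OffHeapParallelQueryExample {
        public static void main(String[] args) {
            // Illustrative cache config only; name, types, and sizes are placeholders.
            CacheConfiguration<Long, String> cacheCfg = new CacheConfiguration<>("exampleCache");
            cacheCfg.setIndexedTypes(Long.class, String.class);

            // Off-heap storage, as configured in the Ignite 1.x API.
            cacheCfg.setMemoryMode(CacheMemoryMode.OFFHEAP_TIERED);
            cacheCfg.setOffHeapMaxMemory(512L * 1024 * 1024); // 512 MB off-heap

            // The query-parallelism setting in question; commenting this line out
            // is what makes the off-heap cache behave again.
            cacheCfg.setQueryParallelism(4);

            IgniteConfiguration cfg = new IgniteConfiguration();
            cfg.setCacheConfiguration(cacheCfg);

            Ignite ignite = Ignition.start(cfg);
        }
    }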

There is a known bug [1] that has already been fixed and merged to the master branch. You can apply the fix to 1.9 and build Ignite from sources.
[1] https://issues.apache.org/jira/browse/IGNITE-4826

Related

Apache server cannot allocate memory for new process

I have an Apache server with 32 GB of RAM. When I start the server and run top to check resource usage, it shows the CPU at 95 percent. This isn't normal behaviour, and after a few minutes it raises:
apache cannot allocate memory fork unable to fork new process
I don't know how to solve the problem. Any tips?
I had the same problem. To fix it there are two options:
1- Move from micro instances to small instances; this was the change that solved the problem (micro instances on Amazon tend to have large CPU steal time).
2- Tune the MySQL database server configuration and the Apache configuration to use a lot less memory.
A tuning guide for a low-memory situation such as this one: http://www.narga.net/optimizing-apachephpmysql-low-memory-server/ (but don't use the suggestion of MyISAM tables - horrible...).
These two options will make the problem occur much less often. I am still looking for a better solution that closes processes that are done and kills the ones that hang around.

Hadoop release versions are confusing

I am trying to figure out the different versions of Hadoop, and I got confused after reading this page.
Download
1.2.X - current stable version, 1.2 release
2.2.X - current stable 2.x version
2.3.X - current 2.x version
0.23.X - similar to 2.X.X but missing NN HA.
Releases may be downloaded from Apache mirrors.
Question:
I think any release starting with 0.xx means it is an alpha version and should not be used in production. Is that the case?
What is the difference between 0.23.X and 2.3.X? The page says they are similar but that 0.23.X is missing NameNode high availability. Is there any correlation between 0.23 and 2.3? Is it that when they developed the code, the PMC group said "man! it is so immature it should start with 0, and since they are the same product, I will keep the digits the same"?
When I look at the source code of the new Hadoop, I see that the JobTracker class has turned into a dummy class. I am envisioning that the JobTracker and TaskTracker, i.e. MapReduce 1, will slowly fade away on the Hadoop roadmap; the interface for a MapReduce job might stay the same, but the second generation of Hadoop (YARN) will completely replace the idea of JobTracker and TaskTracker with the ResourceManager, etc.
Sorry that this question might be a bit disorganized, since I got really confused by the version numbers. I will revise the question after I have figured it out.
First of all: there's a major difference between Hadoop v1 and v2 (aka YARN). v1's JobTracker and TaskTracker are replaced by the new ResourceManager (plus per-node NodeManagers) for better scalability. That's why both will disappear later on in the development.
Second: 0.X versions are by no means a hint of alpha releases: OpenSSL was a 0.9 release for over ten years (en.wikipedia.org/wiki/OpenSSL#Major_version_releases) even though it was considered a de facto standard or reference implementation, and many Fortune 500 companies trusted it.
And that's true for Hadoop as well. The 0.23 line refers to Hadoop v1's architecture with v2 implementations (except High Availability, as the NameNode is still v1's). So 0.23 and 2.3 are about the same and continue to age in parallel. They named it 0.X because 1.X was already in use; they just didn't want 1.X to keep aging, in order to signal that 2.X is the way to go -- you would use 0.X only if you rely on 1.X's architecture but still want to receive minor improvements from the current development in 2.X.
The bottom part of http://wiki.apache.org/hadoop/Roadmap tries to explain this, but is a bit helter-skelter as well. The top part of http://hadoop.apache.org/releases.html does it a bit better.
Hope this was helpful...
From the Hadoop release history you can see that Hadoop 2.6.2 was released after 2.7.1.
Reasoning
2.6 to 2.6.2 is a MINOR API update and IS backward compatible.
2.6 to 2.7 is a MAJOR API update, i.e. it IS NOT backward compatible. Some APIs may now be obsolete.
Ref: Hadoop Roadmap

Pig script minimum execution time

I'm currently learning Pig and I'm executing my scripts inside the Hortonworks Sandbox. What has been bugging me from the very beginning is that the minimum execution time for a Pig script seems to be at least 30-40 seconds. Is that because I'm using the Hortonworks Sandbox, or is that normal for Pig scripts? Is there a way to reduce the execution time? This is really slowing my learning progress. If this execution time is normal, can you explain what is going on and why?
PS
I've allocated 2 GB of RAM for the Hortonworks virtual machine. And just to mention, I'm currently running only simple scripts on small data sets.
If you execute Pig in local mode (pig -x local) it'll run a lot faster, but it won't do MapReduce and won't access HDFS - it's good for learning though!
Yes, 30-40 seconds is absolutely normal for Pig, because it has a big overhead for compiling the job, launching JVMs, etc.
As stated in the other answer, you can try running in local mode. It usually takes me about 15 seconds for a simple job with input containing just a few lines of data. My Cloudera VM is allocated 4 GB of RAM, by the way.
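For what it's worth, the same local mode is available when Pig is embedded in a Java program; a small sketch follows (the input file, schema, and aliases are invented for illustration):

    import java.util.Iterator;

    import org.apache.pig.ExecType;
    import org.apache.pig.PigServer;
    import org.apache.pig.data.Tuple;

    public class LocalModePigExample {
        public static void main(String[] args) throws Exception {
            // ExecType.LOCAL runs the script in-process: no MapReduce job submission
            // and no HDFS access, so most of the job-launch overhead disappears.
            PigServer pig = new PigServer(ExecType.LOCAL);

            // 'people.txt' is a placeholder local file with comma-separated name,age rows.
            pig.registerQuery("people = LOAD 'people.txt' USING PigStorage(',') AS (name:chararray, age:int);");
            pig.registerQuery("adults = FILTER people BY age >= 18;");

            Iterator<Tuple> it = pig.openIterator("adults");
            while (it.hasNext())
                System.out.println(it.next());
        }
    }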

Speeding up the Dojo Build

We are running a build of our application using Dojo 1.9, and the build itself is taking an inordinate amount of time to complete - something along the lines of 10-15 minutes.
Our application is not huge by any means. Maybe 150K LOC. Nothing fancy. Furthermore, when running this build locally using Node, it takes less than a minute.
However, we run the build on a RHEL server with plenty of space and memory, using Rhino. In addition, the tasks are invoked through Ant.
We also use Shrinksafe as the compression mechanism, which could also be the problem. It seems like Shrinksafe is compressing the entire Dojo library (which is enormous) each time the build runs, which seems silly.
Is there anything we can do to speed this up? Or anything we're doing wrong?
Yes, that is inordinate. I have never seen a build take so long, even on an Atom CPU.
In addition to the prior suggestion to use Node.js and not Rhino (by far the biggest killer of build performance), if all of your code has been correctly bundled into layers, you can set optimize to an empty string (don't optimize) and layerOptimize to "closure" (Closure Compiler) in your build profile so that only the layers are run through the optimizer.
Other than that, you should make sure that there isn’t something wrong with the system you are running the build on. (Build files are on NAS with a slow link? Busted CPU fan forcing CPUs to underclock? Ancient CPU with only a single core? Insufficient/bad RAM? Someone else decided to install a TF2 server on it and didn’t tell you?)

NetBeans and GlassFish performance

I was wondering if anyone had any performance options that might work for me. I am using NetBeans 6.1 and GlassFish V2 on my work laptop, and the memory requirements are getting a little tiresome. I have 3 GB of RAM and I frequently have to kill everything and restart due to PermGen space errors.
I've played with the mem sizes as well but nothing seems to really help.
Is there a way for you to monitor GlassFish through JConsole? JConsole will show you how much PermGen space (as well as the other spaces) is being used. Using this information can help you tweak your startup parameters.
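If attaching JConsole is inconvenient, the same numbers are also available programmatically through the standard java.lang.management API; here is a quick sketch (memory pool names vary by JVM and collector):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;
    import java.lang.management.MemoryUsage;

    public class MemoryPoolCheck {
        public static void main(String[] args) {
            // Print the usage of every memory pool; on a pre-Java-8 HotSpot JVM
            // one of them is the PermGen pool that JConsole also displays.
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                MemoryUsage usage = pool.getUsage();
                System.out.printf("%-25s used=%,d max=%,d%n",
                        pool.getName(), usage.getUsed(), usage.getMax());
            }
        }
    }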
This page http://java.sun.com/javase/technologies/hotspot/vmoptions.jsp lists a few, and I know I've seen more, especially when it comes to setting PermGen sizes.
You might also want to look at how your webapp(s) are allocating things that go into PermGen space. Maybe the problem is there rather than in the NB/GF combo.
Finally, is it possible for you to upgrade to NB 6.7? I know it's difficult to change your app server for development, especially if you deploy to that version of the app server in production (I've experienced problems there too). But changing the IDE could help too.
I know this is not an "answer", but I hope it helps.