Upsource .jvmoptions and JVM max heap size

I want to change the heap size made available to the various Upsource processes (frontend, Cassandra, etc.) when running it in a Docker container.
I'm trying to configure this using the documented manual approach, which is basically to change the content of the 'upsource-frontend.jvmoptions' file.
Other JVM options, such as -XX:ErrorFile, do produce the expected outcome when I change that file, but apparently changing the default value for -Xmx has no effect.
I'm running jetbrains/upsource:2018.2.1154
Am I missing something here?
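For reference, this is the kind of content I end up with in upsource-frontend.jvmoptions (a sketch; the exact path of the file inside the container and the values shown may differ from your setup):

-Xmx2048m
-XX:ErrorFile=/opt/upsource/logs/hs_err_pid%p.log

The -XX:ErrorFile option takes effect; the -Xmx value apparently does not.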

Related

RandomSortField not given same result Apache Solr 5.5

I have Apache Solr 5.5 working. On environments other than live, RandomSortField works fine, because no reindexing is happening and the index version is not changing; but on live, the results start changing even with the same random seed string.
For example:
http://localhost:8983/solr/select/?q=*:*&fl=name&sort=random_1234%20desc
Hitting this twice won't give me the same result on the live environment.
I have checked this: Solr: Random sort order after index version change
but I can't find the file it mentions on my Solr instance.
In my experience, this functionality always behaves unexpectedly with SolrCloud, even when overriding it with a custom implementation. I suspect this is because of differing timestamp values for the same documents across instances/shards/replicas.
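The linked question points at the root cause: the random order is derived from a hash that mixes the requested seed (the dynamic field name) with the index version, so any commit reshuffles the results. A rough illustrative sketch of that behaviour (not Solr's actual code):

// Illustrative only: a per-document sort key that folds in the index version
// will change after every commit, even when the seed field name is the same.
static int randomSortKey(String seedFieldName, long indexVersion, int docId) {
    int seed = seedFieldName.hashCode() + (int) indexVersion; // version-dependent
    return (seed * 31) ^ docId;
}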

Camel readlock strategy in cluster

We are trying to move to a cluster with Apache Camel. So far we have had it on one node and it worked well.
One node:
I have the readLock strategy set to 'changed', which keeps track of file changes with a camelLock file; only when the file has finished downloading will it be picked up for processing. But the readLock strategy 'changed' is discouraged in clustering; according to the Camel documentation, 'idempotent' is recommended. This is what happens when I am testing with a 5GB file:
Two nodes:
I have the readLock strategy set to 'idempotent', which distributes files to one of the nodes, but Camel starts processing the file even before it has finished downloading.
Is there a way to stop Camel from processing a file before it has finished downloading when the readLock strategy is 'idempotent'?
Even though both "readLock=changed" and "readLock=idempotent" cause the file-consumer to wait, they really address quite different use-cases: while "readLock=changed" guards against the file being incomplete (i.e. still being written by some producer/sender), "readLock=idempotent" guards against a file being read by two consumer routes. It's a bit confusing that they're addressed by the same option.
First, to address the "changed" scenario: can the sender be changed so that it writes the file to one directory and then, when it is done writing, copies it into the directory monitored by your file-consumer? If this is under your control, this is a good way of letting the OS handle things instead of trying to deal with it yourself. (This does not address the issue of multiple readers.) Otherwise, I suggest you revert to readLock=changed.
Next, on multiple readers: one workaround is to have this route run on only one node of your cluster. This might seem to defeat the purpose of clustering, but it is quite possible that you're starting up additional nodes to help with some other routes, and you're fine with this particular route running on just one node. It's a bit of a hack, because all nodes are no longer equal, but it is still an option to consider. Simplest would be to start one node with some environment property that flags it as the node that will handle file-reading... or some similar approach.
If you do want the route on multiple nodes, you can start by using the option "idempotent=true" but this is not good enough on its own. The option uses a repository, where it records what files have been read before, and the default repository is in-memory (i.e. each node has its own). So, the default implementation is helpful if the same file is actually being received more than once, and you wish to skip it. However, if you want it to work across nodes, you have to use a different repository.
One central repository could be a database. In that case you can use Camel's JDBC- or JPA-based repositories. Or you could use something like Hazelcast. See here for your options: http://camel.apache.org/idempotent-consumer.html
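A minimal sketch of that setup, assuming a shared repository registered in the registry under the name fileRepo (the bean name, directory, and surrounding options are illustrative):

// Sketch: a clustered file consumer whose read lock is backed by a shared
// idempotent repository, so only one node picks up each file.
import org.apache.camel.builder.RouteBuilder;

public class ClusteredFileRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("file:/data/inbox"
                + "?readLock=idempotent"
                + "&idempotentRepository=#fileRepo"   // e.g. a JDBC-backed repository
                + "&readLockRemoveOnCommit=true")     // remove the entry on commit (default keeps it)
            .to("direct:process");
    }
}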
You can use readLock=idempotent-changed.
idempotent-changed uses an idempotentRepository and changed as the combined read lock. This allows you to use read locks that support clustering, if the idempotent repository implementation supports that.
You can read more about these idempotent-changed options here: https://camel.apache.org/components/3.13.x/file-component.html
We also used readLock=changed in Docker clustered mode and it worked perfectly, since we also set readLockMinAge to a certain interval.
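For reference, a sketch of that combination (the directory and intervals are illustrative; this goes inside a RouteBuilder.configure() as in the earlier sketch):

// Sketch: 'changed' read lock plus a minimum file age, so a file is only
// picked up once it has stopped growing and is at least 5 minutes old.
from("file:/data/inbox"
        + "?readLock=changed"
        + "&readLockMinAge=5m"
        + "&readLockCheckInterval=10000"   // re-check size/age every 10 seconds
        + "&readLockTimeout=600000")       // give up on the lock after 10 minutes
    .to("direct:process");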

Best way to execute tests on Jenkins using large files

I have a very large tar file (>1GB) that needs to be checked out and is a precondition for executing any tests.
I cannot have a dedicated build server for my tests, since the tests are going to be executed on slave machines, which are disposable.
Checking out a file this large on every run is not optimal, since test execution time would increase just to satisfy the precondition. What is the best way of solving this problem?
I would dedicate a location on the slaves for that file.
Then in your tests, check if the file is in that location. If not, check it out and move it there. Since this location is outside your normal work area it won't get cleaned, and the file will stay there for the next test execution to use, and you won't need to check it out again.
Of course, if the file changes you have to invalidate that cache. A first option would be to do this manually; alternatively, you can create a hash of the file and keep that hash in the cache and in your version control. You would then compare only the hashes, and only if they differ would you check out the file again.
Of course, this requires that you have the ability to check out all the rest of your code without the big file. How to do that obviously depends on the version control system in use.
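A minimal sketch of that caching check, assuming a cache directory outside the workspace and an expected hash kept in version control (all paths and the checkout step are placeholders):

// Sketch: reuse a cached copy of the big archive if its recorded hash
// matches the expected one; otherwise fetch it again and update the cache.
import java.io.IOException;
import java.nio.file.*;

public class TestDataCache {
    static final Path CACHED = Paths.get("/var/cache/testdata/big.tar");
    static final Path STAMP  = Paths.get("/var/cache/testdata/big.tar.sha256");

    static Path ensureArchive(String expectedSha256) throws IOException {
        if (Files.exists(CACHED) && Files.exists(STAMP)
                && Files.readString(STAMP).trim().equals(expectedSha256)) {
            return CACHED;                      // cache hit: skip the checkout
        }
        Path fresh = checkOutFromVcs();         // expensive >1GB fetch
        Files.createDirectories(CACHED.getParent());
        Files.move(fresh, CACHED, StandardCopyOption.REPLACE_EXISTING);
        Files.writeString(STAMP, expectedSha256);
        return CACHED;
    }

    static Path checkOutFromVcs() throws IOException {
        // VCS-specific; placeholder for the real checkout of the tar file.
        throw new UnsupportedOperationException("depends on your VCS");
    }
}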

Tools used to update dynamic properties without even restarting the application/server

In my project I am trying to set things up so that I can update dynamic properties on the server/application without even restarting it.
We face the problem that whenever we have to update or change some properties that are dynamic in nature, we have to restart the server/application every time, and this results in unavailability of the server for that period.
I have already found one tool for this, Archaius with ZooKeeper: https://github.com/Netflix/archaius/
We are trying to do this for JBoss servers, where we use a WAR file to deploy on the server.
Please suggest any other methods, tools, or technologies that can be used for this.
Thanks in advance.
You could consider JRebel, which allows you to redeploy your app without any downtime; you can then use JRebel Remoting to redeploy from Eclipse to a remote server.
You may use ZooKeeper. You have to create a znode and add the properties to it. All your servers/applications should read from this znode and also put a watch on it for data changes.
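A minimal sketch of that pattern with the plain ZooKeeper client (the znode path, session timeout, and property parsing are assumptions):

// Sketch: read properties from a znode and re-read whenever its data changes.
// ZooKeeper watches are one-shot, so the watcher is re-registered on each read.
import org.apache.zookeeper.*;

public class DynamicConfig implements Watcher {
    private static final String ZNODE = "/config/app"; // assumed path
    private final ZooKeeper zk;

    public DynamicConfig(String connectString) throws Exception {
        zk = new ZooKeeper(connectString, 15000, this);
        reload();
    }

    private void reload() throws Exception {
        byte[] data = zk.getData(ZNODE, this, null); // re-registers the watch
        applyProperties(new String(data));           // parse and swap live config
    }

    @Override
    public void process(WatchedEvent event) {
        if (event.getType() == Event.EventType.NodeDataChanged) {
            try {
                reload();
            } catch (Exception e) {
                // log and retry in a real implementation
            }
        }
    }

    private void applyProperties(String raw) {
        // application-specific: parse 'raw' and update the running configuration
    }
}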
Alternately, you may use a database to store the properties along with their modification times. Whenever you change the value of a property, the corresponding modification time is updated. All your applications/servers keep pulling the delta at some interval (maybe every 2 or 5 seconds).
Or you may have the properties hosted on a web server, on NFS, or in some distributed cache. All your applications/servers keep reading them at some interval to detect changes.
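A minimal sketch of the database-polling variant (the table and column names are assumptions):

// Sketch: pull only the properties modified since the last poll and merge
// them into the live configuration. Schedule pollOnce() every few seconds.
import java.sql.*;
import java.util.Properties;

public class PollingConfig {
    private final Properties live = new Properties();
    private Timestamp lastSeen = new Timestamp(0);

    void pollOnce(Connection db) throws SQLException {
        try (PreparedStatement ps = db.prepareStatement(
                "SELECT name, value, modified_at FROM app_props WHERE modified_at > ?")) {
            ps.setTimestamp(1, lastSeen);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    live.setProperty(rs.getString("name"), rs.getString("value"));
                    Timestamp m = rs.getTimestamp("modified_at");
                    if (m.after(lastSeen)) {
                        lastSeen = m; // remember the newest change we have seen
                    }
                }
            }
        }
    }
}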
You can use Spring Cloud Zookeeper. I shared a little example here.

XDebug really slow

I am trying to get Xdebug working on my local WAMP installation (Uniform Server 8).
However, when I put
xdebug.remote_enable=1
in my php.ini, which is required for my IDE to use Xdebug, loading pages gets really slow, as in 5 seconds per page. The debugger works, though.
I haven't used Xdebug before, but I imagine it normally shouldn't take this long. It might have something to do with using the Symfony2 framework.
Does anyone have an idea what's causing this?
Maybe it's because this is exactly what it does!
Check the default storage location for Xdebug output (most of the time /tmp/xdebug/something), which on Windows will be somewhere different than on Unix/Linux systems.
Set these in your php.ini if you want the files placed/named somewhere else:
xdebug.profiler_output_dir
Type: string, Default value: /tmp
The directory where the profiler output will be written to. Make sure that the user PHP runs as has write permissions to that directory. This setting cannot be set in your script with ini_set().
xdebug.profiler_output_name
Type: string, Default value: cachegrind.out.%p
This setting determines the name of the file used to dump traces into. It specifies the format with format specifiers, very similar to sprintf() and strftime(). There are several format specifiers that can be used to format the file name.
Generating these files is taxing on your system, but they are what you need to profile your code.
Also, go read http://xdebug.org/docs before you use it again, so that you know exactly what you are trying to do.
As per another answer on SO, you need to set xdebug.remote_autostart = 0 in your php.ini.
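Putting these pieces together, a php.ini sketch (for Xdebug 2) that keeps the debugger available on demand without profiling every request; the paths and values are illustrative:

; debugger on demand, profiler only when explicitly triggered
xdebug.remote_enable = 1
xdebug.remote_autostart = 0
; do not profile every request; only when XDEBUG_PROFILE is passed
xdebug.profiler_enable = 0
xdebug.profiler_enable_trigger = 1
xdebug.profiler_output_dir = "C:/wamp/tmp"
xdebug.profiler_output_name = cachegrind.out.%p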