RavenDB's server (builds 2330 and 2380) seems to ignore the --config parameter:
Raven.Server.exe --config=another.config
The feature has been suggested, confirmed, and implemented. Are there any constraints on the location of the configuration file?
In particular, I cannot even change the port number unless I overwrite the existing configuration file, Raven.Server.exe.config, rather than specifying a new configuration file via the command-line option.
There appears to be some strangeness in the method we used to get the config.
This command-line argument doesn't work with 2.0 builds. This will be fixed in 2.5.
In the meantime, you can set the values explicitly using:
Raven.Server.exe --set=Raven/Port==9999
Note that the first equals sign appears once and the second appears twice.
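For reference, the config-file route mentioned in the question means editing the appSettings section of Raven.Server.exe.config; a minimal sketch (the port value is just an example):

<appSettings>
  <add key="Raven/Port" value="9999"/>
</appSettings>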
On an AzureML batch endpoint, I've recently been hitting the following error:
Unable to get image details : Environment version Autosave_(date)T(time)Z_******** provided in request doesn't match environ.
when I set up the batch endpoint with a YAML config:
environment: azureml:env-name:env-version
So, AzureML creates and builds the environment with the version I specify (env-version), which is just a number (in my case, 3).
Then, for some weird reason, AzureML creates an extra environment version called Autosave_(date)T(time)Z_********, which is not built but is based on the one just created, and that becomes the latest version of the environment.
In summary, instead of looking for the version I specified (env-name:3), AzureML seems to be looking for env-name:Autosave_(date)T(time)Z_******** and then throws the error message mentioned above.
I found that the problem was that, when creating an environment from a YAML specification file, one of my conda dependencies was cmake, which I needed in order to install another Python module. The Docker image itself was exactly the same as that of a previously created environment.
Removing the cmake dependency from the YAML file eliminated the issue, so the workaround is to install it using a Dockerfile instead.
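A minimal sketch of that Dockerfile approach (the base image here is just an example; use whichever AzureML base image you already build from):

FROM mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest
# install cmake at the image level instead of via conda, so the conda dependency
# list (and hence the environment definition AzureML hashes and caches) stays unchanged
RUN apt-get update && apt-get install -y --no-install-recommends cmake && rm -rf /var/lib/apt/lists/*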
The error message was very misleading to start with, but I got there in the end after understanding that AzureML reuses a cached image, based on a hash value computed from the environment definition, according to this.
For that reason, the automatically created Autosave Docker image references that same build, which only happens once, when the first job is submitted.
While setting up a distributed test with JMeter, I ran into this problem.
First of all, I'm aware that setting the JMeter property server.rmi.ssl.disable=true is a workaround.
Still, I'd like to see if it is possible to use the rmi_keystore.jks. The JMeter documentation
https://jmeter.apache.org/usermanual/remote-test.html is clear enough about setting up the environment, but it doesn't mention how to specify the path to the rmi_keystore.jks, or whether this has to be the rmi_keystore.jks on the worker or the one on the controller.
I noticed that if you run a test with your machine acting as both worker and controller (as in https://www.youtube.com/watch?v=Ok8Cqc0wipk), setting the absolute path to the rmi_keystore.jks works, e.g. server.rmi.ssl.truststore.file=C:\path\to\the\rmi_keystore.jks and server.rmi.ssl.keystore.file=C:\path\to\the\rmi_keystore.jks.
But this doesn't work when the controller has a different path to the rmi_keystore.jks than the worker.
My question is: how can I set the JMeter properties server.rmi.ssl.truststore.file and server.rmi.ssl.keystore.file to resolve the FileNotFoundException, given that the default values don't work?
Thank you, everyone.
You need to:
Generate the rmi_keystore.jks file on the master machine
Copy it to all the slaves
The default location (where JMeter looks for the file) is rmi_keystore.jks as a relative path; in other words, if you drop the file into the "bin" folder of your JMeter installation on the master and the slaves, JMeter will find it and start using it.
The server.rmi.ssl.keystore.file property should be used if you want to customize the file name and/or location. If the location differs between machines, you can either set a slave-specific location via the user.properties file or pass it via the -J command-line argument.
If the location is common to all the slaves and you want to override it in a single shot, provide it via the -G command-line argument, as sketched below.
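For example, following the options above (hostnames, paths, and the test plan name are hypothetical):

# on a slave: machine-specific keystore location (could equally go into user.properties)
jmeter-server -Jserver.rmi.ssl.keystore.file=/opt/jmeter/rmi_keystore.jks -Jserver.rmi.ssl.truststore.file=/opt/jmeter/rmi_keystore.jks

# on the master: a location common to all slaves, overridden in a single shot
jmeter -n -t plan.jmx -R slave1,slave2 -Gserver.rmi.ssl.keystore.file=/opt/jmeter/rmi_keystore.jks -Gserver.rmi.ssl.truststore.file=/opt/jmeter/rmi_keystore.jks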
More information:
Configuring JMeter
Full list of command-line options
Apache JMeter Properties Customization Guide
You can use create-rmi-keystore.bat to generate the rmi_keystore.jks. You will find it under the bin folder.
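For example (the installation path is hypothetical):

cd C:\path\to\apache-jmeter\bin
create-rmi-keystore.bat

The script prompts for the keystore details (per the JMeter documentation, answer rmi when asked for the first and last name) and writes rmi_keystore.jks into the current directory.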
I'm trying to test a Kafka stream in JMeter using the Pepper-Box config, but each time I try adding Java Request parameters it reverts to the default parameters without saving the ones I have added. I have tried the recommendation given here of adding an underscore (e.g. _ssl.enabled), but the params are still disappearing. Any recommendations? I'm using JMeter 5.3 and Pepper-Box 1.0.
I believe you need to put your SSL properties into the PepperBoxKafkaSampler directly; there are pre-populated placeholders which you can change, and those changes persist.
The same behaviour applies to Java Request Defaults.
It might be the case that your installation got corrupted somehow, or that there is a conflict with another JMeter plugin; check the jmeter.log file for any suspicious entries.
In the meantime, you may find the Apache Kafka - How to Load Test with JMeter article useful.
I had the same issue. I got around it by cloning the Pepper-Box repository https://github.com/GSLabDev/pepper-box and making changes to the PepperBoxKafkaSampler.java file, updating the setupTest() method with my props. You can also add the parameters by making use of the .addArgument() method (used in PepperBoxKafkaSampler.java) to make them available in JMeter.
Rebuild the repo using Maven (mvn clean install) and replace the old Pepper-Box jar in jmeter/lib/ext with your newly built jar.
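A rough sketch of those steps (the jar name pattern and the JMeter path are assumptions; adjust them to your build output and installation):

git clone https://github.com/GSLabDev/pepper-box
cd pepper-box
# edit PepperBoxKafkaSampler.java: add your SSL props in setupTest(),
# or expose them with .addArgument() so they appear as parameters in JMeter
mvn clean install
cp target/pepperbox-*.jar "$JMETER_HOME/lib/ext/"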
I started seeing this error in Flyte:
No configuration set for [aws] s3_shard_formatter. This is a required configuration.
What does it mean? AFAIK we set the S3_SHARD_FORMATTER env variable in the image and also when registering the workflow.
It means the configuration object is not set. There are multiple ways to set it.
You can add it to the config file like so:
[aws]
s3_shard_formatter=s3://bucket-name/{}/
s3_shard_string_length=2
You can set the environment variable FLYTE_AWS_S3_SHARD_FORMATTER to the value shown in the config example above (or whatever your bucket name/path is).
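For instance (the bucket name/path is a placeholder, and the second variable name is inferred from the same section/key naming pattern):

export FLYTE_AWS_S3_SHARD_FORMATTER="s3://bucket-name/{}/"
export FLYTE_AWS_S3_SHARD_STRING_LENGTH=2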
However, usually when you see this error, what's actually happening is that the configuration option for where to look for the configuration file itself is not being set correctly.
If you can get yourself into a Python REPL, take a look at the following.
from flytekit.configuration.internal import CONFIGURATION_PATH
CONFIGURATION_PATH.get()
That path should be a /full/path/from/root. cat it too just to check that it's what you expect.
If that config option returns an empty string, then your registration step must be in error. Confirm which file is being used during registration.
I am installing PostgreSQL on my Debian server using apt-get. The postgresql.conf is located here:
/etc/postgresql/8.4/main/postgresql.conf
Is there a way to change where PostgreSQL looks for this config without having to build PostgreSQL from source?
You can specify the location of the .conf file when you start PostgreSQL.
From the manual:
If you wish, you can specify the configuration file names and locations individually using the parameters config_file, hba_file and/or ident_file. config_file can only be specified on the postgres command line
Where config_file refers to the location of the postgresql.conf file.
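For example, a sketch of starting the server this way on Debian (the data directory and config path below are placeholders for an 8.4 cluster):

/usr/lib/postgresql/8.4/bin/postgres -D /var/lib/postgresql/8.4/main -c config_file=/srv/pgconf/postgresql.conf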
Have a look at the do_ctl_all() function in /usr/share/postgresql-common/init.d-functions and see how it tries to locate the PostgreSQL instance to start at boot:
for c in /etc/postgresql/"$2"/*; do
    [ -e "$c/postgresql.conf" ] || continue
    name=$(basename "$c")
    # evaluate start.conf
    if [ -e "$c/start.conf" ]; then
    ....
This code shows that, although the path to postgresql.conf is not hardcoded (the version number and cluster name are variables), the way it is built by concatenating the different parts is hardcoded.
You may still symlink postgresql.conf manually to somewhere else, though I'm not sure how an automatic upgrade of the package would cope with that.
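A minimal sketch of that symlink approach (the new location is just an example):

mv /etc/postgresql/8.4/main/postgresql.conf /srv/pgconf/postgresql.conf
ln -s /srv/pgconf/postgresql.conf /etc/postgresql/8.4/main/postgresql.conf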