I have a devops directory containing Ansible's variables directory, playbooks, and an inventory directory.
The directory looks like this:
|group_vars
  - all.yml
  - development.yml
  - staging.yml
|inventory
  - staging
  - development
configure.yml
deploy.yml
configure.yml and deploy.yml contain tasks that are applied to either staging or development machines, using variables from group_vars.
Now, if I call the ansible-playbook command with the staging inventory, how will it know which variable file to use? No vars_files entry is added to configure.yml or deploy.yml.
By the way, I am using an example from the company I work at, and the example is working. I just want to know the magic that is happening: it uses the right variable file even though that file is not included in configure.yml or deploy.yml.
Ansible uses a few conventions to load vars files:
group_vars/[group]
host_vars/[host]
So if you have an inventory file that looks like this:
[staging]
some-host.name.com
Then these files will be included (they may also carry a .yml or .yaml extension):
/group_vars/all
/group_vars/staging
/host_vars/some-host.name.com
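For example, here is a minimal sketch of how that plays out with the layout from the question (the host name, the app_env variable, and the debug task are made up for illustration):

# inventory/staging
[staging]
web1.example.com

# group_vars/staging.yml
app_env: staging

# configure.yml
- hosts: staging
  tasks:
    - name: show which environment is being configured
      debug:
        msg: "Configuring {{ app_env }}"

Running ansible-playbook -i inventory/staging configure.yml makes every host in the staging group pick up app_env from group_vars/staging.yml (plus whatever is defined in group_vars/all.yml), with no vars_files needed in the playbook.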
I think this is the "magic" you are referring to.
You can find more on the subject here: http://docs.ansible.com/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable
And here: http://docs.ansible.com/playbooks_best_practices.html
Setting up a distributed test with JMeter, I ran into this problem.
First of all, I'm aware that setting the JMeter property server.rmi.ssl.disable=true is a workaround.
Still, I'd like to see whether it is possible to use this rmi_keystore.jks. The JMeter documentation
(https://jmeter.apache.org/usermanual/remote-test.html) is clear enough about setting up the environment, but it doesn't mention how to specify the path to the rmi_keystore.jks, or whether this has to be the rmi_keystore.jks on the worker or the one on the controller.
I noticed that if you run a test with your machine acting as both worker and controller (as in https://www.youtube.com/watch?v=Ok8Cqc0wipk), setting the absolute path to the rmi_keystore.jks works.
E.g. server.rmi.ssl.truststore.file=C:\path\to\the\rmi_keystore.jks and server.rmi.ssl.keystore.file=C:\path\to\the\rmi_keystore.jks.
But this doesn't work when the controller has a different path to the rmi_keystore.jks than the worker.
My question is: how can I set the server.rmi.ssl.truststore.file and server.rmi.ssl.keystore.file properties correctly to resolve the FileNotFoundException, given that the default values don't work?
Thank you, everyone.
You need to:
Generate the rmi_keystore.jks file on the master machine
Copy it to all the slaves
The default value of the property is just rmi_keystore.jks (a relative path), so if you drop the file into the "bin" folder of your JMeter installation on the master and on every slave, JMeter will find it and start using it.
The server.rmi.ssl.keystore.file property should be used if you want to customize the file name and/or location; if the location differs per machine, you can set the slave-specific path via that slave's user.properties file or pass it via the -J command-line argument.
If the location is common to all the slaves and you want to override it in a single shot, provide it via the -G command-line argument.
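As a rough sketch of those steps (the install paths and slave host names below are examples, not taken from the question):

# on the master: generate the keystore from JMeter's bin folder
cd /opt/jmeter/bin
./create-rmi-keystore.sh          # create-rmi-keystore.bat on Windows; answer the prompts

# copy it to the same relative location on every slave
scp rmi_keystore.jks user@slave1:/opt/jmeter/bin/
scp rmi_keystore.jks user@slave2:/opt/jmeter/bin/

# slave-specific location: set it in that slave's user.properties
#   server.rmi.ssl.keystore.file=/data/jmeter/rmi_keystore.jks
#   server.rmi.ssl.truststore.file=/data/jmeter/rmi_keystore.jks

# common custom location, overridden in one shot from the master:
jmeter -n -t plan.jmx -R slave1,slave2 -Gserver.rmi.ssl.keystore.file=/data/jmeter/rmi_keystore.jks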
More information:
Configuring JMeter
Full list of command-line options
Apache JMeter Properties Customization Guide
You can use create-rmi-keystore.bat to generate the rmi_keystore.jks. You will find it under the bin folder.
I have an issue with Informatica environment variables:
the variables are not getting displayed in Unix.
[root@******]# su infadm
bash-4.2$ echo $PMEXTERNALPROCDIR
bash-4.2$
I checked the variables section in the Admin Console and all the paths are defined correctly.
What could be the reason, and what should I do if I want to see the value?
You probably started the Integration Service as a user other than infadm. Su to the user who starts the Informatica service and check the variable value there. If you don't know who started the services, you could create a Command task in Workflow Manager to run the following command:
whoami | cat > /home/youruser/whoisrunninginformatica
Just make sure you open up the permissions on your home directory before executing it. If you don't care who is running the service, you could use the same strategy to cat the variable value itself.
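If you have shell access to the box, a quick way to check both the owning user and the value the running service actually sees (the process name and paths may differ per version, so treat this as a sketch) is:

# which user runs the Integration Service process
ps -ef | grep -i pmserver

# environment the running process was started with (Linux; replace <pid> with the PID from above)
tr '\0' '\n' < /proc/<pid>/environ | grep PMEXTERNALPROCDIR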
I strongly recommend using a dedicated user for the Informatica services, something like "ipcuser".
Switch to (or log in as) that user and create the Informatica home folder, and change its ownership to that user.
Once that is done, and assuming you are using bash, find or create the file .bash_profile.
Define all the variables in that file, working with them like this:
PATH=${PATH}:~/bin
export PATH
Once you have edited it, run source .bash_profile and/or log out and log back in.
Review the variables with env | grep PATH
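For instance, a sketch of such a .bash_profile for the service user (the install path below is made up; point it at your own Informatica home):

# ~/.bash_profile of the Informatica service user
INFA_HOME=/opt/informatica/server
export INFA_HOME

PMEXTERNALPROCDIR=${INFA_HOME}/bin/extproc
export PMEXTERNALPROCDIR

PATH=${PATH}:${INFA_HOME}/bin
export PATH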
In .ebextensions, I have a file (environmentvariables.config) that looks like this:
commands:
01_get_env_vars:
command: aws s3 cp s3://elasticbeanstalk-us-east-1-466049672149/ENVVAR.sh /home/ec2-user
02_export_vars:
command: source /home/ec2-user/ENVVAR.sh
The shell script is a series of simple export key=value commands.
The file is correctly placed on the server, but it seems like it isn't being called with source. If I manually log into the app and use source /home/ec2-user/ENVVAR.sh, it sets up all my environment variables, so I know the script works.
Is it possible to set up environment variables this way? I'd like to store my configuration in S3 and automate the setup so I don't need to commit any variables to source control (option_settings) or manually enter them into the console.
Answer:
Actively load the S3 variables in the Rails app, bypassing the environment-variable issue altogether:
put a JSON file in S3, download it to the server, and read the values into ENV from environment.rb.
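The likely reason the original approach does not stick is that each .ebextensions command runs in its own shell, so variables exported by source die with that shell and never reach the application process. A minimal sketch of the download half of this workaround, reusing the question's s3 cp pattern (the bucket, key, and target path are examples); environment.rb would then parse the JSON and copy each key into ENV:

# .ebextensions/environmentvariables.config
commands:
  01_get_env_json:
    command: aws s3 cp s3://my-config-bucket/app_env.json /opt/app_env.json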
A similar question has been asked before but this one is different.
I am rsyncing a single file to a remote server, and the destination directory does not exist. I would like the destination directory to be created if it is missing. I am using the :: syntax, which uses modules, and I could not find a similar case in the forums.
Here is the syntax; remote_dir2 does not exist and I want it to be created:
rsync -avz --password-file=<file> <source-file> remote-user@remote-server::remote_dir1/remote_dir2
Note: there is a module named remote-user in /etc/rsyncd.conf on the remote server, and the connection and everything else work, except that the source file ends up in remote_dir1 under the name remote_dir2.
Is there any solution different from what is mentioned below?
I do not want to open an SSH session to the remote server just to mkdir.
I do not want to use -R, --relative, because the directory structures and names on source and destination are very different.
I also know that there is a trick mentioned
here, but it does not work when you specify a module. There is no error or anything in the logs; apparently it just gets ignored.
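For what it's worth, one possible workaround that stays within the module syntax (not from the original thread, so treat it as an untested sketch) is to stage the missing directory level locally and let rsync create it, since a directory source without a trailing slash is copied as the directory itself (the module must not be read only):

# build a local staging tree containing the missing directory
mkdir -p /tmp/stage/remote_dir2
cp <source-file> /tmp/stage/remote_dir2/

# no trailing slash on the source: remote_dir2 itself is created inside remote_dir1
rsync -avz --password-file=<file> /tmp/stage/remote_dir2 remote-user@remote-server::remote_dir1/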
In FuseFabric we can add configuration files using the web console, via the Config Files tab: just write the name of the config file and, inside it, the properties, e.g. foo=foo.
Well, this is very simple, and my question is: how can I do this using the Fabric console, just by typing commands?
I've seen the fabric:profile and its options, and I can edit the properties, but only when there is an already existing PID.
Thank you for any answers!
You can use this command:
fabric:profile-edit --pid PID/Property=Value Profile [Version]
Otherwise, maintaining property files on the file system and using import to re-import the settings works fine also.
Use profile-display to see the correct PID.
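For example (the PID, property, profile name, and version below are invented for illustration):

fabric:profile-display my-app-profile
fabric:profile-edit --pid org.example.myapp/foo=bar my-app-profile 1.0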