I have a lot of Bamboo variables defined, because I have a system with a lot of legacy and config in places where it does not belong. Getting rid of all this will take a bit longer on the roadmap, so I need to find a way to auto-replace all these values.
To give you an idea of the numbers: there are 8 customer config files with about 100 variables each. Yes, some maniac added all of those in Bamboo because, as you might have guessed, most of them vary per environment.
At the moment I want to automate the deployment process, and everything is going fine except that I need to replace 100 variables, and I don't want to maintain them in my script itself all the time.
I am looking for a way to retrieve all the variables in an array, so I can just iterate over the keys and replace them in the config files.
echo "${bamboo.application.myvalue}" substitutes the value as expected. The only problem is: how can I get all the keys under bamboo.*?
I tried the following commands, all without success:
printenv
env
declare
How can I retrieve a list of all those variables in an inline script in Bamboo?
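For illustration, this is roughly the loop I'd like to end up with. A sketch, assuming Bamboo exposes its variables to script tasks as environment variables named bamboo_<key> with dots replaced by underscores (the prefix and the name mangling are assumptions, and the file names are placeholders):

# Requires bash as the script interpreter (process substitution).
# List everything Bamboo handed us as bamboo_* environment variables
# and substitute the matching ${key} placeholders in each config file.
for config in customer1.conf customer2.conf; do   # file names illustrative
    while IFS='=' read -r key value; do
        token="${key#bamboo_}"   # bamboo_application_myvalue
        token="${token//_/.}"    # -> application.myvalue
        # Naive replacement: values containing sed metacharacters
        # (|, &, \) would need escaping first.
        sed -i "s|\${${token}}|${value}|g" "$config"
    done < <(printenv | grep '^bamboo_')
done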
Thanks a lot!
I think it is not possible to change the value of the variables on the fly. Instead, you can use the "Inject Bamboo variables" task in order to change a variable's value.
This task reads a file to create the variables. So all you have to do is create this file with the values you need, and then use these variables.
E.g., creating the file from a PowerShell script:
# Write a properties file (key=value) for the Inject Bamboo variables
# task to read. UTF-8 without a BOM, so the first key parses cleanly.
$path = 'bambooVariaveis.properties'
$connectionstringX = 'connectionstring="Data Source=XXXX;"'
$Utf8NoBomEncoding = New-Object System.Text.UTF8Encoding($False)
[System.IO.File]::WriteAllLines($path, $connectionstringX, $Utf8NoBomEncoding)
E.g., the Inject Bamboo variables task configuration: read bambooVariaveis.properties and use inject as the namespace.
Using it (in a subsequent script task):
echo ${bamboo.inject.connectionstring}
I want to create a parameter that contains a list of strings (a list of hub codes). This list of strings is created by reading an external CSV file (the list could contain different codes depending on the hub codes in the CSV file).
What I want is an easy, automatic way to perform batch runs, one per hub code in the list.
So the question is:
1) How do I add and set a new parameter directly from code (during initialization, when reading the CSV) instead of from the GUI parameter panel?
2) How do I avoid manually configuring the hub list in the batch run configuration?
Something like this should work for adding the parameters in your ContextBuilder:
Parameters params = RunEnvironment.getInstance().getParameters();
((DefaultParameters)params).addParameter("foo", "Big Foo", Integer.class, 3, false);
You would read the CSV file to get the parameter name and value.
I'm not sure I completely understand the batch run configuration question, but each batch run has a run number associated with it:
RunState.getInstance().getRunInfo().getRunNumber()
If you can associate line numbers in your CSV parameter file with run numbers (e.g. run number 1 uses line 1, and so on), then each batch run would use a different parameter line, as in the sketch below.
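Putting the two together, a minimal sketch (the CSV file name and parameter names are illustrative; import paths are from Repast Simphony as I recall them):

// Pick the CSV line matching this batch run's number and register
// its value as a runtime parameter.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

import repast.simphony.engine.environment.RunEnvironment;
import repast.simphony.engine.environment.RunState;
import repast.simphony.parameter.DefaultParameters;
import repast.simphony.parameter.Parameters;

public class HubCodeSetup {
    public static void addHubCodeParameter() {
        int runNumber = RunState.getInstance().getRunInfo().getRunNumber();
        try {
            List<String> hubCodes = Files.readAllLines(Paths.get("hubCodes.csv"));
            String hubCode = hubCodes.get(runNumber - 1); // run 1 -> line 1
            Parameters params = RunEnvironment.getInstance().getParameters();
            ((DefaultParameters) params).addParameter(
                "hubCode", "Hub Code", String.class, hubCode, false);
        } catch (IOException e) {
            throw new RuntimeException("Could not read hub code CSV", e);
        }
    }
}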
I need to set variables at root scope in one job, to be used in a different job. The first job has a JavaScript job entry with the statements:
parent_job.setVariable("customers_full_path", "C:\\customers22.csv", "r");
true;
But the compilation fails with:
Couldn't compile javascript:
org.mozilla.javascript.EvaluatorException: Can't find method
org.pentaho.di.job.Job.setVariable(string,string,string). (#2)
How do I set a variable at root level in a JavaScript job entry?
Sorry for the passive-aggressive tone, but:
I don't know if you are new to Pentaho, but the most common mistake for new users with previous programming knowledge is being sort of 'addicted' to known methods; you are using JavaScript for functionality that is built into the tool. Both transformations (KTR) and jobs (KJB) have a similar step; you can manipulate this better in a KTR.
JavaScript steps slow the flow down considerably, so try to stay away from them as much as possible.
EDIT:
Reading this article, it seems the only thing you're doing wrong is the actual syntax of the command.
Correct usage:
parent_job.setVariable("name_of_variable", "desired value");
The command you described has 3 parameters when it should have 2 (the variable name, then its value). If you have more than one variable to set, call the command once per variable. Try it out and see if it works.
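Applied to the snippet from the question, that would be (a sketch, assuming the two-argument name/value signature; whether the variable is visible to the other job still depends on how the jobs are nested):

// JavaScript job entry: two arguments only - name, then value.
parent_job.setVariable("customers_full_path", "C:\\customers22.csv");
true;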
I am coding a custom module that is executed inside a pillar (to set a pillar variable), but I need it to retrieve an external parameter.
The idea is to retrieve a parameter from the master server. For example, if I execute
salt 'myminion' state.highstate
the custom module will be called and it should retrieve a parameter to generate the pillar.
I was looking into options like:
Using environment variables: it doesn't work, as the execution module does not seem to have access to the shell environment of the salt command.
Using command line parameters: I don't know if this is even possible, as I couldn't find any documentation.
Using an additional pillar on the command line: it doesn't work, as the execution module runs during pillar evaluation, so it does not have access to __pillar__ or __salt__['pillar.get'] (both are empty).
Reading from stdin: does not work from a custom module.
Using a file to read the info: I didn't even try this, because it is not an option for me for security reasons. I don't want the information stored.
Any ideas if or how this is possible?
Thanks a lot!
By:
a custom module that is executed inside a pillar (to set a pillar variable)
do you mean an external pillar?
If so, passing it parameters is covered in that document:
You can pass a single argument, a list of arguments or a dictionary of arguments to your pillar:
ext_pillar:
  - example_a: some argument
  - example_b:
    - argumentA
    - argumentB
  - example_c:
      keyA: valueA
      keyB: valueB
External pillars merge their data into the pillar dictionary, and are "custom modules", so I think that would fit your case.
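A minimal sketch of what such a module could look like (the file name and returned key are illustrative; the ext_pillar entry point and its signature are from the Salt external pillar docs):

# example_a.py, dropped into the master's extension_modules/pillar dir.
def ext_pillar(minion_id, pillar, *args, **kwargs):
    # args/kwargs receive whatever is configured under ext_pillar in
    # the master config; minion_id is always passed in.
    # The returned dict is merged into the minion's pillar data.
    return {'injected_param': list(args)}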
If that's not what you're trying to do, can you update the question? Where is this parameter coming from? Is it different depending on the minion (minion_id is always passed to an external pillar)?
(edit) Adding a couple of links about safely storing secrets:
using vault
dotgpg
blackbox
Need some help, because I can't find any solution for my problems with DBD::Oracle.
So first, this is the current situation:
We are running Apache2 with mod_perl 2.0.4 at our company
Apache web server was set up with a startup script which is setting some environment variables (LD_LIBRARY_PATH, ORACLE_HOME, NLS_LANG)
In httpd.conf there are also environment variables for LD_LIBRARY_PATH and ORACLE_HOME (via SetEnv)
We are generally using the perl module DBI with driver DBD::Oracle to connect to our main database
Before we create a new instance of DBI, we also set some Perl env variables via %ENV: ORACLE_HOME and NLS_LANG.
So far, this works fine. But now we are extending our system and need to connect to a remote database. Again, we are using DBI and DBD::Oracle. But now there are some new conditions:
New connection must run in parallel to the existing one
TNSNAMES.ORA for the new connection is placed at a different location (not at $ORACLE_HOME.'/network/admin')
New database contents are provided by stored procedures, which we are fetching with DBD::Oracle and cursors (like explained here: https://metacpan.org/pod/DBD::Oracle#Binding-Cursors)
The stored procedures are returning object types and collection types, containing attributes of oracle type DATE
To get these dates in a readable format, we set a new env variable $ENV{NLS_DATE_FORMAT}
To ensure the date format, we additionally alter the session via alter session set nls_date_format ... (the statement is shown below)
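The session statement has this shape (the concrete format mask is elided above, so the one here is only illustrative):

alter session set nls_date_format = 'YYYY-MM-DD HH24:MI:SS'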
Okay, this works fine, too. But only if we make a new connection from the console. The new TNS location is found by the script, the connection is established, and fetching data from the procedures via cursor works as well. All DATE types are formatted as specified.
Now, if we try to make this connection in the Apache environment, it fails. First, the data source name cannot be resolved by DBI/DBD::Oracle. I think this is because our new TNSNAMES.ORA file, or rather its location (published via $ENV{TNS_ADMIN}), is not found by DBI/DBD::Oracle in the Apache context. But I don't know why.
The second problem (if I create a dirty workaround for the first one) is that the date format published by $ENV{NLS_DATE_FORMAT} only works on the first level of our cursor select.
BEGIN OPEN :cursor FOR SELECT * FROM TABLE(stored_procedure); END;
The example above returns collection types of objects which contain date attributes. In the Apache context, the format published by NLS_DATE_FORMAT is not recognized. If I use a simple form of the example like this
BEGIN OPEN :cursor FOR SELECT SYSDATE FROM TABLE(stored_procedure); END;
the result (a single date field) is formatted well. So I think the nested structures are not formatted because $ENV{NLS_DATE_FORMAT} only takes effect in the console context, not in the Apache context.
So there must be a problem with the Perl environment variables (%ENV) under Apache and mod_perl. Maybe a mod_perl problem?
I am at my wits' end. Maybe someone in the whole wide world has a solution ... and excuse my English :-) If you need further explanation, I will try to describe it more precisely.
If your problem is that changes to %ENV made while processing a request don't seem to be honoured, this is because mod_perl assumes you might be running multiple threads: it doesn't actually change the process environment when you change %ENV, so external libraries (like the Oracle client) and child processes don't see the change.
You can work around it by using the prefork MPM, so there aren't any threading issues, and then making changes to the environment with Env::C instead of the %ENV hash.
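A minimal sketch (the path and date mask are illustrative):

use strict;
use warnings;
use Env::C;

# Set the values at the C level, where the Oracle client library reads
# them; plain %ENV assignments under mod_perl only update the Perl copy.
Env::C::setenv('TNS_ADMIN',       '/path/to/new/tns/dir',  1);
Env::C::setenv('NLS_DATE_FORMAT', 'YYYY-MM-DD HH24:MI:SS', 1);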
I have the following test suite structure in mind:
Test Suite 01 has one test case (TC01) inside.
Test Suite 02 has one test case (TC02) inside.
A variable file is available and imported by both test suites.
The variable file has one list @{List} with several values.
In TC01, I output the content of @{List}.
In TC02, I first remove index 0 from ${List} and set it as a new variable with the same name: Remove From List ${List} 0, then ${List}= Set Variable ${List} and Set Global Variable ${List}.
Then I output the new ${List}.
--> Everything works correctly up to this point.
After TC02 finishes, I make RF perform TC01 again, and this time I think it should use the new ${List} value, but it doesn't, because the variable file has higher priority.
How can I make TC01 use the new global variable ${List} the second time?
Is that possible?
Thank you very much in advance.
Well, I finally solved this problem.
Before the reboot, save the needed variables and values into a SQLite DB, and fetch them after the reboot.
What you are probably looking for is 'Set Suite Variable'.
See: http://robotframework.googlecode.com/svn/tags/robotframework-2.1/doc/libraries/BuiltIn.html#Set Suite Variable
or even 'Set Global Variable'.
See: http://robotframework.googlecode.com/svn/tags/robotframework-2.1/doc/libraries/BuiltIn.html#Set Global Variable
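For example, in TC02 (a minimal sketch; Remove From List comes from the Collections library, and the variable file name is illustrative):

*** Settings ***
Library      Collections
Variables    varfile.py

*** Test Cases ***
TC02
    # Drop the first element, then promote the modified list so every
    # later test (including a re-run of TC01) sees the new value.
    Remove From List       ${List}    0
    Set Global Variable    ${List}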