An Ansible module can produce custom facts by including them in its JSON output. Is it possible to access those facts from the same module the next time it is called during the same Ansible run? The idea is to use them to prefetch the current ("is") state of a certain, er... class of things. Something that's called a "resource type" in Puppet.
Or is there some other way of prefetching for Ansible modules?
No, unless you implement it yourself. Modules only have access to the vars that have been explicitly passed to them. This is because the module is executed on the remote host, while all facts are held by the runner on the control machine.
I had a similar problem myself before. Bruce P. suggested using fact caching and then querying the Redis cache from the module.
In your case it might be simpler, though, since you only need the value to be available if the module gets called a second time during the same run. You could simply store your data in a temporary file, as sketched below.
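A minimal sketch of that idea, assuming a module that wants to reuse facts it computed earlier in the run (the cache path and function names are made up for illustration; a real module would want to scope the file per run and clean it up afterwards):

import json
import os
import tempfile

# Hypothetical cache location; a real module should scope this per run/host.
CACHE_PATH = os.path.join(tempfile.gettempdir(), 'my_module_prefetch.json')

def load_prefetched_state():
    # Return whatever an earlier invocation in this run stored, or {}.
    if os.path.exists(CACHE_PATH):
        with open(CACHE_PATH) as f:
            return json.load(f)
    return {}

def save_prefetched_state(state):
    # Persist the prefetched state for later invocations of the module.
    with open(CACHE_PATH, 'w') as f:
        json.dump(state, f)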
I had another idea to solve this but did not yet have the chance to test it: one could create an action plugin which works together with the module. An action plugin is like a module, but it is executed locally, so it has access to all facts. You can execute a module from within an action plugin and pass it all the available facts. If I'm not mistaken, copy is a good example of this: there is a copy module and a copy action plugin, and the plugin calls the module.
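A rough sketch of such a plugin, written against the Ansible 2.x action-plugin API (older releases used a different runner interface, so treat this as an assumption; your_fact is the placeholder fact used below):

from ansible.plugins.action import ActionBase

class ActionModule(ActionBase):
    def run(self, tmp=None, task_vars=None):
        result = super(ActionModule, self).run(tmp, task_vars)
        module_args = self._task.args.copy()
        # The plugin runs locally and can see all facts; inject the one we
        # need into the module's arguments before executing it remotely.
        module_args['your_fact'] = task_vars.get('your_fact', False)
        result.update(self._execute_module(module_args=module_args,
                                           task_vars=task_vars))
        return result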
The easiest thing, though, would be to simply pass your facts to your module:
- your_module: your_fact={{ your_fact | default(False) }}
On the first run your_fact wouldn't exist, so False is passed in. your_module then creates your_fact on that first run, so on the second call it exists and is passed to the task.
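On the module side this just means returning the fact in the JSON output via ansible_facts. A minimal sketch using the standard AnsibleModule helper (your_module/your_fact are the placeholders from above):

from ansible.module_utils.basic import AnsibleModule

def main():
    module = AnsibleModule(argument_spec=dict(
        your_fact=dict(required=False, default=False),
    ))
    # On the first call module.params['your_fact'] is False; use that to
    # decide whether the expensive prefetch is still needed.
    # Returning ansible_facts makes your_fact available to later tasks.
    module.exit_json(changed=False, ansible_facts=dict(your_fact=True))

if __name__ == '__main__':
    main()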
I have a job that might fail with a specific configuration. What I want to do is let it run once and, if it fails, run it again with a slightly different configuration.
I found the attempt parameter, but I can't find a way to access it outside the resources section...
Do you know how to access it, or of any alternative?
The attempt counter is passed as an argument ('--attempt' <int>) to the jobscript (in my case a wrapper script in Python).
You can therefore access it with, for example:
import sys
attempt = int(sys.argv[sys.argv.index('--attempt') + 1])  # the value right after the '--attempt' flag
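If your wrapper already parses arguments, a slightly more robust variant (a sketch, assuming the '--attempt' flag from the answer above) is to let argparse pick out the flag and ignore the jobscript's other arguments:

import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--attempt', type=int, default=1)
# parse_known_args() ignores whatever other arguments the jobscript receives.
args, _unknown = parser.parse_known_args()
print(args.attempt)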
I want to use a property ('currentId') which has a certain start value. For each test case the value should be increased by 1. I could do that by adding an extra test step to each test case which increments the value, but that would mean a lot of copy and paste. The code for that would be (see reference):
def uniqueUserPortion = testRunner.testCase.testSuite.project.getPropertyValue("currentId")
// convert it to an Integer, and increment
def uniqueUserPortionInc = uniqueUserPortion.toInteger() + 1
// set the property back as string
testRunner.testCase.testSuite.project.setPropertyValue("currentId", uniqueUserPortionInc.toString())
To avoid that copy & paste I've added the code above to the Load Script of the project, but it doesn't work:
testSuite.testCases.each {
    *code above*
}
What can I do to use the code in one script/call for all test cases?
I could define the property as the start value plus the test case ID, but that would again mean a definition in each test case, since I cannot reference #TestCase#ID in a project-level property.
Issue with what you are trying
The project's Load Script is executed once, when you import the project into the SoapUI workspace. So this approach does not work.
As you rightly mentioned, you either need to have it as a separate step in each test case or add the same code as a Setup Script. Yes, either way it is copy and paste.
It is easy to achieve this using SoapUI NG (the Pro edition) with its Event feature.
Then your next question may be: how to do it in the open-source edition of SoapUI?
Here is a soapuiExtensions library which I wrote some time ago and which allows you to do the same in the open-source edition, without having to copy-paste for each test case.
All you need to do is put your Groovy script into a specific file called 'TestCaseBeforeRun.groovy'. That script is then executed before each test case runs.
For more details refer to the README:
This soapuiExtensions library allows users to have some additional functionality in SoapUI (free edition), similar to how SoapUI Pro lets you do something before or after certain events.
For example, a user may want to do something before or after running a test case, by implementing the appropriate Groovy script as required. To give a concrete example: a user may want to add credentials to the request step automatically; see the script samples/scripts/TestSuiteTestStepAdded.groovy.
How to use this library:
Set the SOAPUI_HOME environment variable.
Copy the lib/SoapUIExtListeners.jar file into the $SOAPUI_HOME/bin/ext directory.
Copy the samples/listeners/custom-listeners.xml file into the $SOAPUI_HOME/bin/listeners directory.
Copy the samples/scripts directory under $SOAPUI_HOME.
Then implement the appropriate Groovy script available under $SOAPUI_HOME/scripts. Refer to the Mappings file to see which Groovy script corresponds to which event.
Note: for Windows users, you may need to check %SOAPUI_HOME%\bin\soapui.bat, which actually overwrites SOAPUI_HOME; fix the soapui.bat script if required.
Uses JDK 7, SoapUI 4.5.1, and Groovy 1.8.9.
Dependency
log4j
UPDATE: this is related to the note above.
As mentioned in the note, soapui.bat overrides the SOAPUI_HOME environment variable on Windows and needs to be tweaked a bit. Alternatively, you may want to copy that Groovy file under %SOAPUI_HOME%\bin\scripts (this works without tweaking soapui.bat) and retry. If your machine is Linux, it should work once you copy the Groovy file under the $SOAPUI_HOME/scripts directory.
Okay, I'll try to explain as well as I can... Quite a particular case.
Tools: SSIS 2008
We have a control flow that now needs to be triggered by an event: the presence of one or multiple files. (1,2 or 3)
The variables used:
BO_FileLocation_1
BO_FileLocation_2
BO_FileLocation_3
BO_FileName_1
BO_FileName_2
BO_FileName_3
There can be one, two or three files, defined in the variables above. When they are filled in,
they should be processed. When they are empty (meaning there's just the one file), the process should ignore them and jump to the next (file watcher?) task.
For example:
BO_FileLocation_1 = "C:\"
BO_FileLocation_2 = NULL
BO_FileLocation_3 = NULL
BO_FileName_1 = "test.csv"
BO_FileName_2 = NULL
BO_FileName_3 = NULL
The report only needs one file.
I need a generic concept that checks the presence of these files; it may have to be more generic than my SSIS knowledge can handle right now. For example, it should cope with a 4th file appearing in the future. I was also thinking of working with a single script to handle all the logic.
Thanks in advance
A possibly irrelevant image:
If all you want is to trigger the Copy Source File task when one or more of the files is present, just use the OR constraint in your flow. The following image shows you how:
First connect all to the destination:
Then click one of the green arrows. This will make its properties window pop up. Select Logical OR instead of Logical AND:
If everything went well, you should now see the connections as dashed lines:
There are several possible solutions:
1. Create a sequence container and include all the file imports in it. Add int variables RowCountFile1, RowCountFile2, and RowCountFile3 and set their value to 0 (this is the default value when you create an int variable). Add a Row Count transformation to each of the data flows. Create a precedence constraint from the sequence container to the "Do something" task, and set the precedence constraint to Success and Expression. Set the expression to @[User::RowCountFile1] > 0 || @[User::RowCountFile2] > 0 || @[User::RowCountFile3] > 0. The advantage of this approach is that you can take an action as soon as the files are detected, you import all available files, and you only take an action after all the files have been imported. You could then schedule this SSIS package as a SQL Server Agent job step and run it as frequently as you want.
2. A variant on solution 1 is to use Foreach File enumerator containers inside the sequence container. This is useful if you don't know the exact name of the file and you expect to import more than one under some circumstances. For instance, if you get a file every few minutes with a timestamp in its file name and your process doesn't run for some reason, then you may have to process multiple files to get caught up and only then take an action.
3. You could use the File Watcher task as you outlined in your question. The only problem I have with the File Watcher task is that the package has to be in a constantly running state. This makes it hard to troubleshoot problems and performance. It can also introduce other problems; I remember having some issues with the File Watcher task years ago when it first came out. It may well be a totally stable task now, but I prefer other methods after having been burned previously. If you really want the package to run continuously instead of having it be called by a job, then you could always use a Script Task to check for the file, sleep the thread if it's not found, check again, etc. I'm sure that's what the File Watcher task does, but I would trust my own C# over the task. Power to anyone who has had better experiences than me with File Watcher...
4. Use PowerShell. If you just want to take an action when a file appears and you aren't importing the data, then a PowerShell script could do this just as well as an SSIS package. The drawbacks are that you have to learn some basic PowerShell, it may be hard to maintain in the future since PowerShell is probably not your bread-and-butter language, and you may have to rewrite the code as an SSIS package if you later want to import the data. You would probably call the PowerShell script from a SQL Server Agent job step, so scheduling can be handled pretty easily.
There are more options than what I listed, so let me know if you still want more suggestions.
I'm trying to configure a set of build configurations in TeamCity 6 and am trying to model a specific requirement in the cleanest possible manner enabled by TeamCity.
I have a set of acceptance tests (around 4-8 suites of tests grouped by the functional area of the system they pertain to) that I wish to run in parallel (I'll model them as build configurations so they can be distributed across a set of agents).
From my initial research, it seems that having an AcceptanceTests meta-build config that pulls in the set of individual acceptance test configs via snapshot dependencies should do the trick. Then all I have to do is say that my Commit build config should trigger AcceptanceTests and they'll all get pulled in. So, let's say I also have AcceptanceSuiteA, AcceptanceSuiteB and AcceptanceSuiteC.
So far, so good (I know I could also turn it around the other way and have the Commit config trigger AcceptanceSuiteA, AcceptanceSuiteB and AcceptanceSuiteC; the problem there is that I'd need to manually aggregate the results to determine the overall success of the acceptance tests as a whole).
The complicating bit is that while AcceptanceSuiteC just needs some Commit artifacts and can then live on its own, AcceptanceSuiteA and AcceptanceSuiteB need to:
DeploySite (let's say it takes 2 minutes and I can't afford to spin up a completely isolated one just for this run)
Run tests against the deployed site
The problem is that I need to be able to ensure that:
The website only gets configured once
The website does not get clobbered while the two suites are running
If I set up DeploySite as a build config and have AcceptanceSuiteA and AcceptanceSuiteB pull it in as a snapshot dependency, then AFAICT:
a subsequent or parallel run of AcceptanceSuiteB could trigger another DeploySite, which would clobber the deployment that AcceptanceSuiteA and/or AcceptanceSuiteB are in the middle of using.
While I can say "Limit the number of simultaneously running builds" to force only one to happen at a time, I need one at a time and none while the dependent pieces are still running.
Is there a way in TeamCity to model such a hierarchy?
EDIT: Ideas:
A crap solution is that DeploySite could set an 'in use' flag marker and then have the AcceptanceTests config clear that flag [after AcceptanceSuiteA and AcceptanceSuiteB have completed]. The problem then becomes one of having the next DeploySite down the pipeline wait until said gate has been opened again (doing a blocking wait within the build doesn't feel right; I want it to be flagged as 'not yet started' rather than looking like it's taking a long time to do something). However, this kind of "stuff a flag over here and have this bit check it" is exactly the sort of mutable state / flakiness smell I'm trying to get away from.
EDIT 2: if I could programmatically alter the agent configuration, I could set Agent Requirements to require InUse=false, then set the flag when a deploy starts and clear it after the tests have run.
It seems you should go look on the JetBrains DevNet and the YouTrack tracker first, and remember to use the magic word "clobber" in your search.
Then you install groovy-plug and use the StartBuildPrecondition facility it provides:
To use the feature, add system.locks.readLock. or system.locks.writeLock. property to the build configuration.
The build with writeLock will only start when there are no builds running with read or write locks of the same name.
The build with readLock will only start when there are no builds running with write lock of the same name.
Use those locks to capture the fact that the dependent configs 'read' and the DeploySite config 'writes' the shared item.
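As a sketch only (the lock name 'deploySite' is invented here, and see the caveat below about the exact property format), the configurations might declare:

# DeploySite build configuration (the writer)
system.locks.writeLock.deploySite

# AcceptanceSuiteA / AcceptanceSuiteB build configurations (the readers)
system.locks.readLock.deploySite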
(This is not a fully productised solution, hence the tracker item remains open.)
EDIT: And I still don't know whether the lock should go under Build Parameters | System Properties, and what the exact name format should be: is it locks.writeLock.MYLOCKNAME (i.e., showing up in the config with the reference syntax %system.locks.writeLock.MYLOCKNAME%)?
Other puzzlers are: how does one give read access to builds triggered by the completion of a writeLock task? Does the lock get dropped until the next one picks it up (which would allow another writer in), or is it necessary to have something queue up the parent and child dependency at the same time?
How can a task be reused in SSIS without copy/paste?
For example, I'd like to use the tasks I've defined in an event handler for one executable in another executable, but not with all executables in the package. So far, I haven't found any solutions other than writing a complete custom component, which seems like overkill. Any suggestions?
Have you considered using an event at the package level, and filtering to only fire when your particular condition requires it?
E.g. you could use the OnPostExecute event by putting a dummy task in your flow with a name that starts with a specific string like "RunMyTasks", and then checking System::SourceName to see if it starts with "RunMyTasks". If it does, branch to run your tasks (otherwise branch to handle the event as you normally would).
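As a sketch, the prefix check could be written as an SSIS expression (on a precedence constraint inside the event handler, say; the "RunMyTasks" name is the hypothetical one from above):

SUBSTRING(@[System::SourceName], 1, 10) == "RunMyTasks"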
You could do a similar thing using OnVariableValueChanged; this might be better (although you'd need to test it). Create a variable with RaiseChangedEvent=TRUE, create a script task/component to change the value of the variable, and finally put your task set into the event handler.
Check the scoping notes at the bottom of Jamie's post here.
If you can use third-party solutions, check the commercial CozyRoc SSIS+ library. It includes an enhanced Script Task Plus, which allows exporting a script to an external file and then linking to and reusing it in other packages.