Nested arguments for an Ansible module

I am developing an Ansible module. Is it possible to specify a set of arguments as required when one argument has a certain value?
For example, if my module has a 'state' argument that can be either 'present' or 'absent', is it possible to require an additional set of arguments like 'type', 'path' only when state=present?
module_args = dict(
    name=dict(type='str', required=True),
    type=dict(type='str', required=False),
    path=dict(type='str', required=False),
    state=dict(type='str', required=False, choices=["present", "absent"])
)
module = AnsibleModule(
    argument_spec=module_args,
    supports_check_mode=True
)
name = module.params["name"]
script_type = module.params["type"]
path = module.params["path"]
state = module.params["state"]

As far as I know there is no such ability in current Ansible 2.3.
There is only the required_together option for the AnsibleModule class, which defines parameters that must be supplied together (but with no condition on their values):
required_together = [['s3_key', 's3_bucket'],
                     ['vpc_subnet_ids', 'vpc_security_group_ids']]
So you should do manual checks for that.
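For example, a minimal sketch of such a manual check (parameter names taken from the question; the exact error wording is up to you):

if module.params["state"] == "present":
    # collect the conditionally-required parameters that were not supplied
    missing = [k for k in ("type", "path") if module.params[k] is None]
    if missing:
        module.fail_json(msg="state is present but these are missing: %s" % ", ".join(missing))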

Though there is no such concept as nested parameters in Ansible (AFAIK), there are other options that can accomplish the same intent:
required_together
required_one_of
required_if
With these declarations you can make associations between parameters -- that is, require that if one parameter is specified, another must be specified with it, or that if a parameter has a certain value (e.g. state=present), additional parameters must be supplied alongside it. A sketch follows.
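A hedged sketch of how these options can be passed to the AnsibleModule constructor, using the parameter names from the question (required_one_of takes the same list-of-lists shape as required_together):

module = AnsibleModule(
    argument_spec=module_args,
    # if either of these is supplied, the other must be supplied too
    required_together=[["type", "path"]],
    # if state == "present", both type and path become required
    required_if=[["state", "present", ["type", "path"]]],
)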
Joe Seeder's blogpost covers this in more detail.

Not sure if it's too late, but you can do exactly this with the following.
module_args = dict(
    name=dict(type='str', required=True),
    type=dict(type='str', required=False),
    path=dict(type='str', required=False),
    state=dict(type='str', required=False, choices=["present", "absent"])
)
required_if_args = [
    ["state", "present", ["type", "path"]]
]
module = AnsibleModule(
    argument_spec=module_args,
    required_if=required_if_args,
    supports_check_mode=True
)
This states that if state=present, type and path are required.
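If state=present is supplied without type or path, the module exits before your code runs; the failure message should look something like "state is present but all of the following are missing: type, path" (the exact wording may differ between Ansible versions).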


How to generate a subfactory based on a condition of a parent attribute

I have a factory like so:
class PayInFactory(factory.DjangoModelFactory):
    class Meta:
        model = PayIn

    @factory.lazy_attribute
    def card(self):
        if self.booking_payment and self.booking_payment.payment_type in [bkg_cts.PAYMENT_CARD, bkg_cts.PAYMENT_CARD_2X]:
            factory.SubFactory(
                CardFactory,
                user=self.user,
            )
I'm trying to generate the card field only if the booking_payment field has a payment_type value in [bkg_cts.PAYMENT_CARD, bkg_cts.PAYMENT_CARD_2X].
The code goes into that if statement, but the card field is empty after generation.
How can I do that properly?
Is SubFactory allowed in lazy_attribute?
I'd like to be able to modify the card field from PayInFactory if possible, like so:
>>> PayInFactory(card__user=some_user)
PostGeneration won't do, as I need this Card to be available before the call to create. I overrode _create, and it may use the card if available.
Thanks!
The solution lies in factory.Maybe:
class PayInFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = models.PayIn

    card = factory.Maybe(
        factory.LazyAttribute(
            lambda o: o.booking_payment and o.booking_payment.payment_type in ...
        ),
        factory.SubFactory(
            CardFactory,
            # Fetch 'user' one level up from CardFactory: PayInFactory
            user=factory.SelfAttribute('..user'),
        ),
    )
However, I haven't tested whether the extra params get actually passed to the CardFactory, nor what happens when CardFactory is not called — you'll have to check (and maybe open an issue on the project if you get an unexpected behaviour!).
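If the overrides do get forwarded (see the caveat above), the usage the question asked about would then be (an untested sketch; some_user comes from the question's own example):

>>> pay_in = PayInFactory(card__user=some_user)
>>> # card should be populated only when the Maybe condition evaluates truthy,
>>> # i.e. when booking_payment.payment_type is one of the card types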

How to give names to MobX flows

How do I give a name to my flows?
I currently see messages in the console (using dev tools) like:
action '<unnamed flow> - runid: 3 - init'
index.js:1 action '<unnamed flow> - runid: 3 - yield 0'
My code (in typescript):
fetchMetricData = flow(function* (this: MetricDataStore) {
    const responseJson: IMetrics[] = yield Http.post("/metrics");
    this.metrics = responseJson;
});
According to the following text from the MobX API Reference:
Tip: it is recommended to give the generator function a name, this is the name that will show up in dev tools and such
Unfortunately, this is the only way to set the name (I use LiveScript and can't assign a name to a function expression while defining it).
In your case, you can turn your unnamed function expression into a named one, e.g. flow(function* fetchMetricData(this: MetricDataStore) { ... }). If you ever face another situation where you can't, you could also use Object.defineProperty(myFunction, 'name', {value: 'myExplicitName'}).
You can find the culprit in the MobX source: mobx/flow.ts in the mobxjs/mobx repository.

Django Concat columns with integers and strings

I have run into an issue when attempting to add values from two different columns together in a query, namely that some of them contain numbers. This means that the built-in Concat does not work, as it requires strings or chars.
Considering that one can cast variables to other datatypes in SQL, I don't see why I wouldn't be able to do that in Django:
cast(name as varchar(100))
I would assume that one would do it as follows in Django using the Concat function in combination with Cast.
queryset.annotate(new_col=Concat('existing_text_col', Cast('existing_integer_col', TextField())).get())
The above obviously does not work, so does anyone know how to actually do this?
The use case, if anyone wonders, is sending Jenkins URLs, saved as fragments, as a whole. So one URL would be:
base_url: www.something.com/
url_fragment: name/
url_number: 123456
I ended up writing a serializer that inherits from the base serializer that contains the URL fragments and a lot of other things. In it I made a MethodField for my complete URL and defined a getter function that loads the different fragments and adds them together. I also redeclared the fragment fields as None.
The code inside the new serializer is:
complete = serpy.MethodField("get_complete")
serverUrl = serpy.Field(attr=None, call=False, required=False)
jobName = serpy.Field(attr=None, call=False, required=False)
buildNumber = serpy.Field(attr=None, call=False, required=False)

def get_complete(self, obj):
    return obj.server_url + obj.job_name + '/' + str(obj.build_number)
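For what it's worth, the Cast/Concat combination from the question can also be made to work directly in the ORM. An untested sketch, using the field names from the getter above:

from django.db.models import TextField, Value
from django.db.models.functions import Cast, Concat

queryset = queryset.annotate(
    complete=Concat(
        'server_url',
        'job_name',
        Value('/'),
        # integers must be cast to text before Concat will accept them
        Cast('build_number', output_field=TextField()),
        output_field=TextField(),
    )
)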

How to pass variable to groovy code in Intellij IDEA live templates groovy script?

I have a groovyScript in my IntelliJ IDEA live template, like this:
groovyScript("D:/test.groovy","", v1)
In my D:/test.groovy I have code like this:
if ( v1 == 'abc') {
    'abc'
}
Now I want to pass the v1 variable into test.groovy. Can anyone help me with how to do this?
For demonstration purposes I made a live template which prints a comment with the current class and current method.
This is how my live template is defined:
And here is how I edited the variableResolvedWithGroovyScript variable:
The Expression for the given variable has the following value:
groovyScript("return \"// Current Class:\" + _1 + \". Current Method:\"+ _2 ", className(),methodName())
As you can see, in this case _1 (which acts like a variable in the groovy script) takes the value of the first parameter, which is the class name, and _2 takes the value of the second parameter, which is the method name. If another parameter is needed, _3 will be used in the groovy script to reference it.
The arguments to the groovyScript macro are bound to script variables named _1, _2, etc. This is also described in the groovyScript help in the Edit Template Variables Dialog / Live Template Variables documentation.
I found a solution.
I needed to calculate a CRC32 of the qualified class name using live templates.
I used it like this:
groovyScript("
def crc = new java.util.zip.CRC32().with { update _1.bytes; value };
return Long.toHexString(crc);
", qualifiedClassName())
and the result is the CRC32 rendered as a hex string.
Based on the documentation, your variables are available as _1, _2, etc. Please note that variables are passed without dollar signs (so only v1 instead of $v1$)
So your test script should look like:
if ( _2 == 'abc') {
    'abc'
}

Can I pretty-print the DBIC_TRACE output in DBIx::Class?

Setting the DBIC_TRACE environment variable to true:
BEGIN { $ENV{DBIC_TRACE} = 1 }
generates very helpful output, especially showing the SQL query that is being executed, but the SQL query is all on one line.
Is there a way to push it through some kind of "SQL tidy" routine to format it better, perhaps breaking it up over multiple lines? Failing that, could anyone give me a nudge toward where in the code I'd need to hack to add such a hook? And what the best tool is to accept a badly formatted SQL query and output a nicely formatted one?
"Nice formatting" in this context simply means better than "all on one line". I'm not particularly fussed about specific styles of formatting queries.
Thanks!
As of DBIx::Class 0.08124 it's built in.
Just set $ENV{DBIC_TRACE_PROFILE} to console or console_monochrome.
From the documentation of DBIx::Class::Storage:
If DBIC_TRACE is set then trace information is produced (as when the debug method is set). ...
debug: Causes trace information to be emitted on the debugobj object (or STDERR if debugobj has not specifically been set).
debugobj: Sets or retrieves the object used for metric collection. Defaults to an instance of DBIx::Class::Storage::Statistics that is compatible with the original method of using a coderef as a callback. See the aforementioned Statistics class for more information.
In other words, you should set debugobj in that class to an object that subclasses DBIx::Class::Storage::Statistics. In your subclass, you can reformat the query the way you want it to be.
First, thanks for the pointers! Partial answer follows ....
What I've got so far ... first some scaffolding:
# Connect to our db through DBIx::Class
my $schema = My::Schema->connect('dbi:SQLite:/home/me/accounts.db');
# See also BEGIN { $ENV{DBIC_TRACE} = 1 }
$schema->storage->debug(1);
# Create an instance of our subclassed (see below)
# DBIx::Class::Storage::Statistics class
my $stats = My::DBIx::Class::Storage::Statistics->new();
# Set the debugobj object on our schema's storage
$schema->storage->debugobj($stats);
And the definition of My::DBIx::Class::Storage::Statistics being:
package My::DBIx::Class::Storage::Statistics;
use base qw<DBIx::Class::Storage::Statistics>;
use Data::Dumper qw<Dumper>;
use SQL::Statement;
use SQL::Parser;
sub query_start {
    my ($self, $sql_query, @params) = @_;
    print "The original sql query is\n$sql_query\n\n";
    my $parser = SQL::Parser->new();
    my $stmt = SQL::Statement->new($sql_query, $parser);
    #printf "%s\n", $stmt->command;
    print "The parameters for this query are:";
    print Dumper \@params;
}
Which solves the problem of how to hook in and get the SQL query for me to "pretty-ify".
Then I run a query:
my $rs = $schema->resultset('SomeTable')->search(
    {
        'email'           => $email,
        'others.some_col' => 1,
    },
    { join => 'others' }
);
$rs->count;
However SQL::Parser barfs on the SQL generated by DBIx::Class:
The original sql query is
SELECT COUNT( * ) FROM some_table me LEFT JOIN others other_table ON ( others.some_col_id = me.id ) WHERE ( others.some_col_id = ? AND email = ? )
SQL ERROR: Bad table or column name '(others' has chars not alphanumeric or underscore!
SQL ERROR: No equijoin condition in WHERE or ON clause
So ... is there a better parser than SQL::Parser for the job?