Grouping/inheriting properties for tasks in Gradle - properties

Is there a way to reuse property groups in Gradle?
Something that would look like:
def propGroup = [
    options.fork = true
    options.forkOptions.executable = ...
]
task compileThis(type: JavaCompile) {
    options.fork = propGroup.options.fork
    options.forkOptions.setExecutable(propGroup.options.forkOptions.executable)
    destinationDir = file(xxx)
}
task compileThat(type: JavaCompile) {
    options.fork = propGroup.options.fork
    options.forkOptions.setExecutable(propGroup.options.forkOptions.executable)
    destinationDir = file(yyy)
}
In Java this would be handled by inheritance, but in Gradle you cannot inherit one task from another.

It will work if propGroup is defined as a map:
def propGroup = [
    options: [
        fork: true,
        forkOptions: [
            executable: true
        ]
    ]
]
The executable can then be referred to as:
propGroup.options.forkOptions.executable
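A sketch of how such a map could be applied to several compile tasks at once (the executable path here is a hypothetical placeholder):

```groovy
// Shared settings kept in a plain Groovy map
def propGroup = [
    options: [
        fork: true,
        forkOptions: [
            executable: '/usr/lib/jvm/java-11/bin/javac' // hypothetical path
        ]
    ]
]

// Apply the shared settings to every JavaCompile task in one place,
// instead of repeating them in each task definition
tasks.withType(JavaCompile).configureEach {
    options.fork = propGroup.options.fork
    options.forkOptions.executable = propGroup.options.forkOptions.executable
}
```

Individual tasks then only need to set what differs, such as `destinationDir`.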

Related

publish two separate result directories from nextflow process

I have a nextflow process that outputs a result folder and a log as below:
process test {
    label "x"

    input:
    path(somefile)

    output:
    path "test_results"
    path "test_log.log"
    path "something_for_next_process", emit: f

    shell:
    '''
    myshellcommand
    '''
}
And a config file to publish the results like below.
process {
    withLabel: x {
        publishDir = [
            path: { "$params.outdir/" },
            pattern: "*_results",
            mode: 'copy',
            saveAs: { filename -> "${filename.split('_')[0]}/all_results/${filename.split('_')[1]}" }
        ]
    }
    withLabel: x {
        publishDir = [
            path: { "$params.outdir/" },
            pattern: "*.log",
            mode: 'copy',
            saveAs: { filename -> "${filename.split('_')[0]}/logs/${filename}" }
        ]
    }
}
I tried multiple combinations; however, I can't get a label to publish its desired contents to two different folders. It always takes whichever comes last in the publish config (in this case, the logs). I know I can put the publishDir options on the process in the workflow, and that also works, but I would like to do it through the config file. Is there a way I can get this to work?
You can specify the publishDir directive more than once in your nextflow.config by supplying a list of maps, for example:
process {
    withLabel: 'x' {
        publishDir = [
            [
                path: { "$params.outdir/" },
                pattern: "*_results",
                mode: 'copy',
                saveAs: { fn -> "${fn.split('_')[0]}/all_results/${fn.split('_')[1]}" }
            ],
            [
                path: { "$params.outdir/" },
                pattern: "*.log",
                mode: 'copy',
                saveAs: { fn -> "${fn.split('_')[0]}/logs/${fn}" }
            ]
        ]
    }
}
If you don't need the withName or withLabel process selectors, the publishDir directive can simply be specified multiple times inside your process definition, for example:
process test {
    publishDir(
        path: { "$params.outdir/" },
        pattern: "*_results",
        mode: 'copy',
        saveAs: { fn -> "${fn.split('_')[0]}/all_results/${fn.split('_')[1]}" }
    )
    publishDir(
        path: { "$params.outdir/" },
        pattern: "*.log",
        mode: 'copy',
        saveAs: { fn -> "${fn.split('_')[0]}/logs/${fn}" }
    )

    ...
}
You can have multiple publishDir directives for the same process; there's no need to repeat withLabel. If it were in the process block, it would look like the snippet below, for example.
process foo {
    publishDir "$params.outdir/$sampleId/counts", pattern: "*_counts.txt"
    publishDir "$params.outdir/$sampleId/outlooks", pattern: '*_outlook.txt'
    publishDir "$params.outdir/$sampleId/", pattern: '*.fq'

    input:
    ...
}
Check an example here.

Pass list of objects (list of maps) variable to terraform via TF_VAR

What is the correct way to pass a variable with type "list of objects" to terraform via an environment TF_VAR_ variable?
If I define the variable in variables.tf or in .tfvars files as follows,
containers = [
  {
    "container_access_type": "private",
    "metadata": {},
    "name": "20220909-001"
  }
]

variable "containers" {
  description = "containers"
  default = [
    {
      "container_access_type": "private",
      "metadata": {},
      "name": "20220909-001"
    }
  ]
}
everything works, and in the terraform console the variable is shown as:
> var.containers
[
  {
    "container_access_type" = "private"
    "metadata" = {}
    "name" = "20220909-001"
  },
]
but if I declare the environment variable
export TF_VAR_containers='[{"container_access_type":"private","metadata":{},"name":"20220912-001"},]'
I get the error
The given value is not suitable for child module variable "containers" defined at terraform/modules/stco/storageContainer/variables.tf:1,1-22: list of object required.
and the variable is shown as
> var.containers
"[{\"container_access_type\":\"private\",\"metadata\":{},\"name\":\"20220912-001\"},]"
(the comma at the end makes no difference, I still get the error).
What is the proper way to pass such variable?
Thanks @jordanm, I needed to add the type of the variable:
variable "containers" {
  description = "containers"
  type = list(object({
    container_access_type = string,
    metadata = object({}),
    name = string
  }))
  default = [
    {
      "container_access_type": "private",
      "metadata": {},
      "name": "20220909-001"
    }
  ]
}
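With the type declared, the environment variable can then be exported as a plain JSON-style list (the trailing comma is unnecessary); a minimal sketch with hypothetical values:

```shell
# Hypothetical container definition; once the variable's type is declared,
# Terraform converts this JSON-style string into a proper list of objects.
export TF_VAR_containers='[{"container_access_type":"private","metadata":{},"name":"20220912-001"}]'
echo "$TF_VAR_containers"
```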

Installing Rabbitmq using helm3 from bitnami throws chart.metadata is required

I am trying to install rabbitmq 8.6.1 from the Bitnami chart repository using Terraform 0.12.18.
My Helm version is 3.4.2.
While installing, I get the following error:
Error: validation: chart.metadata is required
My Terraform file is as below:
resource "kubernetes_secret" "rabbitmq_load_definition" {
  metadata {
    name      = "rabbitmq-load-definition"
    namespace = kubernetes_namespace.kylas_sales.metadata[0].name
  }
  type = "Opaque"
  data = {
    "load_definition.json" = jsonencode({
      "users": [
        {
          name: "sales",
          tags: "administrator",
          password: var.rabbitmq_password
        }
      ],
      "vhosts": [
        {
          name: "/"
        }
      ],
      "permissions": [
        {
          user: "sales",
          vhost: "/",
          configure: ".*",
          write: ".*",
          read: ".*"
        }
      ],
      "exchanges": [
        {
          name: "ex.iam",
          vhost: "/",
          type: "topic",
          durable: true,
          auto_delete: false,
          internal: false,
          arguments: {}
        }
      ]
    })
  }
}

resource "helm_release" "rabbitmq" {
  chart      = "rabbitmq"
  name       = "rabbitmq"
  version    = "8.6.1"
  timeout    = 600
  repository = "https://charts.bitnami.com/bitnami"
  namespace  = "sales"
  depends_on = [
    kubernetes_secret.rabbitmq_load_definition
  ]
}
After looking at issue 509 in terraform-provider-helm: if your module/subdirectory name is the same as your chart name (in my case the directory name is rabbitmq and my helm_release name is also rabbitmq), you get this error. I am still not able to identify why.
Solution: I changed my directory name from rabbitmq to rabbitmq-resource and the error is gone.

What is the correct syntax for multiple conditions in a Terraform `aws_iam_policy_document` data block

How can this S3 bucket IAM policy, which has multiple conditions, be rewritten as an aws_iam_policy_document data block?
"Condition": {
  "StringEquals": {
    "s3:x-amz-acl": "bucket-owner-full-control",
    "aws:SourceAccount": "xxxxxxxxxxxx"
  },
  "ArnLike": {
    "aws:SourceArn": "arn:aws:s3:::my-tf-test-bucket"
  }
}
With the aws_iam_policy_document condition block syntax:
condition {
  test     = "StringEquals"
  values   = []
  variable = ""
}
The aws_iam_policy_document data source supports multiple condition blocks inside a statement (they are logically ANDed together). The following Terraform configuration should help:
data "aws_iam_policy_document" "iam_policy_document" {
  statement {
    condition {
      test     = "StringEquals"
      values   = ["bucket-owner-full-control"]
      variable = "s3:x-amz-acl"
    }
    condition {
      test     = "StringEquals"
      values   = ["xxxxxxxxxxxx"]
      variable = "aws:SourceAccount"
    }
    condition {
      test     = "ArnLike"
      values   = ["arn:aws:s3:::my-tf-test-bucket"]
      variable = "aws:SourceArn"
    }
  }
}
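To consume the rendered document elsewhere, the data source exposes the rendered JSON via its json attribute. A hypothetical sketch of the wiring (a complete statement would also need effect, actions, principals, and resources before the policy is usable):

```hcl
# Hypothetical wiring: attach the rendered policy document to a bucket.
# `json` is the rendered policy produced by the data source above.
resource "aws_s3_bucket_policy" "example" {
  bucket = "my-tf-test-bucket"
  policy = data.aws_iam_policy_document.iam_policy_document.json
}
```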

Actor stats not being sent to StatsD from Kamon

I'm trying to use Kamon with a StatsD backend for Akka monitoring/telemetry. I currently have only one actor, and I'm sending an ask which replies to the caller. I can see my custom counter data in Grafana and Graphite, as well as JVM and OS stats, but no actor stats from Akka. My project is set up just like this intro, but I have added the Kamon pieces and my own HttpService. What am I missing?
Here's my config:
kamon {
  trace.level = simple-trace
  metric {
    tick-interval = 1 second
    filters {
      akka-actor.includes = [ "**" ]
      akka-router.includes = [ "**" ]
      akka-dispatcher.includes = [ "**" ]
      trace.includes = [ "**" ]
      trace-segment.includes = [ "**" ]
      histogram.includes = [ "**" ]
      min-max-counter.includes = [ "**" ]
      gauge.includes = [ "**" ]
      counters.includes = [ "**" ]
      # For web projects
      http-server.includes = [ "**" ]
      akka-actor.excludes = [ "**" ]
      akka-router.excludes = [ "**" ]
      akka-dispatcher.excludes = [ "**" ]
      counters.excludes = [ "" ]
      trace-segment.excludes = [ "" ]
      histogram.excludes = [ "" ]
      min-max-counter.excludes = [ "" ]
      gauge.excludes = [ "" ]
      http-server.excludes = [ "" ]
    }
  }
  akka {
    ask-pattern-timeout-warning = "lightweight"
  }
  statsd {
    hostname = "localhost" // Graphite host
    port = "8125"
    flush-interval = 1 second
    max-packet-size = 1024 bytes
    subscriptions {
      histogram = ["**"]
      min-max-counter = ["**"]
      gauge = ["**"]
      counter = ["**"]
      trace = ["**"]
      trace-segment = ["**"]
      system-metric = ["**"]
      akka-actor = ["**"]
      akka-dispatcher = ["**"]
      http-server = ["**"]
    }
    report-system-metrics = true
    simple-metric-key-generator {
      application = "data-id-generator"
    }
  }
  system-metrics {
    # sigar is enabled by default
    sigar-enabled = false
    # JMX-related metrics are enabled by default
    jmx-enabled = true
  }
}
Here's my service:
import akka.event.Logging
import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.server.Route
import akka.pattern.ask
import akka.util.{ByteString, Timeout}
import com.incontact.data.GeneratorActor.GenerateId
import com.incontact.data.IdGenerator.{GeneratorError, IdResult}
import com.incontact.http.HttpService
import akka.http.scaladsl.model.StatusCodes._
import akka.http.scaladsl.marshallers.sprayjson.SprayJsonSupport
import spray.json._
import kamon.Kamon
import scala.concurrent.duration._
trait IdService extends HttpService with SprayJsonSupport with DefaultJsonProtocol {
  private lazy val log = Logging(system, classOf[IdService])
  implicit val timeout: Timeout = 1.second
  //implicit val errorFormat = jsonFormat2(GeneratorError)

  abstract override def route: Route = {
    log.info("creating generator actor")
    val generator = system.actorOf(GeneratorActor.props, "generator")
    val counter = Kamon.metrics.counter("generate-long-id")
    path("") {
      get {
        counter.increment()
        log.info("IdService sending single request to actor")
        onSuccess((generator ? GenerateId).mapTo[IdResult]) {
          case Right(id) => complete(ByteString(s"${id}\n"))
          case Left(e: GeneratorError) => failWith(e.Error)
        }
      }
    }
  } ~ super.route
}
Main.scala (startup)
import akka.event.Logging
import com.incontact.data.IdService
import com.incontact.http.HttpServer
import kamon.Kamon
import scala.io.StdIn
import scala.util.Success
object Main extends App
  with HttpServer
  with IdService {

  Kamon.start()

  private lazy val log = Logging(system, getClass)

  bindingFuture.andThen {
    case Success(binding) =>
      // this really doesn't support any production behavior since [ENTER] won't be used to shut down the docker container
      log.info("Press ENTER to exit")
      StdIn.readLine()
      log.info("Shutting down...")
      Kamon.shutdown()
      system.terminate()
  }
}
build.sbt
scalaVersion := "2.11.8"
val akkaVersion = "2.4.14"
val kamonVersion = "0.6.3"
libraryDependencies ++= Seq(
  // remove multiple-dependency warnings
  "org.scala-lang" % "scala-reflect" % scalaVersion.value,
  "org.scala-lang.modules" %% "scala-xml" % "1.0.5",
  // application dependencies
  "com.typesafe.akka" %% "akka-actor" % akkaVersion,
  "com.typesafe.akka" %% "akka-http" % "10.0.0",
  "com.typesafe.akka" %% "akka-http-spray-json" % "10.0.0",
  "com.amazonaws" % "aws-java-sdk-dynamodb" % "1.11.63",
  // kamon dependencies
  "io.kamon" %% "kamon-core" % kamonVersion,
  "io.kamon" %% "kamon-statsd" % kamonVersion,
  "io.kamon" %% "kamon-system-metrics" % kamonVersion,
  "org.aspectj" % "aspectjweaver" % "1.8.6",
  // test dependencies
  "com.typesafe.akka" %% "akka-testkit" % akkaVersion % "test",
  "org.scalatest" %% "scalatest" % "3.0.1" % "test",
  "com.typesafe.akka" %% "akka-http-testkit" % "10.0.0" % "test",
  "org.pegdown" % "pegdown" % "1.1.0" % "test"
)
project/plugins.sbt
addSbtPlugin("io.kamon" % "aspectj-runner" % "0.1.3")
launch from command-line:
$ sbt aspectj-runner:run
It looks like you're excluding all the actor/dispatcher/router metrics in your config. This rules those metrics out even though you're also including them all:
akka-actor.excludes = [ "**" ]
akka-router.excludes = [ "**" ]
akka-dispatcher.excludes = [ "**" ]
Try changing it to:
akka-actor.excludes = [ ]
akka-router.excludes = [ ]
akka-dispatcher.excludes = [ ]