How can I use the gradle-release-plugin to auto-increment the minor version, not the incremental one?

I'm using the gradle-release-plugin successfully in Jenkins with the option gradle.release.useAutomaticVersion=true; however, it is incrementing the incremental (patch) version, and I'd like to increment the minor. Given
1.14.0
I want to increment to 1.15.0, rather than 1.14.1.
Is there a way to do this?

You can configure how the increment should work.
import java.util.regex.Matcher  // at the top of build.gradle

release {
    versionPatterns = [
        // bump the second (minor) group, keep the patch group as-is
        /(\d+)\.(\d+)\.(\d+)$/: { Matcher m, Project p -> m.replaceAll("${m[0][1]}.${(m[0][2] as int) + 1}.${m[0][3]}") }
    ]
}
I think this should do the trick. It matches your current version via the regex pattern
/(\d+)\.(\d+)\.(\d+)$/
and writes the new version with
m.replaceAll("${m[0][1]}.${(m[0][2] as int) + 1}.${m[0][3]}")
where the second group (the minor version) is incremented by one and the patch group is kept as-is.
I haven't tested the code, though.
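If you want to convince yourself before wiring this into the build, the same pattern and replacement can be checked in a standalone Groovy snippet (a sketch, independent of the plugin):

import java.util.regex.Matcher

// sanity check of the replacement logic used above
Matcher m = '1.14.0' =~ /(\d+)\.(\d+)\.(\d+)$/
assert m.replaceAll("${m[0][1]}.${(m[0][2] as int) + 1}.${m[0][3]}") == '1.15.0'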

Related

Slick plain sql query with pagination

I have something like this, using Akka, Alpakka + Slick
Slick
  .source(
    sql"""select #${onlyTheseColumns.mkString(",")} from #${dbSource.table}"""
      .as[Map[String, String]]
      .withStatementParameters(rsType = ResultSetType.ForwardOnly, rsConcurrency = ResultSetConcurrency.ReadOnly, fetchSize = batchSize)
      .transactionally
  ).map( doSomething )...
I want to update this plain SQL query to skip the first N elements, but that is very DB-specific.
Is it possible to have the pagination bits generated by Slick, like for type-safe queries where one can just do drop, filter, take?
PS: I don't have the schema, so I cannot go the type-safe way; I just want all tables as Map and filter, drop, etc. on them.
PS2: at the Akka level, flow.drop works, but it's not optimal (slow), because it still consumes the rows.
Cheers
Since you are using plain SQL, you have to provide workable SQL in the code snippet. Plain SQL may not be type-safe, but it is flexible.
By the way, the most efficient way is to skip the first N elements in the database itself, e.g. with LIMIT/OFFSET in MySQL. Depending on your database engine, you could use something like:
val page = 1
val pageSize = 10

val query = sql"""
  select #${onlyTheseColumns.mkString(",")}
  from #${dbSource.table}
  limit #${pageSize + 1}
  offset #${pageSize * (page - 1)}
"""
The pageSize + 1 part tells you whether a next page exists: if the query returns more than pageSize rows, there is one.
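For example, after running the query you could check for and trim the extra probe row like this (a sketch; rows stands for the fetched result sequence):

// hypothetical: `rows` is the Seq produced by running `query`
val hasNextPage = rows.size > pageSize
val pageRows    = rows.take(pageSize) // drop the probe row before handing the page out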
I want to update this plain SQL query to skip the first N elements. But that is very DB-specific.
As you're concerned about changing the SQL for different databases, I suggest you abstract away that part of the SQL and decide what to do based on the Slick profile being used.
If you are working with multiple database products, you've probably already abstracted away from any specific profile, perhaps using JdbcProfile. In that case you could place your "skip N elements" helper in a class and use the active slickProfile to decide on the SQL to use. (As an alternative you could of course check via some other means, such as an environment value you set.)
In practice that could be something like this:
case class Paginate(profile: slick.jdbc.JdbcProfile) {
  // Return the correct LIMIT/OFFSET SQL for the current Slick profile
  def page(size: Int, firstRow: Int): String =
    if (profile.isInstanceOf[slick.jdbc.H2Profile]) {
      s"LIMIT $size OFFSET $firstRow"
    } else if (profile.isInstanceOf[slick.jdbc.MySQLProfile]) {
      s"LIMIT $firstRow, $size"
    } else {
      // And so on... or a default
      // Danger: I've no idea if the above SQL is correct - it's just placeholder
      ???
    }
}
Which you could use as:
// Import your profile
import slick.jdbc.H2Profile.api._

val paginate = Paginate(slickProfile)

val action: DBIO[Seq[Int]] =
  sql""" SELECT cols FROM table #${paginate.page(100, 10)}""".as[Int]
In this way, you get to isolate (and control) RDBMS-specific SQL in one place.
To make the helper more usable, and as slickProfile is implicit, you could instead write:
def page(size: Int, firstRow: Int)(implicit profile: slick.jdbc.JdbcProfile): String =
  ??? // logic for deciding on the SQL goes here, as in Paginate above
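With the profile in implicit scope, a call site then needs no explicit argument (a sketch mirroring the action above):

// the implicit slickProfile is picked up automatically
val action: DBIO[Seq[Int]] =
  sql""" SELECT cols FROM table #${page(100, 10)}""".as[Int]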
I feel obliged to comment that using a splice (#$) in plain SQL opens you to SQL injection attacks if any of the values are provided by a user.

Terraform: How Do I Set Up a Resource Based on Configuration

So here is what I want as a module in Pseudo Code:
IF UseCustom, Create AWS Launch Config With One Custom EBS Device and One Generic EBS Device
ELSE Create AWS Launch Config With One Generic EBS Device
I am aware that I can use the 'count' function within a resource to decide whether it is created or not... So I currently have:
resource "aws_launch_configuration" "basic_launch_config" {
  count = var.boolean ? 0 : 1
  # blah
}

resource "aws_launch_configuration" "custom_launch_config" {
  count = var.boolean ? 1 : 0
  # blah
  # blah
}
Which is great, now it creates the right Launch configuration based on my 'boolean' variable... But in order to then create the AutoScalingGroup using that Launch Configuration, I need the Launch Configuration Name. I know what you're thinking, just output it and grab it, you moron! Well of course I'm outputting it:
output "name" {
description = "The Name of the Default Launch Configuration"
value = aws_launch_configuration.basic_launch_config.*.name
}
output "name" {
description = "The Name of the Custom Launch Configuration"
value = aws_launch_configuration.custom_launch_config.*.name
}
But how the heck do I know, from the higher level where I'm calling the module that creates the Launch Configuration and then the Auto Scaling Group, which output to use for passing into the ASG?
Is there a different way to grab the value I want that I'm overlooking? I'm new to Terraform and the whole no real conditional thing is really throwing me for a loop.
This seemed to be the cleanest way I could find, using a ternary operator:
output "name {
description = "The Name of the Launch Configuration"
value = "${(var.booleanVar) == 0 ? aws_launch_configuration.default_launch_config.*.name : aws_launch_configuration.custom_launch_config.*.name}
}
Let me know if there is a better way!
You can use the same variable you used to decide which resource to enable to select the appropriate result:
output "name" {
value = var.boolean ? aws_launch_configuration.custom_launch_config[0].name : aws_launch_configuration.basic_launch_config[0].name
}
Another option, which is a little more terse but arguably also a little less clear to a future reader, is to exploit the fact that you will always have one list of zero elements and one list of one element, like this:
output "name" {
value = concat(
aws_launch_configuration.basic_launch_config[*].name,
aws_launch_configuration.custom_launch_config[*].name,
)[0]
}
Concatenating these two lists will always produce a single-item list due to how the count expressions are written, and so we can use [0] to take that single item and return it.
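On the calling side, either variant leaves you with a single name output to pass into the Auto Scaling Group, along these lines (a sketch; the module source path and the ASG's remaining arguments are placeholders, not from the question):

# hypothetical caller of the launch-configuration module shown above
module "launch_config" {
  source  = "./modules/launch_config"
  boolean = true
}

resource "aws_autoscaling_group" "asg" {
  launch_configuration = module.launch_config.name
  min_size             = 1
  max_size             = 2
  # ... plus availability zones / subnets and the other required arguments
}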

Why is yadcf custom_filter not working?

Fiddle: https://codepen.io/MBaas/pen/rpZZzd
I have a DataTable of newspaper endorsements for presidential candidates that I want to filter on the party - a value that is not displayed in the table (I have the shortcodes "D" and "R" in the data but would like to use the text "Democrat" or "Republican" in the UI).
This may have once worked (I think it did) - but after upgrading to beta 0.9.1 it stopped. Possibly a bug in the beta - or possibly an undetected bug in my code?
My fn:
function myCustomFilterFunction(filterVal, columnVal, rowValues, stateVal) {
  console.log(rowValues);
  console.log(filterVal + '/' + columnVal);
  if (columnVal === '') { return true; }
  return -1 < columnVal.search(filterVal);
}
I had added the log for debugging purposes and it produced this output (excerpt):
["Wisconsin State Journal", "2016", "Clinton", "", "", "", ""]
"D/"
I was surprised to see columnVal being empty. That explains why the filtering isn't working, and its being empty can in turn be explained by looking at rowValues. But given that the source data was defined in JSON as
["Wisconsin State Journal",2016,"Clinton","http:\/\/host.madison.com\/wsj\/opinion\/editorial\/our-endorsement-hillary-clinton-america-must-get-this-right\/article_b526fe64-c2ca-5e3d-807a-0ef4ae23a4d5.html","","","D"]
this is odd. Could it be related to the fact that the column is not visible?
You should make the column containing the party shortcode searchable with the searchable: true option, otherwise your custom filtering function won't work.
For example:
{"searchable":true, "title":"Party (Shortcode)", "visible":true}
See updated example for code and demonstration.
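For completeness: in DataTables, searchable controls whether a column's data participates in filtering at all, independently of visible, so the shortcode column can stay hidden and still be filterable. A sketch, assuming the party shortcode is the seventh column (index 6) of the JSON rows above and a hypothetical #example table:

// hidden but searchable shortcode column; index 6 and '#example' are assumptions
$('#example').DataTable({
  columnDefs: [
    { targets: 6, searchable: true, visible: false }
  ]
});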

Neo4j: "ghost" node in label index throws error

I have a neo4j database with a set of nodes with label :EXAMPLE.
There are two operations. First I delete one node and then I look for another one. They are done separately using neo4j API.
MATCH (n:EXAMPLE {Name: { name1 }}) DELETE n;
and
MATCH (n:EXAMPLE {Name: { name2 }}) RETURN n;
Sometimes, when I execute the second query, it throws a "Node with id 123" error, where node 123 is the same node that was deleted by the first query.
It happens when a lot of requests are coming to the database simultaneously.
My guess is that this can happen if the node was deleted but the EXAMPLE label index wasn't updated yet. Two facts support this theory:
1) The error is intermittent.
2) If I change the second query like this (removing the label), I don't get the error:
MATCH (n {Name: { name2 }}) RETURN n;
Neo4j version is 2.1.5, Java is OpenJDK Runtime Environment (IcedTea 2.5.3) (7u71-2.5.3-2~deb7u1), and the operating system is Debian. There are no other indexes in the database except the label.
The question is how can I fix this, but still use labels?
What ends up happening is that (simplified) the operations will order like so:
Q1: MATCH (n)
Q2: DELETE (n), COMMIT
Q1: RETURN n # Error, n no longer exists
For implementation reasons, this is much more likely to happen if Cypher is going via an index. The database will eventually handle this for you, but for now you'll need to wrap that read query in a retry block: if it fails with this type of error, you simply run it again.
On that note, there are other errors that are easily recoverable from by retrying, such as deadlock errors, so wrapping your statements and/or transactions in retry-blocks is a useful thing to do in general.
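A minimal retry sketch for the embedded Java API (assumptions: the 2.1-era ExecutionEngine entry point and a fixed retry budget; the query and parameter names mirror the question):

import java.util.Collections;
import java.util.Map;

import org.neo4j.cypher.javacompat.ExecutionEngine;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Transaction;

public class RetryingRead {

    private static final String QUERY = "MATCH (n:EXAMPLE {Name: { name2 }}) RETURN n";

    // runs the read query, retrying a few times if it hits a transient error
    public static void findExample(GraphDatabaseService db, ExecutionEngine engine, String name) {
        Map<String, Object> params = Collections.<String, Object>singletonMap("name2", name);
        int attemptsLeft = 3;
        while (true) {
            try (Transaction tx = db.beginTx()) {
                for (Map<String, Object> row : engine.execute(QUERY, params)) {
                    // ... use row.get("n") while the transaction is still open ...
                }
                tx.success();
                return;
            } catch (RuntimeException e) {
                if (--attemptsLeft == 0) {
                    throw e; // still failing after several attempts: give up
                }
                // transient "ghost node" (or deadlock) error: simply run the query again
            }
        }
    }
}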
This is a possible workaround:
Mark nodes as deleted instead of deleting them, ignore nodes that are marked as deleted, and physically delete all such nodes at once later with a garbage-collection job.
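In Cypher (2.x syntax, mirroring the queries above) that could look like this; the deleted flag is an assumed property name:

// "delete": only set a flag
MATCH (n:EXAMPLE {Name: { name1 }}) SET n.deleted = true;

// reads ignore flagged nodes
MATCH (n:EXAMPLE {Name: { name2 }}) WHERE NOT has(n.deleted) RETURN n;

// periodic garbage collection: physically remove the flagged nodes
MATCH (n:EXAMPLE) WHERE has(n.deleted) DELETE n;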

How to diff branches with JGit?

I use JGit and need to know the difference between two branches. How do I run the equivalent of this command with the JGit API?
git diff --name-status ..origin
You can use the DiffCommand: create AbstractTreeIterator instances for the two branches and then have the DiffCommand return a list of differences between them:
// the diff works on TreeIterators; we prepare two, one for each branch
AbstractTreeIterator oldTreeParser = prepareTreeParser(repository, "refs/heads/oldbranch");
AbstractTreeIterator newTreeParser = prepareTreeParser(repository, "refs/heads/master");

// then the porcelain diff-command returns a list of diff entries
List<DiffEntry> diff = new Git(repository).diff().setOldTree(oldTreeParser).setNewTree(newTreeParser).call();
for (DiffEntry entry : diff) {
    System.out.println("Entry: " + entry);
}
The full example, including creating the AbstractTreeIterator, can now be found as part of my jgit-cookbook.
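For reference, a prepareTreeParser along the lines of the jgit-cookbook version could look like this (a sketch; exactRef requires a reasonably recent JGit, older versions use getRef instead):

private static AbstractTreeIterator prepareTreeParser(Repository repository, String ref) throws IOException {
    // resolve the branch ref to a commit, and the commit to its tree
    Ref head = repository.exactRef(ref);
    try (RevWalk walk = new RevWalk(repository)) {
        RevCommit commit = walk.parseCommit(head.getObjectId());
        RevTree tree = walk.parseTree(commit.getTree().getId());

        // a CanonicalTreeParser positioned on that tree is what the diff command expects
        CanonicalTreeParser treeParser = new CanonicalTreeParser();
        try (ObjectReader reader = repository.newObjectReader()) {
            treeParser.reset(reader, tree.getId());
        }
        return treeParser;
    }
}

Each DiffEntry also carries the change type and path (entry.getChangeType(), entry.getNewPath()), which is essentially what git diff --name-status prints.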