There are a lot of useful parameters (for example, changelogCatalogName) for the Maven update task: http://www.liquibase.org/documentation/maven/maven_update.html
But they are not mentioned on the CLI page for liquibase update: http://www.liquibase.org/documentation/command_line.html
Is it possible to pass these parameters?
Thanks!
You can also run liquibase --help to list the command-line options; there may be some there that were missed in the docs.
I am working on improving consistency and feature parity between the different ways to run Liquibase, but there can still be features that have not made it into all modes yet. ChangelogCatalogName and ChangelogSchemaName look like two fields that have not made it into CLI parameters yet, but you can specify them as system properties: -Dliquibase.catalogName=ABC and -Dliquibase.schemaName=XYZ
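For the CLI, one way to get those system properties to the JVM is to launch Liquibase via java directly. A minimal sketch (jar path, changelog file, and values are placeholders, not taken from the docs):

```shell
# Set the catalog/schema fields as JVM system properties. The -D flags must
# come before -jar so they reach the JVM rather than Liquibase's own arguments.
java -Dliquibase.catalogName=ABC \
     -Dliquibase.schemaName=XYZ \
     -jar liquibase.jar --changeLogFile=db/changelog.xml update
```

If you use the bundled liquibase shell script instead, the same -D flags can usually be passed through the JAVA_OPTS environment variable.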
Related
I am completely new to Karate and had a question regarding the karate-config.js file.
I understand it is the first thing to run, as "config" for all the scripts, a sort of global settings file.
What I have written are a few test cases that require different "setup" steps, which cannot be done in the Background (which, as I understand it, runs after karate-config.js) for each test scenario.
I have two feature files with scenarios in them. One of the feature files requires this setup from karate-config.js; the other doesn't. Right now the setup is running for both feature files, when I only want it to run for the first one.
I was thinking I could tag each feature file with a unique tag and use an if statement in karate-config.js to only run the setup if that tag is present. However, that likely won't work, since the feature files don't get accessed until after karate-config.js runs, right?
Is there a way to get this done?
Sorry if the description is long.
I think you are overcomplicating things. If something is not to be used in all features, please don't put it in karate-config.js.
Just go with the strategy of having 2 re-usable features and call them in the Background where needed. This is what is done in normal programming languages, and Karate in that regard is no different.
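A sketch of that strategy (the file and step names here are hypothetical): move the setup steps out of karate-config.js into their own feature, and call it from the Background only in the feature file that needs it:

```gherkin
Feature: tests that need the extra setup

Background:
  # 'setup.feature' is a hypothetical re-usable feature that now holds the
  # steps previously done in karate-config.js
  * call read('classpath:setup.feature')

Scenario: something that depends on the setup
  * print 'setup has already run for this scenario'
```

The second feature file simply omits the call, so the setup never runs for it.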
You seem to be trying to save one line of code being repeated in multiple files. My advice: don't.
I have a huge concern about the Liquibase behavior of ignoring changeset Context if no context is supplied as a run-time parameter.
I'm setting up my first Liquibase project, using "dev, test, prod" as contexts in changesets. I'm passing in the context from a Spring Boot application.properties, which will have different versions for dev, test, etc. So the PROD version will have spring.liquibase.contexts=prod. So far, so good.
But what will happen if somehow, years from now, that line gets accidentally deleted, or commented out? Or what if someone happens to run Liquibase against PROD and doesn't supply "prod" as context?
It seems to me that ALL prior changesets NOT marked with "prod" will then run. This would include any marked just "test", that insert test data, or--God forbid--drop tables... Worse, they'll be running out of order.
I understand Liquibase DOES recommend including "test"-only changesets along with everything else, and using the "test" context (only) to distinguish them.
So. Am I right that this is a potential disaster waiting to happen? Is there a way to prevent this?
Thank you, StackOverflows!!
Yes, you are right that a potential disaster can happen, and it could happen many other ways in the process you described as well. This design is on purpose, because most users don't use contexts, so the majority want all changesets to run when they do a liquibase update.
A safety net I have seen at various places: add a check for the context around the liquibase command in your CI/CD automation layer. For example, in Jenkins, make the context a mandatory parameter so the build can't even run without it.
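One sketch of such a guard, as a wrapper script the CI job runs instead of calling Liquibase directly (LIQUIBASE_CONTEXTS is a made-up name for the variable the job's mandatory parameter would populate):

```shell
#!/bin/sh
# Fail fast when no context was supplied, so an update can never run
# with contexts accidentally unset.
if [ -z "$LIQUIBASE_CONTEXTS" ]; then
    echo "ERROR: refusing to run liquibase update without an explicit context" >&2
    exit 1
fi
liquibase --contexts="$LIQUIBASE_CONTEXTS" update
```

With a guard like this, forgetting the context makes the deployment fail loudly instead of silently running every unmarked changeset.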
While using async-profiler, I run the profiles for cpu and alloc separately, but I was hoping it would be possible to collect them over the same duration. Given the supported output formats, this only seems to make sense if JFR is used.
Yes, this feature is implemented in the v2.0 branch of async-profiler. The branch is currently under development, so use it with care. It is planned for the next major release.
To specify multiple events in the command line, use
profiler.sh -e cpu,alloc -f out.jfr ...
The same as an agent option:
-agentpath:/path/to/libasyncProfiler.sh=start,event=cpu,event=alloc,file=out.jfr,...
As you've correctly guessed, this works only with JFR output.
For feedback, comment on the corresponding GitHub issue.
I am currently working on a product that heavily relies on database logic/functions to realize certain business cases. After having a hard time with quarterly live releases we decided to integrate our projects in a CI environment and to setup a continuous delivery process as a final goal.
At the moment the database-related projects rely heavily on shell scripts. These scripts are triggered on each release and take care of the incremental import of SQL patches (e.g. projectX_v_4_0.sql, projectX_v_4_1.sql, ... projectX_v_4_n.sql).
Unfortunately this approach is very error-prone, and the script logic is not verified/tested at all. Since our experience with Gradle has been very good in the past, we decided to evaluate Gradle as an alternative to the existing shell scripts.
My question now is: how would you handle the sequential import of the SQL patches? Is there a framework you could recommend, or would you prefer to execute the psql command from inside Gradle, as the shell scripts did before?
Thanks for any hints/recommendations and general thoughts!
Have a look at Liquibase and Flyway. A Gradle plugin is available for both tools.
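As a rough illustration with Flyway: your versioned patches map onto its naming convention (e.g. V4_0__description.sql, V4_1__description.sql), and the official Gradle plugin drives the import. A minimal build.gradle sketch (plugin version, JDBC URL, credentials, and paths are placeholders):

```groovy
plugins {
    // Flyway's Gradle plugin
    id "org.flywaydb.flyway" version "9.22.0"
}

flyway {
    url       = "jdbc:postgresql://localhost:5432/projectx"   // placeholder
    user      = "dbuser"                                      // placeholder
    password  = "secret"                                      // placeholder
    // Directory containing the versioned SQL patches
    locations = ["filesystem:src/main/sql"]
}
```

Running gradle flywayMigrate then applies, in version order, only the patches not yet recorded in Flyway's schema history table.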
I'm using cabal to build and test my projects with the commands:
cabal configure --enable-tests
cabal build
cabal test
As a framework I use test-framework (https://batterseapower.github.io/test-framework/).
Everything works; however, the number of QuickCheck tests defaults to 50, which in my use case is very little, because I have to filter the generated data to fit certain properties.
Is there any possibility to pass something like
--maximum-generated-tests=5000
to the test-executable via cabal? I tried things like
cabal test --test-options='maximum-generated-tests=5000'
but no luck so far. Is there any possibility to achieve this?
Many thanks in advance!
jules
You missed the dashes:
cabal test --test-options=--maximum-generated-tests=5000
Also, if too few generated tests satisfy your property, you may have better luck with SmallCheck. It's not random and thus will find all inputs satisfying the condition in the given search space. (Disclosure: I'm the maintainer of SmallCheck.)