I'm having a hard time trying to create sequences with Liquibase.
In my Postgres DB I can create a sequence with the following:
CREATE SEQUENCE IF NOT EXISTS seq_tn_101 as INT increment 1 minvalue 0 start 1;
I am trying to do the same with Liquibase using the following changeset:
databaseChangeLog:
  - changeSet:
      id: 1667405954-1
      author: 4356
      changes:
        - createSequence:
            dataType: int
            incrementBy: 1
            minValue: 0
            sequenceName: seq_tn_101
            startValue: 1
It is included in the master changelog file like this:
databaseChangeLog:
  ...
  - include:
      file: changes/028_update_foo.yml
      relativeToChangelogFile: true
  - include:
      file: changes/029_create_sequence.yml
      relativeToChangelogFile: true
But I am getting this error:
Invocation of init method failed; nested exception is liquibase.exception.ChangeLogParseException: Error parsing classpath:/db/changelog/db.changelog-master.yaml
When 029_create_sequence.yml is not included in the master file, everything works fine.
Any idea?
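Not part of the original question, just a way to narrow this down: Liquibase can parse and check the changelog without touching the database, which separates YAML problems from SQL problems. With your usual connection flags, something like:

liquibase --changeLogFile=db/changelog/db.changelog-master.yaml validate

If validate reports the same ChangeLogParseException, the cause is most likely whitespace: YAML nesting is indentation-sensitive, and a tab character or a mis-indented key in 029_create_sequence.yml will break parsing of the whole master changelog.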
(How) Can I use variable value as a name for another variable?
I have a job with matrix and dotenv artifacts as follows:
build-names:
  stage: build
  ...
  script:
    ...
    <lines omitted>
    ...
    - echo "${NAME}_DEB_PACKAGE_VERSION=${DEB_PACKAGE_VERSION}" >> build.env
  artifacts:
    reports:
      dotenv: build.env
  parallel:
    matrix:
      - NAME:
          - name1
          - name2
        TYPE:
          - 0
          - 1
  environment: $NAME/$TYPE
Then I have a downstream trigger job, again using matrix, and I want to pass the appropriate package version based on ${NAME}:
build-images:
  stage: .post
  needs:
    - job: build-names
      artifacts: true
  variables:
    PACKAGE_VERSION_VARIABLE_NAME: ${NAME}_DEB_PACKAGE_VERSION
    PACKAGE_VERSION: ${${PACKAGE_VERSION_VARIABLE_NAME}}
    # OR
    PACKAGE_VERSION_VARIABLE_NAME: ${NAME}_DEB_PACKAGE_VERSION
    PACKAGE_VERSION: ${!PACKAGE_VERSION_VARIABLE_NAME}
  trigger:
    project: group/project-${NAME}
  parallel:
    matrix:
      - NAME:
          - name1
          - name2
Neither of the two approaches above (the nested ${${...}} or the ${!...} indirection) works in the variables section.
I could 'generate' the variable within a script section, but AFAIK you cannot have both trigger and script in the same job.
Is there a workaround for similar use cases?
(Using self-hosted GitLab 15.4)
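Not from the original post, and only a sketch: since Bash indirect expansion (${!var}) does work inside script blocks, one workaround is to pass only the variable name through the trigger and resolve it in a downstream job's script. Assuming a Bash-based runner shell and assuming the *_DEB_PACKAGE_VERSION dotenv variables actually reach the downstream pipeline (this may need trigger:forward tuning), the downstream side could look like this (the job name build-package is made up):

build-package:
  script:
    # ${!name} indirection works here, unlike in a `variables:` block
    - PACKAGE_VERSION="${!PACKAGE_VERSION_VARIABLE_NAME}"
    - echo "Building with package version ${PACKAGE_VERSION}"

Here PACKAGE_VERSION_VARIABLE_NAME is set to ${NAME}_DEB_PACKAGE_VERSION in the trigger job's variables block, exactly as in the question; only the final dereference moves into a script.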
I have two jobs in the same stage with a dependency specified via the "needs" keyword, for example JobA -> JobB (needs: [JobA]).
When I try to skip JobA with a rule (to speed up the build process), JobB throws an 'invalid yaml' error for the "needs" keyword, because the referenced JobA no longer exists in the pipeline.
What is the correct syntax/construct to enable such a dependency? Is the use of "rules" in JobA the right approach?
The simplified version of what I have is:
image1:
  stage: build-images
  script:
    - etc...
  rules:
    - changes:
        - values.env

image2:
  stage: build-images
  script:
    - ...
  needs: [image1]
Use needs:optional:
image1:
  stage: build-images
  script:
    - etc...
  rules:
    - changes:
        - values.env

image2:
  stage: build-images
  script:
    - ...
  needs:
    - job: image1
      optional: true
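With optional: true, image2 runs whether or not the rules added image1 to the pipeline, and the dependency is still honored whenever image1 is present. Note that needs:job:optional requires a reasonably recent GitLab (introduced around 13.10, if I remember correctly), so check that your instance supports it.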
I'm trying to set up some processors in a filebeat.yml to process some logs before sending them to ELK.
An important part of the processing is determining the "level" of the event, which is not always included in the line in the log file.
This is the idea I have for it right now:
# /var/log/messages
- type: log
  processors:
    - dissect:
        tokenizer: "%{month} %{day} %{time} %{hostname} %{service}: {%message}"
        field: "message"
        target_prefix: "dissect"
    - if:
        when:
          regexp:
            message: ((E|e)rror|(f|F)ault)
        then:
          - add_fields:
              target: 'dissect'
              fields:
                level: error
        else:
          - if:
              when:
                regexp:
                  message: (W|W)arning
              then:
                - add_fields:
                    target: 'dissect'
                    fields:
                      level: warning
              else:
                - add_fields:
                    target: 'dissect'
                    fields:
                      level: information
    - drop_fields:
        # duplicate
        fields: ["dissect.month", "dissect.day", "dissect.time", "dissect.hostname", "message"]
  # Change to true to enable this input configuration.
  enabled: true
  paths:
    - /var/log/messages
I'm still not sure about those patterns I'm trying... but right now I don't think they're what's causing the failure.
When trying to run filebeat with console output for a test with
filebeat -e -c filebeat.yml
I get the following error:
2022-01-26T17:45:27.174+0200 ERROR instance/beat.go:877 Exiting: Error while initializing input: failed to make if/then/else processor: missing or invalid condition
Exiting: Error while initializing input: failed to make if/then/else processor: missing or invalid condition
I'm very new to YAML in general, and the only other Beat I've done before is an Auditbeat (which works, and has conditions, but no "if"s).
Does anyone know what the problem might be?
To clarify: I commented out all other "input" entries, leaving just this one, and still got this error.
Edit: Version: 7.2.0
The if part of the if-then-else processor doesn't use the when label to introduce the condition. The correct usage is:
- if:
    regexp:
      message: [...]
You have to correct the two if processors in your configuration.
Additionally, there's a mistake in your dissect expression: {%message} should be %{message}. Also, the regexp for warning should be (W|w)arning, not (W|W)arning (both W's are uppercase in your config).
This is the corrected processors configuration:
processors:
  - dissect:
      tokenizer: "%{month} %{day} %{time} %{hostname} %{service}: %{message}"
      field: "message"
      target_prefix: "dissect"
  - if:
      regexp:
        message: ((E|e)rror|(f|F)ault)
      then:
        - add_fields:
            target: 'dissect'
            fields:
              level: error
      else:
        - if:
            regexp:
              message: (W|w)arning
            then:
              - add_fields:
                  target: 'dissect'
                  fields:
                    level: warning
            else:
              - add_fields:
                  target: 'dissect'
                  fields:
                    level: information
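As a general tip (not part of the original answer): Filebeat can check a configuration for exactly this kind of error without starting any inputs, which is quicker than a full filebeat -e run:

filebeat test config -c filebeat.yml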
When I add any comment at the end of an SQL file, Liquibase raises an error.
Comments are fine everywhere else in the file.
What are we doing wrong?
version 4.1.1 #10 built at 2020-10-12 19:24+0000
Starting Liquibase at 09:43:04 (version 4.1.1 #10 built at 2020-10-12 19:24+0000)
Unexpected error running Liquibase: Migration failed for change set 1000.file_name.sql::1::10000:
Reason: liquibase.exception.DatabaseException: Invalid SQL type: sqlKind = UNINITIALIZED [Failed SQL: (17439) -- problem comment]
For more information, please use the --logLevel flag
--liquibase formatted sql
--changeset 1:10000 stripComments:false runOnChange:true endDelimiter:;
alter table SCHEMA_NAME.SOME_TABLE modify
"ANY_COLOMN varchar2(60 char)
;
commit
;
-- problem comment
UPD
The command I'm running:
liquibase.bat --driver=oracle.jdbc.OracleDriver --username=yyyy --password=xxxx --url=jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=some_host.com)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=some_servicename))) --changeLogFile=db-changelog-test.yml update
db-changelog-test.yml
---
databaseChangeLog:
  - include:
      file: 1000.file_name.sql
      relativeToChangelogFile: true
In liquibase.bat I modified only one line:
IF NOT DEFINED JAVA_OPTS set JAVA_OPTS=-Dfile.encoding=UTF-8
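A guess based on the error output, not from the original post: with stripComments:false and endDelimiter:;, whatever follows the last semicolon (including a trailing comment) appears to be sent to Oracle as one more "statement", which would match the Failed SQL in the log being the comment itself. A possible workaround is to move the comment above the final delimiter so nothing trails it:

--liquibase formatted sql

--changeset 1:10000 stripComments:false runOnChange:true endDelimiter:;
alter table SCHEMA_NAME.SOME_TABLE modify
ANY_COLOMN varchar2(60 char)
;
-- problem comment (moved up so the file ends with the delimiter)
commit
;

Alternatively, dropping stripComments:false lets Liquibase strip comments before the statements are executed.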
I'm fairly new to GitLab CI and, as such, I've run into a problem where the following fails CI lint because of my use of anchors/references:
image: docker:latest

services:
  - docker:dind

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_HOST: tcp://localhost:2375

.install_thing1: &install_thing1
  - do things
  - to install
  - thing1

.install_thing2: &install_thing2
  - do things to
  - install thing2

.setup_thing1: &setup_thing1
  variables:
    VAR: var
    FOO: bar
  script:
    - all
    - the
    - things

before_script:
  ...

stages:
  - deploy-test
  - deploy-stage
  - deploy-prod

test:
  stage: deploy-test
  variables:
    RUN_ENV: "test"
    ...
  only:
    - tags
    - branches
  script:
    - *install_thing1
    - *install_thing2
    - *setup_thing1
    - other stuff
  ...

test:
  stage: deploy-stage
  variables:
    RUN_ENV: "stage"
    ...
  only:
    - master
  script:
    - *install_thing1
    - *install_thing2
    - *setup_thing1
    - other stuff
When I attempt to lint the gitlab-ci.yml, I get the following error:
Status: syntax is incorrect
Error: jobs:test:script config should be a string or an array of strings
The error alludes to just needing an array for the script piece, which I believe I have. Use of the <<: *anchor merge key causes an error as well.
So, how can one accomplish what I'm trying to do here without having to repeat the code in every script block?
You can fix it and even make it more DRY; take a look at the Auto DevOps template GitLab created.
It can fix your issue and further improve your CI file: have a template job like their auto_devops job, include it in a before_script, and then you can combine and call multiple functions in a script block.
The anchors only give you limited flexibility.
(This concept made it possible for me to have one CI file for 20+ projects and a centralized functions file that I wget and load in my before_script.)
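For illustration, a sketch of that pattern with the question's own placeholders (the function bodies are made up): put shell functions in a single anchored literal block, source it in before_script, and each job's script stays a plain array of strings.

.functions: &functions |
  function install_thing1() {
    echo "commands to install thing1"
  }
  function install_thing2() {
    echo "commands to install thing2"
  }

before_script:
  - *functions

test:
  stage: deploy-test
  script:
    - install_thing1
    - install_thing2
    - other stuff

Because *functions expands to one multi-line string rather than a nested array, the "script config should be a string or an array of strings" lint error goes away.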