How to output debug info with the dropwizard-migrations liquibase module?

When you run liquibase on the command line, you can set the log level to debug with a command line option:
java -jar liquibase.jar \
--driver=oracle.jdbc.OracleDriver \
--classpath=/path/to/classes:jdbcdriver.jar \
--changeLogFile=com/example/db.changelog.xml \
--url="jdbc:oracle:thin:@localhost:1521:oracle" \
--username=scott \
--password=tiger \
--logLevel=debug
update
Does anyone know how to set the log level when running liquibase via the dropwizard-migrations module?
java -jar /mydropwizardapp.jar db migrate config.yml
(I am having a problem where it intermittently fails to get a lock (Waiting for changelog lock....) and my deployment fails; I'd like to see more detail on what it is doing. I'm pretty sure the lock is not left over from a previously failed deployment.)
Thanks

You can set the logging level to DEBUG in the YAML file:
logging:
  level: DEBUG
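If you only need more detail from Liquibase itself, Dropwizard's logging configuration also supports per-logger levels. A minimal sketch, assuming the standard Dropwizard logging section (the logger name "liquibase" is an assumption and may differ across versions):
logging:
  level: INFO
  loggers:
    liquibase: DEBUG
This keeps the rest of the application at INFO while the migration output stays verbose.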

Related

Unable to connect to PostgreSQL from makefile and run migration scripts

I'm currently working on a simple project which accesses the GitHub API.
PostgreSQL is used only to get more familiar with it.
I encountered a problem when trying to run the migration scripts from a makefile. The issue only occurs when I'm using make; applying the migration scripts manually works fine.
The idea is to do this when running "make install":
Start the PostgreSQL container
Run migration scripts
Stop & remove the container
include ./config/dbConfig.env
all: install
.PHONY: all
install:
	# runs all migration scripts. To run a specific migration version, build more targets.
	@echo "Get latest https://github.com/golang-migrate/migrate image"
	docker pull migrate/migrate
	@echo "Starting postgreSQL database"
	docker run -t -d \
		--name dev \
		--env-file $(shell pwd)/config/dbConfig.env \
		-p 5432:5432 \
		-v $(shell pwd)/postgres:/var/lib/postgresql/data postgres
	@echo "Wait for database"
	sleep 5
	@echo "Run migration scripts"
	docker run \
		-v $(shell pwd)/migrations:/migrations \
		--network host migrate/migrate \
		-path=/migrations/ \
		-database "postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@localhost:5432/?sslmode=disable" up
	# for a specific migration: ....432/?sslmode=disable" up {{add integer migration number}}
	@echo "Stop and remove PostgreSQL container"
	docker stop `docker ps -aqf "name=dev"`
	docker rm `docker ps -aqf "name=dev"`
You can also find the latest version on the "develop" branch, or this specific commit here.
Error:
error: read tcp [::1]:34022->[::1]:5432: read: connection reset by peer
Why am I unable to connect to the server from the makefile but if I run the same procedure manually it works? Do you maybe have any ideas about why the behavior is not consistent?

Need Help starting DSE Graph

When I type in the command "dse gremlin-console", the system tries to start but then I get the following error:
Exception in thread "main" java.awt.AWTError: Assistive Technology not found: org.GNOME.Accessibility.AtkWrapper
Do I need to update my Java version?
This is a known issue with Docker images that should eventually be fixed.
The linked issue has 2 workarounds:
Install openjdk-8-jdk into the Docker image after it has started (see the sketch below)
Disable assistive technologies for OpenJDK with something like this:
docker-compose exec --user root dse bash -l \
-c "sed -i -e '/^assistive_technologies=/s/^/#/' /etc/java-*-openjdk/accessibility.properties"

Does drone.io support reusing a Docker container for the build?

I have set up drone.io locally and created a .drone.yml for the CI build. But I found that drone removes the Docker container after finishing the build. Does it support reusing the Docker container? I am working on a Gradle project and the initial build takes a long time to download the Java dependencies.
UPDATE 1
I used the command below to set the admin user when running the drone-server container.
docker run -d \
-e DRONE_GITHUB=true \
-e DRONE_GITHUB_CLIENT="xxxx" \
-e DRONE_GITHUB_SECRET="xxxx" \
-e DRONE_SECRET="xxxx" \
-e DRONE_OPEN=true \
-e DRONE_DATABASE_DRIVER=mysql \
-e DRONE_DATABASE_DATASOURCE="root:root@tcp(mysql:3306)/drone?parseTime=true" \
-e DRONE_ADMIN="joeyzhao0113" \
--restart=always \
--name=drone-server \
--link=mysql \
drone/drone:0.5
After doing this, I log in to the drone server as user joeyzhao0113 but fail to enable the Trusted flag on the settings page. The popup message dialog reports that the setting was saved successfully (see the screenshot below), but the flag always remains disabled.
No, it is not possible to re-use a Docker container for your Drone build. Build containers are ephemeral and are destroyed at the end of every build.
That being said, it doesn't mean your problem cannot be solved.
I think a better way to phrase this question would be "how do I prevent my builds from having to re-download dependencies"? There are two solutions to this problem.
Option 1, Cache Plugin
The first and recommended solution is to use a plugin to cache and restore your dependencies. Cache plugins such as the volume cache and s3 cache are community-contributed plugins.
pipeline:
  # restores the cache from a local volume
  restore-cache:
    image: drillster/drone-volume-cache
    restore: true
    mount: [ /drone/.gradle, /drone/.m2 ]
    volumes:
      - /tmp/cache:/cache
  build:
    image: maven
    environment:
      - M2_HOME=/drone/.m2
      - MAVEN_HOME=/drone/.m2
      - GRADLE_USER_HOME=/drone/.gradle
    commands:
      - mvn install
      - mvn package
  # rebuild the cache in case new dependencies were
  # downloaded during your build
  rebuild-cache:
    image: drillster/drone-volume-cache
    rebuild: true
    mount: [ /drone/.gradle, /drone/.m2 ]
    volumes:
      - /tmp/cache:/cache
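Since the question is about a Gradle project, the build step in the pipeline above could use a Gradle image instead of Maven; a hypothetical variant (the image tag and task name are illustrative, not from the original answer):
  build:
    image: gradle:jdk8
    environment:
      - GRADLE_USER_HOME=/drone/.gradle
    commands:
      - gradle build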
Option 2, Custom Image
The second solution is to create a Docker image with your dependencies, publish to DockerHub, and use this as your build image in your .drone.yml file.
pipeline:
  build:
    image: some-image-with-all-my-dependencies
    commands:
      - mvn package
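A hypothetical Dockerfile for such an image (the base image and names are illustrative) could pre-fetch the Maven dependencies at image build time:
# Dockerfile for some-image-with-all-my-dependencies
FROM maven:3-jdk-8
WORKDIR /build
# copy only the POM and resolve dependencies so they are baked into the image
COPY pom.xml .
RUN mvn -B dependency:go-offline
Build it once, push it to DockerHub, and reference it as the build image above.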

getting error ERROR 1000: Error during parsing. Lexical error

I wrote a Pig script as follows:
my_script.pig
bag_1 = LOAD '$INPUT' USING PigStorage('|') AS (LN_NR:chararray,ET_NR:chararray,ET_ST_DT:chararray,ED_DT:chararray,PI_ID:chararray);
bag_2 = LIMIT bag_1 $SIZE;
DUMP bag_2;
and made one param file:
my_param.txt:
INPUT = hdfs://0.0.0.0:8020/user/training/example
SIZE = 10
Now I am calling the script with this command:
pig my_param.txt my_script.pig
but I am getting this error:
ERROR 1000: Error during parsing. Lexical error
Any suggestions?
I think you need to provide the parameter file using the -m or -param_file option. Refer to the help documentation below.
$ pig --help
Apache Pig version 0.11.0-cdh4.7.1 (rexported)
compiled Nov 18 2014, 09:08:23
USAGE: Pig [options] [-] : Run interactively in grunt shell.
Pig [options] -e[xecute] cmd [cmd ...] : Run cmd(s).
Pig [options] [-f[ile]] file : Run cmds found in file.
options include:
-4, -log4jconf - Log4j configuration file, overrides log conf
-b, -brief - Brief logging (no timestamps)
-c, -check - Syntax check
-d, -debug - Debug level, INFO is default
-e, -execute - Commands to execute (within quotes)
-f, -file - Path to the script to execute
-g, -embedded - ScriptEngine classname or keyword for the ScriptEngine
-h, -help - Display this message. You can specify topic to get help for that topic.
properties is the only topic currently supported: -h properties.
-i, -version - Display version information
-l, -logfile - Path to client side log file; default is current working directory.
-m, -param_file - Path to the parameter file
-p, -param - Key value pair of the form param=val
-r, -dryrun - Produces script with substituted parameters. Script is not executed.
-t, -optimizer_off - Turn optimizations off. The following values are supported:
SplitFilter - Split filter conditions
PushUpFilter - Filter as early as possible
MergeFilter - Merge filter conditions
PushDownForeachFlatten - Join or explode as late as possible
LimitOptimizer - Limit as early as possible
ColumnMapKeyPrune - Remove unused data
AddForEach - Add ForEach to remove unneeded columns
MergeForEach - Merge adjacent ForEach
GroupByConstParallelSetter - Force parallel 1 for "group all" statement
All - Disable all optimizations
All optimizations listed here are enabled by default. Optimization values are case insensitive.
-v, -verbose - Print all error messages to screen
-w, -warning - Turn warning logging on; also turns warning aggregation off
-x, -exectype - Set execution mode: local|mapreduce, default is mapreduce.
-F, -stop_on_failure - Aborts execution on the first failed job; default is off
-M, -no_multiquery - Turn multiquery optimization off; default is on
-P, -propertyFile - Path to property file
$
You are not using the command correctly.
To use a parameter file, pass -param_file in the command:
pig -param_file <file> pig_script.pig
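With the files from the question, that would be:
pig -param_file my_param.txt my_script.pig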
You can find more details in the Parameter Substitution documentation.

SGE Command Not Found, Undefined Variable

I'm attempting to set up a new compute cluster, and I'm currently experiencing errors when using the qsub command in SGE. Here's a simple experiment that shows the problem:
test.sh
#!/usr/bin/zsh
test="hello"
echo "${test}"
test.sh.eXX
test=hello: Command not found.
test: Undefined variable.
test.sh.oXX
Warning: no access to tty (Bad file descriptor).
Thus no job control in this shell.
If I run the script on the head node (sh test.sh), the output is correct. I submit the job to SGE by typing "qsub test.sh".
If I submit the exact same script job in the same way on an established compute cluster like HPC, it works perfectly as expected. What setting could be causing this problem?
Thanks for any help on this matter.
Most likely the queues on your cluster are set to posix_compliant mode with a default shell of /bin/csh. The posix_compliant setting means your #! line is ignored. You can either change the queues to unix_behavior or specify the required shell using qsub's -S option.
#$ -S /bin/sh
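For example, since test.sh above uses zsh, a sketch of the command-line form (assuming zsh is installed at /usr/bin/zsh on the execution hosts, as in the script's shebang):
qsub -S /usr/bin/zsh test.sh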