I want to ask whether Liquibase supports working across instances, i.e. whether it can connect to multiple instances. If so, is a command like this correct?
.....\liquibase --changeLogFile=filename.sql update liquibase.command.url mysql://aws.ap-southeast-1.rds.amazonaws.com:3306/databasename
No, not in one command. To run Liquibase against multiple instances, you will have to run the command several times, once per instance, with the correct URL value each time.
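For a single instance, the URL is passed as a --url parameter containing a JDBC URL, together with the credentials. Something like this (username and password are placeholders, and the MySQL JDBC driver must be on the classpath):
liquibase --changeLogFile=filename.sql --url=jdbc:mysql://aws.ap-southeast-1.rds.amazonaws.com:3306/databasename --username=myuser --password=mypass update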
Here is an example you can use to create a loop that runs update (or other Liquibase commands) against different endpoints:
https://github.com/liquibase/liquibase-toolbox/tree/master/project_examples/multi_catalog_example
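For instance, a minimal shell sketch along the same lines (instance hostnames and credentials are placeholders):
#!/bin/bash
# Run the same changelog against each instance in turn.
for URL in "jdbc:mysql://instance1.example.com:3306/databasename" \
           "jdbc:mysql://instance2.example.com:3306/databasename"
do
    liquibase --changeLogFile=filename.sql --url="$URL" --username=myuser --password=mypass update
done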
I need to simulate the below:
1. SSH (only once)
2. Execute a command on all the rows in a csv file at once.
The number of rows in the CSV file is dynamic. If there are 10, the command needs to be executed over all 10 rows in parallel.
I am not sure about using the SSH Command sampler here: the SSH connection details and the command are entered in the same sampler. How do I separate the two, i.e. SSH only once and then execute the commands in parallel? Which JMeter components should I use here?
Note: increasing the number of threads is not an efficient option. Doing so creates many sessions, which in turn hangs the terminal. This option works fine up to 10 users; I am not sure whether there is a limit on the number of sessions.
Thanks for your support.
Regards,
Ajith
Why do you think that increasing the number of threads is not an efficient option?
I would suggest moving the "SSH (only once)" part to a setUp Thread Group and putting the "execute a command on all the rows in a CSV file at once" bit under the normal Thread Group.
If the number of rows in the CSV file is dynamic, you can make the number of threads dynamic as well using the __groovy() function, like:
${__groovy(new File('/path/to/your/file.csv').readLines().size,)}
If you want to execute all 10 requests (or whatever the number of lines is) at exactly the same moment, you can add a Synchronizing Timer.
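Putting it together, the Test Plan could look something like this (the CSV Data Set Config for feeding one CSV row to each thread is my assumption; you need some way to parameterize the command per row):
Test Plan
  setUp Thread Group (1 thread)
    SSH Command Sampler          <- log in / verify the connection once
  Thread Group (threads = ${__groovy(new File('/path/to/your/file.csv').readLines().size,)})
    CSV Data Set Config          <- one row per thread (assumption)
    Synchronizing Timer          <- release all threads at the same moment
    SSH Command Sampler          <- the command, parameterized from the CSV row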
I have a small SQL query (which uses the WbExport utility) that I need to run daily. I want to automate this task, so I am thinking of writing a batch file and scheduling it with the Windows Task Scheduler. Could someone tell me how to run a SQL query/.sql file from the command line in SQL Workbench/J?
This is documented in the manual.
You need to put the query you want to run in a .sql script and then pass the name of the script on the command line:
java -jar sqlworkbench.jar -script=run_export.sql -profile=...
or
SQLWorkbench64.exe -script=run_export.sql -profile=...
There are various ways to define the connection: either through the -profile parameter or by specifying the complete connection information. When using the -profile parameter, you need to make sure that the profiles are stored in the default location, or specify the location where the profiles are stored.
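If you don't want to rely on a stored profile, you can pass the complete connection information instead; as far as I remember the parameters look like this (driver class, URL and credentials here are placeholders - check the manual for the exact parameter names):
java -jar sqlworkbench.jar -script=run_export.sql -url=jdbc:postgresql://localhost/mydb -username=myuser -password=mypass -driver=org.postgresql.Driver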
I am trying to run some simple code to show the databases that I created previously on my hive2 server. (Note: this example gives the same results in both Python and Scala.)
If I log into a Hive shell and list my databases, I see a total of 3 databases.
When I start the Spark shell (2.3) via pyspark, I do the usual and add the following property to my SparkSession:
sqlContext.setConf("hive.metastore.uris","thrift://*****:9083")
and restart the SparkContext within my session.
If I run the following lines to see all the configs:
pyspark.conf.SparkConf().getAll()
spark.sparkContext._conf.getAll()
I can indeed see that the parameter has been added. I then start a new HiveContext:
hiveContext = pyspark.sql.HiveContext(sc)
But if I list my databases:
hiveContext.sql("SHOW DATABASES").show()
it does not show the same results as the hive shell.
I'm a bit lost. For some reason it looks like it is ignoring the config parameter, yet I am sure the one I'm using is my metastore, because the address I get from running:
hive -e "SET" | grep metastore.uris
is the same address, and it is also what I get if I run:
ses2 = spark.builder.master("local").appName("Hive_Test").config('hive.metastore.uris','thrift://******:9083').getOrCreate()
ses2.sql("SET").show()
Could it be a permissions issue, e.g. some tables are not set to be visible outside the hive shell/user?
Thanks
Managed to solve the issue: because of a communication issue, Hive was not hosted on that machine. I corrected the address in the code and everything works fine.
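For anyone who lands here with the same symptom: hive.metastore.uris has to point at the right host and be in place before the session that talks to the metastore is created. A minimal sketch of that pattern (the metastore host is a placeholder):
from pyspark.sql import SparkSession

# Set the metastore URI at session-creation time, before any context exists,
# and enable Hive support so SHOW DATABASES goes through the metastore.
spark = (SparkSession.builder
         .master("local")
         .appName("Hive_Test")
         .config("hive.metastore.uris", "thrift://your-metastore-host:9083")
         .enableHiveSupport()
         .getOrCreate())

spark.sql("SHOW DATABASES").show()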
I have a Hive query in Hue with one input variable, a string (for example a date like '20160117').
I'd like to execute this Hive query in Hue and pass it multiple values for that single variable.
Is it possible? If yes, how would you guys do it?
Oozie runs Directed Acyclic Graphs (DAGs), and acyclic comes down to: no loops, ever. But of course there are workarounds.
So, if you must run the same HQL script exactly N times with a different parameter value...
either copy/paste the Hive Action N times, in a chain, with a different param value (quick and dirty)
or build a Sub-Workflow with just the Hive action and call it N times, in a chain, with a different param value
On the other hand, if you must adapt dynamically the number and the value of executions, then you must work out the "loop" logic outside of Oozie proper...
for instance, start with a Shell action that creates an empty HQL file, adds N queries to it in a loop, then uploads the file to HDFS (see the sketch after this list); next, a Hive action executes that HQL script as-is (quick and dirty, but not ideal for exception handling)
or develop a Java program that connects to HiveServer2 via JDBC, submits a PreparedStatement with 1 bind variable, and executes the statement N times in a loop with different values of the variable.
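A minimal sketch of the Shell-action approach (file paths, parameter values and the query itself are all placeholders):
#!/bin/bash
# Build one HQL script containing the same query once per parameter value.
HQL=/tmp/generated_queries.hql
> "$HQL"
for DT in 20160117 20160118 20160119
do
    echo "SELECT * FROM my_table WHERE my_date = '${DT}';" >> "$HQL"
done
# Upload it so the subsequent Hive action can execute it as-is.
hdfs dfs -put -f "$HQL" /user/myuser/generated_queries.hql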
And maybe, someday, Hive will support some kind of procedural language similar to PL/SQL, T-SQL, PgSQL etc. and you will be able to pass a comma-separated list of values and process it inside of Hive.
I have a situation to handle. I have my Liquibase structured as per the recommended best practices, with the changelog XML structured as given below:
Master XML
--> Release XML
    --> Feature XML
        --> Changelog XML
In our application group, we run updateSQL to generate the consolidated SQL file and get the changes executed through our DBA group.
However, the real problem I have is executing a common set of SQL statements during every iteration, such as:
ALTER SESSION SET CURRENT_SCHEMA=APPLNSCHEMA
since the DBA executes the changes as SYSTEM while the target schema is APPLNSCHEMA.
How do I include such common repeating statements in the Liquibase changelog?
You would be able to write an extension (http://liquibase.org/extensions) that injects it in. If you need to do it per changeLog, it may work best to extend XMLChangeLogParser to automatically create and add a new changeSet that runs the needed SQL.
You could make a changeSet with the attribute 'runAlways' set to true and include the SQL.
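For example, something like this in one of the included changelogs (id and author are placeholders):
<changeSet id="set-current-schema" author="yourname" runAlways="true">
    <sql>ALTER SESSION SET CURRENT_SCHEMA=APPLNSCHEMA</sql>
</changeSet>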
As far as I know, there isn't a way to have Liquibase itself do this. I suggest that you wrap Liquibase with your favorite scripting language such that you run a command "generateSQLforThoseCrazyDBAs" that runs Liquibase and then prepends the SQL you need to the output created by Liquibase.
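A minimal sketch of such a wrapper (the changelog and output file names are placeholders):
#!/bin/bash
# Generate the consolidated SQL, then prepend the statement the DBAs need.
liquibase --changeLogFile=master.xml updateSQL > changes.sql
{ echo "ALTER SESSION SET CURRENT_SCHEMA=APPLNSCHEMA;"; cat changes.sql; } > changes_for_dba.sql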