Beginner here, using a CLI (Presto/Trino? I'm not sure of the right term). It looks to be a command line, and we are using Hive.
I can run SELECT and CREATE TABLE statements. Now I'm trying to run multiple queries at once, so I created a SQL file and uploaded it to the Hive folder structure, thinking I could execute all of the queries at once instead of going one by one.
How do I initiate the process of running a SQL query from a file?
I tried --execute file user/hivefile.sql > result but am getting nowhere.
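For reference, the Hive CLI and Beeline both accept a script file with the -f flag; a minimal sketch (the local path and connection URL are illustrative, not from the question):

# Hive CLI: run every statement in the file, capture the output
hive -f /home/user/hivefile.sql > result.txt

# Or via Beeline against HiveServer2 (URL is an assumption)
beeline -u jdbc:hive2://localhost:10000 -f /home/user/hivefile.sql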
Related
I'm using Talend 6.2.1 and trying to run multiple Hive queries in tHiveRow, but it simply won't allow me to do so when I separate my queries with a ";".
I have tried tForEach, but it has a limitation: a value cannot exceed 130 characters.
So I turned to tFixedFlowInput, but iterating over multiple queries does not seem possible there either.
I followed this: Running multiple hive queries using tHiveRow component in Talend
Can anybody help me achieve my objective?
This can be achieved by saving the Hive script in a text file. Now read the text file with the row delimiter set to ";" and the field delimiter set to something that is not used anywhere in the script (a cedilla, or "$"). The schema of this file will have only one column (say, query).
Now connect tFileInputDelimited --row1--> tHiveRow. In the tHiveRow query box, write row1.query.
That's it; it has worked for me. Try it.
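For example, the text file might look like this (the statements themselves are illustrative); with ";" as the row delimiter, each statement becomes one row1.query value:

CREATE TABLE IF NOT EXISTS staging_orders (id INT, amount DOUBLE);
INSERT INTO TABLE orders SELECT * FROM staging_orders;
SELECT COUNT(*) FROM orders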
This is my first job-creation task as a SQL DBA. The first step of the job runs a query and sends the output to a .CSV file. As a last step, I need the job to execute the query from the .CSV file (the output of the first step).
I have Googled all possible combinations, but no luck.
Your question got lost somehow ...
Your last two comments make it a little clearer.
If I understand it correctly, you create a SQL script which restores all the logins, roles and users, their rights etc. into a newly created db.
If this created script is executable within a query window, you can easily execute it with EXECUTE (https://msdn.microsoft.com/de-de/library/ms188332(v=sql.120).aspx).
Another approach could be SQLCMD (http://blog.sqlauthority.com/2013/04/10/sql-server-enable-sqlcmd-mode-in-ssms-sql-in-sixty-seconds-048/).
If you need further help, please come back with more details: What does your "CSV" look like? What have you tried so far?
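To make the EXECUTE route concrete, a minimal T-SQL sketch (the file path is illustrative, and it assumes the .CSV contains one executable statement):

-- Read the whole file produced by step 1 into a variable,
-- then run its contents as dynamic SQL.
DECLARE @sql NVARCHAR(MAX);

SELECT @sql = BulkColumn
FROM OPENROWSET(BULK 'C:\jobs\step1_output.csv', SINGLE_CLOB) AS f;

EXEC (@sql);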
I am using Aqua Data Studio 7.0.39 for my database work.
I have 20 SQL files (all containing SQL statements, obviously).
I want to execute all of them rather than copy-pasting the contents of each one.
Is there any way in Aqua to do such a thing?
Note: I am using Sybase.
Thank you!!
I'm also not sure how to do this in Aqua, but it's very simple to create a batch/PowerShell script to execute .sql files.
You can use the SAP/Sybase isql utility to execute the files, and just create a loop to cover all the files you wish to execute.
Check my answer here for more information:
Running bulk of SQL Scripts in Sybase through batch
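A minimal batch sketch of that loop (the server name, login, and directory are assumptions, not from the question):

@echo off
rem Run every .sql file in the folder through Sybase isql,
rem writing each script's output next to it.
for %%f in (C:\scripts\*.sql) do (
    isql -S MYSERVER -U myuser -P mypassword -i "%%f" -o "%%f.out"
)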
In the latest versions of ADS there is an integrated shell named FluidShell where you can achieve what you are looking for. See an overview here: https://www.aquaclusters.com/app/home/project/public/aquadatastudio/wikibook/Documentation15/page/246/FluidShell
The command you are looking for is source
source
NAME
source - execute commands or SQL statements from a file
SYNOPSIS
source [OPTION...] FILE [ARGUMENT...]
source [OPTION...]
DESCRIPTION
Read and execute commands or SQL statements from FILE in the current shell environment.
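So, from the FluidShell prompt, running one of your files would look something like this (the path is illustrative):

source C:/scripts/tables.sql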
I have not used AquaFold before, so I can't tell you exactly. However, I have tackled a similar problem once before.
I once created a PowerShell script that opened an ODBC connection to my database and then executed stored procedures in a loop until the end of a file was reached.
I suggest having a text document with each line being the name of a stored procedure to run. Then, in your PowerShell script, read a line from the file and concatenate it into the call that executes the stored procedure. After each execution completes, delete that line from the text file and read the next line until the end of file (EOF) is reached.
Hope this helps. A rough sketch of the idea follows; if I have some time this morning I will try to turn it into a full working example for you.
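A minimal PowerShell sketch of that loop (the DSN, file path, and procedure names are assumptions; the delete-line-after-execution step is omitted for brevity):

# Open an ODBC connection and run each stored procedure
# listed (one name per line) in a text file.
$conn = New-Object System.Data.Odbc.OdbcConnection("DSN=MyDatabase")
$conn.Open()

foreach ($procName in Get-Content "C:\jobs\procs.txt") {
    $cmd = $conn.CreateCommand()
    $cmd.CommandText = "EXEC $procName"
    [void]$cmd.ExecuteNonQuery()
}

$conn.Close()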
I am new to Hadoop, and I have been given the task of coming up with plans and ideas for converting our existing DB2 stored procedures into Hadoop UDFs. Since Hive doesn't support stored procedures, I am stuck.
I did a POC for a simple SELECT statement and it worked:
The POC takes a given SQL file as input and first converts it into an intermediate JSON file.
It reads the JSON file; if any built-in function is found, it gets the equivalent Hive function from a DB2-to-Hive function lookup file.
It creates a Hive SQL file from the intermediate JSON file.
Currently this works for a simple SELECT with a WHERE clause, but I am unable to replicate it for procedures.
Please suggest an approach and/or examples for how to proceed.
Thanks in advance.
Suppose I have written a script, Table_ABC.sql, which creates table ABC, and I have created many such scripts, one for each required table. Now I want to write a script that calls all of these script files in sequence; basically, I want another script file, createTables.sql. MySQL provides the option to execute a script file from the "mysql" shell application, but I could not find a command like exec c:/myscripts/mytable.sql. Please tell me if there is a command that can be written inside a SQL script to call another script in recent MySQL versions, or an alternative way to do the same.
Thanks
You can use the source command. So your script will be something like:
use your_db;
source script/s1.sql;
source script/s2.sql;
-- so on, so forth
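You would then run the wrapper itself from the mysql client (the paths follow your example):

source c:/myscripts/createTables.sql;

or feed it in from the operating-system command line:

mysql your_db < c:/myscripts/createTables.sql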