Is there a way to run a loop through the output of SQL transformation? - hana

I have run the following in SQL transform in Data Services
select stag_table from op_schem.dataset where tx=23
This gave me 40 tables. Is there a way I can run a select * from <each stag_table> against each of these 40 tables and load the results into flat files?

Sure there is.
You write a SELECT statement for each of those tables and store the result into a file.
This question seems to be about the same problem as your other questions, where you decided not to provide feedback on the answers you received.
If you cannot be bothered to write 40 SELECTs based on the same simple template SELECT * FROM <table_name>, why not do what programmers do and automate it?
Just write a program that substitutes the table name for each of the 40 tables. That will take a few minutes and will surely be more effective than asking the same question in a different form once more.
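As a sketch of that automation (the table names below are placeholders; in practice they would come from the driver query against op_schem.dataset):

```python
# Sketch: stamp each staging table name into the same SELECT template,
# producing one export statement per table.
TEMPLATE = "SELECT * FROM {table};"

def build_export_statements(tables):
    """Return one SELECT statement per staging table name."""
    return [TEMPLATE.format(table=t) for t in tables]

# Pretend these came from:
#   SELECT stag_table FROM op_schem.dataset WHERE tx = 23
staging_tables = ["STG_ORDERS", "STG_ITEMS", "STG_CUSTOMERS"]

for stmt in build_export_statements(staging_tables):
    print(stmt)
```

The generated statements can then be pasted into the tool, or each result set written out to its own flat file.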


JOIN with a dataset

I am very new to Superset and SQL in general; please excuse my poor language as well.
General question: How do I use an existing superset dataset in a sql query?
Case: I am trying to create a map based on German postal codes. Therefore I need to join that table with a translation table that maps German postal codes to JSON coordinates. The translation table is in a different database than the German postal codes. I keep trying to JOIN the two together, but it does not work. I assume you can only work with data from a single database at once. Is it possible to create datasets with the needed data and reuse those datasets in a SQL query? I tried this, but I don't know how to access them. When using data in a database I would write:
Select * from database.table
To access a superset dataset in my query:
Select * from dataset (as it is named in the Superset dataset list)
which does not work at all.
I am desperately trying to solve this problem but I am just not able to.
Thanks for your help in advance.
In Superset's SQL Lab, you can run pretty much any valid SQL query that your database accepts. The query is more or less sent straight to your database, and the results are displayed in the results panel. So you can run JOIN queries in SQL Lab, for example.
If you want to visualize data from the results of a SQL query, hit the "Explore" button after running the query. You'll then be asked to publish the query you wrote and ran as a Virtual Dataset. Finally, you'll be taken to Explore, the no-code chart builder, to visualize your data.
I wrote a bit more about the semantic layer in Superset here, if you'd like to learn more: https://preset.io/blog/understanding-superset-semantic-layer/
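To illustrate the kind of JOIN SQL Lab ultimately sends to the database, here is a minimal sketch using SQLite with made-up table and column names; note that both tables must live in the same database connection for a plain JOIN like this to work:

```python
import sqlite3

# Hypothetical schema: a postal-code table and a translation table mapping
# postal codes to GeoJSON coordinates, both in the SAME database.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE postal_codes (plz TEXT, city TEXT);
CREATE TABLE plz_coordinates (plz TEXT, geojson TEXT);
INSERT INTO postal_codes VALUES ('10115', 'Berlin');
INSERT INTO plz_coordinates VALUES ('10115', '{"type": "Point"}');
""")

# The JOIN that SQL Lab would forward to the database.
rows = con.execute("""
    SELECT p.plz, p.city, c.geojson
    FROM postal_codes AS p
    JOIN plz_coordinates AS c ON c.plz = p.plz
""").fetchall()
print(rows)  # one joined row per matching postal code
```

If the two tables really live in different databases, the join has to happen either after copying one table across, or in the application layer.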

SQL selecting results from multiple tables

Hello, I want to display results from unrelated tables where a text string exists in a column that is common to all tables in the database.
I can get the desired result with this:
SELECT *
FROM Table1
WHERE Title LIKE '%Text%'
UNION
SELECT *
FROM Table2
WHERE Title LIKE '%Text%'
However, my question is: is there a more efficient way to go about this, as I need to search dozens of tables? Thanks for any help you can give!
PS: the system I am using supports most dialects, but I would prefer to keep it simple with SQL Server as that is what I am used to.
There is a stored procedure script you can find online called SearchAllTables (http://vyaskn.tripod.com/search_all_columns_in_all_tables.htm).
When you call it, pass in the string; it will return the matching tables and columns as well as the full string.
You can modify it to work with other data types quite easily. It's a fantastic resource for tasks exactly like yours.
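Alternatively, the repetitive UNION query can be generated instead of hand-written. A sketch (the table and column names are taken from the question; UNION ALL avoids the duplicate-elimination cost of UNION, and the generated text should only ever be built from trusted table names, never user input):

```python
# Sketch: build the long UNION ALL search query programmatically
# instead of writing one SELECT per table by hand.
def build_search_query(tables, column, text):
    """Return a single query searching `column` for `text` in every table,
    tagging each row with the table it came from."""
    parts = [
        f"SELECT '{t}' AS source_table, * FROM {t} "
        f"WHERE {column} LIKE '%{text}%'"
        for t in tables
    ]
    return "\nUNION ALL\n".join(parts)

print(build_search_query(["Table1", "Table2"], "Title", "Text"))
```

The extra source_table column tells you which table each matching row came from, which a plain UNION of `SELECT *` does not.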

using MS SQL I need to select into a table while casting a whole load of strings to ints! can it be done?

Basically, I am the new IT guy; the old guy left a right mess for me! We have an MS Access DB which stores the answers to an online questionnaire. This particular DB has about 45,000 records, and each questionnaire has 220 questions. The old guy, in his wisdom, decided to store the answers as text even though they are 0-5 integers!
Anyway, we now need to add a load of new questions to the questionnaire, taking it up to 240 questions. The 255-field limit in Access, plus the 30-ish columns of biographical data also stored in this database, means that I need to split the DB.
So, I have managed to get all the bio info quite happily into a new table with:
SELECT id,[all bio column names] INTO resultsBioData FROM results;
This didn't cause too much of a problem as I am not casting anything, but for the question data I want to convert it all to integers. At the moment I have:
SELECT id,CInt(q1) AS nq1.......CInt(q220) AS nq220 INTO resultsItemData FROM results;
This seems to work fine for about 400 records but then just stops. I thought it may be because it hit something it can't convert to an integer, so I wrote a little Java program that deleted any record where any of the 220 answers wasn't 0, 1, 2, 3, 4 or 5, and it still gives up around the 400 mark (never the same record though!).
Anyone got any ideas? I am doing this on my test system at the moment and would really like something robust before I do it to our live system!
Sorry for the long-winded question, but it's doing my head in!
I'm unsure whether you're talking about doing the data transformation in Access or SQL Server. Either way, since you're redesigning the schema, now is the time to consider whether you really want resultsItemData table to include 200+ fields, from nq1 through nq220 (or ultimately nq240). And any future question additions would require changing the table structure again.
The rule of thumb is "columns are expensive; rows are cheap". That applies whether the table is in Access or SQL Server.
Consider one row per id/question combination.
id  q_number  answer
1   nq1       3
1   nq2       1
I don't understand why your current approach crashes at 400 rows. I wouldn't even worry about that, though, until you are sure you have the optimal table design.
Edit: Since you're stuck with the approach you described, I wonder if it might work with an "append" query instead of a "make table" query. Create resultsItemData table structure and append to it with a query which transforms the qx values to numeric.
INSERT INTO resultsItemData (id, nq1, nq2, ... nq220)
SELECT id, CInt(q1), CInt(q2), ... CInt(q220) FROM results;
Try this solution:
select * into #tmp from bad_table
truncate table bad_table
alter table bad_table alter column silly_column int
insert bad_table (silly_column, other_columns)
select cast(silly_column as int), other_columns
from #tmp
drop table #tmp
Reference: Change type of a column with numbers from varchar to int
In the end I just wrote a small Java program that created the new table and went through each record individually, casting the fields to integers. It takes about an hour and a half to do the whole thing, though, so I am still after a better solution before I do this on the live system.

Executing multiple queries in a single DB Call - SQL

I'm creating PDF reports with data retrieved from a database (Oracle). For each report I make a DB call. I want to optimize the calls for multiple reports (there can be a maximum of 500 reports). In the current scenario, I am making 500 DB calls and this results in a server timeout.
I'm looking for solutions and answers:
1. Can I pass a list of data as input to a query? (The query requires 2 inputs.)
2. The entire data retrieval should happen in 1 DB call, not 500 separate calls.
3. The response should be the accumulated result of all 500 inputs.
Please suggest ways to solve the issue, or directions towards a solution.
It is a Java-based system. The DB call is made from a web app. DB: Oracle.
Thanks!!
If you want to get the data for an arbitrary number of "reports" in a single database call, then I would imagine you need to be calling a stored procedure that returns a very large nugget of XML or JSON text that you can then parse and display in your application. Oracle has built-in functions for constructing XML, and JSON is pretty easy to structure yourself (though I believe a 3rd party PL/SQL JSON package may be available).
There are a few ways of combining results from multiple queries:
UNION ALL -- lets you literally combine results: query1 UNION ALL query2
Make one more general query -- this is the best answer if it can be done.
Join subqueries and print the data horizontally if you can join them: select a.*, b.* from (querya) a join (queryb) b on a.id = b.id
There are probably other ways as well, such as a stored procedure.
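As a concrete sketch of "one more general query", here is a single call that fetches the data for many reports at once using a bind-parameter IN list (SQLite here for a self-contained demo; the schema and ids are made up, and note that Oracle caps an IN list at 1000 entries, so 500 report ids fit comfortably):

```python
import sqlite3

# Hypothetical report table: one call fetches rows for ALL requested reports.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE report_data (report_id INTEGER, metric TEXT, value INTEGER);
INSERT INTO report_data VALUES (1, 'total', 10), (2, 'total', 20), (3, 'total', 30);
""")

report_ids = [1, 3]  # could be up to 500
placeholders = ",".join("?" * len(report_ids))
sql = (f"SELECT report_id, metric, value FROM report_data "
       f"WHERE report_id IN ({placeholders}) ORDER BY report_id")
rows = con.execute(sql, report_ids).fetchall()
print(rows)
# -> [(1, 'total', 10), (3, 'total', 30)]
```

The application then splits the accumulated result by report_id to build each individual PDF, turning 500 round trips into one.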

Is there a way to parser a SQL query to pull out the column names and table names?

I have 150+ SQL queries in separate text files that I need to analyze (just the actual SQL code, not the data results) in order to identify all column names and table names used, preferably with the number of times each column and table appears. Writing a brand-new SQL parsing program is trickier than it seems, with nested SELECT statements and the like.
There has to be a program, or code out there that does this (or something close to this), but I have not found it.
I actually ended up using a tool called SQL Pretty Printer. You can purchase a desktop version, but I just used the free online application: copy the query into the text box, set the Output to "List DB Object", and click the Format SQL button.
It worked great on around 150 different (and complex) SQL queries.
How about using the Execution Plan report in MS SQL Server? You can save it to an XML file, which can then be parsed.
You may want to looking to something like this:
JSqlParser
which uses JavaCC to parse and return the query string as an object graph. I've never used it, so I can't vouch for its quality.
If your application needs to do it, and has access to a database that has the tables etc., you could run something like:
SELECT TOP 0 * FROM MY_TABLE
Using ADO.NET, this would give you a DataTable instance from which you could query the columns and their attributes.
Go with ANTLR: write a grammar and follow the steps given on the ANTLR site, and eventually you will get an AST (abstract syntax tree). For a given query, you can traverse the tree and collect every table and column it references.
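If an approximate count is enough and a full parser feels like overkill, a rough regex pass can tally names that follow FROM/JOIN. This is a sketch only: it misses subqueries used as tables, quoted or bracketed identifiers, CTE names, and commas in FROM lists, which is exactly why tools like JSqlParser or an ANTLR grammar exist:

```python
import re
from collections import Counter

# Rough sketch: count identifiers that appear right after FROM or JOIN.
def count_table_names(sql):
    names = re.findall(r"\b(?:FROM|JOIN)\s+([A-Za-z_][\w.]*)",
                       sql, re.IGNORECASE)
    return Counter(n.lower() for n in names)

query = ("SELECT a.id FROM orders a "
         "JOIN customers c ON c.id = a.cid "
         "JOIN orders o2 ON 1=1")
print(count_table_names(query))
# -> Counter({'orders': 2, 'customers': 1})
```

Run over all 150 files and merged into one Counter, this gives a first-cut table frequency list that a real parser can then refine.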
In DB2 you can append your query with something such as the following, but 1 is the minimum you can specify; it will throw an error if you try to specify 0:
FETCH FIRST 1 ROW ONLY