Adding two identical tables in QlikView and preventing QlikView from merging/discarding them - qlikview

I have a problem adding two tables in QlikView. I need to load multiple copies of a table from Excel data in QlikView to avoid a circular reference. I have tried adding the table multiple times, but QlikView always merges the tables, or even discards one, because they contain the same data.
How can I add two identical tables in QlikView?

You have two options:
NoConcatenate - using this prefix before loading a table forces QV/QS not to concatenate it with the other table(s) having the same set of columns. This will keep the field names as they are, and you will get a synthetic key between the tables (if you don't drop/rename them by the end of the script).
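For example, a minimal NoConcatenate sketch (MyTable, MyCSV.csv and the field names are placeholders) that loads a renamed copy of a table, so no synthetic key is created:

```
MyTable:
Load
Id,
Value
From
MyCSV.csv (txt)
;

MyTableCopy:
NoConcatenate
Load
Id as Id2, // renamed, so no synthetic key is built between the two tables
Value as Value2
Resident MyTable
;
```

Without the renames, both tables would share Id and Value and QlikView would build a synthetic key on them.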
Qualify - this prefix will change the field names of the following load(s) to the format TableName.FieldName.
For example:
Qualify *;
MyTable:
Load
Id,
Value
From
MyCSV.csv (txt)
;
UnQualify *;
This will result in MyTable with 2 fields - MyTable.Id and MyTable.Value.
When using Qualify, don't forget to call UnQualify once you no longer want tables to be qualified!
You can mix qualified and non-qualified fields using:
Qualify *;
UnQualify Id;
MyTable:
Load
Id,
Value
From
MyCSV.csv (txt)
;
UnQualify *;
This will result in MyTable with 2 fields - Id and MyTable.Value.

If you do a NoConcatenate load (instead of just Load) then it will load the data in twice. However, watch out: you'll end up with one big synthetic key if you do that without making sure the field names are different in the two tables. Either use Qualify or change the field names in one or both tables so they differ.

Related

BigQuery loop to select values from dynamic table_names registered in another table

I'm looking for a way to extract data from multiple tables and insert it into another table automatically by running a single script. I need to query many tables, so I want a loop that selects from those table names dynamically.
I wonder if I could have a table of table names, and execute a loop like:
foreach(i in table_names)
insert into aggregated_table select * from table_names[i]
end
Below is for BigQuery Standard SQL
#standardSQL
SELECT * FROM `project.dataset1.*`
WHERE _TABLE_SUFFIX IN (SELECT table_name FROM `project.dataset2.list`)
This approach will work if the conditions below are met:
all tables to be processed from the list have exactly the same schema
one of those tables is the most recent table - this table will define the schema used for all the rest of the tables in the list
to meet the above bullet - ideally the list should be hosted in another dataset
Obviously, you can add INSERT INTO ... to insert the result into whatever destination you need
Please note: Filters on _TABLE_SUFFIX that include subqueries cannot be used to limit the number of tables scanned for a wildcard table, so make sure you are using the longest possible prefix - for example:
#standardSQL
SELECT * FROM `project.dataset1.source_table_*`
WHERE _TABLE_SUFFIX IN (SELECT table_name FROM `project.dataset2.list`)
So, again - even though you will only select data from specific tables (set in project.dataset2.list), the cost will be for scanning all tables that match the project.dataset1.source_table_* wildcard.
While the above is purely BigQuery SQL, you can use any client of your choice to script exactly the logic you need - read the table names from the list table, then select and insert in a loop. This option is the simplest and most optimal, I think.
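The read-names-then-loop pattern from the last paragraph can be sketched with any client; below is a minimal illustration using Python's sqlite3 as a stand-in (table and column names are made up for the sketch, and a real BigQuery client library would replace the sqlite3 calls):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Source tables with identical schemas, plus a list table holding their names.
cur.executescript("""
    CREATE TABLE source_a (id INTEGER, value TEXT);
    CREATE TABLE source_b (id INTEGER, value TEXT);
    INSERT INTO source_a VALUES (1, 'x'), (2, 'y');
    INSERT INTO source_b VALUES (3, 'z');
    CREATE TABLE list (table_name TEXT);
    INSERT INTO list VALUES ('source_a'), ('source_b');
    CREATE TABLE aggregated_table (id INTEGER, value TEXT);
""")

# Read the table names from the list table, then insert-select in a loop.
table_names = [row[0] for row in cur.execute("SELECT table_name FROM list")]
for name in table_names:
    # Identifiers cannot be bound as query parameters, so validate the name
    # before interpolating it into the statement.
    if not name.isidentifier():
        raise ValueError(f"suspicious table name: {name!r}")
    cur.execute(f"INSERT INTO aggregated_table SELECT * FROM {name}")

conn.commit()
print(cur.execute("SELECT COUNT(*) FROM aggregated_table").fetchone()[0])  # 3
```

The same loop shape works with any driver; only the connection and execute calls change.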

Filter Rows - Pentaho

We are getting inputs from two different tables and passing them to the Filter Rows step.
But we are getting the below error.
The DATE_ADDED table has only one column, DATE_ADDED, and similarly the TODAYS_DATE table has a single column, TODAYS_DATE.
The condition given in the filter is DATE_ADDED < TODAYS_DATE.
The transformation is
Can someone tell me where I am making a mistake?
It won't work like this. You expect a join of two streams (like a SQL JOIN of two tables) but actually you will get a union (like a SQL UNION).
When two streams meet at a step they must have identical columns - names, order and types - and the result will be the union of both streams with the same structure as the originals.
When you combine streams with different structures - different column names in your case - you will get unpredictable column names and actually only one column - nothing to compare against.
To do what you need, use the Merge Join step (and do not forget to sort both streams on the joining key).
Both the column names and types should be identical if you want to merge the columns in a single step; right-click on both steps and view the output fields to verify the data types.
If data type issues arise, or you want to rename the columns, you can place a Select Values step after each table input step, set the Date type (in your case) in the Meta-data tab, and rename the fields there as well.
Hope this helps... :)

Swapping columns in a table to match formatting of another table prior to row insertion

I want to swap columns in table_1 within Visual FoxPro 9 before inserting its rows into table_2, so as to avoid data losses caused by data type variations. I tried these two options based on other solutions on Stack Overflow, but I get syntax error messages for both commands. The name field is of datatype character(5) and it needs to be after the subdir field.
ALTER table "f:\csp" modify COLUMN name character(5) after subdir
ALTER table "f:\csp" change COLUMN name name character(5) after subdir
I attempted these commands based on solutions here:
How to move columns in a MySQL table?
You never need to change the column order, and you should never rely on column order to do something.
For inserting into another table from this one, you can simply select the columns in the order you desire (and the column names do not even need to be the same in the case of INSERT ... SELECT ...). i.e.:
insert into table_2 (subdir, name) ;
select subdir, name from table_1
Another way is to use the xBase commands like:
select table_2
append from table_1
In the latter case, VFP does the match on column names.
All in all, relying on column order is dangerous. If you really want to do it, you still can, in a number of ways. One of them is to select all data into a temp table, recreate the table in the order you want, and fill it back from the temp table (this might not be as easy as it sounds if there are existing dependencies such as referential integrity - you also need to recreate the indexes).
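A rough sketch of that temp-table approach (table names, field names and widths are placeholders; indexes, relations and referential integrity need extra handling in a real database):

```
* Copy the data aside, then rebuild the table with the desired column order.
SELECT * FROM table_1 INTO TABLE temp_copy
USE IN temp_copy          && close the temp table so APPEND FROM can read it
USE IN table_1
ERASE table_1.dbf
CREATE TABLE table_1 (subdir C(20), name C(5))   && desired column order
APPEND FROM temp_copy     && matches columns by name
USE
ERASE temp_copy.dbf
```

APPEND FROM matches on field names, so the physical order of the new table is all that changes.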

Copying contents of table A to table B (one more column than table A)

In our application, we have two sets of tables: one set of working tables (with the data that is currently analyzed) and another set of archive tables (with all data that has ever been analyzed; same table names but with an a_ prefix). The structure of the tables is the same, except that the archive tables have an extra column run_id to distinguish between different sets of data.
Currently, we have a SQL script that copies the contents over with statements similar to this:
insert into a_deals (run_id, deal_id, <more columns>)
select maxrun, deal_id, <more columns>
from deals,
(select max(run_id) maxrun from batch_runs);
This works fine, but whenever we add a new column to the table, we also have to modify the script. Is there a better way to do this that stays stable when we add new columns? (Of course the structures have to match, but we'd like not to have to change the script as well.)
FWIW, we're using Oracle as our RDBMS.
Following up on the first answer, you could build a PL/SQL procedure that reads ALL_TAB_COLUMNS to build the insert statement, then runs it with EXECUTE IMMEDIATE. Not too hard, but be careful about what input parameters you allow (table_name and the like) and who can run it, since it could provide a great opportunity for SQL injection.
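A rough sketch of such a procedure (the a_ prefix and batch_runs follow the question; guarding the name with DBMS_ASSERT and building the column list with LISTAGG is one possible approach, not the only one):

```sql
CREATE OR REPLACE PROCEDURE archive_table(p_table IN VARCHAR2) AS
  v_src  VARCHAR2(128);
  v_cols VARCHAR2(4000);
BEGIN
  -- Validate the input to reduce the SQL injection surface.
  v_src := DBMS_ASSERT.SIMPLE_SQL_NAME(p_table);

  -- Build the column list from the dictionary, in declared order.
  SELECT LISTAGG(column_name, ', ') WITHIN GROUP (ORDER BY column_id)
    INTO v_cols
    FROM user_tab_columns
   WHERE table_name = UPPER(v_src);

  EXECUTE IMMEDIATE
    'INSERT INTO a_' || v_src || ' (run_id, ' || v_cols || ') ' ||
    'SELECT (SELECT MAX(run_id) FROM batch_runs), ' || v_cols ||
    ' FROM ' || v_src;
END;
/
```

Because the column list is read from the dictionary at run time, adding a column to both tables needs no script change.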
If the 2 tables have the SAME columns in the same order (column_id from all_tab_columns) except for this run_id in front, then you can do something like:
insert into a_deals
select (select max(run_id) from batch_runs), d.*
from deals d
where ...;
This is a lazy approach IMO, and you'll want to ensure that the columns are in the same position in both tables as part of this script (inspect ALL_TAB_COLUMNS). Two VARCHAR2 fields that are switched will lead to data being inserted into the wrong fields.

Change the order of database columns

I want to change the order of the columns. E.g. name is the first column of my table and there are 10 other columns in the table; I want to insert a new column in the 2nd position, after the name column.
How is this possible?
1 - It's not possible without rebuilding the table, as Martin rightly points out.
2 - It's a good practice anyways to specify what fields you want and in what order in your SELECT statements as n8wrl points out.
3 - If you really really need a fixed order on your fields, could create a view that selects the fields you want in the order you want.
Like the rows in the table, there is no meaning to the order of the columns. In fact, it is best to specify the order you want the columns in your select statements rather than using select *, so you can 'insert' new columns wherever you want just by writing your SELECT statements accordingly.
It's possible to change the order. In some instances it really matters - I have personal experience of this.
Anyway, this query works fine (in MySQL):
ALTER TABLE user MODIFY Name VARCHAR(150) AFTER address;
You can achieve this by following these steps:
remove all foreign keys and primary key of the original table.
rename the original table.
using CTAS, recreate the original table with the columns in the order you want (selecting from the renamed table).
drop the old table.
apply all constraints back to the original table.
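The steps above can be sketched as follows (table and column names are placeholders; constraint and index DDL is omitted, and the new column is added as NULL in the desired position):

```sql
-- 1. Drop foreign keys and the primary key on the original table first.
-- 2. Move the original table out of the way.
ALTER TABLE my_table RENAME TO my_table_old;

-- 3. Recreate the table via CTAS with the columns in the desired order.
CREATE TABLE my_table AS
SELECT name,
       CAST(NULL AS VARCHAR2(50)) AS new_col,  -- new column in 2nd position
       col3, col4
FROM my_table_old;

-- 4. Drop the old table.
DROP TABLE my_table_old;

-- 5. Re-apply the primary key, foreign keys and indexes on my_table.
```

Note that CTAS does not carry over constraints, defaults or indexes, which is why step 5 is required.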