QlikView - join tables and update values

I have QV report with table that looks like this:
+---------+--------+---------------+------+-------+
| HOST    | OBJECT | SPECIFICATION | COPY | LAST  |
+---------+--------+---------------+------+-------+
| host001 | obj01  | spec01        | c1   | 15:55 |
| host002 | obj02  | spec02        | c2   | 14:30 |
| host003 | -      | -             | -    | -     |
| host004 | -      | -             | -    | -     |
+---------+--------+---------------+------+-------+
Now I have another small table:
spec1
host1
host4
All I need is to connect these tables in the loading script in the following way:
The first row is a specification and all the others are hosts. If there is a host with the name from the second row of the second table (host1) and with the specification from the first row, then I need to copy all the other values from that host's row (host1) to the rows of the other hosts from the second table (host4), e.g.:
+---------+--------+---------------+------+-------+
| HOST    | OBJECT | SPECIFICATION | COPY | LAST  |
+---------+--------+---------------+------+-------+
| host001 | obj01  | spec01        | c1   | 15:55 |
| host002 | obj02  | spec02        | c2   | 14:30 |
| host003 | -      | -             | -    | -     |
| host004 | obj01  | spec01        | c1   | 15:55 |
+---------+--------+---------------+------+-------+
I have several tables like the second one and I need to connect all of them. Of course, there can be multiple rows with the same host, the same specification, etc. in the first table. The "-" sign is a null() value, and the layout of the second table can be changed.
I tried all of the JOINs, and now I'm trying to iterate over the whole table and compare rows, but I'm new to QV and I'm missing some SQL features such as UPDATE.
I appreciate all your help.

Here's a script; it's not perfect and there is probably a neater solution(!), but it works for your scenario.
I rearranged your "Copy Table" so that it has three columns:
HOST SPECIFICATION TARGET_HOST
You could then repeat rows for the additional hosts that you wish to copy to as follows:
HOST SPECIFICATION TARGET_HOST
host001 spec01 host004
host001 spec01 host003
The script (I included some dummy data so you can try it out):
Source_Data:
LOAD * INLINE [
    HOST, OBJECT, SPECIFICATION, COPY, LAST
    host001, obj01, spec01, c1, 15:55
    host002, obj02, spec02, c2, 14:30
    host003
    host004
];

Copy_Table:
LOAD * INLINE [
    HOST, SPECIFICATION, TARGET_HOST
    host001, spec01, host004
];

Link_Table:
NOCONCATENATE
LOAD
    HOST & SPECIFICATION as %key,
    TARGET_HOST
RESIDENT Copy_Table;

DROP TABLE Copy_Table;

LEFT JOIN (Link_Table)
LOAD
    HOST & SPECIFICATION as %key,
    HOST, OBJECT, SPECIFICATION, COPY, LAST;
LOAD
    *
RESIDENT Source_Data;

Complete_Data:
NOCONCATENATE LOAD
    TARGET_HOST as HOST,
    OBJECT, SPECIFICATION, COPY, LAST
RESIDENT Link_Table;

CONCATENATE (Complete_Data)
LOAD
    *
RESIDENT Source_Data
WHERE NOT Exists(TARGET_HOST, HOST & SPECIFICATION); // old condition: WHERE NOT Exists(TARGET_HOST, HOST);

DROP TABLES Source_Data, Link_Table;
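If it helps to see the intent of the script outside QlikView, the same copy-and-replace logic can be sketched in plain Python. This is only a sketch; the dict-based tables, the `apply_copy_rules` helper, and the field names are stand-ins for the QlikView data model, not part of the original script.

```python
# Sketch of the QlikView script's logic: for each (source_host, specification,
# target_host) rule, copy the source host's row values onto the target host,
# replacing the target's original (empty) row.

def apply_copy_rules(rows, rules):
    """rows: list of dicts with HOST/OBJECT/SPECIFICATION/COPY/LAST keys.
    rules: list of (source_host, specification, target_host) tuples."""
    # Index source rows by (HOST, SPECIFICATION), like the %key in the script.
    by_key = {(r["HOST"], r["SPECIFICATION"]): r for r in rows}
    targets = {t for _, _, t in rules}
    result = []
    for r in rows:
        # Drop the original empty row for hosts that will receive a copy,
        # mirroring the WHERE NOT Exists(...) condition.
        if r["HOST"] in targets and r["SPECIFICATION"] is None:
            continue
        result.append(dict(r))
    for src_host, spec, target in rules:
        src = by_key.get((src_host, spec))
        if src is not None:
            copied = dict(src)
            copied["HOST"] = target
            result.append(copied)
    return result

rows = [
    {"HOST": "host001", "OBJECT": "obj01", "SPECIFICATION": "spec01", "COPY": "c1", "LAST": "15:55"},
    {"HOST": "host002", "OBJECT": "obj02", "SPECIFICATION": "spec02", "COPY": "c2", "LAST": "14:30"},
    {"HOST": "host003", "OBJECT": None, "SPECIFICATION": None, "COPY": None, "LAST": None},
    {"HOST": "host004", "OBJECT": None, "SPECIFICATION": None, "COPY": None, "LAST": None},
]
rules = [("host001", "spec01", "host004")]
for r in apply_copy_rules(rows, rules):
    print(r["HOST"], r["OBJECT"], r["SPECIFICATION"], r["COPY"], r["LAST"])
```

Running this on the sample data leaves host001-host003 untouched and gives host004 a copy of host001's values, matching the expected output table above.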

Related

How to import Excel table with double headers into oracle database

I have this Excel table I am trying to transfer over to an Oracle database. The thing is that the table has headers that overlap, and I'm not sure if there is a way to import this nicely into an Oracle database.
+-----+-----------+-----------+
|     | 2018-01-01| 2018-01-02|
|Item +-----+-----+-----+-----+
|     | RMB | USD | RMB | USD |
+-----+-----+-----+-----+-----+
|     |     |     |     |     |
+-----+-----+-----+-----+-----+
|     |     |     |     |     |
+-----+-----+-----+-----+-----+
|     |     |     |     |     |
+-----+-----+-----+-----+-----+
|     |     |     |     |     |
+-----+-----+-----+-----+-----+
The top headers are just the dates for the month and then their respective data for that date. Is there a way to nicely transfer this to an oracle table?
EDIT: Date field is an actual date such as 02/19/2018.
If you pre-create a table (as I do), then you can start loading from the 3rd line (i.e. skip the first two), putting every Excel column into the appropriate Oracle table column.
Alternatively (and obviously), rename the column headers so that the file doesn't have two header levels.
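One way to do the renaming is to flatten the two header rows into single composite headers before loading. Here is a sketch using only the Python standard library; the `flatten_headers` helper, the sample column names, and the `widget` data row are made up for illustration:

```python
import csv
import io

# Flatten a two-row header (a date row spanning pairs of currency columns)
# into single composite headers like "2018-01-01_RMB"; the resulting rows
# can then be loaded into a pre-created Oracle table column by column.
def flatten_headers(text):
    reader = csv.reader(io.StringIO(text))
    top = next(reader)     # e.g. ['Item', '2018-01-01', '', '2018-01-02', '']
    bottom = next(reader)  # e.g. ['', 'RMB', 'USD', 'RMB', 'USD']
    headers, last_top = [], ""
    for t, b in zip(top, bottom):
        last_top = t or last_top  # carry the spanning date forward
        headers.append("_".join(x for x in (last_top, b) if x))
    return [headers] + list(reader)

sample = """Item,2018-01-01,,2018-01-02,
,RMB,USD,RMB,USD
widget,1.0,0.15,1.1,0.16
"""
for row in flatten_headers(sample):
    print(row)
```

This turns the double header into `Item, 2018-01-01_RMB, 2018-01-01_USD, 2018-01-02_RMB, 2018-01-02_USD`, which maps one-to-one onto columns of a pre-created table.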

Last accessed timestamp of a Netezza table?

Does anyone know of a query that gives me details on the last time a Netezza table was accessed for any of the operations (select, insert or update) ?
Depending on your setup you may want to try the following query:
select *
from _v_qryhist
where lower(qh_sql) like '%tablename %'
There are a collection of history views in Netezza that should provide the information you require.
Netezza does not track this information in the catalog, so you will typically have to mine that from the query history database, if one is configured.
Modern Netezza query history information is typically stored in a dedicated database. Depending on permissions, you may be able to see if history collection is enabled, and which database it is using with the following command. Apologies in advance for the screen-breaking wrap to come.
SYSTEM.ADMIN(ADMIN)=> show history configuration;
CONFIG_NAME | CONFIG_DBNAME | CONFIG_DBTYPE | CONFIG_TARGETTYPE | CONFIG_LEVEL | CONFIG_HOSTNAME | CONFIG_USER | CONFIG_PASSWORD | CONFIG_LOADINTERVAL | CONFIG_LOADMINTHRESHOLD | CONFIG_LOADMAXTHRESHOLD | CONFIG_DISKFULLTHRESHOLD | CONFIG_STORAGELIMIT | CONFIG_LOADRETRY | CONFIG_ENABLEHIST | CONFIG_ENABLESYSTEM | CONFIG_NEXT | CONFIG_CURRENT | CONFIG_VERSION | CONFIG_COLLECTFILTER | CONFIG_KEYSTORE_ID | CONFIG_KEY_ID | KEYSTORE_NAME | KEY_ALIAS | CONFIG_SCHEMANAME | CONFIG_NAME_DELIMITED | CONFIG_DBNAME_DELIMITED | CONFIG_USER_DELIMITED | CONFIG_SCHEMANAME_DELIMITED
-------------+---------------+---------------+-------------------+--------------+-----------------+-------------+---------------------------------------+---------------------+-------------------------+-------------------------+--------------------------+---------------------+------------------+-------------------+---------------------+-------------+----------------+----------------+----------------------+--------------------+---------------+---------------+-----------+-------------------+-----------------------+-------------------------+-----------------------+-----------------------------
ALL_HIST_V3 | NEWHISTDB | 1 | 1 | 20 | localhost | HISTUSER | aFkqABhjApzE$flT/vZ7hU0vAflmU2MmPNQ== | 5 | 4 | 20 | 0 | 250 | 1 | f | f | f | t | 3 | 1 | 0 | 0 | | | HISTUSER | f | f | f | f
(1 row)
Also make note of the CONFIG_VERSION, as it will come into play when crafting the following query example. In my case, I happen to be using the version 3 format of the query history database.
Assuming history collection is configured, and that you have access to the history database, you can get the information you're looking for from the tables and views in that database. These are documented here. The following is an example, which reports when the given table was the target of a successful insert, update, or delete by referencing the "usage" column. Here I use one of the history table helper functions to unpack that column.
SELECT FORMAT_TABLE_ACCESS(usage),
       hq.submittime
FROM "$v_hist_queries" hq
INNER JOIN "$hist_table_access_3" hta
    USING (NPSID, NPSINSTANCEID, OPID, SESSIONID)
WHERE hq.dbname = 'PROD'
  AND hta.schemaname = 'ADMIN'
  AND hta.tablename = 'TEST_1'
  AND hq.SUBMITTIME > '01-01-2015'
  AND hq.SUBMITTIME <= '08-06-2015'
  AND (
        instr(FORMAT_TABLE_ACCESS(usage), 'ins') > 0
     OR instr(FORMAT_TABLE_ACCESS(usage), 'upd') > 0
     OR instr(FORMAT_TABLE_ACCESS(usage), 'del') > 0
  )
  AND status = 0;
FORMAT_TABLE_ACCESS | SUBMITTIME
---------------------+----------------------------
ins | 2015-06-16 18:32:25.728042
ins | 2015-06-16 17:46:14.337105
ins | 2015-06-16 17:47:14.430995
(3 rows)
You will need to change the digit at the end of the $hist_table_access_3 view to match your query history version.

Oracle Recursive Select to Find Current ID Associated with a Customer

I have a table that contains the history of Customer IDs that have been merged in our CRM system. The data in the historical reporting Oracle schema exists as it was when the interaction records were created. I need a way to find the Current ID associated with a customer from potentially an old ID. To make this a bit more interesting, I do not have permissions to create PL/SQL for this, I can only create Select statements against this data.
Sample Data in customer ID_MERGE_HIST table
| OLD_ID   | NEW_ID   |
+----------+----------+
| 44678368 | 47306920 |
| 47306920 | 48352231 |
| 48352231 | 48780326 |
| 48780326 | 50044190 |
Sample Interaction table
| INTERACTION_ID | CUST_ID  |
+----------------+----------+
| 1              | 44678368 |
| 2              | 48352231 |
| 3              | 80044190 |
I would like a query with a recursive sub-query to provide a result set that looks like this:
| INTERACTION_ID | CUST_ID  | CUR_CUST_ID |
+----------------+----------+-------------+
| 1              | 44678368 | 50044190    |
| 2              | 48352231 | 50044190    |
| 3              | 80044190 | 80044190    |
Note: Cust_ID 80044190 has never been merged, so does not appear in the ID_MERGE_HIST table.
Any help would be greatly appreciated.
You can look at the CONNECT BY construction.
Also, you might want to play with a recursive WITH clause (one of the descriptions: http://gennick.com/database/understanding-the-with-clause). CONNECT BY is better, but Oracle-specific.
If this is a frequent request, you may want to store the first/last cust_id for all related records:
First cust_id - static, but requires two hops to get to the current one.
Last cust_id - gives you the result immediately, but requires an update of the whole tree with every new record.
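A sketch of the recursive-WITH approach, demonstrated here with SQLite from Python since the CTE shape carries over to Oracle's recursive subquery factoring (Oracle omits the RECURSIVE keyword but requires the same explicit column list). The table and column names follow the question; everything else is illustrative:

```python
import sqlite3

# In-memory copy of the sample merge-history and interaction tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE id_merge_hist (old_id INTEGER, new_id INTEGER);
INSERT INTO id_merge_hist VALUES
  (44678368, 47306920),
  (47306920, 48352231),
  (48352231, 48780326),
  (48780326, 50044190);
CREATE TABLE interaction (interaction_id INTEGER, cust_id INTEGER);
INSERT INTO interaction VALUES (1, 44678368), (2, 48352231), (3, 80044190);
""")

query = """
WITH RECURSIVE resolve(start_id, cur_id) AS (
    -- seed: every direct merge step
    SELECT old_id, new_id FROM id_merge_hist
    UNION ALL
    -- walk forward until no further merge exists
    SELECT r.start_id, h.new_id
    FROM resolve r
    JOIN id_merge_hist h ON h.old_id = r.cur_id
)
SELECT i.interaction_id,
       i.cust_id,
       -- keep only the terminal id (one that was never merged away);
       -- never-merged customers fall back to their own id
       COALESCE(r.cur_id, i.cust_id) AS cur_cust_id
FROM interaction i
LEFT JOIN resolve r
       ON r.start_id = i.cust_id
      AND r.cur_id NOT IN (SELECT old_id FROM id_merge_hist)
ORDER BY i.interaction_id;
"""
for row in conn.execute(query):
    print(row)
```

This reproduces the desired result set: interactions 1 and 2 resolve to 50044190, while 80044190 (never merged) resolves to itself.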

Unique string table in SQL and replacing index values with string values during query

I'm working on an old SQL Server database that has several tables that look like the following:
|-------------|-----------|-------|------------|------------|-----|
| MachineName | AlarmName | Event | AlarmValue | SampleTime | ... |
|-------------|-----------|-------|------------|------------|-----|
| 3           | 180       | 8     | 6.780      | 2014-02-24 |     |
| 9           | 67        | 8     | 1.45       | 2014-02-25 |     |
| ...         |           |       |            |            |     |
|-------------|-----------|-------|------------|------------|-----|
There is a separate table in the database that only contains unique strings, as well as the index for each unique string. The unique string table looks like this:
|----------|--------------------------------|
| Id       | String                         |
|----------|--------------------------------|
| 3        | MyMachine                      |
| ...      |                                |
| 8        | High CPU Usage                 |
| ...      |                                |
| 67       | 404 Error                      |
| ...      |                                |
|----------|--------------------------------|
Thus, when we want to get something out of the database, we get the respective rows out, then lookup each missing string based on the index value.
What I'm hoping to do is to replace all of the string indexes with the actual values in a single query without having to do post-processing on the query result.
However, I can't figure out how to do this in a single query. Do I need to use multiple JOINs? I've only been able to figure out how to replace a single value by doing something like this:
SELECT UniqueString.String AS "MachineName" FROM UniqueString
JOIN Alarm ON Alarm.MachineName = UniqueString.Id
Any help would be much appreciated!
Yes, you can do multiple joins to the UniqueString table, but change the order to start with the table you are reporting on, and use a unique alias for each join. Something like:
SELECT MN.String AS MachineName,
       AN.String AS AlarmName
FROM Alarm A
JOIN UniqueString MN ON A.MachineName = MN.Id
JOIN UniqueString AN ON A.AlarmName = AN.Id
etc. for any other columns.
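The pattern is easy to verify end to end. Below is a runnable sketch using SQLite from Python; the sample data mirrors the question's tables, except that the string for id 180 ('Overheat') is invented for illustration, and the Event column is resolved through a third alias the same way:

```python
import sqlite3

# Minimal copies of the Alarm table and the unique-string lookup table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE UniqueString (Id INTEGER PRIMARY KEY, String TEXT);
INSERT INTO UniqueString VALUES
  (3, 'MyMachine'), (8, 'High CPU Usage'), (67, '404 Error'), (180, 'Overheat');
CREATE TABLE Alarm (MachineName INTEGER, AlarmName INTEGER, Event INTEGER, AlarmValue REAL);
INSERT INTO Alarm VALUES (3, 180, 8, 6.78);
""")

# One join per string-valued column, each with its own alias.
query = """
SELECT MN.String AS MachineName,
       AN.String AS AlarmName,
       EV.String AS Event,
       A.AlarmValue
FROM Alarm A
JOIN UniqueString MN ON A.MachineName = MN.Id
JOIN UniqueString AN ON A.AlarmName  = AN.Id
JOIN UniqueString EV ON A.Event      = EV.Id;
"""
for row in conn.execute(query):
    print(row)
```

Each alias (MN, AN, EV) is an independent lookup into the same table, so every index column comes back as its string in a single query with no post-processing.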

Check via Hector if secondary index already exists for a dynamic column in Cassandra

After the data import to my Cassandra Test-Cluster I found out that I need secondary indexes for some of the columns. Since the data is already inside the cluster, I want to achieve this by updating the ColumnFamilyDefinitions.
Now, the problem is: those columns are dynamic columns, so they are invisible to the getColumnMetaData() call.
How can I check via Hector if a secondary index has already been created and create one if this is not the case?
(I think the part about how to create one can be found at http://comments.gmane.org/gmane.comp.db.hector.user/3151 )
If this is not possible, do I have to copy all data from this dynamic column family into a static one?
No need to copy all the data from the dynamic column family into a static one.
Then how? Let me explain with an example. Suppose you have a CF schema like the one below:
CREATE TABLE sample (
KEY text PRIMARY KEY,
flag boolean,
name text
)
NOTE: I have created indexes on flag and name.
Now here is some data in the CF:
KEY,1 | address,Kolkata | flag,True | id,1 | name,Abhijit
KEY,2 | address,Kolkata | flag,True | id,2 | name,abc
KEY,3 | address,Delhi | flag,True | id,3 | name,xyz
KEY,4 | address,Delhi | flag,True | id,4 | name,pqr
KEY,5 | address,Delhi | col1,Hi | flag,True | id,4 | name,pqr
From the data you can see that address, id & col1 were all created dynamically.
Now if I run a query like this:
SELECT * FROM sample WHERE flag = TRUE AND col1 = 'Hi';
Note: col1 is not indexed, but I can still filter on that field.
Output:
KEY | address | col1 | flag | id | name
-----+---------+------+------+----+------
5 | Delhi | Hi | True | 4 | pqr
Another query:
SELECT * FROM sample WHERE flag = TRUE AND id >= 1 AND id < 5 AND address = 'Delhi';
Note: here neither id nor address is indexed, yet I still get the output.
Output:
KEY,3 | address,Delhi | flag,True | id,3 | name,xyz
KEY,4 | address,Delhi | flag,True | id,4 | name,pqr
KEY,5 | address,Delhi | col1,Hi | flag,True | id,4 | name,pqr
So basically, if you have an indexed column whose value is always something you know, then you can easily filter on the rest of the dynamic columns by combining them with that always-matching indexed column.