"RENAME FIELDS Using xMap", on multiple fields - qlikview

I'm trying to use the "RENAME FIELDS USING x;" statement to rename fields in my tables, but I'm running into some strange behaviour. Could someone explain why this is happening and how best to avoid it?
See my code below: it will not rename the column "BLAH", but why?
t_1:
mapping load * inline [
Orig, New
CUSTNO, CustomerNumber
BLAH, CustomerNumber
];
test:
Load * inline [
CUSTNO, Name
1234, James
];
test2:
Load * inline [
BLAH, Name2
1235, Chris
];
RENAME FIELDS using t_1;

This is how RENAME FIELDS USING works. The following is from the QlikView Help:
Two differently named fields cannot be renamed to having the same name. The script will run without errors, but the second field will not be renamed.
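Given that rule, one hedged workaround (a sketch, not the only fix): if both fields really should end up as CustomerNumber, rename the second one directly in its LOAD instead of through the mapping table; note that the shared name will then associate the two tables on that field.
t_1:
mapping load * inline [
Orig, New
CUSTNO, CustomerNumber
];
test:
Load * inline [
CUSTNO, Name
1234, James
];
test2:
// rename BLAH in the load itself, since the mapping cannot
// send two different source fields to the same new name
Load BLAH as CustomerNumber, Name2 inline [
BLAH, Name2
1235, Chris
];
RENAME FIELDS using t_1;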

TypeORM View Entity synchronization (creation) order problems

Using TypeORM, I'm trying to create ViewEntities that depend on each other; for example, "View B" selects from "View A". No matter what I do, I can't get the ViewEntities to be created in dependency order. Sometimes "View B" is created first, and the synchronization process fails because it can't find "View A", since it hasn't been created yet.
The error:
QueryFailedError: relation "public.course_item_view" does not exist
Solutions I have tried:
Renaming the ViewEntity files (to check if the system uses ABC ordering on file names)
Renaming the ViewEntity classes (to check if the system uses ABC ordering on class names)
Renaming the ViewEntity's "name" property (to check if the system uses ABC ordering on the final SQL view names)
Reordering the ViewEntity class references in the "entities: []" array of the connection options
Reordering the ViewEntity class imports in the file where I declare the connection options
Removing/Adding the file again (to check if the system uses Creation Date based ordering)
Modifying the files (to check if the system uses Modification Date based ordering)
All of these failed. I cannot figure out how the system determines the order in which the views are created.
Any help would be GREATLY appreciated!!
Expected Behavior
The views should be created in an order that is either specified by a property inside the views, or resolved automatically from the SELECT statements (a dependency graph), or based on the order in which I reference the ViewEntities in the "entities: []" array of the connection options. Any other solution where one could determine the order in which the ViewEntities are created would be perfect.
Actual Behavior
The ViewEntities are created in an order that I honestly can't understand. Sometimes a dependent ViewEntity is created before the ViewEntity it depends on. This causes the synchronization to fail.
File name: "CourseItemView" which resolves to: "course_item_view"
@ViewEntity({
expression: `
SELECT
"uvcv"."userId",
"uvcv"."courseId",
"uvcv"."videoId",
CAST (null AS integer) AS "examId",
"uvcv"."isComplete" AS "isComplete"
FROM public.video_completed_view AS "uvcv"
UNION ALL
SELECT
"uecv"."userId",
"uecv"."courseId",
CAST (null AS integer) AS "videoId",
"uecv"."examId",
"uecv"."isCompleted" AS "isComplete"
FROM public.user_exam_completed_view AS "uecv"
.
.
File name: "CourseItemStateView" which resolves to: "course_item_state_view"
This DEPENDS on the "course_item_view", as you can see in the SQL
@ViewEntity({
expression: `
SELECT
"course"."id" AS "courseId",
"user"."id" AS "userId",
"civ"."videoId" AS "videoId",
"civ"."isComplete" AS "isVideoCompleted",
"civ"."examId" AS "examId",
"civ"."isComplete" AS "isExamCompleted"
FROM public."course"
LEFT JOIN public."user"
ON 1 = 1
LEFT JOIN public.course_item_view AS "civ" ------------------- HERE
ON "civ"."courseId" = "course"."id"
AND "civ"."userId" = "user"."id"
ORDER BY "civ"."videoId","civ"."examId"
`
})
.
.
My connection options:
const postgresOptions = {
// properties, passwords etc...
entities: [
// entities....
// ...
// ...
// views
VideoCompletedView,
UserExamCompletedView,
UserExamAnswerSessionView,
UserVideoMaxWatchedSecondsView,
CourseItemView, // -------------------------------- HERE
CourseItemStateView // ----------------------------- HERE
],
} as ConnectionOptions;
createConnection(postgresOptions)
Steps to Reproduce
Create ViewEntities that depend on each other.
You will run into this issue, but it is hard to say exactly why and when; that is the main problem.
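One possible workaround, sketched under assumptions rather than taken from the question: turn off schema synchronization and create the views yourself in a migration, in explicit dependency order. The migration class name and timestamp below are hypothetical, and the CREATE VIEW statements are placeholders standing in for the expressions shown above; the point is only that queryRunner.query runs them in a deterministic order.

import { MigrationInterface, QueryRunner } from "typeorm";

export class CreateDependentViews1600000000000 implements MigrationInterface {
    public async up(queryRunner: QueryRunner): Promise<void> {
        // Create the independent view first...
        // (placeholder SELECT; substitute the course_item_view expression)
        await queryRunner.query(`CREATE VIEW public.course_item_view AS SELECT 1 AS "userId"`);
        // ...then the view that selects from it.
        // (placeholder SELECT; substitute the course_item_state_view expression)
        await queryRunner.query(`CREATE VIEW public.course_item_state_view AS SELECT "userId" FROM public.course_item_view`);
    }

    public async down(queryRunner: QueryRunner): Promise<void> {
        // Drop in reverse dependency order.
        await queryRunner.query(`DROP VIEW public.course_item_state_view`);
        await queryRunner.query(`DROP VIEW public.course_item_view`);
    }
}

This trades automatic view synchronization for a creation order you control.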

populate the content of two files into a final file using Pentaho (different fields)

I have two files.
File A has 5 columns:
Sno, name, age, key, checkvalue
File B has 3 columns:
Sno, title, age
I want to merge these two into a final file C, which has:
Sno, name, age, key, checkvalue
I tried renaming "title" to "name", and then I used "Add constants" to add the other two fields.
But when I try to merge these, I get the below error:
"
The name of field number 3 is not the same as in the first row received: you're mixing rows with different layout. Field [age String] does not have the same name as field [age String].
"
How can I solve this issue?
After getting the input from file B, use a Select values step to remove the title column. Then use an Add constants step to add the new columns name, key, checkvalue, setting "Set empty string?" to Y. Finally, do the join accordingly. It won't fail, since both files then have the same layout. Hope this helps.
Actually, there was an issue with the fields ... a field mismatch. I used the "Rename" option and it got fixed.

How to create a view against a table that has record fields?

We have a weekly backup process which exports our production Google Appengine Datastore onto Google Cloud Storage, and then into Google BigQuery. Each week, we create a new dataset named like YYYY_MM_DD that contains a copy of the production tables on that day. Over time, we have collected many datasets, like 2014_05_10, 2014_05_17, etc. I want to create a data set Latest_Production_Data that contains a view for each of the tables in the most recent YYYY_MM_DD dataset. This will make it easier for downstream reports to write their query once and always retrieve the most recent data.
To do this, I have code that gets the most recent dataset and the names of all the tables that dataset contains from the BigQuery API. Then, for each of these tables, I fire a tables.insert call to create a view that is a SELECT * from the table I am looking to create a reference to.
This fails for tables that contain a RECORD field, due to what looks to be a pretty benign column-naming rule.
For example, I have an AccountDeletionRequest table, for which I issue this API call:
{
    'tableReference': {
        'projectId': 'redacted',
        'tableId': u'AccountDeletionRequest',
        'datasetId': 'Latest_Production_Data'
    },
    'view': {
        'query': u'SELECT * FROM [2014_05_17.AccountDeletionRequest]'
    },
}
This results in the following error:
HttpError: https://www.googleapis.com/bigquery/v2/projects//datasets/Latest_Production_Data/tables?alt=json returned "Invalid field name "__key__.namespace". Fields must contain only letters, numbers, and underscores, start with a letter or underscore, and be at most 128 characters long.">
When I execute this query in the BigQuery web console, the columns are renamed to translate the . to an _. I kind of expected the same thing to happen when I issued the create view API call.
Is there an easy way I can programmatically create a view for each of the tables in my dataset, regardless of their underlying schema? The problem I'm encountering now is for record columns, but another problem I anticipate is for tables that have repeated fields. Is there some magic alternative to SELECT * that will take care of all these intricacies for me?
Another idea I had was doing a table copy, but I would prefer not to duplicate the data if I can at all avoid it.
Here is the workaround code I wrote to dynamically generate a SELECT statement for each of the tables:
def get_leaf_column_selectors(dataset, table):
    # Fetch the table schema so we can enumerate its (possibly nested) fields.
    schema = table_service.get(
        projectId=BQ_PROJECT_ID,
        datasetId=dataset,
        tableId=table
    ).execute()['schema']

    return ",\n".join([
        _get_leaf_selectors("", top_field)
        for top_field in schema["fields"]
    ])

def _get_leaf_selectors(prefix, field):
    # Build a dotted path for nested fields, e.g. "__key__.namespace".
    if prefix:
        format = prefix + ".%s"
    else:
        format = "%s"

    if 'fields' not in field:
        # Base case: a leaf column; alias the dots away so the name is legal.
        actual_name = format % field["name"]
        safe_name = actual_name.replace(".", "_")
        return "%s as %s" % (actual_name, safe_name)
    else:
        # Recursive case: a RECORD field; expand each of its sub-fields.
        return ",\n".join([
            _get_leaf_selectors(format % field["name"], sub_field)
            for sub_field in field["fields"]
        ])
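A hedged usage sketch showing how the generated selectors would plug into the view-creation call; it assumes table_service is the same authenticated tables() resource used above, and reuses the dataset and table names from the example:

selectors = get_leaf_column_selectors("2014_05_17", "AccountDeletionRequest")
view_query = "SELECT\n%s\nFROM [2014_05_17.AccountDeletionRequest]" % selectors

# Same tables.insert call as before, but selecting the aliased leaf
# columns instead of SELECT *.
table_service.insert(
    projectId=BQ_PROJECT_ID,
    datasetId='Latest_Production_Data',
    body={
        'tableReference': {
            'projectId': BQ_PROJECT_ID,
            'tableId': 'AccountDeletionRequest',
            'datasetId': 'Latest_Production_Data'
        },
        'view': {
            'query': view_query
        },
    },
).execute()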
We had a bug where you needed to select out the individual fields in the view and use an 'as' to rename the fields to something legal (i.e. names that don't have '.' in them).
The bug is now fixed, so you shouldn't see this issue any more. Please ping this thread or start a new question if you see it again.

SQL: Use a predefined list in the where clause

Here is an example of what I am trying to do:
def famlist = selection.getUnique('Family_code')
... Where """...
and testedWaferPass.family_code in $famlist
"""...
famlist is a list of objects
'selection' will change every run, so the list is always changing.
I want to return only columns from my SQL search where the row is found in the list that I have created.
I realize it is supposed to look like: in ('foo','bar')
But no matter what I do, my list will not come out like that. So do I have to turn my list into a string?
('\${famlist.join("', '")}')
I've tried the above, but it wasn't working for me. Just thought I would throw that in there. Would love some suggestions. Thanks.
I am willing to bet there is a Groovier way to implement this than shown below, but this works. Here's the important part of my sample script. nameList originally contains the string names. You need to quote each entry in the list, then strip the [ and ] from the toString result. I tried passing it as a prepared statement, but for that you need to dynamically create the string of ? placeholders, one for each element in the list. This quick hack doesn't use a prepared statement.
def nameList = ['Reports', 'Customer', 'Associates']
def nameListString = nameList.collect{ "'${it}'" }.toString().substring(1)
nameListString = nameListString.substring(0, nameListString.length() - 1)
String stmt = "select * from action_group_i18n where name in ( $nameListString )"
db.eachRow( stmt ) { row ->
    println "$row.action_group_id, $row.language, $row.name"
}
Hope this helps!
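For completeness, a sketch of the prepared-statement variant the answer alludes to, assuming db is a groovy.sql.Sql instance: build one ? per element and pass the list as parameters, letting the driver handle quoting and escaping.

def nameList = ['Reports', 'Customer', 'Associates']
// one placeholder per element: "?, ?, ?"
def placeholders = nameList.collect { '?' }.join(', ')
String stmt = "select * from action_group_i18n where name in ( ${placeholders} )"
db.eachRow(stmt, nameList) { row ->
    println "$row.action_group_id, $row.language, $row.name"
}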

HTML Local Storage parameter replacement escaping

I'm experimenting with a simple HTML5 local storage based app and I'm having trouble with the parameter replacement escaping (maybe) in my code.
The SQL line I want to execute is:
SELECT name, title FROM testTable WHERE name LIKE '%test%';
so my Javascript line is something like:
tx.executeSql( "SELECT name, title FROM testTable WHERE name LIKE '%?%'", [ search_string ],
This fails (I think) because the ? inside the quotes is treated as a literal, and so the parser complains about too many parameters (search_string).
I optimistically tried using ??? and ["'%", search_string, "%'"], but got the same result.
Any suggestions - I imagine it's something really obvious so please be gentle.
How about:
tx.executeSql(
"SELECT name, title FROM testTable WHERE name LIKE ?",
[ '%'+search_string+'%' ]
);
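For context, a sketch of the full transaction around that call, assuming an already-opened Web SQL database; the callbacks here are illustrative, not from the original:

var db = openDatabase('testdb', '1.0', 'test db', 2 * 1024 * 1024);
var search_string = 'test';

db.transaction(function (tx) {
  tx.executeSql(
    "SELECT name, title FROM testTable WHERE name LIKE ?",
    ['%' + search_string + '%'],   // the wildcards travel inside the bound value
    function (tx, results) {       // success callback
      for (var i = 0; i < results.rows.length; i++) {
        console.log(results.rows.item(i).name);
      }
    },
    function (tx, error) {         // statement error callback
      console.log(error.message);
      return false;                // false = handled, don't roll back
    }
  );
});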