I'm consistently unable to store data into a QVD file. The syntax I've been using works perfectly in other applications, but for some reason I can't get it to work in the application I'm trying to finish.
I've successfully developed a data model and now need to loop through the years, reduce the data to just each specific year, and then save that out to individual QVD files.
Here's the code:
FOR i = $(vL.MinFY) TO $(vL.MaxFY)
LET _table = 'T_FACT_$(i)_GL_BALANCES';
[$(_table)]:
LOAD
*
RESIDENT [FINAL BALANCES]
WHERE LEFT([Year-Period],4) = $(i)
;
IF NoOfRows('$(_table)') > 0 THEN
STORE $(_table) INTO [$(vG.TransformQVDPath)$(_table).QVD];
END IF
LET vRowCount_$(i) = NoOfRows('$(_table)');
DROP TABLE [$(_table)];
NEXT
The script burps on the DROP TABLE statement, and nothing is saved to QVD. I've tried various combinations of dollar-sign expansion, with and without quotes, brackets, etc.
Hope someone can help me determine what I'm missing.
DOH! Figured it out.
The LOAD statement requires a NOCONCATENATE, or else each iteration of the table is simply concatenated with the source table. Updated script for the load section is:
[$(_table)]:
NOCONCATENATE LOAD
*
RESIDENT [FINAL BALANCES]
WHERE LEFT([Year-Period],4) = $(i)
;
After doing that everything worked as expected.
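For reference, here is a sketch of the full corrected loop with the fix in place (untested; it assumes the same variables, table names, and QVD path as above):
FOR i = $(vL.MinFY) TO $(vL.MaxFY)
	LET _table = 'T_FACT_$(i)_GL_BALANCES';

	// NOCONCATENATE keeps this load from auto-concatenating
	// onto [FINAL BALANCES], which has identical field names.
	[$(_table)]:
	NOCONCATENATE LOAD
		*
	RESIDENT [FINAL BALANCES]
	WHERE LEFT([Year-Period],4) = $(i)
	;

	IF NoOfRows('$(_table)') > 0 THEN
		STORE $(_table) INTO [$(vG.TransformQVDPath)$(_table).QVD];
	END IF

	LET vRowCount_$(i) = NoOfRows('$(_table)');
	DROP TABLE [$(_table)];
NEXT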
I am using a simple SQL block of statements to execute and return a set of results in BigQuery. This works fine in BigQuery and returns the results. I need to export this data to Data Studio, so in Data Studio I use BigQuery as the connector, select the project and Custom Query, and paste the contents below:
Declare metricType String;
SET metricType = "compute.googleapis.com/instance/cpu/utilization";
BEGIN
IF (metricType = "compute.googleapis.com/instance/cpu/usage_time") THEN
    SELECT m.value as InstanceName, metric.type as metricType,
           point.value.double_value as usagetime,
           point.interval.start_time as StartTime,
           point.interval.end_time as EndTime,
           h.value as instance_id
    FROM `myproject.metric_export.sd_metrics_export_fin`,
         unnest(resource.labels) as h, unnest(metric.labels) as m
    WHERE metric.type = 'compute.googleapis.com/instance/cpu/usage_time'
      AND h.key = "project_id";
ELSE IF (metricType = "compute.googleapis.com/instance/cpu/utilization") THEN
    SELECT m.value as InstanceName, metric.type as metricType,
           point.value.double_value as utilizationrate,
           point.interval.start_time as StartTime,
           point.interval.end_time as EndTime,
           h.value as instance_id
    FROM `myproject-.metric_export.sd_metrics_export_fin`,
         unnest(resource.labels) as h, unnest(metric.labels) as m
    WHERE metric.type = 'compute.googleapis.com/instance/cpu/utilization'
      AND h.key = "project_id";
END IF;
END IF;
END;
but after clicking the "ADD" button I get an error. I am not sure what this error is about; I have not used any stored procedure, and I am just pasting it as a custom query.
Also, if I try to save the results of the BigQuery query as a view from the BigQuery console results pane, it gives me this error:
Syntax error: Unexpected keyword DECLARE at [1:1]
I am extremely new to Data Studio and also to BigQuery. Kindly help, thanks.
First, I would like to make some considerations about your query. You are using scripting in order to declare a variable and branch within your query. However, since you declare and set metricType at the beginning of the query, the first IF branch will never be entered: the value is fixed, and nothing loops through the metric types.
I would suggest you use CASE WHEN instead, as below:
SELECT m.value as InstanceName, metric.type as metricType,
CASE WHEN metric.type = "compute.googleapis.com/instance/cpu/usage_time" THEN point.value.double_value ELSE 0 END AS usagetime,
CASE WHEN metric.type = "compute.googleapis.com/instance/cpu/utilization" THEN point.value.double_value ELSE 0 END AS utilizationrate,
point.interval.start_time as StartTime, point.interval.end_time as EndTime, h.value as instance_id
FROM `myproject.metric_export.sd_metrics_export_fin`, unnest(resource.labels) as h, unnest(metric.labels) as m
WHERE metric.type = #parameter and h.key = "project_id";
Notice that I am using the concept of parameterized queries. Also, for this reason, this query won't work in the console. In addition, pay attention that when you set #parameter to "compute.googleapis.com/instance/cpu/utilization", the usagetime column will contain only zeros while utilizationrate holds the values, because the WHERE clause filters the rows down to that single metric.
Secondly, in order to add a new data source in Data Studio, you can follow this tutorial from the documentation. After selecting New Report, click on the BigQuery connector > Custom Query > write your project id; you then need to click ADD PARAMETER (below the query editor). For the above query, I would add:
Name: parameter
Display name: parameter
Data type: text
Default value: leave it blank
Check the box Allow "parameter" to be modified in reports. This means you will be able to use this parameter as a filter and modify its value within the reports.
Following all the steps above, the data source will be added smoothly.
Lastly, I must point out that if your query runs in the console, you are able to save it as a view by clicking on Save view, as described here.
In the following code I've loaded data from two Excel documents that have exactly the same column names, and I have therefore renamed the field in one of the tables.
My problem occurs when I try to put in a not match() condition at the end of the script.
// New table
NewTable:
LOAD
[namn] as namnNy
FROM
[pglistaNy.xlsx]
(ooxml, embedded labels);
// Old table
OldTable:
LOAD
[namn]
FROM
[pglistaOld.xlsx]
(ooxml, embedded labels)
Where not match(namn, namnNy);
I get an error telling me that it does not recognize the namnNy alias. Why is that, and what's a better solution or method?
The match function will not work in your case: you are trying to compare against the values of a field from a different table, which a WHERE clause cannot see. You should use the Exists function instead (full documentation on Qlik's help page). Exists(namnNy, namn) returns true when the current value of namn has already been loaded into the field namnNy.
So your script will be:
// New table
NewTable:
LOAD
[namn] as namnNy
FROM
[pglistaNy.xlsx]
(ooxml, embedded labels);
// Old table
OldTable:
LOAD
[namn]
FROM
[pglistaOld.xlsx]
(ooxml, embedded labels)
Where
not Exists(namnNy, namn);
Example qvw file here
I need to make a query against a dataset provided by a public project. I created my own project and added their dataset to my project. There is a table named domain_public. When I query this table I get this error:
Query Failed
Error: Not found: Dataset my-project-name:domain_public was not found in location US
Job ID: my-project-name:US.bquijob_xxxx
I am in a non-US country. What is the issue, and how do I fix it, please?
EDIT 1:
I changed the processing location to asia-northeast1 (I am based in Singapore) but got the same error:
Error: Not found: Dataset censys-my-projectname:domain_public was not found in location asia-northeast1
Here is a view of my project and the public project censys-io:
Please advise.
EDIT 2:
The query I was typing, based on the Censys tutorial, is:
#standardsql
SELECT domain, alexa_rank
FROM domain_public.current
WHERE p443.https.tls.cipher_suite = 'some_cipher_suite_goes_here';
When I changed the FROM clause to:
FROM `censys-io.domain_public.current`
And the last line to:
WHERE p443.https.tls.cipher_suite.name = 'some_cipher_suite_goes_here';
It worked. Should I understand that I must always include projectname.dataset.table (if I'm using the correct terms) and report the typo to Censys? Or is this a special case for this project for some reason?
BigQuery can't find your data
How to fix it
Make sure your FROM location contains 3 parts
A project (e.g. bigquery-public-data)
A database (e.g. hacker_news)
A table (e.g. stories)
Like so
`bigquery-public-data.hacker_news.stories`
*note the backticks
Examples
Wrong
SELECT *
FROM `stories`
Wrong
SELECT *
FROM `hacker_news.stories`
Correct
SELECT *
FROM `bigquery-public-data.hacker_news.stories`
In the Web UI, click the Show Options button and then select your location under "Processing Location"!
Specify the location in which the query will execute. Queries that run in a specific location may only reference data in that location. For data in US/EU, you may choose Unspecified to run the query in the location where the data resides. For data in other locations, you must specify the query location explicitly.
Update
As stated above, queries that run in a specific location may only reference data in that location.
Assuming that the censys-io.domain_public dataset has its data in the US, you need to specify US for the Processing Location.
The problem turned out to be due to wrong table name in the FROM clause.
The right FROM clause should be:
FROM `censys-io.domain_public.current`
While I was typing:
FROM domain_public.current
So the project name is required in the FROM clause, and the backticks (``) are required because of the hyphen (-) in the project name.
Make sure your FROM location contains 3 parts, as #stevec mentioned:
A project (e.g. bigquery-public-data)
A database (e.g. hacker_news)
A table (e.g. stories)
But in my case, I was using legacy SQL within the Google Apps Script editor, so you need to set useLegacySql to false, for example:
var projectId = 'xxxxxxx';
var request = {
  query: 'select * from project.database.table',
  useLegacySql: false
};
var queryResults = BigQuery.Jobs.query(request, projectId);
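As a follow-up, here is a minimal sketch of reading the rows back (it assumes the Advanced BigQuery service is enabled in the Apps Script editor; large result sets would need BigQuery.Jobs.getQueryResults for paging):
// Log the first page of rows returned by the query above.
var rows = queryResults.rows || [];
rows.forEach(function (row) {
  // Each row holds its cells in row.f; each cell's value is in .v
  var values = row.f.map(function (cell) { return cell.v; });
  Logger.log(values.join(', '));
});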
Check the exact case (upper or lower) and spelling of the table or view name. Copy it from the table definition and your problem will be solved.
I was using FPL009_Year_Categorization instead of FPL009_Year_categorization (C instead of c) and getting the error "not found in location asia-south1". I copied it with the exact case and the problem was resolved.
On your BigQuery console, go to the Explorer on the left pane, click the small three dots, then select the Query option from the list. This step confirms you chose the correct project and dataset. Then you can edit the query in the query pane on the right.
Maybe the dataset location was changed in the Create Dataset option; it should be US or the default location.
I am creating a Shiny app and have been using data from Microsoft SQL Server: I create my table in SQL Server Management Studio with the query below, save it as a CSV, and read it in with
alldata<-read.csv(file1$datapath, fileEncoding = "UTF-8-BOM")
with the above code in my server function and the below code in my ui function
fileInput("file1", "Choose CSV File", accept=".csv")
Using this code, I have been able to manipulate all the data (creating tables and plots) from the CSV successfully. I wanted to try obtaining the data directly from the SQL server when my app loads, instead of going into SQL, executing the query, saving the data, and then loading it into my app. I tried the code below, and it sort of works. For example, the variable CODE has 30 levels, all of which are represented and able to be manipulated when I read the data in with the CSV, but only 23 are represented and manipulated when I run the code below. Is there a specific reason this may be happening? I tried running the SQL code along with the code to make my data tables in base R, instead of Shiny, to see if I could spot something specific not being read in correctly, but it all works perfectly when I read it in line by line.
library(RODBCext)
dbhandle<-odbcDriverConnect('driver={SQL Server}; server=myserver.com;
database=mydb; trusted_connection=true')
query<-"SELECT CAST(r.DATE_COMPLETED AS DATE)[DATE]
, res.CODE
, r.TYPE
, r.LOCATION
, res.OPERATION
, res.UNIT
FROM
mydb.RECORD r
LEFT OUTER JOIN mydb.RESULT res
ON r.AMSN = res.AMSN
and r.unit = res.unit
where r.STATUS = 'C'
and res.CODE like '%ABC-%'"
auditdata<-sqlExecute(channel=dbhandle, query=query, fetch=TRUE, stringsAsFactors=FALSE)
odbcClose(dbhandle)
*I only want the complete data set loaded once per Shiny session, so I currently have this outside of the server function in my server.R file.
Try adding believeNRows=FALSE to your sqlExecute call. RODBC doesn't return the full result set without this parameter, because by default it trusts the (sometimes wrong) row count reported by the ODBC driver.
auditdata<-sqlExecute(channel=dbhandle, query=query, fetch=TRUE, stringsAsFactors=FALSE, believeNRows=FALSE)
Trying to wrap my head around Zoho Creator; it's not as simple as they make it out to be for building apps… I have an inventory database, and I have four fields that I use to fill a field called Inventory Number (Inv_Num1):
First Name (First_Name)
Last Name (Last_Name)
Year (Year)
Number (Number)
I have a custom function script that I call through a custom action in the form report. What I am trying to do is upload a CSV file with 900 entries. Of course, not all of those have those values (first/last/number), so I need to bulk edit all of them. However, when I do the bulk edit, the Inv_Num1 field is not updated with the new values. I use the custom action to populate the Inv_Num1 field with the values of the other four fields.
Here is my script:
void onetime.UpdateInv()
{
	for each Inventory_Record in Management
	{
		FN = Inventory_Record.First_Name.subString(0,1);
		LN = Inventory_Record.Last_Name.subString(0,1);
		YR = Inventory_Record.Year.subString(2,4);
		NO = Inventory_Record.Number;
		outputstr = FN + LN + YR + NO;
		Inventory_Record.Inv_Num1 = outputstr;
	}
}
I get this error back when I try to run this function
Error.
Error in executing UpdateInv workflow.
Error in executing For Each Record task.
Error in executing Set Variable task. Unable to update template variable FN.
Error evaluating STRING expression :
Even though there is a First Name, for example, it still thinks there is none. This only happens on the fields I changed with bulk edit. If I do each one by hand, then the custom action works, but of course then Inv_Num1 is already updated through my Edit > On Success functions, which makes the whole thing moot.
This may be one year late and you might have found the solution already, but just to highlight: the error you were facing was due to the null value in First Name.
You just have to put a null check on each field and you are good to go.
You can also generate the Inv_Num1 value at bulk-upload time by adding the same null checks to the code (just the part inside the loop) and placing it in Add > On Submit, as sketched below.
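A minimal sketch of the loop with those null checks, assuming the same form and field names as above (untested):
for each Inventory_Record in Management
{
	// Default each part to an empty string, then fill it only
	// when the source field actually has a value.
	FN = "";
	if(Inventory_Record.First_Name != null)
	{
		FN = Inventory_Record.First_Name.subString(0,1);
	}
	LN = "";
	if(Inventory_Record.Last_Name != null)
	{
		LN = Inventory_Record.Last_Name.subString(0,1);
	}
	YR = "";
	if(Inventory_Record.Year != null)
	{
		YR = Inventory_Record.Year.subString(2,4);
	}
	NO = "";
	if(Inventory_Record.Number != null)
	{
		// toString() is assumed here so a numeric Number field
		// concatenates cleanly with the other string parts.
		NO = Inventory_Record.Number.toString();
	}
	Inventory_Record.Inv_Num1 = FN + LN + YR + NO;
}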
The better option would be using a formula field: you just have to put the formula below into that formula field and your inventory number will be autogenerated. You can rename the formula field to Inv Number or whatever you want.
Since you are using subString directly on the Year field, I am assuming the Year field is a string; otherwise you would have to use Year.toString().subString(2,4), and instead of if(Year=="","",...) you would have to put if(Year==null, null,...).
So here's the formula:
if(First_Name=="","",First_Name.subString(0,1)) + if(Last_Name=="","",Last_Name.subString(0,1)) + if(Year=="","",Year.subString(2,4)) + Number
Let me know how it goes if you implement this.
Without knowing the data type it's difficult to fix, but making the assumption that Inventory_Record.Number is a numeric data item, you are adding a string to a number.
The "+" is used for string concatenation (a joiner), but it also adds two numbers together: think "a" + "b" = "ab" for strings, while for numbers 1 + 2 = 3.
All good, but when you do "a" + 2 the system doesn't know whether to add or concatenate, so it gives an error.
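Under that assumption, a minimal sketch of the fix is to convert the number to text before concatenating (toString() on a numeric field is assumed to be available in your Deluge version):
NO = Inventory_Record.Number.toString();  // explicit number-to-text conversion
outputstr = FN + LN + YR + NO;            // all four parts are now strings
Inventory_Record.Inv_Num1 = outputstr;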