BigQuery validator malfunction - google-bigquery

I'm using the BigQuery web UI, and sometimes the validator displays the green tick to show that all is good to go, but then the query does not execute even though the validator has approved it. Other times it just seems to keep thinking indefinitely without ever validating anything, and then I don't know how to proceed. I'm very reliant on it as I'm new to SQL, so I get stuck a lot when it malfunctions.
I tried deleting the table and recreating it, but that just gives me a blank error. Screenshot below.
Can this be caused by bad latency to US servers? I just tested my internet speed and it looks very good, but I am in South Africa. If this is the case, what would the workaround be?
The error simply says "Cannot run query", as shown in the attached screenshots. The query currently looks like this:
SELECT *,
  CASE
    WHEN STORE = 'Somerset Mall' THEN 'Somerset'
    WHEN STORE = 'Pavilion 8ta Flagship' THEN 'Pavilion'
    WHEN STORE = 'N1 City' THEN 'N1'
    WHEN STORE = 'GALLIERIA' THEN 'Galleria'
    WHEN STORE = 'KWADUKUZA' THEN 'Stanger'
    WHEN STORE = 'Çape Town' THEN 'ÇBD'
    WHEN STORE = 'Walmer Park' THEN 'Walmer'
    WHEN STORE = 'Canal Walk' THEN 'Canal Walk'
    WHEN STORE = 'Cape Gate' THEN 'Çape Gate'
    WHEN STORE = 'CAVENDISH' THEN 'Cavendish'
    WHEN STORE = 'Kenilworth' THEN 'Kenilworth'
    WHEN STORE = 'Table View' THEN 'Table View'
    WHEN STORE = 'Old Mutual Pinelands' THEN 'Old Mutual'
    WHEN STORE = 'Sea Point' THEN 'Sea Point'
    WHEN STORE = 'Knysna' THEN 'Knysna'
    WHEN STORE = 'George' THEN 'George'
    WHEN STORE = 'Mossel Bay' THEN 'Mossel Bay'
    WHEN STORE = 'Hermanus' THEN 'Hermanus'
    WHEN STORE = 'Mitchells Plain' THEN 'Mitchells Plain'
    WHEN STORE = 'Stellenbosch' THEN 'Stellenbosch'
    WHEN STORE = 'Tygervalley' THEN 'Tygervalley'
    WHEN STORE = 'Worcester' THEN 'Worcester'
    WHEN STORE = 'Gateway' THEN 'Gateway'
    WHEN STORE = 'Musgrave' THEN 'Musgrave'
    WHEN STORE = 'Pietermaritzburg' THEN 'Pietermaritzburg'
    WHEN STORE = 'Richards Bay' THEN 'Richards Bay'
    WHEN STORE = 'ETHEKWENI' THEN 'eThekwini'
    WHEN STORE = 'Bluff' THEN 'Bluff'
    WHEN STORE = 'Chatsworth' THEN 'Chatsworth'
    WHEN STORE = 'Ballito' THEN 'Ballito'
    WHEN STORE = 'Hemmingways 8ta Flagship' THEN 'Hemmingways'
    WHEN STORE = 'Baywest' THEN 'Baywest'
    WHEN STORE = 'Greenacres' THEN 'Bridge'
    WHEN STORE = 'Vincent Park' THEN 'Vincent Park'
    WHEN STORE = 'Bloemfontein' THEN 'Bloemfontein'
    WHEN STORE = 'Welkom' THEN 'Welkom'
    WHEN STORE = 'Kimberley' THEN 'Kimberley'
    ELSE 'NEW QMAN STORE?'
  END AS STORE_NAME
FROM `tester-253410.test1.Qman_data`
[Screenshot: blank error]
[Screenshot: a failed query (bottom left) while the validator shows green]
[Screenshot: the beginning of the query]

I do not think that deleting the table will fix the issue.
This seems to be an issue with the BigQuery UI or with your browser; try clearing your cache and cookies.
My suggestion is to validate your queries with the command-line tool through Cloud Shell, running interactive and batch query jobs with the CLI and setting the --dry_run flag. If set, the job is not run: a valid query returns a mostly empty response with some processing statistics, while an invalid query returns the same error it would return if it were not a dry run.
For example:
bq query \
--use_legacy_sql=false \
--dry_run \
'SELECT
COUNTRY,
AIRPORT,
IATA
FROM
`project_id`.dataset.airports
LIMIT
1000'
Returning:
Query successfully validated. Assuming the tables are not modified, running this query will process 122 bytes of data.
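If you prefer to do the same check from Python, here is a rough sketch using the google-cloud-bigquery client library; it assumes the package is installed and application default credentials are configured, and it reuses the project and table names from the question:
# Sketch: validate a query with a dry run via the BigQuery Python client.
# Assumes `pip install google-cloud-bigquery` and working default credentials.
from google.cloud import bigquery

client = bigquery.Client(project="tester-253410")
job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)

# A dry-run job never executes; it only reports validity and the bytes it would scan.
job = client.query(
    "SELECT * FROM `tester-253410.test1.Qman_data`",
    job_config=job_config,
)
print("Query is valid; it would process {} bytes.".format(job.total_bytes_processed))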

Related

How to save query as a table and store in redshift via Lambda Function in AWS?

I am writing a Lambda function in AWS that pulls data from Redshift. The purpose of this function is to run each day and send out notifications (emails) of the output from this function (in this case, I want it to be a table).
Here is my current function. I am able to see the list of rows from the query output, but now I want to save that in a table format, or at least print out the full table/output. I'm very new to AWS, so I was wondering how I can store it as a new table in Redshift (or anywhere else in AWS) so I can send it to people?
Code:
import json
import psycopg2
import boto3
credential = {
    'dbname': 'main',
    'host_url': 'dd.us-west-1.redshift.amazonaws.com',
    'port': '5439',
    'user': 'private',
    'password': '12345678'
}

redshift_role = {
    'dev': 'arn:aws:lambda:us-west-1:15131234566:function:test_function'
}

def lambda_handler(event, context):
    # TODO implement
    #client = boto3.client('redshift-data')
    conn_string = "dbname='{}' port='{}' user='{}' password='{}' host='{}'"\
        .format(credential['dbname'], credential['port'], credential['user'], credential['password'], credential['host_url'])
    con = psycopg2.connect(conn_string)
    cur = con.cursor()
    sql_query = """with
tbl as (
select
case
when (sa.parentid like '001i0000023STBY%' or sa.ultimate_parent_account__c like '001i0000023STBY%') --Parent OR Ultimate Parent is <Department of Defense>
then sa.id
else
coalesce(sa.ultimate_parent_account__c, sa.parentid, sa.id) end as cust_id,
(select name from salesforce.account where id=cust_id) as cust_name,
sa.name as acct_name,
sa.id as acct_id,
sa.parentid,
(select name from salesforce.account where id=sa.parentid) as par_name,
(select name from salesforce.account where id=sa.ultimate_parent_account__c) as ult_par_name,
so.id as opp_id,
so.name as opp_name,
so.stagename as stg_name,
so.type as opp_type,
so.Manager_Commit__c as mgr_commit,
so.renewal_risk__c as opp_risk,
so.isclosed as cls,
so.finance_date__c as fin_date,
DATEPART(QUARTER,so.finance_date__c) as Q,
DATEPART(QUARTER,so.closedate) as Q_cls,
DATEPART(QUARTER,so.subscription_start_date__c) as Q_ren_due,
so.Total_NARR__c as arr,
so.NARR__c as fin_nacv,
so.churn__c as fin_churn,
so.Renewal_Amount__c as ren_amt,
so.Available_to_Renew_ARR__c as avl_ren_arr,
so.estimated_narr__c as nacv,
so.bi_detect_nacv__c as bi_detect,
so.bi_recall_nacv__c as bi_recall,
so.bi_stream_nacv__c as bi_stream,
so.bi_dfaws_nacv__c as bi_dfaws,
so.bi_o365_nacv__c as bi_o365,
so.bi_services_nacv__c as bi_svcs,
sp.name as pr_name,
sp.family as pr_family,
sp.sbqq__subscriptiontype__c as pr_type,
sol.product_code__c as oli_code,
sol.sbqq__quoteline__c as qli_id,
sol.quantity as qty,
sca.serial__c as ca_name,
(select name from salesforce.product2 where id = sca.product__c ) as ca_pr_name,
sca.mode_updater__c as ca_mode,
sca.updater_last_seen__c as ca_last_seen,
sca.software_version__c as ca_sw_version,
sca.total_hosts__c as ca_tot_hosts,
sca.active_hosts__c as ca_active_hosts,
sca.X95_Host_Total__c as ca_x95_hosts_tot,
sca.traffic__c as ca_traffic,
sca.uiconfig__c as ca_uiconfig
from
salesforce.opportunity so
join
salesforce.account sa on
so.accountid = sa.id
join salesforce.user su on
so.ownerid = su.id
join salesforce.opportunitylineitem sol on
so.id = sol.opportunityid
join salesforce.product2 sp on
sol.product2id = sp.id
join salesforce.customasset__c sca on
so.id = sca.opportunity__c
where
so.isdeleted = false
and sa.isdeleted = false
and sol.isdeleted = false
order by
Q
)
select * from
(select
tbl.acct_name as acct,
tbl.ca_name,
tbl.ca_pr_name,
tbl.ca_mode,
date(tbl.ca_last_seen) as ca_last_seen,
tbl.ca_sw_version,
tbl.ca_tot_hosts,
tbl.ca_active_hosts,
tbl.ca_x95_hosts_tot,
tbl.ca_traffic,
tbl.ca_uiconfig
from
tbl
where
tbl.stg_name like 'Closed Won%'
and tbl.arr is not null
group by
tbl.acct_name,
tbl.opp_id,
tbl.ca_name,
tbl.ca_pr_name,
tbl.ca_mode,
tbl.ca_last_seen,
tbl.ca_sw_version,
tbl.ca_tot_hosts,
tbl.ca_active_hosts,
tbl.ca_x95_hosts_tot,
tbl.ca_traffic,
tbl.ca_uiconfig) df
WHERE ca_last_seen >= DATEADD(MONTH, -3, GETDATE())"""
    cur.execute(sql_query)

    with con.cursor() as cur:
        rows = []
        cur.execute(sql_query)
        for row in cur:
            rows.append(row)
        print(rows)

    con.close()
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }
Update
I included a portion that writes to the /tmp directory, but I am having trouble writing to my S3 bucket, as the upload leads to a timeout error.
Updated portion of code below:
with con.cursor() as cur:
    # Enter the query that you want to execute
    cur.execute(sql_query)
    for row in cur:
        res = cur.fetchall()
        print(res)

    # Save the query results to a CSV file
    fp = open('/tmp/Processlist.csv', 'w')
    myFile = csv.writer(fp)
    myFile.writerows(res)
    fp.close()

    #s3.upload_file('/tmp/Processlist.csv', 'data-lake-020192', 'Processlist.csv')
    #con.close()
There are several ways to do this. The first, and least efficient, is to insert the data into a table using INSERT INTO ... VALUES (...). This provides the data as part of the SQL, so it is processed by the query compiler, moved from the leader node to the compute nodes, and then stored in the table. This process is inefficient, potentially stresses the leader node, and is generally frowned upon. However, if you are only loading a small number of rows and the load runs infrequently (such as when the database is generally lightly loaded), it can work fine. Just remember that there is a limit to how long a SQL statement can be, but if you are anywhere close to it you are likely loading too much data via this path. 100 rows of 5 columns should be fine.
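For illustration only, a minimal sketch of that first approach with psycopg2, reusing the con connection and the rows list from the question; the target table public.daily_report and its column types are made up:
# Sketch of the INSERT INTO ... VALUES approach; table name and types are hypothetical.
from psycopg2.extras import execute_values

with con.cursor() as write_cur:
    write_cur.execute("""
        CREATE TABLE IF NOT EXISTS public.daily_report (
            acct             VARCHAR(255),
            ca_name          VARCHAR(255),
            ca_pr_name       VARCHAR(255),
            ca_mode          VARCHAR(64),
            ca_last_seen     DATE,
            ca_sw_version    VARCHAR(64),
            ca_tot_hosts     INT,
            ca_active_hosts  INT,
            ca_x95_hosts_tot INT,
            ca_traffic       VARCHAR(255),
            ca_uiconfig      VARCHAR(255)
        )
    """)
    # execute_values builds one multi-row INSERT, which is fine for a small result set.
    execute_values(write_cur,
                   "INSERT INTO public.daily_report VALUES %s",
                   rows)
con.commit()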
The best way, though it takes more coding, is to write the data out to an S3 file (or files, if the data is large) and then COPY it into the desired table. A CSV file is simple to generate and human readable. This process also gives you a record of the table contents per day for any future need (debugging, auditing, etc.).
Alternatively, you could just save the data to S3 and then use Redshift Spectrum to access it from S3. This may be a good choice for very large amounts of data and/or data that is rarely used. In most cases, I would expect that having the data native in Redshift (COPY from S3) is the way to go.
Coding any of these in Lambda is straightforward: just make the calls to the services and issue the SQL as needed.
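A rough sketch of the S3-plus-COPY route, building on the code in the question (the bucket name is taken from the commented-out upload call; the IAM role ARN and target table are placeholders):
# Sketch: dump the rows to CSV in /tmp, upload to S3, then COPY into Redshift.
import csv
import boto3

bucket = 'data-lake-020192'                                     # from the question
key = 'Processlist.csv'
iam_role = 'arn:aws:iam::123456789012:role/redshift-copy-role'  # placeholder role

with open('/tmp/Processlist.csv', 'w', newline='') as fp:
    csv.writer(fp).writerows(rows)        # `rows` is the result set fetched earlier

boto3.client('s3').upload_file('/tmp/Processlist.csv', bucket, key)

copy_sql = """
    COPY public.daily_report
    FROM 's3://{}/{}'
    IAM_ROLE '{}'
    FORMAT AS CSV
""".format(bucket, key, iam_role)

with con.cursor() as copy_cur:            # same psycopg2 connection as above
    copy_cur.execute(copy_sql)
con.commit()
As a side note on the timeout mentioned in the update: if the Lambda function runs inside a VPC in order to reach Redshift, calls to S3 will hang unless the subnet has a NAT gateway or an S3 VPC gateway endpoint, which is a common cause of exactly that kind of timeout.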

ITXEX field cleared when HR_INFOTYPE_OPERATION is called

We have run into difficulties maintaining the ITXEX field (long text indication) of an infotype record.
Say we have an existing record in an infotype database table with a long text filled (the ITXEX field value in that record is set to 'X').
Some process updates the record through HR_CONTROL_INFTY_OPERATION like this:
CALL FUNCTION 'HR_CONTROL_INFTY_OPERATION'
  EXPORTING
    infty         = '0081'
    number        = '12345678'
    subtype       = '01'
    validityend   = '31.12.9999'
    validitybegin = '19.05.2019'
    record        = ls_0081 " ( ITXEX = 'X' )
    operation     = 'MOD'
    tclas         = 'A'
    nocommit      = abap_true
  IMPORTING
    return        = ls_return.
This call does update the record, but it clears its ITXEX field.
It is important to note that performing the same action through PA30 updates the record and keeps the ITXEX field as it was.
The described problem seems similar to that question. Trying the solutions given there didn't solve the problem.
Why don't the two approaches (PA30 and the function module) work the same way? How can this be fixed?
First of all, the FM parameters you use are incorrect. How do you expect the infotype to be updated if you set nocommit = TRUE?
Also, you are missing the correct sequence that must be used for the update procedure:
Lock the Employee
Read the infotype
Update the infotype
Unlock the Employee
The correct snippet for your task would be
DATA: ls_return TYPE bapireturn1.
DATA: l_infty_tab TYPE TABLE OF p0002.

CALL FUNCTION 'HR_READ_INFOTYPE'
  EXPORTING
    pernr     = '00000302'
    infty     = '0002'
  TABLES
    infty_tab = l_infty_tab.

READ TABLE l_infty_tab ASSIGNING FIELD-SYMBOL(<infotype>) INDEX 1.
<infotype>-midnm = 'Shicklgruber'. " updating the field of infotype

CALL FUNCTION 'ENQUEUE_EPPRELE'
  EXPORTING
    pernr = '00000302'
    infty = '0002'.

CALL FUNCTION 'HR_CONTROL_INFTY_OPERATION'
  EXPORTING
    infty         = <infotype>-infty
    number        = <infotype>-pernr
    subtype       = <infotype>-subty
    validityend   = <infotype>-endda
    validitybegin = <infotype>-begda
    record        = <infotype>
    operation     = 'MOD'
    tclas         = 'A'
  IMPORTING
    return        = ls_return.

CALL FUNCTION 'DEQUEUE_EPPRELE'
  EXPORTING
    pernr = '00000302'
    infty = '0002'.
This way the ITXEX field is treated correctly, and if it existed on that record it will remain intact. However, this method will not work for updating the long text itself; for that you must use the object-oriented way, i.e. the methods of class CL_HRPA_INFOTYPE_CONTAINER.

Create a url from JSON in SQL Server

I am able to output my SQL query to JSON using the FOR JSON command in my query. What I need to do is make that JSON data available via a live URL that I can fetch any time I want a fresh dataset from SQL Server.
Most of my research shows me how to either (a) work with JSON inside SSMS or (b) output JSON from SSMS. What I need is a way to take my JSON output and store it somewhere external to my SQL Server, with a unique URL that will allow me to use the fetchURL command in my Google Scripts to update data in my Google Sheet from my SQL Server. I already know how to parse JSON data into my spreadsheet, and I already know how to get my SQL data into JSON format. The missing link is creating a URL for the JSON data that is live-connected to my SQL Server database and refreshes every time I fetch it. Below is the SQL code that creates the JSON object (it uses the Aeries demo database for their Student Information System; the query returns all the grades for all students at a high school, and the data is fictitious).
--LIVE GRADES JSON
SELECT
STU.ID as [Student.PermID],
CONCAT(STU.LN,', ',STU.FN) AS [Student.LastFirst],
STU.GR as [Student.Grade],
GBK.PD as [Student.Period],
TCH.TE as [Student.TeacherName],
TCH.TN as [Student.TeacherNum],
GBU.CSC as [Student.Percentage],
LEFT(GBU.CMK,1) as [Student.Mark],
STU.U3 as [Student.User3],
STU.LF as [Student.LangFlu],
STU.CU as [Student.CounselorNum],
CAST(GBU.DTS as date) as [Student.LastUpdate]
FROM GBK
JOIN TCH ON (TCH.SC = GBK.SC AND TCH.TN = GBK.TN AND TCH.DEL = 0)
JOIN GBU ON (GBU.SC = GBK.SC AND GBU.GN = GBK.GN AND GBU.DEL = 0)
JOIN STU ON (STU.SC = GBU.SC AND STU.SN = GBU.SN AND STU.DEL = 0)
WHERE
GBK.DEL = 0 AND
GBK.SC = 994 AND
STU.GR IN (9, 10, 11, 12) AND
GBU.TG != 'I' AND
(GBK.TM = 'Spring' OR GBK.TM = '3' OR GBK.TM = '4' OR GBK.TM = 'Year') and
GBK.PD < 8
ORDER BY STU.LN, STU.ID, GBK.PD
FOR JSON PATH, ROOT('Students')
I need help creating a URL, and direction on where to "store" it, so that I can fetch the data at any point from a custom script within Google Sheets rather than having to run the query in SSMS and copy and paste the result into my spreadsheet. I have the JSON and I know how to parse it into a spreadsheet. I just do not know how to go about creating and storing a URL so that I can fetch the data whenever I want it, always live-connected back to my SQL Server database.
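One possible shape for this, sketched purely as an illustration: a small web service that runs the query on demand and returns the FOR JSON output over HTTP. Flask and pyodbc are assumptions rather than anything the question specifies; the server name, database, and credentials below are placeholders, and GRADES_SQL stands for the query shown above.
# Sketch: expose the FOR JSON result at a URL that a Google Sheets script can fetch.
from flask import Flask, Response
import pyodbc

app = Flask(__name__)

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=my-sql-server;DATABASE=AeriesDemo;"   # placeholder server and database
    "UID=report_user;PWD=change-me"               # placeholder credentials
)

GRADES_SQL = "SELECT ... FOR JSON PATH, ROOT('Students')"  # the query shown above

@app.route("/grades.json")
def grades():
    conn = pyodbc.connect(CONN_STR)
    try:
        cursor = conn.cursor()
        cursor.execute(GRADES_SQL)
        # SQL Server returns FOR JSON output as one or more text chunks,
        # so concatenate all returned rows into a single JSON string.
        payload = "".join(row[0] for row in cursor.fetchall())
    finally:
        conn.close()
    return Response(payload, mimetype="application/json")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
For the Google Sheets fetch call to reach it, the endpoint would then have to be hosted somewhere reachable from Google's servers over HTTPS, ideally protected with an API key or similar; where to host it is a separate choice the question leaves open.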

How to get long texts of FI held documents?

I need to get the particulars/long text of FI held documents. I tried the READ_TEXT function module but had no luck, since the held document only has a temporary document number.
I tried looking for data in the STXL and STXH tables, and I also tried the function modules in the function groups FTXT and STXD, but had no luck.
Is there any other method to achieve that goal?
First of all, you need the temporary document number, which can be obtained either from F-43 itself or from the RFDT table.
In the SRTFD field you should separate it from the username.
Then run the READ_TEMP_DOCUMENT FM; after running it you should have your texts in ABAP memory.
To get them, use GET_TEXT_MEMORY.
ls_uf05a-tempd = '0012312356'. "doc number
ls_uf05a-unamd = 'JOHNDOE'.    "username

CALL FUNCTION 'READ_TEMP_DOCUMENT'
  EXPORTING
    I_UF05A = ls_uf05a
  TABLES
    T_BKPF  = lt_bkpf
    T_BSEC  = lt_bsec
    T_BSED  = lt_bsed
    T_BSEG  = lt_bseg
    T_BSET  = lt_bset
    T_BSEZ  = lt_bsez.

DATA: lt_texts      TYPE TABLE OF TCATALOG,
      t_tline       TYPE STANDARD TABLE OF tline,
      memory_id(30) VALUE 'SAPLSTXD'.

CALL FUNCTION 'GET_TEXT_MEMORY'
  TABLES
    TEXT_MEMORY = lt_texts.

READ TABLE lt_texts ASSIGNING FIELD-SYMBOL(<cat>)
  WITH KEY tdobject = 'BELEG'
           tdid     = '0001'
           tdspras  = 'E'
  BINARY SEARCH.
IF sy-subrc = 0.
  memory_id+8(6) = <cat>-id.
ENDIF.

IMPORT tline = t_tline FROM MEMORY ID memory_id.

LOOP AT t_tline ASSIGNING FIELD-SYMBOL(<tline>).
  WRITE: <tline>-tdline. "showing the texts
ENDLOOP.

Corona Lua SQLite

This is my first question on Stack Overflow.
I'm working with Corona and I'm having an issue accessing a SQLite database (I'm a bit of a SQL noob).
I'm trying to access and return a value I've stored in the database.
Here's some code samples:
print("---------------- How I Create New Player Save Data")
local entry = [[CREATE TABLE IF NOT EXISTS playerData (key STRING PRIMARY KEY, content INTEGER);]]
db:exec(entry)
entry = [[INSERT INTO playerData VALUES ("LastLoginTime", 0);]]
db:exec( entry )
entry = [[INSERT INTO playerData VALUES ("Credits", 1000);]]
db:exec( entry )
entry = [[INSERT INTO playerData VALUES ("Level", 1);]]
db:exec( entry )
Now, this works; it will print everything in the db (I pass in 'dbName'):
--print all the table contents
for row in db:nrows("SELECT * FROM "..dbName) do
    local text = row.key..": "..row.content
end
This doesn't work; it returns '0':
local grabCredits = "SELECT content FROM playerData WHERE key='Credits'"
local credits = db:exec(grabCredits)
print("-- value: "..credits)
Neither does this; it also returns '0':
local grabCredits = "SELECT key FROM playerData WHERE content>=10"
local credits = db:exec(grabCredits)
print("-- value: "..credits)
I don't understand what I'm doing wrong. Maybe I need to use another function call on the db other than exec(). I realize I could iterate through the db every time I want to access a single entry, but that just seems inefficient.
Any help is very much appreciated.
If you want results, you must use some form of iterator or callback; db:exec only returns a status code (0 means success), which is why you are seeing '0'. SQLite always returns query results as rows.
This is similar to what I'm using to retrieve one result from my database and works well for me.
local query = "SELECT content FROM playerData WHERE key = 'Credits' LIMIT 1"
local queryResultTable = {}

local queryFunction = function(userData, numberOfColumns, columnValues, columnTitles)
    for i = 1, numberOfColumns do
        queryResultTable[columnTitles[i]] = columnValues[i]
    end
end

db:exec(query, queryFunction)

for k, v in pairs(queryResultTable) do
    print(k, v)
end