BigQuery Error while trying to query data? - google-bigquery

I am trying to run a simple query on Google BigQuery, but each time I run the query SELECT COUNT(totals.visits) FROM 122443995.ga_sessions_20160817 I get the following error:
Encountered " "FROM" "FROM "" at line 1, column 29. Was expecting: <EOF>
What could be wrong with the query syntax?

You probably need to escape the table name. With legacy SQL, you would use square brackets, e.g.:
SELECT COUNT(totals.visits) FROM [122443995.ga_sessions_20160817];
With standard SQL, you would use backticks, e.g.:
SELECT COUNT(totals.visits) FROM `122443995.ga_sessions_20160817`;

Using parentheses around the table reference in the FROM clause also solved it for me.
So instead of
SELECT totals FROM t1
use
SELECT totals FROM (t1)

Related

ROUND function in ODBC

I am working with a third-party custom flat-file DB that I access through ODBC, and the ROUND function always throws errors.
Is there a function that can do rounding in ODBC?
An example that throws an error:
SELECT AUDIT_SPLIT.ACCOUNT_REF, ROUND(SUM(AUDIT_SPLIT.GROSS_AMOUNT), 2) FROM AUDIT_SPLIT GROUP BY AUDIT_SPLIT.ACCOUNT_REF
Through Excel the error is "Column not found".
Is your flat file something like a CSV, a delimited text file, or a fixed-length file?
If so, I suggest checking that the last line is not empty: if the file ends with a CR/LF, an extra line is added and it is filled with nulls.
Also, which client are you running the query from? You have not given the ROUND() column an alias; maybe you need one.
So I will try something like this:
SELECT A.ACCOUNT_REF, ROUND(SUM(A.GROSS_AMOUNT), 2) AS GROSS_AMOUNT_SUM
FROM AUDIT_SPLIT A
WHERE A.ACCOUNT_REF IS NOT NULL AND A.GROSS_AMOUNT IS NOT NULL
GROUP BY A.ACCOUNT_REF
UPDATE
Maybe the problem is related to the comma in the ROUND function, which is being interpreted as a column separator. You can avoid that by using the ODBC escape clause {fn ROUND()} (as suggested by jarlh in his comment). To be safe and clear, I would also split the SUM() and the ROUND() like this:
SELECT S.ACCOUNT_REF, {fn ROUND(S.GROSS_AMOUNT_SUM, 2)} AS ROUNDED_SUM
FROM (
SELECT A.ACCOUNT_REF, SUM(A.GROSS_AMOUNT) AS GROSS_AMOUNT_SUM
FROM AUDIT_SPLIT AS A
WHERE A.ACCOUNT_REF IS NOT NULL AND A.GROSS_AMOUNT IS NOT NULL
GROUP BY A.ACCOUNT_REF
) AS S
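If you want to test the aggregate-then-round-with-alias pattern locally, here is a minimal runnable sketch. It uses Python's sqlite3 module as a stand-in for the ODBC flat-file source (an assumption, since the original driver is not available here), with made-up sample data; the table and column names mirror the question.

```python
import sqlite3

# In-memory SQLite database as a stand-in for the ODBC flat-file source;
# table and column names mirror the question, the data is made up.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE AUDIT_SPLIT (ACCOUNT_REF TEXT, GROSS_AMOUNT REAL)")
conn.executemany(
    "INSERT INTO AUDIT_SPLIT VALUES (?, ?)",
    [("4000", 10.01), ("4000", 2.502), ("5000", 7.126), (None, 99.0)],
)

# Aggregate in a subquery first, then round the result and give the
# expression an explicit alias, as the answer suggests. ORDER BY is added
# only to make the output deterministic.
rows = conn.execute(
    """
    SELECT S.ACCOUNT_REF, ROUND(S.GROSS_AMOUNT_SUM, 2) AS ROUNDED_SUM
    FROM (
        SELECT A.ACCOUNT_REF, SUM(A.GROSS_AMOUNT) AS GROSS_AMOUNT_SUM
        FROM AUDIT_SPLIT AS A
        WHERE A.ACCOUNT_REF IS NOT NULL AND A.GROSS_AMOUNT IS NOT NULL
        GROUP BY A.ACCOUNT_REF
    ) AS S
    ORDER BY S.ACCOUNT_REF
    """
).fetchall()
print(rows)  # [('4000', 12.51), ('5000', 7.13)]
```

Splitting the SUM and the ROUND this way keeps each step simple, and the inner query can be checked on its own before the rounding is applied.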

Error with group by in DB2 - SQLSTATE 42803

I've got a SQL script and it's failing. This is what it looks like:
SELECT P.SOORT,P.TYPEBETALING, P.MIDDELCODE, P.HOEVEELHEID, P.EENHEID, P.BEDRAG,P.MIDDEL
FROM DAAO01.BETALINGEN as A join
DAAO01.KLANTEN P
on p.KLANT_ID = 1
GROUP BY P.TYPEBETALING
ORDER BY P.TYPEBETALING
When I execute this I get an error:
COLUMN OR EXPRESSION IN THE SELECT LIST IS NOT VALID. SQLCODE=-122, SQLSTATE=42803, DRIVER=4.18.60
What am I doing wrong?
It is quite difficult to tell what you're trying to do without seeing your data.
But the error is saying that you have not specified how the remaining fields in your SELECT list should be handled by the GROUP BY aggregation:
P.SOORT, P.MIDDELCODE, P.HOEVEELHEID, P.EENHEID, P.BEDRAG, P.MIDDEL
If they're numbers, you could sum them, take the average, etc.
If they're strings, you either need to group by them as well or remove them from your SELECT list.
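As a runnable illustration of that rule, here is a sketch using Python's sqlite3 in place of DB2 (an assumption; SQLite is more permissive than DB2, but the corrected shape of the query is the same), with hypothetical sample data loosely based on the question's KLANTEN table:

```python
import sqlite3

# In-memory SQLite stand-in; table and column names loosely follow the question.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE KLANTEN (TYPEBETALING TEXT, BEDRAG REAL, MIDDEL TEXT)")
conn.executemany(
    "INSERT INTO KLANTEN VALUES (?, ?, ?)",
    [("cash", 10.0, "EUR"), ("cash", 5.0, "EUR"), ("card", 7.5, "EUR")],
)

# Every selected column is either aggregated (SUM, MIN) or listed in
# GROUP BY: exactly the condition SQLSTATE 42803 is complaining about.
rows = conn.execute(
    """
    SELECT TYPEBETALING, SUM(BEDRAG) AS TOTAAL, MIN(MIDDEL) AS MIDDEL
    FROM KLANTEN
    GROUP BY TYPEBETALING
    ORDER BY TYPEBETALING
    """
).fetchall()
print(rows)  # [('card', 7.5, 'EUR'), ('cash', 15.0, 'EUR')]
```

The same shape applied to the question's query would mean either adding the other P.* columns to the GROUP BY or wrapping each of them in an aggregate function.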

Unable to execute the following select statement, getting error

When I execute the following SELECT statement I get the error below.
I am trying to execute the statement on Oracle server 9.1, and I believe that is why I am getting this error.
I know we can escape single quotes by doubling them, as in '%Blondeau D''Uva%', but I am looking for a query that will pass the special-character value as a parameter.
Kindly let me know how to escape single quotes on an old Oracle server.
ERROR :
ORA-00933: SQL Command not properly ended
Error at line : 1 Column : 50
Query:
Select * FROM TABLEA where UI_FNAME like q'[%Michael%]' and UI_LNAME like q'[ %Blondeau D'Uva%]';
On Oracle the following should work:
Select *
FROM TABLEA
where UI_FNAME like '[%Michael%]'
and UI_LNAME like '[ %Blondeau D''Uva%]';
So: no duplicate WHERE.
And you have to escape the ' between D and Uva by doubling it.
The q before the quote is probably wrong as well, so I have removed it.
OK, I tried it out with the q operator:
Select *
FROM employees
where first_name like q'[%Michael%]'
and last_name like q'[ %Blondeau D'Uva%]';
No errors, no rows...
Two things about your query:
You use WHERE twice where there should be just one.
About the second condition: you used q'[ %Blondeau D'Uva%]' in the LIKE clause. Note the leading space inside the quotes; I don't think that will give you the result you are looking for. This has nothing to do with your error, but still, it would not hurt to re-check the query.
Try this; it shouldn't give you any error:
Select * FROM TABLEA
where UI_FNAME like q'[%Michael%]'
and UI_LNAME like q'[%Blondeau D'Uva%]';
Cheers!
Others have already given you answers, but I think you probably need to keep the % outside the brackets:
Select *
FROM TABLEA
where UI_FNAME like '%[Michael]%'
and UI_LNAME like '%[ Blondeau D''Uva]%';
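To see the quote-doubling rule in action, here is a runnable sketch using Python's sqlite3 (an assumption; the question targets Oracle 9.1, but doubling a single quote inside a string literal is standard SQL). It also shows the parameter-binding approach the asker was looking for, which avoids the escaping problem entirely. Table and column names mirror the question; the data is made up.

```python
import sqlite3

# In-memory SQLite database; table and column names mirror the question.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TABLEA (UI_FNAME TEXT, UI_LNAME TEXT)")
# In the SQL literal below, the single quote in D'Uva is escaped by doubling it.
conn.execute("INSERT INTO TABLEA VALUES ('Michael', 'Blondeau D''Uva')")

# Doubled single quote inside the LIKE pattern:
rows = conn.execute(
    "SELECT * FROM TABLEA "
    "WHERE UI_FNAME LIKE '%Michael%' AND UI_LNAME LIKE '%Blondeau D''Uva%'"
).fetchall()
print(rows)  # [('Michael', "Blondeau D'Uva")]

# Passing the value as a bound parameter instead, so no escaping is needed:
rows2 = conn.execute(
    "SELECT * FROM TABLEA WHERE UI_LNAME LIKE ?",
    ("%Blondeau D'Uva%",),
).fetchall()
print(rows2)  # same result
```

The bound-parameter form is generally preferable for values that contain quotes, since the driver handles the value as data rather than as SQL text.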

BigQuery Wildcard using TABLE_DATE_RANGE()

Great news about the new table wildcard functions this morning! Is there a way to use TABLE_DATE_RANGE() on tables whose names include a date but no prefix?
I have a dataset that contains tables named YYYYMMDD (no prefix). Normally I would query like so:
SELECT foo
FROM [mydata.20140319],[mydata.20140320],[mydata.20140321]
LIMIT 100
I tried the following but I'm getting an error:
SELECT foo
FROM
(TABLE_DATE_RANGE(mydata.,
TIMESTAMP('2014-03-19'),
TIMESTAMP('2015-03-21')))
LIMIT 100
as well as:
SELECT foo
FROM
(TABLE_DATE_RANGE(mydata,
TIMESTAMP('2014-03-19'),
TIMESTAMP('2015-03-21')))
LIMIT 100
The underlying bug here has been fixed as of 2015-05-14. You should be able to use TABLE_DATE_RANGE with a purely numeric table name. You'll need to end the dataset name with a '.' and enclose it in brackets so that the parser doesn't complain. This should work:
SELECT foo
FROM
(TABLE_DATE_RANGE([mydata.],
TIMESTAMP('2014-03-19'),
TIMESTAMP('2015-03-21')))
LIMIT 100
Note: the underlying bug has been fixed; please see my other answer.
The original response is left for posterity, since the workaround should still work in case you need it for some reason.
Great question. That should work, but it doesn't currently. I've filed an internal bug. In the meantime, a workaround is to use the TABLE_QUERY function, as in:
SELECT foo
FROM (
TABLE_QUERY(mydata,
"TIMESTAMP(table_id) BETWEEN "
+ "TIMESTAMP('2014-03-19') "
+ "AND TIMESTAMP('2015-03-21')"))
Note that with standard SQL support in BigQuery, you can use _TABLE_SUFFIX, instead of TABLE_QUERY. For example:
SELECT foo
FROM `mydata_*`
WHERE _TABLE_SUFFIX BETWEEN '20140319' AND '20150321'
Also check this question for more about BigQuery standard SQL.

how to use substr in SQL Server?

I have the following extract of code used in SAS and want to write its equivalent in SQL Server to extract data:
substr(zipname,1,4) in("2000","9000","3000","1000");run;
How do I write this in SQL Server?
I tried and got this error:
An expression of non-boolean type specified in a context where a
condition is expected
In SQL Server, there's no substr function (it's substring).
By the way, you need a complete query...
select blabla
from blibli
where substring(zipname, 1, 4) in ('2000', '9000', '3000', '1000')
assuming zipname is a varchar or something like that...
You need a table that you are getting the records from, and zipname would be a column in the table. The statement would be something like this:
select * from tablename where substring(zipname,1,4) in ('2000','9000','3000','1000')
Since you want the first x characters, you can also use the left() function.
where left(zipname, 4) in (values go here)
Bear in mind that your values have to be single-quoted; your question uses double quotes.
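A runnable sketch of the substring approach, using Python's sqlite3 as a stand-in for SQL Server (an assumption; SQLite spells the function substr, like SAS, while SQL Server uses substring, and substr(zipname, 1, 4) also covers the left(zipname, 4) variant). The zip codes are made up:

```python
import sqlite3

# In-memory SQLite stand-in for SQL Server; sample zip data is made up.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE zips (zipname TEXT)")
conn.executemany(
    "INSERT INTO zips VALUES (?)",
    [("2000AB",), ("9000XY",), ("4000ZZ",)],
)

# First four characters compared against a single-quoted IN list;
# substr(zipname, 1, 4) is equivalent to left(zipname, 4) here.
rows = conn.execute(
    "SELECT zipname FROM zips "
    "WHERE substr(zipname, 1, 4) IN ('2000', '9000', '3000', '1000') "
    "ORDER BY zipname"
).fetchall()
print(rows)  # [('2000AB',), ('9000XY',)]
```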