BigQuery select error. Error: Unexpected. Please try again - google-bigquery

When I run the following query I get this error: Error: Unexpected. Please try again.
It only happens when I run a query that returns a timestamp and a column named "profile", and only if I try to insert the result into a destination table.
SELECT 'helllo' as [profile],
TIMESTAMP( '2014-10-22' ) as date_out
I have tried changing the column name from "profile" to something else and it works, but I really need it to be profile...

I see the expected result when I try your query with "allowLargeResults" disabled, both in the web UI and with the 'bq' command-line tool:
+---------+---------------------+
| profile | date_out            |
+---------+---------------------+
| helllo  | 2014-10-22 00:00:00 |
+---------+---------------------+
If "allowLargeResults" is enabled, however, this produces an internal error. This is a bug in BigQuery, and I've filed it against the team. Is it possible for you to work around the issue by disabling allowLargeResults?

Related

Why can't I use the column name as the alias when I operate on dates?

Currently I am migrating a database from SQL Server to Spark using Hive SQL.
I had an issue when trying to convert a number to a date format. I found that the answer is:
from_unixtime(unix_timestamp(cast(DATE as string), 'dd-MM-yyyy'))
When I execute this query it returns the data; notice that I used an alias different from the name of the column FECHA:
SELECT FROM_UNIXTIME(UNIX_TIMESTAMP(CAST(FECHA AS STRING ) ,'yyyyMMdd'), 'yyyy-MM-dd') AS FECHA_1
FROM reportes_hechos_avisos_diarios
LIMIT 1
| FECHA_1 |
| -------- |
| 2019-01-01 |
But when I use the same alias as the column name, it returns inconsistent information:
SELECT FROM_UNIXTIME(UNIX_TIMESTAMP(CAST(FECHA AS STRING ) ,'yyyyMMdd'), 'yyyy-MM-dd') AS FECHA
FROM reportes_hechos_avisos_diarios
LIMIT 1
| FECHA |
| -------- |
| 2.019 |
I know the trivial answer is to use an alias that is not the same as the column name, but I have a Tableau implementation that feeds from this query, and changing the column is complicated because I would basically have to change the whole implementation, so I need to preserve the column name. This query works for me in SQL Server, but I don't know why it doesn't work in Hive.
(Screenshots: Issue, Expected Result)
P.S. Thanks for your attention; this is the first question I've asked on Stack Overflow, and my native language is not English, so I apologize for any grammatical errors.
LIMIT 1 without ORDER BY can produce non-deterministic results from run to run, because row order is effectively random under parallel execution; various factors may affect it, but getting the same row back each time is not guaranteed.
My guess at what is happening: you are receiving a different row each time, and the date is corrupted in that row, which is why a weird result is returned.
Also, you can try another method of conversion:
select date(regexp_replace(cast(20200101 as string),'(\\d{4})(\\d{2})(\\d{2})','$1-$2-$3')) --put your column instead of constant.
Result:
2020-01-01
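The regexp_replace above just splits the eight digits into three capture groups and rejoins them with dashes. The same idea can be sketched outside Hive; here is a minimal Python illustration of the identical pattern (Python uses `\1-\2-\3` where Hive uses `$1-$2-$3`):

```python
import re

def yyyymmdd_to_iso(n: int) -> str:
    """Rewrite a numeric yyyyMMdd value such as 20200101 into '2020-01-01'."""
    return re.sub(r"(\d{4})(\d{2})(\d{2})", r"\1-\2-\3", str(n))

print(yyyymmdd_to_iso(20200101))  # 2020-01-01
```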

stats latest not showing any value for field

We have the following query:
index=yyy sourcetype=zzz "RAISE_ALERT" logger="aaa" | table uuid message timestamp | eval state="alert" | append [SEARCH index=yyy sourcetype=zzz "CLEAR_ALERT" logger="aaa" | table uuid message timestamp | eval state="no_alert" ] | stats latest(state) as state by uuid
But this query is not showing anything for state; it shows only uuid.
The query before and without latest works just fine. Here is a screenshot of the result of everything before stats:
If we replace stats latest with stats first, we can see uuid and state; it's just not the latest observed value of state for that uuid.
Any idea as to why this can happen?
Looks like the table clause was the issue. Removing both table clauses makes this work.

Stats Count Splunk Query

I wonder whether someone can help me please.
I'd made the following post about Splunk query I'm trying to write:
https://answers.splunk.com/answers/724223/in-a-table-powered-by-a-stats-count-search-can-you.html
I received some great help, but despite working on this for a few days now, concentrating on using eval if statements, I still have the same issue with the "Successful" and "Unsuccessful" columns showing blank results. So I thought I'd cast the net a little wider and ask whether someone may be able to look at this and offer some guidance on how I might get around the problem.
Many thanks and kind regards
Chris
I tried exploring your use case with the splunkd_access log and came up with a simple SPL query to help you.
In this query I am joining the output of two searches, which aggregate the required results (I'm not concerned about search performance here).
Give it a try. If you have access to the _internal index, this will work as is. You should be able to easily modify this to suit your events (e.g. replace user with ClientID).
index=_internal source="/opt/splunk/var/log/splunk/splunkd_access.log"
| stats count as All sum(eval(if(status <= 303,1,0))) as Successful sum(eval(if(status > 303,1,0))) as Unsuccessful by user
| join user type=left
[ search index=_internal source="/opt/splunk/var/log/splunk/splunkd_access.log"
| chart count BY user status ]
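The sum(eval(if(...))) pattern above is just conditional counting. As a language-neutral illustration (the status codes here are made up), the same aggregation for one user looks like this in Python:

```python
# Hypothetical status codes for one user; <= 303 counts as successful,
# mirroring the sum(eval(if(status <= 303, 1, 0))) clause in the SPL above.
statuses = [200, 200, 303, 404, 500]

successful = sum(1 for s in statuses if s <= 303)
unsuccessful = sum(1 for s in statuses if s > 303)

print(successful, unsuccessful)  # 3 2
```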
I updated your search from splunk community answers (should look like this):
`w2_wmf(RequestCompleted)` request.detail.Context="*test"
| dedup eventId
| rename request.ClientID as ClientID detail.statusCode AS statusCode
| stats count as All sum(eval(if(statusCode <= 303,1,0))) as Successful sum(eval(if(statusCode > 303,1,0))) as Unsuccessful by ClientID
| join ClientID type=left
[ search `w2_wmf(RequestCompleted)` request.detail.Context="*test"
| dedup eventId
| rename request.ClientID as ClientID detail.statusCode AS statusCode
| chart count BY ClientID statusCode ]
I answered in Splunk
https://answers.splunk.com/answers/724223/in-a-table-powered-by-a-stats-count-search-can-you.html?childToView=729492#answer-729492
but using dummy encoding, it looks like
`w2_wmf(RequestCompleted)` request.detail.Context="*test"
| dedup eventId
| rename request.ClientId as ClientID, detail.statusCode as Status
| eval X_{Status}=1
| stats count as Total sum(X_*) as X_* by ClientID
| rename X_* as *
This will give you ClientID, the count, and then a column for each status code found, with a sum for each code in that column.
As I gather you can't get this working; this query should show dummy encoding in action:
index=_internal sourcetype=*access
| eval X_{status}=1
| stats count as Total sum(X_*) as X_* by source, user
| rename X_* as *
This would give an output of something like
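Dummy encoding as used above simply turns each observed status value into its own counter column. A minimal Python sketch of the same idea, using made-up (client, status) events in place of the Splunk results:

```python
from collections import Counter

# Made-up (clientid, status) events standing in for the Splunk events.
events = [("c1", 200), ("c1", 404), ("c1", 200), ("c2", 500)]

# One Counter per client: a key per status code, analogous to the
# X_{status} columns created by eval X_{Status}=1 above.
per_client = {}
for client, status in events:
    per_client.setdefault(client, Counter())[status] += 1

print(per_client["c1"][200])  # 2
```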

0 results in MS Access totals query (w. COUNT) after applying criteria

A query I am working on is showing rather interesting behaviour that I haven't been able to debug so far.
This is the query before it gets buggy:
QryCount
SELECT EmpId, [Open/Close], Count([Open/Close]) AS Occurences, Attribute1, Market, Tier, Attribute2, MtSWeek
FROM qrySource
WHERE (Venue="NewYork") AND (Type="TypeA")
GROUP BY EmpId, [Open/Close], Attribute1, Market, Tier, Attribute2, MtSWeek;
The query gives precisely the results that I would expect it to:
#01542 | Open | 5 | Call | English | Tier1 | Complain | 01/01/2017
#01542 | Closed | 2 | Call | English | Tier2 | ProdInfo | 01/01/2017
#01542 | Open | 7 | Mail | English | Tier1 | ProdInfo | 08/01/2017
etc...
But in doing so it provides more records than needed at a subsequent step, thereby creating Cartesian products.
qrySource.[Open/Close] is a string field with possible values (you guessed it) "Open", "Closed", and Null, and it is actually provided by a mapping table at the creation stage of qrySource (not sure, but maybe this helps).
Now, the error comes in when I try to limit qryCount to only those records where [Open/Close] = "Open".
I tried both WHERE and HAVING to no avail. The query returns 0 records, which is not what I would like to see.
I thought that maybe it is because "Open" is a reserved term, but even changing it to "D_open" in the source table didn't fix the issue.
I also tried to filter for the desired records in a subsequent query:
SELECT *
FROM QryCount
WHERE [Open/Close] ="D_Open"
But nothing; still 0 records found.
I suspect it might somehow be related to some inherent properties of the COUNT function, but I'm not sure. Any help would be appreciated.
Everyone who participated, thank you, and apologies for providing insufficient/confusing information. I reckon the question could have been drafted better.
Anyhow, I found that the problem was apparently caused by the "/" in the Open/Close field name. As soon as I removed it from the field name in the original mapping table, the query performed as expected.
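For comparison, here is a small self-contained sketch using Python's sqlite3 (not Access, so quoting rules and behaviour differ), showing a grouped count restricted to "Open" rows with a plain WHERE clause; the column name containing the slash has to be quoted:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE qrySource (EmpId TEXT, "Open/Close" TEXT)')
conn.executemany(
    "INSERT INTO qrySource VALUES (?, ?)",
    [("#01542", "Open"), ("#01542", "Open"), ("#01542", "Closed"), ("#01542", None)],
)

# WHERE runs before GROUP BY, so only the 'Open' rows are counted.
rows = conn.execute(
    """SELECT EmpId, "Open/Close", COUNT(*) AS Occurrences
       FROM qrySource
       WHERE "Open/Close" = 'Open'
       GROUP BY EmpId, "Open/Close" """
).fetchall()

print(rows)  # [('#01542', 'Open', 2)]
```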

Rails / Postgres database - can't retrieve correct records from db

I need help troubleshooting a database query to generate correct results. This is a fitness app developed with Rails 4.2.6, Ruby 2.2.4, and a Postgres 1.9.0 database. When a user saves new body measurements, I'm trying to display changes (e.g. inches) since the last measurement. The problem is that change values for previously saved records are not calculated correctly in show views.
Here's what I've coded so far:
controllers/fitness_measurements_controller.rb:
before_action :get_second_latest_waist_measurement
def get_second_latest_waist_measurement
  @waist_second_latest_measurement = @member.fitness_measurements.order(:created_at).offset(1).last.waist || 0
end
fitness_measurements/show.html.erb:
<p>
<strong>Change in waist since last measurement:</strong>
<%= (@fitness_measurement.waist - @waist_second_latest_measurement).round(2) %> in
</p>
With test data, here's a table which contains results of my database query. Note the values in the "Calc. Change" column.
+------------+-----------+------------------+--------------------+
| Date       | Waist(in) | Calc. Change(in) | Correct Change(in) |
+------------+-----------+------------------+--------------------+
| 2016-10-01 | 37.5      | +1.00            | +1.00              |
+------------+-----------+------------------+--------------------+
| 2016-09-01 | 36.5      | 0.00             | +3.00              |
+------------+-----------+------------------+--------------------+
| 2016-08-01 | 33.5      | -3.00            | +0.50              |
+------------+-----------+------------------+--------------------+
| 2016-07-01 | 33.0      | -3.50            | 0.00               |
+------------+-----------+------------------+--------------------+
As you see, except for the last saved record (2016-10-01), the values in the "Calc. Change" column are not correct. There is something wrong with how I've designed the query.
Currently, my query retrieves the record one below the last saved measurement as the "second latest waist measurement." In the test case, that's the record created on 2016-09-01. This works for the LAST record saved to the database (in this case, 2016-10-01), but produces incorrect results when I request show page views of previously saved records. For instance, the record created on 2016-09-01 should be compared with the one created on 2016-08-01.
Using "offset(1)" in the query appears to be the root of my problem, but I don't know how to get the correct record. Is the solution to iterate through the records? I'm confused about how to do that in this situation.
How should I fix my database query to generate correct change values? If there's a better approach, please let me know. Thank you!
I'm not sure of the best way to do this the "Rails way", but to do it directly in PostgreSQL you can use window functions:
SELECT *, waist - LAG(waist) OVER (ORDER BY date) AS change FROM fitness_measurements WHERE member_id = ?
Tried this on PostgreSQL 9.3.x and it works without issues. I haven't tried it in Rails, but I guess you'd have to do something like:
sql = "SELECT *, waist - ..."
FitnessMeasurement.find_by_sql(sql)
...
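As a runnable illustration of the LAG idea (using Python's sqlite3 instead of Postgres, with the sample data from the question; window functions need SQLite 3.25+), note how each row's change is computed against its own predecessor rather than against one fixed baseline:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fitness_measurements (date TEXT, waist REAL)")
conn.executemany(
    "INSERT INTO fitness_measurements VALUES (?, ?)",
    [("2016-07-01", 33.0), ("2016-08-01", 33.5),
     ("2016-09-01", 36.5), ("2016-10-01", 37.5)],
)

# LAG(waist) looks at the previous row in date order, so every row is
# compared with its own predecessor (the first row has no predecessor).
rows = conn.execute(
    "SELECT date, waist, waist - LAG(waist) OVER (ORDER BY date) AS change "
    "FROM fitness_measurements ORDER BY date"
).fetchall()

for date, waist, change in rows:
    print(date, waist, change)
# 2016-07-01 33.0 None
# 2016-08-01 33.5 0.5
# 2016-09-01 36.5 3.0
# 2016-10-01 37.5 1.0
```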
The controller is the wrong place for that code; for a simpler implementation, add a method to the model that holds the saved values:
def previous
self.class.where(member: self.member).
where("date < ?", self.date).
order(date: :desc).
take
end
This gives you the previous instance for the member, from which you can then read the measurements, allowing logic such as:
@measurement.waist - @measurement.previous.waist