BigQuery: Failed to create view. Unexpected. Please try again - google-bigquery

I am trying to save a view in BigQuery, and keep getting the same error:
Failed to create view. Unexpected. Please try again.
The query is as follows:
SELECT
interaction.id AS Interaction.ID,
interaction.author.name AS Interaction.Author.Name,
interaction.author.username AS Interaction.Author.Username,
interaction.content AS Interaction.Content,
interaction.created_at_timestamp AS Interaction.Created_At_Timestamp,
klout.score AS Klout.Score,
twitter.geo.latitude AS Twitter.Geo.Latitude,
twitter.geo.longitude AS Twitter.Geo.Longitude,
twitter.media.expanded_url AS Twitter.Media.ExpandedUrl,
twitter.media.type AS Twitter.Media.Type,
twitter.place.country AS Twitter.Place.Country,
twitter.user.followers_count AS Twitter.User.Followers,
twitter.user.friends_count AS Twitter.User.Friends,
twitter.user.listed_count AS Twitter.User.Listed,
twitter.retweet.count AS Twitter.Retweet.Count
FROM
[**DATASET_NAME_OMITTED**.main_table]
WHERE
(interaction.id IS NOT NULL)
AND (interaction.created_at_timestamp IS NOT NULL)
AND (interaction.created_at_timestamp >= DATE_ADD(USEC_TO_TIMESTAMP(UTC_USEC_TO_HOUR(NOW())), -1, "DAY"))
AND (interaction.created_at_timestamp < USEC_TO_TIMESTAMP(UTC_USEC_TO_HOUR(NOW())))
The query validates and runs without any problems:
Valid: This query will process 203 MB when run.
I did notice that twitter.media is of type REPEATED RECORD. That said, removing the twitter.media.* fields does not fix the issue.
I have been able to successfully save other views with the same timestamp restrictions and naming conventions. Attempting to save this one consistently fails.
For context: This table is populated by DataSift via their BigQuery connector (default, catch-all schema).

This is really weird.
I ran an experiment and pulled out each of the alias operations, and it worked.
I then slowly added some of them back in, and again it continued working. However, it seems that certain aliases simply refuse to work (I have no idea why).
I ended up with the following, which contains most of your aliases and seems to work as expected:
SELECT
interaction.id AS Interaction.ID,
interaction.author.name AS Interaction.Author.Name,
interaction.author.username AS Interaction.Author.Username,
interaction.content AS Interaction.Content,
interaction.created_at_timestamp AS Interaction.Created_At_Timestamp,
klout.score AS Klout.Score,
twitter.geo.latitude AS Twitter.Geo.Latitude,
twitter.geo.longitude AS Twitter.Geo.Longitude,
twitter.media.expanded_url,
twitter.media.type AS Twitter.Media.Type,
twitter.place.country AS Twitter.Place.Country,
twitter.user.followers_count,
twitter.user.friends_count,
twitter.user.listed_count,
twitter.retweet.count AS Twitter.Retweet.Count
FROM [**DATASET_NAME_OMITTED**.main_table]
WHERE
(interaction.id IS NOT NULL)
AND (interaction.created_at_timestamp IS NOT NULL)
AND (interaction.created_at_timestamp >= DATE_ADD(USEC_TO_TIMESTAMP(UTC_USEC_TO_HOUR(NOW())), -1, "DAY"))
AND (interaction.created_at_timestamp < USEC_TO_TIMESTAMP(UTC_USEC_TO_HOUR(NOW())))
What seems really weird is that there is no pattern to what will and will not work. The twitter.user.* fields are integers and will not accept aliases, yet the integer field klout.score does accept an alias.
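One experiment worth trying, if the dots in the aliases are what the view dislikes: flatten the problem aliases to underscores instead of dropping them. A sketch covering just the fields that refused aliases (untested against your schema):
SELECT
interaction.id AS Interaction_ID,
twitter.user.followers_count AS Twitter_User_Followers,
twitter.user.friends_count AS Twitter_User_Friends,
twitter.user.listed_count AS Twitter_User_Listed
FROM [**DATASET_NAME_OMITTED**.main_table]
WHERE interaction.id IS NOT NULL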

Related

`ROUND()` function returns unexpected value

I found a specific case where Spanner's ROUND() function returns an unexpected value.
Here's what I found.
SELECT ROUND(34.092135, 8)
> 34.092135
SELECT ROUND(34.092136, 8)
> 34.092135999999996 // this is supposed to return 34.092136
SELECT ROUND(34.092137, 8)
> 34.092137
I found that these queries behave the same in BigQuery.
Is there a misassumption on my side, and if not, how can I make this work correctly?
Thanks.
The issue is present in both Cloud Spanner and BigQuery. I tried different values, and it only appears for a particular set of inputs: the unexpected result shows up for 33.092136, 34.092136, 35.092136, ..., 62.092136, 63.092136. Below 33.092136 and from 64.092136 onwards the issue does not appear. I also tried Cloud SQL (MySQL), where the issue is not present.
I have created an issue in the Public Issue Tracker for this. I would suggest you star the issue so that you get notified whenever there is an update on it.
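For what it's worth, this looks like ordinary binary floating-point behavior rather than a bug in ROUND itself: 34.092136 has no exact FLOAT64 representation, so ROUND can only return the nearest representable double. A workaround sketch for BigQuery standard SQL (assuming NUMERIC's 9-digit scale fits your data) is to round in decimal by casting first:
-- NUMERIC is a decimal type, so 34.092136 can be held exactly
SELECT ROUND(CAST(34.092136 AS NUMERIC), 8)
> 34.092136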

Oracle spatial request working on one instance and not on another

I have this statement, which is generated by Geoserver:
SELECT
shape AS shape
FROM
(
SELECT
c.chantier_id id,
sdo_geom.sdo_buffer(c.shape, m.diminfo, 1) shape,
c.datedebut datedebut,
c.datefin datefin,
o.nom operation,
c.brouillon brouillon,
e.code etat,
u.utilisateur_id utilisateur,
u.groupe_id groupe
FROM
user_sdo_geom_metadata m, lyv_chantier c
JOIN lyv_utilisateur u ON c.createur_id = u.utilisateur_id
JOIN lyv_etat e ON c.etat_id = e.etat_id
JOIN lyv_operation o ON c.operation = o.id
WHERE
m.table_name = 'LYV_CHANTIER'
AND m.column_name = 'SHAPE'
) vtable
WHERE
( brouillon = 0
AND ( etat != 'archive'
OR etat IS NULL )
AND sdo_filter(shape, mdsys.sdo_geometry(2003, 4326, NULL, mdsys.sdo_elem_info_array(1, 1003, 1), mdsys.sdo_ordinate_array(
2.23365783691406, 48.665657043457, 2.23365783691406, 48.9341354370117, 2.76649475097656, 48.9341354370117, 2.76649475097656, 48.665657043457, 2.23365783691406, 48.665657043457)), 'mask=anyinteract querytype=WINDOW') = 'TRUE' );
On my local instance (dockerized, if that explains anything) it works fine, but on another instance I get an error:
ORA-13226: interface not supported without a spatial index
I guess that SDO_FILTER is applied to the result of SDO_BUFFER, which is therefore not indexed.
But why is it working on my local instance?!
Is there some kind of weird configuration shenanigan that could explain the different behavior?
EDIT: The idea behind this is to get around a bug in Geoserver with Oracle databases, where it renders only the first point of MultiPoint geometries but works fine with MultiPolygon.
I am using a SQL view as a layer in Geoserver (hence the subselect, I guess).
First, you need to do some debugging here.
Connect to each instance, as the same user as your Geoserver's datasource, and run the SQL. From the same connection (on each instance) you must also verify that the user's metadata view (user_sdo_geom_metadata) has an entry for the table, and that the table has a spatial index whose owner is the same user you connect with.
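Something along these lines (a sketch; run as the datasource user on each instance) confirms both the metadata entry and the spatial index:
-- is the geometry column registered in the user's metadata view?
SELECT table_name, column_name
FROM user_sdo_geom_metadata
WHERE table_name = 'LYV_CHANTIER';
-- does a spatial (domain) index exist on the table, and is it valid?
SELECT index_name, status
FROM user_indexes
WHERE table_name = 'LYV_CHANTIER'
AND ityp_name = 'SPATIAL_INDEX';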
Also, your query (select ... from vtable) has a column shape which is a buffer of the column lyv_chantier.shape. In this SQL, sdo_filter expects a spatial index on vtable.shape, which cannot exist. You should try a different alias (e.g. buf_shape) and sdo_filter(buf_shape, ...), to see if the SQL fails in both instances, as it should.
I'm in a bit of a hurry right now, so my instructions are summarized. If you want, do this debugging and post the results. We can then go into details.
EDIT: Judging from your efforts, I'd say that the simplest approach is: 1) add a second geometry column to lyv_chantier (e.g. buf_shp); 2) update lyv_chantier set buf_shp = sdo_geom.sdo_buffer(shape, ...); 3) insert into user_sdo_geom_metadata the values (lyv_chantier, buf_shp, ...); 4) create a spatial index on column buf_shp; see the sketch below. You may need to consider a trigger to update buf_shp whenever shape changes...
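Sketched in SQL, the four steps would look roughly like this (the column name buf_shp is illustrative, and the diminfo/tolerance are copied from your existing SHAPE entry):
-- 1) add the extra geometry column
ALTER TABLE lyv_chantier ADD (buf_shp SDO_GEOMETRY);
-- 2) populate it with the buffered geometry (tolerance 1, as in your query)
UPDATE lyv_chantier c
SET c.buf_shp = (SELECT sdo_geom.sdo_buffer(c.shape, m.diminfo, 1)
FROM user_sdo_geom_metadata m
WHERE m.table_name = 'LYV_CHANTIER' AND m.column_name = 'SHAPE');
-- 3) register the new column, reusing the SHAPE metadata
INSERT INTO user_sdo_geom_metadata (table_name, column_name, diminfo, srid)
SELECT 'LYV_CHANTIER', 'BUF_SHP', diminfo, srid
FROM user_sdo_geom_metadata
WHERE table_name = 'LYV_CHANTIER' AND column_name = 'SHAPE';
-- 4) create the spatial index on the new column
CREATE INDEX lyv_chantier_buf_sx ON lyv_chantier (buf_shp)
INDEXTYPE IS MDSYS.SPATIAL_INDEX;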
This is a very practical approach, but you don't provide much info about your case (Oracle version, how many rows the table has, how it is used, why you want sdo_buffer, etc.), so that's my recommendation for now.
Also, since you are most likely using a SQL view as a layer in Geoserver (you don't say anything about that either), you could also consider using pure GS functionality to achieve your goal.
In the end, without a description of your goal, it's difficult to provide anything more tailor-made.

Select last 30 days rows from MARA using SSIS

I'm trying to select rows where the last date change (LAEDA) is within the last 30 days.
I tried LAEDA = ( sy-datum - 30 ) in the where clause, but it always generates an error. I connect to an SAP ABAP database.
The error message:
[EIS-Material 1] Error: ERPConnect.ERPException: Error while
receiving function return values: SYSTEM_FAILURE An error has occurred
while parsing a dynamic entry. at
ERPConnect.RFCAPI.ReceiveFunctionResults(UInt32 connectionHandle,
RFC_PARAMETER[] importing, RFC_PARAMETER[] changing, RFC_TABLE[]
tables, Encoding apiEncoding) at
ERPConnect.RFCFunction.ReceiveFunctionArguments(RFC_TABLE[]&
apiTables) at ERPConnect.RFCFunction.CallClassicAPI() at
ERPConnect.RFCFunction.ExecuteRFC(Byte[] tid) at
XtractKernel.Extractors.TableExtractor.GetPackage(RFCFunction& func)
at XtractKernel.Extractors.TableExtractor.Extract() at
XtractKernel.Extractors.ExtractorBase`1.Extract(ProcessResultCallback
processResult) at XtractIS.XtractSourceTable.PrimeOutput(Int32
outputs, Int32[] outputIDs, PipelineBuffer[] buffers) at
Microsoft.SqlServer.Dts.Pipeline.ManagedComponentHost.HostPrimeOutput(IDTSManagedComponentWrapper100
wrapper, Int32 outputs, Int32[] outputIDs, IDTSBuffer100[] buffers,
IntPtr ppBufferWirePacket)
So you are using a third-party tool to extract data from an SAP system. According to the error message, the tool makes a Remote Function Call (RFC) and hands the SQL to the ABAP backend. Your where condition must therefore be valid ABAP Open SQL syntax, regardless of the database behind it.
Your call (simplified) would look like this in ABAP (with the new @ syntax):
DATA(lf_dat) = sy-datum - 30.
SELECT matnr
  FROM mara
  WHERE laeda >= @lf_dat
  INTO TABLE @DATA(lt_matnr).
The problem is that, as far as I know, you are not allowed to do this calculation within the statement itself, so you have to use a variable. But since your third-party tool only allows you to write a where condition, I see no way to handle this except with a static date in the condition:
laeda >= '20190106' "YYYYMMDD
You can add the ABAP tag to your question to attract more specialists on this ABAP-specific topic.
I see in the Xtract IS online help that there's a custom function module named Z_THEO_READ_TABLE installed on the ABAP side, which executes the SQL sent by Xtract IS. The module is provided in two flavors, one being for ABAP >= 7.40 SP 5, so I guess it's a version for ABAP SQL strict mode.
So I thought that maybe you could write this ABAP-like where clause by using a "host expression", which is valid in ABAP SQL strict mode:
LAEDA = @( sy-datum - 30 )
Based on the error message you have, "An error has occurred while parsing a dynamic entry", I guess that this function module does something like SELECT (dyn-columns) FROM (dyn-table) WHERE (dyn-condition), i.e. all elements are dynamically defined at run time.
Unfortunately, the "ABAP documentation sql_cond - (cond_syntax) says that "Host expressions are not allowed in dynamic logical expressions."
So long, impossible to make a where clause as you wish.
There are probably many ways to get around this limit (like creating a SAP Query or BAPI in SAP and calling it from Xtract IS, etc.), but that's another question.
In MySQL/MariaDB, this works:
select ...
from ...
where date >= DATE_ADD(CURDATE(), INTERVAL -30 DAY)
but we need to know what database you are working with.
You may try this if you use SQL Server:
SELECT DATEADD(MONTH, -1, GETDATE())
You cannot specify an ABAP formula like that through SAP Open SQL.
Not to directly resolve your challenge (as you have a product limitation), but here is how a dynamic filter is achieved through the AecorSoft tool:
(DT_WSTR, 4)(DATEPART("yy" , GETDATE())) + RIGHT("0" + (DT_WSTR, 4)DATEPART("mm" , GETDATE()),2) + RIGHT("0" + (DT_WSTR, 4)DATEPART("dd" , GETDATE()),2)
For the complete use case, you can check the blog post SAP Table Delta Extract Made Easy through Dynamic Filters.

orientdb sql update edge?

I have been messing around with OrientDB SQL, and I was wondering if there is a way to update an edge of a vertex, together with some data on it.
Assuming I have the following data:
Vertex: Person, Room
Edge: Inside (from Person to Room)
Something like:
UPDATE Persons SET phone=000000, out_Inside=(
select #rid from Rooms where room_id=5) where person_id=8
Obviously, the above does not work. It throws an exception:
Error: java.lang.ClassCastException: com.orientechnologies.orient.core.id.ORecordId cannot be cast to com.orientechnologies.orient.core.db.record.ridbag.ORidBag
I tried looking at the sources on GitHub, searching for a syntax for a bag with one item, but couldn't find any (I found %, but that seems to be for serialization, not for SQL).
(1) Is there any way to do that, then? How do I update a connection? Is there even a way, or am I forced to create a new edge and delete the old one?
(2) When writing this, it came to my mind that perhaps edges are not the way to go in this case. Perhaps I should use a LINK instead. I have to say I'm not sure when to use which, or what the implications of using each are. I did find this, though:
https://groups.google.com/forum/#!topic/orient-database/xXlNNXHI1UE
the third comment from the top, by Lvca, where he says:
"The suggested way is to always create an edge for relationships"
Also, even if I should use a link, please respond to (1). I would be happy to know the answer anyway.
P.S.
In my scenario, a person can only be in one room. This will most likely not change in the future. Obviously, the edge has the advantage that in case I might want to change it (however improbable that may be), it will be very easy.
Solution (partial)
(1) The solution was simply to remove the field selection. Thanks to Lvca for pointing it out!
(2) --Still not sure--
The CREATE EDGE and DELETE EDGE commands have this goal: to spare the user from fighting with the underlying structure.
However, if you want to do it (a little "dirty"), try this one:
UPDATE Persons SET phone=000000, out_Inside=(
select from Rooms where room_id=5) where person_id=8
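For the record, the "clean" route those commands are meant for would be to replace the edge rather than update the raw field; a sketch using the classes from the question:
-- drop the existing Inside edge(s) leaving person 8
DELETE EDGE Inside FROM (SELECT FROM Persons WHERE person_id = 8);
-- create the new edge to room 5
CREATE EDGE Inside FROM (SELECT FROM Persons WHERE person_id = 8) TO (SELECT FROM Rooms WHERE room_id = 5);
-- update the plain property separately
UPDATE Persons SET phone = 000000 WHERE person_id = 8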
update EDGE Custom_Family_Of_Custom
set survey_status = '%s',
apply_source = '%s'
where #rid in (
select level1_e.#rid from (
MATCH {class: Custom, as: custom, where: (custom_uuid = '%s')}.bothE('Custom_Family_Of_Custom') {as: level1_e} .bothV('Custom') {as: level1_v, where: (custom_uuid = '%s')} return level1_e
)
)
It works well.

Rails generated query result is empty, but returns value in Postgres

I'm having a strange problem in Rails with a Postgres query.
The query looks something like this:
WeeklyPlanner.find_or_create_by_user_id(current_user.id).recipes.find(:all,
:conditions => ["
weekly_planner_events.time_start =
date_part('epoch', to_timestamp(?)::timestamptz at time zone 'CDT')",
Time.local(Time.now.year, Time.now.month, Time.now.day).to_i
])
This generates (as I can see in the console) the following SQL statement:
SELECT "recipes".* FROM "recipes"
INNER JOIN "weekly_planner_events" ON "recipes"."id" = "weekly_planner_events"."recipe_id"
WHERE "weekly_planner_events"."weekly_planner_id" = 2
AND (weekly_planner_events.time_start = date_part('epoch', to_timestamp(1347426000)::timestamptz at time zone 'CDT'))
My problem is that the generated SQL statement works well in psql or pgAdmin, but in Rails it returns an empty array. That is, if I copy and paste it as-is into a Postgres console, it works perfectly fine, but when I run it from the Rails console, it returns nothing, and I have no idea why that's happening.
I've tried the following:
Parameterizing 'epoch' and 'CDT'/the time zone (in order to remove the quotes)
Switching to a where statement with the same condition
Passing the variables with #{}s
Doing the search without the date_part('epoch', [float]) function works fine in Rails, but it's obviously not the result I need.
I'm finding this issue quite confusing; if there is any other data you need, please let me know and I will edit the post.
Thank you.
Maybe when you use the find_or_create method it takes the create path, so you create a new WeeklyPlanner for the user, and this brand-new WeeklyPlanner doesn't have recipes attached to it because it has just been created.
When you go to psql, you probably use an existing WeeklyPlanner.
But this is just a guess.
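One way to test that guess directly in SQL (a sketch; the weekly_planners table name is assumed from Rails conventions and may differ):
-- list each planner with how many events are attached to it
SELECT wp.id, COUNT(e.id) AS event_count
FROM weekly_planners wp
LEFT JOIN weekly_planner_events e ON e.weekly_planner_id = wp.id
GROUP BY wp.id
ORDER BY wp.id;
-- if the id Rails is using shows event_count = 0, find_or_create built a fresh planner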