Dataflow SQL - Unsupported type Geography - google-bigquery

I'm trying to create a Dataflow SQL query in Google BigQuery and I got this error:
Unsupported type for column centroid.centroid: GEOGRAPHY
I couldn't find any evidence that Dataflow SQL actually does not support geography data, and the documentation doesn't mention geography data at all. Is this the case, why is that, and is there any workaround?

No, unfortunately Dataflow SQL does not support Geography types. It supports a subset of BigQuery Standard SQL: only the data types listed explicitly on the page you linked are supported (the documentation should probably be clearer about that).
Dataflow SQL relies on ZetaSQL to parse and analyze queries, and ZetaSQL does not yet support Geography (you can see the current status here).
Unfortunately, for now the only workaround is to convert any GEOGRAPHY fields to a supported type.
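As an illustration, here is a minimal sketch in BigQuery Standard SQL, assuming a source table project.dataset.places with a GEOGRAPHY column named centroid (all names are placeholders): it materializes a copy that stores the geography as WKT text, which is a STRING and therefore readable by Dataflow SQL.

-- Convert the GEOGRAPHY column to its WKT text representation (STRING)
CREATE OR REPLACE TABLE `project.dataset.places_for_dataflow` AS
SELECT
  * EXCEPT (centroid),
  ST_ASTEXT(centroid) AS centroid_wkt  -- e.g. 'POINT(-122.35 47.62)'
FROM `project.dataset.places`;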

Related

Running SQL via SQLWorkbench versus via Tableau Prep

I have developed some SQL that reads from a Redshift table, does some manipulation (in particular, LISTAGG on some fields), and then writes to another Redshift table.
When I run the SQL using SQLWorkbench it executes successfully. When I embed it in a Tableau Prep flow (as "Complex SQL") I get several of these errors: "System error: AqlProcessor evaluation failed: [Amazon][Support] (40550) Invalid character value for cast specification." Presumably these relate to my treatment of data types. What I don't understand is what is so different about the environments that would cause different results like this. Is it because SQLWorkbench and Tableau Prep use different SQL interpreters? Or is my question too broad to even speculate about without going through the actual code?
Best guess is that Tableau, which has knowledge of the DDL, is adding some CAST() operations to the SQL, while SQLWorkbench is simpler and pushes the SQL to Redshift as written. This is based on there being no explicit CASTs in your SQL but an error message that identifies a CAST().
Look at stl_querytext for these two queries and see if they are being given to Redshift differently by the two benches. I suspect this will give you some clues to go on.
If there are no differences in the SQL then the issue may be with user / connection differences and more info will likely be needed about the issue.
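If it helps, a minimal sketch of that check, which reassembles the exact SQL text Redshift received (STL_QUERYTEXT stores each statement in 200-character chunks):

-- Stitch the stored 200-character chunks back into full statements
SELECT query,
       LISTAGG(text) WITHIN GROUP (ORDER BY sequence) AS full_sql
FROM   stl_querytext
GROUP  BY query
ORDER  BY query DESC
LIMIT  20;

Run the statement from both SQLWorkbench and Tableau Prep, then compare the two full_sql values for added CASTs.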

Where can I find what SQL dialect that MarkLogic TDE based SQL support?

MarkLogic TDE enables SQL-'like' access to document data.
Hence, via a common ODBC driver, other BI tools could possibly access a MarkLogic DB in a 'relational db' way. However, the challenge I have is knowing which SQL dialect MarkLogic supports.
For example, I want to find the first 10 records to get a snippet of the data. I could do that with
select top 10 * from book (MS SQL)
or
select * from book where rownum <= 10 (Oracle SQL)
How do I do the same with MarkLogic SQL?
There are actually many such SQL syntax questions; I need to find the equivalents of what I normally use with MS SQL.
Is there a wiki page showing the differences between MarkLogic SQL and MS SQL?
In general, MarkLogic supports the syntax from the SQL92 standard; for your TOP/ROWNUM question specifically, see the sketch after the list below.
Supported SQL Statements, Functions and Types
This section describes the SQL statements and functions supported in MarkLogic. The topics are:
Supported Statements
Supported Functions
Supported Types
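For the first-10-records case, a sketch, on the assumption that MarkLogic's dialect accepts the common LIMIT clause (check the Supported Statements section above to confirm):

-- Fetch a 10-row snippet of the view
SELECT * FROM book LIMIT 10;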

Hana Column Store dialect to Oracle 12c SQL

While trying to benchmark Oracle's Database In-Memory, we were looking for publicly available benchmarking data sets and tools. The CH-benCHmark suited our requirement exactly, but it has the HANA Column Store dialect as part of its source files.
So, our requirement is to convert these HANA Column Store dialect SQLs to Oracle 12c SQLs. A Google search returned the conversion from Oracle to the HANA dialect, not the reverse.
Has anyone come across this requirement? Is there a simple/direct way to do the conversion?
Any pointers would be much appreciated.
Yes, I have done this exercise! There's no direct way from the HANA dialect to the Oracle dialect, but you can make use of ORACLE_LOADER and its semantics to effectively produce the Oracle dialect. The only problem you may face is the flow, since HANA's load flow is totally different from Oracle's schema creation flow.
For example:
you can easily use the LOAD FROM FILE... syntax in HANA, but you need an externally organized table in the case of Oracle.
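To make that concrete, a minimal sketch of the Oracle side, with placeholder names throughout: where HANA loads a CSV directly, Oracle 12c reads the file through an externally organized table driven by ORACLE_LOADER, which you then copy into a regular table.

-- Directory object pointing at the folder holding the CSV (needs privileges)
CREATE DIRECTORY bench_dir AS '/data/chbenchmark';

-- External table: Oracle parses the CSV on the fly via ORACLE_LOADER
CREATE TABLE orders_ext (
  o_id      NUMBER,
  o_comment VARCHAR2(100)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY bench_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('orders.csv')
);

-- Materialize into an ordinary heap table once the external read works
CREATE TABLE orders AS SELECT * FROM orders_ext;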

SAP Hana SQL Type mapped to OData Edm.boolean

In the SAP HANA developer guide there is a list explaining the SQL-to-EDM type mapping.
Missing from it is how to map a SQL type to Edm.Boolean; does anyone know how?
In the SAP HANA developer guide (SPS 07) it also says that the OData implementation in SAP HANA XS supports only those SQL types listed in the mentioned list/table. Therefore Edm.Boolean is not supported.
A workaround could be to use Edm.Byte instead of Edm.Boolean, which maps to the TINYINT HANA SQL type. If you only want a true/false or 0/1 value, I think TINYINT is the closest SQL type you can get to a Boolean.
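A sketch of that workaround in HANA SQL, with hypothetical schema and table names: declare the flag as TINYINT so the XS OData layer exposes it as Edm.Byte, and treat 0/1 as false/true by convention.

-- TINYINT stands in for a boolean; 0 = false, 1 = true (by convention)
CREATE COLUMN TABLE "MYSCHEMA"."SETTINGS" (
  "ID"        INTEGER PRIMARY KEY,
  "IS_ACTIVE" TINYINT DEFAULT 0
);

INSERT INTO "MYSCHEMA"."SETTINGS" VALUES (1, 1);  -- 1 represents true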

How to insert LONG BINARY from SQL Server to Oracle

I need to get a copy of a SQL Server 2008 table into an Oracle RDBMS. I have a database link to the SQL Server, and the database has a table containing a LONG BINARY column.
When I issue
create table test_ora as select * from mssqltable@dblink
I get the error
Can't convert LONG
I tried to use to_lob, to_char, hextoraw, and a raft of other Oracle conversion functions, but still haven't beaten the issue. Do you have any ideas?
P.S. I'm away from work right now, so I can't give the exact ORA- error number.
There is a way to do that with an undocumented Oracle package:
http://tonguc.wordpress.com/2008/08/28/how-to-transfer-long-datatype-over-dblink/
I would recommend a tool called Pentaho Data Integration. It is a free, small, and superb ETL tool.
Download page: community.pentaho.com
It will recreate all tables and types for you. How to do it:
pldwh.blogspot.co.uk/2013/03/pentaho-data-integration-create-tables_1.html