How to convert SQL CONCAT to a GraphQL query for Hasura

I would like to convert the following Postgres query to GraphQL for Hasura. How can I do it?
SELECT CONCAT(CAST("vendorId" AS text), split_part("customerUserName", '+', 2)) AS id,
       "vendorId",
       "customerUserName"
FROM "VendorCustomertList";

You can create a Postgres VIEW and track that via Hasura. From there you have all the same capabilities as a table: changing GraphQL field names, relationships, permissions, etc.
http://localhost:9695/console/data/sql is where you can write custom SQL and have it be tracked as a migration.
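For example, a view covering the query above might look like the following sketch. The view name VendorCustomerIdList is a placeholder; the table and column names are taken from the question:

```sql
-- Hypothetical view name; table and columns come from the question.
CREATE VIEW "VendorCustomerIdList" AS
SELECT
  CONCAT(CAST("vendorId" AS text), split_part("customerUserName", '+', 2)) AS id,
  "vendorId",
  "customerUserName"
FROM "VendorCustomertList";
```

Once the view is tracked in Hasura, it can be queried like any other table:

```graphql
query {
  VendorCustomerIdList {
    id
    vendorId
    customerUserName
  }
}
```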

Related

Mulesoft not able to pass dynamic SQL queries based on environments

Hello, for demonstration purposes I have trimmed down my actual SQL query.
I have a SQL query
SELECT *
FROM dbdev.training.courses
where dbdev is my DEV database name. When I migrate to the TEST environment, I want my query to dynamically change to
SELECT *
FROM dbtest.training.courses
I tried using input parameters like {env: p('db_name')} and using them in the query as
SELECT * FROM :env.training.courses
or
SELECT * FROM (:env).training.courses
but none of them worked. I don't want to keep my SQL query in a properties file.
Can you please suggest a way to write my SQL query dynamically based on the environment?
The only alternative I see is to deploy separate jars with different code for different environments.
You can set the value of the property to a variable and then use the variable with string interpolation.
Warning: creating dynamic SQL queries using any kind of string manipulation may expose your application to SQL injection security vulnerabilities.
Example:
#['SELECT * FROM $(vars.database default "dbtest").training.courses']
Actually, you can do a completely dynamic or partially dynamic query using the MuleSoft DB connector.
Please see this repo:
https://github.com/TheComputerClassroom/dynamicSQLGETandPATCH
Also, I'm about to post an update that allows joins.
At a high level, this is a "Query Builder" where the code that builds the query is written in DataWeave 2. I'm working on another version that allows joins between entities, too.
If you have questions, feel free to reply.
One way to do it is:
Create a variable before the DB Connector, e.g.
getTableName = "${env}.training.courses"
Then write the SQL query using DataWeave string interpolation:
#["SELECT * FROM $(vars.getTableName)"]

How do I query a specific range of Firebase's analytics table using Data Studio's date parameters?

I've been reading up on how to query a wildcard table in BigQuery, but Data Studio doesn't seem to recognize the _TABLE_SUFFIX keyword.
Google Data Studio on using parameters
Google BigQuery docs on querying wildcard tables
I'm trying to use the recently added date parameters for a custom query in Data Studio. The goal is to prevent the custom query from scanning all partitions to save time.
When using the following query:
SELECT
*
FROM
`project-name.analytics_196324132.events_*`
WHERE
_TABLE_SUFFIX BETWEEN DS_START_DATE AND DS_END_DATE
I receive the following error:
Unrecognized name: _TABLE_SUFFIX
I expected the suffix keyword to be recognized so that the custom query is more efficient. But I get this error message. Does Data Studio not yet support this? Or is there another way?
It could be that you are setting the query in the wrong place. I created a Data Source from a Custom Query and the wildcard worked. The query I tested was the following, similar to yours, since _TABLE_SUFFIX is a pseudo-column available in standard SQL in BigQuery:
select
*
from
`training_project.training_dataset.table1_*`
where
_TABLE_SUFFIX BETWEEN '20190625' AND '20190626'
As per your comments, you are trying to add a query in the formula field of a custom parameter; however, the formula field only accepts basic math operations, functions, and branching logic.
The workaround I can see is to build a SELECT query and use it as a Custom Query in the Data Source definition, so that the query can calculate any extra fields in advance (steps 5, 6 and 7 from this tutorial).
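If the goal is still to use the date parameters inside the Custom Query, note that Data Studio passes them to BigQuery as named query parameters, which standard SQL references with an @ prefix. Assuming date parameters are enabled for the custom query's data source, a sketch of the corrected query would be:

```sql
SELECT
  *
FROM
  `project-name.analytics_196324132.events_*`
WHERE
  _TABLE_SUFFIX BETWEEN @DS_START_DATE AND @DS_END_DATE
```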

Adding <?xml-multiple?> in select statement

I select data from an Oracle SQL database using XMLELEMENTs. These data get passed to an application that will convert it into JSON and then sends it to a REST API.
Currently, I have the same issue as here, and the solution should be to add <?xml-multiple?> as a tag.
How can I select it from the database?
SELECT XMLELEMENT("Body",
         XMLELEMENT("User",
           XMLELEMENT("Name", UserName),
           XMLELEMENT("Adress", Adress)))
FROM USERS;
Let's say I want to mark that there could be multiple users with xml-multiple.
How do I need to change my query?
How about using the XMLPI function? It allows you to add processing instructions to the XML.
SELECT XMLELEMENT("Body",
         XMLPI("xml-multiple"),
         XMLELEMENT("User",
           XMLELEMENT("Name", UserName),
           XMLELEMENT("Adress", Adress)))
FROM USERS;

Determine/Find underlying SQL field type of a Django Field

Is there an easy way to determine or find the underlying SQL field type of a Django field, for any of the database backends supported by default? I have searched the web, and there is no documentation on how Django fields are represented in SQL in each of the supported databases. The only way for me to see the underlying SQL field type is to run the sqlmigrate command of manage.py and examine the SQL code.
The type depends on the database backend, so you need to get a db connection first:
from django.db import connection
and now you can look up the field via the model Meta API:
my_field = MyModel._meta.get_field('my_field_name')
and use its db_type method:
my_field.db_type(connection)
which will return something like "varchar(10)".
Be sure you really need to do this, though. Usually this information is only useful inside migrations.

Knex.js divide value of a column by another column

Hello, I'm trying to figure out how to create a query like this with Knex:
SELECT product.price/100 AS priceInDollars
but I'm getting the error 'price/100 not found'.
Related question: divide the value of a column by another column
Knex seems to wrap the columns in quotes, so such operations cannot be supported by the Knex query builder, as the database would interpret them as literals.
knex.column('title', 'author', 'year').select().from('books')
Outputs:
select `title`, `author`, `year` from `books`
However, knex also provides a way to fire raw SQL statements, so you would be able to execute this query.
knex.raw('SELECT product.price/100 AS priceInDollars FROM product').then(function(resp) { ... });
Further reading: Knex Raw Queries
This can also be done by using knex.raw only for the columns.
You have two possible solutions:
Raw SQL:
You have the possibility of using knex.raw to execute a full raw SQL query against the database (as other answers have already indicated). However, if you are using a tool like Knex, this is usually something you want to avoid, especially when you are using the query builder for more complicated queries and relationships; I assume that is why you are using Knex in the first place.
Partial raw:
You can use knex.raw for the specific column instead.
Let's consider the following query:
SELECT id, products.price/100 AS priceInDollars, created_at FROM products WHERE id = 'someUUID';
You can execute this with Knex in the following format:
knex
  .select([
    'id',
    knex.raw('products.price::numeric/100 as priceInDollars'),
    'created_at'
  ])
  .from('products')
  .where({ id: 'someUUID' });
My assumption in this answer is that PostgreSQL is used (hence the ::numeric cast). If you want a float after the division, you will need to cast, depending on what types your database supports.