How to convert `json query` to `sql query`?

I have an AngularJS app designed to build queries in JSON format; those queries are built with many tables, fields, and operators like "join", "inner", "where", "and", "or", "like", etc.
The AngularJS app sends these JSON queries to my Django REST Framework backend, so I need to translate each JSON query into a SQL query in order to run raw SQL, after first validating which tables/models are allowed in the select.
I don't need a full JSON-to-SQL translation; I just want to translate selects, with support for clauses like "where", "and", "or", and "group_by".
For a better understanding of my question, here are the snippets:
{
  "selectedFields": {
    "orders": {
      "id": true,
      "orderdate": true
    },
    "customers": {
      "customername": true,
      "customerlastname": true
    }
  },
  "from": ["orders"],
  "inner_join": {
    "customers": {
      "on_eq": [
        {
          "orders": {
            "customerID": true
          }
        },
        {
          "customers": {
            "customerID": true
          }
        }
      ]
    }
  }
}
SELECT
Orders.OrderID,
Customers.CustomerName,
Customers.CustomerLastName,
Orders.OrderDate
FROM Orders
INNER JOIN Customers
ON Orders.CustomerID=Customers.CustomerID;
I took the example from: http://www.w3schools.com/sql/sql_join.asp
Please note that I am not trying to serialize any SQL query output to JSON.

I found a NodeJS package (https://www.npmjs.com/package/json-sql) that converts JSON queries to SQL queries, so I wrote a NodeJS script and then created a Python class to call the NodeJS script.
With this approach I just need to send all AngularJS queries following this syntax (https://github.com/2do2go/json-sql/tree/master/docs#join).
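For illustration, the join from the question would look roughly like this in json-sql's syntax (a sketch; the exact join option names should be double-checked against the json-sql docs linked above):
// Hypothetical json-sql input for the Orders/Customers join above;
// verify the join option names against the json-sql docs.
var json_query = {
  type: 'select',
  table: 'orders',
  fields: ['orders.id', 'orders.orderdate', 'customers.customername', 'customers.customerlastname'],
  join: {
    customers: {
      on: { 'orders.customerID': 'customers.customerID' }
    }
  }
};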
NodeJS script:
// Use:
//$ nodejs reporter/services.js '{"type":"select","fields":["a","b"],"table":"table"}'
var jsonSql = require('json-sql')();
var json_query = JSON.parse(process.argv[2]);
var sql = jsonSql.build(json_query);
console.log(sql.query);
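Running the usage example from the comment should print something close to the following (the exact quoting and formatting depend on the json-sql version and dialect):
// Example run (output may vary slightly by json-sql version/dialect):
// $ nodejs reporter/services.js '{"type":"select","fields":["a","b"],"table":"table"}'
// select "a", "b" from "table";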
DRF class:
from unipath import Path
import json

from django.conf import settings
from Naked.toolshed.shell import muterun_js

full_path = str(Path(settings.BASE_DIR, "reporter/services.js"))


class JSONToSQL:
    def __init__(self, json_):
        self.json = json_
        self.sql = None
        self.dic = json.loads(json_)
        self.to_sql()

    def to_sql(self):
        response = muterun_js('%s \'%s\'' % (full_path, self.json))
        if response.exitcode == 0:
            self.sql = str(response.stdout).replace("\n", "")

You could write some custom JS to parse the object like this:
var selectedfields = '';
var fields = Object.keys(obj.selectedFields);
for (var i = 0; i < fields.length; i++) {
  var subfields = Object.keys(obj.selectedFields[fields[i]]);
  for (var j = 0; j < subfields.length; j++) {
    // separate the selected columns with commas
    if (selectedfields !== '') {
      selectedfields = selectedfields + ', ';
    }
    selectedfields = selectedfields + fields[i] + '.' + subfields[j];
  }
}
var from = '';
for (var i = 0; i < obj.from.length; i++) {
  if (from === '') {
    from = obj.from[i];
  } else {
    from = from + ',' + obj.from[i];
  }
}
var output = 'SELECT ' + selectedfields + ' FROM ' + from;
document.getElementById('output').innerHTML = output;
Or in Angular you would set $scope.output = ... from within a controller, perhaps.
jsfiddle here: https://jsfiddle.net/jsheridan390/fpbp6cz0/1/
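The same idea as a small self-contained function (a sketch; jsonToSelect is a hypothetical name, not part of any library), which for the JSON in the question yields the SELECT/FROM part:
// Minimal sketch: build the SELECT/FROM part from the question's JSON shape.
// jsonToSelect is a hypothetical helper name, not from the question.
function jsonToSelect(query) {
  var fields = [];
  Object.keys(query.selectedFields).forEach(function (table) {
    Object.keys(query.selectedFields[table]).forEach(function (column) {
      fields.push(table + '.' + column);
    });
  });
  return 'SELECT ' + fields.join(', ') + ' FROM ' + query.from.join(', ');
}

// For the JSON in the question this returns:
// "SELECT orders.id, orders.orderdate, customers.customername, customers.customerlastname FROM orders"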

Related

How to validate an object inside a JSON schema in Karate, whether it's empty or contains a series of key:value pairs?

I am trying to validate an API response using Karate for either of these two states.
Scenario 1 (when it returns a contractData object that contains a fee key):
{
  "customer": {
    "financialData": {
      "totalAmount": 55736.51,
      "CreateDate": "2022-04-01",
      "RequestedBy": "user1@test.com"
    },
    "contractData": {
      "Fee": 78.00
    }
  }
}
Scenario 2 (when it returns an empty contractData object):
{
  "customer": {
    "financialData": {
      "totalAmount": 55736.51,
      "CreateDate": "2022-04-01",
      "RequestedBy": "user1@test.com"
    },
    "contractData": {}
  }
}
How can I write my schema validation logic to validate both states?
The best I could come up with is to write it like this:
* def schema = {"customer":{"financialData":{"totalAmount":"#number","CreateDate":"#?isValidDate(_)","RequestedBy":"#string"},"contractData":{"Fee":"##number"}}}
* match response == schema
And it seems like it works for both scenarios above, but I am not sure whether this is the best approach. The problem is that if I have more than one key:value pair inside the "contractData" object and I want to be sure all those keys are present when it is not empty, I cannot check that this way: the approach treats each individual key:value pair as optional, so the schema will match even when only some of the keys are present.
Wow, I have to admit I've never come across this case before, and that's saying something. I was finally able to figure out a possible solution:
* def chunk = { foo: 'bar' }
* def valid = function(x){ return karate.match(x, {}).pass || karate.match(x, chunk).pass }
* def schema = { hey: '#? valid(_)' }
* def response1 = { hey: { foo: 'bar' } }
* def response2 = { hey: { } }
* match response1 == schema
* match response2 == schema
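Adapting that trick to the shape in the question, the chunk lists every key that must be present when contractData is non-empty (a sketch; the financialData checks are collapsed to '#object' for brevity, and the chunk contents are an assumption):
# sketch: contractData must be {} or match contractChunk exactly
* def contractChunk = { Fee: '#number' }
* def validContract = function(x){ return karate.match(x, {}).pass || karate.match(x, contractChunk).pass }
* def schema = { customer: { financialData: '#object', contractData: '#? validContract(_)' } }
* match response == schema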

GraphQL stitch and union

I need to 'aggregate' multiple GraphQL services (with the same schema) into a single read-only (query-only) service exposing the data from all of them. For example:
---- domain 1 ----
"posts": [
{
"title": "Domain 1 - First post",
"description": "Content of the first post"
},
{
"title": "Domain 1 - Second post",
"description": "Content of the second post"
}
]
---- domain 2 ----
"posts": [
{
"title": "Domain 2 - First post",
"description": "Content of the first post"
},
{
"title": "Domain 2 - Second post",
"description": "Content of the second post"
}
]
I understand that 'stitching' is not meant for use cases like this, but rather for combining different microservices into the same API. In order to have the same type names in a single API, I implemented 'poor man's namespaces' by appending the domain name to all data types on the fly. However, I'm only able to make a query with two different types, like this:
query {
domain_1_posts {
title
description
}
domain_2_posts {
title
description
}
}
but it results in a data set consisting of two arrays:
{
"data": {
"domain_1_posts": [
{ ...},
],
"domain_2_posts": [
{ ...},
]
}
}
I would like to hear your ideas on what I can do to combine these into a single dataset containing only posts.
One idea is to add my own resolver that calls the actual resolvers and combines the results into a single array (if that is supported at all).
Also, as a plan B, I could live with sending a 'domain' param to the query and then constructing the query toward the first or second domain (but I'd like to keep the initial query domain-agnostic, e.g. without using domain names in the query itself).
Thanks in advance for all suggestions...
I managed to find a solution for my use case, so I'll leave it here in case anyone bumps into this thread...
As already mentioned, stitching should be used to compose a single endpoint from multiple API segments (microservices). If you try to stitch schemas containing the same types or queries, your request will be 'routed' to a pre-selected instance (so, only one).
As @xadm suggested, the key to 'merging' data from multiple schemas into a single data set is using custom fetch logic for the Link used for the remote schema, as explained below:
1) Define custom fetch function matching your business needs (simplified example):
const customFetch = async (uri, options) => {
  // read the operation name from the request body
  const { operationName } = JSON.parse(options.body);
  // do not merge introspection query results!!!
  // for introspection queries always use a predefined (first?) instance
  if (operationName === 'IntrospectionQuery') {
    return fetch(services[0].uri, options);
  }
  // array of fetch calls to the different endpoints
  const calls = [
    fetch(services[0].uri, options),
    fetch(services[1].uri, options),
    fetch(services[2].uri, options),
    ...
  ];
  // execute the calls in parallel
  const data = await Promise.all(calls);
  // do whatever you need to merge the data according to your needs
  const retData = await customBusinessLogic(data);
  // return a new response containing the merged data
  return new fetch.Response(JSON.stringify(retData), { "status": 200 });
}
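For the posts example from the question, customBusinessLogic could simply concatenate the posts arrays from each response (a sketch; the response body shape { data: { posts: [...] } } is assumed from the example above):
// Sketch: merge per-domain responses into a single posts array.
// Assumes each element is a fetch Response whose JSON body looks
// like { data: { posts: [...] } }, as shown in the question.
const customBusinessLogic = async (responses) => {
  const bodies = await Promise.all(responses.map(r => r.json()));
  const posts = bodies.reduce((all, b) => all.concat(b.data.posts), []);
  return { data: { posts } };
};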
2) Define a link using the custom fetch function. If you are using identical schemas, you don't need to create links to each instance; just one should be enough.
const httpLink = new HttpLink({ uri: services[0].uri, fetch: customFetch });
3) Use the link to create a remote executable schema:
const schema = await introspectSchema(httpLink);
return makeRemoteExecutableSchema({
  schema,
  link: httpLink,
  context: ({ req }) => {
    // inject http request headers into the context if you need them
    return {
      headers: {
        ...req.headers,
      }
    }
  },
})
4) If you want to forward http headers all the way to the fetch function, use apollo-link-context's setContext:
// link for forwarding headers through the context
const contextLink = setContext((request, previousContext) => {
  if (previousContext.graphqlContext) {
    return {
      headers: {
        ...previousContext.graphqlContext.headers
      }
    }
  }
}).concat(httpLink);
Just to mention, dependencies used for this one:
const { introspectSchema, makeRemoteExecutableSchema, ApolloServer } = require('apollo-server');
const fetch = require('node-fetch');
const { setContext } = require('apollo-link-context');
const { HttpLink } = require('apollo-link-http');
I hope it will be helpful to someone...

Why is a Date query with aggregate not working in parse-server?

I want to query users where updatedAt is less than or equal to today, using aggregate because I'm doing other stuff like sorting by pointers.
I'm using Cloud Code to define the query on the server.
I first tried using MongoDB Compass to check my query using ISODate, and it works, but using it in NodeJS doesn't seem to work correctly.
I also noticed this problem, which they say was already fixed; I saw their tests as well.
Here's a link to that PR.
I'm passing the date like this:
const pipeline = [
  {
    project: {
      _id: true,
      process: {
        $substr: ['$_p_testdata', 12, -1]
      }
    }
  },
  {
    lookup: {
      from: 'Test',
      localField: 'process',
      foreignField: '_id',
      as: 'process'
    }
  },
  {
    unwind: {
      path: '$process'
    }
  },
  {
    match: {
      'process._updated_at': {
        $lte: new Date()
      }
    }
  }
];
const query = new Parse.Query('data');
return query.aggregate(pipeline);
I expect the value to be an array with a length of 4, but it only gives me an empty array.
I was able to fetch data without the date match.
Please try this:
const pipeline = [
  {
    match: {
      'editedBy.updatedAt': {
        $lte: new Date()
      }
    }
  }
];
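A sketch of how this could be wired into Cloud Code (the function name dataBeforeToday is made up for illustration; the class and field names are from the question and answer above):
// Hypothetical Cloud Code function wrapping the aggregate above;
// 'dataBeforeToday' is an illustrative name, not from the question.
Parse.Cloud.define('dataBeforeToday', async () => {
  const pipeline = [
    { match: { 'editedBy.updatedAt': { $lte: new Date() } } }
  ];
  const query = new Parse.Query('data');
  return query.aggregate(pipeline);
});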

Add a new element in each array of objects where array may have different length in mongodb

I have the following schema.
{
  id: week,
  output: {
    headerValues: [
      {startDate: "0707", headers: "ID|week"},
      {startDate: "0715", headers: "ID1|week1"},
      {startDate: "0722", headers: "ID2|week2"}
    ]
  }
}
I have to add a new field to each object in the headerValues array, like this:
{
  id: week,
  output: {
    headerValues: [
      {startDate: "0707", headers: "ID|week", types: "used"},
      {startDate: "0715", headers: "ID1|week1", types: "used"},
      {startDate: "0722", headers: "ID2|week2", types: "used"}
    ]
  }
}
I tried different approaches like this:
1)
db.CollectionName.find({}).forEach(function(data) {
  for (var i = 0; i < data.output.headerValues.length; i++) {
    db.CollectionName.update(
      {
        "_id": data._id,
        "output.headerValues.startDate": data.output.headerValues[i].startDate
      },
      {
        "$set": {
          "output.headerValues.$.types": "used"
        }
      },
      true, true
    );
  }
})
With this approach, the script executes and then fails; it updates some documents but ends with a failed statement.
2)
Another approach I followed uses this link:
https://jira.mongodb.org/browse/SERVER-1243
db.collectionName.update({"_id": "week"},
  { "$set": { "output.headerValues.$[].types": "used" } }
)
But it fails with error:
cannot use the part (headerValues of output.headerValues.$[].types) to
traverse the element ({headerValues: [ { startDate: "0707", headers:
"Id|week" } ]}) WriteError#src/mongo/shell/bulk_api.js:469:48
Bulk/mergeBatchResults#src/mongo/shell/bulk_api.js:836:49
Bulk/executeBatch#src/mongo/shell/bulk_api.js:906:13
Bulk/this.execute#src/mongo/shell/bulk_api.js:1150:21
DBCollection.prototype.updateOne#src/mongo/shell/crud_api.js:550:17
#(shell):1:1
I have searched many different ways to update each object in the array by adding a new field, but with no success. Can anybody please suggest what I am doing wrong?
Your query is {"_id": "week"}, but in your data the field is id, not _id.
So change {"_id": "week"} to {"id": "week"}, and also make sure you are on a recent MongoDB version: the all-positional operator $[] requires MongoDB 3.6+.
db.collectionName.update({"id": "week"},
  { "$set": { "output.headerValues.$[].types": "used" } }
)
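A sketch of the same update with updateMany, plus a quick check of the result (collection and field names are from the question):
// Same update via updateMany; $[] requires MongoDB 3.6+.
db.collectionName.updateMany(
  { "id": "week" },
  { "$set": { "output.headerValues.$[].types": "used" } }
);

// Verify: every element should now carry types: "used".
db.collectionName.findOne({ "id": "week" }).output.headerValues;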

Ramda `evolve` nested object

I have a list similar to this:
var list = [
{
stack: [
{
file: 'abc'
}
]
},
{
stack: [
{
file: 'abc'
},
{
file: 'abc'
}
]
}
];
I want to change every file name to e.g. 'def'. How can I do that using Ramda?
I tried things like:
var trans = {
file: replace('abc', 'def')
};
var f = R.evolve(trans)
var f2 = R.map(f)
R.map(f2, list)
But it doesn't work. I need to include the stack field in the solution somehow.
Well, it's not pretty, but I think this will do it:
R.map(R.over(
R.lensProp('stack'),
R.map(R.over(R.lensProp('file'), R.replace('abc', 'def')))
))(list)
You could probably also use an evolve inside, but lenses are pretty powerful, and more generally useful.
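For completeness, a sketch of the 'evolve inside' variant mentioned above (same behavior, assuming the list shape from the question):
// Sketch: nested evolve instead of lenses; maps over the list,
// evolving `stack` by mapping an evolve of `file` over its entries.
const transform = R.map(R.evolve({
  stack: R.map(R.evolve({ file: R.replace('abc', 'def') }))
}));

transform(list);
// => every file: 'abc' becomes file: 'def'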