Sencha Touch SQL proxy

I am using the SQL proxy in my Sencha Touch 2 app and I am able to store and retrieve data offline.
What I cannot do, and what the Sencha documentation does not seem to cover, is how to customize the Sencha SQL store.
For example, to create a SQL-based store, I did the following:
Ext.define("MyApp.model.Customer", {
    extend: "Ext.data.Model",
    config: {
        fields: [
            {name: 'id', type: 'int'},
            {name: 'name', type: 'string'},
            {name: 'age', type: 'string'}
        ],
        proxy: {
            type: "sql",
            database: "MyDb"
        }
    }
});
1. Now, how do I specify the size of the database?
2. How do I specify constraints on the fields, such as unique or primary key?
Say I have 4 columns in my database:
pid, name, age, phone
I want a composite primary key over multiple fields: (pid, name).
If I were creating the table via a SQL query, I would do something like:
CREATE TABLE Persons
(
    pid int,
    name varchar(255),
    age int,
    phone int,
    PRIMARY KEY (pid, name)
);
Now how do I achieve the same via the model?
3. If I want to interact with the database via a SQL query, I do the following:
var query = "SELECT * FROM CUSTOMER";
var db = openDatabase('MyDb', '1.0', 'MyDb', 2 * 1024 * 1024);
db.transaction(function (tx) {
    tx.executeSql(query, [], function (tx, results) {
        // do something here
    }, null);
});
Is this the best way to do it?
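For reference, the transaction boilerplate can be factored into a small helper. A minimal sketch, assuming the same MyDb database and CUSTOMER table as above (the helper name runQuery is illustrative):
// Promisified wrapper around the Web SQL calls shown above.
function runQuery(sql, params) {
    var db = openDatabase('MyDb', '1.0', 'MyDb', 2 * 1024 * 1024);
    return new Promise(function (resolve, reject) {
        db.transaction(function (tx) {
            tx.executeSql(sql, params || [],
                function (tx, results) { resolve(results); },
                function (tx, error) { reject(error); return true; }); // true rolls the transaction back
        });
    });
}

// Usage: parameterized queries instead of concatenating values into the SQL string.
runQuery("SELECT * FROM CUSTOMER WHERE age > ?", [18]).then(function (results) {
    for (var i = 0; i < results.rows.length; i++) {
        console.log(results.rows.item(i).name);
    }
});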

Related

Handling Data on PostgreSQL for a comment thread app similar to Reddit

First time posting a question on here.
I am working with PostgreSQL and had a question about how to format data coming from a PostgreSQL DB. For practice, I am building a comment thread app similar to Reddit's. I organized my database in the following way:
Post Table
CREATE TABLE Posts (
    id serial PRIMARY KEY,
    userId int,
    header VARCHAR
);
Comment Table
CREATE TABLE Comments (
    id serial PRIMARY KEY,
    userId int,
    commentText VARCHAR,
    parentId int,
    postId int
);
I want my end data to be an array of objects organized by postId, with a comments key/value pair that stores all the comments for that postId (example below).
Should I format my data this way using Postgres queries, sort it server side, or sort it on the client side? And is this a conventional/efficient way of handling/formatting this kind of data, or am I missing some other way of organizing data for a comment thread?
I'm used to working with MongoDB, so I'm not sure whether my preferred structure is just a MongoDB habit.
I would like my data to look somewhat like this (unless there is a better, more efficient way of doing it):
const posts = [
    {
        postId: 1,
        header: 'Post Header',
        comments: [
            {
                commentId: 1,
                text: 'comment text',
                parentId: null
            },
            {
                commentId: 2,
                text: 'comment text',
                parentId: 1
            }
        ]
    },
    {
        postId: 2,
        header: 'Post Header',
        comments: [
            {
                commentId: 3,
                text: 'comment text',
                parentId: null
            },
            {
                commentId: 2,
                text: 'comment text',
                parentId: 3
            }
        ]
    },
];
Thank you in advance!!
Postgres has a number of built-in ways to handle JSON: https://www.postgresql.org/docs/9.5/functions-json.html
Something like:
SELECT
    postId,
    json_agg(
        json_build_object('commentId', id, 'text', commentText, 'parentId', parentId)
    ) AS comments
FROM comments
GROUP BY postId
and then just join to the original Posts table for the metadata. (Note: json_agg, the aggregate form, is what collapses each post's comments into one array; json_build_array would not aggregate across rows.)
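A complete version of that join might look like the following sketch; the FILTER clause is only there so posts without comments get an empty array instead of [null]:
SELECT
    p.id AS postId,
    p.header,
    COALESCE(
        json_agg(
            json_build_object('commentId', c.id, 'text', c.commentText, 'parentId', c.parentId)
        ) FILTER (WHERE c.id IS NOT NULL),
        '[]'::json
    ) AS comments
FROM Posts p
LEFT JOIN Comments c ON c.postId = p.id
GROUP BY p.id, p.header;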

Sequelize addIndex on BLOB Field

I'm using v6 of Sequelize with MariaDB. When I try to execute the following in a migration file:
return queryInterface.addIndex('RVersion', ['terms'], {
    indicesType: 'FULLTEXT'
});
I get the following error message:
BLOB/TEXT column 'terms' used in key specification without a key length
What is the correct way to create this index in Sequelize?
Use the fields option in the options object instead of the second parameter, like this:
return queryInterface.addIndex('RVersion', {
    fields: [{
        name: 'terms',
        length: 255
    }],
    type: 'FULLTEXT' // this option's name is `type`
});
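For context, a complete migration file around that call might look like this sketch (the index name rversion_terms_fulltext is invented so the down step has something to remove):
'use strict';

module.exports = {
    async up(queryInterface) {
        // FULLTEXT index on the first 255 bytes of the BLOB/TEXT column.
        await queryInterface.addIndex('RVersion', {
            name: 'rversion_terms_fulltext', // hypothetical index name
            fields: [{ name: 'terms', length: 255 }],
            type: 'FULLTEXT'
        });
    },
    async down(queryInterface) {
        await queryInterface.removeIndex('RVersion', 'rversion_terms_fulltext');
    }
};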

Many to many with pivot data to dgraph using graphql schema

I have the below many-to-many relation in a relational DB and I want to transition it to Dgraph.
The relation also has extra columns in the pivot table products_stores, such as price and disc_price.
I have the below Dgraph schema using GraphQL:
type Product {
    id: ID!
    name: String! @id
    slug: String! @id
    image: String
    created_at: DateTime!
    updated_at: DateTime!
    stores: [Store] @hasInverse(field: products)
}
type Store {
    id: ID!
    name: String! @id
    logo: String
    products: [Product] @hasInverse(field: stores)
    created_at: DateTime!
    updated_at: DateTime!
}
I am a newbie to graph databases and I don't know how to define these extra pivot columns.
Any help would be greatly appreciated.
To model a pivot table that only links two types and holds no additional information, you model it as you did above. However, if your pivot table contains additional information about the relationship, you need to model it with an intermediate linking type, almost the same idea as above. I prefer these linking types to have a name describing the link; in this case I named it Stock, but that name could be anything you want. I also prefer camelCase for field names, so my example reflects this preference as well. (I added some search directives too.)
type Product {
    id: ID!
    name: String! @id
    slug: String! @id
    image: String
    createdAt: DateTime! @search
    updatedAt: DateTime! @search
    stock: [Stock] @hasInverse(field: product)
}
type Store {
    id: ID!
    name: String! @id
    logo: String
    stock: [Stock] @hasInverse(field: store)
    createdAt: DateTime! @search
    updatedAt: DateTime! @search
}
type Stock {
    id: ID!
    store: Store!
    product: Product!
    name: String! @id
    price: Float! @search
    originLink: String
    discPrice: Float @search
}
The hasInverse directive is only required on one edge of the inverse relationship; if you want, you can define it on both ends for readability without any side effects.
This model allows you to query many common use cases very simply, without the additional join statements you are probably used to in SQL. And the best part about Dgraph is that all of these queries and mutations are generated for you, so you don't have to write any resolvers! Here is one example, finding all the items in a store within a certain price range:
query ($storeName: String, $minPrice: Float!, $maxPrice: Float!) {
    getStore(name: $storeName) {
        id
        name
        stock(filter: { price: { between: { min: $minPrice, max: $maxPrice } } }) {
            id
            name
            price
            product {
                id
                name
                slug
                image
            }
        }
    }
}
For a query that finds only specific product names in a specific store, use the cascade directive to remove the undesired Stock nodes (until Dgraph finishes the nested filters RFC, which is in progress):
query ($storeName: String, $productIDs: [ID!]!) {
    getStore(name: $storeName) {
        id
        name
        stock @cascade(fields: ["product"]) {
            id
            name
            price
            product(filter: { id: $productIDs }) @cascade(fields: ["id"]) {
                id
                name
                slug
                image
            }
        }
    }
}
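Creating the link itself is just a mutation on the Stock type. A sketch using the addStock mutation that Dgraph generates for this schema (the values are illustrative; existing Product and Store nodes are referenced through their @id name fields):
mutation {
    addStock(input: [{
        name: "store-1-product-1"
        price: 9.99
        discPrice: 7.99
        store: { name: "My Store" }
        product: { name: "My Product" }
    }]) {
        stock {
            id
        }
    }
}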

Datatables - Inline editor only updates host table

I have a simple table using a left join:
Editor::inst( $db, 'enqitem', 'enqitemid' )
    ->fields(
        Field::inst( 'salstkitem.salsubid' ),
        Field::inst( 'salstkitem.condition1' ),
        Field::inst( 'enqitem.cost' )
    )
    ->leftJoin( 'salstkitem', 'salstkitem.salsubid', '=', 'enqitem.itemid' )
    ->where( 'enqitem.enqnr', 141316 )
    ->debug( true )
    ->process( $_POST )
    ->json();
In the editor, I have hidden the primary key of the non-host table:
editor = new $.fn.dataTable.Editor( {
    ajax: "datatables.php",
    table: "#example",
    fields: [{
        name: "salstkitem.salsubid",
        type: "hidden"
    }, {
        label: "Condition:",
        name: "salstkitem.condition1"
    }, {
        label: "Cost:",
        name: "enqitem.cost"
    }]
} );
I've set it to be editable inline:
$('#example').on( 'click', 'tbody td:not(:first-child)', function (e) {
    editor.inline( this, {
        onBlur: 'submit'
    } );
} );
When I edit inline, the cost updates successfully, as it's a member of the host table. However, condition1 will not update.
If I select the EDIT button, both fields update successfully.
The issue occurs purely with inline editing.
Does anyone have any idea why?
The debug output suggests it isn't trying to update at all; it is purely a SELECT query.
Allan, the creator of DataTables, got back to me:
If you are writing to a joined table rather than just the master table you need to have Editor submit the joined table's primary key as well (enqitem.enqitemid in this case I guess). When you are inline editing, by default it will only submit the edited field, but you can use the form-options object to change that:
editor.inline( this, {
    onBlur: 'submit',
    submit: 'allIfChanged'
} );
Regards,
Allan
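Putting the two pieces together with the original config, the result might look like this sketch (whether enqitem.enqitemid is the exact key you need follows from Allan's guess above):
editor = new $.fn.dataTable.Editor( {
    ajax: "datatables.php",
    table: "#example",
    fields: [{
        name: "enqitem.enqitemid",   // host-table primary key, per Allan's note
        type: "hidden"
    }, {
        name: "salstkitem.salsubid", // joined-table primary key, as before
        type: "hidden"
    }, {
        label: "Condition:",
        name: "salstkitem.condition1"
    }, {
        label: "Cost:",
        name: "enqitem.cost"
    }]
} );

// Inline editing that submits every changed field, not just the edited cell.
$('#example').on( 'click', 'tbody td:not(:first-child)', function (e) {
    editor.inline( this, {
        onBlur: 'submit',
        submit: 'allIfChanged'
    } );
} );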

How to avoid uploading duplicated row into BigQuery table with Google App Script

I'm uploading data into BigQuery from a Google Sheet using Google Apps Script. Is there a way to upload this data without uploading duplicated rows?
Here is the JobSpec I'm currently using:
var jobSpec = {
    configuration: {
        load: {
            destinationTable: {
                projectId: projectId,
                datasetId: 'ClientAccount',
                tableId: tableId
            },
            allowJaggedRows: true,
            writeDisposition: 'WRITE_APPEND',
            schema: {
                fields: [
                    {name: 'date', type: 'STRING'},
                    {name: 'Impressions', type: 'INTEGER'},
                    {name: 'Clicks', type: 'INTEGER'}
                ]
            }
        }
    }
};
So I'm looking for something like an allowDuplicates option... I think you get the idea. How can I do this?
BigQuery loads do not have any concept of deduplication, but you can effectively achieve it by loading all the data into an initial table and then querying that table with a deduplication query into another table. For example, ANY_VALUE collapses duplicates:
WITH t AS (SELECT 1 AS field, [1, 3, 4, 4] AS dupe)
SELECT ANY_VALUE(field), dupe FROM t, t.dupe GROUP BY dupe;
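Applied to the schema in the question, the staging-then-deduplicate step might look like the sketch below (ClientAccount.staging_table is an invented name, and treating date as the deduplication key is an assumption):
SELECT
    date,
    ANY_VALUE(Impressions) AS Impressions,
    ANY_VALUE(Clicks) AS Clicks
FROM ClientAccount.staging_table  -- hypothetical staging table
GROUP BY date;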
Alternatively, you can deduplicate the data with Apps Script directly in Google Sheets before loading it into BQ.
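A minimal sketch of that approach (assuming the active sheet holds exactly the rows to load):
// Remove duplicate rows (every cell equal) before building the load job.
function dedupeRows(rows) {
    var seen = {};
    return rows.filter(function (row) {
        var key = JSON.stringify(row);
        if (seen[key]) {
            return false;
        }
        seen[key] = true;
        return true;
    });
}

var rows = dedupeRows(SpreadsheetApp.getActiveSheet().getDataRange().getValues());
// ...serialize `rows` to CSV and pass it to BigQuery.Jobs.insert along with jobSpec.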
Or, as Victor said, you can deduplicate the data in BQ with something like:
SELECT
    *
FROM (
    SELECT
        *,
        ROW_NUMBER() OVER (PARTITION BY Field_to_deduplicate ORDER BY key) AS RowNr
    FROM YourDataset.YourTable
) AS X
WHERE X.RowNr = 1