How to specify the id for a row when adding data as an array - datatables

I need to be able to associate an ID with every row in my table so I can look it up at a later stage. I add rows dynamically using the row.add() method, but row.add() only takes a single parameter for the data, which in my case is just an array of cell values:
table.row.add( ["Tiger Nixon", 32, "System Architect"] );
How do I specify an id?

I have seen this requirement handled in one of two ways (or some variation on one of these approaches).
Both ways involve the source data providing the ID in each row array (or object). So, for example, if Tiger Nixon's data is sourced from an "employees" table in a database, then this value is assumed to be the primary key of the record.
Assuming the ID in this example is 1234...
1) Place the ID value in a hidden column
var table = $('#example').DataTable( {
    "columnDefs": [
        {
            "targets": [ -1 ], // the final column in the table
            "visible": false,
            "searchable": false
        }
    ]
} );
table.row.add( ["Tiger Nixon", 32, "System Architect", 1234] );
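Later, to locate the row for a given ID, one option is to filter the row indexes against that hidden column (a minimal sketch using the DataTables filter() API; the value 1234 and the final use of the nodes are just for illustration):
// Find the row index(es) whose hidden ID column (index 3 here) equals 1234:
var matches = table.rows().eq( 0 ).filter( function ( rowIdx ) {
    return table.cell( rowIdx, 3 ).data() === 1234;
} );
// Example use: get the matching node(s) as a jQuery object
var $nodes = table.rows( matches ).nodes().to$();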
2) Place the ID in the <tr> element as an attribute
var rowData = ["Tiger Nixon", 32, "System Architect", 1234];
var rowID = rowData.pop();
var row = table.row.add( rowData ).draw();
$( row.node() ).attr( 'data-record-id', rowID );
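Later, the <tr> can be found again through that attribute (a sketch; 1234 is the example ID and #example is the table from the first snippet):
var $tr = $( '#example tr[data-record-id="1234"]' );
var rowApi = table.row( $tr ); // DataTables row API object for that <tr>, if needed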
This may be a completely unnecessary footnote, but just in case:
You cannot rely on DataTables to provide a durable ID. It does assign unique index values to each row, but those apply to one instance of a DataTable. The values will not persist across different creations of the table, so there is no guarantee that "Tiger Nixon" will always be assigned the same index number.
If your data source does not have (or cannot provide) a unique ID already, then you could generate one yourself when the row is created, using a GUID generator or something similar, and then add that value to the data array used to create your row. But here, again, you would have to ensure that the ID you generated is used consistently in every situation where the "Tiger Nixon" data is used. I think that could be challenging, or impossible.
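For what it's worth, a minimal sketch of that idea, assuming a browser that supports crypto.randomUUID() (otherwise substitute any GUID generator):
var rowData = ["Tiger Nixon", 32, "System Architect"];
rowData.push( crypto.randomUUID() ); // client-generated ID, e.g. for the hidden column
table.row.add( rowData ).draw();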

Related

Effectively query range of arbitrary data stored in json array

I am trying to query data stored in an array in a field of type jsonb as follows:
Data description:
There is a table named items which has 4 columns: name, owner, seller, and metadata (metadata is the column that I want to query on). The data in metadata looks like this:
metadata {
    ...
    "attributes": [
        {
            "trait_type": "rareity",
            "value": "ultra-rare"
        },
        {
            "trait_type": "color",
            "value": "red"
        },
        {
            "trait_type": "attract",
            "value": 6
        },
        {
            "trait_type": "defend",
            "value": 5
        },
        ...
    ],
    ...
}
There are usually many trait_type/value pairs in attributes, and attributes sits at the top level of the metadata column. Pairs can be added to attributes arbitrarily; for example, I can add the pair { "trait_type": "material", "value": "silk" } to attributes in the example above.
Question:
I want to perform a mix of multiple string-matching searches and number-in-range searches, for example: trait_type = material, and 10 < defend < 20, and attract > 5.
I have thought of splitting attributes out into a table with two columns, trait_type and value, and querying with INTERSECT, but every extra condition requires another INTERSECT and it is very inefficient.
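For example, what I mean is something like this (the table and column names are only illustrative):
SELECT item_id FROM item_attributes WHERE trait_type = 'color'   AND text_value = 'red'
INTERSECT
SELECT item_id FROM item_attributes WHERE trait_type = 'defend'  AND num_value > 10 AND num_value < 20
INTERSECT
SELECT item_id FROM item_attributes WHERE trait_type = 'attract' AND num_value > 5;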
Is there any table design, indexing strategy, tools... so I can effectively query that kind of data?
As long as the free version of PostgreSQL lacks the kind of "in memory" features that Microsoft SQL Server offers, there is no proper way to guarantee performance for this type of query. The usual approach is to put the data in an in-memory key/value store, as Redis or some other specialized NoSQL DBMS does.

Datatables: how to get column's sum with an unknown column ordinal number

In my project I have a very wide table, in which each user defines and hides some columns on his own. Column order preferences are stored on the server and are unknown on the client side.
My task is to calculate the sum of certain columns and display it in the table footer. I can calculate the sum in the classical way in other tables, where the order of the columns is known.
But what about this case? For the user Paul, the "price" column can have order number 8, and for the user John, 12. At the same time, Paul and John can change the order of these columns at any time in their interface.
Maybe it is possible that instead of
table.column(14).data().sum();
I can make this:
table.column("mnemonic_name").data().sum();
?
I could not find an answer in the DataTables forum or here. Please help. Any help would be greatly appreciated!
Assign names to your columns in the DataTable definition.
For example:
columns: [
    { title: "ID", name: "identifier", data: "id" },
    { title: "Quantity", name: "quant", data: "quantity" }
]
Then you can use a column selector based on the assigned names, instead of the indexes:
table.column( 'quant:name' ).data();
See the DataTables column-selector documentation for further details.
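And, assuming the sum() API plug-in used in the question is loaded, the footer total could then be written like this (a sketch):
var total = table.column( 'quant:name', { search: 'applied' } ).data().sum();
$( table.column( 'quant:name' ).footer() ).html( total );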

Results DataSet from DynamoDB Query using GSI is not returning correct results

I have a DynamoDB table where I currently store all the events happening in my system for every product. The main table's primary key uses a hash key that combines productid, eventtype, and eventcategory, with CreationTime as the sort key. The table was created and data was added to it.
Later I added a new GSI on the table, with SecondaryHash as the hash key (just the combination of eventcategory and eventtype, excluding productid) and CreationTime as the sort key. This was added so that I can query for multiple products at once.
The GSI seems to work fine; however, I only later realized that the data being returned is incorrect.
Here is the scenario. (I am running all these queries against the newly created index)
I was querying for products within the last 30 days and the query returned 312 records. However, when I ran the same query for the last 90 days, it returned only 128 records (which is wrong; it should be at least equal to or greater than the 30-day count).
I already have pagination logic embedded in my code, so that LastEvaluatedKey is checked every time in order to loop and fetch the next set of records, and after the loop all the results are combined.
Not sure if I am missing something.
Any suggestions would be appreciated.
var limitPtr *int64
if limit > 0 {
    limit64 := int64(limit)
    limitPtr = &limit64
}
input := dynamodb.QueryInput{
    ExpressionAttributeNames: map[string]*string{
        "#sch": aws.String("SecondaryHash"),
        "#pkr": aws.String("CreationTime"),
    },
    ExpressionAttributeValues: map[string]*dynamodb.AttributeValue{
        ":sch": {
            S: aws.String(eventHash),
        },
        ":pkr1": {
            N: aws.String(strconv.FormatInt(startTime, 10)),
        },
        ":pkr2": {
            N: aws.String(strconv.FormatInt(endTime, 10)),
        },
    },
    KeyConditionExpression: aws.String("#sch = :sch AND #pkr BETWEEN :pkr1 AND :pkr2"),
    ScanIndexForward:       &scanForward,
    Limit:                  limitPtr,
    TableName:              aws.String(ddbTableName),
    IndexName:              aws.String(ddbIndexName),
}
You reached the maximum amount of data that a single Query call will evaluate (not necessarily the number of matching items); that limit is 1 MB per call.
In that case the response will contain a LastEvaluatedKey parameter, which is the key of the last item evaluated. You have to perform a new query with an extra ExclusiveStartKey parameter (ExclusiveStartKey should be set to LastEvaluatedKey's value).
When LastEvaluatedKey is empty, you have reached the end of the results.
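A minimal pagination sketch in Go, assuming the QueryInput shown in the question and an existing DynamoDB client named svc (the variable names are assumptions):
// svc is an existing *dynamodb.DynamoDB client; input is the QueryInput above.
var items []map[string]*dynamodb.AttributeValue
for {
    out, err := svc.Query(&input)
    if err != nil {
        log.Fatal(err) // handle the error as appropriate for your code
    }
    items = append(items, out.Items...)
    // An empty LastEvaluatedKey means the last page has been read.
    if len(out.LastEvaluatedKey) == 0 {
        break
    }
    // Otherwise continue from where the previous page stopped.
    input.ExclusiveStartKey = out.LastEvaluatedKey
}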

Maintaining auto ranking as a column in MongoDB

I am using MongoDB as my database.
I have data that contains rank and name as columns. A new row may be inserted with a rank that differs from the existing ranks, or with the same rank as an existing row.
If it is the same, the ranks of the other rows must be adjusted.
Rows whose rank is equal to or lower than (i.e. numerically greater than or equal to) the inserted rank must have their rank incremented by one, while the remaining rows stay as they are.
The feature is something like a numbered list in MS Word-type applications, where inserting a row in between adjusts the numbering of the rows below it.
Rank 1 is the highest rank.
For example, there are 3 rows:
Name Rank
A 1
B 2
C 3
Now I want to insert a row with D as the name and 2 as the rank. After the insert, the DB should look like below:
Name Rank
A 1
B 3
C 4
D 2
I could probably achieve this using database triggers, by updating the other rows.
I have a couple of questions:
(a) Is there a better way than using a database trigger to achieve this kind of scenario? Updating all the rows might be a time-consuming job.
(b) Does MongoDB support database triggers natively?
Best Regards,
Saurav
No, MongoDB does not provide triggers (yet). Also, I don't think a trigger is really a great way to achieve this.
So I would just like to throw out some ideas; see if they make sense.
Approach 1
Maybe instead of disturbing that many documents, you can create a collection with only one document (let's call the collection ranking). In that document, have an array field called ranks. Since it's an array, it already maintains a sequence.
{
_id : "RANK",
"ranks" : ["A","B","C"]
}
Now, if you want to add D to this ranking at the 2nd position:
db.ranking.update({_id:"RANK"},{$push : {"ranks":{$each : ["D"],$position:1}}});
This would add D at index 1, which is the 2nd position, since indexes start at 0.
{
_id : "RANK",
"ranks" : ["A","D","B","C"]
}
But there is a catch: what if you want to move C from 4th position to 1st? You need to remove it from the end and put it at the beginning. I am fairly sure both operations can't be achieved in a single update (I didn't dig into the options much), so we can run two queries:
db.ranking.update({_id:"RANK"},{$pull : {"ranks": "C"}});
db.ranking.update({_id:"RANK"},{$push : {"ranks":{$each : ["C"],$position:0}}});
Then it would be like
{
_id : "RANK",
"ranks" : ["C","A","D","B"]
}
maintaining the rest of the sequence.
Now, you would probably want to store ids instead of A, B, C, etc. One document can be 16 MB, so this ranks array can store more than 1.3 million id entries if each id is a 12-byte MongoDB ObjectId. If that is not enough, we still have the option of having follow-up document(s) with further rankings.
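If you then need the numeric rank back out of the array, one option (a sketch, assuming MongoDB 3.4+ for $indexOfArray) is:
db.ranking.aggregate([
    { $match: { _id: "RANK" } },
    { $project: { rank: { $add: [ { $indexOfArray: [ "$ranks", "D" ] }, 1 ] } } }
]);
// for the ["A","D","B","C"] state above this returns rank 2 for "D"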
Approach 2
You can also, instead of having rank as a number, just have two fields like followedBy and precededBy.
So your documents would look like:
{
    _id: "A",
    "followedBy": "B"
}
{
    _id: "B",
    "followedBy": "C",
    "precededBy": "A"
}
{
    _id: "C",
    "precededBy": "B"
}
If you want to add D at the second position, you need to update the two neighbouring documents (A and B) and insert the new one, so only two existing documents change (a sketch of the corresponding update statements follows the documents below):
{
    _id: "A",
    "followedBy": "D" // changed from B to D
}
{
    _id: "B",
    "followedBy": "C",
    "precededBy": "D" // changed from A to D
}
{
    _id: "C",
    "precededBy": "B"
}
{
    _id: "D",
    "followedBy": "B",
    "precededBy": "A"
}
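In update terms, that insert could be done with something like this (a sketch, assuming the documents live in a collection named users):
db.users.insert({ _id: "D", "followedBy": "B", "precededBy": "A" });
db.users.update({ _id: "A" }, { $set: { "followedBy": "D" } });
db.users.update({ _id: "B" }, { $set: { "precededBy": "D" } });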
The downside of this approach is that you cannot sort by ranking in a query; you would have to fetch all the documents into the application and build a linked-list sort of structure.
This approach just preserves the ranking with minimal DB changes.

How to use redis to store hierarchical data?

I have a set of hierarchical data to store; the hierarchy is like site/building/floor. The data looks like this, for example:
{
    site: 'New York',
    buildings: [
        {
            name: 'building a',
            floors: {
                'Ground': [ { room: 'room1' }, { room: 'room2' } ],
                'First':  [ { room: 'room1' }, { room: 'room2' } ]
            }
        }
    ]
},
{
    site: 'London',
    buildings: [
        {
            name: 'building a',
            floors: {
                'Ground': [ { room: 'room1' }, { room: 'room2' } ],
                'First':  [ { room: 'room1' }, { room: 'room2' } ]
            }
        }
    ]
}
I want to store the room data in a set, but I also want to be able to query a subset of rooms by selecting the site name, or (site name + building name), or (site name + building name + floor).
In Redis you won't store your data in a single data structure. You have to create multiple data structures, each one identified by a key.
Use a convention to name your keys: for example, site:<CITY>:buildings will be a set that contains the list of building ids for a given site.
Then define hashes to store each building description. The key for these hashes could be something like building:<ID>.
In the hash you have two members: name and floors. The floors value is the unique id of the set containing the list of floor identifiers.
Finally, create a set for each floor to store the room names. The names of these sets could be something like floor:<ID>.
Tips:
Use the Redis INCR command to generate unique IDs.
Avoid overly long keys if you intend to store a very large number of them (longer keys require more memory).
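Putting the convention above together, registering one building with one floor and two rooms could look roughly like this (the id counter keys and the floors:<ID> set name are assumptions, not part of the answer above):
INCR id:building            # -> 12
INCR id:floorset            # -> 7
INCR id:floor               # -> 34
SADD site:NewYork:buildings 12
HSET building:12 name "building a"
HSET building:12 floors 7
SADD floors:7 34            # the set of floor ids for building 12
SADD floor:34 room1 room2   # room names on this floor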