Nested drop-down filters with yadcf, select2 and DataTables

I have been using the DataTables + yadcf + select2 combination successfully for quite a while. Now I am working on converting my select2 filter into an ordered, indented drop-down list with selectable optgroups as well (https://select2.org/options, "Hierarchical options", selectable optgroups in Select2).
However, with yadcf I cannot seem to pass data to select2 in a hierarchical format like the one below:
var data = [
    {
        "text": "Group 1",
        "children": [
            {
                "id": 1,
                "text": "Option 1.1"
            },
            {
                "id": 2,
                "text": "Option 1.2"
            }
        ]
    },
    {
        "text": "Group 2",
        "children": [
            {
                "id": 3,
                "text": "Option 2.1"
            },
            {
                "id": 4,
                "text": "Option 2.2"
            }
        ]
    }
];
Previously the working code was:
.yadcf([{
    column_number: 1,
    filter_type: "multi_select",
    select_type: 'select2',
    filter_container_id: "someFilter2",
    filter_default_label: "Select xxx",
    filter_reset_button_text: false,
    style_class: "form-control",
    select_type_options: {
        multiple: 'multiple',
        width: '100%',
        placeholder: 'something',
    },
    data: [<comma separated list of values>]
}]);
The yadcf source code states:
"Required: false
Type: Array (of string or objects)
Description: When the need of predefined data for filter is needed just use an array of strings ["value1","value2"....] (supported in select / multi_select / auto_complete filters) or array of objects [{value: 'Some Data 1', label: 'One'}, {value: 'Some Data 3', label: 'Three'}] (supported in select / multi_select filters)
Note: that when filter_type is custom_func / multi_select_custom_func this array will populate the custom filter select element
"
Is it really not possible?
If not, has anybody managed to get nested drop-down filters to work with DataTables in any other way?

You should use data_as_is: true; read the docs for more info:
data_as_is
Required: false
Type: boolean
Default value: false
Description: When set to true, the value of the data attribute will be fed into the filter as is (without any modification/decoration).
Perfect to use when you want to define your own <option></option> markup for the filter
Note: Currently supported by the select / multi_select filters
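For illustration, here is a rough sketch of how that could be combined with the filter from the question. The '#example' selector and the hand-built optgroup markup are placeholders, and the exact shape yadcf expects for data when data_as_is is true (a single HTML string versus an array of option strings) is worth double-checking against the yadcf docs:
// Sketch only: with data_as_is yadcf should feed the data into the
// filter without decoration, so we can supply our own <optgroup>/<option>
// markup and let select2 render it as a hierarchical list.
$('#example').DataTable().yadcf([{
    column_number: 1,
    filter_type: 'multi_select',
    select_type: 'select2',
    filter_container_id: 'someFilter2',
    filter_default_label: 'Select xxx',
    filter_reset_button_text: false,
    style_class: 'form-control',
    data_as_is: true,
    data: '<optgroup label="Group 1">' +
              '<option value="1">Option 1.1</option>' +
              '<option value="2">Option 1.2</option>' +
          '</optgroup>' +
          '<optgroup label="Group 2">' +
              '<option value="3">Option 2.1</option>' +
              '<option value="4">Option 2.2</option>' +
          '</optgroup>',
    select_type_options: {
        multiple: 'multiple',
        width: '100%',
        placeholder: 'something'
    }
}]);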

Indexes: Search by Boolean?

I'm having some trouble with FaunaDB indexes. FQL is quite powerful, but the docs seem to be limited (for now) to only a few examples/use cases (e.g. searching by string).
I have a collection of Orders, with a few fields: status, id, client, material and date.
My goal is to search/filter for orders depending on their status, OPEN or CLOSED (Boolean true/false).
Here is the Index I created:
CreateIndex({
  name: "orders_all_by_open_asc",
  unique: false,
  serialized: true,
  source: Collection("orders"),
  terms: [{ field: ["data", "status"] }],
  values: [
    { field: ["data", "unique_id"] },
    { field: ["data", "client"] },
    { field: ["data", "material"] },
    { field: ["data", "date"] }
  ]
})
So with this Index, I want to specify either TRUE or FALSE and get all corresponding orders, including their data (fields).
I'm having two problems:
When I pass true or false using the JavaScript driver, nothing is returned :( Is it possible to search by Booleans at all, or only by String/Number?
Here is my Query (in FQL, using the Shell):
Match(Index("orders_all_by_open_asc"), true)
And unfortunately, nothing is returned. I'm probably doing this wrong.
Second (slightly unrelated) question: when I create an index and specify a bunch of values, it seems the data returned is in array format, with only the values, not the fields. An example:
[
  1001,
  "client1",
  "concrete",
  "2021-04-13T00:00:00.000Z"
],
[
  1002,
  "client2",
  "wood",
  "2021-04-13T00:00:00.000Z"
]
This format is bad for me, because my front end expects to receive an object with the fields as keys and the values as properties. Example:
data:
  {
    unique_id: 1001,
    client: "client1",
    material: "concrete",
    date: "2021-04-13T00:00:00.000Z"
  },
  {
    unique_id: 1002,
    client: "client2",
    material: "wood",
    date: "2021-04-13T00:00:00.000Z"
  },
  etc..
Is there any way to get the Field as well as the Value when using Index values, or will it always return an Array (and not an object)?
Could I use a Lambda or something for this?
I do have another Query that uses Map and Lambda to good effect, and returns the entire document, including the Ref and Data fields:
Map(
  Paginate(
    Match(Index("orders_by_date"), date)
  ),
  Lambda('item', Get(Var('item')))
)
This works very nicely but unfortunately, it also performs one Get request per Document returned and that seems very inefficient.
This new Index I'm wanting to build, to filter by Order Status, will be used to return hundreds of Orders, hundreds of times a day. So I'm trying to keep it as efficient as possible, but if it can only return an Array it won't be useful.
Thanks in advance!! Indexes are great but hard to grasp, so any insight will be appreciated.
You didn't show us exactly what you have done, so here's an example that shows that filtering on boolean values does work using the index you created as-is:
> CreateCollection({ name: "orders" })
{
  ref: Collection("orders"),
  ts: 1618350087320000,
  history_days: 30,
  name: 'orders'
}
> Create(Collection("orders"), { data: {
    unique_id: 1,
    client: "me",
    material: "stone",
    date: Now(),
    status: true
  }})
{
  ref: Ref(Collection("orders"), "295794155241603584"),
  ts: 1618350138800000,
  data: {
    unique_id: 1,
    client: 'me',
    material: 'stone',
    date: Time("2021-04-13T21:42:18.784Z"),
    status: true
  }
}
> Create(Collection("orders"), { data: {
    unique_id: 2,
    client: "you",
    material: "muslin",
    date: Now(),
    status: false
  }})
{
  ref: Ref(Collection("orders"), "295794180038328832"),
  ts: 1618350162440000,
  data: {
    unique_id: 2,
    client: 'you',
    material: 'muslin',
    date: Time("2021-04-13T21:42:42.437Z"),
    status: false
  }
}
> CreateIndex({
    name: "orders_all_by_open_asc",
    unique: false,
    serialized: true,
    source: Collection("orders"),
    terms: [{ field: ["data", "status"] }],
    values: [
      { field: ["data", "unique_id"] },
      { field: ["data", "client"] },
      { field: ["data", "material"] },
      { field: ["data", "date"] }
    ]
  })
{
  ref: Index("orders_all_by_open_asc"),
  ts: 1618350185940000,
  active: true,
  serialized: true,
  name: 'orders_all_by_open_asc',
  unique: false,
  source: Collection("orders"),
  terms: [ { field: [ 'data', 'status' ] } ],
  values: [
    { field: [ 'data', 'unique_id' ] },
    { field: [ 'data', 'client' ] },
    { field: [ 'data', 'material' ] },
    { field: [ 'data', 'date' ] }
  ],
  partitions: 1
}
> Paginate(Match(Index("orders_all_by_open_asc"), true))
{ data: [ [ 1, 'me', 'stone', Time("2021-04-13T21:42:18.784Z") ] ] }
> Paginate(Match(Index("orders_all_by_open_asc"), false))
{ data: [ [ 2, 'you', 'muslin', Time("2021-04-13T21:42:42.437Z") ] ] }
It's a little more work, but you can compose whatever return format you like:
> Map(
    Paginate(Match(Index("orders_all_by_open_asc"), false)),
    Lambda(
      ["unique_id", "client", "material", "date"],
      {
        unique_id: Var("unique_id"),
        client: Var("client"),
        material: Var("material"),
        date: Var("date"),
      }
    )
  )
{
  data: [
    {
      unique_id: 2,
      client: 'you',
      material: 'muslin',
      date: Time("2021-04-13T21:42:42.437Z")
    }
  ]
}
It's still an array of results, but each result is now an object with the appropriate field names.
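For what it's worth, here is a sketch of the same boolean-term query run through the JavaScript driver mentioned in the question (the client setup and the secret are assumptions, not taken from the post):
// Sketch: the same Map/Paginate/Match composition via the faunadb JS driver.
const faunadb = require('faunadb');
const q = faunadb.query;

// Assumed client setup; replace the secret with your own.
const client = new faunadb.Client({ secret: 'YOUR_FAUNA_SECRET' });

client.query(
  q.Map(
    q.Paginate(q.Match(q.Index('orders_all_by_open_asc'), true)),
    q.Lambda(
      ['unique_id', 'client', 'material', 'date'],
      {
        unique_id: q.Var('unique_id'),
        client: q.Var('client'),
        material: q.Var('material'),
        date: q.Var('date'),
      }
    )
  )
).then(result => console.log(result.data));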
I'm not too familiar with FQL, but I am somewhat familiar with SQL languages. Essentially, database languages usually treat all of your values as strings until they don't need to anymore. Instead, your query should use the string definition that FQL is expecting; I believe it should be "OPEN" or "CLOSED" in your case. You can simply have an if statement in your JavaScript code to determine whether to search for "OPEN" or "CLOSED".
To answer your second question, I don't know for FQL, but if that is what is returned, then your approach with a Lambda seems fine. There is not much else you can do about it from your end other than hope that a different way to get entries in API form arrives in the future. At the end of the day, an O(n) operation in this context is not too bad, and only having to return a hundred or so orders shouldn't be the most painful thing in the world.
If you are truly worried about this, you can break up the request into portions, so you return only the first 100; when the front end wants the next set, you send the next 100. You can cache the results too, to make it very fast from the front-end perspective.
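As a sketch of that idea (an assumption about how you might wire it up, not part of the answer above): Paginate accepts a size and an after cursor, so the JavaScript driver can fetch the open orders 100 at a time:
// Sketch: page through open orders using Paginate's size/after options.
async function fetchOpenOrdersPage(client, q, afterCursor) {
  const page = await client.query(
    q.Paginate(
      q.Match(q.Index('orders_all_by_open_asc'), true),
      afterCursor ? { size: 100, after: afterCursor } : { size: 100 }
    )
  );
  // page.data holds up to 100 rows; page.after (if present) is the cursor
  // to pass back in for the next page.
  return { items: page.data, nextCursor: page.after };
}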
Another suggestion; maybe I am wrong and failed at searching the docs, but I will post this anyway just in case it's helpful.
My index was failing to return objects. The example data here shows the client field:
"data": {
  "status": "LIVRAISON",
  "open": true,
  "unique_id": 1001,
  "client": {
    "name": "TEST1",
    "contact_name": "Bob",
    "email": "bob#client.com",
    "phone": "555-555-5555"
  }
}
Here, the client field was returned as null even though it was specified in the index.
From reading the docs here: https://docs.fauna.com/fauna/current/api/fql/indexes?lang=javascript#value
in the Value Objects section, I was able to understand that for objects, the index must define one field entry per object key. Example for my data:
{ field: ['data', 'client', 'name'] },
{ field: ['data', 'client', 'contact_name'] },
{ field: ['data', 'client', 'email'] },
{ field: ['data', 'client', 'phone'] },
This was slightly confusing, because my beginner brain expected that defining the 'client' field would simply return the entire object, like so:
{ field: ['data', 'client'] },
The only part about this in the docs was this sentence: "The field ["data", "address", "street"] refers to the street field contained in an address object within the document's data object."
This is enough information, but maybe it deserves its own section with a longer example? Of course the single sentence works, but a sub-section called 'Adding Objects to Fields' or something would make it extra clear.
Hoping my moments of confusion will help out. Loving FaunaDB so far, keep up the great work :)

Object property is filled when shown but undefined when accessed (Node.js Sequelize)

I am performing this raw SQL query:
SELECT postId, users.id AS userId, users.firstName, users.lastName, users.avatar,
       COUNT(postId) AS numOfLikes, body
FROM posts
INNER JOIN likes ON likes.postId = posts.id
INNER JOIN users ON users.id = posts.userId
GROUP BY postId
ORDER BY postId DESC
through the Node.js Sequelize ORM:
Posts.findAll({
  attributes: ['id', 'body', 'createdAt', [db.fn('count', db.col('likes.postId')), 'numOfLikes']],
  include: [{ attributes: [], model: Likes, required: true }, { model: Users, required: true }],
  group: ['id'],
  order: [['id', 'DESC']]
})
I receive everything as it should be, but I cannot access the numOfLikes object property (it comes back undefined):
{
  "id": 18,
  "body": "This show was organized.",
  "createdAt": "2021-03-06T23:55:44.000Z",
  "numOfLikes": 5,
  "user": {
    "id": 73,
    "firstName": "Paolo",
    "lastName": "Jovovic",
    "email": "dzonnna#gmail.com",
    "email_verified": "1",
    "password": "$2b$10$6fLwPfuLP8Jfp7em0iqBm.YhznDut8AWOmUPynqecfd9YMvZBMaXq",
    "google_id": null,
    "avatar": "1_aZF6_EToO4T3ZeHXfgF-Vg.png",
    "role": "0",
    "createdAt": "2021-02-28T22:30:42.000Z",
    "updatedAt": "2021-03-01T23:59:21.000Z"
  }
}
I had this same issue: I was selecting certain attributes and the query was working correctly.
const books = await Book.findAll({
  where: { status_id: [status[0].id, status[1].id] },
  attributes: ["id", "cover", "title", "status_id", "created_at"],
  order: [["_id", "DESC"]],
})
The object showed the value, but when I tried to access it, it was always undefined:
dataValues: {
  id: '9beb341c-a0fa-489d-8a32-c9ec28f0ab16',
  cover: 'cover_500.jpg',
  title: 'birth',
  status_id: '8a5b8c46-b5ac-477a-b1c5-ca2210399e6c',
  created_at: 2022-10-16T02:10:32.000Z
},
I should point out here that I've changed the name of createdAt in the model:
createdAt: {
  field: "created_at",
  type: Sequelize.DATE,
},
When trying to access the value with book.created_at or book.createdAt, both come back as undefined even though they are clearly on the object. This is very strange behavior; since they are in dataValues, they should be accessible.
When I remove the attributes selection and just return everything for the row, it magically works.
So the issue is either:
I changed the name in the model and there is some kind of issue/error in Sequelize, or
there is just some kind of issue/error with how Sequelize accesses the dataValues.
Either way, you can grab the whole row and only return/build your object out of the columns you desire.
I switched the attributes to
attributes: ["id", "cover", "title", "status_id", "createdAt"],
and it now lets me access the field with createdAt, but still not created_at, so the issue appears to be the renaming of the field combined with the selection of attributes.
Hope this helps someone until they can fix this.
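One possible explanation, offered as an assumption rather than something stated in this answer: in Sequelize, field only renames the underlying database column, while the attribute keeps its model key, so it stays reachable as createdAt (or via the instance getters) but not as created_at. A sketch reusing the Book model from above:
// Sketch: `field: "created_at"` maps the column name only; the attribute
// is still addressed by its model key, `createdAt`.
async function checkCreatedAt() {
  const book = await Book.findOne({
    attributes: ['id', 'cover', 'title', 'status_id', 'createdAt'],
  });

  console.log(book.createdAt);                 // model key: works
  console.log(book.get('createdAt'));          // explicit getter: same value
  console.log(book.getDataValue('createdAt')); // raw stored value
  console.log(book.created_at);                // undefined: column name, not the attribute key
}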
Posts.findAll({
  attributes: ['id', 'body', 'createdAt', [db.fn('count', db.col('likes.postId')), 'numOfLikes']],
  include: [{ attributes: [], model: Likes, required: true }, { model: Users, required: true }],
  group: ['id'],
  order: [['id', 'DESC']]
}).then(postList => {
  // Convert the model instances into plain objects; aliased/aggregated
  // attributes such as numOfLikes are then accessible as normal properties.
  postList = JSON.parse(JSON.stringify(postList)) // This is important
  postList.forEach(post => {
    // access post
    console.log(post, post.users)
  });
})
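Two other options that may help, both standard Sequelize features applied here to the query from the question as a sketch rather than a drop-in fix: raw: true returns plain objects directly, and the instance getters can read aliased aggregates without any conversion.
// Option 1: ask Sequelize for plain objects instead of model instances.
Posts.findAll({
  attributes: ['id', 'body', 'createdAt', [db.fn('count', db.col('likes.postId')), 'numOfLikes']],
  include: [{ attributes: [], model: Likes, required: true }, { model: Users, required: true }],
  group: ['id'],
  order: [['id', 'DESC']],
  raw: true,  // rows come back as plain objects, so row.numOfLikes is accessible
  nest: true  // keeps the joined user columns grouped under their association key
}).then(rows => {
  rows.forEach(row => console.log(row.numOfLikes));
});

// Option 2: keep the model instances and read the aliased aggregate explicitly,
// e.g. post.get('numOfLikes') or post.getDataValue('numOfLikes'), which work
// even though numOfLikes is not a declared attribute on the Posts model.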

How to define columnDefs after DataTables initialization

Currently I'm using DataTables on each page, initializing them individually like this:
var table = $('#' + '<%= gvReports.ClientID %>').DataTable({
    "responsive": true,
    "bAutoWidth": true,
    "oLanguage": {
        "sSearch": "Search Table: ",
        "sSearchPlaceholder": "Search records",
        "sEmptyTable": "No data available to display"
    },
    "columnDefs": [
        {
            "targets": [0],
            type: 'natural-nohtml'
        }
    ],
    "sScrollY": "55vh",
    "scrollCollapse": false,
    "pagingType": "full_numbers",
    "lengthMenu": [[25, 50, 100, 150, 200], [25, 50, 100, 150, 200]]
});
When I tried to create a generic jQuery method to use in multiple places, the type attribute in columnDefs is not working properly:
"columnDefs": [
{
"targets": [0],
type: 'natural-nohtml'
}
],
I'm using the natural sort plugin to sort the data, as column 0 contains alphanumeric data.
Is there a way I can set the columnDefs dynamically, or set the columnDef type for column 0 after I initialize the table?
Something like:
table.column("0:visible").Type('natural-nohtml');
Any help is appreciated. I want to know if I'm thinking about this the right way.
There is no way to manipulate columnDefs after the DataTable is initialised. However, since you basically just want to set natural-nohtml as the type for the first column of any DataTable, you could simply extend $.fn.dataTable.defaults. Declare this before you initialise any DataTable:
$.extend( true, $.fn.dataTable.defaults, {
    columnDefs: [
        { targets: [0], type: 'natural-nohtml' }
    ]
} );
This will set the first column of any DataTable to type natural-nohtml.
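As a usage sketch (the #gvReports selector and the options are reused from the question), once the defaults are extended, a generic initialiser no longer has to repeat the columnDefs:
// Generic initialiser: the natural-nohtml type for column 0 now comes from
// $.fn.dataTable.defaults, so it does not need to be repeated here.
function initReportTable(selector) {
    return $(selector).DataTable({
        "responsive": true,
        "bAutoWidth": true,
        "sScrollY": "55vh",
        "scrollCollapse": false,
        "pagingType": "full_numbers",
        "lengthMenu": [[25, 50, 100, 150, 200], [25, 50, 100, 150, 200]]
    });
}

var table = initReportTable('#' + '<%= gvReports.ClientID %>');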

sWidth is not working in dataTables

I am trying to fix the column widths instead of letting DataTables choose them automatically, hence I am trying to set sWidth, but it is not being applied. Following is my code:
$(document).ready(function(){
    $('#Emp_table').dataTable()
        .columnFilter({
            aoColumns: [
                { type: "text" },
                { type: "text" },
                { type: "select", bSmart: false, "sType": "string", "sWidth": "5%" },
                { type: "select" },
                { type: "select" },
                { type: "select" },
                { type: "select" },
                { type: "select" }
            ],
        });
});
Everything works fine, such as filtering by text or value, except the column width; because of that, the table extends beyond my page. Even though my first column is an ID that is never more than 3 digits, the width allows for around 10 characters. And it is not only sWidth: I am also trying to set bSmart to false, but it still behaves as a smart filter.
You need to set http://datatables.net/reference/option/autoWidth to false
Also it looks like you're creating your table a bit oddly.
For DataTables 1.10 use:
$('#Emp_table').DataTable({
    'columns': [
        { "type": "string", "width": "5%" },
        // etc...
    ],
    'autoWidth': false,
})
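For completeness, a sketch of the same idea applied to just the first column via columnDefs; the #Emp_table selector comes from the question, the rest is an assumption (the columnFilter plugin setup is left out here):
$(document).ready(function () {
    $('#Emp_table').DataTable({
        // Width hints are only respected once automatic width calculation is off.
        'autoWidth': false,
        'columnDefs': [
            { targets: 0, width: '5%' }  // keep the short ID column narrow
        ]
    });
});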

Dojo Gridx with JsonStore

I'm trying to connect a Gridx grid to a JsonStore. The code and data are below. The problem is that the Gridx is rendered correctly, but it says: "No items to display". Does anybody know what I'm doing wrong? Dojo and Gridx are the latest versions, installed with cpm.
Edit: there is no AJAX request to /test/ in the Firebug/Chrome development tools.
structure: [
    { field: 'id', name: 'Id' },
    { field: 'title', name: 'Title' },
    { field: 'artist', name: 'Artist' }
],
store: new JsonRestStore({
    idAttribute: 'id',
    target: '/test/'
}),
Data returned by /test is like this:
{
    identifier: "id",
    label: "title",
    items: [
        {
            id: 1,
            title: "Title 1",
            artist: "Artist 1"
        },
        {
            id: 2,
            title: "Title 2",
            artist: "Artist 2"
        },
        ...
    ]
}
Grid is created with:
this.grid = new Grid({
structure: structure,
store: store,
modules: [
Pagination,
PaginationBar,
//Focus,
SingleSort,
ToolBar
],
//paginationInitialPage: 3,
paginationBarSizes: [10, 25, 50, 100],
paginationBarVisibleSteppers: 5,
paginationBarPosition: 'bottom'
}, this.gridNode);
Have you specified which cache to use? In your case it should be an Async cache:
require([
    'gridx/core/model/cache/Async',
    .....
], function(Cache, ...){
    this.grid = new Grid({
        cacheClass: Cache,
        ......
    });
});
I've found that this happens when the server doesn't return the Content-Range header in the response. Apparently the store isn't smart enough to just count the items in the returned array...
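If it helps, here is a minimal sketch of a server response that includes that header. Express is assumed (it is not part of the original question), and the body is a plain JSON array, which is the shape JsonRestStore-style stores usually expect rather than the identifier/items payload shown in the question:
// Sketch: answer Dojo's "Range: items=0-24" style request with a matching
// "Content-Range: items 0-24/<total>" header so the grid knows the total count.
const express = require('express');
const app = express();

// Tiny in-memory data set standing in for the real /test/ resource.
const albums = [
  { id: 1, title: 'Title 1', artist: 'Artist 1' },
  { id: 2, title: 'Title 2', artist: 'Artist 2' }
];

app.get('/test/', (req, res) => {
  const match = /items=(\d+)-(\d+)/.exec(req.get('Range') || '');
  const start = match ? Number(match[1]) : 0;
  const end = match ? Math.min(Number(match[2]), albums.length - 1) : albums.length - 1;

  res.set('Content-Range', 'items ' + start + '-' + end + '/' + albums.length);
  res.json(albums.slice(start, end + 1));
});

app.listen(3000);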