I have an array of objects that I have simplified to this:
var expenses = [
  {
    arm: "0",
    cat_id: "1",
    crop: {
      crop: "corn",
      id: 1
    },
    crop_id: "1",
    dist: "164.97",
    expense: "Fertilizer",
    id: "1",
    loan_id: "1"
  },
  {
    arm: "20",
    cat_id: "8",
    crop: {
      crop: "corn",
      id: 1
    },
    crop_id: "1",
    dist: "0",
    expense: "Labor",
    id: "8",
    loan_id: "1"
  }
];
I am trying to end up with this:
var expenses = [{
  arm: 0,
  cat_id: 1,
  crop: "corn",
  crop_id: 1,
  dist: 164.97,
  expense: "Fertilizer",
  id: 1,
  loan_id: 1
}, {
  arm: 20,
  cat_id: 6,
  crop: "corn",
  crop_id: 1,
  dist: 0,
  expense: "Labor",
  id: 1,
  loan_id: 1
}];
I can get certain pieces working in that direction, but I can't pull it all together without errors. I can't work out how to cast the values to floats, or how to put crop INSIDE of stub, because casted comes back as all nulls. I currently have this:
flattened = _.map(expenses, function(item){
  var crop = item.crop.crop;
  var stub = _.pick(item, [
    'id',
    'loan_id',
    'cat_id',
    'expense',
    'crop_id',
    'arm',
    'dist'
  ]);
  var casted = _.map(stub, function(i){
    i.crop = crop;
    return i;
  });
  return stub;
});
Any help is appreciated.
Problem 1: I can't find out how to cast the values to float
This should be easily fixed by using parseFloat.
e.g.
item.dist = parseFloat(item.dist);
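Note that your desired output casts every numeric field, not just dist. As a sketch (with the key list inferred from your target output), the same idea can be applied across the board inside the map callback:

var numericKeys = ['id', 'loan_id', 'cat_id', 'crop_id', 'arm', 'dist'];
_.forEach(numericKeys, function(key) {
  item[key] = parseFloat(item[key]); // each of these is a numeric string in the source data
});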
Problem 2: put crop INSIDE of stub because casted returns all nulls
Since you're already using lodash, you might as well get used to its chaining feature (lazy evaluation).
var flattened = _.map(expenses, function(item) {
  item.dist = parseFloat(item.dist);
  return _(item)
    .omit('crop')
    .assign(_.omit(item.crop, 'id'))
    .value();
});
The solution above maps over the entire expenses array, converting item.dist to a floating-point value and then flattening the values from the item.crop object onto the item object, with the exception of the item.crop.id value.
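Run against the sample array, this yields the following; note that only dist has been cast, so the remaining fields keep their string types and would still need a casting pass to match the desired output exactly:

[{
  arm: "0",
  cat_id: "1",
  crop_id: "1",
  dist: 164.97,
  expense: "Fertilizer",
  id: "1",
  loan_id: "1",
  crop: "corn"
}, {
  arm: "20",
  cat_id: "8",
  crop_id: "1",
  dist: 0,
  expense: "Labor",
  id: "8",
  loan_id: "1",
  crop: "corn"
}]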
Note: In regards to your solution above, using _.map on an object results in an array.
My attempt, for my own learning purposes, based on @ryeballar's code:
var flattened = _.map(expenses, function(item) {
  return _(item)
    .set('crop', item.crop.crop)        // set item.crop
    .set('dist', parseFloat(item.dist)) // set dist as (float)item.dist
    .value();
});
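One caveat: _.set mutates the objects in the original expenses array. If that is undesirable, a non-mutating sketch along the same lines could copy each item first (using _.omit to produce a fresh object without the nested crop):

var flattened = _.map(expenses, function(item) {
  return _(_.omit(item, 'crop'))        // shallow copy without the nested crop object
    .set('crop', item.crop.crop)        // hoist the crop name to the top level
    .set('dist', parseFloat(item.dist)) // cast dist to a float
    .value();
});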
Related
I am trying to use multiple Ramda functions on this example:
const data = {
  "tableItems": [
    {
      "id": 1,
      "name": "1",
      "startingPoint": true,
      "pageNumber": 15,
      "nodes": [100, 200]
    },
    {
      "id": 2,
      "name": "2",
      "startingPoint": true,
      "pageNumber": 14,
      "nodes": [300, 400]
    }
  ],
  "nodes": [
    { "id": 100, "tableItemId": 1, "content": "test" },
    { "id": 200, "tableItemId": 1, "content": "test" },
    { "id": 300, "tableItemId": 2, "content": "test" },
    { "id": 400, "tableItemId": 2, "content": "test" }
  ]
}
I am trying to create new JSON that should look like this, where the nodes array will be filled in by another Ramda function:
const newJSON = [
  {
    "id": "chapter-1",
    "name": "2",
    "nodes": []
  },
  {
    "id": "chapter-2",
    "name": "1",
    "nodes": []
  }
]
I started with:
let chapters = [];
let chapter;

const getChapters = R.pipe(
  R.path(['tableItems']),
  R.sortBy(R.prop('pageNumber')),
  R.map((tableItem) => {
    if (tableItem.startingPoint) {
      chapter = {
        id: `chapter-${chapters.length + 1}`,
        name: tableItem.name,
        nodes: []
      }
      chapters.push(chapter);
    }
    return tableItem
  })
)
But how do I combine this with getNodes, which needs access to the whole data object? I tried pipe, but something is not working.
Example:
const getNodes = R.pipe(
  R.path(['nodes']),
  R.map((node) => {
    console.log(node)
  })
)

R.pipe(
  getChapters,
  getNodes
)(data)
Any help would be appreciated.
We could write something like this, using Ramda:
const {pipe, sortBy, prop, filter, map, applySpec, identity, propEq, find, __, addIndex, assoc} = R

const transform = ({tableItems, nodes}) => pipe (
  filter (prop ('startingPoint')),
  sortBy (prop ('pageNumber')),
  map (applySpec ({
    name: prop ('name'),
    nodes: pipe (prop ('nodes'), map (pipe (propEq ('id'), find (__, nodes))), filter (Boolean))
  })),
  addIndex (map) ((o, i) => assoc ('id', `chapter-${i + 1}`, o))
) (tableItems)
const data = {tableItems: [{id: 1, name: "1", startingPoint: true, pageNumber: 15, nodes: [100, 200]}, {id: 2, name: "2", startingPoint: true, pageNumber: 14, nodes: [300, 400]}], nodes: [{id: 100, tableItemId: 1, content: "test"}, {id: 200, tableItemId: 1, content: "test"}, {id: 300, tableItemId: 2, content: "test"}, {id: 400, tableItemId: 2, content: "test"}]}
console .log (transform (data))
First we filter the tableItems to include only those with startingPoint set to true, then we sort the result by pageNumber. Then for each item we create the name and nodes elements, based on the original data and on a function that maps the node ids to the corresponding elements in the initial nodes property. Finally, for each one, we add the chapter-# id element using addIndex (map).
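For the sample data this produces the following (derived by tracing the filter, sort, and lookups above):

[{
  name: "2",
  nodes: [{id: 300, tableItemId: 2, content: "test"}, {id: 400, tableItemId: 2, content: "test"}],
  id: "chapter-1"
}, {
  name: "1",
  nodes: [{id: 100, tableItemId: 1, content: "test"}, {id: 200, tableItemId: 1, content: "test"}],
  id: "chapter-2"
}]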
This works, and is not horrible. It would take a fair bit of work to make this entirely point-free, I believe. And I don't find it worthwhile... especially because this Ramda version doesn't add anything to a simpler vanilla implementation:
const transform = ({tableItems, nodes}) =>
  tableItems
    .filter (x => x .startingPoint)
    .sort (({pageNumber: a}, {pageNumber: b}) => a - b)
    .map (({name, nodes: ns}, i) => ({
      id: `chapter-${i + 1}`,
      name,
      nodes: ns .map (n => nodes .find (node => node .id == n)) .filter (Boolean)
    }))
const data = {tableItems: [{id: 1, name: "1", startingPoint: true, pageNumber: 15, nodes: [100, 200]}, {id: 2, name: "2", startingPoint: true, pageNumber: 14, nodes: [300, 400]}], nodes: [{id: 100, tableItemId: 1, content: "test"}, {id: 200, tableItemId: 1, content: "test"}, {id: 300, tableItemId: 2, content: "test"}, {id: 400, tableItemId: 2, content: "test"}]}
console .log (transform (data))
This works similarly to the above except that it assigns the id at the same time as name and nodes.
I'm a founder of Ramda and remain a big fan. But it doesn't always add anything to vanilla modern JS.
You can use a curried function, because pipe always passes the result of the previous function call into the next function. (You can use R.tap if you want a step that only performs a side effect and passes the value through unchanged.)
However, I guess you want to have both the data object and the output of the previous function call in your getNodes function. In that case you can use a curried function, where you pass the result of the previous function as the last parameter.
const getNodes = R.curryN(2, function(data, tableItemList) {
  console.log(tableItemList) // result of previous function call
  return R.pipe(
    R.path(['nodes']),
    R.map((node) => {
      console.log('node:', node);
    })
  )(data)
})
And use it like:
R.pipe(
  getChapters,
  getNodes(data)
)(data)
I would split the solution into two steps:
1. Prepare: bring the tableItems and nodes into the required shape using R.evolve. Filter and sort the tableItems, then apply R.toPairs to get an array that carries both the index and the object. Group the nodes by id so that the relevant nodes can be picked by id in the combine step.
2. Combine: map over the prepared tableItems to create the end result, using R.applySpec to build the properties.
const {pipe, evolve, filter, prop, sortBy, toPairs, groupBy, map, applySpec, path, flip, pick} = R

const transform = pipe(
  evolve({ // prepare
    tableItems: pipe(
      filter(prop('startingPoint')),
      sortBy(prop('pageNumber')),
      toPairs
    ),
    nodes: groupBy(prop('id'))
  }),
  ({ tableItems, nodes }) => // combine
    map(applySpec({
      id: ([i]) => `chapter-${+i + 1}`,
      name: path([1, 'name']),
      nodes: pipe(path([1, 'nodes']), flip(pick)(nodes)),
    }))(tableItems)
)
const data = {tableItems: [{id: 1, name: "1", startingPoint: true, pageNumber: 15, nodes: [100, 200]}, {id: 2, name: "2", startingPoint: true, pageNumber: 14, nodes: [300, 400]}], nodes: [{id: 100, tableItemId: 1, content: "test"}, {id: 200, tableItemId: 1, content: "test"}, {id: 300, tableItemId: 2, content: "test"}, {id: 400, tableItemId: 2, content: "test"}]}
console.log(transform(data))
I'm having some trouble with FaunaDB indexes. FQL is quite powerful, but the docs seem limited (for now) to only a few examples/use cases, such as searching by strings.
I have a collection of Orders, with a few fields: status, id, client, material and date.
My goal is to search/filter for orders depending on their Status, OPEN OR CLOSED (Boolean true/false).
Here is the Index I created:
CreateIndex({
  name: "orders_all_by_open_asc",
  unique: false,
  serialized: true,
  source: Collection("orders"),
  terms: [{ field: ["data", "status"] }],
  values: [
    { field: ["data", "unique_id"] },
    { field: ["data", "client"] },
    { field: ["data", "material"] },
    { field: ["data", "date"] }
  ]
})
So with this Index, I want to specify either TRUE or FALSE and get all corresponding orders, including their data (fields).
I'm having two problems:
When I pass TRUE or FALSE using the JavaScript driver, nothing is returned :( Is it possible to search by Booleans at all, or only by strings/numbers?
Here is my Query (in FQL, using the Shell):
Match(Index("orders_all_by_open_asc"), true)
And unfortunately, nothing is returned. I'm probably doing this wrong.
Second (slightly unrelated) question: when I create an Index and specify a bunch of values, the data returned seems to be in array format, containing only the values, not the fields. An example:
[
  1001,
  "client1",
  "concrete",
  "2021-04-13T00:00:00.000Z"
],
[
  1002,
  "client2",
  "wood",
  "2021-04-13T00:00:00.000Z"
]
This format is bad for me, because my front-end expects to receive an object with the fields as keys and the values as properties. Example:
data:
{
  unique_id: 1001,
  client: "client1",
  material: "concrete",
  date: "2021-04-13T00:00:00.000Z"
},
{
  unique_id: 1002,
  client: "client2",
  material: "wood",
  date: "2021-04-13T00:00:00.000Z"
},
etc.
Is there any way to get the Field as well as the Value when using Index values, or will it always return an Array (and not an object)?
Could I use a Lambda or something for this?
I do have another Query that uses Map and Lambda to good effect, and returns the entire document, including the Ref and Data fields:
Map(
  Paginate(
    Match(Index("orders_by_date"), date)
  ),
  Lambda('item', Get(Var('item')))
)
This works very nicely but unfortunately, it also performs one Get request per Document returned and that seems very inefficient.
This new Index I'm wanting to build, to filter by Order Status, will be used to return hundreds of Orders, hundreds of times a day. So I'm trying to keep it as efficient as possible, but if it can only return an Array it won't be useful.
Thanks in advance!! Indexes are great but hard to grasp, so any insight will be appreciated.
You didn't show us exactly what you have done, so here's an example that shows that filtering on boolean values does work using the index you created as-is:
> CreateCollection({ name: "orders" })
{
  ref: Collection("orders"),
  ts: 1618350087320000,
  history_days: 30,
  name: 'orders'
}

> Create(Collection("orders"), { data: {
    unique_id: 1,
    client: "me",
    material: "stone",
    date: Now(),
    status: true
  }})
{
  ref: Ref(Collection("orders"), "295794155241603584"),
  ts: 1618350138800000,
  data: {
    unique_id: 1,
    client: 'me',
    material: 'stone',
    date: Time("2021-04-13T21:42:18.784Z"),
    status: true
  }
}
> Create(Collection("orders"), { data: {
    unique_id: 2,
    client: "you",
    material: "muslin",
    date: Now(),
    status: false
  }})
{
  ref: Ref(Collection("orders"), "295794180038328832"),
  ts: 1618350162440000,
  data: {
    unique_id: 2,
    client: 'you',
    material: 'muslin',
    date: Time("2021-04-13T21:42:42.437Z"),
    status: false
  }
}
> CreateIndex({
    name: "orders_all_by_open_asc",
    unique: false,
    serialized: true,
    source: Collection("orders"),
    terms: [{ field: ["data", "status"] }],
    values: [
      { field: ["data", "unique_id"] },
      { field: ["data", "client"] },
      { field: ["data", "material"] },
      { field: ["data", "date"] }
    ]
  })
{
  ref: Index("orders_all_by_open_asc"),
  ts: 1618350185940000,
  active: true,
  serialized: true,
  name: 'orders_all_by_open_asc',
  unique: false,
  source: Collection("orders"),
  terms: [ { field: [ 'data', 'status' ] } ],
  values: [
    { field: [ 'data', 'unique_id' ] },
    { field: [ 'data', 'client' ] },
    { field: [ 'data', 'material' ] },
    { field: [ 'data', 'date' ] }
  ],
  partitions: 1
}
> Paginate(Match(Index("orders_all_by_open_asc"), true))
{ data: [ [ 1, 'me', 'stone', Time("2021-04-13T21:42:18.784Z") ] ] }
> Paginate(Match(Index("orders_all_by_open_asc"), false))
{ data: [ [ 2, 'you', 'muslin', Time("2021-04-13T21:42:42.437Z") ] ] }
It's a little more work, but you can compose whatever return format that you like:
> Map(
    Paginate(Match(Index("orders_all_by_open_asc"), false)),
    Lambda(
      ["unique_id", "client", "material", "date"],
      {
        unique_id: Var("unique_id"),
        client: Var("client"),
        material: Var("material"),
        date: Var("date")
      }
    )
  )
{
  data: [
    {
      unique_id: 2,
      client: 'you',
      material: 'muslin',
      date: Time("2021-04-13T21:42:42.437Z")
    }
  ]
}
It's still an array of results, but each result is now an object with the appropriate field names.
I'm not too familiar with FQL, but I am somewhat familiar with SQL languages. Essentially, database languages usually treat all of your values as strings until they don't need to anymore. Instead, your query should use the string value that FQL is expecting; I believe it should be OPEN or CLOSED in your case. You can simply have an if statement in your JavaScript to determine whether to search for "OPEN" or "CLOSED".
To answer your second question: I don't know for FQL, but if that is what is returned, then your approach with a lambda seems fine. There's not much else you can do about it from your end, other than hope that a different way to fetch entries shows up in the API somewhere in the future. At the end of the day, an O(n) operation in this context is not too bad, and only having to return a hundred or so orders shouldn't be the most painful thing in the world.
If you are truly worried about this, you can break the request up into portions, so you return only the first 100; then, when the front-end wants the next set, you send the next 100. You can cache the results too, to make it very fast from the front-end perspective.
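For what it's worth, that kind of paging is built into Paginate via its size and after options (standard FQL). A sketch, where cursorFromPreviousPage is a placeholder for the after cursor returned by the previous call:

Paginate(Match(Index("orders_all_by_open_asc"), true), { size: 100 })

Paginate(
  Match(Index("orders_all_by_open_asc"), true),
  { size: 100, after: cursorFromPreviousPage }
)

When more results exist, the first response includes an after cursor; feeding it back in returns the next page of up to 100 entries.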
Another suggestion. Maybe I am wrong and simply failed at searching the docs, but I will post this anyway in case it's helpful.
My index was failing to return objects. The example data here is the client field:
"data": {
"status": "LIVRAISON",
"open": true,
"unique_id": 1001,
"client": {
"name": "TEST1",
"contact_name": "Bob",
"email": "bob#client.com",
"phone": "555-555-5555"
Here, the client field returned as null even though it was specified in the Index.
From reading the docs, here: https://docs.fauna.com/fauna/current/api/fql/indexes?lang=javascript#value
In the Value Objects section, I was able to understand that for objects, the index field must be defined as an array, with one entry per object key. Example for my data:
{ field: ['data', 'client', 'name'] },
{ field: ['data', 'client', 'contact_name'] },
{ field: ['data', 'client', 'email'] },
{ field: ['data', 'client', 'phone'] },
This was slightly confusing, because my beginner brain expected that defining the 'client' field would simply return the entire object, like so:
{ field: ['data', 'client'] },
The only part about this in the docs was this sentence: "The field ["data", "address", "street"] refers to the street field contained in an address object within the document's data object."
This is enough information, but maybe it deserves its own section with a longer example? The single sentence works, of course, but a sub-section called 'Adding Objects to Fields' or something similar would make it extra clear.
Hoping my moments of confusion will help out. Loving FaunaDB so far, keep up the great work :)
I have a database field that contains either the integer 1 (representing the value 'document') or the integer 2 (representing the value 'email'), and I use DataTables to format/display these integers as the words 'document' or 'email' within the table column. I would therefore like to use YADCF to be able to select 'document' or 'email' and have DataTables filter the result set accordingly.
I have used the custom_func feature as follows (code stripped down for brevity). Every other aspect of the DataTable looks and works correctly, the filter selector looks as expected and holds the selected value, and there are no console errors, yet my custom function is not being called. I have tried manually calling the custom function immediately before initialising YADCF, to check the scope, and it is called just fine. Can anyone see anything obviously wrong, please?
$(document).ready(function() {
  function customFilterDocEmail(filterVal, columnVal) {
    alert('customFilterDocEmail called');
    var found;
    if (columnVal === '') {
      return true;
    }
    switch (filterVal) {
      case 'doc':
        found = columnVal === '1';
        break;
      case 'email':
        found = columnVal === '2';
        break;
      default:
        found = 1;
        break;
    }
    if (found !== -1) {
      return true;
    }
    return false;
  }
})
var table = $('#table').DataTable({
  serverSide: true,
  processing: true,
  ajax: {
    url: '/api/datatables/getJson/doctext/doctexts',
    type: 'POST',
    data: function(d) { d.CSRFToken = '****'; }
  },
  stateSave: true,
  responsive: true,
  pageLength: 25,
  order: [[0, 'asc']],
  columns: [
    { data: 'txt_type', width: '10%' },
    { data: 'txt_title' },
    { data: 'txt_name', width: '20%' },
    { data: 'link', orderable: false, width: '5%' }
  ]
});
yadcf.init(table, [
  { column_number: 0, filter_type: 'custom_func', custom_func: customFilterDocEmail,
    data: [{ value: 'doc', label: 'Document' }, { value: 'email', label: 'Email' }] },
  { column_number: 1, column_data_type: 'text', filter_type: 'text', filter_delay: 500, filter_default_label: '' },
  { column_number: 2, column_data_type: 'text', filter_type: 'text', filter_delay: 500, filter_default_label: '' }
], 'footer');
YADCF Version: 0.9.3.beta.11
DataTables Version: 1.10.16
UPDATE: I haven't a clue what I was doing wrong above, but I have come up with a little hack that avoids having to use 'custom_func' and a custom filter. I've used the standard 'select' filter type but intercepted the filter string within the filter() method of my DatatablesSSP script, thus:
$str = $requestColumn['search']['value'];

// Returned search values for doctext 'txt_type' are Document/Email,
// so we need to map these to 1/2.
if ($column['db'] == 'txt_type') {
  if ($str == 'Document') { $str = '1'; }
  if ($str == 'Email') { $str = '2'; }
}
This works a treat :)
You have to place the customFilterDocEmail function outside of the $(document).ready block, so that it is global and yadcf can see it.
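In other words, restructure it roughly like this (a sketch based on the code in the question, with the filter body trimmed down):

// Global scope, so yadcf can resolve custom_func at init time
function customFilterDocEmail(filterVal, columnVal) {
  switch (filterVal) {
    case 'doc':
      return columnVal === '1';
    case 'email':
      return columnVal === '2';
    default:
      return true;
  }
}

$(document).ready(function() {
  var table = $('#table').DataTable({ /* ... options as in the question ... */ });
  yadcf.init(table, [
    { column_number: 0, filter_type: 'custom_func', custom_func: customFilterDocEmail,
      data: [{ value: 'doc', label: 'Document' }, { value: 'email', label: 'Email' }] }
    // ... remaining column filters ...
  ], 'footer');
});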
Not sure if this is possible. In my example I am using JSON as the source, but this could be of any size. In my example on fiddle I use this data in a shared fashion, binding only two columns, ProductFamily (xAxis) and Value (yAxis), but I would like to be able to group the columns by using an aggregate.
In this example, without the grouping, it shows multiple columns for FamilyA. Can this be grouped into ONE column, with the values aggregated, regardless of the amount of data?
So the result would show one column for FamilyA with the value 4850 + 4860 = 9710, etc.?
A problem with all the examples online is that there is always exactly the right amount of data for each category. Not sure if this makes sense?
http://jsfiddle.net/jqIndy/ZPUr4/3/
//Sample data (see fiddle for complete sample)
[{
  "Client": "",
  "Date": "2011-06-01",
  "ProductNumber": "5K190",
  "ProductName": "CABLE USB",
  "ProductFamily": "FamilyC",
  "Status": "OPEN",
  "Units": 5000,
  "Value": 5150.0,
  "ShippedToDestination": "CHINA"
}]
var productDataSource = new kendo.data.DataSource({
  data: dr,
  //group: {
  //  field: "ProductFamily",
  //},
  sort: {
    field: "ProductFamily",
    dir: "asc"
  },
  //aggregate: [
  //  { field: "Value", aggregate: "sum" }
  //],
  //schema: {
  //  model: {
  //    fields: {
  //      ProductFamily: { type: "string" },
  //      Value: { type: "number" }
  //    }
  //  }
  //}
})
$("#product-family-chart").kendoChart({
  dataSource: productDataSource,
  //autoBind: false,
  title: {
    text: "Product Family (past 12 months)"
  },
  seriesDefaults: {
    overlay: {
      gradient: "none"
    },
    markers: {
      visible: false
    },
    majorTickSize: 0,
    opacity: .8
  },
  series: [{
    type: "column",
    field: "Value"
  }],
  valueAxis: {
    line: {
      visible: false
    },
    labels: {
      format: "${0}",
      skip: 2,
      step: 2,
      color: "#727f8e"
    }
  },
  categoryAxis: {
    field: "ProductFamily"
  },
  legend: {
    visible: false
  },
  tooltip: {
    visible: true,
    format: "Value: ${0:N0}"
  }
});
The Kendo UI Chart does not support binding to group aggregates. At least not yet.
My suggestion is to:
1. Move the aggregate definition so that it's calculated per group:

group: {
  field: "ProductFamily",
  aggregates: [{
    field: "Value",
    aggregate: "sum"
  }]
}
2. Extract the aggregated values in the change handler:

var view = products.view();
var families = $.map(view, function(v) {
  return v.value;
});
var values = $.map(view, function(v) {
  return v.aggregates.Value.sum;
});
3. Bind the groups and categories manually (a combined sketch follows below):

series: [{
  type: "column",
  data: values
}],
categoryAxis: {
  categories: families
}
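Pieced together, the wiring could look roughly like this. This is a sketch rather than the exact demo code: it assumes the chart is rebound from the DataSource's change event and redrawn with the chart's standard refresh() method:

var productDataSource = new kendo.data.DataSource({
  data: dr,
  group: {
    field: "ProductFamily",
    aggregates: [{ field: "Value", aggregate: "sum" }]
  },
  change: function() {
    var view = this.view();
    var families = $.map(view, function(v) { return v.value; });
    var values = $.map(view, function(v) { return v.aggregates.Value.sum; });

    // Rebind the chart to the aggregated values and redraw
    var chart = $("#product-family-chart").data("kendoChart");
    chart.options.series = [{ type: "column", data: values }];
    chart.options.categoryAxis.categories = families;
    chart.refresh();
  }
});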
Working demo can be found here: http://jsbin.com/ofuduy/5/edit
I hope this helps.
I have created these classes:
dojo.declare("UNIT", null, {
  _id: '',
  constructor: function(i) {
    this._id = i;
  }
});
and
dojo.declare("ELEMENT", null, {
  _id: '',
  _unit_id: '',
  constructor: function(u, i) {
    this._unit_id = u;
    this._id = i;
  }
});
I have an array of Units, and I want to find the one whose id matches my element._unit_id. How do I do this with Dojo? I was looking at the documentation examples; there is dojo.filter, but I cannot see how to pass an argument to it. Can anybody help?
You can use dojo.filter. E.g.:
var units = [{
  id: 1,
  name: "aaaa"
}, {
  id: 2,
  name: "bbbb"
}, {
  id: "2",
  name: "cccc"
}, {
  id: "3",
  name: "dddd"
}];

var currentElementId = 2;

var filteredArr = dojo.filter(units, function(item) {
  // == (not ===) on purpose, so that number and string ids such as 2 and "2" both match
  return item.id == currentElementId;
});
// do something with the filtered array
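Applied to the classes from the question, a sketch might look like this (the variable names and constructor arguments are illustrative):

var units = [new UNIT(1), new UNIT(2), new UNIT(3)];
var element = new ELEMENT(2, 7);

var matches = dojo.filter(units, function(unit) {
  return unit._id == element._unit_id;
});
var unit = matches[0]; // the first (and presumably only) match, or undefined if none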