Due to limitations of the com.mongodb.BasicDocument, you can't add a second ''? - mongodb-query

I have a requirement where PhoneNumber and emailID are passed together as search parameters, and I have to query MongoDB with a condition like the one shown below
Where (PhoneNumber = "primaryPhoneNumber" or "SecondaryPhoneNumber") and (emailId= "primaryEmailid" or "secondaryEmailID")
So when I try to frame my query somewhat like the code below, I get this exception:
Due to limitations of the com.mongodb.BasicDocument, you can't add a second '' criteria. Query already contains '{ "$or" : [{ "createdByEmailId" : "brokermanager_lion#perfios.com"}, { "clientRM" : { "$in" : ["brokermanager_lion#perfios.com"]}}, { "productRM" : { "$in" : ["brokermanager_lion#perfios.com"]}}]}'
if (StringUtils.isNotBlank(leadsFilters.getPhoneNumber())) {
    Criteria andCriteria = new Criteria();
    query.addCriteria(andCriteria.andOperator(Criteria.where("")
            .orOperator(Criteria.where("userInfo.primaryMobileNumber").is(leadsFilters.getPhoneNumber()),
                    Criteria.where("userInfo.secondaryMobileNumber").is(leadsFilters.getPhoneNumber()))));
}
query.addCriteria(Criteria.where("lspCode").is(lspCode));
if (StringUtils.isNotBlank(leadsFilters.getEmailId())) {
    Criteria andEmailCriteria = new Criteria();
    query.addCriteria(Criteria.where("")
            .orOperator(Criteria.where("userInfo.primaryEmailId").is(leadsFilters.getEmailId()),
                    Criteria.where("userInfo.secondaryEmailId").is(leadsFilters.getEmailId())));
}
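The Query keeps one criteria entry per key, so adding a second criteria with the same key (here the empty key from Criteria.where(""), or a second $or) fails with exactly this error. A minimal sketch of one way around it, assuming Spring Data MongoDB's Criteria/Query API and the same objects (query, leadsFilters, lspCode, StringUtils) as in the snippet above: build each $or as its own Criteria and attach them all through a single andOperator added to the query only once.
// Sketch only: assumes java.util.List/ArrayList and
// org.springframework.data.mongodb.core.query.Criteria are imported,
// and that query, leadsFilters, lspCode exist as above.
List<Criteria> orBlocks = new ArrayList<>();

if (StringUtils.isNotBlank(leadsFilters.getPhoneNumber())) {
    orBlocks.add(new Criteria().orOperator(
            Criteria.where("userInfo.primaryMobileNumber").is(leadsFilters.getPhoneNumber()),
            Criteria.where("userInfo.secondaryMobileNumber").is(leadsFilters.getPhoneNumber())));
}
if (StringUtils.isNotBlank(leadsFilters.getEmailId())) {
    orBlocks.add(new Criteria().orOperator(
            Criteria.where("userInfo.primaryEmailId").is(leadsFilters.getEmailId()),
            Criteria.where("userInfo.secondaryEmailId").is(leadsFilters.getEmailId())));
}

query.addCriteria(Criteria.where("lspCode").is(lspCode));
if (!orBlocks.isEmpty()) {
    // a single $and criteria holding every $or block, added to the query once
    query.addCriteria(new Criteria().andOperator(orBlocks.toArray(new Criteria[0])));
}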

Related

Add a new element in each array of objects where array may have different length in mongodb

I have the following schema:
{
    id: week,
    output: {
        headerValues: [
            { startDate: "0707", headers: "ID|week" },
            { startDate: "0715", headers: "ID1|week1" },
            { startDate: "0722", headers: "ID2|week2" }
        ]
    }
}
I have to add a new field into headerValues array like this:
{
    id: week,
    output: {
        headerValues: [
            { startDate: "0707", headers: "ID|week", types: "used" },
            { startDate: "0715", headers: "ID1|week1", types: "used" },
            { startDate: "0722", headers: "ID2|week2", types: "used" }
        ]
    }
}
I tried different approaches like this:
1)
db.CollectionName.find({}).forEach(function(data) {
    for (var i = 0; i < data.output.headerValues.length; i++) {
        db.CollectionName.update(
            {
                "_id": data._id,
                "output.headerValues.startDate": data.output.headerValues[i].startDate
            },
            {
                "$set": {
                    "output.headerValues.$.types": "used"
                }
            },
            true, true
        );
    }
})
So with this approach the script executes and then fails; the result is updated along with a failed statement.
2)
Another approach I have followed using this link:
https://jira.mongodb.org/browse/SERVER-1243
db.collectionName.update({"_id":"week"},
{ "$set": { "output.headerValues.$[].types":"used" }
})
But it fails with error:
cannot use the part (headerValues of output.headerValues.$[].types) to
traverse the element ({headerValues: [ { startDate: "0707", headers:
"Id|week" } ]}) WriteError#src/mongo/shell/bulk_api.js:469:48
Bulk/mergeBatchResults#src/mongo/shell/bulk_api.js:836:49
Bulk/executeBatch#src/mongo/shell/bulk_api.js:906:13
Bulk/this.execute#src/mongo/shell/bulk_api.js:1150:21
DBCollection.prototype.updateOne#src/mongo/shell/crud_api.js:550:17
#(shell):1:1
I have searched for many different ways to add the new field to each object in the array, but with no success. Can anybody please suggest what I am doing wrong?
Your query is {"_id" : "week"}, but in your data the field is id, not _id.
So change {"_id" : "week"} to {"id" : "week"}, and also update MongoDB to the latest version; the all-positional operator $[] requires MongoDB 3.6 or newer.
db.collectionName.update({"id":"week"},
{ "$set": { "output.headerValues.$[].types":"used" }
})
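If you do upgrade, MongoDB 3.6+ also supports filtered positional updates via arrayFilters. A minimal sketch, assuming the same collection and field names as above and that you only want to add types where it is missing:
db.collectionName.update(
    { "id": "week" },
    { "$set": { "output.headerValues.$[elem].types": "used" } },
    // only touch array elements that do not already have a "types" field
    { "arrayFilters": [ { "elem.types": { "$exists": false } } ] }
)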

How to convert `json query` to `sql query`?

I have an AngularJS app designed to build queries in JSON format; those queries are built from many tables, fields, and operators like "join", "inner", "where", "and", "or", "like", etc.
The AngularJS app sends these JSON queries to my Django REST Framework backend, so I need to translate each JSON query into an SQL query in order to run raw SQL, after first validating which tables/models are allowed in the select.
I don't need a full JSON-to-SQL translation; I just want to translate selects, with support for clauses like "where", "and", "or", "group_by".
For a better understanding of my question, here are the relevant snippets:
{
    "selectedFields": {
        "orders": {
            "id": true,
            "orderdate": true
        },
        "customers": {
            "customername": true,
            "customerlastname": true
        }
    },
    "from": ["orders"],
    "inner_join": {
        "customers": {
            "on_eq": [
                {
                    "orders": {
                        "customerID": true
                    }
                },
                {
                    "customers": {
                        "customerID": true
                    }
                }
            ]
        }
    }
}
SELECT
Orders.OrderID,
Customers.CustomerName,
Customers.CustomerLastName,
Orders.OrderDate
FROM Orders
INNER JOIN Customers
ON Orders.CustomerID=Customers.CustomerID;
I took the example from: http://www.w3schools.com/sql/sql_join.asp
Please note that I am not trying to serialize any SQL query output to JSON.
I found a NodeJS package (https://www.npmjs.com/package/json-sql) that converts JSON queries to SQL queries, so I made a NodeJS script and then created a Python class to call the NodeJS script.
With this approach I just need to send all AngularJS queries following this syntax (https://github.com/2do2go/json-sql/tree/master/docs#join).
NodeJS script.
// Use:
//$ nodejs reporter/services.js '{"type":"select","fields":["a","b"],"table":"table"}'
var jsonSql = require('json-sql')();
var json_query = JSON.parse(process.argv[2]);
var sql = jsonSql.build(json_query);
console.log(sql.query);
DRF class:
from unipath import Path
import json
from django.conf import settings
from Naked.toolshed.shell import muterun_js

full_path = str(Path(settings.BASE_DIR, "reporter/services.js"))

class JSONToSQL:
    def __init__(self, json_):
        self.json = json_
        self.sql = None
        self.dic = json.loads(json_)
        self.to_sql()

    def to_sql(self):
        response = muterun_js('%s \'%s\'' % (full_path, self.json))
        if response.exitcode == 0:
            self.sql = str(response.stdout).replace("\n", "")
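For clarity, a minimal hypothetical usage of this class, reusing the example query from the comment in services.js above:
import json

# Hypothetical usage of the JSONToSQL wrapper defined above
query = {"type": "select", "fields": ["a", "b"], "table": "table"}
converter = JSONToSQL(json.dumps(query))
print(converter.sql)  # the SQL string produced by json-sql, or None if the call failed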
You could write some custom JS to parse the object like this:
var selectedfields = '';
var fields = Object.keys(obj.selectedFields);
for (var i = 0; i < fields.length; i++) {
    var subfields = Object.keys(obj.selectedFields[fields[i]]);
    for (var j = 0; j < subfields.length; j++) {
        // separate the selected columns with commas so the generated SQL is valid
        if (selectedfields !== '') {
            selectedfields = selectedfields + ', ';
        }
        selectedfields = selectedfields + fields[i] + '.' + subfields[j];
    }
}
var from = "";
for (var i = 0; i < obj.from.length; i++) {
    if (from == "") {
        from = obj.from[i];
    } else {
        from = from + ',' + obj.from[i];
    }
}
var output = 'SELECT ' + selectedfields + ' FROM ' + from;
document.getElementById('output').innerHTML = output;
Or, in Angular, you would use $scope.output = ... from within a controller, perhaps.
jsfiddle here: https://jsfiddle.net/jsheridan390/fpbp6cz0/1/

Translate SQL query to MongoDB mapreduce

I have the following collection in mongo:
{
"_id" : ObjectId("506217890b50f300d020d237"),
"o_orderkey" : NumberLong(1),
"o_orderstatus" : "O",
"o_totalprice" : 173665.47,
"o_orderdate" : ISODate("1996-01-02T02:00:00Z"),
"o_orderpriority" : "5-LOW",
"o_clerk" : "Clerk#000000951",
"o_shippriority" : 0,
"o_comment" : "blithely final dolphins solve-- blithely blithe packages nag blith",
"customer" : {
"c_custkey" : NumberLong(36901),
"c_name" : "Customer#000036901",
"c_address" : "TBb1yDZcf 8Zepk7apFJ",
"c_phone" : "23-644-998-4944",
"c_acctbal" : 4809.84,
"c_mktsegment" : "AUTOMOBILE",
"c_comment" : "regular accounts after the blithely pending dependencies play blith",
"c_nationkey" : {
"n_nationkey" : NumberLong(13),
"n_name" : "JORDAN",
"n_comment" : "blithe, express deposits boost carefully busy accounts. furiously pending depos",
"n_regioin" : {
"r_regionkey" : NumberLong(4),
"r_name" : "MIDDLE EAST",
"r_comment" : "furiously unusual packages use carefully above the unusual, exp"
}
}
},
"o_lineitem" : [
{
"l_linenumber" : 1,
"l_quantity" : 17,
"l_extendedprice" : 21168.23,
"l_discount" : 0.04,
"l_tax" : 0.02,
"l_returnflag" : "N",
"l_linestatus" : "O",
"l_shipdate" : ISODate("1996-03-13T03:00:00Z"),
"l_commitdate" : ISODate("1996-02-12T03:00:00Z"),
"l_receiptdate" : ISODate("1996-03-22T03:00:00Z"),
"l_shipinstruct" : "DELIVER IN PERSON",
"l_shipmode" : "TRUCK",
"l_comment" : "blithely regular ideas caj",
"partsupp" : {
"ps_availqty" : 6157,
"ps_supplycost" : 719.17,
"ps_comment" : "blithely ironic packages haggle quickly silent platelets. silent packages must have to nod. slyly special theodolites along the blithely ironic packages nag above the furiously pending acc",
"ps_partkey" : {
"p_partkey" : NumberLong(155190),
"p_name" : "slate lavender tan lime lawn",
"p_mfgr" : "Manufacturer#4",
"p_brand" : "Brand#44",
"p_type" : "PROMO BRUSHED NICKEL",
"p_size" : 9,
"p_container" : "JUMBO JAR",
"p_retailprice" : 1245.19,
"p_comment" : "regular, final dol"
},
"ps_suppkey" : {
"s_suppkey" : NumberLong(7706),
"s_name" : "Supplier#000007706",
"s_address" : "BlHq75VoMNCoU380SGiS9fTWbGpeI",
"s_phone" : "33-481-218-6643",
"s_acctbal" : -379.71,
"s_comment" : "carefully pending ideas after the instructions are alongside of the dolphins. slyly pe",
"s_nationkey" : {
"n_nationkey" : NumberLong(23),
"n_name" : "UNITED KINGDOM",
"n_comment" : "fluffily regular pinto beans breach according to the ironic dolph",
"n_regioin" : {
"r_regionkey" : NumberLong(3),
"r_name" : "EUROPE",
"r_comment" : "special, bold deposits haggle foxes. platelet"
}
}
}
}
},
.
.
.
]
}
And I'm trying to translate the following SQL query:
select
s_acctbal,
s_name,
n_name,
p_partkey,
p_mfgr,
s_address,
s_phone,
s_comment
from
part,
supplier,
partsupp,
nation,
region
where
p_partkey = ps_partkey
and s_suppkey = ps_suppkey
and p_size = 15
and p_type like '%BRASS'
and s_nationkey = n_nationkey
and n_regionkey = r_regionkey
and r_name = 'EUROPE'
and ps_supplycost = (
select
min(ps_supplycost)
from
partsupp, supplier,
nation, region
where
p_partkey = ps_partkey
and s_suppkey = ps_suppkey
and s_nationkey = n_nationkey
and n_regionkey = r_regionkey
and r_name = 'EUROPE'
)
order by
s_acctbal desc,
n_name,
s_name,
p_partkey;
The function that I was trying:
db.runCommand({
    mapreduce: "ordersfull",
    query: {},
    map: function Map() {
        var pattern = /BRASS$/g;
        for (var i in this.o_lineitem) {
            var p_size = this.o_lineitem[i].partsupp.ps_partkey.p_size;
            var p_type = this.o_lineitem[i].partsupp.ps_partkey.p_type;
            var region = this.o_lineitem[i].partsupp.ps_suppkey.s_nationkey.n_regioin.r_name;
            if (p_size == 15 && p_type.match(pattern) != null && region == "EUROPE") {
                emit("", {
                    s_acctbal: this.o_lineitem[i].partsupp.ps_suppkey.s_acctbal,
                    s_name: this.o_lineitem[i].partsupp.ps_suppkey.s_name,
                    n_name: this.o_lineitem[i].partsupp.ps_suppkey.s_nationkey.n_name,
                    p_partkey: this.o_lineitem[i].partsupp.ps_partkey.p_partkey,
                    p_mfgr: this.o_lineitem[i].partsupp.ps_partkey.p_mfgr,
                    s_address: this.o_lineitem[i].partsupp.ps_suppkey.s_address,
                    s_phone: this.o_lineitem[i].partsupp.ps_suppkey.s_phone,
                    s_comment: this.o_lineitem[i].partsupp.ps_suppkey.s_comment
                });
            }
        }
    },
    reduce: function(key, values) {
    },
    out: 'query002'
});
In my result I got null values for all entries. What happened?
You can debug your MapReduce output by including print() or printjson() statements in the JavaScript functions. The resulting print output will be saved in your MongoDB log.
There are several issues with your MapReduce:
- the for .. in loop will not work as you expect; you should use array.forEach(..) instead
- if you are iterating an array, you already have a reference to the array item and should not use array[index]
- you should emit() with a unique key name if you don't want unexpected grouping
- your reduce() should return a value matching the structure of the emitted data
- you should ideally use a query parameter to limit the documents that need to be inspected
Given you appear to just be iterating documents without doing any grouping or reduce(), you may find it easier to fetch the documents and perform the same matching in your application code.
In any case, the map() function should actually look more like:
var map = function () {
    var pattern = /BRASS$/;
    this.o_lineitem.forEach(function(item) {
        var partKey = item.partsupp.ps_partkey;
        var suppKey = item.partsupp.ps_suppkey;
        var region = suppKey.s_nationkey.n_regioin.r_name;
        if (partKey.p_size == 15 && partKey.p_type.match(pattern) != null && region == "EUROPE") {
            emit(suppKey.s_name, {
                s_acctbal: suppKey.s_acctbal,
                s_name: suppKey.s_name,
                n_name: suppKey.s_nationkey.n_name,
                p_partkey: partKey.p_partkey,
                p_mfgr: partKey.p_mfgr,
                s_address: suppKey.s_address,
                s_phone: suppKey.s_phone,
                s_comment: suppKey.s_comment
            });
        }
    });
};
It would be easier to translate this query into the new Aggregation Framework in MongoDB 2.2, given your data structure and the desired multiple matching and sorting.
There are some current limitations to be aware of (such as the present 16MB maximum on output from the Aggregation pipeline), but you will likely find the queries easier to create and debug.
Here is a commented example using the Aggregation Framework, including initial match criteria for order status, date, and part/supplier items of interest:
db.ordersfull.aggregate(
// Find matching documents first (can take advantage of index)
{ $match: {
o_orderstatus: 'O',
o_orderdate: { $gte: new ISODate('2012-10-01') },
$and: [
{ o_lineitem: { $elemMatch: { 'partsupp.ps_partkey.p_size': 15 }} },
{ o_lineitem: { $elemMatch: { 'partsupp.ps_partkey.p_type': { $exists : true } }} },
{ o_lineitem: { $elemMatch: { 'partsupp.ps_suppkey.s_nationkey.n_regioin.r_name': 'EUROPE'}} }
]
}},
// Filter to fields of interest
{ $project: {
_id: 0,
o_lineitem: 1
}},
// Convert line item arrays into document stream
{ $unwind: '$o_lineitem' },
// Match desired line items
{ $match: {
'o_lineitem.partsupp.ps_partkey.p_size': 15,
'o_lineitem.partsupp.ps_partkey.p_type': /BRASS$/,
'o_lineitem.partsupp.ps_suppkey.s_nationkey.n_regioin.r_name': 'EUROPE'
}},
// Final field selection
{ $project: {
s_acctbal: '$o_lineitem.partsupp.ps_suppkey.s_acctbal',
s_name: '$o_lineitem.partsupp.ps_suppkey.s_name',
n_name: '$o_lineitem.partsupp.ps_suppkey.s_nationkey.n_name',
p_partkey: '$o_lineitem.partsupp.ps_partkey.p_partkey',
p_mfgr: '$o_lineitem.partsupp.ps_partkey.p_mfgr',
s_address: '$o_lineitem.partsupp.ps_suppkey.s_address',
s_phone: '$o_lineitem.partsupp.ps_suppkey.s_phone',
s_comment: '$o_lineitem.partsupp.ps_suppkey.s_comment'
}},
// Sort the output
{ $sort: {
s_acctbal: -1,
n_name: 1,
s_name: 1,
p_partkey: 1
}}
)

In an ExtJS Grid, how do I get access to the data store fields that are part of the sort set

How do I get access to the columns/datastore fields that are part of the sort set?
I am looking to modify a grid's sort parameters for remote sorting. I need the remote sort param's sort key to match the column field's mapping property. I need these things to happen through the normal 'column header click sorts the data' functionality.
Remote sorting and field mapping (ExtJS 4.1)
This functionality does not seem to be implemented in ExtJS. Here is a solution using the encodeSorters function provided since ExtJS 4. Accessing the fields map through the model's prototype is a bit dirty, but it does the job:
var store = Ext.create('Ext.data.Store', {
...,
proxy: {
...,
encodeSorters: function (sorters) {
var model = store.proxy.model,
map = model.prototype.fields.map;
return Ext.encode(Ext.Array.map(sorters, function (sorter) {
return {
property : map[sorter.property].mapping || sorter.property,
direction: sorter.direction
};
}));
}
}
});
However, it would be more relevant to override the original method:
Ext.data.proxy.Server.override({
encodeSorters: function(sorters) {
var min, map = this.model.prototype.fields.map;
min = Ext.Array.map(sorters, function (sorter) {
return {
property : map[sorter.property].mapping || sorter.property,
direction: sorter.direction
};
});
return this.applyEncoding(min);
}
});
Assuming you are using simpleSortMode, you could do something like this in your store.
listeners: {
    beforeload: function(store, operation, eOpts) {
        if (store.sorters.length > 0) {
            var sorter = store.sorters.getAt(0),
                dir = sorter.direction,
                prop = sorter.property,
                fields = store.model.getFields(),
                i,
                applyProp = prop;
            for (i = 0; i < fields.length; i++) {
                if (fields[i].name == prop) {
                    applyProp = fields[i].mapping || prop;
                    break;
                }
            }
            // clearing the sorters since simpleSortMode is true, so there will be only one sorter
            store.sorters.clear();
            store.sorters.insert(0, applyProp, new Ext.util.Sorter({
                property : applyProp,
                direction: dir
            }));
        }
    }
},

Filtering epics from Kanban board

I would like to start by saying I have read Rally Kanban - hiding Epic Stories, but I'm still having trouble implementing my filter based on the filter process from the Estimation Board app. Currently I'm trying to add an items filter to my query object for my cardboard. The query object calls this._getItems to return an array of items to filter from. As far as I can tell, the query calls the function, loads for a second or two, and then displays no results. Any input, suggestions, or alternative solutions are welcome.
Here's my code:
that._redisplayBoard = function() {
that._getAndStorePrefData(displayBoard);
this._getItems = function(callback) {
//Build types based on checkbox selections
var queries = [];
queries.push({key:"HierarchicalRequirement",
type: "HierarchicalRequirement",
fetch: "Name,FormattedID,Owner,ObjectID,Rank,PlanEstimate,Children,Ready,Blocked",
order: "Rank"
});
function bucketItems(results) {
var items = [];
rally.forEach(queries, function(query) {
if (results[query.key]) {
rally.forEach(results[query.key], function(item) {
//exclude epic stories since estimates cannot be altered
if ((item._type !== 'HierarchicalRequirement') ||
(item._type === 'HierarchicalRequirement' && item.Children.length === 0)) {
items = items.concat(item);
}
});
}
});
callback(items);
}
rallyDataSource.findAll(queries, bucketItems);
};
function displayBoard() {
artifactTypes = [];
var cardboardConfig = {
types: [],
items: that._getItems,
attribute: kanbanField,
sortAscending: true,
maxCardsPerColumn: 200,
order: "Rank",
cardRenderer: KanbanCardRenderer,
cardOptions: {
showTaskCompletion: showTaskCompletion,
showAgeAfter: showAgeAfter
},
columnRenderer: KanbanColumnRenderer,
columns: columns,
fetch: "Name,FormattedID,Owner,ObjectID,Rank,Ready,Blocked,LastUpdateDate,Tags,State,Priority,StoryType,Children"
};
if (showTaskCompletion) {
cardboardConfig.fetch += ",Tasks";
}
if (hideLastColumnIfReleased) {
cardboardConfig.query = new rally.sdk.util.Query("Release = null").or(kanbanField + " != " + '"' + lastState + '"');
}
if (filterByTagsDropdown && filterByTagsDropdown.getDisplayedValue()) {
cardboardConfig.cardOptions.filterBy = { field: FILTER_FIELD, value: filterByTagsDropdown.getDisplayedValue() };
}
cardboardConfig.types.push("HierarchicalRequirement");
if (cardboard) {
cardboard.destroy();
}
artifactTypes = cardboardConfig.types;
cardboard = new rally.sdk.ui.CardBoard(cardboardConfig, rallyDataSource);
cardboard.addEventListener("preUpdate", that._onBeforeItemUpdated);
cardboard.addEventListener("onDataRetrieved", function(cardboard,args){ console.log(args.items); });
cardboard.display("kanbanBoard");
}
};
that.display = function(element) {
//Build app layout
this._createLayout(element);
//Redisplay the board
this._redisplayBoard();
};
};
Per Charles' hint in Rally Kanban - hiding Epic Stories
Here's how I approached this following Charles' hint for the Rally Catalog Kanban. First, modify the fetch statement inside the cardboardConfig so that it includes the Children collection, thusly:
fetch: "Name,FormattedID,Children,Owner,ObjectID,Rank,Ready,Blocked,LastUpdateDate,Tags,State"
Next, in between this statement:
cardboard.addEventListener("preUpdate", that._onBeforeItemUpdated);
And this statement:
cardboard.display("kanbanBoard");
Add the following event listener and callback:
cardboard.addEventListener("onDataRetrieved",
function(cardboard, args){
// Grab items hash
filteredItems = args.items;
// loop through hash keys (states)
for (var key in filteredItems) {
// Grab the workproducts objects (Stories, defects)
workproducts = filteredItems[key];
// Array to hold filtered results, childless work products
childlessWorkProducts = new Array();
// loop through 'em and filter for the childless
for (i=0;i<workproducts.length;i++) {
thisWorkProduct = workproducts[i];
// Check first if it's a User Story, since Defects don't have children
if (thisWorkProduct._type == "HierarchicalRequirement") {
if (thisWorkProduct.Children.length === 0 ) {
childlessWorkProducts.push(thisWorkProduct);
}
} else {
// If it's a Defect, it has no children so push it
childlessWorkProducts.push(thisWorkProduct);
}
}
filteredItems[key] = childlessWorkProducts;
}
// un-necessary call to cardboard.setItems() was here - removed
}
);
This callback should filter for only leaf-node items.
Mark's answer caused an obscure crash when cardboard.setItems(filteredItems) was called. However, since the filtering code manipulates the actual references, it turns out that the setItems() method is not needed. I pulled it out, and it now filters properly.
Not sure if this is your problem, but your cardboard config does not set the 'query' field. The fetch specifies the data to retrieve; if you want to filter it, you add a "query:" value to the config object.
Something like:
var cardboardConfig = {
types: ["PortfolioItem", "HierarchicalRequirement", "Feature"],
attribute: dropdownAttribute,
fetch:"Name,FormattedID,Owner,ObjectID,ClassofService",
query : fullQuery,
cardRenderer: PriorityCardRenderer
};
Where fullQuery can be constructed using the Rally query object. You can find it by searching the SDK. Hope that helps.
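For instance, a minimal sketch of a hypothetical filter, reusing the rally.sdk.util.Query helper this app already uses for hideLastColumnIfReleased:
// Hypothetical example: restrict the board to items with no release,
// built with the same Query helper used earlier in this app
var fullQuery = new rally.sdk.util.Query("Release = null");
cardboardConfig.query = fullQuery;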