Why is a Date query with aggregate not working in parse-server?

I want to query users whose updatedAt is less than or equal to today, using aggregate because I'm also doing other things such as sorting by pointers.
I'm using Cloud Code to define the query on the server.
I first tested the query in MongoDB Compass using ISODate and it works there, but running it from Node.js does not work correctly.
I also came across this problem, which they say was already fixed, and I looked at their tests.
Here's a link to that PR.
I'm passing the date like this:
const pipeline = [
  {
    project: {
      _id: true,
      process: {
        $substr: ['$_p_testdata', 12, -1]
      }
    }
  },
  {
    lookup: {
      from: 'Test',
      localField: 'process',
      foreignField: '_id',
      as: 'process'
    }
  },
  {
    unwind: {
      path: '$process'
    }
  },
  {
    match: {
      'process._updated_at': {
        $lte: new Date()
      }
    }
  }
];
const query = new Parse.Query('data');
return query.aggregate(pipeline);
I expect the result to be an array of length 4, but it only gives me an empty array.
I am able to fetch the data when I leave out the date match.
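For comparison, this is the same date condition expressed as a plain (non-aggregate) query on the joined class; it sidesteps the pipeline entirely, which is exactly why I can't use it here (sketch only):
// Sketch: the same date filter without aggregate, for comparison.
const check = new Parse.Query('Test');
check.lessThanOrEqualTo('updatedAt', new Date());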

Please try this:
const pipeline = [
  {
    match: {
      'editedBy.updatedAt': {
        $lte: new Date()
      }
    }
  }
];
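A possible explanation, though I have not verified it against the parse-server source: Parse Server seems to rewrite Parse-style field names such as updatedAt into their Mongo equivalents (_updated_at) inside aggregate stages, so the match should use the Parse-side name on the unwound alias rather than the raw Mongo name. Applied to the pipeline from the question (with its process alias), that would look like this sketch:
const pipeline = [
  { project: { _id: true, process: { $substr: ['$_p_testdata', 12, -1] } } },
  { lookup: { from: 'Test', localField: 'process', foreignField: '_id', as: 'process' } },
  { unwind: { path: '$process' } },
  // Assumption: Parse Server translates updatedAt to _updated_at itself,
  // so the Parse-side name is used here instead of the raw Mongo name.
  { match: { 'process.updatedAt': { $lte: new Date() } } }
];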

Related

Fetching nearest events with user location using meetup.com GraphQL API

I am trying to find a way to fetch nearby events using the meetup.com GraphQL API. After digging into the documentation for quite some time, I wasn't able to find a query that suits my needs. Furthermore, I wasn't able to find the old REST documentation, where a solution for my case might be present.
Thanks in advance!
This is what I could figure out so far. The documentation for SearchNode is missing, but I could get ids for events:
query($filter: SearchConnectionFilter!) {
  keywordSearch(filter: $filter) {
    count
    edges {
      cursor
      node {
        id
      }
    }
  }
}
Input JSON:
{ "filter" : {
"query" : "party",
"lat" : 43.8,
"lon" : -79.4, "radius" : 100,
"source" : "EVENTS"
}
}
Hope that helps. I'm still trying to figure out this new GraphQL API.
You can do something like this (customize it with whatever fields you want from Event):
const axios = require('axios');
const data = {
  query: `
    query($filter: SearchConnectionFilter!) {
      keywordSearch(filter: $filter) {
        count
        edges {
          cursor
          node {
            id
            result {
              ... on Event {
                title
                eventUrl
                description
                dateTime
                going
              }
            }
          }
        }
      }
    }`,
  variables: {
    filter: {
      query: "party",
      lat: 43.8,
      lon: -79.4,
      radius: 100,
      source: "EVENTS",
    },
  },
};
axios({
  method: "post",
  url: `https://api.meetup.com/gql`,
  headers: {
    Authorization: `Bearer YOUR_OAUTH_ACCESS_TOKEN`,
  },
  data,
})
  .then((res) => console.log(res.data))
  .catch((err) => console.error(err));
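On success, the events sit under the usual GraphQL response envelope (res.data.data.keywordSearch for the query above). A small sketch of unwrapping them:
// Sketch: unwrap the GraphQL response envelope for the query above.
function extractEvents(res) {
  return res.data.data.keywordSearch.edges.map((edge) => edge.node.result);
}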

Ordering nested / second level Arrays with Prisma?

I am using Prisma & PostgreSQL. Here I grab some stuff:
await prisma.items.findMany({
  where: { itemId: itemId },
  include: {
    modules: {
      include: {
        lessons: true
      }
    }
  }
});
I do not need to order the items themselves, but I would like to order the modules and lessons that I get back. Both have an Int property (called number) on which I could perform the ordering, but I do not know how to do this with Prisma / PostgreSQL, or whether it's even possible.
Any ideas?
You can use the orderBy operator for this.
Here's what the query would look like for your use-case:
const data = await prisma.items.findMany({
  where: { itemId: itemId },
  include: {
    modules: {
      orderBy: {
        number: 'asc'
      },
      include: {
        lessons: {
          orderBy: {
            number: 'asc'
          }
        }
      }
    }
  }
});
The article on filtering and sorting contains more information on this.
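Incidentally, the same operator is accepted at the top level of findMany as well, so ordering the items themselves later is a one-line change (createdAt here is only an illustrative field name, not from the question's schema):
// Sketch: top-level ordering, in addition to the nested orderBy above.
const ordered = await prisma.items.findMany({
  where: { itemId: itemId },
  orderBy: { createdAt: 'desc' }, // hypothetical field, for illustration
});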

Add a new element in each array of objects where arrays may have different lengths in MongoDB

I have the following schema:
{
  id: "week",
  output: {
    headerValues: [
      { startDate: "0707", headers: "ID|week" },
      { startDate: "0715", headers: "ID1|week1" },
      { startDate: "0722", headers: "ID2|week2" }
    ]
  }
}
I have to add a new field to each object in the headerValues array, like this:
{
  id: "week",
  output: {
    headerValues: [
      { startDate: "0707", headers: "ID|week", types: "used" },
      { startDate: "0715", headers: "ID1|week1", types: "used" },
      { startDate: "0722", headers: "ID2|week2", types: "used" }
    ]
  }
}
I tried different approaches like this:
1)
db.CollectionName.find({}).forEach(function(data) {
  for (var i = 0; i < data.output.headerValues.length; i++) {
    db.CollectionName.update(
      {
        "_id": data._id,
        "output.headerValues.startDate": data.output.headerValues[i].startDate
      },
      {
        "$set": {
          "output.headerValues.$.types": "used"
        }
      },
      true, true
    );
  }
});
This approach runs the script but then fails partway through, leaving the result updated only up to the failing statement.
2)
Another approach I followed, using this link:
https://jira.mongodb.org/browse/SERVER-1243
db.collectionName.update(
  { "_id": "week" },
  { "$set": { "output.headerValues.$[].types": "used" } }
);
But it fails with error:
cannot use the part (headerValues of output.headerValues.$[].types) to
traverse the element ({headerValues: [ { startDate: "0707", headers:
"Id|week" } ]}) WriteError#src/mongo/shell/bulk_api.js:469:48
Bulk/mergeBatchResults#src/mongo/shell/bulk_api.js:836:49
Bulk/executeBatch#src/mongo/shell/bulk_api.js:906:13
Bulk/this.execute#src/mongo/shell/bulk_api.js:1150:21
DBCollection.prototype.updateOne#src/mongo/shell/crud_api.js:550:17
#(shell):1:1
I have searched for many different ways to update each object in the array by adding a new field, but with no success. Can anybody please suggest what I am doing wrong?
Your query is {"_id": "week"}, but in your data the field is named id, not _id.
So change {"_id": "week"} to {"id": "week"}. Also note that the $[] all-positional operator requires MongoDB 3.6 or newer, so update your MongoDB version if necessary.
db.collectionName.update(
  { "id": "week" },
  { "$set": { "output.headerValues.$[].types": "used" } }
);
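If you need this across every document in the collection rather than just one, updateMany with the same $[] operator works (again, MongoDB 3.6 or newer):
// Sketch: add types: "used" to every headerValues entry in all documents.
db.collectionName.updateMany(
  {},
  { "$set": { "output.headerValues.$[].types": "used" } }
);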

Validation of fetched data from API Redux React

So, I will go straight to the point. I am getting data like this from an API:
[
  {
    id: 123,
    email: asd@asd.com
  },
  {
    id: 456,
    email: asdasd.com
  },
  {
    id: 789,
    email: asd@asd
  },
  ...
]
and I should validate each email and show all this info in a list, something like this:
asd@asd.com - valid
asdasd.com - invalid
asd@asd - invalid
...
My question is: what is the best way to store the validation data in the store? Is it better to have something like an "isValid" property on each email? I mean like this:
store = {
  emailsById: [
    123: {
      value: asd@asd.com,
      isValid: true
    },
    456: {
      value: asdasd.com,
      isValid: false
    },
    789: {
      value: asd@asd,
      isValid: false
    }
    ...
  ]
}
or something like this:
store = {
  emailsById: [
    123: {
      value: asd@asd.com
    },
    456: {
      value: asdasd.com
    },
    789: {
      value: asd@asd
    }
    ...
  ],
  inValidIds: ['456', '789']
}
Which one is better? Or maybe there is some other, better way to keep such data in the store? Bear in mind that there can be thousands of emails in the list :)
Thanks in advance for the answers ;)
I recommend reading the article "Avoiding Accidental Complexity When Structuring Your App State" by Tal Kol, which addresses exactly this problem: https://hackernoon.com/avoiding-accidental-complexity-when-structuring-your-app-state-6e6d22ad5e2a
Your example is quite simple and everything really depends on your needs, but personally I would go with something like this (based on the linked article):
var store = {
  emailsById: {
    123: {
      value: '123@example.com',
    },
    456: {
      value: '456@example.com',
    },
    789: {
      value: '789@example.com',
    },
    // ...
  },
  validEmailsMap: {
    456: true, // true when valid
    789: false, // false when invalid
  },
};
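A sketch of how the display list could then be derived from those two maps (toDisplayList is just an illustrative helper name, not something from the article):
// Sketch: join emailsById with validEmailsMap for display.
function toDisplayList(store) {
  return Object.keys(store.emailsById).map(function (id) {
    return {
      value: store.emailsById[id].value,
      isValid: store.validEmailsMap[id] === true,
    };
  });
}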
So your best option would be to create a separate file that contains all your validation methods. Import that into the component you're using, and then apply the logic wherever you need valid/invalid.
If it's something that you feel you want to put in the store from the beginning, and the data will never be in a transient state, you could run your DTO through an array map in your reducer when you get the response from your API.
export default function (state = initialState, action) {
  const { type, response } = action;
  switch (type) {
    case DATA_RECEIVED_SUCCESS: {
      const items = [];
      for (var i = 0; i < response.emailsById.length; i++) {
        var email = response.emailsById[i];
        email.isValid = checkEmailValid(email);
        items.push(email);
      }
      return {
        ...state,
        items
      };
    }
    default:
      // Return the current state for actions this reducer does not handle.
      return state;
  }
}
However, my preference would be to always check at the last moment you need to. It makes for a safer design in case you find you need to change your design in the future. Also, separating the validation logic out will make it more testable.
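For illustration, that separate file could be as small as this (the regex is deliberately minimal, not a full RFC 5322 check; checkEmailValid matches the helper name used in the reducer above):
// validation.js - sketch of a standalone, testable validation module.
const EMAIL_RE = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

export function checkEmailValid(item) {
  // Assumes items shaped like { id, email }, as in the API response.
  return EMAIL_RE.test(item.email);
}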
First of all, the way you defined an array in JavaScript is wrong.
What you need is an array of objects, like:
emails: [
  {
    id: '1',
    email: 'abc@abc.com',
    isValid: true
  },
  {
    id: '2',
    email: 'abc.com',
    isValid: false
  }
];
If you need to access an email based on an id, you can add an id property along with email and isValid; uuid is a good way to generate one.
In conclusion, it depends on your use case.
I believe the above example is a good way to keep data in the store because it's simple.
What you described in your second example is like maintaining two different states. I would not recommend that.

GraphQL queries with table joins using Node.js

I am learning GraphQL so I built a little project. Let's say I have 2 models, User and Comment.
const Comment = Model.define('Comment', {
  content: {
    type: DataType.TEXT,
    allowNull: false,
    validate: {
      notEmpty: true,
    },
  },
});
const User = Model.define('User', {
  name: {
    type: DataType.STRING,
    allowNull: false,
    validate: {
      notEmpty: true,
    },
  },
  phone: DataType.STRING,
  picture: DataType.STRING,
});
The relations are one-to-many, where a user can have many comments.
I have built the schema like this:
const UserType = new GraphQLObjectType({
  name: 'User',
  fields: () => ({
    id: {
      type: GraphQLString
    },
    name: {
      type: GraphQLString
    },
    phone: {
      type: GraphQLString
    },
    comments: {
      type: new GraphQLList(CommentType),
      resolve: user => user.getComments()
    }
  })
});
And the query:
const user = {
  type: UserType,
  args: {
    id: {
      type: new GraphQLNonNull(GraphQLString)
    }
  },
  resolve: (_, { id }) => User.findById(id)
};
Executing the query for a user and his comments is done with one request, like so:
{
  user(id: "1") {
    comments {
      content
    }
  }
}
As I understand it, the client gets the results with one request; that is the benefit of using GraphQL. But the server still executes 2 queries, one for the user and another one for his comments.
My question is: what are the best practices for building the GraphQL schema and types, and joining between tables, so that the server can also resolve the query with 1 request?
The concept you are referring to is called batching. There are several libraries out there that offer this. For example:
Dataloader: a generic utility maintained by Facebook that provides "a consistent API over various backends and reduce requests to those backends via batching and caching"
join-monster: "A GraphQL-to-SQL query execution layer for batch data fetching."
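For the Sequelize models above, a minimal DataLoader sketch could look like this (the userId foreign-key name is an assumption about how the association is defined):
const DataLoader = require('dataloader');

// Collects all comment lookups made in one tick into a single SQL query;
// an array value in `where` becomes an IN (...) clause in Sequelize.
const commentsByUserId = new DataLoader(async (userIds) => {
  const comments = await Comment.findAll({ where: { userId: userIds } });
  // DataLoader requires one result per key, in the same order as the keys.
  return userIds.map((id) => comments.filter((c) => c.userId === id));
});

// The comments field in UserType would then resolve with:
// resolve: (user) => commentsByUserId.load(user.id)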
To anyone using .NET and the GraphQL for .NET package, I have made an extension method that converts the GraphQL Query into Entity Framework Includes.
public static class ResolveFieldContextExtensions
{
    public static string GetIncludeString(this ResolveFieldContext<object> source)
    {
        return string.Join(',', GetIncludePaths(source.FieldAst));
    }

    private static IEnumerable<Field> GetChildren(IHaveSelectionSet root)
    {
        return root.SelectionSet.Selections.Cast<Field>()
            .Where(x => x.SelectionSet.Selections.Any());
    }

    private static IEnumerable<string> GetIncludePaths(IHaveSelectionSet root)
    {
        var q = new Queue<Tuple<string, Field>>();
        foreach (var child in GetChildren(root))
            q.Enqueue(new Tuple<string, Field>(child.Name.ToPascalCase(), child));

        while (q.Any())
        {
            var node = q.Dequeue();
            var children = GetChildren(node.Item2).ToList();
            if (children.Any())
            {
                foreach (var child in children)
                    q.Enqueue(new Tuple<string, Field>(node.Item1 + "." + child.Name.ToPascalCase(), child));
            }
            else
            {
                yield return node.Item1;
            }
        }
    }
}
Let's say we have the following query:
query {
  getHistory {
    id
    product {
      id
      category {
        id
        subCategory {
          id
        }
        subAnything {
          id
        }
      }
    }
  }
}
We can create a variable in the "resolve" method of the field:
var include = context.GetIncludeString();
which generates the following string:
"Product.Category.SubCategory,Product.Category.SubAnything"
and pass it to Entity Framework:
public Task<TEntity> Get(TKey id, string include)
{
    // Declare as IQueryable<TEntity>: Include() returns IQueryable,
    // not DbSet, so the reassignment below would not compile with var.
    IQueryable<TEntity> query = Context.Set<TEntity>();
    if (!string.IsNullOrEmpty(include))
    {
        query = include.Split(',', StringSplitOptions.RemoveEmptyEntries)
            .Aggregate(query, (q, p) => q.Include(p));
    }
    return query.SingleOrDefaultAsync(c => c.Id.Equals(id));
}