Each of my docs in my Firestore DB contains one field: {name: 'some value'}
I would like to loop through all the docs, and if a doc's field value is equal to my param, remove that doc.
I'm trying to do it like so:
removeContact: function(name) {
  console.log('removing contact', name)
  db.collection("contacts").forEach(doc => {
    if (doc.data().name === name) {
      doc.delete()
    }
  })
}
But I get the error that forEach() is not defined.
You need to use .get() following the collection or query to get a query snapshot promise, which you then handle accordingly. You can use forEach on the snapshot and delete each doc.
A better way, instead of searching through every document and using an if statement, would be to use a query like where('name', '==', name) and delete the document that way. Using a query would leave less for your function to do.
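A minimal sketch of that first approach (assuming the same db handle and name param as in your question; note that the delete goes through doc.ref, not doc itself):
db.collection("contacts")
  .get()
  .then(snapshot => {
    snapshot.forEach(doc => {
      // client-side filter, then delete via the document reference
      if (doc.data().name === name) {
        doc.ref.delete()
      }
    })
  })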
To delete a document, you need to know the full path to that document. And since you don't know the document IDs, that means you'll first need to read the documents.
The good news is that this means you can also perform a query to filter only the documents you're interested in, instead of doing a client-side if.
db.collection("contacts").where("name", "==", name)
  .get()
  .then((querySnapshot) => {
    querySnapshot.forEach(doc => {
      doc.ref.delete()
    })
  })
I am creating an app that shows announcements stored in Firestore, and each announcement has a hasRead object.
It works, in that when a user reads an announcement it is shown as read in that user's app. But when another user reads the same announcement, his/her user ID is stored, overwriting any other user ID already stored.
Here is how I store it:
setAnnounceToRead(userId) {
  firebase.firestore().collection('announcements').doc(this.state.id).set({
    hasread: {
      userId
    }
  },
  { merge: true });
}
I already found out that it is because of the merge, as it doesn't "add" the user ID but overrides it instead.
How can I add the user ID of every user that reads the announcement, while keeping the already existing user IDs?
Cheers
Right now you're storing each user's UID as a field named userId. Since you're using the same field name for each user, you end up storing only the last user's UID.
To store the UID for all users, you'd usually have a structure like this:
hasread: {
  udartsUid: true,
  pufsUid: true
}
In your code that would translate to something like:
let update = {};
update[userId] = true;
firebase.firestore().collection('announcements').doc(this.state.id).set({
  hasread: update
},
{ merge: true });
But this type of operation got a lot easier recently, since Firestore now has operations that allow you to use an array for this type of information.
let doc = firebase.firestore().collection('announcements').doc(this.state.id);
doc.update({ "hasRead": firebase.firestore.FieldValue.arrayUnion(userId) });
This snippet will add the userId value to the array if it isn't already in there. If the value is already in the array, it does nothing.
For more on the latter, see the blog post Better arrays in Cloud Firestore.
In KeystoneJS's docs:
Populating related data in queries
You can populate related data for relationship fields thanks to Mongoose's populate functionality. To populate the author and category documents when loading a Post from the example above, you would do this:
Post.model.findOne().populate('author categories').exec(function (err, post) {
  // the author is a fully populated User document
  console.log(post.author.name);
});
My question: is there any option I can configure so these List APIs populate the many relationship automatically?
Thanks.
I think not. This is how I do it when I use Keystone as an API (using .populate).
exports.getStoreWithId = function (req, res) {
  Store.model
    .find()
    .populate('productTags productCategories')
    .where('_id', req.params.id)
    .exec(function (err, item) {
      if (err) return res.apiError('database error', err);
      res.apiResponse({
        store: item,
      });
    });
};
Pretty sure the short answer here is no. If you want to populate you'll need to include the .populate.
That being said, Keystone gives you access to the Mongoose schema, so the answer here should work. Their mongoose.Schema setup is handled by your Post.add(...) calls, so I think you can ignore their first snippet, and you should be able to add the hooks as Post.schema.pre(... for the second snippet.
The Post.schema.pre('save', ...) hooks definitely work with Keystone, so I assume the pre-find hooks work too; however, I've not actually tested this. (I'd be interested to know the outcome though!)
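For reference, an untested sketch of what those hooks might look like (assuming Mongoose query middleware and relationship fields named author and categories, as in the docs example above):
// Untested sketch: query middleware that auto-populates on every find.
Post.schema.pre('find', function (next) {
  this.populate('author categories');
  next();
});
Post.schema.pre('findOne', function (next) {
  this.populate('author categories');
  next();
});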
Finally, if that works, you could also have a look at the mongoose-autopopulate package, and see if you can get that to play nicely with keystone.
I'm looking for a way to clean a document's nested field. For example, consider I have a JSON object:
{
  fieldToClean: {
    fieldA: '..',
    fieldB: '..',
    fieldC: '..'
  }
}
I know that I don't need fieldB anymore. I found one solution that looks like:
var record = deepstream.record.getRecord('<proper path>')
record.whenReady(function () {
  var fieldToClean = record.get('fieldToClean')
  delete fieldToClean.fieldB
  record.set('fieldToClean', fieldToClean)
})
I wonder if deepstream provides something like:
record.delete('fieldToClean.fieldB')
or
record.set('fieldToClean.fieldB', undefined)
I wasn't able to find something like this in documentation.
Thank you for your time!
There's actually an open issue for this; our main design question is around deleting an index in an array. Is that a null or a splice? It would be great to have your feedback!
https://github.com/deepstreamIO/deepstream.io/issues/29
I am trying to create a custom search module based on Orchard.Search. I have created a custom field called keywords, which I have successfully added to the index. I want to match content where the title, body, or keywords match. Adding these using .WithField, or passing a string array of field names, tests for every field matching the term; I need these to return content if there is a match in any of the fields. I have included examples of how I am using both methods below.
Examples of how I am using the search builder:
var searchBuilder = Search()
    .WithField("type", "Cell").Mandatory().ExactMatch()
    .WithField("body", query)
    .WithField("title", query)
    .WithField("cell-keywords", query);
String Array FieldNames:
string[] searchFields = new string[3] { "body", "title", "cell-keywords" };
var searchBuilder = Search()
    .WithField("type", "Cell").Mandatory().ExactMatch()
    .Parse(searchFields, query, false);
If anyone could point me in the right direction, that would be fantastic :)
A colleague wrote an article on this on his blog; it should prove helpful: http://breakoutdeveloper.com/orchard-cms/creating-an-advanced-search
I have resolved my issue!
The problem was in how I was adding my keywords field to the index in the part handler. There were content items with NULL values, which was causing an error I had missed!
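For anyone hitting the same thing, a hedged sketch of the kind of null guard that fixes it (assuming an Orchard 1.x OnIndexing handler; the CellPart type and Keywords property are illustrative names, not from the question):
OnIndexing<CellPart>((context, part) => {
    // Content items saved before the field existed carry a null value;
    // indexing null caused the original error, so fall back to empty.
    var keywords = part.Keywords ?? string.Empty;
    context.DocumentIndex.Add("cell-keywords", keywords).Analyze();
});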
I have a document in RavenDB that looks like:
{
  "ItemId": 1,
  "Title": "Villa"
}
With the following metadata:
Raven-Clr-Type: MyNamespace.Item, MyNamespace
Raven-Entity-Name: Doelkaarten
So I serialized with the type MyNamespace.Item, but gave it my own Raven-Entity-Name, so it gets its own collection.
In my code I define an index:
public class DoelkaartenIndex : AbstractIndexCreationTask<Item>
{
    public DoelkaartenIndex()
    {
        // MetadataFor(doc)["Raven-Entity-Name"].ToString() == "Doelkaarten"
        Map = items => from item in items
                       where MetadataFor(item)["Raven-Entity-Name"].ToString() == "Doelkaarten"
                       select new { Id = item.ItemId, Name = item.Title };
    }
}
In the index it is translated into the following "Maps" field:
docs.Items
    .Where(item => item["#metadata"]["Raven-Entity-Name"].ToString() == "Doelkaarten")
    .Select(item => new { Id = item.ItemId, Name = item.Title })
A query on the index never gives results.
If the Maps field is manually changed to the code below, it works...
from doc in docs
where doc["#metadata"]["Raven-Entity-Name"] == "Doelkaarten"
select new { Id = doc.ItemId, Name = doc.Title };
How is it possible to define in code the index that gives the required result?
RavenDB used: RavenHQ, Build #961
UPDATE:
What I'm doing is the following: I want to use SharePoint as a CMS and use RavenDB as a read-only replication of the SharePoint list data. I created a tool to sync from SharePoint lists to RavenDB. I have a generic type Item that I create from a SharePoint list item and that I serialize into RavenDB, so all my docs are of type Item. But they come from different lists with different properties, so I want to be able to differentiate. You propose to differentiate on an additional property; this would work perfectly. But then I would see all list items from all lists in one big Items collection... What do you think would be the best approach to this problem? Or should I just live with it? I want to use the indexes to create projections from all the data in an Item to the actual data that I need.
You can't easily change the name of a collection this way. The server-side will use the Raven-Entity-Name metadata, but the client side will determine the collection name via the conventions registered with the document store. The default convention being to use the type name of the entity.
You can provide your own custom convention by assigning a new function to DocumentStore.Conventions.FindTypeTagName - but it would probably be cumbersome to do that for every entity. You could create a custom attribute to apply to your entities and then write the function to look for and understand that attribute.
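For example, a hedged sketch of that convention (assuming a RavenDB 1.x-era client, where DocumentConvention.DefaultTypeTagName provides the default behavior):
// Map the Item type to the "Doelkaarten" tag; defer to the default for everything else.
documentStore.Conventions.FindTypeTagName = type =>
    type == typeof(Item)
        ? "Doelkaarten"
        : DocumentConvention.DefaultTypeTagName(type);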
Really the simplest way is just to call your entity Doelkaarten instead of Item.
Regarding why the change in indexing works: it's not because of the switch in LINQ syntax, it's because you said from doc in docs instead of from doc in docs.Items. You probably could have done from doc in docs.Doelkaartens instead of using the where clause; they are equivalent. See this page in the docs for further examples.
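If you do want to define that exact map in code, a hedged sketch using the non-generic AbstractIndexCreationTask (so the map is a plain string and isn't tied to the client-side collection name) might look like:
public class DoelkaartenIndex : AbstractIndexCreationTask
{
    public override IndexDefinition CreateIndexDefinition()
    {
        return new IndexDefinition
        {
            // Targets the Doelkaarten collection directly; no metadata where-clause needed.
            Map = @"from doc in docs.Doelkaartens
                    select new { Id = doc.ItemId, Name = doc.Title }"
        };
    }
}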