RavenDB memory leak on query

I'm having a hard time solving an issue with RavenDB.
At work we have a process that tries to identify potential duplicates in a specific collection of our database (let's call it the users collection).
That means I iterate through the collection and, for each document, run a query that looks for similar entities. So, as you can imagine, it's quite a long task to run.
My problem is that once the task starts running, RavenDB's memory consumption keeps growing and growing, and it seems to continue until it reaches the maximum memory of the system.
That doesn't really make sense to me, since I'm only querying: I use one single index and the default page size (128) for each query.
Has anybody run into a similar problem? I really have no idea what is going on in RavenDB, but it looks like a memory leak.
RavenDB version: 3.0.179

When I need to run massive operations on large collections, I follow these steps to keep memory usage under control (a sketch follows below):
I use Query Streaming to extract all the IDs of the documents I want to process (with a dedicated session).
Then I open a new session for each ID, load the document, and do whatever I need with it.
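A minimal sketch of that flow against the RavenDB 3.x client, assuming a document store named store, a User class and a static index named "Users/ByEmail" (all placeholder names); exact property names such as StreamResult.Key can vary slightly between client builds:
// A minimal sketch, assuming the RavenDB 3.x client API; "store", "User" and
// the "Users/ByEmail" index are placeholders for your own types and index.
var ids = new List<string>();

// 1) Stream the IDs with a dedicated session. Streaming is not limited by the
//    128-result page size and does not keep the documents in the session cache.
using (var session = store.OpenSession())
{
    var query = session.Query<User>("Users/ByEmail");
    using (var enumerator = session.Advanced.Stream(query))
    {
        while (enumerator.MoveNext())
        {
            ids.Add(enumerator.Current.Key); // document ID only
        }
    }
}

// 2) Process each document in its own short-lived session, so no session
//    ever accumulates more than one tracked document.
foreach (var id in ids)
{
    using (var session = store.OpenSession())
    {
        var user = session.Load<User>(id);
        // ... run the duplicate check for this user ...
    }
}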

First, a recommendation: if you don't want duplicates, store them with a well-known ID. For example, suppose you don't want duplicate User objects. You'd store them with an ID that makes them unique:
var user = new User() { Email = "foo@bar.com" };
var id = "Users/" + user.Email; // A well-known ID
dbSession.Store(user, id);
Then, when you want to check for duplicates, just check against the well-known ID:
public string RegisterNewUser(string email)
{
    // Unlike .Query, the .Load call is ACID and never stale.
    var existingUser = dbSession.Load<User>("Users/" + email);
    if (existingUser != null)
    {
        return "Sorry, that email is already taken.";
    }

    // Success path filled in for completeness: claim the well-known ID.
    dbSession.Store(new User { Email = email }, "Users/" + email);
    dbSession.SaveChanges();
    return "Thanks for registering!";
}
If you follow this pattern, you won't have to worry about running complex queries or about stale indexes.
If this scenario can't work for you for some reason, then we can help diagnose your memory issues. But to diagnose that, we'll need to see your code.

Related

RavenDB Consistency - WaitForIndexesAfterSaveChanges() / WaitForNonStaleResultsAsOfNow

I am using RavenDB 3.5.
I know that querying entities is not ACID, but loading by ID is.
Apparently writing to the DB is also ACID.
So far so good.
Now a question:
I've found some code:
session.Advanced.WaitForIndexesAfterSaveChanges();
entity = session.Load<T>(id);
session.Delete(entity);
session.SaveChanges();
// Func<T, T> command
command?.Invoke(entity);
What would be the purpose of calling WaitForIndexesAfterSaveChanges() here?
Is this because of the command being executed afterwards?
Or is it rather because dependent/consuming queries are supposed to immediately pick up the changes that were made?
If that is the case, couldn't I remove WaitForIndexesAfterSaveChanges() from this code block and just add WaitForNonStaleResultsAsOfNow() to the queries instead?
When would I use WaitForIndexesAfterSaveChanges() in the first place if my critical queries are already flagged with WaitForNonStaleResultsAsOfNow()?
The most likely reason is that this particular operation wants to wait for the indexes to catch up before continuing.
A good example of why you would want that is when you create a new item and the very next operation shows a list of items: WaitForIndexesAfterSaveChanges makes the save itself wait until the indexes have processed the change.
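A minimal sketch of that scenario, assuming a RavenDB 3.5 document store named store and an Order class (both placeholder names):
// A minimal sketch; "store" and "Order" are placeholders for your own setup.
using (var session = store.OpenSession())
{
    // Block SaveChanges until the indexes have caught up with this write.
    session.Advanced.WaitForIndexesAfterSaveChanges();
    session.Store(new Order { Number = "ORD-1" });
    session.SaveChanges(); // returns only after the indexes are up to date
}

using (var session = store.OpenSession())
{
    // No WaitForNonStaleResultsAsOfNow needed here; the write above already waited.
    var orders = session.Query<Order>().ToList();
}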

How do I delete all keys matching a specified key pattern using StackExchange.Redis?

I've got about 150,000 keys in a Redis cache, and need to delete over 95% of them - all keys matching a specific key prefix - as part of a cache rebuild. As I see it, there are three ways to achieve this:
Use server.Keys(pattern) to pull out the entire key list matching my prefix pattern, and iterate through the keys calling KeyDelete for each one.
Maintain a list of keys in a Redis set - each time I insert a value, I also insert the key in the corresponding key set, and then retrieve these sets rather than using Keys. This would avoid the expensive Keys() call, but still relies on deleting tens of thousands of records one by one.
Isolate all of my volatile data in a specific numbered database, and just flush it completely at the start of a cache rebuild.
I'm using .NET and the StackExchange.Redis client - I've seen solutions elsewhere that use the CLI or rely on Lua scripting, but nothing that seems to address this particular use case - have I missed a trick, or is this just something you're not supposed to do with Redis?
(Background: Redis is acting as a view model in front of the Microsoft Dynamics CRM API, so the cache is populated on first run by pulling around 100K records out of CRM, and then kept in sync by publishing notifications from within CRM whenever an entity is modified. Data is cached in Redis indefinitely and we're dealing with a specific scenario here where the CRM plugins fail to fire for a period of time, which causes cache drift and eventually requires us to flush and rebuild the cache.)
Both options 2 & 3 are reasonable.
Steer clear of option 1. KEYS really is slow and only gets slower as your keyspace grows.
I'd normally go for option 2 (without Lua; adding Lua would increase the learning curve needed to support the solution, which is of course fine when justified and as long as its existence is clearly documented). But option 3 could definitely be a contender: fast and simple, as long as you can be sure you won't exceed the configured database limit.
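For completeness, a minimal sketch of option 2 with StackExchange.Redis, assuming an existing ConnectionMultiplexer named muxer; the key names and the serializedContact value are made up for illustration:
// A minimal sketch of option 2: track keys in a Redis set so the rebuild
// can delete them without any KEYS scan. "muxer" and the key names are assumptions.
IDatabase db = muxer.GetDatabase();

// On every write, also record the key in a tracking set.
db.StringSet("cache:contact:42", serializedContact);
db.SetAdd("cache:tracked-keys", "cache:contact:42");

// On rebuild, read the tracking set back and delete everything in one round trip.
RedisValue[] tracked = db.SetMembers("cache:tracked-keys");
RedisKey[] keys = Array.ConvertAll(tracked, v => (RedisKey)(string)v);
if (keys.Length > 0)
{
    db.KeyDelete(keys);             // variadic DEL
}
db.KeyDelete("cache:tracked-keys"); // drop the tracking set itself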
Use scanStream instead of keys and it will work like a charm.
Docs - https://redis.io/commands/scan
The code below collects an array of keys starting with LOGIN::; you can then loop through that array (or pass it straight to the Redis DEL command) to delete the corresponding keys.
Example code in Node.js (using ioredis):
const Redis = require('ioredis');
const redis = new Redis();

const keysArray = [];
const stream = redis.scanStream({
  match: "LOGIN::*"
});
stream.on("data", (keys = []) => {
  for (const key of keys) {
    if (!keysArray.includes(key)) {
      keysArray.push(key);
    }
  }
});
stream.on("end", async () => {
  // Every matching key has been collected; delete them in one DEL call.
  if (keysArray.length > 0) {
    await redis.del(...keysArray);
  }
});
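For the .NET side of the question, the same SCAN-based approach is available through StackExchange.Redis: IServer.Keys pages through the keyspace with SCAN rather than KEYS on servers that support it. A rough sketch, assuming an existing ConnectionMultiplexer named muxer and reusing the LOGIN:: prefix from the example above:
// A rough sketch; "muxer" is an assumed ConnectionMultiplexer instance.
// IServer.Keys uses SCAN under the hood where the server supports it.
IServer server = muxer.GetServer(muxer.GetEndPoints()[0]);
IDatabase db = muxer.GetDatabase();

var buffer = new List<RedisKey>();
foreach (var key in server.Keys(pattern: "LOGIN::*", pageSize: 1000))
{
    buffer.Add(key);
    if (buffer.Count == 1000)
    {
        db.KeyDelete(buffer.ToArray()); // variadic DEL for the current page
        buffer.Clear();
    }
}
if (buffer.Count > 0)
{
    db.KeyDelete(buffer.ToArray());
}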

Single Instance VS PerCall in WCF

There are a lot of posts saying that SingleInstance is a bad design. But I think it is the best choice in my situation.
In my service I have to return a list of currently logged-in users (with additional data). This list is identical for all clients. I want to retrieve this list from database every 5 seconds (for example) and return a copy of it to the client, when needed.
If I use PerCall instancing mode, I will retrieve this list from database every single time. This list is supposed to contain ~200-500 records, but can grow up to 10 000 in the future. Every record is complex and contains about 10 fields.
So what about performance? Is it better to use "bad design" and get list once or to use "good approach" and get list from database on every call?
So what about performance? Is it better to use "bad design" and get list once or to use "good approach" and get list from database on every call?
Performance and good design are NOT mutually exclusive. The problem with using a single instance is that it can only service a single request at a time, so all other requests are waiting on it to finish doing its thing.
Alternatively you could just leverage a caching layer to hold the results of your query instead of coupling that to your service.
Then your code might look something like this:
public IEnumerable<BigDataRecord> GetBigExpensiveQuery()
{
    // Double-checked locking is necessary to prevent filling the cache
    // multiple times in a multi-threaded environment.
    if (Cache["BigQuery"] == null)
    {
        lock (_bigQueryLock)
        {
            if (Cache["BigQuery"] == null)
            {
                var data = DoBigQuery();
                Cache.AddCacheItem("BigQuery", data, TimeSpan.FromSeconds(5));
            }
        }
    }
    return (IEnumerable<BigDataRecord>)Cache["BigQuery"];
}
Now you can have as many instances as you want all accessing the same Cache.
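The Cache/AddCacheItem calls above are pseudocode; if you don't already have a caching abstraction, a minimal sketch of the same double-checked pattern with System.Runtime.Caching.MemoryCache (DoBigQuery and BigDataRecord are still the hypothetical names from above) could look like this:
// A minimal sketch using System.Runtime.Caching. MemoryCache.Default is shared
// across the AppDomain, so every PerCall service instance sees the same entry.
private static readonly object _bigQueryLock = new object();

public IEnumerable<BigDataRecord> GetBigExpensiveQuery()
{
    var cached = (IEnumerable<BigDataRecord>)MemoryCache.Default.Get("BigQuery");
    if (cached != null)
        return cached;

    lock (_bigQueryLock)
    {
        cached = (IEnumerable<BigDataRecord>)MemoryCache.Default.Get("BigQuery");
        if (cached != null)
            return cached;

        var data = DoBigQuery().ToList(); // materialize before caching
        MemoryCache.Default.Set("BigQuery", data,
            new CacheItemPolicy { AbsoluteExpiration = DateTimeOffset.Now.AddSeconds(5) });
        return data;
    }
}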

Searching Authorize.net CIM Records

Has anyone come up with an elegant way to search data stored on Authorize.net's Customer Information Manager (CIM)?
Based on their XML guide, there don't appear to be any search capabilities at all. That's a huge shortcoming.
As I understand it, the selling point for CIM is that the merchant doesn't need to store any customer information. They merely store a unique identifier for each and retrieve the data as needed. This may be great from a PCI Compliance perspective, but it's horrible from a flexibility standpoint.
A simple search like "Show me all orders from Texas" suddenly becomes very complicated.
How are the rest of you handling this problem?
The short answer is, you're correct: There is no API support for searching CIM records. And due to the way it is structured, there is no easy way to use CIM alone for searching all records.
To search them in the manner you describe:
Use getCustomerProfileIdsRequest to get all the customer profile IDs you have stored.
For each of the CustomerProfileIds returned by that request, use getCustomerProfileRequest to get the specific record for that client.
Examine each record at that time, looking for the criterion you want, storing the pertinent records in some other structure; a class, a multi-dimensional array, an ADO DataTable, whatever.
Yes, that's onerous. But it is literally the only way to proceed.
The reporting API mentioned in another answer applies only to transactions, not to the Customer Information Manager.
Note that you can collect the kind of data you want at the time of recording a transaction, and as long as you don't make it personally identifiable, you can store it locally.
For example, you could run a request for all your CIM customer profile records, and store the state each customer is from in a local database.
If all you store is the state, then you can work with those records, because nothing ties the state to a specific customer record. Going forward, you could write logic to update the local state record store at the same time customer profile records are created / updated, too.
I realize this probably isn't what you wanted to hear, but them's the breaks.
This is likely to be VERY slow and inefficient. But here is one method: request an array of all the customer IDs, and then check each one for the field you want... in my case I wanted a search-by-email function in PHP:
$cimData = new AuthorizeNetCIM;
$profileIds = $cimData->getCustomerProfileIds();
$array = $profileIds->xpath('ids');
$authnet_cid = null;

/*
  This seems ridiculously inefficient...
  there's gotta be a better way to look up a customer based on email.
*/
foreach ($array[0]->numericString as $ids) { // loop over every profile ID
    $response = $cimData->getCustomerProfile($ids); // fetch the individual profile
    // put the kettle on
    if ($response->xml->profile->email == $email) {
        $authnet_cid = $ids;
        $oldCustomerProfile = $response->xml->profile;
    }
}
// now that the tea is ready, cream, sugar, biscuits, you might have your search result!
CIM's primary purpose is to take PCI compliance issues out of your hands by allowing you to store customer data, including credit cards, on their server and then access it using only a unique ID. If you want to do reporting, you will need to keep track of that kind of information yourself. Since there are no PCI compliance issues with storing customer addresses, etc., it's realistic to do this yourself. Basically, this is the kind of thing that needs to get fleshed out during the design phase of the project.
They do have a new reporting API which may offer you this functionality. If it does not, it's very possible it will be offered in the near future, as Authnet is currently rolling out lots of new features to their APIs.

Fastest way to query for object existence in NHibernate

I am looking for the fastest way to check for the existence of an object.
The scenario is pretty simple: assume a directory tool that reads the current hard drive. When a directory is found, it should either be created in the database or, if already present, updated.
First, let's focus only on the creation part:
public static DatabaseDirectory Get(DirectoryInfo dI)
{
    var result = DatabaseController.Session
        .CreateCriteria(typeof(DatabaseDirectory))
        .Add(Restrictions.Eq("FullName", dI.FullName))
        .List<DatabaseDirectory>().FirstOrDefault();

    if (result == null)
    {
        result = new DatabaseDirectory
        {
            CreationTime = dI.CreationTime,
            Existing = dI.Exists,
            Extension = dI.Extension,
            FullName = dI.FullName,
            LastAccessTime = dI.LastAccessTime,
            LastWriteTime = dI.LastWriteTime,
            Name = dI.Name
        };
    }

    return result;
}
Is this the way to go regarding:
Speed
Separation of Concerns
What comes to mind is the following: a scan will always be performed "as a whole". Meaning, during a scan of drive C, I know that nothing new gets added to the database (from some other process). So it MAY be a good idea to "cache" all existing directories prior to the scan and look them up that way. On the other hand, this may not be suitable for large sets of data, like files (which will number 600,000 or more)...
Perhaps some performance gain can be achieved using "index columns" or something like this, but I am not so familiar with this topic. If anybody has some references, just point me in the right direction...
Thanks,
Chris
PS: I am using NHibernate, Fluent Interface, Automapping and SQL Express (could switch to full SQL)
Note:
In the given problem, the path is not the ID in the database. The ID is an auto-increment, and I can't change this requirement (for other reasons). So the real question is: what is the fastest way to check for the existence of an object when the ID is not known, only a property of that object?
And batching might be possible, by selecting a big group with something like "starts with C:\Testfiles\", but the problem then remains: how do I know in advance how big this set will be? I can't just select "max 1000" and check against this buffered dictionary, because I might "hit next to the searched dir"... I hope the problem is clear. The most important part is: does buffering really affect performance this much? If so, does it make sense to load the whole DB into a dictionary containing only PATH and ID (which should be fine even with 1,000,000 objects, I think)?
First off, I highly recommend that you (anyone using NH, really) read Ayende's article about the differences between Get, Load, and query.
In your case, since you need to check for existence, I would use .Get(id) instead of a query for selecting a single object.
However, I wonder if you might improve performance by utilizing some knowledge of your problem domain. If you're going to scan the whole drive and check each directory for existence in the database, you might get better performance by doing bulk operations. Perhaps create a DTO object that only contains the PK of your DatabaseDirectory object to further minimize data transfer/processing. Something like:
Dictionary<string, DirectoryInfo> directories;

var existing = session.CreateQuery(
        "select new DatabaseDirectoryDTO(dd.FullName) from DatabaseDirectory dd where dd.FullName in (:ids)")
    .SetParameterList("ids", directories.Keys)
    .List<DatabaseDirectoryDTO>();
Then just remove those elements that match the returned ID values to get the directories that don't exist. You might have to break the process into smaller batches depending on how large your input set is (for the files, almost certainly).
As far as separation of concerns goes, just keep the operation at the repository level. Have a method like SyncDirectories that takes a collection (maybe a Dictionary if you follow something like the above) and handles the process of updating the database. That way your higher application logic doesn't have to worry about how it all works and won't be affected should you find an even faster way to do it in the future.
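A rough sketch of what such a SyncDirectories method could look like, assuming an NHibernate ISession, a dictionary keyed by full path, and an arbitrary batch size of 1000; DatabaseDirectory.CreateFrom is a made-up factory method for illustration, not something from your code:
// A rough sketch, not a drop-in implementation: the batch size of 1000 and
// DatabaseDirectory.CreateFrom are assumptions made for this example.
public void SyncDirectories(ISession session, IDictionary<string, DirectoryInfo> scanned)
{
    using (var tx = session.BeginTransaction())
    {
        // Process the scanned paths in batches so the IN clause stays small.
        foreach (var batch in scanned.Keys
                     .Select((path, i) => new { path, i })
                     .GroupBy(x => x.i / 1000, x => x.path))
        {
            // Which of these paths already exist in the database?
            var existing = session.CreateQuery(
                    "select dd.FullName from DatabaseDirectory dd where dd.FullName in (:paths)")
                .SetParameterList("paths", batch.ToList())
                .List<string>();

            // Insert only the directories that are missing.
            foreach (var path in batch.Except(existing))
            {
                session.Save(DatabaseDirectory.CreateFrom(scanned[path]));
            }
        }

        tx.Commit();
    }
}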