How to get last write date of a RavenDB collection/index - ravendb

I want to detect when a collection or index was last modified or written to.
Note: this works at the document level, but I need it at the collection/index level. See: How to get last write date of a RavenDB document via C#

RavenQueryStatistics stats;
using (var session = _documentStore.OpenSession())
{
    session.Query<MyDocument>()
           .Statistics(out stats)
           .Take(0)     // don't load any documents; we only need the stats
           .ToArray();  // needed to trigger server-side query execution
}
DateTime indexTimestamp = stats.IndexTimestamp;
string indexEtag = stats.IndexEtag;

Getting the metadata only via the RavenDB HTTP API:
GET http://localhost:8080/indexes/dynamic/MyDocuments/?metadata-only=true
would return:
{ "Results":[],
"Includes":[],
"IsStale":false,
"IndexTimestamp":"2013-09-16T15:54:58.2465733Z",
"TotalResults":0,
"SkippedResults":0,
"IndexName":"Raven/DocumentsByEntityName",
"IndexEtag":"01000000-0000-0008-0000-000000000006",
"ResultEtag":"3B5CA9C6-8934-1999-45C2-66A9769444F0",
"Highlightings":{},
"NonAuthoritativeInformation":false,
"LastQueryTime":"2013-09-16T15:55:00.8397216Z",
"DurationMilliseconds":102 }
ResultEtag and IndexTimestamp change on every write.
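A minimal polling sketch built on the statistics above (the CollectionChangeDetector class and the way the last-seen etag is kept are illustrative assumptions, not a RavenDB API):

using System.Linq;
using Raven.Client;
using Raven.Client.Linq;

public class CollectionChangeDetector
{
    private string _lastSeenIndexEtag;

    // Polls the dynamic index stats for MyDocument and reports whether
    // the collection has been written to since the previous poll.
    public bool HasChanged(IDocumentStore store)
    {
        RavenQueryStatistics stats;
        using (var session = store.OpenSession())
        {
            session.Query<MyDocument>()
                   .Statistics(out stats)
                   .Take(0)     // stats only, no documents
                   .ToArray();  // force server-side execution
        }

        var currentEtag = stats.IndexEtag.ToString();
        var changed = _lastSeenIndexEtag != null && currentEtag != _lastSeenIndexEtag;
        _lastSeenIndexEtag = currentEtag;
        return changed;
    }
}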

Why can I not use Continuation when using a proxy class to access MS CRM 2013?

So I have a standard service reference proxy class for MS CRM 2013 (i.e. right-click, add reference, etc...). I then found the limitation that CRM data calls are limited to 50 results, and I wanted to get the full list of results. I found two methods; one looks more correct, but doesn't seem to work. I was wondering why it didn't and/or if there was something I'm doing incorrectly.
Basic setup and process
crmService = new CrmServiceReference.MyContext(new Uri(crmWebServicesUrl));
crmService.Credentials = System.Net.CredentialCache.DefaultCredentials;
var accountAnnotations = crmService.AccountSet
    .Where(a => a.AccountNumber == accountNumber)
    .Select(a => a.Account_Annotation)
    .FirstOrDefault();
Using Continuation (something I want to work, but looks like it doesn't)
while (accountAnnotations.Continuation != null)
{
    accountAnnotations.Load(crmService.Execute<Annotation>(accountAnnotations.Continuation.NextLinkUri));
}
Using that method, .Continuation is always null and accountAnnotations.Count is always 50 (but there are more than 50 records).
After struggling with .Continuation for a while, I've come up with the following alternative method (but it seems "not good"):
var accountAnnotationData = accountAnnotations.ToList();
var accountAnnotationFinal = accountAnnotations.ToList();
var index = 1;
while (accountAnnotationData.Count == 50)
{
    accountAnnotationData = (from a in crmService.AnnotationSet
                             where a.ObjectId.Id == accountAnnotationData.First().ObjectId.Id
                             select a).Skip(50 * index).ToList();
    accountAnnotationFinal = accountAnnotationFinal.Union(accountAnnotationData).ToList();
    index++;
}
So the second method seems to work, but for any number of reasons it doesn't seem like the best. Is there a reason .Continuation is always null? Is there some setup step I'm missing or some nice way to do this?
The way to get the records from CRM is to use paging. Here is an example with a query expression, but you can also use FetchXML if you want:
// Query using the paging cookie.
// Define the paging attributes.

// The number of records per page to retrieve.
int fetchCount = 3;

// Initialize the page number.
int pageNumber = 1;

// Initialize the number of records.
int recordCount = 0;

// Define the condition expression for retrieving records.
ConditionExpression pagecondition = new ConditionExpression();
pagecondition.AttributeName = "address1_stateorprovince";
pagecondition.Operator = ConditionOperator.Equal;
pagecondition.Values.Add("WA");

// Define the order expression to retrieve the records.
OrderExpression order = new OrderExpression();
order.AttributeName = "name";
order.OrderType = OrderType.Ascending;

// Create the query expression and add the condition.
QueryExpression pagequery = new QueryExpression();
pagequery.EntityName = "account";
pagequery.Criteria.AddCondition(pagecondition);
pagequery.Orders.Add(order);
pagequery.ColumnSet.AddColumns("name", "address1_stateorprovince", "emailaddress1", "accountid");

// Assign the paging properties to the query expression.
pagequery.PageInfo = new PagingInfo();
pagequery.PageInfo.Count = fetchCount;
pagequery.PageInfo.PageNumber = pageNumber;

// The current paging cookie. When retrieving the first page,
// pagingCookie should be null.
pagequery.PageInfo.PagingCookie = null;

Console.WriteLine("#\tAccount Name\t\t\tEmail Address");

while (true)
{
    // Retrieve the page.
    EntityCollection results = _serviceProxy.RetrieveMultiple(pagequery);
    if (results.Entities != null)
    {
        // Print all records in the result set.
        foreach (Account acct in results.Entities)
        {
            Console.WriteLine("{0}.\t{1}\t\t{2}",
                ++recordCount,
                acct.Name,
                acct.EMailAddress1);
        }
    }

    // Check whether more records exist.
    if (results.MoreRecords)
    {
        // Increment the page number to retrieve the next page.
        pagequery.PageInfo.PageNumber++;

        // Set the paging cookie to the one returned with the current results.
        pagequery.PageInfo.PagingCookie = results.PagingCookie;
    }
    else
    {
        // No more records; exit the loop.
        break;
    }
}
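For reuse, the same cookie-based loop can be wrapped in a small helper. The sketch below is my own wrapper, not SDK code; RetrieveMultiple, PagingInfo, MoreRecords, and PagingCookie are the standard Microsoft.Xrm.Sdk APIs:

using System.Collections.Generic;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

public static class CrmPaging
{
    // Pages through the query until MoreRecords is false and returns
    // everything accumulated along the way.
    public static List<Entity> RetrieveAll(IOrganizationService service, QueryExpression query, int pageSize = 500)
    {
        var all = new List<Entity>();
        query.PageInfo = new PagingInfo { Count = pageSize, PageNumber = 1, PagingCookie = null };

        while (true)
        {
            EntityCollection page = service.RetrieveMultiple(query);
            all.AddRange(page.Entities);

            if (!page.MoreRecords)
                return all;

            // Advance using the cookie returned with the current page.
            query.PageInfo.PageNumber++;
            query.PageInfo.PagingCookie = page.PagingCookie;
        }
    }
}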

"update" query - error invalid input synatx for integer: "{39}" - postgresql

I'm using Node.js 0.10.12 to perform queries against PostgreSQL 9.1.
I get the error invalid input syntax for integer: "{39}" (39 is an example number) when I try to perform an update query.
I cannot see what is going wrong. Any advice?
Here is my code (snippets) in the front-end
// this is global
var gid = 0;

// set up a websocket to search - works fine
var sd = new WebSocket("ws://localhost:0000");
sd.onmessage = function (evt)
{
    // get the data and parse it (there is more than one var), then pass the id to gid
    var received_msg = evt.data;
    var packet = JSON.parse(received_msg);
    var tid = packet['tid'];
    gid = tid;
};

// when the user clicks the button, send the id and other data over the
// websocket to perform the update query
var sa = new WebSocket("ws://localhost:0000");
sa.onopen = function () {
    sa.send(JSON.stringify({
        command: 'typesave',
        indi: gid,
        name: document.getElementById("typename").value
    }));
    sa.onmessage = function (evt) {
        alert("Saved");
        sa.close();
        gid = 0; // reset gid to 0 for re-use
    };
};
And the back-end (query):
var query = client.query("UPDATE type SET t_name=$1, t_color=$2 WHERE t_id = $3", [name, color, indi]);
query.on("row", function (row, result) {
    result.addRow(row);
});
query.on("end", function (result) {
    connection.send("o");
    client.end();
});
Why does this not work, and why does the number not get recognized?
Thanks in advance.
As one would expect from the initial problem, your database driver is sending an integer array with one member into a field that expects a plain integer. PostgreSQL rightly rejects the data and returns an error. '{39}' in PostgreSQL terms is exactly equivalent to ARRAY[39] using an array constructor, and to [39] in JSON.
Now, obviously you can just change your query call to pull the first item out of the JSON array and send that instead of the whole array, but I would be worried about what happens if things change and you get multiple values. You may want to look at separating that logic out for this data structure.

Get response using PDO SQL?

I am trying to get the actual response (the data) from my database using prepared statements:
$stmt = $dbconn->prepare("SELECT user_videos FROM public.account_recover_users WHERE user_mail = :email");
$stmt->execute(array(':videos' => $json_videos, ':email' => $email));
I know that $stmt->execute(array(':videos'=>$json_videos,':email'=>$email)); will return a boolean, not the actual data. But how do I get the data from my database into an array? I will need to return that data later; the script is accessed via a GET request, and I will need to do exit("{'data':$data_from_db}");, so I don't want to fetch each row using foreach($stmt as $row), just pass it all along as it is.
$results = array();
while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
    $results[] = $row;
}

Ordering a query by the string length of one of the fields

In RavenDB (build 2330) I'm trying to order my results by the string length of one of the indexed terms.
var result = session.Query<Entity, IndexDefinition>()
                    .Where(condition)
                    .OrderBy(x => x.Token.Length);
However, the results appear to be unsorted. Is this possible in RavenDB (or via a Lucene query), and if so, what is the syntax?
You need to add a field to IndexDefinition to order by, and set its SortOptions to Int (or something more appropriate; you just don't want String, which is the default).
If you want to use the Linq API as in your example, you need to add a field named Token_Length to the index's Map function (see Matt's comment):
from doc in docs
select new
{
    ...
    Token_Length = doc.Token.Length
}
And then you can query using the Linq API:
var result = session.Query<Entity, IndexDefinition>()
                    .Where(condition)
                    .OrderBy(x => x.Token.Length);
Or if you really want the field to be called TokenLength (or something other than Token_Length) you can use a LuceneQuery:
from doc in docs
select new
{
    ...
    TokenLength = doc.Token.Length
}
And you'd query like this:
var result = session.Advanced.LuceneQuery<Entity, IndexDefinition>()
                    .Where(condition)
                    .OrderBy("TokenLength");

Proper Way to Retrieve More than 128 Documents with RavenDB

I know variants of this question have been asked before (even by me), but I still don't understand a thing or two about this...
It was my understanding that one could retrieve more documents than the 128 default setting by doing this:
session.Advanced.MaxNumberOfRequestsPerSession = int.MaxValue;
And I've learned that a where clause should be an expression tree (Expression<Func<T, bool>>) instead of a Func, so that it's treated as IQueryable instead of IEnumerable. So I thought this should work:
public static List<T> GetObjectList<T>(Expression<Func<T, bool>> whereClause)
{
    using (IDocumentSession session = GetRavenSession())
    {
        return session.Query<T>().Where(whereClause).ToList();
    }
}
However, that only returns 128 documents. Why?
Note, here is the code that calls the above method:
RavenDataAccessComponent.GetObjectList<Ccm>(x => x.TimeStamp > lastReadTime);
If I add Take(n), then I can get as many documents as I like. For example, this returns 200 documents:
return session.Query<T>().Where(whereClause).Take(200).ToList();
Based on all of this, it would seem that the appropriate way to retrieve thousands of documents is to set MaxNumberOfRequestsPerSession and use Take() in the query. Is that right? If not, how should it be done?
For my app, I need to retrieve thousands of documents (each with very little data in them). We keep these documents in memory and use them as the data source for charts.
** EDIT **
I tried using int.MaxValue in my Take():
return session.Query<T>().Where(whereClause).Take(int.MaxValue).ToList();
And that returns 1024. Argh. How do I get more than 1024?
** EDIT 2 - Sample document showing data **
{
    "Header_ID": 3525880,
    "Sub_ID": "120403261139",
    "TimeStamp": "2012-04-05T15:14:13.9870000",
    "Equipment_ID": "PBG11A-CCM",
    "AverageAbsorber1": "284.451",
    "AverageAbsorber2": "108.442",
    "AverageAbsorber3": "886.523",
    "AverageAbsorber4": "176.773"
}
It is worth noting that since version 2.5, RavenDB has an "unbounded results API" to allow streaming. The example from the docs shows how to use this:
var query = session.Query<User>("Users/ByActive").Where(x => x.Active);

using (var enumerator = session.Advanced.Stream(query))
{
    while (enumerator.MoveNext())
    {
        User activeUser = enumerator.Current.Document;
    }
}
There is support for standard RavenDB queries and Lucene queries, and there is also async support.
The documentation can be found here. Ayende's introductory blog article can be found here.
The Take(n) function will only give you up to 1024 by default. However, you can change this default in Raven.Server.exe.config:
<add key="Raven/MaxPageSize" value="5000"/>
For more info, see: http://ravendb.net/docs/intro/safe-by-default
The Take(n) function will only give you up to 1024 by default. However, you can use it together with Skip(n) to get them all:
RavenQueryStatistics stats;
var points = new List<T>();
var nextGroupOfPoints = new List<T>();
const int ElementTakeCount = 1024;
int i = 0;
int skipResults = 0;

do
{
    nextGroupOfPoints = session.Query<T>()
        .Statistics(out stats)
        .Where(whereClause)
        .Skip(i * ElementTakeCount + skipResults)
        .Take(ElementTakeCount)
        .ToList();
    i++;

    // Results skipped by the server must be added to the paging offset,
    // or subsequent pages would drift.
    skipResults += stats.SkippedResults;

    points = points.Concat(nextGroupOfPoints).ToList();
}
while (nextGroupOfPoints.Count == ElementTakeCount);

return points;
RavenDB Paging
The number of requests per session is a separate concept from the number of documents retrieved per call. Sessions are short-lived and are expected to have few calls issued over them.
If you are getting more than 10 of anything from the store (even less than the default 128) for human consumption, then something is wrong, or your problem requires different thinking than a truckload of documents coming from the data store.
RavenDB indexing is quite sophisticated. There is a good article about indexing here, and about facets here.
If you have need to perform data aggregation, create map/reduce index which results in aggregated data e.g.:
Index:

// Map
from post in docs.Posts
select new { post.Author, Count = 1 }

// Reduce
from result in results
group result by result.Author into g
select new
{
    Author = g.Key,
    Count = g.Sum(x => x.Count)
}
Query:
session.Query<AuthorPostStats>("Posts/ByUser/Count")
    .Where(x => x.Author == "authorName")
    .ToList();
You can also use a predefined index with the Stream method. You may use a Where clause on indexed fields.
var query = session.Query<User, MyUserIndex>();
// or, with a filter on an indexed field:
// var query = session.Query<User, MyUserIndex>().Where(x => !x.IsDeleted);

using (var enumerator = session.Advanced.Stream<User>(query))
{
    while (enumerator.MoveNext())
    {
        var user = enumerator.Current.Document;
        // do something
    }
}
Example index:
public class MyUserIndex : AbstractIndexCreationTask<User>
{
    public MyUserIndex()
    {
        this.Map = users =>
            from u in users
            select new
            {
                u.IsDeleted,
                u.Username,
            };
    }
}
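For completeness, such an index is typically deployed once at application startup via IndexCreation (a standard client call; documentStore stands for your initialized store):

using Raven.Client.Indexes;

// Scans the assembly for AbstractIndexCreationTask subclasses
// (including MyUserIndex above) and creates them on the server.
IndexCreation.CreateIndexes(typeof(MyUserIndex).Assembly, documentStore);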
Documentation: What are indexes?
Session: Querying: How to stream query results?
Important note: the Stream method will NOT track objects. If you change objects obtained from this method, SaveChanges() will not be aware of any change.
Other note: you may get the following exception if you do not specify the index to use.
InvalidOperationException: StreamQuery does not support querying dynamic indexes. It is designed to be used with large data-sets and is unlikely to return all data-set after 15 sec of indexing, like Query() does.