Sort Google Custom Search Results by Date

I'm in the middle of migrating our Google Custom Search Engine to the CustomSearchControl, replacing the deprecated WebSearch API, and one of the requirements is to sort the suggestion results by date. So far, I haven't figured out how to tell Google to sort results by latest date before returning the suggestions. The sample code is as follows:
var refinement = "Support";
.....
switch (product)
{
    case "10000":
        refinement = "Support1";
        break;
    case "10002":
        refinement = "Support1";
        break;
    case "10001":
        refinement = "Support2";
        break;
    default:
        break;
}
var customSearchControl = new google.search.CustomSearchControl('cseId');
customSearchControl.setSearchStartingCallback(this, function(control, searcher, query) {
    searcher.setQueryAddition('more:' + refinement);
});
customSearchControl.setResultSetSize(7);
customSearchControl.draw('entries');
......
I've tried using the recency label to sort the results, but it doesn't work:
customSearchControl.setSearchStartingCallback(this, function(control, searcher, query) {
    //searcher.setQueryAddition('more:recent3');
    searcher.setQueryAddition('more:' + refinement + ', more:recent3');
});
And I also tried sorting by attributes, but that isn't working either:
var options = {};
options[google.search.Search.RESTRICT_EXTENDED_ARGS] = {'sort': 'date-sdate:d:s'}; // Tried other date formats, but it doesn't help
var customSearchControl = new google.search.CustomSearchControl('cseId', options);
Perhaps sorting by attributes does not work because we don't have the attributes declared in our web documents. Is there any other way to tell Google to sort the search results by date?

I came across the following:
http://code.google.com/intl/nl-NL/apis/customsearch/docs/js/cselement-reference.html
options[google.search.Search.RESTRICT_EXTENDED_ARGS] = {
    'lr': 'lang_it',
    'sort': 'date'
};
var customSearchControl = new google.search.CustomSearchControl(id, options);
Hope this will help if the problem is still there.

Related

Creating a MoreLikeThis query yields empty queries and terms

I'm trying to find Documents similar to a given one, using the Lucene.Net MoreLikeThis class. For this I separate my Documents into multiple Fields: Title and Content. But creating the actual query results in an empty query without any interesting terms.
My code looks like this:
var queries = new List<Query>();
foreach (var docField in docFields)
{
    var similarSearch = new MoreLikeThis(indexReader);
    similarSearch.SetFieldNames(new[] { docField.fieldName });
    similarSearch.Analyzer = new GermanAnalyzer(Version.LUCENE_30, new HashSet<string>(StopWords));
    similarSearch.MinDocFreq = 1;
    similarSearch.MinTermFreq = 1;
    similarSearch.MinWordLen = 1;
    similarSearch.Boost = true;
    similarSearch.BoostFactor = boostFactor;
    using (var reader = new StringReader(docField.Content))
    {
        var searchQuery = similarSearch.Like(reader);
        // for debugging purposes
        var queryString = searchQuery.ToString(); // empty
        // note: the reader has already been consumed by Like() at this point
        var terms = similarSearch.RetrieveInterestingTerms(reader); // also empty
        queries.Add(searchQuery);
    }
}
var booleanQuery = new BooleanQuery();
foreach (var moreLikeThisQuery in queries)
{
    booleanQuery.Add(moreLikeThisQuery, Occur.SHOULD);
}
var topDocs = indexSearcher.Search(booleanQuery, maxNumberOfResults); // and of course no results obtained
So the question is:
Why are there no terms / why is no query generated?
I hope the important things are visible; if not, please help me make my first question better :)
I got it to work.
The problem was that I was working on the wrong directory.
I have different solutions for creating the index and for creating the queries, and there was a mismatch in the index location.
So the general checklist would be:
Is your query-generating class fully initialized (MinDocFreq, MinTermFreq, MinWordLen, an Analyzer, the field names)?
Is the IndexReader you use correctly initialized, i.e. opened on the index directory you actually wrote to? (See the sketch below.)
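For illustration, here is a minimal sketch of that second checklist item: opening the IndexReader on the same directory the index was written to before handing it to MoreLikeThis. The index path and field name are hypothetical placeholders.
// Open the reader on the SAME directory the index was written to;
// a mismatch here silently yields empty MoreLikeThis queries.
var indexDirectory = FSDirectory.Open(new DirectoryInfo(@"C:\indexes\documents")); // hypothetical path
using (var indexReader = IndexReader.Open(indexDirectory, true)) // true = read-only
{
    var similarSearch = new MoreLikeThis(indexReader);
    similarSearch.SetFieldNames(new[] { "Content" }); // hypothetical field
    similarSearch.Analyzer = new GermanAnalyzer(Version.LUCENE_30);
    similarSearch.MinDocFreq = 1;
    similarSearch.MinTermFreq = 1;
    similarSearch.MinWordLen = 1;
    using (var reader = new StringReader("some document content"))
    {
        var query = similarSearch.Like(reader); // now yields terms if the index contains them
    }
}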

Why can I not use Continuation when using a proxy class to access MS CRM 2013?

So I have a standard service reference proxy class for MS CRM 2013 (i.e. right-click, add reference, etc...). I then ran into the limitation that CRM data calls return at most 50 results, and I wanted to get the full list of results. I found two methods; one looks more correct but doesn't seem to work. I was wondering why it doesn't and/or if there is something I'm doing incorrectly.
Basic setup and process
crmService = new CrmServiceReference.MyContext(new Uri(crmWebServicesUrl));
crmService.Credentials = System.Net.CredentialCache.DefaultCredentials;
var accountAnnotations = crmService.AccountSet.Where(a => a.AccountNumber == accountNumber).Select(a => a.Account_Annotation).FirstOrDefault();
Using Continuation (something I want to work, but looks like it doesn't)
while (accountAnnotations.Continuation != null)
{
    accountAnnotations.Load(crmService.Execute<Annotation>(accountAnnotations.Continuation.NextLinkUri));
}
Using that method, .Continuation is always null and accountAnnotations.Count is always 50 (even though there are more than 50 records).
After struggling with .Continuation for a while, I've come up with the following alternative method (but it seems "not good"):
var accountAnnotationData = accountAnnotations.ToList();
var accountAnnotationFinal = accountAnnotations.ToList();
var index = 1;
while (accountAnnotationData.Count == 50)
{
    accountAnnotationData = (from a in crmService.AnnotationSet
                             where a.ObjectId.Id == accountAnnotationData.First().ObjectId.Id
                             select a).Skip(50 * index).ToList();
    accountAnnotationFinal = accountAnnotationFinal.Union(accountAnnotationData).ToList();
    index++;
}
So the second method seems to work, but for any number of reasons it doesn't seem like the best approach. Is there a reason .Continuation is always null? Is there some setup step I'm missing, or some nicer way to do this?
The way to get the records from CRM is to use paging. Here is an example with a query expression, but you can also use FetchXML if you want; a FetchXML sketch follows the example below.
// Query using the paging cookie.
// Define the paging attributes.
// The number of records per page to retrieve.
int fetchCount = 3;
// Initialize the page number.
int pageNumber = 1;
// Initialize the number of records.
int recordCount = 0;
// Define the condition expression for retrieving records.
ConditionExpression pagecondition = new ConditionExpression();
pagecondition.AttributeName = "address1_stateorprovince";
pagecondition.Operator = ConditionOperator.Equal;
pagecondition.Values.Add("WA");
// Define the order expression to retrieve the records.
OrderExpression order = new OrderExpression();
order.AttributeName = "name";
order.OrderType = OrderType.Ascending;
// Create the query expression and add condition.
QueryExpression pagequery = new QueryExpression();
pagequery.EntityName = "account";
pagequery.Criteria.AddCondition(pagecondition);
pagequery.Orders.Add(order);
pagequery.ColumnSet.AddColumns("name", "address1_stateorprovince", "emailaddress1", "accountid");
// Assign the pageinfo properties to the query expression.
pagequery.PageInfo = new PagingInfo();
pagequery.PageInfo.Count = fetchCount;
pagequery.PageInfo.PageNumber = pageNumber;
// The current paging cookie. When retrieving the first page,
// pagingCookie should be null.
pagequery.PageInfo.PagingCookie = null;
Console.WriteLine("#\tAccount Name\t\t\tEmail Address");while (true)
{
// Retrieve the page.
EntityCollection results = _serviceProxy.RetrieveMultiple(pagequery);
if (results.Entities != null)
{
// Retrieve all records from the result set.
foreach (Account acct in results.Entities)
{
Console.WriteLine("{0}.\t{1}\t\t{2}",
++recordCount,
acct.EMailAddress1,
acct.Name);
}
}
// Check for more records, if it returns true.
if (results.MoreRecords)
{
// Increment the page number to retrieve the next page.
pagequery.PageInfo.PageNumber++;
// Set the paging cookie to the paging cookie returned from current results.
pagequery.PageInfo.PagingCookie = results.PagingCookie;
}
else
{
// If no more records are in the result nodes, exit the loop.
break;
}
}
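For completeness, here is a rough sketch of the FetchXML alternative mentioned above. It mirrors the query-expression sample; note that for very large result sets you would also carry results.PagingCookie (XML-escaped) into the paging-cookie attribute of the fetch element rather than relying on the page counter alone.
// Sketch: the same WA-accounts query as FetchXML, paged via the page attribute.
int page = 1;
while (true)
{
    string fetchXml = string.Format(@"
        <fetch mapping='logical' page='{0}' count='3'>
          <entity name='account'>
            <attribute name='name' />
            <attribute name='emailaddress1' />
            <filter>
              <condition attribute='address1_stateorprovince' operator='eq' value='WA' />
            </filter>
            <order attribute='name' />
          </entity>
        </fetch>", page);
    EntityCollection results = _serviceProxy.RetrieveMultiple(new FetchExpression(fetchXml));
    foreach (Entity acct in results.Entities)
    {
        Console.WriteLine(acct.GetAttributeValue<string>("name"));
    }
    if (!results.MoreRecords)
    {
        break;
    }
    page++;
}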

Calculate distance between two PFGeoPoints in an NSPredicate for a Parse Query

I have a special case when I want to do something like
let predicate = NSPredicate(format:
    "DISTANCE(\(UserLocation), photoLocation) <= visibleRadius AND DISTANCE(\(UserLocation), photoLocation) <= 10")
var query = PFQuery(className:"Photo", predicate:predicate)
Basically, I want to get all photos that were taken within 10 km of my current location, provided my current location is also within the photo's visible radius.
Also, photoLocation and visibleRadius are two columns in the database; I will supply UserLocation as a PFGeoPoint.
Is it possible to achieve this? I don't think I can call, for example, photoLocation.latitude inside the predicate to get a specific coordinate value. Can I?
I'd appreciate any help!
I found this in the parse.com docs:
let swOfSF = PFGeoPoint(latitude:37.708813, longitude:-122.526398)
let neOfSF = PFGeoPoint(latitude:37.822802, longitude:-122.373962)
var query = PFQuery(className:"PizzaPlaceObject")
query.whereKey("location", withinGeoBoxFromSouthwest:swOfSF, toNortheast:neOfSF)
var pizzaPlacesInSF = query.findObjects()
This code fetches all the objects in a rectangular area defined by the swOfSF & neOfSF objects, where swOfSF is the south-west corner and neOfSF is the north-east corner.
You can alter the code to get all the objects in a rectangular area with your object in the middle.
I would recommend that you don't use a radius, because it takes a lot of calculation. Instead, use a rectangular area (as in the code I gave you):
just calculate the max/min longitude & max/min latitude from your position and fetch all the objects that lie in between. You can read about how to find the min/max longitude & latitude here: Link
I managed to solve it using Parse Cloud Code; here is a quick walkthrough:
Parse.Cloud.define("latestPosts", function(request, response) {
    var limit = 20;
    var query = new Parse.Query("Post");
    var userLocation = request.params.userLocation;
    var searchScope = request.params.searchScope;
    var afterDate = request.params.afterDate;
    var senderUserName = request.params.senderUserName;
    query.withinKilometers("Location", userLocation, searchScope);
    query.greaterThan("createdAt", afterDate);
    query.notEqualTo("senderUserName", senderUserName);
    query.ascending("createdAt");
    query.find({
        success: function(results) {
            // keep only posts whose visible radius covers the user's location
            var finalResults = results.filter(function(el) {
                var visibleRadius = el.get("VisibleRadius");
                var postLocation = el.get("Location");
                return postLocation.kilometersTo(userLocation) <= visibleRadius;
            });
            if (finalResults.length > limit) {
                var slicedFinalResults = finalResults.slice(0, limit);
                response.success(slicedFinalResults);
            } else {
                response.success(finalResults);
            }
        },
        error: function() {
            response.error("no new post");
        }
    });
});
The code above illustrates a basic example of how to use Cloud Code, except that I have to make sure all the returned photos lie in the intersection of the user's search scope and the photo's visible circle. There are more techniques, such as Promises, but for my purpose the code above suffices.

Can't get additionalSearchFields to work

jsonStoreInit = function(pSuccess, pFailure) {
    collections = {};
    collections['objects'] = {};
    var options = {};
    options.localKeyGen = false;
    options.clear = false;
    options.username = app.username;
    options.password = app.password;
    options.additionalSearchFields = {key: 'string'};
    WL.JSONStore.init(collections, options)
        .then(pSuccess)
        .fail(pFailure);
};
putObject = function(pObject) {
    var keyValue = pObject.getKey();
    var object = {myObject: pObject.getKey()};
    var options = {};
    //options.additionalSearchFields = {key: keyValue};
    WL.JSONStore.get("objects")
        .add(object, options);
};
I'm on WL 6.0 FP 1
In the code sample above, jsonStoreInit is what I use to init my store, including the options.additionalSearchFields.
When I come to add the objects in the putObject function, it works fine with the additionalSearchFields commented out, but when I uncomment it to add the additional fields I get an error:
[wl.jsonstore] {"src":"store","err":21,"msg":"INVALID_ADD_INDEX_KEY","col":"objects","usr":"xxxx","doc":{},"res":{}}
When I look this error message up all I get is
21 INVALID_ADD_INDEX_KEY
Problem with additional search fields.
Which I had kinda figured ... can anyone provide any help on this?
I don't need you to fix my code, but if you could point me to a working example that would be excellent.
Many thanks, ownimage
The person that asked the question solved it, but I'm leaving this answer in case someone is wondering how to pass data that uses additionalSearchFields.
Example:
var data = {hello: 'world'};
WL.JSONStore.get('collection').add(data, {additionalSearchFields: {key: 'value'}})
The example assumes the collection was created with a search field for hello as string and an additional search field for key as string. It also assumes there's a collection initialized called collection.

Proper Way to Retrieve More than 128 Documents with RavenDB

I know variants of this question have been asked before (even by me), but I still don't understand a thing or two about this...
It was my understanding that one could retrieve more documents than the 128 default setting by doing this:
session.Advanced.MaxNumberOfRequestsPerSession = int.MaxValue;
And I've learned that a WHERE clause should be an expression tree instead of a Func, so that it's treated as IQueryable instead of IEnumerable. So I thought this should work:
public static List<T> GetObjectList<T>(Expression<Func<T, bool>> whereClause)
{
    using (IDocumentSession session = GetRavenSession())
    {
        return session.Query<T>().Where(whereClause).ToList();
    }
}
However, that only returns 128 documents. Why?
Note, here is the code that calls the above method:
RavenDataAccessComponent.GetObjectList<Ccm>(x => x.TimeStamp > lastReadTime);
If I add Take(n), then I can get as many documents as I like. For example, this returns 200 documents:
return session.Query<T>().Where(whereClause).Take(200).ToList();
Based on all of this, it would seem that the appropriate way to retrieve thousands of documents is to set MaxNumberOfRequestsPerSession and use Take() in the query. Is that right? If not, how should it be done?
For my app, I need to retrieve thousands of documents (each with very little data in them). We keep these documents in memory and use them as the data source for charts.
** EDIT **
I tried using int.MaxValue in my Take():
return session.Query<T>().Where(whereClause).Take(int.MaxValue).ToList();
And that returns 1024. Argh. How do I get more than 1024?
** EDIT 2 - Sample document showing data **
{
    "Header_ID": 3525880,
    "Sub_ID": "120403261139",
    "TimeStamp": "2012-04-05T15:14:13.9870000",
    "Equipment_ID": "PBG11A-CCM",
    "AverageAbsorber1": "284.451",
    "AverageAbsorber2": "108.442",
    "AverageAbsorber3": "886.523",
    "AverageAbsorber4": "176.773"
}
It is worth noting that since version 2.5, RavenDB has an "unbounded results API" to allow streaming. The example from the docs shows how to use this:
var query = session.Query<User>("Users/ByActive").Where(x => x.Active);
using (var enumerator = session.Advanced.Stream(query))
{
    while (enumerator.MoveNext())
    {
        User activeUser = enumerator.Current.Document;
    }
}
There is support for standard RavenDB queries and Lucene queries, and there is also async support; a rough sketch of the async variant follows the links below.
The documentation can be found here. Ayende's introductory blog article can be found here.
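For illustration, a minimal sketch of the async variant, assuming an async session opened via store.OpenAsyncSession() and the 2.5+ client API (a sketch, not verbatim from the docs):
using (var session = store.OpenAsyncSession())
{
    var query = session.Query<User>("Users/ByActive").Where(x => x.Active);
    // StreamAsync returns the client's IAsyncEnumerator, advanced via MoveNextAsync
    using (var enumerator = await session.Advanced.StreamAsync(query))
    {
        while (await enumerator.MoveNextAsync())
        {
            User activeUser = enumerator.Current.Document;
        }
    }
}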
The Take(n) function will only give you up to 1024 by default. However, you can change this default in Raven.Server.exe.config:
<add key="Raven/MaxPageSize" value="5000"/>
For more info, see: http://ravendb.net/docs/intro/safe-by-default
The Take(n) function will only give you up to 1024 results by default. However, you can use it in tandem with Skip(n) to get them all:
RavenQueryStatistics stats;
var points = new List<T>();
var nextGroupOfPoints = new List<T>();
const int ElementTakeCount = 1024;
int i = 0;
int skipResults = 0;
do
{
    nextGroupOfPoints = session.Query<T>().Statistics(out stats).Where(whereClause)
        .Skip(i * ElementTakeCount + skipResults).Take(ElementTakeCount).ToList();
    i++;
    skipResults += stats.SkippedResults;
    points = points.Concat(nextGroupOfPoints).ToList();
}
while (nextGroupOfPoints.Count == ElementTakeCount);
return points;
RavenDB Paging
The number of requests per session is a separate concept from the number of documents retrieved per call. Sessions are short-lived and are expected to have only a few calls issued over them.
If you are getting more than 10 of anything from the store (let alone the default 128) for human consumption, then something is wrong, or your problem requires different thinking than hauling a truckload of documents from the data store.
RavenDB indexing is quite sophisticated. Good article about indexing here and facets here.
If you need to perform data aggregation, create a map/reduce index that results in aggregated data, e.g.:
Index:
// map
from post in docs.Posts
select new { post.Author, Count = 1 }
// reduce
from result in results
group result by result.Author into g
select new
{
    Author = g.Key,
    Count = g.Sum(x => x.Count)
}
Query:
session.Query<AuthorPostStats>("Posts/ByUser/Count").Where(x => x.Author == author).ToList();
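If you prefer defining that index from code, a minimal sketch using AbstractIndexCreationTask might look like this (the Post class is an assumption; the class name follows the convention that maps to the "Posts/ByUser/Count" index above):
public class Posts_ByUser_Count : AbstractIndexCreationTask<Post, AuthorPostStats>
{
    public Posts_ByUser_Count() // creates the index named "Posts/ByUser/Count"
    {
        Map = posts => from post in posts
                       select new { post.Author, Count = 1 };

        Reduce = results => from result in results
                            group result by result.Author into g
                            select new
                            {
                                Author = g.Key,
                                Count = g.Sum(x => x.Count)
                            };
    }
}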
You can also use a predefined index with the Stream method. You may use a Where clause on indexed fields.
var query = session.Query<User, MyUserIndex>();
// or, with a Where clause on an indexed field:
var query = session.Query<User, MyUserIndex>().Where(x => !x.IsDeleted);
using (var enumerator = session.Advanced.Stream<User>(query))
{
    while (enumerator.MoveNext())
    {
        var user = enumerator.Current.Document;
        // do something
    }
}
Example index:
public class MyUserIndex : AbstractIndexCreationTask<User>
{
    public MyUserIndex()
    {
        this.Map = users =>
            from u in users
            select new
            {
                u.IsDeleted,
                u.Username,
            };
    }
}
Documentation: What are indexes?
Session : Querying : How to stream query results?
Important note: the Stream method will NOT track objects. If you change objects obtained from this method, SaveChanges() will not be aware of any change.
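To illustrate that note, here is a minimal sketch (the document id "users/1" is a hypothetical placeholder): when you do want a change persisted, load the document through the session instead of streaming it.
using (var session = store.OpenSession())
{
    // Streamed instances are NOT tracked: mutating them and calling
    // SaveChanges() persists nothing. Load through the session instead:
    var user = session.Load<User>("users/1"); // hypothetical id
    user.Username = "newName";
    session.SaveChanges(); // this tracked change is persisted
}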
Other note: you may get the following exception if you do not specify the index to use.
InvalidOperationException: StreamQuery does not support querying dynamic indexes. It is designed to be used with large data-sets and is unlikely to return all data-set after 15 sec of indexing, like Query() does.