xwiki/velocity recent changes - velocity

By default, the recent-pages code I've found doesn't do what I want it to do.
How can I get
a MediaWiki-like version of recent changes
-and/or-
the last 10 changed pages,
preferably using Velocity?
Many thanks

The code is probably based on a database query to get the latest pages, which means you can limit the number of results easily, either via a $xwiki.search method parameter (see http://tinyurl.com/7r8od94 for an example) or, better, using setLimit if you are using the new query service (see http://tinyurl.com/7y99smg).
If you can point me to the exact code you are talking about I can probably give you more details on what to modify.

Related

How to correctly search a RETS (Real Estate Transaction Standard) server?

I am trying to interact with a RETS (Real Estate Transaction Standard) server to find all listings where matrix_unique_id field is greater than or equal to 0.
After logging in, I tried the following URI
Search.ashx?SearchType=Property&Class=Listing&Limit=1000&Query=(matrix_unique_id=0+)&StandardNames=0
The above call returns
<RETS ReplyCode="20201" ReplyText="No Records Found."/>
But then I supplied a valid Matrix_Unique_Id value like this
Search.ashx?SearchType=Property&Class=Listing&Limit=1000&Query=(matrix_unique_id=59075770+)&StandardNames=0
Now that returns something, but not what I am expecting. The returned value is as follows:
Here is the documentation for RETS 1.7.2 and a PDF
Additionally, here is an example of how to search RETS server for a different server but both adhere to the same specification.
https://www.flexmls.com/developers/rets/tutorials/example-rets-session/
Additionally, I used RETS Connector to query the listings and was able to download them with no issues, which indicates that my account is working and has permission to search.
Question: How can I correctly search up all properties where the field Matrix_Unique_Id is 0+?
To get the full result set, try the following query logic:
(ModificationTimestamp=2000-01-01T00:00:00+)
This will return all the listings from the year 2000 onwards. If you need older listings, use 1990 or earlier in the query.
Note: your example query (matrix_unique_id=0+) is not working because the value may not match the field's expected pattern; for example, the field may only accept an 8-digit number as input.
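As a sketch of the idea, the search URL above can be assembled programmatically. The parameter names (SearchType, Class, Query, Limit, StandardNames) come from the question's own examples; the helper function itself is illustrative, not part of any RETS client library:

```python
from urllib.parse import urlencode

def build_rets_search_url(base, search_type, rets_class, query, limit=1000):
    """Assemble a RETS Search transaction URL from a DMQL query string.
    (Helper name and structure are illustrative, not from a RETS library.)"""
    params = {
        "SearchType": search_type,
        "Class": rets_class,
        "Limit": limit,
        "Query": query,          # DMQL; a trailing '+' means an open-ended range
        "StandardNames": 0,      # 0 = use the server's system field names
    }
    return base + "?" + urlencode(params)

# All listings modified since 2000-01-01:
url = build_rets_search_url(
    "Search.ashx", "Property", "Listing",
    "(ModificationTimestamp=2000-01-01T00:00:00+)")
```

Note that urlencode percent-encodes the parentheses and colons in the DMQL query, which is what a real HTTP client would send anyway.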

How can I fetch (via GET) all JIRA issues? Do I go to the Search node?

It looks like /api/2/project easily returns all projects in a JIRA instance in JSON format.
I'd like to do the same for issues, but this does not appear to exist.
Is /api/2/search the standard way to do a mass dump like this? And what is the best way to regularly sync this to a database? Would I do something like search (update date > [last entry in database]) and then walk through the pagination? Surely I can't be the first person attempting this, though I see no similar guide anywhere online (I checked Jira's own docs; there's no real mass-issue-export guide).
EDIT: Okay, it looks like search really is the "issue dump", not the issue node, which, contrary to their documentation, does not default to a collection but is really for creating issues or listing one at a time. I'll probably go the route of updated > [whatever last date is in the DB].
Unless you have very few issues, you can't fetch all of them at once.
What you can do is to execute the search step by step.
For example, let's say you have 1324 JIRA issues. In order to retrieve all of them, you have to execute a search similar to this several times:
/rest/api/2/search?&maxResults=100&startAt=0
This will retrieve the first 100 JIRA issues, starting from 0.
How to get the others?
When you execute the search, a field named total is returned. That field is the total number of JIRA issues in your system (1324 issues).
The next query will be:
/rest/api/2/search?&maxResults=100&startAt=100
Repeat this operation, incrementing the value of startAt by 100 every time, until all the issues are returned.
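The loop described above can be sketched as follows. The HTTP call is replaced by a stand-in function with fake data so the pagination logic is self-contained; a real implementation would issue the GET request and parse the JSON response, which carries the total field and the issues list mentioned above:

```python
def search_issues(start_at, max_results):
    """Stand-in for GET /rest/api/2/search?maxResults=...&startAt=...
    A real implementation would make an HTTP request and parse the JSON,
    which contains a 'total' field and an 'issues' list."""
    fake_issues = [{"key": "DEMO-%d" % i} for i in range(1324)]  # simulated instance
    return {"total": len(fake_issues),
            "issues": fake_issues[start_at:start_at + max_results]}

def fetch_all_issues(page_size=100):
    """Page through the search endpoint until 'total' issues are collected."""
    issues, start_at = [], 0
    while True:
        page = search_issues(start_at, page_size)
        issues.extend(page["issues"])
        start_at += page_size          # advance by one page per request
        if start_at >= page["total"]:  # stop once we've covered 'total'
            break
    return issues
```

In this simulated 1324-issue instance, fetch_all_issues() makes 14 requests of 100 and returns all 1324 issues; the last page simply comes back short.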

jsFiddle API to get row count of user's fiddles

So, I had a nice thing going on a jsFiddle where I listed all my fiddles on one page:
jsfiddle.net/show
However, they have been changing things slowly this year, and I've already had to make some changes to keep it running. The newest change is rather annoying. Of course, I like to see ALL my fiddles at once; it makes it easier to just hit ctrl+f and find what I might be looking for, but they've made that hard to do now. I used to be able to just set the limit to 99999 and see everything, but now it appears I can't go past how many I actually have (186 atm).
I tried a start-to-limit solution, but when it got to the last 10|50 (I tried start={x}&limit=10 and start={x}&limit=50), it would die, because the last pull had to be an exact count. For example, I have 186, and with the by-10s solution it would die at start=180&limit=10.
I've searched the API docs but can't seem to find a row count or anything of that manner. Anyone know of a good, feasible solution that won't have me overloading their servers doing a constant single-row check?
I'm having the same problem you are. I checked the docs (Displaying user's fiddles - Result) and found out that if you include the callback=Api parameter, an additional overallResultSetCount field is included in the JSON response. I checked your fiddles, and you currently have a total of 229 public fiddles.
The solution I can think of requires only two requests. The first request's parameters don't matter as long as you include callback=Api. Then you send the second request, in which your limit will be the overallResultSetCount value.
Edit:
It's not in the documentation; however, I think the result set is limited to 200 entries only (hence your start/limit range of 0 - 199). I tried to query beyond the 200 range, but I got an Error 500. I couldn't find another user whose fiddle count is more than 200 (most of the usernames I tested have fewer than 100 fiddles, like zalun, oskar, and rpflorence).
Based on this new observation, you can update your script like this:
1. I have tested that if the total fiddle count is less than 200, adding the start=0&limit=199 parameters will return all the fiddles. Hence, you can add those parameters on your initial call.
2. Check whether your total result set is more than 200. If yes, update your parameters to reflect the range of the remaining result set (in this case, start=199&limit=229) and add the new result set to your old result set. Otherwise, show/print the result set you got from your first query.
3. Repeat steps 1 and 2 if your total count reaches 400, 600, etc. (any multiple of 200).
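The windowing in the steps above can be sketched as a small helper. This assumes, as the examples above suggest, that start/limit behave like range bounds and that each response is capped at 200 entries (an observed limit, not a documented one):

```python
def request_ranges(total_count, window=200):
    """Split `total_count` fiddles into successive (start, limit) pairs,
    assuming start/limit act as range bounds and each response is capped
    at `window` entries (an observed, undocumented API limit)."""
    ranges, start = [], 0
    while start < total_count:
        # Each request covers [start, start + window), clipped to the total.
        ranges.append((start, min(start + window, total_count)))
        start += window
    return ranges
```

For the 229-fiddle case above, request_ranges(229) yields two requests covering 0-200 and 200-229; a user under the cap needs only one.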

Getting the exact edited data from SQL Server

I have two Tables:
Articles(artID, artContents, artPublishDate, artCategoryID, publisherID).
ArticleUpdated(upArtID, upArtContents, upArtEditedData, upArtPublishDate, upArtCategory, upArtOriginalArticleID, upPublisherID)
A user logs in to the application and updates an article's
contents (the artContents column). I want to know:
Which changes did the user make to the article's contents?
I want to store both versions of the article, the original version and the edited version!
What should I do to accomplish these two tasks?
Any necessary changes to the tables?
The query for getting the exact edited data of (artContents).
(By "exact edited data" I mean: there may be 5000 characters in the column, and the user may edit 200 characters in the middle or somewhere else among the column's characters; I want exactly those edited characters, before the edit and after the edit.)
Note: I am using ASP.NET with C# for development.
You are not going to be able to extract the exact edits using SQL. You need an algorithm such as the Unix diff on files (which works at the line level). At the character level, the algorithm would be some variation of Levenshtein distance. If diff meets your needs, you could download it, write a stored procedure to call it, and then use it in the database. This would be rather expensive.
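To make the diff idea concrete: the question is ASP.NET/C#, but the approach can be sketched with Python's standard-library difflib at the word level (the same sequence-matching family of algorithms is available in most languages). The non-"equal" opcodes are exactly the before/after fragments the question asks for:

```python
import difflib

# Toy stand-ins for the stored and edited artContents values.
before = "The quick brown fox jumps over the lazy dog".split()
after  = "The quick red fox leaps over the lazy dog".split()

# SequenceMatcher emits equal/replace/insert/delete opcodes over the two
# token sequences; the non-equal ones are the edits, with both versions.
sm = difflib.SequenceMatcher(None, before, after)
edits = [(op, before[i1:i2], after[j1:j2])
         for op, i1, i2, j1, j2 in sm.get_opcodes()
         if op != "equal"]
# edits -> [('replace', ['brown'], ['red']), ('replace', ['jumps'], ['leaps'])]
```

Running the diff in application code like this (and storing its output alongside the two versions) is usually far cheaper than trying to compute it inside SQL Server.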
The part of your question about maintaining the different versions is much easier. I would add two columns, EffDate and EndDate, to each record. You can get the most recent version by looking for EndDate IS NULL, and you can find the version active at any given time. MERGE is generally useful for maintaining such a table.
Basically, this type of requirement needs custom logging.
The example you provided, i.e. "there may be 5000 characters in the column, the user may edit 200 characters in the middle or somewhere else among the column's characters; I want exactly those edited characters, before and after the edit",
can include the case where the user updates particular words in different places in the text.
You can use http://nlog-project.org/ for logging; it's a fast and robust tool that we normally use for .NET logging.
Also you can take a look
http://www.codeproject.com/Articles/38756/Two-Simple-Approaches-to-WinForms-Dirty-Tracking
Asp.net Event for change tracking of entities
What would be the best way to implement change tracking on an object
The URLs above should clarify how to do it.
You would obviously need to track down and store every change.

Cost comparison using Solr

I plan to build something like pricegrabber.com/google product search.
Assume I already have the data available in a huge table. I plan to submit all of it to Solr, which solves the search problem. However, I am not sure how to do the comparison. I could do a GROUP BY query (on UPC/SKU) on the DB for the products returned by Solr; however, I don't want to do that. I want to somehow get product-comparison data returned to me along with the search results from Solr itself.
How do you think should my schema be? Do you think this use-case can be solved all by Solr/Sphinx?
You need 'result grouping' or 'field collapsing' support to properly handle it.
In Solr, the feature is not available in any released version and is still under development. If you are willing to use an unreleased version of Solr, you can get the details here.
Sphinx supports result grouping and I had used it a long time ago in a similar project. You can get more details here.
An alternative strategy could be to preprocess your data so that only a single record per UPC/SKU gets inserted in the index. Each record can have a separate field containing the ids of all the items with the same UPC/SKU.
Doing a database GROUP BY on the products returned by Solr may not be enough. For example, if products A and B have the same UPC and a certain query matches A but not B, then you will not get both A and B in your result set.
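The preprocessing strategy above amounts to collapsing raw product rows into one index document per UPC before submitting them to Solr. A minimal sketch, with hypothetical field names (upc, item_ids, min_price, etc.; they are not a required Solr schema):

```python
from collections import defaultdict

def collapse_by_upc(records):
    """Merge raw product rows into one document per UPC, keeping the ids
    (and a price range) of every item sharing that UPC.
    Field names are hypothetical, not a required Solr schema."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec["upc"]].append(rec)
    return [{
        "upc": upc,
        "name": recs[0]["name"],                 # representative title
        "item_ids": [r["id"] for r in recs],     # would be a multi-valued field
        "min_price": min(r["price"] for r in recs),
        "max_price": max(r["price"] for r in recs),
    } for upc, recs in groups.items()]
```

Because each UPC becomes a single document, any query that matches the document returns the full comparison set (all item ids and the price range) in one hit, avoiding the partial-match problem described above.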