WCF Data Service REST Client - duplicated entries

We are using the newest WCF DLLs (5.6.1) to implement an OData client. We send a request to a service and are supposed to get back multiple, distinct entries. Unfortunately, something goes wrong: the response contains the correct number of entries, but they are duplicates (the last entry is repeated once for every entry in the result). When the URL that is sent to the service is executed in a browser, the entries in the response are correct: every entry is different. The same happens when the LINQ query selects only some of the columns via new { x.column1, x.column2 }: every entry is different.
This is part of the response from browser:
<entry>
<id>...</id>
<title type="text">...</title>
<updated>2014-05-22T08:44:46Z</updated>
<category term="GET_AVG_CONS.Y_AVG_CONS_YEAR" scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme"/>
<link href="Y_AVG_CONS_YEAR_COLLECTION(Customer='1000000167',CompCode='DE0J')" rel="edit" title="Y_AVG_CONS_YEAR"/>
<content type="application/xml">
<m:properties>
<d:BpCat/>
<d:Cusvalsg/>
<d:Salesorg/>
<d:Flag/>
<d:Calmonth>201309</d:Calmonth>
<d:BpCatX/>
<d:CusvalsgX/>
<d:SalesorgX/>
<d:FlagX/>
<d:CalmonthX/>
<d:Customer>1000000167</d:Customer>
<d:ProdHier/>
<d:CompCode>DE0J</d:CompCode>
<d:HierX/>
<d:OrdMeth/>
<d:Nptotcap>2200.000</d:Nptotcap>
<d:Type/>
<d:Code>000</d:Code>
<d:Message/>
</m:properties>
</content>
</entry>
<entry>
<id>...</id>
<title type="text">...</title>
<updated>2014-05-22T08:44:46Z</updated>
<category term="GET_AVG_CONS.Y_AVG_CONS_YEAR" scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme"/>
<link href="Y_AVG_CONS_YEAR_COLLECTION(Customer='1000000167',CompCode='DE0J')" rel="edit" title="Y_AVG_CONS_YEAR"/>
<content type="application/xml">
<m:properties>
<d:BpCat/>
<d:Cusvalsg/>
<d:Salesorg/>
<d:Flag/>
<d:Calmonth>201310</d:Calmonth>
<d:BpCatX/>
<d:CusvalsgX/>
<d:SalesorgX/>
<d:FlagX/>
<d:CalmonthX/>
<d:Customer>1000000167</d:Customer>
<d:ProdHier/>
<d:CompCode>DE0J</d:CompCode>
<d:HierX/>
<d:OrdMeth/>
<d:Nptotcap>220.000</d:Nptotcap>
<d:Type/>
<d:Code>000</d:Code>
<d:Message/>
</m:properties>
</content>
</entry>
This is part of the response from the LINQ query selecting all columns:
And this is the response when I select only some of the columns in the LINQ query:
Does anyone know what might be the problem?
@Zoe
The request sent by the browser, as well as the one sent from the application that results in duplicated entries, is the same:
IPaddress:port/something/GET_AVG_CONS/Y_AVG_CONS_COLLECTION()?$filter=Customer eq '1000000167'.
The request from application, which results in correct entries is:
IPaddress:port/something/GET_AVG_CONS/Y_AVG_CONS_YEAR_COLLECTION()?$filter=Customer eq '1000000167'&$select=Calmonth,Nptotcap.
@christiandev
I don't understand your question. I'm using a LINQ query. Before the query is executed, there are no entries in the collection. After the execution, they are already duplicated.

Related

LabVIEW Parsing XML String without using tools

I am creating an information displaying mini-app for a device. The response I receive from the device when I send an HTTP Get request is literally as follows:
<?xml version="1.0" encoding="iso-8859-2"?>
<root xmlns="http://www.papouch.com/xml/th2e/act">
<sns id="1" type="1" status="0" unit="0" val="25.0" w-min="" w-max="" e-min-val=" -0.3" e-max-val=" 124.0" e-min-dte="01/01/2014 13:16:44" e-max-dte="05/14/2014 10:00:43" /><sns id="2" type="2" status="0" unit="3" val="56.4" w-min="" w-max="" e-min-val=" 0.1" e-max-val=" 100.0" e-min-dte="01/27/2014 08:39:14" e-max-dte="03/04/2014 11:02:40" /><sns id="3" type="3" status="0" unit="0" val="15.7" w-min="" w-max="" e-min-val=" -21.3" e-max-val=" 85.9" e-min-dte="01/27/2014 12:21:28" e-max-dte="03/04/2014 11:29:32" /><status frm="1" location="NONAME" time="01/02/2014 7:12:00" typesens="3" /></root>
There are 3 sns elements with incrementing ids, I need to read the val attribute of the sns element with the id 1.
I tried implementing the way suggested here: Get specific XML element attributes in Labview, and shown below is my implementation, but it does not work. I tested the XPath on http://xpather.com/ and it fetches the value I need just fine.
The XPath I am using is: //root/sns[@id="1"]/@val
The result I get when I run it is just nothing: no parsing errors, no other errors, everything seems to be okay, but the String indicator is always empty. String 2 displays the HTTP response fine.
I am using (and have to use) LabVIEW 2011 SP1.
The reason the result is empty is the wrong input wired into Get Node Text Content.
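As a side note, the device's XML declares a default namespace (xmlns="http://www.papouch.com/xml/th2e/act"), and many XPath engines only match namespaced elements when that namespace is mapped explicitly. A quick way to sanity-check the lookup outside LabVIEW is a short Python sketch using the standard library (the XML below is a trimmed copy of the response shown in the question):

```python
import xml.etree.ElementTree as ET

# Trimmed version of the device response from the question.
xml = '''<?xml version="1.0"?>
<root xmlns="http://www.papouch.com/xml/th2e/act">
  <sns id="1" type="1" status="0" unit="0" val="25.0" />
  <sns id="2" type="2" status="0" unit="3" val="56.4" />
  <sns id="3" type="3" status="0" unit="0" val="15.7" />
</root>'''

# The default namespace must be mapped to a prefix for the search.
ns = {"p": "http://www.papouch.com/xml/th2e/act"}

root = ET.fromstring(xml)
sns1 = root.find('p:sns[@id="1"]', ns)  # the <sns id="1" ...> element
print(sns1.get("val"))  # prints 25.0
```

If the same lookup is done without the namespace mapping, find() returns nothing, which is the same symptom the question describes.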

WCF returning nil value and actual value in Data Contract

I have a WCF service that calls another WCF service to get some information from one of our systems, and it appears that the values being returned contain some nil values. However, on looking at the XML that was returned, the returned values contain two entries for the same DataMember, one with a nil value and one with the actual value I was expecting, e.g.
I see something similar to the following in the XML returned, where the DataMembers have nil values:
<b:AccountNumber i:nil="true" />
<b:Created>0001-01-01T00:00:00</b:Created>
<b:CreatedBy i:nil="true" />
<b:EmailAddress i:nil="true" />
<b:GivenNames i:nil="true" />
and then in the same document but further down, I see the following where the same Data Members have the values I expect:
<b:Id>16996172</b:Id>
<b:Created>2007-07-16T16:32:48.789755</b:Created>
<b:CreatedBy>SYSTEM</b:CreatedBy>
<b:RowStatus>None</b:RowStatus>
<b:AccountNumber>1234567</b:AccountNumber>
<b:EmailAddress>email@test.com.au</b:EmailAddress>
<b:GivenNames>TEST NAME</b:GivenNames>
Not all of the DataMembers being returned are duplicated like this; it seems that a few values are returned as nil, and then all of the correct values are returned.
Has anyone seen something like this before or could hazard a guess as to what could be causing it?
It turned out that the problem was caused by the WSDL and Data Contracts not matching the web services themselves.
Running svcutil.exe against the running web services, rather than against the WSDL files provided, fixed the problem.

How to get more info within only one geosearch call via Wikipedia API?

I am using an API call similar to http://en.wikipedia.org/w/api.php?action=query&list=geosearch&gsradius=10000&gscoord=41.426140|26.099319.
It returns something like this:
<?xml version="1.0"?>
<api>
<query>
<geosearch>
<gs pageid="27460829" ns="0" title="Kostilkovo" lat="41.416666666667" lon="26.05" dist="4245.1" primary="" />
<gs pageid="27460781" ns="0" title="Belopolyane" lat="41.45" lon="26.15" dist="4988.7" primary="" />
<gs pageid="27460862" ns="0" title="Siv Kladenets" lat="41.416666666667" lon="26.166666666667" dist="5713.5" primary="" />
<gs pageid="13811116" ns="0" title="Svirachi" lat="41.483333333333" lon="26.116666666667" dist="6521.9" primary="" />
<gs pageid="27460810" ns="0" title="Gorno Lukovo" lat="41.366666666667" lon="26.1" dist="6613.4" primary="" />
<gs pageid="27460799" ns="0" title="Dolno Lukovo" lat="41.366666666667" lon="26.083333333333" dist="6746.2" primary="" />
<gs pageid="27460827" ns="0" title="Kondovo" lat="41.433333333333" lon="26.016666666667" dist="6937" primary="" />
<gs pageid="27460848" ns="0" title="Plevun" lat="41.45" lon="26.016666666667" dist="7383.1" primary="" />
<gs pageid="24179704" ns="0" title="Villa Armira" lat="41.499069444444" lon="26.106263888889" dist="8130" primary="" />
<gs pageid="27460871" ns="0" title="Zhelezari" lat="41.413333333333" lon="25.998333333333" dist="8540.1" primary="" />
</geosearch>
</query>
</api>
But since I am actually trying to get some pictures of those pages, subsequent calls are needed, like:
to get some page images
http://en.wikipedia.org/w/api.php?action=query&prop=images&pageids=13843906
then, to get image info
http://en.wikipedia.org/w/api.php?action=query&titles=File:Alexandru_Ioan_Cuza_Dealul_Patriarhiei.jpg&prop=imageinfo&iiprop=url
Well, even if this gets me what I ultimately need, it is not efficient at all.
I would like to know if there are some parameters for these calls, or maybe some completely different call(s), that would bring all this info in at most 2 steps/calls. It would be great, though, if it could be only one.
Wow, I had no idea that such a feature exists nowadays! But to answer your question, since it's a list query, you can probably use it as a generator.
Let's try it:
Original geosearch query: http://en.wikipedia.org/w/api.php?action=query&list=geosearch&gsradius=10000&gscoord=41.426140|26.099319
Generator query to get images on matching pages: http://en.wikipedia.org/w/api.php?action=query&prop=images&imlimit=max&generator=geosearch&ggsradius=10000&ggscoord=41.426140|26.099319
The prop=images query can also be used as a generator, so you can also do this:
Get URLs for all images on a list of pages: http://en.wikipedia.org/w/api.php?action=query&prop=imageinfo&iiprop=url&generator=images&gimlimit=max&pageids=13811116|24179704|27460781|27460799|27460810|27460827|27460829|27460848|27460862|27460871
Alas, AFAIK you can't nest generators, so you can't do both steps in one query. You can either:
get the list of images in one query, and then use another query to get the URLs, or
start with the basic geosearch query to get the page IDs, and then get the images and their URLs in another query.
Alas, it turns out that both of these options fail to give you some information that you may want. If you use list=geosearch as a generator, you don't get the coordinate information that you may need if you e.g. wish to display the results on a map. On the other hand, using prop=images as a generator makes you miss out on something even more important: the knowledge of which images are used on which pages!
Thus, unfortunately, it seems that, if your goal is to place images on a map, you'll probably have to do it with three separate queries. At least you can still query multiple pages / images in one request, so you shouldn't need more than three (until you hit the query limits and need to use continuations, that is).
(Also, doing it in three steps lets you apply some filtering to the images before the third step. For example, most of the pages returned by your example query only have the same three images — Flag of Bulgaria.svg, Ivaylovgrad Reservoir.jpg and Oblast Khaskovo.png — all of which are used via templates, and none of which really look like good choices to represent the specific location.)
Ps. If you're just interested in finding images near a particular location, even if they're not used on any specific Wikipedia article, you might want to try using geosearch directly on Wikimedia Commons. It doesn't seem to return any results for your Bulgarian example coordinates, but it works just fine in a more crowded location.
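To make the three-step flow concrete, here is a rough Python sketch that just builds the three query URLs (the parameter names come from the URLs above; the actual HTTP requests, filtering, and query continuations are left out, and the page IDs and file title are taken from the example results in the question and answer):

```python
from urllib.parse import urlencode

API = "https://en.wikipedia.org/w/api.php"

def api_url(**params):
    """Build a MediaWiki action=query URL from keyword parameters."""
    return API + "?" + urlencode({"action": "query", "format": "json", **params})

# Step 1: geosearch for nearby pages (keeps the coordinate info).
step1 = api_url(list="geosearch", gsradius=10000,
                gscoord="41.426140|26.099319")

# Step 2: images used on the pages found in step 1.
step2 = api_url(prop="images", imlimit="max",
                pageids="27460829|27460781|13811116")

# Step 3: URLs for the (filtered) image files from step 2.
step3 = api_url(prop="imageinfo", iiprop="url",
                titles="File:Flag of Bulgaria.svg")
```

Each step can batch many page IDs or titles into one request, which is why three requests are usually enough until the query limits force continuations.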
Here is an alternative to build on the previous answer. If you start with this query as a partial answer:
https://en.wikipedia.org/w/api.php?action=query&prop=images&imlimit=max&generator=geosearch&ggsradius=10000&ggscoord=41.426140|26.099319
Then you can build on this to get the information in a single query. The pageimages property can work with the generator. You cannot nest generators but you can chain properties. A query can use pageimages to get the page's main image url for each of the geosearch results. It looks like this:
https://en.wikipedia.org/w/api.php?action=query&prop=images|pageimages&pilimit=max&piprop=thumbnail&iwurl=&imlimit=max&generator=geosearch&ggsradius=10000&ggscoord=41.426140|26.099319
This query returns the image "File" names (images property) and a single URL for the main image (pageimages property). The main image of the page is all I need. You might be able to extrapolate the "file" URLs by pattern-matching against the URLs the query does output, but I cannot recommend such a hack.
The images property has a parameter, iwurl, that is supposed to return URLs for interwiki links, and I see the "file" links as interwiki links. This parameter is not working, though, and images does not return a URL. Playing in the sandbox might lead you to a better answer.
Intuitively it seems like you should be able to chain the images and imageinfo properties together. Doing so does not give the expected results.
If a single url for the main image of the page is not enough I can encourage you to play in the API sandbox to try and get what you need with some combination of properties. I am using the geosearch generator and get the page image, text description, and lat/long coordinates so that I can get the address. Good luck!
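Since generators cannot be nested but properties can be chained, the combined query above can be sketched as a single parameter dictionary (a rough Python sketch; the parameter names are taken from the URL in this answer, and note the "g" prefix on the generator's own parameters):

```python
from urllib.parse import urlencode

API = "https://en.wikipedia.org/w/api.php"

params = {
    "action": "query",
    "format": "json",
    # Two chained properties over one generator: generators cannot be
    # nested, but prop values can be combined with "|".
    "prop": "images|pageimages",
    "imlimit": "max",
    "pilimit": "max",
    "piprop": "thumbnail",
    # Parameters addressed to the generator take a "g" prefix.
    "generator": "geosearch",
    "ggsradius": 10000,
    "ggscoord": "41.426140|26.099319",
}
url = API + "?" + urlencode(params)
```

One request then yields, per matching page, the list of image file names plus a thumbnail URL for the page's main image.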

REST wrapping a single resource in a collection

I have a small dilemma.
If you have the following URI endpoints:
/item
/item/{id}
If I make a GET request to /item I expect something like this:
<Items>
<Item>...</Item>
<Item>...</Item>
...
</Items>
If I make a GET request to /item/{id} I expect something like this:
<Item>
...
</Item>
Some of my fellow team members argue we should design the API so when someone does a GET for /item/{id} it should be returned as a collection of a single element. Like this:
<Items>
<Item>...</Item>
</Items>
This seems wrong to me. Does it seem wrong to you too? Please explain why, so I might convince either myself to go with the always wrapped version of the resource or my fellow devs to go with the non-wrapped single resource.
Most importantly, there is no right or wrong answer to this question.
However, here is what I think.
If you want to return a single item, I would tend to do this:
GET /Item/{Id}
=>
<Item>
...
</Item>
If the {Id} does not exist then the server should return a 404.
If I want to return a collection of items, I would do
GET /Items
=>
<Items>
<Item>...</Item>
<Item>...</Item>
</Items>
If there are no items, then it should return a 200 with an empty <Items/> element.
If it really makes it easier for the client to deal with a collection that has just one element, then you could do something like this.
GET /Items?Id={Id}
=>
<Items>
<Item> ... </Item>
</Items>
The difference here is that if the {Id} did not exist then I would tend to return 200 not a 404.
Seems counterintuitive to me. You are potentially saving code effort on the client side by having one way of reading data from your two GET methods. This is of course countered by having extra code to wrap your single GET method in a collection.
If you want real world examples,
twitter returns an individual
representation of a resource not
wrapped in a collection
basecamp, an
early proponent of REST based API,
also follows this model
EDIT: Our API uses this HTTP status code structure
I think your colleagues are right if you think about the consuming side of your REST service, which can then handle every response as a collection. And there's one more thing: if the {id} does not exist, what does your service return? Nothing? Then the consumer has to check for a null result or an error response, a single element, or a collection. In my experience, getting a collection in every case (which may be empty) is the most convenient way to be served by a REST service.
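Either way, the client-side cost of supporting both shapes is small. A minimal sketch of a consumer that normalizes both payload shapes into a list, using the element names from the examples above (the helper name is my own):

```python
import xml.etree.ElementTree as ET

def parse_items(body: str) -> list:
    """Return the <Item> elements whether the payload is a wrapped
    collection (<Items>...</Items>) or a bare single <Item>."""
    root = ET.fromstring(body)
    if root.tag == "Items":
        return list(root.findall("Item"))  # zero or more items
    if root.tag == "Item":
        return [root]                      # wrap the single item
    return []

single = "<Item>widget</Item>"
wrapped = "<Items><Item>a</Item><Item>b</Item></Items>"
print(len(parse_items(single)), len(parse_items(wrapped)))  # prints 1 2
```

With a normalizer like this, the "always wrap in a collection" argument loses most of its force: the client reads both shapes through one code path anyway.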

list=alllinks confusion

I'm doing a research project for the summer and I've got to get some data from Wikipedia, store it and then do some analysis on it. I'm using the Wikipedia API to gather the data and I've got that down pretty well.
My question is in regards to the list=alllinks option in the API doc here
After reading the description, both there and in the API itself (it's down a bit and I can't link directly to the section), I think I understand what it's supposed to return. However, when I ran a query it gave me back something I didn't expect.
Here's the query I ran:
http://en.wikipedia.org/w/api.php?action=query&prop=revisions&titles=google&rvprop=ids|timestamp|user|comment|content&rvlimit=1&list=alllinks&alunique&allimit=40&format=xml
Which in essence says: Get the last revision of the Google page, include the id, timestamp, user, comment and content of each revision, and return it in XML format.
The alllinks option (I thought) should give me back a list of Wikipedia pages which point to the Google page (in this case, the first 40 unique ones).
I'm not sure what the policy is on swears, but this is the result I got back exactly:
<?xml version="1.0"?>
<api>
<query><normalized>
<n from="google" to="Google" />
</normalized>
<pages>
<page pageid="1092923" ns="0" title="Google">
<revisions>
<rev revid="366826294" parentid="366673948" user="Citation bot" timestamp="2010-06-08T17:18:31Z" comment="Citations: [161]Tweaked: url. [[User:Mono|Mono]]" xml:space="preserve">
<!-- The page content, I've replaced this cos its not of interest -->
</rev>
</revisions>
</page>
</pages>
<alllinks>
<!-- offensive content removed -->
</alllinks>
</query>
<query-continue>
<revisions rvstartid="366673948" />
<alllinks alfrom="!2009" />
</query-continue>
</api>
The <alllinks> part is just a load of random gobbledy-gook and offensive comments. Not nearly what I thought I'd get. I've done a fair bit of searching but I can't seem to find a direct answer to my question.
What should the list=alllinks option return?
Why am I getting this crap in there?
You don't want a list; a list is something that iterates over all pages. In your case you simply "enumerate all links that point to a given namespace".
You want a property associated with the Google page, so you need prop=links instead of the alllinks crap.
So your query becomes:
http://en.wikipedia.org/w/api.php?action=query&prop=revisions|links&titles=google&rvprop=ids|timestamp|user|comment|content&rvlimit=1&format=xml
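For completeness, the corrected request can be built the same way as any other action=query call; a small Python sketch with the exact parameters from the URL above:

```python
from urllib.parse import urlencode

API = "https://en.wikipedia.org/w/api.php"

params = {
    "action": "query",
    # prop=links attaches link information to the queried page itself,
    # unlike list=alllinks, which enumerates links wiki-wide.
    "prop": "revisions|links",
    "titles": "google",
    "rvprop": "ids|timestamp|user|comment|content",
    "rvlimit": 1,
    "format": "xml",
}
url = API + "?" + urlencode(params)
```

The pipe-separated values are percent-encoded by urlencode, which the API accepts just as it accepts literal pipes.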