Is there a way to do the following in Moqui?
Say I have a list of parent categories (or classifications, etc.). Taking request categories as an example:
<entity-find entity-name="mantle.request.RequestCategory" list="parentCategoryList">
    <econdition field-name="parentCategoryId" operator="is-null"/>
</entity-find>
And I want to use 'parentCategoryList' to produce a sub-list for EACH parent category, so that I can display a separate form-list on screen for each one:
Something like:
<iterate list="parentCategoryList" entry="thisCategory">
    <entity-find entity-name="mantle.request.RequestCategory" list="categoryList">
        <econdition field-name="parentCategoryId" from="thisCategory.requestCategoryId"/>
    </entity-find>
    <!-- I include the following only to give an idea of what I am trying to do.
         It is incorrect and incomplete -->
    <script>listOfLists.add(categoryList)</script>
</iterate>
Then use that 'listOfLists' to iterate a form-list, supplying the form-list 'name' and 'list' sequentially for each list in the list. (I know you can't use iterate outside of actions, and you can't use forms inside of actions.)
I may well be thinking about this in the wrong way.
You can iterate within the screen.widgets element; just use section-iterate. There are limits to how deeply you can nest these (the current template macros for XML Screens/Forms only support so much), but you can do quite a bit. There are examples of this in SimpleScreens, such as the OrderDetail.xml screen iterating over order parts.
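For instance, here is a rough, untested sketch of what that could look like inside screen.widgets, reusing the entity-find from the question (the categoryName field and the displayed columns are assumptions; check the RequestCategory entity and the SimpleScreens examples for the real patterns):
<section-iterate name="ParentCategorySection" list="parentCategoryList" entry="thisCategory">
    <actions>
        <!-- runs once per parent category, so each iteration gets its own categoryList -->
        <entity-find entity-name="mantle.request.RequestCategory" list="categoryList">
            <econdition field-name="parentCategoryId" from="thisCategory.requestCategoryId"/>
        </entity-find>
    </actions>
    <widgets>
        <label text="${thisCategory.categoryName}" type="h3"/>
        <form-list name="ChildCategoryList" list="categoryList">
            <!-- illustrative columns only -->
            <field name="requestCategoryId"><default-field><display/></default-field></field>
            <field name="categoryName"><default-field><display/></default-field></field>
        </form-list>
    </widgets>
</section-iterate>
Because the nested actions run for each entry, there is no need to build a listOfLists at all; each form-list simply picks up the categoryList for the current parent category.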
My goal is to add different banners to the bottom of each category, right below the list of products.
This could be accomplished in the following ways, but I'm not sure how to do it in aspdotnetstorefront:
Add custom CSS per category
Add custom HTML per category
I'm trying to avoid adding content using JavaScript, but will do so as a last resort; that would be easy, but it could cause maintenance issues.
I think your best bet is to add the summary to the XMLPackage you are using for your category pages. Adding the following line will allow you to add the banners to the Summary field (editable via admin):
<xsl:value-of select="aspdnsf:GetMLValue($CurrentEntityNode/Summary)" />
This snippet assumes that the parameter CurrentEntityNode has been declared:
<xsl:param name="CurrentEntityNode" select="/root/EntityHelpers/*[name()=/root/Runtime/EntityName]//Entity[EntityID = $CurrentEntityID]" />
I am using an API call similar to http://en.wikipedia.org/w/api.php?action=query&list=geosearch&gsradius=10000&gscoord=41.426140|26.099319.
It returns something like this:
<?xml version="1.0"?>
<api>
    <query>
        <geosearch>
            <gs pageid="27460829" ns="0" title="Kostilkovo" lat="41.416666666667" lon="26.05" dist="4245.1" primary="" />
            <gs pageid="27460781" ns="0" title="Belopolyane" lat="41.45" lon="26.15" dist="4988.7" primary="" />
            <gs pageid="27460862" ns="0" title="Siv Kladenets" lat="41.416666666667" lon="26.166666666667" dist="5713.5" primary="" />
            <gs pageid="13811116" ns="0" title="Svirachi" lat="41.483333333333" lon="26.116666666667" dist="6521.9" primary="" />
            <gs pageid="27460810" ns="0" title="Gorno Lukovo" lat="41.366666666667" lon="26.1" dist="6613.4" primary="" />
            <gs pageid="27460799" ns="0" title="Dolno Lukovo" lat="41.366666666667" lon="26.083333333333" dist="6746.2" primary="" />
            <gs pageid="27460827" ns="0" title="Kondovo" lat="41.433333333333" lon="26.016666666667" dist="6937" primary="" />
            <gs pageid="27460848" ns="0" title="Plevun" lat="41.45" lon="26.016666666667" dist="7383.1" primary="" />
            <gs pageid="24179704" ns="0" title="Villa Armira" lat="41.499069444444" lon="26.106263888889" dist="8130" primary="" />
            <gs pageid="27460871" ns="0" title="Zhelezari" lat="41.413333333333" lon="25.998333333333" dist="8540.1" primary="" />
        </geosearch>
    </query>
</api>
But since what I am actually trying to get is some pictures of those pages, subsequent calls are needed, like:
to get some page images
http://en.wikipedia.org/w/api.php?action=query&prop=images&pageids=13843906
then, to get image info
http://en.wikipedia.org/w/api.php?action=query&titles=File:Alexandru_Ioan_Cuza_Dealul_Patriarhiei.jpg&prop=imageinfo&iiprop=url
Well, even if this gets me what I ultimately need, it is not efficient at all.
I would like to know if there are some parameters for these calls, or maybe a completely different call (or calls), that would bring all this info in at most two steps/calls. It would be great, though, if it could be done in just one.
Wow, I had no idea that such a feature exists nowadays! But to answer your question, since it's a list query, you can probably use it as a generator.
Let's try it:
Original geosearch query: http://en.wikipedia.org/w/api.php?action=query&list=geosearch&gsradius=10000&gscoord=41.426140|26.099319
Generator query to get images on matching pages: http://en.wikipedia.org/w/api.php?action=query&prop=images&imlimit=max&generator=geosearch&ggsradius=10000&ggscoord=41.426140|26.099319
The prop=images query can also be used as a generator, so you can also do this:
Get URLs for all images on a list of pages: http://en.wikipedia.org/w/api.php?action=query&prop=imageinfo&iiprop=url&generator=images&gimlimit=max&pageids=13811116|24179704|27460781|27460799|27460810|27460827|27460829|27460848|27460862|27460871
Alas, AFAIK you can't nest generators, so you can't do both steps in one query. You can either:
get the list of images in one query, and then use another query to get the URLs, or
start with the basic geosearch query to get the page IDs, and then get the images and their URLs in another query.
Alas, it turns out that both of these options fail to give you some information that you may want. If you use list=geosearch as a generator, you don't get the coordinate information that you may need if you e.g. wish to display the results on a map. On the other hand, using prop=images as a generator makes you miss out on something even more important: the knowledge of which images are used on which pages!
Thus, unfortunately, it seems that, if your goal is to place images on a map, you'll probably have to do it with three separate queries. At least you can still query multiple pages / images in one request, so you shouldn't need more than three (until you hit the query limits and need to use continuations, that is).
(Also, doing it in three steps lets you apply some filtering to the images before the third step. For example, most of the pages returned by your example query only have the same three images — Flag of Bulgaria.svg, Ivaylovgrad Reservoir.jpg and Oblast Khaskovo.png — all of which are used via templates, and none of which really look like good choices to represent the specific location.)
PS. If you're just interested in finding images near a particular location, even if they're not used on any specific Wikipedia article, you might want to try using geosearch directly on Wikimedia Commons. It doesn't seem to return any results for your Bulgarian example coordinates, but it works just fine in a more crowded location.
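If it helps, here is a rough sketch of that three-query flow in Python with the requests library (illustrative only: error handling, the per-request title limits, and query continuations are all omitted):
import requests

API = "https://en.wikipedia.org/w/api.php"

def api_query(params):
    # Every call goes to the same endpoint; format=json just makes parsing easier.
    params = dict(params, action="query", format="json")
    return requests.get(API, params=params).json()["query"]

# 1. Geosearch: nearby pages, including lat/lon/dist for each one.
geo = api_query({"list": "geosearch", "gsradius": 10000,
                 "gscoord": "41.426140|26.099319"})["geosearch"]
page_ids = [str(g["pageid"]) for g in geo]

# 2. Images used on those pages (this step preserves the page -> image association).
pages = api_query({"prop": "images", "imlimit": "max",
                   "pageids": "|".join(page_ids)})["pages"]
file_titles = {img["title"]
               for page in pages.values()
               for img in page.get("images", [])}

# 3. URLs for the image files themselves.
files = api_query({"prop": "imageinfo", "iiprop": "url",
                   "titles": "|".join(sorted(file_titles))})["pages"]
image_urls = {f["title"]: f["imageinfo"][0]["url"]
              for f in files.values() if "imageinfo" in f}
The intermediate pages dictionary from step 2 is what keeps the page-to-image association that the single generator query loses, and you can filter file_titles (e.g. to drop the shared flag/map images) before step 3.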
Here is an alternative that builds on the previous answer. If you start with this query as a partial solution:
https://en.wikipedia.org/w/api.php?action=query&prop=images&imlimit=max&generator=geosearch&ggsradius=10000&ggscoord=41.426140|26.099319
Then you can build on it to get the information in a single query. The pageimages property can work with the generator; you cannot nest generators, but you can chain properties. A query can use pageimages to get the page's main image URL for each of the geosearch results. It looks like this:
https://en.wikipedia.org/w/api.php?action=query&prop=images|pageimages&pilimit=max&piprop=thumbnail&iwurl=&imlimit=max&generator=geosearch&ggsradius=10000&ggscoord=41.426140|26.099319
This query returns the image "File" names (from the images property) and a single URL for the main image (from the pageimages property). The main image of the page is all I need. You might be able to extrapolate the "File" URLs by pattern-matching against the URL that the query does output, but I cannot recommend such a hack.
The images property has a parameter that is supposed to return URLs for interwiki links, iwurl, and I see the "File" entries as interwiki links. That parameter is not working for me, though, and images does not return a URL. Playing in the API sandbox might lead you to a better answer.
Intuitively it seems like you should be able to chain the images and imageinfo properties together. Doing so does not give the expected results.
If a single URL for the main image of each page is not enough, I encourage you to play in the API sandbox and try to get what you need with some combination of properties. I am using the geosearch generator and getting the page image, text description, and lat/long coordinates so that I can derive the address. Good luck!
I have an application which creates datarequests that can be quite complex. These need to be stored in the database as tables. An outline of a datarequest (as XML) would be:
<datarequest>
    <datatask view="vw_ContractData" db="reporting" index="1">
        <datefilter modifier="w0">
            <filter index="1" datatype="d" column="Contract Date" param1="2009-10-19 12:00:00" param2="2012-09-27 12:00:00" daterange="" operation="Between" />
        </datefilter>
        <filters>
            <alternation index="1">
                <filter index="1" datatype="t" column="Department" param1="Stock" param2="" operation="Equals" />
            </alternation>
            <alternation index="2">
                <filter index="1" datatype="t" column="Department" param1="HR" param2="" operation="Equals" />
            </alternation>
        </filters>
        <series column="Turnaround" aggregate="avg" split="0" splitfield="" index="1">
            <filters />
        </series>
        <series column="Requested 3" aggregate="avg" split="0" splitfield="" index="2">
            <filters>
                <alternation index="1">
                    <filter index="1" datatype="t" column="Worker" param1="Malcom" param2="" operation="Equals" />
                </alternation>
            </filters>
        </series>
        <series column="Requested 2" aggregate="avg" split="0" splitfield="" index="3">
            <filters />
        </series>
        <series column="Reqested" aggregate="avg" split="0" splitfield="" index="4">
            <filters />
        </series>
    </datatask>
</datarequest>
This encodes a datarequest comprising a date range, main filters, series, and series filters. Basically, any element which has the index attribute can occur multiple times within its parent element; the exception to this is the filter within datefilter.
But the structure of this is kind of academic, the problem is more fundamental:
When a request comes through, XML like this is sent to SQL Server as a parameter to a stored proc. The XML is shredded into a de-normalised table and then written iteratively to normalised tables such as tblDataRequest (DataRequestID PK), tblDataTask, tblFilter and tblSeries. This is fine.
The problem occurs when I want to match a given XML definition with one already held in the DB. I currently do this by:
Shredding the XML into a de-normalised table
Using a CTE to pull all the existing data in the database into that same de-normalised form
Matching using a huge WHERE condition (34 lines long)
This will return any DataRequestID which exactly matches the given XML. I fear that this method will end up being painfully slow, partly because I don't believe the CTE will do any clever filtering; it will pull all the data every single time before applying the huge WHERE.
I have thought there must be better solutions to this, e.g.:
When storing a datarequest, also store a hash of it somehow and simply match on that; in the case of a collision, fall back to the current method. I wanted, however, to do this using set logic. I'm also concerned about irrelevant small differences in the XML changing the hash (spurious spaces, etc.).
Somehow perform the matching iteratively from the bottom up, e.g. produce a list of filters which match at the lowest level, use this as part of an IN to match series, use that as part of an IN to match datatasks, and so on. The trouble is, I start to black out when I think about this for too long.
Basically: has anyone ever encountered this kind of problem before (they must have), and what would be the recommended route for tackling it? Example (pseudo)code would be great :)
To get rid of the possibility of minor variances, I'd run the request through an XML transform (XSLT).
Alternatively, since you've already got the code to parse this out into a denormalized staging table, that's fine too; I would then simply use FOR XML to create a new XML doc.
Your goal here is to create a standardized XML document that respects ordering where it matters and removes inconsistencies where it does not.
Once that is done, store this in a new table. Now you can run a direct comparison of the "standardized" request XML against existing data.
To do the actual comparison, you can use a hash, store the XML as a string and do a direct string comparison, or do a full XML comparison like this: http://beyondrelational.com/modules/2/blogs/28/posts/10317/xquery-lab-36-writing-a-tsql-function-to-compare-two-xml-values-part-2.aspx
My preference, as long as the XML is never over 8000 bytes, would be to create a unique string (either VARCHAR(8000), or NVARCHAR(4000) if you need special-character support) and create a unique index on the column.
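For illustration, here is a rough sketch of the hash option (the table and column names are invented; HASHBYTES with SHA2_256 needs SQL Server 2012+, so use SHA1 on older versions):
-- Hypothetical table holding the standardized request XML as a string.
CREATE TABLE tblDataRequestStandardized (
    DataRequestID  INT            NOT NULL PRIMARY KEY,
    RequestXml     NVARCHAR(4000) NOT NULL,
    -- HASHBYTES is deterministic, so the computed column can be persisted and indexed.
    RequestHash    AS CONVERT(VARBINARY(32), HASHBYTES('SHA2_256', RequestXml)) PERSISTED
);

CREATE INDEX IX_DataRequestStandardized_Hash
    ON tblDataRequestStandardized (RequestHash);

-- Lookup: does an identical standardized request already exist?
DECLARE @newXml NVARCHAR(4000) = N'...standardized request XML...';

SELECT DataRequestID
FROM   tblDataRequestStandardized
WHERE  RequestHash = CONVERT(VARBINARY(32), HASHBYTES('SHA2_256', @newXml))
  AND  RequestXml  = @newXml;  -- full string check guards against hash collisions
A hash column like this also sidesteps the 900-byte index key limit that a unique index directly on the full VARCHAR(8000)/NVARCHAR(4000) column can run into.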
I'm looking for a way to retrieve an XML element (the id of an entry) from a YouTube feed (e.g. http://gdata.youtube.com/feeds/api/users/USERNAME/uploads).
The feed looks like this:
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/" xmlns:openSearch="http://a9.com/-/spec/opensearch/1.1/" xmlns:gd="http://schemas.google.com/g/2005" xmlns:yt="http://gdata.youtube.com/schemas/2007" gd:etag='W/"DUcFQncyfCp7I2A9WhVUFE4."'>
    <id>tag:youtube.com,2008:user:USERNAME:uploads</id>
    <updated>2012-05-19T14:16:53.994Z</updated>
    ...
    <entry gd:etag='W/"DE8NSX47eCp7I2A9WhVUFE4."'>
        <id>tag:youtube.com,2008:video:MfPpj7f6Jj0</id>
        <published>2012-05-18T13:30:38.000Z</published>
        ...
I want to get the first tag in the entry (the one containing tag:youtube.com,2008:...).
After googling for some hours and looking through the GDataXML wiki, I'm still clueless, because neither XPath nor GData could deliver the right element.
My first guess is that they can't ignore the attributes on the feed and entry tags.
A solution using XPath would be great, but one in Objective-C is equally welcome.
You might be having an issue trying to get XPath to work because of the default namespace.
If you just want the first tag in entry, you can use this:
/*/*[name()='entry']/*[1]
If you want the first id specifically, you can use this:
/*/*[name()='entry']/*[name()='id'][1]
Also, if you can use XPath 2.0, you can skip the name() predicates entirely and use * for the namespace prefix:
/*/*:entry/*:id[1]
A set of forms (using Zend_Form) that I have been working on was causing me some headaches while trying to figure out what was wrong with my XML configuration, as I kept getting unexpected HTML output for a particular INPUT element. It was supposed to get a default value, but nothing appeared.
It appears that the following 2 pieces of XML are not equal when used to instantiate Zend_Form:
Snippet #1:
<form>
    <elements>
        <test type="hidden">
            <options ignore="true" value="foo"/>
        </test>
    </elements>
</form>
Snippet #2:
<form>
    <elements>
        <test type="hidden">
            <options ignore="true">
                <value>foo</value>
            </options>
        </test>
    </elements>
</form>
The type of the element doesn't seem to make a difference, so it doesn't appear to be related to hidden fields.
Is this expected or not?
As it was rather quiet on here, I took a further look into the source code and documentation.
On line 259 of Zend_Config_Xml, the SimpleXMLElement object attributes are converted to a string, resulting in:
options Object of: SimpleXMLElement
    #attributes Array [2]
        label (string:7) I can't see this because
        value (string:21) something happens to this

becoming:

options (string:21) something happens to this
So, I hunted through the documentation only to find that "value" is a reserved keyword when used as an attribute in an XML file that is loaded into Zend_Config_Xml:
Example #2: Using Tag Attributes in Zend_Config_Xml
"...Zend_Config_Xml also supports two additional ways of defining nodes in the configuration. Both make use of attributes. Since the extends and the value attributes are reserved keywords (the latter one by the second way of using attributes), they may not be used..."
Thus, it would appear to be "expected" according to the documentation.
I'm not entirely convinced that this is a good idea, though, considering "value" is an attribute of form elements.
Don't worry about this. The reserved keywords were moved to their own namespace, and the previous attributes were deprecated. In Zend Framework 2.0 the non-namespaced attributes will be removed, so you can use them again.