I am having problems using the sort attribute within my Google Custom Search (GCS) request. Whenever I use sort targeting a PageMap element, it returns 0 results. See the two requests below. Am I simply missing or misunderstanding something?
Request Without Sort:
http://www.google.com/cse?cx=010537259921960391009:s7pan_vavqu&output=xml_no_dtd&q=fire+more:pagemap:course
Request With Sort:
http://www.google.com/cse?cx=010537259921960391009:s7pan_vavqu&output=xml_no_dtd&q=fire+more:pagemap:course&sort=course-coursenumber:a
You probably didn't enable the sort option. Read how to do so here:
http://support.google.com/customsearch/bin/answer.py?hl=en&answer=2549537
Related
Seeking guidance on the REST API signature for an API.
To handle logistics, we want to support shipping label generation for courier packages.
Which of these would be the more RESTful way to model these APIs?
Generate Label API: POST /package/{package-id}/label
Regenerate Label API: POST /package/{package-id}/label/regenerate
vs
Generate Label API: POST /package/{package-id}/label?operation=generate
Regenerate Label API: POST /package/{package-id}/label?operation=regenerate
The Regenerate API ends up creating a new label for the package, based on updated shipping dates, etc., passed as part of the request payload.
Just use:
POST /package/{package-id}/label
PUT/PATCH /package/{package-id}/label
https://restfulapi.net/http-methods/
If /package/{package-id}/label is a 'label resource' that you're creating, then the appropriate method is probably PUT here.
But 'regenerating' is a bit hard to explain here. Could you just fetch the label over and over again by doing a GET request on /package/{package-id}/label?
If 'creating a label' or 'regenerating a label' doesn't have side-effects (such as a physical printer printing a label), then a GET might work best.
If 'creating a label' or 'regenerating a label' is not really returning a label but causing some external effect in a different system, then it feels better for this to be more like the RPC call you mentioned.
POST /package/{package-id}/label
Then my question is: why is there a difference between 'generating' and 'regenerating'? It sounds like you're just doing the same thing twice. Do you really need 2 endpoints? Can't the system figure out if a label was created before for the {package-id}?
If the system doesn't know whether the label is generated vs regenerated, and only the client can know, I would be inclined to use the same endpoint and add some flag to the request body instead of having 2 different URLs.
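For example, a sketch of that single-endpoint option with a client-supplied flag (the field names and values here are made up for illustration):

POST /package/{package-id}/label
Content-Type: application/json

{
  "regenerate": true,
  "shippingDate": "2024-05-01"
}

The server then decides whether to create a fresh label or replace the existing one, and the client only has to know about one URL.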
I spent almost a full day debugging why my client can't post any forms, until I found out the anti-forgery mechanism got borked on the client side and the server just responded with a 400 error, with zero logs or information (it turns out anti-forgery validation failures are logged internally at Info level).
So I decided the server needs to handle this scenario specially; however, according to this answer, I don't really know how to do that (aside from hacking).
Normally I would set up an IAlwaysRunResultFilter and check for IAntiforgeryValidationFailedResult. Easy.
Except that I use API controllers, so by default all results get transformed into ProblemDetails. So context.Result as mentioned here is always of type ObjectResult. The solution accepted there is to use options.SuppressMapClientErrors = true;, however I want to retain this mapping at the end of the pipeline. But if this option isn't set to true, I have no idea how to intercept the Result in the pipeline before this transformation.
So in my case, I want to do something with the result of the anti-forgery validation as mentioned in the linked post, but after that I want to retain the ProblemDetails transformation. But my question is titled generally, as it is about executing filters before the aforementioned client mapping filter.
Through hacking I am able to achieve what I want. If we take a look at the source code, we can see that the filter I want to precede has an order of -2000. So if I register my global filter like this: o.Filters.Add(typeof(MyResultFilter), -2001);, then the filter shown here correctly executes before ClientErrorResultFilter, and thus I can handle the result and retain the transformation afterwards. However, I feel like this is just exploiting the open-source-ness of .NET 6, and of course, as you can see, it's an internal constant, so I have no guarantee the next patch doesn't change it and break my code. Surely there must be a proper way to order my filter to run before the API transform.
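For reference, this is roughly what the workaround looks like. MyResultFilter and the hard-coded -2001 order are my own names/choices, and -2000 is an internal, unsupported constant that may change between releases:

using Microsoft.AspNetCore.Antiforgery;
using Microsoft.AspNetCore.Mvc.Filters;

public class MyResultFilter : IAlwaysRunResultFilter
{
    public void OnResultExecuting(ResultExecutingContext context)
    {
        // Runs before ClientErrorResultFilter, so the antiforgery failure has not
        // yet been converted into a ProblemDetails ObjectResult.
        if (context.Result is IAntiforgeryValidationFailedResult)
        {
            // Log the failure or swap in a more descriptive result here.
        }
    }

    public void OnResultExecuted(ResultExecutedContext context) { }
}

// Registration (the -2001 order relies on the internal -2000 constant):
// builder.Services.AddControllers(o => o.Filters.Add(typeof(MyResultFilter), -2001));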
Is there a way to add validation to a property on my VM "dynamically" (i.e. sometime after I register the initial rules on the VM)?
Currently, I'm registering the rules in the constructor of the VM; then a little while later, after the user has entered a bunch of data, I need to show a new field (using if.bind) and want to add validation depending on the result of a web API call.
Wondering if there's an API for this that I've missed?
You can achieve this without dynamically adding a rule; instead you can use satisfiesRule and when, see it here in the Conditional Validation section. satisfiesRule is only evaluated if the property it is attached to already passes its other rules.
when only evaluates the rule if its condition is true.
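A minimal sketch with aurelia-validation (the property name, custom rule name, and condition below are made up for illustration):

import { ValidationRules } from 'aurelia-validation';

// Hypothetical custom rule, e.g. built from the result of your web API call.
ValidationRules.customRule(
  'isApprovedValue',
  (value, obj) => obj.approvedValues.indexOf(value) !== -1,
  '${$displayName} is not an approved value.'
);

ValidationRules
  .ensure('extraField')
    .required()
      .when(vm => vm.showExtraField)        // required only once the field is shown
    .satisfiesRule('isApprovedValue')
      .when(vm => vm.showExtraField)        // same condition for the custom rule
  .on(this);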
Additional link.
If you're using bootstrap, you can find this useful.
I have some difficulties customizing the Aikau faceted search page in Alfresco, which may be more a matter of my lack of knowledge about dojo/AMD.
What I want to do is to replace the document details page URL with a download URL.
I extended the Search Results Widget to include my own custom module:
var searchResultWidget = widgetUtils.findObject(model.jsonModel, "id", "FCTSRCH_SEARCH_RESULT");
if (searchResultWidget) {
    searchResultWidget.name = "mynamespace/search/CustomAlfSearchResult";
}
I understand search result URLs are rendered this way:
AlfSearchResult module => uses SearchResultPropertyLink module => mixes in the _SearchResultLinkMixin renderer => which brings in the "generateSearchLinkPayload" function => which renders URLs depending on the result type.
I want to override this "generateSearchLinkPayload" function but I can't figure out the best way to do that.
Thanks in advance for the help!
This answer assumes you're able to use the latest version of Aikau (at the time of writing this is 1.0.61). Older versions might require slightly different overriding...
In order to do this you're going to need to override the createDisplayNameRenderer function of AlfSearchResult in your CustomAlfSearchResult widget. This will allow you to create an extension of alfresco/search/SearchResultPropertyLink.
If you want to take advantage of the download capabilities provided by the alfresco/services/DocumentService for downloading both documents and folders (as a zip), then you're going to want to change both the publishTopic and publishPayload of the SearchResultPropertyLink.
You should extend the getPublishTopic and generateSearchLinkPayload functions. For the getPublishTopic function you'll want to change the return value to be "ALF_SMART_DOWNLOAD" (there are constants available for these strings in the alfresco/core/topics module). This topic tells the DocumentService to take care of figuring out whether the node is a folder or a document, and it will make an XHR request for the full node metadata (in order to get the contentUrl attribute that is not included in the data returned by the Search API).
You should extend the generateSearchLinkPayload function so that for document or folder types the payload contains a nodes attribute that is an array whose single entry is the search result object you wish to download.
I would recommend that you call this.inherited first to get the default payload and only update it for documents and folders.
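A rough sketch of what such an extension might look like; the payload handling and the use of this.currentItem are assumptions to illustrate the steps above, so verify them against your Aikau version:

define(["dojo/_base/declare",
        "alfresco/search/SearchResultPropertyLink"],
  function(declare, SearchResultPropertyLink) {

  return declare([SearchResultPropertyLink], {

    // Publish the smart download topic instead of navigating to the details page
    // (a constant for this string also lives in the alfresco/core/topics module).
    getPublishTopic: function() {
      return "ALF_SMART_DOWNLOAD";
    },

    // Get the default payload first, then swap in a "nodes" array containing
    // the current search result for documents and folders.
    generateSearchLinkPayload: function() {
      var payload = this.inherited(arguments);
      var type = this.currentItem ? this.currentItem.type : null;
      if (type === "document" || type === "folder") {
        payload = { nodes: [this.currentItem] };
      }
      return payload;
    }
  });
});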
Hopefully that all makes sense - if not, add a comment and I'll try to provide further assistance!
This is the answer for 1.0.25.2 - unfortunately it's not quite so straightforward...
You still need to extend the alfresco/search/AlfSearchResult widget; however, this time you need to extend the postCreate function (remembering to call this.inherited(arguments)). It's not possible to stop the original alfresco/search/SearchResultPropertyLink widget from being created... so it will be necessary to find it and destroy it.
The widget is not assigned to a variable, so it will be necessary to find it using dijit/registry. Use the byNode function from dijit/registry to find the widget assigned to this.nameNode and then call destroy on it (be sure to pass the argument true to preserve the DOM). However, you will then need to empty the DOM node so that you can start again...
Now you need to add in your extension of alfresco/search/SearchResultPropertyLink. Unfortunately, because the smart download capability is not available, you'll need to do more work. The difference here is that you'll need to make an XHR request to retrieve the full node metadata in order to obtain the contentUrl. It's possible to publish a request to the DocumentService (via the "ALF_RETRIEVE_SINGLE_DOCUMENT_REQUEST" topic). However, you need to be aware that having the XHR step will not allow you to then proceed with the download as is. Instead you'll need to use an iframe download solution; I'd suggest you take a look at the changes in the pull request we recently made to solve this problem and backport them into your own solution.
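To make the destroy-and-replace step more concrete, here is a sketch of the postCreate override; the module paths and the dom-construct call are assumptions, so adapt them to your setup:

define(["dojo/_base/declare",
        "alfresco/search/AlfSearchResult",
        "dijit/registry",
        "dojo/dom-construct"],
  function(declare, AlfSearchResult, registry, domConstruct) {

  return declare([AlfSearchResult], {

    postCreate: function() {
      this.inherited(arguments);

      // The original SearchResultPropertyLink is not assigned to a variable,
      // so look it up via the DOM node it was created against...
      var link = registry.byNode(this.nameNode);
      if (link) {
        // ...destroy the widget while preserving the DOM node...
        link.destroy(true);
        // ...then empty the node so the replacement widget can render into it.
        domConstruct.empty(this.nameNode);

        // Create your extension of alfresco/search/SearchResultPropertyLink here,
        // wired up to do the metadata XHR and iframe download described above.
      }
    }
  });
});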
I'm new to Pentaho and using the REST Client. I can get the REST Client step to work by using Generate Rows for the URL. But then I need to pass part of the JSON result to be part of the URL for the next request. I'm not sure how to do this. Any suggestions?
Remember that PDI works with streams: for each REST request you make, you will have one row as a result. You will have as many rows as requests you make.
I'm not sure if you can deserialize the JSON object directly from the PDI interface, but in the worst case you can use the "User Defined Java Class" step to call some external library (like Gson) and deserialize the object.
Then, you can create another variable in the UDJC step and concatenate the attributes you need onto the URL string that comes from the previous step.
On the other hand, you can use the "Modified Java Script" step to deserialize it and return the attributes you need, and then concatenate them with the URL. To use it, just declare variables inside the code, and then use the "Get variables" button to retrieve the available fields to send to the next step.
There are many ways to do it; I suggest you use the Modified Java Script step because it's easier to handle.
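As an illustration, a minimal sketch for the Modified Java Script step; the field names (rest_response, next_url) and the JSON structure are assumptions for this example:

// rest_response is the incoming field that holds the REST Client result.
// JSON.parse needs a reasonably recent Rhino; on older PDI versions you may
// have to fall back to eval('(' + rest_response + ')').
var parsed = JSON.parse(rest_response);

// Pick the attribute needed for the next call, e.g. an id.
var resourceId = parsed.result.id;

// Build the URL for the next REST Client step; click "Get variables" so that
// next_url becomes an output field available downstream.
var next_url = "http://example.com/api/resources/" + resourceId;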
You CAN parse the JSON response: just use a Json Input step as the next step, and then use a JSONPath-style path expression to extract the field you want: $.result.the.thing.u.want