XML parser problem - Objective-C

I have to parse XML data.
Can anyone suggest a simple parser so I can get the data? Please don't suggest the ASIHTTPRequest method.
In my XML file there are tags whose content is wrapped in a [CDATA] block. This is the first time I've seen this, and I get a null value for these tags.

At this link you will find a list of popular XML parser libraries for iOS and the differences between them. I used TouchXML in my apps and it was really easy to use.
ASIHTTPRequest is not a parsing library; it's a wrapper around the CFNetwork API, used to perform basic HTTP requests and interact with REST-based services (GET/POST/PUT/DELETE).

Related

SoapUI request modifying "@" to "%40"

I'm using SoapUI to automate tests against my company's APIs. I've successfully set up and run dozens of these cases.
This time, I'm getting an error which, after exhaustive tracking down, I've found is due to our APIs requiring the "@" character itself rather than the URL-friendly "%40" substitution.
The post request I want is structured like this:
https://<endpoint>.com/<resource>?<param>&email_address@example.com
And what I'm seeing made is:
https://<endpoint>.com/<resource>?<param>&email_address%40example.com
How can I force SoapUI to use the character itself?
I've tried setting headers, media type, and representations (though possibly not through all permutations).
Thanks.
Use the Disable Encoding option for the parameter.

Behat/Mink: how can I evaluate an XML response

I am trying to test an RSS feed via Behat/Mink. Unfortunately I keep getting an error message:
The current node list is empty.
Does anyone know how to test an XML response (search for a string in the XML) via Behat/Mink?
Edit:
I need to find some way, and best would be to get it running with Behat/Mink.
But if that's not possible at all, I can live with a workaround too.
An example of how to do that would be great!
In your FeatureContext.php file you can get the raw content with:
$xml = $this->getSession()->getDriver()->getContent();
Then you can use a regex or DOMDocument to test the returned XML content.
Mink is a browser emulation abstraction layer. Some browsers can read RSS, some can't. Parsing custom XML is not Mink's responsibility. Use a combination of Behat + a web crawler + DOMDocument (or any PHP RSS parser) for that.
As @everzet mentioned, Mink is not the best tool for the job, since it's a browser emulator rather than an HTTP client.
You're not limited to Mink, though, and you could use any PHP HTTP client (like Guzzle or Buzz) or even file_get_contents() to fetch the RSS feed.

How to set http headers in dotCMS

I'm trying to create an XML data feed with dotCMS. I can easily output the correct XML document structure in a .dot "page", but the HTTP headers sent to the client still say that my page contains "text/html". How can I change them to "text/xml" or "application/xml"?
Apparently there's no way to do it using the administration console. The only way I found is to add this line of (Velocity) code
$response.setHeader("Content-Type", "application/xml")
to the top of the page template.
Your solution is the easiest. However, there are other options that take a bit more work but would save you from using Velocity for the XML generation, which is usually more robust.
dotCMS uses XStream to generate XML files (and vice versa). You could write a generic plugin to use this as well.
A JSONContentServlet exists in dotCMS that takes a query and generates JSON or XML (depending on your parameters). It is not mapped to a URL by default, but that is easy to add; see the sketch below.
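For example, a mapping along these lines in web.xml would expose it (the fully qualified servlet class name below is an assumption; verify it against your dotCMS distribution before using it):

<servlet>
    <servlet-name>JSONContentServlet</servlet-name>
    <!-- class name is an assumption; check your dotCMS version -->
    <servlet-class>com.dotmarketing.servlets.JSONContentServlet</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>JSONContentServlet</servlet-name>
    <url-pattern>/JSONContent/*</url-pattern>
</servlet-mapping>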

Amazon S3 pre signed URLs using Amazon Java SDK and extra / characters

I've been creating pre-signed HTTP PUT URLs, and everything was working great until I wanted to start using "folders" in S3; I wanted the key to contain the character '/'.
Now I get a "Signature does not match" error when I send the HTTP PUT request, probably because the '/' is changed to %2F... If I escape the character before creating the pre-signed URL it works great, but then the Amazon management console doesn't understand it and shows the key as one file instead of subfolders.
Any idea?
P.S.
The HTTP PUT requests are sent from C++ using the Poco NET library.
EDIT
I'm using a Poco HTTPRequest from C++ to my Java web server to generate a signed URL (returned in the response).
C++ then uses this URL to PUT a file into S3, again using Poco.
The problem was that the URLs returned from the web server were parsed through Poco URI objects, which auto-decoded the S3 object key, thus changing it. With that in mind I was able to fix my problem.
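For reference, a minimal sketch of what the Java web server side might look like with the AWS SDK for Java; the bucket name, key, and client setup are illustrative assumptions, not taken from the question:

import java.net.URL;
import java.util.Date;
import com.amazonaws.HttpMethod;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;

public class PresignExample {
    public static void main(String[] args) {
        // Credentials and region come from the default provider chain.
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // The raw '/' in the key is what the console interprets as a folder;
        // do not percent-encode it before signing.
        GeneratePresignedUrlRequest request =
                new GeneratePresignedUrlRequest("my-bucket", "myfolder/myfile.bin")
                        .withMethod(HttpMethod.PUT)
                        .withExpiration(new Date(System.currentTimeMillis() + 15 * 60 * 1000L));

        URL url = s3.generatePresignedUrl(request);
        // Hand this URL to the C++ client verbatim; any further decoding or
        // re-encoding (e.g. by Poco::URI) will invalidate the signature.
        System.out.println(url);
    }
}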
Tricky - I'll try to approach this bottom up.
Disclaimer: I got carried away visually inspecting the Poco libraries instead of actually debugging a code sample, which should yield more reliable results much faster, see below ;)
Analysis
If I escape the character before creating the presigned URL it works
great, but then the Amazon console management doesn't understand it
and shows it as one file instead of subfolders.
The latter stems from the fact that S3 actually has no concept of folders at the storage level; see e.g. the section Index Documents and Folders within Index Document Support:
Objects stored in Amazon S3 are stored within a flat container, i.e.,
an Amazon S3 bucket, and it does not provide any hierarchical
organization, similar to a file system's. However, you can create a
logical hierarchy using object key names and use these names to infer
logical folders that contain these objects.
That's exactly what the AWS Management Console is doing here as well:
The AWS Management Console also supports the concept of folders, by
using the same key naming convention used in the preceding sample.
However, your test regarding the assumption of / being encoded as %2F proves that this is indeed how Poco::Net is encoding the URL when performing the HTTP PUT request.
(I'm actually a bit surprised that the AWS Java SDK seems to generate different URLs here for / vs. %2F, insofar as a recent analysis regarding Why is my S3 pre-signed request invalid when I set a response header override that contains a "+"? seems to indicate respective canonicalization by the AWS .NET SDK; see below for more on this.)
Potential Solution
In order for your scenario to work as desired, you'll need to figure out where the URL is encoded this way - I could think of two components in principle:
Poco::Net
Finding out why Poco::Net encodes the URL differently than S3 does (if at all, see below) is best done by debugging your code; here's where I'd start:
Class HTTPRequest uses class URI in turn, which automatically performs a few normalizations on all URIs and URI parts passed to it; in particular, percent-encoded characters are decoded. The other direction is handled by method encode(), which is where things get interesting and call for a breakpoint, see URI.cpp:
lines 575 ff. - here encode() does its magic, which indeed seems to be in place, insofar as neither the code within the function nor the various characters passed in via the reserved parameter contain the offending / (see lines 47 ff. for the respective constants in use)
consequently you might want to set a breakpoint in this function and backtrace the call stack to find out which code is actually doing the encoding upfront - which might not yield an offender at all, see below.
Java => C++ transition
You haven't specified yet which channel is actually used to communicate the pre-signed URL generated by the AWS Java SDK to C++. Given that the code review (mind you, visual inspection only; I haven't debugged this myself) of the Poco::Net functionality yields the conclusion that no obvious offender can be identified in the library itself, it seems more likely that the URL already enters your C++ layer encoded (easily verified via debugging, of course) - are you by chance using any kind of web service between these components, for example?
Good luck!

REST API for multiple actions on a file

This is a continuation of my question on how to design a REST API for a media analysis server. As per Derrel's answer, in my current design I start the analysis of a media file using POST /facerecognition/analysisrequests?profileId=33, which specifies that profile ID 33 (previously created on the server by another POST) should be used.
I have two short questions:
How can I extend this approach to have multiple analysis requests on the same file, e.g. perform face recognition, text detection, and ad detection on the given file? Is using a binary encoding (e.g. each bit signifies one analysis) and doing e.g. POST http:[server URL]/00000011/analysisrequests?profileId=33 a good idea?
Is using a server-side DB (e.g. MySQL) the best way to keep track of all the profile and process IDs?
Thanks,
C
I'd put the types of analysis requested as parameters rather than as part of the path. They could be POST parameters in the body of the request, or specified in the URL like profileId. Example: POST http://server/analysisrequest?profileId=33&analysisType=faceRecognition&analysisType=textDetection. It's perfectly OK to submit multiple values for a parameter; a server-side sketch follows below.
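As a hypothetical sketch of the server side (the servlet and parameter names are illustrative, not from the original question), reading the repeated parameter is straightforward in a Java servlet:

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class AnalysisRequestServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        String profileId = req.getParameter("profileId");
        // getParameterValues() returns every occurrence of the parameter,
        // e.g. analysisType=faceRecognition&analysisType=textDetection.
        String[] analysisTypes = req.getParameterValues("analysisType");
        if (analysisTypes == null) {
            resp.sendError(HttpServletResponse.SC_BAD_REQUEST, "no analysisType given");
            return;
        }
        for (String type : analysisTypes) {
            // queue one analysis job of this type for profileId ...
        }
        resp.setStatus(HttpServletResponse.SC_ACCEPTED);
    }
}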
You could submit a binary encoding of the analysis types, but spelling them out is a lot clearer and self-documenting. The binary encoding is also a bit fragile when adding a new analysis type; adding a new digit would affect the URLs of all requests, even those that don't use the new type.
A server-side database is typical for this kind of web application, and it's probably a good solution. You might also want to consider an in-process SQL database such as SQLite or Derby to avoid the complexity of a separate database process; see the sketch below.
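A minimal sketch of the in-process option, assuming the embedded Apache Derby driver is on the classpath (the schema and names are illustrative):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;

public class RequestStore {
    public static void main(String[] args) throws SQLException {
        // ";create=true" creates the database directory on first use.
        try (Connection conn =
                DriverManager.getConnection("jdbc:derby:analysisdb;create=true")) {
            try (Statement st = conn.createStatement()) {
                // One-time setup; this throws if the table already exists.
                st.executeUpdate("CREATE TABLE analysis_requests ("
                        + "id INT GENERATED ALWAYS AS IDENTITY PRIMARY KEY, "
                        + "profile_id INT NOT NULL, "
                        + "analysis_type VARCHAR(64) NOT NULL)");
            }
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO analysis_requests (profile_id, analysis_type) VALUES (?, ?)")) {
                ps.setInt(1, 33);
                ps.setString(2, "faceRecognition");
                ps.executeUpdate();
            }
        }
    }
}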
I would recommend making more complete use of HTTP POST. Make all POST requests against the same URI: /analysisrequest. Use application/x-www-form-urlencoded to send the parameters.
So:
POST /analysisrequest HTTP/1.1
Host: yourserver.com
Accept: */*
Content-Length: 73
Content-Type: application/x-www-form-urlencoded

face_recognition=true&text_detection=true&ad_detection=true&profile_id=33
multipart/form-data would also allow you to send the file being analyzed in the same request as the operations to perform on it, assuming that's a desired scenario, with the added advantage that you ought to be able to use the exact same API endpoint for both HTML forms and your REST API.
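A sketch of what such a request might look like on the wire (the boundary, field names, and filename are illustrative):

POST /analysisrequest HTTP/1.1
Host: yourserver.com
Content-Type: multipart/form-data; boundary=----boundary42

------boundary42
Content-Disposition: form-data; name="profile_id"

33
------boundary42
Content-Disposition: form-data; name="face_recognition"

true
------boundary42
Content-Disposition: form-data; name="file"; filename="video.mp4"
Content-Type: application/octet-stream

...binary file data...
------boundary42--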