Best practice: capturing data from URL - asp.net-mvc-4

I'm looking for some advice on best practice in an MVC application. I can think of several ways to achieve what I want, but I don't know which is considered best.
I am writing a discussion forum app and need to capture data from a URL to include in a post back from the page. I have links like this:
http://somedomain/Discussion/AddMessage/messageid/messagetitle
I need to include messageid and messagetitle in the form that is posted back on this page. What's the best way of going about this?
Ta! Mark

Your current URL is more RPC-style; I would recommend reading about REST. I would use a URL like
http://somedomain/discussion/message
where MessageId and MessageTitle are simply POST values (sketched below).
http://www.codeproject.com/Articles/233572/Build-truly-RESTful-API-and-website-using-same-ASP
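For illustration, here is a rough sketch of what the resulting request could look like from a client's point of view, using Python's requests library. The endpoint and field values are hypothetical; in the actual app the values would come from hidden form fields rendered into the page.

    # Hedged sketch: messageid/messagetitle travel in the POST body
    # instead of in the URL path. Endpoint and values are made up.
    import requests

    response = requests.post(
        "http://somedomain/discussion/message",
        data={
            "MessageId": 42,                # hypothetical message id
            "MessageTitle": "Hello world",  # hypothetical message title
        },
    )
    print(response.status_code)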

Related

REST API design for running a transformation on a resource

So I know this is crazy from a security standpoint, but let's say I have a posts resource at /posts/ and I'd like an admin to be able to trigger a transformation on the collection (in this case, a simple data migration).
How should I design the URL for something like that? It's basically a remote procedure: "take all the posts, modify them, and save them", which is why it is hard to shoehorn onto REST.
I ended up just doing POST /posts/name-of-transform. It's going to be hacky either way :(
So what you want is to update a collection, right?
I think what you're looking for is the HTTP PATCH method. It will act pretty much like your POST method, but instead of creating the resources it will update them.
You can find more about the PATCH method at this address: https://restful-api-design.readthedocs.org/en/latest/methods.html
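A quick sketch of what such a PATCH request could look like, assuming a JSON Patch body (RFC 6902) and Python's requests library; the endpoint and the operation applied to each post are made up:

    # Hedged sketch: one PATCH against the /posts/ collection instead of
    # POST /posts/name-of-transform. The patch operation is illustrative.
    import requests

    patch = [
        # hypothetical JSON Patch operation applied to every post
        {"op": "replace", "path": "/status", "value": "migrated"},
    ]
    response = requests.patch(
        "https://example.com/posts/",  # hypothetical API root
        json=patch,
        headers={"Content-Type": "application/json-patch+json"},
    )
    print(response.status_code)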

API lookup does not find thing - am I using the correct URLs?

I am currently working on a Flattr plugin for a popular open-source RSS reader (Tiny Tiny RSS).
I am using the lookup API for the first time and am unsure why I am getting mixed results.
So I'm unsure whether I am using the API correctly and want to confirm with you experts that I haven't got something basic wrong.
First, let's see if I can come up with an API call that looks up a thing successfully. I look at the Flattr page of thing 1066706 (I can't post the whole URL here as SO only allows me two URLs for this whole post). On that page, I find the official URL which Flattr stores for that thing and look that up with the API: see here
This returns {"type":"thing","resource":"https:\/\/api.flattr.com\/rest\/v2\/things\/1066706", ... so that's good.
But it seems this method is not a sure way to test if things exist. Here is an example that doesn't work: I open the Flattr page of thing e7579b349cb7b319b28d883cd4064e1e.
The URL I find on that page is indeed the URL of that article, and I don't see any other URL it might have. I look that up in the same way as above: check this
Alas, I get {"message":"not_found","description":"No thing was found"}
(I also tried both of these with encoded URLs, but got the same result. I figured this is easier to read for you.)
So, why would that second thing not be found? Thanks for any enlightenment.
The id "e7579b349cb7b319b28d883cd4064e1e" is not a real thing id but a hash that identifies a temporary thing for a not yet submitted thing - it's part of Flattr's autosubmit functionality: http://developers.flattr.net/auto-submit/
So the system is very correct in telling you that a thing for that URL doesn't exist - someone would need to flattr that thing for it to become submitted for real and created in the system with a real id to it.
(Just for reference - for some URL:s, like Twitter URL:s, Flattr can actually answer that the URL is flattrable even though it can't find it in the system: {"message": "flattrable", "description": "Thing is flattrable "} That way you can now that it is possible to flattr that thing without you having to use any kind of flattr-button/url supplied by the author to be able to flattr the URL)
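For reference, a minimal sketch of that lookup flow in Python. It assumes a v2 lookup endpoint at https://api.flattr.com/rest/v2/things/lookup/ that takes the target URL as a query parameter - check the Flattr API docs for the exact path:

    # Hedged sketch of the lookup flow described above; the endpoint
    # path is an assumption, the response handling mirrors the answer.
    import requests

    def lookup_thing(url):
        response = requests.get(
            "https://api.flattr.com/rest/v2/things/lookup/",
            params={"url": url},  # requests handles URL-encoding
        )
        data = response.json()
        if data.get("type") == "thing":
            return "exists", data["resource"]
        if data.get("message") == "flattrable":
            return "flattrable", None  # auto-submit URL, not yet created
        return "not_found", None

    print(lookup_thing("http://example.com/some-article"))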
Also - if you don't know it yet: for an RSS reader you should primarily be looking for rel-payment links to find out whether an entry is flattrable or not; see http://developers.flattr.net/feed/ and http://relpayment.com/
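As a small illustration of that rel-payment approach, assuming the Python feedparser library (the feed URL is made up):

    # Hedged sketch: scan each feed entry's links for rel="payment",
    # per the advice above. Feed URL is hypothetical.
    import feedparser

    feed = feedparser.parse("http://example.com/feed.atom")
    for entry in feed.entries:
        payment_links = [
            link["href"]
            for link in entry.get("links", [])
            if link.get("rel") == "payment"
        ]
        if payment_links:
            print(entry.get("title", "?"), "is flattrable via", payment_links[0])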

SEO - Temporary unpublished pages - Which http status code to return?

In my CMS there are some pages that are temporarily unpublished and later re-published. From an SEO perspective, what is the best way to handle them: tell the search engines to remove them, or that they are temporarily moved?
Not all pages are re-published, so in some cases the status code returned will not be the best option, but I guess it makes more sense to handle correctly the ones that are going to be re-published than the ones that never will be.
Which status should be returned when a user or a search engine tries to visit such a page?
302?
307?
404?
Or which is the best way to handle this scenario?
Many thanks
Some suggest using a 503 for reasons explained here:
http://news.softpedia.com/news/Take-Down-Your-Website-Temporarily-Without-Affecting-Google-Ranking-246829.shtml
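For illustration, a minimal sketch of serving such a page with a 503 plus a Retry-After header, here using Flask (any framework works; the route is hypothetical):

    # Hedged sketch: answer with 503 Service Unavailable and Retry-After
    # so crawlers treat the outage as temporary instead of dropping the
    # page from the index. Route and retry delay are made up.
    from flask import Flask, Response

    app = Flask(__name__)

    @app.route("/unpublished-page")
    def unpublished():
        return Response(
            "This page is temporarily unavailable.",
            status=503,
            headers={"Retry-After": "86400"},  # suggest retrying in a day
        )

    if __name__ == "__main__":
        app.run()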

How to parse information from a blog to an iPhone?

Could someone point me in the right direction on how I can parse data from a blog to an iPhone? E.g. you have a table view displaying the posts of the blog; you select a table cell and the text content is displayed. Are there any tutorials/examples on this?
I have a bit of experience with parsing data using JSON (parsed data from a database to an iPhone) but am unsure where to start with this.
Thanks for any help.
What you want to do is use the WordPress API. The flow goes like this:
1. Make an API call. This is a subject unto itself, but typically you'd use NSURLRequest and NSURLConnection to make the request.
2. Parse the result. I forget if you get XML back; you probably do; in that case there are tons of solutions for parsing, including NSXMLParser and libxml2.
3. Populate UITableViews, etc. with the retrieved information.
The WordPress API is not that big or complex, but it's a bigger subject than I can really get into in the context of an SO answer, so I'll just refer you to the aforelinked documentation and wish you happy reading.
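The fetch-and-parse flow is the same in any language; here is a minimal sketch in Python rather than Objective-C, assuming the blog exposes a standard WordPress RSS feed at /feed (the blog URL is made up):

    # Illustrates steps 1-2 above (fetch, then parse) in Python rather
    # than Objective-C; the feed URL is hypothetical.
    import requests
    import xml.etree.ElementTree as ET

    xml_data = requests.get("http://example.wordpress.com/feed").content
    root = ET.fromstring(xml_data)
    for item in root.iter("item"):
        title = item.findtext("title")
        body = item.findtext("description")
        print(title)  # on iOS these values would populate the table view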

How can I get the full change history for an article on Wikipedia?

I'd like a way to download the content of every page in the history of a popular article on Wikipedia. In other words I want to get the full contents of every edit for a single article. How would I go about doing this?
Is there a simple way to do this using the Wikipedia API? I looked and didn't find anything that popped out as a simple solution. I've also looked into the scripts on the PyWikipedia Bot page (http://botwiki.sno.cc/w/index.php?title=Template:Script&oldid=3813) and didn't find anything useful. Some simple way to do it in Python or Java would be best, but I'm open to any simple solution that will get me the data.
There are multiple options for this. You can use the Special:Export special page to fetch an XML stream of the page history. Or you can use the API, found under /w/api.php. Use action=query&titles=$TITLE&prop=revisions&rvprop=timestamp|user|content etc. to fetch the history.
Pywikipedia provides an interface to this, but I do not know by heart how to call it. An alternative library for Python, mwclient, also provides this, via site.pages[page_title].revisions()
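A sketch of the raw-API approach in Python with requests, paging through the revisions with the API's continuation mechanism (the article title is just an example):

    # Hedged sketch: fetch every revision of one article via the
    # MediaWiki API, following continuation tokens between requests.
    import requests

    API = "https://en.wikipedia.org/w/api.php"
    params = {
        "action": "query",
        "titles": "Python (programming language)",  # example article
        "prop": "revisions",
        "rvprop": "timestamp|user|content",
        "rvlimit": 50,      # the API caps this when content is requested
        "format": "json",
        "continue": "",     # opt in to the modern continuation format
    }
    while True:
        data = requests.get(API, params=params).json()
        page = next(iter(data["query"]["pages"].values()))
        for rev in page.get("revisions", []):
            print(rev["timestamp"], rev["user"])
        if "continue" not in data:
            break
        params.update(data["continue"])  # follow the continuation token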
Well, one solution is to parse the Wikipedia XML dump.
Just thought I'd put that out there.
If you're only getting one page, that's overkill. But if you don't need the very latest information, using the XML dump has the advantage of being a one-time download instead of repeated network hits.