Is it possible to use the Canonical Landscape API to get script output?

The documentation I can find for the Canonical Landscape API lets you do lots of things with scripts, but I can't find anything suggesting that you can get output. However, if you use the Canonical web interface, script output is available, so it's presumably exposed somehow...?

I just hit this issue as well, and since this question is currently the first hit on Google, I wanted to share the answer for everyone. If you run ExecuteScript against a Landscape client and get back an activity ID of, say, 123, then once the job has finished you can pass that ID to the GetActivities API with a "query" argument of "parent-id:123". If there is a result, you will find the script output you are looking for in the result_text field of the response. Good luck! It worked very well over here.
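For illustration, here is a minimal Python sketch of that flow using the landscape-api client. It assumes the client exposes the ExecuteScript and GetActivities actions as execute_script and get_activities, and the endpoint, credentials, query and script ID below are all placeholders.

# Sketch: run a script via ExecuteScript, then read its output via GetActivities.
# Endpoint, keys, query and script_id are placeholders; the method names assume
# the landscape-api Python client maps API actions to snake_case methods.
from landscape_api.base import API

api = API("https://landscape.example.com/api/", "ACCESS_KEY", "SECRET_KEY")

# Kick off the script on a client; the returned activity is the parent job.
activity = api.execute_script(query="hostname:web01", script_id=42)
parent_id = activity["id"]

# Once the job has finished, fetch its child activities.
children = api.get_activities(query="parent-id:%s" % parent_id)
for child in children:
    # The script's output is reported in result_text.
    print(child.get("result_text"))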

Related

How do I programmatically download Bank of America transactions?

I use Quicken, which can automatically download Bank of America transactions. However, it truncates all the payees, so I lose data. I'd like to work around this, and I'm thinking of downloading the transaction data and generating my own QFX file with the full payee info.
Is there a way that I can download transactions programmatically, or download something like a .qif file (available on their website) programmatically? For the latter, I could convert the QIF to a QFX myself.
If anyone has other ideas to download all of the transaction information without losing the payee info, I would welcome those ideas as well.
Do they provide an API for this? Most probably not for third parties without a contract. Since it's a bank, there will be browser checks on top of the standard sign-in, so it will be hard to do with curl. You could instead write a browser plugin that reads the data from the page and auto-scrolls to load transactions that don't fit on one page. It's a hacky solution, but it gets you the data you said is already available on the page; you'd have to revisit it when the site updates, though changes to the basic page structure are rare.
A quick search for "Bank of America API" yielded this BofA API. It has many options for the types of payment information you can query, as well as many individual account types you can access.
It looks pretty comprehensive. If you don't see what you are looking for there, I put another option below, just in case.
I don't use BofA, so I can't speak to what they make available natively, but you could always use a bot to scrape the data if it is presented anywhere in the user interface.
I would agree with Meena that curl is unlikely to work. But Selenium drives a real browser and can programmatically do just about anything you would do with a website, and it has bindings for many languages, so you can pick your favorite and go to town; a rough sketch of that approach follows below.
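Here is a minimal Selenium sketch in Python of that scraping idea. The URL, the manual-login step, and the CSS selectors are all hypothetical, since the real page structure would have to be inspected in the browser's developer tools.

# Sketch: use Selenium to read transaction rows out of the page after you log in.
# The URL and selectors are hypothetical placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://www.bankofamerica.com/")

# Let a human complete the sign-in and navigate to the transaction list.
input("Log in, open your transactions page, then press Enter here...")

# Hypothetical selector for the transaction table rows.
rows = driver.find_elements(By.CSS_SELECTOR, "table.transactions tr")
for row in rows:
    cells = [cell.text for cell in row.find_elements(By.TAG_NAME, "td")]
    if cells:
        print(cells)  # e.g. date, full payee, amount

driver.quit()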
It seems the API returns JSON, so you may need to find a tool to convert that to a QIF or QFX if that part is important. After digging further, I can't test this without having a CashPro account, but it seems what you need to do is...
Step 1:
Get an access token from here. You'll need to send this in the header of every request.
Step 2:
Send an HTTP request (with the access token in the header) whose body is in the following format:
{
  "accounts": [
    {
      "accountNumber": "xxxxxxx",
      "bankId": "xxxxxxx"
    }
  ],
  "fromDate": "yyyy-mm-dd",
  "toDate": "yyyy-mm-dd"
}
to https://developer.bankofamerica.com/cashpro/reporting/v1/transaction-inquiries/previous-day
Step 3:
You should get a JSON response.
As mentioned, I can't test this, but here's the documentation for the specific API endpoint you need.
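For what it's worth, here is an untested Python sketch of steps 2 and 3. It assumes the access token from step 1 is sent as a bearer token, which is a guess; check the documentation linked above for the exact header format.

# Untested sketch of the previous-day transaction inquiry. The token, account
# number, bank ID and dates are placeholders; the Bearer scheme is an assumption.
import requests

ACCESS_TOKEN = "your-access-token"  # obtained in step 1

url = ("https://developer.bankofamerica.com/cashpro/reporting/v1/"
       "transaction-inquiries/previous-day")

payload = {
    "accounts": [
        {"accountNumber": "xxxxxxx", "bankId": "xxxxxxx"},
    ],
    "fromDate": "yyyy-mm-dd",
    "toDate": "yyyy-mm-dd",
}

headers = {
    "Authorization": "Bearer " + ACCESS_TOKEN,
    "Content-Type": "application/json",
}

response = requests.post(url, json=payload, headers=headers)
response.raise_for_status()
print(response.json())  # step 3: the response body is JSON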

Getting a table from a Wikipedia article from its API

I need to use the table from this wiki page https://en.wikipedia.org/wiki/List_of_most_visited_museums to make a database in Python (though the latter part is irrelevant at the moment). I have to use the API (I can't scrape) to access it. Right now I'm trying the API's documentation https://www.mediawiki.org/wiki/API:Parsing_wikitext#Example_1:_Parse_content_of_a_page Example 2 from this page is exactly what I want to do, but it returns an error, and even running the original code unchanged in my notebook returns an error. Can anyone tell me how to either alter that code so it runs, or direct me to another way to do the same thing? Thanks.
This fetches the content you're after: https://en.wikipedia.org/w/api.php?action=parse&page=List%20of%20most%20visited%20museums&prop=wikitext&section=2&format=json
If the example isn't working in your notebook, the problem probably lies with the rest of your code and not with the API call.
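In case it helps, here is a short Python sketch of that same call using the requests library; extracting the table from the returned wikitext is left as a further step.

# Sketch: fetch the wikitext of section 2 of the article via the MediaWiki API.
import requests

params = {
    "action": "parse",
    "page": "List of most visited museums",
    "prop": "wikitext",
    "section": 2,       # the section holding the table, per the URL above
    "format": "json",
}

response = requests.get("https://en.wikipedia.org/w/api.php", params=params)
response.raise_for_status()
data = response.json()

# The section's wikitext is nested under parse -> wikitext -> "*".
wikitext = data["parse"]["wikitext"]["*"]
print(wikitext[:500])  # print the start of the table markup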

Endpoint with Google Flex env

There is a bit of confusion here; I'm wondering if somebody can help me with this.
Here is an example that is a year old and uses goapp with Polymer and endpoints:
https://github.com/googlesamples/cloud-polymer-go
Here is a recent example using gcloud
https://github.com/GoogleCloudPlatform/golang-samples/tree/master/appengine_flexible/endpoints
The two are different because Google changed its approach.
Following the second example, I am able to create endpoint functions that use JSON for input and output. However, there are two problems.
1st: It throws an error if I put functions in a different file under the same package. It works when I run go run .go, but then I don't understand how app.yaml comes into the picture. I think this URL /_ah/spi/. should work.
2nd: I am using the Postman app to check the request and response from the endpoint. Is there a better way? I thought Google provided a platform to test endpoints.
Is there any example that implements something similar to the first one with the new libraries?
Looking forward to your help. Thanks.

API lookup does not find thing - am I using the correct URLs?

I am currently working on a Flattr plugin for a popular open-source RSS reader (tiny tiny RSS).
I am using the lookup API for the first time and am unsure why I am getting mixed results.
So I'm unsure whether I'm using the API correctly and want to confirm with you experts whether I've got something basic wrong.
First, let's see if I can come up with an API call that looks up a thing successfully. I look at the Flattr page of thing 1066706 (I can't post the whole URL here as SO only allows me two URLs for this whole post). On that page, I find the official URL which Flattr stores for that thing and look that up with the API: see here
This returns {"type":"thing","resource":"https:\/\/api.flattr.com\/rest\/v2\/things\/1066706", ... so that's good.
But it seems this method is not a sure way to test if things exist. Here is an example that doesn't work: I open the Flattr page of thing e7579b349cb7b319b28d883cd4064e1e.
The URL I find on that page is indeed the URL of that article, and I don't see any other URL it might have. I look that up in the same way as above: check this
Alas, I get {"message":"not_found","description":"No thing was found"}
(I also tried both of these with encoded URLs, but got the same result. I figured this is easier to read for you.)
So, why would that second thing not be found? Thanks for any enlightenment.
The id "e7579b349cb7b319b28d883cd4064e1e" is not a real thing id but a hash that identifies a temporary thing for a not yet submitted thing - it's part of Flattr's autosubmit functionality: http://developers.flattr.net/auto-submit/
So the system is very correct in telling you that a thing for that URL doesn't exist - someone would need to flattr that thing for it to become submitted for real and created in the system with a real id to it.
(Just for reference - for some URL:s, like Twitter URL:s, Flattr can actually answer that the URL is flattrable even though it can't find it in the system: {"message": "flattrable", "description": "Thing is flattrable "} That way you can now that it is possible to flattr that thing without you having to use any kind of flattr-button/url supplied by the author to be able to flattr the URL)
Also - if you don't know it yet then for a RSS reader you should primarily be looking for rel-payment links to find out whether an entry is flattrable or not, see http://developers.flattr.net/feed/ and http://relpayment.com/
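To make the rel-payment part concrete, here is a small Python sketch using the feedparser library; the feed URL is a placeholder, and this only detects the payment link for each entry rather than doing anything Flattr-specific with it.

# Sketch: find rel="payment" links in a feed's entries (feed URL is a placeholder).
import feedparser

feed = feedparser.parse("https://example.com/feed.xml")

for entry in feed.entries:
    # feedparser exposes each entry's link elements as a list of dicts.
    payment_links = [link["href"] for link in entry.get("links", [])
                     if link.get("rel") == "payment"]
    if payment_links:
        # This entry advertises a payment (e.g. Flattr) URL.
        print(entry.get("title", "(no title)"), "->", payment_links[0])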

Flickr API - Photos Search, excluding tags: Am I doing this wrong?

So, I'm trying to pull all photos of a specific user's account via the flickr.photos.search method, but I want to exclude photos with a particular tag. The related documentation page states that "You can exclude results that match a term by prepending it with a - character." Well, I tried implementing that option, but what I get in return is only one photo (even though there are several photos with the tag in question), and that result stays the same whether or not that specific photo has the tag in question AND whether or not I use the "-" option to exclude that tag rather than include it. I also tried the text parameter with the exact same result. Here's my REST call:
http://api.flickr.com/services/rest/?&method=flickr.photos.search&api_key='.$api_key.'&user_id='.$user_id.'&tag_mode=any&tags=-blog&extras=url_o,url_t&format=json
And here is the page where I'm trying to get this all working:
http://corazonbrew.com/temp/
Anyone know what is going on here?
It seems the answer in the Flickr discussion board I linked to earlier is proving true: in order to use the exclusion option, there has to be at least one other, non-excluded tag. Well, that is just not good enough for me.
A couple of friends tell me this is a longstanding bug that will not be fixed anytime soon, if ever. But those friends also kindly reminded me of my n00bishness: this whole time I thought I needed to alter the feed to get the desired output, when I could just use some good ol' PHP if statements to weed out what I don't want.
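For anyone landing here, the same client-side filtering idea looks roughly like this (sketched in Python rather than the PHP the poster used). It assumes you request the tags extra and pass nojsoncallback=1 so the response is plain JSON; the API key and user ID are placeholders.

# Sketch: fetch a user's photos and drop the ones tagged "blog" on the client side.
# API key and user ID are placeholders; "tags" is requested via extras so we can filter.
import requests

params = {
    "method": "flickr.photos.search",
    "api_key": "YOUR_API_KEY",
    "user_id": "YOUR_USER_ID",
    "extras": "tags,url_o,url_t",
    "format": "json",
    "nojsoncallback": 1,   # plain JSON instead of the JSONP wrapper
    "per_page": 500,
}

response = requests.get("https://api.flickr.com/services/rest/", params=params)
response.raise_for_status()
photos = response.json()["photos"]["photo"]

# Keep only photos that do not carry the "blog" tag (tags come back space-separated).
keepers = [p for p in photos if "blog" not in p.get("tags", "").split()]
for photo in keepers:
    print(photo["title"], photo.get("url_t"))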