How to write filter {key} for a 'get items as xlsx' request via Podio API

I'll get this out of the way first: I'm an amateur programmer (at best). I have some knowledge of how APIs work, but little to no experience manipulating the Podio API directly (i.e. I use Zapier/GlobiFlow a lot and don't write any PHP/Ruby). I'm sure other people can figure this out via the API documentation, but I can't, so I'm really hoping someone can help clarify and give some more detailed instruction.
My overall objective:
I frequently export Podio app data as xlsx files from the Podio front-end. My team and I use these exports for regular data analysis tasks in Excel. I would like to make this process easier by automating the retrieval of an up-to-date Podio export into Excel. My plan is to do this via Excel VBA. I understand from other searching that it is possible to send an HTTP request using VBA, so I want to make sure I understand what I need to send to the Podio API to get what I need. How to write the HTTP request in Excel VBA is outside the intended scope of this question (though I'd accept any help on that too!)
What I've tried so far:
I know that 'get items as xlsx' is part of the Podio API: https://developers.podio.com/doc/items/get-items-as-xlsx-63233
However, I cannot seem to get this to work in the sandbox environment on that page, so I can't figure out a valid request URL. I get the message 'Invalid filtering key' because I have no idea how to fill in that field. The information on that page is not clear on this, nor is it evident on the referenced 'views' page. There are no examples to follow!
I don't even want to do any filtering; I just want to get ALL items in the app. Or I can give it a pre-existing view_id, but that doesn't seem to work either without a {key}.
I realise this is probably dead simple. Please help a noob? :)

Unfortunately, the interactive API sandbox does not behave well for this particular endpoint. For filtering, the endpoint expects query string parameters in which each name-value pair is an integer field ID and an allowed value for that field. Filtering by fields is entirely optional. The sandbox page simply isn't built for dynamic query string field names like this; the {key} field you see on that page is a placeholder for whatever field IDs you would use for filtering.
If you want to experiment with this endpoint, I would encourage you to try a dedicated HTTP client first. I was able to get this simple example working with the command-line program wget:
wget --header="Authorization:OAuth2 $MY_SECRET_TOKEN" \
--content-disposition \
"https://api.podio.com/item/app/16476850/xlsx/"
In this case, wget downloaded an Excel file containing all the items in my app, with no filtering applied. The --content-disposition argument tells wget to name the saved file using the server's Content-Disposition response header.
With a filter applied:
wget --header="Authorization:OAuth2 $MY_SECRET_TOKEN" \
--content-disposition \
"https://api.podio.com/item/app/16476850/xlsx/?130654431=galaxy"
In this case, the downloaded file was filtered to items where field ID 130654431 (a category field) contains the value galaxy.
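If you eventually want this inside a script rather than on the command line, the same request ports directly. Here is a minimal sketch in Python, assuming the requests library and a valid OAuth2 token (the app ID and field ID reuse the example values above):

import requests  # third-party HTTP client: pip install requests

TOKEN = "MY_SECRET_TOKEN"
APP_ID = 16476850

resp = requests.get(
    "https://api.podio.com/item/app/%d/xlsx/" % APP_ID,
    headers={"Authorization": "OAuth2 " + TOKEN},
    params={"130654431": "galaxy"},  # omit params entirely to export ALL items
)
resp.raise_for_status()
with open("items.xlsx", "wb") as f:
    f.write(resp.content)  # the response body is the xlsx file itself

The same URL and Authorization header are what you would reproduce in a VBA WinHttpRequest call.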

Related

Filter Pushes by Type

I'm working on building a bash script that will listen for pushes of type link, extract just the URL, and write it to a local file.
The idea is to be able to gather links I find and save them for automated processing later.
However, it doesn't appear that the API allows you to filter by type - only by active, modified_after, cursor, and limit - which seems a bit sparse to me, but oh well.
Since type isn't a filterable option, I'm assuming that I will need to add an additional step to parse the JSON returned and filter out any that don't have the type I'm looking for, then pass those to another step to extract the link and write it down to a file.
Before I go bashing (heh) my head against the wall writing this, is anyone aware of something that will already handle this, or at least keep me from re-inventing part of the wheel?
I've solved this issue by using jq to process and filter the JSON payload, allowing me to select only the types of pushes that I want.
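For anyone who prefers not to use jq, here is a minimal sketch of the same filtering step in Python instead; it assumes the payload follows the usual Pushbullet shape, where the response carries a pushes array and link pushes have type and url fields:

import json

# Load a saved pushes payload and keep only the URLs of link-type pushes.
with open("pushes.json") as f:
    payload = json.load(f)

urls = [p["url"] for p in payload.get("pushes", []) if p.get("type") == "link"]

with open("links.txt", "a") as out:
    for url in urls:
        out.write(url + "\n")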

Bulk extract with Authenticated Connector (import.io)

I am new to import.io and this forum.
I am trying to extract information from a target database where I have to run a query with an input. With help from support, I successfully created the authenticated connector. With inputs entered manually in the UI, it fetches the data properly.
The problem is that I have more than 10,000 inputs to run, so it has to be done as a bulk extraction. import.io support told me that they do not have this feature in their UI and suggested using their API, documented here: http://api.docs.import.io/#!/Query_Methods/queryPost.
Could anyone walk me through making use of this? I just need a working script that takes multiple strings as inputs, runs the connector I built, and posts the results. I am not very familiar with this kind of technology, but I am very willing to learn.
Thanks all in advance!
I would be happy to walk you through a bit of an intro. It will be a bit basic, though, since I don't know your specific use case.
Yes, support was correct. You will need to use the POST query in order to pass your authentication credentials as inputs.
I will break this query down into steps. Essentially, our API docs are just a simple UI: you pass in your credentials, and then you can generate the API query.
ID - This is the GUID of your connector. This information can be found at the end of the URL, like this: https://import.io/data/mine/?tag=CONNECTOR&id=33f4e828-25ce-40c4-948c-9b734c70d1ab
Query - This is where you will put the inputs for your connector in order to execute it. Be sure to keep this in structured JSON, or it will return errors when you query.
Once you have successfully entered that information you will query the API.
This will give you the request URL that you need to query the API.
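As a rough illustration of looping many inputs through a connector, here is a sketch in Python. The endpoint path, payload shape, and the input name search_term are assumptions for illustration only; check the API docs page linked above for the exact request URL and input names your connector expects:

import json
import requests  # pip install requests

CONNECTOR_ID = "33f4e828-25ce-40c4-948c-9b734c70d1ab"  # the GUID from your connector URL
API_KEY = "YOUR_API_KEY"

with open("inputs.txt") as inputs, open("results.jsonl", "w") as out:
    for line in inputs:
        term = line.strip()
        if not term:
            continue
        # Payload keys here are placeholders -- match them to your connector's inputs.
        resp = requests.post(
            "https://api.import.io/store/connector/%s/_query" % CONNECTOR_ID,
            params={"_apikey": API_KEY},
            json={"input": {"search_term": term}},
        )
        out.write(json.dumps(resp.json()) + "\n")  # one JSON result per line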
If you have any more questions, just let me know.
Thanks,
Meg

Can CSV data be sent to OpenERP/Odoo through the API?

I can import Comma Separated Values (CSV) data through the admin pages into most models. This process handles the external IDs so that the data can be added to or amended as appropriate in later CSV imports. However, this is a manual action.
Through the API, the same records can be created and amended, and external IDs can be set. This, however, requires a lot of the logic that would otherwise be handled by the CSV importer to be coded by hand in the external application that uses the API to push in data. Pushing data through the API can be automated.
Is there a way the API can be used (so no changes need to be made to code within Odoo) to push CSV data (so the logic for inserts/updates/relationships/external IDs etc. is handled by Odoo)? This would be a kind of hybrid approach, and I am trying to avoid the need to create import modules within Odoo.
Edit: the "external ID" is often called the "XML ID". I think it is terminology that has stuck from earlier versions of OpenERP rather than having anything specific to do with XML.
Edit
This page describes a load() function that pushes CSV-like data through a pipeline to load it into the system:
http://openerp-server.readthedocs.org/en/latest/06_misc_import.html
I can't see how to translate the summary on that page into an operation through the API, if indeed that is possible. I'm guessing I will need the interface (entry point), model, method (load(), probably), and some additional parameters, but the details are beyond me.
The answer is kind of "yes".
The load() method can be used against any model to load data. This method takes data in the same structure as a CSV file would provide.
The first parameter is an array of field names, like the column headings on a CSV import.
The second parameter is an array of records. Each record is an array of values matching each field.
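As a minimal sketch, assuming a recent Odoo with the /xmlrpc/2/ endpoints (older OpenERP versions expose slightly different paths), calling load() over XML-RPC from Python looks like this; the server details and the res.partner fields are placeholders:

import xmlrpc.client  # Python 3 standard library

URL, DB, USER, PW = "https://odoo.example.com", "mydb", "admin", "secret"

common = xmlrpc.client.ServerProxy(URL + "/xmlrpc/2/common")
uid = common.authenticate(DB, USER, PW, {})
models = xmlrpc.client.ServerProxy(URL + "/xmlrpc/2/object")

# First argument: field names, like CSV column headings; the "id" column
# carries the external ID, so a re-import updates rather than duplicates.
# Second argument: one array of string values per record.
result = models.execute_kw(DB, uid, PW, "res.partner", "load", [
    ["id", "name", "email"],
    [["partner_ext_1", "Alice", "alice@example.com"],
     ["partner_ext_2", "Bob", "bob@example.com"]],
])
print(result)  # {'ids': [...], 'messages': [list of error/warning dicts]}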
The API will return a list of errors for the cases OpenERP handles explicitly. Many errors, however, just result in database exceptions on OpenERP and so need to be picked up as an API failure. This is largely because the OpenERP API is not designed as a generic API but as part of the GUI, so the data sent to the API is tightly bound to the current state of the application as seen through that GUI. In other words, invalid data will seldom find its way to the API when it comes via the OpenERP GUI.
I have wrapped the loader functionality, catching errors and exceptions, in my PHP OpenERP API library here:
https://github.com/academe/openerpapi/blob/master/src/App/Loader.php
Hopefully that will be useful to others too.
I think the answer is "no".
However, this technique has been explained to me:
Create a module with little in it but CSV files for importing.
Install the module.
When a new CSV file needs to be imported, transfer it into the module (FTP or similar).
Once transferred, run the update() method for the module. This can be done through the API.
The update method will scan and load all the CSV files set up within the module. Care needs to be taken to make sure only one upload/update transaction runs at any time.
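For step 4, here is a hedged sketch of triggering the module upgrade through XML-RPC, reusing the connection objects from the earlier Python example. I am assuming button_immediate_upgrade, the method behind the GUI's Upgrade button in recent versions, and the module name is a placeholder:

# Find the module record and run the same upgrade the GUI button triggers.
ids = models.execute_kw(DB, uid, PW, "ir.module.module", "search",
                        [[["name", "=", "my_csv_data_module"]]])
models.execute_kw(DB, uid, PW, "ir.module.module",
                  "button_immediate_upgrade", [ids])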
I'll post additional details here when I have got this working, or will happily accept an alternate answer if there is a better way to handle this.

Web Scraping through Excel VBA

I need to fetch company addresses (cím) from the site http://www.ceginfo.hu/
Example Company Name: AB-KONTÍR Szolgáltató Bt.
I know how to do it using WinHttp.WinHttpRequest object and FireBug.
But I am not able to decide to which URL I should send this request.
When I analyse the request/responses using FireBug, I get the following URL:
http://www.ceginfo.hu/company/search/4221638
4221638 is the company ID here, I think. But in my case I will have the company name only, and that's my problem.
So can anybody please tell me how, using Firebug or any other tool, I can find a URL that takes the company name as a parameter, which I can then use in my VBA code.
Thanks in advance!
So can anybody please tell me how, using Firebug or any other tool, I can find a URL that takes the company name as a parameter, which I can then use in my VBA code.
No. Unless there is a publicly available database (I would suggest calling them, if you can) or an API that allows for programmatic access, the only way to arrive at this link slug is by executing the search.
Further, the post slug is not as relevant as you think. If you search for simply "Kontir", this is the resulting page -- with many results:
http://www.ceginfo.hu/company/search/4222407
You're going to have to automate the "search": passing the criteria to the web page, executing the button click and/or HTTP POST, and then parsing the result(s). For the example company name there is only one result, but as my example above shows, some queries may return multiple matches, and you will need a method of dealing with these, or of ignoring them.
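To make that concrete, here is a rough sketch in Python of what automating the search might look like. The form action URL and the field name companyname are guesses for illustration only; inspect the real search form with Firebug to find the actual values before porting the logic to VBA's WinHttp.WinHttpRequest:

import requests  # pip install requests

# Hypothetical form action and field name -- confirm both with Firebug.
resp = requests.post(
    "http://www.ceginfo.hu/company/search",
    data={"companyname": "AB-KONTÍR Szolgáltató Bt."},
)
print(resp.url)         # after redirects, this carries the result-page slug
print(resp.text[:500])  # parse this HTML for one result, or a list of matches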

Pass a string to various websites

I have a product code which I need to enter into 6 different websites in order to pull different information about the product from each of them. Is there a way to save this product code into some sort of variable, pass it into each website's input box, and have it return all the information from each one automatically? I really have no idea where to go/start with this, so if anyone can brainstorm a few ideas to get me moving, that would be great.
To get what you are planning:
You need a script which visits the specified website; at the website, you can then get the element by tag.
For instance, in JavaScript:
// grab the first <input> element on the page
var textBox = document.getElementsByTagName("input")[0];
This gives you a reference to the text field in which to enter the text. That can be done as follows:
textBox.value = "any string";
Once you have done this, you will have to retrieve the results from the page, based on the website layout.
So if you can describe your task in more detail, you will get a better response.
Assuming you're talking about using an ordinary GUI browser, the best you can do is copy it to your system clipboard, and paste it into each page on the browser.
If you're talking about a programmatic web-access like wget or curl, it depends on what language you are writing your script in.
You have to create the web request for each website and find a way to parse the response, which will be HTML.
Have a look at HttpWebRequest; you can find lots of examples on the internet that show how to create an HTTP POST to a website.
http://www.terminally-incoherent.com/blog/2008/05/05/send-a-https-post-request-with-c/
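Here is the same idea as a minimal sketch in Python rather than C#'s HttpWebRequest: POST one product code to several sites and collect the raw responses. The URLs and form field names are placeholders; each real site will need its own parameters and its own HTML parsing:

import requests  # pip install requests

PRODUCT_CODE = "ABC-123"

# (url, form-field-name) pairs -- both are placeholders for the real sites.
SITES = [
    ("https://site-one.example.com/search", "q"),
    ("https://site-two.example.com/lookup", "product_code"),
]

for url, field in SITES:
    resp = requests.post(url, data={field: PRODUCT_CODE})
    print(url, resp.status_code, len(resp.text))  # parse resp.text per site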