Google Spreadsheet API: POST request fails

I have a Google spreadsheet published to the web, and I'm trying to read and write list-based feeds. I have not authenticated.
I can read data using a GET request just fine.
curl https://spreadsheets.google.com/feeds/list/1WngGKZHauwBqRxgC5E6DFMiJ-J8BDsadfwerF4RE0M/1/public/full
However, all my POST requests to write data come back with an error:
curl --data '' https://spreadsheets.google.com/feeds/list/1WngGKZHauwBqRxgC5E6DFMiJ-J8BDlN3mMaBfF4RE0M/1/public/full
curl --data '<entry xmlns="http://www.w3.org/2005/Atom" xmlns:gsx="http://schemas.google.com/spreadsheets/2006/extended"><gsx:id>1</gsx:id><gsx:status>1</gsx:status><gsx:user_email>1</gsx:user_email><gsx:urldomain>1</gsx:urldomain><gsx:highlighted_text>1</gsx:highlighted_text><gsx:complete_textcomments>1</gsx:complete_textcomments><gsx:created_date>1</gsx:created_date><gsx:updated_date>1</gsx:updated_date><gsx:created_by>1</gsx:created_by><gsx:updated_by>1</gsx:updated_by></entry>' https://spreadsheets.google.com/feeds/list/1WngGKZHauwBqRxgC5E6DFMiJ-J8BDlN3mMaBfF4RE0M/1/public/full
The error I'm getting is: "Sorry, the file you have requested does not exist. Please check the address and try again."
Do POST requests require authentication? I can't find anything in the docs that says so.
Any help would be appreciated. Thanks.

Two things:
I think you need to use the private feed (.../private/full), not the public one (.../public/full) like you're using now. More on the private-versus-public distinction is in the API docs.
You will need an Authorization header. (That process gets pretty complicated, but there's a lot of good info here. Remember to select the appropriate scopes.)
Both are mentioned in the example about adding a list row in the Sheets API docs. (I'd link directly, but I don't have enough rep to add more than two links. Just search for "Adding a list row".)
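Put together, a rough sketch in Python would look something like the following (untested; the access token is a placeholder for whatever you get back from Google's OAuth 2.0 flow with the spreadsheets scope, and the spreadsheet/worksheet IDs are your own values):
import requests

# Placeholders: obtain ACCESS_TOKEN via Google's OAuth 2.0 flow with the
# spreadsheets scope; SPREADSHEET_ID and WORKSHEET_ID are your own values.
ACCESS_TOKEN = "your-oauth-access-token"
SPREADSHEET_ID = "1WngGKZHauwBqRxgC5E6DFMiJ-J8BDlN3mMaBfF4RE0M"
WORKSHEET_ID = "1"

# Note the private feed, not the public one.
url = (
    "https://spreadsheets.google.com/feeds/list/"
    f"{SPREADSHEET_ID}/{WORKSHEET_ID}/private/full"
)

entry = """<entry xmlns="http://www.w3.org/2005/Atom"
       xmlns:gsx="http://schemas.google.com/spreadsheets/2006/extended">
  <gsx:id>1</gsx:id>
  <gsx:status>1</gsx:status>
</entry>"""

resp = requests.post(
    url,
    data=entry,
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Content-Type": "application/atom+xml",
    },
)
print(resp.status_code, resp.text)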

Related

API Request URL returns "Invalid Access"

I'm trying to scrape data from a website but I have no experience with scraping or APIs except for making a Discord Bot once. So I followed the steps described here to find the API:
http://www.gregreda.com/2015/02/15/web-scraping-finding-the-api
The Request URL in the Headers tab with the important information is this one:
https://api.amiami.com/api/v1.0/item?gcode=FIGURE-119023&lang=eng
When I try to open this page, like he does, it only returns:
{"RSuccess":false,"RValue":{"HttpStatusCode":400},"RMessage":"Invalid access."}
If you want to try getting the Request URL yourself, the original page I used was:
https://www.amiami.com/eng/detail/?gcode=FIGURE-119023
Removing the language argument doesn't seem to change anything either. So I guess there's something that detects that I'm not accessing it in a normal way. Any ideas on how to fix this?
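In case it helps, this is roughly the kind of replay I've been attempting in Python; the extra headers are just guesses at what the site might be checking, copied from a normal browser visit (check the Network tab for whatever the page actually sends):
import requests

url = "https://api.amiami.com/api/v1.0/item"
params = {"gcode": "FIGURE-119023", "lang": "eng"}

# Header names/values below are guesses copied from a normal browser request;
# the real set of headers the site expects may differ.
headers = {
    "User-Agent": "Mozilla/5.0",
    "Referer": "https://www.amiami.com/eng/detail/?gcode=FIGURE-119023",
}

resp = requests.get(url, params=params, headers=headers)
print(resp.status_code)
print(resp.text)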

How do I use/read an api documentation to send a simple request?

I know this is probably strictly case-specific, but I do feel like I encounter this problem a lot so I will make an effort to try and understand it better.
I am new to using APIs, but I have never succeeded in using one without copying someone's code. In this case, I can't even find any examples on forums, nor in the API documentation.
I'm trying to pull my balance from my investment bank, NordNet, to scroll (amongst other things) on an Arduino display I've made. Right now I use Python Selenium to automatically but "physically" log in to NordNet and grab my balance from the DOM. As I'm afraid I might get "punished" for such botted behavior, and because the script is fairly high maintenance (the HTML changes over time), I would obviously much rather get this information through NordNet's new API.
Link to NordNet's API doc
Every time I try to utilize an API doc it's always the same, it looks easy, but I can never get it to work.
This time I tried to just play a little with the API before exploring further.
I use Postman to send the simplest request:
https://www.nordnet.se/api/2
And I get a successful code 200 JSON response.
I then try to take it a step further to access my account data using this endpoint:
https://www.nordnet.se/api/2/accounts
For this one, I obviously need some sort of authentication.
The doc looks like this:
So I set my Postman client up like this and get the response shown:
I've put my NordNet login into the "Auth" tab as "basic auth", and I can see that Postman encodes this info in the "Headers" tab.
I'm getting an unauthorized response code and I have no idea why. Am I using Postman wrong (probably)? Is the API faulty (probably not)? There is a mention of a session_id that should be used as both username and password? Maybe it's something else completely...
I hope you can help
The documentation says to use the session_id as both the username and password for that API, so try logging in, grab the session id from the network tab (try with both sid and ssid), and pass it as the username and password for basic authorization.
sid is for HTTP and ssid is for HTTPS, I guess; try both.
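Something like this in Python, as a sketch only; SESSION_ID stands for whatever sid/ssid value you pull from the network tab after logging in:
import requests

# Placeholder: copy the sid/ssid value from the browser's network tab
# after logging in to NordNet.
SESSION_ID = "your-session-id-here"

resp = requests.get(
    "https://www.nordnet.se/api/2/accounts",
    # Per the docs, the session id is used as BOTH username and password.
    auth=(SESSION_ID, SESSION_ID),
)
print(resp.status_code)
print(resp.json() if resp.ok else resp.text)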

Including the minelead API in my code and how to understand its syntax

I am trying to add the Minelead API to my code to allow myself to search and find out user emails from certain domains.
I am unable to find a list of syntax or an explanation of how to implement the API into my code.
I can include the API through script tags, but aside from that I am completely lost. An answer or a good API tutorial would really help, thank you.
I'm Mohamed from the Minelead Developer Team
As for your request, I need to know what programming language you're using, and what you're trying to do exactly.
We have examples of simple cURL requests on the documentation page.
Here's also a simple example in Python using the requests module:
import requests as req

# Replace example.com and yourApiKey with your own domain and Minelead API key.
uri = "https://api.minelead.io/v1/search/?domain=example.com&key=yourApiKey"
response = req.get(uri)
print(response.json())

How to get all unread items using Feedly API

OK, either this is incredibly obvious and I'm just too dumb to see it, or it is not possible at all.
I created a Feedly developer access token and can call some endpoints just fine, like /profile, /categories, etc. Currently testing them with curl, but eventually will do this in Ruby.
What I can't find from the documentation (or even by Googling) is how to access all the unread entries from all my subscriptions just like I do in the Feedly app:
As far as I understand, Streams are for specific feeds. And the closest thing to the All stream, I think, is the Global Resource Id "global.all". But when I call it according to the documentation I get "API handler not found".
curl -H 'Authorization: OAuth [<MY_ACCESS_TOKEN>]' http://cloud.feedly.com/v3/user/<MY_USER_ID>/category/global.all
{"errorCode":404,"errorId":"ap5int-sv2.2018082612.1607238","errorMessage":"API handler not found"}
At some point I thought maybe Feedly just doesn't support this and went and looked at Inoreader's API documentation and it looks pretty much the same. There is no /All stream where I can pull my unread entries. So I feel like I'm missing something very obvious here.
What I am trying to do:
Basically I want to create an app for myself where I pull all my unread entries and flag the ones I'm interested in as "Read Later". This is part of a bigger workflow app that I'm working on.
You're almost there. To get a stream of all your unread articles, the syntax would be:
curl -H 'Authorization: OAuth <access token>' 'https://cloud.feedly.com/v3/streams/contents?streamId=user/<user id>/category/global.all&unreadOnly=true'
The <user id> can be obtained via the Profile API, which can be found here.
The streams API is documented here.
Hope this helps.
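And if it's useful for the eventual Ruby or script version, here is roughly the same call in Python with requests; as I read the streams docs, a continuation token is returned while more pages remain, so the loop below is an assumption about how to page through everything:
import requests

ACCESS_TOKEN = "<access token>"   # placeholder
USER_ID = "<user id>"             # from the Profile API

url = "https://cloud.feedly.com/v3/streams/contents"
params = {
    "streamId": f"user/{USER_ID}/category/global.all",
    "unreadOnly": "true",
}
headers = {"Authorization": f"OAuth {ACCESS_TOKEN}"}

unread = []
while True:
    resp = requests.get(url, params=params, headers=headers)
    resp.raise_for_status()
    data = resp.json()
    unread.extend(data.get("items", []))
    # Stop when no continuation token is returned, i.e. no more pages.
    if "continuation" not in data:
        break
    params["continuation"] = data["continuation"]

print(len(unread), "unread entries")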

How to write filter {key} for a 'get items as xlsx' request via Podio API

I'll get this out of the way first: I'm an amateur programmer (at best). I have some knowledge of how APIs work, but little to no experience manipulating the Podio API directly (i.e. I use Zapier/GlobiFlow a lot and don't write any PHP/Ruby). I'm sure other people can figure this out via the API documentation, but I can't, so I'm really hoping someone can help clarify and give some more detailed instruction.
My Overall objective:
I frequently export Podio files as xlsx from the Podio front-end. My team and I use these for regular data analysis tasks in Excel. I would like to make this process easier by automating the retrieval of an updated Podio export into Excel. My plan is to do this via Excel VBA. I understand from other searching that it is possible to send an HTTP request using VBA, so I want to make sure I understand what I need to send to the Podio API to get what I need. How to write the HTTP request in Excel VBA is intentionally outside the scope of this question (though I'd accept any help on this!).
What I've tried so far:
I know that 'get items as xlsx' is part of the podio API: https://developers.podio.com/doc/items/get-items-as-xlsx-63233
However, I cannot seem to get this to work in the sandbox environment on that page so that I can figure out a valid request URL. I get this message: 'Invalid filtering key' ... because I have no idea how to fill in that field. The information on that page is not clear on this, nor is it evident on the referenced 'views' page. There are no examples to follow!
I don't even want to do any filtering. I just want to get ALL items in the app. Or I can give it a pre-existing view_id, but that doesn't seem to work either without a {key}.
I realise this is probably dead simple. Please help a noob? :)
Unfortunately, the interactive API sandbox does not behave appropriately for this particular endpoint. For filtering, this endpoint expects query string parameters where the field-value pairs consist of integer field IDs and the allowed values for each field. Filtering by fields is totally optional. It looks like the sandbox page isn't built for this kind of operation with dynamic query string field names; the {key} field you see on that page is just a placeholder for whatever field IDs you would use for filtering.
If you want to experiment with this endpoint, I would encourage you to try another dedicated HTTP client first. I was able to get this simple example working with the command-line program wget:
wget --header="Authorization:OAuth2 $MY_SECRET_TOKEN" \
--content-disposition \
"https://api.podio.com/item/app/16476850/xlsx/"
In this case, wget downloaded an Excel file containing all the items in my app without any filtering applied. The additional --content-disposition argument tells wget to save the output as a file with a name using the information in the server's Content-Disposition response header.
With a filter applied:
wget --header="Authorization:OAuth2 $MY_SECRET_TOKEN" \
--content-disposition \
"https://api.podio.com/item/app/16476850/xlsx/?130654431=galaxy"
In this case, the downloaded file filtered the results to items where field ID 130654431 (which is a category field) contains the value galaxy.
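Since the end goal is pulling the export programmatically, the same request in Python would look roughly like this (the token, app ID, and field ID are the placeholders from the wget examples above, not values you can use as-is):
import requests

ACCESS_TOKEN = "MY_SECRET_TOKEN"   # your Podio OAuth2 token
APP_ID = "16476850"                # your app's ID

resp = requests.get(
    f"https://api.podio.com/item/app/{APP_ID}/xlsx/",
    headers={"Authorization": f"OAuth2 {ACCESS_TOKEN}"},
    # Optional filtering: field-ID=value query parameters, e.g.
    # params={"130654431": "galaxy"},
)
resp.raise_for_status()

# The response body is the Excel file itself; save it to disk.
with open("podio_export.xlsx", "wb") as f:
    f.write(resp.content)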