Retrieve the list of friends that did a custom action on a custom object in Open Graph 2 (API)

I would like to do something like Facepile using the Graph API with Open Graph 2 actions: given a custom action and a custom object, return the friends (using my Facebook application) that performed this action on this object.
The problem is that with FQL I cannot query custom objects and actions, and with the Graph API I cannot find a way to intersect the list of my friends with the object I'm interested in.
The best I could do was the following, using the batch mode of the Graph API:
batch=[
// First we get the list of friends that are using my facebook application
{ "method": "GET", "relative_url": "fql?q=SELECT+uid+FROM+user+WHERE+uid+IN+(SELECT+uid1+FROM+friend+WHERE+uid2=me())+AND+is_app_user=1+LIMIT+0,49", "name": "friends"},
// Then query each friend to get the list of objects that went through my namespace:testaction
{ "method": "GET", "relative_url": "{result=friends:$.data.0.uid}/namespace:testaction" },
{ "method": "GET", "relative_url": "{result=friends:$.data.1.uid}/namespace:testaction" },
...
{ "method": "GET", "relative_url": "{result=friends:$.data.49.uid}/namespace:testaction" }
]
It's quite inefficient and does not fully resolve my issue since:
I still have to filter the results to keep only the ones that match the object I want
If there is a large number of objects in namespace:testaction, I have to go through paging, doing even more queries (I am trying to minimize the number of queries)
Do you see a better way to do this?

This probably isn't exactly what you're looking for, but given that Facebook (AFAIK) doesn't provide, and will probably never provide, the ability to do this, I think you should simply store the information yourself and then query the data from your own database. It would be like what you're doing in your question, but you can optimize it since it's your database.
I'm sure you thought about this already, but someone had to say it.

It's now possible to do this with one Graph API request:
GET https://graph.facebook.com/me/friends?limit=50&fields=name,namespace:testaction.limit(100)
See field expansion and updates to the Graph API.
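As a rough sketch of the same call through the JavaScript SDK (assuming the SDK is loaded and initialized and the user has granted the relevant permissions; namespace:testaction is the custom action from the question):
// Hypothetical: issue the field-expansion query via the Facebook JS SDK.
FB.api('/me/friends', {
  limit: 50,
  fields: 'name,namespace:testaction.limit(100)'
}, function (response) {
  // Friends who performed the action carry a "namespace:testaction" list;
  // filter it client-side for the specific object you care about.
  console.log(response.data);
});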

If the answer derickito gave is not enough, you should explore getting your app on the Facebook white-list (aka becoming a partner) to get at some of the private Graph API where this functionality might exist but is not available to "normal" applications that are stuck using the public Graph API.

Pwa wrong to consider api cache instead of db?

I'm building a book-reader-like app.
The main page calls api/booksList and receives a JSON array containing each book's info, like:
[
  { id: server_db_id, title: "title test", sum: 10, date: ... }
]
and it's cached after the request, so I'm not saving the book list into IndexedDB, localStorage or any other storage. If I need one specific book, I just call the book list API again and filter it. Is that bad design? (The book list will be over 200 items.)
When the user opens a book, the app calls /api/book/book_id and that is cached too; the opened-book response is a JSON list of the lines of the book, e.g.:
[
{
id: ...
content: "This is line...lore ipsum..."
....
}
]
I put the API response inside a Vue data variable and the component renders correctly.
I'm not using any kind of handler to keep this offline myself. To detect whether the user has already opened this book, I just call the API and check if errors happened or if the response body has content.
Is that a wrong, bad or stupid decision? Will this hit an API quota limit or some other kind of limitation? Will the "gods" of PWA raise a finger at me and say: WAAAT? (I'm not using IndexedDB at first because it needs some model handling and I want to make things easier if possible.)
I was just researching this myself and concluded that, for the moment, I am going to go with this method: use the cache for assets (JS, CSS, HTML, etc.) based on their matching routes.
Then, when it comes to data (e.g. JSON requests), it's best to store it in IndexedDB (or an equivalent), which really does not require a model or schema as such.
See Jake Archibald's promise-based IndexedDB library idb (https://github.com/jakearchibald/idb); it's really simple to get your head around.
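As a rough sketch of that approach with idb (assuming idb v3+; the database and store names here are made up):
import { openDB } from 'idb';

// Open (or create) a database with a "books" object store keyed by id.
const dbPromise = openDB('book-reader', 1, {
  upgrade(db) {
    db.createObjectStore('books', { keyPath: 'id' });
  }
});

// Cache the book list from the API response.
async function saveBooks(books) {
  const db = await dbPromise;
  const tx = db.transaction('books', 'readwrite');
  await Promise.all(books.map((book) => tx.store.put(book)));
  await tx.done;
}

// Read one book back later without re-fetching the whole list.
async function getBook(id) {
  const db = await dbPromise;
  return db.get('books', id);
}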
Though both Jake and Addy say it's not a de facto rule, so ultimately you can decide what is best for you.
Read these for better clarification:
https://developers.google.com/web/ilt/pwa/live-data-in-the-service-worker
https://medium.com/dev-channel/offline-storage-for-progressive-web-apps-70d52695513c
It helped me to make a better decision on how to go about moving forward.
Some further recommendations:
Check out PWA Training: https://developers.google.com/web/ilt/pwa
Workbox: https://developers.google.com/web/tools/workbox (This has sped up my development massively!)
Codelabs: https://codelabs.developers.google.com/ (Search PWA)
The guides on here are really good at taking you through everything you need.
Good Luck with your PWA
Random thought (edit)
One thing that makes me question this, though, based on some of the examples and guides I have seen, is that data storage is handled in a more ad-hoc manner. For example, if the PWA calls out to an API, there are two methods I have come across: you can manage cached data either in the application or in the service worker. E.g. if your API call to get JSON fails in the app, it can fall back to fetching the data from IndexedDB, which was hopefully pre-cached the first time your app called the API.
Or you can use self.addEventListener('fetch', (event) => { /* ad-hoc stuff here */ }) in the service worker; this is where you can match either an asset or a data request and hijack the response with either a cache or an IndexedDB response, which removes the need to handle offline data in your app.
The first method makes me feel uneasy, so I'm going to go with the addEventListener approach in the service worker, because that's what it is there for, plus my app then does not have to worry about it.
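A minimal sketch of that fetch-handler approach (the cache name and the /api/ prefix are assumptions; adapt the matching to your own routes):
// In the service worker: serve API requests network-first, falling back to
// a cached copy when the network fails.
self.addEventListener('fetch', (event) => {
  const url = new URL(event.request.url);
  if (!url.pathname.startsWith('/api/')) return; // let other requests pass through

  event.respondWith(
    fetch(event.request)
      .then((response) => {
        const copy = response.clone();
        caches.open('api-cache').then((cache) => cache.put(event.request, copy));
        return response;
      })
      .catch(() => caches.match(event.request)) // offline: use the cached response
  );
});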

Is it possible to get raw data from a Metabase MBQL / SQL query via the REST API?

Is there a Metabase REST API that takes a MBQL/SQL query and returns the raw data?
I can perform a MBQL query via the API in a 2-step process by doing the intermediate step of creating a Question via the Metabase web app UI and then querying the Question, but I haven't figured out how to combine MBQL with the REST API in a single step.
Some items I'd like to accomplish by having the MBQL in the API request instead of a UI-generated Question:
better version management as the MBQL query can be checked into source control with the code
better isolation, as the API call won't depend on the Question, which can change
Here's some info on how to perform the 2-step process.
2-Step Process
The two step process is:
Use web app to create a MBQL/SQL Metabase Question
Use REST API to query existing Question created in web app using the Card API
Step 1) Creating Question via Web UI
Log into the web app and click the "New Question" button in the top menu.
Once your question has been created you will be directed to a URL like the following where :question-id is an integer.
Web UI endpoint: GET /question/:question-id
Note this value and use it in the API in the next step.
Note: an alternative for creating the card is to use the POST /api/card API endpoint, per YakovL. This can be useful in some scenarios where UI questions/cards are desirable, but I'm also trying to avoid creating cards/questions in the first place, since I'm not planning on using the Metabase UI to consume them. Reasons to avoid cards for me include needing to perform extra work to verify that the card query definitions haven't changed while still keeping the SQL in the code that creates the cards, and generating a lot of unneeded question cards in the UI.
Step 2) REST API for Question Data
The API uses the term "card" for the Web UI "question" object, so make an API call to the following Card API:
API endpoint: POST /api/card/:card-id/query/:export-format
In this URL:
:card-id is the :question-id from the Web UI URL
:export-format can be json or another format
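As a sketch of what that call can look like (the host and card id 42 are placeholders; authentication via the X-Metabase-Session header, discussed further below, is assumed):
// Hypothetical example: fetch the raw rows of card 42 as JSON.
const sessionId = '<your X-Metabase-Session token>';
fetch('https://metabase.example.com/api/card/42/query/json', {
  method: 'POST',
  headers: { 'X-Metabase-Session': sessionId }
})
  .then((res) => res.json())
  .then((rows) => console.log(rows));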
More information on the API is available in the API Documentation:
https://github.com/metabase/metabase/blob/master/docs/api-documentation.md
Question
Is there a way to do this directly by sending the MBQL/SQL query in the API request in a single step without a pre-existing Question/Card?
Both raw SQL queries and MBQL queries are available via the POST /api/dataset/ API. The documentation for the endpoint mentions the query request definition but does not define it.
I ended up doing some more research and asking on the Metabase Discourse forum. The following examples were posted by sbelak.
Raw SQL Query
I was able to successfully run a native SQL query, using the go-metabase SDK to make the following request:
POST /api/dataset
Content-Type: application/json
X-Metabase-Session: <sessionId>
{
  "database": 1,
  "native": {
    "query": "SELECT COUNT(*) FROM orders"
  },
  "type": "native"
}
Notes:
The POST /api/dataset does not set the response Content-Type header.
There is a POST /api/dataset/json endpoint, but that does not seem to accept the native property.
To set X-Metabase-Session see github.com/goauth/metabase.
MBQL
POST /api/dataset
Content-Type: application/json
X-Metabase-Session: <sessionId>
{
  "database": 1,
  "type": "query",
  "query": {
    "source-table": 2,
    "breakout": [
      ["binning-strategy", ["field-id", 14], "default"]
    ],
    "aggregation": [["avg", ["field-id", 17]]]
  }
}
Notes:
To set X-Metabase-Session see github.com/goauth/metabase.

Yammer statistics through APIs

We have a CMS solution where Yammer is integrated using the "Embedded Feed". Next to most of the pages in the solution there is a Yammer part for comments and liking.
Now we would like to increase the functionality with the following:
A list of the most liked pages
A list of the most commented pages
How many people liked the current page
How many people commented on the current page
Does anyone have experience with this? I.e., collecting already-summarized data, or retrieving the data and summarizing it yourself in the solution? And, in particular, dealing with rate limits and working with some form of caching?
The Yammer APIs are very limited in functionality, and will not support what you are trying to do.
Even without the throttling, getting the most-liked and most-commented pages is going to be flat-out impossible. There's no way to query for most-liked or most-commented Open Graph objects. (Unless I am mistaken.)
To get the total likes and comments on a given page, ignoring the throttling issues, here's what you could do:
Pages are represented as Open Graph objects in Yammer. Getting the likes and comments is a 2-step process. First, you need to grab the Open Graph ID of a given URL, then fetch the messages related to that OG object. But, again, you'll only get the first twenty.
To grab the OG object:
yam.platform.request({
url: "open_graph_objects?url=" + url.toLowerCase(),
method: "GET",
data: {},
success: function (OGObj) {
//your id is in the OGObj.id
}
});
then, to get the messages:
yam.platform.request({
url: "messages/open_graph_objects/" + OG_id + ".json",
method: "GET",
data: {},
success: function (msg) {
//parse out this object for the messages, which
// contain like and comments counts
}
});
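A rough sketch of that parsing step (the messages and liked_by.count field names are assumptions here; verify them against the actual response shape):
// Hypothetical tally of likes and comments from the messages response.
function tallyEngagement(msg) {
  var likes = 0;
  var comments = 0;
  (msg.messages || []).forEach(function (message) {
    if (message.liked_by && message.liked_by.count) {
      likes += message.liked_by.count;
    }
    comments += 1; // counting each related message as a comment on the page
  });
  return { likes: likes, comments: comments };
}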
Now, there is a Yammer "Like" button that allows you to directly "Like" Yammer Open Graph objects, but incredibly there is no way to actually retrieve those Likes. You can only get likes on messages related to those URLs.

RESTful way of referencing other resources in the request body

Let's assume that I have a resource called group with the following representation:
{
  "id": 1,
  "name": "Superheroes",
  "_links": {
    "self": {
      "href": "http://my.api.com/groups/1"
    }
  }
}
Now let's say I want to create a new person instance by POSTing to /persons/1. Which of the following should I use for the request body:
Using ID
{
"name": "Batman",
"groupId": 1
}
Using link
{
"name": "Batman",
"group": "http://my.api.com/groups/1"
}
With the first method I access the id directly, either to look up the related resource or eventually to store the id in the database when I persist the person instance. But with the other method, I either have to extract the id from the URI or follow the link to load the related resource and then find out its id. I really don't want to store the URI in the database.
With the latter option, seeing that the server controls the structure of the URI, is it fine for me to parse the id out of the link? Following the link back to the server itself seems odd, seeing that at this point we already have access to the information directly (we just need the id).
So to sum up, which of these options is best?
Use the id directly.
Use the link, but parse out the id.
Use the link, but access the link to get the resource instance, and then get the id.
TL;DR: Use simple ids.
More detailed explanation:
A straightforward approach is to create a person by POSTing to /groups/1/persons with a payload {"name": "Batman"}.
However, while such approach works for simple cases, the situation gets complicated if there are 2 resources that need to be referenced. Let's assume that a person also needs to belong to exactly one company:
GET /persons/1
{
"name": "Batman",
"group": 1, // Superheros, available at /groups/1
"company": 5 // Wayne Enterprises, available at /companies/5
}
Since there is no relationship between companies and groups, it is not semantically correct to create a person via POSTing to /groups/1/companies/5/persons or to /companies/5/groups/1/persons.
So let's assume you want to create a person with a request looking like this:
POST /persons
{
"name": "Batman"
"group": ???, // <--- What to put here?
"company": ??? // <--- What to put here?
}
Which brings us to the answer to your question:
Ease of use. Your API should primarily be designed for ease of use. This is especially true if you design a public API. Therefore, option 2 (use the link, but parse out the id) is out, since it imposes additional work on clients of your API.
Constructing search queries. If you want to be able to query persons which belong to company 10 and group 42, simple ids lead to more readable and less error-prone URLs. Which of the following do you consider more readable?
URL with a simple id:
GET /persons?group=42&company=10
or URL with a url-encoded link:
GET /persons?group=42&company=http%3A%2F%2Fmy.api.com%2Fcompanies%2F10
I wouldn't underestimate the point of readability. How many times do you need to debug your API in curl, logs, Postman, etc.?
Development. Links need to be parsed in the backend, while simple ids can be used directly. It's not about performance, but rather about the additional work/tests you have to put in.
Endpoint maintenance. Imagine that your API endpoint evolves. You decide one day to switch to https or to include versioning in the URL. This might break API clients if they for some reason rely on the structure of the links. Also, you would need to check that link parsing on your backend is done properly.
Argumentum ab auctoritate. I know this is not a proper argument, but if you check out the APIs of large players, e.g. Twitter, GitHub or Stripe, they all use simple ids.
HATEOAS. One common argument in favour of links is that they are aligned with HATEOAS. However, as far as I know, this relates to additional links in API responses rather than to using links in payloads of POST requests.
All in all, I would go for simple ids, since I haven't yet heard a compelling argument in favour of links that would outweigh the points above.
You are missing two important things here.
You need a standard way to describe forms in the response, in this case your POST form.
The information about the group ids / uris, or how to get them has to be described in the form in a standard way.
For example, an HTML FORM with a SELECT input would be RESTful. The closest thing we have in JSON to do the same is JSON-LD and Hydra. But if you are obsessed with HAL, then use hyperagent forms or something like that. It will never be a standard, but if compatibility is not an issue, then it is good enough.
To answer your question, you should use the id, because the server knows how to interpret it. The client needs the resource identifiers; the server needs them only in the URI part of the request, not in the body.
From my experience, it is always best to go with the simplest solution for making requests.
The process of generating a new url and parsing it seems excessive to get a resource, whereas sending the id of the item you want seems much simpler.
Thus, I would send a request in the form:
{
"name": "Batman",
"group": 1
}

Some general RESTful API design questions

A few general design questions:
Given the example here:
https://developers.google.com/+/api/latest/activities/list#nextPageToken
Why would the server return a token to retrieve the next paginated result? Doesn't this break the idea of being stateless?
Why not just pass a MySQL-like LIMIT name=value as the parameters? The server now has to return the number of pages, I suppose... what am I missing?
I read many posts, but this one was of interest:
REST Web Services API Design
The second reply offers the following examples.
GET http://api.domain.com/user/<id>
GET http://api.domain.com/users
PUT http://api.domain.com/user/<id>
POST http://api.domain.com/users
DELETE http://api.domain.com/user/<id>
Makes sense, but why are there two plural resources? Could one not assume that if "user" is queried with a NULL or missing id, "all" was intended? Likewise for POST? If plural is for improved readability, why is there not a "users" resource for DELETE?
Ultimately, I understand REST to mean the representation of a single resource - using HTTP verbs (GET, PUT, POST, DELETE) to essentially manage that resource - similar to CRUD.
EDIT | Lastly, I also wanted to ask why the Google API sends the API version in the URI instead of using HTTP headers. Is there a reason? For backwards compatibility with older clients?
Comments?
Why would the server return a token to retrieve the next paginated result? Doesn't this break the idea of being stateless?
Using this kind of mechanism for paginated result sets is completely standard and does not break the idea of being stateless. Consider the following example.
Suppose GET /users?after=<after> (where after is optional) is supposed to return the list of all users in a paginated fashion, say <= 4 per page.
The first request a client makes is GET /users with a response that might look like the following (formatted as JSON).
{
"users": [ "alex", "bob", "carter", "dan" ]
"more_after": "dan"
}
In this example, the more_after property designates that there may be more users left in the list. So the client then requests GET /users?after=dan and gets a second response that looks like the following.
{
"users": [ "edward", "frank" ]
}
The absence of the more_after property designates that this is the last page of users.
Now the question is: was the "dan" token used as the page separator something that breaks the "statelessness" property we want? Clearly the answer is no. The server doesn't have to remember anything between the two GET requests. There's no concept of a session. Any state that needs to persist between the two GET requests exists only client-side - that's the important distinction. It's completely acceptable - and often required - to have the client persist state between calls to the service.
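As a minimal sketch of such a stateless server-side handler (assuming an Express-style route and an in-memory, pre-sorted user list purely for illustration):
const express = require('express');
const app = express();

// Illustration only; in practice this would be a database query ordered by the cursor column.
const USERS = ['alex', 'bob', 'carter', 'dan', 'edward', 'frank'];
const PAGE_SIZE = 4;

app.get('/users', (req, res) => {
  const after = req.query.after;
  // Resume right after the cursor; nothing is remembered between requests.
  const start = after ? USERS.indexOf(after) + 1 : 0;
  const page = USERS.slice(start, start + PAGE_SIZE);

  const body = { users: page };
  if (start + PAGE_SIZE < USERS.length) {
    body.more_after = page[page.length - 1]; // cursor for the next page
  }
  res.json(body);
});

app.listen(3000);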