Building a REST API - separate requests

I am building an API and am a little unsure whether it is better to have one request that returns all information relating to a resource, or separate requests according to the tasks that need carrying out. For example, I have a messages resource and am struggling to decide whether to return all message information in one go, or to have a separate request for unread messages, another for a list of messages, and another for a single message.
What is the proper way? I am tempted to keep them all separate, but then I worry about having to make too many requests.

Stop worrying and just do.
I like to keep things separate in the beginning, and at some point I realise that request X is always followed by request Y, so I'll just merge those two. You won't know what you'll need until you're working on it...
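If it helps to see the shape of that, here is a minimal sketch (assuming Express; the handlers and the status filter are illustrative, not a prescribed design) where "unread" is just a filter on the list resource, so merging or splitting later is only a query-string change:

    import express from "express";

    // Hypothetical in-memory store standing in for a real database.
    interface Message { id: string; body: string; read: boolean; }
    const messages: Message[] = [];

    const app = express();

    // One list endpoint; "unread" is a filter, not a separate resource,
    // so merging or splitting later is only a query-string change.
    app.get("/messages", (req, res) => {
      const unreadOnly = req.query.status === "unread";
      res.json(unreadOnly ? messages.filter(m => !m.read) : messages);
    });

    // A single message stays its own resource.
    app.get("/messages/:id", (req, res) => {
      const message = messages.find(m => m.id === req.params.id);
      message ? res.json(message) : res.sendStatus(404);
    });

    app.listen(3000);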

Related

Retrieving messages in chat application

I am working on an application for chatting, and I am wondering how to design the API for getting messages from the server.
When the application loads the chat window, I display the last 20 messages from the server using the endpoint:
/messages?user1={user1Id}&user2={user2Id}&page=0
Secondly, I allow the user to load and display previous messages when they scroll to the top, using the same endpoint but with a different page (1):
/messages?user1={user1Id}&user2={user2Id}&page=1
But this design doesn't work correctly once users start sending messages to each other. The reason is that the endpoint returns messages in descending order, so invoking it before and after a message is sent or received gives different result sets.
My goal is to always get the previous 20 messages when a user scrolls through the conversation history.
How would you implement this in a clean way (including the REST API design)? I can imagine some solutions, but they seem dirty to me.
REST does not scale well for real-time applications; opening an HTTP connection again and again costs too many resources. It is better to use WebSockets for this.
As for the problem above: if you start pagination from the latest message rather than the first, I would not be surprised that it changes. To keep the pages stable you need to send the message id you started from. For example: GET ...&count=20&latest=1 returns a list of 20 messages; you take the id of the latest one and paginate with GET ...&count=20&basePoint={messageId}&page=0, which always returns exactly the same page no matter how many new messages arrive. You then continue with GET ...&count=20&basePoint={messageId}&page=1 and so on, and you can have negative pages, GET ...&count=20&basePoint={messageId}&page=-1, for newer messages. Though I don't know any chat application which paginates this way.
What I would do instead is GET ...&count=20&latest=1, take the 20th message, and do GET ...&count=20&before={messageId_20th}; then take the last message of that list, the 40th, and do GET ...&count=20&before={messageId_40th}, and so on. To get new messages you can do GET ...&count=20&latest=1 again, or GET ...&count=20&after={messageId_1st}.
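For illustration, a client-side sketch of that before={messageId} scheme (TypeScript; the endpoint shape is assumed from the URLs above):

    // Sketch of cursor-style paging with before={messageId};
    // the endpoint shape is assumed from the URLs above.
    interface Message { id: string; text: string; }

    async function fetchPage(before?: string): Promise<Message[]> {
      const url = before
        ? `/messages?count=20&before=${encodeURIComponent(before)}`
        : `/messages?count=20&latest=1`;
      const res = await fetch(url);
      return res.json();
    }

    // Each scroll-to-top loads the 20 messages older than the oldest
    // one we already have, so new messages never shift the pages.
    async function loadOlder(current: Message[]): Promise<Message[]> {
      const oldest = current[current.length - 1]; // list is newest-first
      const older = await fetchPage(oldest?.id);
      return current.concat(older);
    }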
None of those approaches is really good if you are using a caching mechanism in the client. I would add something like the key frames in video encoding: key messages. For example, every 20th message could be a key message. I would base the caching on the key message ids so that responses are cacheable: GET ...&count=20&latest=1 would return messages, one of which, in the middle of the list, has the property keyMessage=true. The next request would start from the last key message, something like GET ...&count=20&before={messageId_lastKey}, so that response is cacheable.
Another way is to start pagination from the very first message. So GET ...&count=20&latest=1 would also report the index of each message, e.g. the 1234th message. You can cache on every 20th message just as in the key-message solution, but instead of message ids you do GET ...&count=20&to=1220 and merge the pages on the client. Or, if you really want to be sparing with data, GET ...&from=1201&to=1220&before=1215, or GET ...&from=1201&to=1220&last=1214, or GET ...&from=1201&to=1214, etc., and continue normal pagination with GET ...&count=20&to=1200 or GET ...&from=1181&to=1200.
What is good about the fixed-pages approach above is that you can use Range headers too. For example, GET .../latest would return the last 20 messages with the header Content-Range: messages 1215-1234/1234. After that you can do GET .../ with Range: messages=1201-1214, then Range: messages=1181-1200, and so on. When the total message count is updated in a response, Content-Range: messages 1181-1200/1245, you can automagically download the new messages with Range: messages=1235-1245, or with Range: messages=1235- if you expect the 1245 to change meanwhile. You can do the same thing with the URI, but if there is a standard solution like Range headers, it is better to use it than to reinvent the wheel.
As for the &user1={user1Id}&user2={user2Id} part, I would order the ids alphabetically, so that user1 always comes earlier in the alphabet than user2, e.g. &user1=B&user2=A -> &user1=A&user2=B. That makes the URI properly cacheable too, not necessarily on the client, but you can add a server-side cache, so if the two participating users request the same conversation in quick succession it is not queried from the database again.
Another way is to add a conversation id, which can be randomly unique or generated from the two user ids, but it must always be the same for the two users, e.g. /conversations/A+B or /conversations/x23542asda. I would do the latter, because if you start supporting conversations with multiple users it is better to have a dedicated conversation identifier. Something else you can support is multiple topic-related conversations between the same two users; then you won't need /conversations/A+B/{topic}, just use a unique id and search for existing conversations before creating a new one, so /conversations?participants[]=A&participants[]=B&topic={topic} will return either an empty list or a list with a single conversation. The same query style can list the conversations of a single user, /conversations?participants[]=A, or of a single user in a certain topic, /conversations?participants[]=A&topic={topic}.
Generally speaking, for "system design" questions, there isn't any one correct answer. Multiple approaches can be considered, based on their pros and cons. You've mentioned you want a clean way to retrieve messages, so your expectation might be different from mine.
Here is a simple approach/idea to get you started:
Have a field in your database that can be used to order messages. It could be a timestamp, a message_id, or something else you choose. Let's call it m_id for now.
On the client, keep track of the m_id of the earliest message currently available locally. This key (let's call it earliest_m_id) is what you query the database with, because you can simply fetch x messages before earliest_m_id.
This solves the issue of incorrect messages being fetched. Page-based paging will not work, since the pages continuously shift as messages are exchanged.
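A sketch of the server-side query behind this approach (assuming node-postgres; the table and column names are illustrative):

    // Sketch assuming node-postgres; table/column names are illustrative.
    import { Pool } from "pg";

    const pool = new Pool();

    // Fetch the 20 messages strictly older than the earliest m_id the
    // client already has; this result set stays stable even while new
    // messages keep arriving at the head of the conversation.
    async function olderMessages(conversationId: string, earliestMId: number) {
      const { rows } = await pool.query(
        `SELECT m_id, sender_id, body, sent_at
           FROM messages
          WHERE conversation_id = $1 AND m_id < $2
          ORDER BY m_id DESC
          LIMIT 20`,
        [conversationId, earliestMId]
      );
      return rows;
    }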

Need suggestions: Send multiple images to backend, perform upload operation in backend, send response

I need some best practice guidelines for a backend service in a scenario like this one:
UI sends multiple images for uploading to the backend service
Backend service receives all of the images and uploads them to storage one by one
One or more image uploads can fail
My question is: how do I send the response to the UI if my backend service is unable to upload one or more files?
One way is to send the failed and successful image links together in a JSON response body, so the UI knows about the failures and handles them in its own way.
Another way is to send only the links of the successfully uploaded images, which covers only the best-case scenario.
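For illustration, the mixed-result body from the first option might look something like this (all field names and the storage URL are made up):

    {
      "uploaded": [
        { "name": "a.png", "url": "https://storage.example.com/a.png" }
      ],
      "failed": [
        { "name": "b.png", "error": "file exceeds size limit" }
      ]
    }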
Any suggestions, ideally with some reference links, are welcome.
Use an Orchestrator - something specific that can coordinate multiple actions and provide a meaningful result back to the caller.
This might be as simple as a component sitting in the UI that orchestrates calls to the backend. The UI component and the backend service might be designed as parts of a cohesive solution, or the UI component might simply act as a type of client/proxy/facade to some random backend service.
UI calls the orchestrator with references to all the images it needs to upload.
The orchestrator works through the items, uploading each as you prefer (sequentially or in parallel, etc.). For each file, handle errors however you prefer - e.g. try once and fail gracefully; put errors into a queue or some other mechanism for retry (how many times is up to you); etc.
Based on rules internal to the orchestrator, return status to the caller.
For potentially long-running processes (like file uploads) make sure the call to the orchestrator is asynchronous.
Rather than only returning a "complete" result at the end, the orchestrator might report simple progress, allowing callers to get some idea of where processing is at. For example, you might have a callback (from the orchestrator to its caller) that emits very simple statuses like: processing, failed and complete. A more complex solution would be for the orchestrator to return more specific info, like %complete and detailed error info.
Have a look at how the big cloud providers handle complex file uploads by reading their documentation and studying their APIs.
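To make the idea concrete, here is a very small sketch of such an orchestrator (TypeScript; the uploadOne function and the status shape are assumptions, not a prescribed contract):

    // Sketch of an orchestrator that uploads files one by one and
    // reports a simple status to its caller; uploadOne is hypothetical.
    type Status = "processing" | "failed" | "complete";

    async function orchestrateUploads(
      files: File[],
      uploadOne: (f: File) => Promise<void>,
      onStatus: (s: Status, done: number, total: number) => void
    ): Promise<{ ok: File[]; failed: File[] }> {
      const ok: File[] = [];
      const failed: File[] = [];
      for (const [i, file] of files.entries()) {
        onStatus("processing", i, files.length);
        try {
          await uploadOne(file);   // a retry policy could live here
          ok.push(file);
        } catch {
          failed.push(file);       // fail gracefully, keep going
        }
      }
      onStatus(failed.length ? "failed" : "complete", files.length, files.length);
      return { ok, failed };
    }

Whether this sits in the UI or behind the backend API, the contract to the caller stays the same.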
I need some best practice guidelines for a backend service
In no particular order:
Keep it as simple as possible - generally, the fewer moving parts the better. E.g. pay attention to the Single Responsibility Principle (SRP).
Clean up after yourself. If the upload service generates any data - make sure you have a clean-up process so you don't end up with mountains of un-needed data lying around, especially stuff like image files. If you design an upload solution that maintains state (which is independent of what happens to the images once they are uploaded) then you'll be storing data which probably won't be needed once the images are all processed.
Think about support - not just developer debugging but also operational support. Getting your solution into production is not the end result, it's just the beginning.
If designing this solution across teams (e.g. frontend and backend teams) make sure both teams are involved in the design. If the backend team can't provide a solution that works for the frontend team then it's not going to end well.
Think about the likely error scenarios and how you can handle them.
This isn't really just a question of best practice: there are multiple ways you could implement it, more than one of which could be valid. It is really an architecture and design question with more than one valid answer, which is why I don't think it fits as a Stack Overflow question and why you will not get references to any one correct approach.
That said, by way of an answer I will outline what I think you need. At a very high level, and not necessarily in this order but taking these factors into account, I would:
Design the UI process flow. For example, you may decide that the user process will have several stages:
User selects first image for upload;
User selects each subsequent image for upload;
User presses some kind of "Go" button after selecting all images;
System now uploads the batch, and user receives a response confirming success or otherwise;
User has option to click through to detailed success/error details.
Design the required success/error reports
Design the data needed to support the overall functionality
Provide one or more APIs giving the upload function and the report function(s) the CRUD access they need to this data
If you hit any specific technical issues at any stage, please post a new question accordingly as you go.
As to the point you mention of how to send the UI response, there is more than one valid way, but I would return a basic success/failure response initially, containing only minimal details such as the number of successes, and return more detail in further messages in response to user actions (such as clicking through to detailed success/error reports), at which point I would retrieve the requested error details from the database.
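For example, the minimal first response might carry only counts plus a handle the UI can use to click through for details later (names are purely illustrative):

    {
      "batchId": "b-20240101-001",
      "succeeded": 9,
      "failed": 1,
      "detailsUrl": "/uploads/b-20240101-001/results"
    }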
As I said at the start of my answer, I don't think your question can be answered just in terms of best practices, as it's a whole architecture and design question, but I hope my answer helps you along this path.

Can I send an API response before successful persistence of data?

I am currently developing a Microservice that is interacting with other microservices.
The problem is that those interactions are really time-consuming. I have already implemented concurrent calls via Uni and use caching where useful. Some calls still need a few seconds to respond, so I thought of another thing I could do to improve performance:
Is it possible to send a response before the successful persistence of data? I send requests to the other microservices, which have to persist the results of my methods. Can I already send the user the result in a first response, and send a second response once the persistence has succeeded?
That way, the front-end could already begin working even though my API is not 100% finished.
I saw that there is a possible status code 207 (Multi-Status), but that is a WebDAV status for reporting results on multiple resources rather than what I need here. Is there another possibility? Thanks in advance.
"Is it possible to send a response before the sucessfull persistence of data? Can I already send the user the result in a first response and make a second response if the persistence process was sucessfull? With that, the front-end could already begin working even though my API is not 100% finished."
You can and should, but it is a philosophy change in your API, and you may have to consider some edge cases and techniques to deal with them.
In the case of a long-running API call, you can issue an "ack" response - a traditional 200 (a 202 Accepted is arguably the more standard choice) - whose body just means the operation is asynchronous and will complete in the future, something like { "id": 49584958, "apicall": "create", "status": "queued", "result": true }
Then you can
poll your API with the returned id to see whether the operation, which is still ongoing, has succeeded or failed (see the sketch after this list).
have an SSE channel (real-time Server-Sent Events) where your server can issue status messages as pending operations finish.
maybe, using persistent connections and keep-alives, or by flushing the response mid-stream, you can achieve what you describe, i.e. something like a segmented response. I am not familiar with that approach, as I normally go for the suggestions above.
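Here is a minimal sketch of the ack-then-poll option (assuming Express; the in-memory job store and the route names are illustrative):

    import express from "express";
    import { randomUUID } from "crypto";

    type JobStatus = "queued" | "done" | "failed";
    const jobs = new Map<string, JobStatus>(); // stand-in for real storage

    const app = express();

    // Acknowledge immediately, persist in the background.
    app.post("/items", (req, res) => {
      const id = randomUUID();
      jobs.set(id, "queued");
      persistSomewhere(id)                     // hypothetical slow persistence
        .then(() => jobs.set(id, "done"))
        .catch(() => jobs.set(id, "failed"));
      res.status(202).json({ id, status: "queued" });
    });

    // Poll with the returned id until the operation resolves.
    app.get("/items/:id/status", (req, res) => {
      const status = jobs.get(req.params.id);
      status ? res.json({ id: req.params.id, status }) : res.sendStatus(404);
    });

    async function persistSomewhere(id: string): Promise<void> {
      /* call the slow downstream microservice here */
    }

    app.listen(3000);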
But in any case, the same edge cases apply: for example, what happens if a user then issues API calls that depend on the success of an ongoing (or not even started) previous command - for example, requesting information about something that is still being persisted?
You will have to deal with these situations with mechanisms like:
Reject related operations until the pending call is resolved server-side: the API could return e.g. a BUSY error informing the client that operations are still ongoing when it tries to, for example, delete something that is still being created.
Queue all operations so the server executes them all sequentially.
Allow some simultaneous operations if you find they will not collide (e.g. creating 2 unrelated items).

How to handle a single publisher clogging up my RabbitMQ's queue

In my last project, I am using MassTransit (2.10.1) with RabbitMQ.
On some scenarios, a producer is allowed to send a bulk of messages to the queue.
For example - the user sends a bulk notification to their list of contacts, and the list could be as large as 100000 contacts in some cases. This sends one message per contact to the queue (I need to keep track of each message). Now since - as I understand it - messages are processed in the order they arrive, that user clogs up the queue for a long time, while another user, who may have done something simple such as sending a test message to himself, waits for the processing to end.
I have considered separate queues for regular vs. bulk operations, but this still doesn't solve the problem for small bulks (a user with dozens of contacts waiting behind users with hundreds of thousands) and it adds extra maintenance.
The ideal solution for me - I think - would involve manipulating the routing in such a way that the consumer handles X messages from one user, then X messages from the next user, and so on, then moves back to the beginning of the queue, until all messages are processed.
Is that possible? Is there a better solution?
Thanks in advance.
You will have to write code to manage this yourself. RabbitMQ doesn't really have any built-in mechanism to handle a scenario like this without your code getting involved.
If you want to process a few messages at a time from the bulk queue, then back to the normal one, then back to bulk, you'll need two queues and code to manage which one is being pulled from, and when.
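A rough sketch of that two-queue arrangement (TypeScript with the amqplib client; the queue names, handler and batch size are illustrative, and MassTransit would wrap this differently):

    // Sketch: alternate between a "normal" and a "bulk" queue so a
    // huge bulk batch cannot starve small jobs. Assumes amqplib.
    import amqp from "amqplib";

    async function consumeFairly() {
      const conn = await amqp.connect("amqp://localhost");
      const ch = await conn.createChannel();
      await ch.assertQueue("notifications.normal");
      await ch.assertQueue("notifications.bulk");

      // Pull a small batch from each queue in turn (basic.get),
      // instead of letting one consumer drain a single queue.
      for (;;) {
        let fetched = 0;
        for (const queue of ["notifications.normal", "notifications.bulk"]) {
          for (let i = 0; i < 10; i++) {          // batch size per turn
            const msg = await ch.get(queue, { noAck: false });
            if (!msg) break;                      // queue empty, next one
            await handle(msg.content);            // hypothetical handler
            ch.ack(msg);
            fetched++;
          }
        }
        if (fetched === 0) await new Promise(r => setTimeout(r, 100)); // idle
      }
    }

    async function handle(content: Buffer): Promise<void> { /* ... */ }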
Just my opinion, seeing as there is no built-in way that I know of... Have you considered using whatever storage you already have for the notifications? Just publish one message with a list of notifications, store it in your DB, and have a "retrieve notifications for user" consumer. The response would be one message; it may have a massive payload, but even if that gets bogged down, add skip and take properties to the message and force them to be between 0 and 50 (or whatever). In what scenario would you want to show a user 100,000 notifications at once?

How to write a middle-tier http API endpoint that can stream results as they arrive to the client?

The scenario is this - I have a frontend web-server that I'm writing in node.js. I have an as-yet-unwritten middle-tier internal-API layer written in, well, anything. The internal-API is the only thing allowed to talk to the data-store (which happens to be a relational database).
Disclaimer: I'm a node.js beginner.
node.js wants to do data access asynchronously - that makes calls like Database.query.all inefficient, since the response callback wouldn't start until the whole list has been assembled. The documentation I've read suggests that it is better to stream results to the client one at a time.
I would like to know how to write the frontend and middle-tier http internal-API such that I can take advantage of node.js' asynchronicity, here.
I guess the question is "how do I stream structured data over HTTP?" - that's the feature of the internal API that I'm asking for support for.
Should I:
Get the frontend to ask for a list of IDs, then issue one request each to the backend? Sounds crude and chatty; plus, I see no guarantee that the requests will return in the order I want, so I'd have to wait until I had everything back at the frontend anyway..?
Get the frontend to make a series of requests against the internal API for pages of data, and treat each chunk as a stream-segment...?
Fetch only enough data for the first screen's worth, then request for subsequent chunks, writing each one to the end of the list as it arrives?
something cleverer!?
(Note: please don't say "get rid of the middle-tier so you can talk to the database directly" - that's not an option)
I am not sure what exactly you mean by "streaming"; from the ideas you give, it could be interpreted either as an HTTP server-push or long-polling technique, or as simply making subsequent XHR requests.
Since you're using node, I recommend Socket.io, which allows you to really push data to the browser whenever you want.
If you choose to go with XHRs, simply tell the browser what to request next.
If that doesn't suit you, and you want to use server push or long polling, response.write() seems the way to go. But you will probably run into problems with request timeouts and the like.
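To make the response.write() option concrete, here is a small sketch that streams newline-delimited JSON from the middle tier as rows arrive (fetchRows is a hypothetical stand-in for the database source):

    // Sketch: stream results as newline-delimited JSON (NDJSON) so the
    // frontend can parse each line as it arrives, instead of waiting
    // for the whole list to be assembled.
    import http from "http";

    http.createServer(async (req, res) => {
      res.writeHead(200, { "Content-Type": "application/x-ndjson" });
      for await (const row of fetchRows()) {
        res.write(JSON.stringify(row) + "\n"); // one record per line
      }
      res.end();
    }).listen(3000);

    // Hypothetical async iterator standing in for streamed database rows.
    async function* fetchRows(): AsyncGenerator<{ id: number }> {
      for (let id = 1; id <= 3; id++) {
        yield { id };
      }
    }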