Vue.js: should I filter objects in JS or at my backend and make an XHR to get results?

Let's say I have a table with 2 columns:
PROJECTS, TASKS
A project can have many tasks.
And let's say that I will have up to 15,000 tasks.
So imagine: I click on project 1 in the first column => I should see in the second column all the tasks that belong to this project.
Should I make an XHR via the API to get the data, or should I store everything in JSON on the front end and filter with JS?
Something like this structure:
[{project_name: 'some_project', tasks: [{id: 1, name: 'dosomething'}, {}, {}]}]
Which will be faster? And what is recommended?

It is a bad idea to filter data on the client; the client should just display the data sent by the server. Backend tools are designed to fetch and process data and are better suited for this purpose. With up to 15,000 tasks, shipping everything to the browser just to filter it there wastes bandwidth and memory.
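As a minimal sketch of the server-side approach (the /api/projects/:id/tasks endpoint and response shape are my assumptions, not from the question), using axios in a Vue component:

    // Sketch: fetch only the tasks for the clicked project; the backend does the filtering.
    import axios from 'axios';

    export default {
      data() {
        return { tasks: [] };
      },
      methods: {
        async selectProject(projectId) {
          // Hypothetical endpoint; only the matching tasks travel over the wire.
          const response = await axios.get(`/api/projects/${projectId}/tasks`);
          this.tasks = response.data;
        }
      }
    };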

Related

Vue2: track changes on a table to be passed for add/update/remove

I have a table that shows a paginated result of a large dataset. The data is loaded all at once. The client wants a Save All functionality where only the rows the user added/modified/deleted are passed down to the API.
Is there a way to get only the data that was added/modified/deleted by the user? I am using Vue 2 with VeeValidate, and my first plan was to collect only the fields that are dirty (from VeeValidate), but there is no way to do that in VeeValidate. I am not sure now how I can achieve this.
A general idea or approach would be accepted as an answer.
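One general approach (my suggestion, not from an answer in the thread): keep a snapshot of the rows as they were loaded, then diff against it on save. A minimal sketch, assuming each row has an id:

    // Sketch only: the row shape ({ id, ...fields }) is assumed for illustration.
    let original = new Map();

    function snapshot(rows) {
      // Deep-copy the freshly loaded rows so later edits don't mutate the baseline.
      original = new Map(rows.map(r => [r.id, JSON.parse(JSON.stringify(r))]));
    }

    function diff(rows) {
      const added = rows.filter(r => !original.has(r.id));
      const modified = rows.filter(r =>
        original.has(r.id) &&
        // Note: string comparison relies on key order staying stable between edits.
        JSON.stringify(r) !== JSON.stringify(original.get(r.id))
      );
      const currentIds = new Set(rows.map(r => r.id));
      const deleted = [...original.keys()].filter(id => !currentIds.has(id));
      return { added, modified, deleted };
    }

Call snapshot() right after loading the page data, then send the result of diff() to the Save All endpoint.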

Vue API Best Practice Duplicate Calls (Vuex?)

Within my app there are multiple pages that display a drop-down of "clients". The select options are loaded via a GET call made with Axios. Every time a page is displayed, it makes that GET call.
I'm curious whether it's better to store those clients in Vuex and load them from there so I don't make a call every time. The only thing I am concerned about is, when a new "client" is added, the best way to tell the app it needs to make a new GET call to update the data in Vuex.
There are many possible solutions to this.
You could use a cache on the back end, such as Redis, or, as you said, cache it on the front end.
You can abstract this caching behind a get function that checks a maximum cache-age threshold.
For example, you could set it to last 15 minutes: if another request is made before that threshold, you answer with the last obtained data; otherwise you request the data from the server again.
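A minimal Vuex store sketch of that idea (the store shape and the /api/clients endpoint are my assumptions):

    // Sketch: caches the client list for 15 minutes before re-fetching.
    import axios from 'axios';

    const MAX_AGE_MS = 15 * 60 * 1000;

    export default {
      state: { clients: [], fetchedAt: 0 },
      mutations: {
        setClients(state, clients) {
          state.clients = clients;
          state.fetchedAt = Date.now();
        }
      },
      actions: {
        async getClients({ state, commit }, { force = false } = {}) {
          const fresh = Date.now() - state.fetchedAt < MAX_AGE_MS;
          if (fresh && !force) return state.clients; // serve from cache
          const { data } = await axios.get('/api/clients');
          commit('setClients', data);
          return data;
        }
      }
    };

When a new client is added, you can dispatch getClients with { force: true } (or simply commit the new client into the store) so the cached list refreshes immediately; that covers the invalidation concern in the question.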

Implement search functionality with large data in React Native

I have 50K+ records in my DB, and I want to add a search filter without affecting the performance of the application. Please suggest what approach I should adopt to search the large dataset.
I am calling an API to fetch data from the server and using React Native's search functionality. But due to the large amount of data I have implemented pagination on the server side, so each time a new API call is made and new data is fetched from the server. The issue is that the search only covers the fetched page of records, and I want to search across all 50K+ records. I also want to search on each character typed, so I think it is not feasible to call the API on each keystroke.
So what is the best approach?
I have a quote-finder app which holds 400k quotes in MongoDB, and I am using Node.js as the backend. In my view, if you are going to search through no more than about 100 items in your front end (say, you are going to use a FlatList), you can implement your search algorithm on the front-end side and dynamically render your list according to the search results. 100 items is not a hard limit, just my rule of thumb, because lists with much more data than that would look ugly anyway.
For a 50k search you definitely have to implement the search on the server side. After you get your search data you can use
https://github.com/UnPourTous/react-native-search-list
And if your specific aim is to search on the server side, I would recommend Elasticsearch.
But for 50k records it is better to implement your own algorithm: when you send the fetch request, let your server run the search and receive only the data you want in the response.
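One practical note (my addition, not part of the answer above): if you do search on the server per keystroke, debouncing the input keeps the request volume manageable, since a request only fires after the user pauses typing:

    // Sketch: the /api/search endpoint and 300 ms delay are assumptions.
    let debounceTimer = null;

    function onSearchTextChanged(text, setResults) {
      clearTimeout(debounceTimer);
      debounceTimer = setTimeout(async () => {
        const res = await fetch(`/api/search?q=${encodeURIComponent(text)}`);
        setResults(await res.json());
      }, 300); // only hit the server after the user pauses typing
    }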
You can use Redux for this situation. When you start your application, get all records (50K+) from the server (you can fetch them on the splash screen; it will take some time depending on your server) and store them all in the Redux store. Then you can search the data from your Redux store, so you don't need to call the API on every search.
Make sure your server sends only the fields you actually need to show in the mobile application; that will reduce the response time when fetching the records.
You can use redux-search for this.
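As a plain sketch of the local-search idea (redux-search specifics aside; the record shape and its name field are assumptions), a simple selector over the store can filter on each keystroke with no network call:

    // Sketch: filters records already loaded into the Redux store.
    const selectMatches = (state, query) => {
      const q = query.trim().toLowerCase();
      if (!q) return state.records;
      return state.records.filter(r => r.name.toLowerCase().includes(q));
    };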

API design pattern to be integrated both by own web app and other systems

So this backend will be consumed by an ad-hoc front-end application, but it will also be integrated with other systems, and we will expose an API for them.
When designing the REST API I see that there is ONE database table (call it table A) that can join many other tables, let's say about 10 to 20 other tables.
Now, my strategy would be to build routes in my backend that "reason" according to the ad-hoc frontend we have.
So if there is a page in the frontend (let's call this page page1) that requires rows from table A but also fields from, let's say, 3 other joined tables, then I would create a route in the backend called maybe "page1" which returns rows from table A and also from the other 3 tables.
This is of course an ordinary way to build a backend. But as it will also be used by other systems, somebody could argue that those systems may not have any need for the route "page1"; their frontend will maybe never build a "page1".
So according to people here, it would be better to build the API more agnostically. And instead of creating the route "page1" I should build it according to HATEOAS. If I understand that principle correctly, instead of letting my ad-hoc frontend request the resource "page1", it would request "pageForTableA", and the resource "pageForTableA" would return which tables can be joined.
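For reference, a hypermedia response in that style might look something like this (an illustration of the principle only; the link names are invented, not anything proposed in the thread):

    {
      "rows": [ ... ],
      "_links": {
        "self":   { "href": "/pageForTableA" },
        "tableB": { "href": "/pageForTableA/tableB" },
        "tableD": { "href": "/pageForTableA/tableD" }
      }
    }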
In this case, for my frontend's page1, I would need to make 4 subsequent requests to the server, instead of the one request I would like to make if there were a "page1" resource in the backend.
What do you think?
I also see a third strategy. I don't know if there is a name for this pattern, but it would work this way:
A resource in the backend that returns only rows from table A. BUT the route also takes arguments, and the argument is an array with the names of all the other tables someone wants to include.
So if the frontend calls:
getTableA(array('tableB', 'tableD', 'tableF'))
then the resource would include/join tables B, D, and F. In short: the API resource lets the frontend decide what it wants delivered.
Which of these 3 strategies do you think is best? Or are there others that should be taken into consideration?
You need to architect your API in such a way that consumers don't need to know how the data is stored in the underlying data store.
Furthermore, if you want to allow consumers to decide which fields to project into the response, you could let them specify this using some query-string format.
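For example (a sketch of my own, assuming Express and a hypothetical include query parameter; the table names are placeholders):

    // Sketch: GET /tableA?include=tableB,tableD lets the consumer choose the joins.
    const express = require('express');
    const app = express();

    const ALLOWED_JOINS = new Set(['tableB', 'tableD', 'tableF']);

    // Hypothetical data-access helper; replace with real queries/joins.
    async function fetchTableA(joins) {
      return { rows: [], joined: joins };
    }

    app.get('/tableA', async (req, res) => {
      // Parse and whitelist the requested joins from the query string.
      const include = (req.query.include || '')
        .split(',')
        .filter(t => ALLOWED_JOINS.has(t));
      res.json(await fetchTableA(include));
    });

    app.listen(3000);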
BTW, maybe you should avoid re-inventing the wheel. There's a standard called Open Data Protocol (OData) which already defines a lot of the things you require in your API, and since it was made by Microsoft, it has deep support in .NET.
Check this tutorial (Create an OData v4 Endpoint Using ASP.NET Web API 2.2) to get more familiar with OData.

How to consolidate API calls for the ASANA API

I'm a freelance web dev and I work with a lot of clients across many different workspaces in Asana. Not being able to get a consolidated view makes this a tedious and difficult thing to manage, so I'm putting together my own little utility to help me get a 'superview' of tasks assigned to me in order of the due date. In order to make this easier for me to scan, I need to have the project name next to the task details.
The easiest way, in my mind, would be a single API call for all tasks assigned to me and request the project name, task name, task id, due date, and workspace name all at once.
The API doesn't seem to allow this consolidated type of request, however, so instead the workflow goes something like this:
API call to get all my workspaces
Loop through the workspaces, making an API call for each to get all tasks
Use PHP to sort those tasks accordingly
Loop through those tasks, making an API call for the first instance of each project in order to get the project name (I cache the data as I go so that I'm only making a call once per project)
The issue I'm getting is a 500 error when I start making API calls to get the project details. I doubt I'm hitting the 100-calls-per-minute limit, but I'm still getting the errors nonetheless. In light of this, I'm looking for a way to make a consolidated call that contains all the data I need, but I can't seem to figure it out.
Anyone have some guidance on this?
Good news! We actually do support Input/Output options that allow you to specify which fields you want, including nested fields. So, while you still need to make separate calls for each workspace, you can do something like this:
workspaces = GET /workspaces
for id in workspaces:
    tasks = GET /workspaces/:id/tasks?assignee=me&opt_fields=name,due_on,projects.name
(If you're only interested in incomplete tasks, you can add &completed_since=now - or, if you want incomplete and recently completed tasks, &completed_since=... with the timestamp before which completed tasks should be excluded.)
Additionally, 500 is not the code we send for rate limiting - it's likely an issue with the request itself. How are you requesting the project details?
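A short sketch of that workflow in JavaScript (the base URL and token handling are my assumptions; the endpoints and opt_fields follow the answer above):

    // Sketch: one task request per workspace, with project names pulled in via opt_fields.
    const BASE = 'https://app.asana.com/api/1.0';
    const headers = { Authorization: `Bearer ${process.env.ASANA_TOKEN}` };

    async function myTasksWithProjects() {
      const wsRes = await fetch(`${BASE}/workspaces`, { headers });
      const { data: workspaces } = await wsRes.json();

      const all = [];
      for (const ws of workspaces) {
        const url = `${BASE}/workspaces/${ws.gid}/tasks` +
          '?assignee=me&opt_fields=name,due_on,projects.name';
        const res = await fetch(url, { headers });
        const { data: tasks } = await res.json();
        // Keep the workspace name next to each task for the 'superview'.
        for (const t of tasks) all.push({ workspace: ws.name, ...t });
      }
      // Consolidated view, ordered by due date (ISO dates sort lexicographically).
      return all.sort((a, b) => (a.due_on || '').localeCompare(b.due_on || ''));
    }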