Flutter: running multiple HTTP requests takes too much time - api

I want to ask about improving performance when I make multiple Future-based HTTP requests on a single page. In my case, I want to build a dashboard page. The dashboard calls 4 endpoint URLs, each returning a different result that should be shown on the dashboard page.
Here is example code for loading the data:
var client = new http.Client();

Future main() async {
  var newProducts = await client.get("${endpoint}/product?type=newly&limit=5");
  ProductListResponse newProductResponse = ProductListResponse.fromJson(json.decode(newProducts.body));
  var bestSeller = await client.get("${endpoint}/product?type=best-seller&limit=5");
  ProductListResponse bestSellerResponse = ProductListResponse.fromJson(json.decode(bestSeller.body));
  var outOfStock = await client.get("${endpoint}/product?type=out-of-stock&limit=5");
  ProductListResponse outOfStockResponse = ProductListResponse.fromJson(json.decode(outOfStock.body));
  var lastRequest = await client.get("${endpoint}/product-request?type=newly&limit=5");
  ProductRequestListResponse productRequestResponse = ProductRequestListResponse.fromJson(json.decode(lastRequest.body));
}
When I hit each endpoint manually with Postman, it takes about 200 ms to return the result, but in the Flutter app the whole load takes almost 2 s.
Can I improve performance when getting the data?

The reason your code runs so slowly is that you are making those HTTP requests one by one; each await suspends until that single request has completed, so the delays add up.
You can either drop await and implement the logic with callbacks (.then), or combine the Futures into one using Future.wait and await that combined Future.
Your code will look something like this:
var responses = await Future.wait([
  client.get("${endpoint}/product?type=newly&limit=5"),
  client.get("${endpoint}/product?type=best-seller&limit=5"),
  client.get("${endpoint}/product?type=out-of-stock&limit=5"),
  client.get("${endpoint}/product-request?type=newly&limit=5"),
]);
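The four requests now run concurrently, so the total wait is roughly that of the slowest request rather than the sum of all four. Future.wait preserves the order of the futures you pass in, so you can decode the results with the same model classes as before:
// responses[i] corresponds to the i-th request in the Future.wait list.
ProductListResponse newProductResponse = ProductListResponse.fromJson(json.decode(responses[0].body));
ProductListResponse bestSellerResponse = ProductListResponse.fromJson(json.decode(responses[1].body));
ProductListResponse outOfStockResponse = ProductListResponse.fromJson(json.decode(responses[2].body));
ProductRequestListResponse productRequestResponse = ProductRequestListResponse.fromJson(json.decode(responses[3].body));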

Related

Sync data from a remote API and save it to my local DB

I want to sync data from a remote API, something like 1M records, but the whole process takes about 5 minutes.
As a user experience, that's a very bad thing! I want the whole process to take less than 1 s!
I mainly use .NET Core Web API 6.0 with SQLite and EF Core.
I searched a lot, and I used BulkInsert and BulkSaveChangesAsync, and it still takes a long time.
Again, that's a very bad user experience. I tried the commented-out solutions below and had the same problem. I want to make it so fast that the user doesn't feel there is a sync running in the background at all.
Note: I also disabled all indexes while inserting the data to make the process faster, and had the same problem.
Note: my app is monolithic.
I know I could use something like Azure Functions, but that would be considered over-engineering.
I want the simplest way to solve this! I searched a lot on YouTube, GitHub and Stack Overflow and found nothing that would help me as I wish.
Note: I'm writing the data into two tables.
First table: contains only 5 rows.
Second table: contains 3 rows.
public async Task<IEnumerable<DatumEntity>> SyncCities()
{
    var httpClient = _httpClientFactory.CreateClient("Cities");
    var httpResponseMessage = await httpClient.GetAsync(
        "API_KEY_WITH_SOME_CREDS");
    if (httpResponseMessage.IsSuccessStatusCode)
    {
        using var contentStream =
            await httpResponseMessage.Content.ReadAsStreamAsync();
        var result = await JsonSerializer.DeserializeAsync<Result>(contentStream);
        var datums = result!.Data;
        if (datums.Any())
        {
            // First solution
            //_context.Datums.AddRange(datums);
            //await _context.SaveChangesAsync();

            // Second solution
            //await _context.BulkInsertAsync(datums);
            //await _context.BulkSaveChangesAsync();

            // Thread solution
            //ThreadPool.QueueUserWorkItem(async delegate
            //{
            //    _context.Datums.AddRange(datums);
            //    await _context.BulkSaveChangesAsync();
            //});
        }
        return datums;
    }
    return Enumerable.Empty<DatumEntity>();
}
Tried: I tried BulkInsert, tried ThreadPool, disabled all indexes; I tried a lot of things, and nothing helped as I thought it would.
I want the whole process to take less than 1 s, because the user does not move away from the application and the wait is a bad user experience.
This ThreadPool approach solved the issue for me:
if (datums.Any())
{
    // Queue the insert on the thread pool (fire-and-forget) so the request returns immediately.
    ThreadPool.QueueUserWorkItem(async _ =>
    {
        // Create a fresh DI scope: the request-scoped DbContext may already be
        // disposed by the time this runs, so resolve a new one here.
        using (var scope = _serviceScopeFactory.CreateScope())
        {
            var context = scope.ServiceProvider
                .GetRequiredService<CitiesDbContext>();
            context.Datums.AddRange(datums);
            await context.SaveChangesAsync();
        }
    });
}
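Note that this is fire-and-forget: the API call returns before the insert actually finishes, which is what makes it feel instant to the user while the data keeps loading in the background. Any exception thrown inside the queued delegate will not surface to the caller, so it is worth catching and logging inside the delegate.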

How do I slow down API requests from Google Apps Script?

I have a Google Sheets formula that fetches data from an API:
function GETDATA(data) {
  var res = UrlFetchApp.fetch("https://some.api.com/" + data);
  var content = res.getContentText();
  return content;
}
While everything works fine, this script fires an API call each time it's used in a cell (=GETDATA("cars") for example). This can quickly add up to hundreds of API calls, at which point the API gives up and responds with status 429: Too Many Requests.
Is there a way I can slow these requests down in Google Apps Script? I already tried:
function GETDATA(data) {
  Utilities.sleep(150);
  var res = UrlFetchApp.fetch("https://some.api.com/" + data);
  var content = res.getContentText();
  return content;
}
but this just results in all the cells waiting 150 ms and then executing all together again.
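One common mitigation (a sketch, not taken from this thread): each cell's custom function runs in parallel, but a single custom function can accept a whole range and return an array, so one execution performs all the fetches sequentially and the sleep actually spaces them out. The GETDATA_BATCH name and the endpoint shape are assumptions for illustration:
// Call as =GETDATA_BATCH(A1:A50) instead of one GETDATA per cell.
// A range arrives as a 2D array; returning a 2D array spills one result per row.
function GETDATA_BATCH(data) {
  return data.map(function (row) {
    Utilities.sleep(150); // now actually spaces the calls, since they run in one execution
    var res = UrlFetchApp.fetch("https://some.api.com/" + row[0]);
    return [res.getContentText()];
  });
}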

Running data downloading on background thread

Im building a new app and since i want it to be smooth as everyone, I want to use a background thread that would be responsible for all the data downloading using restsharp. Im also following the MVVM pattern.
I've been reading a lot about task.run and how to use it properly and the whole async-await topic. But since Im new to all this, Im not sure how I should procceed to do things right. I have a lot of code so I will breifly try to explain what Im doing and then put a snippet.
So I started with creating a service class that contains all the functions that are using restsharp to get the data. And inside my ViewModel Im calling those functions in the very begining. Im trying to use tasks and run those functions on the background thread but the app get blocked on the splash screen. And abviously thats because Im doing things wrong ... so I decided to ask you guys.
I have this function, for example:
public string GetResPor()
{
    var restClient = new RestClient { BaseUrl = new Uri("http://xxx.xxx.xxx.xxx:xxxx") };
    var request = new RestRequest
    {
        Resource = "getCliPor",
        Method = Method.GET
    };
    request.AddParameter(new Parameter { Name = "idt", Value = GetImAsync().GetAwaiter().GetResult(), Type = ParameterType.GetOrPost });
    var result = restClient.Execute(request);
    Port = result.Content;
    return Port;
}
When I convert this to a Task:
public async Task<string> GetResPor()
{
    var restClient = new RestClient { BaseUrl = new Uri("http://xxx.xxx.xxx.xxx:xxxx") };
    var request = new RestRequest
    {
        Resource = "getCliPor",
        Method = Method.GET
    };
    request.AddParameter(new Parameter { Name = "idt", Value = GetImAsync().GetAwaiter().GetResult(), Type = ParameterType.GetOrPost });
    var result = await restClient.ExecuteTaskAsync(request);
    Port = result.Content;
    return Port;
}
In the ViewModel I start by creating a new instance of my service class and then:
Port = RD.GetRestauPort().GetAwaiter().GetResult();
And this is where the app gets blocked: no exceptions, nothing.
To keep things simple, let's start with the basics. The easiest way to run something on a background thread is to call it inside a Task.Run(). What this does is:
Queues the specified work to run on the ThreadPool and returns a task or Task<TResult> handle for that work.
Basically, you are delegating your work to the ThreadPool and it handles everything for you: it finds a worker, waits for the worker to finish its job (on a different thread) and then notifies you of the result.
So whatever you want to run on a background thread, the simplest solution is to wrap it in a Task.Run() and await its result, in case you need it.
Also, avoid using GetAwaiter().GetResult(). The simple rule in asynchronous programming is: if you can await, await all the way up.
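As a minimal sketch of what that looks like here (using the names from your question; GetImAsync and the service instance RD are assumed to exist as you described), the service method awaits all the way up and the ViewModel consumes it without blocking:
public async Task<string> GetResPor()
{
    var restClient = new RestClient { BaseUrl = new Uri("http://xxx.xxx.xxx.xxx:xxxx") };
    var request = new RestRequest { Resource = "getCliPor", Method = Method.GET };
    // Await the id instead of blocking on it with GetAwaiter().GetResult().
    var idt = await GetImAsync();
    request.AddParameter(new Parameter { Name = "idt", Value = idt, Type = ParameterType.GetOrPost });
    var result = await restClient.ExecuteTaskAsync(request);
    return result.Content;
}

// In the ViewModel, await it from an async method instead of blocking:
private async Task LoadPortAsync()
{
    Port = await RD.GetResPor();
}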
You can read more about these topics in:
this SO post
Advanced Tips for Using Task.Run With Async/Await
Using Task.Run in Conjunction with Async/Await

.NET Core 2.2 HttpClientFactory not giving full data as response

I am using .NET Core 2.2 for my flight listing application, together with the Wego API. With the code below for getting flights from the Wego API I am not getting the complete response, while in Postman I get the full result set in one request.
public async Task<SearchResultMv> GetFlights(FlightParam flightParam, AuthResult auth)
{
    var request = new HttpRequestMessage(HttpMethod.Get, "https://srv.wego.com/metasearch/flights/searches/" + flightParam.SearchId + "/results?offset=0&locale=" + flightParam.locale + "&currencyCode=" + flightParam.currencyCode);
    request.Headers.Add("Bearer", auth.access_token);
    request.Headers.Add("Accept", "application/json");
    var client = _httpClient.CreateClient();
    client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", auth.access_token);
    var response = await client.SendAsync(request).ConfigureAwait(false);
    SearchResultMv json = new SearchResultMv();
    response.EnsureSuccessStatusCode();
    if (response.IsSuccessStatusCode)
    {
        json = await response.Content.ReadAsAsync<SearchResultMv>().ConfigureAwait(false);
        return json;
    }
    return json; // unreached in practice (EnsureSuccessStatusCode throws on failure), but all code paths must return
}
Sometimes I do not get any result set at all from the above code. The Wego API does not provide any pagination or filtering on this endpoint, so please help me achieve this. Thanks in advance.
According to their documentation, you need to poll their API to get the results gradually. You also need to increment the offset as the results are returned.
For example, if the first set of results gives you 100 results, the following request should have the offset value set as 100. offset=100.
Documentation:
https://developers.wego.com/affiliates/guides/flights
Edit - Added sample solution
Sample code that polls the API every second until reaching the desired number of results. This code hasn't been tested so you'll need to adjust it to your needs.
const int numberOfResultsToGet = 100;
var results = new List<SearchResultMv>();
while (results.Count < numberOfResultsToGet)
{
    var response = await GetFlights(flightParam, auth);
    results.AddRange(response.Results);

    // Update the offset so the next request starts where this one ended.
    flightParam.Offset += response.Results.Count;

    // Wait 1 second before sending another request (Task.Delay, since we're in async code).
    await Task.Delay(TimeSpan.FromSeconds(1));
}
Change your request to use a dynamic Offset value. You can add an Offset property to the FlightParam class.
var request = new HttpRequestMessage(
    HttpMethod.Get,
    $"https://srv.wego.com/metasearch/flights/searches/{flightParam.SearchId}/results?" +
    $"offset={flightParam.Offset}" +
    $"&locale={flightParam.locale}" +
    $"&currencyCode={flightParam.currencyCode}");

API Request Pagination

I am making a simple API request to GitHub to get all the repositories. The problem is that GitHub has a limitation: the max it can send is 100 per request. There are users that have more than 100 repositories, and I don't know how to access them or how to do pagination.
I am making a GET request with Axios like this:
https://api.github.com/users/<AccountName>/repos?per_page=100
I can also put a page number, like so:
https://api.github.com/users/<AccountName>/repos?page=3&per_page=100
But how do I make this work in the app without blindly making 10 API requests? I wouldn't even know how many requests to make, because I don't know how many repos will be returned: does somebody have 100 or 1,000 repos? I would like everything to be returned and saved in an array, for example.
EDIT:
Example: I am passing in accountName
var config = {
  headers: { 'Authorization': `token ${ACCESS_TOKEN}` }
}
const REQUEST: string = 'https://api.github.com/users/'
const apiCall = {
  getData: async function (accountName) {
    const encodedAccountName = encodeURIComponent(accountName)
    const requestUrl = `${REQUEST}${encodedAccountName}`
    const user = await axios.get(requestUrl, config)
    // This returns the user, and inside of user there is a link for fetching repos
    const repo = await axios.get(`${user.data.repos_url}?per_page=100`, config)
    ...
You can get the repo count by requesting the user account URL first. For example, here is mine:
https://api.github.com/users/erikh2000
The response includes a "public_repos" value. Bam! That's the magic number you want.
You next need to make multiple fetches if the repo count is over 100. I know you didn't want to, but hey... you can't blame web services for trying to conserve their bandwidth. The good news is that you can put them in a Promise.all() block and have them all fetch together and return at once. So, code like...
const fetchAllTheRepos = (userName, repoCount) => {
  const MAX_PER_PAGE = 100;
  const baseUrl = 'https://api.github.com/users/' + userName +
    '/repos?per_page=' + MAX_PER_PAGE;

  //Start fetching every page of repos.
  const fetchPromises = [], pageCount = Math.ceil(repoCount / MAX_PER_PAGE);
  for (let pageI = 1; pageI <= pageCount; ++pageI) {
    const fetchPagePromise = fetch(baseUrl + '&page=' + pageI);
    fetchPromises.push(fetchPagePromise);
  }

  //This promise resolves after all the fetching is done.
  return Promise.all(fetchPromises)
    .then((responses) => {
      //Parse all the responses to JSON.
      return Promise.all(responses.map((response) => response.json()));
    }).then((results) => {
      //Copy the results into one big array that has all the friggin repos.
      let repos = [];
      results.forEach((result) => {
        repos = repos.concat(result);
      });
      return repos;
    });
};

//I left out the code to get the repo count, but that's pretty easy.
fetchAllTheRepos('erikh2000', 7).then((repos) => {
  console.log(repos.length);
});
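If you want the omitted repo-count lookup too, a sketch using the public_repos field mentioned above might look like this:
// Fetch the user record and pull out the repo count.
const fetchRepoCount = (userName) =>
  fetch('https://api.github.com/users/' + userName)
    .then((response) => response.json())
    .then((user) => user.public_repos);

fetchRepoCount('erikh2000')
  .then((count) => fetchAllTheRepos('erikh2000', count))
  .then((repos) => console.log(repos.length));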
Simultaneously fetching all the pages may end up being more than GitHub wants to let you do at once for accounts with lots of repos. I would put some "good citizen" limit on the number of repos you'll try to get at once, e.g. 1000, and then see if api.github.com agrees with your definition of a good citizen by watching for HTTP error responses. You can get into throttling solutions if needed, but a "grab it all at once" approach like the above probably works fine.
On the other hand, if you are spidering through multiple accounts in one session, then maybe design the throttling in from the beginning, just to, you know... be nice. For that, look at a queue/worker pattern, as sketched below.
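A minimal sketch of that pattern (my own illustration, not from the original answer): a fixed number of workers drain a shared queue of URLs, so at most `concurrency` requests are ever in flight:
// Each worker pulls the next URL off the queue until the queue is empty.
const throttledFetchAll = (urls, concurrency = 3) => {
  const queue = [...urls];
  const results = [];
  const worker = async () => {
    while (queue.length > 0) {
      const url = queue.shift();
      const response = await fetch(url);
      results.push(await response.json());
    }
  };
  // Start a fixed pool of workers and wait for all of them to finish.
  const workers = [];
  for (let i = 0; i < concurrency; ++i) {
    workers.push(worker());
  }
  return Promise.all(workers).then(() => results);
};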