Apollo server is very slow even with light queries - express

I know that GraphQL can get slower with large data sets, but what I have is an array of around 60 items. In a simple query I fetch the id, title and country, and it takes more than 3 seconds on average for the Apollo server to return the data, while the resolver gets its data from a REST endpoint in less than 200 ms. I use Apollo Studio to debug queries, and what I get does not help me at all: it shows less than 1 ms everywhere.
If I go for a more complex query with two nested queries, it takes 15-20 seconds for the same 60 items, even though the nested queries are quite light.
I googled and found that people face similar issues, but with really heavy queries of thousands of items, and they talk about a few seconds, not 15-20.
It feels like there is something wrong with my setup. I do not get any errors.
I am using apollo-server-express with a basic setup:
const server = new ApolloServer({
  typeDefs,
  resolvers,
  introspection: true,
  playground: !(process.env.NODE_ENV === 'production'),
  engine: {
    reportSchema: true,
    graphVariant: 'current',
  },
  subscriptions: {
    onConnect: () => winston.info('Connected to websocket'),
    onDisconnect: webSocket => winston.info(`Disconnected from websocket ${webSocket}`),
  },
  context: ({ req }) => ({ // eslint-disable-line
    req,
    pubSub,
  }),
});
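One way to narrow down where the time goes, independently of Apollo Studio, is to time a resolver directly and compare that with the end-to-end response time. A minimal sketch (the helper and the resolver names are hypothetical, not from the question):

```javascript
// Hypothetical helper: wraps any resolver and logs its wall-clock time,
// so time spent in the REST call can be separated from time spent in
// Apollo's own parsing/serialization.
const timed = (name, resolver) => async (...args) => {
  const start = process.hrtime.bigint();
  try {
    return await resolver(...args);
  } finally {
    const ms = Number(process.hrtime.bigint() - start) / 1e6;
    console.log(`${name} took ${ms.toFixed(1)} ms`);
  }
};

// Usage (resolver name is a placeholder):
// resolvers.Query.items = timed('Query.items', resolvers.Query.items);
```

If the wrapped resolver is fast but the HTTP response is still slow, the problem lies outside the resolvers (middleware, serialization, or the nested resolvers firing once per item).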

Related

best way to serve local file via express.js server

Our server serves a single JSON file that is 1 GB in size. When a request comes in, the server reads that file and, if a certain element is included (matched), serves the file; if not, it throws an error.
There are two pain points:
every time a request comes in, we read the same file again, even though we only want to read a certain line;
we need to serve the file to thousands of users concurrently.
How do we design a non-blocking, asynchronous, efficient way of serving the file?
// pseudo code
(req, res) => {
  // readFile's callback receives (err, data); throwing inside it would
  // crash the process, so respond with an error instead
  fs.readFile('/path/to/file.json', (err, result) => {
    if (err || !someValidationLogic(result, req.body.someParameter)) {
      return res.status(500).end();
    }
    return res.send(result);
  });
};
This is bad because it does not utilize the 'stream' functionality of Node.js.

GET request that triggers PATCH request (express) [closed]

Closed. This question needs debugging details. It is not currently accepting answers.
Edit the question to include desired behavior, a specific problem or error, and the shortest code necessary to reproduce the problem. This will help others answer the question.
Closed 3 years ago.
On my express server I have a script that retrieves items through scraping. I want to trigger this script once in a while and push the retrieved items into my database.
My first idea was to create an endpoint in my API (e.g. /api/scrape-items). The problem is that a single GET request would be responsible for running the script, retrieving the items AND PATCHing the items into (updating) my database. It doesn't seem right to let a GET request do all of that, especially to make a PATCH request, but I can't change the GET request to a POST request either because I have no body.
Can someone help me come up with a better approach? Thanks!
UPDATE: Example of triggering endpoint:
router.get('/scrape-items/', async (req, res) => {
  try {
    const resultFromScraping = await [
      { id: 1, data: 'updated data' },
      { id: 2, data: 'updated data' }
    ]
    await Promise.all(
      resultFromScraping.map(
        async item =>
          await axios.patch(
            `/api/items/${item.id}`,
            item.data
          )
      )
    )
    res.sendStatus(200) // the original handler never responded on success
  } catch (err) {
    res.status(500).json({ message: err.message })
  }
})
A POST request is perfectly acceptable for uploading content to a database. PATCH is usually reserved for partially updating an item, so if you are just updating stuff in your database with this request, don't hesitate to use PATCH. If you are completely replacing the resource in the database (or you require the entire resource in the HTTP request, not just the modified parts), I'd recommend using PUT instead.
A GET request would be acceptable as well in this situation if you were returning data to the user.
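Following the advice above, the trigger route could become a POST (an empty body is fine) with the update logic pulled into a helper. This is a sketch, not the asker's code; `patchItem` is a hypothetical injected function standing in for the `axios.patch` call:

```javascript
// Hypothetical helper: applies one PATCH per scraped item and resolves
// once all updates finish. The patch function is injected so the route
// stays easy to test.
async function pushScrapedItems(items, patchItem) {
  await Promise.all(items.map(item => patchItem(item.id, item.data)));
  return items.length;
}

// In the router (sketch; scrapeItems is a hypothetical scraper):
// router.post('/scrape-items', async (req, res) => {
//   const items = await scrapeItems();
//   await pushScrapedItems(items, (id, data) =>
//     axios.patch(`/api/items/${id}`, data));
//   res.status(204).end();
// });
```

Using POST for the trigger also keeps the endpoint out of the reach of crawlers and browser prefetching, which may issue GET requests without user intent.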

Express GraphQL TTFB is extremely long

I have a very simple setup: an Express-GraphQL API with a MongoDB database. MongoDB responses are in general quite fast, but when data is returned from the GraphQL API, the TTFB takes too long, especially for multi-user queries.
For example, when I request a single user, TTFB is 25.65 ms and content download is around 0.6 ms. But when I request all users with the same fields, TTFB is 4.32 s and content download is around 1.28 s.
Content download is no problem, but I feel like TTFB is longer than it should be. You can check part of my schema with the RootQuery below.
const RootQuery = new GraphQLObjectType({
  name: 'RootQueryType',
  fields: {
    user: {
      type: UserType,
      args: { mail: { type: GraphQLString } },
      resolve(parent, args) {
        return User.findOne({ mail: args.mail });
      }
    },
    users: {
      type: new GraphQLList(UserType),
      resolve(parent, args) {
        return User.find({}).collation({ locale: "en" }).sort({ name: 1, surname: 1 });
      }
    }
  }
});
What would be the best way to decrease TTFB?
From your code snippet I can't see how UserType is defined, and I also don't know exactly which GraphQL query you are performing. That said, high TTFB numbers usually indicate that the server is performing heavy tasks, so it is very likely that the query requests a field of UserType with an expensive resolver attached (perhaps performing another MongoDB query), which is executed once per user. This is known as the N+1 problem, and you can get rid of it using a DataLoader, which lets you batch those expensive MongoDB queries into a single query.
If you could give more information about the UserType and the query you are performing, it would help a lot.
References:
- https://itnext.io/what-is-the-n-1-problem-in-graphql-dd4921cb3c1a
- https://github.com/graphql/dataloader
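In real code you would use the dataloader package linked above, but the batching idea it implements can be illustrated in a few lines: every key requested during the same tick is collected, then one batch function call resolves them all (this toy loader skips caching and error handling):

```javascript
// Minimal illustration of DataLoader-style batching: individual load(key)
// calls made in the same tick are coalesced into one batchFn(keys) call.
// batchFn must return results in the same order as the keys it receives.
function createLoader(batchFn) {
  let keys = [];
  let pending = null;
  return key => {
    keys.push(key);
    const index = keys.length - 1;
    if (!pending) {
      pending = new Promise(resolve =>
        process.nextTick(() => {
          const batch = keys;
          keys = [];
          pending = null;
          resolve(batchFn(batch));   // one query for the whole batch
        })
      );
    }
    return pending.then(results => results[index]);
  };
}
```

Applied to the schema above, a per-user resolver would call `load(userId)` instead of hitting MongoDB directly, turning 60 `findOne` calls into one `find({_id: {$in: ids}})`.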

Change execution timeout for a Google Cloud Function running an ExpressJS service in code?

How do you change the execution timeout for a Google Cloud Function running an ExpressJS service in code?
I found the Cloud Functions documentation on changing the default timeout of 60 seconds for a simple function.
https://cloud.google.com/functions/docs/concepts/exec
exports.afterTimeout = (req, res) => {
  setTimeout(() => {
    // May not execute if function's timeout is <2 minutes
    console.log('Function running...');
    res.end();
  }, 120000); // 2 minute delay
};
Express
const express = require('express');
const app = express();
...
module.exports.app = app;
Thanks
Independently of what you run in your Cloud Function, when you deploy it using the gcloud command you just need to set the --timeout flag to the value you want (in seconds), up to 9 minutes.
If you are using the Console to create your Cloud Function, there is a dropdown menu right above the "Create" button that shows the advanced options, where you can choose the desired timeout (between 1 and 540 seconds).
If you want to do it at execution time, from within the function itself, you could make an API call to change the timeout. However, it will not affect any already-running function execution.
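For the deploy-time route, the flag looks like the following (the function name is a placeholder; the maximum for 1st-gen functions is 540 seconds):

```shell
gcloud functions deploy myExpressApp \
  --trigger-http \
  --timeout=540s
```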

CakePHP functions that should take about 100 milliseconds at most take much longer

My app has been in use for 6 months and I stopped being asked to fix new bugs a long while ago... life was great :)
Now that I have finished working on other projects, I would like to speed up my application.
PROBLEM
E.g. I have a very simple function like this, which I call using AJAX.
In the browser console I see that the function takes 700 milliseconds to finish. I measured how many milliseconds the actual code in the body of the function takes to run: not surprisingly, only about 100 milliseconds, which would be OK.
public function getObjVisibility()
{
    $start = round(microtime(true) * 1000);
    $this->autoRender = false;
    $tmp = $this->Obj->find('first', array(
        'conditions' => array(
            'obj_id' => $_POST['id']
        ),
        'fields' => array(
            'visible'
        )
    ));
    $result = $tmp['Obj']['visible']; // added so the assignment is timed too
    $end = round(microtime(true) * 1000);
    fb::log("time: ", $end - $start); // firePHP logging to console
    return $tmp['Obj']['visible'];
}
So the function should take at most about 100 ms, but takes at least about 700 ms.
Does anyone have any idea what's going on? I was unable to formulate a question reasonable enough for Google to give me a reasonable answer, so I'm asking you guys :)
A Cake request/response goes through dispatch, routing, rendering, etc., so it is more than just the body of that action. Have you tried DebugKit? It has some profiling features which might help you isolate the slow parts.
The main reason for these problems turned out to be that my application makes several synchronous AJAX calls to perform some actions.
While switching my calls between synchronous and asynchronous during implementation, I did not think about the round trips to and from the server: I am sending several small requests, each of which takes some time to complete depending on the client's connection.
I am solving this by making the calls asynchronous again (deleting the async: false parameter from the jQuery AJAX call) and/or fetching several pieces of information from the server in one request.
If you think this is a bad idea, let me know :-)
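The "one request instead of many" idea above can be sketched as a small client-side helper: collect the ids the page needs and fetch them with a single call. `fetchBatch` stands in for one AJAX request to a hypothetical batch endpoint (e.g. one that accepts a list of obj_ids and returns their visibility flags):

```javascript
// Resolve many per-object lookups with a single round trip. fetchBatch is
// a hypothetical function performing one AJAX request that returns rows
// of the form { obj_id, visible }.
async function getVisibilities(ids, fetchBatch) {
  const rows = await fetchBatch(ids);                       // one round trip
  const byId = new Map(rows.map(r => [r.obj_id, r.visible]));
  return ids.map(id => byId.get(id));                       // keep caller's order
}
```

With a matching CakePHP action that accepts an array of ids in one POST, the page pays the network latency once instead of once per object.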