I'm trying to use morgan to log requests for my API server. My routes are configured like this:
app.use logger('dev')
app.use '/api/collab/dataobjects/:do', if config.disable_auth then authMid.noAuthCheck else authMid.authCheck
app.use '/api/collab/dataobjects/:do', queryRouter(config.testing)
app.use '/api/collab/dataobjects/:do', queryRouter()

app.use (req, res, next) ->
  console.warn "Test"
  err = new Error('Not Found')
  err.status = 404
  next err

app.use (err, req, res, next) ->
  res.status(err.status || 500)
  console.warn err
  res.send {
    message: err.message
    error: err
  }
Morgan mostly works as expected, but on a few routes it gives some nonsense output:
POST /api/collab/dataobjects/1/update - - ms - -
After checking some timings, it was clear that Morgan was logging these responses before they had actually been returned. To fix this, I moved the app.use logger('dev') line after the API routes, but before the error-catching routes. Placed there, Morgan would display the status code and size of long requests, unlike before, but now it doesn't show the elapsed time on any request:
GET /api/collab/dataobjects/1 200 - ms - 4119
Why is Morgan failing to show the response time, and how can I fix it?
I just noticed this question is 2+ years old now, but I've already done the legwork so I'll post my response anyway.
I've seen similar problems myself so I spent a little time digging around to try to figure this out. I'm not sure I can fully answer your question (yet?) but I can explain a few of the things you are seeing:
STARTING THE TIMER:
Morgan starts its timer when the middleware-handler-method (the one with the (req, res, next) signature) is invoked, so in this case:
app.use logger('dev')
app.use '/api/foo/:bar', handler
the reported time should include the time to process /api/foo/:bar, but in this case:
app.use '/api/foo/:bar', handler
app.use logger('dev')
it should not include the time to process /api/foo/:bar since the timer starts after the handler method runs.
STOPPING THE TIMER:
Morgan will not stop the timer until it is formatting the log line to be written.
Unless configured otherwise (e.g. with the immediate option), Morgan doesn't write the line to the log until the response has been completely processed; it uses the on-finished module to get a callback when the Express request processing is complete.
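For illustration, here is a minimal plain-JavaScript sketch of the two logging modes (the immediate flag is a real Morgan option; the route is made up):

const express = require('express');
const morgan = require('morgan');
const app = express();

// Default mode: Morgan registers an on-finished callback, so the line is
// written when the response completes and includes the elapsed time.
app.use(morgan('dev'));

// Immediate mode: log as soon as the request comes in; status, size and
// response time are not available yet.
// app.use(morgan('dev', { immediate: true }));

app.get('/api/foo/:bar', (req, res) => res.json({ ok: true }));
app.listen(3000);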
REPORTING "-" INSTEAD OF THE RESPONSE TIME:
I think there are a few scenarios that will cause Morgan to write "-" instead of the response time:
Based on the source code it looks like Morgan will write "-" to the log when it can't find the temporary variable it set when it "started the timer", therefore writing "-" to indicate the value is more or less "null".
Morgan also writes "-" to the log if the request "never" finished processing -- i.e., if the request timed out without completing a valid response. (In this case I guess "-" more or less indicates "infinity".)
Morgan might also write "-" when the value is literally 0, which might explain why you started seeing "-" all the time once you moved the app.use(logger) code below your actual routes. Alternatively, since the response is probably already processed by the time Morgan invokes onFinished in your second scenario, the on-finished callback fires immediately, possibly before the temporary start-time variable has been written, leading to #1.
SO WHY DOES MORGAN SOMETIMES WRITE "-" IN YOUR ORIGINAL SET-UP?
I think the most likely scenario is that your "long-running" requests are timing out according to one part of your infrastructure or another. For example, the service that sits in front of your Express application (a web server like nginx, or an end-user's web browser) will eventually give up waiting for a response and close the connection.
I would need to dig around in the on-finished codebase (or have someone explain this to me :)) to understand what Morgan will get back from on-finished in this scenario, and what it will do with that information, but I think the response time-out is consistent with the information you've shared.
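If you want to observe this yourself, here is a rough sketch using the on-finished package directly (Morgan already depends on it); the timing middleware and log format are just illustrative:

const onFinished = require('on-finished');

app.use((req, res, next) => {
  const start = Date.now();
  onFinished(res, (err) => {
    // err is set if the underlying socket errored or was torn down before a
    // complete response was written (e.g. the client or a proxy gave up).
    console.log(req.method, req.originalUrl,
      err ? 'aborted' : 'finished', `${Date.now() - start} ms`);
  });
  next();
});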
Related
What I want to achieve is this: when an Express handler fails, either by throwing an unhandled exception or by returning an empty response like undefined or [], I want the handler to return a predefined mock response rather than failing. This means my server never fails: it either returns the normal real data or the predefined mock data.
Of course I will only turn this on in development environment and never in production.
I think a middleware is ideal because I don't want to pollute every handler's logic by injecting the response check.
Is this possible with a middleware in express?
If not, what's a cleaner way of achieving this?
The return value of a middleware handler is irrelevant, because a middleware handler is asynchronous by nature. It does one of three things (see the sketch after this list):
It completes the response by calling res.end(...) or similar.
It reports an error by calling next(err).
It delegates the decision to the next middleware by calling next().
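A minimal sketch of those three outcomes in one handler (the path and conditions are made up for illustration):

app.use('/things/:id', (req, res, next) => {
  if (req.params.id === '0') {
    return res.status(404).end();        // completes the response
  }
  if (!/^\d+$/.test(req.params.id)) {
    return next(new Error('bad id'));    // reports an error
  }
  next();                                // delegates to the next middleware
});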
Errors can be caught with an additional error-handling middleware, and exceptions can be converted into errors as discussed in [ExpressJs]: Custom Error handler do not catch exceptions.
However, you cannot change a response after it has been sent. Moreover, you write
an express handler fails ... by ... returning an empty response
but an empty response is not a failure. If you want to treat empty responses as failures, but only in development, I suggest that you handle them as special errors. Instead of responding with res.json([]), say, you write next({emptyResponse: []}) and have special error-handling middleware, in development only, to handle these:
app.use(function(err, req, res, next) {
  if (err.emptyResponse) {
    console.error(err.emptyResponse);
    res.end("Mock response");
  } else next(err); // delegate to the standard error handler
});
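For completeness, a hypothetical route handler following that convention (loadItems() is a placeholder for your real data access):

app.get('/api/items', async (req, res, next) => {
  try {
    const items = await loadItems();           // placeholder data source
    if (!items || items.length === 0) {
      return next({ emptyResponse: items });   // picked up by the dev-only handler above
    }
    res.json(items);
  } catch (err) {
    next(err);                                 // exceptions go to the standard error handler
  }
});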
Perhaps there is a misconception about what a response is. The server streams the response to the client; only the client can "get" the response in this sense. Responses cannot be passed between middlewares.
I am trying to use HttpContext.Session in my ASP.NET Core Blazor Server application (as described in this MS Doc; I mean, everything is set up correctly in startup).
Here is the code part when I try to set a value:
var session = _contextAccessor.HttpContext?.Session;
if (session != null && session.IsAvailable)
{
    session.Set(key, data);
    await session.CommitAsync();
}
When this code is called in a Razor component's OnAfterRenderAsync, session.Set throws the following exception:
The session cannot be established after the response has started.
I (probably) understand the message, but this renders the Session infrastructure pretty unusable: the application needs to access its state in every phase of the execution...
Question
Should I completely forget the DistributedSession infrastructure and go for cookies or the browser's SessionStorage? ...or is there a workaround that still utilizes HttpContext.Session? I would not want to drop the distributed session infra for a much lower-level implementation...
(just for the record: the browser's Session Storage is NOT shared across tabs, which is a pain)
Blazor is fundamentally incompatible with the concept of traditional server-side sessions, especially in the client-side or WebAssembly hosting model, where there is no server side to begin with. Even in the "server-side" hosting model, though, communication with the server is over websockets. There's only one initial request. Server-side sessions require a cookie which must be sent to the client when the session is established, which means the only point you could do that is on the first load. Afterwards, there are no further requests, and thus no opportunity to establish a session.
The docs give guidance on how to maintain state in a Blazor app. For the closest thing to traditional server-side sessions, you're looking at using the browser's sessionStorage.
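Independent of Blazor, the browser side of that option is just the standard Web Storage API; a tiny JavaScript sketch of what a JS-interop wrapper would ultimately call (the key name is made up):

// Persist, read back and clear a value for the lifetime of the tab.
sessionStorage.setItem('counterState', JSON.stringify({ count: 3 }));
const saved = JSON.parse(sessionStorage.getItem('counterState') || 'null');
sessionStorage.removeItem('counterState');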
Note: I know this answer is a little old, but I use sessions with WebSockets just fine, and I wanted to share my findings.
Answer
I think this Session.Set() error that you're describing is a bug, since Session.Get() works just fine even after the response has started, but Session.Set() doesn't. Regardless, the workaround (or "hack" if you will) involves making a throwaway call to Session.Set() to "prime" the session for future writing. Just find a line of code in your application where you KNOW the response hasn't been sent, and insert a throwaway call to Session.Set() there. Then you will be able to make subsequent calls to Session.Set() with no error, including ones after the response has started, inside your OnInitializedAsync() method. You can check whether the response has started by checking the property HttpContext.Response.HasStarted.
Try adding this app.Use() snippet to your Startup.cs Configure() method, and make sure it is placed somewhere before app.UseRouting():
...
...
app.UseHttpsRedirection();
app.UseStaticFiles();

// begin Set() hack
app.Use(async delegate (HttpContext Context, Func<Task> Next)
{
    // this throwaway session variable will "prime" the Set() method
    // to allow it to be called after the response has started
    var TempKey = Guid.NewGuid().ToString();            // create a random key
    Context.Session.Set(TempKey, Array.Empty<byte>());  // set the throwaway session variable
    Context.Session.Remove(TempKey);                    // remove the throwaway session variable
    await Next();                                       // continue on with the request
});
// end Set() hack

app.UseRouting();
app.UseEndpoints(endpoints =>
{
    endpoints.MapBlazorHub();
    endpoints.MapFallbackToPage("/_Host");
});
...
...
Background Info
The info I can share here is not Blazor specific, but will help you pinpoint what's happening in your setup, as I've come across the same error myself. The error occurs when BOTH of the following criteria are met simultaneously:
Criteria 1. A request is sent to the server with no session cookie, or the included session cookie is invalid/expired.
Criteria 2. The request in Criteria 1 makes a call to Session.Set() after the response has started. In other words, if the property HttpContext.Response.HasStarted is true, and Session.Set() is called, the exception will be thrown.
Important: If Criteria 1 is not met, then calling Session.Set() after the response has started will NOT cause the error.
That is why the error only seems to happen upon first load of a page -- often on first loads, there is no session cookie that the server can use (or the one that was provided is invalid or too old), and the server has to spin up a new session data store (I don't know why it has to spin up a new one for Set(); that's why I say I think this is a bug). If the server has to spin up a new session data store, it does so upon the first call to Session.Set(), and new session data stores cannot be spun up after the response has started. On the other hand, if the session cookie provided was a valid one, then no new data store needs to be spun up, and thus you can call Session.Set() anytime you want, including after the response has started.
What you need to do is make a preliminary call to Session.Set() before the response gets started, so that the session data store gets spun up; then your later calls to Session.Set() won't cause the error.
SessionStorage has more space than cookies.
Correctly syncing sessionStorage in both directions is not possible.
You may be wondering: if the data lives in the browser, how can you access it in C#? Please see some examples; the value is actually read from the browser and transferred to (used on) the server side.
Blazor's protected sessionStorage and localStorage are encrypted, so we do not need to do anything extra for encryption. The same applies to serialization.
My ASP.NET Core 3.0 app, in a particular configuration/deployment, logs:
[INF] CORS policy execution failed.
[INF] Request origin https://bla.com does not have permission to access the resource.
At that point, how can I log the resource that was requested, for debugging?
(note this question is not about the actual issue or solving it)
(note that I am not after globally increasing the log level etc)
Well, that middleware is locked down pretty badly, and I haven't found any sensible way to hook into it.
If you want to replace the CorsMiddleware, you can't just create a decorator that calls Invoke() on the middleware, because you'll have no idea what happened.
Another solution might be to replace the CorsService:ICorsService registration in the service collection with a decorator, and then check the CorsResult after delegating the call to EvaluatePolicy(). That way you could emit an additional log message close to where the original message is emitted.
But there is another possible solution, both very simple and very crude: To check what happened in the request. Albeit that is a bit farther away from the original logged message.
The code below is a delegate added to the pipeline (in Startup/Configure, before .UseCors()) that checks if the request was a preflight request (the same way CorsService does), and if it was successful, i.e. the AccessControlAllowOrigin header is present. If it wasn't successful, it logs a message with the same EventId and source as the CorsService.
app.Use(async (ctx, next) =>
{
    await next();

    var wasPreflightRequest = HttpMethods.IsOptions(ctx.Request.Method)
        && ctx.Request.Headers.ContainsKey(CorsConstants.AccessControlRequestMethod);
    var isCorsHeaderReturned = ctx.Response.Headers.ContainsKey(HeaderNames.AccessControlAllowOrigin);

    if (wasPreflightRequest && !isCorsHeaderReturned)
    {
        ctx.RequestServices.GetRequiredService<ILoggerFactory>()
            .CreateLogger<CorsService>()
            .LogInformation(new EventId(5, "PolicyFailure"),
                $"CORS preflight failed at resource: {ctx.Request.Path}.");
    }
});
Based on my testing it seems to work. ¯\_(ツ)_/¯
It might not be what you were looking for, but who knows, maybe it will be useful for someone.
(Obviously a good way to deal with these things is to use a structured logging solution, like Serilog, and add enrichers to capture additional request information, or add stuff manually to a diagnostic context. But setting that up is quite a bit more involved.)
I have a Google Cloud Function that seems to time out after being inactive for a certain amount of time, or after I re-deploy it. Subsequent calls to the endpoint work just fine; it's just the initial invocation that doesn't work. The following is an oversimplified version of my cloud function. I basically use an Express app as the handler. Perhaps the issue is with the Express app not running the first time around, but running on subsequent invocations?
const express = require('express');
const app = express();
const cors = require('cors');

app.use(cors());

app.get('/health', (req, res) => {
  res.send('OK');
});

module.exports = app;
I currently have the timeout set to 60s, and a route like the health route shouldn't take that long.
Some interesting log entries:
"Function execution took 60004 ms, finished with status: 'timeout'"
textPayload: "Error: Retry total timeout exceeded before any response was received
at repeat (/srv/functions/node_modules/google-gax/build/src/normalCalls/retries.js:80:31)
at Timeout.setTimeout [as _onTimeout] (/srv/functions/node_modules/google-gax/build/src/normalCalls/retries.js:113:25)
at ontimeout (timers.js:436:11)
at tryOnTimeout (timers.js:300:5)
at listOnTimeout (timers.js:263:5)
at Timer.processTimers (timers.js:223:10)"
Cloud Function execution time is limited by the timeout duration, which you can specify at function deployment time. By default, a function times out after 1 minute.
As it is stated in the official documentation:
When function execution exceeds the timeout, an error status is immediately returned to the caller. CPU resources used by the timed-out function instance are throttled and request processing may be immediately paused. Paused work may or may not proceed on subsequent requests, which can cause unexpected side effects.
Note that this period can be extended up to 9 minutes. In order to set the functions timeout limit you can use this gcloud command:
gcloud functions deploy FUNCTION_NAME --timeout=TIMEOUT FLAGS...
More details about your options could be found over here.
But if your code takes a long time to execute, you may also want to consider another serverless option, like Cloud Run.
A Google Cloud Function can be thought of as the event handler for an incoming event request. A cloud function can be triggered from a REST request, pub/sub or cloud storage. For a REST request, consider the function that you supply as the one and only "handler" that the function offers.
The code that you supply (assuming Node.js) is a function that is passed an Express request object and a response object. In the body of the function, you are responsible for handling the request.
Specifically, your Cloud Function should not set up express or attempt to otherwise modify the environment. The Cloud Function provides the environment to be called externally and you provide the logic to be called. Everything else (scaling etc) is handled by Google.
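As a rough sketch of that idea, here is a single exported handler in the shape that HTTP-triggered Cloud Functions expect (the exported name health is an assumption):

// index.js: one exported handler per HTTP-triggered function.
exports.health = (req, res) => {
  res.status(200).send('OK');
};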
My head is spinning because of the following issue. I'm accessing my webservice (running on localhost:4434) with AngularJS, and if something goes wrong, the webservice sends a 400 response containing a JSON body with a message that tells you what exactly went wrong.
The problem is that I cannot access the message on the client. It is almost as if it never reaches the client (this isn't the case; I've already confirmed that it does reach the client). This is the Angular code that I use on the client side:
$scope.create = function() {
  $http.post('http://localhost:4434/scrapetastic/foo', $scope.bar)
    .success(function(data, status, headers, config) {
      console.log("Call to log: " + status);
      console.log("Call to log: " + data);
    })
    .error(function(data, status) {
      console.log("Error|Data:" + data);
      console.log(status);
    });
};
If I submit malformed data, a corresponding error response is generated, but as I said ... somehow I cannot access the message that is contained in the response body. This is what I get: the error callback fires, but data is empty and the status is 0.
I've tried all sorts of things but am seriously stuck now... Perhaps someone has an idea of how to access the payload of the response, or at least what to do next? I'm also dealing with CORS; perhaps it has something to do with that.
Thanks!
I'm going to take a wild guess here and say that your problem is a cross-origin (same-origin policy) issue.
Not only do you not have the data variable, but as far as I can tell from your screenshot, status == 0.
Your screenshot also says Origin: http://localhost, which makes this request cross-origin (since the port is different). That would explain why status is 0.
Edit: You can use JSONP to get around the issue.
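A rough sketch of what that would look like with the pre-1.6 AngularJS $http.jsonp API; note that JSONP only works for GET requests (so it doesn't cover the POST in your example) and the server has to support a callback query parameter:

// JSON_CALLBACK is the placeholder AngularJS (pre-1.6) replaces with its
// generated callback name.
$http.jsonp('http://localhost:4434/scrapetastic/foo?callback=JSON_CALLBACK')
  .success(function (data) {
    console.log('Data:', data);
  })
  .error(function (data, status) {
    console.log('Error:', status);
  });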