How to modify logging fields in express-pino-logger

I am using express-pino-logger for my logging system. It all works fine, but it logs a lot of unwanted data - the entire request. How can I restrict logging to specific fields?
var expressPino = require('express-pino-logger')({ prettyPrint: { colorize: true } });
app.use(expressPino);
app.get('/test', function (req, res) {
    req.log.info('Something');
    res.send('done');
});
The code above logs a lot of unwanted data, like the JSON below:
{"level":30,"time":1559044530446,"pid":2462,"hostname":"PATRALTOP-46","prettyPrint":{"colorize":true},"req":{"id":10,"method":"GET","url":"/user/profile","headers":{"host":"localhost:3011","connection":"keep-alive","user-agent":"Mozilla36","accept":"*/*","accept-encoding":"gzip, deflate, br","accept-language":"en-US,en;q=0.9,ta;q=0.8","cookie":"menubShQ","if-none-match":"W2b7bpE08jO8lVNTEV/tg9OIRMd3fI"},"remoteAddress":"::1","remotePort":58260},"res":{"statusCode":304,"headers":{"x-powered-by":"Express","etag":"W2b7b-OpE08jO8lVNTEV/tg9OIRMd3fI"}},"responseTime":106,"msg":"something","v":1}
So how can I include only specific fields, or remove fields, while logging?

Set base: undefined in the options you are passing to the Pino instance. It will remove pid and hostname from each log.
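For example, a minimal sketch (express-pino-logger forwards these options to the underlying Pino instance):

// A minimal sketch: base: undefined strips pid and hostname from every log line.
var expressPino = require('express-pino-logger')({
    base: undefined,
    prettyPrint: { colorize: true }
});
app.use(expressPino);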
There are more options available to manipulate the logs.
For more details, check out the API docs:
https://github.com/pinojs/pino/blob/HEAD/docs/api.md#base-object

The latest version also supports the redact feature:
https://github.com/pinojs/pino/blob/master/docs/redaction.md#redaction
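A sketch of redaction, assuming you want to drop noisy request fields (the paths here are illustrative, matching the headers in the log output above):

// A sketch: redact (or fully remove) specific fields from each log line.
var expressPino = require('express-pino-logger')({
    redact: {
        paths: ['req.headers.cookie', 'req.headers["accept-encoding"]'],
        remove: true // drop the fields entirely instead of printing "[Redacted]"
    }
});
app.use(expressPino);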


Why do some invalid MIME types trigger a "TypeError," and other invalid MIME types bypass the error and trigger an unprompted download?

I'm making a fairly simple Express app with only a few routes. My question isn't about the app's functionality but about a strange bit of behavior of an Express route.
When I start the server and use the /search/* route, or any route that takes in a parameter, and I apply one of these four content-types to the response:
res.setHeader('content-type', 'plain/text');
res.setHeader('content-type', 'plain/html');
res.setHeader('content-type', 'html/plain');
res.setHeader('content-type', 'html/text');
the parameter is downloaded as a file, without any prompting. So using /search/foobar downloads a file named "foobar" with a size of 6 bytes and an unsupported file type. Now, I understand that none of these four types are actual MIME types; I should be using either text/plain or text/html. But why the download? Those two MIME types behave like they should, and the following MIME types with a type but no subtype all fail like they should: they each return an error of TypeError: invalid media type:
res.setHeader('content-type', 'text');
res.setHeader('content-type', 'plain');
res.setHeader('content-type', 'html');
Why do some invalid types trigger an error, and other invalid types bypass the error and trigger a download?
What I've found out so far:
I found in Express 4.x docs that res.download(path [, filename]) transfers the file at path as an “attachment,” and will typically prompt the user for the download, but this download is neither prompted nor intentional.
I wasn't able to find any situation like this in the Express docs (or here on SO) where running a route caused a file to automatically download to your computer.
At first I thought the line res.send(typeof(res)); was causing the download, but after commenting out lines one at a time and rerunning the server, I figured out that the download happens only when the content-type is set to 'plain/text'. It doesn't matter what goes inside res.send(); when the content-type is plain/text, the text after /search/ is downloaded to my machine.
Rearranging the routes reached the same result (everything worked as it should except for the download).
The app just hangs at whatever route was reached before /search/foo, but the download still comes through.
My code:
'use strict';
var express = require('express');
var path = require('path');
var app = express();

app.get('/', function (req, res) {
    res.sendFile(path.join(__dirname, 'index.html'));
});

app.get('/search', function (req, res) {
    res.send('search route');
});

app.get('/search/*', function (req, res, next) {
    res.setHeader('content-type', 'plain/text');
    var type = typeof res;
    var reqParams = req.params;
    res.send(type);
});

var port = process.env.PORT || 3000;
var server = app.listen(port, function () {
    console.log('app listening on port ' + port + '!');
});

module.exports = server;
Other Details
Express version 4.15.2
Node version 4.7.3
using Cloud9
I'm an Express newbie
my repo is here, under the branch "so_question"
Why do some invalid types trigger an error...
Because a MIME-type has a format it should adhere to (documented in RFC 2045), and the ones triggering the error don't match that format.
The format looks like this:
type "/" subtype *(";" parameter)
So there's a mandatory type, a mandatory slash, a mandatory subtype, and optional parameters prefixed by a semicolon.
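As a rough illustration (this is not the actual check Express performs internally), here is a minimal test mirroring that shape:

// A rough illustration only: a minimal syntactic test of the
// type "/" subtype shape from RFC 2045.
function looksLikeMediaType(value) {
    return /^[^\s\/]+\/[^\s\/]+(;.*)?$/.test(value);
}
console.log(looksLikeMediaType('plain/text')); // true: syntactically valid, semantically unrecognized
console.log(looksLikeMediaType('text'));       // false: no subtype, so "invalid media type"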
However, when a MIME type matches that format, it's only syntactically valid, not necessarily semantically, which brings us to the second part of your question:
...and other invalid types bypass the error and trigger a download?
That follows from what is written in RFC 2049:
Upon encountering any unrecognized Content-Type field, an implementation must treat it as if it had a media type of "application/octet-stream" with no parameter sub-arguments. How such data are handled is up to an implementation, but likely options for handling such unrecognized data include offering the user to write it into a file (decoded from its mail transport format) or offering the user to name a program to which the decoded data should be passed as input.
(emphasis mine)
The order in which you define your routes matters a lot in Express; you probably need to move your default '/' route so it comes after the '/search/*' route.

casperjs/slimerjs: get request headers

I'm trying to build a crawler using CasperJS. Some requests need raw header editing: I have to get the raw POST data, cookies, etc., and once I have them, I'd like to modify them (still raw) and make another request with those modified headers. But I can't find a way to do that.
I've found how to retrieve cookies using PhantomJS, but I didn't find anything in the CasperJS/SlimerJS documentation.
Thank you for your help
You can listen for the page.resource.requested event and access the headers property of the requestData:
var casper = require('casper').create();
var utils = require('utils');

casper.start('https://example.com/');

casper.on('page.resource.requested', function (requestData, networkRequest) {
    utils.dump(requestData.headers);
});

casper.run();
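To then modify a header before the request goes out, a sketch (assuming the networkRequest object PhantomJS/SlimerJS passes to this event, which exposes setHeader and abort):

// A sketch, assuming PhantomJS/SlimerJS's networkRequest API:
// rewrite a header on an outgoing request before it is sent.
casper.on('page.resource.requested', function (requestData, networkRequest) {
    if (requestData.url.indexOf('example.com') !== -1) {
        networkRequest.setHeader('X-Custom-Header', 'modified-value');
    }
});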

Receiving "Invalid policy document or request headers!"

I am attempting to upload a file to S3 following the examples provided in your documentation and source files. Unfortunately, I'm receiving the following errors when attempting an upload:
[Fine Uploader 5.3.2] Invalid policy document or request headers!
[Fine Uploader 5.3.2] Policy signing failed. Invalid policy document or request headers!
I found a few posts on here with similar errors, but those solutions didn't help me.
Here is my jQuery:
<script>
$('#fine-uploader').fineUploaderS3({
    request: {
        endpoint: "http://mybucket.s3.amazonaws.com",
        accessKey: "changeme"
    },
    signature: {
        endpoint: "endpoint.php"
    },
    uploadSuccess: {
        endpoint: "success.html"
    },
    template: 'qq-template'
});
</script>
(Please note that I changed the keys/bucket names for security's sake.)
I used your endpoint-cors.php as a model and have included the portions that I modified here:
require 'assets/aws/aws-autoloader.php';
use Aws\S3\S3Client;
// These assume you have the associated AWS keys stored in
// the associated system environment variables
$clientPrivateKey = $_ENV['changeme'];
// These two keys are only needed if the delete file feature is enabled
// or if you are, for example, confirming the file size in a successEndpoint
// handler via S3's SDK, as we are doing in this example.
$serverPublicKey = $_ENV['AWS_SERVER_PUBLIC_KEY'];
$serverPrivateKey = $_ENV['AWS_SERVER_PRIVATE_KEY'];
// The following variables are used when validating the policy document
// sent by the uploader.
$expectedBucketName = $_ENV['mybucket'];
// $expectedMaxSize is the value you set the sizeLimit property of the
// validation option. We assume it is `null` here. If you are performing
// validation, then change this to match the integer value you specified
// otherwise your policy document will be invalid.
// http://docs.fineuploader.com/branch/develop/api/options.html#validation-option
$expectedMaxSize = (isset($_ENV['S3_MAX_FILE_SIZE']) ? $_ENV['S3_MAX_FILE_SIZE'] : null);
I also changed this:
// Only needed in cross-origin setups
function handleCorsRequest() {
// If you are relying on CORS, you will need to adjust the allowed domain here.
header('Access-Control-Allow-Origin: http://test.mydomain.com');
}
The POST seems to work:
POST http://test.mydomain.com/somepath/endpoint.php 200 OK
318ms
...but that's where the success ends.
I think part of the problem is that I'm not sure what to enter for "clientPrivateKey". Is that my "Secret Access Key" I set up with IAM?
And I'm definitely unclear on where I get the serverPublicKey and serverPrivateKey. Where am I generating a key-pair on the S3? I've combed through the docs, and perhaps I missed it.
Thank you in advance for your assistance!
First off, you are using endpoint-cors.php in a non-CORS environment. Communication between the browser and your endpoint appears to be same-origin, based on the URL of your signature endpoint. Switch to the endpoint.php example.
Regarding your questions about the keys: you should create two distinct IAM users, one for client-side operations (heavily restricted) and one for server-side operations (an admin user). For each user, you'll have an access key (public) and a secret key (private). You always supply Fine Uploader with your client-side public key, and use your client-side private key to sign requests server-side. To perform other, more restricted operations (such as deleting files), you should use your server user's keys.
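For context, here is a conceptual sketch (in Node.js, not the endpoint.php example itself) of what signing the policy with the client-side secret key means under AWS signature version 2:

// A conceptual sketch only: the policy document is base64-encoded, then
// HMAC-SHA1-signed with the client-side IAM user's secret key.
var crypto = require('crypto');

function signPolicy(policy, clientSecretKey) {
    var base64Policy = Buffer.from(JSON.stringify(policy)).toString('base64');
    var signature = crypto.createHmac('sha1', clientSecretKey)
        .update(base64Policy)
        .digest('base64');
    return { policy: base64Policy, signature: signature };
}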

Worklight: send client logs to server

I am using Worklight 6.1 and I'm trying to send logs created on my client to the server, so that I can view the logs in case the application crashes. Here is what I have done (based on this link: http://pic.dhe.ibm.com/infocenter/wrklight/v5r0m6/index.jsp?topic=%2Fcom.ibm.worklight.help.doc%2Fdevref%2Fc_using_client_log_capture.html):
Set the following in wlInitOptions.js:
logger: {
    enabled: true,
    level: 'debug',
    stringify: true,
    pretty: false,
    tag: {
        level: false,
        pkg: true
    },
    whitelist: [],
    blacklist: [],
    nativeOptions: {
        capture: true
    }
},
In the client, I call the following where I want to send a log:
WL.Logger.error("test");
WL.Logger.send();
Implemented the necessary adapter, WLClientLogReceiver-impl.js, with the log function based on the link.
Unfortunately I can't see the log in the messages.log. Anyone have any ideas?
I have also tried to send the log to the analytics DB based on this link: http://www-01.ibm.com/support/knowledgecenter/SSZH4A_6.2.0/com.ibm.worklight.monitor.doc/monitor/c_op_analytics_data_capture.html.
What I did is:
WL.Analytics.log( { "_activity" : "myCustomActivity" }, "My log" );
However, no new entry is added to the app_Activity_Report table. Is there something I am missing?
Couple of things:
Follow Idan's advice in his comments and be sure you're looking at the correct docs. He's right; this feature has changed quite a bit between versions.
You got 90% of the configuration, but you're missing the last little bit. Simply sending logs to your adapter is not enough for them to show up in messages.log. You need to do one of the following to get them into messages.log:
set the audit="true" attribute in the <procedure> tag of the WLClientLogReceiver.xml file, or
log the uploaded data explicitly in your adapter implementation (see the sketch below). Beware, however, that the WL.Logger API on the server is subject to the application server's logging configuration.
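A minimal sketch of that second option, assuming the log(deviceInfo, logMessages) procedure signature described in the linked client log capture documentation:

// WLClientLogReceiver-impl.js - a minimal sketch, not the exact shipped sample.
function log(deviceInfo, logMessages) {
    // Explicitly write each uploaded client log message to the server's log.
    logMessages.forEach(function (message) {
        WL.Logger.info(JSON.stringify(deviceInfo) + ' ' + JSON.stringify(message));
    });
    return true;
}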
Also, WL.Analytics.log data does not go into the reports database. The only public API that populates the database is WL.Client.logActivity. I recommend sticking with the WL.Logger and WL.Analytics APIs.
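For completeness, the call that does populate that table looks like this:

// Client-side: logActivity is the API that writes to the app activity report table.
WL.Client.logActivity("myCustomActivity");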

Caching JSON with Cloudflare

I am developing a backend system for my application on Google App Engine.
My application and backend server communicate with JSON, like http://server.example.com/api/check_status/3838373.json or just http://server.example.com/api/check_status/3838373/
And I am planning to use CloudFlare for caching JSON pages.
Which one should I use for the header?
Content-type: application/json
Content-type: text/html
Will CloudFlare cache my server's responses to reduce my costs? I won't be serving CSS, images, etc.
The standard Cloudflare cache level (under your domain's Performance Settings) is set to Standard/Aggressive, meaning it caches only certain types by default: scripts, stylesheets, images. Aggressive caching won't cache normal web pages (i.e. at a directory location or *.html) and won't cache JSON. All of this is based on the URL pattern (e.g. does it end in .jpg?), regardless of the Content-Type header.
The global setting can only be made less aggressive, not more, so you'll need to set up one or more Page Rules to match those URLs, using Cache Everything as the custom cache rule.
http://blog.cloudflare.com/introducing-pagerules-advanced-caching
BTW I wouldn't recommend using an HTML Content-Type for a JSON response.
By default, Cloudflare does not cache JSON files. I ended up configuring a new page rule:
https://example.com/sub-directory/*.json*
Cache level: Cache Everything
Browser Cache TTL: set a timeout
Edge Cache TTL: set a timeout
Hope it saves someone's day.
The new workers feature ($5 extra) can facilitate this:
Important point:
Cloudflare normally treats normal static files as pretty much never expiring (or maybe it was a month - I forget exactly).
So at first you might think "I just want to add .json to the list of static extensions". This is likely NOT what you want with JSON - unless it really rarely changes, or is versioned by filename. You probably want something like 60 seconds or 5 minutes, so that if you update a file it'll update within that time, but your server won't get bombarded with individual requests from every end user.
Here's how I did this with a worker to intercept all .json extension files:
// Note: there could be tiny cut and paste bugs in here - please fix if you find!
addEventListener('fetch', event => {
    event.respondWith(handleRequest(event));
});

async function handleRequest(event) {
    let request = event.request;
    let ttl = undefined;
    let cache = caches.default;
    let url = new URL(event.request.url);
    let shouldCache = false;

    // cache JSON files with custom max age
    if (url.pathname.endsWith('.json')) {
        shouldCache = true;
        ttl = 60;
    }

    // look in cache for existing item
    let response = await cache.match(request);
    if (!response) {
        // fetch URL
        response = await fetch(request);
        // if the resource should be cached then put it in cache using the cache key
        if (shouldCache) {
            // clone response to be able to edit headers
            response = new Response(response.body, response);
            if (ttl) {
                // https://developers.cloudflare.com/workers/recipes/vcl-conversion/controlling-the-cache/
                response.headers.append('Cache-Control', 'max-age=' + ttl);
            }
            // put into cache (need to clone again)
            event.waitUntil(cache.put(request, response.clone()));
        }
    }
    return response;
}
You could do this with mime-type instead of extension - but it'd be very dangerous because you'd probably end up over-caching API responses.
Also, if you're versioning by filename - e.g. products-1.json / products-2.json - then you don't need to set the header for max-age expiration.
You can cache your JSON responses on Cloudflare similar to how you'd cache any other page - by setting the Cache-Control headers. So if you want to cache your JSON for 60 seconds on the edge (s-maxage) and the browser (max-age), just set the following header in your response:
Cache-Control: max-age=60, s-maxage=60
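For example, a sketch with an Express-style handler (the route and payload here are made up for illustration):

// A sketch: attach the header so Cloudflare's edge (s-maxage) and the
// browser (max-age) each cache this JSON response for 60 seconds.
app.get('/api/check_status/:id', function (req, res) {
    res.set('Cache-Control', 'max-age=60, s-maxage=60');
    res.json({ status: 'ok' });
});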
You can read more about different cache control header options here:
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control
Please note that different Cloudflare plans have different values for the minimum edge cache TTL they allow (the Enterprise plan allows as low as 1 second). If your headers have a value lower than that, then I guess they might be ignored. You can see the limits here:
https://support.cloudflare.com/hc/en-us/articles/218411427-What-does-edge-cache-expire-TTL-mean-#summary-of-page-rules-settings