Server-side logging of HTML generated by a dynamic site - Apache

I'm curious how to capture, server-side, the dynamic HTML generated via PHP, ASPX, etc., just as page requests themselves are logged.
Is this a scripting task (using Python, for instance), a web-server log format configuration setting (I've searched Apache's docs but could only find the usual log formatting directives), or a totally different approach?

Related

Is It Possible to Have a Meaningful/Secure Content Security Policy With Next.js + Styled-Components and a Static Host (e.g. S3)?

Recently Google's Lighthouse tool alerted me to the fact that I wasn't providing a Content Security Policy. However, when I try to add one (or at least one without the word "unsafe" in it), I wind up with a bunch of violations, seemingly coming from Next.js and Styled-Components.
Both libraries seem to use dynamic script/style tags which violate any sane CSP. But the only way I've found to work around them is to use a "nonce". However, that seems to require having an actual server running: if you're using Next to generate static files (to host on a static host like AWS S3), you can't provide nonces.
My question is simple: am I missing anything? Is there some non-nonce-based way, or a static-host-nonce-based way, to host a site on S3 using Next.js and Styled Components?
Or is it just impossible to use those libraries together with a strict CSP (without a server-generated nonce)?
I hope you:
do not use inline styles like <tag style='display:none;'> or JS calls like element.setAttribute('style', ...),
do not use built-in inline event handlers like <tag onclick='...'> or JS navigation like <a href='javascript:void(0)'>,
because all of the above require 'unsafe-inline' in the style/script directives respectively, since the 'unsafe-hashes' token is not supported by Safari and is only buggily supported by Firefox.
For Single Page Applications (SPAs) without server-side rendering, 'nonce-value' is not useful: a nonce must be regenerated on every page load, but an SPA does not reload the page, it only partially updates its contents.
For serverless apps (such as static file hosting) and SPAs, you can use 'hash-value' instead of 'nonce-value' to allow inline scripts and styles.
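For example, a hash-based policy for known inline script and style blocks might look like this (the hash placeholders must be replaced with the base64-encoded SHA-256 of the exact contents of each inline block):

Content-Security-Policy: script-src 'self' 'sha256-<base64-hash-of-your-inline-script>'; style-src 'self' 'sha256-<base64-hash-of-your-inline-style>'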
If you use Webpack, there are plugins for this: for instance, csp-html-webpack-plugin generates the content for your Content Security Policy meta tag and inserts the correct data into the HTML template generated by html-webpack-plugin. All inline JS and CSS is hashed and inserted into the policy.
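A minimal sketch of such a setup, assuming both plugins are installed (the directive values are illustrative, not a recommendation):

// webpack.config.js
const HtmlWebpackPlugin = require('html-webpack-plugin');
const CspHtmlWebpackPlugin = require('csp-html-webpack-plugin');

module.exports = {
  plugins: [
    // generates the HTML template the policy is injected into
    new HtmlWebpackPlugin(),
    // hashes inline JS/CSS and writes the CSP meta tag into that template
    new CspHtmlWebpackPlugin({
      'default-src': ["'self'"],
      'script-src': ["'self'"],
      'style-src': ["'self'"]
    })
  ]
};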

Redirect URL based on ID using Lua

I'm extremely new to Lua as well as nginx. We're trying to set up authentication.
I'm trying to write a script that could be injected into my NGINX configuration and would listen on an endpoint.
My API would give me a token. I would receive this token and check whether it exists in my YAML (or possibly JSON) file.
Based on the privilege specified in the file, I would like to redirect to the respective URL with the necessary permissions.
Any help would be highly appreciated.
First of all, nginx on its own has no Lua integration whatsoever; so if you just have an nginx server, you can't script it in Lua at all.
What you probably mean is OpenResty, a.k.a. the lua-nginx-module, which lets you run Lua code in nginx to handle requests programmatically.
Assuming that you have a working nginx + lua-nginx-module installed and running, what you're looking for is the rewrite_by_lua directive, which lets you redirect the client to a different address based on their request.
(Realistically, you'd likely want to use rewrite_by_lua_block or rewrite_by_lua_file instead)
Within the Lua block, you can make API calls, execute some logic, etc. and then redirect to some URI internally with ngx.exec or send an actual redirect to the client with ngx.redirect.
If you want to read in a JSON or YAML file, you should do so in init_by_lua so the file gets loaded only once and then stays in memory. The lua-cjson module comes bundled with OpenResty, so you can just use it to parse your JSON data into a Lua table.
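A rough sketch of how these pieces fit together (the file path, the token-to-privilege mapping, and the target URLs are all placeholder assumptions):

# nginx.conf (inside the http block), assuming OpenResty / lua-nginx-module
init_by_lua_block {
    local cjson = require "cjson"
    local f = assert(io.open("/etc/nginx/tokens.json", "r"))
    -- global table: loaded once at startup, inherited by the workers
    tokens = cjson.decode(f:read("*a"))
    f:close()
}

server {
    listen 8080;

    location /auth {
        rewrite_by_lua_block {
            local token = ngx.var.arg_token     -- e.g. /auth?token=abc123
            local privilege = token and tokens[token]
            if privilege == "admin" then
                return ngx.redirect("/admin/")  -- 302 sent to the client
            elseif privilege == "user" then
                return ngx.exec("/user/")       -- internal redirect
            else
                return ngx.exit(ngx.HTTP_FORBIDDEN)
            end
        }
    }
}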

Script for checking whether Siebel application documents are viewable

As part of a routine health check in our Siebel application, we open a few documents from different navigations and check whether they are viewable in the browser.
If I want to automate this, can we prepare a script that returns the response code for each document?
For example, a 404 error code means a document is not available; likewise, an HTTP response code between 200 and 400 means everything is all right.
OR
Are there any other ways in which I can know whether the documents are viewable in a browser?
Given that the browser accesses the documents directly, it would be best to record the manually executed events and then replay them, using tools like JMeter or SoapUI. As it is probably a few requests at most, you can also look at recreating them with wget or curl.
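For example, curl can be told to discard the body and print only the HTTP status code (the URL is a placeholder):

curl -s -o /dev/null -w "%{http_code}" https://siebel.example.com/docs/report.pdf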
It is also possible to make this part of a larger test approach and include the checks in an open-source framework like Robot Framework. It has an HTTP requests library that allows you to perform tests using HTTP requests, in addition to the web service, web browser, database and many other types of libraries that enable an integrated test approach.
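As a sketch, a check using the robotframework-requests library could look like this (the URL and expected status are assumptions):

*** Settings ***
Library           RequestsLibrary

*** Test Cases ***
Document Is Viewable
    ${response}=    GET    https://siebel.example.com/docs/report.pdf
    Status Should Be    200    ${response}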

How to Prevent Cross-Site Request Forgery Attack?

We ran Burp Suite on our product and found some security vulnerabilities. The tool detected some of the CGI files which are vulnerable to Cross-Site Request Forgery attacks (CSRF).
As usual I did search for CSRF protection module on CPAN and found CGI::Application::Plugin::ProtectCSRF.
I'm wondering how I can integrate this module into our application in a generalized way. The documentation is not clear to me. How do I configure this module, with minimal changes, to make sure the whole application is secured from CSRF?
I also came across mod_csrf (an Apache module to prevent CSRF). Is installing this module and setting the below in the Apache configuration file enough to prevent CSRF?
<VirtualHost>
    CSRF_Enable on
    CSRF_Action deny
    CSRF_EnableReferer off
</VirtualHost>
I can understand that you found the documentation for CGI::Application::Plugin::ProtectCSRF unclear: it is a little impregnable.
All that the Perl module appears to do is add a hidden field named _csrf_id to each HTML form, with a random value derived from various sources and encoded through SHA1. The protection comes from requiring that the client return the same value to the server.
It is quite nicely coded, but it uses custom subroutine attributes, and the documentation for the attributes pragma says this:
WARNING: the mechanisms described here are still experimental. Do not rely on the current implementation
I cannot tell from my quick review whether the subroutine attributes are essential to the module, but I recommend that you use the Apache mod_csrf module instead, which is likely to be more thoroughly tested than the Perl module, and has proper documentation.
Since we were using an in-house server, not Apache, mod_csrf was not possible to implement.
I ditched the ProtectCSRF module as the documentation was unclear.
I solved it by doing below:
Add an element to the header template that is common to all pages; this element contains the CSRF token passed from the server
Create a JavaScript function and bind it to the onload event (see the sketch after this list). This JS function does the following:
a) Find the forms on the current page
b) If forms are found, create a hidden "input" element and append it to each form
c) Take the value that was put in the header and assign it to each element created above
d) Now all forms have a hidden input element containing the CSRF token from point 1
Now whenever a form is submitted, this hidden element is submitted along with it, and we verify its value on the server side. If the tokens do not match, we treat it as CSRF, throw an error and block the request.
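A rough sketch of that onload function (the meta tag and field names are assumptions based on the description above; adjust them to match what your server renders and verifies):

window.addEventListener('load', function () {
    // 1. read the token that the server rendered into the common header
    var token = document.querySelector('meta[name="csrf-token"]').content;
    // 2. find all forms on the current page
    var forms = document.getElementsByTagName('form');
    for (var i = 0; i < forms.length; i++) {
        // 3. create a hidden input carrying the token and append it to each form
        var input = document.createElement('input');
        input.type = 'hidden';
        input.name = 'csrf_token';  // the field name the server checks
        input.value = token;
        forms[i].appendChild(input);
    }
});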

Who knows which files should be included in a website?

When the browser requests a website, any website from an HTTP server, which of the two parses the site's content in order to know which other files need to be included in the webpage?
What I mean is this:
the browser asks for the HTML file, then observes that it needs to import some external CSS files, and it is the one that requests them.
OR
the HTTP server, when faced with a request for a website, parses (or already knows) which files need to be linked to a certain webpage and sends them along with the HTML page?
I'm guessing the first case is the correct one, but if someone can confirm and maybe clarify it, I'd appreciate it.
It's all done by the client (which is usually a browser). When it sees <script>, <iframe>, <img>, <link>, etc. tags that reference other documents, it downloads them if necessary.
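For instance, each reference in a fragment like this triggers a separate request once the browser parses the HTML (the file names are placeholders):

<!-- each of these references causes the browser to make another request -->
<link rel="stylesheet" href="styles.css">
<script src="app.js"></script>
<img src="logo.png" alt="logo">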
According to Wikipedia:
The primary function of a web server is to cater web page to the request of clients using the Hypertext Transfer Protocol (HTTP). This means delivery of HTML documents and any additional content that may be included by a document, such as images, style sheets and scripts.
and
The primary purpose of a web browser is to bring information resources to the user ("retrieval" or "fetching"), allowing them to view the information ("display", "rendering"), and then access other information ("navigation", "following links").
It is the browser that parses the HTML and requests the associated content.