In order to support running my ASP.NET Core application on Linux with a reverse proxy (nginx in this case), I had to add the following code snippet:
// Forward headers in order to be able to operate behind a reverse proxy
app.UseForwardedHeaders(new ForwardedHeadersOptions
{
    ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto
});
// The above does not appear to be enough to get the right redirect URI result when
// logging in with OpenID Connect. This code snippet from
// https://github.com/aspnet/Docs/issues/2384 fixed it.
// Header names used below; the built-in forwarded-headers middleware does not handle X-Forwarded-PathBase.
const string XForwardedPathBase = "X-Forwarded-PathBase";
const string XForwardedProto = "X-Forwarded-Proto";
app.Use((context, next) =>
{
    if (context.Request.Headers.TryGetValue(XForwardedPathBase, out StringValues pathBase))
    {
        context.Request.PathBase = new PathString(pathBase);
    }
    if (context.Request.Headers.TryGetValue(XForwardedProto, out StringValues proto))
    {
        context.Request.Protocol = proto;
    }
    return next();
});
I'm unable to find any definitive advice on whether I can leave this enabled by default, or whether I should put it behind an explicit configuration flag.
It seems to me that this could have strange effects if these headers were supplied when a reverse proxy isn't being used. I can't think of a way it could be exploited, but perhaps I'm missing something.
So, is there any advantage to introducing a flag for this specific piece of configuration, aside perhaps from a very minor performance improvement?
No, don't leave this enabled without a reverse proxy; it's dangerous. Clients could use these headers to supply false values (spoofing) and trick any app logic you have that checks them.
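One way to avoid that risk while keeping the code around is to gate the middleware on a configuration value, so it only runs in deployments that actually sit behind the proxy. A minimal sketch, assuming IConfiguration is available in Configure; the "Proxy:Enabled" key is just an illustrative name:
// Only trust forwarded headers when the (hypothetical) Proxy:Enabled flag is set,
// e.g. in the appsettings for environments that run behind nginx.
if (configuration.GetValue<bool>("Proxy:Enabled"))
{
    app.UseForwardedHeaders(new ForwardedHeadersOptions
    {
        ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto
    });
}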
I have a LitElement-based SPA with an ASP.NET Core backend which hosts the static files and a REST API for loading data into the SPA.
The user starts at /index.html and the client-side router takes them to, e.g.,
/data-analysis or /dashboard.
When the user then presses the browser's refresh button, they get a 404, which is to be expected since the server does not know any of these subpaths.
I read elsewhere that I have to take care of this on the server side, so I came up with this middleware in my Startup.cs Configure method:
app.Use(async (c, next) =>
{
    //Identify subpaths...
    c.Request.Path = "/index.html";
    await next();
});
For all subpaths it brings the user back to index.html, which is nice. Better still would be to let the client side know which subpath to restore. For this I added the following lines to the code above:
var url = c.Request.Path.Value;
c.Request.QueryString = c.Request.QueryString.Add("page", $"{url.Substring(1)}");
I expected the client to see a window.location of, e.g.,
.../index.html?page=data-analysis
but the query string never arrives; window.location.search is always empty on the client side.
Is this possible at all or am I misunderstanding something here?
Nicolas
Studying Microsoft's documentation about the URL rewriting middleware led me to the conclusion that I should redirect and not rewrite! The following code produces the desired result. Note that the 'return' statement is critical, because no further middleware should be called!
app.Use(async (c, next) =>
{
    var url = c.Request.Path.Value;
    // Identify subpaths: as an example, treat any extension-less path other than "/"
    // as a client-side route; adjust this check to match your own routes.
    if (url != "/" && !System.IO.Path.HasExtension(url))
    {
        c.Response.Redirect($"/index.html?page={url.Substring(1)}");
        return;
    }
    await next();
});
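For comparison, the URL rewriting middleware mentioned above can express the same redirect declaratively. A rough sketch, assuming extension-less paths are client-side routes; the regex is only an illustration and would need adjusting to your actual routes:
// requires: using Microsoft.AspNetCore.Rewrite;
// Redirect e.g. /data-analysis to /index.html?page=data-analysis with a 302,
// using a placeholder pattern for whatever identifies your client-side routes.
var rewriteOptions = new RewriteOptions()
    .AddRedirect(@"^([a-z0-9\-]+)$", "index.html?page=$1");
app.UseRewriter(rewriteOptions);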
During development, I have used Swagger on the server side of my Blazor WebAssembly app, always launching (debug) using Kestrel instead of IIS Express.
Routing worked as expected, all my components routed properly, and if I manually typed /swagger, I got to the Swagger page. All good.
We have deployed under IIS on our pre-prod servers. The server side and the Blazor WebAssembly app (client) work as expected and are usable; however, my /swagger URL gets rewritten (I assume) to go somewhere in my app instead of being passed through to Swagger, and obviously there isn't any component that answers to /swagger.
My only guess is that, when hosted on IIS, the ASP.NET Core app takes care of telling IIS what to rewrite and how (similar to the configs that could be provided through a web.config for a "standalone" deployment).
I can't find how to specify exceptions, I've been following the doc at
https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/blazor/webassembly?view=aspnetcore-3.1#iis
Any idea how I could add an exception for /swagger?
EDIT:
It turns out it works without issues in Chrome; only Firefox has the unwanted behavior. If I clear my cache or use incognito mode, the issue does not happen in Firefox. So it seems that Firefox caches some stuff and tries to send my URL input to the Blazor WASM app instead of letting it go through to the server. I will debug some more with the dev tools and Fiddler open to try and figure it out, and will report back.
It turns out this is caused by the service-worker.js file that is published. It is different in dev from what gets published (which makes sense).
During my debugging I was able to reproduce the issue on all browsers (Edge, Chrome and Firefox), regardless of being in incognito/private mode or not.
Once the service worker is running, it handles requests by serving index.html of the Blazor WebAssembly app from the cache.
If you go into your Blazor WebAssembly client "wwwroot" folder, you'll find a service-worker.js and a service-worker.published.js. In service-worker.published.js, you will find a function that looks like this:
async function onFetch(event) {
    let cachedResponse = null;
    if (event.request.method === 'GET') {
        // For all navigation requests, try to serve index.html from cache
        // If you need some URLs to be server-rendered, edit the following check to exclude those URLs
        const shouldServeIndexHtml = event.request.mode === 'navigate'
            && !event.request.url.includes('/connect/')
            && !event.request.url.includes('/Identity/');
        const request = shouldServeIndexHtml ? 'index.html' : event.request;
        const cache = await caches.open(cacheName);
        cachedResponse = await cache.match(request);
    }
    return cachedResponse || fetch(event.request);
}
Simply following the instructions found in the code comments will fix the issue. So we ended up adding an exclusion for "/swagger" like so:
&& !event.request.url.includes('/swagger')
Hopefully this post is useful for people who want to serve things outside of the service worker, not only Swagger.
Do you have UseSwagger first in your Startup.Configure method?
public static void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    app.UseSwagger();
    app.UseSwaggerUI(c =>
        c.SwaggerEndpoint("/swagger/v1/swagger.json", "YourAppName V1")
    );
    // ...
}
In Startup.ConfigureServices I have the Swagger code last.
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    services.AddSwaggerGen(c =>
        c.SwaggerDoc(
            name: "v1",
            info: new OpenApiInfo
            {
                Title = "YourAppName",
                Version = "V1",
            }));
}
This is working just fine for us.
Note: You must navigate to https://yourdomain/swagger/index.html
Can I use JWT authentication with gundb? And if so, would it dramatically slow down my sync speed? I was going to try and implement a test using the tutorial here, but wanted to see if there were any 'gotchas' I should be aware of.
The API has changed to use a middleware system. The SEA (Security, Encryption, Authorization) framework will be published to handle stuff like this. However, you can roll your own by doing something like this on the server:
Gun.on('opt', function(ctx){
    if(ctx.once){ return }
    ctx.on('in', function(msg){
        var to = this.to;
        // process message.
        to.next(msg); // pass to next middleware
    });
});
Registering the in listener via the opt hook lets this middleware become first in line (before even gun core); that way you can filter all inputs and reject them if necessary (by not calling to.next(msg)).
Likewise, to add headers on the client, you would register an out listener (similar to what we did for the in listener) and modify the outgoing message to have msg.headers = {token: data}, then pass it forward to the next middleware layers (which will probably be websocket/transport hooks) by calling to.next(msg) as well. More docs to come on this as it stabilizes.
Old Answer:
A very late answer, sorry this was not addressed sooner:
The default websocket/ajax adapter allows you to update a headers property that gets passed on every networked message:
gun.opt({
headers: { token: JWT },
});
On the server you can then intercept and reject/authorize requests based on the token:
gun.wsp(server, function(req, res, next){
    if('get' === req.method){
        return next(req, res);
    }
    if('put' === req.method){
        return res({body: {err: "Permission denied!"}});
    }
});
The above example rejects all writes and authorizes all reads, but you would replace this logic with your own rules.
This was previously achieved by adding some configuration to the web.config file, but now that file is being phased out.
I was expecting to find some methods or properties for this on the middleware declaration, but I haven't found any:
app.UseStaticFiles();
So, what is now the procedure for caching static content such as images, scripts, etc.?
Is there another middleware to do this or is this feature not implemented yet in MVC 6?
I'm looking for a way to add the cache-control, expires, etc. headers to the static content.
It is all about middleware with ASP.NET Core.
Add the following to your Configure method in the Startup.cs file:
app.Use(async (context, next) =>
{
    // Advertise gzip and wrap the response body so everything written
    // downstream is compressed on the fly.
    context.Response.Headers.Add("Content-Encoding", "gzip");
    context.Response.Body = new System.IO.Compression.GZipStream(context.Response.Body,
        System.IO.Compression.CompressionMode.Compress);
    await next();
    await context.Response.Body.FlushAsync();
});
By the way, for caching you would add this to the ConfigureServices method:
services.AddMvc(options =>
{
    options.CacheProfiles.Add("Default",
        new CacheProfile()
        {
            Duration = 60
        });
    options.CacheProfiles.Add("Never",
        new CacheProfile()
        {
            Location = ResponseCacheLocation.None,
            NoStore = true
        });
});
And decorate the controller with
[ResponseCache(CacheProfileName = "Default")]
public class HomeController : Controller
{
...
Your title says compress, but your question body says cache. I'll assume you mean both.
Minification of CSS/JavaScript is already handled by the Grunt task runner on publish. Caching and compression beyond this seem like something a web server is better suited to than the application layer, so here's a great article that details the nginx configuration for managing caching and compression for Kestrel.
If you're using IIS, you can configure caching and compression directly on it; here's a tutorial. Considering that previous versions of MVC configured this functionality in web.config\system.webServer, which basically sets IIS config values, you can likely still use a web.config for the purpose of configuring IIS (only).
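For completeness, if you do want the application itself to add the Cache-Control header to static files (as the question asks), the static files middleware accepts options for that. A minimal sketch; the one-hour max-age is only an example value:
// Set Cache-Control on every file served by the static files middleware.
// Adjust the max-age (or add other headers) to suit your caching policy.
app.UseStaticFiles(new StaticFileOptions
{
    OnPrepareResponse = ctx =>
    {
        ctx.Context.Response.Headers["Cache-Control"] = "public,max-age=3600";
    }
});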
I've been spending hours upon hours on this problem, but to no avail.
EDIT: solution found (see my answer)
Project background
I'm building a project in Symfony2, which requires a module for uploading large files. I've opted for Node.js and Socket.IO (I had to learn it from scratch, so I might be missing something basic).
I'm combining these with the HTML5 File and FileReader APIs to send the file in slices from the client to the server.
Preliminary tests showed this approach working great as a standalone app, where everything was handled and served by Node.js, but integration with Apache and Symfony2 seems problematic.
The application has an unsecured and secured section. My goal is to use Apache on ports 80 and 443 for serving the bulk of the app built in Symfony2, and Node.js with Socket.io on port 8080 for file uploads. The client-side page connecting to the socket will be served by Apache, but the socket will run via Node.js. The upload module has to run over HTTPS, as the page resides in a secured environment with an authenticated user.
The problem is events using socket.emit or socket.send don't seem to work. Client to server, or server to client, it makes no difference. Nothing happens and there are no errors.
The code
The code shown is a simplified version of my code, without the clutter and sensitive data.
Server
var httpsModule = require('https'),
    fs = require('fs'),
    io = require('socket.io');

var httpsOptions =
{
    key: fs.readFileSync('path/to/key'),
    cert: fs.readFileSync('/path/to/cert'),
    passphrase: "1234lol"
}

var httpsServer = httpsModule.createServer(httpsOptions);
var ioServer = io.listen(httpsServer);
httpsServer.listen(8080);

ioServer.sockets.on('connection', function(socket)
{
    // This event gets bound, but never fires
    socket.on('NewFile', function(data)
    {
        // To make sure something is happening
        console.log(data);
        // Process the new file...
    });
    // Oddly, this one does fire
    socket.on('disconnect', function()
    {
        console.log("Disconnected");
    });
});
Client
// This is a Twig template, so I'll give an excerpt
{% block javascripts %}
    {{ parent() }}
    <script type="text/javascript" src="https://my.server:8080/socket.io/socket.io.js"></script>
    <script type="text/javascript">
        var socket = io.connect("my.server:8080",
        {
            secure: true,
            port: 8080
        });
        // Imagine this is the function initiating the file upload
        // File is an object containing metadata about the file, like filename, size, etc.
        function uploadNewFile(file)
        {
            socket.emit("NewFile", file);
        }
    </script>
{% endblock %}
So the problem is...
Of course there's much more to the application than this, but this is where I'm stuck. The page loads perfectly without errors, but the emit events never fire or reach the server (except for the disconnect event). I've tried with the message event on both client and server to check if it was a problem with only custom events, but that didn't work either. I'm guessing something is blocking client-server communication (it isn't the firewall, I've checked).
I'm completely at a loss here, so please help.
After some painstaking debugging, I've found what was wrong with my setup. Might as well share my findings, although they are (I think) unrelated to Node.js, Socket.IO or Apache.
As I mentioned, my question had simplified code to show you my setup without clutter. I was, however, setting up the client through an object, using the properties to configure the socket connection. Like so:
var MyProject = {};
MyProject.Uploader =
{
    location: 'my.server:8080',
    socket: io.connect(location,
    {
        secure: true,
        port: 8080,
        query: "token=blabla"
    }),
    // ...lots of extra properties and methods
}
The problem lay in the use of location as a property name. It is a reserved word in JavaScript and makes for some strange behaviour in this case. I found it strange that an object's property name can't be a reserved word, so I decided to test. I had also noticed I was referencing the property incorrectly; I had forgotten to use this.location when connecting to the socket. So I changed it to this, just as a test.
var MyProject = {};
MyProject.Uploader =
{
    location: 'my.server:8080',
    socket: io.connect(this.location,
    {
        secure: true,
        port: 8080,
        query: "token=blabla"
    }),
    // ...lots of extra properties and methods
}
But to no avail. I was still not getting data over the socket. So the next step seemed logical in my frustration-driven debugging rage. Changing up the property name fixed everything!
var MyProject = {};
MyProject.Uploader =
{
    socketLocation: 'my.server:8080',
    socket: io.connect(this.socketLocation,
    {
        secure: true,
        port: 8080,
        query: "token=blabla"
    }),
    // ...lots of extra properties and methods
}
This approach worked perfectly; I was getting loads of debug messages. SUCCESS!!
Whether it is expected behaviour in JavaScript to override (or whatever is happening here; "to misuse" feels like a better way of putting it right now) object properties if you happen to use a reserved word, I don't know. I only know I'm steering clear of them from now on!
Hope it helps anyone out there!