Frontend no longer accessible after dependency updates - vue.js

We have a fairly standard web app that consists of a Flask backend and a Vue.js frontend. In production we use uWSGI to serve the application; it is configured to serve the frontend pages and to pass backend calls through to the Flask app for their respective routes.
[uwsgi]
module = app
callable = create_app()
buffer-size=65535
limit-post=0
wsgi-disable-file-wrapper=true
check-static=./public
# enable threads for sentry
enable-threads = true
# dont show errors if the client disconnected
ignore-sigpipe=true
ignore-write-errors=true
disable-write-exception=true
; redirect all frontend requests that are not static files to the index
route-host = ^$(FRONTEND_HOST_NAME)$ goto:frontend
; also handle if the host name is frontend, for the dokku checks
route-host = ^frontend$ goto:frontend
; continue if its a backend call
route-host = ^$(BACKEND_HOST_NAME)$ last:
route-host = ^backend$ last:
; log and abort if none match
route-run = log:Host Name "${HTTP_HOST}" is neither "$(FRONTEND_HOST_NAME)" nor "$(BACKEND_HOST_NAME)"
route-run = break:500
route-label = frontend
route-if = isfile:/app/src/backend/public${PATH_INFO} static:/app/src/backend/public${PATH_INFO}
route-run = static:/app/src/backend/public/index.html
This worked perfectly fine and behaved just like our dev setup, where we use containers for both front- and backend. But after updating some vulnerable dependencies, trying to access the frontend results in a 404.
In the frontend we moved from vue-cli ~4.5.9 to ~5.0.4. For a long time we suspected that this might be the main issue, but we're not so sure about that anymore.
We also upgraded from Flask ~1.1 to ^2.0.3, but we kept uWSGI at version 2.0, so its configuration should not have changed.
We're groping in the dark on this one. Does anyone have an idea of what might be going wrong here?
I tried to isolate the problem by creating a rather small reproduction setup, but have not been able to track down the underlying issue so far.
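Note on the setup: the uWSGI config above serves static files out of /app/src/backend/public and falls back to index.html there, so whatever the Vue CLI build produces has to end up in that directory. As a purely hypothetical illustration (not our actual file; the path is taken from the config above, and the build output may equally well be copied there in a Dockerfile step), a vue.config.js that writes the build straight to that location could look like this:
// vue.config.js - hypothetical sketch, not the project's actual file
module.exports = {
  // write the production build where uWSGI's check-static / static: routes look
  outputDir: '/app/src/backend/public',
  // the frontend is served from the site root on the frontend host
  publicPath: '/',
};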

I have no idea what exactly it was, but in the end I upgraded each dependency one by one until all of them were upgraded and things still worked. It must have been something related to the Dockerfile that we use; the one we use now is slightly closer to the original than to the one I was using before going through the dependencies one by one.

Related

How do I make an SSL connection from a Kong serverless function using a client certificate

I'm trying to create a serverless function for Kong for authentication purposes. I'm required to use a client certificate to authenticate with the remote service that we have to use. I can't seem to get this working and there appears to be no clear documentation on how to do this. I've tried pintsized/lua-resty-http, ngx.socket.tcp(), and luacurl (failed to build) without success. I'm using the newest version of Kong in an Alpine Linux container in case that matters.
What is the best way to do this? Right now I'm considering simply calling curl from within Lua as I know that works, but I was hoping for a better solution that I can do with just Lua/OpenResty.
Thanks.
UPDATE: I just wanted to add, just in case it helps, that I'm already building a new image based on the official Kong one as I had to modify the nginx configuration templates, so installing new software into the container is not an issue.
All,
Apologies for the ugly code, but it looks like I found an answer that works:
require("socket")
local currUrl= "https://some.url/"
local https = require("ssl.https")
local ltn12 = require("ltn12")
local chunks = {}
local body, code, headers, status = https.request{
mode = "client",
url = currUrl,
protocol = "tlsv1_2",
certificate = "/certs/bundle.crt",
key = "/certs/bundle.key",
verify = "none",
sink = ltn12.sink.table(chunks),
}
If someone has a better answer, I'd appreciate it, but it's hard to complain about this one. The main issue is that while this works for a GET request, I'll be wanting to do POSTs to a service in the future and I have no idea how to do that using similar code. I'd like one library/API that can do any type of REST request.
This blog got me on the right track: http://notebook.kulchenko.com/programming/https-ssl-calls-with-lua-and-luasec
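On the open question about POSTs: a rough, untested sketch of a POST using the same luasec/ltn12 style as above (the URL and payload are placeholders); the generic request form also accepts method, headers, and a source for the request body:
local https = require("ssl.https")
local ltn12 = require("ltn12")

local payload = '{"hello":"world"}'   -- placeholder body
local chunks = {}

local body, code, headers, status = https.request{
  url = "https://some.url/",
  method = "POST",
  headers = {
    ["Content-Type"] = "application/json",
    ["Content-Length"] = tostring(#payload),
  },
  -- stream the request body from a string, collect the response into chunks
  source = ltn12.source.string(payload),
  sink = ltn12.sink.table(chunks),
  protocol = "tlsv1_2",
  certificate = "/certs/bundle.crt",
  key = "/certs/bundle.key",
  verify = "none",
}
local response = table.concat(chunks)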

How can I replace the server in Web Component Tester

I have a project set up based around the Polymer Starter Kit, which includes Web-Component-Tester.
This project includes PHP server code which I would also like to test, by writing tests that run in the browser and exercise the PHP code through Ajax calls.
This implies replacing the server that Web Component Tester is using ONLY when testing server-side code. I hope to make a separate gulp task for this.
Unfortunately, I don't understand the relationship between WCT, Selenium, and whatever server is currently run. I can see that the WCT command starts Selenium, but I can't find out what the web server is and how it is started. I suspect it is WCT itself, because there is configuration of the mapping of directories to URLs, but other than that I haven't a clue, despite trying to read the code.
Can someone explain how I go about making it run its own server when testing the client, but rely on an already set up web server (nginx) when testing the server code? I can set nginx to run from localhost, or another domain, if that is a way to choose a different configuration.
EDIT: I have now found that runner/webserver.js starts an Express server, and that URLs get mapped so that the base directory for the test runner and the bower_components directory both map to the /components URL.
What is currently confusing me is under what circumstances this gets run. It appears that loading plugins somehow does it, but my understanding from reading the code is tenuous.
The answer is that Web Component Tester itself has a comment in the runner/config.js file.
In wct-conf.js, you can add a registerHooks key to the object that gets returned, with a function that does the following:
module.exports = {
  registerHooks: function(wct) {
    wct.hook('prepare:webserver', function(app, done) {
      var proxy = require('express-http-proxy');
      // forward anything under /api to the server that can run the PHP code
      app.use('/api',
        proxy('pas.dev', {
          forwardPath: function(req, res) {
            return require('url').parse(req.url).path;
          }
        })
      );
      done();
    });
  }
};
This registerHooks function allows you to provide a route (/api in my case) which is proxied to a server that can run the PHP scripts.

Can't disable WSGIErrorOverride

I am backing a web app with a Flask API that returns custom error codes. The API runs through Apache and the WSGI module, in daemon mode.
I included a WSGIErrorOverride Off directive in the Apache conf file for the API (Off is supposed to be the default, but I included it anyway).
Yet anytime my Flask app returns a custom error code (they work when I run the app using the built-in server), Apache sends an error 500. How can I prevent that?
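For reference, this is roughly what the setup described above looks like in the Apache conf; the names and paths are placeholders, only the daemon-mode and WSGIErrorOverride directives matter here:
<VirtualHost *:80>
    ServerName api.example.com

    # daemon mode, as described above
    WSGIDaemonProcess flask_api processes=2 threads=15
    WSGIProcessGroup flask_api
    WSGIScriptAlias / /var/www/api/app.wsgi

    # Off is the default; with On, Apache would replace the application's
    # error responses with its own error pages
    WSGIErrorOverride Off

    <Directory /var/www/api>
        Require all granted
    </Directory>
</VirtualHost>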
Thanks to comments by duskwuff and Graham Dumpleton, I found that the problem doesn't come from Apache WSGI but from my Flask app.
More precisely, I was using the Flask-RESTful package, which is in charge, among other things, of transforming my views' return values into actual responses.
When those views are decorated (here with an equivalent of @login_required), those decorators are called by the Flask-RESTful package itself, and when an exception is thrown, something goes wrong.
For some reason, my app returns the custom error when I run the built-in server and an error 500 when I run it over Apache. I'm not quite sure why yet; I'm guessing Flask-RESTful is doing something that is not WSGI-compliant. I was on the verge of dropping it anyway for other reasons, so I'm OK with this solution.
Update: it looks like the problem does indeed come from Flask-RESTful: https://github.com/flask-restful/flask-restful/issues/372
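For illustration, a minimal sketch of the kind of setup described above (the login_required decorator and the resource are hypothetical, not taken from the original app); flask_restful.abort raises an HTTPException that Flask-RESTful turns into a JSON error response with the given status code:
from functools import wraps

from flask import Flask
from flask_restful import Api, Resource, abort

app = Flask(__name__)
api = Api(app)

def login_required(func):
    """Hypothetical stand-in for the real authentication decorator."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        authenticated = False  # placeholder for the real check
        if not authenticated:
            # abort() raises an HTTPException; Flask-RESTful converts it
            # into a JSON response carrying the custom status code.
            abort(401, message="Authentication required")
        return func(*args, **kwargs)
    return wrapper

class Protected(Resource):
    # Flask-RESTful applies these decorators to every method of the resource.
    method_decorators = [login_required]

    def get(self):
        return {"status": "ok"}

api.add_resource(Protected, "/protected")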

nginx + multiple instances of fastcgi-mono-server = WebResource.axd error

I'm running nginx, which load-balances over several instances of fastcgi-mono-server4 configured as an upstream.
Apparently, when a WebResource.axd link is handled by a different fastcgi-mono-server instance than the one that originally produced the link, it returns a 404 error.
I have set a persistent machineKey, as recommended for web farms, but the problem remains.
Any idea what could be wrong?
If it makes any difference: the application is written with F#/WebSharper, and we have disabled session state and forms authentication.
Thanks
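For reference, the "persistent machineKey" mentioned above means pinning the key in Web.config so every instance behind the load balancer shares it (the key values below are placeholders, not real keys):
<system.web>
  <!-- all instances must share the same keys, otherwise WebResource.axd
       URLs minted by one instance cannot be decrypted by another -->
  <machineKey
    validationKey="PLACEHOLDER-VALIDATION-KEY"
    decryptionKey="PLACEHOLDER-DECRYPTION-KEY"
    validation="SHA1"
    decryption="AES" />
</system.web>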

Web Deploy API (deploy .zip package) Clarification

I'm using the Web Deploy API to programmatically roll a web package (a .zip file created by MSDeploy.exe) out to a server (we need to do some other things before we release the package, which is why we're not doing it all in one go with MSDeploy.exe).
Here's the code I have. My question is really to clarify what happens when this is executed. In the package parameters XML file I have the application name specified ("Default Web Site"), but that's about it; no other parameters are specified in there. From testing, the package appears to get deployed to the server successfully, but are any other settings on the server changed without my knowledge, are any defaults published, etc.? Things like security settings or directory browsing that I might not be aware of? The code seems to deploy the package, but I'm anxious about using this in a production environment when I'm so unsure of how the API works. The MS documentation is not helpful (more like non-existent, actually).
// requires a reference to Microsoft.Web.Deployment
using Microsoft.Web.Deployment;

DeploymentChangeSummary changes;
string packageToDeploy = "C:/MyPackageLocation.zip";
string packageParametersFile = "C:/MyPackageLocation.SetParameters.xml";

DeploymentBaseOptions destinationOptions = new DeploymentBaseOptions()
{
    UserName = "MyUsername",
    Password = "MyPassword",
    ComputerName = "localhost"
};

using (DeploymentObject deploymentObject = DeploymentManager.CreateObject(
    DeploymentWellKnownProvider.Package, packageToDeploy))
{
    // apply the values from the SetParameters.xml file
    deploymentObject.SyncParameters.Load(packageParametersFile);

    DeploymentSyncOptions syncOptions = new DeploymentSyncOptions();
    syncOptions.WhatIf = false;

    // Deploy the package to the server.
    changes = deploymentObject.SyncTo(destinationOptions, syncOptions);
}
If anyone could confirm that this snippet should deploy a package to a web site application on a server without changing any existing server settings (unless specified in the SetParameters.xml file), that would be really helpful. Any good resources on using the API, or an explanation of how Web Deploy works behind the scenes, would also be much appreciated!
The SetParameters file just controls the values of the parameters defined in the package; a package might be doing much more than that. Web Deploy has a concept of providers, and any given package can have one or more providers.
If you want to make sure that the package is not changing server-side settings, the best approach is to use the API but have the packages deployed via the Web Management Service. This gives you two benefits:
You can control what providers you allow through.
You can add users and give restricted permissions to them to deploy to their site or their folder etc.
The alternate approach is to:
Manually look at the archive.xml inside the package and check which providers it uses. As long as you don't see providers that can change server settings, such as appHostConfig, webServer or regKey (this is not a comprehensive list), you should be good. runCommand is a provider that allows executing batch scripts or commands; while it is useful for admins themselves, you need to consider whether you want to allow packages with such providers to run.
You can do the above-mentioned inspection in code by calling GetChildren on the deployment object you create from the package and inspecting the providers and their paths.
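A rough sketch of that inspection, using the same API as the snippet in the question; GetChildren is the call mentioned above, but the exact property names on the child objects (Name, AbsolutePath) are assumptions and may need adjusting:
using System;
using Microsoft.Web.Deployment;

class PackageInspector
{
    static void Main()
    {
        string packageToInspect = "C:/MyPackageLocation.zip";

        // providers that can change machine-wide settings (not a complete list)
        string[] riskyProviders = { "appHostConfig", "webServer", "regKey", "runCommand" };

        using (DeploymentObject deploymentObject = DeploymentManager.CreateObject(
            DeploymentWellKnownProvider.Package, packageToInspect))
        {
            foreach (DeploymentObject child in deploymentObject.GetChildren())
            {
                // Name / AbsolutePath are assumed to expose the provider and its path.
                Console.WriteLine("{0} -> {1}", child.Name, child.AbsolutePath);

                if (Array.Exists(riskyProviders,
                        p => string.Equals(p, child.Name, StringComparison.OrdinalIgnoreCase)))
                {
                    Console.WriteLine("  warning: this provider can change server settings");
                }
            }
        }
    }
}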