I'm working on a Vue 2 app that uses multiple proxies. This solution was quite helpful for getting vue.config.js set up to handle multiple proxies. It uses /one and /two as the object keys. The example in the Vue docs uses ^/api and ^/foo as the object keys. What is the significance of the ^ in the Vue docs example?
Internally, the Vue CLI dev server uses http-proxy-middleware. According to its docs, it uses a globbing library called micromatch; the syntax is documented here: https://github.com/micromatch/micromatch#matching-features. The ^ is an assertion that the pattern must match at the start of the path. That means that for a path like /foo/bar/api/stuff, the /api won't be removed from the path; however, /api/foo/bar would be rewritten to /foo/bar. The ^ is optional, though: the keys work without it, it just makes the start-of-path assertion explicit. Here is more information about the start assertion in micromatch's source code and documentation: https://github.com/micromatch/picomatch/blob/5467a5a9638472610de4f30709991b9a56bb5613/lib/constants.js?q=%5E#L92, https://github.com/micromatch/picomatch/tree/5467a5a9638472610de4f30709991b9a56bb5613#matching-special-characters-as-literals.
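For reference, a minimal vue.config.js sketch of both styles; the backend ports and targets are made-up example values:

```javascript
// vue.config.js -- minimal sketch; target URLs are example values
module.exports = {
  devServer: {
    proxy: {
      // '^/api' asserts the match starts at the beginning of the path,
      // so only requests like /api/... are proxied (not /foo/api/...).
      '^/api': {
        target: 'http://localhost:3000',
        pathRewrite: { '^/api': '' }, // strip the /api prefix before forwarding
      },
      // Without the ^ the key behaves the same way here; the assertion
      // is just implicit.
      '/foo': {
        target: 'http://localhost:4000',
      },
    },
  },
};
```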
I'm new to Vue .env files. I looked everywhere for a straight answer until I got lost. According to the Vue.js documentation, if we have a .env.local file, it will be loaded in all cases but ignored by git, which is exactly what we want for hiding secret API keys from the public. But it also says that adding VUE_APP_ before the key name in .env.local makes the key load into the public bundle.
My question is: how do I securely hide an API key from the public and still be able to use it in production and in development without any security risks?
My .env.local file:
VUE_APP_DEEPGRAM_KEY=some_API_key_that_is_secret
The above works if I log it to the console from my app, but if I remove VUE_APP_ it won't work. So is it safe to leave it like this?
Another thing: in Laravel we used to save API keys in .env, refer to them from a config file, and then call them in the app from config. Is Vue different? If not, how do I do the same here?
To answer my own question: the documentation is actually pretty clear, but I got a bit confused. The variable name simply has to start with VUE_APP_ for it to work in Vue CLI.
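To see why the prefix matters, here is an illustrative helper that mimics what Vue CLI's build step does (this is not the actual Vue CLI API, just a sketch of the behavior): only VUE_APP_-prefixed keys are exposed to client code, and whatever is exposed ends up in the public bundle, so truly secret keys still need to live on a backend.

```javascript
// Illustrative sketch (not Vue CLI's real API): simulate the filtering
// Vue CLI applies at build time -- only VUE_APP_* keys reach client code.
function clientEnv(env) {
  const exposed = {};
  for (const [key, value] of Object.entries(env)) {
    if (key.startsWith('VUE_APP_')) exposed[key] = value;
  }
  return exposed;
}

const env = clientEnv({
  VUE_APP_DEEPGRAM_KEY: 'some_API_key_that_is_secret',
  DEEPGRAM_KEY: 'never-reaches-the-bundle',
});
console.log(env.VUE_APP_DEEPGRAM_KEY); // exposed to the app (and the public)
console.log(env.DEEPGRAM_KEY);         // undefined: dropped, like in the question
```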
The HTML file served by elm reactor contains links to _elm/styles.css and _elm/elm.js. I am now trying to run elm reactor behind a reverse proxy which routes http://myhost/myprefix to http://localhost:8000. However, this would require that the links refer to myprefix/_elm/*. How can this be achieved?
Elm Reactor is quite limited and only meant to get you started. Something like Elm Live may give you better results: it has at least some proxy-related config options you might want to take a look at.
Is it possible to disable client side routing in Gatsby?
I'm using Gatsby to generate a static site which only has one page and will be served from AWS/S3. I'm running into an issue caused by Gatsby removing the object suffix from the URL (https://s3.amazonaws.com/top-bucket/sub-bucket/index.html becomes https://s3.amazonaws.com/top-bucket/sub-bucket/) after the page and the Gatsby runtime loads. This issue does not happen if I disable JavaScript, so I'm pretty certain it's caused by Gatsby's use of React/Reach Router.
Is there any way to disable this behavior? I know I can probably set up a redirect on S3 to handle requests to the bucket, but I'd prefer to do this at the application level, if possible.
This is a hack and may not work in anyone else's application, or may break with future releases of Gatsby, but I was able to prevent this redirect by setting window.page.path = window.location.pathname; in gatsby-browser.js. This short-circuits a conditional check in production-app.js, which attempts to "make the canonical path match the actual path" and results in the (IMO) unexpected behavior referenced above.
This issue is pretty old, but I hope it helps someone. I used this plugin: https://github.com/wardpeet/gatsby-plugin-static-site
npm install @wardpeet/gatsby-plugin-static-site --save
And just added it in gatsby-config.js:
plugins: [
  `@wardpeet/gatsby-plugin-static-site`,
]
Client side routing was then disabled!
I'm receiving the following error when I test falling back to a local file using the ASP.NET Core Script Tag Helper:
Failed to find a valid digest in the 'integrity' attribute for
resource 'http://localhost:48888/js/jquery.min.js' with computed
SHA-256 integrity 'oozPintQUive6gzYPN7KIhwY/B+d8+5rPTxI1ZkgaFU='. The
resource has been blocked.
The local file is text equal to the CDN version, but not binary equal. This becomes a problem because the integrity hash is compared not only against the main source but against the fallback source as well, and the fallback fails the check because it generates a different hash.
Here is an example:
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"
asp-fallback-src="~/js/jquery.min.js"
asp-fallback-test="window.jQuery"
crossorigin="anonymous"
integrity="sha256-FgpCb/KJQlLNfOu91ta32o/NMZxltwRo8QtmkMRdAu8=">
</script>
This works fine as long as the browser can reach Google's CDN. But if you change the source to a bad value, such as a non-existent version like "3.9.9", it falls back to the local file. That local file will then fail to load because the two files are not binary equal (and so have different hashes).
Ideally the integrity check would not be applied to local files, since we trust a local file under our control. The alternative is that we could define a different hash for the local fallback.
Are either of these options available? If not, is there another workaround? I'm trying to avoid manually copying the files down from the CDN to make them match, because of the added maintenance work required for future updates; I'd like to use a package manager.
With ASP.NET Core 2.2.0, released in December 2018, both LinkTagHelper and ScriptTagHelper gained a new boolean property, asp-suppress-fallback-integrity. When set to true, the fallback resource bypasses the integrity check.
This is necessary when the CDN and npm distributions are binary different, as is the case for Font Awesome 5.
<link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.6.3/css/all.css" integrity="sha384-UHRtZLI+pbxtHCWp1t77Bi1L4ZtiqrqD80Kn4Z8NTSRyMA2Fd33n5dQ8lWUE00s/" crossorigin="anonymous"
asp-fallback-href="~/lib/font-awesome/css/all.min.css"
asp-fallback-test-class="fab" asp-fallback-test-property="font-style" asp-fallback-test-value="normal"
asp-suppress-fallback-integrity="true" />
On a personal note, I see no gain in having an integrity check on local resources. It is high maintenance and runs a high risk of invalidating the fallback, which is disastrous and hard to spot without complex testing that simulates a failing CDN with every new version. Therefore I prefer to add this property to all resources.
Currently, there is no out-of-the-box solution, and it is not clear when one will be implemented (see here).
Moreover, I also had this problem and couldn't find a graceful workaround.
Nevertheless, we have two options here:
Implement your own ScriptTagHelper based on its source code and exclude the integrity attribute from the FallbackBlock.
Don't use ScriptTagHelper at all; write what it produces manually, except for the integrity attribute in the FallbackBlock:
<script
src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"
crossorigin="anonymous"
integrity="sha256-FgpCb/KJQlLNfOu91ta32o/NMZxltwRo8QtmkMRdAu8=">
</script>
<script>(window.jQuery||document.write("\u003Cscript src=\u0022\/js\/jquery.min.js\u0022 crossorigin=\u0022anonymous\u0022 \u003E\u003C\/script\u003E"));</script>
Option 1
You can stop using a CDN and serve the file from the NPM package instead. You can run your site behind a service like Cloudflare which can cache the files globally for you.
Option 2
Add a step to your build (webpack, gulp or whatever) to copy the file from the CDN directly. I'm not sure why their file is not binary equal.
Option 3
If either of the above options is too hard, you can stop using SRI. It's a cost vs. value equation; only you can decide if it's worth the effort. I don't think you can switch out the hash depending on whether it's a local or remote file.
So let's start with some background. I have a 3-tier system, with an API implemented in Django running with mod_wsgi on an Apache 2 server.
Today I decided to upgrade the server, running at DigitalOcean, from Ubuntu 12.04 to Ubuntu 14.04. Nothing special, except that Apache also got updated to version 2.4.7. After wasting a good part of the day figuring out that they actually changed the default folder from /var/www to /var/www/html, breaking functionality, I decided to test my API. Without touching a single line of code, some of my functions were not working.
I'll use one of the smaller functions as an example:
import json

from django.http import HttpResponse
from django.views.decorators.csrf import csrf_exempt

# Returns the location information for the specified animal, within the specified period.
@csrf_exempt  # Prevents Cross Site Request Forgery errors.
def get_animal_location_reports_in_time_frame(request):
    start_date = request.META.get('HTTP_START_DATE')
    end_date = request.META.get('HTTP_END_DATE')
    reports = (ur_animal_location_reports.objects
               .select_related('species')
               .filter(date__range=(start_date, end_date), species__localizable=True)
               .order_by('-date'))
    # Filter by animal if the parameter was sent.
    if request.META.get('HTTP_SPECIES') is not None:
        reports = reports.filter(species=request.META.get('HTTP_SPECIES'))
    # Add each report to the result list.
    response = []
    for rep in reports:
        response.append(dict(
            ID=rep.id,
            Species=rep.species.ai_species_species,
            Species_slug=rep.species.ai_species_species_slug,
            Date=str(rep.date),
            Lat=rep.latitude,
            Lon=rep.longitude,
            Verified=(rep.tracker is not None),
        ))
    # Return the list as a JSON string.
    return HttpResponse(json.dumps(response, indent=4))
After some debugging, I observed that request.META.get('HTTP_START_DATE') and request.META.get('HTTP_END_DATE') were returning None. I tried many clients, ranging from REST clients (such as the one in PyCharm and RestConsole for Chrome) to the Android app that would normally communicate with the API, but the result was the same: those two parameters were not being sent.
I then decided to test whether other parameters were being sent and, to my horror, they were. In the above function, request.META.get('HTTP_SPECIES') would have the correct value.
After a bit of fiddling around with the names, I observed that ALL the parameters that had a _ character in the name would not make it to the API.
So I thought, cool, I'll just use - instead of _; that ought to work, right? Wrong. The - arrives at the API as a _!
At this point I was completely puzzled so I decided to find the culprit. I ran the API using the django development server, by running:
sudo python manage.py runserver 0.0.0.0:8000
When sending the same parameters, using the same clients, they are picked up fine by the API! Hence, Django is not causing this and Ubuntu 14.04 is not causing this; the only thing that could be causing it is Apache 2.4.7!
Now moving the default folder from /var/www to /var/www/html, thus breaking functionality, all for a (in my opinion) very stupid reason is bad enough, but this is just too much.
Does anyone have an idea of what is actually happening here and why?
This is a change in Apache 2.4.
This is from Apache HTTP Server Documentation Version 2.4:
mod_cgi, mod_include, mod_isapi, ...: Translation of headers to environment variables is more strict than before to mitigate some possible cross-site-scripting attacks via header injection. Headers containing invalid characters (including underscores) are now silently dropped. Environment Variables in Apache (p. 81) has some pointers on how to work around broken legacy clients which require such headers. (This affects all modules which use these environment variables.)
– Page 11
For portability reasons, the names of environment variables may contain only letters, numbers, and the underscore character. In addition, the first character may not be a number. Characters which do not match this restriction will be replaced by an underscore when passed to CGI scripts and SSI pages.
– Page 86
A pretty significant change, in other words. So you need to rewrite your application to send dashes instead of underscores, which Apache in turn will substitute with underscores when it builds the environment variable names.
EDIT
There seems to be a way around this. If you look at this document over at apache.org, you can see that you can fix it in .htaccess by copying the value of your foo_bar header into a new header called foo-bar, which in turn will be turned back into foo_bar by the time it reaches your application. See the example below (the environment variable name is arbitrary):
SetEnvIfNoCase ^foo.bar$ ^(.*)$ fix_foo_bar=$1
RequestHeader set foo-bar %{fix_foo_bar}e env=fix_foo_bar
The only downside to this is that you have to make a rule per header, but you won't have to make any changes to the code on either the client or the server side.
Are you sure Django didn't get upgraded as well?
https://docs.djangoproject.com/en/dev/ref/request-response/
With the exception of CONTENT_LENGTH and CONTENT_TYPE, as given above, any HTTP headers in the request are converted to META keys by converting all characters to uppercase, replacing any hyphens with underscores and adding an HTTP_ prefix to the name. So, for example, a header called X-Bender would be mapped to the META key HTTP_X_BENDER.
The key bits are: Django converts '-' to '_' and also prepends 'HTTP_'. If you are already adding an HTTP_ prefix when you call the API, it might be getting doubled up, e.g. 'HTTP_HTTP_SPECIES'.
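The translation that the request.META documentation quoted above describes can be sketched with a tiny helper (illustrative only, not an actual Apache or Django API), which shows why a client sending start-date still arrives as HTTP_START_DATE:

```javascript
// Illustrative sketch of the CGI-style header-name translation described
// in the Django docs: uppercase the name, replace '-' with '_', and
// prepend 'HTTP_'.
function headerToMetaKey(header) {
  return 'HTTP_' + header.toUpperCase().replace(/-/g, '_');
}

console.log(headerToMetaKey('start-date')); // HTTP_START_DATE
console.log(headerToMetaKey('Species'));    // HTTP_SPECIES
console.log(headerToMetaKey('X-Bender'));   // HTTP_X_BENDER
```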