Mixing Wildcards and Parameters in Express

I have web content generated by Minecraft Overviewer in:
/home/mc/backups/servername/latest/overviewer
I have a very simple server with express/nodejs. Here is my app.js:
var express = require('express');
var app = express();

//----------------------------------------------------------------------------+
// Each server's root points to the latest overviewer page                     |
//----------------------------------------------------------------------------+
app.get('/minecraft/:server/*', function(req, res) {
    console.log('HELLO?');
    res.send('Finally some luck!');
    /*
    var
        server = req.params.server,
        file   = req.params[0] ? req.params[0] : 'index.html',
        dir    = '../backups/' + server + '/latest/overviewer';
    res.sendFile(file, { root: dir });
    */
});

app.use(express.static('www'));

app.listen(80, function () {
    console.log('Web server listening on port 80');
});
I included a little more code than what is actually running so you can see my intent, in case this is an XY kind of problem. I want to route static files, but I don't think I can use express.static because I want the URL to be mapped a little differently from my file structure (and based on a server name).
So what's the problem with my simple server? When I try to navigate to mysite.com/minecraft/isopre I see a white page saying Cannot GET /minecraft/isopre. If I remove the * from the end of the route string in the app.get call, I see Finally some luck!. But I want the star there so I can map mysite.com/minecraft/isopre to index.html or mysite.com/minecraft/isopre/overviewer.js.
So what's the right way to go about doing this?

To get the behaviour you want, I suggest using the ? modifier from Express's regexp-style route syntax to make the parameter optional:
app.get('/minecraft/:server/:file?', function(req, res, next) {
    // :file is optional; fall back to index.html when it is not supplied
    var file = req.params.file ? req.params.file : 'index.html';
    var server = req.params.server;
    var dir = '../backups/' + server + '/latest/overviewer';
    res.sendFile(file, { root: dir });
});
This way :file becomes optional, so Express will still match the route (and Node.js won't fail your application) when that path segment is absent.
So, if req.params.file is undefined, you can serve index.html instead.
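As a quick illustration (hypothetical URLs, assuming the route above), the mapping works out like this:
// GET /minecraft/isopre               -> server = 'isopre', file is undefined -> serve index.html
// GET /minecraft/isopre/overviewer.js -> server = 'isopre', file = 'overviewer.js'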

Related

How to get API call origin in NextJS API endpoint

I have an API set up that receives a token, and I want to store that token in a database. But I also want to store the origin URL.
Let's say my API endpoint is located at https://myapp.com/api/connect
Now, I want to send a token from my website https://mywebsite.net
After I send a token, I want to be able to store the token and the website URL to the database in NextJS code.
My endpoint would store this info to the database:
{
    token: someRandomToken
    origin: https://mywebsite.net
}
I tried logging the whole req object from the handler to see if that info exists, but the console log fills my terminal too fast.
Inside Next's Server-Side environment you have access to req.headers.host as well as other headers set by Vercel's or other platforms' Reverse Proxies to tell the actual origin of the request, like this:
/pages/api/some-api-route.ts:

import { NextApiRequest } from "next";

const LOCAL_HOST_ADDRESS = "localhost:3000";

export default async function handler(req: NextApiRequest) {
    let host = req.headers?.host || LOCAL_HOST_ADDRESS;
    let protocol = /^localhost(:\d+)?$/.test(host) ? "http:" : "https:";

    // If server sits behind reverse proxy/load balancer, get the "actual" host ...
    if (
        req.headers["x-forwarded-host"] &&
        typeof req.headers["x-forwarded-host"] === "string"
    ) {
        host = req.headers["x-forwarded-host"];
    }

    // ... and protocol:
    if (
        req.headers["x-forwarded-proto"] &&
        typeof req.headers["x-forwarded-proto"] === "string"
    ) {
        protocol = `${req.headers["x-forwarded-proto"]}:`;
    }

    let someRandomToken;
    const yourTokenPayload = {
        token: someRandomToken,
        origin: protocol + "//" + host, // e.g. http://localhost:3000 or https://mywebsite.net
    };
    // [...]
}
Using TypeScript is really helpful when digging for properties, as in this case. I couldn't tell whether you are using TypeScript; if you aren't, you'll have to remove the NextApiRequest import and type annotation.
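For reference, here is a minimal plain-JavaScript sketch of the same handler (the file name and the res.status(...).json(...) response are illustrative additions, not part of the original answer):

// pages/api/some-api-route.js -- plain-JS sketch of the handler above
export default async function handler(req, res) {
    let host = req.headers?.host || "localhost:3000";
    let protocol = /^localhost(:\d+)?$/.test(host) ? "http:" : "https:";

    // honour reverse-proxy headers, as in the TypeScript version
    if (typeof req.headers["x-forwarded-host"] === "string") {
        host = req.headers["x-forwarded-host"];
    }
    if (typeof req.headers["x-forwarded-proto"] === "string") {
        protocol = `${req.headers["x-forwarded-proto"]}:`;
    }

    res.status(200).json({ origin: `${protocol}//${host}` });
}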

AWS static website - how to connect subdomains with subfolders

I want to set up an S3 static website and connect it with my domain (for example: example.com).
In this S3 bucket I want to create one particular folder (named content) and many different subfolders within it, then I want to connect these subfolders with the appropriate subdomains, so for example:
folder content/foo should be available from subdomain foo.example.com,
folder content/bar should be available from subdomain bar.example.com.
Any content subfolder should automatically be available from the subdomain with the same prefix as the folder name.
I will be grateful for any possible solutions to this problem. Should I use a redirection option, or is there a better solution? Thanks in advance for your help.
My solution is based on this video:
https://www.youtube.com/watch?v=mls8tiiI3uc
Because the above video doesn't explain the subdomain problem, here are a few additional things to do:
to the AWS Route 53 hosted zone, add an A record with "*.domainname" as the record name and the CloudFront edge address as the value
to the certificate's domains, also add "*.domainname", so you have a certificate for the wildcard domain
when setting up the CloudFront distribution, add both "www.domainname" and "*.domainname" to the "Alternate domain name (CNAME)" section
redirection/forwarding from a subdomain to a subfolder is done via a Lambda@Edge function (the function below could be improved a bit):
'use strict';

exports.handler = (event, context, callback) => {
    const path = require("path");
    const remove_suffix = ".domain.com";
    const host_with_www = "www.domain.com";
    const origin_hostname = "www.domain.com.s3-website.eu-west-1.amazonaws.com";
    const request = event.Records[0].cf.request;
    const headers = request.headers;
    const host_header = headers.host[0].value;

    if (host_header == host_with_www) {
        return callback(null, request);
    }

    if (host_header.startsWith('www')) {
        var new_host_header = host_header.substring(3, host_header.length);
    }

    if (typeof new_host_header === 'undefined') {
        var new_host_header = host_header;
    }

    if (new_host_header.endsWith(remove_suffix)) {
        // to support SPA | redirect all (non-file) requests to index.html
        const parsedPath = path.parse(request.uri);
        if (parsedPath.ext === "") {
            request.uri = "/index.html";
        }
        request.uri =
            "/" +
            new_host_header.substring(0, new_host_header.length - remove_suffix.length) +
            request.uri;
    }

    headers.host[0].value = origin_hostname;
    return callback(null, request);
};
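To illustrate what the function above does, here is a hypothetical trace (assuming remove_suffix is ".domain.com" and a request to foo.domain.com):

// Request: https://foo.domain.com/some/page   (Host: foo.domain.com)
//   -> not the www host and doesn't start with 'www', so new_host_header = 'foo.domain.com'
//   -> it ends with '.domain.com' and '/some/page' has no file extension, so uri becomes '/index.html'
//   -> the subdomain prefix is added: uri becomes '/foo/index.html'
//   -> the Host header is rewritten to the S3 website origin before forwarding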
Lambda@Edge is just a Lambda function connected with a particular CloudFront distribution
we need to add an additional CloudFront distribution setting for the Lambda execution (this setting is needed if we want different redirection for different subdomains; otherwise all redirections will point to the main directory, or probably to the first directory that gets cached)

Express Set Custom Parameter Query Starter

I'm using express to interact with discord's oauth2 api.
When I request a user oauth token the server responds with a url like:
http://localhost:3000/index#token_type=Bearer&access_token=tkn&expires_in=int
I'm trying to extract the parameters after the #, since Discord's API returns its parameters after a # unlike others, which start with a ?.
Because they don't start with a question mark, I am unable to use the req.params.x property.
I thought, "No big deal, I'll just get the URL and extract it myself", but every single URL accessor in Express strips the string after the #. This includes req.url and req.originalUrl, which both return only the file path.
So how can I get URL parameters introduced by a hash instead of a question mark?
Or, how can I get the full URL including the content after the hash?
I was able to solve this problem by setting a custom query parser. Code snippet below.
const express = require('express');
const app = express();

// custom query parser: receives the raw query string, and its return value becomes req.query
app.set('query parser', (query) => {
    return query.split('&').map(item => {
        const [key, value] = item.split('=');
        return {
            key,
            value
        };
    });
});

app.get('/', (req, res) => {
    console.log(req.originalUrl); // Full URL starting with file path
});
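With that parser in place, a request such as /?token_type=Bearer&access_token=tkn (a hypothetical example) makes req.query an array of key/value pairs rather than the usual object:

// req.query
// [
//   { key: 'token_type', value: 'Bearer' },
//   { key: 'access_token', value: 'tkn' }
// ]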

What are cloudflare KV preview_ids and how to get one?

I have the following wrangler.toml. When I want to use dev or preview (e.g. npx wrangler dev or npx wrangler preview), wrangler asks me to add a preview_id to the KV namespaces. Is this an identifier of an existing KV namespace?
I see there is a ticket in the Cloudflare Workers GitHub at https://github.com/cloudflare/wrangler/issues/1458 saying this ought to be clarified, but the ticket was closed by adding an error message:
"In order to preview a worker with KV namespaces, you must designate a preview_id in your configuration file for each KV namespace you'd like to preview."
which is the reason I'm here. :)
For larger context, I would be really glad if someone could clarify the following: I see that if I give the ID of an existing namespace, I can preview, and a KV namespace named __some-worker-dev-1234-workers_sites_assets_preview is generated in Cloudflare. It has a different identifier than the namespace I pointed to with preview_id, and the namespace I pointed to with preview_id stays empty. Why does giving the identifier of an existing KV namespace remove the error message, deploy the assets, and allow previewing, while the actual KV namespace stays empty and a new one is created?
How does kv-asset-handler know to look in this generated namespace to retrieve the assets?
I'm currently testing with the default generated Cloudflare Worker for my site, and I wonder if I have misunderstood something, or if there is some mechanism that binds the site namespace to the script during preview/publish.
If there is some automatic mapping going on, can it be arranged so that every developer has their own private preview KV namespace?
type = "javascript"
name = "some-worker-dev-1234"
account_id = "<id>"
workers_dev = true
kv_namespaces = [
{ binding = "site_data", id = "<test-site-id>" }
]
[site]
# The location for the site.
bucket = "./dist"
# The entry directory for the package.json that contains
# main field for the file name of the compiled worker file in "main" field.
entry-point = ""
[env.production]
name = "some-worker-1234"
zone_id = "<zone-id>"
routes = [
"http://<site>/*",
"https://www.<site>/*"
]
# kv_namespaces = [
# { binding = "site_data", id = "<production-site-id>" }
# ]
import { getAssetFromKV, mapRequestToAsset } from '@cloudflare/kv-asset-handler'

/**
 * The DEBUG flag will do two things that help during development:
 * 1. we will skip caching on the edge, which makes it easier to
 *    debug.
 * 2. we will return an error message on exception in your Response rather
 *    than the default 404.html page.
 */
const DEBUG = false

addEventListener('fetch', event => {
  try {
    event.respondWith(handleEvent(event))
  } catch (e) {
    if (DEBUG) {
      return event.respondWith(
        new Response(e.message || e.toString(), {
          status: 500,
        }),
      )
    }
    event.respondWith(new Response('Internal Error', { status: 500 }))
  }
})

async function handleEvent(event) {
  const url = new URL(event.request.url)
  let options = {}

  /**
   * You can add custom logic to how we fetch your assets
   * by configuring the function `mapRequestToAsset`
   */
  // options.mapRequestToAsset = handlePrefix(/^\/docs/)

  try {
    if (DEBUG) {
      // customize caching
      options.cacheControl = {
        bypassCache: true,
      }
    }
    const page = await getAssetFromKV(event, options)

    // allow headers to be altered
    const response = new Response(page.body, page)

    response.headers.set('X-XSS-Protection', '1; mode=block')
    response.headers.set('X-Content-Type-Options', 'nosniff')
    response.headers.set('X-Frame-Options', 'DENY')
    response.headers.set('Referrer-Policy', 'unsafe-url')
    response.headers.set('Feature-Policy', 'none')

    return response
  } catch (e) {
    // if an error is thrown try to serve the asset at 404.html
    if (!DEBUG) {
      try {
        let notFoundResponse = await getAssetFromKV(event, {
          mapRequestToAsset: req => new Request(`${new URL(req.url).origin}/404.html`, req),
        })

        return new Response(notFoundResponse.body, { ...notFoundResponse, status: 404 })
      } catch (e) {}
    }

    return new Response(e.message || e.toString(), { status: 500 })
  }
}

/**
 * Here's one example of how to modify a request to
 * remove a specific prefix, in this case `/docs` from
 * the url. This can be useful if you are deploying to a
 * route on a zone, or if you only want your static content
 * to exist at a specific path.
 */
function handlePrefix(prefix) {
  return request => {
    // compute the default (e.g. / -> index.html)
    let defaultAssetKey = mapRequestToAsset(request)
    let url = new URL(defaultAssetKey.url)

    // strip the prefix from the path for lookup
    url.pathname = url.pathname.replace(prefix, '/')

    // inherit all other props from the default request
    return new Request(url.toString(), defaultAssetKey)
  }
}
In case the format is not obvious (it wasn't to me) here is a sample config block from the docs with the preview_id specified for a couple of KV Namespaces:
kv_namespaces = [
    { binding = "FOO", id = "0f2ac74b498b48028cb68387c421e279", preview_id = "6a1ddb03f3ec250963f0a1e46820076f" },
    { binding = "BAR", id = "068c101e168d03c65bddf4ba75150fb0", preview_id = "fb69528dbc7336525313f2e8c3b17db0" }
]
You can generate a new namespace ID in the Workers KV section of the dashboard or with the Wrangler CLI:
wrangler kv:namespace create "SOME_NAMESPACE" --preview
This answer applies to versions of Wrangler >= 1.10.0
wrangler asks to add a preview_id to the KV namespaces. Is this an identifier to an existing KV namespace?
Yes! The reason there is a different identifier for preview namespaces is so that when developing with wrangler dev or wrangler preview you don't accidentally write changes to your existing production data with possibly buggy or incompatible code. You can add a --preview flag to most wrangler kv commands to interact with your preview namespaces.
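For example (using the site_data binding from the wrangler.toml above; the key and value are just placeholders), you could write to and read from the preview namespace like this:
wrangler kv:key put --binding=site_data "some-key" "some-value" --preview
wrangler kv:key get --binding=site_data "some-key" --preview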
For your situation here there are actually a few things going on.
You are using Workers Sites
You have a KV namespace defined in wrangler.toml
Workers Sites will automatically configure a production namespace for each environment you run wrangler publish on, and a preview namespace for each environment you run wrangler dev or wrangler preview on. If all you need is Workers Sites, then there is no need at all to specify a kv_namespaces table in your manifest. That table is for additional KV namespaces that you may want to read data from or write data to. If that is what you need, you'll need to configure your own namespace and add id to wrangler.toml if you want to use wrangler publish, and preview_id (which should be different) if you want to use wrangler dev or wrangler preview.

Socket Hang Up when using https.request in node.js

When using https.request with node.js v0.4.7, I get the following error:
Error: socket hang up
    at CleartextStream.<anonymous> (http.js:1272:45)
    at CleartextStream.emit (events.js:61:17)
    at Array.<anonymous> (tls.js:617:22)
    at EventEmitter._tickCallback (node.js:126:26)
Simplified code that will generate the error:
var https = require('https')
  , fs = require('fs')

var options = {
    host: 'localhost'
  , port: 8000
  , key: fs.readFileSync('../../test-key.pem')
  , cert: fs.readFileSync('../../test-cert.pem')
}

// Set up server and start listening
https.createServer(function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'})
    res.end('success')
}).listen(options.port, options.host)

// Wait a second to let the server start up
setTimeout(function() {
    var clientRequest = https.request(options, function(res) {
        res.on('data', function (chunk) {
            console.log('Called')
        })
    })
    clientRequest.write('')
    clientRequest.end()
}, 1000)
I get the error even with the server and client running on different node instances and have tested with port 8000, 3000, and 443 and with and without the SSL certificates. I do have libssl and libssl-dev on my Ubuntu machine.
Any ideas on what could be the cause?
In
https.createServer(function (req, res) {
you are missing options when you create the server; it should be:
https.createServer(options, function (req, res) {
with your key and cert inside options.
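Applied to the code from the question, the server setup would look like this (same handler, just passing options first):
https.createServer(options, function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'})
    res.end('success')
}).listen(options.port, options.host)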
I had a very similar problem where the response's end event never fired.
Adding this line fixed the problem:
// Hack to emit end on close because of a core bug that never fires end
response.on('close', function () {response.emit('end')});
I found an example of this in the request library mentioned in the previous answer.
Short answer: use the latest source code instead of the one you have. Store it wherever you like, then require it, and you are good to go.
In the request 1.2.0 source code, main.js line 76, I see
http.createClient(options.uri.port, options.uri.hostname, options.uri.protocol === 'https:');
Looking at the http.js source code, I see
exports.createClient = function(port, host) {
    var c = new Client();
    c.port = port;
    c.host = host;
    return c;
};
It is being called with 3 params, but the actual function only takes 2. The https functionality has been replaced with a separate module.
Looking at the latest main.js source code, I see dramatic changes. The most important is the addition of require('https').
It appears that request has been fixed but never re-released. Fortunately, the fix seems to work if you just copy manually from the raw view of the latest main.js source code and use it instead.
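A minimal sketch of what that looks like in practice (the local file name ./request-main.js and the URL are hypothetical):
// assuming you saved the patched main.js locally as request-main.js
var request = require('./request-main');

request({ uri: 'https://localhost:8000/' }, function (error, response, body) {
    console.log(body);
});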
I had a similar problem and I think I got a fix, but then I had another socket problem.
See my solution here: http://groups.google.com/group/nodejs/browse_thread/thread/9189df2597aa199e/b83b16c08a051706?lnk=gst&q=hang+up#b83b16c08a051706
Key point: use 0.4.8, and use http.request instead of http.createClient.
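A minimal sketch of the http.request style (assuming a plain HTTP server on localhost:8000, purely for illustration):
var http = require('http');

// http.request replaces the older http.createClient API
var req = http.request({ host: 'localhost', port: 8000, path: '/', method: 'GET' }, function (res) {
    res.on('data', function (chunk) {
        console.log(chunk.toString());
    });
});
req.end();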
However, the new problem is that if I let the program run for a long time (I actually left it running over the weekend with no activity), I get a socket hang up error when I send a request to the HTTP server (it doesn't even reach the http.request call). I don't know whether it is because of my code or a different problem with the HTTP server.