I'm playing around with the new Cloudflare Functions functionality, specifically the file-based routing. However, I've found it tiresome to have to deploy and debug my functions in the cloud.
I found this, which seems to say I can use Wrangler to connect to the Cloudflare network and run functions locally:
https://developers.cloudflare.com/pages/platform/functions/#develop-and-preview-locally
But it keeps asking me for a root worker file, and there is no root file; it's file-based routing.
How do I get Cloudflare Functions to work with wrangler local dev or miniflare?
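For reference, I'm running something along these lines from the project directory (the exact invocation is from memory, so treat it as approximate):
# roughly what I run; wrangler then asks for a root worker script
npx wrangler dev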
I created an application that runs on a localhost server using Express.js, and I also bought a domain.
I'm wondering if there is a way to take that localhost server and turn it into a real shared server.
I tried once to use a hosting service like HostGator, but I still don't know how I can turn the Express app into a real website.
I have no experience with any web development services so please don't tell me to use ....... whatever because I will have no idea what that is.
For one thing, it's not clear how your website actually works: if it's only Express, does it generate HTML, or does it purely pass JSON to browser clients via GET requests? (To each their own.)
There are many options for how you might do this. One of the best is to first make sure your server runs on Docker: find a tutorial on YouTube/Google/Stack Overflow/blogs on how to run your Express server with Docker. If you do that, you can deploy it to a container service like Google, Amazon, or DigitalOcean. If this seems hard to you, there are other options.
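For example, a minimal Dockerfile for a typical Express app might look roughly like this (the base image, port, and start script are assumptions, adjust for your project):
# Dockerfile, a minimal sketch for a typical Express app
FROM node:18-alpine
WORKDIR /app
# install dependencies first so Docker can cache this layer
COPY package*.json ./
RUN npm ci
# copy the rest of the source
COPY . .
# the port your Express app listens on (assumed to be 3000 here)
EXPOSE 3000
CMD ["npm", "start"]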
Presumably you run your server with something like npm start. The following guide shows you how to do essentially the same thing, but on a cloud computer.
Before you begin, make sure that your locally working server is checked in to a cloud Git provider like GitHub, GitLab, Bitbucket, etc.
Amazon AWS and Google Cloud both offer free hosting for a certain amount of time (AWS, one year) or for a certain amount of credit (Google Cloud). These two seem like viable places to start.
If you find an option that you'd like, you'll need to:
Create an account.
Create a server (choose a cheap one, especially initially: micro/small/cheap, etc.).
Find a tutorial on how to "SSH" into that server (which basically means remotely controlling the terminal on that server). Google actually makes this fairly easy: there's a big button that says "SSH into this server".
Once you've logged into that computer, you'll be able to run the same commands you probably normally run on your home computer (the individual steps below are pulled together in a sketch after this list):
The computer you'll be getting is likely to be a virtual Linux machine, probably something like Ubuntu. Find a tutorial on how to get git and node installed there (it's something like sudo apt-get update && sudo apt-get install git nodejs npm).
Once you have git and node, make a directory and cd into it: mkdir www && cd www. (This isn't critical, but it's conventional.)
Copy the link that allows you to "Clone your repo using HTTPS" (there's a button near the top right of your GitHub (or other provider's) repo page that gives you that link) and run git clone with it. You'll need to enter your credentials.
Now all the files that you had on your computer are on this new computer.
Next you'll probably have to run npm i to install your dependent NPM packages. (This assumes you properly used .gitignore to prevent GitHub from being filled with extra copies of your npm packages.)
Now you should be able to run your code as usual: npm run start
If all those steps work, you'll want something that keeps the server running "forever", like https://www.npmjs.com/package/forever (npm i -g forever) or, even better, https://www.npmjs.com/package/pm2; either will let you continuously run your Express server.
Finally, you'll need to configure the server on AWS/Google/whichever service you're using to forward traffic coming in on ports 80 and 443 to port 3000, and to open that traffic to everyone. This differs depending on the service you chose, so find a tutorial for doing just that part.
This will only let people across the internet see your service on an AWS URL or a Google URL, but it's a good chance to make sure everything works perfectly. Once you're happy with everything, associate your purchased domain with that special AWS/Google domain. You can do that on the AWS side, or on the GoDaddy/NameCheap/wherever-you-bought-your-domain side.
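Pulled together, the steps above look roughly like this once you've SSHed in (the repo URL, directory name, and app name are placeholders):
# update the package index and install git and node (exact package names vary by distro)
sudo apt-get update && sudo apt-get install -y git nodejs npm
# make a working directory and clone your repo into it
mkdir www && cd www
git clone https://github.com/your-user/your-repo.git
cd your-repo
# install dependencies and make sure the app starts
npm i
npm run start
# then keep it running in the background with pm2
sudo npm i -g pm2
pm2 start npm --name my-app -- run start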
For the Docker option, you can download the aws-cli tools, upload your built Docker container to AWS, and have it available there. Find a tutorial to do that.
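For example, pushing a locally built image to AWS's container registry (ECR) looks roughly like this (the account ID, region, and image name are placeholders):
# create a repository and log Docker in to ECR
aws ecr create-repository --repository-name my-app
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
# tag the locally built image and push it
docker tag my-app:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest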
Your question is very broad, so I've brushed over some details, but this is essentially what you have to do.
I have some React code (written by someone else) that needs to be served. The preferred method is via a Google Storage Bucket, fronted by their Cloud CDN, and this works. However, due to some quirks in the code, there is a requirement to override 404s with 200s and serve content from the homepage instead (i.e. if there is a 404, don't serve a 404; serve the content of the homepage and return it as a 200 instead).
(If anyone is interested, this override currently is implemented in CloudFront on AWS. Google CDN does not provide this functionality yet)
So, if the code is served at "www.mysite.com/app/" and someone hits "www.mysite.com/app/not-here" (which would return a 404), what should happen is that the response should NOT be 404, but a 200 with the content being served from index.html instead.
I was able to get this working by bundling all the code inside a docker container and then using the solution here. However, this setup means if we have a code change, all the running containers need to be restarted, and the customer expects zero downtime, hence the bucket solution.
So I now need to do the same thing but with the files being proxied in (with the upstream being the CDN).
I cannot use the original solution since the files are no longer local, and httpd can't check for existence of something that is not local.
I've tried things like ProxyErrorOverride and ErrorDocument, and managed to get it to redirect, but that is not what is needed.
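Roughly, the config I've been experimenting with looks like this (the bucket name and paths are placeholders):
# httpd.conf excerpt, roughly what I've been trying
ProxyPass "/app/" "https://storage.googleapis.com/my-bucket/"
ProxyPassReverse "/app/" "https://storage.googleapis.com/my-bucket/"
# have httpd apply its own error handling to upstream error responses
ProxyErrorOverride On
ErrorDocument 404 /app/index.html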
Does anyone know how/if this can be done?
If the question is "how do I catch the 404 error returned by Cloud Storage when a file is missing, using httpd/Apache?", I don't know.
However, I don't think that is the best solution. Serving files directly from Cloud Storage is convenient but not industrial-strength.
Imagine you deploy several broken files in succession: how do you roll back to a stable state?
The best approach is to package each code release as an atomic unit, a container for instance. Each version lives in a different container, and performing a rollback is easier and more consistent.
Now your "container restart" issue. I don't know on which platform you are running your container. If your run it on a Compute Engine (a VM) it's maybe the worse solution. Today, there is container orchestration system that allows you to deploy, scale up and down the containers, and to perform progressive rollout, to replace, without downtime, the existing running containers by a newer version.
Cloud Run is a wonderful serverless solution for that; you also have Kubernetes (GKE on Google Cloud), which you can use with Knative for a better developer experience.
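For example, rolling out a new release on Cloud Run is roughly two commands (the project ID, region, and image name are placeholders):
# build the image with Cloud Build and push it to the registry
gcloud builds submit --tag gcr.io/my-project/my-app
# deploy a new revision; Cloud Run shifts traffic to it without downtime
gcloud run deploy my-app --image gcr.io/my-project/my-app --region us-central1 --allow-unauthenticated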
I have a small device that serves a webpage using Nginx on a local network. I'm developing the webpage with Vue, and I need the page to keep working as normal once a person has connected to the server, visited the page, and then disconnected.
I'm currently using the Workbox plugin and I get this code:
importScripts("https://storage.googleapis.com/workbox-cdn/releases/4.3.1/workbox-sw.js");
importScripts(
"/precache-manifest.b62cf508e2c3da8c27f2635f7aab384a.js"
);
The problem is that it goes to the internet to download that file and I will not have an internet connection.
I tried downloading this file, but inside it, it goes to the internet again.
Is there a way to get this to work in an offline environment?
You can follow the guidance in the workbox-sw docs to download a local copy of the bundled Workbox runtime libraries, and modify your service worker script to use those.
Running:
$ npx workbox-cli@4.3.1 copyLibraries /path/to/dir
from the command line will download a local copy of the runtime to the specified directory (replace /path/to/dir with the desired location).
You can then modify your service worker script so that it reads:
importScripts("/path/to/dir/workbox-v4.3.1/workbox-sw.js");
workbox.setConfig({
modulePathPrefix: '/path/to/dir/workbox-v4.3.1/'
});
importScripts(
"/precache-manifest.b62cf508e2c3da8c27f2635f7aab384a.js"
);
I'm currently developing a simple web app with separated frontend (Vue) and backend (Quarkus REST API) projects. For now, I've set up an MVP where the frontend displays some simple data fetched from the backend. To get a working MVP I need to set up CORS support. However, first I want to explain my setup:
Setup
I start the development environment of my frontend with npm run serve and of my backend with ./mvnw quarkus:dev. The frontend runs on localhost:8081 and the backend on localhost:8080.
Heroku also allows you to run your apps locally with the command heroku local web. The frontend then runs on 0.0.0.0:5001 and the backend on 0.0.0.0:5000.
To achieve this setup, I set up two .env files in my frontend which point to my backend API. If I want to work in development mode, the file .env.development is loaded:
VUE_APP_ROOT_API=http://localhost:8080
and if I run heroku local web, the file .env.local with
VUE_APP_ROOT_API=0.0.0.0:5000
is loaded.
In my backend I've set
quarkus.http.cors=true
in my application.properties.
Now I want to deploy those two projects to Heroku and use them in production. Therefore I set up two Heroku projects and set a config variable in my frontend project with the following value:
VUE_APP_ROOT_API=https://mybackend.herokuapp.com
Calls from my frontend are successfully working!
Question
For the next step, I want to restrict it more and only enable my frontend to call my API. I know I can set something like
quarkus.http.cors.origins=myfrontend.herokuapp.com
However, I don't know how I could do this in Quarkus with different environments (development, local, and production). I've found this link, but I don't know how to configure Heroku and my backend app correctly. Do I need to set up different profiles which are applied in my different environments? Or is there another solution? Do I need Heroku's config variables?
Thanks for the help so far!
quarkus.http.cors.origins is overridable at runtime so you have several possibilities.
You could use a profile and have everything set up in your application.properties with %prod.quarkus.http.cors.origins=.... Then you either use -Dquarkus.profile=prod when launching your application or you use QUARKUS_PROFILE=prod as an environment variable.
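For example, a minimal application.properties sketch with per-profile origins might look like this (the URLs are taken from your question, adjust as needed):
# active automatically when running ./mvnw quarkus:dev
%dev.quarkus.http.cors.origins=http://localhost:8081
# active when the prod profile is selected (-Dquarkus.profile=prod or QUARKUS_PROFILE=prod)
%prod.quarkus.http.cors.origins=https://myfrontend.herokuapp.com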
Another option is to use an environment variable for quarkus.http.cors.origins. That would be QUARKUS_HTTP_CORS_ORIGINS=....
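On Heroku, that could be set as a config var on the backend app, for example (the app name is a placeholder):
heroku config:set QUARKUS_HTTP_CORS_ORIGINS=https://myfrontend.herokuapp.com -a mybackend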
My recommendation would be to use a profile. That way you can safely check that all your configuration is consistent at a glance.
I developed a web application using Java and MongoDB. I used the GlassFish server.
I tried to deploy it on the Jelastic cloud service.
I uploaded my WAR file, but when I run it after deployment it shows a 404 error. Why? The project works fine on my machine.
There are at least a few potential causes:
your app needs some resources which are not started by default (such as DerbyDB). In this case, check the GlassFish log file, server_instance.log, for more details.
you are trying to get resources from the wrong context; make sure you are requesting them via the correct context name.
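For the second point, one way to check or pin the context name is a WEB-INF/glassfish-web.xml inside your WAR; a minimal sketch (the "/myapp" context root is a placeholder), after which the app would answer at http://your-host/myapp/:
<?xml version="1.0" encoding="UTF-8"?>
<!-- WEB-INF/glassfish-web.xml, a minimal sketch; "/myapp" is a placeholder -->
<glassfish-web-app>
    <context-root>/myapp</context-root>
</glassfish-web-app>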