SvelteKit static adapter does not work on Cloudflare Pages

I have a SvelteKit website that I deployed to Cloudflare Pages. The problem is that when I deploy the app with the static adapter and try to visit the site, it says "No webpage was found for the web address", but when I use the Cloudflare adapter it works. I was going to stick with the Cloudflare adapter, but I noticed that the number of "Functions requests today" kept increasing even though my app does not have any functions (somehow every request is counted as a server function). So what am I doing wrong here?

When you run npm run build, does the build directory contain an index.html file? If not, you may need to specify prerender.default = true, like so:
import adapter from '@sveltejs/adapter-static';

/** @type {import('@sveltejs/kit').Config} */
const config = {
  kit: {
    adapter: adapter(),
    prerender: {
      default: true
    }
  }
};

export default config;
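For reference, adapter-static writes the output to build by default; the adapter also accepts options if you ever need a different folder (shown here with their default values, so you don't actually need to add them):

adapter: adapter({
  pages: 'build',
  assets: 'build'
})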
With that you should get a /build directory that contains index.html. Next, just follow the instructions from the Cloudflare Pages documentation for deploying your site: https://developers.cloudflare.com/pages/framework-guides/deploy-anything/
These instructions include the following:
Deploying with Cloudflare Pages
Deploy your site to Pages by logging in to the Cloudflare dashboard > Account Home > Pages and selecting Create a project. Select the new GitHub repository that you created and, in the Set up builds and deployments section, provide the following information:
Configuration option     | Value
Production branch        | main
Build command (optional) | <YOUR_BUILD_COMMAND>
Build output directory   | <YOUR_BUILD_DIR>
Unlike many of the framework guides, the build command and build directory for your site are going to be completely custom. If you do not need a build step, leave the Build command field empty and specify a Build output directory. The build output directory is where your application's content lives.
After configuring your site, you can begin your first deploy. Your custom build command (if provided) will run, and Pages will deploy your static site.
For the complete guide to deploying your first site to Cloudflare Pages, refer to the Get started guide.
After you have deployed your site, you will receive a unique subdomain for your project on *.pages.dev. Cloudflare Pages will automatically rebuild your project and deploy it. You will also get access to preview deployments on new pull requests, so you can preview how changes look to your site before deploying them to production.
From these instructions it looks like you only need to set Production branch to your main branch (or whichever branch you would like deployed) and Build output directory to build (unless otherwise specified in your svelte.config.js). Ensure that your .gitignore does not include the /build directory; or, if you would rather let Cloudflare run the build, fill in the Build command field instead.
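For a SvelteKit site built with the static adapter and the default npm scripts, that typically means something like the following (assuming you let Cloudflare run the build rather than committing /build):

Production branch: main
Build command: npm run build
Build output directory: build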

Related

How to bundle a Next.js app as a JavaScript widget or npm package?

I have a survey built with Next.js using Incremental Static Regeneration (ISR). I would like to bundle it so I can either publish it to npm or host a single entry file, so I can use the survey in other applications.
It's currently hosted on Vercel and uses getStaticProps and getStaticPaths to call my API of 'surveys' and 'survey questions'. ISR is great because it allows me to dynamically load each step of the survey based on the API structure, and if I modify it, the revalidate property will regenerate the new order of questions from the survey. It also lets me get away with having only one page for all surveys/question types.
My App structure is like this:
- src
  - pages
    - [surveyid]
      - [...question].tsx
Based on the request (and the response received at build time/revalidation), the static files for the survey ID are created, and the Next router routes to each survey question based on the next step in the JSON object from the API, e.g. /surveyid/question-1, /surveyid/question-2, etc.
This is all working well in production when deployed to Vercel.
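Roughly, the page behind that route looks like the sketch below. This is only an illustration of the shape: the API URL, the Question type and the fallback mode are placeholders, not the real implementation.

// src/pages/[surveyid]/[...question].tsx -- minimal ISR sketch, details assumed
import type { GetStaticPaths, GetStaticProps } from 'next';

type Question = { id: string; title: string };

export const getStaticPaths: GetStaticPaths = async () => {
  // Nothing is prerendered up front; pages are generated on first request.
  return { paths: [], fallback: 'blocking' };
};

export const getStaticProps: GetStaticProps = async ({ params }) => {
  const surveyId = params?.surveyid as string;
  const step = (params?.question as string[]).join('/');
  // Hypothetical API call -- the real survey service lives elsewhere.
  const res = await fetch(`https://api.example.com/surveys/${surveyId}/${step}`);
  const question: Question = await res.json();
  // ISR: regenerate this page at most once every 60 seconds.
  return { props: { question }, revalidate: 60 };
};

export default function QuestionPage({ question }: { question: Question }) {
  return <h1>{question.title}</h1>;
}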
When it comes to bundling this so a survey can be loaded into other sites I have been quite lost.
When I run next build, it builds the production files that are served to Vercel, but there are many entry points and not a single .js file.
I tried running next export and serving the .out folder locally; the pages and links are accurate, but this breaks ISR, and the Next.js documentation states that next export doesn't work with ISR.
Ideally I would like to build the application to a single entry file, e.g. index.js, and then either publish it as a package to npm or host it on my server. Then I could load the survey by installing the package or by adding the direct URL as the src of a script tag in my other projects, e.g. <script src='https://survey.com/widget.js'></script>, and pass some settings/options with the request so I can tell it which survey to return.
Is there a way for this to be done while still maintaining ISR?
Would I have to create some sort of entry file to dispatch the request and return the static files from my Vercel server instead, as a workaround?
I am currently trying to see if I can use Rollup to build it out to a single file, but I am unsure whether this will break the Next router when it comes to the dynamic rendering (or revalidation) of pages.
In a perfect world I would also like to leverage some of the cool features of Next, like its middleware, to determine the geolocation from the request header. But I'm happy if I can just get the survey to render in another project at this point.

Nuxt JS - reading conf/env file in static site generation

My Nuxt.js project is set up with target: static and ssr: false.
The app needs to connect to a local endpoint to retrieve some information.
I have multiple endpoints and I need multiple instances of the app; each app must read only its own endpoint.
The question is: how can I change the endpoint address for each app without rebuilding every one of them?
I tried with an env file or a JSON file in the static folder (in order to have access to this file in the dist folder after the build process).
But if I modify the content of the env/JSON file in the dist folder and then reload the webpage (or restart the web server that serves the dist folder), the app continues to use the original endpoint provided at build time.
Is there a way to do this, or do I have to switch to server-side rendering mode (which I would rather not use)?
Thank you!
When you use SSG, your app is bundled at build time. Last time I checked, there was no hack around that (I don't have the GitHub issue at hand, but it's a popular one).
And at the same time, I don't really see how it could work, since you want to mix something static and something dynamic at the same time.
SSR is the only way here.
Otherwise, you may add some other logic (not related to Nuxt) that fetches a remote endpoint at runtime and updates the markup when your endpoints change, I guess.
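If you go that route, one possible shape (just a sketch; the file name, plugin name and injected key are assumptions) is a client-side plugin that fetches a config file from static/ at startup:

// plugins/runtime-config.js -- hedged sketch; config.json and $endpoint are made up
export default async (_context, inject) => {
  // static/config.json is copied as-is into dist/, so it can be edited after
  // the build; a page reload then picks up the new value without a rebuild.
  const config = await fetch('/config.json').then((res) => res.json());
  inject('endpoint', config.endpoint);
};

and register it in nuxt.config.js with plugins: [{ src: '~/plugins/runtime-config.js', mode: 'client' }].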
With the nuxt content module (@nuxt/content) it's possible to create a "/content" folder in the project directory and read JSON files from that directory.
Then, when creating the dist with the nuxt generate command, the "content" folder is included in the "_nuxt" folder of the dist, and if you modify the content of the JSON file and refresh the webpage that reads it, it will pick up the new values.
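A rough sketch of that approach (the file name content/settings.json and the endpoint field are mine, just for illustration):

// page <script> section -- hedged sketch using @nuxt/content
export default {
  async asyncData({ $content }) {
    // Reads content/settings.json via the @nuxt/content module.
    const settings = await $content('settings').fetch();
    return { endpoint: settings.endpoint };
  },
};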

Blazor project publish doesn't create DLL files

I have two separate .NET 6.0 projects for the back end and the front end. The back end, in C#, publishes just fine. The front end is written with Blazor. When I choose Publish, it goes through the process without errors or exceptions. It also builds the files in the bin > Release > net6.0 directory just fine, but it doesn't create any of the DLL files in the directory I chose for publishing. In that directory it only creates wwwroot, libman.json and web.config, as you can see in the image.
What could be the cause of the problem?
A Blazor WebAssembly app is a static app, so it consists largely of static files; the logic you wrote in .razor files is packaged into your_project_name.dll under wwwroot\_framework.
Then, when we need to host the app, we need a static file server to serve it, for example IIS. When publishing the static app, we create a website and point it at the publish folder containing the wwwroot folder and the web.config file. Then make sure IIS has the URL Rewrite module installed, since it's a static website; the default rewrite rule generated in web.config essentially rewrites requests for paths that are not physical files to index.html, so that client-side routing keeps working.

Hosting a Blazor WASM ASP.NET Core hosted app in Kestrel

I am having trouble hosting a Blazor WASM ASP.NET Core hosted application. The solution has 3 projects: Client, Shared, and Server.
when I run the command dotnet publish --configuration Release it publishes the libraries to their respective folders in solution like this:
WebWorkbench3\Client\bin\Release\net5.0\publish
WebWorkbench3\Server\bin\Release\net5.0\publish
...
I would assume that, since the server project references the client, my steps to host the application would be the following:
Open WebWorkbench3\Server\bin\Release\net5.0\publish in powershell
Run command dotnet .\WebWorkbench3.Server.dll
Navigate to: https://localhost:5001/
Result:
Expected: client page opened
Actual: the page is stuck at the "Loading.." string. In the console we see an error about _framework/blazor.webassembly.js failing to load.
If we check the contents of the wwwroot folder in the server's publish output, the _framework folder (and blazor.webassembly.js with it) is missing.
So this explains why the error is shown. However, my question at this point is: shouldn't the publishing process/configuration take care of copying the client's wwwroot contents into the server's output directory? If we start a debugging session in Visual Studio, we use the server as the startup project, so the project should have some idea of where to look up the blazor.webassembly.js file.
So why doesn't the same happen during publishing?
Note: I was able to fix the issue by manually copying the client's wwwroot directory and placing its contents into the server's wwwroot directory... But I don't think that is how serving is supposed to work?
EDIT: I have just tried to set up the client Blazor application in IIS, and it works, kind of. The page opens, but when it makes a REST GET request to the server it uses the same hostname:port combination. So if my app is hosted on mysite.local:50001, then the request to the API looks like mysite.local:50001/data/loadall, where data is the controller name and loadall is the action name. So basically the client uses the same base address as the server. The problem is that I cannot start the server on the same port as the client; an attempt to do so fails because the port is already taken.
So basically I have the same question as before: how do I host a WASM application that is split between client and server? I am pretty sure I can make it work by forcing the client to use a non-standard server port and serving the server part on that port. However, I believe there is a reason why the current configuration (the default configuration in the Blazor WASM template) is set up this way, so it should be possible to run the project somehow without any additional changes at all.
Well, this will be a self-answer. Instead of publishing (dotnet publish --configuration Release) at the solution level, do the publishing at the project level:
before: ..\repos\WebWorkbench3\WebWorkbench3
after: ..\repos\WebWorkbench3\WebWorkbench3\Server
In the first case the compiler does not copy the _framework folder (and possibly some other files) into wwwroot. Once you have published the Server project correctly, you can serve the app with the dotnet .\WebWorkbench3.Server.dll command.
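In other words, something like this (paths as in this project; the net5.0 folder will differ for other target frameworks):

cd ..\repos\WebWorkbench3\WebWorkbench3\Server
dotnet publish --configuration Release
cd .\bin\Release\net5.0\publish
dotnet .\WebWorkbench3.Server.dll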
Having the same issue as explained above:
Before: the solution file had the same name and was in the same folder as the server project.
Resolved: I moved the solution one level up, to the project root.
Now, dotnet publish within the server project produces the _framework folder + content as expected.

Quarkus, Heroku and different environments

I'm currently developing a simple web app with separate frontend (Vue) and backend (Quarkus REST API) projects. For now, I've set up an MVP where the frontend displays some simple data fetched from the backend. To get a working MVP I need to set up CORS support. However, first I want to explain my setup:
Setup
I start the development environment of my frontend with npm run serve and my backend with ./mvnw quarkus:dev. The frontend runs on localhost:8081 and the backend on localhost:8080.
Heroku also allows you to run your apps locally with the command heroku local web. The frontend then runs on 0.0.0.0:5001 and the backend on 0.0.0.0:5000.
To achieve this setup I created two .env files in my frontend which point to my backend API. When I work in development mode the file .env.development is loaded:
VUE_APP_ROOT_API=http://localhost:8080
and if I run heroku local web the file .env.local with
VUE_APP_ROOT_API=0.0.0.0:5000
is loaded.
In my backend I've set
quarkus.http.cors=true
in my application.properties.
Now I want to deploy those two projects to Heroku and use them in production. Therefore I set up two Heroku projects and set a config variable in my frontend project with the following value:
VUE_APP_ROOT_API:https://mybackend.herokuapp.com
Calls from my frontend are successfully working!
Question
For the next step, I want to restrict it more and allow only my frontend to call my API. I know I can set something like
quarkus.http.cors.origins=myfrontend.herokuapp.com
However, I don't know how to do this in Quarkus with different environments (development, local and production). I've found this link but I don't know how to configure Heroku and my backend app correctly. Do I need to set up different profiles which are applied in my different environments? Or is there another solution? Do I need Heroku's config vars?
Thanks for the help so far!
quarkus.http.cors.origins is overridable at runtime so you have several possibilities.
You could use a profile and have everything set up in your application.properties with %prod.quarkus.http.cors.origins=.... Then you either use -Dquarkus.profile=prod when launching your application or you use QUARKUS_PROFILE=prod as an environment variable.
Another option is to use an environment variable for quarkus.http.cors.origins. That would be QUARKUS_HTTP_CORS_ORIGINS=....
My recommendation would be to use a profile. That way you can safely check that all your configuration is consistent at a glance.
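Put concretely, a minimal application.properties for this could look like the following sketch (the origins are the ones from the question; swap in your real hostnames). The dev profile is active automatically under ./mvnw quarkus:dev and the prod profile when the packaged application runs, so nothing extra is needed on Heroku's side for production; for the heroku local case you could override the value with the QUARKUS_HTTP_CORS_ORIGINS environment variable mentioned above.

quarkus.http.cors=true
%dev.quarkus.http.cors.origins=http://localhost:8081
%prod.quarkus.http.cors.origins=https://myfrontend.herokuapp.com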