Do I have to rebuild my frontend for production every time I edit it? I'm using Vue - vue.js

Basically what I have is my frontend (Vue) and my backend (Node.js etc.). Following a guide, I built the frontend for production using npm run build. I got a bunch of files in a build folder I had set up in a previous step. These files were then moved to a folder in the backend. It works, but it's more of a demo than anything else, and the frontend and backend will have to be modified further as I continue.
I'm just wondering, if and when I edit the frontend more (say, when I add a new page), am I supposed to go through this process again? That is, modify the frontend folder, build it, move the files, and so on?
Thanks.

Yes, definitely.
In a development environment we use npm run dev (or yarn dev), which keeps a development server running and updates the browser whenever the code changes. We don't use a final build in development because we change the code so frequently that making a build after every modification and checking the results against it would be a painful process.
Production, however, is distinct from the development environment. We deploy only code that is bug-free, fully working, and ready for users. Deploying to production means all changes have been made and the final code is ready, so we make a final production build and deploy it to our server.
So don't rush to deploy to production every time you make a small change to the code. First, complete all your changes and test them in the development environment; only when everything works correctly should you create a final production build and deploy it to the server.
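Since the question mentions copying the build output into the backend, here is a minimal sketch of how that production build is typically served from a Node backend. It assumes Express and a dist folder sitting next to the server file; the folder name and port are assumptions for illustration, not details from the question.

```javascript
// Minimal sketch: serve the Vue production build from an Express backend.
// Assumes the output of `npm run build` was copied into ./dist next to this file.
const express = require('express');
const path = require('path');

const app = express();

// Serve the static files produced by the production build
app.use(express.static(path.join(__dirname, 'dist')));

// Fall back to index.html so client-side routing (Vue Router history mode) keeps working
app.get('*', (req, res) => {
  res.sendFile(path.join(__dirname, 'dist', 'index.html'));
});

app.listen(3000, () => console.log('Serving the production build on http://localhost:3000'));
```

Every time you rebuild the frontend, only the contents of that dist folder need to be replaced; the server code itself doesn't change.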
I hope this helps.

Related

How to make changes to a Vue project that is already hosted

I have a Vue project which is published on DigitalOcean. The main problem is that when I make some changes via FileZilla, they don't affect the website. How can I solve this issue?
This is not an issue per se; this is just how modern web development works. Vue.js (and also Nuxt) uses a bundler these days (Webpack and Vite are the most common), so to go to production the project needs to be bundled again each time you push a change to it.
If you upload something via FTP or SSH and edit some source code, a bundle step is still required before any of those changes appear in the actual web app.
Backend languages may not need that: you could SSH into a server, change a .php file, and the page would be updated as soon as you hit F5. But this is not how frontend JavaScript works; it needs to be bundled and optimised.
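If it helps to see what that bundle step looks like, here is a minimal sketch assuming a Vite-based Vue project. Normally you would just run `npm run build`; this uses Vite's Node API purely to make the point that the files you deploy are generated from the source, not edited directly on the server.

```javascript
// build.mjs - minimal sketch of the bundle step using Vite's Node API.
// Run with `node build.mjs`; the paths and outDir are assumptions for illustration.
import { build } from 'vite';

await build({
  root: './',          // project root containing index.html and src/
  build: {
    outDir: 'dist',    // the optimised output - this is what actually gets deployed
  },
});
```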
Another thing: sending code via SSH/FTP is not really a good workflow, because it is not easily trackable, it is not version-controlled, and it will not raise any build errors if something goes wrong.
The best approach is to have a Git repo plus a build step run in some CI.
A common platform for this is Netlify: you connect a GitHub repo, tell it which command to use to build the project, and each time you push some code it can run checks/tests/optimizations (for example via GitHub Actions) before the result is automatically released to production (i.e. your web app is updated).
This workflow has a lot of benefits, as one may tell, and is also the de facto standard approach for modern frontend web development.

Should UI tests be run on a build server or after deployment?

Should end-to-end tests be run at build time (running the application on the build server) or after deployment? I have not yet found a solid answer as to which one is the standard.
Edit
I mean after deploying to QA/SIT/UAT etc... vs. just running the application on a build server without fully deploying it.
The whole point of having a build server is to create a single build of the current source code, run tests against it, and make sure things work before you deploy them. I don't know why anyone would want to run tests only after they have been deployed. What happens if you find a bug? Are you going to roll back the deployment? Always test before deployment.
Ideally, you would have a build environment that mimics your production environment, which allows you to run tests in a "deployed" setting. That's the reason you have development/staging/production servers.
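To make that concrete, here is a hedged sketch of a smoke test that can run against the application started on the build server. It assumes a Jest-style test runner and Node 18+ (for the built-in fetch); the BASE_URL variable and the /health endpoint are made up for illustration.

```javascript
// Smoke test sketch: points at the build server's local instance by default;
// BASE_URL can be overridden if the build environment runs the app elsewhere.
const BASE_URL = process.env.BASE_URL || 'http://localhost:3000';

test('application responds with a healthy status', async () => {
  const res = await fetch(`${BASE_URL}/health`);  // assumed health-check endpoint
  expect(res.status).toBe(200);
});
```

The same test can then run against the production-like build environment before anything is actually deployed.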

Run post deployment scripts without deploying everything else

Is it possible to run the post-deployment scripts without deploying other changes? There are changes in the project that I don't want to deploy to the staging environment yet, but I do need to run the post-deployment scripts to make sure seed data is inserted. Normally I could use schema compare to deploy just the selected changes, but obviously I can't do that with the post-deployment script.

With Continuous Integration, why are tests run after committing instead of before?

While I only have a GitHub repository that I'm pushing to (alone), I often forget to run tests, forget to commit all relevant files, or rely on objects residing on my local machine. These result in build breaks, but they are only detected by Travis CI after the erroneous commit. I know TeamCity has a pre-commit testing facility (which relies on the IDE in use), but my question is with regard to the current use of continuous integration as opposed to any one implementation. My question is:
Why aren't changes tested on a clean build machine - such as those which Travis CI uses for post-commit testing - before those changes are committed?
Such a process would mean that there would never be build breaks, meaning that a fresh environment could pull any commit from the repository and be sure of its success; as such, I don't understand why CI isn't implemented using pre-commit testing.
I'll preface my answer by saying that I am running GitHub and Jenkins.
Why should a developer have to run all tests locally before committing? Especially in the Git paradigm, that is not a requirement. What if, for instance, it takes 15-30 minutes to run all of the tests? Do you really want your developers (or you personally) sitting around waiting for the tests to run locally before you can commit and push your changes?
Our process usually goes like this:
1. Make changes in a local branch.
2. Run any new tests that you have created.
3. Commit the changes to the local branch.
4. Push the local changes remotely to GitHub and create a pull request.
5. Have the build process pick up the changes and run the unit tests.
6. If tests fail, fix them in the local branch and push again.
7. Get the changes code reviewed in the pull request.
8. After approval and all checks have passed, push to master.
9. Rerun all unit tests.
10. Push the artifact to the repository.
11. Push the changes to an environment (i.e. DEV, QA) and run any integration/functional tests that rely on a full environment.
12. If you have a cloud, you can push your changes to a new node and, only after all environment tests pass, reroute the VIP to the new node(s).
13. Repeat step 11 until you have pushed through all pre-prod environments.
14. If you are practicing continuous deployment, push your changes all the way to PROD if all testing, checks, etc. pass.
My point is that it is not a good use of a developer's time to run tests locally, impeding their progress, when you can off-load that work onto a Continuous Integration server and be notified of issues that you need to fix later. Also, some tests simply can't be run until you commit them and deploy the artifact to an environment. If an environment is broken because you don't have a cloud and maybe you only have one server, then fix it locally and push the changes quickly to stabilize the environment.
You can run tests locally if you have to, but this should not be the norm.
As to the multiple developer issue, open source projects have been dealing with that for a long time now. They use forks in GitHub to allow contributors the chance to suggest new fixes and functionality, but this is not really that different from a developer on the team creating a local branch, pushing it remotely, and getting team buy-in via code review before pushing. If someone pushes changes that break your changes then you try to fix them yourself first and then ask for their help. You should be following the principle of "merging early and often" as well as merging in updates from master to your branch periodically.
The assumption that if you write code, it compiles, and the tests pass locally, then no build can be broken, is wrong. That only holds if you are the only developer working on that code.
But let's say I change the interface you are using. My code will compile and pass tests as long as I don't pull your updated code that uses my interface, and your code will compile and pass tests as long as you don't pull my updated interface. And when we both check in our code, the build machine explodes...
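Here is a small, purely hypothetical JavaScript illustration of that situation (the function names are made up): each developer's code works against the version of the other's code they have locally, and the failure only appears once both changes are combined, which is exactly what the CI server sees.

```javascript
// Hypothetical illustration: why code that passes locally can still break the merged build.

// Developer A's change: formatPrice now expects a currency argument.
function formatPrice(amount, currency) {
  return `${currency} ${amount.toFixed(2)}`;
}

// Developer B's code, still written against the old one-argument formatPrice.
function cartTotalLabel(total) {
  return formatPrice(total); // fine against the old version, broken against A's change
}

// On B's machine (old formatPrice) this looked correct; on the CI server, where both
// changes are merged, the output is wrong because currency is now undefined.
console.log(cartTotalLabel(10)); // "undefined 10.00" after the merge
```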
So CI is a process which basically says: put your changes in as soon as possible and test them on the CI server (the code should of course be compiled and tested locally first). If all developers follow those rules, the build will still break sometimes, but we will know about it sooner rather than later.
The CI server is not the same as the version control system; the CI server, too, checks the code out of the repository. Therefore the code has already been committed by the time it gets tested on the CI server.
More extensive tests may be run periodically, rather than at time of checking in, on whatever is the current version of the code at the time of testing. Think of multi-platform tests or load tests.
Generally, of course, you'll unit test your code on your development machine before checking it in.

How to test code locally when using a build server?

I've never worked on tremendously huge projects, and the workflow we use at work is check out / code / compile locally to test / commit. I was wondering how a build server would change this process. How do developers test their code when the application is too huge to compile locally? Do they just code, commit, and pray?
Absolutely not.
The developer usually has a build file which can build the project for them, with some "targets" defined which do the testing. If you have a really big project, you may have certain portions of it precompiled for you, so you don't have to build the whole thing in one big chunk. You usually do your testing locally before you commit to your repository. Breaking the build on big projects can mark you as an object of ridicule and scorn. Breaking the build on really important, really big projects can be career-limiting... ;-)
The build SERVER itself doesn't change this. The build server only runs your build file and the targets you tell it to.
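As a rough illustration of what such a build file might look like in a Node/JavaScript project, here is a hedged sketch with made-up target names; real projects typically use npm scripts, Make, Gradle, MSBuild, and so on, but the idea is the same: the build server runs the very same file, just with whichever targets you tell it to.

```javascript
// build.js - sketch of a build file with "targets" that both developers and the
// build server can run, e.g. `node build.js compile test`.
const { execSync } = require('node:child_process');

const targets = {
  compile: () => execSync('npm run build', { stdio: 'inherit' }),
  test:    () => execSync('npm test', { stdio: 'inherit' }),
};

for (const name of process.argv.slice(2)) {
  if (!targets[name]) throw new Error(`Unknown target: ${name}`);
  targets[name]();
}
```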
There are also build tools (I've just started using TeamCity - no affiliation) that allow "personal builds".
I haven't used it yet, as we haven't got it set up properly, but my understanding is that TeamCity allows running a build (and tests, if any are run on the server) with your changes before committing (and optionally the server will commit your changes if the build is successful). In TeamCity this is called a Pre-Tested Commit.