I'm already thankful to anyone reading this.
I've stumbled across some problems trying to deploy a Nuxt application in the correct way. For testing purposes I've created a clean-installed Nuxt application, so I'm sure nothing is wrong with my codebase. My ultimate goal is to push all my Nuxt code to GitHub, which then gets picked up by an Azure deployment pipeline that generates the needed build files, drops them onto the Web App service, and runs the needed start command. There's no documentation to be found about Azure, which is really annoying for me. I'm not used to deploying things through a pipeline, but I need this to work for this project.
Does anyone have experience with Nuxt and deployments through GitHub -> Azure pipeline -> build -> web app running immediately?
What I've tried already is pushing all the source code to a repository, which gets picked up by the Azure pipeline. The only thing the pipeline does is copy the code into the wwwroot folder, which is obviously not enough to make the application run automatically.
What I expect is some insight from someone who is experienced with Nuxt and Azure deployments through GitHub/Bitbucket (it doesn't really make a difference which).
You have to configure your pipeline to build the Nuxt files and deploy them. You can usually achieve the first step through npm commands: npm install, then npm run build.
There are a few different ways to deploy them, which you'll have to decide between depending on your use case and environment. You can create an endpoint pointing to a CDN serving a blob, zip the built files and publish the artifact which is then used by a release pipeline, etc. Once the Nuxt files are created, you have all you need to deploy.
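For the GitHub -> Azure pipeline -> Web App flow asked about above, a minimal azure-pipelines.yml could look roughly like the sketch below. The service connection name, Web App name, Node version, and start command are assumptions, not details from the question; adjust them to your subscription and to whether you use nuxt build (SSR) or nuxt generate (static).

```yaml
# Rough sketch of an Azure Pipelines definition for a Nuxt app.
# Assumed (not from the question): service connection 'my-azure-connection',
# Web App 'my-nuxt-app', Node 14, SSR build via 'npm run build'.
trigger:
  - master

pool:
  vmImage: 'ubuntu-latest'

steps:
  - task: NodeTool@0
    inputs:
      versionSpec: '14.x'

  # Build the Nuxt files, as described in the answer above.
  - script: |
      npm install
      npm run build
    displayName: 'npm install and build'

  # Zip the workspace so the Web App task has a single package to push.
  - task: ArchiveFiles@2
    inputs:
      rootFolderOrFile: '$(System.DefaultWorkingDirectory)'
      includeRootFolder: false
      archiveFile: '$(Build.ArtifactStagingDirectory)/nuxt.zip'

  # Deploy the zip to the App Service and set the start command.
  - task: AzureWebApp@1
    inputs:
      azureSubscription: 'my-azure-connection'   # assumed name
      appName: 'my-nuxt-app'                     # assumed name
      package: '$(Build.ArtifactStagingDirectory)/nuxt.zip'
      startUpCommand: 'npm run start'
```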
My question is about separate package/deploy steps. What I want to do is to package the service at step 1 of the deployment process, then copy the content of the package to another machine and deploy from there. I can't make it work. I use no parameters, and "serverless package" seems to work fine (it creates the ".serverless" folder without attempting to deploy), but when I copy the ".serverless" folder to another location and execute "serverless deploy", it only says "packaging service" and does nothing. Is this how deployment of a package is supposed to work? This happens on a vanilla AWS Node service.
The command serverless deploy --package path-to-package seems to be what you are looking for, as specified in the Serverless Framework documentation.
This deployment option takes a deployment directory that has already been created with serverless package and deploys it to the cloud provider. This makes it easier to integrate CI/CD workflows with the Serverless Framework.
You were probably missing the --package option.
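As a concrete illustration (using the default .serverless output directory from the question; run both commands next to the same serverless.yml), the two stages would look roughly like this:

```bash
# On the packaging machine: build the deployment artifacts only.
serverless package --package .serverless

# On the deploying machine, after copying the .serverless folder across:
# deploy the pre-built package instead of repackaging from scratch.
serverless deploy --package .serverless
```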
I have an MVC 5 application we're moving from on-premises to the Azure cloud. Currently we have several publish profiles, one per environment, which we select using a PowerShell script. One of our goals is to make the build scripts and infrastructure as simple as possible, so I was wondering whether, using only my appveyor.yml file, I could set the publish profile to be used. So:
Is there a way to set the publish profile from the appveyor.yml file?
If not, what are my choices?
You can run your PowerShell script as part of the desired build step in the pipeline. You can run commands right from the YAML file or the UI, or check your PowerShell script into the repository and run the .ps1 file. You might consider using secure variables to avoid checking things like connection strings into the repo in clear text.
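A rough sketch of what that could look like in appveyor.yml is below; the script path, variable names, and encrypted value are placeholders, not anything from your project:

```yaml
# Sketch only: run a checked-in PowerShell script before the build and
# pass secrets through AppVeyor secure variables.
# 'deploy/select-profile.ps1' and the variable names are hypothetical.
environment:
  DEPLOY_ENVIRONMENT: staging
  CONNECTION_STRING:
    secure: <encrypted-value-generated-in-the-appveyor-ui>

before_build:
  - ps: .\deploy\select-profile.ps1 -Environment $env:DEPLOY_ENVIRONMENT

build:
  verbosity: minimal
```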
However, this custom script/profiles approach will not allow you to use the built-in WAP artifact packaging, and you will also need to use a custom build script instead of the automatic MSBuild mode. That is OK, but it means a little more scripting. You will also need to publish the artifacts yourself so they are available for deployment.
Maybe the easier option is to let AppVeyor do all the build and WAP artifact packaging/publishing automatically, and then use the built-in Web Deployment with Web Deploy parametrization instead of multiple publish profiles.
But if you decide to go with custom scripts and multiple publish profiles, you can still use the built-in Web Deployment with the artifacts created by your scripts.
I am having an issue getting Selenium end-to-end tests to work after an automated deployment using Visual Studio Team Services (VSTS).
I have a working build that generates a build artifact. It is triggered from VSTS but runs on an on-premises build server. I have a working deployment that deploys to an on-premises development web server. All of this works, including unit tests running after the build.
The problem starts when I try to add testing after the deployment. The tests are to be run on the build server and point to the dev server website. The deployment has two phases: a deploy phase, and then an agent phase that runs a test-assemblies task using the build agent on the build server. The problem seems to be that the test DLLs are not being included in the build artifact and so are never found when the test process runs. The deploy is set up as follows.
I have a Copy Files task before the Publish Artifact task in the build definition that seems to copy the files into the right place, but they are not included in the zip file artifact. I've looked at several websites and posts on here, but I still seem to be missing a vital piece of knowledge that will get this working.
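As a rough sketch of the intent of those two steps (paths and the artifact name are illustrative placeholders, not the actual build definition), the YAML equivalent would be something like:

```yaml
# Illustrative sketch only: copy the built test DLLs into the staging
# folder and publish them with the rest of the artifact. Paths and the
# artifact name are placeholders, not the actual build definition.
steps:
  - task: CopyFiles@2
    inputs:
      SourceFolder: '$(Build.SourcesDirectory)'
      Contents: '**\bin\$(BuildConfiguration)\*Tests*.dll'
      TargetFolder: '$(Build.ArtifactStagingDirectory)\tests'

  - task: PublishBuildArtifacts@1
    inputs:
      PathtoPublish: '$(Build.ArtifactStagingDirectory)'
      ArtifactName: 'drop'
```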
Using the log did help, as recommended, so thanks for that.
I have managed to get this working. I separated the Selenium tests out into a separate solution and built that separately, creating its own build artifact. I then added this to the agent task in the deploy, and that worked. The only thing I need to sort out now is the correct search path to find the test DLLs. It's not quite as dynamic as I would like at the moment, but I can fine-tune that until I get it right.
I accept this works around rather than solves the original problem, but needs and timescales must. I think moving the end-to-end UI tests out of the main solution makes sense anyway, though no doubt others may think differently.
Thanks for your help, everyone.
I'm evaluating Bamboo to replace our Jenkins setup and have a couple of questions. I have a .NET solution that generates two artifacts: a packaged website and an MSI. I have three environments I deploy to: test, stage, and production. Our Jenkins server in turn has three jobs, one for each environment. Each job builds the solution, copies in the configuration files for the environment it will be deployed to, and then deploys the artifacts. Reading the documentation and other material (https://answers.atlassian.com/questions/19562/plans-stages-jobs-best-practices), I'm getting mixed signals about how deployment should work with Bamboo. It seems to me that deployment plans expect artifacts to exist and then deploy them, but build plans include deployment steps as well. How is all of this supposed to interact?
The reason I'm confused is that I have environment-specific configuration files that get packaged during a build. Any direction on how this should work?
I posted the question to the Atlassian board as well and got the answer I think I like best:
Jason Monsorno (Aug 30 '13):
Deployment projects in Bamboo seem to be dependent on the existence of an artifact; the catch is that you don't necessarily need to use that artifact, so you could use an empty artifact and do completely independent steps. Deployment projects are still fairly new to Bamboo, and your structure may favor the "normal" workflow, where each environment would be a separate manual stage.

Deployment projects do have a separate workflow and versioning. To use deployment projects in your scenario, I'd suggest making the artifact the entire checkout; each deployment environment can then build a copy of the artifact. The space-saving but less time-efficient option would be to save just the current revision in a file as the artifact and use that to check out and build in each deployment environment.
I am working on automating our ASP.NET projects' releases using Octopus Deploy. While creating a release in Octopus, I am performing the following testing completely manually:
I am checking that the release is deploying
Everything expected
In the expected places
All required services or web services were restarted
All pre/post-deploy scripts ran successfully
This means going to different servers and reading the release logs generated by Octopus Deploy. It leaves a risk of introducing mistakes, and any future change can make the deployment unstable.
Is there any tool to perform a kind of integration testing for an Octopus Deploy release, or to automate the process mentioned above? I am also open to writing a quick tool to automate my testing, but I was wondering what would be the best way to go about it.
Thanks!
For Octopus 2.0 we're building a powerful API that gives you access to everything in Octopus. Using the API you'll be able to create and deploy a release, as well as read the activity logs and see what was deployed. You could hook this up to an automated test (which is what we're doing internally for end-to-end automated testing).
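As a hedged sketch of the kind of test harness that API enables (the server URL, API key, and resource IDs below are placeholders, and the exact endpoints should be confirmed against the Octopus API documentation), something like this PowerShell could create a release, deploy it, and assert on the task log:

```powershell
# Rough sketch only: create a release through the Octopus REST API,
# deploy it, and pull back the task log for assertions in a test.
# Server URL, API key, and the project/environment IDs are placeholders.
$server  = "http://octopus.local"
$headers = @{ "X-Octopus-ApiKey" = "API-XXXXXXXXXXXXXXXX" }

# Create a release for the project.
$release = Invoke-RestMethod -Method Post -Uri "$server/api/releases" -Headers $headers `
    -Body (@{ ProjectId = "projects-1"; Version = "1.0.0" } | ConvertTo-Json) `
    -ContentType "application/json"

# Deploy that release to an environment.
$deployment = Invoke-RestMethod -Method Post -Uri "$server/api/deployments" -Headers $headers `
    -Body (@{ ReleaseId = $release.Id; EnvironmentId = "environments-1" } | ConvertTo-Json) `
    -ContentType "application/json"

# (In a real test you would poll the deployment task until it completes.)
# Read the raw task log and make a simple assertion about the deployment.
$log = Invoke-RestMethod -Uri "$server/api/tasks/$($deployment.TaskId)/raw" -Headers $headers
if ($log -notmatch "Success") { throw "Deployment log did not report success." }
```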