I've started using AS 7 after a migration and am trying to work out whether hot deployment works the same way as uploading applications through the console.
If a hot deployment stays in the deployments folder, where do applications "go" when they are loaded through the console (or the CLI)? Which method should I be using in an admin role? What happens if I use both?
If you use hot deployment, your application stays in the "deployments" folder; if you deploy through the CLI, it is stored under the "data" folder.
You can use both hot deployment and CLI deployment; the last one deployed is the current one.
Here is the documentation for the deploy command:
[standalone@localhost:9999 /] deploy --help

SYNOPSIS

    deploy (file_path [--name=deployment_name] [--runtime_name=deployment_runtime_name] [--force | --disabled] |
            --name=deployment_name)
           [--server-groups=group_name (,group_name)* | --all-server-groups]
           [--headers={operation_header (;operation_header)*}]

DESCRIPTION

    Deploys the application designated by the file_path or enables an already existing
    but disabled deployment designated by the name argument. If executed without
    arguments, it will list all the existing deployments.

ARGUMENTS

    file_path           - the path to the application to deploy. Required in case the
                          deployment doesn't exist in the repository. The path can be
                          either absolute or relative to the current directory.

    --name              - the unique name of the deployment. If the file path argument
                          is specified, the name argument is optional, with the file
                          name being the default value. If the file path argument isn't
                          specified, the command is supposed to enable an already
                          existing but disabled deployment, and in this case the name
                          argument is required.

    --runtime_name      - optional, the runtime name for the deployment.

    --force             - if a deployment with the specified name already exists, by
                          default the deploy will be aborted and the corresponding
                          message will be printed. The --force (or -f) switch forces
                          the replacement of the existing deployment with the one
                          specified in the command arguments.

    --disabled          - indicates that the deployment has to be added to the
                          repository disabled.

    --server-groups     - comma-separated list of server group names the deploy command
                          should apply to. Either --server-groups or --all-server-groups
                          is required in domain mode. This argument is not applicable in
                          standalone mode.

    --all-server-groups - indicates that the deploy should apply to all the available
                          server groups. Either --server-groups or --all-server-groups
                          is required in domain mode. This argument is not applicable in
                          standalone mode.

    -l                  - in case none of the required arguments is specified, the
                          command will print all of the existing deployments in the
                          repository. The -l switch makes the existing deployments
                          print one per line instead of in columns (the default).

    --headers           - a list of operation headers separated by a semicolon. For the
                          list of supported headers, please refer to the domain
                          management documentation or use tab-completion.
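For example, assuming a hypothetical myapp.war, deploying from the CLI could look like this (the second form is for domain mode):

    [standalone@localhost:9999 /] deploy /path/to/myapp.war --force
    [domain@localhost:9999 /] deploy /path/to/myapp.war --all-server-groups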
I believe the only way to have hot deployment is to use filesystem deployments, i.e. the deployment scanner. You can get some information about that in the application deployment documentation.
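As a rough illustration, the scanner is configured in standalone.xml along these lines (the exact subsystem namespace version depends on your AS 7 release):

    <subsystem xmlns="urn:jboss:domain:deployment-scanner:1.1">
        <deployment-scanner path="deployments" relative-to="jboss.server.base.dir" scan-interval="5000"/>
    </subsystem>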
When you deploy through the console or CLI, the deployment stays compressed and goes into the content directory. There's not much you can really do with it there, though.
For production it's advised not to use the deployment scanner. There are several ways to deploy your application, but the easiest tend to be the web console, the CLI or the Maven plug-in. There are Java APIs as well, or you could write a script that executes CLI commands.
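For example, a minimal script-based deployment (again with a hypothetical myapp.war) could call the CLI in non-interactive mode:

    $JBOSS_HOME/bin/jboss-cli.sh --connect --command="deploy /path/to/myapp.war --force"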
Related
I have a task to deploy an ASP.NET Core React application to two different environments: development and production. Each of these environments should be configured separately.
I use Azure DevOps for CI/CD.
The ASP.NET project contains the following commands for building the application:
<Exec WorkingDirectory="$(SpaRoot)" Command="npm install" />
<Exec WorkingDirectory="$(SpaRoot)" Command="npm run build" />
I use ADAL for authorization, which is why I have to pass some secret variables that are different for Dev and Prod:
const adalConfig = {
  tenant: process.env.REACT_APP_TENANT,
  clientId: process.env.REACT_APP_CLIENT_ID,
  redirectUri: process.env.REACT_APP_REDIRECT_URI,
};
In Azure DevOps I set the parameters with this command:
echo ##vso[task.setvariable variable=REACT_APP_TENANT;isOutput=true]c00000-00ce-000-0f00-0000000004000
In Azure DevOps I have the following standard tasks for the ASP.NET Core build:
.NET Core installer
Restore
Run command (to set env variables)
Build
Publish
Issues:
The environment variable is not set.
I don't even know how to build a separate artifact for production as opposed to development.
Maybe you have already had the task of deploying a Core React app to two different environments? Or please advise whether I need to change the deployment strategy altogether.
The only solution I have found is to use a .env file, but I would have to commit this file to git in order to deploy it from master. And I still don't know how to use different files for dev and prod.
TLDR;
You have isOutput=true in your task.setvariable command. That only sets a variable in the Pipelines engine, to be available to other steps, but doesn't actually map to an environment variable. Remove isOutput and you will see the REACT_APP_TENANT env variable.
But only in the following steps: the env variable is not accessible within the same pipeline step that sets it.
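So the command from the question would become:

    echo ##vso[task.setvariable variable=REACT_APP_TENANT]c00000-00ce-000-0f00-0000000004000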
You can define variables at pipeline level if you know their values upfront - that should simplify things. task.setvariable is more useful for dynamic variables.
If you need a different process (or a different set of variables) for different environments, I recommend using multistage YAML pipelines or classic Releases. They both allow for setting up different stages, each with its own set of variables.
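A minimal multistage sketch, with hypothetical stage names and placeholder values, might look like:

    stages:
    - stage: Dev
      variables:
        REACT_APP_TENANT: dev-tenant-id    # placeholder
      jobs:
      - job: Build
        steps:
        - script: npm install && npm run build
    - stage: Prod
      variables:
        REACT_APP_TENANT: prod-tenant-id   # placeholder
      jobs:
      - job: Build
        steps:
        - script: npm install && npm run build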
Long story
We need to distinguish two separate processes:
Deployment pipeline that's executed on CI agent
Web application that may be hosted in many different ways - Azure Web Apps, self hosting, docker/k8s, etc.
Doing echo ##vso[task.setvariable ...] sets the variable in the pipeline (1.).
The place where the variables are read (like tenant: process.env.REACT_APP_TENANT) isn't that obvious. If it's Node.js server-side code, it'll be executed in 2. If it's part of some build script, it'll be read in 1.
React is tricky, because:
It behaves differently in development and release mode. In release mode, during the build phase, the whole client-side code is compiled down to a static JS bundle, so the env variables you set in your pipeline should work.
It cannot simply access any env variable (to protect you from accidentally exposing your server env variables in the client browser). If you are using create-react-app (which is what the ASP.NET React template uses by default), you have to prefix env variables with REACT_APP_ to use them. If you are using Webpack directly, you'll need a plugin for this.
It's easy to share run configuration instances in IDEA: simply create a configuration and check "Share".
I'm already version controlling the resulting files in .idea/runConfigurations (in the relevant project) and part of ~/.IntelliJIdea* (for puppetising desktops). However, I can't find where IDEA stores the configuration defaults - it doesn't seem to be in either of these places. They must obviously be persisting it somewhere, because it works across restarts. The official documentation is unusually unhelpful in this case:
This check box is not available when editing the run/debug configuration defaults.
The particular use case is that I'd like all future "Behave" configurations to have the environment variable DISPLAY set to :1 to run browser tests in VNC rather than in the foreground.
Defaults (the ones that you configure under the Defaults node in your screenshot) are per-project, and are therefore stored together with other non-shared configs in .idea/workspace.xml (which is not supposed to be stored under VCS, as it contains developer/computer-specific settings).
You can find such entries in that file under the <component name="RunManager"> node. Default entries will have the default="true" attribute.
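For illustration only, such a default entry looks roughly like the sketch below; the type and exact child elements vary by run configuration plugin, so treat this as an approximation rather than the precise format:

    <component name="RunManager">
      <configuration default="true" type="..." factoryName="...">
        <envs>
          <env name="DISPLAY" value=":1" />
        </envs>
      </configuration>
    </component>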
There are no "defaults of defaults" for run/debug configs that you can edit or provision (configs that would be applied to any new project). They are not stored in separate config file(s) at the IDE level, but are initialized directly from plugin code.
I have created a Bamboo build plan that is supposed to generate artifacts. And it does: I see the generated files on the server. Unfortunately, Bamboo does not copy the files to the desired location, i.e. it does not treat them as artifacts that I can download from the Bamboo server.
I am working with Bamboo 4.3.3. The documentation tells me to describe the artifacts location relative to the "working directory", so I am trying to copy everything to ${bamboo.build.working.directory}.
I have tried different location / copy pattern settings, but to no avail.
Where should I put them? I have a scripting environment, and there is no Maven or Ant to help me.
I finally understood what was going on with my artifacts and test results that Bamboo did not see:
Test results: there is a known bug that is affecting all versions up to 4.4.5, which manifests itself in scripting environments. Fortunately, it has a workaround: JUnit Parser: Test results are not found
Bamboo uses the system property bamboo.fs.timestamp.precision to define the filesystem timestamp resolution. By default it is set to 100 (ms); set it to a higher value in order to make the file date check less strict. Bamboo does the check in the following way:
private boolean isFileRecentEnough(final File file)
{
    // A file only counts if it was modified no earlier than the task start time
    // minus the configured timestamp resolution.
    return file.lastModified() >= (taskStartDate.getTime() - SystemProperty.FS_TIMESTAMP_RESOLUTION_MS.getTypedValue());
}
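For example, the property can be raised by passing a JVM argument at Bamboo startup (where exactly you add it, e.g. bin/setenv.sh or the service wrapper configuration, depends on your installation; 1000 is just an illustrative value):

    -Dbamboo.fs.timestamp.precision=1000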
Other items to check
Double-check the task configuration and confirm that it is configured to look for the test results file in the current working directory of the job (e.g. C:\Users\ssetayeshfar\bamboo-home-445\xml-data\build-dir\PROJECT-PLAN-JOB) and NOT a sub-directory (e.g. C:\Users\ssetayeshfar\bamboo-home-445\xml-data\build-dir\PROJECT-PLAN-JOB\test-results).
In case the test report is not produced by the build itself (it was produced earlier), use a 'touch' command right before the JUnit task.
Artifacts: at the beginning of my work with Bamboo I did not understand that the working directory is defined PER JOB and tried to copy something produced in a previous job as an artifact of the current one.
I've looked at the usage, but haven't understood from it how to configure multiple servers. I added separate server elements to settings.xml - but I don't understand how to specify a different URL for every server.
The URL element belongs to the global plugin configuration. How do I configure multiple server URLs?
You could add multiple profiles to your pom.xml. One for each server. Check the Maven documentation on profiles for details!
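A rough sketch of that idea, assuming the tomcat7-maven-plugin and made-up host/server names, could look like this in pom.xml (each profile points the plugin at a different server, and the <server> id refers to credentials in settings.xml):

    <profiles>
      <profile>
        <id>dev</id>
        <build>
          <plugins>
            <plugin>
              <groupId>org.apache.tomcat.maven</groupId>
              <artifactId>tomcat7-maven-plugin</artifactId>
              <configuration>
                <url>http://dev-host:8080/manager/text</url>
                <server>dev-tomcat</server>
              </configuration>
            </plugin>
          </plugins>
        </build>
      </profile>
      <profile>
        <id>qa</id>
        <build>
          <plugins>
            <plugin>
              <groupId>org.apache.tomcat.maven</groupId>
              <artifactId>tomcat7-maven-plugin</artifactId>
              <configuration>
                <url>http://qa-host:8080/manager/text</url>
                <server>qa-tomcat</server>
              </configuration>
            </plugin>
          </plugins>
        </build>
      </profile>
    </profiles>

You would then pick a target with something like mvn tomcat7:redeploy -Pdev or mvn tomcat7:redeploy -Pqa.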
If you examine the documentation of the Tomcat plugin you will see that it does not support multiple <configuration> sections. That should be a small addition because in the deploy phase you only copy the WAR file to the server using an HTTP PUT command. So I wonder why they have not added this capability to the plugin.
Anyhow, one possible workaround is to:
Make multiple copies of your pom.xml in the same directory but give them unique names, e.g. dev_1_pom.xml, dev_<some_machine_name_or_IP>, qa_1_pom.xml, and so on. You can keep your development pom.xml file name the same because you will likely still run Maven from the command line. Personally, I prefer running the mvn command from my IDE (a button click away vs. typing an mvn command with arguments every time).
In each of the copies, change the <configuration> section under your Tomcat plugin to point to a different server that matches the name of that specific pom.xml. You will need corresponding sections in settings.xml.
Create corresponding External Tools Configuration(s) (in Eclipse, or another IDE) and have each one call the corresponding POM file. Here is an example with Eclipse:
Open the External Tools Configurations dialog in Eclipse (either from the dropdown menu next to the button, or by going to the menu bar and clicking Run > External Tools > External Tools Configurations). Then, on the Main tab, provide values for the following fields:
Location: C:\downloads\tools\apache-maven-3.0.3\bin\mvn.bat
Working directory: ${workspace_loc:/<project_name>} - replace <project_name> with the name of your project
Arguments: -f <pom_file_name> <other_arguments> - <other_arguments> could be tomcat7:redeploy
Now you can run these external tool launchers individually to deploy to different servers.
Optionally, extract the mvn commands from your launchers and create a shell script (batch or Unix bash script) that runs all of them; that way you can deploy to multiple servers at once. You can also run this script from Eclipse: create a new External Tools Configuration launcher, but this time your Location: field will point to cmd (Windows) or bash (Unix, Linux, ...), not mvn.
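A minimal sketch of such a script, assuming the hypothetical POM names from step 1, might be:

    #!/bin/sh
    # Deploy to each environment by running Maven against its dedicated POM.
    mvn -f dev_1_pom.xml tomcat7:redeploy
    mvn -f qa_1_pom.xml tomcat7:redeploy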
I have a project that needs to be deployed into multiple environments (prod, test, dev). The differences mainly consist of configuration properties/files.
My idea was to use profiles and overlays to copy/configure the specialized output, but I'm stuck on whether I should generate multiple artifacts with specialized classifiers (e.g. "my-app-1.0-prod.zip/jar", "my-app-1.0-dev.zip/jar") or create multiple projects, one for every environment.
Should I use the maven-assembly-plugin to generate multiple artifacts for every environment?
Anyway, I'll need to generate all of them at once, so it seems that profiles don't fit... still puzzled :(
Any hints/examples/links would be more than welcome.
As a side issue, I'm also wondering how to achieve this in CI (Hudson/Bamboo): how do I generate and deploy these artifacts for all the environments to their proper servers (e.g. using the Hudson SCP plugin)?
I prefer to package configuration files separately from the application. This allows you to run the EXACT same application and supply the configuration at run time. It also allows you to generate configuration files after the fact for an environment you didn't know you would need at build time, e.g. CERT.
I use the "assembly" tool to zip up each domain's config files into named files.
I would use the version element (like 1.0-SNAPSHOT, 1.0-UAT, 1.0-PROD), and thus tags/branches at the VCS level, in combination with profiles (for environment-specific things like machine names, user names, passwords, etc.) to build the various artifacts.
We implemented an m2 plugin to build the final .properties files using the following approach:
The common, environment-unaware settings are read from common.properties.
The specific, environment-aware settings are read from dev.properties, test.properties or production.properties, overriding the default values where necessary.
The final .properties file is written to disk from the Properties instance after reading the files in that order.
That .properties file is what gets bundled, depending on the target environment.
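A minimal sketch of that merging step, with a hypothetical output file name and no error handling, might look like:

    import java.io.*;
    import java.util.Properties;

    public class MergeProperties {
        public static void main(String[] args) throws IOException {
            String env = args[0]; // "dev", "test" or "production"
            Properties props = new Properties();
            // Common, environment-unaware defaults first...
            try (InputStream in = new FileInputStream("common.properties")) {
                props.load(in);
            }
            // ...then the environment-specific file overrides them.
            try (InputStream in = new FileInputStream(env + ".properties")) {
                props.load(in);
            }
            // Write the merged result; the file name here is just an example.
            try (OutputStream out = new FileOutputStream("application.properties")) {
                props.store(out, "Merged configuration for " + env);
            }
        }
    }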
We use profiles to achieve that, but we only have two: the default profile, which we call the "development" profile and which includes the configuration files, and a "release" profile, where we don't include the configuration files (so they can be properly configured when the application is installed).
I would use profiles to do it, and I would append the profile to the artifact name if you need to deploy it. I think it is somewhat similar to what Pascal suggested, only you would be using profiles rather than versions.
PS: Another reason why we have dev/release profiles only is that whenever we send something to UAT or PROD, it has been released, so if there is a bug we can track down the state of the code when the application was released; it is easier to tag it in SVN than to try to find its state from the commit history.
I had this exact scenario last summer.
I ended up using profiles for each higher environment, with classifiers. The default profile was a "do no harm" development build. I had DEV, INT, UAT, QA, and PROD profiles.
I ended up defining multiple jobs within Hudson to generate the region specific artifacts.
The one thing I would have done differently is architect the projects a bit differently, so that the region-specific build was outside of the modularized main project. That way it would simply pull in the latest artifacts for each specific build rather than rebuild the entire project for each region.
In fact, when I set up the jobs, the QA and PROD jobs were always set up to build off of a tag. Clearly this is something that you would tailor to your specific workplace rules on deployment.
Try using https://github.com/khmarbaise/multienv-maven-plugin to create one main WAR and one configuration JAR for each environment.