I deploy a Node.js application with Express to Elastic Beanstalk, and my project structure looks like this:
project
├── app.js
├── client
├── apps
│ ├── app1
│ │ ├── models
│ │ ├── controllers
│ │ └── routers
│ └── app2
├── utils
├── views
├── logs
├── scripts
├── public
├── node_modules
├── .elasticbeanstalk
├── .ebextensions
I serve my static files in the public directory with:
{
  "option_settings": [
    {
      "option_name": "/public",
      "namespace": "aws:elasticbeanstalk:container:nodejs:staticfiles",
      "value": "/public"
    }
  ]
}
in an AWS config file in the .ebextensions directory.
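For reference, the same setting in the YAML form that .ebextensions config files also accept (the file name .ebextensions/static.config is just an example here):
option_settings:
  - namespace: aws:elasticbeanstalk:container:nodejs:staticfiles
    option_name: /public
    value: /public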
But when I update the code with
eb deploy
AWS replaces everything under /var/app/current/ with the project directory, so all the static files on the server are deleted.
Can AWS pipelines help me here, or is there another solution?
Thank you for your help.
I'm having trouble running my app with NPM 7 Workspaces. I am expecting an npm install from the root folder to create a node_modules folder for each of my workspaces, similar to Lerna. However, when I run npm install at the root, I only get one node_modules, at the root level. Is this expected?
Example structure before npm i:
.
├── package.json -> { "workspaces": ["packages/*"] }
└── packages
├── a
│ ├── index.js
│ └── package.json
├── b
│ ├── index.js
│ └── package.json
└── c
├── index.js
└── package.json
Example structure after npm i (note only one package-lock.json/node_modules):
.
├── package.json -> { "workspaces": ["packages/*"] }
├── **node_modules**
├── **package-lock.json**
└── packages
├── a
│ ├── index.js
│ └── package.json
├── b
│ ├── index.js
│ └── package.json
└── c
├── index.js
└── package.json
Node version: 16.4.2
NPM version: 7.18.1
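For reference, a minimal sketch of the manifests (names and versions are placeholders). The root package.json:
{
  "name": "root",
  "version": "1.0.0",
  "private": true,
  "workspaces": ["packages/*"]
}
and packages/a/package.json:
{
  "name": "a",
  "version": "1.0.0"
}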
Update: After messing around with a million things, I finally went and deleted the project and recloned it. It worked after this. I believe it was due to the fact that I was on an old node/npm version when I originally cloned the project. Must have been some funky state lingering around there. Anyway, hope this helps anyone with the same problem!
I have an Electron app that uses Vue for its UI. The app downloads compressed data files from a server. The files contain compressed HTML. The app decompresses and displays the HTML. That's all working fine.
The HTML may contain img tags that reference images that are also compressed in the downloaded file. I extract and decompress these images, but then need to a) put them somewhere that the app can see them, and b) construct an img tag that correctly references these images.
Rather than list the dozens of places I've tried to put these images, suffice it to say that no matter where I put them, I can't seem to access them from the app. I get a 404 error, and usually a message saying the app can't reference local resources.
Any suggestions for where the app should store these images, and then how to reference them from img tags?
I have a related problem with images I could reference from the Web, but would prefer to download and cache locally so that the app can display them when there's no internet connection. I feel like if I can solve one of these problems, I can solve them both.
The settings below work for me...
.
├── dist
│ ├── css
│ │ └── app.6cb8b97a.css
│ ├── img
│ │ └── icon.1ba2ae71.png
│ ├── index.html
│ └── js
│ ├── app.08f128b0.js
│ ├── app.08f128b0.js.map
│ ├── chunk-vendors.e396765f.js
│ └── chunk-vendors.e396765f.js.map
├── electron.js
├── package.json
├── package-lock.json
├── public
│ ├── favicon.ico
│ └── index.html
└── src
├── App.vue
├── components
│ ├── _breakpoint.scss
│ └── RoundList.vue
├── img
│ ├── bullet.svg
│ └── icon.png
└── index.js
vue.config.js:
const path = require("path");

module.exports = {
  outputDir: path.resolve(__dirname, "./basic_app/dist"),
  publicPath: './'
};
Part of package.json:
"scripts": {
"build:vue": "vue-cli-service build",
"serve:electron": "electron .",
"serve": "concurrently \"yarn build:vue\" \"yarn serve:electron\"",
"build:electron": ""
},
The output: https://i.stack.imgur.com/nKK7y.png
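For completeness, a minimal electron.js matching this layout could look like the sketch below; the file is not shown above, so this is an assumption based on the standard Electron boilerplate that loads the Vue build output.
// electron.js: minimal sketch (assumed, not taken from the original answer)
const { app, BrowserWindow } = require("electron");
const path = require("path");

function createWindow() {
  const win = new BrowserWindow({ width: 800, height: 600 });
  // publicPath: './' makes the asset URLs in index.html relative,
  // so css/js/img resolve correctly when the page is loaded from disk.
  win.loadFile(path.join(__dirname, "dist", "index.html"));
}

app.whenReady().then(createWindow);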
I want to npm publish my dist dir that looks like this:
dist
├── README.md
├── node_modules
│ └── clap
│ └── dist
│ └── es
│ ├── index.js
│ └── index.js.map
├── package.json
...
└── utils
├── memory-stream.js
├── memory-stream.js.map
├── mysql-users.js
├── mysql-users.js.map
├── sql.js
├── sql.js.map
├── utils.js
└── utils.js.map
Notice how there's a node_modules dir in there. I want to publish that along with everything else.
The files in there are compiled, but they're part of a local package not distributed on npm, so I do want it bundled.
My code references it like this:
var index = require('../node_modules/clap/dist/es/index.js');
So it should work perfectly fine.
The problem is that it looks like npm publish has it hardcoded to ignore directories called node_modules. Is there any way to override that behaviour?
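The closest workaround I have found so far is bundleDependencies, which tells npm to keep the listed packages' node_modules folders in the published tarball; something like this in dist/package.json (untested for my setup, the name and version are placeholders, and the package may also need a matching entry in dependencies):
{
  "name": "my-package",
  "version": "1.0.0",
  "bundleDependencies": ["clap"]
}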
I have set up a configuration in wdio.conf.js for the rpii html reporter, but it's not generating a master report for all suites.
const { ReportAggregator, HtmlReporter } = require('@rpii/wdio-html-reporter');

exports.config = {
    reporters: ['spec', [HtmlReporter, {
        debug: true,
        outputDir: './reports/html-reports/',
        filename: 'report.html',
        reportTitle: 'Test Report Title',
        showInBrowser: true
    }]],
    onPrepare: function (config, capabilities) {
        let reportAggregator = new ReportAggregator({
            outputDir: './reports/html-reports/',
            filename: 'master-report.html',
            reportTitle: 'Master Report'
        });
        reportAggregator.clean();
        global.reportAggregator = reportAggregator;
    },
    onComplete: function (exitCode, config, capabilities, results) {
        (async () => {
            await global.reportAggregator.createReport({
                config: config,
                capabilities: capabilities,
                results: results
            });
        })();
    }
}
I expect a single report containing multiple test cases, but I'm getting a separate report for each test case.
The topic is pretty old at this point, but I just addressed a similar issue in my project: I could not generate the report at all. In most cases it is just a matter of configuration, but there is no solid documentation or guideline for this painful wdio reporter configuration. So here I am after a whole week of research and testing; below is the viable config you will need, for anyone else who is or was facing the same issue.
First, let's assume your project structure is something like the tree below:
.
├── some_folder1
│ ├── some_sub_folder1
│ ├── some_sub_folder2
├── some_folder2
├── #report
│ ├── html-reports
│ ├── template
│ │ ├── sanity-mobile-report-template.hbs
│ │ ├── wdio-html-template.hbs
├── specs
│ ├── test1
│ │ ├── test1.doSuccess.spec.js
│ │ ├── test1.doFail.spec.js
│ ├── test2
│ │ ├── test2.doSuccess.spec.js
│ │ ├── test2.doFail.spec.js
├── node_modules
├── package.json
Second, you should have templates for your reports. In my case they are located in #report/template: wdio-html-template.hbs and sanity-mobile-report-template.hbs, for the HtmlReporter and the ReportAggregator respectively. As Rich Peters noted:
Each suite is executed individually and an html and json file are generated. wdio does not aggregate the suites, so this is done by the report aggregator collecting all the files and creating an aggregate file when complete.
The HtmlReporter needs to find its template to generate the content for each .spec file, and then another template is needed by the ReportAggregator.
Third, you need correct specs and suites declarations in your wdio config: generic glob patterns for specs, and specific files for suites.
Finally, run your tests using the --suite parameter; refer to the wdio guideline. A sketch of both pieces follows.
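For illustration, the relevant part of wdio.conf.js and the invocation could look roughly like this, using the folder names from the example tree above (adapt the globs and suite names to your own project):
// wdio.conf.js (excerpt): sketch of the specs/suites declarations,
// using the folder names from the example tree above
exports.config = {
    // generic glob so wdio can discover every spec file
    specs: ['./specs/**/*.spec.js'],
    // file-specific suites, one entry per suite you want aggregated
    suites: {
        test1: ['./specs/test1/*.spec.js'],
        test2: ['./specs/test2/*.spec.js']
    },
    // ...reporters, onPrepare and onComplete as in the question above
};
Then pass --suite test1 --suite test2 (one flag per suite) to your usual wdio command so every suite's report ends up in the aggregated master report.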
My final project structure looks like this; notice the changes:
.
├── some_folder1
│ ├── some_sub_folder1
│ ├── some_sub_folder2
├── some_folder2
├── #report
│ ├── html-reports
│ │ ├── screenshots
│ │ ├── suite-0-0
│ │ │ ├── 0-0
│ │ │ │ ├── report.html
│ │ │ │ ├── report.json
│ │ │ ├── 0-1
│ │ │ │ ├── report.html
│ │ │ │ ├── report.json
│ │ ├── master-report.html
│ │ ├── master-report.json
│ ├── template
│ │ ├── sanity-mobile-report-template.hbs
│ │ ├── wdio-html-template.hbs
├── specs
│ ├── test1
│ │ ├── test1.doSuccess.spec.js
│ │ ├── test1.doFail.spec.js
│ ├── test2
│ │ ├── test2.doSuccess.spec.js
│ │ ├── test2.doFail.spec.js
├── node_modules
├── package.json
Each suite is executed individually and an html and json file are generated. wdio does not aggregate the suites, so this is done by the report aggregator collecting all the files and creating an aggregate file when complete.
I downloaded Terraform 0.9 and tried to follow the migration guide to move from remote state to the new backend configuration.
But it doesn't seem to work. I replaced:
data "terraform_remote_state" "state" {
backend = "s3"
config {
bucket = "terraform-state-${var.environment}"
key = "network/terraform.tfstate"
region = "${var.aws_region}"
}
}
with
terraform {
  backend "s3" {
    bucket = "terraform-backend"
    key    = "network/terraform.tfstate"
    region = "us-west-2"
  }
}
yet when I run terraform init in one of my environment folders, I get:
Deprecation warning: This environment is configured to use legacy
remote state. Remote state changed significantly in Terraform 0.9.
Please update your remote state configuration to use the new 'backend'
settings. For now, Terraform will continue to use your existing
settings. Legacy remote state support will be removed in Terraform
0.11.
You can find a guide for upgrading here:
https://www.terraform.io/docs/backends/legacy-0-8.html
I also had to drop the variable interpolation, since that is not allowed anymore. Does that mean that one S3 bucket is used for multiple environments? What have I missed here?
Per the upgrade guide (https://www.terraform.io/docs/backends/legacy-0-8.html), after terraform init you also have to run terraform plan to finalize the migration, which will update the remote state file in S3.
As for configuring multiple environments, we ended up using a wrapper shell script that passes in parameters for ${application_name}/${env}/${project} and uses partial configuration.
For a project structure like this:
├── projects
│ └── application-name
│ ├── dev
│ │ ├── bastion
│ │ ├── db
│ │ ├── vpc
│ │ └── web-cluster
│ ├── prod
│ │ ├── bastion
│ │ ├── db
│ │ ├── vpc
│ │ └── web-cluster
│ └── backend.config
└── run-tf.sh
For each application_name/env/component folder (e.g. dev/vpc) we added a placeholder backend configuration file like this:
backend.tf:
terraform {
  backend "s3" {
  }
}
The folder content for each component then looks like this:
│ ├── prod
│ │ ├── vpc
│ │ │ ├── backend.tf
│ │ │ ├── main.tf
│ │ │ ├── outputs.tf
│ │ │ └── variables.tf
At "application_name/" or "application_name/env" level we added a backend.config file, like this:
bucket = "BUCKET_NAME"
region = "region_name"
lock = true
lock_table = "lock_table_name"
encrypt = true
Our wrapper shell script expects the parameters application-name, environment, component, and the actual terraform command to run.
The content of the run-tf.sh script (simplified):
#!/bin/bash
# run-tf.sh <application> <environment> <component> <terraform-command>
application=$1
envir=$2
component=$3
cmd=$4

# Partial backend configuration: shared settings come from backend.config,
# while the state key is derived from application/environment/component.
tf_backend_config="root_path/$application/$envir/$component/backend.config"

terraform init -backend=true -backend-config="$tf_backend_config" -backend-config="key=tfstate/${application}/${envir}/${component}.json"
terraform get
terraform $cmd
Here is what a typical run-tf.sh invocation looks like:
$ run-tf.sh application_name dev vpc plan
$ run-tf.sh application_name prod bastion apply
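To make the partial configuration explicit: the empty backend block in backend.tf, the values from backend.config, and the key passed on the command line are merged at init time. For dev/vpc the effective backend is therefore equivalent to writing something like this directly (values are the placeholders from above):
terraform {
  backend "s3" {
    bucket     = "BUCKET_NAME"
    region     = "region_name"
    lock_table = "lock_table_name"
    encrypt    = true
    key        = "tfstate/application_name/dev/vpc.json"
  }
}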
You have confused the terraform remote command with the terraform_remote_state data source; you don't have to change any of the remote-state data sources in your .tf files.
Instead of configuring your remote state with the terraform remote command, use the backend config file mentioned in the migration link.
See the second GitHub comment at the link below; it has a nice step-by-step procedure describing what he did to migrate.
https://github.com/hashicorp/terraform/issues/12792