sls remove without .serverless directory - serverless-framework

The Serverless Framework creates the .serverless directory containing the configuration of the AWS components.
If it is not present, what will happen when I run sls remove?
Imagine that someone else deployed, and I just cloned the repo and need to remove everything. Should I add this directory to the repository?

sls remove refers to the serverless.yml file and removes the stack.
The .serverless directory is created when you run sls deploy.
It basically contains artifacts such as the zip file that is uploaded to AWS during deployment.
So you do not need to create a .serverless directory manually for anything. It is also not recommended to push this directory to git.
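For illustration (the stage and region values below are placeholders, not taken from the question), a fresh clone of the repo can still tear the stack down because serverless.yml is all the CLI needs:
# deploy recreates .serverless/ locally as a side effect; it is not needed afterwards
sls deploy --stage dev --region us-east-1
# remove only reads serverless.yml and deletes the CloudFormation stack it describes
sls remove --stage dev --region us-east-1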

Related

Terraform Module Source: S3 source does not pull the latest file in Terraform Cloud

I have a terraform module that is stored as a zip file on S3. This module/zip file is regularly re-built and updated.
In my main terraform project I'm referencing this module using an S3 source:
module "my-module" {
source = "s3::https://s3.amazonaws.com/my_bucket/staged_builds/my_module.zip"
... More config ...
}
The issue I am having is that it's really hard to get Terraform Cloud to deploy the latest zip file after it has initially been deployed. It seems that it continues to use a cached version of the zip file sourced from S3.
The deployment is done using GitHub Actions, and I tried adding the terraform get -update command as a build step to download the module updates.
# Initialize a new or existing Terraform working directory by creating initial files, loading any remote state, downloading modules, etc.
- name: Terraform Init
  run: terraform init
# Update the terraform modules to get the latest versions
- name: Update Terraform modules
  run: terraform get -update
- name: Terraform Apply
  if: github.ref == 'refs/heads/master' && github.event_name == 'push'
  run: terraform apply -auto-approve
This command works great locally; however, it fails to deploy the latest modules sourced from S3 when Terraform Cloud is used.
I have also scanned the Terraform documentation for any mention of how to taint module sources and haven't found any, so I'm assuming module sources can't be tainted.
The only way I have found to consistently use the latest zip file from S3 is to remove the module definition from the main Terraform project, deploy it, re-add the module, and deploy it again.
This is a manual and time-consuming process.
Is there a better process to make sure that Terraform Cloud always uses (or re-downloads) the latest zip file sourced from S3 for modules?
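One possible workaround, sketched here as an assumption rather than something from the question, is to publish each build under a versioned key so the module source string itself changes and Terraform has to fetch it again:
# upload the rebuilt module under a new, versioned key (the version number is hypothetical)
aws s3 cp my_module.zip s3://my_bucket/staged_builds/my_module-1.2.0.zip
# after pointing the module source at the new key, force modules to be re-installed
terraform init -upgrade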

What is the proper way to upload a Vue.js app to GitHub?

I tried uploading my files to GitHub, but GitHub says it's too big.
I then uploaded the content of the dist folder after building for production.
This worked just fine, but it's not very useful.
What is the proper way to upload a Vue.js app to GitHub?
What you generate (binary files, which can be big) should not be versioned/pushed to GitHub.
It is better to use a local plugin which will, on the GitHub side, build, tag and publish a release associated with your project.
For instance: semantic-release/github.
You can combine that with a GitHub Action, which can take that release, and deploy it on a remote server of your choice (provided it is publicly accessible from GitHub): see for example "How to deploy your VueJS using Github Actions (on Firebase Hosting)" from John Kilonzi.
For your existing dist, you should delete it (from Git only, not your disk) and add it to your .gitignore:
cd /path/to/repo
git rm -r --cached dist
echo "dist/" >> .gitignore
git add .gitignore
git commit -m "Delete and ignore dist/"
git push
What happens if I add a node module (like if I decide to add an image cropper)? How can the other contributors deal with that?
You need to add the declaration of that module to your project, not version the module itself. For instance, using Vue.use.
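As a rough sketch (vue-cropperjs is just a hypothetical choice of image cropper, not one named in the question), only the dependency declaration gets versioned:
# add the dependency once; package.json and package-lock.json are committed
npm install vue-cropperjs --save
# other contributors simply install from the committed manifest
npm install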
Also: I host the app on Netlify. It builds the site straight from GitHub. But it won't be able to build the site if GitHub doesn't have the dist folder...
See "How to deploy your Vue app with Netlify in less than 2 min!" from Jean-Philippe Fong: Netlify itself will build the dist (from your GitHub project sources) and publish it.
You should add a .gitignore file to the root of your repository, where you can list the files and directories to ignore.
Most typical ones to ignore are the following:
node_modules
/dist
.env
.vscode

Serverless: How to remove/deploy deployment without .serverless directory for team collaboration

How do I remove/deploy deployment without .serverless directory for team collaboration?
For example, if I run sls deploy --aws-profile profile1 with a .yml file, it creates this .serverless directory, which I am not including in my git push in order to hide secrets. Now when someone else on my team clones this repo, how can they manage the same deployment? Are the .yml file and the same AWS profile sufficient?
The .serverless folder is created by Serverless to hold the CloudFormation files. You should not handle them manually (and the folder and its contents should not be included in source control).
The serverless.yml is the source of truth for the deployment, so it should produce the same result when run with the same environment.
The AWS account/profile can be set using the AWS CLI. Given that all the devs use the same account, or accounts with the same level of permissions, each of you should be able to run deploy/remove.
If your project uses a .env file or environment variables, each member of the team has to include them in their environment.
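A minimal sketch of that flow, assuming the profile name profile1 from the question and equivalent permissions for every developer:
# each team member stores the credentials for the shared profile locally
aws configure --profile profile1
# the cloned repo plus serverless.yml is enough to manage the same stack
sls deploy --aws-profile profile1
sls remove --aws-profile profile1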

Archive localResource in YARN: customize location

I wanted to deploy a zip file as a local resource in YARN. Hence I did:
packageResource.setResource(packageUrl);
packageResource.setSize(packageFile.length());
packageResource.setTimestamp(packageFile.lastModified());
packageResource.setType(LocalResourceType.ARCHIVE);
packageResource.setVisibility(LocalResourceVisibility.APPLICATION);
If my file name is "abc.zip", YARN unpacks all the zip contents into a folder called "abc", not into the current working directory.
For example, it creates something like:
/grid/5/tmp/yarn-local/usercache/(user)/appcache/application_1394223910537_2533883/container_1394223910537_2533883_01_000002/abc
Can I customize this behavior? How do I get YARN to unzip a file in the current working directory, instead of creating a new directory?
The use case is: if my app has some scripts, it would be useful to have all the scripts deployed to the current directory, instead of having to change the code to reference them from within the folder that YARN creates.
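One possible workaround, purely an assumption and not a YARN setting, is to keep the unpacked folder and link its contents into the working directory from the container launch command (the script names are hypothetical):
# hypothetical container launch command: expose the unpacked scripts in the CWD
ln -s abc/* .
./setup.sh && ./run.sh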

CException Error while deploying yii application on OpenShift?

Friends, I tried to deploy my Yii production application from the Cloud9 IDE to OpenShift. While doing so, I got this error message:
CException
Application runtime path "/var/lib/openshift/51dd48794382ecfd530001e8/app-root/runtime/repo/php/protected/runtime" is not valid. Please make sure it is a directory writable by the Web server process.
Even when I changed the folder permissions to 775 (chmod -R 775 directory) on Cloud9 IDE and deployed again, I still get the same error.
It's an old question, but I just bumped into the same issue very recently.
When you extracted the "yii" package, several folders were empty; "framework/protected/runtime" was one of them.
To deploy to OpenShift you need to commit the yii package to git and then push the commit to OpenShift. But git won't commit empty folders, so they are not created in your deployment. You need to create some file inside those folders and add those files to your git repo before committing/pushing. The usual procedure is to add a ".gitkeep" file to those folders (it's just an empty dummy file, so git will see those folders).
That would fix this particular error.
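For example, following the .gitkeep approach described above (the path matches the runtime folder mentioned in the answer):
# create a placeholder so git tracks the otherwise-empty folder
touch framework/protected/runtime/.gitkeep
git add framework/protected/runtime/.gitkeep
git commit -m "Track empty runtime folder"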
It may be due to the ownership of the folder.
Check the web server's user and group, whether that directory is writable, and what effect changing the platform has on the web server.
Hope my suggestion is useful.
For Yii applications, the assets and protected/runtime folders are special. First, both folders must exist and be writable by the server (httpd) process. Second, these two folders contain temporary files and should be ignored by git. If these temporary files got committed, deployment on plain servers (not OpenShift servers) would cause git merge conflicts. So I put these two folders in .gitignore:
php/assets/
php/protected/runtime/
In my deployment, I add a shell script that is called by OpenShift; it creates both folders under $OPENSHIFT_DATA_DIR and creates symbolic links to both of them in the application's folders. This is the content of the shell script (.openshift/action_hooks/deploy), which I adapted from here:
#!/bin/bash
if [ ! -d $OPENSHIFT_DATA_DIR/runtime ]; then
    mkdir $OPENSHIFT_DATA_DIR/runtime
fi
# remove the symlink if it already exists; fixes a problem with gears > 1 and nodes > 1
rm $OPENSHIFT_REPO_DIR/php/protected/runtime
ln -sf $OPENSHIFT_DATA_DIR/runtime $OPENSHIFT_REPO_DIR/php/protected/runtime
if [ ! -d $OPENSHIFT_DATA_DIR/assets ]; then
    mkdir $OPENSHIFT_DATA_DIR/assets
fi
rm $OPENSHIFT_REPO_DIR/php/assets
ln -sf $OPENSHIFT_DATA_DIR/assets $OPENSHIFT_REPO_DIR/php/assets
The shell script ensures the temporary folders are created on each gear after an OpenShift deployment. By default, a new directory's permissions are u+rwx, and it is writable by the httpd process because the gear runs httpd as the gear user (not apache or some other account).