Strapi file upload doesn't work in production - amazon-s3

I have a problem when uploading files in production with Strapi. I have deployed Strapi on Heroku and I am using strapi-provider-upload-aws-s3 for file uploads.
I created a settings.js file in extensions/upload/config with the following code:
if (process.env.NODE_ENV === 'production') {
  module.exports = {
    provider: "aws-s3",
    providerOptions: {
      accessKeyId: process.env.ACCESS_KEY_ID,
      secretAccessKey: process.env.SECRET_ACCESS_KEY,
      region: "xxxxxxxxx",
      params: {
        Bucket: "xxxxxxxxx"
      }
    }
  };
} else {
  module.exports = {};
}
But when I connect to the Strapi production admin and try to upload files, they never reach my S3 bucket. I wired the same aws-s3 config into development mode (just to test) and it worked perfectly. It is as if Heroku won't connect to AWS S3, but I don't see what is missing from my configuration to be able to upload files in production mode.
Thank you all for your help!

Related

How to setup CloudFront url for AWS S3 provider in Strapi

I have installed strapi-provider-upload-aws-s3 and uploaded files to S3 successfully, but is there any way to change the link to CloudFront instead of using S3?
strapi": "3.0.0",
"strapi-provider-upload-aws-s3": "^3.0.0",
config/plugins.js
module.exports = ({ env }) => ({
  upload: {
    accessKeyId: "xxxxxxxxxx",
    secretAccessKey: "xxxxxxxx",
    region: "xxxxx",
    params: {
      Bucket: "xxxxx",
    },
    cloudfrontIsEnabled: 'Yes',
    cloudfrontURL: "xxx.cloudfront.net",
  }
});
I found an old issue about this, but no luck in the new version.
I don't think it's possible with the current version of Strapi and strapi-provider-upload-aws-s3.
Your best bet is to fork the provider code and change the URL here:
https://github.com/strapi/strapi/blob/master/packages/strapi-provider-upload-aws-s3/lib/index.js#L38
and add the forked provider to your package.json file:
"dependencies": {
...,
"strapi-provider-upload-aws-s3": "git+https://github.com/<YOUR_GITHUB_ACCOUNT>/strapi-provider-upload-aws-s3.git"
...
}
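For illustration, the relevant part of the forked provider might end up looking like this. This is a sketch of the v3 provider's upload() callback, not the exact file contents, and CLOUDFRONT_URL is an assumed environment variable:

```
// Sketch of the forked provider's upload() (lib/index.js). Everything
// except the file.url line mirrors the original provider code;
// CLOUDFRONT_URL is an assumed environment variable.
upload(file) {
  return new Promise((resolve, reject) => {
    const path = file.path ? `${file.path}/` : '';
    S3.upload(
      {
        Key: `${path}${file.hash}${file.ext}`,
        Body: Buffer.from(file.buffer, 'binary'),
        ACL: 'public-read',
        ContentType: file.mime,
      },
      (err, data) => {
        if (err) return reject(err);
        // original line: file.url = data.Location; (the raw S3 URL)
        file.url = `https://${process.env.CLOUDFRONT_URL}/${data.Key}`;
        resolve();
      }
    );
  });
}
```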

Nuxt static generated page and axios post

I have a Nuxt project. Everything is OK when I generate a static page.
However, I need to send a POST request to another server.
I tried both a proxy in nuxt.config.js and a direct request, but after deploying to nginx, nothing works in the end.
Please help.
UPDATE. Steps to reproduce:
Create a Nuxt app including axios and proxy.
Configure your proxy for the other web service:
proxy: {
  '/api': {
    target: 'http://example.com:9000',
    pathRewrite: {
      '^/api': '/',
    },
  },
  changeOrigin: true,
},
Call this service somewhere in the code:
const result = await this.$axios.post('/api/email/subscribe', {email: email})
run "yarn dev" and test the service. It works locally properly.
run 'nuxt generate' and deploy the static code hosting service, for example, hosting.com
run your page which calls the above-mentioned service.
As a result, instead of making POST call to the hosting.com/api/email/subscribe, it calls localhost:3000/api/email/subscribe.
Be sure to install the Nuxt versions of axios and proxy in your project: @nuxtjs/axios and @nuxtjs/proxy.
After that, in your nuxt.config.js, add axios as a module, plus these options for axios and proxy:
modules: [
  // Doc: https://axios.nuxtjs.org/usage
  '@nuxtjs/axios',
  // more modules if you need them
],
/*
** Axios module configuration
*/
axios: {
  proxy: true,
  // See https://github.com/nuxt-community/axios-module#options
},
proxy: {
  '/api/': {
    target: process.env.AXIOS_SERVER, // I use .env files for the variables
    pathRewrite: { '^/api/': '' }, // this should be your bug
  },
},
Now you can use axios in any part of the code like this:
const result = await this.$axios.post('/api/email/subscribe', {email: email})
It will internally resolve to AXIOS_SERVER/email/subscribe without causing CORS issues.
EXTRA: test environments locally using multiple .env files
You can configure .env for dev and .env.prod for production; then, locally, you can use yarn build && yarn start to test your app with your production environment. You only need to add this at the top of your nuxt.config.js file:
const fs = require('fs')
const path = require('path')

if (process.env.NODE_ENV === 'production' && fs.existsSync('.env.prod')) {
  require('dotenv').config({ path: path.join(__dirname, `.env.prod`) })
} else {
  require('dotenv').config()
}
By definition, what the Nuxt docs say nuxt generate does is: "Build the application and generate every route as a HTML file (used for static hosting)."
Therefore, using a proxy is out of the question here. Note that your path is not even being rewritten.
And the result you're probably looking for is not hosting.com/api/email/subscribe (with /api), but hosting.com/email/subscribe.
Nevertheless, if you use nginx then I don't think you should use Nuxt's proxy option at all. Nginx is built for exactly that, so point your API calls at it and declare in the nginx config where they should be forwarded.
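For example, a minimal nginx server block along those lines might look like this (the domain, root path, and backend address are all placeholders):

```
# Sketch: serve the nuxt-generated static files and forward /api/ to the
# backend, replacing Nuxt's dev-time proxy.
server {
    listen 80;
    server_name hosting.com;

    # the output of `nuxt generate`
    root /var/www/dist;

    location /api/ {
        # strip the /api prefix, mirroring the pathRewrite above
        rewrite ^/api/(.*)$ /$1 break;
        proxy_pass http://example.com:9000;
        proxy_set_header Host $host;
    }
}
```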

Using browser-sync with vue js and framework7

I have created a PWA using Vue.js 2.0 and Framework7, and I use webpack for bundling. I want to use browser-sync to share my project.
I used this config in my webpack.config.js file:
new BrowserSyncPlugin({
  // browse to http://localhost:3000/ during development,
  // ./src directory is being served
  host: 'localhost',
  port: 3000,
  server: { baseDir: ['src'] }
}),
In src/ I have my basic files like index.html, app.vue, app.js.
After running the npm run dev command I see this result:
[Browsersync] Access URLs:
----------------------------------
Local: http://localhost:3000
External: http://192.168.1.118:3000
----------------------------------
UI: http://localhost:3001
UI External: http://localhost:3001
----------------------------------
[Browsersync] Serving files from: src
After this, localhost:3000 opens in my browser and says browsersync: connected, but it shows a blank page.
Also, when I enter the site path (http://localhost:3000/en/#!/login) in the browser, it shows a Cannot GET /en error. What is the problem?
Any help will be greatly appreciated.
Based on your comment, it looks like you are also using webpack-dev-server. In that case you can proxy to it:
const BrowserSyncPlugin = require('browser-sync-webpack-plugin')

module.exports = {
  // ...
  devServer: {
    port: 3100
  },
  // ...
  plugins: [
    new BrowserSyncPlugin(
      // BrowserSync options
      {
        // browse to http://localhost:3000/ during development
        host: 'localhost',
        port: 3000,
        // proxy the Webpack Dev Server endpoint
        // (which should be serving on http://localhost:3100/)
        // through BrowserSync
        proxy: 'http://localhost:3100/'
      },
      // plugin options
      {
        // prevent BrowserSync from reloading the page
        // and let Webpack Dev Server take care of this
        reload: false
      }
    )
  ]
}

Nuxt.js - console.log build version

I'm hosting a Nuxt site on AWS S3 with CloudFront, so when deploying I need to invalidate the CloudFront CDN. This means it takes a bit for the deploy to go live.
I'd like to console.log(buildHash) when the application starts. What's the best way to do this?
I could add this to a plugin, but what's the best way to get the build hash?
There is a manifest.xxx.js file which seems to hold build hashes of the other files in the project. Is that manifest hash unique for each unique build?
Thanks!
This is a bit of a tumbleweed. Here's what I did:
A plugin to print the manifest hash:
```
// Startup and small stuff
window.onNuxtReady(() => {
  // Print the build version of this script
  var scripts = window.document.getElementsByTagName('script')
  for (var i = 0; i < scripts.length; i++) {
    // Get the script name
    var src = scripts[i].src
    src = src.substr(src.lastIndexOf('/') + 1)
    if (src && src.startsWith('manifest')) console.log('Hello ' + src)
  }
})
```
Added to nuxt.config.js:
```
plugins: [
  'plugins/bootstrap-vue.js',
  'plugins/scroll.js',
  { ssr: false, src: '~plugins/etc.js' },
]
```
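An alternative sketch, if git is available at build time (an assumption): inline the commit hash through Nuxt's env option and log that, instead of scraping script tags:

```
// nuxt.config.js (sketch): expose the current git commit as the build hash.
// Assumes git is available in the build environment.
const { execSync } = require('child_process')
const buildHash = execSync('git rev-parse --short HEAD').toString().trim()

module.exports = {
  env: {
    buildHash
  }
}
```

Any plugin or component can then log it with console.log('Build ' + process.env.buildHash), since Nuxt inlines env values at build time.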

Grunt watch: only upload files that have changed

I was able to set up a Grunt task to SFTP files up to my dev server using grunt-ssh:
sftp: {
  dev: {
    files: {
      './': ['**', '!{node_modules,artifacts,sql,logs}/**'],
    },
    options: {
      path: '/path/to/project',
      privateKey: grunt.file.read(process.env.HOME + '/.ssh/id_rsa'),
      host: '111.111.111.111',
      port: 22,
      username: 'marksthebest',
    }
  }
},
But this uploads everything when I run it. There are thousands of files, and I don't have time to wait for them to upload one by one every time I modify a file.
How can I set up a watch that uploads only the files I've changed, as soon as I've changed them?
(For the curious, the server is a VM on the local network. It runs a different OS and its setup is closer to production than my local machine's. Uploads should be lightning quick if I can get this working correctly.)
What you need is grunt-newer, a task designed specifically to update the configuration of another task based on which files just changed, and then run it. An example configuration could look like the following:
watch: {
  all: {
    files: ['**', '!{node_modules,artifacts,sql,logs}/**'],
    tasks: ['newer:sftp:dev']
  }
}
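(grunt-newer also has to be loaded in the Gruntfile, e.g. with grunt.loadNpmTasks('grunt-newer'), so that the newer: prefix is available.)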
You can do that using the watch event of grunt-contrib-watch.
You basically need to handle the watch event, modify the sftp files config to only include the changed files, and then let grunt run the sftp task.
Something like this:
module.exports = function(grunt) {
  grunt.initConfig({
    pkg: grunt.file.readJSON('package.json'),
    secret: grunt.file.readJSON('secret.json'),
    watch: {
      test: {
        files: 'files/**/*',
        tasks: 'sftp',
        options: {
          spawn: false
        }
      }
    },
    sftp: {
      test: {
        files: {
          "./": "files/**/*"
        },
        options: {
          path: '/path/on/the/server/',
          srcBasePath: 'files/',
          host: 'hostname.com',
          username: '<%= secret.username %>',
          password: '<%= secret.password %>',
          showProgress: true
        }
      }
    }
  }); // end grunt.initConfig

  // on watch events, configure sftp.test.files to contain only the changed file
  grunt.event.on('watch', function(action, filepath) {
    grunt.config('sftp.test.files', {"./": filepath});
  });

  grunt.loadNpmTasks('grunt-contrib-watch');
  grunt.loadNpmTasks('grunt-ssh');
};
Note the "spawn: false" option, and the way you need to set the config inside the event handler.
Note2: this code will upload one file at a time, there's a more robust method in the same link.
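A rough version of that batched approach, adapted from the grunt-contrib-watch docs (the 200 ms debounce window is an arbitrary choice):

```
// Collect every file changed within a short window, then reconfigure
// sftp once with the whole batch instead of once per file.
var changedFiles = Object.create(null);
var onChange = grunt.util._.debounce(function() {
  grunt.config('sftp.test.files', { './': Object.keys(changedFiles) });
  changedFiles = Object.create(null);
}, 200);
grunt.event.on('watch', function(action, filepath) {
  changedFiles[filepath] = action;
  onChange();
});
```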
You can achieve that with Grunt using:
grunt-contrib-watch
grunt-rsync
First things first: I am using a Docker container, and I added a public SSH key to it. So I upload into my "remote" container only the files that have changed in my local environment, with this Grunt task:
'use strict';

module.exports = function(grunt) {
  grunt.initConfig({
    rsync: {
      options: {
        args: ['-avz', '--verbose', '--delete'],
        exclude: ['.git*', 'cache', 'log'],
        recursive: true
      },
      development: {
        options: {
          src: './',
          dest: '/var/www/development',
          host: 'root@www.localhost.com',
          port: 2222
        }
      }
    },
    sshexec: {
      development: {
        command: 'chown -R www-data:www-data /var/www/development',
        options: {
          host: 'www.localhost.com',
          username: 'root',
          port: 2222,
          privateKey: grunt.file.read("/Users/YOUR_USER/.ssh/id_containers_rsa")
        }
      }
    },
    watch: {
      development: {
        files: [
          'node_modules',
          'package.json',
          'Gruntfile.js',
          '.gitignore',
          '.htaccess',
          'README.md',
          'config/*',
          'modules/*',
          'themes/*',
          '!cache/*',
          '!log/*'
        ],
        tasks: ['rsync:development', 'sshexec:development']
      }
    },
  });

  grunt.loadNpmTasks('grunt-contrib-watch');
  grunt.loadNpmTasks('grunt-rsync');
  grunt.loadNpmTasks('grunt-ssh');

  grunt.registerTask('default', ['watch:development']);
};
Good Luck and Happy Hacking!
I recently ran into a similar issue where I wanted to upload only files that had changed. I'm only using grunt-exec. Provided you have SSH access to your server, you can do this task with much greater efficiency. I also created an rsync.json that is ignored by git, so collaborators can have their own rsync data.
The benefit is that when anyone makes a change, it automatically uploads to their stage.
// Watch - runs tasks when any changes are detected.
watch: {
  scripts: {
    files: '**/*',
    tasks: ['deploy'],
    options: {
      spawn: false
    }
  }
}
My deploy task is a registered task that compiles scripts, then runs exec:deploy:
// Showing the exec:deploy task
// Using rsync with SSH keys instead of login/pass
exec: {
  deploy: {
    cmd: 'rsync public_html/* <%= rsync.options %> <%= rsync.user %>@<%= rsync.host %>:<%= rsync.path %>'
  }
}
See all the <%= rsync %> stuff? I use that to grab info from rsync.json, which is ignored by git. I only have it because this is a team workflow.
// rsync.json
{
  "options": "-rvp --progress -a --delete -e 'ssh -q'",
  "user": "mmcfarland",
  "host": "example.com",
  "path": "~/stage/public_html"
}
Make sure your rsync.json is defined in your Gruntfile:
module.exports = function(grunt) {
  var rsync = grunt.file.readJSON('path/to/rsync.json');
  var pkg = grunt.file.readJSON('path/to/package.json');

  grunt.initConfig({
    pkg: pkg,
    rsync: rsync,
I don't think it's a good idea to upload everything that changed to the staging server at once, and working directly on the staging server is not a good idea either. You should configure your local machine's server to match staging/production.
It's better to upload once, when you do a deployment.
You can archive all the files using grunt-contrib-compress and push them with grunt-ssh as one file, then extract the archive on the server (a sketch of that step follows the compress task below); that will be much faster.
Here's an example compress task:
compress: {
  main: {
    options: {
      archive: 'build/build.tar.gz',
      mode: 'tgz'
    },
    files: [
      { cwd: 'build/', src: ['sites/all/modules/**'], dest: './' },
      { cwd: 'build/', src: ['sites/all/themes/**'], dest: './' },
      { cwd: 'build/', src: ['sites/default/files/**'], dest: './' }
    ]
  }
}
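A sketch of the matching upload-and-extract step with grunt-ssh (host, credentials, and paths are placeholders):

```
// Push the archive as a single file via sftp, then unpack it on the
// server with sshexec; both tasks come from grunt-ssh.
sftp: {
  deploy: {
    files: { './': 'build/build.tar.gz' },
    options: {
      path: '/var/www/releases/',
      host: 'example.com',
      username: '<%= secret.username %>',
      password: '<%= secret.password %>'
    }
  }
},
sshexec: {
  extract: {
    command: 'tar -xzf /var/www/releases/build.tar.gz -C /var/www/site',
    options: {
      host: 'example.com',
      username: '<%= secret.username %>',
      password: '<%= secret.password %>'
    }
  }
}
```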
PS: I never looked at the rsync Grunt modules.
I understand this might not be what you are looking for, but I decided to post mine as a standalone answer.