I'm trying to delete files on a server with SSH, before copying new versions with SCP. The filenames include a hash for cache busting, so the old files have to go first.
But it seems that the transfer task doesn't wait for the SSH in clean-remote to finish first.
Is there a way to wait for the SSH session to complete before proceeding?
var gulp = require('gulp');
var scp = require('gulp-scp');
var ssh = require('gulp-ssh');
gulp.task('transfer', ['clean-remote'], function () {
  return gulp.src('...')
    .pipe(scp({
      host: '...',
      path: '...'
    }));
})
gulp.task('clean-remote', function () {
  ssh.exec({
    command: ['cd ...', 'rm *.js', 'exit'],
    sshConfig: {
      host: '...'
    }
  })
});
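The usual cause of this in gulp is that clean-remote never tells gulp when it is done: a task is only waited on if it returns a stream or a promise, or calls the async callback. A minimal sketch of the fix, assuming gulp-ssh's exec returns a stream (as it does in the mysqldump example further down):
gulp.task('clean-remote', function () {
  // returning the stream lets gulp wait for the SSH commands
  // to finish before starting dependent tasks like 'transfer'
  return ssh.exec({
    command: ['cd ...', 'rm *.js', 'exit'],
    sshConfig: {
      host: '...'
    }
  });
});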
Related
I'm looking to create an SSH sub-process and then interact with the server. I'm hung up on a basic step, which is to simply wait until the SSH process has connected. I know that this ssh command connects fine, because when I run it with inherit instead of piped, the ssh shell shows up as expected.
If I understand correctly, p.output() listens to stdout until it reaches EOF. I'm assuming that once SSH has connected, it streams to stdout but never reaches EOF, so the promise returned by p.output() never resolves.
const encoder = new TextEncoder();
const decoder = new TextDecoder();

const p = Deno.run({
  cmd: ["ssh", "root@mywebsite"],
  stdout: "piped",
  stderr: "piped",
  stdin: "piped"
});

const command = (cmd: string) => p.stdin.write(encoder.encode(cmd));
const getOutput = async () => decoder.decode(await p.output());

await p.output() // <----- Hangs here
await command("cd /home/dev/www")
await command("ls -la")
console.log(await getOutput())
await p.status()
console.log("done")
It hangs because .output() resolves only once the entire process output has been read, meaning that it will not resolve until the ssh command has finished.
Also keep in mind that you need to add \n at the end of each command, otherwise it will never be triggered.
await command("cd /home\n");
await command("ls -la\n");
// if you don't finish the ssh session, .output will never resolve
await command("exit\n");
// now it will work correctly
console.log(await getOutput());
In any case, if you don't want to close the session just to read the output of a given command, what you need to do is use p.stdout.readable or p.stdout.read(buf) instead.
for await (const chunk of p.stdout.readable) {
  // parse chunk and do something
}
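For instance, a minimal sketch that reuses the decoder from above to print output as it streams in ({ stream: true } keeps multi-byte characters intact across chunk boundaries):
for await (const chunk of p.stdout.readable) {
  // decode the raw bytes of each chunk as they arrive
  console.log(decoder.decode(chunk, { stream: true }));
}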
I have an npm script, which is run like that:
npm run start:local -- -target.location https://192.1.1.1:8052/
The URL param is the local IP of a user.
What I would love to have is to ask users to input this value, because it's different for everybody.
Is it possible? It would be great to do it with vanilla npm scripts.
Simply speaking, an npm script will run the desired command in your shell environment.
In a shell script, the arguments passed can be accessed using $N where N = Position of the argument.
Talking about your case, the command you want to run is
npm run start:local -- -target.location USER_INPUT
USER_INPUT needs to be replaced with the argument that the user has passed. Assuming that the user passes the location as the first argument to the script, it can be accessed using $1.
I have created this gist to demonstrate the same.
As you can see, I have defined start:local to access the first argument and pass it to the start script, which then echoes out the passed-in argument.
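The gist boils down to something like this (a sketch reconstructed from the description above; the echo text is illustrative). npm appends everything after -- to the end of the script line, so a small shell function picks the value up as $1:
"scripts": {
  "start": "echo \"Starting server on:\"",
  "start:local": "f() { npm start -- \"$1\"; }; f"
}
Running npm run start:local -- https://192.1.1.1:8052/ then calls f with the URL as $1.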
UPDATE:
Here is the script for ASKING a value from a user in a prompt format.
Basically, I first ask for user input, store it in a variable, and pass the variable as an argument to npm start.
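Sketched in the same style (the script name and prompt text are illustrative), using the shell's read builtin, the same pattern as the git example further down:
"start:prompt": "echo 'Enter target location:' && read loc && npm run start:local -- -target.location \"$loc\""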
References
Accessing Positional Arguments
Asking User Input
Use readline to get the IP value, then use exec to spawn the process. This is a pure JS solution, and OS-agnostic.
Example:
package.json
"scripts": {
"start": "npm run start:local -- -target.location",
"prompt": "node prompt.js"
},
prompt.js
const { execSync } = require('child_process');
const readline = require('readline');

// run a command synchronously, inheriting this terminal's stdio
const exec = commands => {
  execSync(commands, { stdio: 'inherit', shell: true });
};

const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout
});

rl.question('What is your current ip? example: https://192.168.1.10:9009 ', (ip) => {
  console.log(`Starting server on: ${ip}`);
  exec(`npm run start -- ${ip}`);
  rl.close();
});
Example: if we want to run the three commands below in sequence with user input:
git add .
git commit -m "With git commit message at run time"
git push
Add the command below to your package.json file under scripts:
"gitPush": "git add . && echo 'Enter Commit Message' && read message && git commit -m \"$message\" && git push"
And run it via npm run gitPush.
References: ask-users-to-input-value-for-npm-script
Use Node's readline; it has methods for interactive IO.
If you're trying to ask users to input their response, you could do something like this:
const readline = require("readline");

const reader = readline.createInterface({
  input: process.stdin,
  output: process.stdout
});

// ask a question and resolve with the response,
// falling back to a default value on empty input
const ask = (message, default_value = null) => new Promise(resolve => {
  reader.question(message, (response) => {
    resolve(response.length >= 1 ? response : default_value);
  });
});

(async () => {
  let ip = await ask(`Please enter your public ip: `);
  console.log(ip);
  reader.close(); // close the interface so the process can exit
})();
I am executing my Protractor test suite from my desktop, using a remote Selenium server. I have configured protractor-video-reporter to capture execution in my local Windows environment (using FFmpeg), and when I execute against my local Selenium server, the video capture works fine. But when I execute on remote VMs, it captures my desktop screen instead.
I understand that I need to point the reporter at FFmpeg on the remote machine, but I do not know how to provide the appropriate user credentials so that my automation can invoke the plugin remotely.
My present configuration is as follows:
const VideoReporter = require('protractor-video-reporter');
...
let config = {
  ...
  onPrepare: () => {
    ...
    VideoReporter.prototype.jasmineStarted = function () {
      var self = this;
      if (self.options.singleVideo) {
        var videoPath = path.join(self.options.baseDirectory, 'protractor-specs.mpg');
        self._startScreencast(videoPath);
        if (self.options.createSubtitles) {
          self._subtitles = [];
          self._jasmineStartTime = new Date();
        }
      }
    };
    ...
    jasmine.getEnv().addReporter(new VideoReporter({
      baseDirectory: './test-output/videoreport',
      createSubtitles: false,
      saveSuccessVideos: true,
      singleVideo: true,
      ffmpegCmd: "C:/FFmpeg/bin/ffmpeg.exe", /* Probably some changes needed here */
      ffmpegArgs: [
        '-f', 'gdigrab',
        '-framerate', '30',
        '-video_size', 'wsxga',
        '-i', 'desktop',
        '-q:v', '10',
      ]
    }));
    ...
  }
  ...
}
export { config };
Considering that both execution and video capture have to happen on the remote server, please suggest a suitable solution.
I need the ability to use browserSync with PHP support and some specific URL rewrites. I came up with browser-sync plus the gulp-connect-php package, plus gulp-connect with connect-modrewrite.
Here is my config:
var
  browserSync = require('browser-sync'),
  phpconnect = require('gulp-connect-php'),
  connect = require('gulp-connect'),
  modrewrite = require('connect-modrewrite');

phpconnect.server({ base: 'dist/', port: 8010 }, function () {
  connect.server({
    port: 8001,
    middleware: function () {
      return [
        modrewrite([
          '^/admin/(.*) - [L]',
          '^([^.]*|.*?\.php)$ http://localhost:8010$1 [P,NC]'
        ])
      ];
    }
  })
  browserSync({
    injectChanges: true,
    proxy: '127.0.0.1:8010'
  });
})
This works fine and exactly as I need. The following problem occurs from time to time when I launch it:
[error] You tried to start Browsersync twice! To create multiple instances, use browserSync.create().init()
events.js:141
throw er; // Unhandled 'error' event
^
Error: listen EADDRINUSE :::8001
In other words, browserSync starts BEFORE gulp-connect and grabs the port that gulp-connect should be using, so gulp-connect fails to start.
I installed the npm sleep package and added the following line before launching browserSync:
sleep.sleep(15)
In other words, I added a 15-second delay before launching browserSync.
It works, but I bet there is a more elegant solution.
Please advise.
gulp-connect internally wraps connect, which starts a Node HTTP server. The server emits a 'listening' event once it has started: https://nodejs.org/api/net.html#net_event_listening
var
  browserSync = require('browser-sync'),
  phpconnect = require('gulp-connect-php'),
  connect = require('gulp-connect'),
  modrewrite = require('connect-modrewrite');

phpconnect.server({ base: 'dist/', port: 8010 }, function () {
  var app = connect.server({
    port: 8001,
    middleware: function () {
      return [
        modrewrite([
          '^/admin/(.*) - [L]',
          '^([^.]*|.*?\.php)$ http://localhost:8010$1 [P,NC]'
        ])
      ];
    }
  });
  // only start browserSync once the connect server is actually bound
  app.server.on('listening', function () {
    browserSync({
      injectChanges: true,
      proxy: '127.0.0.1:8010'
    });
  });
});
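This way browserSync is only launched once the connect server has actually bound its port, so the startup race and the intermittent EADDRINUSE go away without any artificial sleep.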
So I've got this scenario where I have a separate web server and MySQL server, and I can only connect to the MySQL server from the web server.
So basically, every time I have to go:
step 1: 'ssh -i ~/somecert.pem ubuntu@1.2.3.4'
step 2: 'mysqldump -u root -p'password' -h 6.7.8.9 database_name > output.sql'
I'm new to gulp and my aim was to create a task that could automate all this, so running one gulp task would automatically deliver me the SQL file.
This would make developers' lives a lot easier, since it would take just one command to download the latest DB dump.
This is where I got so far (gulpfile.js):
////////////////////////////////////////////////////////////////////
// Run: 'gulp download-db' to get latest SQL dump from production //
// File will be put under the 'dumps' folder                      //
////////////////////////////////////////////////////////////////////

// Load stuff
'use strict'
var gulp = require('gulp')
var GulpSSH = require('gulp-ssh')
var fs = require('fs');

// Function to get home path
function getUserHome() {
  return process.env.HOME || process.env.USERPROFILE;
}
var homepath = getUserHome();

///////////////////////////////////////
// SETTINGS (change if needed)       //
///////////////////////////////////////
var config = {
  // SSH connection
  host: '1.2.3.4',
  port: 22,
  username: 'ubuntu',
  //password: '1337p4ssw0rd', // Uncomment if needed
  privateKey: fs.readFileSync(homepath + '/certs/somecert.pem'), // Comment out if using a password instead
  // MySQL connection
  db_host: 'localhost',
  db_name: 'clients_db',
  db_username: 'root',
  db_password: 'dbp4ssw0rd',
}

////////////////////////////////////////////////
// Core script, don't need to touch from here //
////////////////////////////////////////////////

// Set up SSH connector
var gulpSSH = new GulpSSH({
  ignoreErrors: true,
  sshConfig: config
})

// Run the mysqldump
gulp.task('download-db', function () {
  return gulpSSH
    // runs the mysql dump
    .exec(['mysqldump -u ' + config.db_username + ' -p\'' + config.db_password + '\' -h ' + config.db_host + ' ' + config.db_name], { filePath: 'dump.sql' })
    // pipes output into local folder
    .pipe(gulp.dest('dumps'))
})
// Run search/replace "optional"
SSHing into the web server works fine, but I have an issue when trying to get the mysqldump; I'm getting this message:
events.js:85
throw er; // Unhandled 'error' event
^
Error: Warning:
If I try the same mysqldump command manually from the server SSH, I get:
Warning: mysqldump: unknown variable 'loose-local-infile=1'
Followed by the correct mysqldump output.
So I think this warning message is messing up my script. I would like to ignore warnings in cases like this, but I don't know how to do it or whether it's possible.
Also I read that using the password directly in the command line is not really good practice.
Ideally, I would like to have all the config vars loaded from another file, but this is my first gulp task and I'm not really familiar with how I would do that.
Can someone with experience in Gulp orient me towards a good way of getting this thing done? Or do you think I shouldn't be using Gulp for this at all?
Thanks!
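On the side question of loading the config vars from another file: a minimal sketch, assuming a config.json sitting next to gulpfile.js (the filename is illustrative, and the file should stay out of version control since it holds credentials):
// config.json
// {
//   "host": "1.2.3.4", "port": 22, "username": "ubuntu",
//   "db_host": "localhost", "db_name": "clients_db",
//   "db_username": "root", "db_password": "dbp4ssw0rd"
// }

// gulpfile.js — replaces the inline SETTINGS block
var config = require('./config.json');
// the private key still has to be read at runtime
config.privateKey = fs.readFileSync(homepath + '/certs/somecert.pem');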
As I suspected, that warning message was preventing the gulp task from finalizing. I got rid of it by commenting out loose-local-infile=1 in /etc/mysql/my.cnf.
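That is, the offending line in /etc/mysql/my.cnf ends up commented out (the surrounding section may differ per setup):
# loose-local-infile=1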