Using "Remote SSH" in VSCode on a target machine that only allows inbound SSH connections - ssh

Is there a way to use the VSCode Remote SSH extension to interact with a remote host that does not allow outbound internet connections?
Is it possible to download the vscode-server files on another system and copy them to the host?
I read this, but I can't connect the server to the internet.

When you connect to a host, it executes a bash script that wgets or curls a tarball and extracts it into a directory under your home directory. Here's an offline workaround (a condensed command sketch follows the steps).
1. Attempt to connect and let it fail.
2. On the server, get the commit ID:
$ ls ~/.vscode-server/bin
553cfb2c2205db5f15f3ee8395bbd5cf066d357d
3. Download the tarball, replacing $COMMIT_ID with the commit ID from the previous step.
For the stable version:
https://update.code.visualstudio.com/commit:$COMMIT_ID/server-linux-x64/stable
For the insiders version:
https://update.code.visualstudio.com/commit:$COMMIT_ID/server-linux-x64/insider
4. Move the tarball to ~/.vscode-server/bin/$COMMIT_ID/vscode-server-linux-x64.tar.gz
5. Extract the tarball in that directory:
$ cd ~/.vscode-server/bin/$COMMIT_ID
$ tar -xvzf vscode-server-linux-x64.tar.gz --strip-components 1
6. Connect again.
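Condensing the steps into commands (a sketch: the download runs on a machine that does have internet access, and user@target is a placeholder for the offline server):
# on a machine with internet access
COMMIT_ID=553cfb2c2205db5f15f3ee8395bbd5cf066d357d   # hash from step 2
curl -L -o vscode-server-linux-x64.tar.gz \
  "https://update.code.visualstudio.com/commit:$COMMIT_ID/server-linux-x64/stable"
scp vscode-server-linux-x64.tar.gz user@target:~/
# on the target server
COMMIT_ID=553cfb2c2205db5f15f3ee8395bbd5cf066d357d
mkdir -p ~/.vscode-server/bin/$COMMIT_ID
mv ~/vscode-server-linux-x64.tar.gz ~/.vscode-server/bin/$COMMIT_ID/
cd ~/.vscode-server/bin/$COMMIT_ID
tar -xvzf vscode-server-linux-x64.tar.gz --strip-components 1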
You'll still need to install any extensions manually. There's a download button next to each extension in the marketplace. Once you have the .vsix file, you can install it through the GUI with the Install from VSIX option in the extensions manager.
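If you'd rather fetch the .vsix from the command line than use the marketplace download button, the gallery exposes a direct download URL; the pattern below is an assumption based on common usage, with PUBLISHER, NAME, and VERSION as placeholders:
curl -L -o extension.vsix \
  "https://marketplace.visualstudio.com/_apis/public/gallery/publishers/PUBLISHER/vsextensions/NAME/VERSION/vspackage"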
This is kind of a pain and hopefully they improve this process, but if you have a network-based home directory, you only have to do this once.

To find the commit ID, open VS Code -> About:
Version: 1.46.1
Commit: cd9ea6488829f560dc949a8b2fb789f3cdc05f5d
Date: 2020-06-17T21:17:14.222Z
Electron: 7.3.1
Chrome: 78.0.3904.130
Node.js: 12.8.1
V8: 7.8.279.23-electron.0
OS: Darwin x64 17.7.0
$COMMIT_ID = cd9ea6488829f560dc949a8b2fb789f3cdc05f5d
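Alternatively, if the code CLI is on your PATH, the commit hash is the second line of the version output (a sketch; the values shown are the ones from the About box above):
$ code --version
1.46.1
cd9ea6488829f560dc949a8b2fb789f3cdc05f5d
x64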

A new feature is being added to support offline installs.
You can now solve this issue with a new user setting in the Remote - SSH extension. If you enable the setting remote.SSH.allowLocalServerDownload, the extension will install the VS Code Server on the client first and then copy it over to the server via SCP.
Note: this is currently an experimental feature, but it will be turned on by default in the next release.
https://code.visualstudio.com/blogs/2019/10/03/remote-ssh-tips-and-tricks

As a workaround, I have done the following:
Desktop ~/.ssh/config
...
Host *
RemoteForward 54321
...
Remote: ~/bin/wget (a wrapper script; ~/bin is added to PATH via .bashrc)
#!/bin/bash
export LD_LIBRARY_PATH=$HOME/opt/lib/tsocks/
export TSOCKS_CONF_FILE=$HOME/opt/tsocks/tsocks.conf
$HOME/bin/tsocks /usr/bin/wget "$@"
Remote: ~/opt/tsocks/tsocks.conf
server = 127.0.0.1
server_port = 54321
server_type = 5
Note: the tsocks binary has been scp-ed to ~/bin/tsocks, and ~/opt/tsocks/ has been created with libtsocks.so, which is normally stored in /usr/lib64/libtsocks.so.
This is a workaround that allows me to have wget functionality without messing with anything outside my profile (e.g., no root required, even though I have it).
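If your OpenSSH is 7.6 or newer, a single-argument RemoteForward like the one above sets up reverse dynamic (SOCKS) forwarding, so you can check the tunnel from the remote host before relying on the wget wrapper (a sketch; the port is the one configured above):
# on the remote host, while the SSH session from the desktop is open
curl --socks5 127.0.0.1:54321 -o /dev/null -w '%{http_code}\n' https://update.code.visualstudio.com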

Current Version of VS Code: 1.48.2
I just kill the wget process on the server end and let the client download the archive and transfer it to the server. It's quite easy, as below.
Make sure that you set this in settings.json:
"remote.SSH.allowLocalServerDownload": true,
Then execute the shell commands below.
# to find the <pid>
ps aux | grep wget | grep vscode-server
# kill the process
kill -9 <pid>
# then wait for the client downloading and transferring
# optional: If you want to know the progress, just
cd ~/.vscode-server/bin/<commit-id>/
watch -n 1 -d ls -rthl
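If pkill is available, the find-and-kill pair above collapses to one line (a sketch; the pattern matches the same wget process as the grep chain):
pkill -9 -f 'wget.*vscode-server'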

Related

code deployment via zipped file in jenkins

I am new to Jenkins and still taking baby steps to learn it. What I have could be very simple to some people but I couldn't find a straightforward way to do it. I simply want to take source code in a zipped file format and do the following:
copy to remote server in a certain directory
delete the old code
unzip the new code
delete the zipped file
finally start apache web server
I have installed plugins like ssh2, ssh-copy, remote commands, etc., but still cannot achieve what I am looking to do. Any help would be greatly appreciated.
I have a Spring project and build it to get a .war file by Jenkins.
The following shell commands show how to copy the .war to a remote server and to run it on Tomcat.
remote_host=192.168.1.2
tomcat_home=/x/y
# stop web server
ssh root@${remote_host} "sh /root/stop.sh" || echo "something wrong; ignored!"
# delete the old code
ssh root@${remote_host} "rm -rf $tomcat_home/webapps/*"
# copy to remote server in a certain directory
scp $WORKSPACE/build/libs/myapp-test.war root@${remote_host}:$tomcat_home/webapps/myapp.war
# unzip the new code
ssh root@${remote_host} "unzip -o $tomcat_home/webapps/myapp.war -d $tomcat_home/webapps/myapp"
# delete the zipped file
ssh root@${remote_host} "rm -f $tomcat_home/webapps/myapp.war"
# finally start apache web server
ssh root@${remote_host} "sh $tomcat_home/bin/startup.sh"
In my case, I put the commands in a Jenkins job, under Build -- Execute shell -- Command.
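For these ssh/scp calls to run non-interactively from a Jenkins shell step, the user Jenkins runs builds as needs key-based authentication to the remote host; a one-time setup sketch (the address is the example value above):
# run once as the Jenkins build user
ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519
ssh-copy-id root@192.168.1.2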

How to access a folder via SMB protocol from ASP Net Core [duplicate]

I am trying to set up a script that will:
Connect to a Windows share
Using LOAD DATA LOCAL INFILE, upload the two files into their appropriate db tables
Unmount the share
Situation:
I can currently vpnc into this remote machine
Problem:
I cannot mount the share:
mount -t cifs //ip.address/share /mnt/point -o username=u,password=p,port=445
mount error(110): Connection timed out
I am attempting to do this manually first
Remote server is open to port 445
Questions:
Do I even need to vpnc in first?
Do I need to do route add for the remote ip/mask/gw after vpnc?
Thank you!
The mount.cifs file is provided by the samba-client package. This can be installed from the standard CentOS yum repository by running the following command:
yum install samba samba-client cifs-utils
Once installed, you can mount a Windows SMB share on your CentOS server by running the following command:
Syntax:
mount.cifs //SERVER_ADDRESS/SHARE_NAME MOUNT_POINT -o user=USERNAME
SERVER_ADDRESS: Windows system’s IP address or hostname
SHARE_NAME: The name of the shared folder configured on the Windows system
USERNAME: Windows user that has access to this share
MOUNT_POINT: The local mount point on your CentOS server
I am mounting a share from \\10.11.10.26\snaps.
Make a directory under /mnt as the mount point:
mkdir /mnt/mymount
Now I mount the snaps folder from indiafps02; the user name is a domain account (domain Mydomain in this case):
mount.cifs //10.11.10.26/snaps /mnt/mymount -o user=Girish.KG
Now you can see the contents by typing
ls /mnt/mymount
After performing your task, just run the umount command:
umount /mnt/mymount
That's it. You are done.
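To keep the password out of the command line and shell history, mount.cifs can also read it from a credentials file; a sketch reusing the share from the example above (/root/.smbcred is an arbitrary path, and the password value is a placeholder):
cat > /root/.smbcred <<'EOF'
username=Girish.KG
password=YourPasswordHere
EOF
chmod 600 /root/.smbcred
mount.cifs //10.11.10.26/snaps /mnt/mymount -o credentials=/root/.smbcred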
No need to install "samba" and "samba-client", only "cifs-utils", using the command
yum install cifs-utils
After that, on Windows, share the folder you would like to mount in CentOS, if you haven't already ("c:\inetpub\wwwroot" in my case).
Make sure you share it with a specific username whose password you know ("netops" in my case).
Create a directory in CentOS in which you would like to mount the Windows share ("/mnt/cm" in my case).
After that, run this simple command as root:
mount.cifs //10.16.0.160/wwwroot /mnt/cm/ -o user=netops
CentOS will prompt you for the Windows user's password.
You are done.

Installing fzf fuzzy finder offline

I'm behind a firewall and I have the fzf.tar.gz package, which has the contents of the git repo. How can I install fzf offline?
The install command ~/.fzf/install reaches out to github.com. I'm on Red Hat with no internet connection.
https://github.com/junegunn/fzf
This is just what I observed; I can't guarantee I didn't miss anything.
First, clone fzf to $FZF_DIR on an online PC. Then:
I'd suggest executing 'install' on the online PC to get the necessary files:
~/.fzf/bin/fzf -- this one is downloaded by the install script
~/.fzf.bash -- this one is generated by the install script
cp ~/.fzf/bin/fzf $FZF_DIR/bin
Copy $FZF_DIR (with the fzf binary in it) and .fzf.bash to your offline PC.
ln -s $FZF_DIR ~/.fzf
Source .fzf.bash in your .bashrc.
The entire $FZF_DIR is needed because it includes other useful scripts sourced by .fzf.bash.
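Condensed into commands (a sketch: offline-host is a placeholder, and it assumes the home-directory path is the same on both machines so the absolute paths written into .fzf.bash stay valid):
# on the online PC
git clone https://github.com/junegunn/fzf.git ~/.fzf
~/.fzf/install --all        # downloads bin/fzf and generates ~/.fzf.bash
scp -r ~/.fzf offline-host:
scp ~/.fzf.bash offline-host:
# on the offline PC, make sure .bashrc sources it
echo '[ -f ~/.fzf.bash ] && source ~/.fzf.bash' >> ~/.bashrc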

Subversion export/checkout in Dockerfile without printing the password on screen

I want to write a Dockerfile which exports a directory from a remote Subversion repository into the build context so I can work with these files in subsequent commands. The repository is secured with user/password authentication.
That Dockerfile could look like this:
# base image
FROM ubuntu
# install subversion client
RUN apt-get -y update && apt-get install -y subversion
# export my repository
RUN svn export --username=myUserName --password=myPassword http://subversion.myserver.com/path/to/directory
# further commands, e.g. on container start run a file just downloaded from the repository
CMD ["/bin/bash", "path/to/file.sh"]
However, this has the drawback of printing my username and password on the screen or any logfile where the stdout is directed, as in Step 2 : RUN svn export --username=myUserName --password=myPassword http://subversion.myserver.com/path/to/directory. In my case, this is a Jenkins build log which is also accessible by other people who are not supposed to see the credentials.
What would be the easiest way to hide the echo of username and password in the output?
So far, I have not found any way to execute RUN commands in a Dockerfile silently when building the image. Could the password be imported from somewhere else and attached to the command beforehand, so it doesn't have to be printed anymore? Or are there any methods for password-less authentication in Subversion that would work in the Dockerfile context (i.e., that can be set up without interaction)?
The Subversion Server is running remotely in my company and not on my local machine or the Docker host. To my knowledge, I have no access to it except for accessing my repository via username/password authentication, so copying any key files as root to some server folders might be difficult.
The Dockerfile RUN command is always executed and cached when the docker image is built, so the credentials svn needs to authenticate must be provided at build time. You can instead run the svn export when docker run is executed, to avoid this kind of problem. To do that, create a bash script, declare it as the docker entrypoint, and pass environment variables for the username and password. Example:
# base image
FROM ubuntu
ENV REPOSITORY_URL http://subversion.myserver.com/path/to/directory
# install subversion client
RUN apt-get -y update && apt-get install -y subversion
# make it executable before you ADD it here, otherwise docker will complain
ADD docker-entrypoint.sh /entrypoint.sh
ENTRYPOINT /entrypoint.sh
docker-entrypoint.sh
#!/bin/bash
# maybe add some validation here that $REPO_USER and $REPO_PASSWORD exist
svn export --username="$REPO_USER" --password="$REPO_PASSWORD" "$REPOSITORY_URL"
# continue execution
path/to/file.sh
Run your image:
docker run -e REPO_USER=jane -e REPO_PASSWORD=secret your/image
Or you can put the variables in a file:
.svn-credentials
REPO_USER=jane
REPO_PASSWORD=secret
Then run:
docker run --env-file .svn-credentials your/image
Remove the .svn-credentials file when you're done.
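If the export really must happen at docker build time, newer Docker releases (18.09+, with BuildKit enabled) support build secrets that are not written to image layers or to the build log; a sketch reusing the names from the example above (svnpass.txt is a local file holding only the password):
# syntax=docker/dockerfile:1
FROM ubuntu
RUN apt-get -y update && apt-get install -y subversion
# the secret is mounted at /run/secrets/svnpass for this step only
RUN --mount=type=secret,id=svnpass \
    svn export --username=jane --password="$(cat /run/secrets/svnpass)" \
    http://subversion.myserver.com/path/to/directory
Build it with:
DOCKER_BUILDKIT=1 docker build --secret id=svnpass,src=svnpass.txt .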
Maybe using SVN with SSH is a solution for you? You could generate a public/private key pair. The private key could be added to the image whereas the public key gets added to the server.
For more details you could have a look at this stackoverflow question.
One solution is to ADD the entire SVN directory you previously checked out on your builder's file system (or added as an svn:externals if your Dockerfile is itself in an SVN repository, like this: svn propset svn:externals 'external_svn_directory http://subversion.myserver.com/path/to/directory' ., then do an svn up).
Then in your Dockerfile you can simply have this:
ADD external_svn_directory /tmp/external_svn_directory
RUN svn export /tmp/external_svn_directory /path/where/to/export/to
RUN rm -rf /tmp/external_svn_directory
Subversion stores authentication details (unless this is disabled in the configuration) on the client side and reuses the stored username/password for subsequent operations on the same URL.
Thus, you only have to run a (successful) svn export with username/password once in the Dockerfile, and let SVN use the cached credentials (remove the auth options from the command line) afterwards.
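Assuming credential caching is enabled in the client configuration, that looks roughly like this in a Dockerfile (anotherDirectory is a hypothetical placeholder; note the credentials still appear once in the build output, which is the drawback discussed above):
# the first export caches the credentials under ~/.subversion in the image
RUN svn export --username=myUserName --password=myPassword http://subversion.myserver.com/path/to/directory
# later operations against the same server can omit them
RUN svn export http://subversion.myserver.com/path/to/anotherDirectory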

Enabling SSH compression in Sourcetree Windows for a mercurial repository

I am on Windows 7 - Sourcetree 1.4.1.0 - Embedded Mercurial 2.6.1
Target is a private mercurial repo hosted on bitbucket.
How do I enable SSH compression so that my transactions are faster?
A quick Google search yielded this document:
Edit the Mercurial global configuration file (~/.hgrc). Add the following line to the [ui] section:
ssh = ssh -C
When you are done the file should look similar to the following:
[ui]
# Name data to appear in commits
username = Mary Anthony <manthony@atlassian.com>
ssh = ssh -C
On Windows, the Mercurial settings file is located here:
C:\Users\{username}\AppData\Local\Atlassian\SourceTree\hg_local\Mercurial.ini
The contents of the file are actually not to be changed, as its header explains:
; System-wide Mercurial config file.
;
; !!! Do Not Edit This File !!!
;
; This file will be replaced by the installer on every upgrade.
; Editing this file can cause strange side effects on Vista.
;
; http://bitbucket.org/tortoisehg/stable/issue/135
;
; To change settings you see in this file, override (or enable) them in
; your user Mercurial.ini file, where USERNAME is your Windows user name:
;
; XP or older - C:\Documents and Settings\USERNAME\Mercurial.ini
; Vista or later - C:\Users\USERNAME\Mercurial.ini
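So on Windows the override goes into the user-level Mercurial.ini named above; a minimal sketch mirroring the [ui] setting from the Atlassian document:
[ui]
ssh = ssh -C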
I don't have a Mac, so I can't test this, but this Atlassian answer states that the location of this file for Mac is:
/Applications/SourceTree.app/Contents/Resources/mercurial_local/hg_local/
In my case, I'm using TortoiseHg, but the concept should be the same.
Here is my original c:\somerepo\.hg\hgrc file:
[paths]
default = ssh://hg@bitbucket.org/someuser/somerepo
So what's happening with ssh? Let's debug a pull statement, hg pull --debug on the command-line. I noticed it is running C:\Program Files\TortoiseHg\lib\TortoisePlink.exe instead of ssh to make the call:
PS C:\somerepo> hg pull --debug
pulling from ssh://hg@bitbucket.org/someuser/somerepo
running "C:\Program Files\TortoiseHg\lib\TortoisePlink.exe" -ssh -2 hg@bitbucket.org "hg -R someuser/somerepo serve --stdio"
sending hello command
sending between command
abort: no suitable response from remote hg!
So let's just reuse that call, adding compression (yay!), non-interactive (batch) mode, and our key:
[paths]
default = ssh://hg@bitbucket.org/someuser/somerepo
[ui]
ssh = "C:\Program Files\TortoiseHg\lib\TortoisePlink.exe" -ssh -2 -C -batch -i "c:\keys\somekey.ppk"