I am still rather new to scripting and PowerShell, but I have run into a common problem with my scripts. In my deployment environment I mainly deal with Windows 7 and 8, and I deploy several programs through a batch file on a USB drive. I understand that everything could be pushed over the network, but I do not have the permissions for that. With the USB method I have to write one path for E: and another for D:, and then hope that the USB drive received one of those letters when it was inserted, or the whole script fails.
Here is an example:
Title Installing "Program"
Echo (Step # of #)
Echo Installing "Program". . .
Echo Please Wait. . .
Echo Attempting Install from D:\
pushd D:\Path
Echo Attempting Install from E:\
pushd E:\Path
Echo Complete
cls
Is there a way for me to condense the path to one line that will pick up D:, E:, or whatever other drive letter the stick receives? It must also work on other flash drives. Would something like .\path work?
This depends on the version of PowerShell that you are using. Newer versions will give you the path of the script in $PSCommandPath, and you can extract the drive letter using $PSCommandPath[0].
For older versions, you can get the path using $MyInvocation.MyCommand.Path, and extract the drive letter with
split-path $MyInvocation.MyCommand.Path -Qualifier
I would recommend the second method as it will be more portable between different machines that may only have the older version.
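For example, a minimal sketch of that second approach (the folder name Path below is just a placeholder for wherever your installers live on the drive):
# Get the qualifier (drive letter plus colon, e.g. E:) of the running script
$drive = Split-Path -Path $MyInvocation.MyCommand.Path -Qualifier
# Change into the installer folder on that drive; 'Path' is a placeholder
Set-Location (Join-Path -Path $drive -ChildPath 'Path')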
If you are running this from within a batch file instead of a PowerShell script, then %~d0 will give you the drive of the script.
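In a batch file that would look something like this (again, Path is only a placeholder folder on the stick):
rem %~d0 expands to the drive the batch file was started from, e.g. D: or E:
pushd "%~d0\Path"
rem ... run the installer here ...
popd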
At first, I was trying to fix a problem with the npm command, so I added
[interop]
appendWindowsPath = false
to /etc/wsl.conf
That worked, but then another problem appeared.
When I type code .
Command 'code' not found, did you mean:
command 'node' from deb nodejs (12.22.9~dfsg-1ubuntu3)
command 'cdde' from deb cdde (0.3.1-1build1)
command 'ode' from deb plotutils (2.6-11)
command 'tcode' from deb emboss (6.6.0+dfsg-11ubuntu1)
command 'cde' from deb cde (0.1+git9-g551e54d-1.2)
Try: sudo apt install <deb name>
The above error message appears.
I tried the following command
export PATH=$PATH:"/mnt/c/Users/%USERNAME%/AppData/Local/Programs/Microsoft VS Code/bin"
It also works properly.
Whenever I restart WSL, the npm command still works, but the code command stops working again.
What should I do to fix the problem?
Thanks in advance!
My main suggestion would be to not use appendWindowsPath = false to fix your NPM problem. That's like using a sledgehammer as a flyswatter. As I said in this answer:
Please do not follow the recommendations (like this answer) to completely remove all Windows paths from WSL, as that will severely limit your ability to run Windows applications in WSL (one of its great features).
You'll also lose access to the ability to run PowerShell scripts and commands in WSL easily. You won't have direct access to wsl.exe itself from inside WSL (which comes in handy).
You can type the full paths to these commands, of course, but most instructions and other answers you find here are going to assume that you've left the Windows path intact.
Instead, figure out where npm is installed in your WSL distribution and then determine why it is further toward the end of the PATH than your Windows directories. Windows paths are added at the end of the Linux PATH for a reason. If something in your startup files is adding to the path, it should put it at the beginning, so it has precedence. E.g.:
export PATH="newdir:$PATH"
Note that I'm not saying that you should change your export statement above since, as mentioned, that Windows path would normally come at the end anyway. It's really not going to matter unless you put another code executable somewhere else in your path.
Whenever I restart WSL, the npm command still works, but the code command stops working again.
If you do want the "quick and dirty" (not recommended) solution, then you can simply add that export command that "makes it work" to your ~/.bashrc. That file is processed each time the Bash shell starts interactively.
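For example (the Windows user name below is a placeholder; substitute your own, since %USERNAME% is not expanded by Bash):
# Append the PATH tweak to ~/.bashrc so every new interactive Bash shell picks it up
echo 'export PATH="$PATH:/mnt/c/Users/<YourWindowsUser>/AppData/Local/Programs/Microsoft VS Code/bin"' >> ~/.bashrc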
I want to copy files remotely in a script from windows machine to Linux machine.
On the Linux machine I run the below command
scp user@remotehost:\D\mySrcCode\somefile.cpp .
I am getting an error
scp: DmySrcCodesomefile.cpp: No such file or directory
The file somefile.cpp is located at D:\mySrcCode on windows side.
Any ideas on what I am missing ?
You probably should quote or backslash the backslashes in the path.
If your interactive shell is GNU bash, read its §3.1.2 quoting chapter.
You could try:
scp user@remotehost:\\D\\mySrcCode\\somefile.cpp .
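or quote the whole remote argument so your local shell leaves the backslashes alone (whether the SSH server on the Windows side then resolves that path is a separate question):
scp 'user@remotehost:\D\mySrcCode\somefile.cpp' .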
Consider also using other (more appropriate) tools, like rsync or git.
You might also use exec(3) from your C program to run /usr/bin/ssh, or look into libssh.
You could change your login shell (see chsh(1), /etc/shells, and shells(5)) to a more user-friendly alternative such as zsh or fish. They can give you some warnings (depending on how they are configured or used) or some autocompletion (with the Tab key).
PS. Your problem is not ssh-specific. You might replace scp with echo to understand it better.
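For instance, letting the local shell show you what it actually passes along:
# The unquoted backslashes are consumed by the local shell before scp ever sees them
echo user@remotehost:\D\mySrcCode\somefile.cpp
# prints: user@remotehost:DmySrcCodesomefile.cpp
which is exactly the mangled name in the error message.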
docker-machine has an scp command, but docker-cloud doesn't seem to have any way to transfer a file from my local machine to the cloud container or vice-versa.
I'm submitting an answer below that I've finally figured out (in hopes that it will help someone), but I'd love to hear better answers if there are any!
(I realize docker-cloud is going away, but perhaps this will be helpful for other cloud platforms as well)
To transfer a file from your local machine to a docker-cloud instance that is running linux with the tee command available:
docker-cloud container exec id12345 tee filename.ext < file_to_copy.ext > /dev/null
(you'll want to redirect output to /dev/null as shown unless you want the entire contents of the file to be echoed to the terminal... twice)
Transferring a file to your local machine is somewhat easier:
docker-cloud container exec id12345 cat file_to_copy.ext > filename.ext
Note: I'm not sure this works for binary files, and it can even cause issues with linefeed characters in text files, based on terminal settings, etc. - but it's the best answer I've got short of using an external service like https://transfer.sh
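If binary content is a concern, one possible workaround (a sketch only; it assumes base64 is available both in the container and locally, and that docker-cloud container exec passes the output through cleanly enough to decode) is to encode on one side and decode on the other:
# Encode in the container, decode locally; -i ignores stray carriage returns the remote terminal may add
docker-cloud container exec id12345 base64 file_to_copy.bin | base64 -d -i > filename.bin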
I am trying to run a GNU make file with multiple jobs.
When I try executing 'make.exe -r -j3', I receive the following two errors:
make.exe: Do not specify -j or --jobs if sh.exe is not available.
make.exe: Resetting make for single job mode.
Do I have to add ' $(SH) -c' somewhere in the makefile? If so, where?
The error message suggests that make cannot find sh.exe. The file names indicate you are probably on Cygwin. I would investigate setting the PATH to include the location of sh.exe, or defining the value of SHELL as the name (or, even, full path) of your shell.
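For example, a sketch of the second option in the makefile itself, assuming a typical Cygwin install location (adjust the path to wherever your sh.exe actually lives):
# Tell make which shell to run recipes with; C:/cygwin/bin/sh.exe is only a guess at your install path
SHELL := C:/cygwin/bin/sh.exe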
Are you running this on Windows (more specifically, in the Windows shell)? If you are, you might want to read this:
http://www.gnu.org/software/make/manual/make.html#Parallel
more specifically:
On MS-DOS, the ‘-j’ option has no effect, since that system doesn't support multi-processing.
Once again, assuming you're running on Windows, you should get MinGW or Cygwin.
I have access to a remote Solaris terminal which crashes occasionally, and I have to ask someone with physical access to boot the machine up, which it does successfully. I would like to know which tools/files should I look at to find out the cause of the crash so that I can make the necessary configuration changes and avoid it in the future.
What tools you can use will depend on what version of Solaris you have running and what the actual problem is. The first thing to do is check the system console (which it sounds like you don't have access to) and the /var/adm/messages file. This file is updated with system messages, and the newest appear at the end.
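For example, to see the most recent entries (the line count is arbitrary):
# Show the last 50 lines of the system message log
tail -50 /var/adm/messages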
Next, you can look for a system core file. If a core file is created, it would be in /var/crash/hostname where "hostname" is the name of the machine.
If you have an actual core file in the /var/crash/hostname directory, this set of commands will give you a good string to search Google with:
# cd /var/crash/hostname
Replace "hostname" with the hostname of your machine.
# mdb -k unix.0 vmcore.0
If you have multiple core files, select the most recent version.
> ::status
This should give you a panic message; cut and paste that into Google and see what you can find.
For more core file analysis read this:
http://cuddletech.com/blog/pivot/entry.php?id=965