I have an intermittent issue when distributing indexes :( All servers are Windows Server 2008.
I have two servers to distribute to and one of them has failed twice with this error:
INFO: [MDEXHost1] Starting shell utility 'move_dgraph-input_to_dgraph-input-old'.
10-Jun-2015 06:08:36 com.endeca.soleng.eac.toolkit.script.Script runBeanShellScript
SEVERE: Utility 'move_dgraph-input_to_dgraph-input-old' failed.
With a bit of further digging I've found this error in a log file in the PlatformServices\workspace\logs\shell folder:
Failed to move D:\Firebird\config\script\..\..\.\data\dgraphs\Dgraph1\dgraph_input to
D:\Firebird\config\script\..\..\.\data\dgraphs\Dgraph1\dgraph_input_old: No such file or directory at -e line 1.
The state of the server is that it has a dgraph_input_new folder but it's struggling to create the dgraph_input_old folder. The dgraph_input folder does exist, so the 'No such file or directory' is interesting.
The server has plenty of disk space for the operation and as it's intermittent I don't think it's file/folder permissions (otherwise it would fail all the time). I've even asked for on-access virus scanning to be disabled for those folders in case our virus scanner was locking files/folders.
I'm struggling to come up with a resolution to the issue, HALP!
EDIT: The forge process did stop the dgraph but the Tomcat6 process is still running. Is that normal? Could Tomcat be locking the folder?
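One way to check who actually holds the folder (assuming you can run the Sysinternals tools on the box) is Handle; searching by a name fragment lists every process with an open handle under that path:
handle.exe dgraph_input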
EDIT: The task to move the folder is a bit of Perl that looks like this:
perl.exe -e "use strict; use File::Spec; use File::Copy; use File::Glob qw/:glob/; my $source = 'D:\Firebird\config\script\..\..\.\data\dgraphs\Dgraph1\dgraph_input'; $source =~ s/[\\\/]+$//; my @sources = bsd_glob($source); foreach my $file (@sources) { my @fromPath = File::Spec->splitdir($file); if (scalar @fromPath eq 0) { die \"Failed to split path: $!\"; } my $fromRelative = $fromPath[scalar @fromPath - 1]; my $toFile = 'D:\Firebird\config\script\..\..\.\data\dgraphs\Dgraph1\dgraph_input_old'; if ( -d $toFile ) { $toFile = File::Spec->catdir($toFile, $fromRelative); } my $res = move($file, $toFile); if (! $res) { die \"Failed to move $file to $toFile: $!\"; } }"
EDIT: It seems to be a plain permissions issue; I can't rename the folder without elevating myself to an administrator. The service is running as a user who is in the Administrators group.
What could happen to make this folder admin only?
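For what it's worth, dumping the folder's ACL right after a failure should show what changed (this is the resolved form of the path from the log):
icacls D:\Firebird\data\dgraphs\Dgraph1\dgraph_input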
I know this question dates back 3 years, but I recently experienced the same issue and thought this could help others.
The logs were interesting, as GogLlundain pointed out.
The way to solve this is:
1. Stop the MDEX server in Workbench, which will in parallel kill the dgraph process too.
2. If you open Task Manager on the server where the MDEX is defined, you will find two dgraph.exe processes running.
3. Kill the older task (i.e. dgraph.exe), then run the baseline script. Your process will run smoothly.
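If you would rather script the kill than hunt in Task Manager, a PowerShell one-liner along these lines should do it (assuming the image name really is dgraph.exe; run it on the MDEX host):
powershell -Command "Get-Process dgraph | Sort-Object StartTime | Select-Object -First 1 | Stop-Process -Force"
Sort-Object StartTime puts the oldest instance first, which is the one left over from the previous run.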
I am getting this error in VS Code and have no clue why it fails.
[15:14:59.543] Log Level: 2
[15:14:59.555] remote-ssh#0.51.0
[15:14:59.555] win32 x64
[15:14:59.560] SSH Resolver called for "ssh-remote+xx.xx.xx.xx", attempt 1
[15:14:59.561] SSH Resolver called for host: xx.xx.xx.xx
[15:14:59.561] Setting up SSH remote "xx.xx.xx.xx"
[15:14:59.621] Using commit id "0ba0ca52957102ca3527cf479571617f0de6ed50" and quality "stable" for server
[15:14:59.624] Install and start server if needed
[15:15:01.964] getPlatformForHost was canceled
[15:15:01.965] Resolver error: Connecting was canceled
[15:15:01.973] ------
Add one key to your settings.json as below. Remember to replace $remote_server_name with your own.
"remote.SSH.remotePlatform": {
"$remote_server_name": "linux"
}
Menu: File -> Preferences -> Settings
Or click the icon to open settings.json.
In the dialog box where you have typed user@host, type/select Linux/Windows/etc. depending on what you are using, then type/select Continue, then type the password for the remote session.
For those getting this error on Windows: Check if you have multiple ssh clients installed.
How I solved it was by adding my ssh-configuration to ALL ssh-config files.
In my case I had one in
C:\Users\USER_NAME\.ssh\config (this is the one that the Remote - SSH extension used to give me connection options)
and another in C:\ProgramData\ssh\ssh_config.
After adding my ssh config settings to both, I got the prompt to select the virtual host's OS. I tried editing the settings.json file directly, but I think it gets confused because of the multiple ssh configurations.
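For reference, the entry I duplicated into both files looks roughly like this (the host alias, user name, and key path here are placeholders):
Host myserver
    HostName xx.xx.xx.xx
    User myuser
    IdentityFile ~/.ssh/id_rsa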
P.S.
Tested it for both private-key and password-enabled connections and it works with either.
I got a similar problem, but the error logs were bigger. Before that, I had deleted Python and reinstalled it; perhaps this led to the problem. Just reinstalling the "Remote - SSH" extension in VS Code worked for me.
In my case there were two files that looked like
vscode-remote-lock.<user>.<xxx>
vscode-remote-lock.<user>.<xxx>.target
where <user> was my remote user name and <xxx> the VS Code Remote Server build hash.
These two files were on the remote server in the folder:
/run/user/1000/
I deleted both files and then VS Code came up right away. I have encountered this a few times now. VS Code Remote Server install is not very robust. I use it on about 7 remote machines and every once in a while something goes awry and it cannot recover from simple errors and gets stuck in installation loops.
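On my machines the cleanup amounts to deleting both lock files in one go (the 1000 is my user id; yours may differ):
rm /run/user/1000/vscode-remote-lock.*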
This trick only works if there is a valid ~/.vscode-server on the remote machine with a hash that matches your local VS Code installation.
If you got here because you were trying to install VS Code in the first place and for whatever reason VS Code had issues with the remote installation, I highly recommend installing it manually by downloading and extracting the tar file to the remote machine directly.
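As a sketch, assuming the commit hash from the log above matches your local VS Code build (check Help -> About), the manual install on a Linux remote looks something like this:
wget -O vscode-server.tar.gz https://update.code.visualstudio.com/commit:0ba0ca52957102ca3527cf479571617f0de6ed50/server-linux-x64/stable
mkdir -p ~/.vscode-server/bin/0ba0ca52957102ca3527cf479571617f0de6ed50
tar -xzf vscode-server.tar.gz -C ~/.vscode-server/bin/0ba0ca52957102ca3527cf479571617f0de6ed50 --strip-components 1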
I have tried playing with the setting "Remote.SSH: Use Flock" and other tricks posted on Stack Overflow, but none of these work for me whenever I have remote installation issues. I cannot figure out why a smooth remote installation is not possible on some machines, even when all of my ssh keys and remote IDs have been copied and tested from both the Windows command line and inside a WSL Ubuntu instance.
If VS Code Remote Server installation had slightly better error logic and better error messages none of us would be wasting hours doing this simple task.
I was getting the exact same error as the original poster, and yet none of the other answers addressed my issue.
Steps to reproduce are very easy.
Create a Dockerfile.
My Dockerfile has many more lines, but I have trimmed them so we can focus on the source of the problem. That said, these two lines alone (without anything more) reproduce the problem:
FROM microsoft/iis
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue'; $VerbosePreference = 'Continue'; "]
Run docker build . and you get hcsshim::PrepareLayer - failed failed in Win32: Función incorrecta. (0x1) ("Función incorrecta" is Spanish for "Incorrect function").
Windows 10 Pro 1909 (but it happened too in 1903)
Docker version: 2.1.0.5
Engine: 19.03.5
Machine: 0.16.2
I have found the solution to the problem.
Reading through the whole https://github.com/docker/for-win/issues/3884 bug thread, some have found a simple solution: rename C:\Windows\System32\drivers\cbfsconnect2017.sys so it isn't loaded on the next boot.
Disabling that driver let me do a docker build with Windows containers for the first time in almost a year.
In my case Box Sync was the one using that driver.
EDIT: @GustavoTM found that pCloud causes the same problem.
EDIT2: @VonC has noticed that some people in the GitHub issue have solved it by deleting another file: C:\Windows\System32\drivers\cbfs6.sys. I haven't tried that, but I mention it in case it helps others.
The good thing is that I don't need to uninstall Box, but only rename that file.
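From an elevated command prompt the rename is a one-liner (any backup name will do, so you can restore it later):
ren C:\Windows\System32\drivers\cbfsconnect2017.sys cbfsconnect2017.sys.bak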
This is still an issue (still open) with Windows 10.
It looks like uninstalling cloud storage providers with file system filters (Dropbox, Box, etc.) is a workaround for some users.
Uninstall cloud storage providers or virus scanners; if you identify which one is causing the problem, please share it in https://github.com/docker/for-win/issues/3884
In my case the problem was similar, but the file cbfs6.sys was left behind by an uninstalled application, Jungle Disk, somewhere in the folder c:\Program Files\Jungle Disk .... It is part of the Callback File System signed by EldoS Corporation.
The folder could only be renamed, not deleted directly. So I deleted it immediately after a PC restart, before running Docker; it could also be deleted during a Docker service restart.
I have a self-hosted ASP.NET Core app deployed and in use in an enterprise environment on Windows Server 2012.
I am looking for a way to automate the update process. I currently do this through a bat file but keep getting Windows file-lock errors where files cannot be deleted. The process I follow in the bat file is as follows:
1. Kill the dotnet core process for the web app.
2. Clear the directory (after sleeping for a couple of seconds).
3. Copy the updates over.
4. Restart the web app.
I am getting the errors in step 2, where I try to clear out the existing directory, which still has file locks even though I have killed the process: "Cannot delete output file - access is denied".
My question is how can I upgrade the self contained asp.net core web app in place and avoid the file locks? If the site is offline for a few seconds it is not an issue.
Thanks
There are several reasons I can think of why deleting the directory gives access-denied errors:
1. Your process isn't actually stopped yet. You can use PowerShell to wait until the process has stopped (or check whether it has stopped and otherwise wait 3 more seconds).
2. Another process is still running in this folder (maybe even a command line, or explorer.exe opened in the folder).
3. You need admin rights to delete this folder.
4. The bat file you are executing runs from this directory and is itself locking the directory.
Try one of the following:
PowerShell Stop-Service: it should wait until the service has really stopped.
PowerShell Wait-Process: waits until a process has stopped; you can call it directly after Stop-Process.
For example, run PowerShell from the command line to wait like this:
powershell -Command "Wait-Process -Name MyProcess"
(Warning: you might run into ExecutionPolicy problems.)
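Putting the wait together with the rest of the bat file, a minimal sketch could look like this (the process name and paths are placeholders):
powershell -Command "Stop-Process -Name MyWebApp -Force -ErrorAction SilentlyContinue; Wait-Process -Name MyWebApp -ErrorAction SilentlyContinue"
rd /s /q C:\apps\MyWebApp
xcopy /e /i /y C:\deploy\MyWebApp C:\apps\MyWebApp
start "" C:\apps\MyWebApp\MyWebApp.exe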
Tip
Use msdeploy; you can remotely execute commands and deploy your application.
You can use pre- and post-scripts (to stop and start the app), and msdeploy itself will sync the folder/directory for you.
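For example, a single msdeploy call can run the stop command, sync the folder, and start the app again (the server name, service name, and paths below are placeholders):
msdeploy -verb:sync -source:dirPath="C:\deploy\MyWebApp" -dest:dirPath="C:\apps\MyWebApp",computerName=MYSERVER -preSync:runCommand="net stop MyWebAppService" -postSync:runCommand="net start MyWebAppService"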
I am using the 7-Zip standalone .exe to unzip a file, using the Execute Process task. I have tested this over and over again on multiple machines and I know it works (at least in debug mode/Visual Studio). I have uploaded the package to the server and created a job that calls said package from the Package Store. The package is not able to find the .exe no matter where I put it.
My first thought was to put the .exe on the C:\ drive, which failed. I have also failed in my attempts to place the .exe in a network location that the account the package runs under has full control over.
Basically, has anybody else had issues getting the Execute Process Task to find an executable when the package is uploaded to the server?
The error message is
Can't find 7za.exe in directory C:\7zip
I'll risk a downvote for being wrong, but I believe you have a permissions issue.
You say it runs fine on other servers from BIDS; try it without BIDS. Call it from a command line on a box where it works:
dtexec.exe /file C:\HereComesTheUnzipper.dtsx
If that works, then repeat the step on the troublesome server. RDC into the box and try again:
dtexec.exe /ser localhost /sq HereComesTheUnzipper
If that still works, then you are looking at an issue with the job. What account is the SQL Agent service running as? Is the SSIS job step running under a particular set of credentials? If so, is it a SQL Server login (which wouldn't map to anything on the physical box)? Whatever your answer is, the resolution will be to ensure the account has access to the following (an example grant comes after this list):
7z.exe
whatever scratch area 7zip may use while unpacking files (I assume %temp%)
the output folder (C:\bin\7z.exe -e e:\data\MyThing.7z)
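If it does turn out to be a missing permission, granting the agent account read/execute on the 7-Zip folder is quick (the account name here is a placeholder):
icacls C:\7zip /grant "DOMAIN\SqlAgentAccount":(OI)(CI)RX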
I have a .sh script which glues together many other scripts, called via JSch's ChannelExec from a Windows application:
Channel channel = session.openChannel("exec");
((ChannelExec) channel).setCommand("/foo/bar/foobar.sh");
channel.connect();
If I run the command like "nohup /foo/bar/foobar.sh >> /path/to/foo.log &", all the long-running jobs (database operations, file processing, etc.) seem to get lost.
Checking the log file, I only find the echo output (before and after a long-running operation, calculating running time, etc.).
I checked the permissions and $PATH, and added source /etc/profile to my .sh, yet none of these worked.
But when I run the command normally (a synchronous run that prints all echo output to my Java client on Windows), everything goes well.
It's a really specific problem. Hope someone with experience can help me out.
Thanks in advance.
Han
Solved!
A different issue that often arises in this situation is that ssh is refusing to log off ("hangs"), since it refuses to lose any data from/to the background job(s). This problem can also be overcome by redirecting all three I/O streams.
from http://en.wikipedia.org/wiki/Nohup
My problem was that psql and pg_bulkload print their output to the error stream.
In my script, I didn't redirect the error stream.
Everything went fine once I also redirected the error stream to the same log file:
nohup foo.sh > log.log 2>&1 &
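Applied to the JSch call from the question, that means:
((ChannelExec) channel).setCommand("nohup /foo/bar/foobar.sh > /path/to/foo.log 2>&1 &");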
Thanks to Atsuhiko Yamanaka, who created the great JSch library, and to Paŭlo Ebermann for the documentation.