How to delete vSphere files with PowerCLI?

RVTools outputs a vHealth report that lists "zombie" (orphaned) files and shows them in the format [DatastoreName] VMfolder/virtualDiskName.vmdk. I can go to the GUI and delete the file and it's fine. I've been doing that for years. But, I want to figure out a command that can take input like that and delete the file so that I can do them all quickly as I have probably 100 to do. Can anyone point me in the right direction?
I tried looking into Get-VDisk and Remove-VDisk, but those don't seem to be the right commands. I can't even figure out what input Get-VDisk is looking for.
Get-VDisk -Name 'MyDisk' -Datastore $ds
What is "MyDisk"!? There is nothing more than that to explain it in their documentation.
Also, notably, I am using Connect-VIServer with a connection to vCenter Server, so I assume that will work somehow. If I have to specify a particular ESXi host, then I would have to do much more work to figure out which one to point to.
edit: I tried Get-HardDisk and Remove-HardDisk which almost worked, but then it said, "The requested operation is only supported for devices attached to VM." And of course, the point is that it is not attached to a VM. So, next.

You can use New-PSDrive to map a VMFS datastore as a drive, like so:
New-PSDrive -Name TgtDS -Location (Get-Datastore MyVolName) -PSProvider VimDatastore -Root '/'
Then to delete /vmfs/volumes/MyVolName/myFolder/myfile.ext
del TgtDS:\myFolder\myfile.ext
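If your input is the RVTools zombie format, a small wrapper can split each entry into the datastore name and the relative path before deleting. A minimal sketch, assuming entries look exactly like [DatastoreName] VMfolder/virtualDiskName.vmdk (the regex and the TgtDS drive name are my own choices, not anything RVTools or PowerCLI mandates):
# Hypothetical: split an RVTools zombie entry, map the datastore, delete the file
$zombie = '[DatastoreName] VMfolder/virtualDiskName.vmdk'
if ($zombie -match '^\[(?<ds>[^\]]+)\]\s+(?<path>.+)$') {
    New-PSDrive -Name TgtDS -Location (Get-Datastore $Matches['ds']) -PSProvider VimDatastore -Root '/' | Out-Null
    # convert the forward-slash datastore path into a PSDrive path and delete it
    Remove-Item ('TgtDS:\' + ($Matches['path'] -replace '/', '\')) -Confirm:$false
    Remove-PSDrive -Name TgtDS
}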
I'm not familiar with Get-VDisk and am also not finding much documentation; the parent topic in the docs suggests it's SPBM-related.
Edit: another approach, following "Get-ChildItem returns wrong path on VMware datastore". On the orphaned .vmdk results of
Get-ChildItem -Path vmstores:\vCenterHostName.domain.tld#443\DCName\DatastoreName\Folder
Remove-Item removed the files from my datastore, despite throwing this exception: "Operation is not valid due to the current state of the object." Neither approach is suitable for a mounted .vmdk; removing orphaned ones has not caused problems in my environment.
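To work through a longer list, the same vmstores: path can be built in a loop. A sketch only, assuming a text file zombies.txt with one RVTools entry per line and reusing the vCenter/datacenter names from above; start with -WhatIf and drop it once the output looks right:
# Hypothetical: zombies.txt contains lines like [DatastoreName] VMfolder/virtualDiskName.vmdk
$root = 'vmstores:\vCenterHostName.domain.tld#443\DCName'
Get-Content zombies.txt | ForEach-Object {
    if ($_ -match '^\[(?<ds>[^\]]+)\]\s+(?<path>.+)$') {
        Remove-Item "$root\$($Matches['ds'])\$($Matches['path'] -replace '/', '\')" -WhatIf
    }
}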

Related

Where to find a thorough list of node_redis commands?

I'm using redis to store the userId as a key and the socketId as the value. What's more important is that the userId doesn't change, but the socketId constantly changes. So I want to edit the socketId value inside redis, but I'm not sure what node_redis command to use. I'm currently just editing by using .set(userId, mostRecentSocketId).
In addition, I haven't found a good node_redis API reference anywhere with a complete list of commands. I briefly looked at the redis-commands package, but it still doesn't seem to have a full list of commands.
Any help is appreciated; thanks in advance :)
The full list of Redis commands can be found at https://redis.io/commands. After finding the proper command, it isn't hard to find out how it is proxied in the binding ("API") you use.
Upd. To make it clear: you have Redis Server, whose commands are listed in the doc I provided. Then you have redis-commands, a library for working with Redis (I called it a "binding"). My point was that redis-commands may not have all the commands that redis-server can handle, and the names of some commands can differ a bit. Other bindings can offer a slightly different set of commands. So it's better to examine the list of commands that Redis Server handles, and then select a binding that allows calling those commands (I guess all the bindings have a set method).
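For the original use case, a plain SET really is all that's needed, since SET simply overwrites whatever value the key already holds. A minimal sketch with the node_redis ("redis") client using its v4-style promise API (the key and values below are made up):
// Hypothetical: store/refresh the latest socketId for a user
const { createClient } = require('redis');

async function storeSocketId(userId, socketId) {
  const client = createClient(); // defaults to localhost:6379
  await client.connect();
  await client.set(userId, socketId); // overwrites any previous socketId for this userId
  console.log(await client.get(userId));
  await client.quit();
}

storeSocketId('user:42', 'socket-abc123');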

Unresolved reference to WseeFileStore

I am trying to run SOA Suite, and when I execute startWeblogic.sh I get the following error message:
Unresolved reference to WseeFileStore by [<domain name>]/SAFAgents[ReliableWseeSAFAgent]/Store
at weblogic.descriptor.internal.ReferenceManager.resolveReferences(ReferenceManager.java:310)
at weblogic.descriptor.internal.DescriptorImpl.validate(DescriptorImpl.java:322)
at weblogic.descriptor.BasicDescriptorManager.createDescriptor(BasicDescriptorManager.java:332)
at weblogic.management.provider.internal.DescriptorManagerHelper.loadDescriptor(DescriptorManagerHelper.java:68)
at weblogic.management.provider.internal.RuntimeAccessImpl$IOHelperImpl.parseXML(RuntimeAccessImpl.java:690)
at weblogic.management.provider.internal.RuntimeAccessImpl.parseNewStyleConfig(RuntimeAccessImpl.java:270)
at weblogic.management.provider.internal.RuntimeAccessImpl.<init>(RuntimeAccessImpl.java:115)
... 7 more
Does anyone know how to fix this error?
I am running the system on 64-bit SUSE.
The quick and dirty way to get your admin server back up:
cd to <domain name>/config
Back up config.xml just in case
Edit config.xml, find and remove the <saf-agent> tags that point to your non-existent WseeFileStore
When you have the admin server back up, you can look at the Store-and-Forward Agents and Persistent Stores links to see what is already configured there. It sounds like a SAF agent was somehow created but the backing Persistent Store was not.
You can always create the Persistent Store later and add that SAF agent back in if you need it.
This happens simply because the automated tool used to adapt the config.xml file to the new cluster structure is... well, far from efficient.
It creates all the other relevant structures fine, but the <saf-agent> entry is generated incorrectly.
Just open the config.xml file and look through it briefly; you should see that something is not right with this entry.
I will use my environment as an example for this situation:
I have a single cluster with two managed servers named osb1 and osb2. Both are administered by the cluster's AdminServer, and all of these components live on a single machine called rdaVM. The whole domain was created with the Configuration Wizard, and upon the first AdminServer start I got that dreadful error for quite some time.
The solution does reside in the config.xml file, located at <DOMAIN_HOME>/config/config.xml.
When I opened this file in the editor and did a quick search for WseeFileStore I got some curious entries:
<jms-server>
  <name>WseeJmsServer_auto_1</name>
  <target>osb1</target>
  <persistent-store>WseeFileStore_auto_1</persistent-store>
</jms-server>
<jms-server>
  <name>WseeJmsServer_auto_2</name>
  <target>osb2</target>
  <persistent-store>WseeFileStore_auto_2</persistent-store>
</jms-server>
and
<file-store>
  <name>WseeFileStore_auto_1</name>
  <directory>WseeFileStore_auto_1</directory>
  <target>osb1</target>
</file-store>
<file-store>
  <name>WseeFileStore_auto_2</name>
  <directory>WseeFileStore_auto_2</directory>
  <target>osb2</target>
</file-store>
but looking at the offending entry:
<saf-agent>
  <name>ReliableWseeSAFAgent</name>
  <store>WseeFileStore</store>
</saf-agent>
Obviously there's something missing here. Looking at the <DOMAIN_HOME>, I could see two folders there: WseeFileStore_auto_1 and WseeFileStore_auto_2. So there is no WseeFileStore, hence that annoying error. Also, the saf-agent element doesn't have a target.
Solution: using just the underlying logic, I adapted the <saf-agent> entry to:
<saf-agent>
  <name>ReliableWseeSAFAgent_auto_1</name>
  <target>osb1</target>
  <store>WseeFileStore_auto_1</store>
</saf-agent>
<saf-agent>
  <name>ReliableWseeSAFAgent_auto_2</name>
  <target>osb2</target>
  <store>WseeFileStore_auto_2</store>
</saf-agent>
I.e., I created a <saf-agent> for each of the cluster's managed servers, targeted each entry at a managed server, and added the _auto_# suffix, where # is the ordering number for each managed server, to the <name> and <store> entries.
After that, I was able to run the startWebLogic.sh script without problems (from this source, at least...).

"File Not Found" in MSBuild Community Tasks -- Which File?

I'm trying to use the VssGet task of the MSBuild Community Tasks, and the error message "File or project not found" is beating me with a stick. I can't figure out what in particular the error message is referring to. Here's the task:
<LocalFilePath Include="C:\Documents and Settings\michaelc\My Documents\Visual Studio 2005\Projects\Astronom\Astronom.sln" />
<VssGet DatabasePath="\\ofmapoly003\Individual\michaelc\VSS\Astronom_VSS\srcsafe.ini"
        Path="$/Astronom_VSS"
        LocalPath="#(LocalFilePath)"
        UserName="build" Password="build"
        Recursive="True" />
If I write a StreamReader to read either the database path or the local path, it succeeds fine. So every path involved appears to be accessible. Any ideas?
Two thoughts. One, sometimes a type load exception manifests as a FNF - let's hope that's not it. But if the code is actually being honest, you can track the problem using Procmon or Filemon. Start one of those utilities and then run your task again. You should be able to track down a record of a file that couldn't be located.
@famoushamsandwich, that's a great response; I had not previously heard of Procmon or Filemon. I tried Procmon on the problem, but even after sifting through the relevant output (my gosh, the machine does a lot more stuff behind the screen than I was aware of) I couldn't find any record of a file I'm referencing that wasn't being found.
Procmon and Filemon are good suggestions - just make sure you filter the results to only show errors. Otherwise the success messages will bury the problem entries. Also, you can filter out processes that are not at fault (either through the filter dialog or by right-clicking the entry and choosing "Exclude Process".)
A couple of other thoughts:
In the LocalFilePath, you are specifying a single file as opposed to a folder, while the task specifies to get files recursively. Perhaps you need to remove "\Astronom.sln" from the LocalFilePath (see the sketch after this list)?
Is the build task being run under your account or another? It's possible you have a permissions issue
Do you already have a copy of the code pulled down in the same location? Perhaps there is a failure to overwrite an existing file/folder?
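For the first point, the change would look roughly like this. A sketch only: whether VssGet is happy with a folder here is an assumption to verify against the task's documentation, and I've written the item reference as @(LocalFilePath), the usual MSBuild item syntax, in case the #(...) above was a typo:
<!-- Hypothetical: point LocalFilePath at the project folder rather than the .sln -->
<LocalFilePath Include="C:\Documents and Settings\michaelc\My Documents\Visual Studio 2005\Projects\Astronom" />
<VssGet DatabasePath="\\ofmapoly003\Individual\michaelc\VSS\Astronom_VSS\srcsafe.ini"
        Path="$/Astronom_VSS"
        LocalPath="@(LocalFilePath)"
        UserName="build" Password="build"
        Recursive="True" />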

robocopy, jungledisk file copy problems

I'm a huge fan of robocopy and use it extensively to copy between various servers I need to update.
Lately I've been archiving to an Amazon S3 account that I access via a mapped drive using JungleDisk. I then robocopy my files from local PC to S3.
Sometimes I get a very strange 'Incorrect function' error message in robocopy and the file fails to copy. I've tried xcopy and straightforward copy and paste between file explorer windows. In each case I get some variation of the 'Incorrect function' or 'Illegal MS-DOS function' and the file will never copy.
I've deleted the target, but to no avail.
Any ideas?
Don't know if you're allowed to answer your own questions but I think I've fixed it...
I found this in the JungleDisk support forums:
The quick solution is to zip the files, delete the originals, then unzip the files, because zip can't handle extended attributes. Another solution is to move them to a FAT filesystem, then move them again to an NTFS filesystem, because FAT doesn't manage extended attributes.
In both cases the result is the deletion of the extended attributes, and the files can then be moved to the JungleDisk.
Files can have extended attributes for different reasons, especially migrations from other filesystems: in my case it was the migration of a CVS repository from an ext2 filesystem to NTFS.
Seems to have worked for me...
I've had similar issues from both OS X and Linux. At first I was not concerned by it, but then it occurred to me that these issues could result in potential data contamination or backup failure. So I have abandoned JungleDisk for everything except my lightweight work.
Zipping/tarring files was not an option for me because of the size of my data set. With that approach you have to upload your entire data set each and every time.
I'm not sure which attributes you refer to, but could you robocopy with the /COPY:DT switch to strip off the attributes?
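For reference, the call would look something like this (the source and destination paths are placeholders; /E just copies subdirectories as well):
rem /COPY:DT copies Data and Timestamps only, leaving file attributes behind
robocopy C:\LocalData X:\JungleDiskDrive /E /COPY:DT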

Force a Samba process to close a file

Is there a way to force a Samba process to close a given file without killing it?
Samba opens a process for each client connection, and sometimes I see it holding files open far longer than needed. Usually I just kill the process, and the (Windows) client will reopen it the next time it accesses the share; but sometimes it's actively reading another file for a long time, and I'd like to just 'kill' one file, not the whole connection.
edit: I've tried the 'net rpc file close <fileid>' command, but it doesn't seem to work. Does anybody know why?
edit: this is the best mention I've found of something similar. It seems to be a problem in the Win32 client, something that Microsoft's servers have a workaround for, but Samba doesn't. I wish the net rpc file close <fileid> command worked; I'll keep trying to find out why. I'm accepting LuckyLindy's answer, even though it didn't solve the problem, because it's the only useful procedure in this case.
This happens all the time on our systems, particularly when connecting to Samba from a Win98 machine. We follow these steps to solve it (which are probably similar to yours):
See which computer is using the file (i.e. lsof|grep -i <file_name>)
Try to open that file from the offending computer, or see if a process is hiding in task manager that we can close
If no luck, have the user exit any important network programs
Kill the user's Samba process from linux (i.e. kill -9 <pid>)
I wish there was a better way!
I am creating a new answer, since my first answer really just contained more questions, and really was not a whole lot of help.
After doing a bit of searching, I have not been able to find any current open bugs for the latest version of Samba. Please check out the Samba Bug Report website and create a new bug; this is the simplest way to get someone to suggest ideas on how to fix it, and to have developers look at the issue. LuckyLindy left a comment on my previous answer saying that this is the way it has been for 5 years now; well, the project is open source, and the best way to fix something that is wrong is by reporting it and/or providing patches.
I have also found one mailing list entry, "Samba Open files": they suggest adding posix locking = no to the configuration file. As long as you don't also hand the files out over NFS, not locking the file should be okay; that is, assuming the file being held is actually locked.
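In smb.conf that would look like the following (whether to set it globally or per share is a judgment call; putting it in [global] is just my assumption here):
[global]
   # disable POSIX (fcntl) byte-range locking so smbd does not hold kernel locks on files
   posix locking = no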
If you wanted to, you could write a program that uses ptrace to attach to the process and then goes through, unlocking and closing all the files. However, be aware that this might possibly leave Samba in an unknown state, which can be more dangerous.
The workaround that I have already mentioned is to periodically restart Samba. I know it is not a solution, but it might work temporarily.
This is probably answered here: How to close a file descriptor from another process in unix systems
At a guess, 'net rpc file close' probably doesn't work because the interprocess communication telling Samba to close the file winds up not being looked at until the file you want to close is done being read.
If there isn't an explicit option in Samba, then it is impossible to externally close an open file descriptor with standard Unix interfaces.
Generally speaking, you can't meddle with a process's file descriptors from the outside. As root you can of course do it, as seen in that Phrack article from 1997: http://www.phrack.org/issues.html?issue=51&id=5#article. I wouldn't recommend doing that on a production system, though...
The better question in this case would be why? Why do you want to close a file early? What purpose does it ultimately have to close the file? What are you attempting to accomplish?
Samba provides commands for viewing open files and closing them.
To list all open files:
net rpc file -U ADadmin%password
Replace ADadmin and password with the credentials of a Windows AD domain admin. This gives you a file id, username of who's got it open, lock status, and the filename. You'll frequently want to filter the results by piping them through grep.
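For example, to narrow the listing down to one file (the filename here is made up):
net rpc file -U ADadmin%password | grep -i budget.xlsx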
Once you've found a file you want to close, copy its file id number and use this command:
net rpc file close fileid -U ADadmin%password
I needed to accomplish something like this, so that I could easily unmount devices I happened to be sharing. I wrote this quick bash script:
#!/bin/bash
# Find the PIDs of the smbd processes that have files open under the given path, then kill them.
PIDS_TO_CLOSE=$(smbstatus -L | tail -n +4 | grep "$1" | cut -d' ' -f1 | sort -u | sed '/^$/d')
for PID in $PIDS_TO_CLOSE; do
    kill $PID
done
It takes a single argument, the path to close:
smbclose /media/drive
Any path that matches that argument (by grep) is closed, so you should be pretty specific with it. (Only files open through samba are affected.) Obviously, you need root to close files opened by other users, but it works fine for files you have open. Note that as with any other force closing of a file, data corruption can occur. As long as the files are inactive, it should be fine though.
It's pretty ugly, but for my use-case (closing whole mount points) it works well enough.