Cocoa APIs reporting incorrect values for free space, what should I use? - objective-c

Does anyone know what APIs Apple is using for its Get Info panel to determine free space in Lion? All of the code I have tried to get the same Available Space that Apple is reporting is failing; even Quick Look isn't displaying the same space that Get Info shows. This seems to happen if I delete a bunch of files and then attempt to read the available space.
When I use NSFileManager -> NSFileSystemFreeSize I get 42918273024 bytes
When I use NSURL -> NSURLVolumeAvailableCapacityKey I get 42918273024 bytes
When I use statfs -> buffer.f_bsize * buffer.f_bfree I get 43180417024 bytes
statfs gets similar results to Quick Look, but how do I match Get Info?
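For reference, here is a minimal sketch of the three calls mentioned above (error handling stripped; "/" is just an example volume, and statfs needs <sys/param.h> and <sys/mount.h>):
NSString *path = @"/";   // example volume

// 1. NSFileManager
NSDictionary *attrs = [[NSFileManager defaultManager]
    attributesOfFileSystemForPath:path error:NULL];
NSLog(@"NSFileManager: %@", [attrs objectForKey:NSFileSystemFreeSize]);

// 2. NSURL resource value
id available = nil;
[[NSURL fileURLWithPath:path] getResourceValue:&available
                                        forKey:NSURLVolumeAvailableCapacityKey
                                         error:NULL];
NSLog(@"NSURL: %@", available);

// 3. statfs
struct statfs buf;
if (statfs([path fileSystemRepresentation], &buf) == 0)
    NSLog(@"statfs: %llu", (unsigned long long)buf.f_bsize * buf.f_bfree);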

You are probably seeing the result of Time Machine's local snapshot backups. The following quotes are from the Apple Support article OS X Lion: About Time Machine's "local snapshots" on portable Macs:
Time Machine in OS X Lion includes a new feature called "local snapshots" that keeps copies of files you create, modify or delete on your internal disk. Local snapshots complement regular Time Machine backups (that are stored on your external disk or Time Capsule), giving you a "safety net" for times when you might be away from your external backup disk or Time Capsule and accidentally delete a file.
The article finishes by saying:
Note: You may notice a difference in available space statistics between Disk Utility, Finder, and Get Info inspectors. This is expected and can be safely ignored. The Finder displays the available space on the disk without accounting for the local snapshots, because local snapshots will surrender their disk space if needed.
It looks like all the programmatic methods of measuring available disk space that you have tried give the true free space value on the disk, not the space that can be made available by removing local Time Machine backups. I doubt command line tools like df have been made aware of local Time Machine backups either.

This is a bit of a workaround, not a real API, but the good old Unix command df -H will get you the same information as the Get Info panel; you just need to select the line for your disk and parse the output.
The df program has many other options that you might want to explore. In this particular case the -H switch tells the program to spit out the numbers in human readable format and to use base 10 sizes.
Take a look here for how to run command-line programs from within an app and get their output inside your program: Execute a terminal command from a Cocoa app
I believe the underpinnings of both df and the Get Info panel are very likely the same thing.
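If you go that route, here is a minimal sketch (assuming df lives at /bin/df) of running df -H from Cocoa with NSTask and capturing its output; you still have to pick out your volume's line and parse the "Avail" column yourself:
NSTask *task = [[NSTask alloc] init];
[task setLaunchPath:@"/bin/df"];
[task setArguments:[NSArray arrayWithObjects:@"-H", @"/", nil]];

NSPipe *pipe = [NSPipe pipe];
[task setStandardOutput:pipe];
[task launch];
[task waitUntilExit];

// output holds the same table df -H prints in Terminal
NSData *data = [[pipe fileHandleForReading] readDataToEndOfFile];
NSString *output = [[NSString alloc] initWithData:data
                                         encoding:NSUTF8StringEncoding];
NSLog(@"%@", output);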

Related

Zope Data.fs too large that I can't pack it anymore

My site's database has grown so large that it can't even pack itself anymore. When I do pack it, it says that there's insufficient space.
I've tried deleting some files on the server, but to no avail: the database itself just takes up the whole disk.
How would I continue from this point? The website is basically stuck, since it can't add more data while the disk is full.
Additional details
The server is on Ubuntu 11.10
Copy the Data.fs file to another machine with more disk space, and pack it there. Then copy the smaller file back to the server: bring the instance down and move the packed version into place.
Depending on how much downtime you are willing to tolerate, you could remove the large unpacked Data.fs file first, then copy the replacement over.
If you are using a blobstorage with your site, you'll have to include that when copying across your ZODB.
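As a rough illustration, packing the copy on the bigger machine only takes a few lines of Python against the ZODB API (this sketch assumes a plain FileStorage with no blobstorage, and that the copied file is named Data.fs):
from ZODB import DB
from ZODB.FileStorage import FileStorage

storage = FileStorage('Data.fs')   # the copy, on the machine with free space
db = DB(storage)
db.pack(days=0)                    # keep no old object revisions
db.close()                         # copy the now-smaller Data.fs back to the server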
After a few weeks, I returned to this problem and finally fixed it.
It is similar to the idea of @MartijnPieters, but I approached the problem differently.
My Zope instance was on /dev/sda6, and that filesystem was full. I just increased its size from 27G to 60G and THEN packed my Data.fs file.
I used GParted on my machine, but only because /dev/sda6 is a plain native Linux partition. If you're running LVM, you'd instead extend the logical volume and then grow the filesystem with resize2fs, as sketched below.
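For the LVM case, a hedged sketch (the volume group and logical volume names vg0/zope are made up; substitute your own, and the filesystem is assumed to be ext3/ext4):
lvextend -L +33G /dev/vg0/zope   # grow the logical volume
resize2fs /dev/vg0/zope          # then grow the filesystem inside it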

How to protect files from power failure (Win7)

My VB.NET application uses a config.xml file to store all of its configuration. This file is written often (every 20 seconds or so), as it also stores the state of the user's session, etc. From time to time, users report that this file is suddenly of zero length or full of unknown characters, and the application thus cannot run.
I tracked the problem down and found that this happens when the file is written shortly before a power failure or even a normal shutdown (!). I tested it with Notepad: I edited the file and unplugged the power cord immediately after editing; after reboot, the file was corrupted. The file is usually empty after initiating a system shutdown, or full of zeros (bitwise) after a power outage. So this doesn't look like an issue with my application, but rather a general OS or disk/FS issue.
This is happening on more than ten different PCs. All have Win7 and NTFS; some have SSD disks, some not.
Can this be prevented? Can I ensure that the file contains either the data from before the edit or the data from after the edit, but never ends up in a corrupted state?
You can't prevent the user from unplugging the power cord, so the system may shut down unexpectedly.
I believe you can work around it by storing multiple (at least two) files: if the last one is corrupted, use the previous one.
EDIT: This means you should not save to the same file every time the timer ticks. Because you didn't provide details, I can't suggest an exact code change, but you can define a boolean variable that you negate when the timer ticks and use it to switch the file being saved:
' flip between the two target files on every save
blnUseSecondaryFile = Not blnUseSecondaryFile
If blnUseSecondaryFile Then
    savepath = "file2.xml"
Else
    savepath = "file1.xml"
End If

Extracting Data from a VERY old unix machine

Firstly, apologies if this question seems like a wall of text; I can't think of a better way to format it.
I have a machine with valuable data on it (circa 1995); the machine is running Unix (SCO OpenServer 6) with some sort of database stored on it.
The data is normally accessed via a software package of which the license has expired and the developers are no longer trading.
The software package connects to the machine via telnet to retrieve data and modify data (the telnet connection no longer functions due to the license being changed).
I can access the machine via an ODBC driver (SeaODBC.dll) over the network, and this was how I was planning to extract the data, but so far I have retrieved 300,000 rows in just over 24 hours. In total I estimate there are around 50,000,000 rows, so at the current speed it will take 6 months!
I need either a quicker way to extract the data from the machine via ODBC or a way to extract the entire DB locally on the machine to an external drive/network drive or other external source.
I've played around with the Unix interface and the only large files I can find are in a massive matrix of single-character folders (e.g. A\G\data.dat, A\H\Data.dat, etc.).
Does anyone know how to find out which DB systems are installed on the machine? Hopefully it is something standard and I'll be able to find a way to export everything into a nicely formatted file.
Edit
Digging around the file system, I have found a folder under root > L which contains lots of single-letter folders; each single-letter folder contains more single-letter folders.
There are also files named after the table I need (e.g. "ooi.r") which have the following format:
<Id>
[]
l for ooi_lno, lc for ooi_lcno, s for ooi_invno, id for ooi_indate
require l="AB"
require ls="SO"
require id=25/04/1998
{<id>} is s
sort increasing Id
I do not recognize those kinds of filenames, A\G\data.dat and so on (filenames with backslashes in them?), and it's likely to be a proprietary format, so I wouldn't expect much from that avenue. You can try running file on them to see whether they are in any recognized format.
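For example, assuming those really are nested single-letter directories (adjust the path to whatever is actually on the box):
file A/G/data.dat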
I would suggest improving the speed of data extraction over ODBC by virtualizing the system. A modern computer will have faster memory, faster disks, and a faster CPU, and may be able to extract the data a lot more quickly. You will have to extract a disk image from the old system in order to virtualize it, but hopefully a single sequential pass reading everything off its disk won't be too slow.
I don't know what the architecture of this system is, but I guess it is x86, which means it might not be too hard to virtualize (depending on how well SCO OpenServer 6 agrees with the virtualization). You will have to use a hypervisor that supports full virtualization (not paravirtualization).
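A hedged sketch of getting that disk image, assuming you can boot the box from Linux rescue media (or attach its disk to another machine) and reach a host with enough space over the network; the device name and hostname are made up:
# run from rescue media on the old machine
dd if=/dev/sda bs=64k | gzip -c | ssh user@bighost 'cat > sco-openserver.img.gz'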
I finally solved the problem: running the query with another tool (not through MS Access or MS Excel) was massively faster. I ended up using DaFT (Database Fishing Tool) to SELECT INTO a text file, which processed all 50 million rows in a few hours.
It seems the DLL driver I was using doesn't work well with any MS products.

How Can I Recover a .nfs file?

I have been working on an algorithm in Python, and I was using Vim to edit the file. I opened it up, did a save, and it came up with an error, something like it occasionally does:
"WARNING: YOUR FILE CANNOT BE SAVED! ALL CHANGES WILL BE LOST! CANNOT WRITE THE FILE!"
As this happens occasionally, I did what I normally do and hit :q! to quit without writing any changes. No harm, no foul. But when I looked at my file, everything had been erased! Everything!
I asked around the office, and it seems that the NFS mount was full, and that was why I couldn't save anything. A huge script was generating a ton of data, which caused the mount to fill up temporarily. I believe the NFS mount is from NetApp. I found 2 files in my current directory.
One was last saved two days ago, and one was today. They are in the format of:
.nfs.xxxxxxxxxxx
When I try to open this file, I see some of my code here and there, splattered among unknown characters. Apparently this must be a binary representation of the state of the file.
Is there any way to recover this file from this NFS mount? If there is a shortcut to recover this file in Emacs, I will switch to Emacs from vim!
So, I did find a way to recover the file. I found two ways, in fact. Since it was on a NetApp NFS mount, I was able to use the snapshots feature. When you are in a directory, just do:
ls .snapshot
This will pull up any snapshots that your system administrators have set up. For us, we have hourly.0, hourly.1, nightly.0, and nightly.1 backups. So we can go back two days, and within the same day we can go back one hour (the current hour and the previous one).
The other way was to rename the file to a Vim swap file, like this:
mv .nfs.xxx my_vim_file.cpp.swp
vim my_vim_file.cpp.swp
Then attempt to open it in Vim; it should ask whether you want to recover the swap file. Say yes, and your file should be back!
Apparently your NetApp uses NFS to mount its volumes (as opposed to iSCSI, for example). Generally, each VM is stored on a unique volume (aka datastore) on the NetApp filer. To find out the volumes and snapshots, and then restore a snapshot, here are the commands to execute at the filer's command line:
# list all volumes, snapshots are taken of volumes
vol status
# list the snapshots available for a particular volume
snap list <vol_name>
# restore a snapshot, nightly.1 for example
snap restore <vol_name> nightly.1
That's it. All that's left is to turn the VM back on and see whether you've restored far enough back. If not, do another "snap restore" with an older snapshot.
Note that this procedure assumes your administrator didn't disable snapshots (NetApp has a snapshot schedule by default) and that the NetApp is licensed for snaprestore (use the "license" command to verify). This procedure can be further simplified if you have NetApp OnCommand System Manager, which is a GUI for managing the NetApp. Reverting a snapshot in the GUI is simple:
Go to Storage > Volumes > click on a volume > click on Snapshot Copies (at the bottom)
Choose a snapshot and restore

How to create a fixed blocked (FB) file for IBM mainframe/FTP in VBA

I've got VBA code that generates a text file with some pretty basic information included. I then upload that file via FTP.
I got a message from the server admin of the IBM mainframe today that my file was in variable blocked (VB) format and that their job process uses fixed blocked (FB) records up to a max size of 256.
How is this done? During file creation? A third-party tool?
You can simply convert the VB file into FB on the mainframe before running the actual process; the VB-to-FB conversion is a small JCL step.
You can use LOCSITE to set the record format on the host dataset (file).
You can find the full list of FTP subcommands in the following user guide:
IP User's Guide and Commands, SC31-8780-05
Sorry all, I have a feeling I didn't explain this correctly, because I now have an answer which is rather simple. These two commands seem to have set up the environment correctly for the file to be FB rather than VB:
ftp> quote site lr=94
ftp> quote site rec=fb
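For completeness, here is a hedged sketch of a plain FTP script built around those two commands, the kind of thing a VBA routine could write out and run with ftp -n -s:script.txt (the host, credentials, local path, and dataset name are all placeholders):
open mainframe.example.com
user MYUSER MYPASSWORD
quote site lrecl=94
quote site recfm=fb
put C:\out\report.txt 'PROD.UPLOAD.REPORT'
quit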
If I remember rightly, FB files are stored in multiples of the block size; that is just how DASD stores files on disk. The data must fit into that multiple of the block size, which increases speed and throughput on the mainframe. If the data file does not fall on those block boundaries (this has nothing to do with the actual size of the data), the DASD system just accesses the file in blocks of 256 bytes, and a host of special fields describing the blocking and so on will be inserted into the data file when it is transferred to the mainframe; that data then gets transferred to magnetic tape backups.
There should be a script available on the mainframe to convert it using JCL (Job Control Language); ask the mainframe administrator to do it for you.
By the way, be aware of the character set used in your data file: the mainframe uses the EBCDIC character set. There are plenty of tools out there that can convert ASCII data into a format readable by the mainframe, just something to bear in mind. Note that if the data gets converted, that could affect the file size. Thought it would be worth mentioning, as it is important!
The Unix/Linux dd utility can convert the data to fixed-length blocks, although I do not think it would be the right way to do it here...
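If you did want to try that route, the idea looks roughly like this (the 94-byte record length simply mirrors the lrecl used above and is only an example):
# pad each newline-terminated line with spaces to a fixed 94-byte record
dd if=report.txt of=report.fb cbs=94 conv=block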
Here's a useful link that will help you understand this, and also here on SO a similar user was asking about MVS/TSO data...