Using Apache VFS Library Get File Size (Symbolic Link) - apache-commons-vfs

I use the Apache VFS library to access files on a remote server. Some of the files are symbolic links, and when I get the file size of these files it comes back as 80 bytes. I need the actual size of the target file. Any ideas on how to accomplish this?
Using commons-vfs2 version 2.1.
OS is Linux/Unix.

You did not say which protocol/provider you are using, but it most likely does not matter: as far as I know, none of them implement symlink chasing (besides the local one). You only get the size the server reports for the directory entry itself.
VFS is a rather high-level abstraction. If you want to drive a protocol client more specifically, using commons-net, httpclient or whatever library matches your protocol gives you many more options.
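The ~80 bytes you are seeing is the size of the link entry itself, which typically stores just the target's path. The distinction is easy to see with plain stat calls; here is a minimal local illustration in Python (the same lstat-vs-stat difference is what a remote provider would have to implement to chase links):

```python
import os
import tempfile

# Create a real file and a symlink pointing at it.
tmp = tempfile.mkdtemp()
target = os.path.join(tmp, "data.bin")
link = os.path.join(tmp, "data.lnk")
with open(target, "wb") as f:
    f.write(b"x" * 4096)  # the "real" content: 4096 bytes

os.symlink(target, link)

# lstat reports the directory entry for the link itself
# (roughly the length of the target path string);
# stat follows the link and reports the target's size.
link_entry_size = os.lstat(link).st_size
real_size = os.stat(link).st_size
```

A provider that only issues the remote equivalent of `lstat` can never return `real_size`, which is why you would need to drop down to the protocol client and resolve the link target yourself.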

Related

VMWare resize disk size using vcenter api

I have been trying to solve this for the past week.
I'm using the vcenter API to add a new disk to an existing VM
https://vdc-repo.vmware.com/vmwb-repository/dcr-public/1cd28284-3b72-4885-9e31-d1c6d9e26686/71ef7304-a6c9-43b3-a3cd-868b2c236c81/doc/operations/com/vmware/vcenter/vm/hardware/disk.create-operation.html
and was able to do it successfully.
But I cannot figure out how to resize an existing VM disk.
https://vdc-repo.vmware.com/vmwb-repository/dcr-public/1cd28284-3b72-4885-9e31-d1c6d9e26686/71ef7304-a6c9-43b3-a3cd-868b2c236c81/doc/operations/com/vmware/vcenter/vm/hardware/disk.update-operation.html
This disk update operation does not allow updating the "capacity" attribute. So I'm not sure how to solve this, short of using an SDK.
Can someone please point me in the right direction?
I'm not 100% up to speed on the latest version, but there are several things that the REST API cannot do compared to the "old" SDK which is based on SOAP / WSDL.
The documentation on the page also states that the call only: "Updates the configuration of a virtual disk. An update operation can be used to detach the existing VMDK file and attach another VMDK file to the virtual machine." So there's no mention of changing the size (which is pretty lame I have to say...).
So unfortunately it seems you can either:
Wait for a new version and hope this will be included
Use the good old SDK
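With the old SOAP SDK the resize goes through ReconfigVM_Task: you send a VirtualMachineConfigSpec containing a VirtualDeviceConfigSpec with operation "edit" and the existing VirtualDisk with a raised capacityInKB. A rough sketch of the shape of that spec, here as a plain Python dict for illustration (a real call would use pyVmomi's vim.* types; the field names are from memory, so double-check them against the SDK reference):

```python
def disk_resize_spec(disk_key, new_capacity_gb):
    """Build a ReconfigVM_Task-style device-change spec that grows
    an existing disk.

    disk_key: the 'key' of the VirtualDisk already attached to the VM.
    new_capacity_gb: desired size; must be >= the current size
    (the API only grows disks, it never shrinks them).
    """
    capacity_kb = new_capacity_gb * 1024 * 1024  # GB -> KB
    return {
        "deviceChange": [{
            "operation": "edit",  # edit an existing device, not add/remove
            "device": {
                "_type": "VirtualDisk",
                "key": disk_key,
                "capacityInKB": capacity_kb,
            },
        }]
    }

# Hypothetical example: grow disk with key 2000 to 40 GB.
spec = disk_resize_spec(disk_key=2000, new_capacity_gb=40)
```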

Where are the namespaces of Aerospike on the hard drive in Ubuntu 14

I have installed Aerospike in Ubuntu. When I run the aql command "show namespaces", it shows the namespaces "test" and "bar". I tried to find out where they are on the hard drive, i.e. their exact location in Ubuntu, but in vain. Can anyone help me?
You wouldn't see any of the namespaces directly exposed on your file system when running Aerospike.
Having said that, the "bar" and "test" namespaces are default in the configuration file and both should be configured as 'storage engine memory' which means that the data will be stored in memory and not on your hard drive. Even if you were to switch those to be 'storage engine device', and either configure the underlying device as a raw SSD one or using a file, you would still not see any direct mention of the namespace...
When using raw SSD, Aerospike bypasses the file system and directly manages blocks on the device.
When using a file, Aerospike also manages blocks on the file system which makes the file not 'readable'.
There is a way to see the existing namespaces and to define other namespaces:
If you have installed Aerospike on Ubuntu, look at the file /etc/aerospike/aerospike.conf. This configuration file holds the namespace definitions.
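For reference, a namespace stanza in /etc/aerospike/aerospike.conf looks roughly like this (values as in the stock config shipped with the package; adjust to taste):

```
namespace test {
    replication-factor 2
    memory-size 4G
    default-ttl 30d          # 30 days; use 0 to never expire
    storage-engine memory    # data lives in RAM only
}
```

Switching `storage-engine memory` to a `storage-engine device { ... }` block (pointing at a raw device or a file) is what would put data on disk, but as explained above, even then the blocks are managed by Aerospike and are not readable as ordinary files.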

Enabling Extensions

I am changing some of my PHP settings such as upload size limit etc.
I managed to increase my upload size limit with some modifications to my Loaded Configuration File: php5.ini.
So it is the right file, and changes to php5.ini take effect.
I want to enable the ldap extension too. However, I couldn't do that. I added the following line:
extension=php_ldap.dll
But it did not take any effect.
Can anybody see why?
Thanks!
If you're on Linux, adding a dll extension won't do much good since these are used by Windows.
You'd rather have to enable the .so extension.
But then again, just uncommenting or adding this entry in your php.ini doesn't make the extension work automatically; it will only work if the given .so file is really there and in the right path, or rather if the path is configured correctly.
If you're on Linux and you've got the chance, you should install extensions via your package manager (apt, yum, ...), which will compile the extension into your PHP installation. This way you won't lose your extension after server updates that include PHP.
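On a typical Linux setup the php.ini entry would look like this instead (the exact .so name and extension_dir vary by distribution; on Debian/Ubuntu, `apt-get install php5-ldap` usually drops in a ready-made config snippet so you don't edit php.ini at all):

```ini
; Linux uses shared objects, not Windows DLLs
extension=ldap.so

; only needed if the .so is not in the default extension directory:
; extension_dir = "/usr/lib/php5/20121212"
```

After editing, restart the web server (or php-fpm) and check the output of `php -m` or phpinfo() to confirm the extension is loaded.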
If you don't have access to your server, there is always something you can do!
a) Ask the hosting provider to enable the extension for you, good hosting providers do that.
b) Get a VPS. You'll be so much better off!! It's worth it! Check out ServerGrove!

Serving dynamic zip files through Apache

One of the responsibilities of my Rails application is to create and serve signed xmls. Any signed xml, once created, never changes. So I store every xml in the public folder and redirect the client appropriately to avoid unnecessary processing from the controller.
Now I want a new feature: every xml is associated with a date, and I'd like to implement the ability to serve a compressed file containing every xml whose date lies in a period specified by the client. Nevertheless, the period cannot be limited to less than one month for the feature to be useful, and this implies some zip files being served will be as big as 50M.
My application is deployed as a Passenger module of Apache. Thus, it's totally unacceptable to serve the file with send_data, since the client would have to wait for the entire compressed file to be generated before the actual download begins. Although I have an idea of how to implement the feature in Rails so the compressed file is produced while being served, I fear my server will run short on resources once some lengthy Ruby/Passenger processes are allocated to serve big zip files.
I've read about a better solution to serve static files through Apache, but not dynamic ones.
So, what's the solution to the problem? Do I need something like a custom Apache handler? How do I inform Apache, from my application, how to handle the request, compressing the files and streaming the result simultaneously?
Check out my mod_zip module for Nginx:
http://wiki.nginx.org/NgxZip
You can have a backend script tell Nginx which URL locations to include in the archive, and Nginx will dynamically stream a ZIP file to the client containing those files. The module leverages Nginx's single-threaded proxy code and is extremely lightweight.
The module was first released in 2008 and is fairly mature at this point. From your description I think it will suit your needs.
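With mod_zip the backend never streams any file data itself: it replies with an `X-Archive-Files: zip` header and a plain-text manifest, one file per line, containing the CRC-32 (or `-` if unknown), the size in bytes, the URL-encoded internal location, and the entry name inside the zip. A sketch of generating such a manifest (paths and names here are made up; check the line format against the mod_zip README):

```python
from urllib.parse import quote

def mod_zip_manifest(files):
    """files: iterable of (size_in_bytes, internal_url, zip_entry_name).

    Returns the response body mod_zip expects. The response must also
    carry the header 'X-Archive-Files: zip' for Nginx to build the zip.
    """
    lines = []
    for size, url, name in files:
        # "-" in the first column means the CRC-32 is unknown.
        lines.append("- {} {} {}".format(size, quote(url), name))
    return "\n".join(lines) + "\n"

body = mod_zip_manifest([
    (1024, "/protected/2012-01/a.xml", "2012-01/a.xml"),
    (2048, "/protected/2012-01/b.xml", "2012-01/b.xml"),
])
```

The internal locations would typically be protected by `internal;` in the Nginx config, so clients can only fetch them via the generated archive.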
You simply need to use whatever API you have available for you to create a zip file and write it to the response, flushing the output periodically. If this is serving large zip files, or will be requested frequently, consider running it in a separate process with a high nice/ionice value / low priority.
Worst case, you could run a command-line zip in a low priority process and pass the output along periodically.
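The "write and flush periodically" idea can be sketched in a few lines of stdlib Python (for illustration only; the question is about Rails, where a gem like zipline plays the same role). The zip archive is built against an unseekable sink, and each chunk is handed to the response as soon as it is produced, so the download starts immediately:

```python
import io
import zipfile

class _ChunkWriter(io.RawIOBase):
    """Unseekable sink that collects whatever zipfile writes to it."""
    def __init__(self):
        self.chunks = []
    def writable(self):
        return True
    def write(self, b):
        self.chunks.append(bytes(b))
        return len(b)

def stream_zip(files):
    """files: iterable of (arcname, bytes). Yields zip chunks as they
    are produced, instead of materializing the whole archive first."""
    sink = _ChunkWriter()
    with zipfile.ZipFile(sink, "w", zipfile.ZIP_DEFLATED) as zf:
        for arcname, data in files:
            zf.writestr(arcname, data)
            # flush everything written so far to the client
            while sink.chunks:
                yield sink.chunks.pop(0)
    # the central directory is written when the ZipFile is closed
    while sink.chunks:
        yield sink.chunks.pop(0)

# Hypothetical usage: join the chunks (a real server would send each one).
payload = b"".join(stream_zip([("a.xml", b"<a/>"), ("b.xml", b"<b/>")]))
```

The trade-off versus the mod_zip approach above is that this keeps an application process busy for the whole download, which is exactly the resource concern raised in the question.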
It's tricky to do, but I've made a gem called zipline ( http://github.com/fringd/zipline ) that gets things working for me. I want to update it so that it can support plain file handles or paths; right now it assumes you're using carrierwave...
Also, you probably can't stream the response with Passenger... I had to use unicorn to make streaming work properly, and certain Rack middleware can even screw that up (calling response.to_s breaks it).
If anybody still needs this, bother me on the GitHub page.

Accessing a resource file from a filesystem plugin on SymbianOS

I cannot use the Resource File API from within a file system plugin due to a PlatSec issue:
*PlatSec* ERROR - Capability check failed - Can't load filesystemplugin.PXT because it links to bafl.dll which has the following capabilities missing: TCB
My understanding of the issue is that:
File system plugins are dlls which are executed within the context of the file system process. Therefore all file system plugins must have the TCB PlatSec privilege which in turn means they cannot link against a dll that is not in the TCB.
Is there a way around this (without resorting to a text file or an intermediate server)? I suspect not - but it would be good to get a definitive answer.
The Symbian file server has the following capabilities:
TCB ProtServ DiskAdmin AllFiles PowerMgmt CommDD
So any DLL being loaded into the file server process must have at least these capabilities. There is no way around this, short of writing a new proxy process as you allude to.
However, there is a more fundamental reason why you shouldn't be using bafl.dll from within a file server plugin: this DLL provides utility functions which interface to the file server's client API. Attempting to use it from within the file server will not work; at best, it will lead to the file server deadlocking as it attempts to connect to itself.
I'd suggest rethinking what you're trying to do, and investigating an internal file-server API to achieve it instead.
Using RFs/RFile/RDir APIs from within a file server plugin is not safe and can potentially lead to deadlock if you're not very careful.
Symbian 9.5 will introduce new APIs (RFilePlugin, RFsPlugin and RDirPlugin) which should be used instead.
There's a proper mechanism for communicating with plugins: RPlugin.
Do not use RFile. I'm not even sure that it would work, as the path is checked in the Initialise step of the RFile functions, which runs before the plugin stack.
Tell us what kind of data you are storing in the resource file.
Things that usually go into resource files have no place in a file server plugin, even that means hardcoding a few values.
Technically, you can send data to a file server plugin using RFile.Write() but that's not a great solution (intercept RFile.Open("invalid file name that only your plugin understands") in the plugin).
EDIT: Someone indicated that using an invalid file name will not let you send data to the plugin. Hey, I didn't like that solution either. For the sake of completeness, I should clarify: make up a filename that looks OK enough to get through to your plugin, like one using a drive letter that doesn't have a real drive attached to it (but will still be considered correct by filename-parsing code).
Writing code to parse the resource file binary in the plugin, while theoretically possible, isn't a great solution either.