NFS file open in C code - nfs

If I open a file in my C/C++/Java code using a pathname that points into an NFS-mounted directory, how do the read and write calls work, given that NFS is stateless? I have tried but can't find example code that accesses NFS-mounted files. My current understanding is that it is the job of the NFS client to keep state (such as the read/write offset) and that the application uses the same syntax as for local files.
A related question concerns VFS and UFS. Are all files on a modern Unix machine accessed through their vnodes first and then, depending on whether they are local or remote, through inode or rnode structures?

NFS (aside from file locking) is no different from local storage as far as user-level applications are concerned. It might be slower, or it might drop out unexpectedly, but that can happen with local storage too. That's probably why you can't find NFS-specific example code.
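For illustration, here is a minimal C sketch, assuming /mnt/nfs/example.txt is a file on an NFS mount (the path is a placeholder). The open/read/write/lseek/close calls are exactly the ones you would use for a local file; the kernel's NFS client keeps the open-file state, such as the current offset, and turns each call into a self-contained request to the stateless server:

#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

int main(void)
{
    const char *path = "/mnt/nfs/example.txt";   /* placeholder: any file on an NFS mount */
    const char msg[] = "hello over NFS\n";
    char buf[128];
    ssize_t n;

    /* Same syscalls as for a local file. */
    int fd = open(path, O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    /* The kernel's NFS client tracks the offset and sends it with every
       request, so the server never has to remember where we are. */
    if (write(fd, msg, sizeof msg - 1) < 0) { perror("write"); return 1; }
    if (lseek(fd, 0, SEEK_SET) < 0) { perror("lseek"); return 1; }

    n = read(fd, buf, sizeof buf - 1);
    if (n < 0) { perror("read"); return 1; }
    buf[n] = '\0';
    printf("read back: %s", buf);

    close(fd);
    return 0;
}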

Related

S3Fs directory listings slow - caching somehow possible?

I'm wondering if there is any way to practically speed up directory listings of an s3fs mount. I have a WebDAV server, used only for read operations, that basically accesses my s3fs mount. The problem is that listing directories is slow, while transfer speed is fine.
So I started to look around the web a bit and stumbled across JuiceFS; sadly this was also not an option, for several reasons. Then I tried vmtouch to index the mounted S3 storage into local memory, but this does not work either, as it is a shared resource managed by the FUSE kernel extension.
Even using s3fs's built-in cache does not solve the issue; instead it makes things even worse, as files are first downloaded from S3 into the local cache and only then served via WebDAV ...
Is there no way to just speed up directory listings on S3? Basically, that is all I need in the end, not a fancy POSIX-compatible block device like JuiceFS, which creates its own logic on top of your S3 bucket ... not what I was searching for.
Unfortunately s3fs 1.91 has poor readdir performance. There are a few open issues and pull requests that track future improvements:
1. Option to not use head requests
2. Consider changing -o notsup_compat_dir default
3. Consider changing -o noobj_cache default
4. Increase -o multireq_max
5. Issue parallel requests in get_object_attribute
You can toggle #2-4 via command-line flags today, but #5 is still in progress. #1 is the big win that would give a 100x speedup, but it trades away some POSIX compatibility, e.g., no UID/GID and no permissions. One alternative that you can try today is goofys, which implements #1.

OperationalError: Attempt to Write A ReadOnly Database on Google Cloud Application

Recently, I have been trying to deploy an interactive Google App Engine app that writes to a SQLite database. It works fine when running the app locally, but when running it on the server, I receive the error:
OperationalError: attempt to write a readonly database
I tried changing the permissions on my .db and .sql files, but no luck.
Any advice would be greatly appreciated.
You can try changing the permissions of the directory and checking that the .sqlite file exists and is writable.
But generally speaking, it is not a good idea to rely on on-disk data when working with App Engine, as disk storage is ephemeral (unless you are using persistent disks on the flexible environment); even then it is better to use a cloud database solution.
App Engine has a read-only file system, i.e. no files can be modified. It does, however, provide a /tmp/ folder for temporary files, as the name suggests. /tmp/ is backed by RAM, so it is not a good fit if the database is huge.
On app startup you can copy your original database file to the /tmp/ folder and use it from there afterwards.
This works. However, all changes to the database are lost when the app's instances scale to zero. Each instance has its own copy of the database, and the data is not shared between instances. If you need the data to be shared between instances, it is better to use Cloud SQL.
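The copy-then-open pattern itself is language-agnostic; below is a rough sketch in C using the sqlite3 API, with placeholder file names, just to make the startup sequence concrete (the same idea carries over to whatever runtime the app actually uses):

#include <stdio.h>
#include <sqlite3.h>

/* Copy the bundled (read-only) database to /tmp at startup, then open the
   writable copy. File names are placeholders for illustration only. */
static int copy_file(const char *src, const char *dst)
{
    FILE *in = fopen(src, "rb");
    FILE *out = fopen(dst, "wb");
    char buf[8192];
    size_t n;

    if (!in || !out) {
        if (in) fclose(in);
        if (out) fclose(out);
        return -1;
    }
    while ((n = fread(buf, 1, sizeof buf, in)) > 0)
        fwrite(buf, 1, n, out);
    fclose(in);
    fclose(out);
    return 0;
}

int main(void)
{
    sqlite3 *db;

    /* The deployed app directory is read-only; /tmp is writable (RAM-backed). */
    if (copy_file("app.db", "/tmp/app.db") != 0) {
        fprintf(stderr, "could not copy database to /tmp\n");
        return 1;
    }
    if (sqlite3_open("/tmp/app.db", &db) != SQLITE_OK) {
        fprintf(stderr, "sqlite3_open failed\n");
        return 1;
    }
    /* ... run queries against the writable copy ... */
    sqlite3_close(db);
    return 0;
}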

How to get list of mounted filesystems on NFS server

For auditing purposes I need to track all remote NFSv4 mount requests on an NFS server (CentOS 7), so that I get both the identity of the mounting system and the filesystem that it mounted. Using 'netstat -an' gets me the identity of the remote system, but now I need to know what it mounted. It also gives no clue as to whether that system unmounted a filesystem and then mounted a different one.
I have seen various references to both 'rmtab' and 'showmount', but they do not show me the currently mounted filesystems and, from what I can see, they are only good for NFSv3 and older mounts. I have also seen references to the file /proc/fs/nfsd/clients but cannot see such a file on any of my servers. Surely the information about who has what mounted has to be available somewhere on the server, even if it takes a convoluted path to get there (auditing nfsservctl syscalls worked in the old days).
Related to that, 'ps' shows me the '[nfsv4.1-svc]' process, but I haven't been able to track down who/what/why that is and whether it is useful.

Where are Aerospike namespaces on the hard drive in Ubuntu 14

I have installed Aerospike on Ubuntu. When I run the aql command "show namespaces", it shows the namespaces "test" and "bar". I tried to find out where they are on the hard drive, or what their exact location in Ubuntu is, but in vain. Can anyone help me?
You wouldn't see any of the namespaces directly exposed on your file system when running Aerospike.
Having said that, the "bar" and "test" namespaces are the defaults in the configuration file and both should be configured with 'storage-engine memory', which means that the data is stored in memory and not on your hard drive. Even if you were to switch them to 'storage-engine device', and either configure the underlying device as a raw SSD or use a file, you would still not see any direct mention of the namespace:
When using a raw SSD, Aerospike bypasses the file system and directly manages blocks on the device.
When using a file, Aerospike likewise manages its own blocks inside the file, which makes the file not 'readable'.
There is a way to see the existing namespaces and to create other namespaces:
If you have installed Aerospike on Ubuntu, look at the file /etc/aerospike/aerospike.conf. This configuration file defines the namespaces; a stanza from it is sketched below.
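For reference, a namespace stanza in a stock aerospike.conf looks roughly like the following (the values are illustrative, not a recommendation):

namespace test {
    replication-factor 2
    memory-size 4G
    storage-engine memory    # data lives in RAM, so nothing appears on disk
}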

NFS server receives multiple inotify events on new file

I have two machines in our datacenter: a public server and an internal storage server.
The public server exposes part of the internal server's storage through FTP. When files are uploaded to the FTP server, they in fact end up on the internal storage. But when watching the inotify events on the internal server's storage, I notice the file gets written in chunks, probably due to buffering on the client side. The software on the internal server watches the inotify events to determine whether new files have arrived. But because of the way NFS writes the files, there is no good way of telling when a file is complete. Is there a way to tell the NFS client to write files in only one operation, or is there a workaround for this behaviour?
EDIT:
The events I get on the internal server when uploading a file of around 900 MB are:
./ CREATE big_buck_bunny_1080p_surround.avi
# after the CREATE I get around 250K MODIFY and CLOSE_WRITE,CLOSE events:
./ MODIFY big_buck_bunny_1080p_surround.avi
./ CLOSE_WRITE,CLOSE big_buck_bunny_1080p_surround.avi
# when the upload finishes I get a CLOSE_NOWRITE,CLOSE
./ CLOSE_NOWRITE,CLOSE big_buck_bunny_1080p_surround.avi
Of course, I could listen to the CLOSE_NOWRITE event, but the inotify documentation says:
close_nowrite
A watched file or a file within a watched directory was closed, after being opened in read-only mode.
Which is not exactly the same as 'the file is complete'. The only workaround I see is to use .part or .filepart files, rename them to the original filename once the upload finishes, and ignore the .part files in my storage watcher. The disadvantage is that I'll have to explain to customers how to upload with .part, and not many FTP clients support this by default.
Basically, if you want to know when the write operation has completed, monitor the IN_CLOSE_WRITE event.
IN_CLOSE_WRITE is fired when a file that was open for writing is closed. Even if the file is transferred in chunks, the FTP server closes the file only after the whole file has been transferred.
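A minimal sketch of such a watcher in C, using the Linux inotify API; the watched directory /srv/ftp/incoming is a placeholder for wherever the FTP uploads land:

#include <stdio.h>
#include <unistd.h>
#include <sys/inotify.h>

int main(void)
{
    int fd = inotify_init();
    if (fd < 0) { perror("inotify_init"); return 1; }

    /* Watch the upload directory for files being closed after writing. */
    int wd = inotify_add_watch(fd, "/srv/ftp/incoming", IN_CLOSE_WRITE);
    if (wd < 0) { perror("inotify_add_watch"); return 1; }

    char buf[4096] __attribute__((aligned(__alignof__(struct inotify_event))));

    for (;;) {
        ssize_t len = read(fd, buf, sizeof buf);
        if (len <= 0) break;

        /* A single read can return several events; walk through them. */
        for (char *p = buf; p < buf + len; ) {
            struct inotify_event *ev = (struct inotify_event *)p;
            if ((ev->mask & IN_CLOSE_WRITE) && ev->len > 0)
                printf("finished writing: %s\n", ev->name);
            p += sizeof(struct inotify_event) + ev->len;
        }
    }

    close(fd);
    return 0;
}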