I have installed Aerospike on Ubuntu. When I run the aql command "show namespaces", it shows the namespaces "test" and "bar". I tried to find out where they are on the hard drive, or what their exact location on Ubuntu is, but to no avail. Can anyone help me?
You wouldn't see any of the namespaces directly exposed on your file system when running Aerospike.
Having said that, the "bar" and "test" namespaces are the defaults in the configuration file, and both should be configured with 'storage-engine memory', which means the data is stored in memory and not on your hard drive. Even if you were to switch them to 'storage-engine device', and configure the underlying storage either as a raw SSD device or as a file, you would still not see any direct mention of the namespace...
When using raw SSD, Aerospike bypasses the file system and directly manages blocks on the device.
When using a file, Aerospike likewise manages blocks inside that file, which makes the file not directly 'readable'.
It is possible to see the existing namespaces and to define additional ones.
If you have installed Aerospike on Ubuntu, look at the file /etc/aerospike/aerospike.conf. This configuration file contains the namespace definitions.
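For reference, a namespace stanza in that file looks roughly like the following. The values are illustrative (your installed defaults may differ slightly), and the second stanza shows the file-backed 'storage-engine device' variant mentioned above:

namespace test {
    replication-factor 2
    memory-size 4G
    default-ttl 30d          # 30 days; 0 means never expire/evict
    storage-engine memory    # data lives in RAM only
}

namespace bar {
    replication-factor 2
    memory-size 4G
    storage-engine device {
        file /opt/aerospike/data/bar.dat   # Aerospike-managed blocks, not a readable dump
        filesize 4G
        data-in-memory true
    }
}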
Related
I have been using a Unix shell to download raw reads using sratoolkit/2.8.2-1. The SRA files are from the NCBI database, e.g. "https://www.ncbi.nlm.nih.gov/sra?term=SRX1157907". When I use the prefetch command (e.g. prefetch SRR2172947), I consistently get the error "path not found while resolving tree within virtual file system module - 'SRR2172948' cannot be found." I can download other SRA files like SRR12626663 without a problem, but the accessions at the mentioned link have this problem.
Would it be possible to please guide me on how to solve this problem?
Human genomics data in NCBI's SRA is often under controlled access through the dbGaP system. One must request access to these files and follow special protocols to download such data. For example, researchers must demonstrate valid research needs to gain access approval and agree to follow protocols to ensure the data is securely stored.
I utilize the Apache VFS library to access files on a remote server. Some files are symbolic links and when we get the file size of these files, it comes back as 80 bytes. I need to get the actual file size. Any ideas on how to accomplish this?
Using commons-vfs2 version 2.1.
OS is Linux/Unix.
You did not say which protocol/provider you are using. However, it most likely does not matter: as far as I know, none of them (besides the local one) implement symlink chasing. You only get the size the server reports for the directory entry itself.
VFS is a rather high-level abstraction; if you want to drive a protocol client more specifically, using commons-net, httpclient, or whatever library fits your protocol gives you many more options.
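As a rough illustration of the difference, here is a minimal Java sketch, assuming SFTP and hypothetical host, path, and credentials. commons-vfs2 reports the size of the directory entry (the link itself), while going straight to a protocol client such as JSch lets you stat() the link target:

import org.apache.commons.vfs2.FileObject;
import org.apache.commons.vfs2.FileSystemManager;
import org.apache.commons.vfs2.VFS;
import com.jcraft.jsch.ChannelSftp;
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;
import com.jcraft.jsch.SftpATTRS;

public class SymlinkSizeExample {
    public static void main(String[] args) throws Exception {
        // commons-vfs2: for a symlink this is typically the size of the link entry (~80 bytes)
        FileSystemManager fsManager = VFS.getManager();
        FileObject file = fsManager.resolveFile("sftp://user:secret@example.com/data/report.csv");
        System.out.println("VFS-reported size: " + file.getContent().getSize());

        // JSch directly: stat() follows symlinks, lstat() does not
        JSch jsch = new JSch();
        Session session = jsch.getSession("user", "example.com", 22);
        session.setPassword("secret");
        session.setConfig("StrictHostKeyChecking", "no");
        session.connect();
        ChannelSftp sftp = (ChannelSftp) session.openChannel("sftp");
        sftp.connect();
        SftpATTRS attrs = sftp.stat("/data/report.csv"); // size of the target, not the link
        System.out.println("Target size: " + attrs.getSize());
        sftp.disconnect();
        session.disconnect();
    }
}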
I need to be able to pass some parameters to my virtual machine during its boot-up so it sets itself up properly. To do that, I either have to bake the info into the image or somehow pass it as parameters to my qemu-kvm command. These parameters are just a few values; if it were VMware, we would pass them as OVA params, and when the VM launches we would query the OVA environment to get them. Launching from qemu-kvm, I have no such option. I did some homework and found that I could use the virtio-9p driver for sharing files between host and guest. Unfortunately, RHEL/CentOS has decided not to support 9p.
With no option of rebuilding my RHEL kernel with the 9p options enabled, how do I solve this problem? Either solution would work: pass/share some kind of JSON file to the VM (pre-populated on the host), which it will read to do its setup, or set some kind of "environment variables" which I can query from within the VM to get these params and continue with setup. Any pointers would help.
If your version of QEMU supports it, you could use its -fw_cfg option to pass information to the guest. If that guest is running a Linux kernel with CONFIG_FW_CFG_SYSFS enabled, you will be able to read out the information from sysfs. An example:
If you launch your VM like so:
qemu-system-x86_64 <OPTIONS> -fw_cfg name=opt/com.example.test,string=qwerty
From inside the guest, you can then get the value back from sysfs:
cat /sys/firmware/qemu_fw_cfg/by_name/opt/com.example.test/raw
There appears to be some driver for Windows as well, but I've never used it.
When you boot your guest with -kernel and -initrd you should be able to pass environment variables with -append.
The downside is that you have to keep track of your current kernel and initrd outside of your disk image.
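A sketch of what that could look like (kernel/initrd paths, the disk image, and the variable name are placeholders):

qemu-system-x86_64 -kernel /boot/vmlinuz-guest -initrd /boot/initrd.img-guest -append "root=/dev/vda1 console=ttyS0 MYPARAM=qwerty" -drive file=disk.img,format=qcow2,if=virtio

Inside the guest, the appended string shows up on the kernel command line, so a setup script can simply parse it:

cat /proc/cmdline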
Other possibilities could be a small prepared disk image (as you said) or via network/dhcp or a serial link into your guest or ... this really depends on your environment.
I was just searching to see if this situation had improved and came across this question. Apparently it has not improved.
What I do is output my variable data to a temp file (e.g. /tmp/xxFoo). Usually I write text or a tar archive straight to that file, then pad/truncate it to a minimum size that is a 512-byte multiple, like 64K; otherwise the disk controller won't configure it. Then the VM starts with that file attached as a raw drive. After the VM has started, the temp file is deleted. From within the guest you can read/cat the raw block device to get the variable data (on BSD, use the c partition as the raw drive).
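A rough sketch of that flow with QEMU (the file name, size, and device names are illustrative):

# On the host: write the data, pad to a 512-byte multiple, attach as a raw drive
tar cf /tmp/xxFoo params.json
truncate -s 64K /tmp/xxFoo        # assumes the archive is smaller than 64K
qemu-system-x86_64 -drive file=/tmp/xxFoo,format=raw,if=virtio <OPTIONS> &
rm /tmp/xxFoo                     # QEMU keeps its open file descriptor

# Inside a Linux guest, the extra disk typically appears as an additional virtio device:
tar xf /dev/vdb                   # or just: cat /dev/vdb, if you wrote plain text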
In Windows guests it's tricky to get to the data. In theory you can read \\.\PhysicalDriveN, but I have never been able to get that to work. Cygwin can do it, and there it works like on Linux. The other option is to make your temp file a partitioned and formatted image, but that's a pain to create and update.
As far as sharing a folder goes, I use Samba, which works with just about anything. I usually run several instances of smbd with different configurations.
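For what it's worth, a minimal smb.conf share for this kind of host-to-guest exchange might look like the following (the share name and path are made up):

[vmshare]
    path = /srv/vmshare
    read only = no
    guest ok = yes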
One option is to create an ISO file and pass it as a parameter. This works with both Windows and Ubuntu hosts and both Windows and Ubuntu guests. You can then read the mounted CD-ROM inside the guest OS (one way to build the ISO itself is sketched after the mount commands below).
>>qemu-system-x86_64 -drive file=c:/qemuiso/winlive1.qcow2,format=qcow2 -m 8G -drive file=c:\qemuiso\sample.iso,index=1,media=cdrom
On a Linux guest, mount the CD-ROM (Ubuntu):
>>blkid                            # check that the media is there
>>sudo mkdir /mnt/cdrom
>>sudo mount /dev/sr0 /mnt/cdrom   # this step can also be put in crontab
>>cd /mnt/cdrom
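To build the ISO on the host in the first place, something like genisoimage (mkisofs on some systems) should work; the input directory and output name here are just examples:

>>genisoimage -o sample.iso -J -r ./vm-params/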
I am trying to upload *.ttl files to a Virtuoso server using iSQL and the function ld_dir('path','file','graph').
If I run everything locally it works fine: I add the path to DirsAllowed in the virtuoso.ini config file, then from isql I run ld_dir(,,) and rdf_loader_run(), and the file is loaded.
I would like to do the same thing but from a remote computer. How should I configure the virtuoso.ini to allow paths from a remote computer?
Thanks, and sorry for the cross-posting.
Current versions of Virtuoso do not support loading data files from remote locations. The data file must be accessible through Virtuoso's local filesystem, though it may be a remote mount (e.g., NFS), and the containing directory must be included in Virtuoso's DirsAllowed setting. Loading from remote locations is planned for a future version.
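For reference, the usual flow once the .ttl files are reachable on the server's own filesystem (locally or via a mount) looks roughly like this; the directory, graph IRI, and port are illustrative:

In virtuoso.ini, under [Parameters]:
DirsAllowed = ., ../vad, /data/ttl

Then from isql (which itself can connect from a remote machine, e.g. isql your-server:1111 dba <password>; it is only the data files that must be visible to the server):
SQL> ld_dir('/data/ttl', '*.ttl', 'http://example.org/graph');
SQL> rdf_loader_run();
SQL> checkpoint;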
You'll generally get faster Virtuoso-specific answers by asking on the Virtuoso Users mailing list or the Virtuoso support forums.
If I open a file in my C/C++/Java code using a pathname that points into an NFS-mounted directory, how do the read and write calls work, given that NFS is stateless and all? I have tried but can't find any example code that accesses NFS-mounted files. My current understanding is that it is the job of the NFS client to keep state (like the read and write offsets) and that the application uses the same syntax.
A related question is regarding VFS and UFS. Are all files on a current Unix machine accessed through their vnodes first and then (depending on local vs. remote) through inode or rnode structures?
NFS (short of file locking) is no different than local storage to user-level applications. It might be slower, or it might drop out unexpectedly, but that can happen to local storage too. That's probably why you can't find specific NFS-centric example code.
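To make that concrete, here is a minimal Java sketch; /mnt/nfs is just an assumed NFS mount point, and the same calls work unchanged for a local path:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ReadAnywhere {
    public static void main(String[] args) throws IOException {
        Path local = Paths.get("/tmp/example.txt");       // ordinary local file
        Path remote = Paths.get("/mnt/nfs/example.txt");  // file on an NFS mount (assumed path)

        // Identical API for both; the kernel's NFS client keeps the protocol state
        // (file handles, offsets, caching) behind the normal read path.
        System.out.println(new String(Files.readAllBytes(local)));
        System.out.println(new String(Files.readAllBytes(remote)));
    }
}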