I attached a disk to a VM using the Acropolis command
acli vm.disk_create Vm clone_from_nfs_file=filepath.raw bus=scsi
wrote data to the disk using dd, and then detached the disk.
If I attach the disk again, I am unable to see the data that was written to it.
Please help me solve this problem.
When you create a vmdisk in this way, you're creating a copy-on-write clone of the original file. All writes go to the clone, not the original file. If you want to access the cloned file, it is located on NFS here:
/$container_name/.acropolis/vmdisk/$vmdisk_uuid
You can determine the container name and vmdisk UUID by looking at the VM descriptor using the vm.get command.
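If you want the data you wrote to be visible on the next attach, point the new disk at the clone rather than the original file. A rough sketch of that, where the container name and vmdisk UUID are placeholders for values you would read from the vm.get output (exact field names vary by AOS version):
acli vm.get Vm
# note the container and vmdisk_uuid reported for the detached disk, then attach the clone
acli vm.disk_create Vm clone_from_nfs_file=/my-container/.acropolis/vmdisk/0005a1b2-c3d4-e5f6-7890-abcdef123456 bus=scsi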
The multiple rdb files come from different Redis servers. Now I want to combine the data files into a single Redis server. So far I have only found answers about recovering from a single dump.rdb file.
The simplest way to do this is by using DEBUG RELOAD, an undocumented command.
DEBUG RELOAD [MERGE] [NOFLUSH] [NOSAVE]
Save the RDB on disk and reload it back into memory. By default it will save the RDB file and load it back.
With the NOFLUSH option the current database is not removed before loading the new one, but conflicts in keys will kill the server with an exception. When MERGE is used, conflicting keys will be loaded (the key in the loaded RDB file will win). When NOSAVE is used, the server will not save the current dataset to the RDB file before loading.
Use DEBUG RELOAD NOSAVE when you just want to load the RDB file you placed in the Redis working directory in order to replace the current dataset in memory. Use DEBUG RELOAD NOSAVE NOFLUSH MERGE when you want to add what is in the RDB file placed in the Redis working directory to the current memory content. Use DEBUG RELOAD when you want to verify that Redis is able to persist the current dataset in the RDB file, flush the memory content, and load it back.
The above is taken from debug.c, with friendlier formatting.
So, use DEBUG RELOAD NOSAVE NOFLUSH if you want to be sure there are no duplicate keys across the different RDBs (a conflict will kill the server). Use DEBUG RELOAD NOSAVE NOFLUSH MERGE if you know you have duplicates, and load last the RDB whose keys you want to prevail.
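As a concrete sketch of that merge workflow with redis-cli (the file names and paths below are placeholders, and since the DEBUG RELOAD options are undocumented they may vary between Redis versions):
# find the working directory and RDB file name the server uses
redis-cli CONFIG GET dir
redis-cli CONFIG GET dbfilename
# load the first dump, replacing whatever is currently in memory
cp server1.rdb /var/lib/redis/dump.rdb
redis-cli DEBUG RELOAD NOSAVE
# merge each additional dump into the current memory content
cp server2.rdb /var/lib/redis/dump.rdb
redis-cli DEBUG RELOAD NOSAVE NOFLUSH MERGE
# persist the combined dataset to a single dump.rdb
redis-cli SAVE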
Our system is currently backing up tplogs to S3. From what I have read, simply making sure these files are in the place that kdb expects them will allow for recovery if there is an issue with RDB during the day.
However, I did not see an explanation of how to use the tplogs to recover the HDB. I am tempted to create another backup system to sync the HDB folders to S3 as well. That would be more work to set up, would use at least double the storage, and seems redundant. So if it's not necessary, I would like to avoid that extra step.
Is there a way to recover the HDB from the tplogs in the event that we lose access to our HDB folders, or do I need to add another backup system for the HDB folders? Thanks.
To replay a log file into the HDB:
.Q.hdpf[`::;get `:tpLogFile;.z.d;`sym]
In my experience, if you are building an HDB from a TP log file, it is efficient to load the log file using the get function and save it down with .Q.dpft.
If you want to use the -11! function, you have to provide an upd function (-11! reads each row from the TP log file and calls upd, which inserts the data into an in-memory table) to load the data into memory, and then save the data to disk.
In both cases you have to load the data into memory, but by using the get function you can skip the upd function calls.
The -11! function is efficient for rebuilding the RDB because it does not load the full log file into memory at once.
For more details, see http://www.firstderivatives.com/downloads/q_for_Gods_July_2014.pdf
OK, actually found a forum answer to a similar question, with a script for replaying log files.
https://groups.google.com/forum/#!topic/personal-kdbplus/E9OkvJKGrLI
Jonny Press says:
The usual way of doing it is to use -11! to replay the log file. A basic script would be something like
// load schema
\l schema.q
// define upd
upd:insert
// replay log file
-11!`:schema2015.07.09
// save
.Q.hdpf[`::;`:hdb;2015.07.09;`sym]
This will read the full log file into memory, so you will need to have enough RAM available.
TorQ has a TP log replay script:
https://github.com/AquaQAnalytics/TorQ/blob/master/code/processes/tickerlogreplay.q
The sequence of events that I'm trying to make happen in Meteor is:
On the client browser, upload a zip file and send it to the server
On the server, receive the zip file and hold it in a memory object
Unzip the memory object into individual objects representing the contents
Process the individual files one at a time
Return success/failure status to the client
I have steps 1 and 2 working, using EJSON to stringify the contents of the zip file on the client and again to convert it back to its original form on the server. The problem I'm encountering is when I try to unzip the object on the server. It seems that every unzip library available wants to operate directly on a file or stream, not on a memory object.
I suppose I could write the object to disk and read it back again, but that seems like an unnecessary step. Is there a library available to unzip a memory object? Alternatively, is there a way to create a stream directly from the object that I can then feed to the unzip routine?
Any advice would be greatly appreciated.
You could use the unzip module from npm. It accepts streaming input and allows you to process output without saving to disk.
It will take some work to wrap it to work with Meteor. Your two options are the meteorhacks:npm package or upgrading to the Meteor 1.3 beta.
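As a rough sketch of the install step (the package version shown is only illustrative):
# Option 1: pre-1.3, via the meteorhacks:npm wrapper
meteor add meteorhacks:npm
# then list the dependency in packages.json at the project root, e.g. { "unzip": "0.1.11" }

# Option 2: Meteor 1.3+, which can use npm modules directly
meteor npm install --save unzip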
I created an encrypted disk image on OS X Mountain Lion 10.8 (using Disk Utility or the hdiutil command). I want to read a file from that disk image, but I don't want to mount it, because while it is mounted another app could read it before I unmount it. Please help me. (The hdiutil command is documented here: http://developer.apple.com/library/mac/#documentation/Darwin/Reference/ManPages/man1/hdiutil.1.htm)
To do this you would have to read and decrypt the dmg file yourself and then interpret the HFS file system inside the disk image to get at your file. It's not easy but certainly possible. Take a look at the HFSExplorer source code.
But I wouldn't put too much energy into this. Either use a different file format that is easier to read to store your encrypted data, or go with pajp's solution. And remember, no matter what you do, once you decrypt your file the user will be able to get to the decrypted data. You can make this harder, but you can't prevent it.
I think the only reasonable way would be to mount the disk image. To do it securely, you can use the -mountrandom and -nobrowse options to hdiutil attach. This will mount the disk image in a randomized path name, and prevent it from being visible in the UI.
hdiutil attach -mountrandom /tmp -nobrowse /tmp/secret_image.dmg
Assuming the disk image has exactly one HFS partition, you can parse out the randomized mount path like this:
hdiutil attach -mountrandom /tmp -nobrowse /tmp/secret.dmg | awk '$2 == "Apple_HFS" { print $3 }'
Or you can use the -plist option to get the output in plist XML format, which can be parsed using XML tools or converted to JSON using plutil -convert json.
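For example, a sketch of the plist route (the file names are placeholders, and the exact plist keys may differ between OS X versions):
hdiutil attach -mountrandom /tmp -nobrowse -plist /tmp/secret.dmg > attach.plist
plutil -convert json -o attach.json attach.plist
# the mount point(s) should appear in the "system-entities" array under "mount-point"
hdiutil detach "$mount_point"   # detach when finished, using the path you extracted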
Of course, an attacker that has root access can still monitor for new mounts and intercept your disk image before you have the chance to unmount it, but if your attacker has root then pretty much all bets are off.
Given a snapshot of an existing redis database in a dump.rdb (or in .json format) file, I want to restore this data in my own machine to run some tests on it.
Any pointers on how to do this would be greatly appreciated.
I have resorted to trying to parse the data in the dump.rdb and then save it in a redis DB manually. I feel like there is/should be a cleaner way.
If you want to restore the entire file, simply copy it to the directory specified in redis.conf and restart the Redis server. But if you want to load only a subset of keys or databases, you'd have to parse the dump file.
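A minimal restore sketch, assuming a typical Linux install (the service name and data directory are assumptions; check redis.conf for the real dir and dbfilename):
sudo systemctl stop redis                          # service name varies by install
sudo cp backup/dump.rdb /var/lib/redis/dump.rdb    # copy into the directory from the "dir" setting
sudo systemctl start redis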
So:
I continued doing it the "hacky" way and found that the parser code found here was a great help:
https://github.com/sripathikrishnan/redis-rdb-tools
Using the parser sample code I could:
1) set up a Redis client
2) use the parser to parse the data
3) use the client to "set" the parsed data into a new Redis database
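If you'd rather not write the parsing code yourself, the same project also ships an rdb command-line tool that can emit the Redis wire protocol, which you can pipe into a running server. A sketch (flag names may differ between versions of the tool):
pip install rdbtools python-lzf
# convert the snapshot to the Redis protocol and pipe it into a live server
rdb --command protocol /path/to/dump.rdb | redis-cli --pipe
# the tool can also filter what gets restored, e.g. by database or key pattern
rdb --command protocol --db 0 --key "user:*" /path/to/dump.rdb | redis-cli --pipe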
The rdd tool can also do that.
It works independently of .rdb files and can dump/restore working Redis instances.
It can apply merge, split, rename, search, filter, insert, and delete operations on dumps and/or Redis instances.