I am designing a remote CD/DVD burner to address hardware constraints on my machine.
My design works like this (analogous to a network printer):
A Unix-based machine acts as the server and hosts the burner.
A Windows-based machine acts as the client.
The client prepares the data to burn and transfers it to the server.
The server burns the data to CD/DVD.
My question is: what is the best protocol for transferring the data over the network (keeping the same directory hierarchy) between different operating systems?
I would think some kind of archive format would be best. The *nix .tar archive format works well for most things. However, since you are burning CD/DVD discs, the disc's native .iso format may be a good choice.
You'll likely need to transfer the entire archive before burning, to prevent buffer-underrun issues.
Edit:
You can use mkisofs to create the .iso file from a folder, or your CD-burning software may be able to output an .iso file.
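For example, a rough command-line sketch of how this could look (hostnames, paths, and the burner device are placeholders; this assumes ssh/scp plus mkisofs and wodim or growisofs are available on the respective machines):

    # transfer the tree from the client to the server, preserving the hierarchy
    # (e.g. from Cygwin on the Windows client)
    tar -czf - myproject | ssh user@burner-host 'tar -xzf - -C /var/spool/burn'

    # on the Unix server: build an ISO that keeps long file names on both platforms
    mkisofs -J -R -o /tmp/myproject.iso /var/spool/burn/myproject

    # burn the finished image
    wodim dev=/dev/sr0 -v /tmp/myproject.iso                 # CD
    # growisofs -dvd-compat -Z /dev/sr0=/tmp/myproject.iso   # DVD

Since the ISO is built on the server from the transferred tree, the transfer itself can be any reliable file copy (scp, rsync, SMB); the hierarchy is preserved by tar and by the ISO's Joliet/Rock Ridge extensions.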
I have a service that stores its data in LevelDB. To make backups, I simply zip the entire data folder that LevelDB writes to and upload it to S3.
If I need to restore, I just unzip the data and copy it back into the data folder, and it seems to work great. I can also do this across Mac OS X and Linux machines.
If I am running the same version of LevelDB on all machines, is there anything wrong with this approach?
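For reference, the backup and restore steps are essentially just this (the bucket name and paths are made up):

    # backup: zip the LevelDB data folder and upload it to S3
    cd /var/lib/myservice && zip -r /tmp/leveldb-backup.zip data
    aws s3 cp /tmp/leveldb-backup.zip s3://my-backup-bucket/leveldb-backup.zip

    # restore: download the archive and unzip it back into the data folder
    aws s3 cp s3://my-backup-bucket/leveldb-backup.zip /tmp/
    cd /var/lib/myservice && unzip /tmp/leveldb-backup.zip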
I work with a number of different specialized, pre-configured OS environments, but I generally only use one at a time. I have a laptop with plenty of processing power, but storage is always an issue. It would also be good to have a running backup of each environment so I can work from other hardware.
What would be ideal is a VM library server of some kind that maintains canonical copies of each environment, from which I could download local execution copies to my laptop to work with, and then stream my changes back to the server image as I work.
In my research it seems that a number of virtual machine vendors used to offer services like this (Citrix Player, VMware Mirage), but they have all been EOL'd.
Is there a way to set something like this up today? I'd love a FOSS solution based on KVM, but I'd be willing to take a free proprietary solution.
I am trying to set up a new middleware architecture using SAP PI/PO. The problem is determining the right mechanism for pulling files from other servers (Linux/Windows, etc.).
Broadly, two different approaches are under review: using a managed file transfer (MFT) tool like Dazel, versus using NFS mounts. In the NFS approach, all the boundary application machines act as servers and the middleware machine is the client (a rough mount sketch follows the pros and cons below). In the MFT approach, an agent is installed on the boundary servers and pushes files to the middleware. We are trying to determine the advantages and disadvantages of each approach.
NFS advantages:
Ease of development; no need for an additional managed file transfer tool.
NFS disadvantages:
We are trying to understand whether this approach creates tight coupling between the middleware and the boundary applications.
How easy will it be to maintain 50+ NFS mount points?
How does NFS behave if a boundary machine goes down or hangs?
We want to build resilient middleware that is not impacted by an issue at a single boundary application.
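For concreteness, the NFS option on our side would look roughly like this (hostnames and paths are placeholders):

    # on a boundary application server (NFS server), in /etc/exports:
    /export/outbound    pi-middleware-host(ro,sync,no_subtree_check)
    # then reload the export table with: exportfs -ra

    # on the middleware (PI/PO) host, acting as NFS client:
    mount -t nfs boundary-app01:/export/outbound /mnt/boundary-app01
    # ...plus a similar /etc/fstab entry per boundary system, i.e. 50+ of them to maintain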
My 2 cents on NFS, based on my non-administrator experience (I'm a developer and responsible for the PI system).
We had NFS mounts on AIX which were based on Samba.
Basis told us that Samba could expose additional security risks.
We had problems getting the users on Windows and AIX straight, resulting in non-working mounts (probably our own inability to manage the users correctly, nothing inherent to the system).
From an integration point of view, I haven't had problems with tight coupling. It could be that I was just a lucky sod, but normally PI polls the respective mounts; if a mount is in error when the polling happens, that's just one missed poll, and it will be retried at the next poll interval.
One feature an MFT will undoubtedly give you that NFS can't is an edge file platform that third parties can push files to (SFTP, FTPS).
Bottom line would be:
You can manage with NFS when external-facing file services are not needed.
You need an organisational set of rules to keep track of which users use which shares, etc.
You might want to look into the security implications of enabling such mounts (if things like Samba are involved).
Is there any way to expose a local partition or disk image over your computer's USB port to another computer, so that it appears as an external drive on a Mac/Linux/BSD system?
I'm trying to play with something like kernel development, and I need one system for compiling and another for rebooting/testing.
With USB: not a chance. USB is host-driven and asymmetric, and a typical PC's USB controller can only act as a host, so your development system has no way of emulating a mass storage device, or any other kind of USB device.
With FireWire: theoretically. (This is what Apple's Target Disk Mode uses.) However, I can't find a readily available solution for that.
I'd advise you to try either virtualization or network boot. VirtualBox is free and open-source software, and has a variety of command-line options, which means it can be scripted. Network boot takes a little effort to set up, but can work really well.
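For example, scripting a throwaway test VM with VBoxManage could look roughly like this (the VM name, memory size, and ISO path are placeholders):

    # create and register a scratch VM that boots your freshly built test ISO
    VBoxManage createvm --name kernel-test --register
    VBoxManage modifyvm kernel-test --memory 512 --boot1 dvd
    VBoxManage storagectl kernel-test --name IDE --add ide
    VBoxManage storageattach kernel-test --storagectl IDE --port 0 --device 0 \
        --type dvddrive --medium /path/to/test.iso
    VBoxManage startvm kernel-test --type headless

On each rebuild you can power the VM off, re-attach the new ISO, and start it again, all from a script.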
Yet another option is to use a minimal Linux distribution as a bootstrap that sets up the environment you want and then uses kexec to launch your kernel, possibly with GRUB as an intermediate step.
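A minimal kexec cycle might look like this, assuming kexec-tools is installed and you have a freshly built bzImage (paths and kernel arguments are placeholders):

    # load the freshly built kernel (and an initrd, if needed) into memory
    kexec -l arch/x86/boot/bzImage \
        --initrd=/boot/initrd.img-test \
        --append="root=/dev/sda1 console=ttyS0"
    # jump straight into it, skipping the firmware and bootloader
    kexec -e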
What kind of kernel are you fiddling with? If it's your own code, will the kernel operate in real or protected mode? Do you strictly need disk access, or do you just want to boot the actual kernel?
I am looking for a backup tool to back up virtual OS instances running under Microsoft Virtual Server 2005 R2. According to the MS docs, it should be possible to do this live through the Volume Shadow Copy Service, but I am having trouble finding any tool that does it.
What is the best solution for backing up MS Virtual Server instances?
I'm personally fond of using ImageX to capture the VHD to a WIM file. (This is called file-based imaging, as opposed to sector-based imaging.) WIM is a compressed, file-based image format that preserves NTFS metadata. It also has a single-instance store, which means that files appearing multiple times are only stored once. The compression is superb, and the filesystem is restored with ACLs and reparse points perfectly intact.
You can store multiple VHDs, and multiple versions of those VHDs, in a WIM, which means you can back up incremental versions of your VHD and it'll just add a small delta to the end of the WIM each time.
As for live images, you can script vshadow.exe to make a copy of your virtual machine before backing it up.
You can capture the image to WIM format in one of two ways (rough commands follow the list):
Boot the virtual machine you want to capture into Windows PE under Virtual Server, then run ImageX with the /CAPTURE flag and save the WIM to a network drive.
Use a tool like VHDMount to mount the virtual machine as a local drive and then capture with ImageX. (In my experience VHDMount is flaky and I would recommend SmartVDK for this task. VHDMount is better for formatting disks and partitioning.)
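Roughly, the host-side variant (the second option) could look like this; drive letters, the share path, and image names are placeholders, and the exact flags may vary with your WAIK version:

    rem 1) snapshot the volume that holds the running VM's VHDs (scripted via vshadow)
    vshadow.exe -p E:

    rem 2) mount the VHD as a local drive (VHDMount or SmartVDK), so it shows up as, say, F:

    rem 3) capture the mounted volume into a WIM on a network share
    imagex /capture F: \\backupserver\wims\myvm.wim "myvm - full"

    rem 4) later runs append incremental versions to the same WIM
    imagex /append F: \\backupserver\wims\myvm.wim "myvm - second version"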
This only skims the surface of this approach. I've been meaning to write up a more detailed tutorial covering the nuances of all of this.
http://technet.microsoft.com/en-us/library/cc720377.aspx
http://support.microsoft.com/kb/867590
There appear to be a number of ways you can do this.
I'm using BackupChain for Virtual Server 2005 as well as VMware. It creates delta incremental files which only contain file changes, and it takes snapshots while the VMs are running. This way we save a lot of storage space and bandwidth, because it sends the backups via FTP to another server.