I would like to merge my G drive with my C drive, as I need more space in the primary boot partition (C drive). But the C and G drives are not physically adjacent on my computer; I have D and E drives in between. I have already emptied the G drive; I chose it to merge because it was the easiest to free up, being the smallest. I am using Windows 8. Kindly help me merge these two drives without affecting my C drive.
Usually the partitions must be adjacent to be merged with Windows tools.
There are other tools that allow this, e.g. Partition Wizard.
Partition Wizard offers a free Home Edition, but this edition won't allow merging non-adjacent partitions. Compare the features of the different editions.
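Before reaching for third-party tools, it may help to confirm the current layout. Here is a minimal PowerShell sketch (assuming the partitions are all on disk 0, which may not match your machine) that lists them in physical order so you can see exactly what sits between the C: and G: partitions:

    # List partitions in on-disk order; adjacent rows are physically adjacent.
    Get-Partition -DiskNumber 0 |
        Sort-Object Offset |
        Format-Table PartitionNumber, DriveLetter,
            @{ n = 'OffsetGB'; e = { [math]::Round($_.Offset / 1GB, 1) } },
            @{ n = 'SizeGB';   e = { [math]::Round($_.Size / 1GB, 1) } }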
I am trying to run a SELECT query in SSMS. The output runs to 10 million+ records, but as soon as about half of the records have been fetched, I get a System.OutOfMemoryException. I happened to check Windows Explorer and found that the C drive is out of space (only a few KB left). Also, as soon as I close the SQL window, the free space on the C drive returns to normal. Now, I understand that fetched data is first put into RAM, but I want to know why the C drive gets filled. I am looking for specific details.
Look on the C drive for a file called pagefile.sys (C:\pagefile.sys - modify the folder options to show hidden system files). When SSMS buffers that many rows, physical RAM fills up and Windows starts paging memory out to this file, which is what eats the disk space. Track the size of the file while you run your query to see if it fills the disk.
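A minimal sketch for watching that happen, assuming the page file is in its default C:\pagefile.sys location (run it in PowerShell while the query executes, Ctrl+C to stop):

    # Poll the page file size every 5 seconds; -Force is needed to see hidden system files.
    while ($true) {
        $pf = Get-Item C:\pagefile.sys -Force
        '{0:u}  pagefile.sys = {1:N0} MB' -f (Get-Date), ($pf.Length / 1MB)
        Start-Sleep -Seconds 5
    }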
I have a Windows Azure virtual machine with two drives, C and D. When I open the properties of the D drive it shows 4 GB of used space, but when I browse the D drive it appears empty; there are no files or data at all. I have also set up SQL Server 2012 on the D drive.
How can I recover my data?
When you create a VM in Azure, by default you will see two drives in Windows Explorer: C and D. By default the D: drive will be named "Temporary Storage (D:)". That label isn't a very clear warning, and I'm sorry to say you have lost whatever data you had stored there. The D: drive that is created by default in an Azure VM should be considered volatile storage, as its contents are not persisted in Azure Storage the way the C: drive's are. In other words, only store things on D: temporarily, as they can be deleted at any time.
Also, see this related SO question.
Firstly, apologies if this question seems like a wall of text; I can't think of a better way to format it.
I have a machine with valuable data on it (circa 1995). The machine is running Unix (SCO OpenServer 6) with some sort of database stored on it.
The data is normally accessed via a software package whose license has expired; the developers are no longer trading.
The software package connects to the machine via telnet to retrieve and modify data (the telnet connection no longer functions because the license was changed).
I can access the machine via an ODBC driver (SeaODBC.dll) over the network, and this was how I was planning to extract the data. But so far I have retrieved 300,000 rows in just over 24 hours; I estimate there will be around 50,000,000 rows in total, so at the current speed it will take 6 months!
I need either a quicker way to extract the data from the machine via ODBC or a way to extract the entire DB locally on the machine to an external drive/network drive or other external source.
I've played around with the Unix interface and the only large files I can find are in a massive matrix of single-character folders (e.g. A\G\data.dat, A\H\Data.dat, etc.).
Does anyone know how to find out which DB system is installed on the machine? Hopefully it is a standard one, and I'll be able to find a way to export everything into a nicely formatted file.
Edit
Digging around the file system, I have found a folder under root > L which contains lots of single-lettered folders; each single-lettered folder contains more single-lettered folders.
There are also files named after the table I need (e.g. "ooi.r") which have the following format:
    <Id>
    []
    l for ooi_lno, lc for ooi_lcno, s for ooi_invno, id for ooi_indate
    require l="AB"
    require ls="SO"
    require id=25/04/1998
    {<id>} is s
    sort increasing Id
I do not recognize those kinds of filenames, A\G\data.dat and so on (filenames with backslashes in them???), and it's likely a proprietary format, so I wouldn't expect much from that avenue. You can try running the file command on them to check whether they are in any recognized format, just to see...
I would suggest improving the speed of data extraction over ODBC by virtualizing the system. A modern computer will have faster memory, faster disks, and a faster CPU and may be able to extract the data a lot more quickly. You will have to extract a disk image from the old system in order to virtualize it, but hopefully a single sequential pass at reading everything off its disk won't be too slow.
I don't know what the architecture of this system is, but I guess it is x86, which means it might not be too hard to virtualize (depending on how well the SCO OpenServer 6 OS agrees with the virtualization). You will have to use a hypervisor that supports full virtualization (not paravirtualization).
I finally solved the problem: running the query through another tool (not MS Access or MS Excel) was massively faster. I ended up using DaFT (Database Fishing Tool) to SELECT INTO a text file, which processed all 50 million rows in a few hours.
It seems the ODBC DLL I was using just doesn't work well with MS products.
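For anyone who hits the same wall, the same idea can be sketched in PowerShell: stream rows over ODBC straight to a text file so that nothing buffers the whole result set. The DSN name, table name, and output path below are placeholders, not values from the actual system:

    # Stream a large table over ODBC to a tab-delimited text file, one row at a time.
    $conn = New-Object System.Data.Odbc.OdbcConnection 'DSN=SCOServer'   # hypothetical DSN
    $conn.Open()
    $cmd = $conn.CreateCommand()
    $cmd.CommandText = 'SELECT * FROM ooi'                               # hypothetical table
    $reader = $cmd.ExecuteReader()
    $out = New-Object System.IO.StreamWriter 'C:\export\ooi.txt'
    try {
        while ($reader.Read()) {
            $fields = for ($i = 0; $i -lt $reader.FieldCount; $i++) { $reader.GetValue($i) }
            $out.WriteLine($fields -join "`t")
        }
    }
    finally {
        $out.Close()
        $reader.Close()
        $conn.Close()
    }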
I need to dual-boot 2 different task sequences (Win7 images) for different PC types which require different drivers. We have 2 images, one for staff and one for students, each of which can be added to a particular task sequence.
I need to create a portable solution for cloning without the network, using 2 different SCCM (System Center Configuration Manager) task sequences. At the moment I go through the usual steps of creating boot media via the Configuration Manager, but there seems to be no way to create a script that changes the task media on the fly so you can select which OS image to deploy.
I was looking at a possible solution using YUMI (a USB boot tool), but each bootable image requires an ISO, and the task sequence image is around 8 GB.
We use SCCM 2007. (Still waiting for a budget to upgrade to 2012 :) )
It sounds like you want to boot two different .WIM images.
Out of the box, I haven't found any tool from MS that will allow this. I have gotten around this limitation by renaming the .WIM I want to use to BOOT.WIM in the \SOURCES directory.
That is the name of the .WIM that gets used by all the default settings. You have to rename the file before you attempt to boot from the USB device, but it doesn't take long and could be scripted without much effort.
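A hypothetical sketch of that swap in PowerShell (the drive letter and image names are my assumptions, not from the setup described above):

    # Park the image currently in place, then promote the one you want to boot.
    Set-Location E:\SOURCES
    Rename-Item BOOT.WIM STUDENT.WIM
    Rename-Item STAFF.WIM BOOT.WIM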
Theoretically, it should be possible to configure the BCD on the USB device (\EFI\MICROSOFT\BOOT\BCD or \BOOT\BCD, depending on how the computer is configured to boot) so that you could choose which .WIM to use at boot time without the need for any messy renaming. I haven't gotten this to work yet (mostly due to lack of time/urgency), but I did write down what I had done so far. I found some useful information about booting to .WIMs from windowsitpro.com.
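For what it's worth, the documented shape of a BCD entry that boots a .WIM as a ramdisk looks roughly like the following. Treat it as an untested sketch: E: and the image name are assumptions, and {GUID} stands for the identifier the /create step prints back.

    rem Describe the ramdisk SDI once per BCD store.
    bcdedit /store E:\boot\BCD /create {ramdiskoptions} /d "Ramdisk options"
    bcdedit /store E:\boot\BCD /set {ramdiskoptions} ramdisksdidevice partition=E:
    bcdedit /store E:\boot\BCD /set {ramdiskoptions} ramdisksdipath \boot\boot.sdi
    rem Create one osloader entry per image; reuse the printed {GUID} below.
    bcdedit /store E:\boot\BCD /create /d "Staff image" /application osloader
    bcdedit /store E:\boot\BCD /set {GUID} device ramdisk=[E:]\sources\staff.wim,{ramdiskoptions}
    bcdedit /store E:\boot\BCD /set {GUID} osdevice ramdisk=[E:]\sources\staff.wim,{ramdiskoptions}
    bcdedit /store E:\boot\BCD /set {GUID} path \windows\system32\boot\winload.exe
    bcdedit /store E:\boot\BCD /set {GUID} systemroot \windows
    bcdedit /store E:\boot\BCD /set {GUID} detecthal Yes
    bcdedit /store E:\boot\BCD /set {GUID} winpe Yes
    bcdedit /store E:\boot\BCD /displayorder {GUID} /addlast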
I have thousands of small CSV files I want to aggregate (with a little munging in-script first). They are on a NAS device, a "SNAP" server to be more exact.

I've had some success with VBA from Excel - doing about 700 files in about a minute, if I recall (that was a month ago). Actually, it was only half a success: the SNAP server is home to 80% PDFs and some proprietary-format files and only 20% CSVs, the loop to test for filetype took the execution time north of 2 hours, and the script apparently ignored the date filtering I put in completely. The quick 'success' was on 700 copies of the CSVs I had made and put on my C drive.

I've been doing VBA scripting for almost 20 years, and I think I'm decent at it; I've done a lot of CSV reading and writing from VBA over the last 9 years. So my question is more about your experience with SNAP servers, or NAS generally.
Can I not treat the SNAP server more or less like any drive/folder with VBA?
Would VBScript be more appropriate? (I'm already using FileSystemObject, after all.)
If I can use VBS, can I store the script on the NAS and run it using Task Scheduler?
I'd appreciate any tips or gotchas from those of you who have experience with SNAP servers!
Some thoughts on the choice of language:
VBScript is more lightweight than VBA in that it does not require MS Office to be installed. The syntax is similar, so there is no real productivity difference.
Moving forward, PowerShell is strongly recommended for Windows system admin tasks, general text file processing, etc.
Some thoughts on using the NAS server:
a) If running your script on a workstation, you should be able to use a UNC path (\\myserver\myshare) to connect to a share on the NAS, as in the sketch after this list. If not, you may need to map a drive letter to that share before your script runs.
b) If you want to run your script on the NAS itself, there are 2 things to consider: whether the NAS OS is locked down so that you cannot add your own scheduled tasks, and whether it runs Linux or some flavor of Windows. Many NAS products use embedded Linux, so running a VBA or VBScript solution directly on the NAS may not work unless it is based on something like Embedded XP and you have access to Scheduled Tasks, etc.
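As a starting point, here is a minimal PowerShell sketch of the aggregation itself; the share path, cutoff date, and output file are placeholders. Using -Filter asks the file system for *.csv entries only, which sidesteps the slow filetype-testing loop, and Where-Object handles the date filtering:

    $share  = '\\myserver\myshare'        # hypothetical NAS share
    $cutoff = (Get-Date).AddDays(-30)     # hypothetical date filter
    Get-ChildItem $share -Filter *.csv -Recurse |
        Where-Object { $_.LastWriteTime -ge $cutoff } |
        ForEach-Object { Import-Csv $_.FullName } |
        Export-Csv C:\temp\aggregate.csv -NoTypeInformation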
Hope this helps...