Windows Azure D drive shows empty - azure-storage

I have a Windows Azure virtual machine with two drives, C and D. The properties of the D drive show 4 GB of used space, but when I open the drive it appears empty; there are no files or data at all. I had also set up SQL Server 2012 on the D drive.
How can I recover my data?

When you create a VM in Azure, by default you will see two drives in Windows Explorer: C and D. By default the D: drive is named "Temporary Storage (D:)". This isn't a very clear warning, and I'm sorry to say you have lost whatever data you had stored there. The D: drive created by default in an Azure VM should be considered volatile storage, as its contents are not persisted in Azure Storage the way the C: drive's are. In other words, only store things on D: temporarily, as they can be deleted at any time.
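As a quick sanity check, Azure typically drops a warning file in the root of the temporary disk (on most Azure Windows VMs it is named DATALOSS_WARNING_README.txt; treat the exact filename as an assumption for your image). A minimal Python sketch to spot it:

```python
import os

# Warning file Azure typically places in the root of the temporary disk.
# Assumption: the exact name may vary by image/agent version.
MARKER = "DATALOSS_WARNING_README.txt"

def is_temporary_disk(root):
    """Return True if the drive root contains Azure's data-loss warning file."""
    return os.path.exists(os.path.join(root, MARKER))

# Example, on an Azure VM:
# print(is_temporary_disk("D:\\"))
```

If that file is present, treat the drive as scratch space only.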
Also, see this related SO question.

Related

Port Old Postgresql DB to New Computer

Due to a lightning strike on my house my old computer was recently fried. But I bought a new one and much to my delight, the C: SSD filesystem from that old machine was still working after I ported it to the new one, albeit now as a D: drive.
Now I'm getting ready to install PostgreSQL and would like to be able to access the old database that resides on the D: drive. I am stumped as how to proceed.
There does not seem to be any way to tell a running PostgreSQL instance, "Hey, look over there on the D: drive - that's a database you can use." There's a CREATE DATABASE and a DROP DATABASE, but no "use this existing database". I should say I was running version 14 on the old machine and could certainly install that same version again on the new one before upgrading, if there were a way to add to its catalogue.
There is no database dump/conversion utility that works without going through a running PostgreSQL server instance, so I see no way to convert the old data out of its proprietary format and reload it into a new PostgreSQL instance.
The only thought that occurs to me is to install a version as close to the old version 14 as possible, then CREATE a second database somewhere new (perhaps on the D: drive), then stop the PostgreSQL server, copy the old data over the top of the new data (with all subdirectories), then restart the server and hope for the best. That sounds like a 50-50 proposition at best.
Anyone else have any other thoughts/ideas?
So, just in case someone else has this problem and finds this question, here is what I found.
The installer for PostgreSQL prompts for which data directory to use. After making a backup copy of the data, I told it to use D:\Program Files\PostgreSQL\14\data; it recognized that this was an existing PostgreSQL data repository and preserved all my tables.
As an experiment afterward, I stopped the database, copied the backup data back into the data directory, and restarted it; everything was fine after PostgreSQL complained a little about the log file locations. I would say this can work as long as you are running the same version of PostgreSQL that last worked with the database on your old computer.
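The key to this working is that the directory really is a cluster from the same major version. PostgreSQL keeps a PG_VERSION marker file in every data directory; a small Python sketch of the check the installer effectively performs (the path in the comment is the one from the answer above):

```python
import os

def postgres_data_version(data_dir):
    """Return the PostgreSQL major version recorded in a data directory,
    or None if the directory doesn't look like a PostgreSQL cluster."""
    marker = os.path.join(data_dir, "PG_VERSION")
    if not os.path.isfile(marker):
        return None
    with open(marker) as f:
        return f.read().strip()  # e.g. "14"

# Example:
# print(postgres_data_version(r"D:\Program Files\PostgreSQL\14\data"))
```

If this returns "14", installing PostgreSQL 14 and pointing it at that directory is the safe move; a different number means you should not reuse the directory directly.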

How to backup on premise data to AWS S3 bucket using tool/service?

Let me explain a little. We keep user data in a centralized share folder (configured on a Domain Controller, with permissions set via NTFS and security groups), such as M-Marketing, S-Sales and T-Trading (these drives are mapped to users' Windows login profiles according to their work profiles).
On-premises backup is already configured. Now I want to back up some of the important drives (such as M, S and T) to AWS S3 to keep the data safe, and whenever the source data is unavailable for whatever reason, I must be able to re-map those drives according to the users' work profiles.
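One common pattern for this kind of job is a scheduled task that pushes only files changed since the last run. The selection half can be sketched in Python; the upload step itself is left as comments because it needs credentials and a bucket (the bucket name and paths below are illustrative assumptions, not part of the question):

```python
import os

def files_changed_since(root, since_epoch):
    """Walk `root` and yield paths of files modified after the given
    epoch timestamp (a simple incremental-backup selection)."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) > since_epoch:
                yield path

# For each changed file, an upload step would go here, e.g. with boto3:
#   boto3.client("s3").upload_file(path, "my-backup-bucket", key)
# or the whole share can be mirrored in bulk with the AWS CLI:
#   aws s3 sync M:\ s3://my-backup-bucket/M/
```

AWS also offers managed options (e.g. AWS DataSync or Storage Gateway) that avoid hand-rolled scripts; the sketch above just shows the mechanics.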

Running a big SELECT query in SSMS (connected to a remote server) floods the C drive's free space

I am trying to run a SELECT query in SSMS. The output runs to more than 10 million records, but as soon as about half of the records have been fetched, I get a System.OutOfMemoryException. I happened to check Windows Explorer and found that the C drive was out of space (a few KB left). Also, as soon as I close the SQL window, the free space on the C drive returns to normal. I understand that fetched data is first put into RAM, but I want to know why the C drive fills up. I am looking for specific details.
Look in the C drive for a file called pagefile.sys (C:\pagefile.sys; modify the folder options to show hidden system files). Track the size of this file while you run your query to see whether it fills the disk.
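To track it, you can poll the file's size while the query runs. A generic Python sketch (assumption: on some systems you may need elevated rights to stat pagefile.sys):

```python
import os
import time

def watch_size(path, interval=5, samples=3):
    """Print the size of `path` every `interval` seconds, `samples` times,
    and return the list of observed sizes in bytes."""
    sizes = []
    for _ in range(samples):
        size = os.path.getsize(path)
        sizes.append(size)
        print(f"{path}: {size / 1024**2:.1f} MiB")
        time.sleep(interval)
    return sizes

# On the machine running the query:
# watch_size(r"C:\pagefile.sys", interval=10, samples=30)
```

If the reported size climbs in step with the query's progress, the page file is what is eating the C drive; limiting the result set (e.g. TOP n) or writing results to a file avoids pulling everything into the SSMS grid.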

How to merge a drive with primary boot partition in windows 8?

I would like to merge my G drive with my C drive, as I need more space in the primary boot partition (the C drive). But the C and G drives are not physically adjacent on my computer; I have D and E drives in between. I have already emptied the G drive; I chose it to merge because it was easy to free up, given its smaller size. I am using Windows 8. Kindly help me merge these two drives without affecting my C drive.
Usually partitions must be adjacent to be merged with the Windows built-in tools.
There are other tools that allow this, e.g. Partition Wizard.
Partition Wizard offers a free Home Edition, but this edition won't allow merging non-adjacent partitions; compare the features of the different editions.

Are any changes required to the connection string when the database is moved to a different drive?

I'm planning on moving a database from the C: drive to the E: drive because the database is growing and the C: drive does not have enough capacity to handle it.
I wonder if I need to change anything in the connection string in web.config in order to access the database.
The database still has the same name and is still on the same server; it has only moved to a different drive.
thanks,
aein
There are situations where the path to the database file is important (See the AttachDBFilename parameter here). However, if you don't currently have any file path information in your connection string you shouldn't need to make any changes.
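For illustration, here is a typical web.config connection string that only names the server and database (and therefore needs no change when the data files move), contrasted with one that embeds a file path via AttachDBFilename (server, database, and path names below are placeholders):

```xml
<connectionStrings>
  <!-- Path-independent: no change needed when the data files move drives -->
  <add name="MyDb"
       connectionString="Server=MyServer;Database=MyDatabase;Integrated Security=True;" />

  <!-- Path-dependent: AttachDBFilename embeds the file location and WOULD
       need updating if the .mdf moved from C: to E: -->
  <add name="MyDbAttached"
       connectionString="Server=.\SQLEXPRESS;AttachDBFilename=C:\Data\MyDatabase.mdf;Integrated Security=True;" />
</connectionStrings>
```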
Not needed. It does not impact the web app.