I know about increasing, but is there a way to reduce the size of an EBS volume? I've put effort into my AMI but soon realized it's way too big for my needs. It's a Windows 2008 instance.
You can take a snapshot of the volume and then, as I understand it, create a smaller EBS volume from that snapshot; the snapshot reports a 'volume size' that gives you an idea of how small you could go.
I guess if the snapshot route doesn't work, you can just create the smaller volume, copy the data from the larger one across, and then get rid of the larger volume; a rough sketch of that approach is below.
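A rough sketch of that copy approach with the AWS CLI (the size, IDs, and availability zone are placeholders; note this only works cleanly for a data volume, since a boot volume would need re-imaging to stay bootable):

    # Create a new, smaller volume in the same availability zone as the instance.
    aws ec2 create-volume --size 30 --availability-zone us-east-1a

    # Attach it to the instance alongside the old volume.
    aws ec2 attach-volume --volume-id vol-NEW --instance-id i-INSTANCE --device xvdf

    # Inside Windows: bring the new disk online, format it, copy the data across
    # (e.g. robocopy D:\ F:\ /MIR /COPYALL /XJ), then detach and delete the old volume.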
No answer. It seems like you can't do it.
I have a database which is around 20GB in size, but around 17GB of that is actually free space. This has happened because I moved quite a few large audit tables out to another database altogether.
I mistakenly tried to shrink the database during business hours, but I didn't realise how long it would take to complete, so I managed to stop it.
I've now done some research on this, and I've read articles saying that shrinking a database shouldn't be done because it can cause "massive" index fragmentation. I'm no SQL guru, but this does ring alarm bells.
People have suggested using a shrink with the TRUNCATEONLY option.
Are there any SQL experts out there who can help me out with the right thing to do here?
Shrinking a database after you remove a large amount of data in a one-time operation is appropriate. You want to leave plenty of free space, as re-growing can be expensive, and you want to rebuild the indexes after shrinking to remove the fragmentation.
But here the database is only 20GB, so shrinking it won't really solve any problems. If you wanted to shrink it to 6GB or so, that would be fine. But I would just leave it at 20GB.
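If you do go ahead, a minimal T-SQL sketch of that shrink-then-rebuild sequence (MyDb and MyBigTable are placeholder names; adjust the free-space target to taste):

    -- Shrink the database, leaving roughly 20% free space (target_percent = 20).
    DBCC SHRINKDATABASE (MyDb, 20);

    -- Rebuild indexes afterwards to remove the fragmentation the shrink causes
    -- (repeat per table).
    ALTER INDEX ALL ON dbo.MyBigTable REBUILD;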
I have recently discovered MonetDB and I am evaluating it for an internal project, so my questions are probably from a real newbie's point of view. Maybe someone could point me to a site and/or document where I could find more info (I haven't found much by googling).
Regarding scalability, please correct me if I am wrong, but my understanding is that if I need to scale, I would launch more server instances and discover them from the control node. Is that right?
Is there any limit on the number of servers?
The other point is about storage: is it possible to use Amazon S3 to back read-only MonetDB instances?
Update: we would need to store a massive amount of Call Detail Records from different sources, on a read-only basis. We would aggregate/reduce that data for the day-to-day operation, accessing the bigger tables only when the full detail is required.
We would store the historical data as well, to perform longer-term analysis. My concern is mostly about memory; disk storage wouldn't be the issue, I think. If the hot dataset involved in a report/analysis eats up the whole memory space (fast response times are needed, and I'm not sure how memory swapping would impact them), I would like to know if I can scale somehow instead of re-engineering the report/analysis process (maybe I am biased by the horizontal-scaling thing :-) ).
Thanks!
You will find the advantages of MonetDB easily on the net, so let me highlight some disadvantages:
1. In MonetDB, deleting rows does not free up the space.
Solution: copy the data into another table, drop the existing table, and rename the other table (see the sketch below).
2. Joins are a little slower.
3. You cannot give the table name as a dynamic variable.
E.g., if you have table names stored in one main table, then you can't write a query like "for each (select tablename from mytable) select data from tablename".
You can't write functions that take the table name as a variable argument.
But it is still damn fast and can store large amounts of data.
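A minimal SQL sketch of the workaround from point 1 (table and column names are placeholders; MonetDB's CREATE TABLE ... AS needs the WITH DATA clause):

    -- Copy the rows you want to keep into a new table.
    CREATE TABLE mytable_new AS
        SELECT * FROM mytable WHERE keep_row = true
    WITH DATA;

    -- Swap it in place of the original.
    -- (On older MonetDB versions without RENAME, create the copy
    -- directly under the final name instead.)
    DROP TABLE mytable;
    ALTER TABLE mytable_new RENAME TO mytable;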
I'm setting up a Virtuoso server on my local machine; the database is not big (about 2GB).
The application I'm using the server for needs to make a very large number of queries, and the results need to come back fast.
The HDD I'm using is mechanical, so it's not that fast. I am now trying to find a way to allocate part of my main memory as local storage so that I can put the database file on it.
Is there an easy way to do that?
That's not what RAM is for.
If your server ever lost power, you would lose all of the data.
If you want a faster HDD, get one with a higher RPM, or get an SSD.
Take a look at the Performance Tuning Guide...
It details how to configure exactly what you are looking for.
Data is still held on disk, but the more of it that can be loaded into memory, the better the performance.
Get all your data into memory and that's probably as fast as it gets :-)
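The relevant knobs live in the [Parameters] section of virtuoso.ini. A sketch using the sample figures the stock virtuoso.ini ships for a machine with about 4GB of free RAM (each buffer is an 8KB page; scale to your own memory):

    [Parameters]
    ; Sized for ~4GB of free system memory, per the samples in the stock ini file.
    NumberOfBuffers = 340000
    MaxDirtyBuffers = 250000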
There's a piece of software called RamDisk Plus.
You can see a demo here:
http://www.youtube.com/watch?v=vAdRsQJBEBE
This software allows you to create a disk partition right out of your RAM.
I'm currently developing an application that loads lots of images from the internet and saves them locally (I'm using SDURLCache). However, old images have to be removed from the disk again at some point, so I was wondering what the best cache size is.
The advantage of a big cache is obviously that more images stay cached, which leads to a better UX.
The disadvantage is that images need a lot of space and the user will run out of disk space faster. The size I am thinking of is 20MB. That seems big to me, though, so I'm asking what your opinion is.
The best way to decide on an appropriate cache size is to test. Run the app under Instruments to measure both performance and battery usage. Keep increasing the cache size until you can't discern a difference in performance. That's the largest size you'd need, at least under the test conditions. Once you've established that size, reduce the size until performance is just barely acceptable to determine the smallest acceptable size.
The right size is somewhere between those two sizes, depending on what you think is important. If you can't determine a right size, then either pick a size or add a slider to the app's settings to let the user decide. (I'd avoid making it user-adjustable if you can -- users shouldn't have to think about such things.)
Considering that the smallest iDevices have 8GB of storage, I don't think a 20MB cache is too big, especially if it significantly improves the performance of the app. Also, keep in mind the huge advantage a network cache can have for battery life, since network usage is very expensive in battery time.
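Whatever size you settle on, it's a one-line change with SDURLCache, so it's cheap to re-test; a sketch assuming the question's 20MB figure (SDURLCache mirrors the NSURLCache initializer):

    // Install a shared URL cache: 1MB in memory, 20MB on disk.
    SDURLCache *urlCache = [[SDURLCache alloc] initWithMemoryCapacity:1024*1024
                                                         diskCapacity:1024*1024*20
                                                             diskPath:[SDURLCache defaultCachePath]];
    [NSURLCache setSharedURLCache:urlCache];
    [urlCache release];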
Determining the ideal size, however, is hard without some more information. How often is the same picture accessed? How large is each picture (i.e., how many pictures can 20MB hold)? How often will images need to be removed from the cache to make room for new ones?
If you are constantly changing the images in the cache, it could actually have an adverse effect on the battery life due to the increased disk usage.
I'm about to create 2 new SQL Server databases for our data warehouse:
Datawarehouse - where the data is stored
Datawarehouse_Stage - where the ETL is done
I'm expecting both databases to be about 30GB and to grow about 5GB per year. They probably will not get bigger than 80GB (at which point we'll start to archive).
I'm trying to decide what settings I should use when creating these databases:
what should the initial size be?
...and should I increase the database size straight after creating it?
what should the auto-growth settings be?
I'm after any best practice advice on creating those databases.
UPDATE: the reason I suggest increasing the database size straight after creating it is that you can't shrink a database to less than its initial size.
What should the initial size be?
45GB? That's 30GB plus three years of growth, especially given that this fits on a low-end, cheap SSD ;) Sizing is not an issue if your smallest SSD is 64GB.
...and should I increase the database size straight after creating it?
That would be sort of pointless, no? Why create a database with a small size just to resize it immediately afterwards, instead of putting the right size into the script in the first place?
what should the auto-growth settings be?
This is not really a data warehouse question. NO AUTOGROW. Autogrow fragments your discs.
Make sure you format the discs according to best practices (64KB allocation unit size, aligned partitions); a sketch of the CREATE DATABASE script is below.
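Putting that together, a minimal T-SQL sketch (file names and paths are placeholders; FILEGROWTH = 0 disables autogrow, per the above):

    CREATE DATABASE Datawarehouse
    ON PRIMARY (
        NAME = Datawarehouse_data,
        FILENAME = 'D:\Data\Datawarehouse.mdf',
        SIZE = 45GB,       -- 30GB now plus ~3 years of growth
        FILEGROWTH = 0     -- no autogrow; grow manually when needed
    )
    LOG ON (
        NAME = Datawarehouse_log,
        FILENAME = 'E:\Logs\Datawarehouse.ldf',
        SIZE = 5GB,
        FILEGROWTH = 0
    );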