Creating data files in SQL Server - how would the data/tables be distributed?

My database has 1 data file (.mdf) and 1 log file (.ldf). We have about 500 tables in this database. If I create more data files, say 3 more (.ndf files), what would exactly happen? Would these new data files remain unused until the .mdf runs out of space? Or how would the data/tables be distributed?

New pages are distributed across all files in a filegroup (proportional fill). It is strongly recommended not to add files later if you want proper distribution - that is really hard to achieve. Better to make a new filegroup, add the files there, and copy the tables over (recreate the clustered index on the new filegroup).
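A minimal sketch of that approach, assuming a database named MyDb, a new filegroup FG_Data2, and a table dbo.BigTable with a non-unique clustered index IX_BigTable_Date (all names and paths are placeholders):

-- Add a new filegroup with several files so new pages are striped across them
ALTER DATABASE MyDb ADD FILEGROUP FG_Data2;
ALTER DATABASE MyDb ADD FILE (NAME = N'MyDb_Data2a', FILENAME = N'D:\Data\MyDb_Data2a.ndf', SIZE = 1GB) TO FILEGROUP FG_Data2;
ALTER DATABASE MyDb ADD FILE (NAME = N'MyDb_Data2b', FILENAME = N'E:\Data\MyDb_Data2b.ndf', SIZE = 1GB) TO FILEGROUP FG_Data2;

-- Move a table by rebuilding its clustered index on the new filegroup
CREATE CLUSTERED INDEX IX_BigTable_Date
    ON dbo.BigTable (CreatedDate)
    WITH (DROP_EXISTING = ON)
    ON FG_Data2;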

Related

copy databases using log files

I have a database that needs to be replicated.
I tried the Copy Database approach, but it failed after a day.
I tried to create a .bak file, but there is not enough space on disk....
I need an alternative way to achieve this.
Can I create a new database, replace its .mdf and .ldf files with those of the original database, and rename the files...?
The fastest and easiest way is to use an additional disk with more free space.
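If the extra disk is available, a minimal sketch of that route (database name, logical file names and paths are placeholders; WITH COMPRESSION is only available on editions that support backup compression):

-- Back up to the additional disk
BACKUP DATABASE MyDb
    TO DISK = N'F:\Backups\MyDb.bak'
    WITH COMPRESSION, STATS = 10;

-- Restore it under a new name, on the same or another instance
RESTORE DATABASE MyDb_Copy
    FROM DISK = N'F:\Backups\MyDb.bak'
    WITH MOVE N'MyDb' TO N'F:\Data\MyDb_Copy.mdf',
         MOVE N'MyDb_log' TO N'F:\Data\MyDb_Copy_log.ldf';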

Create database backup, ignore column

I'd like to create a database backup using SSMS.
The backup file will be a .bak file, but I would like to ignore one column in a certain table, because this column isn't necessary and it takes up 95% of the backup size.
The column values should all be replaced by 0x00 (column type is varbinary(max), not null).
What's the best way to do this?
FYI: I know how to generate a regular backup using Tasks => Back Up..
There is a long way of doing what you ask. It's basically: create a new restored database, remove the non-required data, and then take a new backup (a rough T-SQL sketch of these steps follows after the list).
Create a Backup of the production database.
Restore the backup locally on production with a new name
Update the column with 0x00
Shrink the database (shrinking is helpful when doing a restore; this won't reduce the .bak file size)
Take the backup of the new database (Also use Backup Compression to reduce the size even more)
FTP the .bak file
If you only needed a few tables, you could have used bcp, but that looks out of the picture for your current requirement.
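Roughly, those steps in T-SQL (database, file, table and column names are placeholders; treat this as a sketch, not a tested script):

-- 1-2. Restore the production backup under a new name
RESTORE DATABASE MyDb_Trimmed
    FROM DISK = N'D:\Backups\MyDb.bak'
    WITH MOVE N'MyDb' TO N'D:\Data\MyDb_Trimmed.mdf',
         MOVE N'MyDb_log' TO N'D:\Data\MyDb_Trimmed_log.ldf';

-- 3. Blank out the large varbinary(max) column
UPDATE MyDb_Trimmed.dbo.BigTable SET LargeBlob = 0x00;

-- 4. Shrink so the restored copy takes less space (does not change the .bak size)
DBCC SHRINKDATABASE (MyDb_Trimmed);

-- 5. Back up the trimmed copy with compression before FTPing it
BACKUP DATABASE MyDb_Trimmed
    TO DISK = N'D:\Backups\MyDb_Trimmed.bak'
    WITH COMPRESSION;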
With SQL Server native backups, you can't. You'd have to restore the database to some other location and then migrate the useful data.
You can also create a copy of your table without the column and back it up using filegroups: https://msdn.microsoft.com/en-us/library/ms191539(SQL.90).aspx
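A hedged illustration of that filegroup idea (all names are placeholders, and whether a filegroup backup is usable on its own depends on your recovery model, so check the linked article):

-- Put a trimmed copy of the table on its own filegroup
ALTER DATABASE MyDb ADD FILEGROUP FG_Export;
ALTER DATABASE MyDb ADD FILE (NAME = N'MyDb_Export', FILENAME = N'D:\Data\MyDb_Export.ndf') TO FILEGROUP FG_Export;

CREATE TABLE dbo.BigTable_Trimmed
(
    Id int NOT NULL,
    SomeColumn nvarchar(100) NULL   -- every column except the big varbinary(max) one
) ON FG_Export;

INSERT INTO dbo.BigTable_Trimmed (Id, SomeColumn)
SELECT Id, SomeColumn FROM dbo.BigTable;

-- Back up just that filegroup
BACKUP DATABASE MyDb
    FILEGROUP = N'FG_Export'
    TO DISK = N'D:\Backups\MyDb_FG_Export.bak';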

SQL Server 2008: Filestream how to physically delete uploaded file from filestreamgroup?

I have created a filestream filegroup at C:\Test\FilestreamGroup1
and a table with a varbinary(max) FILESTREAM column.
Now when a file is uploaded, it is physically stored in FilestreamGroup1...
Here I want to know two things:
In which format is the file stored in FilestreamGroup1 (for every single uploaded file I found 2 encoded files)?
Secondly, how do I physically delete an uploaded file (deleting a record from the table is just executing a DELETE command, but that does not physically delete the file from NTFS... so how can I delete the file physically)?
If you want to delete the files from the file system instantly, then you need to force garbage collection manually by using CHECKPOINT.
This is not really a StackOverflow question, it belongs on ServerFault (admin). It touches dev though.
deleting a record from the table is just executing a DELETE command, but that does not physically
delete the file from NTFS... so how can I delete the file physically
Do you know what the primary reason is to have a database? Guaranteeing data integrity.
A delete must keep the data around until a backup is taken. What is your backup policy? You may note that when you make an update, another copy of the file is created... for that simple reason. The old one must still be available for backup, and that is just how they integrate it.
In which format is the file stored in FilestreamGroup1 (for every single uploaded file I found 2 encoded files)?
No, the files are stored raw. What would be the sense in encoding them, when there are SQL functions to get the path, and it is a supported scenario for the client not to use SQL to load the file (instead it asks SQL for the file name and path, then accesses it via an NTFS file share)? This also supports interop, as any program loading from a network can be pointed at a SQL-driven location.
I strongly assume you have only one copy and somehow made an update that resulted in a second file being written.
http://msdn.microsoft.com/en-us/library/cc645962.aspx
explains how to access FILESTREAM data with SQL.
http://technet.microsoft.com/en-us/library/cc645940(v=sql.105).aspx
explains how to access FILESTREAM data using Win32.
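For the SQL side, a small sketch (table and column names are placeholders) that returns the NTFS path and the transaction context a Win32 client would pass to OpenSqlFilestream:

-- Get the file system path and a transaction context for a FILESTREAM value
BEGIN TRANSACTION;

SELECT FileData.PathName() AS FilePath,
       GET_FILESTREAM_TRANSACTION_CONTEXT() AS TxContext
FROM dbo.Documents
WHERE DocumentId = 1;

-- The client uses FilePath and TxContext with OpenSqlFilestream to read or
-- write the file, then the transaction is committed.
COMMIT TRANSACTION;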
FILESTREAM files being left behind after row deleted
explains why files are left behind when a row is deleted. I found that using the extremely trivial Google search for "sql filestream delete file", and it was item 1 on the result list - did you even try Google?
Secondly, how do I physically delete an uploaded file (deleting a record from the table is just executing a DELETE command, but that does not physically delete the file from NTFS... so how can I delete the file physically)?
CHECKPOINT does not remove the files; they are removed by a background process, and that can take quite a while. To force deletion, use
sp_filestream_force_garbage_collection
EDIT: this works with SQL Server 2012 and later only.
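For example (the database name is a placeholder; the procedure takes the database name and, optionally, a logical FILESTREAM file name):

-- Force FILESTREAM garbage collection after the rows have been deleted
EXEC sp_filestream_force_garbage_collection @dbname = N'MyDatabase';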
Write "checkpoint" after deleting a row. it will remove physical existence of file.
Run the query below and check; the file gets deleted from the file system automatically.
DELETE FROM TableName;
CHECKPOINT;
Thanks.

Importing an .RPT (6 gigs) file into SQL Server 2005

I'm trying to import two separate .RPT files into SQL Server, one small and one large. Both have issues with determining where the columns are separated.
My solution for this was to import the file into Access, define the columns and then save it as a .txt file.
This worked perfectly.
The problem, however, is that the larger file is 6 GB and MS Access won't allow me to open it. When trying to change the extension to simply .txt and importing it into SQL Server, everything comes in under one column (despite there being 10) and there is no way to accurately separate the data.
Please help!
As Tony stated, Access has a hard 2 GB limit on database size.
You don't say what kind of file the .RPT file is. If it is a text file, then you could break it into smaller chunks by reading it line by line and appending it into temporary files. Then import/export these smaller files one at a time.
Keep in mind the 2GB limit is on the Access database, so your temporary text files will need to be somewhat smaller because the import will likely introduce some additional overhead. Also, you may need to compact/repair the database in between import/export cycles to reclaim space in the database; simply deleting the records is not enough.
If the file has column delimiters or fixed column widths you can try the following in SQL Management Studio:
Right click on a database, select "Tasks" and then "Import data...". This will take you through a wizard where you can define the source columns and map them to an existing or new table.
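As an alternative to the wizard, and assuming the .RPT is really just a delimited text file, a hedged BULK INSERT sketch (the path, staging table and delimiter are placeholders):

-- Load a delimited text file straight into an existing staging table
BULK INSERT dbo.RptStaging
FROM 'C:\Import\bigfile.txt'
WITH (
    FIELDTERMINATOR = '|',    -- whatever character actually separates the columns
    ROWTERMINATOR   = '\n',
    FIRSTROW        = 2,      -- skip a header row if there is one
    BATCHSIZE       = 100000  -- commit in chunks so a 6 GB load doesn't blow up the log
);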

SQL Server 2005 backup and restore

I have two backup files
1) is named 'backup.sql' with a bunch of SQL defining TABLES
2) is named 'backup' with a bunch of encoded data, which I believe are the ROWS
I need to restore these TABLES + ROWS, but all I am able to figure out is how to restore the tables.
Any tips on dealing with these files? It's the first time I've ever dealt with SQL Server.
The backup process would not create a file with actual SQL statements; it would create a binary file. So #1 is not a backup file (it's probably a script someone saved to re-create the schema).
I would try to use SQL Server Management Studio to restore the second file and see what happens. I don't think it will allow you to restore an invalid file, but I would take some basic precautions like backing up the system first.
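A low-risk way to check whether the file is a valid native backup before attempting a full restore (the path is a placeholder for wherever the 'backup' file actually sits):

-- Show the backup header if the file is a native SQL Server backup
RESTORE HEADERONLY FROM DISK = N'C:\backup';

-- Verify the backup is readable without actually restoring it
RESTORE VERIFYONLY FROM DISK = N'C:\backup';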
What is the extension of the 'backup' file? Is the filename backup.bak? If you have a backup file created by SQL Server, then it 'should' contain the logic to create both the tables and restore the data, but it could depend on how the backup was created.
---Edit
It is possible for a .SQL file to contain data values as well as the logic to create the tables/columns for a database. I used to run backups of a MySQL database this way a long time ago... it just is not seen very often with SQL Server since it has built-in backup/restore functionality.
It seems unlikely they would export all the rows from all tables into a CSV file, and given you said it looks encoded, that makes me think it's your actual backup file.
Try this: save a copy of the "backup" file, rename it to backup.bak, and run this from SQL Server Management Studio:
RESTORE FILELISTONLY FROM DISK = 'C:\backup.bak'
(assuming your file is saved in the root of the C: drive)
Any results/errors?
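If that returns a list of logical file names, the file is a native backup and a full restore should work along these lines (the logical names below are placeholders; use the ones reported by RESTORE FILELISTONLY):

-- Restore the database using the logical names from the FILELISTONLY output
RESTORE DATABASE RestoredDb
FROM DISK = 'C:\backup.bak'
WITH MOVE 'LogicalDataName' TO 'C:\Data\RestoredDb.mdf',
     MOVE 'LogicalLogName'  TO 'C:\Data\RestoredDb_log.ldf',
     STATS = 10;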