I have created a FILESTREAM filegroup at C:\Test\FilestreamGroup1
and a table with a varbinary(max) FILESTREAM column.
Now when a file is uploaded, it is physically stored at FilestreamGroup1...
Here I want to know two things:
In which format is the file stored at FilestreamGroup1 (for every single uploaded file I found two encoded files)?
Secondly, how do I physically delete an uploaded file? (Deleting a record from the table is just executing a DELETE command, but doing this does not result in physical deletion of the file from NTFS... so how can I delete the file physically?)
If you want to delete files from the file system instantly, you need to force garbage collection manually by issuing a CHECKPOINT.
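A minimal sketch, assuming a hypothetical dbo.Documents table and the SIMPLE recovery model (under FULL recovery the file only becomes eligible for garbage collection after log backups):
DELETE FROM dbo.Documents WHERE DocId = 1;  -- hypothetical table and key
CHECKPOINT;  -- truncates the log under SIMPLE recovery, letting the FILESTREAM garbage collector reclaim the file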
This is not a StackOverflow question; it belongs on ServerFault (admin). It touches dev, though.
i.e. deleting a record from the table is just executing a DELETE command, but doing this does not result in physical
deletion of the file from NTFS... so how can I delete the file physically
Do you know what the primary reason is to have a database? To guarantee data integrity.
A delete must keep the data around until a backup is taken. What is your backup policy? You may note that when you make an update, another copy of the file is created... for that simple reason. The old one must still be available for backup, and that is just how they integrate it.
In which format is the file stored at FilestreamGroup1 (for every single uploaded file I found two encoded files)?
No, files are stored raw. What would be the point of encoding them? There are SQL functions to get the path, and it is a supported scenario that the client does not use SQL to load the file (but rather asks SQL Server for the file name and path, then accesses it via an NTFS file share). This also supports interop, as any program loading from a network share can be fed a SQL-driven location.
I strongly assume you have only one copy and somehow made an update that resulted in a second file being written.
http://msdn.microsoft.com/en-us/library/cc645962.aspx
has an explanation of how to access FILESTREAM data with SQL.
http://technet.microsoft.com/en-us/library/cc645940(v=sql.105).aspx
has an explanation of how to access FILESTREAM data using Win32.
FILESTREAM files being left behind after row deleted
explains why files are left behind when a row is deleted. I found that using the extremely trivial Google search "sql filestream delete file", and it was item 1 on the result list. Did you even try Google?
Secondly, how do I physically delete an uploaded file? (Deleting a record from the table is just executing a DELETE command, but doing this does not result in physical deletion of the file from NTFS... so how can I delete the file physically?)
CHECKPOINT does not remove the files; files are removed by a background process, and it can take quite a while. To force deletion, use
sp_filestream_force_garbage_collection
EDIT: this works with SQL Server 2012 and later only.
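For example (the database name is hypothetical):
EXEC sp_filestream_force_garbage_collection @dbname = N'MyFilestreamDb';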
Write "checkpoint" after deleting a row. it will remove physical existence of file.
Run the query below and check; the file gets deleted from the file system automatically:
DELETE FROM TableName;
CHECKPOINT;
Thanks.
I want to read a tab-delimited file using PL/SQL and insert the file data into a table.
Every day a new file will be generated.
I am not sure whether an external table will help here, because the file name changes based on the date.
Filename: SPRReadResponse_YYYYMMDD.txt
Below is the sample file data.
An option that works on your own PC is SQL*Loader. As the file name changes every day, you'd use your operating system's batch script (on MS Windows, these are .BAT files) to pass a different name when calling sqlldr (and the control file).
An external table requires you to have access to the database server and (at least) read privilege on the directory which contains those .TXT files. Unless you're a DBA, you'll have to talk to them to provide the environment. As for the changing file name, you could use ALTER TABLE ... LOCATION, which is rather inconvenient.
If you want to have control over it, use UTL_FILE; yes, you still need access to that directory on the database server, but by writing a PL/SQL script you can modify whatever you want, including the file name. A sketch follows.
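A minimal sketch, assuming a directory object DATA_DIR, a target table spr_read_response, and a two-column tab-delimited layout (all hypothetical names):
DECLARE
  f     UTL_FILE.FILE_TYPE;
  line  VARCHAR2(4000);
  fname VARCHAR2(100) := 'SPRReadResponse_' || TO_CHAR(SYSDATE, 'YYYYMMDD') || '.txt';
BEGIN
  -- DATA_DIR is a hypothetical Oracle directory object pointing at the file location
  f := UTL_FILE.FOPEN('DATA_DIR', fname, 'r');
  LOOP
    BEGIN
      UTL_FILE.GET_LINE(f, line);
    EXCEPTION
      WHEN NO_DATA_FOUND THEN EXIT;  -- end of file
    END;
    -- split on the tab character; assumes two columns for illustration
    INSERT INTO spr_read_response (col1, col2)
    VALUES (SUBSTR(line, 1, INSTR(line, CHR(9)) - 1),
            SUBSTR(line, INSTR(line, CHR(9)) + 1));
  END LOOP;
  UTL_FILE.FCLOSE(f);
  COMMIT;
END;
/
Because the file name is built in PL/SQL, the daily date suffix is no longer a problem.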
Or, a simpler option: first rename the input file to SPRReadResponse.txt, then load it, and save yourself all that trouble.
I want a script to delete an NDF file from a filegroup completely without using the SHRINKFILE command.
A file can be removed from the database only if the file is empty. Without SHRINKFILE, the implication is that the file must be the only file in a user-defined filegroup and you must first drop or move all the objects (or partitions) from the filegroup to a different file group. The empty file can then be dropped with ALTER DATABASE...REMOVE FILE.
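A rough sketch of that sequence with hypothetical names, assuming the table's clustered index can simply be rebuilt onto another filegroup:
-- Move the table off the filegroup by rebuilding its clustered index elsewhere
CREATE UNIQUE CLUSTERED INDEX PK_MyTable
    ON dbo.MyTable (Id)
    WITH (DROP_EXISTING = ON)
    ON [PRIMARY];
-- With the filegroup's only file now empty, drop the file and then the filegroup
ALTER DATABASE MyDb REMOVE FILE MyNdfLogicalName;
ALTER DATABASE MyDb REMOVE FILEGROUP MyFileGroup;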
It seems your objective is to delete data older than 6 months. It would be easier to just delete/truncate the data and not bother with files/filegroups at all.
I'm trying to use Oracle external tables to load flat files into a database, but I'm having a bit of an issue with the LOCATION clause. The files we receive have several pieces of information appended, including the date, so I was hoping to use wildcards in the LOCATION clause, but it doesn't look like I'm able to.
I think I'm right in assuming I'm unable to use wildcards; does anyone have a suggestion on how I can accomplish this without writing large amounts of code per external table?
Current thoughts:
The only way I can think of doing it at the moment is to have a shell watcher script and a parameter table. The user can specify the input directory, file mask, external table, etc. When a file is found in the directory, the shell script generates a list of files matching the file mask. For each file found, it issues an ALTER TABLE command to change the LOCATION of the given external table to that file and launches the rest of the PL/SQL associated with that file. This can be repeated for each file matching the mask. I guess the benefit of this is that I could also add the date to the end of the log and bad files after each run.
I'll post the solution I went with in the end, which appears to be the only way.
I have a file watcher that looks for files in a given input directory with a certain file mask. The lookup table also includes the name of the external table. I then simply issue an ALTER TABLE on the external table with the list of new file names.
For me this wasn't much of an issue, as I'm already using shell for most of the file watching and file manipulation. Hopefully this saves someone from searching for ages for a solution.
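The core of it is just the ALTER TABLE call; a minimal sketch (table and file names are hypothetical):
ALTER TABLE my_ext_table
  LOCATION ('SPRReadResponse_20150101.txt', 'SPRReadResponse_20150102.txt');
The LOCATION clause accepts a list, so one statement can point the external table at every file the watcher found.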
I am currently doing a project on backing up SQL Server databases. My question is: how can I find whether a specified .bak file is present in the DEFAULT BACKUP FOLDER or not?
So far, what I have learnt is that to check whether a file is present, I can use:
EXEC master.dbo.xp_fileexist 'D:\Backup\dbbackup.bak'
I also found RESTORE VERIFYONLY FROM DISK = 'FileName', but it returns a message; the thing is, I want a value in a table as the result, so that I can access it via Java's ResultSet.
But the issue for me is how to find whether the file is present or not without giving the file path, giving only the file name and assuming the file is located in the DEFAULT BACKUP FOLDER.
Also, can I know how to check the integrity of that backup file?
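One way to combine those pieces into a single table-valued result is to read the instance's default backup directory from the registry and feed it to xp_fileexist; note that xp_instance_regread is undocumented, so this is a sketch, not a guarantee:
DECLARE @dir NVARCHAR(260), @full NVARCHAR(512);
-- Undocumented: reads the instance's default backup folder from the registry
EXEC master.dbo.xp_instance_regread
     N'HKEY_LOCAL_MACHINE',
     N'Software\Microsoft\MSSQLServer\MSSQLServer',
     N'BackupDirectory',
     @dir OUTPUT;
SET @full = @dir + N'\dbbackup.bak';  -- file name supplied by the caller
DECLARE @result TABLE (FileExists INT, IsDirectory INT, ParentDirExists INT);
INSERT INTO @result EXEC master.dbo.xp_fileexist @full;
SELECT * FROM @result;  -- a plain result set, readable from a JDBC ResultSet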
I have two backup files
1) one named 'backup.sql' with a bunch of SQL defining TABLES
2) one named 'backup' with a bunch of encoded data, which I believe are the ROWS
I need to restore these TABLES + ROWS, but all I've been able to figure out is how to restore the tables.
Any tips on dealing with these files? It's the first time I've ever dealt with SQL Server.
The backup process would not create a file with actual SQL statements, it would create a binary file. So #1 is not a backup file (it's probably a script someone saved to re-create the schema).
I would try to use SQL Server Management Studio to restore the second file and see what happens. I don't think it will allow you to restore an invalid file, but I would take some basic precautions like backing up the system first.
What is the extension of the 'backup' file? Is the filename backup.bak? If you have a backup file created by SQL Server, then it 'should' contain the logic both to create the tables and to restore the data, but it could depend on how the backup was created.
---Edit
It is possible for a .SQL file to contain data values as well as the logic to create the tables/columns for a database. I used to run backups of a MySQL database this way a long time ago... it just is not seen very often with SQL Server, since it has built-in backup/restore functionality.
It seems unlikely they would export all the rows from all tables into a CSV file, and given you said it looks encrypted, I'm thinking that's your actual backup file.
Try this: save a copy of the "backup" file, rename it to backup.bak, and run this from SQL Server Management Studio:
RESTORE FILELISTONLY FROM DISK = 'C:\backup.bak'
(assuming your file is saved on the root of the C: drive)
Any results/errors?
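If FILELISTONLY returns rows, a restore along these lines might then work; the logical names come from that output, and the target paths here are hypothetical:
RESTORE DATABASE MyRestoredDb
FROM DISK = 'C:\backup.bak'
WITH MOVE 'DataFileLogicalName' TO 'C:\Data\MyRestoredDb.mdf',
     MOVE 'LogFileLogicalName'  TO 'C:\Data\MyRestoredDb_log.ldf';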