How to access files stored in SQL Server's FileTable?

As far as I know, SQL Server 2012 introduced a new feature, FileTable. It allows us to store files in the file system and to use them from T-SQL.
I am trying to use this feature and I have no idea how to do it properly.
Generally, I don't know how to access files stored in a FileTable. Let's suppose I have an ASP.NET MVC app with a lot of images that I show on web pages in img tags. I would like to store these images in a FileTable and access them as files from the file system. But I don't know where these files are stored or how to use them as files. Right now my images are stored in the web application's images folder, and I write something like this:
<img src='/images/mypicture.png' />
And if I move my images to a FileTable, what should I write in src?
<img src='path-toimage-in-filetable' />

I don't think you still need this; anyway, I'll post my answer for anyone else interested.
First, a FileTable is still a table, so if you want to access its data you need to use a SELECT statement. You'd need something like:
select name, file_stream from filetable_name
where name = 'file_name'
and file_type = 'file_extension'
Just execute a statement like this in your ASP.NET app, then fetch the results and use the file_stream column to get the binary data of the stored file. If you want to retrieve the file from HTML, first you need to create an action in your controller which will return the retrieved file:
public ActionResult GetFile()
{
    // .. fetch name, file_stream and file_type from the FileTable as shown above ..
    return File(file.file_stream, file.file_type);
}
After this, put in your HTML something like:
<img src="/controller/GetFile" />
hope this could help!
If you want to know the schema of a FileTable, see here.

I assume that by FileTable you actually mean FILESTREAM. A couple of notes about that:
This feature is best used if your files are actually files
The files should be, on average, larger than 1 MB. There can be exceptions to this rule, but if they're smaller than 1 MB on average (e.g., only a few KB), you may be better off using a VARBINARY(MAX) or XML data type as appropriate.
Accessing these files requires an open transaction and a database that is properly configured for FILESTREAM
You can get some significant advantages by bypassing the normal SQL engine/database file method of data access and telling SQL Server that you want to access the file directly. However, it's not meant for directly accessing the file on the file system, and attempting to do so can break SQL Server's management of these files (transactional consistency, tracking, locking, etc.).
It's pretty likely that your use case here would be better served by using a CDN and storing image URLs in the table, if you really need SQL for this. You can use FILESTREAM to do this (see the code sample below for one implementation), but you'll be hammering your SQL Server for every request unless you store the images somewhere else the browser can properly cache (my example doesn't do that) - and if you store them somewhere else for rendering in the browser, you might as well store them there to begin with (you won't have transactional consistency for those images once they're copied to some other drive/disk/location anyway).
With all that said, here's an example of how you'd access the FILESTREAM data using ADO.NET:
public static string connectionString = ...; // get your connection string from encrypted config

// assumes your FILESTREAM data column is called Img in a table called ImageTable
const string sql = @"
    SELECT
        Img.PathName(),
        GET_FILESTREAM_TRANSACTION_CONTEXT()
    FROM ImageTable
    WHERE ImageId = @id";

public string RetrieveImage(int id)
{
    string serverPath;
    byte[] txnToken;
    string base64ImageData = null;
    using (var ts = new TransactionScope())
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            using (SqlCommand cmd = new SqlCommand(sql, conn))
            {
                cmd.Parameters.Add("@id", SqlDbType.Int).Value = id;
                using (SqlDataReader rdr = cmd.ExecuteReader())
                {
                    rdr.Read();
                    serverPath = rdr.GetSqlString(0).Value;
                    txnToken = rdr.GetSqlBinary(1).Value;
                }
            }
            using (var sfs = new SqlFileStream(serverPath, txnToken, FileAccess.Read))
            {
                // sfs will now work basically like a FileStream. You can either copy it
                // locally or return it as a base64-encoded string
                using (var ms = new MemoryStream())
                {
                    sfs.CopyTo(ms);
                    base64ImageData = Convert.ToBase64String(ms.ToArray());
                }
            }
        }
        ts.Complete();
        // assume this is PNG image data; replace png with jpeg etc. as appropriate.
        // Might store the type in the table if it will vary...
        return "data:image/png;base64," + base64ImageData;
    }
}
Obviously, if you have lots of images to handle like this, this is not an ideal method - don't try to turn an instance of SQL Server into what you should be using a CDN for. However, if you have other really good reasons, you should try to grab as many images as possible in a single request/transaction (e.g., if you know you're displaying 50 images on a page, get all 50 with a single transaction scope; don't use 50 transaction scopes - the code above won't handle that).
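For illustration, here is a rough sketch of that batching idea, reusing the sql and connectionString from above (an untested sketch, not part of the original answer):

// fetch many images inside ONE transaction scope instead of one scope per image
public Dictionary<int, string> RetrieveImages(IEnumerable<int> ids)
{
    var results = new Dictionary<int, string>();
    using (var ts = new TransactionScope())
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();
        foreach (var id in ids)
        {
            string serverPath;
            byte[] txnToken;
            using (var cmd = new SqlCommand(sql, conn))
            {
                cmd.Parameters.Add("@id", SqlDbType.Int).Value = id;
                using (var rdr = cmd.ExecuteReader())
                {
                    if (!rdr.Read()) continue; // skip ids with no row
                    serverPath = rdr.GetSqlString(0).Value;
                    txnToken = rdr.GetSqlBinary(1).Value;
                }
            }
            using (var sfs = new SqlFileStream(serverPath, txnToken, FileAccess.Read))
            using (var ms = new MemoryStream())
            {
                sfs.CopyTo(ms);
                results[id] = "data:image/png;base64," + Convert.ToBase64String(ms.ToArray());
            }
        }
        ts.Complete(); // commit once, after all reads
    }
    return results;
}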

Related

Create a SQLite file with password [duplicate]

I'm just learning to use SQLite and I was curious whether such things are possible:
Encryption of the database file?
Password protect opening of the database?
PS. I know that there is this "SQLite Encryption Extension (SEE).", but according to the documentation, "The SEE is licensed software...." and "The cost of a perpetual source code license for SEE is US $2000."
SQLite has hooks built-in for encryption which are not used in the normal distribution, but here are a few implementations I know of:
SEE - The official implementation.
wxSQLite - A wxWidgets style C++ wrapper that also implements SQLite's encryption.
SQLCipher - Uses OpenSSL's libcrypto to implement encryption.
SQLiteCrypt - Custom implementation, modified API.
botansqlite3 - an encryption codec for SQLite3 that can use any algorithms in Botan for encryption.
sqleet - another encryption implementation, using ChaCha20/Poly1305 primitives. Note that wxSQLite mentioned above can use this as a crypto provider.
The SEE and SQLiteCrypt require the purchase of a license.
Disclosure: I created botansqlite3.
You can password-protect a SQLite3 DB.
The first time, before doing any operations, set the password as follows:
SQLiteConnection conn = new SQLiteConnection("Data Source=MyDatabase.sqlite;Version=3;");
conn.SetPassword("password");
conn.Open();
Then the next time, you can access it like this:
conn = new SQLiteConnection("Data Source=MyDatabase.sqlite;Version=3;Password=password;");
conn.Open();
This won't allow any GUI editor to view your data.
Later if you wish to change the password, use conn.ChangePassword("new_password");
To reset or remove password, use conn.ChangePassword(String.Empty);
The .net library System.Data.SQLite also provides for encryption.
You can get sqlite3.dll file with encryption support from http://system.data.sqlite.org/.
1 - Go to http://system.data.sqlite.org/index.html/doc/trunk/www/downloads.wiki and download one of the packages. .NET version is irrelevant here.
2 - Extract SQLite.Interop.dll from the package and rename it to sqlite3.dll. This DLL supports encryption via plaintext passwords or encryption keys.
The mentioned file is native and does NOT require .NET framework. It might need Visual C++ Runtime depending on the package you have downloaded.
UPDATE
This is the package that I've downloaded for 32-bit development: http://system.data.sqlite.org/blobs/1.0.94.0/sqlite-netFx40-static-binary-Win32-2010-1.0.94.0.zip
Keep in mind, the following is not intended to be a substitute for a proper security solution.
After playing around with this for four days, I've put together a solution using only the open source System.Data.SQLite package from NuGet. I don't know how much protection this provides. I'm only using it for my own course of study. This will create the DB, encrypt it, create a table, and add data.
using System.Data.SQLite;

namespace EncryptDB
{
    class Program
    {
        static void Main(string[] args)
        {
            string databaseFile = @"C:\Programming\sqlite3\db.db";
            string passwordString = "password";
            byte[] passwordBytes = GetBytes(passwordString);
            SQLiteConnection.CreateFile(databaseFile);
            SQLiteConnection conn = new SQLiteConnection("Data Source=" + databaseFile + ";Version=3;");
            conn.SetPassword(passwordBytes);
            conn.Open();
            SQLiteCommand sqlCmd = new SQLiteCommand("CREATE TABLE data(filename TEXT, filepath TEXT, filelength INTEGER, directory TEXT)", conn);
            sqlCmd.ExecuteNonQuery();
            sqlCmd = new SQLiteCommand("INSERT INTO data VALUES('name', 'path', 200, 'dir')", conn);
            sqlCmd.ExecuteNonQuery();
            conn.Close();
        }

        static byte[] GetBytes(string str)
        {
            // Encoding.GetBytes already allocates and fills the array
            return System.Text.Encoding.Default.GetBytes(str);
        }
    }
}
Optionally, you can remove conn.SetPassword(passwordBytes); and replace it with conn.ChangePassword("password");, which needs to be placed after conn.Open(); instead of before. Then you won't need the GetBytes method.
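That variant of the setup code would look something like this (a sketch based on the example above):

SQLiteConnection.CreateFile(databaseFile);
SQLiteConnection conn = new SQLiteConnection("Data Source=" + databaseFile + ";Version=3;");
conn.Open();
conn.ChangePassword("password"); // encrypts the previously unencrypted database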
To decrypt, it's just a matter of putting the password in your connection string before the call to open.
string filename = @"C:\Programming\sqlite3\db.db";
string passwordString = "password";
SQLiteConnection conn = new SQLiteConnection("Data Source=" + filename + ";Version=3;Password=" + passwordString + ";");
conn.Open();
You can always encrypt data on the client side. Note that not all of the data has to be encrypted, because encryption carries a performance cost.
You can use SQLite's function creation routines (PHP manual):
$db_obj->sqliteCreateFunction('Encrypt', 'MyEncryptFunction', 2);
$db_obj->sqliteCreateFunction('Decrypt', 'MyDecryptFunction', 2);
When inserting data, you can use the encryption function directly and INSERT the encrypted data or you can use the custom function and pass unencrypted data:
$insert_obj = $db_obj->prepare('INSERT INTO table (Clear, Encrypted) ' .
    'VALUES (:clear, Encrypt(:data, "' . $passwordhash_str . '"))');
When retrieving data, you can also use SQL search functionality:
$select_obj = $db_obj->prepare('SELECT Clear, ' .
    'Decrypt(Encrypted, "' . $passwordhash_str . '") AS PlainText FROM table ' .
    'WHERE PlainText LIKE :searchterm');
Well, SEE is expensive. However, SQLite has a built-in interface for encryption (the pager). This means that on top of the existing code one can easily develop some encryption mechanism; it does not have to be AES. Anything, really.
Please see my post here: https://stackoverflow.com/a/49161716/9418360
You need to define SQLITE_HAS_CODEC=1 to enable Pager encryption. Sample code below (original SQLite source):
#ifdef SQLITE_HAS_CODEC
/*
** This function is called by the wal module when writing page content
** into the log file.
**
** This function returns a pointer to a buffer containing the encrypted
** page content. If a malloc fails, this function may return NULL.
*/
SQLITE_PRIVATE void *sqlite3PagerCodec(PgHdr *pPg){
  void *aData = 0;
  CODEC2(pPg->pPager, pPg->pData, pPg->pgno, 6, return 0, aData);
  return aData;
}
#endif
There is a commercial version in the C language for SQLite encryption using AES256 - it can also work with PHP, but it needs to be compiled with the PHP and SQLite extensions. It de/encrypts the SQLite database file on the fly; the file contents are always encrypted. Very useful.
http://www.iqx7.com/products/sqlite-encryption
I also had a similar problem: I needed to store sensitive data in a simple database (SQLite was the perfect choice, except for security). Finally, I placed the database file on a TrueCrypt encrypted volume.
An additional console app mounts a temporary drive using the TrueCrypt CLI and then starts the database application. It waits until the database application exits and then dismounts the drive again.
Maybe not a suitable solution in all scenarios, but it works well for me...

How to open local bitcoin database

I am trying to extract data from a local Bitcoin database. As far as I know, bitcoin-qt uses BerkeleyDB. I have installed BerkeleyDB from the Oracle web site and found there a DLL for .NET: libdb_dotnet60.dll. I am trying to open a file, but I get a DatabaseException. Here is my code:
using BerkeleyDB;

class Program
{
    static void Main(string[] args)
    {
        var btreeConfig = new BTreeDatabaseConfig();
        var btreeDb = BTreeDatabase.Open(@"c:\Users\<user>\AppData\Roaming\Bitcoin\blocks\blk00000.dat", btreeConfig);
    }
}
Does anyone have examples how to work with a Bitcoin database (in any other language)?
What are you trying to extract? Only the wallet.dat file is a Berkeley DB database.
Blocks are stored one after the other in the blkxxxxx.dat files, with four bytes representing a network identifier and four bytes giving the block size before each block.
An index of unspent outputs is stored as a LevelDB database.
Knowing what type of information you are looking for would help.
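To illustrate that framing, here is a minimal C# sketch that walks a blkxxxxx.dat file. The mainnet magic bytes F9 BE B4 D9 are assumed, and real files may end with zero padding, which this sketch does not handle:

using System;
using System.IO;

class BlkFileReader
{
    static void Main()
    {
        string path = @"c:\Users\<user>\AppData\Roaming\Bitcoin\blocks\blk00000.dat";
        using (var br = new BinaryReader(File.OpenRead(path)))
        {
            while (br.BaseStream.Position < br.BaseStream.Length)
            {
                byte[] magic = br.ReadBytes(4);            // network identifier (F9 BE B4 D9 on mainnet)
                uint size = br.ReadUInt32();               // little-endian size of the block that follows
                byte[] rawBlock = br.ReadBytes((int)size); // 80-byte header followed by the transactions
                // parse rawBlock here, or hand it to a library such as NBitcoin
            }
        }
    }
}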
There is a library, NBitcoin: https://github.com/MetacoSA/NBitcoin
How to enumerate blocks:
var store = new BlockStore(@"C:\Bitcoin\blocks\", Network.Main);
// this loop will enumerate all blocks ordered by height, starting with the genesis block
foreach (var block in store.EnumerateFolder())
{
    var item = block.Item;
    string blockID = item.Header.ToString();
    foreach (var tx in item.Transactions)
    {
        string txID = tx.GetHash().ToString();
        string raw = tx.ToHex();
    }
}
In .NET you could use something like BitcoinBlockchain, which is available as a NuGet package at https://www.nuget.org/packages/BitcoinBlockchain/. Its usage is trivial. If you want to see how it is implemented, the sources are available on GitHub.
If you want to store the blockchain in a SQL database that you can query faster and in more ways than the raw blockchain, you could use something like the BitcoinDatabaseGenerator tool available at https://github.com/ladimolnar/BitcoinDatabaseGenerator.

How to restore an unknown type BLOB field from Firebird

I am trying to restore a BLOB field stored in a Firebird database, and the only information I have is that the content of the BLOB field is a document.
I've tried using IBManager to right-click on the cell and click "Save BLOB to file", but the saved file is unreadable (as if it were encrypted). I tried to open it with Microsoft Word, Notepad, Adobe Reader, etc., with no success. I also tried opening it with WinRAR (I thought it might have been compressed before being stored in the database), but still nothing.
Is there a way to find out whether and how the BLOB file was compressed, and how to restore it?
Thanks in advance!
Update:
I have converted the Firebird database to SQL and I use the following code to extract the unencoded BLOB documents:
conn.Open();
dr = comm.ExecuteReader();
while (dr.Read())
{
    byte[] document_byte = null;
    if (dr[1] != System.DBNull.Value)
    {
        document_byte = (byte[])dr[1];
    }
    string subPath = "C:\\Documents\\" + dr[0] + "\\";
    System.IO.Directory.CreateDirectory(subPath);
    if (document_byte != null)
    {
        System.IO.File.WriteAllBytes(subPath + "Document", document_byte);
    }
}
How can I adjust my code to decode the BLOB content from Base64, since I know it is Base64 encoded?
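(If the content really is Base64 text, a possible adjustment - an untested sketch, not part of the original question - would be to decode it before writing:

// assumes the BLOB holds Base64 text rather than raw bytes
string base64Text = System.Text.Encoding.ASCII.GetString(document_byte);
byte[] decoded = Convert.FromBase64String(base64Text);
System.IO.File.WriteAllBytes(subPath + "Document", decoded);
)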
Unless the field uses a BLOB filter, the data is stored in the database as-is, i.e., Firebird doesn't alter it in any way. Check the field's definition: if it has SUB_TYPE 0 (binary), then it is "ordinary" binary data and Firebird doesn't apply any filter to it. And even if the field uses some filter, unless there is a bug in the filter code, you should get the original data back when reading the content of the BLOB.
So it comes down to the program which stored the document in the DB; it is quite possible that it compressed or encrypted the file, but there is no way Firebird can help you figure out what algorithm was used... One option would be to save the content of the BLOB to a file and then try the *nix file command; perhaps it is able to detect the file format used.
I would also check the DB for corruption, just in case (using Firebird's gfix command line tool).

How can I get references to BlockBlob objects from CloudBlobDirectory.ListBlobs?

I am using the Microsoft Azure .NET client libraries to interact with Azure cloud storage. I need to be able to access additional information about each blob in its metadata collection. I am currently using CloudBlobDirectory.ListBlobs() method to get a list of blobs in a particular directory of a directory structure I've devised in the blob names. The ListBlobs() method returns a list of IListBlobItem objects. They only have a couple of properties: Url and references to parent directory and parent container. I need to get to the metadata of the actual blob objects.
I envisioned there would be a way to either cast the IListBlobItem to a BlockBlob object or use the IListBlobItem to get a reference to the BlockBlob, but I can't seem to find a way to do that.
My question is: Is there a way to get a BlockBlob object from this method, or do I have to use a different way of getting the actual BlockBlob objects? If different, then can you suggest a way to achieve this, while also being able to filter by the "directory" scheme?
OK... I found a way to do this, and while it seems a little clunky and indirect, it does achieve the main thing I thought should be doable, which is to cast the IListBlobItem directly to a CloudBlockBlob object.
What I am doing is getting the list from the Directory object's ListBlobs() method and then looping over each item in the list and casting the item to a CloudBlockBlob object and then calling the FetchAttributes() method to retrieve the properties (including the metadata). Then add a new "info" object to a new list of info objects. Here's the code I'm using:
CloudBlobDirectory dir = container.GetDirectoryReference(dirPath);
var blobs = dir.ListBlobs(true);
foreach (IListBlobItem item in blobs)
{
    CloudBlockBlob blob = (CloudBlockBlob)item;
    blob.FetchAttributes();
    files.Add(new ImageInfo
    {
        FileUrl = item.Uri.ToString(),
        FileName = item.Uri.PathAndQuery.Replace(restaurantId.ToString().PadLeft(3, '0') + "/", ""),
        ImageName = blob.Metadata["Name"]
    });
}
The whole "Blob" concept seems needlessly complex and doesn't seem to achieve what I'd have thought would have been one of the main features of the Blob wrapper. That is, a way to expand search capabilities by allowing a query over name, directory, container and metadata. I'd have thought you could construct a linq query that would read somewhat like: "return a list of all blobs in the 'images' container, that are in the 'natural/landscapes/' directory path that have a metadata key of 'category' with the value of 'sunset'". There doesn't seem to be a way to do that and that seems to be a missed opportunity to me. Oh, well.
If I'm wrong and way off base here, please let me know.
This approach was developed for Java, but I hope it can be adapted to any other supported language. Although the functionality you ask about has not been explicitly developed yet, I think I found a different (hopefully less clunky) way to access CloudBlockBlob data from a ListBlobItem element.
The following code can be used to delete, for example, every blob inside a specific directory.
CloudBlobClient blobClient = /* Obtain your blob client */;
try {
    CloudBlobContainer container = /* Obtain your blob container */;
    for (ListBlobItem blobItem : container.listBlobs(blobPrefix)) {
        if (blobItem instanceof CloudBlob) {
            CloudBlob blob = (CloudBlob) blobItem;
            if (blob.exists()) {
                System.out.println("Deleting blob " + blob.getName());
                blob.delete();
            }
        }
    }
} catch (URISyntaxException | StorageException ex) {
    Logger.getLogger(BlobOperations.class.getName()).log(Level.SEVERE, null, ex);
}
The previous answers are good. I just wanted to point out 2 things:
1) Nowadays async programming is recommended and is supported by the Azure SDK as well. So try to use it:
CloudBlobDirectory dir = container.GetDirectoryReference(dirPath);
var blobs = dir.ListBlobs(true);
foreach (IListBlobItem item in blobs)
{
    CloudBlockBlob blob = (CloudBlockBlob)item;
    await blob.FetchAttributesAsync(); // use async calls...
}
2) Fetching metadata in a separate call is not efficient. The code above makes 2 HTTP requests per blob object. The ListBlobs() method also supports getting metadata in a single call, by setting the BlobListingDetails parameter:
CloudBlobDirectory dir = container.GetDirectoryReference(dirPath);
var blobs = dir.ListBlobs(useFlatBlobListing: true, blobListingDetails: BlobListingDetails.Metadata);
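With the metadata pre-populated this way, the loop no longer needs FetchAttributes/FetchAttributesAsync at all:

foreach (IListBlobItem item in blobs)
{
    CloudBlockBlob blob = (CloudBlockBlob)item;
    string imageName = blob.Metadata["Name"]; // already populated by ListBlobs, no extra HTTP call
}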
I recommend using the second approach if possible, since it is the most efficient way to fetch metadata.

Grails - store sql that will be used by services

I am writing a Grails application that will mostly be using the springws web services plugin with endpoints backed by services. The services will retrieve data from a variety of back end databases (i.e., not via domain classes and GORM). I would like to store the sql that my services will be using to fetch the data for the web services in external files.
I'm looking for suggestions on:
Where is the best place to keep the files (i.e., I'd like to put them somewhere obvious like grails-app/sql) and the best format (e.g., XML, ConfigSlurper, etc.)
The best way to abstract the retrieval of the sql text, so the services that execute the sql don't need to know where or how it is fetched. Services would just provide a sql id and get the sql back.
I was working on a project recently where I needed to do something similar. I created the following directory to store the sql files:
./grails-app/conf/sql
For example there is a file ./grails-app/conf/sql/hr/FIND_PERSON_BY_ID.sql that has something like the following:
select a.id
     , a.first_name
     , a.last_name
  from person a
 where a.id = ?
I created a SqlCatalogService class that would load all files in that directory (and subdirectories) and store the filenames (minus extension) and file text in a Map. The service has a get(id) method that returns the sql text that is cached in the Map. Since files/directories stored in grails-app/conf are placed in the classpath, the SqlCatalogService uses the following code to read in the files:
....
....
Map<String,String> sqlCache = [:]
....
....

void loadSqlCache() {
    try {
        loadSqlCacheFromDirectory(new File(this.class.getResource("/sql/").getFile()))
    } catch (Exception ex) {
        log.error(ex)
    }
}

void loadSqlCacheFromDirectory(File directory) {
    log.info "Loading SQL cache from disk using base directory ${directory.name}"
    synchronized(sqlCache) {
        if(sqlCache.size() == 0) {
            try {
                directory.eachFileRecurse { sqlFile ->
                    if(sqlFile.isFile() && sqlFile.name.toUpperCase().endsWith(".SQL")) {
                        def sqlKey = sqlFile.name.toUpperCase()[0..-5]
                        sqlCache[sqlKey] = sqlFile.text
                        log.debug "added SQL [${sqlKey}] to cache"
                    }
                }
            } catch (Exception ex) {
                log.error(ex)
            }
        } else {
            log.warn "request to load sql cache and cache not empty: size [${sqlCache.size()}]"
        }
    }
}

String get(String sqlId) {
    def sqlKey = sqlId?.toUpperCase()
    log.debug "SQL Id requested: ${sqlKey}"
    if(!sqlCache[sqlKey]) {
        log.debug "SQL [${sqlKey}] not found in cache, loading cache from disk"
        loadSqlCache()
    }
    return sqlCache[sqlKey]
}
Services that use various datasources use the SqlCatalogService to retrieve the sql by calling the get(id) method:
class PersonService {

    def hrDataSource
    def sqlCatalogService

    private static final String SQL_FIND_PERSON_BY_ID = "FIND_PERSON_BY_ID"

    Person findPersonById(String personId) {
        try {
            def sql = new groovy.sql.Sql(hrDataSource)
            def row = sql.firstRow(sqlCatalogService.get(SQL_FIND_PERSON_BY_ID), [personId])
            row ? new Person(row) : null
        } catch (Exception ex) {
            log.error ex.message, ex
            throw ex
        }
    }
}
For now we only have a few sql statements, so storing all the text in a Map is not an issue. If you have lots of sql files to store, you may need to think about using something like Ehcache and defining an eviction strategy (i.e., least recently used or least frequently used), only storing the most used in memory and leaving the rest on disk until needed.
Before doing this I thought about using GORM and storing the sql text in the database. But I decided that having the sql in files made it easier to develop with, since we could pretty much save the sql to file directly from our sql tool (replacing hard-coded params with question marks) and let our revision control system track the changes. I'm not saying the above service is the most efficient or correct way to handle this, but it's worked so far for our needs.
Have you considered using Grails GORM and an HSQLDB database to store the SQL you want executed? You could then put in a record for each service containing that service's SQL and retrieve it using normal Grails GORM functions. You could generate a default set of controllers and views that would allow you to edit the SQL. If you want to store the SQL in external files, you can create a subdirectory in the web-app directory called sql, then store your SQL statements as text files. You could create a class that takes a service name, loads the associated text file containing the SQL, and returns the contents of that file. Without knowing how complex your SQL will be, I can't say what the best format would be. If you're dealing with normal select statements with no parameter substitution, plain text would be best. If you're dealing with more complex SQL with substitutions and multiple queries, you may want to use XML.