I'm just learning to use SQLite and I was curious whether the following are possible:
Encryption of the database file?
Password protect opening of the database?
PS. I know there is the "SQLite Encryption Extension (SEE)", but according to the documentation, "The SEE is licensed software..." and "The cost of a perpetual source code license for SEE is US $2000."
SQLite has hooks built-in for encryption which are not used in the normal distribution, but here are a few implementations I know of:
SEE - The official implementation.
wxSQLite - A wxWidgets style C++ wrapper that also implements SQLite's encryption.
SQLCipher - Uses OpenSSL's libcrypto to implement encryption.
SQLiteCrypt - Custom implementation, modified API.
botansqlite3 - An encryption codec for SQLite3 that can use any of the algorithms in Botan for encryption.
sqleet - Another encryption implementation, using the ChaCha20/Poly1305 primitives. Note that wxSQLite mentioned above can use this as a crypto provider.
The SEE and SQLiteCrypt require the purchase of a license.
Disclosure: I created botansqlite3.
You can password-protect a SQLite3 DB.
The first time, before doing any other operations, set the password as follows:
SQLiteConnection conn = new SQLiteConnection("Data Source=MyDatabase.sqlite;Version=3;");
conn.SetPassword("password");
conn.Open();
Then, the next time, you can access it like this:
conn = new SQLiteConnection("Data Source=MyDatabase.sqlite;Version=3;Password=password;");
conn.Open();
This won't allow any GUI editor to view your data.
Later, if you wish to change the password, use conn.ChangePassword("new_password");
To reset or remove the password, use conn.ChangePassword(String.Empty);
The .net library System.Data.SQLite also provides for encryption.
You can get a sqlite3.dll file with encryption support from http://system.data.sqlite.org/.
1 - Go to http://system.data.sqlite.org/index.html/doc/trunk/www/downloads.wiki and download one of the packages. The .NET version is irrelevant here.
2 - Extract SQLite.Interop.dll from the package and rename it to sqlite3.dll. This DLL supports encryption via plaintext passwords or encryption keys.
The mentioned file is native and does NOT require the .NET Framework. It might need the Visual C++ Runtime, depending on the package you downloaded.
UPDATE
This is the package that I've downloaded for 32-bit development: http://system.data.sqlite.org/blobs/1.0.94.0/sqlite-netFx40-static-binary-Win32-2010-1.0.94.0.zip
Keep in mind, the following is not intended to be a substitute for a proper security solution.
After playing around with this for four days, I've put together a solution using only the open source System.Data.SQLite package from NuGet. I don't know how much protection this provides. I'm only using it for my own course of study. This will create the DB, encrypt it, create a table, and add data.
using System.Data.SQLite;

namespace EncryptDB
{
    class Program
    {
        static void Main(string[] args)
        {
            string databaseFile = @"C:\Programming\sqlite3\db.db";
            string passwordString = "password";
            byte[] passwordBytes = GetBytes(passwordString);
            SQLiteConnection.CreateFile(databaseFile);
            SQLiteConnection conn = new SQLiteConnection("Data Source=" + databaseFile + ";Version=3;");
            conn.SetPassword(passwordBytes);
            conn.Open();
            SQLiteCommand sqlCmd = new SQLiteCommand("CREATE TABLE data(filename TEXT, filepath TEXT, filelength INTEGER, directory TEXT)", conn);
            sqlCmd.ExecuteNonQuery();
            sqlCmd = new SQLiteCommand("INSERT INTO data VALUES('name', 'path', 200, 'dir')", conn);
            sqlCmd.ExecuteNonQuery();
            conn.Close();
        }

        static byte[] GetBytes(string str)
        {
            // Encode the password string with the system default encoding.
            byte[] bytes = System.Text.Encoding.Default.GetBytes(str);
            return bytes;
        }
    }
}
Optionally, you can remove conn.SetPassword(passwordBytes); and replace it with conn.ChangePassword("password");, which needs to be placed after conn.Open(); instead of before. Then you won't need the GetBytes method.
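For reference, here is a minimal sketch of that variant, using the same System.Data.SQLite package (the file path and password are just placeholders):

using System.Data.SQLite;

namespace EncryptDB
{
    class ChangePasswordVariant
    {
        static void Main()
        {
            string databaseFile = @"C:\Programming\sqlite3\db.db"; // placeholder path

            SQLiteConnection.CreateFile(databaseFile);
            using (var conn = new SQLiteConnection("Data Source=" + databaseFile + ";Version=3;"))
            {
                conn.Open();                     // open the still-unencrypted database
                conn.ChangePassword("password"); // encrypt it in place; no GetBytes needed

                using (var cmd = new SQLiteCommand("CREATE TABLE data(filename TEXT, filepath TEXT)", conn))
                {
                    cmd.ExecuteNonQuery();
                }
            }
        }
    }
}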
To decrypt, it's just a matter of putting the password in your connection string before the call to open.
string filename = @"C:\Programming\sqlite3\db.db";
string passwordString = "password";
SQLiteConnection conn = new SQLiteConnection("Data Source=" + filename + ";Version=3;Password=" + passwordString + ";");
conn.Open();
You can always encrypt data on the client side. Please note that not all of the data has to be encrypted, because encryption comes with a performance cost.
You can use SQLite's function creation routines (PHP manual):
$db_obj->sqliteCreateFunction('Encrypt', 'MyEncryptFunction', 2);
$db_obj->sqliteCreateFunction('Decrypt', 'MyDecryptFunction', 2);
When inserting data, you can use the encryption function directly and INSERT the encrypted data or you can use the custom function and pass unencrypted data:
$insert_obj = $db_obj->prepare('INSERT INTO table (Clear, Encrypted) ' .
'VALUES (:clear, Encrypt(:data, "' . $passwordhash_str . '"))');
When retrieving data, you can also use SQL search functionality:
$select_obj = $db_obj->prepare('SELECT Clear, ' .
'Decrypt(Encrypted, "' . $passwordhash_str . '") AS PlainText FROM table ' .
'WHERE PlainText LIKE :searchterm');
Well, SEE is expensive. However, SQLite has a built-in interface for encryption (the pager). This means that, on top of the existing code, one can easily develop some encryption mechanism; it does not have to be AES. Anything, really.
Please see my post here: https://stackoverflow.com/a/49161716/9418360
You need to define SQLITE_HAS_CODEC=1 to enable Pager encryption. Sample code below (original SQLite source):
#ifdef SQLITE_HAS_CODEC
/*
** This function is called by the wal module when writing page content
** into the log file.
**
** This function returns a pointer to a buffer containing the encrypted
** page content. If a malloc fails, this function may return NULL.
*/
SQLITE_PRIVATE void *sqlite3PagerCodec(PgHdr *pPg){
  void *aData = 0;
  CODEC2(pPg->pPager, pPg->pData, pPg->pgno, 6, return 0, aData);
  return aData;
}
#endif
There is a commercial version in C for SQLite encryption using AES256. It can also work with PHP, but it needs to be compiled together with the PHP and SQLite extensions. It encrypts and decrypts the SQLite database file on the fly, so the file contents are always encrypted. Very useful.
http://www.iqx7.com/products/sqlite-encryption
I also had a similar problem. I needed to store sensitive data in a simple database (SQLite was the perfect choice, except for security). Finally, I placed the database file on a TrueCrypt encrypted volume.
An additional console app mounts a temporary drive using the TrueCrypt CLI and then starts the database application. It waits until the database application exits and then dismounts the drive again.
Maybe not a suitable solution in all scenarios, but it works well for me... A rough sketch of such a launcher is below.
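The TrueCrypt.exe path, the volume path, the drive letter, and the command-line switches in this sketch are assumptions; check the command-line documentation of your TrueCrypt version before relying on them.

using System.Diagnostics;

class SecureDbLauncher
{
    static void Main()
    {
        // Assumed install path; TrueCrypt will prompt for the volume password itself.
        string trueCrypt = @"C:\Program Files\TrueCrypt\TrueCrypt.exe";

        // Mount the encrypted volume as drive X: and return once mounted.
        Process.Start(trueCrypt, @"/volume C:\secure\data.tc /letter X /quit").WaitForExit();

        // Start the database application against the mounted drive and wait for it to exit.
        Process.Start(@"C:\apps\MyDatabaseApp.exe", @"X:\db.sqlite").WaitForExit();

        // Dismount the drive again.
        Process.Start(trueCrypt, @"/dismount X /quit").WaitForExit();
    }
}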
Related
I have the following C# code in a console application.
Whenever I debug the application and run query1 (which inserts a new value into the database) and then run query2 (which displays all the entries in the database), I can see the new entry I inserted clearly. However, when I close the application and check the table in the database (in Visual Studio), it is gone. I have no idea why it is not saving.
using System;
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Data.SqlServerCe;
using System.Data;
namespace ConsoleApplication1
{
class Program
{
static void Main(string[] args)
{
try
{
string fileName = "FlowerShop.sdf";
string fileLocation = "|DataDirectory|\\";
DatabaseAccess dbAccess = new DatabaseAccess();
dbAccess.Connect(fileName, fileLocation);
Console.WriteLine("Connected to the following database:\n"+fileLocation + fileName+"\n");
string query = "Insert into Products(Name, UnitPrice, UnitsInStock) values('NewItem', 500, 90)";
string res = dbAccess.ExecuteQuery(query);
Console.WriteLine(res);
string query2 = "Select * from Products";
string res2 = dbAccess.QueryData(query2);
Console.WriteLine(res2);
Console.ReadLine();
}
catch (Exception e)
{
Console.WriteLine(e);
Console.ReadLine();
}
}
}
class DatabaseAccess
{
private SqlCeConnection _connection;
public void Connect(string fileName, string fileLocation)
{
Connect(#"Data Source=" + fileLocation + fileName);
}
public void Connect(string connectionString)
{
_connection = new SqlCeConnection(connectionString);
}
public string QueryData(string query)
{
_connection.Open();
using (SqlCeDataAdapter da = new SqlCeDataAdapter(query, _connection))
using (DataSet ds = new DataSet("Data Set"))
{
da.Fill(ds);
_connection.Close();
return ds.Tables[0].ToReadableString(); // an extension method I created
}
}
public string ExecuteQuery(string query)
{
_connection.Open();
using (SqlCeCommand c = new SqlCeCommand(query, _connection))
{
int r = c.ExecuteNonQuery();
_connection.Close();
return r.ToString();
}
}
}
}
EDIT: Forgot to mention that I am using SQL Server Compact Edition 4 and VS2012 Express.
It is quite a common problem. You use the |DataDirectory| substitution string. This means that, while debugging your app in the Visual Studio environment, the database used by your application is located in the BIN\DEBUG subfolder (or x86 variant) of your project. And this works well, as you don't get any kind of error connecting to the database and making update operations.
But then you exit the debug session and look at your database through the Visual Studio Server Explorer (or any other suitable tool). This window has a different connection string (probably pointing to the copy of your database in the project folder). You search your tables and you don't see the changes.
Then the problem gets worse. You restart VS to go hunting for the bug in your app, but your database file is listed among your project files with the property Copy to Output Directory set to Copy Always. At this point Visual Studio obliges and copies the original database file from the project folder to the output folder (BIN\DEBUG), and thus your previous changes are lost.
Now your application inserts/updates the target table again, you again can't find any error in your code, and you restart the loop until you decide to post or search on StackOverflow.
You can stop this problem by clicking on the database file listed in your Solution Explorer and changing the property Copy to Output Directory to Copy If Newer or Never Copy. Also, you could update your connection string in the Server Explorer to look at the working copy of your database, or create a second connection. The first one still points to the database in the project folder, while the second one points to the database in the BIN\DEBUG folder. In this way you can keep the original database ready for deployment purposes and schema changes, while with the second connection you can look at the effective results of your coding efforts.
EDIT Special warning for MS-Access database users. The simple act of looking at your table changes the modified date of your database, even if you don't write or change anything. So the flag Copy If Newer kicks in and the database file is copied to the output directory. With Access, better to use Copy Never.
Committing changes / saving changes across debug sessions is a familiar topic in SQL CE forums. It is something that trips up quite a few people. I'll post links to the source articles below, but I wanted to paste the answer that seems to get the best results for the most people:
You have several options to change this behavior. If your sdf file is part of the content of your project, this will affect how data is persisted. Remember that when you debug, all output of your project (including the sdf) is in the bin/debug folder.
You can decide not to include the sdf file as part of your project and manage the file location at runtime.
If you are using "Copy if newer", any project changes you make to the database will overwrite any runtime/debug changes.
If you are using "Do not copy", you will have to specify the location in code (as two levels above where your program is running); a hedged sketch of this is shown below, after the answer links.
If you have "Copy always", any changes made during runtime will always be overwritten.
Answer Source
Here is a link to some further discussion and how-to documentation.
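For the "Do not copy" option mentioned above, here is a hedged sketch of resolving the .sdf path relative to the output folder. The file name and the two-levels-up layout are assumptions based on the answer above; adjust them to your project layout.

using System;
using System.Data.SqlServerCe;
using System.IO;

class SdfLocator
{
    static void Main()
    {
        // bin\Debug -> bin -> project folder (two levels above where the program runs).
        string projectDir = Path.GetFullPath(
            Path.Combine(AppDomain.CurrentDomain.BaseDirectory, @"..\..\"));
        string sdfPath = Path.Combine(projectDir, "FlowerShop.sdf");

        using (var conn = new SqlCeConnection("Data Source=" + sdfPath))
        {
            conn.Open();
            Console.WriteLine("Connected to: " + sdfPath);
        }
    }
}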
As far as I know, SQL Server has had a new feature since version 2012: FileTable. It allows us to store files in the file system and to use them from T-SQL.
I am trying to use this feature and I have no idea how to do it properly.
Generally, I don't know how to access files stored in a FileTable. Let's suppose I have an ASP.NET MVC app with a lot of images that I show on web pages in img tags. I would like to store these images in a FileTable and access them as files from the filesystem. But I don't know where these files are stored or how to use them as files. Right now my images are stored in the web application directory in an images folder, and I write something like this:
<img src='/images/mypicture.png' />
And if I move my images to a FileTable, what should I write in src?
<img src='path-toimage-in-filetable' />
I don't think you still need this; anyway, I'll post my answer for anyone else interested.
First, a FileTable is still a table, so if you want to access its data you need to use a SELECT SQL statement. You'd need something like:
select name, file_stream from filetable_name
where name = 'file_name'
  and file_type = 'file_extension'
Just execute a statement like this in your ASP.NET app, then fetch the results and use the file_stream column to get the binary data of the stored file. If you want to retrieve the file from HTML, first you need to create an action in your controller that returns the retrieved file (a fuller sketch follows the snippet below):
public ActionResult GetFile(){
..
return File(file.file_stream,file.file_type);
}
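Filling in the elided part, here is one possible sketch of that action using plain ADO.NET. The FileTable name (filetable_name), the name parameter, the connection string, and the extension-to-MIME mapping are assumptions; file_stream and file_type are standard FileTable columns.

using System.Data.SqlClient;
using System.Web.Mvc;

public class FilesController : Controller
{
    // Assumed: read the real connection string from configuration.
    private const string ConnectionString = "Data Source=.;Initial Catalog=MyDb;Integrated Security=True";

    public ActionResult GetFile(string name)
    {
        const string sql =
            "SELECT file_stream, file_type FROM filetable_name WHERE name = @name";

        using (var conn = new SqlConnection(ConnectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@name", name);
            conn.Open();
            using (var rdr = cmd.ExecuteReader())
            {
                if (!rdr.Read())
                    return HttpNotFound();

                byte[] data = (byte[])rdr["file_stream"];
                string ext = (string)rdr["file_type"]; // e.g. ".png"

                // Very rough extension-to-MIME mapping, just for the example.
                string mime = ext == ".png" ? "image/png" : "application/octet-stream";
                return File(data, mime);
            }
        }
    }
}

With a name parameter like this, the image URL would also carry a query string, e.g. /Files/GetFile?name=mypicture.png.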
After this, put something like the following in your HTML:
<img src="/controller/GetFile" />
Hope this helps!
If you want to know the schema of a FileTable, see here.
I assume by FileTable you actually mean FileStream. A couple of notes about that:
This feature is best used if your files are actually files.
The files should be, on average, greater than 1 MB. There can be exceptions to this rule, but if they're smaller than 1 MB on average, you may be better off using a VARBINARY(MAX) or XML data type as appropriate; if your images are very small on average (only a few KB), a VARBINARY(MAX) column is likely the better choice.
Accessing these files requires an open transaction and a database that is properly configured for FILESTREAM.
You can get some significant advantages by bypassing the normal SQL engine/database file method of data access and telling SQL Server that you want to access the file directly; however, it's not meant for directly accessing the file on the file system, and attempting to do so can break SQL's management of these files (transactional consistency, tracking, locking, etc.).
It's pretty likely that your use case would be better served by using a CDN and storing image URLs in the table, if you really need SQL for this at all. You can use FILESTREAM to do it (see the code sample below for one implementation), but you'll be hammering your SQL Server for every request unless you store the images somewhere else that the browser can properly cache (my example doesn't do that) - and if you store them somewhere else for rendering in the browser, you might as well store them there to begin with (you won't have transactional consistency for those images once they're copied to some other drive/disk/location anyway).
With all that said, here's an example of how you'd access the FILESTREAM data using ADO.NET:
public static string connectionString = ...; // get your connection string from encrypted config
// assumes your FILESTREAM data column is called Img in a table called ImageTable
const string sql = @"
SELECT
Img.PathName(),
GET_FILESTREAM_TRANSACTION_CONTEXT()
FROM ImageTable
WHERE ImageId = @id";
public string RetrieveImage(int id)
{
string serverPath;
byte[] txnToken;
string base64ImageData = null;
using (var ts = new TransactionScope())
{
using (var conn = new SqlConnection(connectionString))
{
conn.Open();
using (SqlCommand cmd = new SqlCommand(sql, conn))
{
cmd.Parameters.Add("#id", SqlDbType.Int).Value = id;
using (SqlDataReader rdr = cmd.ExecuteReader())
{
rdr.Read();
serverPath = rdr.GetSqlString(0).Value;
txnToken = rdr.GetSqlBinary(1).Value;
}
}
using (var sfs = new SqlFileStream(serverPath, txnToken, FileAccess.Read))
{
// sfs will now work basically like a FileStream. You can either copy it locally or return it as a base64 encoded string
using (var ms = new MemoryStream())
{
sfs.CopyTo(ms);
base64ImageData = Convert.ToBase64String(ms.ToArray());
}
}
}
ts.Complete();
// assume this is PNG image data; replace png with jpeg etc. as appropriate. Might store the type in the table if it will vary...
return "data:image/png;base64," + base64ImageData;
}
}
Obviously, if you have lots of images to handle like this, this is not an ideal method - don't try to turn an instance of SQL Server into what you should be using a CDN for. However, if you have other really good reasons, you should try to grab as many images as possible in a single request/transaction (e.g. if you know you're displaying 50 images on a page, get all 50 with a single transaction scope; don't use 50 transaction scopes - this code won't handle that).
I have a project where we need to migrate a lot of users who have their passwords in plain text into a new database where we will hash the passwords.
The new system uses Entity Framework, and authentication is handled by the ASP.NET Identity framework.
I found that I can generate in C# a correctly hashed password that Entity Framework can read without problems.
public static string HashPassword(string password)
{
    // Produces the format used by the default ASP.NET Identity (v2) password hasher:
    // [ 0x00 format marker | 16-byte salt | 32-byte PBKDF2-SHA1 subkey ], Base64-encoded.
    byte[] salt;
    byte[] buffer2;
    // 0x10 = 16-byte salt, 0x3e8 = 1000 PBKDF2 iterations
    using (var bytes = new Rfc2898DeriveBytes(password, 0x10, 0x3e8))
    {
        salt = bytes.Salt;
        buffer2 = bytes.GetBytes(0x20); // 0x20 = 32-byte subkey
    }
    byte[] dst = new byte[0x31]; // 49 bytes; dst[0] stays 0x00 as the format marker
    Buffer.BlockCopy(salt, 0, dst, 1, 0x10);
    Buffer.BlockCopy(buffer2, 0, dst, 0x11, 0x20);
    return Convert.ToBase64String(dst);
}
Is there something similar in SQL that I could use within an INSERT statement fed from a SELECT on the other table?
Not built in. Hashing is CPU intensive and normally an operation you would want to avoid on the DB server, though I realize a migration is not a normal operation. The solution depends a bit on why you want to run it in SQL.
If it's for simplicity, I would look at something like this question: Is there a SQL implementation of PBKDF2?
If it's for performance, I would consider just building a small .NET migrator and using bulk inserts/updates. For example, with
https://github.com/MikaelEliasson/EntityFramework.Utilities#batch-update-entities you could read only the UserId and the plain-text Password with a select, hash it in .NET, and then update the database in one bulk operation at probably over 100k updates/second. A rough sketch of that approach follows below.
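This sketch uses plain ADO.NET for readability; the Users/UserId/Password/PasswordHash table and column names are assumptions, and for real bulk throughput you would swap the row-by-row update for the batch update from the library above (or SqlBulkCopy into a staging table).

using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Security.Cryptography;

class PasswordMigrator
{
    static void Main()
    {
        const string connStr = "Data Source=.;Initial Catalog=NewDb;Integrated Security=True"; // assumed

        var hashes = new Dictionary<int, string>();

        using (var conn = new SqlConnection(connStr))
        {
            conn.Open();

            // 1. Read only the key and the plain-text password.
            using (var cmd = new SqlCommand("SELECT UserId, Password FROM Users", conn))
            using (var rdr = cmd.ExecuteReader())
            {
                while (rdr.Read())
                    hashes[rdr.GetInt32(0)] = HashPassword(rdr.GetString(1));
            }

            // 2. Write the hashes back (batch this for large tables).
            using (var tx = conn.BeginTransaction())
            {
                foreach (var pair in hashes)
                {
                    using (var upd = new SqlCommand(
                        "UPDATE Users SET PasswordHash = @hash WHERE UserId = @id", conn, tx))
                    {
                        upd.Parameters.AddWithValue("@hash", pair.Value);
                        upd.Parameters.AddWithValue("@id", pair.Key);
                        upd.ExecuteNonQuery();
                    }
                }
                tx.Commit();
            }
        }

        Console.WriteLine("Migrated {0} users.", hashes.Count);
    }

    // The HashPassword method from the question (PBKDF2, 16-byte salt, 32-byte subkey, 1000 iterations).
    static string HashPassword(string password)
    {
        byte[] salt, subkey;
        using (var bytes = new Rfc2898DeriveBytes(password, 0x10, 0x3e8))
        {
            salt = bytes.Salt;
            subkey = bytes.GetBytes(0x20);
        }
        byte[] dst = new byte[0x31];
        Buffer.BlockCopy(salt, 0, dst, 1, 0x10);
        Buffer.BlockCopy(subkey, 0, dst, 0x11, 0x20);
        return Convert.ToBase64String(dst);
    }
}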
Now, two slight warnings:
Make sure you don't end up with plain-text passwords in the transaction log, preferably by doing the hashing in the source database before it ends up in the new one. Otherwise, it's possible to clear the transaction log after the initial import: How do you clear the SQL Server transaction log?
Instead of writing the hashing method yourself, you can use PasswordHasher, which is what ASP.NET Identity uses by default. It in turn uses Rfc2898DeriveBytes. See this answer: https://stackoverflow.com/a/21496255/507279. A minimal usage sketch follows.
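This sketch uses PasswordHasher from the Microsoft.AspNet.Identity.Core package (class and method names as in Identity 2.x):

using System;
using Microsoft.AspNet.Identity;

class PasswordHasherExample
{
    static void Main()
    {
        var hasher = new PasswordHasher();

        // Produces a salted PBKDF2 hash in the format ASP.NET Identity expects.
        string hash = hasher.HashPassword("plain-text-password");

        // Later, verification against a stored hash:
        PasswordVerificationResult result =
            hasher.VerifyHashedPassword(hash, "plain-text-password");

        Console.WriteLine(result); // Success / Failed / SuccessRehashNeeded
    }
}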