Query across two SQLite databases in Delphi TFDQuery [duplicate] - sql

I have an application that uses a SQLite database and everything works the way it should. I'm now in the process of adding new functionalities that require a second SQLite database, but I'm having a hard time figuring out how to join tables from the different databases.
If someone can help me out with this one, I'd really appreciate it!
Edit: See this question for an example case you can adapt to your language when you attach databases as mentioned in the accepted answer.

If ATTACH is activated in your build of SQLite (it should be in most builds), you can attach another database file to the current connection using the ATTACH keyword. The number of databases that can be attached is a compile-time setting (SQLITE_MAX_ATTACHED), which defaults to 10, but this too may vary by the build you have. The hard upper limit is 125.
attach 'database1.db' as db1;
attach 'database2.db' as db2;
You can list the databases attached to the current connection with the sqlite3 shell command
.databases
Then you should be able to do the following.
select *
from db1.SomeTable a
inner join db2.SomeTable b on b.SomeColumn = a.SomeColumn;
Note that "[t]he database names main and temp are reserved for the primary database and database to hold temporary tables and other temporary data objects. Both of these database names exist for every database connection and should not be used for attachment".

Here is a C# example to round out this question.
/// <summary>
/// Runs a cross-database query over a single SQLite connection.
/// Example attachSQL: ATTACH 'C:\\WOI\\Daily SQL\\Attak.sqlite' AS db1
/// Example sqlQuery:  SELECT A.SNo, A.MsgDate, A.ErrName, B.SNo AS BSNo, B.Err AS ErrAtB
///                    FROM Table1 AS A
///                    INNER JOIN db1.Labamba AS B ON A.ErrName = B.Err
/// </summary>
/// <param name="attachSQL">ATTACH statement for the second database file</param>
/// <param name="sqlQuery">query joining tables from both databases</param>
public static DataTable GetDataTableFrom2DBFiles(string attachSQL, string sqlQuery)
{
    try
    {
        // "path" holds the file path of the primary SQLite database
        string connectionString = "data source=" + path + ";";
        using (SQLiteConnection singleConnectionFor2DBFiles = new SQLiteConnection(connectionString))
        {
            singleConnectionFor2DBFiles.Open();
            // attach the second database file to this connection
            using (SQLiteCommand attachCommand = new SQLiteCommand(attachSQL, singleConnectionFor2DBFiles))
            {
                attachCommand.ExecuteNonQuery();
            }
            // run the cross-database join and fill a DataTable with the result
            using (SQLiteCommand selectCommand = new SQLiteCommand(sqlQuery, singleConnectionFor2DBFiles))
            using (SQLiteDataAdapter adapter = new SQLiteDataAdapter(selectCommand))
            {
                DataTable dt = new DataTable();
                adapter.AcceptChangesDuringFill = true;
                adapter.Fill(dt);
                return dt;
            }
        }
    }
    catch (Exception ex)
    {
        MessageBox.Show("An error occurred: " + ex.Message);
        return null;
    }
}
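A call might then look like this, reusing the paths and table names from the doc comment above (adapt them to your own files):
DataTable result = GetDataTableFrom2DBFiles(
    "ATTACH 'C:\\WOI\\Daily SQL\\Attak.sqlite' AS db1",
    @"SELECT A.SNo, A.MsgDate, A.ErrName, B.SNo AS BSNo, B.Err AS ErrAtB
      FROM Table1 AS A
      INNER JOIN db1.Labamba AS B ON A.ErrName = B.Err");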

Well, I don't have much experience with SQLite, but if you have to access both databases in a single query,
you can write something like:
select a.name from DB1.table1 as a join DB2.table2 as b on a.age = b.age;
In databases like SQL Server you can access other databases in this hierarchical fashion; in SQLite you first have to attach the second database to the connection (see the ATTACH answer above), after which the same style works.
I think you can run a single instance of SQLite with more than one database!

Related

Script table as CREATE TO by using vb.net

In SQL Server I can create a table that is a duplicate of another table, with all its constraints. I can use Script Table As > CREATE To in SQL Server Management Studio to do this. Then I can run the script in another database so that the same table is recreated, but without data. I want to do the same using VB.NET code. The important point is that all the constraints and table properties are set properly.
You can use the SMO (SQL Server Management Objects) assembly to script out tables to a string inside your application. I'm using C# here, but the same can be done easily in VB.NET, too.
// Define your database and table you want to script out
string dbName = "YourDatabase";
string tableName = "YourTable";
// set up the SMO server objects - I'm using "integrated security" here for simplicity
Server srv = new Server();
srv.ConnectionContext.LoginSecure = true;
srv.ConnectionContext.ServerInstance = "YourSQLServerInstance";
// get the database in question
Database db = srv.Databases[dbName];
StringBuilder sb = new StringBuilder();
// define the scripting options - what options to include or not
ScriptingOptions options = new ScriptingOptions();
options.ClusteredIndexes = true;
options.Default = true;
options.DriAll = true;
options.Indexes = true;
options.IncludeHeaders = true;
// script out the table's creation
Table tbl = db.Tables[tableName];
StringCollection coll = tbl.Script(options);
foreach (string str in coll)
{
sb.Append(str);
sb.Append(Environment.NewLine);
}
// you can get the string that makes up the CREATE script here
// do with this CREATE script whatever you like!
string createScript = sb.ToString();
You need to reference several SMO assemblies, typically Microsoft.SqlServer.Smo, Microsoft.SqlServer.ConnectionInfo, and Microsoft.SqlServer.Management.Sdk.Sfc.
Read more about SMO and how to use it here:
Getting Started with SQL Server Management Objects (SMO)
Generate Scripts for database objects with SMO for SQL Server

Cascading deletion with ADO.NET

I have an application in which I need to delete a row from the Client table:
public void Delete_Client(int _id_client)
{
Data.Connect();
using (Data.connexion)
{
string s = "Delete From CLIENT where id_client = " + _id_client;
SqlCommand command = new SqlCommand(s, Data.connexion);
try
{
command.ExecuteNonQuery();
}
catch { }
}
}
The Client table contains a foreign-key reference to another table, so an exception appears indicating that the deletion must cascade.
How can I change my code to do this? (I'm using SQL Server as the DBMS.)
IMO you should avoid using ON DELETE CASCADE because:
You lose control over what is being removed
Table references have to be altered to enable it
Also, use parameterized queries (good all-around advice).
So let's change your query. I added ClientOrder as an example table that holds a foreign-key reference to our soon-to-be-deleted client.
First I remove all orders linked to the client, then I delete the client itself. Repeat this pattern for all the other tables that are linked to the Client table.
public void Delete_Client(int _id_client)
{
Data.Connect();
using (Data.connexion)
{
string query = "delete from ClientOrder where id_client = @clientId; delete from CLIENT where id_client = @clientId";
SqlCommand command = new SqlCommand(query, Data.connexion);
command.Parameters.AddWithValue("@clientId", _id_client);
try
{
command.ExecuteNonQuery();
}
catch { } //silencing errors is wrong, if something goes wrong you should handle it
}
}
Parameterized queries have many advantages. First of all, they are safer (look up SQL injection attacks). Second, types are resolved by the framework (especially helpful for DateTime, where manual formatting is error-prone).
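To illustrate the DateTime point with the same Data.connexion as above (the last_visit column is made up for the example), the framework passes the value in its native type, so there is no date-format string to get wrong:
string updateQuery = "update CLIENT set last_visit = @visit where id_client = @clientId";
SqlCommand updateCommand = new SqlCommand(updateQuery, Data.connexion);
updateCommand.Parameters.AddWithValue("@visit", DateTime.Now); // typed value, no manual formatting
updateCommand.Parameters.AddWithValue("@clientId", _id_client);
updateCommand.ExecuteNonQuery();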

Updating Data Source Login Credentials for SSRS Report Server Tables

I have added a lot of reports with an invalid data source login to an SSRS report server, and I wanted to update the user name and password with a script so I don't have to update each report individually.
However, from what I can tell, the fields are stored as Image data and are encrypted. I can't find anything about how they are encrypted or how to update them. It appears that the user name and password are stored in the dbo.DataSource table. Any ideas? I want the script to run in SQL.
I would be very, very, VERY leery of hacking the Reporting Services tables. It may be that someone out there can offer a reliable way to do what you suggest, but it strikes me as a good way to clobber your entire installation.
My suggestion would be that you make use of the Reporting Services APIs and write a tiny app to do this for you. The APIs are very full-featured -- pretty much anything you can do from the Report Manager website, you can do with the APIs -- and fairly simple to use.
The following code does NOT do exactly what you want -- it points the reports to a shared data source -- but it should show you the basics of what you'd need to do.
public void ReassignDataSources()
{
using (ReportingService2005 client = new ReportingService2005())
{
var reports = client.ListChildren(FolderName, true).Where(ci => ci.Type == ItemTypeEnum.Report);
foreach (var report in reports)
{
SetServerDataSource(client, report.Path);
}
}
}
private void SetServerDataSource(ReportingService2005 client, string reportPath)
{
var itemSources = client.GetItemDataSources(reportPath);
if (itemSources.Any())
client.SetItemDataSources(
reportPath,
new DataSource[] {
new DataSource() {
Item = CreateServerDataSourceReference(),
Name = itemSources.First().Name
}
});
}
private DataSourceDefinitionOrReference CreateServerDataSourceReference()
{
return new DataSourceReference() { Reference = _DataSourcePath };
}
I doubt this answers your question directly, but I hope it can offer some assistance.
MSDN Specifying Credentials
MSDN also suggests using shared data sources for this very reason: See MSDN on shared data sources
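If you do need to update the stored credentials themselves rather than repointing reports at a shared data source, a sketch in the same vein might look like this (this assumes shared data sources with stored credentials, using the ReportingService2005 API; the user name and password values are placeholders):
private void UpdateStoredCredentials(ReportingService2005 client, string dataSourcePath)
{
    // read the current definition, overwrite the login, and write it back
    DataSourceDefinition definition = client.GetDataSourceContents(dataSourcePath);
    definition.CredentialRetrieval = CredentialRetrievalEnum.Store;
    definition.UserName = "newUserName";
    definition.Password = "newPassword";
    client.SetDataSourceContents(dataSourcePath, definition);
}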

ADODB strange behavior

Recently I had a very strange problem.
The application is written in classic ASP, but I guess it is the same case for every connection that uses ADO/OLEDB.
These are the connection parameters:
conn=Server.CreateObject("ADODB.Connection");
conn.Provider="Microsoft.Jet.OLEDB.4.0";
conn.Open("D:/db/testingDb.mdb");
In short this code:
conn.Open("myconnection");
bigQuery = "...";
rs = conn.execute(bigQuery);
while (!rs.eof) {
...
smallQuery = "..."
rssmall = conn.execute(smallQuery);
...
rssmall.close();
...
rs.movenext();
}
rs.close();
conn.close();
This doesn't work if bigQuery returns more than a certain number of rows (in my case, ~20).
But if I use one more connection for the inner loop, as stealthyninja suggested:
conn.Open("myconnection");
conn2.Open("myconnection");
bigQuery = "...";
rs = conn.execute(bigQuery);
while (!rs.eof) {
...
smallQuery = "..."
rssmall = conn2.execute(smallQuery);
...
rssmall.close();
...
rs.movenext();
}
rs.close();
conn2.close();
conn.close();
Problem vanishes.
I am using Access database and IIS7 if that matters.
Does anyone have a logical explanation for this?
Michael Todd's comment has it: ADODB doesn't support MARS (Multiple Active Result Sets), which is what you are trying to use here. The reason it seems to work below ~20 records is that that's roughly how many rows the provider initially transmits to the client side.
Standard solutions to this are:
Retrieve the whole outer rowset into a holding structure or cache first, then process it and execute the inner queries (see the sketch below), or
Use two different connections, as you have demonstrated.
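A minimal C# sketch of the first approach, assuming illustrative table names and the Jet connection from the question -- the point is that the outer recordset is fully read and closed before any inner query runs, so only one result set is ever active on the connection:
using System;
using System.Collections.Generic;
using System.Data.OleDb;

class OuterRowsetFirst
{
    static void Main()
    {
        string connStr = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=D:/db/testingDb.mdb";
        using (OleDbConnection conn = new OleDbConnection(connStr))
        {
            conn.Open();
            // 1) drain the whole outer rowset into a holding structure first
            List<int> ids = new List<int>();
            using (OleDbCommand outer = new OleDbCommand("SELECT Id FROM BigTable", conn))
            using (OleDbDataReader reader = outer.ExecuteReader())
            {
                while (reader.Read())
                    ids.Add(reader.GetInt32(0));
            } // the outer recordset is closed here, freeing the connection
            // 2) now each inner query has the connection to itself
            foreach (int id in ids)
            {
                using (OleDbCommand inner = new OleDbCommand(
                    "SELECT COUNT(*) FROM SmallTable WHERE ParentId = ?", conn))
                {
                    inner.Parameters.AddWithValue("?", id);
                    int count = Convert.ToInt32(inner.ExecuteScalar());
                    Console.WriteLine("{0}: {1}", id, count);
                }
            }
        }
    }
}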

Grails - store sql that will be used by services

I am writing a Grails application that will mostly be using the springws web services plugin with endpoints backed by services. The services will retrieve data from a variety of back end databases (i.e., not via domain classes and GORM). I would like to store the sql that my services will be using to fetch the data for the web services in external files.
I'm looking for suggestions on:
Where is the best place to keep the files (i.e., I'd like to put them somewhere obvious like grails-app/sql) and best format (i.e., xml, configslurper, etc.)
Best way to abstract the retrieving of the sql text so my services that will execute the sql will not need to know where or how they are fetched. Services will just provide a sqlid and get the sql.
I was working on a project recently where I needed to do something similar. I created the following directory to store the sql files:
./grails-app/conf/sql
For example there is a file ./grails-app/conf/sql/hr/FIND_PERSON_BY_ID.sql that has something like the following:
select a.id
, a.first_name
, a.last_name
from person a
where a.id = ?
I created a SqlCatalogService class that would load all files in that directory (and subdirectories) and store the filenames (minus extension) and file text in a Map. The service has a get(id) method that returns the sql text that is cached in the Map. Since files/directories stored in grails-app/conf are placed in the classpath, the SqlCatalogService uses the following code to read in the files:
....
....
Map<String,String> sqlCache = [:]
....
....
void loadSqlCache() {
try {
loadSqlCacheFromDirectory(new File(this.class.getResource("/sql/").getFile()))
} catch (Exception ex) {
log.error(ex)
}
}
void loadSqlCacheFromDirectory(File directory) {
log.info "Loading SQL cache from disk using base directory ${directory.name}"
synchronized(sqlCache) {
if(sqlCache.size() == 0) {
try {
directory.eachFileRecurse { sqlFile ->
if(sqlFile.isFile() && sqlFile.name.toUpperCase().endsWith(".SQL")) {
def sqlKey = sqlFile.name.toUpperCase()[0..-5]
sqlCache[sqlKey] = sqlFile.text
log.debug "added SQL [${sqlKey}] to cache"
}
}
} catch (Exception ex) {
log.error(ex)
}
} else {
log.warn "request to load sql cache and cache not empty: size [${sqlCache.size()}]"
}
}
}
String get(String sqlId) {
def sqlKey = sqlId?.toUpperCase()
log.debug "SQL Id requested: ${sqlKey}"
if(!sqlCache[sqlKey]) {
log.debug "SQL [${sqlKey}] not found in cache, loading cache from disk"
loadSqlCache()
}
return sqlCache[sqlKey]
}
Services that use various datasources use the SqlCatalogService to retrieve the sql by calling the get(id) method:
class PersonService {
def hrDataSource
def sqlCatalogService
private static final String SQL_FIND_PERSON_BY_ID = "FIND_PERSON_BY_ID"
Person findPersonById(String personId) {
try {
def sql = new groovy.sql.Sql(hrDataSource)
def row = sql.firstRow(sqlCatalogService.get(SQL_FIND_PERSON_BY_ID), [personId])
row ? new Person(row) : null
} catch (Exception ex) {
log.error ex.message, ex
throw ex
}
}
}
For now we only have a few sql statements, so storing all the text in a Map is not an issue. If you have lots of sql files to store, you may need to think about using something like Ehcache and defining an eviction strategy (i.e., least recently used or least frequently used), keeping only the most used in memory and leaving the rest on disk until needed.
Before doing this I thought about using GORM and storing the sql text in the database, but decided that having the sql in files made it easier to develop with, since we could pretty much save the sql to a file directly from our sql tool (replacing hard-coded params with question marks) and let our revision control system track the changes. I'm not saying the above service is the most efficient or correct way to handle this, but it's worked so far for our needs.
Have you considered using Grails GORM and an HSQLDB database to store the SQL you want executed? You could then put in a record for each service containing that service's SQL and retrieve it using normal Grails GORM functions. You could generate a default set of controllers and views that would allow you to edit the SQL. If you want to store the SQL in external files, you can create a subdirectory in the web-app directory called sql, then store your SQL statements as text files. You could create a class that takes a service name, loads the associated text file containing the SQL, and returns the contents of that file. Without knowing how complex your SQL will be, I can't say what the best format would be. If you're dealing with normal select statements with no parameter substitution, plain text would be best. If you're dealing with more complex SQL with substitutions and multiple queries, you may want to use XML.