I am writing a Grails application that will mostly be using the springws web services plugin with endpoints backed by services. The services will retrieve data from a variety of back end databases (i.e., not via domain classes and GORM). I would like to store the sql that my services will be using to fetch the data for the web services in external files.
I'm looking for suggestions on:
Where the best place is to keep the files (e.g., somewhere obvious like grails-app/sql) and the best format (e.g., xml, ConfigSlurper, etc.)
The best way to abstract the retrieval of the sql text so the services that execute the sql don't need to know where or how it is fetched. Services would just provide a sqlId and get the sql back.
I was working on a project recently where I needed to do something similar. I created the following directory to store the sql files:
./grails-app/conf/sql
For example there is a file ./grails-app/conf/sql/hr/FIND_PERSON_BY_ID.sql that has something like the following:
select a.id
     , a.first_name
     , a.last_name
  from person a
 where a.id = ?
I created a SqlCatalogService class that would load all files in that directory (and subdirectories) and store the filenames (minus extension) and file text in a Map. The service has a get(id) method that returns the sql text that is cached in the Map. Since files/directories stored in grails-app/conf are placed in the classpath, the SqlCatalogService uses the following code to read in the files:
class SqlCatalogService {

    Map<String,String> sqlCache = [:]

    void loadSqlCache() {
        try {
            loadSqlCacheFromDirectory(new File(this.class.getResource("/sql/").getFile()))
        } catch (Exception ex) {
            log.error(ex)
        }
    }

    void loadSqlCacheFromDirectory(File directory) {
        log.info "Loading SQL cache from disk using base directory ${directory.name}"
        synchronized(sqlCache) {
            if(sqlCache.size() == 0) {
                try {
                    directory.eachFileRecurse { sqlFile ->
                        if(sqlFile.isFile() && sqlFile.name.toUpperCase().endsWith(".SQL")) {
                            def sqlKey = sqlFile.name.toUpperCase()[0..-5] // strip the ".SQL" extension
                            sqlCache[sqlKey] = sqlFile.text
                            log.debug "added SQL [${sqlKey}] to cache"
                        }
                    }
                } catch (Exception ex) {
                    log.error(ex)
                }
            } else {
                log.warn "request to load sql cache and cache not empty: size [${sqlCache.size()}]"
            }
        }
    }

    String get(String sqlId) {
        def sqlKey = sqlId?.toUpperCase()
        log.debug "SQL Id requested: ${sqlKey}"
        if(!sqlCache[sqlKey]) {
            log.debug "SQL [${sqlKey}] not found in cache, loading cache from disk"
            loadSqlCache()
        }
        return sqlCache[sqlKey]
    }
}
Services that use various datasources use the SqlCatalogService to retrieve the sql by calling the get(id) method:
class PersonService {

    def hrDataSource
    def sqlCatalogService

    private static final String SQL_FIND_PERSON_BY_ID = "FIND_PERSON_BY_ID"

    Person findPersonById(String personId) {
        try {
            def sql = new groovy.sql.Sql(hrDataSource)
            def row = sql.firstRow(sqlCatalogService.get(SQL_FIND_PERSON_BY_ID), [personId])
            row ? new Person(row) : null
        } catch (Exception ex) {
            log.error ex.message, ex
            throw ex
        }
    }
}
For now we only have a few sql statements, so storing all the text in a Map is not an issue. If you have lots of sql files to store, you may need to think about using something like Ehcache, defining an eviction strategy (e.g., least recently used or least frequently used), and only keeping the most used statements in memory, leaving the rest on disk until needed.
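If it does come to that, here's a rough sketch of what a bounded, LRU-evicting cache might look like in place of the Map, assuming Ehcache 2.x is on the classpath (sqlFileText below is just a placeholder for the text read from one sql file):

import net.sf.ehcache.Cache
import net.sf.ehcache.CacheManager
import net.sf.ehcache.Element

// Sketch only: Cache(name, maxElementsInMemory, overflowToDisk, eternal,
// timeToLiveSeconds, timeToIdleSeconds). With these settings Ehcache keeps
// at most 100 statements in memory and evicts least-recently-used entries.
CacheManager manager = CacheManager.create()
Cache sqlCache = new Cache("sqlCatalog", 100, false, true, 0, 0)
manager.addCache(sqlCache)

// caching the text of one sql file under its key
sqlCache.put(new Element("FIND_PERSON_BY_ID", sqlFileText)) // sqlFileText: placeholder

// a null result means never loaded or already evicted - re-read from disk
String sql = sqlCache.get("FIND_PERSON_BY_ID")?.objectValue

The service's get(id) would then fall back to re-reading the file on a miss, rather than only reloading when the cache is empty.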
Before doing this I thought about using GORM and storing the sql text in the database, but decided that having the sql in files made development easier: we could save the sql to file directly from our sql tool (replacing hard-coded params with question marks), and our revision control system could track the changes. I'm not saying the above service is the most efficient or correct way to handle this, but it's worked so far for our needs.
Have you considered using Grails GORM and an HSQLDB database to store the SQL you want executed? You could then put in a record for each service containing that service's SQL and retrieve it using normal Grails GORM functions. You could generate a default set of controllers and views that would allow you to edit the SQL.
If you want to store the SQL in external files instead, you can create a subdirectory in the web-app directory called sql, then store your SQL statements as text files. You could create a class that takes a service name, loads the associated text file containing the SQL, and returns the contents of that file.
Without knowing how complex your SQL will be, I can't say what the best format would be. If you're dealing with normal select statements with no parameter substitution, plain text would be best. If you're dealing with more complex SQL with substitutions and multiple queries, you may want to use XML.
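A minimal sketch of that file-loading class, assuming Grails 2.x (the class name, readSql method, and the web-app/sql location are all just illustrative):

import org.codehaus.groovy.grails.web.context.ServletContextHolder

// Hypothetical helper: maps a service name to web-app/sql/<name>.sql
// and returns that file's contents, or null if it doesn't exist.
class SqlFileLoader {

    String readSql(String serviceName) {
        // getRealPath resolves a web-app relative path at runtime
        def path = ServletContextHolder.servletContext.getRealPath("/sql/${serviceName}.sql")
        def sqlFile = path ? new File(path) : null
        return sqlFile?.exists() ? sqlFile.text : null
    }
}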
Related
As I understand it, SQL Server has had a new feature since version 2012: FileTable. It allows us to store files in the file system and to use them from T-SQL.
I am trying to use this feature and I have no idea how to do it properly.
Generally, I don't know how to access files stored in the file table. Let's suppose I have an ASP.NET MVC app with a lot of images that I show on web pages in img tags. I would like to store these images in a FileTable and access them as files from the filesystem. But I don't know where these files are stored or how to use them as files. Right now my images are stored in the web application directory, in a folder named images, and I write something like this:
<img src='/images/mypicture.png' />
And if I move my images to file table what I should write in src?
<img src='path-toimage-in-filetable' />
I don't think you still need this, but I'll post my answer for anyone else interested.
First, a FileTable is still a table, so if you want to access its data you need to use a SELECT SQL statement. You'd need something like:
select name, file_stream from filetable_name
where name = 'file_name'
  and file_type = 'file_extension'
Just execute a statement like this in your ASP.NET app, then fetch the results and use the file_stream column to get the binary data of the stored file. If you want to retrieve the file from HTML, first you need to create an action in your controller which will return the retrieved file:
public ActionResult GetFile()
{
    // run the SELECT above, reading file_stream (the binary data) and a
    // content type for the file into a 'file' object
    ..
    return File(file.file_stream, file.file_type);
}
After this, put something like the following in your HTML:
<img src="/controller/GetFile" />
Hope this helps!
If you want to know the schema of a filetable, see here.
I assume by FileTable you actually mean FILESTREAM. A couple of notes about that:
This feature is best used if your files are actually files.
The files should be, on average, greater than 1 MB, although there can be exceptions to this rule. If they're smaller than 1 MB on average (only a few KB, say), you may be better off using a VARBINARY(MAX) or XML data type as appropriate.
Accessing these files requires an open transaction and a database that is properly configured for FILESTREAM.
You can get some significant advantages bypassing the normal SQL engine/database file method of data access by telling SQL Server that you want to access the file directly. However, it's not meant for accessing the file directly on the file system, and attempting to do so can break SQL's management of these files (transactional consistency, tracking, locking, etc.).
It's pretty likely that your use case here would be better served by using a CDN and storing image URLs in the table, if you really need SQL for this. You can use FILESTREAM to do it (see the code sample below for one implementation), but you'll be hammering your SQL Server for every request unless you store the images somewhere else anyway that the browser can properly cache (my example doesn't do that) - and if you store them somewhere else for rendering in the browser, you might as well store them there to begin with (you won't have transactional consistency for those images once they're copied to some other drive/disk/location anyway).
With all that said, here's an example of how you'd access the FILESTREAM data using ADO.NET:
// requires: System, System.Data, System.Data.SqlClient,
// System.Data.SqlTypes, System.IO, System.Transactions
public static string connectionString = ...; // get your connection string from encrypted config

// assumes your FILESTREAM data column is called Img in a table called ImageTable
const string sql = @"
    SELECT
        Img.PathName(),
        GET_FILESTREAM_TRANSACTION_CONTEXT()
    FROM ImageTable
    WHERE ImageId = @id";

public string RetrieveImage(int id)
{
    string serverPath;
    byte[] txnToken;
    string base64ImageData = null;
    using (var ts = new TransactionScope())
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            using (SqlCommand cmd = new SqlCommand(sql, conn))
            {
                cmd.Parameters.Add("@id", SqlDbType.Int).Value = id;
                using (SqlDataReader rdr = cmd.ExecuteReader())
                {
                    rdr.Read();
                    serverPath = rdr.GetSqlString(0).Value;
                    txnToken = rdr.GetSqlBinary(1).Value;
                }
            }
            using (var sfs = new SqlFileStream(serverPath, txnToken, FileAccess.Read))
            {
                // sfs will now work basically like a FileStream. You can either copy it locally or return it as a base64 encoded string
                using (var ms = new MemoryStream())
                {
                    sfs.CopyTo(ms);
                    base64ImageData = Convert.ToBase64String(ms.ToArray());
                }
            }
        }
        ts.Complete();
        // assume this is PNG image data; replace png with jpg etc. as appropriate. Might store the type in the table if it will vary...
        return "data:image/png;base64," + base64ImageData;
    }
}
Obviously, if you have lots of images to handle like this, this is not an ideal method - don't try to make an instance of SQL Server into what you should be using a CDN for. However, if you have other really good reasons, try to grab as many images as possible in a single request/transaction (e.g. if you know you're displaying 50 images on a page, get all 50 with a single transaction scope; don't use 50 transaction scopes - this code won't handle that).
We have a read-only reporting database clone set up as an alternate datasource, named 'reporting', in our Grails application. This works great when using dynamic finders or criteria, as per the grails MyDomain.reporting.findByXXXX(..etc..)
However there are some nasty queries that have to be done in raw SQL. Our current way of doing this (in a service) is
def sessionFactory

public static List getSomeBigNastyData(...) {
    sessionFactory.currentSession.createSQLQuery(
        """
        Big Ugly Query
        """
    ).list()
}
But this does not go to the reporting database and there doesn't seem to be a way of specifying 'reporting' - is there a way to specify the datasource to execute raw SQL against?
It's possible to use the dataSource as an injected bean and groovy.sql.Sql to run your queries. Below is a simple example of a service that will use your data source and allow you to run a query against it.
package com.example

import groovy.sql.GroovyRowResult
import groovy.sql.Sql

class ExampleSqlService {

    def dataSource_reporting // your named data source

    List<GroovyRowResult> query(String sql) {
        def db = new Sql(dataSource_reporting)
        return db.rows(sql)
    }
}
Using a service (like the above example) allows you to access it from basically anywhere (Controller, Service, TagLib, Domain, etc.)
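For example, from a controller (the names here are just illustrative):

class ReportController {

    def exampleSqlService // injected by name

    def bigNastyReport() {
        // runs against the 'reporting' datasource via the service
        def rows = exampleSqlService.query("select * from some_report_view")
        render(view: "report", model: [rows: rows])
    }
}

If the queries take parameters, groovy.sql.Sql also has a rows(String, List) variant the service could expose instead of the single-argument form.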
Is there a convenient way to force Grails / Hibernate to recreate the database schema from an integration test?
If you add the following in DataSource.groovy an empty database will be created before the integration tests are run:
environments {
    test {
        dataSource {
            dbCreate = "create"
        }
    }
}
By default each integration test executes within a transaction that is rolled back at the end of the test, so unless you've disabled this default behaviour there shouldn't be any need to programmatically recreate the database.
Update
Based on your comment, it seems you really do want to recreate the schema before some integration tests. In that case, the only way I can think of is to:
drop and recreate the schema
use grails schema-export to import a fresh schema
import groovy.sql.Sql
import java.sql.Connection
import java.sql.SQLException
import org.hibernate.SessionFactory
import org.hibernate.jdbc.Work
import org.junit.Test

class MyIntegrationTest {

    SessionFactory sessionFactory

    /**
     * Helper for executing SQL statements
     * @param jdbcWork A closure that is passed an <tt>Sql</tt> object that is used to execute the JDBC statements
     */
    private doJdbcWork(Closure jdbcWork) {
        sessionFactory.currentSession.doWork(
            new Work() {
                @Override
                void execute(Connection connection) throws SQLException {
                    // do not close this Sql instance ourselves
                    Sql sql = new Sql(connection)
                    jdbcWork(sql)
                }
            }
        )
    }

    private recreateSchema() {
        doJdbcWork { Sql sql ->
            // use the sql object to drop the database and create a new blank database
            // something like the following might work for MySQL
            sql.execute("drop database my-schema")
            sql.execute("create database my-schema")
        }
        // generate the DDL and import it
        // there must be a better way to execute a grails command from within an
        // integration test, but unfortunately I don't know what it is
        'grails test schema-export export'.execute()
    }

    @Test
    void myTestMethod() {
        recreateSchema()
        // now do the test
    }
}
First and foremost, this code is completely untested, so treat it with deep suspicion and low expectations. Secondly, you may need to change the default transactional behaviour of integration tests (with @Transactional) in order for this to work.
This seems to work fine, but it's obviously very tightly coupled to H2 so it would have been nice if the Hibernate plugin had exposed an api to take care of this.
http://h2database.com/html/grammar.html#script
import groovy.sql.Sql
import javax.sql.DataSource
import org.hibernate.SessionFactory
// grails.test.spock.IntegrationSpec in Grails 2.3+; grails.plugin.spock.IntegrationSpec in older versions
import grails.test.spock.IntegrationSpec

class SomethingTestingTransactionsSpec extends IntegrationSpec {

    static transactional = false // needed so the dump/restore isn't rolled back with the test transaction

    SessionFactory sessionFactory // Injected by Spring
    DataSource dataSource // Also injected

    File schemaDump
    Sql sql

    void setup() {
        sql = new Sql(dataSource)
        schemaDump = File.createTempFile("test-database-dump", ".sql")
        // toString() so Sql doesn't treat the GString expression as a bind parameter
        sql.execute("script drop to '${schemaDump.absolutePath}'".toString())
    }

    void cleanup() {
        sql.execute("runscript from '${schemaDump.absolutePath}'".toString())
        sessionFactory.currentSession.clear()
        schemaDump.delete()
    }

    // Spock tests ...
}
It should be trivial to extract this code into a bean registered only for test environments. That should clean up the test code a bit and improve efficiency by only having to dump the schema once.
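A rough, untested sketch of what that extraction might look like (SchemaDumper and the bean name are hypothetical), registered for the test environment only in grails-app/conf/spring/resources.groovy:

// src/groovy/SchemaDumper.groovy (hypothetical helper)
import groovy.sql.Sql
import javax.sql.DataSource

class SchemaDumper {

    DataSource dataSource
    private File schemaDump

    // dump the schema once, the first time a test asks for it
    void dumpSchemaIfNeeded() {
        if (!schemaDump) {
            schemaDump = File.createTempFile("test-database-dump", ".sql")
            execute("script drop to '${schemaDump.absolutePath}'")
        }
    }

    void restoreSchema() {
        execute("runscript from '${schemaDump.absolutePath}'")
    }

    // the typed String parameter coerces the GString, avoiding bind-parameter surprises
    private void execute(String statement) {
        new Sql(dataSource).execute(statement)
    }
}

// grails-app/conf/spring/resources.groovy
import grails.util.Environment

beans = {
    if (Environment.current == Environment.TEST) {
        schemaDumper(SchemaDumper) {
            dataSource = ref('dataSource')
        }
    }
}

Tests would then inject schemaDumper and call dumpSchemaIfNeeded() from setup() and restoreSchema() from cleanup().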
Well, you have access to executing arbitrary sql via sessionFactory, so you could call a grails schema export at the beginning of your tests and then just re-import the schema into your DB when needed.
Alternatively, I wonder if calling the database migration plugin externally would accomplish the same thing.
Or you can trick grails into thinking your domain class has changed and force a reload via https://github.com/grails/grails-core/blob/v2.1.1/grails-hibernate/src/main/groovy/org/codehaus/groovy/grails/plugins/orm/hibernate/HibernatePluginSupport.groovy#L340 (don't ask me how).
I have added a lot of reports with an invalid data source login to an SSRS report server, and I want to update the user name and password with a script so I don't have to update each report individually.
However, from what I can tell the fields are stored as images and are encrypted. I can't find anything about how they are encrypted or how to update them. It appears that the user name and password are stored in the dbo.DataSource table. Any ideas? I want the script to run in SQL.
I would be very, very, VERY leery of hacking the Reporting Services tables. It may be that someone out there can offer a reliable way to do what you suggest, but it strikes me as a good way to clobber your entire installation.
My suggestion would be that you make use of the Reporting Services APIs and write a tiny app to do this for you. The APIs are very full-featured -- pretty much anything you can do from the Report Manager website, you can do with the APIs -- and fairly simple to use.
The following code does NOT do exactly what you want -- it points the reports to a shared data source -- but it should show you the basics of what you'd need to do.
// requires System.Linq and a web service reference to ReportingService2005;
// FolderName and _DataSourcePath are fields defined elsewhere in the class
public void ReassignDataSources()
{
    using (ReportingService2005 client = new ReportingService2005())
    {
        var reports = client.ListChildren(FolderName, true).Where(ci => ci.Type == ItemTypeEnum.Report);
        foreach (var report in reports)
        {
            SetServerDataSource(client, report.Path);
        }
    }
}

private void SetServerDataSource(ReportingService2005 client, string reportPath)
{
    var itemSources = client.GetItemDataSources(reportPath);
    if (itemSources.Any())
        client.SetItemDataSources(
            reportPath,
            new DataSource[] {
                new DataSource() {
                    Item = CreateServerDataSourceReference(),
                    Name = itemSources.First().Name
                }
            });
}

private DataSourceDefinitionOrReference CreateServerDataSourceReference()
{
    return new DataSourceReference() { Reference = _DataSourcePath };
}
I doubt this answers your question directly, but I hope it can offer some assistance.
MSDN Specifying Credentials
MSDN also suggests using shared data sources for this very reason: See MSDN on shared data sources
In SQL Server 2005, is there a way to specify more than one connection string from within a .NET application, with one being the primary preferred connection, but if it's not available defaulting to trying the other connection (which may go to a different DB/server, etc.)?
If nothing along those exact lines, is there anything we can use, without resorting to writing some kind of round-robin code to check connections?
Thanks.
We would typically use composition on our SqlConnection objects to check for this. All data access is done via backend classes, and we specify multiple servers within the web/app.config. (Forgive any errors, I am actually writing this out by hand)
It would look something like this:
class MyComponent
{
    private SqlConnection connection;
    ....

    public void CheckServers()
    {
        // Cycle through servers in configuration files, finding one that is usable
        // When one is found assign the connection string to the SqlConnection
        // a simple but resource intensive way of checking for connectivity, is by attempting to run
        // a small query and checking the return value
    }

    public void Open()
    {
        connection.Open();
    }

    // ConnectionState is read-only on SqlConnection, so only expose a getter
    public ConnectionState State
    {
        get { return connection.State; }
    }

    // Use this method to return the selected connection string
    public string SelectedConnectionString
    {
        get { return connection.ConnectionString; }
    }

    //and so on
}
This example includes no error checking or error logging, make sure you add that, so the object can optionally report which connections failed and why.
Assuming that you'd want to access the same set of data, then you'd use clustering or mirroring to provide high availability.
The SQLNCLI provider supports SQL Server database mirroring:
Provider=SQLNCLI;Data Source=myServer;Failover Partner=myMirrorServer
Clustering just uses the virtual SQL instance name.
Otherwise, I can't quite grasp why you'd want to do this...
Unfortunately there are no FCL methods that do this - you will need to implement this yourself.