MS Access has limited capabilities for managing raw SQL queries: the editor is quite bad, there is no syntax highlighting, it reformats your raw SQL into one long string, and you can't insert comments.
Debugging complex SQL queries is a pain as well: either you split them into many smaller queries, which become difficult to manage when your schema changes, or you end up with a giant query that is a nightmare to debug and update.
How do you manage your complex SQL queries in MS Access and how do you debug them?
At the moment, I'm mostly just using Notepad++ for some syntax colouring and SQL Pretty Printer for sensibly reformatting the raw SQL from Access.
Using an external repository is useful, but there's always the risk of the two versions getting out of sync, and you still have to remove comments before trying the query in Access...
For debugging, I edit them in a separate text editor that lets me format them sensibly. When I find I need to make changes, I edit the version in the text editor, and paste it back to Access, never editing the version in Access.
Still a major PITA.
I have a few tips that are specific to SQL in VBA.
Put your SQL code in a string variable. I used to do this:
Set RS = DB.OpenRecordset("SELECT ...")
That is hard to manage. Do this instead:
strSQL = "SELECT ..."
Set RS = DB.OpenRecordset(strSQL)
Often you can't fix a query unless you see just what's being run. To do that, dump your SQL to the Immediate Window just before execution:
strSQL = "SELECT ..."
Debug.Print strSQL
Stop
Set RS = DB.OpenRecordset(strSQL)
Paste the result into Access' standard query builder (you must use SQL view). Now you can test the final version, including code-handled variables.
When you are preparing a long query as a string, break up your code:
strSQL = "SELECT wazzle FROM bamsploot" _
& vbCrLf & "WHERE plumsnooker = 0"
I first learned to use vbCrLf when I wanted to prettify long messages to the user. Later I found it makes SQL more readable while coding, and it improves the output from Debug.Print. (Tiny other benefit: no space needed at end of each line. The new line syntax builds that in.)
(NOTE: You might think this will let you add comments to the right of the SQL lines. Prepare for disappointment.)
As said elsewhere here, trips to a text editor are a time-saver. Some text editors provide better syntax highlighting than the official VBA editor. (Heck, StackOverflow does better.) It's also efficient for deleting Access cruft like superfluous table references and piles of parentheses in the WHERE clause.
Workflow for serious troubleshooting:
VBA Debug.Print > (capture query during code operation)
query builder > (testing lab to find issues)
Notepad++ > (text editor for clean-up and review)
query builder > (checking, troubleshooting)
VBA
Of course, troubleshooting is usually a matter of reducing the complexity of a query until you're able to isolate the problem (or at least make it disappear!). Then you can build it back up to the masterpiece you wanted. Because it can take several cycles to solve a sticky problem, you are likely to use this workflow repeatedly.
I wrote Access SQL Editor-- an Add-In for Microsoft Access-- because I write quite a lot of pass-through queries, and more complex SQL within Access. This add-in has the advantage of being able to store formatted SQL (with comments!) within your Access application itself. When queries are copied to a new Access application, formatting is retained. When the built-in editor clobbers your formatting, the tool will show your original query and notify you of the difference.
It currently does not debug; if there was enough interest, I would pursue this-- but for the time being the feature set is intentionally kept small.
It is not free for the time being, but purchasing a license is very cheap. If you can't afford it, you can contact me. There is a free 14-day trial here.
Once it's installed, you can access it through your Add-Ins menu (In Access 2010 it's Database Tools->Add Ins).
Debugging is more of a challenge. If a single column is off, that's usually pretty easy to fix. But I'm assuming you have more complex debugging tasks that you need to perform.
When flummoxed, I typically start debugging with the FROM clause. I trace back to all the tables and sub-queries that comprise the larger query, and make sure that the joins are properly defined.
Then I check my WHERE clause. I run lots of simple queries on the tables, and on the sub-queries that I've already checked or that I already trust, and make sure that when I run the larger query, I'm getting what I expect with the WHERE conditions in place. I double-check the JOIN conditions at the same time.
I double-check my column definitions to make sure I'm retrieving what I really want to see, especially if the formulas involved are complicated. If there is something complicated like a correlated subquery in a column definition, I test it on its own before checking it inside the larger query.
Then I check to see if I'm grouping data properly, making sure that DISTINCT and UNION (without UNION ALL) don't remove duplicates I actually need.
I don't think I've ever encountered a SQL query that couldn't be broken down this way. I'm not always as methodical as this, but it's a good way to start breaking down a real stumper.
One thing I could recommend when you write your queries is this: Never use SELECT * in production code. Selecting all columns this way is a maintenance nightmare, and it leads to big problems when your underlying schemas change. You should always write out each and every column if you're writing SQL code that you'll be maintaining in the future. I saved myself a lot of time and worry just by getting rid of "SELECT *"'s in my projects.
The downside to this is that those extra columns won't appear automatically in queries that refer to "SELECT *" queries. But you should be aware of how your queries are related to each other, anyway, and if you need the extra columns, you can go back and add them.
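Purely as an illustration (these table and column names are invented, not from any schema discussed here), the difference is:
' Fragile: silently changes shape whenever the underlying schema changes
strSQL = "SELECT * FROM Orders"
' Explicit: every column is spelled out, so schema changes are caught where they matter
strSQL = "SELECT OrderID, CustomerID, OrderDate FROM Orders"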
There is some hassle involved in maintaining a code repository, but if you have versioning software, the hassle is more than worth it. I've heard of ways of versioning SQL code written in Access databases, but unfortunately, I've never used them.
If you're doing really complex queries in MS Access, I would consider keeping a repository of those queries somewhere outside of the Access database itself... for instance, in a .sql file that you can then edit in an editor like Intype that will provide syntax highlighting. It'll require you to update queries in both places, but you may end up finding it handy to have an "official" spot for it that is formatted and highlighted correctly.
Or, if at all possible, switch to SQL Server 2005 Express Edition, which is also free and will provide you the features you desire through the SQL Management Studio (also free).
Expanding on this suggestion from Smandoli:
NO: DoCmd.RunSQL ("SELECT ...")
YES: strSQL = "SELECT ..."
DoCmd.RunSQL (strSQL)
If you want to keep the SQL code in an external file, for editing with your favorite text editor (with syntax coloring and all that), you could do something like this pseudo-code:
// On initialization:
global strSQL
f = open("strSQL.sql")
strSQL = read_all(f)
close(f)
// To do the select:
DoCmd.RunSQL(strSQL)
This may be a bit clunky -- maybe a lot clunky -- but it avoids the consistency issues of edit-copy-paste.
Obviously this doesn't directly address debugging SQL, but managing code in a readable way is a part of the problem.
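For reference, a minimal VBA sketch of that pseudo-code might look like the following; the file name strSQL.sql and its location next to the front end are assumptions, not requirements:
' Sketch only -- file name and location are assumptions
Dim strSQL As String
Dim intFile As Integer
intFile = FreeFile
Open CurrentProject.Path & "\strSQL.sql" For Input As #intFile
strSQL = Input$(LOF(intFile), #intFile)   ' read the whole file into the variable
Close #intFile
' later, to run the statement:
DoCmd.RunSQL strSQL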
Similar to recursive, I use an external editor to write my queries. I use Notepad++ with the Light Explorer extension for maintaining several scripts at a time, and Notepad2 for one-off scripts. (I'm kind of partial to Scintilla-based editors.)
Another option is to use the free SQL Server Management Studio Express, which comes with SQL Server Express. (EDIT: Sorry, EdgarVerona, I didn't notice you mentioned this already!) I normally use it to write SQL queries instead of using Access, because I typically use ODBC to link to a SQL Server back end anyway. Beware that the differences in the syntax of T-SQL, used by SQL Server, and Jet SQL, used by Access MDB's, are sometimes substantial.
Are you talking here about what MS Access calls 'queries' and SQL calls 'views', or about MS Access 'pass-through' queries, which are real SQL queries? It's easy to get lost! My solution is the following:
the free SQL Server Management Studio Express, where I elaborate and test my queries
a query table on the client side, with one field for the query name (id_Query) and another one (queryText, memo type) for the query itself.
I then have a small function getSQLQuery in my VBA code to be used when I need to execute a query (either returning a recordset or not):
Dim myQuery As String, _
    rsADO As ADODB.Recordset
Set rsADO = New ADODB.Recordset
myQuery = getSQLQuery(myId_Query)
' if my query returns a recordset
Set rsADO = myADOConnection.Execute(myQuery)
' or, if no recordset is to be returned
myADOConnection.Execute myQuery
For views, it is even possible to keep them on the server side and to refer to them from the client side:
Set rsADO = myADOConnection.Execute("dbo.myViewName")
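The getSQLQuery function itself isn't shown in the post; a minimal sketch, assuming the client-side query table is named tblQueries, could look like this:
' Hypothetical sketch -- assumes a local table tblQueries with fields
' id_Query (text) and queryText (memo)
Function getSQLQuery(myId_Query As String) As String
    getSQLQuery = Nz(DLookup("queryText", "tblQueries", _
                             "id_Query = '" & myId_Query & "'"), "")
End Function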
Well, to my knowledge there are two options:
Notepad++ with the Poor Man's T-SQL Formatter plugin. I know SQL Pretty Printer has already been mentioned, but I haven't used it. My workflow is: I create the query in Access, copy-paste it into Notepad++, format it, work on it, then go back to Access. The only issue is that in some cases it pads spaces, so [Forms]![AForm].[Ctrl] becomes [Forms] ! [AForm].[Ctrl], but I'm used to it and it doesn't bother me.
SoftTree SQL Assistant (http://www.softtreetech.com/sqlassist/index.htm) brings just about everything you could want in a SQL editor. I worked with it a bit in the past (trial), but its price tag is a bit stiff.
Related
I want to rename a column in one of my tables, but the problem is that I then have to manually update the column name wherever it appears in triggers or stored procedures.
Is there any better way of doing this?
To rename a column I am using this:
sp_RENAME 'Tablename.old_Column', 'new_column', 'COLUMN';
How can I do the same for triggers or SPs without opening each script?
Well, there are a bunch of 3rd-party tools that promise this type of "safe rename", some free and some not:
ApexSQL has a free tool for that, as MWillemse wrote in his answer.
RedGate has a commercial tool called SQL Prompt that also has a safe-rename feature; however, it is far from free.
Microsoft has a Visual Studio add-in called SQL Server Data Tools (SSDT for short), as Dan Guzman wrote in his comment.
I have to say I've never tried any of these specific tools for that specific task, but I do have some experience with SSDT and some of RedGate's products and I consider them to be very good tools. I know nothing about ApexSQL.
Another option is to try to write the SQL script yourself; however, there are a couple of things to take into consideration before you start:
Can your table be accessed directly from outside SQL Server? I mean, is it possible that some software is executing SQL statements directly against that table? If so, you might break it when you rename that column, and no SQL tool will help in this situation.
Are your SQL scripting skills really that good? I consider myself fairly experienced with SQL Server, but I think writing a script like that is beyond my skills. Not that it's impossible for me, but it would probably take too much time and effort for something I can get for free.
Should you decide to write it yourself, there are a few articles that might help you in that task:
First, Microsoft official documentation of sys.sql_expression_dependencies.
Second, an article called Different Ways to Find SQL Server Object Dependencies, written by a DBA with 13 years of experience,
and last but not least, a related question on StackExchange's Database Administrator's website.
You could, of course, go with the safe way Gordon Linoff suggested in his comment, or use synonyms as destination-data suggested in his answer, but then you will have to modify all of the column's dependencies manually, and from what I understand, that is what you want to avoid.
Renaming a table column
Deleting a table column
Altering table keys
The best way for these is to use Database Projects in Visual Studio.
Refer to these links:
link 1
link 2
You can do what @GorDon suggested.
Apart from this, you can also play with this query:
select o.name, sc.* from sys.syscomments sc inner join sys.objects o
on sc.id=o.object_id where sc.text like '%oldcolumnname%'
This will return the list of all procedures and triggers that reference the old column name. You can also modify the filter to get an exact list; then it will be easy for you to make the changes manually.
But whatever you decide, don't simply drop the old column.
To be safe, keep a backup as well.
This suggestion relates to Oracle DB, however there may be equivalent solutions in other DBMS's.
A temporary solution to your issue is to create a pseudocolumn. This solution looks a little hacky because the syntax for a pseudocolumn requires an expression. The simplest expression I can think of is the case statement below. Let me know if you can make it simpler.
ALTER TABLE <<tablename>> ADD (
    <<new_column_name>> AS (
        CASE
            WHEN 1=1 THEN <<tablename>>.<<old_column_name>>
        END)
);
This strategy basically creates a new column on the fly by evaluating the case statement and copying the value of <<old_column_name>> into <<new_column_name>>. Because you are dynamically interpolating this column, there is a performance penalty versus just selecting the original column.
The one gotcha is that this will only work if you are duplicating a column once. Multiple pseudocolumns cannot contain duplicate expressions in Oracle.
The other strategy you can consider is to create a view and you can name the columns whatever you want. You can even INSERT/UPDATE/DELETE (execute DML) against views, but this would give you a whole new table_name, not just a new column. You could however rename the old table, and name your view the same as your old table. This also has a performance penalty vs just accessing the underlying table.
You might want to replace that text directly in the object definition. However, you will need a dedicated administrator connection (DAC) in SQL Server, and how you set one up varies between versions: add the startup parameter ;-T7806 under the advanced settings, and prefix the server name with Admin: when logging in. Then you may be able to modify the value of the definition.
I am working on developing an application for my company. From the beginning we were planning on having a split DB with an Access front end and storing the back-end data on our shared server. However, after doing some research we realized that storing the data in a back-end Access DB on a shared drive isn't the best idea for many reasons (the VPN to the shared drive is very slow from remote offices, Access might not be the best with millions of records, etc.). Anyway, we decided to still use the Access front end, but host the data on our SQL server.
I have a couple questions about storing data on our SQL server. Right now when I insert a record I do it with something like this:
Private Sub addButton_Click()
Dim rsToRun As DAO.Recordset
Set rsToRun = CurrentDb.OpenRecordset("SELECT * FROM ToRun")
rsToRun.AddNew
rsToRun("MemNum").Value = memNumTextEntry.Value
rsToRun.Update
memNumTextEntry.Value = Null
End Sub
It seems like it is inefficient to have to use a sql statement like SELECT * FROM ToRun and then make a recordset, add to the recordset, and update it. If there are millions of records in ToRun will this take forever to run? Would it be more efficient just to use an insert statement? If so, how do you do it? Our program is still young in development so we can easily make pretty substantial changes. Nobody on my team is an access or SQL expert so any help is really appreciated.
If you're working with SQL Server, use ADO. It handles server access much better than DAO.
If you are inserting data into a SQL Server table, an INSERT statement can have (in SQL 2008) up to 1000 comma-separated VALUES groups. You therefore need only one INSERT for each 1000 records. You can just append additional inserts after the first, and do your entire data transfer through one string:
INSERT INTO ToRun (MemNum) VALUES ('abc'),('def'),...,('xyz');
INSERT INTO ToRun (MemNum) VALUES ('abcd'),('efgh'),...,('wxyz');
...
You can assemble this in a string, then use an ADO Connection.Execute to do the work. It is frequently faster than multiple DAO or ADO .AddNew/.Update pairs. You just need to remember to requery your recordset afterwards if you need it to be populated with your newly-inserted data.
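A rough VBA sketch of that approach follows; the connection string and the source of the values are placeholders, and the multi-row VALUES syntax needs a connection made directly to SQL Server (the Access database engine does not accept it):
' Sketch only -- connection string and value list are placeholders
Dim cn As ADODB.Connection
Dim memNums As Variant
Dim strSQL As String
Dim i As Long

Set cn = New ADODB.Connection
cn.Open "Provider=SQLOLEDB;Data Source=MyServer;Initial Catalog=MyDb;Integrated Security=SSPI;"

memNums = Array("abc", "def", "xyz")
strSQL = "INSERT INTO ToRun (MemNum) VALUES "
For i = LBound(memNums) To UBound(memNums)
    strSQL = strSQL & "('" & memNums(i) & "')"
    If i < UBound(memNums) Then strSQL = strSQL & ","
Next i

cn.Execute strSQL, , adExecuteNoRecords   ' one round trip for the whole batch
cn.Close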
There are actually two questions in your post:
Will OpenRecordset("SELECT * FROM ToRun") immediately load all records?
No. By default, DAO's OpenRecordset opens a server-side cursor, so the data is not retrieved until you actually start to move around the recordset. Still, it's bad practice to select lots of rows if you don't need to. This leads to the next question:
How should I add records in an attached SQL Server database?
There are a few ways to do that (in order of preference):
Use an INSERT statement. That's the most elegant and direct solution: you want to insert something, so you execute INSERT, not SELECT and AddNew. As Monty Wild explained in his answer, ADO is preferred. In particular, ADO allows you to use parameterized commands, which means that you don't have to put-into-quotes-and-escape your strings and correctly format your dates, which is not so easy to do right. (A sketch of such a parameterized command follows after this list.)
(DAO also allows you to execute INSERT statements (via CurrentDb.Execute), but it does not allow you to use parameters.)
That said, ADO also supports the AddNew syntax familiar to you. This is a bit less elegant but requires less changes to your existing code.
And, finally, your old DAO code will still work. As always: If you think you have a performance problem, measure if you really have one. Clean code is great, but refactoring has a cost and it makes sense to optimize those places first where it really matters. Test, measure... then optimize.
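To make the first option concrete, here is a hedged sketch of a parameterized INSERT with an ADO Command; the connection string and the parameter size are assumptions, not tested values:
' Sketch only -- connection string and parameter size are assumptions
Dim cn As ADODB.Connection
Dim cmd As ADODB.Command

Set cn = New ADODB.Connection
cn.Open "Provider=SQLOLEDB;Data Source=MyServer;Initial Catalog=MyDb;Integrated Security=SSPI;"

Set cmd = New ADODB.Command
Set cmd.ActiveConnection = cn
cmd.CommandText = "INSERT INTO ToRun (MemNum) VALUES (?)"
cmd.Parameters.Append cmd.CreateParameter("MemNum", adVarChar, adParamInput, 50, memNumTextEntry.Value)
cmd.Execute , , adExecuteNoRecords   ' the value is never concatenated into the SQL text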
It seems like it is inefficient to have to use a sql statement like SELECT * FROM ToRun and then make a recordset, add to the recordset, and update it. If there are millions of records in ToRun will this take forever to run?
Yes, you do need to load something from the table in order to get your Recordset, but you don't have to load any actual data.
Just add a WHERE clause to the query that doesn't return anything, like this:
Set rsToRun = CurrentDb.OpenRecordset("SELECT * FROM ToRun WHERE 1=0")
Both INSERT statements and Recordsets have their pros and cons.
With INSERTs, you can insert many records with relatively little code, as shown in Monty Wild's answer.
On the other hand, INSERTs in the basic form shown there are prone to SQL Injection and you need to take care of "illegal" characters like ' inside your values, ideally by using parameters.
With a Recordset, you obviously need to type more code to insert a record, as shown in your question.
But in exchange, a Recordset does some of the work for you:
For example, in the line rsToRun("MemNum").Value = memNumTextEntry.Value you don't have to care about:
characters like ' in the input, which would break an INSERT query unless you use parameters
SQL Injection
getting the date format right when inserting date/time values
Question
Hi,
I'm trying to determine if I can query Intersystems Caché to obtain database properties and license properties. For the database, I'm mostly interested in properties like the current size, maximum size, block size, and the directory associated with the database. For the licenses, I'm interested in the total authorized, currently available, minimum available, current active users, and maximum active users.
Background
I know that details about the database and licenses are available using the System Management Portal, but I'm trying to automate some actions that depend on these details.
I know that the %FREECNT utility is available to display space statistics for the database, but the only way I'll be able to use this utility to obtain the info I need is to write a script using AWK or SED (the system is on a Unix server) and I'd like to avoid that since I'm not as well-versed in Unix scripting as I'd like to be.
I know the ^DATABASE routine and the $SYSTEM.License.ShowCounts() function are available, but I will have to use AWK and SED for these too to eliminate the text returned that I don't need. In all cases, straight SQL will return a set of data that I can iterate over that will eliminate the extraneous text that's included by the routines/functions.
Additional Info
I've written queries similar to the one below and I'm hoping there are equivalent tables for database and license that will allow me the same access:
Select * From %SYS.ProcessQuery Where Namespace = 'HL7'
I don't have access to Caché Studio, so I'm forced to use the command line on the server. I know I can use the SQL.Shell to enter SQL statements and, from the documentation, it looks like I can create routines from the command line, but I haven't found any documentation that will allow me to enter multiline statements for routines from the command line. If that's not possible, then I probably can't use routines in my solution.
Thanks for the help.
In your Caché routine, you can use the Summary query to get some license information, or other queries in the %SYSTEM.License class.
For databases, you can use queries from SYS.Database in the %SYS namespace.
I was able to get something close.
From the command line, I entered the following:
%SYS>SET rs = ##Class(%Library.ResultSet).%New()
%SYS>SET rs.ClassName = "SYS.Database"
%SYS>SET rs.QueryName = "FreeSpace"
%SYS>SET sc = rs.Prepare(rs.QueryName)
%SYS>SET sc = rs.Execute($LISTBUILD("/my/database/directory"), 0)
%SYS>WHILE rs.Next(){WRITE rs.Data("DatabaseName")," "_rs.Data("Size")," "_rs.Data("MaxSize"),!}
It's not quite as clean as I would have hoped because I have to write the entire loop on a single line, but that's better than nothing. At least the statement wraps across lines.
I'm getting a little confused about using parameters with SQL queries, and seeing some things that I can't immediately explain, so I'm just after some background info at this point.
First, is there a standard format for parameter names in queries, or is this database/middleware dependent? I've seen both this:
DELETE * FROM #tablename
and...
DELETE * FROM :tablename
Second - where (typically) does the parameter replacement happen? Are parameters replaced/expanded before the query is sent to the database, or does the database receive params and query separately, and perform the expansion itself?
Just as background, I'm using the DevArt UniDAC toolkit from a C++Builder app to connect via ODBC to an Excel spreadsheet. I know this is almost pessimal in a few ways... (I'm trying to understand why a particular command works only when it doesn't use parameters)
With data access libraries like UniDAC or FireDAC, you can use macros. They allow you to use special markers (called macros) in the places of a SQL command where parameters are disallowed. I don't know the UniDAC API, but here is a sample for FireDAC:
ADQuery1.SQL.Text := 'DELETE * FROM &tablename';
ADQuery1.MacroByName('tablename').AsRaw := 'MyTab';
ADQuery1.ExecSQL;
Second - where (typically) does the parameter replacement happen?
It doesn't. That's the whole point. Data elements in your query stay data items. Code elements stay code elements. The two never intersect, and thus there is never an opportunity for malicious data to be treated as code.
connect via ODBC to an Excel spreadsheet... I'm trying to understand why a particular command works only when it doesn't use parameters
Excel isn't really a database engine, but if it were, you still can't use a parameter for the name a table.
SQL parameters are sent to the database. The database performs the expansion itself. That allows the database to set up a query plan that will work for different values of the parameters.
Microsoft always uses @parname for parameters. Oracle uses :parname. Other databases are different.
No database I know of allows you to specify the table name as a parameter. You have to expand that client side, like:
command.CommandText = string.Format("DELETE FROM {0}", tableName);
P.S. A * is not allowed after a DELETE. After all, you can only delete whole rows, not a set of columns.
I'm trying to use ADO to create several tables at once, into MS Access. Is it possible to do multiple statements in the one operation? For instance:
...
// I have omitted the field details
CString sQuery = "CREATE TABLE [Table1] (..., PRIMARY KEY ([ID])); \nCREATE TABLE [Table2] (..., PRIMARY KEY ([ID]));";
oRecordset.Open(oDatabase.m_pConnection, sQuery)
This fails due to a "Syntax Error in CREATE TABLE statement", although each of the create statements work on their own perfectly. Is there a way of doing this sort of thing? There will also be statements to add constraints, add indexing, etc., and I'd really like to be able to do it so that I don't have to split up the string into separate parts.
ADO isn't the issue: the ACE/Jet engine simply does not support multiple SQL statements within a single operation. In other words, ACE/Jet SQL lacks the procedural syntax found in most 'industrial-strength' SQL products. See David-W-Fenton's answer for more detail.
Bottom line: You will need to issue a Connection.Execute for each CREATE TABLE statement i.e. client side procedural code. But they can (perhaps should) all be run in the same transaction, of course.
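For example, a minimal VBA sketch of that pattern (the column definitions are invented just to make the statements complete; the question's C++/ADO code would follow the same shape):
' Sketch only -- column definitions are invented for illustration
Dim cn As ADODB.Connection
Set cn = CurrentProject.Connection

cn.BeginTrans
cn.Execute "CREATE TABLE [Table1] ([ID] COUNTER, [Name] TEXT(50), PRIMARY KEY ([ID]))", , adExecuteNoRecords
cn.Execute "CREATE TABLE [Table2] ([ID] COUNTER, [Table1ID] LONG, PRIMARY KEY ([ID]))", , adExecuteNoRecords
cn.CommitTrans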
ADO to MS Access does not support batch SQL statements. You need to run each statement as a separate execution.
People who think you can send multiple SQL statements to Jet in a batch just aren't thinking.
Jet is a file-server database engine -- there is no centralized server process controlling interaction between clients and the actual data store. Instead, clients are all running individual instances of Jet and cooperatively editing a file in a way that is controlled by the Jet locking file (LDB). Without a centralized process to serialize and prioritize the SQL statements, you wouldn't want Jet to be able to process multiple statements in a batch.
Those who are offering the suggestion of using ADO and separating the statements with a CrLf should code it up and give it a try and then get back to us about how useful their speculative advice actually is.
If your sample set of commands is typical, just do something like this in VBA or the language of your choice:
Public Sub ExecuteBatch(BatchString As String)
    Dim s As Variant
    Dim abatch() As String
    ' strip line breaks, then split the batch on semicolons
    BatchString = Replace(BatchString, vbCrLf, " ")
    abatch = Split(BatchString, ";")
    For Each s In abatch
        If Trim$(s) <> "" Then
            CurrentProject.Connection.Execute s   ' execute each statement via ADO
        End If
    Next s
End Sub
That's off the top of my head, but you should be able to take it from there I hope.
Crude but it works - create the necessary number of queries with one SQL statement each, then use a Macro to run the queries successively. That's about as good as can be done with ADO/Jet.
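The same idea in VBA rather than a macro (the saved query names here are just examples):
' Sketch only -- query names are examples
DoCmd.SetWarnings False          ' suppress the action-query confirmation prompts
DoCmd.OpenQuery "qryCreateTable1"
DoCmd.OpenQuery "qryCreateTable2"
DoCmd.SetWarnings True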
I don't know if ADO is built on top of the Jet OLE DB engine (I suppose it is), but if it is, the Jet engine doesn't support execution of multiple statements in one single batch; we tried separating them with ; and with the GO reserved word, but it does not work.
I think you can run multiple commands in one ADO Command.
You just need proper line feeds between them, i.e. \n doesn't work.
Try something like this:
(Using VB Syntaxish)
MyQuery = "Select * from Whatever " & vbLf <br>
MyQuery = MyString & "Select * from SomethingElse " & vbLF
oRecordset.Open(oDatabase.m_pConnection, MyQuery )