Problem with an MS Access query after a "Compact and Repair" operation

I have an Access application that uses the classic front-end/back-end approach. Yesterday, the back-end got corrupted for a reason I don't know. So I opened the back-end with Access 2003, and Access asked me if I wanted to repair the file. I said yes and it seemed to work.
I can open the database, see the tables' contents, and run most of the queries.
However, there is an Access query that doesn't work with a specific WHERE clause.
Example:
This works in the original DB, but not in the compacted one:
SELECT a, b, c
FROM tbl1 INNER JOIN tbl2 ON tbl1.d = tbl2.d
WHERE e = 3 AND tbl2.f = 1;
This works in both the original and the compacted one:
SELECT a, b, c
FROM tbl1 INNER JOIN tbl2 ON tbl1.d = tbl2.d
WHERE e = 3;
When I try to run the queries, nothing happens. The Access process starts to use most of the CPU and the GUI stops responding. If I run the query from the query editor, I can use Ctrl+Break to stop the execution. I tried giving the query plenty of time and it didn't help.
I've checked the execution plan in showplan.out and it seems correct (at least it should not take forever to execute).
I tried to compact the DB again. I tried to import the tables into a new DB. I even tried to import the tables and their data into an .mdb file that was in a known good state (from a backup).
Anyone have an idea?

Sounds like an index was corrupted, and when that happens, it's dropped during the compact. Check for a system table called MSysCompactErrors - you'll have to show hidden objects and/or system objects in Tools | Options | View.
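A quick way to check (run this inside the compacted database; the table only exists if the compact/repair logged problems):
SELECT * FROM MSysCompactErrors;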
Never compact a Jet MDB without making a backup beforehand. Because of that rule, the COMPACT ON CLOSE function is completely useless, as it's not cancellable, so always make sure it's turned off in all MDBs.

I don't know what kind of metadata Access brings along when it imports a table from one database into another. If the metadata is corrupted, importing the table into another database wouldn't necessarily resolve the problem. If practical, you might try creating the tables from scratch in a brand-new database and then just exporting and importing (or copying and paste-appending) the data into the new database.
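If you go that route, the append itself can be done with a Jet SQL statement run from the new database; a sketch, with the path to the old back-end purely illustrative:
INSERT INTO tbl2
SELECT * FROM tbl2 IN 'C:\data\old_backend.mdb';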
I've never seen a table get corrupted like this in such a small database, although with Access anything is possible. Could there be something wrong with the data?

I'd try recreating the query fresh (new name, etc.), and see what happens.
You could even try copying it (even within the same DB, or to a brand-new one). If that works, the worst-case scenario is you have to copy all the objects across to a new DB.

Is there an index on the field tbl2.f?
Also try going into that table in datasheet view, sort tbl2.f in ascending sequence and see if there is anything really strange in the first or last records.
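If the repair did drop the index, recreating it may cure the hang; a minimal sketch in Access SQL (the index name is illustrative):
CREATE INDEX idxTbl2F ON tbl2 (f);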

Do you have access to a SQL Server installation? You could use the Upsizing Wizard under the Tools -> Database Utilities menu to copy the data to SQL Server, and see if you get the same problem there.

Related

How to copy a table design from one Access database to another (VB.NET)

The aim is, when updating the application, to update the Access database without altering the data - applying only the new tables or new columns. So I want to copy a table with its exact structure from the new database to the old one (VB.NET with an Access database).
What I've tried is detecting the differences between the old database and the new one: combobox1 lists the tables missing from the old database, and combobox2 lists the columns missing from a table that already exists in both databases, along with their data types.
So I want to copy the entire missing table, and then create only the missing columns.
Thank you
There is no built-in tool to do this.
But, worse yet, there is no "generate change scripts" feature in Access
(like, say, with SQL Server).
So, how do you approach this issue? What do the accounting systems or commercial programs that use ms-access as the database do?
Well, you have to build a kind of "upgrade" system into your software.
This means the following:
To add a new column to a table (for example), you NEVER open up the Access database with Access, but "add" or "write" the code to add the field in question.
In fact, I had an application deployed out in the field - many desktops.
So, I had a code module called upgrade. And each time I needed a new field or whatever, I would write the code to add that new column.
AS LONG as I always added things through that code module, I was OK. (Never break the rule when adding new fields, tables, or even increasing the length of some field - use code.)
And it became quite easy after I had some code written. I would in fact often cut + paste a previous bit of code to add a new column to a table.
However, after about 5 years, that messy code module had 800+ lines of code in it!!!
But, I ALSO realized that MOST things, like adding a new column or whatever? Same code over and over.
So, what I did next was build an "upgrade" table. It looked like this:
Version   Action     SQL                            RunCode
2.5       AddTable   tblCustomers
2.5       AddField   "SQL here to add the field"
etc., etc.
So, I had a version number, and I compared it against the upgrade table. I had an "action" column, and the code would simply loop through this table and do whatever each row said.
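A minimal Jet DDL sketch of such an upgrade table (the names and types are illustrative, not the exact original design):
CREATE TABLE tblUpgrade (
    Version DOUBLE,
    Action TEXT(25),
    SQLText MEMO,
    RunCode TEXT(50)
);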
So, for example, to add a field, you can use Access "DDL" commands (data definition language - most SQL systems support this, and so does Access).
So, say like this:
' any new table code goes here:
If lngVer < 1148 Then
    ' add event invoice text option
    ExecuteSQLNR "ALTER TABLE dbo.Events ADD InvoiceText ntext NULL"
    ExecuteSQLNR "ALTER TABLE dbo.Events ADD HideEventDate bit NULL default 0"
End If
Or, say, to increase a text column's length to 255:
db.Execute "ALTER TABLE tblGroupRemind ALTER COLUMN Anotes text(255)", dbFailOnError
As noted, since so many of the commands were very similar, I started putting that information into a table, and then I would execute the required upgrades in a loop.
For a whole new table? Well, I thought that was too much code, so I always shipped a blank, empty database - and for new tables, I would place them in that upgrade.accDB file and "transfer/copy" the table from the upgrade database to the real one. That way, I could create a whole new table with great ease in the Access designers, and then add/copy that table to the "upgrade.accDB" database.
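The copy itself can also be done in Jet SQL; a sketch, with the path and table name illustrative (note that SELECT INTO copies structure and data, but not indexes or keys):
SELECT * INTO tblNewFeature
FROM tblNewFeature IN 'C:\MyApp\upgrade.accDB';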
As noted, the above ideas and approaches work quite well.
In fact, over time, I found it LESS hassle while coding away to add the new column in code than having to open up ms-access, then the table, then the designer, and make the changes.
However, the BIG issue with the above?
Well, you have to get all users upgraded at least to your EXISTING schema, and there are no automated tools.
In fact, before I had any automated tools, I would open up Notepad, and if I added some field to some table, I would simply type into Notepad that the new field in such-and-such table was required.
Then, when on a customer site, I would open up their database and go through the Notepad document for the list of changes I was to make. (That is what I was doing before I started automating the process - and of course it is not always practical to be "on site" or to have the customer's database.)
But, ONCE I had all of the above working?
Then during development, I would open up my "upgrade" database and add the new row and action (new table, new column, and more).
I even had a column that defined a function to run AFTER that one command. I mean, quite often when you add a new column, or change something in a table, you need to copy data, or at least process some data, after you make that change.
Once you get the above going?
Then you simply NEVER make changes in the data tables directly, but use your "system" for this. And that works REALLY well.
For one, a customer could open up an older data file - say one from 4 or 5 years ago. The application version number would be detected, and then the upgrade code would run through all the versions to update that database. (And I did this automatically on startup - so they never even knew such an upgrade had occurred.)
So, you just have to make sure that for each change you make, you put that code in your upgrade system, and you are done.
But, for existing systems? You have to look at what changes you made since the last deploy, and write out the "DDL" commands (the ALTER TABLE SQL commands).
There is no automated way of doing this.
As an FYI?
One of the BEST and most valuable free tools in Visual Studio is the SQL Server compare utility. It will not only automatically detect and tell you the changes between two SQL Server databases, but will also apply the upgrade for you. (Very nice.)
But such a system is not available for Access. In fact, so valuable is that utility for SQL Server that you might consider upgrading from Access to SQL Server for this application. With that utility? I can work locally, adding fields, columns, tables and even stored procedures to that SQL database. When I am on site (or even on VPN), I run that compare tool - it shows the changes, and ALSO has a button to update the target schema.
I don't know of an automated "schema" checker and updater for Access.
So, what I suggest above ONLY works if you put such a system in place, and THEN, as a developer, always make your schema changes through your upgrade system, and never directly in the database with ms-access.

Application is not using the correct connection string

In the app config file, I used Initial Catalog='jana' as the database name.
Then I ran the application and used it for some time.
Later I changed it to Initial Catalog='siva' - I changed the database name alone and saved it. Whenever I run with 'siva' as the initial catalog, SELECT queries use the 'siva' database while INSERT/UPDATE queries still use my previous database, 'jana'. It's really weird.
Thanks for considering my query. I rectified the mistake myself.
For the INSERT/UPDATE queries I had used databasename.dbo.tablename everywhere. That is the cause: whichever database name was hard-coded when the queries were written is the one that gets affected, no matter which database the connection string points to.
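The difference is easy to see side by side; a sketch with illustrative table names:
-- Three-part name: always hits 'jana', regardless of Initial Catalog
INSERT INTO jana.dbo.Customers (Name) VALUES ('test');
-- Two-part name: resolves against the database in the connection string
INSERT INTO dbo.Customers (Name) VALUES ('test');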
Hope you understand my scenario.
Thanks, all.

How to get the query displayed when a change is made to a table or a field in a table in PostgreSQL?

I have used MySQL for some projects and recently I moved to PostgreSQL. In MySQL, when I alter a table or a field, the corresponding query is displayed on the page. But I could not find such a feature in PostgreSQL (kindly excuse me if I'm wrong). Since the query was readily available, it was very helpful for me to test something in the local database (without explicitly typing the query), copy the printed query, and run it on the server. Now it seems like I have to do all of that manually. Even though I'm familiar with the query operations, at times it can be a pretty time-consuming process. Can anybody help me? How can I get the corresponding query displayed in PostgreSQL (like in MySQL) whenever a change is made to a table?
If you use SELECT * FROM ..., there should be no reason for your output not to include newly added columns, no matter how you get your results - whether that's psql on the command line, PgAdmin3, or any other IDE.
After you add new columns, it is possible that these changes are still in an open transaction in another window or SQL session - be sure to COMMIT such a transaction. Note that your changes to data or schema will not be visible to any other database clients until the transaction commits.
If your IDE still does not show the changes, maybe you need to refresh the list of tables or, if that option is not available, restart your IDE. If that still does not work, maybe you should use a better IDE.
If you have used SELECT field1, field2, ... FROM ..., then you must add the new fields to your SELECT statement(s) - but this would be true of any other SQL implementation, MySQL included.
You could use the LISTEN / NOTIFY mechanism in PostgreSQL to notify your client on altering the database schema.
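If the goal is specifically to be told when DDL runs, you can pair NOTIFY with an event trigger (PostgreSQL 9.3 or newer); a minimal sketch, with the channel and function names being illustrative choices:
CREATE OR REPLACE FUNCTION notify_ddl() RETURNS event_trigger AS $$
BEGIN
    -- TG_TAG holds the command tag, e.g. 'ALTER TABLE'
    PERFORM pg_notify('ddl_events', TG_TAG);
END;
$$ LANGUAGE plpgsql;

CREATE EVENT TRIGGER ddl_notify ON ddl_command_end
    EXECUTE PROCEDURE notify_ddl();

-- any client that wants to see the changes subscribes with:
LISTEN ddl_events;
Note that this reports which kind of command ran, not the full statement text; to capture the full query you would still need something like log_statement = 'ddl' in postgresql.conf.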

How to recover the old data from a table

I ran an UPDATE statement on a table in SQL Server 2008 which updated the table with some wrong data.
I didn't have a backup of the DB.
It's some important dates that got updated.
Is there any way I can recover the old data from the table?
Thanks
SNA
Basically no, unless you want to use a commercial log reader and try to go through it with a fine-tooth comb. Having no backup of the database can be an "update resume, leave town" scenario - harsh, but it just should not happen.
Andrew basically has called it. I just want to add a few ideas you can consider if you are desperate:
Are there any reports or printouts lying around? Perhaps you can reconstruct the data from there.
Was this data entered via a web application? If so, there is a remote chance you can find the original data in the web server logs, depending upon how the app was constructed, etc.
Does this app interface (pass data to) any other applications? They may have a buffered copy of data...
Can the data be derived from any other existing data? Is there an audit log table, or another date in your schema based on this one, from which you can reconstruct the original date?
Edit:
Some commenters are mentioning that it is a good idea to test your UPDATE/DELETE statements before running them. For this to become a habit, it helps if you have an easy method. I usually create my DELETE statements like this:
--delete --select *
from [User]
where UserID=27
To run the select in order to test your query, highlight everything from select onwards. To then run the delete if you are satisfied with the filter criteria, highlight everything from delete onwards. The two dashes in front of delete are so that if the query accidentally gets run, it will just crash due to invalid syntax.
You can use a similar construct for UPDATE statements, although it is not quite as clean.
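One possible UPDATE variant of the same trick (the column and value here are illustrative): highlight from select onwards to test, and from update onwards to run.
--update u set Email = 'new@example.com' --select *
from [User] u
where UserID = 27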
SQL Server keeps a log of every transaction, so you can recover your modified data from the log even without a backup.
SELECT [PAGE ID], [Slot ID], [AllocUnitId], [Transaction ID],
       [RowLog Contents 0], [RowLog Contents 1], [RowLog Contents 3], [RowLog Contents 4],
       [Log Record]
FROM sys.fn_dblog(NULL, NULL)
WHERE AllocUnitId IN
      (SELECT [Allocation_unit_id]
       FROM sys.allocation_units allocunits
       INNER JOIN sys.partitions partitions
               ON (allocunits.type IN (1, 3) AND partitions.hobt_id = allocunits.container_id)
               OR (allocunits.type = 2 AND partitions.partition_id = allocunits.container_id)
       WHERE object_id = OBJECT_ID('dbo.student'))
  AND Operation IN ('LOP_MODIFY_ROW', 'LOP_MODIFY_COLUMNS')
  AND [Context] IN ('LCX_HEAP', 'LCX_CLUSTERED')
Here is the article that explains, step by step, how to do it:
http://raresql.com/2012/02/01/how-to-recover-modified-records-from-sql-server-part-1/
Imran
Thanks for all the responses.
The problem was actually an accident - I missed selecting the WHERE condition when running the UPDATE statement.
It was a quick five-minute task - just changing a date to test one customer's data - so we didn't think of taking a backup.
Yes, of course you are right. This is a lesson.
From now on I will be careful to write my UPDATE statements in a transaction, or to test my UPDATE statements first (see the sketch below).
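A minimal sketch of that habit (the table and column names are illustrative):
BEGIN TRANSACTION;
UPDATE dbo.Customers SET SomeDate = '2024-01-01' WHERE CustomerID = 42;
-- inspect the result before making it permanent
SELECT CustomerID, SomeDate FROM dbo.Customers WHERE CustomerID = 42;
-- ROLLBACK TRANSACTION;  -- undo if the result looks wrong
-- COMMIT TRANSACTION;    -- keep it if the result is correct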
Thanks once again for spending your time to give some insight rather than ignoring the question, since the only answer is "no".
Thanks
SNA
Always take a backup before major UPDATE statements; even if it's not used, there's the peace of mind.
Especially with Red Gate's Object Level Restore, you can now restore an individual table or row from a backup file.
Good luck, I'd suggest finding an old copy elsewhere (DEV/QA) etc...
Isn't it possible to do a rollback on an UPDATE statement?
Late one but hopefully useful…
If the database is in full recovery mode then all transactions are logged in the transaction log and can be retrieved. The problem is that this is not natively supported, because it is not the main purpose of the transaction log.
Options are:
Commercial tools such as ApexSQL Log (more expensive, more options) or Quest Toad (less expensive, fewer options for this purpose; its main focus is SQL Server management)
Trying to do this yourself, as user1059637 pointed out. The problem with this approach is that it can't read transaction log backups, and it is more tedious.
It comes down to how much your data is worth to you in terms of time and money.

SQL query giving wrong result on linked server

I'm trying to pull user data from 2 tables, one locally and one on a linked server, but I get the wrong results when querying the remote server.
I've cut my query down to
select * from SQL2.USER.dbo.people where persId = 475785
for testing and found that when I run it I get no results even though I know the person exists.
(persId is an integer, db is SQL Server 2000 and dbo.people is a table by the way)
If I copy/paste the query and run it on the same server as the database, then it works.
It only seems to affect certain user ids as running for example
select * from SQL2.USER.dbo.people where persId = 475784
works fine for the user before the one I want.
Strangely I've found that
select * from SQL2.USER.dbo.people where persId like '475785'
also works but
select * from SQL2.USER.dbo.people where persId > 475784
brings back records with persIds starting at 22519 not 475785 as I'd expect.
Hope that made sense to somebody
Any ideas ?
UPDATE:
Due to internal concerns about doing any changes to the live people table, I've temporarily moved my database so they're both on the same server and so the linked server issue doesn't apply. Once the whole lot is migrated to a separate cluster I'll be able to investigate properly. I'll update the update once this happens and I can work my way through all the suggestions. Thanks for your help.
The fact that LIKE works is no minor clue: LIKE forces integers to strings (so you can say WHERE field LIKE '2%' and you will get all records that start with a 2, even when the field is of integer type), which means the comparison bypasses the numeric index. Your incorrect comparisons would lead me to think your indexes are corrupt, but you say they work when not used via the link... however, the selected index might be different depending on how the query is run? (I seem to recall an instance when I had duplicate indexes and only one was stale, although that was too long ago to recall the exact cause.)
Nevertheless, I would try rebuilding your index using the DBCC DBREINDEX (tablename) command. If it turns out that doing so fixes your query, you may want to rebuild them all: here is a script for rebuilding them all easily.
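For the people table that would be something like this (SQL Server 2000 syntax; with no index name given, it rebuilds all indexes on the table):
DBCC DBREINDEX ('dbo.people');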
Is dbo.people a table or a view? I've seen something similar where the underlying table schema had been changed, and dropping and recreating the view fixed the problem, although the fact that the query works if run directly on the linked server does indicate something index-based.
Is the linked server using the same collation? Depending on the index used, I could see something like this happening if the servers were not collation compatible, but the linked server was set up with "collation compatible" enabled (which tells SQL Server it can evaluate comparisons on the remote server).
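One quick check is to compare the default collation reported by each server (run this on both):
SELECT SERVERPROPERTY('Collation');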
I would check the following:
Check your definition of the linked server, and confirm that SQL2 is the server you expect it to be
Check and compare the execution plans from both the remote and local servers
Try linking by IP address rather than name, to ensure you have the proper machine
Put the code into a stored procedure on the remote machine, and try calling that instead
Sounds like a bug to me - I've read of some issues along these lines, but can't remember specifically what. What version of SQL Server are you running?
For a persId that fails, such as
select * from SQL2.USER.dbo.people where persId = 475785
how does:
SELECT *
FROM OpenQuery(SQL2, 'SELECT * FROM USER.dbo.people WHERE persId = 475785')
behave?