Pro*C: why must the SQL user & password be mentioned in the makefile?

I have a weird problem and I am trying to trace it back to its root cause.
The scenario is:
I have a bunch of C, C++ & Pro*C code scattered across multiple folders [a huge number of files], and dozens of makefiles. We regularly run the makefiles to build the latest executables whenever we change something in the code/config/libs.
The problem is:
My SQL user IDs and passwords keep expiring at regular intervals.
These IDs are used in the makefiles and in the Pro*C code to connect to the DB. We have to get them reset regularly, and requesting this again and again has become a problem.
The question is:
Why do the SQL credentials fail again after getting them reset?
After every reset, the ID works 3 or 4 times, after which it fails again.
In the makefiles, the IDs are used for semantic checking; by default, the Pro*C precompiler checks only for syntax errors. The executables cannot fail if the IDs are working, so why do the IDs keep failing? Please advise on how this could be fixed.
Should I request the DBA to change any settings for this ID? Or could there be something in my code that makes the IDs fail by connecting incorrectly? We copy the IDs into an environment variable, which is read by all of the executables. Could this be causing the problem? Should we take any precautions when using data from an environment variable? [The executables work perfectly 3 times, after which the SQL ID fails.]
Please advise on what precautions should be taken on my side.
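For reference, this is the kind of check I am thinking of asking the DBA to run (a sketch; it assumes access to the DBA views, and MYUSER is a placeholder for our actual ID):

    -- Is the account expiring, or is it being locked out by failed logins?
    SELECT username, account_status, lock_date, expiry_date, profile
    FROM   dba_users
    WHERE  username = 'MYUSER';

    -- The profile limits that drive expiry and lockout:
    SELECT resource_name, limit
    FROM   dba_profiles
    WHERE  profile = 'DEFAULT'   -- substitute the profile found above
    AND    resource_name IN ('PASSWORD_LIFE_TIME', 'FAILED_LOGIN_ATTEMPTS',
                             'PASSWORD_GRACE_TIME');

If account_status shows LOCKED(TIMED) rather than EXPIRED, something is still connecting with the old password and tripping FAILED_LOGIN_ATTEMPTS, which would match the "works 3 or 4 times, then fails" pattern.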

Related

Is there a way to extract Access Modules without opening the file?

I ended up corrupting my database to the point where every time I attempt to open it, I get error 3022, "The changes you requested to the table were not successful because they would create duplicate values in the index."
Recovery of the file does not seem possible, and my previous backup is a month old. I have been able to extract everything but the Modules, which are what I most need to recover. None of the standard methods I have found work, because they require the ability to open the database (for example, trying to set it as a VBA reference still gives the same error).
Is there any way to get the modules or code out of the file without opening it?
Edit:
I was finally able to get access to the file. Using DBEngine.CompactDatabase, I was able to do a compact and repair. The issue has boiled down to the "MSysAccessStorage" table being corrupt; it reports "Id is not an index in this table". I now have access to everything except the modules, which I can't open without MSysAccessStorage working.
I'm going to keep poking at it but I'm not sure what options I have for fixing a system table. Any ideas would be helpful.
Unfortunately, the Visual Basic for Applications project has been corrupted. The original database doesn't even report any VBProjects when listing a count. I'm going to call this one a lost cause. Thanks to everyone who tried to help.

sqlplus hangs after being called from a batch file, without throwing an error message

Essentially, where I work we run a variety of reporting processes that follow the same basic structure:
A batch file calls an SQL script, which executes a stored procedure. Another script extracts the data from Oracle and writes it to a CSV. Finally, an Excel macro runs to create the final output.
We have recently been encountering an issue where, if the procedure takes longer than about an hour to run, it hangs indefinitely without moving on to the next line of the batch file. No error message is thrown.
The most frustrating part is that certain procedures sometimes have the issue, and then the next day they do not.
Has anyone else ever encountered this issue? Or have any idea what could be causing this problem? I feel like it could be connection/firewall related, but it really is not my area of expertise!
You should instrument the batch file and use extended SQL tracing to reveal where ALL of your time is going. Nothing can escape proper instrumentation. You will find the source of the problem. What you do about it varies depending upon the particular problem (i.e., anti-pattern).
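For example, the extended trace can be switched on from within the SQL script itself, just before the slow step (a sketch; DBMS_MONITOR with no session arguments traces the current session, and the resulting trace file is then formatted with tkprof):

    -- Tag and enable extended SQL trace (waits + binds) for this session.
    ALTER SESSION SET tracefile_identifier = 'report_run';
    EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(waits => TRUE, binds => TRUE);

    -- ... the stored procedure call goes here ...

    EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE;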
I see issues like this all the time. What I do is connect to the DB and see what is running by checking gv$session. The key is to identify which SQL the script is running, then see whether there is any reason for it to be "hung" (there are MANY possible reasons): for example, missing indexes; missing or out-of-date stats; workload on the instance; blocking locks; ...
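The check looks something like this (a sketch; REPORT_USER is a placeholder for whatever account the batch file connects as):

    -- Find the reporting session and see what it is doing or waiting on.
    SELECT inst_id, sid, serial#, status, sql_id, event,
           seconds_in_wait, blocking_session
    FROM   gv$session
    WHERE  username = 'REPORT_USER';

    -- Then pull the statement text for the sql_id found above:
    SELECT sql_text FROM gv$sql WHERE sql_id = '&sql_id_from_above';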
If you have the SQL Tuning Advisor, you can run the SQL through it to get some ideas about solutions. An ADDM report may also suggest additional fixes.
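Roughly like this, if the instance is licensed for the Tuning Pack (a sketch; the sql_id is a placeholder taken from the gv$session query above):

    DECLARE
        l_task VARCHAR2(128);
    BEGIN
        l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(sql_id => 'abcd1234efgh');
        DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => l_task);
        DBMS_OUTPUT.PUT_LINE('Task: ' || l_task);
    END;
    /
    -- Then read the advisor's findings:
    SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('&task_name_from_above') FROM dual;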

SQL Server batch database alters / batch database changes - best and safest way

We have a small development team of 5 developers working on a large enterprise level web based asp.net/c# system.
We do a lot of database updates, including stored procedure creations and alters, as well as new table creation, column creation, record inserts, record updates, and so on and so forth.
Today, all of the developers place all change scripts in one large SQL change script file that gets run on our Test and Production environments. This single file contains stored proc alters, record inserts, updates, etc. The file can end up quite lengthy, as we may only do a test or production release every 1 to 2 months.
The problem that I am currently facing is this:
Once in a while, a script error occurs at some location in this large "batch change script". Perhaps an insert fails, or perhaps an alter fails for a proc, for instance.
When this occurs, it is very difficult to tell which changes succeeded and which failed on the database.
Sometimes, even if one alter fails, the script continues executing; other times it stops and nothing further gets run.
So today I end up manually checking procs and records to see what actually worked and what did not, and this is a bit painstaking.
I was hoping I could roll this entire change script up into one big transaction, so that if any problem occurred I could just roll every change back, but that does not appear to be possible with batch scripts like this in SQL Server.
So I then tried backing up the databases before running the scripts, so that if an error occurred I could simply restore the DB, fix the problem, and re-run the fixed script. However, in order to restore a database I have to turn off our database mirroring, so this is also not ideal.
So my question is, what is the safest way to run batch scripts on a production database?
Is there some way that I can wrap the entire script in a transaction that I can roll back, which I am not seeing?
Would it possibly be better for us to track and run separate script files, so that if one file fails we can just move it to a "failed" directory to be looked at, and continue running all the other files?
Looking for advice and expertise.
Thank you for your time.
Matt
The batch script should be run on your QC database first so that any errors are picked up before production.
The QC database should be identical to production or as close as it can be to identical.
Each script should trap for errors and report the name of the script, along with the location of the error, using PRINT statements. Then, if an error occurs when applying the changes to production, you at least have the name of the script and the location of the error within it.
If your QC database is identical or very close, production errors should be very rare.
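A sketch of that per-script trapping in T-SQL (the object and script names here are made up; note that this covers a single batch, since GO starts a new batch, which is also why the whole multi-batch file can't trivially be wrapped in one transaction):

    SET XACT_ABORT ON;  -- roll the transaction back on most runtime errors
    BEGIN TRY
        BEGIN TRANSACTION;

        PRINT 'Running: 042_add_customer_column.sql';
        ALTER TABLE dbo.Customer ADD MiddleName NVARCHAR(50) NULL;

        PRINT 'Running: 043_seed_order_status.sql';
        INSERT INTO dbo.OrderStatus (StatusCode, Description)
        VALUES ('HOLD', 'On hold');

        COMMIT TRANSACTION;
        PRINT 'All changes committed.';
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;
        -- Report which statement failed and why.
        PRINT 'FAILED: ' + ERROR_MESSAGE()
            + ' (error ' + CAST(ERROR_NUMBER() AS VARCHAR(10))
            + ', line '  + CAST(ERROR_LINE()   AS VARCHAR(10)) + ')';
    END CATCH;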

How to log Command Line activity? [duplicate]

Possible Duplicate:
DOS command to Display result on console and redirect the output to a file
I've tried various Google searches, but nothing seemed to solve my problem.
Basically, I'm working for a company who need me to work with their in-place database and extract the various data needed for reports. They are using SQLite (please, I've heard enough comments about how it might not be the best choice for a DB, so leave them out), and I want either all of my activity in the Windows command prompt to be logged, or at least everything I do from the SQLite command line to appear in a .txt file, just in case I need to refer back to it later.
Can anybody here explain to me how to do this? I'm a bit of a beginner and need this stuff broken down step by step; I've not done anything like this before.
Cheers!
I'm reasonably certain you can't do this directly -- i.e., the Windows command prompt doesn't provide a way to log the input you provide to it. You can capture outputs (e.g., from commands you run), but for your purposes that's probably not adequate.
You probably need to create a "shell" of your own that takes inputs from the user, logs each one, sends it on to the command prompt, captures the output from the command prompt, and logs that as well.
In an answer to a previous question, I posted some code that handles most of what you need to do. The big difference is that you'll want to look at its handle_output (for example) and, instead of just displaying the captured output on the console, write it to a file as well. As it stands right now, that example redirects the child's standard input to come from a file, but changing it to read from the console instead should be fairly straightforward: you'll basically use a function much like the handle_output and handle_error it already includes, except that instead of displaying output, you'll read input from the user, and for each line you'll 1) write it to the log, and 2) send it to the child via an anonymous pipe (much as handle_output and handle_error read from anonymous pipes).
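Separately, if capturing just the SQLite side is enough, the sqlite3 shell can write its own transcript with its dot-commands (a sketch; the filename is a placeholder):

    .echo on
    .output session.txt
    SELECT name FROM sqlite_master WHERE type = 'table';
    .output stdout

.echo on makes the shell echo each command before its results, and .output redirects both to session.txt until .output stdout switches back to the console; note that while the redirect is in effect, nothing appears on screen.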

Accessing tables from different .mdb files

I need to show a grid of saved projects (compare "orders") in a datagrid, where the projects are saved in an Access 2000 database with a schema similar to the following:
ID  Name      Country_ID  Plant_Type
1   'Test'    1           1
2   'Second'  2           2
Let's call this file "Projects.mdb". It is then shown in the datagrid as:
ID  Name      Country    Plant Type
1   'Test'    'Germany'  'Free Range'
2   'Second'  'France'   'Inclined Roof'
where the countries and "plant types" are fetched from a different table in a different .mdb file (also Access 2000; call it "Language.mdb", although there is a lot of other background data in it), depending on the current user's language preference. Unfortunately, merging these .mdb files into one is not an alternative.
To be able to show the datagrid, I have so far linked the tables from "Language.mdb" into "Projects.mdb", but this breaks when the project is installed on another computer with the .msi file I created (we'd like to have this easily packaged and installed), as "Language.mdb" doesn't exist at the linked path on the target computer (basically the problem here).
I can come up with the following solutions:
1. Force all users to install to the same path, so that the links will work (undesirable).
2. Use connection strings in the query, as shown here on MSDN (still trying this out, but I need to work out the details; see the sketch after this list).
3. Make a post-install script that relinks the tables according to the correct path.
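For option 2, the query-level connection looks roughly like this in Jet/Access SQL (a sketch; the path and the lookup table/column names are invented, and the path would still need to be resolved per machine):

    SELECT p.ID, p.Name, c.CountryName, t.PlantTypeName
    FROM (Projects AS p
    INNER JOIN [;DATABASE=C:\App\Data\Language.mdb].Countries AS c
        ON p.Country_ID = c.ID)
    INNER JOIN [;DATABASE=C:\App\Data\Language.mdb].PlantTypes AS t
        ON p.Plant_Type = t.ID;

This avoids permanent linked tables, but the path still ends up embedded in every query.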
But I think I'm doing something wrong here. As stated above, merging the .mdb files is not an option, but other suggestions, such as changing the database schema or whatever else it could be (I'm not very experienced with databases), would be much appreciated.
To get around the "different install paths" problem, I use code (on every database load) which first looks for any back-end databases in the current db folder; if they are not found, it asks the user to locate the missing .mdb file. The code then relinks the database(s). Once the dbs have been successfully linked, the database saves the path and checks that path first on subsequent loads.
Well, based on the constraints that you have put on the solution, I would go with either option 2 or option 3. There is no elegant solution to this at all.
I would, however, lean towards your third option as a "one time" fix to get the files linked, so that the path between them is known and you are not dynamically adding path information into every query.
Note: I'll just mention, though I'm sure you already know this, that if you are looking at doing something like this, it just feels wrong to be doing it with Access, let alone Access 2000, for client deployments at this point. I would strongly recommend genuinely re-evaluating the solution to see whether you can either merge to one file or move to SQL Server Express or something similar that you could send to the user as an installer.
Is Projects.mdb split, as it should be, to allow a front end on each user's computer? If so, can you not store the path in the front end and only re-link if it changes? Code to re-link tables is quite simple, for the most part. The user can be allowed to browse for the location, and the Connect property can be updated accordingly.