Determine which SQL script file is being used

I'm starting to learn SQL in class and am using the SQL command line to run some of the pre-built .sql files,
e.g.
start "directory/file_1.sql"
A problem arose during class last week when we were switching between several different scripts and I lost track of which file I was working on.
At the moment I'm using another pre-built script that drops all the tables and then starts the second script.
Is there a way to check which file is currently being used?
//edit
Using the SQL*Plus tool for Oracle 12c
//edit2
Amid the ongoing confusion about which program I am using, I've attached a screenshot
//edit3
The script files are located on Windows 10,
so once I log in as scott I run
start "C:\Users\Documents\Oracle SQL\JLDB_Build.sql"
but sometimes I have to work on
start "C:\Users\Documents\Oracle SQL\JLDB_Build_5.sql"
and then I forget whether I'm working on Build or Build_5.
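For illustration only (this is just a sketch of one possible convention, not something from the course files): put a PROMPT and a DEFINE at the top of each build script, for example in JLDB_Build.sql

PROMPT Running JLDB_Build.sql
DEFINE current_script = JLDB_Build.sql

and the equivalent two lines in JLDB_Build_5.sql. PROMPT echoes the file name when the script starts, and because DEFINE persists for the session, typing DEFINE current_script at the SQL> prompt later prints whichever value the last build script set, so you can tell whether you are in Build or Build_5.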

Related

Increment Version number when checking in to TFS

I have a folder in TFS which has SQL scripts. At the moment I manually add a comment and update a version number inside the comment every time I make a change and check it back in. This works, but I was hoping there might be a better way. Is there a way to automate this in TFS?
I have read the following article:
Version control project files
Do I have to go through such a process for simple .sql files? Are there any other, simpler ways?
There are a few ways you can do this:
Create an automated build in TFS and write a custom build step / PowerShell script to parse the appropriate SQL scripts, read the version, increment it, and store the new version by either checking in the updated file or saving it to a local store.
Use a database project (part of SQL Server Data Tools), which will output a DACPAC. Inside the database project, you can set the version as specified here. This stores the version in the project file. If you update your TFS build number to be digits only, you can then update the project file to set that value to match the build using a custom build task. For example, your build number could be yyyy.m.d.R, where R is the number of times that build has run today (TFS manages that - it's the revision variable). Or, you could set the <DacVersion> tag to something like 2.1.0.0 and have your build replace the last digit with yyyymmddr.
I'd recommend using a database project. It's pretty easy to create a new database project off an existing database.
The first way mentioned by Jacob above can achieve that; if you just want to increment the version number of the script/folder, just create a CI build definition.
Actually, you can simply enable Label sources and set the Label format with predefined environment variables such as $(build.buildNumber), without publishing any artifacts during the build process.
Thus, the CI build will be triggered automatically when you check in files, and the source (SQL script/folder) will be labeled with the incremented number.
Then you can find the specific versions by the label.

Update changes from Development instance to Production instance in Odoo

I have 2 instances of Odoo v9 running on the same server (Ubuntu 14.04). I want to make changes (install modules, change source code, or anything) in the development instance and, after confirming they are OK, move the changes to the production instance. Is there any way of doing that without repeating the whole process of development?
Thank you.
As I understand it, you do not want to stop the production instance.
If the changes are only in XML files you might be able to get away with only updating the module from the frontend (Apps -> Your Module -> Update), although if you have modified the __openerp__.py file inside your module you have to enter debug mode and click Update Apps List first of all.
For changes in files that are inside the static folder of your module, you do not need to stop the server, although your users must press Ctrl + Shift + R to flush their caches and bring the new content into their browsers.
For Python source code I am afraid that you have to stop both instances of the server so that the code can be correctly recompiled.
(See note 1 on this.)
In the end you should stop and update everything, because unexpected things might pop up at random times due to resources not being properly updated.
Note 1: The Python documentation about the compilation of Python modules mentions, among other things:
As an important speed-up of the start-up time for short programs that use a lot of standard modules, if a file called spam.pyc exists in the directory where spam.py is found, this is assumed to contain an already-“byte-compiled” version of the module spam. The modification time of the version of spam.py used to create spam.pyc is recorded in spam.pyc, and the .pyc file is ignored if these don’t match.
So theoretically, if you modify fileA.py in a module and a new fileA.pyc is generated, the server will be able to interpret and use it. In any case, I had an issue with two instances running where the .py file was creating a field and the XML file was using it, and the server reported that the field had not been created for the XML view; that means the server did pick up and parse the XML file but did not recompile the .py.

How to automate source control with Oracle database

I work in an Oracle instance that has hundreds of schemas and multiple developers. We have a development instance where developers can integrate their work before test or production.
We want to have source control for all the DDL run in this integrated development database. Currently this is done through a Red Gate product, which we run manually after we make a change to the database. Redgate finds the changes between what is in the schema and what was last checked into source control, makes a script of the differences and puts this into source control.
The problem, however, is of course that running Redgate can take some time and people run it infrequently or not at all for small changes. Also, Redgate will only look in one schema at a time and it would be VERY time consuming to manually run it against all schemas to guarantee that they are up to date. However, if the source-controlled code cannot be relied upon it becomes less useful...
What would seem to be ideal would be to have some software that could periodically (even once a day), or when triggered by DDL being run, update the source control (preferably GitHub, as this is used by other teams) from all the schemas.
I cannot seem to see any existing software which can be simply used to do this.
Is there a problem with doing this? (there is no need to address multiple developers overwriting each others work on the same day as we have this covered in a separate process) Is anyone doing this? Can anyone recommend a way to do this?
We do this with the help of a PL/SQL function, a Python script and a shell script:
The PL/SQL function generates the DDL of a whole schema and returns it as a CLOB.
The Python script connects to the database, fetches the DDL and stores it in files.
The shell script runs the source control commands to add the modifications (we use Bazaar here).
You can see the scripts on PasteBin:
The PL/SQL function is here: http://pastebin.com/AG2Fa9zL
The python program (schema_exporter.py): http://pastebin.com/nd8Lf0gK
The shell script:
#!/bin/sh
# Export the schema DDL to files, then commit any changes to Bazaar
python schema_exporter.py
d=$(date +%Y-%m-%d__%H_%M_%S)
bzr add
bzr st | grep -q -E 'added|modified' && bzr commit -m "Database objects on $d"
exit 0
This shell script is configured to run from cron every day.
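As a rough illustration of what such a PL/SQL function relies on (the author's actual code is on the PasteBin link above; this is just a sketch), DBMS_METADATA can hand back each object's DDL as a CLOB, for example for every table in the connected schema:

-- Illustrative query only: fetch the CREATE TABLE statement for each table
SELECT DBMS_METADATA.GET_DDL('TABLE', table_name) AS ddl
FROM user_tables
ORDER BY table_name;

The real function presumably walks all object types and returns the combined DDL as a single CLOB, which the Python script then writes out to files before the shell script commits them.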
Being in the database version control space for 5 years (as director of product management at DBmaestro) and having worked as a DBA for over two decades, I can tell you the simple fact that you cannot treat the database objects as you treat your Java, C# or other files and save the changes in simple DDL scripts.
There are many reasons and I'll name a few:
Files are stored locally on the developer's PC and the changes s/he makes do not affect other developers. Likewise, the developer is not affected by changes made by her colleague. In a database this is (usually) not the case: developers share the same database environment, so any change committed to the database affects the others.
Publishing code changes is done using Check-In / Submit Changes / etc. (depending on which source control tool you use). At that point, the code from the developer's local directory is inserted into the source control repository, and a developer who wants the latest code needs to request it from the source control tool. In a database the change already exists and impacts others even if it was never checked into the repository.
During the file check-in, the source control tool performs a conflict check to see if the same file was modified and checked in by another developer while you were modifying your local copy. Again, there is no such check in the database: if you alter a procedure from your local PC and at the same time I modify the same procedure with code from my local PC, we override each other's changes.
The build process for code is done by getting the label / latest version of the code into an empty directory and then performing a build (compile). The output is binaries which we copy over the existing ones; we don't care what was there before. With a database we cannot recreate the database, as we need to maintain the data! Also, the deployment executes SQL scripts which were generated in the build process.
When executing the SQL scripts (with the DDL, DCL and DML (for static content) commands) you assume the current structure of the environment matches the structure at the time you created the scripts. If not, your scripts can fail, for example when you try to add a new column which already exists (see the sketch after this list).
Treating SQL scripts as code and manually generating them will cause syntax errors, database dependency errors, and scripts that are not reusable, which complicates the task of developing, maintaining and testing those scripts. In addition, those scripts may run on an environment which is different from the one you thought they would run on.
Sometimes the script in the version control repository does not match the structure of the object that was tested, and then errors will happen in production!
There are many more, but I think you got the picture.
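To make the "adding a column which already exists" point concrete, here is a hypothetical guard (table and column names are made up) that a hand-written script needs precisely because it cannot assume the structure of the target environment:

-- Hypothetical defensive DDL: only add CUSTOMERS.EMAIL if it is not already there
DECLARE
  v_count PLS_INTEGER;
BEGIN
  SELECT COUNT(*) INTO v_count
  FROM user_tab_columns
  WHERE table_name = 'CUSTOMERS' AND column_name = 'EMAIL';
  IF v_count = 0 THEN
    EXECUTE IMMEDIATE 'ALTER TABLE customers ADD (email VARCHAR2(200))';
  END IF;
END;
/

Writing and maintaining such guards for every change is exactly the kind of overhead the points above describe.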
What I found that works is the following:
Use an enforced version control system that enforces check-out/check-in operations on the database objects. This will make sure the version control repository matches the code that was checked in, as it reads the metadata of the object during the check-in operation and not in a separate step done manually. This also allows several developers to work in parallel on the same database while preventing them from accidentally overriding each other's code.
Use impact analysis that utilizes baselines as part of the comparison to identify conflicts and to determine whether a difference (when comparing the object's structure between the source control repository and the database) is a real change that originates from development, or a difference that originated from a different path, such as a different branch or an emergency fix, and should therefore be skipped.
Use a solution that knows how to perform impact analysis for many schemas at once, using a UI or an API, in order to eventually automate the build & deploy process.
An article I wrote on this was published here, you are welcome to read it.
To me it seems like your way of working is backwards: developers run DDL against the DB in an unordered fashion and then you need an automated tool to infer the changes (and the DDL) that were run.
The process would be in better control if you did the following instead:
Developers write DDL as SQL scripts, preferably using a migration tool such as Flyway (http://flywaydb.org/documentation/migration/sql.html).
Migration scripts are checked into version control
Migration scripts are periodically run against the DB (e.g. by the migration tool)
In this workflow, the DB would only get altered through automated migration scripts and no one is allowed to make changes manually. Could this work for you?
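For example, with Flyway each change is a plain SQL file whose versioned name determines the order it runs in; a hypothetical migration (file name, table and column are made up) could be V2__add_customer_email.sql containing:

-- V2__add_customer_email.sql: Flyway applies this once and records it in its schema history table
ALTER TABLE customers ADD (email VARCHAR2(200));

Checked into version control, these files become the record of every DDL change, and the tool only applies the ones that have not yet run against a given database.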
(I develop the Oracle tools for Redgate)
Actually, using the tools you can already do what I think you're asking for, with Schema Compare for Oracle.
You can compare multiple schemas either in the UI or via the command line - I think what you're after is automating the command line tool which can create difference scripts, sync between source and destination (live, snapshot or scripts) and generate reports.
You can automate the command line to sync to a scripts folder which is your source code checkout and then subsequently run a command to commit the changes.
I think that's all good :)
We built a commercial tool that bridges Oracle with Git. It helps you manage your database objects with Git. Basically, the database becomes the working directory for the developer. You can perform Git operations in the database such as reset, commit, branch, merge, etc., and the database code is updated automatically. It might be worth taking a look: https://www.gitora.com

Timed out when accessing SQL Server 2008 R2

I'm developing a web project and for a start I need to create tables, procedures, views, etc.
At first (and in debug mode) the code runs fine and the first tables are created, but suddenly the transaction throws a "Timed out" error.
If I start running the code again (in debug mode) it doesn't make any changes.
In order to continue creating the rest, I have to build the code again (without having made any change to it). I publish again and then it continues to create more until it stops again, and I repeat the same actions as before.
I haven't tested it yet on the web to see what will happen, and the reason is simple... my ISP doesn't give me the option to create one more database (because of the contract I use).
Anyway, I need to know why that happens.
I have to say that I use some delays in my code, especially when it reads from an XML file; this file contains the structures of tables, procedures, etc., which I read and execute through my code-behind in VB.NET.
After many attempts at finding a solution, I noticed that the problem starts with the reading of the XML file. I set in this file some sleeps totalling 300 milliseconds and the code now runs OK, but slowly...
Thanks to all for their assistance.

ms-access: doing repetitive processes with vba/sql

I have an Access database backend that contains three tables. I have distributed the front end to several users. This is a very simple database with minimal functionality. I need to import certain rows from a file every hour into one of the tables in the database. I would like to know the best way to automate this process so that it runs hourly. I need it to run sort of as a service in the background. Can you tell me how you would do this?
You could have for example:
an MS Access file with all necessary code to run the import procedure
a BAT file containing the command line(s) that will run this MS Access file with all requested parameters; check the MS Access command line parameters to see the available options
task scheduler software to launch the BAT file: depending on the task scheduler and the command line to be sent, you could even avoid the BAT file step
If all you want to do is run some queries, I would not do this by automating all of Access, but instead by writing a VBScript that uses DAO to execute the SQL directly. That's a much more efficient way to do it, and will run without a console logon (which may or may not be required for full Access to be run by the task scheduler).