MS Access: doing repetitive processes with VBA/SQL

I have an Access database back end that contains three tables. I have distributed the front end to several users. This is a very simple database with minimal functionality. I need to import certain rows from a file every hour into one of the tables in the database. I would like to know the best way to automate this process so that it runs hourly. I need it running in the background, more or less as a service. Can you tell me how you would do this?

You could have, for example:
an MS Access file with all the code necessary to run the import procedure
a BAT file containing the command line(s) that will run this MS Access file with all required parameters. Check the MS Access command-line parameters to see the available options.
task scheduler software to launch the BAT file: depending on the task scheduler and the command line to be sent, you could even skip the BAT file step
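For illustration, the BAT file could be a single line like this (the Office path varies by version, and the database path and macro name are placeholders):
"C:\Program Files\Microsoft Office\Office14\MSACCESS.EXE" "C:\Imports\ImportFrontEnd.accdb" /x RunHourlyImport
Here /x runs the named macro on startup; the macro would call the VBA import procedure and then quit Access.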

If all you want to do is run some queries, I would not do this by automating all of Access, but instead by writing a VBScript that uses DAO to execute the SQL directly. That's a much more efficient way to do it, and will run without a console logon (which may or may not be required for full Access to be run by the task scheduler).
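A minimal sketch of that idea, shown here in PowerShell rather than VBScript since both drive the same DAO COM library the same way; the ProgID version, paths, and SQL are assumptions to adapt:
# Open the back end directly through DAO and run the import SQL,
# without launching the Access UI at all.
$engine = New-Object -ComObject DAO.DBEngine.120   # DAO.DBEngine.36 for older Jet installs
$db = $engine.OpenDatabase("C:\Data\Backend.accdb")
try {
    # The Text-ISAM source clause is one common Jet/ACE idiom for reading a flat file;
    # 128 is dbFailOnError, so a failed statement raises instead of failing silently.
    $db.Execute("INSERT INTO ImportTable SELECT * FROM [Text;HDR=Yes;Database=C:\Imports].[hourly#csv]", 128)
}
finally {
    $db.Close()
}
Scheduled hourly through the Windows Task Scheduler, this runs without any console logon requirement beyond what the scheduler itself needs.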

Related

How to automate source control with Oracle database

I work in an Oracle instance that has hundreds of schemas and multiple developers. We have a development instance where developers can integrate their work before test or production.
We want to have source control for all the DDL run in this integrated development database. Currently this is done with a Red Gate product, which we run manually after making a change to the database. Red Gate finds the differences between what is in the schema and what was last checked into source control, generates a script of the differences, and puts it into source control.
The problem, however, is of course that running Red Gate can take some time, so people run it infrequently or not at all for small changes. Also, Red Gate only looks at one schema at a time, and it would be VERY time consuming to run it manually against all schemas to guarantee they are up to date. But if the source-controlled code cannot be relied upon, it becomes less useful...
What would seem ideal is some software that could update source control (preferably GitHub, as this is used by other teams) from all the schemas, either periodically (even once a day) or when triggered by DDL being run.
I cannot seem to find any existing software that can simply be used to do this.
Is there a problem with doing this? (There is no need to address multiple developers overwriting each other's work on the same day, as we have this covered in a separate process.) Is anyone doing this? Can anyone recommend a way to do this?
We do this with the help of a PL/SQL function, a Python script and a shell script:
The PL/SQL function can generate the DDL of a whole schema and return it as a CLOB
The Python script connects to the database, fetches the DDL and stores it in files
The shell script runs the source control tool to add the modifications (we use Bazaar here).
You can see the scripts on PasteBin:
The PL/SQL function is here: http://pastebin.com/AG2Fa9zL
The Python program (schema_exporter.py): http://pastebin.com/nd8Lf0gK
The shell script:
#!/bin/sh
# Export the schema DDL to files, then commit any changes to Bazaar.
python schema_exporter.py
d=$(date +%Y-%m-%d__%H_%M_%S)
bzr add
bzr st | grep -q -E 'added|modified' && bzr commit -m "Database objects on $d"
exit 0
This shell script is configured to run from cron every day.
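For reference, a crontab entry for a daily 02:00 run could look like this (the script path is hypothetical):
0 2 * * * /opt/db-export/export_schemas.sh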
Having been in the database version control space for 5 years (as director of product management at DBmaestro) and having worked as a DBA for over two decades, I can tell you that you simply cannot treat database objects the way you treat your Java, C# or other code files and save the changes as simple DDL scripts.
There are many reasons and I'll name a few:
Files are stored locally on the developer's PC, and the changes s/he makes do not affect other developers; likewise, the developer is not affected by changes made by colleagues. In a database this is (usually) not the case: developers share the same database environment, so any change committed to the database affects others.
Publishing code changes is done using Check-In / Submit Changes / etc. (depending on which source control tool you use). At that point, the code from the developer's local directory is inserted into the source control repository, and a developer who wants the latest code needs to request it from the source control tool. In a database, the change already exists and impacts other data even if it was never checked into the repository.
During a file check-in, the source control tool performs a conflict check to see whether the same file was modified and checked in by another developer while you were modifying your local copy. Again, there is no such check in the database: if you alter a procedure from your PC and at the same time I modify the same procedure from my PC, we override each other's changes.
The build process for code is done by getting the label / latest version of the code into an empty directory and then performing a build (compile). The output is binaries, which we copy over the existing ones; we don't care what was there before. We cannot recreate a database this way, because we need to maintain the data! Instead, deployment executes SQL scripts that were generated in the build process.
When executing SQL scripts (with DDL, DCL, and DML commands, the last for static content), you assume the current structure of the environment matches the structure at the time you created the scripts. If not, your scripts can fail, for example by trying to add a new column that already exists.
Treating SQL scripts as code and generating them manually causes syntax errors, database dependency errors, and scripts that are not reusable, which complicates developing, maintaining, and testing those scripts. In addition, those scripts may run on an environment that differs from the one you thought they would run on.
Sometimes the script in the version control repository does not match the structure of the object that was tested, and then errors happen in production!
There are many more, but I think you get the picture.
What I have found works is the following:
Use an enforced version control system that requires check-out/check-in operations on the database objects. This makes sure the version control repository matches the code that was checked in, because it reads the object's metadata during the check-in operation rather than as a separate manual step. It also allows several developers to work in parallel on the same database while preventing them from accidentally overriding each other's code.
Use impact analysis that utilizes baselines as part of the comparison, to identify conflicts and to determine whether a difference (when comparing the object's structure between the source control repository and the database) is a real change originating from development, or a difference originating from another path, such as a different branch or an emergency fix, that should be skipped.
Use a solution that can perform impact analysis for many schemas at once, via the UI or an API, in order to eventually automate the build & deploy process.
An article I wrote on this was published here; you are welcome to read it.
To me it seems like your way of working is backwards: developers run DDL against the DB in an unordered fashion, and then you need an automated tool to infer the changes (and the DDL) that were run.
You would have better control of the process if you did the following instead:
Developers write DDL as SQL scripts, preferably using a migration tool such as Flyway (http://flywaydb.org/documentation/migration/sql.html).
Migration scripts are checked into version control
Migration scripts are periodically run against the DB (e.g. by the migration tool)
In this workflow, the DB would only get altered through automated migration scripts and no-one is allowed to do changes manually. Could this work for you?
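As a sketch, a scheduled job could then apply any pending migrations with the Flyway command line; the connection details and directory layout below are placeholders:
flyway -url=jdbc:oracle:thin:@dbhost:1521/ORCL -user=deploy -password=*** -locations=filesystem:sql/migrations migrate
The sql/migrations directory lives in version control and contains scripts named in Flyway's convention, e.g. V42__add_customer_index.sql, so the repository becomes the single source of truth for schema changes.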
(I develop the Oracle tools for Redgate)
Actually, using the tools you can already do what I think you're asking for, using Schema Compare for Oracle.
You can compare multiple schemas either in the UI or via the command line - I think what you're after is automating the command line tool which can create difference scripts, sync between source and destination (live, snapshot or scripts) and generate reports.
You can automate the command line to sync to a scripts folder which is your source code checkout and then subsequently run a command to commit the changes.
I think that's all good :)
We built a commercial tool that bridges Oracle with Git. It helps you manage your database objects with Git: basically, the database becomes the working directory for the developer. You can perform Git operations in the database such as reset, commit, branch, merge etc., and the database code is updated automatically. It might be worth taking a look: https://www.gitora.com

Counting how many scripts in SSIS have been completed

In an SSIS package I have multiple scripts running within a job. At any given time I want to read how many scripts have been executed, e.g. 5/10 (50%) completed. Please tell me how I can achieve that.
Currently there is no functionality provided by SSIS to track the progress of package execution.
It seems you need to write your own custom utility/application to implement this, or use a third-party one.
There are a few ways to do this:
1. Use the /Reporting (or /Rep) switch of DTEXEC at the command line (a sketch of reading the progress output follows this list). For example:
DTEXEC /F ssisexample.dtsx /Rep P > progress.txt
2. Implement package logging, or customize it.
3. Implement an event handler on the required executable. You can also use the OnPipelineRowsSent log of the Data Flow Task.
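As a rough sketch of option 1, you could run the package from PowerShell and count the progress messages in the captured output. The package path is a placeholder, and the assumption that progress lines begin with "Progress:" should be verified against your own DTEXEC output:
# Run the package with progress reporting redirected to a file.
dtexec /F "C:\packages\ssisexample.dtsx" /Rep P > progress.txt
# From a second session, count the progress messages written so far.
$done = (Select-String -Path progress.txt -Pattern '^Progress:').Count
"Progress messages so far: $done"
Mapping the raw message count onto "5/10 scripts completed" would still need package-specific parsing.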
If you want to write your own application, then the thread below will provide a nice starting point:
How do you code the Package Execution Progress window in C#
In my experience, we use separate software to monitor the jobs that are running: http://en.wikipedia.org/wiki/CA_Workload_Automation_AE
You can also try to create your own application that runs in the background and checks the status of your jobs by reading the logs.

Execute Multiple PowerShell Files using a SSIS Package

I have multiple PowerShell script files that I need to execute in a sequential flow (one after the other). Can someone please tell me how to schedule multiple PowerShell files to be executed using an SSIS package? I also need to build a fault-tolerant model where I re-execute a PowerShell script in case of failure.
Running PowerShell
There isn't a built-in Execute PowerShell task (pity), so you'll need to use an Execute Process Task with the path to powershell.exe.
Something to take into consideration is that the default execution policy for PowerShell is Restricted, which cannot run a script. Further complicating matters, the account that runs the SSIS package will also need its execution policy modified to be able to fire off those scripts. It's a simple matter of Set-ExecutionPolicy RemoteSigned or whatever level you feel is appropriate, but you'll need to do this from within that account.
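For example, the Execute Process Task could be configured along these lines (the script path is a placeholder):
Executable: C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe
Arguments:  -NoProfile -ExecutionPolicy Bypass -File "C:\Scripts\Step1.ps1"
Passing -ExecutionPolicy Bypass on the command line scopes the policy change to that one process, which can be a lighter-weight alternative to changing it for the whole account.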
Fault Tolerance
The simple approach is to ignore the return code in the Execute Process Task. Alternatively, if the desire is to keep running the PS1 until it doesn't fail, then you'd wrap a For Loop Container around the Execute Process Task and only set the terminal condition once the task returns a success value. Things might still go sideways depending on what the failure is.
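Alternatively, the retry can live in a small PowerShell wrapper that the Execute Process Task calls. A sketch, assuming the wrapped script reports failure through its exit code, with the path and retry policy as placeholders:
$maxAttempts = 3
for ($attempt = 1; $attempt -le $maxAttempts; $attempt++) {
    & "C:\Scripts\Step1.ps1"              # the script being retried
    if ($LASTEXITCODE -eq 0) { break }    # success: stop retrying
    Start-Sleep -Seconds 30               # wait before the next attempt
}
exit $LASTEXITCODE                        # surface the final result to SSIS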

Need suggestions on what tool to use for manipulating a file

We need to create a daily process that will manipulate a file that is currently generated manually before being FTPed to a vendor. The issues with the current file are as follows:
1) It is currently comma delimited and it needs to be pipe delimited.
2) The vendor only wants specific columns to be sent. They have a limit of 26 columns.
We need to develop an automated process that can be scheduled to run once a day and pick up a file with a specific extension, do the file manipulation and FTP the file.
Ideally, we would like to have some error handling in the process. We would want an email to get sent out if there was no file to process or if there was an error during the manipulation or FTP process.
My first thought was to use SQL Server Import/Export. I've done this before, but that was only for packages that could be run manually. This process needs to be fully automated (after the existing file is manually generated). I don't see a way to pick up any file with a specific extension; it looks like I have to select a specific file.
Is there a way to use Import/Export or some similar tool?
Or, do I need to write a program to do this sort of task? It seems to me like it would be more work to write a program. So, I am trying to avoid that.
Thank you for your help!
You should write a program. Seriously.
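That said, the program does not have to be large. A minimal PowerShell sketch, in which the folders, column names, FTP server, credentials, and mail settings are all placeholders, and which assumes the data contains no embedded quotes or pipes:
$inFile = Get-ChildItem "C:\Outbound\*.csv" | Select-Object -First 1
try {
    if (-not $inFile) { throw "No file to process." }

    # Keep only the columns the vendor wants (up to their 26-column limit)
    # and re-delimit with pipes.
    $columns = 'Col1','Col2','Col3'   # ...the agreed column names
    $outFile = "C:\Outbound\vendor.txt"
    Import-Csv $inFile.FullName |
        Select-Object $columns |
        ConvertTo-Csv -Delimiter '|' -NoTypeInformation |
        ForEach-Object { $_ -replace '"', '' } |   # crude quote stripping
        Set-Content $outFile

    # Upload the result via FTP.
    $client = New-Object System.Net.WebClient
    $client.Credentials = New-Object System.Net.NetworkCredential('user', 'pass')
    $client.UploadFile("ftp://ftp.vendor.example/inbound/vendor.txt", $outFile)
}
catch {
    # Email on any failure: missing file, manipulation error, or FTP error.
    Send-MailMessage -To 'ops@example.com' -From 'etl@example.com' `
        -Subject 'Vendor feed failed' -Body $_.Exception.Message `
        -SmtpServer 'smtp.example.com'
}
Scheduled daily via the Windows Task Scheduler, this covers the pick-up, manipulation, FTP, and error-email requirements.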

Check for multiple files

Okay, I'll try to explain as well as I can... Quite a particular case.
Tools: SSIS 2008
We have a control flow that now needs to be triggered by an event: the presence of one or multiple files (1, 2 or 3).
The variables used:
BO_FileLocation_1
BO_FileLocation_2
BO_FileLocation_3
BO_FileName_1
BO_FileName_2
BO_FileName_3
There can be one, two or three files, defined in the variables above. When they are filled in, the files should be processed. When they are empty (meaning there is just one file), the process should ignore them and jump to the next (file watcher?) task.
For example:
BO_FileLocation_1 = "C:\"
BO_FileLocation_2 = NULL
BO_FileLocation_3 = NULL
BO_FileName_1 = "test.csv"
BO_FileName_2 = NULL
BO_FileName_3 = NULL
The report only needs one file.
I need a generic concept that checks for the presence of these files; it may need to be more generic than my SSIS knowledge can handle right now. It would be handy, for example, if a 4th file is added in the future. I was also thinking of working with a single script to handle all the logic.
Thanks in advance
If all you want is to trigger the Copy Source File task when one or more of the files is present, just use the OR constraint in your flow. First connect all the tasks to the destination. Then click one of the green arrows; its properties window will pop up. Select Logical OR instead of Logical AND. If everything went well, the connections will now show as dashed lines.
There are several possible solutions:
1. Create a sequence container and include all the file imports in it. Add int variables RowCountFile1, RowCountFile2, and RowCountFile3 with a value of 0 (the default when you create an int variable). Add a Row Count transformation to each of the data flows. Create a precedence constraint from the sequence container to the "Do something" task, set it to use both a Success constraint and an expression, and set the expression to @RowCountFile1 > 0 || @RowCountFile2 > 0 || @RowCountFile3 > 0. The advantage of this approach is that you can take an action as soon as the files are detected, you import all available files, and you only take the action after all the files have been imported. You could then schedule this SSIS package as a SQL Server Agent job step and run it as frequently as you want.
2. A variant on solution 1 is to use Foreach File enumerator containers inside the sequence container. This is useful if you don't know the exact name of the file and may need to import more than one under some circumstances. For instance, if you receive a file every few minutes with a timestamp in its name and your process doesn't run for some reason, you may have to process multiple files to catch up and then take an action once that's done.
3. You could use the File Watcher task as you outlined in your question. The only problem I have with the File Watcher task is that the package has to be in a constantly running state, which makes it hard to troubleshoot problems and performance. It can also introduce other problems; I remember having some issues with the File Watcher task years ago when it first came out. It may well be a totally stable task now, but I prefer other methods after having been burned previously. If you really want the package to run continuously instead of having it be called by a job, you could always use a script task to check for the file, sleep the thread if it's not found, check again, etc. I'm sure that's what the File Watcher task does, but I would trust my own C# over the task. Power to anyone who has had better experiences than me with File Watcher...
4. Use PowerShell. If you just want to take an action when a file appears and you aren't importing the data, then a PowerShell script could do this just as well as an SSIS package. The drawbacks are that you have to learn some basic PowerShell, it may be hard to maintain in the future if PowerShell is not your bread-and-butter language, and you may have to rewrite the code as an SSIS package later if you want to import the data. You would probably call the PowerShell script from a SQL Server Agent job step, so scheduling is handled pretty easily.
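For the polling idea in options 3 and 4, a minimal PowerShell sketch; the folder, file pattern, poll interval, and action are all assumptions:
$pattern = 'C:\Incoming\BO_*.csv'
while ($true) {
    $files = Get-ChildItem $pattern -ErrorAction SilentlyContinue
    if ($files) {
        foreach ($f in $files) {
            # Take the action here, e.g. launch the package or copy the file.
            Write-Host "Found $($f.FullName)"
        }
        break
    }
    Start-Sleep -Seconds 60   # check again in a minute
}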
There are more options than what I listed, so let me know if you still want more suggestions.