Querying Abra Alerts 5.1 Logs? - sql

We are trying to track down a particular Abra alert which we believe is attached to some sort of custom code that generates MS Access *.snp files. We think we will have a better chance of tracking down the alert by looking at the Abra Alerts logs and seeing which alerts ran at the timestamps of the generated files.
There are many, many Abra alerts listed in the Abra Alerts main window, and each has quite a few log entries associated with it. The log entries from the various alerts can be sorted and filtered, but they cannot be filtered within a specific date/time range.
So I was wondering whether there is a way to query the log file data directly. From what I understand, Abra Alerts 5.1 uses a FoxPro database (Sage Abra Suite uses Visual FoxPro 09.00.00). My thought was that perhaps it could be connected to via ODBC for the purpose of querying a specific date range.

You can connect to a FoxPro database using several different types of drivers, including OLE DB and ODBC. You will need to download the drivers specific to FoxPro.
Microsoft states that they no longer support the Visual FoxPro ODBC driver (although I have never found any problems with it). But they do support the OLE DB driver...
http://www.microsoft.com/en-us/download/details.aspx?id=14839
If interested, here is an article which discusses why they have chosen to stop support of ODBC... http://support.microsoft.com/kb/277772
There are many tools out there that will let you view and query the FoxPro tables; basically, any tool that can connect using an OLE DB driver can be used. I use Visual Studio. Here is another that I have not used personally, but I have heard good things about it... http://www.ultradiff.com/
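For what it's worth, here is a minimal sketch of connecting over ODBC from Python with pyodbc, assuming the Visual FoxPro ODBC driver is installed (it is 32-bit only, so a 32-bit Python would be needed) and that the Abra data lives in a folder of .dbf tables; the folder path is a placeholder, not the real location.

```python
# Minimal sketch: connect to a folder of FoxPro .dbf tables over ODBC and
# list the tables the driver can see.  The SourceDB path is a placeholder.
import pyodbc

conn = pyodbc.connect(
    r"Driver={Microsoft Visual FoxPro Driver};"
    r"SourceType=DBF;SourceDB=C:\Abra\Data;Exclusive=No;"
)
cur = conn.cursor()

# Enumerate the tables exposed by the driver; any of them can then be
# queried with ordinary SELECT statements.
for row in cur.tables(tableType="TABLE"):
    print(row.table_name)
```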

The Abra Alerts log database is actually an Access database called DASLOGDB.MDB. That can be monitored using the Jet driver.
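If that is the case, a date-range query over ODBC is straightforward. Here is a minimal sketch in Python with pyodbc, assuming the 32-bit Jet/Access ODBC driver; the table and column names ("AlertLog", "LogTime") and the file path are placeholders, since the actual DASLOGDB.MDB schema isn't documented in this thread.

```python
# Minimal sketch: filter the Abra Alerts log database to a date/time range.
# Table, column, and path names below are placeholders -- check the real
# schema first (e.g. by opening the MDB in Access or calling cur.columns()).
from datetime import datetime
import pyodbc

conn = pyodbc.connect(
    r"Driver={Microsoft Access Driver (*.mdb)};"
    r"DBQ=C:\Program Files\Abra Alerts\Data\DASLOGDB.MDB;"
)
cur = conn.cursor()

start = datetime(2013, 6, 1, 0, 0, 0)
end = datetime(2013, 6, 1, 23, 59, 59)
cur.execute(
    "SELECT * FROM AlertLog WHERE LogTime BETWEEN ? AND ? ORDER BY LogTime",
    start, end,
)
for row in cur.fetchall():
    print(row)
```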
The .snp files you are seeing are actually the snapshot files where the monitor stores its results. They are binary files and cannot be viewed directly or through ODBC/OLE DB. If you are looking for which processes are associated with a .snp file, just search through the Processes folder for the name of the .snp file within the text of the .tsk files; the .tsk file that contains the .snp filename will also contain the name of the process.
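A quick way to do that search, sketched in Python; the Processes folder path and the .snp name below are placeholders for your installation.

```python
# Walk the Processes folder and report which .tsk files mention a given
# .snp file name.  Paths and the target name are placeholders.
import os

processes_dir = r"C:\Program Files\Abra Alerts\Processes"   # assumed location
target = "somealert.snp"                                    # snapshot file to trace

for root, _dirs, files in os.walk(processes_dir):
    for name in files:
        if not name.lower().endswith(".tsk"):
            continue
        path = os.path.join(root, name)
        with open(path, "r", errors="ignore") as fh:
            if target.lower() in fh.read().lower():
                # This .tsk references the .snp and also names the process.
                print("Referenced in:", path)
```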

You should find the log database in the Data folder, either in the installation location or in the program data folders. Or, if you look at the system DSN called DAS 4.0 Log Database, you can find the path there.
If you go to the View - Options menu and look at the Log tab, you can see the current log database definition.

Related

UiPath automation

I want to take a report of the latest download details from the MySQL database and send it to my project manager every Monday through email. The process remains the same; only the date changes dynamically. So I would like to automate this process using RPA with UiPath.
Could anyone help me achieve this?
Thanks
Use the 'Database' package to connect to the SQL database and run the query.
Schedule the program to run every Monday using Orchestrator.
Create the report with the current date using the DateTime functions.
There will be some extra steps, like transforming the data as necessary, but the basic outline of this process is as follows:
Download the MySQL ODBC drivers from the MySQL site.
In Control Panel -> Set up ODBC Data Sources (32-bit), set up a User or System DSN; make sure to test your connection to see that it works (it's similar to how you set it up in MySQL Workbench, or read about it here).
In your UiPath Studio Package Manager, add the Database Activities Pack.
Get your data into a DataTable using the UiPath.Database.Activities.ExecuteQuery activity (a sketch of the date-window logic appears after this list).
Write your data to an Excel file using the Write Range activity.
Send your mail through SMTP using the UiPath.Mail.SMTP.Activities.SendMail activity, including the freshly created Excel file as an attachment.
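The "date changes dynamically" part is just the calculation of the reporting window before the query runs. Here is a rough sketch of that logic in Python; in UiPath itself this would be a VB/C# expression feeding the Execute Query activity, and the table and column names ("downloads", "download_date") are placeholders, not the real schema.

```python
# Rough sketch of the date-window logic only.  Table/column names are
# placeholders; in UiPath the same values would be built with expressions
# and passed to the Execute Query activity's parameters.
from datetime import date, timedelta

run_day = date.today()                     # the job is scheduled for Mondays
week_start = run_day - timedelta(days=7)   # previous Monday
week_end = run_day - timedelta(days=1)     # previous Sunday

query = "SELECT * FROM downloads WHERE download_date BETWEEN ? AND ?"
params = (week_start, week_end)
# ExecuteQuery would run `query` with `params`, return a DataTable, and the
# rest of the workflow writes it to Excel and mails it as an attachment.
```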
First, take the data using the Database activity.
Create an automation in UiPath to attach the report and send it to the respective email.
Schedule it on Orchestrator.
It is not exactly the same, but at least sending a mail by taking a credential from Orchestrator is shown in this article:
https://www.c-sharpcorner.com/article/create-a-sequence-project-for-sending-a-mail-using-smtp-activity-by-taking-crede/

Distributing .mdf files to field sites

I am trying to find the best procedure for getting data from our SQL Server at headquarters to update apps running on local machines in various locations not connected to our network. Our current data and application are in FoxPro, where you simply copy the data file, so I am not very familiar with using SQL databases.
The field app uses LocalDB, and users don't save anything to the database. When the app opens it checks a web site for updates. I tried detaching our HQ .mdf and .ldf, downloading them and overwriting them on the local machine, but LocalDB would not attach to the new file (same name). I thought LocalDB closes and detaches when the application closes, but maybe I am wrong. I also wonder whether I need the log file, since no changes are made and I don't need to roll back anything. I have searched for a good article on this topic but haven't found anything. This must be a fairly common scenario in many companies.
You want to look into using replication, probably snapshot replication. This allows you to distribute one or more tables, or other objects, to off-site SQL Server instances on whatever schedule is applicable. You can use HTTP to send the data.
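If you do end up staying with the file-copy approach described in the question rather than replication, the downloaded .mdf usually has to be detached and re-attached explicitly on the LocalDB instance. Here is a rough sketch under stated assumptions, using Python and pyodbc; the instance name, paths, database name, and driver are all hypothetical. ATTACH_REBUILD_LOG also answers the log-file question: SQL Server creates a new .ldf, so the old log does not need to be shipped.

```python
# Rough sketch of swapping in a freshly downloaded .mdf on a LocalDB instance.
# Instance name, paths, and database name are assumptions for illustration.
import shutil
import pyodbc

conn = pyodbc.connect(
    r"DRIVER={ODBC Driver 17 for SQL Server};"
    r"SERVER=(localdb)\MSSQLLocalDB;DATABASE=master;Trusted_Connection=yes;",
    autocommit=True,   # CREATE/ALTER DATABASE cannot run inside a transaction
)
cur = conn.cursor()

# Detach the old copy if it is still attached.
cur.execute("IF DB_ID('FieldData') IS NOT NULL "
            "ALTER DATABASE FieldData SET SINGLE_USER WITH ROLLBACK IMMEDIATE")
cur.execute("IF DB_ID('FieldData') IS NOT NULL EXEC sp_detach_db 'FieldData'")

# Overwrite the data file with the freshly downloaded one.
shutil.copyfile(r"C:\Downloads\FieldData.mdf", r"C:\AppData\FieldData.mdf")

# Re-attach without the old log; a new .ldf is rebuilt (this assumes the
# database was shut down cleanly before it was detached at headquarters).
cur.execute(r"CREATE DATABASE FieldData "
            r"ON (FILENAME = 'C:\AppData\FieldData.mdf') "
            r"FOR ATTACH_REBUILD_LOG")
```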

Track changes made to a database

Background:
I have an MS SQL Server database and I want to track changes to it, for example if a column needs to be added or removed, or a table needs to be dropped; something similar to version control for regular code.
The problem:
While looking around I saw that there were some tools that can be used:
RedGate SQL Source Control
Visual Studio Database project
I am more interested in knowing whether either of these tools will track changes to my database. More specifically, I have a TFS server that is the source control for my MVC code; can I use either of these with TFS? Will it allow us to restore older versions? Will it allow multiple developers to work on the database simultaneously?
For this type of work, ApexSQL Source Control has shown to be all that you need. With this SSMS add-in you can work directly on a database, and all of your changes will be tracked in real time.
Yes, several developers can work at the same time on the same database. When one developer works on one or several objects, other developers can see which objects those are, and until the first developer finishes changing them, the others are not allowed to change those objects.
If an object is changed incorrectly, the previous version or any earlier version can be restored at any moment.
This add-in has all the necessary options and features to let developers work without losing time checking the changes made against an object, since the add-in does that for them. And you can always see who made a change, when, and what the change was.
Being in the database version control space for 5 years (as director of product management at DBmaestro) and having worked as a DBA for over two decades, I can tell you the simple fact that you cannot treat the database objects as you treat your Java, C# or other files and save the changes in simple DDL scripts.
There are many reasons and I'll name a few:
Files are stored locally on the developer's PC, and the changes he or she makes do not affect other developers; likewise, the developer is not affected by changes made by colleagues. With a database this is (usually) not the case: developers share the same database environment, so any change committed to the database affects the others.
Publishing code changes is done using Check-In / Submit Changes / etc. (depending on which source control tool you use). At that point the code from the developer's local directory is inserted into the source control repository, and a developer who wants the latest code has to request it from the source control tool. In a database the change already exists and impacts others even if it was never checked in to the repository.
During a file check-in, the source control tool performs a conflict check to see whether the same file was modified and checked in by another developer while you were modifying your local copy. There is no such check in the database: if you alter a procedure from your PC and at the same time I modify the same procedure from mine, we override each other's changes.
The build process for code gets the label / latest version into an empty directory and then compiles it. The output is a set of binaries that we copy over the existing ones; we don't care what was there before. With a database we cannot recreate it from scratch, because we need to preserve the data. Instead, the deployment executes SQL scripts that were generated in the build process.
When executing those SQL scripts (with DDL, DCL, and DML commands for static content), you assume the current structure of the environment matches the structure at the time the scripts were created. If it doesn't, your scripts can fail, for example by trying to add a column that already exists (see the guarded-DDL sketch after this list).
Treating SQL scripts as code and generating them manually leads to syntax errors, database dependency errors, and scripts that are not reusable, which complicates developing, maintaining, and testing those scripts. In addition, those scripts may run on an environment that is different from the one you thought they would run on.
Sometimes the script in the version control repository does not match the structure of the object that was actually tested, and then errors happen in production!
There are many more, but I think you got the picture.
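To make the "column already exists" point concrete, deployment scripts are often written defensively so they can run against an environment in whatever state it happens to be. A minimal sketch, with a hypothetical table and column, executed here through Python and pyodbc (the DSN is also hypothetical):

```python
# Guarded ("idempotent") DDL: safe to run whether or not the column exists,
# so the script does not depend on the target matching the build environment.
import pyodbc

conn = pyodbc.connect("DSN=MyAppDb;Trusted_Connection=yes;", autocommit=True)

guarded_ddl = """
IF COL_LENGTH('dbo.Customer', 'Email') IS NULL
    ALTER TABLE dbo.Customer ADD Email NVARCHAR(256) NULL;
"""
conn.cursor().execute(guarded_ddl)
```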
What I found works is the following:
Use an enforced version control system, one that enforces check-out/check-in operations on the database objects. This ensures the version control repository matches the code that was checked in, because it reads the object's metadata during the check-in operation rather than relying on a separate manual step. It also allows several developers to work in parallel on the same database while preventing them from accidentally overriding each other's code.
Use an impact analysis that utilizes baselines as part of the comparison, to identify conflicts and to determine whether a change (when comparing the object's structure between the source control repository and the database) is a real change originating from development, or a change originating from a different path, such as a different branch or an emergency fix, in which case it should be skipped.
An article I wrote on this was published here; you are welcome to read it.
If you're looking for a product that will track changes into TFS from your SQL Server automatically, I'd invite you to take a look at our product, Sql Historian. It's different from most other SQL version control systems (including the ones you've listed) in that it does not require developers to perform a check-in ritual to synchronize version control with what's already committed to the db.
However, features that Sql Historian has in common with the other two systems you mention are: working with TFS, the ability to view older versions of your db objects, and allowing multiple users on the db at the same time.

VB.NET - Local database (SQL Server Compact 3.5 database) data gone after update?

I have created an application in VB.NET (using Microsoft Visual Basic 2010 Express) with a local database (SQL Server Compact 3.5 database) to store data.
I have installed this on the user's computer and added a "search online for updates" functionality (which can be selected when publishing).
Now I have noticed that sometimes when I upload a new version, the data in the database gets cleared (possibly when I opened the database while developing).
This is of course not how I want the system to behave; the data should always remain on the user's computer.
In 'Application Files' the database file (*.sdf) is currently set to 'Data File (Auto)', but I'm unsure of exactly how this works.
Could anyone help me understand how all of this works, and tell me how I can be sure that the data in the user's database will remain, even after an update?
If there is no way to ensure this, is there a way to safely back up the data and reload it?
Thanks in advance!!
Basically, the ClickOnce install overwrites everything in the program folder that is included in your publish. So if you include the .sdf, it will be overwritten when the installer is executed. What you need to do instead is select "Exclude" on the .sdf. This will keep the database in its previous state.
So my recommendation would be to have two different publishes: one that contains the .sdf, used only for first-time installation, and then in all the update publishes you exclude it.
To perform updates on your tables you will have to write the SQL for it in your software: basically, check on startup that all tables have the proper setup, and if they don't, add the missing columns.
Hope this helps.

Lotus Notes ODBC Connection

I need to connect to and send/receive information from an MS SQL Server in my Lotus Notes app using @formulas in real time (I can connect using an agent, but I need to use inline code for this).
The commands themselves seem pretty straightforward, but setting up the configuration seems to be a topic with scarce documentation. Apparently I need to install an ODBC driver. Where would I find that, and do I install it onto the server or onto the workstations that will run this app?
If any Lotus gurus could step me through setting this up, it would be greatly appreciated.
Thanks
You'll need to install the ODBC driver on the workstations that run this app, if the users will be triggering the ODBC connections. If at all possible, I highly suggest setting this up on the server side, and having it run via an agent. That'll save you from a few headaches, including having to maintain the ODBC connections on each workstation and worrying if each workstation has access to the data and server.
You first just want to make sure your ODBC setup is correct. You'll need the appropriate driver, of course, and the connection information. This site has a walkthrough to give you an idea of how to set up an ODBC database connection.
If you have MS Access, you can use it to test querying the ODBC data source. Once you've confirmed the connection works, you'll just refer to the data source name (DSN) in your @DbColumn, @DbLookup, or @DbCommand formulas.
Back to my suggestion on setting this up on the server side, that would mean you'd keep a copy of the data you're querying within the Notes database itself, and then users would be interacting with read-only data in Notes. You could schedule updates regularly on the server side of that read-only data and effectively create a cache of the data in your Notes environment. Then that data would replicate around to other replicas of the database, but remove the trouble of the ODBC connection being needed everywhere.
If you need realtime data, though, that solution is out the window and you'll have to go with a local solution. In that case, you might want to look at the LCConnection class or using an ADODB.Connection from script, as both will allow you to create DSN-less connections to data sources. You'd then save the trouble of requiring ODBC data sources on each workstation, and only have to worry about whether they can access the server from their workstation.
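For clarity, "DSN-less" simply means the driver and server details are embedded in the connection string itself instead of referring to a data source configured on each workstation. A small illustration outside of Notes, in Python with pyodbc; the server, database, and credential values are hypothetical.

```python
import pyodbc

# DSN-based: relies on a data source already configured on this machine.
dsn_conn = pyodbc.connect("DSN=SalesData;UID=report;PWD=secret;")

# DSN-less: everything the driver needs is in the string, so nothing has to
# be set up in the ODBC administrator on each workstation.
dsnless_conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sqlhost.example.com;DATABASE=Sales;UID=report;PWD=secret;"
)
```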
I would add another option to Ken's list. It involves having the server do the queries of the external database (so you are only setting up ODBC on the server and don't have to deal with it on the workstations). You create an agent that is launched on the server using the 'run on server' technique. When the workstation needs to query the external data, the code creates a throw-away document in the database, puts the query criteria into that temporary document, saves the document, then calls the 'run on server' agent, passing a reference to the temporary document. The server launches the agent, reads the criteria from the temporary document, does the query, and writes the results back to the temporary document. The workstation can then read the query results from the temporary document. A scheduled agent can delete the temp docs on a regular basis.
It sounds complicated, and it all has to be done in script, but I've done this in many applications and it is fast, flexible, easy to administer, and gives your applications a lot of power. Note that end users must have the ACL rights to create a document in the db (the temp doc) in order for this to work.
Good luck!