Unzip HTML Email Bodies from a SQL Server database

I am in the process of migrating from a recruitment ATS called Bullhorn, which also acted as our mail server. We have been provided with a backup of our data, which includes all our emails stored in a table - so far so good. However, each email's body is stored in the table as an Image column containing a zipped binary (not so helpful).
I'm pretty stumped as to how to get this stuff out. There is a thread on the Bullhorn support forum dedicated to this, but predictably Bullhorn is not enthusiastic about spending time supporting customers who are migrating away. Here is the thread: http://supportforums.bullhorn.com/viewtopic.php?f=34&t=1672
As I explain in that thread, I have used a library called Chilkat, with which I can successfully read a table row, decompress the Email Body column, and display the raw HTML in the console window. But now I don't really know where to go. I've read into DataSets and TableAdapters, but I'm not sure how to get Chilkat to work on data within such a DataSet.
So, I wonder if anyone can help out with this and provide some guidance?
Or... I've been reading about using SQL Server to store binaries, and it seems it has its own compression - so could I use a CLR stored procedure to give better access to this data?
Ultimately, I'd like to output this either to a CSV file or, using GeniusConnect, to a recreated Outlook file, ending up in Gmail either way.
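To make this concrete, here is roughly the kind of extraction loop I have in mind, re-sketched in plain Java rather than Chilkat (the table and column names are guesses, and it assumes the bodies are raw zlib/deflate streams, which may not match what Chilkat was actually decoding):

import java.io.*;
import java.sql.*;
import java.util.zip.InflaterInputStream;

// Sketch: pull each zipped email body out of the backup table, inflate it,
// and append a row to a CSV file. "dbo.EmailBody" and its columns are
// guesses; substitute the real table/columns from the Bullhorn backup.
// Requires a SQL Server JDBC driver on the classpath.
public class ExportEmailBodies {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:sqlserver://localhost;databaseName=BullhornBackup;integratedSecurity=true";
        try (Connection con = DriverManager.getConnection(url);
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT emailId, body FROM dbo.EmailBody");
             PrintWriter csv = new PrintWriter(new FileWriter("emails.csv"))) {
            while (rs.next()) {
                int id = rs.getInt("emailId");
                byte[] zipped = rs.getBytes("body");  // the Image column arrives as raw bytes
                String html = inflate(zipped);
                // quoted CSV field; embedded quotes are doubled
                csv.println(id + ",\"" + html.replace("\"", "\"\"") + "\"");
            }
        }
    }

    // Assumes a zlib/deflate stream; if the bodies turn out to be .zip
    // archives, use java.util.zip.ZipInputStream instead.
    static String inflate(byte[] zipped) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (InflaterInputStream in = new InflaterInputStream(new ByteArrayInputStream(zipped))) {
            byte[] buf = new byte[8192];
            for (int n; (n = in.read(buf)) > 0; ) out.write(buf, 0, n);
        }
        return out.toString("UTF-8");
    }
}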
Any help will be greatly appreciated.
Thanks!
Chris

Related

SQL Database in GitHub

I am building a Java app that uses an SQLite database to hold most of its data. For the end-user, the database would be almost entirely read-only, with very occasional edits. I'll (theoretically) be displaying/distributing it through my GitHub page, so my question is:
What's the best way to load the database into GitHub? (I'm using IntelliJ with DataGrip.)
I'd prefer to be able to update the database when I commit/push, instead of having to overwrite the whole file. The closest question I can find is "How to include MySQL database schema on GitHub?", but there could potentially be hundreds or thousands of entries, so I can't just rebuild the tables when the user installs the app.
I'm applying for entry-level developer jobs, and this project is going to be my main portfolio piece during job-hunting. I'm trying to make sure it is not only functional but also makes a good impression. Any help is (very) greatly appreciated.
EDIT:
After moving my .db file into the folder connected to GitHub (same level as my src folder), apparently I can now commit/push it with the rest of my files. How do I make sure that the connection from my Java code to the database stays valid once it is loaded onto another user's system? Can I just stick with
connection = DriverManager.getConnection("jdbc:sqlite:mydatabase.db");
or do I need to rework the path?
Upon starting, if your application can't find a corresponding SQLite database file, have it create one, then do an initial load of your tables from CSV, JSON, or XML files.
You can commit these files to Git, since they are text formats.
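A minimal sketch of that startup check (assuming the sqlite-jdbc driver; the words table and data.csv layout are invented examples):

import java.io.*;
import java.nio.file.*;
import java.sql.*;

// On startup: if the SQLite file doesn't exist yet, create it and seed it
// from a CSV shipped alongside the app. Table and CSV layout are examples.
public class SeedDatabase {
    public static void main(String[] args) throws Exception {
        Path dbFile = Paths.get("mydatabase.db");
        boolean firstRun = !Files.exists(dbFile);

        // sqlite-jdbc creates the file automatically on first connect
        try (Connection con = DriverManager.getConnection("jdbc:sqlite:" + dbFile)) {
            if (!firstRun) return;  // database already seeded
            try (Statement st = con.createStatement()) {
                st.execute("CREATE TABLE words (id INTEGER PRIMARY KEY, term TEXT, definition TEXT)");
            }
            try (BufferedReader in = new BufferedReader(new FileReader("data.csv"));
                 PreparedStatement ps = con.prepareStatement(
                         "INSERT INTO words (term, definition) VALUES (?, ?)")) {
                String line;
                while ((line = in.readLine()) != null) {
                    String[] cols = line.split(",", 2);  // naive split: no quoted commas
                    ps.setString(1, cols[0]);
                    ps.setString(2, cols[1]);
                    ps.executeUpdate();
                }
            }
        }
    }
}

As for your edit: jdbc:sqlite:mydatabase.db resolves against the process's working directory, so it stays valid on another user's machine only if the app always runs from a predictable directory; otherwise anchor the path to something like System.getProperty("user.home").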

Get list of files and programs touched by AS400/iSeries service account

I am trying to get a list of the programs (RPG/CL/SQL) and files that a service account on the iSeries has touched. The idea is that with this list we can tie specific permissions (I know this will really complicate things) to the user account, in order to achieve a more secure, application-specific service account. Is there any way to do this, perhaps by running a command that produces a report? Maybe there is an SQL statement?
Please excuse me if my terms are not appropriate, I am still new to the iSeries.
The audit journal will have what you are looking for... if so configured.
http://pic.dhe.ibm.com/infocenter/iseries/v7r1m0/topic/rzarl/rzarlusesecjnl.htm
The newest 7.1 Technology Refresh includes stored procedures that allow easy reading of journals.
https://www.ibm.com/developerworks/community/wikis/home/wiki/IBM%20i%20Technology%20Updates/page/DISPLAY_JOURNAL%20(easier%20searches%20of%20Audit%20Journal)
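For example, pulling the service account's audit entries then becomes a plain SQL query. A sketch over the IBM Toolbox for Java (jt400) JDBC driver; the two-argument call and the result column names are assumptions (older TR levels require the full parameter list), so check the page above:

import java.sql.*;

// Sketch: read audit journal (QAUDJRN) entries for one profile via the
// QSYS2.DISPLAY_JOURNAL table function. System name, credentials, and
// the filter/column names are placeholders; verify them against the
// DISPLAY_JOURNAL documentation.
public class ReadAuditJournal {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:as400://mysystem", "auditor", "secret");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT journal_code, journal_entry_type, object, object_type " +
                 "FROM TABLE(QSYS2.DISPLAY_JOURNAL('QSYS', 'QAUDJRN')) AS j " +
                 "WHERE user_name = 'SVCACCT'")) {
            while (rs.next()) {
                System.out.printf("%s-%s %s %s%n", rs.getString(1),
                        rs.getString(2), rs.getString(3), rs.getString(4));
            }
        }
    }
}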
Charles
Though Charles's answer is probably the one to set up for a thorough report, I wound up doing the following, as suggested by one of my peers.
Please note that my goal, though not properly explained as such above, was to create an application-specific user/service account for a program, to avoid using an account with many privileges and thus gain some security.
1. Go through the source code (in my case classic ASP) and jot down the names of all the procedures used by that program.
2. Create a CL program that outputs the program references to an outfile, then export the file's contents to Excel (a JDBC sketch for that is below, after the CL programs) and massage where necessary.
PGM
/* Dump each program's file and program references into one outfile: */
/* *REPLACE clears the member first, *ADD appends the rest           */
DSPPGMREF PGM(MYLIB/PGM001) OUTPUT(*OUTFILE) OUTFILE(MYLIB/DSPPGMREF) OUTMBR(*FIRST *REPLACE)
DSPPGMREF PGM(MYLIB/PGM002) OUTPUT(*OUTFILE) OUTFILE(MYLIB/DSPPGMREF) OUTMBR(*FIRST *ADD)
ENDPGM
I was told, however, that service program references cannot be displayed with DSPPGMREF, so the following was done for those.
PGM
/* WRKOBJR appears to come from the third-party ABSTRACT product,  */
/* so its library is added to the library list first; the MONMSG   */
/* ignores the error if the library is already on the list.        */
ADDLIBLE LIB(ABSTRACT) POSITION(*LAST)
MONMSG MSGID(CPF0000)
WRKOBJR OBJ(SRVPGM01) OBJTYPE(*SRVPGM) OUTPUT(*OUTFILE) OUTFILE(MYLIB/WRKOBJR) MBROPT(*REPLACE)
WRKOBJR OBJ(SRVPGM02) OBJTYPE(*SRVPGM) OUTPUT(*OUTFILE) OUTFILE(MYLIB/WRKOBJR) MBROPT(*ADD)
WRKOBJR OBJ(SRVPGM03) OBJTYPE(*SRVPGM) OUTPUT(*OUTFILE) OUTFILE(MYLIB/WRKOBJR) MBROPT(*ADD)
ENDPGM
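To get those outfiles into Excel, one option is to pull them over JDBC and write CSV, roughly like this (a sketch using the jt400 driver; any ODBC route works just as well, and system name and credentials are placeholders):

import java.io.*;
import java.sql.*;

// Sketch: dump the DSPPGMREF outfile to a CSV that Excel can open.
// Values are written unquoted, which is fine for the outfile's
// fixed-format character fields.
public class DumpOutfile {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:as400://mysystem", "user", "secret");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT * FROM MYLIB.DSPPGMREF");
             PrintWriter csv = new PrintWriter(new FileWriter("pgmref.csv"))) {
            ResultSetMetaData md = rs.getMetaData();
            int cols = md.getColumnCount();
            for (int i = 1; i <= cols; i++)          // header row
                csv.print(md.getColumnName(i) + (i < cols ? "," : "\n"));
            while (rs.next()) {
                for (int i = 1; i <= cols; i++)
                    csv.print(rs.getString(i) + (i < cols ? "," : "\n"));
            }
        }
    }
}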
Thank you for all your help. I apologize that my answer is a little more specific than my question; this was what I actually wanted to achieve, and I had to generalize to ask the question. I thought I'd post my answer anyway in case it helps someone in the future.

Many users using one program (.exe) that includes datasets

I created a time-recording program in VB.NET with SQL Server as the backend. Users can send their time entries into the database (I used the typed DataSet functionality) and run different queries to get overviews of their working time.
My plan was to put the exe in a folder on our network and let the users make a link to it on their desktops. Every user writes into the same table but can only see his own entries, so there is no possibility that two users manipulate the same records.
During my research I found a warning that "write contentions between the different users" can occur. Is that so in my case?
Does anyone have experience with many users running the same exe, where that exe works with DataSets, and could give me advice on whether this works or what I should do instead?
SQL Server will handle all of your multi-user DB access concerns.
Multiple users accessing the same exe from a network location can work, but it's kind of a hack. Let's say you wanted to update that exe with a few bug fixes: you would have to ensure that all users close the application before you could release the update. To answer your question, though, the application is isolated to each user running it; you won't have any contention issues with CRUD operations on the database due to the network deployment.
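To illustrate why: as long as every statement is keyed by the logged-in user, two running copies of the exe never touch the same rows. Here is a sketch of the underlying SQL pattern, shown as plain JDBC rather than VB.NET (your typed DataSets generate roughly equivalent parameterized commands; the TimeEntry table is invented):

import java.sql.*;

// Each INSERT/SELECT is keyed by the current user, so concurrent
// instances of the exe never update each other's rows; SQL Server
// handles the locking without any app-side coordination.
public class TimeEntryDao {
    private final Connection con;

    public TimeEntryDao(Connection con) { this.con = con; }

    public void addEntry(String user, Date day, double hours) throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(
                "INSERT INTO TimeEntry (username, workday, hours) VALUES (?, ?, ?)")) {
            ps.setString(1, user);
            ps.setDate(2, day);
            ps.setDouble(3, hours);
            ps.executeUpdate();
        }
    }

    public ResultSet entriesFor(String user) throws SQLException {
        PreparedStatement ps = con.prepareStatement(
                "SELECT workday, hours FROM TimeEntry WHERE username = ?");
        ps.setString(1, user);
        return ps.executeQuery();  // each user sees only his own entries
    }
}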
You might consider something other than a copy/paste style publishing of your application. Visual Studio has a few simple tools you can use to publish your application to a central location using ClickOnce deployment.
http://msdn.microsoft.com/en-us/library/31kztyey(v=vs.110).aspx
My solution was to add a simple shutdown timer in the form, which alerts users to save their data before the program closes at 4 AM.
If I need to upgrade, I just replace the .exe on the network.
Ugly and dirty, yes... but it has worked like a charm for the past 2 years.
Good luck!

How to archive data in Lotus Domino DB and keep attachments stored in DAOS

I logged the following question in server fault, and it was suggested I log a dev question related to that question, so here it is.
I have a Lotus Domino DB being archived using the LotusScript method CopyToDatabase. I am about to implement DAOS on the database and need to ensure that attachments are preserved when copied to the archive.
The person who answered the first question suggested that this would not work and that I would lose the attachments. Can anyone advise how to code the archiving (I can only think of using CopyToDatabase) to ensure that the attachment is not lost?
I had assumed Domino would:
move the attachment data from DAOS into Domino when CopyToDatabase was run;
then move the attachment data back into DAOS if the DB it is copied to also has DAOS enabled.
Thanks,
A
It really is an admin question, but the reasoning does involve understanding things from a developer's perspective, so it's pretty reasonable to ask here.
DAOS is 100% invisible to Notes code at all levels. It doesn't matter whether it is LotusScript, Java, or the Notes C API. The code is 100% unaware of DAOS. You actually cannot write special code that deals directly with DAOS objects.
So, your assumption is basically correct. You just didn't mention the actual part where the attachment is deleted, and a couple of other miscellaneous details.
I.e., if the archive database that you are copying to exists on the same server as the source database, and both are enabled for DAOS, then the attachment will remain in DAOS even after you delete it from the source database.
But if the archive database that you are copying to is on a different server or on a user's local hard drive, and the attachment does not also exist in some other DAOS-enabled database on the server, then the attachment will be removed from DAOS. This will occur at the next DAOS purge after you delete the document (or just the attachment) from the source database.
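In other words, an archiving routine needs no DAOS-specific handling at all. A sketch with the Notes Java API (the LotusScript CopyToDatabase behaves identically; server name, file paths, and the one-year cutoff are just examples):

import lotus.domino.*;

// Copy old documents to an archive DB, then delete the originals.
// Deliberately no DAOS-aware code: if both databases are DAOS-enabled on
// the same server, the attachment object simply stays in DAOS (reference
// counted); otherwise Domino rehydrates the attachment into the copy.
public class ArchiveAgent extends AgentBase {
    public void NotesMain() {
        try {
            Session session = getSession();
            Database source = session.getDatabase("MyServer", "mail\\inbox.nsf");
            Database archive = session.getDatabase("MyServer", "archive\\inbox_arch.nsf");

            // documents created more than a year ago
            DocumentCollection docs = source.search(
                    "@Date(@Created) < @Adjust(@Today; -1; 0; 0; 0; 0; 0)");
            Document doc = docs.getFirstDocument();
            while (doc != null) {
                Document next = docs.getNextDocument(doc);
                doc.copyToDatabase(archive);  // attachments come along transparently
                doc.remove(true);             // delete original; DAOS purge reclaims later
                doc = next;
            }
        } catch (NotesException e) {
            e.printStackTrace();
        }
    }
}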

Automating WebTrends analysis

Every week I access server logs processed by WebTrends (for about 7 profiles) and copy ad clickthrough and visitor information into Excel spreadsheets. A lot of it is just accessing certain sections and finding the right title and then copying the unique visitor information.
I tried using WebTrends' built-in query tool, but it is really poorly done (it only offers a drag-and-drop interface rather than a text-based one) and it caps both the number of parameters and the length of queries. As far as I can tell, the tools in WebTrends are not suitable for my purpose of automating the entire web-metrics-gathering process.
I've gotten access to the raw server logs, but it seems redundant to parse that given that they are already being processed by WebTrends.
To me it seems very scriptable, but how would I go about doing that? Is screen-scraping an option?
I use ODBC to query metrics and numbers out of WebTrends. We even fill a scorecard with all the key performance metrics.
It's in German, but maybe the idea helps you: http://www.web-scorecard.net/
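As a sketch of the idea (the DSN and the table/column names are purely hypothetical, and this uses the old JDBC-ODBC bridge that still shipped with JREs of that era; substitute whatever schema your WebTrends profile actually exposes):

import java.sql.*;

// Sketch: pull ad clickthrough / visitor numbers over a WebTrends ODBC
// DSN and emit CSV for Excel. The DSN "WebTrends" and the table/column
// names are hypothetical. The sun.jdbc.odbc bridge was removed in Java 8,
// so this needs an older JRE (or a commercial JDBC-ODBC bridge).
public class WeeklyMetrics {
    public static void main(String[] args) throws Exception {
        Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
        try (Connection con = DriverManager.getConnection("jdbc:odbc:WebTrends");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT PageTitle, UniqueVisitors FROM AdClickthroughs")) {
            while (rs.next()) {
                // one CSV line per report row, ready to paste into Excel
                System.out.println(rs.getString(1) + "," + rs.getLong(2));
            }
        }
    }
}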
Michael
Which version of WebTrends are you using? Unless this is a very old install, there should be options to schedule these reports to be emailed to you, and also to bookmark queries. Let me know which version it is and I can make some recommendations.