Scenario: I have an IBM Domino web application (.NSF) that contains several divisions (refer to the image). I want to migrate the content (such as the blogs, main content, and articles) to my WebSphere Portal. What is the first thing to do?
My understanding of migration is database-to-database migration, but I don't know where to find the database for my content, since the database in a Domino application is built in. Thanks in advance!
Refer to this image: http://postimg.org/image/irxsjdk8p/
The database is the NSF file, although the information might be aggregated from several NSF files, so you'll have to do some analysis. The first thing to do is to use the Notes client to identify the documents that contain the content you want to migrate, and check their properties to determine which NSF file they are stored in and which form they are based on. Then use Domino Designer to open the NSF file (or files, possibly) and analyze the form to determine which items contain the values that need to be migrated.
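If there are many documents, that inventory step can be scripted. Here is a minimal C# sketch using the Notes COM API (this assumes a locally installed Notes client; the server name and NSF file name are placeholders):

    using System;

    class NsfInventory
    {
        static void Main()
        {
            // Late-bound Notes COM session (requires a local Notes client).
            var sessionType = Type.GetTypeFromProgID("Lotus.NotesSession");
            dynamic session = Activator.CreateInstance(sessionType);
            session.Initialize("");  // or pass the Notes password

            // Placeholders: point these at the real server and NSF file.
            dynamic db = session.GetDatabase("Your/Server", "webapp.nsf", false);

            dynamic docs = db.AllDocuments;
            dynamic doc = docs.GetFirstDocument();
            while (doc != null)
            {
                // The "Form" item tells you which form each document is based on.
                Console.WriteLine((string)doc.GetItemValue("Form")[0]);
                doc = docs.GetNextDocument(doc);
            }
        }
    }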
I'm building an MS Project VSTO tool (written in C#) that in many instances needs to either read or write data from a field in MS Project. Since I don't always know which field will contain the data I need, I often need to allow the user to select the field they want. Getting all the basic fields is easy; my issue arises if a user is in a Project Server environment and using enterprise fields. So my question is twofold:
Is there a way to check if the user is in a Project Server environment?
Is there a way to easily get all of the custom enterprise fields that are being used in MS Project? I'd like to be able to capture these fields in a collection like a list or array.
Is there a way to check if the user is in a Project Server environment?
Look at the Profiles collection to see if there is a Project Server profile, and check its ConnectionState to see whether it's connected to a Project Server.
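For example, a rough sketch from a C# VSTO add-in (the exact PjConnectionState member name to compare against may vary by interop version, so treat that check as an assumption to verify):

    using MSProject = Microsoft.Office.Interop.MSProject;

    static bool IsProjectServerEnvironment(MSProject.Application app)
    {
        foreach (MSProject.Profile profile in app.Profiles)
        {
            // ConnectionState reports whether this profile is connected to a
            // Project Server instance; compare against the "connected" member
            // of the PjConnectionState enum in your interop assembly.
            if (profile.ConnectionState.ToString().Contains("Connected"))
                return true;
        }
        return false;
    }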
Is there a way to easily get all of the custom enterprise fields that are being used in MS Project? I'd like to be able to capture these fields in a collection like a list or array.
If you have access to the Project Server, take a look at this page: Accessing Project Online enterprise custom fields. Without access to the server, I suggest:
Loop through all tables and their fields to find the enterprise ones (see the sketch after this list).
Allow the user to enter the name of enterprise fields and store that information for future use so that it's a one-time 'setup' for the user.
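For the first suggestion, one possible C# sketch is to walk the PjField constants from the interop assembly and keep the enterprise ones; filtering on the member name containing "Enterprise" is an assumption to verify against your interop version:

    using System;
    using System.Collections.Generic;
    using MSProject = Microsoft.Office.Interop.MSProject;

    static List<string> GetEnterpriseFieldNames(MSProject.Application app)
    {
        var names = new List<string>();
        foreach (MSProject.PjField field in Enum.GetValues(typeof(MSProject.PjField)))
        {
            if (!field.ToString().Contains("Enterprise"))
                continue;  // keep only the pj*Enterprise* constants
            try
            {
                // FieldConstantToFieldName returns the display name, which
                // reflects any renamed enterprise custom field.
                names.Add(app.FieldConstantToFieldName(field));
            }
            catch
            {
                // Some constants are not valid in every project/server setup.
            }
        }
        return names;
    }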
I have a very rudimentary understanding of Microsoft Access and VBA code.
On my work desktop, I have Access from Microsoft Office Professional Plus 2013.
I've been tasked to create a MS Access application with an Access DB.
I started developing an MS Access application with forms, and the corresponding DB.
I'm using VBA event handlers (Event Procedures) for the UI control buttons.
I wanted to create a common configuration settings area for said application (like ASP.NET web applications have web.config or app.config files).
I failed to find anything similar for MS Access application development.
Could someone please explain how to implement a design pattern for a common configuration settings area in MS Access that is modular, reusable, clear, and concise?
As noted, a great way to do this is to simply create a settings table in the front end. It is assumed that you will split your database into two parts: the code/forms etc. is the so-called front end, and then you have the back end part (the database; it can be an accDB file, or it can be, say, SQL Server).
So the typical update and deploy of your software will be:
Re-link your tables from the test database to the actual live production database.
Compile your accDB into an accDE.
Deploy this new updated "next" version of your software to all the desktops.
So, since any change or addition to settings will be in the new front end, any application-wide settings you have will roll out with your update.
It often depends on the user base. In our case we had multiple customer sites running our software, so using a local table would not suffice, since things like path names, connection strings to the database, etc. are customer specific. So, in this case we moved the settings table out to a text file (setup.ini) that is external to the program and assumed to be deployed in the same folder as the front end. On startup we use the Windows API to read ".ini" files.
So, both ideas (external setup.ini) or a local table in the front end are rather good choices from a development cycle point of view.
So once you are down the road in developing your application and the table/data structure changes are down to a dull roar, it is time to split your application (use the built-in split wizard for this). I will say that even for my .NET applications I still often use an external setup.ini file for settings, since once again, with multiple customer sites it is not practical to have customer-specific settings inside the application as opposed to an external settings file.
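For what it's worth, the same kernel32 call works from .NET as well; in Access/VBA you would use an equivalent Declare statement. A minimal C# sketch (the section and key names in the usage line are placeholders):

    using System.Runtime.InteropServices;
    using System.Text;

    static class IniFile
    {
        // Classic Windows API for reading .ini files.
        [DllImport("kernel32", CharSet = CharSet.Unicode)]
        static extern int GetPrivateProfileString(
            string section, string key, string defaultValue,
            StringBuilder result, int size, string filePath);

        public static string Read(string section, string key, string path)
        {
            var buffer = new StringBuilder(512);
            GetPrivateProfileString(section, key, "", buffer, buffer.Capacity, path);
            return buffer.ToString();
        }
    }

    // Usage (placeholder names):
    //   string cs = IniFile.Read("Database", "ConnectionString", @".\setup.ini");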
I have a web solution with about 1000 clients. Nowadays they have access to reports that I provide as PDFs: an ASP.NET script executes stored procedures in the database, and with PDFReactor I create a PDF. This happens online and, as you may already be guessing, it is unmanageable for large data sets.
I was thinking of using a reporting solution such as Power BI, Tableau or... Qlik. In my investigation I have found that Qlik is the most complete solution for what I want. Nevertheless, I still have some questions regarding the application architecture and how I can integrate Qlik with my application.
The first question is: how do I manage my users? I have a custom authentication provider and user management inside my application. What is the best way to integrate this with Qlik? Do I manually configure all the clients on the Qlik server as well? Are there endpoints to do it automatically?
The second question is: how do I manage what a client can see inside a Qlik document (QVW)? I want to have several unique QVDs from which several QVWs get their data. But I don't want to create a QVW for each client! Can I filter a QVW based on user authentication?
I would like my clients to be able to see the reports dynamically inside my web app. But my conclusion so far is that the web view of Qlik is static, and that for more dynamism my users would have to use QlikView Desktop. Correct?
And this leads me to the final question: would my clients have to pay for licenses, or is everything on my side and my responsibility?
Thanks for your help!
Regards,
David
Out of the box, QlikView Server (see 4.2) uses NTFS authentication (local Windows users or Active Directory). There is also something called DMS, ticket-based authentication, which allows non-Windows users to be authorized to access apps. A bit more info here
QlikView has something called Section Access, which works at the file (qvw) level. Section Access is part of the QV script. Basically, you create a link between a user name and a data field. The main file will contain all the data, but when a user opens the document he/she can see only the data that is relevant to that username. More info here and here
For example:
user1 -> can see only UK data
user2 -> can see UK and USA data
user3 -> can see all the data
... etc
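In load-script terms, that mapping usually looks roughly like the sketch below (REGION and the account names are placeholders, and the reduction field must also exist, in upper case, in your data model):

    SECTION ACCESS;
    LOAD * INLINE [
        ACCESS, NTNAME,       REGION
        USER,   DOMAIN\user1, UK
        USER,   DOMAIN\user2, UK
        USER,   DOMAIN\user2, USA
        ADMIN,  DOMAIN\user3, *
    ];
    SECTION APPLICATION;

Note that the * wildcard means "all values listed in this table", not all values in the field, so test the reduction carefully before deploying.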
The web app and the desktop app are the same. The web client provides the same level of interaction as the desktop, so you can slice and dice your data as you want.
4.1 QV Desktop (Personal Edition) is free, but you can only open up to 5 qvw files that are not "yours" (not created with the current instance of the Personal Edition), and clients need to install it locally on their computers.
4.2 QV Server is not free. With QV Server, all clients access the QV apps via the browser. You (as administrator) buy the licenses from a Qlik partner, and it's up to you whether you want to charge your customers or not (and how much).
I'm creating cubes (XML schemas) via Schema Workbench or the IVY schema editor.
When I publish them, I would like to know where the schemas (mondrian.xml files) are actually saved. What is the location of these files?
Thanks,
Which version of Pentaho BA server are you using? Pre or Post 5.0?
Pre 5.0: you choose the file path when publishing. The path is under your ${BISERVER}/pentaho-solutions.
5.0 and beyond: there's no physical file, it's stored in Pentaho's Jackrabbit repository only.
If you create any Analysis in Mondrian 5.*, and you have a test server and prod server, don't export Mondrian schemas from one to another. For a mystical reason you won't be able to get rid of them later.
This was my case when I started searching for pentaho-solutions/system/olap/datasources.xml in order to delete the problematic Mondrian schema. That file just doesn't exist anymore.
All data is now saved with the help of Jackrabbit. Jackrabbit stores your Mondrian schemas, together with all other reports and analyses, in the database (the path is given in the Jackrabbit preference file). But in the database you can only see their IDs. So there is no way to get rid of a single object: you keep everything or delete everything by truncating the table. The main problem is that the same table holds not only schemas but also all other reports that you have uploaded to the server.
We are trying to track down a particular ABRA alert which we believe is attached to some sort of custom code that generates MS Access *.snp files. We believe we will have a better chance of tracking down the alert by looking at the logs of the ABRA alerts and seeing which ones ran during the timestamps of the generated files.
Here is an image of the Abra Alert main window; there are many, many Abra alerts listed, and each has quite a few log entries associated with it.
The log entries from various alerts can be sorted and filtered, but they cannot be filtered within a specific date/time range.
So I was wondering if there is a way to query the log file data directly. From what I understand, Abra Alerts 5.1 uses a FoxPro database (Sage Abra Suite uses Visual FoxPro 09.00.00). My thought was that perhaps it could be connected to using ODBC, for the purpose of querying a specific date range.
You can connect to a FoxPro database using several different types of drivers, including OLE DB and ODBC. You will need to download the drivers specific for FoxPro.
Microsoft states that they no longer support the Visual FoxPro ODBC driver (although I have never found any problems with it). But they do support the OLE DB driver...
http://www.microsoft.com/en-us/download/details.aspx?id=14839
If interested, here is an article which discusses why they have chosen to stop support of ODBC... http://support.microsoft.com/kb/277772
There are many tools out there that will allow you to view and query the FoxPro tables. Basically any tools that can connect using an OLE DB driver can be used. I use Visual Studio. Here is another that I have not used personally but I have heard good things about it... http://www.ultradiff.com/
The Abra Alerts log database is actually an Access database called DASLOGDB.MDB. That can be monitored using the Jet driver.
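That means one option is to query the log database directly from .NET with the Jet OLE DB provider. A hedged C# sketch follows; the path, table name (LogEntries) and column names (AlertName, RunTime) are hypothetical placeholders, so inspect DASLOGDB.MDB for the real schema first:

    using System;
    using System.Data.OleDb;

    class LogQuery
    {
        static void Main()
        {
            // Jet is 32-bit only, so compile as x86. The path is a placeholder.
            var cs = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\Abra\Data\DASLOGDB.MDB";
            using (var conn = new OleDbConnection(cs))
            using (var cmd = new OleDbCommand(
                "SELECT AlertName, RunTime FROM LogEntries " +
                "WHERE RunTime BETWEEN ? AND ? ORDER BY RunTime", conn))
            {
                // OleDb parameters are positional, not named.
                cmd.Parameters.AddWithValue("?", new DateTime(2015, 6, 1));
                cmd.Parameters.AddWithValue("?", new DateTime(2015, 6, 2));
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                    while (reader.Read())
                        Console.WriteLine(reader[0] + "  " + reader[1]);
            }
        }
    }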
The .snp files you are seeing are actually the snapshot files where the monitor stores its results. They are binary files and cannot be viewed directly or through ODBC/OLE DB. If you are looking for which process is associated with a .snp file, just search through the Processes folder for the name of the .snp file within the text of the .tsk files. The .tsk file that contains the .snp filename will also have the name of the process.
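That text search is simple to script too; a small sketch (the folder path argument is whatever your Processes folder happens to be):

    using System;
    using System.IO;

    // Find the .tsk process file(s) that mention a given .snp file name.
    static void FindProcessForSnapshot(string processesFolder, string snpName)
    {
        foreach (var tsk in Directory.EnumerateFiles(processesFolder, "*.tsk"))
        {
            if (File.ReadAllText(tsk).IndexOf(snpName, StringComparison.OrdinalIgnoreCase) >= 0)
                Console.WriteLine(tsk);  // this .tsk also names the owning process
        }
    }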
You should find the log database in the Data folder, either in the installation location or in the program data folders. Or, if you look at the system DSN called DAS 4.0 Log Database, you can find the path.
If you go to the View-Options menu and look at the Log tab, you can see the current log database definition.