Access document files and metadata on Content Server using DQL - documentum

I would like to know whether there is a way to execute DQL queries against a Documentum Content Server other than through the Java DFC API, using languages such as Python or R.
I need to access an old version of the Documentum Content Server, either 5.2 or 6.3. I have seen a REST API for the 7.1 and 7.2 versions of the Content Server.
Thank you very much.

With Documentum 5.2 it's impossible via DFS, and even via DFC if I'm not mistaken.
But with 6.3, DFS could work.
I suggest you use the idql(32).exe or iapi(32).exe command-line programs (*.sh on Linux) on the Content Server machine (<DOCUMENTUM_HOME>\product\<6.3|5.3>\bin\*.*).
You can even copy them to another machine, but you will probably need some extra DLLs as well.
So, wrap this program in Python, passing all the necessary parameters.
Example: Calling an external command in Python
Check their syntax below; a minimal Python sketch follows the syntax summary.
idql <docbase> [-U<user>] [-D<domain>] [-P<password>] [-K<secure flag>] [-R<input file>] [-n] [-l<error level>] [-X] [-z] [-w<display width>]
iapi <docbase> [-U<username>] [-P<password>] [-D<domain>] [-K<secure flag>] [-R<input file>] [-i|-I] [-v] [-l<log file>] [-Q] [-F<failure level>] [-X] [-z] [-Sapi] [-w<column width>]
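For example, a minimal sketch wrapping idql with Python's subprocess module. It assumes idql is on the PATH (or is invoked with its full path from the bin directory above) and that the docbase, user, and password below are placeholders:

import subprocess
import tempfile

def run_dql(docbase, user, password, dql):
    # Write the statement to a temporary script and feed it via -R, so we
    # avoid shell-quoting problems; idql expects "go" after each statement.
    with tempfile.NamedTemporaryFile("w", suffix=".dql", delete=False) as f:
        f.write(dql.strip() + "\ngo\n")
        script = f.name
    cmd = ["idql", docbase, "-U" + user, "-P" + password, "-R" + script]
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError(result.stderr or result.stdout)
    return result.stdout  # raw tabular output; parse it as you need

print(run_dql("MY_DOCBASE", "dmadmin", "secret",
              "select r_object_id, object_name from dm_document"))

The same approach works for iapi with an -R script file if you need to pull content as well as metadata.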
I don't remember whether Documentum 5.3 ships a "dmbasic.exe" program instead of "idql", but I'm pretty sure that "iapi" is there.
If this works, you can not only execute queries but even download a file from the repository using iapi... totally old-school, but it is possible. Search for the "Content Server API Reference
Manual Version 5.3" guide to see what is possible.
I keep a copy with me for emergencies; if you want, I can send it to you by email.
Good luck.

Related

CMake Server: Docs mismatch, clarification needed

I came across this piece of documentation: https://cmake.org/cmake/help/v3.13/manual/cmake-server.7.html
But as it turns out, starting the server does not behave the way the documentation describes. When executing cmake -E server --debug, I get the following message:
CMake Error: No protocol versions defined. Maybe you need --experimental?
When I add the suggested --experimental flag, the server launches, reading from standard input and responding on standard output as expected. However, none of the examples in the documentation suggest the use of this flag, and I do have version 3.13.4 installed.
What is the proper way of starting the CMake server (in particular, how would I specify the protocol version), and why am I asked to use --experimental?
My main goal is to find out whether I can extract values such as targets and their associated information (like CFLAGS) from CMake. Using this server command would certainly let me do that by just writing a small piece of NodeJS code to interact with it.
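Not from the original question, but here is a minimal Python sketch of that interaction, assuming the cmake-server protocol from the linked manual: JSON messages framed by the [== "CMake Server" ==[ / ]== "CMake Server" ==] markers, with the protocol version chosen in the client's handshake message rather than on the command line. The source/build directories and generator are placeholders:

import json
import subprocess

HEADER = '[== "CMake Server" ==['
FOOTER = ']== "CMake Server" ==]'

proc = subprocess.Popen(["cmake", "-E", "server", "--experimental", "--debug"],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

def read_message():
    # Collect the JSON payload printed between the header and footer markers.
    lines = []
    for line in proc.stdout:
        line = line.rstrip("\n")
        if line == HEADER:
            lines = []
        elif line == FOOTER:
            return json.loads("\n".join(lines))
        else:
            lines.append(line)

def send_message(payload):
    proc.stdin.write(HEADER + "\n" + json.dumps(payload) + "\n" + FOOTER + "\n")
    proc.stdin.flush()

hello = read_message()                      # first message should be "hello"
print("supported versions:", hello.get("supportedProtocolVersions"))

send_message({"type": "handshake",
              "protocolVersion": {"major": 1},
              "sourceDirectory": "/path/to/source",   # placeholder
              "buildDirectory": "/path/to/build",     # placeholder
              "generator": "Unix Makefiles"})
print(read_message())                       # reply (or error) to the handshake

After a successful handshake, "configure", "compute" and "codemodel" requests are what would expose targets and their flags.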

Migrating Trac Wiki

I am trying to move Trac data from an old server at my workplace to a new server, but I am stuck on the last step of migrating our wiki data. We use Trac 1.0.1 and are trying to upgrade to Trac 1.2. The part I am stuck on is dumping the wiki. I have been trying to use
trac-admin wiki dump
This works in my tests, but when I try to use it on the actual wiki I get an error saying that the filename is too long. This happens because hierarchical pages produce a filename like this
child1%2child2%2child3%2child4%2child5%2.....
instead of
child1/child2/child3/child4/child5/.....
Since Linux sees this path as a single name, it throws an error saying that the file name is too long. Has anyone run into this problem before and found a solution for it?
I have also tried making a hotcopy of Trac and transferring it, but this doesn't work either. If anyone knows where the wiki pages are stored and how to copy them from our old server to our new server, that would be the optimal solution I am looking for.
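Not an answer from the original thread, but one possible workaround sketched in Python: export each page individually with trac-admin's wiki export and rebuild the hierarchy as real directories, so no single file name gets too long. The environment path is a placeholder, and the parsing of the wiki list output is an assumption you may need to adjust for your Trac version:

import os
import subprocess

TRAC_ENV = "/path/to/trac/env"   # placeholder
OUT_DIR = "wiki-export"

listing = subprocess.run(["trac-admin", TRAC_ENV, "wiki", "list"],
                         capture_output=True, text=True, check=True).stdout

pages = []
for line in listing.splitlines():
    cols = line.split()
    # Skip the header and separator rows of the table; keep the page names.
    if cols and not set(cols[0]) <= set("-=") and cols[0] != "Title":
        pages.append(cols[0])

for page in pages:
    target = os.path.join(OUT_DIR, *page.split("/")) + ".txt"
    os.makedirs(os.path.dirname(target), exist_ok=True)
    subprocess.run(["trac-admin", TRAC_ENV, "wiki", "export", page, target],
                   check=True)

On the new server the pages could then be loaded back one at a time with trac-admin <env> wiki import <page> <file>.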

Maxmind .MMDB to .DAT? mod_maxminddb in Debian Repo?

There is already another thread about this which isn't really answered:
How to Convert a Maxmind .MMDB to .DAT?
There, Greg Oschwald, who works at MaxMind, said that "The Legacy GeoIP builds (.dat) are not going away in the near future". Yeah, but the future is now, and they are going away on 1 April 2018, which is in a month ;) I really liked my current Apache configuration (Debian) using mod_geoip2 and the GeoIP .dat databases. Works like a charm. So it's kind of annoying to change everything now, especially because there is no native Apache module like mod_geoip2 to use; instead I have to build a module from source, install libraries and mess with my whole Apache config to enable apxs. And I don't get automatic updates of the new module from the repository, but have to update it manually with new libraries and new tarballs when they are available. This is not very convenient.
Well, I could download the CSV release, add the IP ranges with MaxMind's provided CSV converter (https://github.com/maxmind/geoip2-csv-converter/releases), write a script which transforms the bunch of CSV files into a single "legacy-like" CSV file, and convert this with the Debian program (https://github.com/dankamongmen/sprezzos-world/blob/master/packaging/geoip/debian/src/geoip-csv-to-dat.cpp) to a .dat file. >Maybe< this could work for some time, but it's very ugly. Isn't there a better solution?
If not: will there be a native Apache module in the Debian repository which removes the "build/install it yourself" part? Then I would have no issue with the new format.
Greetings daily
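Not from the original thread, but here is a rough Python sketch of the CSV-merging step described above, using the standard ipaddress module to compute start/end addresses directly from the CIDR blocks (so the separate csv-converter step is not needed). The file names and column names (network, geoname_id, country_iso_code, country_name) are assumptions based on the GeoIP2 Country CSV layout; check them against your download:

import csv
import ipaddress

locations = {}
with open("GeoLite2-Country-Locations-en.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        locations[row["geoname_id"]] = (row["country_iso_code"], row["country_name"])

with open("GeoLite2-Country-Blocks-IPv4.csv", newline="", encoding="utf-8") as blocks, \
     open("legacy-like.csv", "w", newline="", encoding="utf-8") as out:
    writer = csv.writer(out, quoting=csv.QUOTE_ALL)
    for row in csv.DictReader(blocks):
        net = ipaddress.ip_network(row["network"])
        gid = row["geoname_id"] or row["registered_country_geoname_id"]
        code, name = locations.get(gid, ("", ""))
        # Legacy country CSV layout: start IP, end IP, start int, end int, code, name
        writer.writerow([str(net[0]), str(net[-1]), int(net[0]), int(net[-1]),
                         code, name])

The result would still have to go through geoip-csv-to-dat to produce the .dat file, and none of this removes the maintenance burden the question complains about.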

Apache/Perl Cannot Find MDAC without CommonProgramFiles(x86)

I am having a problem using Apache/Perl to access Excel files through Microsoft Data Access Components (MDAC). Somehow I must set the "CommonProgramFiles(x86)" system environment variable in order to get this to work. Otherwise, I keep getting this error message:
System.InvalidOperationException: The .Net Framework Data Providers
require Microsoft Data Access Components(MDAC). Please install
Microsoft Data Access Components(MDAC) version 2.6 or later. --->
System.IO.FileNotFoundException: Retrieving the COM class factory for
component with CLSID {2206CDB2-19C1-11D1-89E0-00C04FD7A829} failed due
to the following error: 8007007e.
The server configuration is:
Windows Server 2008 R2, 64-bit
Microsoft Access Database Engine 2010 installed
Apache 2.2.25 (32-bit)
Perl 5.12.3 (also 32-bit)
My Perl CGI script calls my C# program (which is built for "Any CPU").
The C# program uses MDAC to open and read Excel files (it does not try to launch Excel, it only reads data from the Excel files).
I have verified that the server has the latest MDAC available in these 2 folders:
C:\Program Files\Common Files\System\Ole DB
C:\Program Files (x86)\Common Files\System\Ole DB
I have also checked the registry entries and they look fine. Anyway, I don't have any problem running my C# program directly at the command prompt (it can use MDAC to access Excel files). I only have the problem when Apache/Perl runs my Perl CGI script that calls my C# program (that is when I get that MDAC error).
I can work around this problem by specifying CommonProgramFiles(x86) in my Perl CGI script, like this:
$ENV{ "CommonProgramFiles(x86)" } = "C:\\Program Files (x86)\\Common Files";
I have this question:
Why do I have this problem? Why does setting that CommonProgramFiles(x86) system environment variable work around it? Why is that system environment variable empty before I set it? Does this have to do with the fact that I am running 32-bit Apache/Perl on a 64-bit Windows operating system?
Please help me to understand this issue. Thanks in advance.
(The original version of this post had a question about a second problem. It turned out that problem had to do with an extra double quote in a string. I fixed that, and the problem has gone away. That's why I have removed the second question from the post.)
Jay Chan
I did some more research, and the issue is the following: until release 2.4.9, the Apache startup routines mapped "commonprogramfiles(x86)" to "commonprogramfiles_x86_", and that variable does not exist in the environment unless you create it... I have not tested it, but creating that environment variable and pointing it to the same location as commonprogramfiles(x86) would probably fix the issue too.
Since the compiled Apache distributions only go up to version 2.4.46 as we speak, they don't have the fix that allows parentheses in environment variable names. That's why you still need the PassEnv directive to ensure that Apache passes the correct values to 32-bit CGI scripts.
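For illustration, the PassEnv route mentioned above could look like the following httpd.conf fragment (a sketch, not a tested configuration; mod_env must be loaded, and whether the parenthesised name survives depends on the Apache build, as discussed):

# Pass the system variable through to CGI scripts (mod_env)
PassEnv "CommonProgramFiles(x86)"

# Or set it explicitly if the startup environment does not contain it
SetEnv "CommonProgramFiles(x86)" "C:/Program Files (x86)/Common Files"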
The following post has some useful details about this:
https://bz.apache.org/bugzilla/show_bug.cgi?id=46751
I used to have the same problem with Apache 2.4 and compiled dBase apps using 32-bit ADO, since dBase is 32-bit. Something has recently changed, possibly with Windows 10 2004/20H2. I needed this fix in July-August 2020, but now I don't; the environment variable already exists. Since my Apache version is dated April 2020, that cannot be the reason for the change.
I tried to do some research about this, but all I could find is that those environment variables are system ones that have existed at least since 2017, so why I needed to set this variable is a mystery to me. I would like to understand this, so if you find something, post a follow-up...
https://learn.microsoft.com/en-us/windows/deployment/usmt/usmt-recognized-environment-variables

Debugging Solaris OS crash

I have terminal access to a remote Solaris machine which crashes occasionally, and I have to ask someone with physical access to boot the machine back up, which it does successfully. I would like to know which tools/files I should look at to find out the cause of the crash, so that I can make the necessary configuration changes and avoid it in the future.
Which tools you can use will depend on what version of Solaris you are running and what the actual problem is. The first thing to do is check the system console (which it sounds like you don't have access to) and the /var/adm/messages file. This file is updated with system messages, and the newest appear at the end.
Next, you can look for a system core file. If a core file is created, it would be in /var/crash/hostname where "hostname" is the name of the machine.
If you have an actual core file in the /var/crash/hostname directory, this set of commands will give you a good string to search Google with:
# cd /var/crash/hostname
Replace "hostname" with the hostname of your machine.
# mdb -k unix.0 vmcore.0
If you have multiple core files, select the most recent version.
> ::status
This should give you a panic message; cut and paste that into Google and see what you can find.
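Not part of the original answer, but as a small convenience, a Python sketch that picks the newest unix.N / vmcore.N pair (the naming convention used above) and prints the matching mdb command; the crash directory is assumed to be /var/crash/<hostname> as described:

import glob
import os
import re
import socket

crash_dir = os.path.join("/var/crash", socket.gethostname())
# Collect the numeric suffixes of the vmcore.N files savecore has written.
numbers = sorted(int(m.group(1))
                 for f in glob.glob(os.path.join(crash_dir, "vmcore.*"))
                 if (m := re.fullmatch(r"vmcore\.(\d+)", os.path.basename(f))))
if numbers:
    n = numbers[-1]
    print("cd %s && mdb -k unix.%d vmcore.%d   # then run ::status" % (crash_dir, n, n))
else:
    print("no vmcore files found in", crash_dir)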
For more on core file analysis, read this:
http://cuddletech.com/blog/pivot/entry.php?id=965