More than 20 years ago I created a Domino agent that calls Java code and uses JDBC to write information into a DB2 database. Despite its questionable quality, this agent has been running ever since, and it now needs to be migrated to a completely different platform because we are shutting down Domino entirely.
The colleagues who are now in charge of the agent need that JAR in order to decompile it and analyze its behaviour.
Yes, I know the source code should be available in a repository, but it isn't ;-)
Yes, I know that this reverse engineering approach has some risks, but we don't have better options ;-)
Unfortunately we can't find the JAR file anywhere. From what I remember it was uploaded into the NSF and is stored and handled by Domino.
What we already tried:
analyzed the LotusScript code
checked the Java and JAR sections in the NSF
looked into the file system of the server and searched for JAR files whose names hint at DB2 drivers or my DB access classes
listed the contents of all JAR files and searched for the name of the class I know is used by the agent (DatabaseAccessToDb2); a sketch of that scan is shown below
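For reference, this is roughly how we scanned for the class, shown here as a minimal Java sketch; the start directory is just an assumption and the class name is the one used by the agent:

import java.io.File;
import java.util.Enumeration;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

// Walks a directory tree and reports every JAR that contains the class we are after.
public class FindClassInJars {
    public static void main(String[] args) throws Exception {
        // Start in the current directory by default; pass e.g. the Domino program directory as an argument.
        scan(new File(args.length > 0 ? args[0] : "."));
    }

    private static void scan(File dir) throws Exception {
        File[] entries = dir.listFiles();
        if (entries == null) return;
        for (File f : entries) {
            if (f.isDirectory()) {
                scan(f);
            } else if (f.getName().toLowerCase().endsWith(".jar")) {
                try (JarFile jar = new JarFile(f)) {
                    Enumeration<JarEntry> classes = jar.entries();
                    while (classes.hasMoreElements()) {
                        String name = classes.nextElement().getName();
                        if (name.endsWith("DatabaseAccessToDb2.class")) {
                            System.out.println(f.getAbsolutePath() + " contains " + name);
                        }
                    }
                } catch (Exception broken) {
                    // Skip unreadable or corrupt archives instead of aborting the whole scan.
                }
            }
        }
    }
}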
We are on Domino 9; I don't remember which version of Notes I used when I created the agent.
Here are parts of the agent code:
Option Public
Option Explicit
Use "LinkRegistry"
Uselsx "*javacon"
Use "DB2 Connect"
Use "database"
Use "hilfe"
Option Base 0
...
Sub Initialize
...
Call ConnectDB2( opendoc )
...
End Sub
Sub ConnectDB2( opendoc As NotesDocument )
...
Dim DatabaseAccessToDb2 As JavaClass
Set DatabaseAccessToDb2 = jsession.GetClass( "de.my.company.forms.database.DatabaseAccessToDb2" )
Set db2 = DatabaseAccessToDb2.CreateObject
Call db2.enableLog()
db2.doInsert( "..." )
db2.close
...
End Sub
That is the structure of the agent; we looked "everywhere" and couldn't find the compiled Java code.
Where should the JAR file be located? Are we searching for the wrong kind of file (I don't think WAR files are relevant)?
We checked SO posts like this one but they didn't bring us closer to a solution.
With an agent using LS2J as yours does, there are four places that the Java code could be, and code could be used from more than one of these places at once:
As source code in a script library, compiled when the script library is saved. You'll find this under "Src" after opening a script library in Designer;
As a jar file attached to a script library. You'll find this under "Archive" after opening a script library in Designer;
As a jar file in the Domino server file system (only if the agent runs on the server); or
As a jar file in the Notes client file system (only if the agent runs on Notes clients).
You don't mention script libraries in your question. Check those first if you haven't yet done so.
If Domino or Notes uses a custom jar file from the file system, the jar file can be in the jvm\lib or jvm\lib\ext subdirectory of the Domino or Notes installation directory, or in a custom location specified by the JavaUserClasses key in the appropriate Notes.ini file.
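If you can still run Java on that box (for example from a throwaway Java agent or another LS2J call), a small diagnostic like the following can tell you where the classloader actually picks the class up from. This is only a sketch; for code attached inside an NSF the reported location may be a Domino-internal URL or null rather than a plain file path:

// Ask the JVM where DatabaseAccessToDb2 is really loaded from.
public class FindClassOrigin {
    public static void main(String[] args) throws Exception {
        Class<?> c = Class.forName("de.my.company.forms.database.DatabaseAccessToDb2");
        java.security.CodeSource src = c.getProtectionDomain().getCodeSource();
        System.out.println("Loaded from: " + (src != null ? src.getLocation() : "unknown (no code source)"));
        System.out.println("java.class.path = " + System.getProperty("java.class.path"));
    }
}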
Every time I try to configure a remote interpreter, PyCharm asks me to set a sync folder. In my routine I often get the "Cannot find declaration to go to" error, which cannot be solved by invalidating caches, so I have to configure the interpreter again, and this has left redundant folders on my remote machine. Another situation is that I want to create other projects with the same interpreter, where I have to configure the folder mapping for each project to make the interpreter valid.
I do not understand this design. In my opinion, the sync folders should correspond to my local project, and the interpreter should be independent of the projects.
Every time I try to configure a remote interpreter, PyCharm asks me to set a sync folder.
To be able to execute a script on the remote machine, PyCharm has to make sure the script exists there. This is by design, but if you already have a project folder deployed, you can change the suggested paths to the ones you need during the interpreter configuration.
See step 7. https://www.jetbrains.com/help/pycharm/configuring-remote-interpreters-via-ssh.html#ssh
Another situation is that I want to create other projects with the same interpreter, where I have to configure the folder mapping for each project to make the interpreter valid.
Unfortunately, this setup does not work; please vote for
https://youtrack.jetbrains.com/issue/PY-40680/Allow-reusing-a-single-remote-interpreter-in-multiple-project
to increase its priority.
In my home folder in Linux I have several config files that have "rc" as a file name extension:
$ ls -a ~/|pcregrep 'rc$'
.bashrc
.octaverc
.perltidyrc
.screenrc
.vimrc
What does the "rc" in these names mean?
It looks like one of the following:
run commands
resource control
run control
runtime configuration
I've also found a citation:
The ‘rc’ suffix goes back to Unix's grandparent, CTSS. It had a command-script feature called "runcom". Early Unixes used ‘rc’ for the name of the operating system's boot script, as a tribute to CTSS runcom.
Normally it means "runtime configuration" if it's in the config directory; I think of them as resource files. If you see "rc" in a file name it could also be a version marker, i.e. Release Candidate.
Edit: No, I take it back officially... "run commands"
[Unix: from runcom files on the CTSS system 1962-63, via the startup script /etc/rc]
Script file containing startup instructions for an application program (or an entire operating system), usually a text file containing commands of the sort that might have been invoked manually once the system was running but are to be executed automatically each time the system starts up.
Thus, it would seem that the "rc" part stands for "runcom", which I believe can be expanded to "run commands". In fact, this is exactly what the file contains: commands that bash should run.
Quoted from What does “rc” in .bashrc stand for?
I learnt something new! :)
In the context of Unix-like systems, the term rc stands for the phrase "run commands". It is used for any file that contains startup information for a command. It is believed to have originated somewhere in 1965 from a runcom facility from the MIT Compatible Time-Sharing System (CTSS).
Reference: https://en.wikipedia.org/wiki/Run_commands
In the Unix world, "rc" stands for "Run Control".
http://www.catb.org/~esr/writings/taoup/html/ch10s03.html
To understand rc files, it helps to know that Ubuntu boots into several different runlevels. They are 0-6, 0 being "halt", 1 being "single-user", 2 being "multi-user" (the default runlevel), etc. This scheme has since been superseded by Upstart and systemd in most Linux distros, but it is still maintained for backwards compatibility.
Within the /etc directory are several folders labelled rc0.d, rc1.d and so on through rc6.d. These are the directories the init process refers to in order to know which init scripts it should run for that runlevel. They are symbolic links to the system service scripts residing in the /etc/init.d directory.
In the context you are using it, it would appear that you are listing any files with "rc" in the name. The commands in these files set the way the corresponding services/tasks start up and run when they are initialized.
I'm trying to create an updater for my app in VB.NET. No, I do not want to use ClickOnce; it sucks because I have to deal with managing self-signed certs, etc.
I know the code to check for new update files:
http://pastebin.com/ZjYBWABu
I also know the code for specifying where those files download to. The issue is I don't want to just download one .exe; I want to download all the latest build files, which I would have uploaded to my server, taken from the Bin\release folder of my project.
Then when the updater downloads the files to a directory, it would go to the directory of the application, and somehow overwrite/replace all the files that have changed...maybe by using a hash or something?
I do not know how to proceed with this. What I do know is this:
The updater and the main app would have to be separate so that the updater can do the replacing while the app is closed and doesn't hit file-in-use errors. After the updater has finished, it would then start up the main app from the new exe.
Would appreciate help here thank you guys.
I am currently working on a project for which I have to implement a similar approach for updates. The project is lengthy and will take some time to finish, but this is how I have planned to apply the updates:
There will be two main parts of the application: Launcher (the main application program) and Updater (downloads files from the server, replaces the old files with the new ones and then launches the new file).
The application will have the option to manually check for update and also to check for update on startup.
If an update is available, it asks the user to apply the update now or later.
If the user chooses to apply the update now, the Updater application is executed in a separate process and the Launcher application is then closed from within the Launcher's own code. I have the following approaches in mind to launch another program from within the first one and then exit:
Execute the Updater directly from within the Launcher using Process.Start
If that causes problems, then as a second approach launch a command prompt via Process.Start, execute the Updater from the command prompt, close the command prompt and then exit the Launcher.
The Updater application then downloads all the relevant files from the server, and upon completion the old application files are replaced with the new ones.
The update-availability information from the server will include the new Version_No of the application. To provide all files for the update, I will compress (zip) them into a single file named Application.Version_No (as given by the server).
Once the download completes, decompress (unzip) them into a folder with that same Application.Version_No name.
After decompressing, all the files in this Application.Version_No folder are copied to the application's Bin folder.
The new Launcher file is executed in a separate process and the Updater application is closed from within the Updater's own code.
I have NOT yet tried this scenario as currently my focus is on completing the main application, but surely this must work.
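To make the replace-and-relaunch step concrete, here is a rough sketch of the Updater's core loop. It is written in Java purely for illustration (the VB.NET version would use Process.Start, File.Copy and a hash class the same way), and all paths and file names are made up:

import java.nio.file.*;
import java.security.MessageDigest;

// Illustrative Updater: wait for the Launcher to exit, copy every downloaded
// file whose content differs from the installed one, then restart the Launcher.
public class Updater {
    public static void main(String[] args) throws Exception {
        Path downloadDir = Paths.get("update/1.2.0"); // unzipped Application.Version_No folder (made up)
        Path binDir = Paths.get("bin");               // live application folder (made up)

        Thread.sleep(2000); // crude wait so the Launcher has released its files

        try (DirectoryStream<Path> files = Files.newDirectoryStream(downloadDir)) {
            for (Path newFile : files) {
                Path oldFile = binDir.resolve(newFile.getFileName());
                if (!Files.exists(oldFile) || !sameHash(newFile, oldFile)) {
                    Files.copy(newFile, oldFile, StandardCopyOption.REPLACE_EXISTING);
                }
            }
        }

        // Start the freshly updated Launcher and let this process end.
        new ProcessBuilder(binDir.resolve("Launcher.exe").toString()).start();
    }

    private static boolean sameHash(Path a, Path b) throws Exception {
        byte[] hashA = MessageDigest.getInstance("SHA-256").digest(Files.readAllBytes(a));
        byte[] hashB = MessageDigest.getInstance("SHA-256").digest(Files.readAllBytes(b));
        return MessageDigest.isEqual(hashA, hashB);
    }
}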
UPDATE:
Another approach for checking for updates is to use a bootstrap-like application as the program's main entry point. Upon execution it checks for updates; if there are none, the Launcher is executed, otherwise it downloads the files, replaces the old ones and then executes the new / updated Launcher.
For copying / overwriting the files:
One approach is to include in the compressed (zip) file only those files which need to replace the old ones, and then, after the download completes, either decompress them directly into the Bin folder or decompress them to a designated folder and then copy all of them to the Bin folder.
Another approach, which seems somewhat lengthy, is to prepare an additional helper file (XML, text or any other format) for the download.
This helper file contains information about the updated files, like the version number of each file, the location where it is to be copied, etc.
The files may be downloaded to a specific folder named after the new application version.
After downloading all the required files to that folder, process each file mentioned in the helper file: compare the version of every old file with the newly downloaded one, and if the downloaded file is newer, replace the old file in the folder mentioned in the helper file.
As an additional step in between, all the downloads may be verified prior to copying and replacing.
I built an updater that ships with a daemon. Main project here:
https://github.com/UVLabs/dotNetUpdatify
There should be a way to eliminate the use of the daemon; if I figure it out I will update this answer.
If I try to build an application with the application class outside the default package, so that the application file path is /app/AppClass.mxml instead of /AppClass.mxml (as would normally be the case), Flash Builder cannot launch the application for debugging because it looks for the SWF in debug/app/AppClass.swf while the SWF is output to debug/AppClass.swf instead. Changing the output folder to debug/app makes it put the SWF in debug/app, but then it puts the application configuration file "AppClass-app.xml" in debug/app/app, and then that can't be found.
Is there a way to change only the SWF output folder, or the location of the XML configuration file, in the run configuration?
You may use a symbolic link to the created SWF file - http://en.wikipedia.org/wiki/Symbolic_link
For example, on Windows:
cd project/path/bin-debug/package/path/
MKLINK ClassName.swf project/path/bin-debug/ClassName.swf
and it works.
Or you can use a symbolic link for the folder:
cd project/path/bin-debug/package/
MKLINK /D path project/path/bin-debug/
I think I remember this working for me, but it was a long time ago. And yes, it is a known problem; I also recall Adobe people mentioning it as a limitation of FB.
In my Ant script you'll need to make adjustments to reflect your actual file names and directory structure. Also note that this makes it more cumbersome to debug from FB: you'll need to use the debugging target in Ant and then connect the debugger to the running application, so some info (especially from startup) will be lost. The only other way you would be able to debug it, though I've never tried it, is with the command-line tools (I'm not sure of the adl syntax for breakpoints / printing / stack frames, so I don't know how to do it).
Also, for the released application you will probably want to change the signing mechanism.
I have a project where I create a JAR which contains a bunch of classes with main() plus a set of scripts which set the environment to invoke them. Most of those are long running processes which log a lot (~10-20GB).
This means I have a pretty complex log4j.xml file which, being in src/main/resources/, goes into the JAR. When something breaks in the production system, I'd like to modify the logging on the fly for a single run.
So I came up with the idea of having a conf/ directory on the production system and putting it first on the classpath. Then I thought it would be great if M2 would put the config files there (instead of into the JAR). But that would overwrite any manual changes during an automated deployment, which I strongly dislike. I'm also not fond of timestamps and things like that.
So my next idea was this: M2 should leave the config files in the JAR but create copies of them named *.tpl in the conf/ directory. The admin could then copy a template to its base name to override the file in the JAR. The .tpl files would be overwritten on each deployment, but that wouldn't hurt. Admins would have full control over which version of the logging config was active, and they could run a diff to see whether any important changes were made.
Now the question: has anyone seen a plugin which automates this process? That is, one which creates a conf/ directory with all (or a selected subset) of everything in src/main/resources/ and renames the files?
Best practice in Maven for handling config files is to place them in a separate conf directory and pack them into a binary assembly using the assembly plugin. Placing configuration files like log4j.xml in src/main/resources doesn't make sense, since it is not a true application resource but more of a configuration file.
We cope with the overwriting by packing the configuration files with the postfix .def. For example, myapp.properties is packed into the assembly as myapp.properties.def. When the person who uses the assembly unpacks it, it will not overwrite their original files. After unpacking, they simply merge them with an external tool (we use meld on Fedora Core).
I may be missing something, and this doesn't directly answer the question, but did you consider producing a zip assembly of the exploded content of the required artifacts (to be unzipped on the target environment)?
Sounds like you're attacking the problem the wrong way. Why not just run the application with -Dlog4j.configuration=/some/where/my-log4j.properties? If you want, you can add a command line flag to main() which invokes the PropertyConfigurator directly.
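If it helps, here is a minimal sketch of that last suggestion using the log4j 1.x configurators, shown with a system property rather than a positional flag; the log4j.override property name and the paths are made up for the example:

import org.apache.log4j.Logger;
import org.apache.log4j.PropertyConfigurator;
import org.apache.log4j.xml.DOMConfigurator;

public class Main {
    private static final Logger LOG = Logger.getLogger(Main.class);

    public static void main(String[] args) {
        // Hypothetical override, e.g. -Dlog4j.override=/opt/myapp/conf/log4j.xml
        String override = System.getProperty("log4j.override");
        if (override != null) {
            if (override.endsWith(".xml")) {
                DOMConfigurator.configure(override);      // XML config taken from the file system
            } else {
                PropertyConfigurator.configure(override); // properties config taken from the file system
            }
        }
        LOG.info("Logging configured, starting the long-running job");
        // ... rest of the application
    }
}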