How can we get the exact Java business code from a TBO deployed in Documentum?

Currently we need to customize one of our existing TBOs to add some functionality. However, we are unable to find the exact code of the TBO we deployed last time.
Is there any way we can get it back from the repository itself?
Usually we package our code into a single JAR file and add it to a jardef, then deploy the DAR file to the Documentum repository. So is there a way to retrieve that JAR file?

Yes, there is a way. Navigate to
System/Modules/TBO/<your TBO name>
The jardef should be there, stored as a dmc_jar object.
You can also query for dmc_module or dmc_jar objects to find them all, though I think they should all be under the TBO folder level.
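If you prefer to script the retrieval, here is a minimal DFC sketch (assuming an already connected IDfSession; the /tmp output directory is just a placeholder) that lists the dmc_jar objects under that folder and exports their JAR content:

import com.documentum.fc.client.DfQuery;
import com.documentum.fc.client.IDfCollection;
import com.documentum.fc.client.IDfQuery;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.common.DfId;

public class ExportTboJars {
    // A sketch, not a drop-in utility: 'session' must be a valid, connected IDfSession.
    public static void exportJars(IDfSession session) throws Exception {
        IDfQuery query = new DfQuery();
        // List every dmc_jar object stored under the TBO module folder.
        query.setDQL("SELECT r_object_id, object_name FROM dmc_jar "
                + "WHERE FOLDER('/System/Modules/TBO', DESCEND)");
        IDfCollection rows = query.execute(session, IDfQuery.DF_READ_QUERY);
        try {
            while (rows.next()) {
                IDfSysObject jar = (IDfSysObject) session.getObject(
                        new DfId(rows.getString("r_object_id")));
                // getFile writes the primary content (the JAR) to the local filesystem;
                // the target path is a placeholder.
                jar.getFile("/tmp/" + rows.getString("object_name"));
            }
        } finally {
            rows.close();
        }
    }
}

Exporting the object from Documentum Administrator should work just as well if you prefer a UI route.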

Related

FlywayDB ignore sub-folder in migration

I have a situation where I would like to ignore specific folders inside the location where Flyway looks for migration files.
Example
/db/Migration
    2.0-newBase.sql
    /oldScripts
        1.1-base.sql
        1.2-foo.sql
I want to ignore everything inside the 'oldScripts' sub-folder. Is there a flag I can set in the Flyway configs, like ignoreFolder=SOME_FOLDER or scanRecursive=false?
An example of why I would do this: say I have 1000 scripts in my migration folder. If we onboard a new member, instead of having them run the migration on 1000 files, they could just run the one script (the new base) and proceed from there. The alternative would be to never sync those files in the first place, but then people would need to remember to check source control for prior migrations instead of just looking on their local drive.
This is not currently supported directly. You could put both directories at the same level in the hierarchy (without nesting them) and selectively configure flyway.locations to achieve the same thing.
Since Flyway 6.4.0, wildcards are supported in flyway.locations. Examples:
db/**/test
db/release1.*
db/release1.?
More info at https://flywaydb.org/blog/organising-your-migrations
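To make the selective flyway.locations idea concrete, here is a minimal sketch using Flyway's Java API; the JDBC URL, credentials, and the db/migration vs. db/oldScripts layout are assumptions for illustration, not taken from the question:

import org.flywaydb.core.Flyway;

public class MigrateNewBaseOnly {
    public static void main(String[] args) {
        // Only db/migration (holding 2.0-newBase.sql) is listed as a location;
        // db/oldScripts sits beside it rather than inside it, so it is never scanned.
        Flyway flyway = Flyway.configure()
                .dataSource("jdbc:h2:mem:demo", "sa", "")   // placeholder connection details
                .locations("classpath:db/migration")
                .load();
        flyway.migrate();
    }
}

On Flyway 6.4.0 or later, the same .locations(...) call also accepts the wildcard patterns listed above.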

How can I specify an endpoint class with NServiceBus.Host.exe?

I have a solution that I created with the new modeler tools. This gave me two full "endpoints" in a single solution.
Now when I run them through my automated build, I have two DLLs in the same folder that implement IConfigureThisEndpoint.
If I just run NServiceBus.Host.exe /install (to get a Windows service), it gives me the (expected) error that there is more than one class that can be used.
I did some searching and Udi states here: http://tech.groups.yahoo.com/group/nservicebus/message/3937 that "You can specify which class you want loaded and avoid these issues - as the server project in the pub/sub sample shows".
I looked at the pub/sub sample and I can't see how I can specify my class (at least not at the command line).
Is there a way to get around having to modify my build to put the files in separate folders? (Not really an easy task for me.)
Add a config entry to your app settings with the key EndpointConfigurationType and the value being the assembly-qualified name of the type.
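For example, a minimal sketch of that entry in the host's App.config; the type and assembly names below are placeholders for your own endpoint class:

<configuration>
  <appSettings>
    <!-- The value must be the assembly-qualified name of the class implementing
         IConfigureThisEndpoint; "MyEndpoint.EndpointConfig, MyEndpoint" is a placeholder. -->
    <add key="EndpointConfigurationType" value="MyEndpoint.EndpointConfig, MyEndpoint" />
  </appSettings>
</configuration>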

VB.NET - Is there a way to get file info from a file in a Subversion repository? (TortoiseSVN)

I have a utility that checks various file info (size, date, location, etc.) against a manifest to see that it all matches. Would anyone know if there's a way to get the last write date of a file in an SVN repository, using VB.NET? The equivalent of using FileInfo.LastWriteTime.
Any thoughts?
I think you can call svn from the command line with the appropriate parameters to get this information. If that is possible, you can write a class which does that for a given file.
Other than that, there might be some library out there which does this and more, but if what you asked for is the only thing you need from SVN, using a library might be overkill.
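For instance, a sketch of the command-line route; the repository URL below is a placeholder and it assumes a Subversion command-line client is installed:

svn info https://svn.example.com/repo/trunk/docs/manifest.txt

The output includes a "Last Changed Date:" line, which is the repository-side counterpart of LastWriteTime. A small VB.NET wrapper class could launch this via Process.Start and parse that line (svn info also accepts an --xml switch if plain-text parsing feels fragile).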

How to resolve an error after importing a package in Sparx Systems Enterprise Architect

Every time I want to change some properties of a class I get the following error messages:
Microsoft Cursor Engine [-2147217864]
Row cannot be located for updating. Some values may have been changed since it was last read.
ADODB.Recordset [-2146825069]
Operation is not allowed in this context.
How can I solve them?
Even if this question was posted a long time ago:
Now and then this error occurs in my projects, too.
Every time I try to edit specific elements in Enterprise Architect projects I get exactly the same error messages. The only solution to this so far is to delete the element completely and create it again.
#TomO:
When you are importing a package, is this from XMI or are you importing a source code directory?
I import only via XMI file.
What are you using as a repository?
I'm using a PostgreSQL server-based repository, which I access via an ODBC driver.
In your ODBC Data Source Configuration, do you have "Return matched rows instead of affected rows" and "Allow big result sets"?
Could you specify where I can find these options? Perhaps this is outdated, because I can't find any of these options under the Options/Datasource menu in my ODBC driver.
If you are importing from XMI, are you stripping the GUIDs on import? This is always a good idea if you are making a copy of an existing folder in your model, as having two elements with the same GUID is not ideal ;-)
I strip GUIDs when I'm exporting and again when I'm importing XMI files.
I would really appreciate any help concerning this topic.
If possible I might need a little more information. When you are importing a package, is this from XMI or are you importing a source code directory? What are you using as a repository? Given the error, I am assuming it is not a local EAP file.
In your ODBC Data Source Configuration, do you have "Return matched rows instead of affected rows" and "Allow big result sets"?
If you are importing from XMI, are you stripping the GUIDs on import? This is always a good idea if you are making a copy of an existing folder in your model, as having two elements with the same GUID is not ideal ;-)
I have also noticed that you asked this on Apr 14th - sorry it has taken me so long to find your request. I hope this helps!
Are you accessing your EA repository as a cloud repository? If so, you could try switching to accessing the repository as an ODBC data source; that might solve the problem. I think it is a bug in the Sparx Enterprise Architect cloud service.

FileNotFoundException while reading a .sql file present in grails-app/conf/sql

I have an SQL query in my controller action (select * from table....where.....et al). This query is fired when the user submits a page.
It is about 50 lines of code and takes in 3 parameters.
For example: select * from employee where empDate='params.empDt' and empNum=params.empNum order by params.sort asc.
In the above case the query takes in 3 parameters (params.empDt, params.empNum and params.sort) dynamically and is executed.
Since it is a huge query, I am looking at externalising it to an .sql file. The externalised file would then be read in the service and the query executed.
So I created the .sql file at grails-app/conf/sql/read_date.sql.
When I read this file and run the app using run-app, it works fine. I am able to read the file and execute the query.
However, when I create a WAR and deploy it on Tomcat, the application doesn't read the file and I get a FileNotFoundException.
java.io.FileNotFoundException: grails-app/conf/sql/read_date.sql (No such file or directory)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:106)
Any inputs?
Well, you see, the thing is that the grails-app/conf structure does not exist within the WAR. If you use something like 7-Zip to open up the WAR and take a look around in it, you will see that all the classes from conf are now in WEB-INF/classes. The directory structure is only really guaranteed during development. If you create the SQL file in web-app/sql instead, that might solve the problem for you, as the directories under web-app are preserved during WAR generation. web-app contains things like the JavaScript files required by the application, and files placed there are then accessible via http://myhost:port/AppName/sql/my.sql. If I were you, though, I would store the SQL in the database.
Hope this helps.
John
I can't really see the point of putting the actual SQL in an external file. Just put the SQL statements in the service class and things will be fine. There is probably no increase in speed from having the SQL statements in an external file (which needs to be read from disk every time the query is executed). If you need to change the SQL statement often (which would be a good reason to externalise it), you could evaluate the possibility of reading it from the database itself (from a text field, or something similar)...
If you really want to use the external text file approach from a service or controller, the easiest method is probably to use ServletContext.getResource to get a reference to the data packaged in your web app. Since getResource takes a context-relative URL, you can use
getResource("/someSQLstatements.txt") and trust it to work, regardless of the location of your web application on the server's local filesystem or the path it's mapped to by the servlet container. That should work well. (See also: http://www.velocityreviews.com/forums/t131134-specifying-path-for-file-to-be-read-by-servlet-with-tomcat.html)
have fun.
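For what it's worth, here is a minimal sketch of that idea, assuming the file has been moved to web-app/sql/read_date.sql (as suggested above) so it is packaged inside the WAR, and that servletContext refers to the running application's ServletContext:

import java.io.BufferedReader;
import java.io.FileNotFoundException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.stream.Collectors;
import javax.servlet.ServletContext;

public class SqlFileLoader {
    // Reads the packaged SQL file via the ServletContext instead of the filesystem,
    // so the same code works both under run-app and in a WAR deployed on Tomcat.
    public static String loadSql(ServletContext servletContext) throws Exception {
        InputStream in = servletContext.getResourceAsStream("/sql/read_date.sql");
        if (in == null) {
            throw new FileNotFoundException("/sql/read_date.sql is not packaged in the web app");
        }
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(in, StandardCharsets.UTF_8))) {
            return reader.lines().collect(Collectors.joining("\n"));
        }
    }
}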