OSB sbconfig.jar issue - JVM character length issue

I am working on an Oracle OSB Build job using Jenkins.
The issue I am facing: the sbconfig.jar that gets created does not contain the full service name.
For example, my OSB service name is EmployeeRecordDetailReturnsStorageBOService, but in the sbconfig.jar it is created as EmployeeRecordDetailReturnsStorageBOSer only.
I need the full service name in the sbconfig.jar for further processing of this jar.
I am using the Eclipse-based launcher <java dir="${eclipse.home}" jar="${eclipse.home}/plugins/org.eclipse.equinox.launcher_1.1.0.v20100507.jar" ...> in my Ant build file.
What I have observed is that the java command we are using seems to have a path length limit. It cannot write the full service name into the sbconfig.jar because the Ant build file sits deep under folders like abcd/efgh/ijkl/mnop/qrst/xyz/build.xml. It seems to be a character length issue in Java/the JVM.
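For reference, the export call in the build file looks roughly like the sketch below; the application id, arguments and workspace location shown here are assumptions based on a typical OSB 11g ConfigExport setup rather than the exact build file, and pointing -data at a short workspace such as C:\osb_ws is one quick way to test whether path length is really the cause.

<java dir="${eclipse.home}" fork="true" failonerror="true" maxmemory="768m"
      jar="${eclipse.home}/plugins/org.eclipse.equinox.launcher_1.1.0.v20100507.jar">
  <!-- short workspace path to rule out long-path truncation (assumed location) -->
  <arg line="-data C:\osb_ws"/>
  <arg line="-application com.bea.alsb.core.ConfigExport"/>
  <arg line="-configProject ${config.project}"/>
  <arg line="-configJar ${config.jar}"/>
  <sysproperty key="weblogic.home" value="${weblogic.home}"/>
  <sysproperty key="osb.home" value="${osb.home}"/>
</java>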
Can anybody please let me know how to overcome this problem?

It will not cause any issue when you import the sbconfig.jar back into the sbconsole, so do not worry about the name getting shortened in the sbconfig.jar.
If you face any issue while importing, let us know.

Related

mule-deploy.properties overwritten when I choose Run As "Mule Application" Anypoint Studio July 2014 Release Build Id: 201407311443

A strange thing is happening in a Mule project. I have the application XML, JPC.xml, which normally appears in mule-deploy.properties as follows:
redeployment.enabled=true
encoding=UTF-8
config.resources=JPC.xml
domain=default
When I choose Run As > Mule Application, which kicks off a build in the background prior to the deploy and run, during that time the mule-deploy.properties becomes:
redeployment.enabled=true
encoding=UTF-8
config.resources=
domain=default
And when the application runs it says it is missing the mule-config.xml
What is erasing it?
I think I may have found the root of this issue.
It seems to be a bug related to JDK 1.7.0_45 having to do with XML parsing. See: What's causing these ParseError exceptions when reading off an AWS SQS queue in my Storm cluster
I noticed several errors logged in eclipse/anypoint as:
!ENTRY org.mule.tooling.core 4 0 2014-11-19 14:16:41.081
!MESSAGE Error opening resource measurement_scheduler.xml
!STACK 0
javax.xml.stream.XMLStreamException: ParseError at [row,col]:[1,1]
Message: JAXP00010001: The parser has encountered more than "64000" entity expansions in this document; this is the limit imposed by the JDK.
I also noticed that after restarting Anypoint, I would be able to build with Maven successfully and my mule-deploy.properties file would again have content. Until, at some point after several edits to things in Anypoint, I would again get an mvn build that wiped out the contents of mule-deploy.properties.
I further noticed that once this problem started happening in one project in Anypoint, it would ALSO start happening in ANY project I built in Anypoint, until I restarted Anypoint.
It seems this bug in JDK 1.7.0_45 mistakenly applies the limit in the XML parser to all opened files cumulatively, instead of per file. I suspect this causes Anypoint to not finish parsing all of the XML docs that make up my app, so it couldn't re-create the mule-deploy.properties, leaving it blank.
Upgrading to a newer JDK should fix this.
Another way to work around it is to override this limit for xml parser by adding the following to ${JAVA_HOME}/jre/lib/jaxp.properties:
jdk.xml.entityExpansionLimit=0
jdk.xml.maxGeneralEntitySizeLimit=0
I am not certain that both limits need to be set to work around this; possibly only entityExpansionLimit is needed.
After making this change I am now happily able to use Anypoint again. Beware that using this workaround possibly opens you up to a denial-of-service attack through the XML parser if the same JRE is used for other, less trusted processes.
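If you would rather not touch the JRE-wide jaxp.properties, the same limits can also be raised per application as JVM system properties; for Anypoint Studio that would mean adding lines like the following after -vmargs in AnypointStudio.ini (the file name and exact location depend on your install, so treat this as a sketch):

-vmargs
-Djdk.xml.entityExpansionLimit=0
-Djdk.xml.maxGeneralEntitySizeLimit=0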

Pentaho (5.0.5) Configuring for MySQL

I installed the Pentaho BA suite 5.0.5 on a Linux platform. Everything works well with the PostgreSQL repository.
I referred to this link to configure MySQL as the repository.
But when I try to configure MySQL for Pentaho, I'm facing errors.
These are the changes I made:
1. Edited /home/pentaho/server/biserver-ee/pentaho-solutions/system/quartz/quartz.properties, line 300:
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
2. Edited /home/pentaho/server/biserver-ee/pentaho-solutions/system/hibernate/hibernate-setting.xml, line 15:
system/hibernate/mysql5.hibernate.cfg.xml
3. Edited /home/pentaho/server/biserver-ee/pentaho-solutions/system/applicationContext-spring-security-hibernate.properties:
jdbc.driver=com.mysql.jdbc.Driver
jdbc.url=jdbc:mysql://localhost:3306/hibernate
jdbc.username=hibuser
jdbc.password=password
hibernate.dialect=org.hibernate.dialect.MySQLDialect
4. Copied the audit_sql.xml file from /home/pentaho/server/biserver-ee/pentaho-solutions/system/dialects/mysql5 to /home/pentaho/server/biserver-ee/pentaho-solutions/system
5. Edited the /home/pentaho/server/biserver-ee/pentaho-solutions/system/jackrabbit/repository.xml file and uncommented the SQL configuration
6. Copied the mysql-connector-java-5.1.25-bin.jar file to the tomcat/lib folder
7. Made changes in the /home/pentaho/server/biserver-ee/tomcat/webapps/pentaho/META-INF/context.xml file:
driverClassName="com.mysql.jdbc.Driver" url="jdbc:mysql://localhost:3306/hibernate"
validationQuery="select 1" />
in the jdbc/Hibernate section, and
driverClassName="com.mysql.jdbc.Driver" url="jdbc:mysql://localhost:3306/quartz"
validationQuery="select 1"/>
in the jdbc/Quartz section.
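For reference, the complete Resource entries in context.xml usually look roughly like the following; the factory class and pool attributes here are assumptions based on a stock Pentaho 5.x Tomcat install, not copied from the file above:

<Resource name="jdbc/Hibernate" auth="Container" type="javax.sql.DataSource"
    factory="org.apache.commons.dbcp.BasicDataSourceFactory"
    maxActive="20" maxIdle="5" maxWait="10000"
    username="hibuser" password="password"
    driverClassName="com.mysql.jdbc.Driver" url="jdbc:mysql://localhost:3306/hibernate"
    validationQuery="select 1"/>
<Resource name="jdbc/Quartz" auth="Container" type="javax.sql.DataSource"
    factory="org.apache.commons.dbcp.BasicDataSourceFactory"
    maxActive="20" maxIdle="5" maxWait="10000"
    username="pentaho_user" password="password"
    driverClassName="com.mysql.jdbc.Driver" url="jdbc:mysql://localhost:3306/quartz"
    validationQuery="select 1"/>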
I'm facing these errors:
1. In the pentaho.log file: EmbeddedQuartzSystemListener.ERROR_0001 - Scheduler was not properly initialized at startup
2. In the Pentaho User Console, the loading symbol remains forever without displaying files.
3. I'm not able to save reports.
That might be very simple - the Pentaho platform is not shipped with the MySQL JDBC driver by default due to licensing issues.
So you need to obtain the MySQL JDBC driver manually and put it into the web server's lib folder.
For Tomcat (default installation) that will be the /servers/.../tomcat/lib folder.
For this particular case that may be
/home/pentaho/server/biserver-ee/tomcat/lib
One more piece of advice is to check the full logs under
/home/pentaho/server/biserver-ee/logs
That is the main place where the Pentaho platform keeps its log info.
Hope it will help.
By the way, there is a pretty good Pentaho info portal about configuring the Pentaho platform:
http://infocenter.pentaho.com/help/nav/2_3
Make sure you are pointing to Java 7 in your JAVA_HOME. If your JAVA_HOME points to Java 8, the BI Server will not start correctly.
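For example, on Linux you can point the server at a Java 7 install before starting it (the JDK path below is illustrative):

export JAVA_HOME=/usr/lib/jvm/java-7-oracle
export PATH=$JAVA_HOME/bin:$PATH
./start-pentaho.sh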
This is a Pentaho bug. Create a table in the quartz database of MySQL like this PostgreSQL table:
CREATE TABLE "QRTZ"
(
name character varying(200) NOT NULL,
CONSTRAINT "QRTZ_pkey" PRIMARY KEY (name)
)
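A rough MySQL equivalent (assuming the table and column simply mirror the PostgreSQL definition above) would be:

CREATE TABLE QRTZ (
  name VARCHAR(200) NOT NULL,
  PRIMARY KEY (name)
);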

Deploying ClickOnce application with network share

We have a server PC and other client PCs connected to the server over a LAN. We have an application for our internal use which is developed using VB.Net. I used the steps in http://www.codeproject.com/Articles/17003/ClickOnce-Quick-steps-to-Deploy-Install-and-Update to set up a ClickOnce strategy for updating our application. After publishing, while installing the application, this error comes up.
I searched the details and found this error log:
ERROR SUMMARY
Below is a summary of the errors, details of these errors are listed later in the log.
* Activation of D:\Desktop\publish\Global.application resulted in exception. Following failure messages were detected:
+ Downloading file:///D:/Desktop/publish/Application Files/Global_1_0_0_0/Global.XmlSerializers.dll.deploy did not succeed.
+ Could not find file 'D:\Desktop\publish\Application Files\Global_1_0_0_0\Global.XmlSerializers.dll.deploy'.
+ Could not find file 'D:\Desktop\publish\Application Files\Global_1_0_0_0\Global.XmlSerializers.dll.deploy'.
+ Could not find file 'D:\Desktop\publish\Application Files\Global_1_0_0_0\Global.XmlSerializers.dll.deploy'.
I have checked the Application Files in the publish options and Global.XmlSerializers.dll is included. Does anyone know why this is happening?
Is there any way to copy some extra files to the installation folder (C:\Users\name\AppData\Local\Apps..) when installing or updating a ClickOnce application? We use some outside support files for our application. Is it possible?
Is there any way to pass an argument to a ClickOnce application shortcut, like passing an argument to an .exe shortcut ("\Global.exe" /customer/)?
EDIT:
This is how I published.
I checked by giving a network path for the publishing folder location, but the same error is coming.
Here are the application files included.
As you can see, Global.XmlSerializers.dll is included.
It is looking for a file on the D: drive. It is unlikely that your users all have their D: drive mapped to the same location. When you publish you should use the full UNC path rather than mapped drive letters:
\\Servername\shareddirectory\appdirectory
Does the install work for you?
Well, there is no magic involved in ClickOnce: you can just look into the deployment folder - is the required file there or not?
If not, you need to change the settings in the Publish options for the required file. This message - in my experience - is always a sign that one of the required assemblies has not been published.
In addition, it seems that you published to a mapped network drive instead of publishing to a UNC path. You need to publish to a path following the \\server\name\ scheme.
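For example, in the project's Publish settings both locations would then use the UNC form (the server and share names here are made up):

Publishing Folder Location:   \\fileserver\deploy\GlobalApp\
Installation Folder URL:      \\fileserver\deploy\GlobalApp\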
When I have used the wizard and deployed to a network share, in the Publish Wizard:
Specify the location to publish this application:
UNC path
Click Next.
How will users install the application?
From a UNC path or file share
Specify the UNC path:
The same UNC path (copy-pasted from before)

Weblogic forces recompile of EJBs when migrating from 9.2.1 to 9.2.3

I have a few EJBs compiled with Weblogic's EJBC compliant with Weblogic 9.2.1.
Our customer uses Weblogic 9.2.3.
During server start Weblogic gives the following message:
<BEA-010087> <The EJB deployment named: YYY.jar is being recompiled within the WebLogic Server. Please consult the server logs if there are any errors. It is also possible to run weblogic.appc as a stand-alone tool to generate the required classes. The generated source files will be placed in .....>
Consequently, server start takes 1.5 hours instead of 20 min. The next server start takes exactly the same time, meaning Weblogic does not cache the products of the recompilation. Needless to say, we cannot recompile all our EJBs to 9.2.3 just for this specific customer, so we need an on-site solution.
My questions are:
1. Is there any way of telling Weblogic to leave those EJB jars as they are and avoid the re-compilation during server start?
2. Can I tell Weblogic to cache the recompiled EJBs to avoid prolonged restarts?
Our current workaround was to write a script that does this recompilation manually before the EAR's creation and deployment (by simply running java weblogic.appc <jar-name>), but we would rather avoid this solution being used in production.
I FIXED this problem after spending a great deal of time researching and decompiling some classes. I encountered this when migrating from WebLogic 8 to 10.
By this time you might have understood the pain of dealing with Oracle WebLogic tech support; unfortunately they did not have a server configuration setting to disable this.
You need to do two things.
Step 1. If you open the EJB jar files you can see:
ejb-jar.xml=3435671213
com.mycompany.myejbs.ejb.DummyEJBService=2691629828
weblogic-ejb-jar.xml=3309609440
WLS_RELEASE_BUILD_VERSION_24=10.0.0.0
You see these hash codes for each of your EJB names. Make these hash codes zero, then pack the jar file and deploy it on the server:
com.mycompany.myejbs.ejb.DummyEJBService=0
weblogic-ejb-jar.xml=0
This is just a marker file that weblogic.appc keeps in each EJB jar to trigger the recompilation during server boot-up. I automated the process of setting these hash codes to zero.
These hash codes remain the same for each EJB even if you execute appc more than once; if you add a new EJB class or delete a class, those entries are added to this marker file.
Note 1: how to get this file?
If you open domains/yourdomain/servers/yourServerName/cache/EJBCompilerCache/XXXXXXXXX you will see this file for each EJB. WebLogic makes the hash codes zero after it recompiles.
Note 2:
When you generate EJBs using appc, generate them to an exploded directory using -output C:\myejb instead of C:\myejb.jar. This way you can play around with the marker file.
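A rough sketch of automating the zeroing, assuming the marker file is WL_GENERATED (as another answer below notes) inside the exploded EJB directory produced with -output; the script name and sed approach are illustrative only:

#!/bin/ksh
# zero_markers.ksh - set every "name=hashcode" entry in the appc marker file to "name=0"
MARKER="$1/WL_GENERATED"                     # $1 = exploded EJB directory
sed 's/=[0-9][0-9]*$/=0/' "$MARKER" > "$MARKER.tmp" && mv "$MARKER.tmp" "$MARKER"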
Step 2.
You also need a PATCH from WebLogic. When you install the patch you see a message like
"PATCH CRXXXXXX installed successfully. Eliminate EJB recompilation for appc".
I don't remember the patch number, but you can request it from WebLogic support.
You need both steps to fix the problem; the patch fixes only part of it.
Good luck!
cheers
raj
The marker file in the EJB jars is WL_GENERATED.
Just to update on the solution we went with - eventually we opted to recompile the EJBs once at the customer's site instead of messing with the EJBs' internal markers (we don't want Oracle saying they cannot support problems derived from this scenario).
We created two KSH scripts - the first iterates over all the EJB jars, copies them to a temp dir and then re-compiles them in parallel by running several instances of the second script, which does only one thing: java -Drecompiler=yes -cp $CLASSPATH weblogic.appc $1 (with error handling, of course :)).
This solution reduced compilation time from 70 min to 15 min. After this we re-create the EAR file and redeploy it with the new EJBs. We do this once per several UAT environment creations, so we save quite a lot of time here (55 min X number of envs per drop X number of drops).
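A minimal sketch of such a pair of scripts (the paths, parallelism and error handling are illustrative assumptions, not our production scripts):

#!/bin/ksh
# recompile_all.ksh - copy the EJB jars to a temp dir and recompile them in parallel
TMP=/tmp/ejb_recompile
mkdir -p "$TMP"
cp /path/to/ear/ejbs/*.jar "$TMP"
for jar in "$TMP"/*.jar; do
  ./recompile_one.ksh "$jar" &
done
wait    # block until every background appc run has finished

#!/bin/ksh
# recompile_one.ksh - recompile a single EJB jar with weblogic.appc
java -Drecompiler=yes -cp "$CLASSPATH" weblogic.appc "$1" || print -u2 "appc failed for $1"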

Debug on Symbian

I am using TRK for on-phone debugging.
It works properly for a HelloWorld project, but it shows errors for my project when I start it in phone debug mode:
1) Load failed
2) TrkProtocolPlugin: failed to download specified file to target
(please verify that target path is writable)
If anybody understands what problem I am facing, please help me out.
Thanks in advance.
In your case, I would check:
if the application has the correct privileges assigned (along with an appropriate certificate; see the .mmp sketch after this list)
if the ID of the application is not in conflict with some other application on the device
if the installation package does not contain problematic commands (e.g. copy commands to non-accessible directories)
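A quick way to check the first point is to compare the capabilities declared in the project's .mmp file with what your signing certificate actually allows; the capability names below are examples only:

// In the project .mmp file - must be a subset of what the signing certificate grants
CAPABILITY  ReadUserData  WriteUserData  NetworkServices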
Can you manually install the application on the phone? That is the first test you must perform before even attempting to use TRK.
Also, can your application start, at least to the point of showing a panic? TRK cannot help you if the application cannot even load its DLL dependencies due to, for example, a Platform Security capability mismatch. TRK needs a process to attach to in order to do its job...