Jobs and transformations are stored in a database repository (MySQL).
Carte is configured as follows:
<slave_config>
  <slaveserver>
    <name>master1</name>
    <hostname>localhost</hostname>
    <port>8080</port>
    <master>Y</master>
  </slaveserver>
  <max_log_lines>10000</max_log_lines>
  <max_log_timeout_minutes>1000</max_log_timeout_minutes>
  <object_timeout_minutes>5</object_timeout_minutes>
  <repository>
    <name>10-3-20-66-repository</name>
    <username>admin</username>
    <password>love1314</password>
  </repository>
</slave_config>
Carte starts up with this configuration and the first run works fine. But after about a day, the remote call still returns success, yet nothing actually runs.
HttpClient client = new HttpClient();
client.getState().setCredentials(AuthScope.ANY,
        new UsernamePasswordCredentials(carteUserName, cartePassword));
PostMethod post = new PostMethod(carteRunJobUrl + "/?job=" + jobpath);
post.setDoAuthentication(true);
try {
    return client.executeMethod(post) + "";
} catch (Exception e) {
    throw e;
} finally {
    post.releaseConnection();
}
Can Carte not be connected to a database repository?
It works perfectly fine with a file-based repository, and with no repository at all. I did the testing on a Windows machine. Let me know if you need any further input on this.
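One way to narrow this down is to ask Carte for its status page with the same credentials and check whether the server and the job are still registered after a day. Below is a minimal diagnostic sketch using the same Commons HttpClient 3.x API and the standard /kettle/status servlet; carteBaseUrl is a hypothetical variable pointing at the Carte root URL.
// Diagnostic sketch: fetch Carte's status page (XML form) with the same credentials.
// carteBaseUrl is hypothetical, e.g. "http://localhost:8080".
HttpClient client = new HttpClient();
client.getState().setCredentials(AuthScope.ANY,
        new UsernamePasswordCredentials(carteUserName, cartePassword));
GetMethod status = new GetMethod(carteBaseUrl + "/kettle/status/?xml=Y");
status.setDoAuthentication(true);
try {
    int code = client.executeMethod(status);
    System.out.println("HTTP " + code);
    System.out.println(status.getResponseBodyAsString());
} finally {
    status.releaseConnection();
}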
I am developing a web app in which batch programs need to run at specific times, and I used the Quartz library to schedule the jobs. The web app is deployed on WebSphere 8.5.5 and works fine, accessing the tables through datasources (the datasource used in the code is java:comp/env/jdbc/db_datasource). The job is also triggered at the configured times.
I am getting an error when the scheduled job makes a DB connection through the datasource. The error is:
javax.naming.ConfigurationException: A JNDI operation on a "java:" name cannot be completed because the server runtime is not able to associate the operation's thread with any J2EE application component. This condition can occur when the JNDI client using the "java:" name is not executed on the thread of a server application request. Make sure that a J2EE application does not execute JNDI operations on "java:" names within static code blocks or in threads created by that J2EE application. Such code does not necessarily run on the thread of a server application request and therefore is not supported by JNDI operations on "java:" names. [Root exception is javax.naming.NameNotFoundException: Name comp/env/jdbc not found in context "java:".]
at com.ibm.ws.naming.java.javaURLContextImpl.throwExceptionIfDefaultJavaNS(javaURLContextImpl.java:522)
at com.ibm.ws.naming.java.javaURLContextImpl.throwConfigurationExceptionWithDefaultJavaNS(javaURLContextImpl.java:552)
at com.ibm.ws.naming.java.javaURLContextImpl.lookupExt(javaURLContextImpl.java:481)
at com.ibm.ws.naming.java.javaURLContextRoot.lookupExt(javaURLContextRoot.java:485)
at com.ibm.ws.naming.java.javaURLContextRoot.lookup(javaURLContextRoot.java:370)
What I understand from the error message is that the job is running outside the J2EE container, so the datasource is not available for the job to make the connection. I cannot agree with that, because Quartz is implemented as a ServletContextListener and registered in web.xml.
Web.xml
<listener>
<listener-class>com.ehacampaign.helper.EHAJobSchedulerListener</listener-class>
</listener>
EHAJobSchedulerListener.java
public class EHAJobSchedulerListener implements ServletContextListener {..}
As you can see from the code, the class is registered in web.xml, and I do not understand why it cannot use the datasource in the J2EE container.
My questions are:
1. Why can a servlet-registered class not access the datasource in the J2EE container?
2. If the datasource in the container cannot be used, how can I make a connection to the DB while executing the job?
NOTE: I have the same setup on JBoss AS 7.1, and there the jobs run smoothly, accessing the datasource configured in JBoss AS 7.1. I have to develop this on WebSphere because the customer demands it.
UPDATED
I have attached the modified Quartz property file. Even after adding the WorkManagerThreadExecutor, I am getting the same error.
org.quartz.threadPool.threadCount=1
org.quartz.threadPool.class=org.quartz.simpl.SimpleThreadPool
org.quartz.jobStore.class=org.quartz.simpl.RAMJobStore
org.quartz.threadExecutor.class=org.quartz.commonj.WorkManagerThreadExecutor
org.quartz.threadExecutor.workManagerName=wm/default
In order to perform JNDI lookups in WebSphere, your code must be running on a managed thread. To have Quartz run on one of WebSphere's managed threads, you must set the following two properties in your quartz.properties (as Alasdair mentioned in the comments):
org.quartz.threadExecutor.class=org.quartz.commonj.WorkManagerThreadExecutor
org.quartz.threadExecutor.workManagerName=wm/default
The value of org.quartz.threadExecutor.workManagerName can be the JNDI name of any work manager that you have configured in WebSphere. I recommend simply using wm/default because it is present in your configuration by default.
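If the scheduler is built programmatically rather than from quartz.properties, the same two keys can be passed to StdSchedulerFactory. A minimal sketch, assuming Quartz 2.x with the commonj support classes on the classpath; the class and property values mirror the configuration above:
import java.util.Properties;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.impl.StdSchedulerFactory;

public class QuartzWebSphereBootstrap {
    public static Scheduler createScheduler() throws SchedulerException {
        Properties props = new Properties();
        props.setProperty("org.quartz.threadPool.class", "org.quartz.simpl.SimpleThreadPool");
        props.setProperty("org.quartz.threadPool.threadCount", "1");
        props.setProperty("org.quartz.jobStore.class", "org.quartz.simpl.RAMJobStore");
        // Run Quartz's executor on a WebSphere-managed (commonj) work manager thread
        props.setProperty("org.quartz.threadExecutor.class", "org.quartz.commonj.WorkManagerThreadExecutor");
        props.setProperty("org.quartz.threadExecutor.workManagerName", "wm/default");
        return new StdSchedulerFactory(props).getScheduler();
    }
}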
With all the help provided by aguibert and Alasdair, and the reference from here, I was able to fix the issue.
The Quartz property file is:
org.quartz.threadPool.threadCount=1
org.quartz.jobStore.class=org.quartz.simpl.RAMJobStore
org.quartz.threadExecutor.class=org.quartz.commonj.WorkManagerThreadExecutor
org.quartz.threadExecutor.workManagerName=wm/default
The JNDI lookup for the datasource should happen within the no-argument constructor of the class that implements Job. For example:
import java.sql.Connection;
import java.sql.SQLException;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;
import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

public class ContractIdFromPartyServiceJob implements Job {

    private DataSource ds;

    public ContractIdFromPartyServiceJob() {
        try {
            // Look up the datasource here: the no-arg constructor runs on the
            // managed (work manager) thread, so "java:" lookups succeed.
            Logger.info("Gets the data source");
            Context context = new InitialContext();
            ds = (DataSource) context.lookup(ApplicationConstants.RESOURCE_REFERENCE_JDBC);
        } catch (NamingException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void execute(JobExecutionContext arg0) throws JobExecutionException {
        EHAMarsDAO marsDao = new EHAMarsDAO();
        try {
            Connection con = getConnection();
            try {
                marsDao.callDBMethod(con);
            } finally {
                con.close();
            }
        } catch (Exception e) {
            // Wrap any DB/connection failure so Quartz sees a JobExecutionException
            throw new JobExecutionException(e);
        }
    }

    public Connection getConnection() throws RException {
        Connection con = null;
        try {
            con = ds.getConnection();
            con.setAutoCommit(false);
            con.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
        } catch (SQLException e) {
            throw new RException(Constants.ERROR_CODE_002, Constants.E012_DB_CONNECTION_ERROR, e);
        }
        return con;
    }
}
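For completeness, a minimal sketch of how a listener such as EHAJobSchedulerListener might schedule this job once the scheduler configured above has started (Quartz 2.x API; the job key, group, and cron expression are illustrative only):
import org.quartz.CronScheduleBuilder;
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;
import org.quartz.impl.StdSchedulerFactory;

public class ScheduleContractIdJob {
    public static void schedule() throws SchedulerException {
        // Uses quartz.properties from the classpath, including the work manager settings above
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
        JobDetail job = JobBuilder.newJob(ContractIdFromPartyServiceJob.class)
                .withIdentity("contractIdJob", "ehaJobs")   // illustrative names
                .build();
        Trigger trigger = TriggerBuilder.newTrigger()
                .withIdentity("contractIdTrigger", "ehaJobs")
                .withSchedule(CronScheduleBuilder.cronSchedule("0 0 2 * * ?")) // daily at 02:00, illustrative
                .build();
        scheduler.scheduleJob(job, trigger);
        scheduler.start();
    }
}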
I am new to the Google API. I am trying to create a simple web application (Java EE) that reads a DocumentListFeed from Google Docs. My code in the servlet is:
protected void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException
{
    try
    {
        DocsService service = new DocsService("Document List Demo");
        service.setUserCredentials(NAME, PASSWORD);
        response.getWriter().println("helloooooo");
        //URL documentListFeedUrl = new URL("http://docs.google.com/feeds/documents/private/full");
        URL documentListFeedUrl = new URL("https://docs.google.com/feeds/default/private/full?v=3");
        DocumentListFeed feed = service.getFeed(documentListFeedUrl, DocumentListFeed.class);
        for (DocumentListEntry entry : feed.getEntries())
        {
            response.getWriter().println(entry.getTitle().getPlainText());
        }
    }
    catch (Exception e)
    {
        response.getWriter().println(e);
    }
}
But it is showing me the error: java.lang.NoClassDefFoundError: com/google/gdata/client/docs/DocsService
I am using the GlassFish server and Eclipse, and have added these external JARs: activation.jar, guava-r07.jar, mail.jar, servlet.jar, gdata-client-1.0.jar, gdata-client-meta-1.0.jar, gdata-core-1.0.jar, gdata-media-1.0.jar, gdata-docs-3.0.jar, gdata-docs-meta-3.0.jar.
I have copied the same code into a Java SE project and there it works fine. Could you please tell me why it is not working in Java EE? Is it a problem with the GlassFish server?
It just means that the JARs are not present in your GlassFish server classpath.
Add all the JARs you listed to your GlassFish server classpath. Since I am not a GlassFish expert, I cannot walk you through adding the JARs to your server.
In the case of WebLogic, you would just package all the JARs in your project's APP-INF directory.
Hope it helps.
I am testing JMS with the GlassFish server, and for that I want to send a simple text message to a GlassFish queue. I have tried this with ActiveMQ and it works fine, but I don't understand what to put in the jndi.properties configuration file, or which JAR is needed, for the GlassFish server. Please give me some idea of how to implement this.
Thanks in advance.
Since you're using GlassFish, the easiest way is to write a simple application (EJB) that performs the task. You have to define in GlassFish:
a ConnectionFactory (Resources -> JMS Resources -> Connection Factory); let's give it the JNDI name jms/ConnectionFactory,
a message queue (Resources -> JMS Resources -> Destination Resources); let's give it the JNDI name jms/myQueue.
The next step is to use these in an EJB that you need to write. It's not hard: first, you have to inject:
@Resource(mappedName = "jms/ConnectionFactory")
private ConnectionFactory cf;

@Resource(mappedName = "jms/myQueue")
private Queue messageQueue;
and then use it like this:
..
javax.jms.Connection conn = null;
javax.jms.Session s = null;
javax.jms.MessageProducer mp = null;
try {
    conn = cf.createConnection();
    s = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
    mp = s.createProducer(messageQueue);
    javax.jms.TextMessage msg = s.createTextMessage();
    msg.setStringProperty("your-key", "your-value");
    msg.setText("Your text message");
    mp.send(msg);
} catch (JMSException ex) {
    // exception handling
} finally {
    try {
        // close MessageProducer, Session and Connection
        if (mp != null) mp.close();
        if (s != null) s.close();
        if (conn != null) conn.close();
    } catch (JMSException ex) {
        // exception handling
    }
}
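If you also want to verify that the message actually arrives on the queue, a message-driven bean is the usual counterpart. A minimal sketch, assuming the jms/myQueue destination defined above (the class name is illustrative):
import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

// Minimal consumer sketch: logs every text message that lands on jms/myQueue
@MessageDriven(mappedName = "jms/myQueue")
public class MyQueueListener implements MessageListener {
    @Override
    public void onMessage(Message message) {
        try {
            if (message instanceof TextMessage) {
                System.out.println("Received: " + ((TextMessage) message).getText());
            }
        } catch (JMSException e) {
            e.printStackTrace();
        }
    }
}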
Regarding configuration, you don't need any external JAR; everything that is needed ships with GlassFish. If you don't want to write an EJB but a regular standalone Java application instead, then you'll have to include jms.jar and imq.jar.
I'm using SAP JCo to connect to the SAP database, with the front end being Java (JSF). I connect to SAP with:
try {
    mConnection = JCO.createClient("400",       // SAP client
                                   "c3026902",  // userid
                                   "********",  // password
                                   "EN",        // language
                                   "iwdf5020",  // host name
                                   "00");       // system number
    mConnection.connect();
} catch (Exception ex) {
    ex.printStackTrace();
    System.exit(1);
}
The problem I'm facing is that when I run the application for the first time, the data is displayed, but when I re-run it, it says "Could not load middleware layer 'com.sap.mw.jco.rfc.MiddlewareRFC'".
Can anyone help me resolve this issue?
This sounds like the API cannot load the native driver files.
The SAP Java Connector consists of a native runtime part, which does the actual communication, and a Java API that wraps this functionality.
The Java API lives in sapjco.jar, and the native drivers on Windows, for example, are librfc32.dll and sapjcorfc.dll.
Place these DLLs somewhere on your system path (e.g. on Windows: C:\WINDOWS\system32) and it should run.
Cheers
Sebastian
Are your DLLs located in the Windows system32 folder? If so, are you perhaps using the wrong architecture (an x64 DLL on 32-bit Java, or vice versa)?
Also, are the DLLs the same version as the Java API? If you have the SAP GUI installed, there could be older DLLs around.
Defining the SAP connection:
For version 3.0 of the sapjco library there is plenty of useful information available. To create a connection, follow the instructions at:
http://www.browseye.com/linkShare.html?url=http://help.sap.com/saphelp_nwpi711/helpdata/en/46/fb807cc7b46c30e10000000a1553f7/content.htm?bwsCriterion=%22Setting%20Up%20Connection%22&bwsMatch=1&bwsCriterion=%22Setting%20Up%20Connection%22&bwsMatch=1
There are a few things that you should take into account:
Place the DLL file in the same directory as the JAR.
The DLL must be the right version for your operating system and architecture, otherwise you will get a native library error.
Example code to create a connection to the server:
import java.io.File;
import java.io.FileOutputStream;
import java.util.Properties;

import com.sap.conn.jco.JCoDestination;
import com.sap.conn.jco.JCoDestinationManager;
import com.sap.conn.jco.JCoException;
import com.sap.conn.jco.ext.DestinationDataProvider;

public class StepByStepClient
{
    static String DESTINATION_NAME1 = "ABAP_AS_WITHOUT_POOL";
    static String DESTINATION_NAME2 = "ABAP_AS_WITH_POOL";

    static
    {
        Properties connectProperties = new Properties();
        connectProperties.setProperty(DestinationDataProvider.JCO_ASHOST, "ls4065");
        connectProperties.setProperty(DestinationDataProvider.JCO_SYSNR, "85");
        connectProperties.setProperty(DestinationDataProvider.JCO_CLIENT, "800");
        connectProperties.setProperty(DestinationDataProvider.JCO_USER, "homofarber");
        connectProperties.setProperty(DestinationDataProvider.JCO_PASSWD, "laska");
        connectProperties.setProperty(DestinationDataProvider.JCO_LANG, "en");
        createDestinationDataFile(DESTINATION_NAME1, connectProperties);

        connectProperties.setProperty(DestinationDataProvider.JCO_POOL_CAPACITY, "3");
        connectProperties.setProperty(DestinationDataProvider.JCO_PEAK_LIMIT, "10");
        createDestinationDataFile(DESTINATION_NAME2, connectProperties);
    }

    static void createDestinationDataFile(String destinationName, Properties connectProperties)
    {
        File destCfg = new File(destinationName + ".jcoDestination");
        try
        {
            FileOutputStream fos = new FileOutputStream(destCfg, false);
            connectProperties.store(fos, "for tests only !");
            fos.close();
        }
        catch (Exception e)
        {
            throw new RuntimeException("Unable to create the destination files", e);
        }
    }

    public static void step1Connect() throws JCoException
    {
        JCoDestination destination = JCoDestinationManager.getDestination(DESTINATION_NAME1);
        System.out.println("Attributes:");
        System.out.println(destination.getAttributes());
        System.out.println();
    }
}
In SAP JCo 3.0, connections are built from the information contained in a "destination".
The documentation example uses a properties file to store the destination. However, that is an insecure way to keep connection information, as pointed out in the highlighted paragraph of the documentation at the following link:
http://help.sap.com/saphelp_nwpi711/helpdata/en/48/5fb9f9b523501ee10000000a421937/content.htm?bwsCriterion=%22In%20practice%20you%20should%20avoid%20this%20for%20security%20reasons.%22&bwsMatch=1
You can keep the connection information in a database or any other storage system if you create a custom DestinationDataProvider. The examples shipped with the SAP JCo library include one that shows how to create a custom DestinationDataProvider.
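A minimal in-memory sketch of such a custom DestinationDataProvider, assuming JCo 3.x; a real implementation would read the properties from a database or a secure store rather than a map:
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import com.sap.conn.jco.ext.DestinationDataEventListener;
import com.sap.conn.jco.ext.DestinationDataProvider;
import com.sap.conn.jco.ext.Environment;

// Illustrative provider that serves destination properties from memory
public class InMemoryDestinationDataProvider implements DestinationDataProvider
{
    private final Map<String, Properties> destinations = new HashMap<String, Properties>();

    public void addDestination(String name, Properties properties)
    {
        destinations.put(name, properties);
    }

    @Override
    public Properties getDestinationProperties(String destinationName)
    {
        return destinations.get(destinationName);
    }

    @Override
    public boolean supportsEvents()
    {
        return false;
    }

    @Override
    public void setDestinationDataEventListener(DestinationDataEventListener eventListener)
    {
        // no change events are fired because supportsEvents() returns false
    }

    public static void main(String[] args)
    {
        InMemoryDestinationDataProvider provider = new InMemoryDestinationDataProvider();
        // Register the provider before the first JCoDestinationManager.getDestination(...) call
        Environment.registerDestinationDataProvider(provider);
        // provider.addDestination("ABAP_AS_WITHOUT_POOL", connectProperties);
    }
}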
I really am having a nightmare configuring Tomcat to set up a connection pool. I have done a lot of reading on various forums and in the Tomcat documentation, but I am having to ask here as a last resort. This is the first time I have tried to get connections from the container, so it's all new to me.
I have been getting NameNotFoundExceptions, which only seem to go away when I move the context.xml file back from MyApp/META-INF/context.xml to Tomcat 6.0/conf/context.xml, so for some reason Tomcat is not seeing the context.xml file in MyApp's META-INF directory. Any ideas?
Now I am getting an SQLNestedException: Cannot create PoolableConnectionFactory (''@'localhost' (using password: YES))
First of all, it surprises me that the user is blank, because I have specified 'root' in the context.xml. As for not being able to create a PoolableConnectionFactory, I have seen a couple of example context.xml files that had a factory attribute. Do I need this? If so, what class should I specify there?
My context.xml is:
<?xml version='1.0' encoding='utf-8'?>
<Context>
    <!-- Configure a JDBC DataSource for the user database -->
    <Resource name="jdbc/searchdb"
              type="javax.sql.DataSource"
              auth="Container"
              user="root"
              password="mypassword"
              driverClassName="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost:3306/search"
              maxActive="8"
              maxIdle="4"/>
    <!-- Default set of monitored resources -->
    <WatchedResource>WEB-INF/web.xml</WatchedResource>
    <!--<WatchedResource>META-INF/context.xml</WatchedResource>-->
</Context>
I have seen a context.xml with a WatchedResource element for META-INF/context.xml. I tried it, but it didn't seem to make a difference, and it seems strange to me, so I have commented it out. Should I actually be including it?
My test servlet:
package search.web;

import javax.servlet.*;
import javax.servlet.http.*;
import java.io.*;
import java.util.*;
import java.sql.*;
import javax.sql.*;
import javax.naming.*;
import search.model.*;

public class ConPoolTest extends HttpServlet {
    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException, ServletException {
        Context ctx = null;
        DataSource ds = null;
        Connection conn = null;
        try {
            ctx = new InitialContext();
            ds = (DataSource) ctx.lookup("java:comp/env/jdbc/searchdb");
            conn = ds.getConnection();
            if (conn != null) {
                System.out.println("have a connection from the pool");
            }
        } catch (SQLException e) {
            e.printStackTrace();
        } catch (NamingException e) {
            e.printStackTrace();
        } finally {
            try {
                if (conn != null) {
                    conn.close();
                }
            } catch (SQLException e) {
                e.printStackTrace();
            }
        }
    }
}
I look forward to your suggestions.
Many thanks
Joe
PS: I would have also included the stack trace, but for some reason it shows up in the console and not in the logs.
Update:
Now that I look at the error message again, I'm wondering what it is that is denying me access. I assumed that it was the database, but is it actually the container? Do I need to set up some sort of authentication in the tomcat-users.xml file?
The Tomcat docs for JNDI DataSources contain complete examples of how to set up JDBC data sources in context.xml.
Some comments on your question:
Tomcat should copy the context.xml from your app's WAR to conf/Catalina/localhost/app.xml during deployment (when it unpacks your app). The file should not go into conf/. Check whether you have an old copy lying around in either place and clean it up.
The error saying it's using the wrong user also suggests that there is more than one context.xml and you're looking at the wrong one.
You don't need a PoolableConnectionFactory with Tomcat 6. This might be cruft left over from an upgrade from Tomcat 5, or something broke, several things were tried, and nobody cleaned up the config file afterwards.
Tomcat automatically watches web.xml; there is no need to make it a WatchedResource a second time.
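For reference, a minimal Resource definition modelled on the Tomcat 6 JNDI DataSource HOW-TO; the values are illustrative and mirror yours. Note that the attribute the Tomcat docs use for the database user is username rather than user:
<!-- MyApp/META-INF/context.xml: illustrative sketch following the Tomcat 6 JNDI DataSource HOW-TO -->
<Context>
    <Resource name="jdbc/searchdb"
              type="javax.sql.DataSource"
              auth="Container"
              username="root"
              password="mypassword"
              driverClassName="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost:3306/search"
              maxActive="8"
              maxIdle="4"/>
</Context>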