NullPointerException when calling showAtLocation

Good day to all programmers.
I get a NullPointerException when I call the showAtLocation method of PopupWindow. As written in many forums, this exception happens when the first parameter of showAtLocation is null, so I checked it this way:
showAtLocation(View parent, int gravity, int x, int y)

parent.equals(null)                           // returns false
LinearLayout lout = (LinearLayout) parent;
lout.getChildCount()                          // returns the correct count of child elements
((TextView) lout.getChildAt(1)).getText()     // returns the text I set in the android:text attribute
I have a GridView whose adapter is CustomAdapter (extends BaseAdapter). This class sets an OnClickListener in its getView method. I want to show a PopupWindow for each item of the GridView, so in the OnClickListener I call the method showPopup:
private void showPopup(final Activity context, Point p) {
    int popupWidth = 200;
    int popupHeight = 150;

    // Inflate the popup_layout.xml
    LinearLayout viewGroup = (LinearLayout) context.findViewById(R.id.popup);
    LayoutInflater layoutInflater = (LayoutInflater) context.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
    //LayoutInflater layoutInflater = prnt.getLayoutInflater();
    View parent = layoutInflater.inflate(R.layout.popup_layout, viewGroup);

    final PopupWindow popup = new PopupWindow(context);
    popup.setContentView(parent);
    popup.setWidth(popupWidth);
    popup.setHeight(popupHeight);
    popup.setFocusable(true);

    int OFFSET_X = 30;
    int OFFSET_Y = 30;
    popup.setBackgroundDrawable(new BitmapDrawable());

    /*
    LinearLayout lout = (LinearLayout) parent;
    showMsg(parent.equals(null) + " : type " + lout.getChildCount() + " - " + ((TextView) lout.getChildAt(1)).getText());
    */

    popup.showAtLocation(parent, 0, p.x + OFFSET_X, p.y + OFFSET_Y); // error occurs here
}
Please, I need your help.

02-10 13:49:34.148 2376-2376/com.iyb.wi.mobi I/art: Not late-enabling -Xcheck:jni (already on)
02-10 13:49:34.255 2376-2376/com.iyb.wi.mobi W/System: ClassLoader referenced unknown path: /data/app/com.iyb.wi.mobi-2/lib/x86
02-10 13:49:34.331 2376-2376/com.iyb.wi.mobi I/GMPM: App measurement is starting up, version: 8487
02-10 13:49:34.331 2376-2376/com.iyb.wi.mobi I/GMPM: To enable debug logging run: adb shell setprop log.tag.GMPM VERBOSE
02-10 13:49:34.501 2376-2392/com.iyb.wi.mobi D/OpenGLRenderer: Use EGL_SWAP_BEHAVIOR_PRESERVED: true
02-10 13:49:34.582 2376-2392/com.iyb.wi.mobi I/OpenGLRenderer: Initialized EGL, version 1.4
02-10 13:49:34.639 2376-2392/com.iyb.wi.mobi W/EGL_emulation: eglSurfaceAttrib not implemented
02-10 13:49:34.639 2376-2392/com.iyb.wi.mobi W/OpenGLRenderer: Failed to set EGL_SWAP_BEHAVIOR on surface 0xad6dfbc0, error=EGL_SUCCESS
02-10 13:49:45.041 2376-2390/com.iyb.wi.mobi I/GMPM: Tag Manager is not found and thus will not be used
02-10 13:50:05.590 2376-2386/com.iyb.wi.mobi I/art: Background sticky concurrent mark sweep GC freed 11567(906KB) AllocSpace objects, 10(200KB) LOS objects, 22% free, 2MB/3MB, paused 25.010ms total 151.250ms
02-10 13:50:05.670 2376-2392/com.iyb.wi.mobi W/EGL_emulation: eglSurfaceAttrib not implemented
02-10 13:50:05.670 2376-2392/com.iyb.wi.mobi W/OpenGLRenderer: Failed to set EGL_SWAP_BEHAVIOR on surface 0xad6e5d00, error=EGL_SUCCESS
02-10 13:50:08.094 2376-2376/com.iyb.wi.mobi I/Choreographer: Skipped 141 frames! The application may be doing too much work on its main thread.
02-10 13:50:08.165 2376-2392/com.iyb.wi.mobi E/Surface: getSlotFromBufferLocked: unknown buffer: 0xab793b90
02-10 13:50:34.728 2376-2392/com.iyb.wi.mobi W/EGL_emulation: eglSurfaceAttrib not implemented
02-10 13:50:34.728 2376-2392/com.iyb.wi.mobi W/OpenGLRenderer: Failed to set EGL_SWAP_BEHAVIOR on surface 0xa1c51100, error=EGL_SUCCESS
02-10 13:50:35.582 2376-2392/com.iyb.wi.mobi E/Surface: getSlotFromBufferLocked: unknown buffer: 0xab793c00
02-10 13:50:35.885 2376-2392/com.iyb.wi.mobi W/EGL_emulation: eglSurfaceAttrib not implemented
02-10 13:50:35.885 2376-2392/com.iyb.wi.mobi W/OpenGLRenderer: Failed to set EGL_SWAP_BEHAVIOR on surface 0xa1c51d00, error=EGL_SUCCESS
02-10 13:50:38.395 2376-2392/com.iyb.wi.mobi E/Surface: getSlotFromBufferLocked: unknown buffer: 0xab793c70
02-10 13:50:45.007 2376-2392/com.iyb.wi.mobi W/EGL_emulation: eglSurfaceAttrib not implemented
02-10 13:50:45.007 2376-2392/com.iyb.wi.mobi W/OpenGLRenderer: Failed to set EGL_SWAP_BEHAVIOR on surface 0xa17bad20, error=EGL_SUCCESS
02-10 13:50:45.947 2376-2392/com.iyb.wi.mobi E/Surface: getSlotFromBufferLocked: unknown buffer: 0xb3fd6830
02-10 13:50:46.023 2376-2386/com.iyb.wi.mobi I/art: Background sticky concurrent mark sweep GC freed 3519(315KB) AllocSpace objects, 2(40KB) LOS objects, 0% free, 4MB/4MB, paused 12.041ms total 40.080ms
02-10 13:50:50.583 2376-2376/com.iyb.wi.mobi D/AndroidRuntime: Shutting down VM
02-10 13:50:50.583 2376-2376/com.iyb.wi.mobi E/AndroidRuntime: FATAL EXCEPTION: main
Process: com.iyb.wi.mobi, PID: 2376
java.lang.NullPointerException: Attempt to read from field 'int android.graphics.Point.x' on a null object reference
at com.iyb.wi.mobi.CustomAdapter.showPopup(CustomAdapter.java:136)
at com.iyb.wi.mobi.CustomAdapter.access$000(CustomAdapter.java:23)
at com.iyb.wi.mobi.CustomAdapter$1.onClick(CustomAdapter.java:84)
at android.view.View.performClick(View.java:5198)
at android.view.View$PerformClick.run(View.java:21147)
at android.os.Handler.handleCallback(Handler.java:739)
at android.os.Handler.dispatchMessage(Handler.java:95)
at android.os.Looper.loop(Looper.java:148)
at android.app.ActivityThread.main(ActivityThread.java:5417)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:726)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:616)
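Note that the stack trace points at a different null than the one checked above: the message says the read of 'int android.graphics.Point.x' happened on a null object, so it is the Point p passed to showPopup that is null, not the parent view. A minimal hedged sketch of computing the Point inside the click listener so it can never be null (itemView and activity are illustrative names, not the asker's exact code):
// Hedged sketch: build the Point right before calling showPopup.
itemView.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        int[] location = new int[2];
        v.getLocationOnScreen(location);               // on-screen position of the tapped item
        Point p = new Point(location[0], location[1]); // never null at this point
        showPopup(activity, p);
    }
});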

Related

Caused by: com.google.firebase.database.DatabaseException:

E/AndroidRuntime: FATAL EXCEPTION: main
Process: com.example.poster, PID: 23677
java.lang.RuntimeException: Unable to start activity ComponentInfo{com.example.poster/com.example.poster.MainActivity}: com.google.firebase.database.DatabaseException: Failed to get FirebaseDatabase instance: Specify DatabaseURL within FirebaseApp or from your getInstance() call.
at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2984)
at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:3045)
at android.app.ActivityThread.-wrap14(ActivityThread.java)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1642)
at android.os.Handler.dispatchMessage(Handler.java:102)
at android.os.Looper.loop(Looper.java:154)
at android.app.ActivityThread.main(ActivityThread.java:6776)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:1496)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1386)
Caused by: com.google.firebase.database.DatabaseException: Failed to get FirebaseDatabase instance: Specify DatabaseURL within FirebaseApp or from your getInstance() call.
at com.google.firebase.database.FirebaseDatabase.getInstance(com.google.firebase:firebase-database@@16.0.4:114)
at com.google.firebase.database.FirebaseDatabase.getInstance(com.google.firebase:firebase-database@@16.0.4:71)
at com.example.poster.MainActivity.onCreate(MainActivity.java:48)
at android.app.Activity.performCreate(Activity.java:6955)
at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1126)
at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2927)
at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:3045) 
at android.app.ActivityThread.-wrap14(ActivityThread.java) 
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1642) 
at android.os.Handler.dispatchMessage(Handler.java:102) 
at android.os.Looper.loop(Looper.java:154) 
at android.app.ActivityThread.main(ActivityThread.java:6776) 
at java.lang.reflect.Method.invoke(Native Method) 
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:1496) 
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1386) 
Add these dependencies for Java:
implementation platform('com.google.firebase:firebase-bom:28.4.2')
implementation 'com.google.firebase:firebase-database'
For Kotlin:
implementation platform('com.google.firebase:firebase-bom:28.4.2')
implementation 'com.google.firebase:firebase-database-ktx'
Then, if you are using a database instance outside us1, pass the database URL:
FirebaseDatabase database = FirebaseDatabase.getInstance("**url here**");
DatabaseReference myRef = database.getReference("**path here**");
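As a usage note, once the reference above is obtained you can write and read at that path; a minimal hedged sketch (the value and callbacks are illustrative, not part of the answer):
// Hedged sketch: illustrative write and one-time read using the reference above.
myRef.setValue("Hello, World!"); // writes a value at the chosen path
myRef.addListenerForSingleValueEvent(new ValueEventListener() {
    @Override
    public void onDataChange(DataSnapshot snapshot) {
        String value = snapshot.getValue(String.class); // reads it back once
    }

    @Override
    public void onCancelled(DatabaseError error) {
        // read cancelled, e.g. by security rules
    }
});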

Adding a custom authentication jar to Kafka

Currently, I am using the PlainLoginModule to authenticate users. However, I have now created a jar with the code listed here and want to use it instead of PlainLoginModule: https://cwiki.apache.org/confluence/display/KAFKA/KIP-86%3A+Configurable+SASL+callback+handlers#KIP-86:ConfigurableSASLcallbackhandlers-sample_plainSampleCallbackHandlerforSASL/PLAIN.
I placed the jar file into the ~/libs folder, added
listener.name.sasl_ssl.plain.sasl.server.callback.handler.class=com.synopsys.demo.DemoApplication
to my server.properties, and my kafka_server_jaas.conf contains:
KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-secret"
    user_admin="admin-secret";
};
And when I start up my server, I get the error:
Part 1:
14:36:41.924 [main] DEBUG org.apache.kafka.common.network.Selector - [KafkaServer id=1] Successfully authenticated with swe-analyticsdb-prod2/10.15.164.233
14:36:41.924 [Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.network.Selector - [Controller id=1, targetBrokerId=1] Successfully authenticated with swe-analyticsdb-prod2/10.15.164.233
14:36:41.924 [Controller-1-to-broker-1-send-thread] INFO kafka.controller.RequestSendThread - [RequestSendThread controllerId=1] Controller 1 connected to swe-analyticsdb-prod2:9093 (id: 1 rack: null) for sending state change requests
14:36:41.925 [data-plane-kafka-network-thread-1-ListenerName(SASL_SSL)-SASL_SSL-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer brokerId=1] Connection with swe-analyticsdb-prod2.internal.synopsys.com/10.15.164.233 disconnected
java.io.EOFException: null
    at org.apache.kafka.common.network.SslTransportLayer.read(SslTransportLayer.java:573)
    at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:94)
    at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:424)
    at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:385)
    at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:651)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:572)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:483)
    at kafka.network.Processor.poll(SocketServer.scala:830)
    at kafka.network.Processor.run(SocketServer.scala:730)
    at java.lang.Thread.run(Thread.java:748)
14:36:41.925 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name connections-closed:
14:36:41.925 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name connections-created:
14:36:41.925 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name successful-authentication:
14:36:41.925 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name successful-reauthentication:
14:36:41.925 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name successful-authentication-no-reauth:
14:36:41.926 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name failed-authentication:
14:36:41.926 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name failed-reauthentication:
14:36:41.926 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name reauthentication-latency:
14:36:41.926 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name bytes-sent-received:
14:36:41.927 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name bytes-sent:
14:36:41.927 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name bytes-received:
14:36:41.927 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name select-time:
14:36:41.927 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Removed sensor with name io-time:
14:36:41.928 [main] WARN kafka.utils.CoreUtils$ - org.apache.kafka.common.requests.ControlledShutdownRequest$Builder.<init>(IJS)V
java.lang.NoSuchMethodError: org.apache.kafka.common.requests.ControlledShutdownRequest$Builder.<init>(IJS)V
    at kafka.server.KafkaServer.doControlledShutdown$1(KafkaServer.scala:520)
    at kafka.server.KafkaServer.controlledShutdown(KafkaServer.scala:563)
    at kafka.server.KafkaServer.$anonfun$shutdown$2(KafkaServer.scala:585)
    at kafka.utils.CoreUtils$.swallow(CoreUtils.scala:86)
    at kafka.server.KafkaServer.shutdown(KafkaServer.scala:585)
    at kafka.server.KafkaServer.startup(KafkaServer.scala:342)
    at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
    at kafka.Kafka$.main(Kafka.scala:75)
    at kafka.Kafka.main(Kafka.scala)
14:36:41.929 [main] INFO kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread - [/config/changes-event-process-thread]: Shutting down
14:36:41.929 [/config/changes-event-process-thread] INFO kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread - [/config/changes-event-process-thread]: Stopped
14:36:41.929 [main] INFO kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread - [/config/changes-event-process-thread]: Shutdown completed
14:36:41.930 [main] INFO kafka.network.SocketServer - [SocketServer brokerId=1] Stopping socket server request processors
14:36:41.931 [data-plane-kafka-socket-acceptor-ListenerName(SASL_SSL)-SASL_SSL-9093] DEBUG kafka.network.Acceptor - Closing server socket and selector.
14:36:41.933 [data-plane-kafka-network-thread-1-ListenerName(SASL_SSL)-SASL_SSL-0] DEBUG kafka.network.Processor - Closing selector - processor 0
14:36:41.934 [data-plane-kafka-network-thread-1-ListenerName(SASL_SSL)-SASL_SSL-0] DEBUG kafka.network.Processor - Closing selector connection 10.15.164.233:9093-10.15.164.233:44774-0
14:36:41.935 [Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.network.Selector - [Controller id=1, targetBrokerId=1] Connection with swe-analyticsdb-prod2/10.15.164.233 disconnected
Part 2:
07:22:21.223 [main] DEBUG kafka.utils.KafkaScheduler - Shutting down task scheduler.
07:22:21.223 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Heartbeat]: Shutting down
07:22:21.254 [Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Initiating connection to node swe-analyticsdb-prod2:9093 (id: 1 rack: null) using address swe-analyticsdb-prod2/10.15.164.233
07:22:21.254 [Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - Set SASL client state to SEND_APIVERSIONS_REQUEST
07:22:21.254 [Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - Creating SaslClient: client=null;service=kafka;serviceHostname=swe-analyticsdb-prod2;mechs=[PLAIN]
07:22:21.255 [Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.network.Selector - [Controller id=1, targetBrokerId=1] Connection with swe-analyticsdb-prod2/10.15.164.233 disconnected
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.kafka.common.network.SslTransportLayer.finishConnect(SslTransportLayer.java:119)
    at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:216)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:531)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:483)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:539)
    at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:74)
    at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:282)
    at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:236)
    at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)
07:22:21.255 [Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Node 1 disconnected.
07:22:21.255 [Controller-1-to-broker-1-send-thread] WARN org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Connection to node 1 (swe-analyticsdb-prod2/10.15.164.233:9093) could not be established. Broker may not be available.
07:22:21.256 [Controller-1-to-broker-1-send-thread] WARN kafka.controller.RequestSendThread - [RequestSendThread controllerId=1] Controller 1's connection to broker swe-analyticsdb-prod2:9093 (id: 1 rack: null) was unsuccessful
java.io.IOException: Connection to swe-analyticsdb-prod2:9093 (id: 1 rack: null) failed.
    at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:71)
    at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:282)
    at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:236)
    at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)
07:22:21.356 [Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Initiating connection to node swe-analyticsdb-prod2:9093 (id: 1 rack: null) using address swe-analyticsdb-prod2/10.15.164.233
07:22:21.356 [Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - Set SASL client state to SEND_APIVERSIONS_REQUEST
07:22:21.356 [Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - Creating SaslClient: client=null;service=kafka;serviceHostname=swe-analyticsdb-prod2;mechs=[PLAIN]
07:22:21.357 [Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.network.Selector - [Controller id=1, targetBrokerId=1] Connection with swe-analyticsdb-prod2/10.15.164.233 disconnected
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.kafka.common.network.SslTransportLayer.finishConnect(SslTransportLayer.java:119)
    at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:216)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:531)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:483)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:539)
    at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:74)
    at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:282)
    at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:236)
    at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)
07:22:21.357 [Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Node 1 disconnected.
07:22:21.357 [Controller-1-to-broker-1-send-thread] WARN org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Connection to node 1 (swe-analyticsdb-prod2/10.15.164.233:9093) could not be established. Broker may not be available.
UPDATE
I notice this behavior occurring even if I don't use the jar/class I made, but just leave it inside the "../libs" directory. The error above always occurs, whether I use a built-in or a custom AuthenticateCallbackHandler class.
Am I missing a step or steps? I know I have to add the jar to Kafka so it can recognize and use it, but I don't see any tutorials or documentation that explain how to use a custom callback handler with PLAIN. Does anyone know how to do this?
I am using Kafka 2.2.
My custom class code:
import org.apache.kafka.common.errors.AuthenticationException;
import org.apache.kafka.common.security.auth.AuthenticateCallbackHandler;
import org.apache.kafka.common.security.plain.PlainAuthenticateCallback;
import kafka.common.KafkaException;
import javax.naming.AuthenticationNotSupportedException;
import javax.naming.Context;
import javax.naming.NamingException;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.UnsupportedCallbackException;
import javax.security.auth.login.AppConfigurationEntry;
import java.io.IOException;
import java.util.Hashtable;
import java.util.List;
import java.util.Map;
public class CustomCallback implements AuthenticateCallbackHandler {
@Override
public void configure(Map<String, ?> configs, String mechanism, List<AppConfigurationEntry> jaasConfigEntries) {
}
@Override
public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException {
String username = null;
for (Callback callback: callbacks) {
if (callback instanceof NameCallback)
username = ((NameCallback) callback).getDefaultName();
else if (callback instanceof PlainAuthenticateCallback) {
PlainAuthenticateCallback plainCallback = (PlainAuthenticateCallback) callback;
boolean authenticated = authenticate(username, plainCallback.password());
plainCallback.authenticated(authenticated);
} else
throw new UnsupportedCallbackException(callback);
}
}
protected boolean authenticate(String username, char[] password) throws IOException {
if (username == null)
return false;
else {
// Return true if password matches expected password
Hashtable<String, String> environment = new Hashtable<String, String>();
System.out.println("Custom class is being called");
environment.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
environment.put(Context.PROVIDER_URL, "ldap://adldap.internal.synopsys.com:389");
environment.put(Context.SECURITY_AUTHENTICATION, "simple");
environment.put(Context.SECURITY_PRINCIPAL, "CN=" + username+",CN=Users,DC=internal,DC=synopsys,DC=com");
environment.put(Context.SECURITY_CREDENTIALS, new String(password));
try
{
DirContext context = new InitialDirContext(environment);
context.getEnvironment();
context.close();
return true;
}
catch (AuthenticationNotSupportedException exception)
{
System.out.println("The authentication is not supported by the server");
return false;
}
catch (AuthenticationException exception)
{
System.out.println("Incorrect password or username");
return false;
}
catch (NamingException exception)
{
System.out.println("Error when trying to create the context");
return false;
}
}
}
@Override
public void close() throws KafkaException {
}
public static void main(String[] args) throws IOException {
char[] pass = new char[]{'P', '0', 'm', 'e', 'l', '0', '2', '0', '1', '9', '!'};
CustomCallback test = new CustomCallback();
System.out.println(test.authenticate("<username>",pass));
System.out.println(test.getClass().getName());
//SpringApplication.run(DemoApplication.class, args);
}
}
server.properties contents:
advertised.listeners=SASL_SSL://<machine name>:9093
ssl.endpoint.identification.algorithm=HTTPS
ssl.client.auth=required
ssl.truststore.location=/remote/sde108/kafka/kafka/SSL2/client/server.truststore.jks
ssl.truststore.password=password
ssl.keystore.location=/remote/sde108/kafka/kafka/SSL2/client/server.keystore.jks
ssl.keystore.password=password
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
zookeeper.set.acl=false
listeners=SASL_SSL://<machine name>:9093
security.inter.broker.protocol=SASL_SSL
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
offsets.retention.minutes=1
#listener.name.sasl_sasl.plain.sasl.server.callback.handler.class=<package name>.CustomCallbackApplication
pom.xml:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>synopsys</groupId>
<artifactId>synopsys</artifactId>
<version>1.0-SNAPSHOT</version>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<configuration>
<source>6</source>
<target>6</target>
</configuration>
</plugin>
</plugins>
</build>
<dependencies>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.12</artifactId>
<version>2.2.0</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
<version>1.7.25</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
<version>1.7.25</version>
</dependency>
</dependencies>
</project>
The META-INF was created using Project Structure and Build Artifacts.
I think I replied to your email two days ago. I customized the SASL/PLAIN authentication mechanism by storing the username/password in MySQL instead of a file. I also find KIP-86 very confusing, because it provides different ways to do the same thing and does not explain the differences between them.
This is what I did, and it works:
The interface I implemented is AuthenticateCallbackHandler.
The generated jar should not be placed under ~/libs. There is a libs subdirectory in the directory where you installed Kafka; put it there.
I did not modify kafka_server_jaas.conf.
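To recap the working setup, the pieces that matter are the jar's location and the handler property in server.properties. A hedged sketch (the jar name is a placeholder and the class name mirrors the CustomCallback class from the question, not the answerer's exact files):
# Hedged sketch of the setup described above.
# 1. Copy the jar into Kafka's own libs directory (not ~/libs):
#      cp custom-callback.jar <kafka-install-dir>/libs/
# 2. In server.properties, point the PLAIN listener at the handler class:
listener.name.sasl_ssl.plain.sasl.server.callback.handler.class=<package name>.CustomCallback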

App suddenly dies with React Devtools under Windows

I initialized a new RN 0.40 project under Windows 10 (a brand new system) or Windows 7, didn't modify any code, and ran it on a real Android device.
If I do not choose 'Debug in Chrome', the app runs well.
But if I choose 'Debug in Chrome', the app runs for a little while and then exits suddenly.
My React Devtools version is 0.14.6.
Below is the log from Android Studio when it exits.
01-13 14:19:12.545 24105-28940/com.learnrn I/art: Alloc partial concurrent mark sweep GC freed 408625(107MB) AllocSpace objects, 0(0B) LOS objects, 40% free, 20MB/34MB, paused 10.528ms total 235.319ms
01-13 14:19:12.545 24105-24105/com.learnrn I/art: WaitForGcToComplete blocked for 155.528ms for cause Alloc
01-13 14:19:25.805 24105-29639/com.learnrn W/libc: pthread_create failed: couldn't allocate 1064960-byte stack: Out of memory
01-13 14:19:25.805 24105-29639/com.learnrn E/art: Throwing OutOfMemoryError "pthread_create (1040KB stack) failed: Try again"
01-13 14:19:25.805 24105-29639/com.learnrn E/AndroidRuntime: FATAL EXCEPTION: OkHttp Dispatcher
Process: com.learnrn, PID: 24105
java.lang.OutOfMemoryError: pthread_create (1040KB stack) failed: Try again
at java.lang.Thread.nativeCreate(Native Method)
at java.lang.Thread.start(Thread.java:1063)
at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:920)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1338)
at okhttp3.ConnectionPool.put(ConnectionPool.java:135)
at okhttp3.OkHttpClient$1.put(OkHttpClient.java:149)
at okhttp3.internal.connection.StreamAllocation.findConnection(StreamAllocation.java:188)
at okhttp3.internal.connection.StreamAllocation.findHealthyConnection(StreamAllocation.java:129)
at okhttp3.internal.connection.StreamAllocation.newStream(StreamAllocation.java:98)
at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:42)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67)
at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:109)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67)
at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:124)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67)
at okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.java:170)
at okhttp3.RealCall.access$100(RealCall.java:33)
at okhttp3.RealCall$AsyncCall.execute(RealCall.java:120)
at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1112)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:587)
at java.lang.Thread.run(Thread.java:818)
01-13 14:19:25.855 24105-24105/com.learnrn D/SensorManager: unregisterListener ::
01-13 14:19:25.885 24105-24166/com.learnrn W/libc: pthread_create failed: couldn't allocate 1064960-byte stack: Out of memory
01-13 14:19:25.885 24105-24166/com.learnrn E/art: Throwing OutOfMemoryError "pthread_create (1040KB stack) failed: Try again"
01-13 14:19:25.885 24105-24166/com.learnrn W/libc: pthread_create failed: couldn't allocate 1064960-byte stack: Out of memory
01-13 14:19:25.885 24105-24166/com.learnrn E/art: Throwing OutOfMemoryError "pthread_create (1040KB stack) failed: Try again"
01-13 14:19:25.885 24105-24166/com.learnrn I/Process: Sending signal. PID: 24105 SIG: 9

SimpleMessageListenerContainer: Consumer raised exception, processing can restart if processing supports it

I have a Spring Boot application interfacing with a RabbitMQ broker. It starts up fine, but I am getting a constant start/shutdown cycle of the message consumer regardless of whether there is a message on the queue. Below is a copy of my application log and client class:
2016-10-19 11:40:25,909 WARN t:[SimpleAsyncTaskExecutor-106] SimpleMessageListenerContainer: Consumer raised exception, processing can restart if the connection factory supports it
java.lang.NullPointerException: null
at com.rabbitmq.client.impl.ChannelN.validateQueueNameLength(ChannelN.java:1232) ~[amqp-client-3.5.5.jar:na]
at com.rabbitmq.client.impl.ChannelN.queueDeclarePassive(ChannelN.java:884) ~[amqp-client-3.5.5.jar:na]
at com.rabbitmq.client.impl.ChannelN.queueDeclarePassive(ChannelN.java:61) ~[amqp-client-3.5.5.jar:na]
at sun.reflect.GeneratedMethodAccessor334.invoke(Unknown Source) ~[na:na]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_51]
at java.lang.reflect.Method.invoke(Method.java:497) ~[na:1.8.0_51]
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory$CachedChannelInvocationHandler.invoke(CachingConnectionFactory.java:666) ~[spring-rabbit-1.4.6.RELEASE.jar:na]
at com.sun.proxy.$Proxy181.queueDeclarePassive(Unknown Source) ~[na:na]
at org.springframework.amqp.rabbit.listener.BlockingQueueConsumer.attemptPassiveDeclarations(BlockingQueueConsumer.java:533) ~[spring-rabbit-1.4.6.RELEASE.jar:na]
at org.springframework.amqp.rabbit.listener.BlockingQueueConsumer.start(BlockingQueueConsumer.java:453) ~[spring-rabbit-1.4.6.RELEASE.jar:na]
at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.run(SimpleMessageListenerContainer.java:1083) ~[spring-rabbit-1.4.6.RELEASE.jar:na]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_51]
2016-10-19 11:40:25,909 INFO t:[SimpleAsyncTaskExecutor-106] SimpleMessageListenerContainer: Restarting Consumer: tags=[{}], channel=Cached Rabbit Channel: AMQChannel(amqp://guest@127.0.0.1:5672/,1), acknowledgeMode=AUTO local queue size=0
2016-10-19 11:40:25,910 DEBUG t:[SimpleAsyncTaskExecutor-106] BlockingQueueConsumer: Closing Rabbit Channel: Cached Rabbit Channel: AMQChannel(amqp://guest@127.0.0.1:5672/,1)
2016-10-19 11:40:25,910 DEBUG t:[SimpleAsyncTaskExecutor-106] CachingConnectionFactory: Closing cached Channel: AMQChannel(amqp://guest@127.0.0.1:5672/,1)
2016-10-19 11:40:25,911 DEBUG t:[SimpleAsyncTaskExecutor-107] DefaultListableBeanFactory: Returning cached instance of singleton bean 'macRequestQueue'
2016-10-19 11:40:25,911 DEBUG t:[SimpleAsyncTaskExecutor-107] BlockingQueueConsumer: Starting consumer Consumer: tags=[{}], channel=null, acknowledgeMode=AUTO local queue size=0
2016-10-19 11:40:25,912 DEBUG t:[SimpleAsyncTaskExecutor-107] CachingConnectionFactory: Creating cached Rabbit Channel from AMQChannel(amqp://guest@127.0.0.1:5672/,1)
2016-10-19 11:40:25,912 DEBUG t:[SimpleAsyncTaskExecutor-107] SimpleMessageListenerContainer: Recovering consumer in 5000 ms.
Below is a copy of my client class:
public class RabbitClientConfiguration extends AbstractEMCRabbitConfiguration {

    @Inject
    private JacksonConfiguration jacksonConfiguration;

    @Inject
    private MessagingConfiguration messagingConfiguration;

    //@Value("${data.core.pattern}")
    //private String coreDataRoutingKey;

    //@Inject
    //private ClientHandler clientHandler;

    @Override
    public void configureRabbitTemplate(RabbitTemplate rabbitTemplate) {
        rabbitTemplate.setRoutingKey(MAC_REQUEST_ROUTING_KEY);
    }

    @Bean
    public SimpleMessageListenerContainer messageListenerContainer() {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
        container.setConnectionFactory(connectionFactory());
        container.setQueueNames(MAC_REQUEST_QUEUE_NAME);
        container.setMessageListener(messageListenerAdapter());
        container.setAcknowledgeMode(AcknowledgeMode.AUTO);
        return container;
    }

    @Bean
    MessageListenerAdapter messageListenerAdapter() {
        return new MessageListenerAdapter(clientHandler(jacksonConfiguration.objectMapper(), messagingConfiguration.macMessageHandler(messagingConfiguration.messagingFactory())), jsonMessageConverter());
        //return new MessageListenerAdapter(clientHandler, "receiveMessage");
    }

    @Bean
    ClientHandler clientHandler(ObjectMapper objectMapper, IMessageHandler macMessageHandler) {
        ClientHandler clientHandler = new ClientHandler();
        clientHandler.setObjectMapper(objectMapper);
        clientHandler.setMacMessageHandler(macMessageHandler);
        return clientHandler;
    }

    @Bean
    public AmqpAdmin rabbitAdmin() {
        return new RabbitAdmin(connectionFactory());
    }
}
java.lang.NullPointerException: null at com.rabbitmq.client.impl.ChannelN.validateQueueNameLength
It means MAC_REQUEST_QUEUE_NAME is null - it looks like the container doesn't check for that when you call the setter.
I opened a JIRA Issue to detect this condition.
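In other words, the constant must resolve to a non-null queue name at configuration time. A minimal hedged sketch of what the base configuration might declare (the constant names come from the question, the values are placeholders):
// Hedged sketch: initialize the constants with real, non-null values where they are declared.
public abstract class AbstractEMCRabbitConfiguration {
    protected static final String MAC_REQUEST_QUEUE_NAME = "mac.request.queue"; // placeholder value
    protected static final String MAC_REQUEST_ROUTING_KEY = "mac.request";      // placeholder value
    // ... connectionFactory(), jsonMessageConverter(), etc.
}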

Remotely calling tasks deployed via a GAR file

I use a UriDeploymentSpi bean to load GAR files from a directory on one of my nodes.
I have the following GAR ignite.xml file (it took me a while to figure this one out, by the way; is it documented anywhere?):
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:util="http://www.springframework.org/schema/util"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util-2.5.xsd">
<util:list id="myList" value-type="java.lang.String">
<value>myproject.HelloWorldTask</value>
<value>myproject.SimpleTask</value>
</util:list>
</beans>
HelloWorldTask:
package myproject;

public class HelloWorldTask extends ComputeTaskAdapter<String, Integer> {

    static {
        System.out.println("TheGlue: Loading HelloWorldTask ");
    }

    public HelloWorldTask() {
    }

    @Nullable
    @Override
    public Map<? extends ComputeJob, ClusterNode> map(List<ClusterNode> nodes, @Nullable String arg) throws IgniteException {
        System.out.println("Hello from GAR file");
        return null; //To change body of implemented methods use File | Settings | File Templates.
    }

    @Nullable
    @Override
    public Integer reduce(List<ComputeJobResult> results) throws IgniteException {
        return null; //To change body of implemented methods use File | Settings | File Templates.
    }
}
SimpleTask:
package myproject;

@ComputeTaskName("SimpleTaskName")
public class SimpleTask implements ComputeTask<String, Integer> {

    static {
        System.out.println("Loading SimpleTask");
    }

    public SimpleTask() {
    }

    @Override
    public Map<? extends ComputeJob, ClusterNode> map(List<ClusterNode> subgrid, String arg) throws IgniteException {
        System.out.println("Computing Job in SimpleTask ");
        return null; //To change body of implemented methods use File | Settings | File Templates.
    }

    @Override
    public ComputeJobResultPolicy result(ComputeJobResult res, List<ComputeJobResult> rcvd) throws IgniteException {
        return null; //To change body of implemented methods use File | Settings | File Templates.
    }

    @Override
    public Integer reduce(List<ComputeJobResult> results) throws IgniteException {
        return null; //To change body of implemented methods use File | Settings | File Templates.
    }
}
The two classes can be found by Ignite (I debugged through GridUriDeploymentSpringDocument and GridUriDeploymentFileProcessor, and they are found and loaded). Ignite says that it found the GAR, but as far as I can see the classes are never instantiated. There are no errors in the log files and no indication that the tasks are deployed either.
I am trying to execute the following code on a node where the GAR file is not deployed (i.e. a client node of the cluster), but the task is not executed on the cluster:
public class _03GarTest {
    public static void main(String[] args) {
        System.out.println("Start urideployment test");

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setPeerClassLoadingEnabled(true); // needs to be the same as in the XML for the server
        cfg.setClientMode(true);

        try (Ignite ignite = Ignition.start(cfg)) {
            ignite.compute(ignite.cluster().forRemotes()).execute("SimpleTaskName", null);
        }
    }
}
The log where I execute the _03GarTest class (the same whether I run it with "SimpleTaskName" or "myproject.SimpleTaskName") shows the following stack trace on the client node:
Exception in thread "main" class org.apache.ignite.IgniteDeploymentException: Unknown task name or failed to auto-deploy task (was task (re|un)deployed?): SimpleTaskName
at org.apache.ignite.internal.util.IgniteUtils$7.apply(IgniteUtils.java:761)
at org.apache.ignite.internal.util.IgniteUtils$7.apply(IgniteUtils.java:759)
at org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:877)
at org.apache.ignite.internal.IgniteComputeImpl.execute(IgniteComputeImpl.java:154)
at _03GarTest.main(_03GarTest.java:55)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
Caused by: class org.apache.ignite.internal.IgniteDeploymentCheckedException: Unknown task name or failed to auto-deploy task (was task (re|un)deployed?): SimpleTaskName
at org.apache.ignite.internal.processors.task.GridTaskProcessor.startTask(GridTaskProcessor.java:515)
at org.apache.ignite.internal.processors.task.GridTaskProcessor.execute(GridTaskProcessor.java:447)
at org.apache.ignite.internal.IgniteComputeImpl.execute(IgniteComputeImpl.java:151)
... 6 more
And on the server, the following logs are produced:
[13:13:33,057][INFO][disco-event-worker-#48%null%][GridDiscoveryManager] Added new node to topology: TcpDiscoveryNode [id=b70dce5e-c0fd-4ffe-8dc2-b72b18db76da, addrs=[0:0:0:0:0:0:0:1, 10.1.26.59, 127.0.0.1, 192.168.8.103, 192.168.99.1], sockAddrs=[/192.168.8.103:0, /0:0:0:0:0:0:0:1:0, /192.168.99.1:0, /10.1.26.59:0, /10.1.26.59:0, /127.0.0.1:0, /192.168.8.103:0, /192.168.99.1:0], discPort=0, order=12, intOrder=7, lastExchangeTime=1452600812926, loc=false, ver=1.5.0#20151229-sha1:f1f8cda2, isClient=true]
[13:13:33,063][INFO][disco-event-worker-#48%null%][GridDiscoveryManager] Topology snapshot [ver=12, servers=1, clients=1, CPUs=8, heap=1.5GB]
[13:13:33,085][WARNING][disco-event-worker-#48%null%][CourtesyConfigNotice]
>>> +-------------------------------------------------------------------+
>>> + Courtesy notice that joining node has inconsistent configuration. +
>>> + Ignore this message if you are sure that this is done on purpose. +
>>> +-------------------------------------------------------------------+
>>> Remote Node ID: B70DCE5E-C0FD-4FFE-8DC2-B72B18DB76DA
>>> Remote SPI with the same name is not configured: UriDeploymentSpi
>>> => Local node: o.a.i.spi.deployment.uri.UriDeploymentSpi
[13:13:33,103][INFO][exchange-worker-#51%null%][GridCachePartitionExchangeManager] Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion [topVer=12, minorTopVer=0], evt=NODE_JOINED, node=b70dce5e-c0fd-4ffe-8dc2-b72b18db76da]
[13:13:33,907][INFO][disco-event-worker-#48%null%][GridDiscoveryManager] Node left topology: TcpDiscoveryNode [id=b70dce5e-c0fd-4ffe-8dc2-b72b18db76da, addrs=[0:0:0:0:0:0:0:1, 10.1.26.59, 127.0.0.1, 192.168.8.103, 192.168.99.1], sockAddrs=[/192.168.8.103:0, /0:0:0:0:0:0:0:1:0, /192.168.99.1:0, /10.1.26.59:0, /10.1.26.59:0, /127.0.0.1:0, /192.168.8.103:0, /192.168.99.1:0], discPort=0, order=12, intOrder=7, lastExchangeTime=1452600812926, loc=false, ver=1.5.0#20151229-sha1:f1f8cda2, isClient=true]
[13:13:33,908][INFO][disco-event-worker-#48%null%][GridDiscoveryManager] Topology snapshot [ver=13, servers=1, clients=0, CPUs=8, heap=1.0GB]
[13:13:33,918][INFO][exchange-worker-#51%null%][GridCachePartitionExchangeManager] Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion [topVer=13, minorTopVer=0], evt=NODE_LEFT, node=b70dce5e-c0fd-4ffe-8dc2-b72b18db76da]
[13:14:03,193][INFO][grid-timeout-worker-#33%null%][IgniteKernal]
Any ideas on how to call a task deployed via a GAR file on another node?
----UPDATE----
As suggested in one of the answers, I have added the following code on the client:
System.out.println("Start urideployment test");
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setPeerClassLoadingEnabled(true); //needs to be the same as in the XML for the server
cfg.setClientMode(true);
UriDeploymentSpi deploymentSpi = new UriDeploymentSpi();
deploymentSpi.setUriList(Arrays.asList("file:///Users/sbeaupre/Dropbox/prorabel/Projects/IgniteTests/ignite/gar"));
cfg.setDeploymentSpi(deploymentSpi);
try(Ignite ignite = Ignition.start(cfg)) {
...
But this doesn't work either; I get the following stack trace on the client node and nothing on the server node:
Jan 14, 2016 5:42:23 PM org.apache.ignite.logger.java.JavaLogger info
INFO: Topology snapshot [ver=4, servers=1, clients=1, CPUs=8, heap=1.5GB]
Jan 14, 2016 5:42:23 PM org.springframework.beans.factory.xml.XmlBeanDefinitionReader loadBeanDefinitions
INFO: Loading XML bean definitions from resource loaded from byte array
Jan 14, 2016 5:42:23 PM org.apache.ignite.logger.java.JavaLogger info
INFO: User version is not explicitly defined (will use default version) [file=META-INF/ignite.xml, clsLdr=GridUriDeploymentClassLoader [urls=[file:/var/folders/t3/595tz_px2j9__wl37f0b5nw40000gn/T/gg.uri.deployment.tmp/301a4cb8-6fc7-4aa9-b050-3083183f4cd0/dirzip_Archive8035449106801616883.gar/]]]
Jan 14, 2016 5:42:23 PM org.apache.ignite.logger.java.JavaLogger info
INFO: Task locally deployed: class myproject.SimpleTask
Loading SimpleTask
Computing Job in SimpleTask
Jan 14, 2016 5:42:23 PM org.apache.ignite.logger.java.JavaLogger error
SEVERE: Failed to map task jobs to nodes: GridTaskSessionImpl [taskName=SimpleTaskName, dep=GridDeployment [ts=1452789743727, depMode=SHARED, clsLdr=GridUriDeploymentClassLoader [urls=[file:/var/folders/t3/595tz_px2j9__wl37f0b5nw40000gn/T/gg.uri.deployment.tmp/301a4cb8-6fc7-4aa9-b050-3083183f4cd0/dirzip_Archive8035449106801616883.gar/]], clsLdrId=cc234014251-301a4cb8-6fc7-4aa9-b050-3083183f4cd0, userVer=0, loc=true, sampleClsName=myproject.SimpleTask, pendingUndeploy=false, undeployed=false, usage=1], taskClsName=myproject.SimpleTask, sesId=bc234014251-301a4cb8-6fc7-4aa9-b050-3083183f4cd0, startTime=1452789743638, endTime=9223372036854775807, taskNodeId=301a4cb8-6fc7-4aa9-b050-3083183f4cd0, clsLdr=GridUriDeploymentClassLoader [urls=[file:/var/folders/t3/595tz_px2j9__wl37f0b5nw40000gn/T/gg.uri.deployment.tmp/301a4cb8-6fc7-4aa9-b050-3083183f4cd0/dirzip_Archive8035449106801616883.gar/]], closed=false, cpSpi=null, failSpi=null, loadSpi=null, usage=1, fullSup=false, subjId=301a4cb8-6fc7-4aa9-b050-3083183f4cd0, mapFut=IgniteFuture [orig=GridFutureAdapter [resFlag=0, res=null, startTime=1452789743739, endTime=0, ignoreInterrupts=false, lsnr=null, state=INIT]]]
class org.apache.ignite.IgniteCheckedException: Task map operation produced no mapped jobs: GridTaskSessionImpl [taskName=SimpleTaskName, dep=GridDeployment [ts=1452789743727, depMode=SHARED, clsLdr=GridUriDeploymentClassLoader [urls=[file:/var/folders/t3/595tz_px2j9__wl37f0b5nw40000gn/T/gg.uri.deployment.tmp/301a4cb8-6fc7-4aa9-b050-3083183f4cd0/dirzip_Archive8035449106801616883.gar/]], clsLdrId=cc234014251-301a4cb8-6fc7-4aa9-b050-3083183f4cd0, userVer=0, loc=true, sampleClsName=myproject.SimpleTask, pendingUndeploy=false, undeployed=false, usage=1], taskClsName=myproject.SimpleTask, sesId=bc234014251-301a4cb8-6fc7-4aa9-b050-3083183f4cd0, startTime=1452789743638, endTime=9223372036854775807, taskNodeId=301a4cb8-6fc7-4aa9-b050-3083183f4cd0, clsLdr=GridUriDeploymentClassLoader [urls=[file:/var/folders/t3/595tz_px2j9__wl37f0b5nw40000gn/T/gg.uri.deployment.tmp/301a4cb8-6fc7-4aa9-b050-3083183f4cd0/dirzip_Archive8035449106801616883.gar/]], closed=false, cpSpi=null, failSpi=null, loadSpi=null, usage=1, fullSup=false, subjId=301a4cb8-6fc7-4aa9-b050-3083183f4cd0, mapFut=IgniteFuture [orig=GridFutureAdapter [resFlag=0, res=null, startTime=1452789743739, endTime=0, ignoreInterrupts=false, lsnr=null, state=INIT]]]
at org.apache.ignite.internal.processors.task.GridTaskWorker.body(GridTaskWorker.java:497)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at org.apache.ignite.internal.processors.task.GridTaskProcessor.startTask(GridTaskProcessor.java:678)
at org.apache.ignite.internal.processors.task.GridTaskProcessor.execute(GridTaskProcessor.java:447)
at org.apache.ignite.internal.IgniteComputeImpl.execute(IgniteComputeImpl.java:151)
at _03GarTest.main(_03GarTest.java:55)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
Sven,
You should configure the URI deployment SPI on the client node as well to make GAR deployment work properly.
When you call compute.execute("taskName"), a lot of things have to be done locally on the client before the first request is sent to any node in your topology, and again after the results start coming back. At a minimum, Ignite must be able to get the mapped jobs, process the results from all remote jobs, and reduce all the results - please see ComputeTask.map(), ComputeTask.result() and ComputeTask.reduce(). So you must be able to instantiate the task on the client node, and that is why the task classes have to be available there.
I think that after you configure URI deployment on the client nodes your code will work fine.
Please post a comment here if you need any additional info.
Thanks!
UPDATE Jan 18, 2016
This is an update in response to the question update.
Please note that the task in question returns null from the map() method, which is illegal. You can refer to org.apache.ignite.examples.computegrid.ComputeTaskMapExample in the binary release or directly via https://git-wip-us.apache.org/repos/asf?p=ignite.git;a=blob;f=examples/src/main/java/org/apache/ignite/examples/computegrid/ComputeTaskMapExample.java;h=3de5293a814e527b57e3984f6d3ab96bb1b62daf;hb=HEAD
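For illustration, a minimal hedged sketch of a map() implementation that returns a non-null job mapping (this is not the linked example; it simply hands one trivial job to each node in the subgrid):
// Hedged sketch: map() must return a non-null, non-empty job-to-node mapping.
@Override
public Map<? extends ComputeJob, ClusterNode> map(List<ClusterNode> subgrid, String arg) throws IgniteException {
    Map<ComputeJob, ClusterNode> jobs = new HashMap<>();
    for (ClusterNode node : subgrid) {
        jobs.put(new ComputeJobAdapter() {
            @Override
            public Object execute() {
                System.out.println("Computing Job in SimpleTask");
                return 1; // per-job result, combined later in reduce()
            }
        }, node);
    }
    return jobs; // never return null here
}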