Handle partially failed batch with spring cloud streams - error-handling

When using spring-cloud-stream for a streaming application (functional style) with batches, is there a way to retry/DLQ a failed message and still process (stream) the non-failing records?
For example: the function receives a batch of 10 records and attempts to convert them to another type, returning the new records for producing. Say record 8 fails during mapping; is it possible to complete the producing of records 0-7 and then retry/DLQ record 8?
Throwing BatchListenerFailedException with the index does not cause the prior messages to be sent.
spring-kafka version: 2.8.0
code:
@Override
public List<Message<Context>> apply(Message<List<Context>> listMessage) {
    List<Message<Context>> output = new ArrayList<>();
    IntStream.range(0, listMessage.getPayload().size()).forEach(index -> {
        try {
            Record<Context> record = Record.fromBatch(listMessage, index);
            output.add(MessageBuilder.withPayload(record.getValue()).build());
            if (index == listMessage.getPayload().size() - 1) {
                throw new TransientError("offset " + record.getOffset() + " failed", new RuntimeException());
            }
        } catch (Exception e) {
            throw new BatchListenerFailedException("Trigger retry", e, index);
        }
    });
    return output;
}
customizer:
private CommonErrorHandler getCommonErrorHandler(String group) {
    DefaultErrorHandler errorHandler = new DefaultErrorHandler(getRecoverer(group), getBackOff());
    errorHandler.setLogLevel(KafkaException.Level.DEBUG);
    errorHandler.setAckAfterHandle(true);
    errorHandler.setClassifications(Map.of(
            PermanentError.class, false,
            TransientError.class, true,
            SerializationException.class, properties.isRetryDesErrors()),
            false);
    errorHandler.setRetryListeners(getRetryListener());
    return errorHandler;
}

private ConsumerRecordRecoverer getRecoverer(String group) {
    KafkaOperations<?, ?> operations = new KafkaTemplate<>(new DefaultKafkaProducerFactory<>(producerProperties()));
    DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(
            operations, getDestinationResolver(group));
    recoverer.setHeadersFunction(this::buildAdditionalHeaders);
    return recoverer;
}
yaml:
spring:
  cloud:
    function:
      definition: batchFunc
    stream:
      default-binder: kafka-string-avro
      binders:
        kafka-string-avro:
          type: kafka
          environment.spring.cloud.stream.kafka.binder.consumerProperties:
            key.deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
            value.deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
            spring.deserializer.key.delegate.class: org.apache.kafka.common.serialization.StringDeserializer
            spring.deserializer.value.delegate.class: io.confluent.kafka.serializers.KafkaAvroDeserializer
            schema.registry.url: ${SCHEMA_REGISTRY_URL:http://localhost:8081}
            value.subject.name.strategy: io.confluent.kafka.serializers.subject.TopicNameStrategy
            specific.avro.reader: true
          environment.spring.cloud.stream.kafka.binder.producerProperties:
            key.serializer: org.apache.kafka.common.serialization.StringSerializer
            value.serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
            schema.registry.url: ${SCHEMA_REGISTRY_URL:http://localhost:8081}
            value.subject.name.strategy: io.confluent.kafka.serializers.subject.TopicNameStrategy
      bindings:
        batchFunc-in-0:
          binder: kafka-string-avro
          destination: records.publisher.output
          group: function2-in-group
          contentType: application/*+avro
          consumer:
            batchMode: true
        function2-out-0:
          binder: kafka-string-avro
          destination: reporter.output
          producer:
            useNativeEncoding: true
      kafka:
        binder:
          brokers: ${KAFKA_BROKER_ADDRESS:localhost:9092}
          autoCreateTopics: ${KAFKA_AUTOCREATE_TOPICS:false}
        default:
          consumer:
            startOffset: ${START_OFFSET:latest}
            enableDlq: false
      default:
        consumer:
          maxAttempts: 1
          defaultRetryable: false
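For context, the hand-rolled fallback I'd like to avoid would look roughly like this: a per-record try/catch that keeps the good records and publishes the failed one to a DLQ topic myself. This is only a sketch; the injected StreamBridge and the "batchFunc-dlq" binding name are assumptions of mine, not part of the setup above.
// Sketch of a manual per-record workaround (not the configuration above):
// good records are still produced, the failing record is sent to a DLQ
// binding via an injected StreamBridge instead of the error handler.
@Override
public List<Message<Context>> apply(Message<List<Context>> listMessage) {
    List<Message<Context>> output = new ArrayList<>();
    for (int index = 0; index < listMessage.getPayload().size(); index++) {
        try {
            Record<Context> record = Record.fromBatch(listMessage, index);
            output.add(MessageBuilder.withPayload(record.getValue()).build());
        } catch (Exception e) {
            // "batchFunc-dlq" is a hypothetical output binding configured like the others
            streamBridge.send("batchFunc-dlq", listMessage.getPayload().get(index));
        }
    }
    return output;
}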

Related

Rabbit saving extra bytes for AMQP 1.0 messages

I have an environment where some AMQP 1.0 and some AMQP 0.9.1 clients need to write to/read from a RabbitMQ queue. I enabled the AMQP 1.0 RabbitMQ plugin and it is working, but I get extra bytes in the body of each AMQP 1.0 message.
I'm sending messages over AMQP 1.0 using rhea (TypeScript):
const connection: Connection = new Connection (
{
host: 'localhost',
port: 5672,
id: 'my_id',
reconnect: true
}
);
const senderName = "sender01";
const senderOptions: SenderOptions = {
name: senderName,
target: {
address: "target.queue"
},
onError: (context: EventContext) => {},
onSessionError: (context: EventContext) => {}
};
await connection.open();
const sender: Sender = await connection.createSender(senderOptions);
sender.send({
body: JSON.stringify({"one": "two", "three": "four"}),
content_encoding: 'UTF-8',
content_type: 'application/json'
});
console.log("sent");
await sender.close();
await connection.close();
console.log("connection closed");
This example works but this is what is stored in the queue:
The base64 encoded message is AFN3oRx7Im9uZSI6InR3byIsInRocmVlIjoiZm91ciJ9, which after decoding becomes:
Sw{"one":"two","three":"four"}
There is an additional Sw which I didn't send.
I tried setting up a Java client with the official RabbitMQ library (which talks AMQP 0.9.1) to see if those extra bytes were sent to clients:
ConnectionFactory factory = new ConnectionFactory();
factory.setHost("localhost");
Connection connection = factory.newConnection();
Channel channel = connection.createChannel();
channel.basicConsume(
"target.queue",
true,
(consumerTag, delivery) -> {
String message = new String(delivery.getBody(), "UTF-8");
System.out.println(" [x] Received '" + message + "'");
},
ignored -> {}
);
This is the output:
[x] Received ' Sw�{"one":"two","three":"four"}'
The weird thing is that if I try consuming the exact same message with an AMQP 1.0 client, those extra bytes don't appear in the received message body; the extra bytes appear only when publishing with AMQP 1.0 and subscribing with AMQP 0.9.1.
Why is that? Is there any way to avoid extra bytes when using both AMQP versions?
UPDATE
I also tried with SwiftMQ:
int nMsgs = 100;
int qos = QoS.AT_MOST_ONCE;
AMQPContext ctx = new AMQPContext(AMQPContext.CLIENT);
String host = "localhost";
int port = 5672;
String queue = "target.queue";
try {
Connection connection = new Connection(ctx, host, port, false);
connection.setContainerId(UUID.randomUUID().toString());
connection.setIdleTimeout(-1);
connection.setMaxFrameSize(1024 * 4);
connection.setExceptionListener(Exception::printStackTrace);
connection.connect();
{
Session session = connection.createSession(10, 10);
Producer p = session.createProducer(queue, qos);
for (int i = 0; i < nMsgs; i++) {
AMQPMessage msg = new AMQPMessage();
System.out.println("Sending " + i);
msg.setAmqpValue(new AmqpValue(new AMQPString("{\"one\":\"two\",\"three\":\"four\"}")));
p.send(msg);
}
p.close();
session.close();
}
connection.close();
} catch (Exception e) {
e.printStackTrace();
}
The issue is still there, but the first bytes have changed; now I get:
[x] Received '□�□□□□□□□w�{"one":"two","three":"four"}'
Please see this response, which clarifies how to encode the payload and avoid these extra bytes:
if an AMQP 1.0 client sends a message to a 0-9-1 client and encodes its payload as binary in a "data section" (i.e. not in an amqp-sequence section, not in an amqp-value section), the 0-9-1 client will get the complete payload without any extra bytes
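In terms of the SwiftMQ producer from the update, that would mean putting the payload into a data section instead of an amqp-value section, roughly like this (a sketch only; I'm assuming the addData(Data) setter on AMQPMessage and the com.swiftmq.amqp.v100.generated.messaging.message_format.Data class):
// Body goes into a data section (raw bytes), so an AMQP 0-9-1 consumer
// receives only the payload, without the amqp-value type markers.
AMQPMessage msg = new AMQPMessage();
byte[] payload = "{\"one\":\"two\",\"three\":\"four\"}".getBytes(java.nio.charset.StandardCharsets.UTF_8);
msg.addData(new Data(payload));   // instead of msg.setAmqpValue(new AmqpValue(...))
p.send(msg);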
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.

[vertx.redis.client] No handler waiting for message

Version
vert.x core:3.5.0
vert.x redis client:3.5.0
Context
2018-06-02 17:40:55.981 ERROR 4933 --- [ntloop-thread-2] io.vertx.redis.impl.RedisConnection : No handler waiting for message: 14751915
2018-06-02 17:41:10.937 ERROR 4933 --- [ntloop-thread-2] io.vertx.redis.impl.RedisConnection : No handler waiting for message: false
2018-06-02 17:41:10.947 ERROR 4933 --- [ntloop-thread-2] io.vertx.redis.impl.RedisConnection : No handler waiting for message: false
2018-06-02 17:41:20.937 ERROR 4933 --- [ntloop-thread-2] io.vertx.redis.impl.RedisConnection : No handler waiting for message: true
2018-06-02 17:41:30.937 ERROR 4933 --- [ntloop-thread-2] io.vertx.redis.impl.RedisConnection : No handler waiting for message: true
2018-06-02 17:41:35.927 ERROR 4933 --- [ntloop-thread-2] io.vertx.redis.impl.RedisConnection : No handler waiting for message: false
2018-06-02 17:41:40.937 ERROR 4933 --- [ntloop-thread-2] io.vertx.redis.impl.RedisConnection : No handler waiting for message: true
2018-06-02 17:41:50.948 ERROR 4933 --- [ntloop-thread-2] io.vertx.redis.impl.RedisConnection : No handler waiting for message: true
After looking at the code of io.vertx.redis.impl.RedisConnection, I found the reason:
When the server starts it creates a Redis connection, and everything runs fine.
After a long time (e.g. days) the connection state becomes DISCONNECTED. The Vert.x Redis client reconnects to the Redis server when the next command is sent:
void send(final Command command) {
  // start the handshake if not connected
  if (state.get() == State.DISCONNECTED) {
    connect();
  }
  // ...
}
connect() calls clearQueue(), which empties the queue of waiting commands.
handleReply() is then called when a reply is received from the Redis server over the new connection.
Note: the error log line appears near the bottom of the method below (third line from the bottom).
private void handleReply(Reply reply) {
final Command cmd = waiting.poll();
if (cmd != null) {
switch (reply.type()) {
case '-': // Error
cmd.handle(Future.failedFuture(reply.asType(String.class)));
return;
case '+': // Status
switch (cmd.responseTransform()) {
case ARRAY:
cmd.handle(Future.succeededFuture(new JsonArray().add(reply.asType(String.class))));
break;
default:
cmd.handle(Future.succeededFuture(reply.asType(cmd.returnType())));
break;
}
return;
case '$': // Bulk
switch (cmd.responseTransform()) {
case ARRAY:
cmd.handle(Future.succeededFuture(new JsonArray().add(reply.asType(String.class, cmd.encoding()))));
break;
case INFO:
String info = reply.asType(String.class, cmd.encoding());
if (info == null) {
cmd.handle(Future.succeededFuture(null));
} else {
String lines[] = info.split("\\r?\\n");
JsonObject value = new JsonObject();
JsonObject section = null;
for (String line : lines) {
if (line.length() == 0) {
// end of section
section = null;
continue;
}
if (line.charAt(0) == '#') {
// begin section
section = new JsonObject();
// create a sub key with the section name
value.put(line.substring(2).toLowerCase(), section);
} else {
// entry in section
int split = line.indexOf(':');
if (section == null) {
value.put(line.substring(0, split), line.substring(split + 1));
} else {
section.put(line.substring(0, split), line.substring(split + 1));
}
}
}
cmd.handle(Future.succeededFuture(value));
}
break;
default:
cmd.handle(Future.succeededFuture(reply.asType(cmd.returnType(), cmd.encoding())));
break;
}
return;
case '*': // Multi
switch (cmd.responseTransform()) {
case HASH:
cmd.handle(Future.succeededFuture(reply.asType(JsonObject.class, cmd.encoding())));
break;
default:
cmd.handle(Future.succeededFuture(reply.asType(JsonArray.class, cmd.encoding())));
break;
}
return;
case ':': // Integer
switch (cmd.responseTransform()) {
case ARRAY:
cmd.handle(Future.succeededFuture(new JsonArray().add(reply.asType(Long.class))));
break;
default:
cmd.handle(Future.succeededFuture(reply.asType(cmd.returnType())));
break;
}
return;
default:
cmd.handle(Future.failedFuture("Unknown message type"));
}
} else {
// **An error log appears here**
log.error("No handler waiting for message: " + reply.asType(String.class));
}
}
Questions:
Is this a bug or not?
If it is not a bug, the commands that were already posted are discarded when the client reconnects to the Redis server.
What is a good way to deal with this situation?
The problem has been solved. The cause was that the connection was being reused and never closed. The solution, for each session, is:
RedisClient redisClient = RedisClient.create(this.vertx, redisOptions);
// do something
redisClient.close(h -> {});
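For illustration, a slightly fuller sketch of that pattern against the Vert.x 3.5.x API (the host, port, and key here are just examples):
// Create a client, run the command(s) for this unit of work, then close it,
// so a stale DISCONNECTED connection is never reused later.
RedisOptions redisOptions = new RedisOptions().setHost("localhost").setPort(6379);
RedisClient redisClient = RedisClient.create(this.vertx, redisOptions);
redisClient.get("some-key", res -> {
  if (res.succeeded()) {
    System.out.println("value = " + res.result());
  } else {
    res.cause().printStackTrace();
  }
  redisClient.close(closed -> { /* nothing to do */ });
});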

webrtc onRemoteStream null

I am fairly new to WebRTC and am having trouble with its documentation. I cannot figure out why the joiner does not receive the stream from the initiator, since the messages in the console look quite normal to me. I also receive warnings about deprecated methods but am not sure what to correct.
html:
navigator.getUserMedia = navigator.getUserMedia ||
navigator.webkitGetUserMedia || navigator.mozGetUserMedia || navigator.mediaDevices.getUserMedia ||
navigator.msGetUserMedia;
var isInitiator = false
, isChannelReady = false
, isStarted = false;
//Send 'create' or 'join' message to singnalling server
console.log('Create or join room', room);
socket.emit('create or join', room);
//Handle 'created' message coming back from server:
//this peer is the initiator
socket.on('created', function (room){
console.log('Created room ' + room);
isInitiator = true;
var video = $('#sidebar').append("<video class='student' autoplay='true'></video>");
});
//Handle 'join' message coming back from server:
//another peer is joining the channel
socket.on('join', function (room){
console.log('Another peer made a request to join room ' + room);
console.log('This peer is the initiator of room ' + room + '!');
isChannelReady = true;
var video = $('#sidebar').append("<video class='student' autoplay='true'></video>");
});
//Handle 'joined' message coming back from server:
//this is the second peer joining the channel
socket.on('joined', function (room){
console.log('This peer has joined room ' + room);
isChannelReady = true;
var video = $('#sidebar').append("<video class='student' autoplay='true'></video>");
navigator.getUserMedia({ video: true, audio: true },
function (stream) {
$('#sidebar').children().last().attr('src', window.URL.createObjectURL(stream))
if(!isStarted && typeof stream!= 'undefined' && isChannelReady) {
createPeerConnectionInit(stream);
isStarted = true;
} else{
}
}, function (error) {
console.log('error'+error);
});
});
socket.on('message', function (message){
if (message === 'got user media') {
}
else if (message.type === 'offer') {
if(isChannelReady){
console.log('Im the initiator. Channel is ready!!!');
createPeerConnectionNotInit(message);
isStarted = true;
}
}
else if (message.type === 'answer' && isStarted) {
peer.addAnswerSDP(message);
console.log('addAnswerSDP:', message);
}
else if (message.type === 'candidate' && isStarted) {
console.log("im adding candidate!!!");
var candidate = new RTCIceCandidate({sdpMLineIndex:message.label,
candidate:message.candidate});
peer.addICE(candidate); // here the initiator adds the candidate to ICE
}
else if (message === 'bye' && isStarted) {
}
});
function createPeerConnectionInit(stream){
peer = RTCPeerConnection({
attachStream : stream,
onICE : function (candidate) {
if (candidate) {
sendMessage({
type: 'candidate',
label: candidate.sdpMLineIndex,
id: candidate.sdpMid,
candidate: candidate.candidate});
} else {
console.log('End of candidates.');
}
},
onRemoteStream : function (stream) {
console.log('onRemoteStream Init = yes');
document.getElementById('video').src = stream;
},
onOfferSDP : function(sdp) {
console.log('sdp ='+JSON.stringify(sdp));
sendMessage(sdp);
}
});
}
function createPeerConnectionNotInit(offer_sdp){
peer = RTCPeerConnection({
onICE : function (candidate) {
if (candidate) {
sendMessage({
type: 'candidate',
label: candidate.sdpMLineIndex,
id: candidate.sdpMid,
candidate: candidate.candidate});
} else {
console.log('End of candidates.');
}
},
onRemoteStream : function (stream) {
console.log('onRemoteStream Not Init = yes');
document.getElementById('video').src = URL.createObjectURL(stream);;
},
// see below two additions ↓
offerSDP : offer_sdp,
onAnswerSDP : function(sdp) {
sendMessage(sdp);
}
});
}
Console log from initiator view:
sdp ={"type":"offer","sdp":"v=0\r\no=mozilla...THIS_IS_SDPARTA-53.0.3 8347568228516099874 0 IN IP4 0.0.0.0\r\ns=-\r\nt=0 0\r\na=fingerprint:sha-256
Sending message: Object { type: "candidate", label: 0, id: "sdparta_0", candidate: "candidate:0 1 UDP 2122187007 2a02:5…" } boardWithD3.js.html:798:13
Many same messages following..
addAnswerSDP: Object { type: "answer", sdp: "v=0 o=mozilla...THIS_IS_SDPARTA-53.…" } boardWithD3.js.html:727:17
adding-ice candidate:0 1 UDP 2122187007 2a02:582:1096:a500:f03d:34da:c0a:75b0 50729 typ host RTCPeerConnection-v1.5.js:264:13
A couple similar messages following…..
Console log from joiner view:
offer_sdp:{"type":"offer","sdp":"v=0\r\no=mozilla...THIS_IS_SDPARTA-53.0.3 8347568228516099874 0 IN IP4 0.0.0.0\r\ns=-\r\nt=0 0\r\na=fingerprint:sha-256 ………
ice-servers [{"url": "stun:23.21.150.121" },
{"url": "stun:stun.services.mozilla.com" }]
sdp-constraints {
"OfferToReceiveAudio": true,
"OfferToReceiveVideo": true
} RTCPeerConnection-v1.5.js:123:5
offer-sdp v=0 o=mozilla...THIS_IS_SDPARTA-53.0.3 8347568228516099874 0 IN IP4 0.0.0.0
……
WebRTC interfaces with the “moz” prefix (mozRTCPeerConnection, mozRTCSessionDescription, mozRTCIceCandidate) have been deprecated. RTCPeerConnection-v1.5.js:79:15
RTCIceServer.url is deprecated! Use urls instead. RTCPeerConnection-v1.5.js:79
onaddstream is deprecated! Use peerConnection.ontrack instead. RTCPeerConnection-v1.5.js:101
onRemoteStream Not Init = yes boardWithD3.js.html:784:21
Sending message: Object { type: "answer", sdp: "v=0 o=mozilla...THIS_IS_SDPARTA-53.…" } boardWithD3.js.html:798:13……
adding-ice candidate:0 1 UDP 2122187007 2a02:582:1096:a500:f03d:34da:c0a:75b0 50006 typ host RTCPeerConnection-v1.5.js:264:13
**Uncaught TypeError: Cannot set property 'src' of null
at Object.onRemoteStream (boardWithD3.js.html?andora:785)
at RTCPeerConnection.peer.onaddstream (RTCPeerConnection-v1.5.js:110)
**
adding-ice candidate:3 1 UDP 2122252543 2a02:582:1096:a500:7de6:9361:ecf4:476a 50007 typ host RTCPeerConnection-v1.5.js:264:13
followed by many similar messages…..
Sending message: Object { type: "candidate", label: 0, id: "sdparta_0", candidate: "candidate:0 1 UDP 2122187007 2a02:5…" } boardWithD3.js.html:798:13
followed by many similar messages…..
Thanks in advance
As per the log below:
Uncaught TypeError: Cannot set property 'src' of null
at Object.onRemoteStream (boardWithD3.js.html?andora:785)
at RTCPeerConnection.peer.onaddstream (RTCPeerConnection-v1.5.js:110)
The issue is in the method below, with the video element:
onRemoteStream : function (stream) {
console.log('onRemoteStream stream', stream);
//document.getElementById('video').src = stream; // Issue could be here, add a video element with id="video" in html body.
document.getElementById('video').srcObject = stream; //As per latest API use srcObject instead of src
}
As @jib mentioned, you are using old APIs.
See the samples and demos for the latest APIs.

How do we know when webRTC already finished collecting ICE candidates

I am using Kurento Utils for the WebRTC connection with Kurento Media Server (ver 5.x).
Inside the kurento-utils-js library, during init, the simplified code looks like this:
if (!this.pc) {
this.pc = new RTCPeerConnection(server, options);
}
var ended = false;
pc.onicecandidate = function(e) {
// candidate exists in e.candidate
if (e.candidate) {
ended = false;
return;
}
if (ended) {
return;
}
var offerSdp = pc.localDescription.sdp;
console.log('ICE negotiation completed');
self.onsdpoffer(offerSdp, self);
ended = true;
};
My question: it seems that it waits until onicecandidate passes a null value, which signifies that the gathering process has ended, and only then continues with creating the SDP offer, but I couldn't find this behaviour in the WebRTC spec.
My next question is: how else can we know that the process of finding ICE candidates has ended?
On one of the PCs in my office the code never reaches console.log('ICE negotiation completed'), because the null value is never passed.
You could check the iceGatheringState property (run in chrome):
var config = {'iceServers': [{ url: 'stun:stun.l.google.com:19302' }] };
var pc = new webkitRTCPeerConnection(config);
pc.onicecandidate = function(event) {
if (event && event.target && event.target.iceGatheringState === 'complete') {
alert('done gathering candidates - got iceGatheringState complete');
} else if (event && event.candidate == null) {
alert('done gathering candidates - got null candidate');
} else {
console.log(event.target.iceGatheringState, event);
}
};
pc.createOffer(function(offer) {
pc.setLocalDescription(offer);
}, function(err) {
console.log(err);
}, {'mandatory': {'OfferToReceiveAudio': true}});
window.pc = pc;
http://www.w3.org/TR/webrtc/
4.3.1
" If the intent of the ICE Agent is to notify the script that:
[...]
The gathering process is done.
Set connection's ice gathering state to completed and let newCandidate be null."
So, you can either check for the ice gathering state against "completed" (in real life, this is not very reliable), or wait for a null candidate (super reliable).

Change HTTP URL in Worklight adapter

I need to create an HTTP adapter for Worklight, but the URL must be provided programmatically via a parameter.
1) I was able to pass the user/password but not the URL. Is there a way to do that?
I also tried to create my own Java adapter to call the REST API. It works when I test the adapter, but it seems my response is not in the format expected by Worklight. I got this error:
2) BAD_PARAMETER_EXPECTED_DOCUMENT_OR_ARRAY_OF_DOCUMENT.
My Java adapter returns a JSONArtifact (JSONObject), but it seems that Worklight wants it to be embedded in another JSONObject such as { "array": {...} }.
Is there a way to convert a JSONObject to the format expected by Worklight?
import org.apache.wink.json4j.JSON;
import org.apache.wink.json4j.JSONArtifact;
import org.apache.wink.json4j.JSONException;
private Header headerUserAgent = new Header("User-Agent", "Mozilla");
private Header headerAccept = new Header("Accept", "application/json");
private String hostName;
private String baseURL;
protected MyHttpClient(String userName, String userPassword, String hostName, String baseURL ) {
super();
Credentials defaultcreds = new UsernamePasswordCredentials(userName,
userPassword);
this.getState().setCredentials(AuthScope.ANY, defaultcreds);
this.hostName = hostName;
this.baseURL = baseURL;
}
private GetMethod getGetMethod(String url) throws URIException {
GetMethod httpMethod = new GetMethod(new HttpsURL("https://"+hostName+baseURL+url).getEscapedURI());
addCommonHeaders(httpMethod);
return httpMethod;
}
private JSONArtifact getResponseAsJSONObject(InputStream inputStream) throws IOException {
InputStreamReader reader = new InputStreamReader(inputStream);
try {
JSONArtifact json = JSON.parse(reader);
return json;
} catch (NullPointerException e) {
// TODO Auto-generated catch block
e.printStackTrace();
} catch (JSONException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
return null;
}
Adapter:
function getResponse(user,password) {
var client = new com.itdove.mypackage.MyHttpClient(user,password,"myurl","mybaseurl");
return {
array : client.executeGet("mypath")
};
}
It works with the following, but this solution doesn't let me provide the service URL as a parameter:
function getResponseAdapters(path, username, password) {
var input = {
method : 'get',
returnedContentType : 'json',
headers: {
'User-Agent':'Mozilla',
'Authorization': 'Basic '+Base64.encode(username+':'+password),
} ,
path : '/resources/' + path
};
return WL.Server.invokeHttp(input);
}
function getResponse(username, password) {
return getMySCAWSAdapters(path, username, password);
}
Collection
vAPPArrayAdapterOptions = {
name: 'myResponseAdapter',
replace: '',
remove: '',
add: '',
load: {
procedure: 'getResponse',
params: ["user","password"],
key: 'array'
},
accept: function (data) {
return (data.status === 200);
}
},
...
vAPPArray = wlJsonStore.initCollection(
"vAPPArray",
vAPPArraySearchFields,
{adapter: vAPPArrayAdapterOptions,
onSuccess: initCollectionSuccessCallback,
onFailure: initCollectionFailureCallback,
load:true});
Many Thanks
Dominique
Found the solution:
First, I was using the Apache Wink JSONArtifact instead of com.ibm.json.java.JSONArtifact!
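Roughly, the parsing helper from the question becomes the following (a sketch; I'm assuming com.ibm.json.java.JSON.parse accepts a Reader the same way the Wink version does):
// Same helper as in the question, but built on com.ibm.json.java.JSON /
// JSONArtifact so the adapter returns the type Worklight expects.
private JSONArtifact getResponseAsJSONObject(InputStream inputStream) {
    try {
        InputStreamReader reader = new InputStreamReader(inputStream, "UTF-8");
        return com.ibm.json.java.JSON.parse(reader);
    } catch (IOException e) {
        e.printStackTrace();
        return null;
    }
}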
Secondly, I modified my collector implementation method as follows to add the status (not sure whether it is needed or not):
function getResponse(user,password,hostname) {
var client = new com.itdove.mypackage.IWDHttpClient(user,password,hostname,"mypath");
return {
array :client.executeGet("mymethod"),
statusCode: client.getStatusCode(),
statusReason: client.getStatusReason()
};
}
In myCollector.js I set the user, password, and hostname as follows before calling initCollection:
params = [ settings.json.user, settings.json.password, settings.json.hostname ];
myAdapterOptions.load["params"] = params;