playsms - kannel - smsc: Able to receive but not to send SMS - sms-gateway

I use playSMS and Kannel. I can receive SMS but I cannot send them.
When I try to send an SMS, the browser shows 0: Accepted for delivery.
Output of sudo tail -f /var/log/kannel/smsbox.log:
2015-07-20 15:20:12 [2711] [3] INFO: smsbox: Got HTTP request from <127...*>
2015-07-20 15:20:12 [2711] [3] INFO: sendsms used by
2015-07-20 15:20:12 [2711] [3] INFO: sendsms sender:***> (127...) to:<2*******> msg:
2015-07-20 15:20:12 [2711] [3] DEBUG: Stored UUID f5a87ab8-bd95-413b-a9a6-eb37e9b454da
2015-07-20 15:20:12 [2711] [3] DEBUG: message length 14, sending 1 messages
2015-07-20 15:20:12 [2711] [3] DEBUG: Status: 202 Answer:
2015-07-20 15:20:12 [2711] [3] DEBUG: Delayed reply - wait for bearerbox
2015-07-20 15:20:12 [2711] [0] DEBUG: Got ACK (0) of f5a87ab8-bd95-413b-a9a6-eb37e9b454da
2015-07-20 15:20:12 [2711] [0] DEBUG: HTTP: Destroying HTTPClient area 0x7f827c000a90.
2015-07-20 15:20:12 [2711] [0] DEBUG: HTTP: Destroying HTTPClient for 127...*)'.
2015-07-20 15:31:13 [2711] [2] DEBUG: HTTP: Creating HTTPClient for127...*)'.
2015-07-20 15:31:13 [2711] [2] DEBUG: HTTP: Created HTTPClient area 0x7f827c000a90.
In playSMS the SMS stays pending and is never sent.
Do you have any idea what my problem is?

Related

AWS SDK for Java V2: PutObject corrupts uploaded file when checksumAlgorithm is specified

I'm trying to use the AWS SDK for Java v2 to upload a file to S3 and have the upload's integrity checked using a SHA-256 trailing checksum. My code uploads the file fine when the checksumAlgorithm(ChecksumAlgorithm.SHA256) property of PutObjectRequest is not present, but when it is included, the uploaded file has additional characters at its start and is truncated at the end.
Here's the code I'm using:
byte[] smallJsonBytes = getObjectFile("UploadTest\\smallFile.txt");
PutObjectRequest putOb = PutObjectRequest.builder()
        .bucket(credsProvider.getS3Bucketname())
        .key(s3Key)
        .checksumAlgorithm(ChecksumAlgorithm.SHA256) // removing this line makes the upload work
        .build();
PutObjectResponse response = this.myS3Client.putObject(putOb, RequestBody.fromBytes(smallJsonBytes));
I've tried specifying the contentLength directly on the PutObjectRequest (via .contentLength(long length), as per the documentation), and I've also tried converting smallJsonBytes to a ByteArrayInputStream and passing the contentLength to RequestBody.fromInputStream; both give the same result.
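For reference, those two attempts looked roughly like this (a sketch using the same fields as above; it also needs java.io.ByteArrayInputStream and the same AWS SDK v2 imports as the snippet above):

// Variant 1: contentLength set directly on the PutObjectRequest.
PutObjectRequest putWithLength = PutObjectRequest.builder()
        .bucket(credsProvider.getS3Bucketname())
        .key(s3Key)
        .checksumAlgorithm(ChecksumAlgorithm.SHA256)
        .contentLength((long) smallJsonBytes.length)
        .build();
this.myS3Client.putObject(putWithLength, RequestBody.fromBytes(smallJsonBytes));

// Variant 2: wrap the bytes in a stream and pass the length to RequestBody.fromInputStream.
ByteArrayInputStream in = new ByteArrayInputStream(smallJsonBytes);
this.myS3Client.putObject(putWithLength, RequestBody.fromInputStream(in, smallJsonBytes.length));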
Below is the debug output when the checksumAlgorithm property is included. You can see the AWS SDK generating a trailing checksum as expected, but, as noted already, the uploaded file is corrupted: the string "14" appears in the wire output and at the beginning of the uploaded file, which is also truncated at the end. Clearly there is a problem with how the file's content length is evaluated.
14:56:56.150 [main] DEBUG org.apache.http.wire - http-outgoing-0 >> "x-amz-content-sha256: STREAMING-UNSIGNED-PAYLOAD-TRAILER[\r][\n]"
14:56:56.150 [main] DEBUG org.apache.http.wire - http-outgoing-0 >> "X-Amz-Date: 20230216T215655Z[\r][\n]"
14:56:56.150 [main] DEBUG org.apache.http.wire - http-outgoing-0 >> "x-amz-decoded-content-length: 20[\r][\n]"
14:56:56.150 [main] DEBUG org.apache.http.wire - http-outgoing-0 >> "x-amz-sdk-checksum-algorithm: SHA256[\r][\n]"
14:56:56.150 [main] DEBUG org.apache.http.wire - http-outgoing-0 >> "x-amz-trailer: x-amz-checksum-sha256[\r][\n]"
14:56:56.150 [main] DEBUG org.apache.http.wire - http-outgoing-0 >> "Content-Length: 99[\r][\n]"
14:56:56.150 [main] DEBUG org.apache.http.wire - http-outgoing-0 >> "Connection: Keep-Alive[\r][\n]"
14:56:56.150 [main] DEBUG org.apache.http.wire - http-outgoing-0 >> "[\r][\n]"
14:56:56.206 [main] DEBUG org.apache.http.wire - http-outgoing-0 << "HTTP/1.1 100 Continue[\r][\n]"
14:56:56.206 [main] DEBUG org.apache.http.wire - http-outgoing-0 << "[\r][\n]"
14:56:56.211 [main] DEBUG org.apache.http.headers - http-outgoing-0 << HTTP/1.1 100 Continue
14:56:56.211 [main] DEBUG software.amazon.awssdk.core.internal.io.SdkLengthAwareInputStream - Specified InputStream length of 20 has been reached. Returning EOF.
14:56:56.213 [main] DEBUG software.amazon.awssdk.core.internal.io.SdkLengthAwareInputStream - Specified InputStream length of 20 has been reached. Returning EOF.
14:56:56.216 [main] DEBUG org.apache.http.wire - http-outgoing-0 >> "14[\r][\n]"
14:56:56.217 [main] DEBUG org.apache.http.wire - http-outgoing-0 >> "Nothing much in here[\r][\n]"
14:56:56.217 [main] DEBUG org.apache.http.wire - http-outgoing-0 >> "0[\r][\n]"
14:56:56.217 [main] DEBUG org.apache.http.wire - http-outgoing-0 >> "x-amz-checksum-sha256:BthkalBn8SFAnEAFIWr2vtfO0h+TEdX/vOH+iZqKJ4k=[\r][\n]"
14:56:56.217 [main] DEBUG org.apache.http.wire - http-outgoing-0 >> "[\r][\n]"
14:56:56.389 [main] DEBUG org.apache.http.wire - http-outgoing-0 << "HTTP/1.1 200 OK[\r][\n]"

Duplicate headers are added at runtime in Rest assured

I am trying to send a request with one header (authorization), but when I check the logs I see the same header added twice to the request, which causes my tests to fail.
Below is the POST method I am using:
public static Response doPost(String basePath, Object payload, String header) {
    return reqSpec.given().header("X-HSBC-E2E-Trust-Token", header).config(RestAssuredConfig.config())
            .body(payload.toString()).log().all()
            .post(basePath).then().extract().response();
}
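The call site looks roughly like this (a sketch; rolesBasePath, rolePayload and trustToken are placeholders for the actual path, payload object and token):

Response response = doPost(rolesBasePath, rolePayload, trustToken);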
Log:
Request URI: https:Url
Proxy: <none>
Request params: <none>
Query params: <none>
Form params: <none>
Path params: <none>
Headers:E2E-Trust-Token=0JDX09VRIMl9TRVJWRVJfREVWIn0.eyJzaXQiOiJhZDp1c3I6ZW1wbG95ZWVJZCIsInN1YiI6IkdCLVNWQy1GV0tUQVBJRlAiLCJhbHQiOlt7InNpdCI6ImVtYWlsIiwic3ViIjoiR0ItU1ZDLUZXS1RBUElGUEBOb3RSZWNlaXZpbmdNYWlmhzYmMuY29tIn0seyJzaXQiOiJuYW1lIiwic3ViIjoiR0ItU1ZDLUZXS1RBUElGUCJ9LHsic2l0IjoibG9naW5JZCIsInN1YiI6IkdCLVNWQy1GV0tUQVBJRlAifV0sImdycCI6WyJDTj1JbmZvZGlyLUFQSUF1dG9GV0stUHJvZFN W5kYXJkLE9VPUlUSURBUEksT1U9QXBwbGljYXRpb25zLE9VPUdyb3VwcyxEQz1JbmZvRGlyLERDPVByb2QsREM9SFNCQyIsIkNOPUluZm9kaXItQVBJQXV0b0ZXSy1Qcm9kQWRtaW4sT1U9SVRJREFQSSxPVT1BcHBsaWNhdGlvbnM1U9R3JvdXBzLERDPUluZm9EaXIsREM9UHJvZCxEQz1IU0JDIl0sInNjb3BlIjoiSVRJREFQSSIsImp0aSI6IjBlMGQ0YzM5LThlMjQtNDI5MC1iZGExLTE0MzllOTRlMzcxYyIsImlzcyI6ImdibDIwMTA5MDk2LmhjLmNsb3VkLnVmhzYmMiLCJpYXQiOjE2NTc1MzE2NTEsImV4cCI6MTY1NzUzMjI1MSwiYXVkIjoiR0ItU1ZDLUFQSUZXREVWQEhCRVUiLCJ1c2VyX25hbWUiOiJHQi1TVkMtRldLVEFQSUZQIn0.HgxR7j7fbl5JRQxTFX0Z7OJLk_11Vc4fUFPEZ9E
Accept=*/*
E2E-Trust-Token=0JDX09BVVRIMl9TRVJWRVJfREVWIn0.eyJzaXQiOiJhZDp1c3I6ZW1wbG95ZWVJZCIsInN1YiI6IkdCLVNWQy1GV0tUQVBJRlAiLCJhbHQiOlt7InNpdCI6ImVtYWlsIiwic3ViIjoiR0ItU1ZDLUZXS1RBUElGUEBOb3RSZWNlaXZmdNYWlsLmhzYmMuY29tIn0seyJzaXQiOiJuYW1lIiwic3ViIjoiR0ItU1ZDLUZXS1RBUElGUCJ9LHsic2l0IjoibG9naW5JZCIsInN1YiI6IkdCLVNWQy1GV0tUQVBJRlAifV0sImdycCI6WyJDTj1JbmZvZGlyLUFQSUF1dG9GV0sHJvZFN0YW5kYXJkLE9VPUlUSURBUEksT1U9QXBwbGljYXRpb25zLE9VPUdyb3VwcyxEQz1JbmZvRGlyLERDPVByb2QsREM9SFNCQyIsIkNOPUluZm9kaXItQVBJQXV0b0ZXSy1Qcm9kQWRtaW4sT1U9SVRJREFQSSxPVT1BcHBsaWN
3VkLnVrLmhzYmMiLCJpYXQiOjE2NTc1MzE2NTEsImV4cCI6MTY1NzUzMjI1MSwiYXVkIjoiR0ItU1ZDLUFQSUZXREVWQEhCRVUiLCJ1c2VyX25hbWUiOiJHQi1TVkMtRldLVEFQSUZQIn0.HgxR7j7fbl5JRQxTFX0Z7OJLk_11Vc4FPEZ9Ec2xVXmUxeE6M3f8wBe4HOrQDJw_gWm_Kvf_hepsvAu0S0ACRQ3P_4i5yf2z0FXtBoUd3v9AvZcB299PZGJhNZ4HxUEP5SdzjQWOuZgdHAFg6GJdgsYBLTuCXQqy-kdqgPczgx7bgKTBfqrWH2I4qH1ZSE2OlksvGPj0Ia1Ts
Content-Type=application/json; charset=UTF-8
Cookies: <none>
Multiparts: <none>
Body:
{
"name": "TestRole11072733",
"requiresApproval": true,
"signatures": [
1091
]
}
10:27:36.574 [main] DEBUG org.apache.http.impl.conn.BasicClientConnectionManager - Get connection for route {s}->https:URL
10:27:36.574 [main] DEBUG org.apache.http.impl.conn.DefaultClientConnectionOperator - Connecting to Endpoint
10:27:37.001 [main] DEBUG org.apache.http.client.protocol.RequestAddCookies - CookieSpec selected: ignoreCookies
10:27:37.001 [main] DEBUG org.apache.http.client.protocol.RequestAuthCache - Auth cache not set in the context
10:27:37.001 [main] DEBUG org.apache.http.client.protocol.RequestProxyAuthentication - Proxy auth state: UNCHALLENGED
10:27:37.001 [main] DEBUG org.apache.http.impl.client.DefaultHttpClient - Attempt 1 to execute request
10:27:37.001 [main] DEBUG org.apache.http.impl.conn.DefaultClientConnection - Sending request: POST Endpoint

Sleuth 3.0.1 + Spring Cloud Gateway = traceids do not correlate on request/response

I have a new setup with Sleuth 3.0.1, where spring.sleuth.reactor.instrumentation-type defaults to manual.
Therefore I wrapped all the places where I need tracing in WebFluxSleuthOperators.withSpanInScope(tracer, currentTraceContext, exchange, () -> {...}). This generally puts a span in scope, but the trace ID is always different before and after the response is received:
INFO [gateway,2b601780f2242aa9,2b601780f2242aa9] 9 --- [or-http-epoll-1] ...LogFilter : Request - Username: ..., method: GET, uri: ...
INFO [gateway,,] 9 --- [or-http-epoll-1] ...ValidationFilter : ... Validation passed
INFO [gateway,6a396715ff0979a6,6a396715ff0979a6] 9 --- [or-http-epoll-4] ...LogFilter : Request - Username: ..., method: GET, uri: ...
INFO [gateway,,] 9 --- [or-http-epoll-4] ...ValidationFilter : ... Validation passed
INFO [gateway,020c11ab0c37f8ae,020c11ab0c37f8ae] 9 --- [or-http-epoll-2] ...LogFilter : Request - Username: ..., method: GET, uri: ...
INFO [gateway,,] 9 --- [or-http-epoll-2] ...ValidationFilter : Validation passed
INFO [gateway,189a49553566f771,f51aaaf68e5738d0] 9 --- [or-http-epoll-1] ...LogFilter : Response - Status: 200 OK
INFO [gateway,92f25dcff9833079,87db5f7b1cbefedb] 9 --- [or-http-epoll-2] ...LogFilter : Response - Status: 200 OK
INFO [gateway,d8b133f2200e7808,1743df4b5ad37d07] 9 --- [or-http-epoll-1] ...LogFilter : Response - Status: 400 BAD_REQUEST
Digging deeper, it seems the span returned by exchange.getAttribute(Span.class.getName()) has a different trace ID in the post-routing phase.
Of course, changing instrumentation-type to decorate_on_last fixes the issue, but I would like to avoid unnecessary performance degradation.
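Roughly, the filters look like this (a simplified sketch, not the exact code; the logging calls mirror the LogFilter output above):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.cloud.gateway.filter.GatewayFilterChain;
import org.springframework.cloud.gateway.filter.GlobalFilter;
import org.springframework.cloud.sleuth.CurrentTraceContext;
import org.springframework.cloud.sleuth.Tracer;
import org.springframework.cloud.sleuth.instrument.web.WebFluxSleuthOperators;
import org.springframework.stereotype.Component;
import org.springframework.web.server.ServerWebExchange;
import reactor.core.publisher.Mono;

@Component
public class LogFilter implements GlobalFilter {

    private static final Logger log = LoggerFactory.getLogger(LogFilter.class);

    private final Tracer tracer;
    private final CurrentTraceContext currentTraceContext;

    public LogFilter(Tracer tracer, CurrentTraceContext currentTraceContext) {
        this.tracer = tracer;
        this.currentTraceContext = currentTraceContext;
    }

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
        // Log the request inside the span attached to this exchange.
        WebFluxSleuthOperators.withSpanInScope(tracer, currentTraceContext, exchange,
                () -> log.info("Request - method: {}, uri: {}",
                        exchange.getRequest().getMethod(), exchange.getRequest().getURI()));

        // Log the response after routing; this is the post-routing phase where
        // the trace id no longer matches the one logged above.
        return chain.filter(exchange).doFinally(signal ->
                WebFluxSleuthOperators.withSpanInScope(tracer, currentTraceContext, exchange,
                        () -> log.info("Response - Status: {}", exchange.getResponse().getStatusCode())));
    }
}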

Is Spring webclient non-blocking client?

I don't understand how the reactive WebClient works. Spring WebClient is said to be a non-blocking client, but it seems to wait for the onComplete() signal from the remote API before it can process the items emitted by that API.
I expected WebClient to be able to process each item as onNext() is fired by the target API.
I'm new to the Spring WebFlux world. From what I've read, it uses Netty as the default server, and Netty uses an event loop. So, to understand how it works, I created two small apps, a client and a server.
The server app just returns a simple Flux, delaying each item by one second.
The client app uses WebClient to call the remote API.
Server:
@GetMapping(ITEM_END_POINT_V1)
public Flux<Item> getAllItems() {
    return Flux.just(new Item(null, "Samsung TV", 399.99),
            new Item(null, "LG TV", 329.99),
            new Item(null, "Apple Watch", 349.99),
            new Item("ABC", "Beats HeadPhones", 149.99))
        .delayElements(Duration.ofSeconds(1)).log("Item : ");
}
Client:
WebClient webClient = WebClient.create("http://localhost:8080");

@GetMapping("/client/retrieve")
public Flux<Item> getAllItemsUsingRetrieve() {
    return webClient.get().uri("/v1/items")
            .retrieve()
            .bodyToFlux(Item.class).log();
}
Log from server:
2019-05-01 22:44:20.121 INFO 19644 --- [ctor-http-nio-2] Item : : onSubscribe(FluxConcatMap.ConcatMapImmediate)
2019-05-01 22:44:20.122 INFO 19644 --- [ctor-http-nio-2] Item : : request(unbounded)
2019-05-01 22:44:21.126 INFO 19644 --- [ parallel-1] Item : : onNext(Item(id=null, description=Samsung TV, price=399.99))
2019-05-01 22:44:22.129 INFO 19644 --- [ parallel-2] Item : : onNext(Item(id=null, description=LG TV, price=329.99))
2019-05-01 22:44:23.130 INFO 19644 --- [ parallel-3] Item : : onNext(Item(id=null, description=Apple Watch, price=349.99))
2019-05-01 22:44:24.131 INFO 19644 --- [ parallel-4] Item : : onNext(Item(id=ABC, description=Beats HeadPhones, price=149.99))
2019-05-01 22:44:24.132 INFO 19644 --- [ parallel-4] Item : : onComplete()
Log from client:
2019-05-01 22:44:19.934 INFO 24164 --- [ctor-http-nio-2] reactor.Flux.MonoFlatMapMany.1 : onSubscribe(MonoFlatMapMany.FlatMapManyMain)
2019-05-01 22:44:19.936 INFO 24164 --- [ctor-http-nio-2] reactor.Flux.MonoFlatMapMany.1 : request(unbounded)
2019-05-01 22:44:19.940 TRACE 24164 --- [ctor-http-nio-2] o.s.w.r.f.client.ExchangeFunctions : [7e73de5c] HTTP GET http://localhost:8080/v1/items, headers={}
2019-05-01 22:44:24.159 TRACE 24164 --- [ctor-http-nio-6] o.s.w.r.f.client.ExchangeFunctions : [7e73de5c] Response 200 OK, headers={masked}
2019-05-01 22:44:24.204 INFO 24164 --- [ctor-http-nio-6] reactor.Flux.MonoFlatMapMany.1 : onNext(Item(id=null, description=Samsung TV, price=399.99))
2019-05-01 22:44:24.204 INFO 24164 --- [ctor-http-nio-6] reactor.Flux.MonoFlatMapMany.1 : onNext(Item(id=null, description=LG TV, price=329.99))
2019-05-01 22:44:24.204 INFO 24164 --- [ctor-http-nio-6] reactor.Flux.MonoFlatMapMany.1 : onNext(Item(id=null, description=Apple Watch, price=349.99))
2019-05-01 22:44:24.204 INFO 24164 --- [ctor-http-nio-6] reactor.Flux.MonoFlatMapMany.1 : onNext(Item(id=ABC, description=Beats HeadPhones, price=149.99))
2019-05-01 22:44:24.205 INFO 24164 --- [ctor-http-nio-6] reactor.Flux.MonoFlatMapMany.1 : onComplete()
I expected the client not to have to wait 4 seconds before getting the actual result.
As you can see, the server starts emitting onNext() at 22:44:21.126, but the client only gets results at 22:44:24.159.
So I don't understand why WebClient is called a non-blocking client if it behaves this way.
The WebClient is non-blocking in the sense that the thread sending HTTP requests through the WebClient is not blocked by the IO operation.
When the response is available, Netty will notify one of the worker threads, and it will process the response according to the reactive stream operations that you defined.
In your example the server waits until all the elements in the Flux are available (4 seconds), serializes them to a JSON array, and sends it back in a single HTTP response.
The client waits for this single response, but none of its threads are blocked during this period.
If you want to achieve a streaming effect, you need to leverage a different content type or an underlying protocol like WebSockets.
Check out the following SO thread about the application/stream+json content type:
Spring WebFlux Flux behavior with non streaming application/json
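For illustration, here is a sketch of what a streaming variant of the server endpoint could look like (same Item type as above; application/stream+json is the streaming media type discussed in that thread):

import java.time.Duration;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import reactor.core.publisher.Flux;

// With a streaming media type, WebFlux writes each Item as it is emitted
// instead of buffering the whole Flux into a single JSON array.
@GetMapping(value = ITEM_END_POINT_V1, produces = MediaType.APPLICATION_STREAM_JSON_VALUE)
public Flux<Item> getAllItemsStreaming() {
    return Flux.just(new Item(null, "Samsung TV", 399.99),
            new Item(null, "LG TV", 329.99),
            new Item(null, "Apple Watch", 349.99),
            new Item("ABC", "Beats HeadPhones", 149.99))
        .delayElements(Duration.ofSeconds(1));
}

With that content type, bodyToFlux(Item.class) on the client side emits each Item as it arrives (roughly one second apart) instead of only after onComplete().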

JSON data sink to Apache Phoenix with Apache Flume Error

I want to sink JSON data into Apache Phoenix with Apache Flume. I followed an online guide, http://kalyanbigdatatraining.blogspot.com/2016/10/how-to-stream-json-data-into-phoenix.html, but hit the following error. How can I resolve it? Many thanks!
My environment:
hadoop-2.7.3
hbase-1.3.1
phoenix-4.12.0-HBase-1.3-bin
flume-1.7.0
In Flume, I added the Phoenix sink related jars to $FLUME_HOME/plugins.d/phoenix-sink/lib:
commons-io-2.4.jar
twill-api-0.8.0.jar
twill-discovery-api-0.8.0.jar
json-path-2.2.0.jar
twill-common-0.8.0.jar
twill-discovery-core-0.8.0.jar
phoenix-flume-4.12.0-HBase-1.3.jar
twill-core-0.8.0.jar
twill-zookeeper-0.8.0.jar
2017-11-11 14:49:54,786 (lifecycleSupervisor-1-1) [DEBUG - org.apache.phoenix.jdbc.PhoenixDriver$2.onRemoval(PhoenixDriver.java:159)] Expiring localhost:2181:/hbase because of EXPLICIT
2017-11-11 14:49:54,787 (lifecycleSupervisor-1-1) [INFO - org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.closeZooKeeperWatcher(ConnectionManager.java:1712)] Closing zookeeper sessionid=0x15fa8952cea00a6
2017-11-11 14:49:54,787 (lifecycleSupervisor-1-1) [DEBUG - org.apache.zookeeper.ZooKeeper.close(ZooKeeper.java:673)] Closing session: 0x15fa8952cea00a6
2017-11-11 14:49:54,787 (lifecycleSupervisor-1-1) [DEBUG - org.apache.zookeeper.ClientCnxn.close(ClientCnxn.java:1306)] Closing client for session: 0x15fa8952cea00a6
2017-11-11 14:49:54,789 (lifecycleSupervisor-1-1-SendThread(localhost:2181)) [DEBUG - org.apache.zookeeper.ClientCnxn$SendThread.readResponse(ClientCnxn.java:818)] Reading reply sessionid:0x15fa8952cea00a6, packet:: clientPath:null serverPath:null finished:false header:: 3,-11 replyHeader:: 3,2620,0 request:: null response:: null
2017-11-11 14:49:54,789 (lifecycleSupervisor-1-1) [DEBUG - org.apache.zookeeper.ClientCnxn.disconnect(ClientCnxn.java:1290)] Disconnecting client for session: 0x15fa8952cea00a6
2017-11-11 14:49:54,789 (lifecycleSupervisor-1-1-SendThread(localhost:2181)) [DEBUG - org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1086)] An exception was thrown while closing send thread for session 0x15fa8952cea00a6 : Unable to read additional data from server sessionid 0x15fa8952cea00a6, likely server has closed socket
2017-11-11 14:49:54,789 (lifecycleSupervisor-1-1) [INFO - org.apache.zookeeper.ZooKeeper.close(ZooKeeper.java:684)] Session: 0x15fa8952cea00a6 closed
2017-11-11 14:49:54,789 (lifecycleSupervisor-1-1-EventThread) [INFO - org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:512)] EventThread shut down
2017-11-11 14:49:54,790 (lifecycleSupervisor-1-1) [ERROR - org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:251)] Unable to start SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor#2d2052a0 counterGroup:{ name:null counters:{} } } - Exception follows.
java.lang.NoSuchMethodError:
org.apache.twill.zookeeper.ZKClientService.startAndWait()Lcom/google/common/util/concurrent/Service$State;
at org.apache.phoenix.transaction.TephraTransactionContext.setTransactionClient(TephraTransactionContext.java:147)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.initTxServiceClient(ConnectionQueryServicesImpl.java:401)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.openConnection(ConnectionQueryServicesImpl.java:415)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.access$500(ConnectionQueryServicesImpl.java:257)
at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2384)
at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2360)
at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2360)
at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:208)
at org.apache.phoenix.flume.serializer.BaseEventSerializer.initialize(BaseEventSerializer.java:140)
at org.apache.phoenix.flume.sink.PhoenixSink.start(PhoenixSink.java:119)
at org.apache.flume.sink.DefaultSinkProcessor.start(DefaultSinkProcessor.java:45)
at org.apache.flume.SinkRunner.start(SinkRunner.java:79)
at org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:249)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2017-11-11 14:49:54,792 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:149)] Component type: SINK, name: Phoenix Sink__1 stopped
2017-11-11 14:49:54,792 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:155)] Shutdown Metric for type: SINK, name: Phoenix Sink__1. sink.start.time == 1510382993516
Here is my flume-agent.properties:
agent.sources = exec
agent.channels = mem-channel
agent.sinks = phoenix-sink
agent.sources.exec.type = exec
agent.sources.exec.command = tail -F /Users/chenshuai1/tmp/users.json
agent.sources.exec.channels = mem-channel
agent.sinks.phoenix-sink.type = org.apache.phoenix.flume.sink.PhoenixSink
agent.sinks.phoenix-sink.batchSize = 10
agent.sinks.phoenix-sink.zookeeperQuorum = localhost
agent.sinks.phoenix-sink.table = users2
agent.sinks.phoenix-sink.ddl = CREATE TABLE IF NOT EXISTS users2 (userid BIGINT NOT NULL, username VARCHAR, password VARCHAR, email VARCHAR, country VARCHAR, state VARCHAR, city VARCHAR, dt VARCHAR NOT NULL CONSTRAINT PK PRIMARY KEY (userid, dt))
agent.sinks.phoenix-sink.serializer = json
agent.sinks.phoenix-sink.serializer.columnsMapping = {"userid":"userid", "username":"username", "password":"password", "email":"email", "country":"country", "state":"state", "city":"city", "dt":"dt"}
agent.sinks.phoenix-sink.serializer.partialSchema = true
agent.sinks.phoenix-sink.serializer.columns = userid,username,password,email,country,state,city,dt
agent.sinks.phoenix-sink.channel = mem-channel
agent.channels.mem-channel.type = memory
agent.channels.mem-channel.capacity = 1000
agent.channels.mem-channel.transactionCapacity = 100