This is for Flink on KDA (Kinesis Data Analytics), so it is limited to Flink version 1.13.
The desired flow of the application:
1. ingest from a Kinesis/Kafka source
2. sink to S3
3. after the S3 sink, get the object key/filename and publish it to Kafka (to inform other processes that a file is ready to be examined)
The first 2 steps are simple enough. The 3rd is the tricky one.
From my understanding the S3 object key will be made up of the following:
bucket partition (let's assume the default here of the processing date)
filename. The filename is made up of <prefix> + <subtask index> + <part file index> + <suffix>. It then goes through a series of states: in-progress, pending, and finished.
The file will be in a finished state when a checkpoint occurs and all pending files are moved to finished.
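For reference, here is a minimal sketch (not the actual application) of how such a sink might be configured in Flink 1.13 so that the naming above applies; the bucket, path, prefix, and suffix are placeholder assumptions:
import org.apache.flink.api.common.serialization.SimpleStringEncoder
import org.apache.flink.connector.file.sink.FileSink
import org.apache.flink.core.fs.Path
import org.apache.flink.streaming.api.functions.sink.filesystem.OutputFileConfig
import org.apache.flink.streaming.api.functions.sink.filesystem.bucketassigners.DateTimeBucketAssigner
import org.apache.flink.streaming.api.functions.sink.filesystem.rollingpolicies.OnCheckpointRollingPolicy

// Controls the <prefix> and <suffix> parts of the part-file name; the subtask
// index and the part-file counter are inserted by Flink between them.
val fileConfig = OutputFileConfig.builder()
  .withPartPrefix("part")   // placeholder prefix
  .withPartSuffix(".json")  // placeholder suffix
  .build()

val sink = FileSink
  .forRowFormat(new Path("s3://my-bucket/output"),          // placeholder bucket/path
                new SimpleStringEncoder[String]("UTF-8"))
  .withBucketAssigner(new DateTimeBucketAssigner[String]()) // processing-time date/hour buckets (the default)
  .withRollingPolicy(OnCheckpointRollingPolicy.build[String, String]()) // roll on checkpoint; pending files are committed when the checkpoint completes
  .withOutputFileConfig(fileConfig)
  .build()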
What I would like is to use this information as the trigger for the Kafka publish:
on checkpoint, give me a list of all the files (object keys) that have moved to the finished state. These can then be put on a Kafka topic.
Is this possible?
Thanks in advance
Related
I have a Flink batch job that reads a very large Parquet file from S3 and then sinks JSON into a Kafka topic.
The problem is how I can make the file-reading process stateful. I mean, whenever the job is interrupted or crashes, it should resume from the previous reading state. I don't want to send duplicate items to Kafka when the job is restarted.
Here is my example code
val env = ExecutionEnvironment.getExecutionEnvironment   // DataSet (batch) API environment
val input = Parquet.input[User](new Path(s"s3a://path"))  // Parquet input format helper (not a core Flink API; custom or third-party)
env.createInput(input)
  .filter(r => Option(r.token).getOrElse("").nonEmpty)    // keep only records with a non-empty token
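For what it's worth, one direction that is often suggested for the duplicate problem is to run the pipeline on the DataStream API with checkpointing and an exactly-once Kafka producer, so the Kafka transaction is committed together with Flink's checkpointed state. Below is a minimal sketch of the Kafka side only; the broker address, topic name, and timings are assumptions, not taken from the job above:
import java.nio.charset.StandardCharsets
import java.util.Properties

import org.apache.flink.streaming.api.CheckpointingMode
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.kafka.{FlinkKafkaProducer, KafkaSerializationSchema}
import org.apache.kafka.clients.producer.ProducerRecord

val env = StreamExecutionEnvironment.getExecutionEnvironment
env.enableCheckpointing(60000, CheckpointingMode.EXACTLY_ONCE) // checkpoint every 60 s

val props = new Properties()
props.setProperty("bootstrap.servers", "broker:9092")  // placeholder broker
props.setProperty("transaction.timeout.ms", "600000")  // must not exceed the broker's transaction.max.timeout.ms

// Turn each already-serialized JSON string into a Kafka record.
val schema = new KafkaSerializationSchema[String] {
  override def serialize(json: String, timestamp: java.lang.Long): ProducerRecord[Array[Byte], Array[Byte]] =
    new ProducerRecord[Array[Byte], Array[Byte]]("user-json", json.getBytes(StandardCharsets.UTF_8)) // placeholder topic
}

// EXACTLY_ONCE ties the Kafka transaction commit to checkpoint completion, so a
// restart from a checkpoint does not re-emit records that were already committed.
val kafkaSink = new FlinkKafkaProducer[String]("user-json", schema, props, FlinkKafkaProducer.Semantic.EXACTLY_ONCE)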
When using the BigQuery Data Transfer Service to move data into BigQuery from S3, I get intermittent success (I've actually only seen it work correctly one time).
The success:
6:00:48 PM Summary: succeeded 1 jobs, failed 0 jobs.
6:00:14 PM Job bqts_5f*** (table test_json_data) completed successfully. Number of records: 516356, with errors: 0.
5:59:13 PM Job bqts_5f*** (table test_json_data) started.
5:59:12 PM Processing files from Amazon S3 matching: "s3://bucket-name/*.json"
5:59:12 PM Moving data from Amazon S3 to Google Cloud complete: Moved 2661 object(s).
5:58:50 PM Starting transfer from Amazon S3 for files with prefix: "s3://bucket-name/"
5:58:49 PM Starting transfer from Amazon S3 for files modified before 2020-07-27T16:48:49-07:00 (exclusive).
5:58:49 PM Transfer load date: 20200727
5:58:48 PM Dispatched run to data source with id 138***3616
The usual outcome, though, is just 0 successes and 0 failures, like the following:
8:33:13 PM Summary: succeeded 0 jobs, failed 0 jobs.
8:32:38 PM Processing files from Amazon S3 matching: "s3://bucket-name/*.json"
8:32:38 PM Moving data from Amazon S3 to Google Cloud complete: Moved 3468 object(s).
8:32:14 PM Starting transfer from Amazon S3 for files with prefix: "s3://bucket-name/"
8:32:14 PM Starting transfer from Amazon S3 for files modified between 2020-07-27T16:48:49-07:00 and 2020-07-27T19:22:14-07:00 (exclusive).
8:32:13 PM Transfer load date: 20200728
8:32:13 PM Dispatched run to data source with id 13***0415
What might be going on such that the second log above doesn't have the Job bqts... run? Is there somewhere I can get more details about these data transfer jobs? I had a different job that ran into a JSON error, so I don't believe it was that.
Thanks!
I was a bit confused by the logging, since it finds and moves the objects either way.
I believe I misread the docs. I had previously thought that an Amazon S3 URI of s3://bucket-name/*.json would crawl the directories for the JSON files, but even though the message above seems to indicate that, it only loads files that sit at the top level of the bucket into BigQuery (for the s3://bucket-name/*.json URI).
Using flink 1.7.0, but also seen on flink 1.8.0. We are getting frequent but somewhat random errors when reading gzipped objects from S3 through the flink .readFile source:
org.apache.flink.fs.s3base.shaded.com.amazonaws.SdkClientException: Data read has a different length than the expected: dataLength=9713156; expectedLength=9770429; includeSkipped=true; in.getClass()=class org.apache.flink.fs.s3base.shaded.com.amazonaws.services.s3.AmazonS3Client$2; markedSupported=false; marked=0; resetSinceLastMarked=false; markCount=0; resetCount=0
at org.apache.flink.fs.s3base.shaded.com.amazonaws.util.LengthCheckInputStream.checkLength(LengthCheckInputStream.java:151)
at org.apache.flink.fs.s3base.shaded.com.amazonaws.util.LengthCheckInputStream.read(LengthCheckInputStream.java:93)
at org.apache.flink.fs.s3base.shaded.com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:76)
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.S3AInputStream.closeStream(S3AInputStream.java:529)
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.S3AInputStream.close(S3AInputStream.java:490)
at java.io.FilterInputStream.close(FilterInputStream.java:181)
at org.apache.flink.fs.s3.common.hadoop.HadoopDataInputStream.close(HadoopDataInputStream.java:89)
at java.util.zip.InflaterInputStream.close(InflaterInputStream.java:227)
at java.util.zip.GZIPInputStream.close(GZIPInputStream.java:136)
at org.apache.flink.api.common.io.InputStreamFSInputWrapper.close(InputStreamFSInputWrapper.java:46)
at org.apache.flink.api.common.io.FileInputFormat.close(FileInputFormat.java:861)
at org.apache.flink.api.common.io.DelimitedInputFormat.close(DelimitedInputFormat.java:536)
at org.apache.flink.streaming.api.functions.source.ContinuousFileReaderOperator$SplitReader.run(ContinuousFileReaderOperator.java:336)
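For context, here is a minimal sketch of the kind of .readFile setup described above (the path and scan interval are placeholders, not the original code); Flink's FileInputFormat decompresses .gz objects transparently based on the file extension, which is why GZIPInputStream shows up in the trace:
import org.apache.flink.api.java.io.TextInputFormat
import org.apache.flink.core.fs.Path
import org.apache.flink.streaming.api.functions.source.FileProcessingMode
import org.apache.flink.streaming.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment

val inputPath = "s3://my-bucket/input/"                // placeholder path to the gzipped objects
val format = new TextInputFormat(new Path(inputPath))  // delimited text; .gz is decompressed by extension

// PROCESS_CONTINUOUSLY rescans the path every 60 s; PROCESS_ONCE would do a single pass.
val lines: DataStream[String] =
  env.readFile(format, inputPath, FileProcessingMode.PROCESS_CONTINUOUSLY, 60000L)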
Within a given job, we generally see most of the files read successfully, but there's pretty much always at least one failure (say, out of 50 files).
It seems this error actually originates from the AWS client, so perhaps Flink has nothing to do with it, but I'm hopeful someone might have an insight as to how to make this work reliably.
When the error occurs, it ends up killing the source and cancelling all the connected operators. I'm still new to Flink, but I would think that this is something that could be recovered from a previous snapshot. Should I expect Flink to retry reading the file when this kind of exception occurs?
Maybe you can try to allow more connections for s3a, like:
flink:
  ...
  config: |
    fs.s3a.connection.maximum: 320
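On the recovery part of the question: whether Flink retries after such a failure depends on checkpointing and the configured restart strategy; without them, a single uncaught exception fails the whole job. A minimal sketch, with arbitrary example values:
import org.apache.flink.api.common.restartstrategy.RestartStrategies
import org.apache.flink.api.common.time.Time
import org.apache.flink.streaming.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment
env.enableCheckpointing(60000) // snapshot state every 60 s so a restart can resume from it
env.setRestartStrategy(
  RestartStrategies.fixedDelayRestart(3, Time.seconds(10))) // retry up to 3 times, 10 s apart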
I'm new to Oozie bundles. I want to run multiple coordinators one after another in a bundle job. My requirement is that after completion of one coordinator job a _SUCCESS file is generated, and then, using that _SUCCESS file, the second coordinator should be triggered. I don't know how to do that. For that I used the data-dependency technique, which keeps track of the output files generated by the previous coordinator. I'm sharing some code that I tried.
Let's say there are 2 coordinator jobs: A and B. I want to trigger only coordinator A, and only if the _SUCCESS file for coordinator A is generated should coordinator B start.
A - coordinator.xml
<workflow>
<app-path>${aDir}/aWorkflow</app-path>
</workflow>
This calls the respective workflow, and the _SUCCESS file is generated at the ${aDir}/aWorkflow/final_data/${date}/aDim location, so I included this location in coordinator B:
<dataset name="input1" frequency="${freq}" initial-instance="${START_TIME1}" timezone="UTC">
<uri-template>${aDir}/aWorkflow/final_data/${date}/aDim</uri-template>
</dataset>
<done-flag>_SUCCESS</done-flag>
<data-in name="coordInput1" dataset="input1">
<instance>${START_TIME1}</instance>
</data-in>
<workflow>
<app-path>${bDir}/bWorkflow</app-path>
</workflow>
But when I run it, the first coordinator itself gets KILLED, although if I run them individually they run successfully. I don't understand why they all get KILLED.
Any help sorting this out would be appreciated.
I found an easy way to do this and I'm sharing the solution. For coordinator B, I'm sharing the coordinator.xml (after the notes below).
1) For the dataset, the instance should be the start time of the second coordinator; it should not be a time instance of the first coordinator, otherwise that coordinator will get KILLED.
2) If you want to run multiple coordinators one after another, you can also include controls in coordinator.xml, e.g. concurrency, timeout, or throttle. Detailed information about these controls is in chapter 6 of the "Apache Oozie" book.
3)in "" i included latest(0) it will take latest generated folder in mentioned output path.
4)for "input-events" it is mandatory to include it's name as a input to ${coord:dataIn('coordInput1')}.otherwise oozie will not consider dataset.
<controls>
    <timeout>30</timeout>
    <concurrency>1</concurrency>
</controls>
<datasets>
    <dataset name="input1" frequency="${freq}" initial-instance="${START_TIME1}" timezone="UTC">
        <uri-template>${aimDir}/aDimWorkflow/final_data/${date}/aDim</uri-template>
        <done-flag>_SUCCESS</done-flag>
    </dataset>
</datasets>
<input-events>
    <data-in name="coordInput1" dataset="input1">
        <instance>${coord:latest(0)}</instance>
    </data-in>
</input-events>
<action>
    <workflow>
        <app-path>${bDir}/bWorkflow</app-path>
        <configuration>
            <property>
                <name>input_files</name>
                <value>${coord:dataIn('coordInput1')}</value>
            </property>
        </configuration>
    </workflow>
</action>
I am running a Spark job on a two node standalone cluster (v 1.0.1).
Spark execution often gets stuck at the task mapPartitions at Exchange.scala:44.
This happens at the final stage of my job in a call to saveAsTextFile (as I expect from Spark's lazy execution).
It is hard to diagnose the problem because I never experience it in local mode with local IO paths, and occasionally the job on the cluster does complete as expected with the correct output (same output as with local mode).
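As an illustration of the lazy-execution point (the paths and transformations below are placeholders, not the actual job): nothing reads from S3 until the final action runs, so the read and the shuffle stage only show up when saveAsTextFile triggers the whole lineage.
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.SparkContext._ // pair-RDD implicits for reduceByKey (Spark 1.x)

val conf = new SparkConf().setAppName("s3-read-example") // placeholder app name
val sc = new SparkContext(conf)

val records = sc.textFile("s3n://my-bucket/input/")  // lazy: no S3 I/O happens here
val counts = records
  .map(line => (line.length, 1))                     // lazy transformation
  .reduceByKey(_ + _)                                // lazy; introduces a shuffle (ShuffleMapTask stage)

counts.saveAsTextFile("s3n://my-bucket/output/")     // action: runs the whole lineage, including the S3 read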
This seems possibly related to reading a ~170 MB file from S3 immediately prior, as I see the following logging in the console:
DEBUG NativeS3FileSystem - getFileStatus returning 'file' for key '[PATH_REMOVED].avro'
INFO FileInputFormat - Total input paths to process : 1
DEBUG FileInputFormat - Total # of splits: 3
...
INFO DAGScheduler - Submitting 3 missing tasks from Stage 32 (MapPartitionsRDD[96] at mapPartitions at Exchange.scala:44)
DEBUG DAGScheduler - New pending tasks: Set(ShuffleMapTask(32, 0), ShuffleMapTask(32, 1), ShuffleMapTask(32, 2))
The last logging I see before the task apparently hangs/gets stuck is:
INFO NativeS3FileSystem: Opening key '[PATH_REMOVED].avro' for reading at position '67108864'
Has anyone else experienced non-deterministic problems related to reading from S3 in Spark?