Exporting Cloudwatch logs in original format - amazon-cloudwatch

I am looking for a way to export CW logs in their original form to S3. I used the console to export a day's worth of logs from a log group, and it seems that a timestamp was prepended to each line, breaking the original JSON formatting. I was looking to import this into Glue as a JSON file for a test transformation script. The original data is formatted as a normal JSON string when imported into CloudWatch, and when I normally process the data it looks like:
{ "a": 123, "b": "456", "c": 789 }
After exporting and decompressing the data it looks like
2019-06-28T00:00:00.099Z { "a": 123, "b": "456", "c": 789 }
Which breaks reading the line as a JSON string, since it's no longer a standard format.
The dataset is fairly large (100GB+) for this run, and will possibly grow larger in the future, so running a CLI command and processing each line locally isn't feasible in my opinion. Is there any known way to do what I am looking to do?
Thank you

Timestamps are automatically added when you push the logs to CloudWatch.
All the log events present in CloudWatch have a timestamp.
You can create a subscription filter to Kinesis Firehose, and on Kinesis using a Lambda function you can format the log events (remove the timestamp), then store the logs in S3.
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Subscriptions.html
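A Firehose data-transformation Lambda of the kind the answer describes, added here as an example (not code from the thread). It assumes the subscription filter delivers the standard CloudWatch Logs payload (base64 of gzip of JSON with a logEvents array) and emits only each event's original message, so the timestamp never reaches S3; the sample payload builder is constructed for offline use.

```python
import base64
import gzip
import json

def unpack_log_events(data_b64):
    """Decode one Firehose record carrying a CloudWatch Logs subscription payload
    (base64 of gzip of JSON) and return the original messages as NDJSON.
    Returns None for CONTROL_MESSAGE payloads, which carry no log lines."""
    payload = json.loads(gzip.decompress(base64.b64decode(data_b64)))
    if payload.get("messageType") != "DATA_MESSAGE":
        return None
    lines = []
    for event in payload["logEvents"]:
        message = event["message"].rstrip("\n")
        json.loads(message)  # reject lines that are not JSON (raises ValueError)
        lines.append(message)
    return "\n".join(lines) + "\n"

def lambda_handler(event, context=None):
    """Firehose data-transformation entry point: one result per input record."""
    results = []
    for rec in event["records"]:
        try:
            body = unpack_log_events(rec["data"])
        except ValueError:
            results.append({"recordId": rec["recordId"], "result": "ProcessingFailed"})
            continue
        if body is None:
            results.append({"recordId": rec["recordId"], "result": "Dropped"})
        else:
            results.append({"recordId": rec["recordId"], "result": "Ok",
                            "data": base64.b64encode(body.encode("utf-8")).decode("ascii")})
    return {"records": results}

def make_subscription_record(messages, message_type="DATA_MESSAGE"):
    """Constructed test input mimicking what CloudWatch Logs hands to Firehose."""
    payload = {
        "messageType": message_type, "owner": "123456789012",
        "logGroup": "/app/example", "logStream": "stream-1",
        "subscriptionFilters": ["to-firehose"],
        "logEvents": [{"id": str(i), "timestamp": 1561680000099 + i, "message": m}
                      for i, m in enumerate(messages)],
    }
    return base64.b64encode(gzip.compress(json.dumps(payload).encode("utf-8"))).decode("ascii")
```

Records whose messages are not valid JSON are marked ProcessingFailed so Firehose routes them to the error prefix instead of silently breaking the Glue input.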


Trying to pass binary files through Logstash

Some process is producing binary files into my Kafka (from Java they come as a byte array).
I'm trying to consume from Kafka with Logstash and upload the file to S3.
My pipeline:
input {
  kafka {
    bootstrap_servers => "my-broker:9092"
    topic => "my-topic"
    partition_assignment_strategy => "org.apache.kafka.clients.consumer.StickyAssignor"
    value_deserializer_class => "org.apache.kafka.common.serialization.ByteArrayDeserializer"
  }
}
filter {
  mutate {
    remove_field => ["@timestamp", "host"]
  }
}
output {
  s3 {
    region => "eu-west-1"
    bucket => "my_bucket"
    time_file => 1
    prefix => "files/"
    rotation_strategy => "time"
  }
}
As you can see, I used a different deserializer class. However, it seems that Logstash by default uses a codec that converts the byte array to a string. My goal is to upload the file to S3 as it is. Is there any known codec that doesn't do anything to the input data and uploads it to S3 as it is?
Right now the files are uploaded to S3, but I can't read them or open them. The binary content was corrupted by Logstash somehow. For example, I tried sending a gzip that contains multiple files inside and I can't open it afterwards in S3.
The warning that I get on Logstash:
0-06-02T10:49:29,149][WARN ][logstash.codecs.plain ][my_pipeline] Received an event that has a different character encoding than you configured. {:text=>"7z\\xBC\\xAF'\\u001C\\u0000\\u0002\\xA6j<........more binary data", :expected_charset=>"UTF-8"}
I'm not sure that Logstash is the best fit for passing binary data, and in the end I implemented a Java consumer, but the following solution worked for me with Logstash:
The data sent to Kafka can be serialized as binary data. For example, I used filebeat to send the binary data, and in Kafka's output module there is a parameter called "value_serializer" that should be set to "org.apache.kafka.common.serialization.ByteArraySerializer".
In your Logstash settings (kafka input) define the value_deserializer_class as "org.apache.kafka.common.serialization.ByteArrayDeserializer", just as I did in the post.
Your output in Logstash can be any resource that can take binary data.
Be aware that the output will get binary data and you will need to deserialize it.
I don't think you really understand what Logstash is for.
As its name says, log-stash, it is for streaming ASCII-type files, using an EOL delimiter to differentiate between log events.
I did manage to find community-developed Kafka beats for reading data from Kafka topics; there are 2 options:
kafkabeat - Reads data from Kafka topics.
kafkabeat2 - Reads data (json or plain) from Kafka topics.
I didn't test those on my own, but using the S3 output option with those might do the trick. If the S3 option is not yet supported you can develop it yourself and push it to open source so everyone can enjoy it :-)

How is data in kinesis decrypted before hitting s3

I currently have an architecture where my Kinesis -> Kinesis Firehose -> S3
I am creating records directly in kinesis using:
aws kinesis put-record --stream-name <some_kinesis_stream> --partition-key 123 --data testdata --profile sandbox
The data, when I run:
aws kinesis get-records --shard-iterator --profile sandbox
looks like this:
{
"SequenceNumber": "49597697038430366340153578495294928515816248592826368002",
"ApproximateArrivalTimestamp": 1563835989.441,
"Data": "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",
"PartitionKey": "-591791328"
}
but in s3, it looks like:
`testdatatestdatatestdatatestdatatestdatatestdatatestdatatestdata`
because I ran put-record several times.
So what is going on? When I run get-records, what records am I obtaining? What is that data? How is that data then decrypted into my original string? What is going on?
15 days old now, so hopefully you found the answer already.
If not, it seems the reason you have a mismatch in data between get-records and what you see in S3 is based on how you performed the aws kinesis get-records --shard-iterator --profile sandbox call: you didn't explicitly provide a shard iterator value.
What you saw in S3 is correct and expected based on your --data testdata put-record calls.
testdatatestdatatestdatatestdatatestdatatestdatatestdatatestdata
What you saw in Kinesis is base64 encoded:
"Data": "eyJrZXkiOnsiZW1wX25vIjo1Mjc2OCwiZGVwdF9ubyI6ImQwMDUifSwidmFsdWUiOnsiYmVmb3JlIjpudWxsLCJhZnRlciI6eyJlbXBfbm8iOjUyNzY4LCJkZXB0X25vIjoiZDAwNSIsImZyb21fZGF0ZSI6Nzk2NSwidG9fZGF0ZSI6MjkzMjUzMX0sInNvdXJjZSI6eyJ2ZXJzaW9uIjoiMC45LjUuRmluYWwiLCJjb25uZWN0b3IiOiJteXNxbCIsIm5hbWUiOiJraW5lc2lzIiwic2VydmVyX2lkIjowLCJ0c19zZWMiOjAsImd0aWQiOm51bGwsImZpbGUiOiJteXNxbC1iaW4tY2hhbmdlbG9nLjAwMDAwMiIsInBvcyI6MTU0LCJyb3ciOjAsInNuYXBzaG90Ijp0cnVlLCJ0aHJlYWQiOm51bGwsImRiIjoiZW1wbG95ZWVzIiwidGFibGUiOiJkZXB0X2VtcCIsInF1ZXJ5IjpudWxsfSwib3AiOiJjIiwidHNfbXMiOjE1NjM4MzEzMTI2Njh9fQ==",
So decoding gets you:
{
  "key": {
    "emp_no": 52768,
    "dept_no": "d005"
  },
  "value": {
    "before": null,
    "after": {
      "emp_no": 52768,
      "dept_no": "d005",
      "from_date": 7965,
      "to_date": 2932531
    },
    "source": {
      "version": "0.9.5.Final",
      "connector": "mysql",
      "name": "kinesis",
      "server_id": 0,
      "ts_sec": 0,
      "gtid": null,
      "file": "mysql-bin-changelog.000002",
      "pos": 154,
      "row": 0,
      "snapshot": true,
      "thread": null,
      "db": "employees",
      "table": "dept_emp",
      "query": null
    },
    "op": "c",
    "ts_ms": 1563831312668
  }
}
The reason why it didn't match your "testdata" is that you were looking at the wrong shard iterator, on possibly the wrong shard. Unsure what your Kinesis setup is exactly.
Give this article a once-over: https://docs.aws.amazon.com/streams/latest/dev/fundamental-stream.html . It should give you the steps to test this workflow.
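As an added illustration of the answer's point (example code, not from the thread): the Data field returned by get-records is plain base64, not ciphertext, and Firehose writes a batch to S3 by concatenating the decoded payloads with no delimiter, which is exactly why eight "testdata" puts became one long run. The sample response below is constructed from the two payloads shown in this thread.

```python
import base64
import json

def decode_records(get_records_response):
    """Turn the JSON returned by `aws kinesis get-records` into a list of
    (sequence_number, payload_bytes, parsed_json_or_None) tuples."""
    out = []
    for rec in get_records_response["Records"]:
        raw = base64.b64decode(rec["Data"], validate=True)  # base64, not encryption
        try:
            parsed = json.loads(raw.decode("utf-8"))
        except (UnicodeDecodeError, ValueError):
            parsed = None  # e.g. the literal "testdata" payload
        out.append((rec["SequenceNumber"], raw, parsed))
    return out

def firehose_object_body(records):
    """Model how Firehose lays out a delivered batch: payloads back to back,
    no separator. A producer wanting one record per line must append "\n"
    itself before calling put-record."""
    return b"".join(raw for _, raw, _ in records)

# Constructed sample response: eight "testdata" puts plus one JSON record.
_FIRST_SEQ = 49597697038430366340153578495294928515816248592826368002
SAMPLE_RESPONSE = {
    "Records": [
        {"SequenceNumber": str(_FIRST_SEQ + i),
         "ApproximateArrivalTimestamp": 1563835989.441 + i,
         "Data": base64.b64encode(b"testdata").decode("ascii"),
         "PartitionKey": "123"}
        for i in range(8)
    ] + [
        {"SequenceNumber": str(_FIRST_SEQ + 99),
         "ApproximateArrivalTimestamp": 1563835999.0,
         "Data": base64.b64encode(
             json.dumps({"key": {"emp_no": 52768, "dept_no": "d005"}}).encode("utf-8")
         ).decode("ascii"),
         "PartitionKey": "-591791328"}
    ],
    "MillisBehindLatest": 0,
}
```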
It seems that you've configured your firehose to enable server-side data encryption. If this is the case then the following applies:
When you configure a Kinesis data stream as the data source of a Kinesis Data Firehose delivery stream, Kinesis Data Firehose no longer stores the data at rest. Instead, the data is stored in the data stream.
When you send data from your data producers to your data stream, Kinesis Data Streams encrypts your data using an AWS Key Management Service (AWS KMS) key before storing the data at rest. When your Kinesis Data Firehose delivery stream reads the data from your data stream, Kinesis Data Streams first decrypts the data and then sends it to Kinesis Data Firehose. Kinesis Data Firehose buffers the data in memory based on the buffering hints that you specify. It then delivers it to your destinations without storing the unencrypted data at rest.
Find out more at: https://docs.aws.amazon.com/firehose/latest/dev/encryption.html

Properly Configuring Kafka Connect S3 Sink TimeBasedPartitioner

I am trying to use the TimeBasedPartitioner of the Confluent S3 sink. Here is my config:
{
  "name": "s3-sink",
  "config": {
    "connector.class": "io.confluent.connect.s3.S3SinkConnector",
    "tasks.max": "1",
    "file": "test.sink.txt",
    "topics": "xxxxx",
    "s3.region": "yyyyyy",
    "s3.bucket.name": "zzzzzzz",
    "s3.part.size": "5242880",
    "flush.size": "1000",
    "storage.class": "io.confluent.connect.s3.storage.S3Storage",
    "format.class": "io.confluent.connect.s3.format.avro.AvroFormat",
    "schema.generator.class": "io.confluent.connect.storage.hive.schema.DefaultSchemaGenerator",
    "partitioner.class": "io.confluent.connect.storage.partitioner.TimeBasedPartitioner",
    "timestamp.extractor": "Record",
    "timestamp.field": "local_timestamp",
    "path.format": "YYYY-MM-dd-HH",
    "partition.duration.ms": "3600000",
    "schema.compatibility": "NONE"
  }
}
The data is binary and I use an Avro schema for it. I would want to use the actual record field "local_timestamp", which is a UNIX timestamp, to partition the data, say into hourly files.
I start the connector with the usual REST API call
curl -X POST -H "Content-Type: application/json" --data @s3-config.json http://localhost:8083/connectors
Unfortunately the data is not partitioned as I wish. I also tried to remove the flush size because this might interfere, but then I got the error
{"error_code":400,"message":"Connector configuration is invalid and contains the following 1 error(s):\nMissing required configuration \"flush.size\" which has no default value.\nYou can also find the above list of errors at the endpoint `/{connectorType}/config/validate`"}%
Any idea how to properly set the TimeBasedPartitioner? I could not find a working example.
Also, how can one debug such a problem or gain further insight into what the connector is actually doing?
Greatly appreciate any help or further suggestions.
After studying the code at TimeBasedPartitioner.java and the logs with
confluent log connect tail -f
I realized that both timezone and locale are mandatory, although this is not specified as such in the Confluent S3 Connector documentation. The following config fields solve the problem and let me upload the records properly partitioned to S3 buckets:
  "flush.size": "10000",
  "storage.class": "io.confluent.connect.s3.storage.S3Storage",
  "format.class": "io.confluent.connect.s3.format.avro.AvroFormat",
  "schema.generator.class": "io.confluent.connect.storage.hive.schema.DefaultSchemaGenerator",
  "partitioner.class": "io.confluent.connect.storage.partitioner.TimeBasedPartitioner",
  "path.format": "'year'=YYYY/'month'=MM/'day'=dd/'hour'=HH",
  "locale": "US",
  "timezone": "UTC",
  "partition.duration.ms": "3600000",
  "timestamp.extractor": "RecordField",
  "timestamp.field": "local_timestamp",
Note two more things: First, a value for flush.size is also necessary; files are eventually partitioned into smaller chunks, no larger than specified by flush.size. Second, the path.format is better selected as displayed above so that a proper tree structure is generated.
I am still not 100% sure whether the record field local_timestamp is really used to partition the records.
Any comments or improvements are greatly welcome.
Indeed your amended configuration seems correct.
Specifically, setting timestamp.extractor to RecordField allows you to partition your files based on the timestamp field that your records have and which you identify by setting the property timestamp.field.
When instead one sets timestamp.extractor=Record, then a time-based partitioner will use the Kafka timestamp for each record.
Regarding flush.size, setting this property to a high value (e.g. Integer.MAX_VALUE) will be practically synonymous with ignoring it.
Finally, schema.generator.class is no longer required in the most recent versions of the connector.
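To check the open question of whether local_timestamp really drives the partitioning, the following added example (not the connector's own code) models what TimeBasedPartitioner does with the amended configuration: pick the timestamp per timestamp.extractor, floor it to partition.duration.ms and render path.format. It is a simplified model (Joda-style tokens YYYY/MM/dd/HH/mm only, locale ignored, timezone handled for UTC and zoneinfo names); the record is constructed.

```python
import re
from datetime import datetime, timezone

# Joda-style tokens used in path.format, mapped to strftime directives.
_TOKENS = {"YYYY": "%Y", "MM": "%m", "dd": "%d", "HH": "%H", "mm": "%M"}

def path_format_to_strftime(path_format):
    """'year'=YYYY/'month'=MM -> year=%Y/month=%m (quoted text is literal)."""
    def repl(m):
        if m.group(1) is not None:
            return m.group(1).replace("%", "%%")
        return _TOKENS[m.group(0)]
    return re.sub(r"'([^']*)'|YYYY|MM|dd|HH|mm", repl, path_format)

def extract_timestamp_ms(record, config):
    """Record: the Kafka message timestamp. RecordField: the configured value field."""
    extractor = config["timestamp.extractor"]
    if extractor == "Record":
        return record["kafka_timestamp_ms"]
    if extractor == "RecordField":
        return record["value"][config["timestamp.field"]]
    raise ValueError("unsupported timestamp.extractor: " + extractor)

def encode_partition(record, config):
    """Floor the timestamp to the start of its partition window, then render path.format."""
    ts_ms = extract_timestamp_ms(record, config)
    duration_ms = int(config["partition.duration.ms"])
    window_start_ms = ts_ms - ts_ms % duration_ms
    if config["timezone"] == "UTC":
        tz = timezone.utc
    else:  # other zone names need zoneinfo (Python 3.9+)
        from zoneinfo import ZoneInfo
        tz = ZoneInfo(config["timezone"])
    dt = datetime.fromtimestamp(window_start_ms / 1000, tz=tz)
    return dt.strftime(path_format_to_strftime(config["path.format"]))

# Constructed record (Kafka timestamp and local_timestamp differ by ~78 minutes)
# together with the amended configuration from the answer above.
SAMPLE_RECORD = {"kafka_timestamp_ms": 1563835989441,
                 "value": {"local_timestamp": 1563831312668, "emp_no": 52768}}
SAMPLE_CONFIG = {
    "partitioner.class": "io.confluent.connect.storage.partitioner.TimeBasedPartitioner",
    "path.format": "'year'=YYYY/'month'=MM/'day'=dd/'hour'=HH",
    "locale": "US", "timezone": "UTC", "partition.duration.ms": "3600000",
    "timestamp.extractor": "RecordField", "timestamp.field": "local_timestamp",
}
```

Because the two timestamps in the sample fall in different hours, the rendered prefix tells you which extractor is in effect: hour=21 means the record field was used, hour=22 means the Kafka timestamp was.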

Amazon Step Function with a Lambda that takes trigger from Kinesis

So I am trying to create a simple pipeline in Amazon AWS. I want to execute a step function using data generated by a stream which triggers the first Lambda of the state machine.
What I want to do is the following.
Input data is streamed by AWS Kinesis.
This Kinesis stream is used as a trigger for lambda1, which executes and writes to an S3 bucket.
This would trigger (using a step function) lambda2, which would read the content from the given bucket and write it to another bucket.
Now I want to implement a state machine using Amazon Step Functions. I have created the state machine, which is quite straightforward:
{
  "Comment": "Linear step function test",
  "StartAt": "lambda1",
  "States": {
    "lambda1": {
      "Type": "Task",
      "Resource": "arn:....",
      "Next": "lambda2"
    },
    "lambda2": {
      "Type": "Task",
      "Resource": "arn:...",
      "End": true
    }
  }
}
What I want is that Kinesis should trigger the first Lambda and, once it's executed, the step function would execute lambda2. This does not seem to happen. The step function does nothing even though my lambda1 is triggered from the stream and writes to the S3 bucket. I have an option to manually start a new execution and pass a JSON as input, but that is not the workflow I am looking for.
You did it wrong in kicking off the state machine.
You need to add another starter Lambda function that uses the SDK to invoke the state machine. The process is like this:
kinesis -> starter(lambda) -> StateMachine (start Lambda 1 and Lambda 2)
The problem with using Step Functions is the lack of triggers. There are only 3 triggers, which are CloudWatch Events, the SDK or API Gateway.
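A starter Lambda for the kinesis -> starter -> state machine flow the answer describes, added as an example (not code from the thread). It assumes the state machine ARN is supplied in a STATE_MACHINE_ARN environment variable and uses boto3's real start_execution call; the client is passed in so the logic also runs offline against the constructed stand-in at the bottom.

```python
import base64
import json
import os

def build_execution_input(kinesis_record):
    """Decode one Kinesis record (the Lambda event base64-encodes Data) into
    the JSON document the state machine receives as its input."""
    payload = base64.b64decode(kinesis_record["data"]).decode("utf-8")
    try:
        body = json.loads(payload)
    except ValueError:  # not JSON: pass the raw text through rather than failing
        body = {"raw": payload}
    return {"partitionKey": kinesis_record["partitionKey"],
            "sequenceNumber": kinesis_record["sequenceNumber"],
            "record": body}

def start_executions(event, sfn_client, state_machine_arn):
    """One execution per record. The name derives from the shard sequence number,
    so a retried Lambda invocation does not start a duplicate run: StartExecution
    with the same name and input returns the existing execution."""
    arns = []
    for rec in event["Records"]:
        k = rec["kinesis"]
        resp = sfn_client.start_execution(
            stateMachineArn=state_machine_arn,
            name="kinesis-" + k["sequenceNumber"],
            input=json.dumps(build_execution_input(k)))
        arns.append(resp["executionArn"])
    return arns

def lambda_handler(event, context=None):
    import boto3  # imported here so the offline path below needs no AWS SDK
    return start_executions(event, boto3.client("stepfunctions"),
                            os.environ["STATE_MACHINE_ARN"])

class FakeStepFunctions:
    """Constructed stand-in for the boto3 client so the example runs offline."""
    def __init__(self):
        self.calls = []
    def start_execution(self, **kwargs):
        self.calls.append(kwargs)
        return {"executionArn": kwargs["stateMachineArn"] + ":" + kwargs["name"]}

def make_kinesis_event(payloads):
    """Constructed Lambda event in the shape the Kinesis event source mapping sends."""
    return {"Records": [
        {"eventSource": "aws:kinesis",
         "kinesis": {"partitionKey": "123", "sequenceNumber": "constructed-seq-%03d" % i,
                     "data": base64.b64encode(p.encode("utf-8")).decode("ascii")}}
        for i, p in enumerate(payloads)]}
```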

Is there a fast way of accessing line in AWS S3 file?

I have a collection of JSON messages in a file stored on S3 (one message per line). Each message has a unique key as part of the message. I also have a simple DynamoDB table where this key is used as the primary key. The table contains the name of the S3 file where the corresponding JSON message is located.
My goal is to extract a JSON message from the file given the key. Of course, the worst case scenario is when the message is the very last line in the file.
What is the fastest way of extracting the message from the file using the boto library? In particular, is it possible to somehow read the file line by line directly? Of course, I can read the entire contents into a local file using boto.s3.key.get_file(), then open the file and read it line by line and check for the id to match. But is there a more efficient way?
Thanks much!
S3 cannot do this. That said, you have some other options:
Store the record's length and position (byte offset) instead of the line number in DynamoDB. This would allow you to retrieve just that record using the Range: header.
Use a caching layer to store { S3 object key, line number } => { position, length } tuples. When you want to look up a record by { S3 object key, line number }, reference the cache. If you don't already have this data, you have to fetch the whole file like you do now -- but having fetched the file, you can calculate offsets for every line within it, and save yourself work down the line.
Store the JSON record in DynamoDB directly. This may or may not be practical, given the 64 KB item limit.
Store each JSON record in S3 separately. You could then eliminate the DynamoDB key lookup, and go straight to S3 for a given record.
Which is most appropriate for you depends on your application architecture, the way in which this data is accessed, concurrency issues (probably not significant given your current solution), and your sensitivities for latency and cost.
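The first two options above in working form, added as an example (not code from the answer): one pass over a fetched object builds the per-line (byte offset, length) index to store in DynamoDB or a cache, and a later lookup issues a single GetObject with a Range header. It uses boto3's get_object(Bucket=, Key=, Range=) signature; the NDJSON body and the S3 stand-in are constructed so it runs offline.

```python
import io
import json

def build_line_index(body):
    """Scan a newline-delimited object once and return, per non-empty line, the
    (byte_offset, byte_length) pair to store alongside the record's key. Offsets
    are counted in bytes, not characters, so multi-byte UTF-8 lines index correctly."""
    index = []
    offset = 0
    for line in body.splitlines(keepends=True):
        content_len = len(line.rstrip(b"\r\n"))
        if content_len:
            index.append((offset, content_len))
        offset += len(line)
    return index

def find_record_by_key(body, index, key_field, wanted):
    """Used when (re)building the cache: locate the line whose key matches."""
    for offset, length in index:
        if json.loads(body[offset:offset + length]).get(key_field) == wanted:
            return offset, length
    return None

def range_header(offset, length):
    """HTTP Range for one record; S3's end bound is inclusive."""
    return "bytes=%d-%d" % (offset, offset + length - 1)

def fetch_record(s3, bucket, key, offset, length):
    """Single ranged GetObject: only the record's bytes are transferred."""
    resp = s3.get_object(Bucket=bucket, Key=key, Range=range_header(offset, length))
    return resp["Body"].read()

class FakeS3:
    """Constructed stand-in for the boto3 S3 client that honours Range in memory."""
    def __init__(self, objects):
        self.objects = objects
    def get_object(self, Bucket, Key, Range):
        start, end = (int(x) for x in Range[len("bytes="):].split("-"))
        return {"Body": io.BytesIO(self.objects[(Bucket, Key)][start:end + 1])}

# Constructed sample object: one JSON message per line, one with non-ASCII text.
SAMPLE_BODY = ('{"id": "a1", "msg": "first"}\n'
               '{"id": "b2", "msg": "zweite Zeile: \u00e4\u00f6\u00fc"}\n'
               '{"id": "c3", "msg": "last"}\n').encode("utf-8")
```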
You can use the built-in readline with streams:
const readline = require('readline');
const AWS = require('aws-sdk');
const s3 = new AWS.S3();
const params = {Bucket: 'yourbucket', Key: 'somefile.txt'};
const readStream = s3.getObject(params).createReadStream();
const lineReader = readline.createInterface({
  input: readStream,
});
lineReader.on('line', (line) => console.log(line));
You can use S3 Select to accomplish this. It also works on Parquet files.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-glacier-select-sql-reference-select.html