I am currently using FluentBit as a sidecar container to push logs into New Relic for services that are deployed as Docker containers on ECS Fargate.
Currently, a log message looks like this:
[37m[Info] 2022-12-9T09:08:15.346, isRequestSuccess: False, totalTime: 2603, category: <Category>, callerIpAddress: <IP address>, timeGenerated: 12/09/2022 09:08:15, region: QA, correlationId: fecdafdb-c6af-41ac-a653-ecabbc682392, method: GET, url: <Request URL>, backendResponseCode: 503, responseCode: 503, responseSize: 370, cache: none, backendTime: 1600, apiId: <API Id>, operationId: HealthCheck, productId: <Product Id>, clientProtocol: HTTP/1.1, backendProtocol: HTTP/1.1, apiRevision: 1, clientTlsVersion: 1.2, backendMethod: GET, backendUrl: <Backend URL>, correlationId: fecdafdb-c6af-41ac-a653-ecabbc682392[0m
It is logged as unstructured data, so I cannot use a New Relic query on a specific field in the log.
FluentBit is configured with the following output:
[OUTPUT]
    name        nrlogs
    match       *
    license_key <license-key>
    base_uri    <host>
Does anyone know how to push the logs to New Relic in a structured way? I have tried a few New Relic parsers, and that did not help.
Any help is appreciated.
I'd recommend one of the following approaches:
Reconfigure your service's logging framework to output the log in JSON format. New Relic can natively ingest JSON logs and all fields will be converted to attributes in New Relic, which you can use for querying/filtering/alerting.
Set up a parsing rule in New Relic (using Grok expressions or plain ol' regex) to parse the logs as they're ingested into New Relic, which will result in attributes being created at ingest time. (See: https://docs.newrelic.com/docs/logs/ui-data/parsing/)
Use the awesome NRQL power features aparse(..) or capture(..) to extract the relevant fields at query time. (See: https://newrelic.com/blog/how-to-relic/nrql-improvements and https://newrelic.com/blog/how-to-relic/using-regex-capture)
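For example, assuming the raw line is ingested into the message attribute (the usual attribute for log lines), a query-time extraction with capture(..) could look roughly like this; the responseCode field name is just taken from your sample log line:

SELECT capture(message, r'responseCode: (?P<responseCode>\d+)') AS responseCode
FROM Log
WHERE message LIKE '%responseCode%'

aparse(..) works similarly, using a simpler wildcard pattern instead of a regex.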
I hope this helps!
Related
I started a weed filer.backup process to back up all the data to an S3 bucket. A lot of logs are being generated with the error messages below. Do I need to update any config to resolve this, or can these messages be ignored?
s3_write.go:99] [persistent-backup] completeMultipartUpload buckets/persistent/BEV_Processed_Data/2011_09_30/2011_09_30/GT_BEV_Output/0000000168.png: EntityTooSmall: Your proposed upload is smaller than the minimum allowed size
Apr 21 09:20:14 worker-server-004 seaweedfs-filer-backup[3076983]: #011status code: 400, request id: 10N2S6X73QVWK78G, host id: y2dsSnf7YTtMLIQSCW1eqrgvkom3lQ5HZegDjL4MgU8KkjDG/4U83BOr6qdUtHm8S4ScxI5HwZw=
Another message:
malformed xml the xml you provided was not well formed or did not validate against
This issue happens with empty files or files with very little content. It looks like the AWS S3 multipart upload does not accept streaming empty files. Is there any setting in SeaweedFS that I am missing?
I am generating API documentation for our Java endpoints. I am using widdershins to convert our OpenAPI 3.0 YAML file to markdown, and then shins to convert the markdown file to HTML. The request body does not appear in the generated cURL examples for any of our endpoints. Why is this? It defeats the purpose of having cURL examples, because copying and pasting a cURL example without the required body will not work. Can anyone recommend a workaround, or an alternative tool that generates good documentation with complete cURL examples?
Example endpoint from our openAPI.yaml file...
post:
  tags:
    - Tools
  description: Installs a tool on a user's account
  operationId: Install Tool
  requestBody:
    description: UserTool object that needs to be installed on the user's account
    content:
      application/json:
        schema:
          $ref: '#/components/schemas/UserTool'
    required: true
  parameters:
  responses:
    default:
      description: default response
      content:
        application/json:
          schema:
            $ref: '#/components/schemas/Message'
This is the documentation our toolchain generates from this yaml file...
We would like to add a line just like the -d line below (shown with grey highlight) to our cURL examples. This is a chunk from the markdown file that Widdershins produces from our OpenAPI yaml file; I manually added the -d line myself.
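Roughly speaking, this is the shape of cURL example we want to end up with (the endpoint URL and body fields below are placeholders, not our real values):

curl -X POST "https://api.example.com/user/tools" \
  -H "Content-Type: application/json" \
  -d '{"name": "example-tool", "version": "1.0"}'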
This Stack Overflow Q&A suggests that it is impossible to include a body parameter in a code example using Swagger or OpenAPI. Is this correct? If so, why is that the case? What's the reasoning?
Cheers,
Gideon
I also had the same problem.
After some trial and error, I found that whether a parameter shows up in the curl example depends on its in value.
Take a look at the ParameterIn enum:
public enum ParameterIn {
    DEFAULT(""),
    HEADER("header"),
    QUERY("query"),
    PATH("path"),
    COOKIE("cookie");

    // value field, constructor, and a toString() that returns the lower-case value omitted here
}
At first, I tried the following:
new Parameter().name("foo").in(ParameterIn.HEADER.name())
But name() returns "HEADER", so Swagger (or OpenAPI) does not recognize it as a header parameter. It should be the lowercase value "header", following the ParameterIn enum.
So you can fix it like this:
new Parameter().name("foo").in(ParameterIn.HEADER.toString())
or
new Parameter().name("foo").in("header")
I also encountered the same problem and did a little digging. It turns out I had to set the options.httpSnippet option in widdershins to true so that the requestBody params show up. However, setting that to true only shows the params if the content type is application/json. For multipart/form-data, you need to set options.experimental to true as well.
Unfortunately, there is a bug in widdershins when handling the application/x-www-form-urlencoded content type. I created a PR for it, which you can probably patch manually into the widdershins package. PR link: https://github.com/Mermade/widdershins/pull/492/files
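For completeness, here is a rough Node.js sketch of how those options might be passed to widdershins programmatically. The exact option names (httpsnippet vs. httpSnippet) and the convert() signature vary between widdershins versions, so treat this as an assumption to verify against the version you have installed; js-yaml is only an assumed helper for reading the spec:

// Sketch only: convert an OpenAPI document to markdown with widdershins,
// enabling HTTP snippet generation so requestBody parameters appear in the samples.
const fs = require('fs');
const yaml = require('js-yaml');            // assumed helper for loading the YAML spec
const widdershins = require('widdershins');

const api = yaml.load(fs.readFileSync('openAPI.yaml', 'utf8'));

const options = {
  codeSamples: true,       // emit code samples at all
  httpsnippet: true,       // per the note above: include the request body in the samples
  experimental: true,      // per the note above: needed for multipart/form-data bodies
  language_tabs: [{ shell: 'cURL' }]
};

widdershins.convert(api, options)
  .then(markdown => fs.writeFileSync('api.md', markdown))
  .catch(err => console.error(err));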
I am trying to use the TimeBasedPartitioner of the Confluent S3 sink. Here is my config:
{
  "name": "s3-sink",
  "config": {
    "connector.class": "io.confluent.connect.s3.S3SinkConnector",
    "tasks.max": "1",
    "file": "test.sink.txt",
    "topics": "xxxxx",
    "s3.region": "yyyyyy",
    "s3.bucket.name": "zzzzzzz",
    "s3.part.size": "5242880",
    "flush.size": "1000",
    "storage.class": "io.confluent.connect.s3.storage.S3Storage",
    "format.class": "io.confluent.connect.s3.format.avro.AvroFormat",
    "schema.generator.class": "io.confluent.connect.storage.hive.schema.DefaultSchemaGenerator",
    "partitioner.class": "io.confluent.connect.storage.partitioner.TimeBasedPartitioner",
    "timestamp.extractor": "Record",
    "timestamp.field": "local_timestamp",
    "path.format": "YYYY-MM-dd-HH",
    "partition.duration.ms": "3600000",
    "schema.compatibility": "NONE"
  }
}
The data is binary and I use an Avro schema for it. I want to use the actual record field local_timestamp, which is a UNIX timestamp, to partition the data, say into hourly files.
I start the connector with the usual REST API call:
curl -X POST -H "Content-Type: application/json" --data @s3-config.json http://localhost:8083/connectors
Unfortunately, the data is not partitioned as I wish. I also tried removing the flush size because it might interfere, but then I got the error:
{"error_code":400,"message":"Connector configuration is invalid and contains the following 1 error(s):\nMissing required configuration \"flush.size\" which has no default value.\nYou can also find the above list of errors at the endpoint `/{connectorType}/config/validate`"}%
Any idea how to properly set up the TimeBasedPartitioner? I could not find a working example.
Also, how can one debug such a problem or gain further insight into what the connector is actually doing?
Greatly appreciate any help or further suggestions.
After studying the code at TimeBasedPartitioner.java and the logs with
confluent log connect tail -f
I realized that both timezone and locale are mandatory, although this is not specified as such in the Confluent S3 Connector documentation. The following config fields solve the problem and let me upload the records properly partitioned to S3 buckets:
"flush.size": "10000",
"storage.class": "io.confluent.connect.s3.storage.S3Storage",
"format.class": "io.confluent.connect.s3.format.avro.AvroFormat",
"schema.generator.class": "io.confluent.connect.storage.hive.schema.DefaultSchemaGenerator",
"partitioner.class": "io.confluent.connect.storage.partitioner.TimeBasedPartitioner",
"path.format": "'year'=YYYY/'month'=MM/'day'=dd/'hour'=HH",
"locale": "US",
"timezone": "UTC",
"partition.duration.ms": "3600000",
"timestamp.extractor": "RecordField",
"timestamp.field": "local_timestamp",
Note two more things: first, a value for flush.size is still necessary; files are eventually partitioned into chunks no larger than flush.size specifies. Second, the path.format is better chosen as shown above so that a proper tree structure is generated.
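For example, with that path.format the objects end up under keys roughly like the following (bucket and topic names taken from my config above; the exact file name pattern depends on the connector version, so treat this as an illustration only):

s3://zzzzzzz/topics/xxxxx/year=2018/month=03/day=21/hour=14/xxxxx+0+0000000000.avro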
I am still not 100% sure whether the record field local_timestamp is really used to partition the records.
Any comments or improvements are greatly welcome.
Indeed your amended configuration seems correct.
Specifically, setting timestamp.extractor to RecordField allows you to partition your files based on the timestamp field that your records have and which you identify by setting the property timestamp.field.
When instead one sets timestamp.extractor=Record, then a time-based partitioner will use the Kafka timestamp for each record.
Regarding flush.size, setting this property to a high value (e.g. Integer.MAX_VALUE) is practically the same as ignoring it.
Finally, schema.generator.class is no longer required in the most recent versions of the connector.
I am trying to, in Microsoft Access 2013, create a real-time link to data provided by a REST-based API (this API, to be specific). The ultimate goal is for the data to be available in a query as if it were a local database.
How can this be accomplished? Specifically, I am struggling with how to have Access call the API upon request. The only way I can think to achieve a similar result is to write a script that pulls the entire database via the API and translates it to an Access-readable format, then run that script at set intervals. But I'd really like to find a solution that works in real time, even if it's a skosh slower than locally caching the database.
Since a call to a RESTful Web Service is really just a specific kind of HTTP request you could, at the very least, use the Microsoft XML library to shoot an HTTP request to the web service and parse whatever it returns. For example, when I run the following VBA code
' VBA project Reference required:
' Microsoft XML, v3.0
Dim httpReq As New MSXML2.ServerXMLHTTP
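' The third argument to Open (False) makes the request synchronous,
' so execution blocks until the response has arrived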
httpReq.Open "GET", "http://whois.arin.net/rest/poc/KOSTE-ARIN", False
httpReq.send
Dim response As String
response = httpReq.responseText
Debug.Print response
the string variable response contains the XML response to my request. It looks like this (after reformatting for readability):
<?xml version='1.0'?>
<?xml-stylesheet type='text/xsl' href='http://whois.arin.net/xsl/website.xsl' ?>
<poc xmlns="http://www.arin.net/whoisrws/core/v1" xmlns:ns2="http://www.arin.net/whoisrws/rdns/v1"
xmlns:ns3="http://www.arin.net/whoisrws/netref/v2" termsOfUse="https://www.arin.net/whois_tou.html"
inaccuracyReportUrl="http://www.arin.net/public/whoisinaccuracy/index.xhtml">
<registrationDate>2009-10-02T11:54:45-04:00</registrationDate>
<ref>http://whois.arin.net/rest/poc/KOSTE-ARIN</ref>
<city>Chantilly</city>
<companyName>ARIN</companyName>
<iso3166-1>
<code2>US</code2>
<code3>USA</code3>
<name>UNITED STATES</name>
<e164>1</e164>
</iso3166-1>
<firstName>Mark</firstName>
<handle>KOSTE-ARIN</handle>
<lastName>Kosters</lastName>
<emails>
<email>markk@kosters.net</email>
<email>markk@bjmk.com</email>
</emails>
<resources termsOfUse="https://www.arin.net/whois_tou.html"
inaccuracyReportUrl="http://www.arin.net/public/whoisinaccuracy/index.xhtml">
<limitExceeded limit="256">false</limitExceeded>
</resources>
<phones>
<phone>
<number>+ 1-703-227-9870</number>
<type>
<description>Office</description>
<code>O</code>
</type>
</phone>
</phones>
<postalCode>20151</postalCode>
<comment>
<line number="0">I'm really MAK21-ARIN</line>
</comment>
<iso3166-2>VA</iso3166-2>
<streetAddress>
<line number="0">3635 Concorde Parkway</line>
</streetAddress>
<updateDate>2015-05-26T11:36:55-04:00</updateDate>
</poc>
What gets returned by your web service might look somewhat different. Or, as in the case of the ARIN whois RWS above, you may have several data formats from which to choose; XML was just the default. I could have requested a plain text response using
httpReq.Open "GET", "http://whois.arin.net/rest/poc/KOSTE-ARIN.txt", False
in which case response would contain
#
# ARIN WHOIS data and services are subject to the Terms of Use
# available at: https://www.arin.net/whois_tou.html
#
Name: Kosters, Mark
Handle: KOSTE-ARIN
Company: ARIN
Address: 3635 Concorde Parkway
City: Chantilly
StateProv: VA
PostalCode: 20151
Country: US
RegDate: 2009-10-02
Updated: 2015-05-26
Comment: I'm really MAK21-ARIN
Phone: +1-703-227-9870 (Office)
Email: markk@bjmk.com
Email: markk@kosters.net
Ref: http://whois.arin.net/rest/poc/KOSTE-ARIN
#
# ARIN WHOIS data and services are subject to the Terms of Use
# available at: https://www.arin.net/whois_tou.html
#
I am using Worklight 6.1 and I'm trying to send logs that are created in my client to the server, in order to be able to view the logs in case the application crashes. What I have done (based on this link: http://pic.dhe.ibm.com/infocenter/wrklight/v5r0m6/index.jsp?topic=%2Fcom.ibm.worklight.help.doc%2Fdevref%2Fc_using_client_log_capture.html) is:
Set the following in wlInitOptions.js:
logger: {
    enabled: true,
    level: 'debug',
    stringify: true,
    pretty: false,
    tag: {
        level: false,
        pkg: true
    },
    whitelist: [],
    blacklist: [],
    nativeOptions: {
        capture: true
    }
},
In the client, I have added the following where I want to send a log:
WL.Logger.error("test");
WL.Logger.send();
Implemented the necessary adapter, WLClientLogReceiver-impl.js, with the log function based on the link.
Unfortunately, I can't see the log in messages.log. Does anyone have any ideas?
I have also tried to send the log in the analytics DB based on this link http://www-01.ibm.com/support/knowledgecenter/SSZH4A_6.2.0/com.ibm.worklight.monitor.doc/monitor/c_op_analytics_data_capture.html.
What I did is:
WL.Analytics.log( { "_activity" : "myCustomActivity" }, "My log" );
However, no new entry is added to the app_Activity_Report table. Is there something I am missing?
A couple of things:
Follow Idan's advice in his comments and be sure you're looking at the correct docs. He's right; this feature has changed quite a bit between versions.
You got 90% of the configuration, but you're missing the last little bit. Simply sending logs to your adapter is not enough for them to show in your messages.log. You need to do one of the following to get it into messages.log:
set the audit="true" attribute on the <procedure> tag in the WLClientLogReceiver.xml file, or
log the uploaded data explicitly in your adapter implementation (a sketch is shown below). Beware, however, that the WL.Logger API on the server is subject to the application server's logging-level configuration.
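For example, a minimal sketch (the log procedure's parameter names follow the client log capture documentation linked in the question; adjust them to whatever your adapter actually declares). Either enable auditing in WLClientLogReceiver.xml:

<procedure name="log" audit="true"/>

or write the payload to the server log yourself in WLClientLogReceiver-impl.js:

function log(deviceInfo, logMessages) {
    // Explicitly write the uploaded client logs to the server log (messages.log);
    // whether this actually appears depends on the server-side logging configuration.
    WL.Logger.error("Client logs from device " + JSON.stringify(deviceInfo) + ": " + JSON.stringify(logMessages));
    return true;
}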
Also, WL.Analytics.log data does not go into the reports database. The only public API that populates the database is WL.Client.logActivity. I recommend sticking with the WL.Logger and WL.Analytics APIs.