Log to Crashlytics with tag and priority without also sending to logcat

There are two ways to log to Crashlytics according to the documentation.
Crashlytics.log(int priority, String tag, String msg);
In addition to writing to the next crash report, it will also write to the LogCat using android.util.Log.println(priority, tag, msg).
Crashlytics.log(msg);
which will only write to the Crashlytics crash report [not logcat].
However, this second method does not allow me to set a tag and priority. Instead it automatically sets the resulting tag as "CrashlyticsCore" and priority to debug:
From Fabric dashboard:
1 | 04:24:55:100 (UTC) | D/CrashlyticsCore ...
2 | 04:24:55:101 (UTC) | D/CrashlyticsCore ...
3 | 04:24:55:121 (UTC) | D/CrashlyticsCore ...
How can I keep my actual tag and priority value? I suppose I could encode them into a custom message, but this seems ugly and would just clutter up Fabric:
String output = String.format(Locale.US,
        "Priority: %d; %s : %s", priority, tag, message);
Crashlytics.log(output);

If you need your tags and priorities in Crashlytics but want to keep Crashlytics.log(int priority, String tag, String msg) out of LogCat, I would recommend enabling a SilentLogger for Fabric:
// Create a Crashlytics kit that doesn't report crashes in debug builds
final Crashlytics crashlyticsKit = new Crashlytics.Builder()
        .core(new CrashlyticsCore.Builder()
                .disabled(BuildConfig.DEBUG)
                .build())
        .build();
// Use SilentLogger instead of DefaultLogger to avoid writing to LogCat
Fabric.with(new Fabric.Builder(this)
        .kits(crashlyticsKit)
        .logger(new SilentLogger())
        .build());

Related

Splunk HEC - Disable multiline event splitting due to timestamp

I have a multi-line event with timestamps on different lines, as shown in the example below:
[2022-02-08 08:30:23:776] [INFO] [com.example.monitoring.ServiceMonitor] Status report for services
Service 1 - Available
Service 2 - Unavailable since 2022-02-08T07:00:00 UTC
Service 3 - Available
When the log is sent to HEC, the lines are split into multiple events during the parsing phase of the Splunk data pipeline. Because of the timestamp on line 3, Splunk creates two different events.
When searching in Splunk, I see the two events shown below, even though they are supposed to be a single event.
Event 1
[2022-02-08 08:30:23:776] [INFO] [com.example.monitoring.ServiceMonitor] Status report for services
Service 1 - Available
Event 2
Service 2 - Unavailable since 2022-02-08T07:00:00 UTC
Service 3 - Available
I can solve the issue by setting DATETIME_CONFIG to NONE in props.conf, but that creates another problem: Splunk stops recognizing timestamps in the event altogether.
Is it possible to achieve the same result without disabling that property?
The trick is to set TIME_PREFIX correctly:
https://ibb.co/PCG5TqY
This makes Splunk look for timestamps only in lines that start with a "[".
Here is the entry for props.conf:
[changeme]
disabled = false
pulldown_type = true
category = Custom
DATETIME_CONFIG =
NO_BINARY_CHECK = true
TIME_PREFIX = ^\[
TIME_FORMAT = %Y-%m-%d %H:%M:%S:%3N
SHOULD_LINEMERGE = true

Can you combine multivalue fields to form a consolidated Splunk alert?

I have a Splunk search that returns several logs of the same exception, one for each ID number (from a batch process). I have no problem extracting the field from the log with a regex, and can easily build a single alert for each ID number.
Slack Message: "Reference number $result.extractedField$ has failed processing."
Since the error happens in batches, sending out an alert for every reference ID that failed would clutter up my Slack channel very quickly. Is it possible to collect all of the extracted fields and set the alert to send only one message? Like this...
Slack Message: "Reference numbers $result.listOfExtractedFields$ have failed to process."
To have a consolidated alert you need consolidated search results. Do that like this:
index=the_index_youre_searching "the class where the error occurs" "the exception you're looking for"
| stats values(*) as * by referenceID
Be sure to select the "Once" Trigger Condition in the alert setup.
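If the Slack action needs a single comma-separated token rather than a raw multivalue field, the values can also be flattened with mvjoin (a sketch: the index and search terms are the question's placeholders, and failedIDs is an illustrative field name):

```
index=the_index_youre_searching "the class where the error occurs" "the exception you're looking for"
| stats values(referenceID) as failedIDs
| eval failedIDs=mvjoin(failedIDs, ", ")
```

The alert message can then reference $result.failedIDs$ to expand to the full list in one Slack message.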

Flux from WebClient behaves differently than Flux from File.readLines

I have two different sources of IDs I have to work with. One is a file, the other is a URL. When I create the Flux from the file's lines, I can work with it perfectly well. When I swap the Flux-creating function for the one that uses WebClient....get(), I get different results; the WebClient never gets called, for some reason.
private Flux<String> retrieveIdListFromFile(String filename) {
    try {
        return Flux.fromIterable(Files.readAllLines(ResourceUtils.getFile(filename).toPath()));
    } catch (IOException e) {
        return Flux.error(e);
    }
}
Here the WebClient part...
private Flux<String> retrieveIdList() {
    return client.get()
            .uri(uriBuilder -> uriBuilder.path("capdocuments_201811v2/selectRaw")
                    .queryParam("q", "-P_Id:[* TO *]")
                    .queryParam("fq", "DateLastModified:[2010-01-01T00:00:00Z TO 2016-12-31T00:00:00Z]")
                    .queryParam("fl", "id")
                    .queryParam("rows", "10")
                    .queryParam("wt", "csv")
                    .build())
            .retrieve()
            .bodyToFlux(String.class);
}
When I do a subscribe(System.out::println) on the WebClient's flux, nothing happens. When I do a blockLast(), it works (the URL is called and data is returned). I don't understand why, how to correct this, or what I'm doing wrong.
With the flux that originates from the file, even the subscribe works fine. I had sort of thought that Fluxes were interchangeable...
When I do a retrieveIdList().log().subscribe():
INFO [main] reactor.Flux.OnAssembly.1 | onSubscribe([Fuseable] FluxOnAssembly.OnAssemblySubscriber)
INFO [main] reactor.Flux.OnAssembly.1 | request(unbounded)
When I do the same with blockLast() instead of subscribe():
INFO [main] reactor.Flux.OnAssembly.1 | onSubscribe([Fuseable] FluxOnAssembly.OnAssemblySubscriber)
INFO [main] reactor.Flux.OnAssembly.1 | request(unbounded)
INFO [reactor-http-nio-4] reactor.Flux.OnAssembly.1 | onNext(id)
...
Judging from your question update, it seems that nothing is waiting on the processing to finish. I assume this is a batch or CLI application, and not a web application?
Assuming the following:
Flux<User> users = userService.fetchAll();
Calling blockLast on a Flux will trigger the processing and block until the result is there.
Calling subscribe on it will trigger the processing asynchronously; we see the subscriber request elements in your logs, but nothing more. This probably means that the JVM exits before any elements are published - nothing is waiting on the result.
If you're effectively writing a CLI/batch application rather than processing requests within a web application, you can block on the final reactive pipeline to get the result. If you want to write that result to a file or send it to a different service, you should compose it with Reactor operators instead.
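The difference can be reproduced with nothing but the JDK. This sketch uses a CompletableFuture as a stand-in for a Flux backed by asynchronous I/O (no Reactor or WebClient types are involved, and all names are illustrative):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class SubscribeVsBlock {
    public static void main(String[] args) {
        // Stand-in for retrieveIdList(): the value is produced asynchronously
        // on a background (daemon) thread after a short delay.
        CompletableFuture<String> ids = CompletableFuture.supplyAsync(() -> {
            try {
                TimeUnit.MILLISECONDS.sleep(200);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return "id-1,id-2";
        });

        // Analogous to subscribe(System.out::println): this only registers a
        // callback and returns immediately. If main ended here, the JVM could
        // exit before the callback ever runs - exactly the symptom in the
        // question.
        CompletableFuture<Void> printed = ids.thenAccept(System.out::println);

        // Analogous to blockLast(): join() blocks the main thread until the
        // asynchronous result exists, keeping the JVM alive long enough.
        printed.join();
        System.out.println("joined: " + ids.join());
    }
}
```

The same reasoning applies to the Reactor pipeline: in a CLI application, something on the main thread has to wait for the asynchronous work, whether that is blockLast() or a latch released when the Flux completes.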

Splunk Error Log Dashboard

I have to create a dashboard in Splunk that shows error reporting from a log file:
[2011-09-12 14:13:00:605 GMT][com.abc.rest.Security][http-8080-Processor15] ERROR Unable to decrypt token [abc.com=3502639832.36895.0000; path=/] due to error: Input length must be multiple of 16 when decrypting with padded cipher
[2011-09-12 14:13:00:608 GMT][com.abc.filters.AuthenticationFilter][http-8080-Processor15] DEBUG ValidAuthToken: false
[2011-09-13 16:43:40:134 GMT][com.abc.PerfManager][http-8080-Processor13] ERROR Operation Failed: GET_ACCOUNT_ORDER [Status Code: 0150 Message: ACCESS_DENIED]
[2011-09-13 16:43:40:137 GMT][com.abc.rest.ResolvePackage][http-8080-Processor13] WARN MCE error occurred [StatusCode: 0150].
The above errors occur at different times, and I want to count them all and show a pie chart of these errors with their counts. Essentially, an error is anything that starts with ERROR.
I should also get the Top 10 warnings in the logs with their counts.
I couldn't find a good way to implement this in Splunk. Can anyone help me out with how to implement it?
Thanks!
... | rex "ERROR (?<error_type>[^\[]+)" | stats count by error_type
Something like that should work. Check out http://splunk-base.splunk.com/answers/ if it doesn't.
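For the Top 10 warnings, the same rex-then-aggregate pattern works with the top command (a sketch: the capture pattern assumes the WARN text runs up to the next "[", as in the sample log):

```
... | rex "WARN (?<warning_type>[^\[]+)" | top limit=10 warning_type
```

The stats count by error_type table from the error search can then be rendered directly as a pie chart in a dashboard panel.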

Apple iOS ASL log, polling for messages [code]

After reading these links:
Using Objective C to read log messages posted to the device console
https://developer.apple.com/library/ios/#documentation/System/Conceptual/ManPages_iPhoneOS/man3/asl.3.html
I've successfully posted messages to the ASlog using
aslmsg m = asl_new(ASL_TYPE_MSG);
asl_log(NULL, m, ASL_LEVEL_INFO, result);
The problem is that when I go to query the log there is extreme lag in getting the results. It seems to be searching everything since I started printing with NSLog earlier today.
My current code to get the information is:
q = asl_new(ASL_TYPE_QUERY);
asl_set_query(q, ASL_KEY_SENDER, "db_poc", ASL_QUERY_OP_EQUAL);
asl_set_query(q, ASL_KEY_TIME, "1306768140", ASL_QUERY_OP_GREATER);
I'm trying to get my app to send messages to the console (from JavaScript in a UIWebView). I then want to watch the console for these messages so I can send data back to the UIWebView's JavaScript code.
Are there any extra flags I can set on either the send or receive side to speed things up? Also, is there a way to clear this ASL log?
Any ideas..?
Thanks.
Have you tried creating your own aslclient using ASL_OPT_NO_DELAY?
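A sketch of that suggestion (Apple-only, since it compiles against asl.h, which has since been deprecated in favor of os_log; the facility string is an illustrative placeholder):

```c
#include <asl.h>

int main(void) {
    // ASL_OPT_NO_DELAY opens the connection to the ASL server immediately,
    // instead of lazily on the first message, which removes one source of
    // startup lag for this client.
    aslclient client = asl_open("db_poc", "com.example.facility", ASL_OPT_NO_DELAY);

    asl_log(client, NULL, ASL_LEVEL_INFO, "%s", "message from db_poc");

    asl_close(client);
    return 0;
}
```

Whether this helps with the query-side lag is worth measuring; queries issued through the same dedicated client at least avoid the default client's lazy setup.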