Setting up alerts for metrics in Splunk

I'm sending data to Splunk and everything is working just fine: I can see the data I'm sending, run a query, and get results. Right now I'm only using a test data set, but eventually people will be sending their own fields (as well as the mandatory ones). My question is: since I don't know what kind of data they will be sending, can I still set up alerts for them? Can I create something general?

It's pretty hard to create a generic alert that's actually useful. You may be able to craft something using the mandatory fields, but it may not be all that helpful.
If you're opposed to letting users create their own alerts, then have them come to you with what they want.

Related

Offline tracking with gtag.js

With ga.js I could track offline activity and send it to my custom server, or store it in localStorage, by overriding the sendHitTask.
Like this:
https://www.google.se/amp/s/www.simoahava.com/amp/analytics/track-users-who-are-offline-in-google-analytics/
How can I achieve the same with gtag.js so I can customize where and in what data structure to send hits?
I know it's been a while since you've asked this question, but I wanted to share my findings regarding this matter:
Although gtag.js doesn't support sendHitTask (or an equivalent, as far as I know), it does support the transport_url parameter.
With this parameter you can set an alternate transport URL to send the data to. It was originally designed for server-side tracking setups, but it can also be used to capture the requests to the /g/collect/ or /r/collect/ endpoints on your own server.
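To make that concrete, here is a minimal sketch of the configuration (the measurement ID G-XXXXXXX and the endpoint https://collect.example.com are placeholders; your server has to accept the /g/collect requests that gtag.js will send there):

    // Provided globally by the standard gtag.js snippet.
    declare function gtag(...args: unknown[]): void;

    gtag('config', 'G-XXXXXXX', {
      // Send measurement traffic to this origin instead of
      // https://www.google-analytics.com.
      transport_url: 'https://collect.example.com',
    });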
I know this only partially resolves your issue, because you can't fully control where the data is sent, but at least it lets you change the server that receives it.
Hope it helps!

Filter Pushes by Type

I'm working on building a bash script that will listen for pushes of type link, extract just the URL, and write it to a local file.
The idea is to be able to gather links I find and save them for automated processing later.
However, it doesn't appear that the API allows you to filter by type - only by active, modified_after, cursor, and limit - which seems a bit sparse to me, but oh well.
Since type isn't a filterable option, I'm assuming I'll need an additional step that parses the returned JSON, filters out any pushes that aren't the type I'm looking for, and then passes those on to another step that extracts the link and writes it to a file.
Before I go bashing (heh) my head against the wall writing this, is anyone aware of something that will already handle this, or at least keep me from re-inventing part of the wheel?
I've solved this by using jq to process and filter the JSON payload, which lets me select only the types of pushes that I want.
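For reference, the jq filter comes down to something like .pushes[] | select(.type == "link") | .url. If anyone prefers to keep the filtering in a single program rather than a shell pipeline, here is a minimal TypeScript sketch of the same idea (it assumes Pushbullet's /v2/pushes endpoint and a PUSHBULLET_TOKEN environment variable; adapt both to your setup):

    // Sketch: fetch recent pushes and keep only the "link" ones.
    interface Push { type: string; url?: string; }

    async function linkUrls(): Promise<string[]> {
      const res = await fetch('https://api.pushbullet.com/v2/pushes?limit=100', {
        headers: { 'Access-Token': process.env.PUSHBULLET_TOKEN ?? '' },
      });
      const body = (await res.json()) as { pushes: Push[] };
      // The API can't filter by type, so do it client-side.
      return body.pushes
        .filter((p) => p.type === 'link' && p.url !== undefined)
        .map((p) => p.url as string);
    }

    // Print one URL per line, ready to append to a local file.
    linkUrls().then((urls) => console.log(urls.join('\n')));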

Counting the number of users or executions of an application

I made a program that gets data from the clipboard and saves it in a string variable. It then looks for specific words in that string and generates several URLs. Afterwards it opens the browser and shows each URL in its own tab.
Some of my friends already use this program frequently, and I want to have some statistics about how often. A simple counter variable would be enough, but I need a way to access it.
I came up with two options that could work:
I could send an email to a specific address every time my app is executed. Then I can track the number of uses by manually or automatically counting the emails in the mailbox. I think this would be a very dirty solution.
I could create and publish a website containing a counter that my application refreshes. This solution is a bit better, I think, but a lot of work for a single counter.
Do you have better ideas to solve my problem, or is one of mine already a good one?
Thank you in advance!
You can use the Google Analytics Measurement Protocol. It gives you usage statistics for your application through Google Analytics: you can even see geographic stats, version distribution, and crash reports. It's easy to use from .NET; it's just a matter of sending HTTP requests to Google.
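As an illustration, here is a minimal sketch of a single Measurement Protocol (v1) hit; the same request is easy to send from .NET with HttpClient. The tracking ID UA-XXXXXXX-1 is a placeholder, and cid should be an anonymous per-installation ID you generate and persist yourself (note that newer GA4 properties use a different variant of the protocol):

    // Sketch: report a single "application launched" event.
    async function reportLaunch(clientId: string): Promise<void> {
      const payload = new URLSearchParams({
        v: '1',               // protocol version
        tid: 'UA-XXXXXXX-1',  // tracking ID (placeholder)
        cid: clientId,        // anonymous client/installation ID
        t: 'event',           // hit type
        ec: 'application',    // event category
        ea: 'launch',         // event action
      });
      await fetch('https://www.google-analytics.com/collect', {
        method: 'POST',
        body: payload,
      });
    }

    reportLaunch('35009a79-1a05-49d7-b876-2b884d0f825b').catch(console.error);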

May we use Yii flash messages in this scenario?

I haven't seen this scenario covered here:
Yii Framework: How to work with Flash Messages.
So, after user registration, I wish to redirect the user to a thank-you page where they can read more about what they should do and what will happen next. It's a fair amount of information, so adding that message to an already existing page is not an option because it would get too noisy. A temporarily displayed message isn't an option either, because it's a lot of text to read.
On cases like this:
Should we still use flash messages, with a conditional so that what normally exists on the page stays hidden while a success flash message is displayed?
OR
Should we simply redirect to a dedicated thank-you view (by creating a corresponding thankyou action)?
Is there a better option?
You could use a flash message. But these are really for things like "Your account is now created".
If you want to include a good amount of information, I think it best to have a separate thankyou action/view that people are redirected to after the sign up process is complete.

How to Verify whether a Robot is Entering Information

I have a web form which users fill in; the info is sent to the server and stored in a database. I am worried that robots might just fill in the form and I will end up with a database full of useless records. How can I prevent robots from filling in my forms? I am thinking of something like Stack Overflow's robot detection, where if it thinks you are a robot, it asks you to verify that you are not. Is there a server-side API in Perl, Java, or PHP?
There are several solutions.
Use a CAPTCHA. SO uses reCAPTCHA as far as I know.
Add an extra field to your form and hide it with CSS (display: none). A normal user will not see this field and therefore will not fill it in. At submission you check whether this field is empty; if it isn't, you are dealing with a robot that has carefully filled out every form field. This technique is usually referred to as a "honeypot" (see the first sketch after this list).
Add a JavaScript timer. On page load it starts at zero and counts up as time passes. A normal user will read and fill out your form for a while before submitting it; a robot will fill out and submit the form immediately upon receiving it. At submission you check how far the value has moved from zero: if a plausible amount of time has passed, it is likely a real user; if you see just a couple of seconds (or no value at all, because many robots don't execute JavaScript), it is likely a robot. This will only work, however, if you decide to require that your users have JavaScript enabled in order to perform "write" operations (see the second sketch after this list).
There are other techniques for sure. But these are quite simple and effective.
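Here is a minimal sketch of the honeypot check from the second point, using Node with Express (the route, port, and the website field name are made up for illustration):

    import express from 'express';

    const app = express();
    app.use(express.urlencoded({ extended: false })); // parse HTML form posts

    // The form contains one field hidden with CSS, e.g.:
    //   <input type="text" name="website" style="display: none">
    // Humans never see it; naive bots fill in every field they find.
    app.post('/submit', (req, res) => {
      if (req.body.website) {
        res.status(400).send('Rejected.'); // honeypot filled in: almost certainly a bot
        return;
      }
      // ...store the real form fields here...
      res.send('Thanks!');
    });

    app.listen(3000);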
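And a browser-side sketch of the timer check from the third point (the hidden elapsed field and the 5-second threshold mentioned in the comment are arbitrary choices):

    // Runs in the browser. Remember when the page loaded; on submit, write the
    // elapsed seconds into a hidden field for the server to sanity-check.
    const loadedAt = Date.now();

    document.querySelector('form')?.addEventListener('submit', () => {
      const elapsed = Math.round((Date.now() - loadedAt) / 1000);
      const field = document.querySelector<HTMLInputElement>('input[name="elapsed"]');
      if (field) field.value = String(elapsed);
      // Server side: treat elapsed < 5 (or a missing value) as a likely bot.
    });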
You can use reCAPTCHA (the same as Stack Overflow) - they have libraries for a number of programming languages.
I've always preferred the honeypot CAPTCHA (article by Phil Haack), as it's less invasive to the user.
CAPTCHAs bring accessibility problems and will ultimately be defeated by software recognition.
I recommend reading this short article about bot traps, which include hidden fields, as Matthew Vines and New in town already suggested.
In any case, you are still free to use both CAPTCHAs and bot traps.
CAPTCHA is great. The other thing you can do, which will prevent 99% of your robot traffic yet not annoy your users, is to validate fields.
On my site, I check for text in fields like zip code and phone number. That has removed all of the non-targeted robot misinformation.
You could create a two-step system in which a user fills in the form but then must reply to an e-mail to "activate" the record within a set period of time - say 24 hours.
In the back end, instead of populating your current table with all the form submissions, you could put them into a temporary table that automatically deletes any row older than your time allotment. Unless you have a serious bot problem, the table shouldn't get that big, especially if the first form is just a few fields.
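To sketch that pending-then-activate flow (kept in memory for brevity; a real version would use the temporary table described above, and the mailer call is a hypothetical stand-in):

    import { randomUUID } from 'node:crypto';

    const TTL_MS = 24 * 60 * 60 * 1000; // 24-hour activation window
    const pending = new Map<string, { form: unknown; createdAt: number }>();

    // Step 1: stash the submission and mail out an activation token.
    function submitForm(form: unknown): string {
      const token = randomUUID();
      pending.set(token, { form, createdAt: Date.now() });
      // sendActivationMail(token); // hypothetical mailer call
      return token;
    }

    // Step 2: only a timely reply moves the record to permanent storage.
    function activate(token: string): boolean {
      const entry = pending.get(token);
      if (!entry || Date.now() - entry.createdAt > TTL_MS) return false;
      pending.delete(token);
      // ...insert entry.form into the permanent table here...
      return true;
    }

    // Hourly sweep, playing the role of the self-cleaning temporary table.
    setInterval(() => {
      for (const [token, { createdAt }] of pending) {
        if (Date.now() - createdAt > TTL_MS) pending.delete(token);
      }
    }, 60 * 60 * 1000);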
A benefit of this approach is that you don't have to use CAPTCHA or some other technology like that which might create accessibility problems.