How to find traffic and number of hits per URL in Splunk?

I have been using Splunk as a log monitoring tool, but I recently learned that it can also report network traffic and the number of hits per URL.
For example, I have a URL like the one below and I want to know the total number of hits that occurred over the last week:
https://stackoverflow.com/
What would be the query that I need to write to get the number of hits (count) per day/period of time in Splunk?
I tried this:
"url" | stats sum(linecount) as Total
which returns a hit count of >1000 for the last 15 minutes, which is not correct.
Thanks in advance.

The search will be quicker and more accurate if you specify the index, host, and site name.
index name = the environment of the application, e.g. SIT/UAT/QA/pre-prod/production
host name = the instance on which the application is hosted
site name = in my example, https://stackoverflow.com
Query = index="SIT*" host="*host_name*" "https://stackoverflow.com" "/questions" | stats sum(linecount) as Total
Executing the above query returns the number of hits for the stackoverflow.com/questions URL.
The query gives accurate results, and Splunk's time range picker lets you select the period of time.
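To break the total down per day, as the question asks, the same base search can feed timechart instead of stats. This is a sketch reusing the index/host/site filters from the answer above:

```spl
index="SIT*" host="*host_name*" "https://stackoverflow.com" "/questions"
| timechart span=1d count as Total
```

timechart buckets events by _time, so each row is one day's hit count; span=1h would give hourly buckets instead.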

Try one of these queries to return the total number of hits:
"url" | stats count
Or, if your events already contain a count field:
"url" | stats sum(count) as total

The query below is a good example of getting the request count for a site:
index="bcom" "https://www.bloomingdales.com/" | stats sum(linecount) as Total

Related

Is there a way to stop CloudWatchLogsInsight from searching after a first match?

I am searching through a week's worth of flow logs to check whether an IP exists; however, whenever there's a match, the query keeps running, consuming resources and time.
How do I query and return only the latest event matching an IP address
I have set limit = 1, but the query still continues.
sample query:
filter isIpv4InSubnet(srcAddr,"127.0.0.1/32") | limit 1

Splunk query using time an event occurs in one index and using it as a starting point to filter events in another index

What's the most efficient way to perform the following search?
Event occurs on index A at X time
Take X time and use it as a start point in index B
Search all occurrences of a field within index B, with additional filters, 5 minutes after that initial event time that occurred from index A
Example using Windows logs: after every successful login via event ID 4624 (index="security") for a particular user on a host, search all Sysmon event ID 1 (index="sysmon") process creation events on that specific host that occurred in a 5 minute window after the login event. My vision is to examine user logins on a particular host and correlate subsequent process creation events over a short period of time.
I've been trying to play with join, stats min(_time), and eval starttimeu, but haven't had any success. Any help/pointers would be greatly appreciated!
Have you tried map? The map command runs a search for each result of another search. For example:
index=security sourcetype=wineventlog EventCode=4624
```Set the latest time for the map to event time + 5 minutes (300 seconds)```
| eval latest=_time+300
| map search="search index=sysmon host=$host$ earliest=$_time$ latest=$latest$"
Names within $ are field names from the main search.
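Putting the pieces together for the 4624/Sysmon scenario in the question, a complete search might look like the sketch below. The user filter and the Sysmon field names (Image, CommandLine) are assumptions that depend on your add-ons and field extractions:

```spl
index=security sourcetype=wineventlog EventCode=4624 user=jdoe
| eval latest=_time+300
| map maxsearches=100 search="search index=sysmon EventCode=1 host=$host$ earliest=$_time$ latest=$latest$
    | stats count by Image, CommandLine"
```

maxsearches caps how many secondary searches map will launch; without it, map stops after its default limit of 10 results.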

How to write a Splunk query to count response codes of multiple endpoints

I'm trying to monitor performance/metrics of my application as an external system is going through a heavy data ingest. Currently, I can easily watch one endpoint using the following
index=my_index environment=prod service=myservice api/myApi1 USER=user1 earliest=07/19/2021:12:00:00 | stats count by RESPONSECODE
How can I adjust this query to include the additional endpoints I'd like to monitor? Ultimately I'd like a pie chart showing the total numbers of successes and failures across this API for the user.
Thanks all!
Edit: In the above query, api/myApi1 is the field I'm referring to. How can I include additional api/myApi# endpoints properly?
Include additional endpoints by adding them to the base query or by making the base query less specific.
index=my_index environment=prod service=myservice api/myApi1 USER IN (user1 user2 user3) earliest=07/19/2021:12:00:00
| stats count by USER, RESPONSECODE
OR
index=my_index environment=prod service=myservice api/myApi1 USER=* earliest=07/19/2021:12:00:00
| stats count by USER, RESPONSECODE
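Since the question is specifically about additional api/myApi# endpoints, the same idea applies to the endpoint term: wildcard it (or list each variant) and split the counts by a field. This is a sketch assuming an extracted field named API holds the endpoint name; adjust to how the endpoint actually appears in your events:

```spl
index=my_index environment=prod service=myservice "api/myApi*" USER=user1 earliest=07/19/2021:12:00:00
| stats count by API, RESPONSECODE
```

For the pie chart of successes versus failures, collapse the response codes first, e.g. | eval outcome=if(RESPONSECODE<400,"success","failure") | stats count by outcome.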

Splunk Failed Login Report

I am relatively new to Splunk and I am trying to create a report that displays a hostname and the number of times that host failed to log in within the past five minutes, but only when it failed 3 or more times. The only way I was able to get the initial search results I want is to look only within the past 5 minutes, as you can see in my query:
index="wineventlog" EventCode=4625 earliest=-5min | stats count by host,_time | stats count by host | search count > 2
This returns the host and the count. The issue is that if I use this query in my report, it can run every five minutes, but hosts that were listed previously are removed because they no longer appear in the search results.
I found ways to generate logs that I can then search for separately (http://docs.splunk.com/Documentation/Splunk/6.6.2/Alert/LogEvents) but it didn't work the way I expected.
I am looking for an answer to any of these questions that can help me get the intended results:
Can my original search be improved to still only get results where the failed logins were within 5 minutes but be able to search over any time period?
Is there a way to send the results from the query I already have to a report, where the results will not be cleared out when the search is run again?
Is there any other option I haven't considered to achieve the desired result?
If you only care about the last 5 minutes then search only the last 5 minutes. Searching more is just wasting resources.
Consider writing your results to a summary index (using collect) with a scheduled search and have your report/dashboard display values from the summary index.
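A minimal sketch of the summary-index approach: schedule the original search every 5 minutes and have it write its results with collect. Here summary_failed_logins is an assumed index name; it must already exist before collect can write to it:

```spl
index="wineventlog" EventCode=4625 earliest=-5min
| stats count by host, _time
| stats count by host
| where count > 2
| collect index=summary_failed_logins
```

The report then reads from the summary index over any time range, e.g. index=summary_failed_logins earliest=-24h | stats sum(count) as failures by host, so previously recorded hosts persist between runs.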

Reach and Social Reach are zero when we specify date criteria

We are trying to get unique stats from the Facebook Ads API and are having some difficulty. Currently, we can only pull the reach (unique_impressions) and the social reach (social_unique_impressions) for one day; when we specify date criteria, these fields are returned as zero. We know that unique stats aren't meant to be aggregated. How can we retrieve these values? How are they calculated?
OK, I got a response from the Facebook team: it isn't possible to get the unique stats with our own date criteria. It's a limitation!
This is a limitation of Facebook's data and is specifically mentioned in the Ad Statistics Documentation - the unique stats are only calculated on Facebook's side in 1, 7 or 28 day ranges:
Querying Unique Stats
Unique stats are available for time ranges of 1, 7 or 28 days exactly.