Dashboard shows 79 Succeeded jobs but where is the list? - hangfire

When I select the "Succeeded" menu in the dashboard, it only shows a single job, yet there is a number beside the text "Succeeded" indicating the number of jobs that have executed without error. How do I see those?

Hangfire automatically clears succeeded jobs after a certain amount of time (1 day by default).
The number beside "Succeeded" is the total number of jobs that have succeeded since the beginning.
See this answer from the Hangfire forum
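If you want succeeded jobs to stay visible in the dashboard for longer, one common option (a sketch only, not something taken from the linked answer) is to extend the expiration timeout with a Hangfire state filter; the 7-day window below is my own choice:

using System;
using Hangfire.Common;
using Hangfire.States;
using Hangfire.Storage;

// Keeps finished job records (and therefore the dashboard entries) around
// for 7 days instead of the default of about 1 day.
public class ProlongExpirationTimeAttribute : JobFilterAttribute, IApplyStateFilter
{
    public void OnStateApplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
    {
        // JobExpirationTimeout controls how long the job record is kept
        // after it reaches a final state such as Succeeded.
        context.JobExpirationTimeout = TimeSpan.FromDays(7);
    }

    public void OnStateUnapplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
    {
        context.JobExpirationTimeout = TimeSpan.FromDays(7);
    }
}

// Register it globally at startup:
// GlobalJobFilters.Filters.Add(new ProlongExpirationTimeAttribute());

Note that this only delays the cleanup; the counter beside "Succeeded" will still keep growing beyond what the list shows.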


report scheduler system design using database as master

Problem
we have ~50k scheduled financial reports that we periodically deliver to clients via email
each report has its own delivery frequency (date & time, as configured by clients):
weekly
daily
hourly
weekdays only
etc.
Current architecture
we have a table called report_metadata that holds report information
report_id
report_name
report_type
report_details
next_run_time
last_run_time
etc...
every week, all 6 instances of our scheduler service poll the report_metadata database, extract metadata for all reports that are to be delivered in the following week, and put them into an in-memory timed queue.
Only in the master/leader instance (which is one of the 6 instances):
data in the timed-queue is popped at the appropriate time
processed
a few API calls are made to get a fully-complete and current/up-to-date report
and the report is emailed to clients
the other 5 instances do nothing - they simply exist for redundancy
Proposed architecture
Numbers:
db can handle up to 1000 concurrent connections - which is good enough
the total number of existing reports (~50k) is unlikely to get much larger in the near/distant future
Solution:
instead of polling the report_metadata db every week and storing data in an in-memory timed queue, all 6 instances will poll the report_metadata db every 60 seconds (with a 10 s offset for each instance)
on average the scheduler will attempt to pick up work every 10 seconds
data for any single report whose next_run_time is in the past is extracted, the table row is locked, and the report is processed/delivered to clients by that specific instance
after the report is successfully processed, the table row is unlocked and the next_run_time, last_run_time, etc. for the report are updated
In general, the database serves as the master, individual instances of the process can work independently and the database ensures they do not overlap.
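A rough sketch of the claim-and-process step I have in mind is below (this assumes SQL Server, plain ADO.NET and a bigint report_id, which may not match our actual stack; BuildAndEmailReport is a hypothetical helper, and the next_run_time calculation would really depend on each report's configured frequency):

using System;
using System.Data.SqlClient;

static void ClaimAndProcessOneReport(string connectionString)
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();
        using (var tx = conn.BeginTransaction())
        {
            // UPDLOCK + READPAST: this instance locks a due row while the
            // other 5 instances skip it instead of blocking behind it.
            object result;
            using (var select = new SqlCommand(
                @"SELECT TOP (1) report_id
                    FROM report_metadata WITH (UPDLOCK, READPAST, ROWLOCK)
                   WHERE next_run_time <= SYSUTCDATETIME()
                   ORDER BY next_run_time", conn, tx))
            {
                result = select.ExecuteScalar();
            }
            if (result == null) { tx.Rollback(); return; }   // nothing is due
            long reportId = (long)result;

            // Call the report APIs and email the client; the row stays locked
            // until the transaction commits, so no other instance can pick up
            // the same report_id in the meantime.
            // BuildAndEmailReport(reportId);

            using (var update = new SqlCommand(
                @"UPDATE report_metadata
                     SET last_run_time = SYSUTCDATETIME(),
                         next_run_time = DATEADD(DAY, 1, next_run_time)
                   WHERE report_id = @id", conn, tx))
            {
                update.Parameters.AddWithValue("@id", reportId);
                update.ExecuteNonQuery();
            }

            tx.Commit();   // releases the row lock
        }
    }
}

One thing I am unsure about is that the row lock is held for the entire API-call/email step, which is part of what I would like feedback on.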
It would help if you could let me know:
whether the proposed architecture is a good/correct solution
which table columns can/should be indexed
any other considerations
I have worked on a different kind of scheduler for a program that reported analyses at specific moments of the month/week, and what I did was group the reports into so-called business-cycle-based time moments. These moments are things like "start of a new week", "start of the month", "start/end of a day/week/month/quarter/year". So I standardised the moments at which reports are sent and added the report ids to a table that carries the details of each report. You can then add things to a cycle or remove them when needed, which you could do by adding a tag such as EOD (end of day), EOM (end of month), SOW (start of week), etc.
So you could index the moments at which the clients want to receive the reports and build on that track. Hope this comment helps you with your challenge.
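As a rough illustration (the tag names follow the abbreviations above, but the exact cut-off times and the C# shape are my own assumptions), such a tag can drive the next delivery moment directly:

using System;

// Compute the next delivery moment for a report tagged with a
// business-cycle label such as EOD, SOW or EOM.
static DateTime NextRun(string cycleTag, DateTime fromUtc)
{
    switch (cycleTag)
    {
        case "EOD":   // end of day: midnight after fromUtc
            return fromUtc.Date.AddDays(1);
        case "SOW":   // start of week: the next Monday at 00:00
            int daysUntilMonday = ((int)DayOfWeek.Monday - (int)fromUtc.DayOfWeek + 7) % 7;
            return fromUtc.Date.AddDays(daysUntilMonday == 0 ? 7 : daysUntilMonday);
        case "EOM":   // end of month: first moment of the next month
            return new DateTime(fromUtc.Year, fromUtc.Month, 1).AddMonths(1);
        default:
            throw new ArgumentException("Unknown cycle tag: " + cycleTag);
    }
}

Indexing on the tag (or on a precomputed next moment derived from it) then gives you the track to query on.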
It seems fine to have all 6 instances simply query that metadata table to check which report is the next one to process, as you are suggesting.
It seems odd though to have a staggered approach where each server checks once every 60 seconds, offset by 10 seconds. You have 6 servers now, but that may change. Also, I don't understand the "locking" you are suggesting; why not simply set a flag on the row such as [State] = "processing", so the next scheduler knows to skip that row and move on to the next available one. Once a run is processed, you can simply update a [Date_last_processed] column, or maybe something like [last_cycle_complete] = 'YES'.
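For example (just a sketch, assuming SQL Server syntax, a bigint report_id, and the [State] column suggested above), the claim can be a single atomic UPDATE, so no long-lived lock is held while the report itself is being built and emailed:

using System.Data.SqlClient;

// Atomically claim one due report by flipping its [State] flag; only the
// instance whose UPDATE actually changes a row gets a report_id back.
static long? ClaimNextReport(SqlConnection conn)
{
    const string sql = @"
        UPDATE TOP (1) report_metadata
           SET [State] = 'processing'
        OUTPUT inserted.report_id
         WHERE next_run_time <= SYSUTCDATETIME()
           AND ([State] IS NULL OR [State] <> 'processing')";

    using (var cmd = new SqlCommand(sql, conn))
    {
        var result = cmd.ExecuteScalar();   // null when nothing is due
        return result == null ? (long?)null : (long)result;
    }
}

(UPDATE TOP (1) does not guarantee which due row is picked first; if strict next_run_time ordering matters, the same idea can be written with an updatable CTE that has an ORDER BY.)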
Alternatively you could have one server process that goes through the table and, for each available row, sends it off to one of the instances in a round-robin fashion (or keeps track of who is busy and who isn't).

Splunk Failed Login Report

I am relatively new to Splunk and I am trying to create a report that will display a hostname and the number of times that host failed to log in within the past five minutes, when it failed 3 or more times. The only way I was able to get the initial search results I want is to look only within the past 5 minutes, as you can see in my query:
index="wineventlog" EventCode=4625 earliest=-5min | stats count by host,_time | stats count by host | search count > 2
This returns the host and the count. The issue is that if I use this query in my report, it can run every five minutes, but the hosts that were listed previously get removed because they are no longer included in the search results.
I found ways to generate logs that I can then search for separately (http://docs.splunk.com/Documentation/Splunk/6.6.2/Alert/LogEvents) but it didn't work the way I expected.
I am looking for an answer to any of these questions that can help me get the intended results:
Can my original search be improved so that it still only counts failed logins that occurred within a 5-minute window, but can be run over any time period?
Is there a way to send the results from the query I already have to a report, where the results will not be cleared out when the search is run again?
Is there any other option I haven't considered to achieve the desired result?
If you only care about the last 5 minutes then search only the last 5 minutes. Searching more is just wasting resources.
Consider writing your results to a summary index (using collect) with a scheduled search and have your report/dashboard display values from the summary index.

Using Linq to bind dependent SQL rows

[Screenshot of the task table showing taskID, activateTaskId and inactive columns]
I'm trying to find an efficient way to select all dependencies. The idea behind this table is:
Each taskID is a task that will be displayed to the user.
When taskid 22180 is selected by the user and completed, it will automatically enable the other tasks (activateTaskId = 22180) to become available to the user, hence inactive = 0. Up to that point everything is fine.
What I need help with is when you have subsequent tasks.
So when taskid 22180 is selected and completed by the user, the following tasks become available (22181, 22182, 22185 and 22186). As you can see, if the user then selects 22182, the task dependent on it is 22183, and subsequently 22184 will follow.
How would I efficiently select all the tasks that branch off the initial one (22182), no matter how many may exist?
Thank you
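A minimal sketch of one way to do this, assuming the rows are loaded into memory as a class with TaskId and ActivateTaskId properties (the names are guessed from the description, since the real schema is only shown in the screenshot):

using System.Collections.Generic;
using System.Linq;

public class TaskRow
{
    public int TaskId { get; set; }
    public int? ActivateTaskId { get; set; }   // null for root tasks
    public bool Inactive { get; set; }
}

public static class TaskDependencies
{
    // Breadth-first walk: start from rootTaskId and keep following
    // ActivateTaskId links until no new dependents turn up.
    public static List<TaskRow> GetAllDependents(IEnumerable<TaskRow> allTasks, int rootTaskId)
    {
        var tasks = allTasks.ToList();
        var result = new List<TaskRow>();
        var queue = new Queue<int>();
        var seen = new HashSet<int> { rootTaskId };
        queue.Enqueue(rootTaskId);

        while (queue.Count > 0)
        {
            int current = queue.Dequeue();
            foreach (var child in tasks.Where(t => t.ActivateTaskId == current))
            {
                if (seen.Add(child.TaskId))   // skip anything already visited
                {
                    result.Add(child);
                    queue.Enqueue(child.TaskId);
                }
            }
        }
        return result;
    }
}

// Usage: var chain = TaskDependencies.GetAllDependents(rows, 22182);

If the table is large, the same traversal can instead be pushed down to the database with a recursive common table expression rather than loading every row into memory.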

Updating a single SharePoint 2010 list item through multiple instances of a workflow

I have a list called "Tasks" with a number column called "Count".
On the "Workflow Tasks" list, I have a workflow that gets triggered on item added or item changed. When a workflow task is created/edited, the workflow checks the status of the task and either adds 1 to or subtracts 1 from the "Count" column on "Tasks".
If I add a single task to the workflow task list, the workflow picks it up and adds 1 to the "Count" field perfectly. However, if I add two tasks to the workflow task list, one after the other, the second task's workflow results in an error.
It's almost like I'm unable to update the "Count" field multiple times. I assumed that the row was somehow "locked" while the first instance of the workflow was updating it, so I added a pause, which didn't help. (I guess because the two instances pause at the same time.) I then added another column to the "Tasks" list called "Busy" and set this to "Yes" while the first instance updated the row. When the second instance runs, it first checks whether "Busy" is "Yes" and, if so, pauses for a duration and then carries on. This still does not work.
Is my assumption of row locking correct? Or what am I missing?
TIA!
Edit: I don't have access to the error logs and the error simply states "An error has occurred in [Workflow Name]".
Do you have many workflows and only one task to change? Does the workflow run when the task is changed or when the workflow list is changed?

How to create a simple "Expires" workflow to repeat in Sharepoint 2010

I noticed that the "Announcements" web part has the "Expires" feature, but any other links web part I create does not have it. I wanted to add "Expires" to my custom links web part. So far I have gotten it to delete itself with a simple
If Current Item:Expires is less than Today
Delete item in Current Item
then Pause for 0 days, 0 hours, 1 minutes
But I realized it does not repeat itself; it only runs when the item is created. How would I get this workflow to run every day as a background process?
Update: Workflow I'm trying
Step 1
If Current Item:Expires is less than Today
Delete item in Current Item
Step 2
Pause until Current Item:Expires
then Pause for 1 days, 0 hours, 0 minutes
then Delete item in Current Item
For this, I think you could just set the workflow to run on item creation. Since you have a field that represents when the item should be deleted, you could do this in the workflow:
add a condition on the expire date
(I'm not sure if the condition is a best practice though; I do it so that the workflow goes to the action every time)
then add a pause action and, in its "this time" value, place the field "Expires"
then it would proceed to deleting the item
What happens here is that the item is left pending until the date you specified in your "Expires" field, and then it automatically deletes the item.