How to create a simple "Expires" workflow that repeats in SharePoint 2010

I noticed that the "Announcements" web part has the "Expires" feature, but any other links web part I create does not have it. I wanted to add "Expires" to my custom links web part. So far I've gotten it to delete the item with a simple
If Current Item:Expires is less than Today
Delete item in Current Item
then Pause for 0 days, 0 hours, 1 minutes
But I realized it does not repeat; it only runs when the item is created. How would I get this workflow to run every day as a background process?
Update: Workflow I'm trying
Step 1
If Current Item:Expires is less than Today
Delete item in Current Item
Step 2
Pause until Current Item:Expires
then Pause for 1 days, 0 hours, 0 minutes
then Delete item in Current Item

For this, I think you could just set the workflow to run on item creation. Since you have a field that represents when the item should be deleted, you could do this in the workflow:
if expire date
I'm not sure the condition is a best practice, though; I add it so that the workflow reaches the action every time.
Then use a "Pause until Date" action and, in the "this time" value, put your "Expires" field. The workflow will then proceed to deleting the item.
What happens here is that the item is left pending until the date you specified in your "Expires" field, and then the workflow automatically deletes the item.

Related

report scheduler system design using database as master

Problem
we have ~50k scheduled financial reports that we periodically deliver to clients via email
each report has its own delivery frequency (date & time, as configured by clients):
weekly
daily
hourly
weekdays only
etc.
Current architecture
we have a table called report_metadata that holds report information
report_id
report_name
report_type
report_details
next_run_time
last_run_time
etc...
Every week, all 6 instances of our scheduler service poll the report_metadata database, extract metadata for all reports that are to be delivered in the following week, and put them in an in-memory timed queue.
Only in the master/leader instance (which is one of the 6 instances):
data in the timed-queue is popped at the appropriate time
processed
a few API calls are made to get a fully-complete and current/up-to-date report
and the report is emailed to clients
the other 5 instances do nothing - they simply exist for redundancy
Proposed architecture
Numbers:
db can handle up to 1000 concurrent connections - which is good enough
total existing report number (~50k) is unlikely to get much larger in the near/distant future
Solution:
instead of polling the report_metadata db every week and storing data in a timed-queue in-memory, all 6 instances will poll the report_metadata db every 60 seconds (with a 10 s offset for each instance)
on average the scheduler will attempt to pick up work every 10 seconds
data for any single report whose next_run_time is in the past is extracted, the table row is locked, and the report is processed/delivered to clients by that specific instance
after the report is successfully processed, the table row is unlocked and next_run_time, last_run_time, etc. for the report are updated
In general, the database serves as the master, individual instances of the process can work independently and the database ensures they do not overlap.
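Concretely, the claim step I have in mind would be something like the following. This is only a sketch, assuming a SQL Server-style database; the locked_by/locked_at columns and the index are illustrative additions, not existing schema:

    DECLARE @instance_id INT = 3;  -- which of the 6 instances is running this

    -- Atomically claim one due report; READPAST makes the other instances
    -- skip rows that are already locked instead of blocking on them.
    UPDATE TOP (1) report_metadata WITH (ROWLOCK, READPAST)
    SET    locked_by = @instance_id,
           locked_at = GETUTCDATE()
    OUTPUT inserted.report_id, inserted.report_details
    WHERE  next_run_time <= GETUTCDATE()
      AND  locked_by IS NULL;

    -- next_run_time is the hot predicate of every poll, so index it:
    CREATE INDEX IX_report_metadata_next_run
        ON report_metadata (next_run_time);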
It would help if you could let me know:
whether the proposed architecture is a good/correct solution
which table columns can/should be indexed
any other considerations
I have worked on a different kind of scheduler for a program that reported analyses at specific moments of the month/week. What I did was combine the reports into so-called business-cycle-based time moments: these moments are "start of a new week", "start of the month", "start/end of a day/week/month/quarter/year". So I standardised the moments of sending the reports and added the IDs to a table that carries the details of each report. You can then add things to a cycle or remove them as needed, for example by adding a tag like EOD (end of day), EOM (end of month), SOW (start of week), etc.
So you could index the moments when the clients want to receive the reports and build on that track. I hope this comment helps you with your challenge.
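A rough sketch of that idea (the cycle_tag column and all names here are illustrative, not your schema):

    -- Tag each report with the business-cycle moment it belongs to.
    ALTER TABLE report_metadata ADD cycle_tag VARCHAR(8) NULL;  -- 'EOD', 'EOM', 'SOW', ...

    -- Index the tag so a whole cycle can be pulled in one pass.
    CREATE INDEX IX_report_metadata_cycle_tag
        ON report_metadata (cycle_tag);

    -- At the start of a cycle, fetch everything due for it.
    SELECT report_id, report_details
    FROM   report_metadata
    WHERE  cycle_tag = 'EOM';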
It seems fine to have all 6 instances simply query that metadata table to check which report to process next, as you are suggesting.
It seems odd, though, to have a staggered approach with a check once every 60 seconds offset by 10 seconds per server. You have 6 servers now, but that may change. Also, I don't understand the "locking" you are suggesting; why not simply set a flag on the row such as [State] = "processing"? Then the next scheduler knows to skip that row and move on to the next available one. Once a run is processed, you can simply update a [Date_last_processed] column, or maybe something like [last_cycle_complete] = 'YES'.
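For example, the flag can be flipped with a single guarded update, so only one instance ever wins a given row. This is a sketch using the column names above; the 'available' state and @report_id are assumptions:

    DECLARE @report_id INT = 42;  -- the row an instance is trying to claim

    -- Claim the row: only one caller's UPDATE will match the WHERE clause.
    UPDATE report_metadata
    SET    [State] = 'processing'
    OUTPUT inserted.report_id
    WHERE  report_id = @report_id
      AND  [State] = 'available';

    -- After a successful run, release the row and record the completion.
    UPDATE report_metadata
    SET    [State] = 'available',
           [Date_last_processed] = GETUTCDATE()
    WHERE  report_id = @report_id;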
Alternatively, you could have one server process go through the table and, for each available row, send it off to one of the instances in round-robin fashion (or keep track of who is busy and who isn't).

Best practice for pagination based on item updated time

Let's say I have 30 items in my db, and client A makes an API call to get the first 10 records ordered by item updated time. Now consider a use case where client B updates the 11th item by making some changes to it. When client A then makes the API call for page 2 of the pagination (items 11 to 20), the pagination breaks: because client B updated the 11th item, that item has moved to position 1 by updated time (1 becomes 2, 2 becomes 3, ... 10 becomes 11), so there is a chance that client A will receive duplicate data.
Is there a better approach for this kind of problem?
Any help would be appreciated.
I think you could retrieve all elements each time, using no pagination at all, to prevent this kind of "false information" in your table.
If visualizing the actual values of each record is mandatory, you could always add a new function to your API that works as a trigger: each time a user modifies a record, this function sends a message to all active sessions to notify them that some data has changed. As an example, think of Twitter's live feed: when a new batch of tweets is created, Twitter notifies all users to reload the page if they want to see real-time information.
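Another technique often used for exactly this duplicate-row problem (not covered above) is keyset, or cursor, pagination: the client sends the sort key of the last item it saw instead of a page number, so rows that get re-ordered cannot shift the page boundary. A sketch, assuming a SQL-style store with updated_at and id columns:

    DECLARE @last_seen_updated_at DATETIME2 = '2021-06-01 12:00:00';  -- from client A's page 1
    DECLARE @last_seen_id INT = 10;                                   -- tie-breaker from page 1

    -- "Page 2" = the next 10 rows strictly after the last item client A saw.
    SELECT TOP (10) id, payload, updated_at
    FROM   items
    WHERE  updated_at < @last_seen_updated_at
       OR (updated_at = @last_seen_updated_at AND id < @last_seen_id)
    ORDER BY updated_at DESC, id DESC;

The trade-off: an item updated after page 1 was fetched jumps ahead of the cursor, so client A gets no duplicates, but it also won't see that update until it refreshes from the top.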

Updating a single SharePoint 2010 list item through multiple instances of a workflow

I have a list called "Tasks" with a number column called "Count".
On the "Workflow Tasks" list, I have a workflow that gets triggered on item added or item changed. When a workflow task is created/edited, the workflow checks the status of the task and either adds 1 to or subtracts 1 from the "Count" column on "Tasks".
If I add a single task to the workflow task list, the workflow picks it up and adds 1 to the "Count" field perfectly. However, if I add two tasks to the workflow task list, one after the other, the second task's workflow results in an error.
It's almost like I'm unable to update the "Count" field multiple times. I assumed that the row was somehow "locked" while the first instance of the workflow was updating it, so I added a pause, which didn't help. (I guess because the two instances pause at the same time.) I then added another column to the "Tasks" list called "Busy" and set it to "Yes" while the first instance updated the row. When the second instance runs, it first checks whether "Busy" is "Yes" and, if so, pauses for a duration and then carries on. This still does not work.
Is my assumption of row locking correct? Or what am I missing?
TIA!
Edit: I don't have access to the error logs and the error simply states "An error has occurred in [Workflow Name]".
Do you have many workflows and only one task to change? Does the workflow run when the task is changed, or when the workflow list is changed?

Delete record 24 hours after insert

Is there a way to automatically delete a row 24 hours after its creation in Transact-SQL?
I'm making a site (learning experience) where the user needs to click a validation link sent by e-mail once they register. I want the users to validate themselves within 24 hours.
I suppose what I'd need is a trigger, but I'm really not sure on the syntax, nor if it is even possible.
I'm not sure of your schema but I would do it a different way. I would have a date/time against the database record that corresponds to the validation link. When they click the link, verify that the date and time of the database record is within 24 hours of the current time. If so, allow it, otherwise reject it.
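In T-SQL that check might look something like this (the table and column names are assumptions about your schema):

    DECLARE @token VARCHAR(64) = 'abc123';  -- token from the clicked link

    -- Accept the click only if the registration is younger than 24 hours.
    SELECT CASE
             WHEN created_at >= DATEADD(HOUR, -24, GETUTCDATE()) THEN 1
             ELSE 0
           END AS is_still_valid
    FROM   registrations
    WHERE  validation_token = @token;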
Q: Is there a way to automatically delete a row 24 hours after its creation in Transact-SQL?
A: Sure. Write a "sqlcmd" script, wrap it in a .bat file, and invoke it from Windows Scheduled Tasks:
http://windows.microsoft.com/en-US/windows7/schedule-a-task
Alternatively, depending on your version, you could schedule the same SQL script from SQL Server Agent:
http://msdn.microsoft.com/en-us/library/ms189237.aspx
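The statement such a scheduled job runs could be as simple as this (table and column names are assumptions):

    -- Purge rows that were never validated within 24 hours of creation.
    DELETE FROM registrations
    WHERE  validated = 0
      AND  created_at < DATEADD(HOUR, -24, GETUTCDATE());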
Putting a different spin on things:
When the user clicks your link, you can check whether the current time (with respect to MSSQL) is more than 24 hours after the record was created. If so, you'll reply with a "Too late" message (rather than validating the entry).
In any case - you absolutely, completely, totally, do NOT want to use a trigger!

Dealing with gaps in timeline

I'm looking for some assistance sorting out the logic for how I am going to deal with gaps in a feed timeline, pretty much like what you would see in various Twitter clients. I am not creating a Twitter client, however, so it won't be specific to that API. I'm using our own API, so I can possibly make some changes to it to accommodate this.
I'm saving each feed item in Core Data. For persistence, I'd like to keep the feed items around. Let's say I fetch 50 feed items from my server. The next time the user launches the app, I make a request for the latest feed items, am returned 50 feed items, and do a fetch to display the feed items in a table view.
Enough time may have passed between the two server requests that a time gap exists between the two sets of feed items.
50 new feed items (request 2)
----- gap ------
50 older feed items (request 1)
* end of items in core data - just load more *
I keep track of whether a gap exists by comparing the oldest timestamp of the feed items in request 2 with the newest timestamp in the set of feed items from request 1. If the oldest timestamp from request 2 is greater than the newest timestamp from request 1, I can assume that a gap exists and I should display a cell with a button to load 50 more. If the oldest timestamp from request 2 is less than or equal to the newest timestamp from request 1, the gap has been filled and there's no need to display the loader.
My first issue is the entire logic around keeping track of whether or not to display the "Load more" cell. How would I know where to display this gap? Do I store it as the same NSManagedObject entity as my feed items, with an extra bool plus a timestamp that lies between the two above, and then change the UI accordingly? Would there be another, better solution here?
My second issue is related to multiple gaps:
50 new feed items
----- gap ------
174 older feed items
----- gap ------
53 older feed items
* end of items in core data - just load more *
I suppose it would help in this case to go with an NSManagedObject entity, so I can just do regular fetches in Core Data and, if gap objects show up amongst the results, display them as loading cells and remove them accordingly (once the gaps between any two sets have been filled).
I'd ultimately want to wipe the objects after a certain time has passed, as the user probably wouldn't go back that far in time; if they do, I can always fetch the items from my server again.
Any experience and advice anybody has with this subject would be greatly appreciated!