How to store variables for the next time a program runs - SQL

I have a program that reports how many times customers have visited between certain hours of the day. In this case, the program runs each day to find out how many customers came in between the hours of 6 and 7. The problem I'm running into is that I also need to keep a running tally of the number of customers who visited between those hours. So I need the output to look like:
Today: 5
Total: 5
Today: 5
Total: 10
Today: 5
Total: 15
I can store the info in an XML file, but I have 16 different locations I'm tracking, so that's a lot of writing to and reading from an XML file. I assume there is a better way to handle this? I basically need the program to load the value of the "total", which is today plus previous days.
I fill the values like this:
firsthour = ds.Tables(0).Rows(i).Item(x)
secondhour = ds2.Tables(0).Rows(i).Item(x)
Percentage = Math.Round(ds2.Tables(0).Rows(i).Item(x) / ds.Tables(0).Rows(i).Item(x) * 100, 2)
firsthourtotal = ds.Tables(0).Rows(i).Item(x)
secondhourtotal = ds2.Tables(0).Rows(i).Item(x)
Obviously, I need firsthourtotal and secondhourtotal to be stored for each of the 16 results in the array, to be accessed each day when the program runs.

Probably not the best way, but...
If I were doing it, I would use My.Settings with the System.Collections.Specialized.StringCollection type for the situation described.
It's nice and easy to read and write to; just remember to call My.Settings.Save() after writing to it, otherwise it will wait until you close the application before updating the record!
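Another option (a sketch, not from the thread): persist the running totals in a small file keyed by location, so each run loads the previous totals, adds today's counts, and saves them back. Shown in Python for illustration only; the load-add-save pattern is the same whether the store is My.Settings, XML, or JSON, and the file name here is an assumption.

```python
import json
import os

def add_daily_counts(today_counts, path="hourly_totals.json"):
    """Load previous per-location totals, add today's counts, save, and return totals."""
    totals = {}
    if os.path.exists(path):
        with open(path) as f:
            totals = json.load(f)
    for location, count in today_counts.items():
        totals[location] = totals.get(location, 0) + count
    with open(path, "w") as f:
        json.dump(totals, f)  # the equivalent of calling My.Settings.Save()
    return totals
```

Each of the 16 locations is just a key in the dictionary, so no extra per-location plumbing is needed.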

Related

Get Difference in Dates Over Time SQL/Excel

I am trying to get a date difference to determine the cycle time from when something arrives to when it is completed. However, I need the product to count towards the cycle time average for all days it is here. So, something along the lines of: if arrivdate='8/16' but completiondate='8/24', I need the cycle time for this product to be 1 on '8/17', 2 on '8/18', etc. until it is 8 on '8/24', and then it stops counting. I am willing to do it in either Excel or SQL, if there is a fast way to do it. Below is an example of the data in an Excel sheet.
https://drive.google.com/open?id=0B4xYGwf8uS7ZdE5YRDYzXzNuOTQ is a link to the file, as I'm not sure how to insert a table in here.
Does this suit?
If I understand your question properly, this simple formula should work. You can replace the 0 with "" to leave it blank.
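For reference, the per-day expansion the question describes can be sketched in Python (an illustration only, not the formula the answer refers to): each product contributes a running day count from the day after arrival through its completion date.

```python
from datetime import date, timedelta

def daily_cycle_times(arrival, completion):
    """Yield (day, days_elapsed) pairs from the day after arrival through completion."""
    day, elapsed = arrival + timedelta(days=1), 1
    while day <= completion:
        yield day, elapsed
        day += timedelta(days=1)
        elapsed += 1
```

For the example in the question (arrival 8/16, completion 8/24), this yields 1 on 8/17 up through 8 on 8/24, after which the product no longer counts.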

Movable window of fixed length of time - SQL

I have a database of about 100 million customer relation records and 3 million distinct clients.
I need to write a SQL script to work out which clients have registered 5 or more complaints within 30 days of each other, over the entire history of the client.
I thought that a window function would be the answer, but I haven't had any luck.
Any ideas would be useful, but efficient ones would be even better, as I have low system priority and my code takes hours to run.

Schedule algorithm for nightly SQL extract of data

I am looking for an algorithm to extract data from one system into another, but on a sliding scale. Here are the details:
Every two weeks, 80 weeks of data needs to be extracted.
Extracts take a long time and are resource intensive so we would like to distribute the load of the extract over time.
The first 8-12 weeks are the most important and need to be updated more often within the two-week window. Data further out can be updated less frequently, to the point where the last 40+ weeks could even just be extracted once every two weeks.
Every two weeks, the start date shifts two weeks ahead and so two new weeks are extracted.
Extract procedure takes a start and end date (this is already made and should be treated like a black box). The procedure could be run for multiple date spans in a day if required but contiguous dates are faster than multiple blocks of dates.
Extracts blocks should be no smaller than 2 weeks and probably no greater than 16 weeks. Longer blocks are possible but at 16 weeks are already a significant load to the system.
4 contiguous weeks of data takes approximately 1 hour. It takes a long time because the data needs to be generated/calculated.
Data that is newly extracted replaces the old data for the timespan. No need to merge or diff the data, it is just replaced.
This algorithm needs to be built into a SQL job which will handle the daily process (triggered once a day only).
My initial thought was to create a sliding schedule: rotate the first 4-week block every second day, the second 4-week block every 3 to 4 days, and extract the rest of the data in smaller chunks over the two-week period.
What I am going to do will work but I wanted to spend some time seeing if there might be a better way to approach the problem. Mainly looking for an algorithm to do the start/end date schedule for the daily extract.
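A minimal sketch of such a schedule (the block boundaries and refresh frequencies below are assumptions, not a tested plan): map each day of the two-week cycle to the week-ranges to extract, refreshing the earliest weeks most often and covering all 80 weeks at least once per cycle, with no block smaller than 2 weeks or larger than 16.

```python
def blocks_for_day(day):
    """Week-ranges (start, end inclusive; week 0 = current start date) to
    extract on day 0..13 of the two-week cycle."""
    blocks = []
    if day % 2 == 0:           # weeks 0-3: every second day (most important)
        blocks.append((0, 3))
    if day % 4 == 1:           # weeks 4-7: every fourth day
        blocks.append((4, 7))
    if day % 7 == 3:           # weeks 8-11: twice per cycle
        blocks.append((8, 11))
    # remaining weeks: once per cycle, in chunks of at most 16 weeks
    once = {5: (12, 27), 6: (28, 43), 11: (44, 59), 12: (60, 75), 13: (76, 79)}
    if day in once:
        blocks.append(once[day])
    return blocks
```

The once-per-cycle chunks are placed on days that already carry a light load, and contiguous ranges are kept together since the question notes contiguous dates extract faster.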

SSRS report takes a long time to appear but is quick to process

EDIT
I was working on a new report and I tried to make a blank one. Even this one takes ~12 seconds to appear. There's clearly something wrong but I'd like to try possible solutions you might have before reinstalling.
My application has a couple of SSRS reports. They're all simple tables listing the data. In the designer's preview tab, they all appear within 1-3 seconds. The SQL queries or Stored procedure they use all execute fast enough. The reports are hosted in a ReportViewer, inside a tabcontrol.
My problem is that most of these reports are going to be generated around 20 times in a row, each with different parameters, and it takes a report 12 seconds or more to appear in the actual application, which is a big problem. What could be causing such a slowdown?
Here's an example of the simplest report I have, which still takes way too much time:
**Table Users**
Id uniqueidentifier PK
Name varchar(50)
Salary decimal
TimeInBank decimal
Enabled bit
The table has less than 100 rows, nothing special.
The query:
Select * from Users where Enabled=1
In the execution log, this report has the following statistics:
TimeRendering: 79
TimeProcessing: 54
TimeDataRetrieval: 22
Status: rsSuccess
ByteCount: 5305
RowCount: 7
Nothing seems wrong with these numbers but it still takes at least 12 seconds from the moment I press the button to the moment I can see the report.
If the data in your reports is not changing very often, you could consider caching the reports:
http://msdn.microsoft.com/en-us/library/ms155927.aspx
You still may want to figure out what the bottleneck is in the first place, but at least the viewers of the report won't have to wait in the meantime.

Monitoring Updates over Time Frames and/or SQL Query with Regex counting dates within string

This is probably a fork-in-the-road question. I have a journal blog that appends date-stamped entries to a single memo field within a record.
Example:
Proj #1 (ID): Notes (memo field:) 10/12/2012 - visited site. 10/11/2012 - updated information. 10/11/2012 - call client. 10/10/2012 - Input information.
Proj #2 (ID): Notes (memo field:) 10/10/12 - visited site. 10/10/2012 - call client. 10/9/2012 - Input information. 10/1/2012 - Started project. etc etc...
I need to count how many updates were made over a specific time frame. I know I can create a hidden field and add +1 every time there is an update, which is useful for an OVERALL update count... but how can I keep track of the number of updates over the last 5 days? Like the example above, you may update it twice in one day, and I may not care about updates made 2 weeks ago.
I think I need to write a SQL query that counts the number of "dates" since 10/10/12 or since 10/2/12, etc.
I have done the SQL: SELECT memo FROM Projects WHERE memo LIKE '%10/10/12%' OR memo LIKE '%10/9/2012%' etc.
and then (Len(memoStringCombined) - Len(Replace(memoStringCombined, searchword, ""))) / Len(searchword), and it works fine for counting a single date... but if I have to count multiple dates over 30 days, it gets quite cumbersome to keep rewriting each search word. Is there a regex or object that can loop through this for me?
Otherwise any other suggestions for counting updates between time frames would be greatly appreciated.
BTW - I can't really justify creating a new table dedicated to tracking updates, because there will be hundreds of updates for close to 10,000 records, which means the update-tracking table would be more monstrous than the data... or am I wrong about that idea too?
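A regex loop along the lines the question asks about could be sketched like this (Python for illustration; the memo is assumed to stamp dates as m/d/yy or m/d/yyyy, as in the examples above):

```python
import re
from datetime import datetime

DATE_RE = re.compile(r"\b(\d{1,2}/\d{1,2}/\d{2,4})\b")

def count_recent_updates(memo, since):
    """Count date stamps in the memo text that fall on or after `since`."""
    n = 0
    for token in DATE_RE.findall(memo):
        # Try the 4-digit year format first, then the 2-digit one.
        for fmt in ("%m/%d/%Y", "%m/%d/%y"):
            try:
                stamp = datetime.strptime(token, fmt).date()
            except ValueError:
                continue
            if stamp >= since:
                n += 1
            break
    return n
```

This avoids rewriting a search word per date: one pass extracts every date stamp, and the cutoff does the filtering, so "last 5 days" is just `since = today - 5 days`.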