Shared Data Storage Strategy for 'Live' Dashboards in Excel VBA - vba

I'm building a UI in Excel where the goal is to have "live" information on Orders and Order Status shared between three users; I'll name them DataUser, DashboardOne, and DashboardTwo for example's sake.
The process is that DataUser fills in the Orders data, which is then used to populate information on two dashboards. The dashboards are updated live with changes from DataUser (order increases/decreases) and with order-status changes made between DashboardOne and DashboardTwo. For the live updates I'm thinking of using Application.OnTime to refresh the views/dashboards. The two dashboards will be active about 8 hours a day.
Where I'm struggling is how/where to store the data. I've thought about a couple of options, but I don't know the implications of one over the other, especially considering that I intend for the dashboards to refresh every 30 seconds with Application.OnTime, which could prove expensive.
The options I thought about were:
A Master Workbook that would create separate workbooks for DashboardOne and DashboardTwo and act as the database and main UI for DataUser.
Three separate workbooks that would all refer to one DataWorkbook or another flat data file (perhaps XML or JSON).
Using an actual database for the data, although this would bring other implications (we don't currently have one).
I'm not considering a shared workbook, as I've tried something similar in the past (and early on this time ^^) and it went rather poorly: a nightmare to sync and poor data integrity.
In short:
Which would be the best data storage strategy for Excel that wouldn't jeopardise the integrity of the data nor be so expensive as to interfere with the rest of the code? Are there better options that I should be considering?

There are quite a number of alternatives, depending on the time you want to invest and the tools at hand. I'll give you a couple of options here.
But first, the basic assumptions:
The number of data items that you need to share (this being a dashboard) is a few tens (let's say, fewer than 100),
You have at least basic programming skills,
From your description, you have one client with READ-WRITE capability and two clients with READ-ONLY capability.
OPTION 1:
You can have Excel save the data in CSV format (it's a very small amount of data, so saving and reading it takes only a small fraction of a second).
The two clients would then open the file in read-only mode, load the data and update the display. You would need to include exception handling at both types of client:
At the one writing, handle the error raised when it attempts to write at the same moment one of the clients attempts to read,
At the two reading, handle the error raised when attempting to open the file (for read only) while the other process is writing.
Since the write and read operations take a very, VERY short time (as stated, a small fraction of a second), these conditions will be very rare. Additionally, since both dashboard clients open the file read-only, they will not disturb each other if they make their attempts at the same moment.
If you wish to drastically reduce the chances of collision, you may set the timers (of the update process on one hand and of the reading processes on the other) to a prime number of seconds. For instance, the timer of the updating process could fire every 11 seconds while that of the reading processes fires every 7 seconds.
OPTION 2:
Establish a TCP/IP channel between the processes, where the main process (the one with WRITE privilege) sends a triggering message to the other two, requesting that they start an update whenever a new version of the data has been saved. Upon receiving the trigger, both READ-ONLY processes would open the file and fetch the data.
In this case, the chances of collision become close to zero.

Programmatically purge document deletion

I have a database with an agent that periodically deletes (via a Java agent, using the "removePermanently" method) all documents in a view and re-creates them.
After some months, I've noticed that the database size has increased considerably.
Showing database information with this command:
sh database <dbpath>
it turns out that I have a lot of deleted documents (I suppose they are deletion stubs):
Document Type    Live     Deleted
Documents        1,922    817,378
After compacting the database, 80% of the space was recovered.
Is there a way to programmatically delete stubs definitively to avoid this "database explosion"? Or is there a way to correctly manage this scenario (deletion and creation of documents)?
Don't delete the documents! Re-use them. That's the best answer. Seriously. Take the existing documents, clear the fields and set Form := "Obsolete". Modify the selection formula for all your views by appending & Form != "Obsolete". Create a new hidden view called "Obsolete" with selection formula Form = "Obsolete", and instead of creating new documents, change your code to go to the Obsolete view, grab an available document and set new field values (including changing the Form field). Only create new documents if there are not enough available in the Obsolete view. Any performance that you lose by doing this, which really should be minimal with the number of documents that you seem to have, will be more than offset by what you will gain by avoiding the growth and fragmentation of the NSF file that you are causing by doing all the deletions and creating new documents.
If, however, there's no possible way for you to do that -- maybe some third-party tool that is outside of your control is creating the documents -- then it's important to know whether the database you are talking about is replicated. If it is replicated, then you must be very careful, because purging deletion stubs before all replicas are brought up to date will cause deleted documents to "come back to life" if a replica that has been off-line since before the delete occurred comes back on-line.
If the database is not replicated at all, or is reliably replicated across all replicas quickly, then you can reduce the purge interval. Go to the Replication Settings dialog and find the checkbox labeled "Remove documents not modified in the last __ days". Do not check the box, but enter a small number into the number of days. The purge interval for deletion stubs will be set to 1/3 of this number, so if you set it to 3 the effect will be that stubs are kept for 1 day and then purged, giving you 24 hours to assure that all replicas are up to date. If you need more, set the interval higher, maintaining the 3x multiple as needed. If a server is down for an extended period of time (longer than your purge interval), then adjust your operations procedures so that you are sure to disable replication of the database to that server before it comes back on-line; the replica can then be deleted and recreated. Be aware, though, that user replicas pose the same problem, and it's not really possible to control or be aware of user replicas that might go off-line for longer than the purge interval. In any case, remember: do not check the box. To reduce the purge interval for deletion stubs only, just reduce the number.
Apart from this, the only way to programmatically delete deletion stubs requires use of the Notes C API. It's possible to call the required routines from LotusScript, but in my experience once the total number of stubs plus documents gets too high you will likely run into an error and may have to create and deploy a new non-replica copy of the database to get past it. You can find code along with my explanation in the answer to this previous question.
I have to second Richard's recommendation to reuse documents. I recently had a similar project, and started the way you did with deleting everything and importing half a million records every night. Deletion stubs and the growth of the FT index quickly became problems, eating up huge amounts of disk space and slowing performance significantly. I tried to manage the deletion stubs, but I was clearly going against the grain of Domino's architecture.
I read Richard's suggestion here, and adopted that approach. Here's what I did:
1) create 2 views based on form - one for 'active' records, and another for 'inactive' records
2) start the agent by setting autoupdate = false for both views
3) use stampall("form", "inactive") to change all of the active records to inactive
4) manually refresh the 2 views using notesview.refresh()
5) start importing data. For each record, pull a document out of the pool of inactive records (by walking the 'inactive' view)
6) if I run out of inactive records in the pool, create new ones
7) when import is complete, manually refresh the views again
8) use db.createftindex(0, true) to re-create the FT index
The code is really not that complex, and it runs in about the same amount of time as my original approach, if not faster.
Thanks Richard!
Also, look at the advanced db properties - several things there that will help optimize the db.
It sounds like you are "refreshing" the contents of the database by periodically deleting all the documents and creating new ones from some other source. Cut that out. If the data are in the Notes database already, leave the document alone. What you're doing is very inefficient.

How should data be provided to a web server using a data warehouse?

We have data stored in a data warehouse as follows:
Price
Date
Product Name (varchar(25))
We currently only have four products. That changes very infrequently (on average once every 10 years). Once every business day, four new data points are added representing the day's price for each product.
On the website, a user can request this information by entering a date range and selecting one or more product names. Analytics shows that the feature is not heavily used (about 10 user requests per week).
It was suggested that the data warehouse should push (via SFTP) a daily CSV file containing all the data (currently 6,718 rows, growing by four each day) to the web server. The web server would then read data from the file and display it whenever a user made a request.
Usually, the push would only be once a day, but more than one push could be possible to communicate (infrequent) price corrections. Even in the price correction scenario, all data would be delivered in the file. What are problems with this approach?
Would it be better to have the web server make a request to the data warehouse per user request? Or does this have issues such as a greater chance for network errors or performance issues?
Would it be better to have the web server make a request to the data warehouse per user request?
Yes, it would. You have very little data, so there is no need to try and 'cache' this in some way (apart from the fact that CSV might not be the best way to do this).
There is nothing stopping you from making these requests from the web server to the database server. With as little information as this you will not find performance an issue, and even if it became one as everything grows, there is a lot to be gained on the database side (indexes etc.) that will help you survive the next 100 years in this fashion.
The number of requests from your users (also extremely small) does not need any special treatment, so again, a direct query would be best.
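For illustration, here is a minimal sketch of what such a direct, parameterized query could look like from a .NET web tier; the daily_prices table and its column names are assumptions for the example, not taken from the question, and your warehouse may use a different provider.

using System;
using System.Collections.Generic;
using System.Data.SqlClient;

public static class PriceQueries
{
    // One parameterized query per user request; user input never becomes
    // part of the SQL text itself.
    public static List<(DateTime Date, string Product, decimal Price)> GetPrices(
        string connectionString, DateTime from, DateTime to, string productName)
    {
        var results = new List<(DateTime Date, string Product, decimal Price)>();
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = conn.CreateCommand())
        {
            cmd.CommandText =
                "SELECT PriceDate, ProductName, Price FROM daily_prices " +
                "WHERE PriceDate BETWEEN @from AND @to AND ProductName = @product " +
                "ORDER BY PriceDate";
            cmd.Parameters.AddWithValue("@from", from);
            cmd.Parameters.AddWithValue("@to", to);
            cmd.Parameters.AddWithValue("@product", productName);

            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    results.Add((reader.GetDateTime(0), reader.GetString(1), reader.GetDecimal(2)));
            }
        }
        return results;
    }
}

With an index on the date and product columns, ten requests a week against a table of a few thousand rows is a negligible load.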
Or does this have issues such as a greater chance for network errors or performance issues?
Well, it might, but that would not justify the CSV method. Examples, and why you need not worry, could be:
The connection with the database server is down.
This is an issue for both methods, but with only one connection per day the chance of a 1-in-10,000 failure might seem better for the once-a-day method. These issues should not come up very often, though, and if they do, you should be able to handle them (retry the request, show the user a message). This is what enormous numbers of websites do, so trust me when I say that this will not be an issue. Also, think of what it would mean if your daily update failed: that would present a bigger problem!
Performance issues
As said, given the amount of data and requests, this is not a problem. And even if it becomes one, it is a problem you should be able to catch at a different level: use a caching system (not CSV) on the database server, use a caching system on the web server, or fix your indexes to stop performance from being a problem.
BUT:
It is far from strange to want your data warehouse separated from your web system. If this is a requirement, and it surely could be, the best thing you can do is re-create your warehouse database (the one I just defended as being good enough to query directly) on another machine. You might get good results with a master-slave setup:
Your data warehouse is the master database: it sends all changes to the slave but is inaccessible otherwise.
Your second database (even on your web server) gets all updates from the master and is read-only; you can only query it for data.
Your web server cannot connect to the data warehouse, but can connect to your slave to read information. Even if there were an injection hack, it wouldn't matter, as the slave is read-only.
Now there is never a moment where you have to update the queried database yourself (the master-slave replication will keep it up to date), and there is no chance that queries from the web server put your warehouse in danger. Profit!
I don't really see how SQL injection could be a real concern. I assume you have some calendar-type field that the user fills in to get data out. If this is the only form, just ensure that the only field in it is a date; then something like DROP TABLE isn't possible. As for getting access to the database, that is another issue. However, a separate file with just the connection function should do fine in most cases, so that a user can't, say, open your web page in an HTML viewer and see your database connection string.
As for the CSV, I would have to say that querying the database per user, especially if the feature is only used ~10 times weekly, would be much more efficient than the CSV. I see the CSV as overkill because, again, you only have ~10 users attempting to get some information; exporting an updated CSV every day would be too much work for such a small payoff.
EDIT:
Also, if an attack is a big concern (that really depends on the nature of the business, the data being stored, and the visitors you receive), you could always create a backup as another option. I don't really see a reason for this as your question is currently stated, but it is possible that even with the best security an attack could happen. That mainly depends on whether the attackers want the information you have.

SQLite and multiple writes

I am new to databases and I'm trying to decide which one would suit my needs the most. I am in the planning stages of a program that will store directory references to thousands of PDF files at multiple locations (all under one domain). Basically, all it will be is the date, location, originator's name and a link to the file in each of the fields within the database. Nothing more (no files, nothing fancy). All I'll need to do with the database is sort fields by location, date, or name of the originator -- that's it. There will be instances where multiple writes would need to occur at the same time. I've read on the SQLite website that only one write is supported at any given time. Does that mean one write to a specific file, or to the database, period?
What I mean is that multiple records would need to be added from different clients at any given time, but the existing records would not need to be modified (and if they did, it would be done from a specific client). To give a little bit more detail, I'll have several locations at which a service application will be running in the background and listening to folders. Once a file enters a folder, it gets renamed under a specific format and added to the database. It is very likely that two folder-listening apps would try to add files to the database at the same time.
Would I be able to accomplish this with SQLite or is it one write at any given time to the entire database? If only one write is possible, period, to the entire database, is there a way to implement some sort of spooling system (sort of like on a printer), where writes would wait in a queue with life timers on them?
If it's not possible, then I will look at MySQL. Cost is of a concern, so I'm steering towards these two.
Only one write can occur at any given moment, but by default writes are automatically queued, and you can achieve thousands of writes per second.
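As a rough sketch of what one of the folder-listening writers could look like (assuming C# with the Microsoft.Data.Sqlite package; the pdf_files table and its columns are placeholders, not from the question), the busy timeout is what makes a second writer wait its turn instead of failing immediately with "database is locked".

using Microsoft.Data.Sqlite;

public class PdfIndexWriter
{
    private readonly string _connectionString;

    public PdfIndexWriter(string dbPath)
    {
        _connectionString = "Data Source=" + dbPath;
    }

    public void AddRecord(string date, string location, string originator, string filePath)
    {
        using (var conn = new SqliteConnection(_connectionString))
        {
            conn.Open();

            // If another listener holds the write lock, wait up to 5 seconds
            // instead of failing immediately. WAL mode also lets readers keep
            // reading while a write is in progress.
            using (var pragma = conn.CreateCommand())
            {
                pragma.CommandText = "PRAGMA busy_timeout = 5000; PRAGMA journal_mode = WAL;";
                pragma.ExecuteNonQuery();
            }

            using (var cmd = conn.CreateCommand())
            {
                cmd.CommandText =
                    "INSERT INTO pdf_files (file_date, location, originator, path) " +
                    "VALUES ($date, $location, $originator, $path)";
                cmd.Parameters.AddWithValue("$date", date);
                cmd.Parameters.AddWithValue("$location", location);
                cmd.Parameters.AddWithValue("$originator", originator);
                cmd.Parameters.AddWithValue("$path", filePath);
                cmd.ExecuteNonQuery();
            }
        }
    }
}

Each insert here is tiny, so even when two listeners collide the wait is on the order of milliseconds.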
The main concern is what type of application is this? Is it a web application developed on one machine and deployed to a single other production machine? Then the extra trouble of installing and maintaining MySQL is not a concern and you're better off using MySQL. If this is a desktop application installed on many desktops, then using an embedded database is far easier for development, installation, and maintenance and in that case use SQLite.

Updating database with multiple records

We use SQL Server and have a WinForms application. In our product, the records sometimes exceed 50,000 in a single transaction, and we face a performance issue there.
When we have a huge amount of data, we generally split the work into multiple database calls. So in one of our Import features we update the server in batches of 1,000 rows: if we have 5,000 records, then while processing them (in a for loop) we update the first 1,000 rows and then continue processing until we have the next 1,000 rows to update. This performs better, but honestly I feel it is not the best in terms of performance.
But we have seen in other Import/Export features that updating the database every 5,000 rows gives better results compared to 1,000. So we are facing a lot of confusion, and the code does not look the same across our applications.
Can anyone give me an idea of what makes this happen? You don't have sample data, the database schema, etc., and yes, I do agree. But are there any scenarios that should be taken care of/considered while working with the database? And why do different batch sizes give us better results; is there something we are ignoring? I am not a database champ and more of a .NET programming guy. I will be happy to hear your suggestions.
Not sure if this is helpful: our data generally contains employee details like payroll information, personal details, accrual benefits, compensation, etc. Data is fed from an Excel file, and we also generate a lot of data in our internal processes. Let me know if you need more information. Thanks!!
The more database callouts you have, the more connection management you will need (open connection, use connection, cleanup & close, are we using connection pooling etc.etc.). You're sending the same amount of data over the wire, but you are opening and closing the taps more often, which brings overhead.
The downside of larger batches is that the amount of data held in a single transaction is greater.
However, if I may make a suggestion, you might want to consider achieving this in a different way: load all the data into the database as fast as possible (into interim tables where the constraints are deactivated and with transactional management turned off, if possible) and then let the database carry out the task of checking and validating the data.
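A minimal sketch of that approach from .NET, assuming a hypothetical dbo.StagingEmployeeImport staging table and a dbo.MergeEmployeeImport stored procedure that does the set-based validation and merge on the server (both names are illustrative, not from your schema):

using System.Data;
using System.Data.SqlClient;

public static class EmployeeImport
{
    public static void Run(DataTable rows, string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();

            // 1) Stream all rows into the staging table as fast as possible.
            using (var bulk = new SqlBulkCopy(conn))
            {
                bulk.DestinationTableName = "dbo.StagingEmployeeImport";
                bulk.BatchSize = 5000;       // rows sent per round trip
                bulk.BulkCopyTimeout = 0;    // no timeout for large imports
                bulk.WriteToServer(rows);
            }

            // 2) One set-based call that validates the staged rows and
            //    merges them into the real tables on the server.
            using (var cmd = new SqlCommand("dbo.MergeEmployeeImport", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.CommandTimeout = 0;
                cmd.ExecuteNonQuery();
            }
        }
    }
}

SqlBulkCopy keeps the round trips to a minimum whether the batch size is 1,000 or 5,000, which largely sidesteps the tuning question for the load itself; the remaining cost is the single set-based merge.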
Since you are using SQL Server, you can just turn on SQL Profiler, define an appropriate event filter, and watch what happens under different loads.

Best practice for inserting and querying data from memory

We have an application that takes real-time data and inserts it into a database. It is online for 4.5 hours a day. We insert data second by second into 17 tables. The user may at any time query any table for the latest second's data and some records in the history...
Handling the feed and insertion is done using a C# console application...
Handling user requests is done through a WCF service...
We figured out that insertion is our bottleneck; most of the time is taken there. We invested a lot of time trying to fine-tune the tables and indices, yet the results were not satisfactory.
Assuming that we have sufficient memory, what is the best practice for inserting data into memory instead of into the database? Currently we are using DataTables that are updated and inserted into every second.
A colleague of ours suggested another WCF service, instead of a database, between the feed handler and the WCF user-requests handler. This WCF mid-layer is supposed to be TCP-based, and it keeps the data in its own memory. One may say that the feed handler could deal with user requests instead of having a middle layer between the two processes, but we want to separate things, so that if the feed handler crashes we can still provide the user with the current records.
We are limited in time, and we want to move everything to memory in a short period. Is having a WCF service in the middle of two processes a bad thing to do? I know that the requests add some overhead, but all three processes (feed handler, in-memory database (WCF), and user-request handler (WCF)) are going to be on the same machine, and bandwidth will not be that much of an issue.
Please assist!
I would look into creating a cache of the data (such that you can also reduce database selects), and invalidate data in the cache once it has been written to the database. This way, you can batch up calls to do a larger insert instead of many smaller ones, but keep the data in-memory such that the readers can read it. Actually, if you know when the data goes stale, you can avoid reading the database entirely and use it just as a backing store - this way, database performance will only affect how large your cache gets.
Invalidating data in the cache should be based on whether it has been written to the database or has gone stale, whichever comes last, not first.
The cache layer doesn't need to be complicated; however, it should be multi-threaded so that it can host the data and also save it in the background. This layer would sit just behind the WCF service (the connection medium), and the WCF service should be extended to contain the logic of the console app plus the batching idea. Then the console app can just connect to WCF and throw results at it.
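A rough sketch of such a cache layer in C#; the Reading type and the saveBatch delegate (which would wrap your batched INSERTs or a SqlBulkCopy call) are illustrative names, not taken from your code:

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;

public class Reading
{
    public DateTime Timestamp { get; set; }
    public string Table { get; set; }
    public double Value { get; set; }
}

public class FeedCache
{
    // Latest value per table for fast reads by the WCF request handler.
    private readonly ConcurrentDictionary<string, Reading> _latest = new ConcurrentDictionary<string, Reading>();
    // Pending rows waiting to be flushed to the database in batches.
    private readonly ConcurrentQueue<Reading> _pending = new ConcurrentQueue<Reading>();
    private readonly Timer _flushTimer;
    private readonly Action<List<Reading>> _saveBatch;   // e.g. batched INSERTs or SqlBulkCopy

    public FeedCache(Action<List<Reading>> saveBatch, TimeSpan flushInterval)
    {
        _saveBatch = saveBatch;
        _flushTimer = new Timer(_ => Flush(), null, flushInterval, flushInterval);
    }

    public void Add(Reading r)
    {
        _latest[r.Table] = r;    // readers always see the newest value
        _pending.Enqueue(r);     // the background flush drains this queue
    }

    public Reading GetLatest(string table)
    {
        _latest.TryGetValue(table, out var r);
        return r;
    }

    private void Flush()
    {
        var batch = new List<Reading>();
        while (_pending.TryDequeue(out var r))
            batch.Add(r);
        if (batch.Count > 0)
            _saveBatch(batch);   // one larger insert instead of many small ones
    }
}

Readers hit only the in-memory dictionary; the database sees a few larger inserts per flush interval instead of one insert per table per second.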
Update: the only other thing to say is to invest in a profiler to see if you are introducing any performance issues in code that are being masked. Also, profile your database. You mention you need fast inserts and selects -- unfortunately, they usually trade off against each other...
What kind of database are you using? MySQL has a MEMORY storage engine which would seem suited to this sort of thing.
Are you using DataTable with DataAdapter? If so, I would recommend that you drop them completely. Insert your records directly using a DbCommand. When users request reports, read data using a DataReader, or populate DataTable objects using DataTable.Load(IDataReader).
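A small sketch of that suggestion, assuming SQL Server and placeholder table/column names (dbo.Feed1, Ts, Value): a parameterized command for the writer, and DataTable.Load(IDataReader) for the report side, with no DataAdapter involved.

using System;
using System.Data;
using System.Data.SqlClient;

public static class FeedDataAccess
{
    // Writer path: one parameterized INSERT per incoming record.
    public static void InsertReading(SqlConnection conn, DateTime ts, double value)
    {
        using (var cmd = conn.CreateCommand())
        {
            cmd.CommandText = "INSERT INTO dbo.Feed1 (Ts, Value) VALUES (@ts, @value)";
            cmd.Parameters.AddWithValue("@ts", ts);
            cmd.Parameters.AddWithValue("@value", value);
            cmd.ExecuteNonQuery();
        }
    }

    // Reader path: stream the rows with a DataReader and let DataTable.Load
    // populate the table for the report.
    public static DataTable LoadHistory(SqlConnection conn, DateTime from, DateTime to)
    {
        using (var cmd = conn.CreateCommand())
        {
            cmd.CommandText = "SELECT Ts, Value FROM dbo.Feed1 WHERE Ts BETWEEN @from AND @to ORDER BY Ts";
            cmd.Parameters.AddWithValue("@from", from);
            cmd.Parameters.AddWithValue("@to", to);
            using (var reader = cmd.ExecuteReader())
            {
                var table = new DataTable();
                table.Load(reader);   // fill without a DataAdapter
                return table;
            }
        }
    }
}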
Storing data in memory has the risk of losing data in case of crashes or power failures.