I want to create a temporary table, purely to support a simpler user interface feature, and it has to be visible only to the current user. I can't see how to achieve this in Access: apparently CREATE TABLE in Access is not like standard SQL. There is, however, a "TEMPORARY" option (according to http://msdn.microsoft.com/en-us/library/bb177893(v=office.12).aspx):
"When a TEMPORARY table is created it is visible only within the
session in which it was created. It is automatically deleted when the
session is terminated. Temporary tables can be accessed by more than
one user."
It's that last sentence that I have a problem with. I can see that in "normal" databases you would use SQL like "create table #whatever"... so I want to imitate that with Access.
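To make the comparison concrete, here's roughly what I mean (the table and field names are just placeholders):

-- T-SQL: a # table is private to the session that created it
CREATE TABLE #TaskEdit (TaskID INT, MinutesPerDay INT);

-- Access SQL, per the linked article (but apparently visible to other users as well)
CREATE TEMPORARY TABLE TaskEdit (TaskID LONG, MinutesPerDay LONG);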
It's a bit long-winded to explain the whole situation; apologies if I'm not being clear enough, as I'm trying to avoid writing an unnecessary amount of detail. Essentially what I have is an "employee" record with a number of "tasks" they perform. My "EmployeeTasks" table has a "percentage" field for each task (i.e. in plain English, "employee A (f.key) performs task B (f.key) X% of the day").
To maintain that information in the user interface, it's a bit "messy" to ask users to manually enter percentages... to my mind, people don't really think "well, I work 7.6 hours a day, I do 10 tasks, I do this task 3.528% of the time, this task 9.813% of the time..." etc. What I want to present to the user is their list of tasks (in a continuous form), with their task effort expressed as hours and minutes per day.
So my theory is: create a temporary table including the hours-and-minutes extrapolation, display a form based on that table, let the user edit those hours and minutes, and have an "update" function take those figures and convert them back to percentages based on their sum. This way the user doesn't have to worry about making sure all hours and minutes add up to 7.6 hours, and they don't have to worry about all percentages adding up to 1, etc. There's a large acceptable margin of error (because obviously most people don't perform tasks for a regimented amount of time; we're only gathering rough information).
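Roughly, the conversion I have in mind is something like this (the temp table name and parameter are made up for the example; 7.6 hours = 456 minutes):

SELECT EmployeeID, TaskID, [Percentage] * 456 AS MinutesPerDay
INTO tmpEmployeeTaskEdit
FROM EmployeeTasks
WHERE EmployeeID = [WhichEmployee];

On saving, the update would go the other way: each task's new percentage would be MinutesPerDay divided by the total MinutesPerDay for that employee (e.g. via DSum("MinutesPerDay", "tmpEmployeeTaskEdit")).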
It seems the easiest approach is to create a form based on a temp table [EDIT ADDITION]: but if more than one user edits a different employee, they would be overwriting each other's temporary tables unless I can create a user-unique table somehow [/EDIT]. Another method, I guess, would be to dynamically create a list of controls for each task and read from them, but that would get messy quickly when employees have a large number of tasks. Thanks for your help, Simon
It is true that Access SQL does not support CREATE TABLE #TableName to create session-specific temporary tables like T-SQL does, but practically speaking it doesn't need to. Here's why:
For your Microsoft Access database application to support multiple concurrent users:
1. you must split your database into a front-end database file (containing queries, forms, reports, code) linked to a back-end database file (containing just the data tables), and
2. each user must have their own (local) copy of the front-end database file.
No two users should ever directly open the same .mdb or .accdb file at the same time, e.g., by double-clicking it or doing File > Open in Access.
Your VBA code in the front-end can create a temporary table in the front-end and your application can use it. Access allows us to build queries that JOIN local tables with linked tables, so the (local) temporary table can be used like a #Temporary table in T-SQL.
Since each user has their own copy of the front-end file (point #2, above), they each have their own copy of any temporary tables that your application might create.
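As a rough sketch (all names below are placeholders), the front-end code could run statements like these, e.g. via CurrentDb.Execute:

CREATE TABLE tmpTaskEdit (TaskID LONG, MinutesPerDay DOUBLE);

INSERT INTO tmpTaskEdit (TaskID, MinutesPerDay)
SELECT TaskID, [Percentage] * 456
FROM EmployeeTasks
WHERE EmployeeID = 123;

SELECT et.TaskID, et.Percentage, t.MinutesPerDay
FROM EmployeeTasks AS et
INNER JOIN tmpTaskEdit AS t ON t.TaskID = et.TaskID
WHERE et.EmployeeID = 123;

Here tmpTaskEdit is created in the local front-end file while EmployeeTasks is a linked back-end table, so nothing temporary ever lands in the shared back-end; when the user is finished you can simply DROP TABLE tmpTaskEdit.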
Related
Let's say I have a database with lots of tables, but there's one big table that's being updated regularly. At any given point in time, this table contains billions of rows, and let's say the table is updated so regularly that we can expect a 100% refresh of the table by the end of each quarter. So the volume of data being moved around is on the order of tens of billions of rows. Because this table is changing so constantly, I want to implement PITR, but only for this one table. I have two options:
Hack PostgreSQL's built-in PITR so that it applies to only this one table.
Build it myself by creating a base backup, setting up continuous archiving, and using a Python script to execute the log of SQL statements up to a point in time (or using PostgreSQL's EXECUTE statement to loop through the archive). The big con with this is that it won't have the timeline functionality.
My problem is, I don't know if option 1 is even possible, and I don't know if option 2 even makes sense (looping through billions of rows sounds like it defeats the purpose of PITR, which is speed and convenience.) What other options do I have?
I am working with a catalogue system at present with many user settings and preferences. As such, when we set up a session we create a list of allowed products. These are currently stored in a table named something like "allowedProducts_0001", where 0001 is the session ID.
We handle the data this way because there is a lot of complexity around product visibility that we do not wish to repeatedly process.
I have been asked to produce a TVF to select from this table, e.g.
SELECT * FROM allowedProducts('0001')
The problem I have is that I cannot query from a dynamic table name, even though the output would be in a static format.
I have considered creating a single table with a column for the session ID, hence removing the need for dynamic SQL, but the table would be too large to be efficient (100k+ products per session for some clients, with many open sessions at once).
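For what it's worth, that single-table design would have let the function be a plain inline TVF, something along these lines (the consolidated table name and columns are invented for the example):

CREATE FUNCTION dbo.allowedProducts (@SessionID varchar(10))
RETURNS TABLE
AS
RETURN
    SELECT ProductID
    FROM dbo.SessionAllowedProducts
    WHERE SessionID = @SessionID;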
I cannot use temp tables because the calling system doesn't keep the sql connection open constantly (several hundred possible sessions at once).
We're currently supporting back as far as SQL Server 2008 R2, but we have the option of upgrading to newer servers as part of an upgrade program.
I'm looking for suggestions of how to work around these conditions. Anybody have any ideas?
Many thanks in advance.
So here's the scenario: I have a set of tables named job_listings_yyyyMMdd. Every day, a new table using the aforementioned naming convention is created and populated with that day's job listings.
When that table is populated, a process is kicked off that transforms the data in the table so that a front-end app can use it.
So, as time goes on, I have a set of tables, something like
job_listings_20151226,
job_listings_20151227,
job_listings_20151228,
...
They all have the exact same table structure, but each table contains only that day's job listings.
What I'd like to do is reference a table, from the service that provides the front-end with this data, named job_listings. Ideally, my daily process would create the new day's table and, after all processing is done and that day's data is ready to be served, have the process change the synonym/alias (i.e., job_listings) to point to the newly populated and processed table for that day.
The idea is that there is no visible seam between data refreshes. Oracle has a concept called synonyms for this, but I'm having a hard time figuring out how to do it with PostgreSQL.
Different database systems have different methods for federation and aliasing. But I think what you want is something that is available in any SQL system -- a view.
CREATE VIEW MOST_RECENT_JOB_LISTINGS AS
SELECT * FROM job_listings_20151228
Just change this view's definition every day after the new table is created.
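In PostgreSQL, for example, the daily swap is a single statement once the new table has been populated and processed:

-- repoint the view at the latest daily table
CREATE OR REPLACE VIEW MOST_RECENT_JOB_LISTINGS AS
SELECT * FROM job_listings_20151229;

Because every daily table has the same structure, replacing the view is quick, and the front-end service keeps querying MOST_RECENT_JOB_LISTINGS without noticing the switch.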
I know you can run SELECT queries on top of SELECT queries in Access, but the application also provides the Make Table query type.
I'm wondering what the benefits/reasons for using Make Table might be?
You would usually use Make Table for performance reasons. If you have a fairly complex query that returns a subset of your table's data, and that you may need to retrieve multiple times, it can be expensive to re-run that query each time.
Using Make Table allows you to incur the cost of running the expensive query once, and make a copy of the query results into a table. Querying this copy would then be a lot less expensive than running your original expensive query.
This is usually a good option when you don't expect your original data to change frequently, or if you don't care that you are working off a copy of the data that may not be 100% up-to-date with the original data.
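For example, a make table query is just a SELECT ... INTO in Access SQL (the table and field names below are only illustrative):

SELECT CustomerID, Sum(OrderTotal) AS TotalSales
INTO tblSalesSnapshot
FROM Orders
GROUP BY CustomerID;

The expensive aggregation runs once; reports can then read the small tblSalesSnapshot table instead of re-running the query against Orders every time.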
Notice what the following article on Create a make table query has to say:
Typically, you create make table queries when you need to copy or archive data. For example, suppose you have a table (or tables) of past sales data, and you use that data in reports. The sales figures cannot change because the transactions are at least one day old, and constantly running a query to retrieve the data can take time — especially if you run a complex query against a large data store. Loading the data into a separate table and using that table as a data source can reduce workload and provide a convenient data archive. As you proceed, remember that the data in your new table is strictly a snapshot; it has no relationship or connection to its source table or tables.
The main issue here is that a make table query creates a table. And when you're done with that table, you then have to spend the effort and time to delete it and recover the VERY LARGE increase in the size of the database file that results. For general reports, a plain query of the data makes much more sense. A comparison would be building a NEW garage every time you want to park your car.
The database engine and query system can fetch and pull rows at a very high rate, and those results can then be rendered into a report or form without having to create a temp table. It makes little sense to go through all the trouble of having the system create a WHOLE NEW table for such results when they can easily be sent straight to a report.
In other words, creating a whole table just to display or use some data that the database engine has already fetched and returned makes little sense. A table is a set of rows that holds data that can be updated, and the results are permanent. A query is an "on the fly" result set, or subset of the data, that only exists in memory and is discarded after you use the results.
So for general reporting and display of data, it makes no sense to create a temp table. A MUCH WORSE issue is that if you have two users wanting to run a report, and they both need different results but you send the results to the SAME temp table, then you have a big mess and a collision between the two users. So use of a temp table in Access for the most part makes little sense, and this is EVEN MORE so when working in a multi-user environment. And as noted, once the table is created, then after you are done you need to delete and remove it. With many users in a multi-user database this becomes even more of a problem.
However, as pointed out, in a multi-user environment, if the resulting data needs additional processing, then sending the results to a temp table can be of use. This approach, however, assumes that EACH USER has their own front end and their own copy of the application side. Better still, the temp table is created in that front-end application that resides on each computer. Since the application part (front end) is placed on each computer, creating a temp table does not touch the production database (back end), and as a result multiple users can function correctly without each individual user creating a temp table in the production back-end database. So if one is to adopt a make table query, it likely should occur on each local workstation, and not in the back-end database, when you have a multi-user database application.
Thus, for the most part, a make table query and the reporting or querying of data are VERY different goals and tasks. You don't want, nor as a general rule need, to create a whole brand new table for a simple query. In a multi-user database system the users might run hundreds of reports in a given day, and FEW if any systems will send such data to a temp table in place of sending the query results directly to the report.
It creates a table - which is useful if you need that table, for example for temporary use where you have to modify the data for calculations or further processing without disturbing the original data.
I have a table of around 60 columns and 400,000 rows and increasing. Our company laptops and MS Excel cannot handle this much data in RAM. So I decided to store the data in MS Access and link it to Excel.
However, the pivot in Excel still downloads all the data into Excel and then performs the filters and operations on the data. This worked with less data, but with more data it has now started giving memory errors. Also, even though the data in the pivot might be only 50 cells, the file size is 30+ MB...
So is it possible to create a connection to Access in such a way that it downloads only the data that is queried, does the operations beforehand, and then sends the revised data to Excel?
I saw this setup in my previous company (where the Excel pivot would only download what it needed), but it was querying an SQL DB as far as I remember. (Sadly, I couldn't learn more about it, since the IT director was intent on being the only guy who knew core operations; he basically held the company's IT operations hostage in exchange for his job security)... But I digress.
I've tried searching for this on the internet for a few days, but it's a very specific problem that I can't find in Google :/
Any help or even pointers would be highly appreciated!
Edit: I'd just like to point out that I'm trying to create an OLAP connection for analysis, so the pivot would be changing fields. My understanding of how pivots work was that when we select the fields in the pivot, Excel would design a query (based on the selected fields) and send it to the connected DB to retrieve the data requested. If this is not how it happens, how do I make something like this happen? Hope that clarifies things.
I suppose that you created a single massive table in Access to store all your data, so if you just link that table as the data source, Excel won't know which particular bit of data is relevant and will most probably have to go through all of it itself.
Instead, you can try a combination of different approaches:
Create a query that pre-filters the data from Access and link that query to Excel.
Use a SQL Command Type for your Connection Properties instead of a Table.
Test that query in Access to make sure it runs well and is fast enough.
Make sure that all important fields have indexes (fields you filter on, fields you group by: any field that Excel has to go through to decide whether it should be included in the pivot or not should have a sensible index).
Make sure that you have set a Primary Key in your table(s) in Access. Just use the default auto-increment ID if it's not already used.
If all else fails, break down that huge table: it's not so much the number of records that's too much, it's more the high number of columns.
If you use calculated fields in your pivot or filter data based on some criteria, consider adding columns to your table(s) in Access that contain pre-calculated data. For instance, you could run a query from Access to update these additional fields, or add some VBA to do that (see the sketch just below this list).
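As a minimal sketch of that last point (table and field names are just examples), the pre-calculation can be an ordinary update query run in Access before you refresh the pivot:

UPDATE StockOperations
SET MarginPct = (SalePrice - CostPrice) / SalePrice
WHERE SalePrice <> 0;

Excel then receives MarginPct as a plain column instead of recomputing that expression for every row it pulls into the pivot.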
It should work pretty well, though: to give you an idea, I've made some tests with Excel 2013 linked to a 250 MB ACCDB containing 33 fields and 392,498 rows (a log of stock operations). Most operations on the pivot in Excel only take a fraction of a second, maybe a couple of seconds for the most data-intensive ones.
Another thing: Access has support for pivot tables and pivot charts. Maybe you don't need Excel if Access is enough. You can use the free Access Runtime as a front-end on each machine that needs access to the data. Each front-end can then be linked to the back-end database that holds the data on a network share. The tools are a bit more clunky than in Excel, but they work.
Another possible solution, to avoid creating queries in the Access DB, is to use the PowerPivot add-in in Excel and implement the queries and normalizations there.