Performance of MS Access when JOINing from linked tables in different servers? - sql

If I have an MS Access database with linked tables from two different database servers (say, one table from a SQL Server db and one from an Oracle db) and I write a query which JOINs those two tables, how will Access (or the Jet engine, I guess?) handle this query? Will it issue some SELECTs on each table first to get the fields I'm JOINing on, figure out which rows match, then issue more SELECTs for those rows?

The key thing to understand is this:
Are you asking a question that Access/Jet can optimize before it sends its request to the two server databases? If you're joining the entirety of both tables, Jet will have to request both tables, which would be ugly.
If, on the other hand, you can provide criteria that limit one or both sides of the join, Access/Jet can be more efficient and request the filtered resultset instead of the full table.
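For example, a minimal sketch of a join that Jet can push filters down for, assuming linked tables named ORA_CUSTOMERS (Oracle) and dbo_Orders (SQL Server); all table and column names here are made up:

    SELECT c.CUSTOMER_ID, c.CUSTOMER_NAME, o.OrderDate, o.Amount
    FROM ORA_CUSTOMERS AS c
    INNER JOIN dbo_Orders AS o ON o.CustomerID = c.CUSTOMER_ID
    WHERE o.OrderDate >= #1/1/2024# AND c.REGION = 'WEST';

With those WHERE criteria, Jet can ask each server only for the rows that can possibly match before joining locally; without them it may have to pull both tables across the network in full.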

Yep, you can have some serious performance issues. I have done this type of thing for years. Oracle, SQL Server, and DB2 - ugh. Sometimes I have had to set it up on a timer at 5:00am so when I get in at 7:00 it's done.
If your dataset is significant enough, it is often faster to build a table locally and then link the data. For remote datasets, also look into pass-through queries.
For example, let's say you are pulling all of yesterday's customers from the Oracle db and all of the customer purchases from the SQL Server db. Let's say you have an average of 100 customers daily but a list of 30,000, and let's say your products have a list of 500,000. You could query the Oracle db for your list of 100 customers, then write it as an IN clause in a pass-through query to the SQL Server db. You'll get your data almost instantly.
Or if your recordsets are huge, build local tables of the two IDs, compare them locally and then just pull the necessary matches.
It's ugly, but you can literally save yourself hours.
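As a rough illustration of the 100-customer example above (every table, column, and date here is invented): step 1 is a pass-through to Oracle for yesterday's customers, and step 2 is a pass-through to SQL Server whose IN list you paste in from step 1's results.

    -- Step 1: pass-through query to the Oracle db (~100 rows back)
    SELECT CUSTOMER_ID
    FROM CUSTOMERS
    WHERE TRUNC(CREATED_DATE) = TRUNC(SYSDATE) - 1;

    -- Step 2: pass-through query to the SQL Server db, with the IDs
    -- from step 1 pasted (or generated by VBA) into the IN list
    SELECT CustomerID, ProductID, PurchaseDate, Amount
    FROM dbo.Purchases
    WHERE CustomerID IN (101, 245, 387 /* ...the rest of the ~100 IDs */);

Because both queries run entirely on their own servers, only the small result sets ever cross the network.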

That would be my guess. It helps if there are indexes on both sides of the join but, as neither server has full control over the query, further query optimization is not possible.

I have no practical experience joining tables from two different data systems. However, depending on the requirements, you may find it faster to run SELECT queries that pull only the records and fields required into local Access tables, and then do the final join and query in Access.

Related

What is possible in SOQL but not in SQL?

Are there any functions / keywords / syntax in SOQL queries that do NOT have an equivalent operation in SQL?
Basically, does there exist a SOQL query that you couldn't convert directly into a SQL query?
Weird question, why do you ask? And which SQL do you mean exactly: Oracle, the SQL Server flavour, MariaDB, or what?
I'd say you'll have a hard time
mapping SELECT Account.Owner.Manager.Profile.Name FROM Opportunity into "normal" joins (see the sketch after this list)
replicating TOLABEL() (translate picklist values on the fly)
replicating anything SF-specific like WITH (say knowledge base's data categories) or USING SCOPE (you can pull "my accounts" but can you pull "my team's accounts"? "My territory's accounts"? Without an orgy of joins)
doing joins over mutant polymorphic fields like Task.WhatId or ContentDocumentLink.LinkedEntityId
doing any kind of SOSL, especially if org uses synonyms
converting currencies on the fly
doing things like FISCAL_YEAR() without an orgy of joins to the Period table
replicating any geolocation-related queries (accounts up to 10 km away from me) without knowing exactly what GIS (or whatever) SF uses
doing soft deletes (or however the Recycle Bin really works) without impact on performance. I mean, if records go to another table, the columns are identical and the join/view happens magically when you query ALL ROWS?
doing any Person Account stuff, silently querying and updating effectively 2 tables (as materialised view maybe?)
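To give a feel for the first point, here is roughly what that one-line SOQL relationship query could expand to as joins against a generic relational copy of the Salesforce schema (table and column names are assumptions, not an exact Salesforce layout):

    SELECT pr.Name
    FROM Opportunity o
    JOIN Account a  ON a.Id = o.AccountId
    JOIN "User" u   ON u.Id = a.OwnerId      -- Account.Owner
    JOIN "User" m   ON m.Id = u.ManagerId    -- Owner.Manager
    JOIN Profile pr ON pr.Id = m.ProfileId;  -- Manager.Profile

("User" is quoted because USER is a reserved word in most SQL dialects.) Four joins, two of them self-references to the same table, versus a single dotted path in SOQL.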
Some differences of SOQL:
No SELECT *
No views
SOQL read-only
Limited indexes
Object-relational mapping is automatic
Schema changes protected

Database Choice for a CSV file

I have a CSV file.
It has 5 columns, 4000 rows.
The database will have a single table, and each year I will add a new table to the database.
The tables themselves will never be updated; they will only be created once.
I expect many concurrent reads and queries.
There won't be any complex queries. Queries will be basically filtering on only one column.
The users will use sorting on one column.
Based on this, my gut feeling tells me that I should use a SQL solution, like MySQL or PostgreSQL. I am wondering what your thoughts are: should I use SQL, NoSQL, or something else (Redis maybe?)
In my opinion, MySQL. Provided you have enough DB storage.
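For scale, a minimal sketch of what that could look like in MySQL, with placeholder column names (assume category is the column you filter on and score the one you sort on):

    CREATE TABLE records_2024 (
        id         INT AUTO_INCREMENT PRIMARY KEY,
        category   VARCHAR(50)   NOT NULL,
        name       VARCHAR(100)  NOT NULL,
        score      DECIMAL(10,2) NOT NULL,
        created_at DATE          NOT NULL,
        INDEX idx_category (category)   -- covers the one-column filter
    );

    -- Typical read: filter on one column, sort on another
    SELECT * FROM records_2024
    WHERE category = 'A'
    ORDER BY score DESC;

At 4,000 rows per table, any of the options you list will be fast; the main argument for MySQL/PostgreSQL is that adding a new table each year and serving concurrent filtered reads is exactly the boring, well-trodden path for them.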

How to best query across both Oracle and SQL Server databases for large tables?

I have a stored procedure in SQL Server that also queries tables in the same database and in a different Oracle database. This is for a data warehouse project that joins several large tables across databases and queries them.
Is it better to copy the table (with ~3 million records) to the same database and then query it, or is the slowdown from the table being in a different database not significant? The query is complicated and can take hours.
I'm not necessarily looking for a specific answer, informed opinion and/or specific further reading are also very appreciated. Thanks!
I always prefer a staging layer, or as some call it, an integration layer.
In your case (guessing blind), it's perhaps the best solution to:
Copy table once
Create a sync step (Insert/Update) based on primary key(s)
Schedule step 2
Run your query
If there is some logical data-integrity rule, you can implement step 2 with simple SQL based on timestamps.
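A rough sketch of steps 1 and 2 in T-SQL, assuming a linked server named ORA pointing at the Oracle database and customer_id as the primary key (all object and column names are made up):

    -- Step 1: one-time copy into a local staging table
    SELECT *
    INTO dbo.Stage_Customers
    FROM OPENQUERY(ORA, 'SELECT customer_id, name, region, updated_at FROM customers');

    -- Step 2: sync step (insert/update) based on the primary key
    MERGE dbo.Stage_Customers AS t
    USING OPENQUERY(ORA, 'SELECT customer_id, name, region, updated_at FROM customers') AS s
        ON t.customer_id = s.customer_id
    WHEN MATCHED AND s.updated_at > t.updated_at THEN
        UPDATE SET name = s.name, region = s.region, updated_at = s.updated_at
    WHEN NOT MATCHED THEN
        INSERT (customer_id, name, region, updated_at)
        VALUES (s.customer_id, s.name, s.region, s.updated_at);

Step 3 is then just a SQL Agent job running the MERGE on whatever schedule fits, and step 4's warehouse query reads dbo.Stage_Customers locally instead of reaching across to Oracle every time.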

BigQuery best practice for segmenting tables by dates

I am new to columnar DB concepts and BigQuery in particular. I noticed that for the sake of performance and cost efficiency it is recommended to split data across tables not only logically - but also by time.
For example - while I need a table to store my logs (1 logical table that is called "logs"), it is actually considered a good practice to have a separate table for different periods, like "logs_2012", "logs_2013", etc... or even "logs_2013_01", "logs_2013_02", etc...
My questions:
1) Is it actually the best practice?
2) Where would be best to draw the line - an annual table? A monthly table? A daily table? You get the point...
3) In terms of retrieving the data via queries - what is the best approach? Should I construct my queries dynamically using the UNION option? If I had all my logs in one table - I would naturally use the where clause to get data for the desired time range, but having data distributed over multiple tables makes it weird. I come from the world of relational DB (if it wasn't obvious so far) and I'm trying to make the leap as smoothly as possible...
4) Using the distributed method (different tables for different periods) still raises the following question: before querying the data itself, I want to be able to determine, for a specific log type, what range is available for querying. For example, for a specific machine I would like to first present to my users the relevant scope of their available logs, and let them choose the specific period within that scope to get insights for. The question is: how do I construct such a query when my data is distributed over a number of tables (one per period) and I don't know which tables are available? How can I construct a query when I don't know which tables exist? I might try to access the table "logs_2012_12" when this table doesn't actually exist, or even worse, I wouldn't know which tables are relevant and available for my query.
Hope my questions make sense...
Amit
Table naming
For daily tables, the suggested table name pattern is the specific name of your table + the date like in '20131225'. For example, "logs20131225" or "logs_20131225".
Ideal aggregation: Day, month, year?
The answer to this question will depend on your data and your queries.
Will you usually query one or two days of data? Then have daily tables, and your costs will be much lower, as you query only the data you need.
Will you usually query all your data? Then have all the data in one table. Having many tables in one query can get slower as the number of tables to query grows.
If in doubt, do both! You could have daily, monthly, yearly tables. For a small storage cost, you could save a lot when doing queries that target only the intended data.
Unions
Feel free to do unions.
Keep in mind that there is a limit of 1,000 tables per query. This means if you have daily tables, you won't be able to query 3 years of data (3*365 > 1000).
Remember that unions in BigQuery don't use the UNION keyword, but the "," that other databases use for joins. Joins in BigQuery can be done with the explicit SQL keyword JOIN (or JOIN EACH for very big joins).
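For example, in the original (legacy) BigQuery SQL, querying three daily tables at once looks like this (dataset, table, and column names are assumed):

    SELECT machine_id, COUNT(*) AS events
    FROM [mydataset.logs20131223],
         [mydataset.logs20131224],
         [mydataset.logs20131225]
    WHERE severity = 'ERROR'
    GROUP BY machine_id

The commas union the three daily tables, and only those three tables are scanned and billed.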
Table discovery
API: tables.list will list all tables in a dataset, through the API.
SQL: To query the list of tables within SQL... stay tuned.
New 2016 answer: Partitions
Now you can have everything in one table, and BigQuery will analyze only the data contained in the desired dates - if you set up the new partitioned tables:
https://cloud.google.com/bigquery/docs/creating-partitioned-tables
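With a day-partitioned table, the same kind of query stays against a single logical table and BigQuery prunes by date via the _PARTITIONTIME pseudo column (standard SQL, assumed names again):

    SELECT machine_id, COUNT(*) AS events
    FROM mydataset.logs
    WHERE _PARTITIONTIME BETWEEN TIMESTAMP('2016-01-01') AND TIMESTAMP('2016-01-07')
    GROUP BY machine_id;

Only the partitions inside the date range are scanned, so you keep the cost behaviour of daily tables without the 1,000-table limit or the table-discovery problem.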

What is the best way to query data from multiple tables and databases?

I have 5 databases which represent different regions of the country. In each database, there are a few hundred tables, each with 10,000-2,000,000 transaction records. Each table is a representation of a customer in the respective region. Each of these tables has the same schema.
I want to query all tables as if they were one table. The only way I can think of doing it is creating a view that unions all tables, and then just running my queries against that. However, the customer tables will change all the time (as we gain and lose customers), so I'd have to change the query for my view to include new tables (or remove ones that are no longer used).
Is there a better way?
EDIT
In response to the comments (I also posted this as a response to an answer):
In most cases, I won't be removing any tables; they will remain for historic purposes. As I posted in a comment to one response, the idea was to reduce the time it takes a smaller customer (one with only 10,000 records) to query their own history. There are about 1,000 customers with an average of 1,000,000 rows (and growing) apiece. If I were to add all records to one table, I'd have nearly a billion records in that table. I also thought I was planning for the future, in that when we get, say, 5,000 customers, we don't have one giant table holding all transaction records (this may be an error in my thinking). So then, is it better not to divide the records as I have done? Should I mash it all into one table? Will indexing on customer IDs prevent delays in querying data for smaller customers?
I think your design may be broken. Why not use one single table with a region and a customer column?
If I were you, I would consider refactoring to one single table, and if necessary (for reverse compatibility for example), I would use views to provide the same info as in the previous tables.
Edit to answer OP comments to this post:
One table with 10 000 000 000 rows in it will do just fine, provided you use proper indexing. Database servers are built to cope with this kind of volume.
Performance is definitely not a valid reason to split one such table into thousands of smaller ones!
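As a concrete sketch of that single-table idea (SQL Server syntax, invented column names):

    CREATE TABLE dbo.Transactions (
        TransactionId   BIGINT IDENTITY(1,1) PRIMARY KEY,
        Region          TINYINT       NOT NULL,
        CustomerId      INT           NOT NULL,
        TransactionDate DATETIME      NOT NULL,
        Amount          DECIMAL(18,2) NOT NULL
    );

    -- Index leading on CustomerId: a customer with only 10,000 rows
    -- touches just its own slice of the billion-row table
    CREATE INDEX IX_Transactions_Customer
        ON dbo.Transactions (CustomerId, TransactionDate);

With that index, a small customer's history query does an index seek on its own CustomerId range, which is why the single big table does not penalise them.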
The architecture of this system smells like it needs a vastly different approach if there are a few hundred tables and each has the same schema.
Why are you adding or removing tables at all? This should not be happening under any normal circumstances.
Agree with Brann.
That's an insane DB schema design. Why didn't you go with (or is it an option to change to) a single normalised structure with columns to filter by region and by whatever condition separates each table within a region database?
With your current structure you're stuck with some horribly large (~500 tables) unioned view that you would have to dynamically regenerate as often as new tables appear in the system.
2 solutions:
1. Write a stored procedure that builds the view for you by parsing all table names in the 5 databases and building the view with UNIONs, as you would do it by hand (a rough sketch follows below).
2. Create a new database with one table and import, each night for example, all the records of all the tables into it.
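A sketch of option 1 in T-SQL (SQL Server 2017+ syntax), assuming the five databases are named Region1 through Region5 and the customer tables all share a schema; every name here is a placeholder:

    CREATE OR ALTER PROCEDURE dbo.RebuildAllTransactionsView
    AS
    BEGIN
        DECLARE @sql NVARCHAR(MAX);

        -- One SELECT per customer table in each region database, glued with UNION ALL
        SELECT @sql = STRING_AGG(
            CAST(N'SELECT * FROM ' + QUOTENAME(db) + N'.dbo.' + QUOTENAME(name)
                 AS NVARCHAR(MAX)),
            N' UNION ALL ')
        FROM (
            SELECT 'Region1' AS db, name FROM Region1.sys.tables
            UNION ALL SELECT 'Region2', name FROM Region2.sys.tables
            UNION ALL SELECT 'Region3', name FROM Region3.sys.tables
            UNION ALL SELECT 'Region4', name FROM Region4.sys.tables
            UNION ALL SELECT 'Region5', name FROM Region5.sys.tables
        ) AS t
        WHERE t.name LIKE N'%';   -- tighten this pattern to your table naming convention

        SET @sql = N'CREATE OR ALTER VIEW dbo.AllTransactions AS ' + @sql;
        EXEC sp_executesql @sql;
    END;

Schedule it nightly (or run it whenever a customer table is added or retired) and the view never goes stale.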
Sounds like you're stuck somewhere between a multi-tenant and a single-tenant database schema. Specifically, you're storing it as "light" multi-tenant (separate tables vs. separate databases) but querying it as single-tenant: one query to rule them all.
In the short term, have your data access layer dynamically pick the table to query rather than unioning everything together for one uber query.
In the long term, pick one approach and stick to it: one database and one table, or many databases.
Here are some posts on the subject.
What are the advantages of using a single database for EACH client?
http://msdn.microsoft.com/en-us/library/aa479086.aspx