Azure SQL tables require a clustered index and will not accept insertions if one is not present. If one is present, LogParser complains about a mismatch between the number of columns in the select list and the target table.
Is there any way to square this circle? Perhaps by embedding an expression in LP's select list, like "SELECT DateTime,Thread,Level,Logger,Message,Exception,(select max(id)+1 from loggerTbl)..."
It's becoming amazingly difficult to parse plain old Azure logs into SQL, where God intends them to be.
Azure SQL tables require a clustered index and will not accept insertions if one is not present.
That is no longer true. Azure SQL DB v12 no longer has this restriction. Move your DB to one of the new tiers (Basic, Standard) and your DB will be upgraded to v12.
I am running an assessment using the Azure SQL Data Migration Assistant (3.4.3948.1). In my initial assessment, I had a function that was calling fn_varbintohexstr, so I fixed it (read: removed the function). I also deleted all our synonyms.
Now I have run the assessment several more times, and it continues to give the 'cross-database queries' error without listing any more specifics. How can I find out which particular objects it means? Or is it possible that it has somehow cached my result and I need to invalidate it somehow?
This means you have programming objects in that database that reference another database. For example, a query like this:
SELECT * FROM Database1.dbo.Table1
One of the options you have is to import those external objects into your database and change the three- and four-part name references, which SQL Azure does not support.
You can also use CREATE EXTERNAL DATA SOURCE and CREATE EXTERNAL TABLE on SQL Azure to query tables that belong to other databases, which you will have to migrate to SQL Azure as well.
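For example, a minimal sketch of that elastic-query approach (all server, credential, and table names here are hypothetical):

CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';

CREATE DATABASE SCOPED CREDENTIAL Database1Cred
WITH IDENTITY = 'remote_user', SECRET = '<remote password>';

CREATE EXTERNAL DATA SOURCE Database1Src
WITH (
    TYPE = RDBMS,
    LOCATION = 'yourserver.database.windows.net',
    DATABASE_NAME = 'Database1',
    CREDENTIAL = Database1Cred
);

/* mirror the remote table's definition locally as an external table */
CREATE EXTERNAL TABLE dbo.Table1 (
    id INT,
    name NVARCHAR(100)
)
WITH (DATA_SOURCE = Database1Src);

/* the old three-part-name query becomes a plain local query */
SELECT * FROM dbo.Table1;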
I have a MS Access query that is based on a linked ODBC table (Oracle).
I'm troubleshooting the poor performance of the query here: Access not properly translating TOP predicate to ODBC/Oracle SQL.
SELECT ri.*
FROM user1_road_insp AS ri
WHERE ri.insp_id = (
    SELECT TOP 1 ri2.insp_id
    FROM user1_road_insp AS ri2
    WHERE ri2.road_id = ri.road_id
      AND YEAR(insp_date) BETWEEN [Enter a START year:] AND [Enter a END year:]
    ORDER BY ri2.insp_date DESC, ri2.length DESC, ri2.insp_id
);
The documentation says:
When you spot a problem, you can try to resolve it by changing the local query. This is often difficult to do successfully, but you may be able to add criteria that are sent to the server, reducing the number of rows retrieved for local processing.
In many cases you will find that, despite your best efforts, Office Access still retrieves some entire tables unnecessarily and performs final query processing locally.
However, it's occurred to me that I don't really understand what sort of SQL I should be writing to make both Access and ODBC/Oracle happy.
Should I be writing some sort of generic SQL that Access can understand in a local query AND that can be easily translated to ODBC/Oracle SQL? Is generic SQL a real thing?
What kind of SQL does the ODBC driver use? It depends: MS Access typically has three types of external data connections, each interfacing with a different SQL dialect through the ODBC API.
Linked tables act like local tables but are ODBC-connected data sources, not stored locally. Once they are incorporated into an Access app, these tables can only be queried with MS Access's SQL dialect. They can be joined with local tables or even other backend tables from other sources.
Hence, TOP is available here even though Oracle has no such keyword: you are essentially using Access SQL to manipulate Oracle data. ODBC serves as the origin point of the data, while Access's Jet/ACE SQL engine does the processing and result-set viewing in cached memory.
Pass-through queries do not see local tables or anything else in the local app's environment. Such queries use the SQL dialect of the connected database, here Oracle.
Hence, TOP is NOT available here, while double quotes are allowed in column identifiers (such quoting would fail in MS Access). Essentially, you are using Oracle SQL to manipulate Oracle data in an Access app. You can take the output of the sqlout.txt log and run it in a pass-through query ODBC-connected to your Oracle database; a sketch of an Oracle-dialect rewrite appears at the end of this answer.
ADO/DAO recordsets are run entirely via code such as VBA, are direct connections to data sources, and use the connected database's dialect.
Here, you are using Oracle SQL to manipulate Oracle data in an Access app via the ODBC API.
In each of these types, you have to connect to a backend ODBC data source. You do not even need to use the GUI: you can use Access's object library to create linked tables (see DoCmd.TransferDatabase) and pass-through QueryDefs (see QueryDef.Connect or .Execute).
I suspect the sqlout.txt log entries you see are translations of the ODBC calls into the backend's native dialect.
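As a sketch of that Oracle-dialect rewrite (Oracle 12c+ syntax; the table and column names are assumed to match the linked source, and literal years stand in for the Access parameter prompts):

-- hedged Oracle-dialect equivalent of the Access TOP 1 lookup
SELECT ri.*
FROM user1_road_insp ri
WHERE ri.insp_id = (
    SELECT ri2.insp_id
    FROM user1_road_insp ri2
    WHERE ri2.road_id = ri.road_id
      AND EXTRACT(YEAR FROM ri2.insp_date) BETWEEN 2015 AND 2020
    ORDER BY ri2.insp_date DESC, ri2.length DESC, ri2.insp_id
    FETCH FIRST 1 ROW ONLY
);

Run something like this in a pass-through query and all of the filtering and ordering work happens on the Oracle server.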
To build on @Parfait's point #1:
From Microsoft Access Developer's Guide to SQL Server by Mary Chipman and Andy Baron:
Optimizing Access Queries:
There's a common misconception that the Jet engine always retrieves all the data in linked SQL Server tables and then processes the data locally. This is not usually true. Jet is perfectly capable of sending efficient queries to SQL Server over ODBC and retrieving only the rows required. However, in some cases, Jet will in fact be forced to fetch all the data in certain tables first and then process it. You should be aware of when you are forcing Jet to do this and be sure that it is justified. The following are some general guidelines to follow when creating your Access queries:
Using expressions that can't be evaluated by the server will cause Jet to retrieve all the data required to evaluate those expressions locally. The impact of using Access-specific expressions, such as domain aggregate functions, Access financial functions, or custom VBA functions will vary depending on where in your query the expressions are used. Using such an expression in the SELECT clause will usually not cause a problem because no extra data will be returned. However, if the expression is in the WHERE clause, that criterion cannot be applied on the server, and all the data evaluated by the expression will have to be returned.
With multiple criteria, as many as possible will be processed on the server. This means that even if you use criteria that you know include functions that will need to be processed by Jet, adding other criteria that can be handled by the server will reduce the number of records that Jet has to process. Adding criteria on indexed columns is especially helpful.
Query syntax that includes an Access-specific extension to SQL, not supported by the ODBC driver, may force processing to be done on the client by Access. For example, even though SELECT TOP 5 PERCENT is now supported by SQL Server, it is not supported by the ODBC driver. If you use that syntax in an Access query, Jet will need to retrieve all the records and calculate which ones are in the top 5 percent. On the other hand, even though crosstab queries are specific to Access, Jet will translate them into simple GROUP BY queries and fetch just the required data in one trip to the server unless problematic criteria is used.
Heterogeneous joins between local and remote tables or between remote tables that are in different data sources will, of course, have to be processed by Jet after the source data is retrieved. However, if the remote join field is indexed and the table is large, Jet will often use the index to retrieve only the required rows by making multiple calls to the remote table, one for each row required.
Jet allows you to mix data types within columns of UNION queries and within expressions, but SQL Server doesn't. Such mixing of data types will force processing to be done locally.
Multiple outer joins in one query will be processed locally.
The most important factor is reducing the total number of records being fetched. Jet will retrieve multiple batches of records in the background until the result set is complete, so even though you may seem to get results back immediately, a continuing load is being placed on the server for large result sets.
Note: this book is quite old (published in 2000) and is in reference to Jet Engine. I imagine things might be slightly different in newer versions of Access which use ACE, although I don't have a source to back this up.
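To make the WHERE-clause point above concrete, here is a hedged sketch against a linked table (the table name and the custom VBA function are hypothetical):

-- forces Jet to fetch every row: the server cannot evaluate a VBA function
SELECT * FROM linked_orders
WHERE MyVbaMonth(order_date) = 202001;

-- an equivalent range criterion can be sent to the server, so only
-- matching rows come back over ODBC
SELECT * FROM linked_orders
WHERE order_date >= #2020-01-01# AND order_date < #2020-02-01#;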
I'm very excited by several of the more recently-added Postgres features, such as foreign data wrappers. I'm not aware of any other RDBMS having this feature, but before I try to make the case to my main client that they should begin preferring Postgres over their current cocktail of RDBMSs, and include in my case that no other database can do this, I'd like to verify that.
I've been unable to find evidence of any other database supporting SQL/MED; the closest I've found are things like this short note stating that Oracle does not support SQL/MED.
The main thing that gives me doubt is a statement on http://wiki.postgresql.org/wiki/SQL/MED:
SQL/MED is Management of External Data, a part of the SQL standard that deals with how a database management system can integrate data stored outside the database.
If FDWs are based on SQL/MED, and SQL/MED is an open standard, then it seems likely that other RDBMSs have implemented it too.
TL;DR:
Does any database besides Postgres support SQL/MED?
IBM DB2 claims compliance with SQL/MED (including full FDW API);
MySQL's FEDERATED storage engine can connect to another MySQL database, but NOT to other RDBMSs;
MariaDB's CONNECT engine allows access to various file formats (CSV, XML, Excel, etc.), gives access to "any" ODBC data source (Oracle, DB2, SQL Server, etc.), and can access data in the MyISAM and InnoDB storage engines.
Farrago has some of it too;
PostgreSQL implements parts of it (notably, it does not implement routine mappings, and it has a simplified FDW API). FDWs have been readable since PG 9.1 and writable since 9.3; prior to that, there was DBI-Link.
The PostgreSQL community has plenty of nice FDWs, like NoSQL FDWs (couchdb_fdw, mongo_fdw, redis_fdw), Multicorn (for writing the wrapper itself in Python instead of C), or the nuts PGStrom (which uses the GPU for some operations!).
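To give a flavour of the syntax, a minimal postgres_fdw sketch (the server, credentials, and table definition are hypothetical):

CREATE EXTENSION postgres_fdw;

CREATE SERVER remote_pg
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'remote.example.com', dbname 'otherdb', port '5432');

CREATE USER MAPPING FOR CURRENT_USER
    SERVER remote_pg
    OPTIONS (user 'remote_user', password 'secret');

CREATE FOREIGN TABLE remote_items (
    id   integer,
    name text
)
SERVER remote_pg
OPTIONS (schema_name 'public', table_name 'items');

-- the foreign table now queries like a local one
SELECT * FROM remote_items WHERE id < 100;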
SQL Server has the concept of Linked Servers (http://technet.microsoft.com/en-us/library/ms188279.aspx), which allows you to connect to external data sources (Oracle, other SQL instances, Active Directory, file system data via the Indexing Service provider, etc.) and, if you really need to, you can create your own providers for a SQL Server Linked Server to use.
Another option within SQL Server is the CLR, in which you can write code to retrieve data from web services or other data sources as needed.
While this may not technically be "SQL/MED", it seems to accomplish the same thing.
Distributed query using a local table joined to a 4-part linked server query. I think in this case the remotetable filter might not be applied until after the entire table is pulled local (the documentation is fuzzy on this, and I've found articles with conflicting opinions):
SELECT *
FROM LocalDB.dbo.table t
INNER JOIN LinkedServer1.RemoteDB.dbo.remotetable r on t.val = r.val
WHERE r.val < 1000
;
Using OPENQUERY, the remotetable filter is applied on the remote server, as long as the filter is included in OPENQUERY's second parameter:
SELECT *
FROM LocalDB.dbo.table t
INNER JOIN OPENQUERY(LinkedServer1, 'SELECT * FROM RemoteDB.dbo.remotetable r WHERE r.val < 1000') r on t.val = r.val
Right... I've got a program I'm doing some maintenance on.
Urgh. Even describing it makes me shudder... Right, okay.
Every night, a database running on what we think is SQL Server 2000 hooks up to an Informix database and copies it over into SQL Server.
The Informix/SQL data is accessed by the program I'm maintaining, which then stores some data in a different SQL Server 2000 database. This data should have foreign key constraints on the Informix data, but doesn't.
Further on down the line, data from the SQL database is put back into the Informix/SQL database, and later still, back into the actual Informix database.
Basically, the root of my problem is that there are no foreign or primary key constraints on the non-Informix SQL database. Well, some of the tables have a Primary key on a non-meaningful "ID" column, but those aren't FK'd to any other tables.
My question is: Is it possible to link SQL Server 2000 to the native Informix database in some way, so that I can add foreign key constraints within the SQL database so that SQL Server can only create rows when it can refer to existing rows within the Informix database?
I'll do my best to answer any questions anyone has, but as far as I can tell the reasoning behind these design decisions was genuine insanity, so reasons won't be particularly forthcoming, as I can't work them out, myself...
Yuck!
Bad Luck (on the mess you've inherited)!
Good Luck (with your work fixing the mess)!
Which version of Informix, and what platform (type of machine, o/s) is it running on?
Is there a reason (other than that it will break because the data is a mess) that you can't update the Informix schema to enforce the real RI constraints? You probably need to know how bad the mess is so that you can start the cleanup process. IDS (Informix Dynamic Server) does have 'violations tables' which can be used to track problematic rows of data; 'START VIOLATIONS' and 'STOP VIOLATIONS' are the statements to look for in the Informix Guide to SQL: Syntax manual. You might well need to unload and delete the data from a table before starting to reload it with violations checking enabled.
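A hedged sketch of that violations-table flow (Informix syntax quoted from memory; the table and constraint names are hypothetical):

-- creates road_insp_vio and road_insp_dia to capture bad rows and diagnostics
START VIOLATIONS TABLE FOR road_insp;

-- with the constraint in filtering mode, offending rows are diverted to
-- road_insp_vio instead of aborting the load
SET CONSTRAINTS fk_road_insp_road FILTERING;

-- ...reload the data here, then inspect road_insp_vio and clean up...

STOP VIOLATIONS TABLE FOR road_insp;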
After clarification, the question seems to be "Can I set up referential integrity constraints on tables in the SQL Server databases that are constrained by (refer to) tables in the Informix databases?"
The answer to that is (sadly):
No
Most DBMSs are reluctant to support cross-database referential integrity constraints, let alone cross-DBMS constraints.
The closest approximation would be to keep copies of the relevant Informix tables in the SQL Server databases, but that probably adds to the data transfer workload. OTOH, cleaning up the data probably requires that; it might be possible to relax the copying later, once the data is more nearly sane. It depends, in part, on the volatility of the referenced Informix data: how often rows are added to or deleted from the referenced tables.
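On the SQL Server side, that approximation might look something like this (hypothetical names; the copy table would be refreshed by the existing nightly transfer):

-- nightly-refreshed copy of the referenced Informix table
CREATE TABLE dbo.informix_road_copy (
    road_id INT NOT NULL PRIMARY KEY
);

-- the local table can then be constrained against the copy
ALTER TABLE dbo.road_insp
    ADD CONSTRAINT fk_road_insp_road
    FOREIGN KEY (road_id) REFERENCES dbo.informix_road_copy (road_id);

Note that the nightly refresh then has to respect the constraint itself: rows cannot be deleted from the copy while dependent rows still reference them.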
I'm looking for a way to validate the SQL schema on a production DB after updating an application version. If the application does not match the DB schema version, there should be a way to warn the user and list the changes needed.
Is there a tool or a framework (to use programmatically) with built-in features to do that?
Or is there some simple algorithm to run this comparison?
Update: Red Gate lists "from $395". Anything free? Or more foolproof than just keeping the version number?
Try this SQL.
- Run it against each database.
- Save the output to text files.
- Diff the text files.
/* get list of objects in the database */
SELECT name, type
FROM sysobjects
ORDER BY type, name

/* get list of columns in each table / parameters for each stored procedure */
SELECT so.name, so.type, sc.name, sc.number, sc.colid,
       sc.status, sc.type, sc.length, sc.usertype, sc.scale
FROM sysobjects so, syscolumns sc
WHERE so.id = sc.id
ORDER BY so.type, so.name, sc.name

/* get definition of each stored procedure */
SELECT so.name, so.type, sc.number, sc.text
FROM sysobjects so, syscomments sc
WHERE so.id = sc.id
ORDER BY so.type, so.name, sc.number
I hope I can help - this is the article I suggest reading:
Compare SQL Server database schemas automatically
It describes how you can automate the SQL Server schema comparison and synchronization process using T-SQL, SSMS or a third party tool.
You can do it programmatically by looking in the data dictionary (sys.objects, sys.columns, etc.) of both databases and comparing them. However, there are also tools like Redgate SQL Compare Pro that do this for you. I have specified this as part of the tooling for QA on data warehouse systems on a few occasions now, including the one I am currently working on. On my current gig this was no problem at all, as the DBAs here were already using it.
The basic methodology for using these tools is to maintain a reference script that builds the database and keep this in version control. Run the script into a scratch database and compare it with your target to see the differences. It will also generate patch scripts if you feel so inclined.
As far as I know there's nothing free that does this unless you feel like writing your own. Redgate is cheap enough that it might as well be free. Even as a QA tool to prove that the production DB is not in the configuration it was meant to be it will save you its purchase price after one incident.
You can now use my SQL Admin Studio for free to run a Schema Compare, Data Compare and sync the changes. It no longer requires a license key; download it from here: http://www.simego.com/Products/SQL-Admin-Studio
Also works against SQL Azure.
[UPDATE: Yes I am the Author of the above program, as it's now Free I just wanted to Share it with the community]
If you are looking for a tool that can compare two databases and show you the differences, Red Gate makes SQL Compare.
You didn't mention which RDBMS you're using: if the INFORMATION_SCHEMA views are available in your RDBMS, and if you can reference both schemas from the same host, you can query the INFORMATION_SCHEMA views to identify differences in:
- tables
- columns
- column types
- constraints (e.g. primary keys, unique constraints, foreign keys, etc.)
I've written a set of queries for exactly this purpose on SQL Server for a past job - it worked well to identify differences. Many of the queries were using LEFT JOINs with IS NULL to check for the absence of expected items, others were comparing things like column types or constraint names.
It's a little tedious, but it's possible.
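A sketch of that LEFT JOIN / IS NULL approach, assuming both databases sit on the same SQL Server instance (the database names are placeholders):

/* columns present in the reference schema but missing from the target */
SELECT s.TABLE_SCHEMA, s.TABLE_NAME, s.COLUMN_NAME, s.DATA_TYPE
FROM ReferenceDb.INFORMATION_SCHEMA.COLUMNS AS s
LEFT JOIN TargetDb.INFORMATION_SCHEMA.COLUMNS AS t
       ON t.TABLE_SCHEMA = s.TABLE_SCHEMA
      AND t.TABLE_NAME   = s.TABLE_NAME
      AND t.COLUMN_NAME  = s.COLUMN_NAME
WHERE t.COLUMN_NAME IS NULL;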
I found this small and free tool that fits most of my needs.
http://www.wintestgear.com/products/MSSQLSchemaDiff/MSSQLSchemaDiff.html
It's very basic but it shows you the schema differences of two databases.
It doesn't have any fancy stuff like auto generated scripts to make the differences to go away and it doesn't compare any data.
It's just a small, free utility that shows you schema differences :)
Make a table and store your version number in there. Just make sure you update it as necessary.
CREATE TABLE version (
version VARCHAR(255) NOT NULL
)
INSERT INTO version VALUES ('v1.0');
You can then check the version number stored in the database matches the application code during your app's setup or wherever is convenient.
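For instance, a minimal startup check in T-SQL (assuming the app expects 'v1.0'):

-- fails loudly when the schema version does not match the application
IF NOT EXISTS (SELECT 1 FROM version WHERE version = 'v1.0')
    RAISERROR('Database schema version does not match the application.', 16, 1);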
SQL Compare by Red Gate.
Which RDBMS is this, and how complex are the potential changes?
Maybe this is just a matter of comparing row counts and index counts for each table; if you also have trigger and stored procedure versions to worry about, then you need something more industrial.
Try dbForge Data Compare for SQL Server. It can compare and sync any databases, even very large ones. Quick, easy, always delivers a correct result.
Try it on your database and comment upon the product.
We can recommend a reliable SQL comparison tool that offers three times faster comparison and synchronization of table data in your SQL Server databases: dbForge Data Compare for SQL Server.
Main advantages:
Speedier comparison and synchronization of large databases
Support of native SQL Server backups
Custom mapping of tables, columns, and schemas
Multiple options to tune your comparison and synchronization
Generating comparison and synchronization reports
Plus a free 30-day trial and risk-free purchase with a 30-day money-back guarantee.