Analysis Services 2008 R2, Enterprise 64, Version: 10.50.2500.0
Win7 64bit
I have set up dimension dynamic security to use an assembly to get the allowed members for the role/dimension.
The assembly looks up a SQL Server table, returns the allowed members, and logs everything to the DB so that I can debug things more easily.
So, when I test the role (via SSMS, BIDS, Excel) the assembly gets called only once, and then AS (as I understand it) caches the security info (i.e. the security dimension).
Thus, if I change the content of the relational table of allowed members, the change is not reflected even if I close and reopen the apps. Fine, it's cached.
But then I instruct AS to CLEAR the cache as described here: http://technet.microsoft.com/en-us/library/hh230974.aspx
Nothing happens.
The XMLA query executes fine, I get:
<return xmlns="urn:schemas-microsoft-com:xml-analysis">
<root xmlns="urn:schemas-microsoft-com:xml-analysis:empty" />
</return>
but the assembly DOES NOT get called again, and the allowed members are wrong.
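For reference, the ClearCache command I'm sending is essentially the following (the DatabaseID here is just a placeholder for my actual SSAS database ID):
<!-- DatabaseID below is a placeholder for the actual SSAS database ID -->
<ClearCache xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>MyOlapDatabase</DatabaseID>
  </Object>
</ClearCache>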
Also, these guys say that this approach should work: http://cwebbbi.wordpress.com/2011/05/09/why-not-to-use-the-external-assembly-approach-for-dynamic-security/
The only way to get it to work is to reprocess the cube!
Is this a bug?
Is there a workaround?
Thanks,
Igor
Background:
I have an MS SQL Server database and I want to track changes to it: for example, if a column needs to be added or removed, or a table needs to be dropped. Something similar to version control for regular code.
The problem:
While looking around I saw that there were some tools that can be used:
RedGate SQL Source Control
Visual Studio Database project
I am more interested in knowing whether either of these tools will track changes to my database. More specifically, I have a TFS server that is the source control for my MVC code; can I use either of these with TFS? Will it allow us to restore from older versions? Will it allow multiple developers to work on the database simultaneously?
For this type of work, ApexSQL Source Control has proven to be all that you need. With this SSMS add-in you can work directly on a database, and all of your changes will be tracked in real time.
Yes, several developers can work on the same database at the same time. When one developer works on one or more objects, other developers can see which objects those are, and until the first one has finished changing an object the others are not allowed to change it.
If an object is changed incorrectly, the previous version, or any earlier version, can be restored at any moment.
This add-in has all the options and features needed to let developers work without losing time checking which changes were made against an object, since the add-in does that for them. And you can always see who made a change, when, and what it was.
Being in the database version control space for 5 years (as director of product management at DBmaestro) and having worked as a DBA for over two decades, I can tell you the simple fact that you cannot treat database objects the way you treat your Java, C#, or other files, saving the changes in simple DDL scripts.
There are many reasons and I'll name a few:
Files are stored locally on the developer's PC, and the changes s/he makes do not affect other developers. Likewise, the developer is not affected by changes made by her colleague. With a database this is (usually) not the case: developers share the same database environment, so any change committed to the database affects the others.
Publishing code changes is done using Check-In / Submit Changes / etc. (depending on which source control tool you use). At that point, the code from the developer's local directory is inserted into the source control repository. A developer who wants to get the latest code needs to request it from the source control tool. In a database, the change already exists and impacts others even if it was not checked in to the repository.
During a file check-in, the source control tool performs a conflict check to see whether the same file was modified and checked in by another developer while you were modifying your local copy. Again, there is no such check in the database. If you alter a procedure from your local PC and at the same time I modify the same procedure with code from my local PC, then we overwrite each other's changes.
The build process for code is done by getting the label / latest version of the code into an empty directory and then performing a build (compile). The output is a set of binaries that we copy over the existing ones, replacing them; we don't care what was there before. With a database we cannot recreate the database, as we need to preserve the data! Also, the deployment executes SQL scripts which were generated in the build process.
When executing the SQL scripts (with the DDL, DCL, and DML (for static content) commands) you assume the current structure of the environment matches the structure that existed when you created the scripts. If not, your scripts can fail, for example when trying to add a new column that already exists (see the sketch after this list).
Treating SQL scripts as code and generating them manually will cause syntax errors, database dependency errors, and scripts that are not reusable, which complicates developing, maintaining, and testing those scripts. In addition, those scripts may run on an environment which is different from the one you thought they would run on.
Sometimes the script in the version control repository does not match the structure of the object that was tested, and then errors happen in production!
There are many more, but I think you got the picture.
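To make the "adding a column that already exists" case concrete, here is a minimal sketch of the kind of defensive check a hand-written deployment script ends up needing on SQL Server (table and column names are made up for illustration):
-- Names are hypothetical; guard the ALTER so re-running the script does not fail
IF NOT EXISTS (
    SELECT 1 FROM sys.columns
    WHERE object_id = OBJECT_ID(N'dbo.Customers')
      AND name = N'PhoneNumber'
)
    ALTER TABLE dbo.Customers ADD PhoneNumber NVARCHAR(20) NULL;
Multiply that by every object and every target environment and it quickly becomes unmanageable by hand.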
What I found that works is the following:
Use an enforced version control system that requires check-out/check-in operations on the database objects. This will make sure the version control repository matches the code that was checked in, since it reads the metadata of the object during the check-in operation rather than as a separate manual step. It also allows several developers to work in parallel on the same database while preventing them from accidentally overwriting each other's code.
Use an impact analysis that utilizes baselines as part of the comparison to identify conflicts and to determine whether a change (when comparing the object's structure between the source control repository and the database) is a real change originating from development, or a change that originated from a different path, such as a different branch or an emergency fix, and should therefore be skipped.
An article I wrote on this was published here, you are welcome to read it.
If you're looking for a product that will track changes into TFS from your SQL Server automatically, I'd invite you to take a look at our product, Sql Historian. It's different from most other SQL version control systems (including the ones you've listed) in that it does not require developers to perform a check-in ritual to synchronize version control with what's already committed to the db.
However, features Sql Historian has in common with the other two systems you mention are: working with TFS, the ability to view older versions of your db objects, and allowing multiple users on the db at the same time.
I have been using MS Access databases via DAO for many years, but feel that I ought to embrace newer techniques.
My main application runs on end user PCs (no server) and uses a shared database that is created and updated on-the-fly. When the application is first run it detects the absence of a database and creates a new empty one.
Any local user running the application is allowed to add or update records in this shared database. We have a couple of other shared databases, that contain templates, regional information, etc., but these are not updated directly by the application.
Updates of the application are released from time to time and each new update checks the main database version and if necessary executes code to bring the database up to the latest specification. This may involve the creation or deletion of tables and/or columns. New copies of the template databases are also included as part of the update.
Our users are not required to be computer-literate and should not need to run any sort of database management software beyond those facilities provided by the application.
It all works very nicely with DAO/Access, but I'm struggling to find how to do it with SQL Express. The databases seem to be squirrelled away in locations that are user-specific and database creation and update seems at best awkward to do by program code alone.
I came across some references to "Xcopy deployment" that look like they could be promising, but there seem to be references to "user instances" that sound suspiciously like something that's not shared. I'd appreciate advice from anyone who has done it.
It sounds to me like you haven't fully absorbed the fundamental difference between the Access Database Engine (ACE/Jet) and SQL Server:
When your users launch your Access application it connects to the Access Database Engine that has been installed on their machine. Their copy of ACE/Jet opens the shared database file (.accdb or .mdb) in the network folder. The various instances of ACE/Jet work together to manage concurrent updates, record locking, and so on. This is sometimes called a "peer-to-peer" or "shared-file" database architecture.
With an application that uses a SQL Server back-end, the copies of your application on each user's machine connect over the network to the same instance of SQL Server (that's why it's called "SQL Server"), and that instance of SQL Server manipulates the database (which is stored on its local hard drive) on behalf of all of the clients. This is called "client-server" or "server-based" database architecture.
Note that for a multi-user database you do not install SQL Server on the client machines, you only install the SQL Server Client components (OleDb and ODBC drivers). SQL Server itself is only installed in one place: the machine that will act as the SQL... Server.
re: "database creation and update seems at best awkward to do by program code alone" -- Not at all, it's just "different". Once again, you pass all of your commands to the SQL Server and it takes care of creating the actual database files. For example, once you've connected to the SQL Server if you tell it to
CREATE DATABASE NewDatabase
it will create the database files (NewDatabase.mdf and NewDatabase_log.LDF) in whatever local folder it uses to store such things, which is usually something like
C:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\DATA
on the server machine.
Note that your application never accesses those files directly. In fact it almost certainly cannot do so, and indeed your application does not even care where those files reside or what they are called. Your app simply talks to the SQL Server (e.g. ServerName\SQLEXPRESS) and the server takes care of the details.
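Just to illustrate, here is a minimal sketch of the kind of guarded creation your app could send on first run (the database name is a placeholder, and it assumes the login your app uses has permission to create databases):
-- Create the shared database only if it doesn't exist yet (name is a placeholder);
-- SQL Server decides where the .mdf/.ldf files actually live.
IF DB_ID(N'NewDatabase') IS NULL
    CREATE DATABASE NewDatabase;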
Just to update on my progress. Inspired by suggestions here and this article on code project:
http://www.codeproject.com/Articles/63147/Handling-database-connections-more-easily,
I've created a wrapper for the ADO.NET methods that looks quite similar to the DAO stuff that I am familiar with.
I have a class that I can use just like a DAO Database. It wraps ADO methods like ExecuteReader, ExecuteNonQuery, etc. with overloads that can accept a SQL parameter. This allows me to directly replace DAO Recordsets with readers, OpenRecordset with ExecuteReader and Execute with ExecuteNonQuery.
Each method obtains and releases the connection from its parent class instance. These in turn open or close the underlying connection as required depending on the transaction state, if any. So a connection is held open for method calls that are part of a transaction, but closed immediately for a single call.
This has greatly simplified the migration of my program since much of the donkey work can be done by a simple "find and replace". The remaining issues are then relatively easy to find and sort out.
Thanks, once again to Gord and Maxwell for your advice.
This answer is too long to write down... but go to the Microsoft page, where they explain how to do it: http://office.microsoft.com/en-us/access-help/move-access-data-to-a-sql-server-database-by-using-the-upsizing-wizard-HA010275537.aspx
I hope this helps you!!
Thanks for reading. I'll try to explain my issue in a detailed format, as the question I'm asking is a bit high-level for my experience level.
I'm using VS2005 and SQL Server 2005 with Reporting Services. All of my reports are built in VS2005. The reports are deployed to folders named "Amort" or "Amort_Test" on the Report Server depending on the configuration I choose when I deploy (Production deploys to "Amort", Test deploys to "Amort_Test").
In Reporting Services Report Manager, I have a data source set up called AMORT (and that is the datasource in my VS2005 reports). The datasource is of type Microsoft SQL Server and the connection string is "Data Source=uslibsql310;Initial Catalog=AMORT_P".
What I'd like to do is have the reports in the "Amort" folder point to a database called AMORT_P on my server (uslibsql310), while the reports in the "Amort_Test" folder point to the database called AMORT_T on the same server (uslibsql310). With my current configuration, reports in both folders point to the same AMORT datasource, which currently points to AMORT_P.
My initial thought was that I could create a new datasource, call it AMORT_Test, and have its connection string be "Data Source=uslibsql310;Initial Catalog=AMORT_T". However, every time I deploy my reports, I'd have to change the datasource in VS2005 to read AMORT_Test instead of AMORT and then deploy, which would be a bit of a hassle.
Can anyone think of a more user-friendly solution to this? I'm one who normally finds the quickest solution and goes with it, but in this case I think there must be a way to set this up so that the reports in one folder know to pick one DB and the reports in another folder know to pick a different DB, but my current setup doesn't allow that. I'm not sure where to start in trying to figure this out as I'm a bit of an RS novice.
You're almost there, I think. If I understood correctly, here's your current setup:
One shared datasource
Reports all use that shared datasource for datasets
Two configurations: test and production, each with its own target folder
What you can do now is set OverwriteDataSources to False. Manual labor is required to set the connection string for deployed reports only:
For initial deployment of reports
When you want/choose to change the connection for deployed reports
This manual labor can be either:
Changing the connection string, temporarily enabling OverwriteDataSources, and re-deploying
Going to the report manager web frontend to change the connection string
However, your default setup would be to deploy reports to both configurations, without having to worry about connecting test reports to production databases and vice versa.
Is it possible to monitor what is happening to an Access MDB (i.e. what SQL queries are being executed against it), in the same way you would use SQL Profiler for SQL Server?
I need logs of actual queries being called.
The answer depends on the technology the client uses to access the MDB. There are different tracing settings which you can configure in HKEY_LOCAL_MACHINE\Software\Microsoft\Jet\4.0\Engines\ODBC (see http://office.microsoft.com/en-us/access/HP010321641033.aspx). If you use OLE DB to access the MDB from SQL Server you can use DBCC TRACEON (see http://msdn.microsoft.com/en-us/library/ms187329.aspx). I could continue, but first of all you should define exactly which interface you use to access the MDB.
An MDB is a file without any active components, so the tracing cannot be done by the MDB itself, but only by the DB interface.
UPDATED: Because you use DAO (Jet Engine) and OLE DB from VB, I recommend you create a JETSHOWPLAN registry key with the value "ON" under HKEY_LOCAL_MACHINE\SOFTWARE\MICROSOFT\JET\4.0\Engines\Debug (you have to create the Debug subkey). This key is described, for example, in https://web.archive.org/web/1/http://articles.techrepublic%2ecom%2ecom/5100-10878_11-5064388.html and http://msdn.microsoft.com/en-us/library/aa188211%28office.10%29.aspx, and corresponds to http://support.microsoft.com/kb/252883/en; it allows you to trace OLE DB queries. If this output is not enough for you, you can additionally use TraceSQLMode and TraceODBCAPI from HKEY_LOCAL_MACHINE\Software\Microsoft\Jet\4.0\Engines\ODBC. In my practice JETSHOWPLAN gives me all the information I need. See also the SHOWPLAN command.
UPDATED 2: For more recent versions of Access (like Access 2007) use a key like HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Office\12.0\Access Connectivity Engine\Engines. The tool ShowplanCapturer (see http://www.mosstools.de/index.php?option=com_content&view=article&id=54&Item%20%20id=57; download at http://www.mosstools.de/download/showplan_v9.zip, also in English) can also be helpful for you.
If you're accessing it via ODBC, you can turn on ODBC logging. It will slow things down a lot, though. And it won't work for any other data interface.
Another thought is using Jet/ACE as a linked server in SQL Server, and then using SQL Profiler. But that's going to tell you the SQL that SQL Server processed, not what Jet/ACE processed. It may be sufficient for your purposes, but I don't think it would be a good diagnostic for Jet/ACE.
EDIT:
In a comment, the original poster has provided this rather crucial information:
The application I am trying to monitor is compiled and at a customer's premises. I am trying to monitor what queries it is attempting against an MDB. I cannot modify the application. I am trying to do what SQL Profiler would do for a SQL Server.
In that case, I think that you could do this:
rename the original MDB to something else.
use a SQL Server linked server to connect to the renamed MDB file (a sketch follows this list).
create a new MDB with the name of the original MDB and link to the SQL Server with ODBC.
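To flesh out step 2, here is a rough sketch of what the linked server setup might look like, assuming the 32-bit Jet OLE DB provider is available on the SQL Server machine (the linked server name and file path are placeholders):
-- Register the renamed MDB as a linked server (name and path are hypothetical)
EXEC sp_addlinkedserver
    @server     = N'JetAudit',
    @srvproduct = N'Access',
    @provider   = N'Microsoft.Jet.OLEDB.4.0',
    @datasrc    = N'C:\Data\Original_renamed.mdb';

-- Map logins to the default Access "Admin" account with no password
EXEC sp_addlinkedsrvlogin
    @rmtsrvname  = N'JetAudit',
    @useself     = N'FALSE',
    @rmtuser     = N'Admin',
    @rmtpassword = NULL;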
The result will be an MDB file that has the same tables in it as the original, but they are not local tables; they are links to the SQL Server. In that case, all access will be going through the SQL Server and can be viewed with SQL Profiler.
I don't have a clue what this would do to performance, or if it would break any of the data retrieval in the original app. If that app uses table-type recordsets or SEEK, then, yes, it will break. But this is the only way I can see to get logging.
It shouldn't be surprising that there is no logging for Jet/ACE, given that there is no single server process managing access to the data store.
Keep in mind that the file sitting on your hard drive is simply a Windows file. So there is a big difference between a server-based system and a simple text file, or PowerPoint file, or in this case an mdb file just sitting on the drive.
However, you can get the Jet engine to display its query optimizing via showplan.
How to do this is explained here:
http://www.databasejournal.com/features/msaccess/article.php/3658041/Queries-On-Steroids--Part-IV.htm
The above article also shows how to access the jet disk read statistics, which I also find extremely useful for optimizing things.
Just remember to turn off that data engine logging system when you’re not using it as it creates huge log files…
You could write your own profiler, based on a "transaction" object that centralizes all instructions sent to the database. You'll end up with something like a "transaction.execute" method, and a transaction table in your Access db. This table can then be used to collect each transaction's instructions, start time, end time, the user sending the instruction, etc.
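For what it's worth, a rough sketch of what that logging table might look like in Jet DDL (all names are invented for illustration):
-- Log table for the home-grown "transaction" wrapper (all names are hypothetical)
CREATE TABLE TransactionLog (
    LogID      COUNTER PRIMARY KEY,
    SqlText    MEMO,
    StartTime  DATETIME,
    EndTime    DATETIME,
    UserName   TEXT(64)
);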
I'd suggest upsizing the tables to SQL Server. There is a tool from the SQL Server group that is better than the Upsizing Wizard that is included with Access.
SQL Server Migration Assistant for Access (SSMA Access)
Also see my Random Thoughts on SQL Server Upsizing from Microsoft Access Tips page
We have the standard three-environment setup of development, testing and production. Each environment has its own report server, web server, database server, etc.
Part of our migration is to move our Business Objects (XI R2) reports between the servers, but as of right now we need to manually update the connection settings for each report. This is mildly painful now at 40+ reports and will become a nightmare as we continue.
Due to how we generate reports we cannot dynamically change the connection string when we generate the report. We are using stored procs instead of Universes because that is what the team is most familiar with.
Any suggestions would be greatly appreciated.
There is an API that you can use to programmatically update this sort of thing, although I can't remember how to do it. Check out the docs provided by Business Objects - IIRC they are not publicly available (at least they weren't in 2006 when I last worked with it) so you may have to get them from the vendor.
Take a look at the Report class's ReportLogon class in the CrystalDecisions.Enterprise.Desktop.Report assembly of the BusinessObjects SDK. It has quite a few options for changing the database connection.
I wrote something similar for a client to make bulk changes to Universes and WebI reports. I would imagine that it is very similar for Crystal Reports.
Are you changing the Universe Connection or the Universes themselves?
In our environment, we worked around this by naming the Universes the same between environments, but each has a different Connection per environment. This avoids needing to change each report.
I searched far and wide and it seems like this is an unusual circumstance. My final solution, which seems to be okay, is to have a consistent DSN connection string in each environment. This means that each connection string is effectively the same.
It still feels wrong and if anyone has other ideas that would be great.
EDIT:
This failed miserably. After a little bit of testing I found out that many of our stored procedures would not run using the DSN. After that I gave up completely.