We use Power BI Premium, embedded reporting, and deployment pipelines to publish reports to our production environment. One of our reports has performance issues when we publish it to production via deployment pipelines. The report uses DirectQuery with a native SQL query for all of its data sources. It is also embedded into our homegrown production system and uses Row-Level Security so that users can only see data from their own company. In our test environment this report runs fine and performance is OK, but when we use deployment pipelines to publish it to production, performance degrades and the report runs slowly. I am not sure whether the deployment pipelines are causing the performance issues or whether it's the DirectQuery component. We have tried query caching, and I have tried using aggregations, without any success. If anyone has experienced similar issues, please let me know what your solution was. Thanks!
I have a rather strange issue. I am part of a team building control software for industrial applications. We have a web UI and an OPC UA server that both use the same backend code. When issuing queries from the web UI, we match documents and updates are successful; however, when issuing the same queries through the same update function via the OPC UA server, MongoDB reports that the query was acknowledged but no records matched, and therefore no documents are updated. I was just wondering if anyone has experienced this issue, as it is perplexing to say the least.
Thank you in advance for any assistance.
One thing we have already tried is routing both queries through the same functions to ensure that aggregation and lookups happen in the same way. I have also run the "faulty" queries in the mongo shell, and they were successful.
I have an ASP.NET web application that connects to a database installed at several clients in their production environments.
Some of those clients manage critical information (in other schemas not accessible to the web app, such as people's money), so getting approval to execute scripts directly against the database to fix things in my web app, when it's needed, takes time; sometimes it takes weeks.
Because some of my clients operate in a volatile environment, my web app has to absorb a lot of changes in short periods of time. That means running scripts against the database to alter data or schema, and that means wasted time!
Long story short, my question is: is it good practice to implement a page, available only to administrator users, that executes a raw query directly against the database?
Assume a scenario where the security side is managed properly.
Something like SQLPad, where you cannot see the entire database system, just the query and the result, since there is only one target database.
No. It's a terrible idea. The security issue is probably not manageable - a web page available on the public internet that grants schema modification rights to the logged-in user is a horrible security risk. Even if you can't get to another schema, you can easily bring the server to its knees by writing simple SQL that consumes all the CPU, memory, or disk space.
It's also terrible because you lose track of which changes were installed in which environment.
If the IT department won't approve your scripts when run from Management Studio, they certainly won't let you loose on your own via a web interface.
I've always solved this problem via automated deployment scripts - execute the schema changes etc. as part of installing the new version of the web application. That way, you can do things like back up the database before running your changes, keep track of versioning, and control access.
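To make the automated-deployment approach concrete, here is a minimal sketch of the kind of versioned, idempotent migration script the installer could run. It assumes SQL Server T-SQL (the question mentions ASP.NET and Management Studio), and the dbo.SchemaVersion table, the version number, and the example ALTER are hypothetical illustrations rather than anything from the original setup.

    -- Minimal sketch of a versioned migration step (names and numbers are placeholders).
    IF OBJECT_ID('dbo.SchemaVersion', 'U') IS NULL
    BEGIN
        CREATE TABLE dbo.SchemaVersion (
            VersionNumber INT           NOT NULL PRIMARY KEY,
            AppliedAtUtc  DATETIME2     NOT NULL DEFAULT SYSUTCDATETIME(),
            Description   NVARCHAR(200) NOT NULL
        );
    END;

    -- Apply migration 42 only if it has not been applied yet (idempotent).
    IF NOT EXISTS (SELECT 1 FROM dbo.SchemaVersion WHERE VersionNumber = 42)
    BEGIN
        BEGIN TRANSACTION;

        -- The actual schema/data change shipped with this release.
        ALTER TABLE dbo.Customer ADD PreferredContact NVARCHAR(50) NULL;

        INSERT INTO dbo.SchemaVersion (VersionNumber, Description)
        VALUES (42, 'Add Customer.PreferredContact');

        COMMIT TRANSACTION;
    END;

Each release ships a script like this and the installer runs it, so every environment keeps a record of exactly which changes have been applied.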
I have a Standard S1 SQL database, which is fine for most tasks. However, I have an overnight task that needs much more computing power.
I am wondering if anyone has experience with using a schedule to scale the database up in terms of service tier and performance level, execute one or more specific SQL tasks, and then scale back down to the original level.
I wrote the following Azure Automation workflow for your exact scenario: [Azure SQL Database: Vertically Scale]. In full disclosure, there is an open issue when running the script against SQL Database V12 right now. I am actively working on it and will post on the Script Center page when it is resolved.
Update (2/28): the issue has been mitigated, and the detailed steps for the temporary workaround have been posted on the main Script Center page.
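For reference, on current Azure SQL Database the scale operation that the runbook automates can also be issued directly in T-SQL; the sketch below uses a placeholder database name and example service objectives, and is not the runbook itself.

    -- Minimal sketch: database name and service objectives are placeholders.
    -- Typically run while connected to the master database of the logical server.

    -- Scale up before the heavy overnight task.
    ALTER DATABASE [MyDb] MODIFY (EDITION = 'Premium', SERVICE_OBJECTIVE = 'P2');

    -- ... run the overnight workload ...

    -- Scale back down afterwards.
    ALTER DATABASE [MyDb] MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S1');

    -- The change is asynchronous; check the current objective once it completes.
    SELECT d.name, dso.edition, dso.service_objective
    FROM sys.databases AS d
    JOIN sys.database_service_objectives AS dso
      ON dso.database_id = d.database_id;

An Azure Automation schedule (or any other scheduler) can then wrap these statements around the overnight job.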
We are having significant performance problems on Azure. Various factors have made this difficult to examine precisely on Azure itself, so if the problems are in the performance of the code or of the database, I would like to examine them by running locally. However, it appears that the default configuration of our database on Azure is different from my local one: for example, as I understand it, an Azure-created database defaults to READ_COMMITTED_SNAPSHOT, which is not the default for a database I create in SQL Server. That means the performance issues are different for the two.
My question is: how can I find all such discrepancies between the two setups and correct them, so that when I find speed issues locally I will know they represent speed issues on Azure? I am a SQL Server novice. I recognize that I cannot recreate "time to database" and "network time" issues that way, but I don't think those are what are killing us.
You might find my answer to this post useful.
We saw great benefits from implementing telemetry to gather information and use it later for analysis, so that you can finally find out where and how you are spending your time interacting with SQL and therefore how to improve the query plans. Here is a link to the CAT blog post: http://blogs.msdn.com/b/windowsazure/archive/2013/06/28/telemetry-basics-and-troubleshooting.aspx
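On the specific point of comparing the two setups, here is a minimal T-SQL sketch of the kind of database settings I would read on both servers and diff; the database name is a placeholder and the column list covers common suspects (READ_COMMITTED_SNAPSHOT among them), not an exhaustive inventory.

    -- Run on both the Azure database and the local SQL Server database, then compare the output.
    SELECT name,
           compatibility_level,
           snapshot_isolation_state_desc,
           is_read_committed_snapshot_on,
           is_auto_create_stats_on,
           is_auto_update_stats_on,
           recovery_model_desc
    FROM sys.databases
    WHERE name = 'MyDb';   -- placeholder database name

    -- To align the local database with Azure's defaults for these two settings:
    ALTER DATABASE [MyDb] SET ALLOW_SNAPSHOT_ISOLATION ON;
    ALTER DATABASE [MyDb] SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;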
I am writing some new SQL queries and want to check the query plans that the Oracle query optimiser would come up with in production.
My development database doesn't have anything like the data volumes of the production database.
How can I export database statistics from a production database and re-import them into a development database? I don't have access to the production database, so I can't simply generate explain plans on production without going through a third party hosting organisation. This is painful. So I want a local database which is in some way representative of production on which I can try out different things.
Also, this is for a legacy application. I'd like to "improve" the schema by adding appropriate indexes, constraints, etc.
I need to do this in my development database first, before rolling out to test and production.
If I add an index and re-generate statistics in development, then the statistics will be generated around the development data volumes, which makes it difficult to assess the impact of my changes on production.
Does anyone have any tips on how to deal with this? Or is it just a case of fixing unexpected behaviour once we've discovered it on production? I do have a staging database with production volumes, but again I have to go through a third party to run queries against this, which is painful. So I'm looking for ways to cut out the middle man as much as possible.
All this is using Oracle 9i.
Thanks.
See the documentation for the DBMS_STATS.EXPORT_SCHEMA_STATS and DBMS_STATS.IMPORT_SCHEMA_STATS procedures. You'll have to have someone with the necessary privileges do the export in the production database for you if you don't have access. If your development hardware is significantly different from your production hardware, you should also export/import the system statistics with the EXPORT_SYSTEM_STATS and IMPORT_SYSTEM_STATS procedures.
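A minimal sketch of the round trip; the schema owner and statistics-table names are placeholders, and moving the statistics table between databases is assumed to be done with the classic exp/imp utilities available on 9i.

    -- On production (run by someone with the necessary privileges):
    -- create a holding table, then export schema and system statistics into it.
    BEGIN
      DBMS_STATS.CREATE_STAT_TABLE(ownname => 'APP_OWNER', stattab => 'PROD_STATS');
      DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'APP_OWNER', stattab => 'PROD_STATS', statown => 'APP_OWNER');
      DBMS_STATS.EXPORT_SYSTEM_STATS(stattab => 'PROD_STATS', statown => 'APP_OWNER');
    END;
    /

    -- Move the APP_OWNER.PROD_STATS table to development (e.g. exp/imp), then on development:
    -- import the statistics so the optimiser sees production-like numbers.
    BEGIN
      DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => 'APP_OWNER', stattab => 'PROD_STATS', statown => 'APP_OWNER');
      DBMS_STATS.IMPORT_SYSTEM_STATS(stattab => 'PROD_STATS', statown => 'APP_OWNER');
    END;
    /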
Remember to turn off any jobs in the development database that recalculate statistics after you do this.
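If that refresh is scheduled through DBMS_JOB (the usual mechanism on 9i), the sketch below shows one way to find and disable such jobs; the LIKE filter and the job number are only illustrations.

    -- Find candidate jobs that gather statistics, then mark them broken so they stop running.
    SELECT job, what FROM dba_jobs WHERE UPPER(what) LIKE '%GATHER%STATS%';

    BEGIN
      DBMS_JOB.BROKEN(job => 123, broken => TRUE);  -- 123 is a placeholder job number from the query above
    END;
    /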