I am new to SSAS, and there is probably a simple answer to my problem, but I can't figure it out. We had a ransomware attack on our server that ruined all the data.
My colleague and I restored the data on the server and tried to reprocess all the dimensions and cubes, but we kept getting: A connection could not be made to the data source with the DataSourceID of 'xyz', Name of 'XYZ Data'.
We were also getting an error stating:
Internal error: An unexpected error occurred (file 'pfcre.cpp', line 1234, function 'PFCREngine::SelectCartridge').
We tested the connection by right-clicking the data source in SSMS and choosing Properties > Connection String > Test Connection; it connected to the data source successfully.
We then restored the database on our local machine and tried processing it there. This time all the dimensions and cubes processed successfully. So we took a backup of the local database and restored it on the server. Again, the database on the server would not process, even when an admin tried; we still get the 'A connection could not be made to the data source with the DataSourceID of 'xyz', Name of 'XYZ Data'' error.
So, for now, we have to process the database manually on our local machine every day, back it up, and restore it on the server so that users can run reports against fresh data.
Your help will be highly appreciated. Thanks.
We have Clojure code that runs on Databricks and fetches a large amount of data from an Azure SQL Database.
Recently, we have been getting frequent connection timeout errors.
I am new to Clojure, so I don't understand why this error occurs. Sometimes the code runs perfectly; other times it fails.
We have tried different connection parameters such as "connectionretry" and "logintimeout", but they didn't help.
com.microsoft.sqlserver.jdbc.SQLServerException: Connection timed out (Read failed)
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDriverError(SQLServerException.java:234)
at com.microsoft.sqlserver.jdbc.SimpleInputStream.getBytes(SimpleInputStream.java:352)
at com.microsoft.sqlserver.jdbc.DDC.convertStreamToObject(DDC.java:796)
at com.microsoft.sqlserver.jdbc.ServerDTVImpl.getValue(dtv.java:3777)
at com.microsoft.sqlserver.jdbc.DTV.getValue(dtv.java:247)
at com.microsoft.sqlserver.jdbc.Column.getValue(Column.java:190)
at com.microsoft.sqlserver.jdbc.SQLServerResultSet.getValue(SQLServerResultSet.java:2054)
at com.microsoft.sqlserver.jdbc.SQLServerResultSet.getValue(SQLServerResultSet.java:2040)
at com.microsoft.sqlserver.jdbc.SQLServerResultSet.getObject(SQLServerResultSet.java:2372)
at clojure.java.jdbc$dft_read_columns$fn__226.invoke(jdbc.clj:457)
at clojure.core$mapv$fn__6953.invoke(core.clj:6627)
at clojure.lang.LongRange.reduce(LongRange.java:233)
at clojure.core$reduce.invokeStatic(core.clj:6544)
at clojure.core$mapv.invokeStatic(core.clj:6618)
at clojure.core$mapv.invoke(core.clj:6618)
at clojure.java.jdbc$dft_read_columns.invokeStatic(jdbc.clj:457)
at clojure.java.jdbc$dft_read_columns.invoke(jdbc.clj:453)
at clojure.java.jdbc$result_set_seq$row_values__233.invoke(jdbc.clj:483)
at clojure.java.jdbc$result_set_seq$thisfn__235.invoke(jdbc.clj:493)
at clojure.java.jdbc$result_set_seq$thisfn__235$fn__236.invoke(jdbc.clj:493)
at clojure.lang.LazySeq.sval(LazySeq.java:40)
at clojure.lang.LazySeq.seq(LazySeq.java:49)
at clojure.lang.RT.seq(RT.java:521)
at clojure.core$seq__4357.invokeStatic(core.clj:137)
at clojure.core$map$fn__4785.invoke(core.clj:2637)
at clojure.lang.LazySeq.sval(LazySeq.java:40)
at clojure.lang.LazySeq.seq(LazySeq.java:49)
at clojure.lang.Cons.next(Cons.java:39)
at clojure.lang.RT.next(RT.java:688)
at clojure.core$next__4341.invokeStatic(core.clj:64)
at clojure.core$dorun.invokeStatic(core.clj:3033)
at clojure.core$doall.invokeStatic(core.clj:3039)
at clojure.core$doall.invoke(core.clj:3039)
at clojure.java.jdbc$query$fn__340$fn__341.invoke(jdbc.clj:1007)
at clojure.java.jdbc$query$fn__340.invoke(jdbc.clj:1006)
The issue isn't with the Azure SQL Database JDBC driver itself but with the way you are establishing connections and running queries.
Make sure you are closing each connection properly after every commit, and that you leave sufficient time between connections.
Also, make sure the large data you are fetching is queried efficiently and that you have the bandwidth to move it; a single long-running read of a huge result set makes timeouts much more likely.
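As an illustration of that last point (a sketch only; the table and column names here are hypothetical), paging the fetch into fixed-size chunks keyed on an indexed column keeps each individual read short:

-- Hypothetical example: read dbo.Sales in chunks so no single
-- query runs long enough to hit the connection's read timeout.
DECLARE @LastId BIGINT = 0;     -- highest key fetched so far
DECLARE @PageSize INT = 50000;  -- tune to your network bandwidth

SELECT TOP (@PageSize)
       SaleId, CustomerId, Amount, SaleDate
FROM   dbo.Sales
WHERE  SaleId > @LastId         -- keyset paging: an index seek, not a scan
ORDER  BY SaleId;

-- The client records the last SaleId it received and reissues the
-- query with that value until a page comes back empty.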
You can refer to this official documentation for troubleshooting:
Troubleshooting connectivity issues and other errors with Azure SQL Database and Azure SQL Managed Instance
Resolving connectivity errors to SQL Server
I tried to insert a table from a very large flat file. After a few hours, when it still hadn't finished (and had eaten all my drive space), I cancelled the query (which also took a long time).
Since then I can't reach any of that database's tables. When I try to expand the Tables folder in the Object Explorer window, I get this error message:
(ERROR 1222) Lock request time out period exceeded. (.Net SqlClient Data Provider).
I also tried restoring this DB from backup:
Restore of database 'WorkTablesDB' failed.
(Microsoft.SqlServer.Management.RelationalEngineTasks)
ADDITIONAL INFORMATION:
System.Data.SqlClient.SqlError: Exclusive access could not be obtained because the database is in use. (Microsoft.SqlServer.SmoExtended)
Is there any way to get this DB working again?
The solution to this problem is to close SSMS. It will prompt you to either commit or discard any open transactions.
Choose Yes to commit all open transactions, then launch SSMS again; it will work.
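If closing SSMS doesn't release the locks, a common alternative (a suggestion beyond the original answer) is to force the database into single-user mode, which rolls back open transactions and gives the restore the exclusive access it was asking for:

-- Roll back open transactions and take exclusive access.
ALTER DATABASE WorkTablesDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

-- The restore can now obtain exclusive access (backup path is hypothetical).
RESTORE DATABASE WorkTablesDB
    FROM DISK = N'C:\Backups\WorkTablesDB.bak'
    WITH REPLACE;

-- Return the database to normal access.
ALTER DATABASE WorkTablesDB SET MULTI_USER;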
We have been developing a new SSAS Tabular (1400) model, initially doing the PoC by loading the data directly from SQL Server (not using expressions). Once we had an idea of what we wanted, we refactored and did the data imports with Expressions to allow further Power Query (M) processing. At first, all seemed fine. Along the way we started to mash up other data from an ODBC datasource and didn't experience any issues during the process.
This morning we opened the project to make some modifications before running a new load of data and clicking "process tables" triggered an authentication dialog for the SQL Server data source again and then produced this error for all tables we were importing.
We have tried removing the data source connection string and adding it again. We removed all expressions and tried to start from scratch. No matter what we do, the Expression preview window WILL show the data preview, but attempting to import it into the model then fails with this error.
We've tried this on multiple workstations to rule out something in one developer's settings that might have changed.
Any thoughts?
After reading this error message again, it struck me that it mentions an ODBC error. The connection to the SQL Server database shouldn't be ODBC so that got me wondering if something was wrong with that connection instead. Sure enough, once I re-authenticated the ODBC data source, the data load worked as expected.
Note that we hadn't actually created a table in the model from the ODBC source yet. Regardless, processing the tables appears to validate every connection, and rather than clearly saying that something was wrong with this one data source (even though no data was being pulled from it into tables), the error dialog made it appear that the import was failing for the SQL Server connection, which is simply wrong.
I have a SQL Server Reporting Services production environment. I need to duplicate this environment on a totally different machine, including the source data; let's call it the Dev environment. I have backed up and restored both the source database (sourceDataDB) and the report databases (ReportServer and ReportServerTempDB) to this new machine, and re-configured the new Reporting Services instance to point to the new report database. Everything works, except that when I run a report, it pulls data from the original source database instance instead of the newly created one. Of course, I can manually modify the data source information from Report Manager on the new report server. The challenge is that every time ReportServer and ReportServerTempDB are refreshed from the production database, the modified data source gets replaced with the one from the production environment.
I wonder if there is a way to automate modifying the data source information after each database refresh, either from the database end or from Report Manager. I only have one data source, shared by all the reports. This is SQL Server 2008 R2.
Thanks in advance.
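One approach worth testing (a suggestion only, and heavily caveated: editing the ReportServer catalog directly is unsupported, and the connection string is stored encrypted, so this can only work if the Dev instance keeps the same Reporting Services encryption key as production) is to snapshot the corrected shared data source row once, store it outside ReportServer, and re-apply it after every refresh. The utility database and data source names below are hypothetical.

-- One-time, after manually fixing the data source in Report Manager:
-- save the corrected row somewhere the refresh won't overwrite.
SELECT *
INTO   DevUtil.dbo.DataSource_DevSnapshot      -- DevUtil is a utility DB
FROM   ReportServer.dbo.DataSource
WHERE  Name = N'SharedDataSource';             -- hypothetical name

-- After each refresh from production: put the Dev definition back.
UPDATE ds
SET    ds.ConnectionString = snap.ConnectionString
FROM   ReportServer.dbo.DataSource AS ds
JOIN   DevUtil.dbo.DataSource_DevSnapshot AS snap
       ON snap.ItemID = ds.ItemID;

This could be wrapped in a SQL Agent job step that runs right after each refresh.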
Due to an array disk failure in an HP server, SQL Server 2000 is failing to start, giving a WMI provider error. To avoid any risk, I just want to take a backup without the server being started, by saving the primary data file (MDF), the transaction log file (LDF), etc.
If there is any way to do this, please help me.
You should just be able to copy these files and attach them to a new instance on a different server. Make sure you reference the transaction log when you attach to the new server; there will be a place in the attach dialog to specify the location of both the MDF and LDF files. Any transactions that were incomplete when the server went down will be rolled back, which is usually what you want.
You can usually use sp_attach_db to attach your MDF/LDF files. These are not backups but they are the next best thing if that's all you have. So copy the MDF and LDF files somewhere and then sort out the WMI issue.
A WMI failure is not that catastrophic. It is a layer that allows external processes to inspect and alter SQL Server (among other things). If a WMI failure is the worst thing that has happened, then your database will not be affected.
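For reference, a minimal sketch of the attach described above (file names and paths are hypothetical; sp_attach_db is the standard route on SQL Server 2000, while later versions prefer CREATE DATABASE ... FOR ATTACH):

-- Attach the copied files on the new instance (SQL Server 2000 syntax).
EXEC sp_attach_db
    @dbname    = N'MyDatabase',
    @filename1 = N'D:\Data\MyDatabase.mdf',      -- primary data file
    @filename2 = N'D:\Data\MyDatabase_log.ldf';  -- transaction log

-- Equivalent on later versions of SQL Server:
-- CREATE DATABASE MyDatabase
--     ON (FILENAME = N'D:\Data\MyDatabase.mdf'),
--        (FILENAME = N'D:\Data\MyDatabase_log.ldf')
--     FOR ATTACH;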