I am new to Visual Studio and am in the process of building my first tabular model in it; I have been using Power BI for a year or so.
I have an Azure SQL database set up with data tables. I can access the database fine through SSMS, Power BI, etc. I can also access the data fine when using the data model on the workspace server rather than an integrated workspace in VS.
When the model is in integrated workspace mode and I use "Get Data" through Power Query in my tabular project in Visual Studio, I can log in to the database fine, preview the various data tables, and open them in Power Query / transform them, etc. However, when I try to import the data, it gets past stage one of the processing and then I get the following error:
"Failed to save modifications to the server. Error returned: 'OLE DB
or ODBC error: We're sorry, an error occurred during evaluation.'"
This leaves me with a table that has column headers but no actual rows of data.
When I revert the data model back to the workspace server (i.e. Integrated Workspace set to false), I can upload and refresh the data.
So I was happily building my model on the workspace server. However, when I came to adding roles and users to those roles, I could not use the workspace server, or I would get an ID error, as I am using Azure AD; see below:
An error occurred while opening the model on the workspace database.
Reason: Failed to save modifications to the server. Error returned:
'ID cannot be specified for Azure Analysis Services role member:
'member#domain'.
This ID error is fixed when I use an integrated workspace, as recommended, but then I can't load the data in.
So I am stuck between:
an integrated workspace with no data, and
a workspace server without the ability to add Azure AD IDs.
Any help with either issue would be much appreciated.
Thanks,
Laurence
I work with VS 2017 and experience exactly the same issue. I use SSDT compatibility level 1400.
It appears that simply running Visual Studio as administrator solves the issue with the integrated workspace.
I found this solution by accident; I had seen earlier that some people had fixed an inability to connect to the workspace server that occurred when running without admin rights.
Related
This is specifically about my connection to an Azure SQL Server database after upgrading to Azure Data Studio v1.41.
When I connect to my server/database and expand the Tables section in the left-side menu, none of my tables appear. (My views and stored procedures are not visible either.)
They do still exist. They appear when the Manage option is selected, and I can access the data.
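For what it's worth, querying the catalog views directly from a query window also confirms the objects are still there (these are standard SQL Server system views, not anything specific to this bug):

-- Lists every user table with its schema; views and stored procedures
-- have equivalent entries in sys.views and sys.procedures.
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
ORDER BY s.name, t.name;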
This means I am without access to Table Design for existing tables and other quick actions that would appear there.
Please help, it was all there when I had v1.40 installed earlier!
I have tried logging in with different accounts (server admin, Azure AD, and another user account), none of which showed the tables under the server connection. I have also restarted, refreshed, and tried the Insider v1.42 build, all with the same outcome.
This has been fixed in the latest release of Azure Data Studio 1.41.1:
Bug fixes in 1.41.1:
Connection: Fixed a bug causing incorrect Azure account tenant selection when connecting to a server through the Azure view.
Object Explorer: Fixed a regression which caused Object Explorer to not show database objects for the Azure SQL DB Basic SLO.
I can confirm, as well, that the regression no longer occurs.
Using SQL Server Analysis Services 2019 running in Tabular mode, I get this error every time I open an existing tabular project solution in Visual Studio 2017 (version 15.9.3, though I don't think the version is the issue). Even creating a new Analysis Services tabular project, closing it, and opening it a second time causes the same error.
An error occurred while opening the model on the workspace database.
Reason: The operation cannot be executed since the database with the
name of 'Data Warehouse Tabular_5a21b9d1-2c2e-43e3-9174-981ccddf6f66',
ID of 'Data Warehouse Tabular_5a21b9d1-2c2e-43e3-9174-981ccddf6f66'
already exists in the detached state in folder '\\?\C:\Program
Files\Microsoft SQL Server\MSAS15.MSSQLSERVER\OLAP\Data\Data Warehouse
Tabular_5a21b9d1-2c2e-43e3-9174-981ccddf6f66.0.db'. Either attach the
database or delete the folder and retry the operation.
This error is described very well here: https://blogs.msdn.microsoft.com/jason_howell/2013/07/22/cannot-reopen-an-analysis-services-tabular-project-the-second-time-error-database-already-exists-in-the-detached-state/
Unfortunately, implementing the suggested fix of making sure that my DataDir was referenced in my AllowedBrowsingFolders setting did not make a difference. Here are my current settings:
Running SystemGetSubdirs 'C:\Program Files\Microsoft SQL Server\MSAS15.MSSQLSERVER\OLAP\Data\' in an MDX connection returns no results. However, running SystemGetSubdirs on the parent OLAP folder does return three of the six folders in that directory (including the Data folder).
I have forced the Data folder to inherit the permissions of the OLAP folder and forced those permissions on all child objects, and I have tried giving Full Control on the Data folder to the 'Everyone' user, to my user, and to the SSAS service account. I have also tried creating a new data folder on the root of my C: drive and holding the databases there, but none of this has made a difference. I've restarted the SSAS service after these changes.
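In case anyone wants to reproduce the check: SystemGetSubdirs is an SSAS system stored procedure you can CALL from an MDX query window in SSMS, and it only returns subfolders the service account can actually browse, which is why an empty result for the Data folder pointed at a permissions problem:

-- Run in an MDX query window in SSMS, connected to the SSAS instance.
-- An empty result means the service account cannot browse the folder
-- (or the folder is not covered by the AllowedBrowsingFolders setting).
CALL SystemGetSubdirs('C:\Program Files\Microsoft SQL Server\MSAS15.MSSQLSERVER\OLAP\Data\')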
I created a schema in Schema Workbench and published it with no errors, but when I log in to the BI Server with the standard admin user and choose New -> JPivot, it displays the name of the schema I created but does not display the cube. For reference, the error I get from catalina.out is:
17:11:45,174 ERROR [PentahoDataSourceResolver] PentahoXmlaServlet.ERROR_0002 - IDatasourceService.UNABLE_TO_INSTANTIATE_OBJECT
org.pentaho.platform.api.data.DBDatasourceServiceException: javax.naming.NameNotFoundException: Name [Esquema Salario] is not bound in this Context. Unable to find [Esquema Salario].
"Name [Esquema Salario] is not bound in this Context" errors usually appear if you use a JNDI name that is not defined on your system. So I assume this is the name of the datasource that you reference while publishing Mondrian schema files to the BI server.
The XML file with the Mondrian schema definition generated by Schema Workbench does not contain any information about how to connect to the database. So you need to specify these details when you upload your schema file to the BI server (this is done in step 4 below).
But first you have to create the connection itself (steps 1-2):
1. Create a new JDBC datasource:
2. Define the connection parameters:
3. If cubes still don't appear after these steps, you may republish your cube: follow the same steps as in step 1, but select "Analysis" instead of "JDBC" at the end.
4. Upload the XML file generated by Schema Workbench and select the datasource you created in step 2.
If the cube still does not appear, check your log again. If you see the same "Name is not bound" error, you may try restarting your BI server application (new connections usually get recognized immediately, but if you had a connection with the same name before, you might need to restart Tomcat).
If that does not work, then once again check the log files; I'd guess you'll see a different error in that case.
I had the same problem as the OP (a blank screen when clicking New View) with Pentaho BI Server 7.1 (the latest at the moment) and even with version 6.0, the Pivot4J SNAPSHOT 1.0 plugin (latest as of today), and Schema Workbench 3.14 (latest as of today).
In line with the OP, my catalina.out log was also spitting out "Name [DatasourceName] is not bound in this Context. Unable to find [DatasourceName]."
After several rounds of trial and error, I noticed the problem showed up when I checked the "Register the XMLA Data Source" option when publishing the schema from Schema Workbench. So to fix the problem I just unchecked it before publishing.
Another way to fix this is to go to the Manage Data Sources option on the BI server, choose Import Analysis, select the schema created by Schema Workbench, and manually set the datasource parameter EnableXmla to false before saving the changes. The schema should then show up when clicking Create New > Pivot4J View.
I have deployed my SSAS solution to production. On the production server, I want to process my cube, but when I right-click on the SSAS cube and choose "Process", I receive the following error:
The 'Database' with 'ID' = 'XXX' doesn't exist in the collection.
Has anyone encountered this error in Microsoft SSAS? If so, can anyone tell me what to do to resolve it?
Check what the ID of the database is by right-clicking on the database and selecting Properties. Check the equivalent in Visual Studio and confirm it matches. If you have renamed the Analysis Services project (maybe for a backup), the ID of the database does not get renamed and sometimes gets messed up in the XML file.
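As a quick sanity check, a DMV query from an MDX query window in SSMS lists the databases the server actually has, which you can compare against the name and ID your project deploys to:

-- Run in an MDX query window in SSMS against the SSAS instance.
-- Compare the returned catalog names with the deployment target
-- configured in your Visual Studio project properties.
SELECT [CATALOG_NAME], [DATE_MODIFIED]
FROM $SYSTEM.DBSCHEMA_CATALOGS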
Try re-processing the dimensions on their own and then the cube. Failing that, if this is the first process (i.e. the cube is not live), try deleting the database and redeploying from Visual Studio.
You can redeploy the OLAP database from SSDT with a different name; before you do, you just need to change the database name and set the Processing Option to Full. You can set these options under the project's Configuration Properties -> Deployment.
I had the exact same problem. I solved it by editing the roles in my SSAS database:
Then check the boxes depending on what you want your job to do:
After that, you can process your cubes with your SSIS jobs.
It can be either roles or Kerberos. I got a similar error when Kerberos authentication wasn't set up on the cube server to interact with the database server.
Hi, my company is deciding to switch its existing application to the Azure platform (only the SQL part), so we need to upload our DB from local to the cloud. For migration I came across various tools:
1. Cerebrata's tools
2. SQL Azure Migration Wizard
3. Microsoft SQL Data Sync
4. Conventional script approach via Management Studio
But all of the above tools turned out to have limitations; a user cannot work flawlessly with any of them.
In Cerebrata's tool, the main drawback was its fields for Application User Name and Application Key, which my admin hasn't shared. It also requires manual mapping of fields between Azure and local.
SQL Azure Migration Wizard generates scripts and executes them too, but with lots of errors. I was using version 2.1. It is also very slow; it seems to be a replica of SQL Server Management Studio.
SQL Data Sync: I found it cool since it's an MS product, but it has a limitation too: it only connects to a local SQL Server using Windows Authentication, or you need to explicitly allow the required access. Even after allowing it, I got a SQL Azure provisioning error while syncing.
SQL Server Management Studio: This is the easiest way but requires a lot of manual work before the actual migration. What I did was generate a script of the entire DB (almost 101,123 lines of code for a single DB) and try to execute it on Azure. The very first time, I faced some keyword mismatch errors. I then removed, after each primary key declaration, all the WITH (PAD_INDEX = OFF, ...) options (or something similar) and the ON [PRIMARY] clauses (see the sketch below), and executed again, but still got errors on SET IDENTITY_INSERT ON. After a lot of hard work removing unwanted lines, and waiting more than two hours for the script to complete remotely, I got nothing but errors, errors, and errors.
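For context, here is a sketch of the kind of script the SSMS generator produces, with the clauses that had to be stripped marked; the table and data are made up purely for illustration:

-- Hypothetical table as scripted by SSMS from an on-premises database.
-- SQL Azure at the time rejected the physical-storage clauses marked
-- below, so they had to be removed before the script would run.
CREATE TABLE [dbo].[Orders](
    [OrderId] [int] IDENTITY(1,1) NOT NULL,
    [CustomerName] [nvarchar](100) NOT NULL,
 CONSTRAINT [PK_Orders] PRIMARY KEY CLUSTERED ([OrderId] ASC)
 WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
       IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON,
       ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]   -- remove the WITH (...) options and ON [PRIMARY]
) ON [PRIMARY]                               -- remove ON [PRIMARY]
GO

-- The scripted data inserts also toggle identity insertion per table;
-- these statements were another source of errors during the migration.
SET IDENTITY_INSERT [dbo].[Orders] ON
INSERT INTO [dbo].[Orders] ([OrderId], [CustomerName]) VALUES (1, N'Contoso')
SET IDENTITY_INSERT [dbo].[Orders] OFF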
So please suggest any good alternative to the tools above, or tell me if I am missing something and can do more with them.
Thanks
Amit Ranjan
I've faced a similar problem recently, running through the options you've listed.
You might give the Red Gate beta for Azure a try (free for a few months). I found their tools to be quite good for SQL schema and data replication.
I never tried the Azure build myself, though (I had migrated the tables manually by the time I was told about the offer).