AfO Cast from Direct Variable to Characteristic Member Variable - hana

I have a HANA database where my data is stored in tables. Using HANA DB Studio I create calculation views. These views are then consumed by users through the Excel plug-in Analysis for Office.
One of my users gets the following error when trying to load the query:
It is in German, but essentially it says that an object of type "INADirectVariable" cannot be converted into type "INACharacteristicMemberVariable".
I have no idea what these types are and have not been able to find any literature regarding the topic.
I would really appreciate it if someone could shed some light on this topic.
Thanks

Automate SSAS Tabular Model creation via script

I have a data catalog where people can browse the DWH tables. People can select tables and send a request to the IT team, in the form of a table listing all the table names and column names the person wants.
In the current situation, my team has to manually create an SSAS Tabular Model with the requested tables in Visual Studio (create a new model, connect to the DWH, select the requested tables and columns, assign user access, and deploy the model to Analysis Services).
My question: does anybody know a way to automate this process? Is it possible to create Tabular Models from scripts automatically? I've come across the Tabular Model Scripting Language (TMSL), but from the documentation I'm unsure whether it's possible to create NEW tabular models; it seems only possible to script and change already existing models.
Any suggestion or guidance will be appreciated; thanks in advance.
Michael Kovalsky has a great solution for this on GitHub. See https://github.com/m-kovalsky/ModelAutoBuild. You start with an Excel template and then use scripting in the Tabular Editor tool to create the model. It may not have all the elements you need in your models, but it is a great starting point.
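If you want to go fully programmatic, the Tabular Object Model (TOM, in the Microsoft.AnalysisServices.Tabular library) can create brand-new databases, not just modify existing ones. A minimal sketch, assuming a local dev instance and placeholder names; a real deployment would also need a data source and partitions, omitted here for brevity:

    using Microsoft.AnalysisServices.Tabular;  // TOM: Server, Database, Model, Table

    class CreateTabularModel
    {
        static void Main()
        {
            var server = new Server();
            server.Connect("Data Source=localhost"); // placeholder instance

            // A brand-new database holding an empty tabular model.
            var database = new Database
            {
                Name = "RequestedModel",              // placeholder name
                CompatibilityLevel = 1500,
                Model = new Model()
            };

            // In practice, loop over the requested table/column list here.
            var table = new Table { Name = "DimCustomer" };
            table.Columns.Add(new DataColumn
            {
                Name = "CustomerName",
                DataType = DataType.String,
                SourceColumn = "CustomerName"
            });
            database.Model.Tables.Add(table);

            // Adding the database to the server collection creates it;
            // Update pushes the full definition to the instance.
            server.Databases.Add(database);
            database.Update(Microsoft.AnalysisServices.UpdateOptions.ExpandFull);

            server.Disconnect();
        }
    }

Tabular Editor's scripting (which ModelAutoBuild relies on) sits on top of this same object model, so either route ends up exercising the same API.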

Bypass RFC_READ_TABLE limitations

For my C# application I need to access data from SAP tables based on user selections. In this context I made use of the .NET Connector + RFC_READ_TABLE to read the data from a single table, and it works. After further review I found 3 issues with this approach:
1) RFC_READ_TABLE is not a supported RFC from SAP, so most experts agree it should not be used in production.
2) RFC_READ_TABLE does not support table joins.
3) A SELECT * query does not work in most cases, as a DATA_BUFFER_EXCEEDED error is thrown.
I did some research on the ABAP side and did not find any alternative API/RFC/BAPI that can accept an SQL statement as an input argument at runtime.
I need something like a DataTable in C#.
1) RFC_READ_TABLE is not a supported RFC from SAP ==>> This is used by millions of customers and within SAP's own development all over the place. This IS the official API for generic table access. Use it and worry not.
2) RFC_READ_TABLE does not support table joins ==>> You can always join in your own application. If you don't want to do that or cannot (for performance reasons, say), then ask your ABAP contact to prepare an RFC-enabled function module for you. That has nothing to do with something being a BAPI; BAPI means something completely different. BAPIs can be very difficult, that is correct, but an RFC-enabled query function is not a BAPI. BAPIs often happen to be RFC-enabled, but there is no link between the two.
3) SELECT * does not work in most cases, as a DATA_BUFFER_EXCEEDED error is thrown ==>> With all due respect, you are not supposed to be reading everything anyway; do your research first and request only those fields you really need. Unless this is some sort of generic BI tool, you don't need all fields. I can tell you from experience.
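For illustration, a minimal sketch of such a call through the SAP .NET Connector (NCo 3.0), requesting only the needed fields and copying the result into a DataTable; the destination name, table, and field names are placeholders:

    using System;
    using System.Data;
    using SAP.Middleware.Connector;

    class RfcReadTableDemo
    {
        static void Main()
        {
            // "MYSAP" must be a destination configured in app.config or via
            // a registered IDestinationConfiguration (placeholder name).
            RfcDestination destination = RfcDestinationManager.GetDestination("MYSAP");
            IRfcFunction fn = destination.Repository.CreateFunction("RFC_READ_TABLE");

            fn.SetValue("QUERY_TABLE", "KNA1"); // example: customer master
            fn.SetValue("DELIMITER", "|");

            // Request only the fields you need to stay under the 512-byte
            // row limit and avoid DATA_BUFFER_EXCEEDED.
            IRfcTable fields = fn.GetTable("FIELDS");
            foreach (var name in new[] { "KUNNR", "NAME1" })
            {
                fields.Append();
                fields.SetValue("FIELDNAME", name);
            }

            // Optional WHERE clause; each OPTIONS row holds up to 72 chars.
            IRfcTable options = fn.GetTable("OPTIONS");
            options.Append();
            options.SetValue("TEXT", "LAND1 = 'DE'");

            fn.Invoke(destination);

            // Copy the delimited rows into the DataTable you asked about.
            var result = new DataTable();
            result.Columns.Add("KUNNR");
            result.Columns.Add("NAME1");
            foreach (IRfcStructure row in fn.GetTable("DATA"))
                result.Rows.Add(row.GetString("WA").Split('|'));

            Console.WriteLine("Rows read: " + result.Rows.Count);
        }
    }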
cheers Otto
Allowing a .NET client app to send an unchecked SQL statement is a bad idea both security- and performance-wise.
The standard way would be either to create remote-enabled function modules or to use Gateway to expose the data as OData.
http://scn.sap.com/community/gateway
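On the consumer side, an OData service exposed through Gateway is plain HTTP(S), so the C# client needs nothing SAP-specific. A sketch with a hypothetical service name and entity set (only the /sap/opu/odata/sap/ path prefix is the usual Gateway convention):

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Threading.Tasks;

    class ODataDemo
    {
        static async Task Main()
        {
            using (var client = new HttpClient())
            {
                client.DefaultRequestHeaders.Accept.Add(
                    new MediaTypeWithQualityHeaderValue("application/json"));

                // Hypothetical service; $select and $filter play the role of
                // the FIELDS and OPTIONS tables of RFC_READ_TABLE.
                string url = "https://sapgw.example.com/sap/opu/odata/sap/ZCUSTOMER_SRV/"
                           + "Customers?$select=Kunnr,Name1&$filter=Land1 eq 'DE'";

                string json = await client.GetStringAsync(url);
                Console.WriteLine(json); // parse with your JSON library of choice
            }
        }
    }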
An alternative for field metadata is the DDIF_FIELDINFO_GET FM. Please use this to get the field details of a table.
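If it helps, DDIF_FIELDINFO_GET returns the data dictionary metadata of a table, which is handy for building the FIELDS list before calling RFC_READ_TABLE. A hedged NCo sketch (same placeholder destination as above; I'm going from memory on the parameter names, so verify them in SE37):

    using System;
    using SAP.Middleware.Connector;

    class FieldInfoDemo
    {
        static void Main()
        {
            RfcDestination destination = RfcDestinationManager.GetDestination("MYSAP");
            IRfcFunction fn = destination.Repository.CreateFunction("DDIF_FIELDINFO_GET");

            fn.SetValue("TABNAME", "KNA1"); // table whose fields we want
            fn.Invoke(destination);

            // DFIES_TAB holds one row per field: name, data type, length.
            foreach (IRfcStructure field in fn.GetTable("DFIES_TAB"))
                Console.WriteLine("{0} {1}({2})",
                    field.GetString("FIELDNAME"),
                    field.GetString("DATATYPE"),
                    field.GetString("LENG"));
        }
    }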
As of now there is no join in RFC_READ_TABLE. However, the steps you could follow (admittedly a lengthy approach) are:
1) Create a batch file to pass your input SAP tables as parameters to your code.
2) Extract the relevant data by putting filters (which is possible) on RFC_READ_TABLE or RFC_GET_TABLE_ENTRIES to get the structure and the data separately; RFC_READ_TABLE has a 512-byte limitation, which can be bypassed by using the RFC_GET_TABLE_ENTRIES FM. Note: the data from the second FM comes back as one long string, which you need to split based on the structure you extracted.
3) You should now have two outputs, one for each table.
4) Upload them to MS Access via a simple batch job, which should be fully automated.
5) Once uploaded to MS Access, write your join to connect the two (or do the join directly in C#, as sketched below).
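For completeness, if you would rather skip the MS Access detour, the two extracts can be joined inside the C# application once they sit in DataTables. A sketch with made-up table and key names (AsEnumerable needs a reference to System.Data.DataSetExtensions):

    using System;
    using System.Data;
    using System.Linq;

    class ClientSideJoin
    {
        static void Main()
        {
            // Pretend these were filled from two RFC_READ_TABLE calls.
            var orders = new DataTable();
            orders.Columns.Add("VBELN");
            orders.Columns.Add("KUNNR");
            orders.Rows.Add("0001", "C100");

            var customers = new DataTable();
            customers.Columns.Add("KUNNR");
            customers.Columns.Add("NAME1");
            customers.Rows.Add("C100", "ACME Corp");

            // Inner join on the customer number, like a SQL join.
            var joined = from o in orders.AsEnumerable()
                         join c in customers.AsEnumerable()
                             on o.Field<string>("KUNNR") equals c.Field<string>("KUNNR")
                         select new
                         {
                             Order = o.Field<string>("VBELN"),
                             Customer = c.Field<string>("NAME1")
                         };

            foreach (var row in joined)
                Console.WriteLine(row.Order + " -> " + row.Customer);
        }
    }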
Check out my video, where I have bypassed the 512-byte limit.
I have 2 more videos up on YouTube.
Hope this helps.
Thanks
Ram.S

how to insert data in master data services programmatically

I'm trying out Microsoft Master Data Services and would like to add data to the database programmatically. I'm starting to understand the model/entity/member structure, but I'm not sure yet. If you have a good explanation of this structure, please share.
Say somebody adds a new employee in an ERP system and I would like to send that to MDS. How would I do that? Is the data I want to add a new member? Looking at the following information (http://technet.microsoft.com/en-us/library/hh230995), it seems the only way to import data is through entities?
Thanks in advance for any useful information about this!
Let's start with the basics.
Entities in Master Data Services (MDS) are roughly analogous to tables in a regular database.
Every entity must live in a model.
A model can contain any number of entities.
The Metadata* methods you see on that page can be used to create, read and update models and entities. Once you have modeled your ERP tables as an MDS model, you can use the EntityMembersCreate API (with the relevant model/entity information) to create a member (roughly analogous to a row in a table). You can use EntityMembersUpdate to update members and EntityMembersDelete to delete them.
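To make that concrete, here is a hedged sketch of a single EntityMembersCreate call from C#, modeled on the MSDN samples; the endpoint URL, model and entity names, and the exact shape of the generated WCF proxy are assumptions:

    using System.Collections.ObjectModel;
    using System.ServiceModel;
    using Microsoft.MasterDataServices; // namespace of the generated service reference (assumed)

    class MdsCreateMember
    {
        static void Main()
        {
            // Hypothetical endpoint; point this at your MDS service.svc.
            var client = new ServiceClient(
                new BasicHttpBinding(),
                new EndpointAddress("http://mdsserver/MDS/service/service.svc"));

            // Identify the target model/entity and describe the new member.
            var members = new EntityMembers
            {
                ModelId = new Identifier { Name = "Employees" }, // example model
                EntityId = new Identifier { Name = "Employee" }, // example entity
                Members = new Collection<Member>
                {
                    new Member
                    {
                        MemberId = new MemberIdentifier
                        {
                            Code = "E1001", // business key from the ERP
                            Name = "Jane Doe",
                            MemberType = MemberType.Leaf
                        }
                    }
                }
            };

            // Create the member; the OperationResult reports any errors.
            OperationResult result;
            client.EntityMembersCreate(members, false, out result);
        }
    }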
Another way to get large amounts of data into MDS is by using Entity Based Staging. Entity Based Staging allows you to use tools like SSIS to get bulk data into MDS. A good primer here: http://msdn.microsoft.com/en-us/sqlserver/hh802433.aspx.
I hope this helps. Feel free to ask more questions.
I like using a generic data-access object that classes in my model inherit from. Each class has a one-to-one relationship with a table in the database.
We're using SSIS to replicate data from our CRM (as well as other data sources) into our MDS (for the time being). If you're not familiar with the tool, I'd recommend it for moving data around - it's relatively easy to pick up the basics. If you go this route, here's a great resource I followed to push data into our MDS system:
http://www.sqlchick.com/entries/2013/2/16/importing-data-into-master-data-services-2012-part-2.html

how to create a sql database from a strongly typed dataset

I'm looking for an easy way to transfer a database schema I have developed inside Visual Studio as a strongly typed dataset (an .xsd file) into a corresponding SQL Server database. Silly me, I assumed the process would be straightforward, but I can't find out how to do it. I assume I could duplicate the tables column by column, but that seems error-prone. Does anyone know of a way to perform a schema transfer like this? Maybe a tool to translate the .xsd file into a corresponding SQL Server DDL file?
Final thought: once I have the schema transferred, moving data around between the two data stores will be straightforward; it's just getting the schemas synced that has me stumped...
Thanks,
Keith
Why didn't you implement your data model directly in SQL Server? That is the more common, engineered approach, and I think it is why Microsoft has not provided any wizard or tool for this case. You can also keep your data model as scripts or .sql files, manage them via SVN, and run them whenever you need the model implementation.
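That said, if you do need to go from the .xsd to DDL, you can read the schema back into a DataSet and walk it programmatically. A rough sketch with a deliberately simplified CLR-to-SQL type map (not production-ready; keys and constraints are ignored):

    using System;
    using System.Data;
    using System.Text;

    class XsdToDdl
    {
        static void Main()
        {
            // Load the typed dataset's schema definition (placeholder path).
            var ds = new DataSet();
            ds.ReadXmlSchema("MyDataSet.xsd");

            var ddl = new StringBuilder();
            foreach (DataTable table in ds.Tables)
            {
                ddl.AppendLine("CREATE TABLE [" + table.TableName + "] (");
                for (int i = 0; i < table.Columns.Count; i++)
                {
                    DataColumn col = table.Columns[i];
                    string comma = i < table.Columns.Count - 1 ? "," : "";
                    ddl.AppendLine("  [" + col.ColumnName + "] " + SqlType(col)
                                   + (col.AllowDBNull ? " NULL" : " NOT NULL") + comma);
                }
                ddl.AppendLine(");");
            }
            Console.WriteLine(ddl);
        }

        // Very rough type map; extend it for the types your schema uses.
        static string SqlType(DataColumn col)
        {
            if (col.DataType == typeof(int)) return "INT";
            if (col.DataType == typeof(DateTime)) return "DATETIME";
            if (col.DataType == typeof(decimal)) return "DECIMAL(18,2)";
            if (col.DataType == typeof(string) && col.MaxLength > 0)
                return "NVARCHAR(" + col.MaxLength + ")";
            return "NVARCHAR(MAX)";
        }
    }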

What are the Best Practices to follow while creating a data-dictionary?

I have a large and complex SQL Server 2005 DB used by multiple applications. I want to create a data dictionary for maintaining not only my DB objects but also cross-referencing them against the applications that use a specific object.
For example, if a stored procedure is used by 15 different applications, I want to record that additional data too.
What are the key elements to keep in mind so that I get an efficient and scalable data dictionary?
So, I recently helped build a data dictionary for a very large product. We were dealing with documenting more than one thousand tables using a change request process. I can send you a scrubbed version of the spreadsheet we used if you want. Basically, we captured the following:
Column Name
Data Type
Length
Scale (for decimals)
Whether the column is custom for the application(s) or a default column
Which application(s)/component(s) the column is used in
Release the column was introduced in
Business definition
We also captured information about who requested the addition, their contact information, etc. Our primary focus was on business definition, and clearly identifying why a column was being used or created.
We didn't have stored procedures in our solution, but bear in mind that these would be pretty easy to add to the system.
We used Access for our front-end, even though SQL Server was on the back end. It made it pretty easy for us to build out a rich user interface without much work, using the schema we had already built out.
Hope this helps you get started--feel free to ask if you have additional questions.
I've always been a fan of using the 'extended properties' within SQL Server for storing this kind of meta data. In this way the description of each object lives alongside the object and is accessible by anyone with access to the database itself. I'm sure there are also tools out there that can read these extended properties and turn them into a nicely formatted document.
As far as being "scalable", I don't know of any issues related to adding large amounts of data as extended properties; or I should say I've never had any issues with this.
You can set these extended properties using the SQL Server Management Studio 'Properties' dialog for each table/proc/function/etc., and you can also use the 'sp_addextendedproperty' stored procedure.
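For example, attaching a description (or an application cross-reference) to a column from C# could look like the sketch below. The connection string, object names, and the custom property name are made up; the sp_addextendedproperty parameters themselves are the standard ones:

    using System.Data;
    using System.Data.SqlClient;

    class AddExtendedProperty
    {
        static void Main()
        {
            using (var conn = new SqlConnection(
                "Server=.;Database=MyDb;Integrated Security=true")) // placeholder
            using (var cmd = new SqlCommand("sp_addextendedproperty", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;

                // 'MS_Description' is the conventional property name; a custom
                // name such as 'UsedByApplications' works the same way for
                // cross-referencing consuming applications.
                cmd.Parameters.AddWithValue("@name", "MS_Description");
                cmd.Parameters.AddWithValue("@value", "Customer's legal name");
                cmd.Parameters.AddWithValue("@level0type", "SCHEMA");
                cmd.Parameters.AddWithValue("@level0name", "dbo");
                cmd.Parameters.AddWithValue("@level1type", "TABLE");
                cmd.Parameters.AddWithValue("@level1name", "Customer");
                cmd.Parameters.AddWithValue("@level2type", "COLUMN");
                cmd.Parameters.AddWithValue("@level2name", "LegalName");

                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }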