I know nothing about SAP BO yet...
I'm a BI analyst and I inherited an old, poorly documented data warehouse. I'm trying to find out whether a given table in my warehouse is used by a universe/report/dashboard, so I can see the front-end impact of modifying that table. Of course I want to do this programmatically, since I have dozens of fact tables.
I'm sure there is a way to do this from BO! I know there is an SDK and a REST API, but I admit I don't understand which one should be used. I'm a SQL developer, so I'm sure I'm missing something.
Where should I start? I need to know what to do and which tools to use. Then I'll get an external developer to help.
Do you have UNV or UNX type universes? In any case, I would suggest first documenting the universe using a third-party tool. I have used http://biclever.com/software/unx-universe-documenter/ for UNX documentation.
It extracts the objects used in the universe and their associated database fields/tables etc. into an Excel file.
The above would be a free approach, but it will require some manual effort (checking each and every field, for example).
Paid tools are available for impact analysis, such as 360eyes, Sherlock, Metaminer, APOS, and perhaps SAP products like Information Steward.
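If you also want to see, from the repository side, which Web Intelligence documents sit on top of a given universe, the CMS Query Builder (the AdminTools web application) can help. Something along these lines should work, although the exact relationship name can vary by version, and note that Query Builder will not tell you which database tables a universe uses (that still needs the documenter above or an SDK script):
-- List all universes in the repository (UNV and UNX)
SELECT SI_ID, SI_NAME, SI_KIND
FROM CI_APPOBJECTS
WHERE SI_KIND = 'Universe' OR SI_KIND = 'DSL.Universe'

-- Web Intelligence documents built on a given universe
SELECT SI_ID, SI_NAME
FROM CI_INFOOBJECTS
WHERE SI_KIND = 'Webi'
AND PARENTS("SI_NAME='Webi-Universe'", "SI_NAME='YOUR_UNIVERSE_NAME'")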
Hope this helps
Nowadays SAP recommends "keeping the core clean" so that you can move to the cloud and always update to the latest version without having to worry or retest; this also applies on-premise.
I have a requirement to add a Z field to the QMEL table to link its notifications to SAP PS projects (the PROJ table). The QMEL table already has a structure, CI_QMEL, that is ready to be extended, and the related BAPIs support this extension.
But in order to keep the core clean, I'm considering challenging the functional requirement and suggesting a ZNOTIF_PROJ table with the same key as QMEL (the notification ID). This would be completely separate from the standard, but at the same time the official BAPI wouldn't be able to support it, so a wrapper on top would be needed to update both the standard and the custom table, and everything would become more complex.
Should I stick to the old extension style or go for a new table?
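(For clarity, the side table I have in mind would look roughly like the sketch below. It is written as generic SQL purely for illustration; in practice it would be an ABAP Dictionary table keyed on QMEL's notification number, and the field lengths are only indicative.)
-- Hypothetical side table: one row per notification linked to a PS project
CREATE TABLE znotif_proj (
    qmnum VARCHAR(12) NOT NULL,  -- notification ID, same key as QMEL
    pspid VARCHAR(24) NOT NULL,  -- external project ID from PROJ
    PRIMARY KEY (qmnum)
);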
Personally, I prefer extending standard tables. Having BAPIs, standard transactions, etc. work as expected is worth far more than a nebulous idea like a "clean core."
As long as you're not modifying core code or extending tables in an unsupported manner, customizing the system in ways supported by SAP is not a bad thing. You should consider your future upgrade plans (S/4 on-premise vs. cloud, for example) when deciding on the right answer, but don't make things too hard on yourself.
S/4, whether on-premise or cloud, already offers functionality for adding new fields and tables. You can do this in the web UI, much like in SAP CRM, so there is no problem with extending the existing structure. The help page about this functionality is here.
I want to have an access portal for non-tech-savvy individuals in which they could build reports of their own without needing to know any SQL whatsoever.
Ideally, I could create custom fields myself and then just let the users in the portal pick and choose whichever they like, with a custom date range.
I've explored the options Google Data Studio offers, but it looks to me like it mostly puts an emphasis on data visualization.
In addition, my attempts to build custom queries with it were not successful, since the platform is rigid about deciding which field is a metric and which is a dimension (and it does so inaccurately). This makes it hard to query reports the way you normally would in BigQuery, which doesn't have these somewhat arbitrary limitations.
Perhaps I've misunderstood something about the platform due to my limited experience with it, but it looks like Data Studio isn't going to fit the bill for me.
EDIT: In addition, the platform should have a way of exporting said reports as CSV files, a feature that Data Studio doesn't have as far as I know.
It would be great to receive suggestions for a different platform which would better fit my needs, or even suggestions on how to make better use of Data Studio.
Have you looked at using a tool like Redash (https://redash.io)? Assuming your GA360 data is in BigQuery, you can connect Redash to BQ. Then you can author queries and visualize the results.
You can also use the Google Cloud SDK to connect to BQ and run custom queries that generate new tables in BQ based on the GA360 session data, then use Redash, or any other tool, to report and visualize; a rough sketch of such a roll-up query is below.
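For example, something along these lines would materialize a daily traffic summary from the GA360 export (the project and dataset names are placeholders you would replace with your own):
-- Hypothetical daily roll-up of GA360 session data into a reporting table
CREATE OR REPLACE TABLE `my_project.reporting.daily_traffic` AS
SELECT
  PARSE_DATE('%Y%m%d', date)  AS session_date,
  SUM(totals.visits)          AS sessions,
  SUM(totals.pageviews)       AS pageviews,
  SUM(totals.transactions)    AS transactions
FROM `my_project.ga360_export.ga_sessions_*`
WHERE _TABLE_SUFFIX BETWEEN '20190101' AND '20190131'
GROUP BY session_date;

Non-technical users can then point Redash (or Data Studio) at the resulting flat table and simply filter by date, without touching the raw session data.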
Good morning,
This is more of a concept question than anything.
I am looking to design a database and interface that will track changes to the entries (in this case people) and display those changes readily.
(user experience would look something like this)
for user A
Date     Category            Activity
8/8/14   change position     position 1 -> position 2
8/9/14   change department   department a -> department b
...
...
The visual experience seems like it would benefit from an E-A-V design; however, I am designing the database to be easy to data mine, and from my reading I think that E-A-V is not the right way to go.
Does it make sense to duplicate data just to display it?
If not, does anyone have a suggestion for how to query the history table and display it? (I'm currently using jQuery and PHP to leverage the DB... I suppose I could do something interesting from a coding perspective to get it done.) A sketch of the kind of history table and display query I have in mind is below.
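(The table and column names here are made up; this is just to show the shape I'm picturing.)
-- One row per change to a person's attribute
CREATE TABLE person_history (
    person_id   INT          NOT NULL,
    changed_on  DATE         NOT NULL,
    category    VARCHAR(50)  NOT NULL,   -- e.g. 'change position'
    old_value   VARCHAR(100) NOT NULL,
    new_value   VARCHAR(100) NOT NULL
);

-- The display for user A
SELECT changed_on AS change_date,
       category,
       CONCAT(old_value, ' -> ', new_value) AS activity
FROM person_history
WHERE person_id = 1
ORDER BY changed_on;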
thank you for your help,
Travis
Creating an efficient operational database environment and creating an 'easy-to-data-mine' environment are two separate (and often opposing) goals.
Others might disagree with me, but in my opinion it is best to design your database for operational readiness (this means using the E-A-V design mentioned above) and then worry about data transformation later. This may make it less convenient to transform the data for easy mining later on, but it accomplishes an incredibly important goal, which is to eliminate the possibility of data error.
Once you have a good system in place where you can collect data appropriately, then you can create a warehouse or datamart environment to more conveniently extract that data.
This may sound like a lot of work but from a data integrity perspective, it is much safer than trying to create some system that is designed entirely for reporting. That's my personal opinion at least.
(sorry cannot comment yet)
You have to analyse the data you need to persist.
If you only have a couple of tables, with no relationships between them, you probably don't need a database.
In that case a database solution will probably be slower (connection/transmission/security overhead, etc.).
Well, if it's a few MBs of data, I would keep everything in one table.
You can easily load the whole data set in memory and do what you need to do.
This may be a pipe dream, but I'm hoping someone knows of a tool which can be configured to compare all of the data, or just selected keys, in two identical databases and merge them, perhaps based on relationships.
Specifically looking for one for SQL Server.
I'm not really asking for the best one, but if it exists it would be nice to hear how it is used.
Any other ideas for how to manage the work done or data added in dev and push it out to production without copying the entire database are welcome.
Thanks!
We use this and I personally think it's excellent.
http://www.red-gate.com/products/sql-development/sql-data-compare/
There is also another product for the schema side.
http://www.red-gate.com/products/sql-development/sql-compare/
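If you just want a quick manual check before buying anything, plain T-SQL can get you part of the way. A rough sketch, assuming both databases sit on the same instance and using made-up table and column names:
-- Rows present in dev but missing or different in production
SELECT * FROM DevDb.dbo.Customers
EXCEPT
SELECT * FROM ProdDb.dbo.Customers;

-- Push new or changed rows from dev into production
MERGE ProdDb.dbo.Customers AS target
USING DevDb.dbo.Customers AS source
    ON target.Id = source.Id
WHEN MATCHED AND target.Name <> source.Name THEN
    UPDATE SET target.Name = source.Name
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Id, Name) VALUES (source.Id, source.Name);

The compare tools essentially automate this across every table and handle dependency ordering for you.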
I don't know of a specific tool, but you can build into your publication process the analysis and execution of delta files containing the diffs from one version to another. Magento and WordPress, for example, use something like this. They have something like this:
-- sql_update_001_002.sql
UPDATE some_table SET some_column = 'new value' WHERE id = 42;
DELETE FROM some_table WHERE is_obsolete = 1;
CREATE TABLE a_new_table (id INT PRIMARY KEY, name VARCHAR(100));
-- compare some keys or apply other logic here
-- etc.
Then they have a script that analyses the current version and, if needed, executes the corresponding SQL files.
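A minimal sketch of the version bookkeeping such a script relies on (the table and column names are just an illustration):
-- Record of which delta scripts have already been applied
CREATE TABLE schema_version (
    version     INT          NOT NULL PRIMARY KEY,
    script_name VARCHAR(255) NOT NULL,
    applied_at  DATETIME     NOT NULL DEFAULT GETDATE()
);

-- The deployment script reads the highest applied version...
SELECT MAX(version) AS current_version FROM schema_version;

-- ...runs every sql_update_<from>_<to>.sql above it, then records each one
INSERT INTO schema_version (version, script_name)
VALUES (2, 'sql_update_001_002.sql');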
Navicat allows you to synchronize data and structure between two databases (even when they are located on different servers).
In terms of tools, I agree with Chris: Redgate's toolset covers both schema and data comparisons.
If you are also thinking about your overall DB development process, I have written a blog post on the topic which might be of interest.
It also has some links to how others have tackled this subject.
http://michaelbaylon.wordpress.com/category/data-management/database-development/sql-script-management/
Do there exist any (ideally free or open-source) tools for performing OLAP analyses on arbitrary tables in a relational database, without requiring any advance specification of dimensional hierarchies, cardinalities, or any other meta-information about the table beyond what can be extracted automatically from the table itself?
My inability to Google for anything like what I'm describing makes me suspect I'm using incorrect terminology and what I'm searching for isn't properly considered to be OLAP. If this is the case, what I specifically want is anything that would let technically unsophisticated users create cross-tab or contingency table aggregations using tables in a relational DB without needing to write elaborate SQL queries.
Or, in other words, I'd like something that mimics Excel's PivotTables on a larger scale. I appreciate that Excel does indeed generate extensive caches behind the scenes when you make a PivotTable, but it does this without the user having to explain to it which caches need creating. This is the functionality I'm trying to find elsewhere, if it exists.
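(To make the requirement concrete: the kind of cross-tab I want users to get without writing it themselves looks like the query below, where the table and column names are of course made up.)
-- Sales by region (rows) and year (columns), i.e. a hand-written pivot
SELECT
    region,
    SUM(CASE WHEN sale_year = 2011 THEN amount ELSE 0 END) AS sales_2011,
    SUM(CASE WHEN sale_year = 2012 THEN amount ELSE 0 END) AS sales_2012,
    COUNT(*) AS row_count
FROM sales
GROUP BY region
ORDER BY region;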
The best options I know of are Excel and Access, but of course they are not open source. This space kinda got trampled in the explosion of interest in what is now called Business Intelligence and a lot of companies got bought by MS and others. It's pretty thin now as far as I can tell. I'll watch this thread though.
The most useful paradigm to attach to, I think, is spreadsheets, and there's not much competition there any more. Google Docs spreadsheets can import CSV files exported from databases, and there's a pivot chart available, but not much more.
The other place I've seen OLAP capabilities is in the Adobe Flex libraries to build on with ActionScript if you have any inclination in that direction. As usual, Adobe manages to get it about 90% right but doesn't quite provide a whole product.
icCube aims to set up an OLAP cube as simply as possible. It is not schema-agnostic, but I guess it is quite simple to define dimensions and facts from existing DB tables. Nevertheless, this could turn out to be not so "simple" depending on your tables; it's difficult to say without knowing them. I guess there's no generic easy solution ;-)
Then you can use an Excel pivot table (amongst others) to access the cubes. Note that, as far as I know, Excel does not do any caching or aggregation when connecting to a cube; instead, it generates all the required MDX requests against the cube.
Hope that helps.