What is the difference between Grid and Matrix in SAP B1?

When should I use a Matrix and when should I use a Grid?

When you want to work with UDOs, use a Matrix; otherwise use a Grid.
Some differences:
Matrix
to load the data you need a DBDataSource
effective for UDOs
filled automatically by SAP B1 when you navigate records
Grid
you can load the data using a SQL query
has problems with columns of type double (you must use a workaround for this)
you cannot modify UDO data (viewing only); it can't be linked to a UDO
you can create levels of visualization (expand/collapse rows)
a Grid is much faster when handling large amounts of data
Bye.

In short:
Grid is faster in certain situations, but Matrix is more versatile (at least in SBO version 2007 and before).
Check out the SAP SDN forums at https://www.sdn.sap.com/; they are a big source of information.

A Grid is best used for display purposes: you can easily display large amounts of data. A Matrix is best used for maintaining data, because it can be bound to a UDO so the user can access the data at screen level. Unlike with a Grid, no extra code needs to be written; everything is handled by the SAP objects.
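To make the loading difference concrete, here is a minimal, hedged sketch using the SAP B1 UI API (SAPbouiCOM) from Python via COM; the form UID, item UIDs, data table ID, table and field names are made-up placeholders, and in practice this kind of add-on code is more often written in C# or VB.

import win32com.client

# Sketch only: attach to a running SAP B1 client. The connection string is normally
# passed to the add-on by Business One at startup; "<conn-string>" is a placeholder.
gui = win32com.client.Dispatch("SAPbouiCOM.SboGuiApi")
gui.Connect("<conn-string>")
app = gui.GetApplication(-1)
form = app.Forms.Item("MY_FORM")  # hypothetical form UID

# Grid: free-form data, filled by executing a SQL query into its DataTable
# (the data table "DT_0" is assumed to be defined on the form already).
grid = form.Items.Item("MY_GRID").Specific
grid.DataTable = form.DataSources.DataTables.Item("DT_0")
grid.DataTable.ExecuteQuery("SELECT DocEntry, U_Name FROM [@MY_UDT]")

# Matrix: columns are bound to a DBDataSource registered on the form, so SAP B1
# fills and refreshes them itself when the user navigates records of the UDO.
dbds = form.DataSources.DBDataSources.Item("@MY_UDT")  # assumed to exist on the form
matrix = form.Items.Item("MY_MATRIX").Specific
matrix.Columns.Item("col_name").DataBind.SetBound(True, "@MY_UDT", "U_Name")
matrix.LoadFromDataSource()

The contrast is the point of the sketch: the Grid is filled by running a query, while the Matrix is bound to the form's DBDataSource and kept in sync by SAP B1 itself.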

Related

How to dynamically update table based on selecting a single data point on a time series graph

I currently have Apache Superset set up with two visualizations, one being a Time Series Graph and the other being a table. Both visualizations use the same dataset. I want the values of the table to be dynamically updated based on which data point I am hovering over/selecting in the Time Series Graph. Is this type of functionality possible in Apache Superset? I know it is possible in Power BI, but I'd like to know if Superset is capable of accomplishing this as well.
(As far as I can tell right now, it seems like each visualization is independent. The only time the visualizations are linked is when there is a filter applied from a filter visualization which affects the overall dataset)

Functions scaling in OpenMDAO

Is there any way to apply a logarithmic scaling to the design objectives/constraints in OpenMDAO? In my optimization problem, I have to deal with an objective function that takes large values and varies significantly over the design space (between the orders 10^6 and 10^7), so I would ideally like to make the driver handle the log of the objective. I have modified my objective function directly for now, but it would be more convenient to do it at the driver level. Is that possible?
Currently OpenMDAO drivers don't support nonlinear scaling, and doing so through a component is the correct approach. In the past I've sometimes made a separate "objective" component that exists only to apply some transformation to the raw objective. That allows you to drop it into your model when necessary without the need to change the original calculations.
A middle ground between "modify the component" and "have the OpenMDAO driver API do it for you" is to use an ExecComp.
prob.model.add_subsystem('log_scale', om.ExecComp('log_f = log(f)'))
prob.model.connect('some_comp.f', 'log_scale.f')
prob.model.add_objective('log_scale.log_f')
The exec-comp will handle the derivatives of that transformation for you. All you have to do is connect your objective into the right input on the ExecComp instance.
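For reference, here is a small self-contained sketch of that pattern; the component name 'some_comp' and the toy objective expression are made up for illustration.

import openmdao.api as om

prob = om.Problem()

# Hypothetical component producing a large, strictly positive objective 'f'.
prob.model.add_subsystem('some_comp',
                         om.ExecComp('f = 1e6 * (x - 3.0)**2 + 2e6'),
                         promotes_inputs=['x'])

# ExecComp applying the log transformation; its derivatives are handled for you.
prob.model.add_subsystem('log_scale', om.ExecComp('log_f = log(f)'))
prob.model.connect('some_comp.f', 'log_scale.f')

prob.model.add_design_var('x', lower=-10.0, upper=10.0)
prob.model.add_objective('log_scale.log_f')

prob.driver = om.ScipyOptimizeDriver(optimizer='SLSQP')

prob.setup()
prob.set_val('x', 7.0)
prob.run_driver()
print(prob.get_val('some_comp.f'), prob.get_val('log_scale.log_f'))

The driver now sees log(f), which is much better conditioned than the raw values in the 10^6 to 10^7 range.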

Is there any ETL tool for any Smalltalk dialect?

...like Talend for Java, for instance, but one that allows you to implement processes programmatically.
Multiple data sources, orchestration, calculated fields, pivot tables are some of the features I would like to have.
We built on top of Moose for an ERP data conversion project. It works well with smaller amounts of data (amounts that fit in a 32-bit image). For ETL with multiple sources, just use an image for each input stream/step and connect them together through files or sockets. The visualization was important for us: it allowed the domain experts to steer the process. A short feedback loop was essential.
Nearly 5 years later it is time to revisit this answer. Pharo and Moose now support 64 bits. The garbage collector is not yet up to handling very large heaps; an incremental collector to avoid large pauses is in active development now. If the work is partitionable, use a solution like ImageWorker to use multiple cores with all data in one image, or TelePharo to remote-control multiple images. Perhaps use MQTT to integrate. For visualization there are Roassal2 and Roassal3, or the whole GToolkit.

Is OLAP/MDX a good way to process data w/ unknown values at various aggregation levels

I'm new to OLAP, so perhaps I don't know the right terminology to use for this question, but bear with me here.
I work with lots of hierarchical, multidimensional data where parent/aggregated cells mostly have data, but child/leaf cells are often missing data (attribute values are unknown but non-zero). I currently use a combination of scripting and SQL to work with it, but that's getting unwieldy. It seems like OLAP cubes and MDX are better suited to the structure of the data, but not necessarily to tasks I need to do with it. For example:
OLAP seems mainly designed for read-only reporting; I do a lot of modifications to the data in batch processes
OLAP seems to like having complete leaf-level data to calculate aggregates; my data has missing values at various levels
Examples of what I want to do:
Load original multi-level data into cube and preserve known parents; don't overwrite or display their values as calculated aggregates of children (which may be incomplete).
Create/update/delete cells in a cube based on results from complicated queries/joins of other cubes. Sometimes a cube needs to be transformed to use a slightly different dimension definition.
Users require estimates for unknown values. I can create decent estimates, but need to adjust them so they conform to known parents/children across all dimensions and levels (this is much harder than it sounds). I am already doing this, but it involves pulling the data out of the RDBMS into a custom executable.
Queries and calculations need to be able to handle the unknowns properly. Ideally be able to easily query how much of an aggregated cell's value is made up of estimated vs. known values, possibly compute confidence/error statistics, or check whether we can derive an exact value for an unknown when it has a known parent and all known siblings, etc.
Data can be large... up to tens of millions of fact table rows. Performance needs to be decent for batch jobs (minutes are ok, hours not so much).
Could an OLAP server and MDX be a good tool for this type of work? Are there any other tools that would work well for manipulating hierarchical/multidimensional/gap-filled data?
That's quite a set of requirements for an OLAP system, interesting and challenging :-)
Load original multi-level data into cube and preserve known parents; don't overwrite or display their values as calculated aggregates of children (which may be incomplete).
You can change the way cubes aggregate values in a hierarchy. Doing this in one hierarchy is fine; doing it in multiple hierarchies might start to get complicated. It's worth checking twice whether there is a unique mathematical solution to the problem when multiple 'special' hierarchies are involved.
Create/update/delete cells in a cube based on results from complicated queries/joins of other cubes. Sometimes a cube needs to be transformed to use a slightly different dimension definition.
Here you can use writeback (the MDX UPDATE CUBE statement), but I think it's a bit too simple for your needs. The implementation depends on the vendor. Pay attention: creating cells can kill your memory, since for large cubes you can quickly have millions of cells in a subcube.
What is the sparsity of your model (the number of cells with data divided by the total number of cells)?
Some models have sparsities of 1e-30; there it's easy to explode if you're updating all cells ;-).
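As a quick back-of-the-envelope illustration (the dimension sizes and row count below are made-up numbers):

from math import prod

dimension_sizes = [10_000, 5_000, 365, 50]   # e.g. customers, products, days, regions
cells_with_data = 20_000_000                 # roughly the fact table row count

total_cells = prod(dimension_sizes)          # every possible leaf-level coordinate
sparsity = cells_with_data / total_cells
print(f"total cells: {total_cells:.3e}, sparsity: {sparsity:.3e}")
# ~9.1e+11 potential cells -> sparsity around 2e-5; touching "all cells" is infeasible.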
Users require estimates for unknown values. I can create decent estimates, but need to adjust them so they conform to known parents/children across all dimensions and levels (this is much harder than it sounds). I am already doing this, but it involves pulling the data out of the RDBMS into a custom executable.
This is looking complicated. The issues here are the complexity of the algorithms, whether a solution is possible in the MDX language, and how well they match the OLAP engine (fast enough). You're taking the risk that it explodes, but have a look at the SCOPE statement.
Data can be large... up to tens of millions of fact table rows. Performance needs to be decent for batch jobs (minutes are ok, hours not so much).
That should not be a real challenge.
To answer your question, I don't think so. We have a similar problem, in the genetics field, and we are going to solve it by adding a dedicated calculation module to our OLAP solution. It's an interesting ongoing project.

Create test cube based on existing cube data (but much larger)

Is it possible to create a large cube based on existing cube data?
We'd like to test the performance of certain tools in combination with SSAS and currently do not have any cubes large enough.
e.g. We have a year's worth of data and want to expand it to be 10 years' worth.
Mostly I have created my own scripts for growing test data.
I have used Adventure Works as a base for names, addresses, etc. I have also used Red Gate's data generator (I was working at a place that had the full Red Gate product suite; you can download an evaluation copy to test it out).
It might be worth writing your own scripts. Then you can tweak the generation scripts to generate additional versions for testing.
To increase the size of your data you need to write custom scripts to copy it. There is no automatic way to "grow data" in SQL.
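As a rough sketch of that kind of script (the table, column names, date range, and connection string below are placeholders for your own schema), a year of fact rows can be copied with shifted dates from Python via pyodbc:

import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;DATABASE=MyDW;Trusted_Connection=yes;"
)
cur = conn.cursor()

# Keep the original year and add 9 shifted copies -> roughly 10 years of data.
for years_back in range(1, 10):
    cur.execute(
        """
        INSERT INTO dbo.FactSales (OrderDate, CustomerKey, ProductKey, SalesAmount)
        SELECT DATEADD(year, -?, OrderDate), CustomerKey, ProductKey, SalesAmount
        FROM dbo.FactSales
        WHERE OrderDate >= '2023-01-01' AND OrderDate < '2024-01-01'
        """,
        years_back,
    )
    conn.commit()

conn.close()

After loading, the date dimension has to cover the new range and the SSAS cube needs to be reprocessed.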