How can I extract 'Memory Use Statistics' from ST03N? (ABAP)

I want to select the 'Memory Use Statistics' data from ST03N in a report.
After running a performance trace, I noticed that the data might be stored in one of these tables:
MONI
SWNCMONI
I do not know exactly how to extract the CLUSTD data from the table.
I have heard of using the function module SWNC_COLLECTOR_GET_AGGREGATES, but the data it returns does not exactly match the data from ST03N.

As one probably knows, the MONI and the newer SWNCMONI database tables are cluster tables and shouldn't be read directly; use the newer FM SWNC_COLLECTOR_GET_AGGREGATES for that.
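For completeness, a call to that FM might look like the sketch below. The importing parameters mirror the SWNCMONIKEY fields used further down; the MEMORY tables parameter is an assumption on my part, so verify the exact interface in SE37 before relying on it:

DATA lt_memory TYPE TABLE OF swncaggmemory.

" Read the aggregated daily memory statistics for one instance.
" <instance_id> and <host> are placeholders, as in the snippet below.
CALL FUNCTION 'SWNC_COLLECTOR_GET_AGGREGATES'
  EXPORTING
    component  = <instance_id>
    assigndsys = <host>
    periodtype = 'D'           " daily aggregate
    periodstrt = '20200713'
  TABLES
    memory     = lt_memory.    " memory use statistics (assumed parameter)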
Nevertheless, if you still want this:
TYPES: tt_memory TYPE TABLE OF swncaggmemory.

DATA: ms_monikey TYPE swncmonikey,
      dummy      TYPE tt_memory.

FIELD-SYMBOLS: <tab> TYPE ANY TABLE.

ASSIGN dummy TO <tab>.

" Cluster key: instance, component type, host, aggregation period
" (D = day) and period start date. <instance_id>/<host> are placeholders.
ms_monikey-component  = <instance_id>.
ms_monikey-comptype   = 'NW Workload'.
ms_monikey-assigndsys = <host>.
ms_monikey-periodtype = 'D'.
ms_monikey-periodstrt = '20200713'.

" Read the DATATABLE payload of cluster area WJ directly into <tab>.
IMPORT datatable TO <tab>
  FROM DATABASE swncmoni(wj) ID ms_monikey
  IGNORING STRUCTURE BOUNDARIES.
As you can see, the data for PFCG differs from ST03N even though it was read for the same date.
As for your second question: why does it differ? It may depend on the data aggregation settings for the memory profile; also try experimenting with the aggregation period. In fact, I wasn't able to find an exact correspondence between them either.
A lot of useful information about ST03 can be found here:
https://blogs.sap.com/2007/03/16/how-to-read-st03n-datasets-from-db/


Is the column of type JSON deprecated?

In the BigQuery console, when creating a table, there used to be a JSON option among the column types, but weirdly enough it was never present in their docs. We used this column type in our production tables, and discovered later on that you can't select it in queries (BigQuery throws an error), and the JSON functions also didn't work with it. So we simply stopped using this column in queries, but it still exists in our tables.
However, in the past couple of days, all queries against this table have been failing with the error 400 Json is not enabled for current project. and this column type is no longer present in the BigQuery console. It seems it was removed or deprecated? I checked the release notes, but the latest release was well before the error occurred. This broke our production environment, and we couldn't even export the data, because exporting gave the same error. Instead we had to use a new table without this column, which meant we lost all our history.
Has anyone faced the same problem with any other column types before? Is it normal for a type to be deprecated without users being notified beforehand? This is making me question the reliability of BigQuery.
Please reach out to Google Cloud support and we will help you fix your issue with that problematic table. You may also want to try fixing it yourself using the ALTER TABLE DROP COLUMN statement, which is currently in public preview [1]. This will drop the erroneous column (only the data in that column will be lost); the rest of the data will remain usable.
[1] https://cloud.google.com/bigquery/docs/reference/standard-sql/data-definition-language#alter_table_drop_column_statement
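For reference, the statement would look something like this (dataset, table, and column names are placeholders):

-- Drops only the problematic column; every other column keeps its data.
ALTER TABLE mydataset.mytable
DROP COLUMN IF EXISTS json_col;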
I ran into the same error message a few days ago and was surprised to read about this policy change, which is not backed by any mitigation process. My attempt to follow Vlad Grachev's suggestion to drop the column did not succeed, as the console does not allow querying this table at all (the same "Json is not enabled for current project." error).
My only remediation at this point is to:
build a new table where the JSON column is switched to type STRING
create a pipeline that transforms the objects to strings
migrate the data through the pipeline to the new table
In BigQuery, JSON data can be stored in a column of type "RECORD". Are you referring to the same thing by "JSON column type"?
BigQuery uses the RECORD (or STRUCT) type to represent nested structures. A column of RECORD type is in fact a large column containing multiple child columns. For more information, refer to the link below:
Json Data in BigQuery
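For illustration, this is what a RECORD/STRUCT value looks like in a query (the field names are made up):

-- person is a single RECORD column containing two child columns.
SELECT STRUCT('Alice' AS name, 30 AS age) AS person;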
If you are not referring to the RECORD data type, the JSON column type might have been a test feature that was not covered by the usual deprecation process.

SQL DB2 - How to SELECT or compare columns based on their name?

Thank you for checking my question out!
I'm trying to write a query for a very specific problem we're having at my workplace and I can't seem to get my head around it.
Short version: I need to be able to target columns by their name, and more specifically by a part of their name that will be consistent throughout all the columns I need to combine or compare.
More details:
We have (for example) 5 different surveys. They each have many questions, but SOME of the questions belong to the same metric, and we need to create a generic field that captures it. There's more background to the "why" of that, but it's pretty important for us at this point.
We were able to more or less solve this with either COALESCE() or CASE statements, but the challenge is that, as surveys and survey versions continue to grow, our vendor inevitably generates new columns for each survey and its questions.
Take this example, which is what we do currently and works well enough:
CASE
  WHEN SURVEY_NAME = 'Service1'  THEN SERV1_REC
  WHEN SURVEY_NAME = 'Notice1'   THEN FNOL1_REC
  WHEN SURVEY_NAME = 'Status1'   THEN STAT1_REC
  WHEN SURVEY_NAME = 'Sales1'    THEN SALE1_REC
  WHEN SURVEY_NAME = 'Transfer1' THEN NULL
  ELSE NULL
END REC
And also this alternative which works well:
COALESCE(SERV1_REC, FNOL1_REC, STAT1_REC, SALE1_REC) as REC
But as I mentioned, eventually we will have a "SALE2_REC", for example, and we'll need them BOTH in this same statement. I want to create something that doesn't require going into the SQL to make changes. Given that the columns will ALWAYS be named "something#_REC" for this specific metric, is there any way to achieve something like:
COALESCE(all columns named LIKE '%_REC') as REC
Bonus! Related, might be another way around this same problem:
Would there also be a way to achieve this?
SELECT (columns named LIKE '%_REC') FROM ...
Thank you very much in advance for all your time and attention.
-Kendall
Table and column information in Db2 is kept in the system catalog. The relevant views are SYSCAT.TABLES and SYSCAT.COLUMNS. You could write:
select colname, tabname
from syscat.columns
where colname like some_expression
  and tabname = 'MYTABLE'
Note that the LIKE predicate supports expressions based on a variable or the result of a scalar function, so you can match against dynamic input.
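Building on that, one way to approximate "COALESCE(all columns named LIKE '%_REC')" is to generate the expression text from the catalog and run it as dynamic SQL. A sketch, assuming Db2 for LUW with LISTAGG available (MYSCHEMA and MYTABLE are placeholders):

-- Produce the text of the COALESCE expression from the catalog:
SELECT 'COALESCE(' ||
       LISTAGG(colname, ', ') WITHIN GROUP (ORDER BY colno) ||
       ') AS REC'
FROM syscat.columns
WHERE tabschema = 'MYSCHEMA'
  AND tabname   = 'MYTABLE'
  AND colname LIKE '%\_REC' ESCAPE '\';
-- Embed the resulting string in a SELECT and run it with PREPARE/EXECUTE;
-- a new SALE2_REC-style column is then picked up automatically.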
Have you considered storing the more complicated properties in JSON or XML values? Db2 supports both and you can query those values with regular SQL statements.
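For instance, with the JSON route each response could be stored as one JSON document and the metric pulled out by path, so new questions don't add new columns. A rough sketch, assuming a Db2 version where the ISO JSON_VALUE function is available (table name, column, and path are made up):

-- survey_doc is a hypothetical column holding one JSON document per response.
SELECT JSON_VALUE(survey_doc, '$.rec' RETURNING INTEGER) AS rec
FROM survey_responses;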

Get value from a table and save it inside of a structure

I am new to ABAP and I have to modify these lines of code:
LOOP AT t_abc ASSIGNING <fs_abc> WHERE lgart = xyz.
  g_abc-lkj = g_abc-lkj + <fs_abc>-abc.
ENDLOOP.
A coworker told me that I have to use a structure and not a field symbol.
What would the syntax be, and why use a structure in this case?
I have no idea why the coworker wants you to use a structure in this case, because using a field symbol while looping is usually more performant. The reason could be that you are doing some kind of novice training and he wants you to learn different syntax variants.
Using a structure while looping would look like this:
LOOP AT t_abc INTO DATA(ls_abc) WHERE lgart = xyz.
  g_abc-lkj = g_abc-lkj + ls_abc-abc.
ENDLOOP.
Your code is correct, because a field symbol behaves almost the same as a structure here.
For a field symbol:
A field symbol is a pointer, so there is no copy of the data and performance is better.
If you change a value via the field symbol, the internal table is changed as well.
For a structure:
A structure is a copy of the data, so there is a copy action, and performance suffers if the data row is bigger than about 200 bytes (based on the SAP ABAP performance guidelines).
If you change the data in the structure, the original internal table remains the same, because there are two copies of the data in memory.
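A minimal, self-contained sketch of that difference (lt_vals is a made-up table):

DATA: lt_vals TYPE TABLE OF i,
      lv_val  TYPE i.
FIELD-SYMBOLS: <fs_val> TYPE i.

APPEND 1 TO lt_vals.

" Field symbol: <fs_val> points at the table row, so the row itself changes.
LOOP AT lt_vals ASSIGNING <fs_val>.
  <fs_val> = <fs_val> + 10.   " the row in lt_vals is now 11
ENDLOOP.

" Work area: lv_val is a copy; the row stays 11 unless you MODIFY it back.
LOOP AT lt_vals INTO lv_val.
  lv_val = lv_val + 10.
ENDLOOP.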

Move FMOIX/FMCOX structures into Internal Table

I am a newbie to ABAP (3 days of experience) and I am currently tasked with writing reports in ABAP code. It is like moving some data from a specific SAP database to a Business Intelligence staging area.
So the core difficulty is that some data on the SAP server is in the format of dictionary structures (FMOIX, FMCOX, etc.), and I need to move this data into internal tables at program runtime. I was told that Open SQL would not work in this case.
If you still do not get what I mean, I can mention a couple of approaches, actually suggested by my supervisor. The first is to use the GET event, say:
GET fmoix.
  IF fmoix-zhdlt > from_dat AND fmoix-zhdlt < to_dat.
    APPEND fmoix TO itab.
  ENDIF.
The thing is that I am still not very clear about this GET event. Is it just an event handler, or can it loop through data records?
What I have googled for more than two days gives me something like:
LOOP AT fmoix.
  MOVE fmoix TO itab.
ENDLOOP.
So what are the ways to move transactional structure like FMOIX into internal tables, say the internal table name is ITAB?
Your answer would be greatly appreciated. Though I have time, I am totally new.
Thanks a lot.
If your supervisor is suggesting that you use the GET event, it means that your program is (or should be) using a logical database - in this case probably FMF or FMF_BCS.
Doing GET FMOIX reads a set of fields defined in the logical database (as a node). Underneath your GET statement, you can use FMOIX as a structure, e.g. WRITE FMOIX-field1. The program implicitly loops (it's not explicitly written in the code like a LOOP...ENDLOOP is) through all the rows returned according to your selection criteria. You should be able to use MOVE-CORRESPONDING to move the contents of each row into a proper structure, and then APPEND that structure to your itab.
Quick link on GET in ABAPDocu
Note: this answer is a bit of a guess, since I've only used a logical database once, and the documentation is a little thin on the ground compared to the volumes out there about standard SELECTs and internal tables.
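Putting that together, a minimal sketch might look like this (assuming the program's attributes bind it to a suitable logical database such as FMF; from_dat and to_dat are the selection fields from the question):

NODES: fmoix.

DATA: ls_row TYPE fmoix,
      itab   TYPE TABLE OF fmoix.

" This block runs once for every row the logical database delivers.
GET fmoix.
  IF fmoix-zhdlt > from_dat AND fmoix-zhdlt < to_dat.
    MOVE-CORRESPONDING fmoix TO ls_row.
    APPEND ls_row TO itab.
  ENDIF.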
You can declare your internal table with the type of that structure, such as:
DATA: itab LIKE TABLE OF fmoix WITH HEADER LINE.
And you can fill this internal table wherever you use your SELECT statements.
Such as:
SELECT *
  FROM ____
  INTO CORRESPONDING FIELDS OF TABLE itab
  WHERE zhdlt GT from_dat
    AND zhdlt LT to_dat.
I'm not sure this is what you are looking for, but creating ITAB with the type of that structure lets you fill it with all the corresponding data coming from your SELECT. You can't LOOP at FMOIX because it's not a table, it's a structure. So is there any specific reason to hold your data in structures?
Hope it was helpful.
Talha

Database: best way to model a spreadsheet

I am trying to figure out the best way to model a spreadsheet (from the database point of view), taking into account:
The spreadsheet can contain a variable number of rows.
The spreadsheet can contain a variable number of columns.
Each cell contains a single value, but its type is not known in advance (integer, date, string).
It has to be easy (and performant) to generate a CSV file containing the data.
I am thinking about something like :
class Cell(models.Model):
column = models.ForeignKey(Column)
row_number = models.IntegerField()
value = models.CharField(max_length=100)
class Column(models.Model):
spreadsheet = models.ForeignKey(Spreadsheet)
name = models.CharField(max_length=100)
type = models.CharField(max_length=100)
class Spreadsheet(models.Model):
name = models.CharField(max_length=100)
creation_date = models.DateField()
Can you think of a better way to model a spreadsheet? My approach stores every value as a string, and I am worried that generating the CSV file from it will be too slow.
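To gauge the CSV concern, a rough export sketch against the models above might look like this (an illustration only; it assumes Django's default column_set reverse relation):

import csv
from collections import defaultdict

def export_csv(spreadsheet, fileobj):
    # Header row from the column names, in a stable order.
    columns = list(spreadsheet.column_set.order_by('id'))
    writer = csv.writer(fileobj)
    writer.writerow([c.name for c in columns])
    # One query for all cells of the spreadsheet, then pivot in memory.
    grid = defaultdict(dict)
    for cell in Cell.objects.filter(column__spreadsheet=spreadsheet):
        grid[cell.row_number][cell.column_id] = cell.value
    for row_number in sorted(grid):
        writer.writerow([grid[row_number].get(c.id, '') for c in columns])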
From a relational viewpoint:
Spreadsheet <-->> Cell : RowId, ColumnId, ValueType, Contents
There is no requirement for row and column to be entities, but you can make them entities if you like.
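In illustrative SQL DDL, that minimal shape could look like:

CREATE TABLE spreadsheet (
  id   INTEGER PRIMARY KEY,
  name VARCHAR(100)
);

CREATE TABLE cell (
  spreadsheet_id INTEGER NOT NULL REFERENCES spreadsheet(id),
  row_id         INTEGER NOT NULL,
  column_id      INTEGER NOT NULL,
  value_type     VARCHAR(10),    -- e.g. 'integer', 'date', 'string'
  contents       VARCHAR(100),
  PRIMARY KEY (spreadsheet_id, row_id, column_id)
);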
Databases aren't designed for this. But you can try a couple of different ways.
The naive way is to do a version of One Table To Rule Them All: create a giant generic table, all types being (n)varchars, that has enough columns to cover any foreseeable spreadsheet. Then you'll need a second table to store metadata about the first, such as what Column1's spreadsheet column name is, what type it stores (so you can cast in and out), etc. Then you'll need triggers to run against inserts that check the incoming data against the metadata to make sure it isn't corrupt, etc., etc., etc. As you can see, this way is a complete and utter cluster. I'd run screaming from it.
The second option is to store your data as XML. Most modern databases have XML data types and some support for XPath within queries. You can also use XSDs to provide some kind of data validation, and XSLTs to transform that data into CSVs. I'm currently doing something similar with configuration files, and it's working out okay so far. No word on performance issues yet, but I'm trusting Knuth on that one.
The first option is probably much easier to search and faster to retrieve data from, but the second is probably more stable and definitely easier to program against.
It's times like this I wish Celko had a SO account.
You may want to study EAV (entity-attribute-value) data models, as they try to solve a similar problem.
Entity-Attribute-Value - Wikipedia
The best solution greatly depends on how the database will be used. Try to find a couple of the top use cases you expect and then decide on the design. For example, if there is no use case for fetching the value of a single cell from the database (the data is always loaded at row level, or even in groups of rows), then there is no need to store a 'cell' as such.
That is a good question that calls for many answers depending on how you approach it, and I'd love to share an opinion with you.
This topic is one of several we have researched at Zenkit; we even wrote an article about it, and we'd love your opinion on it: https://zenkit.com/en/blog/spreadsheets-vs-databases/