Search all programs within a package for a MODIFY statement - abap

I want to search all programs - within a package - that use the statement:
modify itab_xyz from wa_itab_xyz
Preferably, the string should be searched with wildcards like itab*
for a range of itab_(values) like itab_abc, itab_def, itab_ghi, etc.
How do I do this in SAP ABAP?
Below is a screenshot of all programs within a package that one can search from.

One possibility would be to use program RS_ABAP_SOURCE_SCAN.
You can restrict the selection by package and you can also enter a specific string to search for in the code.

I use the transaction code_scanner (program is afx_code_scanner).
The biggest problem with this program and with RS_ABAP_SOURCE_SCAN mentioned above is that they won't find everything. IMO the most important thing they miss is implicit enhancements. These can have a big impact on system functions, and if you are searching for an error message or a hard-coded value, skipping them could mean not finding something critical.
At the time I looked (about 7 years ago), I was unable to find a delivered tool that would actually scan all the code in the system. I ended up enhancing the code_scanner to look for enhancements, WDA components, BSP code, and forms code.
I don’t know if the open source component above includes those. At first glance it doesn’t seem to, but I don’t have time to really dig into it.

You could use a tool from the Galileo open-source library. This program searches ABAP source, OTR texts, messages, and textpools for static text, wildcard patterns, or regex patterns.
ABAP-Coding:
https://github.com/galileo-group/galileo-abap-lib/blob/master/%23gal%23devtools_find_text.prog.abap
Textpool:
https://github.com/galileo-group/galileo-abap-lib/blob/master/%23gal%23devtools_find_text.prog.xml
It refers to some additional classes from the library, so you either need to copy these as well or just use abapGit to get the whole library. You can also contact me so I can send you a transport containing the library.
Additional information (October 1, 2020):
I created a version of the report that you can copy and paste into the ABAP editor. It is too long to include in the response, but you can find it here.
Do not forget to copy the text elements / selection texts.
Required Text Elements:
-----------------------
B00 Scope
B01 Search pattern
H01 Type
H02 Name
H03 Key
H04 Match
Required Selection Texts:
-------------------------
P_CASE Case-sensitive
P_DEVC Package
P_LANGU Language
P_MESS Messages
P_OTR OTR Texts
P_PATT Pattern
P_REGEX Regular expression
P_SOURCE ABAP sources
P_TPOOL Textpools
P_WILDC Wildcard pattern


LabVIEW Inserting/overwriting text into existing string

I was wondering if folks have found a reliable way to inject text into an existing string. Some context: I'm writing data to a string indicator formatted like a table, and I wanted to inject values into it so they maintain a specific format, spacing-wise. Writing to a table would definitely be easier; however, I am porting a legacy program and wanted to provide familiarity to the end user.
Essentially, I want to do the equivalent of typing into a .txt file with the INSERT function enabled, where it just overwrites the content already in the string. Example below (dashes added to show spacing) of how it is currently looking when I inject the values with hard coded spacing:
Time---value---avg. value---result
60------10---------20---------PASS
120------11---------20---------PASS
180------9---------15---------FAIL
I'd prefer it to look more lined up, like below:
Time---value---avg. value---result
60------10---------20---------PASS
120-----11---------20---------PASS
180-----9--------- 15---------FAIL
I'm writing my application using LabVIEW 2019.
Edit: the header will obviously not change, only each subsequent line, where the values can result in entries not lining up.
What about "Replace Substring" function (https://zone.ni.com/reference/en-XX/help/371361R-01/glang/replace_substring/)? Doesn't it meet your requirements?
The diagram below outputs 01234999990123PASS890123456789. The values of the integer and the word PASS are added replacing characters in the existing string, exactly like overstrike would do.
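For reference, here is the same overwrite-at-offset idea expressed in text form. This is only an illustration in Python, not LabVIEW code; the offsets 5 and 14 are simply chosen to reproduce the output string quoted above.

def overwrite_at(s, offset, new):
    # Overwrite len(new) characters of s starting at offset, like overstrike/INSERT mode.
    return s[:offset] + new + s[offset + len(new):]

base = "012345678901234567890123456789"
line = overwrite_at(base, 5, "99999")    # write the integer value
line = overwrite_at(line, 14, "PASS")    # write the PASS/FAIL result
print(line)                              # 01234999990123PASS890123456789

Building each output line this way, with every value written at a fixed column offset, keeps the columns lined up regardless of how many digits each value has.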

Find MMT Unicode abbreviations for given symbol (e.g. given ☞, find "juri")

The usual IDEs/editors for MMT (e.g. IntelliJ + MMT plugin or jEdit) provide autocompletion for certain useful Unicode characters. For instance, I can type jle and immediately get suggested jleftrightarrow that, upon autocompletion, is replaced by ↔.
Is there a way to find out the reverse association? E.g. I have the symbol ☞ at hand and would like to know the autocompletion abbreviation starting with j, if it exists. In that case, I would get juri.
The MMT OnlineTools I developed allow this: https://comfreek.github.io/mmteditor.
See screenshot below: if you already have a string full of Unicode symbols that you don't know how to type, just paste it under "how do I type X?". And if you are looking for a specific abbreviation — by Unicode character or by (parts of its) name — use the "abbreviation search" feature.
Internally, my tool pulls from (a copy of) the same resource file that Dennis linked in his answer.
As far as I know, there currently isn't a good way to look up or search for the ASCII abbreviations, except to go straight for the source — which at least has the advantage that it's guaranteed to be up-to-date.
The IDE plugins all have access to an mmt.jar and load their abbreviations from a specific resource file embedded therein. You can find it here on GitHub: https://github.com/UniFormal/MMT/blob/master/src/mmt-api/resources/unicode/unicode-latex-map.
In the long term, we should consider extending that file with a third "field" that gives a short description, and e.g. have a text field in IntelliJ to search for a specific abbreviation.
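If you only need an occasional reverse lookup, you can also search that resource file directly with a small script. Below is a minimal sketch in Python; the "|" field separator is an assumption about the layout of unicode-latex-map, so check the actual file format before relying on it.

SEP = "|"  # assumed separator between abbreviation and character; verify against the real file

def find_abbreviations(path, symbol):
    # Return every abbreviation whose expansion contains the given character.
    hits = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip("\n").split(SEP)
            if len(parts) >= 2 and symbol in parts[1]:
                hits.append(parts[0].strip())
    return hits

print(find_abbreviations("unicode-latex-map", "☞"))  # would print e.g. ['juri']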

Find All in a Textbox

I am working on an application to search for and build a list of all the times a string (or variations of it) appears in a text file. Kind of like a Find All function in a text editor, so that I can build a list with the info that is found, such as:
S350
S250
S270
S5000
What can I use to do this search? It will have one value that does not change (the S in this case) followed by up to 4 digits.
RegEx seems like a good choice.
Something like S(\d{1,4})? might work for you.
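As a minimal sketch of how that could be used (Python here just for illustration; "values.txt" is a made-up file name, and the trailing ? is dropped so a bare S with no digits is not reported as a match):

import re

pattern = re.compile(r"S\d{1,4}")   # literal S followed by 1 to 4 digits

with open("values.txt", encoding="utf-8") as f:
    matches = pattern.findall(f.read())

print(matches)   # e.g. ['S350', 'S250', 'S270', 'S5000']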
Expresso is my preferred regular expression composer.

TSearch2 - dots explosion

The following conversion
SELECT to_tsvector('english', 'Google.com');
returns this:
'google.com':1
Why doesn't the TSearch2 engine return something like this?
'google':2, 'com':1
Or how can I make the engine return the exploded string as I wrote above?
I just need "Google.com" to be findable by "google".
Unfortunately, there is no quick and easy solution.
Denis is correct in that the parser is recognizing it as a hostname, which is why it doesn't break it up.
There are 3 other things you can do, off the top of my head.
1. You can disable the host parsing in the database. See the Postgres documentation for details. E.g. something like
   ALTER TEXT SEARCH CONFIGURATION your_parser_config
   DROP MAPPING FOR url, url_path;
2. You can write your own custom dictionary.
3. You can pre-parse your data before it's inserted into the database in some manner (maybe splitting all domains before going into the database).
I had a similar issue to you last year and opted for solution (2), above.
My solution was to write a custom dictionary that splits words up on non-word characters. A custom dictionary is a lot easier & quicker to write than a new parser. You still have to write C tho :)
The dictionary I wrote would return something like 'www.facebook.com':4, 'com':3, 'facebook':2, 'www':1 for the 'www.facebook.com' domain (we had a unique-ish scenario, hence the 4 results instead of 3).
The trouble with a custom dictionary is that you will no longer get stemming (ie: www.books.com will come out as www, books and com). I believe there is some work (which may have been completed) to allow chaining of dictionaries which would solve this problem.
First off, in case you're not aware, tsearch2 is deprecated in favor of the built-in functionality:
http://www.postgresql.org/docs/9/static/textsearch.html
As for your actual question, google.com gets recognized as a host by the parser:
http://www.postgresql.org/docs/9.0/static/textsearch-parsers.html
If you don't want this to occur, you'll need to pre-process your text accordingly (or use a custom parser).
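If you go the pre-processing route, the core idea is just to break the dots apart before the text ever reaches to_tsvector, so the parser sees separate words instead of a single host token. A minimal sketch in Python (the function name is made up, and you would call it from whatever layer inserts the rows):

import re

def explode_hosts(text):
    # Replace dots that sit between word characters with spaces,
    # so 'Google.com' is indexed as 'google' and 'com' rather than a host token.
    return re.sub(r"(?<=\w)\.(?=\w)", " ", text)

print(explode_hosts("Google.com"))           # Google com
print(explode_hosts("visit www.books.com"))  # visit www books com

Each piece then goes through the normal text search configuration, so the standard stemming still applies.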

Controlling Doxygen's LaTeX output for making PDF documentation

I'm using Doxygen to generate documentation for my code. I need to make a PDF version of this and using Doxygen's LaTeX output appears to be the way to do it.
However I've run into a number of annoying problems, and not knowing anything about LaTeX previously I haven't really got much of an idea of how to approach them, and the countless references for LaTeX-related things are not much help...
I worked out how to create a custom style in a .sty file and how to get Doxygen to use it. After a lot of searching I found out how to set the page margins etc. through this, and I'm guessing that perhaps this is the file I want for doing the other things, but I can't seem to find any commands for doing what I want :(
The table of contents at the start of the document contains a lot of items I'd rather it didn't, as it makes the contents very long. Is there some way to limit this contents to just, say, the first two levels, rather than having entries for every single individual function, variable, etc.? I'd quite like to keep all the bookmarks however. I did try the "COMPACT_LATEX" option, but as well as removing items on the contents pages, it removed the bookmarks and the member lists at the start of each section, which I do really want to keep.
Is there a way to change the order of things, like putting the full class description at the start of the section, rather than after all the members and attributes?
Wow, that's kind of evil of Doxygen.
Okay, to get around the tocdepth counter problem, add the following line to your .sty file:
\AtBeginDocument{\setcounter{tocdepth}{2}}% or whatever level you want
You can set the PDF bookmarks depth to a separate value:
% requires you to \usepackage{hyperref} first
\hypersetup{
bookmarksdepth = section, % or whatever level you want
}
Also note that if you have a list of figures/tables, the tocdepth must be at least 2 for them to show up.
I don't see any way of rearranging those items within the LaTeX files---Doxygen just barfs them out there, so we can't do much. You'll have to poke around the Doxygen documentation to see if there's any way to specify the order I guess. (Here's hoping!)
You're so close.
Googling on "latex contents level" brought me to LaTeX - customizing the depth of the table of contents for different parts of the thesis which suggests
\setcounter{tocdepth}{n}
where n starts at zero for only the highest level division. This is presumably defined in all the default styles, but is worth a try in Doxygen.
You could write a Perl/Awk script to simply delete the unwanted lines from the table of contents. For the file burble.tex, LaTeX will generate the file burble.toc, which will contain lines such as:
\contentsline {subsection}{Class F rewrites}{38}
\contentsline {subsection}{Class M rewrites}{39}
\contentsline {section}{\numberline {7}Definition and properties of the translation}{44}
\contentsline {paragraph}{Well-formedness}{54}
Simple regexes will identify which level each line belongs to, and you can filter the file based on that. Once you have the table of contents the way you want it, insert \nofiles in the appropriate place (the style sheet?), which means that LaTeX will read the auxiliary files but not overwrite them.
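A minimal sketch of that filtering step (shown in Python rather than Perl/Awk; "burble.toc" and the set of levels to keep are just placeholders):

import re

KEEP = {"chapter", "section"}            # drop subsection, paragraph, ...
entry = re.compile(r"\\contentsline \{(\w+)\}")

with open("burble.toc", encoding="utf-8") as f:
    lines = f.readlines()

def keep(line):
    m = entry.match(line)
    return m is None or m.group(1) in KEEP   # non-entry lines pass through untouched

with open("burble.toc", "w", encoding="utf-8") as f:
    f.writelines(line for line in lines if keep(line))

Run it after the LaTeX pass that writes the .toc file and before the pass that typesets the contents, and remember the \nofiles step above so the filtered file is not regenerated.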