My requirement is to display the source code of an RFC. I'm working from a non-SAP environment. I believe SAP stores RFC source code in one of its tables; if I know the table and column, I can fetch this information myself. Can anybody share details on this?
Thanks
If I know the table and column, I can fetch this info.
The table is REPOSRC, the column is DATA - good luck.
(For those reading along, the source is stored in a compressed form that is not suitable for external access.)
If you're not committed to your brute-force approach, you could use the function module RPY_FUNCTIONMODULE_READ_NEW, which returns the source as well. Be aware that the source will be of limited use, because it will most likely pull in tons of other stuff from other programs and includes.
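If you go that route, a rough sketch from a non-SAP environment using the SAP .NET Connector (NCo 3.0) could look like this; the destination name is a placeholder and the FM's parameter names are quoted from memory, so verify the interface in SE37 first:

    using System;
    using SAP.Middleware.Connector;

    class ReadFmSource
    {
        static void Main()
        {
            // "MY_SAP" must be a destination configured for NCo beforehand
            RfcDestination dest = RfcDestinationManager.GetDestination("MY_SAP");

            IRfcFunction fn = dest.Repository.CreateFunction("RPY_FUNCTIONMODULE_READ_NEW");
            fn.SetValue("FUNCTIONNAME", "RFC_READ_TABLE");  // FM whose source we want
            fn.Invoke(dest);

            // SOURCE should hold the ABAP source, one line per table row
            // (table name from memory - check the FM's interface in SE37)
            IRfcTable source = fn.GetTable("SOURCE");
            foreach (IRfcStructure line in source)
                Console.WriteLine(line.GetString(0));
        }
    }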
For my C# application I need to access some data from SAP tables based on user selections. For this I used the .NET Connector plus RFC_READ_TABLE to read the data from a single table, and it works. After further review I found three issues with this approach:
RFC_READ_TABLE is not an officially supported RFC from SAP, so most experts agree it should not be used in production.
RFC_READ_TABLE does not support table joins.
SELECT * queries fail in most cases because a DATA_BUFFER_EXCEEDED error is thrown.
I did some research on the ABAP side and did not find any alternative API/RFC/BAPI that can accept an SQL statement as an input argument at runtime.
I need something like a DataTable in C#.
1) RFC_READ_TABLE is not an officially supported RFC from SAP ==>> It is used by millions of customers and within SAP's own development all over the place. This IS the official API for generic table access. Use it and worry not.
2) RFC_READ_TABLE does not support table joins ==>> You can always join in your own application. If you don't want to do that, or cannot (for performance reasons, say), then ask your ABAP contact to prepare an RFC-enabled function module for you. That has nothing to do with something being a BAPI; BAPI means something completely different. BAPIs can be very difficult, that is correct, but an RFC-enabled query function is not a BAPI. BAPIs happen to be RFC-enabled quite often, but there is no link between the two.
3) SELECT * queries fail in most cases because DATA_BUFFER_EXCEEDED is thrown ==>> With all due respect, you are not supposed to be reading everything anyway; do your research first and request only those fields you really need. Unless this is some sort of generic BI tool, you don't need all the fields. I can tell from experience.
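To make point 3 concrete, here is a minimal NCo 3.0 sketch that requests only the needed columns via the FIELDS table, which is usually enough to stay under the 512-byte row limit. Destination, table, and field names are examples only:

    using System;
    using SAP.Middleware.Connector;

    class ReadTableExample
    {
        static void Main()
        {
            RfcDestination dest = RfcDestinationManager.GetDestination("MY_SAP"); // example name

            IRfcFunction fn = dest.Repository.CreateFunction("RFC_READ_TABLE");
            fn.SetValue("QUERY_TABLE", "KNA1");   // customer master, as an example
            fn.SetValue("DELIMITER", "|");

            // Request only the fields you actually need (point 3)
            IRfcTable fields = fn.GetTable("FIELDS");
            foreach (string f in new[] { "KUNNR", "NAME1", "ORT01" })
            {
                fields.Append();
                fields.SetValue("FIELDNAME", f);
            }

            // Optional WHERE clause, supplied in 72-character lines
            IRfcTable options = fn.GetTable("OPTIONS");
            options.Append();
            options.SetValue("TEXT", "LAND1 = 'US'");

            fn.Invoke(dest);

            foreach (IRfcStructure row in fn.GetTable("DATA"))
                Console.WriteLine(row.GetString("WA"));   // delimited row content
        }
    }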
cheers Otto
Allowing a .NET client app to send unchecked SQL statements is a bad idea, both security- and performance-wise.
The standard way would be to either create remote-enabled function modules or use Gateway to expose the data as OData.
http://scn.sap.com/community/gateway
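For illustration, a remote-enabled function module doing the join on the ABAP side could look roughly like this; the Z-name and the result structure are hypothetical, and the interface block is the kind SE37 generates:

    FUNCTION z_get_orders_for_customer.
    *"----------------------------------------------------------------------
    *"*"Local Interface:
    *"  IMPORTING
    *"     VALUE(IV_KUNNR) TYPE KUNNR
    *"  TABLES
    *"      ET_RESULT STRUCTURE ZORDER_WITH_NAME
    *"----------------------------------------------------------------------
      " Join sales orders (VBAK) with customer names (KNA1) on the server,
      " so the client receives one ready-made result set.
      SELECT v~vbeln v~erdat k~name1
        INTO CORRESPONDING FIELDS OF TABLE et_result
        FROM vbak AS v
        INNER JOIN kna1 AS k ON k~kunnr = v~kunnr
        WHERE v~kunnr = iv_kunnr.
    ENDFUNCTION.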
An alternative is the DDIF_FIELDINFO_GET function module; use it to retrieve a table's field details (field names, offsets, lengths).
As of now there is no join support in RFC_READ_TABLE. However, the (admittedly lengthy) steps you could follow are:
Create a batch file to pass your input SAP tables as parameters to your code.
Extract the relevant data by putting filters (which is possible) in RFC_READ_TABLE or RFC_GET_TABLE_ENTRIES, getting the structure and the data separately. RFC_READ_TABLE has a 512-byte limitation, which can be bypassed by using the RFC_GET_TABLE_ENTRIES FM. Note: the data from the second FM comes back as one long string per row, which you need to split up based on the structure you extracted (see the sketch after this list).
You should now have two outputs, one per table.
Upload them to MS Access via a simple batch job, which can be fully automated.
Once uploaded to MS Access, write your join to connect the two.
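Here is a rough C# (NCo 3.0) sketch of the extraction step: fetch the field layout with DDIF_FIELDINFO_GET, pull the raw rows, then cut each one-string row by offset/length. Parameter names are from memory and the interfaces vary between releases, so double-check them in SE37:

    using System;
    using SAP.Middleware.Connector;

    class SplitRawRows
    {
        static void Main()
        {
            RfcDestination dest = RfcDestinationManager.GetDestination("MY_SAP"); // placeholder

            // 1) Field metadata: name, offset and length of every column
            IRfcFunction meta = dest.Repository.CreateFunction("DDIF_FIELDINFO_GET");
            meta.SetValue("TABNAME", "KNA1");   // example table
            meta.Invoke(dest);

            // 2) Raw rows, one long string each
            IRfcFunction read = dest.Repository.CreateFunction("RFC_GET_TABLE_ENTRIES");
            read.SetValue("TABLE_NAME", "KNA1");
            read.Invoke(dest);

            foreach (IRfcStructure row in read.GetTable("ENTRIES"))
            {
                string raw = row.GetString(0);  // the whole row as one string
                foreach (IRfcStructure f in meta.GetTable("DFIES_TAB"))
                {
                    int off = f.GetInt("OFFSET");
                    int len = f.GetInt("LENG");
                    // NB: offsets are byte positions; on Unicode systems they
                    // may not equal character positions
                    if (off >= raw.Length) break;
                    string val = raw.Substring(off, Math.Min(len, raw.Length - off)).Trim();
                    Console.Write($"{f.GetString("FIELDNAME")}={val} ");
                }
                Console.WriteLine();
            }
        }
    }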
Check out my video, where I bypass the 512-byte limit.
I have two more videos up on YouTube.
Hope this helps.
Thanks
Ram.S
I have read access to a complete database but I cannot write.
This causes a problem, since I want to compare the database data with external data (e.g. spreadsheets).
The most efficient solution would be to create a new table in that database containing the spreadsheet's data.
Is it possible to create a table I can write to while leaving the rest of the database read-only?
Based on Mitch's answer I found this explanation: http://databases.aspfaq.com/database/should-i-use-a-temp-table-or-a-table-variable.html This seems to work and solves my problem.
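In case it helps someone later, here is a minimal sketch of that approach (table and column names are made up). A #temp table lives in tempdb, so you can create and fill it even though the database you are comparing against is read-only:

    -- Load the spreadsheet rows into a session-local temp table
    CREATE TABLE #SpreadsheetData (
        EmployeeID int PRIMARY KEY,
        Salary     decimal(10,2)
    );

    INSERT INTO #SpreadsheetData (EmployeeID, Salary) VALUES (1, 50000.00);
    INSERT INTO #SpreadsheetData (EmployeeID, Salary) VALUES (2, 61250.00);

    -- Compare against the read-only table
    SELECT e.EmployeeID, e.Salary AS DbSalary, s.Salary AS SheetSalary
    FROM   dbo.Employee AS e
    JOIN   #SpreadsheetData AS s ON s.EmployeeID = e.EmployeeID
    WHERE  e.Salary <> s.Salary;

    DROP TABLE #SpreadsheetData;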
According to the table_info documentation, I should be able to call table_info('%', '', '') to retrieve the available catalogs, and do a similar thing for table types and schemas.
When I do this directly through ODBC with SQLTables("%", "", "", ""), it works as expected; using Perl's DBI, however, I just get back all available tables instead.
Is this functionality supported by DBD::ODBC? If it's not, is there another way I could retrieve the available schemas, table types, and catalogs? This has to be data-source agnostic, so it has to use ODBC.
Btw, I did read How do I get schemas from Perl's DBI? but it wasn't helpful in this case.
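For reference, this is the sort of call I'm making (the DSN and credentials are placeholders):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('dbi:ODBC:MyDSN', 'user', 'pass', { RaiseError => 1 })
        or die $DBI::errstr;

    # Documented special case: catalog = '%', schema and table empty
    # should return one row per available catalog
    my $sth = $dbh->table_info('%', '', '', '');
    while (my $row = $sth->fetchrow_hashref) {
        print "$row->{TABLE_CAT}\n";
    }

    # Similarly ('', '%', '', '') should list schemas and
    # ('', '', '', '%') should list table types
    $dbh->disconnect;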
That happens because DBD::ODBC doesn't in fact pass empty strings to SQLTables; it passes NULLs. I'd say that is a bug and worth posting on rt.cpan.org. However, as I'm the current maintainer, I now know about it. By all means post a bug, and if you come back to me I will send you a new version to test.
EDIT: Recent versions of DBD::ODBC seem to do the right thing, so what version are you using?
EDIT2: Bad day for me - DBD::ODBC is indeed passing NULLs when the strings are empty, so go ahead and report it and I'll fix it.
Firstly, let me apologize for the title, as it probably isn't as clear as I think it is.
What I'm looking for is a way to keep sample data in a database (SQL Server 2005, 2008, and Express) that gets modified every so often. At present I have a handful of scripts to populate the database with a specific set of data, but every time the database is changed, all the scripts have to be more or less rewritten, and I was looking for alternatives.
I've seen a number of tools and other software for creating sample data in a database, some free and some not. Are there any other methods I haven’t considered?
Thanks in advance for any input.
Edit: Also, if anyone has any advice at all in dealing with keeping data in sync with a changing application or database, that would be of some help as well.
If you are looking for tools for SQL Server, go visit Red Gate Software; they have the best tools. They have a data compare tool that you can use to keep lookup-type tables up to date, and a SQL compare tool that you can use to keep tables synched between two databases. So, using SQL Data Compare, create a database with all the sample data you want. Then periodically refresh your testing DB (or your prod DB, if these are strictly lookup-type tables) using the compare tool.
I also like the alternative of having a script (you can use Red Gate's tool to create scripts), because it means you can store this info in source control and use it as part of a deployment package for other servers.
You could save them in another database, or in the same DB in separate tables distinguished by name, like employee_test.
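For instance, SELECT ... INTO makes the copy in one statement (a sketch; the names are examples):

    -- Snapshot the current contents of employee into a test copy
    SELECT * INTO dbo.employee_test FROM dbo.employee;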
Joseph,
Do you need to keep just the data in sync, or the schema as well?
One solution to the data question would be SQL Server snapshots. You create a snapshot of your initial configuration, so any changes to the "real" database don't show up in the snapshot. Then, when you need to reset the table, select from the snapshot into a new table. I'm not sure how it will work if the schema changes, but it might be worth a try.
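A rough sketch of the snapshot approach follows. The database, logical file name, and path are examples; the logical name must match the source database's data file:

    -- One-time: snapshot the database while it holds the pristine sample data
    CREATE DATABASE SampleDB_Snap
    ON ( NAME = SampleDB_Data, FILENAME = 'C:\Snapshots\SampleDB_Snap.ss' )
    AS SNAPSHOT OF SampleDB;

    -- Later, after tests have modified the data, pull the original rows back
    SELECT *
    INTO   dbo.Customers_Reset
    FROM   SampleDB_Snap.dbo.Customers;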
For generation of sample data, the Database project in Visual Studio has functionality that will create fake/random data.
Let me know if this makes sense.
Erick
I have a large and complex SQL Server 2005 DB used by multiple applications. I want to create a data dictionary for maintaining not only my DB objects but also cross-referencing them against the applications that use a specific object.
For example, if a stored procedure is used by 15 different applications, I want to record that additional data too.
What are the key elements to keep in mind so that I get an efficient and scalable data dictionary?
So, I recently helped build a data dictionary for a very large product. We were dealing with documenting more than one thousand tables using a change request process. I can send you a scrubbed version of the spreadsheet we used if you want. Basically, we captured the following:
Column Name
Data Type
Length
Scale (for decimals)
Whether the column is custom for the application(s) or a default column
Which application(s)/component(s) the column is used in
Release the column was introduced in
Business definition
We also captured information about who requested the addition, their contact information, etc. Our primary focus was on business definition, and clearly identifying why a column was being used or created.
We didn't have stored procedures in our solution, but bear in mind that these would be pretty easy to add to the system.
We used Access for our front-end, even though SQL Server was on the back end. It made it pretty easy for us to build out a rich user interface without much work, using the schema we had already built out.
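If it's useful as a starting point, a minimal SQL Server back end along those lines might look like this; every name here is illustrative, not taken from our actual spreadsheet:

    -- One row per documented column
    CREATE TABLE dbo.DataDictionaryColumn (
        ColumnID     int IDENTITY(1,1) PRIMARY KEY,
        TableName    sysname       NOT NULL,
        ColumnName   sysname       NOT NULL,
        DataType     varchar(50)   NOT NULL,
        Length       int           NULL,
        Scale        int           NULL,          -- for decimals
        IsCustom     bit           NOT NULL DEFAULT 0,
        IntroducedIn varchar(20)   NULL,          -- release
        BusinessDef  nvarchar(max) NULL,
        RequestedBy  nvarchar(100) NULL
    );

    -- Many-to-many: which applications/components use which column
    CREATE TABLE dbo.ColumnUsage (
        ColumnID        int NOT NULL
            REFERENCES dbo.DataDictionaryColumn (ColumnID),
        ApplicationName varchar(100) NOT NULL,
        PRIMARY KEY (ColumnID, ApplicationName)
    );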
Hope this helps you get started--feel free to ask if you have additional questions.
I've always been a fan of using 'extended properties' within SQL Server for storing this kind of metadata. This way, the description of each object lives alongside the object and is accessible by anyone with access to the database itself. I'm sure there are also tools out there that can read these extended properties and turn them into a nicely formatted document.
As far as being "scalable", I don't know of any issues related to adding large amounts of data as extended properties; or I should say I've never had any issues with this.
You can set these extended properties using the SQL Server Management Studio 'Properties' dialog for each table/proc/function/etc., or via 'sp_addextendedproperty'.
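For example (the object names here are made up):

    -- Attach a description to a table...
    EXEC sys.sp_addextendedproperty
         @name = N'MS_Description',
         @value = N'Customer master data; used by the billing and CRM apps',
         @level0type = N'SCHEMA', @level0name = N'dbo',
         @level1type = N'TABLE',  @level1name = N'Customer';

    -- ...and read it back
    SELECT objname, value
    FROM   sys.fn_listextendedproperty(N'MS_Description',
                                       N'SCHEMA', N'dbo',
                                       N'TABLE', N'Customer',
                                       NULL, NULL);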