I am implementing the Trial Balance (Version 2) FPM/Web Dynpro app from the Fiori Apps Library,
following the App Implementation: Trial Balance guide for S/4HANA 1610.
When I launch the Trial Balance app, it says "Initialization of query 2CCFITRIALBALQ0001 Failed" (PFA for the error).
Please let me know how to initialize or activate the BEx query.
Regards,
Sayed
The issue was resolved by following the steps below:
1) Set a BW client: run function module RS_MANDT_UNIQUE_SET (transaction SE37). If you use only one client, fill I_MANDT with it; if you use multiple clients, choose one of them. (A minimal call sketch follows after this list.)
2) Set user parameters RSWAD_DEV_MDVERSION = '072' and RSWAD_SKIP_JAVA = 'X' for user DDIC (set them in transaction SU01 on the Parameters tab).
3) Log on to the system as user DDIC in the client you chose in step 1 and run transaction RSTCO_ADMIN to activate the technical objects needed for the engine. The parameters set in step 2 prevent unnecessary objects (related to Java-based BI tools) from being activated here.
4) If you don't look at the OLAP statistics, deactivate them: in transaction SE38, execute report SAP_RSADMIN_MAINTAIN with OBJECT = RSDDSTAT_GLOBAL_OFF and VALUE = X in insert mode. If you need the statistics later, switch them back on by running the report with the same object but VALUE = space in update mode.
5) If you want to use OData services, run report EQ_RS_AUTO_SETUP (transaction SE38).
6) If you want to use the BW time hierarchies, go to transaction RSRHIERARCHYVIRT and mark the hierarchies you need. For this you have to wait until the job triggered in step 3 has finished successfully.
7) Call function module RSEC_GENERATE_BI_ALL.
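For step 1, instead of the SE37 test frame you can call the function module from a tiny ABAP snippet. This is only a hedged sketch; client '100' is a placeholder for the client you actually chose:

" Sketch: set the unique BW client (assumption: client 100).
CALL FUNCTION 'RS_MANDT_UNIQUE_SET'
  EXPORTING
    i_mandt = '100'.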
Regards,
Rehan Sayed
I'm building out an ETL process with Pentaho Data Integration (CE) and I'm trying to operationalize my transformations and jobs so that they can be monitored. Specifically, I want to be able to catch any errors and then send them to an error reporting service like Honeybadger or New Relic. I understand how to do row-level error reporting, but I don't see a way to do job- or transformation-level failure reporting.
Here is an example job.
The down path is where the transformation succeeds but has row errors. There we can just filter the results and log them.
The path to the right is the case where the transformation fails altogether (e.g. the DB credentials are wrong). This is where I'm having trouble: I can't figure out how to get the error info sent.
How do I capture transformation failures to be logged?
You cannot capture job-level error details inside the job itself.
However, there are other options for monitoring.
The first option is database logging for transformations or jobs (see the "Log" tab in the job/transformation settings dialog). This way you always have up-to-date information about the execution status, so you can, say, write a job that periodically scans the logging database and sends error reports wherever you need.
However, this option is fairly heavyweight to develop and support, and not very flexible for further modification. So in our company we ended up monitoring at the job-execution level: when you run a job with kitchen.bat and it fails for any reason, kitchen exits with an error status, which you can examine and act on with whatever tools you like: .bat commands, PowerShell, or (in our case) Jenkins CI. A sketch of such a wrapper follows below.
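A minimal wrapper sketch, assuming kitchen.bat returns a non-zero exit code when the job fails; the job path and log file here are placeholders:

@echo off
rem Run the job and capture kitchen's exit status.
call kitchen.bat /file:C:\etl\main_job.kjb /level:Basic
if %ERRORLEVEL% neq 0 (
    rem Non-zero means the job failed; hand the status to your alerting tool here.
    echo %date% %time% job failed with exit code %ERRORLEVEL% >> C:\etl\logs\failures.log
)

From there the failure line (or the exit code itself, if you run under Jenkins) can be forwarded to a service like Honeybadger or New Relic.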
You could use the writeToLog("e", "Message") function in the Modified Java Script step.
Documentation:
// Writes a string to the defined Kettle Log.
//
// Usage:
// writeToLog(var);
// 1: String - The Message which should be written to
// the Kettle Debug Log
//
// writeToLog(var,var);
// 1: String - The Type of the Log
// d - Debug
// l - Detailed
// e - Error
// m - Minimal
// r - RowLevel
//
// 2: String - The Message which should be written to
// the Kettle Log
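For example, inside a Modified Java Script step you might log a failed lookup at error level (customer_id is a hypothetical field name):

// Write an error-level message to the Kettle log for this row.
writeToLog("e", "Lookup failed for customer_id=" + customer_id);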
I am using the function GetGroups() to read all of the groups of a user in Active Directory.
I'm not sure if I'm doing something wrong, but it is very, very slow: each time it reaches this point, it takes several seconds. I'm also accessing the rest of Active Directory using the built-in functions of AccountManagement, and those execute instantly.
Here's the code:
For y As Integer = 0 To AccountCount - 1
    Dim UserGroupArray As PrincipalSearchResult(Of Principal) = UserResult(y).GetGroups()
    UserInfoGroup(y) = New String(UserGroupArray.Count - 1) {}
    For i As Integer = 0 To UserGroupArray.Count - 1
        UserInfoGroup(y)(i) = UserGroupArray(i).ToString()
    Next
Next
Later on...:
AccountChecker_Listview.Groups.Add(New ListViewGroup(Items(y, 0), HorizontalAlignment.Left))
For i As Integer = 0 To UserInfoGroup(y).Count - 1
    AccountChecker_Listview.Items.Add(UserInfoGroup(y)(i)).Group = AccountChecker_Listview.Groups(y)
Next
Item(,) contains my normal Active Directory data that I display; Item(y, 0) contains the username.
y indexes the user accounts in AD. I also have some other code for the other information in this loop, but it's not the issue here.
Does anyone know how to make this go faster, or is there another solution?
I'd recommend trying to find out where the time is spent. One option is to use a profiler, either the one built into Visual Studio or a third-party profiler like Redgate's ANTS Profiler or the YourKit .NET Profiler.
Another is to trace the time taken using the System.Diagnostics.Stopwatch class and use the results to guide your optimization efforts. For example, time the function that retrieves data from Active Directory and separately time the function that populates the view, to narrow down where the bottleneck is; a small sketch follows below.
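A minimal timing sketch around the call from the question, assuming Imports System.Diagnostics; only the Stopwatch lines are new:

' Time a single GetGroups() call to see whether the AD lookup is the bottleneck.
Dim sw As Stopwatch = Stopwatch.StartNew()
Dim UserGroupArray As PrincipalSearchResult(Of Principal) = UserResult(y).GetGroups()
sw.Stop()
Debug.WriteLine(String.Format("GetGroups for user {0}: {1} ms", y, sw.ElapsedMilliseconds))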
If the bottleneck is in the Active Directory lookup, you may want to consider running the operation asynchronously so that the window is not blocked and populates as new data is retrieved. If it's in the ListView, you may want to consider inserting the data in a batch operation, as sketched below.
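A hedged sketch of batching the ListView inserts: ListView.BeginUpdate()/EndUpdate() suspend repainting while many items are added, which often helps noticeably with large lists.

' Suspend painting while the group's items are added, then resume once at the end.
AccountChecker_Listview.BeginUpdate()
Try
    For i As Integer = 0 To UserInfoGroup(y).Length - 1
        AccountChecker_Listview.Items.Add(UserInfoGroup(y)(i)).Group = AccountChecker_Listview.Groups(y)
    Next
Finally
    AccountChecker_Listview.EndUpdate()
End Try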
I am running a very simple query and trying to extract the results to a text file. The entire query is essentially what is below: I am selecting everything from one single table, with one piece of WHERE criteria limiting the data to one month's worth. After it has extracted around 1.2 GB, this error shows up. Is there any way to work around this other than extracting smaller date ranges? I am trying to pull a couple of years' worth of data, so if I can only get it a few days at a time, it will take a lot of manual work.
I am currently using the free trial of a DB2 query tool, RazorSQL, if that makes a difference; I can probably purchase different software if it would help. I am trying to get IBM's tool, but for some reason it freezes during the download, so I am still working on that. I have searched for this error, but everything I see seems much more complex than what I am doing, and I can't tell whether it applies. Thanks in advance.
select *
from MyTable
where date_col between date '2014-01-01' and date '2014-01-31'
I stumbled on this error too and found out it is related to the db2jcc.jar (type 4) driver.
Excerpt: if there are no items left in the result set (or none to begin with), the result set is closed automatically, hence the exception. The suggestion is to handle it in the application; in my case I started checking if (rs.next()), as sketched below. Otherwise, there is a workaround: check out the source link below for how you can set some properties on the data source and avoid the exception.
Source :
"Invalid operation: result set is closed" error with Data Server Driver for JDBC
In my case, I was missing some properties in WAS; after adding allowNextOnExhaustedResultSet the issue was fixed.
1. Log in to the WebSphere Application Server administration console.
2. Select Resources > JDBC > Data sources > Application Center DataSource name > Custom properties and click New.
3. In the Name field, enter allowNextOnExhaustedResultSet.
4. In the Value field, type 1.
5. Change the type to java.lang.Integer.
6. Click OK.
Sometimes you also need to check whether the resultSetHoldability property exists. For details, refer to here. Outside WAS, the same property can be set on the JDBC URL, as sketched below.
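A hedged sketch of setting the same property directly in a JCC connection URL; the host, port, database, and credentials are placeholders:

// The DB2 JCC driver accepts driver properties after the database name,
// separated by a colon, each terminated with a semicolon.
String url = "jdbc:db2://dbhost:50000/MYDB:allowNextOnExhaustedResultSet=1;";
Connection conn = DriverManager.getConnection(url, "user", "password");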
I also encountered this failure when upgrading from the JDBC type 2 driver (db2java.zip) to the JDBC type 4 driver (db2jcc4.jar):
Statement statement = results.getStatement();
if (statement != null)
{
    connection = statement.getConnection(); // ** failed here
    statement.close();
}
The solution was to check whether the statement was closed, as follows.
Changed to:
Statement statement = results.getStatement();
if (statement != null && !statement.isClosed())
{
    connection = statement.getConnection();
    statement.close();
}
Creating the property below with type Integer worked for me:
allowNextOnExhaustedResultSet
I had the same issue on WAS 7, so I had to add and change a few things in the Admin Console.
This TeamWorksRuntimeException should be fixed by applying APAR JR50863, which is available on top of BPM V8.5.5 or included in BPM V8.5 refresh pack 6.
If the APAR does not solve the problem, try the following workaround:
Log in to the WebSphere Application Server admin console
Select Resources > JDBC > Data sources > DataSource name (TeamWorksDB) > Custom properties and click New
In the Name field, enter downgradeHoldCursorsUnderXa
In the Value field, type true
Change the type to java.lang.Boolean
Click OK to save your changes
Select custom property resultSetHoldability
In the Value field, type 1
Click OK to save your changes
Source of the answer: https://developer.ibm.com/answers/questions/194821/invalid-operation-result-set-is-closed-errorcode-4/
Restarting the app may fix the problem if the connection pool has lost its session to Db2. If you are using Tomcat, the connection pool property testOnBorrow may re-establish the connection to Db2; a sketch follows below.
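A hedged sketch of a Tomcat JDBC resource with validation on borrow; the JNDI name, URL, and credentials are placeholders:

<!-- context.xml: validate each pooled connection before handing it out. -->
<Resource name="jdbc/MyDB" auth="Container" type="javax.sql.DataSource"
          driverClassName="com.ibm.db2.jcc.DB2Driver"
          url="jdbc:db2://dbhost:50000/MYDB"
          username="user" password="secret"
          testOnBorrow="true"
          validationQuery="SELECT 1 FROM SYSIBM.SYSDUMMY1"/>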
I have got an issue here; can someone please help me to resolve it?
I was trying to extract some data with DataSource 0FI_AP_6...
Then in the InfoPackage monitor I can see:
-->Requests (messages): Everything OK
-->Extraction (messages): Everything OK
-->Transfer (IDocs and TRFC): Missing messages or warnings
-->Info IDoc 2 : sent, not arrived ; IDoc ready for dispatch (ALE service)
Data Package 1 : 23752 Records arrived in BW
Data Package 2 : 15216 Records arrived in BW
Request IDoc : Application document posted
Info IDoc 1 : Application document posted
Info IDoc 3 : Application document posted
Info IDoc 4 : Application document posted
-->Processing (data packet): Everything OK
Data Package 1 ( 38672 Records ) : Everything OK
In the status menu I have a message like:
Missing data packages for PSA table
Diagnosis
Data packets are missing from the PSA table. BI processing does not return any errors. The data transport from the source system to BI was probably incorrect.
Procedure
Check the tRFC overview in the source system. You access this log using the wizard or the menu path "Environment -> Transact. RFC -> Source System".
Error handling: if the tRFC is incorrect, resolve the errors listed there. Check that the source system is connected properly to BI. In particular, check the remote user authorizations in BI.
Please suggest how to resolve this issue.
Thanks in advance for your help; a quick reply is much appreciated.
But the worst thing is that I deleted the InfoPackage in the PSA by mistake.
In the normal case, if I repeated the process, the delta load would be OK, but now the delta load keeps failing.
So, gurus:
1. How can I restart my delta loading correctly?
2. I want to modify the timestamp in the delta table, but how do I do it?
Go to transaction RSA7 in the source system. This will tell you the date/timestamp that the delta is set to. If the date was changed to a range that no longer works, then you will need to re-initialize the DataSource on the BW system side. However, the delta date may still be fine, because it may never have been changed when you first tried your load, given the connection issues.
You can create a new InfoPackage and set the update mode to Initialize DataSource with Data Transfer. This will essentially run a full load from the DataSource and then reset the delta pointer date/timestamp to when you ran it. This way you will capture all the data that you need, and anything that was already in the PSA should be overwritten.
Also note that you should delete, or set the request status to red on, the previous request that may contain bad data in the PSA.
From the original error, it seems like you are having an RFC connection issue between the DataSource and BW. Contact your Basis support and have them check the connection to make sure it is good. To ensure that your DataSource is extracting properly, you can run transaction RSA3 on it in the source system. This will confirm that the extraction of data is working properly.
I'm having an issue where I'm really not sure how to proceed, and I want to know what the best approach is to achieve this task.
We are developing an application in VB.NET 2.0 with SQL Server 2005. Users are allowed to cancel a reception based on a purchase, which may contain many received goods. During the cancellation process, the user is asked questions such as "Do you want to cancel good #1?" If yes, delete it. Then "Do you want to cancel good #2?" If no, do not delete it; and there is one other question (if a received item has been issued, the user must handle a process manually). At the end, if all goods were successfully cancelled, we have to cancel the reception itself. But sometimes, if an error occurs or certain conditions come up while the user is being asked, we want to undo every action made from the beginning and return to the original state. So I thought about transactions.
I know there are SQL transactions, and I know well enough how to use them, but I can't really use one here, since the user must perform actions that can possibly cancel the transaction partway through.
I also remembered TransactionScope from .NET 2.x and later, which can achieve something similar, and I know how to use it as well. The problem comes with TransactionScope and MSDTC. When using it, we still get an error that says:
Network access for Distributed Transaction Manager (MSDTC) has been disabled. Please enable DTC for network access in the security configuration for MSDTC using the Component Services Administrative tool.
I've tried what is described here in another Stack post, and it works great... until users restart their computers. EVERY time users restart, they must set the value back. Plus, by default, no computer has this value switched on; on a base of at least 10 computers, none had it activated. There are around 300 computers on which this program is installed, so that is surely not the right thing to consider either.
So does anyone have an idea of how I can achieve this? Is there anything else for doing transactions via code that I can use?
NOTE 1: I know some would say: first ask the user all the questions and keep the answers in memory; once done, if everything went well, go ahead with the deletes. But what if an error occurs when deleting, let's say, good #4? And how can I pass a stored procedure a dynamic list of goods to be deleted?
NOTE 2: Sorry for my English; I usually speak French.
NOTE 3: Any example in C# works too, as I know both VB and C#.
Assuming you already have a similar stored procedure to manage cancellation:
create proc CancelGood (@GoodID int)
as
SET NOCOUNT ON
SET XACT_ABORT ON

begin transaction
    update table1 set canceled = 1
     where GoodID = @GoodID

    update table2 set on_stock = on_stock + 1
     where GoodID = @GoodID
commit transaction
The VB code adds a string to some canceledGoods list if the user selects 'Oui'. I'm not familiar with VB.NET; in C# it would look like:
canceledGoods.Add(string.Format("exec dbo.CancelGood {0}", goodID));
Then, if there is at least one string in canceledGoods, build and execute the batch:
batch = "BEGIN TRANSACTION" +
" BEGIN TRY " +
string.Join (Environment.NewLine, canceledGoods.ToArray()) +
" END TRY" +
" BEGIN CATCH " +
" -- CODE TO CALL IF THERE WAS AN ERROR" +
" ROLLBACK TRANSACTION" +
" RETURN" +
" END CATCH" +
" -- CODE TO CALL AFTER SUCCESSFULL CANCELATION OF ALL GOODS" +
" COMMIT TRANSACTION"
conn.ExecuteNonQuery (batch);
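If you would rather keep the transaction in client code, a single connection with a local SqlTransaction also avoids MSDTC, because escalation to a distributed transaction only happens when more than one connection or resource enlists. A hedged sketch, assuming the usual System.Data/System.Data.SqlClient usings, the CancelGood procedure above, and a hypothetical canceledGoodIDs list collected from the user's answers:

// One connection + one local transaction: MSDTC is never involved.
using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (SqlTransaction tx = conn.BeginTransaction())
    {
        try
        {
            foreach (int goodID in canceledGoodIDs)
            {
                using (var cmd = new SqlCommand("dbo.CancelGood", conn, tx))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    cmd.Parameters.AddWithValue("@GoodID", goodID);
                    cmd.ExecuteNonQuery();
                }
            }
            tx.Commit();   // all goods cancelled: make the changes permanent
        }
        catch
        {
            tx.Rollback(); // any failure: everything returns to its original state
            throw;
        }
    }
}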