Allocation using Calculation Manager in PBCS.
Dimensions:
- Account
- Person
- Project
Allocate from:
Account001 -> No Person -> No Project = 100;
To:
Account001 -> Person A -> Project I = 20;
Account001 -> Person B -> Project II = 80;
By driver:
Driver -> Person A -> Project I = 2;
Driver -> Person B -> Project II = 8;
Are there any better ways than the code below?
I tried the standard allocation function, but the allocated data does not carry the Person dimension information.
The result was Account001 -> No Person -> Project I
instead of Account001 -> Person A -> Project I.
FIX ({Entity},/*DIM:Year*/"FY19",/*DIM:Version*/"Working",/*DIM:Customer*/"No Customer",/*DIM:Period*/"Jun",/*DIM:HSP_View*/"BaseData",/*DIM:Scenario*/"Actual")
FIX ( /*DIM:Person*/@RELATIVE("Total Person",0))
FIX ( /*DIM:Project*/@RELATIVE("Total Project", 0))
/*STARTCOMPONENT:SCRIPT*/
SET CREATENONMISSINGBLK ON;
/*ENDCOMPONENT*/
/*STARTCOMPONENT:FORMULA*/
"A534001" = "534001"->"P000"->"No Project" * 100 / 100 * "Man-hour" / "Man-hour"->"Total Person"->"Total Project";
/*Project expense for one person = Total entity Expense * manhour of that person of that project / manhour of total person of total project */
/*ENDCOMPONENT*/
ENDFIX
ENDFIX
ENDFIX
I want to know if there are any better ways to achieve this. Many thanks.
Use the Essbase function @MDALLOCATE. You need to invest some time in studying the documentation to get it running as required, as it is a complex function and all the points of view need to be set correctly.
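For illustration, here is a rough, untested sketch of what an @MDALLOCATE version could look like for the set-up in the question (member names and the POV are taken from the question; verify the argument order and the target member against the Essbase Technical Reference before use):
FIX ({Entity},/*DIM:Year*/"FY19",/*DIM:Version*/"Working",/*DIM:Customer*/"No Customer",/*DIM:Period*/"Jun",/*DIM:HSP_View*/"BaseData",/*DIM:Scenario*/"Actual")
"534001" (
@MDALLOCATE(
"534001"->"P000"->"No Project", /* amount to allocate */
2, /* number of allocation dimensions */
@RELATIVE("Total Person", 0), /* allocation range 1: Person */
@RELATIVE("Total Project", 0), /* allocation range 2: Project */
"Man-hour", /* basis: the driver member */
, /* no rounding member */
share); /* allocate in proportion to the driver */
);
ENDFIX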
I recommend you do this as well:
1. create SmartView sheets with your test set-up (tabs: allocate from, drivers, result)
2. have a CLEARBLOCK ALL script, to ensure @MDALLOCATE is really creating blocks
3. create a very small data set on which you can set your FIX
4. iterate on the calc script/business rule to get it working
The process is:
run the CLEARBLOCK ALL script (#2), send data from the SmartView sheets (#1), run the rule (#4), check the results in #1
Try to avoid "SET CREATENONMISSINGBLK ON;" as much as possible. I have seen this construction in allocations cause calculation times of 6 hours(!); with @MDALLOCATE it shortened to about 6 minutes.
I am implementing the Trial Balance (Version 2) FPM/WebDynpro app from the Fiori Apps Library,
following the App Implementation: Trial Balance guide for S/4HANA 1610.
When I launch the Trial Balance app, it says "Initialization of query 2CCFITRIALBALQ0001 failed" (see the attached error).
Please let me know how to initialize or activate the BEx query.
Regards,
Sayed
The issue was resolved by following the steps below:
1) Set a BW client: run function module RS_MANDT_UNIQUE_SET in transaction SE37. If you use only one client, fill I_MANDT with that one. If you use multiple clients, choose one of them.
2) Set user parameters RSWAD_DEV_MDVERSION = '072' and RSWAD_SKIP_JAVA = 'X' for user DDIC (set these in transaction SU01 under the Parameters tab).
3) Log on to the system with user DDIC in the client you used in step 1 and run transaction RSTCO_ADMIN to activate the technical objects needed for the engine. The parameters set in step 2 prevent unnecessary objects (related to Java-based BI tools) from being activated here.
4) If you don't look at the OLAP statistics, deactivate them: in transaction SE38, execute report SAP_RSADMIN_MAINTAIN with OBJECT = RSDDSTAT_GLOBAL_OFF and VALUE = X in insert mode. If you need the statistics, you can switch them back on by running the program with the same object but VALUE = space in update mode.
5) If you want to use OData services, run report EQ_RS_AUTO_SETUP (transaction SE38).
6) If you want to use the BW time hierarchies, go to transaction RSRHIERARCHYVIRT and mark the hierarchies you need - for this you have to wait until the job triggered in step 3 has finished successfully.
7) Call function module RSEC_GENERATE_BI_ALL.
Regards,
Rehan Sayed
So I have a script that is supposed to update a giant table (Postgres). Since the table has about 150M rows and I want to complete this as fast as possible, using multiple threads seemed like a perfect answer. However, I'm seeing something very weird.
When I use a single thread, the write time for an update is much, much lower than when I use multiple threads.
require 'sequel'
.....
DB = Sequel.connect(DB_CREDS)
queue = Queue.new
read_query = DB["
SELECT id, extra_fields
FROM objects
WHERE XYZ IS FALSE
"]
read_query.use_cursor(:rows_per_fetch => 1000).each do |row|
queue.push(row)
end
Up until this point, IMO it shouldn't matter, because we're just reading stuff from the DB and it has nothing to do with writing. From here, I've tried two approaches: single-threaded and multi-threaded.
NOTE - This is not the actual UPDATE query that I want to execute; it's just a pseudo one for demonstration purposes. The actual query is a lot longer and plays with JSON and stuff, so I can't really update the entire table using a single query.
Single-threaded
until queue.empty?
photo = queue.shift
id = photo[:id]
update_query = DB["
UPDATE objects
SET XYZ = TRUE
WHERE id = #{id}
"]
result = update_query.update
end
If I execute this, I see in my DB logs that each update query takes less than 0.01 seconds:
I, [2016-08-15T10:45:48.095324 #54495] INFO -- : (0.001441s) UPDATE
objects SET XYZ = TRUE WHERE id = 84395179
I, [2016-08-15T10:45:48.103818 #54495] INFO -- : (0.008331s) UPDATE
objects SET XYZ = TRUE WHERE id = 84395181
I, [2016-08-15T10:45:48.106741 #54495] INFO -- : (0.002743s) UPDATE
objects SET XYZ = TRUE WHERE id = 84395182
Multi-threaded
MAX_THREADS = 5
num_threads = 0
all_threads = []
until queue.empty?
if num_threads < MAX_THREADS
photo = queue.shift
num_threads += 1
all_threads << Thread.new {
id = photo[:id]
update_query = DB["
UPDATE objects
SET XYZ = TRUE
WHERE id = #{id}
"]
result = update_query.update
num_threads -= 1
Thread.exit
}
end
end
all_threads.each do |thread|
thread.join
end
Now, in theory it should be faster, right? But each update takes about 0.5 seconds. I'm surprised that this is the case:
I, [2016-08-15T11:02:10.992156 #54583] INFO -- : (0.414288s)
UPDATE objects
SET XYZ = TRUE
WHERE id = 119498834
I, [2016-08-15T11:02:11.097004 #54583] INFO -- : (0.622775s)
UPDATE objects
SET XYZ = TRUE
WHERE id = 119498641
I, [2016-08-15T11:02:11.097074 #54583] INFO -- : (0.415521s)
UPDATE objects
SET XYZ = TRUE
WHERE id = 119498826
Any ideas on:
Why is this happening?
How can I increase the update speed for the multi-threaded approach?
Have you configured Sequel so that it has a connection pool of 5 connections?
Have you considered doing multiple updates per call via an IN clause?
If you haven't done #1, you have N threads fighting over fewer than N connections, which equates to resource starvation, a classic concurrency issue.
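For reference, a minimal sketch of #1 (Sequel's :max_connections option sets the pool size; tying it to MAX_THREADS is an assumption about your worker count):
require 'sequel'
# Size the connection pool to the number of worker threads so that
# no thread has to queue for a free connection.
DB = Sequel.connect(DB_CREDS, :max_connections => MAX_THREADS)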
Your example can be reduced to: DB[:objects].where(:XYZ=>false).update(:XYZ=>true)
I'm guessing your actual need is not that simple, but the same approach may still work: instead of issuing a query per row, use a single query to update all related rows.
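For example, a sketch of that approach, assuming the rows can be grouped by the update they need (rows_needing_update is a hypothetical stand-in for one such group):
# A single query with an IN clause updates the whole group at once.
ids = rows_needing_update.map { |row| row[:id] }
DB[:objects].where(:id => ids).update(:XYZ => true)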
I went through something similar on a project ("import all history from a legacy database into a new one with a completely different structure and organization"). Unless you managed to shoot yourself in the foot somewhere else, you have 2 basic bottlenecks to look for:
the database's disk IO
the ruby process' CPU
Some suggestions:
database IO: use DB transactions and update 1000 records per transaction (you can tweak the exact number, but 1000 is usually good; see the sketch after this list). A huge DB table usually means a lot of indexes too, and every couple of update actions will trigger REINDEX and AUTOVACUUM actions within the DB, which cause a significant drop in update speed. A transaction basically allows you to push 1000 updated records without REINDEX and AUTOVACUUM and then perform both actions once; the result is MUCH faster (something like an order of magnitude).
database IO: change indexes. Drop every index you can live without during the update process; ideally you will have only 1 very streamlined index that allows unique row lookups for update purposes.
ruby CPU: unless you are using JRuby or Rubinius, or are REALLY paying the price of network latency to your DB, threads will bring you no big benefit; use fork/processes instead (see GIL, and the fork sketch after this list). You did a great job choosing Sequel over AR for this.
ruby CPU: if you decide to go threads + JRuby with this, don't forget to try and plug in JProfiler; it's amazing at tracing bottlenecks in Java, and the author of Sidekiq swears it is amazing for JRuby too. Unfortunately, as far as I know, there is no equivalent of JProfiler for C Ruby (there are profiling tools, but nowhere near as useful).
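A sketch of the transaction batching mentioned above (the UPDATE is the placeholder query from the question; the slice size of 1000 is the suggested starting point):
# Group 1000 single-row updates per transaction to amortize
# index maintenance across the whole batch.
rows.each_slice(1000) do |batch|
  DB.transaction do
    batch.each do |row|
      DB[:objects].where(:id => row[:id]).update(:XYZ => true)
    end
  end
end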
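And a sketch of the fork/processes route (the worker count of 4 is arbitrary; note that each child must open its own connection, because DB handles must not be shared across fork):
# Split the ids across child processes; each child connects on its own.
slice_size = (ids.size / 4.0).ceil
pids = ids.each_slice(slice_size).map do |slice|
  fork do
    db = Sequel.connect(DB_CREDS)
    slice.each { |id| db[:objects].where(:id => id).update(:XYZ => true) }
  end
end
pids.each { |pid| Process.wait(pid) }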
After you implement these suggestions, you know you did all you could when:
all of the CPUs on the Ruby box are at 100% load
the hard disk IO of the DB is at 100% throughput
Find this sweet spot and don't add additional Ruby update threads/processes after that (or add more hardware), and that's that.
PS check out https://github.com/ruby-concurrency/concurrent-ruby - it's a great parallelization lib.
I am using the GetGroups() function to read all of the groups of a user in Active Directory.
I'm not sure if I'm doing something wrong, but it is very, very slow: each time it reaches this point, it takes several seconds. Accessing the rest of Active Directory using the integrated functions of AccountManagement executes instantly.
Here's the code:
For y As Integer = 0 To AccountCount - 1
Dim UserGroupArray As PrincipalSearchResult(Of Principal) = UserResult(y).GetGroups()
UserInfoGroup(y) = New String(UserGroupArray.Count - 1) {}
For i As Integer = 0 To UserGroupArray.Count - 1
UserInfoGroup(y)(i) = UserGroupArray(i).ToString()
Next
Next
Later on...:
AccountChecker_Listview.Groups.Add(New ListViewGroup(Items(y, 0), HorizontalAlignment.Left))
For i As Integer = 0 To UserInfoGroup(y).Count - 1
AccountChecker_Listview.Items.Add(UserInfoGroup(y)(i)).Group = AccountChecker_Listview.Groups(y)
Next
Items(,) contains my normal Active Directory data that I display; Items(y, 0) contains the username.
y iterates over the user accounts in AD. I also have some other code for the other information in this loop, but it's not the issue here.
Does anyone know how to make this go faster, or is there another solution?
I'd recommend trying to find out where the time is spent. One option is to use a profiler, either the one built into Visual Studio or a third-party profiler like Redgate's ANTS Profiler or the YourKit .NET Profiler.
Another is to measure the time taken using the System.Diagnostics.Stopwatch class and use the results to guide your optimization efforts. For example, time the function that retrieves data from Active Directory and, separately, the code that populates the view, to narrow down where the bottleneck is.
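For example, a minimal sketch (the commented lines are stand-ins for the code from the question):
Dim sw As New System.Diagnostics.Stopwatch()
sw.Start()
' ... the GetGroups() loop from above ...
sw.Stop()
System.Diagnostics.Debug.WriteLine("AD lookup: " & sw.ElapsedMilliseconds & " ms")
sw.Restart()
' ... the ListView population loop ...
sw.Stop()
System.Diagnostics.Debug.WriteLine("ListView fill: " & sw.ElapsedMilliseconds & " ms")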
If the bottleneck is in the Active Directory lookup, you may want to consider running the operation asynchronously so that the window is not blocked and populates as new data is retrieved. If it's in the listview, you may want to consider inserting the data as a batch operation.
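For the listview case, a cheap first step is to suspend redrawing while you insert, using the standard ListView.BeginUpdate/EndUpdate calls (a sketch based on the loop from the question):
AccountChecker_Listview.BeginUpdate()
For i As Integer = 0 To UserInfoGroup(y).Count - 1
    AccountChecker_Listview.Items.Add(UserInfoGroup(y)(i)).Group = AccountChecker_Listview.Groups(y)
Next
AccountChecker_Listview.EndUpdate()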
As a follow-up to a previous question of mine, I want to find 30 of the paths that exist between two given nodes within a depth of 4. Something to the effect of this:
start startnode = node(1), endnode = node(1000)
match startnode-[r:rel_Type*1..4]->endnode
return r
limit 30;
My database contains ~50k nodes and 2M relationships.
Expectedly, the computation time for this query is very, very large; I even ended up with the following GC message in the messages.log file: GC Monitor: Application threads blocked for an additional 14813ms [total block time: 182.589s]. This message keeps occurring and blocks all threads for an indefinite period of time. Therefore, I am looking for a way to lower the computational strain of this query on the server by optimizing it.
Is there any extension I could use to help optimize this query?
Give this one a try:
https://github.com/wfreeman/findpaths
You can query the extension like so:
.../findpathslen/1/1000/4/30
And it will give you a JSON response with the paths found. Hopefully that helps you.
The meat of it is here, using the built-in graph algorithm to find paths of a certain length:
@GET
@Path("/findpathslen/{id1}/{id2}/{len}/{count}")
@Produces(Array("application/json"))
def fof(@PathParam("id1") id1:Long, @PathParam("id2") id2:Long, @PathParam("len") len:Int, @PathParam("count") count:Int, @Context db:GraphDatabaseService) = {
val node1 = db.getNodeById(id1)
val node2 = db.getNodeById(id2)
val pathFinder = GraphAlgoFactory.pathsWithLength(Traversal.pathExpanderForAllTypes(Direction.OUTGOING), len)
val pathIterator = pathFinder.findAllPaths(node1,node2).asScala
val jsonMap = pathIterator.take(count).map(p => obj(p))
Response.ok(compact(render(decompose(jsonMap))), MediaType.APPLICATION_JSON).build()
}
I have a common_test suite that attempts to create an ETS table for use in all suites and all test cases. It looks like so:
-module(an_example_SUITE).
-include_lib("common_test/include/ct.hrl").
-compile(export_all).
all() -> [ets_tests].
init_per_suite(Config) ->
TabId = ets:new(conns, [set]),
ets:insert(TabId, {foo, 2131}),
[{table,TabId} | Config].
end_per_suite(Config) ->
ets:delete(?config(table, Config)).
ets_tests(Config) ->
TabId = ?config(table, Config),
[{foo, 2131}] = ets:lookup(TabId, foo).
The ets_tests function fails with a badarg. Creating/destroying the ETS table per test case, on the other hand, looks like so:
-module(an_example_SUITE).
-include_lib("common_test/include/ct.hrl").
-compile(export_all).
all() -> [ets_tests].
init_per_testcase(_TestCase, Config) ->
TabId = ets:new(conns, [set]),
ets:insert(TabId, {foo, 2131}),
[{table,TabId} | Config].
end_per_testcase(_TestCase, Config) ->
ets:delete(?config(table, Config)).
ets_tests(Config) ->
TabId = ?config(table, Config),
[{foo, 2131}] = ets:lookup(TabId, foo).
Running this, I find that it functions beautifully.
I'm confused by this behavior and unable to determine, from the docs, why this would happen. Questions:
Why does this happen?
How can I have an ETS table that is shared between per-suite and per-testcase functions?
As was already mentioned in the answer by Pascal, and as discussed in the User Guide, only init_per_testcase and end_per_testcase run in the same process as the test case. Since ETS tables are bound to an owner process, the only way to have an ETS table persist for a whole suite or group is to give it away or to define a heir process.
You can easily spawn a process in your init_per_suite or init_per_group functions, set it as heir for the ETS table, and pass its pid along in the config.
To clean up, all you need to do is kill this process in your end_per_suite or end_per_group functions.
-module(an_example_SUITE).
-include_lib("common_test/include/ct.hrl").
-compile(export_all).
all() -> [ets_tests].
ets_owner() ->
receive
stop -> exit(normal);
Any -> ets_owner()
end.
init_per_suite(Config) ->
Pid = spawn(fun ets_owner/0),
TabId = ets:new(conns, [set, protected, {heir, Pid, []}]),
ets:insert(TabId, {foo, 2131}),
[{table,TabId},{table_owner, Pid} | Config].
end_per_suite(Config) ->
?config(table_owner, Config) ! stop.
ets_tests(Config) ->
TabId = ?config(table, Config),
[{foo, 2131}] = ets:lookup(TabId, foo).
You also need to make sure you can still access your table from the test case process, by making it either protected or public.
An ETS table is attached to a process and destroyed as soon as the process ends, unless you use the give_away/3 function (which I fear is not feasible in this case).
As stated in the common_test doc, each test case, as well as init_per_suite and end_per_suite, runs in a separate process, so the ETS table is destroyed as soon as you leave the init_per_suite function.
From the common_test doc:
init_per_suite and end_per_suite will execute on dedicated Erlang
processes, just like the test cases do. The result of these functions
is however not included in the test run statistics of successful,
failed and skipped cases.
From the ets doc:
The default owner is the process that created the table. Table
ownership can be transferred at process termination by using the heir
option or explicitly by calling give_away/3.