Interacting with Petrel during the interpretation process - Ocean

I am using Schlumberger Petrel and the Ocean framework to work with seismic interpretation.
During the process of interpreting (and autotracking), Petrel stores interpretation values (samples) in a collection on an interpretation object (e.g. HorizonInterpretation3D).
I would like to store some additional information in parallel with these samples. I have found suitable objects to store the extra info in, so that isn't an issue. However, I have yet to find a way to catch/intercept the moment the samples are produced/stored, so that I know when to add the extra info.
I don't want to post-process the samples; I'd like to store the extra info in sync with the interpretation sample collection (the real sample data). I have yet to find any suitable event to listen to.
Anybody done anything similar with Petrel that can help?

Using FastRTPS for a Command and Control Application

I'm trying to understand how to use the Fast-RTPS libraries to implement a Command and Control application. The requirement is to allow multiple writers to direct command messages to a single reader that is tasked with controlling a piece of equipment. In this application there can be one or more identical pieces of equipment being controlled, each using a unique instance of the same reader code. I already understand that I should set the reader's RELIABILITY_QOS to RELIABLE and the OWNERSHIP_QOS to EXCLUSIVE_OWNERSHIP.

The part that I am still thinking about is how to configure my application so that when a writer sends a command to the reader controlling the piece of equipment, other readers that might also receive the message will not act on it. I would like to do this at the Fast-RTPS level; that is, configure the application so that only the reader controlling the equipment receives the command message, versus allowing multiple readers to receive the control message while programming these readers so that only the controlling reader will act on it.

My approach so far involves assigning all controlling writers and only the controlling reader to a partition (see Advanced Functionalities in the Fast-RTPS Users Manual). There will be one of these partitions for each piece of equipment. Is this the proper way to implement my requirements, or are there other, better ways?
Thank you.
Since this question was asked under the data-distribution-service tag, this answer references the OMG DDS specification, currently at version 1.4.
Although you could use Partitions to achieve the selective delivery that you are looking for, this would probably not be the recommended approach for your use case. The main disadvantage that comes to mind is a situation where a single writer has to send control messages to multiple pieces of equipment. With your current approach, you need a separate Partition for each piece of equipment, and you additionally need each message to be written into the right Partition. This can only be achieved by attaching a single Partition to each DataWriter, which would consequently require a single DataWriter per piece of equipment. Depending on your set-up, you may end up with many DataWriters where you would prefer to have a few, from the perspective of resource usage as well as code complexity.
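To make the trade-off concrete, here is a rough sketch of the partition-per-equipment set-up from the question, written against the current Fast DDS (formerly Fast-RTPS) C++ API. The participant, topic, and partition names are assumptions for the sketch, and the old Fast-RTPS 1.x API spelled these QoS settings differently:

    #include <fastdds/dds/domain/DomainParticipant.hpp>
    #include <fastdds/dds/publisher/Publisher.hpp>
    #include <fastdds/dds/publisher/DataWriter.hpp>
    #include <fastdds/dds/subscriber/Subscriber.hpp>
    #include <fastdds/dds/subscriber/DataReader.hpp>

    using namespace eprosima::fastdds::dds;

    // participant and command_topic are assumed to have been created already.
    // Writer side: one DataWriter per equipment partition.
    PublisherQos pub_qos = PUBLISHER_QOS_DEFAULT;
    pub_qos.partition().push_back("equipment_1");   // hypothetical partition name
    Publisher* pub = participant->create_publisher(pub_qos);
    DataWriter* writer = pub->create_datawriter(command_topic, DATAWRITER_QOS_DEFAULT);

    // Reader side: only the controlling reader joins the same partition.
    SubscriberQos sub_qos = SUBSCRIBER_QOS_DEFAULT;
    sub_qos.partition().push_back("equipment_1");
    Subscriber* sub = participant->create_subscriber(sub_qos);
    DataReader* reader = sub->create_datareader(command_topic, DATAREADER_QOS_DEFAULT);

Note how the partition name is baked into the Publisher's QoS: writing to a second piece of equipment means creating a second Publisher/DataWriter pair, which is exactly the proliferation described above.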
The mechanism intended for this kind of use case is the so-called ContentFilteredTopic, found in section 2.2.2.3.3 (ContentFilteredTopic Class) of the specification. For your convenience, I quote some of it:
ContentFilteredTopic describes a more sophisticated subscription that indicates the subscriber does not want to necessarily see all values of each instance published under the Topic. Rather, it wants to see only the values whose contents satisfy certain criteria. This class therefore can be used to request content-based subscriptions. The selection of the content is done using the filter_expression with parameters expression_parameters.
Using ContentFilteredTopics, each DataReader would use a filter_expression that matches an identifier of the device it is associated with. At the application level on the sender side, DataWriters would not be aware of that; they would just be writing their control messages. The middleware would take care of delivery to those (and only those) DataReaders for which the filter expression matches the data.
This is a core feature of many DDS-based systems. Although the DDS specification does not require it, in many cases the implementation is smart enough to do filtering on the DataWriter side, before the message goes onto the wire, in cases where that makes sense.
I do not know how much of this is actually implemented by Fast-RTPS.
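For illustration only, here is roughly what the reader-side set-up could look like in a DDS C++ API that provides ContentFilteredTopic (recent Fast DDS versions do). The topic, the device_id field, and the parameter value are assumptions for the sketch:

    #include <fastdds/dds/domain/DomainParticipant.hpp>
    #include <fastdds/dds/topic/ContentFilteredTopic.hpp>
    #include <fastdds/dds/subscriber/DataReader.hpp>

    using namespace eprosima::fastdds::dds;

    // participant, command_topic and subscriber are assumed to exist already.
    // The reader controlling "pump_7" only ever sees commands addressed to it.
    ContentFilteredTopic* filtered = participant->create_contentfilteredtopic(
        "CommandsForThisDevice",      // name of the filtered topic
        command_topic,                // the related (unfiltered) Topic
        "device_id = %0",             // filter on an assumed device_id field
        {"'pump_7'"});                // expression parameter for this device

    DataReader* reader = subscriber->create_datareader(filtered, DATAREADER_QOS_DEFAULT);

With this approach the writers stay completely generic; adding a new piece of equipment only means creating one more filtered reader.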

2 separate systems, how to make them communicate

I have a DDS system (OMG DDS) communicating with a ROS node over radio. The information being received is a struct with velocity, state, longitude, latitude, etc. This works well, and my DDS client has no problem printing the information transmitted from the node over the radio. Now, I have a GUI application written in Qt, which creates models and puts them on a predefined map. These models have defined set-information functions which, when triggered, update the map to give a smooth visualisation of the information received.
Now here is the problem: I have no idea how to make the GUI application communicate with my DDS client. I would rather not intertwine the two, since I've had enough trouble just making the DDS client and sender work and compile with ROS. I've thought about a separate queue system, which could be included in both the DDS client and the GUI application, but I don't know if this would work. I've also thought about writing to an SQL database, pushing new data to it, and pulling new data in my GUI application when it is detected; some sort of on_data_available function which triggers the pull function. I've heard the last one is a bad idea, since I'm working with only one set of data which is continuously updated (the model represents one USV), and a database is then considered overkill, but I would love to get input here.
I'm sorry if this isn't sufficient information; I can't really provide code examples for various reasons. If anyone has any input, shout out, I would love to hear it. And if I'm not being specific enough, I'll try to rewrite this as best I can.
I've no idea how to make the GUI application communicate with my DDS-client
Your question is not specific to DDS or your GUI application -- you are essentially asking for a simple and convenient inter-process communication (IPC) mechanism, and there are loads of different options.
Given that you already have your data as well as the associated type definitions available in DDS, I suspect that using DDS for this task would still be the easiest way to go. You could set it up to communicate over shared memory or local loopback. DDS will do all discovery and communication under the hood, including (cross-language) de/serialization. If you choose a different mechanism, you might end up doing more work yourself.
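As a hedged sketch of that DDS-based route: a DataReaderListener can hand incoming samples over to the Qt GUI thread with a queued signal, so GUI code never runs on middleware threads. VehicleState, its accessors, and MapModel::updateUsv are hypothetical names invented for the sketch; the listener API follows Fast DDS:

    // A minimal sketch, not a drop-in implementation: bridge DDS callbacks
    // onto the Qt GUI thread. VehicleState is a hypothetical IDL-generated type.
    #include <QObject>
    #include <fastdds/dds/subscriber/DataReader.hpp>
    #include <fastdds/dds/subscriber/DataReaderListener.hpp>
    #include <fastdds/dds/subscriber/SampleInfo.hpp>

    class DdsBridge : public QObject,
                      public eprosima::fastdds::dds::DataReaderListener
    {
        Q_OBJECT
    signals:
        void stateReceived(double latitude, double longitude, double velocity);

    public:
        // Called by the middleware on a DDS thread, not the GUI thread.
        void on_data_available(eprosima::fastdds::dds::DataReader* reader) override
        {
            VehicleState sample;                      // hypothetical message type
            eprosima::fastdds::dds::SampleInfo info;
            // Note: the ReturnCode_t namespace varies between Fast DDS versions.
            while (reader->take_next_sample(&sample, &info) == ReturnCode_t::RETCODE_OK)
            {
                if (info.valid_data)
                    emit stateReceived(sample.latitude(), sample.longitude(), sample.velocity());
            }
        }
    };

    // In the GUI set-up code, a queued connection moves the update onto the GUI thread:
    // QObject::connect(&bridge, &DdsBridge::stateReceived,
    //                  mapModel, &MapModel::updateUsv, Qt::QueuedConnection);

The queued connection is the important part: Qt serialises the signal arguments and delivers them in the GUI thread's event loop, which gives you the thread-safe hand-off you were considering building a queue for.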
As an alternative, some DDS implementations (commercially) support native integration with SQL databases. Those will introspect the DDS data definitions and create all required tables for you. Updates from DDS are automatically forwarded to the database, and vice-versa. You could feed your GUI off of that database.

What is best way of changing the ABAP standard code

I have spent almost 4 months learning/working in SAP. I've done several reports and enhancements in this time, but recently I began to work on a requirement related to Mobile Data Entry or RF, which basically consists of adding the EAN and some other data to dynpro 2502.
I made a copy of dynpro 2502 in program SAPLLMOB into SAPLXLRF 9502, hooked up the user exit MWMRF502 and programmed the basic functionality, but it is not working as I expected: the exit is very limited, it only lets me import and export a small set of data, and it is difficult to make it behave exactly like the standard.
I've been searching all over the internet; a lot of people write their own implementations, and others just change the standard. I don't know how to write my own implementation because I don't understand the whole process behind it, and changing the standard code would be better in terms of performance and development time, but as I said, that means modifying standard code, which is something I would like to do only if there is no other option.
But the question is: is it OK to change the standard? How often is the standard code changed in SAP implementations? What would be the better alternative?
Thanks in advance.
You are asking the right sort of questions and it is good that you are not just plowing ahead without thinking about the consequences of what you are doing. Keep researching!
As far as changing the SAP standard goes, you generally do not want to copy an object in order to change it. For screens, SAP quite often provides a user exit with a sub-screen that can be modified by the customer. For Web Dynpro you can use enhancement points and/or BAdIs to extend the functionality.
Try to look for one of the following:
an SAP BAdI in the area that you want to change (transaction SE18),
a user exit allowing you to change the necessary screen(s) (transaction SMOD),
an explicit enhancement point within the functionality,
one of the implicit enhancement points in the functionality.
There is a lot of documentation on sdn.sap.com as well as in the SAP help regarding the topics above.
If none of these are available, you may have no other choice but to modify (repair) the SAP standard objects. In order to be able to change the SAP standard, you need to register the object(s) that you have to change on SAP OSS and get a repair key that the system needs before it allows you to make changes. Always ensure that the SAP Modification Assistant is switched on when making changes; this will make your life a lot easier when you patch or upgrade your system.
If at all possible try to find an experienced ABAP programmer to help you with this.
Also see this related question regarding changing SAP standard code.
Edit: Thomas Weiss on SDN has a helpful blog series on the enhancement and switch framework.
Always make sure that there's absolutely no other way to implement the functionality you need. If you're sure about that, then either write your own implementation from scratch, or simply change SAP's code. Just don't copy SAP's programs to the customer namespace, because I can guarantee you that that'll turn into a maintenance nightmare. You'll have to decide yourself whether the size of the change is worth the time building your own implementation, or changing SAP's.
If you decide to change SAP's code, keep in mind that all changes will pop up for review when the system is upgraded, which will take time to evaluate and adjust to the new SAP code.
Your options are, from most to least desirable:
Check the documentation of the application on help.sap.com for possible extensibility scenarios. There are many ways in which SAP intends for you to customize their applications through various kinds of event architectures. Unfortunately, any attempts by the various departments at SAP to agree on one event architecture and then stick to it have failed, so you have user exits, BTEs, FQEVENTS, BAdIs, explicit enhancement spots and many more. If you want to know what's used by the application you need to change, RTM.
Use an implicit enhancement spot. Enhancements are a great way to modify standard software in ways SAP did not anticipate, because they are easy to disable and usually pretty stable during upgrades (use the transaction SPAU_ENH after an upgrade to confirm that your enhancements still make sense in the new version of the program). You will find implicit enhancement spots at the beginning and end of every include and every kind of subroutine, which allows you to inject arbitrary ABAP code in these locations.
But sometimes there just is no implicit enhancement spot where you need it to be. In that case you can copy the whole program into the customer namespace and modify it. This gives you the freedom to do whatever you want with the program while still retaining the original program as a possible fall-back. It is usually a good idea to use as many components from the original program as possible, by including its includes or calling FORMs from the original program via PERFORM formname IN PROGRAM originalprogram. The main problem with this method is that after a new release, your program might no longer behave as expected. You will have to look at the new version of the program and see if there are any changes you need to port to your version, and there is nothing in the SAP standard that assists you with this maintenance task. So you are responsible for keeping a list of all your copies of standard programs.
Just modify the program directly. This is really a last-resort option for programs that are too complex to copy into the customer namespace. The problem is that SAP will no longer offer you support for that program: if you post a ticket about it on launchpad.support.sap.com and they find out you modified the program, they will assume it's your own fault and close the ticket. Fortunately, when you upgrade your system, the transaction SPAU will help you merge your changes with the new versions of the modified SAP programs.

Using flow chart or diagram for routines across programs

I have a busy set of routines to validate or download the current client application. It starts with a Windows desktop shortcut that invokes a .WSF file. This calls on several .VBS files, an .INI for settings, and potentially a .BAT file. Some of these script documents have internal functions. The final phase opens a Microsoft Access database, which entails an AutoExec macro that kicks off some VBA, including a form with a load routine of its own in VBA.
None of this detail is specifically important (so please don't add a VBA tag, OR criticize my precious complexity). The point is I have a variety of tools and containers and they may be functionally nested.
I need better techniques for parsing that in a flow chart. Currently I rely on any or all of the following:
a distinct color
a big box that encloses a routine
the classic 'transfer of control' symbol
perhaps an explanatory call-out
Shouldn't I increase my flow charting vocabulary? Tutorials explain the square, the diamond, the circle, and just about nothing more. Surely flow charting can help me deal with these sorts of things:
The plethora of script types lets me answer different needs, and I want to indicate tool/language.
A sub-routine could result in an abort of the overall task, or an error, and I want to show the handling of that by (or consequences for) higher-level "enclosing" routines.
I want to distinguish "internal" sub-routines from ones in a different script file.
Concurrent script processing could become critical, so I want to note that.
The .INI file lets me provide all routines with persistent values. How is that charted?
A function may have arguments and a return value/reference ... I don't know how to effectively depict even that.
Please provide guidance or point me to an extra-helpful resource. If you recommend an analysis tool set (like UML, which I haven't gotten the hang of yet), please also tell me where I can find a good introduction.
I am not interested in software. Please consider this a white board exercise.
Discussion of the question suggests flowcharts are not useful or accurate.
Accuracy depends on how the flow charts are constructed. If they are constructed manually, they are like any other manually built document and will be out of date almost instantly; that makes hand-constructed flowcharts really useless, which is why people tend to like looking at the code.
[The rest of this response violates the OP's requirement of "not interested in software (to produce flowcharts)", because I think that's the only way to get them in some kind of useful form.]
If the flowcharts are derived from the code by an appropriate language-accurate analysis tool, they will be accurate. See examples at http://www.semanticdesigns.com/Products/DMS/FlowAnalysis.html These examples are semantically precise, although the pages there don't provide the exact semantics; that's just a documentation detail.
It is hard to find such tools :-} especially if you want flowcharts that span multiple languages and multiple "execution paradigms" (OP wants his INI files included; they are some kind of implied assignment statements, and I'm pretty sure he'd want to model SQL actions, which don't flowchart usefully because they tend to be pure computation over tables).
It is also unclear that such flowcharts are useful. The examples at the page I provided should be semi-convincing; if you take into account all the microscopic details (e.g., the possibility of an ABORT control-flow arc emanating from every subroutine call, because each call may throw an exception), these diagrams get horrendously big, fast. The fact that the diagrams are space-consuming (boxes, diamonds, lines, lots of whitespace) aggravates this pretty badly. Once they get big, you literally get lost in space following the arcs. Again, a good reason for people to avoid flowcharts for entire systems. (The other reason people like text languages is that they can in fact be pretty dense; you can get a lot on a page with a succinct language, and wait'll you see APL :)
They might be of marginal help in individual functions, if the function has complex logic.
I think it unlikely that you are going to get language-accurate analyzers that produce flowcharts for all the languages you want, or that such analyzers can compose their flowcharts nicely (you want JavaScript invoking C# running SQL ...?).
What you might hope for is a compromise solution: display the code with various hyperlinks to the other artifacts referenced. You still need the ability to produce such hyperlinked code (see http://www.semanticdesigns.com/Products/Formatters/JavaBrowser.html for one way this might work), but you also need hyperlinks across the language boundaries.
I know of no tools that presently do that. And I doubt you have the interest or willpower to build such tools on your own.

Objective-C logging best practices

I am writing my first Objective-C daemon-type process that works in the background. Everything it does needs to be logged properly.
I am fairly new to Apple platforms, so I am not sure: what is the most common and/or best way to log activity? Does everyone simply log to a text file in their own special format, or use some sort of system call?
You should look at the Apple System Logger. ASL writes to the system log database (making it easy to query the log from Console.app or from within your own app) and additionally to one or more flat files (if you choose). Peter Hosey's introduction to the ASL is the best I'm aware of. ASL is a C-level API, but it's relatively easy to wrap in Objective-C if you'd like. I would recommend also taking a look at Google's Toolbox for Mac. Among many other goodies, it contains a GTMLogger facility that includes ASL support. I've ditched my home-grown ASL wrapper in favor of the GTMLogger.
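To make the shape of the C-level API concrete, here is a minimal sketch of ASL usage from a daemon; the ident/facility strings and the log-file path are placeholders:

    #include <asl.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        // Open a per-process ASL client handle; ident and facility are placeholders.
        aslclient client = asl_open("com.example.mydaemon", "daemon", ASL_OPT_STDERR);

        // Optionally mirror messages to a flat file in addition to the log database.
        int fd = open("/tmp/mydaemon.log", O_WRONLY | O_APPEND | O_CREAT, 0644);
        if (fd >= 0)
            asl_add_log_file(client, fd);

        // This ends up in the system log database, queryable from Console.app.
        asl_log(client, NULL, ASL_LEVEL_NOTICE, "daemon started, pid=%d", (int)getpid());

        asl_close(client);
        return 0;
    }

A wrapper like GTMLogger essentially packages this kind of set-up (plus levels and formatting) behind an Objective-C interface.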
Another alternative you might want to try is https://github.com/CocoaLumberjack. Lumberjack is quite flexible and will allow you to log to various destinations, configure log levels, etc. It's very log4j / log4net like, if you are familiar with those.
It also reports that it is faster than ASL... I don't know how it compares to GTMLogger with respect to functionality or speed, but the documentation seems to be a bit more approachable.