Problems accessing reservoir engineering data using the Ocean API

I am building a plug-in where I need to access Reservoir Engineering domain data using the Ocean API. I can access development strategies via SimulationRoot, but I have not been able to get the type of a development strategy, i.e. whether it is a history strategy or a prediction strategy.
Is there any way to get this information?

Unfortunately there is no Ocean API to access the type of a development strategy. I will add your use case to our requirements system.
Regards,
Carole

Having faced multiple limitations in the Petrel RE API, and having had to go through the EclipseKeywordEditor a lot to achieve my goals, I have to say this is an easy one. The DevelopmentStrategy.StrategyType property is there to help.
The following code works for me on Petrel 2012.1:
// Get the simulation root for the active Petrel project
SimulationRoot sroot = SimulationRoot.Get(PetrelProject.PrimaryProject);
DevelopmentStrategyCollection dsCol = DevelopmentStrategyCollection.NullObject;
if (sroot.HasDevelopmentStrategyCollection)
{
    dsCol = sroot.DevelopmentStrategyCollection;
    foreach (DevelopmentStrategy strat in dsCol.DevelopmentStrategies)
    {
        // StrategyType reports whether this is a history or a prediction strategy
        PetrelLogger.InfoOutputWindow(string.Format("{0} is a {1} strat", strat.Name, strat.StrategyType));
    }
}
The DevGuide doesn't list it and IntelliSense doesn't show it, yet if you bring up the Object Browser you can see it's actually there (grayed out, in fact).

Related

Access control of objects in Julia Web Platform

We are creating an online platform and exposing a Julia API via an embedded code editor. The user can access the API and run some analysis on our web app. I have a question related to controlling access to the API and objects.
The API right now contains a database handle and other objects that are exposed to the user and could be used to attack the internal system.
Below is the current architecture:
UserProgram.jl
function doanalysis()
    data = getdata()
    # some analysis on data
end
InternalProgram.jl
const client = MongoClient()
const collection = MongoCollection(client, "dbname", "collectionName")
function getdata()
    data = # some function to get data from collection
    return data
end
# after parsing the user program
doanalysis()
To run the user's analysis, we pass the user program as a command-line argument (using the ArgParse module) and run the internal program as follows:
$ julia InternalProgram.jl --file Userprogram.jl
With this architecture, the user potentially gets access to client and collection and can modify the internal databases.
Is there a better way to solve this problem without exposing those objects?
I hope someone has an answer to this.
You will be exposing yourself to multiple types of vulnerabilities; as a general rule, executing user-supplied code is a VERY BAD IDEA.
1/ As you said, you'll potentially allow users to execute arbitrary code against your database.
2/ Your users will have access to all the power of Julia to do things on your server (for example, download files they can later execute, or access other servers and services on the machine [MySQL, email, etc.]). Depending on the level of access of the Julia process, think unauthorized access to your file system, installed key loggers, spam servers, etc.
3/ They will be able to use Julia packages and get you into a lot of trouble; for example, they could add/use the Requests.jl package and execute DoS attacks against other servers.
If you really want to go this way, I recommend that you:
A/ Set proper (minimal) permissions for the MongoDB user configured in the app (e.g. http://blog.mlab.com/2016/07/mongodb-tips-tricks-collection-level-access-control/)
B/ Execute each user's code in a separate sandbox/container that exposes only the minimum necessary software
C/ Run your containers on a managed platform where tooling exists (firewalls) to monitor incoming and outgoing traffic (for example, to block spam or DoS attacks)
To achieve B/ and C/, my recommendation is to use JuliaBox. I haven't used it myself, but it seems to be exactly what you need: https://github.com/JuliaCloud/JuliaBox
Once you get that running, you can also use https://github.com/JuliaWeb/JuliaWebAPI.jl
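Even before you get to containers, you can stop the accidental leakage of client and collection by evaluating the user's file inside a fresh module that whitelists a single entry point. Below is a minimal sketch, assuming a recent Julia 1.x (Module, Core.eval, and Base.include_string are standard; the module and function names are illustrative). Note this only guards against accidental access: a determined user can still escape through Core/Base, which is exactly why B/ and C/ are needed.
module Internal
    # The Mongo handles live here and are never exported to user code.
    getdata() = "data fetched from the collection"   # stand-in for the real query
end

function run_user_program(path::AbstractString)
    sandbox = Module(:UserSandbox)        # fresh module, so Main's globals are out of scope
    # Whitelist exactly one function; nothing else from Internal is visible.
    Core.eval(sandbox, :(getdata() = $(Internal.getdata)()))
    Base.include_string(sandbox, read(path, String))
    Core.eval(sandbox, :(doanalysis()))
end

run_user_program("UserProgram.jl")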

Integrating System Center Operation Manager [SCOM] with external monitoring tool [Application]

WHAT AM I TRYING TO ACHIEVE
Synopsis:
I am trying to create an API or connector for an in-house monitoring tool that integrates with SCOM [System Center Operations Manager 2012].
Our tool has a RESTful page with all the necessary endpoints, and we would simply like SCOM to read the status of those endpoints.
Thus far, according to the SCOM documentation and my understanding, I need to build a management pack, which involves the authoring tools, Visual Studio, etc.
While I am still going through the documentation on this, has anyone tackled something like this before? Some guidance on how to approach it would be appreciated.
##### UPDATE [04/01/16] ########
Thinking... plan to create MP(s) for Discovery, Monitoring, and Dashboards.
New question...
I have created a script in PoSh that exposes the endpoints needed by SCOM.
+ These need to be converted to a class object (converting the PoSh output to XML) - not done yet!
+ Thinking ahead, I am not sure what base class to use for this discovery script?
A very simple way to do this would be with Web Application Availability Monitoring, which works with any HTTP endpoint. As well as checking availability, this monitor can check the content of the response and raise an alert accordingly.
To get started, use the SCOM console and navigate to Authoring > Management Pack Templates > Create > Web Application Availability Monitoring
This blog is a really good walkthrough of doing that:
http://www.opsmanfan.com/index.php/6-use-scom-2012-to-monitor-a-webapi-without-using-scripts
Some limitations of this approach vs. a custom management pack:
you won't get any control over the alert content (name, description, etc.)
it won't scale well to many monitors (in terms of administrative burden)
you can't represent the health using a complex object model (no classes/discoveries)
If you want to test a large number of URLs with this method, then a community Management Pack called URLGenie might also help:
http://blogs.msdn.com/b/tysonpaul/archive/2015/05/04/urlgenie-management-pack-for-scom-an-easy-solution-for-bulk-website-monitoring.aspx
You are right that a custom MP is the right way to integrate a custom/third-party monitoring system with SCOM. You have to think about three important things when planning your work on such an MP:
How you are going to get information from external system
How you are going to persist and use it in SCOM
How you are going to visualize it in SCOM
Let's walk through these three items:
From your intro it looks obvious: your system exposes a RESTful API. SCOM (even 2012 or 2016) doesn't have native datasources to parse JSON, so you'll need to create custom datasources using PowerShell or C# (depending on your experience). In this case, it might be reasonable to use a standard library to make the job easier.
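As a rough illustration of what the PowerShell side of such a datasource might look like (the endpoint URL and payload field names below are placeholders, not part of your tool), the probe script typically calls the REST API and returns the result as a SCOM property bag:
param($EndpointUrl)

# MOM.ScriptAPI and property bags are the standard way for SCOM scripts to return data
$api = New-Object -ComObject "MOM.ScriptAPI"
$response = Invoke-RestMethod -Uri $EndpointUrl -Method Get   # Invoke-RestMethod parses the JSON for you

$bag = $api.CreatePropertyBag()
$bag.AddValue("Endpoint", $EndpointUrl)
$bag.AddValue("Status", $response.status)   # assumed field in your tool's JSON payload
$bag                                        # emit the bag for the monitor/rule to consume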
SCOM has its own object model. You have classes to represent objects, monitors to detect failures/state changes, and rules to collect performance metrics and alerts/events. So you'll need to implement discovery datasources to get data about the objects monitored by your custom monitoring system (such as servers, databases, disks, apps, etc.) and define a class hierarchy to persist these objects in SCOM.
Then you'll need to create datasources for monitors and rules, and here you must think before you act: which failures, alerts, and metrics do you want to expose to SCOM? Once you have a clear understanding of this area, you are good to implement it (again, using PS or C#).
SCOM will give you some out-of-the-box visualization after you have done (1) and (2), so in the minimal scenario you'll only need to define a couple of views to show the data collected by your MP in the SCOM console. In the ultimate case, if you want some fancy visualization, you'll have to create a custom dashboard. A good option here is to reuse the dashboards from the SQL Server MP (it was released recently, it's free, and it is really cool).
In fact, SCOM is not so much a monitoring system as a framework, with a runtime platform, a development language, and libraries, so building your own MP is closer to programming than to IT administration :)
You can also try the Silect MP authoring tool, but I'm not sure it will help you build custom datasources any better than VS.
Good luck!
P.S. feel free to ping me via LinkedIn for more details about MP development.

Porting PHP API over to Parse

I am a PHP dev looking to port my API over to the Parse platform.
Am I right in thinking that you only need Cloud Code for complex operations? For example, consider the following methods:
// Simple function to fetch a user by id
function getUser($userid) {
    return (SELECT * FROM users WHERE userid=$userid LIMIT 1);
}

// Another simple function; fetches all of a user's allergies (by their user id)
function getAllergies($userid) {
    return (SELECT * FROM allergies WHERE userid=$userid);
}

// Creates a script (story?) about the user using their user id.
// Uses their name and allergies to create the story.
function getScript($userid) {
    $user = getUser($userid);
    $allergies = getAllergies($userid);
    return "My name is {$user->getName()}. I am allergic to {$allergies}";
}
Would I need to implement getUser()/getAllergies() endpoints in Cloud Code? Or can I simply use Parse.Query("User")..., leaving only the getScript() endpoint to implement in Cloud Code?
Cloud Code is for computation-heavy operations that should not be performed on the client, e.g. handling a large dataset.
It is also for beforeSave/afterSave and similar hooks.
In your example, provided you have set up a reasonable data model, none of the operations require Cloud Code.
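For instance, the getAllergies() lookup can be a plain client-side query with the Parse JavaScript SDK. A minimal sketch, assuming an Allergy class with a user pointer field (both names come from your schema, not from Parse):
// Client-side equivalent of getAllergies($userid)
var query = new Parse.Query("Allergy");
query.equalTo("user", Parse.User.current());   // assumes a pointer column named "user"
query.find().then(function (allergies) {
    // work with the returned allergy objects here
});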
Your approach sounds reasonable. I tend to put simple queries that will most likely not change on the client side, but it all depends on your scenario. When developing mobile apps I tend to put a lot of code in Cloud Code; I've found that it speeds up my development cycle. For example, if someone finds a bug and it's in Cloud Code: make the fix, run parse deploy, done! The change is available to all mobile environments instantly. If that same code is in my mobile app, it really hurts, because now I have to fix the bug, rebuild, push it to the App Store/Google Play, wait x days for it to be approved, and have the users download it... you see where I'm going here.
Take, for example, your SELECT * FROM allergies WHERE userid=$userid query.
Even though this is a simple query, what if you want to sort it, or add some additional filtering?
These are the kinds of things I think about when deciding where to put the code. Hope this helps!
As a side note, I have also found Cloud Code very handy for adding extra security to my apps.
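To make the getScript() half concrete, here is a hedged sketch of a Cloud Code function in the older callback style (the Allergy class and its user/name fields are assumptions based on the question's schema):
// Hypothetical Cloud Code endpoint for getScript()
Parse.Cloud.define("getScript", function (request, response) {
    var query = new Parse.Query("Allergy");
    query.equalTo("user", request.user);   // the authenticated caller
    query.find().then(function (allergies) {
        var names = allergies.map(function (a) { return a.get("name"); });
        response.success("My name is " + request.user.get("name") +
                         ". I am allergic to " + names.join(", "));
    }, function (error) {
        response.error(error.message);
    });
});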

How do I use the LookbackAPI for burnup charts?

I need a good example of using the Lookback API to get the data for a burnup chart. I see some limited questions and responses about the API, but no examples of how I would use it for this. I need to get the current scope in story points and the story points completed.
Sorry for the scarcity of available examples. More and better examples will come as the LBAPI beta matures. I'd definitely recommend becoming familiar with the Lookback API (LBAPI) documentation, as there are good examples there for formulating queries.
For a burnup, let's say you want to get the state snapshots for an iteration running from 15-Jan-2013 through 30-Jan-2013, where the iteration applies to a project hierarchy that is four deep. The following LBAPI query would obtain the PlanEstimate, ToDo, and ScheduleState for stories scheduled into that iteration:
{
    find: {
        _TypeHierarchy: "HierarchicalRequirement",
        Children: null,
        _ValidFrom: {
            $gte: "2013-01-15TZ",
            $lt: "2013-01-30TZ"
        },
        Iteration: {
            $in: [
                12345678910,
                12345678911,
                12345678912,
                12345678913
            ]
        }
    },
    fields: [
        "PlanEstimate",
        "ToDo",
        "ScheduleState"
    ]
}
where the values in the $in clause are the ObjectIDs of the iterations named "Iteration 1". It's probably easiest to get these ObjectIDs from a standard WSAPI query on Iterations: (Name = "Iteration 1"). For an iteration copied into a four-deep project hierarchy, we would see four Iteration OIDs similar to the above.
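For reference, that WSAPI lookup can be done as a plain REST GET; a sketch (the host and API version are the usual defaults and the credentials are placeholders, so adjust for your subscription):
curl -u "user@example.com:password" \
     "https://rally1.rallydev.com/slm/webservice/v2.0/iteration?query=(Name%20%3D%20%22Iteration%201%22)&fetch=ObjectID,Name"

The ObjectID values in the response are what go into the $in clause above.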
For charting, the toughest part right now is dealing easily with the time-series data. The most robust way to query and process LBAPI data currently is to work directly against the REST endpoint and process the returned JSON results in your own code.
For Javascript apps, the preferred toolkit for processing the data and turning it into a chart is AppSDK2, specifically the SnapshotStore.
The Lumenize javascript library is separate from LBAPI, but it was developed by Rally's director of analytics and is bundled in the SDK. You can find some examples of using LBAPI and Lumenize to produce charts in Rally-internal and Rally-customer Hackathon projects here:
https://github.com/RallyHackathon
Please be cautious with these examples, for a couple of reasons:
Several aspects of the Lumenize namespace will be changed/renamed for clarity
There's a bug in the current version of Lumenize where its timeSeriesCalculator does not correctly account for stories that are deleted or reparented
Hopefully an updated version of AppSDK2 will be bundled and released soon to consolidate the Lumenize namespace and resolve the bug, so that there's better glue between AppSDK2 and LBAPI for Javascript app development.
Unfortunately, the .NET, Java, and Python toolkits have not yet been updated to support the Lookback API. As a result, you'll have to do an HTTP POST directly to the Lookback API's REST endpoint, with a body similar to the one Mark W listed above and Content-Type: application/json.
I'd recommend the Chrome extension 'XHR Poster' to experiment with what you're sending from a browser:
https://chrome.google.com/webstore/detail/xhr-poster/akdbimilobjkfhgamdhneckaifceicen
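Equivalently, a minimal sketch of that POST from the command line (the workspace OID, credentials, and file name are placeholders; check the endpoint path against the LBAPI docs for your subscription):
curl -u "user@example.com:password" \
     -H "Content-Type: application/json" \
     -X POST \
     -d @burnup_query.json \
     "https://rally1.rallydev.com/analytics/v2.0/service/rally/workspace/12345/artifact/snapshot/query.js"

where burnup_query.json contains a body like the query shown above.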

Is it possible to use Bukkit for Minecraft to define a new kind of mob?

I'd like to write a Minecraft mod which adds a new type of mob. Is that possible? I see that, in Bukkit, EntityType is a predefined enum, which leads me to believe there may not be a way to add a new type of entity. I'm hoping that's wrong.
Yes, you can!
I'd direct you to some tutorials on the Bukkit forums. Specifically:
Creating a Meteor Entity
Modifying the Behavior of a Mob or Entity
Disclaimer: the first is written by me.
You cannot truly add an entirely new mob via Bukkit alone; you'd have to use Spout to give it a different skin. However, if you simply want a new mob and are content with sharing the skin of another entity, it can be done.
The idea is injecting new EntityTypes values via Java's Reflection API. It would look something like this:
public static void load() {
    try {
        // Requires: import java.lang.reflect.Method;
        // Grab the private registration method EntityTypes.a(Class, String, int)
        Method a = EntityTypes.class.getDeclaredMethod("a", Class.class, String.class, int.class);
        a.setAccessible(true);
        // The method is static, so the instance argument is null
        a.invoke(null, YourEntityClass.class, "Your identifier, can be anything", id_map);
    } catch (Exception e) {
        // Insert handling code here
    }
}
I think the above is fairly straightforward: we get a handle to the private method, make it accessible, and invoke the registration method. id_map contains the entity id to map your entity to; 12 is that of a fireball. The mapping can be found in EntityType.class. Note that these ids should not be confused with their packet designations; the two are completely different.
Lastly, you actually need to spawn your entity. MC will continue spawning the default entity, since we haven't removed it from the map. But it's just a matter of calling the net.minecraft.server world's spawn method with (your_entity, SpawnReason.CUSTOM).
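A hedged sketch of that spawn step (CraftBukkit's NMS class and package names varied by version, and in many versions the NMS method is addEntity rather than spawnEntity, so treat YourEntityClass and the calls below as illustrative):
// Spawn the custom entity through the NMS world backing a Bukkit world
net.minecraft.server.World nmsWorld = ((org.bukkit.craftbukkit.CraftWorld) loc.getWorld()).getHandle();
YourEntityClass entity = new YourEntityClass(nmsWorld);
entity.setPosition(loc.getX(), loc.getY(), loc.getZ());
nmsWorld.addEntity(entity, org.bukkit.event.entity.CreatureSpawnEvent.SpawnReason.CUSTOM);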
If you need a skin, I suggest you look into SpoutPlugin. It does require running the Spout client to join such a server, but the possibilities at that point are practically infinite.
Sadly, it would also only be possible with client-side mods. You could look into Spout (http://www.spout.org/), a client mod which provides an API for server-side plugins to do more on the client, but without doing something client-side, this is impossible.
It's not possible to add new entities, but it is possible to edit entity behaviors. For example, I once made it so that you could tame iron golems and they followed you around.
You can also sort of achieve custom-looking human entities by accessing player entities and tweaking network packets.
It's expensive, as you need to create a player account to achieve this, which then gets used to act as a mob. You then spawn a named entity and give it the same behaviour AI as you would an existing mob. Keep in mind, however, that you will need to write the AI yourself (you could borrow code straight from CraftBukkit/Bukkit) and you will need to push the movement and events of this mob to players within sight. Technically speaking, all you're doing is pushing packets to the client from the server about what's actually happening; if you're outside that push list, nothing will happen, and other players will see you being knocked around by an invisible something :) It's a bit of a mental leap :)
I'm using this concept to create NPCs that act as friendly and factional armies. I've also used mobs themselves as friendly entities (if you belong to a dark faction).
I'd personally like to see a future server API that can push model instructions to the client for a server-specific cache, as well as the ability to tell a client where to download mob skins.
It's doable today, but I'd have to create a plugin for the client to achieve it, which is then back to a game of annoyance, especially when Mojang pushes out a new release and all the plugins take forever to rise with the tide.
In all honesty, this entire ecosystem could be managed more strategically, but right now I think it's just really ad hoc product management. (Speaking as a former product manager of .NET, I'd love to work on this strategy; it would be such a fun gig.)