Access control of objects in a Julia web platform API

We are creating an online platform and exposing a Julia API via an embedded code editor. Users can access the API and run analyses on our web app. I have a question about controlling access to the API and its objects.
The API currently contains a database handle and other objects that are exposed to the user and could be used to compromise the internal system.
Below is the current architecture:
UserProgram.jl

function doanalysis()
    data = getdata()
    # some analysis on data
end

InternalProgram.jl

const client = MongoClient()
const collection = MongoCollection(client, "dbname", "collectionName")

function getdata()
    data = # some function to get data from collection
    return data
end

# after parsing the user program
doanalysis()
To run the user's analysis, we pass the user program as a command-line argument (using the ArgParse module) and run the internal program as follows:
$ julia InternalProgram.jl --file UserProgram.jl
With this architecture, the user potentially gets access to "client" and "collection" and can modify the internal databases.
Is there a better way to solve this problem without exposing the objects?
I hope someone has an answer to this.

You will be exposing yourself to multiple types of vulnerabilities - as a general rule, executing user-supplied code is a VERY BAD IDEA.
1/ Like you said, you'll potentially allow users to execute arbitrary code against your database.
2/ Your users will have access to the full power of Julia to do things on your server (for example, download files they can later execute, or access other servers and services on the machine [MySQL, email, etc.]). Depending on the level of access of the Julia process, think unauthorized access to your file system, installed keyloggers, running spam servers, and so on.
3/ They will be able to use Julia packages and get you into a lot of trouble - for example, add and use the Requests.jl package and launch DoS attacks against other servers.
If you really want to go this way, I recommend that you:
A/ Set proper (minimal) permissions for the MongoDB user configured for the app (e.g. http://blog.mlab.com/2016/07/mongodb-tips-tricks-collection-level-access-control/)
B/ Execute each user's code in a separate sandbox/container that exposes only the minimum necessary software
C/ Run your containers on a managed platform where tooling (firewalls) exists to monitor incoming and outgoing traffic (for example, to block spam or DoS attacks)
To achieve B/ and C/, my recommendation is to use JuliaBox. I haven't used it myself, but it seems to be exactly what you need: https://github.com/JuliaCloud/JuliaBox
Once you get that running, you can also use https://github.com/JuliaWeb/JuliaWebAPI.jl
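One further mitigation, complementary to the sandboxing above: don't expose client and collection by name at all. Instead of include()-ing the user file into the module that owns the database handles, evaluate it in a fresh module whose namespace contains only the accessor you choose to publish. Below is a minimal sketch under that assumption (the Internal module and the stub getdata are illustrative); note that this is namespace hygiene, not a security boundary - user code can still reach eval, ccall, filesystem I/O, etc., so container-level isolation is still required.

module Internal
    # const client = MongoClient()      # stays private to this module
    # const collection = MongoCollection(client, "dbname", "collectionName")
    getdata() = "stub data"             # placeholder for the real collection query
end

function run_user_program(path::AbstractString)
    sandbox = Module(:Sandbox)                     # fresh, near-empty namespace
    Core.eval(sandbox, :(using Base))              # give user code the standard library
    # Publish only the accessor; `client` and `collection` are unreachable by name.
    Core.eval(sandbox, :(getdata() = $(Internal.getdata)()))
    Base.include_string(sandbox, read(path, String), path)  # load the user program
    Core.eval(sandbox, :(doanalysis()))            # run the user's entry point
end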

Related

How to keep GraphQL API and frontend synchronized on a staging server?

We have a Rails application with a GraphQL API in one Git repository, and a React frontend application in another. Both backend and frontend have CI and are deployed separately. But both are still under heavy development, and our staging server often doesn't work because deployments are not synchronized and we don't test the whole application - we test the API, and we test the frontend without the API.
What is the best way to deploy the frontend and backend only when they are synchronized, i.e. when new versions don't break functionality? I thought about a third repository with the backend and frontend included as Git submodules, acceptance tests, and deploying both sides at once. But maybe there is a simpler solution? Maybe some versioning?
You certainly can do versioning with GraphQL, but ideally any changes to your schema shouldn't be breaking ones. This just takes discipline on the part of your backend devs, although there are also tools to help detect breaking changes. Some general guidelines:
Deprecate fields using the @deprecated directive instead of deleting them. Deprecated fields can be communicated to client teams and retired after some agreed-upon amount of time.
Avoid renaming types. Try to use more specific naming to avoid having to rename things in the future (e.g. use emailMessage instead of just message if you could foreseeably have a different kind of message later).
Use payload types for mutations. If you mutate a User, for example, instead of just returning the User, return a payload type that has a user field. If down the road, you realize the mutation should also return other information, you can easily add fields to the payload type without creating breaking changes.
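For illustration, here is a schema sketch applying those three guidelines (all type and field names are made up):

type User {
  id: ID!
  email: String!
  name: String @deprecated(reason: "Use fullName; will be removed once clients migrate.")
  fullName: String
}

# Payload type wrapping the mutated object, so fields can be added later
# without breaking existing clients.
type UpdateUserPayload {
  user: User
}

type Mutation {
  updateUser(id: ID!, fullName: String!): UpdateUserPayload
}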

Reducing Parse Server to only Parse Cloud

I'm currently using a self-hosted Parse Server (up to date), but I'm facing some security issues.
At the moment, calls to the /classes route can retrieve any object in any table, and even though I might want an object to be publicly readable, I wouldn't like to expose all of that object's fields. In short, I don't want the database to be retrievable under any circumstances; I would like to disable "everything" except Parse Cloud Code. That is, I would be able to call my own Cloud functions, but clients (Android, iOS, C#, JavaScript...) would not be able to retrieve data directly.
Is there any way to do this? I've been searching deeply for this, trying to debug some controllers, but I don't have a clue.
Thank you very much in advance.
tl;dr: set the ACL on all objects to be readable only with the master key, and then tell the query in Cloud Code to use the master key when querying your data.
So, without changing Parse Server itself, you could make use of ACLs and only allow a specific user to access objects. You would then "log in" as that user in your Cloud Code and be able to access all objects.
As the old method, Parse.Cloud.useMasterKey(), isn't available in the open-source Parse Server, you will have to pass the useMasterKey parameter to the query you are running, which should do the trick for this particular request and will bypass ACLs/CLPs. There is an example in the Parse Server wiki as well.
For convenience, here is a short code example from the Wiki:
Parse.Cloud.define('getTotalMessageCount', function(request, response) {
    var query = new Parse.Query('Messages');
    // count() will use the master key to bypass ACLs
    query.count({ useMasterKey: true })
        .then(function(count) {
            response.success(count);
        });
});
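The other half of the tl;dr - locking objects down so that only master-key requests can read them - looks roughly like this (the class and field names are illustrative):

var message = new Parse.Object('Messages');
message.set('body', 'hello');

var acl = new Parse.ACL();   // an empty ACL grants no public read or write access
message.setACL(acl);         // only master-key requests can now touch this object

message.save(null, { useMasterKey: true });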

How to test SAP .Net Connector 3 Client / Server without SAP System

I want to write some code using the SAP .NET Connector 3 to receive data from and send data to an SAP system using RFC and IDoc.
How can I set up a simple SAP test system with RFC to test my code?
Is there a way to mock the SAP system, or do I have to install an SAP system?
If so, is there any simple tutorial on how to set up an SAP system with a simple "Hello World" RFC?
I was originally going to post a comment. But it was too long.
This isn't a solution, it's a warning.
I think you have placed too much emphasis on unit tests for this type of solution. Mock the rest of the code all you like, but mocking an interface that may (and will) behave differently gives false confidence.
By all means abstract the infrastructure layer and push dummy data into it to test the rest of the app. But don't plan on mocking the interface in any way that is relevant to stability.
How do you plan to mock:
The sign on process
single sign-on, SNC, ...
gateway connection
connection specific settings
authorizations
load balancing
connection pooling
timeout
Test it against the DEV system, then test again in the QA system, and get ready for unexpected issues in PROD.
You can write code to generate TABLE/STRUCTURE content, so you can easily mock what you expect to receive from or send to the SAP system. Write a dummy that returns that data and mock the call. Don't bother with mock infrastructure - that achieves nothing.
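A minimal sketch of that abstraction, assuming a hand-rolled IRfcGateway seam (the names are illustrative, not part of SAP NCo): the production implementation would wrap the connector, while the fake just returns canned TABLE/STRUCTURE rows for tests.

using System.Collections.Generic;

// Thin seam over the RFC layer; the production implementation would call
// SAP .NET Connector 3 internally, tests use the fake below.
public interface IRfcGateway
{
    IReadOnlyList<IDictionary<string, object>> Call(
        string functionName, IDictionary<string, object> importParams);
}

// Returns canned rows regardless of the function called - enough to test
// everything above the infrastructure layer, and nothing more.
public sealed class FakeRfcGateway : IRfcGateway
{
    private readonly List<IDictionary<string, object>> _rows;

    public FakeRfcGateway(List<IDictionary<string, object>> rows)
    {
        _rows = rows;
    }

    public IReadOnlyList<IDictionary<string, object>> Call(
        string functionName, IDictionary<string, object> importParams)
    {
        return _rows;
    }
}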

Integrating System Center Operation Manager [SCOM] with external monitoring tool [Application]

WHAT AM I TRYING TO ACHIEVE
Synopsis:
Trying to create an API or connector for an in-house monitoring tool that integrates with SCOM [Microsoft System Center Operations Manager 2012].
Our tool has a RESTful page with all the necessary endpoints, and we would simply like SCOM to read the status of those endpoints.
Thus far, according to the SCOM documentation and my understanding, I need to build a management pack, which involves authoring tools, Visual Studio, etc.
While I am still going through the documentation on this: has anyone tackled something like this before? Some guidance on how to approach it would be appreciated.
UPDATE [04/01/16]
Thinking... plan to create MP(s) for Discovery, Monitoring and Dashboard.
New question...
I created a PowerShell script that exposes the endpoints needed by SCOM.
+ The script's output still needs to be converted to a class object (converting the PowerShell output to XML) - not done yet!
+ Thinking ahead, I am not sure which base class to use for this discovery script.
A very simple way to do this would be with Web Application Availability Monitoring, which works with any HTTP endpoint. As well as checking availability, this monitor can check the content of the response and raise an alert accordingly.
To get started, use the SCOM console and navigate to Authoring > Management Pack Templates > Create > Web Application Availability Monitoring
This blog is a really good walkthrough of doing that:
http://www.opsmanfan.com/index.php/6-use-scom-2012-to-monitor-a-webapi-without-using-scripts
Some limitations of this approach vs. a custom management pack:
you won't get any control over the alert content (name, description, etc.)
it won't scale well to many monitors (in terms of administrative burden)
you can't represent the health using a complex object model (no classes / discoveries)
If you want to test a large number of URLs with this method, then a community Management Pack called URLGenie might also help:
http://blogs.msdn.com/b/tysonpaul/archive/2015/05/04/urlgenie-management-pack-for-scom-an-easy-solution-for-bulk-website-monitoring.aspx
You are right that a custom MP is the right way to integrate a custom/third-party monitoring system with SCOM. You have to think about three important things when planning your work on such an MP:
How you are going to get information from the external system
How you are going to persist and use it in SCOM
How you are going to visualize it in SCOM
Let's walk through these three items:
From your intro it looks obvious - your system exposes a RESTful API. SCOM (whether 2012 or 2016) doesn't have native datasources to parse JSON, so you'll need to create custom datasources using PowerShell or C# (depending on your experience). In this case, it might be reasonable to use a standard JSON library to make the job easier.
SCOM has its own special object model. You have classes to represent objects, monitors to detect failures and state changes, and rules to collect performance metrics, alerts, and events. So you'll need to implement discovery datasources to get data about the objects monitored by your custom monitoring system (such as servers, databases, disks, apps, etc.) and define a class hierarchy to persist these objects in SCOM (a discovery script sketch follows this walkthrough).
Then you'll need to create datasources for monitors and rules, and here you must think before you act: decide which failures, alerts, and metrics you want to expose to SCOM. When you have a clear understanding of this area, you are good to implement it (again, using PowerShell or C#).
SCOM will give you some out-of-the-box visualization once you are done with (1) and (2), so in the minimal scenario you'll only need to define a couple of views to show the data collected by your MP in the SCOM console. In the ultimate case - if you want some fancy visualization - you'll have to create a custom dashboard. A good option here is to reuse the dashboards from the SQL Server MP (it was released recently, it's free, and it is really cool).
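To make the discovery part concrete, here is a hedged sketch of a PowerShell discovery datasource script. The class name Your.Custom.Endpoint.Class, its Url property, and the /endpoints route are all illustrative, and the $MPElement[...]$ tokens are placeholders that SCOM substitutes with real IDs before the script runs as part of an MP:

param($SourceId, $ManagedEntityId, $BaseUrl)

$api = New-Object -ComObject 'MOM.ScriptAPI'
$discoveryData = $api.CreateDiscoveryData(0, $SourceId, $ManagedEntityId)

# Read the in-house tool's RESTful endpoint list and emit one class
# instance per monitored endpoint.
$endpoints = Invoke-RestMethod -Uri "$BaseUrl/endpoints"

foreach ($endpoint in $endpoints) {
    $instance = $discoveryData.CreateClassInstance("$MPElement[Name='Your.Custom.Endpoint.Class']$")
    $instance.AddProperty("$MPElement[Name='Your.Custom.Endpoint.Class']/Url$", $endpoint.url)
    $discoveryData.AddInstance($instance)
}

# Returning the discovery data hands it back to the SCOM module host.
$discoveryData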
In fact, SCOM is not so much a monitoring system as a framework, with a runtime platform, a development language, and libraries, so building your own MP is closer to programming than to IT administration :)
You can also try the Silect MP authoring tool, but I'm not sure it will help you build custom datasources any better than VS.
Good luck!
P.S. feel free to ping me via LinkedIn for more details about MP development.

Configuration for PowerShell module created via .NET framework

What's the best practice for dependencies that you want to be configurable when creating a PowerShell module in C#?
My specific scenario is that the PowerShell module I am creating via C# code will use a WCF service. Hence, the service's URL must be something that the clients can configure.
Is there a standard approach on this? Or will this be something that must be custom implemented?
A somewhat standard way to do this is to allow the value to be provided as a parameter, and to default to reading a special variable via PSCmdlet's GetVariableValue. This is what the built-in Send-MailMessage cmdlet does: it reads the PSEmailServer variable if no server is provided.
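A minimal sketch of that pattern, assuming a hypothetical Send-AnalysisRequest cmdlet and a MyModuleServiceUrl preference variable (both names made up):

using System.Management.Automation;

[Cmdlet(VerbsCommunications.Send, "AnalysisRequest")]
public class SendAnalysisRequestCommand : PSCmdlet
{
    [Parameter]
    public string ServiceUrl { get; set; }

    protected override void ProcessRecord()
    {
        // Mirror Send-MailMessage's PSEmailServer behaviour: if no URL was
        // passed explicitly, fall back to the session preference variable.
        var url = ServiceUrl ?? GetVariableValue("MyModuleServiceUrl") as string;

        if (string.IsNullOrEmpty(url))
        {
            ThrowTerminatingError(new ErrorRecord(
                new PSArgumentException("No service URL was provided or configured."),
                "NoServiceUrl", ErrorCategory.InvalidArgument, null));
        }

        // The real cmdlet would create the WCF client against `url` here.
        WriteObject(url);
    }
}

A user would then either pass -ServiceUrl explicitly or set $MyModuleServiceUrl = 'http://example.com/service' once per session.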
I might not be understanding your question. So I'll posit a few scenarios:
Your PS module will always use the same WCF endpoint. In that case you could hardcode the URL in the module.
You have a limited number of endpoints to choose from, and there's some algorithm or best practice to associate an endpoint with a particular user, such as the closest geographically, based on the dept or division the user is in, etc.
It's completely up to the end user's preference to choose a URL.
For case #2, I suggest you implement the algorithm/best practice and save the result someplace as part of the module install.
For case #3, an environment variable seems reasonable, or a registry setting, or a file in one of the user's profile directories. More important than where you persist the data, though, is the interface you give users to change the setting. For example, if you used an environment variable, it would be less friendly to tell the user to go to Control Panel, System, Advanced, Environment, User variables, New... than to provide a simple PS function to change the URL. In fact, I'd say providing a cmdlet/function to perform configuration is the closest to a "standard" I can think of; a sketch follows.
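For instance, a sketch of such a configuration function for case #3, persisting to a JSON file under the user's profile (the module name, file location, and function names are all illustrative):

function Set-MyModuleServiceUrl {
    param(
        [Parameter(Mandatory)]
        [string]$Url
    )

    # Persist the setting under the user's profile so it survives sessions.
    $configDir  = Join-Path $env:APPDATA 'MyModule'
    $configFile = Join-Path $configDir 'config.json'

    if (-not (Test-Path $configDir)) {
        New-Item -ItemType Directory -Path $configDir | Out-Null
    }

    @{ ServiceUrl = $Url } | ConvertTo-Json | Set-Content -Path $configFile
}

function Get-MyModuleServiceUrl {
    $configFile = Join-Path (Join-Path $env:APPDATA 'MyModule') 'config.json'
    if (Test-Path $configFile) {
        (Get-Content -Path $configFile -Raw | ConvertFrom-Json).ServiceUrl
    }
}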