Call MetaTrader MQL4/MQL5 function from imported DLL

I would like to call an MQL4 or MQL5 function from my own imported DLL in MetaTrader.
Is it possible?

Forest,
As far as I have experienced during the past two years working with MetaTrader, there is no real way to call MQL functions from an external DLL. But there are some custom-built APIs that come close to what you want to achieve:
MT4 API
MetaTrader™ Java / .NET API
These APIs do, to some extent, let you use MQL functionality out of the box.

Principle
After several hundred man-years in the FX domain, there is a better approach to orchestrating smooth and elegant co-operation between the MT4 Terminal and other processes than trying to push water uphill, or than paying USD 500+ for a kit that will stop working right at the next shock, once Build 524 -> Build 562 -> Build 586 -> Build 600 -> Build 609 -> Build 624 -> ... moves again.
A non-existent toy
Yes, the MT4 architecture does not expose its own interface that would allow itself to be "disturbed" by a non-deterministic obligation to handle external low-level calls via a DLL et al.
How to fix it
Nevertheless, it is possible to reverse the architecture and make the MT4 Terminal act as a lightweight thin client, operating a smart messaging library through which the MT4 functions are exposed for remote procedure calls (RPC).
Example
This way a Python Node may collect MT4 data for numerical processing,
in the same way a PHP Node may handle remote syslogs in parallel,
in the same way a C++ Node may integrate another task,
in the same way another Python Node may act as a CLI terminal interface, with a custom scripting syntax, to command MetaTrader-side activities via the command line / stdio --
simply, whatever your application infrastructure needs can be done this way (a minimal sketch of one such node follows below).
(One may even improve on the poor real-time features of the native MT4 threads to gain much better soft-real-time predictability and a low-latency, massively parallel architecture... and still stay on the safer side, protected from being torpedoed by the next "new"-MQL4.)
Nota bene: just to illustrate the invisible threat, one head-on collision in "new"-MQL4.56789 is, among others, that string, while syntactically presented as a string, is in fact not a string but a struct. All your previous DLL-related work simply has to be re-worked and wrapped around to emulate a string-as-struct, or a new DLL interface has to be designed for cases that return a value in a buffered array of BYTEs, which the MQL4.56789 side can receive and process but cannot free on its own, so it leaks memory.
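Purely as an illustration of the principle (not part of any product named above), here is a minimal sketch of what one such external node could look like in C++, assuming the MT4-Terminal side publishes tick data through a messaging library such as ZeroMQ via a small bridge script or DLL; the endpoint address, topic, and message format are made-up assumptions for the example, and a reasonably recent cppzmq is assumed.

```cpp
// Hypothetical external "node": subscribes to ticks that an assumed MT4-side
// bridge publishes over ZeroMQ. Endpoint, topic and payload format are
// illustrative assumptions only.
#include <zmq.hpp>      // cppzmq (assumed >= 4.7 for zmq::sockopt)
#include <iostream>

int main() {
    zmq::context_t ctx(1);
    zmq::socket_t sub(ctx, zmq::socket_type::sub);

    sub.connect("tcp://127.0.0.1:5556");         // assumed publisher endpoint
    sub.set(zmq::sockopt::subscribe, "EURUSD");  // topic filter = symbol name

    while (true) {
        zmq::message_t msg;
        if (sub.recv(msg, zmq::recv_flags::none)) {
            // e.g. "EURUSD 1.08432 1.08445" -- whatever the MT4 side sends
            std::cout << msg.to_string() << '\n';
        }
    }
}
```

The same pattern works in the opposite direction: the MT4 side can expose a request/reply channel on which remote nodes submit commands that an Expert Advisor polls and executes, which is what makes the MT4 functions remotely callable without MT4 ever having to accept an inbound low-level DLL call.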

If it's acceptable for your DLL to be a .NET DLL, then you could try
this MT4 .NET integration library called NQuotes.
With this library it's possible to access any MQL4 function from your DLL.

Related

Run MATLAB program with Web Server inputs

I have a MATLAB application that I want to execute on a Linux box with inputs from a web server. Requests to the server would all come from the local network.
Searching for different solutions, I've seen recommendations to host a Django server that serves an HTML form where users could input all the various data needed by the application. When a user fills out the form and submits it, the data would be sent through an API to the MATLAB application, which would place the report on a network shared drive.
Would this work well? Is there a different/easier solution available?
Need more details to know if this would "work well". But in terms of the general outline you presented, seems feasible.
When you say "the data would be sent through an API to the MATLAB application", what exactly do you mean here? What API are we talking about? And what is "the Matlab application"? Do you mean just installing regular Matlab on this server machine, and then having the Django or other web application server run the matlab command to run a Matlab program, running as a distinct process (corresponding to a single matlab -batch execution, probably?) that services that? Two issues here: One, Matlab is a large program with a slow startup time. Matlab Production Server and similar solutions handle this by maintaining a pool of already-running "warmed up" Matlab worker processes to service incoming requests. Two, licensing: the "regular" Matlab licenses are aimed towards interactive use by humans; running Matlab like that on the server side to handle requests for a web app used by multiple humans may not be covered. Talk to your organization's lawyer or IT licensing expert before doing this.
@Will is right here: the Matlab Production Server is the product or "solution" that MathWorks provides for this scenario. And it's relatively easy to use. But it ain't cheap. (On the other hand, when you're talking about Matlab, what is?)
If you have someone who can do a bit of system programming for you, there's a more affordable alternative: use the Matlab Compiler to build your Matlab code into a "CTF" DLL, and write a thin custom server wrapper on top of that, which can accept service calls for the particular Matlab things you need done, and dispatches them to your code. (Running that in a pool of multiple processes, if you want to be able to service multiple concurrent clients.) "Compiled" Matlab libraries that run against the Matlab Runtime do not require any additional licenses for their runtime execution.
Big questions here are: Do you want this to Go Fast? How many clients are you going to have, and how often are they going to be sending requests? What kind of data will be contained in the inputs and outputs to this Matlab code?
Have a look at the -batch option to the matlab command. Have a look at the various deployment options supported by the Matlab Compiler. And talk to your organization's lawyer.
If you decide to go the matlab -batch route, you probably do not want to pass the inputs to your Matlab code as command-line arguments. Command lines and environment variables only pass simple strings, and parsing those sucks, especially once you get in to nontrivial numerics. Bundle up all your inputs as JSON files, MAT files, or something similar, and then pass just a reference to those files (or SQL blobs, or similar) on the command line.
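For example, here is a throwaway sketch of that idea in C++ (the Matlab entry point run_report and the JSON fields are hypothetical names you would define yourself; only the matlab -batch flag itself comes from the suggestion above):

```cpp
// Write the request payload to a JSON file and hand Matlab only a reference
// to that file, instead of trying to pass the data itself on the command line.
#include <cstdlib>
#include <fstream>
#include <string>

int main() {
    // Hypothetical job description; in a real system the web app would write this.
    std::ofstream("job.json")
        << R"({"dataset": "q3_sales.csv", "window": 30, "report": "/share/report_001.pdf"})";

    // run_report('job.json') is an assumed user-written Matlab function that
    // reads the JSON, does the work, and writes the report to the shared drive.
    std::string cmd = "matlab -batch \"run_report('job.json')\"";
    return std::system(cmd.c_str());
}
```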
Also, depending on what your Matlab code is like, GNU Octave (https://gnu.org/software/octave/index) may be an option for you. Octave is many years behind Matlab in terms of functionality and stability, and doesn't have equivalents of all the Matlab Toolboxes, so it isn't a drop-in replacement in general terms. But for simple stuff, it works. And it is unencumbered by licensing, and has faster startup times in command-line mode.
An easier solution (though possibly not the cheapest if your existing MATLAB license doesn't already cover it) is to use MATLAB Production Server which basically exists for this category of problem. It has a RESTful API that would straightforwardly handle your user input use case.

How to execute an untrusted function efficiently in a cross-platform way?

I am writing an open source, cross-platform application in C++ that targets Windows, Mac, and Linux on x86 CPUs. The application produces a stream of data (integers) that needs to be validated, and my application will perform actions depending on the validation result. There are multiple validators, which we shall call "modules", and they can be swapped out for one another.
Anybody can write and share modules with other users, so my application has to ensure that maliciously-written modules cannot harm the user in any way (perhaps except via high CPU usage, in which case my application should be able to kill the module after some amount of time - this can be done by using a surrogate process). Furthermore, the stream of data is being sent at a high rate (up to 100kB/s).
Fortunately, the code in these modules is usually simple arithmetic operations on data in the stream (usually processing each incoming integer in constant time), and they do not need to make any system calls (not even heap allocation).
I've considered the following possibilities (all of them with some drawbacks):
Kernel-based sandboxing
On Linux, we can use secure computing (seccomp), which prevents a process from making any system calls except for reading and writing with already-open file descriptors. Module creators would write their modules as a single function that takes in input and output file descriptors (in a language like C or C++), compile it into a shared object, and then distribute that shared object.
My application will probably prepare input and output file descriptors, then fork() itself or exec() a surrogate process, and this child process uses dlopen() and dlsym() to get a pointer to the untrusted function. Then strict secure computing mode will be enabled, before executing the untrusted function.
Drawbacks: There's the problem that dlopen() will actually run the constructor function from the shared library. This would have to be properly sandboxed as well, and I can't think of a way to do so. Also, of course, this thing will only work on Linux. As far as I know, there is no way to ban WinNT system calls on Windows, so a similar solution on Windows won't be very secure.
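For what it's worth, here is a minimal sketch of the child-process side of this approach (the exported symbol name module_entry is a made-up convention for the example; error handling is reduced to the bare minimum). Note that, as mentioned above, the module's constructors already ran inside dlopen(), before the sandbox is armed:

```cpp
// Child process: load the untrusted shared object, then lock the process into
// strict seccomp before calling into it. After prctl() only read(), write(),
// _exit() and sigreturn() are permitted; any other syscall kills the process.
#include <dlfcn.h>
#include <sys/prctl.h>
#include <linux/seccomp.h>
#include <cstdio>

using module_fn = int (*)(int in_fd, int out_fd);

int run_module(const char* path, int in_fd, int out_fd) {
    void* handle = dlopen(path, RTLD_NOW);        // constructors run here, unsandboxed!
    if (!handle) { std::fprintf(stderr, "%s\n", dlerror()); return -1; }

    auto entry = reinterpret_cast<module_fn>(dlsym(handle, "module_entry"));
    if (!entry) { std::fprintf(stderr, "%s\n", dlerror()); return -1; }

    if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) != 0) {   // arm the sandbox
        std::perror("prctl");
        return -1;
    }
    return entry(in_fd, out_fd);   // untrusted code, now confined to read/write
}
```

Once strict mode is enabled, the child can only report results by write()-ing to the pre-opened descriptors, which matches the streaming design described above.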
Application-level sandboxing
Any form of application-level sandboxing means that we cannot run untrusted machine code of any form. An untrusted function could overwrite its return address or data outside its call stack, thereby compromising the whole application (and effectively acquiring any permissions that the original application had).
Make modules use a simple scripting language that does not support any system calls - just pure arithmetic operations and perhaps the ability to read an input stream. My application would contain an interpreter for this language.
Drawbacks: Unfortunately I have not found such a scripting language. Many scripting languages have extensive functionality (e.g. Python), and their sandboxes (e.g. PyPy's sandbox) simply filter OS system calls. I would be shipping a lot of useless interpreter code with my application, and it is arguably more prone to security issues due to bugs in the interpreter than a language with simply no functionality to do anything other than simple calculations and control flow (basically a function that does not make any system calls). Furthermore, marshalling the data between C++ (machine code) and the scripting language is usually a slow process.
Distribute modules with a 'safe' compiled language that again does not support any system calls. My application would contain a JIT for this language.
Marshalling won't be necessary because my application would call into the JITted machine code of the untrusted module, so performance across this boundary should be fast. The untrusted module now won't be able to corrupt the stack, attempt return-oriented programming, or perform any other malicious actions, due to the language restrictions and checks of the 'safe' language. WebAssembly is the first and only language that comes to mind (if it can be called a language). (As far as I can tell, WebAssembly seems to provide the security guarantees for my use case, right?)
Drawbacks: The existing implementations of WebAssembly seem to be all browser-based, so I would have to steal an implementation from an open source browser. This does seem like a lot of work, considering that I would have to uncouple it from all the JavaScript and other browser bits. However, a standalone WebAssembly JIT based on LLVM seems to be under development.
Question:
What is the best way to efficiently execute an untrusted function in a way that works on Windows, Mac, and Linux?
Right now, I think that the scripting language way would probably be the safest, and be the easiest for module writers. But for a more efficient solution, WebAssembly is probably better. Am I right, or are there better or easier solutions that I have not thought of?
(Remark: I think several pairs of tags used in this question have never been seen together before!)
Regarding WebAssembly:
Unfortunately, there is no production-quality stand-alone implementation yet. I expect some to show up in the future, but it hasn't happened yet.
For historical reasons, existing production implementations are all part of a JavaScript VM. Fortunately, none of these VMs is tied to a browser. If you don't mind including some unused JS baggage, you can embed them as they are (ripping out the JS would be very hard). One problem, though, is that these VMs don't yet provide embedding interfaces for Wasm specifically. You have to go through JS, which is stupid.
There is an initial design for a C and C++ API for WebAssembly, which would give direct access to an embedded Wasm VM. It is meant to be VM-neutral, i.e., could be implemented by any existing VM (the repo contains a prototype implementation on top of V8). This may evolve into a standard, but I cannot promise any timeline. Right now it's only for the brave.

API vs Toolkit vs Framework vs Library

My question is very simple, and I want a clear answer with a simple example.
What's the main difference between API, Toolkit, Framework, and Library?
I prefer the following:
An API is an abstract description of how to use an application. For example, an API may describe the function syntax (declarations) of a chat server, i.e. login, publish_message, subscribe_messages. It also describes any protocols for using the application, i.e. you must log in before sending or receiving messages, or clients are dropped after 2 minutes if they are not sending or receiving messages.
A library is an implementation of an API; it contains the compiled code that implements the functions and protocols (and maintains usage state).
A toolkit is a set of libraries (APIs) and services grouped together to provide the developer with a wider range of possible solutions. For example, the Globus Toolkit provides services (such as file transfer, job submission and scheduling) that a developer can install and start on their servers. It also provides APIs to build applications that may use the deployed services in an integrated fashion. For example, the developer may build a program that uses the Job Submission API to communicate with the Job Submission Service.
A framework is a set of guidelines that prevents inappropriate use or development. The developer must construct their applications within the rules and boundaries of the framework. This is done by forcing the developer to extend the framework in order to develop new software; by extending the framework, you force adherence to it.
I'm not saying these are completely correct, but they've worked OK for me so far!
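To make the "extend the framework" point concrete, here is a small hedged sketch (all class names invented for the example) of the inversion of control that this description implies: the framework owns the flow and calls back into code the developer supplies by subclassing.

```cpp
// Provided by the (imaginary) framework: it decides when things happen.
#include <iostream>

class RequestHandler {
public:
    virtual ~RequestHandler() = default;
    void Handle() {            // framework-controlled flow
        std::cout << "framework: request received\n";
        OnRequest();           // hook the developer must fill in
        std::cout << "framework: response sent\n";
    }
protected:
    virtual void OnRequest() = 0;
};

// Written by the developer, within the framework's rules and boundaries.
class HelloHandler : public RequestHandler {
protected:
    void OnRequest() override { std::cout << "app: hello!\n"; }
};

int main() {
    HelloHandler h;
    h.Handle();                // we hand control to the framework; it calls us back
}
```

The key difference from a plain library is visible in main(): the application hands control to the framework, and the framework decides when the developer's code runs.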
This has always been my understanding, you will no doubt see differing opinions on the subject:
API (Application Programming Interface) - Allows you to use code from an already functional application in a stand-alone fashion.
Framework - Code that gives you base classes and interfaces for a certain task/application type, usually in the form of a design pattern. (Though not always)
Library - Related code that can be swapped in and out at will to accomplish tasks at a class level
Toolkit - Related code that can be used to accomplish tasks at a component level.
Those terms are sometimes used interchangeably by mistake.
Similar posts, read:
What is the major difference between a framework and a toolkit?
Framework vs. Toolkit vs. Library
I prefer to think of a library as an alias for a module or namespace. A toolkit and an API are usually sets of libraries for a common task. Although, A.P.I. is used more in procedural programming than in object-oriented programming.

Is there still a difference between a library and an API?

Whenever I ask people about the difference between an API and a library, I get different opinions. Some give this kind of definition, saying that an API is a spec and a library is an implementation...
Some will tell you this type of definition, that an API is a bunch of mapped out functions, and a Library is just the distribution in compiled form.
All this makes me wonder, in a world of web code, frameworks and open-source, is there really a practical difference anymore? Could a library like jQuery or cURL crossover into the definition of an API?
Also, do frameworks cross over into this category at all? Is there part of Rails or Zend that could be more "API-like," or "libraryesque"?
Really looking forward to some enlightening thoughts :)
My view is that when I speak of an API, it means only the parts that are exposed to the programmer. If I speak of a 'library' then I also mean everything that is working "under the hood", though part of the library nevertheless.
A library contains re-usable chunks of code (a software program).
This re-usable code in the library is linked to your program through APIs
(Application Programming Interfaces). That is, the API is an interface to the library through which the re-usable code is linked to your application program.
In simple terms, it can be said that an API is an interface between two software programs which facilitates the interaction between them.
For example, in procedural languages like C, the library math.c contains the implementations of mathematical functions such as sqrt, exp, log, etc. It contains the definitions of all these functions.
These functions can be referenced by using the API math.h, which describes and prescribes the expected behavior.
That being said, an API is a specification (math.h tells you about all the functions it provides, their arguments, the data they return, etc.) and a library is an implementation (math.c contains all the definitions of these functions).
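A tiny made-up illustration of that split (mymath.h / my_sqrt are invented names for the example, not the real libm sources): the header plays the role of the API, the compiled implementation plays the role of the library.

```cpp
// mymath.h -- the API: declares what is available and how it must be called.
double my_sqrt(double x);   // returns the non-negative square root of x

// mymath.c -- the library: the implementation hidden behind that API.
double my_sqrt(double x) {
    if (x <= 0.0) return 0.0;          // simplistic handling of edge cases
    double guess = x;
    for (int i = 0; i < 32; ++i)       // Newton-Raphson iteration
        guess = 0.5 * (guess + x / guess);
    return guess;
}
```

A caller only ever includes mymath.h and links against the compiled object; the implementation could be swapped for a faster one without the caller noticing, which is exactly the specification-vs-implementation distinction above.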
An API is the part of a library that defines how it will interact with external code. Every library has an API; the API is the sum of all its public/exported parts. Nowadays the meaning of API has widened: we might also call the way a web site/service interacts with code an API. You can also say that some device has an API: the set of commands you can call.
Sometimes these terms get mixed together. For example, you have some server app (like TFS). It ships with an API, and this API is implemented as a library. But that library is just a middle layer between you and the server, not the thing that executes your calls. If, however, the library itself contains all the action code, then we can't say that this library is the API.
I think that a library is the set of all classes and functions that can be used from our code to do our task easily. But a library can also contain some private functions for its own use which it does not want to expose.
The API is the part of the library which is exposed to the user. So whatever documentation we have regarding a library, we call it API documentation, because it covers only those classes and functions to which we have access.
We first have to define an interface...
Interface: the means by which two "things" talk to each other and exchange information. A "thing" could be (1) a human or (2) running code of any sort (e.g. a library, a desktop application, an OS, a web service, etc.).
If a human wants to talk to a program, he needs a graphical user interface (GUI) or a command line interface (CLI). Both are types of interfaces that humans (but not programs) would like to use.
If, however, running code (of any sort) wants to talk to other running code (of any sort), it doesn't need or want a GUI or CLI; it rather needs an Application Programming Interface (API).
So, to answer the original poster's question: a library is a type of running code, and the API is the means by which this running code talks to other running code.
In clear and concise language:
Library: the collection of all classes and methods stored for re-usability.
API: the part of the library's classes and methods which can be used by a user in his/her code.
From my perspective, whatever functions are accessible to the invoker can be called the API of a library file; the library file also has some functions which are private and which we cannot access.
There are two cases when we speak or think of an API:
A computer program using a library
Everything else (the wider meaning)
I think that, in the first case, thinking in terms of an API is confusing. That's because we always use a library. There are only libraries. An API without a library doesn't exist, yet there's a tendency to think in such terms.
How about The Standard Template Library (STL) in C++? It's a software library.
You can have different libraries with the same API, meaning the same set of available classes, objects, methods, functions, procedures, or whatever terms you like in a given programming language. So it can be said that we have different implementations of some "standard" library.
An analogy may be this: SQL is a standard, but it can have different implementations. What you use is always some SQL engine which implements SQL. You may stick to the standard set of features or use extended features specific to that implementation.
And what is "under the hood" in the library is not your concern, except for differences in efficiency between implementations of that library.
Of course, I'm aware that this way of thinking is not a "generally binding standard". A lot of new terms have been created that are not always clear, precise, or intuitive, which brings some confusion. When Oracle speaks about Collections, it's not a library and it's not an API, it's a "Collections Framework".
Without using technical terms, I would like to share my understanding of API vs. library.
The way I distinguish 'library' and 'API' is by imagining a situation where I go to a book library. When I go there, I request the book I need from a 'librarian', without knowing how the entire library is managed.
I make a simple relation between them like this:
Library = a book library which has a whole system and staff to manage books.
API = a librarian who provides me simple access to the book I need.

Symbian and OpenC Applications

I have been researching around trying to find the best way to begin developing an application which aims to analyse user's writing styles based on outgoing SMS messages. I have installed Symbian's SDK and Carbide and purchased a book on their specific style of C++ to get started. However, I was told to check out Open C for Symbian as I have some previous C experience. I have installed the plugin from http://www.forum.nokia.com/Resources_and_Information/Explore/Runtime_Platforms/Open_C_and_C++/ and tested a simple Hello, World! application with success.
Although the initial success would lead me to believe Open C is the better option for me, I am worried about the limitations of using Open C. For example, I need to be able to access native functions of the Symbian OS to capture keystrokes while in the SMS composer. I also need to be able to run my application in the background and have it load on system startup, so as not to interfere with the user's normal activities.
Can someone clarify whether Open C can access such functions and fulfil my needs in terms of developing this specific application? Also, what are the limitations of using Open C in comparison to standard Symbian C++?
I'm by no means a Symbian guru, but we've used the Open C/C++ plugin for Symbian here. My understanding is that the plugin is simply an extension -- it gives you the standard libraries and lets you deal with familiar functions (in our case, just the simple cstring.h and stdio.h libraries were what we were looking for).
You can still mix and match the Symbian calls, and you will likely have to deal with some painful conversions to get your char* into the proper "descriptor". However, you should only have to do these conversions at the interfaces where you're touching existing Symbian libraries (as they're going to expect descriptors, not char*s).
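As a rough illustration of the kind of conversion meant here (a hedged sketch only; whether you need an 8-bit or 16-bit descriptor depends on the particular Symbian API, and the function name is made up):

```cpp
// Wrapping a plain char* so it can be handed to a Symbian API that expects
// a descriptor. Many native APIs want 16-bit descriptors, hence the widening copy.
#include <e32base.h>
#include <string.h>

void PassToSymbianApiL(const char* aCStr)
    {
    // Non-owning 8-bit view over the C string.
    TPtrC8 narrow(reinterpret_cast<const TUint8*>(aCStr), strlen(aCStr));

    // Widen into a heap descriptor for APIs taking a const TDesC& (16-bit).
    HBufC* wide = HBufC::NewLC(narrow.Length());
    wide->Des().Copy(narrow);

    // ... call the Symbian API that expects a const TDesC& here ...

    CleanupStack::PopAndDestroy(wide);
    }
```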
In our code, we have some places where we're using a remove() call to delete files and, in the same class, creating the detailed Symbian RFs and RFile objects.
So yes, while we use C/C++ libraries to do some low-level stuff and a lot of string manipulation, we're also using the Web Browser Control, key input monitoring and all that.
...And yes, we need to clean up our code. :-)
Open C provides a set of standard C libraries for Symbian OS programs; i.e., it is a library.
This means you can call Open C code and Symbian native code freely in the same program, just as with any other library, provided you respect the preconditions and assumptions that the libraries require.
This is where the complexity comes in, because the standard Symbian APIs often require things like descriptors and a working active scheduler, whereas the Open C libraries don't. But provided you're careful you can do what you want.
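To make the "mix freely in the same program" point concrete, here is a small hedged sketch of Open C and native Symbian file APIs sitting in the same function (file paths are invented for the example, and error handling on the Open C call is left out):

```cpp
// Open C and native Symbian calls mixed in one function, as described above.
#include <stdio.h>       // Open C: remove()
#include <e32base.h>     // native Symbian: CleanupStack, User
#include <f32file.h>     // native Symbian: RFs file server session

void DeleteBothWaysL()
    {
    // Open C style: plain C string path, errno-style error reporting.
    remove("c:\\data\\old_log.txt");

    // Native Symbian style: descriptors, an RFs session, Leave-based errors.
    RFs fs;
    User::LeaveIfError(fs.Connect());
    CleanupClosePushL(fs);
    fs.Delete(_L("c:\\data\\old_report.txt"));
    CleanupStack::PopAndDestroy(&fs);   // closes the file server session
    }
```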