I would like to know: if I code an EA on a normal MetaTrader 4 platform, can I reuse the .ex4 on another trading platform, for example InstaTrader?
The reason I ask is that when I create a new EA in InstaTrader, the EA code it generates is different from the one generated by MetaTrader 4, and I couldn't find any documentation on InstaTrader's EAs.
Has anyone encountered this before?
No. MQL is the language specifically for MetaEditor, which is part of the MetaTrader platform.
Other trading platforms may have their own scripting languages.
Metatrader4
In principle, Metatrader4 uses metalang.exe to compile MQL4 source-code files into an "internally" executable format, EX4.
As defined, EX4 is binary-executable on all Metatrader4 terminals.
White-Labeled Terminals
InstaTrader(TM) and many other *-Trader(TM)s are so-called white-label modifications of the same MetaQuotes, Inc. software product, the [Metatrader 4 Terminal]; they are just individually "skinned" in the name of the respective broker, who has bought from MetaQuotes, Inc. a license for a suite of [Metatrader 4 Server + Metatrader 4 Risk Management + Metatrader 4 Dealer Desk + ... ], including, but not limited to, the right to re-label the client Terminal program.
Thus, in most situations, your EX4 code will run on any other re-labeled Terminal.
But...
Restrictions on binary compatibility apply, as Metatrader4 terminals are released in so-called Builds (Build 432 -> Build 468 -> Build 509 -> ... -> Build 600 -> Build 624), and some of these have also changed the binary-code format.
Thus the EX4 code must be hosted on a "similar" generation of the Terminal Build.
Finally...
The ultimate show-stopper is the MetaQuotes, Inc. licensing policy, which enforces server-side locking: [Metatrader 4 Server] has a setting to reject connection requests from client Terminals whose Build # is lower than a threshold set on the Server side.
There the SLM story ends. Forever.
I'm working on a big optimization problem in GAMS, so I can't post the entire code here, but I hope you can help me with where I am stuck. I have 4 power nodes in my model that are connected by 2 bidirectional transmission lines (r), like this.
where r_a, r_b are the current transmission line capacities. Power can flow in both directions, and I'm tracking power going from A to A' and B to B' as well as from A' to A and B' to B. So there are 4 power flows (f) on 2 transmission lines (r). My decision variables are how much capacity upgrade (c(f)) I need to build on each of these lines to satisfy additional power-flow needs. So in GAMS I minimize the cost of upgrading as:
investment_cost.. cap_cost =e= sum(f,c(f)*capCost(f));
Here capCost(f) is the capital cost of upgrading the transmission line capacity to carry 1 extra GW per hour.
My constraint: In each time period (t), the total power flow p(f,t) must be less than or equal to the existing line capacity + newly upgraded capacity:
line_cap(f,t).. old_line_cap(f)+c(f) =g= p(f,t);
However, my solution looks something like this:
fAA': 6.7 (built 6.7 GW more of capacity in line from A to A')
fA'A: 5.0 (built 5.0 GW more of capacity in line from A' to A)
fBB': 5.5 (built 5.5 GW more of capacity in line from B to B')
fB'B: 8.1 (built 8.1 GW more of capacity in line from B' to B)
But this is not right because if I upgrade line AA' by 6.7 GW, I don't need to upgrade line A'A, since they are the same line. Basically, I pay twice to upgrade the same line.
To fix this, I'm trying to use an Alias, like this:
Alias(f,ff);
line_cap(f,t).. old_line_cap(f)+c(f)+ sum((f,ff)$[line_source(f)=line_sink(ff) and line_source(ff)=line_sink(f)],c(ff)) =g= p(f,t);
But that still does not fix my problem.
I'd appreciate any help! Thank you!
If you want to stick with your 4 fs, let's call them your virtual lines (since each pair of these virtual lines actually forms one physical line). Now, let's introduce a new set of physical lines (fPhys) and a mapping between the virtual and the physical lines (fMap):
Set fPhys / fAA, fBB /
    fMap(f, fPhys) / ("fA'A", "fAA'").fAA
                     ("fB'B", "fBB'").fBB /;
Now, you can stick with using the set f for your flows, and use fPhys for the investment decision, like this:
investment_cost.. cap_cost =e= sum(fPhys,c(fPhys)*capCost(fPhys));
line_cap(f,t).. sum(fMap(f,fPhys), old_line_cap(fPhys)+c(fPhys)) =g= p(f,t);
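For these two equations to compile, c, capCost, and old_line_cap must now be declared over fPhys instead of f. A minimal sketch of the adjusted declarations (the descriptive texts are my assumptions):
Parameter capCost(fPhys)       'capital cost per GW of extra capacity'
          old_line_cap(fPhys)  'existing capacity of each physical line (GW)';
Positive Variable c(fPhys)     'capacity upgrade built on each physical line (GW)';
Variable          cap_cost     'total investment cost';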
Scenario:
There are 5 Links in the Home page:
Link 1
Link 2
Link 3
Link 4
Link 5
Each of the above links is a separate test case, so there are a total of 5 test cases.
Not all of the links may be present on all the sites, according to the requirements.
So I need to write Robot Framework test cases which work dynamically for all the sites: one site may have only 3 links, while another has all 5. So it is like SKIPPING a particular test case if that link is not present.
*** Keywords ***
Go to Manage Client Reports
    Click Link    link:Manage Client Reports
Can anyone help?
In the upcoming Robot Framework release 4.0, a new skipped test status will be introduced. Here is a brief status of the release milestone:
Past due by 27 days, 87% complete
Major release concentrating on adding the skip status (#3622), IF/ELSE
(#3074) and enhancing the listener API (#3296 and #3538). Last major
release to support Python 2.
So it could be ready any time now.
This is what you will get with the new SKIP status (#3622): there will be Skip and Skip If keywords, and more (a usage sketch follows the quoted notes below).
How to skip tests
There are going to be multiple ways:
A special exception that library keywords can use to mark a single test to be skipped. See also #3685.
BuiltIn keyword Skip (or Skip Test and Skip Task) that utilizes the aforementioned exception.
BuiltIn keyword Skip If to skip based on a condition.
When the skipping exception is used in a suite setup, all tests in the suite are skipped.
Command line option --skip to unconditionally skip tests based on tags. Similar to --exclude but skipped tests are shown in logs/reports
with a skip status and not dropped from execution altogether.
Command line option --skiponfailure to skip tests if they fail. Similar effect to the current --noncritical.
What about criticality
As already discussed in #2087, the skip status is a very similar feature to Robot's current criticality concept. There are many people who would like to have both, but I don't think that's a good idea and believe it's better to remove criticality when skipping is added. Separate issue #3624 covers removing criticality and explains this in more detail.
Colors
Skip status needs a specific color to match current pass (green) and
fail (red). Yellow feels like a good candidate with a traffic light
metaphor, but I'm open for other ideas and we could possibly change
other colors as well. Probably should make colors configurable too --
currently only report background colors support it.
Report background color mentioned above needs some thinking as well.
Currently it's either green or red, but with the added skip status we
could use also yellow or whatever skip color we decide to use.
Different scenarios where different colors could be used are listed
below (assuming green/yellow/red scheme):
All tests pass. This is naturally green.
Any test fails. This is naturally red.
Any test is skipped (no failures). This probably should be green but could also be yellow.
All tests skipped. This could be yellow. Could also be green but that's a bit odd if all tests are yellow.
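Once 4.0 is out, a test can then guard itself with the new keyword. A minimal sketch, assuming SeleniumLibrary's Get Element Count and Click Link keywords and one of the link names from the question:
*** Settings ***
Library    SeleniumLibrary

*** Test Cases ***
Manage Client Reports Link
    ${count}=    Get Element Count    link:Manage Client Reports
    Skip If    ${count} == 0    Link is not present on this site
    Click Link    link:Manage Client Reports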
Depending on your deadlines, you might not be able to wait for this release; nevertheless, it is good to know about.
There is a more advanced solution in which you generate your test cases at run time. To do so, you have to implement a small library that also acts as a listener. This way it can have a start_suite method that will be invoked with the suite(s) as Python objects (robot.running.model.TestSuite), and you can use those objects along with Robot Framework's API to create new test cases. The idea below was inspired by and is based on this blog post: Dynamically create test cases with Robot Framework.
DynamicTestLibrary.py:
from robot.running.model import TestSuite


class DynamicTestLibrary(object):
    ROBOT_LISTENER_API_VERSION = 3
    ROBOT_LIBRARY_SCOPE = 'GLOBAL'
    ROBOT_LIBRARY_VERSION = 0.1

    def __init__(self):
        self.ROBOT_LIBRARY_LISTENER = self
        self.top_suite = None

    def _start_suite(self, suite, result):
        self.top_suite = suite
        self.top_suite.tests.clear()  # remove placeholder test

    def add_test_case(self, keyword, *args):
        tc = self.top_suite.tests.create(name=keyword)
        tc.keywords.create(name=keyword, args=args)
globals()[__name__] = DynamicTestLibrary
UPDATE for Robot Framework 4.0
Due to the backward-incompatible changes made in the 4.0 release (the running and result models have changed), the add_test_case function should be changed as shown below if you are using version 4.0 or above.
def add_test_case(self, name, keyword, *args):
    tc = self.top_suite.tests.create(name=name)
    tc.body.create_keyword(name=keyword, args=args)
You can utilize this library in a suite setup, in which you check which links are present and add test cases for the ones that are available.
test.robot
*** Settings ***
Library        DynamicTestLibrary
Suite Setup    Check Links And Generate Test Cases

*** Variables ***
#@{LINKS}    Manage Clients                                                       # test input 1
@{LINKS}     Manage Clients    Manage Client Hardware                             # test input 2
#@{LINKS}    Manage Clients    Manage Client Hardware    Manage Client Reports    # test input 3

*** Test Cases ***
Placeholder
    [Documentation]    Placeholder test that will be removed during execution.
    No Operation

*** Keywords ***
Check Links And Generate Test Cases
    FOR    ${link}    IN    @{LINKS}
        DynamicTestLibrary.Add Test Case    Go to ${link}
    END

Go to Manage Client Reports
    Log Many    Click Link    link:Manage Client Reports

Go to Manage Client Hardware
    Log Many    Click Link    link:Manage Client Hardware

Go to Manage Clients
    Log Many    Click Link    link:Manage Clients
Go to ${link} resolves to the appropriate keyword name, which will be called in a test case with the same name. You can check with each example input list that the number of executed tests equals the length of the list.
Here is the output:
# robot --pythonpath . test.robot
==============================================================================
Test
==============================================================================
Go to Manage Clients | PASS |
------------------------------------------------------------------------------
Go to Manage Client Hardware | PASS |
------------------------------------------------------------------------------
Test | PASS |
2 critical tests, 2 passed, 0 failed
2 tests total, 2 passed, 0 failed
==============================================================================
I have an i.MX7 SOM. I want to build a Yocto image which I can dd onto a USB stick to boot from. I believe that I want an hddimg image, but cannot see how to create one (I have an sdimg which works perfectly).
I would appreciate advice.
I have set IMAGE_FSTYPES to "hddimg", but I get "ERROR: Nothing PROVIDES 'syslinux'".
The SOM is the Technexion i.MX7. Layers are:
layer path priority
=======================================================
meta sources/poky/meta 5
meta-poky sources/poky/meta-poky 5
meta-oe sources/meta-openembedded/meta-oe 6
meta-multimedia sources/meta-openembedded/meta-multimedia 6
meta-freescale sources/meta-freescale 5
meta-freescale-3rdparty sources/meta-freescale-3rdparty 4
meta-freescale-distro sources/meta-freescale-distro 4
meta-powervault sources/meta-powervault 6
meta-python sources/meta-openembedded/meta-python 7
meta-networking sources/meta-openembedded/meta-networking 5
meta-virtualization sources/meta-virtualization 8
meta-filesystems sources/meta-openembedded/meta-filesystems 6
meta-cpan sources/meta-cpan 10
meta-mender-core sources/meta-mender/meta-mender-core 6
meta-mender-freescale sources/meta-mender/meta-mender-freescale 10
Nope, you certainly do not want an hddimg, as this is a mostly deprecated format for x86 systems. On ARM, you almost never want syslinux :-)
Usually your SOM comes with a Board Support Package in the form of a layer, which includes the MACHINE definition, which in turn defines the IMAGE_FSTYPES that this machine wants for booting. If in doubt, consult the manual or ask your vendor.
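To see what the BSP's machine configuration actually ends up with, and to extend rather than replace it, a quick sketch (the image recipe name is just an example, and wic.gz merely illustrates the append mechanism):
# Show the effective value produced by the machine/BSP configuration:
#   bitbake -e core-image-minimal | grep '^IMAGE_FSTYPES='
#
# If you do need an extra type, append it in conf/local.conf rather than
# overwriting the BSP's default (wic.gz is only an illustration here and
# would also need a suitable WKS_FILE):
IMAGE_FSTYPES_append = " wic.gz"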
Having said that, if you specify the SOM and the layers in use, we can take a look if they are publicly accessible; without those details it is impossible to give a proper answer.
I want to define a Ring Group that, when called, rings one extension and one external number (mobile phone). What is the best way to achieve that?
Right now only the extension is called, so just entering an external number in the Destination field does not work; the logs say
[NOTICE] switch_cpp.cpp:1376 [ring groups][call forward all] user_exists id <mobileno> <domainname>
and later
[DEBUG] switch_ivr_originate.c:3865 Originate Resulted in Error Cause: 27 [DESTINATION_OUT_OF_ORDER]
It checks all calls against the dialplan to see if the destination is a local number; for an external number it should say user_exists false every time.
The [DESTINATION_OUT_OF_ORDER] indicates that it may not have found an outbound route matching the number of digits of the external phone number. Or it may mean that your carrier rejected the call, perhaps because it didn't like the caller ID that was sent. The easiest thing to try is to attempt the call with an outbound route to a different carrier.
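For context, an outbound route in FusionPBX ends up as a FreeSWITCH dialplan condition on the dialed digits, so the pattern has to cover the external number's digit count. A minimal sketch, where the gateway name and number pattern are assumptions:
<condition field="destination_number" expression="^(07\d{9})$">
    <action application="bridge" data="sofia/gateway/my-carrier/$1"/>
</condition>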
In case you weren't aware, FusionPBX 4.4 was released on 5 April 2018. Instructions to upgrade are posted on docs.fusionpbx.com; search for "upgrade" (version upgrade).
I am referring to the TRM of the DM3730 and modifying the pad configurations of an EVM 3530 accordingly. I couldn't properly understand the following.
1) What are CORE power domain and WKUP power domain?
2) What is core control module and Wake-Up control module?
3) The above two questions may be completely hardware-oriented, but the reason I'm asking is that in the EVM 3530 source code's pad configurations, certain pins are defined with PAD_ENTRY and certain others with WKUP_PAD_ENTRY. What makes the difference?
#define PAD_ENTRY(x,y) {PAD_ID(x),y,0},
#define WKUP_PAD_ENTRY(x,y) {WKUP_PAD_ID(x),y,0},
#define I2C3_PADS \
PAD_ENTRY(I2C3_SCL, INPUT_ENABLED | PULL_RESISTOR_DISABLED | MUXMODE(0)) \
PAD_ENTRY(I2C3_SDA, INPUT_ENABLED | PULL_RESISTOR_DISABLED | MUXMODE(0))
#define I2C4_PADS \
WKUP_PAD_ENTRY(I2C4_SCL, INPUT_ENABLED | PULL_RESISTOR_DISABLED | MUXMODE(0)) \
WKUP_PAD_ENTRY(I2C4_SDA, INPUT_ENABLED | PULL_RESISTOR_DISABLED | MUXMODE(0))
Any kind of guidance is welcome.
WKUP provides functions for sections of the OMAP SoC to come out of power-saving mode.
A power domain can be turned on/off without affecting the others (4.1.3.2). The WKUP power domain is continuously active; it allows switching the others. The CORE power domain comprises the interconnect, memory, and peripheral core functions.
The wake-up control module and the core control module provide for saving and restoring the pad configurations (7.3) when a domain is switched off.
It looks like the pads which can be configured as I2C4 SCL/SDA can also be configured with wake-up capabilities. In your code base (a Windows CE 6 BSP?) a macro different from the generic PAD_ENTRY is therefore appropriate; probably there is an error check on (x) to confirm that the pad ID is valid. The non-wakeup-related macro parameters should work the same for you; there won't be a difference.
Section references are to the OMAP35x TRM.
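Purely as an illustration of why two separate macros exist: the pad ID typically encodes which control module's register bank the PADCONF register belongs to, so the pad-configuration code writes the right registers and saves/restores the right context. The names and encoding below are hypothetical placeholders, not values from the TRM; the real PAD_ID/WKUP_PAD_ID definitions live in the BSP's pad headers.
/* Hypothetical sketch only; real base addresses and offsets come from the TRM. */
#define PAD_ID(x)       (OMAP_CORE_CTRL_BASE + CORE_PADCONF_OFFSET_##x)   /* core control module    */
#define WKUP_PAD_ID(x)  (OMAP_WKUP_CTRL_BASE + WKUP_PADCONF_OFFSET_##x)   /* wake-up control module */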