What's the C++ side of an Emscripten XMLHttpRequest call?

I'm writing a program that I'd like to be able to compile natively and compile with Emscripten. I need to make synchronous HTTPS requests as part of that program.
How do I do that in C++? The JavaScript side makes sense, but I don't know what C++ code compiles down to an XMLHttpRequest.

There are a few answers to your question:
You can use a few functions in emscripten.h, such as emscripten_async_wget.
You can write a function in JavaScript yourself and call it from C++:
https://emscripten.org/docs/porting/connecting_cpp_and_javascript/Interacting-with-code.html
but the kicker is that you can't easily make a synchronous XMLHttpRequest and get binary data back: browsers disallow requesting a binary response type on a synchronous call. However, you can override the MIME type and convert the resulting text into a typed array yourself. It's the same technique as the hack in this link.
https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/Sending_and_Receiving_Binary_Data?#Receiving_binary_data_in_older_browsers
At first glance this sounds like a perfect solution, but if you are receiving a lot of data back, you will have to convert the returned string into a typed array yourself, and that is slow.
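For reference, here is a minimal sketch of that MIME-type-override technique using EM_JS from emscripten.h. The helper name fetch_binary_sync is made up for illustration, and depending on your Emscripten version you may need to link with -sEXPORTED_RUNTIME_METHODS=UTF8ToString,setValue so the runtime helpers used in the JS body are available:

#include <emscripten.h>
#include <cstdint>
#include <cstdio>
#include <cstdlib>

// Fetches url synchronously and returns a malloc'd buffer on the Emscripten heap;
// the byte count is written through out_len and the caller must free() the result.
EM_JS(uint8_t*, fetch_binary_sync, (const char* url, int* out_len), {
  var req = new XMLHttpRequest();
  req.open('GET', UTF8ToString(url), false);               // false = synchronous
  req.overrideMimeType('text/plain; charset=x-user-defined');
  req.send(null);
  var text = req.responseText;
  var ptr = _malloc(text.length);
  for (var i = 0; i < text.length; i++) {
    HEAPU8[ptr + i] = text.charCodeAt(i) & 0xff;           // strip the high byte
  }
  setValue(out_len, text.length, 'i32');
  return ptr;
});

int main() {
  int len = 0;
  uint8_t* data = fetch_binary_sync("https://example.com/data.bin", &len);
  std::printf("received %d bytes\n", len);
  std::free(data);
  return 0;
}

The byte-by-byte copy loop in the JS body is exactly the conversion cost mentioned above.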


Asking sample code for ISO 8583 verifone vx520

I would like sample code for sending a message to a server and getting a response back on a VeriFone Vx520 terminal using ISO 8583.
As noted in a comment on your question, this is not a code sharing site, so such an open-ended question is a bit difficult to answer, but perhaps I can get you started on the right foot.
First of all, let me start by suggesting that if you have control over both the terminal code and the server it will be talking to, you NOT use ISO8583. Yes, it's an industry standard and yes, it communicates data efficiently, BUT it is much more difficult to use than, say, VISA-1 or XML or JSON. That means you have more opportunities for bugs to creep into your code. It also means that if something goes wrong, it takes a lot more effort to figure out what happened and fix it. I have used all these protocols and others besides, and I'll tell you that ISO8583 is one of my least favorites to work with.
Assuming you do not have a choice and you must use ISO8583, it's worth noting that ISO8583 is nothing but a specification for how to assemble data packets in order to communicate. There is nothing special about the Vx520 terminal (or any other VeriFone terminal) that would change how you would implement it versus how you might do so on any other C++ platform, EXCEPT that VeriFone DOES provide you with a library for working with this spec that you are free to use or ignore as you see fit.
You don't need to use this library at all. You can roll your own and be just fine. You can find more information on the specification itself at Wikipedia, Code Project, and several other places (just ask your favorite search engine). Note that when I did my 8583 project, this library was not available to me. Perhaps I wouldn't have hated this protocol so much if I had had access to it... who knows?
If you are still reading this, then I'll assume that ISO8583 is a requirement (or you are a glutton for punishment) and that you are interested in trying out this engine that VeriFone has provided.
The first thing you will need to do (and hopefully, you have already done it) is to install ACT as part of the development suite (I also suggest you head over to DevNet and get the latest version of ACT before you get started...). Once installed, the library header can be found at %evoact%\include\iso8583.h. Documentation on how to use it can be found at %evoact%\docs. In particular, see chapter 6 of DOC00310_Verix_eVo_ACT_Programmers_Guide.pdf.
Obviously, trying to include a whole chapter's worth of information here would be out of scope, but to give you a high-level idea of how the engine works, allow me to share a couple excerpts:
This engine is designed to be table driven. A single routine is used for the assembly and disassembly of ISO 8583 packets, driven by the following structures:
Maps: one or more collections of 64 bits that drive packet assembly and indicate what is in a message.
Field table: defines all the fields used by the application.
Convert table: defines data-conversion routines.
Variant tables: optional tables used to define variant fields.
The process_8583() routine is used for the assembly and disassembly of ISO 8583 packets.
The guide gives an example of using process_8583() as follows:
#include "appl8583.h"
int packet_sz;
void assemble_packet ()
{
packet_sz = process_8583 (0, field_table, test_map, buffer, sizeof( buffer));
printf ("\ fOUTPUT SIZE %d", packet_sz);
}
void disassemble_packet ()
{
packet_sz = process_8583 (1, field_table, test_map, buffer, packet_sz);
printf ("\ fINPUT NOT PROCESSED %d", packet_sz);
}
To incorporate this engine into an application, modify the APPL8583.C
and APPL8583.H files so that each has all the application variables
required in the bit map and set up the map properly. Compile
APPL8583.C and link it with your application and the ISO 8583 library.
Use the following procedures to transmit or receive an ISO 8583 packet
using the ISO 8583 Interface Engine:
To transmit an ISO 8583 packet
1 Set data values in the application variables for those to transmit.
2 Call the prot8583_main() routine. This constructs the complete
message and returns the number of bytes in the constructed message.
3 Call write() to transmit the message.
To receive a message
1 Call read() to receive the message.
2 Call the process_8583() routine. This results in all fields being
deposited into the application variables.
3 Use the values in the application variables.
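To make those two procedures concrete, here is a rough sketch that reuses the process_8583() calls from the excerpt above (the guide's transmit procedure names prot8583_main() for the assembly step; the sketch sticks to process_8583() as shown in the sample). The com_handle parameter and the buffer size are placeholders, and the table names come from the sample code; check the ACT Programmer's Guide for the real signatures and return codes.

#include "appl8583.h"

#define BUF_SZ 2048

static char buffer[BUF_SZ];

void transmit_iso8583(int com_handle)
{
    int len;

    /* Step 1: the application variables referenced by the bit map are assumed
     * to already hold the values you want to send. */

    /* Step 2: assemble the complete message from the tables (0 = assemble). */
    len = process_8583(0, field_table, test_map, buffer, sizeof(buffer));

    /* Step 3: transmit it. */
    if (len > 0)
        write(com_handle, buffer, len);
}

void receive_iso8583(int com_handle)
{
    int len;

    /* Step 1: read the raw message from the host. */
    len = read(com_handle, buffer, sizeof(buffer));

    /* Step 2: disassemble it (1 = disassemble); the fields are deposited
     * into the application variables. */
    if (len > 0)
        process_8583(1, field_table, test_map, buffer, len);

    /* Step 3: the values are now available in the application variables. */
}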

equivalent of nevow.tags.raw for twisted.web.template

I'm trying to port pydoctor to twisted.web.template and have hit a pretty basic problem: pydoctor uses epydoc to render docstrings into HTML but I can't see a way to include this HTML in the generated page without escaping. What can I do?
There is, somewhat intentionally, no way to insert HTML into the page without parsing; twisted.web.template is a bit more of a stickler about producing correct output than nevow was.
There are a couple of ways around this.
Ultimately, your HTML is going to some kind of output stream. You could simply insert a renderer that returns a pair of Deferred objects, and does a .write to the underlying stream after the first one fires but before the second. Kind of gross, but it effectively expresses your intent :).
You can simply re-parse the HTML output of epydoc with XMLString or similar, so that twisted.web.template can write it out correctly. This will "waste" a little bit of CPU, but in my opinion it will be worth it for (A) the stress test it will give t.w.t and (B) the guarantee, presuming that t.w.t is correct, that you're emitting valid HTML.
As I was writing this answer, however, I realized that the second option isn't generally possible with arbitrary HTML under the current public API of twisted.web.template. Ideally, you could use html5lib to parse this stuff and then just dump the parsed input into your document tree.
If you don't mind mucking around with private API, you could probably hook up html5lib's SAX support to the internal SAX parser that we use to load templates.
Of course, the real solution is to fix the ticket you already filed, so you don't have to use private API outside of Twisted itself...

Reading Byte Data through the serial port in C++/CLI

I am trying to make an interface with another program so I have to use C++.
It's been years since I have programmed in C++ and I have been at this problem for about a week so I'm slowly starting to see how everything works.
I want to read byte data coming from a serial port device.
I have verified that I can get text through the serial port using the ReadLine method. For example:
String^ message = _serialPort->ReadLine();
is how the data is read in an MSDN example that I got to work successfully.
However, I have tried to modify it several times and I'm having no luck coming up with something that reads the data as bytes. (I already convert the byte data to a string so I can actually see the bytes, e.g. the value 15 showing up as 0f.)
Modifying the code to
wchar_t message = _serialPort->ReadLine();
gives me
error C2440: 'initializing' : cannot convert from 'System::String ^' to 'wchar_t'
I'm not familiar with ReadLine. Is it only for strings? I have verified that it does work with strings: if I use a serial device that sends a string, the first snippet above does work.
Can someone explain what method I could use to read byte data? Thanks.
If you actually want to use C++ rather than C++/CLI, I recommend using boost.asio. It is well established, relatively easy to understand, and has a specific set of functionality just for working with serial ports.
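For instance, a minimal native C++ sketch with boost.asio might look like the following. The port name, baud rate, and buffer size are placeholders for whatever your device needs, and older Boost releases spell io_context as io_service:

#include <boost/asio.hpp>
#include <cstdio>
#include <vector>

int main() {
    boost::asio::io_context io;
    boost::asio::serial_port port(io, "COM3");   // e.g. "/dev/ttyUSB0" on Linux

    using base = boost::asio::serial_port_base;
    port.set_option(base::baud_rate(9600));
    port.set_option(base::character_size(8));
    port.set_option(base::parity(base::parity::none));
    port.set_option(base::stop_bits(base::stop_bits::one));

    std::vector<unsigned char> buf(64);
    // read_some() blocks until at least one byte arrives and returns the count.
    std::size_t n = port.read_some(boost::asio::buffer(buf));
    for (std::size_t i = 0; i < n; ++i)
        std::printf("%02x ", buf[i]);
    std::printf("\n");
    return 0;
}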
Update
Pure C++ Win32 API versions:
See the following good references
CodeProject article
MSDN
Is there any specific reason you are doing this in C++/CLI code?
I thought you might not even be aware of that (otherwise, tag your questions, please).
String^, ReadLine, etc. are CLR functions (i.e. .NET; think "you could do this more easily in C#"). So, again:
If there is a need for this to be in C++, why don't you look at the native Win32 API?
Otherwise, why are you bothering with C++?
If you really want C++/CLI, I suggest not mixing native and managed code when handling the serial I/O. You can use an UnmanagedMemoryStream to marshal the data in and out of managed land.
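If you do stay with C++/CLI, note that SerialPort also exposes byte-oriented reads (Read and ReadByte), so you never have to go through a string at all. A short sketch, with the port name and buffer size as placeholders:

using namespace System;
using namespace System::IO::Ports;

int main(array<String^>^ args)
{
    SerialPort^ port = gcnew SerialPort("COM3", 9600);
    port->Open();

    // Read() fills a managed byte array and returns how many bytes actually arrived.
    array<Byte>^ buf = gcnew array<Byte>(64);
    int n = port->Read(buf, 0, buf->Length);
    for (int i = 0; i < n; ++i)
        Console::Write("{0:X2} ", buf[i]);

    // ReadByte() returns a single byte at a time, as an int.
    int b = port->ReadByte();
    Console::WriteLine("next byte: {0:X2}", b);

    port->Close();
    return 0;
}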
$0.02

Is there a way to mix MonoTouch and Objective-C?

I'd like to know if there is a way to mix C# and Obj-C code in one project. Specifically, I'd like to use Cocos2D for my UI in Obj-C and call a MonoTouch C# library that does some computations and returns some values. Is there a way to do this? Or maybe the other way around, i.e. building in MonoTouch and calling Cocos2D functions?
Thanks.
The setup that you describe is possible, but the pipeline is not as smooth as it is when you do your entire project in MonoTouch. This is in fact how we bootstrapped MonoTouch: we took an existing Objective-C sample and we then replaced the bits one by one with managed code.
We dropped those samples as they bit-rotted.
But you can still get this done: use mtouch's --xcode command-line option to generate a sample program for you, and then copy the bits that you want from the generated template.m into your main.m. Customize the components that you want, and just start the XCode project from there.
During your development cycle, you will continue to use mtouch --xcode.
Re: unknown (google):
We actually did this as described.
See this page for a quick start, but note that the last code segment on that page is wrong: it omits the "--xcode" parameter.
http://monotouch.net/Documentation/XCode
What you have to do to embed your Mono-EXE/DLL into an Objective-C program is to compile your source with SharpDevelop, then run mtouch with these parameters:
/Developer/MonoTouch/usr/bin/mtouch --linksdkonly --xcode=output_dir MyMonoAssembly.exe
This only works with the full version of MonoTouch; the trial does not allow the "--xcode" argument. The "--linksdkonly" argument is needed if you want mtouch to keep unreferenced classes in the compiled output; otherwise it strips unused code.
Then mtouch compiles your assembly into native ARM code (.s files) and also generates an XCode template that loads the Mono runtime and your code inside the XCode/Obj-C program. You can use this template right away and add your Obj-C code, or extract the runtime-loading code from the "main.m" file and insert it into your existing XCode project. If you use an existing project, you also have to copy all the .exe/.dll/.s files from the xcode output directory that mtouch generated.
Now you have your Mono-Runtime and assembly loaded in an XCode-project. To communicate with your assembly, you have to use the Mono-Embedding-API (not part of MonoTouch, but Mono). These are C-style API calls. For a good introduction see this page.
Also the Mono-Embedding-API documentation might be helpful.
What you have to do now in your Obj-C code is make Embedding API calls. The steps are roughly: get the application domain, open the assembly, get the assembly's image, locate the class you want to use, instantiate an object of that class, look up the methods on the class, pack the method arguments into a C array, invoke the methods on the object, and extract the return values.
There are examples for this on the embedding-api-doc-page above.
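To make that list concrete, here is a hedged C sketch of the sequence, assuming the mtouch-generated template has already initialized the runtime and that your assembly is MyMonoAssembly.exe as above. The class and method names (MyLib.Calculator, Add) are invented for illustration only:

#include <mono/jit/jit.h>
#include <mono/metadata/assembly.h>
#include <mono/metadata/class.h>
#include <mono/metadata/object.h>

int call_managed_add(int a, int b)
{
    /* Get the current application domain (the runtime is already up). */
    MonoDomain *domain = mono_domain_get();

    /* Open the assembly and get its image. */
    MonoAssembly *assembly = mono_domain_assembly_open(domain, "MyMonoAssembly.exe");
    MonoImage *image = mono_assembly_get_image(assembly);

    /* Locate the class and create an instance (runs the default constructor). */
    MonoClass *klass = mono_class_from_name(image, "MyLib", "Calculator");
    MonoObject *obj = mono_object_new(domain, klass);
    mono_runtime_object_init(obj);

    /* Find "int Add(int, int)" and pack its arguments into a C array. */
    MonoMethod *method = mono_class_get_method_from_name(klass, "Add", 2);
    void *args[2] = { &a, &b };

    /* Invoke it and unbox the boxed int return value. */
    MonoObject *result = mono_runtime_invoke(method, obj, args, NULL);
    return *(int *)mono_object_unbox(result);
}

In a real program you would also check each of these pointers for NULL and pass an exception out-parameter to mono_runtime_invoke instead of NULL.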
You just have to be careful with memory consumption of your library, as the mono runtime takes some memory as well.
So this is the way from Obj-C to C#. If you want to make calls from C#/Mono into your Obj-C-program, you have to use the MonoTouch-bindings, which are described here.
You could also use pure C-method calls from the embedding/P/Invoke-API.
Hope this gets you started.
Over the weekend it emerged that someone has been porting Cocos2D to .NET, so you could also do the whole thing in .NET:
http://github.com/city41/CocosNet
Cocos2D started as a Python project, that later got ported to Objective-C, and now there is an active effort to bring it to C#. It is not finished, but the author is accepting patches and might be a better way forward.
Calling Objective-C from MonoTouch definitely looks possible. See the Objective-C selector examples
What library are you calling? Perhaps there's an Objective-C equivalent.

How to locally test cross-domain builds?

Using the dojo toolkit, what is the proper way of locally testing code that will be executed as cross-domain, without making the actual build?
As it appears, there are three possible options (each with its own drawbacks):
1. Using local (non-xd) XMLHttpRequest dojo.require
This option does not really test the xd behavior, since it dojo.require[s] the js synchronously via XHR.
2. djConfig.debugAtAllCosts = true;
Although this option does load the required code asynchronously (via the 'script' tag), it still pulls the code in via XHR, parses the dojo.require[s] inside it, and pulls those in as well. This (using the loader_debug), again, is not what the loader_xd does. More info on this topic in a different question.
3. Creating a cross-domain build
This approach requires a build, which is not possible in the environment in which I'm running the code. (We're using our own on-the-fly build process, which includes only the js that is necessary for a particular page; this process is not suitable for development.)
Thus, my question: is there a way to use the loader_xd, which does not require an xd build (which adds the xd prefix / suffix to every file)?
The second option (using debugAtAllCosts) also makes me question the motivation for pre-parsing the dojo.require[s]. If the loader_xd will not (or rather cannot) pre-parse, why does the method that was created for testing/debugging do so?
peller has described the situation. If you wanted to just generate .xd.js file for your modules, you could look at util/buildscripts/jslib/buildUtilXd.js and its buildUtilXd.xdgen() function.
It would take a bit of work to make your own script, but you could look at util/buildscripts/build.js for pointers.
I am hoping that in the future (maybe in the Dojo 2.x timeframe) we can switch to a loader that just uses script tags, with a module format that wraps each module in a function written by the developer. This would allow the same module format to work in both the local and xd cases.
I don't think there's any way to do XD loading without building and deploying it. Your analysis of the various options seems about right.
debugAtAllCosts is there specifically to solve a debugging problem, where most browsers, until recently, could not do anything intelligent with code brought in through eval. Still today, Firefox will report exception in the console as appearing at the eval site (bootstrap.js) with a line number offset from the eval, rather than from the actual eval buffer, and normally that eval buffer is anonymous. Firebug was the first debugger to jump through some hoops to enhance the debugging experience and permitted special metadata that Dojo's loader injects between the XHR and the eval to determine a filepath to the source. Webkit/Safari have recently implemented this also. I believe debugAtAllCosts pre-dates the XD loader.