Hardware implementations of WebAssembly

I have been poking around some websites and discovered WebAssembly, and I was intrigued by the fact that implementing it means creating a virtual machine, along with an instruction set.
Is it theoretically possible to implement WebAssembly in hardware? Does the VM lack any capabilities that could not be provided by external functions?

Theoretically, yes. Someone started developing an initial FPGA implementation called WASM Metal, but I believe it has since been abandoned. Notably, folks like Brendan Eich are skeptical of its utility.

Wasm was designed for just-in-time compilation, so there are some minor complications that make direct execution slightly more involved (e.g., the way branch targets are addressed). Some future extensions, such as garbage collection support, might also be less straightforward, though an implementation will be allowed to omit those.
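To make the branch-target point concrete, here is a toy sketch in Python (entirely hypothetical, not any real engine's code) of how a Wasm `br` names an enclosing block by relative nesting depth rather than by instruction address; a JIT flattens these depths into concrete jump targets at compile time, which is exactly the work a hardware decoder would have to take on itself:

```python
# Toy sketch: Wasm's `br` names an enclosing block by relative depth,
# not by a machine address. This interpreter resolves the depth by
# unwinding nested calls, the way a naive direct-execution engine would.

def run(body, stack):
    """Execute a list of instructions. Returns the remaining branch
    depth if a `br` is unwinding out of this block, else None."""
    for instr in body:
        op = instr[0]
        if op == "const":
            stack.append(instr[1])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "block":              # ("block", [nested body])
            depth = run(instr[1], stack)
            if depth is None:
                continue                  # block finished normally
            if depth == 0:
                continue                  # branch targeted this block: exit it
            return depth - 1              # keep unwinding outer blocks
        elif op == "br":                  # ("br", relative depth)
            return instr[1]
    return None

# ("br", 1) exits the inner block *and* the outer one, skipping const 99.
program = [
    ("block", [
        ("block", [("const", 1), ("br", 1)]),
        ("const", 99),                    # never executed
    ]),
    ("const", 2),
    ("add",),
]
stack = []
run(program, stack)
print(stack)  # [3]
```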
But yes, in principle it should be possible (and useful!) to implement Wasm in hardware. I am aware of some people/projects looking into this idea, but none of them have announced anything publicly yet.

Related

Possible to share information between an add-on to an existing program and a standalone application?

I'm looking at building a Cocoa application on the Mac with a back-end daemon process (really just a mostly-headless Cocoa app, probably), along with 0 or more "client" applications running locally (although if possible I'd like to support remote clients as well; the remote clients would only ever be other Macs or iPhone OS devices).
The data being communicated will be fairly trivial, mostly just text and commands (which I guess can be represented as text anyway), and maybe the occasional small file (an image possibly).
I've looked at a few methods for doing this but I'm not sure which is "best" for the task at hand. Things I've considered:
Reading and writing to a file (…yes), very basic but not very scalable.
Pure sockets (I have no experience with sockets, but I seem to think I can use them to send data locally and over a network, though it seems cumbersome if doing everything in Cocoa)
Distributed Objects: seems rather inelegant for a task like this
NSConnection: I can't really figure out what this class even does, but I've read about it in some IPC search results
I'm sure there are things I'm missing, but I was surprised to find a lack of resources on this topic.
I am currently looking into the same questions. For me the possibility of adding Windows clients later makes the situation more complicated; in your case the answer seems to be simpler.
About the options you have considered:
Control files: While it is possible to communicate via control files, you have to keep in mind that the files need to be shared via a network file system among the machines involved. So the network file system serves as an abstraction of the actual network infrastructure, but does not offer the full power and flexibility the network normally has. Implementation: Practically, you will need at least two files for each client/server pair: a file the server uses to send a request to the client(s) and a file for the responses. If each process can communicate both ways, you need to duplicate this. Furthermore, both the client(s) and the server(s) work on a "pull" basis, i.e., they need to revisit the control files frequently and see if something new has been delivered.
The advantage of this solution is that it minimizes the need for learning new techniques. The big disadvantage is that it places huge demands on the program logic; a lot of things need to be taken care of by you (Will the files be written in one piece, or can it happen that a party picks up inconsistent files? How frequently should checks be implemented? Do I need to worry about the file system, like caching, etc.? Can I add encryption later without toying around with things outside of my program code? ...)
If portability were an issue (which, as far as I understood from your question, is not the case), then this solution would be easy to port to different systems and even different programming languages. However, I don't know of any network file system for iPhone OS, though I am not familiar with that platform.
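For illustration, here is a minimal sketch of the pull-based control-file scheme described above; the file names and polling interval are made up for the example:

```python
# Toy sketch of pull-based control-file IPC (all names hypothetical).
# One party appends requests to one file and polls another for replies.
import os
import time

REQUEST_FILE = "requests.txt"    # client -> server
RESPONSE_FILE = "responses.txt"  # server -> client

def send_request(text):
    # Append-only writes reduce (but do not remove) the risk of a
    # reader picking up a half-written, inconsistent file.
    with open(REQUEST_FILE, "a") as f:
        f.write(text + "\n")

def poll_responses(seen):
    """Return any response lines we have not processed yet."""
    if not os.path.exists(RESPONSE_FILE):
        return []
    with open(RESPONSE_FILE) as f:
        lines = f.readlines()
    return lines[seen:]

# "Pull" loop: revisit the control file frequently.
seen = 0
send_request("ping")
for _ in range(10):
    for line in poll_responses(seen):
        print("got:", line.strip())
        seen += 1
    time.sleep(0.5)  # the polling interval is a tuning knob
```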
Sockets: The programming interface is certainly different; depending on your experience with socket programming it may mean that you have more work learning it first and debugging it later. Implementation: Practically, you will need a similar logic as before, i.e., client(s) and server(s) communicating via the network. A definite plus of this approach is that the processes can work on a "push" basis, i.e., they can listen on a socket until a message arrives which is superior to checking control files regularly. Network corruption and inconsistencies are also not your concern. Furthermore, you (may) have more control over the way the connections are established rather than relying on things outside of your program's control (again, this is important if you decide to add encryption later on).
The advantage is that a lot of things are taken off your shoulders that would bother an implementation of option 1. The disadvantage is that you still need to change your program logic substantially in order to make sure that you send and receive the correct information (file types etc.).
In my experience portability (i.e., ease of transitioning to different systems and even programming languages) is very good since anything even remotely compatible to POSIX works.
[EDIT: In particular, as soon as you communicate binary numbers, endianness becomes an issue and you have to take care of this problem manually - this is a common (!) special case of the "correct information" issue I mentioned above. It will bite you e.g. when you have a PowerPC talking to an Intel Mac. This special case, together with all of the other "correct information" issues, disappears with solutions 3 and 4.]
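The endianness pitfall disappears if you always serialize in an explicit byte order. Python's struct module is shown here purely to illustrate the principle:

```python
import struct

# Pack a 32-bit integer in explicit network byte order ('!') before
# sending it over a socket; unpack the same way on the other side.
# Without an explicit '!', '<' or '>', each machine would use its
# native order, and a PowerPC peer would misread an Intel peer's data.
payload = struct.pack("!i", 42)          # always big-endian on the wire
(value,) = struct.unpack("!i", payload)  # identical on any architecture
assert value == 42
```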
3.+4. Distributed objects: The NSProxy class cluster is used to implement distributed objects. NSConnection is responsible for setting up remote connections as a prerequisite for sending information around, so once you understand how to use this system, you also understand distributed objects. ;^)
The idea is that your high-level program logic does not need to be changed (i.e., your objects communicate via messages and receive results, and the messages together with the return types are identical to what you are used to from your local implementation) without your having to bother about the particulars of the network infrastructure. Well, at least in theory. Implementation: I am also working on this right now, so my understanding is still limited. As far as I understand, you do need to set up a certain structure, i.e., you still have to decide which processes (local and/or remote) can receive which messages; this is what NSConnection does. At this point, you implicitly define a client/server architecture, but you do not need to worry about the problems mentioned in 2.
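Distributed Objects are Cocoa-specific, but the underlying pattern (invoking methods on a remote object as if it were local, with the connection machinery hidden behind a proxy) can be sketched with Python's built-in xmlrpc. This is only an analogy, not a substitute for NSConnection:

```python
# Analogy only: remote method invocation that looks like a local call.
# The server registers an object; clients call its methods directly.
from xmlrpc.server import SimpleXMLRPCServer

class Calculator:
    def add(self, a, b):
        return a + b

server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
server.register_instance(Calculator())
# server.serve_forever()  # blocks; run this in the server process

# Client side (separate process): the proxy plays the role NSProxy
# plays in Cocoa; messages sent to it are forwarded over the wire.
# from xmlrpc.client import ServerProxy
# calc = ServerProxy("http://localhost:8000")
# print(calc.add(2, 3))  # -> 5, as if calling a local object
```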
There is an introduction with two explicit examples on the GNUstep project server; it illustrates how the technology works and is a good starting point for experimenting:
http://www.gnustep.org/resources/documentation/Developer/Base/ProgrammingManual/manual_7.html
Unfortunately, the disadvantages are a total loss of compatibility with other systems (although you will still do fine with the Mac and iPhone/iPad-only setup you mentioned) and a loss of portability to other languages. GNUstep with Objective-C is at best code-compatible, but there is no way to communicate between GNUstep and Cocoa; see my edit to question number 2 here: CORBA on Mac OS X (Cocoa)
[EDIT: I just came across another piece of information that I was unaware of. While I have checked that NSProxy is available on the iPhone, I did not check whether the other parts of the distributed objects mechanism are. According to this link: http://www.cocoabuilder.com/archive/cocoa/224358-big-picture-relationships-between-nsconnection-nsinputstream-nsoutputstream-etc.html (search the page for the phrase "iPhone OS") they are not. This would exclude this solution if you demand to use iPhone/iPad at this moment.]
So to conclude, there is a trade-off between the effort of learning (and implementing and debugging) new technologies on the one hand and hand-coding lower-level communication logic on the other. While the distributed objects approach takes most of the load off your shoulders and incurs the smallest changes in program logic, it is the hardest to learn and also (unfortunately) the least portable.
Disclaimer: Distributed Objects are not available on iPhone.
Why do you find distributed objects inelegant? They sound like a good match here:
transparent marshalling of fundamental types and Objective-C classes
it doesn't really matter whether clients are local or remote
not much additional work for Cocoa-based applications
The documentation might make it sound like more work than it actually is, but basically all you have to do is use protocols cleanly and export, or respectively connect to, the server's root object.
The rest should happen automagically behind the scenes for you in the given scenario.
We are using ThoMoNetworking and it works fine and is fast to set up. Basically it allows you to send NSCoding-compliant objects within the local network, but of course it also works if client and server are on the same machine. As a wrapper around the Foundation classes, it takes care of pairing, reconnections, etc.

Preferred python networking framework/library for desktop app

I want to write P2P sharing software in Python; it will mainly be used on Windows, but it should also work on Linux. So I've tried some frameworks/libraries such as Twisted, gevent, and Tornado (though Tornado may not be a good one for a Windows desktop client).
But I don't know which one to choose.
Twisted is a little big, I think...
I think gevent is more useful on *nix platforms.
Tornado is a web server, so it may not be suitable for a desktop app.
Twisted is the most suited of these to the development of network applications. It contains the most support code for implementing protocols. Twisted also includes the best GUI library integration out of these. It works with Gtk (on Windows, too) and Qt3 and Qt4. It may also work with wxWidgets (though this is less well supported than Gtk or Qt3/4). It can integrate with the Windows GUI event loop as well.
Of course, it would be ridiculous to suggest that Twisted is the best suited library for your needs, given the extremely minimal (almost non-existent) description of your needs. I think it's likely that Twisted is at least as well suited, if not better suited, to the needs of an arbitrary network application than the other options you've listed (and, indeed, any of the other options available in Python). However, whether it is best suited to your particular case, I can't say.
I think the default underlying event loop for all of these on Windows will be based on select() (although it appears that at least Twisted has platform-specific support for IOCP).
Someone with a better understanding of the differences above should probably comment, but from the developer's perspective the choice will largely come down to preferred syntax. Twisted implements everything through a reactor pattern, while gevent uses coroutines. I'd take a look at some simple examples of each and see which is better suited to your sensibility.
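To make the syntax difference concrete, here are two minimal echo servers, one in Twisted's reactor/protocol style and one in gevent's coroutine style (ports and class names are arbitrary choices for the example):

```python
# Twisted: you subclass a Protocol and hand control to the reactor.
from twisted.internet import protocol, reactor

class Echo(protocol.Protocol):
    def dataReceived(self, data):
        self.transport.write(data)  # callback fires when data arrives

class EchoFactory(protocol.Factory):
    def buildProtocol(self, addr):
        return Echo()

reactor.listenTCP(8000, EchoFactory())
# reactor.run()  # uncomment to start the event loop (blocks)
```

```python
# gevent: a plain-looking blocking handler, run as a coroutine per client.
from gevent.server import StreamServer

def handle(sock, address):
    while True:
        data = sock.recv(1024)  # cooperatively yields instead of blocking
        if not data:
            break
        sock.sendall(data)

server = StreamServer(("127.0.0.1", 8001), handle)
# server.serve_forever()  # uncomment to start serving (blocks)
```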

What are alternatives to the Java VM?

As Oracle sues Google over the Dalvik VM, it seems clear that you cannot implement a Java VM without a license from Oracle (EDIT: Matthew Flaschen points out that Oracle's claims may not be valid. Anyway, we currently have a situation where Oracle threatens VM implementations.). That may become the death of open-source implementations of Java (like Apache Harmony).
I don't want to discuss the impact or the legitimacy of this lawsuit, but as a Java programmer I want to take a deeper look at the alternatives, to be prepared for any eventuality. As I see the creation of a compiler as a minor problem, my main interest is in alternative VM implementations that serve a similar purpose to the JVM.
The VM I'm looking for, should meet some conditions:
free of patent issues
an Open-Source-implementation exists
potential for optimizations/good performance
platform independent (the VM can be ported to different platforms without major hurdles)
Please add some recommendations for me.
LLVM is a really good optimizing, low-level virtual machine. It can support languages like C and C++, and does not have built-in support for high-level features like garbage collection.
VMKit is an implementation of the Java and CLI virtual machines on top of LLVM. Since it uses Java bytecode, this probably wouldn't help with the patent issues.
HLVM is another interesting high level virtual machine built on top of LLVM. It is probably different enough to avoid most well known patents, but it is mainly targeted at numerical computing and functional programming.
On the dynamically typed side, there is Parrot.
I am actually working on a compiler and VM for a language of my own design, but don't count on it ever being finished. ;-)
Keep in mind that any large piece of software will infringe on numerous patents; the important thing is how well known they are (and how actively the patents' owners seek out infringers). Of course, the whole patent system is absurd, and we would be much better off getting rid of it.
I don't think there is any significant piece of software that is free from patent issues.
If you are an independent developer or working for a smaller company you probably won't get hit directly by the problems though. It's unlikely that big companies holding patents will go after lots of small claims - it's an expensive process and causes a lot of resentment. SCO tried something like that and it didn't work out too well for them.
I would concentrate on finding the best tool for the job without worrying too much about the patent issues, otherwise you will never get anything done.
GraalVM is a research project developed by Oracle Labs that is already in production at Twitter. I can't believe no one has mentioned it yet. GraalVM is a very promising extension of the Java virtual machine that supports additional languages and execution modes, running applications such as JavaScript, Python, Ruby, R, JVM-based languages, and LLVM-based languages such as C and C++. The GraalVM project includes a new high-performance Java compiler, itself called Graal, which can be used in a just-in-time configuration on the HotSpot VM or in an ahead-of-time configuration on SubstrateVM. The main goal of the project is to improve the performance of JVM-based languages to match the performance of native languages. Let's sum up the novel features this project offers, with a brief explanation, according to the docs, of why you should adopt it.
Polyglot: All languages (even LLVM-based ones) share the same VM and its capabilities. Zero-overhead interoperability between programming languages allows you to write polyglot applications and select the best language for your task (see the sketch after this list).
Native: Native images compiled with GraalVM ahead-of-time improve the startup time and reduce the memory footprint of JVM-based applications.
Embeddable: GraalVM can be embedded in both managed and native applications. There are existing integrations into OpenJDK, Node.js, Oracle Database, and MySQL. GraalVM removes the isolation between programming languages and enables interoperability in a shared runtime. It can run either standalone or in the context of OpenJDK, Node.js, Oracle Database, or MySQL.
Performance: Graal benchmark reports show great performance improvements in almost all of its implementations, thanks to the way GraalVM performs object allocations.
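As a small taste of the polyglot point above, this is roughly what cross-language interop looks like when a script runs under GraalVM's Python runtime; the polyglot module only exists there (and requires the JS language to be installed), so treat this as a sketch rather than portable Python:

```python
# Runs only under GraalVM's Python (GraalPy); CPython has no `polyglot`.
import polyglot

# Evaluate a JavaScript expression and use the result from Python.
# Both languages share the same VM, so there is no serialization cost.
js_array = polyglot.eval(language="js", string="[1, 2, 3].map(x => x * 2)")
print(list(js_array))  # the JS array behaves like a Python sequence
```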
If you are not convinced by now that it is a good choice and a really awesome project, you can watch this talk by Christian Thalinger on why Graal is a good fit for Twitter.

Limitations of XUL

I'm trying to understand if it is worth the pain to learn XUL more thoroughly.
If you have experience with a moderately complex project (like an independent application rather than a Firefox extension), can you tell me what your experience has been like?
I am particularly worried about features which are not natively supported by the XUL framework. There are two possibilities: either create more XPCOM components, or use external tools. The latter approach is not completely satisfactory, as interprocess communication seems somewhat lacking in XUL.
On the other hand, I have no knowledge of C++. How difficult would it be for a first time learner to wrap an existing library in XPCOM dressing?
I have not written any XPCOM in my three years of developing XUL applications. It does seem intimidating, but so far I haven't had a good reason to create any. I do use some external tools - for reporting, working with mobile devices, etc. I eventually figured out that you can at least get the STDOUT return value from a process that you run (at least on Windows; it seems that this particular feature might not be consistent across platforms). That gave me at least a single return value, which allowed me to implement error handling.
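For what it's worth, the pattern described here (launch an external tool, capture its stdout as the single return value, and branch on it for error handling) translates to any language; here is a Python sketch with a made-up tool name:

```python
import subprocess

# Run a hypothetical external tool and treat its stdout as the result.
result = subprocess.run(
    ["some-report-tool", "--job", "42"],  # hypothetical command
    capture_output=True, text=True,
)
if result.returncode != 0:
    print("tool failed:", result.stderr.strip())
else:
    print("tool output:", result.stdout.strip())
```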
I think that you will find that you can do quite a bit without touching XPCOM. However, not everything is polished and easy, and there is not a large, helpful developer community or much developer support, so it can be a frustrating learning experience.
If this is a large application, or an application to which you might be adding other developers, you may wish to consider choosing a better-supported development platform.

Is Seaside still a valid option?

Seaside just released a release candidate for the upcoming 3.0 version, so it appeared on my radar again. As I'm currently pondering which web framework to use for a future project, I wonder whether it's something to consider. Alas, most of the publicity for Seaside is from '07, which is probably one or two web generations ago. So I'm hoping that the community here can answer some questions:
Continuation-based frameworks were pretty great when your workflow was mostly in HTML, e.g. form submits. For today's JavaScript-heavy environments, that hardly seems worthwhile anymore.
Is Squeak able to handle a reasonable workload? From other questions here and elsewhere, it seems that for proper scaling another implementation (Gemstone etc.) would probably fare better in the long run, but I don't have a proper idea how far away that is. Sessions seem to be rather expensive.
I know that comparisons are hard, but most of the articles you find on the net set Seaside and Rails side by side. How would combinations like Scala/Lift, Clojure/Compojure or Erlang/Nitrogen do instead?
I have answers to question one and two:
This is true. However, since version 2.8 Seaside is no longer a strictly "continuation-based" framework. Seaside uses continuations in the flow module only, and since Seaside 3.0 the flow module is even optional. Also note that Seaside has had strong JavaScript support since 2005, long before mainstream frameworks started to add JavaScript functionality. Today Seaside comes with jQuery and jQuery UI support built in.
Of course that depends on what you store within your session objects, but typically sessions are small (less than 20 KiB). Use the memory profiler in your application to determine the exact memory consumption.
And there is a new seaside book: http://book.seaside.st/book
I find that the productivity of working in a Smalltalk IDE with a good set of abstractions outweighs all other issues in engineering-dominated projects. It works well as an enterprise system for a small company with about 100 (simultaneous, but not heavy) users on a single server (without going to SSD). Since 2007:
Seaside has shown it is able to make the switch from HTML workflows to JavaScript ones;
Seaside has been ported to a lot of different Smalltalks;
Gemstone has released GLASS;
the new Cog VM was released a few weeks ago and shows great promise for improved performance.
In Smalltalk we now have three web frameworks to consider; besides Seaside there are also
Aida/Web and
Iliad.
The latter two effectively solve tree-like control flow, but without needing continuations. Both also have very strong Ajax integration; actually, you don't even realize anymore that you are working with Ajax.
Both also scale well in memory consumption. 10,000 sessions take about 220MB in Aida/Web, that is, about 23KB per session, which can be optimized further down to a mere 400B per session. This means that you can run not just one but many websites from a single Smalltalk image. Of course, you can always upgrade to a load-balancing solution when you really need it, which in my experience is very rarely needed.
Comparing this to Ruby on Rails: a friend of mine complained that he needs 50MB of memory up front for every webshop site he sells. He then turned to the Aida/Web solution, where he needs about the same amount for the image, but then just a few KB for every additional webshop site.
Avi Bryant, the developer of Seaside, said that Ajax trumps continuations in almost all situations. Nevertheless, you can build reasonably powerful applications with Seaside and Ajax, too.
The application part of a web app can be done quite well in other frameworks using Ajax.
I think a Seaside-integrated Smalltalk-to-JavaScript framework, like Cappuccino for Clamato, is currently missing. I'd like to be able to build real JavaScript apps using Smalltalk.
JavaScript is awesome, but being able to deal with complicated workflows in a clean, cheap way on the server side (as Seaside allows you to) keeps it from becoming obsolete. Economy and utility are things that give returns in the short and the long run. But saying this in the abstract has no value at all. You should be talking about a precise application and deciding whether Seaside is part of the bunch of competitive advantages that form an equation that rocks (and knowing why).
About scaling workload with Seaside, the short answer is that you can scale it like hell, yeah (for the long answer check my answer here: Does Seaside scale?).
Too unanswerable, man :) Try a variation of what you're really trying to ask.
I think the best thing you can do is a prototype of something in a weekend.
If you can do a prototype in two days, you capture some attention, and you enjoyed the development experience of doing it with Seaside, then you'll have the foundation of your next thing.
It costs only your time (you can publish on an Amazon server).
BTW, this week I heard about a startup that made its prototype by hand (everything was static and they processed stuff manually). Pretty amazing and crazy and cheap. When they felt they had enough traction on the idea (which they had), they implemented the app (with whatever tech; I'm sure it's no challenge for a Seaside developer).