I would like the ability to write a dynamic plugin (*.so) that allows the server to use data from an Amazon queue as working data for making predictions with an already trained model.
I have read the documentation about the server extension points:
https://github.com/tensorflow/serving/blob/master/tensorflow_serving/g3doc/architecture.md
It says custom sources, including cloud-based ones, are supported. The problems are: I was unable to find a good, complete example for beginners (like me), the documentation is sparse, and (if I understand correctly) those custom modules have to be statically linked into the server code, which, for serious reasons outside my control, is not acceptable.
Related
I have a challenge from a customer. They'd like me to support the creation, editing and deletion of custom fields against a Model within my application, so they can record any values they see fit against a model and report on them.
I've tried a few methods of doing this, but none allowed for easy integration into our existing reporting engine, and I'd quite like to keep that as untouched as possible.
I've scoured the web for similar answers to this question; most were from 2015 - 2018, most were asking for dynamic models to be supported as well, and frequently the answer was "No, not yet, but we will support this in future".
To support dynamic models I have seen solutions that reload waterline via sails.hooks.orm.reload(), or even programmatically restart the sails app altogether and initialise it again with the new model present. This can cause a few seconds of downtime for your application while the server restarts (for us it is about 30 seconds, as we have a huge app with nearly 50 models and 350 routes). We can't really go with this approach, as that downtime is too much of an overhead and our clients would be a bit miffed.
Is supporting just the dynamic handling of attributes possible without all the bootstrapping that takes place on sails lift, or is it just not feasible? My initial feeling from all my reading is essentially no, as the processes in place during startup hook up and sync waterline, your DB and your files.
Any discussion or examples of similar work welcomed.
I have a DDS system (OMG DDS) that communicates with a ROS node over radio. The information being received is a struct with velocity, state, longitude, latitude etc. This works well, and my DDS client has no problem printing the information being transmitted from the node over the radio. Now, I've got a GUI application written in Qt, which creates models and puts them on a predefined map. These models have defined set-information functions which, when triggered, update the map to give a smooth visualisation of the information received.
Now here is the problem: I've no idea how to make the GUI application communicate with my DDS client. I would rather not intertwine the two, since I've had enough trouble just making the DDS client and sender work and compile with ROS. I've thought about a separate queue system, which could be included in both the DDS client and the GUI application, but I don't know if this would work. I've also thought about writing to an SQL database, pushing new data to it, and pulling new data in my GUI application when it is detected, via some sort of on_data_available function that triggers the pull. I've heard the last one is a bad idea, since I'm working with only one set of data which is continuously updated (the model represents one USV), and a database is then considered overkill, but I would love to get input here.
I'm sorry if this isn't sufficient information; I can't really provide code examples for various reasons. If anyone has any input, shout out, I would love to hear it. And if I'm not being specific enough, I'll try to rewrite it as best as I can.
I've no idea how to make the GUI application communicate with my DDS-client
Your question is not specific to DDS or your GUI application -- you are essentially asking for a simple and convenient inter-process communication (IPC) mechanism. If you look at the available IPC mechanisms, you will see there are loads of different options.
Given that you already have your data as well as the associated type definitions available in DDS, I suspect that using DDS for this task would still be the easiest way to go. You could set it up to communicate over shared memory or local loopback. DDS will do all discovery and communication under the hood, including (cross-language) de/serialization. If you choose a different mechanism, you might end up doing more work yourself.
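To give an idea of how little code that can be, here is a rough sketch of a GUI-side reader, assuming the Eclipse Cyclone DDS Python bindings (cyclonedds) and a made-up UsvState type mirroring your struct; other vendors' APIs and other binding versions differ in the details, so treat it as an outline rather than working code for your exact setup.

    # Sketch only: assumes the "cyclonedds" Python bindings; the UsvState type
    # and topic name are invented to mirror the struct described above.
    from dataclasses import dataclass
    from queue import Queue
    from threading import Thread

    from cyclonedds.domain import DomainParticipant
    from cyclonedds.topic import Topic
    from cyclonedds.sub import DataReader
    from cyclonedds.util import duration
    from cyclonedds.idl import IdlStruct

    @dataclass
    class UsvState(IdlStruct, typename="UsvState"):
        velocity: float
        state: int
        longitude: float
        latitude: float

    gui_queue: Queue = Queue()  # the GUI polls this, e.g. from a QTimer

    def dds_reader_loop() -> None:
        participant = DomainParticipant()
        topic = Topic(participant, "UsvState", UsvState)
        reader = DataReader(participant, topic)
        # Block waiting for samples and hand each one over to the GUI thread.
        for sample in reader.take_iter(timeout=duration(seconds=10)):
            gui_queue.put(sample)

    Thread(target=dds_reader_loop, daemon=True).start()

The same idea works if the GUI stays in C++/Qt: run the reader on a worker thread and emit a signal per sample, so the DDS code and the map models never need to know about each other.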
As an alternative, some DDS implementations (commercially) support native integration with SQL databases. Those will introspect the DDS data definitions and create all required tables for you. Updates from DDS are automatically forwarded to the database, and vice-versa. You could feed your GUI off of that database.
I'm looking into Meteor to create a collaborative document editing app, because it's great that Meteor synchronizes data between multiple clients by default.
But when using a text-area, like in Sameer Kalburgi’s example
http://www.skalb.com/2012/04/16/creating-a-document-sharing-site-with-meteor-js/
http://docshare-tutorial.meteor.com/
the experience is sub-optimal.
I tried typing at the same time as a colleague, and my changes would be overwritten when she typed, and vice versa. So it seems there is no merge algorithm in the conflict resolution yet?
Is this planned for the future? Are there ways to implement this currently? Etherpad seems to handle this problem rather well. Having this in Meteor would make creating collaborative document editing apps far more accessible.
So I looked into it some more; the algorithm used in Etherpad is known as Operational Transformation:
The solution is Operational Transformation (OT). If you haven’t heard of it, OT is a class of algorithms that do multi-site realtime concurrency. OT is like realtime git. It works with any amount of lag (from zero to an extended holiday). It lets users make live, concurrent edits with low bandwidth. OT gives you eventual consistency between multiple users without retries, without errors and without any data being overwritten.
Unfortunately, implementing OT sucks. There's a million algorithms with different tradeoffs, mostly trapped in academic papers. The algorithms are really hard and time consuming to implement correctly. We need some good libraries, so any project can just plug in OT if they need it.
That's from the site of ShareJS, a Node.js based OT server/client that you can hook into your existing client.
OT is also implemented in the Racer model synchronization engine for Node.js, which forms the underpinnings of Derby. At the moment, derby.js doesn't provide it transparently yet, but they plan to. From the Derby docs:
Currently, Racer defaults to applying all transactions in the order received, i.e. last-writer-wins. (…) Racer [also] supports conflict resolution via a combination of Software Transactional Memory (STM), Operational Transformation (OT), and Diff-match-patch techniques.
These features are not fully implemented yet, but the Racer demos show preliminary examples of STM and OT.
Coincidentally, both the ShareJS and DerbyJS teams have an ex Google Wave engineer on board. Meteor has an ex Etherpad/Google Wave engineer on its core team. Since Etherpad is one of the best-known implementations of OT, I was imagining Meteor would surely want to support it at some point as well…
I've created a Meteor smart package that integrates ShareJS:
https://github.com/mizzao/meteor-sharejs
It's quite preliminary right now, but you can import it into your app, drop in textareas, and it "just works". Please try it out and submit some new features via pull requests :)
Check out a demo here:
http://documents.meteor.com
What you describe seems outside Meteor's scope to me. It's not a tool for setting up collaboration features!
What it provides is a way to transparently work against a subset of a server's database. But the implementation of use-case-specific merging functionality is the job of the application, not the framework.
I have been having a debate with a friend about a library I have (it's Python, but I didn't include that as a tag since the question applies to any language) that has a few dependencies. The debate is whether to provide a default environment in the initialization or force the user of the code to explicitly set one.
My opinion is to force the user, as it's explicit, avoids confusion, and makes it clear what they are pointing to.
My friend thinks it is safer and more convenient to default to an environment and let the user override it if he wants to.
Thoughts? Are there any good references, examples, or patterns in popular libraries that support either of our arguments? Also, are there any popular blogs or articles that discuss this API design point?
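To make the disagreement concrete, here is a tiny hypothetical sketch of the two styles (the class and environment names are made up, not from my actual library):

    # Hypothetical illustration only; the names are invented for this question.

    class DefaultingClient:
        # My friend's position: default to a sensible environment, allow override.
        def __init__(self, environment="production"):
            self.environment = environment

    class ExplicitClient:
        # My position: force the caller to state the environment explicitly.
        def __init__(self, environment):
            self.environment = environment

    convenient = DefaultingClient()          # works out of the box
    explicit = ExplicitClient("staging")     # the intent is always visible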
I don't have any references, but here are my thoughts as a potential user of said library.
I think it's good to have a default configuration available to allow developers to quickly evaluate the library. I don't want to have to go through a bunch of configuration just to see if the library will do what I need. Once I'm happy that the library will do what I need it to do, then I'm happy to configure it the way I want.
A good example is Microsoft's ASP.Net MVC framework. When you create a new MVC project it hooks in a default authentication and membership provider, which allows the developer to very quickly get a functioning application up and running. It is also easy to configure different providers to be used if the default ones don't meet the requirements of the application in question.
As a slightly different example, Atlassian Confluence is wiki software which supports many different back-end databases. Atlassian could have chosen to have no default DB configuration, but instead Confluence ships with a default, simple, file-based database to allow users to evaluate the software. For production installations you can then hook up to Oracle, SQL Server, MySQL or whatever else you like.
There may be instances where a default configuration for a library doesn't really make sense, but I think that would be a special case rather than a general rule.
It depends. If you can provide sensible defaults, you might want to do that: it will make life easier for the occasional user of the library, as they can set only the relevant settings rather than the whole environment (possibly including settings whose implications they don't fully understand yet). You are correct that this can lead to frustration and confusion, as the defaulted settings might cause behavior the (inexperienced) user doesn't expect. You have to weigh the reduced friction of convenience against the price of not-understood defaults when making the choice for each setting that could be defaulted, and that choice might affect the choice for other, related settings as well.
On the other hand, if there is no sensible default (e.g. DB credentials, remote address), you should require the user to provide those settings.
The key in both cases is to provide enough information in the documentation of the library and in the error messages (whether for missing settings or conflicting ones) that the user can figure out what those settings actually mean and control without having to read through the source code of the library. This part is hard because 1) it is usually tedious from the point of view of the library developer (so it is often skimped on) and 2) the documentation has to be written from the mindset of a newcomer to the library, which is often different from the library developer's mindset: the latter knows the implicit connections and implications, while the former has to be told about them in an understandable way.
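As a rough illustration of both points (all names here are made up): defaults where they are sensible, and a fail-fast, self-explanatory error where they are not.

    # Illustrative sketch only; the settings and messages are invented.

    class Library:
        def __init__(self, db_url=None, timeout=30, verify_ssl=True):
            # timeout and verify_ssl have sensible defaults; db_url does not.
            if db_url is None:
                raise ValueError(
                    "db_url is required: the connection string of the database "
                    "this library reads from, e.g. 'postgresql://host/dbname'"
                )
            self.db_url = db_url
            self.timeout = timeout
            self.verify_ssl = verify_ssl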
Although not exactly identical in terms of problem domain, this strikes me as the Convention over Configuration argument.
There has been quite a lot of momentum behind CoC in recent years, and in my mind it makes a whole lot of sense. As long as flexibility is not lost, you have everything to gain. Lower-friction development is what we are all after, and if I've got to configure every aspect of your API in order to get it working, I'm less inclined to use it over another API of equal functionality.
I happen to like Hanselman's podcasts, so if you want a little light listening, check out this podcast.
I think your question needs some clarification. For starters, I don't think a library should have any runtime configuration. In terms of dependencies, library dependencies should be handled in a manner appropriate to the environment they are being written for. In Python, those dependencies should be declared in the setup.py file (under install_requires), and ultimately that file should meet the requirements of whatever service you plan on making the library available on (i.e. PyPI for Python).
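For example, a minimal setup.py along these lines (the package name and requirements below are placeholders):

    # Minimal setuptools sketch; the name and dependencies are placeholders.
    from setuptools import setup, find_packages

    setup(
        name="mylibrary",
        version="0.1.0",
        packages=find_packages(),
        install_requires=[
            "requests>=2.0",
            "boto3",
        ],
    )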
For applications, it is completely okay to require runtime configuration, but you should try to have sensible defaults. If your application depends on libraries, that dependency should be handled in the same way a library dependency would be handled, even though that information may be redundant in the context of an installer (if needed). For the most part, first-run scripts and their ilk should be a part of the installer/RPM.
For Web Frameworks, it is typical that your app would carry configuration with it, and likely that it would need to be installed in a different way than traditional applications. Here, about the only thing you can do is try to follow the conventions of whatever framework you are writing in.
I'm a "do it yourself" kind of guy, but I want to make sure I'm not going to do myself in by trying to bite off more than I can chew.
I am writing a browser-based mapping application that needs to have the option to run standalone (no internet connection) on the end-user's machine. That is, the application is some kind of server that will, in many cases, get installed on the end user's machine and the browser will point to some localhost URL to access it.
I will be using OpenLayers on the client side, and the server side will have a bunch of custom logic specific to the application, such as handling click events on the map in certain custom ways, creating various custom objects on the map at certain times, and so on.
For the "business logic" part of the server, I'm happy using paste/webob with python. It's a simple infrastructure that lets me put all this custom logic in easily.
I had been thinking that the client would communicate with two servers: this paste/webob business logic server, and a server just for serving WMS and WFS map elements. So I was looking at MapServer and GeoServer to handle the map parts and ... I'm not happy.
I'm not happy because I don't want to have to install and worry about a "beast" on the client machines. For MapServer, I don't really want to install a full-blown web server like Apache, and have to deal with CGI and PHP and MapScript. For GeoServer, there's (potentially) installing Java, and dealing with various complexities of the GeoServer setup and administration.
Part of this is simply a learning-curve issue. If I can avoid it, I'm not especially interested in learning the intricacies of either MapServer or GeoServer. I installed GeoServer, pointed it at some of my data, and was able to use the OpenLayers preview built into GeoServer's nice web admin to view my data. But when I tried to serve the data for real using my own OpenLayers web page pointed at GeoServer, I crashed GeoServer. That I could crash the server just by sending some presumably malformed requests from the client was quite surprising to me. And I could dig into the GeoServer logs to try to figure out what I did wrong, but ... I don't really want to spend a lot of time on that.
So, I am considering implementing parts of the WMS and WFS interface myself just using the paste/webob server I already have. It may in fact be that I only need the WMS, since I might handle vector objects through a simple custom protocol that I make to pass data to the client, which can then create and manipulate the objects directly using OpenLayers.
I've looked at the specs and example messages for WMS (and a bit less at WFS). It seems not so difficult just to implement this protocol myself, especially because I have full control of the client in this case -- it's not like I need to be able to act as a generic WMS or WFS server; I just have to make my own OpenLayers client happy.
The two main abilities that I need the WMS server to have are:
Serve tiles from a store of prerendered tiles that I've created ahead of time (I'll prerender the tiles using OpenStreetMap data and mapnik as the rendering engine, and I'll store and access them using the normal Google Maps style tile naming scheme that OpenLayers expects)
Have the ability to serve modified versions of these tiles where certain data that I store locally is drawn on top of them. For instance, I might have, say, 10000 points on one "layer" and 10000 polygons on another layer, and when the user activates these layers I will serve my same base tiles, but as I'm serving them I'll render these additional features on top, and probably I'll implement a simple caching scheme to keep these over-rendered tiles around for some amount of time. (A rough sketch of both abilities follows below.)
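Something like this is what I have in mind, as a plain WSGI sketch (the tile directory and the overlay hook are placeholders; the actual over-drawing would use mapnik or PIL and is only stubbed out here):

    # Rough sketch of an XYZ tile endpoint as plain WSGI (runs under paste).
    # TILE_DIR and draw_overlays() are placeholders for illustration.
    import os

    TILE_DIR = "/path/to/prerendered/tiles"

    def draw_overlays(tile_bytes, z, x, y):
        # Stub: composite locally stored points/polygons onto the tile here
        # (e.g. with mapnik or PIL) and cache the result keyed on (z, x, y).
        return tile_bytes

    def application(environ, start_response):
        # Expect URLs of the form /tiles/{z}/{x}/{y}.png
        parts = environ.get("PATH_INFO", "").strip("/").split("/")
        if len(parts) != 4 or parts[0] != "tiles":
            start_response("404 Not Found", [("Content-Type", "text/plain")])
            return [b"not found"]
        z, x, y = parts[1], parts[2], parts[3].replace(".png", "")
        path = os.path.join(TILE_DIR, z, x, y + ".png")
        if not os.path.isfile(path):
            start_response("404 Not Found", [("Content-Type", "text/plain")])
            return [b"no such tile"]
        with open(path, "rb") as f:
            tile = draw_overlays(f.read(), z, x, y)
        start_response("200 OK", [("Content-Type", "image/png")])
        return [tile]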
So my question is: even though I know there are existing tools that do these things (MapServer, GeoServer, TileCache, and others), I'm actually feeling like it's less work for me to just respond to some simple WMS messages and do this additional over-drawing on my tiles myself in Python, making sure everything is projected correctly, etc. I don't need to draw fancy wide streets or anything for these over-layers, just simple lines, icons, and perhaps labels. It sure sounds nice and simple to have a Python-only solution.
I figure if I ever need to expand into supporting more of the WMS/WFS protocol, or doing fancier overdrawing, I can just insert MapServer/GeoServer at that time.
Are there pitfalls here I'm not considering?
MapServer is very easy to set up and learn. Implementing any kind of rendering yourself is going to require much more effort, and you will probably run into a lot of unexpected traps.
The MapServer CGI should be enough for your needs. If you require some very specific tweak, then MapScript can be useful.
I think it could be interesting if you could make a pure JavaScript application and save yourself from installing a web server (and a map server). If you just need to browse a tile mosaic, maybe you could do it with JavaScript alone (generate an HTML table with a cell for each tile). You can render points or polygons with JavaScript, using a canvas and doing some basic coordinate conversion to translate geographic points to pixels. OpenLayers has this functionality, I think.
EDIT: I just checked, and with OpenLayers you can browse local tiles, and you can render KML and some other vector data. So, I think you should give OpenLayers a try.
No need to have a WMS/WFS. What you need is a tile implementation. Basically you should have some sort of central service, or desktop service, that generates the tiles. Once these tiles are generated, you can simply copy them to your "no-real-webserver-architecture" filesystem. You can create a directory structure that conforms to /{z}/{x}/{y}.png and call it from JavaScript.
An example of how openstreetmap does this can be found here: http://wiki.openstreetmap.org/wiki/OpenLayers_Simple_Example
You may like featureserver: http://featureserver.org/.
It has its own WFS. I am using it right now.