API for server-side 3D rendering [closed]

I'm working on an application that needs to quickly render simple 3D scenes on the server, and then return them as a JPEG via HTTP. Basically, I want to be able to simply include a dynamic 3D scene in an HTML page, by doing something like:
<img src="http://www.myserver.com/renderimage?scene=1&x=123&y=123&z=123">
My question is about what technologies to use to do the rendering. In a desktop application I would quite naturally use DirectX, but I'm afraid it might not be ideal for a server-side application that would be creating images for dozens or even hundreds of users in tandem. Does anyone have any experience with this? Is there a 3D API (preferably freely available) that would be ideal for this application? Is it better to write a software renderer from scratch?
My main concern about using DirectX or OpenGL is whether it will function well in a virtualized server environment, and whether it makes sense on typical server hardware (over which I have little control).

RealityServer by mental images is designed to do precisely what is described here. More details are available on the product page (including a downloadable Developer Edition).
RealityServer docs

I'd say your best bet is to have a Direct3D/OpenGL app running persistently on the server, then have the server page send a request to the rendering app, and have the rendering app send a JPEG/PNG/whatever back.
If Direct3D/OpenGL is too slow to render the scene in hardware, then any software solution will be worse.
By keeping the rendering app running, you avoid the overhead of creating/destroying textures, backbuffers, vertex buffers, etc. You could potentially render a simple scene hundreds of times a second.
However, many servers do not have graphics cards. Direct3D is largely useless in software (there is an emulated reference device from Microsoft, but it's only good for testing effects); I've never tried OpenGL in software.
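To sketch the idea (untested, and render_scene() here is a hypothetical stand-in for the code that talks to the persistent Direct3D/OpenGL context), a tiny Python HTTP front end might look like this:
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

def render_scene(scene, x, y, z):
    # Placeholder: hand the parameters to the long-running renderer
    # (which keeps its device, textures, and buffers alive between
    # requests) and return the encoded JPEG bytes.
    raise NotImplementedError

class RenderHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        q = parse_qs(urlparse(self.path).query)
        jpeg = render_scene(q.get("scene", ["1"])[0],
                            float(q.get("x", ["0"])[0]),
                            float(q.get("y", ["0"])[0]),
                            float(q.get("z", ["0"])[0]))
        self.send_response(200)
        self.send_header("Content-Type", "image/jpeg")
        self.end_headers()
        self.wfile.write(jpeg)

HTTPServer(("", 8080), RenderHandler).serve_forever()
The <img> tag from the question would then point at something like http://server:8080/renderimage?scene=1&x=123&y=123&z=123.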

You could wrap POV-Ray (here invoking the Windows build under Wine on a POSIX system). A PHP example:
<?php
// Render the scene with POV-Ray, then stream the resulting image.
chdir("/tmp");
#unlink("demo.png"); // optionally clear any stale output first
system("~janus/.wine/drive_c/POV-Ray-v3.7-RC6/bin/pvengine-sse2.exe /render demo.pov /exit");
header("Content-type: image/png");
$f = fopen("demo.png", "r");
fpassthru($f); // stream the rendered PNG to the client
fclose($f);
?>
demo.pov available here.
You could use a templating language like Jinja2 to insert your own camera coordinates.
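For example (a rough sketch; the template content and file names are invented), filling in the camera from Python with Jinja2:
from jinja2 import Template

# A trivial POV-Ray scene with the camera position left as placeholders.
pov_template = Template("""
camera { location <{{ x }}, {{ y }}, {{ z }}> look_at <0, 0, 0> }
light_source { <10, 20, -10> color rgb <1, 1, 1> }
sphere { <0, 0, 0>, 1 pigment { color rgb <1, 0, 0> } }
""")

with open("demo.pov", "w") as f:
    f.write(pov_template.render(x=123, y=123, z=123))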

Server-side rendering only makes sense if the scene consists of a huge number of objects, such that downloading the data set to the client for client-side rendering would be far too slow, and the rendering is not expected to be real-time. Client-side rendering isn't too difficult if you use something like JOGL coupled with progressive scene download (i.e., download foreground objects and render, then incrementally download objects based on distance from the viewpoint and re-render).
If you really want to do server-side rendering, you may want to separate the web server part and the rendering part onto two computers, each configured optimally for its task (the renderer has an OpenGL card, minimal HD, and just enough RAM; the server has lots of fast disks, lots of RAM, backups, and no OpenGL). I very much doubt you will be able to do hardware rendering on a virtualized server, since the server probably doesn't have a GPU.

Not so much an API but rather a renderer: POV-Ray? There also seems to be an HTTP interface...

You could also look at Java3D (https://java3d.dev.java.net/), which would be an elegant solution if your server architecture were Java-based already.
I'd also recommend trying to get away with a software-only rendering solution if you can; trying to wrangle a whole lot of server processes that are all making concurrent demands on the 3D rendering hardware sounds like a lot of work.

YafaRay (http://www.yafaray.org/) might be a good first choice to consider for general 3D rendering. It's reasonably fast and the results look great. It can be used within other software, e.g. the Blender 3D modeler. The license is LGPL.
If the server-side software happens to be written in Python, and the desired 3D scene is a visualization of scientific data, look into MayaVi2 (http://mayavi.sourceforge.net/); if not, go for a browse at http://www.vrplumber.com/py3d.py
Those who suggest the widely popular POV-Ray need to realize it's not a library or any kind of entity that offers an API. The server-side process would need to write a text scene file, launch a new process to run POV-Ray with the right options, and pick up the resulting image file. If that's easy to set up for your particular application, and if you have more expertise with POV-Ray than with other renderers, then go for it!
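For what it's worth, here is a rough Python sketch of that write-scene/spawn-POV-Ray/read-image cycle (+I, +O, +W, +H, and -D are standard POV-Ray command-line switches; the rest is illustrative):
import subprocess

def render_pov(scene_path, out_path, width=640, height=480):
    # Run POV-Ray in batch mode: +I input, +O output, -D no preview window.
    subprocess.run(
        ["povray", f"+I{scene_path}", f"+O{out_path}",
         f"+W{width}", f"+H{height}", "-D"],
        check=True,
    )
    with open(out_path, "rb") as f:
        return f.read()

image_bytes = render_pov("demo.pov", "demo.png")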

Check out wgpu.net; I think it's very helpful.


What are the features/technologies that native apps in iOS can use that web apps can't? [closed]

I'm struggling trying to figure out if I should go native or if I should go with the web app approach for a particular project. I understand the benefits and the differences in approaches. The biggest question in my mind is what do I lose access to by going with a web app versus going with a native app?
I know there are certain things WebKit can handle that used to be the domain of native apps only (e.g., access to geolocation information). I also know about different frameworks for adding this functionality, like PhoneGap. I'm not looking for any hybrid applications. I'm talking about differences between an app that runs inside Safari and one that is native Obj-C.
Preferably, this will be iOS information and not Android information, but either would be interesting.
- Core Data (data storage on the device)
- Camera (except with custom tweaking)
- Accelerometer
- Ability to run, and at least provide some UI, when no internet connection is available
- The iOS App Store distribution method
- TableViews and ViewControllers (the MVC model)
Web apps have to go to the server for everything (UI, data, etc.), while native apps only need to go to the server for fresh data and authentication; this usually makes for a slower and clumsier user experience in the web app.
I am sure there are more. I was in the same dilemma as you about a year ago. I decided to take the plunge and learn Objective-C so I could do it natively. I found that the extra time taken to do that was made up for by the ease with which the UI can be generated natively. The positioning and rendering are so precise that it cuts down on the trial-and-error methods that are usually used when laying out HTML and CSS.
I am glad I did it; I can now crank out a fully functional, complex, web-service-fed app in a weekend, and I need minimal help from my graphics artist to do it.
HTML5 is pretty powerful for web apps, but there are still some APIs missing in the browsers, such as access to the camera and microphone (the Device/getUserMedia API is still in draft). You can play audio, and use the accelerometer, gyroscope, geolocation, and WebSockets, but all of this is handled via JavaScript, and it depends on the browser and its features.
In a native app you can use all of those sensors, but you will have to implement these features explicitly for every desired platform (iOS/Android/WinMo).
And you can use graphics APIs (OpenGL) in native applications; WebGL isn't yet supported on mobile devices (AFAIK). Native apps can use the full potential of the hardware, while web apps are limited by their browser's JavaScript performance.
Web apps (apps that run in the browser's WebKit engine) have limited access to the underlying device functionality due to security concerns.
So they cannot access many of the underlying device capabilities, such as:
- the device contact list and other sensitive information stored on the device
- device hardware such as sensors, Bluetooth, Wi-Fi, etc.
- underlying OS API calls
So depending on your application's features, you will have to decide what is best for your app.
Native apps can use pre-compiled, pre-optimized native ARM code, with the benefits that developer-driven compilation provides. Apps that require significant computation (game physics, audio DSP, etc.) will have better performance, whether due to the lack of interpreter overhead, more compiler optimization, or not requiring the communication overhead and latency of offloading the task to a remote server. Native-code virtual reality creation and similar types of real-time feedback may also have less lag. The lower processor-cycle or communication needs of a native-code solution might also consume significantly less of the user's limited battery life.
Certain iOS subview animations are only possible or run more smoothly from the native code API than from JavaScript.
User privacy and security concerns may also limit or completely restrict access to certain user data and sensors (photo album, mic, front camera) from Safari web apps, to which native apps currently have some access.

WMS/WFS server: am I crazy to write my own?

I'm a "do it yourself" kind of guy, but I want to make sure I'm not going to do myself in by trying to bite off more than I can chew.
I am writing a browser-based mapping application that needs to have the option to run standalone (no internet connection) on the end-user's machine. That is, the application is some kind of server that will, in many cases, get installed on the end user's machine and the browser will point to some localhost URL to access it.
I will be using OpenLayers on the client side, and the server side will have a bunch of custom logic specific to the application, such as handling click events on the map in certain custom ways, creating various custom objects on the map at certain times, and so on.
For the "business logic" part of the server, I'm happy using paste/webob with python. It's a simple infrastructure that lets me put all this custom logic in easily.
I had been thinking that the client would communicate with two servers: this paste/webob business logic server, and a server just for serving WMS and WFS map elements. So I was looking at MapServer and GeoServer to handle the map parts and ... I'm not happy.
I'm not happy because I don't want to have to install and worry about a "beast" on the client machines. For MapServer, I don't really want to install a full-blown web server like Apache, and have to deal with CGI and PHP and MapScript. For GeoServer, there's (potentially) installing Java, and dealing with various complexities of the GeoServer setup and administration.
Part of this is simply a learning-curve issue. If I can avoid it, I'm not especially interested in learning the intricacies of either MapServer or GeoServer. I installed GeoServer, pointed it to some of my data, and was able to use the OpenLayers preview built into GeoServer's nice web admin to view my data. But when I tried to serve the data for real using my own OpenLayers web page pointed at GeoServer, I crashed GeoServer. That I could crash the server just by sending some presumably malformed requests from the client was quite surprising to me. I could dig into the GeoServer logs to try to figure out what I did wrong, but... I don't really want to spend a lot of time on that.
So, I am considering implementing parts of the WMS and WFS interface myself just using the paste/webob server I already have. It may in fact be that I only need the WMS, since I might handle vector objects through a simple custom protocol that I make to pass data to the client, which can then create and manipulate the objects directly using OpenLayers.
I've looked at the specs and example messages for WMS (and a bit less at WFS). It seems not so difficult just to implement this protocol myself, especially because I have full control of the client in this case -- it's not like I need to be able to act as a generic WMS or WFS server; I just have to make my own OpenLayers client happy.
The two main abilities that I need the WMS server to have are:
- Serve tiles from a store of prerendered tiles that I've created ahead of time (I'll prerender the tiles using OpenStreetMap data with Mapnik as the rendering engine, and I'll store and access them using the normal Google Maps-style tile naming scheme that OpenLayers expects).
- Serve modified versions of these tiles where certain data that I store locally is drawn on top of them. For instance, I might have, say, 10,000 points on one "layer" and 10,000 polygons on another layer, and when the user activates these layers I will serve my same base tiles, but render these additional features on top of them as I serve them; I'll probably implement a simple caching scheme to keep these over-rendered tiles around for some amount of time (a sketch of this idea follows the list).
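Roughly, I picture the over-drawing working like this in Python with Pillow (a sketch only; project() is a hypothetical helper that maps a feature's lon/lat to pixel coordinates within the requested 256x256 tile):
import os
from PIL import Image, ImageDraw  # Pillow

TILE_DIR = "tiles"        # assumed layout: tiles/{z}/{x}/{y}.png
CACHE_DIR = "tile_cache"  # over-rendered tiles get cached here

def render_overlay_tile(z, x, y, points, project):
    # Composite point features onto a prerendered base tile, caching the result.
    cached = os.path.join(CACHE_DIR, f"{z}-{x}-{y}.png")
    if os.path.exists(cached):
        return cached
    base = Image.open(os.path.join(TILE_DIR, str(z), str(x), f"{y}.png")).convert("RGB")
    draw = ImageDraw.Draw(base)
    for lon, lat in points:
        px, py = project(lon, lat, z, x, y)  # hypothetical projection helper
        if 0 <= px < 256 and 0 <= py < 256:  # only features inside this tile
            draw.ellipse((px - 3, py - 3, px + 3, py + 3), fill="red")
    os.makedirs(CACHE_DIR, exist_ok=True)
    base.save(cached)
    return cached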
So my question is: even though I know there are existing tools that do these things (MapServer, GeoServer, TileCache, and others), I'm actually feeling like it's less work for me just to respond to some simple WMS messages and do this additional over-drawing on my tiles myself in Python, making sure everything is projected correctly, etc. I don't need to draw fancy wide streets or anything for these overlays, just simple lines, icons, and perhaps labels. It sure sounds nice and simple to have a Python-only solution.
I figure if I ever need to expand into supporting more of the WMS/WFS protocol, or doing fancier overdrawing, I can just insert MapServer/GeoServer at that time.
Are there pitfalls here I'm not considering?
MapServer is very easy to set up and learn. Implementing any kind of rendering yourself is going to require much more effort, and you will probably run into a lot of unexpected traps.
The MapServer CGI should be enough for your needs. If you require some very specific tweak, then MapScript can be useful.
I think it could be interesting if you could make a pure JavaScript application and save yourself from installing a web server (and a map server). If you just need to browse a tile mosaic, maybe you could do it with JavaScript alone (generate an HTML table with a cell for each tile). You can render points or polygons with JavaScript, using a canvas and doing some basic coordinate conversion to translate geographic points to pixels. OpenLayers has this functionality, I think.
EDIT: I just checked, and with OpenLayers you can browse local tiles, and you can render KML and some other vector data. So I think you should give OpenLayers a try.
No need to have a WMS/WFS. What you need is a tile implementation. Basically you should have some sort of central service, or desktop service, that generates the tiles. Once these tiles are generated, you can simply copy them into your "no-real-webserver-architecture" filesystem. You can create a directory structure that conforms to /{z}/{x}/{y}.png and call it from JavaScript.
An example of how OpenStreetMap does this can be found here: http://wiki.openstreetmap.org/wiki/OpenLayers_Simple_Example
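The tile lookup itself is just arithmetic. For example, in Python, using the standard formula from the OpenStreetMap wiki:
import math

def deg2num(lat_deg, lon_deg, zoom):
    # Standard slippy-map math: which {z}/{x}/{y}.png tile covers this point?
    lat_rad = math.radians(lat_deg)
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

zoom = 12
x, y = deg2num(51.5074, -0.1278, zoom)  # central London
path = f"/{zoom}/{x}/{y}.png"           # matches the directory layout above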
You may like FeatureServer: http://featureserver.org/. It has its own WFS; I am using it right now.

What was the most advanced stuff you did with the Compact Framework? [closed]

At work I use mostly the .NET Compact Framework 3.5 for developing applications that run on smart devices. Our devices are not phones or handhelds; they are measurement instruments that come with a whole bunch of features. Our application is pretty advanced: we use an N-layer architecture, a self-made GUI framework, and even dependency injection (we built our own, as the ones out there are not lightweight enough).
So what's the most advanced things you did with the compact framework?
What's currently missing (for example, a mocking framework, as there is no Reflection.Emit on the Compact Framework)?
How are you developing your applications? Are you deploying your application to the device every time? In our case this is very slow, as the solution consists of 30 projects, so we have a Win32 version which runs on the PC.
We've done a plant-floor monitoring system that acts as a data server and a web server, collecting data from PLCs and creating dynamic web-based reports, all in the CF. We've created a peer-to-peer notification and file-sharing system. We've done vehicle tracking and dispatching systems. We've done smart-farming applications that monitor loads of data from a tractor and couple that with location and previous years' data, plus quite a few others. So I guess you could say we've written several highly complex things using the CF.
There are lots of "missing" pieces, but most can be worked around. The most obvious missing piece that can't be worked around is the lack of EE hosting. Reflection pieces for mocking would be nice, but we can live without them; it just makes testing more of a bear. The lack of hosting makes several things simply impossible.
As for deployment, it's all about configuration. The Smart Device Framework itself, when coupled with all of the unit-test stuff, is something like 45 projects. Deploying isn't bad, as it only recompiles and deploys changes, and I often adjust the configuration of test applications to deploy not all projects but only the main one. That should auto-deploy all references (eliminating the double deploys you're probably getting). Having all projects output to one common directory and setting "Copy Local" to false improves things quite a bit too.
One of the most useful things we do with our .NET CF applications is work hard to make sure that they can be re-targeted to the full framework. This means you have a second desktop project, or a unit test, that actually runs your entire application on the desktop. There is a bit of work to do if you are using device-specific functionality via P/Invoke or device-only APIs, but the effort usually pays off because:
You can quickly run/debug your application without having to wait for an emulator or device to spin up
You are forced to architect your code in a way that device specific functionality can be mocked and tested
In many cases you are part way to having a desktop version of your application as well as the device version
It probably goes without saying that in the end, testing will need to be done specifically on the device, but during development and the quick code/debug cycles it is really nice to not wait on the emulator. I remember Daniel Moth posting something about how to actually create a device deployment target that is your desktop computer to achieve this same effect. Maybe someone else can find a link?
I have done a Windows CE app for industrial PDAs for route sales from pre-loaded inventory and client lists. It gets GPS coordinates, uses a scanner to collect data, and transmits data about sales made on the device over GPRS/EDGE. The app also prints a receipt (on a portable printer linked over Bluetooth).
I wrote an app that monitors the statistics on my self-made blog by interfacing with a WebService.
I have developed a multi-language dictionary using one code base on Windows and PDAs, and, via Mono, on Unix and Mac.
Basically the application is complicated because we use multiple databases that are large. We were able to tweak the data-access performance, and lookups on large tables are almost instantaneous.
Small devices are not very powerful, but if you design for the way they work, you can get good performance out of them.
I made an app to collect weather measurements of any magnitude, using an n-tier architecture with MVC and db4o as the database... pretty impressive.

What is a good tool for graphing sub-millisecond timelines? [closed]

I'm trying to produce a timeline for my real-time embedded code.
I need to show all the interrupts, what triggers them, when they are serviced, how long they execute, etc. I have done the profiling and have the raw data, now I need a way to show the timeline graphically, to scale.
I've been searching for a good tool, but haven't come up with anything great yet. Everything that I've found works on timelines of days and years. I want a graph showing a single 2-millisecond cycle. For now I'm using Visio, but I keep thinking there must be something easier. Any ideas?
I'm hoping to produce something like this: [example timeline image].
Unfortunately, mine is more complicated, but that's the general idea.
So at that scale your abscissa is going to be a pure number (e.g., microseconds from the start time, or some such). Graphing tools that can graph things like this are commonplace.
I'd suggest something like gnuplot, but I suspect there's more to the problem than is evident in your summary.
Ah, the picture makes it all much clearer. If gnuplot doesn't do it for you, I'll offer another suggestion (or at least tell you what I'd do): write it from scratch.
Specifically, I'd probably throw together something in a scripting language (Ruby, Python, whatever) to read the data and generate pic code that looks the way I want. If you decide to go that route, here's an overview of pic basics, and also the manual. If you dig in, you should have something plausible in an hour, and within a week you'll have something that suits you better than any off-the-shelf GUI app ever will.
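If you go with Python, something along these lines (completely made-up data, and matplotlib in place of pic, but the same read-data-and-draw idea) gets you a Gantt-style timeline quickly:
import matplotlib.pyplot as plt

# Hypothetical profiling data: (start_us, duration_us) spans per source,
# all within one 2 ms cycle.
events = {
    "Timer ISR": [(0, 15), (500, 15), (1000, 15), (1500, 15)],
    "UART ISR":  [(120, 40), (870, 35)],
    "Main loop": [(15, 105), (515, 355), (1015, 485)],
}

fig, ax = plt.subplots(figsize=(10, 2.5))
for row, (name, spans) in enumerate(events.items()):
    ax.broken_barh(spans, (row - 0.4, 0.8))  # one horizontal bar per span
ax.set_yticks(range(len(events)))
ax.set_yticklabels(list(events.keys()))
ax.set_xlabel("time (microseconds) within one 2 ms cycle")
ax.set_xlim(0, 2000)
plt.tight_layout()
plt.savefig("timeline.png", dpi=150)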
I feel for you. In my system, we have a 1.1 millisecond cycle and 13 measurement points over 4 different components. I suspect you're facing similar complexity.
The bad news is that there are no off-the-shelf solutions I'm aware of. However, MarkusQ is correct in stating that you can use (abuse?) standard graphing packages to accomplish what you need. But you will need to invest some time to customize the output to your liking.
We make extensive use of the R Project, driven by Python code via the RPy R/Python bridge, to generate our plots. This setup works very well for us and has enabled us to automate the process. Python is used to acquire and cleanse the data from the real-time system, and R does the drawing.
R's graphics customization support is extensive, allowing you to control all aspects of the plot: locations, sizes, etc. It can be intimidating at first, but there is an excellent book, R Graphics, that helps, with a companion website containing all of the book's examples.
Whatever you choose, make sure there's the ability to automate via scripting. The amount of data real-time systems generate is too much to deal with without flexible tools.
GTKWave could be used.

What are the best resources if you wanted to create an application with modularization? [closed]

In my analysis of the newer web platforms/applications, such as Drupal, Wordpress, and Salesforce, many of them create their software based on the concept of modularization: Where developers can create new extensions and applications without needing to change code in the "core" system maintained by the lead developers. In particular, I know Drupal uses a "hook" system, but I don't know much about the engine or design that implements it.
If you were to go down the path of creating an application and you wanted a system that allowed for modularization, where do you start? Is this a particular design pattern that everyone knows about? Is there a handbook that this paradigm tends to subscribe to? Are there any websites that discuss this type of development from the ground up?
I know some people point directly to OOP, but that doesn't seem to be the same thing, entirely.
This particular system I'm planning leans more towards something like Salesforce, but it is not a CRM system.
For the sake of the question, please ignore the Buy vs. Build argument, as that consideration is already in the works. Right now, I'm researching the build aspect.
There are two ways to go here; which one to take depends on how your software will behave.
One way is the plugin route, where people can install new code into the application, modifying the relevant aspects. This route demands that your application is installable and not only offered as a service (or else that you install and review code sent in by third parties, a nightmare).
The other way is to offer an API, which can be called by the relevant parties, and either make the application transfer control to code located elsewhere (a la Facebook apps) or make the application do what the API commands enable the developer to do (a la Google Maps).
Even though the mechanisms vary and their actual implementations differ, in any case you have to define:
What freedom will I let the users have?
What services will I offer for programmers to customize the application?
and the most important thing:
How to enable this in my code while remaining secure and robust. This is usually done by sandboxing the code, validating inputs and potentially offering limited capabilities to the users.
In this context, hooks are predefined places in the code that call all the registered plugins' hook functions, if defined, modifying the standard behavior of the application. For example, if you have a function that renders a background, you can have:
function renderBackground() {
    foreach (Plugin p in getRegisteredPlugins()) {
        if (p.rendersBackground) p.renderBackground();
    }
    // Standard background code if nothing got executed
    // (or it still runs, according to needs)
}
In this case you have the 'renderBackground' hook that plugins can implement to change the background.
In the API approach, the user application would call your service to get the background rendered:
// other code
Background b = Salesforce2.AjaxRequest('getBackground', RGB(255, 10, 0));
// the app now has the result of calling you
This is all also related to the Hollywood principle, which is a good thing to apply, but sometimes it's just not practical.
The Plugin pattern from P of EAA is probably what you are after: create a public interface for your service to which plugins (modules) can bind ad hoc at runtime.
This is called a component architecture. It's really quite a big area, but some of the key things here are:
- Composition of components (container components can contain any other component). For example, a grid should be able to contain other grids, or any other components.
- Programming by interface (components are interacted with through known interfaces). For example, a view system might ask a component to render itself (say in HTML), or the component might be passed a render area and asked to draw into it directly.
- Extensive use of dynamic registries (when a plugin is loaded, it registers itself with the appropriate registries).
- A system for passing events to components (such as mouse clicks, cursor enter, etc.).
- A notification system.
- User management.
- And much, much more!
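To make the hook/registry idea concrete, here is a minimal sketch in Python (all names are illustrative):
from collections import defaultdict

_hooks = defaultdict(list)  # hook name -> list of registered callbacks

def register(hook_name, callback):
    # Called by a plugin when it is loaded.
    _hooks[hook_name].append(callback)

def run_hook(hook_name, *args, **kwargs):
    # Called by the core at a predefined extension point.
    return [cb(*args, **kwargs) for cb in _hooks[hook_name]]

# A hypothetical plugin overriding the background, as in the earlier example:
register("render_background", lambda: "striped background")

results = run_hook("render_background")
background = results[0] if results else "default background"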
If you're hosting the application, publish (and dogfood) a RESTful API.
If you're distributing software, look at OSGi.
Here's a small video that will at least give you some hints: the Lego Process [less than 2 minutes long].
There's also a complete recipe for how to create your own framework based extensively on modularization...
The most important element in making modularized software is to remember that it's purely [mostly] a matter of how loosely coupled you can make your systems. The more loosely coupled they are, the easier it is to modularize...