Compact Framework - Lightweight GUI Framework? [closed] - compact-framework

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking us to recommend or find a tool, library or favorite off-site resource are off-topic for Stack Overflow as they tend to attract opinionated answers and spam. Instead, describe the problem and what has been done so far to solve it.
Closed 9 years ago.
WinForms on the Compact Framework is a bit heavy: initialising a lot of window handles takes serious time and memory. Another issue is that the lack of built-in double buffering, and the lack of control you have over UI rendering, means that during processor-intensive operations the user can be left staring at a half-rendered screen. Nice!
To alleviate this I would seek a lightweight control framework. Is there one kicking about already, or would one have to homebrew it?
By lightweight I mean a control library that lets one fully control the painting of controls and doesn't use many expensive window handles.
NOTE: Please don't suggest that I am running too much on the UI thread. That is not the case.

I ran across this the other day, which might be helpful at least as a starting point: Fluid - Windows Mobile .NET Touch Controls. The look and feel is nice, but there is no design-time support. I don't know too much about the memory footprint, etc., but everything is double buffered and the performance appears to be pretty good.

Ok, just an idea off the top of my head...
How about creating a synchronisation object (e.g. a critical section or single lock) in your application, shared between your worker and GUI threads. Override the paint handler; when you start painting, block all the other threads, so that you are not left with a half-painted screen while they hog the CPU.
(This of course assumes that presenting a pretty picture to your user is the most important thing you require ;) )
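As a language-agnostic sketch of that idea (Python used here for brevity; `on_paint` and `worker_step` are invented stand-ins for the real paint override and worker loop), the shared lock might look like:

```python
import threading

# Shared between the GUI thread and the worker threads.
paint_lock = threading.Lock()

def on_paint(draw):
    # Hold the lock for the whole repaint, so workers cannot hog the
    # CPU while the screen is half drawn.
    with paint_lock:
        draw()

def worker_step(work):
    # Workers take the same lock for each small slice of work, so a
    # pending repaint only ever waits for one slice, not the whole job.
    with paint_lock:
        work()
```

The trade-off is that workers must do their work in small slices; one long slice under the lock stalls painting just as badly as before.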

a bit slow and you can't control the paint event so during processor intensive operations the UI might leave the user staring at a half rendered screen.
It's generally a bad idea to do expensive tasks on the UI thread. To keep your UI responsive, these tasks should be performed by a worker thread.

Actually, you can override the paint event.
And the idea is that you offload long-running operations to a separate thread. That's no different from any other event-driven framework, really. Anything that relies on handling a Paint event is going to be susceptible to that.
Also, there's no system that lets you determine when the paint event is raised. That kind of event is generally raised by the window manager layer, which is outside of the application (or even the framework). You can handle the event yourself and just do no work some of the time, but I wouldn't recommend it.
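A minimal sketch of the offloading pattern this answer describes, in Python rather than C# for brevity (`expensive_task` is an invented stand-in for the real long-running work):

```python
import concurrent.futures

def expensive_task(n):
    # Stand-in for a long-running computation that must not run on
    # the UI thread.
    return sum(i * i for i in range(n))

executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)

# Kick off the work; the UI thread returns immediately and keeps
# servicing paint events.
future = executor.submit(expensive_task, 1000)

# Later, the UI checks future.done() (or registers a completion
# callback) instead of blocking inside a Paint handler.
result = future.result()
```

In a real GUI the `result()` call would happen only after a done-notification, never inside the Paint handler itself.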

Related

Better to not use Core Data for ease of thread safety?

I have an app that is currently built on Core Data and has multiple threads with multiple NSManagedObjectContexts. It's a music app so there is always stuff running on the background threads that needs to not interfere with the main thread and vice versa.
So far I've been slowly chipping away at all sorts of deadlock and thread-safety issues, but frankly I'm hitting a wall trying to keep the MOCs in sync, keep them from blocking threads, and ensure nothing accesses entities that have been deleted, etc.
My question is this:
If I were to ditch Core Data and just create some custom NSObjects to keep track of properties would that make these kind of issues simpler? Is it possible to access the NSObjects from multiple threads (without causing deadlock etc) so that I wouldn't have to maintain several copies and sync them? Or will I still face similar challenges?
I'm pretty new to Objective-C, so I'm really looking for the easier solution rather than the most sophisticated. Any links to good design patterns for this sort of thing are also appreciated!
Rephrasing the question: "Would we be better off ditching a framework where the engineers, whose sole focus is on creating said framework, have spent countless hours working on that framework's concurrency model to instead roll our own concurrency model, starting from scratch?"
Concurrency is hard.
Rolling your own means tackling all the problems that Core Data has tackled already (including accessing state from multiple threads without deadlocking and/or requiring many copies of the data) and then adding on whatever unique twists you need for your app. The team that wrote Core Data has thought quite deeply about the subject, to the point that there is an entire set of documentation devoted to it (with the APIs to back it).
So, certainly, there is quite likely a concurrency model highly specific to your application that would be considerably more efficient than using Core Data, but you are going to end up re-inventing the persistence patterns Core Data currently offers, and you'll need to engineer said concurrency model from the ground up.
So, no, there is no easy way out.
Given that it is a music app, I can fully appreciate how difficult the concurrency issues can be. Latency is obviously a huge issue. You'll need to ask a more specific architectural/design question to really glean much insight beyond the above.
For example, what granularity of data are you persisting via Core Data, and what is the frequency of change of said data? Is the model of said data relatively optimal? Etc., etc.
Or, with a concrete example:
Say you have a Note class that describes a note to be played. Say that note has properties like pitch, volume, duration, instrument, etc...
Now, imagine your background thread needs to set pitch and duration.
Without thread synchronization of some kind, how would the consuming thread know that the editing thread is done editing?
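A minimal sketch of one possible answer, in Python rather than Objective-C for brevity; the `Note` fields come from the hypothetical example above, and the lock/event combination is just one way to signal edit completion:

```python
import threading

class Note:
    """Hypothetical note from the example: pitch, duration, etc."""

    def __init__(self):
        self._lock = threading.Lock()
        self._edited = threading.Event()
        self.pitch = 60
        self.duration = 1.0

    def edit(self, pitch, duration):
        # Editing thread: both fields change atomically under the lock.
        with self._lock:
            self.pitch = pitch
            self.duration = duration
        self._edited.set()  # signal: the edit is complete

    def wait_for_edit(self, timeout=None):
        # Consuming thread: block until the editing thread has finished,
        # then take a consistent snapshot of both fields.
        self._edited.wait(timeout)
        with self._lock:
            return (self.pitch, self.duration)
```

Without the event, the consumer could observe the new pitch paired with the old duration; this is exactly the half-edited state the question warns about.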

How to document a system flow before coding it? [closed]

Closed 5 years ago.
Can anybody help me with this?
Here's the problem...
When I have to code, let's say, a registration form, I add the new form and start coding it. But sometimes the form is a bit complex, and I find myself duplicating code and making the same verifications over and over again, making the code messy.
I was wondering if there is some sort of tool that allows me to map the flow of this form before coding it, like a flowchart, where I can spot the places where I'm duplicating code and then avoid that.
thanks!
Well, the real tool/language designed for this is UML; you can read up on it.
But it's very strict, although you don't have to follow all the specs and conventions. There are several types of diagram that cover pretty much everything, but AFAIK only four are in practical use.
Most people I know tend to draw control-flow diagrams.
Google Docs drawing is perfectly fine for that.
But it depends on the type of application. I personally think more in terms of data and like data-flow diagrams.
I also like to design top-down. Other people do it differently. I mostly start with a sheet of paper and a pen and draw some stuff I couldn't tell the meaning of half an hour later. But I start very basic with application/database/user or something, and when a picture arises I go into specifics using modeling tools.
I cannot design anything without knowing the greater picture, although I know it is a software developer's quality to do just that.
P.S.: Designing a form sounds very trivial at first, although it might not be.
I think a great help is sticking to some programming patterns and paradigms you like. A good base is the MVC concept. I like to extend it with a "resource model" that does all the database stuff.
1) The best place to start is the white board. If your company doesn't have white boards, tell them to order some. Seriously. You will wonder how you lived without it.
2) Build a paper prototype with the stakeholders, or have them build one. They take maybe 30 minutes to make and solve a ton of UI arguments that otherwise would become "defects".
3) Code. That's the easy part.
4) Refactor as you fix defects. You'll notice better things you could have done, shortcuts, duplicate code. Take time to fix the defect correctly and code quality will improve. It's an iterative process.
5) Visio if you hand the process off (to support or whatever). This could be step 4 as kind of a state machine, but the paper prototypes should be enough of a process to get you started with enabling, disabling, etc.
If you're on the computer designing and writing code before you have a prototype and have whiteboarded everything out, you will have to invest a lot more time in the refactor step. Visio and other state-design applications will show you what happens, but the whiteboard marker is the Excalibur of the development world.
I know this doesn't answer the question you asked, verbatim; however, solid processes are infinitely more valuable than tools.

Beginner LabVIEW Tasks [closed]

Closed 9 years ago.
I am on a FRC (FIRST Robotics Competition) team, and we plan on using LabVIEW to program our robot. I was wondering if anyone had any basic LabVIEW tasks that we could use to learn LabVIEW before we begin the actual programming of our robot?
EDIT: Most of the programmers have at least a basic understanding of programming, and are coming from another language.
I believe the best thing would be to go through the getting started tutorial of LabVIEW:
http://digital.ni.com/manuals.nsf/websearch/EC6EF8DE9CB98742862576F7006B0E1E
The reason I say that is because they contain exercises between the lessons, and you could attempt those without looking at the solutions.
Also, the following site has the 3-hour and 6-hour course on LabVIEW which could be approached in the same way:
http://www.ni.com/academic/labview_training/
Also, if you need guidance for that particular project, I don't mind getting involved to mentor your team on it. You could provide me with the contact details of your teacher/professor and I can get in touch with them.
Take Care
Adnan
I was also on a FIRST team for a while, and I taught the programmers while I was there. I found that the best way to get the language down was to practice with some simple projects which solidify data-flow and other important concepts in the mind.
A few:
A stop light with user-manipulable controls for how long each light should stay on. Once you've got that down, change it so that the user can only set stopping distance and speed limit; that way you work in some of the math functions.
I always taught some of the basic concepts, like loops and shift registers, with imaginary killbots. A killbot has a pre-set kill limit (for teaching for loops), and has to keep track of how many hits it gets (with shift registers).
I certainly wouldn't go with NI's training materials. They only managed to confuse the new programmers, even the ones with experience in other languages. I also found it best not to teach the concept of global variables, which NI does, because it breaks the whole point of LabVIEW: data flow.
Wow. That was long winded.
While I haven't gone through them, Ben Zimmer's company has been posting (apparently free) FRC training videos at http://www.frcmastery.com/. Possibly they're worth checking out.
If you have LabVIEW installed you could have a look at the following two sections of the on-line help files:
Getting started
Fundamentals
The Getting Started section is a technical part on how to use LabVIEW; the Fundamentals, on the other hand, provide a deep insight into how to program with LabVIEW and cover a lot. Both are available on the web (I provided the URLs).
Personally, I'm not so into NI resources.
However, they provided this short and rather nice course: http://cnx.org/content/col10241/1.4
(I like the videos).
Also, I used this
http://techteach.no/labview/lv85/labview/index.htm

What would a multithreaded UI api look like, and what advantages would it provide?

Or, equivalently, how would you design such an API? Expected/example usage would be illustrative as well.
My curiosity comes directly from the comments (and subsequent editing on my part) of this answer. Similar questions/discussions in the past provide a bit of inspiration for actually asking it.
Executive summary:
I don't feel a multithreaded UI API is possible in a meaningful way, nor particularly desirable. This view seems somewhat contentious, and being a (relatively) humble man I'd like to see the error of my ways, if they actually are erroneous.
*Multithreaded is defined pretty loosely in this context; treat it however makes sense to you.
Since this is pretty free-form, I'll be accepting whichever answer is, in my opinion, the most coherent and well supported, regardless of whether I agree with it.
OK, perhaps more clarification is necessary.
Pretty much every serious application has more than one thread. At the very least, they'll spin up an additional thread to do some background task in response to a UI event.
I do not consider this a multithreaded UI.
All the UI work is still being done on a single thread. I'd say, at a basic level, a multithreaded UI API would have to do away with (in some way) thread-based ownership of UI objects or with dispatching events to a single thread.
Remember, this is about the UI API itself, not the applications that make use of it.
I don't see how a multithreaded UI API would differ much from existing ones. The major differences would be:
(If using a non-GC'd language like C++) Object lifetimes are tracked by reference-counted pointer wrappers such as std::tr1::shared_ptr. This ensures you don't race with a thread trying to delete an object.
All methods are reentrant, thread-safe, and guaranteed not to block on event callbacks (therefore, event callbacks shall not be invoked while holding locks)
A total order on locks would need to be specified; for example, the implementation of a method on a control would only be allowed to invoke methods on child controls, except by scheduling an asynchronous callback to run later or on another thread.
With those changes, you can apply this to almost any GUI framework you like. There's not really a need for massive changes; however, the additional locking overhead will slow it down, and the restrictions on lock ordering will make designing custom controls somewhat more complex.
Since this is usually a lot more trouble than it's worth, most GUI frameworks strike a middle ground: UI objects can generally only be manipulated from the UI thread (some systems, such as Win32, allow multiple UI threads with separate UI objects), and to communicate between threads there is a thread-safe method to schedule a callback to be invoked on the UI thread.
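That middle ground can be sketched with a plain thread-safe queue (Python here; `begin_invoke` and `pump_ui_events` are invented names echoing WinForms' `BeginInvoke` and the UI message pump):

```python
import queue
import threading

# Worker threads enqueue closures; only the UI thread's pump ever
# touches UI state.
ui_queue = queue.Queue()

def begin_invoke(callback, *args):
    # Safe to call from any thread.
    ui_queue.put((callback, args))

def pump_ui_events():
    # Runs on the UI thread; drains all pending cross-thread callbacks.
    while True:
        try:
            callback, args = ui_queue.get_nowait()
        except queue.Empty:
            break
        callback(*args)
```

Because the queue serializes everything onto one thread, the UI objects themselves never need their own locks.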
Most GUIs are multithreaded, at least in the sense that the GUI runs in a separate thread from the rest of the application, often with one more thread for an event handler. This has the obvious benefit that complicated back-end work and synchronous IO can't bring the GUI to a screeching halt, and vice versa.
Adding more threads tends to be a proposition of diminishing returns, unless you're handling things like multi-touch or multi-user input. However, most multi-touch input seems to be handled at the driver level in a threaded manner, so there's usually no need for it at the GUI level. For the most part you only need a 1:1 thread-to-user ratio, plus some constant number depending on what exactly you're doing.
For example, pre-caching threads are popular. The thread can burn any extra CPU cycles doing predictive caching, to make things run faster in general. Animation threads... If you have intensive animations, but you want to maintain responsiveness you can put the animation in a lower priority thread than the rest of the UI. Event handler threads are also popular, as mentioned above, but are usually provided transparently to the users of the framework.
So there are definitely uses for threads, but there's no point in spawning large numbers of threads for a GUI. However, if you were writing your own GUI framework you would definitely have to implement it using a threaded model.
There is nothing wrong with, nor particularly special about, multithreaded UI apps. All you need is some sort of synchronization between threads and a way to update the UI across thread boundaries (BeginInvoke in C#, SendMessage in a plain Win32 app, etc.).
As for uses, pretty much everything you see is multithreaded, from Internet Browsers (they have background threads downloading files while a main thread is taking care of displaying the parts downloaded - again, making use of heavy synchronization) to Office apps (the save function in Microsoft Office comes to mind) to games (good luck finding a single threaded big name game). In fact the C# WinForms UI spawns a new thread for the UI out of the box!
What specifically do you think is not desirable or hard to implement about it?
I don't see any benefit, really. Let's say the average app has three primary goals:
Rendering
User input / event handlers
Number crunching / Network / Disk / Etc
Dividing these into one thread each (several for #3) would be pretty logical, and I would call #1 and #2 the UI.
You could say that #1 is already multithreaded and divided across tons of shader processors on the GPU. I don't know if adding more threads on the CPU would really help (at least if you are using standard shaders; IIRC some software ray tracers and other CGI renderers use several threads, but I would put such applications under #3).
The user-input methods (#2) should be really, really short, and invoke stuff from #3 if more time is needed, so adding more threads here wouldn't be of any use.

API for server-side 3D rendering [closed]

Closed 4 years ago.
I'm working on an application that needs to quickly render simple 3D scenes on the server, and then return them as a JPEG via HTTP. Basically, I want to be able to simply include a dynamic 3D scene in an HTML page, by doing something like:
<img src="http://www.myserver.com/renderimage?scene=1&x=123&y=123&z=123">
My question is about what technologies to use to do the rendering. In a desktop application I would quite naturally use DirectX, but I'm afraid it might not be ideal for a server-side application that would be creating images for dozens or even hundreds of users in tandem. Does anyone have any experience with this? Is there a 3D API (preferably freely available) that would be ideal for this application? Is it better to write a software renderer from scratch?
My main concern about using DirectX or OpenGL is whether it will function well in a virtualized server environment, and whether it makes sense with typical server hardware (over which I have little control).
RealityServer by mental images is designed to do precisely what is described here. More details are available on the product page (including a downloadable Developer Edition).
RealityServer docs
I'd say your best bet is to have a Direct3D/OpenGL app running on the server (without stopping), then have the server page send a request to the rendering app, and have the rendering app send a JPG/PNG/whatever back.
If Direct3D/OpenGL is too slow to render the scene in hardware, then any software solution will be worse.
By keeping the rendering app running, you avoid the overhead of creating/destroying textures, backbuffers, vertex buffers, etc. You could potentially render a simple scene hundreds of times a second.
However, many servers do not have graphics cards. Direct3D is largely useless in software (there is an emulated device from MS, but it's only good for testing effects); I've never tried OpenGL in software.
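The keep-it-running idea can be sketched as a long-lived renderer servicing requests from the web tier through a queue (Python; `render_scene` is a placeholder for the real Direct3D/OpenGL call, and all names here are invented):

```python
import queue
import threading

requests = queue.Queue()

def render_scene(scene, x, y, z):
    # Placeholder for the real GPU render; returns fake image bytes.
    return b"JPEG:" + f"{scene}@{x},{y},{z}".encode()

def renderer_loop():
    # Created once; its expensive state (textures, buffers) is reused
    # for every request instead of being rebuilt per hit.
    while True:
        scene_params, reply = requests.get()
        if scene_params is None:  # shutdown sentinel
            break
        reply.put(render_scene(*scene_params))

def handle_http_request(scene, x, y, z):
    # What the 'renderimage' endpoint would do: forward the query
    # parameters to the renderer and wait for the image bytes.
    reply = queue.Queue()
    requests.put(((scene, x, y, z), reply))
    return reply.get()

threading.Thread(target=renderer_loop, daemon=True).start()
```

A real deployment would add a timeout on `reply.get()` and probably several renderer processes behind the queue.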
You could wrap Pov-ray (here using POSIX and the Windows build). PHP example:
<?php
chdir("/tmp");
#unlink("demo.png");
system("~janus/.wine/drive_c/POV-Ray-v3.7-RC6/bin/pvengine-sse2.exe /render demo.pov /exit");
header("Content-type: image/png");
fpassthru($f = fopen("demo.png","r"));
fclose($f);
?>
demo.pov available here.
You could use a templating language like Jinja2 to insert your own camera coordinates.
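For instance, using the standard library's `string.Template` in place of Jinja2 to keep the sketch dependency-free (the scene snippet itself is invented):

```python
from string import Template

# Minimal POV-Ray scene with the camera position left as placeholders.
scene_template = Template("""
camera { location <$x, $y, $z> look_at <0, 0, 0> }
sphere { <0, 1, 0>, 1 pigment { color rgb <1, 0, 0> } }
""")

def make_scene(x, y, z):
    # Substitute the camera coordinates from the HTTP query string
    # before handing the .pov text to the renderer.
    return scene_template.substitute(x=x, y=y, z=z)
```

The server would write `make_scene(...)` out as `demo.pov` before invoking POV-Ray as in the PHP example above.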
Server-side rendering only makes sense if the scene consists of a huge number of objects, such that downloading the data set to the client for client-side rendering would be far too slow, and the rendering is not expected to be real-time. Client-side rendering isn't too difficult if you use something like JOGL coupled with progressive scene download (i.e. download foreground objects and render, then incrementally download objects based on distance from the viewpoint and re-render).
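The distance-based download order described here is essentially a sort; a sketch (object names and coordinates invented):

```python
import math

def download_order(objects, viewpoint):
    # objects: list of (name, (x, y, z)); viewpoint: (x, y, z).
    # Nearest objects are fetched and rendered first, then the scene
    # is re-rendered as farther objects arrive.
    return sorted(objects, key=lambda obj: math.dist(obj[1], viewpoint))
```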
If you really want to do server side rendering, you may want to separate the web server part and the rendering part onto two computers with each configured optimally for their task (renderer has OpenGL card, minimal HD and just enough RAM, server has lots of fast disks, lots of ram, backups and no OpenGL). I very much doubt you will be able to do hardware rendering on a virtualised server since the server probably doesn't have a GPU.
Not so much an API but rather a renderer: POV-Ray? There also seems to be an HTTP interface...
You could also look at Java3D (https://java3d.dev.java.net/), which would be an elegant solution if your server architecture was Java-based already.
I'd also recommend trying to get away with a software-only rendering solution if you can - trying to wrangle a whole lot of server processes that are all making concurrent demands on the 3D rendering hardware sounds like a lot of work.
Yafaray (http://www.yafaray.org/) might be a good first choice to consider for general 3D rendering. It's reasonably fast and the results look great. It can be used within other software, e.g. the Blender 3D modeler. The license is LGPL.
If the server-side software happens to be written in Python, and the desired 3D scene is a visualization of scientific data, look into MayaVi2 http://mayavi.sourceforge.net/, or if not, go for a browse at http://www.vrplumber.com/py3d.py
Those who suggest the widely popular POV-Ray need to realize it's not a library or any kind of entity that offers an API. The server-side process would need to write a text scene file, execute a new process to run POV-Ray with the right options, and take the resulting image file. If that's easy to set up for a particular application, and if you've more expertise with POV-Ray than with other renderers, well go for it!
Check out wgpu.net.
I think it's very helpful.