I'm looking into Meteor to create a collaborative document editing app, because it's great that Meteor synchronizes data between multiple clients by default.
But when using a text-area, like in Sameer Kalburgi’s example
http://www.skalb.com/2012/04/16/creating-a-document-sharing-site-with-meteor-js/
http://docshare-tutorial.meteor.com/
the experience is sub-optimal.
I tried typing at the same time as a colleague, and my changes would be overwritten when she typed, and vice versa. So I take it there is no merge algorithm in the conflict resolution yet?
Is this planned for the future? Are there ways to implement this currently? Etherpad seems to handle this problem rather well. Having this in Meteor would make creating collaborative document editing apps far more accessible.
So I looked into it some more; the technique used in Etherpad is known as Operational Transformation:
The solution is Operational Transformation (OT). If you haven’t heard of it, OT is a class of algorithms that do multi-site realtime concurrency. OT is like realtime git. It works with any amount of lag (from zero to an extended holiday). It lets users make live, concurrent edits with low bandwidth. OT gives you eventual consistency between multiple users without retries, without errors and without any data being overwritten.
Unfortunately, implementing OT sucks. There's a million algorithms with different tradeoffs, mostly trapped in academic papers. The algorithms are really hard and time consuming to implement correctly. We need some good libraries, so any project can just plug in OT if they need it.
That's from the site of ShareJS, a Node.js-based OT server and client that you can hook into your existing client.
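To give a feel for what hooking ShareJS into an existing client looks like, here is a browser-side sketch in the style of the ShareJS 0.x examples (the exact script paths and API names vary between versions, so treat the details as assumptions):

```javascript
// Sketch: bind a plain <textarea id="editor"> to a shared ShareJS document.
// Assumes the ShareJS client script has already been loaded on the page.
sharejs.open('my-document', 'text', function (error, doc) {
  if (error) {
    console.error('Could not open shared document:', error);
    return;
  }
  // From here on, concurrent edits from all connected clients are merged via OT.
  doc.attach_textarea(document.getElementById('editor'));
});
```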
OT is also implemented in Racer, the model synchronization engine for Node.js that forms the underpinnings of Derby. At the moment, Derby doesn't transparently provide it yet, but they plan to. From the Derby docs:
Currently, Racer defaults to applying all transactions in the order received, i.e. last-writer-wins. (…) Racer [also] supports conflict resolution via a combination of Software Transactional Memory (STM), Operational Transformation (OT), and Diff-match-patch techniques.
These features are not fully implemented yet, but the Racer demos show preliminary examples of STM and OT.
Coincidentally, both the ShareJS and DerbyJS teams have an ex-Google Wave engineer on board, and Meteor has an ex-Etherpad/Google Wave engineer on its core team. Since Etherpad is one of the best-known implementations of OT, I imagine Meteor will surely want to support it at some point as well…
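To illustrate the core idea behind OT (a toy example only, not ShareJS's or Etherpad's actual algorithm), here is how two concurrent insertions into the same string can be transformed so that both orderings converge:

```javascript
// Toy operational transformation for insert-only edits.
// An op is { pos, text }; transform() shifts `op` so it still applies
// correctly after a concurrent op `other` has already been applied.
function transform(op, other) {
  if (other.pos <= op.pos) {
    return { pos: op.pos + other.text.length, text: op.text };
  }
  return op;
}

function apply(doc, op) {
  return doc.slice(0, op.pos) + op.text + doc.slice(op.pos);
}

// Two users edit "meteor" at the same time:
const a = { pos: 0, text: 'use ' }; // user A prepends "use "
const b = { pos: 6, text: '.js' };  // user B appends ".js"

// Applying the ops in either order (transforming the later one) converges:
console.log(apply(apply('meteor', a), transform(b, a))); // "use meteor.js"
console.log(apply(apply('meteor', b), transform(a, b))); // "use meteor.js"
```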
I've created a Meteor smart package that integrates ShareJS:
https://github.com/mizzao/meteor-sharejs
It's quite preliminary right now, but you can import it into your app, drop in textareas, and it "just works". Please try it out and submit some new features via pull requests :)
Check out a demo here:
http://documents.meteor.com
What you describe seems outside Meteor's scope to me. It's not a tool for setting up collaboration features!
What it provides is a way to transparently work against a subset of a server's database. But implementing use-case-specific merging functionality is the job of the application, not the framework.
Intro:
I work for a contracting company. We build software for different corporate clients, each with their own rules, software standards, etc.
Problem:
The result is that we use several bug-tracking systems. The ticket flow is fairly large, and the SLAs are sometimes brutal. The main problem is that we track these tickets in our own bug tracker (currently Mantis), but we also communicate with clients in theirs. As it stands, too many communication channels create too much information noise.
Solution, progress:
The current solution is an employee responsible for synchronizing the streams, keeping track of the SLAs, and many other things. It consumes a large part of his time (roughly 70%) that could be spent on something more valuable. The other problem is that he isn't fast enough, and sometimes the sync isn't really in sync: some comments end up on only one system, and some are lost completely. (And don't get me started on holidays or sick days; that's where the fun begins.)
Question:
How can this process be automated, partially or entirely: aggregating tasks, watching SLAs, notifying the right people, etc.?
Thank you for your answers.
You need something like Zapier. It can map different applications and synchronize data between them. It works simply:
You create a zap (for example, between Redmine and Teamwork).
You configure the mapping (how items/attributes in Redmine map to items/attributes in Teamwork).
You generate access tokens in both systems and add them to the zap.
Zapier then synchronizes Redmine and Teamwork on a regular schedule.
But Mantis is not yet supported by Zapier. If all or most of your clients' bug trackers are on Zapier's app list, you could move your own tracker to another platform or request Mantis support from Zapier.
Another way is to develop your own synchronization service that connects to each client's bug tracker (authenticating with a login/password or token, as an employee would) and downloads updates into your own tracker (a rough sketch follows). This is the hard way, and it requires continuous development to keep up with the current versions of the clients' trackers.
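Here is what such a poller might look like in Node.js; every URL, token and field name below is a hypothetical placeholder (the real Mantis and client APIs differ), and a real service would also need paging, comment sync and conflict handling:

```javascript
// Sketch: poll a client's bug tracker and mirror updated issues into our own.
// Requires Node 18+ for the built-in fetch(). All endpoints/fields are made up.
const CLIENT_BT = 'https://client-bt.example.com/api/issues';
const OUR_BT = 'https://our-bt.example.com/api/issues';

async function syncOnce(sinceIso) {
  const res = await fetch(`${CLIENT_BT}?updated_since=${encodeURIComponent(sinceIso)}`, {
    headers: { Authorization: 'Bearer CLIENT_TOKEN' },
  });
  const issues = await res.json();

  for (const issue of issues) {
    // Upsert into our own tracker, keyed by the client's issue id.
    await fetch(`${OUR_BT}/${issue.id}`, {
      method: 'PUT',
      headers: { Authorization: 'Bearer OUR_TOKEN', 'Content-Type': 'application/json' },
      body: JSON.stringify({
        summary: issue.summary,
        status: issue.status,
        slaDeadline: issue.sla_deadline,
      }),
    });
  }
}

// Poll every 5 minutes, re-fetching a 10-minute window to avoid gaps.
setInterval(() => syncOnce(new Date(Date.now() - 10 * 60 * 1000).toISOString()), 5 * 60 * 1000);
```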
You can have a look at Slack: https://slack.com/
It's a great tool for group conversations:
Talk, share, and make decisions in open channels across your team, in private groups for sensitive matters, or use direct messages one-to-one.
It has a lot of integration tools, and you can use Zapier (https://zapier.com/) with it to program triggers.
With different channels you can notify the right people, individually or all together, in a group conversation :)
The obvious answer is to create integrations between all of the various BTs. Without knowing what those are, it's hard to say if that's entirely possible. Most modern BTs have an API and support integrations. Some, especially more desktop-based ones, don't. For those you would probably have to monitor a database directly.
Zapier, as someone already suggested, is a great tool for creating integrations and may already have some of the ones you need available. I love Slack and it has an API, but messages are basically just text, and unless you want to do some kind of delimiting when you post messages to its API, it probably isn't going to work.
I'm not sure what the budget is, but it will cost resources to create the integrations. I'd suggest that you hire someone simply to manage these: someone whose sole responsibility is to cross-populate the internal and external bug-tracking systems and track the progress in each. All you really need for this is someone with good attention to detail; they don't have to be a developer. This should be more cost-effective than using developer resources on it.
The other alternative is simply to stop. If your requirements dictate that you use your clients' bug-tracking software for projects you do for them, just use their software and stop duplicating the effort. If you need some kind of central repository for managing work, maybe a simple table or spreadsheet with the client, the project, the issue number, the status, and if possible a link to the issue in the client's BT would do. I understand the need and desire to centralize this, but if it's stifling productivity, then the opportunity costs are too high IMO.
If you create an integration tool for this, you will indeed have a very viable product. This is actually a pretty common problem.
I've created an API based on the RESTful pattern, and I was wondering what's the best way to monitor it? Can I somehow gather statistics on each request, and how deeply can I monitor the requests?
Also, can it be done with open-source software (maybe by building my own monitoring service), or do I need to buy third-party software?
If it can be achieved with open-source software, where do I start?
Start with identifying the core needs that you think monitoring will solve. Try to answer the two questions "What do I want to know?" and "How do I want to act on that information?".
Examples of "What do I want to know?"
Performance over time
Largest API users
Most commonly used API features
Error occurrence in the API
Examples of "How do I want to act on that information?"
Review a dashboard of known measurements
Be alerted when something changes beyond expected bounds
Trace execution that led to that state
Review measurements for the entire lifetime of the system
If you can answer those questions, you can either find the right third-party solution that captures the metrics you're interested in, or inject monitoring probes into the right section of your API that will tell you what you need to know. I noticed that you're primarily a Laravel user, so it's likely that many of the metrics you want can be captured by adding before (Registering Before Filters On a Controller) and after (Registering an After Application Filter) filters to your application, to measure response time and successful completion of the response. This is where the answers to the first set of questions are most important ("What do I want to know?"), as they will guide where and what you measure in your app.
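For illustration only, here is the same idea expressed as Express-style middleware in Node.js (the question is about Laravel, where the before/after filters mentioned above play the equivalent role; the route and metric destination are made up):

```javascript
// Sketch: capture response time and status code for every request.
const express = require('express');
const app = express();

app.use((req, res, next) => {
  const start = process.hrtime.bigint();
  res.on('finish', () => {
    const ms = Number(process.hrtime.bigint() - start) / 1e6;
    // Ship these numbers to whatever metrics store you choose (Graphite, InfluxDB, ...).
    console.log(`${req.method} ${req.originalUrl} ${res.statusCode} ${ms.toFixed(1)}ms`);
  });
  next();
});

app.get('/api/ping', (req, res) => res.json({ ok: true }));
app.listen(3000);
```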
Once you know where you can capture the data, selecting the right tool becomes a matter of choosing between (roughly) two classes of monitoring applications: highly specialized monitoring apps that are tightly bound to the operation of your application, and generalized monitoring software that is more akin to a time series database.
There are no popular (to my knowledge) examples of the highly specialized case that are open source. Many commercial solutions do exist however: NewRelic, Ruxit, DynaTrace, etc. etc. etc. Their function could easily be described to be similar to a remote profiler, with many other functions besides. (Also, don't forget that a more traditional profiler may be useful for collecting some of the information you need - while it definitely will not supplant monitoring your application, there's a lot of valuable information that can be gleaned from profiling even before you go to production.)
On the general side of things, there are many more open-source options that I'm personally aware of. The longest-lived is Graphite (a great intro to which may be read here: Measure Anything, Measure Everything), which is in pretty common use. Graphite is far from the only option, however, and you can find many other options like Kibana and InfluxDB should you wish to host them yourself.
Many of these open source options also have hosted options available from several providers. Additionally, you'll find that there are many entirely commercial options available in this camp (I'm founder of one, in fact :) - Instrumental ).
Most of these commercial options exist because application owners have found it pretty onerous to run their own monitoring infrastructure on top of running their actual application; maintaining availability of yet another distributed system is not high on many ops personnel's wishlists. :)
(I'm clearly biased in answering this since I co-founded Runscope, which I believe is the leader in API monitoring, so you can take this all with a grain of salt or trust my years of experience working with thousands of customers specifically on this problem :)
I don't know of any OSS tools specific to REST(ful) API monitoring. General purpose OSS metrics monitoring tools (like Graphite) can definitely help keep tabs on pieces of your API stack, but don't have any API-specific features.
Commercial metrics monitoring tools (like Datadog) or Application Performance Monitoring (APM) tools like (New Relic or AppDynamics) have a few more features specific to API use cases, but none are centered on it. These are a useful part of what we call a "layered monitoring approach": start with high-level API monitoring, and use these other tools (exception trackers, APM, raw logs) to dive into issues when they arise.
So, what API-specific features should you be looking for in an API monitoring tool? We categorize them based on the three factors that you're generally monitoring for: uptime/availability, performance/speed and correctness/data validation.
Uptime Monitoring
At a base level you'll want to know if your APIs are even available to the clients that need to reach them. For "public" APIs (meaning available on the public internet, not necessarily publicized; a mobile backend API is public but not necessarily publicized), you'll want to simulate the clients that are calling them as much as possible. If you have a mobile app, it's likely the API needs to be available around the world. So at a bare minimum, your API monitoring tool should allow you to run tests from multiple locations. If your API can't be reached from a location, you'll want notifications via email, Slack, etc.
If your API is on a private network (corporate firewall, staging environment, local machine, etc.) you'll want to be able to "see" it as well. There are a variety of approaches for this (agents, VPNs, etc.) just make sure you use one your IT department signs off on.
Global distribution of testing agents is an expensive setup if you're self-hosting, building in-house or using an OSS tool. You need to make sure each remote location you set up (preferably outside your main cluster) is highly-available and fully-monitored as well. This can get expensive and time-consuming very quickly.
Performance Monitoring
Once you've verified your APIs are accessible, you'll want to start measuring how fast they are performing, to make sure they're not slowing down the apps that consume them. Raw response time is the bare-minimum metric you should be tracking, but it isn't always the most useful. Consider cases where multiple API calls are aggregated into a view for the user, or where actions by the user generate dynamic or rarely called data that may not be present in a caching layer yet. These multi-step tasks or workflows can be difficult to monitor with APM or metrics-based tools, as those don't have the capability to understand the content of the API calls, only their existence.
Externally monitoring for speed is also important to get the most accurate representation of performance. If the monitoring agent sits inside your code or on the same server, it's unlikely it's taking into account all the factors that an actual client experiences when making a call. Things like DNS resolution, SSL negotiation, load balancing, caching, etc.
Correctness and Data Validation
What good is an API that's up and fast if it's returning the wrong data? This scenario is very common and is ultimately a far worse user experience. People understand "down"; they don't understand why an app is showing them the wrong data. A good API monitoring tool will allow you to do deep inspection of the message payloads going back and forth. JSON and XML parsing, complex assertions, schema validation, data extraction, dynamic variables, multi-step monitors and more are required to fully validate that the data being sent back and forth is correct.
It's also important to validate how clients authenticate with your API. Good API-specific monitoring tools will understand OAuth, mutual authentication with client certificates, token authentication, etc.
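As a rough illustration of the kind of check an API monitoring tool runs for you, here is a single hand-rolled "monitor" in Node.js (Node 18+ for built-in fetch; the endpoint, token and thresholds are made up):

```javascript
// Sketch: one monitor run that checks availability, speed and correctness together.
async function checkOrdersApi() {
  const start = Date.now();
  const res = await fetch('https://api.example.com/v1/orders/42', {
    headers: { Authorization: 'Bearer TEST_TOKEN' }, // authentication is part of the check
  });
  const elapsedMs = Date.now() - start;

  const failures = [];
  if (res.status !== 200) failures.push(`status ${res.status}`);
  if (elapsedMs > 500) failures.push(`slow response: ${elapsedMs}ms`);

  const body = await res.json().catch(() => null);
  if (!body) failures.push('body is not valid JSON');
  else if (body.id !== 42) failures.push('payload has the wrong order id');

  if (failures.length) {
    // A real tool would alert via email/Slack/etc. and test from multiple locations.
    console.error('API check failed:', failures.join('; '));
  }
}

checkOrdersApi();
```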
Hopefully this gives you a sense of why API monitoring is different from "traditional" metrics, APM and logging tools, and how they can all play together to get a complete picture of how your application is performing.
I am using runscope.com for my company. If you want something free, apicombo.com can also do it.
Basically you can create a test for your API endpoint to validate the payload, response time, status code, etc. Then you can schedule the test to run. They also provide some basic statistics.
I've tried several applications and methods to do that, and the best (for my company and our related projects) is to log key=value pairs (atomic entries with all the information associated with an operation, like source IP, operation result, elapsed time, etc., in specific log files for each node/server) and then monitor them with Splunk. With your REST and JSON data your approach may be slightly different, but that's also well supported.
It's pretty easy to install and set up. You can monitor (almost) real-time data (response times, operation results), send notifications on events, and do some data warehousing (and many other things; there are lots of plugins).
It's not open source, but you can try it for free if you use less than 50 MB of logs per day (that's how it worked some time ago; I'm on an enterprise license now, so I'm not 100% sure).
Here is a little tutorial explaining how to achieve what you are looking for: http://blogs.splunk.com/2013/06/18/getting-data-from-your-rest-apis-into-splunk/
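For what it's worth, emitting those key=value lines is trivial from application code; here is a minimal Node.js sketch (the field names are just examples):

```javascript
// Sketch: write one key=value line per API operation so Splunk can index the fields.
function logOperation(fields) {
  const line = Object.entries(fields)
    .map(([key, value]) => `${key}=${JSON.stringify(String(value))}`)
    .join(' ');
  console.log(line); // or append to a per-node log file that Splunk watches
}

logOperation({
  ts: new Date().toISOString(),
  src_ip: '10.0.0.5',
  operation: 'GET /orders',
  result: 'ok',
  elapsed_ms: 42,
});
```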
Why use an existing message queuing library such as ZeroMQ, as opposed to writing your own?
We're working on a project here that will be a self-dividing server pool: if one section grows too heavy, the manager will divide it and put it on another machine as a separate process. It will also alert all affected connected clients to connect to the new server.
I am curious about using ZeroMQ for inter-server and inter-process communication. My partner would prefer to roll his own. I'm looking to the community to answer this question.
I'm a fairly novice programmer myself and just learned about message queues. As I've googled and read, it seems everyone is using message queues for all sorts of things, but why? What makes them better than writing your own library? Why are they so common, and why are there so many?
what makes them better than writing your own library?
When rolling out the first version of your app, probably nothing: your needs are well defined and you will develop a messaging system that will fit your needs: small feature list, small source code etc.
Those tools are very useful after the first release, when you actually have to extend your application and add more features to it.
Let me give you a few use cases:
Your app has to talk to a big-endian machine (SPARC/PowerPC) from a little-endian machine (x86, Intel/AMD), and your messaging system made some endian-ordering assumptions: go and fix it.
You designed your app without a binary protocol/messaging system, and now it is very slow because you spend most of your time parsing messages (the number of messages increased and parsing became a bottleneck): adapt it so it can transport binary/fixed encodings.
At the beginning you had 3 machines inside a LAN, no noticeable delays, and everything got to every machine. Your client/boss/pointy-haired-devil-boss shows up and tells you that you will install the app on a WAN you do not manage, and then you start having connection failures, bad latency, etc. You need to store messages and retry sending them later on: go back to the code and plug this stuff in (and enjoy).
Some messages sent need replies, but not all of them: you send some parameters in and expect a spreadsheet as a result, instead of just send-and-acknowledge. Go back to the code and plug this stuff in (and enjoy).
Some messages are critical, and their reception/sending needs proper backup/persistence. Why, you ask? Auditing purposes.
And many other use cases that I forgot ...
You can implement it yourself, but do not spend much time doing so: you will probably replace it later on anyway.
That's very much like asking: why use a database when you can write your own?
The answer is that using a tool that has been around for a while and is well understood in lots of different use cases, pays off more and more over time and as your requirements evolve. This is especially true if more than one developer is involved in a project. Do you want to become support staff for a queueing system if you change to a new project? Using a tool prevents that from happening. It becomes someone else's problem.
Case in point: persistence. Writing a tool to store one message on disk is easy. Writing a persistor that scales and performs well and stably, in many different use cases, and is manageable, and cheap to support, is hard. If you want to see someone complaining about how hard it is then look at this: http://www.lshift.net/blog/2009/12/07/rabbitmq-at-the-skills-matter-functional-programming-exchange
Anyway, I hope this helps. By all means write your own tool. Many many people have done so. Whatever solves your problem, is good.
I'm considering using ZeroMQ myself - hence I stumbled across this question.
Let's assume for the moment that you have the ability to implement a message queuing system that meets all of your requirements. Why would you adopt ZeroMQ (or other third party library) over the roll-your-own approach? Simple - cost.
Let's assume for a moment that ZeroMQ already meets all of your requirements. All that needs to be done is integrating it into your build, read some doco and then start using it. That's got to be far less effort than rolling your own. Plus, the maintenance burden has been shifted to another company. Since ZeroMQ is free, it's like you've just grown your development team to include (part of) the ZeroMQ team.
If you ran a Software Development business, then I think that you would balance the cost/risk of using third party libraries against rolling your own, and in this case, using ZeroMQ would win hands down.
Perhaps you (or rather, your partner) suffer, as so many developers do, from the "Not Invented Here" syndrome? If so, adjust your attitude and reassess the use of ZeroMQ. Personally, I much prefer the benefits of a "Proudly Found Elsewhere" attitude. I'm hoping I can be proud of finding ZeroMQ... time will tell.
EDIT: I came across this video from the ZeroMQ developers that talks about why you should use ZeroMQ.
what makes them better than writing your own library?
Message queuing systems are transactional, which is conceptually easy to use as a client, but hard to get right as an implementor, especially considering persistent queues. You might think you can get away with writing a quick messaging library, but without transactions and persistence, you'd not have the full benefits of a messaging system.
Persistence in this context means that the messaging middleware keeps unhandled messages in permanent storage (on disk) in case the server goes down; after a restart, the messages can be handled and no retransmit is necessary (the sender does not even know there was a problem). Transactional means that you can read messages from different queues and write messages to different queues in a transactional manner, meaning that either all reads and writes succeed or (if one or more fail) none succeeds. This is not really much different from the transactionality known from interfacing with databases and has the same benefits (it simplifies error handling; without transactions, you would have to assure that each individual read/write succeeds, and if one or more fail, you have to roll back those changes that did succeed).
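To make the transactional part concrete, here is a sketch against a hypothetical broker client; the names (beginTransaction, receive, send, commit, rollback) are illustrative and not any specific library's API:

```javascript
// Sketch of transactional consume-and-produce with a *hypothetical* broker client.
// Either the read from "orders" and the write to "invoices" both take effect, or neither does.
async function processOneOrder(broker) {
  const tx = await broker.beginTransaction();
  try {
    const order = await tx.receive('orders');   // read from one queue
    const invoice = buildInvoice(order);        // your business logic
    await tx.send('invoices', invoice);         // write to another queue
    await tx.commit();                          // both operations succeed together
  } catch (err) {
    await tx.rollback();                        // message stays on "orders", nothing is sent
    throw err;
  }
}

function buildInvoice(order) {
  return { orderId: order.id, total: order.total };
}
```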
Before writing your own library, read the 0MQ Guide here: http://zguide.zeromq.org/page:all
Chances are that you will either decide to install RabbitMQ, or else you will make your library on top of ZeroMQ since they have already done all the hard parts.
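To get a feel for how little code sits on top of ZeroMQ, here is a minimal request/reply pair, assuming the `zeromq` npm package with its v6-style API (in a real deployment the two halves would be separate processes):

```javascript
const zmq = require('zeromq');

// Reply side: e.g. a pool node acknowledging instructions from the manager.
async function replyServer() {
  const sock = new zmq.Reply();
  await sock.bind('tcp://127.0.0.1:5555');
  for await (const [msg] of sock) {
    await sock.send('ack: ' + msg.toString());
  }
}

// Request side: e.g. the manager telling a node to take over a section.
async function requestClient() {
  const sock = new zmq.Request();
  sock.connect('tcp://127.0.0.1:5555');
  await sock.send('take-over section-42');
  const [reply] = await sock.receive();
  console.log(reply.toString()); // "ack: take-over section-42"
}

replyServer();
requestClient();
```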
If you have a little time, give it a try and roll out your own implementation! What you learn from the exercise will convince you of the wisdom of using an already-tested library.
Okay, I will shortly be starting down the path of Windows Mobile development. I know nothing about the subject really, and I am looking for people with experience to let me know of any gotchas you may know of.
Right now I don't even have a brief of what is required, but the assumption is that the application will be little more than a bunch of CRUD forms for updating data. The only other requirement knowledge I have is that the application will need to support offline storage when there is no signal available. This in turn will obviously require some kind of synchronization when the signal returns.
My initial thoughts are that the application will primarily be a front end to interact with a web service layer. I'm assuming that WCF will be an appropriate technology for building these services? I also thought that SQL Server CE would be a good route to go down with regard to the offline storage issues.
Any knowledge that you feel is useful within this domain would be appreciated: advice, links, books, anything.
EDIT: It has been noted that there are two ways to go with off-line synchronization. To either use some form of message queuing or to use SQL synchronization tools. Could anyone offer a good comparison and introduction to these?
EDIT 2: After a little more digging I get the impression that there are basically 3 different approaches I can use here:
An embedded database to query against, then synchronization online when able
MSMQ along with .NET remoting
WCF with ExchangeWebServiceMailTransport bindings using Exchange Server.
Now, there have been a few nice points raised on the first issue, and I think I understand at some level the issues I would face. But I'd like to get a little more information regarding MSMQ implementations and using WCF's new bindings.
Here are a few words from my experience so far (about 9 months) of .NET Windows Mobile development.
Well, you are occasionally connected (or, more likely, occasionally disconnected). You have to choose whether you are going to use messaging with queues (i.e. WCF/SOAP/XML or something like it) or database synchronisation. I chose the SQL synchronisation route, so I can't really comment on messaging. The SQL synchronisation route is not hassle-free!
If you go down the sync route with SQL compact like me you basically have two choices. SQL Server merge replication or the newer ADO.NET Synchronisation services. If you choose the former you need to be really careful with your DB design to ensure it can be easily partitioned between mobile subscribers and the publisher. You really need to think about conflicts, and splitting tables that wouldn't normally be split in a normalised DB design is one way of doing that. You have to consider situations where a device goes offline for some time and the publisher DB (i.e. main DB) and/or a subscriber alters the same data. What happens when the device comes back online? It might mean resolving conflicts even if you have partitioned things well. This is where I got burnt. But SQL Merge Replication can work well and reduces the amount of code you have to write.
Roll your own DAL. Don't attempt to use datareaders etc. directly from UI code and don't use typed datasets either. There may be third party DALs that work with Windows Mobile (i.e. I know LLBLGEN does, might be worth a look) but Linq-to-SQL is not supported and anyway you need something lightweight. The chances are the DAL won't be too big so roll it yourself.
If you are using .NET you'll probably end up wanting some unimplemented platform features. I recommend using this inexpensive framework to give you what you're missing (especially as related to connectivity and power management) - http://www.opennetcf.com/Products/SmartDeviceFramework/tabid/65/Default.aspx
Windows Mobile devices partially switch off to save power when not in use. If you are doing a polling type design you'll need to wake them up every x mins. A normal .net timer class won't do this. You'll need to use a platform feature which can be used from OpenNetCF (above). The timer class is called LargeIntervalTimer and is in the OpenNetCF.WindowsCE assembly/namespace (I think).
Good Luck!
SqlCE is only one of the options available for local data storage on a Windows Mobile device, and although it's an excellent database it has limitations. For one thing, SqlCE will not work (period) under encryption (in other words, if your user encrypts the location where your SDF file is, you will no longer be able to access the data).
The second (and most critical) weakness of SqlCE lies in the RDA/Merge Replication tools. SqlCE Merge Replication is not 100% reliable in situations where the network connection can drop during replication (obviously very common in Windows Mobile devices). If you enjoy trying to explain missing or corrupted data to your clients, go ahead and use SqlCE and merge replication.
Oracle Lite is a good alternative to SqlCE, although it too doesn't work properly under encryption. If encryption is a potential problem, you need to find a database engine that works under encryption (I don't know of one) or else write your own persistence component using XML or something.
Writing a WM application as a front end that primarily interacts with a web service in real time will only work in an always-connected environment. A better approach is to write your application as a front end that primarily interacts with local data (SqlCE, Oracle Lite, XML or whatever), and then create a separate Synchronization component that handles pushing and pulling data.
Again, SqlCE merge replication does this pushing and pulling beautifully and elegantly - it just doesn't work all the time. If you want a replication mechanism that works reliably, you'll have to write your own. Oracle Lite has something called a snapshot table that works very well for this purpose. A snapshot table in Olite tracks changes (like adds, updates and deletes) and allows you to query the changes separately and update the central database (through a web service) to match.
This thread I just posted on SO a few days ago has proven to be a great resource for me thus far.
Also the Windows Mobile MSDN WebCasts are a wealth of information on everything from just getting started up to advanced development.
I would suggest Sqlite for local storage. From the last benchmark I ran it was much better than SqlCe and you don't have to do stupid things like retain an open connection for performance improvements.
Trade-offs being that the toolset is less rich and the integration with other MSSql products is nil. :(
you might want to refer to this:
getting-started-with-windows-mobile-development
You shouldn't be intimidated for windows mobile development. It isn't much different from desktop development. I strongly recommend that you use .NET Compact Framework for development and not C++/MFC.
Some useful links:
The Mobile section at The Code Project. You would find a lot of articles; a little digging is needed to find the appropriate one.
The Smart Device Framework from OpenNetCF offers valuable extensions to the Compact Framework.
When you install the Mobile SDK, you will find under the Community folder links for the Windows Mobile and CF framework blogs. These are also valuable resources.
Regarding your application, you are right about the WCF and the SQL Server CE. These are the proper ways for handling communication and storage.
Some hints for people coming from a desktop world:
You need to have some sort of power management. The device may automatically go to suspend state. Also, you shouldn't consume power when you don't have to.
Network connectivity is a difficult issue. You can register notifications for when a specific network (Wi-Fi, GPRS) becomes available or unavailable. You can also set the preferred means of communication.
Make the UI as simple as possible. The user uses his thumb and/or a pen and he is probably on the move.
Test in a real device as early as possible.
"24 Hours of Windows Mobile Application Development" from the Windows Mobile Team Blog has some good resources
If you can, try to start from the user use cases and work back to the code, rather than vice versa. It's really easy to spend a lot more time working on the tools than working on the business problem. And thinking through user requirements will help you consider alternate strategies, because a lot of the patterns you know from normal .NET don't apply.
I've done lots of intermittent application development of exactly the type you are describing, and an on-board database works just fine. The MSMQ/WCF stuff just adds conceptual overhead without adding much value. You need a logical datastore locally anyway, and replication at this level is a simple concept that you want to keep simple, so the audit trail is easily monitored and debugged. MSMQ and WCF tend to hide things in unfamiliar places.
I upvoted the SQLite suggestion BTW. MS doesn't have their persistence story stabilized yet for CE.
For the database replication bit I highly recommend Sybase Ultralite. In terms of flexibility and performance it knocks the socks off SQL CE
I had to do this once. Weird setup with Macs for development, and we were all Java programmers. And a short deadline. PowerPC macs too, so no chance to install Windows for Visual Studio development, never mind that the money for this would never have appeared.
We ended up writing applications using Java, running on the IBM J9 virtual machine, with SWT for a user interface. Entirely free development stack. Easy to deploy. Code ran on any platform we desired, not just PocketPC/WinMob.
Most of the work was on the server side anyway - the database, the web service server. The logic. The reporting engine. The client side wasn't totally simple however - would get the form templates from the server (because they changed frequently), the site details (multi-site deployment), generate a UI from the form template (using some SWT GUI components that are wonderful for PocketPC development, like the ExpandBar), gather data with a point and click interface (minimising keyboard entry where possible), and then submit it back to the server.
For offline storage we used XML files on the device itself. More than enough for our needs, but yours may differ. Maybe consider SQLite?
There are a couple links you can check out to start with:
http://developer.windowsmobile.com
http://msdn.microsoft.com/en-us/windowsmobile/default.aspx
If you have a sticking point while developing, there are also Windows Mobile dedicated chats on MSDN that you can attend and ask your questions. The calendar hasn't been updated yet, but the next ones should be in January. You can find the schedule here: http://msdn.microsoft.com/en-us/chats/default.aspx
I am going to add an additional question to this post, as it's been active enough and hopefully it will be helpful to others as well as me. OK, so after playing around I now realize that standard class libraries cannot be included in Windows Mobile applications.
Now, the overwhelming advice here seems to be to use an embedded database, though I now have use cases, and it appears that I will need document synchronization as well as relational data. With this in mind, service-layer interaction seems inevitable. So my question is: how would I share common domain objects and interfaces between the layers?
"Document synchronization" - does that mean bidirectional? Or cumulative write-only? I can think of mobile architectures that would mainly collect and submit transactions for a shared document - if that's your requirement, then we should discuss offline - it's a long (and interesting) conversation.
Owen, you can share code from Compact Framework -> Desktop; it's only Desktop -> Compact Framework that has compatibility issues, if you use certain objects that are not supported by the CF.
While a desktop lib doesn't work on CF, a CF lib WILL work on the desktop; you can also run CF .exes on the desktop!
Just create a CF library as the project that defines your base objects / interfaces etc.
This book should be essential reading for all Windows Mobile developers: http://www.microsoft.com/learning/en/us/books/10294.aspx
For developing Windows Mobile applications you need the basic tools: Silverlight, Visual Studio, the Windows Phone emulator, and SQLite as your database storage.
A while ago another question referred to the (possibly urban tale) statistic that
... the average lifespan of software is about 3 years
At the time I came up with the following reasons (and I'm sure there are more possibly better ones):
A new major system (ERP, CRM, etc.) is implemented and it has an "integrated" module to replace the old app.
Same, but no integrated app - but the existing app is not adaptable (the people left, technology has changed, current IT policies have changed, users don't like the existing app.)
The company you acquired the basic app from, to customize it for your needs has disappeared.
Or you don't get along well with them any more.
The technology for the existing app is "obsolete" (according to the framework vendor/Microsoft/consultant/industry expert/new IT manager who has management's ear.)
"We're phasing out (Windows 95/Windows 98/Windows 2000/Windows XP/NT) and we need matching technology in our apps".
"We've learned a lot from (App Version n) and we'll do a lot better the second/third/fourth/n+1th time."
Job justification for developers/IT manager/Division VP/consulting company.
The users hate it.
We've merged/acquired a competitor/been acquired by a competitor and theirs is better.
Some of these are unavoidable (e.g. your company gets bought), but overall this is surely something that needs to be avoided. Does your organization intentionally fight this syndrome? What effective strategies would you recommend?
That's why an application needs to be easy to expand, and you should be able to easily add-in all the buzzwords.
If you have a solid base code, most of the buzzwords are related to the UI (Vista Controls, Ajax, .net, ASP.net 3.5)...
You could be running COBOL in the back end (I wouldn't).
A new major system is implemented - There's nothing you can do.
current IT policies have changed, - The app should be adaptable.
users don't like/hate the existing app - why? cosmetic changes in the UI can fix this most of the time.
The company you acquired the basic app from, to customize it for your needs has disappeared. - I wouldn't do that, I'd prefer to write it myself.
The technology for the existing app is "obsolete" (according to the framework vendor/Microsoft/consultant/industry expert/new IT manager who has management's ear.) - same as the above, if the back-end is solid, you should follow these in the front-end.
"We're phasing out (Windows 95/Windows 98/Windows 2000/Windows XP/NT) and we need matching technology in our apps". - a simple compatibility test and minor UI elements solve this.
I'll also say that this is different when you compare in-house to commercial apps. If you're doing an in-house app, change guarantees your job (if you know what you're doing). If you're doing a commercial app, change is an opportunity to make more money: new features will get you upgrades from existing clients and new clients who are looking for the buzzwords, and these buzzwords can become your advantage when compared to a competitor.
The average lifetime of software I write at the moment is probably a few days. (I write a lot of scripts, so I might be an aberration. ;-) But the core system I work with is probably 15 to 20 years old now. The underlying OS is about 30 years old. There is nothing inherently wrong with either old or young software. In fact, software ages best when it's possible to adapt it to new uses.
Having layers of abstraction between functional parts make it easier to replace functionality in a system. For instance, we've gone through several different tape libraries on our system and now we are considering going to disk archives in the future. Since the "archive" portion of our system sits behind an abstraction layer, we can replace it fairly easily without replacing the rest of the system.
When possible, it's also best to use standard parts. That way, if you run into some limitation, it's likely others will have the same problems and more likely someone will come up with a fix.
Continuous improvement - add useful features at regular intervals
No show-stopping bugs in new versions - testing, testing, testing...
Be nice to your clients and treat them with respect (most users really don't want to change their ERPs every three years, so if you have good relations with them they'll be on your side)
Stay current with new technologies and integrate them in your application when needed
When gathering requirements, if someone says "Situation X will always be the case, no exceptions", make it configurable (see the sketch below). It will always change, no exceptions.
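A toy sketch of that habit (the file name and field are made up): read the "always true" value from configuration instead of hard-coding it.

```javascript
// config.json (hypothetical): { "maxAttachmentsPerTicket": 1 }
const config = require('./config.json');

function canAddAttachment(ticket) {
  // "A ticket will only ever have one attachment, no exceptions" -- until it isn't.
  return ticket.attachments.length < config.maxAttachmentsPerTicket;
}
```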
Most companies don't make it for 5 years. Their software implementations wouldn't be expected to last as long.