Closed. This question is opinion-based. It is not currently accepting answers.
Does anybody have any success stories about having a team work via Remote Desktop?
In many workplaces, we put end users on Citrix, with the applications on a central, powerful server. Sometimes the clients are in the same building as the server, but often they are remote.
There could be some huge benefits for me in putting my developers on Windows XP or Vista instances running on a couple of servers with Hyper-V.
I'm worried that RDP/RDC via the internet would be too slow for somebody to be able to develop efficiently.
I'm sure I can hear plenty of bad things about it... are there any people out there that have had success?
I have seen a situation where the attempt was made to do this with a satellite office. It was done for a Java development team using various Java IDE tools. The result wasn't regarded as a success, and the company brought the team back into a central London office at considerable expense.
For someone doing this on a day-in, day-out basis with interactive software, the result isn't really very pleasant. For something that mainly uses text-based tools such as vim and unix command-line tools, it works somewhat better. At one point I had XVNC going over a 128 Kbit DSL link (of a type that was prevalent in New Zealand at the time) and could quite readily work on an Oracle-based data warehouse at a remote location. The low level of interactivity required by the tooling made it much less sensitive to the slow link than a Windows-based IDE would have been.
So, I'll invoke the 'it depends' argument with some qualifications:
I would not recommend it for a modern IDE, and certainly not for something heavily graphical like Dreamweaver, BI Development Studio or Informatica.
For a textual environment like traditional unix development tools it could probably be made to work quite well. These user interfaces are much less sensitive to latency than a direct-manipulation user interface.
I'm something of a believer in the 'best tools' principle. Going out of your way to give a second-rate user interface to a development team will give off negative signals. The cost saving from doing this is likely to be minimal and it will annoy some of your team members. Even if it can be made to work reasonably well you are still making a value statement by doing this. Weigh the cost saving against the cost of replacing one or more of your key development staff.
We connect to our development environments using RDP and locally the performance is great. It slows a bit over VPN, but is still acceptably responsive.
Turn off all the windows animation functionality, desktop background, etc. and that will help considerably.
If you're not worried about the latency on audio and fast-moving imagery and you're not developing anything dependent on 3D hardware, you'll likely be fine.
I've never used it in a team environment, but I use my laptop RDP'd into my workstation all day and love it.
I've worked in an environment where we would occasionally edit some existing code via remote desktop. There were no significant technical challenges to this, but as a developer I positively hated doing that work. Everything felt slow and unresponsive. However, we got the work done.
Thankfully these were often short 3-4 hour jobs, mostly fixes to existing systems on remote customer sites. I don't think I could recommend it as a normal way of working, but it's certainly possible.
I've used both VNC and RDP over a DSL connection, running through an SSH tunnel, and have had no real issues.
There are definitely some lags, particularly if you're redrawing large parts of a screen. But most development involves small edits, and both of these protocols handle that very well.
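The tunnel itself is just an SSH local port forward. Here's a minimal sketch in Python, assuming OpenSSH is installed; the host and ports are placeholders, and a plain ssh -N -L from a shell does the same thing:

    import subprocess

    # Forward local port 5901 to the remote machine's VNC port through SSH.
    # For RDP you would forward 3389 instead. Host name is a placeholder.
    tunnel = subprocess.Popen([
        "ssh", "-N",                    # no remote command: tunnel only
        "-L", "5901:localhost:5901",    # local 5901 -> remote 5901 (VNC display :1)
        "user@devbox.example.com",
    ])

    # Point the VNC client at localhost:5901; all traffic now travels inside
    # the encrypted SSH session. Call tunnel.terminate() when finished.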
I use Remote Desktop to control my Windows machine at work. I use a Parallels VM on a Mac and my connection is 2.5M down, 256k up.
This works really really well. I've been doing this for 2 years for 1-3 days a week. The slow upspeed isn't an issue - I can't type that fast.
I have 3 screens at work but still find a 20" Mac screen to be superior. The colours are much cleaner and I can work longer at the Mac than my work screens!
The thing that is a killer is Flash in a browser. If I accidentally open a browser with Flash on the remote machine, it kills the connection. The solution is to use FlashBlock (a Firefox add-in).
I use Eclipse and Visual Studio with no issues whatsoever.
I've used it to work from home (remote login to my in-office PC via VPN).
The performance depends on your ISPs, of course.
It's slightly less reliable, because in addition to downtime when the office LAN is down, there's now the additional risk of downtime while either of the internet connections is down.
I have a remote server on a 1 Mbps upstream pipe which I RDP to (over a VPN) and it works just fine. I even use large screen resolutions (1600x1200) with no performance problems. I'm not sure how such a setup would fare with multiple concurrent users, however.
A benefit of developing over RDP that I hadn't anticipated is that you can save your sessions: after you finish developing for the day, you quit your RDP client and power down your computer, and when you log back in the following day your session is right where you left it.
As an added bonus, RDP clients are available for Linux and OS X.
I use RDP daily for development. I leave my laptop on at home with my work environment open and ready to go. When I get to work and everybody is loading up their projects and opening their programs, I just RDP in and I'm ready to go. You have to keep in mind certain keyboard shortcuts that change, though (CTRL+ALT+DEL, for example); it is annoying at first, but you get used to it.
To keep the latency to a minimum, I recommend the following (a sketch of an .rdp file that bakes in these settings follows the list)...
Turn the colors down to 256 (after all, you only need to see text)
Leave the wallpaper at the other computer
Leave sounds at the other computer
Leave any themes at the other computer
Choose a lower connection speed, even if yours is higher; Windows will minimize the data sent.
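You can bake those settings into a saved .rdp file so every connection picks them up. Below is a minimal sketch in Python that generates one; the setting names follow the mstsc .rdp file format as far as I recall, so verify them against a file saved by your own client, and the host name is a placeholder:

    # Generate a .rdp file with the low-latency settings listed above.
    settings = {
        "full address:s": "devbox.example.com",
        "session bpp:i": 8,        # 256 colors: all you need to see text
        "disable wallpaper:i": 1,  # leave the wallpaper at the other computer
        "audiomode:i": 2,          # leave sounds at the other computer
        "disable themes:i": 1,     # leave any themes at the other computer
        "connection type:i": 1,    # report a low-speed link so less data is sent
    }

    with open("devbox.rdp", "w") as f:
        for key, value in settings.items():
            f.write(f"{key}:{value}\n")

    # Connect with: mstsc devbox.rdp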
One of the advantages you might also consider is processing power. If your machine at home has far better specifications than your workstation on the job, compilation time is improved a fair bit. Since your local machine only needs to update the image from the remote machine, your local computer is not under load.
Using this option also allows me to keep on track. While others log in and browse the internet and waste time, I'm set up and ready to go. Being more productive helps you get paid the big bucks (if your employer notices), while others are still stuck in their junior programming roles.
Pre-2000, I did it every day, several hours a day, for 3 years. This was when bandwidth sucked, too.
Nowadays it's much, much better.
And if you use NoMachine's NX, life gets even better :)
I did not, however, use the machine with multiple users. My concern with that would be that developers are a finicky bunch (myself included) and we tend to push machines really hard as it is.
Can't imagine several folks on one box all deciding to compile :)
G-Man
We do it with Citrix and it is very fast.
I wonder what the reason for this would be. Do the central servers have access to some resources that the individual developer machines could not access?
I'm using RDP to connect from my home computer to my work computer from time to time. I have to say: it's possible to code, but it's way more comfortable when the IDE is on your own machine. Even on a 100 Mbit LAN there is some noticeable lag. Not enough to stop work getting done, but annoying nevertheless.
If the people have to work from remote places on a regular basis, I'd rather prefer a setup where the central source control is available through some secure protocol (HTTPS, VPN, etc.), but the development can happen locally on the developer's machines. If using something like SVN, which works well even with offline development, then it should be way more comfortable for the programmers themselves.
What is important for a development workstation is sheer processing power. At our place the developers have the most high-end workstations in terms of CPU, memory, disk, etc., and not in terms of audio and graphics. It's the latter that are most affected by RDP.
As long as the server that your developers are RDP-ing to is fast enough to handle multiple compiles, builds at the same time you should be fine.
As with all things, the answer to your question is "Your Mileage May Vary", or YMMV. It depends on what the developers are doing. Do they spend most of their time writing code, or do they do a lot of large compiles? Do they need direct hardware access?
Do they need debugging rights? Once you grant them debugging rights, they basically own the machine and can interfere with other users.
It's typically much better to allow the users to develop on their own computers and use a VPN to allow them to access the version control system. They can then check out the files to their local computers, do whatever they want, and check in the changes.
But RDP has its advantages too. You really need to weigh the pros and cons and decide which list is longer or more "weighty".
I use NoMachine NX Client to remote desktop onto a headless server that runs FreeNX. It is great because I can login to my session from anywhere and my last session is still there for me. Speed has never been a problem, except when the DSL line is down.
Anyway, my point is that if you are running a Linux server and use 'vi', then there is a nicer alternative to 'screen'.
Related
I have the following dilemma: my clients (mom-n-pop pawnshops) have been using my management system, developed with ISQL, for over 20 years. Throughout these two decades, I have customized the app to each client's desires, or when changes in laws/regulations have required it. Most clients are single-user sites. Some have multiple stores, but have never wanted a distributed db; they don't trust the reliability or security of the internet or any other type of networking. So, they all use Standard Engines. I've been able to work around some SE limitations and done some clever tricks with ISQL and SE, but sooner or later new laws may require images of pawnshop customers, merchandise, electronic transmission, etc., and then it will be time to upgrade to IDS, re-write the app in 4GL, or change to another RDBMS. The logical and easiest route would be IDS/4GL; however, when I mentioned Linux or Unix-like platforms to my clients, they reacted negatively and demanded a Windows platform, so the easiest solution could be 4Js, Querix, etc... or Access, Visual FoxPro or ??? Anyone have suggestions?
This whole thing probably comes down to a couple of issues that you'll have to deal with.
The first is: what application programming and development language are you willing to learn and work with?
The other is: what kind of internet capabilities do you want?
For example, while looking at a report, do you want to be able to click a button, have the report converted to a PDF document, and then launch the e-mail client with that PDF attached?
What about after all the information and data are entered into the system? Perhaps each store would like its own miniature web site where people in town could check what the store has, in place of having to phone up and ask whether it has a $3 used lighter (the labor of phoning and checking for these cheap items is MORE than the item is worth, so the web is really great for this type of scenario).
The other issue is what kind of interface you want. I assume you currently have some type of green-screen or text-based interface? Or perhaps over the years you converted to a GUI (graphical user interface).
If you are still on green screens (text based), you now have to put a considerable amount of time and effort into the layout and into how your screens will work in a graphical system. I can remember going from green screens to color: all of a sudden, the choices and effort of having to pick the correct colors and layouts for each screen actually increased the workload by quite a bit. And then I went from color text screens to a graphical interface; again, all of a sudden we were presented with a large number of new controls and colors, and on top of that a large choice of different fonts and sizes.
And now with the web, not only do you deal with different kinds of button styles (round, oval, shading, shadows, glow effects), but in addition to all those hover and shading effects, you have to settle some pretty serious questions about what kind of colors (theme) your software will adopt for the whole web site.
This really comes down to how much learning and time you are willing to invest in new tools, and how much software you can and will produce for a given amount of time and effort.
I'm quite partial to RAD tools when you get down into the smaller business marketplace. Most smaller businesses cannot afford the rates for a .net developer (it's not so much the rate as the time to build an application). So, using MS Access is a good choice in the smaller business marketplace. Access is still a good 3 to 5 times more productive than many of the other tools in the marketplace. So a quote from a .net developer to develop something might be $12,000, and the same thing in Access might be $3,000. I mean, a small business cannot afford to pay you to write unit-testing code; that type of extra cost is just not going to happen on smaller-scale projects.
The other big issue you have to deal with is what kind of report-writing system you are going to build into the system. This is another reason why I like Access for smaller business applications: the report writer is really fantastic. Access reports have a whole bunch of abilities to bake in connections from forms and queries and to pass filters and parameters into those reports. Often the forms and queries that you have already spent time building can talk to reports with parameters and pass in values in a way that, again, really reduces the workload (development costs).
I think the number one issue that you'll have to address here, however, is what you're going to do for your web-based strategy. You absolutely have to have one. Even if you build the front-end part in Access, you might still want to use a free edition of SQL Server for the back end. There are several reasons for this, but one is that it then becomes easy to connect multiple stores over the internet.
Another advantage of putting your data in some type of server-based system is that you can then set up some type of web server for all the stores to use, and build a tiny little customized system that allows each store to have its products and listings online (they use YOUR web server, or one that you pay $15 per month for to host all of those customers). This web part could be an optional component that perhaps not all customers will want. It would work off the data they have to enter into the system anyway.
One great advantage of adopting these web-based systems is that not only do they allow these stores to serve their customers far better, but they also open the door for you to convert your software into a monthly fee-based system, or at least some part of it, such as the optional web hosting you offer.
When I converted some of my longtime applications from green-screen, mainframe-type software into Windows desktop applications, it opened up large markets for me. With remote desktop, downloadable software, and updates issued from a web site, the nuts-and-bolts part of delivering software is very easy now, especially for supporting customers in different cities whom you've never met face to face.
So, if you are still talking primarily single-user and one location, Access will reduce your development costs by a lot. It really depends on how complex and rich an application you are talking about. If the size and scope of the project are beyond one developer, then you are talking more about developer scaling (source code control, object development methodology, unit testing, the cost and time of setting up a server-based database system like SQL Server, etc.). There is certainly a tipping point here; once you go beyond that tipping point of cost, time, and complexity, I actually don't recommend Access. So this all comes down to the right horse for the right course.
Perhaps, at the end of the day, it really comes down to which application development system you are willing to invest the time to learn.
Look at Aubit4GL - that is, I believe, available on (or can be compiled on) Windows.
Yes, IDS is verging on overkill for a single-user system, but if SE doesn't provide all the features you need, or anticipate needing in the near future, it is a perfectly sensible choice. However, with a modicum of care, it can be set up to be (essentially) completely invisible to the user. And for a non-stressful application like this, the configuration is not complicated. You, as the supplier, would need to be fairly savvy about it. But there are features like silent install such that you could have your own installer run the IDS installer to get the software onto the customer's machine without extra ado. The total size of the system would go up - IDS is a lot bigger on disk than SE is (but you get a lot more functionality). There are also mechanisms to strip out the bigger chunks of code that you won't be using - in all probability. For example, you'd probably use ON-Tape for the backups; you would therefore omit ON-Bar and ISM from what you ship to customers.
IDS is used in embedded systems where there are no users and no managers working with the system. The hardware sits in the cupboard (closet) and works, communicating over the network.
It's good to see folks still getting value out of "old school" Informix tools. I was never adept at Perform, but the ACE report writer always suited me. We skipped Perform and went straight for FourGen, and I lament that I've never been as productive as I was with FourGen. It had its own kind of elegance, from its code generators to its funky, but actually quite powerful, standalone menu system.
I appreciate the modern UI dynamics, but, damn, is it hard to write applications today. Not just tools, but simply industry requirements et al (such as you may be experiencing in your domain). And the Web is just flat out murder.
I guess part of it is that since most "green screen" apps look the same, it's hard to make one that looks bad! With GUIs and the Web etc., you can't simply get away with a good field order and the labels lining up.
But, alas, such as it is, that is what we have.
I have not used it in, what now, 15 years, but you may also want to look at Alpha 5. It was a pretty powerful, but not overly complicated, database development package, and (apparently) still going strong.
I wouldn't be too afraid of IDS. It runs pretty simply. Out of the box, with zero or little tweaking, the DB works and is efficient, and it used to be pretty trivial to install. It was no SE, in that SE's access was tied to the application (using a library), whereas IDS is an independent server. But operationally it's really straightforward, especially for an app like what you're talking about. I appreciate that it might be overkill, but even today the resource requirements won't necessarily be insane. There's a lot of functionality, of course, and flexibility that you won't use. But frankly, beyond "flat file" dBase-style databases, pretty much ALL of the server-based SQL databases are very powerful, capable, and potentially complicated. But they don't have to be. They can still be used "simply" and easily (well, save for Oracle; Oracle can't do anything "simply").
As far as exploring other solutions, don't be too afraid of the "OOP" stuff, as most applications, while they leverage OOP libraries, aren't really OOP themselves (they can be, they just typically aren't; they simply don't need to be). The biggest issue with many of the OOP systems is that they're simply too finely structured, dealing with events at far too low a level. While many programs need access to that fine a level of control, most applications, particularly ones much like yours, do not. So the extra flexibility simply gets in the way or creates more boilerplate.
That said, you shouldn't be frightened away from them per se, citing lack of expertise. They can be picked up reasonably quickly. But I would certainly exhaust the more specialized tools (like Alpha 5, or Access, etc.) first to see if they don't offer what you want.
As for Visual FoxPro: it was and remains a peerless tool (despite flak from people who know little about it). It has a fast native database engine, built-in SQL, a powerful report designer, and so on. But you also have to consider that Microsoft support for it will be dropped in 2014, there will never be a 64-bit version, and so on. And the file-locking method it uses will be increasingly flaky on future versions of Windows, IMO.
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
I am going to be starting a new job soon, and the company has previously had hobbyist developers working on its applications, and as such has not had to worry about supplying equipment for the developers.
Having spoken to them, they seem to understand that I will need a reasonable system in order to ensure I am both as productive as possible and happy working there. I will be working both in their office and from home (about 50:50), so I need to pick a setup which allows me to work comfortably from both.
One option I have is a dedicated workstation with dual screens which I can use at the office. As I will be using SVN, I could then work on my own systems at home, as long as I make sure I check my code in every night.
Another option is for them to get me a new laptop (something like dual core, 4 GB RAM, 1920x1200) and an external monitor, so I can at least use 2 screens (even if they are not the same size).
Another option I am toying with right now, as I need to replace my old work laptop (I work freelance in addition to this job), is to finally make the move over to OS X and get myself a MacBook Pro. My thought here would be to see if they are willing to buy a 27" iMac which I can use to run some VMs along with other services (db, unit testing, etc.), and I can then use its monitor with the MacBook while at the office.
I could then take the work to and from the office and hook up the laptop to monitors at home and have a dedicated machine to run other intensive tasks.
I am hoping someone can help me decide which route would be best to try and recommend the company to go. In summary the options are:
A dedicated dual screen workstation
A dedicated work laptop and external monitor
A compromise with me supplying the laptop and them a desktop/monitor
In all 3 I would hope to still be able to edit and maintain code from home, with the code being in SVN. I think the main issue will be where email and documents live, so I can have them with me all the time... maybe solved using Google Apps or something.
Thanks for any advice any of you might be able to provide.
You're going to want to continue to have a personal laptop, so make sure that you are the one owning the laptop, no matter who pays for it. Given a choice, I'd rather own a laptop and use it part-time for business than use somebody else's laptop for my own purposes, commercial or personal, or juggle two laptops.
So, I'd ask for the best desktop development system I'd be likely to get, best being of course dependent on what you like, what you're doing, and company policy.
If you go for separate development machines at work and at home, you probably want a better system than trying to remember to check into Subversion at the end of each session. If you keep your version of the project on your laptop at all times, that would eliminate the problem. Other than that, if you could connect into your work machine from home, and your home machine from work, you could either use a distributed VCS (like Mercurial or Git) on your own machines or just log in to commit the stuff you forgot when you left last time.
Ideally, the monitors will have the same RGB pattern. Some monitors (e.g., Viewsonic 201b) have a "BGR" layout of the pixels on screen.
If you are using Windows with ClearType font rendering, it is important to have all monitors with the same pixel pattern -- otherwise the ClearType both looks funny, and causes a slowdown as you drag a window from one screen to the other.
I realize you might not use Windows, as you stated, but I thought I would throw this out to you. There could someday be similar issues with a non-Windows OS. Also, I'm not sure whether it matters for OS X at present; it might.
I have a MacBook Pro laptop and am very happy with it as a dev platform. It's a unix environment that has beautiful tools (I'm not looking at you, XCode) and is a pretty well-built machine. The big advantage over relying on source control is that, even though SCM will let you work in different places (especially a distributed SCM like Git or Mercurial), the time will come when you forget to push your changes; and if your workplace has any sort of security, going through firewalls, VPNs, etc. is just a pain in the butt. I think it's much better to be able to carry your one configured machine with you.
As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Okay, I will shortly be starting down the path of Windows Mobile development. I know nothing about the subject really, and I am looking for people with experience to let me know of any gotchas you may know of.
Right now I don't even have a brief of what is required, but the assumption is that the application will be very little more than a bunch of CRUD forms for updating data. The only other requirement knowledge I have is that the application will need to support offline storage when there is no signal available. This in turn will obviously require some kind of synchronization when the signal returns.
My initial thoughts are that the application will primarily be a front end to interact with a web service layer. I'm assuming that WCF will be an appropriate technology for building these services? I also thought that SQL Server CE would be a good route to go down with regards to the offline storage issues.
Any knowledge that you feel is useful within this domain would be appreciated. Advice, links, books - anything appreciated.
EDIT: It has been noted that there are two ways to go with off-line synchronization. To either use some form of message queuing or to use SQL synchronization tools. Could anyone offer a good comparison and introduction to these?
EDIT 2: After a little more digging I get the impression that there are basically 3 different approaches I can use here:
An embedded database to query against, then synchronize online when able
MSMQ along with .NET remoting
WCF with ExchangeWebServiceMailTransport bindings using Exchange Server.
Now, a nice few points have been raised on the first option, and I think I understand at some level the issues I would face. But I'd like to get a little more information regarding MSMQ implementations and using WCF's new bindings.
Here are a few words from my experience so far (about 9 months) of .NET Windows Mobile development.
Well, you are occasionally connected. (Or, more likely, occasionally disconnected.) You have to choose whether you are going to use messaging with queues (i.e. WCF/SOAP/XML or something like it) or database synchronisation. I chose the SQL synchronisation route, so I can't really comment on messaging. The SQL synchronisation route is not hassle-free!
If you go down the sync route with SQL Compact like me, you basically have two choices: SQL Server merge replication or the newer ADO.NET Synchronisation Services. If you choose the former, you need to be really careful with your DB design to ensure it can be easily partitioned between the mobile subscribers and the publisher. You really need to think about conflicts, and splitting tables that wouldn't normally be split in a normalised DB design is one way of doing that (see the sketch after this paragraph). You have to consider situations where a device goes offline for some time and the publisher DB (i.e. the main DB) and/or a subscriber alters the same data. What happens when the device comes back online? It might mean resolving conflicts even if you have partitioned things well. This is where I got burnt. But SQL merge replication can work well and reduces the amount of code you have to write.
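To make the table-splitting idea concrete, here is a hypothetical schema sketch (written against Python's sqlite3 purely for brevity; the real tables would live in SQL Server with merge replication filters). The point is that columns only one subscriber ever writes get split into a per-device table, so offline edits on two devices never collide on the same row:

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
    -- Shared, publisher-owned data: replicated read-mostly to every device.
    CREATE TABLE job (
        job_id   INTEGER PRIMARY KEY,
        customer TEXT,
        address  TEXT
    );

    -- Per-device working data, split out of the job table. Each subscriber
    -- replicates only rows matching its own device_id, so two devices can
    -- never update the same replicated row while offline.
    CREATE TABLE job_progress (
        job_id    INTEGER,
        device_id TEXT,
        status    TEXT,
        notes     TEXT,
        PRIMARY KEY (job_id, device_id)
    );
    """)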
Roll your own DAL. Don't attempt to use data readers etc. directly from UI code, and don't use typed datasets either. There may be third-party DALs that work with Windows Mobile (I know LLBLGen does; it might be worth a look), but LINQ to SQL is not supported, and anyway you need something lightweight. The chances are the DAL won't be too big, so roll it yourself.
If you are using .net you'll probably end up wanting some unimplemented platform features. I recommend using this inexpensive framework to give you what you're missing (especially as related to connectivity and power management) - http://www.opennetcf.com/Products/SmartDeviceFramework/tabid/65/Default.aspx
Windows Mobile devices partially switch off to save power when not in use. If you are doing a polling-type design you'll need to wake them up every x minutes. A normal .net Timer class won't do this; you'll need to use a platform feature, which is available through OpenNetCF (above). The timer class is called LargeIntervalTimer and is in the OpenNetCF.WindowsCE assembly/namespace (I think).
Good Luck!
SqlCE is only one of the options available for local data storage on a Windows Mobile device, and although it's an excellent database it has limitations. For one thing, SqlCE will not work (period) under encryption (in other words, if your user encrypts the location where your SDF file is, you will no longer be able to access the data).
The second (and most critical) weakness of SqlCE lies in the RDA/Merge Replication tools. SqlCE Merge Replication is not 100% reliable in situations where the network connection can drop during replication (obviously very common in Windows Mobile devices). If you enjoy trying to explain missing or corrupted data to your clients, go ahead and use SqlCE and merge replication.
Oracle Lite is a good alternative to SqlCE, although it too doesn't work properly under encryption. If encryption is a potential problem, you need to find a database engine that works under encryption (I don't know of one) or else write your own persistence component using XML or something.
Writing a WM application as a front end that primarily interacts with a web service in real time will only work in an always-connected environment. A better approach is to write your application as a front end that primarily interacts with local data (SqlCE, Oracle Lite, XML or whatever), and then create a separate Synchronization component that handles pushing and pulling data.
Again, SqlCE merge replication does this pushing and pulling beautifully and elegantly - it just doesn't work all the time. If you want a replication mechanism that works reliably, you'll have to write your own. Oracle Lite has something called a snapshot table that works very well for this purpose. A snapshot table in Olite tracks changes (like adds, updates and deletes) and allows you to query the changes separately and update the central database (through a web service) to match.
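If you do end up writing your own, the heart of the snapshot-table approach is small: log every local change, then replay the log against your web service when a connection appears. A rough sketch follows; the table layout, endpoint URL, and payload format are all hypothetical:

    import json
    import sqlite3
    import urllib.request

    db = sqlite3.connect("local.db")
    db.executescript("""
    CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, name TEXT, qty INTEGER);
    CREATE TABLE IF NOT EXISTS change_log (
        seq    INTEGER PRIMARY KEY AUTOINCREMENT,
        op     TEXT,     -- 'insert', 'update' or 'delete'
        row_id INTEGER,
        data   TEXT      -- JSON snapshot of the row; NULL for deletes
    );
    """)

    def record_change(op, row_id, data=None):
        # Every local write also appends an entry to the change log.
        db.execute("INSERT INTO change_log (op, row_id, data) VALUES (?, ?, ?)",
                   (op, row_id, json.dumps(data) if data is not None else None))
        db.commit()

    def push_changes(endpoint):
        # When a connection is available, replay the log against the server,
        # clearing each entry as it is accepted.
        rows = db.execute(
            "SELECT seq, op, row_id, data FROM change_log ORDER BY seq").fetchall()
        for seq, op, row_id, data in rows:
            body = json.dumps({"op": op, "id": row_id, "data": data}).encode()
            req = urllib.request.Request(endpoint, body,
                                         {"Content-Type": "application/json"})
            urllib.request.urlopen(req)   # raises on failure, halting the push
            db.execute("DELETE FROM change_log WHERE seq = ?", (seq,))
            db.commit()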
This thread I just posted on SO a few days ago has proven to be a great resource for me thus far.
Also the Windows Mobile MSDN WebCasts are a wealth of information on everything from just getting started up to advanced development.
I would suggest SQLite for local storage. From the last benchmark I ran, it was much better than SqlCe, and you don't have to do stupid things like retain an open connection for performance improvements.
Trade-offs being that the toolset is less rich and the integration with other MSSql products is nil. :(
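To show the no-ceremony style, here's local storage sketched with Python's sqlite3 (file name and schema are arbitrary; on a device you'd go through whatever SQLite wrapper your stack provides):

    import sqlite3
    import time

    # With SQLite, opening the database file per operation is cheap, so there
    # is no long-lived connection to babysit for performance.
    def log_reading(value):
        db = sqlite3.connect("device.db")
        with db:  # transaction scope: commits on success
            db.execute("CREATE TABLE IF NOT EXISTS readings (ts REAL, value REAL)")
            db.execute("INSERT INTO readings VALUES (?, ?)", (time.time(), value))
        db.close()

    def latest_reading():
        db = sqlite3.connect("device.db")
        row = db.execute(
            "SELECT value FROM readings ORDER BY ts DESC LIMIT 1").fetchone()
        db.close()
        return row

    log_reading(21.5)
    print(latest_reading())  # (21.5,)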
you might want to refer to this:
getting-started-with-windows-mobile-development
You shouldn't be intimidated by Windows Mobile development. It isn't much different from desktop development. I strongly recommend that you use the .NET Compact Framework for development and not C++/MFC.
Some useful links:
The Mobile section at The Code Project. You will find a lot of articles there; a little digging is needed to find the appropriate one.
The Smart Device Framework from OpenNetCF offers valuable extensions to the Compact Framework.
When you install the Mobile SDK, you will find under the Community folder links for the Windows Mobile and CF framework blogs. These are also valuable resources.
Regarding your application, you are right about the WCF and the SQL Server CE. These are the proper ways for handling communication and storage.
Some hints for people coming from a desktop world:
You need to have some sort of power management. The device may automatically go into a suspend state. Also, you shouldn't consume power when you don't have to.
Network connectivity is a difficult issue. You can register notifications for when a specific network (Wi-Fi, GPRS) becomes available or unavailable. You can also set the preferred means of communication.
Make the UI as simple as possible. The user uses his thumb and/or a pen and he is probably on the move.
Test on a real device as early as possible.
"24 Hours of Windows Mobile Application Development" from the Windows Mobile Team Blog has some good resources
If you can, try to start from the user use cases and work back to the code, rather than vice versa. It's really easy to spend a lot more time working on the tools than working on the business problem. And thinking through user requirements will help you consider alternate strategies, because a lot of the patterns you know from normal .NET don't apply.
I've done lots of intermittent application development of exactly the type you are describing, and an on-board database works just fine. The MSMQ/WCF stuff just adds conceptual overhead without adding much value. You need a logical datastore locally anyway, and replication at this level is a simple concept that you want to keep simple, so the audit trail is easily monitored and debugged. MSMQ and WCF tend to hide things in unfamiliar places.
I upvoted the SQLite suggestion, BTW. MS doesn't have their persistence story stabilized yet for CE.
For the database replication bit I highly recommend Sybase Ultralite. In terms of flexibility and performance it knocks the socks off SQL CE
I had to do this once. Weird setup with Macs for development, and we were all Java programmers. And a short deadline. PowerPC macs too, so no chance to install Windows for Visual Studio development, never mind that the money for this would never have appeared.
We ended up writing applications using Java, running on the IBM J9 virtual machine, with SWT for a user interface. Entirely free development stack. Easy to deploy. Code ran on any platform we desired, not just PocketPC/WinMob.
Most of the work was on the server side anyway: the database, the web service server, the logic, the reporting engine. The client side wasn't totally simple, however. It would get the form templates from the server (because they changed frequently) and the site details (multi-site deployment), generate a UI from the form template (using some SWT GUI components that are wonderful for PocketPC development, like the ExpandBar), gather data with a point-and-click interface (minimising keyboard entry where possible), and then submit it back to the server.
For offline storage we used XML files on the device itself. More than enough for our needs, but yours may differ. Maybe consider SQLite?
There are a couple of links you can check out to start with:
http://developer.windowsmobile.com
http://msdn.microsoft.com/en-us/windowsmobile/default.aspx
If you have a sticking point while developing, there are also Windows Mobile dedicated chats on MSDN that you can attend and ask your questions. The calendar hasn't been updated yet, but the next ones should be in January. You can find the schedule here: http://msdn.microsoft.com/en-us/chats/default.aspx
I am going to add an additional question to this post, as it's been active enough and hopefully will be helpful to others as well as me. OK, so after playing around I now realize that standard class libraries cannot be included in Windows Mobile applications.
Now the overwhelming advice here seems to be use an embedded database, though I now do have use cases and it appears that I will need to have document synchronization as well as relational data. With this in mind service layer interaction seems inevitable. So my question is how would I share common domain objects and interfaces between the layers?
"Document synchronization" - does that mean bidirectional? Or cumulative write-only? I can think of mobile architectures that would mainly collect and submit transactions for a shared document - if that's your requirement, then we should discuss offline - it's a long (and interesting) conversation.
Owen, you can share code from the Compact Framework to the desktop; it's only desktop to Compact Framework that has compatibility issues, if you use certain objects that are not supported by the CF.
While a desktop lib doesn't work on the CF, a CF lib WILL work on the desktop; you can also run CF .exes on the desktop!
Just create a CF library as the project that defines your base objects/interfaces etc.
This book should be essential reading for all Windows Mobile developers: http://www.microsoft.com/learning/en/us/books/10294.aspx
For developing Windows Mobile applications you must have the basic tools like Silverlight, Visual Studio, the Windows Phone emulator, and SQLite as your database storage.
I'm building a web app against a database where a small number of records (about 5000) are active at the same time. Each active working record probably experiences 50-300 changes by 30 users over a 4 hour period ... which is thousands of changes per minute.
Because our testing environment is so static, testing is not realistic, and some issues do not arise until we hit the production database.
I had the idea to run Profiler, collect the DML statements, then replay them on the test server while debugging the app, assuming I can replay them in the same time intervals as the original run. But even this wouldn't be a valid test, since the tester's changes could corrupt future DML statements being replayed.
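The replay loop itself seems simple enough; here's a rough sketch of what I have in mind, with run_sql standing in for the real database call:

    import time

    # Replay captured DML statements with their original spacing. Each trace
    # entry is (seconds offset from the start of the trace, statement).
    def replay(trace, run_sql):
        started = time.monotonic()
        for offset, statement in sorted(trace):
            delay = offset - (time.monotonic() - started)
            if delay > 0:
                time.sleep(delay)  # wait until the original interval has passed
            run_sql(statement)

    trace = [
        (0.0, "UPDATE ticket SET status = 'active' WHERE id = 1"),
        (0.4, "INSERT INTO audit (ticket_id, note) VALUES (1, 'opened')"),
    ]
    replay(trace, run_sql=print)  # swap print for a real DB execute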
Does anybody know how to simulate real time database changes for realistic testing?
Thanks.
BTW, our problems are not concurrency issues.
Maybe this Selenium-based service is what you need: browsermob
A few people recommended it.
And yes, this is not an ad :)
There are a few commercial packages that do this. I work for Quest Software, makers of one of the packages, but I'm going to list three because I've worked with all of 'em before I came to work for Quest:
Microsoft Visual Studio Test Edition - it has load-testing tools added on. It lets you design tests inside Visual Studio, like simulating browsers hitting your web app. Recording the initial macros is kind of a pain, but once you've done it, it's easy to replay. It also has agents that you can deploy across multiple desktops to drive more load. For example, we installed it on several developers' desktops, and when we needed to do load testing after hours, we could throw a ton of computing power at the servers. The downside is that the setup and ongoing maintenance are kinda painful.
HP Quality Center (used to be Mercury Test Director and some other software) - also has load testing tools, but it's designed from the ground up for testers. If your testers don't have Visual Studio experience, this is an easier choice.
Quest Benchmark Factory - this tool focuses exclusively on the database server, not the web and app servers. It captures load on your production server and then can replay it on your dev/test servers, or it can generate synthetic transactions too. There's a freeware version you can use to get started.
If you know and love Visual Studio, and if you want to test your web servers and app servers, then go with Visual Studio Test Edition. If you just want to focus on the database, then go with Benchmark Factory.
Perhaps use something along the lines of a database stress-testing tool like the mysqlslap load emulator. Here's a link explaining use cases and specific examples.
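Here's a minimal sketch of driving it from a Python test script; the flags shown are standard mysqlslap options, while the host, credentials, and query file are placeholders:

    import subprocess

    # Replay a file of captured DML against the test server with ~30
    # simultaneous users, repeated 5 times to smooth out noise.
    subprocess.run([
        "mysqlslap",
        "--host=test-db.example.com",
        "--user=loadtest",
        "--password=secret",
        "--concurrency=30",
        "--iterations=5",
        "--query=captured_dml.sql",
        "--delimiter=;",
    ], check=True)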
OK, not that kind of hostile. I'm curious to hear how people deal with developing on big corporate networks that mandate all kinds of developer-unfriendly services and policies on desktops (think ProQuota, over-zealous virus scanners, no local admin, no access to SO). I've previously seen virtual LANs used effectively, or completely separated parallel networks, but these aren't always practical. Any other tips?
The most important thing (if possible) is to recruit support from your boss.
Unless he's a PHB, he will often understand the impact of these restrictions on you, your team, and indirectly on his success. If the requests are reasonable, he can provide the buffer if you do go against IT. In addition, if the entire team or other developers seek the same policies, this "group bargaining power" can be used to create special policies.
Generally speaking, large corporations are over-zealous about legal issues and information security. However, IT departments generally hate dealing with numerous requests for support from the same person. Sometimes, if you show a clear harm to productivity (e.g., you use a lot of temp files and the antivirus hits them), or show that your program has to be installable from administrator mode, they will reach a compromise. You may have to sign something stating you would not use administrative access on your machine to install illegal software, but you'd still get admin.
In the few cases I have gone for job interviews (I'm mostly in academia but worked some in the industry), one of my greatest concerns was the amount of control I had about my computing environment, from hardware, to software, to administrative rights. If I cannot be trusted as a developer to manage my own windows box, I don't feel I should be trusted with a mission-critical system.
I haven't tried this myself, but I once saw someone say that the central IT gave in and let him administer his own workstation, after he complied with the policies by submitting to them a change request form with a list of the first 300 things he wanted changing on his workstation.
Anything that interferes with you doing your job is good to bring up in meetings.
Ex:
This virus scanner runs 4 times a day while I am at work. During each run my compile times take 5 times as long, and the use of my other development tools slows to a crawl.
The web filters are overzealous. I have attempted to access sites x, y, and z for extra development information and have been unable to. The time it takes to find a good resource is doubled because of this.
And so on.
Work within the (hostile) rules and give up, quit and find somewhere more enlightened, or try to change the organization; your choice.
If you decide to try to change things, don't go against IT alone; that will just make you the "troublemaker" and you will never get anywhere. Try to get support from your boss and other developers. If you can't get support, then you may be better off looking for a new job.
I would explain your issues to your boss and/or sysadmin. If they are receptive and agree it's a good idea to let you have control over your workstation(s), then problem solved; if not, I would walk from the project/job before your probationary period is over.
I was in a similar situation once at a large government corporation, and it turned out that management not being willing to unlock developers' boxes was just the tip of the iceberg of a massive bureaucracy. The project ended up being a huge failure, and by the time I left, half of the IT department (not just the project team) had quit.
Just my 2 cents
Yeah. Leave. If your organization is not willing to give you the normal tools that any normal professional programmer should be able to use, then it's time to up your networking skills and update your resume.
Bringing your own laptop with the necessary tools is always a good way to overcome these man-made hurdles
Bring your own laptop, but DON'T connect it to the network (and make it obvious that you do not intend to).
Copy stuff e.g. Visio diagrams over via USB drive.
If they don't allow USB, you can access the internet from outside and email the files. Using OWA via a browser sometimes gives you more rights to send files.
Sounds like they're doing you a favor. Your code is guaranteed to run as a normal user, doesn't try to write to program files or other sensitive directories, is aware of what issues virus scanners bring to the table, and can handle other issues you wouldn't have normally encountered until installing your apps on a client machine.
As for no access to SO, I'd quit.
Our workplace required a full virus scan every day, so in the morning, when I hooked my laptop up, it was a 2 hour wait before I could do work.
I finally found a solution. MSVC 6 has a built-in debugger. I went into Task Manager, picked the McAfee scanner process, and told it to debug. This fired up MSVC 6, and the scanner froze at a breakpoint. I hit reset, and the problem was gone. About 6 months later they removed the policy and all was good.