How to get your network support team behind ClickOnce?

I'm trying to make the case for ClickOnce and smart client development, but my network support team wants to stick with web development for everything.
What is the best way to convince them that click-once and smart client development have a place in the business?

We use ClickOnce where I work. Compared to a web release, I would base the case around the need to provide users with a rich client app; otherwise it may genuinely be better to use web applications.
For releasing a rich client app, ClickOnce is fantastic: you can set it up to enforce updates on startup, thereby enforcing a single version throughout the network. You can make the case that ClickOnce gives you the same benefit of a single deployment point that web deployment has.
Personally I've found ClickOnce to be unbelievably useful. If you're developing rich client .NET apps (on Windows, though let's face it, the vast majority of real .NET development is on Windows) and want to deploy them across a network, nothing else compares.
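As a concrete sketch of the enforced-update point: that behavior is controlled by the deployment manifest (the .application file). A fragment like the following (element names from the ClickOnce deployment manifest schema; attributes illustrative) makes the client check for and apply updates before every launch:

```xml
<!-- ClickOnce deployment manifest fragment (sketch) -->
<deployment install="true" mapFileExtensions="true">
  <subscription>
    <update>
      <!-- check for a new version before the app starts -->
      <beforeApplicationStartup />
    </update>
  </subscription>
</deployment>
```

In Visual Studio the same setting is the "Before the application starts" update option under the project's Publish settings.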

Here are a couple of ideas that may help:
Long-running processes: they are not ASP.NET's best friend.
Scaling: using client-side processing, as compared to buying bigger or more servers, reduces cost.

They have a place in the Windows environment but not in any other environment, so if you intend to write applications for external clients, you're probably best sticking with web-based development.
I heard this "Write Once, Run Many" line from Microsoft before, when ASP.NET 1.1 was released; it never happened in practice.

#Mark
scaling, using client side processing as compared to bigger or more servers reduces cost etc.
I'm not sure I would entirely agree with this. It would seem to cost less to buy one powerful server and thousands of "dumb terminals" than an averagely powerful server plus thousands of powerful desktop computers.

#GateKiller
When I speak of scaling, I was talking about the cost of buying more servers, not clients.
Most workstations in an organization barely use 50% of their computing power throughout the day. If I were to use a ClickOnce-deployed application, I would be using the grunt of existing workstations, therefore adding no further cost to the organization.

Related

How to migrate thick client to the cloud

Current situation:
Thick client written in .NET
We have a very old computation software that we can't maintain anymore.
We don't really know how the kernel is working (people left, 15 years old code).
We have the code and some technical experts.
We want to migrate it to the cloud behind a public API in order to serve some SPA application or even thick client applications.
What is your recommendation for this problem?
We have thought about:
Lift-n-Shift
Lift-Adjust-n-Shift
Rearchitecting or redeveloping from the ground
Repurchasing a new cloud solution (but there doesn't seem to be one)
All the options you mention are possible, but which one to choose really depends on your business needs, time, and budget.
Lift and shift (VMs)
This is usually the quickest approach: you can simply use VMs to migrate to the cloud. But managing VMs remains your responsibility and an ongoing commitment.
Lift, adjust, and shift (containers)
In my opinion, you get the benefits of the cloud when you start using PaaS services. You may consider containerizing (Docker) your application, migrating it to the cloud, and starting to use PaaS services. Your DevOps cycle will be quicker and scaling is easy. Since you are no longer managing VMs, it's less hassle.
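As an illustration of the container route, a minimal multi-stage Dockerfile is sketched below. It assumes the old kernel can be rebuilt on modern .NET and wrapped in an ASP.NET Core Web API; `ComputationApi` is a hypothetical project name, not from the original question:

```dockerfile
# Sketch: multi-stage build - compile with the SDK image, run on the slimmer runtime image
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish ComputationApi.csproj -c Release -o /app

FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "ComputationApi.dll"]
```

The resulting image can then run on any container PaaS (Azure App Service, AKS, ECS, etc.) without you managing the VM underneath.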
Rearchitect and redevelop
This could be costly and time-consuming, and really depends on whether your business requirements allow it. If you plan to expand the existing code base then you may consider this; otherwise it could be a big undertaking when you could simply migrate your services using the approaches mentioned above.

Advice for Designing a Web API Infrastructure

I wonder if anyone could share their thoughts on my question regarding web-based APIs (we use Microsoft stacks).
We are currently in the process of building an infrastructure to host web apis across our business.
As an organisation we have separate business areas that provide services to our customers. These individual areas of the business generally have their own best-of-breed IT system. Offering APIs is something we've long thought about, and we have now started the design process.
The APIs we aim to offer shall be web based (.NET/webAPI/WCF etc.) and will largely (99%) be consumed within our organisation but some may be exposed externally in the future should the requirement arise (new mobile app may need to use the services etc.)
I'd love to hear your thoughts and experiences around how you architected your farms. I understand it's quite an open question without knowing the crux of our requirements, but it's general advice/experiences I'd like to hear.
In particular, we are trying to decide whether we should design the infrastructure by:
1) Providing each area of the business with their own API server whereby we shall deploy each web API within a new application inside IIS.
or
2) Setting up a load-balanced Web API farm whereby we have, say, 2-3 IIS web servers, all built the same, hosting the same web APIs, with the business areas effectively sharing the same servers. Each area would have a segregated site within IIS, and new APIs would be set up as new applications inside their respective web sites.
I don't foresee us having thousands of APIs, but some will be business-critical, so I'm certainly bearing resilience in mind, which is why, as much as I like each business area having their own API server, I'm being swayed towards the option of a load-balanced farm that the whole business shares.
Anyone have any thoughts, experiences etc.?
Thanks!
That's a very interesting question, and I'd love to hear what others think. I'm no big expert, but here are my two cents.
It seems to me that the answer should be somewhere in between the two options you specified. Specifically, each critical business area should get its own resilient, load-balanced farm, while less critical services can use single-machine deployments. A critical business area may not mean only one API; it can actually be a group of APIs with high cohesion among themselves.
Using the option 1 environment to its full extent can be hard to maintain, while using option 2 fully can be inefficient in terms of redeployment if (or better yet, when) business logic changes. Furthermore, I think it will be possible for greedy APIs to hog resources in peak traffic, making other services temporarily less performant (unless you have some sort of dynamic scaling mechanism).

Setting up a web developer lab for learning purposes

I'm not a developer by profession. Therefore, I'm not exposed to real world technical problems that face professional developers. I read/heard about web farms, integration between different systems, load balancing ... etc.
Therefore, I was wondering if there are ways for the individual developer to create an environment that simulates real world situations with minimal number of machines like:
web farms & caching
simulating many users accessing your website (stress tests?)
Performance
load balancing
anything you think I should consider.
By the way, I have a server machine and 1 PC. and I don't mind investing in tools and software.
PS. I'm using Microsoft technologies for development but I hope this is not a limiting factor.
Thanks
Since I am new, SO will not let me post more than one link, so I compiled a list of links for you at a pastebin here.
There are a lot of tools for this. I like
http://www.acme.com/software/http_load/
and http://curl-loader.sourceforge.net/
They both can simulate many queries to your server. Run them from another machine.
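The same idea can also be sketched with nothing but the standard library: start a throwaway local HTTP server and fire concurrent requests at it (a toy version of what http_load and curl-loader do at much larger scale):

```python
# Minimal load-generator sketch (stdlib only): spin up a local HTTP server
# and hammer it with a pool of concurrent clients, counting successes.
import http.server
import threading
import urllib.request
from concurrent.futures import ThreadPoolExecutor

class QuietHandler(http.server.SimpleHTTPRequestHandler):
    def log_message(self, *args):  # silence per-request logging
        pass

# Port 0 lets the OS pick a free port; the server runs on a daemon thread.
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), QuietHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"

def hit(_):
    with urllib.request.urlopen(url) as resp:
        return resp.status

# 100 requests, 20 at a time - the "many users" part of a stress test.
with ThreadPoolExecutor(max_workers=20) as pool:
    statuses = list(pool.map(hit, range(100)))

server.shutdown()
print(sum(1 for s in statuses if s == 200), "requests succeeded")
```

For real tests you would point the client loop at your server machine from the PC (or vice versa), and measure latency percentiles rather than just success counts.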

How important is platform independence?

A lot of software frameworks, languages, and platforms claim platform independence and boast of it as a selling feature. However, I have failed to understand how this could be such an important feature. For example, Java is said to be platform independent - but why should I care when I know that my webapp is going to run on only one platform? Is the overhead of making an application platform independent really worthwhile?
For webapps it mostly isn't an issue, as they are by definition almost "platform independent": users of the application mostly aren't tied to any particular platform.
For desktop apps it is a question of your potential client base. If you think you will benefit from targeting multiple platforms, then it's worth making your application platform independent; otherwise, better to stay away from it :)
If you know your app is going to run on only one platform you shouldn't care - you should evaluate the framework using the same criteria as every other framework on your target platform.
This of course depends on the application in question. If you know that the application is going to run on only one platform, then there's obviously no reason to require it to be platform independent. On the other hand, if you are building an application that is supposed to be usable for, say, next 15 years, how can you know that the platform you choose will even exist then? It's hard to predict the future, and therefore making your app platform independent gives you one headache less.
Platform independence doesn't necessarily imply overhead. Rather, it implies good programming practices; if you make your app orthogonal to the platform, then changing the platform is a breeze.
Sometimes it's impossible to avoid platform-dependent function calls, for example because you have to communicate directly with some hardware device at a low level. Even then it's possible to make the app "almost platform independent". Instead of scattering the platform-dependent things everywhere, wrap them all strictly into one class/package/whatever. Then you need to change just that one unit to port your app to another platform.
We develop a Java B2B application that is Unix-only, but works on all Unix flavors (where Java is available).
The advantage of having a multiplatform application is that our customers sometimes have knowledge of Linux, sometimes Solaris, sometimes FreeBSD, ...
This way we can adapt to the customer and not force them to use one specific platform.
For example, Java is said to be platform independent - but why should I care when I know that my webapp is going to run on only one platform?
The fact that it's not advantageous to you doesn't mean that it's of no benefit. I'm sure many Java developers enjoy the fact that they don't have to recompile their application for each platform (hence it's a selling point). A web app that makes exclusive use of ActiveX for certain components will face more roadblocks if, in the future, other platforms become of interest.
Is the overhead of making an application platform independent really worthwhile?
Depends on what you mean by overhead. If it's a good framework, there might be minimal overhead. Of course if other platforms are of no interest to you, then yes, it's an overhead. However, the fact is that unlike a decade or so ago, more platforms are starting to matter these days (at least for web and desktop application). So, the overhead could be worth it in the long run.
If you're developing only server-side, you probably don't need to worry about it at the moment. However, you might be extremely happy down the road to find that you can run your application seamlessly on another OS if the need arises (for instance, if a client asks for it, or if you have specific performance/functionality needs).
For a client-side application, platform-independence means a lot less work to be able to ship for Mac and Linux, and yes, that might be worth it.
You almost answer your own question. Platform independence is only important if you want your application to work on multiple platforms. If you don't, then that's one less thing to worry about.
Take OpenOffice or Firefox for example. You can use those on every major platform. That's important to them because they want everyone to be able to use them and have the same experience no matter what their OS is.
If your project is smaller and doesn't really need to be on every platform, then don't worry about it. It's really a judgment call for each program you develop.
but why should I care when I know that
my webapp is going to run on only one
platform?
You shouldn't. If you know that you are going to run on only one platform, platform independence is not very relevant to you.
But you are not equal to all the population of potential users. Other people will want to target pc's in multiple platforms.
It's like having a version in Chinese. If you're going to sell only in English-speaking countries, it's irrelevant. If you're trying to sell in China, it might help.
Theoretically, platform independence helps you avoid the so-called "vendor lock" while at the same time giving you a broader reach and potentially more customers.
In practical terms, you should evaluate your target audience and do good business calculation on whether the profit potential of being able to deliver to multiple platforms outweighs the cost of adopting a platform independent framework. After all, the framework might claim to work the same on all platforms, but you will have to verify that claim. Not to mention that no framework solves all problems for delivering an application, like deployment, configuration, centralized management, updating/upgrading and so on.
Of course, if your product is server-based and the end user is going to consume it through an HTTP agent, you don't have to worry about it, for the most part, as long as you stay in the [relatively] safe realm of HTML, JavaScript, and Flash.
Platform independence is a desirable feature for software vendors because they invest a large amount of money developing a modern, sophisticated application so they don't want to artificially cut out any market segment. They want to sell their baby to as many organizations as possible.
Software vendors try to convince IT departments that platform independence is a good thing because it avoids vendor lock-in. I'm sure that is important, in theory; however, in practice most IT departments self-impose vendor lock-in with their attitudes, usually concerning a particular technology vendor of high prominence.
"Platform independence" can mean different things to different people. For example, is "Windows XP" a different platform than "XP 64", or Vista, or Windows 7? It depends upon whether you write application software or drivers, and on what pre-installed libraries and services you depend on.
In the most general sense, no application can be truly platform-independent - you won't expect to run a web application on the embedded Linux in your toaster, or on a 16-MB Windows 3.11 machine.
But software frameworks that have platform independence as an architectural target are generally better prepared when your platform changes, and in any long-lived project, it will change, if only because hardware will be replaced every 3-5 years, and new hardware often comes with new OS versions.
You always pay for Flexibility.
Always.
Deciding if the cost is worth it (the pay offs can be very high) is entirely dependent on the needs of the individual/company at hand but there is always a cost. Many of these are implicitly assumed, for example:
Most people code to a filesystem-agnostic[1] API rather than one assuming a particular implementation, and this choice is correct so often as to be a reasonable default in the absence of any particular requirement in that area.
Nonetheless, it is worth revisiting your core assumptions every so often, simply to know what they are.
[1] at least to the level of saying it's a tree with path separators '/' as opposed to talking ext3, NTFS, ReiserFS, etc...
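To make the filesystem-agnostic point concrete, here is a small sketch using Python's `pathlib`, one example of an API that hides the separator and the underlying filesystem (ext3, NTFS, ReiserFS, ...) behind one abstraction:

```python
# Sketch: code to the path abstraction, not to a particular separator.
from pathlib import Path, PurePosixPath, PureWindowsPath

# The same logical path, expressed portably - no '/' or '\' hardcoded:
p = Path("data") / "reports" / "2024.csv"
print(p.name)    # the final component, identical on every platform
print(p.suffix)  # the extension, identical on every platform

# The separator difference is an implementation detail the API hides:
print(str(PurePosixPath("data", "reports")))    # data/reports
print(str(PureWindowsPath("data", "reports")))  # data\reports
```

Code written this way inherits portability for free, which is exactly the implicit flexibility-for-cost trade described above: you pay a thin abstraction layer, and in return the platform can change under you.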
For a web application that only you are going to use, the only point of being platform-independent is that it makes it easier on you if you change servers down the line.
Of course, languages like Java are used for a lot more than web applications - people write standalone(-ish) desktop programs in them as well, and for those it's a lot more useful to be platform independent. Sun can do the work of making sure Java runs on a whole bunch of different computers, and every Java application developer shares the benefits of that work for free, basically. It's especially beneficial to developers of mobile phone applications (not the iPhone or Android, but good old basic cell phones): writing different code for every different phone out there would be a nightmare. The fact that many phones include a JRE to run applications makes the developers' jobs easier.
One field where cross-platform is an issue even for the desktop applications is software for the scientific community. From my experience, the desktops in the academy are much more heterogeneous than the ones you see at home, offices etc.
Platform independence is not much of an issue when you target a certain platform but it is when you write an application. There are libraries and frameworks out there which solve about any problem you might encounter. Only you can't use them unless they have been written for your target platform.
Which is why it is usually a good thing for a library or framework to be as platform independent as possible because every developer on the planet is a possible client. In the next step, it makes it more simple for application developers to write code which runs on any platform. In the last years, we have seen the user numbers of Mac and Linux grow steadily. So if you can sell to them for little additional cost, why not?

WCF in the enterprise, any pointers from your experience?

Looking to hear from people who are using WCF in an enterprise environment.
What were the major hurdles with the roll out?
Performance issues?
Any and all tips appreciated!
Please provide some general statistics and server configs if you can!
WCF can be configuration hell. Be sure to familiarize yourself with its diagnostics and SvcTraceViewer, lest you get maddeningly cryptic, useless exceptions. And watch out for the generated client's broken implementation of the dispose pattern.
I was recently hired by a company that previously handled their client/server communication with traditional ASP.NET web services, passing DataSets back and forth.
I rewrote the core so now there is a Net.Tcp "connected" client... and everything is done through there. It was a week's worth of "in-production discoveries"... but well worth it.
The pain points we had to find out late in the game was:
1) The default throttling blocked the 11th user onward (it defaults to allowing only 10).
2) The default "maxBufferSize" was set to 65k, so the first bitmap that needed to be downloaded crashed the server :)
3) Other default configurations (max concurrent connections, max concurrent calls, etc.).
All in all, it was absolutely worth it... the app is a lot faster just by changing the infrastructure, and now that we have "connected" users... the server can send messages down to the clients.
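For reference, defaults like the throttling and buffer limits above are raised in the service's app.config. A sketch using the standard netTcpBinding and serviceThrottling elements (the numeric values are illustrative, not recommendations):

```xml
<system.serviceModel>
  <bindings>
    <netTcpBinding>
      <!-- raise the 64 KB default message/buffer limits -->
      <binding name="largeMessages"
               maxBufferSize="2097152"
               maxReceivedMessageSize="2097152" />
    </netTcpBinding>
  </bindings>
  <behaviors>
    <serviceBehaviors>
      <behavior>
        <!-- raise the default concurrency throttles -->
        <serviceThrottling maxConcurrentSessions="200"
                           maxConcurrentCalls="64"
                           maxConcurrentInstances="200" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>
```

Tuning these per deployment (rather than shipping one-size-fits-all numbers) is part of why WCF configs rarely transfer between projects.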
Other beautiful gains is that, since we know 100% who is connected, we can actually enforce our licensing policy at the application level. Before now (and before I was hired) my company had to simply log, and then at the end of the month bill the clients extra for connecting too many times.
As already stated, it's a configuration nightmare and exceptions can be cryptic. You can enable tracing and use the trace log viewer to troubleshoot a problem, but it's definitely a shift of gears to troubleshoot a WCF service, especially once you've deployed it and you are experiencing problems before your code even executes.
For communication between components within my organization I ended up using [NetDataContract] on my services and proxies, which is recommended against (you can't integrate with platforms outside of .NET, and to integrate you need the assembly that has the contracts), though I found the performance to be stellar and my overall development time reduced by using it. For us it was the right solution.
WCF is definitely great for enterprise stuff as it is designed with scalability, extensibility, security, etc... in mind.
As maxidad said, it can be very hard though, as exceptions often tell you nearly nothing. If you use security (obviously necessary for enterprise scenarios) you have to deal with certificates, meaningless MessageSecurityExceptions, and so on.
Dealing with WCF services is definitely harder than with old ASMX services, but it's worth the effort once you're in.
Supplying server configs will not be useful to you, as they have to fit your scenario. Using the right bindings is very important, as are security and concurrency. There is no single way to go when using WCF; just think about your requirements. Do you need callbacks? Who are your users? What kind of security do you need?
However, WCF will definitely be the right technology for enterprise-scale applications.