A while back, smart clients were touted as the solution to "occasionally connected" usage environments, and toolkits like Google Gears sprouted for the same reason.
It looks to me like constant, reliable Internet access is becoming more and more pervasive (even in places such as commercial airplanes), so my question to the community is this: How relevant are solutions with offline support going forward?
I'm approaching this from the standpoint of a data-intensive enterprise application, such as CRM.
Over the last 3 years I've built 2 separate occasionally connected smart clients.
I've found that adding 'occasionally connected' support multiplies an application's complexity (and development time) by a factor of 3 or 4, so it is a very expensive feature to add.
But there are solid business cases for these apps, as I'm sure there are for many systems. One was for engineers on the road who often visit client sites where (for whatever reason, security sometimes being one) their wireless connection does not work. The user still wants to keep using the system just as if they were connected, and then have it synchronize itself effortlessly (on their part) once a connection becomes available.
The second app will either be used on a LAN or will have no connection at all, until the user returns 'to the office'.
From a personal perspective, I love the idea that with or without an active connection I can continue to do my work; even if the connection drops out halfway through an activity, everything still works and I won't lose any of my data.
Achieving this seamless connected -> disconnected -> connected scenario takes a lot of work and testing, so there must be a very strong business case.
And finally, I think we will never be able to assume that there will always be an internet connection. Whether it be a hardware or service provider failure or some active security blocking connections, at some point in time your users will be in disconnected mode.
I think it depends on the kind of application. For some applications internet access is more relevant than for others, but for general productivity apps I think an offline scenario will stay relevant. Working as a consultant in the software industry, internet access is everywhere, but not every client allows me to connect my laptop to their network.
On the other hand, with 3G and mobile data access becoming affordable, maybe the future will bring us internet anywhere.
Well, reliable internet access is not as widespread as you might think, globally speaking. Even locally, WiFi isn't all that reliable, especially if you are moving from place to place. Building for an occasionally connected scenario gives a better user experience; I don't think it's always required, but it is quite nice. :)
I'd like to implement a simple video chat system for students to tutor each other. I'm a one man show, and would like a system I can run in a cost effective way starting with 10 users, and hopefully scale up as needed.
WebRTC seems like a great, low-latency, cheap option for building this feature. However, if clients are communicating directly, they must know each other's public IP. Is this a significant privacy or security issue?
What is the worst case scenario of somebody getting my IP address? Wouldn't any malicious actor have to get through my ISP to get my specific location?
Thanks!
If you host it yourself, WebRTC can be extremely cost-effective. I've been running the SFU at galene.org (disclaimer: I'm the main developer), which is used for multiple lectures with up to a hundred students. Even though this is a full-fledged SFU (and not a mere TURN server), hosting amounts to just over €6/month.
If your tutoring sessions involve just two or three people, then peer-to-peer WebRTC might be enough, but even then a TURN server will be required, especially if some of your users are on university networks. For larger groups, you will need to push your traffic through an SFU.
If you do peer-to-peer WebRTC, then any user can learn the IP of any user they are communicating with; this is most probably not an issue, since the IP addresses are most probably already being disclosed elsewhere (e.g. in mail headers). If you go through an SFU, then the IP addresses are not deliberately disclosed, but they might still leak; for example, the SFU implementation mentioned above (Galene) discloses IP addresses when a user initiates a file transfer, since file transfers happen directly between clients, in a peer-to-peer fashion. (It may be possible to avoid this disclosure by setting the iceTransportPolicy field to 'relay' in the RTCPeerConnection constructor, but I haven't tested how effective that is.)
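For reference, forcing relay-only candidates is a one-line setting on the client side. A minimal sketch, assuming you already run a TURN server (the URL and credentials here are placeholders):

```typescript
// Relay-only WebRTC: the ICE agent will only offer TURN-relayed candidates,
// so peers never see each other's real addresses.
const pc = new RTCPeerConnection({
  iceServers: [{
    urls: 'turn:turn.example.org:3478', // placeholder TURN server
    username: 'demo-user',              // placeholder credentials
    credential: 'demo-secret',
  }],
  iceTransportPolicy: 'relay',
});
```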
WebRTC doesn't have to be P2P. You could run an SFU: each user uploads their video to your server, and the server distributes it via WebRTC. Then the users never learn each other's IPs.
I don't have any exact numbers, but it isn't expensive either; your biggest expense will probably be bandwidth. Lots of open-source SFUs exist, and this is a good list to get started.
What considerations are needed when creating a web app that is intended to be used in an industrial plant setting for a company? My specific use case is an industrial facility with several different production plants that would each have its own device for the application interface.
How do companies enforce the usage of such apps on a monitor/tablet? For example, could I prevent them from using other stuff on the tablet?
Importantly, how would security work? They'd share a device, and there may be multiple operators using the app in a given shift. Would they all use the same authentication session? (This is not preferable, as I'd like to uniquely identify the active user.) Obviously I could use standard usernames/passwords with token-based sessions that expire; however, this leaves a lot of potential for account hijacking. Ideally, they'd be able to log on very quickly (a PIN, perhaps?) and their session would end when they are done.
As long as there is an internet connection, I would presume there isn't much pro/con between native applications and web-based or progressive web apps. Is this assumption correct?
What's the best way of identifying which device the application is being run on?
Is this a common thing to do in general? What other technologies are used to create software that obtains input from industrial operators?
--
Update - this is a good higher-level treatment of the question at hand; however, it has become apparent why focused, specific questions are helpful. As such, I will follow up with specific questions:
Identifying the Area/Device a Web Application is Accessed On
Enforcing Specific Application Use on Tablets
Best Practices for Web App Authentication in Industrial Settings
I'm not able to answer everything in great detail, but here are a few pointers. In an environment like the one you describe, we usually see two options: 1) you tell them what you need (internet access, security, etc.) and they give you the device and decide how it will be configured; 2) they tell you exactly what you need to deliver.
I do not think you can prevent that 100%. We did it by providing the tablet (well, laptops in our case), and the OS configuration took care of it; the downside was that we had a few devices to support. You seem to hint that there is always an internet connection, so I guess you could collect information about the system and send it back to you daily?
We were allowed to "tap" into their attendance software: when you entered the facility you could use your 4-digit PIN to log in, and if you were off premises you could not log in at all. I can imagine the following: you log in with your username and password, which does the full verification; after that, you can use a 4-digit PIN to log in for the next n hours.
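A rough sketch of that flow, in TypeScript for illustration; the names, the in-memory store, and the single hash round are all assumptions (a real system would persist sessions and use a proper KDF):

```typescript
import { createHash, randomUUID } from 'node:crypto';

interface PinSession {
  userId: string;
  pinHash: string;
  expiresAt: number; // epoch millis
}

const sessions = new Map<string, PinSession>(); // keyed by device id

const hashPin = (pin: string) =>
  createHash('sha256').update(pin).digest('hex');

// Called once after a successful username/password login on this device.
function enablePinLogin(deviceId: string, userId: string, pin: string,
                        hours = 8): void {
  sessions.set(deviceId, {
    userId,
    pinHash: hashPin(pin),
    expiresAt: Date.now() + hours * 3_600_000,
  });
}

// Fast path: the operator taps a 4-digit PIN on the shared tablet.
function loginWithPin(deviceId: string, pin: string): string | null {
  const s = sessions.get(deviceId);
  if (!s || Date.now() > s.expiresAt) return null; // full login required again
  if (s.pinHash !== hashPin(pin)) return null;
  return randomUUID(); // short-lived session token for this shift
}
```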
Maybe, kinda; it depends on what you are doing. Does the browser have all the features you need? Our system needs multicast to perform really fast, so we have a native app.
I touched on this in 1. You could also use a device enrolment process, or contractually require that only your software be installed, with breaches invalidating the support contract. It really depends on your creativity. My favourite (and it works): just tell them there will only be your software installed, and if not, they pay double for support. I only ever saw one customer install some crap on the device after being told not to.
It really depends on what industry you are talking about; every industry is different. We almost always build a custom solution.
The enforcement of device/app usage depends on the customer. If the customer asks for help with enforcement, you can provide guides, training, and workshops. If the customer is serious about enforcement, it will be a policy adopted by the whole organization from the top down. Usually seniors resist a workflow change more than juniors do, so top management/executives should deal with that. Real-life story: an SAP team took 6 months to transform a major newspaper's workflow, and during that time a few seniors got fired because they refused to adopt the change.
Security shouldn't handicap the users. Usually in an industrial environment the network is isolated, or at least restricted through a VPN connecting multiple sites (plants in your case). Regarding the active user: we usually provide guides/training/workshops and inform users that using a colleague's account or device will prevent the system from tracking their accomplishments/tasks, so each user is responsible for making sure the active account/device is the one assigned to him/her.
It depends. With native you have more control than with web, but if the app is just doing monitoring, then most of today's apps use the web for monitoring, and the common way to receive input is REST APIs (even if the industrial devices don't support REST, a middleware could be written to transform their output). If you need more depth on native vs. web, ask a new question with more details about the requirements.
Depends on the tech you are using (native or web) and the things I mentioned in point 2: you can use a whitelist of devices that are allowed to run the app. Overall, there are many good ways to track down the device.
How common in general? I think such information can only be obtained by survey; the world is full of variations. And something being common doesn't mean it's safe or best; our industry keeps changing at all levels. So to stay in the loop, we must keep learning and self-updating, without reboot.
Well, I tried to ask this as a comment on this question, but I thought that maybe no one would notice it, so I decided to ask it as a separate one.
The question is about how to build a real-time GPS tracking system, given the following scenario:
Rather than connecting a GPS receiver to a PC, the user will have a mobile device with an integrated GPS receiver.
Location data will be sent over mobile network using GPRS data connection to a server side.
The data will be processed, and a KML path file will be created, updated at intervals, and used to track the user in Google Earth.
The question is: what is the best way to implement the server side: a web service, a web application, a Windows service, a Windows application, or what exactly? Take into account that the system will serve a number of users simultaneously, and that more users may use the system in the future (scalability).
Thank you in advance and I highly appreciate any help :)
What kind of device are you using exactly, something like this or something more sophisticated / configurable? If we assume that the device sends its data over TCP, I would consider the following approach with separate input/output processes:
Input: a process listening on a specific TCP port and storing incoming coordinates to a database keyed by device id. Preferably, the listening loop should handle simultaneous connections without them blocking each other (see the sketch after this list).
Output: a web application reading coordinates from the database for a given device id and displaying them through the Google Earth API.
Use whatever programming language(s) you are familiar with.
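For illustration, a minimal sketch of the input process (here in TypeScript/Node, assuming devices send newline-terminated `deviceId,lat,lon` lines over TCP; an in-memory map stands in for the database):

```typescript
import * as net from 'node:net';

const lastPosition = new Map<string, { lat: number; lon: number; at: Date }>();

const server = net.createServer((socket) => {
  let buffer = '';
  socket.on('data', (chunk) => {
    buffer += chunk.toString('utf8');
    let nl: number;
    // Consume complete lines; keep any partial line in the buffer.
    while ((nl = buffer.indexOf('\n')) >= 0) {
      const line = buffer.slice(0, nl).trim();
      buffer = buffer.slice(nl + 1);
      const [deviceId, lat, lon] = line.split(',');
      if (deviceId && lat && lon) {
        lastPosition.set(deviceId, {
          lat: Number(lat),
          lon: Number(lon),
          at: new Date(),
        });
      }
    }
  });
  socket.on('error', () => socket.destroy());
});

// Each connection is handled asynchronously, so clients don't block each other.
server.listen(5000, () => console.log('listening on :5000'));
```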
For me, the main technical limitation/risk here is the mobile device and its connectivity.
1) What are your requirements? Do you need to support various mobile devices, or will you focus on only one platform?
2) More importantly, you have to understand that GPRS data connections differ from a PC connected to the Internet. There are various connection restrictions imposed by different mobile operators.
If I were to design such a system, then to minimise those risks I would go with a web server running on port 80, to which the mobile devices would upload their Long/Lat via POST (or even GET, to simplify things).
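A minimal sketch of that design (TypeScript/Node; the field names and in-memory store are assumptions, and port 8080 stands in for port 80):

```typescript
import * as http from 'node:http';

const positions = new Map<string, { lat: number; lon: number; at: Date }>();

http.createServer((req, res) => {
  if (req.method === 'POST' && req.url === '/position') {
    let body = '';
    req.on('data', (c) => (body += c));
    req.on('end', () => {
      // e.g. body = "deviceId=abc&lat=51.5&lon=-0.12"
      const params = new URLSearchParams(body);
      const id = params.get('deviceId');
      if (id) {
        positions.set(id, {
          lat: Number(params.get('lat')),
          lon: Number(params.get('lon')),
          at: new Date(),
        });
      }
      res.writeHead(204).end();
    });
  } else {
    res.writeHead(404).end();
  }
}).listen(8080); // 80 in production, as suggested above
```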
EDIT: Regarding scalability, it would be very easy to scale things up in the future using tried&tested load-balancing techniques.
EDIT2: Whichever technology you decide to use, I would highly recommend that the first thing you do is mock up a prototype. Those connection restrictions could be show-stoppers, and ideally you should explore them before making any serious investment.
How can I build a simple 2-player game, that communicates over the internet?
I need to solve the problems of:
lookup or rendezvous - two players want to find each other.
ongoing communications. Either player can initiate an action that requires delivering information to the other side in a reasonably quick timeframe (IM-type latency, not email-type latency).
In this regard, I suppose it is equivalent to a 2-way chat, where people want to be able to find each other, and then also, once paired up, intercommunicate.
Further requirements:
for now, assume the endpoints are Windows OS, relatively recent.
assume neither endpoint machine is directly accessible from the internet. Assume they are client machines, hidden behind firewalls that block incoming requests. The machines can make outbound requests. (say, over HTTP, but TCP is also fine)
communication should be private. For simplicity, let's say there's a shared secret already in place and the endpoints are able to do AES. What I mean by this is that any intermediary should not need to decrypt the message packets; decryption will happen only at the endpoints. (A sketch of this appears after the list.)
all custom code should run only on the client PCs.
Assume there is no server in the internet that is under my control.
I'm happy to use third-party servers to facilitate intercommunication, like an IM server or something, as long as it's free, and I am not required to install custom code on it.
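To illustrate the privacy requirement, here is a minimal sketch of endpoint-only encryption with a pre-shared key using AES-256-GCM (TypeScript/Node for brevity; on the Windows endpoints the equivalent .NET AES APIs would apply). The intermediary only ever sees the base64 ciphertext:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from 'node:crypto';

// 256-bit pre-shared key; demo value only, derive a real key properly.
const KEY = Buffer.alloc(32, 'shared-secret-demo');

function encrypt(plaintext: string): string {
  const iv = randomBytes(12);
  const cipher = createCipheriv('aes-256-gcm', KEY, iv);
  const ct = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
  // Pack iv | authTag | ciphertext so the peer can decrypt and authenticate.
  return Buffer.concat([iv, cipher.getAuthTag(), ct]).toString('base64');
}

function decrypt(packed: string): string {
  const buf = Buffer.from(packed, 'base64');
  const iv = buf.subarray(0, 12);
  const tag = buf.subarray(12, 28);
  const ct = buf.subarray(28);
  const decipher = createDecipheriv('aes-256-gcm', KEY, iv);
  decipher.setAuthTag(tag); // throws on tampered messages
  return Buffer.concat([decipher.update(ct), decipher.final()]).toString('utf8');
}
```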
What APIs are available to facilitate this design?
Can I do this with IM APIs? WCF? Are there WCF Channels for Windows Messenger?
What protocols? HTTP? I have this tagged as "peer-to-peer" but I mean that virtually; there's no hard requirement for a formal p2p protocol.
What message formats would you use?
EDIT
To clarify the requirements around servers, what I want is NO SERVER UNDER MY CONTROL. And NONE OF MY CUSTOM CODE ON ANY SERVER. That is not the same as "No server".
Think of it this way: I can send an email over SMTP, using custom code that I write on the sending and receiving side. My custom code can connect via a free SMTP server intermediary. This would require no installation of code on the SMTP server. This is something like what I want, but SMTP is not acceptable, because of the latency.
EDIT2
I also found this: library for Instant Messaging, like libpurple, but written in C#
ANSWER
I can do what I want using libraries for IM frameworks. One simple way, using Windows Live Messenger, is the Messenger Activity SDK. This proves the concept but is not really a general solution. Similar things can be accomplished with the IM libraries for various messenger systems, like libpurple, or with libraries for IRC channels. In all these cases, the IM servers act as the firewall-penetrating communications infrastructure.
IM is the wrong tool. Instead, use an IRC chat room.
With an IRC chat room, your clients "log in" to the chat room, and that is used for your "presence". Anyone in the chat room is "available" to play the game.
Once that is done, the game instances communicate with each other through the chat room. They can use the global channel, or simply private IRC channels, for game traffic.
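A toy sketch of that rendezvous flow over raw IRC (TypeScript/Node; the server, nick, and channel names are placeholders, and a real client would wait for the server's 001 welcome before joining):

```typescript
import * as net from 'node:net';

// Connect to a placeholder IRC server and register.
const sock = net.connect(6667, 'irc.example.net', () => {
  sock.write('NICK player1\r\n');
  sock.write('USER player1 0 * :Game Client\r\n');
  // A robust client waits for the 001 welcome before joining; elided here.
  sock.write('JOIN #mygame-lobby\r\n');
  // Announce presence so other clients can find us.
  sock.write('PRIVMSG #mygame-lobby :HELLO available-to-play\r\n');
});

sock.on('data', (chunk) => {
  for (const line of chunk.toString('utf8').split('\r\n')) {
    if (line.startsWith('PING')) {
      // Mandatory keepalive, or the server disconnects us.
      sock.write(line.replace('PING', 'PONG') + '\r\n');
    } else if (line.includes('PRIVMSG #mygame-lobby :HELLO')) {
      // Another client announced itself: invite it into a private channel
      // that will carry the actual game traffic.
      sock.write('PRIVMSG #mygame-lobby :GAME-INVITE #mygame-1234\r\n');
    }
  }
});
```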
The issues to solve:
First, all game state is shared on the clients. Many games have done this (RTS's like Age of Empires, RPGs like Diablo). But client states are susceptible to hacking and cheating. That's just a plain truth. If the game is popular, it WILL be hacked.
Ping traffic. Basically, the flow is: you log in to the room and your client is in "available to play" mode. Then it pings EVERYONE ELSE to see if THEY are available to play. This happens with every client sign-in to the chat room. You can then use the public room for broadcast events ("Frank is ready for a new game", "Frank started a game with Joe", etc.). That can help keep games in sync and not chatty, but when a client connects to the chat room, it's going to go "Hi all, it's Bob, what are you all doing?". So you need to manage that.
Traffic volume. IRC rooms can handle a lot of traffic, but not a LOT of traffic. Most are designed to prevent "spamming", "flooding", etc., so you may well be rate-limited on your game play. Not a problem for "Checkers"; more of one for "World of Warcraft" during a 40-man raid. That's a game design issue.
Terms of service. The IRC provider may well say "Uh, no, you can't do that with our service". I haven't looked into it, so I don't know, but it could be an issue.
Other than that, IRC is a pretty good fit. Lots of IRC bot code is floating around on the net, though I've never used any of it.
Every two-player game needs some type of server environment, if only to communicate between the two clients/players. Keep in mind that each client/player can also act as its own server to communicate with other linked clients. But the need to keep tabs on all clients/players at any given time, and to facilitate searching for other clients/players, inherently requires some type of server environment to begin with.
libpurple along with OTR can give you the privacy-over-IM such an application would need.
You could set up a message board on one of the free message-board servers so that players can find each other. You'll probably want to encourage them to use private messages to exchange IP addresses. Then, use a protocol that connects using IP addresses. Good luck with that; firewalls make it a pain.
Then, of course, one machine of the pair would need to act as the server, the other as the client, and your software must contain both sets of code. I've written such a game and can tell you that the communication code gets a little confusing.
I can tell you right now that you'd be much happier in life if you wrote a web service to facilitate communication. But, then, you'd need a server for that.
Good luck. You're going to need it.
OR, you could just write a game for an IM client, like Microsoft Messenger. I've seen games for that one, so I know it can be done.
As somebody has said, this may not be possible without a mediating server between the 2 players. Since you're happy to use a third-party server, I suggest you build your system using Google App Engine + XMPP over HTTP. It works nicely over the internet and behind firewalls, and it's free (as long as your system doesn't grow beyond the GAE quota).
Peer to peer is out due to your firewall constraint, and it doesn't really work easily for directory services anyway.
The next easiest method would be to toss up a very simple CGI server script on one of the numerous super-cheap web hosting sites. It seems that you don't want to go this route; is there some particular reason? 100 lines of code and a super-cheap server should give you everything you're asking for and more.
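To give a sense of what those ~100 lines buy you, here's a sketch of such a relay (TypeScript/Node rather than CGI; the drop-box protocol and paths are invented for illustration). Clients POST messages into a named box and poll with GET to drain it, which works from behind firewalls since all connections are outbound:

```typescript
import * as http from 'node:http';

const boxes = new Map<string, string[]>(); // boxId -> queued messages

http.createServer((req, res) => {
  const boxId = (req.url ?? '/').slice(1); // e.g. /game-1234
  if (req.method === 'POST') {
    let body = '';
    req.on('data', (c) => (body += c));
    req.on('end', () => {
      if (!boxes.has(boxId)) boxes.set(boxId, []);
      boxes.get(boxId)!.push(body); // body is opaque (AES) ciphertext
      res.writeHead(204).end();
    });
  } else {
    // GET drains the box; clients poll since they can't accept
    // inbound connections through their firewalls.
    const queued = boxes.get(boxId) ?? [];
    boxes.set(boxId, []);
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify(queued));
  }
}).listen(8080);
```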
I suppose you could hook into some sort of third-party chat library. I don't know about the current IM protocols, but good old IRC and a separate channel for your game would work. You could even cobble something together using FTP. Blog comments on a free blog site would work too. The question is: why?
These are all kludges. They get the job done in obtuse, inelegant, and poorly scaling ways.
I urge you to reconsider the web server solution.
You have a lot of conflicting requirements. Both clients being behind firewalls that block incoming requests pretty much means they can't do peer-to-peer, since neither machine can act as the server; you will need a transport server in the middle somewhere, routing messages to each client. Right now, what you are asking for is pretty much impossible given the no-server requirement.
My current employer uses a 3rd-party hosted CRM provider, and we have a fairly sophisticated integration tier between the two systems. Among the capabilities of the CRM provider is letting developers author business logic in a Java-like language, so that on events such as the user clicking a button or submitting a new account into the system, validation and/or business logic fires.
One of the capabilities we make use of is having that business code, running on the hosted provider, invoke web services that we host. The canonical example is a sales rep entering a new sales lead and hitting a button to ping our systems to see if we can identify that new lead based on email address, company/first/last name, etc., and if so, return an internal GUID that represents that individual. This all works fine for us, but we've run into a wall again and again in trying to set up a sane dev environment to work against.
So while our use case is a bit nuanced, this can generally apply to any development house that builds APIs for 3rd party consumption: what are some best practices when designing a development pipeline and environment when you're building APIs to be consumed by the outside world?
At our office, all our devs are behind a firewall, so code in progress can't be hit by the outside world, in our case the CRM provider. We could poke holes in the firewall, but that's less than ideal from a security surface-area standpoint, especially if the number of devs who need to be in a DMZ-like area is high. We are currently trying a single dev machine in the DMZ and remoting into it as needed to do dev work, but that has created a resource-scarcity issue when multiple devs need the box, let alone when they're making potentially conflicting changes (e.g. on different branches).
We've considered mocking/faking incoming requests by building fake clients for these services, but that's pretty major overhead in building out feature sets (though it does by nature reinforce the testability of our APIs). It also doesn't obviate the fact that sometimes we really do need to diagnose/debug issues coming from the real client itself, not some faked request payload.
What have others done in these types of scenarios? In this day and age of mashups, there must be a lot of folks out there with experience developing APIs: what has worked (and not worked so well) for you?
On the occasions when this has been relevant to me (which, truth be told, is not often), we have tended to do a combination of hosting a dev copy of the solution in-house and mocking what we can't host.
I personally think that the more you can host on individual dev boxes the better: if your devs' PCs are powerful enough to run the entire thing plus whatever else they need for development, then they should be doing that. It gives them tons of flexibility to develop without worrying about other people.
For dev, it makes sense to use mock objects and write good unit tests that define the task at hand. This helps ensure that the developers understand the business requirements. Mock libraries are very sophisticated and help solve this problem.
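For example, a unit test can replay a request payload captured once from the real CRM callback, so no firewall hole is needed for day-to-day development. Everything here (matchLead, the payload shape, the GUID lookup) is invented for the sketch:

```typescript
import { test } from 'node:test';
import assert from 'node:assert';

// Stand-in for the web service handler under test.
function matchLead(payload: { email: string }): { guid: string | null } {
  const known: Record<string, string> = {
    'jane@example.com': 'internal-guid-123', // stand-in for the real lookup
  };
  return { guid: known[payload.email] ?? null };
}

test('known lead resolves to an internal GUID', () => {
  // Payload captured once from the real CRM callback, replayed offline.
  const captured = { email: 'jane@example.com' };
  assert.notStrictEqual(matchLead(captured).guid, null);
});
```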
Then perhaps a continuous-build process moves the code to the dev box in the DMZ. A robust QA process makes sense, plus general UAT testing.
Also, for general debugging, you again need access to the machine in the DMZ, which you remote into.
This is probably an "ideal" situation, but you did ask for best practices :).