Are there any real pitfalls in using single-process mode in production? The official statement seems to discourage this, but so far the application has been stable. CEF1 seems to have been abandoned, and if CEF3 single-process mode is used for development, then it should at the very least be part of the test suite, and therefore stable. Or is that not true?
Also, is CEF3 single-process mode not equivalent to CEF1? The new Battle.net launcher is using CEF1 (1453). I wonder if that was for legacy reasons or a conscious decision to avoid CEF3.
This question comes up often on CEF's forum. The short answer is "don't take this path".
From time to time, certain bugs pop up in single-process mode, and the official word from CEF's maintainer is that this mode is unsupported, period. I think this is worded too lightly in the documentation, though.
People coming from a CEF1 background tend to keep doing things the old way, so single-process mode seems like a good way to go, but the newer (now years old) CEF3 depends heavily on Chromium's internals, and those are built solely around multiple processes in production.
Seaside just released a release candidate for the upcoming 3.0 version, so it appeared on my radar again. As I'm currently pondering which web framework to use for a future project, I wonder whether it's something to consider. Alas, most of the publicity for Seaside is from '07, which is probably one or two web generations ago. So I'm hoping that the community here can answer some questions:
Continuation-based frameworks were great when most of your workflow lived in HTML, e.g. form submits. For today's JavaScript-heavy environments, that hardly seems worthwhile anymore.
Is Squeak able to handle a reasonable workload? From other questions here and elsewhere, it seems that for proper scaling another implementation (Gemstone etc.) would probably fare better in the long run, but I don't have a proper idea how far away that is. Sessions seem to be rather expensive.
I know that comparisons are hard, but most of the articles you find on the net set Seaside and Rails side by side. How would combinations like Scala/Lift, Clojure/Compojure or Erlang/Nitrogen do instead?
I have answers to questions one and two:
This is true. However, since version 2.8 Seaside is no longer a strictly "continuation-based" framework. Seaside uses continuations in the flow module only, and since Seaside 3.0 the flow module is even optional. Also note that Seaside has had strong JavaScript support since 2005, long before mainstream frameworks started to add JavaScript functionality. Today Seaside comes with jQuery and jQuery UI support built in.
Of course that depends on what you store within your session objects, but typically sessions are small (less than 20 KiB). Use the memory profiler in your application to determine the exact memory consumption.
And there is a new Seaside book: http://book.seaside.st/book
I find that the productivity of working in a Smalltalk IDE with a good set of abstractions outweighs all other issues in engineering-dominated projects. It works well as an enterprise system for a small company with about 100 (simultaneous, but not heavy) users on a single server (without going to SSD). Since 2007:
Seaside has shown it can make the switch from HTML workflows to JavaScript ones;
Seaside has been ported to a lot of different Smalltalks;
GemStone has released GLASS;
The new 'Cog' VM was released a few weeks ago and shows great promise for improved performance.
In Smalltalk we now have three web frameworks to consider; besides Seaside there are also
Aida/Web and
Iliad.
The latter two effectively solve tree-like control flow, but without needing continuations. Both also have very strong Ajax integration; in fact, you don't even realize you are working with Ajax.
Both also scale well in memory consumption. 10,000 sessions take about 220 MB in Aida/Web, that is about 22 KB per session, which can be further optimized down to a mere 400 bytes per session. This means you can run not just one but many websites from a single Smalltalk image. Of course you can always move to a load-balancing solution when you really need it, which in my experience is very rarely.
Compared to Ruby on Rails: a friend of mine complained that he initially needed 50 MB of memory for every webshop site he sold. He then turned to Aida/Web, where he needs about the same amount for the image, but only a few KB for every additional webshop site.
Avi Bryant, the developer of Seaside, said that AJAX trumps continuations in almost all situations. Nevertheless, you can build reasonably powerful applications with Seaside and AJAX, too.
The application part of a web app can be done quite well in other frameworks using Ajax.
I think a Seaside-integrated Smalltalk-to-JavaScript framework, something like Cappuccino for Clamato, is currently missing. I'd like to be able to build real JavaScript apps using Smalltalk.
JavaScript is awesome, but being able to handle complicated workflows cleanly and cheaply on the server side (as Seaside allows you to) is what keeps Seaside from becoming obsolete. Economy and utility pay off in both the short and the long run. But saying this in the abstract has no value at all: you should be talking about a specific application and deciding whether Seaside is part of the bunch of competitive advantages that form an equation that rocks (and knowing why).
About scaling workload with Seaside, the short answer is that you can scale it like hell, yeah (for the long answer, check my answer here: Does Seaside scale?).
Too unanswerable, man :) Try a variation of what you're really trying to ask.
I think the best thing you can do is a prototype of something in a weekend.
If you can do a prototype in two days, capture some attention, and enjoy the development experience of doing it with Seaside, then you'll have the foundation of your next thing.
It costs only your time (you can publish it on an Amazon server).
BTW, this week I heard about a startup that made its prototype by hand (everything was static and they processed stuff manually). Pretty amazing, crazy, and cheap. When they felt they had enough traction on the idea (which they did), they implemented the app (with whatever tech; I'm sure that's no challenge for a Seaside developer).
Why use ZeroMQ, as opposed to writing your own library?
We're working on a project here that will be a self-dividing server pool: if one section grows too heavy, the manager would divide it and put it on another machine as a separate process. It would also alert all affected clients to connect to the new server.
I am curious about using ZeroMQ for inter-server and inter-process communication. My partner would prefer to roll his own. I'm looking to the community to answer this question.
I'm a fairly novice programmer myself and just learned about message queues. As I've googled and read, it seems everyone is using message queues for all sorts of things, but why? What makes them better than writing your own library? Why are they so common and why are there so many?
what makes them better than writing your own library?
When rolling out the first version of your app, probably nothing: your needs are well defined and you will develop a messaging system that fits them: small feature list, small source code, etc.
Those tools are very useful after the first release, when you actually have to extend your application and add more features to it.
Let me give you a few use cases:
your app will have to talk to a big-endian machine (SPARC/PowerPC) from a little-endian machine (x86, Intel/AMD), and your messaging system made some byte-order assumptions: go and fix it (see the sketch after this list)
you designed your app around a non-binary protocol/messaging format and now it is very slow because you spend most of your time parsing (the number of messages increased and parsing became a bottleneck): adapt it so it can carry a binary/fixed-width encoding
at the beginning you had 3 machines on a LAN, with no noticeable delays and everything reaching every machine; then your client/boss/pointy-haired-devil-boss shows up and tells you the app will be installed on a WAN you do not manage, you start having connection failures and bad latency, and you need to store messages and retry sending them later: go back to the code and plug this stuff in (and enjoy)
some messages need replies, but not all of them: you send some parameters in and expect a spreadsheet back instead of a simple acknowledgement: go back to the code and plug this stuff in (and enjoy)
some messages are critical, and their reception/sending needs proper backup/persistence. Why, you ask? Auditing purposes
And many other use cases that I forgot ...
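To make the first two items concrete, here is a minimal sketch (Python, standard struct module) of what an explicit byte order and a fixed binary header buy you; the frame layout itself is invented for this example.

    import struct

    # Hypothetical frame layout: a 4-byte length and a 2-byte message type,
    # both in network (big-endian) byte order, followed by the raw payload.
    # The explicit "!" prefix means x86 and SPARC/PowerPC produce identical bytes.
    HEADER = struct.Struct("!IH")

    def encode(msg_type, payload):
        # Fixed binary header: cheap to build, cheap to parse.
        return HEADER.pack(len(payload), msg_type) + payload

    def decode(frame):
        length, msg_type = HEADER.unpack_from(frame)
        payload = frame[HEADER.size:HEADER.size + length]
        return msg_type, payload

    frame = encode(7, b"hello")
    assert decode(frame) == (7, b"hello")

A mature messaging library makes these decisions for you up front, which is exactly why retrofitting them later hurts.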
You can implement it yourself, but do not spend much time doing so: you will probably replace it later on anyway.
That's very much like asking: why use a database when you can write your own?
The answer is that using a tool that has been around for a while and is well understood in lots of different use cases, pays off more and more over time and as your requirements evolve. This is especially true if more than one developer is involved in a project. Do you want to become support staff for a queueing system if you change to a new project? Using a tool prevents that from happening. It becomes someone else's problem.
Case in point: persistence. Writing a tool to store one message on disk is easy. Writing a persistor that scales and performs well and stably, in many different use cases, and is manageable, and cheap to support, is hard. If you want to see someone complaining about how hard it is then look at this: http://www.lshift.net/blog/2009/12/07/rabbitmq-at-the-skills-matter-functional-programming-exchange
Anyway, I hope this helps. By all means write your own tool. Many many people have done so. Whatever solves your problem, is good.
I'm considering using ZeroMQ myself - hence I stumbled across this question.
Let's assume for the moment that you have the ability to implement a message queuing system that meets all of your requirements. Why would you adopt ZeroMQ (or other third party library) over the roll-your-own approach? Simple - cost.
Let's assume for a moment that ZeroMQ already meets all of your requirements. All that needs to be done is to integrate it into your build, read some doco and start using it. That's got to be far less effort than rolling your own. Plus, the maintenance burden has been shifted to another company. Since ZeroMQ is free, it's like you've just grown your development team to include (part of) the ZeroMQ team.
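For a sense of how small that integration effort is, here is roughly what a ZeroMQ hello-world looks like with the Python binding (pyzmq); the port and message contents are arbitrary, and in real use the two sockets would live in separate processes.

    import zmq

    ctx = zmq.Context()

    # "Server": reply to every request (REQ/REP pattern).
    server = ctx.socket(zmq.REP)
    server.bind("tcp://*:5555")

    # "Client": normally a separate process; kept here for brevity.
    client = ctx.socket(zmq.REQ)
    client.connect("tcp://localhost:5555")

    client.send(b"ping")
    print("server got:", server.recv())  # b'ping'
    server.send(b"pong")
    print("client got:", client.recv())  # b'pong'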
If you ran a Software Development business, then I think that you would balance the cost/risk of using third party libraries against rolling your own, and in this case, using ZeroMQ would win hands down.
Perhaps you (or rather, your partner) suffer, as so many developers do, from "Not Invented Here" syndrome? If so, adjust your attitude and reassess the use of ZeroMQ. Personally, I much prefer the benefits of a "Proudly Found Elsewhere" attitude. I'm hoping I can be proud of finding ZeroMQ... time will tell.
EDIT: I came across this video from the ZeroMQ developers that talks about why you should use ZeroMQ.
what makes them better than writing your own library?
Message queuing systems are transactional, which is conceptually easy to use as a client, but hard to get right as an implementor, especially considering persistent queues. You might think you can get away with writing a quick messaging library, but without transactions and persistence, you'd not have the full benefits of a messaging system.
Persistence in this context means that the messaging middleware keeps unhandled messages in permanent storage (on disk) in case the server goes down; after a restart, the messages can be handled and no retransmit is necessary (the sender does not even know there was a problem). Transactional means that you can read messages from different queues and write messages to different queues in a transactional manner, meaning that either all reads and writes succeed or (if one or more fail) none succeeds. This is not really much different from the transactionality known from interfacing with databases and has the same benefits (it simplifies error handling; without transactions, you would have to assure that each individual read/write succeeds, and if one or more fail, you have to roll back those changes that did succeed).
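To illustrate how much machinery hides behind those two properties, here is a toy sketch of a durable, transactional hand-off between two queues, using SQLite purely as stand-in storage; the table layout and function names are invented for this answer, and a real broker does far more (acknowledgements, redelivery, fan-out, flow control, clustering).

    import sqlite3

    # Toy durable queue backed by SQLite. Messages survive a restart because
    # they live on disk, not in process memory.
    db = sqlite3.connect("queues.db")
    db.execute("CREATE TABLE IF NOT EXISTS msg (queue TEXT, body BLOB)")

    def enqueue(queue, body):
        with db:  # commits on success, rolls back on exception
            db.execute("INSERT INTO msg (queue, body) VALUES (?, ?)", (queue, body))

    def move_one(src, dst):
        # Remove from one queue and add to another atomically:
        # either both happen or neither does.
        with db:
            row = db.execute(
                "SELECT rowid, body FROM msg WHERE queue = ? LIMIT 1", (src,)
            ).fetchone()
            if row is None:
                return None
            rowid, body = row
            db.execute("DELETE FROM msg WHERE rowid = ?", (rowid,))
            db.execute("INSERT INTO msg (queue, body) VALUES (?, ?)", (dst, body))
            return body

Making this kind of thing fast, concurrent, and reliable across machines is where the real work of a messaging system lies.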
Before writing your own library, read the 0MQ Guide here: http://zguide.zeromq.org/page:all
Chances are that you will either decide to install RabbitMQ, or else you will make your library on top of ZeroMQ since they have already done all the hard parts.
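As a taste of what "building your library on top of ZeroMQ" can look like, here is a rough retry wrapper in the spirit of the guide's Lazy Pirate pattern, written against pyzmq; the endpoint, timeout, and retry count are placeholders. It also happens to cover the "store and retry later" use case from an earlier answer, since the caller can persist the message when None comes back.

    import zmq

    ENDPOINT = "tcp://broker.example.com:5555"  # placeholder address
    TIMEOUT_MS = 2500
    RETRIES = 3

    def request_with_retry(payload):
        # Send a request; on timeout, discard the socket and try again.
        ctx = zmq.Context.instance()
        for _ in range(RETRIES):
            sock = ctx.socket(zmq.REQ)
            sock.setsockopt(zmq.LINGER, 0)   # don't block on close
            sock.connect(ENDPOINT)
            sock.send(payload)

            poller = zmq.Poller()
            poller.register(sock, zmq.POLLIN)
            if poller.poll(TIMEOUT_MS):      # a reply arrived in time
                reply = sock.recv()
                sock.close()
                return reply
            sock.close()                     # timed out, retry with a fresh socket
        return None                          # caller can store the message and retry later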
If you have a little time, give it a try and roll your own implementation! What you learn from the exercise will convince you of the wisdom of using an already tested library.
Let's assume you're in the middle of a long-running project (long-running = several years) and, as expected, several things will come up with brand-new releases. There might be a new .NET Framework with brand-new features (e.g. LINQ, Entity Framework, WPF, WF...), a new Visual Studio or the next version of your favorite control library, a new mocking framework, and a lot more.
What are your guidelines for handling these technology updates? Do you adopt them instantly or do you ignore them until the end of the project? Do you have different guidelines for different things (Tools, Frameworks, supporting stuff)?
In my experience, these decisions are always made on a case-by-case basis. Several factors are considered, including:
How mature is the new technology? Does the organization like to be at the forefront working with bleeding edge new technologies, or does it prefer to work with proven tools and methodologies?
What skill sets do your people have? Are they consistent with use of the new technology, or is more training needed? Will improved productivity outweigh the time it takes to come up to speed?
What investment do you have in the existing technology? What is the cost of moving to the new technology? How much rework and rewriting of code is involved?
What is the requirement? Is it supported by the existing technology, or are new tools needed to fulfill the requirement?
What are the performance expectations? Does the new technology provide a performance improvement that cannot be met with the old technology?
What about the technological culture? Is the organization vendor specific (e.g. a Microsoft shop)? Can open-source code be used?
What is the scope of the project? Is it a large project that would benefit from supporting technologies like frameworks and tools, or is it a small project that would be unduly weighed down and complicated by these things?
How is the new technology supported? Does the vendor have good documentation? Is there someone you can talk to if you have problems? Or are you an organization that has people that know how to solve problems without a support contract?
Is the technology comfortable to work with? Does it seem to make sense? Is it clean and elegant? Do other people seem to like it? Are other people having problems with it?
Is the technology the latest flavor of the week? Has it proven itself in the battlefield to produce tangible results, or is it just a religion?
How much time do you have to learn the new technology and iron out the kinks? Do the benefits outweigh the costs?
As a very brief example, I chose LINQ to SQL for my most recent project, because the project was complex enough to warrant an ORM, L2S performs well and is lightweight, we are a Microsoft shop, and my sense is that the Entity Framework is not quite ready for prime time (even though Microsoft says that it will be the go-to framework for the future).
Stick with what you've started with.
A large and long running project often comes with a huge and highly complex code-base. Any change or upgrade to a new version of a library can add bugs in very subtle and unexpected ways.
Also: For large projects the tools and libraries used should have been tested and evaluated in the design-phase. Unless you find a show-stopper or a security issue it's best to not upgrade.
Always remember: Don't change horses in the middle of a stream. :-)
I would say different factors come into play, like:
Say a piece of software is nearing its end of life. For example, last April Microsoft retired mainstream support for SQL Server 2000; if your product uses it, then it's wiser to go for the next version of SQL Server in your next release.
Another factor is how much value the new features in the latest release would bring to your product. It may well be that the new release of the .NET Framework has nothing that adds value to your product, and then there is no strong case to upgrade.
Budget is also an important factor. You will likely need to upgrade licenses in order to step up to the next release, unless you are already part of something like Software Assurance.
Training for the team is also a factor. If the latest release is going to add something to your product, then you will have to train your team as well.
Well, there could be other telling factors too. These were the ones off the top of my head. I hope it helps.
cheers
If you're talking about a framework-specific example, the biggest piece of advice I'll give you is to keep the system and your application separate. This is why I love patterns such as Model-View-Controller: it keeps your code modular and means you can upgrade sections without breaking the app as a whole.
On a more practical level, if your framework has a Git or SVN repository, check out the usual 'system' directory from the repo; then you can run 'svn update' occasionally to keep up with the latest and greatest builds.
I would suggest that the project not last that long. Develop the application in smaller pieces, with iterations every couple of months. That way, as new technology comes out, you can make the necessary changes and implement updates as you go rather than having to decide whether to redevelop the whole application. As you say, trying to develop the whole application as things change just doesn't work.
As another poster said, it's certainly a case-by-case basis thing. What you can upgrade and when is determined mostly by how hard or easy it is to test the new version of the system. Having a comprehensive automated test suite for your application helps a lot with this.
Generally, I try to update to the latest stable release of libraries and so on as often as possible, because that makes maintenance easier. If you don't update, you may find yourself patching or working around bugs in the version of the library you are using. If you update less frequently, each update will be more work because you have more changes to deal with, and it's been longer since you last touched the system, and thus you remember less about it.
What do you prefer (from your developer's point of view) when it comes to implement a business process?
A Business Process Management System (BPMS) or just your favorite IDE with the needed tools and frameworks (a reporting tool for example)?
What is from your point of view the greatest Benefit of a BPMS compared to an IDE with your personal tools and frameworks?
OK. Maybe I should be more specific... I got to know one specific BPMS which should make it easy to implement a business process by configuring rules. But for me as a developer it is hard to work with the system. I would like to work with text files which I can refactor and I would like to be able to choose the right technology or framework for the job I have to do. Instead the system forces me to configure.
There are rules where I can use Java, but even then I have to stick to the system's editor, without IntelliSense etc.
So this leads me to the answer to my own question: I would like to use the tools I am used to instead of having to learn how to work with a BPMS (at least the one I know), because it limits me more than it helps. The BPMS I know is a framework from which it is hard to escape! At this time, I would prefer a framework like Grails over any BPMS I know.
So maybe the more specific question is: do you feel the same, or are there BPMSes that support you in being a developer and thinking like a developer, or do most of them force you to do your job a different way?
In my experience the development environments provided by BPMS systems are third rate, unproductive, and practically force you to write hard to maintain, poorly designed code (due to their limitations). Almost all the "features" (UI, integrations, etc) provided by the BPMS system I'm familiar with (the one sold by that company named for its database) were not worth the money we paid.
If you're forced to use a BPMS, my advice as a developer would be to build as much of your application as possible in a conventional development environment, such as Java or .NET, build as little as possible in the BPMS environment itself, and integrate the two. The only thing that should go into the BPMS is the minimum needed to make the business process work.
Not sure what exactly you ask, but the choice BPM vs. plain programming will depend on the requirements. A "business process" is a relatively vague term in software engineering.
Here are a few criteria to evaluate your needs:
complexity of the rules - Are the decisions/rules embodied in your process simple, complicated, configurable, hard-coded?
volatility of the process - How frequently does your process change? Who should be able to make the change?
integration needs - Is your process realized using multiple heterogeneous services, or is it all implemented in the same language?
synchronous/asynchronous - Is your process "long-running", with the need to handle asynchronous actions?
human tasks - Does your process involve human interaction, with tasks being assigned/routed to people according to their roles/responsibilities?
monitoring of the process - What is the level of control you want on the existing process instances being executed? Do you need to audit the actions, etc. ?
error handling - Depending on the previous points, how do you plan to deal with errors, or retry of faulty process execution?
Depending on the answer to these questions, you may realize that your process is closer to a simple state chart with a few actions and decisions that can be executed in a sequence, or you may realize that you need something more elaborated, and that you don't want to re-implement all that yourself.
Between plain programming and a full-fledged BPM solution (e.g. the Oracle BPM Suite, which contains BPEL, a rule engine, etc.), there are intermediate solutions such as jBPM or Windows Workflow Foundation and probably a lot of others. These intermediate solutions are frequently a good trade-off.
I have worked with Biztalk in the past and more recently with JBPM. My opinion is biased against BPMs for the following reasons:
Steep learning curve: To make a process work, I have to understand how the system and the editor work. It is hard enough for a developer to understand the system, let alone a business user. The drag-and-drop visual representation is a great demo tool. It certainly impresses managers (who ultimately pay for it), but a developer's productivity just drops.
Non-developers changing the workflow: I haven't seen one BPM solution do it flawlessly. Though it doesn't look like code, right-click on the box and you do have to put some code in, otherwise it is not going to work. So you definitely need a developer to do it. The best part is that it is neither developer-friendly nor business-user-friendly, just demo-user-friendly.
Testability and refactoring: It is virtually impossible to test-drive a BPMS. You do have 'unit test frameworks' advertised, but most of them are hacks and hard to use. Recently I tried the JBPM one; I ended up writing a lot of glue code and fake workflow handlers to make it work. The deal breaker for me, though, is refactoring. If the business radically changes its mind about how a business process should look, then good luck re-arranging the boxes, because just re-arranging them won't work: all the variables bound to the boxes also need to be re-arranged. I would prefer the power of the IDE and tests to refactor my business process.
If your application has workflow, then you could try a workflow library (with or without persistent state); see the sketch below. It will still manage your workflows without all the bloat that comes with a BPM. If a business user needs to understand the code, then let the business prepare good process flowcharts and translate them into good domain-driven code. Use Cucumber-style acceptance tests to bring the developers and the business together. A BPM is just something that tries to do too many things and ends up doing all of them badly.
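To show what the "workflow library" route can amount to, here is a minimal hand-rolled state machine in Python; the order-review states and events are invented for the example. Because the process is plain code and data, it diffs, refactors, and unit-tests like everything else in the codebase.

    # A tiny hand-rolled workflow: states and transitions are plain data.
    TRANSITIONS = {
        ("draft", "submit"): "review",
        ("review", "approve"): "approved",
        ("review", "reject"): "draft",
        ("approved", "ship"): "shipped",
    }

    class Workflow:
        def __init__(self, state="draft"):
            self.state = state

        def fire(self, event):
            try:
                self.state = TRANSITIONS[(self.state, event)]
            except KeyError:
                raise ValueError("%r is not allowed in state %r" % (event, self.state))
            return self.state

    # Exactly the kind of behaviour an acceptance test can pin down:
    wf = Workflow()
    assert wf.fire("submit") == "review"
    assert wf.fire("approve") == "approved"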
BPMS: a lot of common business cases and use cases are already implemented, so you just have to know how to use it. For common workflows you don't even need to write a single line of code, though you will mostly have to write some scripts to cover things that are not yet implemented.
Plain programming: just use the IDE to hack out the code. The positive side: more control. The negative: a lot of time is spent rewriting boilerplate code, and you have to maintain it.
So in a nutshell, I would prefer a Business Process Management System. One that I would recommend is ProcessMaker. It features an intuitive process designer that allows you to design workflows with drag and drop, and you can always write triggers to extend the process functionality. It's open source as well.
I've been with my current company for about four months now and I've noticed how several of our R&D scopes/documents use the term "lifecycle testing."
I've always thought that this term would mean the entire testing phase of a project, but the context of the term suggests that it instead is when the software is tested with "live" or "real" data in a staging environment as close to the production environment as possible.
This has led me to wonder if I have misunderstood the meaning of the phrase, in which case, can somebody explain what lifecycle testing is supposed to be or mean?
The lifecycle of software is its behaviour in the following situations:
Startup. Does it load correctly? Is it fast at startup? (Depends on what kind of software)
Mid-life. Does it use much memory? Does it clean up memory? Does it do what it ought to do?
Exiting. Does it clean up resources correctly? Does it close everything down well?
Lifecycle testing is very important for server applications, where it focuses especially on the "mid-life" phase (it's not an official term, by the way). Server apps should never crash while doing something important, and if they do, they shouldn't bring down the complete system.
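As a rough illustration (nothing official), here is what an automated "mid-life" check could look like in Python: drive the system through many cycles and assert that memory stays bounded. The do_one_cycle() function is a hypothetical stand-in for whatever exercises your application, and the cycle count and memory budget are arbitrary.

    import gc
    import tracemalloc

    def do_one_cycle():
        # Hypothetical: exercise the application once (handle a request, etc.).
        _ = [object() for _ in range(1000)]

    def test_midlife_memory(cycles=10_000, budget_bytes=5_000_000):
        # Soak test: run many cycles and fail if traced memory keeps growing.
        tracemalloc.start()
        for _ in range(cycles):
            do_one_cycle()
        gc.collect()
        current, _peak = tracemalloc.get_traced_memory()
        tracemalloc.stop()
        assert current < budget_bytes, "possible leak: %d bytes still live" % current

    test_midlife_memory()

Startup and exit get the same treatment in practice: time the startup path, and check that a clean shutdown releases files, sockets, and locks.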
The clue isn't so much "live" or "real" data; it's more about the application staying "alive" than being "live".
For instance, I've built a Flash client application, a "billboard" app displayed on a large screen, and I am lifecycle-testing it:
Graphics: does everything show up well? Not just in the first minutes, but even after 12 hours without restarting the app.
Auto-update: does that work?
etc.