With the upcoming closing of the SqueakSource repository, I wonder what the advantage of SmalltalkHub is over SqueakSource3. Is there some feature missing in SqueakSource3?
I have noticed that basic functionality like exploring projects seems broken in SmalltalkHub, which is understandable because it is still in beta, but why move or register projects on SmalltalkHub then?
Pharo's core development happens completely on SmalltalkHub.
It is much faster than SqueakSource3, which is the main point in favor of it. SmalltalkHub is still being extended and new features are on the way. For a typical use case from the image side there is little or no difference between the two.
SmalltalkHub is faster than SS3 and has a new look. You can manage teams and collaborators with a modern interface. Nicolas Petton uses it daily for work, and in addition he will continue to enhance it. All the Pharo core packages are hosted there. Finally, you can also easily install and use SmalltalkHub in your company if you need to. The software stack behind SmalltalkHub is really robust and nice; several companies use it for other projects.
From a risk-avoidance point of view, there is something to be said for storing your packages in both: their software stacks and feature sets are sufficiently different that a single failure is unlikely to affect both. Just make sure you identify one as primary. Copying the Monticello files from one repository to the other is easily automated; Stef recently provided some scripts to copy whole projects from SqueakSource to SmalltalkHub, and they would work equally well for canary or SS3.
Both are recommended on the official MobX page if one wants an opinionated way of using MobX for state management.
Based on these (1, 2), mobx-keystone seems like an improvement over mobx-state-tree, having everything that state-tree has and more. Nowhere could I find anything that state-tree has that keystone doesn't.
I see that keystone is nowhere near as mature as state-tree; that's probably the main point stopping me from picking it instead. What are other good points for state-tree over keystone?
P.S. It's going to be used in a React app.
I'm the current maintainer of MobX-State-Tree. I think the primary benefit of MST over MobX-Keystone is that MST is used more widely and has more broad third-party support. For example, mobx-devtools supports MST but not MobX-Keystone, as does Reactotron.
With all that said, I'm very interested in exploring MobX-Keystone for our own usage at my consulting company. Even though I'm maintaining MST, I'm not opposed to MobX-Keystone, and the much better TypeScript support is very tempting. If we end up using it in a project and it goes well, we will likely build support for it into Reactotron.
I hope this perspective helps.
(With regard to the other answer asking if you really need more than just MobX, it's my opinion that MST and MobX-Keystone bring super useful patterns and tools that help you scale across a whole application in a more cohesive way than remaking them yourself using MobX.)
First ask yourself if you really need those libraries, because you can get really far with just mobx and good ol' OOP patterns. In the official docs you have an example of a store that does auto-saving and serializing.
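A minimal sketch of that pattern, assuming a hypothetical `TodoStore` persisted to `localStorage` (my own illustration, not the docs' exact example):

```typescript
import { makeAutoObservable, autorun } from "mobx";

// A plain class plus MobX -- no MST or keystone needed.
class TodoStore {
  todos: { text: string; done: boolean }[] = [];

  constructor() {
    makeAutoObservable(this);
    // Auto-saving: autorun fires once immediately, then again
    // whenever anything it dereferences (here, `todos`) changes.
    autorun(() => {
      localStorage.setItem("todos", JSON.stringify(this.todos));
    });
  }

  add(text: string) {
    this.todos.push({ text, done: false });
  }
}
```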
Having said that, I would go for mobx-keystone. TypeScript works right out of the box, and you can use classes to construct your store, which is, IMHO, easier than MST stores. Plus the author is very responsive, and he is also a contributor to the mobx library.
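For a concrete picture, here is a minimal sketch of such a class-based store, using mobx-keystone's `Model`/`prop`/`@modelAction` API (the `Todo` model itself is hypothetical):

```typescript
import { model, Model, prop, modelAction } from "mobx-keystone";

// TypeScript infers everything from the props declaration;
// no separate runtime type definitions are needed.
@model("myApp/Todo")
class Todo extends Model({
  text: prop<string>(),
  done: prop(false), // prop with a default value
}) {
  @modelAction
  toggle() {
    this.done = !this.done;
  }
}

const todo = new Todo({ text: "try it out" });
todo.toggle();
```

(Note that the decorators require `experimentalDecorators` to be enabled in tsconfig.json.)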
mobx-keystone is super awesome. After using both libraries for a while, I would recommend it over mobx-state-tree on every possible occasion. It's much more intuitive and easier to learn, and it saved a lot of time in my project. (Switching from zustand to jotai to mobx-keystone, IMO each one was a bit better than the previous one for my use case.)
After extensive testing, reading the source code, and playing with the test suite, it feels like mobx-keystone provides a more robust developer experience with TypeScript and also has enough escape hatches if you need to work with performance-sensitive code. I would implement a small (but complex enough) problem with both libraries to judge for yourself.
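For comparison, the equivalent hypothetical model in mobx-state-tree, where the shape is declared in MST's runtime type system instead of a class:

```typescript
import { types } from "mobx-state-tree";

// The MST flavor: runtime type definitions plus an actions block.
const Todo = types
  .model("Todo", {
    text: types.string,
    done: false, // shorthand for types.optional(types.boolean, false)
  })
  .actions((self) => ({
    toggle() {
      self.done = !self.done;
    },
  }));

const todo = Todo.create({ text: "try both and compare" });
todo.toggle();
```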
I am creating a new project that will run in an Azure Web App on the new ASP.NET 5. We are not planning to run it on Linux or anything like that, at least for now. So the question is: should I try to keep both frameworks if possible, just in case, or should I prefer one of them? There are, for example, far fewer dependencies that I can use with dnxcore50, which is not so nice. So the main question is: are there any benefits of using dnxcore50 over dnx451 when running in an Azure Web App, like performance, stability, etc.?
I have to start by saying that I'm still a beginner in ASP.NET 5 (like most others), so I didn't post my answer earlier, and you should ignore my reputation, because it comes from other subjects which I know better.
I think that everybody who switches to ASP.NET 5 asks the same question: does it make sense to keep both frameworks in a project? Below are my personal thoughts on the subject.
My personal choice, and my short recommendation to you: keep both frameworks until you find some really important reason to drop one of them.
ASP.NET 5 is still not final. The strategy is not fully fixed, and it could still change on short notice. Just some examples: previous beta versions supported "Helios" as an option for hosting ASP.NET 5 applications on IIS, and the option was dropped later (see the statement). Even the name dnxcore50 has now been renamed to dotnet5.4, at least in all internal Microsoft components (see the announcement). One can suppose that other things could change in the future. Thus I think that putting all your eggs in one basket would be too dangerous now: keeping both frameworks reduces the risk.
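For reference, keeping both frameworks is just a matter of listing them both in the project.json of that era; a minimal sketch (the extra dnxcore50 dependency is illustrative only), after which framework-specific code can be fenced with the DNX451/DNXCORE50 preprocessor symbols:

```json
{
  "frameworks": {
    "dnx451": { },
    "dnxcore50": {
      "dependencies": {
        "System.Xml.XDocument": "4.0.11-beta-*"
      }
    }
  }
}
```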
The next thing I found was the following: dnxcore50 (dotnet5.4, or CoreFX, the .NET Core foundational libraries) doesn't support many features that the full .NET Framework does. One important example for me was the missing XSD schema validation (see here and here). I use XML only in combination with XSD schema validation, and I prefer JSON in most other cases. Keeping both frameworks in your project can help you locate the parts of your code that are not yet implemented in CoreFX, so you can move that code into a separate component or change the implementation.
About performance: one should distinguish the potential of both frameworks from their current implementations. In general, CoreFX was redesigned and decomposed; many parts of the single monolithic mscorlib were separated or removed (remoting, AppDomains, and so on). This means the performance of CoreFX should be better: theoretically, the factored API can provide better performance, and one can more easily improve one part of CoreFX and publish a new version of just that part. Many modules instead of one monolith gives us a new way to improve performance and fix bugs. On the other side, replacing dependencies with new versions can be the origin of new compatibility problems, which increases risk and can decrease stability. By keeping both frameworks, we can test whether a new problem exists in the alternative framework as well; that lets us judge whether the latest changes to our dependencies, and not the latest changes to our own code, are the origin of new problems.
I could continue with the pros and cons of each framework, but nobody likes to read long text, and all my arguments lead me to the same practical decision: keep both frameworks in my projects by default, until I find a real requirement to drop one of them.
No major advantages really so far.
This might change in the future, which is why I'm planning to target both (CoreCLR and .NET 4.6). A lot of investment is going into CoreCLR, but also into Docker and Service Fabric.
Just my 2 cents.
Here is the scenario.
We are developing a product where we have a base product and regional variations of it. We have all the common code checked into the main trunk, while we have created two branches (branch_us, branch_uk) for the variations off of the main trunk. Common code is constantly being checked into the main trunk, and the code checked into branch_uk and branch_us depends on the code in the main trunk. This is being done because we expect more regions to be added in future releases, and as a result we want maximum reuse as well as a thin regional-variations layer.
Based on the current strategy, the developer has to develop locally and then manually check in the common files to main_trunk and the regional variations to branch_uk and branch_us. Then, every time code is checked into main_trunk, we have to perform a merge from main_trunk to branch_uk and from main_trunk to branch_us before we can build branch_uk and branch_us (two separate deployments), because the new code in branch_uk/branch_us depends on the new common code in main_trunk. This model seems extremely painful to think about and unproductive.
I'm by no means an expert on TFS. Here is what I am seeking opinion on:
Is there a way TFS can dynamically pull changes into branch_uk/branch_us from the main_trunk without doing a manual merge after every check-in (in the main_trunk)?
Do you guys have any other recommendations on the code management process that might be more effective/productive than the current one?
Any thoughts and feedback will be much appreciated!
This seems like a weird architecture to me, but of course I'm coming at it from a position of almost total ignorance, so there might be a compelling reason to approach it that way.
That being said: It sounds to me like you don't have a single application with two regional variations, you have two separate applications that share a common ancestor. The short answer to your question is "No". A slightly longer answer is "No, but you could write code to automate it."
A more thoughtful answer is to ask: "Are you sure centralized version control is the right tool for the job?" It might be more intuitive to use Git for this. What you have are, in effect, a base repository and two forks of that repository. Developers can work against whatever fork makes sense, and if something represents a change that should apply to all localizations, open a pull request to have the change merged into the base repository. This would require more discipline on the part of the developers, since they would have to ensure that their commits are isolated such that they can open a pull request containing just the commits that apply to the core platform. Git has powerful but difficult history-rewriting tools that can assist. Or, of course, they could just switch back and forth between working on the core platform and pulling changes from the core platform back up to the separate repositories. This puts you back to where you started, but Git merges are very fast and shouldn't be a big issue.
Either way, thinking of the localizations as a single application is your mistake.
A non-source-control answer might involve changing the application's architecture so that all localizations run off the same codebase, with locale-specific functionality expressed through a combination of configuration flags and runtime-discoverable MEF plugins; or making a "core" application platform that runs as an isolated service, with separately developed locale-specific services that express only deviations from the core application platform.
I maintain in-house business software for a living. The technologies involved are Java, Struts, Spring MVC, JSP, Wicket, and a few others. I think it's time to branch out and learn something new.
I am hoping to show myself with a side project that writing code can, in fact, be fun (in some plane of the universe), and that I haven't wasted the past few years of my life doing something I can never love or have fun doing.
I'm thinking of having a fantasy-sport style web site - obviously much, much smaller with regards to features and all that. I was hoping I could get some recommendations for the newest or cleanest frameworks that will allow me to accomplish such a project. My goals are to work on following a real development process instead of just hacking a bunch of crap into an already crappy application on a daily basis. Also I will strive to follow best practices and create good, clean, understandable code that I don't shudder at the thought of having to modify. It's hard to do this at work, because the software I work on has already been developed by 50 guys from various continents that never took the time to design anything before jumping into coding.
I would need a simple database to store users and their picks for each event. Also at my job, the login security is all handled by another group completely. Do people usually write their own login systems from scratch, or are there open source utilities for that as well? I'd be interested in those, as my site will need to have a user login system, and be secure.
I had Ruby and Rails installed on my computer the last time I conjured up the motivation for this idea, but that was nixed by a hard drive crash. I figured before I jumped straight to Rails for this idea, I would get a few other opinions off Stack Overflow to see if people liked something else that I didn't know about.
Also, if anyone has any good resources for how to think about OO design, I could brush up on that as well. I'm looking for anything that will help me to just think about the design from the start and how to get my thoughts into a diagram. I'd like it to focus not so much on patterns and other principles as on just how to get started and actually put my thoughts into a professional document that I can use to build my project from. I tried to practice this prior to a card game that I wrote, and it got way too complicated way too fast, and the results ended up being not so great.
I'm more familiar with Django, although like you, the only frameworks I've really used are Java/Struts/Spring/JSP, etc. Coming from those, the automatically generated administration interface in Django is amazing, and it comes with its own authentication system too.
Unless you’re especially predisposed against Python, I think you should give it a go.
Ruby on Rails, Python on Django, and PHP on (not sure -- maybe Zend? or CakePHP?) are probably the most popular frameworks, if I understand correctly that you want to learn a new language. If I misunderstood you and you'd rather stick with Java, GWT seems pretty cool -- it's the only real way to avoid "explicitly" writing JavaScript. (If you DO want to learn and use some JavaScript, I personally am in love with Dojo, but jQuery is substantially more popular; those are two good popular frameworks you should consider, though there are others of course, as for all the languages I mentioned so far.)
One advantage of picking Python and Django is that they work particularly well with Google App Engine (and with Dojo, too, thanks to the cool dojango project!). GAE supports the JVM too now, but it has supported Python for much longer, and the Python side of it is more solid and complete at this time. So, if that's the technology stack you choose, you get to develop and deploy for free, on highly scalable infrastructure, at least until your app gets more than a few million page views per month -- and you really minimize your system administration hassles; all you do is basically code and write one simple configuration file.
Let's assume you're in the middle of a long-running project (long-running = several years) and, as expected, several things will come up with brand new releases: there might be a new .NET Framework with brand new features (e.g. LINQ, Entity Framework, WPF, WF...), a new Visual Studio or v.next of your favorite control library, a new mock framework, and a lot more.
What are your guidelines for handling these technology updates? Do you adopt them instantly or do you ignore them until the end of the project? Do you have different guidelines for different things (Tools, Frameworks, supporting stuff)?
In my experience, these decisions are always made on a case-by-case basis. Several factors are considered, including:
How mature is the new technology? Does the organization like to be at the forefront working with bleeding edge new technologies, or does it prefer to work with proven tools and methodologies?
What skill sets do your people have? Are they consistent with use of the new technology, or is more training needed? Will improved productivity outweigh the time it takes to come up to speed?
What investment do you have in the existing technology? What is the cost of moving to the new technology? How much rework and rewriting of code is involved?
What is the requirement? Is it supported by the existing technology, or are new tools needed to fulfill the requirement?
What are the performance expectations? Does the new technology provide a performance improvement that cannot be met with the old technology?
What about the technological culture? Is the organization vendor specific (e.g. a Microsoft shop)? Can open-source code be used?
What is the scope of the project? Is it a large project that would benefit from supporting technologies like frameworks and tools, or is it a small project that would be unduly weighed down and complicated by these things?
How is the new technology supported? Does the vendor have good documentation? Is there someone you can talk to if you have problems? Or are you an organization that has people that know how to solve problems without a support contract?
Is the technology comfortable to work with? Does it seem to make sense? Is it clean and elegant? Do other people seem to like it? Are other people having problems with it?
Is the technology the latest flavor of the week? Has it proven itself on the battlefield and produced tangible results, or is it just a religion?
How much time do you have to learn the new technology and iron out the kinks? Do the benefits outweigh the costs?
As a very brief example, I chose LINQ to SQL for my most recent project, because the project was complex enough to warrant an ORM, L2S performs well and is lightweight, we are a Microsoft shop, and it is my sense that the Entity Framework is not quite ready for prime time (even though Microsoft says that it will be the go-to framework for the future).
Stick with what you've started with.
A large and long running project often comes with a huge and highly complex code-base. Any change or upgrade to a new version of a library can add bugs in very subtle and unexpected ways.
Also: for large projects, the tools and libraries used should have been tested and evaluated in the design phase. Unless you find a show-stopper or a security issue, it's best not to upgrade.
Always remember: Don't change horses in the middle of a stream. :-)
I would say several factors come into play, like:
Say a piece of software is nearing its end of life: for example, last April, Microsoft retired mainstream support for SQL Server 2000. If your product uses it, then it's wiser to move to the next version of SQL Server in your next release.
Another factor that comes into play is how much value the new features in the latest release would bring to your product. It may well be that the new release of the .NET Framework has nothing that adds value to your product, in which case there is no strong case to upgrade.
Budget is also an important factor. You typically need to upgrade licenses in order to step up to the next release, unless you are already part of something like Software Assurance.
Training the team is also a factor. If the latest release is going to add to your product, then you will have to train your team as well.
Well, there could be other telling factors too. These were the ones off the top of my head. I hope it helps.
cheers
If you're talking about a framework-specific example, the biggest piece of advice I'll give you is keep the system and your application separate. This is why I love patterns such as Model-View-Controller - it keeps your code modular and means you can upgrade sections without breaking the app as an entirety.
On a more practical level, if your framework has a Git or SVN repository, check out the usual 'system' directory from the repo; then you can run 'svn update' occasionally to keep up with the latest and greatest builds.
I would suggest that the project not last that long. Develop the application in smaller pieces, with iterations every couple of months. That way, as new technology comes out, you can make the necessary changes and implement updates as you go, rather than having to decide whether to redevelop the whole application. As you say, trying to develop the whole application as things change just doesn't work.
As another poster said, it's certainly a case-by-case basis thing. What you can upgrade and when is determined mostly by how hard or easy it is to test the new version of the system. Having a comprehensive automated test suite for your application helps a lot with this.
Generally, I try to update to the latest stable release of libraries and so on as often as possible, because that makes maintenance easier. If you don't update, you may find yourself patching or working around bugs in the version of the library you are using. If you update less frequently, each update will be more work because you have more changes to deal with, and it's been longer since you last touched the system, and thus you remember less about it.