Should I use a single framework codebase for multiple sites or one for each site?

Here's the situation:
Multiple sites are built using a certain framework.
All sites are hosted on the same server.
Each site has its own database.
Since all the sites are hosted in the same location and they all have the framework code in common, I could easily install the framework once on the server and have each site use the same files as its framework code.
This way I only have one framework installation that is used by all the websites.
The second option is to have each site work with its own installation of the framework.
Pros and cons of option 1:
Only have to maintain one framework codebase for multiple websites.
Framework updates instantly apply to all the websites.
If one site has different framework needs, or has code that's no longer compatible with the latest framework version, custom framework compatibility patches become necessary. (There is not always time or budget to keep legacy projects compatible with the latest framework version.)
Pros and cons of option 2:
A separate framework installation to maintain for each site.
Framework updates have to be applied separately for each site.
If one site has different framework needs, or there is no budget to make it compatible with the latest framework update, we simply don't update that site's framework installation.
If it's really necessary, a site can quickly modify its own copy of the framework to match its needs without interfering with other sites on the server.
So option 1 seems easier to maintain, while option 2 is much more flexible. I don't know which matters more.
Which of the two options is the best overall choice? Or are there other options?

I approach this slightly differently.
I'd have a dir structure like this:
/sites/
    /site1/
    /site2/
    [etc]
/framework/
    /1.0/
    /1.1/
    /1.x/
    [etc]
Then within each site have a /framework mapping which points to the version of the framework that site is using. That way each site's framework version is independent of the others: if one site needs a different framework version than the others (stuck on an old version, or needing a new version before the other sites have been tested with it), then you have control over that level of granularity. Equally, changing the codebase in /framework/nightly/ (for example) will "automatically" update all sites whose /framework mapping points to that bleeding-edge version of the codebase.
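On a typical server that /framework mapping can simply be a symlink that you switch per site when a site needs to move to another framework version. A minimal Node.js sketch of the idea, assuming hypothetical paths such as /framework/1.0 and /sites/site1/framework:

import * as fs from "node:fs";
import * as path from "node:path";

// Point a site at a specific framework version by (re)creating its
// /framework symlink. All paths here are hypothetical examples.
function setFrameworkVersion(siteDir: string, versionDir: string): void {
  const link = path.join(siteDir, "framework");
  fs.rmSync(link, { force: true });  // remove any existing mapping
  fs.symlinkSync(versionDir, link);  // e.g. /sites/site1/framework -> /framework/1.0
}

// site1 stays on 1.0, site2 follows the bleeding-edge build.
setFrameworkVersion("/sites/site1", "/framework/1.0");
setFrameworkVersion("/sites/site2", "/framework/nightly");

Switching a site to a newer framework version is then a one-line change that does not touch any other site.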

Personally, I will always have each site use its own framework codebase, or at least a shared codebase frozen at a set version. The problem with sharing the framework is that with each update you'd have to test every site using that shared codebase to ensure it still works as expected. And what happens when the framework deprecates a feature you use in one of your sites? It could prevent you from updating, leaving you open to security issues.
Option 2 does give you more maintenance overhead, but it's the safest approach in my opinion.

It depends on a couple of factors.
How many developers are working on the sites? Is the framework well supported and documented? Is there a better framework available (better in terms of documentation, overhead/memory footprint, and community support)? Which choice would look better 12 months or more down the road?
Choice 1 is appealing for its consistency and familiarity.
Choice 2 is appealing for its potential, learning and future growth.

Related

How to test/debug cross-platform desktop apps (Windows, macOS) with limited resources

I am trying to build a desktop app.
I am thinking of using Electron on the recommendation of a web-developer friend of mine, but as I am the sole developer, I don't have the means to test the software on different platforms (OS, hardware, etc.). So I am anticipating that testing and debugging the software on different platforms and operating systems will become a problem later.
I have ruled out web apps because of users' privacy concerns about remote data hosting.
The software is pretty lightweight and is almost equivalent to an image viewer app with some slight modifications.
How do I handle the variations between the different platforms?
Any literature suggestions pointing me in the general direction are also welcome.
Sometimes it helps to think of Electron as two processes.
The renderer process versus the main process. Generally the renderer process, which runs the HTML/CSS/JS, is its own isolated component, and you communicate with the main process using IPC.
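As a rough sketch of that split (the channel name, paths, and the window.api name below are hypothetical examples, not from the original answer): the renderer asks the main process to do OS-level work over IPC, and the main process answers.

// main.ts (main process): owns OS access and answers IPC requests.
import { ipcMain } from "electron";
import { writeFile } from "node:fs/promises";

ipcMain.handle("save-note", async (_event, text: string) => {
  // Hypothetical example: write data to disk on behalf of the renderer.
  await writeFile("/tmp/note.txt", text, "utf8");
  return true;
});

// preload.ts: the only bridge the renderer sees when contextIsolation is on.
import { contextBridge, ipcRenderer } from "electron";

contextBridge.exposeInMainWorld("api", {
  saveNote: (text: string) => ipcRenderer.invoke("save-note", text),
});

In the renderer, window.api.saveNote("hello") is then ordinary web code, which is exactly why normal web testing tools cover most of the app.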
So generally for the UI, you can use almost any web-based testing framework to test reliability. At Amna, for example, we use Cypress as our E2E testing platform. You can also use something like QAWolf. Both should work with localhost. In general, most website testing tools should work fine, and consistently across platforms.
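For instance, a minimal Cypress spec (the URL and selector are placeholder assumptions) that exercises the renderer's page like any other web app:

// image-viewer.cy.ts - runs against the renderer served on localhost.
describe("image viewer UI", () => {
  it("loads and shows the toolbar", () => {
    cy.visit("http://localhost:3000");            // dev server hosting the renderer
    cy.get("[data-testid=toolbar]").should("be.visible");
  });
});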
Where this gets tricky is when UI functionality makes a call to the OS or the main process - for example, saving to disk or launching a program.
The general flow is this, and I've yet to find radically simpler options:
Set up a VM, or buy a machine with the corresponding OS. I used Spot VMs in Azure for this.
Manually test the scenarios you care about in each VM before you ship.
If you have a lot of cases that rely on the OS, then you should be able to further optimize this by using an automated test runner like Spectron.
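For what it's worth, a Spectron test is just a Node test that launches your built app and drives it; a minimal sketch (the binary path and the assertion are placeholders, and note that Spectron has since been deprecated upstream):

// smoke.test.ts: launch the packaged app and check that a window opens.
import { Application } from "spectron";
import * as assert from "assert";

async function smokeTest(): Promise<void> {
  const app = new Application({
    path: "/path/to/your/packaged/electron-binary",  // placeholder path
  });
  await app.start();
  const windowCount = await app.client.getWindowCount();
  assert.strictEqual(windowCount, 1);                // the main window opened
  await app.stop();
}

smokeTest().catch((err) => {
  console.error(err);
  process.exit(1);
});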
From experience, what I've realized is that most of my iterations happen on the UI rather than on the underlying functions with cross-platform capabilities. And if your code has good separation (e.g. contextIsolation: true, nodeIntegration: false), it should be pretty obvious when you need to do an entire "cross-platform" test versus just UI tests.
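Those two settings live where the main process creates its window; a minimal sketch (the window size and preload path are placeholder values):

// main.ts: isolate the renderer from Node so UI tests stay "just web" tests.
import { app, BrowserWindow } from "electron";
import * as path from "node:path";

app.whenReady().then(() => {
  const win = new BrowserWindow({
    width: 1024,
    height: 768,
    webPreferences: {
      contextIsolation: true,                       // renderer gets no direct Node access
      nodeIntegration: false,                       // OS work must go through IPC
      preload: path.join(__dirname, "preload.js"),  // the bridge exposed to the page
    },
  });
  win.loadFile("index.html");
});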
I'm not familiar with a lot of large-scale Electron testing frameworks, but I do know that ToDesktop handles package building and generating binaries to perform a smoke test and verify things open across different operating systems.
It depends.
The answer depends on what you are building, so it makes sense to figure out what you actually want to build. Some questions you might ask yourself:
Do I need a database?
Do I need authentication?
Do I need portability?
Do I need speed to market?
Do I want to pick a language I'm familiar with?
These are all good questions and there are dozens more we all ask ourselves. However, back to your original question.
Electron is a fine choice
Yes, there are alternatives. But Electron is used for Visual Studio Code, Facebook Messenger, Microsoft Teams and Figma. Choosing Electron means there are other developers making apps and there are proven apps in the market so you don't have to worry about a dead ecosystem.
Electron is easy to onboard with if you know web technologies - think JS, HTML, and CSS. If you know these, you can transfer your web-dev knowledge and make a cross-platform app. You don't have to worry about learning each OS, since the UI is the webpage, which will look mostly* the same on each OS. (*Some very minor differences, but essentially the same.)
Cross-platform deployment is easy
There are a few ways of bringing your app to multiple platforms; I happen to be most familiar with electron-builder, but the other two solutions work as well.
Many templates to start with
I am biased, since I'm the author of secure-electron-template, which is one of the many templates you can choose from when starting an app. However, I recently reviewed all Electron templates and found that only four do not have serious security vulnerabilities.
The Electron framework is updated frequently, and over the past few years there has been a shift in the way Electron apps are made. Some earlier frameworks didn't have good secure defaults, which some of the older Electron templates inherited, and thus they aren't as secure as newer templates that follow security guidelines.
If you decide on Electron, give my template a try. It has a number of features I'm building out in order to help the community with features they might want (i.e. internationalization (i18n), saving local data, custom context menus, page routing, E2E unit testing, and license key validation, to name a few).

How to use a shared library in ASP.NET Core MVC running on IIS

I'm looking into using ASP.NET Core MVC for some of my new projects. I work on a team of developers for a very large organization, and we each individually write a lot of small web apps. Due to the size of our organization, we have a lot of rules that we have to follow, and sometimes those rules change, completely out of our control. So this is what we have used in past projects, all running on IIS:
ASP Classic - Each IIS root folder has a shared folder, containing a lot of commonly used .asp files. These files are mostly the same on each server, but can point to different databases for dev/test/prod environments. These library files are used for common things like authentication, authorization, encryption, sending emails, etc... Each application would be in a sibling folder to the shared folder, and include files like "..\shared\library.asp"
ASP.Net / MVC - The closest thing we could find was the GAC. Everybody says not to use the GAC, but for our purposes it does exactly what we need. We built a DLL library, and store it in the GAC of each web server. We then put local configuration (dev/test/prod environment specific stuff) information on the global web.config of each IIS server. Application specific information would be stored in that application's local web.config file.
The beauty of these two systems is that when things change, we can simply go update the global libraries, and every application that depends on them will adapt to the new code without needing a recompile. We have many applications running on many web servers. This may not be ideal, but for our needs it works perfectly, considering the rules can change at a moment's notice and recompiling every application would be a huge ordeal. We just have to be sure never to introduce breaking changes into our libraries, which is simple enough. We have zero problems with how it works.
Now, on to ASP.Net Core. Is there an elegant way to do this? It seems like Core doesn't support the GAC, nor does it support web.config. Everything wants to use appsettings.json. Is there a way to create an appsettings.json at the root level of IIS, and have it set global variables like environment="dev", authdatabase="devsql" etc? And can we store a .Net Core/Standard DLL in a shared folder, and have every app load it with a path like "..\shared\library.dll"? The closest thing I could find to do this with .Net framework was the GAC, but I'm not really finding any answers for this with Core. I appreciate any help, thanks!
sometimes things change, and we can simply go update the global libraries, and every application that depends on them will adapt to the new code without needing a recompile
Note that this is exactly one of the reasons why GAC deployment is usually avoided. If you update a dependency and it happens to contain a breaking change (in any possible case), then applications will start to break randomly without you having any control over it.
Usually, if you update a dependency, you should have to retest every application that depends on that before you deploy the updated application. That is why dependency updates (e.g. via NuGet) are deliberate choices you need to make.
.NET Core avoids this in general by never sharing assemblies between applications and by allowing different versions side-by-side. That way, you can update applications one by one without affecting others.
This is actually a primary reason why .NET Core was made in the first place: The .NET Framework is shipped with Windows, and is a global thing. All applications will always use the same framework version. So whenever Microsoft ships an update to the .NET Framework, they have to be incredibly careful not to break applications. And that is incredibly difficult because countless applications depend on all kinds of things in the framework. Even fixing a possibly obvious bug can break stuff.
With .NET Core and side-by-side dependencies, this is no longer a problem because updates will not automatically break applications that still depend on older versions. It is a developer’s explicit choice to update an application, shipping newer dependencies.
So you should actually embrace this and start to develop your applications independently. If you have common dependencies, consider creating (private) NuGet packages for those, so that applications can depend on them and so that you have a good way to update them properly.

SOA application installation best practice: single installer or a few separate MSIs?

I have an application which consists of 2 web sites, 3 Windows services (2 of them hosts WCF services) and 3 SQL databases.
It's likely that the whole system will be installed on one or two servers, but it's also possible that some services will be installed on separate machines if they seriously affect performance.
I need to create a WiX installer for all this stuff. The thing is, I'm not sure whether it would be better to create a single, complicated installer that can install any set of components (only some services may be installed), or a few separate installers.
The single-installer approach has an important advantage if the whole product is installed on the same machine: we only need to show the user a few dialogs, and we can set up the connection configuration between components automatically. But of course, developing such an installer is very complicated: wizard pages have to be shown based on the features currently being installed on the machine. There will be many conditions, and testing them all will be hell.
The multiple-installer approach is easier to implement and test. But of course, it will be harder for the user to install the product - they need to enter the same service installation URLs several times (once per installer) to set up the connection configuration to the WCF services.
I searched for best practices in this area, but it looks like there is not a lot of information. Can anybody suggest which way is more appropriate?
If you design it correctly, you can switch back and forth between the two approaches. Personally, I've used InstallShield, which has a feature called RELEASE flags and Product Configurations. It's basically a way of defining multiple builds that pull in different features and apply different branding (ProductName, ProductVersion, ProductCode, UpgradeCode, INSTALLDIR, and so on). This is how you'd make, say, multiple single-product installations and then build them again as a product family suite. This pattern applies very much to SOA. The same thing could be done in WiX using preprocessor statements to control compilation.
I once worked for a company that made an SOA-based server product. The system had 26 installers in 5 categories (Win Client, Mobile Client, Service Layer, Web UI Layer, and SSRS Reports). I did each as its own installer, and it really helped cut down on the complexity. Each service had its own database, which made writing the SQL scripts to create and update the DB a lot easier. It also cut down on the amount of configuration each installation had to do, since it only had to know how to set itself up and the other piece it communicated with.
The bootstrapper to bring all these pieces together may or may not be needed. Part of the reason we choose SOA is to have complete control over how to scale the system out. That type of flexibility (26 pieces on 1 server, 1 piece on 26 servers, 2 tiers, 3 tiers, n tiers) makes it difficult to encapsulate in a simple wizard UI.
I would go with multiple installers bundled into a single big installer; this allows you to separate and test them easily. The purpose of the bundle installer is to capture the data from the user a single time and pass it to all the installers selected for installation.
You can do this using the Burn support in WiX, or with other setup authoring tools (which provide a GUI, allowing you to create the projects and dialogs much more easily). WiX is a very good tool, but the time you need to create the projects is considerable compared with the cost of a license for a good setup authoring tool that provides a drag-and-drop IDE to design dialogs and chain the installation of the separate installers.
EDIT
Passing values from the bootstrapper to the background installers is very easy; you don't need to write any custom code. This is handled directly by the setup authoring tool. All you need to do is make sure the tool is capable of creating Windows Installer-compliant packages. This way, you can pass the user's selections to the background installers as property values on the installers' command lines.
The bootstrapper installer should not register with Windows Installer, so it does not appear in Control Panel as a separate application; there you should see only the separate background installers.
I'm providing a second answer to give a different perspective from my first. The Microsoft Team Foundation Server installation is a single installer for the different roles. You can scale out the various layers of the system, but the install is all the same: install it all.
What they have chosen instead is to invest in a role configuration wizard that gets executed post-installation to activate a given installed feature and configure its settings and those of the next layer it needs to communicate with.
This is a very good way to go also.

Will Embarcadero RadPHP XE2 scale to an e-commerce site?

In a nutshell: is RadPHP a toy? Or can you build real web sites with it, such as an e-commerce/shopping cart app, that will:
Support hundreds of simultaneous users on a reasonably good web server, like any other PHP app
(My specific concern is that the RPCL library might be bloated and inefficient.)
Be easy to assign the CSS hooks and integrate CSS files supplied by designers
Be as easy as 'plain' PHP programming when talking to external sites such as payment gateways
Easily integrate third-party components (JavaScript and PHP), e.g. Lightbox and CKEditor
I am coming from a Delphi background, not PHP, so please excuse my ignorance and my trouble evaluating RadPHP XE2's potential as an easier way to transition to web development without sacrificing the potential to scale.
It has a demo app created for osCommerce, the well-known open-source e-commerce app.
Yes.
No apparent barrier.
It already has components integrating third-party stuff such as Zend, qooxdoo, jQuery, etc.
I'm also coming from a Delphi background with almost no PHP. Currently I'm developing a Prado-framework-based ERP application using Eclipse as the IDE. In my leisure time I'm toying with RadPHP, and I think we could have used it just as well as the Eclipse/Prado kit, but I'm in no position to make that decision. In my experience RadPHP is developing well into form. The first releases/versions were really sluggish, but XE2 looks solid. If VCL for PHP is fine-tuned for performance in future releases, RadPHP will have better days.

Organising frameworks for client, server, and plugins

I am working on a large project that comprises a server, some plugins for controlling devices that can be loaded into the server process, and also clients that can connect to the server. I am looking for the best practice for structuring the framework(s) that will ultimately need to exist.
There are header files that are shared between client, server, and plugins, and some headers that are specific to each aspect of the system. Sometimes headers are only shared between, say, client and server, or server and plugin. Similarly, there is common code that can be shared between all three aspects of the project, and also code that is only needed by one particular aspect.
When the project is finished, we will need to release a client application and plugin developer API for third parties to develop against.
I am not sure how to properly structure the Frameworks that would be needed to support this.
Do I need to have 2 separate frameworks? Or can I have 1 framework that includes all the headers and provides 2 separate dylibs?
If I need to have 2 separate frameworks, what do I do with header files that are shared between all aspects of the system? I do not want to copy them into each framework to avoid possible problems with versioning.
Would a 3rd headers-only Framework be a reasonable option?
Can anyone recommend a best practice for structuring this kind of thing with Frameworks on OS X?
Framework = library + headers for that library
Each framework only needs to include header files for the interfaces you want to expose. Even if common headers were used to build all three frameworks, you are not obliged to bundle them.
A three-framework approach would be fine, even if one of the frameworks bundles only common headers and no library at all. For example, if you install Qt on your Mac, you will see it split across many frameworks, but headers are never duplicated among them. There are also some frameworks that contain only headers and no code (QtScript.framework, for instance).