What to include in an application wiki?

What information would be best placed in a wiki?
This particular application is a rewrite from Classic ASP to ASP.Net Core.
I have gone over the deployment process and created a walkthrough of the existing and new app via screenshots (the application has around 15 pages in total).
What else would help whoever picks this project up next?

Although the question is a bit unclear and broad, I can add points from my notes about which pages are typically useful for almost any team:
Sprint retrospectives.
Version history: when you branch and when you release.
Production deployment state and history.
Software used by the team (Dev, QA, BA, whatever team members you have).
Onboarding pages.
Cheatsheets for commands (SQL scripts, git commands, bash commands, etc.).
Links to all resources (test servers, JIRA, wiki, CI servers, etc.).
Database schema pages.
Glossary.
Code conventions.
Naming conventions (UI naming, how to write domain-specific terms, etc.).
Team list: contacts, roles, and geographical locations.
Responsibility matrix: who is responsible for which project functionality.
Any hidden knowledge about the code.


How to correctly write feature driven architecture with shared features?

I am rewriting my monolithic application into a feature-driven architecture in Vue.js. Since we are hiring more people into our organization, we need to be able to assign developers to different features/domains.
We have created features directory where we have different domain endpoints as features. For example, “feature/medicines/active-substances”. In there we have the management of the substances; adding, creating, editing, displaying all relating brands etc.
Now our second feature is "features/hematology". We manage patients there, related documents, etc. But we need to use active substances, for example for filtering and for creating patient treatment plans. Our problem arises when we need to use active substances: the hematology feature should not use the services from "features/medicines/active-substances". If I used and edited some of the services or components from the active-substances feature, it would most likely break something for that team. This is just one example, but there are many modules like this which need to communicate at some level.
So, my question is: how do I solve this problem? Do I rewrite the whole logic again in the hematology feature, which destroys clean architecture, or do I import the needed services from the active-substances feature, which leads to high coupling?
There is not a lot of context here on your pipelines, whether you do TDD, or how you handle your dependencies.
Monolithic apps certainly have both downsides and upsides.
In my experience, now with Vite, a fat SPA is not a big problem whether or not it is a monolith.
This may not be a direct solution to your question, but here is an enhancement to your environment that would improve code isolation and task organization.
The way we structure projects is a root project that houses the Vite configuration and autoloads the other features, which we split into separate git repositories and then deploy as npm packages to an internal npm registry at our company. For private projects I use private npm packages.
There is a feature for this: library mode in the Vite ecosystem, which bundles your split project as an npm-ready package.
Then you can import your split project's exports in the root project as:
import { x, y } from '#org/features'
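A minimal library-mode configuration might look like the following sketch (the entry path, global name, and output file name are illustrative assumptions, not taken from the answer above):

```typescript
// vite.config.ts :: library-mode sketch; paths and names are illustrative.
import { defineConfig } from "vite";

export default defineConfig({
  build: {
    lib: {
      // Entry point of the split feature package (assumed path).
      entry: "src/index.ts",
      // Global name used for non-ESM builds.
      name: "OrgFeatures",
      // Emit ES modules so the root project can import named exports.
      formats: ["es"],
      fileName: "features",
    },
  },
});
```

Publishing the resulting bundle to the internal registry then lets the root project depend on it like any other npm package.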
Combined with the router's lazy-loading features in the same ecosystem, it becomes fast and manageable to have a frontend monolith that is well organized and performs efficiently.
With some basic experience you can also dynamically choose which libraries to load, import, and install into the Vue 3 ecosystem, depending on which features you use heavily.
So, before ditching the monolith entirely and heading to micro frontends, where you have to deal with cookie, CORS, and domain issues in every developer environment: consider these insights, reassess, and proceed.
If you choose the option suggested above, first decide on a unique NAMESPACE pattern.
Apply the pattern to router modules exported by feature packages.
Apply the pattern to store modules exported by feature packages.
Namespaces save you time and provide anchor points from which you can debug issues arising from different packages. They also let you isolate user access based on the namespaces a user has access to, e.g. state: user.access = ["DASHBOARD_READ", "TASKS_WRITE"]. You can then add a middleware to your Vue router that performs a basic check of whether the user can reach a route, or use CASL to grant access by iterating over the access list the backend attaches to the user's session.
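As a concrete illustration of that access check, here is a minimal sketch (the `User` shape, the permission strings, and the commented router guard are assumptions for illustration, not the actual enterprise code):

```typescript
// Hypothetical access-check helper; permission names are illustrative.
type User = { access: string[] };

// Grant access only when the user holds every required permission.
function canReach(user: User, required: string[]): boolean {
  return required.every((perm) => user.access.includes(perm));
}

// A vue-router middleware could call it roughly like this (sketch):
// router.beforeEach((to, _from, next) => {
//   const required = (to.meta.access as string[] | undefined) ?? [];
//   canReach(currentUser, required) ? next() : next("/forbidden");
// });

const user: User = { access: ["DASHBOARD_READ", "TASKS_WRITE"] };
console.log(canReach(user, ["DASHBOARD_READ"])); // true
console.log(canReach(user, ["ADMIN_WRITE"]));    // false
```

Keeping the check in a plain function like this makes it easy to unit test independently of the router.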
These practices are also testable, both in unit and e2e tests. We use them actively at enterprise grade, with no issues in speed, performance, or organization. Each feature or group of features has its own repo, its issues reach their respective authors, and it has its own tests to pass.

Can you Auto update branches from the Main trunk?

Here is the scenario.
We are developing a product where we have a base product and regional variations of it. We have all the common code checked into the main trunk, and we have created 2 branches (branch_us, branch_uk) for the variations off of the main trunk. Common code is constantly being checked into the main trunk, and the code being checked into branch_uk/branch_us depends on that common code. This is being done because we expect more regions to be added in future releases, and as a result we want maximum reuse as well as a thin regional-variations layer.
Based on the current strategy, a developer has to develop locally and then manually check in the common files into main_trunk and the regional variations into branch_uk & branch_us. Then every time code is checked into main_trunk, we have to perform a merge from main_trunk->branch_uk and main_trunk->branch_us before we can perform a build for branch_uk & branch_us (two separate deployments), because the new code in branch_uk/branch_us depends on the new common code in main_trunk. This model seems extremely painful to think about and unproductive.
I'm by no means an expert on TFS. Here is what I am seeking opinion on:
Is there a way TFS can dynamically pull changes into branch_uk/branch_us from the main_trunk without doing a manual merge after every check-in (in the main_trunk)?
Do you guys have any other recommendations on the code management process that might be more effective/productive than the current one?
Any thoughts and feedback will be much appreciated!
This seems like a weird architecture to me, but of course I'm coming at it from a position of almost total ignorance, so there might be a compelling reason to approach it that way.
That being said: It sounds to me like you don't have a single application with two regional variations, you have two separate applications that share a common ancestor. The short answer to your question is "No". A slightly longer answer is "No, but you could write code to automate it."
A more thoughtful question-answer is "Are you sure centralized version control is the right tool for the job?" It might be more intuitive to use Git for this. What you have are, in effect, a base repository and two forks of that repository. Developers can work against whatever fork makes sense, and if something represents a change that should apply to all localizations, open a pull request to have the change merged into the base repository. This would require more discipline on the part of the developers, since they would have to ensure that their commits are isolated such that they can open a pull request that contains just commits that apply to the core platform. Git has powerful but difficult history-rewriting tools that can assist. Or, of course, they could just switch back and forth between working on the core platform, then pulling changes from the core platform back up to the separate repositories. This puts you back to where you started, but Git merges are very fast and shouldn't be a big issue.
Either way, thinking of the localizations as a single application is your mistake.
A non-source-control answer might involve changing the application's architecture so that all localizations run off the same codebase, with locale-specific functionality expressed in a combination of configuration flags and runtime-discoverable MEF plugins; or making a "core" application platform that runs as an isolated service, with separately developed locale-specific services that express only deviations from the core platform.
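To make the configuration-flags idea concrete, a sketch along these lines might work (the locales, currencies, and VAT rates are invented for illustration):

```typescript
// One shared codebase; locale-specific behaviour lives in config, not branches.
type LocaleConfig = { currency: string; vatPercent: number };

// Illustrative per-locale settings (values are made up).
const locales: Record<string, LocaleConfig> = {
  us: { currency: "USD", vatPercent: 0 },
  uk: { currency: "GBP", vatPercent: 20 },
};

// Shared code consults the active locale's config instead of being
// duplicated into branch_us / branch_uk.
function priceLabel(locale: string, amount: number): string {
  const cfg = locales[locale];
  // Integer percentage math avoids floating-point drift.
  const total = (amount * (100 + cfg.vatPercent)) / 100;
  return `${cfg.currency} ${total.toFixed(2)}`;
}

console.log(priceLabel("us", 100)); // USD 100.00
console.log(priceLabel("uk", 100)); // GBP 120.00
```

Adding a new region then becomes a configuration entry (plus plugins for real behavioural deviations) rather than a new branch to merge into after every check-in.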

Web apps architecture: 1 or n API

Background:
I'm thinking about web application organisation. I will separate front (web site for browser) from back (API): 2 apps, 2 repository, 2 hosting. Front will call API for almost everything.
So, if I have two separate domain services behind my API (example: a learning context and a booking context) with no direct link between them, should I build 2 APIs (with 2 repositories, 2 build processes, etc.)? Is it good practice to build n APIs for n needs, or one "big" API? I'm speaking about a substantial web app with traffic.
(I hope this question will not be closed as not constructive... I think it's a real question for a concrete case; sorry if not. This question and some others about architecture were not closed, so there is hope for mine.)
It all depends on the application you are working on, its business needs, priorities you have and so on. Generally you have several options:
Stay with one monolithic application
Stay with one monolithic application but decouple domain model across separate modules/bundles/libraries
Create distributed architecture (like Service Oriented Architecture (SOA) or Event Driven Architecture (EDA))
One monolithic application
It's the easiest and cheapest way to develop an application in its beginning stage. You don't have to worry about complex architecture or complex deployment and development processes. It also works well when there are not many developers around.
Once the application grows, this model becomes problematic. You can't deploy modules separately, and the app is more exposed to anti-patterns and spaghetti code/design (especially when a lot of people work on it). The QA process takes more and more time, which may make it unusable on a CI basis. Introducing approaches like Continuous Integration/Delivery/Deployment is also much harder.
Within this approach you have one repo/build process for all your APIs.
One monolithic application but decouple domain model
Within this approach you still have one big platform, but you treat logically separate modules as third-party dependencies. For example, you may extract one module and create a library from it.
Thanks to that you are able to introduce separate processes (QA, dev) for different libraries, but you still have to deploy the whole application at once. It also helps you avoid anti-patterns, but it may be hard to keep backward compatibility across libraries over the application's lifespan.
Regarding your question: in this way you have a separate API, dev process, and repository for each "type of action", as long as you move its domain logic into a separate library.
Distributed architecture (SOA / EDA)
SOA has a lot of benefits. You can introduce completely different processes for each service: dev, QA, deployment. You can deploy just one service at a time. You can also use different technologies for different purposes. The QA process gets more reliable as it involves smaller projects. You can version the communication (API) between services, which makes them even more independent. Moreover, you have a better ability to scale horizontally.
On the other hand, the complexity of the high-level architecture grows. You have many more components to take care of: authentication/authorisation between services, security, service discovery, distributed transactions, etc. If your application is data-driven (a separate frontend which uses APIs to consume data) and particular services don't need to communicate with each other, it may not be as complicated (but such an assumption is IMO quite risky; sooner or later you will need them to communicate).
In this approach you have a separate API, with separate repositories and separate processes, for each "type of action" (which I understand as a separate domain model / service).
As I wrote at the beginning, the way you choose depends on the application and its needs. Anyway, back to your original question: my suggestion is to keep APIs as separate as you can. Even if you have one monolithic application, you should be able to version the APIs separately and keep their domain logic separate. Separating repositories and/or processes depends on the approach you choose (e.g. among those mentioned above).
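As a tiny sketch of "separately versioned APIs inside one monolith" (the routes, version numbers, and payloads here are invented for illustration):

```typescript
// One monolithic HTTP server exposing two bounded contexts,
// each under its own independent version prefix (paths are illustrative).
import * as http from "http";

const routes: Record<string, () => string> = {
  // The learning context is already on v2...
  "/learning/v2/courses": () => JSON.stringify({ courses: [] }),
  // ...while the booking context is still on v1; they version independently.
  "/booking/v1/reservations": () => JSON.stringify({ reservations: [] }),
};

// Pure dispatch function: easy to unit test without starting the server.
function handle(url: string): { status: number; body: string } {
  const handler = routes[url];
  return handler
    ? { status: 200, body: handler() }
    : { status: 404, body: JSON.stringify({ error: "not found" }) };
}

const server = http.createServer((req, res) => {
  const { status, body } = handle(req.url ?? "/");
  res.writeHead(status, { "Content-Type": "application/json" });
  res.end(body);
});
// server.listen(3000);
```

Splitting into two services later then mostly means moving one context's routes behind its own server.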
If I missed your point, please describe in more detail what answer you expect.
Best!

In your understanding, what is the scope of a Toolsmith in software development?

I've been asked to maintain a few of our software development tools: ClearCase (views, streams, triggers), JIRA, Enterprise Architect, and various document-based repositories (Confluence, DokuWiki, etc.).
But I wonder where the line is when it comes to "best practices" and cleanup efforts to make the most of these tools for development teams.
From a sysadmin perspective it's quite easy: make sure you maintain the application, the network, and the backup system. But when it's development support, the term "Toolsmith" I believe sits halfway between a lead developer and a system administrator.
Basically I would view this as someone in charge of proposing a framework enabling the developer to not see those tools, but only the process he/she needs to follow:
declare a new task (behind the scene: open a JIRA ticket)
associate it with a project (behind the scenes: attach a UCM snapshot view to a Stream, download the code)
document the task (new page in the wiki)
...
The administration of those tools is still there, but a bit of development is needed to help the users (developers) in their daily process without them constantly having to think about the specificities of each tool.
If they wanted you to be a lead developer, they wouldn't have given you the title "Toolsmith". I would view it as a specialized system administrator, or, more to the point, an application administrator. You might have some additional development responsibilities, but being in charge of maintaining the tools probably doesn't confer any responsibility over how those tools are used.
Just my $0.02.

Where to find good examples or Templates for Configuration Management Plans?

Documentation is not a developer's favorite area, but it is important if you want to have standards in the organization. We are trying to put together a new Configuration Mgmt Plan to set up change controls, backup strategies, and other fun things, like the process from development through staging to production.
I would like to have your opinions on good examples, or probably a good start, for the CMP process.
If you would like a list of the items in a software configuration management plan, this link provides an example: http://www.scmwise.com/software-configuration-management-plan.html
However, please note that the content of an SCM plan is highly dependent on your company standards and your software development process itself.
I usually refer to cmcrossroads.com for such information. The site is not well organized but has lots of info. Another very useful resource is nasa.gov (I know).
Rather than me listing out the index of a CM plan, I would recommend you to check this link out: http://www.nasa.gov/centers/ivv/pdf/170879main_T2401.pdf
Some quick pointers for setting up CM process:
You absolutely need management's/the corporation's backing. Without them pushing, no one will adhere to the process.
SCM is like police/postal service. No one remembers them until something goes wrong. In your case, it is a good sign if no one talks (complains) about your implemented SCM process.
Open source SCM systems are on par with the others. Depending on the intensity of your project, you may have to do several POCs to determine which system suits your needs.
This is a vast topic; I would recommend Alexis Leon's book if you are stuck.
All kinds of software-development-related documents are available as part of the Rational Unified Process (RUP). You can find them at:
Configuration management Plans
RUP templates