We've recently started implementing web components in our app; the hope is that these components can be shared with different clients in the future. My primary goal is to keep this process as simple as possible so that maintenance won't be too much work. We started off with one web component per repo, but it dawned on me that we could potentially end up with hundreds of core repos multiplied by the number of clients, which seems crazy to maintain and full of overhead. I'm just not sure that a very granular repo structure is the best approach, and I don't know how to maintain such a setup.
Option 1: One component per repo
Option 2: One feature per repo
Option 3: One repo per client
I need to find the right balance between flexibility and the operational maintenance cost of each option.
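For concreteness, option 2 is the direction I keep coming back to: one repo per feature that exports a handful of related custom elements, with client differences driven by attributes or properties rather than separate repos. A rough sketch of what I imagine (element names and attributes are made up):

```typescript
// Rough sketch of a "feature" package exposing several related custom elements.
// Element names and the "theme" attribute are made up for illustration.

class OrderSummary extends HTMLElement {
  connectedCallback(): void {
    // Client-specific behaviour driven by an attribute instead of a fork or branch.
    const theme = this.getAttribute("theme") ?? "default";
    this.innerHTML = `<div class="order-summary ${theme}">Order summary</div>`;
  }
}

class OrderHistory extends HTMLElement {
  connectedCallback(): void {
    this.innerHTML = `<div class="order-history">Order history</div>`;
  }
}

// The whole "orders" feature ships from one repo/package and registers all
// of its elements in one place.
export function registerOrderComponents(): void {
  if (!customElements.get("acme-order-summary")) {
    customElements.define("acme-order-summary", OrderSummary);
  }
  if (!customElements.get("acme-order-history")) {
    customElements.define("acme-order-history", OrderHistory);
  }
}
```

Clients would then depend on one package per feature instead of one per component.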
I'm studying Vue and Vuex. The official documentation has a simple example of using Vuex together with saving data to localStorage.
To better understand the material, I decided to put the knowledge into practice and write a mini application: a Trello clone (SPA).
Namely:
Create three routes (a rough route sketch follows this list):
General dashboard (/dashboard), where the boards live
Board (/board), containing one or several columns; each column has a button for creating a task in it
Task (/:task-id); tasks live in columns and can be moved between columns
A sidebar displaying all notifications for the board (CRUD on tasks and columns, changes in task status, and so on)
Sockets, so that other users can see changes on the board in real time.
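The routes I have in mind, roughly (Vue Router 3 style; the components here are just placeholders for the real single-file components):

```typescript
// Rough route sketch (Vue Router 3); the inline components are placeholders.
import Vue from "vue";
import VueRouter, { RouteConfig } from "vue-router";

Vue.use(VueRouter);

const Dashboard = { template: "<div>dashboard: list of boards</div>" };
const Board = { template: "<div>board: columns and tasks</div>" };
const Task = { template: "<div>task details</div>" };

const routes: RouteConfig[] = [
  { path: "/dashboard", component: Dashboard },
  { path: "/board/:boardId", component: Board },
  // a task is opened in the context of its board, e.g. /board/1/task/42
  { path: "/board/:boardId/task/:taskId", component: Task },
];

export const router = new VueRouter({ mode: "history", routes });
```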
Questions!
1. What data should I store exclusively in the Vuex store? (Excluding authorization data, which is obvious.)
2. For what data in this application would localStorage be useful?
3. What should I use so that data is not lost when I refresh the page or navigate? I could use localStorage, but hypothetically there can be a lot of data. The fourth question follows from this.
4. Would it be better to use persistent remote storage on a server / in the cloud? If so, could you point me to information on how to do this? In that case I'm also interested in the database interaction: at what point is it best to save data to the database?
I'm interested in how to build such an application properly, as in a real commercial application.
I'm using and learning the MEVN stack.
1 - You can store any type of data in your store.
2 - I don't think it is useful, because if users clear their browser cache, all of that data will be lost. So you need to set up a database for this.
3 - You need a database and some backend to provide your data.
4 - It depends. If you only need it for development, you can install everything on your own machine. If you need something more robust, you could get a cloud server, but configuring the server requires some system administration skills.
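As an example for points 2 and 3, here is a minimal sketch of a Vuex plugin that keeps the board state in localStorage between refreshes (Vuex 3 style; the state shape, mutation and storage key are made up). The same subscribe hook is where you would call your backend API instead of, or in addition to, localStorage:

```typescript
// Minimal sketch (Vuex 3 style). The state shape, mutation name and the
// localStorage key are made up; a real app would call a backend API here
// instead of (or in addition to) localStorage.
import Vue from "vue";
import Vuex, { Store } from "vuex";

Vue.use(Vuex);

interface BoardState {
  columns: { id: string; title: string; taskIds: string[] }[];
}

const STORAGE_KEY = "trello-clone-board";

// Vuex plugin: runs after every mutation and persists the new state.
function persistBoard(store: Store<BoardState>): void {
  store.subscribe((_mutation, state) => {
    localStorage.setItem(STORAGE_KEY, JSON.stringify(state));
    // To survive a cleared cache, send the state to the server as well, e.g.
    // fetch("/api/board", { method: "PUT", body: JSON.stringify(state) });
  });
}

const saved = localStorage.getItem(STORAGE_KEY);

export const store = new Vuex.Store<BoardState>({
  state: saved ? JSON.parse(saved) : { columns: [] },
  mutations: {
    addColumn(state, title: string) {
      state.columns.push({ id: Date.now().toString(), title, taskIds: [] });
    },
  },
  plugins: [persistBoard],
});
```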
Currently we have TFS and use its feature of only allowing certain users to check in after review and testing. Our company is adopting a DevOps model and moving towards Atlassian Stash, and that tool doesn't have this feature readily available. Has anyone implemented it?
I'm assuming you mean Atlassian Stash, not git stash?
If so, you can use branch permissions to enforce a workflow that only allows certain users to write to a branch (such as master), and/or to allow changes only via pull requests.
Combined with pull request settings that require a minimum number of approvals you can achieve a strict change management workflow as desired.
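If you prefer to script it rather than click through the repository settings, there is a REST resource for branch permissions. A rough sketch follows; the exact endpoint path, payload shape and auth scheme are assumptions to verify against the REST documentation for your Stash / Bitbucket Server version:

```typescript
// Rough sketch: restrict direct pushes to master so changes must arrive via
// pull requests. Host, project key and repo slug are hypothetical; the
// endpoint and payload should be checked against your server's REST docs.
const baseUrl = "https://stash.example.com"; // hypothetical host
const project = "PROJ";                      // hypothetical project key
const repo = "my-repo";                      // hypothetical repo slug

async function restrictMaster(authToken: string): Promise<void> {
  const res = await fetch(
    `${baseUrl}/rest/branch-permissions/2.0/projects/${project}/repos/${repo}/restrictions`,
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        // Older Stash versions may only support basic auth here.
        Authorization: `Bearer ${authToken}`,
      },
      body: JSON.stringify({
        type: "pull-request-only", // block direct writes, allow merges via PRs
        matcher: {
          id: "refs/heads/master",
          displayId: "master",
          type: { id: "BRANCH", name: "Branch" },
          active: true,
        },
        users: [],  // optionally exempt specific users
        groups: [],
      }),
    }
  );
  if (!res.ok) throw new Error(`Failed to add restriction: ${res.status}`);
}
```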
Now, what I just said does require slightly different thinking. Your wording implied that review and testing should happen before check-in. With git, people can and should work on branches that are committed and pushed to the central repository; it is before the merge that review and testing take place.
As an aside, these features aren't exactly pre-requisites for a "devops model" (some may in fact argue the opposite), but I can see how judicious use of workflow settings (branch permissions and pull request settings) could play a part.
Disclosure: I work for Atlassian
I've deployed a single micro-instance redis on compute engine using the (very convenient) click-to-deploy feature.
I would now like to update this configuration to have a couple of instances, so that I can benchmark how this increases performance.
Is it possible to modify the config while it's running?
The other option would be to add a whole new redis deployment, bleed traffic onto it over time and eventually shut down the old one. Not only does this sound like a pain in the butt, but I also can't see any way in the web UI to click-to-deploy multiple clusters.
I've only got my learner's license with all this, so I would also appreciate any general 'good-to-knows'.
I'm on the Google Cloud team working on this feature and wanted to chime in. Sorry no one replied to this for so long.
We are working on some of the features you describe that would surely make the service more useful and powerful. Stay tuned on that.
I admit that there really is not a good solution for modifying an existing deployment to date, unless you launch a new cluster and migrate your data over / redirect reads and writes to the new cluster. This is a limitation we are working to fix.
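If you do go the migration route, one common pattern is to dual-write to both clusters for a while and read from the new one with a fallback to the old, then cut over once the new cluster is warm. A rough sketch (using the ioredis client; the host names are placeholders):

```typescript
// Rough migration sketch: write to both clusters, read from the new one and
// fall back to the old. Host names are placeholders; uses the ioredis client.
import Redis from "ioredis";

const oldRedis = new Redis({ host: "redis-old.internal", port: 6379 });
const newRedis = new Redis({ host: "redis-new.internal", port: 6379 });

export async function set(key: string, value: string): Promise<void> {
  // Dual-write so the new cluster fills up while the old one stays authoritative.
  await Promise.all([oldRedis.set(key, value), newRedis.set(key, value)]);
}

export async function get(key: string): Promise<string | null> {
  // Prefer the new cluster; fall back to (and backfill from) the old one.
  const fresh = await newRedis.get(key);
  if (fresh !== null) return fresh;

  const stale = await oldRedis.get(key);
  if (stale !== null) await newRedis.set(key, stale);
  return stale;
}
```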
As a workaround for creating two deployments using Click to Deploy with Redis, you could create a separate project.
Also, if you wanted to migrate to your own template using the Deployment Manager API https://cloud.google.com/deployment-manager/overview, keep in mind Deployment Manager does not have this limitation, and you can create multiple deployments from the same template in the same project.
Chris
How should we best handle code that is part of a single Rails app, but is used in several different "modes"?
We have several different cases of an app that is driven from the same data sources (MySQL, MongoDB, SOLR) and shares core logic, assets, etc. across multiple different uses.
Background/details:
HTML vs REST API
A common scenario is that we have HTML and REST interfaces. These differences are handled through routing (e.g. /api/v1/user/new vs /user/new) -- with minor differences they provide the same functions. This seems reasonably clean to me.
Multi-tenant
Another common scenario is that the app is "multi-tenant", determined mainly by the subdomain of the URL, e.g. partner1.example.com and partner2.example.com (or a query-string parameter for API customers) -- each has a number of features or properties that differ. This is handled by a filter in ApplicationController, using data largely stored in a set of tenant-specific database tables, with tenant-specific functionality encapsulated in methods. This also seems reasonably clean to me.
Offline Tasks
One scenario is that a great deal of the data is acquired through a very large number of tasks, running pretty much continuously: feed loaders, scrapers, crawlers, and other tasks of this sort ... the kinds of things you would find in a search engine, which is a large part of what we do. These tasks are launched on idle server instances and run periodically ... but are just rake tasks that are part of the app.
These tasks are characteristically different than our front-end code -- they update data, run calculations, do maintenance tasks and so on -- some tasks run for days (e.g. update 30M documents from an external web service). In the end, these tasks create and keep fresh the core data that our front end app uses.
This one doesn't seem as clean to me; in particular, in some cases these tasks are running and doing data updates at the same time as our application is using that data, so they occasionally need to defer to the front-end app when we're under peak load.
Major Variants of the App
This last case is clearly wrong -- we have made major customizations of our app -- 15% or 20% different, by making branches and then running as an entirely separate app, sharing some of the core data sometimes, but using some of its own data other times. We have mostly fixed this now, as it was, of course, untenable.
OK, there's a question in here somewhere, right?
So in particular for the offline tasks I feel like the app really needs to be launched in a "mode" or perhaps "sub-environment". But we still have normal development, test, qa, demo, pre_release, production environments that have their own isolated data and other configuration parameters. For each of these, we want to be able to run, develop, test and deploy the various "modes" of the application.
Can anyone suggest an appropriate architecture that is similar to the declarative notions of standard Rails environments?
If the number of modes is ever-increasing:
Perhaps the offline tasks could be separated from the main app, into their own application (or a parent abstract task with actual tasks inheriting from it and deployed individually).
If the number of modes is relatively small and won't be changing often:
You could put the per-mode configuration into a config file, logically separate from the rest of the code. Then, during deployments, you would be able to provide a combination of (environment, mode, set of hosts) and get a good level of control over your environments while using the same codebase.
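To illustrate the shape of it (sketched in TypeScript only for brevity; in a Rails app this would typically be a YAML file read in an initializer, keyed by a hypothetical APP_MODE variable alongside the usual environment):

```typescript
// Illustration of "environment stays what it is, mode is an extra axis".
// The mode names, settings and APP_MODE variable are hypothetical.
type Mode = "web" | "api" | "offline_tasks";

interface ModeConfig {
  webWorkers: number;          // how many app-server workers to boot
  runScheduledTasks: boolean;  // whether this instance runs the crawlers/loaders
  deferToFrontEnd: boolean;    // throttle heavy updates when the site is busy
}

const MODES: Record<Mode, ModeConfig> = {
  web:           { webWorkers: 8, runScheduledTasks: false, deferToFrontEnd: false },
  api:           { webWorkers: 4, runScheduledTasks: false, deferToFrontEnd: false },
  offline_tasks: { webWorkers: 0, runScheduledTasks: true,  deferToFrontEnd: true  },
};

// The normal environment (development, qa, production, ...) is read as usual;
// the mode is an orthogonal switch chosen at deploy time.
const mode = (process.env.APP_MODE ?? "web") as Mode;
export const modeConfig = MODES[mode];
```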
I am in the process of implementing an enhancement to an existing web application (A). The new solution will provide features (charts/images/data) to application A. The enhancement will be a new project and will generate new assemblies. I am trying to identify the most elegant way to read this information.
1) Add a binary reference and read the data directly. The new assemblies live with your application and the two are married together.
2) Write a WCF call and get the data. This will help decouple the applications.
The new application will require me to buy some expensive licenses, so if I go with the second option I can limit the license fee to a single server or at most 2-3. My current application runs on a web farm of 8 servers.
Please share the pros/cons of both approaches.
Thanks.
If you decouple the two pieces sufficiently, you will also permit the use of clients running something other than .NET. Using the first option, you could only support .NET clients. This may turn out to be important, even if today you are absolutely certain that only .NET will ever be used - tomorrow, your company may be purchased by another which is a Java or PHP shop.
Even if you never need to support a non-.NET client, coupling to the assemblies will require you to maintain version compatibility between the client and the server. If that coupling is not necessary, then use option #2.
The benefit of using WCF (decoupled approach) is that you get a deployment option to take it outside of the machine if it impacts the machine too much in terms of processing or storage.
The downside is that you'll likely pay some performance hit compared to linking directly.
I'm sure you can do some dynamic linking so you don't have to deploy to all 8 servers.