Strategy for modeling a multiple web app subscription system - sql

I am working on a system using PHP/MySQL where I allow users to subscribe monthly to various small, browser-based web apps. Each app will have different subscription terms and plans. The apps are all currently built, and they reside within the same framework.
I am in the modeling phase, so I am looking to make this system as flexible as possible, since the terms will vary from one plan to the next. Any thoughts on how to elegantly model this?
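For reference, here is a minimal sketch of the shape I have in mind. It is written as C# records purely to illustrate the relational structure (the names are illustrative, not a final design); each record would map to a MySQL table:

    using System;

    // Illustrative entities only - each record would map to a MySQL table.
    public record App(int AppId, string Name);

    // A plan belongs to one app and carries that app's specific terms.
    // MonthlyPrice is the recurring charge; TermsJson holds the plan-specific
    // settings that vary from app to app (quotas, trial length, etc.).
    public record Plan(int PlanId, int AppId, string Name,
                       decimal MonthlyPrice, string TermsJson);

    // A subscription ties a user to one plan and has its own lifecycle dates.
    public record Subscription(int SubscriptionId, int UserId, int PlanId,
                               DateTime StartedUtc, DateTime? CancelledUtc);

The flexible part is keeping the varying terms on the plan row (here a JSON/text column) rather than as fixed columns on a shared table, so one app's plan options don't constrain another's.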

Rather than building this yourself, you could look into using something like Zuora.com. Please note that I haven't used them and have no affiliation; I just remember reading about services like this starting to emerge for web-app publishers needing a simple billing/metering solution.
Of course, you also need to consider which payment gateways you support, but I believe Zuora handles that behind the scenes.

Related

Discussion on best way to create a "backend" for my apps

Over the years I have written many apps for myself and for customers using VB.NET. Most of these were standalone apps that ran on workstations and servers and didn't involve much, if anything, to do with networking, client/server, or a backend.
I want to evolve and learn more, and specifically I am interested in creating a "backend" for some of my apps, so that data from the client app, whether error information or custom data for a client, can be sent from the "client" to whatever is the best option for the "backend". I would then store the received data in a database, I assume, and have the ability to view it and parse it to show specific results, etc.
This is a new area to me, but one I want to learn more about, so I am hoping that others who have this knowledge/experience can give me some pointers as to the best route to take, especially covering the points below and anything else you think I need to know but have missed.
What to use for the "backend"? I prefer to have it in the cloud rather than having my own server, so do I use something like Azure? If so, what? Or do I sign up for hosting with a company that provides ASP.NET and .NET Core hosting and create something there? If so, what would I use?
What are the best methods for sending data from A to B? I notice most services I currently use for my apps, e.g. for sending mail or error reports, often use JSON. Is that the best option?
How do I send the data? Are there any built-in or suggested frameworks or APIs to use to post/send information from my client to my backend/remote endpoint? (See the sketch below.)
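For example, I imagine something along these lines on the client side. This is only a sketch: it is shown in C# rather than VB.NET, and the endpoint URL and payload shape are made up:

    using System;
    using System.Net.Http;
    using System.Net.Http.Json;
    using System.Threading.Tasks;

    // Hypothetical payload - the property names are illustrative.
    public record ErrorReport(string AppName, string Message, DateTime OccurredAtUtc);

    public static class ReportSender
    {
        private static readonly HttpClient Http = new HttpClient();

        // Serializes the report to JSON and POSTs it to a made-up endpoint.
        public static async Task SendAsync(ErrorReport report)
        {
            var response = await Http.PostAsJsonAsync(
                "https://example.com/api/reports", report); // hypothetical URL
            response.EnsureSuccessStatusCode();
        }
    }

PostAsJsonAsync comes from the System.Net.Http.Json namespace and handles the JSON serialization and Content-Type header; on older .NET Framework versions the same idea works with a JSON library plus PostAsync.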
Hope this all makes sense; please ask if not. All comments appreciated. As I say, I want to use this both as a way to create/modernize some of my existing apps and as a learning exercise, to learn areas I don't know and hopefully also make use of newer/evolving technologies and solutions.
Thanks

Considerations for Creating Industrial Applications (Native/Web)

What considerations are needed when creating a web app that is intended to be used in an industrial plant setting for a company? My specific use case is an industrial facility with several different production plants that would each have its own device for the application interface.
How do companies enforce the usage of such apps on a monitor/tablet? For example, could I prevent them from using other stuff on the tablet?
Importantly, how would security work? They'd share a device. There may be multiple operators that use the app in a given shift. Would they all use the same authentication session (this is not preferable, as I'd like to uniquely identify the active user)? Obviously I could use standard usernames/passwords with token-based sessions that expire; however, this leaves a lot of potential for account hijacking. Ideally, they'd be able to log on very quickly (PIN, perhaps?) and their session would end when they are done.
As long as there is an internet connection, I would presume that there isn't much pro/con regarding the use of native applications versus web-based or progressive web apps. Is this assumption correct?
What's the best way of identifying which device the application is being run on?
Is this a common thing to do in general? What other technologies are used to create software that obtains input from industrial operators?
--
Update - this is a good higher-level consideration of the question at hand; however, it has become apparent why focused, specific questions are helpful. As such, I will follow up with questions that are specific:
Identifying the Area/Device a Web Application is Accessed On
Enforcing Specific Application Use on Tablets
Best Practices for Web App Authentication in Industrial Settings
I'm not able to answer everything in great detail, but here are a few pointers. In environments like the one you describe, we usually see these two options: 1) you tell them what you need (internet access, security, whether they give you the device and how it will be configured), or 2) they tell you exactly what you need to deliver.
I do not think you can 100% prevent them. We did it by providing the tablet (well, laptops in our case), and the OS configuration took care of that; the downside was that we had a few devices to support. You seem to hint that there is always an internet connection, so I guess you could collect all the info about the system and send it back to you daily?
We were allowed to "tap" into their attendance software, and when you entered the facility you were able to use your 4-digit PIN to log in; if you were off the premises you could not log in at all. I can imagine the following: you log in with your username and password, which does the full verification; after that, you can use a 4-digit PIN to log in for the next n hours. (A sketch of that idea follows.)
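A minimal sketch of that PIN-after-full-login idea, with every name hypothetical; a real system would hash the PINs and persist them rather than hold them in memory:

    using System;
    using System.Collections.Concurrent;

    // After a full username/password login, register a short-lived PIN for the
    // user on that device; the PIN then works for quick re-login until it expires.
    public class PinLoginCache
    {
        private sealed record Entry(string UserId, DateTime ExpiresAtUtc);

        // Keyed by (deviceId, pin) so the same PIN is useless on another device.
        private readonly ConcurrentDictionary<(string Device, string Pin), Entry> _entries = new();

        public void RegisterPin(string deviceId, string pin, string userId, TimeSpan lifetime) =>
            _entries[(deviceId, pin)] = new Entry(userId, DateTime.UtcNow.Add(lifetime));

        // Returns the user id if the PIN is valid on this device and not expired, else null.
        public string? TryPinLogin(string deviceId, string pin)
        {
            if (_entries.TryGetValue((deviceId, pin), out var entry) &&
                entry.ExpiresAtUtc > DateTime.UtcNow)
                return entry.UserId;

            _entries.TryRemove((deviceId, pin), out _); // drop expired/unknown entries
            return null;
        }
    }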
Maybe, kinda; it depends on what you are doing. Does the browser have all the features you need? Our system needs multicast to perform really fast, so we have a native app.
I touched on this in 1. You could also use a device enrollment process. You can also contractually require that only your software will be on the device, and that violating this may invalidate the support contract. It really depends on your creativity. My favourite (and it works): just tell them there will only be my software installed, and if not, you will pay me double for support. I only saw one customer who installed some crap on the device after they were told not to.
It really depends on what industry you are talking about; every industry is different. We almost always build a custom solution.
The enforcement of device/app usage depends on the customer. If the customer asks for help with enforcement, then you can provide guides, training, and workshops. If the customer is serious about enforcement, then it will be a policy adopted by the whole organization from top to bottom. Usually seniors will resist a workflow change more than juniors, so top management/executives should deal with that. Real-life story: an SAP team took 6 months to transform a major newspaper's workflow, and during that time a few seniors got fired because they refused to adopt the change.
Security shouldn't handicap the users. Usually in an industrial environment the network is isolated, or at least restricted through VPN to connect multiple sites (plants in your case). Regarding the active user: we usually provide guides/training/workshops for the users and inform them that using a colleague's account or device will prevent the system from tracking their accomplishments/tasks, so each user is responsible for making sure the active account/device is the one assigned to him/her.
It depends. With native you have more control than with web, but if the app is just doing monitoring, then most apps today use the web for monitoring, and the common way to receive input is REST APIs (even if the industrial devices don't support REST APIs, a middleware could be written to transform the output). If you need more depth on native vs. web, you need to ask a new question with more details about the requirements.
It depends on the tech you are using (native or web) and the things I mentioned in point 2: you can use a whitelist of devices that are allowed to run the app (sketched below). Overall, there are many good ways to track down the device.
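For illustration only, the whitelist check could be as simple as this; the device ids are made up:

    using System;
    using System.Collections.Generic;

    // The device ids are hypothetical; in practice the id might come from a
    // device enrollment/MDM system or a client certificate, not a hard-coded list.
    public class DeviceWhitelist
    {
        private readonly HashSet<string> _allowed = new(StringComparer.OrdinalIgnoreCase)
        {
            "plant-a-tablet-01",
            "plant-b-tablet-01",
        };

        public bool IsAllowed(string deviceId) => _allowed.Contains(deviceId);
    }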
How common is this in general? I think such information can only be obtained by a survey; the world is full of variations. And something being common does not mean it's safe or best; our industry keeps changing at all levels. So to stay in the loop, we must keep learning and self-updating, without a reboot.

Reuse microservices across different projects

We developed a monolithic API to be used as a SaaS.
In the company we also develop custom builds for some customers.
Some of our customers are asking for features that are already implemented in the monolithic application.
We are thinking about splitting our API into microservices, but our major questions are the following:
Can microservices be reused across different projects?
If we do split, do we create a microservice that everybody uses, or do we create an instance per custom build?
E.g.:
project A uses "User" and "Project", so we deploy 2 microservices
project B uses "User", "Project", and "Store", so we deploy 3 microservices
total number of microservices deployed: 5
If we create an instance of each microservice per custom build, won't it be too hard to manage the communication between all the microservices within the same custom build?
Or do we stick with one instance per microservice that everybody uses, and we specify the project source?
We are using C# with GraphQL.
We also thought about creating a NuGet package for each component, so each package would contain:
Exposed GraphQL queries/mutations
Its own database
Its own logic
E.g.:
- API A installs the "User" & "Project" packages
- 3 databases are instantiated: "Api.A", "Api.A.User", "Api.A.Project"
- API B installs the "User", "Project" & "Store" packages
- 4 databases are instantiated: "Api.B", "Api.B.User", "Api.B.Project" & "Api.B.Store"
But does it make sense to do that?
In my mind it could be very similar to Hangfire: https://www.hangfire.io/
Note that we are currently using AWS Serverless to host our applications.
An important point is that we are a small team of 2-4.
We are very open-minded, so any suggestion is good to hear.
Thank you !
First of all, I would like to say that there is no right way here; I am providing my point of view based on the way we have already done things, hoping it will guide you in finding the solution best suited to your requirements.
So, to restate your dilemma: you have a base vanilla product, which is a SaaS API, and there is a customized deployment for some customers as well. But as you are having to build custom deployments for each customer, you are noticing a common pattern, wherein a lot of the functionality is repeated across the SaaS for each customer.
Now, assuming I have the requirement correct, I would say microservices will provide definite benefits in your case in terms of scaling and customer-specific customization, which can be managed by independent teams.
But a lot of this depends on how your business logic is structured and how big and vast your customization is. Some of the questions that should drive your solution are:
Can you store customer-specific data in a central data store, or does it live at the customers' end? How are your databases going to be structured, and how many of them will there be?
How big are the customizations? Are they cosmetic, or do they change the workflow?
How much cross-communication do you expect across the various services like User, Store, and Project, and is there any communication across A.User - B.User or A.Project - B.Store, etc.?
Now, moving on to some of the important things you might want to consider after answering the above questions.
Consideration 1. If the data stores can be allowed to live in a single central place, you can go ahead with a single cluster where all your microservices are deployed. But looking at the data provided, I assume you have multiple databases per customer, and I would recommend keeping them separate and not introducing any coupling between them. Thus you may end up with an instance of each microservice per customer, which talks only to that customer's database.
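As a rough illustration of that separation, reusing the naming scheme from your question ("Api.A.User", etc.); the class and the connection-string format are hypothetical:

    // All names and the connection-string format are hypothetical.
    public class ConnectionStringResolver
    {
        private readonly string _server;

        public ConnectionStringResolver(string server) => _server = server;

        // e.g. For("Api.A", "User") -> database "Api.A.User", mirroring the
        // naming scheme from the question.
        public string For(string customerApi, string module) =>
            $"Server={_server};Database={customerApi}.{module};Integrated Security=true;";
    }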
Consideration 2. The customization, as far as the norm goes, should be separated from the service itself, and every service should have an input for configuration loading which drives the service's behavior. Again, depending on how big your customization is, there can be a limit to this configuration, and in those cases I would recommend creating a new service with the customizations built in. (A small sketch of config-driven behavior follows.)
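A small sketch of what consideration 2 could look like; all names here are hypothetical, and a real service would bind the options from appsettings.json or environment variables per customer deployment:

    // All names are hypothetical; configuration drives per-customer behavior.
    public record CustomerOptions(string CustomerId, bool StoreEnabled, int MaxProjects);

    public class ProjectService
    {
        private readonly CustomerOptions _options;

        public ProjectService(CustomerOptions options) => _options = options;

        // Same binary everywhere; the behavior difference comes from configuration.
        public bool CanCreateProject(int currentProjectCount) =>
            currentProjectCount < _options.MaxProjects;
    }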
Consideration 3. This is a major factor in deciding the number of microservices you may have, but the boundary of each service should be defined by the domain: for example, a User service, a Store service, and a Project service. These are the vanilla services that interact with each other to produce a valid business scenario, and each customer deployment is just a specialized instance of these services.
OK, now that this is done, let's go over your primary questions...
Can microservices be reused across different projects?
-- Yes, you can, but again it depends on how you have designed the business workflow and configuration injection.
If we do split, do we create a microservice that everybody uses or do we create an instance per custom build?
-- An instance per custom build would be the ideal scenario, enabling separation of concerns across projects, as we do not want to mix data boundaries and client-specific sensitive configurations. That said, there might be a case where a single shared microservice is what is demanded, but that should be done with caution.
If we create an instance of each microservice per custom build, won't it be too hard to manage the communication between all the microservices within the same custom build?
-- Communication across microservices is an important factor which is more often than not unavoidable. So, given that you will require some form of cross-microservice communication, you can look at an enterprise service bus or direct API communication, based on your requirements. But it is a well-understood problem, in my opinion.
Or do we stick with one instance per microservice that everybody uses and we specify the project source?
-- I would not recommend this, as the example stated in your question of a module for database injection doesn't sound like a good idea to me. This will cause a highly coupled system design. And it might also mean that if one service fails, all your customer sites go down. You surely wouldn't want that!

Effective Backend for Real Estate Application

I am looking to develop a cross-platform mobile app involving real estate. I have looked at Zillow's API, and I think that will be one of the APIs I utilize.
https://www.zillow.com/howto/api/APIOverview.htm
My question is: if I were to utilize their API, as well as those of some other real estate sites, would it make more sense for me to call those APIs directly from the mobile application, or to have a proxy server, possibly with my own databases compiled from these sites, that the mobile application would call? I have only read the basic overview of the Zillow API, but it looks like it is limited to 1000 calls per day. I understand it is a fairly general question. If there are any more details that would help to make a better answer, please let me know.
Also, if you know of any other free/cheap real estate APIs, can you please provide them?
Thanks
Not exactly sure what your metrics are.
But generally speaking, it is a bad idea to hook your mobile app directly to a third-party API, for the following reasons:
You do not control the API. If the third party changes their API, your app won't work and users will have to upgrade. But if you isolate the mobile app by connecting it to your own server, you have more control and can have a much longer life.
Caching/rate limits. You can get the data from the third party and store it (if you are allowed), then share the data with all your users (see the sketch after this list).
Multiple data sources. Usually you get the data from multiple data sources, so aggregating the data on your server and then sending the enhanced data model to the app is a lot easier than pulling data from different sources and compiling them on the app itself.
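As a rough sketch of the caching point, with a hypothetical endpoint URL and an arbitrary one-hour TTL:

    using System;
    using System.Collections.Concurrent;
    using System.Net.Http;
    using System.Threading.Tasks;

    // The mobile app calls this server-side code; the third-party API is hit
    // at most once per listing per TTL. URL and TTL are illustrative values.
    public class CachingProxy
    {
        private static readonly HttpClient Http = new HttpClient();
        private static readonly TimeSpan Ttl = TimeSpan.FromHours(1);
        private readonly ConcurrentDictionary<string, (string Body, DateTime FetchedUtc)> _cache = new();

        public async Task<string> GetListingAsync(string listingId)
        {
            if (_cache.TryGetValue(listingId, out var hit) &&
                DateTime.UtcNow - hit.FetchedUtc < Ttl)
                return hit.Body; // cached copy saves one of the limited daily calls

            var body = await Http.GetStringAsync(
                "https://api.example.com/listings/" + listingId); // hypothetical endpoint
            _cache[listingId] = (body, DateTime.UtcNow);
            return body;
        }
    }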

Best practices for development environment and API dev?

My current employer uses a 3rd-party hosted CRM provider, and we have a fairly sophisticated integration tier between the two systems. Among the capabilities of the CRM provider is that developers can author business logic in a Java-like language, and on events such as the user clicking a button or submitting a new account into the system, have validation and/or business logic fire.
One of the capabilities that we make use of is for that business code running on the hosted provider to invoke web services that we host. The canonical example is a sales rep entering a new sales lead and hitting a button to ping our systems to see if we can identify that new lead based on email address, company/first/last name, etc., and if so, return an internal GUID that represents that individual. This all works fine for us, but we've run into a wall again and again in trying to set up a sane dev environment to work against.
So while our use case is a bit nuanced, this can generally apply to any development house that builds APIs for 3rd party consumption: what are some best practices when designing a development pipeline and environment when you're building APIs to be consumed by the outside world?
At our office, all our devs are behind a firewall, so code in progress can't be hit by the outside world, in our case the CRM provider. We could poke holes in the firewall, but that's less than ideal from a security-surface-area standpoint, especially if the number of devs who need to be in a DMZ-like area is high. We are currently trying a single dev machine in the DMZ and remoting into it as needed to do dev work, but that has created a resource-scarcity issue when multiple devs need the box, let alone that they may be making potentially conflicting changes (e.g. on different branches).
We've considered just mocking/faking incoming requests by building fake clients for these services, but that's a pretty major overhead in building out feature sets (though it does, by nature, reinforce the testability of our APIs). It also doesn't change the fact that sometimes we really do need to diagnose/debug issues coming from the real client itself, not some faked request payload.
What have others done in these types of scenarios? In this day and age of mashups, there have to be a lot of folks out there with experience developing APIs: what has worked (and not worked so well) for you?
On the occasions when this has been relevant to me (which, truth be told, is not often), we have tended to do a combination of hosting a dev copy of the solution in-house and mocking what we can't host.
I personally think that the more you can host on individual dev boxes, the better: if your devs' PCs are powerful enough to run the entire thing plus whatever else they need to develop, then they should be doing this. It gives them tons of flexibility to develop without worrying about other people.
For dev, it would make sense to use mock objects and write good unit tests that define the task at hand. It would help to ensure that the developers understand the business requirements. Mocking libraries are very sophisticated and help solve this problem. (A minimal example follows.)
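As a minimal example of that approach, the lead-matching logic from the question is reduced to a hypothetical interface and exercised with a hand-rolled fake instead of a live CRM callback (xUnit shown; a mocking library like Moq would serve the same purpose):

    using Xunit;

    // Hypothetical reduction of the lead-matching web service.
    public interface ILeadMatcher
    {
        string? FindGuidByEmail(string email);
    }

    // Hand-rolled fake standing in for the real implementation.
    public class FakeLeadMatcher : ILeadMatcher
    {
        public string? FindGuidByEmail(string email) =>
            email == "known@example.com"
                ? "11111111-1111-1111-1111-111111111111"
                : null;
    }

    public class LeadEndpointTests
    {
        [Fact]
        public void Known_email_resolves_to_a_guid()
        {
            ILeadMatcher matcher = new FakeLeadMatcher();
            Assert.NotNull(matcher.FindGuidByEmail("known@example.com"));
            Assert.Null(matcher.FindGuidByEmail("unknown@example.com"));
        }
    }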
Then, perhaps, a continuous build process that moves the code to the dev box in the DMZ. A robust QA process would make sense, plus general UAT testing.
Also, for general debugging, you again need access to the machine in the DMZ, where you remote in.
This is probably an "ideal" situation, but you did ask for best practices :).