How do banks and other companies handle recurring tasks?

I'm curious how banks handle their users' recurring tasks. If they have thousands of those tasks, for example PayPal recurring payments, how does PayPal handle them? What software do they use? I don't think they use cron or Quartz for such tasks.
What do they do if the system goes down and is unable to process user tasks?
Is there any solution like MySQL's event scheduler? Has anyone tried RabbitMQ to process various events?
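
For context, one common pattern (independent of any particular vendor) is to keep every due task in durable storage and have workers poll for overdue work: if a machine dies mid-run, unfinished rows simply stay due and are claimed again later. A minimal C# sketch of that idea; IPaymentStore, DuePayment, and the polling interval are hypothetical names and values, not any real product's API:

    using System;
    using System.Collections.Generic;
    using System.Threading;
    using System.Threading.Tasks;

    // Hypothetical persistence interface; a real system would back this with a
    // database table of subscriptions and their next-due timestamps.
    public interface IPaymentStore
    {
        // Atomically claim due payments so two workers never grab the same row
        // (e.g. an UPDATE ... OUTPUT statement in SQL Server).
        Task<IReadOnlyList<DuePayment>> ClaimDuePaymentsAsync(int batchSize);
        Task MarkProcessedAsync(DuePayment payment);
        Task MarkFailedAsync(DuePayment payment, string reason);
    }

    public record DuePayment(Guid Id, decimal Amount, string CustomerId);

    public sealed class RecurringPaymentWorker
    {
        private readonly IPaymentStore _store;

        public RecurringPaymentWorker(IPaymentStore store) => _store = store;

        public async Task RunAsync(CancellationToken ct)
        {
            while (!ct.IsCancellationRequested)
            {
                var due = await _store.ClaimDuePaymentsAsync(batchSize: 100);
                foreach (var payment in due)
                {
                    try
                    {
                        await ChargeAsync(payment);            // call the payment gateway
                        await _store.MarkProcessedAsync(payment);
                    }
                    catch (Exception ex)
                    {
                        // Record the failure for a later retry so a crash or a
                        // transient error never silently drops a payment.
                        await _store.MarkFailedAsync(payment, ex.Message);
                    }
                }
                await Task.Delay(TimeSpan.FromSeconds(30), ct); // poll interval
            }
        }

        private Task ChargeAsync(DuePayment payment) => Task.CompletedTask; // stub
    }

Because the due/processed state lives in the database rather than in the scheduler's memory, a restarted worker resumes exactly where the crashed one left off.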

CICS, COBOL, and overnight batch jobs still abound.

Banks and the telecom sector have very powerful systems that were developed for this specific purpose; we just need to integrate with them.
When I was working at a telecom company, they had a lot of servers, all of them with 32+ GB of RAM.
Banks also have very powerful systems: whenever one server crashes, another automatically takes over. Banks keep their data in a central repository, and thanks to that central system they don't run into data-consistency issues.

Related

How to architect scheduled API-to-API integration

My organization moves data between systems for customers. These integrations are built in BizTalk and are mostly file-based, sometimes to/from APIs. More and more customers are switching to APIs, so we are facing more and more API-to-API integrations.
I'm mostly a backend developer, but I have been tasked with finding a more generic pattern or system for building these integrations; we are talking close to a thousand integrations.
That is not thousands of different APIs, though: many customers use the same sorts of systems.
What I want is a solution that:
Fetches data from the source API
Transforms the data into the format of the target API
Sends the data to the target API
Another requirement is that it should be possible to set a schedule when these jobs should run.
This is easily done in BizTalk, but as mentioned there will be close to a thousand integrations, and if we need to change something in one of the steps it will be a lot of work.
My vision is something that holds interfaces to all the APIs we communicate with and also contains the scheduled jobs we want run between them, preferably with logging/tracking.
There must be something out there that does this?
Suggestions?
NOTE: No cloud-based solutions since they are not allowed in our organization.
You can easily implement this using the temporal.io open source project. You code your integrations in a general-purpose programming language, and Temporal ensures that the integration runs to completion in the presence of all sorts of intermittent failures. Scheduling is also supported out of the box.
Disclaimer: I'm a founder of the Temporal project.
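
To make that concrete, here is a rough sketch of a fetch/transform/send workflow using Temporal's .NET SDK (the Temporalio package). The activity bodies and names are placeholders, and attribute and option names may differ slightly between SDK versions:

    using System;
    using System.Threading.Tasks;
    using Temporalio.Activities;
    using Temporalio.Workflows;

    // Placeholder activities standing in for the source API call, the mapping
    // step, and the target API call.
    public class IntegrationActivities
    {
        [Activity]
        public Task<string> FetchAsync(string sourceApi) => Task.FromResult("raw-data");

        [Activity]
        public Task<string> TransformAsync(string raw) => Task.FromResult("mapped-data");

        [Activity]
        public Task SendAsync(string targetApi, string payload) => Task.CompletedTask;
    }

    [Workflow]
    public class IntegrationWorkflow
    {
        [WorkflowRun]
        public async Task RunAsync(string sourceApi, string targetApi)
        {
            var options = new ActivityOptions { StartToCloseTimeout = TimeSpan.FromMinutes(5) };

            // Each step is retried on failure and its result is persisted, so a
            // crash mid-integration resumes from the last completed step.
            var raw = await Workflow.ExecuteActivityAsync(
                (IntegrationActivities a) => a.FetchAsync(sourceApi), options);
            var mapped = await Workflow.ExecuteActivityAsync(
                (IntegrationActivities a) => a.TransformAsync(raw), options);
            await Workflow.ExecuteActivityAsync(
                (IntegrationActivities a) => a.SendAsync(targetApi, mapped), options);
        }
    }

The cron-style schedule is attached when the workflow is started rather than baked into the workflow code, which keeps the thousand integrations as data (source, target, schedule) around one shared implementation.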

How to monitor NServiceBus queue length

We use NServiceBus for a few applications and monitor endpoint heartbeats and failed messages through ServicePulse.
Most of the time messages are processed within minutes, but occasionally there is a spike in traffic and clients will ask if there is a problem. I would like to know the length of an endpoint's queue so that I can respond and provide estimates.
We use SQL Server as the transport layer and subscription store. I cannot view the database remotely.
What is the best approach to surface this data?
I could expose an SSRS report on top of the database, add code to ServiceControl and ServicePulse since they are both open source, or add a custom check through ServicePulse...
How about running a job (at a configured interval on the SQL Server) against the queue tables that writes the number of messages to a table you can query?
You can then use that table to drive your monitoring tool and generate alerts, or indeed write a custom check so you get alerts in ServicePulse...
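
As a sketch of the custom-check route, assuming the NServiceBus.CustomChecks API of that era (the queue table name, threshold, and connection string are placeholders, and the exact PerformCheck signature varies between package versions):

    using System;
    using System.Data.SqlClient;
    using System.Threading.Tasks;
    using NServiceBus.CustomChecks;

    // Periodically counts the rows in a SQL Server transport queue table and
    // reports a failure to ServicePulse when the backlog grows too large.
    public class QueueLengthCheck : CustomCheck
    {
        private const string ConnectionString = "...";         // transport database
        private const string QueueTable = "dbo.[MyEndpoint]";  // queue table to watch
        private const int Threshold = 1000;

        public QueueLengthCheck()
            : base("QueueLength:MyEndpoint", "Transport", TimeSpan.FromMinutes(1))
        {
        }

        public override async Task<CheckResult> PerformCheck()
        {
            using (var conn = new SqlConnection(ConnectionString))
            using (var cmd = new SqlCommand(
                $"SELECT COUNT(*) FROM {QueueTable} WITH (NOLOCK)", conn))
            {
                await conn.OpenAsync();
                var count = (int)await cmd.ExecuteScalarAsync();
                return count > Threshold
                    ? CheckResult.Failed($"Queue length is {count} (threshold {Threshold})")
                    : CheckResult.Pass;
            }
        }
    }

NOLOCK keeps the count query from blocking the endpoint's own dequeue operations; the count is approximate, which is fine for alerting.
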
While that is a temporary solution, we are working on filling this gap; take a look at this announcement: https://groups.google.com/d/msg/particularsoftware/zRJ18bxeY2Y/zrLu9WOIAQAJ
We've been working on enhancing the Particular Service Platform to close the existing gap and provide a means of monitoring your NServiceBus-based system more easily.
The initial offering will focus on identifying key metrics (one of them being queue length) for assessing the health of a system, and then presenting these metrics to you in a manner that's easy to visualize and consume.
In the weeks ahead we will share more information about our monitoring philosophy and how we are looking to ease the pain of implementing it, so follow our blog to get notified of updates.
In the meantime you are welcome to join the live webinar on the monitoring theme, Wednesday, June 28 at 12:00 EDT (17:00 BST).
Also: my colleague William Brander and I will show the metrics you should consider when monitoring microservices.
Link: https://particular.net/what-to-consider-when-monitoring-microservices
Hope this helps.
If I can help further, please feel free to email support at particular.net

Perforce: any side effects of sharing login accounts / client specs among multiple users?

I am currently working on a file system application in C# that requires users to log in to a Perforce server.
During our analysis, we figured that having a unique P4 login account per user is not really beneficial and would require us to purchase more licenses.
Considering that these users are contractors who will only use the system for a predefined amount of time, it's hard to justify purchasing licenses for each new contractual user.
With that said, are there any disadvantages to having groups of users share common login accounts to a Perforce server? For example, we'd have X groups sharing X logins.
From a client-spec point of view, will Perforce be able to detect that even though someone synced to head, a newly logged-in user (on another machine) also needs to sync to head? Or are all files flagged as synced to head because someone else already synced?
Thanks
The client specs are per machine, and so will work in the scenario you give.
However, Perforce licenses are strictly per person, so you would be breaking the license agreement and using the software illegally. I really would not advocate that.
In addition to the 'real' people you need licenses for, you can ask for a couple of free 'robot' accounts to support things like automated build services, administration, etc.
Perforce has had arrangements in the past for licensing temporary users such as interns, so what I would recommend is that you contact them and ask what they can do for you in your situation.
Greg has an excellent answer and you should follow his directions first. But I would like to make a point on the technical side of sharing clients across multiple machines: it is generally a bad idea. Perforce keeps track of the contents of each client workspace by client name only. So if you sync a client on one machine and then try to sync the same client on another machine, the second machine will only get the "recently" changed files and none of the changes that were already synced on the first machine.
The result is that you have to do a lot of force syncing (p4 sync -f), or keep track of the changelists you sync to and do some flushing (p4 flush) followed by syncing.

Web-based or thick client through VPN?

At an electric company where I was hired temporarily, we have to implement an upgrade of the billing and payments system (the current system is a dBase III system). The company's programmer and I have decided to use VB.NET and MySQL.
The company serves several towns and has billing and payment centers in selected towns. Every billing period, the meter readers read every electric meter and write the readings on a sheet. Every day at 5 pm, an employee from each center collects the sheets and travels to the main center, where the readings are encoded.
The billings are printed in the main center and then distributed to the branches.
During discussions with the general manager and the heads of the company, the two of us were tasked to take advantage of the internet, because the towns where the centers are located have internet connectivity, and for those that don't we can use mobile internet.
The new system will allow users to enter the readings and then send the data to the main server in the main branch. They will also be able to download and print the billings.
Our problem now is what type of system to implement: should it be web-based, or a desktop application that connects to our database server through VPN?
If this is a fixed-price project, and the client will accept either web or desktop, go with desktop over VPN. You'll save a ton of time and have something that is more responsive (from a user perspective).
However, if you think the client will eventually need to use the product on mobile devices or the web, you're shooting yourself in the foot by going WinForms.
Having had some experience with using a thick client through VPN, I'd say go with some kind of web app.
If done wrong, a thick client can become really painful to use through a VPN because of data churning. A web app concentrates all of that on the server, which makes it much better from that point of view.
Other benefits:
no deployment hassle
no direct access to the database from the user machine.
Of course, it also depends on your skills, and on how much time/budget you have...
I do not know the client's situation... but what about giving them the best of both worlds? Since it sounds like you will be programming on a Windows-based system and have deployment access to Windows-server-based hardware, why not either build a Silverlight application, or build a WPF application hosted in an IE window?
I think the answer depends on the type and frequency of the database queries you need to make. Querying a DB from a thick client through VPN can be painfully slow. In a web app, the application logic runs close to the DB, maybe even on the same machine, so DB queries are fast. The downside is that the UI can be slower, but it is probably easier to design a responsive web-based UI than to make a VPN fast.
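
To make the VPN cost concrete: the problem is usually round trips, not bandwidth. A small hypothetical C# sketch, where Db and its methods are made-up stand-ins and each call represents one network round trip:

    using System.Collections.Generic;
    using System.Linq;

    // Made-up data-access stubs; each method stands for one round trip to the DB.
    static class Db
    {
        public static List<int> GetCustomerIds() => Enumerable.Range(1, 1000).ToList();
        public static List<string> GetOrdersFor(int id) => new List<string>();
        public static Dictionary<int, List<string>> GetAllOrders() =>
            new Dictionary<int, List<string>>();
    }

    static class Report
    {
        // Chatty: one query per customer. At ~80 ms of VPN round-trip latency,
        // 1,000 customers means ~80 seconds spent just waiting on the network.
        public static void RenderChatty()
        {
            foreach (var id in Db.GetCustomerIds())
                Render(Db.GetOrdersFor(id));
        }

        // Chunky: a single set-based query (or one server-side call) pays the
        // latency once, no matter how many customers there are.
        public static void RenderChunky()
        {
            foreach (var orders in Db.GetAllOrders().Values)
                Render(orders);
        }

        private static void Render(List<string> orders) { /* draw the UI */ }
    }

A web app gets the chunky behavior almost for free, because every loop like the first one runs on the server next to the database.
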
What instrument will your bill collectors use?
1. A laptop with a mobile internet connection, or
2. a specialized handheld tool that reads the meter and sends the data to the service center?
If it is a laptop, you can create a website where only authorized people can log in and insert readings into the database. You can use HTTPS for better security.

Application Level Replication Technologies

I am building out a solution that will be deployed in multiple data centers in multiple regions around the world, with each data center having a replicated copy of data actively updated in each region. I will have a combination of multiple databases and file systems in each data center, the state of which must be kept consistent (within a data center). These multiple repositories will be fronted by a SOA service tier.
I can tolerate some latency in the replication, and need to allow for regions to be off-line, and then catch up later.
Given the multiple back-end repositories of data, I can't easily rely on an independent replication solution for each one to maintain a consistent state. I am thus led to implementing replication at the application layer, by replicating the SOA requests in some manner. I'll need to make sure that replication loops don't occur and that last-writer conflicts are resolved correctly.
In your experience, what is the best pattern for solving this problem, and are there good products (free or otherwise) that should be investigated?
Lotus/Domino is your answer. I've been working with it for ten years and it's exactly what you need. It may not be trendy (a perception I would challenge) but it's powerful, adaptable, and very secure. The latest version, R8, is the best yet.
You should definitely consider IBM Lotus Domino. A Lotus Notes database can replicate between sites on a predefined schedule. Replication in Notes/Domino is a very powerful feature and enables full replication of data between sites. Even if a server is unavailable, the next time it connects it will simply replicate and get back in sync.
As for the SOA service tier, you could then use Domino Designer to write a web service. Since Notes/Domino 7.5.x (I believe), Domino has been able to provide and consume web services.
As others have advised, I also recommend Lotus Notes/Domino. 8.5 is a really powerful application development platform.
You don't give enough specifics to be certain of your needs, but I think you should check out SQL Server merge replication. It allows asynchronous replication of multiple databases with full conflict resolution. You will need to designate a global master, and all the other databases will replicate to that one, but all the database instances are fully functional (read/write), so you can schedule replication at whatever intervals suit you. If any region goes offline it can catch up later with no issues; if the master goes offline, everyone will work independently until replication can resume.
I would be interested to know of other solutions this flexible (apart from Lotus Notes/Domino, of course, which is not very trendy these days).
I think your answer is going to have to be based on a pub/sub architecture. I am assuming that you have reliable messaging between your data centers, so that you can rely on published updates eventually being received. If all of your access to the data repositories is via services, you can add an event notification to the orchestration of each of your update services that notifies all interested data centers of the event. Ideally the master database is the only one that sends out these updates; in that case you can exclude routing the notifications back to the node that generated them in the first place, thus avoiding update loops.
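
To illustrate the loop-avoidance idea: tag every replicated request with the data center that originated it, apply remote updates locally, and never re-publish them. A minimal C# sketch, where all type and member names are hypothetical:

    using System;
    using System.Threading.Tasks;

    // Envelope for a replicated SOA request; OriginDataCenter records where the
    // update was first applied so receivers never re-publish it.
    public record ReplicatedUpdate(Guid UpdateId, string OriginDataCenter,
                                   DateTime TimestampUtc, string Payload);

    public interface IUpdateBus
    {
        Task PublishAsync(ReplicatedUpdate update); // fan out to the other data centers
    }

    public class ReplicationHandler
    {
        private readonly string _localDataCenter;
        private readonly IUpdateBus _bus;

        public ReplicationHandler(string localDataCenter, IUpdateBus bus)
        {
            _localDataCenter = localDataCenter;
            _bus = bus;
        }

        // Updates that originate here: apply locally, then publish exactly once.
        public async Task HandleLocalUpdateAsync(string payload)
        {
            var update = new ReplicatedUpdate(Guid.NewGuid(), _localDataCenter,
                                              DateTime.UtcNow, payload);
            await ApplyAsync(update);
            await _bus.PublishAsync(update);
        }

        // Updates received from other data centers: apply but never re-publish;
        // that is what breaks the replication loop.
        public Task HandleRemoteUpdateAsync(ReplicatedUpdate update)
            => update.OriginDataCenter == _localDataCenter
                ? Task.CompletedTask // our own update echoed back; ignore it
                : ApplyAsync(update);

        // A last-writer-wins policy would compare TimestampUtc with the stored
        // version before writing; omitted here for brevity.
        private Task ApplyAsync(ReplicatedUpdate update) => Task.CompletedTask;
    }

Offline regions catch up as long as the bus retains undelivered messages, which matches the tolerance for replication latency described in the question.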