How can I better design an Inventory/Order sync web app - asp.net-mvc-4

I have a web application that displays inventory, orders, and tracking information from drop-shippers for orders and tracking updates. When a customer logs in, they see all of this information on different pages.
I have a console application on the server that hosts four background workers, one for each of the above tasks, which update the database. Currently I run one console application per customer. I did this so that if the console application fails because of one customer's data, it does not affect the others.
Is there a better approach, or are there existing tools, APIs, or frameworks in the Microsoft stack that support this kind of setup? Or is what I am doing the correct and best approach? Are there any technologies that are more stable for supporting subscription-based membership, offline data sync, queueing user requests, and notifying users when they are completed?

I would take a look at Azure Queues and WebJobs (link below).
With a queue structure, you can decouple your application and make each piece do only what is needed. Your main application can then just put the relevant information in the queue and forget about it.
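For the producer side, here's a minimal sketch of what "put it in the queue and forget about it" can look like, assuming the classic Azure Storage SDK (Microsoft.WindowsAzure.Storage); the queue name "customer-sync" and the connection string key are illustrative:

```csharp
using System.Configuration;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

public static class SyncRequests
{
    // Called from the web app wherever a customer's data needs a refresh.
    public static void Enqueue(string customerId)
    {
        var account = CloudStorageAccount.Parse(
            ConfigurationManager.ConnectionStrings["AzureWebJobsStorage"].ConnectionString);
        var queue = account.CreateCloudQueueClient().GetQueueReference("customer-sync");
        queue.CreateIfNotExists();

        // Fire and forget: the WebJob picks this up on its own schedule.
        queue.AddMessage(new CloudQueueMessage(customerId));
    }
}
```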
Next (and perhaps the most crucial part of this), you can write a simple console application that runs whenever a message arrives on the queue. The beauty of this is not only that you can have multiple WebJobs doing the same thing (I don't recommend it), but also that you only need to have and maintain one console application. If the application crashes, the WebJobs host will simply restart it (within a few seconds) and it will pick up where it left off.
Below, please find a link to a tutorial on how to make a sample Queue and WebJob:
http://azure.microsoft.com/en-us/documentation/articles/websites-dotnet-webjobs-sdk-get-started/?rnd=1
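As a rough sketch (not a drop-in implementation), the matching queue-triggered WebJob could look like this; the queue name and function body are illustrative:

```csharp
using System.IO;
using Microsoft.Azure.WebJobs;

class Program
{
    static void Main()
    {
        // The host polls the queue and restarts failed functions automatically.
        // It expects the AzureWebJobsStorage connection string in config.
        var host = new JobHost();
        host.RunAndBlock();
    }
}

public class Functions
{
    // Invoked whenever a message lands on the "customer-sync" queue.
    public static void ProcessSyncRequest(
        [QueueTrigger("customer-sync")] string customerId, TextWriter log)
    {
        log.WriteLine("Syncing inventory/orders/tracking for customer {0}", customerId);
        // A failing message is retried a few times and then moved to the
        // poison queue, so one customer's bad data can't block the others.
    }
}
```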

Related

Application Insights strategies for web api serving multiple clients

We have a back-end API running ASP.NET Core, with two front ends: a SPA web site (Vue.js) and a progressive web app (for mobile users). The front ends are basically client code only, and all services are on different domains. We don't use cookies, as authentication uses bearer tokens.
We've been playing with Application Insights for monitoring, but as the documentation is not very descriptive for our situation, I would like some more input on the best strategy and the possibilities for:
Tracking users and metrics without cookies, from e.g. a button click in the application through the server call and the Entity Framework/SQL query (I see that this is currently not supported, How to enable dependency tracking with Application Insights in an Asp.Net Core project), to processing the data and presenting the result on the client.
Separating calls from mobile and standard web in an easy manner in Application Insights queries. Any way to show this in the standard charts that show up initially would be beneficial.
Making sure that our strategy will also fit situations where other external clients access the API; we should be able to identify these easily and see how much load they place on the system.
Doing all of the above with the least amount of code.
This might be worthy of several independent questions if you want specifics on any of them. (And generally your last bullet is always implied, isn't it? :))
What have you tried so far? Most of the "best way for you" kinds of things are going to be opinions, though.
For general answers:
re: tracking users...
If you're already doing user info/auth for other purposes, you'd just set the various context.user.* fields on the incoming request's telemetry context with the info you have. All other telemetry that occurs using that same telemetry context would then inherit whatever user info you already have.
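As a sketch of that, assuming the Application Insights SDK for ASP.NET Core (reading the id from the bearer token's "sub" claim is an assumption on my part):

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.AspNetCore.Http;

public class UserTelemetryInitializer : ITelemetryInitializer
{
    private readonly IHttpContextAccessor _httpContextAccessor;

    public UserTelemetryInitializer(IHttpContextAccessor httpContextAccessor)
    {
        _httpContextAccessor = httpContextAccessor;
    }

    public void Initialize(ITelemetry telemetry)
    {
        // Runs for every telemetry item; whatever is set here is inherited
        // by the requests, dependencies and traces in the same context.
        var user = _httpContextAccessor.HttpContext?.User;
        var id = user?.FindFirst("sub")?.Value; // claim name is an assumption
        if (!string.IsNullOrEmpty(id))
        {
            telemetry.Context.User.AuthenticatedUserId = id;
        }
    }
}
```

Register it with services.AddSingleton<ITelemetryInitializer, UserTelemetryInitializer>() and every item tracked during a request carries the user.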
re: separating calls from mobile and standard...
if you're already running these as different services/domains and using the same instrumentation key for both, then the domain/host info of page views or requests is already there; you can filter/group on it in the portal or write custom queries in the Analytics portal to analyze it that way. If you know which site it is regardless of the host, you could add that as a custom property in the telemetry context; you could also do that to avoid dealing with host info.
re: external callers via an api
similarly, if you're already exposing an API and using auth, you should (ideally) already know who the inbound callers are, and you can set that info in custom properties as well.
In general, custom properties (string:string key/value pairs) and custom metrics (string:double key/value pairs) are your friends. You can set them on contexts so all the events generated in that context inherit the same properties, or you can explicitly set them on an individual TrackEvent (or any of the other Track* calls) to send specific properties/metrics with a single event.
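For example, a single event with explicit properties and metrics might look like this (the event and key names are made up):

```csharp
using System.Collections.Generic;
using Microsoft.ApplicationInsights;

public static class Tracking
{
    public static void TrackCheckout(TelemetryClient client)
    {
        // Properties are string:string, metrics are string:double, exactly
        // as described above; both show up on this one event only.
        client.TrackEvent("CheckoutClicked",
            new Dictionary<string, string> { { "clientType", "mobile-pwa" } },
            new Dictionary<string, double> { { "cartValue", 129.95 } });
    }
}
```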
You can also use telemetry initializers to augment (or, on the server, telemetry processors to filter) any telemetry that's being generated automatically (like requests or dependencies on the server side, or page views and AJAX dependencies on the client side).
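A filtering processor is a small class in the same spirit; this sketch drops fast, successful dependency calls, which is just an illustrative policy:

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

public class FastDependencyFilter : ITelemetryProcessor
{
    private readonly ITelemetryProcessor _next;
    public FastDependencyFilter(ITelemetryProcessor next) { _next = next; }

    public void Process(ITelemetry item)
    {
        // Drop successful dependency calls under 10 ms; pass everything else on.
        if (item is DependencyTelemetry dep &&
            dep.Success == true && dep.Duration.TotalMilliseconds < 10)
        {
            return;
        }
        _next.Process(item);
    }
}
```

In ASP.NET Core it can be wired up with services.AddApplicationInsightsTelemetryProcessor<FastDependencyFilter>(), if I recall the extension correctly.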

Design: Is it correct to push records to database first and then pick from database and push to rabbitmq

We are an e-commerce website with several listings. The listings can be created/updated/deleted from various endpoints: desktop site, mobile site, and apps.
We need to push all this updated information to some third-party APIs: whenever a new listing is created or deleted, or an existing listing is updated, we need to push the complete listing information through a third-party API. We are using RabbitMQ for this since we expect a high volume of record updates.
We have two choices:
Push info like (listingId, action on the listing, i.e. CREATE/UPDATE/DELETE) from all endpoints (desktop/mobile site/app) to RabbitMQ. Consumers then dequeue these messages and hit the appropriate API.
Implement a trigger on the listings table, i.e. on create/update/delete, insert an entry into some table with columns (listingId, database action executed, i.e. CREATE/UPDATE/DELETE). Then create a job to read from this table every 10 seconds and push those entries to RabbitMQ.
Which is the better approach?
I think an HTTP-based API might be the best solution. You can implement a gateway which includes security (OAuth2/SAML), rate limiting, etc. Internally you can use RabbitMQ: the gateway can publish the updates to RabbitMQ, with subscribers that write the data to your master database and other subscribers that publish the data to your third-party APIs.
The added benefit of an HTTP gateway, beyond the extra security controls available, is that you could change your mind about RabbitMQ in the future without impacting your desktop and mobile apps, which would be difficult to update in their entirety.
Having worked with databases for most of my career, I tend to avoid triggers. Especially if you expect large volumes of updates. I have experienced performance problems and reliability problems with triggers in the past.
It is worth noting that RabbitMQ can deliver duplicate messages - there is no exactly-once delivery guarantee. So you will need to implement your own deduplication or, preferably, make all actions idempotent.
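As a sketch of the deduplication angle, assuming the RabbitMQ.Client library: giving each message a deterministic MessageId lets a consumer skip anything it has already processed (the queue name, JSON shape, and version scheme are illustrative):

```csharp
using System.Text;
using RabbitMQ.Client;

public static class ListingPublisher
{
    public static void Publish(string listingId, string action, long version)
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            channel.QueueDeclare(queue: "listing-updates", durable: true,
                exclusive: false, autoDelete: false, arguments: null);

            var props = channel.CreateBasicProperties();
            props.Persistent = true;
            // Same listing + version always yields the same id, so a consumer
            // that records processed ids can safely drop duplicates.
            props.MessageId = $"{listingId}:{version}";

            var body = Encoding.UTF8.GetBytes(
                $"{{\"listingId\":\"{listingId}\",\"action\":\"{action}\"}}");
            channel.BasicPublish(exchange: "", routingKey: "listing-updates",
                basicProperties: props, body: body);
        }
    }
}
```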

How to Work with an Offline DropPoint in Flowgear

I am stuck on an issue with my DropPoint.
Here is the scenario:
-> We are using a Java POST API to insert values into a Sage database (using the Flowgear Sage Evolution node).
-> When we are online and the workflow is called from the API, everything works fine.
-> But when I am offline or without internet (my workflow is not called),
it gives a workflow offline error,
i.e. "DropPoint '****-***' is offline and is required for this Workflow".
So, is there any way to manage the hits and avoid data loss when we are offline? [I will lose the data to be inserted into Sage when I am offline and the API is called.]
Please can you guide me on this.
Thanks
Flowgear isn't really intended to handle this. It would be best to cache content to be sent at the source and keep unsent data until it has been successfully integrated.
That said, here is the recommended way:
Decide where to store unprocessed data. If it's a small amount of data you could use the Flowgear Cacher or Statistics, but it's probably best to use a database (e.g. SQL in Azure).
The workflow that is bound to a REST endpoint and called from your app should be modified to ONLY store data in the intermediate store described above (i.e. its role is to queue data; see the sketch after these steps).
Create a second workflow that uses a timer or trigger to check for data in the intermediate store and process it.
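In code terms, the first workflow's job reduces to the classic "queue table" pattern; a minimal sketch against SQL (the table and column names are mine, not Flowgear's):

```csharp
using System.Data.SqlClient;

public static class PendingQueue
{
    // Role of the REST-bound workflow: persist the payload and return fast.
    public static void Enqueue(string connectionString, string payload)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "INSERT INTO PendingSageInserts (Payload, Processed) VALUES (@p, 0)", conn))
        {
            cmd.Parameters.AddWithValue("@p", payload);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```

The second, timer-driven workflow then selects rows with Processed = 0, pushes them into Sage, and marks them processed only on success, so nothing is lost while the DropPoint is offline.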

WinRT live tile on system startup

I have a live tile working which updates how many users are online and how many lobbies are open within the app. It starts updating when my app loses visibility (there's no point updating the live tile while the app is running), but I want it to update from the moment the computer is turned on.
I have had a look around, and making the app a lock screen app has been mentioned, but that is all; no explanation of how to do it.
Does anyone know how to do this and provide a nice little explanation or link of how to do so?
Many thanks,
Kevin
You should use push notifications for this kind of behaviour. This MSDN link has more info:
Using tile notifications
Choosing the right notification method to update your tile
There are several mechanisms which can be used to update a live tile:
Local API calls
One-time scheduled notifications, using local content
Push notifications, sent from a cloud server
Periodic notifications, which pull information from a cloud server at a fixed time interval
The choice of which mechanism to use largely depends on the content you want to show and how frequently that content should be updated. The majority of apps will probably use a local API call to update the tile when the app is launched or the state changes within the app. This makes sure that the tile is up-to-date when the app launches and exits. The choice of using local, push, scheduled, or polling notifications, alone or in some combination, completely depends upon the app. For example, a game can use local API calls to update the tile when a new high score is reached by the player. At the same time, that same game app could use push notifications to send that same user new high scores achieved by their friends.
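For the local API call case, updating a tile is only a few lines; a minimal sketch using Windows.UI.Notifications (the template choice and text are illustrative):

```csharp
using Windows.Data.Xml.Dom;
using Windows.UI.Notifications;

public static class TileHelper
{
    public static void ShowStats(int usersOnline, int lobbiesOpen)
    {
        // Grab a stock template and fill in its single text field.
        XmlDocument xml = TileUpdateManager.GetTemplateContent(
            TileTemplateType.TileWideText03);
        xml.GetElementsByTagName("text")[0].InnerText = string.Format(
            "{0} users online, {1} lobbies open", usersOnline, lobbiesOpen);

        TileUpdateManager.CreateTileUpdaterForApplication()
            .Update(new TileNotification(xml));
    }
}
```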
You're right with the assumption that you need the lock screen capability to be able to run background tasks without your app having been started. The main approach would be to extract the part of your application that gets the data into a background task, probably triggered by a timer, and write some code to appear on the lock screen.
When I first encountered that restriction I was kind of surprised, but in terms of battery life this design decision makes sense: only consume battery power if the data is absolutely worth it. And if it's worth it, it is also of interest to have it on the lock screen.
MSDN has a good overview of the lock screen along with further reading links. It's much better than what I could type in here. Come back with problems related to the implementation (which actually fits the purpose of SO even better). This blog might be useful, too.
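To give an idea of that part, here's a sketch of registering a timer-triggered background task, assuming the WinRT background task API (the task name and entry point are placeholders; TimeTrigger's minimum interval is 15 minutes, and it only fires for lock screen apps):

```csharp
using System.Threading.Tasks;
using Windows.ApplicationModel.Background;

public static class TileTaskRegistration
{
    public static async Task RegisterAsync()
    {
        // Asks the user to put the app on the lock screen; without that,
        // the TimeTrigger below will never fire.
        var access = await BackgroundExecutionManager.RequestAccessAsync();
        if (access == BackgroundAccessStatus.Denied) return;

        var builder = new BackgroundTaskBuilder
        {
            Name = "TileUpdateTask",
            TaskEntryPoint = "MyApp.Background.TileUpdateTask" // placeholder
        };
        builder.SetTrigger(new TimeTrigger(15, false)); // every 15 minutes
        builder.Register();
    }
}
```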

Error monitor for different applications

Currently we have many applications, each with its own error notification and reporting mechanism, so we clearly have many problems:
Lack of consistent error monitoring across different systems/applications: different GUIs, interfaces, different messages, etc.
Different approaches for error notification per application (many applications use email notifications, other applications publish messages to queue, etc.).
Separated configuration settings for reporting and monitoring per application: notification frequency, message recipients, etc.
You could add many other issues to the list, but the point is clear. Currently there is a plan to develop a custom application or service to provide a consistent and common solution for this situation.
Anyway, I am not sure it is a good idea to create a custom application for this; I am sure there must be a framework, platform, or existing solution or product (preferably open source) that already solves this problem. So my question is: what projects or products should we look at before deciding to build our custom application?
Thanks!
Have a look at AlertGrid; it works as a centralized event handler and notification dispatcher. It can collect events from different sources, and you can easily manage event handling by creating rules in a visual editor. So you can filter events and raise notifications (email, SMS, phone - works worldwide) whenever your custom condition is met. You can react not only to events that occurred but also to ones that did not occur (detect missing 'heartbeats'). All you need is to feed AlertGrid with events (Signals) via a very simple API.
I'm not quite sure if this is what you're looking for. I'm on the AlertGrid dev team, so if you have any questions, feel free to ask. We are constantly developing this tool and appreciate any feedback.
Depending on how much information is written to application logs, you could consider using Hyperic. It's open source and has a lot of the features you are looking for.
http://www.hyperic.com/
Bugsnag is an awesome option if you are looking to do cross platform error monitoring. It supports some 50 platforms in one interface. https://blog.bugsnag.com/react-native-plus-code-push/