WCF SOA - How to implement central messages to display in consumer applications - wcf

This question is more of a design issue than a programming one.
We are following an SOA architecture with WCF services, where multiple web services are consumed by multiple web applications. Let's say three web applications 1, 2 and 3 consume WebService1 (ignore the other web services for this question):
WebApp1 > consumes > WebService1
WebApp2 > consumes > WebService1
WebApp3 > consumes > WebService1
All three web applications consume the same CreateUser() function from WebService1. The service returns a success or error message depending on the outcome of an insert query against a SQL Server database.
Here is the critical point: these success and error messages are returned from the database. A database table acts as a central repository for the messages displayed in all applications. There are multiple applications with multiple messages, but let's focus on 3 applications with 2 messages each. The table structure looks like this:
ApplicationID MessageID MessageText
1 5 User Saved Successfully.
2 6 User Inserted Successfully.
3 7 Record Saved Successfully.
1 8 User save failed.
2 9 User insert failed.
3 10 Record save failed.
Each application can have different MessageIDs with different text, one for success and one for error. I want to return different messages based on the given ApplicationID; each application passes its own ApplicationID in the request to CreateUser().
How should this function be written to return the respective message text based on the ApplicationID?
Does the table design need to change, or can this be implemented in some better way?
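For completeness, the lookup as asked can be done with a single parameterized query keyed on ApplicationID plus an outcome flag. A minimal sketch, assuming the table gains an IsSuccess column to distinguish success rows from error rows (which also answers the design question: the table as shown has no way to tell the two apart other than hard-coded MessageIDs); TryInsertUser and connectionString are placeholders for the existing insert logic:

```csharp
// Sketch of CreateUser returning the per-application message.
public string CreateUser(int applicationId, User user)
{
    bool inserted = TryInsertUser(user); // the existing insert logic

    const string sql =
        @"SELECT MessageText FROM Messages
          WHERE ApplicationID = @AppId AND IsSuccess = @IsSuccess";

    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(sql, conn))
    {
        cmd.Parameters.AddWithValue("@AppId", applicationId);
        cmd.Parameters.AddWithValue("@IsSuccess", inserted);
        conn.Open();
        return (string)cmd.ExecuteScalar();
    }
}
```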

Yes, I think so.
Put the message logic in the web apps, not in the database. Then WebService1 just returns whether the user was created or not, and it is up to each app to display whatever it wants. That way you don't need to patch your database when an app wants to change wording or fix spelling errors.
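Concretely, the service could return only a status and let each consumer map it to its own text. A sketch, with illustrative type and member names:

```csharp
// Service side: return a status code, not display text.
[DataContract]
public enum CreateUserResult
{
    [EnumMember] Success,
    [EnumMember] Failure
}

// Consumer side (e.g. WebApp1): map the status to app-specific text.
public static string ToDisplayMessage(CreateUserResult result)
{
    switch (result)
    {
        case CreateUserResult.Success: return "User Saved Successfully.";
        case CreateUserResult.Failure: return "User save failed.";
        default: throw new ArgumentOutOfRangeException(nameof(result));
    }
}
```

Each app keeps its own wording (WebApp3 would return "Record Saved Successfully." for the same status), and the central messages table disappears entirely.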

Related

Azure Graph API - Device Mgmt. does not return all systems

I have sent the following call to the Microsoft Graph API: https://graph.microsoft.com/beta/deviceManagement/managedDevices. I get a list of devices back, but it is incomplete: client and server systems are missing that our customer can see via the web interface (endpoint.microsoft.com). My first thought was that the client systems are not in Intune, but this isn't the case. If I then make a follow-up query for an individual device, taking an ID that does not come from the response but from the customer's interface, I get an "internal server error". I have made the following queries:
https://graph.microsoft.com/beta/deviceManagement/managedDevices/{DeviceId}/windowsProtectionState
https://graph.microsoft.com/beta/deviceManagement/managedDevices/{DeviceId}/windowsProtectionState/detectedMalwareState
Is it possible to get a meaningful error message there?
How do I retrieve the information for the server systems, and what could be the reason that we do not get all the client systems?
The complete device data might not be returned in a single page; the response will contain a next-page link for the rest of the devices (including the missing server and client systems).
Follow the @odata.nextLink in each response to fetch the rest of the missing systems.
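Following that paging link can be sketched like this (inside an async method, assuming a valid bearer token; the @odata.nextLink and value property names are as Graph returns them):

```csharp
// Page through /deviceManagement/managedDevices by following @odata.nextLink.
using var client = new HttpClient();
client.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", accessToken);

var devices = new List<JsonElement>();
string url = "https://graph.microsoft.com/beta/deviceManagement/managedDevices";

while (url != null)
{
    using var doc = JsonDocument.Parse(await client.GetStringAsync(url));
    // Clone() detaches each element so it survives disposing the document.
    devices.AddRange(doc.RootElement.GetProperty("value")
                        .EnumerateArray().Select(e => e.Clone()));
    url = doc.RootElement.TryGetProperty("@odata.nextLink", out var next)
        ? next.GetString()
        : null;
}
```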
Alternatively, here is another way to get the list of client and server systems (including managed and unmanaged):
API for server system details:
https://graph.microsoft.com/v1.0/devices?$count=true&$filter=startswith(operatingSystem, 'Windows Server')
API for client system details:
https://graph.microsoft.com/v1.0/devices?$count=true&$filter=startswith(operatingSystem, 'Windows')
Note: you can also filter on a specific version, such as Windows 10 or Windows Server 2012 Datacenter. Queries on /devices that use $count are advanced queries and also require the ConsistencyLevel: eventual request header.
Reference: https://learn.microsoft.com/en-us/graph/api/device-list?view=graph-rest-1.0&tabs=http

Conditionally returning a custom response object to the client in Web API 2

I have a Web API 2 service that will be deployed across 4 production servers. When a request doesn't pass validation, a custom response object is generated and returned to the client.
A rudimentary example:
if (!ModelState.IsValid)
{
    var responseObject = responseGenerator.GetResponseForInvalidModelState(ModelState);
    return Ok(responseObject);
}
Currently the responseGenerator is aware of what environment it is in and generates the response accordingly. For example, in development it'll return a lot of detail, but in production it'll only return a simple failure status.
How can I implement a "switch" that turns details on without requiring a round trip to the database each time?
Due to the nature of our environment using a config file isn't realistic. I've considered using a flag in the database and then caching it at the application layer but environmental constraints make refreshing the cache on all 4 servers very painful.
I ended up going with the parameter suggestion and then implementing a token system on the back end. If a debug token is present in the request, the service validates it against the database. If the token is valid and active, it returns the additional detail.
This allows us to control things from our end while keeping things simple for the vendors and only adds that extra round trip to the database during debugging.
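That token check might look roughly like this as a Web API 2 action filter. All names here are illustrative (the header name, the debugTokenRepository, and the request property are assumptions, not the poster's actual implementation):

```csharp
// Illustrative filter: if an X-Debug-Token header is present and valid,
// flag the request so the response generator includes extra detail.
// The database is only hit when a token is actually supplied.
public class DebugTokenAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(HttpActionContext actionContext)
    {
        if (actionContext.Request.Headers.TryGetValues("X-Debug-Token", out var values))
        {
            string token = values.FirstOrDefault();
            if (debugTokenRepository.IsActive(token))
                actionContext.Request.Properties["IncludeDebugDetail"] = true;
        }
    }
}
```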

ASP.NET MVC 4 read-through cache

I have a system that sits on a web server and generates files on the fly in response to HTTP requests. The system is built using ASP.NET MVC 4, and all the code is in the controller. Once the files are generated they don't change very often, so I'd like to implement a cache. How can I implement the following?
User 1 requests document 1 and the web server starts processing it. Meanwhile, user 2 requests the same document 1. I don't want to start generating document 1 a second time; I would like this second request to wait until user 1's request is completed, so that user 2's request can be served from the cache. This may be a basic problem, but I want to understand the solution before I implement caching.
Please help with some samples.
A relatively simple solution would be to cache a Lazy<MyType> instance, where MyType is the type that represents your document.
For the scenario you describe, you should construct the Lazy<MyType> instance with LazyThreadSafetyMode.ExecutionAndPublication. This ensures that the document is generated only once.
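A minimal sketch with System.Runtime.Caching.MemoryCache; MyType and GenerateDocument are placeholders for your document type and generation logic:

```csharp
// Read-through cache: the first request for a key publishes a Lazy<T>
// entry and generates the document; concurrent requests for the same key
// block on the same Lazy<T> and then read the single cached value.
private static readonly MemoryCache Cache = MemoryCache.Default;

public MyType GetDocument(string key)
{
    var lazy = new Lazy<MyType>(
        () => GenerateDocument(key),               // expensive generation
        LazyThreadSafetyMode.ExecutionAndPublication);

    // AddOrGetExisting returns null if our entry won the race;
    // otherwise it returns the entry that was already cached.
    var existing = (Lazy<MyType>)Cache.AddOrGetExisting(
        key, lazy, DateTimeOffset.Now.AddHours(1));

    return (existing ?? lazy).Value;
}
```

Note that generation only starts when .Value is first read, which is why racing requests that cache the same key cannot trigger a second generation.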

Enable debug logging when inserting ApexTestQueueItem records via the SOAP API

I'm inserting several ApexTestQueueItem records into an Org via the Partner API to queue the corresponding Apex Classes for asynchronous testing. The only field I'm populating is the ApexClassId. (Steps as per Running Tests Using the API)
After the tests have run and I retrieve the corresponding ApexTestResult record(s) the ApexLogId field is always null.
For the ApexLogId field the help documents have the Description:
Points to the ApexLog for this test method execution if debug logging is enabled; otherwise, null
How do I enable debug logging for asynchronous test cases?
I've used the DebuggingHeader in the past with the runTests() method but it doesn't seem to be applicable in this case.
Update:
I've found if I add the user who owns the Salesforce session under Administration Setup > Monitoring > Debug Logs as a Monitored User the ApexLogId will be populated. I'm not sure how to do this via the Partner API or if it is the correct way to enable logging for asynchronous test cases.
You've got it right. That's the intended way to get a log.
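If you want to avoid the setup UI, the programmatic equivalent of adding a monitored user is creating a TraceFlag record for that user via the Tooling API (the Partner API does not expose this). A hedged sketch of the REST call; the instance URL, API version, session ID, and the ID of a pre-created DebugLevel record are all placeholders:

```csharp
// Create a TraceFlag for the running user so asynchronous test runs
// produce ApexLog records (and ApexTestResult.ApexLogId gets populated).
var payload = new
{
    TracedEntityId = userId,           // the user running the tests
    DebugLevelId = debugLevelId,       // Id of an existing DebugLevel
    LogType = "USER_DEBUG",
    StartDate = DateTime.UtcNow,
    ExpirationDate = DateTime.UtcNow.AddHours(1)
};

using var client = new HttpClient();
client.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", sessionId);
await client.PostAsync(
    instanceUrl + "/services/data/v58.0/tooling/sobjects/TraceFlag",
    new StringContent(JsonSerializer.Serialize(payload),
                      Encoding.UTF8, "application/json"));
```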

transaction processing scenario simulation in a web application

I am looking into transaction processing, and I cannot find a way to simulate multiple concurrent requests (HTTP methods) to a resource (script) that acts on shared data.
e.g. an HTTP GET representing access to a resource from user1 with param1, and another HTTP GET representing access from user2 with param2.
For example, 2 users trying to book a limited resource "at the same time", or accessing a URL that triggers actions which should have all the ACID properties.
Is there a way to test such scenarios in a web application?
Should I stick to a "programmable" scenario (one I will code myself) that can be run using a stress-test tool?
What method(s) do you use in such cases?
You can use Apache JMeter to set up test scripts that run multiple simulated users, vary the test content, and so on. You can even run slaves on more than one physical test client if you need to increase the load. The requests can be created from templates including user-specific data, picked at random from prepared requests, or built by scripts that generate the data for each request.
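If you just want a quick programmable check before reaching for JMeter, firing the two competing requests concurrently from a small client is often enough to reproduce the race. A sketch (the URL and query parameters are hypothetical; run inside an async method):

```csharp
// Fire two "book the same resource" requests at (nearly) the same time
// and inspect both responses; only one should succeed if the server-side
// transaction handling is correct.
using var client = new HttpClient();

var user1 = client.GetAsync("https://example.test/book?resource=42&user=1");
var user2 = client.GetAsync("https://example.test/book?resource=42&user=2");

var responses = await Task.WhenAll(user1, user2);
foreach (var response in responses)
    Console.WriteLine($"{response.StatusCode}: " +
                      await response.Content.ReadAsStringAsync());
```

This covers one hand-coded scenario; a tool like JMeter is still the better fit once you need many users, ramp-up profiles, or varied request data.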