Setting jt400 server job library list, JOBQ and OUTQ from a Job Description instead of the user profile

I have a web application that replaces a 5250 application. I use jt400 for JDBC and various other IBM i services.
In the 5250 app, if a user had more than one role or business unit, a separate User Profile was created that referenced the appropriate Job Description.
I'm implementing SSO for the Web Apps in the company. So a user with multiple role/business unit responsibilities will only have one IBM i User Profile.
Since IBM i prestarts jt400 server jobs ahead of need, and then swaps identities when an AS400 instance connects, I'd like the library list, Job Queue, and Output Queue to be set from a Job Description, not the User Profile.
Can I do this once on the AS400 object instance?
I'd hate to have to do it manually for every server job to which an AS400 instance connects.

Short answer: you need to set the new OUTQ and LIBL after connecting, by invoking the CL commands on the new AS400 object.
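For illustration, here is a minimal sketch of that approach using jt400's CommandCall class. The library list and output queue values are hypothetical; in practice you would look them up from the appropriate Job Description (for example via the QWDRJOBD API) based on the user's role:

    import com.ibm.as400.access.AS400;
    import com.ibm.as400.access.AS400Message;
    import com.ibm.as400.access.CommandCall;

    public class ServerJobSetup {

        // Hypothetical values - derive these from the role's Job Description.
        private static final String LIBL = "APPLIB ROLELIB QGPL QTEMP";
        private static final String OUTQ = "APPLIB/ROLEOUTQ";

        public static void configure(AS400 system) throws Exception {
            CommandCall cmd = new CommandCall(system);

            // Replace the server job's library list.
            if (!cmd.run("CHGLIBL LIBL(" + LIBL + ")")) {
                for (AS400Message m : cmd.getMessageList()) {
                    System.err.println(m.getID() + ": " + m.getText());
                }
            }

            // Route the job's spooled output to the desired output queue.
            cmd.run("CHGJOB OUTQ(" + OUTQ + ")");
        }
    }

One caveat: each jt400 service (JDBC, command call, IFS, and so on) runs in its own server job, so a command run through CommandCall only affects the command server job. For JDBC connections specifically, the library list can instead be set with the "libraries" connection property.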

Related

FHIR management of notifications

I'm trying to manage a FHIR workflow based on REST APIs for resource CRUD (Patient, Practitioner, and so on).
For workflow handling among different systems I want to use the Task resource, but I don't want to manage the Subscription resource and its architecture.
So I have a question about how to manage notifications.
Which is the correct way: must the different systems poll the server to know whether there's a Task resource to consume, or does the server notify the different systems?
The FHIR version I want to use is R4.
EDIT
We want to create an interoperability platform for exchanging data among three systems. Every system is already in production, developed by a different software house, and we can't modify them.
At present, none of the systems has a FHIR server (as in Option B of the workflow architecture).
Every system can communicate in HL7 v3 / FHIR.
So we want to add a layer with a FHIR server, as in the image below.
In this case:
if System A sends a resource (e.g. an Appointment) to the FHIR server, and System B takes this Appointment to process in its environment, how does the communication scheme work?
The FHIR workflow communication patterns page defines a number of architecture alternatives. One possibility is to create the Task on the fulfiller's system. In that case, no need for polling or subscription. If the Task is created on the placer's system or an intermediary system and you're sticking with pure REST, then the fulfilling system will need to either have a subscription that will result in them receiving a notification about the Task or they'll have to poll. Other non-RESTful options include POSTing to a "process task" operation on the fulfilling system or sending a FHIR message to the fulfilling system.
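As a rough illustration of the polling option against an R4 server, here is a sketch using plain Java HTTP. The base URL is hypothetical, and a real fulfilling system would parse the returned Bundle and update each Task's status as it processes it:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class TaskPoller {

        // Hypothetical base URL of the intermediary FHIR R4 server.
        private static final String BASE = "https://fhir.example.org/r4";

        public static void main(String[] args) throws Exception {
            HttpClient http = HttpClient.newHttpClient();

            // Search for Tasks waiting to be fulfilled by this system.
            HttpRequest req = HttpRequest.newBuilder()
                    .uri(URI.create(BASE + "/Task?status=requested&_sort=_lastUpdated"))
                    .header("Accept", "application/fhir+json")
                    .build();

            HttpResponse<String> rsp =
                    http.send(req, HttpResponse.BodyHandlers.ofString());
            System.out.println(rsp.body()); // a Bundle of matching Task resources
        }
    }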

IBM MobileFirst persistent data table load

In IBM MobileFirst 8.0, how often is the MFP_PERSISTENT_DATA table updated? Once per adapter call by a client? What load on this table can one expect?
Please refer to MFP internal tables for an overview of the tables on all the runtime databases and why they are used.
Persistent data is primarily used for the following purposes:
Client registration data: information about every registered client instance (device and application pair), including information about the device, the application, usernames associated with the client, last activity time, and additional custom attributes.
Client security context: the authentication state of the client. The size of the data depends on the number of security checks used by the application and the size of the state data stored by each security check.
Load on the DB primarily depends on how often the states of the above entries change. In general, a row is created on client registration and updated on every access by the client, which updates the "last activity time" column. If adapter code deals with custom attributes and updates the client registration data, this will in turn lead to a persistent-data update.
I would say it is directly proportional to the number of active client instances, and depends on various factors like security-check state and custom-attribute usage.
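To make the custom-attributes point concrete, here is a sketch of a Java adapter endpoint that touches client registration data. The class and method names follow my reading of the MFP 8.0 adapter API and should be verified against the product documentation; the path and scope name are hypothetical:

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.core.Context;

    import com.ibm.mfp.adapter.api.AdapterSecurityContext;
    import com.ibm.mfp.adapter.api.OAuthSecurity;
    import com.ibm.mfp.server.registration.external.model.ClientData;

    @Path("/")
    public class ReportResource {

        @Context
        AdapterSecurityContext securityContext;

        @GET
        @Path("/ping")
        @OAuthSecurity(scope = "accessRestricted") // hypothetical scope name
        public String ping() {
            // Storing registration data writes to MFP_PERSISTENT_DATA,
            // on top of the routine "last activity time" update.
            ClientData client = securityContext.getClientRegistrationData();
            client.getProtectedAttributes().put("lastPing",
                    String.valueOf(System.currentTimeMillis()));
            securityContext.storeClientRegistrationData(client);
            return "ok";
        }
    }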

Duplex messaging or Azure Queue Service

We have a requirement to develop an Azure-based platform on which the user can configure multiple pharmaceutical instruments, start measurements on them, and analyze the measured data. The typical components of the platform will be the following:
1 - A .NET 4 based client application running on the computer connected to each instrument. This client application should receive the start-measurement command from the Azure platform, perform the measurement, and send the result back to Azure.
2 - A set of services (probably REST based) which will get the results from the client application and update the database in the cloud
3 - A set of services and business logic which can be used to perform analysis on the data
4 - An ASP.NET web application where the user can view instrument details, start measurements, etc.
There is two-way communication between the Azure platform and the client application, i.e. the client needs to send results up to Azure, and Azure needs to initiate measurements on the instrument via the client application.
In such a scenario, what is the recommended approach for the Azure platform to communicate with the clients? Is it one of the following?
1 - Create a duplex service between the client and server and provide a callback interface to start the measurement
2 - Create a command queue using an Azure message queue for each client. When a measurement needs to be started, a message is put on the queue. The client app continuously reads from the queue and executes the command
Or is there another way to do this? Any help is appreciated.
We don't fully understand your scenario and the constraints around it, but as a pointer: we have seen a lot of customers use Azure storage queues to implement a master-worker scenario, where some component adds a message to the appropriate queue to get work done (take measurements, in your case) and workers poll the queue to process that work (the client computer connected to your instrument, in this case).
In terms of storing the results, your master component could give the client SAS access to write results back to a specific blob in an Azure storage account, and have your service and business logic monitor the existence of that blob to start your analysis.
The above approach decouples your client from the server and makes communication asynchronous via storage. Again, these are just pointers, and you are the best person to pick the approach that suits your requirements.
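If you go the storage-queue route, the worker side might look roughly like this. The sketch is in Java using the classic azure-storage SDK, though your client is .NET; the pattern is identical, and the queue name and connection string are hypothetical:

    import com.microsoft.azure.storage.CloudStorageAccount;
    import com.microsoft.azure.storage.queue.CloudQueue;
    import com.microsoft.azure.storage.queue.CloudQueueMessage;

    public class InstrumentWorker {
        public static void main(String[] args) throws Exception {
            CloudStorageAccount account =
                    CloudStorageAccount.parse(System.getenv("STORAGE_CONN"));
            CloudQueue commands = account.createCloudQueueClient()
                    .getQueueReference("measurement-commands");
            commands.createIfNotExists();

            while (true) {
                CloudQueueMessage msg = commands.retrieveMessage();
                if (msg == null) {            // queue empty - back off
                    Thread.sleep(5000);
                    continue;
                }
                String command = msg.getMessageContentAsString();
                // runMeasurement(command);   // drive the instrument here
                commands.deleteMessage(msg);  // delete only after success
            }
        }
    }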
For communication between the server and the client you could use SignalR (http://signalr.net/). There are two forms of messaging systems supported "as a service" on Azure, Service Bus and Message Queues - see http://msdn.microsoft.com/en-us/library/hh767287.aspx

How to design a report request from client machines to be run on an available server

I have a VB.NET 2.0 WinForms project that is full of all kinds of business reports (generated with Excel interop calls) that can be run "on demand". Some of these reports filter through lots of data and take a long time to run - especially on our older machines around the office.
I'd like to have a system where a report request can be made from the client machines, some listener sees it, locates a server with low-load, runs the report on that server, and emails the result to the user that requested it.
How should I design such a system? All our reports take different parameters, and I can't seem to figure out how to deal with that. Does each report generator need to inherit from a "RemoteReport" class that does this work? Do I need a service on one of our servers to listen for these requests?
One approach you could take is to create a database that the clients can connect to, and have the client add a record that represents a report request, including the necessary parameters, which could be passed in an XML field.
You can then have a service that periodically checks this database for new requests and, depending on how many other requests are currently processing, submits the request to the least busy server.
The server would then be able to run the report and email the file to the user.
This is by no means a quick solution and will likely take some time to design the various elements and get them to work together, but it's not impossible, especially considering that it has the potential to scale rather well (by adding more available/more powerful servers).
I developed a similar system where a user can submit a request for data from a web interface, that would get picked up by a request manager service that would delegate the request to the appropriate server based on the type of request, while providing progress indication to the client.
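Sketching that request-table idea (here in Java with JDBC for brevity, though the same shape ports directly to a VB.NET service; the table, server name, and columns are hypothetical):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class ReportDispatcher {

        // Hypothetical schema:
        // CREATE TABLE ReportRequest (
        //   Id INT IDENTITY PRIMARY KEY, ReportName VARCHAR(100),
        //   ParametersXml XML, RequestedBy VARCHAR(100),
        //   Status VARCHAR(20) DEFAULT 'Pending')

        public static void main(String[] args) throws Exception {
            try (Connection db = DriverManager.getConnection(
                    "jdbc:sqlserver://reportsdb;databaseName=Reports",
                    "user", "password")) {
                while (true) {
                    // With a single dispatcher this select-then-mark is safe;
                    // with several dispatchers, claim the row atomically instead.
                    try (PreparedStatement sel = db.prepareStatement(
                            "SELECT TOP (1) Id, ReportName, ParametersXml, RequestedBy "
                          + "FROM ReportRequest WHERE Status = 'Pending' ORDER BY Id");
                         ResultSet rs = sel.executeQuery()) {
                        if (rs.next()) {
                            int id = rs.getInt("Id");
                            markRunning(db, id);
                            // dispatchToLeastBusyServer(id, rs.getString("ReportName"),
                            //     rs.getString("ParametersXml"), rs.getString("RequestedBy"));
                        }
                    }
                    Thread.sleep(10_000); // poll every 10 seconds
                }
            }
        }

        private static void markRunning(Connection db, int id) throws Exception {
            try (PreparedStatement upd = db.prepareStatement(
                    "UPDATE ReportRequest SET Status = 'Running' WHERE Id = ?")) {
                upd.setInt(1, id);
                upd.executeUpdate();
            }
        }
    }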
How about writing a web service that accepts report requests? On completion the reports could be emailed to the users. The web service could provide a Status method that allows your WinForms app to interrogate the current status of the report requests.

Offline client and messages to Azure

I'm playing around with Windows Azure and I would like to build a cloud server application that receives messages from many different clients, such as mobile and desktop.
I would like to build the clients so that they work in "offline mode", i.e. I would like each client to build up a local queue of messages that are sent to the Azure server as soon as the client gets online.
Can I accomplish this using WCF and/or the Azure queuing mechanism, so that I don't have to worry about whether the client is online or offline when I write the code?
You won't need queuing in the cloud to accomplish this. For the client app to be "offline enabled" you need to do queuing on the client. For this there are many options: a local database, XML files, etc. Whenever the app senses network availability you can upload your queue to Azure. And yes, you can use WCF for that.
For the client queue/sync stuff you could take a look at the Sync Framework.
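As a concrete sketch of that client-side outbox (in Java for illustration; the hostname and the send call are hypothetical stand-ins for your WCF or REST call):

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;
    import java.util.List;

    public class OfflineOutbox {

        private static final Path OUTBOX = Paths.get("outbox.log");

        // Always write locally first, so the message survives restarts.
        public static void enqueue(String message) throws IOException {
            Files.write(OUTBOX, (message + "\n").getBytes(StandardCharsets.UTF_8),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        }

        // Call periodically; drains the file once the service is reachable.
        public static void flushIfOnline() throws IOException {
            if (!Files.exists(OUTBOX) || !isReachable("myapp.cloudapp.net", 443)) {
                return;
            }
            List<String> pending = Files.readAllLines(OUTBOX);
            for (String msg : pending) {
                // sendToAzure(msg);  // your WCF/REST call goes here
            }
            Files.delete(OUTBOX);     // only after every message was accepted
        }

        private static boolean isReachable(String host, int port) {
            try (Socket s = new Socket()) {
                s.connect(new InetSocketAddress(host, port), 2000);
                return true;
            } catch (IOException e) {
                return false;
            }
        }
    }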
I haven't found a great need for the cloud queue so far. Maybe it's just that I'm not seeing it from my app's point of view. It could also be that the data you can store in a queue message is minimal - you basically store short text strings (like record IDs), and then you have to do something with the ID when you pull it from the queue, such as look it up, delete it, whatever.
In my app I didn't use the queue at all, just as Peter suggests. I wrote directly to table storage (accessed via its REST interface using StorageClient) from the client. If you want to look at a concrete example, take a look at http://www.netalerts.mobi/traffic. Like you, I wanted to learn Azure, so I built a small web site.
There's a worker role that wakes up every 60 seconds. Using one thread, it retrieves any new data from its source (screen-scraping a web page). New entries are stored directly in table storage (no need for a queue). Another thread deletes entries in table storage that are older than a specified threshold (there's no issue with running multiple threads against table storage). And then I'm working on a third thread which is designed to send notifications to handheld devices.
The app itself is a web role, obviously.
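The thread layout described above is easy to mimic; a minimal sketch of such a periodic worker (generic Java, with the three task bodies left as stubs):

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class TrafficWorker {
        public static void main(String[] args) {
            ScheduledExecutorService pool = Executors.newScheduledThreadPool(3);

            // Thread 1: scrape the source and insert new rows into table storage.
            pool.scheduleAtFixedRate(TrafficWorker::fetchNewData, 0, 60, TimeUnit.SECONDS);

            // Thread 2: purge rows older than the retention threshold.
            pool.scheduleAtFixedRate(TrafficWorker::purgeOldEntries, 0, 60, TimeUnit.SECONDS);

            // Thread 3: push notifications to handheld devices.
            pool.scheduleAtFixedRate(TrafficWorker::sendNotifications, 0, 60, TimeUnit.SECONDS);
        }

        private static void fetchNewData()      { /* screen-scrape + insert */ }
        private static void purgeOldEntries()   { /* delete stale rows */ }
        private static void sendNotifications() { /* notify devices */ }
    }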