How can I set up multiple Continuum servers to share a log file directory on NFS? - continuum

We have two Continuum servers set up in an HA configuration and want to share log data between them using a shared NFS directory. How can we configure the two Continuum servers to both read and write log data to the NFS location?

Two Continuum services cannot hold a file handle to the same log file, because the first service to open it locks the file. Job logging will fail in that scenario. Using NFS for log storage has not been tested, but we are not aware of any reason the logs could not be stored on an NFS share, with the caveat that two services cannot write to the same log file.
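As a rough illustration of why two writers collide, the sketch below (plain Java, not Continuum code; the log path is hypothetical) shows how an exclusive file lock taken by the first process makes a second process's write attempt fail:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class LogLockDemo {
    public static void main(String[] args) throws IOException {
        // Hypothetical shared log path on the NFS mount.
        Path log = Path.of("/mnt/nfs/continuum/logs/job-1234.log");

        try (FileChannel channel = FileChannel.open(log,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            // tryLock() returns null if another process already holds
            // an exclusive lock on this file.
            FileLock lock = channel.tryLock();
            if (lock == null) {
                System.err.println("Log file is locked by another service; this writer would fail.");
                return;
            }
            // ... append log data while holding the lock ...
            lock.release();
        }
    }
}
```

In practice, giving each server its own log directory or file names on the NFS share sidesteps the lock conflict entirely.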

Related

Where are the files stored on CloudHub?

Per my understanding, CloudHub is a PaaS offering and we can deploy applications directly to it. I have the questions below:
Can we create intermediate files on CloudHub? If yes, how can we define the path?
When we use SFTP to pull a file from a particular location, what should the path be on the CloudHub server for processing?
Can we SSH into the CloudHub server?
If we need to externalize the scheduler's cron timings (via configuration, etc., to avoid code changes), what is the best practice for setting the cron expression?
All of the above questions relate to the CloudHub deployment model.
Thanks in advance.
The scheduler already gets externalized in the platform when you deploy to CloudHub.
You can technically store the files in /temp, but don't expect them to persist; that is an ephemeral file system.
You cannot SSH into the CloudHub server.
Rather than downloading the entire SFTP file, saving it, and then working on it, I would suggest streaming it if possible. You can process JSON/XML/CSV files as a stream, and even use deferred DataWeave with them, enabling end-to-end streaming.
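Outside of the Mule/DataWeave specifics, the general streaming idea is sketched below in plain Java: the file is processed record by record instead of being read fully into memory first (the path and record handling are illustrative only).

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class StreamingCsvExample {
    public static void main(String[] args) {
        Path csv = Path.of("/tmp/incoming/orders.csv"); // illustrative path

        // Files.lines() streams the file lazily, so only a small buffer
        // is held in memory at any time, regardless of file size.
        try (Stream<String> lines = Files.lines(csv)) {
            lines.skip(1)                        // skip header row
                 .map(line -> line.split(","))   // naive CSV split, for the sketch only
                 .forEach(StreamingCsvExample::process);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    private static void process(String[] fields) {
        // ... transform or forward one record at a time ...
    }
}
```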

How to write logs to a different file for multiple instances in MFP

I have multiple instances of MFP running, but my problem is that all the instances are writing logs to a single log file. How can I write logs to a different location for each instance?
Assuming that by "multiple instances of MFP" you are referring to multiple MFP runtimes deployed on the same JVM, it is normal to see all logging appear in the same log: SystemOut.log for WAS, messages.log for WebSphere Liberty, etc.
This is because MFP is a layer deployed on top of the application server, and all logging from MFP is directed to the standard logging of the JVM. As such, if you deploy multiple runtime WARs on the same JVM, it is normal for all logging from all the runtimes to appear in the same log. This is no different from multiple EARs/WARs deployed on the same application server logging into the same log file.
If you wish to have different logs for different runtimes, deploy them in different JVM instances.
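If you do run separate JVM instances, one simple pattern (a generic java.util.logging sketch, not MFP-specific configuration) is to derive the log file name from a per-instance system property passed at startup, e.g. -Dinstance.name=mfp-node-1:

```java
import java.io.IOException;
import java.util.logging.FileHandler;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

public class PerInstanceLogging {
    public static void main(String[] args) throws IOException {
        // Hypothetical property, set to a different value for each JVM instance.
        String instance = System.getProperty("instance.name", "default");

        // Each instance writes to its own file, e.g. /var/log/mfp/mfp-node-1.log
        FileHandler handler = new FileHandler("/var/log/mfp/" + instance + ".log", true);
        handler.setFormatter(new SimpleFormatter());

        Logger logger = Logger.getLogger("com.example.mfp");
        logger.addHandler(handler);
        logger.info("Logging for instance " + instance);
    }
}
```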

Apache Tomcat load balancing file replication

We have two Apache servers behind a load balancer. When I upload a file to one server, will it be copied to the other server through load balancing?
Do these two servers maintain replicas of each other?
If not, how can that be done? How do we keep the servers replicated with one another?
If yes, what configuration is required?
Thanks for the help.
Load balancing distributes the requests sent to the load balancer across the servers that actually answer them.
Handling files uploaded to one server is an application-level concern: your application must store them in a location that all nodes can access (a shared filesystem, a database, etc.).
There's nothing Tomcat or an app server can do for you here, because they don't know what needs to be replicated and what doesn't. They can't tell whether something you uploaded will be processed and can then be forgotten, or whether it must be kept for later download.
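As a sketch of handling this at the application level (Tomcat 9-style javax.servlet API; the shared path is hypothetical and would typically be an NFS mount visible to both nodes):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import javax.servlet.ServletException;
import javax.servlet.annotation.MultipartConfig;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.Part;

@WebServlet("/upload")
@MultipartConfig
public class SharedUploadServlet extends HttpServlet {

    // Hypothetical shared mount that every node behind the load balancer can see.
    private static final Path SHARED_DIR = Path.of("/mnt/shared/uploads");

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        Part filePart = req.getPart("file");
        // Keep only the file name to avoid writing outside the shared directory.
        Path target = SHARED_DIR.resolve(Path.of(filePart.getSubmittedFileName()).getFileName());

        // Writing to the shared location makes the file visible to all nodes,
        // so it does not matter which server handled the upload.
        try (InputStream in = filePart.getInputStream()) {
            Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
        }
        resp.setStatus(HttpServletResponse.SC_NO_CONTENT);
    }
}
```

A database or object store works the same way; the point is that the storage location, not the web server's local disk, is the shared source of truth.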

Sharing files across weblogic clusters

I have a WebLogic 12 cluster. Files get pushed to it both through HTTP forms and through scp to a single machine in the cluster. However, I need the files on all the nodes of the cluster. I can run scp myself and copy to every node of the cluster, but I was hoping that WebLogic supported this functionality in some manner. I don't have a disk shared between the machines that would make this easier, nor can I create a shared disk.
Does anybody know?
No, there is no way for WLS to ensure that a file copied to one WLS instance is replicated to another, especially when you are copying it over yourself using scp.
Use a shared storage mount so that all managed servers can refer to the same location without the need for scp.

GCP - CDN Server

I'm trying to architect a system on GCP for scalable web/app servers. My initial intention was to have one disk per web server group hosting the OS, and another hosting the source code, imagery, etc. My idea was to mount the OS disk on multiple VM instances so as to have exact clones of the servers, with one place to store PHP session files (so moving between different servers would be transparent and not cause problems).
The second idea was to mount a second disk, containing the source code and media files, shared between two web servers: one configured as a CDN server and one hosting the main website and backend. The backend would modify/add/delete media files, and the CDN server would supply them to the browser when requested.
My problem arises from reading that a Persistent Disk is only mountable on a single VM instance with read/write access; if it's needed on multiple instances, it can be mounted only with read-only access. I need one of the instances to have read/write access, with the others (possibly many) having read-only access.
Would you be able to suggest ways or methods on how to implement such a system on the GCP, or if it's not possible at all?
Unfortunately, it's not possible.
However, you can create a single-node file server and mount it as a read/write disk on the other VMs.
GCP has documentation on how to create a single-node file server.
An alternative to using a persistent disk (which, as you said, only allows a single read/write mount or many read-only mounts) is to use Cloud Storage, which can be mounted through FUSE.
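If you go the Cloud Storage route, the backend can write objects and the CDN/web tier can read them through the client library instead of a shared disk. A minimal sketch with the google-cloud-storage Java client follows (the bucket and object names are hypothetical, and credentials are assumed to come from the VM's default service account):

```java
import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import java.nio.charset.StandardCharsets;

public class MediaStoreExample {
    public static void main(String[] args) {
        // Uses the default credentials available on the GCE instance.
        Storage storage = StorageOptions.getDefaultInstance().getService();

        // Hypothetical bucket holding media that the CDN/web servers read.
        BlobId blobId = BlobId.of("example-media-bucket", "media/banner.txt");

        // Backend instance writes (needs a read/write role on the bucket).
        byte[] content = "hello from the backend".getBytes(StandardCharsets.UTF_8);
        storage.create(BlobInfo.newBuilder(blobId).setContentType("text/plain").build(), content);

        // Web/CDN instance reads (read-only role is enough); no shared disk required.
        byte[] downloaded = storage.readAllBytes(blobId);
        System.out.println(new String(downloaded, StandardCharsets.UTF_8));
    }
}
```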