Using Mule, how do I pull a file from an FTP site in response to an incoming VM event?

When I get a triggering event on an inbound VM queue, I want to pull a file from an FTP site.
The problem is that the flow needs to be triggered by the incoming VM message, not by the FTP file's availability.
I cannot figure out how to have what is essentially two inputs. I considered using a content enricher, but it seems to call an outbound endpoint. The Composite Source can have more than one input, but it runs when any of its message sources triggers it, not on some combination of sources.
I am setting up early-alert resource monitoring of FTP, file systems, databases, clock skew, trading partner availability, etc. Periodically I would like to read a custom configuration file that says what to check and where to check it, and then send specific requests to other flows.
Some connectors, like File and FTP, do not lend themselves to being triggered by an outside event. The database connector will let me run a select on the fly, but there is no analogue for File and FTP.
It could be that I am just thinking about it in the wrong light, but I am a little stumped. I tried having the VM event trigger a script that starts a flow whose initial state is "stopped"; that flow pulls from the FTP site, but VM seems not to play well with starting and stopping flows, and it begins to feel like a 'cluttered' solution.
Thank you,
- Don

For this kind of scenario, you should use the Mule requester module.
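In case it helps to see the idea behind that suggestion: the requester module lets a flow pull from an endpoint in mid-flow instead of being triggered by it. Below is a rough Java sketch of the same pattern for Mule 3.x, which a VM-triggered flow could invoke as a component; the FTP URL, credentials and timeout are made-up placeholders, not details from the question.

```java
// Minimal sketch (Mule 3.x): a component the VM-triggered flow can call to pull
// one file from FTP on demand. Endpoint URL, credentials and timeout are placeholders.
import org.mule.api.MuleEventContext;
import org.mule.api.MuleMessage;
import org.mule.api.lifecycle.Callable;

public class OnDemandFtpPuller implements Callable {

    @Override
    public Object onCall(MuleEventContext eventContext) throws Exception {
        // Ask Mule for one message from the FTP endpoint, waiting up to 30 seconds.
        MuleMessage file = eventContext.getMuleContext().getClient()
                .request("ftp://user:password@ftp.example.com/outbox", 30000);
        // null means nothing was available before the timeout expired.
        return file == null ? null : file.getPayload();
    }
}
```

The requester module gives you the same thing declaratively: a requester element placed in the flow right after the VM inbound endpoint, pointing at the FTP endpoint, so you avoid writing the component yourself.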

Related

Custom language server: how to get the client to send *all* files to the server, not just those opened/edited by the user?

I'm working on implementing a custom language server and a VSCode language extension. My starting point for the client side is lsp-sample. My server implementation is entirely from scratch, in a different language (not JS).
Currently, I've successfully set up textDocument/didOpen and textDocument/didChange messages to be sent by the client and received by the server. However, I'm having trouble figuring out how to synchronize all files in the VSCode workspace, not just those that the user has opened. I can't find where this is supported in the protocol. The only text document synchronization capabilities I see are for documents opening, closing, and edits. What about all the other documents in the workspace?
For example, in order to handle "goto definition" requests, the server needs to know about definitions in other files, perhaps those that have never been opened or edited by the user.
A hacky solution would be, on the server side, to parse the URI of the workspace and just go load a bunch of files manually. But this seems like something that the LSP should support; perhaps I'm just missing where it's documented. (Also, it feels like I would be violating the spirit of the LSP design to do some covert ops like this behind the scenes, without communicating with the client.)
Perhaps you're looking for the DidChangeWatchedFiles notification. See the specification:
The watched files notification is sent from the client to the server when the client detects changes to files and folders watched by the language client (note although the name suggest that only file events are sent it is about file system events which include folders as well). It is recommended that servers register for these file system events using the registration mechanism. In former implementations clients pushed file events without the server actively asking for it.
Basically, the client is responsible for watching the files you require and sends a notification to the server each time one of them changes. At that point the server is able to load them.
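For illustration only (the question says the real server is written in another language), here is a minimal sketch of handling that notification on the server side using the Eclipse LSP4J types; reindexFile and dropFromIndex are hypothetical helpers standing in for whatever indexing the server does.

```java
import org.eclipse.lsp4j.DidChangeWatchedFilesParams;
import org.eclipse.lsp4j.FileChangeType;
import org.eclipse.lsp4j.FileEvent;

// Fragment of an LSP4J-based WorkspaceService implementation (illustrative only).
public class MyWorkspaceService /* implements org.eclipse.lsp4j.services.WorkspaceService */ {

    public void didChangeWatchedFiles(DidChangeWatchedFilesParams params) {
        for (FileEvent event : params.getChanges()) {
            String uri = event.getUri();
            if (event.getType() == FileChangeType.Created
                    || event.getType() == FileChangeType.Changed) {
                reindexFile(uri);   // hypothetical helper: (re)parse this file into the symbol index
            } else if (event.getType() == FileChangeType.Deleted) {
                dropFromIndex(uri); // hypothetical helper: forget its definitions
            }
        }
    }

    private void reindexFile(String uri) { /* parse the file and record its definitions */ }

    private void dropFromIndex(String uri) { /* remove any stale definitions for this URI */ }
}
```

The server still has to register for these file system events (the dynamic registration mechanism the quoted spec recommends), and the glob patterns it registers determine which files the client watches on its behalf.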

SSL Proxy / Decryption?

One of my clients just received the software he ordered from his chosen developers and asked me to look at it and prepare the hosting procedures.
It's a Java (jar) app, so far so good... but I saw something suspect: every 60 minutes or so the software connects to a remote host on port 443 over SSL, transfers ~3-10 MB of encrypted data (as a POST), then closes the connection. This is very strange. I tried to Wireshark it, but everything is encrypted and I have no clue what kind of data is being transferred; I only know the destination hostname. The data hosted within the app will be highly sensitive (insurance broker), and if my client decides to go with it, this is a serious issue for his business and also for his clients. I've asked the developer company about this and they said that no one added anything like this, even though I provided them the proof (pcap).
I can block it at the firewall, but if they added something like this, there could be other hosts ready to receive the encrypted data.
The only way I can figure it out is to somehow decrypt the SSL traffic in order to read the raw data and give my client all the information he needs to talk with the developer company and sort it out. How can I do that? With some sort of SSL proxy or whatever... I tried to Google it but didn't find any relevant tutorials.
I have access to the physical machine running the Java application; I can see every single bit of the traffic, but it's encrypted.
If I were in your place, instead of trying to decrypt the SSL connection I would try the following steps:
1) Since you know the host to which it is making the POST request, find out more about that service to learn what it does. Maybe try contacting that site, saying "we need to consume your service, what should I send in my POST request?" ;)
2) The second way around would be to decompile the jar file and find the line in the source code that makes the request; then you could go back to the developer and ask why it was written. To find the code that is making the call, you could block access to that host on your firewall. The call would then fail, and most probably the exception would be logged in the application's log files. Find the stack trace and you will know the line of code that is making the request.
Hope this helps.

Good logging strategy for AppHarbor and Amazon S3

I'm hosting an application on AppHarbor that uses NLog for logging. I've been trying the Logentries add-on, which is a nice service to pipe all the application logging through to and then view via their web interface. That has now come to the end of its free trial and I'd like to look at doing my own logging before paying for that service.
Because I'm using AppHarbor, they recommend not writing to the file system because it's wiped on each deploy and, when in flow, I do multiple deployments per day. I'm using S3 for storing images anyway, so it seems natural to store logs there as well.
The problem I can see with that approach is that I would be firing log statements to a text file stored on S3, which I would need to append to. Once the site gets some traffic, there will be multiple, simultaneous calls to store log entries, which will probably end up locking the write mechanism. Is there a better way to do this that I'm not aware of? Maybe batching the log entries somehow before sending them across? I'm using Raven as my database so may look at writing logs directly into Raven if there's no better option.
It doesn't look like there are NLog targets for S3 or RavenDB, but there are a bunch of other options: http://nlog-project.org/wiki/Targets

Apache: simultaneous connections to single script

How does Apache (the most popular version nowadays, I guess) handle a connection to a script when that script is already being executed for another connection?
My guess has always been: upon receipt of a request for a script, the script's contents are copied to memory/compiled/executed, and if during this process there's another request for the same script, the same thing happens (assuming Apache does not lock the script file and simply gives another share of memory/CPU to another compilation/storage/execution).
Or is there a queuing/waiting mechanism involved?
Assume this additional connection is afforded enough memory and CPU and does not exceed the maximum connections setting.
The quick (and easy) answer is that every request is handled by its own process.
Apache listens on a port and each incoming request is given to a process that handles it; with the default prefork MPM, Apache maintains a pool of worker processes, each serving one request at a time, and forks more as needed. That means no memory is shared between requests, and there is no queuing at the script level as long as worker processes are available.
Also take a look at the processes with the "ps" command; you will see one "httpd" (or "apache2") process for each request being served.
Take a look here for the more complex worker MPM: http://httpd.apache.org/docs/2.0/mod/worker.html
and have a look on Google too :) http://docstore.mik.ua/orelly/weblinux2/apache/ch01_02.htm

Having a script that does something continuously at the back end without the need for a browser

I am kind of confused, so please go easy on me. Take any standard web application implemented with MVC, like CodeIgniter or Rails. The scripts get executed only when a browser sends a request, right? So when a user logs in and sends a request, the server receives it and sends him a response.
Now consider the scenario where, apart from the regular application, I also need something like a backend process. For example, a script which checks whether a bidding period has closed, sends mail to the bidders that the bidding is closed, and chooses the winning bid. All these actions have to happen automatically as soon as the bidding time ends.
Now, if this script is part of a regular app, it would have to be triggered by the client (browser), but I don't want that to happen. This should be like a bot script that runs on the server, checking the DB for events and patterns like this.
How do I go about doing something like this? Also, is it possible to have this implemented on regular shared or dedicated hosting where we don't have shell access but only FTP access?
You'd have to write your script as a standalone program and either have it run continuously in the background or have cron (or some other scheduling service; that only works if you're only interested in time-based events) execute it for you.
There are probably hosts that have shell-less ways to do this (fancy GUI interfaces for managing background processes or something), but your run-of-the-mill web host with only FTP access definitely doesn't.
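To make the first option concrete, here is a minimal sketch of such a standalone "bot" process. It is illustrative only: it is shown in Java even though the asker would more likely use PHP or Ruby, and the JDBC URL, credentials, table and column names are invented for the example.

```java
// Illustrative standalone poller: wakes up once a minute and closes expired biddings.
// The JDBC URL, credentials, table and column names are invented for the example.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class BiddingCloser {

    public static void main(String[] args) throws Exception {
        while (true) {
            try (Connection con = DriverManager.getConnection(
                         "jdbc:mysql://localhost/shop", "user", "password");
                 PreparedStatement ps = con.prepareStatement(
                         "SELECT id FROM biddings WHERE ends_at <= NOW() AND closed = 0");
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    long biddingId = rs.getLong("id");
                    // pick the winner, mark the bidding closed, e-mail the bidders...
                }
            }
            Thread.sleep(60_000); // or drop the loop and let cron start this once a minute
        }
    }
}
```

The same loop body is what a cron-driven script would run once per invocation.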
You need a cron job; it's easy to set up on Linux. The cron job will either call the command-line version of PHP with your script or make a local HTTP request with curl or wget.
If you don't have access, then you need an external site that automatically generates periodic HTTP requests. A cheap one is SetCronJob.