How to determine whether a file has passed anti-virus detection?

We have to develop a Java web service running on WebLogic Server 12.2.1 on a Windows Server 2008 R2 server. The web service allows clients to send files to it in Base64 format; the web service decodes the Base64 and writes the decoded binary to actual files on the server.
The server has the Trend Micro OfficeScan client installed, which I was told will scan any file that is copied to the server. If the binary I am writing to disk contains a virus, would the I/O write fail immediately because of the virus detection? I am not exactly sure when the virus scanning takes place. Is it immediately, while a file is in the midst of being created on the server, or after the file has already been created?
I need to know this because we want the web service to send an alert back to the client if the file he sent contains malware. How, then, can the web service determine whether Trend Micro OfficeScan has detected a virus?
Thanks.

If the "real-time protection" option is enabled in the AV, then it will detect the virus immediately after the write operation has completed.
The best way I can think of for your scenario is to invoke the AV programmatically to scan the file, using the AV's command-line options. Then you'll know for sure that the AV has finished the scan, and you'll get the scan results as well. A sketch of the pattern follows.
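As a rough illustration only: the scanner path, arguments, and exit-code convention below are assumptions, not documented OfficeScan options (check Trend Micro's documentation for the real CLI). The sketch is in C# to match the rest of this page; the asker's Java service would do the same thing with ProcessBuilder.

using System;
using System.Diagnostics;

class AvGate
{
    // Hypothetical scanner CLI: path, arguments, and exit codes are
    // placeholders, not real OfficeScan options.
    const string ScannerPath = @"C:\Program Files\ExampleAV\scan.exe";

    static bool IsFileClean(string path)
    {
        var psi = new ProcessStartInfo
        {
            FileName = ScannerPath,
            Arguments = "\"" + path + "\"",
            UseShellExecute = false,
            RedirectStandardOutput = true,
            CreateNoWindow = true
        };

        using (var scan = Process.Start(psi))
        {
            scan.StandardOutput.ReadToEnd();   // drain output so the process can exit
            scan.WaitForExit();
            // Assumed convention: exit code 0 = clean, non-zero = infected or error.
            return scan.ExitCode == 0;
        }
    }
}

The web service would call IsFileClean after writing the decoded bytes to disk, and alert the client when it returns false.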

Related

Using ASP .Net SignalR for Long Polling for a single client?

I have a problem in which I need to compress (zip) multiple files from the web app (on the server) before streaming the archive down to the browser. I'm pulling the files from a separate service that connects to a SQL database, so there's a huge delay in opening the files from the service, as well as a delay in compressing them before the zipped package can be streamed to the browser. Ideally, I would like the DOWNLOAD button on the page to make a call to a SignalR method on the server, which will then push a notification back to the client once the files are done compressing. That way, the browser won't ask the server to stream the zipped file right away; it will only begin streaming once the compression has finished.
Background info: I'm using IIS 7.5 and MVC 4.
I've been reading up and watching videos on SignalR, but have only seen examples of chat hubs, pushing to multiple clients, etc. Would it be possible to use SignalR only for the client that is making the request? If so, I would appreciate some example code or perhaps a link to a tutorial on how one could accomplish something like this. Thanks!
To achieve what you need, you will have to define 3 clients:
The Browser: it calls The Hub when a download is requested, then waits for a call from The Hub before downloading the files.
The Server: it receives a notification from The Hub when the browser requests a download and, when everything is ready, calls The Hub to pass the files.
The Service: it receives the files from The Hub when they are passed from The Server, makes the files ready for download, then sends a notification to The Hub to inform The Browser.
Note
Storing large files in memory is not recommended, and neither is passing them through SignalR, unless that is the only way the server and the service can share the files. If you have common storage (disk or database), it is better to use that.
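To answer the single-client part directly: a hub can target just the requesting connection by its connection ID. Below is a minimal sketch, assuming SignalR 2.x; the hub name, the client method name (downloadReady), and the PackageBuilder helper are all invented for illustration.

using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

// Hypothetical hub: the browser asks for a package, the slow zipping
// work runs in the background, and only the requesting client is
// notified when the download is ready.
public class DownloadHub : Hub
{
    public void RequestDownload(int packageId)
    {
        string connectionId = Context.ConnectionId;

        Task.Run(() =>
        {
            // Assumed helper that pulls the files from the backing
            // service and compresses them.
            string zipUrl = PackageBuilder.BuildZip(packageId);

            // Push the notification back to this one client only.
            GlobalHost.ConnectionManager.GetHubContext<DownloadHub>()
                      .Clients.Client(connectionId)
                      .downloadReady(zipUrl);
        });
    }
}

static class PackageBuilder
{
    public static string BuildZip(int packageId)
    {
        // Placeholder: real code would fetch and compress the files.
        return "/downloads/package-" + packageId + ".zip";
    }
}

On the browser side, the page would subscribe to downloadReady and only then navigate to the returned URL to start the actual download.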

Which is a Better Solution in this scenario of WCF

I have a WCF service that monitors a particular drive and creates a new folder weekly, which I am using as document storage.
I have many drives configured for document storage, and I have to monitor which drive is active (only one drive can be active at a time). On a weekly basis I have to add a new folder to my active drive at a predefined path
provided at configuration time.
The client can make any drive inactive, or a drive can become inactive if it is full, and I need to make another drive active dynamically, using a service, based on priority. For example,
I have the following drives:
Drive A, priority 1, active: yes
Drive B, priority 2, active: no
If A becomes full, I have to make drive B active.
Now, should I implement the WCF service in IIS or as a Windows service? My program has to perform many actions, like checking drive sizes, making another drive active, and sending updates to the database.
Which is the better way, IIS or a Windows service?
I need a service that gets the information about drive paths from the database. I also have a configuration Windows application that needs to communicate with this service to check a drive path and its size; if it is invalid, the application will not configure the drive path, and if it is valid, it will keep the entry in the database. Any client can have multiple directories, but only one directory will be active, so that I can store documents in it.
What about performance? And can I configure WCF under IIS so that IIS does not recycle the application pool? I want my service to run periodically, say every 30 minutes. – Nitin Bourai
It seems to me a better architecture would be to have a service responsible for persisting your documents; it can then decide where (and how) to store them and where to read them from, based on who's requesting them, how much disk space is available, etc. This way all your persistence implementation details are hidden from consumers: they only need to care about documents, not about how they are persisted.
As to how to host it... there is a lot of useful information out there documenting both:
IIS : here
Windows Service: here
Both would be more than capable of hosting such a service.
I would go with a Windows service in this case. Unless I misunderstand, you want this all to happen with no human intervention, correct? So I don't see a contract, which means it's not a good candidate for WCF.
As I see it, both a Windows service and an IIS-hosted service will work well in your scenario. Having said that, I would go with the Windows service. It is partly a matter of feeling, but I think you get a little more configuration support 'out of the box': I believe it is easier to configure what to do if the service fails to start, configure the user you want the service to run as, and so on.
But, as I said, it is a matter of feeling. A hosting sketch follows.
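For reference, a minimal sketch of hosting a WCF service inside a Windows service. The contract, type names, and the placeholder path check are invented for illustration; the real service would add the timer-driven drive monitoring.

using System.IO;
using System.ServiceModel;
using System.ServiceProcess;

// Illustrative contract: the real one would expose whatever the
// configuration application needs to call.
[ServiceContract]
public interface IDriveManager
{
    [OperationContract]
    bool IsDrivePathValid(string path);
}

public class DriveManager : IDriveManager
{
    public bool IsDrivePathValid(string path)
    {
        return Directory.Exists(path);   // placeholder validity check
    }
}

public class DriveManagerWindowsService : ServiceBase
{
    private ServiceHost host;

    static void Main()
    {
        ServiceBase.Run(new DriveManagerWindowsService());
    }

    protected override void OnStart(string[] args)
    {
        host = new ServiceHost(typeof(DriveManager));
        host.Open();   // endpoints and bindings come from app.config

        // A timer would go here for the periodic (e.g. every 30 minutes)
        // drive-size check and active-drive switchover.
    }

    protected override void OnStop()
    {
        if (host != null) host.Close();
    }
}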

Web Server being used as File Storage - How to improvise?

I am making a DR plan for a web application hosted on a production web server. That web server also acts as file storage, holding the feed upload files (used by the web application as input) and the report files (the output of the web application's processing). If the web server goes down, the file data is lost as well, so I need to design a solution, and give recommendations, that eliminates this single point of failure.
I have thought of some recommendations, as follows:
1) Use a separate file server; however, this requires new resources.
2) Attach a data volume mounted on the web server that is mapped to some network filer (network storage), which can be used to store the feeds and reports. In case the web server goes down, the network filer can be mounted and attached to the contingency web server.
3) There is one more web server, which is load balanced but not currently used as file storage. If we can implement a feature that regularly backs up the file data to that load-balanced second web server, we can start using it in case the first web server goes down. The backup can be done through a backup script, a separate Windows service, or a scheduled job that runs the backup every night; a sketch of such a job follows this list.
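For option 3, a minimal sketch of the nightly backup job; the paths and server name are placeholders, and a scheduled task would run this against the second web server's share.

using System.IO;

class FeedBackup
{
    // Placeholder paths: local storage, and the UNC share on the
    // load-balanced second web server.
    const string Source = @"D:\AppData\Feeds";
    const string Target = @"\\WEB02\Backup\Feeds";

    static void Main()
    {
        foreach (string file in Directory.GetFiles(Source, "*", SearchOption.AllDirectories))
        {
            string dest = file.Replace(Source, Target);
            Directory.CreateDirectory(Path.GetDirectoryName(dest));

            // Copy only new or updated files.
            if (!File.Exists(dest) ||
                File.GetLastWriteTimeUtc(file) > File.GetLastWriteTimeUtc(dest))
            {
                File.Copy(file, dest, overwrite: true);
            }
        }
    }
}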
Please help me review the above, or suggest new recommendations, to help eliminate this single-point-of-failure problem on the web server. It would be highly appreciated.
Regards
Kapil
I've successfully used Amazon's S3 to store the "output" data of web and non-web applications. Using a service like that is beneficial from the single-point-of-failure perspective because then any other instance of that web application, or a different type of client, on the same server or in a completely different datacenter still has access to the same output files. Another similar option is Rackspace's CloudFiles.
Both of these services are very redundant, and you could use them as the backup and keep the primary storage on your server, or use them as the primary and keep a backup on your other web server. There are lots of options! Hope this info helps.
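As an illustration of the S3 route, a minimal sketch using the AWS SDK for .NET; the bucket name and region are placeholders, and credentials are assumed to come from the SDK's standard credential chain.

using Amazon;
using Amazon.S3;
using Amazon.S3.Transfer;

class ReportArchiver
{
    const string Bucket = "my-report-bucket";   // placeholder bucket name

    static void ArchiveReport(string localPath)
    {
        using (var s3 = new AmazonS3Client(RegionEndpoint.USEast1))
        {
            // TransferUtility handles multipart uploads for larger files.
            new TransferUtility(s3).Upload(localPath, Bucket);
        }
    }
}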

Getting Working Processes within IIS App Pools

I am looking for a way to enumerate through the Virtual Directories (Windows Server 2003) in an App Pool and get diagnostic data (specifically WorkingSet, Private Bytes, and Virtual Bytes).
I've found plenty on how to enumerate through a server's App Pools, and getting the Virtual Directories within, but what do I need to do in order to obtain diagnostic data?
Basically I want to add a script that grabs this data for a monitoring app (NAGIOS). We have a script that already grabs the top 2 running worker processes on the server, but we don't know what app pool they belong to.
Thanks.
As you've discovered, it's a two-step process: you need to look up resource utilization for every worker process, and you also need to know which app pool corresponds to each worker process.
You've already figured out the first part. Here's how to do the other part: in Windows Server 2003 there's a command-line script called iisapp.vbs; see the documentation for more details. The output from this tool will look like this:
W3wp.exe PID: 2232 AppPoolID: DefaultAppPool
W3wp.exe PID: 2608 AppPoolID: MyAppPool
Simply parse the output from this script and you'll be able to tie process IDs to App Pools. Then look up each process by ID or filter your existing list of enumerated processes to find the matching Process ID.
There may also be additional restrictions around security, and specific IIS configuration may be needed; see the documentation link above.
Note that Windows Server 2008 uses a different command, appcmd list wp, with a different output format, so this solution is specific to Windows Server 2003.
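Putting the two parts together, a minimal sketch of a monitoring probe in C#. The script path and the output shape are taken from the sample above, but treat both as assumptions to verify on your server.

using System;
using System.Diagnostics;
using System.Text.RegularExpressions;

class AppPoolMemory
{
    static void Main()
    {
        // Run the Windows Server 2003 helper script and capture its output.
        // The script location is the usual default; adjust if yours differs.
        var psi = new ProcessStartInfo("cscript",
            @"//NoLogo C:\Windows\System32\iisapp.vbs")
        {
            UseShellExecute = false,
            RedirectStandardOutput = true
        };

        string output;
        using (var p = Process.Start(psi))
        {
            output = p.StandardOutput.ReadToEnd();
            p.WaitForExit();
        }

        // Expected line shape: "W3wp.exe PID: 2232 AppPoolID: DefaultAppPool"
        var line = new Regex(@"PID:\s*(\d+)\s+AppPoolID:\s*(\S+)");
        foreach (Match m in line.Matches(output))
        {
            int pid = int.Parse(m.Groups[1].Value);
            string pool = m.Groups[2].Value;

            // Tie the app pool name to the worker process's counters.
            var wp = Process.GetProcessById(pid);
            Console.WriteLine("{0}: WorkingSet={1} PrivateBytes={2} VirtualBytes={3}",
                pool, wp.WorkingSet64, wp.PrivateMemorySize64, wp.VirtualMemorySize64);
        }
    }
}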

Strange WCF Error - IIS hosted - context being aborted

I have a WCF service that does some document conversions and returns the document to the caller. When developing locally, the service is hosted on the ASP.NET Development Server; a console application invokes the operation, and it executes within seconds.
When I host the service in IIS via a .svc file, two of the documents work correctly; the third one bombs out. It begins to construct the Word document using the Open XML SDK, but then just dies. I think this has something to do with IIS, but I cannot put my finger on it.
There are a total of three types of documents I generate. In a nutshell, this is how it works:
SQL 2005 DB/IBM DB2 -> WCF service written by another developer to expose the data. That service has only one endpoint, using basicHttpBinding.
My service invokes his service, gets the relevant data, uses the Open XML SDK to generate a Microsoft Word document, saves it on a server, and returns the path to the user.
The word documents are no bigger than 100KB.
I am also using basicHttpBinding although I have tried wsHttpBinding with the same results.
What is amazing is how fast it is locally, and even more that two of the documents generate just fine; it's the third document type that refuses to work.
To the error message:
An error occurred while receiving the HTTP response to http://myservername.mydomain.inc/MyService/Service.Svc. This could be due to the service endpoint binding not using the HTTP protocol. This could also be due to an HTTP request context being aborted by the server (possibly due to the server shutting down). See server logs for more details.
I have spent the last two days trying to figure out what is going on. I have tried everything, including changing maxReceivedMessageSize, maxBufferSize, maxBufferPoolSize, etc. to large values. I even included:
<httpRuntime maxRequestLength="2097151" executionTimeout="120"/>
to see whether IIS was choking because of that.
Programmatically the service does nothing special; it just constructs the Word documents from the data using the Open XML SDK. Like I said, locally all three documents work when invoked via a console app against the ASP.NET dev server, i.e. http://localhost:3332/myService.svc
When I host it on IIS and I try to get a Windows Forms application to invoke it, I get the error.
I know you will ask for logs, so yes, I have logging enabled on my host.
And there is no error in the logs; I am logging everything.
Basically I invoke two service operations written by another developer.
MyOperation calls HisOperation1 and then HisOperation2; both of those calls give me complex types. I am going to look at his code tomorrow, because he is using LINQ to SQL and there may be some funny business going on there. He uses a variety of collections, etc., but the fact that I can run the exact same document, let's call it "Document 3", within seconds when the service is hosted locally on the ASP.NET WebDev Server is what is most odd. Why would it run on scaled-down Cassini and blow up on IIS?
From the log it seems that, after calling HisOperation1 and HisOperation2, the service just goes into la-la land and dies. There is an application pool (w3wp.exe) error in the Windows Event Log:
Faulting application w3wp.exe, version 6.0.3790.1830, stamp 42435be1, faulting module kernel32.dll, version 5.2.3790.3311, stamp 49c5225e, debug? 0, fault address 0x00015dfa.
It is classified as a .NET 2.0 Runtime error.
Any help is appreciated, the lack of sleep is getting to me.
Help me Obi-Wan Kenobi, you're my only hope.
I had this message appearing:
An error occurred while receiving the HTTP response to http://myservername.mydomain.inc/MyService/Service.Svc. This could be due to the service endpoint binding not using the HTTP protocol. This could also be due to an HTTP request context being aborted by the server (possibly due to the server shutting down). See server logs for more details.
And the problem was that the object I was trying to transfer was not [Serializable]. The object I was trying to transfer was a DataTable.
I believe the Word documents you were trying to transfer are also non-serializable, so that might be the problem.
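For illustration, the usual fix is to transfer a type the DataContractSerializer understands; the type and member names below are invented, not from the original service.

using System.Runtime.Serialization;

// A WCF-friendly result type: every member that goes over the wire is
// explicitly part of the data contract.
[DataContract]
public class ConversionResult
{
    [DataMember]
    public string DocumentPath { get; set; }   // e.g. where the .docx was saved

    [DataMember]
    public byte[] Content { get; set; }        // raw bytes, if returning the file itself
}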
Yes, we'd want logs, or at least some idea of what you're logging. I assume you have both message and transport logging on at the WCF level.
One thing to look at is permissions. When you run under Cassini, the web server runs as the currently logged-in user. This hides any SQL or CAS permission problems (as, let's be honest, your account is usually a local administrator). As soon as you publish to IIS, you are running under the application pool user, which is, by default, a lot more limited.
Try turning on IIS debug dumps and following the steps in KB919789.
FYI, I changed IIS 6 to work in IIS 5.0 isolation mode and everything works. Odd.
I had the same error when using an IEnumerable<T> DataMember in my WCF service. It turned out that in some cases I was returning an IQueryable<T> as an IEnumerable<T>, so all I had to do was add .ToList<T>() to my LINQ statements.
I changed the IEnumerable<T> to IList<T> to prevent making the same mistake again. In code, the change looks like the sketch below.
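A minimal before-and-after sketch; the Order type and the queryable source are stand-ins (in the real service the source would be something like a LINQ to SQL table such as db.Orders).

using System.Collections.Generic;
using System.Linq;

public class Order
{
    public bool IsOpen { get; set; }
}

public class OrderService
{
    // Stand-in for a LINQ to SQL table.
    private readonly IQueryable<Order> orders = new List<Order>().AsQueryable();

    // Buggy shape: returns a live IQueryable<T> typed as IEnumerable<T>.
    // WCF enumerates it during serialization, after the operation has
    // returned, which can abort the request context.
    public IEnumerable<Order> GetOrdersLazy()
    {
        return from o in orders where o.IsOpen select o;
    }

    // Fixed shape: ToList() runs the query inside the operation, so a
    // plain, serializable list is returned instead.
    public IList<Order> GetOrders()
    {
        return (from o in orders where o.IsOpen select o).ToList();
    }
}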