I have multiple content servers on different machines, and I need to check the status of every server. I'm concerned about things like disk space, priority, etc.
The solution I'm using now is to install a Windows service on each machine that regularly checks the server, but I have to install the service manually on every server.
Is there any way I can get server information like disk space from a WCF service or a Windows application? I want to create a single watcher for my servers, since I have to add servers from time to time.
Look at Windows WMI (Windows Management Instrumentation): you can make remote calls to other machines as long as you have permission to do so. You would then only have to run one service, on one server, that connects to all the others.
http://msdn.microsoft.com/en-us/library/windows/desktop/aa394582(v=vs.85).aspx
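For illustration, a minimal sketch of such a remote WMI query in C# with System.Management; the server name, account, and password are placeholders:

using System;
using System.Management; // add a reference to System.Management.dll

class DiskSpaceWatcher
{
    static void Main()
    {
        var options = new ConnectionOptions
        {
            Username = @"DOMAIN\monitor", // hypothetical account with remote WMI rights
            Password = "secret"
        };
        var scope = new ManagementScope(@"\\server01\root\cimv2", options);
        scope.Connect();

        // DriveType = 3 restricts the query to local fixed disks.
        var query = new ObjectQuery(
            "SELECT DeviceID, FreeSpace, Size FROM Win32_LogicalDisk WHERE DriveType = 3");
        using (var searcher = new ManagementObjectSearcher(scope, query))
        {
            foreach (ManagementObject disk in searcher.Get())
            {
                Console.WriteLine("{0}: {1} GB free of {2} GB",
                    disk["DeviceID"],
                    Convert.ToUInt64(disk["FreeSpace"]) / (1024 * 1024 * 1024),
                    Convert.ToUInt64(disk["Size"]) / (1024 * 1024 * 1024));
            }
        }
    }
}

Loop that over a server list held by your single watcher; adding a server then means adding a name to the list rather than installing anything.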
A WebLogic server was hacked; the compromise has since been removed.
I am now looking through the infected VMs in a sandbox and want to see what data, if any, was accessed on the application servers.
The app servers were getting hammered with SSH requests, which is how we identified the infected VMs as the WebLogic VMs; we did not have HTTP logging enabled. Is there any way to identify whether any PII was compromised?
I have looked through the secure logs on the WebLogic hosts as well as the PIA logs.
I am not sure how to identify what data, if any, was accessed.
I would like to find out what went out of our network, and what information or data it was.
What should I be looking for?
Is there anything I can learn from examining the WebLogic servers running on Red Hat?
I would want to believe that SSH was not the only service being hammered, and that this was largely an attempt to keep eyes on the auth logging while an attempt was made on other services.
Do you have a time frame that you are working with?
Have the OS logs been checked for that time frame?
Has .bash_history been checked? Environment variables? /etc/pass* for added users? Aliases? Reverse shells among the open network connections? New users created in services running on that particular host?
Was WebLogic the only service running on this publicly available host?
What other services and ports were available?
Was this due to an older version of WebLogic, or to another service, application, or plugin?
Create yourself an Excel spreadsheet and start a timeline.
Look at all the OS-level logging available and make note of anything that looks suspicious, then follow each breadcrumb to exhaustion.
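To make the timeline concrete, here is a rough sketch in C# (paths and the regex are illustrative, and it assumes you have copied /var/log/secure off the host to an analysis box) that turns sshd accept/fail lines into a CSV you can sort into that spreadsheet:

using System;
using System.IO;
using System.Text.RegularExpressions;

class SshTimeline
{
    static void Main()
    {
        // Matches lines such as:
        // Mar  1 12:34:56 host sshd[1234]: Failed password for invalid user admin from 1.2.3.4 port 5555 ssh2
        var pattern = new Regex(
            @"^(?<ts>\w{3}\s+\d+\s[\d:]+).*sshd\[\d+\]:\s(?<event>Accepted|Failed) (?<method>\S+) for (invalid user )?(?<user>\S+) from (?<ip>\S+)");

        using (var outFile = new StreamWriter("ssh_timeline.csv"))
        {
            outFile.WriteLine("timestamp,event,method,user,source_ip");
            foreach (var line in File.ReadLines("secure")) // local copy of /var/log/secure
            {
                var m = pattern.Match(line);
                if (m.Success)
                {
                    outFile.WriteLine("{0},{1},{2},{3},{4}",
                        m.Groups["ts"].Value, m.Groups["event"].Value,
                        m.Groups["method"].Value, m.Groups["user"].Value,
                        m.Groups["ip"].Value);
                }
            }
        }
    }
}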
Azure VM, Cloud Service, or WebJob?
I have a configurable console application which runs continuously. Currently it is running on a VM and consumes a lot of memory (it is basically doing data mining).
The current requirement is to have multiple instances of this application, each with a different set of configuration that can be changed by specific users.
So where should I host this application such that the configuration can be modified through some front end that provides access management (like SharePoint) and the ability to stop/restart it (like a WCF service) without logging on to the VM?
I am open to any suggestions/ideas. Thanks
I don't think there's any solid answer to this question, since preference is a variable, but for what it's worth: if it were up to me, I would deploy it to individual Azure VMs for each specific set of users. That way, if server resource usage went up because of configuration changes a user group made, the impact is isolated to that group, and with Azure the VM can be scaled to meet the resource demand. Then just build a little .NET web app to allow users to authenticate and change configuration settings.
You could expose an "admin" endpoint for your service (obviously you need authentication here!) that:
1. can return the current configuration
2. accept new configuration
3. restart the service (if needed). Stopping the service is harder, since that leaves the question of how to start it again.
Then you need to write your own application (or use a third-party one, like SharePoint or a CMS) that will handle your users and, under the hood, consume your "admin" endpoint.
Edit: The hosting part: if I understand you correctly, your app is just a console application today and you don't know how to host it? Well, there are many answers to that question. If you have an operations department, go talk to them; if you are on your own, play around and see what fits you and your environment best!
My tip: go for an HTTP/HTTPS protocol/interface, simply because there are many web hosts out there and you can easily find tools for that protocol. If you are on the .NET platform, check out Web API or OWIN self-hosting.
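To sketch the shape of such an "admin" endpoint (illustrative only: HttpListener from the BCL stands in for the Web API/OWIN wiring, and the port, path, and config model are made up):

using System;
using System.IO;
using System.Net;
using System.Text;

class AdminEndpoint
{
    static string _config = "{ \"threads\": 4 }"; // stand-in for the real configuration

    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:8080/admin/");
        listener.Start();
        Console.WriteLine("Admin endpoint listening...");

        while (true)
        {
            var ctx = listener.GetContext();
            // A real endpoint must authenticate the caller here!
            if (ctx.Request.HttpMethod == "GET")
            {
                // 1. return the current configuration
                Respond(ctx.Response, _config);
            }
            else if (ctx.Request.HttpMethod == "POST")
            {
                // 2. accept new configuration (a restart hook would hang off
                //    a second path in the same way)
                using (var reader = new StreamReader(ctx.Request.InputStream))
                    _config = reader.ReadToEnd();
                Respond(ctx.Response, "accepted");
            }
        }
    }

    static void Respond(HttpListenerResponse response, string body)
    {
        var bytes = Encoding.UTF8.GetBytes(body);
        response.ContentLength64 = bytes.Length;
        response.OutputStream.Write(bytes, 0, bytes.Length);
        response.Close();
    }
}

Whatever front end you choose then talks to this surface instead of the VM itself.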
Azure now has Machine Learning for data-mining workloads.
You should check whether it suits your scenario.
Otherwise, you can use a WebJob:
It allows you to have multiple instances of your long-running job (WebJob scale-out).
App settings can be changed from the Azure portal or through the Azure Management API.
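As a hedged sketch of that last point (the setting name here is hypothetical): portal app settings reach the WebJob process as environment variables, so the job can read them directly.

using System;
using System.Threading;

class MiningWorker
{
    static void Main()
    {
        // App settings from the Azure portal are exposed to the process as
        // environment variables (on Windows also duplicated with an
        // APPSETTING_ prefix); changing one in the portal restarts the job,
        // which then starts up with the new value.
        var raw = Environment.GetEnvironmentVariable("PollingIntervalSeconds") ?? "60";
        var interval = TimeSpan.FromSeconds(int.Parse(raw));

        while (true)
        {
            Console.WriteLine("Mining pass at {0:u}", DateTime.UtcNow);
            // ... data-mining work goes here ...
            Thread.Sleep(interval);
        }
    }
}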
Our product (a network forensics and analytics tool) has a requirement to dissect RDP sessions on Windows 200x servers and to:
1. Map each session to a logged-in account.
2. Track all TCP/UDP sessions that are going to the internet:
URLs visited
External servers and ports connected to, etc.
I have designed code that can achieve this by installing an NT service on each of the terminal servers. This service mines the data on that server and pushes it to my Linux-based appliance. Alternatively, it can log the information to the local event log, and I can then retrieve it with simple WMI calls.
However, I would like to know whether there is a way to retrieve all TCP/UDP connections by polling the terminal servers externally (via WMI or otherwise) and gather the same information. Basically, I am trying to check whether there is a way to avoid installing anything on the Windows terminal servers.
Thanks,
-Chandra
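For what it's worth, the session-to-account half of this is reachable externally; here is a minimal sketch with WMI (C#, System.Management; the machine name and credentials are placeholders). As far as I know, though, classic WMI on Windows 200x exposes no per-connection TCP/UDP table, so the connection-tracking half likely still needs an agent or capture at the network edge.

using System;
using System.Management; // add a reference to System.Management.dll

class SessionPoller
{
    static void Main()
    {
        var options = new ConnectionOptions
        {
            Username = @"DOMAIN\auditor", // hypothetical account with remote WMI rights
            Password = "secret"
        };
        var scope = new ManagementScope(@"\\termserver01\root\cimv2", options);
        scope.Connect();

        // Win32_LoggedOnUser associates logon sessions with accounts.
        var searcher = new ManagementObjectSearcher(scope,
            new ObjectQuery("SELECT * FROM Win32_LoggedOnUser"));
        foreach (ManagementObject row in searcher.Get())
        {
            // Antecedent is a Win32_Account reference; Dependent is a
            // Win32_LogonSession reference.
            Console.WriteLine("{0} -> {1}", row["Antecedent"], row["Dependent"]);
        }
    }
}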
On my Windows server, I will be hosting a few unrelated websites to which I would like to add the features of OSQA. As such, there will be no shared data between the OSQA instances.
Is it possible to have multiple OSQA instances running off the same database (I'm guessing that if it's not supported, some DB and script tweaking would be required to identify the requesting site), or alternatively (and probably the simplest), to have several OSQA instances running on the same box?
I have taken a look at the BitNami OSQA stack, and this may be the simplest solution. However, it installs Apache, and I wouldn't want multiple instances of Apache running on my box either.
I would also like to be able to access these instances through IIS.
You should be able to install different OSQA instances against the same database server, but you will need to create a separate database (on that database server) for each instance. Unfortunately, we (BitNami) currently support neither IIS nor multiple OSQA installations on the same Apache server, so you will need to set that up manually.
I have a self-hosted WCF service with a startup task that runs
netsh http add urlacl url=https://+:{PORT}/{SERVICENAME} user=everyone listen=yes delegate=yes
Previously the service didn't use SSL, but the old HTTP URL reservation was still there (or was added by something else I'm not aware of).
So do I need to add a netsh http delete urlacl to the startup task?
EDIT:
I remote desktop-ed to the role to check whether the reservation is there.
To help you understand the scenario better: when you deploy your application to the cloud, it runs in a virtual machine within a virtualized environment. Your application will run inside a data center, but the virtual machine is hosted on a host machine, and that host can change at any time for any number of reasons: a guest OS or host OS update, hardware failure, changed resource requirements, or anything else. Because of this, you should not assume that your virtual machine will always be the same; to be more specific, it is "virtual".
You can never assume it will be the same. It often is, but if there were a hardware failure and your role were restarted elsewhere in the data center, it certainly wouldn't be. Any startup task needs to be idempotent.
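So a delete before the add would keep the reservation step idempotent. Here is a sketch of that step as a small helper a startup task could run elevated; the URLs, port, and service name are placeholders standing in for the question's {PORT} and {SERVICENAME}:

using System.Diagnostics;

class RegisterUrlAcl
{
    static void Main()
    {
        // An old HTTP reservation may linger from a previous deployment; remove it.
        RunNetsh("http delete urlacl url=http://+:80/MyService");

        // Delete-then-add makes re-running safe, because "add" fails if the
        // reservation already exists.
        RunNetsh("http delete urlacl url=https://+:443/MyService");
        RunNetsh("http add urlacl url=https://+:443/MyService user=everyone listen=yes delegate=yes");
    }

    static void RunNetsh(string args)
    {
        var p = Process.Start(new ProcessStartInfo("netsh", args)
        {
            UseShellExecute = false
        });
        p.WaitForExit(); // a failed delete ("not found") is expected and fine
    }
}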