Azure VM, Cloud Service, or WebJob?
I have a configurable console application that runs continuously. Currently it runs on a VM and consumes a lot of memory (it is basically doing data mining).
The current requirement is to run multiple instances of this application, each with a different set of configuration that can be changed by specific users.
So where should I host this application so that the configuration can be modified through some front end that provides access management (like SharePoint) and the ability to stop/restart it (like a WCF service), without logging on to the VM?
I am open to any suggestions/ideas. Thanks
I don't think there's any solid answer to this question, as preference comes into it, but for what it's worth: if it were up to me, I would deploy it to an individual Azure VM for each specific set of users. That way, if server resource usage goes up because of configuration changes a user group made, it is isolated to that group, and with Azure it can scale to meet the resource demand. Then just build a small .NET web app that lets users authenticate and change configuration settings.
You could expose an "admin" endpoint for your service (obviously you need authentication here!) that can:
1. return the current configuration
2. accept new configuration
3. restart the service (if needed). Stopping the service outright is harder, since that leaves the question of how to start it again.
Then you need to write your own application (or use a third-party one, like SharePoint or a CMS) that handles your users and, under the hood, consumes your "admin" endpoint.
Edit, regarding the hosting part: if I understand you correctly, your app is just a console application today and you don't know how to host it? Well, there are many answers to that question. If you have an operations department, go talk to them; if you are on your own, play around and see what fits you and your environment best!
My tip: go for an HTTP/HTTPS protocol/interface, simply because there are many web hosts out there and you can easily find tools for that protocol. If you are on the .NET platform, check out Web API or OWIN (for self-hosting).
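For illustration, here is a minimal sketch of such an "admin" endpoint as an OWIN self-hosted Web API 2 controller. The JobConfig shape, the routes, and the restart hook are assumptions of mine rather than anything from the question:

```csharp
using System.Web.Http;
using Owin;

// Hypothetical configuration payload for the long-running job.
public class JobConfig
{
    public int PollIntervalSeconds { get; set; }
    public string DataSource { get; set; }
}

[Authorize] // authentication is essential on an endpoint like this
public class AdminController : ApiController
{
    private static JobConfig _current = new JobConfig();

    // 1. Return the current configuration.
    [HttpGet, Route("admin/config")]
    public JobConfig GetConfiguration() => _current;

    // 2. Accept new configuration.
    [HttpPut, Route("admin/config")]
    public IHttpActionResult PutConfiguration(JobConfig config)
    {
        _current = config; // the worker loop would pick this up on its next pass
        return Ok();
    }

    // 3. Restart the job (wire this to however your host restarts the worker).
    [HttpPost, Route("admin/restart")]
    public IHttpActionResult Restart()
    {
        // e.g. signal the worker thread to stop and start again
        return Ok();
    }
}

// OWIN startup for self-hosting (Microsoft.AspNet.WebApi.OwinSelfHost package).
public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        var config = new HttpConfiguration();
        config.MapHttpAttributeRoutes();
        app.UseWebApi(config);
    }
}
```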
Azure now has Machine Learning for data-mining workloads.
You should check whether it suits your needs.
Otherwise, you can use a WebJob:
1. It allows you to have multiple instances of your long-running job (WebJob scaling out).
2. AppSettings can be changed from the Azure Portal or by using the Azure Management API.
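As a sketch of that second point, here is a minimal continuous WebJob whose behaviour is driven by AppSettings; the setting names and the mining logic are placeholders, not anything from the question:

```csharp
using System;
using System.Configuration;
using System.Threading;

class Program
{
    static void Main()
    {
        // A continuous WebJob simply keeps Main running; this loop
        // re-reads its settings on every pass.
        while (true)
        {
            // App settings configured in the Azure Portal are surfaced
            // to the process; these names are hypothetical.
            string source = ConfigurationManager.AppSettings["MiningDataSource"];
            int batchSize = int.Parse(ConfigurationManager.AppSettings["BatchSize"] ?? "100");

            RunMiningPass(source, batchSize); // your existing console-app logic

            Thread.Sleep(TimeSpan.FromSeconds(30));
        }
    }

    static void RunMiningPass(string source, int batchSize)
    {
        // ... data-mining work goes here ...
    }
}
```

Note that changing an app setting in the portal typically restarts the site (and with it the WebJob), so fresh values are picked up either way.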
I have an active Blazor WASM (ASP.NET Core hosted) site that is being used by one entity at the moment. Whenever I perform an update, I wait until they're not using it (perhaps for no good reason).
Going forward, I hope to have multiple users, and there will not be a common window when the site is not in use. I'm sure this is not a unique requirement, so I'm pretty sure it's all covered under the hood, but if you can answer the following it would help clarify things for me.
When a client connects to the server, the application is downloaded to the client...
Is this done by the script "_framework/blazor.webassembly.js"?
If I publish an update to my site at my host whilst the application is in use...
Am I right to assume that the application pool running in memory will persist until I force a 'recycle'?
Does this mean that my published code will have no effect until the pool is recycled? Or do new connections get the new app?
If I force a recycle whilst there are active users, might I not end up with a downloaded client app that is out of sync with the running server app?
A WebLogic server got hacked, and the problem has since been removed.
I am now looking through the infected VMs in a sandbox and want to see what data, if any, was accessed on the application servers.
The app servers were getting hammered with SSH requests, which is how we identified the infected VMs as the WebLogic VMs; we did not have HTTP logging on. Is there any way to identify whether any PII was compromised?
I have looked through the secure logs on WebLogic as well as the PIA logs, but I am not sure how to identify what data, if any, was accessed.
I would like to find out what information or data left our network.
What should I be looking for?
Is there anything I can learn from looking at the WebLogic servers running on Red Hat?
I would suspect that SSH was not the only service being hammered, and that this was a large-scale attempt to keep eyes on the auth logging whilst an attempt on other services was made.
Do you have a time frame that you are working with?
Have the OS logs been checked for that time frame?
Has .bash_history been checked? Environment variables? /etc/pass* for added users? Aliases? Reverse shells open among the network connections? New users created on services running on that particular host?
Was WebLogic the only service running on this publicly available host?
What other services and ports were available?
Was this due to an older version of WebLogic, or another service, application, or plugin?
Create yourself an Excel spreadsheet and start a timeline.
Look at all the OS-level logging available and make note of anything that looks suspicious, then follow each breadcrumb to exhaustion.
I have a few WCF services, deployed on IIS, which are consumed by some Android devices. I have to move all my databases and services to Azure. I googled how to deploy WCF on Azure and found the concept of a WebRole and the Azure Cloud Service project. See THIS SO POST
But at the same time, I tried creating a new web app in Azure and simply published my WCF services project there, and it worked fine (I tested with a client).
My questions are:
What is the difference between the method mentioned in the post I linked and the way I deployed? I'm concerned because I'm expecting a high volume of requests.
Is it okay to deploy my already created/ready services the way I did?
Which is preferred?
Both (Web App and Web Role) will work for your scenario.
The main difference between these two approaches is that a Web Role allows you to connect to it through Remote Desktop.
You can keep using the Web App and configure autoscaling (based on CPU usage, or at specific times).
I have a slightly complicated setup that, of course, works fine in XP, but chokes on Windows 7. It may seem like madness, but it made sense at the time!
I have a WPF application that launches and then launches another application that communicates with an external device. After launching, it establishes communication with the new process using WCF (hosted by the new process) via a named pipe (net.pipe). This seems to work fine on either OS.
I wanted to make some of the functionality of the WPF application externally available to a command line program, so I set up another WCF service, this time hosted by the WPF application and again exposed it via named pipes. Again, this seems to work.
Next, I wanted to make the functionality of the WPF application available via the web. Now, it's important that the WPF application be runnable from a regular user account, so I thought the best way to make this work on Windows 7 would be to create a windows service that would provide the web service part and have it communicate back to the WPF application via the same named pipe that works fine for the command line. I implemented this and it runs fine on XP, but it chokes on Windows 7. The problem seems to be with trying to establish the named pipe connection between the windows service and the WPF application.
If I run the WPF app as an administrator, it works fine. So it seems the problem is that the account the Windows service runs under can't communicate via named pipes with a regular user account hosting the WCF service. Is there a way to make this work? It seems a WCF service running in a regular user account can communicate over named pipes with another app running in the same account, but not with an app in a different account.
Oddly, the reverse seems to work. The windows service does, in fact, also expose a service with a named pipe binding (it's used as an activation function since the service is running all the time). I can connect from the WPF app to this service without any problems.
My knowledge of security is somewhat limited. Can anybody shed some light on what's going on?
This question has been asked several times previously on SO. For example, see Connecting via named pipe from windows service to desktop app
The problem is that your user session applications don't possess the SeCreateGlobalPrivilege security privilege necessary to allow them to create objects in the global kernel namespace visible to other sessions, but only in the local namespace which is only visible within the session. Services, on the other hand, which run with this privilege by default, can do so.
It is not the named pipe object itself which is constrained to the local namespace in this way, but another named kernel object, a shared memory section, on which the WCF named pipe binding relies in order to publish to its clients the actual name of the pipe, which is a GUID which changes each time the service is started.
You can get round this constraint by reversing the roles - make the windows service application the WCF Service, to which your user session apps connect. The windows service has no problem publishing its service to your session. And connecting things up this way round makes more sense because the windows service is always running, whereas your session and its apps comes and goes as you log in and out. You'll want to define the service with a duplex contract, so that once the connection is established, the essential flow of communication over the WCF service can still happen in the same direction you originally intended.
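To make the suggested role reversal concrete, here is a minimal duplex sketch over net.pipe with the Windows service as the WCF host. All of the type names and the endpoint address are illustrative assumptions:

```csharp
using System.ServiceModel;

// Callback contract: lets the Windows service push calls back into the
// WPF app once the app has connected.
public interface IDesktopCallback
{
    [OperationContract(IsOneWay = true)]
    void Execute(string command);
}

[ServiceContract(CallbackContract = typeof(IDesktopCallback))]
public interface IControlService
{
    // Called by the WPF app at startup to hand the service its callback channel.
    [OperationContract]
    void Register();
}

// Service-side implementation, hosted in the Windows service.
public class ControlService : IControlService
{
    public static IDesktopCallback Desktop { get; private set; }

    public void Register()
    {
        // Capture the callback channel so the service can call into the
        // user session later, e.g. when a web request arrives.
        Desktop = OperationContext.Current.GetCallbackChannel<IDesktopCallback>();
    }
}

// Client side, run from the WPF app in the user session.
public static class ControlClient
{
    public static IControlService Connect(IDesktopCallback callback)
    {
        var factory = new DuplexChannelFactory<IControlService>(
            new InstanceContext(callback),
            new NetNamedPipeBinding(),
            new EndpointAddress("net.pipe://localhost/ControlService"));
        var channel = factory.CreateChannel();
        channel.Register();
        return channel;
    }
}
```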
The applications (WPF/console) are creating locally scoped named pipes (this happens by default when they are unable to create globally scoped pipes). My guess is that they can communicate with each other because they can see each other's named pipes, since they are running under the same account.
The windows service has higher privileges and can therefore create a globally scoped named pipe for the client applications to see.
You can check out a discussion on Christian Weyer's Blog.
Kindly help me architect a solution required for my ongoing project.
I have developed some WCF services hosted as Windows services, and they are working fine so far. Now I am asked to develop a master WCF-type service that is intelligent enough to monitor all the other WCF services for possible corruption/errors and can repair and restart them.
Thanks in advance.
As we have written a custom host, and it took us years to make it a real application server, I will share some of the challenges we had. Creating a custom host that manages WCF services as an NT service is a very challenging task if you want to manage all of the details and treat the NT service as a real service host. The challenges range from managing multiple AppDomains (one for each service), managing the statuses of the services, startup times, and deployments from the IDE, to the worst of all: activation. Have you considered how to implement that? If you do not have this feature, it means that all of your services will be active and in memory at all times. IIS and AppFabric do that very well and, trust me, it is not easy to implement. The other challenging part was a UI to manage this host and, of course, a UI that can manage multiple hosts (NT services running on different boxes). Do you need a discovery proxy implementation? And lastly, what if you want to manage services running in your custom host, IIS, and AppFabric all the same way?
Think about all this before attempting such an implementation, because the scope may creep on you as you go.
I do something similar here.
Create a Dictionary<key, ApplicationDomain> collection in your main program.
Key: something unique for each application domain, like a Guid or a System.Type.
That ApplicationDomain class exposes an internal property to access your AppDomain proxy (the one that inherits from MarshalByRefObject).
Load your WCF host into the main program, so you have access to that collection.
Every time your service is accessed, you just take that key, access your proxy, and do anything you want with your service host.
Key point: your service must have access to all your service hosts.
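Here is a minimal sketch of that pattern; the type names and the proxy wiring are illustrative assumptions, and error handling is omitted:

```csharp
using System;
using System.Collections.Generic;

// Proxy that lives inside each child AppDomain; MarshalByRefObject lets
// the main program call it across the AppDomain boundary.
public class ServiceHostProxy : MarshalByRefObject
{
    public void Start()   { /* open your ServiceHost here */ }
    public void Stop()    { /* close your ServiceHost here */ }
    public void Restart() { Stop(); Start(); }
}

public class HostManager
{
    private class Entry
    {
        public AppDomain Domain;
        public ServiceHostProxy Proxy;
    }

    // One entry per hosted service, keyed by a Guid as suggested above.
    private readonly Dictionary<Guid, Entry> _hosts = new Dictionary<Guid, Entry>();

    public Guid Load(string friendlyName)
    {
        var key = Guid.NewGuid();
        var domain = AppDomain.CreateDomain(friendlyName);

        // Instantiate the proxy inside the new domain and unwrap the
        // MarshalByRefObject reference for use from the main program.
        var proxy = (ServiceHostProxy)domain.CreateInstanceAndUnwrap(
            typeof(ServiceHostProxy).Assembly.FullName,
            typeof(ServiceHostProxy).FullName);

        proxy.Start();
        _hosts[key] = new Entry { Domain = domain, Proxy = proxy };
        return key;
    }

    public void Restart(Guid key) { _hosts[key].Proxy.Restart(); }

    public void Unload(Guid key)
    {
        _hosts[key].Proxy.Stop();
        AppDomain.Unload(_hosts[key].Domain);
        _hosts.Remove(key);
    }
}
```

Unloading the AppDomain (rather than just closing the ServiceHost) is what lets the master service recover a corrupted service cleanly, since it tears down everything the faulty service loaded.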