Consider a process on one computer that tries to call a method on an interface in another process on another computer.
How does DCOM know that the object can be instantiated on a remote computer, and how does it then actually instantiate that object/class on the remote machine?
The remote host is named in the registry, via the RemoteServerName value under the AppID key.
Alternatively, you can specify it programmatically, by passing a COSERVERINFO parameter to CoCreateInstanceEx.
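A minimal sketch of the programmatic route (the CLSID here is a hypothetical placeholder for whatever coclass you're activating, and error handling is trimmed):

    // Sketch: remote activation via CoCreateInstanceEx.
    // Assumes CoInitializeEx has already been called on this thread.
    #include <windows.h>
    #include <objbase.h>

    // Hypothetical placeholder CLSID for the coclass being activated.
    static const CLSID CLSID_MyServer =
        { 0x12345678, 0x1234, 0x1234, { 0x12, 0x34, 0x12, 0x34, 0x12, 0x34, 0x12, 0x34 } };

    HRESULT CreateRemoteObject(IUnknown** ppUnk)
    {
        // Name (or IP address) of the machine that should host the object.
        COSERVERINFO serverInfo = {};
        serverInfo.pwszName = const_cast<LPWSTR>(L"remote-host");

        // Each MULTI_QI entry requests one interface on the new object.
        MULTI_QI qi = {};
        qi.pIID = &IID_IUnknown;      // your real interface IID in practice

        HRESULT hr = CoCreateInstanceEx(
            CLSID_MyServer,           // coclass to activate
            nullptr,                  // no aggregation
            CLSCTX_REMOTE_SERVER,     // force remote activation
            &serverInfo,              // where to activate
            1, &qi);                  // one interface requested

        if (SUCCEEDED(hr) && SUCCEEDED(qi.hr))
            *ppUnk = qi.pItf;
        return FAILED(hr) ? hr : qi.hr;
    }

If you pass nullptr for the COSERVERINFO parameter instead, COM falls back to the RemoteServerName registry lookup described above.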
Let's look at a scenario. Say I have the domain foo.bar.cc and I'm attempting to connect via ssh:
ssh foo.bar.cc
But in this scenario, foo.bar.cc:22 requires VPN access, so the DNS entry is not visible to me. Since I'd never be able to connect, the connection attempt eventually times out.
Before the timeout, what is happening under the hood while I am attempting to reach the server? What does the connection loop look like during the attempt, which system calls are made, and why? Eventually the ssh client bails out: how does it determine this? Again, which system calls typically come into play?
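Roughly, the sequence can be sketched like this (generic POSIX sockets, not OpenSSH's actual code). If the name doesn't resolve at all, getaddrinfo() fails immediately; a long hang followed by a timeout means the name resolved but the TCP SYNs went unanswered:

    // Sketch of the syscall sequence an ssh client goes through
    // before giving up on an unreachable host.
    #include <cstdio>
    #include <fcntl.h>
    #include <netdb.h>
    #include <poll.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main()
    {
        addrinfo hints = {}, *res;
        hints.ai_socktype = SOCK_STREAM;

        // Step 1: name resolution. A name only visible inside the VPN
        // fails here, immediately -- no long timeout is involved yet.
        int rc = getaddrinfo("foo.bar.cc", "22", &hints, &res);
        if (rc != 0) {
            fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rc));
            return 1;
        }

        // Step 2: socket() plus a non-blocking connect(). The kernel
        // sends a TCP SYN and retransmits it with backoff; if nothing
        // ever answers, the connection can never complete.
        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        fcntl(fd, F_SETFL, O_NONBLOCK);
        connect(fd, res->ai_addr, res->ai_addrlen);   // returns -1/EINPROGRESS

        // Step 3: wait for the socket to become writable. ssh's
        // ConnectTimeout option does essentially this; with no option
        // set, the kernel's SYN-retry limit eventually yields ETIMEDOUT.
        pollfd pfd = { fd, POLLOUT, 0 };
        if (poll(&pfd, 1, 10000) == 0)                // 10 s example timeout
            fprintf(stderr, "connection timed out\n");
        // (real code would also check SO_ERROR via getsockopt here)

        close(fd);
        freeaddrinfo(res);
        return 0;
    }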
I'm trying to clone an EC2 instance so that I can test some things. I created an AMI and launched an instance from it, and it seems to be running OK. However, I cannot connect to it with ssh or PuTTY.
My live instance, the one I'm making the copy of, has various users who can all log in happily with their private keys. But they cannot log in with the exact same credentials on the cloned instance. I just get:
Disconnected: No supported authentication methods available (server sent: publickey)
Is there more to do than just moving the IP address from the live instance to the cloned one?
I also cannot connect as ec2-user, using the private key I created during launch. One slight quirk of my live server is that I had to change the AuthorizedKeysFile setting in /etc/ssh/sshd_config to deal with some SFTP problems I was having. Is this likely to have broken the connection for a cloned server? Surely all the settings are identical?
The answer was to do with the AuthorizedKeysFile setting after all. I undid the edit I had made in /etc/ssh/sshd_config, took another snapshot, made another AMI, launched another instance, and all was well. I didn't even need to restart the sshd service, so none of this disturbed the configuration of my live server.
I'm not entirely sure why this caused a problem, but the lesson is that EC2 needs AuthorizedKeysFile to be set to the default location; presumably the key injected at launch is only written to the default path, so sshd never finds it if the setting points elsewhere.
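For reference, the built-in default that recent OpenSSH versions use (and that EC2's key injection apparently relies on) is:

    AuthorizedKeysFile .ssh/authorized_keys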
I have a self-hosted WCF service with a startup task that runs:
netsh http add urlacl url=https://+:{PORT}/{SERVICENAME} user=everyone listen=yes delegate=yes
Previously the service didn't use SSL, but the old HTTP URL reservation is still there (or was added by something else I'm not aware of).
So do I need to add a netsh remove command to the startup task?
EDIT:
I remote-desktopped into the role to check whether the reservation is there.
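If a stale reservation does turn out to be the problem, the removal command would presumably be (same placeholders as the add command above):

    netsh http delete urlacl url=http://+:{PORT}/{SERVICENAME}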
To understand the scenario better: when you deploy your application to the cloud, it runs in a virtual machine in a virtualized environment. Your application runs within a data center, but the virtual machine is hosted on a physical host machine that can change at any time, for reasons such as a guest OS or host OS update, a hardware failure, or a change in resource requirements. Because of this you should not assume that your virtual machine will always be the same; to be more specific, it is "virtual".
You can never assume it will be the same. It often is, but if there were a hardware failure and your role were restarted elsewhere within the data center, it certainly wouldn't be. Any startup task therefore needs to be idempotent, as in the sketch below.
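For example, a startup .cmd along these lines behaves the same on a fresh VM as on one that already has reservations (a sketch; the deletes fail harmlessly when there is nothing to remove):

    rem Idempotent startup task sketch: clear stale reservations, then add ours.
    netsh http delete urlacl url=http://+:{PORT}/{SERVICENAME} >nul 2>&1
    netsh http delete urlacl url=https://+:{PORT}/{SERVICENAME} >nul 2>&1
    netsh http add urlacl url=https://+:{PORT}/{SERVICENAME} user=everyone listen=yes delegate=yes
    exit /b 0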
I have a Web application and a WCF service hosted on the same Windows 2003 development server. They each have their own IIS website node responding to drs.displayscreen.web and drs.displayscreen.service host headers respectively. The hosts file contains entries for both headers pointing back to 127.0.0.1. The web site has a service reference to drs.displayscreen.service.
Both applications work perfectly when their application pool uses the 'Network Service' account.
I need to perform some COM processing under the hood on the service, so I want to run the applications under a customised identity. Both sites run in a new application pool.
When I change the application pool identity to a new Windows account created for the purpose, I get the following (inner) exception:
[EndpointNotFoundException: Could not connect to http://drs.displayscreen.service/Handler.svc. TCP error code 10060: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond 192.168.98.2:8080. ]
192.168.98.2:8080 is the address of a DNS server that is no longer in use. It is not referenced anywhere in the solution, and it does not appear in the ipconfig output at all.
I have made sure that the new account is a member of IIS_WPG, and I have run aspnet_regiis -ga for it. I have also given the account explicit permission to read the hosts file.
Why does the application attempt to use the defunct DNS server to resolve the temporary URL (drs.displayscreen.service) instead of using the hosts file entry? It has to be a permissions issue of some sort, because the problem does not occur when running under the Network Service account. Help!!
Well, it appears that the answer might involve a bug in the .NET Framework. I found a blog post that clued me in to the fact that the Microsoft .NET implementation of SocketCache.GetSocket can cache invalid sockets, and another that suggests a workaround/hack in the form of an explicit don't-use-proxies configuration setting.
We don't actually use a proxy server in the environment where this problem cropped up, but it appears that SocketCache.GetSocket is overridden or behaves differently when the don't-use-proxies setting is in place. Strangely, removing the setting causes the problem to come back, so evidently the SocketCache is not repaired once a valid IP/hostname has been discovered and successfully used. According to the author of the first post mentioned above, the bug does not exist in Mono. :)
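The don't-use-proxies setting in question is an app.config entry along these lines (a sketch; depending on the framework version the workaround may instead use a <proxy usesystemdefault="false"/> child element):

    <configuration>
      <system.net>
        <!-- Workaround: skip proxy detection so requests avoid the
             code path that caches the bad socket. -->
        <defaultProxy enabled="false" />
      </system.net>
    </configuration>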
As part of our data build we run a third-party program (3D Studio Max) to export a number of assets. Unfortunately, if a user is not currently logged in, or the machine is locked, Max does not run correctly.
This can be solved for freshly booted machines by using a method such as TweakUI for automatic login. However, when a user connects via Remote Desktop (to initiate a non-scheduled build, change a setting, whatever), then after the session ends the machine is left in a locked state, with Max unable to run.
I'm looking for a way to configure Windows (by fair means or foul) so that either it does not lock when the remote session ends, or it "unlocks" itself a short while afterwards. I'm aware of a method under XP where you can run a batch file on the machine that kicks the remote user off, but this does not appear to work on Windows Server.
There is a separate Terminal Services connection available, called the 'console' connection.
You can connect to this session using mstsc /console /v:servername. Use mstsc /? for the full command-line options.
This allows you to connect, open up Terminal Services Manager, and boot the bad sessions.
Logging in over RDP shouldn't affect whether the console locks. If you don't log out of RDP (just closing the client leaves your session pending), then your session will be locked. You can solve that with idle timeouts in Terminal Services Manager.
If your console is locking, that's a separate policy under Local Computer Policy or some such. If you have a domain, set it with a GPO. If you need the exact name of the policy, let me know and I'll dig it up for you.
I assume that by "unlock" you want to make sure that disconnected sessions are logged off. To do this:
Open Administrative Tools | Terminal Services Configuration.
Right-click RDP-TCP under the Connections folder and choose Properties.
Go to the Sessions tab and select the 'Override user settings' check box.
Set 'End a disconnected session' to the timeout value you need.
More reading at http://technet.microsoft.com/en-us/library/cc758177.aspx
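If you'd rather script it, I believe the same setting corresponds to the MaxDisconnectionTime value (REG_DWORD, in milliseconds) on the RDP-Tcp WinStation key; the value name and units here are from memory, so verify before relying on this:

    reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" /v MaxDisconnectionTime /t REG_DWORD /d 600000 /f

(600000 ms gives a ten-minute timeout.)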
You might want to look at using the "shadow" utility. This lets you essentially proxy into an existing remote desktop session: you log into the console of the machine with the account you need, and users then open non-console remote desktop sessions to the machine (or to another machine) and use shadow to connect to that same console session (example below). The users will have to be in the Administrators group on the machine.
Then again, this might be as simple as telling people not to use the console session when logging into the machine via Remote Desktop.
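For example (session IDs come from query session; buildbox is a placeholder machine name):

    query session /server:buildbox
    shadow 1 /server:buildbox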
Possible solution, from here:
To disable the Lock Computer button, open Regedit and browse to
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\System and
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Policies\System
and create a new REG_DWORD value in each called DisableLockWorkstation. Setting this value to 0 will allow the Lock Computer button to be used, while 1 will disable it.
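The same edit from a script, equivalent to the Regedit steps above (the HKLM write needs an elevated prompt):

    reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\System" /v DisableLockWorkstation /t REG_DWORD /d 1 /f
    reg add "HKLM\Software\Microsoft\Windows\CurrentVersion\Policies\System" /v DisableLockWorkstation /t REG_DWORD /d 1 /f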
There may be a problem if you are running these tasks as Administrator while others are also logging in via Remote Desktop as Administrator. The task should be run from its own account.
With the most recent Terminal Services client you can connect to the console using the /ADMIN switch.
So "Computer:" will be something like:
myworkstation.mydomain.local /ADMIN
-Ed