I have a RavenDB IIS instance that is working just fine via the Silverlight interface. I am trying to connect to it as an embedded client by targeting the web folder, but I keep getting an error saying that it cannot find a Lucene DLL. Is this even possible?
No, that is not possible. In embedded mode, the EmbeddableDocumentStore actually contains the database instance. Only one can be spun up at a time. You cannot have multiple embedded clients using the same set of files.
If you have an instance running in IIS, then don't connect with embedded mode. Connect using the regular client and point at the URL of your server.
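For example, a minimal sketch of connecting with the regular client (this assumes the classic Raven.Client.Document API; the URL is a placeholder for your IIS site's address):

    using Raven.Client.Document;

    // Point the regular client at the IIS-hosted server instead of embedding one.
    var store = new DocumentStore { Url = "http://your-raven-server:8080" };
    store.Initialize();

    using (var session = store.OpenSession())
    {
        // query and store documents as usual
    }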
Azure VM, Cloud service or Web job?
I have a configurable console application which runs continuously. Currently it is running on a VM and consumes a lot of memory (it is basically doing data mining).
The current requirement is to have multiple instances of this application, each with a different set of configuration that can be changed by specific users.
So where should I host this application such that the configuration can be modified through some front end that provides access management (like SharePoint) and the ability to stop/restart it (like a WCF service), without logging on to the VM?
I am open to any suggestions/ideas. Thanks
I don't think there's any solid answer to this question, since preference comes into it, but for what it's worth: if it were up to me I would deploy it to individual Azure VMs for each specific set of users. That way, if server resource usage goes up because of config changes a user group made, it is isolated to that group and, with Azure, can be scaled to meet the demand. Then just build a little .NET web app to allow users to authenticate and change configuration settings.
You could expose an "admin" endpoint for your service (obviously you need authentication here!) that:
1. can return the current configuration
2. accept new configuration
3. restart the service (if needed). Stopping the service will be harder, since that leaves the question of how to start it again.
Then you need to write your own application (or use a third-party one, like SharePoint or a CMS) that will handle your users and, under the hood, consume your "admin" endpoint.
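A minimal sketch of what such an "admin" contract could look like in WCF (the interface and operation names here are hypothetical):

    using System.ServiceModel;

    [ServiceContract]
    public interface IAdminService
    {
        [OperationContract]
        string GetConfiguration();              // 1. return the current configuration

        [OperationContract]
        void SetConfiguration(string settings); // 2. accept new configuration

        [OperationContract]
        void Restart();                         // 3. restart the worker (if needed)
    }

The console application would host an implementation of this contract with a ServiceHost, and the front-end application would consume it on behalf of authenticated users.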
Edit: The hosting part: if I understand you correctly, your app is just a console application today and you don't know how to host it? Well, there are many answers to that question. If you have an operations department, go talk to them; if you are on your own, play around and see what fits you and your environment best!
My tip: go for an http/https protocol/interface, simply because there are many web hosts out there and you can easily find tools for that protocol. If you are on the .NET platform, check out ASP.NET Web API or OWIN.
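As an illustration, a sketch of a self-hosted Web API endpoint (assuming the Microsoft.AspNet.WebApi.SelfHost package; the port and route are placeholders, and you would add your own controllers, e.g. a hypothetical ConfigController, alongside):

    using System;
    using System.Web.Http;
    using System.Web.Http.SelfHost;

    class Program
    {
        static void Main()
        {
            var config = new HttpSelfHostConfiguration("http://localhost:8080");
            config.Routes.MapHttpRoute(
                name: "default",
                routeTemplate: "api/{controller}/{id}",
                defaults: new { id = RouteParameter.Optional });

            using (var server = new HttpSelfHostServer(config))
            {
                server.OpenAsync().Wait(); // start listening
                Console.WriteLine("Listening; press Enter to stop.");
                Console.ReadLine();
            }
        }
    }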
Azure now has Machine Learning to process data-mining workloads.
You should check whether it suits you.
Otherwise, you can use a WebJob:
It allows you to have multiple instances of your long-running job (WebJob scale-out).
AppSettings can be changed from the Azure Portal or using the Azure Management API.
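For instance, a sketch of the job reading a portal-managed setting (the key name is hypothetical):

    using System.Configuration;

    // Settings changed in the Azure Portal are surfaced to the WebJob and can
    // be read like ordinary app.config values.
    string threshold = ConfigurationManager.AppSettings["MiningThreshold"];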
I have a slightly complicated setup that, of course, works fine in XP, but chokes on Windows 7. It may seem like madness, but it made sense at the time!
I have a WPF application that launches and then launches another application that communicates with an external device. After launching it establishes communications with the new process using WCF (hosted by the new process) via a named pipe (net.pipe). This seems to work fine on either OS.
I wanted to make some of the functionality of the WPF application externally available to a command line program, so I set up another WCF service, this time hosted by the WPF application and again exposed it via named pipes. Again, this seems to work.
Next, I wanted to make the functionality of the WPF application available via the web. Now, it's important that the WPF application be runnable from a regular user account, so I thought the best way to make this work on Windows 7 would be to create a windows service that would provide the web service part and have it communicate back to the WPF application via the same named pipe that works fine for the command line. I implemented this and it runs fine on XP, but it chokes on Windows 7. The problem seems to be with trying to establish the named pipe connection between the windows service and the WPF application.
If I run the WPF app as an administrator, it works fine. So the problem seems to be that the account the windows service runs under can't communicate via named pipes with a WCF service hosted in a regular user account. Is there a way to make this work? A WCF service running in a regular user account can apparently communicate over named pipes with another app running in the same account, but not with one running under a different account.
Oddly, the reverse seems to work. The windows service does, in fact, also expose a service with a named pipe binding (it's used as an activation function since the service is running all the time). I can connect from the WPF app to this service without any problems.
My knowledge of security is somewhat limited. Can anybody shine a light on what's going on?
This question has been asked several times previously on SO. For example, see Connecting via named pipe from windows service to desktop app
The problem is that your user session applications don't possess the SeCreateGlobalPrivilege security privilege necessary to allow them to create objects in the global kernel namespace visible to other sessions, but only in the local namespace which is only visible within the session. Services, on the other hand, which run with this privilege by default, can do so.
It is not the named pipe object itself which is constrained to the local namespace in this way, but another named kernel object, a shared memory section, on which the WCF named pipe binding relies in order to publish to its clients the actual name of the pipe, which is a GUID which changes each time the service is started.
You can get round this constraint by reversing the roles: make the windows service application the WCF service, to which your user session apps connect. The windows service has no problem publishing its service to your session. Connecting things up this way round also makes more sense because the windows service is always running, whereas your session and its apps come and go as you log in and out. You'll want to define the service with a duplex contract, so that once the connection is established, the essential flow of communication over the WCF service can still happen in the direction you originally intended.
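A minimal sketch of such a duplex setup (the interface and member names here are hypothetical):

    using System.ServiceModel;

    // Hosted by the windows service; session apps connect to it.
    [ServiceContract(CallbackContract = typeof(ISessionCallback))]
    public interface IBrokerService
    {
        [OperationContract]
        void Subscribe(); // session app registers itself on connect
    }

    // Callback the service uses to push messages back to the session app,
    // preserving the originally intended direction of communication.
    public interface ISessionCallback
    {
        [OperationContract(IsOneWay = true)]
        void Notify(string message);
    }

On the client side, the session app would connect over a NetNamedPipeBinding with a DuplexChannelFactory<IBrokerService>, passing in its callback instance.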
The applications (WPF/console) are creating locally scoped named pipes (this happens by default when they are unable to create globally scoped ones). My guess is that they can communicate with each other because they can see each other's named pipes, since they are running under the same account.
The windows service has higher privileges and can therefore create a globally scoped named pipe for the client applications to see.
You can check out a discussion on Christian Weyer's Blog.
We're using Windows Server 2008 R2 and we need to back up a file to the other server whenever it gets uploaded.
Unfortunately, the client requires that there be no file/directory sharing between the servers via the LAN, so we are trying to do this via one WCF service calling another. But now we're having problems calling the other WCF service, since it is hosted on an SSL-secured website.
Calling the WCF service via Silverlight works.
Questions:
1) What might be causing the SSL/TLS error when one WCF service calls the other, while everything works fine when Silverlight calls the WCF service?
code:
public void FileUpload(FileUploadClass file)
{
    // store locally
    ...

    // call the other WCF service to back up the file
    if (!fileIsExisting)
    {
        ServiceRefClient svcClient = new ServiceRefClient();
        svcClient.FileUploadClass(file);
    }
}
2) Is there any other way to back up the file to the other server securely, apart from using WCF or the database? (I'm trying the database approach now, but hopefully there is a prettier way to do this.) File/directory/drive sharing via the local network is prohibited.
Can you give more details on the exact error? Meanwhile you may want to check:
Assuming Server B, which hosts the WCF file-backup service, is using a self-signed certificate: does Server A, which calls Server B, have the certificate imported in the appropriate certificate store?
Again an assumption: check Server A's application pool identity; does it have sufficient permissions to call Server B?
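On the certificate point: to see exactly why the server-to-server handshake fails (Silverlight may succeed simply because the hosting machine already trusts the certificate), a diagnostic sketch like this can help; only relax the check in development:

    using System.Net;
    using System.Net.Security;

    // Hook certificate validation to log the SSL policy errors behind the failure.
    ServicePointManager.ServerCertificateValidationCallback =
        (sender, certificate, chain, errors) =>
        {
            System.Diagnostics.Debug.WriteLine("SSL policy errors: " + errors);
            return errors == SslPolicyErrors.None; // accept only clean handshakes
        };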
Since dealing with security issues is often tough and time consuming, I decided to store the file as binary in the database and load it from the second server instead.
Impersonation works, but that's really more of a way of bypassing the security.
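A sketch of the database-backed approach (the table, column, and variable names are hypothetical):

    using System.Data.SqlClient;
    using System.IO;

    // Server A: store the uploaded file's bytes in a varbinary(max) column.
    byte[] content = File.ReadAllBytes(localPath);
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(
        "INSERT INTO FileBackup (FileName, Content) VALUES (@name, @content)", conn))
    {
        cmd.Parameters.AddWithValue("@name", Path.GetFileName(localPath));
        cmd.Parameters.AddWithValue("@content", content);
        conn.Open();
        cmd.ExecuteNonQuery();
    }

The second server can then select the row and write the Content bytes back to disk.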
I've got a working WCF service and a working Delphi client. On a normal PC, they work nicely. On a VM that's "Bridged" they work nicely if I log onto the domain (but not if I log on locally to the VM as administrator). If the VM is NATed, the connection attempt times out.
I would love to hear people's thoughts on what could be making such a difference to whether the client can successfully connect to the WCF service. Bear in mind I'm connecting with basicHttpBinding with no security.
The service is set up to use the System account ("interact with desktop" is NOT checked), and it starts automatically. The service URI doesn't change, and the port is open and can be telnet'd to in all scenarios.
Any ideas or pointers?
Within the VM, open Internet Explorer and verify that you can view the WSDL of the WCF service. If you can't, then your issue is connectivity and has nothing to do with your Delphi code.
Group Policies and Enterprise Security solutions that swap certificates or require certificates to be registered (we're using a UTM called CyberRoam) make a difference.
Also when Virtual Machines join a domain, their ComputerNames are added to a list maintained by the Domain Controller. When the same Virtual Machine is "moved" or "copied", its ComputerName should be changed to avoid DNS resolution issues.
I'm not claiming this as the definitive answer, however it does explain the issues I noticed in this instance.
I have a Web application and a WCF service hosted on the same Windows 2003 development server. They each have their own IIS website node responding to drs.displayscreen.web and drs.displayscreen.service host headers respectively. The hosts file contains entries for both headers pointing back to 127.0.0.1. The web site has a service reference to drs.displayscreen.service.
Both applications work perfectly when their application pool uses the 'Network Service' account.
I need to perform some COM processing under the hood on the service so I want to run the applications under a customised identity. Both sites run on a new application pool.
When I change the application pool identity to use a new windows account created for the purpose, I get the following (inner) exception:
[EndpointNotFoundException: Could not connect to http://drs.displayscreen.service/Handler.svc. TCP error code 10060: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond 192.168.98.2:8080. ]
192.168.98.2:8080 is the address of a DNS server that is no longer in use. It is not referenced anywhere in the solution. It is not referenced by ipconfig at all.
I have made sure that the new account is a member of IIS_WPG and I have run aspnet_regiis -ga . I have also given the account explicit permission to read the hosts file.
Why does the application attempt to use the defunct DNS server to resolve the temporary URL (drs.displayscreen.service) instead of the hosts file entry? It has to be a permissions issue of some sort, because the problem does not occur when running under the Network Service account. Help!!
Well, it appears that the answer might involve a bug in the .Net framework. I found a blog posting that clued me in to the fact that the MS .Net implementation of SocketCache.GetSocket might cache invalid sockets and another one that suggests a workaround/hack in the form of an explicit don't-use-proxies configuration setting.
We don't actually use a proxy server in the environment where this problem cropped up, but it appears that SocketCache.GetSocket is overridden or behaves differently when the don't-use-proxies setting is in place. Strangely, removing the setting causes the problem to come back, so the SocketCache is obviously not repaired when a valid IP/hostname is discovered and successfully used. According to the author of the first post mentioned above, the bug does not exist in Mono. :)
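For reference, the workaround in question is an explicit don't-use-proxies setting in the application's configuration file; a sketch of where it goes:

    <configuration>
      <system.net>
        <defaultProxy enabled="false" />
      </system.net>
    </configuration>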