I have ServiceControl, ServicePulse and RavenDB all installed on a production server. Unfortunately, I can't run a browser on this server for security reasons. I would like to view the database used by ServiceControl. When I connect to the RavenDB server remotely from my desktop browser (http://server:8080), I can see all the endpoint databases, but I don't see the database created for ServiceControl.
I applied the settings to expose RavenDB as:
The ServiceControl documentation says the database is visible at http://localhost:33333/storage, which is not an option for me (no browser is allowed on this server).
I have customized the ServiceControl host name, so http://server:33333/api is reachable but http://server:33333/storage returns 404.
Any thoughts/solution?
Check out the documentation here; my guess is that you are missing the configuration bit.
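For reference, the relevant bit is a set of app settings in ServiceControl.exe.config. A hedged sketch (key names as documented by Particular for older ServiceControl versions; the host name and port are examples taken from the question):

<appSettings>
  <!-- Host name and port the ServiceControl API is bound to (example values) -->
  <add key="ServiceControl/HostName" value="server" />
  <add key="ServiceControl/Port" value="33333" />
  <!-- Exposes the embedded RavenDB studio at http://<HostName>:33333/storage -->
  <add key="ServiceControl/ExposeRavenDB" value="true" />
</appSettings>

With ExposeRavenDB enabled and the host name customized, the studio should be reachable from a remote browser at http://server:33333/storage rather than only on localhost.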
I launched an Ubuntu virtual machine on Microsoft Azure and connected to the instance via SSH.
I followed all the installation instructions at:
http://bugzilla.readthedocs.org/en/latest/installing/quick-start.html
After following the installation instructions, I am able to log in to Bugzilla via Lynx.
The installation worked, except that I cannot reach Bugzilla from my PC via my browser (Chrome/Edge).
Typing in the IP address results in a timeout error (ERR_CONNECTION_TIMED_OUT). I would expect to see the Bugzilla login page instead.
I went to var/www/data and set urlbase in params.json:
"urlbase" : "http://40.127.99.16",
But I still cannot log in.
Any ideas what I am doing wrong?
Typing in the IP address results in a timeout error (ERR_CONNECTION_TIMED_OUT). I would expect to see the Bugzilla login page instead.
This typically means that something between your browser and the server is preventing the connection. Typical culprits are firewall rules on the remote server itself (managed with iptables), or in the remote cloud environment (managed through some sort of platform-specific web interface or API).
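As a hedged example (assuming an Ubuntu VM serving Bugzilla over plain HTTP on port 80), checking and opening the local firewall looks roughly like this; the cloud side (an Azure endpoint or network security group rule for port 80) has to be opened separately in the Azure portal:

# List the current firewall rules on the VM
sudo iptables -L -n
sudo ufw status verbose
# Allow inbound HTTP if the local firewall is the blocker (port 80 assumed)
sudo ufw allow 80/tcp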
I've followed the instructions here: http://guac-dev.org/doc/gug/installing-guacamole.html
This says
Guacamole is separated into two pieces: guacamole-server, which provides the guacd proxy and related libraries, and guacamole-client, which provides the client to be served by your servlet container, usually Tomcat.
guacamole-client is available in binary form, but guacamole-server must be built from source. Don't be discouraged: building the components of Guacamole from source is not as difficult as it sounds, and the build process is automated. You just need to be sure you have the necessary tools installed ahead of time. With the necessary dependencies in place, building Guacamole only takes a few minutes.
And then proceeds to describe how to install guacamole-server and use it. I can now go to http://localhost:8080/guacamole/ and access the server and see which clients have connected.
How do I connect a client though? I see no documentation of where the remote desktop needs to browse to in order to run the guacamole-client?
Or have I totally misunderstood this?
The key phrase in the quoted documentation is:
... guacamole-client, which provides the client to be served by your servlet container, usually Tomcat.
"guacamole-client" is the web application and the client. When a user visits the URL for your Guacamole server, logs in, and clicks on a connection, they are connected to the corresponding remote desktop via Guacamole's JavaScript client which is served to their browser like any other web application.
I can now go to http://localhost:8080/guacamole/ and access the server and see which clients have connected.
The list you see when you first log in to your Guacamole server is not the list of clients that have connected; it is the list of connections to remote desktops which are available. If you click on one of those connections, you will be connected using Guacamole's own built-in JavaScript client.
How do I connect a client though? I see no documentation of where the remote desktop needs to browse to in order to run the guacamole-client?
The remote desktop does not need to do anything - Guacamole will simply connect to it. You can see a video of the overall user experience on the Guacamole website which may hopefully clear things up for you:
https://vimeo.com/116207678
Overall:
You deploy guacamole-client (the web application) and install guacamole-server (the remote desktop proxy that the web application uses in the backend). The combination of these two pieces of software makes up a typical Guacamole server.
You and your users can log in through the web application and connect to remote desktops using a web browser.
You do not need to explicitly run a client.
It looks like this:
Internet -> Guacamole server (on the local network) -> Desktop PC
I installed Guacamole in a VMware environment on Ubuntu.
There is a file in /etc/guacamole called user-mapping.xml.
In that file you add or edit the connections available to the user you want.
A connection for that user must be defined between the <connection> tags, as in the sketch below.
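A minimal sketch of such an entry (the username, password, connection name, and host are placeholders; the element and parameter names follow the user-mapping.xml format):

<user-mapping>
  <!-- Credentials the user types into the Guacamole login page -->
  <authorize username="someuser" password="somepassword">
    <!-- One remote desktop this user can open from the connection list -->
    <connection name="Office desktop">
      <protocol>rdp</protocol>
      <param name="hostname">192.168.1.50</param>
      <param name="port">3389</param>
    </connection>
  </authorize>
</user-mapping>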
This is my Azure configuration:
I have a Virtual Network with a couple of subnets and a gateway configured to allow point-to-site.
There is one Virtual Machine with SQL Server (2014) installed. There are some databases in there already. SQL Server is set up to allow SQL Server and Windows Authentication mode. This VM is in the Virtual Network.
I have an empty Azure Web App
I deployed my main MVC web app to the empty Azure Web App and it looks good, except when it tries to retrieve information from the database.
Is it a connection string error, or could it be something else?
My connection string looks like this:
<add name="MyEntities" connectionString="metadata=res://*/Data.MyModel.csdl| res://*/Data.MyModel.ssdl| res://*/Data.MyModel.msl;provider=System.Data.SqlClient;
provider connection string="
data source=tcp:10.0.1.4;
initial catalog=MyDataBase;
persist security info=False;
user id=MySystemAdmin;
password=SystemAdminPassword;
multipleactiveresultsets=True;
App=EntityFramework""
providerName="System.Data.EntityClient" />
Here is the error thrown by the Azure Web App...
So it seems to be related to either the way I'm providing the connection string or the endpoints/firewall configuration.
Check your connection string against this example connection string for Entity Framework designer files (https://msdn.microsoft.com/en-us/data/jj556606.aspx#Connection).
Just from a quick glance I see two possible errors:
A semicolon appears after provider=System.Data.SqlClient (the example on the page I linked to doesn't have one).
The IP address you specify to connect to is a local one (10.0.1.4) and should be the IP/DNS name of your database in Azure (see the sketch below).
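As a hedged sketch of that second point, the provider connection string portion would point at the VM's public DNS name or IP rather than the internal address (the host name below is a placeholder; SQL Server accepts a "host,port" data source, and the inner quotes must be escaped as &quot; in the config file):

provider connection string=&quot;data source=tcp:yourvm.cloudapp.net,1433;initial catalog=MyDataBase;user id=MySystemAdmin;password=SystemAdminPassword;multipleactiveresultsets=True;App=EntityFramework&quot;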
Not sure if those are the issue or if Stack Overflow just clobbered your markup. In addition, you talk about a lot of gateways, so I would check to make sure your systems can talk to each other. Finally, posting error messages and capturing exceptions about what's actually going on will help diagnose the error, because at this point it's all guesswork.
Hope that helps.
What the guys said above plus:
The Web App needs to have a hybrid connection to the VNet the VM is in if you want to use the local IP address; otherwise you have to use the public IP (PIP).
Check the firewall on the VM to see whether the proper ports are open. This has to be done both on the VM firewall and on the endpoints. Also, if there are any ACLs on the VM, you have to check those too.
The other answers gave me the guidelines to find out the solution.
I'll try to describe the steps I followed:
Using the new Azure portal (portal.azure.com, currently in preview) I established a connection between the Azure Web App and the Virtual Network:
Home > Browse > Click on Azure Web App name
In the Azure Web App blade, click on the Networking tile
In the Virtual Network blade, click on the Virtual Network where the database is located (it's important to mention that the Virtual Network ought to have a gateway previously configured)
My intention was to provide a certain level of security to the VM with the databases by placing it inside a Virtual Network, so I had not considered opening ports. It turns out that this is necessary, so, in the VM:
I enabled the TCP/IP protocol for SQL Server using SQL Server Configuration Manager.
Then I created a new inbound rule opening port 1433, but only for private connections (very nice); see the sketch after these steps.
It was not necessary to create an endpoint in the VM for this port (very happy with this).
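A hedged sketch of that inbound rule as a single command (assuming the default SQL Server port 1433 and the built-in Windows Firewall; the same rule can be created through the Windows Firewall with Advanced Security UI):

netsh advfirewall firewall add rule name="SQL Server 1433 (private)" dir=in action=allow protocol=TCP localport=1433 profile=private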
Finally, I published the app to the Azure Web App using the connection strings as shown in the question (with the internal database IP).
Final touch: in the new Azure portal > Azure Web App > Settings, I was able to enter Connection Strings. Settings created in the portal are not overwritten, so now I'm sure this Azure Web App will always use the correct connection string.
Final note: in theory (not tested yet) the internal IP will not change as long as the VM is not Stopped (Deallocated).
I have a RavenDB IIS instance that is working just fine via the Silverlight interface. I am trying to connect to it as an embedded client by targeting the web folder, but I keep getting an error saying that it cannot find a Lucene DLL. Is this even possible?
No, that is not possible. In embedded mode, the EmbeddableDocumentStore actually contains the database instance. Only one can be spun up at a time. You cannot have multiple embedded clients using the same set of files.
If you have an instance running in IIS, then don't connect with embedded mode. Connect using the regular client and point at the URL of your server.
I have a Web application and a WCF service hosted on the same Windows 2003 development server. They each have their own IIS website node responding to drs.displayscreen.web and drs.displayscreen.service host headers respectively. The hosts file contains entries for both headers pointing back to 127.0.0.1. The web site has a service reference to drs.displayscreen.service.
Both applications work perfectly when their application pool uses the 'Network Service' account.
I need to perform some COM processing under the hood on the service so I want to run the applications under a customised identity. Both sites run on a new application pool.
When I change the application pool identity to use a new windows account created for the purpose, I get the following (inner) exception:
[EndpointNotFoundException: Could not connect to http://drs.displayscreen.service/Handler.svc. TCP error code 10060: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond 192.168.98.2:8080. ]
192.168.98.2:8080 is the address of a DNS server that is no longer in use. It is not referenced anywhere in the solution. It is not referenced by ipconfig at all.
I have made sure that the new account is a member of IIS_WPG and I have run aspnet_regiis -ga . I have also given the account explicit permission to read the hosts file.
Why does the application attempt to use the defunct DNS server to resolve the temporary URL (drs.displayscreen.service) instead of the hosts file entry? It has to be a permissions issue of some sort, because it does not have this problem when running under the Network Service account. Help!!
Well, it appears that the answer might involve a bug in the .NET Framework. I found a blog posting that clued me in to the fact that the MS .NET implementation of SocketCache.GetSocket might cache invalid sockets, and another one that suggests a workaround/hack in the form of an explicit don't-use-proxies configuration setting.
We don't actually use a proxy server in the environment where this problem cropped up, but it appears that SocketCache.GetSocket is overridden or behaves differently when the don't-use-proxies setting is in place. Strangely, removing the setting causes the problem to come back, so obviously the SocketCache is not repaired when a valid IP/hostname is discovered and successfully used. According to the author of the first post mentioned above, the bug does not exist in Mono. :)
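For reference, the don't-use-proxies workaround mentioned above is usually expressed in the application's configuration file as something like this (a sketch; whether it helps depends on the framework version and environment):

<configuration>
  <system.net>
    <!-- Make outgoing HTTP connections directly instead of using a default proxy -->
    <defaultProxy enabled="false" />
  </system.net>
</configuration>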