I'm using Windows 8. In drive C's root directory (C:\) there are 27 WCF trace files; 26 of them are named like <guid>Traces.svclog and one is Traces.svclog. I'm trying to find out which applications are creating these files, because I don't have any application that uses WCF tracing as described here.
I've looked at all of the traces: there are only 2 unique traces and the rest are exact copies of them. Here are the details from one of them:
<E2ETraceEvent xmlns="http://schemas.microsoft.com/2004/06/E2ETraceEvent">
<System xmlns="http://schemas.microsoft.com/2004/06/windows/eventlog/system">
<EventID>589828</EventID>
<Type>3</Type>
<SubType Name="Information">0</SubType>
<Level>8</Level>
<TimeCreated SystemTime="2012-10-15T06:37:57.4222685Z" />
<Source Name="System.ServiceModel" />
<Correlation ActivityID="{00000000-0000-0000-0000-000000000000}" />
<Execution ProcessName="Microsoft.VisualStudio.Web.Host" ProcessID="7104" ThreadID="78" />
<Channel />
<Computer>MYCOMP</Computer>
</System>
<ApplicationData>
<TraceData>
<DataItem>
<TraceRecord xmlns="http://schemas.microsoft.com/2004/10/E2ETraceEvent/TraceRecord" Severity="Information">
<TraceIdentifier>http://msdn.microsoft.com/tr-TR/library/System.ServiceModel.Activation.WebHostCompilation.aspx</TraceIdentifier>
<Description>ASP.Net hosted compilation.</Description>
<AppDomain>7f594460-3-129947566686622036</AppDomain>
<Source>System.ServiceModel.Activation.ServiceParser/17864371</Source>
<ExtendedData xmlns="http://schemas.microsoft.com/2006/08/ServiceModel/StringTraceRecord">
<VirtualPath>/BI/DataService.svc</VirtualPath>
</ExtendedData>
</TraceRecord>
</DataItem>
</TraceData>
</ApplicationData>
</E2ETraceEvent>
So the C:\ directory is filling up with rubbish traces. I've installed the WCF & WF samples, and maybe that caused this. Can you help me find out which application is doing this?
Search your drive for app.config files that have system.diagnostics sections that enable tracing on the System.ServiceModel trace source.
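For reference, the kind of section to look for is something like this: a System.ServiceModel source with an XmlWriterTraceListener whose initializeData points at a .svclog file (names below are only illustrative):

<system.diagnostics>
  <sources>
    <!-- WCF tracing is switched on per trace source -->
    <source name="System.ServiceModel"
            switchValue="Information, ActivityTracing"
            propagateActivity="true">
      <listeners>
        <!-- a relative initializeData path is resolved against the process's
             current directory, which can be how files end up in C:\ -->
        <add name="traceListener"
             type="System.Diagnostics.XmlWriterTraceListener"
             initializeData="Traces.svclog" />
      </listeners>
    </source>
  </sources>
</system.diagnostics>

Note also that the Execution element in the trace you posted already names the writing process (Microsoft.VisualStudio.Web.Host), which points at something hosted by the Visual Studio development web server.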
We want to set up Orbeon Forms PE replication but we cannot use multicasting as proposed in the docs.
We have two nodes, 172.13.238.241 and 172.13.238.242, and the problem seems to be with the EhCache part. I open the form, the load balancer (HAProxy) directs me to a node, I turn off the second node, and then for several minutes all the requests in the browser fail. Eventually the requests start to work again, but they are very slow.
This is what I have on node 1 (the other node has the same config with the IPs swapped and a different uniqueId value):
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
channelSendOptions="asynchronous"
channelStartOptions="3">
<Manager className="org.apache.catalina.ha.session.DeltaManager"
expireSessionsOnShutdown="false"
notifyListenersOnReplication="true"/>
<Channel className="org.apache.catalina.tribes.group.GroupChannel">
<Interceptor className="org.apache.catalina.tribes.group.interceptors.StaticMembershipInterceptor">
<Member className="org.apache.catalina.tribes.membership.StaticMember"
port="4100"
host="172.13.238.242"
uniqueId="{0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2}" />
</Interceptor>
<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
address="172.13.238.241"
port="4100"
autoBind="0"
maxThreads="6"
selectorTimeout="5000" />
<Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
<Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
</Sender>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatchInterceptor"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.ThroughputInterceptor"/>
</Channel>
<Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter=".*\.gif|.*\.js|.*\.jpeg|.*\.jpg|.*\.png|.*\.htm|.*\.html|.*\.css|.*\.txt"/>
<ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>
I wonder if anyone could spot some mistakes in my ehcache configuration.
With WildFly 8.2.1, I am trying to make an existing web service (JAX-WS) use SSL, but I haven't seen any use of SSL in the quickstarts, and the information I was able to google is limited. So far I've added this to web.xml:
<security-constraint>
<display-name>Foo security</display-name>
<web-resource-collection>
<web-resource-name>FooService</web-resource-name>
<url-pattern>/foo/FooService</url-pattern>
<http-method>POST</http-method>
</web-resource-collection>
<user-data-constraint>
<transport-guarantee>CONFIDENTIAL</transport-guarantee>
</user-data-constraint>
</security-constraint>
and this is in my standalone.xml:
<subsystem xmlns="urn:jboss:domain:webservices:1.2">
<wsdl-host>${jboss.bind.address:127.0.0.1}</wsdl-host>
<endpoint-config name="Standard-Endpoint-Config"/>
<endpoint-config name="Recording-Endpoint-Config">
<pre-handler-chain name="recording-handlers" protocol-bindings="##SOAP11_HTTP ##SOAP11_HTTP_MTOM ##SOAP12_HTTP ##SOAP12_HTTP_MTOM">
<handler name="RecordingHandler" class="org.jboss.ws.common.invocation.RecordingServerHandler"/>
</pre-handler-chain>
</endpoint-config>
<client-config name="Standard-Client-Config"/>
</subsystem>
but apparently that's not enough; when I look into standalone/data/wsdl/foo.ear/foo.war/FooService/Bar.wsdl I see:
<service name="FooService">
<port binding="foowsb:FooBinding" name="FooBinding">
<soap:address location="http://localhost:8080/foo/FooService"/>
</port>
</service>
Note that in the EAR/WAR, the soap:address.location is filled just with a placeholder (I suppose that the value is ignored).
I've found some info about setting up a security realm and creating a self-signed certificate using keytool (which I did), but I completely miss how all of this should be linked together.
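For anyone following along, a self-signed keystore matching the realm config further down (alias foo, store and key password foo1234) can be created with keytool along these lines; the exact flags may differ from what was actually used, so treat this as a sketch:

keytool -genkey -alias foo -keyalg RSA -keysize 2048 -validity 365 \
        -keystore foo.keystore -storepass foo1234 -keypass foo1234 \
        -dname "CN=localhost"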
I've also tried setting wsdl-uri-scheme=https, but this is supported only in later versions of CXF.
It seems that the soap:address.location value is not ignored when it's being replaced: changing it from REPLACE_WITH_ACTUAL_URL to https://REPLACE_WITH_ACTUAL_URL did the trick, and now the service is exposed on https://localhost:8443.
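In other words, the WSDL packaged in the WAR now carries the scheme in the placeholder, roughly like this:

<service name="FooService">
  <port binding="foowsb:FooBinding" name="FooBinding">
    <!-- was: location="REPLACE_WITH_ACTUAL_URL" -->
    <soap:address location="https://REPLACE_WITH_ACTUAL_URL"/>
  </port>
</service>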
There are a couple more steps I had to do in standalone.xml. In the undertow subsystem, add an https-listener:
<https-listener name="secure" socket-binding="https" security-realm="SslRealm"/>
define the SslRealm:
<security-realm name="SslRealm">
<server-identities>
<ssl>
<keystore path="foo.keystore" relative-to="jboss.server.config.dir" keystore-password="foo1234" alias="foo" key-password="foo1234"/>
</ssl>
</server-identities>
<authentication>
<truststore path="foo.truststore" relative-to="jboss.server.config.dir" keystore-password="foo1234"/>
</authentication>
</security-realm>
Note that I reuse the same keystore for the server and the clients here. And since my clients currently live in the same WildFly node during development, I had to set up the client-side part there, too:
<system-properties>
<property name="javax.net.ssl.trustStore" value="${jboss.server.config.dir}/foo.keystore"/>
<property name="javax.net.ssl.trustStorePassword" value="foo1234"/>
<property name="org.jboss.security.ignoreHttpsHost" value="true"/>
</system-properties>
The last property should be replaced in WildFly 9+ with cxf.tls-client.disableCNCheck.
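Presumably that would look something like this in system-properties (not verified on WildFly 9 myself; the value is assumed by analogy with ignoreHttpsHost above):

<system-properties>
    <property name="cxf.tls-client.disableCNCheck" value="true"/>
</system-properties>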
I have a WCF service which takes a POST request with JSON. The service then uses the C# BackgroundWorker class to further parse the JSON and update the DB.
But when I post JSON that is larger than 7 KB, the worker crashes. I am not getting any exceptions or errors in the application logs, but when I look at the system logs, I see this:
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
<System>
<Provider Name="Microsoft-Windows-WAS" Guid="{524B5D04-133C-4A62-8362-64E8EDB9CE40}" EventSourceName="WAS" />
<EventID Qualifiers="32768">5011</EventID>
<Version>0</Version>
<Level>3</Level>
<Task>0</Task>
<Opcode>0</Opcode>
<Keywords>0x80000000000000</Keywords>
<TimeCreated SystemTime="2014-06-16T18:51:21.000000000Z" />
<EventRecordID>74571</EventRecordID>
<Correlation />
<Execution ProcessID="0" ThreadID="0" />
<Channel>System</Channel>
<Computer>xxxxx</Computer>
<Security />
</System>
<EventData>
<Data Name="AppPoolID">YYYYYYYY</Data>
<Data Name="ProcessID">ZZZZZZZ</Data>
<Binary>6D000780</Binary>
</EventData>
</Event>
Could this be a memory leak? How can I find the exact cause if my application log does not show any errors?
PS: The service works perfectly in the local and staging environments we have, but the live server has a different configuration.
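It may be unrelated to the crash itself, but the ~7 KB threshold is suspiciously close to WCF's default reader quota (maxStringContentLength defaults to 8192), so the binding limits may be worth ruling out. Assuming a webHttpBinding REST endpoint, raising them looks roughly like this (the binding name is purely illustrative and has to be referenced from the endpoint's bindingConfiguration):

<system.serviceModel>
  <bindings>
    <webHttpBinding>
      <!-- hypothetical binding; wire it up via bindingConfiguration on the endpoint -->
      <binding name="largeJsonBinding"
               maxReceivedMessageSize="2097152"
               maxBufferSize="2097152">
        <readerQuotas maxStringContentLength="2097152"
                      maxArrayLength="2097152" />
      </binding>
    </webHttpBinding>
  </bindings>
</system.serviceModel>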
Presently I am building a Silverlight WCF RIA application. It has been going well, with the client obtaining the data it needs without a problem. Then I decided to add a table to the database, update the associated Entity Data Model (EDMX) file, and regenerate the associated Domain Service class. Now it still gets all the tables it used to get with no problem, but when I try to obtain data from the new table tblProject, I receive this error:
Error
Load operation failed for query 'GetTblProjects'. The remote server returned an error: NotFound.
Error Details
at System.ServiceModel.DomainServices.Client.OperationBase.Complete(Exception error)
at System.ServiceModel.DomainServices.Client.LoadOperation.Complete(Exception error)
at System.ServiceModel.DomainServices.Client.DomainContext.CompleteLoad(IAsyncResult asyncResult)
at System.ServiceModel.DomainServices.Client.DomainContext.<>c__DisplayClass1b.<Load>b__17(Object )
Caused by: The remote server returned an error: NotFound.
at System.ServiceModel.DomainServices.Client.WebDomainClient`1.EndQueryCore(IAsyncResult asyncResult)
at System.ServiceModel.DomainServices.Client.DomainClient.EndQuery(IAsyncResult asyncResult)
at System.ServiceModel.DomainServices.Client.DomainContext.CompleteLoad(IAsyncResult asyncResult)
Caused by: The remote server returned an error: NotFound.
at System.Net.Browser.AsyncHelper.BeginOnUI(SendOrPostCallback beginMethod, Object state)
at System.Net.Browser.BrowserHttpWebRequest.EndGetResponse(IAsyncResult asyncResult)
at System.ServiceModel.Channels.HttpChannelFactory.HttpRequestChannel.HttpChannelAsyncRequest.CompleteGetResponse(IAsyncResult result)
Caused by: The remote server returned an error: NotFound.
at System.Net.Browser.BrowserHttpWebRequest.InternalEndGetResponse(IAsyncResult asyncResult)
at System.Net.Browser.BrowserHttpWebRequest.<>c__DisplayClassa.<EndGetResponse>b__9(Object sendState)
at System.Net.Browser.AsyncHelper.<>c__DisplayClass4.<BeginOnUI>b__0(Object sendState)
I've spent a lot of time looking at the domain service class, along with the XAML code and the associated view model class, and can't see any differences between the implementation related to, say, the tblBasin database table (which works fine) and the new tblProject table that is giving me the error. When I turn on WCF tracing, here is what I get for tblBasin:
<E2ETraceEvent xmlns="http://schemas.microsoft.com/2004/06/E2ETraceEvent">
<System xmlns="http://schemas.microsoft.com/2004/06/windows/eventlog/system">
<EventID>458758</EventID>
<Type>3</Type>
<SubType Name="Information">0</SubType>
<Level>8</Level>
<TimeCreated SystemTime="2012-04-20T21:54:03.3280726Z" />
<Source Name="System.ServiceModel" />
<Correlation ActivityID="{169c9eeb-338f-4ea5-a93a-34f234113283}" />
<Execution ProcessName="WebDev.WebServer40" ProcessID="5276" ThreadID="14" />
<Channel/>
<Computer>WKSTCAL0123</Computer>
</System>
<ApplicationData>
<TraceData>
<DataItem>
<TraceRecord xmlns="http://schemas.microsoft.com/2004/10/E2ETraceEvent/TraceRecord" Severity="Information">
<TraceIdentifier>http://msdn.microsoft.com/en-CA/library/System.ServiceModel.Security.SecurityImpersonationSuccess.aspx</TraceIdentifier>
<Description>Security Impersonation succeeded at the server.</Description>
<AppDomain>f8f8d82-2-129794323085920534</AppDomain>
<ExtendedData xmlns="http://schemas.microsoft.com/2006/08/ServiceModel/SecurityImpersonationTraceRecord">
<OperationAction>http://tempuri.org/ProjectSetDomainServicebinary/GetTblBasins</OperationAction>
<OperationName>GetTblBasins</OperationName>
</ExtendedData>
</TraceRecord>
</DataItem>
</TraceData>
</ApplicationData>
</E2ETraceEvent>
Here is what I get for the tblProject table data that fails:
<E2ETraceEvent xmlns="http://schemas.microsoft.com/2004/06/E2ETraceEvent">
<System xmlns="http://schemas.microsoft.com/2004/06/windows/eventlog/system">
<EventID>262163</EventID>
<Type>3</Type>
<SubType Name="Information">0</SubType>
<Level>8</Level>
<TimeCreated SystemTime="2012-04-20T21:54:03.3270721Z" />
<Source Name="System.ServiceModel" />
<Correlation ActivityID="{30c0de8a-fd38-4ca6-8c8a-b88f27a783bf}" />
<Execution ProcessName="WebDev.WebServer40" ProcessID="5276" ThreadID="12" />
<Channel/>
<Computer>WKSTCAL0123</Computer>
</System>
<ApplicationData>
<TraceData>
<DataItem>
<TraceRecord xmlns="http://schemas.microsoft.com/2004/10/E2ETraceEvent/TraceRecord" Severity="Information">
<TraceIdentifier>http://msdn.microsoft.com/en-CA/library/System.ServiceModel.Channels.MessageReceived.aspx</TraceIdentifier>
<Description>Received a message over a channel.</Description>
<AppDomain>f8f8d82-2-129794323085920534</AppDomain>
<Source>System.ServiceModel.Activation.HostedHttpContext+HostedHttpInput/61784148</Source>
<ExtendedData xmlns="http://schemas.microsoft.com/2006/08/ServiceModel/MessageTransmitTraceRecord">
<MessageProperties>
<AllowOutputBatching>False</AllowOutputBatching>
<Via>http://localhost:57671/Services/ZEGApps-Web-Services-ProjectSetDomainService.svc/binary/GetTblProjects</Via>
</MessageProperties>
<MessageHeaders>
<To d4p1:mustUnderstand="1" xmlns:d4p1="http://schemas.microsoft.com/ws/2005/05/envelope/none" xmlns="http://schemas.microsoft.com/ws/2005/05/addressing/none">http://localhost:57671/Services/ZEGApps-Web-Services-ProjectSetDomainService.svc/binary/GetTblProjects</To>
</MessageHeaders>
</ExtendedData>
</TraceRecord>
</DataItem>
</TraceData>
</ApplicationData>
</E2ETraceEvent>
Does anyone have any suggestions on how to resolve this issue? TIA.
UPDATE: All service calls are succeeding except the call to obtain data from the new tblProject database table I created.
I know this sounds pretty basic, but your question doesn't mention this information and the problem sounds very much like this could be your answer:
Have you updated the appropriate executable files on the server? If you updated only the client code with the knowledge of the new table, the server would behave this way.
Thank you so much for your reply, John!
Presently, the application is in pretty early development stages, so I am actually testing it using the Visual Studio Cassini web server. Both the client and the server projects are in the same solution. So when I rebuild the application, it should rebuild the associated XAP file, shouldn't it? This is what the timestamp for the file indicates. BTW, the SQL Server database is running on a separate database server.
Also, when I open and inspect the EDMX file, it shows the tblProject table as I expect.
If there is anything else I may have missed or you have any other suggestions, they are most welcome.
(Don't know if this is still an open question...)
Have you tried using Fiddler and making your remote call? Sometimes an error occurs on the server and an error page is returned to the client, but as RIA Services handles the call, you just get a generic error message.
If your dev server is on localhost, remember to use "localhost." (with the trailing dot) so that the call is intercepted by Fiddler.
I've been tearing my hair out trying to figure out why SSL works in one of my Azure projects but not in another.
When I navigate to my site, say https://foo.com, I can't even connect to the site. Browsers can't connect at all and curl says "couldn't connect to host". However, if I go to my cloudapp.net URL (e.g. https://foo.cloudapp.net), it can connect but browsers will complain and say my cert is for *.foo.com. Note: I am able to connect to http://foo.com without any trouble.
Here's my code with certain values obfuscated.
ServiceDefinition.csdef:
<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="MyApp" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
<WebRole name="www" vmsize="Small">
<Sites>
<Site name="Web">
<VirtualApplication name="r" physicalDirectory="../Foo/Bar" />
<Bindings>
<Binding name="Endpoint1" endpointName="Endpoint1" />
<Binding name="Endpoint2" endpointName="Endpoint2" />
</Bindings>
</Site>
</Sites>
<Endpoints>
<InputEndpoint name="Endpoint1" protocol="http" port="80" />
<InputEndpoint name="Endpoint2" protocol="https" port="443" certificate="STAR.foo.com" />
</Endpoints>
<Imports>
<Import moduleName="Diagnostics" />
</Imports>
<Certificates>
<Certificate name="STAR.foo.com" storeLocation="LocalMachine" storeName="My" />
</Certificates>
</WebRole>
</ServiceDefinition>
My cert is uploaded and the thumbprint matches (in this example it's also "1234567890").
ServiceConfiguration.cscfg:
<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="myApp" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
<Role name="www">
<Instances count="2" />
<ConfigurationSettings>
<Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
</ConfigurationSettings>
<Certificates>
<Certificate name="STAR.foo.com" thumbprint="1234567890" thumbprintAlgorithm="sha1" />
</Certificates>
</Role>
</ServiceConfiguration>
I have verified that:
My cert is uploaded
It's SHA1
Its thumbprint matches what I've specified in ServiceConfiguration.cscfg (in this example it's "1234567890")
The certs for the Certificate Authorities are also present (for me it's "PositiveSSL CA" and "AddTrust External CA root")
For the Azure instance, it confirms there are 2 endpoints (port 80 and port 443)
Why would I not be able to connect at all via https://foo.com, while https://foo.cloudapp.net loads (although triggering a browser warning)? This seems to indicate my configuration is correct but something else is off... ideas?
I think you may be looking in the wrong place for your problem!
How have you mapped foo.com to your site's address?
Note that Azure instances are given dynamic IP addresses: the address your site is on NOW may not be the one it's on tomorrow. The recommendation for Azure is to add a "www" CNAME DNS entry in your domain records that points at "foo.cloudapp.net".
This way, when someone browses to www.foo.com, the DNS server will (invisibly) say "hey, actually, that site is at foo.cloudapp.net". The browser will then ask for the IP address of foo.cloudapp.net; that domain is managed by Microsoft, who will return the current IP address for your site.
If you want foo.com to still get you to www.foo.com, you'll have to set up DNS redirection so that whenever someone types foo.com into their browser, they're redirected to www.foo.com. This will then cause the browser to resolve foo.cloudapp.net, and the HTTP request will be sent to your site. Some domain hosters charge for this (typically a nominal fee), some offer it as a free service.
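For illustration, a matching record in BIND-style zone-file syntax (most DNS control panels offer an equivalent form) would look roughly like this:

; zone file for foo.com (illustrative)
www    IN    CNAME    foo.cloudapp.net.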
HTH.