I've been doing some reading and watching some videos on aspnetcore and otel.
It's been a bit challenging because the API surface appears to have evolved quite a bit since 2020.
I've got my aspnetcore solution wired up with otel via OpenTelemetry.Extensions.Hosting, OpenTelemetry.Instrumentation.HTTP, OpenTelemetry.Instrumentation.AspNetCore and using a Jaeger exporter.
I have a couple of questions:
In the samples I've seen, I can set the top-level service name via
service registration on the exporters, but this property appears to
have been dropped in the most recent packages. How do I set the top-level
service name (it's showing up as an unknown service in Jaeger)?
I'm trying to propagate my tenant identifier to all span-related
calls. I'm using Activity.Current.AddBaggage("TenantId", MyTenantId),
but the value isn't getting exported to Jaeger (the baggage items
aren't present in the raw JSON received by Jaeger).
I'd like to include the activity id in the response headers for all
outgoing responses. Do I need to write this myself, or is it baked
into the ASP.NET Core OTel code somewhere?
Thanks!
How do I set the top-level service name (it's showing up as an unknown service in Jaeger)?
You achieve this using a Resource. There are a couple of ways to do it. You could set the environment variable OTEL_RESOURCE_ATTRIBUTES=service.name=my-service-name or OTEL_SERVICE_NAME=my-service-name.
If you want to do this programmatically, you would register a Resource with the TracerProvider.
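A rough sketch with recent OpenTelemetry.Extensions.Hosting packages (the exact builder/extension method names have shifted between versions, and "my-service-name" is a placeholder):

using OpenTelemetry;
using OpenTelemetry.Resources;
using OpenTelemetry.Trace;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddOpenTelemetry()
    .WithTracing(tracing => tracing
        // The Resource describes the service itself; AddService sets service.name,
        // which is what Jaeger shows instead of "unknown service".
        .SetResourceBuilder(ResourceBuilder.CreateDefault().AddService("my-service-name"))
        .AddAspNetCoreInstrumentation()
        .AddHttpClientInstrumentation()
        .AddJaegerExporter());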
I'm trying to propagate my tenant identifier to all span-related calls. I'm using Activity.Current.AddBaggage("TenantId", MyTenantId) but the value isn't getting exported to Jaeger (the baggage items aren't present in the raw JSON received by Jaeger).
I don't think baggage items are automatically added to the trace. For this use case you should probably be looking at TraceState.
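If the goal is simply to get the tenant id onto every exported span, one workaround is to copy baggage entries onto span tags in a processor and register it with AddProcessor on the tracer builder. This is only a sketch (BaggageToTagsProcessor is my own name, not something from the SDK):

using System.Diagnostics;
using OpenTelemetry;

public class BaggageToTagsProcessor : BaseProcessor<Activity>
{
    public override void OnEnd(Activity activity)
    {
        // Copy each baggage entry onto the span as a tag so exporters can see it.
        foreach (var entry in activity.Baggage)
        {
            activity.SetTag(entry.Key, entry.Value);
        }
    }
}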
I'd like to include the activity id in the response headers for all outgoing responses. Do I need to write this myself, or is it baked into the ASP.NET Core OTel code somewhere?
I am not super familiar with the ASP.NET instrumentation components, but there should be some sort of hook to do this. The JS, Python, and other client libraries have this feature.
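As a starting point, a minimal sketch in ASP.NET Core would be a small piece of middleware in Program.cs that stamps the current trace id onto each response (the "trace-id" header name is just an example):

app.Use(async (context, next) =>
{
    context.Response.OnStarting(() =>
    {
        // Activity.Current is the server span created by the instrumentation.
        var activity = System.Diagnostics.Activity.Current;
        if (activity != null)
        {
            context.Response.Headers["trace-id"] = activity.TraceId.ToString();
        }
        return Task.CompletedTask;
    });

    await next();
});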
I am trying to work around an issue with a third-party filter. My current plan is to put a filter in front of that filter to "fix" the query string so it does not error out.
I made an ActionFilterAttribute and added it to the filter list. It is running fine. I am adding my logic in the OnActionExecuting method.
The first item of context.HttpContext.Request.Query has a key that is a JSON structure. I need to change that key to be {}.
Problem is that both context.HttpContext.Request.Query and context.HttpContext.Request.QueryString are read-only.
How can I alter the context.HttpContext.Request.Query or the context.HttpContext.Request.QueryString?
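For reference, this is roughly what the filter looks like right now (names are simplified, and the StartsWith check is just an illustration of how I detect the bad key):

using System.Linq;
using Microsoft.AspNetCore.Mvc.Filters;

public class FixBreezeQueryStringAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext context)
    {
        var request = context.HttpContext.Request;
        var firstKey = request.Query.Keys.FirstOrDefault();

        // The first key arrives as a JSON structure and needs to become "{}",
        // but Query and QueryString don't offer an obvious way to replace it.
        if (firstKey != null && firstKey.TrimStart().StartsWith("{"))
        {
            // ... rewrite the query string here ...
        }

        base.OnActionExecuting(context);
    }
}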
EDIT - The Underlying Problem:
BreezeJS did a minimal upgrade to support .NET Core. In this upgrade, part of the code expects every call that has any parameters to return an IQueryable (QueryFns.cs Line 32). From reading the code, this seems like an error (the calling function, the actual filter, seems to expect null to be returned rather than an exception).
Either way, this makes moving to .NET Core very hard.
I considered my other options and if this fails, I will continue to pursue them:
Submit a pull request to fix the issue: The project has not accepted any pull requests in over a year and a half. So it seems unlikely my request will be taken.
Fork my own branch: I would rather not have to create and maintain a separate version with my own build and publishing pipeline.
Find a way to make the Breeze filter ignore the call when the result is not an IQueryable: I am currently looking into this one. (This question.)
Find a way to send my call from the client differently so that Breeze ignores calls that do not return IQueryable: The return type of the call is owned by the service, and this is an issue with the service. I would rather not have tight coupling between the service and the client such that the client is crafting workarounds for service-side filter issues.
There is a ton of documentation on academic theory and best practices for managing versioning of RESTful web services; however, I have not seen much discussion on how multiple REST APIs interact with data.
I'd like to see various architectural strategies or documentation on how to handle hosting multiple versions of your app that rely on the same data pool.
For instance, suppose you make a database level destructive change to a database table that causes you to have to increment your major API version to v2.
Now at any given time, users could be interacting with the v1 web service and the v2 web service at the same time and creating data that is visible and editable by both services. How should this be handled?
Most changes introduced to an API affect the content of the response. As long as the changes are incremental, this is not a very big problem (note: you should never expose the exact DB model directly to clients).
When you make a destructive or significant change to the DB model and a new version of the API is introduced, there are two options:
Turn the previous version off and filter all requests to it, replying with a 301 and the new location (see the sketch after this list).
If option 1 is impossible, you need to maintain both the previous and the current version of the API. Since this can be time- and money-consuming, it should be done only for a limited period, after which the previous version should be turned off.
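For option 1, a minimal sketch in ASP.NET Core could look like the following (the /api/v1 and /api/v2 prefixes are placeholders for whatever your routes actually are):

app.Use(async (context, next) =>
{
    // Answer any request to the retired v1 surface with a permanent redirect.
    if (context.Request.Path.StartsWithSegments("/api/v1", out var remainder))
    {
        var location = "/api/v2" + remainder.Value + context.Request.QueryString.Value;
        context.Response.Redirect(location, permanent: true);
        return;
    }

    await next();
});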
What about the DB model? When two versions of the API are active at the same time, I'd try to keep the DB model as consistent as possible, keeping in mind that running two versions in parallel is only temporary. But as I wrote earlier, the DB model should never be exposed directly to clients; this alone can help you avoid a lot of problems.
I have given this a little thought...
One solution may be this:
Just because the v1 API should not change doesn't mean the underlying implementation cannot change. You can modify the v1 implementation code to set a default value, omit the saving of a field, return an unchecked exception, or do some kind of computational logic that helps the v1 API stay compatible with the shared datasource. Then, implement a better, cleaner, more idealistic implementation in v2.
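As an illustration of that idea (all names here are made up): the v1 code keeps its old contract but fills in sensible defaults for fields that only exist because of the v2 schema change.

// Hypothetical types: the v1 request model and the shared (v2-shaped) entity.
public record CustomerV1Request(string Name);

public record CustomerEntity(string Name, string Region);

public static class V1Compatibility
{
    public static CustomerEntity ToEntity(CustomerV1Request request) =>
        // "Region" was added for v2; v1 clients never send it, so the v1
        // implementation defaults it to keep the shared table consistent.
        new CustomerEntity(request.Name, Region: "unknown");
}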
When you are going to change anything in your API structure that can change the response, you must increase your API version.
For example, you have this request and response:
request post: a, b, c, d
res: {a, b, c+d}
and you are going to add 'e', fetched from the database, to your response.
If no current client version depends on 'e', you can add it under your current API version.
But if your new changes alter the existing responses, for example:
res: {a+e, b, c+d}
you must increase the API version number to prevent clients from breaking.
Changes to the request inputs work the same way.
Hi, I am working with the Mule Web Service Consumer, and when I try to call an operation with multiple parameters it warns me:
Warning : Operation Messages With More then 1 Part Are Not Supported
I just want to pass multiple parameters to my SOAP method to achieve the task.
Is this a problem with the Web Service Consumer, or is there any way to deal with it?
I'm afraid this is a known limitation of the Web Service Consumer. However, you can accomplish this with the CXF component.
I was having the same issue and found some information around it...
There is an improvement logged in JIRA; it may help if you vote for it :)
This link suggests that you can still use WSConsumer but need to do some hand-crafting of the request XML... I could not understand exactly what that meant, so if anyone has an example it would be great.
PS: The problem I had with the CXF component is that it does not play well with the new DataWeave transformer, as the DataWeave needs to be placed within the response block, and from there it cannot DataSense the response coming out of the CXF component.
The solution here is very simple. You just have to comment out the other messages and then load metadata for the non-commented message (the one you're trying to load metadata for). Repeat this procedure for all the other messages and you're good to go.
Hope this helps!
Is it possible to write a plugin for Glimpse's existing SQL tab?
I'm trying to log my SQL queries and the currently available extensions don't support our in-house SQL libary. I have written a custom plugin which logs what I want, but it has limited functionality and it doesn't integrate with the existing SQL tab.
Currently, I'm logging to my custom plugin using a single helper method inside my DAL's base class. This helper takes the SqlCommand and duration in order to show data on my custom tab:
// simplified example:
Stopwatch sw = Stopwatch.StartNew();
sqlCommand.Connection = sqlConnection;
sqlConnection.Open();
object result = sqlCommand.ExecuteScalar();
sqlConnection.Close();
sw.Stop();
long duration = sw.ElapsedMilliseconds;
LogSqlActivity(sqlCommand, null, duration);
This works well on my 'custom' tab but unfortunately means I don't get metrics shown on Glimpse's HUD.
Is there a way I can provide Glimpse directly with the info it needs (in terms of method names, and parameters) so it displays natively on the SQL tab?
The following advice is based on the fact that you can't use DbProviderFactory and you can't use a proxied SqlCommand, etc.
The data that appears in the "out-of-the-box" SQL tab is based on messages of given types being published through our internal Message Broker (see below for information on this). Because of the above limitations in your case, to get things lighting up correctly (i.e. your data showing up in the HUD and the SQL tab), you will need to simulate the work that we do under the covers when we publish these messages. This shouldn't be that difficult, and once done, it should just work moving forward.
If you have a look at the various proxies we have here, you will be able to see what messages we publish in what circumstances. Here are some highlights:
DbCommand
Log command start - here
Log command error - here
Log command end - here
DbConnection:
Log connection open - here
Log connection closed - here
DbTransaction
Log Started - here
Log committed - here
Log rollback - here
Other
Command row count here - Glimpse calculates this at the DbDataReader level, but you could do it elsewhere as well
Now that you have an idea of what messages we are expecting and how we generate them, as long as you pass in the right data when you publish those messages, everything should just light up. If you are interested, here is the code that looks for the messages that you will be publishing.
Message Broker: If you look at the GlimpseConfiguration here you will see how to access the Broker. This can be done statically if needed (as we do here). From here you can publish the messages you need.
Helpers: For generating some of the above messages, you can use the helpers inside the Support class here. I would have shifted all the code for publishing the actual messages to this class, but I didn't think there would be too many people doing what you are doing.
Update 1
Starting point: With the above approach you shouldn't need to write your own plugin. You should just be able to access the broker via GlimpseConfiguration.GetConfiguredMessageBroker() (make sure you check if it's null, which it is if Glimpse is turned off, etc.) and publish your messages.
I would imagine that you would put the inspection code that leverages the broker and publishes the messages wherever you have knowledge of the information that needs to be collected (i.e. inside your custom lib). Normally this would require references from your lib to Glimpse (which you may not want), so to protect against this, your lib would call a proxy (which could be another VS project) that has the Glimpse dependency. Hence your ADO lib only has references to your own code.
To get your toes wet, try just publishing a couple of fake connection and command messages. Assuming the broker you get from GlimpseConfiguration.GetConfiguredMessageBroker() isn't null, these should just show up. Then you can work towards getting real data into it from your lib.
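Something along these lines, as a very rough sketch. The anonymous object below is a stand-in rather than a real Glimpse message type (swap it for whichever message classes the proxies/Support class above actually publish), and the exact namespaces may differ between Glimpse versions:

using System;
using Glimpse.Core.Extensibility;
using Glimpse.Core.Framework;

public static class GlimpseSmokeTest
{
    public static void PublishFakeCommand()
    {
        IMessageBroker broker = GlimpseConfiguration.GetConfiguredMessageBroker();
        if (broker == null)
        {
            return; // Glimpse is turned off for this request
        }

        // Replace this placeholder with the real command/connection message types;
        // a message type the SQL tab doesn't subscribe to will simply be ignored.
        broker.Publish(new { CommandText = "SELECT 1", Duration = TimeSpan.FromMilliseconds(1) });
    }
}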
Update 2
Obsolete Broker Access
It's marked as obsolete because it's going to change in v2. You will still be able to do what you need to do, but the way of accessing the broker will be different. For what you currently need, this is OK.
Sometimes null
As you have found, this really depends on where in the page lifecycle you currently are. To get around this, I would probably change my original recommendation a little.
In the code where you are currently creating messages and pushing them to the message bus, try putting them into HttpContext.Current.Items. If you haven't used it before, this is a store that ASP.NET provides out of the box and that lasts the lifetime of a given request. You could keep a list in there: still create the message objects you are creating now, but put them into that list instead of pushing them through the broker.
Then, create an HttpModule (it's really simple to do) which taps into the PostLogRequest event. Within this handler, you would pull the list out of the context, iterate through it, and push each message into the message broker (accessed the same way you have been).
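A rough sketch of that module (the list type and Items key are my own choices; storing deferred publish actions means the module doesn't need to know the concrete message types):

using System;
using System.Collections.Generic;
using System.Web;
using Glimpse.Core.Extensibility;
using Glimpse.Core.Framework;

public class DeferredGlimpseMessageModule : IHttpModule
{
    public const string ItemsKey = "PendingGlimpseMessages";

    public void Init(HttpApplication application)
    {
        application.PostLogRequest += (sender, args) =>
        {
            // The DAL stored deferred publishes here earlier in the request.
            var pending = HttpContext.Current.Items[ItemsKey] as List<Action<IMessageBroker>>;
            var broker = GlimpseConfiguration.GetConfiguredMessageBroker();
            if (pending == null || broker == null)
            {
                return;
            }

            foreach (var publish in pending)
            {
                publish(broker);
            }
        };
    }

    public void Dispose() { }
}

On the DAL side, instead of publishing directly, you would add entries like b => b.Publish(message) to that per-request list.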
I have set up a Content Type Hub and tested the syndication is working correctly by creating a test content type and watching it be published to the client site.
Then I deployed the content types I am actually interested in publishing to the hub (by way of a feature) along with the site columns they depend on.
I get the error
Content type '...' cannot be published to this site because feature '...' is not enabled.
I want to deploy content types with features for upgradability and ease of porting between dev, qual, and prod environments, but I am left not understanding what the benefit of the Hub is.
If I have to activate the deploying feature, the content types will already be on the site before publishing takes place. If I have to manually create the content types on the Hub site with the web UI (yuck!), I have the issue of trying to keep three landscapes manually synchronized.
Is there a way to efficiently manage content type deployment to the Hub while still using the Hub to publish the content types?
The advantage of using the Content Type Hub, is that it allows you to use and reuse your Content Types over multiple site collections and Web Applications throughout your farm.
Because all of your site collections are now using instances of the same syndicated content types, if, in the future, you need to add/remove/rename columns within the content types, this is done as easily as updating the content type and resubscribing (then allowing SharePoint to run its timer jobs, and double-checking that the changes went through, because you're a careful SharePoint administrator).
I am not sure which error you are receiving; there simply isn't enough context in your post. However, I think you may be slightly confused about how syndicated content types are published. First, you turn on the content syndication hub publishing feature on the site collection that holds all of the content types you are going to reuse throughout your farm. Next you configure the managed metadata service, so that SharePoint loads each of your content types "into memory", more or less.
After this step, you get to choose which site collections you want to subscribe to the syndication hub. To do this, you need to turn on the content type publishing site collection feature. Note: If you use blank templates for your sites you may receive a feature error like you've described, due to a "flaw" with blank templates. See my post at: http://www.thesharepointblog.net/Lists/Posts/Post.aspx?ID=109
Only AFTER you've turned on the subscribing feature, AND the Content Type Hub timer job has run, AND the subscribing timer job has run, will your site collection see the available content types.
As for manually creating content types on the hub site, the only OOB way of doing this is to use the UI. Personally, I wrote a utility that does everything I just described for me, from creating the initial content types, to creating the syndication hub, to publishing them to all of the site collections, and, most time-consumingly, associating them with all of the lists and libraries on the subscribing site collections. I had intended for my employing company to sell it, but as they don't seem interested, I could open-source it if there is enough interest.
Hope this was helpful.
This looks like a shortcoming of the hub, indeed.
I've witnessed it before.
If you've deployed your content type to the hub, please check whether the Inherits attribute of the ContentType element is set to TRUE. Otherwise it won't work in a hub.
<ContentType ID="xxxxx"
Name="xxxx"
Group="xxxx"
Description="xxxx"
Inherits="TRUE"
Version="0">
</ContentType>
Don't forget that you can actually synchronize the content types BETWEEN farms; this is especially valuable when you are developing on a separate farm and don't want to hassle with a PnP Framework for managing your content types... In some cases, the content type may already exist on the production farm and you need a copy of it on dev and/or test.