How to use MiniProfiler storage to support multiple web instances? - asp.net-core

I've hooked up MiniProfiler to my local ASP.NET Core project and it works as expected. Now I need it to work in a hosted environment where there are multiple instances of the same website and there are no sticky sessions. It is my understanding that this should be supported if you just set the storage option when configuring the profiler. However, setting the storage does not seem to do anything. I initialize the storage like this:
var redisConnection = "...";
MiniProfiler.DefaultOptions.Storage = new RedisStorage(redisConnection);
app.UseMiniProfiler();
After doing this, I expected that I could open a profiled page and a result would be added to my Redis cache. I would then also expect a new instance of my website to list the original profiling result. However, nothing is written to the cache when generating new profile results.
To test the connection, I tried manually saving a profiler instance (storage.Save()) and it gets saved to the storage. But again, the saved result is not loaded when showing profiler results (and regardless, none of the examples I've seen require you to do this). I have a feeling that I've missed some point about how the storage is supposed to work.
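For reference, the manual test was along these lines (a sketch, not my exact code; the profiler name and profiled work are arbitrary):
// Sketch of the manual storage test described above (assumes the same
// redisConnection as in the setup snippet).
var storage = new RedisStorage(redisConnection);
var profiler = MiniProfiler.StartNew("manual-test");
using (profiler.Step("some work"))
{
    // ... code being profiled ...
}
profiler.Stop();
storage.Save(profiler); // this entry does appear in Redis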

It turns out that my assumption that MiniProfiler.DefaultOptions.Storage would be used was wrong. After changing my setup code to the following, it works.
// Startup.cs, ConfigureServices
var redisConnection = "...";
services.AddMiniProfiler(o =>
{
    o.RouteBasePath = "/profiler";
    o.Storage = new RedisStorage(redisConnection); // This is new
});

// Startup.cs, Configure
app.UseMiniProfiler();
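With the options-based setup, every instance reads and writes the same Redis store, so (given the RouteBasePath above) the shared result list should be visible at /profiler/results-index from any instance.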

Related

Is there an option in Serilog to change log file parameters in runtime, the same way LogLevel can be changed?

With Serilog in ASP.NET Core you can change the log level at runtime by using
MinimumLevel.ControlledBy(SeriLogLevelSwitch).
Is there a similar way to do this with LoggerConfiguration().WriteTo.File(...)?
For instance, I need to change the fileSizeLimitBytes or rollingInterval configuration of the log file without restarting the service. Can this be achieved with Serilog?
By pulling in the latest Serilog.AspNetCore you'll find a class called ReloadableLogger, constructed through the CreateBootstrapLogger() extension method:
// using Serilog;
var logger = new LoggerConfiguration()
    .WriteTo.File(...)
    .CreateBootstrapLogger();

// Optional but suggested:
Log.Logger = logger;

// Use the logger...

// Change parameters later on:
logger.Reload(lc => lc
    .WriteTo.File(...));
You might find that some interactions between CreateBootstrapLogger() and UseSerilog(callback) in ASP.NET Core trip things up a bit; if you use this technique, try the parameterless version of UseSerilog().
ReloadableLogger has only just appeared and focuses on a slightly different scenario, so you may still need to work through some awkwardness setting this up - YMMV.
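For instance (a sketch only; the file path, size limits, and rolling intervals below are illustrative placeholders, not values from this answer), reloading the file sink parameters might look like this:
// Assumes Serilog.AspNetCore (or Serilog.Extensions.Hosting) plus Serilog.Sinks.File.
using Serilog;

var logger = new LoggerConfiguration()
    .WriteTo.File("logs/app-.log",
        rollingInterval: RollingInterval.Day,
        fileSizeLimitBytes: 10_000_000)
    .CreateBootstrapLogger();

Log.Logger = logger;
Log.Information("Running with the initial file sink settings");

// Later, without restarting the service, swap in new file sink parameters:
logger.Reload(lc => lc
    .WriteTo.File("logs/app-.log",
        rollingInterval: RollingInterval.Hour,
        fileSizeLimitBytes: 50_000_000));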

Can I determine `IsDevelopment` from `IWebJobsBuilder`?

Very much an XY problem, but I'm interested in the underlying answer too.
See bottom for XY context.
I'm in a .NET Core 3 Azure Functions (v3) app project.
This code makes my question fairly clear, I think:
namespace MyProj.Functions
{
    internal class CustomStartup : IWebJobsStartup
    {
        public void Configure(IWebJobsBuilder builder)
        {
            var isDevelopment = true; // Can I correctly populate this, such that it's true only for local Dev?
            if (isDevelopment)
            {
                // Do stuff I wouldn't want to do in Prod, or on CI...
            }
        }
    }
}
XY Context:
I have set up Swagger/Swashbuckle for my Function, and ideally I want it to auto-open the swagger page when I start the Function, locally.
On an API project this is trivial to do in Project Properties, but a Functions csproj doesn't have the option to start a web page "onDebug"; that whole page of project Properties is greyed out.
The above is the context in which I'm calling builder.AddSwashBuckle(Assembly.GetExecutingAssembly()); and I've added a call to Diagnostics.Process to start a webpage during Startup. This works just fine for me.
I've currently got that behind a [Conditional("DEBUG")] flag, but I'd like it to be more constrained if possible. Definitely open to other solutions, but I haven't been able to find any so ...
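The auto-open call is roughly the following (a sketch; the class name is hypothetical and the URL is a placeholder for whatever your local Functions host serves):
using System.Diagnostics;

internal static class DebugLauncher
{
    // Compiled away entirely unless the DEBUG symbol is defined.
    [Conditional("DEBUG")]
    public static void OpenSwaggerUi()
    {
        Process.Start(new ProcessStartInfo
        {
            FileName = "http://localhost:7071/api/swagger/ui", // placeholder URL
            UseShellExecute = true // needed to open a URL in the default browser
        });
    }
}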
While I am not completely sure that it is possible in Azure Functions, I think that setting the ASPNETCORE_ENVIRONMENT application setting, as described in https://learn.microsoft.com/en-us/azure/azure-functions/functions-how-to-use-azure-function-app-settings, should allow you to determine whether the environment is set to Production or Development: inject an IHostEnvironment dependency and call .IsDevelopment() on it.
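As a rough sketch of that idea (not verified against the Functions runtime; the setting name comes from the linked docs, everything else is an assumption), you could also read the setting directly inside Configure:
using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Hosting;

namespace MyProj.Functions
{
    internal class CustomStartup : IWebJobsStartup
    {
        public void Configure(IWebJobsBuilder builder)
        {
            // ASPNETCORE_ENVIRONMENT comes from the Function App's application
            // settings (or local.settings.json when running locally).
            var environment = Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT");
            var isDevelopment = string.Equals(
                environment, "Development", StringComparison.OrdinalIgnoreCase);

            if (isDevelopment)
            {
                // Do stuff you wouldn't want to do in Prod, or on CI...
            }
        }
    }
}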

Optimize API call in Symfony

How do I optimize an API call in Symfony?
I make the call with the Guzzle bundle, but in some situations the response time is very long.
The client application calls a function on the server.
The server application extracts the objects from the database and sends them back to the client.
The client creates the new objects with properties from the server response.
One of the ways to improve your API calls is to use caching. In Symfony there are many different ways to achieve this. I can show you one of them (PhpFileCache example):
In services.yml create cache service:
your_app.cache_provider:
    class: Doctrine\Common\Cache\PhpFileCache
    arguments: ["%kernel.cache_dir%/path_to/your_cache_dir", ".your.cached_file_name.php"]
(Remember, you need the Doctrine extension in your app for this to work.)
Then pass your caching service your_app.cache_provider to any service where you need caching:
Again in your services.yml:
some_service_of_yours:
    class: AppBundle\Services\YourService
    arguments: ['@your_app.cache_provider']
Finally, in your service (where you want to perform API caching):
use Doctrine\Common\Cache\CacheProvider;

class YourService
{
    private $cache;

    public function __construct(CacheProvider $cache)
    {
        $this->cache = $cache;
    }

    public function makeApiRequest()
    {
        $key = 'some_unique_identifier_of_your_cache_record';
        if ($data = $this->cache->fetch($key)) {
            return unserialize($data); // cache hit: no API call needed
        }
        // $provider is a placeholder for whatever actually performs the call
        // (e.g. a Guzzle client injected into this service)
        $data = $provider->makeActualApiCallHere('http://some_url');
        $this->cache->save($key, serialize($data), 10800); // 10800 = seconds to keep the entry before it is invalidated; change to your needs
        return $data; // now you can use the data
    }
}
This is quite a generic example; you should adapt it to your exact needs, but the idea is simple: you can cache data and avoid unnecessary API calls to speed things up. Be careful, though, because caching has the drawback of presenting stale (obsolete) data. Some things can (and should) be cached; some things shouldn't.
If you control the server
You should put a caching reverse proxy like Varnish in front of your PHP server. The PHP app must send HTTP cache headers to tell the proxy how long it must cache the response. Alternatively, you can use a library like FOSHttpCache to set up a cache invalidation strategy (the PHP server purges the cache in the proxy when the data is updated - a more advanced and complex scenario).
The PHP server will not even be called if the requested resource is in the reverse proxy cache.
You should also use a profiler like Blackfire.io or xhprof to find out why some parts of your PHP code (or your SQL queries) take so much time to execute, then optimize.
If you control the client
You can use this HTTP cache middleware for Guzzle to cache every API result according to HTTP headers sent by the API.

Store and Sync local Data using Breezejs and MVC Web API

I want to use the Breezejs API for storing data in local storage (IndexedDB or WebSQL) and also want to sync the local data with SQL Server.
But I have failed to achieve this and have not been able to find a sample app of this kind built with Breezejs, Knockout and MVC Web API.
My requirements are:
1) If the internet is available, the data will come from SQL Server via the MVC Web API.
2) If the internet is down, the application will retrieve data from cached local storage (IndexedDB or WebSQL).
3) As soon as the internet is back, the local data will sync to SQL Server.
Please let me know whether I can achieve this requirement using the Breezejs API.
If yes, please provide me some links and samples.
If no, what else can we use to achieve this type of requirement?
Thanks.
You can do this, but I would suggest simply using localStorage. Basically, every time you read from the server or save to the server, you export the entities and save them to local storage. Then, when you need to read the data and the server is unreachable, you read it from localStorage and use importEntities to get it into the manager, then query locally.
function getData() {
    var query = breeze.EntityQuery
        .from("{YourAPI}");

    return manager.executeQuery(query)
        .then(saveLocallyAndReturnPromise)
        .fail(tryLocalRestoreAndReturnPromise);

    // If the query succeeded remotely, save the data in case the
    // connection is lost later.
    function saveLocallyAndReturnPromise(data) {
        // Should add error handling here. This code
        // assumes the local processing will be successful.
        var cacheData = manager.exportEntities();
        window.localStorage.setItem('savedCache', cacheData);
        // Return the queried data as a promise so that this detour is
        // transparent to the viewmodel.
        return Q(data);
    }

    function tryLocalRestoreAndReturnPromise(error) {
        // Assume any error just means the server is inaccessible.
        // Simplified for the example; more robust error handling is warranted.
        var cacheData = window.localStorage.getItem('savedCache');
        // NOTE: should handle an empty saved cache here by throwing an error.
        manager.importEntities(cacheData); // restore the saved cache
        var localQuery = query.using(breeze.FetchStrategy.FromLocalCache);
        return manager.executeQuery(localQuery); // this is a promise
    }
}
This is a code skeleton for simplicity. You should catch and handle errors, add an isConnected function to determine connectivity, etc.
If you are doing editing locally, there are a few more hoops to jump through. Every time you make a change to the cache, you will need to export either the whole cache or the changes (probably depending on the size of the cache). When there is a connection, you will need to test for local changes first and, if found, save them to the server before requerying the server. In addition, any schema changes made while offline complicate matters tremendously, so be aware of that.
Hope this helps. A robust implementation is a bit more complex, but this should give you a starting point.

.Net 4.0 MemoryCache Clearing

I am using a .Net 4.0 MemoryCache in my WCF service.
I originally was using the Default Cache as below:
var cache = MemoryCache.Default;
Then I follow the usual pattern: try to find something in the cache, get it if found, and set it into the cache if not (code snippet / pseudocode below):
var geoCoordinate = cache.Get(cacheKey) as GeoCoordinate;
if (geoCoordinate == null)
{
    geoCoordinate = GetItFromSomewhere(); // hypothetical lookup that fetches the value
    cache.Set(cacheKey, geoCoordinate, DateTimeOffset.Now.AddDays(7));
}
I was finding that my entries were disappearing after approx. 2 minutes. Even if my code placed the missing entries back into the cache, subsequent cache Gets would return null.
My WCF Service is being hosted by IIS 7.5. If I recycled the App Pool, everything would work normally for 2 minutes, but then the pattern as described above would repeat.
After doing some research, I replaced:
var cache = MemoryCache.Default;
with new code to create the cache as below:
var config = new NameValueCollection();
// Hack: set the polling interval to 10 days, so it will not clear the cache.
config.Add("pollingInterval", "10:00:00:00");
config.Add("physicalMemoryLimitPercentage", "90");
config.Add("cacheMemoryLimitMegabytes", "2000");
// Instantiate the cache
var cache = new MemoryCache("GeneralCache", config);
It seems that nothing I put into physicalMemoryLimitPercentage or cacheMemoryLimitMegabytes helps, but setting pollingInterval to a large timespan does.
I.e. if I set:
config.Add("pollingInterval", "00:00:15:00");
then everything works fine for 15 minutes.
Note: if my WCF service is hosted by IIS Express in my dev environment, I cannot reproduce the problem; it only seems to happen when the service is hosted by IIS 7.5.
My app pool on IIS 7.5 is NOT recycling.
Has anybody experienced something like this?
I have seen the below:
MemoryCache does not obey memory limits in configuration
Thanks,
Matt
I too have seen this issue and filed a bug with MS here, with a simple reproducer project.
This has been resolved by MS in the above bug - there is a workaround there, and an upcoming QFE for .NET 4, as well as confirmation that this isn't a problem in 4.5.
I have not yet tried the workaround.
I can, however, give some more information on the conditions required for me to recreate this. The application pool needed to be in Integrated Pipeline mode for me to see the issue - Classic mode avoids it, though that removes some of the benefits of moving to IIS 7.5.
Equally, when using Integrated mode, I also did not see the issue if I used a built-in application pool identity such as ApplicationPoolIdentity. However, my app needs to run as a custom identity using a service account, and it is at that point that I see the behavior. So if you don't need Integrated mode or a custom identity, you may be able to work around this.
Perhaps the built-in accounts have permissions to do the cache memory statistics gathering initiated by pollingInterval that my custom identity does not have; I don't know.
Hope this helps, or that someone else can join more of the dots to figure out a better workaround.