Hello, I am building a TCP server to which multiple clients will connect and send and receive data.
I want to know: if the framework does not create a 1:1 thread-to-client ratio but uses the thread pool instead, how do the following work:
1. If the method that gets executed after accepting the socket contains a loop inside it, won't the allocated thread (from the thread pool) get blocked on that client's context?
2. Where is the context for each client stored?
P.S. In my picture I do not understand how the blue thread (given by the thread pool to service the two clients) gets reused.
The code below contains the Handler (which holds all connections) and the Client (a socket wrapper with basic read/write functionality).
Sockets Handler
class Handler
{
private Dictionary<string, Client> clients = new Dictionary<string, Client>();
private object Lock = new object();
public Handler()
{
}
public async Task LoopAsync(WebSocketManager manager)
{
WebSocket clientSocket = await manager.AcceptWebSocketAsync();
string clientID = Ext.MakeId();
using(Client newClient = Client.Create(clientSocket, clientID))
{
while (newClient.KeepAlive)
{
await newClient.ReceiveFromSocketAsync();
}
}
}
public bool RemoveClient(string ID)
{
bool removed = false;
lock (Lock)
{
if (this.clients.TryGetValue(ID, out Client client))
{
removed= this.clients.Remove(ID);
}
}
return removed;
}
}
SocketWrapper
class Client:IDisposable
{
public static Client Create(WebSocket socket,string id)
{
return new Client(socket, id);
}
private readonly string ID;
private const int BUFFER_SIZE = 100;
private readonly byte[] Buffer;
public bool KeepAlive { get; private set; }
private readonly WebSocket socket;
private Client(WebSocket socket,string ID)
{
this.socket = socket;
this.ID = ID;
this.Buffer = new byte[BUFFER_SIZE];
}
public async Task<ReadOnlyMemory<byte>> ReceiveFromSocketAsync()
{
WebSocketReceiveResult result = await this.socket.ReceiveAsync(this.Buffer, CancellationToken.None);
this.KeepAlive = result.MessageType != WebSocketMessageType.Close;
// Return only the bytes that were actually received, not the whole buffer.
return this.Buffer.AsMemory(0, result.Count);
}
public async Task SendToSocketAsync(string message)
{
ReadOnlyMemory<byte> memory = Encoding.UTF8.GetBytes(message);
await this.socket.SendAsync(memory, WebSocketMessageType.Binary,true,CancellationToken.None);
}
public void Dispose()
{
this.socket.Dispose();
}
}
The service (middleware) that gets plugged into the application pipeline:
class SocketService
{
Handler handler;
RequestDelegate next;
public SocketService(RequestDelegate _next, Handler _handler)
{
this.next = _next;
this.handler = _handler;
}
public async Task Invoke(HttpContext context)
{
if (!context.WebSockets.IsWebSocketRequest)
{
await this.next(context);
return;
}
// Hand the accepted request off to the handler's receive loop.
await this.handler.LoopAsync(context.WebSockets);
}
}
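For context, this is roughly how I plug it into the pipeline (just a sketch; my actual Startup is omitted above):
    // Startup.ConfigureServices - one Handler shared by all connections.
    services.AddSingleton<Handler>();

    // Startup.Configure - WebSockets must be enabled before the middleware runs.
    app.UseWebSockets();
    app.UseMiddleware<SocketService>();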
Let's say I have several ASP.NET BackgroundServices and each is logging to its own scope/operation (OP1 and OP2).
public class MyBackgroundService1 : BackgroundService
{
private readonly ILogger<MyBackgroundService1> _logger;
public MyBackgroundService1(ILogger<MyBackgroundService1> logger)
{
_logger = logger;
}
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
var activity = new Activity("OP1");
activity.Start();
while (!stoppingToken.IsCancellationRequested)
{
_logger.LogInformation("Hello from MyBackgroundService1");
await Task.Delay(5000, stoppingToken);
}
}
}
public class MyBackgroundService2 : BackgroundService
{
private readonly ILogger<MyBackgroundService2> _logger;
public MyBackgroundService2(ILogger<MyBackgroundService2> logger)
{
_logger = logger;
}
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
var activity = new Activity("OP2");
activity.Start();
while (!stoppingToken.IsCancellationRequested)
{
_logger.LogInformation("Hello from MyBackgroundService2");
await Task.Delay(1000, stoppingToken);
}
}
}
Now I would like to use Blazor and want to display a table per operation with all corresponding logs.
Example output
OP1 Logs:
Hello from MyBackgroundService1
Hello from MyBackgroundService1
OP2 Logs:
Hello from MyBackgroundService2
Hello from MyBackgroundService2
How would I do that?
For this purpose, you need to create a log provider that stores the information in the database and then retrieves the information from the log table.
First, create a class to store logs in the database as follows:
public class DBLog
{
public int DBLogId { get; set; }
public string? LogLevel { get; set; }
public string? EventName { get; set; }
public string? Message { get; set; }
public string? StackTrace { get; set; }
public DateTime CreatedDate { get; set; }=DateTime.Now;
}
Now we need to create a custom DBLogger. The DBLogger class implements the ILogger interface and has three methods, the most important of which is the Log method, which is called every time the logger is used in the program. The other two methods (BeginScope and IsEnabled) are described in the ILogger documentation.
public class DBLogger:ILogger
{
private readonly LogLevel _minLevel;
private readonly DbLoggerProvider _loggerProvider;
private readonly string _categoryName;
public DBLogger(
DbLoggerProvider loggerProvider,
string categoryName
)
{
_loggerProvider= loggerProvider ?? throw new ArgumentNullException(nameof(loggerProvider));
_categoryName= categoryName;
}
public IDisposable BeginScope<TState>(TState state)
{
return new NoopDisposable();
}
public bool IsEnabled(LogLevel logLevel)
{
return logLevel >= _minLevel;
}
public void Log<TState>(
LogLevel logLevel,
EventId eventId,
TState state,
Exception exception,
Func<TState, Exception, string> formatter)
{
if (!IsEnabled(logLevel))
{
return;
}
if (formatter == null)
{
throw new ArgumentNullException(nameof(formatter));
}
var message = formatter(state, exception);
if (exception != null)
{
message = $"{message}{Environment.NewLine}{exception}";
}
if (string.IsNullOrEmpty(message))
{
return;
}
var dblLogItem = new DBLog()
{
EventName = eventId.Name,
LogLevel = logLevel.ToString(),
Message = $"{_categoryName}{Environment.NewLine}{message}",
StackTrace=exception?.StackTrace
};
_loggerProvider.AddLogItem(dblLogItem);
}
private class NoopDisposable : IDisposable
{
public void Dispose()
{
Dispose(true);
GC.SuppressFinalize(this);
}
protected virtual void Dispose(bool disposing)
{
}
}
}
Now we need to create a custom log provider so that an instance of the above custom database logger (DBLogger) can be created.
public class DbLoggerProvider : ILoggerProvider
{
private readonly CancellationTokenSource _cancellationTokenSource = new();
private readonly IList<DBLog> _currentBatch = new List<DBLog>();
private readonly TimeSpan _interval = TimeSpan.FromSeconds(2);
private readonly BlockingCollection<DBLog> _messageQueue = new(new ConcurrentQueue<DBLog>());
private readonly Task _outputTask;
private readonly IServiceProvider _serviceProvider;
private bool _isDisposed;
public DbLoggerProvider(IServiceProvider serviceProvider)
{
_serviceProvider = serviceProvider ?? throw new ArgumentNullException(nameof(serviceProvider));
_outputTask = Task.Run(ProcessLogQueue);
}
public ILogger CreateLogger(string categoryName)
{
return new DBLogger(this, categoryName);
}
private async Task ProcessLogQueue()
{
while (!_cancellationTokenSource.IsCancellationRequested)
{
while (_messageQueue.TryTake(out var message))
{
try
{
_currentBatch.Add(message);
}
catch
{
//cancellation token canceled or CompleteAdding called
}
}
await SaveLogItemsAsync(_currentBatch, _cancellationTokenSource.Token);
_currentBatch.Clear();
await Task.Delay(_interval, _cancellationTokenSource.Token);
}
}
internal void AddLogItem(DBLog appLogItem)
{
if (!_messageQueue.IsAddingCompleted)
{
_messageQueue.Add(appLogItem, _cancellationTokenSource.Token);
}
}
private async Task SaveLogItemsAsync(IList<DBLog> items, CancellationToken cancellationToken)
{
try
{
if (!items.Any())
{
return;
}
// We need a separate context for the logger to call its SaveChanges several times,
// without using the current request's context and changing its internal state.
var scopeFactory = _serviceProvider.GetRequiredService<IServiceScopeFactory>();
using (var scope = scopeFactory.CreateScope())
{
var scopedProvider = scope.ServiceProvider;
using (var newDbContext = scopedProvider.GetRequiredService<ApplicationDbContext>())
{
foreach (var item in items)
{
var addedEntry = newDbContext.DbLogs.Add(item);
}
await newDbContext.SaveChangesAsync(cancellationToken);
// ...
}
}
}
catch
{
// don't throw exceptions from logger
}
}
[SuppressMessage("Microsoft.Usage", "CA1031:catch a more specific allowed exception type, or rethrow the exception",
Justification = "don't throw exceptions from logger")]
private void Stop()
{
_cancellationTokenSource.Cancel();
_messageQueue.CompleteAdding();
try
{
_outputTask.Wait(_interval);
}
catch
{
// don't throw exceptions from logger
}
}
public void Dispose()
{
Dispose(true);
GC.SuppressFinalize(this);
}
protected virtual void Dispose(bool disposing)
{
if (!_isDisposed)
{
try
{
if (disposing)
{
Stop();
_messageQueue.Dispose();
_cancellationTokenSource.Dispose();
}
}
finally
{
_isDisposed = true;
}
}
}
}
In the end, it is enough to register this custom log provider (DbLoggerProvider) in Startup.cs or Program.cs:
var serviceProvider = app.ApplicationServices.CreateScope().ServiceProvider;
loggerFactory.AddProvider(new DbLoggerProvider(serviceProvider));
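If the project uses the .NET 6+ minimal hosting model instead of a Startup class, an equivalent registration (a sketch under my assumptions; the connection-string name and the SQL Server provider are placeholders) is to add the provider to DI so the default logger factory picks it up:
    // Program.cs (minimal hosting) - sketch only, assuming the DbLoggerProvider above.
    var builder = WebApplication.CreateBuilder(args);
    // ApplicationDbContext must be registered so SaveLogItemsAsync can resolve it.
    builder.Services.AddDbContext<ApplicationDbContext>(
        o => o.UseSqlServer(builder.Configuration.GetConnectionString("Default")));
    builder.Services.AddSingleton<ILoggerProvider, DbLoggerProvider>();
    var app = builder.Build();
    app.Run();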
From now on, every time we call _logger.LogInformation(...), the log information will also be stored in the database.
Note: Because the number of calls to record logs in the database may be high, a concurrent queue is used to store logs.
If you like, you can refer to my repository that implements the same method.
In order to log the areas separately (scope/operation), you can create several different DBLoggers to store the information in different tables.
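If a single table is enough, a lighter alternative (my own sketch, not part of the approach above) is to tag every row with the operation it came from, using the ambient Activity started in each BackgroundService, and then filter per operation in the Blazor page:
    // Sketch: add an Operation column and fill it from Activity.Current.
    // Requires: using System.Diagnostics;
    public class DBLog
    {
        public int DBLogId { get; set; }
        public string? LogLevel { get; set; }
        public string? EventName { get; set; }
        public string? Message { get; set; }
        public string? StackTrace { get; set; }
        public string? Operation { get; set; }          // "OP1", "OP2", ...
        public DateTime CreatedDate { get; set; } = DateTime.Now;
    }

    // Inside DBLogger.Log<TState>(), when building the item:
    //     dblLogItem.Operation = Activity.Current?.OperationName;

    // Blazor side (hypothetical query), one table per operation:
    //     var op1Logs = await dbContext.DbLogs
    //         .Where(l => l.Operation == "OP1")
    //         .OrderBy(l => l.CreatedDate)
    //         .ToListAsync();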
I have an ASP.NET Core application that will serve as a RabbitMQ producer. I have read the tutorial and guides regarding the RabbitMQ .NET client, and I still do not know how to deal with channel lifetime and concurrent access.
From what I have read, I understood the following:
IConnection is thread-safe, but costly to create
IModel is not thread-safe, but lightweight
For the IConnection, I would initialize it in Startup and inject it as a singleton (service).
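Something like this is what I have in mind for the connection registration (just a sketch; the host name is a placeholder):
    // ConfigureServices - one shared, thread-safe connection for the whole app.
    services.AddSingleton<IConnection>(sp =>
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        return factory.CreateConnection();
    });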
However, I do not know how to deal with IModel management. Let's say I have a couple of services that use it; is it scalable to just do this:
Solution 1
public void Publish(IConnection connection)
{
using(IModel model=connection.CreateChannel())
{
model.BasicPublish(...);
}
}
Solution 2
From what i have read , i understood that its not really scalable.
So another solution would be to create a separate service which would contain a loop , a ConcurrentQueue, and all services would dispatch messages here.
This service would be the sole publisher to RabbitMQ
Publisher
public class Publisher
{
private CancellationTokenSource tcs=new CancellationTokenSource();
public BlockingCollection<byte[]> messages=new BlockingCollection<byte[]>();
private IModel channel;
private readonly string ExchangeName;
private Task loopTask;
public void Run()
{
this.loopTask=Task.Run(()=>Loop(tcs.Token),tcs.Token);
}
private void Loop(CancellationToken token)
{
while (true)
{
token.ThrowIfCancellationRequested();
// Take blocks until a message is available (or the token is cancelled).
byte[] data = this.messages.Take(token);
channel.BasicPublish(..., body: data);
}
}
public void Publish(byte[] message)
{
this.messages.Add(message);
}
}
Usage
public class SomeDIService
{
private Publisher publisher;
public SomeDIService(Publisher publisher)
{
this.publisher = publisher;
}
public void DoSomething(byte[] data)
{
//do something...
this.publisher.Publish(data);
}
}
I would prefer Solution 1, but I do not know the performance penalty, while I do not like Solution 2 since I wanted to just publish messages directly to RabbitMQ; now I have to deal with this long-running Task too.
Is there any other solution? Am I missing something? Is there a simpler way?
Update
I mentioned concurrent access. I meant I need a way to publish messages from multiple endpoints (services) to RabbitMQ.
Real scenario
public class Controller1:Controller
{
private SomeDIService service; //uses Publisher
[HttpGet]
public void Endpoint1()
{
this.service.DoSomething();
}
[HttpPost]
public void Endpoint2()
{
this.service.DoSomething();
}
}
public class Controller2:Controller
{
private SomeDIService service;
[HttpGet]
public void Endpoint3()
{
this.service.DoSomething();
}
[HttpPost]
public void Endpoint4()
{
this.service.DoSomething();
}
}
After searching for a long time, I found this solution and it works very well for me:
using Microsoft.Extensions.Options;
using RabbitMQ.Client;
using System;
using System.Collections.Generic;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;
namespace BSG.MessageBroker.RabbitMQ
{
public class Rabbit : IRabbit
{
private readonly EnvConfigModel EnvConfig;
private readonly string _hostname;
private readonly string _password;
private readonly string _exchangeName;
private readonly string _username;
private IConnection _connection;
private IModel _Model;
public Rabbit(IOptions<EnvConfigModel> appSettings)
{
EnvConfig = appSettings.Value;
_exchangeName = EnvConfig.Rabbit_ExchangeName;
_hostname = EnvConfig.Rabbit_Host;
_username = EnvConfig.Rabbit_Username;
_password = EnvConfig.Rabbit_Password;
CreateConnection();
_Model = _connection.CreateModel();
}
private void CreateConnection()
{
try
{
var factory = new ConnectionFactory
{
HostName = _hostname,
UserName = _username,
Password = _password,
AutomaticRecoveryEnabled = true,
TopologyRecoveryEnabled = true,
NetworkRecoveryInterval = TimeSpan.FromSeconds(3)
};
_connection = factory.CreateConnection();
}
catch (Exception ex)
{
Console.WriteLine($"Could not create connection: {ex.Message}");
}
}
private bool ConnectionExists()
{
if (_connection != null)
{
return true;
}
CreateConnection();
return _connection != null;
}
public bool PushToQueue(string Message)
{
try
{
if (ConnectionExists())
{
byte[] body = Encoding.UTF8.GetBytes(JsonSerializer.Serialize(Message));
_Model.BasicPublish(exchange: _exchangeName,
routingKey: "1001",
basicProperties: null,
body: body);
}
return true;
}
catch (Exception ex)
{
return false;
}
}
}
}
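For completeness, this is roughly how the class above could be wired up and consumed; the IRabbit interface comes from the code, but the configuration section name and the controller are my own placeholders. Note that, since IModel is not thread-safe (as the question points out), a singleton Rabbit shared by many concurrent requests may still need to serialize access to BasicPublish, for example with a lock inside PushToQueue.
    // Startup.ConfigureServices - sketch only.
    services.Configure<EnvConfigModel>(Configuration.GetSection("Rabbit"));
    services.AddSingleton<IRabbit, Rabbit>();

    // Any controller or service can then publish through the singleton:
    public class SomeController : Controller
    {
        private readonly IRabbit _rabbit;
        public SomeController(IRabbit rabbit) => _rabbit = rabbit;

        [HttpPost]
        public IActionResult Endpoint()
        {
            _rabbit.PushToQueue("some message");
            return Ok();
        }
    }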
I have two subscribers to the same event. One subscriber writes to the database and the other caches data in memory. The former takes much longer than the latter, and that's OK since the caching is more time-critical than writing to the database. Sometimes the DB writer gets behind and its queue starts to grow, and that's OK (as long as it eventually catches up). But it's unacceptable for the caching subscriber to get behind. It's OK for it to get ahead of the DB writer when the DB writer cannot keep up. I want the two queues to be processed as fast as possible, without the processing of one affecting the processing of the other.
But, what I see is that when the DB writer queue grows, the caching subscriber's queue grows. The two queues have the same number of pending items. They seem to be in lock step.
Profiling shows that the DB write takes about 500 times longer than the memory cache (which is not surprising). So, the caching subscriber could easily keep up, but it seems to be held back by the DB writer subscriber.
I'll apologize for the code up front. It would be easier to understand if not for the endpoint wrapper code.
This method creates the endpoint as an event publisher. (My wrapper code provides different endpoint types for different purposes; I find the NSB endpoint abstraction to be lacking):
public async Task Start()
{
if (_endpoint == null)
_endpoint = await EventPublisherEndpoint.Start(EndpointName);
}
Here's the EventPublisherEndpoint:
public sealed class EventPublisherEndpoint : Endpoint, IEventPublisherEndpoint
{
private EventPublisherEndpoint(IEndpointInstance nsbEndpoint, string name) : base(nsbEndpoint, name) { }
/// <summary>
/// Create and start endpoint.
/// </summary>
public static async Task<IEventPublisherEndpoint> Start(string endpointName)
{
var ep = await ConfigureEndpoint(new EndpointConfig(), endpointName);
return new EventPublisherEndpoint(ep, endpointName);
}
public async Task Publish(object message)
{
await NsbEndpoint.Publish(message);
}
}
Here's the general endpoint factory:
protected static async Task<IEndpointInstance> ConfigureEndpoint(
IEndpointConfig config,
string endpointName,
Action<EndpointConfiguration> configureEndpoint = null,
Action<TransportExtensions<MsmqTransport>> configureTransport = null,
Action<PersistenceExtensions<SqlPersistence>> configurePersistence = null)
{
if (config == null)
throw new ArgumentNullException(nameof(config));
if (endpointName == null)
throw new ArgumentNullException(nameof(endpointName));
var fullName = GetFullName(endpointName);
try
{
IEndpointInstance ep = null;
await Logger.OperationAsync("Start endpoint: " + endpointName, async () =>
{
// eliminate existing error queue to clear it; is re-created below
string errorEndpointName = $"{fullName}.error";
EliminateQueue(errorEndpointName);
var endpointConfiguration = new EndpointConfiguration(fullName);
endpointConfiguration.UseSerialization<JsonSerializer>();
endpointConfiguration.SendFailedMessagesTo(errorEndpointName);
endpointConfiguration.EnableInstallers();
endpointConfiguration.DoNotCreateQueues(); // created explicitly (below)
endpointConfiguration.DefineCriticalErrorAction(OnCriticalError);
endpointConfiguration.LimitMessageProcessingConcurrencyTo(config.MaxConcurrency);
configureEndpoint?.Invoke(endpointConfiguration);
// NOTE:
// Using TransportTransactionMode.None disables retries.
// Using default (whatever that value is) results in runtime error on endpoint
// config that distributed transactions are not enabled (whatever that means).
// Stumbled into SendsAtomicWithReceive which seems to work; no error and
// retries happen.
const TransportTransactionMode transactionMode = TransportTransactionMode.SendsAtomicWithReceive;
var transport = endpointConfiguration
.UseTransport<MsmqTransport>()
.Transactions(transactionMode);
{
var routing = transport.Routing();
var instanceMappingFile = routing.InstanceMappingFile();
instanceMappingFile.FilePath(NsbInstanceMappingFile.DefaultFilePath);
}
configureTransport?.Invoke(transport);
var persistence = endpointConfiguration.UsePersistence<SqlPersistence>();
persistence.SqlVariant(SqlVariant.MsSqlServer);
persistence.ConnectionBuilder(() => new SqlConnection(config.PersistConnectionString));
persistence.TablePrefix(fullName + ".");
configurePersistence?.Invoke(persistence);
var subscriptions = persistence.SubscriptionSettings();
subscriptions.CacheFor(SubscriptionCachePeriod);
EnsureQueuesForEndpoint(fullName);
ep = await NServiceBus.Endpoint.Start(endpointConfiguration);
});
return ep;
}
catch (Exception exception)
{
throw new ApplicationException($"Unable to start endpoint '{endpointName}' for connection [{config.PersistConnectionString}].", exception);
}
}
Here's the publish code (inside a logging call):
public async Task Publish(DeviceOutput message)
{
await Logger.OperationWithoutBeginLogOrThrowAsync(
() => $"Publishing {message.GetType().Name} message: {message.ToString().Truncate()}.",
async () => { await _endpoint.Publish(message); });
}
Here's the event class:
public sealed class Datagram : DeviceOutput, IEvent
{
public Datagram(long deviceID, DatagramType payloadType, string payload) : base(deviceID)
{
Payload = payload;
PayloadType = payloadType;
}
public string Payload { get; set; }
public DatagramType PayloadType { get; set; }
public override string ToString()
{
return base.ToString() + Environment.NewLine + $"PayloadType:{PayloadType} Payload:{Payload.Truncate()}";
}
}
Here's one of the subscriber endpoint config:
public async Task Start()
{
if (_endpoint == null)
{
_endpoint = await EventSubscriberEndpoint<Datagram>.Start(
endpointName: "datagram-store-subscriber",
publisherEndpointName: DatagramPublisher.EndpointName);
}
}
And the associated handler:
public async Task Handle(Datagram datagram, IMessageHandlerContext context)
{
await _storer.Store(datagram);
}
Here's the other subscriber endpoint config:
public async Task Start()
{
if (_endpoint == null)
{
_endpoint = await EventSubscriberEndpoint<Datagram>.Start(
endpointName: "datagram-live-subscriber",
publisherEndpointName: DatagramPublisher.EndpointName);
}
}
And associated handler:
public async Task Handle(Datagram datagram, IMessageHandlerContext context)
{
//var start = DateTime.Now;
if (IsLiveDataEnabled)
{
await Logger.OperationWithoutBeginLogAsync(
() => $"Saving datagram: {datagram}",
async () => { await WriteData(datagram); });
}
//Logger.Info($"{DateTime.Now - start}");
}
Here's EventSubscriberEndpoint:
public sealed class EventSubscriberEndpoint<T> : Endpoint, IEventSubscriberEndpoint where T : IEvent
{
private EventSubscriberEndpoint(IEndpointInstance nsbEndpoint, string name) : base(nsbEndpoint, name) {}
public async Task Unsubscribe()
{
await NsbEndpoint.Unsubscribe(typeof(T), new UnsubscribeOptions());
}
/// <summary>
/// Create and start endpoint.
/// </summary>
public static async Task<IEventSubscriberEndpoint> Start(
string endpointName,
string publisherEndpointName,
Type[] excludeSubscriberTypes = null)
{
return await Start(endpointName, publisherEndpointName, typeof(T), excludeSubscriberTypes);
}
/// <summary>
/// Non-generic version of Start.
/// </summary>
private static async Task<IEventSubscriberEndpoint> Start(
string endpointName,
string publisherEndpointName,
Type messageType,
Type[] excludeSubscriberTypes = null)
{
return await Start(new EndpointConfig(), endpointName, publisherEndpointName, messageType, excludeSubscriberTypes);
}
private static async Task<IEventSubscriberEndpoint> Start(
IEndpointConfig config,
string endpointName,
string publisherEndpointName,
Type messageType,
Type[] excludeSubscriberTypes = null)
{
if (publisherEndpointName == null)
throw new ArgumentNullException(nameof(publisherEndpointName));
if (messageType == null)
throw new ArgumentNullException(nameof(messageType));
var ep = await ConfigureEndpoint(config,
endpointName,
endpointConfiguration =>
{
if (excludeSubscriberTypes != null)
endpointConfiguration.AssemblyScanner().ExcludeTypes(excludeSubscriberTypes);
}, transport =>
{
transport.Routing().RegisterPublisher(messageType, GetFullName(publisherEndpointName));
});
return new EventSubscriberEndpoint<T>(ep, endpointName);
}
}
Is there special config that allows queues to be processed independently of each other?
NServiceBus 6.2.1
Working on building a SignalR hub, I'm able to get data from the hub to the client, but I'm not sure how to push it every 1 second.
I'm not sure where to set the timer: in the controller where the getApps method exists, or in the hub?
Hub:
public class nphub : Hub
{
public readonly sbController _sbcontroller;
public nphub(sbController sbcontroller)
{
_sbcontroller = sbcontroller;
}
public async Task NotifyConnection()
{
IActionResult result = await _sbcontroller.getApps();
await Clients.All.SendAsync("TestBrodcasting", result);
}
}
In Controller:
public async Task<IActionResult> getApps()
{
// var request = new HttpRequestMessage(HttpMethod.Get, "apps");
// var response = await _client_NP.SendAsync(request);
// var json = await response.Content.ReadAsStringAsync();
return Ok($"Testing a Basic HUB at {DateTime.Now.ToLocalTime()}");
}
Client:
let connection = new signalR.HubConnectionBuilder()
.withUrl("/nphub").build();
connection.start().then(function () {
TestConnection();
}).catch(function (err) {
return console.error(err.toString());
});
function TestConnection() {
connection.invoke("NotifyConnection").catch(function (err) {
return console.error(err.toString());
});
}
connection.on("TestBrodcasting", function (time) {
document.getElementById('broadcastDiv').innerHTML = time.value;
document.getElementById('broadcastDiv').style.display = "block";
});
Just for test purposes, to see real-time changes, I'm trying to return the time. I'm able to see the time on the client, but it's not changing.
You need to use a hosted service, as described in the docs. Add a class like:
internal class SignalRTimedHostedService : IHostedService, IDisposable
{
private readonly IHubContext<nphub> _hub;
private readonly ILogger _logger;
private Timer _timer;
public SignalRTimedHostedService(IHubContext<nphub> hub, ILogger<SignalRTimedHostedService> logger)
{
_hub = hub;
_logger = logger;
}
public Task StartAsync(CancellationToken cancellationToken)
{
_logger.LogInformation("Timed Background Service is starting.");
_timer = new Timer(DoWork, null, TimeSpan.Zero,
TimeSpan.FromSeconds(1));
return Task.CompletedTask;
}
private void DoWork(object state)
{
_logger.LogInformation("Timed Background Service is working.");
// send message using _hub
}
public Task StopAsync(CancellationToken cancellationToken)
{
_logger.LogInformation("Timed Background Service is stopping.");
_timer?.Change(Timeout.Infinite, 0);
return Task.CompletedTask;
}
public void Dispose()
{
_timer?.Dispose();
}
}
Note: A hosted service lives in singleton scope. You can inject IHubContext<T> directly, though, because it too is in singleton scope.
Then in ConfigureServices:
services.AddHostedService<SignalRTimedHostedService>();
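The DoWork body is left as a comment above; a minimal version of the send (assuming the client keeps listening on the existing "TestBrodcasting" method and reading time.value, as in the question) could look like this:
    private void DoWork(object state)
    {
        _logger.LogInformation("Timed Background Service is working.");
        // Fire-and-forget push of the current time to every connected client.
        // The anonymous object gives the client handler its `time.value` property.
        _ = _hub.Clients.All.SendAsync(
            "TestBrodcasting",
            new { value = $"Testing a Basic HUB at {DateTime.Now.ToLocalTime()}" });
    }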
My code at the moment looks like this:
Server side:
#region IClientCallback interface
interface IClientCallback
{
[OperationContract(IsOneWay = true)]
void ReceiveWcfElement(WcfElement wcfElement);
}
#endregion
#region IService interface
[ServiceContract(SessionMode = SessionMode.Required, CallbackContract = typeof(IClientCallback))]
interface IService
{
[OperationContract(IsOneWay = true, IsInitiating = false, IsTerminating = false)]
void ReadyToReceive(string userName, int source, string ostatniTypWiadomosci);
[OperationContract(IsOneWay = false, IsInitiating = false, IsTerminating = false)]
bool SendWcfElement(WcfElement wcfElement);
[OperationContract(IsOneWay = false, IsInitiating = true, IsTerminating = false)]
List<int> Login(Client name, string password, bool isAuto, bool isSuperMode);
}
#endregion
#region Public enums/event args
public delegate void WcfElementsReceivedFromClientEventHandler(object sender, WcfElementsReceivedFromClientEventArgs e);
public class WcfElementsReceivedFromClientEventArgs : EventArgs
{
public string UserName;
}
public class ServiceEventArgs : EventArgs
{
public WcfElement WcfElement;
public Client Person;
}
#endregion
#region Service
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)]
public class Service : IService
{
#region Instance fields
//thread sync lock object
private static readonly Object SyncObj = new Object();
//callback interface for clients
IClientCallback _callback;
//delegate used for BroadcastEvent
public delegate void ChatEventHandler(object sender, ServiceEventArgs e);
public static event ChatEventHandler ChatEvent;
private ChatEventHandler _myEventHandler;
//holds a list of clients, and a delegate to allow the BroadcastEvent to work
//out which chatter delegate to invoke
static readonly Dictionary<Client, ChatEventHandler> Clients = new Dictionary<Client, ChatEventHandler>();
//current person
private Client _client;
#endregion
#region Helpers
private bool CheckIfPersonExists(string name)
{
return Clients.Keys.Any(p => p.UserName.Equals(name, StringComparison.OrdinalIgnoreCase));
}
private ChatEventHandler getPersonHandler(string name)
{
foreach (var c in Clients.Keys.Where(c => c.UserName.Equals(name, StringComparison.OrdinalIgnoreCase)))
{
ChatEventHandler chatTo;
Clients.TryGetValue(c, out chatTo);
return chatTo;
}
return null;
}
private Client GetPerson(string name)
{
return Clients.Keys.FirstOrDefault(c => c.UserName.Equals(name, StringComparison.OrdinalIgnoreCase));
}
#endregion
#region IService implementation
public List<int> Login(Client client, string password, bool isAuto, bool isSuperMode)
{
if (client.ElementsVersions == null)
{
client.ElementsVersions = new WcfElement(WcfElement.RodzajWiadomosci.VersionControl, client.UserName);
}
//create a new ChatEventHandler delegate, pointing to the MyEventHandler() method
_myEventHandler = MyEventHandler;
lock (SyncObj)
{
if (!CheckIfPersonExists(client.UserName))
{
_client = client;
Clients.Add(client, _myEventHandler);
}
else
{
_client = client;
foreach (var c in Clients.Keys.Where(c => c.UserName.Equals(client.UserName)))
{
ChatEvent -= Clients[c];
Clients.Remove(c);
break;
}
Clients[client] = _myEventHandler;
}
_client.LockObj = new object();
}
_callback = OperationContext.Current.GetCallbackChannel<IClientCallback>();
ChatEvent += _myEventHandler;
var rValue = isAuto ? bazaDanych.Login(client.UserName, isSuperMode) : bazaDanych.Login(client.UserName, password);
return rValue;
}
public void PerformDataSync(Client c)
{
WcfElement wcfDelete = null;
WcfElement wcfUpdate = null;
//...
//this method prepares elements for the client
//when done, it adds them to the client's queue (List<WcfElement>)
try
{
var counter = 0;
if (wcfDelete != null)
{
foreach (var wcf in WcfElement.SplitWcfElement(wcfDelete, false))//split message into small ones
{
c.AddElementToQueue(wcf, counter++);
}
}
if (wcfUpdate != null)
{
foreach (var wcf in WcfElement.SplitWcfElement(wcfUpdate, true))
{
c.AddElementToQueue(wcf, counter++);
}
}
SendMessageToGui(string.Format("Wstępna synchronizacja użytkownika {0} zakończona.", c.UserName));
c.IsSynchronized = true;
}
catch (Exception e)
{
}
}
private void SendMessageToClient(object sender, EventArgs e)
{
var c = (Client) sender;
if (c.IsReceiving || c.IsSending)
{
return;
}
c.IsReceiving = true;
var wcfElement = c.GetFirstElementFromQueue();
if (wcfElement == null)
{
c.IsReceiving = false;
return;
}
Clients[c].Invoke(this, new ServiceEventArgs { Person = c, WcfElement = wcfElement });
}
public void ReadyToReceive(string userName)
{
var c = GetPerson(userName);
c.IsSending = false;
c.IsReceiving = false;
if (c.IsSynchronized)
{
SendMessageToClient(c, null);
}
else
{
PerformDataSync(c);
}
}
public bool SendWcfElement(WcfElement wcfElement)
{
var cl = GetPerson(wcfElement.UserName);
cl.IsSending = true;
if (wcfElement.WcfElementVersion != bazaDanych.WcfElementVersion) return false;
//this method processes messages and, if needed, creates WcfElements which are added to every client's queue
return ifSuccess;
}
#endregion
#region private methods
private void MyEventHandler(object sender, ServiceEventArgs e)
{
try
{
_callback.ReceiveWcfElement(e.WcfElement);
}
catch (Exception ex)
{
}
}
#endregion
}
#endregion
Now the client side:
#region Client class
[DataContract]
public class Client
{
#region Instance Fields
/// <summary>
/// The UserName
/// </summary>
[DataMember]
public string UserName { get; set; }
[DataMember]
public WcfElement ElementsVersions { get; set; }
private bool _isSynchronized;
public bool IsSynchronized
{
get { return _isSynchronized; }
set
{
_isSynchronized = value;
}
}
public bool IsSending { get; set; }
public bool IsReceiving { get; set; }
private List<WcfElement> ElementsQueue { get; set; }
public object LockObj { get; set; }
public void AddElementToQueue(WcfElement wcfElement, int position = -1)
{
try
{
lock (LockObj)
{
if (ElementsQueue == null) ElementsQueue = new List<WcfElement>();
if (position != -1 && position <= ElementsQueue.Count)
{
try
{
ElementsQueue.Insert(position, wcfElement);
}
catch (Exception e)
{
}
}
else
{
try
{
//adds at the end
ElementsQueue.Add(wcfElement);
}
catch (Exception e)
{
}
}
}
}
catch (Exception e)
{
}
}
public WcfElement GetFirstElementFromQueue()
{
if (ElementsQueue == null) return null;
if (ElementsQueue.Count > 0)
{
var tmp = ElementsQueue[0];
ElementsQueue.RemoveAt(0);
return tmp;
}
return null;
}
#endregion
#region Ctors
/// <summary>
/// Assign constructor
/// </summary>
/// <param name="userName">The userName to use for this client</param>
public Client(string userName)
{
UserName = userName;
}
#endregion
}
#endregion
ProxySingleton:
[CallbackBehavior(ConcurrencyMode = ConcurrencyMode.Reentrant, UseSynchronizationContext = false)]
public sealed class ProxySingleton : IClientCallback
{
#region Instance Fields
private static ProxySingleton _singleton;
public static bool IsConnected;
private static readonly object SingletonLock = new object();
private ServiceProxy _proxy;
private Client _myPerson;
private delegate void HandleDelegate(Client[] list);
private delegate void HandleErrorDelegate();
//main proxy event
public delegate void ProxyEventHandler(object sender, ProxyEventArgs e);
public static event ProxyEventHandler ProxyEvent;
//callback proxy event
public delegate void ProxyCallBackEventHandler(object sender, ProxyCallBackEventArgs e);
public static event ProxyCallBackEventHandler ProxyCallBackEvent;
#endregion
#region Ctor
/// <summary>
/// Blank constructor
/// </summary>
private ProxySingleton()
{
}
#endregion
#region Public Methods
#region IClientCallback implementation
public void ReceiveWcfElement(WcfElement wcfElement)
{
//process received data
//...
ReadyToReceive();
}
#endregion
public void ReadyToReceive()
{
try
{
if (bazaDanych.Dane.Client.IsSending) return;
var w = bazaDanych.Dane.Client.GetFirstElementFromQueue();
if (w != null)
{
SendWcfElement(w);
return;
}
_proxy.ReadyToReceive(bazaDanych.Dane.Client.UserName, source, ostatniTypWiadomosci);
}
catch (Exception)
{
IsConnected = false;
}
}
public static WcfElement CurrentWcfElement;
public bool SendWcfElement(WcfElement wcfElement)
{
if (bazaDanych.Dane.Client.IsReceiving)
{
bazaDanych.Dane.Client.AddElementToQueue(wcfElement);
return true;
}
bazaDanych.Dane.Client.IsSending = true;
foreach (var wcfElementSplited in WcfElement.SplitWcfElement(wcfElement, true))
{
CurrentWcfElement = wcfElementSplited;
try
{
var r = _proxy.SendWcfElement(wcfElementSplited);
CurrentWcfElement = null;
}
catch (Exception e)
{
IsConnected = false;
return false;
}
}
bazaDanych.Dane.Client.IsSending = false;
ReadyToReceive();
return true;
}
public void ListenForConnectOrReconnect(EventArgs e)
{
SendWcfElement(WcfElement.GetVersionElement());//send wcfelement for perform PerformDataSync
ReadyToReceive();
}
public static bool IsReconnecting;
public bool ConnectOrReconnect(bool shouldRaiseEvent = true)
{
if (IsReconnecting)
{
return IsConnected;
}
if (IsConnected) return true;
IsReconnecting = true;
bazaDanych.Dane.Client.IsReceiving = false;
bazaDanych.Dane.Client.IsSending = false;
bazaDanych.Dane.Client.IsSynchronized = false;
try
{
var site = new InstanceContext(this);
_proxy = new ServiceProxy(site);
var list = _proxy.Login(bazaDanych.Dane.Client, bazaDanych.Dane.UserPassword, bazaDanych.Dane.UserIsAuto, bazaDanych.Dane.UserIsSuperMode);
bazaDanych.Dane.UserRights.Clear();
bazaDanych.Dane.UserRights.AddRange(list);
IsConnected = true;
if (shouldRaiseEvent) ConnectOrReconnectEvent(null);
}
catch (Exception e)
{
IsConnected = false;
}
IsReconnecting = false;
return IsConnected;
}
}
#endregion
At the moment my app works like this:
After a successful login, every client sends WcfElements (which contain a bunch of lists with IDs and versions of elements). Then it sends a ReadyToReceive one-way message, which after login fires the PerformDataSync method. That method prepares data for the client and sends the first element using the one-way receive method. If there is more than one WcfElement to send, only the last one is marked as last. The client responds with ReadyToReceive after every successful receive from the server. Everything up to this point works quite well. The problem starts later: mostly, packets get lost (in ReceiveWcfElement). The server has marked that the client is receiving (and maybe processing) a message and is waiting for a ReadyToReceive packet, which will never be sent because of the lost element.
I've made it like this because, as far as I know, the client can't send and receive at the same time. I've tried it and got this problem:
If the client sent a WcfElement with the SendWcfElement method, and the server, while processing this element, created another element that was supposed to be sent back to the client, then the client would end up with a faulted proxy if the callback was sent before SendWcfElement returned true indicating that the method had completed.
Now I wonder: is it possible for the client to send and receive at the same time using two-way methods?
I ended up with two services (two connections): one for the connection from the client to the server, and another, with a callback, that handles the connection from the server to the client.
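A rough outline of that split (my own sketch; the contract names and members are placeholders, not the code of the final solution):
    // Contract 1: plain request/reply, client -> server only. No callback, so an
    // outgoing SendWcfElement can never collide with an incoming callback.
    [ServiceContract]
    public interface IUploadService
    {
        [OperationContract]
        bool SendWcfElement(WcfElement wcfElement);
    }

    // Contract 2: duplex, used only for server -> client pushes driven by ReadyToReceive.
    [ServiceContract(SessionMode = SessionMode.Required,
                     CallbackContract = typeof(IClientCallback))]
    public interface IDownloadService
    {
        [OperationContract(IsOneWay = true)]
        void ReadyToReceive(string userName);
    }

    // The client then opens two proxies: a ChannelFactory<IUploadService> for sending,
    // and a DuplexChannelFactory<IDownloadService> (created with the IClientCallback
    // instance) for receiving, so sends and callbacks no longer share one channel.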