I'm looking into using a JavaScript logging framework in my app.
I quite like the look of log4javascript (http://log4javascript.org/) but I have one requirement which I'm not sure that it satisfies.
I need to be able to ask the framework for all messages which have been logged.
Perhaps I could use an invisible InPageAppender (http://log4javascript.org/docs/manual.html#appenders) to log to a DOM element, then scrape out the messages from that DOM element - but that seems pretty heavy.
Perhaps I need to write my own "InMemoryAppender"?
There's an ArrayAppender used in log4javascript's unit tests that stores all the log messages it receives in an array, accessible via its logMessages property. Hopefully it will show up in the main distribution in the next version. Here's a standalone implementation:
var ArrayAppender = function(layout) {
    if (layout) {
        this.setLayout(layout);
    }
    this.logMessages = [];
};

ArrayAppender.prototype = new log4javascript.Appender();
ArrayAppender.prototype.layout = new log4javascript.NullLayout();

ArrayAppender.prototype.append = function(loggingEvent) {
    var formattedMessage = this.getLayout().format(loggingEvent);
    if (this.getLayout().ignoresThrowable()) {
        formattedMessage += loggingEvent.getThrowableStrRep();
    }
    this.logMessages.push(formattedMessage);
};

ArrayAppender.prototype.toString = function() {
    return "[ArrayAppender]";
};
Example use:
var log = log4javascript.getLogger("main");
var appender = new ArrayAppender();
log.addAppender(appender);
log.debug("A message");
alert(appender.logMessages);
When I add bot.hears(...), it registers middleware for handling matching text messages. But now it will handle those messages whenever they are sent, even when they are not expected.
Since I am creating a stateful service, I would like to listen for particular messages only at the appropriate time.
How can I unregister middleware so that it no longer hears previously handled messages?
It turned out I was looking for Scenes. How to use them is described on GitHub.
I'll just post slightly modified code from the links above:
const { Telegraf, Scenes, session, Markup } = require('telegraf')

const contactDataWizard = new Scenes.WizardScene(
    'CONTACT_DATA_WIZARD_SCENE_ID', // first argument is the Scene ID, same as for BaseScene
    (ctx) => {
        ctx.reply('Please enter guest\'s first name', Markup.removeKeyboard());
        ctx.wizard.state.contactData = {};
        return ctx.wizard.next();
    },
    (ctx) => {
        // validation example
        if (ctx.message.text.length < 2) {
            ctx.reply('Please enter a real name');
            return;
        }
        ctx.wizard.state.contactData.firstName = ctx.message.text;
        ctx.reply('And last name...');
        return ctx.wizard.next();
    },
    (ctx) => {
        ctx.wizard.state.contactData.lastName = ctx.message.text;
        // do something with ctx.wizard.state.contactData here, then finish the wizard
        return ctx.scene.leave();
    },
);

const stage = new Scenes.Stage();
stage.register(contactDataWizard);

bot.use(session());
bot.use(stage.middleware());
But I still don't know how to implement this in general, so I'll need to figure it out from the Scenes code of Telegraf.
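For reference, here is roughly how I would expect the wizard to be wired up and entered; the /addguest command name is just an example I made up, not something from the docs snippet above:

// Enter the wizard on demand; its step handlers only run while the scene is
// active, which addresses the original "hears fires at any time" problem.
bot.command('addguest', (ctx) => ctx.scene.enter('CONTACT_DATA_WIZARD_SCENE_ID'));

bot.launch();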
I want to invoke DetectIntent programmatically.
I am using the Google.Cloud.Dialogflow.V2 client library.
using Google.Apis.Auth.OAuth2;
using Google.Cloud.Dialogflow.V2;
using Grpc.Auth;
using Grpc.Core;

var query = new QueryInput
{
    Text = new TextInput
    {
        Text = text,
        LanguageCode = "en-us"
    }
};

var sessionId = "1234567890";
var agent = "myAgentName";
var creds = GoogleCredential.FromFile("JSONFileName");

Channel channel = new Channel(
    SessionsClient.DefaultEndpoint.Host,
    SessionsClient.DefaultEndpoint.Port,
    creds.ToChannelCredentials());
var client = SessionsClient.Create(channel);

DetectIntentRequest request = new DetectIntentRequest
{
    SessionAsSessionName = new SessionName("smartresort-facebook-bot-fgvjh", "1111"),
    QueryInput = query,
};
DetectIntentResponse response = client.DetectIntent(request);
With the above code I am getting an error. I am already using the same JSON key file in Node.js code, where detect intent works fine; I am trying to do the same in .NET Core.
After this I tried another code snippet:
var client = SessionsClient.Create();
var response = client.DetectIntent(
    session: new SessionName("smartresort-facebook-bot-fgvjh", "1234567890"),
    queryInput: new QueryInput()
    {
        Text = new TextInput()
        {
            Text = text,
            LanguageCode = "en-us"
        }
    }
);
I am not trying to write a fulfillment that would be called after the intent is detected. I am trying to write the code that runs before the intent is detected: I want to call detect intent and then process the response based on which intent was detected.
Check my answer here; it is an example of how to integrate Dialogflow with .NET Core based on a working sample. If you still have more questions, let me know and I will be happy to help!
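In case that link goes stale, here is a minimal sketch of the same idea with a current Google.Cloud.Dialogflow.V2 package; the project ID, session ID, key file path and query text are placeholders, not values from the question:

using System;
using Google.Cloud.Dialogflow.V2;

// Build a client from an explicit service-account key instead of relying on
// the GOOGLE_APPLICATION_CREDENTIALS environment variable being set.
var client = new SessionsClientBuilder
{
    CredentialsPath = "path/to/service-account-key.json"
}.Build();

var response = client.DetectIntent(
    session: new SessionName("my-project-id", "my-session-id"),
    queryInput: new QueryInput
    {
        Text = new TextInput { Text = "hello", LanguageCode = "en-us" }
    });

// Branch on whichever intent Dialogflow matched.
Console.WriteLine(response.QueryResult.Intent.DisplayName);
Console.WriteLine(response.QueryResult.FulfillmentText);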
For one call, I am replying with a huge JSON object, which sometimes causes the Node event loop to become blocked. So I'm using the Big Friendly JSON package to stream the JSON instead. My issue is that I cannot figure out how to actually reply with the stream.
My original code was simply
let searchResults = s3Access.getSavedSearch(guid).Body;
searchResults = JSON.parse(searchResults.toString());
return reply(searchResults);
This works great but bogs down on huge payloads.
I've tried things like using the Big Friendly JSON package (https://gitlab.com/philbooth/bfj):
const stream = bfj.streamify(searchResults);
return reply(stream); // according to docs it's a readable stream
But then my browser complained about an empty response. I then tried adding the below to the reply, with the same result.
.header('content-encoding', 'json')
.header('Content-Length', stream.length);
I also tried return reply(null, stream); but that produced a ton of Node errors.
Is there some other way I need to organize this? My understanding was that I could just reply with a readable stream and hapi would take care of it, but the response keeps showing up as empty.
Did you try to use h.response? Here h is the reply toolkit.
Example:
handler: async (request, h) => {
    const { limit, sortBy, order } = request.query;
    const queryString = {
        where: { status: 1 },
        limit,
        order: [[sortBy, order]],
    };

    let userList = {};
    try {
        userList = await _getList(User, queryString);
    } catch (e) {
        // throw new Boom(e);
        throw Boom.badRequest(i18n.__('controllers.user.fetchUser'), e);
    }
    return h.response(userList);
}
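Applied to the original bfj case, here is a rough sketch in hapi v17+ style; s3Access and guid come from the question, and I'm assuming bfj.streamify returns a readable stream as its docs describe:

const bfj = require('bfj');

handler: async (request, h) => {
    const searchResults = JSON.parse(s3Access.getSavedSearch(guid).Body.toString());
    // bfj.streamify serializes incrementally, so one huge JSON.stringify doesn't block the event loop
    const stream = bfj.streamify(searchResults);
    return h.response(stream).type('application/json');
}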
I want to get call details from the Genesys Platform SIP Server.
Genesys Platform has a Platform SDK for .NET.
Does anybody have a SIMPLE code sample that shows how to get call details from SIP Server using the Platform SDK for .NET [C#]?
Extra notes:
Call details: I especially want to get the AgentId for a given call.
From SIP Server: I am not sure if SIP Server is the best candidate to take call details from, so I am open to other suggestions/alternatives.
You can build a class that monitors DN actions. You can watch a specific DN or all DNs, depending on what you need to do. If it's all about the call, this is the best way to do it.
First, you must define a TServerProtocol, then connect using the host, port and client info.
var endpoint = new Endpoint(host, port, config);
//Endpoint backupEndpoint = new Endpoint("", 0, config);
protocol = new TServerProtocol(endpoint)
{
    ClientName = clientName
};

// Sync way:
protocol.Open();
// Async way:
protocol.BeginOpen();
I always use the async way; I have my reasons :) You can detect when the connection opens using the events provided by the SDK.
protocol.Opened += new EventHandler(OnProtocolOpened);
protocol.Closed += new EventHandler(OnProtocolClosed);
protocol.Received += new EventHandler(OnMessageReceived);
protocol.Error += new EventHandler(OnProtocolError);
The OnMessageReceived event is where the magic happens: you can track all of your call events and DN actions there. If you go to the Genesys support site, you will find an SDK reference manual; it is quite easy to follow and has a lot of information about the events and their usage.
In your case you want the agent ID for a call, so you need EventEstablished. You can use it in your receive event handler:
var message = ((MessageEventArgs)e).Message;
// your event-handling code goes here
switch (message.Id)
{
    case EventEstablished.MessageId:
        var eventEstablished = message as EventEstablished;
        var AgentID = eventEstablished.AgentID;
        break;
}
You can do a lot with this approach: dialing, holds, inbound and outbound calls; you can even detect internal calls and do reporting that the Genesys platform doesn't.
I hope this is clear enough.
If you have access to the routing strategy and can edit it, you can add some code to the strategy to send the details you need to a web server (for example) or to a DB. We do that kind of thing in our strategy: after the successful routing block, the strategy (as post-routing) sends the values of RTargetPlaceSelected and RTargetAgentSelected.
Try this:
Genesyslab.Platform.Contacts.Protocols.ContactServer.Requests.JirayuGetInteractionContent
JirayuGetInteractionContent =
Genesyslab.Platform.Contacts.Protocols.ContactServer.Requests.JirayuGetInteractionContent.Create();
JirayuGetInteractionContent.InteractionId = "004N4aEB63TK000P";
Genesyslab.Platform.Commons.Protocols.IMessage respondingEventY =
contactserverProtocol.Request(JirayuGetInteractionContent);
Genesyslab.Platform.Commons.Collections.KeyValueCollection keyValueCollection =
((Genesyslab.Platform.Contacts.Protocols.ContactServer.Events.EventGetInteractionContent)respondingEventY).InteractionAttributes.AllAttributes;
We are getting the AgentID and Place as follows.
Step 1: Create a custom command class and add a chain of command in the ExtensionSampleModule class, as follows:
class LogOnCommand : IElementOfCommand
{
    readonly IObjectContainer container;
    ILogger log;
    ICommandManager commandManager;

    public bool Execute(IDictionary<string, object> parameters, IProgressUpdater progress)
    {
        if (Application.Current.Dispatcher != null && !Application.Current.Dispatcher.CheckAccess())
        {
            object result = Application.Current.Dispatcher.Invoke(DispatcherPriority.Send,
                new ExecuteDelegate(Execute), parameters, progress);
            return (bool)result;
        }
        else
        {
            // Get the parameters
            IAgent agent = parameters["EnterpriseAgent"] as IAgent;
            IIdentity workMode = parameters["WorkMode"] as IIdentity;

            IAgent agentManager = container.Resolve<IAgent>();
            Genesyslab.Desktop.Modules.Core.Model.Agents.IPlace place = agentManager.Place;
            if (place != null)
            {
                string Place = place.PlaceName;
            }
            else
                log.Debug("Place object is null");

            CfgPerson person = agentManager.ConfPerson;
            if (person != null)
            {
                string AgentID = person.UserName;
                log.DebugFormat("AgentID: {0}", AgentID);
            }
            else
                log.Debug("AgentID object is null");

            return false; // returning false lets the rest of the command chain continue
        }
    }
}
// In ExtensionSampleModule
readonly ICommandManager commandManager;

commandManager.InsertCommandToChainOfCommandAfter("MediaVoiceLogOn", "LogOn",
    new List<CommandActivator>() { new CommandActivator()
        { CommandType = typeof(LogOnCommand), Name = "OnEventLogOn" } });
IInteractionVoice interaction = (IInteractionVoice)e.Value;
switch (interaction.EntrepriseLastInteractionEvent.Id)
{
    case EventEstablished.MessageId:
        var eventEstablished = interaction.EntrepriseLastInteractionEvent as EventEstablished;
        var genesysCallUuid = eventEstablished.CallUuid;
        var genesysAgentid = eventEstablished.AgentID;
        // ...
        break;
}
RavenDB throws InvalidOperationException when IsOperationAllowedOnDocument is called using embedded mode.
I can see in the IsOperationAllowedOnDocument implementation a clause checking for calls in embedded mode.
namespace Raven.Client.Authorization
{
    public static class AuthorizationClientExtensions
    {
        public static OperationAllowedResult[] IsOperationAllowedOnDocument(this ISyncAdvancedSessionOperation session, string userId, string operation, params string[] documentIds)
        {
            var serverClient = session.DatabaseCommands as ServerClient;
            if (serverClient == null)
                throw new InvalidOperationException("Cannot get whatever operation is allowed on document in embedded mode.");
Is there a workaround for this other than not using embedded mode?
Thanks for your time.
I encountered the same situation while writing some unit tests. The solution James provided worked; however, it resulted in having one code path for the unit test and another path for the production code, which defeated the purpose of the unit test. We were able to create a second document store and connect it to the first document store which allowed us to then access the authorization extension methods successfully. While this solution would probably not be good for production code (because creating Document Stores is expensive) it works nicely for unit tests. Here is a code sample:
using (var documentStore = new EmbeddableDocumentStore
{
    RunInMemory = true,
    UseEmbeddedHttpServer = true,
    Configuration = { Port = EmbeddedModePort }
})
{
    documentStore.Initialize();
    var url = documentStore.Configuration.ServerUrl;
    using (var docStoreHttp = new DocumentStore { Url = url })
    {
        docStoreHttp.Initialize();
        using (var session = docStoreHttp.OpenSession())
        {
            // now you can run code like:
            // session.GetAuthorizationFor(),
            // session.SetAuthorizationFor(),
            // session.Advanced.IsOperationAllowedOnDocument(),
            // etc...
        }
    }
}
There are a couple of other items worth mentioning:
The first document store needs to be run with UseEmbeddedHttpServer set to true so that the second one can access it.
I created a constant for the port so it would be used consistently and to ensure a non-reserved port is used.
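For example, something along these lines (the name and value are placeholders; any free, non-reserved port works):

// Single source of truth for the embedded server's port, used in the store configuration above.
private const int EmbeddedModePort = 8079;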
I encountered this as well. Looking at the source, there's no way to do that operation as written. I'm not sure if there's some intrinsic reason why, since I could easily replicate the functionality in my app by making an HTTP request directly for the same info:
HttpClient http = new HttpClient();
http.BaseAddress = new Uri("http://localhost:8080");

var url = new StringBuilder("/authorization/IsAllowed/")
    .Append(Uri.EscapeUriString(userid))
    .Append("?operation=")
    .Append(Uri.EscapeUriString(operation))
    .Append("&id=").Append(Uri.EscapeUriString(entityid));

http.GetStringAsync(url.ToString()).ContinueWith((response) =>
{
    var results = _session.Advanced.DocumentStore.Conventions.CreateSerializer()
        .Deserialize<OperationAllowedResult[]>(
            new RavenJTokenReader(RavenJToken.Parse(response.Result)));
}).Wait();