Is it possible to force FirebaseCrashlytics to print log messages to the console? Before Google bought Crashlytics this was possible using the Fabric API, but now those methods seem to have been removed.
Is there any way to do this with the Android SDK?
Short Answer
IT IS IMPOSSIBLE (USING FIREBASECRASHLYTICS SDK)
Complete Answer
It is a shame that, before Google bought Crashlytics, watching the log messages in the development console was an easy task. Those methods have since been removed.
The whole problem is: if I'm in the development environment and want to follow the code execution (by watching the log messages), Crashlytics won't show them. I would need to intentionally cause a crash, wait for it to be uploaded to the dashboard, and then start hunting for my entries among possibly thousands of others - which makes no sense.
I filed a bug report with Firebase:
https://github.com/firebase/firebase-android-sdk/issues/3005
For those who don't want to wait for Google to fix this, there is a workaround:
FirebaseApp.initializeApp(this);
if (BuildConfig.DEBUG) {
    try {
        // Reach into the SDK internals via reflection (field names are version-specific).
        Field f = FirebaseCrashlytics.class.getDeclaredField("core");
        f.setAccessible(true);
        CrashlyticsCore core = (CrashlyticsCore) f.get(FirebaseCrashlytics.getInstance());

        f = CrashlyticsCore.class.getDeclaredField("controller");
        f.setAccessible(true);
        Object controller = f.get(core);

        f = controller.getClass().getDeclaredField("logFileManager");
        f.setAccessible(true);
        f.set(controller, new LogFileManager(null, null) {
            @Override
            public void writeToLog(long timestamp, String msg) {
                super.writeToLog(timestamp, msg);
                // Echo every Crashlytics log call to the console.
                System.out.println(msg);
            }
        });
        FirebaseCrashlytics.getInstance().log("test");
    } catch (Exception e) {
        // Reflection failed (e.g. the field names changed in a newer SDK version).
    }
}
The code above replaces the field that is supposed to write the log messages to a file (and actually does nothing) with a new class that does everything the previous one did (nothing), but additionally prints every logged message on the fly.
ATTENTION
I've tested this on firebase-analytics:19.0.1, and it will only work on versions of the library with the same field names.
IT WON'T WORK IN AN OBFUSCATED BUILD: if you obfuscate the code in DEBUG mode, the reflection will break (unless you add the proper ProGuard rules; a sketch follows below).
If this topic reaches Google engineers, it is very likely they will remove or obfuscate these internals in future versions.
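For reference, ProGuard rules along these lines (a sketch - the wildcard is deliberately broad because the internal package names vary between SDK versions) should stop the reflected classes and fields from being renamed in a minified debug build:
# Keep the Crashlytics classes and fields accessed via reflection above.
# (Deliberately broad - the internal package names vary between SDK versions.)
-keep class com.google.firebase.crashlytics.** { *; }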
I have created numerous VSTO Add-Ins over the last few years. They are running against many versions of MS Word (but mostly MS Word 2016). I share a common library of code that I add to when working on each new project.
I've noticed sporadic crashing when closing Word. It's the bad crash that requires task manager to clean up:
"Microsoft Word has stopped working" "Close the program?"
It happens very rarely. Rare enough that I shrug it off as "Crazy MS Word..".
My colleagues have also noticed the problem a number of times. Often it went away after I built them a new assembly (with little or no code changes..)
The situation is now more serious, seeing as a client has reported the issue while testing. Interestingly, on the client's machine the crashing is reproducible.
I've spent the last few days commenting out code in an attempt to identify the problem. I thought I had the problem isolated to some Ribbon Visibility code, however it turns out I was just going round in circles..
I've tried getting a crash dump via:
adplus.exe -crash -pn winword.exe -o c:\Temp
After running this command I am unable to reproduce the error.
I noticed that changing my log4net tracing level from WARN to DEBUG caused the reproducible error to stop. I'm not confident that it's fixed however.
Is it a timing issue? Any idea how I can find the cause of my problem?
-- Edit --
@Thomas Weller, I was able to get a crash dump using https://learn.microsoft.com/en-us/sysinternals/downloads/procdump
Will see how I go interpreting it.
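For anyone else trying this, a command along these lines (the output path and flags are just what I'd use, adjust to taste) writes a full memory dump of Word when it hits an unhandled exception:
procdump -ma -e winword.exe c:\Temp\word_crash.dmp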
I was able to get a memory dump using ProcDump and it showed that a Thread exception was occurring as Word was closing.
It turns out that when I thought I was reusing the same dispatcher object, I was actually creating a new one each time.
I had code similar to:
Dispatcher dispatcher = System.Windows.Application.Current.Dispatcher;
O.CustomXMLParts parts = null;
dispatcher.WaitUntilApplicationIdle(() =>
{
parts = doc.CustomXMLParts.SelectByNamespace(Office.Namespace);
});
...
public static void WaitUntilApplicationIdle(this Dispatcher dispatcher, Action action)
{
dispatcher.Invoke(action, DispatcherPriority.ApplicationIdle);
}
And I had to ensure that the application was correctly set up as Word started:
void ThisAddIn_Startup(object sender, EventArgs e)
{
    EnsureApplication();
}

void EnsureApplication()
{
    // Constructing a WPF Application sets Application.Current as a side effect,
    // so the new instance does not need to be assigned to anything.
    if (System.Windows.Application.Current == null)
    {
        new Application()
        {
            ShutdownMode = ShutdownMode.OnExplicitShutdown
        };
    }
}
I'm trying to make sure that 3rd party dependencies are running, and built a service to do this based on the Monitoring 3rd party Sample Application, which emits ServiceControl CheckResult messages.
This works fine; ServicePulse alerts me when I stop/start my local and remote windows services, Databases, Flux Capacitors, etc.
I now want to build a Windows service / NServiceBus endpoint, like ServicePulse, but with logic that can attempt recovery, send emails, etc. I don't really want to put this code into the 3rd-party monitor.
I followed the servicecontrol/external-integrations and servicecontrol/contracts tutorials and created my MendStuffOrEmail endpoint - but it doesn't work; it doesn't receive any messages.
I was going to ask "what am I doing wrong?", but I think I know; I'm using IHandleMessages<ServiceControl.Contracts.MessageFailed> which is for failed messages.
I need to listen for the "CheckResult" type messages - but what are they? I have looked through the ServiceControl and ServicePulse code, but cannot work out what is being sent/received. How can I find this out, or has anyone else actually done this and already knows?
UPDATE
After more extensive rummaging, I also subscribed to CustomCheckFailed and CustomCheckSucceeded messages. I implemented IHandle interfaces for them, but I'm still not getting any messages. The log shows the auto-subscriber has taken out a subscription to them. What should I check next?
I compared my code to the example Sean posted and found the mistake: I had implemented two of the interfaces, IConfigureThisEndpoint and AsA_Server, in the wrong class (a 2am cut 'n' paste error).
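For anyone making the same mistake, the hosted endpoint configuration class should look something along these lines (a sketch - the class name and persistence choice are illustrative, and it assumes an NServiceBus 5-style NServiceBus.Host endpoint):
// The host scans for this class; IConfigureThisEndpoint and AsA_Server belong here,
// not on the message handler.
public class EndpointConfig : IConfigureThisEndpoint, AsA_Server
{
    public void Customize(BusConfiguration configuration)
    {
        // Endpoint-level setup (transport, persistence, conventions) goes here.
        configuration.UsePersistence<InMemoryPersistence>();
    }
}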
The example listens for failed messages, but for anyone else trying to do this, you do need to subscribe to CustomCheckFailed and CustomCheckSucceeded messages (nuget ServiceControl.Contracts).
public partial class MessageHandler : IHandleMessages<CustomCheckFailed>,
IHandleMessages<CustomCheckSucceeded>
{
public void Handle(CustomCheckFailed message)
{
this.HandleImplementation(message);
}
partial void HandleImplementation(CustomCheckFailed message);
public void Handle(CustomCheckSucceeded message)
{
this.HandleImplementation(message);
}
partial void HandleImplementation(CustomCheckSucceeded message);
public IBus Bus { get; set; }
}
Then comes the logic to do something with the messages. (I left in my original test - sending an email - but our system has a library with all sorts of recovery & notification methods. You'll need something similar to stop an email flood):
public partial class MessageHandler
{
partial void HandleImplementation(CustomCheckFailed message)
{
var messageBody = string.Format("Message with id {0} failed with reason {1}", message.CustomCheckId, message.FailureReason);
MailMessageFactory.sendEmail("Failure Notification", messageBody);
Console.Out.WriteLine(messageBody);
}
}
And a similar file holds the logic for the recovery messages (CustomCheckSucceeded). You probably want a check in there to detect that it is actually recovering from a failure, not just passing the test - something like the sketch below.
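Something along these lines is what I mean (the helper and its names are made up, not part of ServiceControl.Contracts): remember which checks are currently failing, so the failure handler emails only on the first failure and the recovery handler emails only when a previously failing check passes again.
using System.Collections.Concurrent;

public static class CheckStateTracker
{
    static readonly ConcurrentDictionary<string, bool> failing =
        new ConcurrentDictionary<string, bool>();

    // Returns true only the first time a given check transitions to failed.
    public static bool MarkFailed(string customCheckId)
    {
        return failing.TryAdd(customCheckId, true);
    }

    // Returns true only if the check was previously marked as failing.
    public static bool MarkRecovered(string customCheckId)
    {
        bool removed;
        return failing.TryRemove(customCheckId, out removed);
    }
}
The CustomCheckFailed handler would send its email only when MarkFailed returns true, and the CustomCheckSucceeded handler only when MarkRecovered returns true.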
So anyway, fixed - on my dev pc.
The next problem was making it work on the server, which took a support call. It turns out ServiceControl ALSO needs a licence, available as part of the "Advanced", "Enterprise", and "Ultimate" editions - Not part of the platform on the standard licence.
I've run into the following problem when porting an app from the REST API to GDAA.
The app needs to download some of its (thousands of) JPEG images based on user selection. The way this is solved in the app is by downloading a thumbnail version first, using this construct of the REST API:
private static InputStream getCont(String rsid, boolean bBig){
InputStream is = null;
if (rsid != null) try {
File gFl = bBig ?
mGOOSvc.files().get(rsid).setFields("downloadUrl" ).execute():
mGOOSvc.files().get(rsid).setFields("thumbnailLink").execute();
if (gFl != null){
GenericUrl url = new GenericUrl(bBig ? gFl.getDownloadUrl() : gFl.getThumbnailLink());
is = mGOOSvc.getRequestFactory().buildGetRequest(url).execute().getContent();
}
} catch (UserRecoverableAuthIOException uraEx) {
authorize(uraEx.getIntent());
} catch (GoogleAuthIOException gauEx) {
// authorization failed; fall through and return a null stream
} catch (Exception e) {
// ignore; the caller treats a null InputStream as "not available"
}
return is;
}
It allows the app to get either a 'thumbnail' or a 'full-blown' version of an image based on the bBig flag. The user can select a thumbnail from a list, and the full-blown image download follows (all of this backed by a disk-based LRU cache, of course).
The problem is that GDAA does not have an option to ask for a reduced-size / thumbnail version of an object (AFAIK), so I have to resort to combining both APIs, which makes the code more convoluted than I'd like (bottom of the page). Needless to say, the 'Resource ID' needed by the REST API may not be immediately available.
So, the question is: Is there a way to ask GDAA for a 'thumbnail' version of a document?
Downloading thumbnails isn't currently available in the Drive Android API, and unfortunately I can't give a timeframe for when it will be available. Until then, the Drive Java Client Library is the best way to get thumbnails on Android.
We'd appreciate it if you went ahead and filed a feature request against our issue tracker: https://code.google.com/a/google.com/p/apps-api-issues/
That gives requests more visibility to our teams internally, and issues will be marked resolved when we release updates.
Update: I had an error in the discussion of the request fields.
As Ofir says, you can't get thumbnails with the Drive Android API, but you can get them with the Drive Java Client Library. This page is a really good primer for getting started:
https://developers.google.com/drive/v3/web/quickstart/android
Oddly, I couldn't get the fields portion of the request to work as written in that quick start. In my experience, you have to request the fields a little differently.
Since you're doing a custom field request, be sure to add the other fields you want as well. Here is how I've gotten it to work:
Drive.Files.List request = mService.files()
.list()
.setFields("files/thumbnailLink, files/name, files/mimeType, files/id")
.setQ("Your file param and/or mime query");
FileList files = request.execute();
files.getFiles(); //Each File in the collection will have a valid thumbnailLink
A sample query might be:
"mimeType = 'image/jpeg' or mimeType = 'video/mp4'"
Hope this helps!
private void Subscribe()
{
EventSystem.Subscribe<User, LoadEventArgs>(GetInfo, EventPhases.Initiated);
}
public void GetInfo(User user, LoadEventArgs args, EventPhases phase)
{
TcmUri id = user.Id;
string name = user.Title;
Console.WriteLine(id.ToString());
Console.WriteLine(name);
}
I wrote the above code and added the assembly to the config file on the Tridion server, but no console window appears when a user logs in.
The event you were initially subscribing to is the Processed phase of any identifiable object with any of its actions; that will trigger on basically every transaction happening in the SDL Tridion CMS, so it won't give you any indication of when a user logs in (it covers basically everything that happens, all the time).
Probably one of the first things that happens after a user logs in is that their user info and application data are read. So what you should try is something along the lines of:
private void Subscribe()
{
EventSystem.Subscribe<User, LoadEventArgs>(GetInfo, EventPhases.Initiated);
}
public void GetInfo(User user, LoadEventArgs args, EventPhases phase)
{
TcmUri id = user.Id;
string name = user.Title;
}
But do keep in mind that this will also be triggered by other actions - things like viewing history, checking publish transactions and possibly a lot more. I don't know how you can distinguish this action as being part of a user login, since there isn't an event triggered specifically for that.
You might want to check out if you can find anything specific for a login in the LoadEventArgs for instance in its ContextVariables, EventStack, FormerLoadState or LoadFlags.
Edit:
Please note that the Event System runs inside the SDL Tridion core, so you won't ever see a console window pop up anywhere. If you want to log information, you can include the following using statement:
using Tridion.Logging;
after adding a reference to Tridion.Logging.dll, which you can find in your ..\Tridion\bin\client directory. Then you can use the following logging statement in your code:
Logger.Write("message", "name", LoggingCategory.General, TraceEventType.Information);
You will find the output in your Tridion Event log (provided you have set the logging level to show information messages too).
But probably the best option here is to just debug your event system, so you can directly inspect your object when the event is triggered. Here you can find a nice blog article about how to setup debugging of your event system.
If you want to get the TCM URI of the current user, you can do so in a number of ways.
I would recommend one of these:
Using the Core Service, call GetCurrentUser and read the Id property.
Using TOM.NET, read the User.Id property of the current Session.
It looks like you want the second option (TOM.NET) in this case, as your code is in the event system.
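As a rough sketch of the TOM.NET route from inside the event system (assuming, as in regular TOM.NET code, that the event's subject exposes its Session and the Session exposes the current User):
public void GetInfo(User user, LoadEventArgs args, EventPhases phase)
{
    // The session the event runs under belongs to the user performing the action.
    TcmUri currentUserId = user.Session.User.Id;
    Logger.Write("Current user: " + currentUserId, "GetInfo",
        LoggingCategory.General, TraceEventType.Information);
}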
I work at a college and have been developing an ASP.NET site with many, many reports about students, attendance stats, etc. The data comes from an MSSQL Server DB which is the back end to our student management system. This has a regular maintenance period on Thursday mornings of unknown length (dependent on what has to be done).
Most of the staff are aware of this, but the less regular users seem to be forever ringing me up. What is the easiest way to disable the site during maintenance? Obviously I can just run a DB query to test whether it is up, but I am unsure of the best way to, for instance, redirect all users to a "The website is down for maintenance" message, bearing in mind they could have started a session before the website went down.
Hopefully, something can be implemented globally rather than per page.
Drop an html file called "app_offline.htm" into the root of your virtual directory. Simple as that.
Scott Guthrie on the subject and friendly errors.
I would suggest doing it in Application_PreRequestHandlerExecute instead of after an error occurs. Generally, it's best not to enter normal processing if you know your database isn't available. I typically use something like the code below:
void Application_PreRequestHandlerExecute(Object sender, EventArgs e)
{
string sPage = Request.ServerVariables["SCRIPT_NAME"];
if (!sPage.EndsWith("Maintenance.aspx", StringComparison.OrdinalIgnoreCase))
{
//test the database connection
//if it fails then redirect the user to Maintenance.aspx
string connStr = ConfigurationManager.ConnectionStrings["ConnectionString"].ConnectionString;
SqlConnection conn = new SqlConnection(connStr);
try
{
conn.Open();
}
catch(Exception ex)
{
Session["DBException"] = ex;
Response.Redirect("Maintenance.aspx");
}
finally
{
conn.Close();
}
}
}
You could display a message to people who have logged in, saying "the site will be down for maintenance in xxx minutes", then run a service to log everyone out after xxx minutes. Then set a flag somewhere that every page can access, and at the top of every page (or just the template page) test whether that flag is set; if it is, send a redirect header to a "site is down for maintenance" page.
What happens now when the site is down and someone tries to hit it? Does ADO.NET throw a specific exception you could catch and then redirect to the "website down" page?
You could add a "Global.asax" file to the project, and in its code-behind add an "Application_Error" event handler. It would fire whenever an exception is thrown and goes uncaught, from anywhere in your web app. For example, in C#:
protected void Application_Error(object sender, EventArgs e)
{
    Exception ex = Server.GetLastError().GetBaseException();
    if (ex is SqlException)
    {
        Server.ClearError();
        Server.Transfer("~/offline.aspx");
    }
}
You could also check the Number property on the exception, though I'm not sure which number(s) would indicate it was unable to connect to the database server. You could test this while it's down, find the SQL error number and look it up online to see if it's specifically what you really want to be checking for.
EDIT: I see what you're saying, petebob.
The "offline.html" page won't work if the user was already navigating within the site, or if he's accessing the site from a bookmark/external link to a specific page.
The solution I use is to create a second web site with the same address (IP or host header(s)), but have it disabled by default. When the website is down, a script deactivates the "real" web site and enables the "maintenance" website instead. When it comes back online, another script switches back to the "real" web site.
The "maintenance" web site is located in a different root directory, with a single page with the message (and any required images/css files)
To have the same message shown on any page, the "maintenance" web site is set up with a 404 error handler that will redirect any request to the same "website is down for maintenance" page.
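On IIS 7 and later the switch-over script can be as simple as two appcmd calls (the site names here are made up; on older IIS versions the same switch can be done from the IIS manager):
%windir%\system32\inetsrv\appcmd stop site "College Reports"
%windir%\system32\inetsrv\appcmd start site "College Reports - Maintenance"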
A slightly more elegant version of the DB check on every page would be to do the check in the Global.asax file or to create a master page that all the other pages inherit from.
The suggestion of having an online site and an offline site is really good, but only really applicable if you have a limited number of sites to manage on the server.
EDIT: Damn, the other answers with these suggestions came up after I loaded the page. I need to remember to refresh before replying :)
James's code forgets to close the connection; it should probably be:
try
{
conn.Open();
}
catch(Exception ex)
{
Session["DBException"] = ex;
Response.Redirect("Maintenance.aspx");
}
finally
{
conn.Close();
}
Thanks for the replies so far. I should point out that I'm not the one who does the maintenance, nor do I have access to IIS all the time. Also, I prefer options where I do nothing, as, like all programmers, I am a bit lazy.
I know one way is to check a flag on every page, but I'm hoping to avoid that. Could I not do something with the global.asax page? In fact, I think posting has engaged my brain:
I think I could put a bit of code in Application_BeginRequest to check the SQL state and then redirect:
HttpContext context = HttpContext.Current;
if (!isOnline())
{
context.Response.ClearContent();
context.Response.Write("<script language='javascript'>" +
"top.location='" + Request.ApplicationPath + "/public/Offline.aspx';</scr" + "ipt>");
}
Or something like that. It may not be perfect and isn't tested yet, as I'm not at work. Comments appreciated.
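A rough sketch of the isOnline() helper (a name I made up for the snippet above), reusing the connection-test idea from James's answer; it needs System.Configuration and System.Data.SqlClient:
private bool isOnline()
{
    string connStr = ConfigurationManager.ConnectionStrings["ConnectionString"].ConnectionString;
    try
    {
        using (SqlConnection conn = new SqlConnection(connStr))
        using (SqlCommand cmd = new SqlCommand("SELECT 1", conn))
        {
            conn.Open();
            cmd.ExecuteScalar();
            return true;   // the DB answered, so the site can stay up
        }
    }
    catch (SqlException)
    {
        return false;      // any failure is treated as "maintenance in progress"
    }
}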