SQL Server trace - translating PAGE information to actual resource - sql

I am researching deadlocks that are happening in our application. I turned on trace flags 1204, 1205, and 3605, and I got the deadlock trace all right. But I am unable to figure out which resource it is deadlocking on. I have read many forums, and they all say the trace should contain something called a KEY/RID that points to the resource in question. But my trace files do not contain KEY/RID at all; instead they contain something called PAGE.
For example,
06/30/2010 16:29:52,spid4s,Unknown,PAGE: 8:1:16512 CleanCnt:2 Mode:IX Flags: 0x2
06/30/2010 16:29:52,spid4s,Unknown,PAGE: 8:1:5293 CleanCnt:2 Mode:IX Flags: 0x2
How can I determine what this resource is, based on this PAGE information I am getting? Thanks in advance for your help!

It looks like the lock is being taken at the page level. In your trace, PAGE: 8:1:16512 means database_id 8, file_id 1, page 16512. Check out http://msdn.microsoft.com/en-us/library/aa937573(SQL.80).aspx > Using Trace Flag 1204 > Terms in a Trace Flag 1204 Report > PAG:
PAG
Identifies the page resource on which a lock is held or requested. PAG is represented in Trace Flag 1204 as PAG: db_id:file_id:page_no; for example, PAG: 7:1:168.
Edit
Use DBCC PAGE (http://support.microsoft.com/kb/83065 or http://www.sqlmonster.com/Uwe/Forum.aspx/sql-server/26555/Determining-table-for-a-particular-File-id-Page-No) to get the object id from the page information, then use OBJECT_NAME (http://msdn.microsoft.com/en-us/library/ms186301.aspx) or query sys.objects to get the resource name.
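For example, for the first page in your trace (8:1:16512), something along these lines should work. Note that DBCC PAGE is undocumented, and the object id below is a hypothetical placeholder you would copy from its output:
DBCC TRACEON (3604);         -- redirect DBCC output to the client
DBCC PAGE (8, 1, 16512, 0);  -- db_id 8, file_id 1, page_no 16512 from your trace
-- In the page header output, look for the Metadata: ObjectId (m_objId on SQL 2000) field, then:
SELECT OBJECT_NAME(1977058079);  -- hypothetical ObjectId copied from the DBCC PAGE output
Run the OBJECT_NAME query in the database identified by the db_id (8 here), since object ids are only unique per database.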

Related

How to get Exception source “Activity Description name”

When exceptions occur in a UiPath project, I have an email sent out with the exception info included. There seems to be an issue, though, where I can only see where the error occurred by looking at the selector information, such as:
Cannot find the UI element corresponding to this selector:
<html app='chrome.exe' title='Microsoft Dynamics GP' />
<webctrl aaname='Add' idx='1'
parentid='a00000000000000008549000000030009000000000001000000000000' tag='DIV' />
This info and the stack trace, or any other info, is not really helpful for quickly finding the source of the problem. I have looked through the UiPath documentation and forum and found only this question, which seemed to point to using exception.Source to show the name of the activity where the error occurred. However, exception.Source only returns “UiPath.Core.Activities” instead of "Type into Copy Job# 'INPUT'" in the following example:
This obviously causes a big problem with exception handling. How can I easily return the source with each exception?
When your selector fails, you will end up with a new object of type UiPath.Core.SelectorNotFoundException. However, until the team at UiPath decides to add the Display Name into the inner exception, there is little you can do in this particular case.
Take the following example - the first line shows the Inner Exception, and the second one in red is essentially just the exception being rethrown. Note that only the latter one contains the Display Name property.
The Source itself will usually be of type UiPath.Core.Activities, but since this is just the type's name, we don't have any link to the faulting object. Here's what you can do:
Add some details to your exception. You don't want to do this for each activity, but you could have certain blocks of try-catches (example: logging into the system consists of three individual activities, and they reside in one block), as sketched below.
Rethrow the exception. That way the Display Name will end up in the execution log file.
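For example, the Throw activity in the Catch block could use an expression along these lines (a minimal sketch; the block name 'Login to GP' is made up for illustration):
New System.Exception("Block 'Login to GP' failed: " & exception.Message, exception)
That way the log and the outgoing email carry a readable location, while the original exception is preserved as the inner exception.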

Datastax: Block not found error from DSEFS

I have a Spark Streaming job running in DSE that uses DSEFS for its checkpointing directory. I see this error in the debug log file. How do I resolve it?
ERROR [dsefs-netty-worker-5] 2017-12-01 05:23:02,679 DSE-FS RestServerHandler.scala:126 - [id: 0x9964e082, /<>:58874 :> 0.0.0.0/0.0.0.0:5598] Streaming data to remote end failed.
java.io.IOException: Block not found a3859f30-aa23-11e7-80b9-4b8bdaf197cd
at com.datastax.bdp.fs.server.blocks.BlockService$stateMachine$33$1.apply(BlockService.scala:706) ~[dsefs-server_2.10-5.0.19.jar:5.0.19]
at com.datastax.bdp.fs.server.blocks.BlockService$stateMachine$33$1.apply(BlockService.scala:703) ~[dsefs-server_2.10-5.0.19.jar:5.0.19]
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) [scala-library-2.10.6.jar:na]
at com.datastax.bdp.fs.exec.SameThreadExecutionContext$class.executeInSameThread(SameThreadExecutionContext.scala:24) ~[dsefs-common_2.10-5.0.19.jar:5.0.19]
at com.datastax.bdp.fs.exec.SameThreadExecutionContext$class.execute(SameThreadExecutionContext.scala:33) ~[dsefs-common_2.10-5.0.19.jar:5.0.19]
at com.datastax.bdp.fs.exec.SerialExecutionContextProvider$$anon$5$$anon$2.execute(SerialExecutionContextProvider.scala:24) ~[dsefs-common_2.10-5.0.19.jar:5.0.19]
at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40) [scala-library-2.10.6.jar:na]
at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248) ~[scala-library-2.10.6.jar:na]
at scala.concurrent.Promise$class.complete(Promise.scala:55) ~[scala-library-2.10.6.jar:na]
at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:153) ~[scala-library-2.10.6.jar:na]
at com.datastax.bdp.fs.server.blocks.BlockService$stateMachine$1$1.apply(BlockService.scala:60) ~[dsefs-server_2.10-5.0.19.jar:5.0.19]
at com.datastax.bdp.fs.server.blocks.BlockService$stateMachine$1$1.apply(BlockService.scala:60) ~[dsefs-server_2.10-5.0.19.jar:5.0.19]
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) [scala-library-2.10.6.jar:na]
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:358) [netty-all-4.0.34.Final.jar:4.0.34.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357) [netty-all-4.0.34.Final.jar:4.0.34.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112) [netty-all-4.0.34.Final.jar:4.0.34.Final]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_112]
This error means the DSEFS server failed to find the metadata of the data block in the dsefs.blocks Cassandra table. The ids of a file's blocks are stored in the dsefs.block_offsets table, and they reference blocks stored in dsefs.blocks. If a row exists in dsefs.block_offsets and points to a block id that is absent from dsefs.blocks, you get this error when reading the file.
This error should not happen under normal circumstances; it means the filesystem metadata somehow got into an inconsistent state. This may be a bug in the DSEFS implementation, the result of data loss caused by setting up the dsefs keyspace with an insufficient replication factor, or the result of a write operation that did not finish successfully and was applied only partially.
Please make sure you set the dsefs keyspace RF to at least 3 and run nodetool repair to avoid accidental data loss or unavailability of some DSEFS metadata.
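If you need to raise the replication factor, a sketch of the commands (the data center name DC1 is a placeholder for your own, and the repair must be run on every node):
ALTER KEYSPACE dsefs WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3};
nodetool repair dsefs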
If this doesn't help, please contact me directly or through DataStax technical support and provide more details, including logs from the time before the error and more context on what the job was doing when the failure occurred.

Determine actual errors from a load job

Using the Java SDK, I am creating a load job for just a single record with a fairly complicated schema. When monitoring the status of the load job, it takes a surprisingly long time (perhaps this is due to working out the schema), but then says:
11:21:06.975 [main] INFO xxx.GoogleBigQuery - Job status (21694ms) create_scans_1384744805079_172221126: DONE
11:24:50.618 [main] ERROR xxx.GoogleBigQuery - Job create_scans_1384744805079_172221126 caused error (invalid) with message
Too many errors encountered. Limit is: 0.
11:24:50.810 [main] ERROR xxx.GoogleBigQuery - {
"message" : "Too many errors encountered. Limit is: 0.",
"reason" : "invalid"
}
BTW - how do I tell the job that it can have more than zero errors using Java?
This load job does not appear in the list of recent jobs in the console, and as far as I can see, none of the Java objects contains any more details about the actual errors encountered. So how can I programmatically find out what is going wrong? All I can find is:
if (err != null) {
    log.error("Job {} caused error ({}) with message\n{}", jobID, err.getReason(), err.getMessage());
    try {
        log.error(err.toPrettyString());
    }
    ...
In general, I am having a difficult time finding good documentation for some of these things and am working it out by trial and error and short snippets of code found on here and in older groups. If there is a better source of information than the getting-started guides, I would appreciate any pointers to it. The Javadoc does not really help, and I cannot find any complete examples of loading, querying, testing for errors, cataloging errors, and so on.
This job is submitted as a NEWLINE_DELIMITED_JSON record, supplied to the job via:
InputStream dummy = getClass().getResourceAsStream("/googlebigquery/xxx.record");
final InputStreamContent jsonIn = new InputStreamContent("application/octet-stream", dummy);
createTableJob = bigQuery.jobs().insert(projectId, loadJob, jsonIn).execute();
My authentication and so on seems to work correctly, since separate Java code that lists the projects and the datasets in the project works correctly. So I just need help working out what the actual error is - does it not like the schema (I have records nested within records, for instance), or does it think there is an error in the data I am submitting?
Thanks in advance for any help. The job number cited above is an actual failed load job if that helps any Google staffers who might read this.
It sounds like you have a couple of questions, so I'll try to address them all.
First, the way to get the status of the job that failed is to call jobs().get(jobId), which returns a job object that has an errorResult object with the error that caused the job to fail (e.g. "too many errors"). The errors list on the job status is a list of all of the errors on the job, which should tell you which lines hit errors.
Note if you have the job id, it may be easier to use bq to look up the job -- you can run bq show <job_id> to get the job error information. If you add --format=prettyjson, it will print out all of the information in the job.
You also might want to supply your own job id when you create the job -- then even if there is an error starting the job (i.e. the insert() call fails, perhaps due to a network error), you can look up the job to see what actually happened, as in the sketch below.
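A sketch of that with the same v2 Java client as in your snippet (the id format is arbitrary, as long as it is unique):
// import com.google.api.services.bigquery.model.JobReference;
JobReference jobRef = new JobReference()
        .setProjectId(projectId)
        .setJobId("create_scans_" + System.currentTimeMillis());
loadJob.setJobReference(jobRef);
createTableJob = bigQuery.jobs().insert(projectId, loadJob, jsonIn).execute();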
To tell BigQuery that some errors are allowed during import, you can use the maxBadRecords setting in the load job. See https://developers.google.com/resources/api-libraries/documentation/bigquery/v2/java/latest/com/google/api/services/bigquery/model/JobConfigurationLoad.html#getMaxBadRecords().
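Putting the retrieval side together, a minimal sketch against the v2 Java API, assuming the bigQuery, projectId, jobID and log objects from your question:
// import com.google.api.services.bigquery.model.Job;
// import com.google.api.services.bigquery.model.JobStatus;
// import com.google.api.services.bigquery.model.ErrorProto;
Job job = bigQuery.jobs().get(projectId, jobID).execute();
JobStatus status = job.getStatus();
if (status.getErrorResult() != null) {
    // the single error that caused the job to fail
    log.error("Job failed: {}", status.getErrorResult().getMessage());
}
if (status.getErrors() != null) {
    // the per-record errors, including the offending input line where available
    for (ErrorProto e : status.getErrors()) {
        log.error("{} at {}: {}", e.getReason(), e.getLocation(), e.getMessage());
    }
}
And to allow some bad records, set maxBadRecords on the load configuration before inserting the job:
loadJob.getConfiguration().getLoad().setMaxBadRecords(10); // tolerate up to 10 bad records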

Regions/Locales and Pulling DateTimes( From Strings) from a Database

I've been having several problems lately with clients that use a different Windows region setting than I do. I cannot seem to find a way to fix it. The error is as follows:
The type initializer for 'InventoryDataTable' threw an exception. ---> System.TypeInitializationException: The type initializer for 'InventoryDataTable' threw an exception. ---> System.FormatException: String was not recognized as a valid DateTime.
The error occurs when users load the application and their region settings do not match my own. The application loads a dataset and attempts to retrieve a small amount of data before allowing the user to log in. When this is removed, the problem occurs immediately after the user logs in to the app.
I cannot seem to find the proper settings to force the user to use my region, or to allow the application to figure it out on its own...
Exact error:
System.InvalidOperationException: An error occurred creating the form. See Exception.InnerException for details. The error is: The type initializer for 'InventoryDataTable' threw an exception. ---> System.TypeInitializationException: The type initializer for 'InventoryDataTable' threw an exception. ---> System.FormatException: String was not recognized as a valid DateTime.
at System.DateTimeParse.Parse(String s, DateTimeFormatInfo dtfi, DateTimeStyles styles)
at System.DateTime.Parse(String s)
at Invasion_3042_v2.INVDataSet.InventoryDataTable..cctor() in C:\Users\Tdata\Desktop\I2.original\Invasion 3042 v2\INVDataSet.Designer.vb:line 7588
--- End of inner exception stack trace ---
at Invasion_3042_v2.INVDataSet.InventoryDataTable..ctor()
at Invasion_3042_v2.INVDataSet.InitClass() in C:\Users\Tdata\Desktop\I2.original\Invasion 3042 v2\INVDataSet.Designer.vb:line 4296
at Invasion_3042_v2.INVDataSet..ctor() in C:\Users\Tdata\Desktop\I2.original\Invasion 3042 v2\INVDataSet.Designer.vb:line 447
at Invasion_3042_v2.INV3042LOGIN.InitializeComponent() in C:\Users\Tdata\Desktop\I2.original\Invasion 3042 v2\INV3042LOGIN.Designer.vb:line 39
at Invasion_3042_v2.INV3042LOGIN..ctor() in C:\Users\Tdata\Desktop\I2.original\Invasion 3042 v2\INV3042LOGIN.vb:line 100
--- End of inner exception stack trace ---
at Invasion_3042_v2.My.MyProject.MyForms.Create__Instance__[T](T Instance) in 17d14f5c-a337-4978-8281-53493378c1071.vb:line 190
at Invasion_3042_v2.My.MyProject.MyForms.get_INV3042LOGIN()
at Invasion_3042_v2.My.MyApplication.OnCreateMainForm() in C:\Users\Tdata\Desktop\I2.original\Invasion 3042 v2\My Project\Application.Designer.vb:line 35
at Microsoft.VisualBasic.ApplicationServices.WindowsFormsApplicationBase.OnRun()
at Microsoft.VisualBasic.ApplicationServices.WindowsFormsApplicationBase.DoApplicationModel()
Posting the actual code causing the issue would have helped, but I can put forth a pretty good guess at what's going on.
Could it be that the dates stored in your centralized database are always in a given format (maybe US)? If so, what is very likely happening is the following:
A date like 1/14/2012 is returned from a query to your database. The code running on your problem clients is parsing the date using, say, European regional date settings. (In the EU and most other countries in the world, 14-Jan-2012 is expressed as 14/01/2012, i.e. dd/MM/yyyy, not MM/dd/yyyy.)
Try either specifying that you always want to use a known, base regional setting in your config file, like this:
<globalization requestEncoding="utf-8" responseEncoding="utf-8" culture="en-US" />
...or parse your dates using a specified culture setting like this, to avoid defaulting to a local system setting that does not match your server-side expectations:
DateTime.Parse("1/14/2012", new CultureInfo("en-US")); // or whatever culture your server database expects...

How to dump Permgen?

I wanted to take a dump of the permgen of an application server.
I do not want to use -XX:+TraceClassLoading -XX:+TraceClassUnloading, as I do not want to restart the server, nor do I want to use JConsole.
Is there any tool like jmap (used for heap dumps; I didn't find any option for permgen) that can dump the permgen, so that I only have to supply the pid?
jmap -permstat <pid>
is going to produce output like this:
30337 intern Strings occupying 2746200 bytes.
class_loader classes bytes parent_loader alive? type
<bootstrap> 2031 7253392 null live <internal>
0x517474f0 1 1760 null dead sun/reflect/DelegatingClassLoader#0x43f95d38
0x4f83f670 1 1744 0x4ebfb8e8 dead sun/reflect/DelegatingClassLoader#0x43f95d38
[...]
total = 287 10020 35889952 N/A alive=3, dead=284 N/A
This is not a full dump, but it will allow you to do some investigation.
I am still looking into how to find more information.
It is not possible to 'dump permgen' the way it is done for the heap.
In addition to jmap -permstat, as others have presented, you can analyze a standard heap dump to shed some light on your permanent generation, as described in this blog entry: 'The Unknown Generation: Perm'.
Because a heap dump does not really contain a lot of information about perm space, perm problems are difficult to tackle. Recently, I found this great article by Sporar, Sundararajan and Kieviet. The authors shed some light on the permanent generation. Of course, I had to check right away if and how I can use the Eclipse Memory Analyzer to analyze this “unknown” generation. This is what this blog is about.
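For reference, such a heap dump can be taken without restarting the server; a typical invocation (the file name is arbitrary):
jmap -dump:format=b,file=heap.hprof <pid>
The resulting file can then be opened in the Eclipse Memory Analyzer as the article describes.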