How to view the full exception/error stack trace in Cloudera Hive

I am trying to run a query from the Cloudera Hue editor for Hive, but the query fails with an exception that I am trying to investigate. How do I see the full stack trace?

Found the log files in /var/log/hive.
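
For what it's worth, a quick way to inspect the newest log from a shell (a sketch; the exact file names vary by Cloudera version and configuration, and you may need root to read them):

# list the Hive logs, newest first, then tail the most recent one
ls -lt /var/log/hive | head
tail -n 200 "$(ls -t /var/log/hive/* | head -1)"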

Related

Docker PostgreSQL 12.3: how to debug a failed transaction / incomplete message

I'm using the PostgreSQL 12.3 Docker image, and I have an application that stopped working: some query transaction gets closed or left incomplete partway through, and the application just doesn't work.
My question is: how can I see the incomplete transaction, or make PostgreSQL save it?
I want to see where it came from, when it ran, and what the query and the issue were. Is there any way to debug it, or maybe save it to an error table?
My psql log:
LOG: incomplete message from client
Thanks!
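
One way to capture the statement behind the failure is to turn up logging in postgresql.conf (a sketch, assuming you can edit the config inside the container or mount your own; these are standard PostgreSQL settings, not anything specific to the Docker image):

log_statement = 'all'                # write every statement to the log
log_min_error_statement = error      # log the statement that triggered each error
log_connections = on                 # log each connection attempt
log_disconnections = on              # log each session end, handy for spotting dropped clients

The "incomplete message from client" line often means the client disconnected partway through sending a protocol message, so correlating connection/disconnection times with your application logs may point at the culprit.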

"o76.getSource EXTERNAL table not supported" Error with AWS Glue custom connector with BigQuery

I was following this step-by-step guide to connect data from BigQuery to AWS Glue and store it in S3. Everything works fine until I try to run the job, which keeps failing with:
An error occurred while calling o76.getSource. The type of table {datasetName}.{table_name} is currently not supported: EXTERNAL
I can't find any similar error online, and the logs offer nothing further; the job seems to be stuck on the issue with the BigQuery table. I followed exactly what the author did in the blog, using the key-value pairs to indicate the project ID and dataset/table (the image refers to the blog author's table name).
Does anybody know what's causing this?
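
The message suggests the connector only handles native BigQuery tables and that {datasetName}.{table_name} is an EXTERNAL (federated) table. One way to confirm the table type (a sketch using the bq CLI; the dataset and table names are placeholders):

bq show --format=prettyjson my_dataset.my_table | grep '"type"'

If it reports EXTERNAL, materializing the data into a native table first may sidestep the connector's limitation.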

Cannot Export a Table from BigQuery to Google Cloud Storage

I am trying to export a table from BigQuery to Google Cloud Storage from the console/command line. The console job runs for a few minutes and errors out without any error code, and the command-line job, after running for some time, gives the error below:
BigQuery error in extract operation: Error processing job 'data-flow-experiment:bqjob_r308ff0f73d1820a6_00000157f77e8ab9_1': Backend error. Job aborted.
The job ID for the command-line run is given above.
Billing is enabled for the project, and the BigQuery service is also enabled.
I also get the error below when I try to create a bucket in Google Cloud Storage:
AccessDeniedException: 403 The account for the specified project is read only.
This is despite the IAM user I am using having Owner access; I have created buckets with this account before and have also extracted tables in the past.
Please advise.
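
For context, a command-line extract of this kind looks roughly like the following (a sketch; the dataset, table, and bucket names are placeholders):

bq extract 'data-flow-experiment:mydataset.mytable' gs://my-bucket/export/table-*.csv
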
For the BigQuery issue:
Do you happen to have a timestamp column with out-of-range values (say, far, far into the future)?
If so, you can just wait two more days, as the fix is being rolled out.
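
A quick way to check for such values (a sketch; ts_col stands in for your actual timestamp column):

SELECT MIN(ts_col) AS earliest, MAX(ts_col) AS latest
FROM mydataset.mytable;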

Updating cloud deployment which contains bigquery dataset in GCP

I am trying to update a deployment I made using the GCP Deployment Manager; however, I get an error saying the datasets in the deployment already exist. Is there a way to tell my deployment to create the dataset when it doesn't exist and do nothing when it does? I thought that was the point of the update command. Below is the error I am getting:
code: u'RESOURCE_ERROR'
location: u'dep23/dataset'
message: u'Unexpected response from resource of type bigquery.v2.dataset: 409 {"code":409,"errors":[{"domain":"global","message":"Already Exists: Dataset my-project:dataset","reason":"duplicate"}],"message":"Already Exists: Dataset my-project:dataset","statusMessage":"Conflict","requestPath":"https://www.googleapis.com/bigquery/v2/projects/my-project/datasets"}'
It seems the resource has to be created by Deployment Manager in the first place, or else you run into issues like this. I had to delete my dataset and re-create it through Deployment Manager, and then the update started working.
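
If deleting the dataset is not an option, it may also be worth trying Deployment Manager's create policy, which lets an update acquire a resource that already exists instead of failing on it (a sketch; dep23 and config.yaml stand in for your deployment name and config file):

gcloud deployment-manager deployments update dep23 --config config.yaml --create-policy ACQUIRE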

Squirrel SQL Exception Logging

I am developing a JDBC driver which is a wrapper for a web service. My unit tests work fine and I can write my own Java code that uses the driver to do useful things.
When I plug it into Squirrel SQL, it is able to connect and fetch its initial batch of metadata (properties, schemas/catalogs, etc.), but a simple SELECT query does not work: I receive an InvocationTargetException. That means a reflective call failed inside the method or constructor being invoked; this exception always wraps another exception that shows what really failed.
However, the error window in Squirrel SQL shows only the exception name: no wrapped exception or cause, no stack trace. The log in my user directory contains no information about what happened.
Looking through the global properties and connection properties, I have not found any settings that would increase logging. I am using Squirrel SQL 3.5.3 on 64-bit Java 7 under 64-bit Windows 7.
How can I get Squirrel SQL to provide more information to help me find the cause of this error? I do not care if it outputs to the log file or the error window, just so I have something to go by.
The easiest way to change the log level is to edit the log4j.properties file, which is in the same folder as the batch file that starts Squirrel SQL.
Simply change the line
log4j.rootLogger=info, SquirrelAppender
to
log4j.rootLogger=debug, SquirrelAppender
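
After restarting Squirrel SQL, the extra detail should show up in the squirrel-sql.log file under the .squirrel-sql folder in your user directory (the same log mentioned in the question), which should now include the wrapped cause and stack trace behind the InvocationTargetException.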