I have tried to create a json_split function (UDF) using brickhouse.udf.json.JsonSplitUDF, and the CREATE FUNCTION itself succeeds in beeline. But when I try to use the same function in my queries, it fails with "unable to evaluate brickhouse.udf.json.JsonSplitUDF". When I add the same class explicitly in the Hive CLI, it works.
Additional info: I have added the jars to hive.reloadable.aux.jars.path and ran RELOAD in beeline. We are using Kerberos and Sentry authentication.
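For reference, what I ran in beeline is equivalent to the following sketch (PyHive, the hostname, and the Kerberos settings here are placeholders/assumptions, not my exact setup):

```python
from pyhive import hive

conn = hive.connect(
    host="hs2.example.com",        # placeholder HiveServer2 host
    port=10000,
    auth="KERBEROS",
    kerberos_service_name="hive",  # assumed to match the HiveServer2 principal
)
cur = conn.cursor()

# Pick up jars listed in hive.reloadable.aux.jars.path without restarting HS2.
cur.execute("RELOAD")

# Register a permanent (metastore-backed) function; a CREATE TEMPORARY FUNCTION
# would only exist for the current session.
cur.execute("CREATE FUNCTION json_split AS 'brickhouse.udf.json.JsonSplitUDF'")
```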
I would be glad to provide any additional information required to assist further.
I would greatly appreciate your time and an early response.
Thanks
Jagath
I am having a problem with a data transfer from Google Ads. When I schedule the backfill I get the following error for some dates:
Invalid value: Load configuration must specify at least one source URI
When I check the log inside of the details of execution I get the following message:
Failed to start job for table p_ClickStats_5419416216$20201117 with error INVALID_ARGUMENT: Invalid value: Load configuration must specify at least one source URI
The weird part is that this happens for random dates which I had transferred before in a previous transfer. Has anyone had a problem similar to this?
I had the very same problem with the same error while using a free trial account. I shared my project with another account that had been upgraded with billing set up (but was still using the promotional credits) and recreated the data transfer there; so far I have not had the same issue. Try upgrading your account to one with billing configured rather than the purely free trial account, then run the data transfer again. Backfilling has also worked for me since, and there appear to be no more duplicate runs either. The free trial may be a limited version.
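Once billing is in place, you can also retrigger only the failed dates programmatically instead of through the UI. A rough sketch with the Data Transfer client (the project, location, config ID, and date are placeholders):

```python
import datetime

from google.cloud import bigquery_datatransfer_v1
from google.protobuf.timestamp_pb2 import Timestamp

client = bigquery_datatransfer_v1.DataTransferServiceClient()

# Placeholder transfer config path: projects/<project>/locations/<loc>/transferConfigs/<id>
parent = "projects/my-project/locations/us/transferConfigs/my-config-id"

# Re-run the backfill for a single failed date (e.g. the 2020-11-17 partition).
run_time = Timestamp()
run_time.FromDatetime(datetime.datetime(2020, 11, 17))

request = bigquery_datatransfer_v1.StartManualTransferRunsRequest(
    parent=parent,
    requested_run_time=run_time,
)
response = client.start_manual_transfer_runs(request=request)
print(response.runs)
```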
I'm writing a DAG that makes use of RedshiftToS3Operator, but I'm implementing the operator's code myself instead of using the operator directly; the only modification is specifying the schema in PostgresHook.
Airflow is running on an EC2 instance and has the aws_default connection defined. This is properly configured in AWS with a proper IAM role, and yet I get the following error:
error: S3ServiceException:The AWS Access Key Id you provided does not exist in our records.,Status 403,Error InvalidAccessKeyId.
The region is specified in the connection's extras field, if anyone's wondering.
I've tried solutions mentioned here and here but I still keep facing this issue.
I do not want to expose the credentials in the UI or put them in a file on the EC2 instance.
Any help is much appreciated, thank you!
This happens not because of an incorrect AWS_CONN_ID for the operator or a misconfigured connection, but because of a bug in the RedshiftToS3Transfer operator: you need to supply token=credentials.token along with the other credentials in the UNLOAD query, as in the sketch below.
More here
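A rough sketch of what the credentials portion of the UNLOAD needs to look like when the worker runs under an instance role (the hook import paths match older Airflow 1.x; the schema, table, connection IDs, and S3 path are placeholders):

```python
from airflow.contrib.hooks.aws_hook import AwsHook
from airflow.hooks.postgres_hook import PostgresHook

# Temporary credentials from the instance-profile role include a session token;
# omitting token=... is what produces the InvalidAccessKeyId / 403 error.
credentials = AwsHook(aws_conn_id="aws_default").get_credentials()

unload_query = """
    UNLOAD ('SELECT * FROM my_schema.my_table')
    TO 's3://my-bucket/my-prefix/'
    WITH CREDENTIALS
    'aws_access_key_id={access_key};aws_secret_access_key={secret_key};token={token}'
    DELIMITER ',' ALLOWOVERWRITE PARALLEL OFF;
""".format(
    access_key=credentials.access_key,
    secret_key=credentials.secret_key,
    token=credentials.token,
)

PostgresHook(postgres_conn_id="redshift_default").run(unload_query)
```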
All my datasets, tables, and every other item inside BQ are in the EU. When I try to set up a view-to-table scheduled query that runs every 15 minutes, I get an error about my location, which seems incorrect, because both the source and the destination are in the EU...
Does anyone know why?
There is a known transient issue matching your situation; the GCP support team needs more time for troubleshooting. There may be a problem in the UI. I would ask you to try the following steps:
First, try performing the same operation in Chrome's incognito mode.
Another possible workaround is to follow this official guide using an approach other than the UI (the CLI or the client libraries, for instance); see the sketch below.
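For example, a non-UI route with the Python client where the job location is pinned explicitly (the project, dataset, and table names below are placeholders):

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

# Append the view's rows into the destination table, pinning the job to EU.
job_config = bigquery.QueryJobConfig(
    destination="my-project.my_dataset.my_table",
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)

job = client.query(
    "SELECT * FROM `my-project.my_dataset.my_view`",
    job_config=job_config,
    location="EU",  # must match where both datasets actually live
)
job.result()  # wait for the job to finish
```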
I hope it helps.
I have a job running on my Pentaho ETL server but I am unable to figure out which user (i.e. username) triggered the job. The default logging I can see does not give any details on the user who triggered it. There must be an easy way to find this that I am missing; any help would be appreciated.
Details:
I am running Pentaho EE 6.1
Thanks
Deepak
By default the username doesn't show up in the logs; you need to change the format in your log4j.xml for that.
Alternatively, enable DB auditing, and who ran what and when will be stored in the PRO_AUDIT table.
I was looking to install an Oracle 12c database on my Windows 8 laptop so that I could learn more SQL (after posting my last question).
I downloaded all the needed zips, but while running the installer I got this error:
[INS-30131] Initial setup required for the execution of installer validations failed.
Additional Information:
- Framework setup check failed on all the nodes
- Cause: Cause Of Problem Not Available
- Action: User Action Not Available
Summary of the failed nodes
hp
- Version of exectask could not be retrieved from any node
- Cause: Cause Of Problem Not Available
- Action: User Action Not Available
After looking into many posts on SO, I figured out that the installer needs the hidden administrative share (C$). I found steps for setting it up, but unfortunately they are not working for me.
I followed the path: Control Panel > Administrative Tools > Computer Management > Shared Folders.
Contrary to the steps mentioned across the internet, there is no option for me to create a new one there.
Apart from that, I have tried changing my username and also using the default Administrator account, but nothing seems to work.
I am pretty sure this is not new, so somebody out there must have a solution to this issue. Please advise...
This is just the description of the error; I had seen it, but was trying to figure out how to fix it.
Anyway, I solved it by renaming the volume group and updating fstab and grub.conf accordingly.