Access Hive metastore table data using Templeton/WebHCat REST APIs for last updated time - hive-metastore

All, I am working on a Hadoop project and I have a requirement to get the last Sqooped timestamp of a database using the Templeton/WebHCat REST APIs.
Please post if anyone knows the process.
Thanks in advance
regards,
Joshua
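
For what it's worth, WebHCat does expose a DDL resource that returns a table's metadata, and Hive tracks the last DDL/modification time in the transient_lastDdlTime table property. Below is a hedged Python sketch; the host, port, user, and database/table names are placeholders, the exact JSON field names vary by Hive version, and note that Sqoop itself does not record an import timestamp in the metastore, so this property is only a proxy for "last loaded":

```python
# Hedged sketch: ask WebHCat (Templeton) for a table's extended description,
# which includes Hive's last-DDL/update time. Inspect the JSON on your
# cluster, since field names differ across Hive versions.
import requests

WEBHCAT = "http://webhcat-host:50111/templeton/v1"  # 50111 is the default WebHCat port

def table_info(db, table, user="hadoop"):
    resp = requests.get(
        f"{WEBHCAT}/ddl/database/{db}/table/{table}",
        params={"user.name": user, "format": "extended"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

info = table_info("default", "my_imported_table")  # placeholder names
# Look for a last-update/last-DDL field (backed by the table's
# transient_lastDdlTime property, stored as epoch seconds).
print(info)
```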

Related

Power BI query is not visible in BigQuery query history

More of a curiosity question, really. I load data into a Power BI report from Google BigQuery (using the native Google BigQuery connector in Power BI). All works fine, but for some reason I don't see this query in BigQuery's query history.
Did anyone experience something similar and know why this happens, or how to change it (if at all possible)?
If I do exactly the same thing using the Simba ODBC connector, I see the query in BigQuery's query history as expected.
Never seen that before; I am always able to find the query history no matter what third-party connector I use. Could you confirm the GCP service account (or auth account) and the GCP project in which the BQ query jobs run for the native Google BigQuery connector in Power BI?
Please make sure you have access to the query history of that GCP account in that BQ job project.
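One way to verify this: list recent jobs in the project you expect the connector to use and check which identity they ran under. A minimal Python sketch with the google-cloud-bigquery client (the project id and the ambient credentials are assumptions):

```python
# List recent BigQuery jobs to see which user/service account ran them.
# Viewing other users' jobs via all_users=True requires suitable
# permissions on the project (e.g. a BigQuery admin role).
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")  # placeholder project id

for job in client.list_jobs(all_users=True, max_results=20):
    print(job.job_id, job.job_type, job.user_email, job.created)
```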

Impala, or Hive with Spark as the execution engine?

I want to design a web UI that fetches data from HDFS and generates some reports from that data; I have my own custom report format. I am writing REST APIs to fetch the data, but running Hive queries gives latency issues, so I want a different approach. I can think of two:
Using Impala to create tables, but I am not sure about REST support for Impala.
Using Hive but with Spark instead of MapReduce as the execution engine; spark-job-server provides REST support, and data can be fetched with Spark SQL.
Which of the two approaches will be suitable, or is there a better approach for this?
Can anyone please help, as I am very new to this.
I'd prefer Impala if latency is the main consideration: it's dedicated to SQL processing on HDFS and does it well. As for the REST API and the application logic you are building, this seems to be a good example.
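To sketch what that could look like: Impala has no REST interface of its own, but it speaks the HiveServer2 protocol, so a thin REST layer in front of it is straightforward. A hedged example with Flask and the impyla client (host, port, and table name are placeholders, and a real service must validate the table name rather than interpolate it):

```python
# Hedged sketch: expose Impala query results over a small REST endpoint.
from flask import Flask, jsonify
from impala.dbapi import connect  # from the impyla package

app = Flask(__name__)

@app.route("/reports/<table>")
def report(table):
    # 21050 is Impala's default HiveServer2-protocol port.
    conn = connect(host="impalad-host", port=21050)
    cur = conn.cursor()
    # Placeholder query -- validate `table` against a whitelist in real code.
    cur.execute(f"SELECT * FROM {table} LIMIT 100")
    cols = [d[0] for d in cur.description]
    rows = [dict(zip(cols, row)) for row in cur.fetchall()]
    conn.close()
    return jsonify(rows)

if __name__ == "__main__":
    app.run(port=8080)
```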

Loading data regularly from ServiceNow into Pentaho Kettle

I'm working on a BI project and I want to retrieve data from ServiceNow and load it into Pentaho Data Integration so I can record it in my data warehouse, and I want to do this regularly. In other words, I want to retrieve only the new records from ServiceNow, the ones that haven't been loaded into the data warehouse yet. Does anyone know how I can achieve this? Please help.
The question is too vague.
You need to set up an ETL job that incrementally loads data. That will require you to define a timestamp or incremental key to identify which records are more recent than the ones already loaded.
You will need to schedule that job, e.g., using crontab and calling kitchen from the command line.
Your question pretty much translates to "please develop my ETL project". Too wide in scope.
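For what it's worth, the incremental-key pattern described above looks roughly like this against ServiceNow's Table API; in Kettle you would implement the same idea with a REST client / Table input step plus a stored last-loaded timestamp. A hedged Python sketch (instance URL, table, credentials, and the timestamp are placeholders):

```python
# Hedged sketch: pull only records updated since the last load, using
# sys_updated_on as the incremental key.
import requests

INSTANCE = "https://myinstance.service-now.com"  # placeholder instance
AUTH = ("etl_user", "secret")                    # placeholder credentials

def fetch_new(table, last_loaded):
    """last_loaded: 'YYYY-MM-DD HH:MM:SS' from your warehouse's control table."""
    params = {
        "sysparm_query": f"sys_updated_on>{last_loaded}^ORDERBYsys_updated_on",
        "sysparm_limit": 1000,  # page through larger result sets in real code
    }
    resp = requests.get(f"{INSTANCE}/api/now/table/{table}",
                        auth=AUTH, params=params, timeout=60)
    resp.raise_for_status()
    return resp.json()["result"]

rows = fetch_new("incident", "2024-01-01 00:00:00")
```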

Oracle: How to lock rows using FOR UPDATE in Oracle?

Can anyone please guide me with samples of Oracle's FOR UPDATE clause for row-level locking? Please give me a sample SQL statement. Thanks in advance.
Thanks for your support, everyone. I had made a mistake: I was running the FOR UPDATE in the same session, which is why the row-level lock seemed to have no effect, since a session does not block itself. Using two different threads, I mean sessions, it works fine.
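For anyone hitting the same confusion, here is a minimal sketch of the two-session behavior, using the python-oracledb driver (the DSN, credentials, and the accounts table are made up for illustration):

```python
# Hedged sketch: SELECT ... FOR UPDATE locks the row for other sessions,
# but a session never blocks itself.
import oracledb

DSN = "localhost/XEPDB1"  # placeholder DSN
s1 = oracledb.connect(user="scott", password="tiger", dsn=DSN)
s2 = oracledb.connect(user="scott", password="tiger", dsn=DSN)

c1 = s1.cursor()
# Session 1 takes a row-level lock, held until commit or rollback.
c1.execute("SELECT balance FROM accounts WHERE id = :id FOR UPDATE", id=1)

c2 = s2.cursor()
try:
    # Session 2 would block on a plain FOR UPDATE; NOWAIT raises ORA-00054
    # immediately instead of waiting for the lock.
    c2.execute("SELECT balance FROM accounts WHERE id = :id FOR UPDATE NOWAIT",
               id=1)
except oracledb.DatabaseError as exc:
    print("Row is locked by another session:", exc)

s1.commit()  # releases the lock
```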

Does Core Data create a database for you?

Does Core Data create a database table for you? I am new to Core Data and iPhone programming, and all the tutorials I see bring in a prepopulated database. I am trying to create an app that saves data (i.e. dates and times), and I don't need to use a prepopulated database. I was wondering: when you check the box to use Core Data, does it create an empty database for you, or do you have to create a database table with all the empty fields you need and bring in that SQLite file? This probably sounds like a newb question, but I appreciate any help you can give me.
Yes, it creates a table for you. You don't need to mess with such things.
Here is the best starting tutorial: Core Data Programming Guide
Once you know this information, you can go on and learn more from other tutorials. You will look at them differently.
You specify a filename. If the file doesn't exist (typically the first time your app is run on a device), an empty database is created. If the file exists, it will be loaded. You wouldn't really want to try to prepopulate data in there using SQL: the database has to have a specific, unpublished format for Core Data, and if it doesn't match, there would be an error.