I am doing some DataFrame work for ETL; the DataFrame is read from Azure SQL Data Warehouse. Somehow the notebook hangs forever, but I don't know where it is stuck or why it hangs for so long.
Does anyone have ideas or experience with this?
There are various rare scenarios and corner cases that can cause a streaming or batch job to hang. It is also possible for a job to hang because the Databricks internal metastore has become corrupted.
Often, restarting the cluster or creating a new one resolves the problem. In other cases, run the script from the article referenced below to unhang the job and collect notebook information, which can be provided to Databricks Support.
For more details, refer to "How to Resolve Job Hangs and Collect Diagnostic Information".
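In the meantime, it may help to narrow down whether the hang is in the read from the warehouse or in a later transformation by forcing a small action right after the read. A minimal PySpark sketch, assuming the Azure Synapse (SQL DW) connector is in use; the JDBC URL, temp directory, and table name are placeholders:

# Minimal sketch: read from Azure SQL Data Warehouse / Synapse and force a small
# action immediately, so a hang can be attributed to the read rather than to later steps.
# All connection values and the table name below are placeholders.
jdbc_url = "jdbc:sqlserver://<server>.database.windows.net:1433;database=<db>"

df = (spark.read                       # `spark` is the session predefined in Databricks notebooks
      .format("com.databricks.spark.sqldw")
      .option("url", jdbc_url)
      .option("tempDir", "abfss://<container>@<storage>.dfs.core.windows.net/tmp")
      .option("forwardSparkAzureStorageCredentials", "true")
      .option("dbTable", "dbo.<source_table>")
      .load())

print(df.limit(10).count())            # small action: if this hangs, suspect the read/connector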
Hope this helps.
Related
The data size is in the terabytes.
I have multiple Databricks notebooks for incremental data load into Google BigQuery for each dimension table.
Now I have to perform this data load, i.e. run these notebooks, every two hours.
Which of the following is the better approach?
Create a master Databricks notebook and use dbutils to chain/parallelize the execution of the aforementioned Databricks notebooks (a rough sketch of this option is included below).
Use Google Cloud Composer (Apache Airflow's Databricks operator) to create a master DAG that orchestrates these notebooks remotely.
I want to know which is the better approach, given that I have use cases for both parallel and sequential execution of these notebooks.
I'd be extremely grateful if I could get a suggestion or opinion on this topic, thank you.
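For reference, a rough sketch of what option 1 would look like, fanning out the child notebooks with dbutils.notebook.run from a master notebook (notebook paths, timeout, and arguments are placeholders; for sequential execution the same calls would simply be made in a plain loop):

from concurrent.futures import ThreadPoolExecutor

# Hypothetical child notebooks, one per dimension table.
notebooks = ["/ETL/dim_customer", "/ETL/dim_product", "/ETL/dim_date"]

def run(path):
    # dbutils.notebook.run blocks until the child notebook finishes and returns its exit value
    return dbutils.notebook.run(path, 7200, {"load_type": "incremental"})

# Parallel fan-out across the child notebooks.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run, notebooks))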
Why can't you try Databricks Jobs? A job gives you a way to run a notebook either immediately or on a schedule.
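A multi-task job can also express both patterns: tasks without dependencies run in parallel, and depends_on chains them sequentially. A rough sketch of creating a job scheduled every two hours via the Jobs API 2.1; the workspace host, token, cluster ID, and notebook paths are all placeholders:

import requests

host = "https://<workspace>.azuredatabricks.net"   # placeholder workspace URL
headers = {"Authorization": "Bearer <personal-access-token>"}

job_spec = {
    "name": "bigquery-dim-load",
    "schedule": {
        "quartz_cron_expression": "0 0 0/2 * * ?",  # every two hours
        "timezone_id": "UTC",
    },
    "tasks": [
        {   # runs first
            "task_key": "dim_customer",
            "notebook_task": {"notebook_path": "/ETL/dim_customer"},
            "existing_cluster_id": "<cluster-id>",
        },
        {   # runs after dim_customer; omit depends_on to run tasks in parallel instead
            "task_key": "dim_product",
            "notebook_task": {"notebook_path": "/ETL/dim_product"},
            "existing_cluster_id": "<cluster-id>",
            "depends_on": [{"task_key": "dim_customer"}],
        },
    ],
}

resp = requests.post(f"{host}/api/2.1/jobs/create", headers=headers, json=job_spec)
resp.raise_for_status()
print("Created job", resp.json()["job_id"])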
I'm facing this issue intermittently now: a query (called from a stored procedure) goes into the CXSYNC_PORT wait type and remains there for a long time (sometimes 8 hours at a stretch). I have to kill the process and then rerun the procedure. The procedure is called every two hours from an ADF pipeline.
What's the reason for this behavior and how do I fix the issue?
I searched a lot, and there is no Microsoft documentation that discusses the CXSYNC_PORT wait type. Others have asked the same question, but still with no more details.
Most suggestions are to ask the same question in other forums, or to ask a professional engineer for help, who will deal with your problem separately and confidentially.
You can ask Azure support for detailed help: https://learn.microsoft.com/en-us/azure/azure-portal/supportability/how-to-create-azure-support-request
And here is the same question, where a Microsoft engineer gave more details about the issue:
As part of a fix, CXPACKET waits were further broken down into CXSYNC_CONSUMER and CXSYNC_PORT (with data transfer waits still reported as CXPACKET) so as to distinguish between different wait times for a correct diagnosis of the problem.
Basically, CXPACKET is divided into three: CXPACKET, CXSYNC_PORT, and CXSYNC_CONSUMER. CXPACKET is used for data transfer sync, while CXSYNC_* are used for other synchronizations. CXSYNC_PORT is used for synchronizing the opening/closing of the exchange port between the consuming thread and the producing thread. Long waits here may indicate server load and a lack of available threads. Plans containing a sort may contribute to this wait type because the complete sort may occur before the port is synchronized.
Please refer to this link, What is causing wait type CXSYNC_PORT and what to do about it?, for more useful information. But for now, there isn't an exact solution.
Use the query hint OPTION (MAXDOP 1).
This will run your long-running query on a single thread, and you won't get the CX-type waits. In my experience this can produce a massive 10-20x decrease in execution time and will free up CPU for other tasks, since there is no context switching or thread coordination activity.
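Note that the hint has to be attached to the statement itself, e.g. inside the stored procedure, rather than to the EXEC call. A small illustration of what that looks like, run here through pyodbc for testing; the connection string, tables, and columns are made up:

import pyodbc

# Placeholder connection details.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=<server>.database.windows.net;DATABASE=<db>;UID=<user>;PWD=<password>"
)

# Hypothetical query; OPTION (MAXDOP 1) on the statement forces a serial plan,
# so the CXPACKET/CXSYNC_* parallelism waits cannot occur for this query.
sql = """
SELECT c.CustomerId, SUM(o.Amount) AS TotalAmount
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerId = o.CustomerId
GROUP BY c.CustomerId
OPTION (MAXDOP 1);
"""

cursor = conn.cursor()
rows = cursor.execute(sql).fetchall()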
I'm having some trouble with a Dataflow pipeline that reads from PubSub and writes to BigQuery.
I had to drain it to perform some more complex updates. When I reran the pipeline, it started reading from PubSub at a normal rate, but after a few minutes it stopped, and now it is not reading messages from PubSub anymore! The data watermark is almost a week behind and not progressing. There are more than 300k messages in the subscription to be read, according to Stackdriver.
It was running normally before the update, and now even if I downgrade my pipeline to the previous version (the one running before the update), I still can't get it to work.
I tried several configurations (a sketch of the corresponding pipeline options follows this list):
1) We use Dataflow autoscaling, and I tried starting the pipeline with more powerful workers (n1-standard-64) and limiting it to ten workers, but it neither improved performance nor autoscaled (it kept only the initial worker).
2) I tried providing more disk through diskSizeGb (2048) and diskType (pd-ssd), but still no improvement.
3) I checked PubSub quotas and pull/push rates, but they are absolutely normal.
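For reference, a sketch of roughly equivalent pipeline options, written against the Beam Python SDK for illustration (the actual pipeline may be on the Java SDK, where the corresponding flags are --workerMachineType, --maxNumWorkers, --diskSizeGb, and --workerDiskType); project, bucket, subscription, and table names are placeholders:

import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    runner="DataflowRunner",
    project="<project-id>",
    region="us-central1",
    temp_location="gs://<bucket>/tmp",
    streaming=True,
    machine_type="n1-standard-64",   # more powerful workers
    max_num_workers=10,              # cap autoscaling at ten workers
    disk_size_gb=2048,               # larger worker disks (pd-ssd is set via the disk type option)
)

p = beam.Pipeline(options=options)
(p
 | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
       subscription="projects/<project-id>/subscriptions/<subscription>")
 | "Parse" >> beam.Map(json.loads)   # assumes JSON-encoded messages
 | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
       "<project-id>:<dataset>.<table>",
       create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER))
p.run()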
Pipeline shows no errors or warnings, and just won't progress.
I checked the instances' resources, and CPU, RAM, and disk read/write rates are all fine compared to other pipelines. The only thing that is a little higher is the network rate: about 400k bytes/sec (2,000 packets/sec) outgoing and 300k bytes/sec (1,800 packets/sec) incoming.
What would you suggest I do?
The Dataflow SDK 2.x for Java and the Dataflow SDK for Python are based on Apache Beam. Make sure you are following the documentation as a reference when you update. Quotas can be an issue for a slow-running pipeline or a lack of output, but you mentioned those are fine.
It seems we need to look at the job itself. I recommend opening an issue on the public issue tracker (PIT) here, and we'll take a look. Make sure to provide your project ID, job ID, and all the necessary details.
We are running some high-volume tests by pushing metrics to OpenTSDB (2.3.0) on Bigtable, and a curious problem surfaces from time to time: for some metrics, an hour of data stops showing up in the web UI when we run a query. The span of "missing" data is very clear-cut and falls on hour boundaries (UTC). After a while, rerunning the same query, the data shows up. There does not seem to be any pattern we can deduce here, other than the hour-long span. Any pointers on what to look for and how to debug this?
How long do you have to wait before the data shows up? Is it always the most recent hour that is missing?
Have you tried using the OpenTSDB CLI when this is happening and issuing a scan to see whether the data is available that way?
http://opentsdb.net/docs/build/html/user_guide/cli/scan.html
You could also check via an HBase shell scan to see if you can get the raw data that way (here's information on how it's stored in HBase):
http://opentsdb.net/docs/build/html/user_guide/backends/hbase.html
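As an additional cross-check that bypasses the web UI entirely, you could also query the TSD's HTTP API directly for the missing hour. A rough sketch using Python requests; the TSD host, metric name, tags, and time range are placeholders:

import requests

# Query the "missing" hour directly via OpenTSDB's HTTP /api/query endpoint.
# Host, metric name, tags, and timestamps below are placeholders.
query = {
    "start": "2017/01/01-10:00:00",   # start of the missing hour (UTC)
    "end": "2017/01/01-11:00:00",
    "queries": [
        {"aggregator": "sum", "metric": "<my.metric.name>", "tags": {"host": "*"}}
    ],
}

resp = requests.post("http://<tsd-host>:4242/api/query", json=query, timeout=30)
resp.raise_for_status()
print(len(resp.json()), "series returned")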
If you can verify the data is there then it seems likely to be a web UI problem. If not, the next likely culprit is something getting backed up in the write pipeline.
I am not aware of any particular issue in the Google Cloud Bigtable backend layer that would cause this behavior, but I believe some folks have encountered issues with OpenTSDB compactions during periods of high load that result in degraded performance.
It's worth checking in the Google Cloud Console to see if there are any outliers in the latency, CPU, or throughput graphs that correlate with the times during which you experience the issue.
For the BigQuery team: queries that usually work are sometimes failing.
Could you please look into what the issue could be? There is just this:
Error: Unexpected. Please try again.
Job ID: aerobic-forge-504:job_DTFuQEeVGwZbt-PRFbMVE6TCz0U
Sorry for the slow response.
BigQuery replicates data to multiple datacenters and may run queries from any of them. Usually this is transparent: if your data hasn't replicated everywhere, we will try to find a datacenter that has all of the data necessary to run the query and execute it from there.
The error you hit was due to BigQuery not being able to find a single datacenter that had all of the data for one of your tables. We try very hard to make sure this doesn't happen; in principle, it should be very rare (we have a solution designed to make sure it never happens, but haven't finished the implementation yet). We saw an uptick in this issue this morning, have a bug filed, and are currently investigating.
Was this a transient error? If you retry the operation, does it work now? Are you still getting errors on other queries?
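If it does turn out to be transient, a simple retry with backoff on the client side usually papers over it. A minimal sketch using the current google-cloud-bigquery Python client; the project, dataset, table, and query are placeholders:

import time
from google.cloud import bigquery

client = bigquery.Client(project="<project-id>")        # placeholder project
sql = "SELECT COUNT(*) FROM `<project-id>.<dataset>.<table>`"

for attempt in range(5):
    try:
        rows = list(client.query(sql).result())          # run the query and wait for results
        break
    except Exception as exc:                             # ideally catch google.api_core.exceptions.GoogleAPIError
        print(f"Attempt {attempt + 1} failed: {exc}")
        time.sleep(2 ** attempt)                         # exponential backoff before retrying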