How to delete/remove unfetched URLs from the Nutch database (CrawlDB)

I want to crawl a new URL list using Nutch, but there are still some unfetched URLs in the database:
bin/nutch readdb -stats
WebTable statistics start
Statistics for WebTable:
retry 0: 3403
retry 1: 25
retry 2: 2
status 4 (status_redir_temp): 5
status 5 (status_redir_perm): 26
retry 3: 1
status 2 (status_fetched): 704
jobs: {db_stats-job_local_0001={jobName=db_stats, jobID=job_local_0001, counters={Map-Reduce Framework={MAP_OUTPUT_MATERIALIZED_BYTES=227, REDUCE_INPUT_RECORDS=13, SPILLED_RECORDS=26, VIRTUAL_MEMORY_BYTES=0, MAP_INPUT_RECORDS=3431, SPLIT_RAW_BYTES=1059, MAP_OUTPUT_BYTES=181843, REDUCE_SHUFFLE_BYTES=0, PHYSICAL_MEMORY_BYTES=0, REDUCE_INPUT_GROUPS=13, COMBINE_OUTPUT_RECORDS=13, REDUCE_OUTPUT_RECORDS=13, MAP_OUTPUT_RECORDS=13724, COMBINE_INPUT_RECORDS=13724, CPU_MILLISECONDS=0, COMMITTED_HEAP_BYTES=718675968}, File Input Format Counters ={BYTES_READ=0}, File Output Format Counters ={BYTES_WRITTEN=397}, FileSystemCounters={FILE_BYTES_WRITTEN=1034761, FILE_BYTES_READ=912539}}}}
max score: 1.0
status 1 (status_unfetched): 2679
min score: 0.0
status 3 (status_gone): 17
TOTAL urls: 3431
avg score: 0.0043631596
WebTable statistics: done
So, how can I remove these unfetched URLs from the Nutch database? Thanks.

You could use CrawlDbMerger, but you would only be able to filter by URL and not by status. The Generator job already supports JEXL expressions, but as far as I remember that feature isn't built into the CrawlDb tools yet.
One way would be to list all the URLs with status_unfetched (using readdb), write a regex to block them (using the normal URL filters), and then run CrawlDbMerger with that filter enabled; the URLs should disappear from the CrawlDb.
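A rough sketch of that workflow (assuming Nutch 1.x-style commands; the dump/grep pipeline and all paths are illustrative, and option names vary between Nutch versions):
bin/nutch readdb crawl/crawldb -dump crawldb_dump                  # dump the CrawlDb as text
grep -B1 "db_unfetched" crawldb_dump/part-* | grep "^http" > unfetched_urls.txt
# add blocking patterns for those URLs to conf/regex-urlfilter.txt, for example:
#   -^http://example\.com/some/unwanted/path
bin/nutch mergedb crawl/crawldb_filtered crawl/crawldb -filter     # re-apply the URL filters while merging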

Related

How to get the average response time for a request using the JMeter testing tool

I am trying JMeter for load and performance testing, so I created a thread group; below is the output of the aggregate report.
The first column, Avg request t/s, I calculated using the formula
((Average/Total Requests)/1000)
but it does not seem right: I am logging request times in my code, and almost every request takes at least 2-4 seconds.
I tried Median/1000, but again I am in doubt.
What is the correct way to get the average time for a request?
Avg request t/s | Total Requests | Average | Median | Min | Max | Error % | Throughput | Received KB | Sent KB
0.07454 | 100 | 7454 | 6663 | 2464 | 19313 | 0 | 3/sec | 2.062251152 | 1.074506499
1.11322 | 100 | 111322 | 107240 | 4400 | 222042 | 0 | 26.3/min | 0.1408915377 | 0.1271878015
1.19035 | 100 | 119035 | 117718 |  |  | 0.03 | 26.3/min | 0.1309013211 | 0.1279624502
1.21287 | 100 | 121287 | 119198 |  |  | 0 | 0.4136384882 | 0.135725129 | 0.1211831508
1.11943 | 100 | 111943 | 111582 | 5257 | 220004 | 0 | 0.4359482965 | 0.1507086884 | 0.1264420352
1.14289 | 100 | 114289 | 114215 | 4543 | 223947 | 0 | 0.4369846313 | 0.1497867242 | 0.1288763268
0.23614 | 150 | 35421 | 26731 | 4759 | 114162 | 0 | 0.9494271789 | 0.3600282257 | 0.1842358496
Don't you already have the Throughput (requests per time unit) column? What else do you need to "calculate"?
As per Aggregate Report documentation
Throughput - the Throughput is measured in requests per second/minute/hour. The time unit is chosen so that the displayed rate is at least 1.0. When the throughput is saved to a CSV file, it is expressed in requests/second, i.e. 30.0 requests/minute is saved as 0.5.
So if you click the Save Table Data button, you will get the average transactions per second in the CSV file.
It is also possible to generate the CSV file with the calculated aggregate values using the JMeter Plugins Command Line Tool:
JMeterPluginsCMD.bat --generate-csv aggregate-report.csv --input-jtl /path/to/your/results.jtl --plugin-type AggregateReport
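If you want to cross-check those numbers outside of JMeter, here is a rough Python sketch (not part of the original answer) that recomputes the average response time and the requests-per-second throughput from a JTL results file saved as CSV; the column names elapsed and timeStamp are JMeter's defaults, and the file path is assumed:
import csv

elapsed, stamps = [], []
with open("results.jtl", newline="") as f:       # assumed path to a CSV-format JTL file
    for row in csv.DictReader(f):
        elapsed.append(int(row["elapsed"]))      # response time in milliseconds
        stamps.append(int(row["timeStamp"]))     # request start time, epoch milliseconds

avg_response_sec = sum(elapsed) / len(elapsed) / 1000.0       # the "Average" column, converted to seconds
duration_sec = max((max(stamps) - min(stamps)) / 1000.0, 1.0)
throughput_rps = len(elapsed) / duration_sec                  # the "Throughput" column, in requests/second
print(f"avg response: {avg_response_sec:.2f} s, throughput: {throughput_rps:.2f}/s")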

How to set up job dependencies in Google BigQuery?

I have a few jobs: say one loads a text file from a Google Cloud Storage bucket into a BigQuery table, and another is a scheduled query that copies data from one table to another with some transformation. I want the second job to depend on the success of the first one. How do we achieve this in BigQuery, if it is possible to do so at all?
Many thanks.
Best regards,
Right now a developer needs to put together the chain of operations.
It can be done either using Cloud Functions (supports Node.js, Go, Python) or via a Cloud Run container (supports the gcloud API, any programming language).
Basically you need to:
issue a job
get the job ID
poll for the job ID
when the job is finished, trigger the other steps (see the sketch below)
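For illustration only (not part of the original answer), here is how those four steps look with the google-cloud-bigquery Python client; the project, dataset, table, bucket and SQL names are placeholders:
from google.cloud import bigquery

client = bigquery.Client()

# 1. issue a job: load a file from GCS into a staging table
load_job = client.load_table_from_uri(
    "gs://my_bucket/my_file.csv",
    "my_project.my_dataset.staging_table",
    job_config=bigquery.LoadJobConfig(source_format=bigquery.SourceFormat.CSV),
)

# 2. get the job id
print("load job id:", load_job.job_id)

# 3. poll for the job; result() blocks until the job finishes and raises on failure
load_job.result()

# 4. job is finished -> trigger the next step (the transforming query)
client.query(
    "INSERT INTO `my_project.my_dataset.final_table` "
    "SELECT * FROM `my_project.my_dataset.staging_table`"
).result()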
If using Cloud Functions:
place the file into a dedicated GCS bucket
set up a GCF that monitors that bucket; when a new file is uploaded, it executes a function that imports the file into BigQuery and waits until the operation ends
at the end of the GCF you can trigger other functions for the next step
Another use case with Cloud Functions:
A: a trigger starts the GCF
B: the function executes the query (copy data to another table)
C: it gets a job ID and fires another function with a bit of delay
I: that function gets the job ID
J: polls to check whether the job is ready
K: if not ready, fires itself again with a bit of delay
L: if ready, triggers the next step - could be a dedicated function or a parameterized function
It is possible to address your scenario with either Cloud Functions (CF) or with a scheduler (Airflow). The first approach is event-driven, processing your data immediately; with a scheduler, expect some delay in data availability.
As has been stated, once you submit a BigQuery job you get back a job ID, which needs to be checked until the job completes. Then, based on the status, you can handle the success or failure post-actions respectively.
If you were to develop a CF, note that there are certain limitations, like execution time (max 9 min), which you would have to address in case the BigQuery job takes more than 9 min to complete. Another challenge with CF is idempotency: making sure that if the same data-file event arrives more than once, the processing does not result in duplicate data.
Alternatively, you can consider using some event-driven serverless open source projects like BqTail - Google Cloud Storage BigQuery Loader with post-load transformation.
Here is an example of a BqTail rule.
rule.yaml
When:
  Prefix: "/mypath/mysubpath"
  Suffix: ".json"
Async: true
Batch:
  Window:
    DurationInSec: 85
Dest:
  Table: bqtail.transactions
  Transient:
    Dataset: temp
    Alias: t
  Transform:
    charge: (CASE WHEN type_id = 1 THEN t.payment + f.value WHEN type_id = 2 THEN t.payment * (1 + f.value) END)
  SideInputs:
    - Table: bqtail.fees
      Alias: f
      'On': t.fee_id = f.id
OnSuccess:
  - Action: query
    Request:
      SQL: SELECT
        DATE(timestamp) AS date,
        sku_id,
        supply_entity_id,
        MAX($EventID) AS batch_id,
        SUM( payment) payment,
        SUM((CASE WHEN type_id = 1 THEN t.payment + f.value WHEN type_id = 2 THEN t.payment * (1 + f.value) END)) charge,
        SUM(COALESCE(qty, 1.0)) AS qty
        FROM $TempTable t
        LEFT JOIN bqtail.fees f ON f.id = t.fee_id
        GROUP BY 1, 2, 3
      Dest: bqtail.supply_performance
      Append: true
    OnFailure:
      - Action: notify
        Request:
          Channels:
            - "#e2e"
          Title: Failed to aggregate data to supply_performance
          Message: "$Error"
    OnSuccess:
      - Action: query
        Request:
          SQL: SELECT CURRENT_TIMESTAMP() AS timestamp, $EventID AS job_id
          Dest: bqtail.supply_performance_batches
          Append: true
      - Action: delete
You want to use an orchestration tool, especially if you want to set up these tasks as recurring jobs.
We use Google Cloud Composer, a managed service based on Airflow, for workflow orchestration, and it works great. It comes with automatic retries, monitoring, alerting, and much more.
You might want to give it a try.
Basically you can use Cloud Logging to observe almost every kind of operation in GCP.
BigQuery is no exception. When a query job completes, you can find the corresponding log in the Log Viewer.
The next question is how to pin down the exact query you want; one way to achieve this is to use labeled queries (i.e., attach labels to your query) [1].
For example, you can use the bq command below to issue a query with a foo:bar label:
bq query \
--nouse_legacy_sql \
--label foo:bar \
'SELECT COUNT(*) FROM `bigquery-public-data`.samples.shakespeare'
Then, when you go to the Logs Viewer and apply the log filter below, you will find exactly the log generated by the above query.
resource.type="bigquery_resource"
protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration.labels.foo="bar"
The next question is how to emit an event based on this log for the next workload. That's where Cloud Pub/Sub comes into play.
Two ways to publish an event based on a log pattern are:
Log Routers: set a Pub/Sub topic as the destination [2]
Log-based Metrics: create an alert policy whose notification channel is Pub/Sub [3]
So, the next workload can subscribe to the Pub/Sub topic, and be triggered when the previous query has completed.
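As a sketch of the Log Router option (the sink, project and topic names are placeholders):
gcloud logging sinks create bq-label-sink \
  pubsub.googleapis.com/projects/my-project/topics/bq-job-done \
  --log-filter='resource.type="bigquery_resource" AND protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration.labels.foo="bar"'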
Hope this helps ~
[1] https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#jobconfiguration
[2] https://cloud.google.com/logging/docs/routing/overview
[3] https://cloud.google.com/logging/docs/logs-based-metrics

Elastalert rule for CPU usage in percentage

I am facing an issue with an ElastAlert rule for CPU usage (not load average). I am not getting any hits or matches. Below is my .yaml file for the CPU rule:
name: CPU usage
type: metric_aggregation
index: metricbeat-*
buffer_time:
  minutes: 10
metric_agg_key: system.cpu.total.pct
metric_agg_type: avg
query_key: beat.hostname
doc_type: doc
bucket_interval:
  minutes: 5
sync_bucket_interval: true
max_threshold: 60.0
filter:
- term:
    metricset.name: cpu
alert:
- "email"
email:
- "xyz#xy.com"
Can you please help me with what changes I need to make in my rule?
Any assistance will be appreciated.
Thanks.
Metricbeat reports CPU values in the range of 0 to 1. So a threshold of 60 will never be matched.
Try it with max_threshold: 0.6 and it probably will work.
Try reducing buffer_time and bucket_interval for testing
The best way to debug an ElastAlert issue is to use the command line option --es_debug_trace, like this: --es_debug_trace /tmp/output.txt. It shows the exact curl API call to Elasticsearch that ElastAlert makes in the background. The query can then be copied into Kibana's Dev Tools for easy analysis and fiddling.
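For example, something along these lines (the config and rule file names are illustrative):
elastalert --config config.yaml --rule cpu_usage.yaml --verbose --es_debug_trace /tmp/output.txt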
Most likely, the doc_type: doc setting caused the ES endpoint to look like this: metricbeat-*/doc/_search
You might not have that doc document type, hence no match. Please remove doc_type and try again.
Also please note that the pct value is less than 1, hence for your case: max_threshold: 0.6
For me, the following works; for your reference:
name: CPU usage
type: metric_aggregation
use_strftime_index: true
index: metricbeat-system.cpu-%Y.%m.%d
buffer_time:
  hour: 1
metric_agg_key: system.cpu.total.pct
metric_agg_type: avg
query_key: beat.hostname
min_doc_count: 1
bucket_interval:
  minutes: 5
max_threshold: 0.6
filter:
- term:
    metricset.name: cpu
realert:
  hours: 2
...
sample match output:
{
  '#timestamp': '2021-08-19T15:06:22Z',
  'beat.hostname': 'MY_BUSY_SERVER',
  'metric_system.cpu.total.pct_avg': 0.6155,
  'num_hits': 50,
  'num_matches': 10
}

Spark: Data processing for a large number of files fails with SocketException: Read timed out

I am running Spark in standalone mode on 2 machines which have these configs:
500 GB storage, 4 cores, 7.5 GB RAM
250 GB storage, 8 cores, 15 GB RAM
I have created a master and a slave on the 8-core machine, giving 7 cores to the worker. I have created another slave on the 4-core machine with 3 worker cores. The UI shows 13.7 GB and 6.5 GB usable RAM for the 8-core and 4-core machines respectively.
Now on this I have to process an aggregate of user ratings over a period of 15 days. I am trying to do this using PySpark.
The data is stored in hour-wise files in day-wise directories in an S3 bucket; every file is around 100 MB, e.g.
s3://some_bucket/2015-04/2015-04-09/data_files_hour1
I am reading the files like this
a = sc.textFile(files, 15).coalesce(7*sc.defaultParallelism) #to restrict partitions
where files is a string of this form 's3://some_bucket/2015-04/2015-04-09/*,s3://some_bucket/2015-04/2015-04-09/*'
Then I do a series of maps and filters and persist the result
a.persist(StorageLevel.MEMORY_ONLY_SER)
Then I need to do a reduceByKey to get an aggregate score over the span of days.
b = a.reduceByKey(lambda x, y: x+y).map(aggregate)
b.persist(StorageLevel.MEMORY_ONLY_SER)
Then I need to make a redis call for the actual terms for the items the user has rated, so I call mapPartitions like this
final_scores = b.mapPartitions(get_tags)
The get_tags function creates a Redis connection on each invocation, calls Redis, and yields (user, item, rating) tuples.
(The Redis hash is stored on the 4-core machine.)
I have tweaked the settings for SparkConf to be at
conf = (SparkConf().setAppName(APP_NAME).setMaster(master)
.set("spark.executor.memory", "5g")
.set("spark.akka.timeout", "10000")
.set("spark.akka.frameSize", "1000")
.set("spark.task.cpus", "5")
.set("spark.cores.max", "10")
.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
.set("spark.kryoserializer.buffer.max.mb", "10")
.set("spark.shuffle.consolidateFiles", "True")
.set("spark.files.fetchTimeout", "500")
.set("spark.task.maxFailures", "5"))
I run the job with a driver memory of 2g in client mode, since cluster mode doesn't seem to be supported here.
The above process takes a long time for 2 days' worth of data (around 2.5 hours) and completely gives up on 14 days' worth.
What needs to improve here?
Is this infrastructure insufficient in terms of RAM and cores? (This is offline and can take hours, but it has to finish in 5 hours or so.)
Should I increase/decrease the number of partitions?
Redis could be slowing the system down, but the number of keys is just too huge to make a one-time call.
I am not sure where the task is failing, in reading the files or in reducing.
Should I drop Python in favour of Spark's Scala APIs, and would that help with efficiency as well?
This is the exception trace
Lost task 4.1 in stage 0.0 (TID 11, <node>): java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:152)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at sun.security.ssl.InputRecord.readFully(InputRecord.java:442)
at sun.security.ssl.InputRecord.readV3Record(InputRecord.java:554)
at sun.security.ssl.InputRecord.read(InputRecord.java:509)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:934)
at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:891)
at sun.security.ssl.AppInputStream.read(AppInputStream.java:102)
at org.apache.http.impl.io.AbstractSessionInputBuffer.read(AbstractSessionInputBuffer.java:198)
at org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:178)
at org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:200)
at org.apache.http.impl.io.ContentLengthInputStream.close(ContentLengthInputStream.java:103)
at org.apache.http.conn.BasicManagedEntity.streamClosed(BasicManagedEntity.java:164)
at org.apache.http.conn.EofSensorInputStream.checkClose(EofSensorInputStream.java:227)
at org.apache.http.conn.EofSensorInputStream.close(EofSensorInputStream.java:174)
at org.apache.http.util.EntityUtils.consume(EntityUtils.java:88)
at org.jets3t.service.impl.rest.httpclient.HttpMethodReleaseInputStream.releaseConnection(HttpMethodReleaseInputStream.java:102)
at org.jets3t.service.impl.rest.httpclient.HttpMethodReleaseInputStream.close(HttpMethodReleaseInputStream.java:194)
at org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsInputStream.seek(NativeS3FileSystem.java:152)
at org.apache.hadoop.fs.BufferedFSInputStream.seek(BufferedFSInputStream.java:89)
at org.apache.hadoop.fs.FSDataInputStream.seek(FSDataInputStream.java:63)
at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:126)
at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:236)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:212)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.rdd.CoalescedRDD$$anonfun$compute$1.apply(CoalescedRDD.scala:93)
at org.apache.spark.rdd.CoalescedRDD$$anonfun$compute$1.apply(CoalescedRDD.scala:92)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:405)
at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:243)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1617)
at org.apache.spark.api.python.PythonRDD$WriterThread.run(PythonRDD.scala:205)
I could really use some help, thanks in advance
Here is what my main code looks like
from pyspark import StorageLevel

def main(sc):
    f = get_files()
    a = (sc.textFile(f, 15)
         .coalesce(7 * sc.defaultParallelism)                        # restrict the number of partitions
         .map(lambda line: line.split(","))
         .filter(lambda line: len(line) > 0)
         .map(lambda line: (line[18], line[2], line[13], line[15]))
         .map(scoring)
         .map(lambda line: ((line[0], line[1]), line[2]))
         .persist(StorageLevel.MEMORY_ONLY_SER))
    b = a.reduceByKey(lambda x, y: x + y).map(aggregate)
    b.persist(StorageLevel.MEMORY_ONLY_SER)
    c = b.mapPartitions(get_tags)                                    # look up tags from Redis per partition
    c.saveAsTextFile("f")
    a.unpersist()
    b.unpersist()
The get_tags function is
def get_tags(partition):
    rh = redis.Redis(host=settings['REDIS_HOST'], port=settings['REDIS_PORT'], db=0)
    for element in partition:
        user = element[0]
        song = element[1]
        rating = element[2]
        tags = rh.hget(settings['REDIS_HASH'], song)
        if tags:
            tags = json.loads(tags)
        else:
            tags = scrape(song, rh)
        if tags:
            for tag in tags:
                yield (user, tag, rating)
The get_files function is:
def get_files():
    paths = get_path_from_dates(DAYS)
    base_path = 's3n://acc_key:sec_key#bucket/'
    files = list()
    for path in paths:
        fle = base_path + path + '/file_format.*'
        files.append(fle)
    return ','.join(files)
The get_path_from_dates(DAYS) is
def get_path_from_dates(last):
    days = list()
    t = 0
    while t <= last:
        d = today - timedelta(days=t)
        path = d.strftime('%Y-%m') + '/' + d.strftime('%Y-%m-%d')
        days.append(path)
        t += 1
    return days
As a small optimization, I have created two separate tasks: one to read from S3 and compute the additive sum, and a second to read the transformations from Redis. The first task has a high number of partitions since there are around 2,300 files to read. The second one has a much smaller number of partitions to avoid Redis connection latency, and there is only one file to read, which is on the EC2 cluster itself. This is only partial; I am still looking for suggestions to improve ...
I was in a similar use case: doing a coalesce on an RDD with 300,000+ partitions. The difference is that I was using s3a (SocketTimeoutException from S3AFileSystem.waitAysncCopy). The issue was finally resolved by setting a larger fs.s3a.connection.timeout (in Hadoop's core-site.xml). Hopefully you can get a clue from this.
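If you switch to the s3a:// filesystem, here is a minimal sketch of raising those timeouts from PySpark instead of editing core-site.xml (the property values are illustrative, and sc is the already-created SparkContext):
hadoop_conf = sc._jsc.hadoopConfiguration()                # underlying Hadoop Configuration object
hadoop_conf.set("fs.s3a.connection.timeout", "200000")     # socket/connection timeout, in ms
hadoop_conf.set("fs.s3a.attempts.maximum", "20")           # number of retry attempts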

Summarize multiple values sent to Graphite at the same time

I'm trying to display the sum of several values sent to Graphite (carbon-cache) for the same timestamp.
The sent values look like:
test.nb 10 1421751600
test.nb 11 1421751600
test.nb 12 1421751600
test.nb 13 1421751600
and I would like Graphite to display the value "46" for timestamp 1421751600.
Only the last value, "13", is displayed in Graphite.
Here are configuration files :
storage-aggregation.conf
[test_sum]
pattern = ^test\.*
xFilesFactor = 0.1
aggregationMethod = sum
storage-schemas.conf
[TEST]
pattern = ^test\.
retentions = 10s:30d
Is there a way to do this with Graphite/Carbon ?
Thx.
The storage-aggregation.conf file defines how to aggregate data into lower-precision retentions, and since you only have one retention precision defined (10s for 30 days), it is not what you need here.
In order to do this with the Graphite daemons, you will have to use carbon-aggregator.py, which runs in front of carbon-cache.py to buffer metrics over time. Check the [aggregator] section in the config file. By default, carbon-aggregator listens on port 2023, so you will have to send data points to this port and not to the carbon-cache port (2004 by default).
Also, you will have to specify an aggregation rule in aggregation-rules.conf that adds the metrics together as they come in. You can find a detailed explanation here.
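For example, a single rule along these lines should cover the metric above (the output name test.nb_sum and the 10-second window are assumptions; the general format is output_template (frequency) = method input_pattern):
test.nb_sum (10) = sum test.nb
With this rule in place, the four data points sent to carbon-aggregator (port 2023) within the same 10-second window would be emitted as one test.nb_sum value of 46.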