AttributeError: 'rational_resampler_ccc' object has no attribute 'connect' (GNU Radio)

Can anyone help me solve this GNU Radio problem? The traceback is:
Traceback (most recent call last):
File "/home/pedro/DOPLOM/signal.py", line 166, in <module>
main()
File "/home/pedro/DOPLOM/signal.py", line 142, in main
tb = top_block_cls()
File "/home/pedro/DOPLOM/signal.py", line 81, in __init__
self.rational_resampler_xxx_1 = filter.rational_resampler_ccc(
File "/usr/local/lib/python3/dist-packages/gnuradio/filter/rational_resampler.py", line 150, in __init__
_rational_resampler_base.__init__(self, filter.rational_resampler_base_ccc,
File "/usr/local/lib/python3/dist-packages/gnuradio/filter/rational_resampler.py", line 121, in __init__
self.connect(self, self.resampler, self)
AttributeError: 'rational_resampler_ccc' object has no attribute 'connect'
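For context, rational_resampler_ccc is a Python hier_block2 wrapper whose __init__ wires itself up with self.connect, so this AttributeError means the gr.hier_block2 base initializer never ran. That usually points at a broken or mismatched GNU Radio installation (for example, a flowgraph generated by one version running against the Python modules of another) rather than at the flowgraph itself; naming the script signal.py also shadows Python's standard signal module and is worth fixing regardless. A minimal smoke test, assuming GNU Radio 3.8+ with the stock analog, blocks, and filter modules, can confirm whether the installation is at fault:

from gnuradio import gr, analog, blocks, filter

class ResamplerSmokeTest(gr.top_block):
    def __init__(self):
        gr.top_block.__init__(self, "Resampler smoke test")
        src = analog.sig_source_c(32000, analog.GR_COS_WAVE, 1000, 1.0)
        # 3/2 resampler; the taps are designed automatically when omitted
        resamp = filter.rational_resampler_ccc(interpolation=3, decimation=2)
        sink = blocks.null_sink(gr.sizeof_gr_complex)
        self.connect(src, resamp, sink)

if __name__ == '__main__':
    ResamplerSmokeTest()  # a healthy install constructs this without error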

Related

Apache Beam pipeline fails when writing TF Records - AttributeError: 'str' object has no attribute 'iteritems'

The issue started appearing over the weekend; for some reason, it appears to be a Dataflow issue.
Previously, I was able to execute the script and write TF records just fine. However, now I am unable to initialize the computation graph to process the data.
The traceback is:
Traceback (most recent call last):
File "my_script.py", line 1492, in <module>
MyBeamClass()
File "my_script.py", line 402, in __init__
self.run()
File "my_script.py", line 514, in run
transform_fn_io.WriteTransformFn(path=self.JOB_DIR + '/transform/'))
File "/anaconda3/envs/ml27/lib/python2.7/site-packages/apache_beam/pipeline.py", line 426, in __exit__
self.run().wait_until_finish()
File "/anaconda3/envs/ml27/lib/python2.7/site-packages/apache_beam/runners/dataflow/dataflow_runner.py", line 1238, in wait_until_finish
(self.state, getattr(self._runner, 'last_error_msg', None)), self)
apache_beam.runners.dataflow.dataflow_runner.DataflowRuntimeException: Dataflow pipeline failed. State: FAILED, Error:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line 649, in do_work
work_executor.execute()
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/executor.py", line 176, in execute
op.start()
File "apache_beam/runners/worker/operations.py", line 531, in apache_beam.runners.worker.operations.DoOperation.start
def start(self):
File "apache_beam/runners/worker/operations.py", line 532, in apache_beam.runners.worker.operations.DoOperation.start
with self.scoped_start_state:
File "apache_beam/runners/worker/operations.py", line 533, in apache_beam.runners.worker.operations.DoOperation.start
super(DoOperation, self).start()
File "apache_beam/runners/worker/operations.py", line 202, in apache_beam.runners.worker.operations.Operation.start
def start(self):
File "apache_beam/runners/worker/operations.py", line 206, in apache_beam.runners.worker.operations.Operation.start
self.setup()
File "apache_beam/runners/worker/operations.py", line 480, in apache_beam.runners.worker.operations.DoOperation.setup
with self.scoped_start_state:
File "apache_beam/runners/worker/operations.py", line 485, in apache_beam.runners.worker.operations.DoOperation.setup
pickler.loads(self.spec.serialized_fn))
File "/usr/local/lib/python2.7/dist-packages/apache_beam/internal/pickler.py", line 247, in loads
return dill.loads(s)
File "/usr/local/lib/python2.7/dist-packages/dill/_dill.py", line 317, in loads
return load(file, ignore)
File "/usr/local/lib/python2.7/dist-packages/dill/_dill.py", line 305, in load
obj = pik.load()
File "/usr/lib/python2.7/pickle.py", line 864, in load
dispatch[key](self)
File "/usr/lib/python2.7/pickle.py", line 1232, in load_build
for k, v in state.iteritems():
AttributeError: 'str' object has no attribute 'iteritems'
I am using tensorflow==1.13.1, tensorflow-transform==0.9.0, and apache_beam==2.7.0. The relevant pipeline code is:
with beam.Pipeline(options=self.pipe_opt) as p:
    with beam_impl.Context(temp_dir=self.google_cloud_options.temp_location):
        # rest of the script
        _ = (
            transform_fn
            | 'WriteTransformFn' >>
            transform_fn_io.WriteTransformFn(path=self.JOB_DIR + '/transform/'))
I was experiencing the same error. It seems to be triggered by a mismatch between the tensorflow-transform version on your local (master) machine and the one the workers run (specified in the setup.py file).
In my case I was running tensorflow-transform==0.13 on my local machine whereas the workers were running 0.8. Downgrading the local version to 0.8 fixed the issue.
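If you would rather align the workers with your local version instead of downgrading, the pin lives in the setup.py that Dataflow ships to the workers via the --setup_file pipeline option. A sketch, with placeholder project name, version, and pinned release:

import setuptools

setuptools.setup(
    name='my-pipeline',    # placeholder
    version='0.1.0',       # placeholder
    packages=setuptools.find_packages(),
    install_requires=[
        # pin to the exact version installed on the launching machine
        'tensorflow-transform==0.13.0',
    ],
)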

TensorFlow TFRecord generation error

When I try to generate a TFRecord file, I get the following error message:
Traceback (most recent call last):
File "generate_tfrecord.py", line 112, in <module>
tf.app.run()
File "/home/harisohmnaathss/anaconda3/envs/tensorflow/lib/python3.5/site-
packages/tensorflow/python/platform/app.
py", line 124, in run
_sys.exit(main(argv))
File "generate_tfrecord.py", line 98, in main
writer = tf.python_io.TFRecordWriter(FLAGS.output_path)
File "/home/harisohmnaathss/anaconda3/envs/tensorflow/lib/python3.5/site-
packages/tensorflow/python/lib/io/tf_rec
ord.py", line 106, in __init__
compat.as_bytes(path), compat.as_bytes(compression_type), status)
File "/home/harisohmnaathss/anaconda3/envs/tensorflow/lib/python3.5/site-
packages/tensorflow/python/framework/err
ors_impl.py", line 473, in __exit__
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.NotFoundError: ; No such file or
directory
The command that I try to run is:
python generate_tfrecord.py --csv_input=data/Train_labels.csv --output_path=data/train.records
Any ideas to solve this issue?
You are supplying the output path as data/train.records instead of data/train.tfrecord.
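For what it's worth, tf.python_io.TFRecordWriter also raises this NotFoundError when the parent directory of the output path does not exist, because it will not create missing directories. A small guard, with a placeholder path:

import os
import tensorflow as tf

output_path = 'data/train.tfrecord'  # placeholder
# TFRecordWriter fails with NotFoundError if 'data/' is missing,
# so create the directory tree up front.
os.makedirs(os.path.dirname(output_path), exist_ok=True)
writer = tf.python_io.TFRecordWriter(output_path)
writer.close()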

YouTube Live Stream API error - AttributeError: 'module' object has no attribute 'argv'

When I try to use the YouTube Live Stream API, I get the error below. Is there any way to solve this issue?
Traceback (most recent call last):
File "/usr/lib/enigma2/python/Components/PluginComponent.py", line 53, in readPluginList
File "/usr/lib/enigma2/python/Tools/Import.py", line 2, in my_import
File "/usr/lib/enigma2/python/Plugins/Extensions/YouTubeLiveStreaming/plugin.py", line 30, in <module>
import apiclient.discovery
File "/usr/lib/python2.7/site-packages/apiclient/__init__.py", line 24, in <module>
File "/usr/lib/python2.7/site-packages/googleapiclient/sample_tools.py", line 32, in <module>
File "/usr/lib/python2.7/site-packages/oauth2client/tools.py", line 70, in <module>
argparser = _CreateArgumentParser()
File "/usr/lib/python2.7/site-packages/oauth2client/tools.py", line 55, in _CreateArgumentParser
parser = argparse.ArgumentParser(add_help=False)
File "/usr/lib/python2.7/argparse.py", line 1586, in __init__
AttributeError: 'module' object has no attribute 'argv'
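The traceback itself narrows this down: argparse.ArgumentParser.__init__ derives its default prog name from sys.argv[0], and an embedded interpreter such as Enigma2's may never populate sys.argv. A commonly suggested workaround, sketched here and not verified against this plugin, is to define a dummy argv before importing anything that builds a parser at import time:

import sys

# Embedded interpreters may not set sys.argv; give argparse something to read.
if not hasattr(sys, 'argv'):
    sys.argv = ['enigma2']

import apiclient.discovery  # oauth2client.tools can now build its parser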

Error while running TensorFlow wide_n_deep Tutorial

I encountered the error:
AttributeError: 'NoneType' object has no attribute 'bucketize'
The full error is as follows:
Traceback (most recent call last):
File "wide_n_deep_tutorial_1.py", line 214, in <module>
train_and_eval()
File "wide_n_deep_tutorial_1.py", line 203, in train_and_eval
m.fit(input_fn=lambda: input_fn(df_train), steps=FLAGS.train_steps)
File "C:\Python35\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\dnn_linear_combined.py", line 711, in fit
max_steps=max_steps)
File "C:\Python35\lib\site-packages\tensorflow\python\util\deprecation.py", line 191, in new_func
return func(*args, **kwargs)
File "C:\Python35\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\estimator.py", line 355, in fit
max_steps=max_steps)
File "C:\Python35\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\estimator.py", line 699, in _train_model
train_ops = self._get_train_ops(features, labels)
File "C:\Python35\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\estimator.py", line 1052, in _get_train_ops
return self._call_model_fn(features, labels, model_fn_lib.ModeKeys.TRAIN)
File "C:\Python35\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\estimator.py", line 1019, in _call_model_fn
params=self.params)
File "C:\Python35\lib\site-packages\tensorflow\contrib\learn\python\learn\estimators\dnn_linear_combined.py", line 504, in _dnn_linear_combined_model_fn
scope=scope)
File "C:\Python35\lib\site-packages\tensorflow\contrib\layers\python\layers\feature_column_ops.py", line 526, in weighted_sum_from_feature_columns
transformed_tensor = transformer.transform(column)
File "C:\Python35\lib\site-packages\tensorflow\contrib\layers\python\layers\feature_column_ops.py", line 869, in transform
feature_column.insert_transformed_feature(self._columns_to_tensors)
File "C:\Python35\lib\site-packages\tensorflow\contrib\layers\python\layers\feature_column.py", line 1489, in insert_transformed_feature
name="bucketize")
File "C:\Python35\lib\site-packages\tensorflow\contrib\layers\python\ops\bucketization_op.py", line 48, in bucketize
return _bucketization_op.bucketize(input_tensor, boundaries, name=name)
AttributeError: 'NoneType' object has no attribute 'bucketize'
I got the same issue. It seems that on Windows the native bucketization op fails to load, so the module-level op handle ends up as None (see the bucketization_op source).
Try running this code on Linux, or remove the bucketization and the column crossing. For example, change the line:
flags.DEFINE_string("model_type", "wide_n_deep", "valid model types: {'wide', 'deep', 'wide_n_deep'}")
to
flags.DEFINE_string("model_type", "deep", "valid model types: {'wide', 'deep', 'wide_n_deep'}")
Follow this issue for updates.
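To confirm that the missing native op is the culprit before changing flags, you can probe the loader directly; this sketch follows the TF 1.x contrib layout shown in the traceback:

from tensorflow.contrib.layers.python.ops import bucketization_op

# On an affected Windows build the shared library fails to load and this
# module-level handle is None, which reproduces the AttributeError above.
print(bucketization_op._bucketization_op)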

Scrapy calling spider other than the one specified on the command line

(P6Svenv) malikarumi@Tetuoan2:~/Projects/P6/P6Svenv/test2/test2/spiders$ scrapy crawl zomd
Traceback (most recent call last):
File "/usr/bin/scrapy", line 9, in <module>
load_entry_point('Scrapy==1.0.3.post6-g2d688cd', 'console_scripts', 'scrapy')()
File "/usr/lib/pymodules/python2.7/scrapy/cmdline.py", line 142, in execute
cmd.crawler_process = CrawlerProcess(settings)
File "/usr/lib/pymodules/python2.7/scrapy/crawler.py", line 209, in __init__
super(CrawlerProcess, self).__init__(settings)
File "/usr/lib/pymodules/python2.7/scrapy/crawler.py", line 115, in __init__
self.spider_loader = _get_spider_loader(settings)
File "/usr/lib/pymodules/python2.7/scrapy/crawler.py", line 296, in _get_spider_loader
return loader_cls.from_settings(settings.frozencopy())
File "/usr/lib/pymodules/python2.7/scrapy/spiderloader.py", line 30, in from_settings
return cls(settings)
File "/usr/lib/pymodules/python2.7/scrapy/spiderloader.py", line 21, in __init__
for module in walk_modules(name):
File "/usr/lib/pymodules/python2.7/scrapy/utils/misc.py", line 71, in walk_modules
submod = import_module(fullpath)
File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/home/malikarumi/Projects/P6/P6Svenv/test2/test2/spiders/t350_crawl.py", line 36
def parse_item(self, response):
^
IndentationError: unindent does not match any outer indentation level
Do you see it? Scrapy isn't even calling the spider I specified on the command line!
I see that super in the traceback, but all my t350 spiders are derived from CrawlSpider, while zomd is subclassed from scrapy.Spider. Why is this happening, and what do I do about it?
A spider's name is not the same as its file name; it is defined inside the spider file by the name attribute, as in the second line of the class below:
from scrapy import Spider

class CAPjobSpider(Spider):
    name = "spider_name"
This spider's name is "spider_name" even if the file is called "New_York.py". Note also that Scrapy imports every module in the spiders package before selecting a spider (the walk_modules call in your traceback), so the IndentationError in t350_crawl.py aborts the crawl even though you asked for zomd; fix that file's indentation first.
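A quick way to see which spider names Scrapy actually registered is its built-in list command; note that it too imports every spider module, so it will surface the same IndentationError until t350_crawl.py is fixed:

scrapy list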