GNU Radio TypeError: primitive_connect(): incompatible function arguments when executing simple graph - gnuradio

When trying to execute the following graph I get the error below. Any ideas what might be causing this? I'm on Ubuntu 20.10.
<<< Welcome to GNU Radio Companion 3.9.0.0-git >>>
Block paths:
/usr/share/gnuradio/grc/blocks
Loading: "/home/gnuradiouser/Dev/SDR/Lesson 1.grc"
>>> Done
Generating: '/home/gnuradiouser/Dev/SDR/Lesson_1.py'
Executing: /usr/bin/python3 -u /home/gnuradiouser/Dev/SDR/Lesson_1.py
Warning: failed to XInitThreads()
gr-osmosdr 0.2.0.0 (0.2.0) gnuradio 3.8.1.0
built-in source types: file fcd rtl rtl_tcp uhd hackrf bladerf rfspace airspy airspyhf soapy redpitaya freesrp
[INFO] [UHD] linux; GNU C++ version 10.2.0; Boost_107100; UHD_3.15.0.0-3build2
Using HackRF One with firmware 2018.01.1
Traceback (most recent call last):
File "/home/gnuradiouser/Dev/SDR/Lesson_1.py", line 193, in <module>
main()
File "/home/gnuradiouser/Dev/SDR/Lesson_1.py", line 169, in main
tb = top_block_cls()
File "/home/gnuradiouser/Dev/SDR/Lesson_1.py", line 142, in __init__
self.connect((self.osmosdr_source_0, 0), (self.qtgui_freq_sink_x_0, 0))
File "/usr/lib/python3/dist-packages/gnuradio/gr/hier_block2.py", line 37, in wrapped
func(self, src, src_port, dst, dst_port)
File "/usr/lib/python3/dist-packages/gnuradio/gr/hier_block2.py", line 100, in connect
self.primitive_connect(*args)
TypeError: primitive_connect(): incompatible function arguments. The following argument types are supported:
1. (self: gnuradio.gr.gr_python.hier_block2_pb, block: gnuradio.gr.gr_python.basic_block) -> None
2. (self: gnuradio.gr.gr_python.hier_block2_pb, src: gnuradio.gr.gr_python.basic_block, src_port: int, dst: gnuradio.gr.gr_python.basic_block, dst_port: int) -> None
Invoked with: <gnuradio.gr.gr_python.top_block_pb object at 0x7fac59391970>, <Swig Object of type 'gr::basic_block_sptr *' at 0x7fac575a4990>, 0, <gnuradio.gr.gr_python.basic_block object at 0x7fac5953a1f0>, 0
swig/python detected a memory leak of type 'gr::basic_block_sptr *', no destructor found.
>>> Done (return code 1)
I'm new to GNU Radio, so I'm not sure where to start troubleshooting this. I tried swapping a signal generator in for the osmocom source and the flowgraph then executed, so I'm guessing there's either a bad parameter in the osmocom block or a mismatched library version.
I've also tried installing an earlier version of GNU Radio (3.7 and 3.8), but couldn't get either working on Ubuntu 20.10.

Managed to resolve this by uninstalling GNU Radio and removing the 'master' PPA (ppa:gnuradio/gnuradio-master) that one of the installation guides I came across had me add. The log above shows the underlying mismatch: GRC reports 3.9.0.0-git while gr-osmosdr was built against gnuradio 3.8.1.0, so the Python bindings were incompatible.
After that I simply installed GNU Radio from the standard 20.10 repositories, and everything works fine now.
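For anyone hitting the same thing, the cleanup amounted to roughly the following (a sketch; the exact package names depend on what the PPA actually installed):
$ sudo apt remove gnuradio gr-osmosdr
$ sudo add-apt-repository --remove ppa:gnuradio/gnuradio-master
$ sudo apt update
$ sudo apt install gnuradio gr-osmosdr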

Pipeline keeps failing at test and lint stage

I am pretty new to coding, so I'm sorry if I'm not giving enough context. I am trying to package and deploy a Python model to a repo, but I keep getting the following error when I push to GitLab:
ERROR: No matching distribution found for numpy==1.23.1
$ if [[ "${COVERAGE_ARGS}" == "" ]]; then
$ dir_loc=`pwd`
$ COVERAGE_ARGS="--cov=${dir_loc}"
$ fi
$ pytest --junitxml=reports/report.xml --cov-config=${COV_CONFIG} ${COVERAGE_ARGS} ${EXTRA_ARGS}
============================= test session starts ==============================
platform linux -- Python 3.7.10, pytest-7.1.2, pluggy-1.0.0
rootdir: /builds/Ahs1yUQY/1/dse/dscp/dse-dscp-python-hoffinator
plugins: cov-3.0.0, requests-mock-1.9.3
collected 0 items / 1 error
==================================== ERRORS ====================================
______________________ ERROR collecting test/test_main.py ______________________
ImportError while importing test module '/builds/Ahs1yUQY/1/dse/dscp/dse-dscp-python-hoffinator/test/test_main.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/lib64/python3.7/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
test/test_main.py:5: in <module>
from src.hoffinator import hoffinator_functions
src/hoffinator/hoffinator_functions.py:3: in <module>
import numpy as np
E ModuleNotFoundError: No module named 'numpy'
- generated xml file: /builds/Ahs1yUQY/1/dse/dscp/dse-dscp-python-hoffinator/reports/report.xml -
---------- coverage: platform linux, python 3.7.10-final-0 -----------
My requirements.txt is as follows:
ccplatlogging
boto3~=1.20.33
email-validator~=1.1.3
appnope==0.1.3
argon2-cffi==21.3.0
argon2-cffi-bindings==21.2.0
asttokens==2.0.5
attrs==21.4.0
backcall==0.2.0
beautifulsoup4==4.11.1
bleach==5.0.1
cffi==1.15.1
cycler==0.11.0
debugpy==1.6.2
decorator==5.1.1
defusedxml==0.7.1
entrypoints==0.4
executing==0.8.3
fastjsonschema==2.16.1
fonttools==4.34.4
iniconfig==1.1.1
ipykernel==6.15.1
ipython==7.34.0
ipython-genutils==0.2.0
ipywidgets==7.7.1
jedi==0.18.1
Jinja2==3.1.2
joblib==1.1.0
jsonschema==4.7.2
jupyter==1.0.0
jupyter-client==7.3.4
jupyter-console==6.4.4
jupyter-core==4.11.1
jupyterlab-pygments==0.2.2
jupyterlab-widgets==1.1.1
kiwisolver==1.4.4
MarkupSafe==2.1.1
matplotlib==3.5.2
matplotlib-inline==0.1.3
mistune==0.8.4
nbclient==0.6.6
nbconvert==6.5.0
nbformat==5.4.0
nest-asyncio==1.5.5
notebook==6.4.12
numpy==1.23.1
packaging==21.3
pandas==1.4.3
pandocfilters==1.5.0
parso==0.8.3
patsy==0.5.2
pexpect==4.8.0
pickleshare==0.7.5
Pillow==9.2.0
pluggy==1.0.0
ppscore==1.2.0
prometheus-client==0.14.1
prompt-toolkit==3.0.30
psutil==5.9.1
ptyprocess==0.7.0
pure-eval==0.2.2
py==1.11.0
pycparser==2.21
pycryptodome==3.15.0
Pygments==2.12.0
pyodbc==4.0.34
pyparsing==3.0.9
pyrsistent==0.18.1
pytest==7.1.2
python-dateutil==2.8.2
pytz==2022.1
pyzmq==23.2.0
qtconsole==5.3.1
QtPy==2.1.0
scikit-learn==0.24.2
scipy==1.8.1
seaborn==0.11.2
Send2Trash==1.8.0
six==1.16.0
soupsieve==2.3.2.post1
stack-data==0.3.0
statsmodels==0.13.2
teradatasql==17.20.0.0
terminado==0.15.0
threadpoolctl==3.1.0
tinycss2==1.1.1
tomli==2.0.1
tornado==6.2
traitlets==5.3.0
wcwidth==0.2.5
webencodings==0.5.1
widgetsnbextension==3.6.1
I do not understand why numpy is not found when it is clearly in the .txt. Is it a version issue?
Your console output shows that you are using Python 3.7.10. If you look at the numpy release notes for 1.23.1, you'll notice that they state:
The Python versions supported for this release are 3.8-3.10.
That is exactly why pip reported "No matching distribution found for numpy==1.23.1": there is no 1.23.1 build for Python 3.7, so numpy was never installed, and the test stage then failed with ModuleNotFoundError. You will need to either use a newer version of Python or an older version of numpy.
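If you need to stay on Python 3.7 instead, a minimal sketch of the requirements change (the exact pins are assumptions; verify them against each project's release notes):
numpy~=1.21.6   # last numpy series that supports Python 3.7
pandas~=1.3.5   # pandas 1.4+ requires Python >= 3.8
scipy~=1.7.3    # scipy 1.8+ requires Python >= 3.8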

Not able to find a migrated dpcpp output file and dpct responds as "Migration not necessary"

I am trying to migrate my CUDA application using dpct. When I run dpct, I see it process the CUDA files and emit some benign warnings, but in the end it exits without writing out any DPC++ equivalent files. I can clearly see CUDA functions called in these applications, and removing the CUDA path would break the compile. This is the command I used:
$ dpct --report-type=all --cuda-include-path=/usr/local/cuda-10.2/include -p compile_commands.json
I have elided the actual file paths to avoid confusion:
Processing: ....../ComputeThermoGPU.cu
Processing: ....../CommunicatorGPU.cu
Processing: ....../ParticleData.cu
Processing: ....../Integrator.cu
------------------APIS report--------------------
API name Frequency
-------------------------------------------------
----------Stats report---------------
File name, LOC migrated to DPC++, LOC migrated to helper functions, LOC not needed to migrate,
LOC not able to migrate
....../Integrator.cu, 1, 0, 168, 0
....../ParticleData.cu, 1, 0, 402, 0
....../ComputeThermoGPU.cu, 1, 0, 686, 0
....../ParticleGroup.cu, 6, 0, 111, 0
Total migration time: 17207.371000 ms
-------------------------------------
dpct exited with code: 1 (Migration not necessary)
I found a solution to my own question. Use:
dpct -p compile_commands.json --in-root=src --out-root=dpct_out --process-all
I think the reason is that, in the absence of the driver code (the file containing main), dpct decides these helper .cu files are not used anywhere and therefore don't need migration; that's why you see the "Migration not necessary" message. The --process-all flag tells dpct to migrate (or copy) every file it finds under --in-root regardless.

redis.exceptions.DataError: Invalid input of type: 'NoneType'. Convert to a byte, string or number first

I've recently started using Redis and RQ to run background processes. I built a Dash app which works fine on Heroku and used to work locally as well. Recently I tried to test the same app locally again, and I keep getting the following error, even though I'm running exactly the same code that is hosted on Heroku:
redis.exceptions.DataError: Invalid input of type: 'NoneType'. Convert to a byte, string or number first.
In my requirements.txt and virtualenv on Ubuntu 18.04 I have redis 3.0.1 and rq 0.13.0.
When I run redis-server in my terminal I see that Redis 4.0.9 is used (that's also confusing to me).
I've googled for two days looking for a solution, to no avail.
Does anyone have an idea of what might have happened and how to solve this error?
Here is the full relevant traceback:
File "/home/tom/dashenv/pb101_models/pages/cumulative_culture.py", line 1026, in stop_or_start_update
job = q.fetch_job(job_id)
File "/home/tom/dashenv/dash/lib/python3.6/site-packages/rq/queue.py", line 142, in fetch_job
self.remove(job_id)
File "/home/tom/dashenv/dash/lib/python3.6/site-packages/rq/queue.py", line 186, in remove
return self.connection.lrem(self.key, 1, job_id)
File "/home/tom/dashenv/dash/lib/python3.6/site-packages/redis/client.py", line 1580, in lrem
return self.execute_command('LREM', name, count, value)
File "/home/tom/dashenv/dash/lib/python3.6/site-packages/redis/client.py", line 754, in execute_command
connection.send_command(*args)
File "/home/tom/dashenv/dash/lib/python3.6/site-packages/redis/connection.py", line 619, in send_command
self.send_packed_command(self.pack_command(*args))
File "/home/tom/dashenv/dash/lib/python3.6/site-packages/redis/connection.py", line 659, in pack_command
for arg in imap(self.encoder.encode, args):
File "/home/tom/dashenv/dash/lib/python3.6/site-packages/redis/connection.py", line 124, in encode
"byte, string or number first." % typename)
redis.exceptions.DataError: Invalid input of type: 'NoneType'. Convert to a byte, string or number first.
Thanks in advance for any suggestion/hint.
All best,
Tom
Check the redis-py 3.0 release notes: redis-py no longer accepts None as a value to send to Redis.
(As an aside, the 4.0.9 you see is the version of the Redis server; the redis 3.0.1 in your requirements is the redis-py client library, so those two numbers are not in conflict.)
Try converting the value first; json.dumps(None) worked for me.
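A minimal sketch of both workarounds (the variable names are illustrative, not part of RQ's API):
import json
import redis

r = redis.Redis()  # assumes a local Redis server on the default port

# redis-py >= 3.0 raises DataError when handed None, so serialize it
# explicitly if you really need to store a null:
r.set("my-key", json.dumps(None))    # stores the literal string "null"
print(json.loads(r.get("my-key")))   # -> None

# In the traceback above the None is job_id itself, so guarding the
# value before it ever reaches Redis fixes the error at its source:
job_id = None  # whatever your code actually received
if job_id is not None:
    pass  # safe to call q.fetch_job(job_id) here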

Issues with bitbake for building Angstrom

The issue I'm having is that I'm trying to build an Angstrom image from scratch using bitbake (since Angstrom is now Yocto compatible), but I run into an error the moment I run bitbake systemd-image:
Traceback (most recent call last):
File "/usr/bin/bitbake", line 234, in <module>
ret = main()
File "/usr/bin/bitbake", line 197, in main
server = ProcessServer(server_channel, event_queue, configuration)
File "/usr/lib/pymodules/python2.7/bb/server/process.py", line 78, in __init__
self.cooker = BBCooker(configuration, self.register_idle_function)
File "/usr/lib/pymodules/python2.7/bb/cooker.py", line 76, in __init__
self.parseConfigurationFiles(self.configuration.file)
File "/usr/lib/pymodules/python2.7/bb/cooker.py", line 510, in parseConfigurationFiles
data = _parse(os.path.join("conf", "bitbake.conf"), data)
TypeError: getVar() takes exactly 3 arguments (2 given)
ERROR: Error evaluating '${TARGET_OS}:${TRANSLATED_TARGET_ARCH}:build-${BUILD_OS}:pn-${PN}:${MACHINEOVERRIDES}:${DISTROOVERRIDES}:${CLASSOVERRIDE}:forcevariable${#bb.utils.contains("TUNE_FEATURES", "thumb", ":thumb", "", d)}${#bb.utils.contains("TUNE_FEATURES", "no-thumb-interwork", ":thumb-interwork", "", d)}'
Traceback (most recent call last):
File "/usr/lib/pymodules/python2.7/bb/data_smart.py", line 116, in expandWithRefs
s = __expand_var_regexp__.sub(varparse.var_sub, s)
File "/usr/lib/pymodules/python2.7/bb/data_smart.py", line 60, in var_sub
var = self.d.getVar(key, 1)
File "/usr/lib/pymodules/python2.7/bb/data_smart.py", line 260, in getVar
return self.expand(value, var)
File "/usr/lib/pymodules/python2.7/bb/data_smart.py", line 132, in expand
return self.expandWithRefs(s, varname).value
File "/usr/lib/pymodules/python2.7/bb/data_smart.py", line 117, in expandWithRefs
s = __expand_python_regexp__.sub(varparse.python_sub, s)
TypeError: getVar() takes exactly 3 arguments (2 given)
ERROR: Error evaluating '${#bb.parse.BBHandler.vars_from_file(d.getVar('FILE'),d)[0] or 'defaultpkgname'}'
Traceback (most recent call last):
File "/usr/lib/pymodules/python2.7/bb/data_smart.py", line 117, in expandWithRefs
s = __expand_python_regexp__.sub(varparse.python_sub, s)
File "/usr/lib/pymodules/python2.7/bb/data_smart.py", line 76, in python_sub
value = utils.better_eval(codeobj, DataContext(self.d))
File "/usr/lib/pymodules/python2.7/bb/utils.py", line 387, in better_eval
return eval(source, _context, locals)
File "PN", line 1, in <module>
TypeError: getVar() takes exactly 3 arguments (2 given)
I've been at this for a while now, searching different sites. Originally I tried following the guide in the developer section of the Angstrom site, but once I hit some errors (prior to the one I'm posting here), I found Derek Molloy's site http://derekmolloy.ie/building-angstrom-for-beaglebone-from-source/ which solved those errors and gave a little more detail on the process.
Eventually I stumbled onto another forum post which described my problem, but unfortunately the answers weren't really clear (to me, anyway): http://comments.gmane.org/gmane.linux.distributions.angstrom.devel/7431. I'm at a loss as to what could be wrong, and I'm pretty much new to the Yocto Project, so I'm unsure whether I've missed a step or overlooked something implicit. I would deeply appreciate anyone who could point me in the right direction.
As a side note, I've been wondering whether it could be something to do with the environment-angstrom-... file I have, since mine is environment-angstrom-v2013.12 and all the other examples use previous versions; I'm wondering if there's a new step involved when working with this one.
Is there a reason you are using a system-wide bitbake instead of the one that ships with that release of Angstrom?
Don't use a system-wide bitbake: the bitbake API can and does change over time. Use the bitbake that corresponds to that release of Angstrom.
(This is breaking because your system bitbake's getVar requires three arguments, but your Angstrom layers only pass two.)
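A stripped-down illustration of that kind of API drift (a sketch, not bitbake's actual source):
# Old-style datastore: the caller must pass `expand` explicitly.
class OldDataSmart:
    def __init__(self):
        self._vars = {'FILE': 'conf/bitbake.conf'}

    def getVar(self, var, expand):  # three arguments, counting self
        return self._vars.get(var)

d = OldDataSmart()
print(d.getVar('FILE', True))  # fine: matches the old signature
try:
    d.getVar('FILE')  # newer metadata style: one argument
except TypeError as e:
    # Python 2 wording: getVar() takes exactly 3 arguments (2 given)
    print(e)
Newer metadata assumes a default for expand and calls getVar with a single argument, which is exactly the mismatch in the traceback above.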

plone.app.blob RuntimeError while migrating ATFile

I'm doing a blob migration on a Plone 3.2.1 site and I'm getting "RuntimeError: maximum recursion depth exceeded while calling a Python object" on some files during @@blob-file-migration.
I found http://svn.eionet.europa.eu/projects/Zope/ticket/4190 and it looks like they solved this problem for images by creating a custom migrator.
Any clue? Traceback below.
File "/home/simahawk/dev/plone/plone3/projx/src/plone.app.blob/src/plone/app/blob/content.py", line 113, in setFile
mutator = self.getField('file').getMutator(self)
File "/home/simahawk/dev/plone/plone3/buildout/eggs/Products.Archetypes-1.5.10-py2.4.egg/Products/Archetypes/BaseObject.py", line 241, in getField
return self.Schema().get(key)
File "/home/simahawk/dev/plone/plone3/buildout/eggs/Products.Archetypes-1.5.10-py2.4.egg/Products/Archetypes/BaseObject.py", line 828, in Schema
schema = ISchema(self)
File "/home/simahawk/dev/plone/plone3/projx/parts/zope2/lib/python/zope/app/component/hooks.py", line 96, in adapter_hook
return siteinfo.adapter_hook(interface, object, name, default)
File "/home/simahawk/dev/plone/plone3/buildout/eggs/archetypes.schemaextender-2.1.1-py2.4.egg/archetypes/schemaextender/extender.py", line 143, in cachingInstanceSchemaFactory
key = IUUID(context, str(id(context)))
File "/home/simahawk/dev/plone/plone3/projx/parts/zope2/lib/python/zope/app/component/hooks.py", line 96, in adapter_hook
return siteinfo.adapter_hook(interface, object, name, default)
RuntimeError: maximum recursion depth exceeded in cmp
2013-03-06 10:16:49 INFO ATCT.migration Rolling back to last safe point
When migrating from Plone 3.x to Plone 4.x using Products.contentmigration I got the same error. It seemed there was a bug in the plone.app.blob migration. We made this custom migration to bypass the recursion error: http://svn.eionet.europa.eu/projects/Zope/browser/trunk/Products.EEAPloneAdmin/trunk/Products/EEAPloneAdmin/Extensions/ImageFS2Image.py?rev=29656
The problem is the archetypes.schemaextender version (2.1.1). Down-pinning to 1.6.0 solved the issue. It also fixed a random KeyError on a 3.3.5 site. I think this is related to #12051 and #11396. These seem to be common problems with newer versions of archetypes.schemaextender, but the package's README has no information for Plone 3.x.
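If the site is built with buildout, the down-pin is a one-line change (a sketch, assuming the conventional [versions] section):
[versions]
archetypes.schemaextender = 1.6.0
Re-run buildout and restart the instance for the pin to take effect.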