GNU Radio Companion: OFDM TX and RX in a single flowgraph

I am following this GitHub example to understand OFDM in GNU Radio Companion. I can execute ofdm_tx individually (64- and 512-point FFT) without any issues, but when I connect the TX and RX in a single flowgraph I only get a spectrum from ofdm_tx; ofdm_rx produces no output (just a flat line).
My question: each time I close the output spectrum, the tool hangs, and in the background (inside GNU Radio Companion) I see the following message train (screenshot attached). The same thing also happens when I run ofdm_rx individually.
Error messages in the console:
packet_headerparser_b :info: Detected an invalid packet at item 1448.
header_payload_demux :info: parser returned #f
Please guide me in this regard.

by selecting "NO" for vector source "Repeat" variable , issue sorted out (no hang), but not able to see spectrum anymore.

Related

Why is VK_SAMPLE_COUNT_1_BIT an invalid choice for multisampling in Vulkan?

Hello people of StackOverflow,
I am currently working on a game engine using the Vulkan graphics API. In the past I simply set anti-aliasing to the maximum available, but today I tried to turn it off (to improve performance on weaker systems). To do this I set the MSAA sample count in my engine to VK_SAMPLE_COUNT_1_BIT, which produced this validation error:
Validation Error: [ VUID-VkSubpassDescription-pResolveAttachments-00848 ] Object 0: handle = 0x55aaa6e32828, type = VK_OBJECT_TYPE_DEVICE; | MessageID = 0xfad6c3cb | ValidateCreateRenderPass(): Subpass 0 requests multisample resolve from attachment 0 which has VK_SAMPLE_COUNT_1_BIT. The Vulkan spec states: If pResolveAttachments is not NULL, for each resolve attachment that is not VK_ATTACHMENT_UNUSED, the corresponding color attachment must not have a sample count of VK_SAMPLE_COUNT_1_BIT (https://www.khronos.org/registry/vulkan/specs/1.2-extensions/html/vkspec.html#VUID-VkSubpassDescription-pResolveAttachments-00848)
I can work around this problem relatively easily, so it isn't really blocking me; however, I was wondering why exactly this limit is in place. If I want to set the MSAA sample count to 1, why can't I?
Thanks,
sckzor
A sample count of 1 means "not a multisampled image," and if you're doing a multisample resolve, resolving from a non-multisampled image doesn't make sense. That is also why you can't use such images for anything else that expects a multisampled image (you can't use an MS-style sampler or texture function on them).

Why are old nodes visible even after deleting event files? [TensorFlow]

I have just started learning TensorFlow and wrote the following piece of code in a Jupyter notebook:
import tensorflow as tf

a = tf.placeholder(tf.float32, shape=[3, 3], name='X')
b = tf.constant([[5, 5, 5], [2, 3, 4], [4, 5, 6]], tf.float32, name='Y')
c = tf.matmul(a, b)

with tf.Session() as sess:
    writer = tf.summary.FileWriter('./graphs', sess.graph)
    print(sess.run(c, feed_dict={a: [[2, 3, 4], [4, 5, 6], [6, 7, 8]]}))
    writer.close()
Running TensorBoard the first time shows a single X, Y and MatMul node, as follows:
However, when I run the cell again (Ctrl+Enter), TensorBoard shows a duplicate of the original graph.
I tried to resolve this (i.e., remove the older, dead nodes) by:
1. Deleting the event files.
2. Deleting the whole directory containing multiple event files of the same code.
3. Running fuser 6006/tcp -k before the tensorboard command line call.
But even after that, when I ran TensorBoard, it would still show the duplicate copies.
The only solution that worked was to reset the graph using tf.reset_default_graph() at the beginning of the code or to shut the notebook down and restart it.
My questions are:
1. Why is it that, even after deleting the event files, the older, dead nodes keep showing up on TensorBoard? And yes, I restarted TensorBoard after each try, but the duplicates were still there.
2. What ways are there, if any, besides the two I listed above, to get rid of the dead nodes?
The nodes you described are not dead. They still exist and can be used.
When you run your code the first time, the nodes are created and added to the graph. When you execute the same cell a second time, they are added again with different names.
You would get the same result if you simply pasted the code twice into a .py file.
Your solution with tf.reset_default_graph() is the right one. Restarting the notebook works because all of the in-memory state is removed, the same as rerunning a .py file.
Removing the event files does not work because, even though the files are removed, the nodes that were added to the in-memory graph are still there.
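In code, a minimal sketch of that fix, using the same TF 1.x API as the question: resetting the default graph at the top of the cell so re-running it rebuilds the graph from scratch instead of appending renamed duplicates.

import tensorflow as tf

tf.reset_default_graph()  # drop any nodes left over from a previous run of this cell

a = tf.placeholder(tf.float32, shape=[3, 3], name='X')
b = tf.constant([[5, 5, 5], [2, 3, 4], [4, 5, 6]], tf.float32, name='Y')
c = tf.matmul(a, b)

with tf.Session() as sess:
    writer = tf.summary.FileWriter('./graphs', sess.graph)
    print(sess.run(c, feed_dict={a: [[2, 3, 4], [4, 5, 6], [6, 7, 8]]}))
    writer.close()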
I was also having the same problem of TensorBoard displaying duplicate graphs. I tried several measures, such as deleting the event files and deleting the log directory that contained them, but TensorBoard would still remember the nodes and the graph from the previous run and show them as duplicate copies, one after another.
I noticed that each time I ran the code, TensorFlow would create the same nodes but with different names, which means it was still keeping the old nodes in memory. It looked something like this in the Python execution window:
(Screenshots: node listing after the first run, and after running the code a 2nd and 3rd time.)
This solved the problem:
1) Restarting the kernel before each run.
2) As mentioned in the posts above, adding tf.reset_default_graph() at the beginning of the code also solved the problem.
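To see what both answers describe, here is a small sketch (TF 1.x, same assumptions as above) that lists the operations currently registered in the default graph; after a second run of the cell the same ops typically reappear with suffixed names such as X_1, Y_1 and MatMul_1.

import tensorflow as tf

# Print every operation currently in the default graph.
for op in tf.get_default_graph().get_operations():
    print(op.name)

# First run of the cell prints:    X, Y, MatMul
# Running the cell again prints:   X, Y, MatMul, X_1, Y_1, MatMul_1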

Issues importing data from a multi-value D3 database into SQL

I am trying to use mv.NET by Bluefinity. I made some integration packages with it for importing data from a D3 multi-value database into MS SQL 2012, but I seem to be having some trouble with the mapping.
The VOYAGES table has some commentX fields in the D3 application that are quite unwieldy, and the INSERT fails after a certain number of rows with the following message:
Error: 0xC0047062 at INSERT, mvNET Source[354]: System.Exception: Error #8: dataReader[0] = LTPAC002 ci.BufferColumnIndex = 52, ci.ColumnName = COMMGROUP(Error #8: dataReader[0] = LTPAC002 ci.BufferColumnIndex = 52, ci.ColumnName = COMMGROUP(The value is too large to fit in the column data area of the buffer.))
at mvNETDataSource.mvNETSource.PrimeOutput(Int32 outputs, Int32[] outputIDs, PipelineBuffer[] buffers)
at Microsoft.SqlServer.Dts.Pipeline.ManagedComponentHost.HostPrimeOutput(IDTSManagedComponentWrapper100 wrapper, Int32 outputs, Int32[] outputIDs, IDTSBuffer100[] buffers, IntPtr ppBufferWirePacket)
Error: 0xC0047038 at INSERT, SSIS.Pipeline: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on mvNET Source returned error code 0x80131500. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure.
"The value is too large to fit in the column data area of the buffer." I tried changing the input/output types but can't seem to get it right.
In the SQL table the columns are of type ntext.
In the .dtsx job the data type for the columns is Unicode String [DT_WSTR] with length 4000; I guess these were auto-detected.
The import worked for other D3 files like this, so I am not sure why it fails for these comment fields.
Running the query in the mv.NET Data Manager (on the D3 server) times out after 240 seconds, so maybe this is the underlying issue?
Any ideas on how to proceed? Thank you.
The most likely reason is that the COMMGROUP column does not have the correct data type, or that some record in the source does not fit the output type.
To find the record causing the error, enable "Redirect row" on the error output of the failing component, send the redirected rows to a .txt/.csv/.tsv file, and then inspect that data.
The exception is being thrown from mv.NET, so I suggest you (or your reseller) call Bluefinity support and ask them about this. You're paying for support, so you might as well use it. Those programs shouldn't be allowed to throw exceptions like that.
D3 doesn't export Unicode, which might be one issue. But if the Data Manager times out, then I suspect something is wrong with the connectivity into D3. Open a Connection Monitor from the Session Monitor and watch the connection when you make the request. I'm guessing it's either hanging or, more probably, it's falling into BASIC Debug.
Make sure all D3-side programs related to this are either all Flash-compiled, or all Not Flashed. Your app code will fall into Debug if it's not Flashed but MVNET.BP is.
If it's your program that's in Debug, fix it. If you're not sure which program it is, LIST-RUNTIME-ERRORS in DM.
If it's a MVNET.BP program, again work with Bluefinity. If you are using MVSP for connectivity then the Connection Monitor may be useless, you'll need to change that to an IP (Telnet) connection to see the raw data exchange.

Why are my flows not connecting?

Just starting with NoFlo, I'm baffled as to why I'm not able to get a simple flow working. I started today, installing noflo and the core components following the example pages, and the canonical "Hello World" example
Read(filesystem/ReadFile) OUT -> IN Display(core/Output)
'package.json' -> IN Read
works... so far so good. Then I wanted to change it slightly, adding "noflo-rss" to the mix, and changing the example to
Read(rss/FetchFeed) OUT -> IN Display(core/Output)
'http://xkcd.com/rss.xml' -> IN Read
Running it like this:
$ /node_modules/.bin/noflo-nodejs --graph graphs/rss.fbp --batch --register=false --debug
... but no cigar: there is no output, it just sits there.
If I stick a console.log into the source code of FetchFeed.coffee
parser.on 'readable', ->
  while item = @read()
    console.log 'ITEM', item ## Hack the code here
    out.send item
then I do see the output and the content of the RSS feed.
Question: Why does out.send in rss/FetchFeed not feed the data to the core/Output for it to print? What dark magic makes the first example work, but not the second?
When running with --batch, the process will exit once the network has stopped, as determined by the number of open connections between nodes dropping to zero.
The problem is that rss/FetchFeed does not open a connection on its outport, so the connection count drops to zero and the process exits.
One workaround is to run without --batch. Another one I just submitted as a pull request (needs review).

R2WinBUGS data entry "incompatible copy" error for conditional binomial likelihood, probit link, random effects (Psoriasis example)

I was working through the guide for calculating the effect size of different treatments using a network meta-analysis, as done in example 6.a. here.
It works fine in WinBUGS, but I want to do the analysis in R using R2WinBUGS so that I can automate the data input.
It reads the model just fine according to the log, but it gets hung up when reading the data.
E.g.:
display(log)
check(C:/Users/Temp User/.../Plaque_Psoriasis_Project/RE_Psoriasis.bug.txt)
model is syntactically correct
data(C:/Users/Temp User/.../Plaque_Psoriasis_Project/data.txt)
This is where the program just hangs.
The trap screen reads
incompatible copy
BugsCmds.TextError [000003A1H]
.beg INTEGER 1699018032
.end INTEGER 34825508
.......
I've tried reading the data as:
data <- list(t=t,C=C,r=r,n=n,na=na,nc=nc, ns=ns, nt=nt, Cmax=Cmax, meanA=meanA, precA=precA)
and as
data <- list("t","C","r","n","na","nc", "ns", "nt", "Cmax", "meanA", "precA")
Neither works.
I got R2WinBUGS to run the toy schools example from the documentation, so the setup itself works.
Any thoughts?