What would cause both nodes on a CAN bus to exhibit bit dominant errors? [closed] - embedded

I have 2 nodes connected on a CAN bus. The bus is laid out as follows:
Node 1 -> Node 2 -> 120 Ohm Terminator (GridConnect CAN terminator)
Attempting to transmit from either node leads to “bit dominant errors”. I’m not sure if I’m misunderstanding how termination works, but from what I’ve read elsewhere this looks like an issue that could be caused by faulty termination. I have CANH/CANL/CANGND on Node 2 connected to the CAN terminator. Do I need another terminator connected to Node 1 as well?
If termination is not the issue, what are some other possible causes? Transmission fails for both nodes, which suggests each node attempts to send a dominant bit on the bus but reads back a recessive one every time. I haven’t attached a scope yet, but I’d expect to see the CAN TX pin refusing to go low, since that would explain a node expecting to drive a dominant bit on the bus while reading back a recessive one.

You need terminators on both ends of the bus, between CANH and CANL. Also make sure that you have connected CANH to CANH and CANL to CANL.
120 Ohm <-> Node 1 <-> Node 2 <-> 120 Ohm
Please ask this kind of question on Electronics Stack Exchange; see maxy's comment above.
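If the wiring checks out and the errors persist, one way to split the problem is to take the transceiver and the bus out of the loop entirely by using the controller's internal loopback mode. Below is a rough sketch, assuming an STM32 with the HAL CAN driver and a handle hcan configured elsewhere (the ID, filter settings and delay are placeholders, not values from the question). If the frame comes back in loopback mode but never on the real bus, the fault is in the termination, the wiring or the transceiver rather than in the controller setup.

```c
/* Hypothetical loopback self-test, assuming the STM32 HAL CAN driver and a
 * CAN handle initialised elsewhere (CubeMX or manual init). */
#include "stm32f4xx_hal.h"

extern CAN_HandleTypeDef hcan;   /* configured elsewhere */

HAL_StatusTypeDef can_loopback_self_test(void)
{
    hcan.Init.Mode = CAN_MODE_LOOPBACK;          /* internal TX -> RX loopback */
    if (HAL_CAN_Init(&hcan) != HAL_OK) return HAL_ERROR;

    CAN_FilterTypeDef f = { .FilterMode = CAN_FILTERMODE_IDMASK,
                            .FilterScale = CAN_FILTERSCALE_32BIT,
                            .FilterFIFOAssignment = CAN_FILTER_FIFO0,
                            .FilterActivation = CAN_FILTER_ENABLE };
    HAL_CAN_ConfigFilter(&hcan, &f);             /* mask 0: accept everything */

    if (HAL_CAN_Start(&hcan) != HAL_OK) return HAL_ERROR;

    CAN_TxHeaderTypeDef tx = { .StdId = 0x123, .IDE = CAN_ID_STD,
                               .RTR = CAN_RTR_DATA, .DLC = 1 };
    uint8_t data[1] = { 0xA5 };
    uint32_t mailbox;
    if (HAL_CAN_AddTxMessage(&hcan, &tx, data, &mailbox) != HAL_OK)
        return HAL_ERROR;

    HAL_Delay(10);                               /* give the frame time to loop back */
    if (HAL_CAN_GetRxFifoFillLevel(&hcan, CAN_RX_FIFO0) == 0)
        return HAL_ERROR;                        /* nothing looped back */

    CAN_RxHeaderTypeDef rx;
    uint8_t rxdata[8];
    return HAL_CAN_GetRxMessage(&hcan, CAN_RX_FIFO0, &rx, rxdata);
}
```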

Related

How does DMA work? What is the workflow of DMA? [closed]

I am trying to learn the basics of DMA. I watched some videos on YouTube about it.
I have got a few queries:
Can we set/reset bits of registers using DMA? Like if I want to set the 4th bit of GPIO_ODR, can I do it using DMA?
Does DMA follow polling method or interrupt method?
If I want to set and reset bits of the registers of the GPIO (general purpose input-output) peripheral, what would the workflow of DMA be?
Will it be:
CPU->DMA->Peripheral->Register
and then for reverting back
Register->Peripheral->DMA->CPU
Is this workflow correct?
Please help me with this. Also, it would be great if you explain in simple words because I am completely new to this topic.
Thanks!
-Aditya Ubarhande
Disclaimer: My answer is based on my experience on DMA hardware of STM32 microcontrollers.
If the DMA you're using has access to the memory region where hardware registers reside (like GPIO), then yes, you can move data to these registers and change the bits. But be aware that this doesn't give you bit-wise read-modify-write access: DMA writes (or reads) the memory region (8, 16 or 32 bits, etc.) all at once. On STM32, timer-triggered, DMA-driven GPIO access can be used for synchronous parallel port implementations. On the other hand, DMA is generally used for event-triggered bulk memory transfers, so using it for one-time manipulation of hardware registers makes little sense.
In general, you arm the DMA and it generates an interrupt when its job is done (or half complete) or when some error occurs. DMA has its own control & status registers, so you can poll them instead of enabling & using interrupts. But most of the time, using interrupts is a better idea. It's also an option (probably a bad one) to fire & forget it, if you don't need to be notified when the transfer is complete.
In general, for any DMA transfer you configure source address, destination address, data length & width and the triggering condition (unless it's a memory-to-memory transfer). Of course, there can be additional settings like enabling interrupts etc.
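For the concrete question about setting bit 4, here is a minimal sketch, assuming an STM32 with the HAL DMA driver and a DMA stream already initialised elsewhere (the handle name hdma_example and the use of pin PA4 are placeholders, not anything from the question). It writes one word to GPIOA->BSRR, the set/reset register in front of ODR, so the whole-word DMA write ends up setting only the requested bit, and it shows the polling variant next to the interrupt variant mentioned above.

```c
/* Hypothetical sketch using the STM32 HAL; hdma_example must be configured
 * elsewhere for a software-triggered, memory-to-memory style transfer. */
#include "stm32f4xx_hal.h"

extern DMA_HandleTypeDef hdma_example;          /* configured elsewhere */

static const uint32_t set_pin4 = (1UL << 4);    /* BSRR bit 4 -> sets ODR bit 4 */

void dma_set_pa4(void)
{
    /* Polling variant: start the transfer, then poll the DMA status flags. */
    HAL_DMA_Start(&hdma_example,
                  (uint32_t)&set_pin4,           /* source: pattern in memory */
                  (uint32_t)&GPIOA->BSRR,        /* destination: GPIO set/reset register */
                  1);                            /* one data item */
    HAL_DMA_PollForTransfer(&hdma_example, HAL_DMA_FULL_TRANSFER, 10);

    /* Interrupt variant (more common): HAL_DMA_Start_IT() and handle the
     * transfer-complete / error callbacks instead of polling. */
}
```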

How can I inspect network traffic from a GPRS watch? [closed]

I recently received one of these Chinese watches that communicates over GPRS. I am trying to decipher the protocol used, as well as to figure out why it does not work.
I was thinking that there might be various approaches to inspecting the network traffic in this case.
Maybe there is a 3G/GSM operator that lets me inspect the network traffic? (does this exist?)
Create a fake base-station using software defined radio (seems incredibly overkill)
Maybe some other trick can work?
GPRS is what I'd call an extension to GSM. As such, it's encrypted.
So simply sniffing airborne traffic won't do. It's possible, though not overly likely, that your network operator uses weak encryption (slides), but deciphering GPRS traffic might be a bit much if you haven't done something like that before. Hence, your two approaches sound reasonable.
Maybe there is a 3G/GSM operator that lets me inspect the network traffic? (does this exist?)
No. At least, I don't think so (and on some level, I hope they don't. The potential for abuse is just too high).
However, you could be your own operator, as you notice yourself:
Create a fake base-station using software defined radio (seems incredibly overkill)
How's that overkill? You want to play man in the middle in a complex infrastructure. Becoming infrastructure does sound like the logical next step.
As a matter of fact, Osmocom's OpenBSC has recently gained support for GPRS modes. You can program your own SIM card and use it, without faking anything, within your own network. It's noteworthy that under any jurisdiction I can think of, you'll need a spectrum license to operate a mobile phone network, so you should only do this within a well-shielded enclosure.
Another approach that sounds far more viable and financially sound: Disassemble one watch, look out for the different ICs/modules, identify whether there's an isolated GPRS modem. Find the serial lines between that and your watch's CPU, and tap electrically into that with a <10USD serial-to-USB converter. Out of curiosity, I think we'd all like to know which model you got from where :D
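Once the adapter is wired to the serial lines, dumping what goes over them is straightforward. A minimal sketch, assuming Linux and that the USB-serial adapter enumerates as /dev/ttyUSB0 at 115200 baud (both are guesses; check dmesg and try the common modem baud rates):

```c
/* Dump raw bytes from a tapped serial line as hex so the protocol can be
 * eyeballed. Device path and baud rate are assumptions, not known values. */
#include <fcntl.h>
#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/ttyUSB0", O_RDONLY | O_NOCTTY);
    if (fd < 0) { perror("open"); return 1; }

    struct termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);                 /* raw bytes, no line editing or translation */
    cfsetispeed(&tio, B115200);
    tcsetattr(fd, TCSANOW, &tio);

    unsigned char buf[64];
    for (;;) {
        ssize_t n = read(fd, buf, sizeof buf);
        for (ssize_t i = 0; i < n; i++)
            printf("%02x ", buf[i]);
        if (n > 0) { putchar('\n'); fflush(stdout); }
    }
}
```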

Static or dynamic width access to a computer bus? [closed]

Suppose we have a simple processor, possibly in an embedded system, with one system bus; for the sake of argument, a 32-bit bus.
Now, if we have a couple of peripherals attached to the bus, one named PER0 for example, we can do two things:
Allow it fixed-width access to the main bus, for example 8 bits, so that PER0 always communicates with the bus in 8-bit chunks. This we can call static-width access.
Allow it to choose how it communicates with the bus in terms of data size, using signals that tell the processor which access mode it wants to use. For example, we create two signals, A1 and A0, between the processor and PER0, whose values mean:
00 - wait
01 - 8 bit
10 - 16 bit
11 - 32 bit
and so the processor will know whether to send 8-bit or 32-bit data on its bus, based on the values of A1 and A0. This we can call dynamic-width access to the bus.
Question:
In your experience, which of these two methods is preferred, and why? Also, in which cases should this be implemented? And finally, considering embedded systems, which method is more widely spread?
EDIT: I would like to expand on this topic, so I'm not asking for personal preferences, but for further information about these two methods, and their applications in computer systems. Therefore, I believe that this qualifies as a legitimate stackoverflow question.
Thanks!
There are multiple considerations. Naturally, dynamic width allows better utilization of bandwidth when your transactions come in multiple sizes. On the other hand, if you transfer 8 bytes and then the next 8, you double the overhead compared to the baseline of transferring the full block in one go (assuming you can cache it until it is fully consumed). So basically you need to know how well you can tell in advance which chunks you're going to need.
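A toy calculation makes the overhead argument concrete. The cycle counts below are made up purely for illustration: each transaction pays a fixed cost for arbitration and addressing, plus one cycle per 32-bit beat of payload, so splitting a block into smaller transactions pays the fixed cost more often.

```c
/* Toy model: fixed per-transaction overhead plus one cycle per 32-bit beat.
 * The constants are invented for illustration, not taken from any real bus. */
#include <stdio.h>

#define OVERHEAD_CYCLES 4   /* assumed per-transaction cost */
#define BEAT_BYTES      4   /* 32-bit bus: 4 bytes per cycle */

static unsigned cost(unsigned block_bytes, unsigned chunk_bytes)
{
    unsigned transactions = (block_bytes + chunk_bytes - 1) / chunk_bytes;
    unsigned beats = (block_bytes + BEAT_BYTES - 1) / BEAT_BYTES;
    return transactions * OVERHEAD_CYCLES + beats;
}

int main(void)
{
    /* 16 bytes moved as two 8-byte transactions vs. one 16-byte transaction */
    printf("2 x 8B : %u cycles\n", cost(16, 8));   /* 2*4 + 4 = 12 */
    printf("1 x 16B: %u cycles\n", cost(16, 16));  /* 1*4 + 4 =  8 */
    return 0;
}
```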
There's an interesting paper about the possibility of using such dynamically sized transactions between the CPU and the DRAM:
Adaptive granularity memory systems: a tradeoff between storage efficiency and throughput
There you can see the conflict since it's very hard to tell which transactions you'll need in the future and whether bringing only partial data may cause a degradation. They went to the effort of implementing a predictor to try and speculate that. Note that this is applicable to you only if you're dealing with coherent memory.

How many GPS satellites can be searched by a handset? [closed]

These days I play Google Ingress, but sometimes my phone cannot get a location fix. I tried using GPS tools to test my phone, and only 10 satellites could be found.
I compared my phone with a Samsung Galaxy S3.
My phone can find 10 satellites in a good signal area.
But the Samsung Galaxy S3 can find 16 satellites, whether in a good or a bad signal area.
16 satellites? What is the maximum number of GPS satellites that can be found?
Thanks
You don't say what type of phone you have, but the number will depend on how many GPS receiver channels are implemented in your phone chipset.
You can find out how many GPS satellites are available at http://www.navcen.uscg.gov/?Do=constellationstatus. Currently there are 31. I guess about half of these are below the horizon, so you wouldn't expect to see more than 16. I know of some GPS receivers that can handle up to 66 satellites.
Having 16 GPS satellites in view at the same time should be very rare (see user2151446's answer). 10 satellites should be plenty for an excellent fix though -- anything above 5-6 satellites is enough for a precise position in theory.
In practice however, often the more important factors are
Relative position of the visible satellites: if all of them are in the same region of the sky, triangulation won't yield very precise results. With more than 5 satellites in view this is very unlikely however. See Dilution of Precision.
Signal-to-noise ratio (SNR): if the received signal from the satellites is weak, the calculated position will be pretty inaccurate as well. Bad SNR is also an indicator of:
Signal reflections: If the signal is reflected from buildings etc, it will give the receiver misleading information. In the city this can be a pretty common problem.
Signal strength info is rather hard to get. On Android the "NMEA Recorder" app gives you a good view of the detailed GPS data as well as a log of the raw NMEA data, but I'm not sure if it includes the SNR info.
Edit: This SO question contains info on how to get the SNR programmatically.
Edit 2: The app GPS Status & Toolbox displays "signal strength" indicators (use it in horizontal layout). With some trying in good/bad reception areas this should give you a pretty good indication of what the situation actually is.
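Per-satellite SNR does travel in the raw NMEA stream, in the $GPGSV sentences, so if you have a log like the one mentioned above, a small parser can pull it out. A rough sketch (the sample sentence is illustrative, not from a real log; note that strtok collapses empty fields, so sentences with missing SNR values would misalign, which is acceptable for a quick look):

```c
/* Extract per-satellite SNR from a $GPGSV sentence.
 * GSV layout: $GPGSV,<total msgs>,<msg #>,<sats in view>,
 *             then up to 4 groups of <PRN>,<elevation>,<azimuth>,<SNR>. */
#include <stdio.h>
#include <string.h>

static void print_gsv_snr(const char *sentence)
{
    char buf[128];
    strncpy(buf, sentence, sizeof buf - 1);
    buf[sizeof buf - 1] = '\0';

    char *fields[32];
    int n = 0;
    for (char *tok = strtok(buf, ",*"); tok && n < 32; tok = strtok(NULL, ",*"))
        fields[n++] = tok;

    /* satellite groups start at field 4: PRN, elevation, azimuth, SNR */
    for (int i = 4; i + 3 < n; i += 4)
        printf("PRN %s: SNR %s dB\n", fields[i], fields[i + 3]);
}

int main(void)
{
    /* illustrative sentence, made up for the example */
    print_gsv_snr("$GPGSV,3,1,11,18,62,311,42,22,47,189,40,14,41,070,38,31,29,153,33*7A");
    return 0;
}
```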

What is meant by dividing a process into pages in the concrete sense? [closed]

The way I understand the notion of a 'process' is that it is a running instance of an executable program. The exe is in secondary memory and the running instance of it is in RAM. If this understanding is right, I would like to know what is really meant by this abstract description: 'dividing a process into pages, running some of the pages in RAM and keeping the rest in secondary memory, to be swapped in when needed'? The question here is in the context of virtual memory.
Adding a 'programming' context to the question, following suggestions from moderators:
Say I write a small program to list the numbers from 1 to 100, or to print 'Hello world', or some desktop utility that scans a text file and prints the words in the file one by one in a desktop window. Once these programs are compiled and linked, considering the final executable I have, how can that executable be 'divided' and run in parts in RAM when I run it? How should I grasp and visualise what 'should be' in RAM at a point in time and what 'should not'?
You have it (the division) right there, in the virtual to physical address translation. The virtual address space is split into blocks of one or several kilobytes (typically, all of the same size), each of which can be associated with a chunk (page) of physical memory of the same size.
Those parts of the executable (or process) that haven't been used yet, or haven't been used recently, need not be copied into physical memory from the disk, and so the respective portions of the virtual address space may not be associated with physical memory either. When the system runs low on free physical memory, it may repurpose some pages, saving their contents to the disk if necessary (or not saving them at all, if they contain read-only data/code that can simply be re-read from the executable).
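You can watch this happening from a normal program. A small demonstration, assuming Linux and a file named big.bin that already exists (the file name is a placeholder): mmap() creates the virtual mapping immediately, but a page is only brought into memory when it is actually touched, and mincore() reports which pages of the mapping are currently resident.

```c
/* Demand paging demo: map a file, check residency, touch one byte, check again.
 * Note: mincore() reflects the page cache, so if big.bin was recently read or
 * written, its pages may already show as resident before the access. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("big.bin", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    fstat(fd, &st);

    unsigned char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    long pagesz = sysconf(_SC_PAGESIZE);
    size_t npages = (st.st_size + pagesz - 1) / pagesz;
    unsigned char vec[npages];

    mincore(p, st.st_size, vec);          /* residency before touching the mapping */
    printf("page 0 resident before read: %d\n", vec[0] & 1);

    volatile unsigned char c = p[0];      /* touch one byte -> page fault -> page loaded */
    (void)c;

    mincore(p, st.st_size, vec);
    printf("page 0 resident after read:  %d\n", vec[0] & 1);
    return 0;
}
```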