How can I inspect network traffic from a GPRS watch? [closed] - embedded

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I recently received one of these Chinese watches that communicates over GPRS. I am trying to decipher the protocol it uses, as well as figure out why it does not work.
I was thinking that there might be various approaches to inspecting the network traffic in this case.
Maybe there is a 3G/GSM operator that lets me inspect the network traffic? (does this exist?)
Create a fake base-station using software defined radio (seems incredibly overkill)
Maybe some other trick can work?

GPRS is essentially an extension of GSM, and as such it's encrypted.
So simply sniffing airborne traffic won't do. It's possible, though not overly likely, that your network operator uses weak encryption (slides), but deciphering GPRS traffic might be a bit much if you haven't done something like that before. Hence, your two approaches sound reasonable.
Maybe there is a 3G/GSM operator that lets me inspect the network traffic? (does this exist?)
No. At least, I don't think so (and on some level, I hope they don't; the potential for abuse is just too high).
However, you could be your own operator, as you notice yourself:
Create a fake base-station using software defined radio (seems incredibly overkill)
How's that overkill? You want to play man-in-the-middle in a complex infrastructure. Becoming infrastructure does sound like the logical next step.
As a matter of fact, Osmocom's OpenBSC recently gained support for GPRS modes. You can program your own SIM card and use it, without faking anything, within your own network. Note that in any jurisdiction I can think of you'll need a spectrum license to operate a mobile phone network, so you should only do this inside a well-shielded enclosure.
Another approach that sounds far more viable and financially sound: disassemble the watch, look for the different ICs/modules, and identify whether there's a discrete GPRS modem. Find the serial lines between it and the watch's CPU, and tap into them electrically with a <10 USD serial-to-USB converter. Out of curiosity, I think we'd all like to know which model you got, and from where :D

Related

how to write sensor's libraries from scratch [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 2 years ago.
Can someone explain how I can write a sensor library from scratch? I read the datasheet and some Arduino libraries, but I did not understand how they were written.
It's not a trivial task to write a library for embedded projects. Most of the time, it's almost impossible to write a completely generic one that satisfies everyone's needs.
Don't let the Arduino library examples fool you. Most of them are not designed or optimized for real-world applications with strict timing constraints. They are useful when reading that sensor is the only thing your embedded system does. You can also use them sequentially in a master loop when blocking read operations are not a concern.
Complex embedded applications don't fit this scheme. Most of the time you need to execute more than one task at once, and you use interrupts and DMA to handle your sensor streams. Sometimes you need an RTOS. Timing constraints can be satisfied by using the advanced capabilities of the STM32 timer (TIM) modules.
Connecting timers, DMA channels, interrupts and communication (or GPIO) modules so that they work in harmony is not easy (add an RTOS on top, if you use one), and it's almost impossible to generalize. Here is a list of examples that come to mind:
You need to allocate channels for DMA usage. Your library must be aware of the channel usage of other libraries to avoid conflicts.
TIM modules are not all the same. They may have different numbers of I/O pins. Some peripherals (like the ADC) can be triggered by some TIM modules but not by others. There are constraints if you want to chain them: you can't just take one timer and connect it to an arbitrary other one.
The library user may want to use DMA or interrupts, or maybe even an RTOS. You need to create different API calls for all of these situations.
If you support an RTOS, you must consider different flavors. Although RTOS concepts are similar, their approaches to those concepts are not the same.
Hardware pin allocation is a problem. In Arduino libraries, the user just says "use pins 1, 2, 3 for the SPI". You can't do that in a serious application: you need to use pins that are actually connected to the hardware modules, while also avoiding conflicts with other modules.
Devices like the STM32 have a clock tree, which affects the clock of each peripheral module. Your library must be aware of the clock frequency of the module it uses. Low-power modes can change these settings and break a library that isn't flexible enough to cope. Some communication modules have more complicated timing settings; the CAN bus module, for example, needs a non-trivial calculation for both the bit rate and the bit sampling position.
[And probably many more reasons...]
This is probably why the microcontroller vendors provide offline configuration and code-generation tools, like CubeMX for the STM32. Personally, I don't like them and I don't use them, but I must admit that I still use the CubeMX GUI to work out hardware pin allocations, even though I don't use the code it generates.
It's not all hopeless if you only want to create libraries for your own use and your own programming style, because then you can define the constraints precisely from the start. I think creating libraries is easier in C++ than in C. While working on different projects, you slowly create and accumulate your own code snippets, and with some experience these can evolve into easily configurable libraries. But don't expect anyone else to benefit from them as much as you do.

About embedded firmware development [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
In the past few days I have learned how important the RTOS layer on top of the embedded hardware is.
My question is: is there any difference between a device driver written in C and flashed directly onto a microcontroller, and a Linux device driver?
This question is a little broad, but an answer, a little broad itself, can be given.
The broadness comes from the fact that "embedded hardware" is not a precise term. That hardware ranges from 4-bit or 8-pin microcontrollers up to big CPUs which have many points in common with the processors typically used in Linux machines (desktops and servers). Linux itself can be tailored to the point that it no longer resembles a normal operating system.
Anyway, a few generally acceptable statements can be made. Linux, in its "plain" version, is not a real-time operating system; with the term RTOS, the "real-time" part is implied. So this can be one difference. But the most important thing, I think, is that embedded firmware tries to address the hardware and the task at hand without anything else added. Linux instead is general purpose: it offers a lot of services and functionality that in many cases are not needed and only add cost, lower performance, and more complexity.
Often, in a small or medium embedded system, there is not even a "driver": the hardware and the application talk directly to each other. Of course, when the hardware is (more or less) standard (like a USB port, an Ethernet controller, or a serial port), the programming framework may provide ready-to-use software that is sometimes called a "driver". Very often, though, it is not really a driver but simply a library with a set of functions to initialize the device and exchange data. The application uses those library routines to manage the device directly. The OS layer is not present, or, if the programmer wants to use an RTOS, he must check that there are no conflicts.
A Linux driver is not targeted at the application but at the kernel. And the application seldom talks to the driver; it uses instead a uniform language (typically the "file system idiom") to talk to the kernel, which in turn calls the driver on behalf of the application.
A simple example I know very well is a serial port. Under Linux you open a file (maybe /dev/ttyS0), use some ioctl calls and the like to set it up, and then start to read from and write to the file. You don't even care that there is a driver in the middle, and the driver was written without knowledge of the application; the driver only interacts with the kernel.
In many embedded cases, instead, you set up the serial port by writing directly to the hardware registers; you then write two interrupt routines which read from and write to the serial port, getting and putting data from/into RAM buffers. The application reads and writes data directly in those buffers. Special events (or not-so-special ones) can be signaled directly from the interrupt handlers to the application. Sometimes I implement the serial protocol (checksums, packets, sequencing) directly in the interrupt routine. It is faster and simpler, and uses fewer resources. But clearly this piece of software is no longer a "driver" in the common sense.
Hope this answer explains at least a part of the whole picture, which is very large.

Udacity: Functional Hardware Verification. What are the implementations? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
I've just finished watching the first week of the Functional Hardware Verification course at Udacity. The promo says it requires both electrical-engineering and software-engineering skills, which is why it intrigued me. But I got the impression that it is only for chip designers. Am I wrong?
Can it also be useful for embedded software development?
For example, how can I use this information with a Raspberry Pi and/or an Android device? Is that possible, or am I wasting my time with this course?
I'll be happy if someone could give me an insight.
1) Is this a Stack Overflow question? Not sure; we'll see what happens.
2) Is this a waste of time? I would argue no: it's free, just watch and learn something.
3) Does it apply to embedded work on a Raspberry Pi or Android? Probably not. It depends on your definition of embedded, first off: if you are making API/library calls to an operating system or environment, that is just writing applications. If you are digging down into the bare metal, it gets closer, but not quite. It does apply if you work somewhere where you work hand in hand with, or are (or will be) among, the people designing chips, FPGAs, CPLDs, etc., and the company is willing to move into the 1990s or 2000s and let its software developers develop against the RTL in simulation, with access to the simulator licenses (very expensive; it doesn't take many Cadence licenses to shadow your salary).
During the chip-development phase (as opposed to the post-silicon phase, where you sell the chips for a while before the next chip starts) we build our own sims using the kinds of things I assume this class is teaching (perhaps not, though, since the traditional testing methods are not quite what I am talking about). We sim portions of the chip or the whole chip, and use the "foreign language interface", or whatever the vendor calls it, to connect software to the simulated hardware. We do it through an abstraction layer, in such a way that a high percentage of the code we write against the sim will run against silicon using a different abstraction layer/shim. This can give the software a head start of months to a year or more, and it can find bugs in the logic design as well as in the design of the hardware/software interface (are interrupts a good idea, can we poll fast enough, should we use DMA, etc.).
Cadence is of course going to push their product and their ways of doing things, even though their products support a wide range of features. There are other tools. I am a fan of Verilator: open source, free. But it is very particular about the Verilog it supports (mostly synthesizable code only, and that is fine by me), so depending on the Verilog author you are relying on, they may not have habits that match what Verilator is looking for. Icarus Verilog is much more forgiving, but very slow. Verilator is something like 10 times slower than Cadence, and Icarus is slower than that. But free is free... There are a number of projects at OpenCores, for example, that you can play with if you want to see this in action; amber, mpx, and altor32 are ones I have played with.
If you land one of these chip/board company jobs, then familiarity with simulators like Cadence and ModelSim and with the free tools (Verilator, Icarus, GTKWave, GHDL, etc.) is valuable. So is being able to read Verilog and/or VHDL, which is not hard if you are already a programmer; the only new thing is that some of the code is truly "parallel", and with today's multi-core processors that is not actually new for a programmer anyway. If you are able to interface software to the simulator, then you are an asset to that company, because you can facilitate development against the simulated hardware and save the company money in multiples of your salary, through bugs found before silicon and a schedule shortened by many months.
Lastly, being able to look at waveforms and see your code execute is an addictive experience.
Just like learning bare metal or assembly, getting familiar with hardware at this next lower level can only help you as a programmer, even if the experience is with logic or processors that are not the ones you program. Remember that hardware designers are just like programmers: take any N of them, give them the same task, and they may come up with up to N different solutions. Just because one implementation of a MIPS clone has certain internal details does not mean that all MIPS processors, let alone all processors, look like that on the inside.

Static or dynamic width access to computer BUS? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
Suppose we have a simple processor (it could be an embedded system) with one system bus; for the sake of argument, a 32-bit bus.
Now, if we have a couple of peripherals attached to the bus, one named PER0 for example, we can do two things:
Allow it fixed-width access to the main bus, for example 8 bits, so that PER0 always communicates with the bus in 8-bit units. This we can call static-width access.
Allow it to choose how it will communicate with the bus in terms of data size, using signals with which it tells the processor the access mode it wants to use. For example, we create two signals, A1 and A0, between the processor and PER0, whose values mean:

00 - wait
01 - 8-bit
10 - 16-bit
11 - 32-bit

and so the processor will know whether to send 8-bit or 32-bit data on its bus, based on the values of A1 and A0. This we can call dynamic-width access to the bus.
Question:
In your experience, which of these two methods is preferred, and why? In which cases should each be used? And finally, considering embedded systems, which method is more widespread?
EDIT: I would like to expand on this topic, so I'm not asking for personal preferences, but for further information about these two methods and their applications in computer systems. Therefore, I believe this qualifies as a legitimate Stack Overflow question.
Thanks!
There are multiple considerations. Naturally, dynamic width allows better utilization of bandwidth when your transactions come in multiple sizes. On the other hand, if you transfer some 8 bytes, and then the next 8, you double the overhead compared to the baseline of transferring the full block in one go (assuming you can cache it until it is fully consumed). So basically you need to know how well you can tell in advance which chunks you're going to need.
There's an interesting paper about the possibility of using such dynamically sized transactions between the CPU and the DRAM:
Adaptive granularity memory systems: a tradeoff between storage efficiency and throughput
There you can see the conflict: it's very hard to tell which transactions you'll need in the future and whether fetching only partial data may cause degradation. They went to the effort of implementing a predictor to try to speculate on that. Note that this is applicable to you only if you're dealing with coherent memory.

Microcontroller interfacing [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 8 years ago.
I'm new to the world of embedded programming and I'm looking for information about interfacing with a microcontroller over I2C, USB, UART, CAN, etc. Does anybody know any good links, books, or tutorials on this subject? Since I'm a real newbie here, I'd prefer it to be as basic as possible.
Since you are already a desktop developer, you can probably jump in somewhere in the middle. Download the user manual for the controller you plan to use, and get the example code from the manufacturer's site for one of the simpler peripherals; the UART is a good choice.
Get a development board and an Eclipse/CrossWorks/whatever development system that supports your board, and get something to work: flash an LED, echo UART characters, or the like. Don't try to use a multitasker and interrupt drivers first off; just poll the UART with as few lines of code as possible. Why? Because just getting a development setup to compile, link, download and run one line of code is a considerable exercise in itself, without any complications from complex code. You have to debug your development setup and hardware before you can effectively write or debug any code for the controller. Just getting 'blinky' code that merely flashes an on-board LED to work is a huge step forward :)
Most controllers have dedicated groups/blogs, either on the uC manufacturers site or linked from it - join them.
If you want to get into this efficiently, get a board and try to get it to do something; there is no better way. Once you can get an LED to flash or a UART to issue a string of chars, you're off to the races :)
Developing simple 'blinky' or UART-polling functions is not wasted effort; you can continue to use them later when you have more complex code. Blinking an LED (with the delay loop replaced by an OS sleep) is such a good indicator that the code is sort of running that I've always kept it in on delivered systems. The UART poll is useful too: it runs without any interrupts, so you can call it from the data/prefetch/whatever abort vectors to issue the many 'CRITICAL ERROR - SYSTEM HALTED' messages that you will get during ongoing development :)
Wikipedia really is as good a place to start as any to learn the basics surrounding each of these mechanisms.
Once you've got a grasp of the basics, I'd recommend looking at an actual datasheet and user's guide for a microcontroller that has some of these features. For example, this 16-bit PIC has dedicated UART, I2C and SPI modules. Looking at the relevant sections of the documentation, armed with your new knowledge of the underlying principles, you'll start to see how to design a system that uses them.
The next step would be to buy a development board for such a device and then, using example code (of which there is plenty), code up some data links yourself. Incidentally, a UART is certainly the easiest to test, since every PC can talk RS-232 from a terminal, so you can code a loopback to transmit and receive characters with relative ease. With I2C and SPI, however, I think you would need to buy a dedicated host dongle to speak those protocols from the PC side, although I think Windows 8 might be introducing native support (but don't quote me on that).
I haven't implemented a data link using CAN, so I can't comment specifically, although a quick Google search shows there's a PIC family that supports it, so I'm sure you could follow a similar approach. As for USB, I consider it a bit of a black art, so I'll leave that one to someone else, although I do know you can get a USB software stack for PICs, so again, a similar approach could probably be followed.
On a final note, I've only mentioned PICs as an example microcontroller. If you're not familiar with microcontrollers in general, you'll soon realise that the major microcontroller families from companies like Microchip, TI and Atmel generally follow the same design principles, so I would have a browse and pick the family whose documentation you're most comfortable with (or least uncomfortable with, as suggested by Martin James!).
For the most part, the datasheet or reference manual for the microcontroller you intend to use is what you need. Each vendor implements these interfaces in different ways, so that is your only definitive source of low-level programming information.
I2C and UART are relatively simple, with no standard higher-level protocol stack; that would be defined by you or by the device you are connecting to. Your microcontroller vendor will almost certainly have example code and/or application notes.
CAN is more complex, and CAN networks typically use a higher-level application protocol, of which there are several for different application domains, such as CANopen, NMEA 2000, DeviceNet, SAE J1939 and more. Most often a third-party library is the most cost-effective way to implement an application protocol stack, but if the network comprises only devices you are implementing yourself, that complexity may not be necessary. Again, your microcontroller vendor is likely to have examples.
USB is very strongly defined by the USB Implementers Forum, and the protocol stacks are non-trivial. Most microcontroller vendors with on-chip USB interfaces provide sample code and application notes for at least device-class USB. USB host stacks are more complex, and again a third-party library (relatively expensive) is the quickest way to market; the fees you would otherwise be obliged to pay the USB-IF may make writing your own host stack no less expensive. Even for a USB device interface you strictly need a USB Vendor ID, at $2000 from the USB-IF. Some microcontroller and library vendors will allow you to use their Vendor ID for a defined subset of Product IDs at little or no cost, but you would probably have to be a significant customer in terms of volume. For internal experimentation you can probably get away without an official VID, but such a product could not be released commercially.