Is it possible to intercept and change the voice during a call on a Symbian S60? - symbian

On a Symbian S60 phone, is it possible to create an application that wakes when a voice call starts, intercepts what the user says, applies a filter to the voice stream, and makes it sound like, say, Darth Vader or Donald Duck?

The short answer is no. Voice call audio is not available to applications, mainly for two reasons:
(The technical reason) On many devices, voice audio is entirely handled on a separate processor from the one on which applications run. The call audio processor is typically referred to as the baseband processor, and runs its own real-time (often proprietary) operating system. A separate processor (the 'application processor') hosts Symbian OS, on which the applications run.
Clearly, these two processors can talk to one another, for example to transfer packet-switched data between the network and the Symbian OS IP stack. In some circumstances - for instance video telephony - circuit-switched data is also routed via the AP, since this is necessary in order to capture/encode/decode/render the two video streams. Voice calls, however, typically cannot be routed via the AP. Since the telephony stack must meet real-time deadlines during a call, allowing the data path to cross to a separate processor introduces a large amount of additional complexity. Given that there is no existing use case which requires this data path, manufacturers understandably don't go to the trouble of making it work.
(The legal reason) Even if voice call audio were available on the AP, device manufacturers would use the Symbian OS security model to ensure that it could not be accessed by third-party applications. This is because the device would likely fail cellular communications type approval if the manufacturer could not guarantee that applications cannot tamper with voice calls.
Furthermore, in some jurisdictions, recording telephone calls without the knowledge of the other party is illegal. Allowing applications to access voice call data would clearly allow them to perform surreptitious recording, exposing the OEMs to liability. Even if the application were not doing anything malicious with the data (applying a Donald Duck filter, for example), it may still fall foul of these kinds of legal restrictions.
So, while your idea is a fun one, it is unlikely to be possible on a commercial Symbian device.

Related

Is it possible to communicate mobile to embedded devices through ultrasonic audio signal

I am looking for a wireless communication technology for exchanging data between devices via sound at ultrasonic frequencies. It is already possible to communicate between two mobile devices this way. I want to communicate between a mobile and an embedded device. Is it possible? Does any device work with this protocol?
Of course it is possible. Back in the 1970s my TV remote control used ultrasound to change the channel and turn the TV off. The control was somewhat rudimentary: IIRC, a short press changed the channel up and a long press turned the TV off. It worked quite reliably for these functions.
Providing more functionality would require a more complicated modulation scheme which, as has been said in another answer, would be prone to interference from other sound sources. This probably explains why infra-red communication signals are used in more modern remote control systems.
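To make "modulation scheme" concrete, here is a minimal sketch of the simplest option, on-off keying (OOK) of a near-ultrasonic carrier, in C. The carrier frequency, bit rate, and raw-PCM-to-stdout output are illustrative assumptions, not taken from any real product:

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define SAMPLE_RATE 48000
#define CARRIER_HZ  18000                /* near-ultrasonic; ordinary audio hardware tops out around here */
#define BIT_SAMPLES (SAMPLE_RATE / 100)  /* 100 bits per second */

/* Fill 'out' with one modulated bit: carrier on for 1, silence for 0. */
static void modulate_bit(int bit, int16_t *out)
{
    for (int i = 0; i < BIT_SAMPLES; i++) {
        double t = (double)i / SAMPLE_RATE;
        out[i] = bit ? (int16_t)(32767.0 * sin(2.0 * M_PI * CARRIER_HZ * t)) : 0;
    }
}

int main(void)
{
    const uint8_t message = 0xA5;   /* example payload byte */
    int16_t buf[BIT_SAMPLES];

    for (int b = 7; b >= 0; b--) {  /* send MSB first */
        modulate_bit((message >> b) & 1, buf);
        fwrite(buf, sizeof buf[0], BIT_SAMPLES, stdout);  /* raw 16-bit PCM */
    }
    return 0;
}
```

A real link would add synchronisation, framing, and error detection on top of this, and the receiver would need a band-pass filter and envelope detector; that is exactly where the interference problems mentioned above come in.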
It is possible - why shouldn't it be? Smartphones are just embedded computers too. I imagine getting CE/FCC/etc certifications with such an embedded device will not be so easy. And production testing ...
But is it feasible? Probably not. Power consumption is a lot higher than with any RF-link, it's more susceptible to noise (quite literally) and the required components (microphone+speaker) are bigger than RF-components (antenna).
And then there's a whole bunch of other things you need to keep in mind when working with ultrasound, starting with the plastic design of the embedded device. But also things like the effect of ultrasound on people and their pets etc.

Can HID Input/Feature reports have side-effects for the device?

To talk to an embedded device, I'm [ab]using the USB HID protocol, as seems to be common.
Due to limitations of the device stack (it's a Freescale Kinetis KL26 with their SDK) it supports only Output and GetFeature on EP0, and Interrupt Input on EP1. I'm mostly using EP0 for I/O.
What's prompted my question is the discovery that a Linux host downloads all Feature Reports at enumeration time (which is a problem, because on my device, they have side-effects). Is there any reliable way to stop the host getting reports except when an application explicitly requests them? [I've tried flagging all reports as "Volatile" but it doesn't take the hint.]
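For reference, "flagging as Volatile" means setting bit 7 of the main-item flags on the Feature item in the report descriptor. A hedged fragment of what I mean (the report ID, size/count, and vendor usage here are illustrative assumptions, not my actual descriptor):

```c
#include <stdint.h>

/* Report-descriptor fragment declaring one 8-byte feature report with
 * the Volatile flag (bit 7 of the main-item flags) set. */
static const uint8_t feature_report_fragment[] = {
    0x85, 0x01,  /* Report ID (1)                      */
    0x75, 0x08,  /* Report Size (8 bits per field)     */
    0x95, 0x08,  /* Report Count (8 fields)            */
    0x09, 0x01,  /* Usage (vendor-defined)             */
    0xB1, 0x82,  /* Feature (Data, Var, Abs, Volatile) */
};
```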
More generally: Does HID impose any rules to say that getting Feature or Input Reports must be idempotent or free from side-effects (kinda analogous to REST)? I can't find any such rule in the HID class specification, but maybe it's implicit.
Thanks

Difference between an API and a device driver

I'm trying to understand how the two relate to each other. As far as I know, they can both be part of the HAL. In the case of communication between an application and a graphics card - can an API get the job done on its own, or do we have to rely on both? Can an API directly communicate with the hardware, or do we always need a driver in between, which translates the commands of the API?
TL;DR
Think of an API as a specification that describes what to do, while a driver is an implementation that describes how to do it.
Details
As a contrived example, imagine we have three different audio cards that we want to play nicely with multiple operating systems. We can define an API for the card manufacturers that says, "Each card must support four methods: mute(), playsound(sound), volumeup() and volumedown()". By defining the API, we get a common interface that allows the operating system designers to support the audio devices without worrying about the hardware details. They know that if they want to mute the sound card, they can call mute(), or if they want to turn the volume up, they call volumeup().
It is then up to the device manufacturers to implement a driver that actually performs those actions. The driver will vary between the three different audio cards because they are different at the hardware level, but the API is consistent so the next higher abstraction level (the OS) doesn't need to know how to deal with the hardware.
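A hedged sketch of that contrived example in C: the struct below plays the role of the API (what to do), and each initialised instance plays the role of a driver (how to do it). The function names come from the text; everything else is illustrative:

```c
#include <stdio.h>

/* The "API": a common interface the OS can rely on. */
typedef struct {
    void (*mute)(void);
    void (*playsound)(const char *sound);
    void (*volumeup)(void);
    void (*volumedown)(void);
} audio_api;

/* One "driver": how hypothetical card A implements the interface.
 * A real driver would poke card-specific registers; we just print. */
static void a_mute(void)              { printf("card A: zero the gain register\n"); }
static void a_play(const char *sound) { printf("card A: DMA '%s' to the FIFO\n", sound); }
static void a_up(void)                { printf("card A: gain register += 1\n"); }
static void a_down(void)              { printf("card A: gain register -= 1\n"); }

static const audio_api card_a_driver = { a_mute, a_play, a_up, a_down };

int main(void)
{
    /* The OS sees only the API; the hardware details live in the driver. */
    const audio_api *snd = &card_a_driver;
    snd->playsound("chime.wav");
    snd->volumeup();
    snd->mute();
    return 0;
}
```

Cards B and C would supply their own audio_api instances, and the calling code would not change.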
For a more concrete example, consider the Advanced Configuration and Power Interface (ACPI) specification. It defines a common interface for operating systems to manage power consumption and thermal characteristics of hardware devices. There are methods that a device driver or firmware must implement in order to be "ACPI Compliant". This allows Windows operating systems and Linux variants to both perform the same actions on hardware devices without needing to implement their own drivers for the hardware.
Note: Windows performs ACPI actions through acpi.sys, which they call an "ACPI Driver". Don't let the terminology confuse you; even though they call it a driver, it is really a window into the ACPI interface. Linux uses the acpi kernel module to do the same thing, and Linux doesn't call it a driver. Perhaps ACPI wasn't the best example, but I don't have anything better at the moment.

What is an embedded system? Can Mobile be considered as an embedded product?

What is meant by an embedded system?
If a system/machine or product which we are making is for multiple purposes, can we consider it an embedded system? Or is it only a system dedicated to a particular task that is considered an embedded system? Can a PC/mobile/laptop be considered an embedded system or not?
Generally an embedded system is one placed into operation for a specific, narrow purpose, and lacking the kind of general purpose user interfaces you would find on an ordinary desktop/laptop.
That is not to say though that an embedded system cannot have these - I've seen test equipment such as network analyzers running desktop operating systems, with mouse/keyboard ports. One could probably hack one of those to use it for general purpose computing, but it would not be cost effective.
Going the other way, you can take a general purpose computer and shove it into an embedded application. However, systems optimized for embedded use may be more robust, support better real-world I/O (often retaining legacy ports), and use parts expected to be available over longer lifetimes than used in commodity PCs (if one fails, you want to be able to replace it with the exact same thing).
Often embedded systems are smaller - 8-bit processors (even 4-bit or serial-core, historically) with limited memory; though 32-bit cores such as the ARM family are now inexpensive and commonplace, and tens to hundreds of megabytes of memory are not unknown.
Older cellphones would have a lot in common with embedded systems, but rather obviously contemporary smartphones are catching up in power and versatility, though still often constrained by user interface. Software-wise, some "think small" habits endure - for example, Android's compact bionic C library and toolbox shell have similar design goals to embedded C libraries and busybox. In other ways though, expansive resource-gobbling user experiences are now the norm on phones. Toss tablets based on the same processors and accessorized with keyboards into the mix, run a kernel designed originally for desktop computers on them, and the real difference is between UI software stacks designed to run segregated "apps" on a touch interface, vs. one designed to run more traditional programs.
This is a question that even embedded systems experts often ask and discuss. There is as with many things a spectrum, and simple definitions are difficult.
My preferred definition is: a system containing one or more computing or processing element that is not a general purpose computer.
Some systems are inarguably embedded within that definition, and include such things as washing machine controllers, telephone switches, satellite navigation equipment, marine chart-plotters, automotive ECUs, laser printers etc.
Some are less easily categorised. A first-generation digital mobile phone is almost certainly an embedded system, while more modern feature phones and smartphones are somehow different: they can run apps chosen and installed by end-users, allowing them to perform tasks not determined by the manufacturer. With increasing capabilities, they are essentially hand-held computers, and the range of apps is sufficient to regard them as "general purpose".
With these more ambiguous systems, it is useful to ask perhaps not what an embedded system is, but rather what embedded systems development is. For example, the manufacturer of your smart-phone deployed on it an operating system, the signal processing and communications stack required for it to operate as a telephone, and all the device drivers and stacks for WiFi, USB, data storage etc. - this is certainly embedded systems development. However, the guys writing apps for PlayStore or AppStore etc. are writing to a defined common platform abstracted by all that embedded code - that is not embedded systems development by any definition that I would accept, unless perhaps the application were for some bespoke vertical market - like the delivery-signature apps UPS drivers have on PDAs, for example - in that environment the "general-purpose" device has been re-purposed as a "special-purpose" device.
With respect to a PC; a PC can be the embedded computing element in a system that is not a general purpose computer. Industrial PCs are commonly found embedded in manufacturing and packaging machinery, CNC machine tools, medical equipment etc. Although they share hardware architecture with desktop PCs they do not necessarily look like desktop PCs and come in many different form factors of both boards, and enclosures. Even within a desktop PC however, there are many examples of embedded computing elements, and embedded software such as the BIOS responsible for bootstrapping the system, the keyboard controller and disc drive controllers for example.
An embedded system is any electronic system that uses a CPU chip, but that is not a general-purpose workstation, desktop or laptop computer.
An embedded system is a special-purpose computer system designed to perform a dedicated function. Unlike a general-purpose computer, such as a personal computer, an embedded system performs one or a few pre-defined tasks, usually with very specific requirements, and often includes task-specific hardware and mechanical parts not usually found in a general-purpose computer.
Read more: http://romux-loc.com/tutorials/embedded-system
Embedded systems are devices that do some specific job, unlike our laptops, which can play music, take pictures, and format documents. They are devices like water filters, washing machines, vending machines, etc.
They are programmed for some specific work, and they do that work in a super loop that reacts to user input. The vending machine, for example, always performs the same action when you opt for coffee with the button provided on it, as sketched below.
So, in that way, a mobile phone is not an embedded system, because it has no super loop and it can do various general-purpose things just like a computer.
An embedded system has memory constraints and timing constraints, and it does its work in a limited space.
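As a minimal hedged sketch of the "super loop" idea (the two helper functions are hypothetical stand-ins for real hardware access, stubbed here so the sketch runs on its own):

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical hardware helpers; on real hardware these would poll a
 * GPIO register and drive an actuator. Stubbed so the sketch runs. */
static bool coffee_button_pressed(void) { return true; }
static void dispense_coffee(void)       { puts("dispensing coffee"); }

int main(void)
{
    for (;;) {                  /* the "super loop": runs forever */
        if (coffee_button_pressed())
            dispense_coffee();  /* same fixed response every time */
        /* ...poll other inputs, kick the watchdog, etc. ... */
    }
}
```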
An embedded system is any device that includes a programmable computer but is not itself a general-purpose computer. So the mobile is not an embedded system, because it has no super loop and it can do various general-purpose things just like a computer, whereas an embedded system has memory and timing constraints and does its work in a limited space.
An embedded system is a microprocessor-based system in which the computer is built into the device it controls. An embedded system can perform a set of predefined tasks. Because the system is used for specific tasks, engineering can optimize a given product and decrease its size, as well as its computational resources and its final cost.
Embedded systems are all around us, and for that reason we are often unaware of their computational capacity, since we are so accustomed to such mechanisms. Embedded systems run on machines that can work for several years without stopping and that, in some cases, have the ability to self-correct.
A well-known example of an item that uses embedded systems is the famous smartphone, which performs specific functions and has more limited mechanisms than a computer.
Below is a list of some examples where embedded systems are applied:
Electronic voting machines
Video games
Calculators
Printers
Hospital equipment
Vehicles
Some home appliances
Mobile phones
Routers
A definition that may help to capture the difference:
An embedded system can be considered a system with which another embedded system cannot be developed. So, at present, one cannot develop an 'embedded system' using a mobile phone. If that becomes possible with a mobile device, then it should be considered a general-purpose system.
An example of an "embedded system" is a chip that is inserted underneath a dog's skin for identification purposes. Words like "embedded system" have specific meanings that only specialists understand. Such ambiguities make understanding technical language difficult for ordinary people.
embedded (ɪmˈbɛdɪd)
adj
fixed firmly and deeply in a surrounding solid mass
constituting a permanent and noticeable feature of something
(Journalism & Publishing) journalism assigned to accompany an active military unit
(Grammar) grammar inserted into a sentence
(Computer Science) computing (of a piece of software) made an integral part of other software

Accessing memory space / registers on externally connected devices through software

This question is a bit vague, and I apologize for that, but a fairly vague answer will do :)
How do people typically access memory addresses of external devices (say, connected to a PC through USB, or even just, say, a multipurpose microcontroller)? I'm wondering how software is able to find the addresses to write to in register or EEPROM space.
For example if I want to write a value to register 0x1234, does software just send this information (the register and the value to be written) to some sort of driver that "talks" to the device and takes care of the value change through hardware?
Is implementation of this functionality mostly a hardware endeavor?
Thanks!
Let's use as an example a fairly common USB peripheral controller that is based on an 8-bit 8051 microcontroller core. One side of it attaches to the USB host controller on a desktop computer; the other side is a USB device controller that presents itself to the host as a FIFO endpoint.
Some 8051 firmware will be required to initialize the device side. A class driver will be required on the host side. Once those are in place, the application developer will have a device name on the host side which may be opened for read/write. Sometimes a vendor will provide a library to perform device specific tasks and isolate the user from the raw device. Often a Windows DLL is available to hide the low level I/O and present device operations as function calls.
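As a hedged host-side sketch of "opened for read/write" (the device node path and the 4-byte message format are illustrative assumptions, not a real vendor protocol):

```c
#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical device node exposed by the class driver. */
    int fd = open("/dev/usb_fifo0", O_RDWR);
    if (fd < 0)
        return 1;

    /* "Write value 0x42 to register 0x1234" in our assumed format:
     * opcode, address high byte, address low byte, value. */
    uint8_t msg[4] = { 0x01, 0x12, 0x34, 0x42 };
    write(fd, msg, sizeof msg);

    close(fd);
    return 0;
}
```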
Additional 8051 firmware monitors the FIFO from the device end and interprets messages sent from the host application or DLL, then takes action. These actions may be low-level, such as a read/write of a memory location or register. They may be high-level, such as setting the PWM value of a programmable counter array.
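And a hedged sketch of the firmware side of the same assumed protocol (the opcodes are the ones assumed above; the FIFO and register file are faked with arrays so the sketch runs standalone, where real firmware would read the USB endpoint FIFO and poke hardware registers):

```c
#include <stdint.h>
#include <stdio.h>

#define OP_WRITE_REG 0x01
#define OP_READ_REG  0x02

/* Fake FIFO contents and register file, standing in for hardware. */
static uint8_t fifo[] = { OP_WRITE_REG, 0x12, 0x34, 0x42,   /* write 0x42 -> 0x1234 */
                          OP_READ_REG,  0x12, 0x34 };       /* read it back         */
static size_t  fifo_pos;
static uint8_t regs[0x10000];

static int     fifo_empty(void)    { return fifo_pos >= sizeof fifo; }
static uint8_t fifo_get(void)      { return fifo[fifo_pos++]; }
static void    fifo_put(uint8_t b) { printf("to host: 0x%02X\n", b); }

int main(void)
{
    while (!fifo_empty()) {
        uint8_t  op   = fifo_get();
        uint16_t addr = (uint16_t)fifo_get() << 8;  /* high byte, e.g. 0x12      */
        addr         |= fifo_get();                 /* low byte, 0x34 -> 0x1234  */

        switch (op) {
        case OP_WRITE_REG:
            regs[addr] = fifo_get();  /* value byte follows the address  */
            break;
        case OP_READ_REG:
            fifo_put(regs[addr]);     /* send the value back to the host */
            break;
        default:
            break;                    /* unknown opcode: ignore here     */
        }
    }
    return 0;
}
```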
So your hypothetical description of a write to register 0x1234 is not far from how it is often implemented.