
Audio Integration and Audio Customer Support in the Mobile World: What Does It Mean?

PentaGuy
Blogger

Our colleagues from the Audio team would like to share an overview of their activity. Thank you, Florin, Vasile, Alex and Stefan, for this piece.

Being part of the multimedia audio team for mobile platforms, we can say that our most important goal is to ensure quality of service (QoS), as our work ends up in the hands of many mobile station users. On medium-, low- and ultra-low-cost devices, where the budget is tight, the audio experience and the quality of sound are vital to proving that QoS. Even though the target is the low-cost and ultra-low-cost market, the audio development effort is comparable with the effort invested in developing high-end mobile terminals (smartphones).

The main responsibility of our team is to solve issues in our area of expertise and to respond to customers' requests for new features, providing both development expertise and continuous support so that our software integrates correctly with their applications.

During the integration/development process, issues can occur due to DSP algorithm performance, in which case tuning experts are involved after a rigorous description of the problem. Interfacing the system and its peripherals with physical components on customer product boards raises other issues: from product to product, the customer may change external components, so the software must be adapted as well to meet the desired quality specifications.
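As a hedged illustration of this kind of adaptation, the sketch below selects per-board tuning parameters when a product revision swaps an external component such as the speaker amplifier. All names and values are invented for the example; real projects would typically load such tables from calibration files rather than hard-coding them.

/*
 * Minimal sketch (hypothetical names and values): selecting per-board
 * audio tuning parameters when a product revision swaps an external
 * component such as the speaker amplifier.
 */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

typedef struct {
    const char *board_name;
    int16_t  speaker_gain_db;  /* analog output gain for the fitted amp   */
    uint16_t hpf_cutoff_hz;    /* high-pass cutoff matched to the speaker */
} audio_tuning_t;

static const audio_tuning_t tuning_table[] = {
    { "rev_a_boomer_amp", -3, 300 },  /* original amplifier          */
    { "rev_b_classd_amp",  0, 250 },  /* cheaper Class-D replacement */
};

static const audio_tuning_t *select_tuning(const char *board)
{
    for (size_t i = 0; i < sizeof(tuning_table) / sizeof(tuning_table[0]); i++)
        if (strcmp(tuning_table[i].board_name, board) == 0)
            return &tuning_table[i];
    return NULL; /* unknown board: caller falls back to safe defaults */
}

int main(void)
{
    const audio_tuning_t *t = select_tuning("rev_b_classd_amp");
    if (t)
        printf("gain=%d dB, hpf=%u Hz\n",
               t->speaker_gain_db, (unsigned)t->hpf_cutoff_hz);
    return 0;
}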

Other types of issues are strictly logic-related. The customer may provide a baseband chip equipped with a software suite and several interfaces aimed at the end client and the integrator, but those interfaces must be understood and matched to the baseband's performance, timing requirements and physical constraints. Support is clearly needed in this area, as the end client's requests must be in accordance with the customer's products. The audio team can cover the interfacing process, as well as come up with new, compatible approaches to requests that could be categorized as “hard to do”.

For every issue, the root cause must be precisely identified, so that the software correction does not impact other modules or the behavior of the entire system. For instance, a ‘pop-noise’ issue can originate either in the digital or in the analog part of the system. To identify its cause, an engineer would most likely disable the analog part of the audio system and check whether the data in the digital part is continuous and a correct representation of what is expected. If spectral analysis in the digital domain reveals discrepancies in the frequency domain, or if there are discontinuities in the overall expected waveform, the issue can be analyzed further in the faulty domain. Likewise, if no issues are found in the digital domain, the analog part of the system (often external to the baseband chip) can be investigated in detail, because the problem almost certainly lies there. Special attention is therefore paid to the calibration and tuning of algorithms and to the timing sequence of power-ups and hardware initializations: an electrically loaded output driver that is switched on abruptly will most likely produce an undesired effect. The solution in this case is to use sequencers that activate the analog audio output driver step by step, in the right order.
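As a rough sketch of such a sequencer (register names, encodings and settling times are invented here; on real silicon they come from the chip's datasheet), the C code below powers the output path up in stages and ramps the gain instead of jumping to the target:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical codec register map; real addresses, step order and
 * settling times come from the datasheet. */
#define REG_REF_EN    0x00u  /* bandgap / reference generator */
#define REG_BIAS_EN   0x04u  /* output stage bias current     */
#define REG_DRV_EN    0x08u  /* speaker/headset output driver */
#define REG_DRV_GAIN  0x0Cu  /* driver gain register          */

/* Host-side stubs so the sketch compiles; on target these would be
 * memory-mapped register writes and a timer-based delay. */
static void codec_write(uint32_t reg, uint32_t val)
{
    printf("write reg 0x%02X <- 0x%02X\n", (unsigned)reg, (unsigned)val);
}
static void delay_ms(uint32_t ms) { (void)ms; }

void audio_output_power_up(int target_gain_db)
{
    codec_write(REG_REF_EN, 1);       /* 1. references first          */
    delay_ms(5);                      /*    let the reference settle  */
    codec_write(REG_BIAS_EN, 1);      /* 2. bias the output stage     */
    delay_ms(2);
    codec_write(REG_DRV_GAIN, 0);     /* 3. enable the driver muted   */
    codec_write(REG_DRV_EN, 1);

    /* 4. Ramp the gain in small steps instead of jumping to the target,
     *    so any residual DC offset charges the load gradually and no
     *    audible 'pop' is produced. */
    for (int g = -40; g <= target_gain_db; g += 2) {
        codec_write(REG_DRV_GAIN, (uint32_t)(g + 64)); /* offset encoding */
        delay_ms(1);
    }
}

int main(void)
{
    audio_output_power_up(0);
    return 0;
}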

For issues such as wrong software management by upper layers, where the problem is not present in the analog output stage, on-chip debugging mechanisms are employed and the issue is tracked down to the responsible line(s) of code. Certain DSP algorithms may not be properly controlled, or parameters can be corrupted due to misalignment between the initialization files and the DSP parameter space, which must always receive special attention. This is also the case for accessory insertion and removal. The audio engineer must ensure that when an accessory is correctly detected, its tuning parameters are correctly loaded and its exact list of algorithms is activated (e.g. some accessories may have different gains and different algorithm parameters than the default onboard speaker and microphone). If some decoding is ongoing, using codecs like AMR or MP3, the buffering mechanism between the different software layers of the baseband must be synchronized, and it is very useful to track the transfer mechanism for debugging purposes. If timing constraints are not met (e.g. decoding takes more time than it is supposed to, or the interrupt mechanism preempts the decoding), these buffers will not behave properly, resulting in discontinuities and an overall bad audio experience. It is up to the audio engineers to ensure proper data and timing negotiation between the different layers in the software stack.
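The sketch below illustrates the buffering idea with a minimal single-producer/single-consumer ring between a decoder task and the output interrupt; names and sizes are illustrative, not taken from any specific baseband. The point is that an underrun is detected and counted, and silence is played instead of stale data:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define FRAME_SAMPLES 160            /* one 20 ms frame at 8 kHz */
#define RING_FRAMES   8

static int16_t ring[RING_FRAMES][FRAME_SAMPLES];
static volatile uint32_t wr_idx, rd_idx;   /* free-running counters */
static volatile uint32_t underruns;

/* Producer: called from the decoder task when a frame is ready. */
int ring_push(const int16_t *frame)
{
    if (wr_idx - rd_idx >= RING_FRAMES)
        return -1;                           /* full: decoder too fast */
    memcpy(ring[wr_idx % RING_FRAMES], frame, sizeof(ring[0]));
    wr_idx++;                                /* publish after the copy */
    return 0;
}

/* Consumer: called from the output DMA/interrupt every 20 ms. */
void ring_pop(int16_t *out)
{
    if (wr_idx == rd_idx) {                  /* empty: decoder too slow   */
        underruns++;                         /* trace this for debugging  */
        memset(out, 0, sizeof(ring[0]));     /* play silence, not garbage */
        return;
    }
    memcpy(out, ring[rd_idx % RING_FRAMES], sizeof(ring[0]));
    rd_idx++;
}

int main(void)
{
    int16_t frame[FRAME_SAMPLES] = {0}, out[FRAME_SAMPLES];
    ring_push(frame);
    ring_pop(out);   /* normal pop                                    */
    ring_pop(out);   /* empty: counts one underrun, outputs silence   */
    printf("underruns=%u\n", (unsigned)underruns);
    return 0;
}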

In the support and debug context, the audio team makes use of a wide range of equipment and software tools: audio analyzers, a head-and-torso simulator, an acoustic chamber for precise measurements, dedicated audio analysis instruments and network simulators; professional equipment makes debugging considerably faster. Last but not least, an oscilloscope and a logic analyzer are used when needed. For on-chip debugging, JTAG-compliant TRACE32 (T32) tools from Lauterbach are used, offering all the advantages of an in-circuit debugger (e.g. stepping, setting breakpoints, memory dumping) and providing an in-depth look at the system, and at the audio framework in particular. Combining these debugging options with additional software traces, or with hardwired tracing mechanisms such as those of the DSP, gives a good overview of the system under test.
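A typical complement to the JTAG tools is a lightweight software trace, sketched below under the assumption of a platform-provided timer (stubbed here so the example compiles on a host): events are timestamped into a RAM ring that can later be dumped with the debugger and decoded offline.

#include <stdint.h>
#include <stdio.h>

/* Host-side stub; on target this would read a free-running HW timer. */
static uint32_t hw_timestamp(void) { static uint32_t t; return t += 20; }

enum { EVT_ACCESSORY = 1, EVT_UNDERRUN = 2 };  /* illustrative event IDs */

typedef struct { uint32_t ts; uint16_t id; uint16_t arg; } trace_evt_t;

#define TRACE_SLOTS 256
static trace_evt_t trace_buf[TRACE_SLOTS];
static volatile uint32_t trace_head;

/* Cheap enough to leave in interrupt paths; the RAM ring is later
 * dumped over JTAG (e.g. a T32 memory dump) and decoded offline. */
static void trace_event(uint16_t id, uint16_t arg)
{
    trace_evt_t *e = &trace_buf[trace_head++ % TRACE_SLOTS];
    e->ts  = hw_timestamp();
    e->id  = id;
    e->arg = arg;
}

int main(void)
{
    trace_event(EVT_ACCESSORY, 1);   /* headset inserted      */
    trace_event(EVT_UNDERRUN, 0);    /* output buffer ran dry */
    printf("%u events recorded\n", (unsigned)trace_head);
    return 0;
}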

Since standards compliance is a main goal, the audio team is also involved in developing and maintaining features that overlap with the audio framework and its existing support. For instance, several authorities require that mobile stations comply with the eCall/ERA-GLONASS safety standard (automatic emergency call in case of a car accident) or with TTY (a special device that lets people who are deaf, hard of hearing or speech-impaired use the telephone to communicate, by typing messages back and forth instead of talking and listening). The frameworks that the audio team handles do not support these features out of the box, but, using the already existing algorithm support, extra features can be implemented in line with 3GPP standards or other proprietary requirement sets. These algorithms, and the transport protocols that use the audio stack, can be implemented on a wide range of devices, from mobile phones to embedded machine-to-machine (M2M) devices.
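To suggest how such an extra feature can ride on existing algorithm support, here is a hedged sketch of a pluggable processing chain; the registration interface is an assumption made for illustration, not a real baseband API, and the TTY processing body is deliberately left empty:

#include <stdint.h>
#include <stddef.h>

typedef void (*audio_algo_fn)(int16_t *buf, size_t samples);

#define MAX_ALGOS 8
static audio_algo_fn chain[MAX_ALGOS];
static size_t chain_len;

/* Hypothetical framework hook: add one more algorithm to the chain. */
int audio_chain_register(audio_algo_fn fn)
{
    if (chain_len >= MAX_ALGOS)
        return -1;
    chain[chain_len++] = fn;
    return 0;
}

/* Run every registered algorithm over one uplink/downlink frame. */
void audio_chain_process(int16_t *buf, size_t samples)
{
    for (size_t i = 0; i < chain_len; i++)
        chain[i](buf, samples);
}

/* An extra feature, e.g. TTY tone processing, slots in like any other. */
static void tty_process(int16_t *buf, size_t samples)
{
    (void)buf; (void)samples;  /* real processing omitted in this sketch */
}

int main(void)
{
    int16_t frame[160] = {0};
    audio_chain_register(tty_process);
    audio_chain_process(frame, 160);
    return 0;
}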

All of the above is achieved within a monitored process, with varying visibility levels for customers, end users and the committed resources. These levels have an impact on response time and customer satisfaction, which is why this kind of project uses the development environment of any large-scale project: a versioning tool from IBM Rational (ClearCase) that allows concurrent work on the source code, and a bug tracking tool (DDTS) that keeps the history of all requests and changes. This is how we meet stringent market timing demands and keep our customers satisfied.

Related content

Check out our embedded systems development offer. Pentalog is a mobile-development-oriented outsourcing company; check out our telecommunications business expertise.

