MATLAB爱好者论坛-LabFans.com


TechnicalArticles 2008-01-06 16:32

Deploying Simulink Designs on Your DSP: An Accelerated Approach to Custom Implementation
 
[B]Deploying Simulink Designs on Your DSP: An Accelerated Approach to Custom Implementation[/B]


[B]by[/B] [EMAIL="[email protected]"]Mike Donovan[/EMAIL] [URL="http://www.mathworks.com/cmspro/req11670.html?eventid=34090"][IMG]http://www.mathworks.com/company/newsletters/digest/images/download_scripts.gif[/IMG][/URL]
[URL="http://www.mathworks.com/products/simulink/"]Simulink[/URL] and the [URL="http://www.mathworks.com/products/sigprocblockset/"]Signal Processing Blockset[/URL] are two of the MathWorks products for designing and analyzing fixed-point digital signal processing (DSP) systems. If you already use those tools to develop signal-processing algorithms, you probably have had to implement your algorithms on embedded DSPs. MathWorks code-generation tools can produce the C code to implement your algorithms on [I]any[/I] embedded DSP, such as those from the TI C5000 family or the Blackfin from Analog Devices. In this article we will demonstrate how to use [URL="http://www.mathworks.com/products/rtwembedded/"]Real-Time Workshop Embedded Coder[/URL] to convert your Simulink models to ANSI/ISO C code and deploy that code on a fixed-point processor.
Many engineers implement their Simulink representations of DSP algorithms by manually writing C and assembly code. That approach is labor intensive, time consuming, and often leads to translation and coding errors. Code-generation tools, on the other hand, let you spend more time optimizing algorithms, a practice that typically results in final designs that can be converted to higher-quality C code.
Here we show how to use Real-Time Workshop Embedded Coder to generate C code for a signal-processing algorithm and then deploy that code to an embedded DSP. For the sake of clarity we use a simple audio-processing application, but as you will see, the example can be extended to other DSP algorithms. Our deployment hardware is the TI C5510 DSK, which has a TI C5510 fixed-point DSP and some basic audio hardware.
[B]Design Audio Filters and Convert to Fixed Point[/B]

We begin with a Simulink model of a set of audio-signal filters. We specify the audio filters’ low-pass and high-pass frequency responses with the Filter Design and Analysis Tool from the [URL="http://www.mathworks.com/products/signal/"]Signal Processing Toolbox[/URL]. Figures 1a, 1b, and 1c show a top-level filtering model and the lower-level subsystems. We apply a test source to the model’s audio filtering subsystem. The switch2 and switch3 control signals select a low-pass, high-pass, or unfiltered audio response.
Next we convert the filters to a fixed-point implementation. Using Simulink Fixed Point, we set the fixed-point characteristics of the filters and the test environment to match the word-length characteristics of the TI C5510 DSP.

[URL="http://www.mathworks.com/company/newsletters/digest/2006/jan/images/audiofilt_model_wl.gif"][IMG]http://www.mathworks.com/company/newsletters/digest/2006/jan/images/audiofilt_model_w.gif[/IMG][/URL]
[I]Figure 1a. Audio Filtering Model. Click on image to see enlarged view.[/I]
[URL="http://www.mathworks.com/company/newsletters/digest/2006/jan/images/audiofilt_subsys_wl.gif"][IMG]http://www.mathworks.com/company/newsletters/digest/2006/jan/images/audiofilt_subsys_w.gif[/IMG][/URL]
[I]Figure 1b. AudioFilters Subsystem. Click on image to see enlarged view.[/I]
[IMG]http://www.mathworks.com/company/newsletters/digest/2006/jan/images/lowpass.gif[/IMG]
[I]Figure 1c. LowPassFilters Subsystem[/I]

[B]Generate Code for the Audio Filtering Subsystem[/B]

In Simulink, you can generate code for either an entire model or a subsystem. In this case we generate code for the AudioFilters subsystem, shown in Figure 1b.
To interface the subsystem's generated code with externally written code, we must first define the subsystem’s input and output signals. We do this by naming the signals in the model, such as pfiltInLeft, and then setting the Real-Time Workshop storage class. There are four storage-class settings:[LIST][*][B]Auto[/B][*][B]ExportedGlobal[/B][*][B]ImportedExtern[/B][*][B]ImportedExternPointer[/B][/LIST]Use either of the last two settings if you intend to declare the variables for those signals in your manually written code. Here we use the [B]ImportedExternPointer[/B] option for the input and output audio signals (pfiltInLeft, pfiltInRight, pfiltOutLeft, pfiltOutRight); this choice requires us to place the declarations for the pointers in the manually written code. We choose [B]ImportedExtern[/B] for the switch control signals (switch2, switch3) and declare those variables in the manually written code as well.
After defining the subsystem interface we can prepare to generate code by setting the options that define the system-timing model, the style of generated code, and the data characteristics of our target processor. Select the following settings:
[B]Solver:[/B] This sets up the fundamental clock for the generated code. Set the Solver Type to Fixed Step, Solver to Discrete, and Fixed Step Size to Auto.
[B]Real-Time Workshop:[/B] Set the System Target File to ert.tlc, and then use the "auto configures for optimized fixed-point code" option (see Figure 2).
[B]Hardware Implementation:[/B] Set the Device Type to Custom, and enter the Number of Bits for each data type in the generated code. All the TI C5510 word lengths are 16 bits except for the data type long, which is 32 bits.
[URL="http://www.mathworks.com/company/newsletters/digest/2006/jan/images/menu_systarget_wl.jpg"][IMG]http://www.mathworks.com/company/newsletters/digest/2006/jan/images/menu_systarget_w.jpg[/IMG][/URL]
[I]Figure 2. Menu for Setting System Target File. Click on image to see enlarged view.[/I]
We can now launch the code-generation process, for which Figure 3 shows the report and which will produce the following files:[LIST][*][B]AudioFilters.c:[/B] Contains the entry points for all code implementing the DSP algorithm. In this example it contains AudioFilters_step() and AudioFilters_initialize() functions.[*][B]AudioFilters_data.c:[/B] Contains the initial conditions and coefficients for the digital filters.[*] [B]AudioFilters.h:[/B] Declares data structures and a public interface to the model.[*] [B]ert_main.c:[/B] Provides an example interface for calling the generated code.[/LIST][URL="http://www.mathworks.com/company/newsletters/digest/2006/jan/images/codegen_report_wl.jpg"][IMG]http://www.mathworks.com/company/newsletters/digest/2006/jan/images/codegen_report_w.jpg[/IMG][/URL]

[I]Figure 3. Code Generation Report. Click on image to see enlarged view.[/I]


The application need not use all the generated code. For example, the ert_main.c file provides a harness that shows how to call the audio processing code. The scheduling function, rt_OneStep(), calls the audio processing code, which is contained in the AudioFilters_step() function. In most cases the correct practice is to call the rt_OneStep() function to implement your algorithm, but in an example as simple as this it is more straightforward to call the AudioFilters_step() function.
Real-Time Workshop Embedded Coder generates documented and readable code. The code sample below shows a portion of the AudioFilters.c file, including the beginning statements for the AudioFilters_step() function.
/*
* File: AudioFilters.c
*
* Real-Time Workshop code generated for Simulink model AudioFilters.
*
* Model version : 1.62
* Real-Time Workshop file version : 6.3 (R14SP3) 26-Jul-2005
* Real-Time Workshop file generated on : Fri Sep 16 13:08:32 2005
* TLC version : 6.3 (Aug 5 2005)
* C source code generated on : Fri Sep 16 13:08:33 2005
*/

#include "AudioFilters.h"
#include "AudioFilters_private.h"

/* Block signals (auto storage) */
BlockIO_AudioFilters AudioFilters_B;

/* Block states (auto storage) */
D_Work_AudioFilters AudioFilters_DWork;

/* Real-time model */
RT_MODEL_AudioFilters AudioFilters_M_;
RT_MODEL_AudioFilters *AudioFilters_M = &AudioFilters_M_;

/* Model step function */
void AudioFilters_step(void)
{

/* local block i/o variables*/
boolean_T rtb_Compare;
boolean_T rtb_Compare_a;
boolean_T rtb_Compare_o;

{
int16_T i1;

/* Sum: '<S1>/Add' incorporates:
* Gain: '<S1>/Gain'
* Inport: '<Root>/afIn3'
* Inport: '<Root>/afIn4'
*/
AudioFilters_B.Add = (int16_T)((int32_T)switch2 *
    (int32_T)AudioFilters_P.Gain_Gain) + switch3;
...

[I]Generated Code Sample for Audio Filtering Subsystem (from AudioFilters.c, lines 1-41)[/I]

[B]Set Up Framework for Calling Generated Code[/B]

Texas Instruments provides demonstration applications with the 5510 DSK. We can use the audio-filtering demo (the CCS project dsk_app1.pjt) as a framework to call the code generated for this example. This framework must[LIST][*]Control the device peripherals on the target hardware board[*]Set up the RTOS resources used by the C5510 DSP[*]Create a scheduler that calls the generated code on an appropriate hardware interrupt[/LIST]We set up the device peripherals and RTOS resources using TI's Board Support Library and Chip Support Library for the 5510 DSK. The application code stores audio data in buffers that can hold 1024 samples and, when the buffers are full, generates a hardware interrupt that calls our generated code.
[B]Integrate Generated Code with Framework[/B]

Next we insert the audio filtering subsystem’s generated code into the programming framework. We begin by inserting an include statement for the generated AudioFilters.h file into the application code, shown below. This header file is the interface to the rest of the generated code and header files.
/* The 5510 DSK Board Support Library is divided into several modules,
 * each of which has its own include file. The file dsk5510.h must be
 * included in every program that uses the BSL. This example also uses
 * the DIP, LED, and AIC23 modules.
 */
#include "dsk5510.h"
#include "dsk5510_led.h"
#include "dsk5510_dip.h"
#include "dsk5510_aic23.h"
#include "AudioFilters.h"

[I]Insert #include Statements for Generated Code Header Files (from dsk_app1.c, lines 136-147)[/I]

The generated code uses input- and output-signal pointers, such as pfiltInLeft, to access the audio data. We must declare the pointers in the manually written code since we specified their storage class to be [B]ImportedExternPointer[/B]. We also must declare the switch variables (switch2, switch3), but since their storage class is [B]ImportedExtern[/B], we do not declare the variables as pointers, as shown in the code below.
/* Declare the pointers to the filter inputs and outputs in the generated code */
Int16 *pfiltInLeft;
Int16 *pfiltOutLeft;
Int16 *pfiltInRight;
Int16 *pfiltOutRight;
Int16 switch3;
Int16 switch2;

[I]Declare Pointers for Audio Filter Input and Output Signals (from dsk_app1.c, lines 216-222)[/I]

When the input audio buffer is full, the handwritten code generates an interrupt that calls the processBuffer() function, which in turn calls AudioFilters_step() to apply the selected audio filter to the audio data, as shown below. This is the only call needed to execute the generated code for the DSP algorithm we designed in Simulink.
/*
 * processBuffer() - Process audio data once it has been received, then
 *                   set the DMA configuration registers up for the next
 *                   transfer.
 *
 * If DIP switch #2 is down, filtering is turned on.
 * If DIP switch #2 is up, audio passes through.
 * If DIP switch #3 is down, turn on the low-pass filter.
 * If DIP switch #3 is up, turn on the high-pass filter.
 */
void processBuffer(void)
{
...
// Read DIP switch 2 and 3
switch2 = DSK5510_DIP_get(2);
switch3 = DSK5510_DIP_get(3);

// Determine which ping-pong state we're in
if (pingPong == PING)
{
// Assign pointers to buffer locations and call _step() function
pfiltInLeft = &gBufferRcvPingL[0];
pfiltOutLeft = &gBufferXmtPingL[0];
pfiltInRight = &gBufferRcvPingR[0];
pfiltOutRight = &gBufferXmtPingR[0];
AudioFilters_step();


...

[I]Read DIP Switch Settings, Assign Pointers to Memory Locations, and Call AudioFilters_step() Function (from dsk_app1.c, lines 312-334)[/I]

We collect the manually written and generated code in a TI CCS project. Real-Time Workshop Embedded Coder also generates the MAKE file, AudioFilters.mk, which records the compilation environment settings. Check its [B]Include Path[/B] section to find the MathWorks directories that must be included (<MATLABROOT>\rtw\c\libsrc and <MATLABROOT>\toolbox\dspblks\include), and set the compiler’s build options to include those directories.
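The include-path settings described above would look roughly like the make fragment below; the variable names are illustrative, and <MATLABROOT> remains a placeholder for the local MATLAB installation:

```make
# Sketch of the Include Path settings from AudioFilters.mk (variable
# names are illustrative; MATLABROOT stands for the MATLAB install dir).
MATLAB_INCLUDES = -I"$(MATLABROOT)\rtw\c\libsrc" \
                  -I"$(MATLABROOT)\toolbox\dspblks\include"
```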
Compile the TI CCS project to create a DSP executable that can be downloaded to the DSK. Once the executable is running on the DSK, we can play audio test signals on the board through the DSK's audio I/O ports, and control the filtering mode with the DSK's DIP switches.
[B]Verify the Code[/B]

We verify the application on the DSK with the test signals from the original Simulink model. Figure 4 shows a test harness built from Source and Sink blocks in the Signal Processing Blockset. This model generates an audio signal and sends it from a PC speaker port to the 5510 DSK through a patch cord. The test harness receives the processed audio from the DSK through another patch cord plugged into the PC's microphone input and displays the audio signal’s frequency response in Simulink.
[IMG]http://www.mathworks.com/company/newsletters/digest/2006/jan/images/test_harness.gif[/IMG]




[I]Figure 4. Test Harness For Hardware Verification[/I]


[B]Extending This Example to Other DSP Algorithms and Processors[/B]

We can easily extend the framework of this example to other algorithms and processors. For example, we can build a Simulink model of a denoising algorithm such as the one shown in Figures 5a and 5b and generate code for it using the process described here. This case, in which Real-Time Workshop Embedded Coder generates approximately 1,600 lines of C code for the two-stage denoising algorithm, is perhaps a better illustration of the benefits of code generation: 1,600 lines is a significant amount of hand coding, yet the denoising algorithm can still be invoked with a single call to a generated _step() function. The single points of declaration and entry in the code generated by Real-Time Workshop Embedded Coder make it easy to reuse your manually written framework for many audio-processing algorithms. Using the same process, other development environments, such as Analog Devices VisualDSP++, can collect the generated code in a project targeting the Blackfin and other fixed-point processors.
[URL="http://www.mathworks.com/company/newsletters/digest/2006/jan/images/denoise_subsys_wl.gif"][IMG]http://www.mathworks.com/company/newsletters/digest/2006/jan/images/denoise_subsys_w.gif[/IMG][/URL] [I]Figure 5a. Denoising Subsystem (two-stage denoising). Click on image to see enlarged view.[/I]
[URL="http://www.mathworks.com/company/newsletters/digest/2006/jan/images/denoise_subsys2s_wl.gif"][IMG]http://www.mathworks.com/company/newsletters/digest/2006/jan/images/denoise_subsys2s_w.gif[/IMG][/URL]
[I]Figure 5b. Denoise Two-Stage Subsystem. Click on image to see enlarged view.[/I]


[URL="http://www.mathworks.com/company/newsletters/digest/2006/jan/simdsp.html"]More...[/URL]

