Using Model-Based Design to Develop and Deploy a Video Processing Application
By Houman Zarrinkoub
According to the U.S. National Highway Traffic Safety Administration, single-vehicle road departures result in many serious accidents each year. To reduce the likelihood of a vehicle’s straying out of lane, automotive engineers have developed lane tracking and departure warning systems that use a small camera to transmit video information about lane markings and road conditions to a microprocessor unit installed on the vehicle.
In this article, we show how Model-Based Design with Simulink and the Video and Image Processing Blockset can be used to design a lane-detection and lane-departure warning system, implement the design on a Texas Instruments DSP, and verify its on-target performance in real time.
The core element of Model-Based Design is an accurate system model—an executable specification that includes all software and hardware implementation requirements, including fixed-point and timing behavior. You use the model to automatically generate code and test benches for final system verification and deployment.
This approach makes it easy to express a design concept, simulate the model to verify the algorithms, automatically generate the code to deploy it on a hardware target, and verify exactly the same operation on silicon.
Building the System Model
Using Simulink, the Signal Processing Blockset, and the Video and Image Processing Blockset, we first develop a floating-point model of the lane-detection system. We model lane markers as line segments, detected by maximizing the Hough transform of the edges in a video frame.
We input a video stream to the simulation environment using the From Multimedia File block from the Video and Image Processing Blockset. During simulation, the video data is processed in the Lane Marker Detection and Tracking subsystem, which outputs the detection algorithm results to the To Video Display block for computer visualization (Figure 1).
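As a point of reference, the test-bench I/O can be prototyped in a few lines of MATLAB. The sketch below reads a video stream frame by frame, which is the role the From Multimedia File block plays in the model; the file name road.avi is a hypothetical stand-in for the actual input file.

% Minimal MATLAB sketch of the test-bench input (file name is hypothetical).
% The From Multimedia File block performs the equivalent frame capture.
v = VideoReader('road.avi');
while hasFrame(v)
    frame = rgb2gray(readFrame(v));  % grayscale frame for lane detection
    % ... the Lane Marker Detection and Tracking steps process 'frame' here
end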
Figure 1. Lane-detection model.
Figure 2. Floating-point model: the Lane Marker Detection and Tracking subsystem.
Lane Detection and Visualization
Figure 2 shows the main subsystem of our Simulink model. The sequence of steps in the lane marker detection and tracking algorithm maps naturally to the sequence of subsystems in the model.
We begin with a preprocessing step in which we define a relevant field of view and filter the output of this operation to reduce image noise. We then determine the edges of the image using the Edge Detection block in the Video and Image Processing Blockset. With this block we can use the Sobel, Prewitt, Roberts, or Canny methods to output a binary image, a matrix of Boolean values corresponding to edges.
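In MATLAB terms, the filtering and edge-detection steps correspond roughly to the Image Processing Toolbox functions medfilt2 and edge; the sketch below illustrates the idea, assuming 'frame' is a grayscale frame scaled to [0, 1].

% Rough MATLAB equivalent of the noise-reduction and edge-detection steps.
frame = medfilt2(frame, [3 3]);   % reduce image noise before edge detection
BW = edge(frame, 'sobel');        % binary edge map (logical matrix)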
Next, we detect lines using the Hough Transform block, which maps points in the Cartesian image space to curves in the Hough parameter space using the following equation:
ρ = x·cos(θ) + y·sin(θ)
The block output is a parameter-space matrix whose rows and columns correspond to the ρ and θ values, respectively. Peak values in this matrix represent potential lines in the input image.
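The Image Processing Toolbox provides an analogous hough function that we can use to sketch this step; its output matrix is likewise indexed by ρ (rows) and θ (columns).

% MATLAB sketch of the Hough transform step on the binary edge map BW.
[H, theta, rho] = hough(BW);   % parameter-space matrix H
P = houghpeaks(H, 2);          % two strongest peaks ~ left and right lane markers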
Our lane marker detection and tracking subsystem uses a feedback loop to further refine the lane marker definitions. We post-process the Hough Transform output, using line segment correction to deal with image boundary outliers, and then compute the Hough lines. The Hough Lines block in the Video and Image Processing Blockset finds the Cartesian coordinates of line end-points by locating the intersections between the lines, characterized by the θ and ρ parameters, and the boundaries of the reference image.
The subsystem then uses the computed end-points to draw a polygon and reconstruct the image. The sides of the polygon correspond to the detected lanes, and the polygon is overlaid onto the original video. We simulate the model to verify the lane detection and tracking design (Figure 3).
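A comparable post-processing step can be sketched with the toolbox houghlines function, which, like the Hough Lines block, returns Cartesian end-points for each detected line; the FillGap and MinLength values here are illustrative.

% Sketch: recover line-segment end-points and overlay them on the frame.
lines = houghlines(BW, theta, rho, P, 'FillGap', 20, 'MinLength', 40);
imshow(frame); hold on
for k = 1:numel(lines)
    xy = [lines(k).point1; lines(k).point2];
    plot(xy(:,1), xy(:,2), 'LineWidth', 2);  % draw detected lane marker
end
hold off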
Figure 3. Lane tracking simulation results, with a trapezoidal figure marking the lanes in the video image.
Converting the Design from Floating Point to Fixed Point
To implement this system on a fixed-point processor, we convert the algorithm to use fixed-point data types. In a traditional design flow based on C programming, this conversion would require major code modification. Converting the Simulink model involves three basic steps (illustrated in the sketch that follows the list):
- Change the source block output data types. During automatic data type propagation, Simulink displays messages indicating the need to change block parameters to ensure data type consistency in the model.
- Set the fixed-point attributes of the accumulators and product outputs using Simulink Fixed Point tools, such as Min-max and Overflow logging.
- Examine blocks whose parameters are sensitive to the pixel values to ensure that these parameters are consistent with the input signal data type. (The interpretation of pixel values depends on the data type. For example, the maximum intensity of a pixel is denoted by a value of 1 in floating point and by a value of 255 in an unsigned 8-bit integer representation.)
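The pixel-scaling issue in the third step can be illustrated at the MATLAB prompt with fi objects from the fixed-point tools; the threshold value and word lengths below are illustrative, not taken from the model.

% Full intensity is 1.0 in floating point but 255 in uint8, so
% pixel-dependent parameters must be rescaled during conversion.
thrFloat = 0.4;                            % hypothetical edge threshold, double
thrFixed = uint8(round(thrFloat * 255));   % same threshold for uint8 pixels
acc = fi(0, 1, 32, 24);                    % signed 32-bit accumulator, 24 fraction bits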
Figure 4 shows the resulting fixed-point model. During simulation, fixed-point operators check for overflow and perform scaling and saturation; this added flexibility and generality can cause a fixed-point model to run slower than its floating-point counterpart. To speed up the simulation, we can run the fixed-point model in Accelerator mode. The Simulink Accelerator can substantially improve performance for larger Simulink models by generating C code for the model, compiling the code, and producing a single executable customized to the model's particular configuration. In Accelerator mode, the simulation of the fixed-point model runs at the speed of compiled C code.
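Switching simulation modes requires no change to the model itself; for example, assuming a hypothetical model name lanedetect_fixpt:

% Run the fixed-point model in Accelerator mode (standard Simulink commands).
set_param('lanedetect_fixpt', 'SimulationMode', 'accelerator');
sim('lanedetect_fixpt');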
Figure 4. Fixed-point model: main subsystem.
Implementing and Verifying the Application on TI Hardware
Using Real-Time Workshop and Real-Time Workshop Embedded Coder, we automatically generate code and implement our embedded video application on a TI C6400™ processor using the Embedded Target for TI C6000™ DSP. To verify that the implementation meets the original system specifications, we use Link for Code Composer Studio to perform real-time hardware-in-the-loop validation and visualization of the embedded application.
Before implementing our design on a TI C6416 DSK evaluation board, we must convert the fixed-point, target-independent model to a target-specific model. For this task we use Real-Time Data eXchange (RTDX), a TI real-time communications protocol that enables the transfer of data to and from the host. RTDX blocks let us ensure that the same test bench used to validate the design in simulation is used in implementation.
Creating the target-specific model involves three steps (a scripted sketch of these substitutions follows Figure 5):
- Replace the source block of the target-independent model with the From RTDX block and set its parameters.
- Replace the Video Viewer block of the target-independent model with the To RTDX block and set its parameters.
- Set up Real-Time Workshop target-specific preferences by dragging a block specific to our target board from the C6000 Target Preferences library into the model. Figure 5 shows the resulting target-specific model.
Figure 5. The TI C6416 DSK block automatically sets up all Real-Time Workshop targeting parameters based on the configuration of the TI board and Code Composer Studio installed locally.
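These substitutions can also be scripted with the standard Simulink replace_block and add_block commands. In the sketch below, the model name and the RTDX and Target Preferences library paths are assumptions and may differ by release.

% Hedged sketch of the block substitutions (library paths are assumed).
mdl = 'lanedetect_c6416';                                % hypothetical model name
replace_block(mdl, 'Name', 'From Multimedia File', ...
    'rtdxBlocks/From RTDX', 'noprompt');                 % assumed library path
replace_block(mdl, 'Name', 'Video Viewer', ...
    'rtdxBlocks/To RTDX', 'noprompt');                   % assumed library path
add_block('c6000tgtprefs/C6416DSK', [mdl '/C6416DSK']);  % assumed library path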
To automate the process of building the application and verifying accurate real-time behavior on the hardware, we create a script, using Link for Code Composer Studio to perform the following tasks:
- Invoke the Link for Code Composer Studio IDE to automatically generate the Link for Code Composer Studio project.
- Compile and link the generated code from the model.
- Load the code onto the target.
- Run the code: Send the video signal to the target-specific model from the same input file used in simulation and retrieve the processed video output from the DSP.
- Plot and visualize the results in a MATLAB figure window.
Figure 6 shows the script used to automate embedded software verification for TI DSPs from MATLAB. Link for Code Composer Studio provides several functions that can be invoked from MATLAB to parameterize and automate the test scripts for embedded software verification.
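The sketch below outlines such a script using the legacy Link for Code Composer Studio API; the exact function names, arguments, and file names may vary by release and should be treated as assumptions rather than a verbatim copy of the script in Figure 6.

% Hedged outline of the build-load-run-verify script (file names hypothetical).
cc = ccsdsp('boardnum', 0);                % connect MATLAB to Code Composer Studio
visible(cc, 1);                            % make the CCS IDE visible
open(cc, 'lanedetect.pjt');                % open the generated project
build(cc, 1500);                           % compile and link (timeout in seconds)
load(cc, 'lanedetect.out');                % load the program onto the C6416 DSK
enable(cc.rtdx);                           % enable RTDX communication
open(cc.rtdx, 'ichan', 'w');               % host-to-target channel
open(cc.rtdx, 'ochan', 'r');               % target-to-host channel
run(cc);                                   % start execution on the DSP
writemsg(cc.rtdx, 'ichan', frame);         % send a video frame from the test bench
out = readmsg(cc.rtdx, 'ochan', 'uint8');  % retrieve the processed frame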
Figure 6. Link for Code Composer Studio IDE script.
Figure 7 shows the results of the automatically generated code executing on the target DSP. We observe that the application running on the target hardware properly detects the lane markers, and we verify that the application meets the requirements of the original model.
Figure 7. Automatically generated code executing on the target DSP verifies that the application correctly detects the lane markers.
After running our application on the target, we may find that our algorithm does not meet the real-time hardware requirements. In Model-Based Design, simulation and code generation are based on the same model, so we can quickly conduct multiple iterations to optimize the design. For example, we can use the profiling capabilities in Link for Code Composer Studio to identify the most computation-intensive segments of our algorithm. Based on this analysis, we can change the model parameters, use a more efficient algorithm, or even replace the general-purpose blocks in the model with target-optimized blocks supplied with the Embedded Target for TI C6000™ DSP. Such design iterations help us optimize the application for deployment on the hardware target.