CHOOSING THE RIGHT PROCESSOR IN DIGITAL SIGNAL PROCESSING

Digital Signal Processing (DSP) processors are microprocessors designed to perform digital signal processing — the mathematical manipulation of digitally represented signals.

Digital signal processing is one of the core technologies in rapidly growing application areas such as wireless communications, audio and video processing, and industrial control. Along with the rising popularity of DSP applications, the variety of DSP-capable processors has expanded greatly since the introduction of the first commercially successful DSP chips in the early 1980s. With semiconductor manufacturers vying for bigger shares of this booming market, designers’ choices will broaden even further in the next few years.

Today’s DSP processors are sophisticated devices with impressive capabilities. In this blog, we introduce the features common to modern commercial DSP processors, explain some of the important differences among these devices, and focus on features that a system designer should examine to find the processor that best fits his or her application.

DSP processors find use in an extremely diverse array of applications, from radar systems to consumer electronics. Naturally, no one processor can meet the needs of all or even most applications. Therefore, the first task for the designer selecting a DSP processor is to weigh the relative importance of performance, cost, integration, ease of development, power consumption, and other factors for the application at hand. Here we’ll briefly touch on the needs of just a few classes of DSP applications.

In terms of dollar volume, the biggest applications for digital signal processors are inexpensive, high-volume embedded systems, such as cellular telephones, disk drives and portable digital audio players. In these applications, cost and integration are paramount. For portable, battery-powered products, power consumption is also critical. Ease of development is usually less important; even though these applications typically involve the development of custom software to run on the DSP and custom hardware surrounding the DSP, the huge manufacturing volumes justify expending extra development effort.

A second important class of applications involves processing large volumes of data with complex algorithms for specialized needs. Examples include sonar and seismic exploration, where production volumes are lower, algorithms more demanding, and product designs larger and more complex. As a result, designers favor processors with maximum performance, good ease of use, and support for multiprocessor configurations. In some cases, rather than designing their own hardware and software from scratch, designers assemble such systems using off-the-shelf development boards, and ease their software development tasks by using existing function libraries as the basis of their application software.

A number of features vary from one DSP to another and should be weighed when selecting a processor. These features are discussed below.

One of the most fundamental characteristics of a programmable digital signal processor is the type of native arithmetic used in the processor. Most DSPs use fixed-point arithmetic, while others use floating-point arithmetic. Floating-point arithmetic is a more flexible and general mechanism than fixed-point: it gives system designers access to a much wider dynamic range. As a result, floating-point DSP processors are generally easier to program than their fixed-point cousins, but they are usually more expensive and consume more power. The ease-of-use advantage of floating-point processors comes from the fact that in many cases the programmer does not have to worry about dynamic range and precision.

In contrast, on a fixed-point processor, programmers often must carefully scale signals at various stages of their programs to ensure adequate numeric precision within the limited dynamic range of the fixed-point processor. Most high-volume, embedded applications use fixed-point processors because the priority is on low cost and, often, low power. For applications that have extremely demanding dynamic range and precision requirements, or where ease of development is more important than unit cost, floating-point processors have the advantage.
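To make the scaling burden concrete, here is a minimal C sketch (not tied to any particular DSP) comparing a floating-point multiply with the equivalent operation in the common Q15 fixed-point format, where 16-bit integers represent values in [-1.0, 1.0):

```c
#include <stdint.h>

/* Floating-point: the hardware keeps track of range and precision. */
float scale_float(float sample, float gain)
{
    return sample * gain;
}

/* Q15 fixed-point: the programmer must widen to 32 bits for the
   intermediate product, round, and shift the result back into range,
   and must know in advance that the result will not overflow. */
int16_t scale_q15(int16_t sample, int16_t gain)
{
    int32_t product = (int32_t)sample * (int32_t)gain; /* Q30 intermediate */
    product += 1 << 14;                                /* rounding */
    return (int16_t)(product >> 15);                   /* back to Q15 */
}
```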

All common floating-point DSPs use a 32-bit data word. For fixed-point DSPs, the most common data word size is 16 bits. The size of the data word has a major impact on cost, because it strongly influences the size of the chip and the number of package pins required, as well as the size of external memory devices connected to the DSP. Therefore, designers try to use the chip with the smallest word size that their application can tolerate.

As with the choice between fixed and floating point chips, there is often a trade-off between word size and development complexity. For example, with a 16-bit fixed-point processor, a programmer can perform double-precision 32-bit arithmetic operations by stringing together an appropriate combination of instructions. If the bulk of an application can be handled with single-precision arithmetic, but the application needs more precision for a small section of the code, the selective use of double-precision arithmetic may make sense. If most of the application requires more precision, a processor with a larger data word size is likely to be a better choice.
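For example, a 32-bit addition can be synthesized from 16-bit pieces by adding the low halves, propagating the carry, and then adding the high halves. The C sketch below spells out the sequence that a 16-bit fixed-point DSP would typically perform with an add followed by an add-with-carry instruction (the function itself is illustrative, not vendor code):

```c
#include <stdint.h>

/* Add two 32-bit values using only 16-bit quantities, the way a
   16-bit fixed-point DSP strings instructions together. */
void add32(uint16_t a_hi, uint16_t a_lo,
           uint16_t b_hi, uint16_t b_lo,
           uint16_t *r_hi, uint16_t *r_lo)
{
    uint32_t lo = (uint32_t)a_lo + (uint32_t)b_lo; /* add low halves */
    uint16_t carry = (uint16_t)(lo >> 16);         /* carry out of low word */
    *r_lo = (uint16_t)lo;
    *r_hi = (uint16_t)(a_hi + b_hi + carry);       /* high halves plus carry */
}
```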

A key measure of the suitability of a processor for a particular application is its execution speed. There are a number of ways to measure a processor’s speed. Perhaps the most fundamental is the processor’s instruction cycle time: the amount of time required to execute the fastest instruction on the processor. Multiplying the reciprocal of the instruction cycle time by the number of instructions executed per cycle, and dividing by one million, gives the processor’s peak instruction execution rate in millions of instructions per second, or MIPS.
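As a worked example with made-up numbers: a processor with a 10 ns instruction cycle time that executes one instruction per cycle has a peak rate of (1 / 10 ns) / 10^6 = 100 MIPS; if it executed two instructions per cycle, the rating would double to 200 MIPS. The same calculation in C:

```c
/* Peak MIPS = (1 / cycle_time) * instructions_per_cycle / 1e6.
   Illustrative helper, not a benchmark of any real device. */
double peak_mips(double cycle_time_ns, double instructions_per_cycle)
{
    return (1e9 / cycle_time_ns) * instructions_per_cycle / 1e6;
}
/* peak_mips(10.0, 1.0) -> 100.0; peak_mips(10.0, 2.0) -> 200.0 */
```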

A problem with comparing instruction execution times is that the amount of work accomplished by a single instruction varies widely from one processor to another. Some of the newest DSP processors use VLIW (very long instruction word) architectures, in which multiple instructions are issued and executed per cycle. These processors typically use very simple instructions that perform much less work than the instructions typical of conventional DSP processors. Hence, comparisons of MIPS ratings between VLIW processors and conventional DSP processors can be particularly misleading, because of fundamental differences in their instruction set styles.
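As a hypothetical illustration: a conventional DSP running at 100 MHz that performs a multiply-accumulate, two data fetches, and two address updates in a single instruction is rated at 100 MIPS, while a VLIW device that needs five simple instructions for the same work but issues all five in one 100 MHz cycle is rated at 500 MIPS, even though both complete the inner loop at the same rate.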

Fig. 2: Execution times for a 256-point complex FFT, in microseconds (lower is better).

Two final notes of caution on processor speed: First, be careful when comparing processor speeds quoted in terms of “millions of operations per second” (MOPS) or “millions of floating-point operations per second” (MFLOPS) figures, because different processor vendors have different ideas of what constitutes an “operation.” Second, use caution when comparing processor clock rates. A DSP’s input clock may be the same frequency as the processor’s instruction rate, or it may be two to four times higher than the instruction rate, depending on the processor.
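For example (with hypothetical figures), a processor whose 80 MHz input clock is divided by four internally executes only 20 million instruction cycles per second, so two chips both advertised with an "80 MHz clock" could differ by a factor of four in instruction throughput.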

DSPs are increasingly being used in portable applications (such as cellular phones and portable audio players) where power consumption is a major concern. As a result, many processor vendors are reducing processor supply voltages and adding power management features to give programmers greater influence over processor power consumption. Power management features available on some DSPs include the following (a brief code sketch of how software might use them follows the list):

Reduced voltage operation: Many vendors offer low-voltage (3.3-, 2.5-, or 1.8-volt) versions of their DSP processors. These processors consume far less power than five-volt equivalents at the same clock rate.

Sleep or idle modes: Most DSPs feature modes that turn off the processor’s clock to all but certain sections of the processor, reducing power consumption.

Programmable clock dividers: Some DSPs allow the processor’s clock frequency to be varied under software control to use the minimum clock speed required for a particular task.

Peripheral control: Some DSPs allow the programmer to disable peripherals that are not in use.
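A minimal C sketch of how firmware might exercise such features is shown below. Dynamic CMOS power scales roughly with the square of the supply voltage and linearly with clock frequency, which is why reduced-voltage operation and clock division pay off. The register names, addresses, and bit definitions here are placeholders, not any vendor’s actual API:

```c
#include <stdint.h>

/* Hypothetical memory-mapped control registers -- placeholders only. */
#define CLKDIV_REG     (*(volatile uint16_t *)0xFF00) /* clock divider */
#define PERIPH_EN_REG  (*(volatile uint16_t *)0xFF02) /* peripheral enables */
#define SERIAL_PORT_EN 0x0004

static void enter_low_power_state(void)
{
    PERIPH_EN_REG &= (uint16_t)~SERIAL_PORT_EN; /* disable an unused peripheral */
    CLKDIV_REG = 4;                             /* slow the core clock while lightly loaded */
    /* On a real DSP, an IDLE or sleep instruction (or intrinsic) would stop
       the clock to most of the core here until an interrupt arrives. */
}
```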

Obviously, processor cost is a major concern for products that are to be produced in volume. For such applications, designers try to use the lowest cost DSP that meets the requirements of the application, even though such devices may be considerably less flexible and more difficult to program than costlier processors. Among processor families, the least expensive family members tend to have significantly fewer features, less on-chip memory, and lower performance than the more expensive members.

A key factor in processor pricing is the dependence of price on device packaging. For example, plastic quad flat pack (PQFP) and thin quad flat pack (TQFP) packages can be significantly less expensive than pin grid array (PGA) packages. Finally, when considering prices, it is important to remember two things. First, processor prices are continually falling. Second, prices are strongly dependent on quantity: the price for, say, a quantity-100,000 order may be significantly lower than for a quantity-1,000 order.

The degree to which ease of system development is a concern depends on the application. Engineers performing research or prototyping will probably require tools that make system development as simple as possible. A fundamental question to ask when choosing a DSP is how the chip will be programmed. Typically, developers choose either assembly language, a high-level language — such as C or Ada — or a combination of both. Surprisingly, a large portion of DSP programming is still done in assembly language. Because DSP applications have voracious number-crunching requirements, programmers are often unable to rely on compilers, which can generate code that executes slowly. Rather, programmers are often forced to hand-optimize assembly code to reduce execution time and code size to acceptable levels.
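As a concrete picture of the kind of inner loop at stake, here is a plain-C FIR filter kernel in Q15 arithmetic (an illustrative function, not from any particular library). On most DSPs a hand-coded assembly version would map each iteration onto a single-cycle multiply-accumulate (MAC) instruction with zero-overhead looping; whether a compiler produces equally tight code from this source varies widely:

```c
#include <stdint.h>

/* Plain-C FIR filter: accumulate coeffs[k] * samples[k] in Q15 arithmetic.
   A DSP assembly version would typically fuse the multiply and the add
   into one MAC instruction per tap. */
int16_t fir_q15(const int16_t *samples, const int16_t *coeffs, int num_taps)
{
    int32_t acc = 0;
    for (int k = 0; k < num_taps; k++)
        acc += (int32_t)samples[k] * (int32_t)coeffs[k]; /* Q30 accumulator */
    return (int16_t)(acc >> 15);                         /* back to Q15 */
}
```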

Despite some manufacturers’ claims, there isn’t a single best DSP chip. Rather, the right DSP depends on the application; a good choice for one application might be a poor choice for another. In this blog we have reviewed a number of criteria useful for choosing a DSP: arithmetic format, data width, speed, power consumption, cost and ease of development. Which of these are most important is a decision the system designer must make based on his or her application.
