
Topic: Real-Time Digital Video Processing in Hardware Using Multiple Parallel FPGAs

Name: Seth Jacobsen

Advisor: Bruce Maxwell

Processing digital video in software is a cumbersome task that can consume much of a computer's resources. Even the simplest algorithms, applied to several million pixels per second, can cause major slowdowns. Real-time video processing is extremely important in applications such as robotics, where processor power is limited or devoted to many tasks. I propose to implement simple algorithms in hardware, including edge detection, motion detection, and some basic filters. These processes are characterized by simple mathematics, an absence of branches that would slow processing, and identical operations on every pixel. These properties allow the data to be segmented and sent through many different chips at the same time. Parallel processing lets the chips operate at slow clock speeds while still accomplishing the same task as a single, much more powerful and expensive chip.
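To make the per-pixel, branch-free character of these algorithms concrete, here is a minimal software sketch (my own illustration, not part of the proposed hardware): a 3x3 Sobel edge detector that performs the identical arithmetic at every pixel, and a helper that segments a frame into horizontal strips, one per chip, with one row of overlap so each chip has the full 3x3 neighborhood at its strip boundaries. The strip count and overlap scheme are assumptions for illustration.

```python
# Hypothetical illustration of the proposed processing, in software.
# Each pixel undergoes the same fixed arithmetic with no data-dependent
# branching -- exactly the property that makes the work easy to split
# across parallel chips.

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(frame):
    """Approximate gradient magnitude |Gx| + |Gy| at each interior pixel."""
    h, w = len(frame), len(frame[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * frame[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * frame[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = abs(gx) + abs(gy)
    return out

def split_into_strips(frame, n_chips):
    """Divide a frame into horizontal strips, one per chip (assumed scheme).
    One extra row of overlap on each interior boundary supplies the 3x3
    neighborhood, so the chips need not communicate with each other."""
    h = len(frame)
    rows_per = h // n_chips
    strips = []
    for k in range(n_chips):
        lo = max(0, k * rows_per - 1)
        hi = min(h, (k + 1) * rows_per + 1) if k < n_chips - 1 else h
        strips.append(frame[lo:hi])
    return strips
```

In hardware the inner loop would become a pipeline of multiply-accumulate units, but the key point survives the translation: because every pixel is treated identically, the strips can be processed independently and in parallel.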

One disadvantage of hardware relative to software is that it lacks software's versatility. By using FPGAs, however, the chips can be dynamically reprogrammed as needed. Algorithms can be stored in ROM on the board and loaded into the chips on demand. Because every chip is identical, a relatively small amount of memory suffices. RAM can also be used so that users can write their own algorithms (in VHDL or an equivalent hardware description language) and load them onto the chips.

I/O presents the largest challenge for logic using multiple parallel programmable chips (LUMPP chips). The slow clock speeds mean that data must be brought onto and off of each chip in large segments. This requires a large buffer and substantial bandwidth between the buffer and the chips.
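Some back-of-the-envelope arithmetic shows the scale of the buffering problem. The video format and chip count below are assumptions of mine for illustration, not specifications from the proposal:

```python
# Rough bandwidth figures for the buffer/chip interconnect.
# All parameters here are assumed, not the project's actual specs.

WIDTH, HEIGHT, FPS = 640, 480, 30  # assumed "full-sized" video
BYTES_PER_PIXEL = 1                # assumed 8-bit grayscale
N_CHIPS = 8                        # assumed number of parallel FPGAs

pixels_per_second = WIDTH * HEIGHT * FPS
total_bandwidth = pixels_per_second * BYTES_PER_PIXEL  # bytes/s into the buffer
per_chip_bandwidth = total_bandwidth / N_CHIPS         # bytes/s per chip

print(f"total: {total_bandwidth / 1e6:.2f} MB/s, "
      f"per chip: {per_chip_bandwidth / 1e6:.2f} MB/s")
```

Under these assumptions the buffer must absorb roughly 9.2 MB/s in aggregate, while each chip's share is only about 1.15 MB/s, which is why segmenting the stream makes slow individual chips workable even though the shared buffer itself needs high bandwidth.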

After discussing the project with Bruce Maxwell, I believe it can process full-sized video at a good frame rate and at relatively low cost, and that it can be useful in freeing up resources on his robots.