
Digital Image Processing (Experiment Syllabus)

Course code: X61020006                      Course category: guided elective

Applicable majors: Communication Engineering, Electronic Information Engineering, Biomedical Engineering

Total course hours: 32        Lab hours: 4        Credits: 2

Prerequisites: Digital Signal Processing, C/C++ Programming

Lab manual: compiled in-house

Reference books: 1. Digital Image Processing (《数字图象处理》), Xia Liangzheng, Southeast University Press, 1999

2. Digital Image Processing (《数字图像处理》), translated by Ruan Qiuqi et al., Publishing House of Electronics Industry, 2002

Laboratory: Communication Experiment Center

I. Objectives and Tasks

This lab course helps students deepen their understanding of the basic theory and techniques of digital image processing and develops their ability to solve problems independently, so that they can handle a wide range of practical image-processing applications.

II. Basic Requirements

1. Image acquisition and quantization. During sampling and quantization, the sampling density and number of quantization levels depend on satisfying the sampling theorem as well as on the image content and the application requirements. Use application software (e.g., Photoshop) to input images, and observe how different sampling rates and quantization levels affect the quality of the digitized image.
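
Although the lab itself uses Photoshop, the effect of coarse quantization is easy to reproduce in code. A minimal sketch (Python here for brevity; the course platforms are C/C++ or Matlab, but the idea is the same):

```python
def quantize(pixels, levels):
    """Requantize 8-bit gray values (0-255) to the given number of levels,
    then map back into 0-255 so the banding is visible on screen."""
    step = 256 / levels
    return [int(int(p / step) * step) for p in pixels]

# A smooth 0..255 ramp quantized to 4 levels collapses into 4 gray bands.
ramp = list(range(256))
q4 = quantize(ramp, 4)
print(sorted(set(q4)))  # -> [0, 64, 128, 192]
```

Viewing such a requantized image makes the false contouring caused by too few quantization levels immediately visible.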

2. Color spaces. Learn the common color-space models and observe experimentally the relationships among hue, saturation, and intensity. Master the transformations between color-space models.
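
These relationships can also be explored programmatically. Python's standard colorsys module implements the closely related HSV and HLS models (not HSI proper), which is enough to see the round-trip transformations:

```python
import colorsys

# Pure red maps to hue 0, full saturation, full value.
h, s, v = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)
print(h, s, v)  # -> 0.0 1.0 1.0

# The transformation is invertible: back to RGB.
r, g, b = colorsys.hsv_to_rgb(h, s, v)
assert (r, g, b) == (1.0, 0.0, 0.0)

# Lowering saturation at constant hue and value moves the color toward gray.
print(colorsys.hsv_to_rgb(0.0, 0.5, 1.0))  # -> (1.0, 0.5, 0.5)
```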

3. Image display. Use Photoshop to read, write, and convert digital images among formats; study the image file headers of different formats, especially BMP and TIFF; and write a program to display an image.
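
The fixed layout of the BMP header can be packed and unpacked with Python's struct module. This is a sketch of the first header fields only; note that a real 8-bit BMP also stores a 1024-byte color palette between the headers and the pixel data:

```python
import struct

# Pack the 14-byte BMP file header plus the first fields of the 40-byte
# info header, then parse them back, as a display program must do before
# reading pixel data.
width, height, bpp = 64, 64, 8
file_header = struct.pack("<2sIHHI", b"BM", 14 + 40 + width * height, 0, 0, 14 + 40)
info_start = struct.pack("<IiiHH", 40, width, height, 1, bpp)

sig, size, _, _, offset = struct.unpack("<2sIHHI", file_header)
hdr_size, w, h, planes, bits = struct.unpack("<IiiHH", info_start)
print(sig, w, h, bits, offset)  # -> b'BM' 64 64 8 54
```

The "<" prefix forces little-endian layout, which is what the BMP format specifies regardless of the host machine.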

4. Image transforms. Apply the two-dimensional FFT to an image, display the magnitude image on screen, and observe some typical magnitude spectra.
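
A direct (unoptimized) 2-D DFT takes only a few lines and is useful for checking an FFT implementation on tiny images. A Python sketch:

```python
import cmath

def dft2(img):
    """Naive 2-D DFT of a small square image (list of lists). O(N^4),
    fine for tiny test images; a real lab program would use a 2-D FFT."""
    n = len(img)
    w = cmath.exp(-2j * cmath.pi / n)
    return [[sum(img[x][y] * w ** (u * x + v * y)
                 for x in range(n) for y in range(n))
             for v in range(n)] for u in range(n)]

# A constant image concentrates all of its energy in the DC term F(0,0).
flat = [[1] * 4 for _ in range(4)]
F = dft2(flat)
print(abs(F[0][0]))            # -> 16.0
print(round(abs(F[1][2]), 9))  # -> 0.0
```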

5. Image enhancement. Write programs for neighborhood averaging, median filtering, gradient operators, and histogram equalization, and study the effect of each enhancement technique on digital images.
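
Of these, histogram equalization is the easiest to sketch: the mapping function is simply the scaled cumulative distribution of gray levels. A minimal Python version:

```python
def equalize(pixels, levels=256):
    """Histogram-equalize a flat list of gray values: the mapping is the
    scaled cumulative distribution, s = (L - 1) * CDF(r)."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total, mapping = 0, [0] * levels
    for g in range(levels):
        total += hist[g]
        mapping[g] = round((levels - 1) * total / n)
    return [mapping[p] for p in pixels]

# A dark image crowded into gray levels 0-3 is spread over the full range.
dark = [0] * 4 + [1] * 4 + [2] * 4 + [3] * 4
print(equalize(dark))
# -> [64, 64, 64, 64, 128, 128, 128, 128, 191, 191, 191, 191, 255, 255, 255, 255]
```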

6. Image segmentation. Segment images using edge extraction and thresholding. Compare the performance of dynamic (adaptive) and static (global) thresholding, in particular the advantage of dynamic thresholding when the background brightness is non-uniform.

7. Image coding. Understand the principle of predictive coding and write a program to implement DPCM compression of an image.
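
A lossless first-order DPCM codec fits in a few lines: transmit each sample's difference from its predecessor, and accumulate differences to decode. A Python sketch:

```python
def dpcm_encode(samples):
    """First-order DPCM: emit each sample's difference from the previous
    sample (lossless here; a real codec would quantize the differences)."""
    prev, out = 0, []
    for s in samples:
        out.append(s - prev)
        prev = s
    return out

def dpcm_decode(diffs):
    prev, out = 0, []
    for d in diffs:
        prev += d
        out.append(prev)
    return out

scanline = [100, 102, 104, 104, 103, 101, 100]
codes = dpcm_encode(scanline)
print(codes)  # -> [100, 2, 2, 0, -1, -2, -1]
assert dpcm_decode(codes) == scanline
```

Because neighboring pixels are correlated, the differences are small, so they can be entropy-coded far more cheaply than the raw values; that is the whole point of predictive coding.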

8. Image restoration. Understand image degradation models and the principles and applications of inverse filtering and constrained restoration. Perform a restoration experiment on an image blurred by horizontal motion.
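
Inverse filtering can be tried on a one-dimensional "scanline" first: blur by circular convolution with a two-pixel motion kernel in the frequency domain, then divide the blurred spectrum by the transfer function. A Python sketch; note the guard for near-zero bins of H, which is precisely the instability that motivates constrained restoration:

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * u * k / n)
                for k in range(n)) for u in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[u] * cmath.exp(2j * cmath.pi * u * k / n)
                for u in range(n)).real / n for k in range(n)]

# Horizontal motion blur of length 2 as circular convolution: G = F * H.
x = [1, 2, 3, 4, 4, 3, 2, 1]
h = [0.5, 0.5, 0, 0, 0, 0, 0, 0]
F, H = dft(x), dft(h)
G = [f * hh for f, hh in zip(F, H)]

# Inverse filtering: F_hat = G / H, skipping bins where H is (near) zero.
F_hat = [g / hh if abs(hh) > 1e-9 else 0 for g, hh in zip(G, H)]
restored = [round(v, 6) for v in idft(F_hat)]
print(restored)  # -> [1.0, 2.0, 3.0, 4.0, 4.0, 3.0, 2.0, 1.0]
```

Here H vanishes at one frequency bin; the test signal happens to have no energy there, so recovery is exact. With noise, dividing by small values of H amplifies it, which is why constrained (e.g., Wiener) restoration is needed in practice.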

9. Image recognition. Learn basic image-recognition methods and run recognition experiments on simple shapes.

Students are expected to master programming in C; the development platform may be Turbo C, Visual C++, Visual Basic, Delphi, or Matlab.

III. Assessment and Grading

The lab instructor assesses whether each student completes the experiment requirements independently and assigns a grade accordingly.

IV. Notes

This is an in-class lab; experiments are selected from the project list below according to the available class hours.

V. Experiment Project Data Table

No. | Experiment                         | Hours | Students/group | Sets | Regular equipment per group | Main consumables/software per group
1   | Image acquisition and quantization | 2     | n/a            | 1    | 1 computer, 1 scanner       | Demonstration software
2   | Image color space                  | 2     | n/a            | 1    | 1 computer                  | Demonstration software
3   | Image display                      | 2     | n/a            | 1    | 1 computer, 1 scanner       | VC++ software
4   | Image transforms                   | 2     | n/a            | 1    | 1 computer, 1 scanner       | Demonstration software
5   | Image enhancement                  | 2     | n/a            | 1    | 1 computer, 1 scanner       | VC++ software
6   | Image restoration                  | 2     | n/a            | 1    | 1 computer, 1 scanner       | Demonstration software
7   | Image coding                       | 2     | 2              | 1    | 1 computer, 1 scanner       | VC++ software
8   | Image segmentation                 | 2     | 2              | 1    | 1 computer, 1 scanner       | Demonstration software
9   | Image recognition                  | 2     | 2              | 1    | 1 computer, 1 scanner       | Demonstration software

Prepared by: Lu Xinfa, Wu Jianhua (bilingual section)    Reviewed by: Wang Ping    Approved by: Wan Guojin


 

Appendix (bilingual section):

Multimedia Bilingual Experiment Projects for the Digital Image Processing Course

Projects

(Note: The material in this section is also included in the Teaching Outlines - see Navigation Bar above - and may be downloaded as part of that document).

One of the most interesting aspects of a course in digital image processing is the pictorial nature of the subject. Using the concepts developed in Appendix A and the coded images in Appendix B, the instructor can assign meaningful projects that will allow students to gain familiarity with the implementation of image processing algorithms. Alternatively, if the facilities are available, the instructor may wish to work with larger images obtained from the Image Gallery (see Navigation Bar).

At the beginning of the course, if manually-coded images are used, students are asked to type the codes for three images and file them on disk for future use. Images B.1, B.3, and B.11 are good for this purpose. Since this is a tedious task, students can be asked to work in teams and then share the image data. Usually, students are asked to work individually after that. The idea is for each student to have access to three images during the entire term. We ask them to go through the manual entry process as a way of impressing upon them the significant amount of data contained in even small images. In addition, as discussed in Project 1, we also ask them to write a program to generate a synthetic image (a Gaussian intensity distribution or gray scale wedge) to test the display routines before beginning work with actual image data. Instructors wishing to eliminate the manual process from the course can select appropriate images from the Image Gallery.

Since computer projects are in addition to course work, we try to keep the project description short, and organized in a uniform manner. A useful project format to give students at the beginning of the course is listed below. The idea is to achieve as much uniformity as possible to facilitate handling and comparative grading.

Suggested Project Presentation Format

Page 1. Cover page, typed or printed neatly, containing:

· Project title

· Project number

· Course number

· Student name

· Date due

· Date handed in

· Abstract (not to exceed 1/2 page)

Page 2. One to two pages (max) of technical discussion.

Page 3 (or 4). Discussion of results. One to two pages (max).

Results. Image results printed in gray scale using the concepts from Appendix A.

Appendix. Program listings, with all standard system sheets stripped away if a batch system is being used.

Layout. The entire report must be in 8-1/2 x 11 format. If large printer sheets are used for program listings and image printouts, they must be folded (individually) to 8-1/2 x 11 size, or photo-reduced to that size.

Sample Projects

The following projects are representative of the type of material we have found useful over the years. Only the "skeleton requirements" are asked for in each project. Specific items of discussion to be addressed in the project reports are at the discretion of the instructor. It is helpful to students from the point of view of schedule management if all, or at least most, of the projects for the course are assigned at the beginning of the term. Although image size is often referred to as 64 x 64, this assumes manually-coded images.  If the instructor elects to use downloaded images obviously the sizes will be different.

Project 1.1. (a) Write a halftoning program based on the overstrike method discussed in Appendix A. The program must have a switch that will cause it to automatically scale its output to integers in the range [0, 31] for any input image with (positive, negative, and not necessarily integer) values in the range [Amin, Amax]. (b) Write a program to generate a synthetic image of size 64 x 64. Choose a Gaussian intensity distribution with the maximum allowed gray level (31) at the center of the image or a gray scale wedge going from 0 on the left to 31 on the right (columns will have to be duplicated). (c) Use (b) to test (a), and print out (display) the resulting image.
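
The auto-scaling switch of part (a) might be sketched as follows (Python here for brevity; the function name is our own):

```python
def autoscale(pixels, lo=0, hi=31):
    """Linearly map arbitrary real pixel values in [Amin, Amax] onto
    integers in [lo, hi], as required by the halftone printing routine."""
    amin, amax = min(pixels), max(pixels)
    if amax == amin:               # flat image: avoid division by zero
        return [lo for _ in pixels]
    scale = (hi - lo) / (amax - amin)
    return [round(lo + (p - amin) * scale) for p in pixels]

print(autoscale([-1.0, 0.0, 1.0]))  # -> [0, 16, 31]
print(autoscale([5, 5, 5]))         # -> [0, 0, 0]
```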

Project 2.1. (a) Write a computer program to generate random numbers having a uniform distribution with values in the range [-1, 1]. (b) Generate a noise image of size 64 x 64 with values in this range. (c) Print the image using the printing routine developed in Project 1.1 using auto scaling (this will serve as a test of the auto scaling routine).

Project 2.2. (a) Write a computer program to generate random numbers having a Gaussian distribution with a specified mean and variance. (b) Generate a noise image of size 64 x 64 with intensity mean m = 0 and variance σ² = 100. (c) Print the image using the printing routine developed in Project 1.1 using auto scaling (this will serve as a test of the auto scaling routine).
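
A sketch of part (b) using Python's standard random module, with a fixed seed for reproducibility:

```python
import random

# A 64 x 64 Gaussian noise "image" (flattened) with mean 0 and variance 100,
# i.e. standard deviation 10.
random.seed(1)
noise = [random.gauss(0, 10) for _ in range(64 * 64)]

# Sample statistics should land close to the requested parameters.
mean = sum(noise) / len(noise)
var = sum((v - mean) ** 2 for v in noise) / len(noise)
print(round(mean, 1), round(var))  # close to 0 and 100
```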

Project 3.1. (a) Write a program to compute the two-dimensional forward and inverse Fourier transform of a 64 x 64 image. (b) Test your program using image B.3. (Note: a simple test is to compute the forward and inverse transforms of B.3. Take the difference between the inverse and the original B.3. Create a binary image by setting each result of the difference to 0 if the absolute value of the difference is smaller than some preset value, say 10^-8, or to 31 if it is higher. Print the binary image; it should be all black.)

Project 3.2. (a) Write a computer program that uses the result of Project 3.1 and has the capability to (1) compute the two-dimensional spectrum and phase angle of an image, and (2) has the capability of multiplying the transform by a two-dimensional (in general complex) function H(u, v). (b) Compute and display the spectrum and phase of image B.1 (use Eq. 3.3-1 to display the spectrum).

Project 4.1. (a) Write a program to perform histogram equalization on an image. (b) Apply your program to image B.11. (c) Display the original and histogram-equalized images.

Project 4.2. (a) Write a program to perform 3 x 3 spatial filtering of an image. (b) Perform spatial image smoothing on image B.3. (c) Perform image sharpening on B.3 using the mask of Fig. 4.24. (d) Display the original image and the result in each case.

Project 4.3. (a) Generate a noisy image from image B.3 by changing the value at location (x, y) to 31 if a uniform random number generator (Project 2.1) with possible outputs [-1, 1] gives a value greater than 0.75 when the generator is called to produce an output associated with location (x, y). If the random value is less than or equal to 0.75 the pixel at (x, y) is left unchanged. Clearly, the random number generator will be called 4,096 times, once for each pixel in B.3. (b) Perform median filtering on the resulting image. (c) Display the noisy and filtered images.
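
A 1-D median filter already shows the behavior part (b) is after: an isolated impulse is removed outright, while features wider than half the window survive. A Python sketch:

```python
def median3(line):
    """1-D median filter with a 3-sample window (edge samples pass through)."""
    out = list(line)
    for i in range(1, len(line) - 1):
        out[i] = sorted(line[i - 1:i + 2])[1]
    return out

# An isolated salt spike (31 on a flat background of 10) is removed,
# but a pair of adjacent spikes survives a 3-sample window.
print(median3([10, 10, 31, 10, 10, 31, 31, 10]))
# -> [10, 10, 10, 10, 10, 31, 31, 10]
```

The 2-D version for the project is identical in spirit: take the median of the 3 x 3 neighborhood at each interior pixel.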

Project 4.4. (a) Write a program to generate a lowpass Butterworth filter, H(u, v). (b) Use the programs from Projects 3.1 and 3.2 to apply the filter to image B.3 with n = 1, 3, and 9. (c) Display the resulting spectrum and the blurred image in each case. Remember to use a log transformation when displaying the spectrum.
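
The filter of part (a) might be generated as follows (Python sketch; H is built on a centered frequency rectangle, so the transform must be shifted to match):

```python
import math

def butterworth_lowpass(size, d0, order):
    """H(u, v) = 1 / (1 + (D/D0)^(2n)), with D the distance from the
    center of the (shifted) size x size frequency rectangle."""
    c = size / 2
    H = [[0.0] * size for _ in range(size)]
    for u in range(size):
        for v in range(size):
            d = math.hypot(u - c, v - c)
            H[u][v] = 1.0 / (1.0 + (d / d0) ** (2 * order))
    return H

H = butterworth_lowpass(64, 16.0, 1)
print(H[32][32])  # -> 1.0   (center, D = 0: full pass)
print(H[32][48])  # -> 0.5   (at D = D0 the gain has fallen to 1/2)
```

Raising the order n from 1 to 9 sharpens the transition around D0, which is exactly the ringing-versus-blur trade-off the project asks you to observe.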

Project 4.5. (a) Write a program to generate a high-frequency-emphasis Butterworth filter, H(u, v), with parameters n (order of the filter) and K (amplitude displacement value from 0). (b) Experiment with n and K to generate an enhanced version of image B.11 using frequency domain techniques. (c) Apply histogram equalization to the result using the program from Project 4.1. (d) Display the original image, the result of (b), and the result of (c).

Project 5.1. (a) Generate a version of image B.1 blurred with a Butterworth filter of order 1, and add to each pixel of the blurred image Gaussian noise with mean 0 and variance 100 (see Project 2.2). (b) Restore the image by using inverse filtering. (c) Display the original image and the results of (a) and (b).

Project 5.2. (a) Add to image B.1 a sinusoidal waveform with frequency of your choice and running vertically on the image. The waveform should be strong enough to be visible, and should have an integral number of periods in the 64 x 64 window. (To restore this image, we would simply eliminate the frequency component corresponding to the waveform. The result would be a perfect copy of the original). (b) Add to image B.1 a waveform of the same frequency and amplitude as in (a), but having an incomplete number of periods in the window. (c) Restore this image using the same filter you would have used to restore the image in (a). Display the original, the result of (a), the result of (b), and the result of (c).

Project 6.1. Write a program to compute both the first- and second-order entropy estimates of an image. Use the program to estimate the entropy of images B.1 and B.11, and interpret the results.
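
The first-order estimate is just the gray-level histogram plugged into H = -Σ p(g) log2 p(g). A Python sketch:

```python
import math

def first_order_entropy(pixels):
    """Estimate H = -sum p(g) * log2 p(g) from the gray-level histogram."""
    n = len(pixels)
    counts = {}
    for p in pixels:
        counts[p] = counts.get(p, 0) + 1
    return -sum(c / n * math.log2(c / n) for c in counts.values())

print(first_order_entropy([0] * 8))     # -> -0.0  (a constant image carries no information)
print(first_order_entropy([0, 1] * 8))  # -> 1.0   (two equally likely levels: 1 bit/pixel)
print(first_order_entropy(list(range(32))))  # -> 5.0 (32 equally likely levels)
```

The second-order estimate is analogous but uses the joint histogram of pixel pairs, divided by two to get bits per pixel.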

Project 6.2. A reasonable measure of a compression scheme’s performance is the average difference between the first-order entropies of a representative set of input images and their compressed and reconstructed approximations. This computation provides an estimate of the average information loss of the compression-decompression process - as opposed to the amount of compression. Write a program to estimate the information loss associated with the following transform coding scheme:

Transform: Fourier

Subimage Size: 8 x 8

Bit Allocation: 8-largest coding

Use the program to characterize the performance of the transform coding scheme for input images B.1 and B.11.

Project 6.3. The same as Project 6.2, but using the discrete cosine transform.

Project 6.4. Assume that image B.1 was generated by a 2-D Markov source with separable autocorrelation function. Estimate the horizontal and vertical correlation coefficients of the source from the image and DPCM the image using an optimal fourth-order linear predictor. Compute a first-order estimate of the entropy of the DPCM result and estimate the compression achieved.

Project 7.1. (a) Compute the Sobel gradient of image B.3. (b) Compute the Laplacian of this image using Eq. (7.1-10). (c) Display the original image and the results of (a) and (b).
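
The Sobel gradient of part (a) can be sketched directly from the two 3 x 3 masks (Python; |Gx| + |Gy| is used as the usual cheap stand-in for the true magnitude):

```python
def sobel_magnitude(img):
    """|Gx| + |Gy| approximation of the Sobel gradient for the interior
    pixels of a 2-D list; the one-pixel border is left at zero."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            out[y][x] = abs(gx) + abs(gy)
    return out

# A vertical step edge: the gradient responds only along the edge columns.
step = [[0, 0, 10, 10]] * 4
g = sobel_magnitude(step)
print(g[1])  # -> [0, 40, 40, 0]
```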

Project 7.2. (a) Perform a Sobel edge detection on image B.3. (b) If necessary, apply the linking procedure discussed in Section 7.2.1. (c) Display the original image and the results of (a) and (b).

Project 7.3. Segment the characters out of image B.3 by using a single, global threshold.

Project 7.4. Segment the characters out of image B.3 by using a global, optimum threshold. Assume Gaussian densities for the pixels on the objects and background (note that you have to specify or estimate the means and variances).

Project 7.5. Segment the objects out of image B.11 by using a global, optimum threshold. Assume Gaussian densities for the pixels of the objects and background (note that you have to specify or estimate the means and variances).

Project 8.1. Use the algorithm described in Section 8.1.5 to obtain the skeletons of the characters in image B.3. Note that the characters have to be isolated (segmented) into individual subimages; use a single global threshold.

Project 8.2. (a) Compute the Fourier descriptors of the boundary of each character in image B.3. (b) Approximate the boundary of each character by using the first 10%, 25%, and 50% of its Fourier coefficients. (c) Display the results from (b). Note that the boundaries have to be extracted (segmented) from the image; use a single, global threshold to create a binary image, and then write a simple algorithm to extract the boundary points of each white object (a boundary point is any object point having one or more 8-neighbors that are background points).

Project 8.3. (a) Binarize image B.3 using a single global threshold. (b) Extract the boundary of each object using the morphological algorithm described in Section 8.4.4. (c) Display the original image, the thresholded image, and the result of (b).

Project 8.4. (a) Use a cube structuring element of size 3 x 3 x 3 to perform the morphological gradient of image B.11. (b) Display the original image and the result of (a).

All the projects for Chapter 9 are based on the following data set. With reference to the method described in Project 8.2, obtain the boundary of each character in image B.3. Let r1(θ) through r6(θ) be signatures for these boundaries, sampled at values of θ in increments of 45°. Thus, each of these signatures can be represented as an 8-dimensional vector. For example, the components of signature r1(θ) may be expressed as the (prototype) vector r1 = (r1(0°), r1(45°), . . . , r1(315°))^T. We now create six pattern classes by adding randomness to each component of the prototype vector for each class. This is accomplished by generating Gaussian samples with 0 mean and standard deviation σ = rmax(θ)/10, where rmax(θ) is the largest component of the prototype vector for the particular pattern class being generated. Use this approach to generate six pattern classes of 150 patterns each. Call the first 100 samples of each class training samples, and the remaining 50 samples independent test samples.

Project 9.1. (a) Use the above data set to design a minimum-distance classifier for the six classes. (b) Test the recognition performance for each class (percent of patterns recognized correctly) by using the classifier to recognize the patterns of the training set for that class. (c) Repeat for the patterns of the independent test set.
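
A minimum-distance classifier reduces to two steps: compute each class mean from its training samples, then assign a pattern to the nearest mean. A Python sketch on made-up 2-D data standing in for the 8-D signatures:

```python
def mean_vector(samples):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(samples)
    return [sum(s[i] for s in samples) / n for i in range(len(samples[0]))]

def classify(x, prototypes):
    """Assign x to the class whose prototype (mean) vector is nearest
    in Euclidean distance."""
    def d2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(prototypes, key=lambda c: d2(x, prototypes[c]))

# Two toy classes of 2-D training patterns (illustrative values).
train = {
    "w1": [[1.0, 1.0], [1.2, 0.8], [0.9, 1.1]],
    "w2": [[5.0, 5.0], [4.8, 5.1], [5.2, 4.9]],
}
prototypes = {c: mean_vector(s) for c, s in train.items()}
print(classify([1.1, 0.9], prototypes))  # -> w1
print(classify([4.9, 5.2], prototypes))  # -> w2
```

Recognition rate is then just the fraction of patterns in a set for which classify() returns the true class label.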

Project 9.2. Repeat Project 9.1 using a Bayes classifier. Assume multivariate Gaussian densities.

Project 9.3. (a) Use the training set of class w1 (the class of A's) and the training set of class w4 (the class of 1's) to train a two-class perceptron. Since convergence is guaranteed only if the classes are separable, and this is not known a priori, stop the algorithm after 80 iterations (ten times the dimension of the pattern vectors) through all training samples of the two classes if no convergence has been achieved by then. (b) As in Project 9.1, test the recognition performance using the samples of the training set (recognition performance on these patterns should be 100% if the algorithm converged). (c) Repeat for the patterns of the independent test set.
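
A fixed-increment two-class perceptron on augmented pattern vectors can be sketched as follows (Python; toy 2-D data in place of the character signatures, and the iteration cap from the project):

```python
def train_perceptron(samples, labels, max_iters=80):
    """Fixed-increment perceptron on augmented patterns [x..., 1];
    labels are +1 / -1. Returns (weights, converged_flag)."""
    w = [0.0] * (len(samples[0]) + 1)
    for _ in range(max_iters):
        errors = 0
        for x, y in zip(samples, labels):
            xa = list(x) + [1.0]
            if y * sum(wi * xi for wi, xi in zip(w, xa)) <= 0:
                w = [wi + y * xi for wi, xi in zip(w, xa)]
                errors += 1
        if errors == 0:          # a full error-free pass: converged
            return w, True
    return w, False

# Linearly separable toy data, so convergence is guaranteed.
xs = [[0.0, 0.0], [0.0, 1.0], [3.0, 3.0], [4.0, 3.0]]
ys = [-1, -1, 1, 1]
w, converged = train_perceptron(xs, ys)
print(converged)  # -> True
```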

Project 9.4. For any of the six initial contours described above, imagine joining the point at r(0°) to the point at r(315°) by a straight line segment. This point is then joined to the point at r(270°) by another segment, and so on to create a polygonal approximation for each contour. Then, compute and quantize the interior angles to form a string using the method described in the example on page 621. The resulting six strings generated from the six prototype vectors will constitute the prototype strings for the six pattern classes under consideration. (Clearly these strings can be obtained directly from the pattern vectors of the above data sets.) (a) Convert each vector in the training data set to a string representation. Compute the measure of similarity given in Eq. (9.4-4) for each string by comparing it against the prototype string of each class. Assign the string in question to the class that yielded the largest value of R. Determine the percentage of correct recognition for the strings of the training set. (b) Repeat for the strings of the independent test set.

 

Copyright © Nanchang University Communication Experiment Center (2007)     Recommended resolution: 1024*768
Website development: Yu Jun     Administrator email: zhuqibiao2006+163.com (replace + with @)