Open Graduate Research Topics
Below are descriptions of potential projects identified within the scope of the BSU Signal Processing Lab that could fulfill the thesis requirements for the Master of Science (MS) or Doctoral (PhD) programs. Students who have entered, or are considering entering, the BSU MS or PhD programs may find these useful as they search for the research projects needed to complete their degrees.
Some of these projects, but not all, have funding available for qualified students. Qualified PhD students can receive their first two years of PhD support through department-funded assistantships.
Document Image Processing
- Integration of Printer & Scanner Models
- Selecting document image enhancement filters based on degradation model parameters
- Expanding the printer degradation model
- OCR Training Procedures
Biomedical Image Processing
- Feedback Fluoroscopy Image Matching System
- Feedback Magnetic Resonance Image Matching System
- Semi-Automated 3D Image Segmentation
- Image Processing of Ultrasound Spinal Images
Document Image Processing
Integration of Printer & Scanner Models
Fields: System Modeling, Parameter Estimation, Image Processing
To create a paper document, it must be printed; printing is also the output process of photocopying and faxing. This proposed graduate project involves developing a calibration method for the printing process suitable for use in Document Image Analysis, and integrating the printing and scanning models.
For this project, only the electrophotographic printing process will be considered. Toner is applied to the paper in quantities related to the charge on the photoconductor, and the charge is related to the laser intensity. Margaret Norris, a master's student graduating in May 2004, has developed a model of how toner is dispersed on paper that builds on prior work on how laser intensity is related to source image shape. The exact amount of toner applied to the paper and the resulting absorptance level (darkness grey level) are probabilistic quantities, so a probabilistic model has been developed. This model needs to be expanded to a broader class of input images, and a method needs to be developed to calibrate it from samples of printed characters.
Printing and scanning are the building-block processes for document generation. OCR analysis is done on digitized images, requiring all documents to be scanned. Thus the scanning degradations are the easiest to isolate and were the first to be studied. Prior work of Dr. Barney Smith has generated new understanding of the relationship between scanner system parameters and document image degradations (stroke width and corner erosion), as well as a collection of tools based on scanning models. Printing has been studied in the field of halftoning to decide what to print to achieve a desired grey level. These two fields will be combined for use on the bilevel images common in Document Image Analysis.
The printer model must be combined with the existing model of scanning. To simplify this combination of models, we want to see whether the nonlinear printing model can be approximated by a linear two-dimensional convolution. If so, then the kernel for the convolution in the printing model can be combined with the kernel for the convolution in the scanning model to make a single print/scan kernel. Doing this would enable all the methods that exist for the scanning model to be retained. This is one possible route to the next goal of developing defect models that incorporate both printing and scanning sub-system models.
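Because convolution is associative, the two blurs would collapse into one kernel if the linear approximation holds. A minimal numpy sketch with illustrative, uncalibrated kernels:

```python
import numpy as np

def conv2d_full(img, kernel):
    """Full 2-D linear convolution, written out explicitly."""
    H, W = img.shape
    h, w = kernel.shape
    out = np.zeros((H + h - 1, W + w - 1))
    for i in range(h):
        for j in range(w):
            out[i:i + H, j:j + W] += kernel[i, j] * img
    return out

# Illustrative (uncalibrated) point-spread functions for the two stages.
print_psf = np.array([[0.05, 0.10, 0.05],
                      [0.10, 0.40, 0.10],
                      [0.05, 0.10, 0.05]])
scan_psf = np.full((3, 3), 1.0 / 9.0)  # simple averaging blur

# Convolving the two kernels gives a single 5x5 print/scan kernel.
print_scan_psf = conv2d_full(print_psf, scan_psf)

image = np.zeros((16, 16))
image[4:12, 4:12] = 1.0  # a solid square as a test glyph

two_stage = conv2d_full(conv2d_full(image, print_psf), scan_psf)
one_stage = conv2d_full(image, print_scan_psf)
assert np.allclose(two_stage, one_stage)
```

The equivalence holds only as long as both stages really are linear and shift-invariant, which is exactly the question the project would investigate.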
Methods have been developed to calibrate the scanner defect model without extensive equipment, predominantly using information in text images as opposed to specialized test charts. These need to be expanded to the combined print/scan model.
Models are only useful to the scientific community when they are validated. A method of validating defect models was proposed by Kanungo. This method has been used by Dr. Barney Smith in research done with Dr. Qui. Code to make this procedure more flexible and to try other experimental possibilities is currently being developed by a pair of undergraduate research students. Using this procedure to validate each of these models is the final part of this work.
Selecting document image enhancement filters based on degradation model parameters
Fields: System Modeling, Filter Selection, Image Processing
Images are degraded by a number of different mechanisms. If those mechanisms are primarily dictated by a common degradation model that contains blurring (convolution), sampling, additive noise, and thresholding, and if the parameters to that degradation model are known, then an appropriate filter should be easier to determine.
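A minimal sketch of that common degradation model (blur, sample, add noise, threshold); all parameter values here are chosen for illustration, not calibrated:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_full(img, kernel):
    """Full 2-D linear convolution."""
    H, W = img.shape
    h, w = kernel.shape
    out = np.zeros((H + h - 1, W + w - 1))
    for i in range(h):
        for j in range(w):
            out[i:i + H, j:j + W] += kernel[i, j] * img
    return out

def degrade(ideal, psf, sample_step=2, noise_sigma=0.05, threshold=0.5):
    """Blur -> sample -> add noise -> threshold, per the common model."""
    blurred = conv2d_full(ideal, psf)
    sampled = blurred[::sample_step, ::sample_step]
    noisy = sampled + rng.normal(0.0, noise_sigma, sampled.shape)
    return (noisy > threshold).astype(np.uint8)

glyph = np.zeros((20, 20))
glyph[5:15, 8:12] = 1.0  # an ideal vertical stroke
psf = np.full((3, 3), 1.0 / 9.0)
bilevel = degrade(glyph, psf)
```

Given estimated values of the PSF width, noise level, and threshold, a restoration filter could be selected to invert as much of this chain as the final thresholding step permits.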
Past efforts to choose restoration filters have looked at characteristics of the image and at how the filters have improved recognition. They have not considered the mathematical source of the degradations.
Due to the non-linear nature of the degradation, the restored image is unlikely to match the original image exactly. The criterion for determining the best output image, and thus the best filter, could be to compare the final output with the input, but other metrics such as recognition accuracy could be used.
Expanding the printer degradation model
Fields: Pattern Recognition, Image Processing
A printer degradation model has been developed, but it is not as mature as the scanner model. Open questions include:
- Validation of the printer model – The scanner model is being validated and compared to other scanner models through non-parametric statistical testing. This approach, among others, could be applied to validating the printer model. The validation can be done at the pixel level or at a more macro level with multi-pixel strokes and other shapes.
- Including fuser effects in the printer model – It has been observed that while the printer model does a very good job of representing the images on paper at a microscopic level, after the toner is attracted to the paper, the fusing stage introduces a directional effect on the image. Incorporating this into the model is an open problem.
- Calibration of the printer model – The printer model is a much more stochastic model than the scanner model. In the scanner model the additive noise is stochastic, but the general form of the resulting image is not. For the printer, the toner that produces the image adheres in a stochastic manner: the expected coverage of the paper by toner can be predicted, but not the sample image. Still, the calibration can be done by looking at edge raggedness, fill density, and the effective width of a stroke produced by a known number and spacing of laser traces. Taking these measurements and correlating them to the printer model parameters is an open project.
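As one illustration of what such calibration measurements might look like, a sketch that pulls fill density, effective stroke width, and a crude left-edge raggedness figure out of a binary stroke image (the feature definitions here are simplified assumptions, not the model's actual calibration procedure):

```python
import numpy as np

def stroke_measurements(bilevel):
    """Simple calibration features from a binary stroke image
    (1 = toner, 0 = paper). Definitions are illustrative only."""
    fill_density = bilevel.mean()
    # Effective width: mean toner pixels per row that contains toner.
    rows = bilevel[bilevel.any(axis=1)]
    effective_width = rows.sum(axis=1).mean()
    # Raggedness: variation of the left edge position across rows.
    left_edges = rows.argmax(axis=1)
    raggedness = left_edges.std()
    return fill_density, effective_width, raggedness

# A clean synthetic stroke: 3 pixels wide, perfectly straight edges.
stroke = np.zeros((10, 10), dtype=np.uint8)
stroke[:, 3:6] = 1
density, width, rag = stroke_measurements(stroke)  # 0.3, 3.0, 0.0
```

On a real printed stroke, the raggedness figure would be nonzero and could be regressed against printer model parameters.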
OCR Training Procedures
Fields: Pattern Recognition, Image Processing
The most prevalent method of improving individual character recognition is to train the classifier with as large a training set as possible. Recognition accuracy can also be increased by matching the training set to the document. This has been done by extracting templates from the document in one step and then using them for recognition in a second pass through the document. We propose to use the model calibration developed previously to combine these two methods, providing a training set that is both large and matched to the OCR process.
There are several benchmark datasets available for testing OCR systems, and OCR companies have their own large datasets. When a page can be analyzed to characterize the degradation’s relationship to the model, the database can be subdivided into large sets of matched characters.
Good recognition accuracy requires a training set that is both large and closely matched to the test set. This work proposes to develop a framework that matches the degradation level of the document to a large training set, by developing methods to partition a large training set and to use the model calibration to select the appropriate subset.
The parameters affecting the printer model currently include reflectance of the paper and ink, trace of the laser, spread of the toner, and size and quantity of toner particles. This study will also show what types of image degradations each parameter affects. The information about the relationship between the parameters and the degradations could benefit printer design.
We will determine how to divide the model and parameter spaces to capture the differences in image degradation. At the same time, we want to limit the number of partitions to keep management simple and to maintain the ability to generalize. The partitions should be set such that an error in model calibration will not often point to a different template set. Determining a method to partition the degradation space for classifier training is one focus of this proposed research. Work by Dr. Barney Smith showed that characters will have a similar appearance, as quantified by the Hamming distance, when degraded with combinations of degradation parameters yielding the same edge displacement degradation feature, so long as the difference in PSF width is not very large. It is expected that subdividing the space into regions of common edge displacement will work better than using a Cartesian division. Other metrics of similarity will also be examined.
Ho & Baird compared how a classifier trained on a single font (25 phases, 125 (5x5x5) degradations, 94 character classes) degraded over the whole degradation space responds to samples at each point in the degradation space. This showed under which model parameters characters are difficult to recognize when the classifier is trained on a global training set. We will train a family of classifiers, each on characters degraded with parameters from a different subset of the degradation space, and then evaluate the classification results for each classifier over the whole space.
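A toy sketch of that experimental design, using synthetic 2-D "degradation parameters" and nearest-class-mean classifiers as stand-ins for the real character data (everything here is illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def make_samples(n, lo, hi):
    """Hypothetical generator: degradation parameters, toy features,
    and a toy class label for each sample."""
    params = rng.uniform(lo, hi, size=(n, 2))
    labels = (params.sum(axis=1) > (lo + hi)).astype(int)
    features = params + rng.normal(0.0, 0.1, params.shape)
    return params, features, labels

# Crude Cartesian split of the degradation space into two subsets.
partitions = [(0.0, 0.5), (0.5, 1.0)]
classifiers = []
for lo, hi in partitions:
    _, X, y = make_samples(200, lo, hi)
    # Nearest-class-mean "classifier" trained only on this subset.
    means = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])
    classifiers.append(means)

# Evaluate every classifier over the whole degradation space.
_, X_all, y_all = make_samples(500, 0.0, 1.0)
for means in classifiers:
    pred = np.argmin(((X_all[:, None, :] - means) ** 2).sum(-1), axis=1)
    accuracy = (pred == y_all).mean()
```

The real experiment would replace the synthetic samples with characters degraded by the calibrated model and the nearest-mean rule with the OCR classifier, but the loop structure is the same: one classifier per partition, each scored over the full space.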
We will compare recognition accuracies both with and without this partitioning method. This will be done with both a spatially invariant model and using the adaptive parameter variation. With OCR accuracy on highly degraded document images at 92%, there is need for improvement. A 1% improvement in recognition accuracy on a typical page of 2500 characters will remove 25 errors.
Neuromorphic Computing
The Neuromorphic Computing project combines
- Machine learning
- Circuit and chip design
- Device design and processing
to create a circuit capable of learning in a way inspired by how mammalian brains function. This is a very interdisciplinary project. More information on the project is available at Neuromorphic Computing. Our project is hiring outstanding PhD candidates in all three focus areas.
Dr. Barney Smith is the lead in the machine learning part of the project. While many machine learning neural network structures and algorithms exist, most are not suited for implementation in hardware because they need to know the explicit values of the weights in many places in the network to decide on changes to each individual weight. We are using memristors as a reprogrammable variable weight that represents the synapse. We are investigating algorithms that are suitable for use in this environment. The algorithms we develop will then be implemented in a circuit designed by Dr. Saxena. Once the chip is fabricated, the memristor technology developed by Dr. Campbell will be added to the chip in a back-end-of-line (BEOL) process.
Biomedical Image Processing
Feedback Fluoroscopy Image Matching System
Fields: Biomedical image and signal processing, mechanics
Female athletes sustain anterior cruciate ligament (ACL) injuries at rates from three to seven times higher than male athletes in the same sports. Several studies have recently suggested that differences between the genders in the mechanics of landing from jumps may result in increased ACL loads in female athletes. To date, no studies have quantified the internal kinematics of the knee joint during landings in athletes of either gender. Because the ACL connects the femur and tibia at the knee joint, relative motion between these two bones during landing may predispose the ligament to injury. Accurate bony motion data cannot be collected using standard non-invasive motion capture techniques. However, computer vision and image processing algorithms can be combined to develop minimally invasive techniques, using new medical imaging technologies, to quantify joint motion in live human subjects. This technique matches 3-D images of human joints with 2-D video fluoroscopy (video X-ray) image sequences to track the motion of bones at a joint very accurately. This will enable researchers to collect accurate, three-dimensional kinematic data of bones and joints in vivo and to accurately quantify how bones in a joint move relative to one another during dynamic activities. Knowledge of the exact spatial position of the two joint bones will allow biomedical researchers to develop techniques to diagnose the extent of joint injuries.
This project involves the design of computer vision and image processing algorithms to match 3-D images of human joints with 2-D fluoroscopy (video X-ray) image sequences. The data for this project consist of sets of CT images representing a 3-D volume and 2-D fluoroscopy image sequences of the same joint. The CT images have already been processed to extract a 3-D solid model of the bones. The procedure is to:
- Segment the 3-D CT volume to separate the two bones into 2 different 3-D CT volumes
- Match the 3-D solid models to the bones in the segmented CT ‘images’
- Use projection software (developed previously for this research) to produce 2-D simulated fluoroscopy images through a process called Digitally Reconstructed Radiographs.
- Develop edge detection algorithms to locate the bone edges in the real and projected fluoroscopy images
- Use a matching algorithm to iteratively adjust the pose of each bone model in 6 degrees of freedom (3 position and 3 orientation) until the edges detected from a projection of the bone model match the edges detected in the fluoroscopy image. From this, the exact spatial position of the two joint bones is known.
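The iterative adjustment in the last step can be sketched as a search over a 6-vector pose. Here the real projected-edge comparison is replaced by a hypothetical stand-in cost, and the optimizer is a simple coordinate search, not the lab's actual matching algorithm:

```python
import numpy as np

# Hidden "true" pose (x, y, z, rx, ry, rz) that the search must recover;
# in the real system this is unknown and the cost comes from images.
true_pose = np.array([1.0, -2.0, 0.5, 0.1, -0.2, 0.05])

def edge_mismatch(pose):
    """Stand-in cost: the distance between projected-model edges and
    fluoroscopy edges would go here."""
    return float(((pose - true_pose) ** 2).sum())

def refine_pose(pose, step=0.5, shrink=0.5, iters=40):
    """Greedy coordinate search over the 6 degrees of freedom,
    shrinking the step size when no move improves the fit."""
    pose = pose.copy()
    for _ in range(iters):
        improved = False
        for d in range(6):
            for delta in (step, -step):
                trial = pose.copy()
                trial[d] += delta
                if edge_mismatch(trial) < edge_mismatch(pose):
                    pose, improved = trial, True
        if not improved:
            step *= shrink
    return pose

estimate = refine_pose(np.zeros(6))
```

Any 6-DOF optimizer could be substituted; the key design point is that the cost function is evaluated through image projection and edge comparison rather than in closed form.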
This will be used to study the differences in knee joint motions during landing between genders, to quantify joint motions in people with movement or skeletal abnormalities, and to study both normal and pathologic motion in a wide range of skeletal joints. Our goal is to extend the fluoroscopy technique for analysis of very dynamic activities, such as running, jumping, and cutting, which are of particular interest in the study of ACL injury mechanisms in athletes.
Many components of this project have been completed, but many still remain. Some of the remaining portions are
- Choosing better features for image matching – Currently, edges of the fluoroscopic and projected CT images are matched, in both gray scale and bilevel, to determine quality of fit. This leaves out a large portion of the image information. It is thought that some filled bone images/features may help orient the search into the right portion of the feature space, after which the exact match could be refined using edges. Also, building a model of the overall orientation of these long bones in 2D and making some preliminary matches to bone orientation in 3D would assist in the fine matching search.
- Separating overlapping X-ray components – There is interest in using this methodology for canine ACL research. Canines are less cooperative in moving in prescribed motions. Images can be captured while the canine walks on a treadmill in front of the camera. The image quality is good, but the two legs will pass by each other, and the two leg images need to be segmented from each other to be useful. This involves image segmentation and tracking.
- Bone Coordinate System matching – Once the 2D and 3D images are matched, the resulting data has to be interpreted. For this to happen the 3D bones need to be given a coordinate system and that coordinate system has to be mapped to real-space. A coordinate system has been defined. Features on the 3D bone image need to be extracted to map the image to the coordinate system. Often the coordinate system is defined using features of the bone that will not be available within the field of view of our 3D images. This too must be compensated for.
- Tracking – Once the 2D images can be reliably located relative to the 3D image, the outputs must be tracked. This has two components. The first is tracking in imaging coordinate space: if the elevation, azimuth, and rotation are determined in one frame, there should not be much change by the next frame, so the hypothesis of where the search algorithm should begin its search should be influenced by the previous frames. Then, once the bones are mapped to bone coordinate space, the poses must be tracked so the dynamics of the motion can be analyzed.
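The frame-to-frame hypothesis can be sketched with a simple constant-velocity prediction (an assumption for illustration; the actual tracker could use a richer motion model such as a Kalman filter):

```python
import numpy as np

def predict_next_pose(history):
    """Constant-velocity seed for the next frame's search: extrapolate
    from the last two pose estimates."""
    if len(history) < 2:
        return history[-1]
    return history[-1] + (history[-1] - history[-2])

# Elevation, azimuth, rotation estimates from two previous frames.
poses = [np.array([10.0, 45.0, 2.0]), np.array([10.5, 44.0, 2.2])]
seed = predict_next_pose(poses)  # roughly [11.0, 43.0, 2.4]
```

Seeding the search this way shrinks the region of pose space the matching algorithm must explore in each new frame.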
Feedback Magnetic Resonance Image Matching System
Fields: Biomedical image and signal processing, mechanics
This project is similar to the fluoroscopic matching project, and is aimed at migrating the fluoroscopic/CT approach to Magnetic Resonance (MRI) images. MRI is preferable because it is less dangerous to the subjects than the CT and fluoroscopy modalities, which use X-rays.
For MRI image registration, a sequence of MR images is taken at the same location in 3D space while the subject moves the joint in the frame. The MR images are essentially a slice of a 3D space. Instead of matching this “2D” image to the 3D volume through a projective model, a model using slices is used.
A similar search algorithm can be used. Open project components include
- Segmenting MR images – Both the “2D” slice and the 3D volume need to be segmented to extract just the bone portion. The flesh portion is expected to deform significantly during the motion activity and cannot be used for matching. MR images are very good at producing high contrast in soft tissue and are less well suited to hard tissue. Some imaging pulse parameters have been found that produce adequate contrast, but segmentation is still needed. The resolution is also lower in these images than it is in fluoroscopic images.
- Selecting features – The images sliced from the 3D MRI and the 2D MRI need to be compared to determine when a suitable match is present. Features and metrics for the quality of match need to be determined.
Semi-Automated 3D Image Segmentation
Fields: Biomedical image and signal processing, Image segmentation
In an attempt to see why female athletes sustain anterior cruciate ligament (ACL) injuries at rates from three to seven times higher than male athletes in the same sports, a model of the skeletal and muscular systems of a human female is being developed. To populate this model, the non-linear strength and location of the muscles must be determined; in particular, the mass and placement of the muscles over their extent. The visible slices from the female Visible Human are being analyzed to determine the center of mass and the cross-sectional area of each slice. To aid in this endeavor, we are helping to automate the segmentation process by implementing the intelligent scissors algorithm (developed by Barrett at BYU) and combining it with interpolation to automate the acquisition of data from intermediate slices.
These slices will be ‘stacked’ to form a 3-D model of the muscles.
Research is open to develop methods that determine intermediate slice boundaries without human intervention. This will reduce the number of slices requiring manual input, automating the process further. The algorithms will be evaluated for smoothness and accuracy, and their portability to other images will also be analyzed.
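One simple baseline for the intermediate-slice problem is linear interpolation between corresponding contour points on the two neighboring key slices (a simplification that assumes point correspondences between the contours are already established):

```python
import numpy as np

def interpolate_contour(c0, c1, t):
    """Linearly interpolate between two corresponding slice contours.
    Assumes both contours are resampled to the same number of
    corresponding points; the real correspondence problem is harder."""
    return (1.0 - t) * c0 + t * c1

# Two toy muscle boundaries (N x 2 point lists) from adjacent key slices.
c0 = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
c1 = np.array([[0.2, 0.1], [1.2, 0.1], [1.2, 1.1], [0.2, 1.1]])
mid = interpolate_contour(c0, c1, 0.5)  # boundary for the middle slice
```

An automated method would need to improve on this baseline where the muscle cross-section changes shape, not just position, between key slices.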
See also COBR.
Image Processing of Ultrasound Spinal Images
(Barney Smith & Reischl)
Fields: Biomedical image and signal processing, Segmentation
During the course of the day, the spacing between the human vertebrae decreases. Then at night, or during other extended periods when not in a vertical position, they will relax and the spacing will increase. There is evidence that the amount and rate of this compression is correlated to overall spinal health.
To further this study, detailed measurements of the spacing of the vertebrae are needed. The least intrusive measurement method is ultrasound imaging. These ultrasound images need to be analyzed to segment the vertebrae from the surrounding soft tissue and then measure their spacing. Doing so with 3D ultrasound would provide even more information for analysis.