
Guidelines for Nighttime Visibility of Overhead Signs (2016)

Chapter: Appendix B - Assessment of Background Complexity Using Digital Images of Roadway Scenes by Image Processing

Suggested Citation: "Appendix B - Assessment of Background Complexity Using Digital Images of Roadway Scenes by Image Processing." National Academies of Sciences, Engineering, and Medicine. 2016. Guidelines for Nighttime Visibility of Overhead Signs. Washington, DC: The National Academies Press. doi: 10.17226/23512.

APPENDIX B

Assessment of Background Complexity Using Digital Images of Roadway Scenes by Image Processing

This appendix contains information about assessing the complexity of a roadway scene based on images captured by mobile photometric equipment at night. The procedure, which produces a complexity rating from a combination of parameters for each image, was applied to generate a complexity rating for each sign in the open-road study described in Chapter 4. The material presented in this appendix is also published in an article in Transportation Research Record: Journal of the Transportation Research Board, No. 2384 (1).

Background

The open-road study included the development of a tool for quantifying visual complexity based on calibrated photometric images taken while approaching the signs of interest. The tool calculates a value from elements within a digital image that contribute to a driver's perception of visual complexity, based on surveys of ratings from drivers using images from the open-road study sites. The process for evaluating visual complexity with the image processing tool is described herein.

It was anticipated during the initial stages of the research that the image processing tool would be made available to departments of transportation and other interested agencies through a Web-powered interface or software. To use the tool, a practitioner would capture an image of a target sign at night and upload the digital picture to the online tool, which would compute the visual complexity based on the components of the image discussed in this appendix. The calculated visual complexity would then be used to determine an appropriate amount of legend luminance. As the project was coming to an end, it was decided that a simpler approach would be to use images that had been run through the tool to illustrate the various visual complexity levels and to include those images in the guidelines for easier implementation. There remains a need for less expensive and easier-to-use equipment to capture the necessary calibrated photometric images.

Introduction

The visibility of traffic signs is a critical component of transportation safety, yet traffic control devices intended to be visible at night are typically developed, deployed, and tested in isolation from the surrounding roadway scene. Effective traffic signing provides drivers with the information they need to make safe, appropriate, and timely decisions while also maintaining a certain level of driver comfort, especially at night. Existing sign placement guidance is meant to help practitioners install signs where they will be visible to the driver without being a hazard. For the most part, the guidance focuses on the installation of an isolated sign and does not effectively account for the background and adjacent signs that add to the visual complexity and may impair a driver's ability to detect and obtain information from a particular sign. Figure B-1 contains images of test signs posed against backgrounds of varying complexity.

Overhead guide and street name signs can be difficult to detect and read because of background complexity, particularly in urban conditions where the background can be very complex. While the Manual on Uniform Traffic Control Devices does discuss sign visibility, background complexity and its potential impact are not expressly mentioned or discussed (2).
The concept of and concern for background complexity has developed over the years as state practitioners adapt to expanding urban environments. While various studies have indirectly addressed background complexity, NCHRP Project 05-20, "Guidelines for Nighttime Visibility of Overhead Guide Signs," was initiated specifically to address growing practitioner concerns over whether overhead guide signs need lighting, especially in complex urban environments. Therefore, it is necessary to develop a new method, independent of human perception, to assess the background complexity of traffic signs in nighttime environments with high accuracy and consistency.

This study aimed to design a system that automatically evaluates the background complexity of overhead traffic signs from digital images of nighttime roadway scenes by using image processing techniques and multiple linear regression. The proposed system has the potential to be combined with current systems for measuring the visibility of traffic signs in practice.

Previous Studies

Previous studies analyzing the effects of the background complexity of traffic signs did not provide a numerical model to assess the complexity; rather, they studied the effects of background complexity on a driver's ability to recognize signs (3). Hence, the literature review focused on the evaluation of complexity in two-dimensional (2D) images by image processing. Image processing of 2D signals has wide application in fields such as automatic target recognition, traffic surveillance, pavement crack estimation, remote sensing, and medical imaging. The background complexity perceived by a driver therefore has the potential to be evaluated using image processing techniques. Information theory, which has been widely used in data analysis for clustering, feature selection, blind signal separation, and so forth, is the most frequently used framework in image complexity analysis.

Work conducted by Okawa focused on a complexity measure for color pictures considering six factors, such as the distribution of color variance, the total number of regions, and the color distribution of the regions (4). The six factors were mathematically defined and measured using a computer. Five students were invited to grade the complexity of 251 realistic images. Finally, the image complexity was expressed by a

Figure B-1. Examples of signs in scenes of varying levels of complexity: (a) low-complexity overhead guide sign; (b) high-complexity overhead guide sign; (c) low-complexity street name sign; (d) high-complexity street name sign.

linear combination of the six factors, with weights determined by the least-squares method. It was found that the structural factor of a color picture and the color variance could significantly affect the image complexity.

In a study by Mario et al., a novel fuzzy approach was developed to determine the complexity of an image mainly based on an analysis of edge level percentages in the image (5). The developed method did not depend on a priori human evaluation of complexity. The complexity of each image was assigned to one of the classes Little Complex, More or Less Complex, and Very Complex, determined by the in-class membership functions developed in the study. The method performed well in determining image complexity in tests on 150 real images.

Cardaci et al. applied a fuzzy mathematical model to evaluate image complexity via a specific entropy function based on local and global spatial features of the image, which were found more perceptually appropriate for describing complexity (6). The classic entropic distance function was adopted in the study. A comparison with subjective estimates of image complexity showed that the developed model correlated with the subjective ratings, demonstrating that such a model is capable of determining the complexity of an image.

Rigau et al. introduced a new information-theoretic method to analyze image complexity based on segmentation of the image (7). The image was partitioned using the information channel that goes from the histogram to the regions of the partitioned image so as to maximize the mutual information. The authors took into account the entropy of the image intensity histogram as well as the spatial distribution of pixels. The final complexity analysis used two measures: the number of partitioning regions needed to extract a given ratio of information from the image, and the compositional complexity of the partitioned image.

Perkio and Hyvarinen presented a novel information-theoretic method to determine single-image and pair-wise image complexity based on independent component analysis (8). Experimental results showed the developed model to be reliable and more responsive to textures than two other compared methods.

Patel and Holt (9) conducted an experiment to determine image complexity by applying the Klinger-Salingaros algorithm (10), which was developed as a quantitative pattern measure of harmony, temperature, life, and complexity. The authors tested the Klinger-Salingaros algorithm on realistic images and explored how well the complexity values calculated by the algorithm correlated with human ratings of the same images. A high correlation value supported the usefulness of the Klinger-Salingaros algorithm in estimating image complexity with respect to human perception of complexity.

Methodology

Input Factors

The researchers used several image processing techniques to extract seven intrinsic properties from nighttime roadway images for the development of a background complexity model. The seven properties describing the image texture were entropy, contrast, energy, homogeneity, number of saturation pixels, edge ratio, and number of objects in the image. All of these properties are considered input factors in the complexity model for nighttime images of roadway scenes.
Four of these properties (entropy, contrast, energy, and homogeneity) are derived from the gray-level co-occurrence matrix (GLCM), defined over an image as the distribution of co-occurring gray values at a given offset. An image with N gray levels produces an N × N matrix in which the entry p(i, j) counts the occurrences of pixel pairs having gray levels i and j, respectively, separated by a fixed distance. The texture of the image can be measured from the GLCM, which is typically large and sparse, so various summary metrics of the GLCM are usually taken to obtain a more useful set of features. As shown in Figure B-2, images with different complexity levels can have markedly different co-occurrence matrices.

Number of Objects

In general, the number of objects in an image, denoted by O, directly reflects the degree of complexity: the more objects that appear in the image, the more complicated the image is, and vice versa. The number of objects in an image is computed automatically from labeled connected components in the binarized image. Note that fine textures within a large object, such as words on commercial billboards, were counted as separate objects in this study. This is reasonable, since an object with complicated texture can more strongly distract drivers, as demonstrated in Figure B-3.

Number of Saturation Pixels

In this study, saturation pixels, denoted by S, were defined as pixels whose grayscale reaches the highest representable value (e.g., 255 for an 8-bit image, 65,535 for a 16-bit image). In theory, the central areas of light sources are so bright that they exceed the scale capability of the image, so the corresponding pixels are assigned the highest gray level. In practice, the threshold is usually set to approximately 90 to 95 percent of the highest grayscale value for the scale of the image; Figure B-3 applies a threshold of 95 percent. More saturation pixels in an image imply that drivers likely view a more complex sign background with a large number of objects, such as lighting sources, commercial billboards, and oncoming vehicles, all of which can strongly affect drivers' observations.
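To make the object and saturation counts concrete, the following minimal sketch shows one way they might be computed, assuming scikit-image is available; the Otsu binarization step is an assumption, since the appendix does not specify how the binary image was produced.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label

def count_objects(gray):
    """Count connected components in a binarized image (Otsu threshold assumed)."""
    binary = gray > threshold_otsu(gray)
    _, num_objects = label(binary, return_num=True, connectivity=2)
    return num_objects

def count_saturation_pixels(gray, bit_depth=8, fraction=0.95):
    """Count pixels at or above 95% of the highest representable grayscale value."""
    threshold = fraction * (2 ** bit_depth - 1)
    return int(np.count_nonzero(gray >= threshold))
```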

Contrast

Contrast measures the degree of difference in grayscale between a pixel and its neighbor over the whole image and thus assesses the amount of local variation in the image. Human beings are more sensitive to contrast than to absolute grayscale values. Like entropy and energy, the contrast of an image can be derived from the GLCM:

$G = \sum_{i=1}^{N} \sum_{j=1}^{N} (i - j)^2 \, p(i, j)$

The focus of this work was the nighttime roadway scene, where the contrast in the background encodes information about complexity: higher contrast means there are likely more light sources in the background. The ideal circumstance for a driver to clearly view a guide sign is a completely black background, which has zero contrast. The viewing experience changes significantly as contrast increases, which is why contrast was considered an important factor in modeling the background complexity of traffic signs in this study.

Figure B-2. GLCMs of images of varying complexity (Image 1 with less complexity and Image 2 with more complexity, each shown with its GLCM).

Figure B-3. Example of image properties.

Entropy

Entropy, denoted by E, is a quantity used to statistically measure the randomness of an image. In information theory, entropy measures the degree of uncertainty associated with random variables and is considered a statistical measure of complexity (11). It is calculated as

$E = -\sum_{i=1}^{N} \sum_{j=1}^{N} p(i, j) \log p(i, j)$

where N × N is the dimension of the GLCM. Low-entropy images, such as those containing regular pixels or regions, have very little contrast and very similar grayscale values. High-entropy images, such as one of a heavily cratered surface like that of Mars or the Moon, have very large contrast between pixels. Hence, entropy has the potential to provide information about the complexity of an image, and entropy likely grows as image complexity increases.

Energy

Energy measures the uniformity of grayscale values in an image. It is denoted by J and calculated as

$J = \sum_{i=1}^{N} \sum_{j=1}^{N} p(i, j)^2$

High-energy images have gray-level distributions with either constant or periodic forms. A homogeneous image usually consists of coarser texture with very few dominant gray peaks, so the co-occurrence matrix for such an image has a few large magnitudes, resulting in a large energy value. In contrast, a co-occurrence matrix with a large number of small entries produces a small energy value. Hence, the coarser the texture, the larger the energy, and vice versa.

Homogeneity

Homogeneity measures the spatial closeness of the distribution of elements in the GLCM:

$H = \sum_{i=1}^{N} \sum_{j=1}^{N} \frac{p(i, j)}{1 + |i - j|}$

In the extreme case where the distribution of the GLCM is uniform, the homogeneity of the image approaches 0; conversely, it equals 1 when the distribution of the GLCM lies entirely on the diagonal of the matrix.

Edge Ratio

Edges, as a crucial characteristic of objects, describe both the texture and the shape of objects. Hence, the prevalence of objects in an image can be represented by the edge ratio, defined as

$R = N_{edge} / N_{total}$

where $N_{edge}$ is the number of pixels located at the edges of all objects in an image, and $N_{total}$ is the total number of pixels in the image. An edge occurs where the grayscale of an image changes significantly; edges are generally extracted with difference algorithms built on edge detection operators. An image with a large number of edge pixels is commonly complicated by many objects, which is the reason for employing the edge ratio as a factor in evaluating the background complexity of nighttime roadway scenes. The edge ratio is, however, sensitive to noise and to the accuracy of the selected edge detection operator. In this work, Canny edge detection, one of the best-known multi-stage edge detection algorithms, was adopted to extract edge pixels of the background image (12).
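The four GLCM features and the edge ratio map directly to code. The sketch below, assuming scikit-image's graycomatrix and canny, evaluates the formulas above on a normalized GLCM; the offset distance, angle, and Canny sigma are illustrative choices, not values specified in the appendix.

```python
import numpy as np
from skimage.feature import graycomatrix, canny

def glcm_features(gray_8bit):
    """Compute contrast G, entropy E, energy J, and homogeneity H from a normalized GLCM."""
    # Offset of one pixel at 0 degrees; distance and angle are illustrative choices.
    glcm = graycomatrix(gray_8bit, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]                            # normalized co-occurrence matrix p(i, j)
    i, j = np.indices(p.shape)
    contrast = np.sum((i - j) ** 2 * p)             # G
    nonzero = p[p > 0]                              # avoid log(0) terms
    entropy = -np.sum(nonzero * np.log(nonzero))    # E
    energy = np.sum(p ** 2)                         # J
    homogeneity = np.sum(p / (1 + np.abs(i - j)))   # H
    return contrast, entropy, energy, homogeneity

def edge_ratio(gray):
    """Edge ratio R = N_edge / N_total using Canny edge detection."""
    edges = canny(gray, sigma=1.0)                  # sigma is an assumed smoothing scale
    return edges.sum() / edges.size
```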
Modeling of Complexity

As stated above, all seven properties derived from an image are considered factors in analyzing the background complexity of nighttime roadway scenes. In this study, the research team assumed that the complexity is linearly related to these factors, so a multiple linear regression (MLR) model was employed to model the background complexity (13-15). MLR is a multivariate statistical technique used to examine the linear correlations between multiple independent variables and a single dependent variable:

$y_i = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \beta_3 x_{i3} + \beta_4 x_{i4} + \beta_5 x_{i5} + \beta_6 x_{i6} + \beta_7 x_{i7} + \varepsilon_i$

where $y_i$ is the i-th observation of the dependent variable, which in this work was the complexity rating; $x_{ij}$ is the i-th observation of the j-th independent variable, one of the seven properties introduced previously; $\beta_j$ is the parameter to be estimated for the j-th independent variable; and $\varepsilon_i$ is the error, assumed to follow an independent identical normal distribution. In matrix form,

$Y = XB + Err$

where $Y = (y_1, y_2, \ldots, y_m)^T$ holds the measurements of the dependent variable, $X$ is a matrix of multivariate measurements of the input factors, $B = (\beta_0, \beta_1, \ldots, \beta_7)^T$ is the parameter matrix to be estimated, and $Err$ is the noise matrix.
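Under this matrix form, the parameters can be estimated by ordinary least squares, as described next. A minimal NumPy sketch follows; the feature and rating arrays are hypothetical placeholders for the survey data.

```python
import numpy as np

def fit_ols(features, ratings):
    """Estimate B = (beta_0, ..., beta_7) by ordinary least squares.

    features: (n_images, 7) array of the seven image properties (hypothetical data)
    ratings:  (n_images,) array of averaged survey complexity ratings
    """
    X = np.column_stack([np.ones(len(ratings)), features])  # prepend intercept column
    beta, *_ = np.linalg.lstsq(X, ratings, rcond=None)
    return X, beta
```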

The noise is usually assumed to follow a multivariate normal distribution. The ordinary least squares (OLS) method is employed to estimate the parameters $(\beta_0, \beta_1, \ldots, \beta_7)^T$ of the background complexity model for nighttime images of roadway scenes (16).

Any visual background complexity model will only be as good as the human factors data used to calibrate it. Given the available sample size, bootstrapping, a common resampling method, was used to improve the performance of the MLR (17). The general procedure of bootstrapping is as follows; a sketch of the procedure appears after this list.

1. Plug the original samples of size N into the multiple linear regression model.
2. Compute the desired estimates of the parameters in the model.
3. From the original samples, draw with replacement a bootstrap sample of the same size N. "Replacement" means that some observations in the original samples may be drawn several times in a bootstrap sample, and some may be excluded.
4. Plug the bootstrap sample produced in the previous step into the multiple linear regression model and obtain new parameter estimates.
5. Repeat Steps 3 and 4 many times and store all results. The number of iterations needs to be set to an appropriate value, since it affects the performance of bootstrapping in the regression; 200 iterations are usually sufficient.
6. Take each estimated parameter as the mean of the stored bootstrap estimates, and the estimated standard error as the standard deviation of the bootstrap estimates.
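A minimal sketch of Steps 1 through 6, reusing the fit_ols helper from the earlier sketch; the iteration count follows the rule of thumb in Step 5, and the random seed is arbitrary.

```python
import numpy as np

def bootstrap_ols(X, ratings, n_boot=200, seed=0):
    """Bootstrap OLS: parameter = mean of resampled fits, standard error = their std."""
    rng = np.random.default_rng(seed)
    n = len(ratings)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)   # resample rows with replacement (Step 3)
        beta, *_ = np.linalg.lstsq(X[idx], ratings[idx], rcond=None)  # Step 4
        estimates.append(beta)             # Step 5: store all results
    estimates = np.array(estimates)
    # Step 6: mean as estimate, standard deviation as standard error
    return estimates.mean(axis=0), estimates.std(axis=0, ddof=1)
```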
Data Description

Human factors rating data for nighttime images taken by a Basler Scout camera with a 35-mm Fujinon lens were collected from 30 participants and used with bootstrapping to calibrate the MLR. The survey was designed to rate images of nighttime roadway scenes based on the background complexity of the target traffic sign, an overhead guide sign or street name sign. A total of 33 images were rated individually by each participant. The background complexity of each image was rated on a scale of 1 through 5, with 1 = low complexity and 5 = high complexity. The participants were told that high complexity meant difficulty detecting the test sign in the image. Two randomized image presentation orders were developed: half of the participants, referred to as Group A, viewed one order, and the other half, Group B, viewed the second. Before the survey, each participant ranked five practice images to introduce the rating concept and the type of images to be rated. Participants were also instructed to comment on any factors that seemed to increase or decrease the background complexity of the target traffic sign.

Table B-1 contains the results of the survey, with the average and standard deviation of the rating for each sign by group, as well as the overall rating. The ratings by the two groups for each sign were compared using a t-test, and an independent paired-sample t-test was conducted to determine whether the survey results of Groups A and B differed. There was not enough evidence to reject the null hypothesis that the groups were the same, since the p-value was 0.48. According to the survey results in Table B-1, and as shown in Figure B-4, Images 12 and 28 were rated by the participants as the two least complex images, while Images 15 and 17 were rated the two most complex of all 33 images.

The participants commented that they selected Image 15 as the most complex because of the busy background, the small size of the sign, and the multiple signs and lights close to the sign.

Another dataset, collected in a previous survey in which a different group of 21 participants rated 16 different nighttime images of roadway scenes, was also used in this study. This dataset, given in Table B-2, was used mainly to validate the proposed model and evaluate its fit. Before image processing techniques were applied to obtain the image properties, the target traffic sign in each image was removed manually and replaced with a totally black area to eliminate the effects of the sign on the analysis; in this way the complexity was analyzed only for the background, not for the image including the sign.

Results and Analysis

Parameter Estimation of the Multivariate Regression Model

Using image processing, the properties of all 33 images were computed automatically: entropy, energy, contrast, homogeneity, number of saturation pixels, edge ratio, and number of objects. These values are summarized in Table B-3. The parameter estimates of the multivariate linear regression model, obtained by OLS from the original small sample and from 1,000 bootstrap samples, are presented in Table B-4 along with the corresponding standard error of each estimate.

The root mean square error (RMSE) was applied as the model fit index to compare the estimates from the original small sample and the 1,000 bootstrap samples:

$RMSE = \sqrt{\frac{1}{N} \sum_{i=1}^{N} (Y_i - \hat{Y}_i)^2}$

where $Y_i$ is the rating of background complexity from the survey, and $\hat{Y}_i$ is the predicted value from the proposed multivariate linear regression model.
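Continuing the NumPy sketches, the RMSE of a fitted parameter vector follows directly from this definition; the rmse helper name is hypothetical.

```python
import numpy as np

def rmse(X, ratings, beta):
    """Root mean square error between survey ratings and model predictions."""
    residuals = ratings - X @ beta
    return np.sqrt(np.mean(residuals ** 2))
```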

Table B-1. Background complexity results: Survey 1.

Image | Group A Avg (SD) | Group B Avg (SD) | Overall Avg (SD) | P-Value | Overall Rating
1 | 4.4 (0.74) | 3.5 (1.19) | 3.9 (1.08) | 0.02 | 4
2 | 2.6 (0.94) | 2.2 (0.56) | 2.4 (0.78) | 0.20 | 2
3 | 4.5 (0.65) | 3.6 (0.83) | 4.0 (0.87) | 0.00 | 4
4 | 3.0 (0.96) | 2.6 (1.12) | 2.8 (1.05) | 0.31 | 3
5 | 2.9 (1.14) | 3.3 (1.05) | 3.1 (1.09) | 0.33 | 3
6 | 2.4 (0.84) | 2.1 (0.74) | 2.2 (0.79) | 0.45 | 2
7 | 2.6 (0.93) | 3.1 (1.16) | 2.9 (1.06) | 0.29 | 3
8 | 2.7 (0.73) | 2.3 (0.62) | 2.5 (0.69) | 0.14 | 3
9 | 2.0 (0.39) | 2.4 (0.83) | 2.2 (0.68) | 0.11 | 2
10 | 1.9 (0.95) | 1.9 (0.74) | 1.9 (0.83) | 0.98 | 2
11 | 2.7 (0.99) | 3.5 (1.19) | 3.1 (1.14) | 0.08 | 3
12 | 1.1 (0.27) | 1.1 (0.26) | 1.1 (0.26) | 0.96 | 1
13 | 3.1 (0.83) | 3.3 (1.18) | 3.2 (1.01) | 0.50 | 3
14 | 2.7 (0.73) | 2.8 (0.77) | 2.8 (0.74) | 0.76 | 3
15 | 4.8 (0.43) | 4.7 (0.82) | 4.7 (0.65) | 0.63 | 5
16 | 2.1 (0.73) | 1.5 (0.52) | 1.8 (0.68) | 0.03 | 2
17 | 4.4 (0.84) | 4.1 (1.06) | 4.2 (0.95) | 0.54 | 4
18 | 1.6 (0.51) | 1.5 (0.64) | 1.6 (0.57) | 0.86 | 2
19 | 2.7 (0.47) | 2.9 (0.74) | 2.8 (0.62) | 0.52 | 3
20 | 1.8 (0.89) | 1.5 (0.64) | 1.6 (0.78) | 0.28 | 2
21 | 2.7 (0.99) | 2.6 (0.91) | 2.7 (0.94) | 0.75 | 3
22 | 1.1 (0.36) | 1.3 (0.46) | 1.2 (0.41) | 0.43 | 1
23 | 2.3 (0.73) | 2.2 (0.68) | 2.2 (0.69) | 0.74 | 2
24 | 1.6 (0.65) | 1.6 (0.51) | 1.6 (0.57) | 0.90 | 2
25 | 1.2 (0.43) | 1.5 (0.64) | 1.4 (0.56) | 0.13 | 1
26 | 4.0 (0.88) | 4.1 (0.83) | 4.1 (0.84) | 0.68 | 4
27 | 1.8 (0.89) | 1.5 (0.52) | 1.6 (0.73) | 0.25 | 2
28 | 1.0 (0.00) | 1.1 (0.35) | 1.1 (0.26) | 0.17 | 1
29 | 3.4 (0.93) | 3.6 (0.91) | 3.5 (0.91) | 0.48 | 4
30 | 1.4 (0.63) | 1.2 (0.41) | 1.3 (0.53) | 0.43 | 1
31 | 1.5 (0.52) | 1.9 (0.74) | 1.7 (0.66) | 0.14 | 2
32 | 2.2 (0.58) | 1.9 (0.74) | 2.0 (0.68) | 0.17 | 2
33 | 3.6 (0.93) | 3.7 (1.10) | 3.7 (1.00) | 0.81 | 4

Figure B-4. Survey images: (a) Image 12, average complexity 1.1; (b) Image 28, average complexity 1.1; (c) Image 15, average complexity 4.7; (d) Image 17, average complexity 4.2.

Table B-2. Background complexity results: Survey 2.

Image | Overall Avg | Overall Std. Dev. | Overall Rating
1 | 1.3 | 0.23 | 1
2 | 4.3 | 1.13 | 4
3 | 3.3 | 0.90 | 3
4 | 3.3 | 1.03 | 3
5 | 3.3 | 1.20 | 3
6 | 2.3 | 0.73 | 2
7 | 1.7 | 0.98 | 2
8 | 3.0 | 0.94 | 3
9 | 2.0 | 0.51 | 2
10 | 1.3 | 0.37 | 1
11 | 2.3 | 0.88 | 2
12 | 3.0 | 1.08 | 3
13 | 1.7 | 0.61 | 2
14 | 5.0 | 0.84 | 5
15 | 2.0 | 0.82 | 2
16 | 3.7 | 1.10 | 4

Table B-3. Statistical summary for image properties.

Image Property | Mean | Std. Dev.
No. of Objects | 40.29 | 16.47
No. of Saturation Pixels | 8.50 | 1.02
Homogeneity | 0.87 | 0.11
Edge Ratio | 0.0056 | 0.0023
Contrast | 45.21 | 43.75
Energy | 0.16 | 0.19
Entropy | 3.61 | 1.25

The RMSE for the original small sample was 0.393094, and the RMSE for the bootstrap samples was 0.372485. The two models had similar performance, although estimates from the bootstrap samples were slightly downward biased for the analysis of background complexity of nighttime roadway scenes in the empirically surveyed data. The dependent variable employed in the regression was the average rating from the 30 participants, a continuous variable. However, the background complexity of traffic signs in nighttime roadway scenes was defined on five integer levels, from 1 (least) through 5 (most). Therefore, the performance of the proposed multivariate linear regression model was also examined with predictions rounded to integers; the results are shown in Figure B-5. Only three images (Nos. 16, 26, and 27) had predicted complexity ratings that deviated from the survey ratings, and these differences were ±1 in all cases, a bias that can be tolerated in practice.

Model Validation

After the multivariate linear regression model was built, its performance needed to be validated. Leave-one-out cross validation (LOOCV) was employed to evaluate the proposed model. As its name implies, LOOCV uses a single observation from the original dataset as the validation data and the remaining observations as the training data; the process continues until every observation has been used once as the validation data. The validation results are illustrated in Figure B-6.
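A minimal sketch of the LOOCV procedure just described, again assuming the NumPy-based helpers from the earlier sketches:

```python
import numpy as np

def loocv_errors(X, ratings):
    """Leave-one-out cross validation: refit without each image, then predict it."""
    n = len(ratings)
    errors = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i               # hold out observation i
        beta, *_ = np.linalg.lstsq(X[mask], ratings[mask], rcond=None)
        errors[i] = ratings[i] - X[i] @ beta   # prediction error on the held-out image
    return errors
```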

Table B-4. Multivariate regression results.

Parameter | OLS Estimate | OLS Std. Error | Bootstrap Estimate | Bootstrap Std. Error
Intercept | −7.1612 | 1.7524 | −6.1491 | 2.6007
Entropy | 0.2422 | 0.3651 | 0.1907 | 0.4276
Contrast | 0.0138 | 0.0049 | 0.0128 | 0.0088
Energy | 0.3789 | 2.8844 | 0.0543 | 3.4869
Homogeneity | 3.9557 | 1.3610 | 2.7531 | 2.8927
No. of Saturation Pixels | 0.4068 | 0.1691 | 0.4414 | 0.1888
Edge Ratio | 92.9387 | 57.5493 | 102.1324 | 61.2105
No. of Objects | 0.0197 | 0.0056 | 0.0196 | 0.0080

Figure B-5. Results of rounded values from the multivariate linear regression model.

As shown in Figure B-6, the fit of the model was very good and certainly acceptable: the largest biases were 1.17 for the averaged ratings and 1 for the rounded ratings. Based on these results, the error with respect to the averaged background complexity ratings in the validation had a mean of 0.3182 and a standard deviation of 0.2951. Additionally, an error of ±1 in the rounded ratings is reasonable, since the survey was a subjective procedure in which it was difficult for participants to accurately distinguish the difference in background complexity between two nighttime images, especially when the images had close complexity ratings (e.g., 4 and 5).

To further validate the proposed multivariate linear regression model, data were used from a preliminary survey in which 16 nighttime images of overhead guide signs were rated by 21 participants using a similar methodology. Figure B-7 demonstrates the performance of the proposed model on this dataset. The model performed well: the largest error was less than 1.5 for the averaged ratings and 1 for the rounded ratings, and in the rounded ratings, differences of 1 occurred in only 3 of the 16 images. As noted previously, an error of ±1 is acceptable given the subjectivity of the rating procedure. This validation with the second dataset was particularly important because the validation data come from a different survey than the one used to train the model; it shows that the developed model is robust and has strong potential for rating the background complexity of any digital image of a roadway scene.

Conclusions

The goal of this study was to assess the background complexity of overhead traffic signs from nighttime images of roadway scenes using image processing techniques. A multivariate linear regression model is proposed that takes entropy, contrast, energy, homogeneity, number of saturation pixels, edge ratio, and number of objects as input properties, all derived directly from images by image processing techniques. Image rating data collected from 30 participants in one survey and 21 in another were used to train and validate the model. The predicted background complexity ratings from the model were consistent with those from the surveys. It is believed that this model can effectively rate nighttime images for background complexity with respect to overhead guide and street name signs, and that those ratings can be used to more accurately assess the visibility of the signs.

Figure B-6. Results of validation by LOOCV.

Suggestions for future research include extending the work to measure other important characteristics of nighttime images, such as 2D spectrum information and the relative localization of traffic signs. The model should also be validated for other types of signs, such as warning and regulatory signs. It is further suggested to develop the technique by automating the detection of all signs in an image and rating the background complexity of each individually, and to collect new and more comprehensive image samples to further train and validate the proposed model.

Figure B-7. Performance of the proposed model on the validation datasets.

References

1. Ge, H., Y. Zhang, J. D. Miles, and P. J. Carlson. "Assessment of Background Complexity of Overhead Guide Signs." In Transportation Research Record: Journal of the Transportation Research Board, No. 2384, Transportation Research Board of the National Academies, Washington, DC, 2015, pp. 74–84.
2. Manual on Uniform Traffic Control Devices. Federal Highway Administration, U.S. Department of Transportation, Washington, DC, 2009.

3. Mace, D. J., R. B. King, and G. W. Dauber. Sign Luminance Requirements for Various Background Complexities. FHWA-RD-85-056, Federal Highway Administration, U.S. Department of Transportation, Washington, DC, 1985.
4. Okawa, Y. "A Complexity Measure for Colored Pictures in Commercial Design." Computer Graphics and Image Processing, Vol. 17, No. 4, 1981, pp. 345–61.
5. Mario, I., M. Chacon, D. Alma, and S. Corral. "Image Complexity Measure: A Human Criterion Free Approach." 2005 Annual Meeting of the North American Fuzzy Information Processing Society, 2005, pp. 241–46.
6. Cardaci, M., V. Di Gesù, M. Petrou, and M. Tabacchi. "On the Evaluation of Images Complexity: A Fuzzy Approach." Lecture Notes in Computer Science, Vol. 3849, 2006, pp. 305–11.
7. Rigau, J., M. Feixas, and M. Sbert. "An Information-Theoretic Framework for Image Complexity." In Proceedings of Computational Aesthetics, 2005, pp. 177–84.
8. Perkio, J., and A. Hyvarinen. "Modelling Image Complexity by Independent Component Analysis, with Application to Content-Based Image Retrieval." In Proceedings of the 19th International Conference on Artificial Neural Networks: Part II, 2009, pp. 704–14.
9. Patel, L. N., and P. O. B. Holt. "Testing a Computational Model of Visual Complexity in Background Scenes." Advanced Concepts for Intelligent Systems, 2000, pp. 119–23.
10. Klinger, A., and N. A. Salingaros. "A Pattern Measure." Environment and Planning B: Planning and Design, Vol. 27, No. 4, 2000, pp. 537–47.
11. Alan, R. P. II, and R. N. Strickland. "Image Complexity Metrics for Automatic Target Recognizers." In Proceedings of the Automatic Target Recognition System and Technology Conference, Naval Surface Warfare Center, Silver Spring, MD, 1990.
12. Canny, J. "A Computational Approach to Edge Detection." IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 8, No. 6, 1986, pp. 679–98.
13. Kemp, F. "Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences." Journal of the Royal Statistical Society: Series D, Vol. 52, 2003, p. 691.
14. Oja, H. Multivariate Nonparametric Methods with R. Springer, New York, 2010, pp. 183–200.
15. Breiman, L., and J. Friedman. "Predicting Multivariate Responses in Multiple Linear Regression." Journal of the Royal Statistical Society: Series B, Vol. 59, 1997, pp. 3–37.
16. Denzil, G. F., M. Michael, and B. Robert. "Properties of Ordinary Least Squares Estimators in Regression Models with Nonspherical Disturbances." Journal of Econometrics, Vol. 54, No. 1–3, 1992, pp. 321–34.
17. Darlington, R. B. "Multiple Regression in Psychological Research and Practice." Psychological Bulletin, Vol. 69, 1968, pp. 161–82.


