
Summary

  • Color information can improve face perception
  • Several cortical regions contain single neurons whose responses are influenced by color and shape simultaneously
  • Cortical regions supporting color perception and face perception lie close together
  • ADC found literature supporting behavioral improvements in two domains: face detection and gender recognition
  • Color may aid emotion recognition, but ADC found only computer vision literature supporting this

The references listed below support a role for color information in face perception, though how important that role is depends strongly on the perceptual task in question. A long literature shows that ventral occipitotemporal cortical regions from V1 onward contain single neurons that respond to specific combinations of shape and color features; only two such references are included here. Insofar as there exists a “color region” of cortex, it lies just posterior and medial to the “face regions.” Color aids face detection, especially against naturalistic backgrounds, for both human observers and computer vision algorithms. Color also appears to aid gender discrimination, although the specific color/gender associations remain unsettled. Little research has examined the role of color in recognizing facial expressions of emotion, but at least one computer vision article found a use for color in that domain.

FIRST OF ALL

Refer to this very comprehensive list of computer vision articles:

http://www.visionbib.com/bibliography/people902.html#Finding%20Faces%20by%20Color%20Features

For a list of computer vision face image data sets, navigate to the page below and skip to the section “Dataset, Faces.” Note that this is not a complete list.

http://datasets.visionbib.com/index.html

See also VNLab's list of face image data sets available online:

Face data sets

Close relationship between cortical regions for color and face perception

  • Clark, V. P., Parasuraman, R., Keil, K., Kulansky, R., Fannon, S., Maisog, J. M., … Haxby, J. V. (1997). Selective attention to face identity and color studied with fMRI. Human Brain Mapping, 5(4), 293–297. doi:10.1002/(SICI)1097-0193(1997)5:4<293::AID-HBM15>3.0.CO;2-F
  • Tanaka, K., Saito, H., Fukada, Y., & Moriya, M. (1991). Coding visual images of objects in the inferotemporal cortex of the macaque monkey. Journal of Neurophysiology, 66(1), 170–189.

Color aids face detection

Behavioral

  • Yip, A. W., & Sinha, P. (2002). Contribution of color to face recognition. Perception, 31(8), 995–1003. doi:10.1068/p3376

NOTE: at least one article hints that if individuals can be distinguished based solely on color information, then ordinary, holistic face recognition processes might not be used:

  • McKone, E., & Yovel, G. (2009). Why does picture-plane inversion sometimes dissociate perception of features and spacing in faces, and sometimes not? Toward a new theory of holistic processing. Psychonomic Bulletin & Review, 16(5), 778–797. doi:10.3758/PBR.16.5.778

Computational

  • Maglogiannis, I., Vouyioukas, D., & Aggelopoulos, C. (2009). Face detection and recognition of natural human emotion using Markov random fields. Personal and Ubiquitous Computing, 13(1), 95–101. doi:10.1007/s00779-007-0165-0
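
For orientation, here is a minimal sketch of the skin-color segmentation stage with which most color-based face detectors (see the visionbib list above) begin. It is illustrative only and is not the Markov random field method of the paper above: the YCrCb chroma bounds below are one commonly used heuristic, and a real detector would tune them and add a shape-verification stage.

  # A minimal sketch of generic skin-color segmentation, the usual first
  # stage of color-based face detection. NOT the MRF method of
  # Maglogiannis et al.; the YCrCb bounds are a common heuristic that
  # would need tuning for real data.
  import cv2
  import numpy as np

  def skin_mask(bgr_image: np.ndarray) -> np.ndarray:
      """Return a binary mask (0/255) of likely skin pixels."""
      ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
      # Chroma bounds (Cr, Cb) that bracket most skin tones; luma (Y) is
      # left wide open, so the rule is fairly illumination-tolerant.
      lower = np.array([0, 133, 77], dtype=np.uint8)
      upper = np.array([255, 173, 127], dtype=np.uint8)
      mask = cv2.inRange(ycrcb, lower, upper)
      # Morphological cleanup: drop speckle, then fill small holes.
      kernel = np.ones((5, 5), np.uint8)
      mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
      mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
      return mask

Candidate face regions are then the large connected components of the mask, which a later shape or feature stage verifies; this is what makes color especially useful against naturalistic backgrounds.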

Color aids gender recognition

Behavioral

  • Hill, H., Bruce, V., & Akamatsu, S. (1995). Perceiving the sex and race of faces: the role of shape and colour. Proceedings. Biological Sciences / The Royal Society, 261(1362), 367–373. doi:10.1098/rspb.1995.0161
  • Tarr, M. J., Kersten, D., Cheng, Y., & Rossion, B. (2001). It’s Pat! Sexing faces using only red and green. Journal of Vision, 1(3), 337. doi:10.1167/1.3.337 (A sketch of this red/green-ratio cue follows this list.)
  • The segmental structure of faces and its use in gender recognition. (n.d.). Retrieved August 5, 2014, from http://repository.cmu.edu/cgi/viewcontent.cgi?article=1392&context=psychology
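
The Tarr et al. abstract (reproduced in the detailed references below) reports that the overall red/green energy ratio alone sexes adult Caucasian faces at roughly 75% correct. Below is a minimal sketch of that cue, assuming an RGB image and a precomputed face mask; the decision threshold is a hypothetical placeholder, since the paper's fitted value is not given here.

  # Illustrative sketch of the red/green-ratio cue from Tarr et al. (2001):
  # male faces skew redder, female faces greener, so a single ratio plus a
  # threshold supports above-chance sex classification.
  import numpy as np

  THRESHOLD = 1.0  # hypothetical placeholder; would be fit on a labeled face set

  def red_green_ratio(rgb_face: np.ndarray, mask: np.ndarray) -> float:
      """Mean red energy over mean green energy within the face mask.

      Assumes RGB channel order (not OpenCV's BGR)."""
      pixels = rgb_face[mask.astype(bool)]
      red = pixels[:, 0].astype(float).mean()
      green = pixels[:, 1].astype(float).mean()
      return red / (green + 1e-9)  # epsilon guards against division by zero

  def classify_sex(rgb_face: np.ndarray, mask: np.ndarray) -> str:
      return "male" if red_green_ratio(rgb_face, mask) > THRESHOLD else "female"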

Color aids emotion recognition

Computational

  • Maglogiannis, I., Vouyioukas, D., & Aggelopoulos, C. (2009). Face detection and recognition of natural human emotion using Markov random fields. Personal and Ubiquitous Computing, 13(1), 95–101. doi:10.1007/s00779-007-0165-0

Detailed information for the references

Sorted by author and date.

Choi, J.-Y., Ro, Y.-M., & Plataniotis, K. N. (2009). Color face recognition for degraded face images. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 39(5), 1217–1230. doi:10.1109/TSMCB.2009.2014245

Abstract: In many current face-recognition (FR) applications, such as video surveillance security and content annotation in a Web environment, low-resolution faces are commonly encountered and negatively impact reliable recognition performance. In particular, the recognition accuracy of current intensity-based FR systems can drop off significantly if the resolution of facial images is smaller than a certain level (e.g., less than 20 × 20 pixels). To cope with low-resolution faces, we demonstrate that the facial color cue can significantly improve recognition performance compared with intensity-based features. The contribution of this paper is twofold. First, a new metric called “variation ratio gain” (VRG) is proposed to prove theoretically the significance of the color effect on low-resolution faces within well-known subspace FR frameworks; VRG quantitatively characterizes how color features affect recognition performance with respect to changes in face resolution. Second, we conduct extensive performance evaluation studies to show the effectiveness of color on low-resolution faces. In particular, more than 3,000 color facial images of 341 subjects, collected from three standard face databases, are used to perform comparative studies of the color effect on face resolutions likely to be confronted in real-world FR systems. The effectiveness of color on low-resolution faces has been tested on three representative subspace FR methods: eigenfaces, fisherfaces, and the Bayesian method. Experimental results show that color features decrease the recognition error rate by at least an order of magnitude over intensity-driven features when low-resolution faces (25 × 25 pixels or less) are applied to the three FR methods.

Clark, V. P., Parasuraman, R., Keil, K., Kulansky, R., Fannon, S., Maisog, J. M., Ungerleider, L. G., & Haxby, J. V. (1997). Selective attention to face identity and color studied with fMRI. Human Brain Mapping, 5(4), 293–297. doi:10.1002/(SICI)1097-0193(1997)5:4<293::AID-HBM15>3.0.CO;2-F

Abstract: Cortical areas associated with selective attention to the color and identity of faces were located using functional magnetic resonance imaging (fMRI). Six subjects performed tasks that required selective attention to face identity or color similarity using the same color-washed face stimuli. Performance of the color attention task, but not the face attention task, was associated with a region of activity in the collateral sulcus and nearby regions of the lingual and fusiform gyri. Performance of both tasks was associated with a region of activity in ventral occipitotemporal cortex that was lateral to the color-responsive area and had a greater spatial extent. These fMRI results converge with results obtained from PET and ERP studies to demonstrate similar anatomical locations of functional areas for face and color processing across studies.

Diplaros, A., Gevers, T., & Patras, I. (2006). Combining color and shape information for illumination-viewpoint invariant object recognition. IEEE Transactions on Image Processing, 15(1), 1–11. doi:10.1109/TIP.2005.860320

Abstract: In this paper, we propose a new scheme that merges color- and shape-invariant information for object recognition. To obtain robustness against photometric changes, color-invariant derivatives are computed first. Color invariance is an important aspect of any object recognition scheme, as color changes considerably with variation in illumination, object pose, and camera viewpoint. These color-invariant derivatives are then used to obtain similarity-invariant shape descriptors. Shape invariance is equally important, as, under a change in camera viewpoint and object pose, the shape of a rigid object undergoes a perspective projection on the image plane. The color and shape invariants are then combined in a multidimensional color-shape context which is subsequently used as an index. As the indexing scheme makes use of a color-shape invariant context, it provides a highly discriminative information cue robust against varying imaging conditions. The matching function of the color-shape context allows for fast recognition, even in the presence of object occlusion and cluttering. The experimental results show that the method recognizes rigid objects with high accuracy in 3-D complex scenes and is robust against changing illumination, camera viewpoint, object pose, and noise.

Hill, H., Bruce, V., & Akamatsu, S. (1995). Perceiving the sex and race of faces: The role of shape and colour. Proceedings of the Royal Society B: Biological Sciences, 261(1362), 367–373. doi:10.1098/rspb.1995.0161

Abstract: Theories of object recognition have emphasized the information conveyed by shape information, whereas theories of face recognition have emphasized properties of superficial features. In the experiments reported here we used novel technology to investigate the relative contributions of shape and superficial colour information to simple categorization decisions about the sex and 'race' of faces. The results show that both shape and colour provide useful information for these decisions; shape information was particularly useful for race decisions, while colour dominated sex decisions. When both sources of information were combined, the dominant source depended on viewpoint, with angled views emphasizing the contribution of shape and the full-face view that of colour. The results are discussed within the context of theories of face recognition, and their implications for telecommunication applications are considered.

Maglogiannis, I., Vouyioukas, D., & Aggelopoulos, C. (2009). Face detection and recognition of natural human emotion using Markov random fields. Personal and Ubiquitous Computing, 13(1), 95–101. doi:10.1007/s00779-007-0165-0

Abstract: This paper presents an integrated system for emotion detection that takes into account the fact that emotions are most widely represented with eye and mouth expressions. The proposed system uses color images and consists of three modules. The first module implements skin detection, using Markov random field models for image segmentation and skin detection; a set of colored images containing human faces served as the training set. The second module is responsible for eye and mouth detection and extraction, using the HLV color space of the specified eye and mouth regions. The third module detects the emotions pictured in the eyes and mouth, using edge detection and measurement of the gradient of the eye and mouth regions. The paper provides results from the system's application, along with proposals for further research.

McKone, E., & Yovel, G. (2009). Why does picture-plane inversion sometimes dissociate perception of features and spacing in faces, and sometimes not? Toward a new theory of holistic processing. Psychonomic Bulletin & Review, 16(5), 778–797. doi:10.3758/PBR.16.5.778

Abstract: Classically, it has been presumed that picture-plane inversion primarily reduces sensitivity to spacing/configural information in faces (distance between the locations of the major features) and has little effect on sensitivity to local feature information (e.g., eye shape or color). Here, we review 22 published studies relevant to this claim. The data show that the feature inversion effect varied substantially across studies as a function of the following factors: whether the feature change was shape only or included color/brightness, the number of faces in the stimulus set, and whether the feature was in facial context. For shape-only changes in facial context, feature inversion effects were as large as typical spacing inversion effects. Small feature inversion effects occurred only when a task could be efficiently solved by visual-processing areas outside whole-face coding. The results argue that holistic/configural processing for upright faces integrates exact feature shape and the spacing between blobs. We describe two plausible approaches to this process.

Tanaka, K., Saito, H., Fukada, Y., & Moriya, M. (1991). Coding visual images of objects in the inferotemporal cortex of the macaque monkey. Journal of Neurophysiology, 66(1), 170–189.

Abstract: The inferotemporal cortex (IT) has been thought to play an essential and specific role in visual object discrimination and recognition, because a lesion of IT in the monkey results in a specific deficit in learning tasks that require these visual functions. To understand the cellular basis of the object discrimination and recognition processes in IT, we determined the optimal stimulus of individual IT cells in anesthetized, immobilized monkeys. In the posterior one-third or one-fourth of IT, most cells could be activated maximally by bars or disks simply by adjusting the size, orientation, or color of the stimulus. In the remaining anterior two-thirds or three-quarters of IT, most cells required more complex features for their maximal activation. The critical feature for the activation of individual anterior IT cells varied from cell to cell: a complex shape in some cells and a combination of texture or color with contour-shape in other cells. Cells that showed different types of complexity for the critical feature were intermingled throughout anterior IT, whereas cells recorded in single penetrations showed critical features that were related in some respects. Generally speaking, the critical features of anterior IT cells were moderately complex and can be thought of as partial features common to images of several different natural objects. The selectivity for the optimal stimulus was rather sharp, although not absolute. We thus propose that, in anterior IT, images of objects are coded by combinations of active cells, each of which represents the presence of a particular partial feature in the image.

Tarr, M. J., Kersten, D., Cheng, Y., & Rossion, B. (2001). It's Pat! Sexing faces using only red and green. Journal of Vision, 1(3), 337. doi:10.1167/1.3.337

Abstract: The reflectance properties of facial hair and skin across sexes produce different degrees of red and green in male (more red) and female (more green) faces. Consequently, measuring the overall ratio of red/green energy in a face is sufficient for accurate sex classification. The optimal red/green threshold for discriminating 200 Caucasian faces by sex yielded an accuracy rate of 75% correct with a d′ of 2.0. Faces had no makeup and were edited to remove all hair around the head. A second set of Caucasian faces produced similar results. Preliminary analyses suggest that the red/green ratio is also sufficient for sex classification of Asian and African faces. In contrast, pre-pubescent Caucasian faces were classified at chance. Thus, the red/green difference between males and females may be attributed to post-puberty sexual dimorphism in the spectral properties of human faces. We compared these computational findings with the human ability to discriminate male faces from female faces. To prevent observers from relying on shape information useful for sex classification, the 200 Caucasian faces were dramatically blurred using a Gaussian filter. Faces were presented for 100 ms, and observers simply judged whether each face was male or female. For female faces there was a -0.66 correlation between red/green ratio and accuracy in sex classification; for male faces the correlation was +0.42. Reinforcing the relationship between our model and human performance, observers were at chance in their ability to discriminate pre-pubescent faces. Our results may provide a mechanism for rapid sex classification through the differential response of early color-opponent processes to male and female faces. In sum, red/green energy appears to be a reliable cue for fast and accurate discrimination of faces by sex.

Torres, L., Reutter, J. Y., & Lorente, L. (1999). The importance of the color information in face recognition. Proceedings of the 1999 International Conference on Image Processing (ICIP 99), 3, 627–631. doi:10.1109/ICIP.1999.817191

Abstract: A common feature found in practically all technical approaches proposed for face recognition is the use of only the luminance information associated with the face image. One may wonder whether this is due to the low importance of the color information in face recognition or to other, less technical reasons such as the unavailability of color image databases. Motivated by this reasoning, we have performed a variety of tests using a global eigen approach developed previously, which has been modified to cope with color information. Our results show that the use of color information embedded in an eigen approach improves the recognition rate when compared to the same scheme using only the luminance information.

Yip, A. W., & Sinha, P. (2002). Contribution of color to face recognition. Perception, 31(8), 995–1003. doi:10.1068/p3376

Abstract: One of the key challenges in face perception lies in determining how different facial attributes contribute to judgments of identity. This study focuses on the role of color cues. Although color appears to be a salient attribute of faces, past research has suggested that it confers little recognition advantage for identifying people. Using 37 subjects (aged 18–40 years) with normal or corrected-to-normal vision, the authors report experimental results suggesting that color cues do play a role in face recognition, and that their contribution becomes evident when shape cues are degraded. Under such conditions, recognition performance with color images is significantly better than with gray-scale images. The results also indicate that the contribution of color may lie not so much in providing diagnostic cues to identity as in aiding low-level image-analysis processes such as segmentation.

The segmental structure of faces and its use in gender recognition. (n.d.). Retrieved August 5, 2014, from http://repository.cmu.edu/cgi/viewcontent.cgi?article=1392&context=psychology
