Samuel Kim

Data Scientist
Signal Processing


Publications

  1. Samuel Kim, Fabio Valente, Maurizio Filippone, and Alessandro Vinciarelli, "Predicting Continuous Conflict Perception with Bayesian Gaussian Processes," IEEE Transactions on Affective Computing, May 2014.
  2. Jung-Won Lee, Samuel Kim, and Hong-Goo Kang, "Detecting pathological speech using contour modeling of harmonic-to-noise ratio," IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Florence, Italy, May 2014.
  3. Samuel Kim, "Detecting prominent content in unstructured audio using intensity-based attack/release patterns," Journal of The Institute of Electronics Engineers of Korea, Dec. 2013.
  4. Jung-Won Lee, Hong-Goo Kang, Samuel Kim, and Yoonjae Lee, "Detecting pathological speech using local and global characteristics of harmonic-to-noise ratio," in Asia Pacific Signal and Information Processing Association (APSIPA) annual summit and conference, Oct. 2013.
  5. Samuel Kim, Panayiotis Georgiou, and Shrikanth Narayanan, "Annotation and classification of political advertisements," Proc. INTERSPEECH 2013, Lyon, France, Aug. 2013.
  6. Samuel Kim, Fabio Valente, and Alessandro Vinciarelli, "Annotation and detection of conflict escalation in political debates," Proc. INTERSPEECH 2013, Lyon, France, Aug. 2013.
  7. B. Schuller, S. Steidl, A. Batliner, A. Vinciarelli, K. Scherer, F. Ringeval, M. Chetouani, F. Weninger, F. Eyben, E. Marchi, and S. Kim, "The Interspeech 2013 Computational Paralinguistics Challenge: Social Signals, Conflict, Emotion, Autism," Proc. INTERSPEECH 2013, Lyon, France, Aug. 2013.
  8. Samuel Kim, Panayiotis Georgiou, and Shrikanth Narayanan, "On-line genre classification of TV programs using audio content," IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Vancouver, Canada, May 2013.
  9. Samuel Kim, Panayiotis Georgiou, and Shrikanth Narayanan, "Latent Acoustic Topic Models for Unstructured Audio Classification," APSIPA Transactions on Signal and Information Processing, Vol. 1, Dec. 2012.
  10. Samuel Kim, Maurizio Filippone, Fabio Valente, and Alessandro Vinciarelli, “Predicting the Conflict Level in Television Political Debates: an Approach Based on Crowdsourcing, Nonverbal Communication and Gaussian Processes,” in ACM Multimedia, Oct. 2012.
  11. Samuel Kim, Sree Harsha Yella, and Fabio Valente, "Automatic detection of conflict escalation in spoken conversations," in INTERSPEECH, Sep. 2012.
  12. Fabio Valente, Samuel Kim, and Petr Motlicek, "Annotation and Recognition of Personality Traits in Spoken Conversations from the AMI Meetings Corpus," in INTERSPEECH, Sep. 2012.
  13. Samuel Kim, Panayiotis G. Georgiou, and Shrikanth Narayanan, "Supervised acoustic topic model with a consequent classifier for unstructured audio classification," in Workshop on Content-Based Multimedia Indexing (CBMI), Jun. 2012.
  14. Alessandro Vinciarelli, Samuel Kim, Fabio Valente, and Hugues Salamin, "Collecting data for socially intelligent surveillance and monitoring approaches: the case of conflict in competitive conversations," in International Symposium on Communications, Control, and Signal Processing (ISCCSP), May 2012.
  15. Samuel Kim, Fabio Valente, and Alessandro Vinciarelli, "Automatic detection of conflicts in spoken conversations: ratings and analysis of broadcast political debates," in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Mar. 2012.
  16. Samuel Kim, Ming Li, Sangwon Lee, Urbashi Mitra, Donna Spruijt-Metz, Murali Annavaram, and Shrikanth Narayanan, "High-level descriptions of real-life physical activities using latent topic modeling of multimodal sensor signals," in IEEE Engineering in Medicine and Biology Society (EMBC), Aug. 2011.
  17. Samuel Kim, “Contextual modeling of audio signals toward information retrieval,” Ph.D. Dissertation, University of Southern California, Dec. 2010.
  18. Samuel Kim, Panayiotis G. Georgiou, and Shrikanth Narayanan, "Supervised acoustic topic model for unstructured audio information retrieval," in Asia Pacific Signal and Information Processing Association (APSIPA) annual summit and conference, Dec. 2010.
  19. Samuel Kim, Shiva Sundaram, Panayiotis Georgiou, and Shrikanth Narayanan, "An N-gram model for unstructured audio signals toward information retrieval," in IEEE International Workshop on Multimedia Signal Processing (MMSP), Oct. 2010.
  20. Daniel Bone, Samuel Kim, Sungbok Lee, and Shrikanth Narayanan, "A Study of Intra-Speaker and Inter-Speaker Affective Variability using Electroglottograph and Inverse Filtered Glottal Waveforms," in INTERSPEECH, Sep. 2010.
  21. Samuel Kim, Shiva Sundaram, Panayiotis Georgiou, and Shrikanth Narayanan, “Acoustic stopwords for unstructured audio information retrieval,” in European Signal Processing Conference (EUSIPCO), Aug. 2010.
  22. Samuel Kim, Panayiotis Georgiou, Shrikanth Narayanan, and Shiva Sundaram, "Using naïve text queries for robust audio information retrieval system," in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Mar. 2010.
  23. Samuel Kim, Shiva Sundaram, Panayiotis Georgiou, and Shrikanth Narayanan, “Audio scene understanding using topic models,” in Neural Information Processing Systems (NIPS) Workshop on Applications for Topic Models: Text and Beyond, Dec. 2009.
  24. Samuel Kim, Shrikanth Narayanan, and Shiva Sundaram, "Acoustic Topic Model for Audio Information Retrieval," in IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), Oct. 2009.
  25. Samuel Kim, Panayiotis G. Georgiou, and Shrikanth Narayanan, "A robust harmony structure modeling scheme for Classical music opus identification," in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Apr. 2009.
  26. Samuel Kim and Shrikanth Narayanan, "Dynamic chroma feature vectors with applications to cover song identification," in IEEE Multimedia Signal Processing (MMSP) Workshop, Oct. 2008.
  27. Carlos Busso, Murtaza Bulut, Chi-Chun Lee, Abe Kazemzadeh, Emily Mower, Samuel Kim, Jeannette Chang, Sungbok Lee, and Shrikanth Narayanan, "IEMOCAP: Interactive emotional dyadic motion capture database," Language Resources and Evaluation, 2008.
  28. Kyu Jeong Han, Samuel Kim, and Shrikanth Narayanan, "Strategies to Improve the Robustness of Agglomerative Hierarchical Clustering under Data Source Variation for Speaker Diarization," IEEE Transactions on Audio, Speech, and Language Processing, pp. 1590-1601, Nov. 2008.
  29. Samuel Kim, Erdem Unal, and Shrikanth Narayanan, "Music fingerprint extraction for classical music cover song identification," in International Conference on Multimedia and Expo (ICME), pp. 1261-1264, May 2008.
  30. Kyu Jeong Han, Samuel Kim, and Shrikanth S. Narayanan, "Robust speaker clustering strategies to data source variation for improved speaker diarization," in IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), Dec. 2007.
  31. Samuel Kim, Sungbok Lee, and Shrikanth Narayanan, "On voicing activity under the control of emotion and loudness," in 154th Meeting of the Acoustical Society of America (ASA), Nov. 2007.
  32. Samuel Kim, Panayiotis G. Georgiou, Sungbok Lee, and Shrikanth Narayanan, “Real-time Emotion Detection System using Speech: Multi-modal Fusion of Different Timescale Features,” in Multimedia Signal Processing (MMSP) Workshop, Oct. 2007.
  33. Samuel Kim, Sungwan Yoon, Thomas Eriksson, Hong-Goo Kang, and Dae Hee Youn, "A noise-robust pitch synchronous feature extraction algorithm for speaker recognition systems," in Proc. INTERSPEECH'05, pp. 2029-2032, Sep. 2005.
  34. Thomas Eriksson, Samuel Kim, Hong-Goo Kang, and Chungyoung Lee, “An information-theoretic perspective on feature selection in speaker recognition," IEEE Signal Processing Letters, Vol. 12, No. 7, pp. 500-503, Jul. 2005.
  35. Samuel Kim, Jeong-Tae Seo, and Hong-Goo Kang, "Quantitative measure of speaker specific information in human voice: from the perspective of information theoretic approach," Journal of the Acoustical Society of Korea, vol. 24, Mar. 2005.
  36. Samuel Kim, "On feature extraction algorithms for GMM-based text independent speaker recognition systems," Master's thesis, Yonsei University, Feb. 2005.
  37. Sungwan Yoon, Samuel Kim, Hong-Goo Kang, and Dae Hee Youn, "An experiment of supplementary feature for speaker verification in constrained environment," in Proc. Korean Institute of Communication Sciences, vol. 29, p. 206, Nov. 2004.
  38. Bongjin Lee, Samuel Kim, and Hong-Goo Kang, "Speaker recognition based on transformed line spectral frequencies," in Proc. International Symposium on Intelligent Signal Processing and Communication Systems, pp. 177-180, Nov. 2004.
  39. Samuel Kim, Thomas Eriksson, and Hong-Goo Kang, "On the time variability of vocal tract for speaker recognition," in Proc. International Conference on Spoken Language Processing (INTERSPEECH'04), vol. III, pp. 2377-2380, Oct. 2004.
  40. Thomas Eriksson, Samuel Kim, Hong-Goo Kang, and Chungyoung Lee, "Theory for speaker recognition over IP," in Proc. International Conference on Spoken Language Processing (INTERSPEECH'04), vol. I, pp. 625-628, Oct. 2004.
  41. Sungwan Yoon, Samuel Kim, and Hong-Goo Kang, "Preliminary experiment searching for the supplementary feature in speaker verification," in Proc. ISISP, pp. 189-192, Jul. 2004.
  42. Samuel Kim, Thomas Eriksson, Hong-Goo Kang, and Dae Hee Youn, "A pitch synchronous feature extraction method for speaker recognition," in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), vol. I, pp. 405-408, May 2004.
  43. Thomas Eriksson, Samuel Kim, and Hong-Goo Kang, “On feature extraction in speaker recognition," Technical Report 485L, Department of Signals and Systems, Chalmers University of Technology, Gothenburg, Sweden, Apr. 2004.