Text-centered versus multimodal analysis of Instant Messaging conversation (Volume 5, 2008)


Introduction: Theoretical Background and Research Questions

Linguistic analyses of computer-mediated communication (CMC) are generally limited to text for methodological reasons. Studies that focus on logs of mediated texts separate messages from the physical context of their production and reception; as a result, they often neglect the situated dimension of discursive CMC exchanges and the multimodal nature of these communicative activities. Furthermore, the methodological choice to focus on text logs is related to the questions raised in the studies. CMC studies in general have tended to focus on "online behaviors": What is analyzed is what we see on the screen. For example, linguists deal with the features of "digital discourse" (Collot & Belmore, 1996; Crystal, 2001), sociolinguists and sociologists emphasize the mechanisms of virtual community formation (Paolillo, 1999; Smith & Kollock, 1999), and social psychologists examine the emergence of group norms in cyberspace (Postmes, Spears, & Lea, 2000). According to Jones (2005a,b), these kinds of studies give the impression that interaction takes place in a virtual space with no connection to the physical space in which participants are operating their computers.

However, the physical context in which CMC takes place may have an impact on the way interaction is managed. If one considers CMC as situated human interaction (Goodwin, 2000; Relieu, 2005), it seems necessary to go "beyond the screen" (Jones, 2001) and to adopt a multimodal methodology, by which we mean recording and observing not only the textual production of the participants but also their postures, gestures, facial expressions, and reading/writing activities. Such an approach would focus on computer-mediated communication not only as a textual product, but also as an interactional process.

Through the analysis of the nonverbal behaviors of participants involved in an instant messaging session, this study¹ aims to demonstrate the methodological value of such an approach. Several questions will illustrate the benefits of a multimodal analysis.

The first question asks what types of gestures are more common: ergonomic (or manipulative, i.e., task-oriented gestures), interactional (which permit inter-synchronization), expressive (which indicate affect), symbolic (emblematic), or referential (which represent specific referents and semantic contents) (Ekman & Friesen, 1969; Kendon, 1970; Nehaniv, 2005; Tobin, 2000)? Description of gestures can show that participants are involved not only in a solitary task of reading/writing digital texts but also in a dialogic activity, in which they engage in exchanges with real recipients.

Second, can the kinesic behaviors of online participants be considered as indicating the degree to which this dialogic activity is conversational and interactional? Whereas textual logs of written discussion display a clear dichotomy between sender and receiver without mutual or simultaneous interdependency, the analysis of kinesic behaviors reveals that an instant messaging discussion can be seen as an interactional achievement (Schegloff, 1982). For example, feedback phenomena occur when a participant reads a post. Gestures also occur that may indicate inter-synchronization and turn-taking phenomena.

Similarly, to what extent can the analysis of kinesic behaviors permit a better understanding of the structural organization of a discussion, including turn-taking? For example, analyzing reading and writing activities shows that the linear transcripts of the discussion generated by computer software hide overlap phenomena.

Last, can the study of the emotional expressivity of CMC be enriched through the description of kinesic behaviors? For example, how are facial expressions linked to such textual devices as smileys, acronyms of emotion, and expressive punctuation? The divergence that can be observed between facial expressions and these textual devices shows the limits of a log-centered analysis of the emotional dimensions of CMC.


Methodology

The study reported in this article is based on semi-natural data collected in June 2006. Students at the University of Technology of Troyes were asked to take part in a study about "cyberlanguage." Four dyads were formed and invited to discuss several topics provided by the researchers (narrations of positive/negative events, argumentative discussions about their training) for 30 minutes. These discussions took place in two separate rooms of the Tech-CICO Laboratory.

Students communicated in French using Microsoft's MSN Messenger™, which was limited to textual messages and graphical smileys; all of the students were regular users of this program. The conversations did not involve a webcam, and there was no other visual channel between the participants; thus the students could not see each other.

The students agreed to be videotaped. For this analysis, the instant messaging discussions of the four dyads were recorded using three video sources: the computer screen, the face, and the body of the subjects (Figures 1 and 2).

Figure 1. Source 1 (computer screen) and source 2 (face)

Figure 2. Source 3 (body)


Classification of Kinesic Behaviors

A systematic observation of the kinesic behaviors of the participants yields several findings. Unsurprisingly, participants perform kinesic behaviors that are linked to human-computer interaction: typing, scrolling, handling the keyboard and mouse, looking at the screen, etc. At the same time, not all gestures relate to using a computer.

Expressive gestures can be observed—for example, smiles during typing or reading messages. As these facial expressions are rare in human-computer interaction (Tobin, 2000), one can consider that, in the present data, they play a particular role in the dialogic activity. Moreover, few symbolic or referential gestures were observed. These observations suggest that the gestures and facial expressions that are produced by the participants are not necessarily related to the tasks of typing and reading. Rather, inter-synchronization gestures and expressive behaviors seem to be related to the dialogic dimension of the activity. This phenomenon may seem quite banal (instant messaging is unquestionably a communication tool), but it is, in fact, rather paradoxical. Facial expressions and interactional gestures cannot play any role in the communication between the participants, because there is no mutual visual contact. In other words, synchronization and expression of emotions by nonverbal means can be at most a communicative intention; it cannot have any efficacy.

Some kinesic behaviors are not easy to classify. For example, position shifts forward and back can be seen as ergonomic gestures (related to the phases of typing and reading) but also as interactional gestures, i.e., gestures that could be dedicated to inter-synchronization between the two participants (as in face-to-face discussions) or between the participant and his or her computer, as in human-computer interaction (Tobin, 2000).

Our hypothesis is that these kinesic behaviors reveal the ways in which the participants define the communicative activity in which they are taking part. One can wonder if this frame (Goffman, 1974) is closer to written or to face-to-face communication. In other words, to what extent is instant messaging discussion an interactional achievement?

Kinesic Behaviors and Engagement in Communicative Activity

The kinesic behaviors described above show that participants are engaged simultaneously in human-computer interaction and communicative activity. Analyzing these markers of engagement allows us to identify the nature of this communicative activity and its degree of interactivity.

According to many researchers (Rintel, Mulholland, & Pittam, 2001; Schulze, 1999; Werry, 1996), discussions through chat systems such as instant messaging or Internet Relay Chat closely resemble conversational exchanges, mainly because of the synchronous mode of transmission of messages. For example, Beaudoin (2002) asserts that the rhythm of exchanges in real-time computer-mediated communication connects this kind of CMC to oral conversation.

In principle, however, this criterion is insufficient to consider an instant messaging discussion as an interactional achievement (Schegloff, 1982). The main characteristic of an interaction is that the participants' behaviors are interdependent, i.e., mutually and simultaneously determined (Kerbrat-Orecchioni, 1990). Synchronous chat discussions do not imply mutual and simultaneous determination between participants. For example, Fornel (2004) shows that the turn-taking system in IRC is not based on a strict mutual dependence between turns, insofar as self-selection prevails over the rule of next-speaker selection (this analysis is also relevant for instant messaging). Furthermore, Kerbrat-Orecchioni (2005) suggests that CMC tools such as electronic mail enable dialogue but not interaction, because they do not permit immediate feedback.

These claims are indeed consistent with observations of the textual logs of IRC, email, or instant messaging: Through their textual productions, participants are clearly engaged in a written dialogue. However, the analysis of kinesic behaviors reveals that instant messaging conversations also have an interactional dimension, even if an instant messaging discussion is not an interactional achievement in the strict sense of the word. In such discussions, participants ratify and synchronize with each other through phatic and feedback gestures, even when these gestures are not mutually perceptible.

For example, some kinesic behaviors play the same role in turn-taking as posture shifts do in face-to-face interaction (Duncan & Fiske, 1977): Participants prepare to take the "speaking role" by moving forward (Figure 3) or indicate that they are yielding the floor by moving back as they finish their utterances (Figure 4).

Figure 3. Moving forward while reading and before producing a message

Figure 4. Moving back after producing a message

Similarly, many facial expressions are produced by participants while they read the messages sent by the other participant. For example, a participant may smile or laugh when reading a message (Figure 5).

Figure 5. Laughing while reading a message

This kind of facial expression occurs in other reading activities (for example, smiling when reading a "traditional" letter or a novel), but it is particularly frequent in computer-mediated communication (Smith & Gorsuch, 2004). These facial reactions resemble nonverbal feedback in face-to-face communication (even if they cannot function as feedback because they are not perceptible to the other participant). In any case, the high frequency of these facial reactions in instant messaging shows that the participants are active when they are in the recipient role: They are simultaneously involved in activities of production (of nonverbal messages) and interpretation.

The phenomena of synchronization of turns between participants and nonverbal reactions underscore an interactional dimension of instant messaging discussions that is not observable through a text-centered analysis. The kinesic behaviors of instant messaging participants convey their interactional and conversational engagement, that is, an observable state of being in coordinated interaction. As in face-to-face conversation, when involved in instant messaging conversation, both participants display their engagement, either directly through words or indirectly through gestures or similar nonverbal signals (Gumperz, 1982).

It is obvious that this interactional dimension is not efficacious, because the participants cannot see each other. However, the lack of visual contact does not prevent participants from producing interactional gestures or facial expressions. In this respect, instant messaging is comparable to telephone conversation. The most important characteristic of the situation is not the absence of visibility but rather its dialogical nature. Gestures are produced when participants have the feeling that they are engaged in a dialogue (Bavelas, Gerwing, Sutton, & Prevost, 2008).

Participants are thus engaged in an interactional activity (or have the feeling of being engaged in an interaction), even if the situation is not highly interactive. In other words, for the participants, the model of the situation (Jones, 2005a) of instant messaging conversation corresponds to an interaction.

Synchronicity and Overlap

Instant messaging, like IRC, is often characterized as a form of CMC that involves synchronous interaction. This characterization is accurate if we consider the fact that all the participants are online at the same time. However, IM interaction appears to be asynchronous if we only observe and analyze the textual log on the screen. Such a text-centered analysis gives the impression that the interaction is managed on a turn-by-turn basis, with the impossibility of overlapping contributions (Herring, 1999; Hutchby, 2001). For this reason, in analyzing this phenomenon, Garcia and Jacobs (1999) refer to IRC as quasi-synchronous.

In fact, the synchronicity of the conversation can be observed only if a multimodal method is adopted. It is possible to observe a difference between the structural organization of the discussion represented visually by the linear transcript on the computer screen and the gestural and postural turn-taking phenomena manifested by the participants. Analyzing instant messaging conversation as a situated reading and writing activity shows that these linear transcripts generated by computer software hide overlap phenomena.

More precisely, the linear log does not make overlaps visible; it enforces a visual representation that provides information only about the sequential order of the messages, with two types of structure: ABAB (alternating turns) or AAAB (one contribution split into two or three turns). In contrast, analysis of the videotaped activity of the participants reveals the presence of several overlaps. These overlaps occur during typing activity but are not visible on the computer screen. Two types of overlaps can be identified:

(a) B is in the process of typing a response to A at the same time as A is sending a new message (overlap is indicated by square brackets):


The structural organization of the exchange in example (1) is not clear unless we take the overlaps into account. B produces a reaction, (B3), to (A2) immediately after reading this initiating message, but at the same time A is sending the next turn to the server. Therefore, this turn, (A3), intervenes between (A2) and (B3), which should logically constitute an adjacency pair. In a text-centered analysis, this exchange has a fuzzy structural organization and its logical structure must be reconstructed by the analyst (Herring, 1999; Panyametheekul & Herring, 2003), whereas a multimodal analysis permits a direct understanding of how the participants organized the exchange.

(b) While A is writing, B types a message but does not send it because the message of A renders it problematic:

(2) (the crossed-out part of the text was deleted by B)

A multimodal analysis permits observation of the way in which B engages in self-repair of a message (B2) when he reads the contribution sent by A (A1), which appears while he is typing (B2).

These overlaps are not really similar to overlap in face-to-face conversation. In instant messaging, overlap occurs when two writing activities happen at the same time. These simultaneous activities are not perceptible to the participants in real time (but rather only when participants receive messages while they are writing), in contrast to overlap in face-to-face conversation. However, as in face-to-face interaction, overlaps have an impact on the organization of the discussion.

Overlap is a good illustration of the differences between the results of a text-centered and a multimodal analysis. The appearance of the structural organization of the discussion depends on the kind of data that are examined: textual logs or participants' physical activities. Moreover, the synchronous nature of instant messaging conversation is revealed to be a determining factor of the management of the interaction when the physical activities of the participants, and not just their textual production, are analyzed.

Facial Expressions and Emotion

A number of studies have dealt with the socio-emotional dimension of CMC. These studies are often limited to the analysis of textual devices: graphic and typographic devices such as smileys (Marcoccia, 2000; Mourlhon-Dallies & Colin, 1995; Walther & D'Addario, 2001; Wilson, 1993), expressive punctuation (Anis, 1994), metaphors (Delfino & Manca, 2007), and self-disclosure and emotional narratives (Atifi, Gauducheau, & Marcoccia, in press).

According to several text-centered studies, smileys or "emotional acronyms" like LOL ('laughing out loud') compensate for the lack of nonverbal cues and give the recipient access to the feelings and emotions of the author (Frias, 2003; Marcoccia, 2000; Mourlhon-Dallies & Colin, 1995; Wilson, 1993). In particular, several authors assume that smileys function like nonverbal behaviors do in face-to-face interaction: They reflect people's feelings (Derks, Fischer, & Bos, 2008). For example, smileys can emphasize the tone of a message (Rezabek & Cochenour, 1998; Wilson, 1993) or clarify the emotional state of the author (Constantin, Kalyanaraman, Stavrositu, & Wagoner, 2002).

However, other authors underline the difference between nonverbal behaviors and smileys (Crystal, 2001; Marcoccia & Gauducheau, 2007; Walther & D'Addario, 2001). Smileys are always deliberate, whereas nonverbal behaviors are often involuntary. Moreover, the absence of smileys does not signal the absence of an emotion, whereas the absence of nonverbal expression raises questions about the presence of an emotion. At the same time, the presence of a smiley does not necessarily signal an experienced emotion, whereas most facial expressions are linked with an emotional experience (Ekman, 1984).

These studies all deal with the question of the analogy between the textual expression of emotion and nonverbal behavior, advancing two competing hypotheses:

  • Smileys and emotional acronyms are "textual translations" of nonverbal behaviors.
  • The analogy between textual devices and nonverbal behaviors is not relevant. Smileys and emotional acronyms are modes of emotional expression distinct from nonverbal behaviors.

A multimodal analysis permits one to transcend the comparison between textual devices and nonverbal behaviors to observe the relation between these two kinds of emotional expression. The analysis of the nonverbal behaviors of IM participants, essentially their facial expressions, allows one to examine whether, and if so how, the expressive behaviors and the production of smileys or emotional acronyms are related (Gauducheau & Marcoccia, 2007).

Analyzing the relation between graphic/textual devices for the expression of emotions and facial expressions gives prominence to two functions of the textual device:

(a) The textual device encodes a nonverbal expression. In many cases, verbal and nonverbal means express the same emotion, as in (3).

(3) (Nonverbal behavior is described in square brackets.)

In this example, the verbal and nonverbal expressions are redundant (for the analyst): The text seems to be a reliable indicator of the emotional state of the participant.

(b) The textual device constructs the emotional expression: A divergence between verbal and nonverbal is observed. Such divergence can be of two types:

(b1) An emotion is expressed in the text, but no emotion is expressed through nonverbal behavior.


In this case, the textual/graphic device displays an emotion that is not expressed nonverbally. In this example, we can make the assumption that the absence of a smile signals the absence of a positive emotion. This assumption is based on observation of the data: The association of a textual/graphic device with nonverbal behaviors is very frequent. Thus, the absence of nonverbal behavior in this example is atypical. Consequently, LOL can be seen as a controlled expression of emotion and the application of conventional politeness. A makes a joke (A1) about his competence in discussing this topic. In response, B produces an expected message, respecting the preference for agreement (Pomerantz, 1984), but he probably does not really find the joke funny.

(b2) Contradictory emotions are expressed in the verbal and nonverbal channels. In example 5, the smiley communicates a different emotion from the one facially expressed.


In this example, A (in A4) pretends to regret the vulgar expression he used (or the sexual allusion he made) in a previous message (A1), because he knows that the logs are recorded. With a text-centered approach, the "sad" smiley can be analyzed as a means to emphasize the regret expressed in the text, even though this emotion is contradictory with the one previously expressed in A2, through an emoticon (^ ^). A multimodal analysis permits reappraisal of the interpretation of the "sad" smiley, because the facial expression of A (a smile, in A4) displays a contradictory emotion. It can be hypothesized that, as in example 4, the emotion expressed in the graphic device corresponds to the application of social norms. Vulgar language (or sexual disclosure), in this context, is supposed to elicit regret or embarrassment. However, it seems that A finds it funny; his “blunder” makes him smile in A4, and this facial expression confirms the emotion expressed in A2.

This example can be likened to a channel discrepancy, that is, a disagreement between the verbal and nonverbal dimensions of a communicative act. Channel discrepancy is usually analyzed as nonverbal leakage: The person discloses via the nonverbal channel information that he/she wants to conceal (Ekman & Friesen, 1969). In face-to-face communication, the recipient usually attaches the greatest importance to nonverbal acts and their significance, even though they are equivocal and involuntary. It is assumed that nonverbal information is more reliable and sincere than verbal behavior, because it seems more difficult to manipulate and control. A multimodal analysis shows that instant messaging users run the risk of making inappropriate inferences about the emotional state of their partners, insofar as channel discrepancy is not perceptible to the participants.

In sum, a multimodal analysis can distinguish three kinds of relations between textual/graphic devices for the expression of emotion and nonverbal expression: encoding a nonverbal emotional expression (example 3), communicating an emotion that is not expressed through nonverbal behavior (example 4), and communicating an emotion contradictory to a nonverbal behavior (example 5).

Thus, the IM writer can choose among three strategies:

  • Encoding his/her emotion with a textual/graphic device in order to compensate for the lack of nonverbal cues in CMC, e.g., using a happy smiley when physically smiling;
  • Expressing a socially-expected emotion with a textual/graphic device that does not correspond to a nonverbal expression of emotion, in order to communicate an emotion compatible with social norms, e.g., using an "expected" happy smiley even when not smiling;
  • Expressing a socially-expected emotion with a textual/graphic device in order to communicate an emotion compatible with social norms, even though this emotion contradicts the nonverbal expression, e.g., using an "expected" sad smiley even when one is smiling.

In the third case, a text-centered analysis may focus on the less reliable display of the emotion, as verbal behaviors are in principle less reliable than nonverbal ones in the case of discrepancy (Ekman & Friesen, 1969).

Through the comparison of textual production and facial expressions, a multimodal analysis permits the researcher to underline the intentional and strategic dimension of the use of smileys. This dimension is mentioned in some text-centered studies (for example, Crystal, 2001), but the hypothesis of the smiley as a textual encoding of the emotional state seems to be implicitly preferred in studies based on text analysis (for example, Derks et al., 2008).


Conclusion

The main objective of this study was to highlight the limits of a text-centered analysis of CMC and the benefits of integrating kinesic data into the analysis, even if nonverbal behaviors are not perceptible to the participants. Several differences between a text-centered and a multimodal analysis (one that includes text, gestures, postures, facial expressions, and other physical activities) of instant messaging can be underlined.

First, a multimodal analysis of participants’ physical behaviors allows for a fuller and richer appreciation of the interactional dimension of instant messaging discussion than does analysis of text logs alone. A kinesic analysis makes the engagement of the participants perceptible.

Second, a multimodal analysis shows clearly that the position of the participants cannot be reduced to a dichotomy between an active sender and a passive recipient; both participants may be continuously involved in the discussion.

Third, analysis of the structural organization of discussion benefits from a multimodal approach. For example, whereas sequences are displayed linearly on the computer screen, video recordings of the participants reveal many overlaps. In a text-centered analysis, the logical structure of the discussion is reconstructed by the analyst, whereas a multimodal analysis permits understanding of how the participants organized the exchange.

Last, a multimodal analysis underscores the complexity of the use of textual/graphic devices for the expression of emotions.

The main difference between these two approaches is that a text-centered analysis deals with a textual product, whereas a multimodal analysis allows for the examination of a communicative process. More precisely, the communicative processes can only be inferred from text, while analyzing participants’ kinesics allows such processes to be appreciated more fully and directly.

At the same time, one can object that the textual product is the only thing shared by the participants when they use a medium without a visual channel. From this point of view, CMC is effectively limited to written communication. Thus one can assert that a text-centered analysis is appropriate for the observation of CMC. All things considered, a multimodal analysis does not challenge the relevance of text-centered analysis; rather, it constitutes a complementary approach that adds information about the context of CMC.


Notes

1. This study was first presented at the 10th International Pragmatics Association Conference in Gothenburg, Sweden (Marcoccia, Atifi, & Gauducheau, 2007).


References

Anis, J. (1994). Pour une graphématique des usages: le cas de la ponctuation dans le dialogue télématique. LINX: Revue des linguistes de l'université Paris X - Nanterre, 31, 81-97.

Atifi, H., Gauducheau, N., & Marcoccia, M. (In press). L'expression des émotions dans les forums de discussion sur l'internet. In N. Hubé, A. Lamy, & P. Lefébure (Eds.), Les médias à vif: Analyse des dynamiques émotionnelles dans l'espace public.

Bavelas, J., Gerwing, J., Sutton, C., & Prevost, D. (2008). Gesturing on the telephone: Independent effects of dialogue and visibility. Journal of Memory and Language, 58, 495-520.

Beaudoin, V. (2002). De la publication à la conversation. Lecture et écriture électronique. Réseaux, 20(116), 199-225.

Collot, M., & Belmore, N. (1996). Electronic language: A new variety of English. In S. C. Herring (Ed.), Computer-mediated communication: Linguistic, social, and cross-cultural perspectives (pp. 13-28). Amsterdam/Philadelphia: John Benjamins.

Constantin, C., Kalyanaraman, S., Stavrositu, C., & Wagoner, N. (2002, August). To be or not to be emotional: Impression formation effects of emoticons in moderated chatrooms. Paper presented at the Communication Technology and Policy Division at the 85th annual convention of the Association for Education in Journalism and Mass Communication (AEJMC), Miami Beach, FL. Retrieved August 12, 2008 from http://www.psu.edu/dept/medialab/research/AEJMC.htm

Crystal, D. (2001). Language and the Internet. Cambridge: Cambridge University Press.

Delfino, M., & Manca, S. (2007). The expression of social presence through the use of figurative language in a web-based learning environment. Computers in Human Behavior, 23(5), 2190-2211.

Derks, D., Fischer, A., & Bos, A. (2008). The role of emotion in computer-mediated communication: A review. Computers in Human Behavior, 24(3), 766-785.

Duncan, S. D., & Fiske, D. W. (1977). Face-to-face interaction. New York: John Wiley and Sons.

Ekman, P. (1984). Expression and the nature of emotion. In K. Scherer & P. Ekman (Eds.), Approaches to emotion (pp. 319-344). Hillsdale, NJ: Lawrence Erlbaum.

Ekman, P., & Friesen, W. V. (1969). Nonverbal leakage and clues to deception. Psychiatry, 32, 88-105.

Ekman, P., Friesen, W. V., & Ellsworth, P. (1972). Emotion in the human face: Guidelines for research and an integration of findings. New York: Pergamon Press.

Fornel, M. de. (2004, February). Les fondements conversationnels et sociolinguistiques de la communication électronique. Paper presented at the conference "La communication électronique: approches linguistiques et anthropologiques," Paris.

Frias, A. (2003). Esthétique ordinaire et chats: ordinateur, corporéité et expression codifiée des affects. Techniques & Culture, 42, 1-22.

Garcia, A. C., & Jacobs, J. B. (1999). The eyes of the beholder: Understanding the turn-taking system in quasi-synchronous computer-mediated communication. Research on Language and Social Interaction, 32, 337-367.

Gauducheau, N., & Marcoccia, M. (2007, June). Analyser la mimo-gestualité: un apport méthodologique pour l'étude de la dimension socio-affective des échanges en ligne. In M.-N. Lamy, F. Mangenot, & E. Nissen (Eds.), Actes du colloque Echanger pour apprendre en ligne (EPAL). Grenoble. Retrieved August 12, 2008 from http://w3.u-grenoble3.fr/epal/actes.html

Goffman, E. (1974). Frame analysis. An essay on the organization of experience. New York: Harper & Row.

Goodwin, C. (2000). Action and embodiment within situated human interaction. Journal of Pragmatics, 32, 1489-1522.

Gumperz, J. (1982). Discourse strategies. Cambridge, UK: Cambridge University Press.

Herring, S. C. (1999). Interactional coherence in CMC. Journal of Computer-Mediated Communication, 4(4). Retrieved August 12, 2008 from http://jcmc.indiana.edu/vol4/issue4/herring.html

Herring, S. C. (2004). Computer-mediated discourse analysis: An approach to researching online behavior. In. S. A. Barab, R. Kling, & J. H. Gray (Eds.), Designing for virtual communities in the service of learning (pp. 338-376). New York: Cambridge University Press.

Hutchby, I. (2001). Conversation and technology. From the telephone to the Internet. Cambridge, UK: Polity Press.

Jones, R. H. (2001, November-December). Beyond the screen: A participatory study of computer-mediated communication among Hong Kong youth. Paper presented at the Annual Meeting of the American Anthropological Association, Washington D.C. Retrieved August 12, 2008 from http://personal.cityu.edu.hk/~enrodney/Research/ICQPaper.doc

Jones, R. H. (2005a). The problem of context in computer-mediated communication. In P. LeVine & R. Scollon (Eds.), Discourse and technology: Multimodal discourse analysis (pp. 20-33). Washington, D.C.: Georgetown University Press.

Jones, R. H. (2005b). Sites of engagement as sites of attention: Time, space, and culture in electronic discourse. In S. Norris & R. H. Jones (Eds.), Discourse in action. Introducing mediated discourse analysis (pp. 141-154). London: Routledge.

Kendon, A. (1970). Movement coordination in social interaction: Some examples described. Acta Psychologica, 32, 100-125.

Kerbrat-Orecchioni, C. (1990). Les interactions verbales. Tome 1. Paris: Armand Colin.

Kerbrat-Orecchioni, C. (2005). Le discours en interaction. Paris: Armand Colin.

Marcoccia, M. (2000). Les smileys: une représentation iconique des émotions dans la communication médiatisée par ordinateur. In C. Plantin, M. Doury, & V. Traverso (Eds.), Les émotions dans les interactions communicatives (pp. 249-263). Lyon: ARCI - Presses Universitaires de Lyon.

Marcoccia, M., Atifi, H., & Gauducheau, N. (2007, July). Analysing kinesic behaviours of online discussants: A methodological contribution to CMC studies. Paper presented at the 10th International Pragmatics Conference. Göteborg, Sweden. Retrieved August 12, 2008 from https://tremonia.fb15.uni-dortmund.de:4433/ipra-panel

Marcoccia, M., & Gauducheau, N. (2007). L'analyse du rôle des smileys en production et en réception: un retour sur la question de l'oralité des écrits numériques. Glottopol, 10, 38-55. Retrieved August 12, 2008 from http://www.univ-rouen.fr/dyalang/glottopol/numero_10.html

Mourlhon-Dallies, F., & Colin, J.-Y. (1995). Les rituels énonciatifs des réseaux informatiques entre scientifiques. Les Carnets du CEDISCOR, 3, 161-172.

Nehaniv, C. (2005). Classifying gestures and inferring intent. Proceedings of the AISB'05 Symposium on Robot Companions: Hard Problems and Open Challenges in Robot-Human Interaction (pp. 74-81). Hatfield, UK: The Society for the Study of Artificial Intelligence and Simulation of Behaviour. Retrieved August 12, 2008 from http://www.aisb.org.uk/publications/proceedings/aisb05/5_RoboComp_final.pdf

Norris, S. (2004). Analyzing multimodal interaction. A methodological framework. London: Routledge.

Panyametheekul, S., & Herring, S. C. (2003). Gender and turn allocation in a Thai chat room. Journal of Computer-Mediated Communication, 9(1). Retrieved August 12, 2008 from http://jcmc.indiana.edu/vol9/issue1/panya_herring.html

Paolillo, J. C. (1999). The virtual speech community: Social network and language variation on IRC. Journal of Computer-Mediated Communication, 4(4). Retrieved August 12, 2008 from http://jcmc.indiana.edu/vol4/issue4/paolillo.html

Pomerantz, A. (1984). Agreeing and disagreeing with assessments: Some features of preferred/dispreferred turn shapes. In J. M. Atkinson & J. Heritage (Eds.), Structures of social action. Studies in conversation analysis (pp. 57-101). Cambridge, UK: Cambridge University Press.

Postmes, T., Spears, R., & Lea, M. (2000). The formation of group norms in computer-mediated communication. Human Communication Research, 26, 341-371.

Relieu, M. (2005). Les usages des TIC en situation naturelle: une approche ethnométhodologique de l'hybridation des espaces d'activités. Intellectica, 2(3), 139-162.

Rezabek, L., & Cochenour, J. (1998). Visual cues in computer-mediated communication: Supplementing texts with emoticons. Journal of Visual Literacy, 18, 201-215.

Rintel, E. S., Mulholland, J., & Pittam, J. (2001). First things first: Internet relay chat openings. Journal of Computer-Mediated Communication, 6(3). Retrieved August 12, 2008 from http://jcmc.indiana.edu/vol6/issue3/rintel.html

Schegloff, E. A. (1982). Discourse as an interactional achievement: Some uses of 'uh huh' and other things that come between sentences. In D. Tannen (Ed.), Analyzing discourse: Text and talk (pp. 71-93). Washington, D.C.: Georgetown University Press.

Scherer, K. R. (1980). The functions of nonverbal signs in conversation. In R. N. St. Clair & H. Giles (Eds.), The social and psychological contexts of language (pp. 225-244). Hillsdale, NJ: Lawrence Erlbaum.

Schulze, M. (1999). Substitution of paraverbal and nonverbal cues in the written medium of IRC. In B. Naumann (Ed.), Dialogue analysis and the mass media (pp. 65-82). Tübingen: Max Niemeyer.

Smith, B., & Gorsuch, G. (2004). Synchronous computer mediated communication captured by usability lab technologies: New interpretations. System, 32, 553-575.

Smith, M. A., & Kollock, P. (Eds.). (1999). Communities in cyberspace. London: Routledge.

Tobin, L. (2000). La gestuelle d'accompagnement de la relation humain-ordinateur. Communication & Organisation, 18, 253-264.

Walther, J. B., & D'Addario, K. P. (2001). The impact of emoticons on message interpretation in computer-mediated communication. Social Science Computer Review, 19(3), 324-347.

Werry, C. C. (1996). Linguistic and interactional features of Internet Relay Chat. In S. C. Herring (Ed.), Computer-mediated communication: Linguistic, social, and cross-cultural perspectives (pp. 47-63). Amsterdam/Philadelphia: John Benjamins.

Wilson, A. (1993). A pragmatic device in electronic communication. Journal of Pragmatics, 19, 389-398.

Biographical Notes

Michel Marcoccia [ michel.marcoccia@utt.fr ] is Assistant Professor of Communication Studies at the University of Technology of Troyes (France). His research interests include conversational analysis of computer-mediated communication, computer-mediated social support, and virtual speech communities.

Hassan Atifi [ hassan.atifi@utt.fr ] is Assistant Professor of Communication Studies at the University of Technology of Troyes (France). His research interests include ethnography of computer-mediated communication, digital corpora, and cultural variation in CMC.

Nadia Gauducheau [ nadia.gauducheau@utt.fr ] is Assistant Professor of Psychology at the University of Technology of Troyes (France). Her research interests include emotion in CMC, evaluation of information and communication technologies, and communicative competence in CMC.


Any party may pass on this Work by electronic means and make it available for download under the terms and conditions of the Digital Peer Publishing License. The text of the license may be accessed and retrieved at http://www.dipp.nrw.de/lizenzen/dppl/dppl/DPPL_v2_en_06-2004.html.