Pedagogical agents with affective and cognitive dimensions


Magda Bercht
Universidade Federal do Rio Grande do Sul
Instituto de Informática
Brasil
bercht@inf.ufrgs.br
Rosa Maria Viccari
Universidade Federal do Rio Grande do Sul
Instituto de Informática
Brasil
rosa@inf.ufrgs.br

 

Abstract

This work presents an agent definition that includes affective and cognitive dimensions in order to model the student interacting with learning environments. The model is based on mental states used to describe the student model, and adopts a cognitive perspective on the appraisal of emotional situations. This paper also describes the formalism and the environment used to implement our hypothesis about Intelligent Tutoring Systems that adapt their actions to the student's subjectivity and to a pedagogical appraisal that includes qualitative factors.
 

1. Introduction

Research in Artificial Intelligence applied to educational and training domains has progressed significantly over the last years. The move from behaviourist Intelligent Tutoring Systems (ITS) to new proposals, which aim to be more flexible and to adopt constructivist pedagogical views, imposes new social interactions and new needs on pedagogical appraisal [2].

Those pedagogical, technological and social paradigm changes naturally require adjustments in ITS architectures.
Architectures based on multi-agent technologies have been opening new possibilities for realizing those changes, as demonstrated by the prototypes and experiments reported in [4] and [6].

We verify that in current ITSs a great research effort is dedicated to modelling the student's development in a domain of concepts or abilities, as well as his/her problem-solving activity, but little is done towards a student model that takes into consideration aspects such as intrinsic motivation, mood influences and effort in task development, that is, affective dimensions.

This work attempts to implement the dynamics required to support the individuality of each student interacting with a teaching/learning environment. The student's model is explored through mental states because of the level of abstraction the mentalistic approach provides, and because mental states and student actions make it possible to relate the different affective and cognitive episodes involved in the learning process.

The paper is organised as follows: section 2 presents an overview of our ideas and the concepts adopted, discussing the need for a dynamic student model with affective dimensions incorporated into its architecture.
Section 3 describes the formalism adopted and the BDI agent model. Section 4 presents the architecture of a tutor embedding our student model proposal, which is being implemented and which integrates some affective factors involved in learning processes. Section 5 points out our conclusions, and section 6 presents the references.
 

2. Using Agents in Intelligent Tutoring Systems

According to Huhns and Singh, "agents are active, persistent software components that perceive, reason, act and communicate" [5, page 1]. However, for some researchers these properties are not enough for a system to be considered an agent. An agent can also be seen as an entity to which mental states are attributed. The mental states usually used to characterize agents are beliefs, intentions, capabilities, objectives, commitments and expectations [8], modelled so as to resemble human mental states. This focus of supplying agents with mental states similar to human ones is called the mentalistic approach, and it is the approach we adopt to model and construct our student model in this work.

In recent years, the term pedagogical agent has emerged to denote agents designed to support human learning. Although pedagogical agents build on previous ITS research, they have brought new perspectives to the problem of how to ease the learning process in interactive educational systems. The hard task is to adapt instructional interactions to the needs of the student, to support co-operation with students and other agents, and so on. Indeed, agents can appear to students as lifelike characters, inducing affective responses [6]. The great advantage of using agents lies in properties such as flexibility, pro-activity, sociability and adaptability, which enhance pedagogical quality.

Usually, pedagogical evaluation in an ITS is based on how much a student knows about a certain topic or how well he/she manages a specific ability, which leads to studying and detecting the student's knowledge of the domain. It is important to highlight that the student model must be able to represent all the qualities relevant to a pedagogical appraisal, both quantitative (performance) and qualitative (affective factors and characteristics of the student). When motivational and affective factors are taken into account, the pedagogical evaluation is complemented: we can add how the student knows the domain and what his/her pre-disposition is, trying to infer the student's emotional and affective state. Giraffa [4] points out that the difficulty in modelling the student lies in the lack of the knowledge necessary to model the learning process. We verify, however, that this adversity also comes from the lack of knowledge about modelling a pedagogical appraisal that takes factors such as specific affective evaluation into consideration [2].

The student's representation in the system, the student model, is one of the most complex and at the same time most fragile parts of an ITS, due to problems not yet solved, such as hardware and software limitations, scarce knowledge about the student's learning process, the imprecision and subjectivity of emotional and motivational factors in teaching and learning environments, the problem of pedagogical knowledge representation, and many others.

There are few experimental works, and fewer on a commercial scale, that develop ITSs taking emotional and/or motivational factors into consideration, since the identification of those factors in learning interactions is very difficult under current technological conditions. Research implementing affective factors in teaching/learning systems guided by the student's behaviour can be found in Soldato [15], Elliot [3] and Vicente et al. [16].

The student model is naturally a dynamic structure, since the student hardly ever holds correct conceptions of the domain being studied: he/she is not logically omniscient, and his/her motivations and affections can vary very dynamically. These characteristics impose the need for a knowledge base that supports easy revision and frequent change. The student's conceptions behave as logical beliefs, constantly subject to revision, because they represent the view, or hypothesis, that the tutor holds about the student's beliefs. These beliefs represent the student's affective and cognitive state, which is constantly modified by his/her interaction with the tutor. A multi-agent approach to student modelling appears as a natural framework to trace the representation and co-ordination of the different episodes of teaching/learning interaction, in order to shape the system according to the whole of the actions. Therefore, we adopt the BDI paradigm, which allows agents to be described through the notion of mental states, because it is a tool able to specify and describe the complex activities involved in teaching/learning interactions.
 

3. BDI Model using Extended Logic Programming

We follow the approach adopted by Móra et al. [7], [8] to overcome the limitations of BDI models when building an agent. That approach defines BDI models using a logical formalism that is both powerful enough to represent mental states and usable as a knowledge representation language.

3.1. The Logical Formalism

The formalism we are using is logic programming extended with explicit negation (ELP) with the Well-Founded Semantics eXtended for explicit negation (WFSX). ELP with WFSX (simply ELP, from now on) extends normal logic programs with a second, explicit negation, in addition to the usual negation as failure of normal logic programs, which is called implicit negation in the ELP context. Besides providing a computational proof procedure for theories expressed in its language, the ELP framework also provides a mechanism to determine how to minimally change a logic program in order to remove contradictions. As is usually done, we focus on the formal definition of mental states and on how the agent behaves, given such mental states.

But, in contrast to former approaches, the model is not only an agent specification: it may also be executed in order to verify the actual agent behaviour, and it may be used as the reasoning mechanism of actual agents.

An extended logic program is a set of rules of the form L0 ← L1, ..., Lm, not Lm+1, ..., not Ln, where L0, ..., Ln are objective literals. An objective literal is either an atom A or its explicit negation ¬A. The symbol not stands for negation by default, and not L is a default literal. Literals are either objective or default literals. Also, ¬¬L ≡ L. The language also allows integrity constraints of the form A ← L1, ..., Lm, not Lm+1, ..., not Ln, where A and L1, ..., Ln are objective literals, stating that A should hold if the constraint body holds. In particular, when A = ⊥, where ⊥ stands for contradiction, a contradiction is raised whenever the constraint body holds.

In order to represent and reason about pro-attitudes like desires and intentions, we need a logical formalism that deals with actions and time. The work by Móra et al. [7] uses a modified version of the Event Calculus (EC). The predicate holds_at(P,T), defining that property P is true at a time T, is:

holds_at(P,T) ← happens(E,Te), initiates(E,P), Te < T, persists(Te,P,T).
persists(Te,P,T) ← not clipped(Te,P,T).
clipped(Te,P,T) ← happens(C,Tc), terminates(C,P), Te < Tc, Tc < T.

The predicate happens(E,T) means that event E occurred at time T; initiates(E,P) means that event E initiates property P at the time event E occurs; terminates(E,P) means that event E terminates P; persists(Te,P,T) means that P persists from Te until T (at least). The model assumes there is a special time variable Now that represents the present time. Note that a property P is true at a time T (holds_at(P,T)) if there is a previous event that initiates P and if P persists until T; and P persists until T if the existence of another event that terminates P before T cannot be proved by default.

These are the original provisions of the EC for the holds_at(P,T) predicate. They allow us to reason about the future, by hypothetically assuming a sequence of actions represented by the happens(E,T) and act(E,A) predicates and verifying which properties would hold. They also allow us to reason about the past: in order to know whether a given property P holds at time T, the EC checks which properties remain valid after the execution of the actions that happened before T. But the EC assumes that properties change only as a consequence of actions performed by the agent. This is not a reasonable assumption, as we would like to allow other agents to act, as well as to allow for actions that are not noticed by the agent. This is the role of sense(P,T), which means that property P is perceived by the agent at time T.
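As a rough illustration of these EC provisions, the holds_at rule can be read as a query over a set of happens/initiates/terminates facts. The following is a minimal Python sketch, not the ELP implementation; the event and property names are invented for illustration:

```python
# Minimal Event Calculus sketch: a property P holds at time T if some earlier
# event initiated it and no intervening event terminated it (default persistence).

def holds_at(prop, t, happens, initiates, terminates):
    """happens: list of (event, time); initiates/terminates: event -> set of properties."""
    for event, te in happens:
        if prop in initiates.get(event, set()) and te < t:
            # persists(Te,P,T): fails only if a terminating event is provable in (Te, T)
            clipped = any(
                prop in terminates.get(c, set()) and te < tc < t
                for c, tc in happens
            )
            if not clipped:
                return True
    return False

# Hypothetical interaction events: the student starts a task at time 1, abandons it at 5.
happens = [("start_task", 1), ("abandon_task", 5)]
initiates = {"start_task": {"working"}}
terminates = {"abandon_task": {"working"}}

print(holds_at("working", 3, happens, initiates, terminates))  # True
print(holds_at("working", 7, happens, initiates, terminates))  # False
```

The sketch captures the negation-as-failure reading of persists: the property survives unless a clipping event can be proved.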

ELP with the EC provides the elements needed to model mental states. Nevertheless, the model also casts the ELP language and the EC primitives into a simpler syntax that extends the A language proposed by Quaresma et al. [12]. The sentences are:

A causes F if P1,...,Pn - action A causes property F if propositions Pi hold;

F after A if P1,...,Pn - property F holds after action A if propositions Pi hold;

A occurs_in E - action A occurs when event E occurs;

E precedes E' - event E occurs before event E'.

The language also provides means to refer to beliefs that should hold in the future or that should have held in the past.
In this paper, we make use only of the operator next(P), stating that property P should hold at the next instant of time.

3.2. The Agent Model

Móra's Agent Model [8] has been adopted as the basis to model the student. The model does not define a complete agent, but only the cognitive structure that is part of an agent model. The agent cognitive structure is a tuple ⟨B, D, I, T⟩, where B is the set of the agent's beliefs, D is the set of the agent's desires, I is the set of the agent's intentions and T is the set of time axioms, as defined above.

The desires of the agent are a set of sentences of the form DES(Ag,P,Atr) if Body, where Ag is an agent identification, P is a property and Atr is a list of attributes. Desires are related to the state of affairs the agent eventually wants to bring about. But the fact that an agent has a desire does not mean it will act on it: before deciding what to do, the agent engages in a reasoning process, confronting its desires with its beliefs (the current circumstances and constraints the world imposes). The agent will choose those desires that are possible according to some criteria.

Beliefs represent the information agents have about the environment and about themselves. They are represented by sentences that describe the problem domain using ELP and constitute the set B. An agent Ag believes a property P holds at a time T if, from B and T, it can deduce BEL(Ag,P) for the time T. According to Móra [8], the agent continuously updates its beliefs to reflect changes it detects in the environment, and whenever a new belief is added to the belief set, consistency is maintained.

Intentions are characterised by the choice of a state of affairs to achieve, and a commitment to this choice. Since intentions are viewed as a commitment the agent assumes with a specific possible future, an intention may not contradict other intentions. Also, intentions should be supported by the agent's beliefs. Once an intention is adopted, the agent will pursue it, planning actions to accomplish it and re-planning when a failure occurs. Agents must also adopt as intentions the actions that are used as means to achieve intentions.

According to the theory we adopt, in [7] and [8], an agent should not intend something at a time that has already passed; an agent should not intend something it believes is already satisfied, or that will be satisfied with no effort by the agent; and an agent only intends something it believes is possible to achieve.

Agents choose their intentions from two different sources: from their desires and as a refinement of other intentions. By definition, there are no constraints on the agent's desires; therefore, an agent may have conflicting desires. To choose intentions, the agent first determines the subsets of its desires that are relevant according to its current beliefs. Afterwards, it is necessary to determine which desires are jointly achievable. In general, there may be more than one subset of the relevant desires that is jointly achievable; therefore, we should somehow indicate which of these subsets is preferred to be adopted as intentions. This is done through a preference relation defined on the attributes of desires: according to the theory defined in [8], [12], the agent should prefer to satisfy the most important desires first. In addition to preferring the most important ones, the agent adopts as many desires as it can. The selection is made by combining the different forms of non-monotonic reasoning provided by the logical formalism.
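The selection just described can be approximated procedurally. The following Python toy is a sketch, not the non-monotonic ELP machinery of [8]; the desire names, importance values and the pairwise conflict relation are invented for illustration:

```python
from itertools import combinations

# Each relevant desire: (name, importance); higher importance is preferred first.
desires = [("explain_topic", 3), ("propose_task", 2), ("give_hint", 1), ("stay_silent", 2)]

# Hypothetical incompatibilities: conflicting desires cannot be intended jointly.
conflicts = {frozenset({"propose_task", "stay_silent"}),
             frozenset({"give_hint", "stay_silent"})}

def jointly_achievable(names):
    return all(frozenset({a, b}) not in conflicts for a, b in combinations(names, 2))

def choose_intentions(relevant):
    # Rank candidate subsets: prefer the most important desires, then as many as possible.
    candidates = []
    for r in range(len(relevant), 0, -1):
        for subset in combinations(relevant, r):
            names = [d[0] for d in subset]
            if jointly_achievable(names):
                score = (sorted((imp for _, imp in subset), reverse=True), len(subset))
                candidates.append((score, names))
    candidates.sort(reverse=True)
    return candidates[0][1] if candidates else []

print(choose_intentions(desires))  # ['explain_topic', 'propose_task', 'give_hint']
```

Here "stay_silent" loses because the subset containing the three compatible desires dominates under the importance-then-cardinality preference.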

The next step is to define when the agent should perform all this reasoning about intentions. We take the stance that an agent should revise its intentions when it believes a certain condition holds. Recall that we assume the agent constantly has to keep its beliefs consistent whenever new facts are incorporated.
Whenever the agent revises its beliefs and one of the conditions for revising intentions holds, a contradiction is raised. The intention revision process is triggered when one of these constraints is violated.

We adopt a cognitivist focus on the affective dimension, based on the work of Ortony et al. [10]. We know that emotion has several aspects: it involves experience, physiology, and social and cultural behaviour, and it also carries each individual's cognition and concepts. What we intend is to bring into artificial reasoning an interlacing of the so-called pure "Cartesian" reasoning with emotion: the changes of action and behaviour brought about by emotion, in support of the adaptability of actions in intelligent systems.
 

4. Building a Student Model with affective and cognitive schemes

According to Picard [11], if "intelligent machines" are to communicate effectively with humans in several situations and for a wide range of different objectives, for example to teach or to advise, then they must consider the motivational and emotional states of those with whom they interact.

We will concentrate our attention here on the Student Model, depicted as the largest rectangle in Fig. 1. It has the special task of supplying the tutoring system as a whole with the ability to adapt to the context and to personalize the teaching strategies according to the student's characteristics.

The main point of our student model is the integration of cognitive and affective states into a global student state, and the integration of this state with the other components of a tutor. The student representation is composed of two schemes, one for the cognitive state and another for the affective state. Fig. 1 presents a solution for the proposition of this research, through an architecture for the Student Model and a perceptive system (Perception) communicating with the other functions of an ITS. To build the student model while interacting with the student, we need a way to observe the student's actions and a mechanism able to analyse these actions and store the results. We therefore designed a society of perception agents (Perceptions) whose function is to observe, reify and analyse student actions, building the model at each interaction.

 


 
Fig. 1. Student Model with emotion dimension



Perceptions is a society of highly specialised agents, each with a specific aim, for example identifying the student's behaviour and attitudes. Those attitudes can be understood as affective and cultural responses within the current learning episode. Another kind of specialised agent can analyse facial expressions, identifying the student's expressions and thus giving another dimension to the analysis of the student's global state. In this first stage, we have implemented only the first kind of agent.

The current perception agent system (the first kind) works together with an interface and traces the interaction between tutor and student, reporting data and the actions accomplished in terms of the student's answers: which attitudes and behaviours the student showed when facing a certain event or task. The analysis of that trace composes a history both of the behaviour related to the information and determinant factors of cognitive aspects (the student's performance and development in the domain) and of affective aspects. The tutor's actions are also observed, mainly its interventions, for example what kind of help was made available, which tasks were proposed, and so on.

The rectangle Tutor Actions gathers the other functions of an ITS, such as pedagogical actions, controls, communication with the interface and many others. The Cognitive Scheme has the role of dealing with the student's condition in relation to the domain: his/her successes and mistakes, performance, and topics understood and learned.

The student's learning historic concerning the content, is represented in the Cognitive Scheme as rules, facts, beliefs, and desires in relation to the knowledge the tutor believes the student has.

Which emotional factors in teaching and learning interactions are most decisive and can be modelled in the context of an artificial agent acting as a human agent? Motivation is one of the keys to learning, and emotions have an important role in motivation [15], [3]. Another key, pointed out in the works mentioned above, lies in the beliefs the student holds about himself/herself, such as confidence and independence [11], because they ease or inhibit the processes of discovery and creation.
Social factors, such as the hierarchical relations involved in a learning context, constitute another important factor.

The detection of affective factors is obviously restricted by the limitations of the interface, which has no sensors for physiological characteristics or for studying the student's patterns of body expression, and by the difficulty of communication in natural language. However, some behaviour can be identified in an interaction. For example, a student's lack of persistence in the execution of tasks can be defined in terms of the number of times the student asks for help; the student's effort during task development is a reliable indication of intrinsic motivation; and the behaviour of a student who asks for help before trying to develop a task can show lack of confidence. The factors we intend to identify through behaviour in the student's interaction are effort, confidence and independence, which directly affect motivation.

Suppose the tutor is working with some student, for example Student 1, who is interacting for the first time and is already performing a proposed task. If the ITS environment does not build initial profiles through questions or other options, then default values are assumed for the motivational and affective factors being identified and analysed by a perceptive system agent. We adopt the hypothesis that the system does not have initial profiles. In that case, since the identification of factors such as Effort, Confidence and Independence is foreseen, the default values are:

Effort = none; Confidence = average; Independence = average

The information referring to Student 1's actions is passed to the Student Model through a string of characters containing the identified and inferred features. To illustrate, suppose we identified that Student 1 performed the proposed task in a single, first attempt, without asking for help specific to the task, and that both the process and the result were correct. In this case, the agent in charge of the perceptive system informs the Student Model, in the X-BDI formalism, through a string such as:

[sense(effort(student1, none), 1), sense(confidence(student1, average + 1), 1), sense(independence(student1, average + 2), 1)]
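The heuristic by which the perception agent maps this episode onto the three factors can be sketched as follows. This is an illustrative approximation only; the numeric encoding of the qualitative values, the rule thresholds and the function name are our own assumptions, not part of the X-BDI formalism:

```python
AVERAGE = 3  # hypothetical numeric anchor for the qualitative value "average"

def perceive(attempts, help_requests, solved, tried_before_help=True):
    """Map one task episode onto effort/confidence/independence sense terms."""
    # Effort: a single, immediately successful attempt gives no evidence of effort.
    effort = "none" if attempts == 1 and solved else "some"
    # Confidence: trying before asking for help, and succeeding, raises it.
    confidence = AVERAGE + (1 if solved and tried_before_help else 0)
    # Independence: solving without any help raises it further.
    independence = AVERAGE + (2 if solved and help_requests == 0 else 0)
    t = 1  # time of the perception
    return [("effort", effort, t), ("confidence", confidence, t), ("independence", independence, t)]

print(perceive(attempts=1, help_requests=0, solved=True))
```

For the Student 1 episode above, the sketch yields effort "none", confidence raised by one step and independence raised by two, matching the sense terms shown.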

In fact, the tutor has only one desire: to receive the diagnosis of the student's affective and cognitive state, which can be modelled as follows:

Des(tutor, receive-student-global-state(X)).
Bel(tutor, next(Bel(student, send-student-global-state(X)))) if Bel(student, consolidation-affection-cognition(X)).

Thus, since it is the tutor's only desire, it becomes a candidate intention. However, it will be effectively adopted as an intention only if the student believes there is a sequence of actions that can be performed so as to accomplish it. That set of beliefs and desires can be modelled in the Student Model as follows:

Des(student, sending-global-state(X)).

Bel(student, sending-global-state(X)) if Bel(student, student(X)), Bel(student, consolidation-affection-cognition(X)).

sending-data-consolidate(X, Eff, Conf, Indep) causes Bel(student, consolidation-affection-cognition(X)) if Bel(student, esq-Affective(X, Eff, Conf, Indep)), Bel(student, esq-Cognitive(X, Development, Indep)).

Bel(student, esq-Affective(X, Eff, Conf, Indep)) if
Bel(student, effort(X, Eff)),
Bel(student, confidence(X, Conf)),
Bel(student, independence(X, Indep)).

The Cognitive and Affective Schemes of the Student Model, presented in Fig. 1, keep their functionality: with the adoption of the X-BDI formalism they are treated as sets of beliefs. The Student Model is built dynamically and in real time, along the interaction process. In the proposed architecture, this means that the designer only has to declare his/her heuristics regarding the teaching/learning area in order to guide the tutor's pedagogical actions.
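The consolidation belief above can be read as the conjunction of the two schemes into one global-state record. A toy Python rendering follows; the field names and values are our own illustration, not part of the Jade implementation:

```python
def consolidate(affective, cognitive):
    """Join the Affective and Cognitive Schemes into the student's global state."""
    # Both schemes are kept as beliefs; consolidation is simply their conjunction,
    # keyed by the student identification shared by the two schemes.
    assert affective["student"] == cognitive["student"]
    return {"student": affective["student"],
            "effort": affective["effort"],
            "confidence": affective["confidence"],
            "independence": affective["independence"],
            "development": cognitive["development"]}

# Hypothetical scheme contents after the Student 1 episode.
affective = {"student": "student1", "effort": "none", "confidence": 4, "independence": 5}
cognitive = {"student": "student1", "development": "task_solved"}
print(consolidate(affective, cognitive))
```

The resulting record is what the student agent's sending-global-state action would make available to the tutor.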

We are testing our ideas and hypotheses in the Jade Student Model Manager agent. The Jade tutor is ongoing work of the GIA (Grupo de Inteligência Artificial da UFRGS), designed by Silveira [14]. The Jade environment follows a client-server paradigm. The subject matter covered by the system is Physics - Electricity (Ohm's Law and its applications). For a complete description of the Jade environment and Eletrotutor (the first implementation) see http://www.inf.ufrgs.br/~rsilv/Jade/jade.html, and to navigate the Eletrotutor see http://www.inf.ufrgs.br/eletro31/eletro.html.
 

5. Conclusion

Our work aggregates advances from correlated areas. It intends to improve the efficiency of educational computing environments by aggregating new features and advances through a distributed ITS based on an agent-society architecture, in which all the elements, human or not, can have learning and teaching competence. The use of MAS allows us to explore possibilities beyond traditional techniques, such as a more powerful and realistic interaction with human agents when affective dimensions are aggregated to an ITS.

We proposed an architecture that integrates affective and cognitive dimensions in order to support the adaptation of teaching strategies with regard to the subjectivity and performance inherent to each student.

The agent's behaviour is based on the fact that the only thing the programmer has to do is specify the agent's mental states regarding the learning and teaching situations.

Although the step we have made is not large, its implications for future research in the ITS area are not small: looking at the structure of mental states and the choreography involved will probably help us to understand how the corresponding biological systems work. We believe that such computer models can unlock the secrets of a variety of artificial reactions.
 

6. References

  • ANDRÉ, E. et al. Life-Like Presentation Agents: A New Perspective for Computer-Based Technical Documentation. In: Proceedings of Workshop V, Pedagogical Agents. Kobe, Japan, 1997. p. 1-8.
  • BERCHT, M. Uma Arquitetura para Fatores Emocionais e Motivacionais em Agentes Pedagógicos. Proposta de Tese. CPGCC/UFRGS, 2000. 54 p.
  • ELLIOT, C. Affective Reasoner Personality Models for Automated Tutoring Systems. In: Proceedings of Workshop V, Pedagogical Agents. Kobe, Japan, 1997. p. 33-39.
  • GIRAFFA, L.M.M. Uma arquitetura de Tutor utilizando estados mentais. Tese de Doutorado. PPGCC/UFRGS, 1999. 151 p.
  • HUHNS, M.N.; SINGH, M.P. Readings in Agents. Morgan Kaufmann Publishers, San Francisco, California, 1998. p. 1-23.
  • JOHNSON, W.L. Pedagogical Agents. http://www.isi.edu/isd/carte/
  • MÓRA, M.; LOPEZ, G.P.; COELHO, H.; VICCARI, R. BDI Models and Systems: Reducing the Gap. In: Proceedings of ATAL'98, 5th International Workshop on Agent Theories, Architectures and Languages. Paris, France, 1998. p. 153-167.
  • MÓRA, M.C. Um modelo de agente executável. Tese de Doutorado. PPGCC/UFRGS, Porto Alegre, 1999.
  • MOUSSALE, N.; VICCARI, R. Interações Tutor-Aluno Analisadas Através de Seus Estados Mentais. Dissertação de Mestrado. PPGCC/UFRGS, 1996. 236 p.
  • ORTONY, A.; CLORE, G.; COLLINS, A. The Cognitive Structure of Emotions. Cambridge University Press, 1999.
  • PICARD, R. Affective Computing. MIT Press, Cambridge, Massachusetts, 1997. 292 p.
  • QUARESMA, P.; LOPES, J.G. A Logic Programming Framework for the Abduction of Events in a Dialogue System. In: AISB Workshop on Automated Reasoning, Proceedings. England: AISB Workshop Series, 1997.
  • RAO, A. Modelling Rational Agents Within a BDI-Architecture. In: Fikes, R. (ed.), Proceedings of Knowledge Representation and Reasoning '91 (KR&R'91). San Mateo, CA: Morgan Kaufmann Publishers, 1991.
  • SILVEIRA, R.A. Modelagem orientada a Agentes aplicada a Ambientes Inteligentes Distribuídos de Ensino. Proposta de Tese. PPGCC/UFRGS, 1999.
  • SOLDATO, T. del; BOULAY, B. du. Implementation of Motivational Tactics in Tutoring Systems. Journal of Artificial Intelligence in Education, vol. 6, n. 4, 1995. p. 337-378.
  • VICENTE, A.; PAIN, H. Motivation Diagnosis in Intelligent Tutoring Systems. In: Proceedings of the 4th International Conference, ITS'98. San Antonio: Springer-Verlag, 1998. p. 86-95.