Acta Universitaria

On-line version ISSN 2007-9621; Print version ISSN 0188-6266

Acta Universitaria, vol. 29, México, 2019. Epub Nov 05, 2019

https://doi.org/10.15174/au.2019.1796

Articles

Validation of an instrument for measuring the technology acceptance of a virtual learning environment


Pedro César Santana-Mancilla1  * 

Osval Antonio Montesinos-López1 

Miguel Angel Garcia-Ruiz2 

Juan José Contreras-Castillo1 

Laura Sanely Gaytan-Lugo3 

1 Facultad de Telemática, Campus Central, Universidad de Colima. Av. Universidad 333, Col. Las Víboras. Colima, Colima, México. C.P. 28040.

2 Department of Mathematics and Computer Science, Algoma University. Canada.

3 Facultad de Ingeniería Mecánica y Eléctrica, Campus Coquimatlán, Universidad de Colima. México.


Abstract

Virtual Learning Environments (VLE) provide platforms to make online education more convenient and affordable for learners. Although VLE are currently in great demand, their acceptance needs to be assessed. In this research, an instrument that measures the technology acceptance of a VLE is validated by applying a confirmatory factor analysis on 15 items and five factors. Results show that the overall fit of the model was satisfactory and that all correlations between the latent factors were higher than 0.48. It was found that the assessment of technology acceptance is very important, because the VLE’s success depends largely on the favorable reception of professors, researchers, and educational leaders.

Keywords: Technology acceptance; virtual learning environments; confirmatory factor analysis; learning technology


Introduction

Over the past decade, several countries have promoted the integration of Information and Communication Technologies (ICT) in their education systems (Padilla-Meléndez, Del Águila-Obra & Garrido-Moreno, 2014). As many different sectors grow under the effects of globalization, education has been evolving in different ways (Arteaga-Sánchez, Cortijo & Javed, 2014; Baturay, Gökçearslan & Ke, 2017; van Raaij & Schepers, 2008). Of particular interest is the growing number of students who enroll fully in online courses (Digital Learning Compass, 2017). In Mexico, virtual education, also known as online education or e-learning, has grown considerably in higher education, from 32 000 students in 1981 to 467 552 in 2014, including mixed and online modalities (Zubieta-García & Rama-Vilate, 2015). In this context, the integration of learning technology and applications is a key factor for these new generations of students.

Virtual Learning Environments (VLE) provide a technological platform for developing and delivering courses over the internet, regardless of time or location. A VLE can complement traditional face-to-face education, transforming it into blended learning (Garrison & Kanuka, 2004; McComas, 2014), or it can serve fully online courses and online learning (Moore & Kearsley, 2011). Since VLE-based courses are always available online, they offer diverse advantages (Garrison, 2011). For instance, students have the opportunity to study at their own pace (Moskal, Dziuban & Hartman, 2013; Richardson & Swan, 2003), and they have access to more information, or to multiple representations of the definitions, concepts, and topics related to a course (Mayer, 2009; Paivio, 2014). A VLE also allows file exchange between students and instructors, the scheduling of educational activities, and the monitoring of students' progress in a course. Notably, Richardson & Swan (2003), Galey (2014), and Porter, Graham, Spring & Welch (2014) add two advantages that may be the most important: first, VLE are location-independent, which means that students do not need to travel to attend courses and thus save time; second, unlike traditional courses, VLE allow students to reflect on learning materials and their responses before posting comments or answers on forums. For these reasons, VLE are widely used in education (Codreanu, Michel, Bobillier-Chaumon & Vigneau, 2017). Salmerón-Pérez, Rodríguez-Fernández & Gutiérrez-Braojos (2010) state that the use of virtual platforms improves collaborative and cooperative learning regardless of the educational level of students.

However, according to Galey (2014), VLE also have disadvantages. According to Evans (2013), some critics argue that online education is not as effective as traditional classroom education because it lacks face-to-face interaction, while others note that all participants in online education must make a greater effort than in traditional courses in order to build good relationships and stay focused on class topics. Rabe-Hemp, Woollen & Humiston (2009) claim that students favor face-to-face interactions when discussing class topics. In addition, Tuckman (2007) states that the lack of supervision of students often leads to excessive online procrastination and reduced performance, because many of them treat the opportunity for self-pacing as an opportunity to procrastinate. Furthermore, the fact that not all students have internet access at home represents an inequality (Codreanu et al., 2017).

As in any learning environment, students are the primary participants in a VLE. Nowadays, VLE are an important tool that, in the right hands, can support the teaching-learning process (Codreanu et al., 2017; Pituch & Lee, 2006).

VLE characteristics

According to Moore, Dickson-Deane & Galyen (2011), there is disagreement among researchers about common definitions and terminology on topics related to VLE. This makes it difficult not only to perform meaningful cross-study comparisons but also to establish a standardized taxonomy of VLE. In fact, these authors explain that VLE appear in the literature under different labels, such as learning management system (LMS), course management system (CMS), or knowledge management system (KMS).

According to Govindasamy (2001), there are seven quality parameters that VLE need in order to succeed: instructional design, course development, teaching and learning process, course structure, student support, faculty or school support, and evaluation and assessment. These seven benchmarks concern only the pedagogical side. However, after a systematic literature review, Mueller & Strohmeier (2011) group the relevant characteristics into two sets:

  • System-related: communicativeness, feedback, media synchronicity, flexibility, perceived quality, perceived usability, interface design, reliability, adaptability, system quality, user adaptation, and user tools.

  • Information-related: course attributes, course quality, format, information quality, information relevance, and terminology.

As can be seen, these two sets contain not only pedagogical foundations but also technical aspects.

Problem Statement

Although VLE is an evolving area of research that receives more attention every day, Liaw, Huang & Chen (2007) and Paechter, Maier & Macher (2010) claim that there is minimal research on students' experience and attitudes toward VLE. Therefore, there is a need to quantify the acceptance of VLE by students. Technology acceptance is a set of quality attributes that measure the relationships among usefulness, ease of use, and system use (Davis, 1989). Accordingly, there is some research on validated instruments for gauging the technology acceptance of VLE. Šumak, Polancic & Hericko (2010) conducted an online survey with undergraduate students (n = 235) to understand their perception of the use of Moodle; their instrument was based on the unified theory of acceptance and use of technology (UTAUT) (Venkatesh, Morris, Davis & Davis, 2003). Sánchez & Hueros (2010) surveyed 226 students using the technology acceptance model (TAM) (Davis, 1989) in order to better understand the motivational factors behind students' level of satisfaction with Moodle. Lin, Persada & Nadlifatin (2014) examined students' acceptance of the Blackboard Learning System, employing TAM as the analysis approach; they administered an online questionnaire to a total of 302 respondents. It is evident that models and theories such as the widely used TAM and UTAUT have been proposed and applied, but they do not address the technology acceptance of factors such as communication, design, usability, general aspects, and reliability in a mixed fashion.

On the other hand, there is a concern about the statistical analysis used to validate technology acceptance instruments such as questionnaires. It is possible to use Cronbach's alpha, which, according to Tavakol & Dennick (2011), measures the reliability of an instrument but not its validity, although the two concepts are closely associated. Furthermore, Malhotra (1997) explains that the value of this coefficient tends to increase when more items are added to the instrument, so the coefficient can be artificially inflated. Cronbach's alpha shows internal consistency but does not indicate the degree of correlation of the items that make up a construct (Tavakol & Dennick, 2011).
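For reference, Cronbach's alpha for a k-item scale is alpha = [k/(k - 1)] * (1 - sum of item variances / variance of the total score). A minimal computational sketch (the data frame of responses is hypothetical; the item codes follow Table 1):

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 5-point Likert responses for the three Communication (CM) items
rng = np.random.default_rng(1)
responses = pd.DataFrame(rng.integers(1, 6, size=(100, 3)),
                         columns=["MFCTS", "MFCBS", "UMCL"])
print(round(cronbach_alpha(responses), 3))
```

Note that alpha rises mechanically as k grows, which is exactly the inflation Malhotra (1997) warns about.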

In this context, it is important to point out that some studies have not been particularly concerned with the operationalization of the acceptance of VLE constructs, since they do not fully report the psychometric properties (internal consistency, statistical exploration, or confirmation of the instruments) of the instrument measuring the acceptance of VLE (DeVellis, 2003). Since the psychometric properties are missing, this raises questions about the validity of these studies' outcomes. Considering that a robust operationalization is as important as a robust theorization of the construct under study (DeVellis, 2016), this paper thoroughly portrays the development and validation of an instrument measuring the acceptance of a VLE based on Moodle.

The aim of this paper is to present the validation of an instrument to assess the acceptance of the VLE Moodle. The validated instrument was created by Ruiz & Romero (2008) and consists of five factors: communication, design, usability, general aspects, and reliability, with 15 items (Table 1). It is hypothesized that the technology acceptance of a VLE based on Moodle can be measured validly and reliably and, consequently, that the instrument will provide dependable information about the technology acceptance of a VLE system. Therefore, this study evaluates the validity of the instrument through a confirmatory factor analysis (CFA) of the responses of a sample of students from the University of Colima in Mexico to the instrument.

Table 1 Latent factors and items used for measuring the acceptance of VLE with Moodle. 

Latent Factors Observed Variable Statement
Communication (CM) MFCTS I believe Moodle facilitates communication among teachers and students.
MFCBS I believe Moodle facilitates communication among students.
UMCL I believe the use of tools such as Moodle enhances communication and learning.
Design (DE) DCA The course design is appropriate.
UCA Color use is acceptable.
OIA The information organization is acceptable.
Usability (US) NTDMS Overall, I consider that navigation through the different sections of a course in Moodle is simple.
EFICM I believe it is easy to find information within a course in Moodle.
ITCUHL The Moodle interface is comfortable to use and easy to learn.
General aspects (GA) UMSS Overall, I think the use of Moodle in the courses has been satisfactory.
IWUMFT I would like to continue using Moodle at the School of Telematics.
IWUMAS I would like to use Moodle in all courses.
Reliability (RE) CWPLQ The course web pages are fast to load.
TPUCAQ I have not had any technical problems uploading files to the course or answering questionnaires.
VSMCA The Moodle server has always been available.

Source: Author’s own elaboration.

Data and Methods

Instrument and data collection

This research was conducted at the School of Telematics of the University of Colima, Mexico. At the time of the data collection, the student population of the School of Telematics was 469 students. The sample used in this study consisted of one hundred students selected randomly from that population. A sample of 100 subjects may seem small, but earlier studies in the field of technology adoption have shown that good results can still be obtained with similar or smaller samples (Caine, 2016; van Raaij & Schepers, 2008).

The questionnaire consisted of fifteen indicators, grouped into five latent factors, measured on a 5-point Likert scale (1 = strongly disagree, 2 = disagree, 3 = neither agree nor disagree, 4 = agree, 5 = strongly agree). Therefore, the analysis was done through a categorical ordinal approach. Table 1 shows the dimensions and items of the instrument used for measuring the acceptance of VLE with Moodle.
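As a side note, per-item category proportions and counts of the kind reported later in Table 2 can be tabulated directly from the raw responses; a small sketch (the `responses` data frame, one column per item coded 1-5, is hypothetical):

```python
import pandas as pd

def category_table(responses: pd.DataFrame, categories=(1, 2, 3, 4, 5)) -> pd.DataFrame:
    """Counts and proportions of each Likert category per item (cf. Table 2)."""
    counts = responses.apply(
        lambda col: col.value_counts().reindex(categories, fill_value=0))
    proportions = counts / len(responses)
    # rows = items; two column groups = proportion and count of each category
    return pd.concat({"proportion": proportions.T, "count": counts.T}, axis=1)
```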

According to the literature reviewed, a hypothesized relationship between the 15 indicators and the five underlying factors is proposed and tested using CFA, as shown in Figure 1. The first loading of each factor was fixed at 1 for identification. Correlated factors are admissible in this model; since there are five factors, 10 correlations or covariances could be established. In this case, the number of observations was greater than the number of parameters, so the model is considered over-identified. The fifteen indicators yield 15(15 + 1)/2 = 120 observations, while there were 85 parameters to be estimated: 10 loadings, five factor variances, 60 thresholds, and 10 covariances among factors.
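The counting can be restated as a short arithmetic check (mirroring the totals in the paragraph above):

```python
p, n_factors = 15, 5
observations = p * (p + 1) // 2                   # 15 * 16 / 2 = 120
loadings = p - n_factors                          # first loading per factor fixed at 1 -> 10 free
factor_variances = n_factors                      # 5
thresholds = p * 4                                # 4 thresholds per 5-category item -> 60
factor_covariances = n_factors * (n_factors - 1) // 2   # 10
parameters = loadings + factor_variances + thresholds + factor_covariances  # 85
print(observations, parameters, observations > parameters)  # 120 85 True -> over-identified
```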

Source: Author’s own elaboration.

Figure 1 A path diagram of CFA representing the factors structure of the acceptance of VLE with Moodle. 

Figure 1 is a schematic drawing that gives a concise overview of the hypothesized model to be fitted. It includes the 15 observed items (represented by square boxes) and the five latent variables (represented by circles), with arrows that illustrate the hypothesized relationships among them. A direct effect of one variable on another is represented by a single-headed arrow, while (unexplained) correlations between latent variables are represented by double-headed arrows. It is also important to point out that all the indicators are assumed to be independent of each other, since there are no double-headed arrows between indicators.
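For illustration, the structure in Figure 1 can be written in lavaan-style model syntax, which the Python package semopy also accepts. This is only a sketch: semopy's default maximum-likelihood estimation differs from the WLSMV/probit approach used in this paper, and `responses` is a hypothetical 100 × 15 data frame with the item codes of Table 1.

```python
from semopy import Model

# Five correlated factors, three indicators each; the first loading of every
# factor is fixed at 1 by default for identification.
MODEL_DESC = """
CM =~ MFCTS + MFCBS + UMCL
DE =~ DCA + UCA + OIA
US =~ NTDMS + EFICM + ITCUHL
GA =~ UMSS + IWUMFT + IWUMAS
RE =~ CWPLQ + TPUCAQ + VSMCA
"""

model = Model(MODEL_DESC)
# model.fit(responses)     # responses: hypothetical DataFrame of item scores
# print(model.inspect())   # loadings, factor covariances, and p-values
```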

Construct validation and analysis

Validity refers to the extent to which an instrument measures what it is intended to measure (Kimberlin & Winterstein, 2008). Construct validation, in turn, pertains to the parallel process of measurement and theory validation (Strauss & Smith, 2009); according to Heilman & Brusa (2006), it establishes that a measure appropriately operationalizes its underlying construct. In this sense, exploratory factor analysis (EFA) and CFA are frequently used methods for construct validation (Boelen, van den Hout & van den Bout, 2008). EFA is commonly used to examine construct validity in cases where the relationships among variables are unknown or ambiguous, while CFA is properly applied when the researcher has some knowledge of the theory, the empirical research, or both: relations between the observed measures and the underlying factors are postulated a priori, and the hypothesized structure is then tested statistically (Byrne, 2012). Both statistical techniques are used to reduce the overall number of observed variables into latent factors based on commonalities within the data (McArdle, 1996). CFA is also regularly used to compare the factor structure of different groups, for example, different types of schools such as private or public. By using CFA, researchers who want to validate their instruments add a level of statistical precision (Atkinson et al., 2011).

For this reason, the statistical technique used to assess the construct validity of the hypothesized model given in Figure 1 was CFA (Kline, 2011). CFA seeks to determine whether the number of factors and the loadings of the measured variables (items) conform to what is expected in the model given in Figure 1, based on a pre-established theory. The factors or latent constructs are assumed to cause the observed scores in the indicators (Kismiantini, Montesinos-López, García-Martínez & Franco-Pérez, 2014). The goodness-of-fit of the proposed model was evaluated with the Chi-Square statistic; a non-significant Chi-Square indicates a good fit. Nevertheless, it is difficult to obtain a good fit when samples are well over 200 (Newsom, 2018), since the Chi-Square test becomes more sensitive as the sample size grows. Hence, it is important to report three additional indices (Hu & Bentler, 1999): 1) the comparative fit index (CFI) evaluates the fit of the model relative to a baseline model (Kismiantini et al., 2014) and, according to Yu & Muthen (2002), the criterion for a good CFI is a value >0.96, up to a maximum of 1; 2) the root mean square error of approximation (RMSEA) shows how far a model is from a perfect fit (Kismiantini et al., 2014), and the criterion for a good model is a value <0.06 (Browne & Cudeck, 1992); and 3) the weighted root mean square residual (WRMR) assesses average weighted residuals, which range from 0 to 1 (Kismiantini et al., 2014); Yu & Muthen (2002) explain that values close to 1 indicate a good fit. In this research, for scaling and statistical identification purposes, the factor loading of one indicator in each factor was set to 1. All CFA were implemented in MPLUS version 6.11.
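As a worked check, the RMSEA point estimate can be recovered from the Chi-Square statistic as sqrt(max(chi2 - df, 0) / (df(N - 1))); applied to the values reported in the Results section (chi2 = 156.32 and 105.18, df = 80, N = 100), this standard formula reproduces the stated 0.098 and 0.056. A sketch applying the cutoffs cited above:

```python
from math import sqrt

def rmsea(chi2: float, df: int, n: int) -> float:
    """RMSEA point estimate from the model chi-square."""
    return sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def good_fit(cfi: float, rmsea_value: float, wrmr: float) -> bool:
    """Cutoffs cited in the text: CFI > 0.96, RMSEA < 0.06, WRMR close to 1."""
    return cfi > 0.96 and rmsea_value < 0.06 and wrmr <= 1.0

print(round(rmsea(156.32, 80, 100), 3))  # 0.098 (initial model)
print(round(rmsea(105.18, 80, 100), 3))  # 0.056 (adjusted model)
```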

Since the item responses are categorical, it is not appropriate to base the CFA on the Pearson sample variance-covariance matrix, because Pearson correlation coefficients computed between two continuous variables are higher than those computed between the same two variables restructured on an ordered categorical scale. For this reason, this paper bases the analysis on the polychoric correlation matrix, using the weighted least squares means and variance adjusted (WLSMV) estimation method with a probit link to obtain appropriate parameter estimates for the ordered categorical variables (whose category proportions and counts are given in Table 2). The polychoric correlation matrix is the correct correlation matrix when the variables are ordinal categorical.
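A minimal sketch of the underlying idea, under the common two-step approach: marginal thresholds are the inverse normal CDF of cumulative category proportions, and the polychoric correlation of an item pair is the rho that maximizes the bivariate-normal likelihood of the observed contingency table (a grid search is used here for clarity; production code would use proper ML optimization):

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def thresholds(x, n_cat=5):
    """Normal-scale thresholds from cumulative category proportions."""
    cum = np.array([(x <= k).mean() for k in range(1, n_cat)])
    return norm.ppf(np.clip(cum, 1e-6, 1 - 1e-6))

def polychoric(x, y, n_cat=5, grid=np.linspace(-0.99, 0.99, 199)):
    """Two-step polychoric correlation of two ordinal variables coded 1..n_cat."""
    BIG = 8.0  # stands in for +/- infinity on the standard normal scale
    tx = np.concatenate([[-BIG], thresholds(x, n_cat), [BIG]])
    ty = np.concatenate([[-BIG], thresholds(y, n_cat), [BIG]])
    counts = np.array([[np.sum((x == i) & (y == j))
                        for j in range(1, n_cat + 1)]
                       for i in range(1, n_cat + 1)])

    def loglik(rho):
        mvn = multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]])
        ll = 0.0
        for i in range(n_cat):
            for j in range(n_cat):
                # probability of the (i, j) cell as a bivariate-normal rectangle
                p = (mvn.cdf([tx[i + 1], ty[j + 1]]) - mvn.cdf([tx[i], ty[j + 1]])
                     - mvn.cdf([tx[i + 1], ty[j]]) + mvn.cdf([tx[i], ty[j]]))
                ll += counts[i, j] * np.log(max(p, 1e-12))
        return ll

    return max(grid, key=loglik)
```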

Table 2 Proportions and counts of categorical variables. 

Proportion of each category Count of each category
Factor Item 1 2 3 4 5 1 2 3 4 5
CM MFCTS 0.04 0.03 0.13 0.5 0.3 4 3 13 50 30
MFCBS 0.07 0.14 0.37 0.32 0.1 7 14 37 32 10
UMCL 0.01 0.03 0.19 0.51 0.26 1 3 19 51 26
DE DCA 0.02 0.06 0.18 0.5 0.24 2 6 18 50 24
UCA 0.04 0.06 0.19 0.41 0.3 4 6 19 41 30
OIA 0.03 0.09 0.11 0.47 0.3 3 9 11 47 30
US NTDMS 0.03 0.06 0.14 0.38 0.39 3 6 14 38 39
EFICM 0.02 0.03 0.2 0.4 0.35 2 3 20 40 35
ITCUHL 0.03 0.04 0.14 0.43 0.36 3 4 14 43 36
GA UMSS 0.03 0.03 0.13 0.51 0.3 3 3 13 51 30
IWUMFT 0.01 0.05 0.2 0.26 0.48 1 5 20 26 48
IWUMAS 0.05 0.08 0.26 0.26 0.35 5 8 26 26 35
RE CWPLQ 0.04 0.1 0.17 0.47 0.22 4 10 17 47 22
TPUCAQ 0.09 0.24 0.13 0.24 0.3 9 24 13 24 30
VSMCA 0.05 0.21 0.28 0.2 0.26 5 21 28 20 26

Source: Author’s own elaboration.

Results and Discussion

Table 2 shows the proportions and counts for each item. As can be seen, the item distribution is highly skewed toward categories 4 (agree) and 5 (strongly agree), which indicates relatively high levels of satisfaction with the use of Moodle as a VLE. The exception is the items of the reliability (RE) factor, which show lower proportions in categories 4 and 5.

Confirmatory Factor Analysis

In this phase, the hypothesized model given in Figure 1 was fitted with a CFA. Since the latent factors of the technology acceptance of VLE are continuous variables and the indicators are categorical variables, this CFA is also known as an item response theory model (Kismiantini et al., 2014). The data were fitted with a two-parameter logistic model for polytomous responses (five categories), using WLSMV estimation with a probit link. The overall fit of the model was reasonable: χ²(80) = 156.32, p < 0.0001; the RMSEA was 0.098 > 0.08, with a 90% confidence interval between 0.075 and 0.120; the CFI was 0.979 > 0.96; and the WRMR was 0.732, close to 1. However, since the RMSEA criterion was not satisfied (>0.08), the model given in Figure 2 was proposed. It differs slightly from the model shown in Figure 1 in that the new hypothesized model assumes dependence among some indicators and fixes some threshold parameters that were not significant.

Source: Author’s own elaboration.

Figure 2 Adjusted path diagram of CFA representing the factors structure of the technology acceptance of VLE with Moodle. 

Again using CFA (under an item response theory model with a two-parameter logistic form for polytomous responses, probit link, and WLSMV estimation), the hypothesized model given in Figure 2 was fitted. The overall fit of the model improved and was deemed satisfactory: χ²(80) = 105.180, p < 0.0001; the RMSEA was 0.056 < 0.08, with a 90% confidence interval between 0.018 and 0.084; the CFI was 0.993 > 0.96; and the WRMR was 0.56, close to 1. Under this model, the number of estimated parameters was again 85 (10 loadings, five factor variances, 56 thresholds, 10 covariances between factors, and four covariances between the following indicator pairs: MFCTS with MFCBS; EFICM with ITCUHL; IWUMFT with IWUMAS; TPUCAQ with VSMCA).
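In the lavaan-style syntax of the earlier sketch, the four freed residual covariances of the adjusted model would be expressed with the `~~` operator appended to the model description (a sketch; `MODEL_DESC` is the hypothetical description defined above):

```python
ADJUSTED_DESC = MODEL_DESC + """
MFCTS ~~ MFCBS
EFICM ~~ ITCUHL
IWUMFT ~~ IWUMAS
TPUCAQ ~~ VSMCA
"""
```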

Tables 3 and 4 show that all the loadings, variances, covariances, and thresholds are statistically significant, with the exception of Var(US) and Var(RE), whose p-values are 0.127 and 0.205, respectively. In addition, it is relevant to point out that all the correlations between the latent factors are larger than 0.48, while the correlations among the indicators with dependence are larger than 0.40.

Table 3 Estimates for the CFA model of the technology acceptance of VLE with Moodle (correlations and covariances). 

  Standardized   Unstandardized  
  Item Estimate Estimate S.E. p-value
CM MFCTS 0.876 1 0 -
MFCBS 0.568 0.38 0.089 <0.001
UMCL 0.847 0.877 0.173 <0.001
DE DCA 0.877 1 0 -
UCA 0.856 0.909 0.152 <0.001
OIA 0.856 0.91 0.155 <0.001
US NTDMS 0.968 1 0 -
EFICM 0.893 0.512 0.197 0.009
ITCUHL 0.867 0.447 0.157 0.004
GA UMSS 0.907 1 0 -
IWUMFT 0.844 0.734 0.145 <0.001
IWUMAS 0.692 0.447 0.095 <0.001
RE CWPLQ 0.912 1 0 -
TPUCAQ 0.596 0.335 0.166 0.044
VSMCA 0.797 0.596 0.282 0.035
Covariances Corr Cov S.E. p-value
DE with CM 0.948 3.132 0.782 <0.001
US with CM 0.874 6.159 2.399 0.01
US with DE 0.906 6.415 2.45 0.009
GA with CM 0.995 3.873 1.115 0.001
GA with DE 0.934 3.653 0.988 <0.001
GA with US 0.897 7.484 2.99 0.012
RE with CM 0.612 2.461 1.069 0.021
RE with DE 0.677 2.737 1.134 0.016
RE with US 0.489 4.212 2.248 0.061
RE with GA 0.651 3.101 1.41 0.028
MFCTS with MFCBS 0.332 0.332 0.135 0.014
EFICM with ITCUHL 0.405 0.405 0.11 <0.001
IWUMFT with IWUMAS 0.585 0.585 0.065 <0.001
TPUCAQ with VSMCA 0.592 0.592 0.101 <0.001
Var(CM) 1 3.286 1.21 0.007
Var(DE) 1 3.321 0.941 <0.001
Var(US) 1 15.108 9.893 0.127
Var(GA) 1 4.61 1.716 0.007
  Var(RE) 1 4.917 3.881 0.205

Source: Author’s own elaboration.

Table 4 Estimates for the CFA model of the technology acceptance of VLE with Moodle (thresholds). 

Unstandardized thresholds estimates
Thresholds Estimate S.E. p-value Thresholds Estimate S.E. p-value
MFCTS$1 -3.625 0.585 <0.001 ITCUHL$3 -1.617 0.32 <0.001
MFCTS$2 -3.055 0.488 <0.001 ITCUHL$4 0.719 0.249 0.004
MFCTS$3 -1.742 0.37 <0.001 UMSS$1 -4.455 0.678 <0.001
MFCTS$4 1.086 0.278 <0.001 UMSS$2 -3.683 0.44 <0.001
MFCBS$1 -1.793 0.238 <0.001 UMSS$3 -2.079 0.449 <0.001
MFCBS$2 -0.98 0.172 <0.001 UMSS$4 1.242 0.323 <0.001
MFCBS$3 0 0 - IWUMFT$1 -4.343 0.706 <0.001
MFCBS$4 1.557 0.212 <0.001 IWUMFT$2 -2.903 0.386 <0.001
UMCL$1 -4.37 0.776 <0.001 IWUMFT$3 -1.201 0.278 <0.001
UMCL$2 -3.289 0.492 <0.001 IWUMFT$4 0 0 -
UMCL$3 -1.388 0.305 <0.001 IWUMAS$1 -2.28 0.272 <0.001
UMCL$4 1.209 0.262 <0.001 IWUMAS$2 -1.561 0.219 <0.001
DCA$1 -4.269 0.649 <0.001 IWUMAS$3 -0.387 0.179 0.031
DCA$2 -2.921 0.415 <0.001 IWUMAS$4 0.534 0.172 0.002
DCA$3 -1.337 0.315 <0.001 CWPLQ$1 -4.259 1.263 0.001
DCA$4 1.468 0.274 <0.001 CWPLQ$2 -2.628 0.884 0.003
UCA$1 -3.387 0.467 <0.001 CWPLQ$3 -1.206 0.492 0.014
UCA$2 -2.479 0.36 <0.001 CWPLQ$4 1.878 0.635 0.003
UCA$3 -1.071 0.277 <0.001 TPUCAQ$1 -1.67 0.243 <0.001
UCA$4 1.014 0.246 <0.001 TPUCAQ$2 -0.548 0.167 0.001
OIA$1 -3.643 0.447 <0.001 TPUCAQ$3 0 0 -
OIA$2 -2.276 0.323 <0.001 TPUCAQ$4 0.653 0.16 <0.001
OIA$3 -1.431 0.297 <0.001 VSMCA$1 -2.725 0.424 <0.001
OIA$4 1.016 0.249 <0.001 VSMCA$2 -1.066 0.242 <0.001
NTDMS$1 -7.548 2.281 0.001 VSMCA$3 0 0 -
NTDMS$2 -5.381 1.643 0.001 VSMCA$4 1.066 0.211 <0.001
NTDMS$3 -2.965 1.082 0.006 HATOM$1 -0.994 0.151 <0.001
NTDMS$4 1.121 0.499 0.025 HATOM$2 -0.253 0.127 0.046
EFICM$1 -4.571 0.806 <0.001 HATOM$3 0.842 0.143 <0.001
EFICM$2 -3.661 0.607 <0.001 HATOM$4 1.476 0.19 <0.001
EFICM$3 -1.501 0.336 <0.001 TPM$1 -0.385 0.129 0.003
EFICM$4 0.858 0.276 0.002 TPM$2 0.253 0.127 0.046
ITCUHL$1 -3.772 0.575 <0.001 TPM$3 1.476 0.19 <0.001
ITCUHL$2 -2.96 0.439 <0.001 TPM$4 1.645 0.211 <0.001

Source: Author’s own elaboration.

Interpretation of category response curves and item information curves

Figures 3, 4, and 5 show the category response curves (CRC) for the three items belonging to factor CM. Ideally, the CRC should cover the full range of values of the latent trait. Low peaks indicate a low probability of endorsement of a specific category and relatively poor discrimination parameters. For example, the probability of endorsement of category 2 (disagree) in item MFCTS is around 0.22 for students with a latent value around -3.5, whereas category 4 (agree) works better because it covers almost the same range with a higher probability of endorsement, around 0.8 for students with a latent value around -0.5. By visually inspecting such plots for all items, one can decide whether to merge response categories or to remove items with similar or undesirable properties; in other words, the CRC are often used to evaluate the need for item and scale reduction (Li & Baser, 2012). In Figures 3, 4, and 5 there is a visible overlap between categories 2 and 3; however, this overlap is not severe. For this reason, it is reasonable to keep the 5-point Likert scale used in this instrument. The CRC of the other items are not shown but behave similarly.

Source: Author’s own elaboration.

Figure 3 Category response curves for item MFCTS in factor CM. 

Source: Author’s own elaboration.

Figure 4 Category response curves for item MFCBS in factor CM. 

Source: Author’s own elaboration.

Figure 5 Category response curves for item UMCL in factor CM. 
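Under a probit graded-response parameterization, curves like those in Figures 3-5 take the form P(Y = k | theta) = Phi(a*theta - tau_{k-1}) - Phi(a*theta - tau_k). A sketch computing them, using the unstandardized loading and thresholds reported for item MFCTS in Tables 3 and 4 as plausible values (the exact MPLUS parameterization may differ, so this is illustrative only):

```python
import numpy as np
from scipy.stats import norm

def crc(theta, a, taus):
    """Category response curves of a probit graded-response item:
    P(Y = k | theta) = Phi(a*theta - tau_{k-1}) - Phi(a*theta - tau_k)."""
    t = np.concatenate([[-np.inf], taus, [np.inf]])
    return np.array([norm.cdf(a * theta - t[k]) - norm.cdf(a * theta - t[k + 1])
                     for k in range(len(taus) + 1)])

theta = np.linspace(-4, 4, 201)
# Loading (fixed at 1) and thresholds reported for MFCTS in Tables 3-4
curves = crc(theta, a=1.0, taus=np.array([-3.625, -3.055, -1.742, 1.086]))
# curves[k] traces the probability of endorsing category k+1 across theta
```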

In the Confirmatory Factor Analysis section, it was stated that the fit of the model was appropriate. However, to assess the level of certainty in the estimates of the continuous latent factors (five latent dimensions are assumed: CM, DE, US, GA, RE), the Fisher information of each item was inspected over a range of latent values. Higher information means lower uncertainty in the estimate of the latent values, and vice versa (Li & Baser, 2012). Figure 6 plots the item information curves across a range of latent values of factor CM for items MFCTS, MFCBS, and UMCL. Item MFCTS is clearly the most informative; UMCL comes next, and item MFCBS shows a low information profile.

Source: Author’s own elaboration.

Figure 6 Item information curves for items MFCTS, MFCBS, and UMCL that correspond to factor CM. 
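The item information curve follows from the category probabilities as I(theta) = sum_k P_k'(theta)^2 / P_k(theta). A sketch, reusing the `crc` function from the previous block and a simple numerical derivative:

```python
import numpy as np

def item_information(theta, a, taus, eps=1e-4):
    """Fisher information I(theta) = sum_k P_k'(theta)^2 / P_k(theta),
    with P_k given by the graded-response curves in crc()."""
    p = crc(theta, a, taus)
    dp = (crc(theta + eps, a, taus) - crc(theta - eps, a, taus)) / (2 * eps)
    return (dp ** 2 / np.clip(p, 1e-12, None)).sum(axis=0)
```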

Figure 7 plots the item information curves across a range of latent values of factor DE for items DCA, UCA, and OIA. In this factor, the three items show a similar level of information.

Source: Author’s own elaboration.

Figure 7 Item information curves for items DCA, UCA, and OIA that correspond to factor DE. 

In Figure 8, item NTDMS appears to be the most informative, while items EFICM and ITCUHL show low levels of information. This implies that these two items do not contribute much information about the latent values of the usability factor.

Source: Author’s own elaboration.

Figure 8 Item information curves for items NTDMS, EFICM, and ITCUHL that correspond to factor US. 

Conclusions

To use a VLE with Moodle successfully, it is important to assess its technology acceptance, because such success depends largely on its acceptance by professors, researchers, and educational leaders. For this reason, this paper presented a set of feasible measurement models examining five factors of the technology acceptance of a VLE with Moodle: communication, design, usability, general aspects, and reliability. An ordinal categorical instrument of 15 items and five factors was confirmed to measure the technology acceptance of a VLE with Moodle at a Mexican university.

The use of sound statistical techniques is very important when validating these kinds of instruments and interpreting their outcomes. This matters especially in the education field, where there is controversy about different assessment topics, particularly those related to learning technology.

Finally, it is important to point out that more research is required before these results can be generalized to other technology acceptance instruments used in educational settings and to other samples from larger student populations. For this reason, the proposed instrument should be used with caution. The English version of the instrument requires further validation, since the current findings are based on the Spanish version and some words may change meaning in translation. In addition, it would be very interesting to verify measurement invariance across areas of knowledge within the whole University of Colima.

References

Arteaga-Sánchez, R., Cortijo, V., & Javed, U. (2014). Students' perceptions of Facebook for academic purposes. Computers & Education, 70, 138-149. doi: http://doi.org/10.1016/j.compedu.2013.08.012

Atkinson, T. M., Rosenfeld, B. D., Sit, L., Mendoza, T. R., Fruscione, M., Lavene, D., Shaw, M., Li, Y., Hay, J., Cleeland, C. S., Scher, H. I., Breitbart, W. S., & Basch, E. (2011). Using confirmatory factor analysis to evaluate construct validity of the Brief Pain Inventory (BPI). Journal of Pain and Symptom Management, 41(3), 558-565. doi: http://doi.org/10.1016/j.jpainsymman.2010.05.008

Baturay, M. H., Gökçearslan, Ş., & Ke, F. (2017). The relationship among pre-service teachers' computer competence, attitude towards computer-assisted education, and intention of technology acceptance. International Journal of Technology Enhanced Learning, 9(1). doi: http://doi.org/10.1504/IJTEL.2017.084084

Boelen, P. A., van den Hout, M. A., & van den Bout, J. (2008). The factor structure of Posttraumatic Stress Disorder symptoms among bereaved individuals: A confirmatory factor analysis study. Journal of Anxiety Disorders, 22(8), 1377-1383. doi: http://doi.org/10.1016/j.janxdis.2008.01.018

Browne, M. W., & Cudeck, R. (1992). Alternative ways of assessing model fit. Sociological Methods & Research, 21(2), 230-258. doi: http://doi.org/10.1177/0049124192021002005

Byrne, B. M. (2012). A primer of LISREL: Basic applications and programming for confirmatory factor analytic models. Springer Science & Business Media.

Caine, K. (2016). Local standards for sample size at CHI. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems - CHI '16 (pp. 981-992). New York, USA: ACM Press. doi: http://doi.org/10.1145/2858036.2858498

Codreanu, E., Michel, C., Bobillier-Chaumon, M.-E., & Vigneau, O. (2017). Assessing the adoption of virtual learning environments in primary schools: An activity-oriented study of teacher's acceptance. In International Conference on Computer Supported Education (pp. 513-531). doi: http://doi.org/10.1007/978-3-319-63184-4_27

Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319-340. doi: http://doi.org/10.2307/249008

DeVellis, R. F. (2003). Factor analysis. Scale development, theory and applications. Applied Social Research Methods Series, 26, 10-137.

DeVellis, R. F. (2016). Scale development: Theory and applications (Vol. 26). Sage Publications.

Evans, N. S. (2013). A cross-sectional descriptive study of graduate students' perceptions of learning effectiveness in face-to-face and online courses (Doctoral thesis). Wilmington University, Delaware, USA. Retrieved March 13, 2019 from https://search.proquest.com/docview/1458440970

Galey, K. (2014). Use of virtual learning environments at Paro College of Education (Bhutan). Journal of the International Society for Teacher Education, 18(1), 80. Retrieved March 13, 2019 from http://isfte.hkbu.edu.hk/

Garrison, D. R. (2011). E-learning in the 21st century: A framework for research and practice. New York: Routledge. doi: https://doi.org/10.4324/9780203838761

Garrison, D. R., & Kanuka, H. (2004). Blended learning: Uncovering its transformative potential in higher education. The Internet and Higher Education, 7(2), 95-105. doi: http://doi.org/10.1016/j.iheduc.2004.02.001

Govindasamy, T. (2001). Successful implementation of e-Learning: Pedagogical considerations. The Internet and Higher Education, 4(3-4), 287-299. doi: http://doi.org/10.1016/S1096-7516(01)00071-9

Heilman, G. E., & Brusa, J. (2006). Validating the end-user computing satisfaction survey instrument in Mexico. International Journal of Technology and Human Interaction (IJTHI), 2(4), 84-96. doi: http://doi.org/10.4018/jthi.2006100105

Hu, L., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal, 6(1), 1-55. doi: http://doi.org/10.1080/10705519909540118

Kimberlin, C. L., & Winterstein, A. G. (2008). Validity and reliability of measurement instruments used in research. American Journal of Health-System Pharmacy, 65(23), 2276-2284. doi: http://doi.org/10.2146/ajhp070364

Kismiantini, Montesinos-López, O. A., García-Martínez, J. J., & Franco-Pérez, E. (2014). Analyzing the factors of job satisfaction in a Mexican hospital with binary indicators by confirmatory factor analysis. International Journal of Business and Management, 9(8), 61-83. doi: http://doi.org/10.5539/ijbm.v9n8p61

Kline, R. B. (2011). Principles and practice of structural equation modeling. New York: The Guilford Press.

Li, Y., & Baser, R. (2012). Using R and WinBUGS to fit a generalized partial credit model for developing and evaluating patient-reported outcomes assessments. Statistics in Medicine, 31(18), 2010-2026. doi: http://doi.org/10.1002/sim.4475

Liaw, S. S., Huang, H. M., & Chen, G. D. (2007). Surveying instructor and learner attitudes toward e-learning. Computers & Education, 49(4), 1066-1080. doi: http://doi.org/10.1016/j.compedu.2006.01.001

Lin, S. C., Persada, S. F., & Nadlifatin, R. (2014). A study of student behavior in accepting the Blackboard Learning System: A Technology Acceptance Model (TAM) approach. In Proceedings of the 2014 IEEE 18th International Conference on Computer Supported Cooperative Work in Design (CSCWD) (pp. 457-462). IEEE. doi: http://doi.org/10.1109/CSCWD.2014.6846888

Malhotra, N. K. (1997). Investigación de mercados. Un enfoque práctico. México: Prentice Hall.

Mayer, R. E. (2009). Multimedia learning. New York: Cambridge University Press.

McArdle, J. J. (1996). Current directions in structural factor analysis. Current Directions in Psychological Science, 5(1), 11-18. doi: http://doi.org/10.1111/1467-8721.ep10772681

McComas, W. F. (2014). Virtual learning environment. In W. F. McComas (Ed.), The Language of Science Education (p. 110). Rotterdam: SensePublishers. doi: http://doi.org/10.1007/978-94-6209-497-0_99

Moore, J. L., Dickson-Deane, C., & Galyen, K. (2011). e-Learning, online learning, and distance learning environments: Are they the same? The Internet and Higher Education, 14(2), 129-135. doi: http://doi.org/10.1016/j.iheduc.2010.10.001

Moore, M. G., & Kearsley, G. (2011). Distance education: A systems view of online learning. California: Wadsworth.

Moskal, P., Dziuban, C., & Hartman, J. (2013). Blended learning: A dangerous idea? The Internet and Higher Education, 18, 15-23. doi: http://doi.org/10.1016/j.iheduc.2012.12.001

Mueller, D., & Strohmeier, S. (2011). Design characteristics of virtual learning environments: State of research. Computers & Education, 57(4), 2505-2516. doi: http://doi.org/10.1016/j.compedu.2011.06.017

Newsom, J. T. (2018). Some clarifications and recommendations on fit indices (Psy 523/623). Retrieved March 13, 2019 from http://web.pdx.edu/~newsomj/semclass/ho_fit.pdf

Padilla-Meléndez, A., Del Águila-Obra, A. R., & Garrido-Moreno, A. (2014). Empleo de moodle en los procesos de enseñanza-aprendizaje de dirección de empresas: Nuevo perfil del estudiante en el EEES. Educación XX1, 18(1). doi: http://doi.org/10.5944/educxx1.18.1.12314

Paechter, M., Maier, B., & Macher, D. (2010). Students' expectations of, and experiences in e-learning: Their relation to learning achievements and course satisfaction. Computers & Education, 54(1), 222-229. doi: http://doi.org/10.1016/j.compedu.2009.08.005

Paivio, A. (2014). Mind and its evolution: A dual coding theoretical approach. Psychology Press.

Pituch, K. A., & Lee, Y. (2006). The influence of system characteristics on e-learning use. Computers & Education, 47(2), 222-244. doi: http://doi.org/10.1016/j.compedu.2004.10.007

Porter, W. W., Graham, C. R., Spring, K. A., & Welch, K. R. (2014). Blended learning in higher education: Institutional adoption and implementation. Computers & Education, 75, 185-195. doi: http://doi.org/10.1016/j.compedu.2014.02.011

Rabe-Hemp, C., Woollen, S., & Humiston, G. S. (2009). A comparative analysis of student engagement, learning, and satisfaction in lecture hall and online learning settings. Quarterly Review of Distance Education, 10(2), 207.

Richardson, J., & Swan, K. (2003). Examining social presence in online courses in relation to students' perceived learning and satisfaction. Journal of Asynchronous Learning Networks, 7(1), 68-88. Retrieved from https://www.ideals.illinois.edu/bitstream/handle/2142/18713/RichardsonSwan JALN7(1).pdf

Ruiz, I., & Romero, S. (2008). Moodle: una herramienta eficaz aplicada a la enseñanza de las prácticas, en el área de electrónica y arquitectura de los computadores. In VIII Congreso de Tecnologías Aplicadas a la Enseñanza de la Electrónica (p. 165). Spain: Universidad de Zaragoza.

Salmerón-Pérez, H., Rodríguez-Fernández, S., & Gutiérrez-Braojos, C. (2010). Methodologies to improve communication in virtual learning environments. Comunicar, 17(34), 163-171. doi: http://doi.org/10.3916/C34-2010-03-16

Sánchez, R. A., & Hueros, A. D. (2010). Motivational factors that influence the acceptance of Moodle using TAM. Computers in Human Behavior, 26(6), 1632-1640. doi: http://doi.org/10.1016/j.chb.2010.06.011

Strauss, M. E., & Smith, G. T. (2009). Construct validity: Advances in theory and methodology. Annual Review of Clinical Psychology, 5(1), 1-25. doi: http://doi.org/10.1146/annurev.clinpsy.032408.153639

Šumak, B., Polancic, G., & Hericko, M. (2010). An empirical study of virtual learning environment adoption using UTAUT. In 2010 Second International Conference on Mobile, Hybrid, and On-Line Learning (pp. 17-22). IEEE. doi: http://doi.org/10.1109/eLmL.2010.11

Tavakol, M., & Dennick, R. (2011). Making sense of Cronbach's alpha. International Journal of Medical Education, 2, 53-55. doi: http://doi.org/10.5116/ijme.4dfb.8dfd

Tuckman, B. W. (2007). The effect of motivational scaffolding on procrastinators' distance learning outcomes. Computers & Education, 49(2), 414-422. doi: http://doi.org/10.1016/j.compedu.2005.10.002

van Raaij, E. M., & Schepers, J. J. L. (2008). The acceptance and use of a virtual learning environment in China. Computers & Education, 50(3), 838-852. doi: http://doi.org/10.1016/j.compedu.2006.09.001

Venkatesh, V., Morris, M., Davis, G., & Davis, F. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425-478. doi: http://doi.org/10.2307/30036540

Yu, C. Y., & Muthen, B. (2002). Evaluation of model fit indices for latent variable models with categorical and continuous outcomes (Technical report). Los Angeles, CA: University of California at Los Angeles, Graduate School of Education & Information Studies.

Zubieta-García, J., & Rama-Vilate, C. (2015). La Educación a Distancia en México: Una nueva realidad universitaria. CDMX: UNAM. Retrieved March 13, 2019 from https://web.cuaed.unam.mx/wp-content/uploads/2015/09/PDF/educacionDistancia.pdf

How to cite: Santana-Mancilla, P. C., Montesinos-López, O. A., García-Ruiz, M. A., Contreras-Castillo, J. J., & Gaytan-Lugo, L. S. (2019). Validation of an instrument for measuring the technology acceptance of a virtual learning environment. Acta Universitaria, 29, e1796. doi: https://doi.org/10.15174/au.2019.1796

Received: February 21, 2017; Accepted: September 24, 2018; Published: April 08, 2019

*Corresponding author

This is an open-access article distributed under the terms of the Creative Commons Attribution License.