Measuring scientific knowledge: can we use questions that are denied by the religious?

In reply to www.ljzigerell.com/?p=534 and his working paper here: www.ljzigerell.com/?p=2376

We are discussing his working paper over email, and I had some reservations about his factor analysis. I decided to run the analyses I wanted myself, but it turned into a longer project better suited to a short paper than to a private email.

I fetched the data from his source. The raw data file did not have variable names, so it was unwieldy to work with. The SPSS file, however, did have variable names, so I opened that and exported a CSV with the desired variables (see supplementary material). I then recoded the variables so that true answers are coded as 1, false answers as 0, and missing responses as NA. This took some time. I followed his coding procedure in most cases (see his Stata file and my R code below).

How many factors to extract

It seems that he relies on some kind of method for determining the number of factors to extract, presumably the eigenvalue > 1 rule. I always compare three different methods using the nFactors package. Using all 22 variables (note that he did not use all of them at once), all methods agreed on extracting at most 5 factors. Here are the factor solutions for extracting 1 through 5 factors and their intercorrelations:

Factor analyses with 1-5 factors and their correlations

Interpretation

We see that in the 1-factor solution, all variables load in the expected direction, so we can speak of a general scientific knowledge factor. This is the one we want to use for other analyses. We also see that the onfaith item loads negatively. This variable is not a true/false question, and thus it should be excluded from any actual measurement of the general scientific knowledge factor.

Increasing the number of factors to extract simply divides this general factor into correlated parts. E.g., in the 2-factor solution, we see a probability factor that correlates .35 with the remaining semi-general factor. In the 3-factor solution, MR2 is the probability factor, MR3 the factor for knowledge related to religious beliefs, and MR1 the remaining items; the intercorrelations are .29, .25 and .43. This pattern continues up to the 5-factor solution, which still produces 5 correlated factors: MR2 is the probability factor, MR1 an astronomy factor, MR3 the one having to do with religious beliefs, MR5 looks like a medicine/genetics factor, and MR4 is the rest.

Just because scree tests etc. tell you to extract >1 factor does not mean that there is no general factor. This is an old fallacy from the study of cognitive ability; see the discussion in Jensen (1998, chapter 3). It is sometimes still made, e.g. by Hampshire et al. (2012). Generally, as one increases the number of variables, the suggested number of factors to extract goes up. This does not mean that there is no general factor, just that with more variables one can see a more fine-grained structure in the data than one can with only, e.g., 5 variables.
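The point can be illustrated with a small simulation in base R (hypothetical data, not the survey items): items generated from one general factor plus group factors produce several eigenvalues above 1, yet every item still correlates positively with the total score, i.e. the general factor has not gone anywhere.

```r
# Simulated battery: 12 items, one general factor plus 3 group factors
# (4 items each). All numbers here are illustrative assumptions.
set.seed(1)
n <- 1000
g <- rnorm(n)                  # general factor
grp <- replicate(3, rnorm(n))  # three group factors
items <- sapply(1:12, function(i) {
  0.5 * g + 0.5 * grp[, (i - 1) %/% 4 + 1] + rnorm(n, sd = 0.7)
})
ev <- eigen(cor(items))$values
sum(ev > 1)   # eigenvalue > 1 rule suggests extracting several factors
ev[1]         # yet the first eigenvalue dwarfs the rest (general factor)
# positive manifold: every item correlates positively with the sum score
all(cor(items, rowSums(items)) > 0)
```

So a scree or eigenvalue criterion recommending multiple factors is entirely compatible with a strong general factor underlying all items.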

Should we use them or not?

Before discussing whether one should use these items on theoretical grounds, one can check whether including them makes much of a difference empirically. One can do this by extracting the general factor with and without the items in question. I did this, also excluding the onfaith item. Then I correlated the scores from these two analyses: r = .992. In other words, it hardly matters whether one includes the religious-tinged items or not. The general factor is measured quite well without them, and they do not substantially change the factor scores. However, since adding more indicator items/variables generally reduces the measurement error of a latent trait/factor, I would include them in my analyses.
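The robustness of factor scores to dropping a few indicators can be sketched with simulated data (a hypothetical 21-item battery; the first principal component stands in for the general factor here, which is a simplification of the factor analysis used above):

```r
# Simulated 21-item battery loading on one general factor (illustrative).
set.seed(1)
n <- 1000
g <- rnorm(n)
items <- sapply(1:21, function(i) 0.6 * g + rnorm(n, sd = 0.8))
pc_all <- prcomp(items, scale. = TRUE)$x[, 1]          # all 21 items
pc_sub <- prcomp(items[, 1:19], scale. = TRUE)$x[, 1]  # drop 2 items
abs(cor(pc_all, pc_sub))  # near 1, analogous to the r = .992 above
```

With ~20 indicators of a common factor, removing two of them leaves the scores nearly unchanged, which is why the inclusion decision matters so little in practice.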

How many factors should we extract and use?

There is also the question of how many factors one should extract. The answer is that it depends on what one wants to do. As Zigerell points out in a review comment on this paper on The Winnower:

For example, for diagnostic purposes, if we know only that students A, B, and C miss 3 items on a test of general science knowledge, then the only remediation is more science; but we can provide more tailored remediation if we have separate components so that we observe that, say, A did poorly only on the religion-tinged items, B did poorly only on the probability items, and C did poorly only on the astronomy items.

For remedial education, it is clearly preferable to extract the highest number of interpretable factors, because this gives the most precise information about where knowledge is lacking for a given person. In a regression analysis where we want to control for scientific knowledge, one should use the general factor.

References

Hampshire, A., Highfield, R. R., Parkin, B. L., & Owen, A. M. (2012). Fractionating human intelligence. Neuron, 76(6), 1225-1237.

Jensen, A. R. (1998). The g factor: The science of mental ability. Westport, CT: Praeger.

Supplementary material

Datafile: science_data

R code

library(plyr) #for mapvalues

data = read.csv("science_data.csv") #load data

#Recode so that 1 = true/correct, 0 = false/incorrect, NA = missing
data$smokheal = mapvalues(data$smokheal, c(9,7,8,2), c(NA,0,0,0))
data$condrift = mapvalues(data$condrift, c(9,7,8,2), c(NA,0,0,0))
data$earthhot = mapvalues(data$earthhot, c(9,7,8,2), c(NA,0,0,0))
data$rmanmade = mapvalues(data$rmanmade, c(9,7,8,1,2), c(NA,0,0,0,1)) #reverse
data$oxyplant = mapvalues(data$oxyplant, c(9,7,8,2), c(NA,0,0,0))
data$lasers = mapvalues(data$lasers, c(9,7,8,2,1), c(NA,0,0,1,0)) #reverse
data$atomsize = mapvalues(data$atomsize, c(9,7,8,2), c(NA,0,0,0))
data$antibiot = mapvalues(data$antibiot, c(9,7,8,2,1), c(NA,0,0,1,0)) #reverse
data$dinosaur = mapvalues(data$dinosaur, c(9,7,8,2,1), c(NA,0,0,1,0)) #reverse
data$light = mapvalues(data$light, c(9,7,8,2,3), c(NA,0,0,0,0))
data$earthsun = mapvalues(data$earthsun, c(9,7,8,2), c(NA,0,0,0))
data$suntime = mapvalues(data$suntime, c(9,7,8,2,3,1,4,99), c(0,0,0,0,1,0,0,NA))
data$dadgene = mapvalues(data$dadgene, c(9,7,8,2), c(NA,0,0,0))
data$getdrug = mapvalues(data$getdrug, c(9,7,8,2,1), c(NA,0,0,1,0)) #reverse
data$whytest = mapvalues(data$whytest, c(1,2,3,4,5,6,7,8,9,99), c(1,0,0,0,0,0,0,0,0,NA))
data$probno4 = mapvalues(data$probno4, c(9,8,2,1), c(NA,0,1,0)) #reverse
data$problast = mapvalues(data$problast, c(9,8,2,1), c(NA,0,1,0)) #reverse
data$probreq = mapvalues(data$probreq, c(9,8,2), c(NA,0,0))
data$probif3 = mapvalues(data$probif3, c(9,8,2,1), c(NA,0,1,0)) #reverse
data$evolved = mapvalues(data$evolved, c(9,7,8,2), c(NA,0,0,0))
data$bigbang = mapvalues(data$bigbang, c(9,7,8,2), c(NA,0,0,0))
data$onfaith = mapvalues(data$onfaith, c(9,1,2,3,4,7,8), c(NA,1,1,0,0,0,0))

#How many factors to extract?
library(nFactors)
nScree(data[complete.cases(data),]) #use complete cases only

#Extract 1-5 factors
library(psych) #for factor analysis
for (num in 1:5) {
  print(paste0("Factor analysis, extracting ", num, " factors using oblimin and MinRes"))
  fa.res = fa(data, num) #extract factors; named fa.res to avoid shadowing psych::fa
  print(fa.res$loadings) #print loadings
  if (num > 1) { #print factor intercorrelations
    phi = round(fa.res$Phi, 2) #round to 2 decimals
    colnames(phi) = rownames(phi) = colnames(fa.res$scores) #set names
    print(phi)
  }
}

#Does it make a difference?
fa.all = fa(data[1:21]) #all items except onfaith
fa.noreligious = fa(data[1:19]) #without onfaith, bigbang, evolved
cor(fa.all$scores, fa.noreligious$scores, use = "pair") #correlate, ignoring missing cases

Reviews

• L.J Zigerell
Confidence in paper
2

Hi. My competing interest is that I am responsible for the
blog post discussed in the letter.

I think that the data support the author's conclusions, that
religious-tinged items can be used to measure science knowledge, but I'd like
to discuss the analysis and in particular the question of whether religious-tinged
items *should* be used to measure science knowledge.

1. There are some differences between the blog post and the
working paper. The blog post discussed whether religion-tinged items such as
evolution and the big bang should be included when measuring science knowledge.
I think that it is an important correction to my post to note that "Just
because scree tests etc. tell you to extract >1 factor does not mean that
there is no general factor", but I'd be interested in thoughts on the idea
that the presence of a general factor does not require that the items be merged into a single scale for all analyses. For
example, for diagnostic purposes, if we know only that students A, B, and C miss
3 items on a test of general science knowledge, then the only remediation is
more science; but we can provide more tailored remediation if we have separate
components so that we observe that, say, A did poorly only on the
religion-tinged items, B did poorly only on the probability items, and C did
poorly only on the astronomy items.

I think that the religious-tinged science knowledge items
have value, similar to the value of items on GMOs, hbd, vaccines, and climate
change, but I think that there is value in analyzing those types of science knowledge
items separately from items about how lasers work.

Part of my concern with including religious-tinged and hot-button political items in a scale of science knowledge is the magnitude of the penalty that is arbitrarily assigned to persons who refuse to accept the scientific consensus on certain issues. However, segregating such items from the general science knowledge scale removes the concern about whether religious-tinged items should represent 10%, 20%, or more of science knowledge.

2. The science knowledge scale for the working paper is a
similar scale that contains some of the items discussed in the blog post, but
the science knowledge scale in the working paper is used as a control variable
and has fewer items, in part because some respondents did not receive each item
(such as the probability items) and, in the case of the big bang and evolution,
some respondents were asked about acceptance of the big bang and evolution,
while other respondents were asked to respond to meta-statements about them,
such as "According to astronomers, the universe began with a huge explosion."
From what I can tell, two items can be included in the science knowledge scale
without a loss of observations or concern about a split ballot effect. One item
is a follow-up item to the earth-around-the-Sun item, in which the follow-up item
measured knowledge about how much time the trip around the Sun takes, but this
follow-up item was asked only of respondents who indicated that the Earth
travels around the Sun; I excluded this follow-up item from the science
knowledge scale in the working paper because of the "double counting"
of the revolution concept. The other test item that I am aware of that can be included in the science
knowledge scale concerns continental drift, but I figured that the added value
in predicting science knowledge for the purposes of a control variable was not
worth adding a religious-tinged item to the science knowledge test.

Generally speaking, I'd agree with the conclusion of the
analysis in the letter, that items such as continental drift load onto a
general science knowledge factor and that this general science knowledge factor
has valid uses. But I also think that there are valid reasons to not place all
of the items into a single scale.

3. Yes, the number of factors was extracted based on
eigenvalues having a value greater than 1.

4. I don't remember the details of the analysis, but note 3
in my post indicates that all 22 items were used in unreported analyses, that
including the items added a fifth and sixth factor in some cases, that the
inference about the disputed items did not change when all 22 items were included,
and that 5 of the 22 items were not included in reported analyses because there
were missing data on those items for and within some years.

• Emil O. W. Kirkegaard

You are right that the question of whether one should include the religious-tinged items is not answered by whether they load on a general factor alone. However, due to the strong intercorrelations*, it does not matter much. To see this, I extracted the general factor without the onfaith variable, and the general factor without the onfaith + bigbang + evolved variables. Then I correlated them: r = .992, so in practice it hardly makes a difference whether one includes them or not (e.g. in a control variable), but it may change the interpretation if the author thinks that these religious-tinged questions do not load on the general factor. I have seen this claim made a few times for some of the politically-tinged items (e.g. climate change), but it was not made by you IIRC.

I will update the post with this.

* These are correlations based on dichotomous items, so they are artificially low when one uses Pearson correlations. One can use item-level factor analysis/polychoric correlations to estimate the correlations the items would have if they were continuous variables.
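The attenuation from dichotomization can be seen in a quick base-R simulation (hypothetical data, not the survey items; the latent correlation of .6 is an arbitrary choice):

```r
# Two continuous variables with a true correlation of about .6,
# then median-split both into dichotomous (true/false-style) variables.
set.seed(1)
n <- 1e5
x <- rnorm(n)
y <- 0.6 * x + rnorm(n, sd = 0.8)  # latent correlation = .6
r_cont <- cor(x, y)                # recovers about .6
r_dich <- cor(x > 0, y > 0)        # Pearson (phi) on the dichotomized data
c(continuous = r_cont, dichotomized = r_dich)
```

The dichotomized (phi) correlation comes out around .41 rather than .60, which is the attenuation that tetrachoric/polychoric methods are designed to undo.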

• L.J Zigerell

Excellent. Thanks for the extra analysis!