# Load packages
library(tabula)
library(magrittr)

Introduction

The matrix seriation problem in archaeology is based on three conditions and two assumptions, which Dunnell (1970) summarizes as follows.

The homogeneity conditions state that all the groups included in a seriation must:

  • Be of comparable duration,
  • Belong to the same cultural tradition,
  • Come from the same local area.

The mathematical assumptions state that the distribution of any historical or temporal class:

  • Is continuous through time,
  • Exhibits the form of a unimodal curve.

These assumptions create a distributional model, and ordering is accomplished by arranging the matrix so that the class distributions approximate the required pattern. The resulting order is inferred to be chronological.

Reciprocal ranking

Reciprocal ranking iteratively rearranges the rows and/or columns of the data matrix according to their weighted ranks until convergence (Ihm 2005).

For a given incidence matrix \(C\):

  • The rows of \(C\) are rearranged in increasing order of:

\[ x_{i} = \sum_{j = 1}^{p} j \frac{c_{ij}}{c_{i \cdot}} \]

  • The columns of \(C\) are rearranged in a similar way:

\[ y_{j} = \sum_{i = 1}^{m} i \frac{c_{ij}}{c_{\cdot j}} \]

These two steps are repeated until convergence. Note that this procedure may enter an infinite loop.
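
For illustration, the procedure can be sketched in a few lines of base R (reciprocal_rank, incidence and max_iter are hypothetical names used here only; this is not the implementation provided by tabula):

## Reciprocal ranking sketch (illustration only, not the tabula implementation)
## 'incidence' is assumed to be a 0/1 (presence/absence) matrix with no empty
## rows or columns.
reciprocal_rank <- function(incidence, max_iter = 100) {
  for (k in seq_len(max_iter)) {
    ## Rearrange rows in increasing order of their weighted column rank (x_i)
    x <- apply(incidence, 1, function(r) sum(seq_along(r) * r) / sum(r))
    i <- order(x)
    incidence <- incidence[i, , drop = FALSE]
    ## Rearrange columns in increasing order of their weighted row rank (y_j)
    y <- apply(incidence, 2, function(v) sum(seq_along(v) * v) / sum(v))
    j <- order(y)
    incidence <- incidence[, j, drop = FALSE]
    ## Stop as soon as neither ordering changes (convergence)
    if (identical(i, seq_along(i)) && identical(j, seq_along(j))) break
  }
  incidence
}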

The positive difference from the mean column percentage (in French "écart positif au pourcentage moyen", EPPM) represents a deviation from the situation of statistical independence (Desachy 2004). As independence can be interpreted as the absence of a relationship between the types and the chronological order of the assemblages, the EPPM is a useful graphical tool for exploring the significance of the relationships between rows and columns with respect to seriation (Desachy 2004).
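
As a rough sketch (assuming the mean percentage of each type is computed from the column totals), the EPPM matrix could be derived from a count matrix as follows; eppm and counts are illustrative names, not the tabula API:

## EPPM sketch (illustration only, after Desachy 2004): positive deviation of
## each cell's row percentage from the mean percentage of its type (column).
eppm <- function(counts) {
  pct <- counts / rowSums(counts) * 100            ## percentage of each type within each assemblage
  mean_pct <- colSums(counts) / sum(counts) * 100  ## mean (weighted) percentage of each type
  dev <- sweep(pct, 2, mean_pct)                   ## deviation from the mean percentage
  dev[dev < 0] <- 0                                ## keep only positive deviations (EPPM)
  dev
}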

Correspondence analysis

Seriation

Correspondence analysis (CA) is an effective method for the seriation of archaeological assemblages. The order of the rows and columns is given by the coordinates along one dimension of the CA space, assumed to account for temporal variation. The direction of temporal change within the correspondence analysis space is arbitrary: additional information is needed to determine the actual order in time.

## Coerce dataset to an abundance matrix
zuni_counts <- as_count(zuni)

## Correspondence analysis of the whole dataset
corresp <- ca::ca(zuni_counts)
coords <- ca::cacoord(corresp, type = "principal")

## Plot CA results
ggplot2::ggplot(mapping = ggplot2::aes(x = Dim1, y = Dim2)) +
  ggplot2::geom_vline(xintercept = 0, linetype = 2) +
  ggplot2::geom_hline(yintercept = 0, linetype = 2) +
  ggplot2::geom_point(data = as.data.frame(coords$rows), color = "black") +
  ggplot2::geom_point(data = as.data.frame(coords$columns), color = "red") +
  ggplot2::coord_fixed() + 
  ggplot2::theme_bw()

Refining

Peeples and Schachner (2012) propose a procedure to identify samples that are subject to sampling error or that have underlying structural relationships which might influence the ordering along the CA space. It relies on a partial bootstrap approach to CA-based seriation, in which each sample is replicated n times. Samples are flagged for removal when the maximum dimension length of the convex hull around their bootstrap point cloud exceeds a given cutoff value.

According to Peeples and Schachner (2012), “[this] point removal procedure [results in] a reduced dataset where the position of individuals within the CA are highly stable and which produces an ordering consistent with the assumptions of frequency seriation.”

## Replicates Peeples and Schachner 2012 results

## Samples with convex hull maximum dimension length greater than the cutoff
## value will be marked for removal.
## Define cutoff as one standard deviation above the mean
fun <- function(x) { mean(x) + sd(x) }

## Get indices of samples to be kept
## Warning: this may take a few seconds!
set.seed(123)
(zuni_keep <- refine_seriation(zuni_counts, cutoff = fun, n = 1000))
#> Partial bootstrap CA seriation refinement:
#> - Cutoff values: 2.22 (rows) - 0.37 (columns)
#> - Rows to keep: 349 of 420 (83%)
#> - Columns to keep: 14 of 18 (78%)

## Plot convex hull
## blue: convex hull for samples; red: convex hull for types
### All bootstrap samples
ggplot2::ggplot(mapping = ggplot2::aes(x = x, y = y, group = id)) +
  ggplot2::geom_vline(xintercept = 0, linetype = 2) +
  ggplot2::geom_hline(yintercept = 0, linetype = 2) +
  ggplot2::geom_polygon(data = zuni_keep[["rows"]], 
                        fill = "blue", alpha = 0.05) +
  ggplot2::geom_polygon(data = zuni_keep[["columns"]], 
                        fill = "red", alpha = 0.5) +
  ggplot2::coord_fixed() + 
  ggplot2::labs(title = "Whole dataset", x = "Dim. 1", y = "Dim. 2") + 
  ggplot2::theme_bw()
### Only retained samples
ggplot2::ggplot(mapping = ggplot2::aes(x = x, y = y, group = id)) +
  ggplot2::geom_vline(xintercept = 0, linetype = 2) +
  ggplot2::geom_hline(yintercept = 0, linetype = 2) +
  ggplot2::geom_polygon(data = subset(zuni_keep[["rows"]], 
                                      id %in% names(zuni_keep[["keep"]][[1]])),
                        fill = "blue", alpha = 0.05) +
  ggplot2::geom_polygon(data = zuni_keep[["columns"]], 
                        fill = "red", alpha = 0.5) +
  ggplot2::coord_fixed() + 
  ggplot2::labs(title = "Selected samples", x = "Dim. 1", y = "Dim. 2") + 
  ggplot2::theme_bw()

## Histogram of convex hull maximum dimension length
hull_length <- cbind.data.frame(length = zuni_keep[["lengths"]][[1]])
ggplot2::ggplot(data = hull_length, mapping = ggplot2::aes(x = length)) +
  ggplot2::geom_histogram(breaks = seq(0, 4.5, by = 0.5), fill = "grey70") +
  ggplot2::geom_vline(xintercept = fun(hull_length$length), colour = "red") +
  ggplot2::labs(title = "Convex hull max. dim.", 
                x = "Maximum length", y = "Count") + 
  ggplot2::theme_bw()

If the result of refine_seriation is used as an input argument to seriate, a correspondence analysis is performed on the subset of the object that matches the samples to be kept. The excluded samples are then projected onto the dimensions of the CA coordinate space using the row transition formula. Finally, the row coordinates along the first dimension give the seriation order.
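
In CA, the row transition formula expresses the principal coordinate of a (possibly supplementary) row \(i\) on dimension \(k\) as a weighted average of the principal column coordinates, scaled by the corresponding eigenvalue:

\[ f_{ik} = \frac{1}{\sqrt{\lambda_k}} \sum_{j = 1}^{p} \frac{c_{ij}}{c_{i \cdot}} g_{jk} \]

where \(g_{jk}\) is the principal coordinate of column \(j\) on dimension \(k\) and \(\lambda_k\) the eigenvalue of that dimension. A hypothetical call could then look like the following sketch (the exact arguments of seriate, and the use of permute to rearrange the matrix, are assumptions that may differ between tabula versions):

## Hypothetical usage sketch: the arguments of seriate() are assumptions
zuni_order <- seriate(zuni_counts, subset = zuni_keep, margin = 1)
## permute() is assumed to rearrange the matrix following the resulting order
zuni_permuted <- permute(zuni_counts, zuni_order)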

References

Desachy, Bruno. 2004. “Le sériographe EPPM : un outil informatisé de sériation graphique pour tableaux de comptages.” Revue archéologique de Picardie 3 (1): 39–56. https://doi.org/10.3406/pica.2004.2396.

Dunnell, Robert C. 1970. “Seriation Method and Its Evaluation.” American Antiquity 35 (03): 305–19. https://doi.org/10.2307/278341.

Ihm, Peter. 2005. “A Contribution to the History of Seriation in Archaeology.” In Classification the Ubiquitous Challenge, edited by Claus Weihs and Wolfgang Gaul, 307–16. Berlin Heidelberg: Springer. https://doi.org/10.1007/3-540-28084-7_34.

Peeples, Matthew A., and Gregson Schachner. 2012. “Refining Correspondence Analysis-Based Ceramic Seriation of Regional Data Sets.” Journal of Archaeological Science 39 (8): 2818–27. https://doi.org/10.1016/j.jas.2012.04.040.