\documentclass[ALCO,ThmDefs,Unicode,epreuves]{cedram}
\OneNumberAllTheorems
\usepackage{tikz}
\usepackage{blkarray}
\newcommand{\PP}{\mathbb{P}}
\newcommand{\CC}{\mathbb{C}}
\newcommand{\QQ}{\mathbb{Q}}
\newcommand{\RR}{\mathbb{R}}
\newcommand{\kk}{\Bbbk}
\newcommand{\BB}{\mathcal{B}}
\newcommand{\II}{\mathcal{I}}
\newcommand{\VV}{\mathcal{V}}
\newcommand{\HH}{\mathcal{H}}
\newcommand{\ww}{\mathbf{w}}
\newcommand{\xx}{\mathbf{x}}
\newcommand{\uu}{\mathbf{u}}
\newcommand{\vv}{\mathbf{v}}
\newcommand{\yy}{\mathbf{y}}
\newcommand{\pp}{\mathbf{p}}
\newcommand{\rr}{\mathbf{r}}
\newcommand{\ttbf}{\mathbf{t}}
\newcommand{\ssbf}{\mathbf{s}}
\newcommand{\qq}{\mathbf{q}}
\DeclareMathOperator{\Gr}{Gr}
\DeclareMathOperator{\GL}{GL}
\DeclareMathOperator{\rk}{rank}
\DeclareMathOperator{\sgn}{sgn}
\DeclareMathOperator{\supp}{supp}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\newcommand*{\mk}{\mkern -1mu}
\newcommand*{\Mk}{\mkern -2mu}
\newcommand*{\mK}{\mkern 1mu}
\newcommand*{\MK}{\mkern 2mu}
\hypersetup{urlcolor=purple, linkcolor=blue, citecolor=red}
\newcommand*{\romanenumi}{\renewcommand*{\theenumi}{\roman{enumi}}}
\newcommand*{\Romanenumi}{\renewcommand*{\theenumi}{\Roman{enumi}}}
\newcommand*{\alphenumi}{\renewcommand*{\theenumi}{\alph{enumi}}}
\newcommand*{\Alphenumi}{\renewcommand*{\theenumi}{\Alph{enumi}}}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%% Auteur
\author{\firstname{Madeline} \lastname{Brandt}}
\address{University of California, Berkley\\
Dept. of mathematics\\
970 Evans Hall\\
Berkeley\\
CA 94720, USA}
\email{brandtm@berkeley.edu}
\thanks{M. Brandt was partially supported by the National Science Foundation Graduate Research Fellowship
under Grant No. DGE 1106400.}
%%%
\author{\firstname{Amy} \lastname{Wiebe}} %%%2
\address{University of Washington\\
Department of Mathematics\\
Box 354350\\
Seattle\\
WA 98195, USA}
\email{awiebe@uw.edu}
\thanks{A. Wiebe was partially supported by an NSERC Postgraduate Scholarship.}
%%%%% Sujet
\subjclass{52B40}
\keywords{matroid, realization space}
%%%%% Gestion
\DOI{10.5802/alco.68}
\datereceived{2018-05-02}
\daterevised{2019-01-22}
\dateaccepted{2019-02-23}
%%%%% Titre et résumé
\title[The slack realization space of a matroid]{The slack realization space of a matroid}
\begin{abstract}
We introduce a new model for the realization space of a matroid, which is obtained from a variety defined by a saturated determinantal ideal, called the slack ideal, coming from the vertex-hyperplane incidence matrix of the matroid. This is inspired by a similar model for the slack realization space of a polytope. We show how to use these ideas to certify non-realizability of matroids, and describe an explicit relationship to the standard Grassmann--Pl\"ucker realization space model. We also exhibit a way of detecting projectively unique matroids via their slack ideals by introducing a toric ideal that can be associated to any matroid.
\end{abstract}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
\maketitle
\section{Introduction}
Realization spaces of matroids are well-studied objects~\cite{BLSWZ, weird_bernd_book, mnev} which not only encode whether the matroid is realizable, but also carry additional information about the structure of the matroid. A realization (or representation) of a rank $d+1$ matroid $M$ is a set of vectors in $\kk^{d+1}$ which captures its independence structure. Roughly speaking, a realization space is the set of all such choices of vectors.
Fundamental questions in the study of realization spaces of matroids include discovering whether or not a given matroid is realizable, determining over which field it is realizable, finding the structure of the set of realizations, and characterizing when realizations exist.
A celebrated theorem of Mn\"ev states that every semialgebraic set defined over the integers is stably equivalent to the realization space of some oriented matroid. That is, realization spaces of matroids can become arbitrarily complicated. In light of this, we aim to connect the combinatorics of the matroid to properties of its realization space.
%\comment{Something about how we need more tools to study them because they are hard.}
Our realization space model will be based on the \emph{slack matrix} of a matroid.
This is a generalization of the slack matrix of a polytope~\cite{yanni}, which has been used extensively
in the study of extended formulations of polytopes; see for example~\cite{lifts, rothvoss, yanni}.
In~\cite{GPRT} the slack matrix of a polytope was used to construct a realization space for the polytope via its \emph{slack ideal}. This realization space model and its properties were recently explored in detail in~\cite{slack_paper}. We extend the results of~\cite{slack_paper} to matroids, both defining the slack realization space of a matroid and examining its properties.
%This is a generalization of the slack matrix of a polytope, which has been used in the study of extended formulations of polytopes~\cite{lifts, rothvoss, yanni}, and of positive semidefinite rank of polytopes in~\cite{GPRT} where the slack ideal and realization space of a polytope were first introduced. This properties and uses of this realization space were explored in more detail in~\cite {slack_paper}.
%%We generalize a construction of~\cite{slack_paper} in which they describe a model for the realization space of a polytope using the \emph{slack matrix} of the polytope. This model gave a new framework for answering questions about the realizability of polytopes.
%In the spirit of~\cite{slack_paper},} we extend these results to the setting of matroids, creating the beginnings of a dictionary between the combinatorial properties of the matroid and the algebraic description of its realization space.
% \subsection{Organization}
In Section~\ref{SEC:bg} we introduce the main objects of study, as well as preliminary results and notation. In Section~\ref{SEC:realsp} we discuss two models for the realization space of a matroid. One of our main theorems, Theorem~\ref{THM:mothervariety}, shows how the two realization space models can be described via a single overarching variety. In Section~\ref{SEC:nonreal} we show how the slack realization model can be used to determine whether a matroid has a realization over a certain field. We also reframe the tools of final polynomials~\cite{weird_bernd_book} in terms of slack ideals, and show how they can be used to improve computational efficiency of realizability checking. In Section~\ref{SEC:toric} we introduce a toric ideal associated to a matroid and study its relationship to the projective uniqueness of the matroid. In Appendix~\ref{AP:notation} we include a table of notation used throughout the paper.
The computations in this paper are done in \texttt{Macaulay2}~\cite{M2} with the help of the \texttt{Matroids} package~\cite{M2matroids}; code for the computations can be found at \href{http://sites.math.washington.edu/~awiebe}{\texttt{http://sites.math.washington.edu/$\sim$awiebe}}.
%==============================================================================
\section{The slack matrix of a matroid}
\label{SEC:bg}
Much of this section is analogous to~\cite[\S~2]{slack_paper}, to which we refer the reader for further details and omitted proofs.
We assume the reader is familiar with the basic definitions of matroid theory; see~\cite{Oxley} or~\cite{GM}. Throughout the paper, we assume all matroids are simple (having no loops or parallel elements).
Let $\kk$ be a field, and $\kk^*:=\kk \backslash \{0\}$. Let $M = (E,\BB)$ be a matroid of rank $d+1$ with ground set $E = \{\vv_1,\ldots, \vv_n\}$, where each $\vv_i\in\kk^{d+1}$ and $\BB$ is its set of bases. If $V$ is the matrix with columns $\vv_1,\ldots, \vv_n$, then the independent sets of $M$ are the linearly independent subsets of columns, and we write $M = M[V]$. %Throughout the paper we assume that all matroids are simple. i.e., no loops or parallel elements.
Let $\HH(M)$ denote the set of hyperplanes of $M$, which are the closed subsets (flats) of rank $d$. In $M[V]$, each hyperplane $H\in\HH(M)$ corresponds to a linear subspace of $\kk^{d+1}$, so is determined by some linear equation; that is,
$H = \{\xx\in E: \alpha_H^\top\xx = 0\}$.
For $\HH(M) = \{H_1,\ldots, H_h\}$, let $W$ be the matrix whose columns are the hyperplane defining normals $\alpha_1, \ldots, \alpha_h$, or some multiple, $\lambda_j\alpha_j$ for $\lambda_j\in\kk^*$, thereof.
\begin{defi}
A \emph{slack matrix} of the matroid $M = M[V]$ over $\kk$ is an $n\times h$ matrix $S_{M[V]} = V^\top W$ for matrices $V$ and $W$ as above. %This depends on the realization $V$.
\label{DEFN:slackmatrix}
\end{defi}
%Observe for any invertible diagonal matrix $D\in\kk^{h\times h}$, the matrix $S_MD$ is also a slack matrix of $M$.
We wish to parametrize the set of realizations of a matroid by its slack matrices. So, we must determine the characteristics which define the set of all possible slack matrices of a given matroid. We begin by considering the rank of a slack matrix. Given a matrix $S$, its \emph{support} consists of the entries which are non-zero. Two matrices have the same support if they have the same zero pattern.
\newbox\toto
\setbox\toto=\hbox{See~\cite[Lemma~3.1]{slack_paper}}
\begin{lemma}[\box\toto] If $S$ is a matrix having the same support as a slack matrix of some rank $d+1$ matroid $M = M[V]$, then $\rk(S)\geq d+1$. \label{LEM:trianglesubmat} \end{lemma}
\begin{coro} If $M=M[V]$ is a rank $d+1$ matroid then $\rk(S_M) = d+1$. \label{COR:rank} \end{coro}
\begin{proof} The factorization $S_M = V^\top W$ with $V^\top \in \kk^{n\times (d+1)}$ and $W\in\kk^{(d+1)\times h}$ implies that $\rk(S_M)\leq d+1$. The result then follows from Lemma~\ref{LEM:trianglesubmat}.
%Since $M$ is rank $d+1$, we have $\rk(V) = d+1$. It remains to show that $\rk(W) = d+1$. (True if $\HH$ is an essential hyperplane arrangement, i.e., $\cap_{H\in\HH} H = \{0\}$, which I think it has to be for a realizable matroid - use same argument as in Polytope paper Lemma 3.1 using lattice of flats?)
\end{proof}
Now, let $M = (E,\BB)$ be an abstract matroid of rank $d+1$.
Unless otherwise stated, we take $E = [n]=\{1,\ldots, n\}$.
A \emph{realization} of $M$ over $\kk$ is a collection of vectors $V = \{\vv_1,\ldots, \vv_n\} \subset \kk^\ell$ such that the independent subsets of $V$ are indexed by the independent sets of the matroid, so $M=M[V]$. %Equivalently, as in~\cite[Chapter 8]{BLSWZ}, we can consider a realization as a map $\phi: E\to \kk^{d+1}$ such that $\det(\phi(e_0),\ldots, \phi(e_d)) = 0$ if and only if $\{e_0,\ldots, e_d\}\notin\BB$. Not all abstract matroids have such a representation.
A matroid with a realization is called \emph{realizable} (also \emph{representable, linear} or \emph{coordinatizable}).
\begin{lemma} The rows of a slack matrix $S_M$ form a realization of the matroid $M$. \label{LEM:rowrealiz} \end{lemma}
\begin{proof} It suffices to show that if we label the rows of $S_M$ with $[n]$, the subsets indexing linearly independent rows of $S_M$ are the independent sets of $M$.
Since $S_M = V^\top W$, if a subset $J$ of $E$ is dependent, then there exists a vector $\beta\in\kk^n$ with support indexed by $J$ such that $V\beta = 0$. But now, $\beta^\top S_M = (V\beta)^\top W = 0$, so $J$ also indexes a dependent subset of the rows of $S_M$.
Conversely, suppose $J$ indexes a dependent subset of the rows of $S_M$. Then for some $\beta\in\kk^n$ with support indexed by $J$, we have
$0 = \beta^\top S_M = (V\beta)^\top W$. Since $W$ has full rank by Corollary~\ref{COR:rank}, it must be the case that $V\beta = 0$, so that $J$ also indexes a dependent set of $M$. \end{proof}
From now on we assume that realizations come with a fixed labelling of the ground set elements and hyperplanes. Then two slack matrices of the same matroid (which a priori may have had different labellings) cannot differ by a permutation of rows and columns. This also allows us to identify the hyperplanes of a realization by their normal vectors or by the indices of those vectors.
Now, we characterize the set of matrices which correspond to slack matrices of a matroid $M$.
\begin{theorem} Let $M$ be a rank $d+1$ matroid with $n$ elements and hyperplanes $\mathcal{H}(M) = \{H_1, \ldots, H_h\}$. A matrix $S \in \kk^{n \times h}$ is a slack matrix of some realization of~$M$ if and only if both of the following hold:
\begin{enumerate}[label=(\roman{enumi})]
%\hspace{0.4 in}
%\begin{minipage}{0.3 \textwidth}
\item\label{theo2.5_i}{$\supp (S) = \supp (S_M)$}
%\end{minipage}
%\hspace{0.6 in}
%\begin{minipage}{0.3 \textwidth}
\item\label{theo2.5_ii}{$\rk(S) = d+1$.}
%\end{minipage}
\end{enumerate}
\label{THM:slackconditions}
\end{theorem}
\begin{proof}
Suppose $S$ is a slack matrix of some realization of $M$. Then~\ref{theo2.5_i} holds trivially, and~\ref{theo2.5_ii} holds by Corollary~\ref{COR:rank}.
Conversely, suppose $S$ is a matrix satisfying~\ref{theo2.5_i} and~\ref{theo2.5_ii}. By~\ref{theo2.5_ii}, $S$ has some rank factorization $S=AB$, where $A\in\kk^{n\times(d+1)}$ and $B\in\kk^{(d+1)\times h}$. %is an $n\times(d+1)$ matrix and $B$ is a $(d+1)\times h$ matrix.
Let $\mathbf{a}_1,\ldots, \mathbf{a}_n \in \kk^{d+1}$ be the rows of $A$ and $\mathbf{b}_1,\ldots, \mathbf{b}_h \in \kk^{d+1}$ be the columns of $B$. We claim that the rows of $A$ give a realization of $M$; that is, $M = M[A^\top]$.
To see this, we show that the hyperplanes of $M$ are also hyperplanes of $M[A^\top]$, and that $M[A^\top]$ cannot contain any further hyperplanes.
By~\ref{theo2.5_i}, for each hyperplane $H_j$ of $M$, there is a column of $S$ with zeros in positions indexed by elements of $H_j$. Since $S= AB$, we have $\mathbf{b}_{j}^\top \mathbf{a}_i = 0$ if and only if $i\in H_j$. Thus
$$\{\xx\in\kk^{d+1} : \mathbf{b}_j^\top \xx = 0\}\cap \{\text{rows}(A)\} = \{\mathbf{a}_i\}_{i\in H_j}$$
so that $H_j$ is also a hyperplane of the matroid $M[A^\top]$.
%Every subset of $d$ independent elements $\{i_1, \ldots, i_d\}$ is contained in a unique hyperplane, namely its closure $\overline{\{i_1, \ldots, i_d\}}$.
Now suppose $M[A^\top]$ has an extra hyperplane $H \not \in \mathcal{H}(M)$. Let $\{i_1, \ldots, i_d\} \subset H$ be any $d$ distinct independent elements of $H$. Since $i_1,\ldots, i_d$ are also elements of the matroid~$M$, the flat $\overline{\{i_1, \ldots, i_d\}}$ is a hyperplane $H'$ of $M$, and thus also a hyperplane of $M[A^\top]$, with $H' \neq H$. But then the independent set $\{i_1, \ldots, i_d\}$ is contained in two distinct hyperplanes of $M[A^\top]$, which is impossible.
\end{proof}
Let $\GL(d,\kk)$ denote the general linear group of degree $d$ over $\kk$, that is, the set of $d \times d$ invertible matrices over $\kk$. We now recall two equivalence relations on the set of realizations of a matroid $M$, and illustrate how these equivalences are reflected in slack matrices. For $A\in \GL({d+1},\kk)$, it is easy to check that $V$ and $AV$ define the same matroid. We call these realizations \emph{linearly equivalent}.
% The proof of "one can show that" in the previous sentence. (deleted by Maddie)
%Let $A\in GL(\kk^{d+1})$. Notice that $V, AV$ define the same matroid as follows: $J$ is a dependent set of $M[V]$ if and only if there exists a vector $\beta\in\kk^n$ with support indexed by $J$ such that $V\beta = 0$, which happens if and only if $(AV)\beta = 0$, since $A$ is full rank. Thus $M[V],M[AV]$ have the same dependent sets, so they are the same matroid, as claimed. The realizations $V, AV$ are called \emph{linearly equivalent}.
If $P\in\kk^{n\times n}$ is a permutation matrix which sends $i\mapsto \sigma(i)$, then $V$ and $VP$ define the same matroid up to relabelling the ground set $E=[n]$ by $\sigma(1),\ldots, \sigma(n)$. Thus if $A\in \GL({d+1},\kk)$ and $B$ is obtained from a permutation matrix by replacing each $1$ with an element of $\kk^*$, then $V$ and $AVB$ define the same matroid. We call the realizations $V, AVB$ \emph{projectively equivalent}. We call a matroid $M$ \emph{projectively unique} (over $\kk$) if all of its realizations are projectively equivalent.
%Two matroids $M[V],M[U]$ are called \emph{projectively equivalent} if $U = AVB$, where . This is equivalent to the definition of~\cite[\S6.3]{Oxley}, and is so named because their realizations are related via an automorphism of the underlying projective space $\PP^d$.
%A matroid $M$ for which all realizations are projectively equivalent is called \emph{projectively unique}.
\begin{lemma} Two realizations of a matroid $M$ are projectively equivalent if and only if their slack matrices are the same up to row and column scaling. %and permutations.
\label{LEM:pe} \end{lemma}
\begin{proof} Suppose we have projectively equivalent realizations $U,V$ of $M$. Then $U = AVB$, where $A\in \GL(d+1, \kk)$ and, without loss of generality, $B$ is an invertible $n\times n$ diagonal matrix (since we have assumed a fixed labelling of our matroid).%a permutation simply relabels the ground set and the hyperplanes accordingly).
If $H=\{\vv_{i_1},\ldots, \vv_{i_k}\}$ is a hyperplane of $M[V]$, %defined by $\alpha_H\in\kk^{d+1}$.
then $H' = \{\uu_{i_1},\ldots, \uu_{i_k}\}$ is a hyperplane of $M[U]$. Furthermore, if $\alpha_H\in\kk^{d+1}$ is normal to $H$, then since ${\uu_i = b_i\MK A\vv_i}$, where $b_i$ is the $i$th diagonal entry of $B$, the vector $A^{-\top}\alpha_H$ is normal to $H'$, %we have $H' = \{A\xx \in\kk^{d+1} : \alpha_H^\top\xx=0\}\cap U$. Hence $H' = \{\yy\in\kk^{d+1} : \alpha_H^\top A^{-1}\yy=0\}\cap U$,
so that a slack matrix of $M[U]$ is
$$
S_{M[U]} = U^\top \begin{bmatrix} A^{-\top}\alpha_H \end{bmatrix}_{H\in\HH}
= B^\top V^\top A^\top\begin{bmatrix} A^{-\top}\alpha_H \end{bmatrix}_{H\in\HH}
= B^\top V^\top W
= B^\top S_{M[V]}.
$$
Since we can always scale columns of a slack matrix, this completes the proof.
Conversely, suppose we have realizations $U$ and $V$ of the matroid $M$ such that
${S_{M[U]} = D_n S_{M[V]} D_h}$ for invertible diagonal matrices $D_n\in\kk^{n\times n},D_h\in\kk^{h\times h}$.
By definition, $S_{M[U]} = U^\top W$ and $S_{M[V]} = V^\top W'$. Multiplying both sides of the above equation on the right by a right inverse $Y$ of the matrix $W$ (which exists since $W$ has full row rank $d+1$), we find
$$
U^\top = D_n V^\top W' D_h Y.
$$
We see that $W' D_h Y$ is a $(d+1) \times (d+1)$ invertible matrix, which makes $U$ and $V$ projectively equivalent, as desired. \end{proof}
\begin{lemma}
\label{LEM:columnscale}
Two realizations of a matroid $M$ are linearly equivalent if and only if their slack matrices are the same up to column scaling.
\end{lemma}
\begin{proof}
By taking $B,D_n$ each to be the $n\times n$ identity matrix in the proof of Lemma~\ref{LEM:pe}, we obtain the desired result.
\end{proof}
We now define an analog of the slack matrix which can be constructed for any abstract matroid, even ones which are not realizable, as follows.
\begin{defi} \label{DEFN:symbslack}
Define the \emph{symbolic slack matrix} of matroid $M$ to be the matrix $S_M(\xx)$ with rows indexed by elements $i\in E$, columns indexed by hyperplanes $H_j\in \HH(M)$ and $(i,j)$-entry
$$S_M(\xx)_{ij} = \begin{cases} x_{ij} &\text{ if } i\notin H_j \\ 0 &\text{ if } i\in H_j. \end{cases}$$
% This matrix can be obtained by replacing each nonzero entry of $S_M$ by a distinct variable $x_{ij}$.
The \emph{slack ideal} of $M$ is the saturation of the ideal generated by the $(d+2)$-minors of $S_M(\xx)$, namely
\[
I_M := \Big\langle (d+2)\text{-minors of } S_M(\xx)\Big\rangle : \left(\prod_{i=1}^n\prod_{j\colon i\notin H_j} x_{ij}\right)^{\infty}
\subset \kk[\xx]. %= \kk[x_{ij}\ |\ 1 \leq i \leq n,\ 1 \leq j \leq h,\ i \not \in H_j].
\]
Suppose there are $t$ variables in $S_M(\xx)$. The \emph{slack variety} is the variety $\VV(I_M)\subset \kk^t$. The saturation of $I_M$ by the product of all the variables guarantees that there are no components of $\VV(I_M)$ that live entirely in coordinate hyperplanes. If $\mathbf{s}\in\kk^t$ is a zero of $I_M$, then we identify it with the matrix $S_M(\mathbf{s})$.
\end{defi}
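Concretely, the slack ideal can be computed in \texttt{Macaulay2}~\cite{M2} along the following lines. This is only a sketch, not the authors' actual scripts: the rank $d+1$, the dimensions $n$ and $h$, and the list \texttt{Hyps} of hyperplanes (as sets of $0$-based ground set indices, e.g.\ as produced by the \texttt{Matroids} package) are assumed to be set up beforehand, and the variable names are illustrative.
\begin{verbatim}
needsPackage "Matroids";
-- assume n, h, d and the list Hyps of hyperplanes are given,
-- e.g. Hyps = hyperplanes M for a matroid M on {0,...,n-1}
R = QQ[x_(1,1)..x_(n,h)];
S = matrix table(n, h, (i,j) ->              -- symbolic slack matrix
      if member(i, Hyps#j) then 0_R else x_(i+1,j+1));
J = minors(d+2, S);                          -- ideal of (d+2)-minors
IM = saturate(J, product select(flatten entries S, e -> e != 0));
\end{verbatim}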
\begin{example}
\label{EG:four_lines}
Consider the rank $3$ matroid $M_4 = M[V]$ for $V$ whose columns are
%Consider the rank 3 matroid $M_4 = ([6],\BB)$ where $\BB$ consists of all 3 element sets except $123$, $246$, $345$, and $156$. This matroid is realizable in $\RR^3$ via the vectors
$\vv_1 = (-2,-2,1)^\top$,
$\vv_2 = (-1,1,1)^\top$,
$\vv_3 = (0,4,1)^\top$,
$\vv_4 = (2,-2,1)^\top$,
$\vv_5 = (1,1,1)^\top$,
$\vv_6 = (0,0,1)^\top$.
Projecting onto the plane $z=1$, this can be visualized as the points of intersection of four lines in the plane, as in Figure~\ref{FIG:four_lines}. %The four non-bases give us four of the hyperplanes, and in addition to this, there are three hyperplanes $25,14,36$.
\begin{figure}[htb]\centering
%0\begin{center}
\begin{minipage}{0.27 \textwidth}
\includegraphics[height = 2 in]{161_figures/four_lines_ex}
\end{minipage}
\begin{minipage}{0.61 \textwidth}
\[%S_{M_4}(\xx) =
\begin{blockarray}{cccccccc}
&H_1 & H_2& H_3 & H_4 & H_5 & H_6 & H_7 \\
&123 & 246 & 345 & 156 & 25 & 14 & 36\\
\begin{block}{c[ccccccc]}
1 & 0 & x_{12} & x_{13} & 0 & x_{15} &0 & x_{17} \\
2 & 0 & 0 & x_{23} & x_{24} &0 &x_{26} & x_{27} \\
3 & 0 & x_{32} & 0 & x_{34} &x_{35} & x_{36} &0 \\
4 & x_{41} & 0 & 0 & x_{44} &x_{45} & 0& x_{47} \\
5 & x_{51} & x_{52} & 0 & 0 & 0& x_{56} & x_{57} \\
6 & x_{61} & 0 & x_{63} & 0 &x_{65} & x_{66} & 0\\
\end{block}
\end{blockarray}
\]
\end{minipage}
\caption{The point-line configuration $M_4$ of Example~\ref{EG:four_lines}, and its symbolic slack matrix $S_{M_4}(\xx)$.}
\label{FIG:four_lines}
%\end{center}
\end{figure}
A slack matrix for this realization is then
%{
%\fontsize{8.5}{10}\selectfont
\begin{align*}
S_{M_4} &=
\begin{blockarray}{ccc}
\begin{block}{@{\;\;}[*{3}{@{\;}c@{\;}}]}
-2 & -2 & 1 \\
-1 & 1 & 1 \\
0 & 4 & 1 \\
2 & -2 & 1 \\
1 & 1 & 1 \\
0 & 0 & 1\\
\end{block}
\end{blockarray}\;\;
\begin{blockarray}{*{7}{@{\,}c@{\,}}}
H_1 & H_2& H_3 & H_4 & H_5 & H_6 & H_7 \\
\scriptstyle{123} & \scriptstyle{246} & \scriptstyle{345} & \scriptstyle{156} & \scriptstyle{25} & \scriptstyle{14} & \scriptstyle{36}\\
\begin{block}{[*{7}{@{\,}c@{\,}}]}
-3 & 3 & 6& -3 & 0 & 0 & 4\\
1 & 3 & 2 & 3 & 2 & 4 & 0 \\
-4 & 0 & -8 & 0 & -2 & 8 & 0\\
\end{block}
\end{blockarray}
\\
& =
\begin{blockarray}{c*{7}{@{\,}c@{\,}}}
&H_1 & H_2& H_3 & H_4 & H_5 & H_6 & H_7 \\
& \scriptstyle{123} & \scriptstyle{246} & \scriptstyle{345} & \scriptstyle{156} & \scriptstyle{25} & \scriptstyle{14} & \scriptstyle{36}\\
\begin{block}{c@{\;\;}[*{7}{@{\,}c@{\,}}]@{\;}}
\vspace{3pt}\raisebox{-2pt}{$1$}& \raisebox{-3pt}{$0$} & \raisebox{-3pt}{$-12$} & \raisebox{-3pt}{$-24$} & \raisebox{-3pt}{$0$} & \raisebox{-3pt}{$-6$} & \raisebox{-3pt}{$0$} & \raisebox{-3pt}{$-8$} \\
2& 0 & 0 & -12 & 6 & 0 & 12 & -4 \\
3& 0 & 12 & 0 & 12 & 6 & 24 & 0 \\
4& -12 & 0 & 0 & -12 & -6 & 0 & 8 \\
5& -6 & 6 & 0 & 0 & 0 & 12 & 4 \\
6& -4 & 0 & -8 & 0 & -2 & 8 & 0 \\[2pt]
\end{block}
\end{blockarray},
\end{align*}
%}
where, choosing $d$ independent vectors $\{\vv_{j_1}, \ldots, \vv_{j_d}\}\subset H$, we calculate each $\alpha_H$, via cofactor expansion along the first column, as
\begin{equation}\label{EQ:pluckerdet}
\det
\begin{bmatrix}
\widehat{e_1}& | & \cdots & |\\
\vdots & \vv_{j_1} & \cdots & \vv_{j_d}\\
\widehat{e_{d+1}} & | & \cdots & |\\
\end{bmatrix}.
\end{equation}
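For instance, for the hyperplane $H_7 = \{3,6\}$, taking the independent vectors $\vv_3, \vv_6$ and expanding the determinant along its first column gives
\[
\alpha_{H_7} = \det\begin{bmatrix} \widehat{e_1} & 0 & 0\\ \widehat{e_2} & 4 & 0\\ \widehat{e_3} & 1 & 1\end{bmatrix} = 4\mK e_1 = (4,0,0)^\top,
\]
which is the column of the matrix $W$ above indexed by $H_7$; for example, the corresponding slack of element $1$ is $\vv_1^\top\alpha_{H_7} = -8$, as recorded in $S_{M_4}$.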
The symbolic slack matrix of $M_4$ is in Figure~\ref{FIG:four_lines}.
We take the ideal of $4$-minors of this matrix and saturate with respect to the product of all of the variables to get the slack ideal $I_{M_4}$. This ideal has codimension~$12$ and degree~$293$, and is generated by the $72$~binomials in Table~\ref{TAB:72things}. In Section~\ref{SEC:toric} we will see that these correspond to the $72$ cycles in the bipartite non-incidence graph of this configuration (Figure~\ref{FIG:fourlinesgraph}).
\begin{table}[htb]\centering
%\begin{center}
%\footnotesize
\scalebox{0.89}{\begin{tabular}{| l | }
\hline
$x_{36}x_{65}\Mk + \Mk x_{35}x_{66},\ x_{26}x_{63}\Mk - \Mk x_{23}x_{66},\ x_{15}x_{63}\Mk - \Mk x_{13}x_{65},\ x_{56}x_{61}\Mk - \Mk x_{51}x_{66},\
x_{45}x_{61}\Mk - \Mk x_{41}x_{65},$\\
$x_{27}x_{56}\Mk + \Mk x_{26}x_{57},\ x_{36}x_{52}\Mk - \Mk x_{32}x_{56},\ x_{17}x_{52}\Mk - \Mk x_{12}x_{57},\ x_{47}x_{51}\Mk - \Mk x_{41}x_{57},\ x_{17}x_{45}\Mk + \Mk x_{15}x_{47},$\\
$x_{35}x_{44}\Mk - \Mk x_{34}x_{45},\ x_{27}x_{44}\Mk - \Mk x_{24}x_{47},\ x_{26}x_{34}\Mk - \Mk x_{24}x_{36},\ x_{15}x_{32}\Mk - \Mk x_{12}x_{35},\ x_{17}x_{23}\Mk - \Mk x_{13}x_{27}$\\
\hline
$x_{47}x_{56}x_{65}\Mk - \Mk x_{45}x_{57}x_{66},\ x_{17}x_{56}x_{65}\Mk + \Mk x_{15}x_{57}x_{66},\ x_{12}x_{56}x_{65}\Mk + \Mk x_{15}x_{52}x_{66},\ x_{26}x_{47}x_{65}\Mk + \Mk x_{27}x_{45}x_{66},$\\
$x_{26}x_{44}x_{65}\Mk + \Mk x_{24}x_{45}x_{66},\ x_{17}x_{26}x_{65}\Mk - \Mk x_{15}x_{27}x_{66},\ x_{17}x_{56}x_{63}\Mk + \Mk x_{13}x_{57}x_{66},\ x_{12}x_{56}x_{63}\Mk + \Mk x_{13}x_{52}x_{66},$\\
$x_{27}x_{45}x_{63}\Mk + \Mk x_{23}x_{47}x_{65},\ x_{24}x_{45}x_{63}\Mk + \Mk x_{23}x_{44}x_{65},\ x_{12}x_{36}x_{63}\Mk + \Mk x_{13}x_{32}x_{66},\ x_{24}x_{35}x_{63}\Mk + \Mk x_{23}x_{34}x_{65},$\\
$x_{23}x_{57}x_{61}\Mk + \Mk x_{27}x_{51}x_{63},\ x_{15}x_{57}x_{61}\Mk + \Mk x_{17}x_{51}x_{65},\ x_{13}x_{57}x_{61}\Mk + \Mk x_{17}x_{51}x_{63},\ x_{35}x_{52}x_{61}\Mk + \Mk x_{32}x_{51}x_{65},$\\
$x_{15}x_{52}x_{61}\Mk + \Mk x_{12}x_{51}x_{65},\ x_{13}x_{52}x_{61}\Mk + \Mk x_{12}x_{51}x_{63},\ x_{26}x_{47}x_{61}\Mk + \Mk x_{27}x_{41}x_{66},\ x_{23}x_{47}x_{61}\Mk + \Mk x_{27}x_{41}x_{63},$\\
$x_{13}x_{47}x_{61}\Mk + \Mk x_{17}x_{41}x_{63},\ x_{36}x_{44}x_{61}\Mk + \Mk x_{34}x_{41}x_{66},\ x_{26}x_{44}x_{61}\Mk + \Mk x_{24}x_{41}x_{66},\ x_{23}x_{44}x_{61}\Mk + \Mk x_{24}x_{41}x_{63},$\\
$x_{35}x_{47}x_{56}\Mk + \Mk x_{36}x_{45}x_{57},\ x_{34}x_{47}x_{56}\Mk + \Mk x_{36}x_{44}x_{57},\ x_{17}x_{35}x_{56}\Mk - \Mk x_{15}x_{36}x_{57},\ x_{35}x_{47}x_{52}\Mk + \Mk x_{32}x_{45}x_{57},$\\
$x_{34}x_{47}x_{52}\Mk + \Mk x_{32}x_{44}x_{57},\ x_{27}x_{34}x_{52}\Mk + \Mk x_{24}x_{32}x_{57}, \ x_{13}x_{26}x_{52}\Mk + \Mk x_{12}x_{23}x_{56},\ x_{36}x_{45}x_{51}\Mk + \Mk x_{35}x_{41}x_{56},$\\
$x_{32}x_{45}x_{51}\Mk + \Mk x_{35}x_{41}x_{52},\ x_{12}x_{45}x_{51}\Mk + \Mk x_{15}x_{41}x_{52},\ x_{36}x_{44}x_{51}\Mk + \Mk x_{34}x_{41}x_{56},\ x_{32}x_{44}x_{51}\Mk + \Mk x_{34}x_{41}x_{52},$\\
$x_{26}x_{44}x_{51}\Mk + \Mk x_{24}x_{41}x_{56},\ x_{27}x_{36}x_{45}\Mk - \Mk x_{26}x_{35}x_{47},\ x_{17}x_{32}x_{44}\Mk + \Mk x_{12}x_{34}x_{47},\ x_{15}x_{23}x_{44}\Mk + \Mk x_{13}x_{24}x_{45},$\\
$x_{17}x_{26}x_{35}\Mk + \Mk x_{15}x_{27}x_{36},\ x_{13}x_{26}x_{35}\Mk + \Mk x_{15}x_{23}x_{36},\ x_{15}x_{27}x_{34}\Mk + \Mk x_{17}x_{24}x_{35},\ x_{15}x_{23}x_{34}\Mk + \Mk x_{13}x_{24}x_{35},$\\
$x_{17}x_{26}x_{32}\Mk + \Mk x_{12}x_{27}x_{36},\ x_{13}x_{26}x_{32}\Mk + \Mk x_{12}x_{23}x_{36},\ x_{17}x_{24}x_{32}\Mk + \Mk x_{12}x_{27}x_{34},\ x_{13}x_{24}x_{32}\Mk + \Mk x_{12}x_{23}x_{34}$\\
\hline
$x_{27}x_{35}x_{52}x_{63}\Mk - \Mk x_{23}x_{32}x_{57}x_{65},\ x_{17}x_{36}x_{44}x_{63}\Mk - \Mk x_{13}x_{34}x_{47}x_{66},\ x_{24}x_{35}x_{57}x_{61}\Mk - \Mk x_{27}x_{34}x_{51}x_{65},$\\
$x_{23}x_{34}x_{52}x_{61}\Mk - \Mk x_{24}x_{32}x_{51}x_{63},\ x_{12}x_{36}x_{47}x_{61}\Mk - \Mk x_{17}x_{32}x_{41}x_{66},\ x_{13}x_{32}x_{44}x_{61}\Mk - \Mk x_{12}x_{34}x_{41}x_{63},$\\
$x_{15}x_{26}x_{44}x_{52}\Mk - \Mk x_{12}x_{24}x_{45}x_{56},\ x_{13}x_{26}x_{45}x_{51}\Mk - \Mk x_{15}x_{23}x_{41}x_{56},\ x_{12}x_{23}x_{44}x_{51}\Mk - \Mk x_{13}x_{24}x_{41}x_{52}$\\
\hline
\end{tabular}}
%\end{center}
\vspace*{3mm}
\caption{The 72 generators of $I_{M_4}$.}
\label{TAB:72things}
\end{table}
\end{example}
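As a quick sanity check on Corollary~\ref{COR:rank}, one can verify in \texttt{Macaulay2} that the numerical slack matrix of Example~\ref{EG:four_lines} has rank exactly $d+1 = 3$; the following sketch simply transcribes the entries of $S_{M_4}$ displayed above.
\begin{verbatim}
S = matrix {{0,-12,-24,0,-6,0,-8}, {0,0,-12,6,0,12,-4},
            {0,12,0,12,6,24,0},   {-12,0,0,-12,-6,0,8},
            {-6,6,0,0,0,12,4},    {-4,0,-8,0,-2,8,0}};
rank S   -- equals 3 = d+1
\end{verbatim}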
\begin{rema} In~\cite{slack_paper}, given a set of $n$ vertices $V\subset\kk^d$ defining a $d$-polytope $P=\text{conv}(V)$, only the facet-defining hyperplanes are included in the slack matrix. We can also form a matroid associated to this polytope by considering all the hyperplanes; that is, we define the matroid $M = M[V']$, where $V' \in \kk^{(d+1)\times n}$ is the matrix obtained from $V$ by appending a $1$ to each vector. Then the symbolic slack matrix of~$P$ defined in~\cite{slack_paper} is the restriction of the symbolic slack matrix of the matroid $M$ to the subset of columns corresponding to facet-defining hyperplanes.
%Notice that the construction of this slack ideal is analogous to the construction of~\cite{slack_paper} for $d$-polytopes. Given a set of $n$ vertices $V \subset \kk^d$, in the polytope case we consider only the facet defining hyperplanes, where as here we would consider all the hyperplanes which are defined by the matroid $M=M[V']$, where $V' \subset \kk^{(d+1)\times n} $ is the matrix obtained from $V$ by appending a 1 to each vector. This means if $P=\text{conv}(V)$, then
Thus the slack ideal of the polytope is always contained in the slack ideal of the matroid, $I_P\subseteq I_M$. We illustrate with the following example.%\comment{say something more detailed about this or / and add an example}
\end{rema}
\begin{example} \label{EG:toblerone} We consider the triangular prism $P$ labelled as in Figure~\ref{FIG:toblerone}. As a $3$-polytope, its facets are given by the hyperplanes $1234,1256,3456,135,246$ and the symbolic slack matrix is in Figure~\ref{FIG:toblerone}. Its slack ideal $I_P$ is generated by three binomials.
\begin{figure}[htb]\centering
%\begin{center}
\begin{minipage}{1.8in}
\includegraphics[width=1.6in]{161_figures/toblerone2}
\end{minipage}\begin{minipage}{3.1in}
\[ S_P(\xx)=
\begin{blockarray}{cccccc}
&H_1 & H_2 & H_3 & H_4 & H_5 \\
&1234 & 1256 & 3456 & 135 & 246 \\
\begin{block}{c[ccccc]}
1 &0 &0 &x_{13} &0 &x_{15} \\
2 &0 &0 &x_{23} &x_{24} &0 \\
3 & 0 &x_{32} & 0 & 0 &x_{35} \\
4 & 0 &x_{42} & 0 &x_{44} &0 \\
5 & x_{51} & 0 & 0 & 0 &x_{55} \\
6 & x_{61} & 0 & 0 &x_{64} & 0 \\
\end{block}
\end{blockarray}
\]
\end{minipage}
% \end{center}
\caption{The triangular prism $P$ of Example~\ref{EG:toblerone} and its symbolic slack matrix as a polytope.}
\label{FIG:toblerone}
\end{figure}
Considering $P$ as a rank $4$ matroid which has the three facets $1234$, $1256$, $3456$ of $P$ as its non-bases, we obtain the following symbolic slack matrix.
\[
S_M(\xx)=
\begin{blockarray}{cccccc|cccccc}
&H_1 & H_2 & H_3 & H_4 & H_5 &H_6 &H_7 &H_8 &H_9 &H_{10} &H_{11}\\
&1234 & 1256 & 3456 & 135 & 246 &136 & 146 &145 &245 &235 &236\\
\begin{block}{c@{\;}[*{5}{@{\;}c@{\;}}|@{\hspace{-3.3pt}}*{6}{@{\;}c@{\;}}]}
1 &0 &0 &x_{13} &0 &x_{15} &0 &0 &0 &x_{19} &x_{1,10} &x_{1,11} \\
2 &0 &0 &x_{23} &x_{24} &0 &x_{26} &x_{27} &x_{28} &0 &0 &0 \\
3 & 0 &x_{32} & 0 & 0 &x_{35} &0 &x_{37} &x_{38} &x_{39} &0 &0 \\
4 & 0 &x_{42} & 0 &x_{44} &0 &x_{46} &0 &0 &0 &x_{4,10} &x_{4,11} \\
5 & x_{51} & 0 & 0 & 0 &x_{55} &x_{56} &x_{57} &0 &0 &0 &x_{5,11} \\
6 & x_{61} & 0 & 0 &x_{64} & 0 &0 &0 &x_{68} &x_{69} &x_{6,10} &0 \\
\end{block}
\end{blockarray}
\]
Not only is $I_P\subseteq I_M$, but in this case $I_P$ is the elimination ideal obtained by eliminating the variables in the columns indexed by the additional hyperplanes $H_6,\ldots,H_{11}$.
%\begin{figure}
%\begin{center}
%\includegraphics[width=3in]{toblerone}
%\caption{The triangular prism.}
%\label{FIG:toblerone}
%\end{center}
% \end{figure}
\end{example}
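The elimination claim at the end of Example~\ref{EG:toblerone} can be checked computationally. Assuming the slack ideal \texttt{IM} of the matroid and the list \texttt{extraVars} of variables appearing in the columns $H_6,\ldots,H_{11}$ have been set up (the names here are illustrative), a one-line \texttt{Macaulay2} sketch is:
\begin{verbatim}
-- IM: slack ideal of the matroid;
-- extraVars: variables of columns H_6..H_11
IP = eliminate(extraVars, IM);  -- slack ideal of P as a polytope
\end{verbatim}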
%==============================================================================
\section{Realization space models}
\label{SEC:realsp}
A realization space for a rank $d+1$ matroid $M$ with $n$ elements is, roughly speaking, a space whose points are in correspondence with (equivalence classes of) collections of vectors $V = \{\vv_1,\ldots, \vv_n\}\subset\kk^{d+1}$ whose matroid $M[V]$ is $M$. In this section we show that the slack variety defined in Section~\ref{SEC:bg} provides such a realization space, and we relate it to another realization space called the Grassmannian of the matroid.
Theorem~\ref{THM:slackconditions} characterizes the slack matrices of realizations of a matroid. The next theorem shows that the slack variety captures exactly these matrices.
\begin{theorem} Let $M$ be a rank $d+1$ matroid. Then $V$ is a realization of $M$ if and only if $S_{M[V]} = S_M(\ssbf )$ for some $\ssbf \in \VV(I_M)\cap(\kk^*)^t$.
\label{THM:realizvariety}
%The slack variety is exactly the set of matrices satisfying conditions (i) and (ii) of Theorem~\ref{THM:slackconditions}.
\end{theorem}
\begin{proof} Let $V$ be a realization of $M$. Then $S_{M[V]} = S_M(\ssbf )$ for some $\ssbf \in (\kk^*)^t$ by Theorem~\ref{THM:slackconditions}~\ref{theo2.5_i}. Furthermore,
$\rk(S_{M[V]}) = d+1$ by Corollary~\ref{COR:rank}, so its $(d+2)$-minors vanish and thus $\ssbf \in\VV(I_M)$, as desired.
Conversely, let $V \in\kk^{(d+1)\times n}$ be such that $S_{M[V]}=S_M(\ssbf )$ for some $\ssbf \in\VV(I_M)\cap(\kk^*)^t$. Then $\supp(S_{M[V]}) = \supp(S_M)$ and $\rk(S_{M[V]})\leq d+1$. By Lemma~\ref{LEM:trianglesubmat}, we also have $\rk(S_{M[V]})\geq d+1$, so $V$ is a realization of $M$ by Theorem~\ref{THM:slackconditions}.
\end{proof}
Since we know that the set of realizations of a matroid is closed under row and column scalings, Theorem~\ref{THM:realizvariety} implies the following corollary. We denote the torus of row and column scalings, $(\kk^*)^n\times(\kk^*)^h$, by $T_{n,h}$.
\begin{coro}The slack variety is closed under the action of the group $T_{n,h}$, where $(\kk^*)^n$ acts by row scaling (left multiplication by diagonal matrices) and $(\kk^*)^h$ acts by column scaling (right multiplication by diagonal matrices). \label{COR:scaleclosed}\end{coro}
Theorem~\ref{THM:realizvariety} and Corollary~\ref{COR:scaleclosed} tell us that the slack variety is a realization space for the matroid $M$, and that the slack variety modulo the action of $T_{n,h}$ is a realization space for projective equivalence classes of realizations of $M$.
\begin{defi} \label{DEFN:slackreal} The \emph{slack realization space} of a rank $d+1$ matroid $M$ on $n$ elements with $h$ hyperplanes is the image of the slack variety inside a product of projective spaces
\[
\VV(I_M)\cap(\kk^*)^t \hookrightarrow (\mathbb{P}^{n-1})^h,
\]
where $\ssbf $ is sent to the tuple of columns of $S_M(\ssbf )$.
\end{defi}
\begin{prop}
Let $M$ be a rank $d+1$ matroid on $n$ elements with $h$ hyperplanes. Then the points of its slack realization space are in one-to-one correspondence with linear equivalence classes of realizations of $M$.
\end{prop}
\begin{proof}
Under this embedding, two slack matrices that differ by column scaling map to the same point in $(\mathbb{P}^{n-1})^h$, so the result follows from Lemma~\ref{LEM:columnscale}.
\end{proof}
Next we describe a known model for the realization space of a matroid arising from a subvariety of a Grassmannian.
The \emph{Grassmannian} $\Gr (d+1,n)$ is a variety whose points correspond to ${(d+1)}$-dimensional linear subspaces of a fixed $n$-dimensional vector space $\Lambda$. %The Grassmannian is a smooth, projective variety of dimension $(d+1)(n-d-1)$, and
It embeds into $\PP^{\binom{n}{d+1}-1}$ as follows. Any $(d+1)$-dimensional linear subspace of $\Lambda$ can be described as the row space of a $(d+1)\times n$ matrix of rank $d+1$. However, two matrices $A$ and $B$ have the same row space exactly when there is a matrix $G \in \GL(d+1,\kk)$ such that $A = GB$. Thus, to obtain a one-to-one correspondence between subspaces and points of the Grassmannian, we record a $(d+1) \times n$ matrix by its vector of $(d+1)$-minors. We call this the \emph{Pl\"{u}cker vector}; its coordinates are indexed by the subsets $\sigma$ of $[n]$ of size $d+1$.
The \emph{Pl\"{u}cker ideal} $P_{d+1,n} \subseteq \kk[\pp] = \kk[p_\sigma\ |\ \sigma \subset [n],\ |\sigma| = d+1]$ is the set of all polynomials which vanish on every vector of $(d+1)$-minors coming from some $(d+1)\times n$ matrix. It is generated by the homogeneous quadratic Pl\"ucker relations and cuts out the Grassmannian as a variety inside $\PP^{\binom{n}{d+1} -1}$.
% It is generated by the Pl\"ucker relations, which are defined as follows. Fix a subsets $I,J\subset[n]$ with $|I| = d$, $|J| = d+2$. For $j\in J$ denote by $\sgn(j,I,J)$ the sign $(-1)^\ell$, where $\ell$ is the number of elements $j' \in J$ with $j