\item{sep}{the field separator character. Values on each line of the
file are separated by this character. If \code{sep = ""} (the
default for \code{read.table}) the separator is \sQuote{white space},
that is one or more spaces, tabs, newlines or carriage returns.}
\item{header}{a logical value indicating whether the file contains the
names of the variables as its first line. If missing, the value is
determined from the file format: \code{header} is set to \code{TRUE}
if and only if the first row contains one fewer field than the
number of columns.}
\item{dec}{the character used in the file for decimal points.}
\item{na}{the string to use for missing values in the data.}
\item{startrow}{the \emph{n}th row to start importing from. For \code{clipboard_import}, when \code{header = TRUE} the import will start on row \code{startrow} \emph{below} the header.}
\item{as_vector}{a logical value indicating whether data consisting of only one column should be imported as a vector using \code{\link[dplyr]{pull}}. This will strip off the header.}
\item{x}{the object to be written, preferably a matrix or data frame.
If not, an attempt is made to coerce \code{x} to a data frame.}
}
\description{
These are helper functions around \code{\link{read.table}} and \code{\link{write.table}} to import from and export to the clipboard. By default the data are read and written as tab-separated values, which makes it possible to copy and paste from and to other software such as Excel and SPSS without further transformation. A short illustration is given under \sQuote{Details}.
}
\details{
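As an illustration of the default tab-separated behaviour, a typical round trip could look like the sketch below (a sketch only, using the arguments documented above; the exact defaults of \code{clipboard_import} and \code{clipboard_export} may differ):

\preformatted{
# copy a data frame to the clipboard, tab-separated by default,
# so it can be pasted directly into e.g. Excel or SPSS:
df <- data.frame(x = 1:3, y = c("a", "b", "c"))
clipboard_export(df)

# read tab-separated data that was copied from e.g. Excel or SPSS
# back into R; with as_vector = TRUE a single copied column would
# be returned as a vector instead of a data frame:
imported <- clipboard_import(header = TRUE)
}
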
For \code{clipboard_export}, the reserved clipboard size for exporting will be set automatically to 125\% of the object size of \code{x}. This way, it is possible to export data with thousands of rows, as the only limit will be your system's RAM.
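
For illustration, on Windows a reserved clipboard size can be requested through the \code{"clipboard-"} connections described in \code{\link{file}}; the sizing described above could be derived along the lines of this minimal sketch (the actual implementation of \code{clipboard_export} may differ):

\preformatted{
# reserve 125\% of the object size of x, rounded up to whole Kb,
# instead of the default 32 Kb clipboard limit on Windows:
size_kb <- ceiling(as.numeric(utils::object.size(x)) * 1.25 / 1024)
write.table(x, file = paste0("clipboard-", size_kb),
            sep = "\\t", row.names = FALSE)
}
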