Plotting data for reports is often done in programs such as R, matplotlib or MATLAB. However, sometimes one wants a consistent layout for all plots, and that is far from easy when several of these tools have to be combined. One way to tackle this is to store all plot data in a CSV file and then import that into LaTeX, which can plot it natively using the pgfplots package, provided the plot does not contain too many points (more on that later).

Plotting Graphics From File

The easiest way is by using the pgfplots package, as illustrated in this example:

% \usepackage{pgfplots}       % <-- required in preamble
% \usepackage{SIunits}        % <-- required in preamble
% \pgfplotsset{compat=newest} % <-- optional in preamble

\begin{tikzpicture}
\begin{axis}[width=0.8\textwidth, % limit the plot width
	xmode=log,  % logarithmic frequency axis
	grid=both,
	xlabel=$f$ (\hertz), % \hertz requires SIunits
	ylabel=$T$ (-),
	title={Measured transfer function of analogue filter},
	minor grid style={gray!25},
	major grid style={gray!25},
	no marks]
\addplot[line width=1pt,solid,color=blue] %
	table[x=f,y=T,col sep=comma]{latex-import-data.csv};
\addlegendentry{Transfer function};
\end{axis}
\end{tikzpicture}

Plot data example

First, we need to load the pgfplots package, since it makes loading data from file very easy. Next, we set compat=newest, which silences a compatibility warning. Now it is time to define the plot: use the axis environment inside a tikzpicture environment (pgfplots uses TikZ internally).

We pass a lot of options to the axis environment. Their meaning is straightforward: labels, grids, ticks, etc. Personally, I like a light grid (minor grid style={gray!25} and major grid style={gray!25}). Furthermore, we limit the width of the plot and remove any markers. Sometimes markers are useful (for small sets of data); in that case, mark=* marks each data point with a small dot. If a normal (linear) axis is needed, use the default xmode=normal (or ymode=normal) instead of log mode.
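As an illustrative sketch (the styling choices here are my own, not part of the original example), a small data set on linear axes with markers could be plotted as:

```latex
\begin{tikzpicture}
\begin{axis}[xlabel=$f$ (\hertz), ylabel=$T$ (-)] % linear axes are the default
\addplot[color=blue, mark=*] % a small dot on every data point
	table[x=f, y=T, col sep=comma]{latex-import-data.csv};
\end{axis}
\end{tikzpicture}
```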

By now we have defined the axis system and it's time to add a plot: \addplot does exactly that. We want a solid blue line of 1 point thick. The data is loaded from a table, with x-data taken from the f column and y-data from the T column. These names are derived from the first line of the CSV file. This header line is not strictly required (an index-based system is also possible), but using names is advised since they give meaning to the data. The actual data is loaded from latex-import-data.csv, with values separated by commas. Notice that the syntax is very similar to the syntax for plotting a function with pgfplots; see also this post.
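For completeness, the index-based variant would look like this; the header-less file headerless-data.csv is a hypothetical stand-in, and the indices are zero-based:

```latex
% columns selected by position instead of by name
\addplot table[x index=0, y index=1, col sep=comma]{headerless-data.csv};
```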

An extract from the CSV-file used in this example is:
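Based on the column names used throughout this post, the file begins with a header line of this form (the data rows themselves are omitted here):

```
f,V1,V2,phi,T
```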


We also notice the \addlegendentry command. I personally prefer adding the legend entry directly with the plot, but you can opt to add it later too. Finally, there is the inclusion of the SIunits package. This might seem over the top for just the Hz from \hertz, but the package also provides the \unit command, which is really nice for typesetting numbers with units. Notice the difference (after typesetting) between \unit{15}{\kilo\hertz} and 15 kHz in a multi-line paragraph: with \unit, the space between the number and the unit is slightly smaller, and the two will never end up on separate lines.
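In running text, that usage looks like this (a minimal sketch; the sentence is only illustrative):

```latex
% SIunits inserts a thin, non-breaking space between number and unit
The filter's cut-off frequency is \unit{15}{\kilo\hertz}.
```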

The example is generated from a relatively small number of data points (about 30). Sometimes a plot has thousands of points; in that case, you might run out of TeX memory. One way to solve this is to plot only a fraction of all points with the following additional options:

\addplot[line width=1pt,solid,color=blue] %
	table[x=f,y=T,
          col sep=comma,
          each nth point=100,
          filter discard warning=false,
          unbounded coords=discard]{latex-import-data.csv};

Loading Data to a Table

Apart from generating plots based on CSV-files, we can also create entire tables from a file. This is done with the pgfplotstable package. A basic example loads the data previously plotted (and more):

% \usepackage{pgfplotstable} % <-- required in preamble
% \usepackage{booktabs}      % <-- required in preamble

% activate this to allow the table to be split over multiple pages
% \usepackage{longtable}     % <-- required in preamble
% \pgfplotstableset{
% begin table=\begin{longtable},
% end table=\end{longtable},
% }

\pgfplotstabletypeset[
    columns               ={f,V1,V2,phi,T},
    columns/f/.style      ={column name=$f (\hertz)$,sci},
    columns/V1/.style     ={column name=$V_P (\volt)$},
    columns/V2/.style     ={column name=$V_S (\volt)$,fixed},
    columns/phi/.style    ={column name=$\phi (\degree)$},
    columns/T/.style      ={column name=$T (\frac{\volt}{\volt})$,fixed,precision=3},
    every head row/.style ={before row=\toprule, after row=\midrule},
    every last row/.style ={after row=\bottomrule%
    %\\\caption{Measurement data and transfer function}%
    },
    col sep=comma,
    dec sep align]{latex-import-data.csv}

As always, we need to load the appropriate packages. To create the table, we just have to use \pgfplotstabletypeset, whose required argument indicates the data. In this case, that is the filename: latex-import-data.csv. We can also load the data into a command first (with \pgfplotstableread) when we need the same table multiple times:

\pgfplotstableread[col sep=comma]{latex-import-data.csv}\transferFunctionData
\pgfplotstabletypeset[<optional arguments>]{\transferFunctionData}

% also valid code:
\addplot[<optional commands>] table[<optional commands>]{\transferFunctionData};

Take note that we would normally wrap this table inside a table environment, but because these snippets are typeset to the SVG images below using the standalone package, this is not possible. The omission of the table environment also explains why the caption (and its row break) have been commented out.

The options are fairly straightforward to use. First, we specify which columns we want to display. Note that the CSV file must contain a header that is readable by LaTeX; best practice is to use single-word column names with clear semantics. If we want to display all columns, we can omit this option entirely. Afterwards, we add some styling to the header and define a caption (if needed). We can also specify how individual columns are displayed (from the header name down to the decimal precision).

At the end, we have two special options: col sep=comma and dec sep align. col sep=comma is required for CSV files, because the default separator is a space, so we need to change it to a comma. Finally, dec sep align aligns the numbers in a column at the decimal separator. This is standard in scientific literature and makes it easy to compare numbers.
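A minimal invocation that relies entirely on the defaults (all columns shown, header names used as-is) would then be:

```latex
% smallest possible version: every column, no custom styling
\pgfplotstabletypeset[col sep=comma, dec sep align]{latex-import-data.csv}
```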

Table data example

Loading Individual Numbers From File

We can also include individual numbers inside running text with my own dynamicnumber package. An example is depicted below.

The content of the measurement.dnlist file is automatically created by the scripting side of the dynamicnumber package:

\pgfkeys{/dynamicnumber/simulation/measurement-gain = -2.23}

With this file at hand, one can use it to include variables stored inside the file, like in this snippet:

% \usepackage{dynamicnumber} % <-- required in preamble


From different measurements, we can conclude that the overall loop gain is
approximately $\dnget{measurement-gain}$. However, the theory predicted a gain of
$-1$. It is clear that the model used incorrectly predicts the loop gain. To address
this issue, we have constructed a second, more complicated model that explicitly
models the non-linearity of the system.

First, we load the dynamic number source file (measurement); afterwards, we use the variables stored inside it with the \dnget command.

For more information on how to use this package (it is very simple to use), see the GitHub README or CTAN documentation.