Dataset Transforms

Also known as "derived datasets".


This is an experimental feature; the API may change. The data sources in intake.source.derived are not yet declared as top-level named drivers in the package entrypoints.

Intake allows for the definition of data sources which take as their input another source in the same catalog, so that you can present processed, derived views of the data to users of the catalog.

For the time being, a derived dataset must live in the same catalog as the target from which it derives.

This can be done by defining classes which inherit from intake.source.derived.DerivedSource, or by using one of the pre-defined classes in the same module, which usually need to be passed a reference to a function in a Python module. We will demonstrate both.


Consider the following target dataset, which loads some simple facts about US states from a CSV file. This example is taken from the Intake test suite.
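A catalog entry for such a target might look something like the following sketch (the entry name, description and file path here are illustrative assumptions, not the exact values from the test suite):

```yaml
sources:
  input_data:
    description: Basic facts about US states
    driver: csv
    args:
      urlpath: "{{ CATALOG_DIR }}/states.csv"
```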

We now show two ways to apply a super-simple transform to this data, which selects two of the dataframe’s columns.

Class Example

The first version uses an approach in which the transform is encoded in a data source class, and the parameters passed are specific to that transform type. Note that the driver is referred to by its fully-qualified name in the Intake package.
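A catalog entry using this driver might look like the following sketch (the entry name derive_cols is an illustrative assumption; the argument names follow the pattern used by the derived-source drivers):

```yaml
  derive_cols:
    driver: intake.source.derived.Columns
    args:
      targets:
        - input_data
      columns: ["state", "slug"]
```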

The source class for this is included in the Intake codebase, but the important part is:

class Columns(DataFrameTransform):

    def __init__(self, **kwargs):
        kwargs.update(transform=self.pick_columns)
        super().__init__(**kwargs)

    def pick_columns(self, df):
        return df[self._params["columns"]]

We see that this class inherits from DataFrameTransform and passes transform=self.pick_columns to it. Since we know that the inputs and outputs are both dataframes, some additional validation is possible, and the output dataframe's schema can be inferred automatically, which reduces the number of lines of code required.

The given method does exactly what you might imagine: it takes an input dataframe and applies a column selection to it.

Reading this source will indeed, as expected, produce a version of the data with only the selected columns included. It does this by defining the original dataset, applying the selection, and then getting Dask to generate the output. For some datasets, this can mean that the selection is pushed down to the reader, and the data for the dropped columns is never loaded. The user may choose to call .to_dask() instead, and manipulate the lazy dataframe directly before loading.

Functional Example

This second version of the same output uses the more generic and flexible intake.source.derived.DataFrameTransform.

  driver: intake.source.derived.DataFrameTransform
  args:
    targets:
      - input_data
    transform: "intake.source.tests.test_derived._pick_columns"
    transform_kwargs:
      columns: ["state", "slug"]

In this case, we pass a reference to a function defined in the Intake test suite. Normally such functions would be declared in user modules, and those modules and the catalog(s) distributed together as a package.

def _pick_columns(df, columns):
    return df[columns]
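A quick standalone check of this function, independent of Intake (the sample data here is invented for illustration):

```python
import pandas as pd

def _pick_columns(df, columns):
    return df[columns]

# Invented stand-in for the US-states input data
df = pd.DataFrame({
    "state": ["Texas", "Utah"],
    "slug": ["texas", "utah"],
    "area": [695662, 219882],
})

# Apply the selection, as the derived source would with transform_kwargs
out = _pick_columns(df, columns=["state", "slug"])
print(list(out.columns))  # ['state', 'slug']
```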

This is, of course, very similar to the method shown in the previous section, and again applies the selection in the given named argument to the input. Note that Intake does not support embedding actual code in your catalog, since we would not want to allow arbitrary execution of code at catalog load time, as opposed to when a source is explicitly run.

Loading this data source proceeds exactly the same way as the class-based approach, above. Both Dask and in-memory (Pandas, via .read()) methods work as expected. The declaration in YAML, above, is slightly more verbose, but the amount of code is smaller. This demonstrates a tradeoff between flexibility and concision. If there were validation code to add for the arguments or input dataset, it would be less obvious where to put these things.

Barebone Example

The previous two examples both performed dataframe-to-dataframe transforms. However, totally arbitrary computations are possible. Consider the following:

  driver: intake.source.derived.GenericTransform
  args:
    targets:
      - input_data
    transform: builtins.len
    transform_kwargs: {}

This applies len to the input dataframe. cat.barebones.describe() gives the output container type as "other", i.e., not specified. Calling read() on this source gives the single number 50, the number of rows in the input data. This class and DerivedSource are included with the intention of being used as superclasses, and will probably not often be used directly.
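The end result is the same as applying len to the loaded dataframe; for a pandas DataFrame, len counts the rows (a small invented example, not the 50-state dataset itself):

```python
import pandas as pd

# len() of a pandas DataFrame counts its rows, which is why the
# transform above yields 50 for a 50-state input table
df = pd.DataFrame({"state": ["Alabama", "Alaska", "Arizona"]})
n = len(df)
print(n)  # 3
```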

Execution engine

None of the above examples specified explicitly where the compute implied by the transformation will take place. Most Intake drivers support both in-memory containers and Dask; remember that the input dataset here is a dataframe. The behaviour is defined in the driver class itself, so it would be fine to write a driver which makes different assumptions. Suppose, for instance, that the original source is to be loaded from Spark (see the intake-spark package); the driver could explicitly call .to_spark() on the original source, and be assured that it has a Spark object to work with. It should, of course, explain in its documentation what assumptions are being made and that, presumably, the user is expected to also call .to_spark() if they wish to manipulate the Spark object directly.


intake.source.derived.DerivedSource(*args, …)

Base source deriving from another source in the same catalog



intake.source.derived.DataFrameTransform(*args, **kwargs)

Transform where the input and output are both Dask-compatible dataframes

intake.source.derived.Columns(*args, **kwargs)

Simple dataframe transform to pick columns

class intake.source.derived.DerivedSource(*args, **kwargs)

Base source deriving from another source in the same catalog

Target picking and parameter validation are performed here, but you probably want to subclass from one of the more specific classes like DataFrameTransform.

class intake.source.derived.GenericTransform(*args, **kwargs)

Perform an arbitrary function to transform an input

optional_params = {'allow_dask': True}

transform: function to perform transform

function(container_object) -> output, or a fully-qualified dotted string pointing to it

transform_params: dict

The keys are names of kwargs to pass to the transform function. Values are either concrete values to pass; or param objects which can be made into widgets (but must have a default value) - or a spec to be able to make these objects.

allow_dask: bool (optional, default True)

Whether to_dask() is expected to work, which will in turn call the target's to_dask()


read()

Load entire dataset into a container and return it


to_dask()

Return a dask container for this data source

class intake.source.derived.Columns(*args, **kwargs)

Simple dataframe transform to pick columns

Given as an example of how to make a specific dataframe transform. Note that you could use DataFrameTransform directly, by writing a function to choose the columns instead of a method as here.