Model Visualization

class ai4water.postprocessing.visualize.Visualize(model, save=True, show=True)[source]

Bases: ai4water.utils.plotting_tools.Plots

Helper class to peek inside the machine learning model.

If the machine learning model consists of layers of neural networks, then this class can be used to plot the following four items:

  • outputs of individual layers

  • gradients of outputs of individual layers

  • weights and biases of individual layers

  • gradients of weights of individual layers

If the machine learning model consists of a decision tree, then this class can be used to plot the learned tree of the model.

- get_activations
- activations
- get_activation_gradients
- activation_gradients
- get_weights
- weights
- get_weight_gradients
- weight_gradients
- decision_tree
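As a hedged sketch of how the class might be driven (the workflow and the layer name below are illustrative assumptions, not examples from the library):

```python
def visualize_trained_model(model):
    """Hypothetical helper: drives Visualize using the documented signatures.

    `model` is assumed to be a trained ai4water neural-network model;
    the layer name "LSTM_1" is made up for illustration.
    """
    from ai4water.postprocessing.visualize import Visualize

    vis = Visualize(model, save=True, show=False)
    vis.activations(layer_names="LSTM_1")   # outputs of one layer
    vis.weights()                           # weights of all layers
    vis.weight_gradients(data="training")   # gradients of weights
    return vis
```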
__init__(model, save=True, show=True)[source]
Parameters

model – the learned machine learning model.

activation_gradients(layer_names: Union[str, list], data='training', x=None, y=None, examples_to_use=None, plot_type='2D', show: bool = False)[source]

Plots the gradients of activations/outputs of layers.

Parameters
  • layer_names – name(s) of the layer(s) whose output gradients are to be plotted.

  • data – the data to be used for calculating gradients

  • x – alternative to data

  • y – alternative to data

  • examples_to_use – the examples from the data to use. If None, all examples in the data will be used.

  • plot_type

  • show
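To make the quantity being plotted concrete, here is a minimal numpy sketch (not ai4water code) of a "gradient of a layer's output" for the simplest case, the output layer, under an assumed mean-squared-error loss:

```python
import numpy as np

# Illustrative sketch: gradient of the loss with respect to a layer's
# output (an "activation gradient"), for an output layer under an
# assumed MSE loss L = mean((a - y)^2).
rng = np.random.default_rng(0)
a = rng.normal(size=(8, 1))   # activations of the final layer, 8 examples
y = rng.normal(size=(8, 1))   # corresponding true labels

# Analytic gradient dL/da = 2 * (a - y) / n: one value per activation,
# so the gradient array has the same shape as the layer output.
grad = 2.0 * (a - y) / a.size
```

For hidden layers the same quantity is obtained by backpropagation rather than this closed form, but the shape relationship (one gradient per activation) is identical.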

activation_gradients_1D(layer_names, data='training', x=None, y=None, examples_to_use=None, show=False)[source]

Plots gradients of layer outputs as 1D.

Parameters
  • layer_names

  • examples_to_use

  • data

  • x

  • y

  • show

activation_gradients_2D(layer_names=None, data='training', x=None, y=None, examples_to_use=24, show=True)[source]

Plots gradients of outputs of intermediate layers (except input and output) as 2D.

Parameters
  • layer_names

  • data

  • x

  • y

  • examples_to_use – if integer, it will be the number of examples to use. If array like, it will be index of examples to use.

  • show

activations(layer_names=None, data: str = 'training', x=None, examples_to_use: Optional[Union[int, list, numpy.ndarray, range]] = None, show: bool = False)[source]

Plots outputs of any layer of the neural network.

Parameters
  • data – The data to be used for calculating outputs of layers.

  • x – if given, will override data.

  • layer_names – name of the layer whose output is to be plotted. If None, outputs of all layers are plotted.

  • examples_to_use – If integer, it will be the number of examples to use. If array like, it will be the indices of examples to use.

  • show
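The two forms of examples_to_use described above can be sketched with plain numpy (illustrative only; the integer case is taken here as "first n examples"):

```python
import numpy as np

# Sketch of the examples_to_use semantics: an integer selects a number
# of examples, an array-like is used as explicit indices, None keeps all.
layer_output = np.arange(40).reshape(10, 4)   # one layer's output, 10 examples

def select_examples(outputs, examples_to_use=None):
    if examples_to_use is None:
        return outputs                               # all examples
    if isinstance(examples_to_use, int):
        return outputs[:examples_to_use]             # a number of examples
    return outputs[np.asarray(examples_to_use)]      # explicit indices

subset = select_examples(layer_output, examples_to_use=[0, 5, 9])
```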

decision_tree(show=False, **kwargs)[source]

Plots the decision tree

decision_tree_leaves(save=True, data='training')[source]

Plots dtreeviz-related plots if dtreeviz is installed.

features_1d(data, save=True, name='', **kwargs)[source]
features_2d(data, name, save=True, slices=24, slice_dim=0, **kwargs)[source]

Calls the features_2d from see-rnn

find_num_lstms(layer_names=None) list[source]

Finds names of LSTM layers in the model.

get_activation_gradients(layer_names: Optional[Union[str, list]] = None, x=None, y=None, data: str = 'training') dict[source]

Finds gradients of outputs of a layer.

Either x and y, or data must be given.

Parameters
  • layer_names – the layer(s) whose output gradients are to be calculated.

  • x – input data. Will override data.

  • y – corresponding label of x. Will override data.

  • data – one of training, test or validation

get_activations(layer_names: Optional[Union[str, list]] = None, x=None, data: str = 'training', batch_size=None) dict[source]

Gets the activations/outputs of any layer of the Keras model.

Parameters
  • layer_names – name, or list of names, of layers whose activations are to be returned.

  • x – If provided, it will override data.

  • data – data to use to get activations. Only relevant if x is not provided. By default, training data is used. Possible values are training, test or validation.

Returns

a dictionary whose keys are names of layers and values are the outputs of those layers as numpy arrays
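The shape of that return value can be pictured with a small stand-in (the layer names below are hypothetical):

```python
import numpy as np

# Stand-in for the documented return value: layer name -> numpy array of
# that layer's outputs. "Dense_1"/"Dense_2" are made-up layer names.
activations = {
    "Dense_1": np.zeros((100, 32)),  # 100 examples, 32 units
    "Dense_2": np.zeros((100, 1)),   # 100 examples, 1 output
}
shapes = {name: arr.shape for name, arr in activations.items()}
```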

get_rnn_weights(weights: dict, layer_names=None) dict[source]

Finds RNN related weights.

It combines the kernel, recurrent kernel, and bias of each layer into a list.

get_weight_gradients(data: str = 'training', x=None, y=None) dict[source]

Returns the gradients of weights.

Parameters
  • data – the data to use to calculate gradients of weights.

  • x

  • y

Returns

dictionary whose keys are names of layers and values are gradients of weights as numpy arrays.
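The structure of that dictionary, and the quantity inside it, can be sketched in plain numpy (not the library's implementation; an MSE loss and the layer name "dense_1" are assumptions for illustration):

```python
import numpy as np

# Illustrative sketch: gradient of an assumed MSE loss with respect to
# one dense layer's weights, packed into a {layer_name: gradient} dict.
rng = np.random.default_rng(1)
x = rng.normal(size=(16, 3))     # inputs, 16 examples
w = rng.normal(size=(3, 1))      # trainable weights of the layer
y = rng.normal(size=(16, 1))     # labels

pred = x @ w
# dL/dw for L = mean((x @ w - y)^2) is (2/n) * x.T @ (pred - y);
# the gradient has the same shape as the weight array it refers to.
weight_grads = {"dense_1": (2.0 / y.size) * x.T @ (pred - y)}
```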

get_weights()[source]

Returns all trainable weights as arrays in a dictionary.

rnn_histogram(data, save=True, name='', show=False, **kwargs)[source]
rnn_weight_grads_as_hist(layer_name=None, data='training', x=None, y=None, show=False)[source]
rnn_weights_histograms(layer_name, show=False)[source]
weight_gradients(layer_names: Optional[Union[str, list]] = None, data='training', x=None, y=None, show: bool = False)[source]

Plots gradients of all trainable weights.

Parameters
  • layer_names – the layer(s) whose weights are to be considered.

  • data – the data to use to calculate gradients of weights

  • x – alternative to data

  • y – alternative to data

  • show – whether to show the plot or not.

weights(layer_names: Optional[Union[str, list]] = None, show: bool = False)[source]

Plots the weights of a specific layer or all layers.

Parameters
  • layer_names – The layer whose weights are to be viewed.

  • show