models

DualAttentionModel

class ai4water.tf_models.DualAttentionModel(enc_config: Optional[dict] = None, dec_config: Optional[dict] = None, teacher_forcing: bool = True, **kwargs)[source]

Bases: Model

This is the Dual-Attention LSTM model of Qin et al., 2017. The code is adapted from this repository.

Example

>>> from ai4water import DualAttentionModel
>>> from ai4water.datasets import busan_beach
>>> data = busan_beach()
>>> model = DualAttentionModel(ts_args={'lookback': 5},
...                            input_features=data.columns.tolist()[0:-1],
...                            output_features=data.columns.tolist()[-1:])
... # If you do not wish to feed the previous output as input to the model,
... # you can set teacher_forcing to False. The drop_remainder argument must
... # be set to True in such a case.
>>> model = DualAttentionModel(teacher_forcing=False, batch_size=4,
...                            drop_remainder=True, ts_args={'lookback':5})
>>> model.fit(data=data)
__init__(enc_config: Optional[dict] = None, dec_config: Optional[dict] = None, teacher_forcing: bool = True, **kwargs)[source]
Parameters:
  • enc_config

dictionary defining the configuration of the encoder/input attention. It must have the following keys, shown with their default values. A combined usage sketch follows this parameter list.

    • n_h: 20

    • n_s: 20

    • m: 20

    • enc_lstm1_act: None

    • enc_lstm2_act: None

  • dec_config

dictionary defining the configuration of the decoder/output attention. It must have the following three keys, shown with their default values.

    • p: 30

    • n_hde0: None

    • n_sde0: None

  • teacher_forcing – Whether to use the previous target/observation as input or not. If yes, then the model will require two inputs. The first input will be of shape (num_examples, lookback, num_inputs) while the second input will be of shape (num_examples, lookback-1, 1). This second input is the target variable observed at the previous time step.

  • kwargs – The keyword arguments for ai4water's Model class (ai4water.Model)
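
As a sketch of how these configuration dictionaries fit together (the values below are the defaults listed above; combining them with the busan_beach example from the top of this page is illustrative, not a documented recipe):

>>> from ai4water import DualAttentionModel
>>> from ai4water.datasets import busan_beach
>>> data = busan_beach()
>>> # default values, copied from the parameter list above
>>> enc_config = {'n_h': 20, 'n_s': 20, 'm': 20,
...               'enc_lstm1_act': None, 'enc_lstm2_act': None}
>>> dec_config = {'p': 30, 'n_hde0': None, 'n_sde0': None}
>>> model = DualAttentionModel(enc_config=enc_config, dec_config=dec_config,
...                            ts_args={'lookback': 5},
...                            input_features=data.columns.tolist()[0:-1],
...                            output_features=data.columns.tolist()[-1:])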

build(input_shape=None)[source]
decoder_attention(_h_en_all, _y, _s0, _h0)[source]
encoder_attention(_input, _s0, _h0, num_ins, suf: str = '1')[source]
fetch_data(x, y, source, data=None, **kwargs)[source]
get_attention_weights(layer_name: Optional[str] = None, x=None, data=None, data_type='training') -> ndarray[source]
Parameters:
  • layer_name (str, optional) – the name of attention layer. If not given, the final attention layer will be used.

  • x (optional) – input data. If given, then data must not be given.

  • data (optional) – raw data from which the input examples are drawn. If given, then x must not be given.

  • data_type (str, optional) –

    the data on which to make the forward pass to get attention weights. Possible values are

    • training

    • validation

    • test

    • all

Return type:

a numpy array of shape (num_examples, lookback, num_ins)
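
A minimal usage sketch (assuming a model trained as in the example at the top of this page; since layer_name is omitted, the final attention layer is used):

>>> w = model.get_attention_weights(data=data, data_type='training')
>>> w.shape  # (num_examples, lookback, num_ins)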

inputs_for_attention(inputs)[source]

Returns the inputs for the attention mechanism.

interpret(data=None, data_type='training', **kwargs)[source]

Interprets the underlying model. Call it after training.

Returns:

An instance of ai4water.postprocessing.interpret.Interpret class

Example

>>> from ai4water import Model
>>> from ai4water.datasets import busan_beach
>>> model = Model(model=...)
>>> model.fit(data=busan_beach())
>>> model.interpret()
one_decoder_attention_step(_h_de_prev, _s_de_prev, _h_en_all, t)[source]
Parameters:
  • _h_de_prev – previous hidden state

  • _s_de_prev – previous cell state

  • _h_en_all – all encoder hidden states, of shape (None, T, m), where T is the length of the time series and m is the size of the encoder hidden state

  • t – int, timestep

Returns:

x_t's attention weights; n numbers in total, which sum to 1

one_encoder_attention_step(h_prev, s_prev, x, t, suf: str = '1')[source]
Parameters:
  • h_prev – previous hidden state

  • s_prev – previous cell state

  • x – input of shape (T, n), where n is the number of input series and T is the length of the time series

  • t – time-step

  • suf – str, Suffix to be attached to names

Returns:

x_t's attention weights; n numbers in total, which sum to 1

plot_act_along_inputs(data, layer_name: str, data_type='training', vmin=None, vmax=None, show=False)[source]
plot_act_along_lookback(activations, sample=0)[source]
test_data(x=None, y=None, data='test', **kwargs)[source]

Returns the x,y pairs for testing. x and y are not used; they are accepted only so that the user can override this method for further processing of x and y, as shown below.

>>> from ai4water import Model
>>> class MyModel(Model):
>>>     def test_data(self, *args, **kwargs) -> tuple:
>>>         test_x, test_y = super().test_data(*args, **kwargs)
...         # further process x, y
>>>         return test_x, test_y
training_data(x=None, y=None, data='training', key=None)[source]

Returns the x,y pairs for training. x and y are not used; they are accepted only so that the user can override this method for further processing of x and y, as shown below.

>>> from ai4water import Model
>>> class MyModel(Model):
>>>     def training_data(self, *args, **kwargs) -> tuple:
>>>         train_x, train_y = super().training_data(*args, **kwargs)
...         # further process x, y
>>>         return train_x, train_y
validation_data(x=None, y=None, data='validation', **kwargs)[source]

Returns the x,y pairs for validation. x and y are not used; they are accepted only so that the user can override this method for further processing of x and y, as shown below.

>>> from ai4water import Model
>>> class MyModel(Model):
>>>     def validation_data(self, *args, **kwargs) -> tuple:
>>>         val_x, val_y = super().validation_data(*args, **kwargs)
...         # further process x, y
>>>         return val_x, val_y

TemporalFusionTransformer

NBeats

HARHNModel

class ai4water.pytorch_models.HARHNModel(*args, **kwargs)[source]

Bases: Model

__init__(use_cuda=True, teacher_forcing=True, **kwargs)[source]

Initializes the layers of the NN model using the initialize_layers method. All other input arguments go to BaseModel.
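
A hedged construction sketch (reusing the busan_beach data and the ts_args/input_features pattern from the DualAttentionModel example above is an assumption here, not a documented recipe):

>>> from ai4water.pytorch_models import HARHNModel
>>> from ai4water.datasets import busan_beach
>>> data = busan_beach()
>>> model = HARHNModel(use_cuda=False,  # assume no GPU is available
...                    ts_args={'lookback': 5},
...                    input_features=data.columns.tolist()[0:-1],
...                    output_features=data.columns.tolist()[-1:])
>>> model.fit(data=data)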

forward(*inputs: Any, **kwargs: Any)[source]

Implements the forward pass for pytorch-based NN models.

initialize_layers(layers_config: dict, inputs=None)[source]

Initializes the layers/weights/variables which are to be used in the forward or call method.

Parameters:
  • layers_config (dict) – python dictionary defining the neural network. For details see https://ai4water.readthedocs.io/en/latest/build_dl_models.html. A sketch of such a dictionary follows this list.

  • inputs – if None, it is assumed that the Input layer either exists in layers_config or an Input layer will be created within this method before adding any other layer. If not None, then it must be an Input layer, and the remaining NN architecture will be built as defined in layers_config. This can be handy when we want to use this method several times to build a complex or parallel NN structure. Avoid Input in layer names.
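
A minimal sketch of what a layers_config dictionary can look like (the layer names and nested options are illustrative assumptions in the style of the declarative-model documentation linked above; such a dictionary is normally supplied through the model configuration rather than by calling initialize_layers directly):

>>> layers_config = {
...     'LSTM': {'units': 64, 'return_sequences': False},  # recurrent layer
...     'Dense': {'units': 1},                             # output layer
... }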

IMVModel

class ai4water.pytorch_models.IMVModel(*args, **kwargs)[source]

Bases: HARHNModel

__init__(*args, teacher_forcing=False, **kwargs)[source]

Initializes the layers of the NN model using the initialize_layers method. All other input arguments go to BaseModel.

forward(*inputs: Any, **kwargs: Any)[source]

Implements the forward pass for pytorch-based NN models.

initialize_layers(layers_config: dict, inputs=None)[source]

Initializes the layers/weights/variables which are to be used in the forward or call method.

Parameters:
  • layers_config (dict) – python dictionary defining the neural network. For details see https://ai4water.readthedocs.io/en/latest/build_dl_models.html

  • inputs – if None, it is assumed that the Input layer either exists in layers_config or an Input layer will be created within this method before adding any other layer. If not None, then it must be an Input layer, and the remaining NN architecture will be built as defined in layers_config. This can be handy when we want to use this method several times to build a complex or parallel NN structure. Avoid Input in layer names.

interpret(data='training', x=None, annotate=True, vmin=None, vmax=None, **bar_kws)[source]

Interprets the underlying model. Call it after training.

Returns:

An instance of ai4water.postprocessing.interpret.Interpret class

Example

>>> from ai4water import Model
>>> from ai4water.datasets import busan_beach
>>> model = Model(model=...)
>>> model.fit(data=busan_beach())
>>> model.interpret()