[ ]:
%matplotlib inline


[ ]:
try:
    import ai4water
except (ImportError, ModuleNotFoundError):
    !pip install ai4water[tf2]

Neural Networks for Classification

This notebook shows how to build a neural network for a binary classification problem using AI4Water's Model class.

[1]:

import site
site.addsitedir("D:\\mytools\\AI4Water")  # local path to the AI4Water source; adjust or remove as needed

import numpy as np
import pandas as pd

from ai4water import Model
from ai4water.models import MLP
from ai4water.utils.utils import get_version_info
from ai4water.datasets import MtropicsLaos
[2]:
for k,v in get_version_info().items():
    print(f"{k} version: {v}")
python version: 3.9.7 | packaged by conda-forge | (default, Sep 29 2021, 19:20:16) [MSC v.1916 64 bit (AMD64)]
os version: nt
ai4water version: 1.06
lightgbm version: 3.3.1
tcn version: 3.4.0
catboost version: 0.26
xgboost version: 1.5.0
easy_mpl version: 0.21.2
SeqMetrics version: 1.3.3
tensorflow version: 2.7.0
keras.api._v2.keras version: 2.7.0
numpy version: 1.21.0
pandas version: 1.3.4
matplotlib version: 3.4.3
h5py version: 3.5.0
sklearn version: 1.0.1
shapefile version: 2.3.0
xarray version: 0.20.1
netCDF4 version: 1.5.7
optuna version: 2.10.1
skopt version: 0.9.0
hyperopt version: 0.2.7
plotly version: 5.3.1
lime version: NotDefined
seaborn version: 0.11.2
[3]:

dataset = MtropicsLaos(
    save_as_nc=True,  # if set to True, then netCDF4 must be installed
    convert_to_csv=False,
    path="F:\\data\\MtropicsLaos",
)
data = dataset.make_classification(lookback_steps=1)
data.shape

    Not downloading the data since the directory
    F:\data\MtropicsLaos already exists.
    Use overwrite=True to remove previously saved files and download again
Value based partial slicing on non-monotonic DatetimeIndexes with non-existing keys is deprecated and will raise a KeyError in a future Version.
[3]:
(258, 9)
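
The returned DataFrame has 8 input features and a binary target in the last column (258 samples, 9 columns). Before building the model, it is worth checking how balanced the target is; a minimal sketch, assuming the target is the DataFrame's last column (which is how it is used when defining the model below):

[ ]:
# count how many samples belong to each class of the binary target;
# a minimal sketch, assuming the target is the last column of `data`
data.iloc[:, -1].value_counts()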
[4]:
model = Model(
    # all columns except the last are input features
    input_features=data.columns.tolist()[0:-1],
    # the last column is the (binary) target
    output_features=data.columns.tolist()[-1:],
    model=MLP(units=10, mode="classification"),
    lr=0.009919,
    batch_size=8,
    split_random=True,  # split the data into train/validation/test randomly
    x_transformation="zscore",  # standardize the input features
    epochs=200,
    loss="binary_crossentropy"
)


            building DL model for classification problem using Model
Model: "model"
_________________________________________________________________
 Layer (type)                Output Shape              Param #
=================================================================
 input_1 (InputLayer)        [(None, 8)]               0

 Dense_0 (Dense)             (None, 10)                90

 Flatten (Flatten)           (None, 10)                0

 Dense_out (Dense)           (None, 1)                 11

=================================================================
Total params: 101
Trainable params: 101
Non-trainable params: 0
_________________________________________________________________
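
The MLP helper does not build the Keras network itself; it returns a configuration that Model turns into the layers summarised above. If you want to customise the architecture, inspecting that configuration is a useful starting point. A minimal sketch; that MLP returns a dictionary of layer definitions is an assumption about ai4water's config format:

[ ]:
# inspect the layer configuration produced by MLP; a sketch, assuming
# MLP returns a dict of layer definitions (e.g. {"layers": {...}})
from ai4water.models import MLP

print(MLP(units=10, mode="classification"))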
[5]:
h = model.fit(data=data)
***** Training *****
input_x shape:  (144, 8)
target shape:  (144, 2)
***** Validation *****
input_x shape:  (36, 8)
target shape:  (36, 2)
Epoch 1/200
assigning name input_1 to IteratorGetNext:0 with shape (8, 8)
assigning name input_1 to IteratorGetNext:0 with shape (None, 8)
18/18 [==============================] - 2s 7ms/step - loss: 0.6951 - val_loss: 0.5219
Epoch 2/200
18/18 [==============================] - 0s 2ms/step - loss: 0.5498 - val_loss: 0.5585
Epoch 3/200
18/18 [==============================] - 0s 2ms/step - loss: 0.5060 - val_loss: 0.5405
Epoch 4/200
18/18 [==============================] - 0s 2ms/step - loss: 0.5016 - val_loss: 0.5410
Epoch 5/200
18/18 [==============================] - 0s 2ms/step - loss: 0.5022 - val_loss: 0.5525
Epoch 6/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4975 - val_loss: 0.5404
Epoch 7/200
18/18 [==============================] - 0s 2ms/step - loss: 0.5017 - val_loss: 0.5364
Epoch 8/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4999 - val_loss: 0.5451
Epoch 9/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4963 - val_loss: 0.5359
Epoch 10/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4954 - val_loss: 0.5537
Epoch 11/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4968 - val_loss: 0.5275
Epoch 12/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4980 - val_loss: 0.5299
Epoch 13/200
18/18 [==============================] - 0s 2ms/step - loss: 0.5008 - val_loss: 0.5274
Epoch 14/200
18/18 [==============================] - 0s 2ms/step - loss: 0.5034 - val_loss: 0.5417
Epoch 15/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4939 - val_loss: 0.5340
Epoch 16/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4939 - val_loss: 0.5319
Epoch 17/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4922 - val_loss: 0.5318
Epoch 18/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4969 - val_loss: 0.5433
Epoch 19/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4956 - val_loss: 0.5374
Epoch 20/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4935 - val_loss: 0.5304
Epoch 21/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4997 - val_loss: 0.5263
Epoch 22/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4976 - val_loss: 0.5416
Epoch 23/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4970 - val_loss: 0.5415
Epoch 24/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4986 - val_loss: 0.5306
Epoch 25/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4957 - val_loss: 0.5372
Epoch 26/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4941 - val_loss: 0.5444
Epoch 27/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4976 - val_loss: 0.5291
Epoch 28/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4943 - val_loss: 0.5412
Epoch 29/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4935 - val_loss: 0.5301
Epoch 30/200
18/18 [==============================] - 0s 2ms/step - loss: 0.5011 - val_loss: 0.5183
Epoch 31/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4965 - val_loss: 0.5463
Epoch 32/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4923 - val_loss: 0.5307
Epoch 33/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4959 - val_loss: 0.5374
Epoch 34/200
18/18 [==============================] - 0s 2ms/step - loss: 0.5000 - val_loss: 0.5367
Epoch 35/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4975 - val_loss: 0.5375
Epoch 36/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4979 - val_loss: 0.5427
Epoch 37/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4960 - val_loss: 0.5214
Epoch 38/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4965 - val_loss: 0.5238
Epoch 39/200
18/18 [==============================] - 0s 2ms/step - loss: 0.5011 - val_loss: 0.5486
Epoch 40/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4974 - val_loss: 0.5446
Epoch 41/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4933 - val_loss: 0.5342
Epoch 42/200
18/18 [==============================] - 0s 2ms/step - loss: 0.5027 - val_loss: 0.5213
Epoch 43/200
18/18 [==============================] - 0s 2ms/step - loss: 0.5004 - val_loss: 0.5542
Epoch 44/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4963 - val_loss: 0.5333
Epoch 45/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4976 - val_loss: 0.5325
Epoch 46/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4934 - val_loss: 0.5324
Epoch 47/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4948 - val_loss: 0.5344
Epoch 48/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4968 - val_loss: 0.5309
Epoch 49/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4959 - val_loss: 0.5341
Epoch 50/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4917 - val_loss: 0.5391
Epoch 51/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4947 - val_loss: 0.5308
Epoch 52/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4969 - val_loss: 0.5294
Epoch 53/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4941 - val_loss: 0.5413
Epoch 54/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4938 - val_loss: 0.5405
Epoch 55/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4965 - val_loss: 0.5269
Epoch 56/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4955 - val_loss: 0.5339
Epoch 57/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4918 - val_loss: 0.5444
Epoch 58/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4936 - val_loss: 0.5460
Epoch 59/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4955 - val_loss: 0.5432
Epoch 60/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4933 - val_loss: 0.5246
Epoch 61/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4933 - val_loss: 0.5335
Epoch 62/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4984 - val_loss: 0.5265
Epoch 63/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4938 - val_loss: 0.5428
Epoch 64/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4943 - val_loss: 0.5386
Epoch 65/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4943 - val_loss: 0.5326
Epoch 66/200
18/18 [==============================] - 0s 2ms/step - loss: 0.5044 - val_loss: 0.5208
Epoch 67/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4963 - val_loss: 0.5469
Epoch 68/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4954 - val_loss: 0.5328
Epoch 69/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4943 - val_loss: 0.5390
Epoch 70/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4994 - val_loss: 0.5237
Epoch 71/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4935 - val_loss: 0.5426
Epoch 72/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4930 - val_loss: 0.5340
Epoch 73/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4964 - val_loss: 0.5309
Epoch 74/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4944 - val_loss: 0.5488
Epoch 75/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4958 - val_loss: 0.5282
Epoch 76/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4945 - val_loss: 0.5439
Epoch 77/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4931 - val_loss: 0.5300
Epoch 78/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4962 - val_loss: 0.5398
Epoch 79/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4934 - val_loss: 0.5418
Epoch 80/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4926 - val_loss: 0.5386
Epoch 81/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4939 - val_loss: 0.5208
Epoch 82/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4916 - val_loss: 0.5343
Epoch 83/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4925 - val_loss: 0.5282
Epoch 84/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4978 - val_loss: 0.5319
Epoch 85/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4964 - val_loss: 0.5495
Epoch 86/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4934 - val_loss: 0.5404
Epoch 87/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4977 - val_loss: 0.5237
Epoch 88/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4923 - val_loss: 0.5322
Epoch 89/200
18/18 [==============================] - 0s 2ms/step - loss: 0.5016 - val_loss: 0.5560
Epoch 90/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4901 - val_loss: 0.5374
Epoch 91/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4987 - val_loss: 0.5364
Epoch 92/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4945 - val_loss: 0.5258
Epoch 93/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4923 - val_loss: 0.5268
Epoch 94/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4929 - val_loss: 0.5366
Epoch 95/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4941 - val_loss: 0.5325
Epoch 96/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4942 - val_loss: 0.5360
Epoch 97/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4977 - val_loss: 0.5341
Epoch 98/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4938 - val_loss: 0.5376
Epoch 99/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4954 - val_loss: 0.5452
Epoch 100/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4929 - val_loss: 0.5389
Epoch 101/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4959 - val_loss: 0.5302
Epoch 102/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4986 - val_loss: 0.5399
Epoch 103/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4927 - val_loss: 0.5361
Epoch 104/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4931 - val_loss: 0.5351
Epoch 105/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4940 - val_loss: 0.5348
Epoch 106/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4949 - val_loss: 0.5270
Epoch 107/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4958 - val_loss: 0.5360
Epoch 108/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4915 - val_loss: 0.5342
Epoch 109/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4947 - val_loss: 0.5344
Epoch 110/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4947 - val_loss: 0.5400
Epoch 111/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4932 - val_loss: 0.5438
Epoch 112/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4931 - val_loss: 0.5218
Epoch 113/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4962 - val_loss: 0.5416
Epoch 114/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4917 - val_loss: 0.5262
Epoch 115/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4919 - val_loss: 0.5315
Epoch 116/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4922 - val_loss: 0.5319
Epoch 117/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4930 - val_loss: 0.5398
Epoch 118/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4940 - val_loss: 0.5343
Epoch 119/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4909 - val_loss: 0.5375
Epoch 120/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4962 - val_loss: 0.5312
Epoch 121/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4947 - val_loss: 0.5318
Epoch 122/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4967 - val_loss: 0.5371
Epoch 123/200
18/18 [==============================] - 0s 1ms/step - loss: 0.5007 - val_loss: 0.5337
Epoch 124/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4948 - val_loss: 0.5350
Epoch 125/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4931 - val_loss: 0.5399
Epoch 126/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4938 - val_loss: 0.5264
Epoch 127/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4939 - val_loss: 0.5321
Epoch 128/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4924 - val_loss: 0.5386
Epoch 129/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4922 - val_loss: 0.5414
Epoch 130/200
18/18 [==============================] - 0s 2ms/step - loss: 0.4914 - val_loss: 0.5328
[Figure: training and validation loss across epochs]
********** Successfully loaded weights from weights_030_0.51834.hdf5 file **********
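
Training stopped after epoch 130 (out of the 200 requested), and the best weights, saved at epoch 30 where val_loss reached its minimum of about 0.5183, were restored. The loss curves above are generated automatically; they can also be reproduced from the returned history. A sketch, assuming `h` behaves like a Keras History object with a `history` dict of per-epoch metrics:

[ ]:
# plot training and validation loss from the fit history; a sketch,
# assuming `h` is a Keras History with "loss" and "val_loss" entries
import matplotlib.pyplot as plt

plt.plot(h.history["loss"], label="training loss")
plt.plot(h.history["val_loss"], label="validation loss")
plt.xlabel("epoch")
plt.ylabel("binary cross-entropy")
plt.legend()
plt.show()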
[7]:
test_x, test_y = model.test_data(data=data)
***** Test *****
input_x shape:  (78, 8)
target shape:  (78, 2)
[9]:
p = model.predict(x=test_x, y=test_y, verbose=0)
[Figure: prediction results on the test set]
invalid value encountered in true_divide
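
Beyond the automatically generated plots, the test predictions can also be scored explicitly, for instance with the SeqMetrics package from the environment listed above. A sketch, assuming `test_y` is one-hot encoded (its printed shape is (78, 2)) and `p` holds the positive-class probability for each sample; both assumptions should be checked against the actual array shapes:

[ ]:
# score the test predictions; a sketch, assuming `test_y` is one-hot
# encoded and `p` holds one positive-class probability per sample
import numpy as np
from SeqMetrics import ClassificationMetrics

true_labels = np.argmax(test_y, axis=1)
pred_labels = (np.asarray(p).reshape(-1) > 0.5).astype(int)

metrics = ClassificationMetrics(true_labels, pred_labels)
print("accuracy:", metrics.accuracy())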
[ ]: