deeplearn@ML-RefVm-967342:~/speech$ wget https://cross-entropy.net/ML530/speech-tensors.py.txt
--2022-11-13 05:01:35--  https://cross-entropy.net/ML530/speech-tensors.py.txt
Resolving cross-entropy.net (cross-entropy.net)... 107.180.57.14
Connecting to cross-entropy.net (cross-entropy.net)|107.180.57.14|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1675 (1.6K) [text/plain]
Saving to: ‘speech-tensors.py.txt’

speech-tensors.py.txt 100%[============================================================>]   1.64K  --.-KB/s    in 0s

2022-11-13 05:01:35 (1.11 GB/s) - ‘speech-tensors.py.txt’ saved [1675/1675]

deeplearn@ML-RefVm-967342:~/speech$ time python speech-tensors.py.txt

real    7m55.611s
user    26m27.028s
sys     15m54.828s
deeplearn@ML-RefVm-967342:~/speech$ wget https://cross-entropy.net/ML530/transformer.py
--2022-11-13 05:14:49--  https://cross-entropy.net/ML530/transformer.py
Resolving cross-entropy.net (cross-entropy.net)... 107.180.57.14
Connecting to cross-entropy.net (cross-entropy.net)|107.180.57.14|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1532 (1.5K)
Saving to: ‘transformer.py’

transformer.py        100%[============================================================>]   1.50K  --.-KB/s    in 0s

2022-11-13 05:14:49 (974 MB/s) - ‘transformer.py’ saved [1532/1532]

deeplearn@ML-RefVm-967342:~/speech$ wget https://cross-entropy.net/ML530/speech-train.py.txt
--2022-11-13 05:14:58--  https://cross-entropy.net/ML530/speech-train.py.txt
Resolving cross-entropy.net (cross-entropy.net)... 107.180.57.14
Connecting to cross-entropy.net (cross-entropy.net)|107.180.57.14|:443... connected.
HTTP request sent, awaiting response...
200 OK
Length: 2713 (2.6K) [text/plain]
Saving to: ‘speech-train.py.txt’

speech-train.py.txt   100%[============================================================>]   2.65K  --.-KB/s    in 0s

2022-11-13 05:14:58 (1.68 GB/s) - ‘speech-train.py.txt’ saved [2713/2713]

deeplearn@ML-RefVm-967342:~/speech$ time python speech-train.py.txt
2022-11-13 05:15:37.339080: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-11-13 05:15:49.137409: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1532] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 10794 MB memory:  -> device: 0, name: Tesla K80, pci bus id: 0001:00:00.0, compute capability: 3.7
(256, 4, 64)
(4, 64)
(256, 4, 64)
(4, 64)
(256, 4, 64)
(4, 64)
(4, 64, 256)
(256,)
(256, 1024)
(1024,)
(1024, 256)
(256,)
(256,)
(256,)
(256,)
(256,)
Model: "model"
__________________________________________________________________________________________________
 Layer (type)                   Output Shape         Param #     Connected to
==================================================================================================
 input_1 (InputLayer)           [(None, 32, 20)]     0           []

 input_2 (InputLayer)           [(None, 32)]         0           []

 conv1d (Conv1D)                (None, 32, 256)      10496       ['input_1[0][0]']

 embedding (Embedding)          (None, 32, 256)      8192        ['input_2[0][0]']

 add (Add)                      (None, 32, 256)      0           ['conv1d[0][0]',
                                                                  'embedding[0][0]']

 layer_normalization (LayerNorm  (None, 32, 256)     512        ['add[0][0]']
 alization)

 encoder1 (TransformerEncoder)  (None, 32, 256)      789760      ['layer_normalization[0][0]']

 encoder2 (TransformerEncoder)  (None, 32, 256)      789760      ['encoder1[0][0]']

 encoder3 (TransformerEncoder)  (None, 32, 256)      789760      ['encoder2[0][0]']

 encoder4 (TransformerEncoder)  (None, 32, 256)      789760      ['encoder3[0][0]']

 lambda (Lambda)                (None, 256)          0           ['encoder4[0][0]']

 dense_8 (Dense)                (None, 320)          82240       ['lambda[0][0]']

 dropout_8 (Dropout)            (None, 320)          0           ['dense_8[0][0]']

 dense_9 (Dense)                (None, 30)           9630        ['dropout_8[0][0]']

==================================================================================================
Total params: 3,270,110
Trainable params: 3,270,110
Non-trainable params: 0
__________________________________________________________________________________________________
Epoch 1/32
2022-11-13 05:16:11.361587: I tensorflow/stream_executor/cuda/cuda_dnn.cc:384] Loaded cuDNN version 8500
400/400 [==============================] - 85s 109ms/step - loss: 1.5095 - accuracy: 0.5512 - val_loss: 0.5498 - val_accuracy: 0.8342
Epoch 2/32
400/400 [==============================] - 43s 107ms/step - loss: 0.5144 - accuracy: 0.8469 - val_loss: 0.3648 - val_accuracy: 0.8885
Epoch 3/32
400/400 [==============================] - 43s 107ms/step - loss: 0.3678 - accuracy: 0.8913 - val_loss: 0.3099 - val_accuracy: 0.9078
Epoch 4/32
400/400 [==============================] - 43s 107ms/step - loss: 0.3134 - accuracy: 0.9058 - val_loss: 0.2892 - val_accuracy: 0.9138
Epoch 5/32
400/400 [==============================] - 43s 107ms/step - loss: 0.2561 - accuracy: 0.9232 - val_loss: 0.2743 - val_accuracy: 0.9212
Epoch 6/32
400/400 [==============================] - 43s 107ms/step - loss: 0.2163 - accuracy: 0.9344 - val_loss: 0.2582 - val_accuracy: 0.9260
Epoch 7/32
400/400 [==============================] - 43s 107ms/step - loss: 0.1888 - accuracy: 0.9424 - val_loss: 0.2555 - val_accuracy: 0.9263
Epoch 8/32
400/400 [==============================] - 43s 108ms/step - loss: 0.1745 - accuracy: 0.9474 - val_loss: 0.2470 - val_accuracy: 0.9315
Epoch 9/32
400/400 [==============================] - 43s 108ms/step - loss: 0.1535 - accuracy: 0.9534 - val_loss: 0.2515 - val_accuracy: 0.9292
Epoch 10/32
400/400 [==============================] - 43s 108ms/step - loss: 0.1395 - accuracy: 0.9574 - val_loss: 0.2601 - val_accuracy: 0.9289
Epoch 11/32
400/400 [==============================] - 43s 108ms/step - loss: 0.1289 - accuracy: 0.9611 - val_loss: 0.2590 - val_accuracy: 0.9287
Epoch 12/32
400/400 [==============================] - 43s 108ms/step - loss: 0.1223 - accuracy: 0.9626 - val_loss: 0.2601 - val_accuracy: 0.9334
Epoch 13/32
400/400 [==============================] - 43s 108ms/step - loss: 0.1062 - accuracy: 0.9680 - val_loss: 0.2614 - val_accuracy: 0.9301
Epoch 14/32
400/400 [==============================] - 43s 108ms/step - loss: 0.1049 - accuracy: 0.9689 - val_loss: 0.2648 - val_accuracy: 0.9307
Epoch 15/32
400/400 [==============================] - 43s 108ms/step - loss: 0.0940 - accuracy: 0.9715 - val_loss: 0.2683 - val_accuracy: 0.9335
Epoch 16/32
400/400 [==============================] - 44s 109ms/step - loss: 0.0861 - accuracy: 0.9738 - val_loss: 0.2727 - val_accuracy: 0.9306
Epoch 17/32
400/400 [==============================] - 44s 109ms/step - loss: 0.0789 - accuracy: 0.9768 - val_loss: 0.2872 - val_accuracy: 0.9312
Epoch 18/32
400/400 [==============================] - 44s 109ms/step - loss: 0.0726 - accuracy: 0.9781 - val_loss: 0.2848 - val_accuracy: 0.9320
Epoch 19/32
400/400 [==============================] - 43s 109ms/step - loss: 0.0684 - accuracy: 0.9793 - val_loss: 0.2847 - val_accuracy: 0.9366
Epoch 20/32
400/400 [==============================] - 43s 109ms/step - loss: 0.0645 - accuracy: 0.9811 - val_loss: 0.2992 - val_accuracy: 0.9309
Epoch 21/32
400/400 [==============================] - 43s 109ms/step - loss: 0.0615 - accuracy: 0.9814 - val_loss: 0.3085 - val_accuracy: 0.9307
Epoch 22/32
400/400 [==============================] - 44s 109ms/step - loss: 0.0582 - accuracy: 0.9824 - val_loss: 0.3040 - val_accuracy: 0.9335
Epoch 23/32
400/400 [==============================] - 44s 109ms/step - loss: 0.0533 - accuracy: 0.9835 - val_loss: 0.3154 - val_accuracy: 0.9301
Epoch 24/32
400/400 [==============================] - 43s 109ms/step - loss: 0.0490 - accuracy: 0.9854 - val_loss: 0.3257 - val_accuracy: 0.9317
Epoch 25/32
400/400 [==============================] - 44s 110ms/step - loss: 0.0464 - accuracy: 0.9858 - val_loss: 0.3252 - val_accuracy: 0.9345
Epoch 26/32
400/400 [==============================] - 43s 109ms/step - loss: 0.0462 - accuracy: 0.9860 - val_loss: 0.3168 - val_accuracy: 0.9359
Epoch 27/32
400/400 [==============================] - 43s 109ms/step - loss: 0.0448 - accuracy: 0.9866 - val_loss: 0.3151 - val_accuracy: 0.9351
Epoch 28/32
400/400 [==============================] - 43s 109ms/step - loss: 0.0415 - accuracy: 0.9878 - val_loss: 0.3342 - val_accuracy: 0.9316
Epoch 29/32
400/400 [==============================] - 44s 109ms/step - loss: 0.0386 - accuracy: 0.9884 - val_loss: 0.3447 - val_accuracy: 0.9340
Epoch 30/32
400/400 [==============================] - 43s 109ms/step - loss: 0.0374 - accuracy: 0.9886 - val_loss: 0.3517 - val_accuracy: 0.9337
Epoch 31/32
400/400 [==============================] - 43s 109ms/step - loss: 0.0352 - accuracy: 0.9896 - val_loss: 0.3632 - val_accuracy: 0.9317
Epoch 32/32
400/400 [==============================] - 43s 109ms/step - loss: 0.0345 - accuracy: 0.9900 - val_loss: 0.3579 - val_accuracy: 0.9329
214/214 [==============================] - 3s 13ms/step

real    24m38.821s
user    17m24.014s
sys     0m40.913s
deeplearn@ML-RefVm-967342:~/speech$ kaggle competitions submit -c ml530-2022-fall-speech -f predictions.csv -m "24:38"
100%|█████████████████████████████████████████████████████████████████████████████████████████████| 58.0k/58.0k [00:00<00:00, 97.6kB/s]
Successfully submitted to ml530-2022-fall-speech
deeplearn@ML-RefVm-967342:~/speech$
deeplearn@ML-RefVm-967342:~/speech$
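A sanity check on the log above: the tuples printed before the model summary look like the weight shapes of one TransformerEncoder block (Q/K/V projections and biases for 4 heads of size 64, the attention output projection, two feed-forward Dense layers, and two LayerNorm gain/bias pairs). The layer-role comments below are assumptions read off those shapes, not taken from the training script, but the arithmetic reproduces both the 789,760 per-encoder count and the 3,270,110 total in the summary:

```python
from math import prod

# Weight shapes printed before the model summary (one TransformerEncoder).
# Role labels are inferred from the shapes, not from the script itself.
encoder_shapes = [
    (256, 4, 64), (4, 64),   # query projection kernel, bias (4 heads x 64 dims)
    (256, 4, 64), (4, 64),   # key projection kernel, bias
    (256, 4, 64), (4, 64),   # value projection kernel, bias
    (4, 64, 256), (256,),    # attention output projection kernel, bias
    (256, 1024), (1024,),    # feed-forward expansion Dense
    (1024, 256), (256,),     # feed-forward contraction Dense
    (256,), (256,),          # layer norm 1 gamma, beta
    (256,), (256,),          # layer norm 2 gamma, beta
]
encoder_params = sum(prod(s) for s in encoder_shapes)
print(encoder_params)  # 789760, matching each encoderN row in the summary

# Remaining rows of the Param # column. The 10496 for conv1d is consistent
# with a kernel size of 2 over the 20 input features (an inference, since
# the kernel size is not shown in the log).
conv1d = 2 * 20 * 256 + 256   # 10496
embedding = 32 * 256          # 8192: one 256-dim vector per sequence position
layer_norm = 2 * 256          # 512: gamma + beta
dense_8 = 256 * 320 + 320     # 82240
dense_9 = 320 * 30 + 30       # 9630: 30 output classes
total = conv1d + embedding + layer_norm + 4 * encoder_params + dense_8 + dense_9
print(total)  # 3270110, matching "Total params: 3,270,110"
```

The (256, 4, 64) kernels are the per-head layout Keras's multi-head attention uses: each of the 4 heads projects the 256-dim input to 64 dims, and (4, 64, 256) maps the concatenated heads back to the 256-dim model width.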