Q: For this week's Kaggle assignment, how many weight matrices and vectors are there in the "encoder1" block? Check the output of the get_weights() call in this week's assignment.

A: The encoder1 block contains 6 weight matrices, 6 bias vectors, and 4 LayerNormalization vectors (16 arrays in total). The multi-head self-attention block contributes 4 of the weight matrices: one each for the query, key, and value projections, plus a final output projection; the feed-forward network contributes the other 2. Each of the 6 weight matrices has its own bias vector. Each LayerNormalization() layer has a scale vector and a shift vector, and one LayerNormalization() layer follows each residual connection: one after the multi-head self-attention block and another after the feed-forward network block.

The get_weights() output lists the shapes in this order (the per-array labels below follow the query/key/value/projection ordering described above):

(256, 4, 64), (4, 64): query weights and bias
(256, 4, 64), (4, 64): key weights and bias
(256, 4, 64), (4, 64): value weights and bias
(4, 64, 256), (256,): attention output projection weights and bias
(256, 1024), (1024,): first feed-forward layer weights and bias
(1024, 256), (256,): second feed-forward layer weights and bias
(256,), (256,): first LayerNormalization scale and shift
(256,), (256,): second LayerNormalization scale and shift
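As a sanity check, you can count the arrays and total the trainable parameters implied by these shapes. This is a minimal sketch in plain Python (no TensorFlow needed); the shape tuples are copied from the get_weights() output above, and the per-line labels assume the standard query/key/value/projection ordering:

```python
from math import prod

# Shapes printed by get_weights() for the encoder1 block,
# copied from the assignment output (labels are assumptions
# based on the usual Keras ordering).
shapes = [
    (256, 4, 64), (4, 64),   # query weights + bias
    (256, 4, 64), (4, 64),   # key weights + bias
    (256, 4, 64), (4, 64),   # value weights + bias
    (4, 64, 256), (256,),    # attention output projection + bias
    (256, 1024), (1024,),    # feed-forward layer 1 + bias
    (1024, 256), (256,),     # feed-forward layer 2 + bias
    (256,), (256,),          # LayerNorm 1: scale + shift
    (256,), (256,),          # LayerNorm 2: scale + shift
]

print(len(shapes))                   # -> 16 arrays in total
print(sum(prod(s) for s in shapes))  # -> 789760 trainable parameters
```

The count of 16 matches the tally in the answer: 6 weight matrices, 6 bias vectors, and 2 vectors for each of the 2 LayerNormalization() layers.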