
I am trying to build a model for activity recognition, using InceptionV3 with pretrained weights as the CNN backbone for feature extraction, followed by an LSTM.

train_generator = datagen.flow_from_directory(
        'dataset/train',
        target_size=(224, 224),  # flow_from_directory expects (height, width)
        batch_size=batch_size,
        class_mode='categorical',  # yields one-hot encoded labels with each batch
        shuffle=True,
        classes=['PlayingPiano', 'HorseRiding', 'Skiing', 'Basketball', 'BaseballPitch'])
validation_generator = datagen.flow_from_directory(
        'dataset/validate',
        target_size=(224, 224),
        batch_size=batch_size,
        class_mode='categorical',
        shuffle=True,
        classes=['PlayingPiano', 'HorseRiding', 'Skiing', 'Basketball', 'BaseballPitch'])
return train_generator, validation_generator
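Note that `flow_from_directory` yields single frames of shape `(batch, 224, 224, 3)`, while the CNN+LSTM model below expects an extra time axis. One minimal way to bridge the two (a sketch only: it treats each frame as a length-1 clip, and the wrapper and dummy generator names are hypothetical):

```python
import numpy as np

def to_sequence_batches(frame_generator):
    """Wrap a frame generator so each batch gains a time axis.

    (batch, H, W, 3) -> (batch, 1, H, W, 3), i.e. each frame
    becomes a length-1 sequence for the TimeDistributed CNN.
    """
    for frames, labels in frame_generator:
        yield np.expand_dims(frames, axis=1), labels

# Dummy stand-in for one flow_from_directory batch (4 frames, 5 classes).
def dummy_frames():
    yield np.zeros((4, 224, 224, 3)), np.zeros((4, 5))

x, y = next(to_sequence_batches(dummy_frames()))
print(x.shape)  # (4, 1, 224, 224, 3)
```

For real activity recognition you would instead stack several consecutive frames of one clip along that axis, which requires grouping frames per video rather than using `flow_from_directory` directly.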


I am training on 5 classes, so the data is split into train and validation folders accordingly. This is my CNN + LSTM architecture:

image = Input(shape=(None, 224, 224, 3), name='image_input')
cnn = applications.inception_v3.InceptionV3(
    weights='imagenet',
    include_top=False,
    pooling='avg')
cnn.trainable = False
# Apply the frozen CNN to every frame of the sequence.
# Wrapping it in a Lambda hides its weights from model.summary(),
# which is why time_distributed_1 reports 0 params below.
encoded_frame = TimeDistributed(Lambda(lambda x: cnn(x)))(image)
encoded_vid = LSTM(256)(encoded_frame)
layer1 = Dense(512, activation='relu')(encoded_vid)
dropout1 = Dropout(0.5)(layer1)
layer2 = Dense(256, activation='relu')(dropout1)
dropout2 = Dropout(0.5)(layer2)
layer3 = Dense(64, activation='relu')(dropout2)
dropout3 = Dropout(0.5)(layer3)
outputs = Dense(5, activation='softmax')(dropout3)
model = Model(inputs=[image], outputs=outputs)
sgd = SGD(lr=0.001, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(optimizer=sgd, loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(train_generator,
                    validation_data=validation_generator,
                    steps_per_epoch=300,
                    epochs=nb_epoch,
                    callbacks=callbacks,
                    verbose=1)  # shuffle is ignored when training from a generator
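The `callbacks` list passed to `fit_generator` is not shown in the question. The "saving model to ./weights_inception/..." lines in the training log match Keras's `ModelCheckpoint` with `monitor='val_loss'`, `save_best_only=True`, `verbose=1`; the filename pattern it implies can be verified with plain string formatting:

```python
# Filename pattern implied by log entries such as
# "Inception_V3.02-0.28.h5" (epoch 2, val_acc 0.28). It would be
# passed as the first argument to keras.callbacks.ModelCheckpoint.
pattern = './weights_inception/Inception_V3.{epoch:02d}-{val_acc:.2f}.h5'
print(pattern.format(epoch=2, val_acc=0.28340))
# → ./weights_inception/Inception_V3.02-0.28.h5
```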


_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
image_input (InputLayer)     (None, None, 224, 224, 3) 0
_________________________________________________________________
time_distributed_1 (TimeDist (None, None, 2048)        0
_________________________________________________________________
lstm_1 (LSTM)                (None, 256)               2360320
_________________________________________________________________
dense_1 (Dense)              (None, 512)               131584
_________________________________________________________________
dropout_1 (Dropout)          (None, 512)               0
_________________________________________________________________
dense_2 (Dense)              (None, 256)               131328
_________________________________________________________________
dropout_2 (Dropout)          (None, 256)               0
_________________________________________________________________
dense_3 (Dense)              (None, 64)                16448
_________________________________________________________________
dropout_3 (Dropout)          (None, 64)                0
_________________________________________________________________
dense_4 (Dense)              (None, 5)                 325
_________________________________________________________________

The model compiles without problems. The trouble starts during training: val_acc reaches about 0.50, then drops back to about 0.30, while the loss stalls around 0.80 and barely moves.

Below are the training logs. You can see the model improve for a while, then slowly degrade and eventually stall. Any idea what the cause could be?

Epoch 00002: val_loss improved from 1.56471 to 1.55652, saving model to ./weights_inception/Inception_V3.02-0.28.h5
Epoch 3/500
300/300 [==============================] - 66s 219ms/step - loss: 1.5436 - acc: 0.3281 - val_loss: 1.5476 - val_acc: 0.2981
Epoch 00003: val_loss improved from 1.55652 to 1.54757, saving model to ./weights_inception/Inception_V3.03-0.30.h5
Epoch 4/500
300/300 [==============================] - 66s 220ms/step - loss: 1.5109 - acc: 0.3593 - val_loss: 1.5284 - val_acc: 0.3588
Epoch 00004: val_loss improved from 1.54757 to 1.52841, saving model to ./weights_inception/Inception_V3.04-0.36.h5
Epoch 5/500
300/300 [==============================] - 66s 221ms/step - loss: 1.4167 - acc: 0.4167 - val_loss: 1.4945 - val_acc: 0.3553
Epoch 00005: val_loss improved from 1.52841 to 1.49446, saving model to ./weights_inception/Inception_V3.05-0.36.h5
Epoch 6/500
300/300 [==============================] - 66s 221ms/step - loss: 1.2941 - acc: 0.4683 - val_loss: 1.4735 - val_acc: 0.4443
Epoch 00006: val_loss improved from 1.49446 to 1.47345, saving model to ./weights_inception/Inception_V3.06-0.44.h5
Epoch 7/500
300/300 [==============================] - 66s 221ms/step - loss: 1.2096 - acc: 0.5116 - val_loss: 1.3738 - val_acc: 0.5186
Epoch 00007: val_loss improved from 1.47345 to 1.37381, saving model to ./weights_inception/Inception_V3.07-0.52.h5
Epoch 8/500
300/300 [==============================] - 66s 221ms/step - loss: 1.1477 - acc: 0.5487 - val_loss: 1.2337 - val_acc: 0.5788
Epoch 00008: val_loss improved from 1.37381 to 1.23367, saving model to ./weights_inception/Inception_V3.08-0.58.h5
Epoch 9/500
300/300 [==============================] - 66s 221ms/step - loss: 1.0809 - acc: 0.5831 - val_loss: 1.2247 - val_acc: 0.5658
Epoch 00009: val_loss improved from 1.23367 to 1.22473, saving model to ./weights_inception/Inception_V3.09-0.57.h5
Epoch 10/500
300/300 [==============================] - 66s 221ms/step - loss: 1.0362 - acc: 0.6089 - val_loss: 1.1704 - val_acc: 0.5774
Epoch 00010: val_loss improved from 1.22473 to 1.17035, saving model to ./weights_inception/Inception_V3.10-0.58.h5
Epoch 11/500
300/300 [==============================] - 66s 221ms/step - loss: 0.9811 - acc: 0.6317 - val_loss: 1.1612 - val_acc: 0.5616
Epoch 00011: val_loss improved from 1.17035 to 1.16121, saving model to ./weights_inception/Inception_V3.11-0.56.h5
Epoch 12/500
300/300 [==============================] - 66s 221ms/step - loss: 0.9444 - acc: 0.6471 - val_loss: 1.1533 - val_acc: 0.5613
Epoch 00012: val_loss improved from 1.16121 to 1.15330, saving model to ./weights_inception/Inception_V3.12-0.56.h5
Epoch 13/500
300/300 [==============================] - 66s 221ms/step - loss: 0.9072 - acc: 0.6650 - val_loss: 1.1843 - val_acc: 0.5361
Epoch 00013: val_loss did not improve from 1.15330
Epoch 14/500
300/300 [==============================] - 66s 221ms/step - loss: 0.8747 - acc: 0.6744 - val_loss: 1.2135 - val_acc: 0.5258
Epoch 00014: val_loss did not improve from 1.15330
Epoch 15/500
300/300 [==============================] - 67s 222ms/step - loss: 0.8666 - acc: 0.6829 - val_loss: 1.1585 - val_acc: 0.5443
Epoch 00015: val_loss did not improve from 1.15330
Epoch 16/500
300/300 [==============================] - 66s 222ms/step - loss: 0.8386 - acc: 0.6926 - val_loss: 1.1503 - val_acc: 0.5482
Epoch 00016: val_loss improved from 1.15330 to 1.15026, saving model to ./weights_inception/Inception_V3.16-0.55.h5
Epoch 17/500
300/300 [==============================] - 66s 221ms/step - loss: 0.8199 - acc: 0.7023 - val_loss: 1.2162 - val_acc: 0.5288
Epoch 00017: val_loss did not improve from 1.15026
Epoch 18/500
300/300 [==============================] - 66s 222ms/step - loss: 0.8018 - acc: 0.7150 - val_loss: 1.1995 - val_acc: 0.5179
Epoch 00018: val_loss did not improve from 1.15026
Epoch 19/500
300/300 [==============================] - 66s 221ms/step - loss: 0.7923 - acc: 0.7186 - val_loss: 1.2218 - val_acc: 0.5137
Epoch 00019: val_loss did not improve from 1.15026
Epoch 20/500
300/300 [==============================] - 67s 222ms/step - loss: 0.7748 - acc: 0.7268 - val_loss: 1.2880 - val_acc: 0.4574
Epoch 00020: val_loss did not improve from 1.15026
Epoch 21/500
300/300 [==============================] - 66s 221ms/step - loss: 0.7604 - acc: 0.7330 - val_loss: 1.2658 - val_acc: 0.4861

  • Answer #1

    Your model has started to overfit. Ideally, as the number of epochs grows, the training loss keeps decreasing (at a rate that depends on the learning rate). If the training loss itself cannot decrease, the model probably has high bias with respect to the data; in that case you could use a larger model (more parameters, or a deeper network).

    You could also try lowering the learning rate. If training still stalls after that, high bias is the likely cause.
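On the learning-rate point: the question's `SGD(lr=0.001, decay=1e-6, ...)` relies on Keras's legacy time-based decay, `lr_t = lr0 / (1 + decay * t)`, where `t` counts optimizer updates. With `decay=1e-6` the rate barely changes, so lowering the base rate explicitly (or using `ReduceLROnPlateau`) does far more. A quick check of that formula (an illustration, not the questioner's code):

```python
# Keras legacy SGD time-based decay: lr_t = lr0 / (1 + decay * t),
# where t is the number of optimizer updates (steps, not epochs).
lr0, decay = 0.001, 1e-6   # values from the question's SGD(...)
steps_per_epoch = 300

def lr_at(step):
    return lr0 / (1.0 + decay * step)

# After 20 epochs (6000 updates) the rate has barely moved:
print(lr_at(20 * steps_per_epoch))  # ~0.000994
```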

  • Answer #2

    Thanks for the help. Yes, the problem was overfitting, so I applied more aggressive dropout to the LSTM, and it helped. But val_loss and val_acc are still quite poor.

    video = Input(shape=(None, 224, 224, 3))
    cnn_base = VGG16(input_shape=(224, 224, 3),
                     weights="imagenet",
                     include_top=False)
    cnn_out = GlobalAveragePooling2D()(cnn_base.output)
    cnn = Model(inputs=cnn_base.input, outputs=cnn_out)
    cnn.trainable = False
    encoded_frames = TimeDistributed(cnn)(video)
    # kernel_regularizer is the Keras 2 name for the old W_regularizer argument
    encoded_sequence = LSTM(32, dropout=0.5, kernel_regularizer=l2(0.01),
                            recurrent_dropout=0.5)(encoded_frames)
    hidden_layer = Dense(units=64, activation="relu")(encoded_sequence)
    dropout = Dropout(0.2)(hidden_layer)
    outputs = Dense(5, activation="softmax")(dropout)
    model = Model([video], outputs)
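A val_acc plateau around 0.75 alongside a val_loss near 1.6 usually means the misclassified clips are predicted with high confidence: categorical cross-entropy charges -log(p) for the probability assigned to the true class, so a few confident mistakes dominate the average. A toy calculation (the probabilities here are made up for illustration):

```python
import math

def sample_loss(p_true):
    # Categorical cross-entropy for one sample:
    # -log(probability assigned to the true class).
    return -math.log(p_true)

# Suppose 76% of clips are right and confident (p=0.95),
# and 24% are wrong and confident (true class gets p=0.02):
avg_loss = 0.76 * sample_loss(0.95) + 0.24 * sample_loss(0.02)
print(round(avg_loss, 2))  # 0.98
```

So val_loss can stay high, or even rise, while val_acc holds steady; calibrating the model's confidence (stronger regularization, or label smoothing) lowers the loss without necessarily changing accuracy.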
    
    

    Here are the logs:

    Epoch 00033: val_loss improved from 1.62041 to 1.57951, saving model to ./weights_inception/Inception_V3.33-0.76.h5
    Epoch 34/500
    100/100 [==============================] - 54s 537ms/step - loss: 0.6301 - acc: 0.9764 - val_loss: 1.6190 - val_acc: 0.7627
    Epoch 00034: val_loss did not improve from 1.57951
    Epoch 35/500
    100/100 [==============================] - 54s 537ms/step - loss: 0.5907 - acc: 0.9840 - val_loss: 1.5927 - val_acc: 0.7608
    Epoch 00035: val_loss did not improve from 1.57951
    Epoch 36/500
    100/100 [==============================] - 54s 537ms/step - loss: 0.5783 - acc: 0.9812 - val_loss: 1.3477 - val_acc: 0.7769
    Epoch 00036: val_loss improved from 1.57951 to 1.34772, saving model to ./weights_inception/Inception_V3.36-0.78.h5
    Epoch 37/500
    100/100 [==============================] - 54s 537ms/step - loss: 0.5618 - acc: 0.9802 - val_loss: 1.6545 - val_acc: 0.7384
    Epoch 00037: val_loss did not improve from 1.34772
    Epoch 38/500
    100/100 [==============================] - 54s 537ms/step - loss: 0.5382 - acc: 0.9818 - val_loss: 1.8298 - val_acc: 0.7421
    Epoch 00038: val_loss did not improve from 1.34772
    Epoch 39/500
    100/100 [==============================] - 54s 536ms/step - loss: 0.5080 - acc: 0.9844 - val_loss: 1.7948 - val_acc: 0.7290
    Epoch 00039: val_loss did not improve from 1.34772
    Epoch 40/500
    100/100 [==============================] - 54s 537ms/step - loss: 0.4800 - acc: 0.9892 - val_loss: 1.8036 - val_acc: 0.7522
    
    
