Data Science (Week 15), June 13
Studying the functional model
• 최서연 • 11 min read
Instead of creating a network object up front, we design the layers first, then create an input object and an output object.
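As a minimal sketch of that pattern (using the imports below; the variable names here are hypothetical and the real layer sizes come later in this post):

# 1. make a symbolic input object (no network class is instantiated)
x_in = tf.keras.layers.Input(shape=(1,))
# 2. design layers and apply them to the input to get an output object
x_out = tf.keras.layers.Dense(1)(x_in)
# 3. wrap the input and output into a model
model_sketch = tf.keras.Model(inputs=x_in, outputs=x_out)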
-
Imports
import tensorflow as tf
import tensorflow.experimental.numpy as tnp
import matplotlib.pyplot as plt
tnp.experimental_enable_numpy_behavior()
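Note: `tnp.experimental_enable_numpy_behavior()` patches `tf.Tensor` with NumPy-style methods such as `.reshape`, which the data cell below relies on. A quick sanity check (illustrative only):

tmp = tnp.linspace(0, 1, 5)   # a tensor via the tnp wrapper
tmp.reshape(5, 1)             # method-style reshape works once numpy behavior is on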
-
data
x = tnp.linspace(0, 1, 100).reshape(100, 1)   # 100 points on [0, 1]
y = x + tf.random.normal([100, 1]) * 0.1      # linear signal plus Gaussian noise
plt.plot(x, y, '.')
[Figure: scatter plot of the simulated data]
-
input layer
tf.keras.layers.Input?
Signature:
tf.keras.layers.Input(
    shape=None,
    batch_size=None,
    name=None,
    dtype=None,
    sparse=None,
    tensor=None,
    ragged=None,
    type_spec=None,
    **kwargs,
)
Docstring:
`Input()` is used to instantiate a Keras tensor.

A Keras tensor is a symbolic tensor-like object, which we augment with
certain attributes that allow us to build a Keras model just by knowing the
inputs and outputs of the model.

For instance, if `a`, `b` and `c` are Keras tensors, it becomes possible to
do: `model = Model(input=[a, b], output=c)`

Args:
  shape: A shape tuple (integers), not including the batch size. For
    instance, `shape=(32,)` indicates that the expected input will be
    batches of 32-dimensional vectors. Elements of this tuple can be None;
    'None' elements represent dimensions where the shape is not known.
  batch_size: optional static batch size (integer).
  name: An optional name string for the layer. Should be unique in a model
    (do not reuse the same name twice). It will be autogenerated if it
    isn't provided.
  dtype: The data type expected by the input, as a string
    (`float32`, `float64`, `int32`...)
  sparse: A boolean specifying whether the placeholder to be created is
    sparse. Only one of 'ragged' and 'sparse' can be True. Note that, if
    `sparse` is False, sparse tensors can still be passed into the input -
    they will be densified with a default value of 0.
  tensor: Optional existing tensor to wrap into the `Input` layer. If set,
    the layer will use the `tf.TypeSpec` of this tensor rather than
    creating a new placeholder tensor.
  ragged: A boolean specifying whether the placeholder to be created is
    ragged. Only one of 'ragged' and 'sparse' can be True. In this case,
    values of 'None' in the 'shape' argument represent ragged dimensions.
    For more information about RaggedTensors, see
    [this guide](https://www.tensorflow.org/guide/ragged_tensors).
  type_spec: A `tf.TypeSpec` object to create the input placeholder from.
    When provided, all other args except name must be None.
  **kwargs: deprecated arguments support. Supports `batch_shape` and
    `batch_input_shape`.

Returns:
  A `tensor`.

Example:

```python
# this is a logistic regression in Keras
x = Input(shape=(32,))
y = Dense(16, activation='softmax')(x)
model = Model(x, y)
```

Note that even if eager execution is enabled, `Input` produces a symbolic
tensor-like object (i.e. a placeholder). This symbolic tensor-like object
can be used with lower-level TensorFlow ops that take tensors as inputs,
as such:

```python
x = Input(shape=(32,))
y = tf.square(x)  # This op will be treated like a layer
model = Model(x, y)
```

(This behavior does not work for higher-order TensorFlow APIs such as
control flow and being directly watched by a `tf.GradientTape`).

However, the resulting model will not track any variables that were used
as inputs to TensorFlow ops. All variable usages must happen within Keras
layers to make sure they will be tracked by the model's weights.

The Keras Input can also create a placeholder from an arbitrary
`tf.TypeSpec`, e.g:

```python
x = Input(type_spec=tf.RaggedTensorSpec(shape=[None, None],
                                        dtype=tf.float32, ragged_rank=1))
y = x.values
model = Model(x, y)
```

When passing an arbitrary `tf.TypeSpec`, it must represent the signature
of an entire batch instead of just one example.

Raises:
  ValueError: If both `sparse` and `ragged` are provided.
  ValueError: If both `shape` and (`batch_input_shape` or `batch_shape`)
    are provided.
  ValueError: If `shape`, `tensor` and `type_spec` are None.
  ValueError: If arguments besides `type_spec` are non-None while
    `type_spec` is passed.
  ValueError: if any unrecognized parameters are provided.
File:      ~/anaconda3/envs/csy/lib/python3.8/site-packages/keras/engine/input_layer.py
Type:      function
x0 = tf.keras.layers.Input(shape=(1,))
x.shape
TensorShape([100, 1])
Since `shape` excludes the batch size (100), we pass in a tuple of length 1.
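For comparison with the data tensor `x`, the symbolic input `x0` should carry `None` in the batch slot (a quick check):

x0.shape
# expected: TensorShape([None, 1])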
-
architecture
l1 = tf.keras.layers.Dense(30)               # first hidden layer: 1 -> 30
a1 = tf.keras.layers.Activation(tf.nn.relu)
x1 = a1(l1(x0))
l2 = tf.keras.layers.Dense(30)               # second hidden layer: 30 -> 30
a2 = tf.keras.layers.Activation(tf.nn.relu)
x2 = a2(l2(x1))
l3 = tf.keras.layers.Dense(1)                # output layer: 30 -> 1
x3 = l3(x2)  # output object
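As an aside, `Dense` also accepts an `activation` argument, so the same architecture can be sketched more compactly (an equivalent chain, not the variables used below):

x1 = tf.keras.layers.Dense(30, activation='relu')(x0)
x2 = tf.keras.layers.Dense(30, activation='relu')(x1)
x3 = tf.keras.layers.Dense(1)(x2)  # output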
-
build the model (net) from input and output
tf.keras.Model?
Init signature: tf.keras.Model(*args, **kwargs)
Docstring:
`Model` groups layers into an object with training and inference features.

Args:
  inputs: The input(s) of the model: a `keras.Input` object or list of
    `keras.Input` objects.
  outputs: The output(s) of the model. See Functional API example below.
  name: String, the name of the model.

There are two ways to instantiate a `Model`:

1 - With the "Functional API", where you start from `Input`, you chain
layer calls to specify the model's forward pass, and finally you create
your model from inputs and outputs:

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(3,))
x = tf.keras.layers.Dense(4, activation=tf.nn.relu)(inputs)
outputs = tf.keras.layers.Dense(5, activation=tf.nn.softmax)(x)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
```

Note: Only dicts, lists, and tuples of input tensors are supported. Nested
inputs are not supported (e.g. lists of list or dicts of dict).

A new Functional API model can also be created by using the intermediate
tensors. This enables you to quickly extract sub-components of the model.

Example:

```python
inputs = keras.Input(shape=(None, None, 3))
processed = keras.layers.RandomCrop(width=32, height=32)(inputs)
conv = keras.layers.Conv2D(filters=2, kernel_size=3)(processed)
pooling = keras.layers.GlobalAveragePooling2D()(conv)
feature = keras.layers.Dense(10)(pooling)

full_model = keras.Model(inputs, feature)
backbone = keras.Model(processed, conv)
activations = keras.Model(conv, feature)
```

Note that the `backbone` and `activations` models are not created with
`keras.Input` objects, but with the tensors that are originated from
`keras.Inputs` objects. Under the hood, the layers and weights will be
shared across these models, so that user can train the `full_model`, and
use `backbone` or `activations` to do feature extraction. The inputs and
outputs of the model can be nested structures of tensors as well, and the
created models are standard Functional API models that support all the
existing APIs.

2 - By subclassing the `Model` class: in that case, you should define your
layers in `__init__()` and you should implement the model's forward pass
in `call()`.

```python
import tensorflow as tf

class MyModel(tf.keras.Model):

  def __init__(self):
    super().__init__()
    self.dense1 = tf.keras.layers.Dense(4, activation=tf.nn.relu)
    self.dense2 = tf.keras.layers.Dense(5, activation=tf.nn.softmax)

  def call(self, inputs):
    x = self.dense1(inputs)
    return self.dense2(x)

model = MyModel()
```

If you subclass `Model`, you can optionally have a `training` argument
(boolean) in `call()`, which you can use to specify a different behavior
in training and inference:

```python
import tensorflow as tf

class MyModel(tf.keras.Model):

  def __init__(self):
    super().__init__()
    self.dense1 = tf.keras.layers.Dense(4, activation=tf.nn.relu)
    self.dense2 = tf.keras.layers.Dense(5, activation=tf.nn.softmax)
    self.dropout = tf.keras.layers.Dropout(0.5)

  def call(self, inputs, training=False):
    x = self.dense1(inputs)
    if training:
      x = self.dropout(x, training=training)
    return self.dense2(x)

model = MyModel()
```

Once the model is created, you can config the model with losses and
metrics with `model.compile()`, train the model with `model.fit()`, or
use the model to do prediction with `model.predict()`.
File:           ~/anaconda3/envs/csy/lib/python3.8/site-packages/keras/engine/training.py
Type:           type
Subclasses:     Model, LinearModel, WideDeepModel, Functional
net = tf.keras.Model(inputs=x0, outputs=x3)
net.summary()
Model: "model" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) [(None, 1)] 0 dense (Dense) (None, 30) 60 activation (Activation) (None, 30) 0 dense_1 (Dense) (None, 30) 930 activation_1 (Activation) (None, 30) 0 dense_2 (Dense) (None, 1) 31 ================================================================= Total params: 1,021 Trainable params: 1,021 Non-trainable params: 0 _________________________________________________________________
-
compile and fit
net.compile(loss='mse', optimizer='sgd')
net.fit(x, y, epochs=100)
Epoch 1/100
4/4 [==============================] - 0s 2ms/step - loss: 0.0170
Epoch 2/100
4/4 [==============================] - 0s 2ms/step - loss: 0.0168
Epoch 3/100
4/4 [==============================] - 0s 2ms/step - loss: 0.0166
... (epochs 4-97 omitted; the loss decreases steadily) ...
Epoch 98/100
4/4 [==============================] - 0s 2ms/step - loss: 0.0107
Epoch 99/100
4/4 [==============================] - 0s 2ms/step - loss: 0.0107
Epoch 100/100
4/4 [==============================] - 0s 2ms/step - loss: 0.0107
<keras.callbacks.History at 0x7f488c45ce50>
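`fit()` returns this `History` object; its `history` attribute records the per-epoch loss. A sketch for plotting the training curve, assuming the fit call is bound to a variable (`hist` is a hypothetical name):

hist = net.fit(x, y, epochs=100, verbose=0)  # verbose=0 suppresses the log above
plt.plot(hist.history['loss'])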
plt.plot(x, y, '.')
plt.plot(x, net(x), '--')
[Figure: the data (dots) with the fitted curve (dashed)]
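Calling `net(x)` directly returns a tensor, which is fine for a quick plot; an alternative is `net.predict(x)`, which batches the computation and returns a NumPy array:

yhat = net.predict(x)  # NumPy array of shape (100, 1)
plt.plot(x, y, '.')
plt.plot(x, yhat, '--')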