What are Recurrent Neural Networks?¶
A Recurrent Neural Network (RNN) is a type of neural network in which the output from the previous step is fed as input to the current step. In traditional neural networks, all inputs and outputs are independent of each other, but in tasks such as predicting the next word of a sentence, the previous words are required, so the network needs to remember them. RNNs were created to solve this problem with the help of a hidden layer. The main and most important feature of an RNN is the hidden state, which remembers some information about a sequence.
- An RNN has a “memory” that retains information about what has been computed so far. It uses the same parameters for every input because it performs the same task on each input (or hidden state) to produce the output. This keeps the number of parameters small compared with other neural networks (a minimal update-step sketch follows below).
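As a rough sketch (not code from this tutorial), the hidden-state update can be written in a few lines of NumPy; the tanh nonlinearity and the argument shapes here are illustrative assumptions:

import numpy as np

# one recurrent step: the same W_x, W_h and b are reused at every time step
def rnn_step(x_t, h_prev, W_x, W_h, b):
    return np.tanh(W_x @ x_t + W_h @ h_prev + b)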
How an RNN works¶
The working of an RNN can be understood with the help of the example below:
Example:¶
Suppose there is a deeper network with one input layer, three hidden layers and one output layer. Like other neural networks, each hidden layer has its own set of weights and biases: say (w1, b1) for the first hidden layer, (w2, b2) for the second and (w3, b3) for the third. This means the layers are independent of each other, i.e. they do not memorize previous outputs.
Now the RNN will do the following:
- An RNN converts these independent activations into dependent ones by giving the same weights and biases to all the layers. This reduces the complexity of ever-growing parameters and lets the network memorize previous outputs by feeding each output as input to the next hidden layer.
- Hence these three layers can be joined into a single recurrent layer in which all the hidden steps share the same weights and bias.
Training an RNN
- A single time step of the input is provided to the network.
- The current state is then calculated from the current input and the previous state.
- The current state ht becomes ht-1 for the next time step.
- The network can step through as many time steps as the problem requires, combining information from all the previous states.
- Once all the time steps are completed, the final current state is used to calculate the output.
- The output is then compared to the actual output, i.e. the target output, and an error is generated.
- The error is then back-propagated through the network to update the weights, and hence the network (RNN) is trained. A minimal sketch of these steps follows below.
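A minimal NumPy sketch of the steps above (illustrative sizes and a squared-error loss, not the model built later in this tutorial):

import numpy as np

hidden_size, input_size, T = 4, 1, 5                    # illustrative sizes
W_x = np.random.randn(hidden_size, input_size) * 0.1    # input-to-hidden weights
W_h = np.random.randn(hidden_size, hidden_size) * 0.1   # hidden-to-hidden weights
W_y = np.random.randn(1, hidden_size) * 0.1             # hidden-to-output weights
b = np.zeros(hidden_size)

x = np.random.randn(T, input_size)                      # T time steps of input
target = 0.5                                            # hypothetical target output

h = np.zeros(hidden_size)                               # initial hidden state
for t in range(T):
    h = np.tanh(W_x @ x[t] + W_h @ h + b)               # h_t becomes h_{t-1} for the next step

y_hat = W_y @ h                                         # output computed from the final state
error = (y_hat - target) ** 2                           # error to be back-propagated to update the weights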
Advantages of Recurrent Neural Networks
- An RNN retains information through time, which makes it useful for time-series prediction precisely because it can remember previous inputs. Long Short-Term Memory (LSTM) networks, used later in this tutorial, are a variant of RNN designed to remember such information over long sequences.
Part 1: Data Preprocessing¶
In this step, we import three libraries for the data-preprocessing part. A library is basically a tool you can use to do a specific job. First we import the numpy library, used for multidimensional arrays; then the pandas library, used to import the dataset; and finally the matplotlib library, used for plotting graphs.
# import libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
In this step, we import the dataset with the pandas read_csv() function. After importing the dataset, we keep only one attribute, the opening Google stock price, which is the value we want to predict.
# import training set
training_set=pd.read_csv('../datasets/Google_Stock_Price_Train.csv')
training_set
|      | Date       | Open   | High   | Low    | Close  | Volume     |
|------|------------|--------|--------|--------|--------|------------|
| 0    | 1/3/2012   | 325.25 | 332.83 | 324.97 | 663.59 | 7,380,500  |
| 1    | 1/4/2012   | 331.27 | 333.87 | 329.08 | 666.45 | 5,749,400  |
| 2    | 1/5/2012   | 329.83 | 330.75 | 326.89 | 657.21 | 6,590,300  |
| 3    | 1/6/2012   | 328.34 | 328.77 | 323.68 | 648.24 | 5,405,900  |
| 4    | 1/9/2012   | 322.04 | 322.29 | 309.46 | 620.76 | 11,688,800 |
| …    | …          | …      | …      | …      | …      | …          |
| 1253 | 12/23/2016 | 790.90 | 792.74 | 787.28 | 789.91 | 623,400    |
| 1254 | 12/27/2016 | 790.68 | 797.86 | 787.66 | 791.55 | 789,100    |
| 1255 | 12/28/2016 | 793.70 | 794.23 | 783.20 | 785.05 | 1,153,800  |
| 1256 | 12/29/2016 | 783.33 | 785.93 | 778.92 | 782.79 | 744,300    |
| 1257 | 12/30/2016 | 782.75 | 782.78 | 770.41 | 771.82 | 1,770,000  |

1258 rows × 6 columns
training_set=training_set.iloc[:,1:2].values
training_set
array([[325.25], [331.27], [329.83], ..., [793.7 ], [783.33], [782.75]])
Feature scaling is an important part of data preprocessing. Looking at our dataset, the numeric attributes are on very different scales: some values are very high and some very low. This can cause problems for a machine-learning model, so we bring all values onto the same scale. Two common methods are normalization (MinMaxScaler) and standardization (StandardScaler).
Here we use normalization because the LSTM model we build uses several sigmoid activation functions, whose output lies between 0 and 1, so scaling the inputs to the [0, 1] range is a natural choice.
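For reference, min-max normalization maps each value x to (x − min) / (max − min), so the scaled values lie between 0 and 1. A tiny sketch with made-up numbers (these are not values from the dataset):

# illustrative numbers only, not taken from the stock-price data
x, x_min, x_max = 50.0, 0.0, 200.0
x_scaled = (x - x_min) / (x_max - x_min)   # 0.25, always in [0, 1]
print(x_scaled)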
# feature Scaling
from sklearn.preprocessing import MinMaxScaler
sc= MinMaxScaler()
training_set=sc.fit_transform(training_set)
training_set
array([[0.08581368], [0.09701243], [0.09433366], ..., [0.95725128], [0.93796041], [0.93688146]])
len(training_set)
1258
# Getting the input and output
X_train= training_set[0:1257]
y_train= training_set[1:1258]
In this step, we create our input and output from the training data. Here X_train is our input, the price at day t, and y_train is our output, the price at day t+1.
In the next step, we reshape our input. We reshape because the input currently has two dimensions, one corresponding to the observations and one to the feature (there is only one feature here). We convert it into three dimensions, where the extra 1 corresponds to the time step: the input is the price at t and the output is the price at t+1, so there is a single time step between them. So here 1257 is the number of observations, 1 is the number of time steps, and 1 is the number of features.
# Reshaping
X_train=np.reshape(X_train,(1257 , 1 , 1))
X_train
array([[[0.08581368]], [[0.09701243]], [[0.09433366]], ..., [[0.95163331]], [[0.95725128]], [[0.93796041]]])
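As a quick sanity check (not part of the original notebook), the reshaped array should report the (observations, time steps, features) shape described above; using X_train.shape[0] instead of the hard-coded 1257 would make the reshape more general.

print(X_train.shape)   # expected: (1257, 1, 1) -> 1257 observations, 1 time step, 1 feature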
Part 2: Building the RNN¶
In this step, we import the library that will build our RNN model. We import the Keras library, which builds deep neural networks on top of the TensorFlow backend. We import three modules from Keras: Sequential, used to initialize the RNN model; Dense, used to add layers to the network; and LSTM, the recurrent layer we use in the model.
# importing the Keras libraries and Packages
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
# initialize the RNN
regressor = Sequential()
# adding the input layer and LSTM layer
regressor.add(LSTM(units=4, activation= 'sigmoid', input_shape= (None,1)))
C:\Users\Mehak\AppData\Local\Programs\Python\Python312\Lib\site-packages\keras\src\layers\rnn\rnn.py:204: UserWarning: Do not pass an `input_shape`/`input_dim` argument to a layer. When using Sequential models, prefer using an `Input(shape)` object as the first layer in the model instead. super().__init__(**kwargs)
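To follow the recommendation in this warning, the input shape can instead be declared with an explicit Input layer; a possible equivalent of the two lines above (same layer sizes, just a different way of declaring the input):

from keras.layers import Input

# declare the input shape first, then add the same LSTM layer without input_shape
regressor = Sequential()
regressor.add(Input(shape=(None, 1)))                # (time steps, features)
regressor.add(LSTM(units=4, activation='sigmoid'))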
# adding the output layer
regressor.add(Dense( units=1 ))
# compiling the RNN
regressor.compile(optimizer='adam', loss='mean_squared_error')
# fitting the RNN to the training set
regressor.fit(X_train, y_train, batch_size=32, epochs=100)
Epoch 1/100   40/40 ━━━━━━━━━━━━━━━━━━━━ 1s 1ms/step - loss: 0.6514
Epoch 10/100  40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - loss: 0.0942
Epoch 20/100  40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 0.0650
Epoch 30/100  40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - loss: 0.0531
Epoch 40/100  40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 2ms/step - loss: 0.0416
Epoch 50/100  40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - loss: 0.0279
Epoch 60/100  40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - loss: 0.0139
Epoch 70/100  40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - loss: 0.0051
Epoch 80/100  40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - loss: 0.0013
Epoch 90/100  40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 801us/step - loss: 5.3812e-04
…
Epoch 100/100 40/40 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - loss: 4.0105e-04
<keras.src.callbacks.history.History at 0x29d3b5cf290>
Part 3: Making the Prediction and Visualising the Result¶
# Getting the real stock price of 2017
test_set = pd.read_csv('../datasets/Google_Stock_Price_Test.csv')
real_stock_price = test_set.iloc[:,1:2].values
real_stock_price
array([[778.81], [788.36], [786.08], [795.26], [806.4 ], [807.86], [805. ], [807.14], [807.48], [807.08], [805.81], [805.12], [806.91], [807.25], [822.3 ], [829.62], [837.81], [834.71], [814.66], [796.86]])
# Getting the predicted stock price of 2017
inputs = real_stock_price
inputs = sc.transform(inputs)
inputs
array([[-0.51750586], [-0.51747281], [-0.5174807 ], [-0.51744893], [-0.51741038], [-0.51740533], [-0.51741522], [-0.51740782], [-0.51740664], [-0.51740803], [-0.51741242], [-0.51741481], [-0.51740861], [-0.51740744], [-0.51735536], [-0.51733003], [-0.51730168], [-0.51731241], [-0.51738179], [-0.51744339]])
inputs = np.reshape(inputs, (20 , 1, 1))
inputs
array([[[-0.51750586]], [[-0.51747281]], [[-0.5174807 ]], [[-0.51744893]], [[-0.51741038]], [[-0.51740533]], [[-0.51741522]], [[-0.51740782]], [[-0.51740664]], [[-0.51740803]], [[-0.51741242]], [[-0.51741481]], [[-0.51740861]], [[-0.51740744]], [[-0.51735536]], [[-0.51733003]], [[-0.51730168]], [[-0.51731241]], [[-0.51738179]], [[-0.51744339]]])
predicted_stock_price = regressor.predict(inputs)
predicted_stock_price
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 169ms/step
array([[-0.40077525], [-0.4007554 ], [-0.40076017], [-0.40074092], [-0.4007178 ], [-0.40071476], [-0.40072078], [-0.4007163 ], [-0.40071547], [-0.40071636], [-0.40071905], [-0.40072048], [-0.40071672], [-0.40071607], [-0.40068465], [-0.40066952], [-0.4006524 ], [-0.40065873], [-0.4007007 ], [-0.40073758]], dtype=float32)
predicted_stock_price = sc.inverse_transform(predicted_stock_price)
predicted_stock_price
array([[63.679253], [63.689922], [63.68736 ], [63.69771 ], [63.71014 ], [63.711777], [63.708538], [63.71094 ], [63.71139 ], [63.71091 ], [63.70947 ], [63.7087 ], [63.710716], [63.71107 ], [63.727955], [63.736095], [63.74529 ], [63.741894], [63.719337], [63.699505]], dtype=float32)
# Visualising the result
plt.plot( real_stock_price , color = 'red' , label = 'Real Google Stock Price')
plt.plot( predicted_stock_price , color = 'blue' , label = 'Predicted Google Stock Price')
plt.title('Google Stock Price Prediction')
plt.xlabel( 'time' )
plt.ylabel( 'Google Stock Price' )
plt.legend()
plt.show()
import math
from sklearn.metrics import mean_squared_error
rmse = math.sqrt(mean_squared_error(real_stock_price, predicted_stock_price))
rmse
743.9603582498944
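To put this RMSE in context relative to the level of the prices (a follow-up not in the original notebook), it can be expressed as a percentage of the mean real stock price:

relative_rmse = rmse / real_stock_price.mean() * 100   # RMSE as a % of the average real price
print("Relative RMSE:", relative_rmse)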
Predict Next Day Value¶
# scale today's price, predict tomorrow's scaled price, then invert the scaling
n1 = float(input("Enter Today Stock Price to Predict Next Day Stock Price:"))
n1 = np.array(n1).reshape(-1, 1)        # shape (1, 1) for the scaler
n2 = sc.transform(n1)                   # scale with the training-set scaler
V = n2.reshape(-1, 1, 1)                # (samples, time steps, features)
predict = regressor.predict(V)
final = sc.inverse_transform(predict)   # back to the original price scale
print("Predicted Value:", final)
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 138ms/step Predicted Value: [[1035.3373]]
Predict for Next 5 Days¶
n1 = float(input("Enter Today Stock Price to Predict Next Day Stock Price:"))
n1 = np.array(n1).reshape(-1, 1)
for i in range(5):
    # scale the current price, predict the next day, then feed the prediction back in
    n2 = sc.transform(n1)
    V = n2.reshape(-1, 1, 1)
    predict = regressor.predict(V)
    final = sc.inverse_transform(predict)
    n1 = final
    print("Predicted Value:", final)
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step Predicted Value: [[69.18118]]
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 31ms/step Predicted Value: [[107.329994]]
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step Predicted Value: [[134.08624]]
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 22ms/step Predicted Value: [[153.913]]
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 16ms/step Predicted Value: [[169.18369]]