Author: Brian A. Ree
0: Tensorflow Linear Regression: Generating Features Review
Welcome to the third part of the TensorFlow linear regression tutorial. We're going to do a quick review of some aspects of
generating features and creating the tensors that our simple neural network will process. Let's take a quick look at some code
from our LoadFeatureData class, focusing on a few slight adjustments to the code we worked with in the previous tutorial.
# The answer in linear regression models will always be stored in the 'Answer' column
lrows[i].setMember('Answer', 8, float(lrows[i].getMemberByName('Close')))
In our custom feature generation method we have to add some new information. Since we're going to be training and validating our model,
we need to store the correct answer value that we will use to check our error and train our network. To do this we store a new value in our DataRow
instances called, you guessed it, 'Answer'. Notice that we need to keep track of which column has which column index; here we set the 'Answer' column to index 8.
A minimal sketch of the idea behind DataRow is shown below.
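The real DataRow class isn't listed in this tutorial, so treat the following as a hypothetical sketch of the setMember/getMemberByName idea; the internals are assumptions.
class DataRow:
    def __init__(self):
        self.values = {}   # column name -> value
        self.indexes = {}  # column name -> column index (assumed bookkeeping)
    # edef

    def setMember(self, name, index, value):
        # Store the value and remember which column index it occupies.
        self.values[name] = value
        self.indexes[name] = index
    # edef

    def getMemberByName(self, name):
        return self.values[name]
    # edef
# eclass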
Now we have an answer column to check when we are training our model. We also need to push this data through to the tensor objects, so let's take a quick look at
the changes we made to our LoadTensorData class next.
rows = []
answers = []
rowCount = 0
train = []
trainAnswers = []
trainCount = 0
validate = []
validateAnswers = []
validateCount = 0
trainPrct = 0.70
validatePrct = 0.30
Notice that we need to store more information than in our previous tutorial. Since we're gearing up to train and validate our model, we need to store our answer
tensors, and we also want to keep track of how many rows we've loaded into each tensor object. You might also notice two other new class variables, trainPrct
and validatePrct. These get set in our constructor and are data driven from our execution configuration dictionary.
def __init__(self, lVerbose=False, lTrainPrct=0.70, lValidatePrct=0.30, lLoadFeatureData=None):
    self.verbose = lVerbose
    self.trainPrct = lTrainPrct
    self.validatePrct = lValidatePrct
    self.loadFeatureData = lLoadFeatureData
# edef
Our new constructor takes arguments for the training and validation percentages that we will pull from our data set. I used to pull the rows randomly,
but that adds extra overhead and processing time that we really don't need for this basic tutorial, so a more robust implementation is left to you.
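If you do want a randomized pull, a minimal sketch might look something like the following. It assumes the rows, answers, and trainPrct variables shown above; adapt it to the actual class members (e.g. self.rows) as needed.
import random

# Shuffle row indexes, then split by the configured training percentage.
indexes = list(range(len(rows)))
random.shuffle(indexes)
cutoff = int(len(rows) * trainPrct)
train = [rows[i] for i in indexes[:cutoff]]
trainAnswers = [answers[i] for i in indexes[:cutoff]]
validate = [rows[i] for i in indexes[cutoff:]]
validateAnswers = [answers[i] for i in indexes[cutoff:]]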
Let's quickly look at how we're creating our answer tensor. Again, the changes here are small, and I'll leave looking over the rest of this class to you. The new
code should be very straightforward.
# Convert base data
val2 = []
val3 = []
rowcnt = 0
for row in self.loadFeatureData.rows:
    val = []
    for col in self.columnMap:
        val.append(float(row.getMemberByName(col)))
    # efl
    val2.append(val)
    val3.append(float(row.getMemberByName('Answer')))
    rowcnt += 1
# efl
self.rows = tf.to_float(val2)
self.answers = tf.to_float(val3)
self.rowCount = rowcnt
print("TensorRow Answer Shape: %s" % self.answers.get_shape())
print("TensorRow Data Shape: %s" % self.rows.get_shape())
print('TensorRow Count: %i' % (self.rowCount))
As you can see, while converting our list data into the target tensor we are also now populating a tensor of answer data that matches our data tensor,
and a similar approach is used to create the training and validation sets. Pretty cool, huh? A tiny standalone version of the conversion step is sketched below.
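This toy snippet stands in for the loop above with made-up data, just to show what tf.to_float produces; it uses the same TF1-style API as the tutorial and is not part of the project code.
import tensorflow as tf

# Three rows of two feature columns each, plus one answer per row.
val2 = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
val3 = [10.0, 20.0, 30.0]
rows = tf.to_float(val2)
answers = tf.to_float(val3)
print(rows.get_shape())     # (3, 2)
print(answers.get_shape())  # (3,)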
Anyhow, next up is the main event, the moment you've all been waiting for... our linear regression model, fully abstracted and ready to run a linear
regression on any tensor data of the correct shape. But first, let's look at our execution code. This will give us an idea of how we're using the linear
regression model.
if featureType != '':
    print("Found feature type: " + featureType)
    fData = LoadFeatureData.LoadFeatureData(limitLoad, rowLimit, cleanData, verbose, data)
    fData.generateData(featureType)
else:
    print("Found no feature type.")
    fData = LoadFeatureData.LoadFeatureData(limitLoad, rowLimit, cleanData, verbose, data)
    fData.generateData('')
# eif
tData = LoadTensorData.LoadTensorData(verbose, trainPrct, validatePrct, fData)
tData.generateData(DataRow2Tensor.columns[datarow_2_tensor_type])
if model_type == 'linear_regression':
    tfModel = RegModelLinear.RegModelLinear(verbose, tData, lin_reg_positive_result, randomSeed, trainStepsMultiplier, logPrint, learning_rate, evalType)
    tfModel.startTraining()
# eif
1: Tensorflow Linear Regression: Model Details
You can see that our new LoadTensorData class takes a trainPrct and a validatePrct argument as well as our LoadFeatureData class instance.
Now let's take a look at the RegModelLinear class, which stands for regression model linear. It takes our LoadTensorData instance as an argument
along with a few others. Let's list them here.
- verbose: Boolean flag that indicates whether we should use verbose logging for extra debugging information.
- tData: An instance of our LoadTensorData class, used to access the tensor data we've prepared.
- lin_reg_positive_result: A deprecated argument; the code now prints the overall error of the trained model against the validation data.
- randomSeed: A boolean flag that indicates whether we should initialize our weights with random seed values.
- trainStepsMultiplier: A value that is multiplied by the size of the training set to determine the total number of training steps.
- logPrint: A value controlling how often the error is printed to the console as the model is trained.
- learning_rate: An important argument; it controls how quickly the model learns. It should be a very small incremental value, something like 0.000001, depending on your data.
- evalType: A string naming a specific evaluation to run for the given model. This allows us to specify special custom checks in our model's code.
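These arguments are data driven from the execution configuration dictionary mentioned earlier. The exact keys in the project's dictionary aren't listed in this tutorial, so the slice below is hypothetical and purely illustrative of the idea.
# Hypothetical entry in the execution configuration dictionary.
exes = {
    'weight_age_lin_reg': {
        'model_type': 'linear_regression',
        'learning_rate': 0.0000001,
        'trainStepsMultiplier': 100,
        'randomSeed': False,
        'logPrint': 500,
        'evalType': 'weight_age',
    },
}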
So let's take a look at our RegModelLinear implementation. One quick note: we're going to overlook the checkpoint code for now.
This feature allows the code to save its trained model at a checkpoint so that we don't lose any precious time spent training our model.
I've played with the code and it does seem to do its job, but I will leave that as an exploration exercise for you.
import tensorflow as tf
import os
import random
import LoadTensorData

class RegModelLinear:
    """ A general implementation of a TensorFlow linear regression model. """

    verbose = False
    w = None
    b = None
    dataModelColCount = 0
    totalTrainingSteps = 10000
    trainStepsMultiplier = 1
    checkpoint = False
    randomSeed = False
    learning_rate = 0.0000001
    positiveResult = 5.0
    loadTensorData = None
    logPrint = 100
    evalType = ''

    def __init__(self, lVerbose=False, lLoadTensorData=None, lPositiveResult=0.50, lRandomSeed=False, lTrainStepsMultiplier=1.0, lLogPrint=100, lLearningRate=0.0000001, lEvalType=''):
        self.verbose = lVerbose
        self.learning_rate = lLearningRate
        self.loadTensorData = lLoadTensorData
        self.positiveResult = lPositiveResult
        self.randomSeed = lRandomSeed
        self.trainStepsMultiplier = lTrainStepsMultiplier
        self.dataModelColCount = self.loadTensorData.dataModelColCount
        self.logPrint = lLogPrint
        self.evalType = lEvalType
    # edef

    def inference(self, x):
        # Compute the inference model over data x and return the result.
        return tf.matmul(x, self.w) + self.b
    # edef

    def loss(self, x, y):
        # Compute loss over training data x and expected outputs y.
        Y_predicted = self.inference(x)
        return tf.reduce_mean(tf.squared_difference(y, Y_predicted))
    # edef

    def inputs(self):
        # Read/generate input training data x and expected outputs y.
        return self.loadTensorData.train, self.loadTensorData.trainAnswers
    # edef

    def train(self, totalLoss):
        return tf.train.GradientDescentOptimizer(self.learning_rate).minimize(totalLoss)
    # edef

    def evaluate(self, sess, test_x, test_y):
        Y_predicted = self.inference(test_x)
        mse = tf.reduce_mean(tf.squared_difference(test_y, Y_predicted))
        print('Mean Squared Error: %.4f' % sess.run(mse))
        print('Custom Evaluation: %s' % (self.evalType))
        if self.evalType == 'weight_age':
            print(sess.run(self.inference([[80.0, 25.0]])))
            print(sess.run(self.inference([[65.0, 25.0]])))
        # eif
    # edef

    def startTraining(self):
        self.totalTrainingSteps = int(self.loadTensorData.trainCount * self.trainStepsMultiplier)
        print('Found training steps: %i' % self.totalTrainingSteps)
        with tf.Session() as sess:
            print('Found tensor dimension: %i' % self.dataModelColCount)
            if self.randomSeed == True:
                self.w = tf.Variable(tf.random_normal([self.dataModelColCount, 1], stddev=0.5), name='weights')
                self.b = tf.Variable(tf.random_normal([1], stddev=0.5), name='bias')
            else:
                self.w = tf.Variable(tf.zeros([self.dataModelColCount, 1]), name='weights')
                self.b = tf.Variable(0.00, name='bias')
            # eif
            # Model setup
            tf.global_variables_initializer().run()
            # Create a saver
            if self.checkpoint == True:
                saver = tf.train.Saver()
            # eif
            x, y = self.inputs()
            total_loss = self.loss(x, y)
            train_op = self.train(total_loss)
            coord = tf.train.Coordinator()
            threads = tf.train.start_queue_runners(sess, coord)
            training_steps = self.totalTrainingSteps
            initial_step = 0
            if self.checkpoint == True:
                # Verify we don't have a checkpoint saved already
                ckpt = tf.train.get_checkpoint_state(os.path.dirname(__file__))
                if ckpt and ckpt.model_checkpoint_path:
                    # Restore from checkpoint
                    saver.restore(sess, ckpt.model_checkpoint_path)
                    initial_step = int(ckpt.model_checkpoint_path.rsplit('-', 1)[1])
                # eif
            # eif
            # Training loop
            if self.checkpoint == True:
                for step in range(initial_step, training_steps):
                    sess.run([train_op])
                    if step % self.logPrint == 0:
                        print("Loss: ", sess.run([total_loss]))
                    # eif
                    if step % 1000 == 0:
                        saver.save(sess, 'eod-model', global_step=step)
                    # eif
                # efl
            else:
                for step in range(training_steps):
                    sess.run([train_op])
                    if step % self.logPrint == 0:
                        print("Loss: ", sess.run([total_loss]))
                    # eif
                # efl
            # eif
            self.evaluate(sess, self.loadTensorData.validate, self.loadTensorData.validateAnswers)
        # ewith
    # edef
# eclass
Not too bad for a linear regression model. As usual, let's go over the class variables first. We'll skip the variables that were already covered in our review of the
constructor arguments.
- w: The tensor containing our weights.
- b: The tensor containing our bias.
- dataModelColCount: The number of columns in our data model, based on the DataRow2Tensor entry used by this class.
- totalTrainingSteps: The total number of training steps, set to the number of rows in the training set times the trainStepsMultiplier.
Nothing too crazy, right? All the arguments make sense; we're defining data driven values that control how our class operates its linear regression model.
Next up, let's review the methods in this class and then start diving into some code.
- inference: Defines the operations necessary to create our inference formula; this is the formula that produces our guess.
- loss: Defines the operations necessary to calculate the loss associated with the current weights and bias.
- inputs: A simple method that returns our input data; this will be the training and training answer data stored in our local instance of the LoadTensorData class.
- train: Defines the operations needed to train the model.
- evaluate: Defines the operations needed to evaluate the accuracy of the model.
Let's take a look at the inference method. This method contains the set of operations that create our inference formula. That is, given an input x and the current
values of our weights and bias, we generate an output, Y_predicted.
def inference(self, x):
    # Compute the inference model over data x and return the result.
    return tf.matmul(x, self.w) + self.b
# edef
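To see the shapes at work, here's a tiny standalone check of tf.matmul(x, w) + b with made-up numbers; it isn't part of the tutorial code.
import tensorflow as tf

x = tf.constant([[80.0, 25.0]])  # one row, two feature columns: shape (1, 2)
w = tf.constant([[2.0], [1.0]])  # weight column vector: shape (2, 1)
b = tf.constant(5.0)             # scalar bias, broadcast over every row
with tf.Session() as sess:
    # 80*2 + 25*1 + 5 = 190
    print(sess.run(tf.matmul(x, w) + b))  # [[190.]]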
We'll see how inference is used shortly; it'll make more sense that way. Next up is the loss function, where we compare our inferred
answer to our actual answer.
def loss(self, x, y):
    # Compute loss over training data x and expected outputs y.
    Y_predicted = self.inference(x)
    return tf.reduce_mean(tf.squared_difference(y, Y_predicted))
# edef
Ah, our Y_predicted value is created from our inference method (our best guess at the answers given the state of our neural network).
The loss of our model is returned by the loss method, big surprise there. Take a look at what's happening, though: we're taking the average
of the squared difference between y (the actual answer) and Y_predicted (the predicted answer). The reason we square the difference is that it creates
a smoother error function for us to optimize (TensorFlow does all the work, no worries).
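As a quick sanity check on what that loss line computes, here's a toy example with made-up numbers; again, not part of the tutorial code.
import tensorflow as tf

y = tf.constant([10.0, 20.0, 30.0])
Y_predicted = tf.constant([12.0, 18.0, 33.0])
with tf.Session() as sess:
    # ((10-12)^2 + (20-18)^2 + (30-33)^2) / 3 = (4 + 4 + 9) / 3
    print(sess.run(tf.reduce_mean(tf.squared_difference(y, Y_predicted))))  # ~5.6667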
Next up, the inputs and train methods will be reviewed. Let's take a look at them.
def inputs(self):
    # Read/generate input training data x and expected outputs y.
    return self.loadTensorData.train, self.loadTensorData.trainAnswers
# edef

def train(self, totalLoss):
    return tf.train.GradientDescentOptimizer(self.learning_rate).minimize(totalLoss)
# edef
I know, I know, tons of code to review. Well, let's dig in. The inputs method is very simple; it just provides an abstraction layer for us to pull in the tensor data
used in our model. You can see that we're pulling information from our instance of the LoadTensorData class. Our train method is also very short, but it actually does a lot.
It uses TensorFlow's gradient descent optimizer to minimize the error in our neural network by incrementally searching for the lowest point on the error curve. We do this by slowly
walking the values of our weights along the slope of our error curve until a best-fit minimum is found. This technique is what enables us to optimize a complex neural network
efficiently. It can also cause strange behavior like overshooting and jumping back and forth. We won't cover those issues in this tutorial, but you can read up on those types of
neural network errors so that you're better prepared to address them.
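For some intuition, here's a hand-rolled sketch of the update rule that GradientDescentOptimizer applies for us, w := w - learning_rate * dLoss/dw, on a toy one-weight model; it's plain Python, not the tutorial's code.
# One-weight linear model y = w * x, fit to data where the true w is 2.0.
x_data = [1.0, 2.0, 3.0]
y_data = [2.0, 4.0, 6.0]
w = 0.0
learning_rate = 0.01
for _ in range(100):
    # Gradient of mean squared error with respect to w: mean of 2 * (w*x - y) * x
    grad = sum(2.0 * (w * x - y) * x for x, y in zip(x_data, y_data)) / len(x_data)
    w -= learning_rate * grad
print(w)  # walks toward 2.0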
Next we'll take a look at our evaluate method. This method is used after training to compare the results of our neural network to the known answers we loaded
in our LoadTensorData class.
def evaluate(self, sess, test_x, test_y):
    Y_predicted = self.inference(test_x)
    mse = tf.reduce_mean(tf.squared_difference(test_y, Y_predicted))
    print('Mean Squared Error: %.4f' % sess.run(mse))
    print('Custom Evaluation: %s' % (self.evalType))
    if self.evalType == 'weight_age':
        print(sess.run(self.inference([[80.0, 25.0]])))
        print(sess.run(self.inference([[65.0, 25.0]])))
    # eif
# edef
If you noticed that our evaluate method looks a lot like our loss and inference methods combined, then you're sharp.
Let's think about what we're doing in this method: we are evaluating the accuracy of our trained model against our validation data.
To do that we need a set of predicted answers; Y_predicted takes care of that for us, and we get it with another call to our
inference method, except we now pass in validation data rather than training data.
Once we have our predicted answers we calculate the loss, similar to how we did it during our training step.
You'll also notice a special section at the end of the evaluate method that lets us run custom evaluation code.
This comes in handy when we want to check values specific to a certain data set.
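If you wanted to add your own custom check, a hypothetical extra branch might look like the following; the evalType value and what you print are entirely up to you.
# Hypothetical branch for a custom evaluation type.
if self.evalType == 'goog_stock_sma100':
    # For example, inspect the learned weights and bias directly.
    print(sess.run(self.w))
    print(sess.run(self.b))
# eif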
Alright now, we're almost done. Soon we'll be doing some test runs to check our model's performance, but first we need to review the startTraining
method. This is the most complex part of the class, but you've seen all the supporting methods, so you have an idea of what we're doing. Let's look at the code.
def startTraining(self):
    self.totalTrainingSteps = int(self.loadTensorData.trainCount * self.trainStepsMultiplier)
    print('Found training steps: %i' % self.totalTrainingSteps)
    with tf.Session() as sess:
        print('Found tensor dimension: %i' % self.dataModelColCount)
        if self.randomSeed == True:
            self.w = tf.Variable(tf.random_normal([self.dataModelColCount, 1], stddev=0.5), name='weights')
            self.b = tf.Variable(tf.random_normal([1], stddev=0.5), name='bias')
        else:
            self.w = tf.Variable(tf.zeros([self.dataModelColCount, 1]), name='weights')
            self.b = tf.Variable(0.00, name='bias')
        # eif
        # Model setup
        tf.global_variables_initializer().run()
        # Create a saver
        if self.checkpoint == True:
            saver = tf.train.Saver()
        # eif
        x, y = self.inputs()
        total_loss = self.loss(x, y)
        train_op = self.train(total_loss)
        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(sess, coord)
        training_steps = self.totalTrainingSteps
        initial_step = 0
        if self.checkpoint == True:
            # Verify we don't have a checkpoint saved already
            ckpt = tf.train.get_checkpoint_state(os.path.dirname(__file__))
            if ckpt and ckpt.model_checkpoint_path:
                # Restore from checkpoint
                saver.restore(sess, ckpt.model_checkpoint_path)
                initial_step = int(ckpt.model_checkpoint_path.rsplit('-', 1)[1])
            # eif
        # eif
        # Training loop
        if self.checkpoint == True:
            for step in range(initial_step, training_steps):
                sess.run([train_op])
                if step % self.logPrint == 0:
                    print("Loss: ", sess.run([total_loss]))
                # eif
                if step % 1000 == 0:
                    saver.save(sess, 'eod-model', global_step=step)
                # eif
            # efl
        else:
            for step in range(training_steps):
                sess.run([train_op])
                if step % self.logPrint == 0:
                    print("Loss: ", sess.run([total_loss]))
                # eif
            # efl
        # eif
        self.evaluate(sess, self.loadTensorData.validate, self.loadTensorData.validateAnswers)
    # ewith
# edef
The method starts by calculating the desired training step count. We take the size of the training set and multiply it by a factor to produce a larger or smaller
total number of training steps to run. We set the TensorFlow session to use during our training with the line with tf.Session() as sess.
This ensures that all TensorFlow operations run on the same session and that the session is released once we exit the with block.
Let's take a look at the first few lines of our training method.
print('Found tensor dimension: %i' % self.dataModelColCount)
if self.randomSeed == True:
    self.w = tf.Variable(tf.random_normal([self.dataModelColCount, 1], stddev=0.5), name='weights')
    self.b = tf.Variable(tf.random_normal([1], stddev=0.5), name='bias')
else:
    self.w = tf.Variable(tf.zeros([self.dataModelColCount, 1]), name='weights')
    self.b = tf.Variable(0.00, name='bias')
# eif
# Model setup
tf.global_variables_initializer().run()
# Create a saver
if self.checkpoint == True:
    saver = tf.train.Saver()
# eif
As a quick sanity check we print out the dimension of our training tensor, the number of columns loaded
from the column data map. The next piece of code determines whether we initialize our weight and bias tensors with random values or just zeros.
Notice that the shape of our weight tensor is determined by the number of columns we're including in our data, training, and validation tensors:
one weight per column. The bias, on the other hand, is a single value that gets broadcast across every row. After we set up the shapes of our weight
and bias tensors we run the TensorFlow variable initialization call, and if checkpoints are enabled we create a new instance of the saver class.
x, y = self.inputs()
total_loss = self.loss(x, y)
train_op = self.train(total_loss)
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess, coord)
training_steps = self.totalTrainingSteps
initial_step = 0
if self.checkpoint == True:
    # Verify we don't have a checkpoint saved already
    ckpt = tf.train.get_checkpoint_state(os.path.dirname(__file__))
    if ckpt and ckpt.model_checkpoint_path:
        # Restore from checkpoint
        saver.restore(sess, ckpt.model_checkpoint_path)
        initial_step = int(ckpt.model_checkpoint_path.rsplit('-', 1)[1])
    # eif
# eif
In the next few lines of code we initialize our input variables x and y with our training data by calling our local
inputs method. We then wire the loss into our training method and store the resulting operation in the train_op variable. We also create
a training coordinator, which works with TensorFlow's threaded input pipeline via the start_queue_runners method.
We keep a local copy of the total number of training steps and set our training step tracking variable to zero.
Next, if the checkpoint boolean is set, we look for checkpoint files in the directory where they would be stored.
If any exist we load the latest training checkpoint and set the current training step accordingly.
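The step recovery on that last rsplit line works because the saver embeds the global step in the checkpoint filename. A quick illustration with a hypothetical path:
# saver.save(sess, 'eod-model', global_step=step) writes files like 'eod-model-3000'.
model_checkpoint_path = 'eod-model-3000'  # hypothetical checkpoint path
initial_step = int(model_checkpoint_path.rsplit('-', 1)[1])
print(initial_step)  # 3000
With that covered, the next block of code runs our actual training loop and then calls our evaluate method.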
# Training loop
if self.checkpoint == True:
    for step in range(initial_step, training_steps):
        sess.run([train_op])
        if step % self.logPrint == 0:
            print("Loss: ", sess.run([total_loss]))
        # eif
        if step % 1000 == 0:
            saver.save(sess, 'eod-model', global_step=step)
        # eif
    # efl
else:
    for step in range(training_steps):
        sess.run([train_op])
        if step % self.logPrint == 0:
            print("Loss: ", sess.run([total_loss]))
        # eif
    # efl
# eif
self.evaluate(sess, self.loadTensorData.validate, self.loadTensorData.validateAnswers)
If our checkpoint flag is set to true, we start our training loop at the initial_step that was loaded from our
checkpoint file. The bodies of the two loops are essentially the same. For each training step we run train_op,
which evaluates the loss and applies one gradient descent update to our weights and bias. Whenever the loop iteration
reaches an index where step % self.logPrint == 0, we print out the loss calculated up to that point.
The only difference in the checkpoint-enabled loop is that every 1000 training steps we save a checkpoint file to
track our training progress. Last but not least, we call our evaluate method, which reports the mean squared error
against the validation set of data. Any special validation steps are selected by the evalType variable.
So now that we have covered all of the code running our linear regression TensorFlow model, let's actually run it!
Uncomment the line run(exes["goog_lin_reg_avg100day"]); and comment out the line run(exes["weight_age_lin_reg"]);.
Now run Main.py and you should see output similar to the output depicted below. This execution loads all the stock price data we have
for the SPY exchange traded fund, a fund that tracks the S&P 500. We're trying to predict the closing price by looking at the patterns created by the
closing price, the opening price, and the simple 100 day moving average.
Application Version: 0.4.0.5
Found loader: load_csv_data
Loading Data: ./data/spy.csv.xls Type: csv Version: 1.0 Reset: False
Loaded 3130 rows from this data file.
CleanCount: 0 RowCount: 3129 RowsFound: 3129
Found feature type: goog_stock_sma100
Generating Feature Data: Type: goog_stock_sma100
Cleaning row data...
Loaded 3110 rows from this data file.
Cleaning row data...
CleanCount: 19 RowCount: 3091 RowsFound: 3110
Generating Tensor Data:
TensorRow Answer Shape: (3110,)
TensorRow Data Shape: (3110, 3)
TensorRow Count: 3110
TensorTrain Answer Shape: (2163,)
TensorTrain Data Shape: (2163, 3)
TensorTrain Count: 2163
TensorValidate Answer Shape: (2164,)
TensorValidate Data Shape: (2164, 3)
TensorValidate Count: 2164
Found training steps: 2163
Found tensor dimension: 3
('Loss: ', [27536.791])
('Loss: ', [23831.895])
('Loss: ', [20694.104])
('Loss: ', [18045.791])
('Loss: ', [15801.108])
('Loss: ', [13902.822])
('Loss: ', [12295.392])
('Loss: ', [10934.216])
('Loss: ', [9783.0303])
('Loss: ', [8807.3838])
('Loss: ', [7982.5605])
('Loss: ', [7284.0532])
('Loss: ', [6692.4165])
('Loss: ', [6192.0903])
('Loss: ', [5768.5732])
('Loss: ', [5410.1362])
('Loss: ', [5106.6519])
('Loss: ', [4849.876])
('Loss: ', [4632.4131])
('Loss: ', [4448.3242])
('Loss: ', [4292.5913])
('Loss: ', [4160.7041])
('Loss: ', [4049.0051])
('Loss: ', [3954.5781])
('Loss: ', [3874.666])
('Loss: ', [3806.9558])
('Loss: ', [3749.551])
('Loss: ', [3701.0183])
('Loss: ', [3659.948])
('Loss: ', [3625.1284])
('Loss: ', [3595.6699])
('Loss: ', [3570.752])
('Loss: ', [3549.6558])
('Loss: ', [3531.814])
('Loss: ', [3516.6018])
('Loss: ', [3503.7312])
('Loss: ', [3492.8438])
('Loss: ', [3483.6479])
('Loss: ', [3475.8372])
('Loss: ', [3469.1931])
('Loss: ', [3463.5942])
('Loss: ', [3458.8254])
('Loss: ', [3454.8076])
('Loss: ', [3451.3948])
Mean Squared Error: 3455.7385
Custom Evaluation: goog_stock_sma100
Now switch the commented lines so that we are running our 'weight_age_lin_reg' execution configuration. Again, execute Main.py
and you should see output similar to that listed below. This linear regression model learns how weight and age relate to blood fat.
With only 25 samples and a mean squared error of around 8,500, we can predict the blood fat content given an age and weight. You can see our special evaluation
printout at the bottom of the execution output. Our estimated blood fat values are 320 and 268, which actually aren't bad estimates for this limited
data sample.
Application Version: 0.4.0.5
Found loader: load_csv_data
Loading Data: ./data/weight_age.csv.xls Type: csv Version: 1.0 Reset: False
Found data mapping:
('Age', '1')
('BloodFat', '2')
('Weight', '0')
Found append cols mapping:
Weight : 84
Age : 46
BloodFat: 354
Weight : 73
Age : 20
BloodFat: 190
Weight : 65
Age : 52
BloodFat: 405
Weight : 70
Age : 30
BloodFat: 263
Weight : 76
Age : 57
BloodFat: 451
Weight : 69
Age : 25
BloodFat: 302
Weight : 63
Age : 28
BloodFat: 288
Weight : 72
Age : 36
BloodFat: 385
Weight : 79
Age : 57
BloodFat: 402
Weight : 75
Age : 44
BloodFat: 365
Weight : 27
Age : 24
BloodFat: 209
Weight : 89
Age : 31
BloodFat: 290
Weight : 65
Age : 52
BloodFat: 346
Weight : 57
Age : 23
BloodFat: 254
Weight : 59
Age : 60
BloodFat: 395
Weight : 69
Age : 48
BloodFat: 434
Weight : 60
Age : 34
BloodFat: 220
Weight : 79
Age : 51
BloodFat: 374
Weight : 75
Age : 50
BloodFat: 308
Weight : 82
Age : 34
BloodFat: 220
Weight : 59
Age : 46
BloodFat: 311
Weight : 67
Age : 23
BloodFat: 181
Weight : 85
Age : 37
BloodFat: 274
Weight : 55
Age : 40
BloodFat: 303
Weight : 63
Age : 30
BloodFat: 244
Loaded 26 rows from this data file.
CleanCount: 0 RowCount: 25 RowsFound: 25
Found feature type: weight_age
Generating Feature Data: Type: weight_age
Loaded 25 rows from this data file.
Cleaning row data...
CleanCount: 0 RowCount: 25 RowsFound: 25
Generating Tensor Data:
TensorRow Answer Shape: (25,)
TensorRow Data Shape: (25, 2)
TensorRow Count: 25
TensorTrain Answer Shape: (25,)
TensorTrain Data Shape: (25, 2)
TensorTrain Count: 25
TensorValidate Answer Shape: (25,)
TensorValidate Data Shape: (25, 2)
TensorValidate Count: 25
Found training steps: 2500
Found tensor dimension: 2
('Loss: ', [16262.184])
('Loss: ', [8532.3477])
('Loss: ', [8517.0068])
('Loss: ', [8501.7539])
('Loss: ', [8486.582])
Mean Squared Error: 8471.5303
Custom Evaluation: weight_age
[[ 320.54483032]]
[[ 268.23522949]]
Congrats, you've made it to the end of the TensorFlow linear regression tutorial series. Take some time to make adjustments to the execution
configuration dictionary and see how they affect the output of your program. Try putting in a very large learning rate, then try some very, very small
learning rates. Did the linear regression diverge? Play around with the settings and have some fun!