In the context of linear regression, once we have found the equation of the line of best fit we often want to use that equation to find the predicted y value for a given x value.

To illustrate this, we will use the data in Table 1.

Here is one version of some commands that will generate that data and do a linear regression on it.

```
gnrnd4( key1=789321006, key2=6120480812, key3=5500010 )
L1
L2
summary(L1)
summary(L2)
lm_L2L1 <- lm(L2~L1)
lm_L2L1
cor(L1,L2)
```

Figure 1 holds the image of the console after running those commands.

From the information in Figure 1 we see that the equation of the regression line is approximately y = 7.3633 + 0.5685x.

The following commands generate the plot of the points and the regression line shown in Figure 2.

```
plot(L1,L2, xlab="x values", ylab="y values",
     main="Linear Regression Line for Table 1 Values",
     xlim=c(0,80), ylim=c(0,60),
     xaxp=c(0,80,16), yaxp=c(0,60,12),
     pch=19, col="green", las=1, cex.axis=0.7 )
abline( v=seq(0,80,5), col="darkgray", lty=3)
abline( h=seq(0,60,5), col="darkgray", lty=3)
abline(lm_L2L1, col="green", lwd=2)
```

Let us return to our question: what y value does the regression line give us when x = 15? We have the equation y = 7.3633 + 0.5685x.

Clearly, we just need to evaluate

`7.3633+0.5685*15`

and get the result, 15.8908, as shown in Figure 3.
What we have found is that the point (15, 15.8908) is on the regression line. We can add that point to the graph with the command

`points(15,15.8908, pch=5, col="orange", cex=1.5)`

and we can see this in Figure 4.
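As an aside, R's built-in `predict()` function can evaluate the fitted line for us, so we never have to retype the coefficients. Here is a minimal sketch using made-up data (the names `x`, `y`, and `fit` are illustrative, since the gnrnd4-generated values are not reproduced here); the same pattern works with `lm_L2L1`.

```r
# Hypothetical data standing in for the Table 1 values
x <- c(10, 20, 30, 40, 50)
y <- c(14, 19, 24, 31, 35)
fit <- lm(y ~ x)
# predict() evaluates the regression line at any x values we supply
predict(fit, newdata = data.frame(x = 15))
```

Note that the `newdata` argument must be a data frame whose column name matches the predictor variable used in the `lm()` call.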
In Figure 3 we saw how to find one point on the line, but what if we want to find a number of points, say all of the points with x values of 20, 25, 30, 35, 40, 45, 50, and 55? We can evaluate the expression

`7.3633+0.5685*seq(20,55,5)`

The results are shown in the console image in Figure 5. Now that we know those values we could create a separate points() command for each of the eight points.

A more efficient approach is to put all of the x values into one variable, compute all of the corresponding y values into a second variable, and then plot every point with a single points() command:

```
x_vals <- seq(20,55,5)
y_vals <- 7.3633+0.5685*x_vals
points(x_vals,y_vals, pch=17, col="red", cex=1.5)
```

with the result being shown in Figure 6.
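The reason a single command suffices is R's vectorized arithmetic: an expression written with a vector is evaluated once for each element. A small self-contained illustration, using the rounded coefficients from Figure 1:

```r
x_vals <- seq(20, 55, 5)            # 20, 25, 30, ..., 55
y_vals <- 7.3633 + 0.5685 * x_vals  # eight predicted y values, one per x
length(y_vals)                       # 8
y_vals[1]                            # 18.7333, the prediction at x = 20
```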

Of course, part of the process that we just went through involved reading the coefficient values from the output in Figure 1 and then typing those rounded values into the command

`y_vals<-7.3633+0.5685*x_vals`

It would have been better to have R find and use those values directly.
The following commands do just that:

```
c_vals <- coefficients(lm_L2L1)
c_vals
x_vals <- seq(22.5,57.5,5)
y_vals <- c_vals[1]+c_vals[2]*x_vals
points(x_vals,y_vals, pch=17, col="blue")
```

In an earlier page we had seen the use of the coefficients() function to extract the intercept and slope from a linear model.

The values shown in Figure 7 have more significant digits than what we saw in Figure 1. By using the stored coefficient values, rather than the rounded values that R prints, our computations are more precise.

Returning our attention to the commands listed above, to demonstrate using the extracted values we create a new sequence of x values, from 22.5 to 57.5 in steps of 5, compute the corresponding y values, and add those points to the graph.
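For reference, here is a self-contained sketch of the coefficients() pattern with made-up data (the names `fit`, `x`, and `y` are illustrative, not from the session above). The extracted values carry R's full internal precision, not the rounded digits that printing the model displays.

```r
x <- c(10, 20, 30, 40, 50)
y <- c(14, 19, 24, 31, 35)
fit <- lm(y ~ x)
c_vals <- coefficients(fit)  # named vector: "(Intercept)" then "x"
c_vals[1]                    # intercept, full precision
c_vals[2]                    # slope, full precision
# Predicted y at x = 25, using the stored full-precision values
unname(c_vals[1] + c_vals[2] * 25)
```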

Of course, if we had actually wanted to know those y values we could have simply displayed the contents of y_vals.

This particular example does provide us with a region where we have a gap in the data. What if we want the point on the regression line at x = 80, at the far right edge of our plot? We can use the expression

`7.3633+0.5685*80`

to compute the value, finding that it is 52.8433. Then the command

`points(80,52.8433, pch=25, col="brown", cex=1.5)`

adds that point to the graph,

as shown in Figure 10.
Although we "can" do the computation for any x value, using the regression line for values far outside the range of our data is a questionable practice, as we will see below.

Let us explore this with some real-life data. As it turns out, I have been keeping track of my heart rate during exercise. I have the following table of values.

We can use the following commands

```
tm <- c( 0, 1, 3, 4, 5.5, 6.5 )
hr <- c( 72,93,105,112,128,139)
rog_ex <- lm(hr~tm)
rog_ex
cor(tm,hr)
```

to create our model of this data. The result is shown in Figure 11.

We can use the following commands

```
plot( tm,hr, xlab="Time in minutes",
      ylab="Heart Rate in beats per minute",
      main="Roger's Exercise Record",
      xlim=c(0,10), xaxp=c(0,10,10),
      ylim=c(0,220), yaxp=c(0,220,22),
      pch=19, col="red", las=1, cex.axis=0.7 )
abline( v=seq(0,10,1), col="darkgray", lty=3)
abline( h=seq(0,220,10), col="darkgray", lty=3)
abline( rog_ex, col="blue", lwd=2)
```

to generate the plot in Figure 12.

From Figure 11 and Figure 12 it seems that our linear model is quite good. We can get interpolated values for minutes 1 through 6 by performing the following commands:

```
x_vals <- 1:6
c_vals <- coefficients(rog_ex)
y_vals <- c_vals[1]+c_vals[2]*x_vals
x_vals
y_vals
```

and the results of those commands are shown in Figure 13.

Then we can add those points to the graph via the command

`points(x_vals, y_vals, pch=6, col="darkgreen")`

to produce the image shown in Figure 14.
The values that we found for times of 2, 5, and 6 minutes, namely about 96, 124, and 133 beats per minute, are quite likely to be close to the values that I actually experienced in that exercise session.
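Since the tm and hr values are listed above, we can also obtain these interpolated values with R's `predict()` function; this is an alternative to, not part of, the coefficient arithmetic shown above.

```r
tm <- c( 0, 1, 3, 4, 5.5, 6.5 )
hr <- c( 72, 93, 105, 112, 128, 139 )
rog_ex <- lm(hr ~ tm)
# Interpolated heart rates for minutes 1 through 6
round(predict(rog_ex, newdata = data.frame(tm = 1:6)))
#  86  96 105 114 124 133
```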

On the other hand, we could follow similar steps to extrapolate heart rates at 10, 20, and 30 minutes via the commands

```
x_vals <- seq(10,30,10)
c_vals <- coefficients(rog_ex)
y_vals <- c_vals[1]+c_vals[2]*x_vals
x_vals
y_vals
```

and the results of those commands are shown in Figure 15.

These results demonstrate the danger of extrapolation, that is, of using the model for x values outside the range of the original data.

What we see here is that the recorded data, the original data in the table above, only covers times from 0 through 6.5 minutes, so the model only describes that interval.

The absurdity of blindly applying the model to values outside the range of the data should be clear from the predicted heart rates shown in Figure 15.
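To make the point concrete, the following self-contained commands rebuild the model from the table's values and compute the extrapolated predictions; the resulting heart rates speak for themselves.

```r
tm <- c( 0, 1, 3, 4, 5.5, 6.5 )
hr <- c( 72, 93, 105, 112, 128, 139 )
rog_ex <- lm(hr ~ tm)
# Predictions far beyond the 0 to 6.5 minute range of the data
round(predict(rog_ex, newdata = data.frame(tm = c(10, 20, 30))))
# 171 265 360 -- the last two are beyond any possible human heart rate
```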

©Roger M. Palay Saline, MI 48176 November, 2015