
A second course in statistics : regression analysis / William Mendenhall, Terry Sincich.

By: Mendenhall, William.
Contributor(s): Sincich, Terry.
Material type: Text
Publisher: Boston, Mass. ; London : Prentice Hall, c2012
Edition: 7th ed.
Description: xiii, 797 p. : ill. ; 26 cm. + 1 CD-ROM
ISBN: 9780321748249 (pbk.); 0321748247 (pbk.)
Other title: Regression analysis
Subject(s): Regression analysis -- Textbooks | Regression analysis -- Case studies
DDC classification: 519.536
Online resources: WorldCat details | Ebook Fulltext
Contents:
1. A Review of Basic Concepts (Optional)
1.1 Statistics and Data
1.2 Populations, Samples, and Random Sampling
1.3 Describing Qualitative Data
1.4 Describing Quantitative Data Graphically
1.5 Describing Quantitative Data Numerically
1.6 The Normal Probability Distribution
1.7 Sampling Distributions and the Central Limit Theorem
1.8 Estimating a Population Mean
1.9 Testing a Hypothesis About a Population Mean
1.10 Inferences About the Difference Between Two Population Means
1.11 Comparing Two Population Variances

2. Introduction to Regression Analysis
2.1 Modeling a Response
2.2 Overview of Regression Analysis
2.3 Regression Applications
2.4 Collecting the Data for Regression

3. Simple Linear Regression
3.1 Introduction
3.2 The Straight-Line Probabilistic Model
3.3 Fitting the Model: The Method of Least Squares
3.4 Model Assumptions
3.5 An Estimator of σ²
3.6 Assessing the Utility of the Model: Making Inferences About the Slope β₁
3.7 The Coefficient of Correlation
3.8 The Coefficient of Determination
3.9 Using the Model for Estimation and Prediction
3.10 A Complete Example
3.11 Regression Through the Origin (Optional)

Case Study 1: Legal Advertising--Does It Pay?

4. Multiple Regression Models
4.1 General Form of a Multiple Regression Model
4.2 Model Assumptions
4.3 A First-Order Model with Quantitative Predictors
4.4 Fitting the Model: The Method of Least Squares
4.5 Estimation of σ², the Variance of ε
4.6 Testing the Utility of a Model: The Analysis of Variance F-Test
4.7 Inferences About the Individual β Parameters
4.8 Multiple Coefficients of Determination: R² and R²adj
4.9 Using the Model for Estimation and Prediction
4.10 An Interaction Model with Quantitative Predictors
4.11 A Quadratic (Second-Order) Model with a Quantitative Predictor
4.12 More Complex Multiple Regression Models (Optional)
4.13 A Test for Comparing Nested Models
4.14 A Complete Example

Case Study 2: Modeling the Sale Prices of Residential Properties in Four Neighborhoods

5. Principles of Model Building
5.1 Introduction: Why Model Building is Important
5.2 The Two Types of Independent Variables: Quantitative and Qualitative
5.3 Models with a Single Quantitative Independent Variable
5.4 First-Order Models with Two or More Quantitative Independent Variables
5.5 Second-Order Models with Two or More Quantitative Independent Variables
5.6 Coding Quantitative Independent Variables (Optional)
5.7 Models with One Qualitative Independent Variable
5.8 Models with Two Qualitative Independent Variables
5.9 Models with Three or More Qualitative Independent Variables
5.10 Models with Both Quantitative and Qualitative Independent Variables
5.11 External Model Validation

6. Variable Screening Methods
6.1 Introduction: Why Use a Variable-Screening Method?
6.2 Stepwise Regression
6.3 All-Possible-Regressions Selection Procedure
6.4 Caveats

Case Study 3: Deregulation of the Intrastate Trucking Industry

7. Some Regression Pitfalls
7.1 Introduction
7.2 Observational Data Versus Designed Experiments
7.3 Parameter Estimability and Interpretation
7.4 Multicollinearity
7.5 Extrapolation: Predicting Outside the Experimental Region
7.6 Variable Transformations

8. Residual Analysis
8.1 Introduction
8.2 Plotting Residuals
8.3 Detecting Lack of Fit
8.4 Detecting Unequal Variances
8.5 Checking the Normality Assumption
8.6 Detecting Outliers and Identifying Influential Observations
8.7 Detection of Residual Correlation: The Durbin-Watson Test

Case Study 4: An Analysis of Rain Levels in California
Case Study 5: An Investigation of Factors Affecting the Sale Price of Condominium Units Sold at Public Auction

9. Special Topics in Regression (Optional)
9.1 Introduction
9.2 Piecewise Linear Regression
9.3 Inverse Prediction
9.4 Weighted Least Squares
9.5 Modeling Qualitative Dependent Variables
9.6 Logistic Regression
9.7 Ridge Regression
9.8 Robust Regression
9.9 Nonparametric Regression Models

10. Introduction to Time Series Modeling and Forecasting
10.1 What is a Time Series?
10.2 Time Series Components
10.3 Forecasting Using Smoothing Techniques (Optional)
10.4 Forecasting: The Regression Approach
10.5 Autocorrelation and Autoregressive Error Models
10.6 Other Models for Autocorrelated Errors (Optional)
10.7 Constructing Time Series Models
10.8 Fitting Time Series Models with Autoregressive Errors
10.9 Forecasting with Time Series Autoregressive Models
10.10 Seasonal Time Series Models: An Example
10.11 Forecasting Using Lagged Values of the Dependent Variable (Optional)

Case Study 6: Modeling Daily Peak Electricity Demands

11. Principles of Experimental Design
11.1 Introduction
11.2 Experimental Design Terminology
11.3 Controlling the Information in an Experiment
11.4 Noise-Reducing Designs
11.5 Volume-Increasing Designs
11.6 Selecting the Sample Size
11.7 The Importance of Randomization

12. The Analysis of Variance for Designed Experiments
12.1 Introduction
12.2 The Logic Behind an Analysis of Variance
12.3 One-Factor Completely Randomized Designs
12.4 Randomized Block Designs
12.5 Two-Factor Factorial Experiments
12.6 More Complex Factorial Designs (Optional)
12.7 Follow-Up Analysis: Tukey's Multiple Comparisons of Means
12.8 Other Multiple Comparisons Methods (Optional)
12.9 Checking ANOVA Assumptions

Case Study 7: Reluctance to Transmit Bad News: The MUM Effect

Appendix A: Derivation of the Least Squares Estimates of β₀ and β₁ in Simple Linear Regression

Appendix B: The Mechanics of a Multiple Regression Analysis
B.1 Introduction
B.2 Matrices and Matrix Multiplication
B.3 Identity Matrices and Matrix Inversion
B.4 Solving Systems of Simultaneous Linear Equations
B.5 The Least Squares Equations and Their Solution
B.6 Calculating SSE and s²
B.7 Standard Errors of Estimators, Test Statistics, and Confidence Intervals for β₀, β₁, ..., βₖ
B.8 A Confidence Interval for a Linear Function of the β Parameters; A Confidence Interval for E(y)
B.9 A Prediction Interval for Some Value of y to be Observed in the Future

Appendix C: A Procedure for Inverting a Matrix

Appendix D: Statistical Tables
Table D.1: Normal Curve Areas
Table D.2: Critical Values for Student's t
Table D.3: Critical Values for the F Statistic: F.10
Table D.4: Critical Values for the F Statistic: F.05
Table D.5: Critical Values for the F Statistic: F.025
Table D.6: Critical Values for the F Statistic: F.01
Table D.7: Random Numbers
Table D.8: Critical Values for the Durbin-Watson d Statistic (α = .05)
Table D.9: Critical Values for the Durbin-Watson d Statistic (α = .01)
Table D.10: Critical Values for the χ²-Statistic
Table D.11: Percentage Points of the Studentized Range, q(p,v), Upper 5%
Table D.12: Percentage Points of the Studentized Range, q(p,v), Upper 1%

Appendix E: File Layouts for Case Study Data Sets

Answers to Selected Odd-Numbered Exercises
Index
Technology Tutorials: SAS, SPSS, MINITAB, and R (on CD)
Item type | Current location | Collection | Call number | Copy number | Status | Barcode
E-Book | EWU Library, E-book | Non-fiction | 519.536 MES 2012 | -- | Not for loan | --
Text | EWU Library, Reserve Section | Non-fiction | 519.536 MES 2012 | C-1 | Not for loan | 26619

Total holds: 0

"International edition"--Cover.

Previous ed.: c2003.

CD-ROM in envelope preceding back cover.

Includes index.


Applied Statistics

