Implementing linear regression, ridge regression (L2-regularized least squares), and Lasso regression (L1-regularized least squares) on the California Housing dataset
First, we load the dataset and preprocess the data:
```python
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Load the dataset and split it into training and test sets
housing = fetch_california_housing()
X_train, X_test, y_train, y_test = train_test_split(housing.data, housing.target, random_state=0)

# Standardize features: fit the scaler on the training set only,
# then apply the same transformation to the test set
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
```
Then we fit a linear regression model and evaluate it on the test set:
```python
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# Ordinary least squares: no regularization
lin_reg = LinearRegression()
lin_reg.fit(X_train_scaled, y_train)
y_pred = lin_reg.predict(X_test_scaled)
mse = mean_squared_error(y_test, y_pred)
print("Mean Squared Error (Linear Regression):", mse)
```
Next, we fit and evaluate a ridge regression model the same way:
```python
from sklearn.linear_model import Ridge

# L2-regularized least squares; alpha controls the regularization strength
ridge_reg = Ridge(alpha=1)
ridge_reg.fit(X_train_scaled, y_train)
y_pred = ridge_reg.predict(X_test_scaled)
mse = mean_squared_error(y_test, y_pred)
print("Mean Squared Error (Ridge Regression):", mse)
```
Finally, we fit and evaluate a Lasso regression model:
```python
from sklearn.linear_model import Lasso

# L1-regularized least squares; the L1 penalty can drive some
# coefficients to exactly zero
lasso_reg = Lasso(alpha=0.1)
lasso_reg.fit(X_train_scaled, y_train)
y_pred = lasso_reg.predict(X_test_scaled)
mse = mean_squared_error(y_test, y_pred)
print("Mean Squared Error (Lasso Regression):", mse)
```
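As a quick check of the difference between the two penalties, you can compare the fitted coefficients: ridge shrinks all coefficients toward zero but keeps them nonzero, while Lasso sets some of them exactly to zero. A minimal sketch, reusing the models fitted above:

```python
import numpy as np

# Ridge shrinks coefficients but keeps them all nonzero;
# Lasso's L1 penalty zeroes some out entirely (implicit feature selection)
print("Ridge coefficients:", ridge_reg.coef_)
print("Lasso coefficients:", lasso_reg.coef_)
print("Features dropped by Lasso:", np.sum(lasso_reg.coef_ == 0))
```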
Note that both ridge and Lasso regression require the hyperparameter alpha, which controls the regularization strength. A larger alpha imposes stronger regularization, reducing model complexity and variance at the cost of higher bias; a smaller alpha imposes weaker regularization, allowing a more complex model with lower bias but higher variance. The right alpha therefore depends on the problem and the dataset, and is typically chosen by cross-validation, as sketched below.
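A minimal sketch of cross-validated alpha selection using scikit-learn's built-in RidgeCV and LassoCV estimators; the candidate grid of alphas below is an arbitrary choice for illustration, and in practice you would tune its range to your data:

```python
from sklearn.linear_model import RidgeCV, LassoCV

# Illustrative candidate grid; adjust the range for your own data
alphas = [0.001, 0.01, 0.1, 1.0, 10.0]

# cv=5 selects 5-fold cross-validation for both estimators
ridge_cv = RidgeCV(alphas=alphas, cv=5).fit(X_train_scaled, y_train)
lasso_cv = LassoCV(alphas=alphas, cv=5).fit(X_train_scaled, y_train)

print("Best alpha (Ridge):", ridge_cv.alpha_)
print("Best alpha (Lasso):", lasso_cv.alpha_)
```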