Implementing Logistic Regression in Python (with the logistic regression function)
Sure, here is example code for implementing logistic regression in Python.
First, import the required libraries:
```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
```
Next, generate a binary classification dataset:
```python
X, y = make_classification(n_samples=100, n_features=2, n_informative=2,
                           n_redundant=0, n_clusters_per_class=1, random_state=42)
```
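A quick sanity check of what `make_classification` returned (with these arguments, `X` is a 100 x 2 feature matrix and `y` a vector of 0/1 labels):
```python
# Inspect the generated dataset: 100 samples, 2 features, binary labels
print(X.shape, y.shape)   # (100, 2) (100,)
print(np.unique(y))       # [0 1]
```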
We can use matplotlib to visualize the dataset:
```python
plt.scatter(X[:,0], X[:,1], c=y)
plt.show()
```
Now we define the logistic (sigmoid) function and a simple logistic regression class:
```python
def sigmoid(z):
    # Logistic (sigmoid) function, maps any real value into (0, 1)
    return 1 / (1 + np.exp(-z))

class LogisticRegression:
    def __init__(self, lr=0.01, num_iter=100000, fit_intercept=True, verbose=False):
        self.lr = lr
        self.num_iter = num_iter
        self.fit_intercept = fit_intercept
        self.verbose = verbose

    def __add_intercept(self, X):
        # Prepend a column of ones so theta[0] acts as the bias term
        intercept = np.ones((X.shape[0], 1))
        return np.concatenate((intercept, X), axis=1)

    def __loss(self, h, y):
        # Binary cross-entropy (log) loss
        return (-y * np.log(h) - (1 - y) * np.log(1 - h)).mean()

    def fit(self, X, y):
        if self.fit_intercept:
            X = self.__add_intercept(X)
        self.theta = np.zeros(X.shape[1])
        for i in range(self.num_iter):
            # Batch gradient descent step on the cross-entropy loss
            z = np.dot(X, self.theta)
            h = sigmoid(z)
            gradient = np.dot(X.T, (h - y)) / y.size
            self.theta -= self.lr * gradient
            if self.verbose and i % 10000 == 0:
                z = np.dot(X, self.theta)
                h = sigmoid(z)
                print(f'loss: {self.__loss(h, y)}')

    def predict_prob(self, X):
        if self.fit_intercept:
            X = self.__add_intercept(X)
        return sigmoid(np.dot(X, self.theta))

    def predict(self, X, threshold=0.5):
        return self.predict_prob(X) >= threshold
```
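The update inside `fit` uses the analytic gradient of the cross-entropy loss, `X.T @ (h - y) / m`. If you want to verify it, here is a minimal finite-difference sketch (the names `Xb`, `theta`, and `loss` below are introduced just for this check):
```python
# Finite-difference check of the analytic cross-entropy gradient
rng = np.random.RandomState(0)
Xb = np.concatenate((np.ones((X.shape[0], 1)), X), axis=1)  # features with intercept column
theta = 0.1 * rng.randn(Xb.shape[1])                        # arbitrary test point

def loss(t):
    h = sigmoid(np.dot(Xb, t))
    return (-y * np.log(h) - (1 - y) * np.log(1 - h)).mean()

analytic = np.dot(Xb.T, sigmoid(np.dot(Xb, theta)) - y) / y.size
eps = 1e-6
numeric = np.array([
    (loss(theta + eps * np.eye(theta.size)[j]) - loss(theta - eps * np.eye(theta.size)[j])) / (2 * eps)
    for j in range(theta.size)
])
print(np.max(np.abs(analytic - numeric)))  # should be very close to 0
```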
Then we train the model on the dataset:
```python
model = LogisticRegression(lr=0.1, num_iter=300000)
model.fit(X, y)
```
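After training you can inspect the learned parameters; with `fit_intercept=True`, `model.theta[0]` is the bias and the remaining entries are the feature weights:
```python
# Learned parameters: theta[0] is the intercept, theta[1:] are the feature weights
print(model.theta)
```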
Finally, we can use the model to make predictions:
```python
print(model.predict_prob(np.array([[1, 2]])))
print(model.predict(X, 0.5))
```
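As a further check, here is a small sketch that measures training accuracy and plots the learned decision boundary (the line where the predicted probability equals 0.5); it assumes `fit_intercept=True`, so `model.theta` has the form `[bias, w1, w2]`:
```python
# Training accuracy at the 0.5 threshold
preds = model.predict(X, 0.5)
print(f'training accuracy: {(preds == y).mean():.3f}')

# Decision boundary: theta0 + theta1*x1 + theta2*x2 = 0, solved for x2
plt.scatter(X[:, 0], X[:, 1], c=y)
x1_vals = np.linspace(X[:, 0].min(), X[:, 0].max(), 100)
x2_vals = -(model.theta[0] + model.theta[1] * x1_vals) / model.theta[2]
plt.plot(x1_vals, x2_vals, 'r-')
plt.show()
```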
I hope this example helps you understand how logistic regression is implemented.