A Digital Blood Pressure Monitor Driven by the AT89C51 Microcontroller: Oscillometric Principle and System Design

Updated 2024-08-06 · 141 KB PDF
"The Transient Position of Amax in Microcontroller Peripheral Circuit Design: A Digital Blood Pressure Monitor Based on the AT89C51"

This article presents the design of a portable digital blood pressure monitor built around the AT89C51 microcontroller, which measures blood pressure using the oscillometric method. A cuff wrapped around the upper arm is inflated to occlude blood flow and then slowly deflated while the pressure pulses transmitted through the cuff are detected. A fixed-ratio algorithm analyzes these pulses: first the maximum pulse amplitude Amax is determined; the DC (cuff-pressure) component at the point before the peak where the amplitude equals 0.5·Amax is taken as the systolic pressure, and the DC component at the point after the peak where the amplitude equals 0.8·Amax is taken as the diastolic pressure.

The key system components are the AT89C51 microcontroller as the core processor, a BP01 pressure sensor to capture blood-pressure changes, an air pump for inflation, a filtering and amplification circuit for signal preprocessing, a keypad for user interaction, an LCD module to display readings in real time, and a voice-prompt module to help users understand and operate the device. The software combines assembly language and C to ensure accurate signal acquisition and rejection of interference; the main program flow covers inflation, pressure detection, deflation, calculation, and display.

When a reading is abnormal (systolic pressure above 140 mmHg or below 95 mmHg, or diastolic pressure above 90 mmHg or below 45 mmHg), the system announces the likely cause by voice, giving the user a timely health alert. The design is simple to operate and particularly well suited to elderly users with poor eyesight and to blind users, improving the convenience and accuracy of home blood-pressure management.

The article covers the hardware selection, working principle, software design, and abnormal-reading handling in detail, demonstrating an important application of microcontroller technology in healthcare.
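The fixed-ratio calculation described above can be sketched in a few lines of Python. The deflation data below (cuff pressure and oscillation-envelope amplitude) is entirely hypothetical; only the 0.5 and 0.8 ratios come from the article:

```python
import numpy as np

# Hypothetical deflation samples: cuff pressure (mmHg, the DC component)
# and the amplitude of the superimposed oscillation pulse at each sample.
cuff_pressure = np.array([160, 150, 140, 130, 120, 110, 100, 90, 80, 70])
pulse_amp = np.array([0.2, 0.5, 1.0, 1.6, 2.0, 1.9, 1.7, 1.6, 1.2, 0.6])

def fixed_ratio_bp(pressure, amp, ks=0.5, kd=0.8):
    """Fixed-ratio oscillometric estimate: systolic = cuff pressure where
    the envelope first reaches ks*Amax before the peak; diastolic = cuff
    pressure where it has fallen to kd*Amax after the peak."""
    i_max = int(np.argmax(amp))
    amax = amp[i_max]
    # first sample at or before the peak whose amplitude reaches 0.5*Amax
    i_sys = next(i for i in range(i_max + 1) if amp[i] >= ks * amax)
    # last sample at or after the peak whose amplitude is still >= 0.8*Amax
    i_dia = max(i for i in range(i_max, len(amp)) if amp[i] >= kd * amax)
    return pressure[i_sys], pressure[i_dia]

sys_bp, dia_bp = fixed_ratio_bp(cuff_pressure, pulse_amp)
```

With this made-up envelope the sketch reports 140/90 mmHg, exactly at the article's abnormality thresholds; a real device would interpolate between samples rather than pick the nearest one.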

Traceback (most recent call last):
  File "D:\kelly\PycharmProjects\pythonProject8\大作业.py", line 145, in <module>
    model = smf.ols('ExRet ~ PEL1', data=datafit[['ExRet', 'PEL1']].iloc[:(n_in+i),:])
  File "D:\python3.10\lib\site-packages\statsmodels\base\model.py", line 226, in from_formula
    mod = cls(endog, exog, *args, **kwargs)
  File "D:\python3.10\lib\site-packages\statsmodels\regression\linear_model.py", line 906, in __init__
    super(OLS, self).__init__(endog, exog, missing=missing,
  File "D:\python3.10\lib\site-packages\statsmodels\regression\linear_model.py", line 733, in __init__
    super(WLS, self).__init__(endog, exog, missing=missing,
  File "D:\python3.10\lib\site-packages\statsmodels\regression\linear_model.py", line 190, in __init__
    super(RegressionModel, self).__init__(endog, exog, **kwargs)
  File "D:\python3.10\lib\site-packages\statsmodels\base\model.py", line 267, in __init__
    super().__init__(endog, exog, **kwargs)
  File "D:\python3.10\lib\site-packages\statsmodels\base\model.py", line 92, in __init__
    self.data = self._handle_data(endog, exog, missing, hasconst,
  File "D:\python3.10\lib\site-packages\statsmodels\base\model.py", line 132, in _handle_data
    data = handle_data(endog, exog, missing, hasconst, **kwargs)
  File "D:\python3.10\lib\site-packages\statsmodels\base\data.py", line 700, in handle_data
    return klass(endog, exog=exog, missing=missing, hasconst=hasconst,
  File "D:\python3.10\lib\site-packages\statsmodels\base\data.py", line 88, in __init__
    self._handle_constant(hasconst)
  File "D:\python3.10\lib\site-packages\statsmodels\base\data.py", line 132, in _handle_constant
    exog_max = np.max(self.exog, axis=0)
  File "<__array_function__ internals>", line 180, in amax
  File "D:\python3.10\lib\site-packages\numpy\core\fromnumeric.py", line 2793, in amax
    return _wrapreduction(a, np.maximum, 'max', axis, None, out,
  File "D:\python3.10\lib\site-packages\numpy\core\fromnumeric.py", line 86, in _wrapreduction
    return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
ValueError: zero-size array to reduction operation maximum which has no identity

How do I fix this error?
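The error means the design matrix handed to OLS has zero rows: `iloc[:(n_in+i), :]` produced an empty slice, which happens when `n_in + i <= 0` or when every row in the window is dropped as missing. A minimal NumPy reproduction of the root cause, plus a guard to apply before calling `smf.ols` (the names `n_in` and `i` stand in for the loop variables in the user's script):

```python
import numpy as np

# Root cause: NumPy's max has no identity element on an empty array,
# and statsmodels calls np.max(exog, axis=0) internally.
try:
    np.max(np.empty((0, 2)), axis=0)
except ValueError as err:
    msg = str(err)  # "zero-size array to reduction operation maximum ..."

# Guard to run before fitting: make sure the expanding window actually
# contains enough rows (data below is hypothetical stand-in columns).
def check_window(data, n_in, i, min_rows=2):
    window = data[: n_in + i, :]
    if window.shape[0] < min_rows:
        raise ValueError(f"window has {window.shape[0]} rows; "
                         "check n_in + i and any dropped-NaN rows")
    return window

data = np.random.randn(50, 2)       # stand-in for ExRet / PEL1 columns
w = check_window(data, n_in=20, i=3)
```

In the original script the first thing to print is `n_in + i` and `datafit[['ExRet', 'PEL1']].iloc[:(n_in+i),:].dropna().shape` at the failing iteration; one of the two will show zero usable rows.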

Uploaded 2023-06-10

function [state, Y] = Interpolate(Enable, params, TV, t)
%% input
% Tavd = [Tj1 Ta-2*Tj1 Tj1 Tv Tj2 Td-2*Tj2 Tj2];
Tj1 = TV(1); Ta = 2*Tj1 + TV(2); Tv = TV(4);
Tj2 = TV(5); Td = 2*Tj2 + TV(6); T = sum(TV);
% params = [g_vs, g_ve, S, g_Jconst, g_Amax, g_Vmax];
vs = params(1); ve = params(2); Jmax = params(4);
ac_Amaxa = Jmax*Tj1;
ac_Amaxd = -Jmax*Tj2;
ac_Vmax = vs + (Ta-Tj1)*(ac_Amaxa);
v_lim = ac_Vmax; a_lima = ac_Amaxa; a_limd = ac_Amaxd; j_lim = Jmax;
q0 = 0; q1 = params(3);
s = 0; state = 0;
if Enable == 1
    %% Phase 1: acceleration period
    %% a) increasing acceleration
    if t < Tj1
        s = q0 + vs*t + j_lim*t*t*t/6;
        v = vs + j_lim*t*t/2;
        a = j_lim*t;
        j = j_lim;
    end
    %% b) constant acceleration
    if t >= Tj1 && t < (Ta-Tj1)
        s = q0 + vs*t + a_lima*(3*t*t - 3*Tj1*t + Tj1*Tj1)/6;
        v = vs + a_lima*(t - Tj1/2);
        a = a_lima;
        j = 0;
    end
    %% c) decreasing acceleration
    if t >= (Ta-Tj1) && t < Ta
        s = q0 + (v_lim + vs)*Ta/2 - v_lim*(Ta-t) + j_lim*(Ta-t)*(Ta-t)*(Ta-t)/6;
        v = v_lim - j_lim*(Ta-t)*(Ta-t)/2;
        a = j_lim*(Ta-t);
        j = -j_lim;
    end
    %% Phase 2: constant velocity period
    if t >= Ta && t < (Ta+Tv)
        s = q0 + (v_lim + vs)*Ta/2 + v_lim*(t-Ta);
        v = v_lim;
        a = 0;
        j = 0;
    end
    %% Phase 3: deceleration period
    %% a) decreasing acceleration
    if t >= (T-Td) && t < (T-Td+Tj2)
        s = q1 - (v_lim + ve)*Td/2 + v_lim*(t-T+Td) - j_lim*(t-T+Td)*(t-T+Td)*(t-T+Td)/6;
        v = v_lim - j_lim*(t-T+Td)*(t-T+Td)/2;
        a = -j_lim*(t-T+Td);
        j = -j_lim;
    end
    %% b) constant acceleration
    if t >= (T-Td+Tj2) && t < (T-Tj2)
        s = q1 - (v_lim + ve)*Td/2 + v_lim*(t-T+Td) + a_limd/6*(3*(t-T+Td)*(t-T+Td) - 3*Tj2*(t-T+Td) + Tj2*Tj2);
        v = v_lim + a_limd*(t-T+Td - Tj2/2);
        a = a_limd;
        j = 0;
    end
    %% c) increasing acceleration
    if t >= (T-Tj2) && t < T
        s = q1 - ve*(T-t) - j_lim/6*(T-t)*(T-t)*(T-t);
        v = ve + j_lim*(T-t)*(T-t)/2;
        a = -j_lim*(T-t);
        j = j_lim;
    end
    if t > T
        s = q1;
        % Y = [s v a j];
        state = 2;
    end
    Y = s;
else
    %% Output
    state = 0;
    Y = 0;
end
end

In the code above, what do Tj1, Ta, T, Tv, Td, Tj2, vs, ve, Jmax, ac_Amaxa, ac_Amaxd, ac_Vmax, v_lim, a_lima, a_limd, j_lim, q0, q1, s, state, Enable, and the signature `function [state, Y] = Interpolate(Enable, params, TV, t)` each mean?

Uploaded 2023-07-20

Explain the following code:

import numpy as np
from numpy import asarray


def levenberg_marquardt(fun, grad, jacobian, x0, iterations, tol):
    """
    Minimization of a scalar function of one or more variables using the
    Levenberg-Marquardt algorithm.

    Parameters
    ----------
    fun : function
        Objective function.
    grad : function
        Gradient of the objective function.
    jacobian : function
        Jacobian of the objective function.
    x0 : numpy.array, size=9
        Initial value of the parameters to be estimated.
    iterations : int
        Maximum number of iterations.
    tol : float
        Tolerance of the optimization algorithm.

    Returns
    -------
    xk : numpy.array, size=9
        Parameters estimated by the optimization algorithm.
    fval : float
        Objective function value at xk.
    grad_val : float
        Last recorded step norm.
    x_log, y_log, grad_log : numpy.array
        Per-iteration records of x, f(x), and the step norm.
    """
    fval = None      # final objective value
    grad_val = None  # last recorded step norm
    x_log = []       # iterates of x (n rows of 9 parameters, flattened)
    y_log = []       # iterates of f(x)
    grad_log = []    # record of step norms
    x0 = asarray(x0).flatten()
    if x0.ndim == 0:
        x0.shape = (1,)
    k = 1
    xk = x0
    updateJ = 1
    lamda = 0.01
    old_fval = fun(x0)
    gfk = grad(x0)
    gnorm = np.amax(np.abs(gfk))
    while (gnorm > tol) and (k < iterations):
        if updateJ == 1:
            x_log = np.append(x_log, xk.T)
            y_log = np.append(y_log, fun(xk))
            J = jacobian(xk)  # bug fix: was jacobian(x0), which froze J at the start point
            H = np.dot(J.T, J)
        H_lm = H + (lamda * np.eye(len(x0)))  # was np.eye(9); generalized to the parameter count
        gfk = grad(xk)
        pk = -np.linalg.solve(H_lm, gfk)      # was inv(H_lm).dot(gfk) plus a matrix-only .A reshape
        pk = np.asarray(pk).reshape(-1)       # flatten to 1-D
        xk1 = xk + pk
        fval = fun(xk1)
        if fval < old_fval:   # step accepted: damp less (more Gauss-Newton-like)
            lamda = lamda / 10
            xk = xk1
            old_fval = fval
            updateJ = 1
        else:                 # step rejected: damp more (more gradient-descent-like)
            updateJ = 0
            lamda = lamda * 10
        gnorm = np.amax(np.abs(gfk))
        k = k + 1
        grad_log = np.append(grad_log, np.linalg.norm(xk - x_log[-1:]))
    fval = old_fval
    grad_val = grad_log[-1]
    return xk, fval, grad_val, x_log, y_log, grad_log
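The heart of the routine is the Marquardt damping schedule: solve (JᵀJ + λI)·p = -g, then divide λ by 10 when the step lowers the objective and multiply it by 10 when it does not. A self-contained toy run on a two-parameter linear least-squares fit (hypothetical data, independent of the function above) illustrates the loop:

```python
import numpy as np

# Fit y = a*x + b by Levenberg-Marquardt. Residuals r(p) = a*x + b - y,
# so the Jacobian J = dr/dp is constant for this linear toy problem.
x = np.linspace(0.0, 1.0, 20)
y = 3.0 * x + 1.0
def residuals(p):
    return p[0] * x + p[1] - y

J = np.column_stack([x, np.ones_like(x)])

p = np.array([0.0, 0.0])
lam = 0.01
cost = 0.5 * residuals(p) @ residuals(p)
for _ in range(50):
    g = J.T @ residuals(p)                       # gradient of 0.5*||r||^2
    H = J.T @ J
    step = -np.linalg.solve(H + lam * np.eye(2), g)
    p_new = p + step
    new_cost = 0.5 * residuals(p_new) @ residuals(p_new)
    if new_cost < cost:     # accept: behave more like Gauss-Newton
        p, cost, lam = p_new, new_cost, lam / 10
    else:                   # reject: behave more like gradient descent
        lam = lam * 10
    if np.max(np.abs(g)) < 1e-10:
        break
```

Because the problem is linear, the damped step is nearly the exact Gauss-Newton step and p converges to (3, 1) in a handful of iterations; on a nonlinear problem the λ schedule is what keeps the iteration stable far from the optimum.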

Uploaded 2023-06-06