CN109009170A - Method and apparatus for detecting mood - Google Patents
Method and apparatus for detecting mood
- Publication number
- CN109009170A CN109009170A CN201810804712.7A CN201810804712A CN109009170A CN 109009170 A CN109009170 A CN 109009170A CN 201810804712 A CN201810804712 A CN 201810804712A CN 109009170 A CN109009170 A CN 109009170A
- Authority
- CN
- China
- Prior art keywords
- voice signal
- user
- characteristic information
- sound
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- A61B5/165 — Evaluating the state of mind, e.g. depression, anxiety
- A61B5/02 — Detecting, measuring or recording for evaluating the cardiovascular system, e.g. pulse, heart rate, blood pressure or blood flow
- A61B5/02055 — Simultaneously evaluating both cardiovascular condition and temperature
- A61B5/021 — Measuring pressure in heart or blood vessels
- A61B5/6802 — Sensor mounted on worn items
- A61B5/6804 — Garments; Clothes
- A61M21/02 — Devices or methods for inducing sleep or relaxation, e.g. by direct nerve stimulation, hypnosis, analgesia
- A61M2021/0027 — Inducing a change in the state of consciousness by the use of a particular sense or stimulus, by the hearing sense
Abstract
The present invention discloses a method and apparatus for detecting mood. The method for detecting mood includes: collecting a sound signal; judging whether the loudness of the sound signal exceeds a threshold; if the threshold is exceeded, judging whether a specific word is present in the sound signal; if a specific word is present, collecting characteristic information of the user corresponding to the sound signal and judging whether the characteristic information is abnormal; and if it is abnormal, determining that the user corresponding to the sound signal is emotionally excited. The invention thereby achieves the technical effect of timely and accurate identification of an excited mood while guaranteeing the accuracy of emotion identification.
Description
Technical field
The present invention relates to the medical field, and more particularly to a method and apparatus for detecting mood.
Background technique
Against the current background of building a harmonious society, the family is the basic unit of society. Family harmony is therefore both a foundation and a guarantee of a harmonious society, as well as an effective way to achieve it. A harmonious family is built on mutual equality, love, tolerance, and gratitude among its members, so the emotional changes of each family member can affect the relationships of the entire family. If technological means could correctly remind, guide, and defuse problems in the early stage of an emotional change, the escalation of that mood could be prevented, keeping relationships among family members harmonious and the members themselves physically and mentally healthy. The prior art, however, offers no effective solution for identifying the mood of a family member and issuing a reminder when that member becomes excited.
Summary of the invention
The main object of the present invention is to provide a method and apparatus for detecting mood, so as to achieve the technical effect of timely and accurate identification of an excited mood.
The present invention provides a method for detecting mood, comprising the following steps:
collecting a sound signal;
judging whether the loudness of the sound signal exceeds a threshold;
if the threshold is exceeded, judging whether a specific word is present in the sound signal;
if a specific word is present, collecting characteristic information of the user corresponding to the sound signal, and judging whether the characteristic information is abnormal;
if it is abnormal, determining that the user corresponding to the sound signal is emotionally excited.
Further, the characteristic information includes physiological characteristic information, and the step of collecting the characteristic information of the user corresponding to the sound signal and judging whether it is abnormal comprises:
collecting current physiological characteristic information of the user corresponding to the sound signal;
judging whether the current physiological characteristic information matches pre-stored normal physiological characteristic information of that user;
if they do not match, determining that the current physiological characteristic information is abnormal; if they match, determining that it is normal.
Further, the characteristic information includes motion image information, and the step of collecting the characteristic information of the user corresponding to the sound signal and judging whether it is abnormal comprises:
collecting motion image information of the user corresponding to the sound signal;
extracting limb action features of the user from the motion image information;
judging whether the limb action features match pre-stored normal limb action features of that user;
if they do not match, determining that the motion image information is abnormal; if they match, determining that it is normal.
Further, when the sound signal corresponds to multiple users, the step of collecting the characteristic information of the users corresponding to the sound signal and judging whether it is abnormal comprises:
analyzing the types of sound features contained in the sound signal;
locating each user corresponding to the sound signal according to the types of sound features;
collecting the characteristic information corresponding to each user;
judging whether each collected piece of characteristic information is abnormal.
Further, the step of collecting the motion image information of the user corresponding to the sound signal comprises:
starting a sound source detection and localization algorithm, and calculating a sound bearing parameter;
adjusting a preset video acquisition device to turn toward the corresponding bearing according to the sound bearing parameter;
collecting the motion image information of the user using the preset video acquisition device.
The present invention also provides a device for detecting mood, comprising:
an acquisition module, for collecting a sound signal;
a first judgment module, for judging whether the loudness of the sound signal exceeds a threshold;
a second judgment module, for judging whether a specific word is present in the sound signal;
an acquisition judgment module, for collecting characteristic information of the user corresponding to the sound signal and judging whether the characteristic information is abnormal;
an emotion judgment module, for determining that the user corresponding to the sound signal is emotionally excited.
Further, the characteristic information includes physiological characteristic information, and the acquisition judgment module includes:
a physiological characteristic acquisition module, for collecting current physiological characteristic information of the user corresponding to the sound signal;
a physiological characteristic judgment module, for judging whether the current physiological characteristic information matches the user's pre-stored normal physiological characteristic information;
a physiological characteristic determination module, for determining whether the current physiological characteristic information is abnormal.
Further, the characteristic information includes motion image information, and the acquisition judgment module includes:
an image information collecting module, for collecting motion image information of the user corresponding to the sound signal;
an action feature extraction module, for extracting limb action features of the user from the motion image information;
an action feature judgment module, for judging whether the limb action features match the user's pre-stored normal limb action features;
a motion image determination module, for determining that the motion image information is abnormal if they do not match, and normal if they match.
Further, when the sound signal corresponds to multiple users, the acquisition judgment module further includes:
a sound analysis module, for analyzing the types of sound features contained in the sound signal;
a sound locating module, for locating each user corresponding to the sound signal according to the types of sound features;
a separate acquisition module, for collecting the characteristic information corresponding to each user;
a separate judgment module, for judging whether each collected piece of characteristic information is abnormal.
Further, the device includes:
a sound source localization module, for starting a sound source detection and localization algorithm and calculating a sound bearing parameter;
a bearing adjustment module, for adjusting a preset video acquisition device to turn toward the corresponding bearing according to the sound bearing parameter;
a motion image acquisition module, for collecting the motion image information of the user corresponding to the sound signal using the preset video acquisition device.
The method and apparatus for detecting mood provided by the present invention achieve the technical effect of timely and accurate identification of an excited mood. Multiple judgment steps guarantee the accuracy of emotion identification; identification based on physiological signals or action signals further guarantees that accuracy; and an excitement reminder sound played through audio output helps the user corresponding to the sound signal calm down.
Brief description of the drawings
Fig. 1 is a flow diagram of a method for detecting mood according to an embodiment of the invention;
Fig. 2 is a flow diagram of step S40 according to an embodiment of the invention;
Fig. 3 is a flow diagram of step S40 according to an embodiment of the invention;
Fig. 4 is a structural block diagram of a device for detecting mood according to an embodiment of the invention;
Fig. 5 is a structural block diagram of the acquisition judgment module 40 according to an embodiment of the invention;
Fig. 6 is a structural block diagram of the acquisition judgment module 40 according to an embodiment of the invention.
The realization of the objects, functional characteristics, and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed description of the embodiments
It should be appreciated that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
Embodiments of the present invention are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numbers throughout indicate the same or similar elements, or elements with the same or similar functions. The embodiments described below with reference to the drawings are exemplary, serve only to explain the invention, and are not to be construed as limiting the claims.
Those skilled in the art will appreciate that, unless expressly stated otherwise, the singular forms "a", "an", "said", and "the" used herein may also include the plural. It should be further understood that the word "comprising" used in this specification indicates the presence of the stated features, integers, steps, operations, elements, and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Those skilled in the art will also appreciate that, unless otherwise defined, all terms used herein (including technical and scientific terms) have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. Terms such as those defined in ordinary dictionaries should be understood to have meanings consistent with their meaning in the context of the prior art and, unless specifically defined as here, are not to be interpreted in an idealized or overly formal sense.
Referring to Fig. 1, an embodiment of the method for detecting mood provided by the invention comprises the following steps:
S10: collecting a sound signal;
S20: judging whether the loudness of the sound signal exceeds a threshold;
S30: if the threshold is exceeded, judging whether a specific word is present in the sound signal;
S40: if a specific word is present, collecting characteristic information of the user corresponding to the sound signal, and judging whether the characteristic information is abnormal;
S50: if it is abnormal, determining that the user corresponding to the sound signal is emotionally excited.
Step S10, collecting a sound signal, can be realized by any sound collection device, for example by a microphone or a microphone array.
Next, step S20 judges whether the loudness of the sound signal exceeds a threshold. The threshold can be set according to specific requirements, and may be set to 60 decibels or more, such as 60, 70, 80, or 120 decibels.
Next, in step S30, if the threshold is exceeded, it is judged whether a specific word is present in the sound signal. Specific words include angry speech, coarse language, and everyday expressions used when excited, such as "what are you looking at", "damn it", "you, you, you...", and the like. The presence of a specific word in the sound signal indicates that the user corresponding to the signal may be quarreling and may be emotionally excited. A concrete process may be: after the sound is processed by intelligent algorithms such as speech recognition and semantic parsing, the result is compared with stored specific words (for example, specific words stored in a memory) to judge whether a specific word is present in the sound.
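The comparison with stored specific words described above can be sketched as a simple substring match against a stored word list; the word list and function name here are illustrative, not taken from the patent:

```python
# Hypothetical stored specific words (the patent stores such words in a memory).
STORED_WORDS = ("what are you looking at", "damn it")

def has_trigger_word(transcript: str) -> bool:
    """Judge whether a recognized transcript contains any stored specific word."""
    t = transcript.lower()
    return any(word in t for word in STORED_WORDS)

print(has_trigger_word("Damn it, get out!"))  # True
print(has_trigger_word("good morning"))       # False
```

A real implementation would run this after speech recognition and semantic parsing rather than on raw audio.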
Next, in step S40, if a specific word is present, the characteristic information of the user corresponding to the sound signal is collected, and it is judged whether the characteristic information is abnormal. The characteristic information of the user refers to a signal that can reflect that user's emotion. It can be collected by any means, for example by capturing image frames with a camera or by collecting pulse signals with a sensor.
Next, in step S50, if the characteristic information is abnormal, it is determined that the user corresponding to the sound signal is emotionally excited. Since the characteristic information of the user has been determined to be abnormal, it can be confirmed that the user is in an excited state.
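Steps S10-S50 form a chain of gates, which can be sketched as a single decision function; the function name, parameter names, and default trigger words below are illustrative assumptions, not from the patent:

```python
def detect_excited_mood(sound_db: float,
                        transcript: str,
                        feature_is_abnormal: bool,
                        threshold_db: float = 60.0,
                        trigger_words=("damn it", "what are you looking at")) -> bool:
    """Return True only when all claimed conditions hold:
    loud sound (S20), a specific word (S30), abnormal feature info (S40/S50)."""
    if sound_db <= threshold_db:                      # S20: loudness gate
        return False
    t = transcript.lower()
    if not any(w in t for w in trigger_words):        # S30: specific-word gate
        return False
    return feature_is_abnormal                        # S40/S50: feature gate

print(detect_excited_mood(75.0, "Damn it, get out", True))   # True
print(detect_excited_mood(55.0, "Damn it, get out", True))   # False
```

Each gate short-circuits, matching the order of the claimed steps: the costlier feature collection is only reached when the cheaper audio checks have fired.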
Further, after the step of determining that the user corresponding to the sound signal is emotionally excited, the method comprises: outputting an excitement reminder sound through audio. When the user is in an excited state, the reminder sound prompts the user to control his or her mood. The reminder sound may be vocal music or natural sounds (flowing water, blowing wind, rain, and the like), for example a soothing spoken phrase or soft music.
Further, when the sound signal corresponds to multiple users, step S40 of collecting the characteristic information of the users corresponding to the sound signal and judging whether it is abnormal comprises:
S401: analyzing the types of sound features contained in the sound signal;
S402: locating each user corresponding to the sound signal according to the types of sound features;
S403: collecting the characteristic information corresponding to each user;
S404: judging whether each collected piece of characteristic information is abnormal.
As described in steps S401-S402, which user a sound comes from can be judged by comparing the sound signal with pre-stored sound signals; each user can be recorded in advance to serve as the pre-stored signal. Methods of analyzing the types of sound features contained in the sound signal include analyzing timbre, analyzing pitch, and the like.
As described in steps S403-S404, the characteristic information reflects the emotional state of a user. Through steps S403-S404, the characteristic information of every user is collected and judged; for example, image frames can be captured with a camera and pulse signals collected with a sensor to judge whether the limb signals and physiological signals are abnormal.
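The per-user dispatch of steps S401-S402 can be sketched by matching a sound feature against pre-recorded voiceprints; here the voiceprint is reduced to a single mean-pitch number for illustration, which is a simplifying assumption (a real system would compare richer timbre features):

```python
# Hypothetical pre-recorded voiceprints, one per family member (mean pitch, Hz).
VOICEPRINTS = {"user_a": 220.0, "user_b": 140.0}

def identify_user(mean_pitch: float) -> str:
    """Attribute a sound to the user whose pre-stored voiceprint is closest."""
    return min(VOICEPRINTS, key=lambda u: abs(VOICEPRINTS[u] - mean_pitch))

print(identify_user(215.0))  # user_a
print(identify_user(150.0))  # user_b
```

Once a segment is attributed to a user, that user's own characteristic information (camera frames, pulse signals) is collected and judged separately, as steps S403-S404 describe.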
Referring to Fig. 2, in an embodiment of the method for detecting mood provided by the invention, the characteristic information includes physiological characteristic information, and step S40 of collecting the characteristic information of the user corresponding to the sound signal and judging whether it is abnormal comprises:
S411: collecting current physiological characteristic information of the user corresponding to the sound signal;
S412: judging whether the current physiological characteristic information matches the user's pre-stored normal physiological characteristic information;
S413: if they do not match, determining that the current physiological characteristic information is abnormal; if they match, determining that it is normal.
As described in steps S411-S413, when a person is in an agitated emotional state, the body's physiological characteristics differ from usual, so physiological characteristic information can be used to detect a person's emotional state. Physiological characteristic information includes pulse, heart rate, blood pressure, hormones, body temperature, and the like. It can be collected in any way, for example by measuring pulse, heart rate, blood pressure, and body temperature with vibration sensors, ultrasonic sensors, infrared sensors, or temperature sensors (which may be provided on clothing or a wearable device), and it can be transmitted wirelessly, for example over Bluetooth or WiFi. As described in steps S412-S413, when the physiological characteristic information does not match the pre-stored normal physiological characteristic information, the user's state is abnormal and the physiological characteristic information is determined to be abnormal; if it matches, it is normal. The pre-stored normal physiological characteristic information, including pulse, heart rate, blood pressure, body temperature, and the like, can be collected and entered in advance.
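The match/mismatch judgment of steps S412-S413 can be sketched as a tolerance check against the pre-stored baseline; the baseline values and tolerance widths below are illustrative assumptions only, not medical thresholds:

```python
# Hypothetical pre-stored normal physiological baseline for one user.
BASELINE = {"heart_rate": 70.0, "blood_pressure_sys": 120.0, "body_temp": 36.6}
# Hypothetical per-signal tolerances defining what still counts as a "match".
TOLERANCE = {"heart_rate": 15.0, "blood_pressure_sys": 20.0, "body_temp": 0.6}

def physiology_is_abnormal(current: dict) -> bool:
    """A mismatch (abnormality) is any reading deviating from the baseline
    by more than its tolerance."""
    return any(abs(current[k] - BASELINE[k]) > TOLERANCE[k] for k in BASELINE)

print(physiology_is_abnormal(
    {"heart_rate": 110.0, "blood_pressure_sys": 150.0, "body_temp": 37.2}))  # True
print(physiology_is_abnormal(
    {"heart_rate": 72.0, "blood_pressure_sys": 118.0, "body_temp": 36.5}))   # False
```
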
Referring to Fig. 3, in an embodiment of the method for detecting mood provided by the invention, the characteristic information includes motion image information, and step S40 of collecting the characteristic information of the user corresponding to the sound signal and judging whether it is abnormal comprises:
S421: collecting motion image information of the user corresponding to the sound signal;
S422: extracting limb action features of the user from the motion image information;
S423: judging whether the limb action features match the user's pre-stored normal limb action features;
S424: if they do not match, determining that the motion image information is abnormal; if they match, determining that it is normal.
As described in steps S421-S424, when a person is in an agitated emotional state, the body's behavior also differs from usual: arms may be wildly swung during a violent quarrel, and a person in an angry state may hit and kick. The body's action signals, especially limb features, can therefore be used to detect a person's emotional state. The judgment of steps S423-S424 includes comparing the extracted limb features with specific features, where the specific features are the limb features of a person in an agitated emotional state, such as wildly swinging the arms or hitting and kicking.
A concrete approach is, for example: photographing the user corresponding to the sound signal with a camera to obtain image information, and performing limb feature extraction and feature matching on that image information; or collecting video of the user with a camera, taking a random frame of the video as image information, and performing limb feature extraction and feature matching on it. When the extracted limb features match the specific features, the action signal of the user corresponding to the sound signal is determined to be abnormal.
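The limb-action check can be sketched with frame-to-frame motion magnitude standing in for the extracted limb features; this is a deliberate simplification (a real system would use pose estimation on the captured frames), and all names and thresholds are illustrative:

```python
def motion_magnitude(prev_frame, frame) -> float:
    """Mean absolute pixel difference between two grayscale frames
    (flattened to lists of equal length)."""
    return sum(abs(a - b) for a, b in zip(prev_frame, frame)) / len(frame)

def action_is_abnormal(prev_frame, frame, threshold: float = 30.0) -> bool:
    """Flag violent movement, e.g. wildly swung arms, as an abnormal action."""
    return motion_magnitude(prev_frame, frame) > threshold

calm = [100] * 64
violent = [180 if i % 2 else 20 for i in range(64)]
print(action_is_abnormal(calm, calm))     # False
print(action_is_abnormal(calm, violent))  # True
```
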
Further, step S40 can include both steps S411-S413 and steps S421-S424. In one case, the characteristic information of the user corresponding to the sound signal is determined to be abnormal only when both steps S411-S413 and steps S421-S424 determine an abnormality; if either group of steps determines no abnormality, the characteristic information is determined not to be abnormal. In another case, the characteristic information is determined to be abnormal as soon as either steps S411-S413 or steps S421-S424 determines an abnormality.
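The two combination policies just described can be sketched as a small fusion function; the policy names ("strict", "any") are illustrative labels, not terminology from the patent:

```python
def fused_abnormal(physio_abnormal: bool, action_abnormal: bool,
                   policy: str = "any") -> bool:
    """Combine the physiological check (S411-S413) and the action check
    (S421-S424). "strict": both must flag; "any": either suffices."""
    if policy == "strict":
        return physio_abnormal and action_abnormal
    return physio_abnormal or action_abnormal

print(fused_abnormal(True, False, policy="strict"))  # False
print(fused_abnormal(True, False, policy="any"))     # True
```

The "strict" policy trades sensitivity for fewer false alarms; the "any" policy does the reverse.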
In an embodiment of the method for detecting mood provided by the invention, step S421, collecting the motion image information of the user corresponding to the sound signal, comprises:
S4211: starting a sound source detection and localization algorithm, and calculating a sound bearing parameter;
S4212: adjusting a preset video acquisition device to turn toward the corresponding bearing according to the sound bearing parameter;
S4213: collecting the motion image information of the user using the preset video acquisition device.
The sound source localization algorithm can be any effective algorithm capable of computing the position of the sound source. After the bearing parameter is obtained, the video acquisition device is adjusted to turn toward the corresponding bearing, allowing the video acquisition module to capture the user who produced the sound signal.
Further, when there are multiple sound sources, the loudness of each source is judged, the sound source localization algorithm is started to calculate the bearing parameter of each source, and the video acquisition device is turned toward the loudest sound source.
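Handling multiple sources reduces to picking the loudest bearing; the sketch below assumes the localization algorithm has already produced a (loudness, bearing) pair per source, which is a simplifying assumption:

```python
def steer_to_loudest(sources):
    """sources: list of (loudness_db, bearing_deg) pairs from the
    localization algorithm; return the bearing the preset video
    acquisition device should turn toward."""
    loudest = max(sources, key=lambda s: s[0])
    return loudest[1]

print(steer_to_loudest([(62.0, 30.0), (75.0, 120.0), (68.0, 210.0)]))  # 120.0
```
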
The method for detecting mood provided by the invention achieves the technical effect of timely and accurate identification of an excited mood. Multiple judgment steps guarantee the accuracy of emotion identification; identification based on physiological signals or action signals further guarantees that accuracy; and an excitement reminder sound played through audio output helps the user corresponding to the sound signal calm down.
Referring to Fig. 4, an embodiment of the device 100 for detecting mood provided by the invention comprises:
an acquisition module 10, for collecting a sound signal;
a first judgment module 20, for judging whether the loudness of the sound signal exceeds a threshold;
a second judgment module 30, for judging whether a specific word is present in the sound signal;
an acquisition judgment module 40, for collecting characteristic information of the user corresponding to the sound signal and judging whether the characteristic information is abnormal;
an emotion judgment module 50, for determining that the user corresponding to the sound signal is emotionally excited.
The acquisition module 10 acquires the voice signal. It may be implemented by any voice acquisition device, for example a microphone or a microphone array.
The first judgment module 20 judges whether the loudness of the voice signal exceeds a threshold. The threshold may be set according to specific requirements, and may be set to 60 decibels or more, for example 60, 70, 80 or 120 decibels.
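A minimal sketch of the loudness gate performed by the first judgment module. The patent's decibel figures are sound pressure levels, which require a calibrated microphone; this sketch measures the level of a digital sample block relative to full scale, so the threshold value shown is an illustrative assumption rather than the patent's 60 dB SPL figure.

```python
import math

def sound_level_db(samples):
    """Root-mean-square level of an audio sample block, in dB relative
    to digital full scale (samples normalized to [-1.0, 1.0])."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(max(rms, 1e-12))  # floor avoids log(0)

def exceeds_threshold(samples, threshold_db=-20.0):
    """True when the block is loud enough to trigger further judgment."""
    return sound_level_db(samples) > threshold_db
```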
If the threshold is exceeded, the second judgment module 30 judges whether a specific word is present in the voice signal. Specific words include angry speech, coarse language and common expressions used when excited, such as "what are you looking at", "damn it", "you, you, you…" and so on. The presence of a specific word in the voice signal indicates that the user corresponding to the voice signal may be quarrelling and may be emotionally excited. A concrete process may be: the sound is processed by intelligent algorithms such as speech recognition and semantic parsing, and the result is compared with stored specific words (for example, specific words stored in a memory) to judge whether a specific word is present in the sound.
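The comparison against stored specific words can be sketched as a substring match over the recognized transcript. The speech-recognition and semantic-parsing steps are assumed to be handled by an external engine and are not shown; the trigger phrases below are illustrative stand-ins for the stored word list.

```python
# Illustrative stored specific words; a real device would load these
# from memory as the patent describes.
TRIGGER_PHRASES = ("what are you looking at", "damn", "you, you, you")

def contains_trigger_word(transcript, trigger_phrases=TRIGGER_PHRASES):
    """Judge whether an ASR transcript contains any stored specific
    word or phrase, case-insensitively."""
    text = transcript.lower()
    return any(phrase in text for phrase in trigger_phrases)
```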
If a specific word is present, the acquisition judgment module 40 acquires characteristic information of the user corresponding to the voice signal and judges whether the characteristic information is abnormal. The characteristic information refers to a signal that can reflect the emotion of the user corresponding to the voice signal. It may be acquired by any means, for example capturing image frames with a camera or acquiring a pulse signal with a sensor.
If the characteristic information is abnormal, the emotion judgment module 50 determines that the user corresponding to the voice signal is emotionally excited. Once the characteristic information of that user has been determined to be abnormal, it can be confirmed that the user is in an excited state.
Further, the device includes an audio output module for outputting an excitement reminder sound. When the user corresponding to the voice signal is in an excited state, an audio prompt reminds that user to control his or her emotion. The reminder sound may be a human voice, music, or a natural sound such as flowing water, wind or rain, for example "Calm down, master" or soft music.
Further, when the voice signal corresponds to multiple users, the acquisition judgment module 40 includes:
a sound analysis module 401 for analyzing the types of sound features contained in the voice signal;
a sound localization module 402 for locating each user corresponding to the voice signal according to the type of sound feature;
a separate acquisition module 403 for acquiring the characteristic information of each user individually; and
a separate judgment module 404 for judging individually whether each acquired piece of characteristic information is abnormal.
For example, the sound analysis module 401 and the sound localization module 402 may compare the voice signal with pre-stored voice signals to judge which user the voice signal comes from; each user's voice may be recorded in advance to serve as the pre-stored voice signal. Analyzing the types of sound features contained in the voice signal includes analyzing timbre, pitch and so on. In the separate acquisition module 403 and the separate judgment module 404, the characteristic information reflects each user's emotional state; these modules acquire and judge the characteristic information of every user, for example capturing image frames with a camera or acquiring pulse signals with sensors to judge whether limb signals or physiological signals are abnormal.
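The per-user acquisition and judgment loop of modules 403 and 404 can be sketched as follows. The audio is assumed to be already segmented and attributed to users by the voiceprint comparison described above; `collect_features` and `is_abnormal` are injected stand-ins for the camera/sensor modules and are not names from the patent.

```python
def judge_each_user(user_segments, baselines, collect_features, is_abnormal):
    """For each identified user, collect that user's characteristic
    information individually and judge it against the user's own
    pre-stored baseline, returning a per-user abnormality map."""
    results = {}
    for user_id, segment in user_segments.items():
        features = collect_features(user_id, segment)
        results[user_id] = is_abnormal(features, baselines[user_id])
    return results
```

A usage example with toy heart-rate features: each user is judged against his or her own baseline, so one quarrelling user can be flagged while the other is not.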
Referring to Fig. 5, in an embodiment of the emotion detection device 100 provided by the present invention, the characteristic information includes physiological characteristic information, and the acquisition judgment module 40 includes:
a physiological characteristic acquisition module 411 for acquiring the current physiological characteristic information of the user corresponding to the voice signal;
a physiological characteristic judgment module 412 for judging whether the current physiological characteristic information matches the pre-stored normal physiological characteristic information of that user; and
a physiological characteristic determination module 413 for determining whether the current physiological characteristic information is abnormal.
As described for the physiological characteristic acquisition module 411, the physiological characteristic judgment module 412 and the physiological characteristic determination module 413, when a person is in an emotional state the body's physiological characteristics differ from usual, so physiological characteristic information can be used to detect a person's emotional state. Physiological characteristic information includes pulse, heart rate, blood pressure, hormone levels, body temperature and so on. It may be acquired in any way, for example measuring pulse, heart rate, blood pressure and body temperature with vibration sensors, ultrasonic sensors, infrared sensors or temperature sensors (which may be arranged on clothing or a wearable device), and may be transmitted wirelessly, for example by Bluetooth or WIFI. If the current physiological characteristic information does not match the pre-stored normal physiological characteristic information, the user's state is abnormal and the physiological characteristic information is determined to be abnormal; if it matches, it is determined to be normal. The pre-stored normal physiological characteristic information, including pulse, heart rate, blood pressure, body temperature and so on, may be acquired and recorded in advance.
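The match-against-baseline judgment can be sketched as a per-metric tolerance check: the pre-stored normal values come from the advance recording described above, and any metric outside its tolerance marks the physiological characteristic information as abnormal. The tolerance values below are illustrative assumptions, not thresholds from the patent.

```python
# Illustrative per-metric tolerances; a real device would tune these.
TOLERANCES = {"heart_rate": 15, "body_temp": 0.8, "systolic_bp": 20}

def physiology_abnormal(current, baseline, tolerances=TOLERANCES):
    """Compare currently sensed readings against the user's pre-stored
    normal baseline; any out-of-tolerance metric means abnormal."""
    for metric, tol in tolerances.items():
        if abs(current[metric] - baseline[metric]) > tol:
            return True
    return False
```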
Referring to Fig. 6, in an embodiment of the emotion detection device 100 provided by the present invention, the characteristic information includes motion image information, and the acquisition judgment module 40 includes:
an image information acquisition module 421 for acquiring the motion image information of the user corresponding to the voice signal;
a motion feature extraction module 422 for extracting the limb action features of that user from the motion image information;
a motion feature judgment module 423 for judging whether the limb action features match the pre-stored normal limb action features of that user; and
a motion image determination module 424 for determining that the motion image information is abnormal if they do not match, and normal if they match.
As described for the above modules, when a person is in an emotional state the body's behavior also differs from usual: a person may wave the arms violently during a fierce quarrel, or punch and kick when angry. The body's motion signals, especially limb features, can therefore be used to detect a person's emotional state. The judgment performed by modules 423 and 424 includes comparing the extracted limb features with specific features, where a specific feature refers to a limb feature exhibited when a person is in an emotional state, such as violently waving the arms or punching and kicking.
A concrete approach is, for example: photographing the user corresponding to the voice signal with a camera to obtain image information, then performing limb feature extraction and feature matching on the image information; or acquiring video information of that user with a camera, taking a random frame of the video as the image information, and performing limb feature extraction and feature matching on it. When an extracted limb feature matches a specific feature, the motion signal of the user corresponding to the voice signal is determined to be abnormal.
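The random-frame selection and the match against specific agitated-action features can be sketched as follows. The limb feature extraction itself (e.g. pose estimation on the frame) is assumed to be performed by an external vision module, and the action labels are illustrative assumptions.

```python
import random

# Illustrative specific features for an agitated state.
AGITATED_ACTIONS = {"arm_waving", "punching", "kicking"}

def pick_random_frame(frames, rng=random):
    """Take a random frame of the captured video stream as the image
    information, as the description suggests, instead of every frame."""
    return rng.choice(frames)

def motion_abnormal(extracted_actions, specific=AGITATED_ACTIONS):
    """The motion image information is judged abnormal when any
    extracted limb action matches a stored specific feature."""
    return bool(specific & set(extracted_actions))
```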
Further, the acquisition judgment module 40 may include both modules 411-413 and modules 421-424 at the same time.
In an embodiment of the emotion detection device provided by the present invention, the device includes:
a sound source localization module 60 for starting a sound source localization algorithm and computing a sound bearing parameter;
an orientation adjustment module 70 for adjusting a preset video acquisition device to turn toward the corresponding orientation according to the sound bearing parameter; and
a motion image acquisition module 80 for acquiring, with the preset video acquisition device, the motion image information of the user corresponding to the voice signal.
The sound source localization algorithm may be any effective algorithm capable of computing the position of the sound source. After the bearing parameter is obtained, the video acquisition device is turned toward the corresponding orientation, so that the video acquisition module can capture the user who produced the voice signal.
Further, when multiple sound sources exist, the loudness of each sound source is compared, the sound source localization algorithm computes a bearing parameter for each source, and the video acquisition device is turned toward the loudest source.
An example of using the emotion detection device 100 is as follows:
After power-on the device enters a standby mode, in which the acquisition module 10 remains in a working state;
When the loudness of a voice signal acquired by the acquisition module 10 exceeds the threshold, an acoustic front-end processing module (or a state switching module) wakes the emotion detection device 100 into a working state;
The sound undergoes subsequent processing by intelligent algorithms such as speech recognition and semantic parsing, and the device judges whether the sentence contains a specific word;
If so, the sound source localization algorithm is started and the sound source bearing parameter is computed;
The video acquisition module is opened, and its motion control motor is adjusted to the corresponding orientation using the obtained bearing parameter;
A random frame of the video stream undergoes image processing, limb feature extraction and feature matching to judge whether the motion shows aggressive behavior;
The physiological characteristic acquisition module cycles through the family members one by one, obtains and compares their physiological characteristic parameters, and checks whether any parameter is abnormal;
If there is aggressive behavior and an abnormal parameter (or there is aggressive action but all parameters are normal, or there is no aggressive action but a parameter is abnormal), the emotion of the user corresponding to the voice signal can be determined to be abnormal;
A reminder is given through the audio output.
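The decision rule of the example workflow above can be sketched as: both acoustic gates (loudness and specific word) must pass, after which either an aggressive action or an abnormal physiological parameter, or both, confirms an excited emotion. The function name and boolean inputs are illustrative.

```python
def emotion_excited(loudness_exceeded, has_trigger_word,
                    motion_is_abnormal, physiology_is_abnormal):
    """Combine the workflow's judgments: the acoustic gates are
    conjunctive, the feature-information judgments are disjunctive."""
    if not (loudness_exceeded and has_trigger_word):
        return False
    return motion_is_abnormal or physiology_is_abnormal
```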
The emotion detection device provided by the present invention achieves the technical effect of identifying an excited emotion promptly and accurately. Multiple judgment steps guarantee the accuracy of emotion recognition; recognition of physiological or motion signals further guarantees that accuracy; and an excitement reminder sound output through audio helps the user corresponding to the voice signal calm down.
The above is only a preferred embodiment of the present invention and is not intended to limit its scope. Any equivalent structure or equivalent process transformation made using the description and drawings of the present invention, applied directly or indirectly in other related technical fields, likewise falls within the scope of protection of the present invention.
Claims (10)
1. A method for detecting emotion, characterized in that the method comprises the steps of:
acquiring a voice signal;
judging whether the loudness of the voice signal exceeds a threshold;
if the threshold is exceeded, judging whether a specific word is present in the voice signal;
if a specific word is present, acquiring characteristic information of the user corresponding to the voice signal, and judging whether the characteristic information is abnormal;
if abnormal, determining that the user corresponding to the voice signal is emotionally excited.
2. The emotion detection method according to claim 1, characterized in that the characteristic information includes physiological characteristic information, and the step of acquiring the characteristic information of the user corresponding to the voice signal and judging whether the characteristic information is abnormal comprises:
acquiring the current physiological characteristic information of the user corresponding to the voice signal;
judging whether the current physiological characteristic information matches the pre-stored normal physiological characteristic information of the user corresponding to the voice signal;
if they do not match, determining that the current physiological characteristic information is abnormal; if they match, determining that the current physiological characteristic information is normal.
3. The emotion detection method according to claim 1, characterized in that the characteristic information includes motion image information, and the step of acquiring the characteristic information of the user corresponding to the voice signal and judging whether the characteristic information is abnormal comprises:
acquiring the motion image information of the user corresponding to the voice signal;
extracting the limb action features of the user corresponding to the voice signal from the motion image information;
judging whether the limb action features match the pre-stored normal limb action features of the user corresponding to the voice signal;
if they do not match, determining that the motion image information is abnormal; if they match, determining that the motion image information is normal.
4. The emotion detection method according to any one of claims 1-3, characterized in that the voice signal corresponds to multiple users, and the step of acquiring the characteristic information of the user corresponding to the voice signal and judging whether the characteristic information is abnormal comprises:
analyzing the types of sound features contained in the voice signal;
locating each user corresponding to the voice signal according to the type of sound feature;
acquiring the characteristic information of each user individually;
judging individually whether each acquired piece of characteristic information is abnormal.
5. The emotion detection method according to claim 3, characterized in that the step of acquiring the motion image information of the user corresponding to the voice signal comprises:
starting a sound source localization algorithm and computing a sound bearing parameter;
adjusting a preset video acquisition device to turn toward the corresponding orientation according to the sound bearing parameter;
acquiring, with the preset video acquisition device, the motion image information of the user corresponding to the voice signal.
6. A device for detecting emotion, characterized in that the device comprises:
an acquisition module for acquiring a voice signal;
a first judgment module for judging whether the loudness of the voice signal exceeds a threshold;
a second judgment module for judging whether a specific word is present in the voice signal;
an acquisition judgment module for acquiring characteristic information of the user corresponding to the voice signal and judging whether the characteristic information is abnormal;
an emotion judgment module for determining that the user corresponding to the voice signal is emotionally excited.
7. The emotion detection device according to claim 6, characterized in that the characteristic information includes physiological characteristic information, and the acquisition judgment module comprises:
a physiological characteristic acquisition module for acquiring the current physiological characteristic information of the user corresponding to the voice signal;
a physiological characteristic judgment module for judging whether the current physiological characteristic information matches the pre-stored normal physiological characteristic information of the user corresponding to the voice signal;
a physiological characteristic determination module for determining whether the current physiological characteristic information is abnormal.
8. The emotion detection device according to claim 6, characterized in that the characteristic information includes motion image information, and the acquisition judgment module comprises:
an image information acquisition module for acquiring the motion image information of the user corresponding to the voice signal;
a motion feature extraction module for extracting the limb action features of the user corresponding to the voice signal from the motion image information;
a motion feature judgment module for judging whether the limb action features match the pre-stored normal limb action features of the user corresponding to the voice signal;
a motion image determination module for determining that the motion image information is abnormal if they do not match, and that the motion image information is normal if they match.
9. The emotion detection device according to any one of claims 6-8, characterized in that the voice signal corresponds to multiple users, and the acquisition judgment module further comprises:
a sound analysis module for analyzing the types of sound features contained in the voice signal;
a sound localization module for locating each user corresponding to the voice signal according to the type of sound feature;
a separate acquisition module for acquiring the characteristic information of each user individually;
a separate judgment module for judging individually whether each acquired piece of characteristic information is abnormal.
10. The emotion detection device according to claim 8, characterized in that the device further comprises:
a sound source localization module for starting a sound source localization algorithm and computing a sound bearing parameter;
an orientation adjustment module for adjusting a preset video acquisition device to turn toward the corresponding orientation according to the sound bearing parameter;
a motion image acquisition module for acquiring, with the preset video acquisition device, the motion image information of the user corresponding to the voice signal.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810804712.7A CN109009170A (en) | 2018-07-20 | 2018-07-20 | Detect the method and apparatus of mood |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109009170A true CN109009170A (en) | 2018-12-18 |
Family
ID=64643881
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810804712.7A Pending CN109009170A (en) | 2018-07-20 | 2018-07-20 | Detect the method and apparatus of mood |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109009170A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113033336A (en) * | 2021-03-08 | 2021-06-25 | 北京金山云网络技术有限公司 | Home device control method, apparatus, device and computer readable storage medium |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1838237A (en) * | 2000-09-13 | 2006-09-27 | 株式会社A·G·I | Emotion recognizing method and system |
CN101068308A (en) * | 2007-05-10 | 2007-11-07 | 华为技术有限公司 | System and method for controlling image collector to make target positioning |
CN103595953A (en) * | 2013-11-14 | 2014-02-19 | 华为技术有限公司 | Method and device for controlling video shooting |
CN103700371A (en) * | 2013-12-13 | 2014-04-02 | 江苏大学 | Voiceprint identification-based incoming call identity identification system and identification method |
US20140171762A1 (en) * | 2009-02-25 | 2014-06-19 | Valencell, Inc. | Wearable light-guiding bands and patches for physiological monitoring |
CN104905803A (en) * | 2015-07-01 | 2015-09-16 | 京东方科技集团股份有限公司 | Wearable electronic device and emotion monitoring method thereof |
CN104939810A (en) * | 2014-03-25 | 2015-09-30 | 上海斐讯数据通信技术有限公司 | Method and device for controlling emotion |
CN105244023A (en) * | 2015-11-09 | 2016-01-13 | 上海语知义信息技术有限公司 | System and method for reminding teacher emotion in classroom teaching |
CN105852823A (en) * | 2016-04-20 | 2016-08-17 | 吕忠华 | Medical intelligent anger appeasing prompt device |
CN106985768A (en) * | 2016-09-14 | 2017-07-28 | 蔚来汽车有限公司 | Vehicle-mounted gestural control system and method |
CN206946938U (en) * | 2017-01-13 | 2018-01-30 | 深圳大森智能科技有限公司 | Intelligent robot Active Service System |
CN107809596A (en) * | 2017-11-15 | 2018-03-16 | 重庆科技学院 | Video conference tracking system and method based on microphone array |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20181218 |