Digital Voice Systems, Inc.

United States of America

1-25 of 25 for Digital Voice Systems, Inc.

Aggregations
IP Type
  • Patent 22
  • Trademark 3
Jurisdiction
  • United States 18
  • World 5
  • Europe 1
  • Canada 1
Date
  • New (last 4 weeks) 2
  • 2025 April 2
  • 2025 (YTD) 2
  • 2024 1
  • 2023 7
IPC Class
  • G10L 19/00 - Speech or audio signal analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis 9
  • G10L 19/02 - Speech or audio signal analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders 6
  • G10L 19/005 - Correction of errors induced by the transmission channel, if related to the coding algorithm 4
  • G10L 19/018 - Audio watermarking, i.e. embedding inaudible data in the audio signal 4
  • G10L 19/12 - Determination or coding of the excitation function; Determination or coding of the long-term prediction parameters the excitation function being a code excitation, e.g. in code excited linear prediction [CELP] vocoders 3
NICE Class
  • 09 - Scientific and electric apparatus and instruments 3
  • 42 - Scientific, technological and industrial services, research and design 1
Status
  • Pending 3
  • Registered / In Force 22

1.

BIT ERROR CORRECTION IN DIGITAL SPEECH

      
Application Number 18482350
Status Pending
Filing Date 2023-10-06
First Publication Date 2025-04-10
Owner Digital Voice Systems, Inc. (USA)
Inventor
  • Clark, Thomas
  • Hardwick, John C.

Abstract

This disclosure provides a method of decoding a digital speech signal, a speech decoder, a handset or mobile radio, and a base station or console. The method includes receiving a voice bit stream including at least one frame of bits that includes block codes, determining least confident bits in a first block code, generating candidates for the first block code based on the least confident bits, determining a first distance between each candidate and the first block code, and demodulating at least one other block code to obtain at least one demodulated vector. For each demodulated vector, a second distance between the demodulated vector and possible transmitted vectors is determined, and from the possible transmitted vectors, a vector corresponding to a minimum second distance is selected as a corrected demodulated vector. A minimum total distance is determined, and a candidate is selected as a corrected first block code.
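
The candidate-search structure described here is close in spirit to Chase-style soft-decision decoding. The sketch below illustrates that idea only: a toy (7,4) Hamming code stands in for the block codes, the number of least confident bits, the soft-value convention, and the distance measure are assumptions, and the joint scoring across multiple block codes in the abstract is omitted.

```python
from itertools import combinations, product
import numpy as np

# Toy (7,4) Hamming code used as a stand-in for the block codes in the abstract.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
CODEWORDS = np.array([(np.array(m) @ G) % 2 for m in product([0, 1], repeat=4)])

def correct_block(soft, num_weak=3):
    """Chase-style correction of one block code (sketch).

    soft: one received soft value per bit; the sign gives the hard decision
    (negative means 1) and the magnitude gives the confidence.
    """
    hard = (soft < 0).astype(int)                  # hard-decision bits
    weak = np.argsort(np.abs(soft))[:num_weak]     # least confident bit positions
    best_cw, best_dist = None, np.inf
    for r in range(num_weak + 1):                  # candidates: flip every subset
        for idx in combinations(weak, r):          # of the weak positions
            cand = hard.copy()
            cand[list(idx)] ^= 1
            cw = CODEWORDS[np.argmin(np.sum(CODEWORDS != cand, axis=1))]
            dist = np.sum(np.abs(soft) * (cw != hard))   # soft distance to received
            if dist < best_dist:
                best_cw, best_dist = cw, dist
    return best_cw, best_dist

# Example: an all-zero codeword received with two unreliable, flipped bits.
received = np.array([0.9, 0.8, -0.1, 0.7, -0.2, 0.6, 0.9])
print(correct_block(received))        # recovers the all-zero codeword
```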

IPC Classes

  • G10L 19/005 - Correction of errors induced by the transmission channel, if related to the coding algorithm

2.

BIT ERROR CORRECTION IN DIGITAL SPEECH

      
Application Number US2024049998
Publication Number 2025/076379
Status In Force
Filing Date 2024-10-04
Publication Date 2025-04-10
Owner DIGITAL VOICE SYSTEMS, INC. (USA)
Inventor
  • Clark, Thomas
  • Hardwick, John C.

Abstract

This disclosure provides a method of decoding a digital speech signal, a speech decoder, a handset or mobile radio, and a base station or console. The method includes receiving a voice bit stream including at least one frame of bits that includes block codes, determining least confident bits in a first block code, generating candidates for the first block code based on the least confident bits, determining a first distance between each candidate and the first block code, and demodulating at least one other block code to obtain at least one demodulated vector. For each demodulated vector, a second distance between the demodulated vector and possible transmitted vectors is determined, and from the possible transmitted vectors, a vector corresponding to a minimum second distance is selected as a corrected demodulated vector. A minimum total distance is determined, and a candidate is selected as a corrected first block code.

IPC Classes

  • G10L 19/00 - Speech or audio signal analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
  • G10L 21/00 - Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
  • G10L 13/047 - Architecture of speech synthesisers
  • G10L 15/26 - Speech to text systems
  • G10L 25/00 - Speech or voice analysis techniques not restricted to a single one of groups
  • G10L 13/00 - Speech synthesis; Text to speech systems

3.

Reducing Perceived Effects of Non-Voice Data in Digital Speech

      
Application Number 18667251
Status Pending
Filing Date 2024-05-17
First Publication Date 2024-09-12
Owner Digital Voice Systems, Inc. (USA)
Inventor Hardwick, John C.

Abstract

Non-voice data is embedded in a voice bit stream that includes frames of voice bits by selecting a frame of voice bits to carry the non-voice data, placing non-voice identifier bits in a first portion of the voice bits in the selected frame, and placing the non-voice data in a second portion of the voice bits in the selected frame. The non-voice identifier bits are employed to reduce a perceived effect of the non-voice data on audible speech produced from the voice bit stream.
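
A minimal sketch of the embedding layout described above, assuming an arbitrary 8-bit identifier pattern, a 16-bit data field, and numpy bit arrays for the frames; the identifier values, field positions, and frame sizes are illustrative, not taken from the patent.

```python
import numpy as np

# Hypothetical layout: the identifier pattern, its position, and the data-field
# width are illustrative choices, not values from the patent.
IDENTIFIER = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # marks "this frame carries data"
ID_LEN = len(IDENTIFIER)
DATA_LEN = 16

def embed_non_voice_data(frames, frame_index, data_bits):
    """Embed non-voice data into one selected frame of a voice bit stream."""
    frame = frames[frame_index].copy()
    frame[:ID_LEN] = IDENTIFIER                     # first portion: identifier bits
    payload = np.zeros(DATA_LEN, dtype=int)
    payload[:len(data_bits)] = data_bits            # second portion: non-voice data
    frame[ID_LEN:ID_LEN + DATA_LEN] = payload
    frames[frame_index] = frame
    return frames

def find_data_frames(frames):
    """Decoder side: frames whose first portion matches the identifier are
    treated as data frames, so the decoder can mute or interpolate over them
    instead of turning the data bits into audible speech."""
    return [i for i, f in enumerate(frames)
            if np.array_equal(f[:ID_LEN], IDENTIFIER)]

# Example with three 48-bit frames (frame size chosen arbitrarily).
frames = [np.random.randint(0, 2, 48) for _ in range(3)]
frames = embed_non_voice_data(frames, 1, [1, 1, 0, 0, 1, 0, 1, 0])
print(find_data_frames(frames))
```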

IPC Classes

  • G10L 19/02 - Speech or audio signal analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
  • G10L 15/24 - Speech recognition using non-acoustical features
  • G10L 19/005 - Correction of errors induced by the transmission channel, if related to the coding algorithm
  • G10L 25/18 - Speech or voice analysis techniques not restricted to a single one of groups characterised by the type of extracted parameters the extracted parameters being spectral information of each sub-band

4.

Tone Frame Detector for Digital Speech

      
Application Number 17716845
Status Pending
Filing Date 2022-04-08
First Publication Date 2023-10-12
Owner Digital Voice Systems, Inc. (USA)
Inventor
  • Clark, Thomas
  • Hardwick, John C.

Abstract

Tone data embedded in a voice bit stream that includes frames of non-tone bits and frames of tone bits is detected and extracted by selecting a frame of bits, analyzing the selected frame of bits to determine whether it is a frame of tone bits, and, when it is a frame of tone bits, extracting tone data from it. Analyzing the selected frame includes comparing bits of the selected frame to sets of tone data to produce error criteria representative of differences between the selected frame and each of multiple sets of tone data. Based on the error criteria, a set of tone data that most closely corresponds to the bits of the selected frame is selected. When the error criteria corresponding to the selected set of tone data satisfies a set of thresholds, the selected frame is designated as a frame of tone bits.
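
The detection step amounts to a nearest-pattern search with a threshold. Below is a minimal sketch of that logic, assuming a made-up codebook of tone bit patterns and an arbitrary error threshold; the real tone patterns, error criteria, and thresholds differ.

```python
import numpy as np

# Illustrative codebook: each known tone maps to a fixed bit pattern the
# encoder would emit for a tone frame. Patterns and threshold are made up.
TONE_CODEBOOK = {
    "dtmf_1": np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]),
    "dtmf_2": np.array([0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1]),
    "1kHz":   np.array([1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0]),
}
MAX_ERRORS = 2   # frames further than this from every tone pattern stay "voice"

def classify_frame(frame_bits):
    """Return (tone_name, errors) if the frame looks like a tone frame,
    otherwise (None, errors) so the frame is decoded as ordinary voice."""
    errors = {name: int(np.sum(frame_bits != pattern))
              for name, pattern in TONE_CODEBOOK.items()}
    best = min(errors, key=errors.get)          # tone data closest to the frame
    if errors[best] <= MAX_ERRORS:              # error criterion meets threshold
        return best, errors[best]
    return None, errors[best]

# Example: a "1 kHz" tone frame with a single corrupted bit is still detected.
noisy = np.array([1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0])
print(classify_frame(noisy))   # -> ('1kHz', 1)
```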

IPC Classes

  • G10L 19/16 - Vocoder architecture
  • G06F 16/683 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
  • G10L 19/005 - Correction of errors induced by the transmission channel, if related to the coding algorithm
  • H04W 88/18 - Service support devices; Network management devices

5.

SPEECH MODEL PARAMETER ESTIMATION AND QUANTIZATION

      
Application Number US2023017748
Publication Number 2023/196509
Status In Force
Filing Date 2023-04-06
Publication Date 2023-10-12
Owner DIGITAL VOICE SYSTEMS, INC. (USA)
Inventor
  • Griffin, Daniel
  • Hardwick, John C.

Abstract

Quantizing speech model parameters includes, for each of multiple vectors of quantized excitation strength parameters, determining first and second errors between first and second elements of a vector of excitation strength parameters and, respectively, first and second elements of the vector of quantized excitation strength parameters, and determining a first energy and a second energy associated with, respectively, the first and second errors. First and second weights for, respectively, the first error and the second error, are determined and are used to produce first and second weighted errors, which are combined to produce a total error. The total errors of each of the multiple vectors of quantized excitation strength parameters are compared and the vector of quantized excitation strength parameters that produces the smallest total error is selected to represent the vector of excitation strength parameters.
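
A compact sketch of the weighted codebook search, assuming the weights are simply proportional to the energy associated with each element; the actual weighting and error combination in the patent may differ.

```python
import numpy as np

def select_codebook_vector(target, codebook, energy):
    """Pick the quantized excitation-strength vector with the smallest
    weighted total error. The weighting rule (weights proportional to the
    energy tied to each element) is an illustrative stand-in."""
    best_idx, best_total = None, np.inf
    for i, cand in enumerate(codebook):
        err = target - cand                    # per-element quantization error
        weights = energy / np.sum(energy)      # assumed: more energy -> more weight
        total = np.sum(weights * err ** 2)     # combined (total) weighted error
        if total < best_total:
            best_idx, best_total = i, total
    return best_idx, best_total

# Example: two-element strength vectors, a tiny codebook, and band energies.
target   = np.array([0.8, 0.3])
codebook = np.array([[1.0, 0.0], [0.75, 0.25], [0.5, 0.5]])
energy   = np.array([4.0, 1.0])               # first band carries most energy
print(select_codebook_vector(target, codebook, energy))   # -> (1, ...)
```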

IPC Classes

  • G10L 19/08 - Determination or coding of the excitation function; Determination or coding of the long-term prediction parameters
  • G10L 25/21 - Speech or voice analysis techniques not restricted to a single one of groups characterised by the type of extracted parameters the extracted parameters being power information

6.

TONE FRAME DETECTOR FOR DIGITAL SPEECH

      
Application Number US2023017755
Publication Number 2023/196515
Status In Force
Filing Date 2023-04-06
Publication Date 2023-10-12
Owner DIGITAL VOICE SYSTEMS, INC. (USA)
Inventor
  • Clark, Thomas
  • Hardwick, John C.

Abstract

Tone data embedded in a voice bit stream that includes frames of non-tone bits and frames of tone bits is detected and extracted by selecting a frame of bits, analyzing the selected frame of bits to determine whether it is a frame of tone bits, and, when it is a frame of tone bits, extracting tone data from it. Analyzing the selected frame includes comparing bits of the selected frame to sets of tone data to produce error criteria representative of differences between the selected frame and each of multiple sets of tone data. Based on the error criteria, a set of tone data that most closely corresponds to the bits of the selected frame is selected. When the error criteria corresponding to the selected set of tone data satisfies a set of thresholds, the selected frame is designated as a frame of tone bits.

IPC Classes

  • G10L 25/90 - Pitch determination of speech signals
  • G10L 19/16 - Vocoder architecture
  • G10L 25/51 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination
  • G10L 19/22 - Mode decision, i.e. based on audio signal content versus external parameters
  • G10L 25/57 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination for processing of video signals

7.

Speech model parameter estimation and quantization

      
Application Number 17716805
Grant Number 11715477
Status In Force
Filing Date 2022-04-08
First Publication Date 2023-08-01
Grant Date 2023-08-01
Owner Digital Voice Systems, Inc. (USA)
Inventor
  • Griffin, Daniel W.
  • Hardwick, John C.

Abstract

Quantizing speech model parameters includes, for each of multiple vectors of quantized excitation strength parameters, determining first and second errors between first and second elements of a vector of excitation strength parameters and, respectively, first and second elements of the vector of quantized excitation strength parameters, and determining a first energy and a second energy associated with, respectively, the first and second errors. First and second weights for, respectively, the first error and the second error, are determined and are used to produce first and second weighted errors, which are combined to produce a total error. The total errors of each of the multiple vectors of quantized excitation strength parameters are compared and the vector of quantized excitation strength parameters that produces the smallest total error is selected to represent the vector of excitation strength parameters.

IPC Classes

  • G10L 19/087 - Determination or coding of the excitation function; Determination or coding of the long-term prediction parameters using mixed excitation models, e.g. MELP, MBE, split band LPC or HVXC
  • G10L 19/038 - Vector quantisation, e.g. TwinVQ audio
  • G10L 25/21 - Speech or voice analysis techniques not restricted to a single one of groups characterised by the type of extracted parameters the extracted parameters being power information
  • G10L 19/00 - Speech or audio signal analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
  • G10L 19/18 - Vocoders using multiple modes

8.

Reducing perceived effects of non-voice data in digital speech

      
Application Number 17387412
Grant Number 11990144
Status In Force
Filing Date 2021-07-28
First Publication Date 2023-02-09
Grant Date 2024-05-21
Owner Digital Voice Systems, Inc. (USA)
Inventor Hardwick, John C.

Abstract

Non-voice data is embedded in a voice bit stream that includes frames of voice bits by selecting a frame of voice bits to carry the non-voice data, placing non-voice identifier bits in a first portion of the voice bits in the selected frame, and placing the non-voice data in a second portion of the voice bits in the selected frame. The non-voice identifier bits are employed to reduce a perceived effect of the non-voice data on audible speech produced from the voice bit stream.

IPC Classes

  • G10L 19/00 - Speech or audio signal analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
  • G10L 15/24 - Speech recognition using non-acoustical features
  • G10L 19/005 - Correction of errors induced by the transmission channel, if related to the coding algorithm
  • G10L 19/02 - Speech or audio signal analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
  • G10L 25/18 - Speech or voice analysis techniques not restricted to a single one of groups characterised by the type of extracted parameters the extracted parameters being spectral information of each sub-band

9.

REDUCING PERCEIVED EFFECTS OF NON-VOICE DATA IN DIGITAL SPEECH

      
Application Number US2022074176
Publication Number 2023/010028
Status In Force
Filing Date 2022-07-27
Publication Date 2023-02-02
Owner DIGITAL VOICE SYSTEMS, INC. (USA)
Inventor Hardwick, John C.

Abstract

Non-voice data is embedded in a voice bit stream that includes frames of voice bits by selecting a frame of voice bits to carry the non-voice data, placing non-voice identifier bits in a first portion of the voice bits in the selected frame, and placing the non-voice data in a second portion of the voice bits in the selected frame. The non-voice identifier bits are employed to reduce a perceived effect of the non-voice data on audible speech produced from the voice bit stream.

IPC Classes

  • G10L 19/018 - Audio watermarking, i.e. embedding inaudible data in the audio signal
  • H04M 11/06 - Simultaneous speech and data transmission, e.g. telegraphic transmission over the same conductors
  • H04H 20/28 - Arrangements for simultaneous broadcast of plural pieces of information

10.

Detecting and compensating for the presence of a speaker mask in a speech signal

      
Application Number 17366782
Grant Number 12254895
Status In Force
Filing Date 2021-07-02
First Publication Date 2023-01-05
Grant Date 2025-03-18
Owner Digital Voice Systems, Inc. (USA)
Inventor
  • Clark, Thomas
  • Hardwick, John C.

Abstract

Compensating a speech signal for the presence of a speaker mask includes receiving a speech signal, dividing the speech signal into subframes, generating speech parameters for a subframe, and determining whether the subframe is suitable for use in detecting a mask. If the subframe is suitable for use in detecting a mask, the speech parameters for the subframe are used in determining whether a mask is present. If a mask is present, the speech parameters for the subframe are modified to produce modified speech parameters that compensate for the presence of the mask.
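
A rough sketch of the per-subframe detect-and-compensate loop, using a crude spectral-tilt test as a stand-in for mask detection and a high-band gain as a stand-in for compensation; the patented method operates on speech model parameters and uses different suitability tests, detectors, and thresholds.

```python
import numpy as np

def compensate_for_mask(speech, fs=8000, sub_ms=20, tilt_threshold_db=15.0):
    """Per-subframe mask detection and compensation (illustrative logic only).

    A subframe is "suitable" if it has enough energy to be speech; a mask is
    declared when the 2-4 kHz band is much weaker than the 0.1-1 kHz band, a
    crude proxy for the muffling a mask causes; compensation re-boosts the
    high band. All thresholds are arbitrary.
    """
    n = int(fs * sub_ms / 1000)
    out = speech.astype(float)
    for start in range(0, len(speech) - n + 1, n):
        frame = out[start:start + n]
        if np.mean(frame ** 2) < 1e-6:                     # not suitable: too quiet
            continue
        spec = np.fft.rfft(frame)
        freqs = np.fft.rfftfreq(n, 1 / fs)
        low = np.sum(np.abs(spec[(freqs >= 100) & (freqs < 1000)]) ** 2)
        high = np.sum(np.abs(spec[(freqs >= 2000) & (freqs < 4000)]) ** 2) + 1e-12
        tilt_db = 10 * np.log10(low / high + 1e-12)
        if tilt_db > tilt_threshold_db:                    # mask detected
            gain_db = min(tilt_db - tilt_threshold_db, 12)
            spec[freqs >= 2000] *= 10 ** (gain_db / 20)    # compensate high band
            out[start:start + n] = np.fft.irfft(spec, n=n)
    return out

# Example: muffled (lowpassed) noise stands in for masked speech.
speech = np.convolve(np.random.randn(8000), np.ones(8) / 8, mode="same")
restored = compensate_for_mask(speech)
```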

IPC Classes

  • G10L 21/00 - Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
  • G10L 21/0272 - Voice signal separating
  • G10L 25/18 - Speech or voice analysis techniques not restricted to a single one of groups characterised by the type of extracted parameters the extracted parameters being spectral information of each sub-band
  • G10L 25/78 - Detection of presence or absence of voice signals

11.

SPEECH CODING USING TIME-VARYING INTERPOLATION

      
Application Number US2021012608
Publication Number 2021/142198
Status In Force
Filing Date 2021-01-08
Publication Date 2021-07-15
Owner DIGITAL VOICE SYSTEMS, INC. (USA)
Inventor Clark, Thomas

Abstract

Encoding a sequence of digital speech samples into a bit stream includes dividing the digital speech samples into frames including N subframes (where N is greater than 1); computing subframe model parameters including spectral parameters; and generating a representation of the frame that includes information representing the spectral parameters of P subframes (where P < N) and information identifying the P subframes. The representation excludes information representing the spectral parameters of the N-P subframes not included in the P subframes. Generating the representation includes selecting the P subframes by, for multiple combinations of P subframes, determining an error induced by representing the frame using the spectral parameters for the P subframes and using interpolated spectral parameter values for the N-P subframes. A combination of P subframes is selected based on the determined error for the combination of P subframes.
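
A small sketch of the subframe-selection search, assuming linear interpolation between the kept subframes and a plain squared-error measure; the interpolation rule and error weighting in the patent may differ.

```python
from itertools import combinations
import numpy as np

def select_subframes(spectral, P):
    """Choose which P of the N subframes to transmit explicitly.

    spectral: (N, K) array, one K-dimensional spectral parameter vector per
    subframe. Omitted subframes are reconstructed by linear interpolation
    between the nearest kept subframes (held constant at the ends), and the
    combination with the lowest total squared error wins.
    """
    N = len(spectral)
    best_combo, best_err = None, np.inf
    for combo in combinations(range(N), P):
        kept = list(combo)
        approx = np.empty_like(spectral)
        for k in range(spectral.shape[1]):
            # interpolate every subframe's k-th parameter from the kept ones
            approx[:, k] = np.interp(np.arange(N), kept, spectral[kept, k])
        err = np.sum((spectral - approx) ** 2)    # error induced by interpolation
        if err < best_err:
            best_combo, best_err = combo, err
    return best_combo, best_err

# Example: N = 4 subframes, 3-dimensional spectral vectors, keep P = 2.
np.random.seed(0)
frames = np.cumsum(np.random.randn(4, 3) * 0.1, axis=0)   # slowly varying spectra
print(select_subframes(frames, P=2))
```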

IPC Classes

  • G10L 19/02 - Speech or audio signal analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
  • G10L 19/00 - Speech or audio signal analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
  • G10L 19/06 - Determination or coding of the spectral characteristics, e.g. of the short-term prediction coefficients
  • G10L 25/93 - Discriminating between voiced and unvoiced parts of speech signals

12.

Speech coding using time-varying interpolation

      
Application Number 16737543
Grant Number 11270714
Status In Force
Filing Date 2020-01-08
First Publication Date 2021-07-08
Grant Date 2022-03-08
Owner Digital Voice Systems, Inc. (USA)
Inventor Clark, Thomas

Abstract

Encoding a sequence of digital speech samples into a bit stream includes dividing the digital speech samples into frames including N subframes (where N is an integer greater than 1); computing model parameters for the subframes, the model parameters including spectral parameters; and generating a representation of the frame. The representation includes information representing the spectral parameters of P subframes (where P is an integer and P < N) and information identifying the P subframes, and excludes information representing the spectral parameters of the N-P subframes not included in the P subframes.

IPC Classes

  • G10L 19/02 - Speech or audio signal analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
  • G10L 19/12 - Determination or coding of the excitation function; Determination or coding of the long-term prediction parameters the excitation function being a code excitation, e.g. in code excited linear prediction [CELP] vocoders
  • G10L 19/24 - Variable rate codecs, e.g. for generating different qualities using a scalable representation such as hierarchical encoding or layered encoding

13.

Audio watermarking via correlation modification using an amplitude and a magnitude modification based on watermark data and to reduce distortion

      
Application Number 16151671
Grant Number 11244692
Status In Force
Filing Date 2018-10-04
First Publication Date 2020-04-09
Grant Date 2022-02-08
Owner Digital Voice Systems, Inc. (USA)
Inventor Griffin, Daniel W.

Abstract

To convey information using an audio channel, an audio signal is modulated to produce a modulated signal by embedding additional information into the audio signal. Modulating the audio signal includes processing the audio signal to produce a set of filter responses; creating a delayed version of the filter responses; modifying the delayed version of the filter responses based on the additional information to produce an echo audio signal; and combining the audio signal and the echo audio signal to produce the modulated signal. Modulating the audio signal may involve employing a modulation strength, and a psychoacoustic model may be used to modify the modulation strength based on a comparison of a distortion of the modified audio signal relative to the audio signal and a target distortion.
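
For illustration only, the sketch below uses classic echo hiding, a simplified relative of the scheme described above: a delayed, attenuated copy of the signal encodes each bit, and the echo strength is backed off until the added distortion falls below a target level, a crude proxy for the psychoacoustic check. The delays, segment size, and target are arbitrary.

```python
import numpy as np

def embed_echo(audio, bits, fs=8000, seg_ms=50, delays=(0.0008, 0.0013),
               strength=0.05, target_db=-30.0):
    """Embed bits by adding a faint echo whose delay encodes each bit value."""
    out = audio.astype(float)
    seg = int(fs * seg_ms / 1000)
    for i, bit in enumerate(bits):
        start, stop = i * seg, (i + 1) * seg
        if stop > len(audio):
            break
        d = int(delays[bit] * fs)                    # delay selects the bit value
        s = strength
        while True:
            echo = np.zeros(seg)
            echo[d:] = s * out[start:stop - d]       # delayed, attenuated copy
            sig_e = np.sum(out[start:stop] ** 2) + 1e-12
            dist_db = 10 * np.log10(np.sum(echo ** 2) / sig_e + 1e-12)
            if dist_db <= target_db or s < 1e-4:     # distortion below target?
                break
            s *= 0.5                                 # back off the modulation strength
        out[start:stop] += echo
    return out

# Example: embed four bits into a second of noise-like "speech".
audio = np.random.randn(8000) * 0.1
marked = embed_echo(audio, [1, 0, 1, 1])
```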

IPC Classes

  • G10L 19/018 - Audio watermarking, i.e. embedding inaudible data in the audio signal
  • G11B 20/00 - Signal processing not specific to the method of recording or reproducing; Circuits therefor
  • G10L 19/00 - Speech or audio signal analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
  • G10L 19/26 - Pre-filtering or post-filtering
  • G10L 25/21 - Speech or voice analysis techniques not restricted to a single one of groups characterised by the type of extracted parameters the extracted parameters being power information

14.

Audio watermarking via phase modification

      
Application Number 16000381
Grant Number 10210875
Status In Force
Filing Date 2018-06-05
First Publication Date 2018-10-04
Grant Date 2019-02-19
Owner Digital Voice Systems, Inc. (USA)
Inventor
  • Hardwick, John C.
  • Griffin, Daniel W.

Abstract

An audio watermarking system conveys information using an audio channel by modulating an audio signal to produce a modulated signal by embedding additional information into the audio signal. Modulating the audio signal includes segmenting the audio signal into overlapping time segments using a non-rectangular analysis window function to produce a windowed audio signal, processing the windowed audio signal for a time segment to produce frequency coefficients representing the windowed time segment and having phase values and magnitude values, selecting one or more of the frequency coefficients, modifying phase values of the selected frequency coefficients using the additional information to map the phase values onto a known phase constellation, and processing the frequency coefficients including the modified phase values to produce the modulated signal.
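
A single-segment sketch of phase-constellation embedding: selected FFT coefficients have their phases snapped to 0 or pi according to the data bits. The window, the coefficient range, and the two-point constellation are illustrative choices, and a real embedder would process overlapping segments with overlap-add synthesis as the abstract describes.

```python
import numpy as np

def embed_bits_in_segment(segment, bits, bin_start=20):
    """Snap the phases of selected FFT coefficients onto a two-point
    constellation (0 for a 0 bit, pi for a 1 bit) and resynthesize."""
    window = np.hanning(len(segment))              # non-rectangular analysis window
    spec = np.fft.rfft(segment * window)           # frequency coefficients
    for i, bit in enumerate(bits):
        k = bin_start + i                          # selected coefficient
        spec[k] = np.abs(spec[k]) * np.exp(1j * (np.pi if bit else 0.0))
    return np.fft.irfft(spec, n=len(segment))      # modulated segment

def recover_bits(segment, num_bits, bin_start=20):
    """Read bits back by checking which constellation point each selected
    coefficient's phase is closer to (assumes no channel distortion)."""
    spec = np.fft.rfft(segment)
    phases = np.angle(spec[bin_start:bin_start + num_bits])
    return [1 if abs(p) > np.pi / 2 else 0 for p in phases]

# Round trip on a 512-sample segment of noise-like audio.
seg = np.random.randn(512) * 0.1
bits = [1, 0, 0, 1, 1, 0]
print(recover_bits(embed_bits_in_segment(seg, bits), len(bits)))   # -> [1, 0, 0, 1, 1, 0]
```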

IPC Classes

  • G10L 19/018 - Audio watermarking, i.e. embedding inaudible data in the audio signal
  • G10L 19/02 - Speech or audio signal analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders

15.

AMBE

      
Application Number 014969455
Status Registered
Filing Date 2015-12-31
Registration Date 2021-03-17
Owner Digital Voice Systems, Inc. (USA)
NICE Classes
  • 09 - Scientific and electric apparatus and instruments
  • 42 - Scientific, technological and industrial services, research and design

Goods & Services

computer hardware for audio compression, audio analysis and audio processing; computer hardware for voice analysis and voice synthesis processing; voice compression hardware; Vocoders; integrated circuits for audio compression, audio analysis and audio processing; vocoder chips; computer software for audio compression, audio analysis and audio processing; embedded software for audio compression, audio analysis and audio processing; voice compression software; computer software for voice analysis and voice synthesis processing; software for use in vocoder chips. design, development, programming, customization and rental of vocoders; design, development, programming, customization and rental of integrated circuits; providing temporary use of non-downloadable software for audio compression, audio analysis and audio processing; providing temporary use of non-downloadable software for voice compression, voice analysis and voice processing; digital compression, analysis and processing of digital data for others; digital compression, analysis and processing of digital voice data for others; information and advisory services relating to all the abovementioned services.

16.

Audio watermarking via phase modification

      
Application Number 14702536
Grant Number 09990928
Status In Force
Filing Date 2015-05-01
First Publication Date 2015-11-26
Grant Date 2018-06-05
Owner Digital Voice Systems, Inc. (USA)
Inventor
  • Hardwick, John C.
  • Griffin, Daniel W.

Abstract

An audio watermarking system conveys information using an audio channel by modulating an audio signal to produce a modulated signal by embedding additional information into the audio signal. Modulating the audio signal includes segmenting the audio signal into overlapping time segments using a non-rectangular analysis window function to produce a windowed audio signal, processing the windowed audio signal for a time segment to produce frequency coefficients representing the windowed time segment and having phase values and magnitude values, selecting one or more of the frequency coefficients, modifying phase values of the selected frequency coefficients using the additional information to map the phase values onto a known phase constellation, and processing the frequency coefficients including the modified phase values to produce the modulated signal.

IPC Classes

  • G10L 19/018 - Audio watermarking, i.e. embedding inaudible data in the audio signal
  • G10L 19/02 - Speech or audio signal analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders

17.

Speech coder that determines pulsed parameters

      
Application Number 13269204
Grant Number 08433562
Status In Force
Filing Date 2011-10-07
First Publication Date 2012-04-12
Grant Date 2013-04-30
Owner Digital Voice Systems, Inc. (USA)
Inventor Griffin, Daniel W.

Abstract

Methods for estimating speech model parameters are disclosed. For pulsed parameter estimation, a speech signal is divided into multiple frequency bands or channels using bandpass filters. Channel processing reduces sensitivity to pole magnitudes and frequencies and reduces impulse response time duration to improve pulse location and strength estimation performance. These methods are useful for high quality speech coding and reproduction at various bit rates for applications such as satellite and cellular voice communication.

IPC Classes

  • G10L 19/00 - Speech or audio signal analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis

18.

Digital audio server

      
Application Number 10067990
Grant Number 08131389
Status In Force
Filing Date 2002-02-08
First Publication Date 2012-03-06
Grant Date 2012-03-06
Owner Digital Voice Systems, Inc. (USA)
Inventor
  • Hardwick, John C.
  • Kalvaitis, Timothy E.
  • Mckinney, William S.
  • Clooney, Brian D.

Abstract

A digital audio server may be used to automatically download music from a collection of audio media, such as CDs or DVDs. The server also may automatically identify the media using track offset information.

IPC Classes

  • G06F 17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions

19.

Breathing apparatus speech enhancement using reference sensor

      
Application Number 12021789
Grant Number 08265937
Status In Force
Filing Date 2008-01-29
First Publication Date 2009-07-30
Grant Date 2012-09-11
Owner Digital Voice Systems, Inc. (USA)
Inventor
  • Griffin, Daniel W.
  • Hardwick, John C.

Abstract

Speech enhancement in a breathing apparatus is provided using a primary sensor mounted near a breathing mask user's mouth, at least one reference sensor mounted near a noise source, and a processor that combines the signals from these sensors to produce an output signal with an enhanced speech component. The reference sensor signal may be filtered and the result may be subtracted from the primary sensor signal to produce the output signal with an enhanced speech component. A method for detecting the exclusive presence of a low air alarm noise may be used to determine when to update the filter. A triple filter adaptive noise cancellation method may provide improved performance through reduction of filter maladaptation. The speech enhancement techniques may be employed as part of a communication system or a speech recognition system.
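
A minimal normalized-LMS sketch of reference-sensor noise cancellation: an adaptive filter predicts the noise component of the primary (mask microphone) signal from the reference sensor and subtracts it. The single filter and fixed step size are stand-ins for the triple-filter scheme and alarm-gated adaptation described in the abstract.

```python
import numpy as np

def nlms_cancel(primary, reference, taps=32, mu=0.5, eps=1e-6):
    """Normalized-LMS noise cancellation (sketch).

    primary: mask microphone signal (speech plus breathing/regulator noise).
    reference: sensor near the noise source (mostly noise).
    Returns the residual after subtracting the predicted noise, i.e. the
    enhanced speech.
    """
    w = np.zeros(taps)                       # adaptive filter coefficients
    out = np.zeros(len(primary))
    for n in range(taps, len(primary)):
        x = reference[n - taps:n][::-1]      # most recent reference samples
        noise_est = w @ x                    # predicted noise in the primary signal
        e = primary[n] - noise_est           # error = enhanced speech sample
        w += mu * e * x / (x @ x + eps)      # NLMS coefficient update
        out[n] = e
    return out

# Example: a tone buried in noise that also reaches the reference sensor.
t = np.arange(16000) / 8000
speech = 0.3 * np.sin(2 * np.pi * 300 * t)
noise = np.random.randn(len(t))
primary = speech + np.convolve(noise, [0.6, 0.3, 0.1], mode="same")  # noise path
enhanced = nlms_cancel(primary, noise)
```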

IPC Classes

  • G10L 21/02 - Speech enhancement, e.g. noise reduction or echo cancellation

20.

Estimation of pulsed speech model parameters

      
Application Number 11615414
Grant Number 08036886
Status In Force
Filing Date 2006-12-22
First Publication Date 2008-06-26
Grant Date 2011-10-11
Owner Digital Voice Systems, Inc. (USA)
Inventor Griffin, Daniel W.

Abstract

Methods for estimating speech model parameters are disclosed. For pulsed parameter estimation, a speech signal is divided into multiple frequency bands or channels using bandpass filters. Channel processing reduces sensitivity to pole magnitudes and frequencies and reduces impulse response time duration to improve pulse location and strength estimation performance. These methods are useful for high quality speech coding and reproduction at various bit rates for applications such as satellite and cellular voice communication.

IPC Classes

  • G10L 19/00 - Speech or audio signal analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis

21.

Half-rate vocoder

      
Application Number 10402938
Grant Number 08359197
Status In Force
Filing Date 2003-04-01
First Publication Date 2005-12-15
Grant Date 2013-01-22
Owner Digital Voice Systems, Inc. (USA)
Inventor Hardwick, John C.

Abstract

Encoding a sequence of digital speech samples into a bit stream includes dividing the digital speech samples into one or more frames, computing model parameters for a frame, and quantizing the model parameters to produce pitch bits conveying pitch information, voicing bits conveying voicing information, and gain bits conveying signal level information. One or more of the pitch bits are combined with one or more of the voicing bits and one or more of the gain bits to create a first parameter codeword that is encoded with an error control code to produce a first FEC codeword that is included in a bit stream for the frame. The process may be reversed to decode the bit stream.
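
A toy sketch of forming the first FEC codeword from the most significant pitch, voicing, and gain bits. The field widths, the bit allocation, and the (7,4) Hamming code used as the error control code are illustrative stand-ins; the abstract specifies the structure, not these values.

```python
import numpy as np

def to_bits(value, width):
    """Most-significant-bit-first binary representation of an integer."""
    return [(value >> (width - 1 - i)) & 1 for i in range(width)]

# Stand-in FEC: a (7,4) Hamming code. A real half-rate vocoder would use a
# stronger code, but the structure -- protect the perceptually critical bits
# first -- is the same.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def fec_encode(bits4):
    return list((np.array(bits4) @ G) % 2)

def build_frame(pitch, voicing, gain):
    """Form the first FEC codeword from the most significant pitch, voicing
    and gain bits, then append the remaining bits unprotected."""
    pitch_bits = to_bits(pitch, 7)        # pitch index
    voicing_bits = to_bits(voicing, 4)    # voicing decisions
    gain_bits = to_bits(gain, 5)          # signal level
    critical = [pitch_bits[0], pitch_bits[1], voicing_bits[0], gain_bits[0]]
    frame = fec_encode(critical)                                 # first FEC codeword
    frame += pitch_bits[2:] + voicing_bits[1:] + gain_bits[1:]   # remaining bits
    return frame

print(build_frame(pitch=73, voicing=9, gain=21))
```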

IPC Classes

  • G10L 19/12 - Determination or coding of the excitation function; Determination or coding of the long-term prediction parameters the excitation function being a code excitation, e.g. in code excited linear prediction [CELP] vocoders
  • G10L 19/00 - Speech or audio signal analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis

22.

Voice transcoder

      
Application Number 10353974
Grant Number 07634399
Status In Force
Filing Date 2003-01-30
First Publication Date 2004-08-05
Grant Date 2009-12-15
Owner Digital Voice Systems, Inc. (USA)
Inventor Hardwick, John C.

Abstract

First encoded voice bits are transcoded into second encoded voice bits by dividing the first encoded voice bits into one or more received frames, with each received frame containing multiple ones of the first encoded voice bits. First parameter bits for at least one of the received frames are generated by applying error control decoding to one or more of the encoded voice bits contained in the received frame, speech parameters are computed from the first parameter bits, and the speech parameters are quantized to produce second parameter bits. Finally, a transmission frame is formed by applying error control encoding to one or more of the second parameter bits, and the transmission frame is included in the second encoded voice bits.
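
The transcoding pipeline itself is easy to show as a sketch; the four format-specific steps are left as caller-supplied functions whose names and signatures are placeholders, not an API from the patent.

```python
def transcode(received_frames, fec_decode, bits_to_params,
              params_to_bits, fec_encode):
    """Transcode frames from one vocoder bit format to another without
    resynthesizing audio: FEC-decode, recover speech parameters, requantize,
    then FEC-encode for the target format."""
    out_frames = []
    for frame in received_frames:
        param_bits = fec_decode(frame)               # first parameter bits
        params = bits_to_params(param_bits)          # speech model parameters
        second_bits = params_to_bits(params)         # second parameter bits
        out_frames.append(fec_encode(second_bits))   # transmission frame
    return out_frames

# Toy usage with identity stand-ins for the format-specific steps.
frames = [[1, 0, 1, 1], [0, 1, 1, 0]]
print(transcode(frames,
                fec_decode=lambda b: b,
                bits_to_params=lambda b: {"bits": b},
                params_to_bits=lambda p: p["bits"],
                fec_encode=lambda b: b + [sum(b) % 2]))   # append a parity bit
```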

IPC Classes

  • G10L 19/00 - Speech or audio signal analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis

23.

Interoperable vocoder

      
Application Number 10292460
Grant Number 07970606
Status In Force
Filing Date 2002-11-13
First Publication Date 2004-05-13
Grant Date 2011-06-28
Owner Digital Voice Systems, Inc. (USA)
Inventor Hardwick, John C.

Abstract

Encoding a sequence of digital speech samples into a bit stream includes dividing the digital speech samples into one or more frames and computing a set of model parameters for the frames. The set of model parameters includes at least a first parameter conveying pitch information. The voicing state of a frame is determined and the first parameter conveying pitch information is modified to designate the determined voicing state of the frame, if the determined voicing state of the frame is equal to one of a set of reserved voicing states. The model parameters are quantized to generate quantizer bits which are used to produce the bit stream.
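
One way to picture the reserved-voicing-state idea: a few values at the top of the pitch index range signal special frame types instead of a pitch, so no extra bits are needed. The specific reserved codes and state names below are assumptions for illustration.

```python
# Illustrative encoding: reserved pitch codes designate special voicing states.
PITCH_BITS = 7
RESERVED = {"silence": 127, "unvoiced": 126, "tone": 125}
MAX_VOICED_PITCH_CODE = 124

def encode_pitch_field(voicing_state, pitch_code):
    """Return the value transmitted in the pitch field for this frame."""
    if voicing_state in RESERVED:
        return RESERVED[voicing_state]             # pitch field designates the state
    return min(pitch_code, MAX_VOICED_PITCH_CODE)  # ordinary voiced frame

def decode_pitch_field(value):
    """Recover (voicing_state, pitch_code or None) from the pitch field."""
    for state, code in RESERVED.items():
        if value == code:
            return state, None
    return "voiced", value

print(decode_pitch_field(encode_pitch_field("tone", pitch_code=0)))    # ('tone', None)
print(decode_pitch_field(encode_pitch_field("voiced", pitch_code=42))) # ('voiced', 42)
```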

IPC Classes

  • G10L 19/12 - Determination or coding of the excitation function; Determination or coding of the long-term prediction parameters the excitation function being a code excitation, e.g. in code excited linear prediction [CELP] vocoders
  • G10L 11/06 - Discriminating between voiced and unvoiced parts of speech signals (G10L 11/04 takes precedence)

24.

AMBE

      
Application Number 076463500
Status Registered
Filing Date 1994-09-13
Registration Date 1997-03-18
Owner DIGITAL VOICE SYSTEMS, INC., A CORPORATION OF MASSACHUSETTS (USA)
NICE Classes 09 - Scientific and electric apparatus and instruments

Goods & Services

(1) Voice compression hardware and software. (2) Voice Compression Software.

25.

AMBE

      
Serial Number 74502507
Status Registered
Filing Date 1994-03-21
Registration Date 1995-10-31
Owner Digital Voice Systems, Inc.
NICE Classes 09 - Scientific and electric apparatus and instruments

Goods & Services

voice compression hardware and software