C12P 19/40 - Nucleosides having a condensed ring system containing a six-membered ring having two nitrogen atoms in the same ring, e.g. purine nucleosides
A musical performance assistance method implemented by a computer according to an embodiment comprises: a performance data acquisition step for acquiring performance data pertaining to a performance by a user; a part acquisition step for acquiring first part data corresponding to a predetermined first part from musical piece data including a plurality of parts; an identification step for identifying a performance position of the performance by the user on the basis of the correspondence relationship between the performance data and the musical piece data; and a playback step for playing the first part data according to the identified performance position.
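As an illustrative sketch only (not the claimed implementation), the two core steps of the abstract above — identifying the performance position from the correspondence between the user's performance and the musical piece data, then playing the predetermined first part up to that position — might be reduced to the following. The note-matching rule and all names are assumptions.

```python
# Hypothetical sketch: score following by simple note matching, then
# returning the first-part (accompaniment) events at the identified position.

def identify_position(score_pitches, played_pitches):
    """Advance through the score each time the user's next played pitch
    matches the expected score pitch; return the position reached."""
    pos = 0
    for pitch in played_pitches:
        if pos < len(score_pitches) and pitch == score_pitches[pos]:
            pos += 1
    return pos

def first_part_events_to_play(first_part, position):
    """Return the first-part data events up to the identified position."""
    return first_part[:position]

# Example: the user has correctly played the first three melody notes.
melody = [60, 62, 64, 65, 67]                    # user's part (MIDI numbers)
accompaniment = ["C", "G/B", "Am", "F", "C/G"]   # predetermined first part
pos = identify_position(melody, [60, 62, 64])
print(pos)                                        # 3
print(first_part_events_to_play(accompaniment, pos))  # ['C', 'G/B', 'Am']
```

A production score follower would tolerate wrong and skipped notes (e.g. via dynamic time warping); the pointer-style matcher here only illustrates the position/playback split.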
G10H 1/00 - Details of electrophonic musical instruments
G10G 3/04 - Recording music in notation form, e.g. recording the mechanical operation of a musical instrument, using electrical means
A computer analyzes the degree of correlation between an L channel signal and an R channel signal of content, and executes, in accordance with the analyzed degree of correlation and with respect to the content, first EQ processing 311L and 311R for emphasizing a signal in a first frequency band and/or second EQ processing 312L and 312R for suppressing a signal in a second frequency band different from the first frequency band.
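The correlation-driven EQ selection described above can be sketched as follows; this is an assumed reduction for illustration, where the Pearson coefficient stands in for the analyzed degree of correlation and the threshold policy is invented.

```python
import math

def channel_correlation(left, right):
    """Pearson correlation coefficient between L- and R-channel samples."""
    n = len(left)
    ml = sum(left) / n
    mr = sum(right) / n
    cov = sum((l - ml) * (r - mr) for l, r in zip(left, right))
    sl = math.sqrt(sum((l - ml) ** 2 for l in left))
    sr = math.sqrt(sum((r - mr) ** 2 for r in right))
    return cov / (sl * sr) if sl and sr else 0.0

def choose_eq(corr, threshold=0.5):
    """Illustrative policy: highly correlated (centre-heavy) content gets the
    first EQ (emphasise band 1); decorrelated content gets the second EQ
    (suppress band 2). The threshold is an assumed tuning parameter."""
    return "emphasise_band_1" if corr >= threshold else "suppress_band_2"

left  = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5]
right = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5]   # identical channels -> corr = 1.0
corr = channel_correlation(left, right)
print(round(corr, 3), choose_eq(corr))
```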
G10L 21/0364 - Improvement of the intelligibility of speech, e.g. noise reduction or echo cancellation, by changing the amplitude for improving intelligibility
An image processing method according to one embodiment involves: when first framing information changes, determining second framing information indicating the position and size of a virtual background in an image of the virtual background on the basis of the changed first framing information; performing second framing processing on the image of the virtual background on the basis of the determined second framing information; and combining an image including a speaker subjected to first framing processing and the image of the virtual background subjected to the second framing processing.
Provided is a sound signal processing method in which an audio signal is input; a sound field effect signal for producing a sound field effect is generated by filtering the audio signal; a first gain of the audio signal and a second gain of the sound field effect signal are controlled; a volume operation is received; and the first gain or the second gain is adjusted in conjunction with the volume operation, thereby adjusting the level balance between the audio signal and the sound field effect signal. When a volume operation is received, at least the first gain is increased or decreased, and a gain-controlled audio signal and a gain-controlled sound field effect signal are output.
This information processing system includes: an information generation unit that generates instruction information containing a first character string, in which lyrics of a musical piece, sound of content, or text of a comic book are expressed in a first language, and a constraint condition related to the length of a character string; and an information acquisition unit that acquires a second character string which is generated as a result of the processing of the instruction information by a trained generation model, the second character string resulting from conversion of the first character string into a second language different from the first language under the constraint condition.
G06F 40/44 - Statistical methods, e.g. probability models
G06F 40/58 - Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
A video display method according to one embodiment is implemented by a computer for an event, and comprises: acquiring audio data pertaining to the sounds of the event; outputting a first video which has been processed by determining, on the basis of the audio data, a viewpoint for capturing a video of the event or a cropping range of the captured video of the event; and displaying a second video based on the first video on a display device of a listener of the event.
G10L 25/57 - Speech or voice analysis techniques not restricted to a single one of the groups, specially adapted for particular use, for comparison or discrimination, for the processing of video signals
H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
H04N 23/60 - Control of cameras or camera modules
8.
SPEAKER DEVICE, SOUND SIGNAL PROCESSING METHOD, AND PROGRAM
A speaker device according to one embodiment comprises: an acquisition unit that receives, as a sound signal, a sound based on an operation by a user; a processing unit that, on the basis of the sound signal received by the acquisition unit and a head-related transfer function, generates a reverberation signal which produces a three-dimensional reverberation effect; and an output unit that is disposed in contact with or close to the body of the user and that outputs, from outside the ear canal of the user, a sound based on the reverberation signal which produces the three-dimensional reverberation effect.
This driver unit comprises a driver unit body having a diaphragm and a ventilation hole formed on a rear-surface side of the diaphragm, a ventilation member 2 for covering the ventilation hole from the outside of the driver unit body, and a reinforcement member 3 for reinforcing the ventilation member 2. The ventilation member 2 has an annular attachment part 21 attached to a peripheral edge region of the outer surface of the driver unit body where the ventilation hole opens. The reinforcement member has a plurality of belt-form parts 31 extending from the center O of the ventilation member 2 to the attachment part 21 in mutually different directions as seen from a direction in which the ventilation hole and the ventilation member 2 overlap.
A terminal device 10 is equipped with: an information generating unit 41 for generating instruction information P including question information Q representing a question from a user relating to a guidance target; an information acquiring unit 42 for acquiring answer information R corresponding to the result of processing of the instruction information P by a trained generative model M, the answer information R representing an answer for providing guidance relating to the guidance target to the user in relation to the question represented by the question information Q, and the answer corresponding to auxiliary information X relating to the user; and an information output unit 43 for outputting the answer information R.
This data processing method generates multi-track audio data in which audio data of a plurality of channels, including at least a first channel and a second channel, is stored. In the data processing method, a data string of a digital audio signal is stored in the first channel, and motion data, which is information on movements of a character and is related to the digital audio signal, is stored in the second channel as a data string of a digital audio signal.
H04N 5/272 - Means for inserting a foreground image into a background image, i.e. inlay, outlay
H04N 21/434 - Disassembling of a multiplex stream, e.g. demultiplexing audio and video streams, extraction of additional data from a video stream; Remultiplexing of multiplex streams; Extraction or processing of SI; Disassembling of a packetised elementary stream
12.
INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND PROGRAM
This information processing system comprises: a performance analysis unit 52 that determines whether or not a performance of a musical instrument by a performer matches an assignment; and a display control unit 53 that displays the result of determination by the performance analysis unit 52 on a display device.
G10G 3/04 - Recording music in notation form, e.g. recording the mechanical operation of a musical instrument, using electrical means
13.
INFORMATION PROCESSING SYSTEM, MOVING-IMAGE EDITING METHOD, AND PROGRAM
An information processing system 100 includes: a music analysis unit 51 that identifies the phrase periods of a musical composition by analyzing performance data D representing the musical composition; and a moving-image editing unit 52 that generates an edited moving-image V by editing, according to the phrase periods, one or more recorded moving-images Ya showing a performance of the musical composition.
This information processing system includes: a characteristic acquisition unit 51 for acquiring an operation characteristic Xa related to a keyboard instrument; a performance analysis unit 52 for specifying a performance characteristic F corresponding to the operation characteristic Xa by analyzing a result of performance by a user using the keyboard instrument; and a display control unit 53 for displaying the operation characteristic Xa and the performance characteristic F on a display device 34.
G10G 3/04 - Recording music in notation form, e.g. recording the mechanical operation of a musical instrument, using electrical means
15.
INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND PROGRAM
This information processing system comprises: a data acquisition unit 510 that acquires a plurality of pieces of performance data each representing performance of a partial section of music; a performance data classification unit 511 that classifies the plurality of pieces of performance data into a plurality of performance data groups corresponding to different sections of the music; and an editing processing unit 514 that replaces a specific section of music data representing the music with performance data selected from a performance data group corresponding to the specific section among the plurality of performance data groups.
G10G 3/04 - Recording music in notation form, e.g. recording the mechanical operation of a musical instrument, using electrical means
An information processing system according to the present invention comprises: a recorded data acquisition unit 511 that acquires performance data which indicates a musical instrument performance by a performer and audio data which indicates speech audio pertaining to the performance by the performer; and a display control unit 514 that displays, on a display device, a reference screen indicating, under a common time axis, one or more performance periods in which the performance indicated by the performance data is present and one or more speech periods in which speech audio indicated by the audio data is present.
This acoustic processing system comprises: a first processing unit 41 that executes first processing on an acoustic signal Ak representing a target sound, and thereby generates an acoustic signal Bk for cancelling the target sound; and a second processing unit 42 that executes second processing on the acoustic signal Bk to reduce nonlinear distortion of a reproduced sound from a speaker unit by utilizing a nonlinear model simulating nonlinearity of a characteristic parameter related to the speaker unit, and thereby generates an acoustic signal Ck to be supplied to the speaker unit.
G10K 11/178 - Methods or devices for protecting against, or for damping, noise or other acoustic waves in general, using interference effects; Masking sound by electro-acoustic regeneration of the original acoustic waves in anti-phase
A component 3 for a wind instrument has a cylindrical wall part 10. At least a portion of the cylindrical wall part 10 is provided with: an inner wall 11 that constitutes an inner cylindrical surface 10a of the cylindrical wall part 10; an outer wall 12 that constitutes an outer cylindrical surface 10b of the cylindrical wall part 10 and is positioned spaced apart from the inner wall 11 in the thickness direction of the cylindrical wall part 10; and an infill pattern 13 formed in a space S1 between the inner wall 11 and the outer wall 12.
A stringed instrument according to the present invention comprises a neck, a plurality of conductive frets that are arranged at intervals from each other along the neck, a plurality of conductive strings that can contact any of the plurality of frets, a plurality of drive coils that respectively correspond to the plurality of frets, a signal supply unit that sequentially supplies a drive signal to each of the plurality of drive coils, a feedback line that is connected to each of the plurality of frets, and a voltage detection unit that detects a first detection voltage between a first string of the plurality of strings and the feedback line during supply of the drive signal to each of the plurality of drive coils.
An information processing system 100 is provided with: an information acquisition unit 22 that acquires performance information X representing a first musical piece composed of N (N is a natural number greater than or equal to 1) parts; and an information generation unit 23 that processes control data C including the performance information X by a generation model G trained by machine learning to generate performance information Y representing a second musical piece obtained by arranging the first musical piece into M (M is a natural number greater than or equal to 1 and different from N) parts.
An information processing system 100 comprises: an information acquisition unit 22 that acquires performance information X representing a first part that corresponds to one or more musical instruments in a musical composition; and an information generation unit 23 that, by processing control data that includes the performance information X using a generation model G that was trained by machine learning, generates performance information Y representing a second part that corresponds to a different musical instrument from the one or more musical instruments in the musical composition.
According to one aspect of the present invention, an information processing system is provided which converts voice of a user into the voice of another person different from the user. This information processing system is provided with a processor that can execute a program such that the following steps are performed. In a reception step, impression information indicating the impression of the voice of another person requested by the user is received. In a setting step, a registered voice corresponding to the impression information is set, from among a plurality of registered voices prepared in advance, as the voice of the other person.
This device control method acquires an image in a room, acquires position information of a device and position information of a person from the image, and controls an operation state of the device on the basis of a relationship between the position information of the device and the position information of the person.
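A minimal sketch of the control rule described above, under the assumption that "relationship between the position information of the device and the position information of the person" reduces to proximity in the room image. The radius, units, and all names are illustrative, not from the source.

```python
import math

def control_device(device_pos, person_pos, on_radius=2.0):
    """Illustrative rule: switch the device's operation state to 'on' when
    the person detected in the room image is within `on_radius` (assumed
    units, e.g. metres) of the device, and to 'off' otherwise."""
    dx = device_pos[0] - person_pos[0]
    dy = device_pos[1] - person_pos[1]
    return "on" if math.hypot(dx, dy) <= on_radius else "off"

print(control_device((0.0, 0.0), (1.0, 1.0)))  # on  (distance ~1.41)
print(control_device((0.0, 0.0), (3.0, 3.0)))  # off (distance ~4.24)
```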
This device control method acquires an image of an acoustic device installed in a room, acquires first device information from the acquired image of the acoustic device, determines, from the first device information, whether the acoustic device is wirelessly connectible, and instructs a wireless device to connect to the acoustic device when the acoustic device is determined to be wirelessly connectible.
A keyboard device 1 according to one embodiment comprises: a support body 50; a plurality of rod-shaped sounding bodies 30 that are swingably suspended from the support body 50 each via a thread-shaped member 40, the plurality of sounding bodies 30 including at least a first sounding body and a second sounding body having a natural frequency different from that of the first sounding body; a plurality of keys 10 that are swingable and include a first key and a second key; and a plurality of striking parts 20 that include a first striking part that strikes the first sounding body by rotating in response to the movement of the first key and a second striking part that strikes the second sounding body by rotating in response to the movement of the second key.
G10D 13/09 - Multi-toned musical instruments with sonorous bars, blocks, forks, gongs, plates, rods or teeth, with keyboards
G10D 13/08 - Multi-toned musical instruments with sonorous bars, blocks, forks, gongs, plates, rods or teeth
26.
SOUND PROCESSING METHOD, SOUND PROCESSING DEVICE, AND PROGRAM
This sound processing method comprises: acquiring location information of a virtual wall arranged in a virtual space; acquiring information including sound transmission characteristics of the virtual wall, namely a first transmission characteristic for sound passing through the virtual wall along a first direction and a second transmission characteristic for sound passing through the virtual wall along a second direction opposite to the first direction; acquiring information on a sound source location and a listening location; determining either the first transmission characteristic or the second transmission characteristic on the basis of the sound source location, the listening location, and the location information of the wall; and executing sound processing on a sound signal of the sound source on the basis of the determined transmission characteristic.
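The direction-dependent selection step above can be sketched in one dimension; this is an assumed simplification (wall on a 1-D axis, each transmission characteristic reduced to a single broadband gain) and all names are illustrative.

```python
def pick_transmission(source_x, listener_x, wall_x, t_first, t_second):
    """Choose which direction-dependent transmission characteristic applies.
    The first direction is taken as +x; returns None when the source and
    listener are on the same side of the wall (no transmission filtering)."""
    if source_x < wall_x < listener_x:
        return t_first          # sound crosses the wall along +x
    if listener_x < wall_x < source_x:
        return t_second         # sound crosses the wall along -x
    return None

def process(sample, gain):
    """Apply the determined transmission characteristic to one sample."""
    return sample if gain is None else sample * gain

t = pick_transmission(source_x=-1.0, listener_x=3.0, wall_x=0.0,
                      t_first=0.3, t_second=0.6)
print(t)                 # 0.3
print(process(1.0, t))   # 0.3
```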
This information processing system comprises an instruction data generation unit and a music data processing unit. The instruction data generation unit uses a generation model trained by machine learning to generate first instruction data indicating an instruction to one or more processing models among a plurality of processing models that each execute different processing, on the basis of first input data including an instruction from a user regarding processing of music data and attribute data indicating an attribute of music indicated by the music data. The music data processing unit uses the one or more processing models indicated by the first instruction data among the plurality of processing models to execute processing indicated by the first instruction data on the music data. The instruction data generation unit uses the generation model to generate second instruction data indicating an instruction to one or more processing models among the plurality of processing models, on the basis of second input data including the attribute data and an instruction from the user regarding processing of the music data on which the processing indicated by the first instruction data has been executed. The music data processing unit uses the one or more processing models indicated by the second instruction data among the plurality of processing models to execute processing indicated by the second instruction data on the music data.
This information processing system includes a first information processing device, a second information processing device, an adjustment unit, and a transmission unit. The first information processing device is installed at a first location. The second information processing device is installed at a second location. The adjustment unit adjusts environmental sound generated at the first location on the basis of performance sound generated at the first location or the second location. The transmission unit transmits the performance sound generated at the first location and the environmental sound adjusted by the adjustment unit to the second information processing device.
An information processing system according to one embodiment of the present invention comprises a request information acquisition unit, a proposal unit, and a reception unit. The request information acquisition unit acquires user request information related to a musical instrument. The proposal unit selects one or more candidate musical instruments corresponding to the acquired request information, and proposes, to the user, the selected one or more candidate musical instruments together with performance sound data of each of the one or more candidate musical instruments. The reception unit receives a selection, input by the user, of a musical instrument desired to be provided from among the one or more candidate musical instruments.
According to one embodiment, a sound signal processing method involves receiving a sound signal, performing vowel determination on the basis of the spectral shape of the sound signal during a prescribed first period of time or performing voice determination on the basis of the rate of change in the pitch of the sound signal during a prescribed second period of time, and adjusting the time characteristics or the frequency characteristics of the sound signal on the basis of the results of the vowel determination or the voice determination.
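The voice-determination branch of the method above — classifying based on the rate of change in pitch over a prescribed period — might look like the following sketch. The frame length, threshold, and the "smooth pitch implies voice" rule are assumptions for illustration, not the claimed criteria.

```python
def pitch_change_rate(pitches_hz, frame_s):
    """Mean absolute pitch change per second over the analysis period,
    given one pitch estimate per frame of length `frame_s` seconds."""
    deltas = [abs(b - a) for a, b in zip(pitches_hz, pitches_hz[1:])]
    return sum(deltas) / (len(deltas) * frame_s) if deltas else 0.0

def is_voice(pitches_hz, frame_s=0.01, max_rate_hz_per_s=400.0):
    """Illustrative rule: a speaking or singing voice changes pitch
    relatively smoothly, so a low rate of change suggests voice."""
    return pitch_change_rate(pitches_hz, frame_s) <= max_rate_hz_per_s

smooth = [220.0, 221.0, 222.5, 223.0, 224.0]   # gentle glide: voice-like
jumpy  = [220.0, 440.0, 110.0, 330.0, 165.0]   # large jumps: not voice-like
print(is_voice(smooth), is_voice(jumpy))        # True False
```

The classification result would then gate the adjustment of the signal's time or frequency characteristics, as in the abstract.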
G10L 21/0364 - Improvement of the intelligibility of speech, e.g. noise reduction or echo cancellation, by changing the amplitude for improving intelligibility
G10L 15/10 - Speech classification or search using distance or distortion measures between the unknown speech and reference templates
G10L 21/0332 - Details of the processing therefor involving modification of waveforms
G10L 25/84 - Detecting the presence or absence of voice signals for discriminating voice from noise
31.
PLATE MATERIAL, MUSICAL INSTRUMENT, AND STRINGED INSTRUMENT
Provided is a plate material 1 in which a first shear modulus in a plane including a plate thickness direction Z and a first direction Y orthogonal to the plate thickness direction Z is lower than a second shear modulus in a plane including the plate thickness direction Z and a second direction X orthogonal to the plate thickness direction Z and the first direction Y, and which has, in the inside thereof, a plurality of voids 2 arranged at intervals in a plane direction including the first direction Y and the second direction X.
The present invention comprises: a detection unit that detects, as a creation motion for creating a creation target, a first musical performance motion performed by a user, and detects a second musical performance motion performed by the user after the first musical performance motion; a generation unit that, on the basis of the creation motion, generates, as musical performance information from which the creation target can be created, first musical performance information based on the first musical performance motion and second musical performance information in which the first musical performance information has been changed on the basis of the second musical performance motion; and an output unit that outputs the second musical performance information.
G10G 3/04 - Recording music in notation form, e.g. recording the mechanical operation of a musical instrument, using electrical means
A keyboard device according to one embodiment of the present invention includes: a plurality of keys; an acquisition unit that acquires information for specifying a pressed key among the plurality of keys, and a pressing amount of the pressed key; and a sound signal generation unit that generates, if two or more keys are pressed and the pressing amounts of the two or more pressed keys each exceed a reference pressing amount, a sound signal on the basis of a relative relationship between the pressing amounts of the two or more keys, information for specifying the pressed keys, and the pressing amounts of the pressed keys.
G10H 1/053 - Means for controlling the tone frequencies, e.g. attack or decay; Means for producing special musical effects, e.g. vibratos or glissandos, during execution only, by additional modulation
34.
MUSICAL INSTRUMENT SOUNDBOARD AND STRINGED MUSICAL INSTRUMENT
A musical instrument soundboard 1 according to one embodiment of the present disclosure comprises: a first plate material 11; a pair of second plate materials 12 disposed so as to face each other with the first plate material 11 interposed therebetween; and a pair of third plate materials 13 disposed so as to face each other with the pair of second plate materials 12 interposed therebetween. The third plate material 13 has stronger anisotropy than the first plate material 11 and the second plate material 12 with respect to the loss tangent. Where a direction which is parallel to the surface of the third plate material 13 and in which the loss tangent of the third plate material 13 is minimized is defined as an X direction, the loss tangent of the first plate material 11 in the X direction is larger than both the loss tangent of the second plate material 12 in the X direction and the loss tangent of the third plate material 13 in the X direction.
Provided is a guitar body structure 2 comprising a body 20 that includes a wooden part 30 composed of rosewood. As viewed from the thickness direction of the body 20, the wood grain direction of the wooden part 30 is inclined with respect to the longitudinal direction of a guitar neck 3.
This information processing device comprises: an acquisition unit that acquires performance information, which indicates a performance of a user, and/or behavior-related information, which is information different from the performance information and relates to the behavior of the user; an information processing unit that generates, on the basis of the acquired performance information and/or behavior-related information, output information, which is non-performance information responding to a request of the user and indicates information relating to music; and an output processing unit that outputs the output information.
A molded body 10 according to one embodiment of the present disclosure includes a plurality of wood pieces 1. The molded body 10 is anisotropic in terms of elastic modulus and strength in the trunk-axis direction of the wood used as a feedstock and in a direction orthogonal to the trunk-axis direction. Fibers 1a of the wood pieces 1 intermingle at the interfaces between the plurality of wood pieces 1.
A listening sound acquisition method according to the present invention is for a loudspeaker system that delivers spoken sound from a speaker to a listener via a loudspeaker transmission path that includes a sound collection means, an amplification means, and a sound emission means. The method involves acquiring the sound pressure of the spoken sound at an arbitrary location on the transmission path and finding the sound pressure at a listening location for the listener on the basis of the acquired sound pressure.
This noise reduction device (dummy coil 1) is attached to a musical instrument (electric guitar 7) having an electromagnetic pickup 72 for detecting a magnetic field change based on vibration of a string 71. The noise reduction device includes a coil 20, a core material 4 made of a soft magnetic material, and an adjustment member 3 for variably adjusting a relative position of the coil 20 and the core material 4. The core material 4 is not magnetized.
G10H 3/18 - Instruments in which the tones are generated by electromechanical means using mechanical resonant generators, e.g. strings or percussive instruments, the tones of which are picked up by electromechanical transducers, the electrical signals being further manipulated or amplified and subsequently converted to sound by a loudspeaker or equivalent device, using mechanically excited vibrators with pick-up means, using strings, e.g. electric guitars
This speaker diaphragm comprises a base material having a resin matrix containing a thermoplastic resin as a main component and fibers dispersed in the resin matrix. The fibers have a plurality of loss regions in which the fibers are locally bent and broken, or a plurality of loss regions each having a diameter that is 5% to 50% larger than the average diameter of the fibers.
This performance input device includes an illumination-equipped operation area and an illumination-free operation area, both of which accept performance input. The illumination-equipped operation area is configured to be lightable, and when the illumination-equipped operation area is played, the lighting mode of the illumination-equipped operation area is changed.
A signal generation method according to an embodiment comprises: acquiring intermediate feature data corresponding to a predetermined time step by providing, to a first trained model, a first parameter in a predetermined time range including ranges before and after the predetermined time step of sound control data including the first parameter and a second parameter for controlling a generated sound in a plurality of time steps corresponding to the passage of time; updating the intermediate feature data when the value of the first parameter has been changed; and generating a sound signal in accordance with data obtained by providing the second parameter and the intermediate feature data to a second trained model when a reproduction start instruction has been received.
G10H 7/08 - Instruments in which the tones are synthesised from a data store, e.g. computer organs, by calculating functions or polynomial approximations to evaluate the amplitude at successive sample points of a tone waveform
G10H 1/00 - Details of electrophonic musical instruments
G10L 13/00 - Speech synthesis; Text-to-speech systems
43.
SIGNAL PROCESSING METHOD, SIGNAL PROCESSING SYSTEM, AND SPEAKER SYSTEM
According to the present invention, an acoustic processing system is provided with: a signal processing unit that utilizes a nonlinear model simulating a relationship in which an acoustic resistance of an acoustic port for acoustic radiation employing Helmholtz resonance depends on a volumetric flow velocity at the acoustic port, to determine an input voltage from target parameters including the volumetric flow velocity at the acoustic port; and a voltage supply unit that supplies the input voltage to a speaker device.
NATIONAL UNIVERSITY CORPORATION TOKAI NATIONAL HIGHER EDUCATION AND RESEARCH SYSTEM (Japan)
YAMASA CORPORATION (Japan)
Inventor(s)
Ueno Yoshihito
Abstract
Provided is a nucleoside derivative or a salt thereof, which is represented by formula (1) or (2). (In formula (1) and formula (2), R1 denotes a hydrogen atom, a hydroxyl group, a hydroxyl group in which the hydrogen atom is substituted by an alkyl group or an alkenyl group, or a protected group. In formula (1) and formula (2), R2 and R4 denote (…)nNR5R6 (n is 0 or 1, and R5 and R6 may be the same as, or different from, each other and each denote a hydrogen atom, a hydroxyl group, a protected hydroxyl group, a mercapto group, a protected mercapto group, a lower alkoxy group, a cyano-lower alkoxy group, an amino group or a substituted amino group; however, if n is 1, R5 and R6 cannot both be hydrogen atoms). R3 denotes NHR7 (R7 denotes a hydrogen atom, an alkyl group, an alkenyl group or a protecting group for an amino group), and B denotes a purin-9-yl group, a 2-oxo-pyrimidin-1-yl group, a substituted purin-9-yl group or a substituted 2-oxo-pyrimidin-1-yl group.)
C07H 19/067 - Pyrimidine radicals with ribosyl as the saccharide radical
A61K 31/712 - Nucleic acids or oligonucleotides having modified sugars, i.e. other than ribose or 2'-deoxyribose
A61K 31/7072 - Compounds having saccharide radicals and heterocyclic rings having nitrogen as a ring hetero atom, e.g. nucleosides, nucleotides, containing six-membered rings with nitrogen as a ring hetero atom, containing condensed or non-condensed pyrimidines having oxo groups directly attached to the pyrimidine ring, e.g. cytidine, cytidylic acid, having two oxo groups directly attached to the pyrimidine ring, e.g. uridine, uridylic acid, thymidine, zidovudine
A61K 31/7125 - Nucleic acids or oligonucleotides having modified internucleoside linkages, i.e. other than 3'-5' phosphodiester linkages
A61P 43/00 - Drugs for specific purposes, not provided for in the preceding groups
C07H 19/073 - Pyrimidine radicals with 2-deoxyribosyl as the saccharide radical
This speaker position detection method comprises: acquiring speech of a speaker, by using a microphone; obtaining direction information of the speaker, on the basis of the acquired speech of the speaker; acquiring a face image of the speaker from an image acquired by a camera, on the basis of the direction information; and detecting position information, including height information of the speaker, on the basis of the acquired face image of the speaker.
H04N 23/611 - Control of cameras or camera modules based on recognised objects, the recognised objects including parts of the human body
G10K 11/175 - Methods or devices for protecting against, or for damping, noise or other acoustic waves in general, using interference effects; Sound masking
H04N 23/63 - Control of cameras or camera modules using electronic viewfinders
A Helmholtz resonator 1A comprises a cavity portion 10 and a neck portion 20A which provides communication between an internal space 11 of the cavity portion 10 and an external space 31. In the neck portion 20A, the opening area of an open end on the external space 31 side is larger than the opening area of an open end on the internal space 11 side.
G10K 11/02 - Mechanical acoustic impedances; Impedance matching, e.g. by means of horns; Acoustic resonators
G10K 11/172 - Methods or devices for protecting against, or for damping, noise or other acoustic waves in general, using resonance effects
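For the Helmholtz resonator described above, the resonance frequency of the cavity-plus-neck system follows the standard lumped-element formula f0 = (c/2π)·√(S/(V·L)); changing the opening area at either end of the neck shifts f0. The snippet below is a plain textbook calculation (no end corrections), not taken from the patent; all dimensions are example values.

```python
import math

def helmholtz_f0(neck_area, cavity_volume, neck_length, c=343.0):
    """Resonance frequency f0 = c/(2*pi) * sqrt(S / (V * L)) of a simple
    Helmholtz resonator: neck_area S (m^2), cavity_volume V (m^3),
    neck_length L (m), speed of sound c (m/s)."""
    return c / (2 * math.pi) * math.sqrt(neck_area / (cavity_volume * neck_length))

# Example: 1 cm^2 neck, 1 L cavity, 5 cm neck length.
f0 = helmholtz_f0(1.0e-4, 1.0e-3, 0.05)
```

A real neck with a flared outer end, as in the entry above, behaves like a neck whose effective area and end correction vary along its length, which this single-S formula only approximates.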
47.
SOUND PROCESSING METHOD, SOUND PROCESSING DEVICE, AND PROGRAM
Provided is a sound processing method wherein: sound signals output from a plurality of input channels are received and mixed to generate a mixed sound signal; a spectrum diagram of the mixed sound signal or the sound signals of the plurality of input channels is displayed; from the mixed sound signal and the sound signal of each input channel among the plurality of input channels, a similarity degree in a prescribed frequency band is calculated; and, in the spectrum diagram, with regard to at least one of the plurality of input channels, a display mode in the prescribed frequency band is made to differ on the basis of the similarity degree.
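The similarity degree in a prescribed frequency band, as described above, can be sketched as a cosine similarity between magnitude spectra restricted to that band. The band limits and the use of cosine similarity are illustrative assumptions; the patent does not specify the similarity measure.

```python
import numpy as np

def band_similarity(mix, chan, sr, band=(200.0, 2000.0)):
    """Cosine similarity of the magnitude spectra of two signals,
    restricted to the frequency band [band[0], band[1]) in Hz."""
    n = min(len(mix), len(chan))
    freqs = np.fft.rfftfreq(n, d=1.0 / sr)
    sel = (freqs >= band[0]) & (freqs < band[1])  # bins inside the band
    a = np.abs(np.fft.rfft(mix[:n]))[sel]
    b = np.abs(np.fft.rfft(chan[:n]))[sel]
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0
```

A channel whose in-band spectrum closely matches the mix scores near 1.0 and could be highlighted in the spectrum diagram accordingly.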
An electronic musical instrument 1 includes a pedal 12, a speed detection unit 16, a speed determination unit 4, and a reproduction unit 6. The speed detection unit 16 detects the moving speed of the pedal 12. The speed determination unit 4 determines a pedal-off speed when the operation of the pedal 12 is cancelled, on the basis of the moving speed detected by the speed detection unit 16. The reproduction unit 6 reproduces a pedal-off sound on the basis of the pedal-off speed determined by the speed determination unit 4.
An acoustic processing system 100 sets N control values Cy_1 to Cy_N in accordance with an instruction from a user, processes an acoustic feature quantity Q of a base component of a target sound and modulation data M corresponding to the N control values Cy_1 to Cy_N, and thereby generates an acoustic signal Z of the target sound in which N modulation components corresponding to the respective N control values Cy_1 to Cy_N have been added to the base component.
This text display method includes: acquiring melody data PD representing a plurality of sounds which have been put in order; acquiring text information that includes a word and a first symbol; and displaying a corresponding text item that corresponds to a first phoneme determined on the basis of the position of the first symbol in the word and phonemes constituting the word, the corresponding text item being displayed in association with a first sound included in the melody data PD and one or a plurality of sounds which succeed the first sound.
This parameter determination method is for determining a parameter of a filter that outputs a second sound signal in which a frequency characteristic is changed with respect to a first sound signal. The parameter determination method includes: calculating a first acoustic feature amount of a first sound signal; calculating a second acoustic feature amount of a second sound signal; and determining a parameter of a filter so that a difference in a specific acoustic feature amount related to audibility between the first acoustic feature amount and the second acoustic feature amount is no greater than a predetermined value.
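The determine-until-within-a-predetermined-value idea above can be sketched with a single filter parameter. Here the "acoustic feature amount related to audibility" is simplified to RMS level in dB and the parameter is a broadband gain; both simplifications are assumptions for illustration only.

```python
import math

def rms_db(x):
    """RMS level of a sample sequence, in dB (re full scale)."""
    return 20 * math.log10(math.sqrt(sum(v * v for v in x) / len(x)))

def fit_gain(first, second, tol_db=0.1):
    """Adjust a linear gain on `second` until its RMS level matches
    `first` to within tol_db, then return that gain."""
    gain = 1.0
    for _ in range(50):  # bounded iteration, as a safeguard
        diff = rms_db(first) - rms_db([gain * v for v in second])
        if abs(diff) <= tol_db:
            break
        gain *= 10 ** (diff / 20)  # step the gain toward closing the level gap
    return gain
```

The same loop structure applies when the feature is loudness or a spectral measure and the parameter is, say, an EQ band gain; only the two inner functions change.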
A detection device 20 comprises: a first actuator 21 that generates sound waves to be transmitted through a pipe 3 of a clarinet 1; a first sensor unit 31 that has a plurality of sensors 31a-31h for detecting the sound waves; and a first attachment part 41 with which the first actuator 21 and the first sensor unit 31 are disposed in the pipe 3. The plurality of sensors 31a-31h are arranged at intervals in the longitudinal direction of the pipe 3.
G10H 1/00 - Details of electrophonic musical instruments
G10H 1/32 - Details of electrophonic musical instruments; Constructional details
G10H 1/34 - Switch arrangements, e.g. keyboards or mechanical switches, specially adapted for electrophonic musical instruments
G10H 1/053 - Means for controlling the tone frequencies, e.g. attack or decay; Means for producing special musical effects, e.g. vibratos or glissandos, by additional modulation during performance only
53.
INFORMATION PROCESSING METHOD AND INFORMATION PROCESSING SYSTEM
This information processing system 100 comprises: an information acquisition unit 41 that acquires performance information pertaining to a performance on an electronic musical instrument 20 by a user U; a response acquisition unit 42 that acquires response information R of a natural language according to the performance information; and an operation control unit 43 that executes a notification operation for notifying the response information R by means of a guide character displayed on the display device 15.
G10G 3/04 - Recording music in notation form, e.g. recording the mechanical operation of a musical instrument, using electrical means
G10H 1/00 - Details of electrophonic musical instruments
H04L 67/00 - Network arrangements or protocols for supporting network services or applications
54.
INFORMATION PROCESSING METHOD AND INFORMATION PROCESSING DEVICE
This information processing method comprises: detecting a specific gesture of a first performer; generating related information related to the performance of the first performer or a second performer on the basis of the detected specific gesture; and transmitting the related information from a first terminal used by the first performer to a second terminal used by the second performer.
A content information processing method according to the present invention acquires first performance information related to a live performance of a first performer at a first venue, acquires first content information related to a video or sound at a second venue connected to the first venue through a network, generates second content information on the basis of the first performance information and the first content information, and generates a video or sound based on the second content information.
In this information processing method, a plurality of first signals, each related to the performance of a performer at one of multiple sites, are transmitted from the performers' respective first terminals; a second terminal receives the plurality of first signals and acquires reference information serving as a reference for the performance; and on the basis of this reference information, one of the plurality of first signals is prioritized for processing.
This performance sound generation method comprises acquiring image information of a first musical instrument and sound information changed by an environmental change of the first musical instrument, acquiring performance operation information of a user, rendering an image of the first musical instrument on the basis of the image information, and generating a performance sound of the first musical instrument on the basis of the performance operation information and the sound information.
G10H 7/02 - Instruments in which the tones are synthesised from a data store, e.g. computer organs, in which the amplitudes at successive sample points of a tone waveform are stored in one or more memories
G10H 1/00 - Details of electrophonic musical instruments
58.
INFORMATION PROCESSING METHOD, PRODUCT PRESENTATION METHOD, INFORMATION PROCESSING DEVICE, PRODUCT PRESENTATION DEVICE, AND PROGRAM
This information processing method comprises: preparing a converter that defines the correspondence between product information, including sensory information relating to taste, smell, or touch that is included in information about a product, and sound production-related parameters; and outputting sound information based on sound production-related parameters by inputting product information into the converter, or outputting product information by inputting sound production-related parameters based on acquired sound information into the converter.
This method for evaluating a sound or an image comprises: displaying three icons BJm in a plane (in a display area 25A) displayed on a display 25; outputting sound data SD corresponding to each icon BJm; editing the position of each icon BJm in the plane (in the display area 25A) in response to a first operation in which a user moves each icon BJm in the plane (in the display area 25A) using an input device 24; and associating the edited planar coordinates of each icon BJm with the sound data SD.
G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI], based on specific properties of the displayed interaction object or on a metaphor-based environment, e.g. interaction with desktop elements such as windows or icons, or assisted by a cursor's changing behaviour or appearance, using icons
This audio device includes a first amplifier and one or more audio modules. The first amplifier is connected to a head unit that outputs audio data and control data, and processes the audio data on the basis of the control data. The one or more audio modules are connected to the first amplifier via an audio bus capable of two-way communication via serial communication, and function as an extension function or an additional function for the first amplifier. The first amplifier is operated by first electric power supplied from an external power supply. Second electric power is supplied from the first amplifier via the audio bus to at least one of the one or more audio modules. The at least one audio module operates by means of the second electric power.
H04L 12/28 - Data switching networks characterised by path configuration, e.g. local area networks [LAN] or wide area networks [WAN]
H04R 1/02 - Casings; Cabinets; Mountings therein
61.
LAYOUT DESIGN ASSISTANCE METHOD FOR ACOUSTIC EQUIPMENT, INFORMATION PROCESSING DEVICE, AND PROGRAM
In this layout design assistance method for acoustic equipment, an acoustic space, the position of a sound source in the acoustic space, and the position of a noise source in the acoustic space are received, a layout distribution for acoustic equipment corresponding to the received position of the sound source and the position of the noise source in the received acoustic space is calculated, and the calculated layout distribution is displayed on a display.
G06F 30/13 - Architectural design, e.g. computer-aided architectural design [CAAD], related to the design of buildings, bridges, landscapes, production plants or roads
G06F 30/20 - Design optimisation, verification or simulation of the designed object
62.
SPEAKER SYSTEM, SPEAKER, CAMERA, AND SPEAKER ARRANGEMENT ASSISTANCE METHOD
This speaker system 100 includes a marker 10, a speaker 20, a camera 30, and a calculation unit 2. The camera 30 is arranged in a specific positional relationship with respect to the speaker 20. The camera 30 captures an image of an object in an imaging direction corresponding to the sound emission direction of the speaker 20. The calculation unit 2 calculates the relative position of the speaker 20 with respect to the marker 10 on the basis of an image of the marker 10 included in the image captured by the camera 30.
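Since the camera above is rigidly mounted relative to the speaker, the speaker's position relative to the marker can be sketched with the pinhole camera model: distance from the marker's apparent size, lateral offset from its position in the frame. The focal length and marker size below are illustrative assumptions, and a full implementation would also recover orientation.

```python
# Hedged sketch: pinhole-model estimates from a detected marker of known size.

def marker_distance(pixel_width, marker_width_m=0.10, focal_px=800.0):
    """Distance (m) to a marker of known width, from its width in pixels."""
    return focal_px * marker_width_m / pixel_width

def marker_offset(pixel_cx, image_width, distance_m, focal_px=800.0):
    """Lateral offset (m) of the marker from the camera axis, from the
    horizontal pixel position of its centre."""
    return (pixel_cx - image_width / 2) * distance_m / focal_px

d = marker_distance(80.0)              # 10 cm marker spanning 80 px
x = marker_offset(720.0, 1280, d)      # marker centre at x = 720 px
```

Translating these camera-relative coordinates into speaker-relative ones then only requires the fixed camera-to-speaker transform implied by the "specific positional relationship" above.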
This sound conversion method includes: providing, to a trained model in which sound-quality designation data has been set, first acoustic-characteristic data of an input sound signal and pitch designation data based on the input sound signal; and acquiring, from the trained model, second acoustic-characteristic data for generating a conversion sound signal having sound quality which is in accordance with the sound-quality designation data. The trained model outputs the second acoustic-characteristic data by using the first acoustic-characteristic data and the pitch designation data. The pitch designation data indicates a pitch obtained by adjusting the pitch of the input sound signal on the basis of the relationship between: the pitch of a target sound signal which is the target for determining the amount of pitch adjustment; and the register which is associated with the sound-quality designation data.
A sound conversion method including: providing first acoustic feature data pertaining to an input sound signal and pitch designation data to a trained model; and acquiring, from the trained model, second acoustic feature data for generating a conversion sound signal having a prescribed sound quality and a pitch that is based on the pitch designation data. The trained model outputs the second acoustic feature data using the first acoustic feature data and the pitch designation data.
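The pitch adjustment described in the two sound-conversion entries above, where the input pitch is moved toward the register associated with a target voice quality, can be sketched as octave transposition toward a register centre. Treating the register as a single centre frequency and restricting the shift to whole octaves are both simplifying assumptions.

```python
# Hedged sketch: transpose by octaves until the pitch lies within half an
# octave (factor sqrt(2)) of the register centre of the target voice.

def adjust_pitch_hz(input_hz, register_center_hz):
    """Return input_hz shifted by whole octaves into the target register."""
    f = input_hz
    while f < register_center_hz / 2 ** 0.5:
        f *= 2.0   # too low: up an octave
    while f > register_center_hz * 2 ** 0.5:
        f /= 2.0   # too high: down an octave
    return f
```

Octave shifts preserve the melodic contour of the input while landing the pitch in a range the target voice quality can plausibly produce, which is why they are a natural choice for this kind of adjustment.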
An information processing system according to an embodiment of the present invention comprises a first acquisition unit, a second acquisition unit, a current status information generation unit, and a support processing unit. The first acquisition unit acquires a captured image of a musical instrument. The second acquisition unit acquires information about a consultation matter relating to the functions of the captured musical instrument. On the basis of the captured image, the current status information generation unit generates current status information relating to the current status of the musical instrument, including information about at least one of the configuration of the musical instrument, the condition of the musical instrument, and the installation position of the musical instrument. The support processing unit generates and outputs support information for solving the consultation matter, on the basis of the information about the consultation matter and the current status information.
This performance information generation method comprises: acquiring first performance information related to the performance of a first performer and second performance information related to the performance of a second performer; generating third performance information obtained by integrating the acquired first performance information and second performance information; and outputting the generated third performance information.
A training device 10 includes a reference data acquisition unit 11, a performance data acquisition unit 12, a detection unit 13, an estimation unit 14, and a guidance unit 15. The reference data acquisition unit 11 acquires reference data indicating the time sequence of reference sounds. The performance data acquisition unit 12 acquires performance data indicating the time sequence of sounds expressed by the performance of a user. The detection unit 13 detects a mistake in the performance by comparing the reference data and the performance data. The estimation unit 14 estimates the cause of the mistake on the basis of a history of the performance. The guidance unit 15 determines the guidance content on the basis of the frequency and cause of the mistake.
G10G 1/00 - Means for the representation of music
G10G 3/04 - Recording music in notation form, e.g. recording the mechanical operation of a musical instrument, using electrical means
G10H 1/00 - Details of electrophonic musical instruments
A speaker device 21 comprises: a speaker unit 30; and a support mechanism 40 that elastically supports the speaker unit 30 with respect to an interior member 10. The support mechanism 40 includes a first fixed part 41 that is fixed to the interior member 10, a second fixed part 42 that is fixed to the interior member 10, and a support part 43 that elastically couples the speaker unit 30 to the first fixed part 41 and the second fixed part 42. The speaker unit 30 is located between the first fixed part 41 and the second fixed part 42 when viewed in the direction of a central axis Z, and a straight line passing through the first fixed part 41 and the second fixed part 42 is orthogonal to the central axis Z of the speaker unit 30.
This sound pickup setting method for setting the directivity of a microphone includes: setting a separation angle measured from the direction vertically upward with respect to the surface on which the microphone is installed; and confining the directivity of the microphone to a range not greater than the set separation angle.
G10L 25/51 - Speech or voice analysis techniques not restricted to a single one of the groups, specially adapted for particular use, for comparison or discrimination
70.
DISPLAY METHOD, DISPLAY PROCESSING DEVICE AND PROGRAM
This display method comprises: accepting a camera image; accepting area information indicating an area used for an event; and causing a boundary image on a certain horizontal plane to be displayed superimposed on the camera image, the boundary image corresponding to the area information.
G06T 19/00 - Manipulating 3D models or images for computer graphics
G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
G09G 5/37 - Details of the processing of graphic patterns
G09G 5/377 - Details of the processing of graphic patterns for mixing or overlaying two or more graphic patterns
H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
H04N 21/431 - Generation of visual interfaces; Rendering of content or additional data
71.
MOVING-IMAGE PROCESSING METHOD, MOVING-IMAGE PROCESSING SYSTEM, AND PROGRAM
This moving-image processing system: extracts a first reference portion R1, which includes the hands of a performer, from performance moving-images X showing a first keyboard-instrument performance performed by the performer (step Sa3); and generates synthesized moving-images Y by superimposing the first reference portion R1 onto a keyboard portion By of a second keyboard instrument (step Sa6).
This sound signal processing method comprises: accumulating relationships between environment information of past conferences and past sound processing parameters; acquiring environment information of a current conference; determining a current sound processing parameter corresponding to the acquired environment information of the current conference, on the basis of the accumulated relationships and a predetermined inference algorithm; receiving a sound signal; and applying sound processing to the received sound signal on the basis of the obtained current sound processing parameter.
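The "determine current parameters from accumulated past relationships" step above admits many inference algorithms; the sketch below uses the simplest one, a nearest-neighbour lookup over numeric environment features. The features (room area, head count, noise level) and parameter names are illustrative assumptions, not from the patent.

```python
# Hedged sketch: pick the sound-processing parameters of the past conference
# whose environment is closest to the current one.

def nearest_params(history, current_env):
    """history: list of (env_vector, params) pairs from past conferences.
    Returns the params of the entry with the smallest squared distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(history, key=lambda item: dist(item[0], current_env))[1]

# Example history: (room_m2, participants, noise_dB) -> parameters.
history = [
    ((20.0, 4, 40.0), {"gain_db": -3, "reverb": 0.2}),
    ((80.0, 12, 55.0), {"gain_db": 2, "reverb": 0.5}),
]
params = nearest_params(history, (75.0, 10, 50.0))
```

A learned regressor or a similarity-weighted blend of several past parameter sets would slot into the same interface.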
This mask processing method involves: accumulating in advance an association between first participant information, indicating a participant who participated in a past event, and a face image or a voice; receiving second participant information indicating a participant participating in a current event; receiving a sensor signal including a face image captured by a camera or a voice acquired by a microphone; comparing the second participant information and the sensor signal on the basis of the association so as to determine whether the face image or the voice included in the sensor signal corresponds to a participant or a non-participant; and performing mask processing on a sensor signal determined to correspond to a non-participant.
G10K 11/175 - Methods or devices for protecting against, or for damping, noise or other acoustic waves in general, using interference effects; Sound masking
This information processing system comprises: a performance analysis unit that analyzes the state of a performance by a user in a real space; and a display control unit that displays, on a display device, a performing character who performs in a virtual space, and that controls the actions of the performing character in accordance with the state of the performance.
Provided is a sound data processing method comprising: on the basis of performance information, acquiring a first sound data set that corresponds to any one of multiple virtual operators and includes first sound data and second sound data; and, on the basis of the first sound data set, generating a second sound data set that includes multiple pieces of third sound data respectively corresponding to multiple reproduction element positions. The first sound data and the second sound data each include sound information and sound production position information indicating sound production positions relative to a predetermined listening position. Generating the second sound data set comprises generating the third sound data on the basis of the relation between the sound production positions indicated by the sound production position information of each of the first sound data and the second sound data and the multiple reproduction element positions.
In the present invention, an information processing device 1 includes an acquisition unit 101, a first identification unit 102, a second identification unit 103, and a display control unit 104. The acquisition unit 101 acquires designation information indicating a prescribed posture which is designated by a user and is included in a first motion of the user and in a second motion which is a comparison target. The first identification unit 102 identifies a first frame corresponding to the designation information, from among a plurality of frames obtained by imaging the first motion. The second identification unit 103 identifies a second frame corresponding to the designation information, from among a plurality of frames obtained by imaging the second motion. The display control unit 104 reproduces, in parallel, the motion data corresponding to each of the first motion and the second motion such that the reproduction timing of the first frame and the second frame match.
Provided is a sound source. An acquisition unit 401 acquires a tone setting instruction, a generation unit 402 generates half setting information J in accordance with the tone setting instruction, the half setting information J specifying a key damper half region or a key damper half point in producing sound with each of a plurality of keys of an electronic keyboard instrument 200, and an output unit 403 outputs the generated half setting information J.
An information processing system comprises an input data acquisition unit 41 and a finger position data generation unit 42. The input data acquisition unit 41 acquires input data C1 including image data G that represents an image of the hands of a user who plays a musical instrument, finger position data Y that represents each position of a plurality of analysis points on the hands, and performance data E that represents the performance of the musical instrument. The finger position data generation unit 42 generates finger position data Z in which the position of each of the plurality of analysis points is corrected in the finger position data Y according to the position of the hand represented by the image data G and the performance represented by the performance data E by processing input data C1 by means of a generation model M.
This sound volume setting method: accepts a correction instruction for a sound volume change characteristic with respect to a sound volume modifying operation; sets the sound volume at the time when the correction instruction is accepted as a reference value; generates a corrected sound volume change characteristic that is the sound volume change characteristic corrected based on the reference value; and stores the generated corrected sound volume change characteristic.
This wearable acoustic output device, which can be worn on the ear of a user, comprises a control processing unit which: measures a continuous playback period of an output sound output to the ear of the user; and, when the continuous playback period reaches a preset processing threshold value, performs specific processing on the output sound to allow the user to recognize an overload to the ear. The control processing unit initializes the continuous playback period when a playback interruption period in which the output of the output sound is continuously interrupted reaches a preset initialization threshold value.
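The timing logic described above, a continuous-playback counter that triggers special processing at one threshold and is initialised after a sufficiently long interruption, can be sketched as a small state machine. The threshold values below are illustrative, not from the patent.

```python
# Hedged sketch of the playback-period bookkeeping in the entry above.

class PlaybackMonitor:
    def __init__(self, process_threshold_s=3600.0, init_threshold_s=600.0):
        self.process_threshold = process_threshold_s  # warn after this much play
        self.init_threshold = init_threshold_s        # reset after this much pause
        self.playing_s = 0.0
        self.paused_s = 0.0

    def tick(self, dt, is_playing):
        """Advance time by dt seconds; return True when the continuous
        playback period has reached the processing threshold."""
        if is_playing:
            self.playing_s += dt
            self.paused_s = 0.0
        else:
            self.paused_s += dt
            if self.paused_s >= self.init_threshold:
                self.playing_s = 0.0  # initialise the continuous playback period
        return is_playing and self.playing_s >= self.process_threshold
```

Resetting the pause counter on every resumed playback is what makes only a *continuous* interruption initialise the playback period, matching the wording above.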
This sound processing method comprises: inputting a first sound signal sampled at a first sampling frequency; using a training model trained so as to output a second sound signal based on a folding noise of a frequency component higher than a first Nyquist frequency of the first sound signal to generate a third sound signal that is a frequency component higher than the first Nyquist frequency; and generating a fourth sound signal obtained by mixing the first sound signal and the third sound signal.
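Only the mixing step of the bandwidth-extension method above is sketched here: the patent generates the missing high band with a trained model, so a stand-in generator is used purely to make the add/mix arithmetic concrete. Nothing below reproduces the actual model, and the mixing gain is an arbitrary example.

```python
import numpy as np

def extend_bandwidth(x_low, generate_high, high_gain=0.3):
    """Mix a band-limited input with a generated high-band component."""
    x_high = generate_high(x_low)
    return x_low + high_gain * x_high

# Stand-in "model": rectification creates energy at harmonics above the
# original band (a crude classic trick, NOT the trained model of the patent).
fake_model = lambda x: np.abs(x) - np.mean(np.abs(x))

t = np.arange(1024) / 16000.0
x = np.sin(2 * np.pi * 440 * t)
y = extend_bandwidth(x, fake_model)
```

Swapping `fake_model` for a network trained to predict the folding-noise-derived component yields the structure described in the entry.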
This sound system comprises: a wearable-type sound output device that can be worn on the ear of a user; and a host device that controls the output sound output by the wearable-type sound output device. The wearable-type sound output device comprises: an exposure amount calculation unit that calculates the amount of sound exposure to which the user was subjected due to the output sound; and a communication unit that transmits, to the host device, the sound exposure amount calculated by the exposure amount calculation unit. The host device comprises a basic control unit which performs basic control related to the output of the output sound, and which outputs information based on the amount of sound exposure received from the wearable-type sound output device.
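A sound exposure amount like the one the wearable device above reports to the host can be sketched as an energy-equivalent dose. The 85 dB / 8 h criterion with a 3 dB exchange rate is a common occupational convention used here as an assumption; the patent does not specify the formula.

```python
# Hedged sketch: fraction of a daily allowed sound dose, equal-energy rule.

def exposure_dose(levels_db, dt_s, criterion_db=85.0, criterion_s=8 * 3600):
    """levels_db: per-interval sound levels; dt_s: interval length in seconds.
    Returns the fraction of the allowed dose (1.0 = criterion reached)."""
    energy = sum(10 ** (l / 10) * dt_s for l in levels_db)   # summed intensity-time
    allowed = 10 ** (criterion_db / 10) * criterion_s        # criterion intensity-time
    return energy / allowed
```

Under the equal-energy rule, every 3 dB increase halves the allowed listening time, so short loud passages can dominate the transmitted dose.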
This percussion detection device 1 comprises: a vibration propagation member 10 that has a contact part 11 contacting an object 100 and that deforms in accordance with vibration; a support base 20 that supports the vibration propagation member 10 and positions the vibration propagation member 10 between the object 100 and the support base 20; and two vibration detection sensors 30 that respectively detect deformation of two mutually different sites of the vibration propagation member 10. The two sites of the vibration propagation member 10 detected by the two vibration detection sensors 30 are arranged side by side in a direction intersecting the arrangement direction of the vibration propagation member 10 and the support base 20.
G10H 3/14 - Instruments in which the tones are generated by electromechanical means using mechanically resonant generators, e.g. strings or percussion instruments, the tones of which are picked up by electromechanical transducers, the electrical signals being further processed or amplified and then converted into sound waves by a loudspeaker or equivalent device, using mechanically excited vibrators with pick-up means
The keyboard instrument comprises: a drive mechanism including a hammer shank that is displaced in conjunction with key operation; a restriction member with which the hammer shank collides due to displacement; a first support member for supporting the restriction member; a first coil installed in the drive mechanism; a wiring substrate on which is installed a signal generation unit including a second coil that generates a magnetic field, the signal generation unit generating a detection signal corresponding to the distance between the first coil and the second coil; a second support member for supporting the wiring substrate; and a foundation member. The first support member and the second support member are fixed to the foundation member.
A mouthpiece according to an embodiment comprises: a body including a first space, a second space at a distance from the first space, a beak, and a table; and a first sensor for measuring the pressure in the second space. The body has a first opening part that connects the first space and the exterior and that is at least partially covered by a reed when the reed is attached to the table, and, on the outer surface of the beak, a second opening part that connects the second space and the exterior.
A keyboard instrument 100 is provided with a plurality of keys 10 including a wooden material. The plurality of keys 10 include: a plurality of natural tone keys 11, each having a first playing surface operated in order to produce a natural tone; and a plurality of accidental tone keys 12, each having a second playing surface operated in order to produce an accidental tone. For the first playing surface and the second playing surface, a colorimetric value by a specular component included (SCI) method in an L*a*b* color space satisfies condition 1: ΔE*ab ≤ 17 and/or condition 2: ΔH* < 15 and -1.7 < L*/c* < 1.7. The first playing surface and the second playing surface differ in terms of optical reflectance and/or surface roughness.
G10C 1/00 - General design of pianos, harpsichords, spinets or similar stringed musical instruments with one or more keyboards
G10H 1/34 - Switch arrangements, e.g. keyboards or mechanical switches, specially adapted for electrophonic musical instruments
This musical instrument comprises an exterior member 40 containing a woody material. The exterior member 40 includes a base surface F1 constituting the exterior, and a plurality of decorative portions 41 formed by recesses that are depressed relative to the base surface F1. The brightness of a sidewall surface 412 in each of the plurality of decorative portions 41 is less than the brightness of the base surface F1. Each of the plurality of decorative portions 41 is formed in an elongated shape along the direction of the Y-axis in the base surface F1 in a plan view of the base surface F1.
G10C 9/00 - Methods, tools or materials specially adapted for the manufacture or maintenance of musical instruments covered by this subclass
G10H 1/32 - Details of electrophonic musical instruments; Constructional details
88.
IMPACT DETECTION DEVICE, IMPACT DETECTION METHOD, AND PROGRAM
The present invention comprises: a first acquisition unit 101 which acquires moving image data capturing a user's swing over a plurality of frames and including sound recorded at the time of the swing; a second acquisition unit 102 which acquires reference motion data representing the time transition of the swing, to which an identifier indicating the time of impact in the swing is attached; an identification unit 104 which identifies, from the moving image data, a section including a frame corresponding to the timing indicated by the identifier in the reference motion data; and a determination unit 105 which determines whether a frame including a ball-hitting sound is present in that section and, when it is, determines that a swing with an impact is included in the moving image data.
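The determination step above can be sketched as a windowed check around the frame aligned with the impact marker. The per-frame energy representation, the window width, and the threshold are assumptions for illustration; the abstract only specifies looking for a ball-hitting sound within the identified section:

```python
def find_impact(frame_energies, marker_frame, window=3, energy_threshold=0.5):
    """Return True if a frame whose audio energy exceeds the threshold
    (a candidate ball-hitting sound) lies within +/- `window` frames of the
    frame aligned with the reference-motion impact marker."""
    start = max(0, marker_frame - window)
    end = min(len(frame_energies), marker_frame + window + 1)
    return any(e > energy_threshold for e in frame_energies[start:end])
```

A swing video whose marker-aligned section contains a loud frame would then be classified as containing an impact.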
This musical instrument is provided with an exterior member 80 that is an isotropic fiberboard. The exterior member 80 includes a foundation surface F1 including a first region R1 and a second region R2. The surface roughness of the first region R1 and the surface roughness of the second region R2 are different. The reflectivity of the first region R1 and the reflectivity of the second region R2 are different.
G10C 9/00 - Methods, tools or materials specially adapted for the manufacture or maintenance of musical instruments covered by this subclass
When a representative pitch of a second part is higher than a representative pitch of a first part, a second object is placed on a display area DA in the positive direction of a first direction with respect to a first object; when the representative pitch of the second part is lower than that of the first part, the second object is placed in the negative direction of the first direction with respect to the first object; and when the two representative pitches are the same, or when another object is already placed at the determined position, the second object is placed with respect to the first object in a second direction crossing the first direction on the display area.
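The placement rule above can be sketched as a small decision function on grid coordinates. Treating the first direction as the x-axis and the crossing second direction as the y-axis is an assumption for illustration, as is falling back to a single step in the second direction when the position is occupied:

```python
def place_second_object(first_pos, pitch1, pitch2, occupied):
    """Choose a position for the second object relative to the first object
    at `first_pos`, following the pitch-comparison rule of the abstract.
    `occupied` is a set of positions already holding other objects."""
    x, y = first_pos
    if pitch2 > pitch1:
        candidate = (x + 1, y)   # positive side of the first direction
    elif pitch2 < pitch1:
        candidate = (x - 1, y)   # negative side of the first direction
    else:
        candidate = (x, y + 1)   # equal pitches: shift in the second direction
    if candidate in occupied:    # position already taken: use the second direction
        candidate = (x, y + 1)
    return candidate
```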
Provided is a piezoelectric sensor comprising: a housing having an opening; a substrate which covers the opening and faces the housing; a sheet-like first piezoelectric element which is located in the housing and disposed so that a first surface faces the substrate; and a weight disposed so as to face a second surface of the first piezoelectric element, wherein the first piezoelectric element has a stretchable/contractible porous layer, and the weight can be displaced with respect to the substrate following the stretching/contraction of the porous layer.
G10H 3/18 - Instruments in which the tones are generated by electromechanical means using mechanical resonant generators, e.g. strings or percussive instruments, the tones of which are picked up by electromechanical transducers, the electrical signals then being processed or amplified and converted into sound waves by means of a loudspeaker or equivalent device, using mechanically excited vibrators with associated pick-ups, using strings, e.g. electric guitars
This filter setting method comprises the steps of: measuring an impulse response of a room in which a speaker is placed; extracting the late reverberation component of the measured impulse response; detecting the difference between the frequency-amplitude characteristic of the extracted late reverberation component and a predetermined target characteristic; generating a filter coefficient indicating a frequency response that amplifies or attenuates in accordance with the difference; and setting the filter coefficient in a filter used for processing sound signals to be supplied to the speaker.
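The steps above can be sketched in a few lines of NumPy. The boundary between early reflections and the late reverberation tail (`tail_start`), the FFT size, and the dB comparison are illustrative assumptions; the abstract does not specify how the reverberation component is extracted or how the difference is represented:

```python
import numpy as np

def design_correction_gains(impulse_response, fs, target_db,
                            tail_start=0.1, n_fft=1024):
    """Sketch of the filter-setting steps: isolate the late-reverberation
    tail of a measured room impulse response, compare its magnitude
    spectrum (in dB) with a target curve, and return per-bin linear gains
    that amplify or attenuate by the difference."""
    tail = impulse_response[int(tail_start * fs):]        # late-reverberation component
    spectrum = np.abs(np.fft.rfft(tail, n_fft))
    spectrum_db = 20.0 * np.log10(np.maximum(spectrum, 1e-12))
    diff_db = target_db - spectrum_db                     # positive: amplify, negative: attenuate
    return 10.0 ** (diff_db / 20.0)                       # linear gains for the correction filter
```

The returned gains could then be turned into a filter (e.g. by inverse FFT windowing) and applied to the signal feeding the speaker.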
A display method involves receiving an acoustic space and a target sound pressure distribution in the acoustic space, using a prescribed model as a basis to calculate a speaker or microphone placement distribution corresponding to the received target sound pressure distribution in the received acoustic space, and displaying the calculated placement distribution on a display unit.
A musical sound synthesis system 100 comprises: a control data acquisition unit 21 that acquires a time series X of control data indicating the condition of a target musical sound; a control vector generation unit 24 that generates a control vector V representing the feature of temporal change of timbre in response to an instruction from a user; a control vector processing unit 25 that generates a first parameter set Pn from the control vector V; and a musical sound synthesis unit 22 that generates a time series Z of acoustic data representing the acoustic feature quantity of the target musical sound by processing the time series X of the control data by a trained first generative model 30 including a plurality of basic layers and one or more intermediate layers and having learned the relation between the condition of the musical sound and the acoustic feature quantity of the musical sound. A first intermediate layer out of the one or more intermediate layers executes processing in which the first parameter set Pn is applied to data to be inputted to the first intermediate layer, and outputs the data after the application to the next layer.
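One plausible reading of "applying the first parameter set Pn to the data input to the first intermediate layer" is FiLM-style feature modulation, where a per-channel scale and shift derived from the control vector V condition the layer input. This interpretation is an assumption, not stated in the abstract:

```python
import numpy as np

def apply_parameter_set(x, param_set):
    """Sketch of intermediate-layer conditioning: `param_set` is a
    (gamma, beta) pair derived from the control vector V, each shaped
    like the channel dimension of the layer input `x`. The modulated
    activations are what gets forwarded to the next layer."""
    gamma, beta = param_set
    return gamma * x + beta
```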
This support component 6 comprises: a plurality of planar parts 30 that are aligned at intervals in a plate-thickness direction; a plurality of first peripheral-edge connecting parts 31 each of which connects two planar parts 30 adjacent to each other in the plate-thickness direction in a first range of the periphery direction of the planar parts 30 along the peripheral edges of the planar parts 30 as seen in the plate-thickness direction; and a plurality of second peripheral-edge connecting parts 32 each of which connects two planar parts 30 adjacent to each other in the plate-thickness direction in a second range of the periphery direction of the planar parts 30 along the peripheral edges of the planar parts 30 as seen in the plate-thickness direction. The length of each of the first peripheral-edge connecting parts 31 and the length of each of the second peripheral-edge connecting parts 32 in the periphery direction each are shorter than the length of the whole peripheral edge of each of the planar parts 30. The first peripheral-edge connecting parts 31 and the second peripheral-edge connecting parts 32 are alternately placed in the plate-thickness direction. As seen in the plate-thickness direction, each of the first peripheral-edge connecting parts 31 and a respective one of the second peripheral-edge connecting parts 32 are so arranged as to be continuous with each other in the periphery direction.
Provided is a data processing method comprising: acquiring, from a first trained model to which performance data is inputted, first sound control data including pitch information, note value information, and sound generation timing; inputting parameters corresponding to the first sound control data and first user-specified information to a second trained model; and acquiring second sound control data from the second trained model.
G10H 1/00 - Details of electrophonic musical instruments
G10G 3/04 - Recording music in notation form, e.g. recording the mechanical operation of a musical instrument, using electrical means
97.
SOUND PROCESSING METHOD, SOUND PROCESSING DEVICE, AND SOUND PROCESSING PROGRAM
This sound processing method: receives sound information including a sound signal of a sound source and position information of the sound source; applies, on the basis of the position information, a first localization process to the sound signal to localize a sound image of the direct sound of the sound source; applies, on the basis of the position information, a second localization process to the sound signal to localize a sound image of the indirect sound of the sound source; receives conditions regarding the sound source or the space; and, on the basis of those conditions, selects either an object-based process or a channel-based process with which to perform the second localization process.
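The final selection step can be sketched as a small decision function. The specific criteria used here (source count, whether the space changes over time) are illustrative assumptions; the abstract only states that the choice between the two processes depends on conditions of the sound source or the space:

```python
def choose_indirect_renderer(num_sources, room_is_dynamic, channel_threshold=16):
    """Sketch of the condition-based selection of the process used for the
    second (indirect-sound) localization."""
    if room_is_dynamic or num_sources <= channel_threshold:
        return "object-based"    # per-source rendering can track changing geometry
    return "channel-based"       # many static sources: mix once into fixed channels
```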
A training system for an acoustic model makes a variety of trainings easy to carry out by allowing the data used for training the acoustic model to be selected from a plurality of pieces of training data; it includes a first device that is used by a first user and connectable to a network, and a server that is connectable to the network. The first device uploads a plurality of sound waveforms to the server under the control of the first user, selects, from the plurality of sound waveforms already uploaded or to be uploaded, one or more sound waveforms as a first waveform set, and transmits, to the server, a first execution instruction of a first training job for an acoustic model for generating an acoustic feature amount. The server starts the execution of the first training job using the selected first waveform set on the basis of the first execution instruction from the first device, and provides, to the first device, a trained acoustic model trained by the first training job.
A method of displaying information related to an acoustic model established by training with a plurality of sound waveforms so as to generate acoustic characteristics, the method facilitating identification of the sound waveforms used for training the acoustic model. The method comprises: acquiring the distribution of any one of a plurality of characteristics of the sound waveforms used for training the acoustic model; and displaying information related to the distribution of that characteristic. The distribution of the characteristic may be obtained by analyzing the plurality of used sound waveforms.
G10L 25/48 - Speech or voice analysis techniques not restricted to a single one of the groups, specially adapted for particular use
G10H 1/00 - Details of electrophonic musical instruments
G10L 13/00 - Speech synthesis; Systems for generating speech from text
G10L 13/033 - Voice editing, e.g. manipulating the voice of the synthesiser
G10L 13/06 - Elementary speech units used in speech synthesisers; Concatenation rules
The present invention provides a training method for an acoustic model that generates an acoustic feature quantity, in which it is possible to select whether or not additional training of a trained acoustic model is enabled. In the acoustic model training method, a first acoustic model to which additional information has been linked is specified as a target of re-training using a sound waveform; whether or not re-training of the first acoustic model is permitted is determined on the basis of the additional information; and, if re-training is determined to be permitted, a re-trained second acoustic model is created by re-training the first acoustic model using an acoustic feature quantity of the sound waveform. The additional information may be a permission flag indicating permission for re-training of the acoustic model.