According to one embodiment, a sound signal processing method involves receiving a sound signal, performing vowel determination on the basis of the spectral shape of the sound signal during a prescribed first period of time or performing voice determination on the basis of the rate of change in the pitch of the sound signal during a prescribed second period of time, and adjusting the time characteristics or the frequency characteristics of the sound signal on the basis of the results of the vowel determination or the voice determination.
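The determine-then-adjust flow of this abstract can be sketched as follows. The spectral-centroid feature, the 300–3000 Hz "vowel" band, and the 0.5 attenuation factor are illustrative assumptions of mine, not details taken from the abstract:

```python
import numpy as np

def spectral_centroid(frame, sr):
    """Spectral-shape summary: amplitude-weighted mean frequency of the frame."""
    mag = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    return float(np.sum(freqs * mag) / (np.sum(mag) + 1e-12))

def is_vowel_like(frame, sr, lo=300.0, hi=3000.0):
    """Stand-in vowel determination: centroid falls in the formant-rich band."""
    return lo <= spectral_centroid(frame, sr) <= hi

def frame_gain(frame, sr):
    """Adjust the signal characteristics: attenuate frames judged non-vowel."""
    return 1.0 if is_vowel_like(frame, sr) else 0.5
```

A real implementation would use proper formant or pitch-trajectory features over the prescribed time periods; this only shows the decision feeding the adjustment.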
The present invention comprises: a detection unit that detects, as a creation motion for creating a creation target, a first musical performance motion that is a motion of musical performance by a user, and detects a second musical performance motion that is a motion of musical performance by the user and that has occurred after the first musical performance motion; a generation unit that, on the basis of the creation motion and as musical performance information from which the creation target can be created, generates first musical performance information based on the first musical performance motion and second musical performance information in which the first musical performance information has been changed on the basis of the second musical performance motion; and an output unit that outputs the second musical performance information.
Provided is a plate material 1 in which a first shear modulus in a plane including a plate thickness direction Z and a first direction Y orthogonal to the plate thickness direction Z is lower than a second shear modulus in a plane including the plate thickness direction Z and a second direction X orthogonal to the plate thickness direction Z and the first direction Y, and which has, in the inside thereof, a plurality of voids 2 arranged at intervals in a plane direction including the first direction Y and the second direction X.
This information processing device comprises: an acquisition unit that acquires performance information, which indicates performance of a user, and/or behavior-related information, which is information different from the performance information and indicates information related to the behavior of the user; an information processing unit that generates output information, which is non-performance information responding to the request of the user and indicates information relating to music, on the basis of the acquired performance information and/or behavior-related information; and an output processing unit that outputs the output information.
A keyboard device according to one embodiment of the present invention includes: a plurality of keys; an acquisition unit that acquires information for specifying a pressed key among the plurality of keys, and a pressing amount of the pressed key; and a sound signal generation unit that generates, if two or more keys are pressed and the pressing amounts of the two or more pressed keys each exceed a reference pressing amount, a sound signal on the basis of a relative relationship between the pressing amounts of the two or more keys, information for specifying the pressed keys, and the pressing amounts of the pressed keys.
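One plausible reading of the "relative relationship between the pressing amounts" can be sketched as a per-key weighting; the threshold value and the normalisation-by-sum rule are my assumptions, not claimed by the abstract:

```python
def mix_weights(press_amounts, reference=0.5):
    """Keys whose pressing amount exceeds the reference take part; each gets a
    weight derived from the relative relationship between pressing amounts."""
    active = {key: amt for key, amt in press_amounts.items() if amt > reference}
    if len(active) < 2:  # the described behaviour applies to two or more keys
        return {}
    total = sum(active.values())
    return {key: amt / total for key, amt in active.items()}
```

The weights would then scale the tone generated for each identified key.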
G10H 1/053 - Means for controlling the tone frequencies, e.g. attack or decay; Means for producing special musical effects, e.g. vibratos or glissandos by additional modulation during execution only
6.
MUSICAL INSTRUMENT SOUNDBOARD AND STRINGED MUSICAL INSTRUMENT
A musical instrument soundboard 1 according to one embodiment of the present disclosure comprises: a first plate material 11; a pair of second plate materials 12 disposed so as to face each other with the first plate material 11 interposed therebetween; and a pair of third plate materials 13 disposed so as to face each other with the pair of second plate materials 12 interposed therebetween. The third plate material 13 has stronger anisotropy than the first plate material 11 and the second plate material 12 with respect to a loss tangent. Where a direction which is parallel to the surface of the third plate material 13 and in which the loss tangent of the third plate material 13 is minimized is defined as an X direction, the loss tangent of the first plate material 11 in the X direction is larger than either the loss tangent of the second plate material 12 in the X direction or the loss tangent of the third plate material 13 in the X direction.
Provided is a guitar body structure 2 comprising a body 20 that includes a wooden part 30 composed of rosewood. As viewed from the thickness direction of the body 20, the wood grain direction of the wooden part 30 is inclined with respect to the longitudinal direction of a guitar neck 3.
A molded body 10 according to one embodiment of the present disclosure includes a plurality of wood pieces 1. The molded body 10 is anisotropic in terms of elastic modulus and strength in the trunk-axis direction of the wood used as a feedstock and in a direction orthogonal to the trunk-axis direction. Fibers 1a of the wood pieces 1 intermingle at the interfaces between the plurality of wood pieces 1.
A listening sound acquisition method according to the present invention is for a loudspeaker system that delivers spoken sound from a speaker to a listener via a loudspeaker transmission path that includes a sound collection means, an amplification means, and a sound emission means. The method involves acquiring the sound pressure of the spoken sound at an arbitrary location on the transmission path and finding the sound pressure at a listening location for the listener on the basis of the acquired sound pressure.
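Under a free-field assumption, finding the listening-location sound pressure from a pressure acquired elsewhere on the path reduces to inverse-distance spreading plus the net gain of the amplification stage. This is a minimal sketch; the abstract does not specify the propagation model:

```python
import math

def listener_spl(measured_spl_db, measured_dist_m, listener_dist_m,
                 chain_gain_db=0.0):
    """Estimate SPL at the listening location from an SPL acquired at an
    arbitrary point on the path: 1/r spreading plus net chain gain (dB)."""
    spreading_db = 20.0 * math.log10(measured_dist_m / listener_dist_m)
    return measured_spl_db + chain_gain_db + spreading_db
```

For example, 80 dB measured at 1 m predicts roughly 74 dB for a listener at 2 m.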
This performance input device includes an illumination-equipped operation area and an illumination-free operation area, both of which are played. The illumination-equipped operation area is configured to be lightable, and when it is played, its lighting mode is changed.
This noise reduction device (dummy coil 1) is attached to a musical instrument (electric guitar 7) having an electromagnetic pickup 72 for detecting a magnetic field change based on vibration of a string 71. The noise reduction device includes a coil 20, a core material 4 made of a soft magnetic material, and an adjustment member 3 for variably adjusting a relative position of the coil 20 and the core material 4. The core material 4 is not magnetized.
G10H 3/18 - Instruments in which the tones are generated by electromechanical means using mechanical resonant generators, e.g. strings or percussion instruments, the tones of which are picked up by electromechanical transducers, the electrical signals being further manipulated or amplified and subsequently converted to sound by a loudspeaker or equivalent device using mechanically actuated vibrators with pick-up means using strings, e.g. electric guitars
This speaker diaphragm comprises a base material having a resin matrix containing a thermoplastic resin as a main component and fibers dispersed in the resin matrix. The fibers have a plurality of loss regions in which the fibers are locally bent and broken, or a plurality of loss regions each having a diameter that is 5% to 50% larger than the average diameter of the fibers.
A signal generation method according to an embodiment comprises: acquiring intermediate feature data corresponding to a predetermined time step by providing, to a first trained model, a first parameter in a predetermined time range including ranges before and after the predetermined time step of sound control data including the first parameter and a second parameter for controlling a generated sound in a plurality of time steps corresponding to the passage of time; updating the intermediate feature data when the value of the first parameter has been changed; and generating a sound signal in accordance with data obtained by providing the second parameter and the intermediate feature data to a second trained model when a reproduction start instruction has been received.
G10H 7/08 - Instruments in which the tones are synthesised from a data store, e.g. computer organs by calculating functions or polynomial approximations to evaluate amplitudes at successive sample points of a tone waveform
G10H 1/00 - Details of electrophonic musical instruments
G10L 13/00 - Speech synthesis; Text to speech systems
14.
SIGNAL PROCESSING METHOD, SIGNAL PROCESSING SYSTEM, AND SPEAKER SYSTEM
According to the present invention, an acoustic processing system is provided with: a signal processing unit that utilizes a nonlinear model simulating a relationship in which an acoustic resistance of an acoustic port for acoustic radiation employing Helmholtz resonance depends on a volumetric flow velocity at the acoustic port, to determine an input voltage from target parameters including the volumetric flow velocity at the acoustic port; and a voltage supply unit that supplies the input voltage to a speaker device.
NATIONAL UNIVERSITY CORPORATION TOKAI NATIONAL HIGHER EDUCATION AND RESEARCH SYSTEM (Japan)
YAMASA CORPORATION (Japan)
Inventor
Ueno Yoshihito
Abstract
Provided is a nucleoside derivative or a salt thereof, which is represented by formula (1) or (2). (In formula (1) and formula (2), R1 denotes a hydrogen atom, a hydroxyl group, a hydroxyl group in which the hydrogen atom is substituted by an alkyl group or an alkenyl group, or a protected group. In formula (1) and formula (2), R2 and R4 nnR5R6 (n is 0 or 1, and R5 and R6 may be the same as, or different from, each other and each denote a hydrogen atom, a hydroxyl group, a protected hydroxyl group, a mercapto group, a protected mercapto group, a lower alkoxy group, a cyano-lower alkoxy group, an amino group or a substituted amino group; however, if n is 1, R5 and R6 cannot both be hydrogen atoms). R3 denotes NHR7 (R7 denotes a hydrogen atom, an alkyl group, an alkenyl group or a protecting group for an amino group), and B denotes a purin-9-yl group, a 2-oxo-pyrimidin-1-yl group, a substituted purin-9-yl group or a substituted 2-oxo-pyrimidin-1-yl group.)
C07H 19/067 - Pyrimidine radicals with ribosyl as the saccharide radical
A61K 31/712 - Nucleic acids or oligonucleotides having modified sugars, i.e. other than ribose or 2'-deoxyribose
A61K 31/7072 - Compounds having saccharide radicals and heterocyclic rings having nitrogen as a ring hetero atom, e.g. nucleosides, nucleotides containing six-membered rings with nitrogen as a ring hetero atom containing condensed or non-condensed pyrimidines having oxo groups directly attached to the pyrimidine ring, e.g. cytidine, cytidylic acid having two oxo groups directly attached to the pyrimidine ring, e.g. uridine, uridylic acid, thymidine, zidovudine
A61K 31/7125 - Nucleic acids or oligonucleotides having modified internucleoside linkage, i.e. other than 3'-5' phosphodiesters
A61P 43/00 - Drugs for specific purposes, not provided for in groups
C07H 19/073 - Pyrimidine radicals with 2-deoxyribosyl as the saccharide radical
This speaker position detection method comprises: acquiring speech of a speaker, by using a microphone; obtaining direction information of the speaker, on the basis of the acquired speech of the speaker; acquiring a face image of the speaker from an image acquired by a camera, on the basis of the direction information; and detecting position information, including height information of the speaker, on the basis of the acquired face image of the speaker.
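The height-from-face-image step can be sketched with pinhole-camera geometry: apparent face size gives distance, and the face's vertical offset in the image gives an elevation angle. All parameter names and the fixed real-face-size assumption are hypothetical:

```python
import math

def speaker_height_m(face_px, face_real_m, focal_px, face_center_y_px,
                     image_h_px, camera_h_m):
    """Pinhole sketch: distance from apparent face size, then speaker height
    from the elevation angle of the face centre relative to the optical axis."""
    distance_m = focal_px * face_real_m / face_px
    elevation = math.atan2(image_h_px / 2.0 - face_center_y_px, focal_px)
    return camera_h_m + distance_m * math.tan(elevation)
```

A face exactly on the optical axis yields the camera's own mounting height.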
H04N 23/611 - Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
G10K 11/175 - Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound
H04N 23/63 - Control of cameras or camera modules by using electronic viewfinders
A Helmholtz resonator 1A comprises a cavity portion 10 and a neck portion 20A which provides communication between an internal space 11 of the cavity portion 10 and an external space 31. In the neck portion 20A, the opening area of an open end on the external space 31 side is larger than the opening area of an open end on the internal space 11 side.
Provided is a sound processing method wherein: sound signals output from a plurality of input channels are received and mixed to generate a mixed sound signal; a spectrum diagram of the mixed sound signal or the sound signals of the plurality of input channels is displayed; from the mixed sound signal and the sound signal of each input channel among the plurality of input channels, a similarity degree in a prescribed frequency band is calculated; and, in the spectrum diagram, with regard to at least one of the plurality of input channels, a display mode in the prescribed frequency band is made to differ on the basis of the similarity degree.
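One plausible "similarity degree in a prescribed frequency band" is the cosine similarity of the magnitude spectra restricted to that band; the abstract does not name a specific measure, so this is an illustrative choice:

```python
import numpy as np

def band_similarity(mix, chan, sr, band=(200.0, 2000.0)):
    """Cosine similarity of magnitude spectra inside one frequency band —
    a candidate measure for varying the display mode of that band."""
    freqs = np.fft.rfftfreq(len(mix), 1.0 / sr)
    sel = (freqs >= band[0]) & (freqs <= band[1])
    a = np.abs(np.fft.rfft(mix))[sel]
    b = np.abs(np.fft.rfft(chan))[sel]
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

Channels dominating the band score near 1 and could be highlighted in the spectrum diagram.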
An electronic musical instrument 1 includes a pedal 12, a speed detection unit 16, a speed determination unit 4, and a reproduction unit 6. The speed detection unit 16 detects the moving speed of the pedal 12. The speed determination unit 4 determines a pedal-off speed when the operation of the pedal 12 is cancelled, on the basis of the moving speed detected by the speed detection unit 16. The reproduction unit 6 reproduces a pedal-off sound on the basis of the pedal-off speed determined by the speed determination unit 4.
An acoustic processing system 100 sets N control values Cy_1 to Cy_N in accordance with an instruction from a user, processes an acoustic feature quantity Q of a base component of a target sound and modulation data M corresponding to the N control values Cy_1 to Cy_N, and thereby generates an acoustic signal Z of the target sound in which N modulation components corresponding to the respective N control values Cy_1 to Cy_N have been added to the base component.
This text display method includes: acquiring melody data PD representing a plurality of sounds which have been put in order; acquiring text information that includes a word and a first symbol; and displaying a corresponding text item that corresponds to a first phoneme determined on the basis of the position of the first symbol in the word and phonemes constituting the word, the corresponding text item being displayed in association with a first sound included in the melody data PD and one or a plurality of sounds which succeed the first sound.
This parameter determination method is for determining a parameter of a filter that outputs a second sound signal in which a frequency characteristic is changed with respect to a first sound signal. The parameter determination method includes: calculating a first acoustic feature amount of a first sound signal; calculating a second acoustic feature amount of a second sound signal; and determining a parameter of a filter so that a difference in a specific acoustic feature amount related to audibility between the first acoustic feature amount and the second acoustic feature amount is no greater than a predetermined value.
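The determine-until-below-threshold step can be sketched with a single filter parameter. RMS level stands in for the unspecified audibility-related feature, and bisection for the unspecified search procedure — both are my assumptions:

```python
import numpy as np

def match_rms(first, second, tol=1e-3, max_iter=60):
    """Determine one filter parameter (a broadband gain) by bisection so the
    difference in an RMS feature between the signals is no greater than tol."""
    target = float(np.sqrt(np.mean(np.square(first))))
    lo, hi = 0.0, 10.0
    gain = (lo + hi) / 2.0
    for _ in range(max_iter):
        gain = (lo + hi) / 2.0
        rms = float(np.sqrt(np.mean(np.square(gain * second))))
        if abs(rms - target) <= tol:
            break
        if rms < target:
            lo = gain
        else:
            hi = gain
    return gain
```

A multi-band filter would repeat this per parameter, or use a proper optimiser.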
A detection device 20 comprises: a first actuator 21 that generates sound waves to be transmitted through a pipe 3 of a clarinet 1; a first sensor unit 31 that has a plurality of sensors 31a-31h for detecting the sound waves; and a first attachment part 41 with which the first actuator 21 and the first sensor unit 31 are disposed in the pipe 3. The plurality of sensors 31a-31h are arranged at intervals in the longitudinal direction of the pipe 3.
G10H 1/34 - Switch arrangements, e.g. keyboards or mechanical switches specially adapted for electrophonic musical instruments
G10H 1/053 - Means for controlling the tone frequencies, e.g. attack or decay; Means for producing special musical effects, e.g. vibratos or glissandos by additional modulation during execution only
24.
INFORMATION PROCESSING METHOD AND INFORMATION PROCESSING SYSTEM
This information processing system 100 comprises: an information acquisition unit 41 that acquires performance information pertaining to a performance on an electronic musical instrument 20 by a user U; a response acquisition unit 42 that acquires response information R of a natural language according to the performance information; and an operation control unit 43 that executes a notification operation for notifying the response information R by means of a guide character displayed on the display device 15.
This information processing method comprises: detecting a specific gesture of a first performer; generating related information related to the performance of the first performer or a second performer on the basis of the detected specific gesture; and transmitting the related information from a first terminal used by the first performer to a second terminal used by the second performer.
A content information processing method according to the present invention acquires first performance information related to a live performance of a first performer at a first venue, acquires first content information related to a video or sound at a second venue connected to the first venue through a network, generates second content information on the basis of the first performance information and the first content information, and generates a video or sound based on the second content information.
In this information processing method: a plurality of first signals, each related to the performance of one of the performers located at multiple sites, are transmitted from the performers' respective first terminals; a second terminal receives the plurality of first signals and acquires reference information serving as a reference for the performance; and, on the basis of the reference information, one of the plurality of first signals is prioritized for processing.
This performance sound generation method comprises acquiring image information of a first musical instrument and sound information changed by an environmental change of the first musical instrument, acquiring performance operation information of a user, rendering an image of the first musical instrument on the basis of the image information, and generating a performance sound of the first musical instrument on the basis of the performance operation information and the sound information.
G10H 7/02 - Instruments in which the tones are synthesised from a data store, e.g. computer organs in which amplitudes at successive sample points of a tone waveform are stored in one or more memories
G10H 1/00 - Details of electrophonic musical instruments
29.
INFORMATION PROCESSING METHOD, PRODUCT PRESENTATION METHOD, INFORMATION PROCESSING DEVICE, PRODUCT PRESENTATION DEVICE, AND PROGRAM
This information processing method comprises: preparing a converter that defines the correspondence between product information, including sensory information relating to taste, smell, or touch that is included in information about a product, and sound production-related parameters; and outputting sound information based on sound production-related parameters by inputting product information into the converter, or outputting product information by inputting sound production-related parameters based on acquired sound information into the converter.
This method for evaluating a sound or an image comprises: displaying three icons BJm in a plane (in a display area 25A) displayed on a display 25; outputting sound data SD corresponding to each icon BJm; editing the position of each icon BJm in the plane (in the display area 25A) in response to a first operation in which a user moves each icon BJm in the plane (in the display area 25A) using an input device 24; and associating the edited planar coordinates of each icon BJm with the sound data SD.
G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
This audio device includes a first amplifier and one or more audio modules. The first amplifier is connected to a head unit that outputs audio data and control data, and processes the audio data on the basis of the control data. The one or more audio modules are connected to the first amplifier via an audio bus capable of two-way communication via serial communication, and function as an extension function or an additional function for the first amplifier. The first amplifier is operated by first electric power supplied from an external power supply. Second electric power is supplied from the first amplifier via the audio bus to at least one of the one or more audio modules. The at least one audio module operates by means of the second electric power.
In this layout design assistance method for acoustic equipment, an acoustic space, the position of a sound source in the acoustic space, and the position of a noise source in the acoustic space are received, a layout distribution for acoustic equipment corresponding to the received position of the sound source and the position of the noise source in the received acoustic space is calculated, and the calculated layout distribution is displayed on a display.
G06F 30/13 - Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
G06F 30/20 - Design optimisation, verification or simulation
33.
SPEAKER SYSTEM, SPEAKER, CAMERA, AND SPEAKER ARRANGEMENT ASSISTANCE METHOD
This speaker system 100 includes a marker 10, a speaker 20, a camera 30, and a calculation unit 2. The camera 30 is arranged in a specific positional relationship with respect to the speaker 20. The camera 30 captures an image of an object in an imaging direction corresponding to the sound emission direction of the speaker 20. The calculation unit 2 calculates the relative position of the speaker 20 with respect to the marker 10 on the basis of an image of the marker 10 included in the image captured by the camera 30.
This sound conversion method includes: providing, to a trained model in which sound-quality designation data has been set, first acoustic-characteristic data of an input sound signal and pitch designation data based on the input sound signal; and acquiring, from the trained model, second acoustic-characteristic data for generating a conversion sound signal having sound quality which is in accordance with the sound-quality designation data. The trained model outputs the second acoustic-characteristic data by using the first acoustic-characteristic data and the pitch designation data. The pitch designation data indicates a pitch obtained by adjusting the pitch of the input sound signal on the basis of the relationship between: the pitch of a target sound signal which is the target for determining the amount of pitch adjustment; and the register which is associated with the sound-quality designation data.
A sound conversion method including: providing first acoustic feature data pertaining to an input sound signal and pitch designation data to a trained model; and acquiring, from the trained model, second acoustic feature data for generating a conversion sound signal having a prescribed sound quality and a pitch that is based on the pitch designation data. The trained model outputs the second acoustic feature data using the first acoustic feature data and the pitch designation data.
An information processing system according to an embodiment of the present invention comprises a first acquisition unit, a second acquisition unit, a current status information generation unit, and a support processing unit. The first acquisition unit acquires a captured image of a musical instrument. The second acquisition unit acquires information about a consultation matter relating to the functions of the captured musical instrument. On the basis of the captured image, the current status information generation unit generates current status information relating to the current status of the musical instrument, including information about at least one of the configuration of the musical instrument, the condition of the musical instrument, and the installation position of the musical instrument. The support processing unit generates and outputs support information for solving the consultation matter, on the basis of the information about the consultation matter and the current status information.
This performance information generation method comprises: acquiring first performance information related to the performance of a first performer and second performance information related to the performance of a second performer; generating third performance information obtained by integrating the acquired first performance information and second performance information; and outputting the generated third performance information.
A training device 10 includes a reference data acquisition unit 11, a performance data acquisition unit 12, a detection unit 13, an estimation unit 14, and a guidance unit 15. The reference data acquisition unit 11 acquires reference data indicating the time sequence of reference sounds. The performance data acquisition unit 12 acquires performance data indicating the time sequence of sounds expressed by the performance of a user. The detection unit 13 detects a mistake in the performance by comparing the reference data and the performance data. The estimation unit 14 estimates the cause of the mistake on the basis of a history of the performance. The guidance unit 15 determines the guidance content on the basis of the frequency and cause of the mistake.
A speaker device 21 comprises: a speaker unit 30; and a support mechanism 40 that elastically supports the speaker unit 30 with respect to an interior member 10. The support mechanism 40 includes a first fixed part 41 that is fixed to the interior member 10, a second fixed part 42 that is fixed to the interior member 10, and a support part 43 that elastically couples the speaker unit 30 to the first fixed part 41 and the second fixed part 42. The speaker unit 30 is located between the first fixed part 41 and the second fixed part 42 when viewed in the direction of a central axis Z, and a straight line passing through the first fixed part 41 and the second fixed part 42 is orthogonal to the central axis Z of the speaker unit 30.
This sound pickup setting method for setting the directivity of a microphone is characterized by including: setting a separation angle from the vertically upward direction with respect to the surface on which the microphone is installed; and setting the directivity of the microphone within a range not exceeding the set separation angle.
G10L 25/51 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination
41.
DISPLAY METHOD, DISPLAY PROCESSING DEVICE AND PROGRAM
This display method comprises: accepting a camera image; accepting area information indicating an area used for an event; and causing a boundary image on a certain horizontal plane to be displayed superimposed on the camera image, the boundary image corresponding to the area information.
This moving-image processing system: extracts a first reference portion R1, which includes the hands of a performer, from performance moving-images X showing a first keyboard-instrument performance performed by the performer (step Sa3); and generates synthesized moving-images Y by superimposing the first reference portion R1 onto a keyboard portion By of a second keyboard instrument (step Sa6).
This sound signal processing method comprises: accumulating relationships between environment information of past conferences and past sound processing parameters; acquiring environment information of a current conference; determining a current sound processing parameter corresponding to the acquired environment information of the current conference, on the basis of the accumulated relationships and a predetermined inference algorithm; receiving a sound signal; and applying sound processing to the received sound signal on the basis of the determined current sound processing parameter.
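The abstract leaves the inference algorithm unspecified; a nearest-neighbour lookup over the accumulated relationships is one minimal stand-in, sketched here with hypothetical environment vectors and parameter records:

```python
def infer_parameter(history, current_env):
    """Nearest-neighbour stand-in for the inference step: return the past
    parameter whose conference-environment vector is closest (squared
    Euclidean distance) to the current environment."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, parameter = min(history, key=lambda record: sq_dist(record[0], current_env))
    return parameter
```

A deployed system might instead fit a regression or classification model over the same accumulated pairs.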
This mask processing method involves: accumulating in advance an association between first participant information, which indicates a participant in a past event, and a face image or a voice; receiving second participant information indicating a participant in a current event; receiving a sensor signal including a face image captured by a camera or a voice acquired by a microphone; comparing the second participant information with the sensor signal on the basis of the accumulated association to determine whether the face image or the voice included in the sensor signal corresponds to a participant or a non-participant; and performing mask processing on the sensor signal if it corresponds to a non-participant.
G10K 11/175 - Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound
This information processing system comprises: a performance analysis unit that analyzes the state of a performance by a user in a real space; and a display control unit that displays, on a display device, a performing character who performs in a virtual space, and that controls the actions of the performing character in accordance with the state of the performance.
In the present invention, an information processing device 1 includes an acquisition unit 101, a first identification unit 102, a second identification unit 103, and a display control unit 104. The acquisition unit 101 acquires designation information indicating a prescribed posture which is designated by a user and is included in a first motion of the user and in a second motion which is a comparison target. The first identification unit 102 identifies a first frame corresponding to the designation information, from among a plurality of frames obtained by imaging the first motion. The second identification unit 103 identifies a second frame corresponding to the designation information, from among a plurality of frames obtained by imaging the second motion. The display control unit 104 reproduces, in parallel, the motion data corresponding to each of the first motion and the second motion such that the reproduction timing of the first frame and the second frame match.
Provided is a sound data processing method comprising: on the basis of performance information, acquiring a first sound data set that corresponds to any one of multiple virtual operators and includes first sound data and second sound data; and, on the basis of the first sound data set, generating a second sound data set that includes multiple pieces of third sound data respectively corresponding to multiple reproduction element positions. The first sound data and the second sound data each include sound information and sound production position information indicating sound production positions relative to a predetermined listening position. Generating the second sound data set comprises generating the third sound data on the basis of the relation between the sound production positions indicated by the sound production position information of each of the first sound data and the second sound data and the multiple reproduction element positions.
Provided is a sound source. An acquisition unit 401 acquires a tone setting instruction, a generation unit 402 generates half setting information J in accordance with the tone setting instruction, the half setting information J specifying a key damper half region or a key damper half point in producing sound with each of a plurality of keys of an electronic keyboard instrument 200, and an output unit 403 outputs the generated half setting information J.
An information processing system comprises an input data acquisition unit 41 and a finger position data generation unit 42. The input data acquisition unit 41 acquires input data C1 including image data G that represents an image of the hands of a user who plays a musical instrument, finger position data Y that represents each position of a plurality of analysis points on the hands, and performance data E that represents the performance of the musical instrument. The finger position data generation unit 42 processes the input data C1 by means of a generation model M to generate finger position data Z, in which the position of each of the plurality of analysis points in the finger position data Y is corrected according to the position of the hand represented by the image data G and the performance represented by the performance data E.
This sound volume setting method: accepts a correction instruction for a sound volume change characteristic with respect to a sound volume modifying operation; sets the sound volume at the time when the correction instruction is accepted as a reference value; generates a corrected sound volume change characteristic that is the sound volume change characteristic corrected based on the reference value; and stores the generated corrected sound volume change characteristic.
This wearable acoustic output device, which can be worn on the ear of a user, comprises a control processing unit which: measures a continuous playback period of an output sound output to the ear of the user; and, when the continuous playback period reaches a preset processing threshold value, performs specific processing on the output sound to allow the user to recognize an overload to the ear. The control processing unit initializes the continuous playback period when a playback interruption period in which the output of the output sound is continuously interrupted reaches a preset initialization threshold value.
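The continuous-playback bookkeeping described in this abstract can be modeled as a small state machine (a hypothetical sketch; the class name, method names, and threshold values are illustrative, not taken from the publication):

```python
class PlaybackMonitor:
    """Tracks continuous playback time, flags when it reaches a processing
    threshold, and initializes it after a sufficiently long interruption."""

    def __init__(self, processing_threshold: float, init_threshold: float):
        self.processing_threshold = processing_threshold
        self.init_threshold = init_threshold
        self.playback = 0.0      # continuous playback period (s)
        self.interruption = 0.0  # continuous interruption period (s)

    def tick(self, dt: float, playing: bool) -> bool:
        """Advance by dt seconds; return True when specific processing
        should be performed to warn of an overload to the ear."""
        if playing:
            self.playback += dt
            self.interruption = 0.0
        else:
            self.interruption += dt
            if self.interruption >= self.init_threshold:
                self.playback = 0.0  # initialize after a long pause
        return self.playback >= self.processing_threshold
```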
This sound processing method comprises: inputting a first sound signal sampled at a first sampling frequency; using a trained model, trained so as to output a second sound signal on the basis of the folding (aliasing) noise of frequency components higher than a first Nyquist frequency of the first sound signal, to generate a third sound signal consisting of frequency components higher than the first Nyquist frequency; and generating a fourth sound signal by mixing the first sound signal and the third sound signal.
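The final mixing step can be sketched as a brick-wall spectral combination (an illustrative assumption; the publication does not specify the mixing method, the trained model's output is stood in for by a given high-band signal, and both signals are assumed already resampled to a common rate `fs`):

```python
import numpy as np

def mix_bands(first_signal, third_signal, fs, nyquist_first):
    """Mix a band-limited input (components up to the first Nyquist
    frequency) with a generated high band (components above it) via a
    simple FFT brick-wall split."""
    n = len(first_signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    low = np.fft.rfft(first_signal)
    high = np.fft.rfft(third_signal)
    low[freqs > nyquist_first] = 0.0    # keep only the original band
    high[freqs <= nyquist_first] = 0.0  # keep only the generated band
    return np.fft.irfft(low + high, n)  # the "fourth sound signal"
```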
This sound system comprises: a wearable-type sound output device that can be worn on the ear of a user; and a host device that controls the output sound output by the wearable-type sound output device. The wearable-type sound output device comprises: an exposure amount calculation unit that calculates the amount of sound exposure to which the user was subjected due to the output sound; and a communication unit that transmits, to the host device, the sound exposure amount calculated by the exposure amount calculation unit. The host device comprises a basic control unit which performs basic control related to the output of the output sound, and which outputs information based on the amount of sound exposure received from the wearable-type sound output device.
This percussion detection device 1 comprises: a vibration propagation member 10 that has a contact part 11 contacting an object 100 and that deforms in accordance with vibration; a support base 20 that supports the vibration propagation member 10 and positions the vibration propagation member 10 between the object 100 and the support base 20; and two vibration detection sensors 30 that respectively detect deformation of two mutually different sites of the vibration propagation member 10. The two sites of the vibration propagation member 10 detected by the two vibration detection sensors 30 are arranged side by side in a direction intersecting the arrangement direction of the vibration propagation member 10 and the support base 20.
G10H 3/14 - Instruments in which the tones are generated by electromechanical means using mechanical resonant generators, e.g. strings or percussion instruments, the tones of which are picked up by electromechanical transducers, the electrical signals being further manipulated or amplified and subsequently converted to sound by a loudspeaker or equivalent device using mechanically actuated vibrators with pick-up means
The keyboard instrument comprises: a drive mechanism including a hammer shank that is displaced in conjunction with key operation; a restriction member with which the hammer shank collides due to displacement; a first support member for supporting the restriction member; a first coil installed in the drive mechanism; a wiring substrate on which is installed a signal generation unit including a second coil that generates a magnetic field, the signal generation unit generating a detection signal corresponding to the distance between the first coil and the second coil; a second support member for supporting the wiring substrate; and a foundation member. The first support member and the second support member are fixed to the foundation member.
A mouthpiece according to an embodiment comprises: a body including a first space, a second space at a distance from the first space, a beak, and a table; and a first sensor for measuring the pressure in the second space. The body has a first opening part that connects the first space and the exterior and that is at least partially covered by a reed when the reed is attached to the table, and, on the outer surface of the beak, a second opening part that connects the second space and the exterior.
A keyboard instrument 100 is provided with a plurality of keys 10 including a wooden material. The plurality of keys 10 include: a plurality of natural tone keys 11, each having a first playing surface operated in order to produce a natural tone; and a plurality of accidental tone keys 12, each having a second playing surface operated in order to produce an accidental tone. In the first playing surface and the second playing surface, a colorimetric value by a specular component included (SCI) method in an L*a*b* color space satisfies condition 1: ΔE*ab ≤ 17 and/or condition 2: ΔH* < 15 and -1.7 < L*/c* < 1.7. The first playing surface and the second playing surface differ in terms of optical reflectance and/or surface roughness.
This musical instrument comprises an exterior member 40 containing a woody material. The exterior member 40 includes a base surface F1 constituting the exterior, and a plurality of decorative portions 41 formed by recesses that are depressed relative to the base surface F1. The brightness of a sidewall surface 412 in each of the plurality of decorative portions 41 is less than the brightness of the base surface F1. Each of the plurality of decorative portions 41 is formed in an elongated shape along the direction of the Y-axis in the base surface F1 in a plan view of the base surface F1.
The present invention comprises: a first acquisition unit 101 which acquires moving image data that includes sound recorded at the time of a swing of a user and is obtained by capturing the swing in a plurality of frames; a second acquisition unit 102 which acquires reference motion data that represents a time transition of the swing and has an identifier attached thereto, the identifier indicating the time of impact in the swing; an identification unit 104 which identifies, from the moving image data, a section including a frame corresponding to the timing indicated by the identifier in the reference motion data; and a determination unit 105 which determines whether a frame including a hitting ball sound is present in the section and, when it is determined that such a frame is present in the section, determines that a swing with an impact is included in the moving image data.
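The determination step can be approximated as an energy test over the identified section (the energy-threshold criterion and function name are our illustrative stand-ins for hitting-ball-sound detection, which the publication does not specify):

```python
def has_impact(frame_energies, section, threshold):
    """Return True when a frame within `section` (inclusive start and end
    frame indices) contains a hitting-ball sound, approximated here by
    the frame's audio energy exceeding a threshold."""
    start, end = section
    return any(e >= threshold for e in frame_energies[start:end + 1])
```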
This musical instrument is provided with an exterior member 80 that is an isotropic fiberboard. The exterior member 80 includes a foundation surface F1 including a first region R1 and a second region R2. The surface roughness of the first region R1 and the surface roughness of the second region R2 are different. The reflectivity of the first region R1 and the reflectivity of the second region R2 are different.
When a representative pitch of a second part is higher than a representative pitch of a first part, a second object is placed on a display area DA in the positive direction of a first direction with respect to a first object. When the representative pitch of the second part is lower than the representative pitch of the first part, the second object is placed in the negative direction of the first direction with respect to the first object. When the representative pitch of the second part is the same as the representative pitch of the first part, or when another object is already placed at a determined position, the second object is placed, with respect to the first object, in a second direction crossing the first direction on the display area.
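The placement rule reads directly as a small decision function (the axis conventions, unit step size, and function name are illustrative assumptions):

```python
def place_object(rep_pitch_first, rep_pitch_second, first_pos, occupied):
    """Decide where to place the second part's object relative to the
    first part's object: x models the first direction, y the second."""
    x, y = first_pos
    if rep_pitch_second > rep_pitch_first:
        candidate = (x + 1, y)  # positive first direction
    elif rep_pitch_second < rep_pitch_first:
        candidate = (x - 1, y)  # negative first direction
    else:
        candidate = (x, y + 1)  # same pitch: second direction
    if candidate in occupied:   # determined position already taken
        candidate = (x, y + 1)
    return candidate
```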
Provided is a piezoelectric sensor comprising: a housing having an opening; a substrate which covers the opening and faces the housing; a sheet-like first piezoelectric element which is located in the housing and disposed so that a first surface faces the substrate; and a weight disposed so as to face a second surface of the first piezoelectric element, wherein the first piezoelectric element has a stretchable/contractible porous layer, and the weight can be displaced with respect to the substrate following the stretching/contraction of the porous layer.
G10H 3/18 - Instruments in which the tones are generated by electromechanical means using mechanical resonant generators, e.g. strings or percussion instruments, the tones of which are picked up by electromechanical transducers, the electrical signals being further manipulated or amplified and subsequently converted to sound by a loudspeaker or equivalent device using mechanically actuated vibrators with pick-up means using strings, e.g. electric guitars
This filter setting method comprises the steps of: measuring an impulse response of a room in which a speaker is placed; extracting the rear reverberation component of the measured impulse response; detecting the difference between the frequency-amplitude characteristic of the extracted rear reverberation component and a predetermined target characteristic; generating a filter coefficient indicating a frequency response having an amplification or attenuation in accordance with the difference; and setting the filter coefficient in a filter used for processing sound signals to be supplied to the speaker.
A display method involves receiving an acoustic space and a target sound pressure distribution in the acoustic space, using a prescribed model as a basis to calculate a speaker or microphone placement distribution corresponding to the received target sound pressure distribution in the received acoustic space, and displaying the calculated placement distribution on a display unit.
A musical sound synthesis system 100 comprises: a control data acquisition unit 21 that acquires a time series X of control data indicating the condition of a target musical sound; a control vector generation unit 24 that generates a control vector V representing the feature of temporal change of timbre in response to an instruction from a user; a control vector processing unit 25 that generates a first parameter set Pn from the control vector V; and a musical sound synthesis unit 22 that generates a time series Z of acoustic data representing the acoustic feature quantity of the target musical sound by processing the time series X of the control data by a trained first generative model 30 including a plurality of basic layers and one or more intermediate layers and having learned the relation between the condition of the musical sound and the acoustic feature quantity of the musical sound. A first intermediate layer out of the one or more intermediate layers executes processing in which the first parameter set Pn is applied to data to be inputted to the first intermediate layer, and outputs the data after the application to the next layer.
This support component 6 comprises: a plurality of planar parts 30 that are aligned at intervals in a plate-thickness direction; a plurality of first peripheral-edge connecting parts 31 each of which connects two planar parts 30 adjacent to each other in the plate-thickness direction in a first range of the periphery direction of the planar parts 30 along the peripheral edges of the planar parts 30 as seen in the plate-thickness direction; and a plurality of second peripheral-edge connecting parts 32 each of which connects two planar parts 30 adjacent to each other in the plate-thickness direction in a second range of the periphery direction of the planar parts 30 along the peripheral edges of the planar parts 30 as seen in the plate-thickness direction. The length of each of the first peripheral-edge connecting parts 31 and the length of each of the second peripheral-edge connecting parts 32 in the periphery direction each are shorter than the length of the whole peripheral edge of each of the planar parts 30. The first peripheral-edge connecting parts 31 and the second peripheral-edge connecting parts 32 are alternately placed in the plate-thickness direction. As seen in the plate-thickness direction, each of the first peripheral-edge connecting parts 31 and a respective one of the second peripheral-edge connecting parts 32 are so arranged as to be continuous with each other in the periphery direction.
Provided is a data processing method comprising: acquiring, from a first trained model to which performance data is inputted, first sound control data including pitch information, note value information, and sound generation timing; inputting parameters corresponding to the first sound control data and first user-specified information to a second trained model; and acquiring second sound control data from the second trained model.
This sound processing method receives sound information including a sound signal of a sound source and position information of the sound source, applies a first localization process to the sound signal of the sound source to localize a sound image of the direct sound of the sound source on the basis of the position information of the sound source, applies a second localization process to the sound signal of the sound source to localize a sound image of the indirect sound of the sound source on the basis of the position information of the sound source, receives conditions regarding the sound source or space, selects, on the basis of the conditions, either an object-based process or a channel-based process, and performs the second localization process.
A training system for an acoustic model makes various trainings easy to implement by allowing the data used for training the acoustic model to be selected from a plurality of pieces of training data. The system includes a first device that is used by a first user and is connectable to a network, and a server that is connectable to the network. The first device uploads a plurality of sound waveforms to the server under the control of the first user, selects, from the plurality of sound waveforms that are already uploaded or to be uploaded, one or more sound waveforms as a first waveform set, and transmits, to the server, a first execution instruction of a first training job for an acoustic model for generating an acoustic feature amount. The server starts the execution of the first training job using the selected first waveform set on the basis of the first execution instruction from the first device, and provides, to the first device, a trained acoustic model trained by the first training job.
A method of displaying information related to an acoustic model established by being trained using a plurality of sound waveforms so as to generate acoustic characteristics, the method facilitating identification of the sound waveforms used for training the acoustic model. The method comprises: acquiring the distribution of any one of a plurality of characteristics of the plurality of sound waveforms used for training the acoustic model; and displaying information related to the distribution of the characteristic. The distribution of the characteristic may be the distribution of a characteristic obtained by analyzing the plurality of used sound waveforms.
The present invention provides an acoustic model training method in which it is possible to select whether or not to enable additional training of a trained acoustic model, said acoustic model training method generating an acoustic feature quantity. In the acoustic model training method, a first acoustic model to which additional information has been linked is specified as a target of re-training using a sound waveform, whether or not the re-training of the first acoustic model is possible is determined on the basis of the additional information, and if it is determined that the re-training is possible, a re-trained second acoustic model is created by using an acoustic feature quantity of a sound waveform to re-train the first acoustic model. The additional information may be a permission flag indicating permission for re-training of the acoustic model.
The present invention is an acoustic apparatus including: a cylindrical magnet 2; a bottom yoke 3 that adjoins one flat surface of the magnet 2; a pole piece 4 that adjoins the flat surface of the magnet 2 on the side opposite the bottom yoke 3; and a ring yoke 5 positioned so as to surround the circumferential surface of the magnet 2. At least one of the bottom yoke 3, the pole piece 4, and the ring yoke 5 is a compression molded body obtained by compression molding a soft magnetic powder.
According to one embodiment, provided is a musical composition creation method that comprises: extracting, from an input content, feature information included in the content; determining at least one attribute corresponding to the extracted feature information from among a plurality of attributes; determining an accompaniment pattern and a chord progression pattern corresponding to the determined attribute; and creating a musical composition based on the determined accompaniment pattern and chord progression pattern.
An output control device according to one aspect of the present disclosure comprises: a slope determination unit which determines the slope of amplitude variation in a sound signal of which the amplitude varies along the time axis; a setting unit which sets a time constant on the basis of the slope determined by the slope determination unit; and a control unit which controls an output value pertaining to the sound signal on the basis of the time constant set by the setting unit.
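The slope-dependent time constant in this abstract can be illustrated with a first-order smoother whose coefficient switches with the direction of amplitude change (a common realization offered as a sketch; the publication does not prescribe this specific filter, and the names and values are ours):

```python
def smooth_output(samples, attack_tc, release_tc):
    """One-pole smoother whose time constant depends on the slope of the
    amplitude variation: a short (attack) constant while the amplitude
    rises, a longer (release) constant while it falls."""
    out = 0.0
    result = []
    for x in samples:
        tc = attack_tc if x > out else release_tc  # slope picks the constant
        out += (x - out) / tc                      # first-order update
        result.append(out)
    return result
```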
This information processing device generates a plurality of pieces of output sound data used in each of a plurality of sound output devices. A data acquisition unit acquires input sound data. An information acquisition unit acquires at least one of first information related to the attribute of the input sound data and second information related to the plurality of sound output devices. A parameter determination unit determines, on the basis of at least one of the first information and the second information, a parameter used for acoustic processing that gives an acoustic effect to the input sound data. An output sound generation unit uses the parameter to apply the acoustic processing to the input sound data, thereby generating output sound data that is to be used in one sound output device. A data transmission control unit transmits the output sound data to the one sound output device via a network.
B60R 11/02 - Arrangements for holding or mounting articles, not otherwise provided for, for radio sets, television sets, telephones, or the like; Arrangement of controls thereof
This musical playing system 100 comprises: a plurality of keys K; a plurality of light-emitting units E arranged along the plurality of keys K; and a guidance control unit that, over time, varies the emission of light by one or more light-emitting units E from among the plurality of light-emitting units E, thereby issuing guidance indicating keys to be operated from among the plurality of keys K.
A61P 25/28 - Drugs for disorders of the nervous system for treating neurodegenerative disorders of the central nervous system, e.g. nootropic agents, cognition enhancers, drugs for treating Alzheimer's disease or other forms of dementia
A61K 31/7072 - Compounds having saccharide radicals and heterocyclic rings having nitrogen as a ring hetero atom, e.g. nucleosides, nucleotides containing six-membered rings with nitrogen as a ring hetero atom containing condensed or non-condensed pyrimidines having oxo groups directly attached to the pyrimidine ring, e.g. cytidine, cytidylic acid having two oxo groups directly attached to the pyrimidine ring, e.g. uridine, uridylic acid, thymidine, zidovudine
78.
MUSICAL INSTRUMENT IDENTIFYING METHOD, MUSICAL INSTRUMENT IDENTIFYING DEVICE, AND MUSICAL INSTRUMENT IDENTIFYING PROGRAM
TOKYO METROPOLITAN PUBLIC UNIVERSITY CORPORATION (Japan)
Inventor
Mizuno Yoshifumi
Kondo Kazunobu
Takahashi Yu
Shiota Sayaka
Shiroma Yuki
Abstract
This musical instrument identifying method comprises: calculating an acoustic feature amount on the basis of a sound signal; calculating, on the basis of the acoustic feature amount, an image feature amount for use in an image recognition technology; and outputting information for identifying a musical instrument from the sound signal by using a trained model in which the sound signal and the information for identifying the musical instrument correspond to each other, the model being obtained through training using the image feature amount as an input and the information for identifying the musical instrument as an output.
G10L 25/51 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination
G10G 3/04 - Recording music in notation form, e.g. recording the mechanical operation of a musical instrument using electrical means
G10L 25/30 - Speech or voice analysis techniques not restricted to a single one of groups characterised by the analysis technique using neural networks
This air-core coil has a toroidal core 10a of a non-magnetic material or a weakly magnetic material and a conductor wire wound around the core 10a. The core 10a is an extended toroidal core.
An automatic player piano 100 executes playing by driving keys 1 on the basis of playing information MP, and includes: a setting unit 11 that sets an adjustment time AT for adjusting misalignment in sound production timing due to an action delay time from receiving a playing instruction for one sound specified in the playing information MP to producing the one sound; and a converting unit 12 that converts the intensity of each sound specified in the playing information MP in accordance with the adjustment time AT set by the setting unit 11.
This registering device acquires identification information that can be acquired at a specific location, acquires position information representing a position of the registering device, and transmits registration data including the identification information and the position information. An information system: receives the registration data from the registering device; saves reference data corresponding to the identification information and the position information included in the registration data in a storage device; receives, from a terminal device, authentication data including identification information acquired by the terminal device at a specific location, and position information representing a position of the terminal device; and approves provision of an information service to the terminal device on the condition that authentication processing for comparing the reference data and the authentication data is successful.
This pickup device 7 comprises: a bobbin 20 having two plates 21 that are disposed at a distance from each other in the plate thickness direction and that extend in a first direction orthogonal to the plate thickness direction, and a plurality of magnets 22 that are lined up in the first direction and that extend in the plate thickness direction between the two plates 21 and attach to the two plates 21; and a coil 30 wound around the plurality of magnets 22 between the two plates 21. The two ends of the magnets 22 with respect to a second direction of the plates 21 orthogonal to the plate thickness direction and the first direction face the coil 30 in the second direction. The bobbin 20 has a reinforcing part 24 connecting the two plates 21 to each other. The reinforcing part 24 is positioned on the outer side, with respect to the first direction, relative to the first-direction position of a portion, of a magnet 22A positioned at an end in the first direction, that faces the coil 30 in the second direction.
G10H 3/18 - Instruments in which the tones are generated by electromechanical means using mechanical resonant generators, e.g. strings or percussion instruments, the tones of which are picked up by electromechanical transducers, the electrical signals being further manipulated or amplified and subsequently converted to sound by a loudspeaker or equivalent device using mechanically actuated vibrators with pick-up means using strings, e.g. electric guitars
G10H 1/00 - Details of electrophonic musical instruments
83.
SOUND PROCESSING METHOD, SOUND PROCESSING DEVICE, AND PROGRAM
This sound processing method involves: arranging, in a virtual space, objects of a plurality of performers and a plurality of sound volume adjustment interfaces corresponding to the objects of the plurality of performers; receiving a plurality of sound signals corresponding to the plurality of performers; accepting, from a user, sound volume adjustment parameters for the plurality of performers that correspond to the plurality of sound volume adjustment interfaces; obtaining, using a trained model obtained by learning relationships between each of the plurality of sound signals corresponding to the performers and the sound volume adjustment parameter corresponding to that sound signal, sound volume adjustment parameters corresponding to the plurality of performers; and adjusting and mixing the sound volumes of the plurality of sound signals on the basis of the sound volume adjustment parameters obtained by the trained model.
A processing device according to the present invention includes: a touch panel that has a display surface displaying an icon related to a processing parameter and that detects changes in electrostatic capacitance; a physical operator 3 that is disposed on the display surface and that, as a result of a user operation, changes the electrostatic capacitance detected on the touch panel; a control unit that changes the processing parameter in accordance with changes in electrostatic capacitance based on operation of the physical operator; and a function selection unit that, in response to an input to the touch panel accompanying a change in electrostatic capacitance caused by an operation on the physical operator, selects the type of processing parameter and an icon corresponding to said processing parameter. The physical operator receives a user operation corresponding to the processing parameter selected by the function selection unit.
This information analysis system 100 comprises: a beat point estimation unit 21 for estimating a plurality of beat points B by an estimation process performed on an acoustic signal A; a beat point editing unit 24 for moving, on a time axis in accordance with an instruction from a user, a target beat point selected by the user from among the plurality of beat points B and one or more adjacent beat points located around the target beat point from among the plurality of beat points B; and an update processing unit 25 for updating the estimation process in accordance with the movement of the target beat point and the one or more adjacent beat points. The beat point estimation unit 21 re-estimates the plurality of beat points B by executing the updated estimation process on the acoustic signal A.
G10G 3/04 - Recording music in notation form, e.g. recording the mechanical operation of a musical instrument using electrical means
G10L 25/51 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination
86.
INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND PROGRAM
An information processing system 100A comprises: an acquisition unit 31 that acquires performance data X1 which specifies, in chronological order, a plurality of sound generation periods corresponding respectively to a plurality of sounds, and that acquires control data Y1 which represents control value transitions that are temporal changes in control values related to the plurality of sounds; a first adjustment unit 321 that moves each of the plurality of sound generation periods on a time axis; and a second adjustment unit 322 that moves a target point in the control value transition on the time axis according to the movement of the end point closest to the target point among end points in each of the plurality of sound generation periods before the movement by the first adjustment unit 321.
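The second adjustment described in this abstract can be sketched as follows (a minimal sketch assuming scalar times on the time axis; the function and parameter names are ours, not from the publication):

```python
def shift_target_point(target_time, period_before, period_after):
    """Move a target point of a control-value transition by the same
    amount as the nearest end point (start or end) of a sound generation
    period moved. `period_before`/`period_after` are (start, end) pairs
    before and after the first adjustment."""
    start_b, end_b = period_before
    start_a, end_a = period_after
    # pick the end point of the pre-move period closest to the target
    if abs(target_time - start_b) <= abs(target_time - end_b):
        delta = start_a - start_b
    else:
        delta = end_a - end_b
    return target_time + delta
```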
Provided is a model data control method comprising: receiving first model data on a player and second model data on a musical instrument; receiving calibration data relating to a size of the player or the musical instrument; receiving motion data on the player; using the calibration data to correct a size of the first model data or the second model data; rendering the first model data or the second model data after the correction; and using the motion data to control the first model data.
This parameter estimation method involves receiving first and second sound signals which have been signal-processed and mixed, processing the first and second sound signals by using an encoder, and generating a first alternative representation that corresponds to a feature amount of the first sound signal and a second alternative representation that corresponds to a feature amount of the second sound signal, wherein the encoder is trained so that a mixed alternative representation, which can be obtained by mixing the first alternative representation and the second alternative representation, corresponds to a feature amount of a mixed sound signal in which the first sound signal is mixed with the second sound signal. The method further: receives reference data that indicates a reference mixed sound having target characteristics; selects a first target alternative representation and a second target alternative representation by using a mixed reference alternative representation corresponding to the reference data; estimates a first parameter used in the signal processing for the first sound signal by using the first alternative representation and the first target alternative representation; and estimates a second parameter used in the signal processing for the second sound signal by using the second alternative representation and the second target alternative representation.
A performance motion estimation method according to an embodiment of the present invention receives a sound signal related to the sound of a musical instrument, receives a sensor signal from an auxiliary sensor attached to the musical instrument, and estimates a performance motion of the performer on the basis of the sound signal and the sensor signal.
This tweeter diaphragm is constituted by a high-elasticity sheet having anisotropy. In a vibration profile measured with a laser Doppler vibrometer over a frequency band from 20 Hz to 40 kHz inclusive, no peak caused by divided vibration is detected in the band from 20 Hz to 20 kHz inclusive, while a plurality of peaks caused by divided vibration are detected in the band greater than 20 kHz and equal to or less than 40 kHz.
A cradle 100A which is a contact operation device comprises: an arm 111 having, at the distal end thereof, a contact operation part 114 that contacts a touch panel 205; and a support part 120 supporting the arm 111, said support part 120 including a first support mechanism which causes the arm 111 to move such that the contact operation part 114 moves in the horizontal direction and a second support mechanism which causes the arm 111 to move such that the contact operation part 114 moves in the vertical direction.
A pickup 10 comprises: a base plate 3; a first bobbin 21A and a second bobbin 21B that are disposed in parallel on a first surface of the base plate 3; a first pole piece 5A that is inserted in the first bobbin 21A and penetrates the base plate 3; a second pole piece 5B that is inserted in the second bobbin 21B and penetrates the base plate 3; and a magnet 4 that is disposed on a second surface of the base plate 3, on the reverse side from the first surface, and is in contact with the first pole piece 5A and the second pole piece 5B.
G10H 3/18 - Instruments in which the tones are generated by electromechanical means using mechanical resonant generators, e.g. strings or percussion instruments, the tones of which are picked up by electromechanical transducers, the electrical signals being further manipulated or amplified and subsequently converted to sound by a loudspeaker or equivalent device using mechanically actuated vibrators with pick-up means using strings, e.g. electric guitars
93.
SOUND CONTROL DEVICE, METHOD FOR CONTROLLING SAID DEVICE, PROGRAM, AND ELECTRONIC MUSICAL INSTRUMENT
A sound control device is provided. An acquisition unit 31 acquires a performance signal, and a determination unit 32 determines a note-on and a note-off on the basis of the performance signal. A specifying unit 34 specifies, from lyrics data in which a plurality of syllables to be pronounced are arranged in chronological order, a syllable corresponding to the timing at which the note-on was determined. An instruction unit 36 instructs the start of pronunciation of the specified syllable at a timing corresponding to the note-on, and instructs pronunciation of some of the phonemes constituting the specified syllable at a timing corresponding to the note-off.
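The note-on/note-off behaviour described above can be sketched as a small sequencer: each note-on advances through the lyric syllables and starts the next one, and each note-off sounds only a trailing subset of that syllable's phonemes. The data format (pairs of onset and trailing phonemes) is an illustrative assumption; the abstract does not specify how lyrics data is encoded.

```python
class SyllableSequencer:
    """Toy sketch of the specifying unit and instruction unit.
    Each syllable is a (full_syllable, trailing_phonemes) pair."""
    def __init__(self, syllables):
        self.syllables = syllables
        self.index = -1

    def note_on(self):
        # Specify the next syllable in chronological order and
        # instruct the start of its pronunciation.
        self.index += 1
        full, _ = self.syllables[self.index]
        return full

    def note_off(self):
        # Instruct pronunciation of only some of the phonemes
        # constituting the current syllable (the trailing part).
        _, trailing = self.syllables[self.index]
        return trailing

seq = SyllableSequencer([("ka", "a"), ("ze", "e")])
print(seq.note_on())   # -> ka
print(seq.note_off())  # -> a
print(seq.note_on())   # -> ze
```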
This signal generation unit includes a driving surface on which a first driving region, a second driving region, and a third driving region are sequentially arranged in a first direction. A first detection coil is installed on a first movable member with its winding axis extending in the first direction, and moves, together with the first movable member, in a second direction intersecting the first direction between the first and second driving regions in plan view. A second detection coil is installed on a second movable member with its winding axis extending in the first direction, and moves, together with the second movable member, in the second direction between the second and third driving regions in plan view. The signal generation unit generates a magnetic field over the first and second driving regions during a first driving period, thereby generating a first detection signal corresponding to the distance between the first detection coil and the driving surface, and generates a magnetic field over the second and third driving regions during a second driving period different from the first driving period, thereby generating a second detection signal corresponding to the distance between the second detection coil and the driving surface.
This information processing system acquires a first sound field parameter representing a feature of a first sound field and a second sound field parameter representing a feature of a second sound field having different acoustic characteristics from the first sound field, and, using the first sound field parameter and the second sound field parameter, generates a third sound field parameter representing a feature of a third sound field having different acoustic characteristics from the first sound field and the second sound field.
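The abstract does not specify how the third parameter set is generated; one plausible sketch is linear interpolation between the two acquired parameter sets. The parameter names (reverberation time, direct-sound ratio) and the blending weight below are illustrative assumptions.

```python
def blend_sound_field(params_a, params_b, weight=0.5):
    """Generate a third sound-field parameter set by linearly
    interpolating two parameter sets, key by key.
    Interpolation is one assumed generation method, not the
    method claimed in the abstract."""
    return {k: (1 - weight) * params_a[k] + weight * params_b[k]
            for k in params_a}

hall = {"rt60_s": 2.4, "direct_ratio": 0.3}   # illustrative first sound field
room = {"rt60_s": 0.6, "direct_ratio": 0.8}   # illustrative second sound field
third = blend_sound_field(hall, room, weight=0.5)
print(third)  # a sound field between the hall and the room
```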
In the present invention, a first detection coil is provided to a first key, and a second detection coil is provided to a second key. A first signal generation unit includes a first drive coil that is opposed to the first detection coil, and generates a first detection signal in accordance with the distance between the first detection coil and the first drive coil. A second signal generation unit includes a second drive coil that is opposed to the second detection coil, and generates a second detection signal in accordance with the distance between the second detection coil and the second drive coil. The first drive coil includes a first drive part in which current flows in a first direction, and a second drive part in which current flows in a second direction opposite to the first direction. The second drive coil includes a third drive part in which current flows in the first direction, and a fourth drive part in which current flows in the first direction. The first detection coil includes a first portion and a second portion in which induced currents in directions opposite to each other are generated via electromagnetic induction by the first drive coil. The second detection coil includes a third portion and a fourth portion in which induced currents in the same direction as each other are generated via electromagnetic induction by the second drive coil.
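A common rationale for winding a detection coil so that its two portions carry opposite-direction induced currents is differential rejection: subtracting the portions doubles the drive-induced component while cancelling a uniform external field. The toy model below illustrates that general principle only; it is not taken from the abstract.

```python
def detection_output(drive_component, uniform_interference):
    """Toy differential model of a two-portion detection coil.
    Each portion sees the drive field with opposite sign but the
    uniform interference with the same sign, so the difference
    keeps the drive component and cancels the interference."""
    portion1 = +drive_component + uniform_interference
    portion2 = -drive_component + uniform_interference
    return portion1 - portion2  # = 2 * drive_component

out = detection_output(0.3, 0.9)
print(out)  # twice the drive component; interference cancelled
```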
This displacement amount detection device comprises: a reaction body that includes a metal piece or a passive coil; a first substrate circuit that includes a first coil that generates a magnetic field when an alternating-current signal is supplied, and that outputs a first detection signal having an amplitude corresponding to the distance between the first coil and the reaction body; a second substrate circuit that includes a second coil that generates a magnetic field when an alternating-current signal is supplied, and that outputs a second detection signal having an amplitude corresponding to the distance between the second coil and the reaction body; and a difference output circuit that outputs the difference between the amplitude of the first detection signal and the amplitude of the second detection signal.
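The difference output stage above reduces to measuring each detection signal's amplitude and subtracting. A minimal numeric sketch, assuming peak detection over sampled sinusoidal detection signals (the abstract does not specify the amplitude-measurement method):

```python
import math

def amplitude(samples):
    # Peak amplitude of a sampled detection signal.
    return max(abs(s) for s in samples)

def difference_output(sig1, sig2):
    """Toy model of the difference output circuit: the difference
    between the amplitudes of the two detection signals."""
    return amplitude(sig1) - amplitude(sig2)

# Illustrative detection signals: sinusoids whose amplitudes stand in
# for the coil-to-reaction-body distances.
n = 64
sig1 = [0.8 * math.sin(2 * math.pi * k / n) for k in range(n)]
sig2 = [0.5 * math.sin(2 * math.pi * k / n) for k in range(n)]
print(difference_output(sig1, sig2))  # close to 0.3
```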
The mRNA cap analog contains a functional group selected from the following: an amino group, a carboxylic acid active ester group, an aminooxy group, a hydrazino group, a carbonyl group, a thiol group, a haloacetyl group, a Michael acceptor functional group, an alkenyl group, and a 1,3-dipolar functional group. The mRNA cap analog also has, in the sugar moiety or base moiety of a nucleotide, a linker with a structure that can be cleaved by light or by fluoride ions.
This speaker-equipped desk comprises: a desk having a top panel; two arms that are attached to the desk, are positioned entirely on the bottom side of the top panel in a storage position, and each have a tip capable of moving between the storage position and a usage position in which the tip protrudes beyond the edge of the top panel as seen from the direction orthogonal to the top panel surface; and two speakers provided to the respective tips of the two arms. The speakers are positioned on the bottom side of the top panel when the two arms are disposed in the storage position.
This stringed instrument comprises a musical instrument body 2, strings, a saddle 5 inserted into a groove 9 formed in the musical instrument body 2 to support the strings, and a pickup 6 that has a porous layer stretchable and deformable in the thickness direction and includes a piezoelectric element 20 that outputs a detection signal in response to expansion and contraction deformation of the porous layer. The piezoelectric element 20 is arranged between a first inner side surface 9a of the groove 9 and a first side surface 5a of the saddle 5. The width W9 of the groove 9 from the first inner side surface 9a to a second inner side surface 9b of the groove 9 is larger than the width W5 of the saddle 5 in the arrangement direction of the first inner side surface 9a and the second inner side surface 9b. The thickness T20 of the piezoelectric element 20 in the arrangement direction in an unloaded state is equal to or greater than the difference between the width W9 of the groove 9 and the width W5 of the saddle 5.
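The dimensional relationship stated above (T20 at least equal to W9 minus W5) guarantees that the piezoelectric element remains compressed between the groove wall and the saddle. A small arithmetic check of that condition, with illustrative millimetre values not taken from the abstract:

```python
def piezo_preloaded(w_groove, w_saddle, t_piezo):
    """Check the abstract's dimensional condition: the groove is wider
    than the saddle, and the unloaded piezo thickness is at least the
    groove-minus-saddle clearance, so the element stays in contact
    (compressed) on both sides. Units are arbitrary but consistent."""
    clearance = w_groove - w_saddle
    return w_groove > w_saddle and t_piezo >= clearance

# Illustrative values (mm): W9 = 3.2, W5 = 3.0, so clearance = 0.2.
print(piezo_preloaded(w_groove=3.2, w_saddle=3.0, t_piezo=0.25))  # True
print(piezo_preloaded(w_groove=3.2, w_saddle=3.0, t_piezo=0.15))  # False
```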
G10H 3/18 - Instruments in which the tones are generated by electromechanical means using mechanical resonant generators, e.g. strings or percussion instruments, the tones of which are picked up by electromechanical transducers, the electrical signals being further manipulated or amplified and subsequently converted to sound by a loudspeaker or equivalent device using mechanically actuated vibrators with pick-up means using strings, e.g. electric guitars