A personal care device is proposed with a main body and an attachment, wherein the attachment comprises an optical arrangement or light patterning means which receives light supplied at a light input area and processes or guides the light in such a way as to generate an illumination pattern in the attachment body which defines a 3D geometric shape or structure. The light patterning means or optical arrangement has a plurality of portions, which are each arranged to be optically activated upon supply of light to a different respective light input area, or to the same light input area but at a different light input angle, and which each, upon optical activation, generate a different portion of the overall 3D illumination pattern. A set of multiple light sources is provided in the main body for respectively supplying light into different ones of the portions of the optical arrangement or light patterning means, and the light sources are dynamically controlled by a controller to generate a dynamically varying overall 3D illumination pattern.
An apparatus generates a depth map for an image at a capture position for the image, using depth maps from at least two depth sense positions. Some depth map pixels are designated as uncertain. A view shift processor (305) applies view shifts to the depth maps from the depth sense positions to the capture position and designates pixels of the view shifted depth maps as uncertain if no certain pixel shifts to the pixel position. A combiner (307) generates a combined depth map for the capture position by combining the view shifted depth maps and designates pixels of the combined depth map as uncertain depth pixels if any pixel used in the combining is designated as an uncertain pixel. A depth map generator (309) generates an output depth map by determining depth values for uncertain pixels based on image values or certain pixels of the combined depth map.
H04N 5/222 - Studio circuitry; studio devices; studio equipment
H04N 13/271 - Image signal generators wherein the generated image signals comprise depth maps or disparity maps
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
H04N 21/218 - Source of audio or video content, e.g. local disk arrays
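As an illustration of the combining rule in the abstract above, the following sketch propagates per-pixel uncertainty flags when merging view-shifted depth maps: a combined pixel stays certain only if every contributing pixel is certain. The averaging rule, array shapes and function name are illustrative assumptions, not taken from the source.

```python
import numpy as np

def combine_shifted_depth_maps(shifted_depths, certain_masks):
    """shifted_depths: list of HxW float arrays, one per depth sense
    position, already view shifted to the capture position.
    certain_masks: matching boolean arrays, False where no certain
    pixel shifted to that position."""
    depths = np.stack(shifted_depths)   # (N, H, W)
    masks = np.stack(certain_masks)     # (N, H, W)
    # A combined pixel is uncertain if ANY contributing pixel is uncertain.
    combined_certain = masks.all(axis=0)
    # Illustrative combination rule: average the shifted depth values.
    combined_depth = depths.mean(axis=0)
    return combined_depth, combined_certain

d1 = np.full((2, 2), 1.0)
d2 = np.full((2, 2), 3.0)
m1 = np.array([[True, True], [False, True]])
m2 = np.ones((2, 2), dtype=bool)
depth, certain = combine_shifted_depth_maps([d1, d2], [m1, m2])
print(depth)    # [[2. 2.] [2. 2.]]
print(certain)  # [[ True  True] [False  True]]
```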
An apparatus comprises a receiver (101) receiving audiovisual data representing a scene. Sources (105, 107) provide a vehicle motion signal indicative of a motion of a vehicle and a relative user motion signal indicative of a motion of a user relative to the vehicle. A predictor (109) generates a predicted relative user motion signal by applying a prediction model to the vehicle motion signal. A residual signal generator (111) generates a residual user motion signal indicative of the residual difference between the predicted and received relative user motion. A view pose determiner (113) determines a view pose with different dependencies on the predicted relative user motion signal and the residual user motion signal. A renderer (103) renders an audiovisual signal for the view pose from the audiovisual data. The approach may provide enhanced user experiences that may compensate for or include effects of user motion caused by vehicle motion.
G06F 3/147 - Digital output to display device using display panels
G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
G09G 5/02 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the way in which colour is displayed
H04N 21/434 - Disassembling of a multiplex stream, e.g. demultiplexing audio and video streams or extraction of additional data from a video stream; remultiplexing of multiplex streams; extraction or processing of SI; disassembling of packetised elementary stream
H04N 21/435 - Processing of additional data, e.g. decrypting of additional data or reconstructing software from modules extracted from the transport stream
H04N 21/84 - Generation or processing of descriptive data, e.g. content descriptors
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
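A minimal numeric sketch of how a view pose can depend differently on the predicted and residual motion signals in the abstract above, assuming scalar yaw signals, a linear prediction model and illustrative weights (none of which are specified in the source):

```python
def view_pose_yaw(vehicle_motion, relative_user_motion, predict,
                  w_pred=0.2, w_res=1.0):
    predicted = predict(vehicle_motion)          # predicted relative user motion
    residual = relative_user_motion - predicted  # residual user motion signal
    # Different dependencies: vehicle-induced motion is attenuated,
    # intentional (residual) user motion is followed fully.
    return w_pred * predicted + w_res * residual

# Hypothetical prediction model: the user's head counter-rotates against
# vehicle yaw with a gain of 0.8.
yaw = view_pose_yaw(vehicle_motion=10.0, relative_user_motion=-7.0,
                    predict=lambda v: -0.8 * v)
print(yaw)  # 0.2 * (-8.0) + 1.0 * 1.0 = -0.6
```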
11.
AN AUDIO APPARATUS AND METHOD OF OPERATION THEREFOR
An audio apparatus comprises a first receiver (501) receiving audio data for audio sources of a scene comprising multiple rooms. A determiner (509) determines a room comprising a listening position and a neighbor room. A second receiver (503) receives spatial acoustic transmission data describing a number of transmission boundary regions for the listening room having an acoustic transmission level of sound from the neighbor room to the listening room exceeding a threshold. A first reverberator (511) determines a reverberation audio signal for the neighbor room. A sound source circuit (513) determines a sound source position in the neighbor room for a transmission boundary region. A renderer (507) renders an audio signal for the listening position which includes an audio component generated by rendering the neighbor room reverberation audio signal from the sound source position. An improved rendering of multi-room scenes can be achieved.
Devices and methods for applying oral care with an adaptable mouthpiece assembly. The adaptable mouthpiece assembly includes a support plate configured to be inserted in a mouth of a user, and a chain member configured to accommodate a section of teeth and gum of the user. The chain member includes a plurality of shackles that are arranged successively on the support plate and that are hingeably coupled to each other. A posterior shackle is coupled to the support plate via a first fixed pivot. An anterior shackle is coupled to the support plate via a movable pivot. A configuration of the plurality of shackles is adjustable by allowing the plurality of shackles to move freely in a horizontal plane along the support plate between the first fixed pivot and the movable pivot.
Drivetrain assemblies for personal care devices to generate controllable power tapping motion alone or with controllable sweeping motion. The drivetrain assemblies include a drivetrain shaft at least partially contained within a body portion. The drivetrain shaft is further configured to engage a brush head member. Some drivetrain assemblies include a single actuator for generating periodic linear movement, wherein the drivetrain shaft transmits the linear movement to the brush head member via a pivoting motion and/or a translational motion. Other drivetrain assemblies include the actuator for generating periodic linear movement and a motor on the drivetrain shaft to periodically rotate the shaft about a central axis. Still other drivetrain assemblies include a motor to rotate the drivetrain shaft about the central axis, an actuator to generate periodic linear movement, parallel flexible flexures to constrain the periodic linear movement, and parallel substantially rigid translation linkages to transmit the periodic linear movement.
A dental appliance and method employing an arcuate mouthpiece with a plurality of substantially H-shaped plate segments having opposing side walls joined by transverse bridge portions splitting the opposing side walls into upper and inverted channels dimensioned for receiving the teeth of the maxillary and mandibular dental arches respectively, so that one or more cleaning modules, such as tufted bristles, may be activated to simultaneously clean a working surface of the teeth of the maxillary dental arch and the teeth of the mandibular dental arch adjacent to the plate segments.
Systems and methods of generating sweeping and/or tapping motions in an oral cleaning device such as a power toothbrush device to perform optimally at all target areas of the user's mouth. The systems and methods are configured to periodically move a drivetrain shaft in a first direction about a central axis of the device or along a line that is tangential to a cleaning unit of the device, and periodically drive the drivetrain shaft in a second direction, different from the first direction, so that a set of cleaning elements that are attached to the cleaning unit move in a vertical up-down motion. The systems and methods are further configured to dynamically modify an output of the drivetrain assembly to target different areas of the user's mouth.
A power toothbrush device (100) including a brush head member (104) having a set of bristles (116); a body portion (102) coupled with the brush head member; and a drivetrain assembly (122). The drivetrain assembly includes an actuator (840) configured to generate periodic linear movement and a drivetrain shaft (124, 824) configured to transmit the generated periodic linear movement to the brush head member. The set of bristles are configured to be driven in a first direction (TM) at an amplitude equal to or greater than 0.25 mm and less than 3 mm and a frequency equal to or greater than 0.25 Hz and less than 520 Hz in the first direction. The drivetrain shaft is further configured to periodically rotate the set of bristles in a second direction (SM) about a central axis of the power toothbrush device.
A personal care device including a body portion; a cleaning unit having a set of cleaning elements, the set of cleaning elements extending in an axis of alignment; a controller and an actuator assembly arranged to be driven by the controller. The actuator assembly includes an actuator configured to generate first and second periodic movements, wherein the first and second periodic movements operate at different frequencies or out-of-phase with each other. The first periodic movement causes the cleaning elements to move in a first direction about a central axis of the personal care device or along a line that is tangential to a platen of the cleaning unit. The second periodic movement causes the cleaning elements to move in a second direction that is different than the first direction, wherein the second direction is normal to the platen of the cleaning unit.
Drivetrain assemblies for personal care devices using electromagnetic components to generate a combination of controllable power tapping and sweeping motions. The drivetrain assemblies include a drivetrain shaft at least partially contained within a body portion of the device. The drivetrain shaft is further configured to engage a brush head member of the device. The drivetrain assemblies further include a motor mounted on the drivetrain shaft, where the motor is configured to periodically rotate the drivetrain shaft about a first axis of the device, and an electromagnetic assembly configured to drive the drivetrain shaft and thereby the brush head member about a second axis of the device or in a direction parallel to a third axis of the device. The electromagnetic assembly includes a stator, a voice coil actuator, a rotor, or a linear solenoid actuator, for example.
A dental cleaning system comprises a first wall element for mounting against the inner, lingual, tooth surfaces of one or both jaws and a second wall element for mounting against the outer, facial, tooth surfaces of said one or both jaws. A drive mechanism couples the first and second wall elements, and imparts a drive vibration of the first and second wall elements towards each other and apart from each other.
Provided is a toothbrush adapted to provide an indication of historic brushing performance of a user. In particular, an oral surface of the user engaged by the cleaning element of the toothbrush is identified by a location sensor. A feedback component integrated with the toothbrush then provides an indicator of stored historic brushing performance values corresponding to the oral surface engaged by the cleaning element. In this way, feedback regarding previous tooth brushing sessions may be conveyed to the user without the need for an external display. Moreover, this also reduces the chance of confusion by a user regarding the identification of oral surfaces that may require a change in brushing behavior.
An (encoding) apparatus comprises a metadata generator (203) generating metadata for audio data for a plurality of audio elements representing audio sources in an environment. The metadata comprises acoustic environment data for the environment where the acoustic environment data describes properties affecting sound propagation for the audio sources in the environment. At least some of the acoustic environment data is applicable to a plurality of listening poses in the environment and the properties include both static and dynamic properties. A bitstream generator (205) generates the bitstream to include the metadata and often also audio data representing the audio elements for the audio sources in the environment. A decoding apparatus may comprise a receiver for receiving the bitstream and a renderer for rendering audio for the audio environment based on the acoustic environment data and on audio data for the audio elements.
An audio apparatus comprising a receiver (501) receiving audio data and metadata including data for reverberation parameters for the environment. A modifier (503) generates a modified first parameter value for a first reverberation parameter which is a reverberation delay parameter or a reverberation decay rate parameter. A compensator (505) generates a modified second parameter value for a second reverberation parameter in response to the modification of the first reverberation parameter. The second reverberation parameter is indicative of an energy of reverberation in the acoustic environment. A renderer (400) generates audio output signals by rendering the audio data using the metadata and specifically a reverberation renderer (407) generates at least one reverberation signal component for at least one audio output signal from at least one of the audio signals and in response to the first modified parameter value and the second modified parameter value. The compensation may provide improved perceived reverberation while allowing flexible adaptation.
A method for processing multi-view data of a scene. The method comprises obtaining at least two images of the scene from different cameras, determining a sharpness indication for each image and determining a confidence score for each image based on the sharpness indications. The confidence score is for use in the determination of weights when blending the images to synthesize a new virtual image.
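A small sketch of one plausible reading of this method, assuming sharpness is measured as the variance of a Laplacian response and confidence scores are the normalised sharpness values; the abstract does not prescribe either choice:

```python
import numpy as np

def sharpness(img):
    # Variance of a 4-neighbour Laplacian over a 2-D grayscale array.
    lap = (-4.0 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()

def confidence_scores(images):
    s = np.array([sharpness(im) for im in images])
    return s / s.sum()   # normalised weights for blending the images

rng = np.random.default_rng(0)
sharp_view = rng.random((32, 32))     # high-frequency content
blurry_view = np.full((32, 32), 0.5)  # no detail at all
print(confidence_scores([sharp_view, blurry_view]))  # ~[1. 0.]
```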
A method for generating depth data for a six degree of freedom, 6DoF, video of a scene. The method comprises obtaining a first set of images of the scene, generating a first set of depth components based on the first set of images and analyzing the first set of depth components to determine completeness of the depth components. A second set of images of the scene are further obtained and a second set of depth components are generated based on the second set of images, wherein, if the analysis determines the first set of depth components to be overcomplete, the number of depth components in the second set is selected to be smaller than the number of depth components in the first set.
An acoustic image source model for early reflections in a room is generated by iteratively mirroring (305) rooms around boundaries (e.g. walls) of rooms of the previous iteration. Determination of mirror positions in the image rooms for an audio source in the original room is performed by determining (605, 607) matching reference positions in the two rooms and a relative mapping of directions between the two rooms (611). A mirror position in the mirror room from an audio source in the original room is determined (701, 703, 705) by mapping relative offsets between the positions of the audio source and the reference positions. The approach may provide a computationally very efficient approach.
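The core geometric step, mirroring a position across a room boundary, can be illustrated as below; the iterative room mirroring, reference-position matching and direction mapping of the abstract are not reproduced, and the axis-aligned wall is an assumption:

```python
def mirror_across_plane(pos, axis, plane_coord):
    """Reflect a 3-D point across the plane {coordinate[axis] == plane_coord}."""
    p = list(pos)
    p[axis] = 2.0 * plane_coord - p[axis]
    return tuple(p)

src = (1.0, 2.0, 1.5)
# First-order image source for a wall at x = 4.0:
img1 = mirror_across_plane(src, axis=0, plane_coord=4.0)
print(img1)                               # (7.0, 2.0, 1.5)
# Second-order: mirror the image again across the opposite wall at x = 0.0:
print(mirror_across_plane(img1, 0, 0.0))  # (-7.0, 2.0, 1.5)
```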
A method for transmitting multi-view image frame data. The method comprises obtaining multi-view components representative of a scene generated from a plurality of sensors, wherein each multi-view component corresponds to a sensor and wherein at least one of the multi-view components includes a depth component and at least one of the multi-view components does not include a depth component. A virtual sensor pose is obtained for each sensor in a virtual scene, wherein the virtual scene is a virtual representation of the scene and wherein the virtual sensor pose is a virtual representation of the pose of the sensor in the scene when generating the corresponding multi-view component. Sensor parameter metadata is generated for the multi-view components, wherein the sensor parameter metadata contains extrinsic parameters for the multi-view components and the extrinsic parameters contain at least the virtual sensor pose of a sensor for each of the corresponding multi-view components. The extrinsic parameters enable the generation of additional depth components by warping the depth components based on their corresponding virtual sensor pose and a target position in the virtual scene. The multi-view components and the sensor parameter metadata are thus transmitted.
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
A method for processing multi-view image data. The method comprises obtaining source view data from a plurality of sensors, the source view data containing source texture data and source depth data of a scene with one or more objects. The positions of one or more of the objects in the scene are obtained and a stack of layers is generated in a virtual scene for at least one of the objects, wherein the position of a stack of layers in the virtual scene is based on the position of the corresponding object in the scene. Generating a stack of layers comprises generating a plurality of layers, wherein each layer comprises texture data and transparency data for the corresponding object.
An apparatus comprises a receiver (601) receiving captured video data for a real world scene and being linked with a capture pose region. A store (615) stores a 3D mesh model of the real world scene. A renderer (605) generates an output image for a viewport for a viewing pose. The renderer (605) comprises a first circuit (607) arranged to generate first image data for the output image by projection of captured video data to the viewing pose and a second circuit (609) arranged to determine second image data for a first region of the output image in response to the three-dimensional mesh model. A third circuit (611) generates the output image to include at least some of the first image data and to include the second image data for the first region. A fourth circuit (613) determines the first region based on a deviation of the viewing pose relative to the capture pose region.
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06T 15/00 - 3D [Three Dimensional] image rendering
H04N 13/111 - Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
H04N 13/279 - Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals the virtual viewpoint locations being selected by the viewers or determined by tracking
H04N 21/218 - Source of audio or video content, e.g. local disk arrays
H04N 21/6587 - Control parameters, e.g. trick play commands or viewpoint selection
A method of depth segmentation for the generation of multi-view video data. The method comprises obtaining a plurality of source view images and source view depth maps representative of a 3D scene from a plurality of sensors. Foreground objects in the 3D scene are segmented from the source view images and/or the source view depth maps. One or more patches are then generated for each source view image and source view depth map containing at least one foreground object, wherein each patch corresponds to a foreground object and wherein generating a patch comprises generating a patch texture image, a patch depth map and a patch transparency map based on the source view images and the source view depth maps.
The invention provides a light output system for delivering light to a region of interest, for providing at least a minimum light intensity to all of the region of interest. The system has more light sources of a particular kind than are needed to reach the minimum light intensity (to all of the region of interest), and they are operated with a duty cycle. The duty cycle ratio is reduced by a factor which is greater than the factor by which the number of light sources is increased above the minimum number, so that energy savings are obtained as well as an increased lifetime of the system.
H05B 47/155 - Coordinated control of two or more light sources
H05B 47/20 - Responsive to malfunctions or to light source life; circuit arrangements for operating light sources in general, i.e. where the type of light source is not relevant, for protection
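The claimed energy relation can be made concrete with illustrative numbers: if the duty ratio is reduced by a factor greater than the factor by which the source count is increased, the relative drive energy falls below one. The figures below are invented for illustration:

```python
def relative_energy(n_min, k, duty):
    """Drive energy of k * n_min sources at the given duty ratio,
    relative to n_min sources driven continuously (duty 1.0)."""
    return (k * n_min * duty) / (n_min * 1.0)

# Double the number of sources (k = 2) but cut the duty ratio by more
# than a factor of 2 (1.0 -> 0.4, i.e. a factor of 2.5):
print(relative_energy(n_min=10, k=2, duty=0.4))  # 0.8 -> 20% energy saved
```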
A method for preparing immersive video data prior to processing into an immersive video. The method comprises receiving immersive video data containing one or more images of a scene, the scene comprising one or more image regions, and obtaining a relevance factor for at least one of the image regions, the relevance factor being indicative of the relative importance of the image region to a viewer. The immersive video data is separated into one or more sets of region data, wherein each set of region data corresponds to the data of one or more image regions and, based on the relevance factor of an image region, a bitrate is selected at which the set of region data corresponding to the image region is to be sent to an external processing unit.
H04N 19/115 - Selection of the code volume for a coding unit prior to coding
H04N 19/136 - Incoming video signal characteristics or properties
H04N 19/164 - Feedback from the receiver or from the transmission channel
H04N 19/167 - Position within a video image, e.g. region of interest [ROI]
H04N 19/17 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
H04N 19/85 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
H04N 21/218 - Source of audio or video content, e.g. local disk arrays
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
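One hedged way to realise the relevance-to-bitrate selection of the abstract above is a simple tiered mapping, as sketched below; the tier boundaries, rates and function name are invented for illustration:

```python
def select_bitrate(relevance, max_bitrate_bps=20_000_000):
    """Map a region's relevance factor (0..1) to a transmission bitrate."""
    if relevance >= 0.8:
        return max_bitrate_bps          # e.g. a foreground performer
    if relevance >= 0.4:
        return max_bitrate_bps // 4     # mid-relevance scenery
    return max_bitrate_bps // 16        # e.g. static background

for r in (0.9, 0.5, 0.1):
    print(r, select_bitrate(r))
```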
32.
DEPTH ORDERS FOR MULTI-VIEW FRAME STORING AND RENDERING
A method for storing multi-view data with depth order data. The method comprises obtaining image frames of a scene from an imaging system with a plurality of cameras, obtaining depth maps from the imaging system and/or the image frames and obtaining qualitative depth information relating to the depth of at least one object present in the scene relative to other objects in the scene, the qualitative depth information being additional to the information conveyed by the depth map. A depth order is determined for a set of at least two objects present in the scene based on the qualitative depth information, wherein the depth order determines the depth of an object relative to other objects with different depth orders. The image frames of the scene, the corresponding depth maps and the depth order for the objects in the scene are then stored as the multi-view data.
An image synthesis apparatus comprises a first receiver (201) receiving three dimensional image data describing at least part of a three dimensional scene and a second receiver (203) receiving a view pose for a viewer. An image region circuit (207) determines at least a first image region in the three dimensional image data and a depth circuit (209) determines a depth indication for the first image region from depth data of the three dimensional image data. A region circuit (211) determines a first region for the first image region. A view synthesis circuit (205) generates a view image from the three dimensional image data, where the view image represents a view of the three dimensional scene from the view pose. The view synthesis circuit (205) is arranged to adapt a transparency for the first image region in the view image in response to the depth indication and a distance between the view pose and the first region.
H04N 13/117 - Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
The processing of a depth map comprises for at least a first pixel of the depth map performing the steps of: determining a set of candidate depth values (105) including other depth values of the depth map, determining (107) a cost value for each of the candidate depth values in response to a cost function; selecting (109) a first depth value in response to the cost values for the set of candidate depth values; and determining (111) an updated depth value for the first pixel in response to the first depth value. The set of candidate depth values comprises a first candidate depth value along a first direction which is further away from the first pixel than at least one pixel along the first direction which is not included in the set of candidate depth values or which has a higher cost function than the first candidate depth value.
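A schematic rendering of the per-pixel update: the candidate set contains the pixel's own value plus a value further away along the search direction, deliberately skipping a nearer pixel, and the lowest-cost candidate wins. The median-based cost used here is an illustrative stand-in for the unspecified cost function:

```python
import numpy as np

def update_pixel(depth, y, x, step=3):
    h, w = depth.shape
    # Candidate set: the pixel's own value plus a value further away along
    # +x; the immediately adjacent pixel is deliberately NOT a candidate.
    candidates = [depth[y, x]]
    if x + step < w:
        candidates.append(depth[y, x + step])
    # Illustrative cost: disagreement with the local row median.
    guide = np.median(depth[y, max(0, x - step):min(w, x + step + 1)])
    costs = [abs(c - guide) for c in candidates]
    return candidates[int(np.argmin(costs))]

d = np.array([[1.0, 5.0, 5.0, 2.0, 2.0]])
print(update_pixel(d, 0, 0))   # 2.0: the far candidate wins
```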
A teat for a drinking bottle has a nipple opening in the form of a slit arrangement. The slit arrangement has end portions which comprise curved paths. These curved paths resist tearing.
A method for transitioning from a first set of video tracks, VT1, to a second set of video tracks, VT2, when rendering a multi-track video, wherein each video track has a corresponding rendering priority. The method comprises receiving an instruction to transition from a first set of first video tracks VT1 to a second set of second video tracks VT2, obtaining the video tracks VT2 and, if the video tracks VT2 are different to the video tracks VT1, applying a lowering function to the rendering priority of one or more of the video tracks in the first set of video tracks VT1 and/or an increase function to the rendering priority of one or more video tracks in the second set of video tracks VT2. The lowering function and the increase function decrease and increase the rendering priority over time respectively. The rendering priority is used in the determination of the weighting of a video track and/or elements of a video track used to render a multi-track video.
H04N 13/117 - Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
H04N 21/218 - Source of audio or video content, e.g. local disk arrays
H04N 21/431 - Generation of visual interfaces; content or additional data rendering
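A minimal sketch of the priority ramps in the transition method above, assuming linear lowering and increase functions over a normalised transition time t in [0, 1]; the abstract leaves the function shapes open:

```python
def lowered_priority(p0, t):    # applied to tracks leaving the mix (VT1)
    return p0 * (1.0 - t)

def increased_priority(p1, t):  # applied to tracks entering the mix (VT2)
    return p1 * t

# Rendering weights each track by its current priority, so the output
# cross-fades from VT1 to VT2 over the transition.
for t in (0.0, 0.5, 1.0):
    print(t, lowered_priority(1.0, t), increased_priority(1.0, t))
```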
A method for storing data representative of virtual objects on a computer storage system. The method comprises storing constant data corresponding to physical properties of the virtual objects which will remain constant when the data is read. The constant data comprises one or more constant elements representative of physical properties of one or more of the virtual objects. The method also comprises storing variable data corresponding to physical properties of the virtual objects which are uncertain at the time of storing the data. The variable data comprises one or more variable elements representative of uncertain physical properties of one or more of the virtual objects and wherein each variable element comprises a range of values and a probability function for the range of values.
A bottle analysis system receives image data of a bottle to be analyzed, and the data is processed to identify a shape of the bottle, and optionally any identifying markings. A bottle type is then determined. Image analysis is used to determine a liquid level in the bottle and thereby determine a liquid volume in the bottle.
G01F 23/02 - Indicating or measuring liquid level or level of fluent solid material, e.g. indicating in terms of volume or indicating by means of an alarm by gauge glasses or other apparatus involving a window or transparent tube for directly observing the level to be measured or the level of a liquid column in free communication with the main body of the liquid
G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
39.
A METHOD AND APPARATUS FOR ENCODING AND DECODING ONE OR MORE VIEWS OF A SCENE
Methods are provided for encoding and decoding image or video data comprising two or more views (10) of a scene. The encoding method comprises obtaining (11), for each of the two or more views, a respective block segmentation mask (12) of the view and block image data (13) of the view. The method further comprises generating (14) at least one packed frame (40) containing the two or more block segmentation masks and the block image data of the two or more views; and encoding (15) the at least one packed frame into at least one bitstream (16). Each view is divided into blocks of pixels (30), and the block segmentation mask indicates which blocks of pixels belong to an area of interest (31) in the view. The block image data comprises the blocks of pixels that belong to the area of interest. Also provided are a corresponding encoder, decoder, and bitstream.
H04N 19/132 - Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
H04N 19/167 - Position within a video image, e.g. region of interest [ROI]
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
H04N 19/46 - Embedding additional information in the video signal during the compression process
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
Concepts for encoding and decoding multi-view data for immersive video are disclosed. In an encoding method, metadata is generated comprising a field indicating if a patch data unit of the multi-view data comprises in-painted data for representing missing data. The generated metadata provides a means of distinguishing patch data units comprising original texture and depth data from patch data units comprising in-painted data (e.g. in-painted texture and depth data). The provision of such information within the metadata of immersive video may address problems associated with blending and pruned view reconstruction. Also provided are an encoder and a decoder for multi-view data for immersive video, and a corresponding bitstream, comprising metadata.
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
41.
ANTI-FOULING UNIT AND METHOD OF APPLYING A PLURALITY OF ANTI-FOULING UNITS TO A SURFACE
An anti-fouling unit (1) is configured to be arranged on a surface and comprises at least one electric circuit (30) including a light-emitting arrangement (31) configured to emit anti-fouling light. Further, the anti-fouling unit (1) comprises a carrier slab (40) carrying the at least one electric circuit (30), which carrier slab (40) includes at least one active slab zone (42) where the at least one electric circuit (30) is located and at least one passive slab zone (43) outside the active slab zone (42) that is configured to allow a division of the anti-fouling unit (1) in separate pieces without deteriorating the anti-fouling functionality, and the light-emitting arrangement (31) of the at least one electric circuit (30) is configured to realize the anti-fouling functionality both at a position of the at least one active slab zone (42) and at a position of the at least one passive slab zone (43).
An encoder, decoder, encoding method and decoding method for 3DoF+ video are disclosed. The encoding method comprises receiving (110) multi-view image or video data comprising a basic view and at least a first additional view of a scene. The method proceeds by identifying (220) pixels in the first additional view that need to be encoded because they contain scene-content that is not visible in the basic view. The first additional view is divided (230) into a plurality of first blocks of pixels. First blocks containing at least one of the identified pixels are retained (240); and first blocks that contain none of the identified pixels are discarded. The retained blocks are rearranged (250) so that they are contiguous in at least one dimension. A packed additional view is generated (260) from the rearranged first retained blocks and encoded (264).
H04N 19/129 - Scanning of coding units, e.g. zig-zag scan of transform coefficients or flexible macroblock ordering [FMO]
H04N 19/132 - Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
H04N 19/134 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
H04N 19/182 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a pixel
H04N 19/46 - Embedding additional information in the video signal during the compression process
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
43.
A SYSTEM AND METHOD FOR PROVIDING ASSISTANCE DURING BOTTLE-FEEDING
A system provides assistance during bottle-feeding. Video images of a subject bottle-feeding an infant are captured and displayed. Using image analysis, a reorientation of the bottle and/or infant is determined that is required in order to reach a desired bottle orientation and/or infant orientation. Reorientation instructions are provided in combination with the video images to assist the subject in reorienting the bottle and/or the infant to achieve the desired bottle orientation.
An audio apparatus for generating a diffuse reverberation signal comprises a receiver (501) receiving audio signals representing sound sources and metadata comprising a diffuse reverberation signal to total source relationship indicative of a level of diffuse reverberation sound relative to total emitted sound in the environment. The metadata also comprises, for each audio signal, a signal level indication and directivity data indicative of directivity of sound radiation from the sound source represented by the audio signal. A circuit (505, 507) determines a total emitted energy indication based on the signal level indication and the directivity data, and a downmix coefficient based on the total emitted energy and the diffuse reverberation signal to total source relationship. A downmixer (509) generates a downmix signal by combining signal components for each audio signal generated by applying the downmix coefficient for each audio signal to the audio signal. A reverberator (407) generates the diffuse reverberation signal for the environment from the downmix signal.
G10L 19/008 - Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
H04S 7/00 - Indicating arrangements; control arrangements, e.g. balance control
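A hedged numeric sketch of the coefficient derivation in the abstract above; the formula below is one plausible reading (total emitted energy from level times a directivity factor, coefficient scaled by the diffuse-to-total relationship), not the normative computation:

```python
import numpy as np

def downmix_coefficients(levels, directivity_factors, dsr):
    """levels: per-source signal level indications (linear energy);
    directivity_factors: assumed ratio of total emitted to reference-axis
    energy; dsr: diffuse reverberation signal to total source relationship."""
    levels = np.asarray(levels, dtype=float)
    total_emitted = levels * np.asarray(directivity_factors, dtype=float)
    # Coefficient grows with the energy the source actually radiates.
    return dsr * np.sqrt(total_emitted / levels)

coeffs = downmix_coefficients(levels=[1.0, 0.5],
                              directivity_factors=[1.0, 4.0], dsr=0.3)
print(coeffs)  # [0.3 0.6]: the more omnidirectional source feeds more reverb
```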
45.
EXTENDED REALITY-BASED USER INTERFACE ADD-ON, SYSTEM AND METHOD FOR REVIEWING 3D OR 4D MEDICAL IMAGE DATA
The invention relates to a system (1) for reviewing 3D or 4D medical image data (2), the system (1) comprising (a) a medical review application (MRA) (4) comprising a processing module (6) configured to process a 3D or 4D dataset (2) to generate 3D content (8), and a 2D user interface (16); wherein the 2D user interface (16) is configured to display the 3D content (8) and to allow a user (30) to generate user input (18) commands; (b) an extended reality (XR)-based user interface add-on (XRA) (100); and (c) a data exchange channel (10), the data exchange channel (10) being configured to interface the processing module (6) with the XRA (100); wherein the XRA (100) is configured to interpret and process the 3D content (8) and convert it to XR content displayable to the user (30) in an XR environment (48); wherein the XR environment (48) is configured to allow a user to generate user input (18) events, and the XRA (100) is configured to process the user input (18) events and convert them to user input (18) commands readable by the MRA (4). The invention also relates to an extended reality-based user interface add-on (100), a related method for analysing a 3D or 4D dataset (2), and a related computer program.
A method for calibrating at least one of the six degrees of freedom of all or some of the cameras in a formation positioned for scene capturing, the method comprising a step of initial calibration before the scene capturing. The step comprises creating a reference video frame which comprises a reference image of a stationary reference object. During scene capturing the method further comprises a step of further calibration, wherein the position of the reference image of the stationary reference object within a captured scene video frame is compared to the position of the reference image of the stationary reference object within the reference video frame, and a step of adapting the at least one of the six degrees of freedom of multiple cameras of the formation if needed in order to obtain improved scene capturing after the further calibration.
Presented are concepts for generating an input image dataset that may be used for training the alignment of multiple synthesized images. Such concepts may be based on an idea to generate an input image dataset from copies of an arbitrary reference image that are shifted in various directions. In this way, a single arbitrary image may be used to create an artificially misaligned input sample (i.e. an input image dataset) that can be used to train a neural network.
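The dataset construction reduces to taking shifted copies of one reference image and keeping the shift vectors as alignment targets; a minimal sketch, assuming integer circular shifts via numpy:

```python
import numpy as np

def make_misaligned_set(reference, shifts):
    """reference: 2-D array; shifts: list of (dy, dx) integer offsets.
    Returns the shifted copies and the shift vectors as training targets."""
    samples = [np.roll(reference, (dy, dx), axis=(0, 1)) for dy, dx in shifts]
    return np.stack(samples), np.array(shifts)

ref = np.arange(16.0).reshape(4, 4)
inputs, targets = make_misaligned_set(ref, [(0, 0), (1, 0), (0, -1)])
print(inputs.shape, targets.shape)   # (3, 4, 4) (3, 2)
```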
An internal element (310) for a feeding bottle (100) is provided, the feeding bottle comprising a teat component (110), and a container component (120), which together define a bottle volume extending longitudinally between a base end of the container component, and a top end of the teat component. The internal element (310) comprises a disc element (620) configured to be positioned within the bottle volume extending transverse to the longitudinal axis, and further comprises one or more tab elements (640) protruding from an outer periphery (630) of the disc element for being received between interfacing parts of a coupling arrangement (340, 342) of the bottle.
An arrangement (300) for a feeding bottle is provided, the feeding bottle comprising a teat component (110), and a container component (120), which together define an internal bottle volume extending longitudinally between a base end of the container component, and a top end of the teat component. The arrangement comprises an internal element (310) for positioning inside the bottle volume, and a protruding element (320) arranged for extending from the internal element to an outside of the bottle when the bottle is in an assembled state with the internal element in position, for providing an interconnection between inside and outside of the bottle.
A partitioning component (210) for dividing a feeding bottle (110) into two sections: one (125) associated with a container (120) part of the bottle and one (115) associated with a teat part (110) of the bottle. The partition allows for at least partial retention of liquid in the teat part even when the bottle is tipped to a horizontal position, the more natural position for feeding a user such as a baby or toddler. To enable flow of fluid between the two sections, the partitioning component comprises a passageway arrangement (215) which comprises one or more openings (225), the passageway arrangement being configured to enable flow of both liquid and air across the partition in different directions. This allows liquid to pass in, and air to pass out, of the teat section (115) during filling of the teat. To enable maximal retention of liquid inside the teat section when the bottle is tilted to the horizontal position, the openings of the passageway arrangement are all confined to a single region of the partitioning component which, in use, is arranged offset on one diametric side of the bottle volume or of the teat volume.
A breast pump device (1) comprises i) a fluid pressure arrangement (2, 3) configured to interact with a breast from which milk is to be extracted and to realize a pressure cycle at a position where the fluid pressure arrangement (2, 3) is to face the breast, ii) a controller configured to execute an operation software program for controlling operation of the breast pump device (1), the operation software program including instructions to cause the fluid pressure arrangement (2, 3) to realize the pressure cycle, and iii) a monitoring arrangement configured to monitor functioning of the operation software program and to cause release of underpressure in case malfunctioning of the operation software program is detected. As a result, malfunctioning of the operation software program cannot persist and any vacuum that may have built up in such a situation is automatically released.
Methods of encoding and decoding video data are provided. In an encoding method, source video data comprising one or more source views is encoded into a video bitstream. Depth data of at least one of the source views is nonlinearly filtered and downsampled prior to encoding. After decoding, the decoded depth data is up-sampled and nonlinearly filtered.
H04N 13/139 - Format conversion, e.g. of frame-rate or size
H04N 13/161 - Encoding, multiplexing or demultiplexing different image signal components
H04N 13/243 - Image signal generators using stereoscopic image cameras using three or more 2D image sensors
H04N 13/271 - Image signal generators wherein the generated image signals comprise depth maps or disparity maps
H04N 19/117 - Filters, e.g. for pre-processing or post-processing
H04N 19/154 - Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
H04N 19/59 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
A monitoring system is provided for monitoring an infant during bottle feeding. Based on bottle orientation information and movement information in respect of the feeding bottle during feeding, infant orientation information in respect of the infant is obtained.
G16H 40/60 - ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices; for the operation of medical equipment or devices
55.
A MONITORING SYSTEM AND METHOD FOR MONITORING MILK FLOW DURING BREAST FEEDING OR MILK EXPRESSION
A monitoring system for monitoring milk flow during breast feeding or milk expression uses a flow sensor arrangement to monitor milk flow levels from different regions of the breast. A map of milk flow levels for different regions of the breast is then generated and displayed.
An image synthesis system comprises receivers (201, 203, 205) receiving scene data describing at least part of a scene; object data describing a 3D object from a viewing zone having a relative pose with respect to the object, and a view pose in the scene. A pose determiner circuit (207) determines an object pose for the object in the scene in response to the scene data and the view pose; and a view synthesis circuit (209) generates a view image of the object from the object data, the object pose, and the view pose. A circuit (211) determines a viewing region in the scene which corresponds to the viewing zone for the object being at the object pose. The pose determiner circuit (207) determines a distance measure for the view pose relative to the viewing region and changes the object pose if the distance measure meets a criterion including a requirement that a distance between the view pose and a pose of the viewing region exceeds a threshold.
A method is provided for selecting a transparency setting and color values of pixels in a virtual image. The virtual image can be formed by combining reference images taken at different angles to produce the virtual image that views an object at a new, uncaptured angle. The method includes determining, for each pixel of the virtual image, what information it carries from the reference view images. The information of the pixel is used to define a pixel category, and the category is used to select, based on logical conditions, what information will be displayed by the pixel and to set the color of the pixels.
A method for compressing data includes obtaining a compression schema customized to a format of a delimited text file, and using the compression schema to parse the delimited text file into a plurality of data blocks, split each of the data blocks into a plurality of data units for efficient selective access, and compress the plurality of data units in the plurality of data blocks using different compression algorithms for improved compression ratio. The delimited file is split into a plurality of data blocks based on the region definitions in the schema. Each of the plurality of data blocks is split into the plurality of data units based on its respective data unit size specified in the schema. The plurality of data units in each of the plurality of data blocks are compressed using the different compression algorithms indicated by the compression instructions in the schema. The compressed file consists of the compressed data blocks, the compression schema and various metadata for data decompression, file reconstruction and functionalities such as data security and search query. The delimited text file may include genomic information or another type of information.
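A toy sketch of the schema-driven pipeline described above, assuming a line delimiter, a fixed data-unit size and a per-block choice of compressor; the schema fields are invented, with zlib and bz2 standing in for the "different compression algorithms":

```python
import bz2
import zlib

# Invented schema: region definitions, a data-unit size, and a compressor
# per region ("different compression algorithms").
schema = {"delimiter": "\n", "unit_size": 2,
          "compressors": {"header": zlib.compress, "body": bz2.compress}}

def compress_delimited(text, schema):
    lines = text.split(schema["delimiter"])
    blocks = {"header": lines[:1], "body": lines[1:]}   # region definitions
    out = {}
    for name, rows in blocks.items():
        n = schema["unit_size"]
        # Split the block into data units for selective access...
        units = [rows[i:i + n] for i in range(0, len(rows), n)]
        # ...and compress each unit with the block's algorithm.
        comp = schema["compressors"][name]
        out[name] = [comp(schema["delimiter"].join(u).encode()) for u in units]
    return out

result = compress_delimited("id,seq\n1,ACGT\n2,TTGA\n3,CCAT", schema)
print({name: len(units) for name, units in result.items()})  # units per block
```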
A method for storing, by a processor, a genome graph representing a plurality of individual genomes, including: storing a linear representation of a reference genome in a data storage; receiving a first genome; identifying variations in the first genome from the reference genome; generating graph edges for each variation in the first genome from the reference genome; generating for each generated graph edge: an edge identifier that uniquely identifies the current edge in the genome graph; a start edge identifier that identifies the edge from which the current edge branches out; a start position that indicates the position on the start edge that serves as an anchoring point for the current edge; an end edge identifier that identifies the edge into which the current edge joins; an end position that indicates the position on the end edge that serves as an anchoring point for the current edge; and a sequence indicating the nucleotide sequence of the current edge; and storing the edge identifier, start edge identifier, start position, end edge identifier, end position, and sequence for each generated graph edge in the data storage. Based on this genome graph data structure, we further propose a scheme for specifying a path, which may traverse one or more edges, and ways to extend existing genomic data formats such as SAM, VCF and MPEG-G to support the use of a genome graph reference using our proposed coordinate system.
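The per-edge record translates directly into a small data structure; the sketch below follows the field list of the abstract, with a plain dict standing in for the data storage:

```python
from dataclasses import dataclass

@dataclass
class GraphEdge:
    edge_id: int        # uniquely identifies this edge in the genome graph
    start_edge_id: int  # edge from which this edge branches out
    start_pos: int      # anchoring position on the start edge
    end_edge_id: int    # edge into which this edge joins
    end_pos: int        # anchoring position on the end edge
    sequence: str       # nucleotide sequence carried by this edge

storage = {}  # stand-in for the data storage
# A SNP branching off the linear reference (edge 0) at position 100 and
# rejoining it at position 101:
e = GraphEdge(edge_id=1, start_edge_id=0, start_pos=100,
              end_edge_id=0, end_pos=101, sequence="T")
storage[e.edge_id] = e
print(storage[1].sequence)  # T
```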
Methods of encoding and decoding immersive video are provided. In an encoding method, source video data comprising a plurality of source views is encoded into a video bitstream. At least one of the source views is down-sampled prior to encoding. A metadata bitstream associated with the video stream comprises metadata describing a configuration of the down-sampling, to assist a decoder to decode the video bitstream. It is believed that the use of down-sampled views may help to reduce coding artifacts, compared with a patch-based encoding approach. Also provided are an encoder and a decoder for immersive video, and an immersive video bitstream.
A computer-implemented system and method are for alerting an expectant mother to a medical risk during pregnancy. A profile of the expectant mother is used, and reports are received from the expectant mother identifying experienced symptoms. In response to a report at a particular time, any reports received over a subsequent time window are monitored, and based on the combination of reports received during the time window and the profile of the expectant mother, the need for a risk alert is determined. The user is more willing to report symptoms, because a risk alert (which functions as reporting symptoms to a medical expert) only takes place when a real risk is identified.
A61B 5/00 - Measuring for diagnostic purposes; identification of persons
G08B 21/02 - Alarms for ensuring the safety of persons
G16H 50/20 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics; for computer-aided diagnosis, e.g. based on medical expert systems
G16H 50/30 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics; for calculating health indices; for individual health risk assessment
Methods of encoding and decoding depth data are disclosed. In an encoding method, depth values and occupancy data are both encoded into a depth map. The method adapts how the depth values and occupancy data are converted to map values in the depth map. For example, it may adaptively select a threshold, above or below which all values represent unoccupied pixels. By adapting how the depth and occupancy are encoded, based on analysis of the depth values, the method can enable more effective encoding and transmission of the depth data and occupancy data. The encoding method outputs metadata defining the adaptive encoding. This metadata can be used by a corresponding decoding method, to decode the map values. Also provided are an encoder and a decoder for depth data, and a corresponding bitstream, comprising a depth map and its associated metadata.
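One hedged instance of the adaptive scheme: map values below a chosen threshold signal unoccupied pixels, while values at or above it carry quantised depth, and the metadata records the threshold and depth range used. The specific threshold and quantisation rule here are invented:

```python
import numpy as np

def encode(depths, occupied, levels=256, thresh=16):
    d_min = float(depths[occupied].min())
    d_max = float(depths[occupied].max())
    q = (depths - d_min) / max(d_max - d_min, 1e-9)   # normalise to 0..1
    vals = thresh + np.round(q * (levels - 1 - thresh))
    vals = np.where(occupied, vals, 0).astype(np.uint8)
    # Metadata defining the adaptive encoding, for the decoder.
    return vals, {"threshold": thresh, "d_min": d_min, "d_max": d_max}

def decode(vals, meta):
    v = vals.astype(float)
    occupied = v >= meta["threshold"]
    q = (v - meta["threshold"]) / (255 - meta["threshold"])
    return meta["d_min"] + q * (meta["d_max"] - meta["d_min"]), occupied

d = np.array([[1.0, 2.0], [3.0, 4.0]])
occ = np.array([[True, True], [False, True]])
vals, meta = encode(d, occ)
depths, occ2 = decode(vals, meta)
print(vals)  # [[ 16  96] [  0 255]]
print(occ2)  # [[ True  True] [False  True]]
```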
An apparatus for evaluating a quality for image capture comprises a store (101) for storing a model of a scene and a capture circuit (105) for generating virtual captured images for a camera configuration by rendering from the model. A depth generation circuit (107) generates model depth data from the model and a depth estimation circuit (111) generates estimated depth data from the virtual captured images. A first synthesis circuit (109) and a second synthesis circuit (113) generate first and second view images for test poses by processing the virtual captured images based on the model depth data or estimated depth data respectively. A reference circuit (103) generates reference images for the test poses by rendering based on the model. A quality circuit (115) generates a quality metric based on a comparison of the first view images, the second view images, and the reference images.
There is provided a method for analysing medical image data (34) in a virtual multi-user collaboration, wherein the medical image data (34) is analysed by at least two users (A, N, C, S), each user having his/her own workspace (30), wherein the workspace is a VR-, AR- and/or MR-workspace, the method including the steps of providing medical image data (34) including 3D or 4D image information, loading the medical image data (34) into the workspace (30) of each user so as to simultaneously display a visualization of the medical image data (34) to each user, allowing each user to individually and independently from each other change the visualization of the medical image data (34), so as to obtain an individual visualization of the medical image data (34) in each workspace (30) pertaining to each user, allowing at least one user to execute an analysing process of the medical image data (34) in his/her workspace, displaying the result of the analysing process in the workspace (30) in which the analysing process was carried out, and synchronizing the result of the analysing process in real-time with the at least one other workspace (30) such that each workspace (30) displays the result of the analysing process in the respective individual visualization of the medical image data (34). Further, there is provided a computer program relating to the above method. In addition, a user interface and a system used during execution of the above method are provided.
An electrical breast pump (10) comprising a vacuum source (100) having a vacuum pump (120) with an electrical motor (121) and an aerate valve (150) is provided. Furthermore, a controller (110) controls an operation of the vacuum pump (120) and the aerate valve (150). Each pumping cycle comprises a pumping period (PP) of the vacuum pump (120) and an aerate period (AP) during which the aerate valve (150) is switched on, the vacuum pump (120) is inactive and the electrical motor (121) is switched off. A drive circuit supplies a motor supply voltage for the electrical motor (121) under the control of the controller (110). The drive circuit (140) detects an electromotive force induced voltage at the electrical motor (121) when the motor supply voltage is switched off. The controller adapts the control of the operation of the vacuum pump (120) based on the detected induced voltage.
A two-phase recommendation system for a recommendation device, employing both an external recommendation process and a recommendation process internal to the recommendation device. In particular, a processing unit uses a first data file, which is modifiable by an external source, and a second data file stored in a memory unit to recommend one or more content items to a user. The first and second data files are stored in a memory unit of the recommendation device.
67.
PRESSURE SENSOR FOR BEING INTRODUCED INTO THE CIRCULATORY SYSTEM OF A HUMAN BEING
The invention relates to a passive pressure sensor (501) for being introduced into the circulatory system of a human being and for being wirelessly read out by an outside reading system. The pressure sensor comprises a casing (502) with a diffusion blocking layer for maintaining a predetermined pressure within the casing and a magneto-mechanical oscillator with a magnetic object (508) providing a permanent magnetic moment. The magneto-mechanical oscillator transduces an external magnetic or electromagnetic excitation field into a mechanical oscillation of the magnetic object, wherein at least a part of the casing is flexible so as to allow external pressure changes to be transduced into changes of the mechanical oscillation of the magnetic object. The pressure sensor can be very small and nevertheless provide high quality pressure sensing.
A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
A61B 5/0215 - Measuring pressure in heart or blood vessels by means inserted into the body
G01L 9/00 - Measuring steady or quasi-steady pressure of a fluid or a fluent solid material by electric or magnetic pressure-sensitive elements; Transmitting or indicating the displacement of mechanical pressure-sensitive elements, used to measure the steady or quasi-steady pressure of a fluid or fluent solid material, by electric or magnetic means
68.
TRACKING SYSTEM AND MARKER DEVICE TO BE TRACKED BY THE TRACKING SYSTEM
A tracking system for tracking a marker device for being attached to a medical device is provided. The marker device includes a sensing unit comprising a magnetic object which may be excited into mechanical oscillation by an external magnetic or electromagnetic excitation field. The tracking system comprises a field generator for generating a predetermined magnetic or electromagnetic excitation field for inducing mechanical oscillations of the magnetic object, a transducer for transducing a magnetic or electromagnetic field generated by the induced mechanical oscillations of the magnetic object into one or more electrical response signals, and a position determination unit for determining the position of the marker device on the basis of the one or more electrical response signals.
A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
A61B 5/06 - Devices, other than using radiation, for detecting or locating foreign bodies
A61B 34/20 - Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
A61B 90/00 - Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups, e.g. for luxation treatment or for protecting wound edges
A61M 25/01 - Introducing, guiding, advancing, emplacing or holding catheters
An apparatus for generating an image signal comprises a receiver (401) for receiving source images representing a scene. A combined image generator (403) generates combined images from the source images. Each combined image is derived from only parts of at least two images of the source images. An evaluator (405) determines prediction quality measures for elements of the source images, where the prediction quality measure for an element of a first source image is indicative of a difference between pixel values in the first source image and predicted pixel values for pixels in the element. The predicted pixel values are pixel values resulting from prediction of pixels from the combined images. A determiner (407) determines segments of the source images comprising elements for which the prediction quality measure is indicative of a difference above a threshold. An image signal generator (409) generates an image signal comprising image data representing the combined images and the segments of the source images.
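A minimal sketch of the prediction quality measure, assuming a per-block mean absolute error between source and predicted pixels and an illustrative threshold:

    import numpy as np

    def prediction_quality(source, predicted, block=8):
        """Per-block mean absolute prediction error: high values mark blocks
        of the source image that the combined images predict poorly."""
        h, w = source.shape
        hb, wb = h // block, w // block
        err = np.abs(source[:hb * block, :wb * block].astype(float)
                     - predicted[:hb * block, :wb * block].astype(float))
        return err.reshape(hb, block, wb, block).mean(axis=(1, 3))

    rng = np.random.default_rng(0)
    src = rng.integers(0, 256, (64, 64))
    pred = src.copy()
    pred[8:16, 8:16] = 0                             # one badly predicted region
    segments = prediction_quality(src, pred) > 10.0  # threshold per the abstract
    print(np.argwhere(segments))                     # -> [[1 1]]: block (1, 1) is kept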
An image source (407) provides an image divided into segments of different sizes, with only a subset of these comprising image data. A metadata generator (409) generates metadata structured in accordance with a tree data structure in which each node is linked to a segment of the image. Each node is either a branch node linking to child nodes whose segments are subdivisions of the parent node's segment, or a leaf node which has no children. A leaf node is either an unused leaf node linked to a segment for which the first image comprises no image data, or a used leaf node linked to a segment for which the first image comprises image data. The metadata indicates whether each node is a branch node, a used leaf node, or an unused leaf node. An image signal generator (405) generates an image signal comprising the image data of the first image and the metadata.
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
H04N 19/88 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving rearrangement of data among different coding units, e.g. shuffling, interleaving, scrambling or permutation of pixel data or permutation of transform coefficient data among different blocks
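A minimal sketch of the tree-structured metadata described in the preceding abstract, assuming a quadtree over a boolean occupancy map; the node encoding and names are illustrative, not from the source:

    import numpy as np

    BRANCH, USED, UNUSED = 0, 1, 2

    def build_tree(has_data, x, y, size, min_size=8):
        """Recursively link segments of an occupancy map to tree nodes."""
        region = has_data[y:y + size, x:x + size]
        if not region.any():
            return {"type": UNUSED}            # no image data for this segment
        if region.all() or size <= min_size:
            return {"type": USED}              # segment carries image data
        half = size // 2
        return {"type": BRANCH, "children": [
            build_tree(has_data, x + dx, y + dy, half, min_size)
            for dy in (0, half) for dx in (0, half)]}

    occupancy = np.zeros((32, 32), dtype=bool)
    occupancy[:16, :16] = True                 # only one quadrant carries data
    tree = build_tree(occupancy, 0, 0, 32)
    print([child["type"] for child in tree["children"]])   # [1, 2, 2, 2]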
A method of processing depth maps comprises receiving (301) images and corresponding depth maps. The depth values of a first depth map of the corresponding depth maps are updated (303) based on depth values of at least a second depth map of the corresponding depth maps. The update is based on a weighted combination of candidate depth values determined from the other depth maps. A weight for a candidate depth value from the second depth map is determined based on the similarity between a pixel value in the first image corresponding to the depth value being updated and a pixel value in a third image at a position determined by projecting the position of the depth value being updated into the third image using the candidate depth value. More consistent depth maps may be generated in this way.
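A minimal sketch of the weighted update for one row, assuming a rectified camera setup so that projection reduces to a horizontal disparity; f_times_b (focal length times baseline) and sigma are illustrative constants:

    import numpy as np

    def update_depth_row(d1, d2, img1, img3, f_times_b=100.0, sigma=10.0):
        """Each candidate depth (own value plus the value from the second
        depth map) is weighted by how similar the first image's pixel is to
        the third image's pixel that the candidate projects to."""
        out = np.empty_like(d1, dtype=float)
        for x in range(len(d1)):
            weights, values = [], []
            for cand in (float(d1[x]), float(d2[x])):
                x3 = x + int(round(f_times_b / cand))   # projection via candidate
                if 0 <= x3 < len(img3):
                    diff = float(img1[x]) - float(img3[x3])
                    weights.append(np.exp(-diff * diff / (sigma * sigma)))
                    values.append(cand)
            out[x] = (np.dot(weights, values) / sum(weights)) if weights else d1[x]
        return out

    img1 = np.array([10, 10, 200, 200], dtype=float)
    img3 = np.array([10, 10, 200, 200], dtype=float)
    d1 = np.array([50.0, 50.0, 25.0, 25.0])
    d2 = np.array([50.0, 50.0, 50.0, 50.0])
    print(update_depth_row(d1, d2, img1, img3))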
An apparatus comprises receivers (201, 203) receiving texture maps and meshes representing a scene from a first and second view point. An image generator (205) determines a light intensity image for a third view point based on the received data. A first view transformer (207) determines first image positions and depth values in the image for vertices of the first mesh and a second view transformer (209) determines second image positions and depth values for vertices of the second mesh. A first shader (211) determines a first light intensity value and a first depth value based on the first image positions and depth values, and a second shader (213) determines a second light intensity value and a second depth value from the second image positions and depth values. A combiner (215) generates an output value as a weighted combination of the first and second light intensity values, where the weighting of a light intensity value increases for an increasing depth value.
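A minimal sketch of the depth-weighted combination, assuming a soft sigmoid weighting so that the value with the larger depth value dominates smoothly; the steepness is an illustrative choice:

    import numpy as np

    def blend(light1, depth1, light2, depth2, steepness=10.0):
        """Weighted combination of two shaded values where the weight of a
        light intensity value increases with its depth value (a soft depth
        test; the sigmoid steepness is an illustrative assumption)."""
        w1 = 1.0 / (1.0 + np.exp(-steepness * (depth1 - depth2)))
        return w1 * light1 + (1.0 - w1) * light2

    # The value with the larger depth dominates, but the transition is smooth,
    # which avoids hard seams where the two meshes cross.
    print(blend(0.9, 5.0, 0.2, 4.8))   # ~0.82: first view mostly wins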
An apparatus comprises a receiver (301) for receiving an image representation of a scene. A determiner (305) determines viewer poses for a viewer with respect to a viewer coordinate system. An aligner (307) aligns a scene coordinate system with the viewer coordinate system by aligning a scene reference position with a viewer reference position in the viewer coordinate system. A renderer (303) renders view images for different viewer poses in response to the image representation and the alignment of the scene coordinate system with the viewer coordinate system. An offset processor (309) determines the viewer reference position in response to an alignment viewer pose where the viewer reference position is dependent on an orientation of the alignment viewer pose and has an offset with respect to a viewer eye position for the alignment viewer pose. The offset includes an offset component in a direction opposite to a view direction of the viewer eye position.
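A minimal sketch of the offset described above: the viewer reference position is displaced from the eye position in the direction opposite to the view direction; the offset magnitude is an illustrative assumption:

    import numpy as np

    def viewer_reference_position(eye_pos, view_dir, offset_back=0.5):
        """Scene alignment point offset backwards from the viewer's eye along
        the view direction (offset_back in metres is illustrative)."""
        view_dir = np.asarray(view_dir, dtype=float)
        view_dir /= np.linalg.norm(view_dir)
        return np.asarray(eye_pos, dtype=float) - offset_back * view_dir

    print(viewer_reference_position([0, 1.7, 0], [0, 0, -1]))   # -> [0.  1.7 0.5]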
An apparatus comprises a receiver (301) receiving an image signal representing a scene. The image signal includes image data comprising a number of images where each image comprises pixels that represent an image property of the scene along a ray having a ray direction from a ray origin. The ray origins are at different positions for at least some pixels. The image signal further comprises a plurality of parameters describing a variation of the ray origins and/or the ray directions for pixels as a function of pixel image positions. A renderer (303) renders images from the number of images based on the plurality of parameters.
H04N 13/117 - Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
H04N 13/178 - Metadata, e.g. disparity information
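A minimal sketch of the per-pixel ray model of the preceding abstract, assuming ray origin and direction vary linearly with pixel position; all parameter names are invented for this sketch:

    import numpy as np

    def pixel_ray(u, v, params):
        """Reconstruct a per-pixel ray from a parametric model: origin and
        direction vary as (here, linear/angular) functions of pixel position."""
        origin = np.array(params["origin0"]) + u * np.array(params["origin_du"])
        yaw = params["yaw0"] + u * params["yaw_du"]       # direction varies too
        pitch = params["pitch0"] + v * params["pitch_dv"]
        direction = np.array([np.sin(yaw) * np.cos(pitch),
                              np.sin(pitch),
                              -np.cos(yaw) * np.cos(pitch)])
        return origin, direction

    params = {"origin0": [0.0, 0.0, 0.0], "origin_du": [0.001, 0.0, 0.0],
              "yaw0": -0.5, "yaw_du": 0.001, "pitch0": 0.0, "pitch_dv": 0.001}
    o, d = pixel_ray(100, 50, params)   # pixel (100, 50) has its own ray origin
    print(o, d)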
A teat (1) comprises a hollow teat body (10) including a deformable hollow mouthpiece (11) and a normally-closed valve (30) arranged at a level of the mouthpiece (11) or a level more downstream, the valve (30) being openable under the influence of suction forces exerted on the mouthpiece (11) by a user of the teat (1) during a liquid intake action. The valve (30) is included in a valve area (32) of a valve body (31) that is configured and arranged to prevent a closed-opened condition of the valve (30) from being changed under the influence of deformation of the mouthpiece (11) inflicted by a user of the teat (1) during a liquid intake action, so that the closed-opened condition of the valve (30) is controllable by means of suction forces exerted on the mouthpiece (11) by a user of the teat (1) during a liquid intake action.
The invention relates to an apparatus for generating or processing an image signal. A first image property pixel structure is a two-dimensional non-rectangular pixel structure representing a surface of a view sphere for a viewpoint. A second image property pixel structure is a two-dimensional rectangular pixel structure and is generated by a processor (305) to have a central region derived from a central region of the first image property pixel structure and at least a first corner region derived from a first border region of the first image property pixel structure. The first border region is a region proximal to one of an upper border and a lower border of the first image property pixel structure. The image signal is generated to include the second image property pixel structure, and the image signal may be processed by a receiver to recover the first image property pixel structure.
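A minimal sketch of deriving the rectangular second structure, assuming the central rows are copied as-is and a band near the upper border is moved into a corner; the split proportions are illustrative:

    import numpy as np

    def pack(first):
        """Pack a non-rectangular (e.g. tapered-pole) structure into a
        rectangle: central rows are kept in place, a border band becomes
        a corner region of the output."""
        h, w = first.shape
        central, border = first[h // 4: h, :], first[: h // 4, : w // 2]
        second = np.zeros((h, w), dtype=first.dtype)
        second[: 3 * h // 4, :] = central          # central region
        second[3 * h // 4:, : w // 2] = border     # border region becomes a corner
        return second

    img = np.arange(64).reshape(8, 8)
    print(pack(img).shape)   # (8, 8): rectangular structure ready for coding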
A baby bottle device (100) is provided which comprises a container (110) with a container volume (115), a teat (120) with a teat volume (125) and a valve unit (200). The teat comprises a first valve (112) for letting air into the inside volume of the bottle (100), which corresponds to the container volume (115) plus the teat volume (125). The valve unit (200) with a second valve (210, 230) is arranged outside the teat volume (125). The opening threshold of the valve unit (200) is lower than that of the first valve (112) in the teat.
A baby bottle device (100) is provided which comprises a teat (110) having a teat volume (115), a container (120) having a container volume (125) and a partitioning element (300) between the teat volume (115) and the container volume (125). The partitioning element (300) comprises a plurality of openings (310) for letting fluid in the container volume (125) flow into the teat volume (115). The baby bottle device comprises a floater (400) having a buoyancy, being coupled to the partitioning element (300) and being adapted to close at least one of the plurality of openings (310) in the partitioning element (300).
The present invention relates to a cutter assembly (10) for a hair cutting appliance (100). The cutter assembly comprises a cutting element (20), and a clamping element (30). The cutter assembly is configured to contact skin of a user of the hair cutting appliance. The clamping element is configured to clamp hair of the user that is growing out of the skin. The clamping element is configured to move within the cutter assembly to pull the clamped hair away from the skin of the user. The cutting element is configured to cut the clamped hair that has been pulled away from the skin of the user.
B26B 19/42 - Details of, or accessories for, hair clippers or dry shavers, e.g. housings, casings, grips or guards providing for straightening the hair to be cut, e.g. by means of bristles; Details of, or accessories for, hair clippers or dry shavers, e.g. housings, casings, grips or guards providing for tensioning the skin, e.g. by means of rollers, ledges
A baby bottle device (100) is provided which comprises at least one movement sensor (140, 150) for detecting a movement of the baby bottle device (100). The movement data from the movement sensor (140, 150) is analyzed in an analyzer (200) to perform a suck-swallow-breathe analysis during a drinking phase of the baby based on the movement data from the movement sensor (140, 150). Thus, a drinking behavior of a baby can be efficiently analyzed.
G16H 40/00 - ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
An image synthesis apparatus comprises a receiver (301) for receiving image parts and associated depth data of images representing a scene from different view poses from an image source. A store (311) stores a depth transition metric for each image part of a set of image parts, where the depth transition metric for an image part is indicative of a direction of a depth transition in the image part. A determiner (305) determines a rendering view pose and an image synthesizer (303) synthesizes at least one image from received image parts. A selector is arranged to select a first image part of the set of image parts in response to the depth transition metric, and a retriever (309) retrieves the first image part from the image source. The synthesis of an image part for the rendering view pose is based on the first image part.
H04N 13/117 - Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
H04N 21/218 - Source of audio or video content, e.g. local disk arrays
H04N 21/231 - Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers or prioritizing data for deletion
H04N 21/6587 - Control parameters, e.g. trick play commands or viewpoint selection
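A minimal sketch of the depth transition metric of the preceding abstract, assuming the summed horizontal depth gradient encodes the transition direction, and that parts whose transition matches the rendering pose's displacement are preferred; all names and the selection rule are illustrative:

    import numpy as np

    def depth_transition_metric(depth_part):
        """Dominant horizontal depth gradient: sign encodes the direction of
        the foreground/background transition, magnitude its strength."""
        gx = np.diff(depth_part.astype(float), axis=1)
        return gx.sum()

    def select_part(parts, rendering_pose_dx):
        """Prefer the part whose transition direction matches the side the
        rendering view pose has moved to (disocclusions appear there)."""
        return max(parts,
                   key=lambda p: rendering_pose_dx * depth_transition_metric(p))

    left = np.array([[10, 10, 50, 50]])    # background on the left
    right = np.array([[50, 50, 10, 10]])   # background on the right
    chosen = select_part([left, right], rendering_pose_dx=+1.0)
    print(depth_transition_metric(chosen))  # positive: transition suits the pose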
An antifouling system for reducing and/or preventing fouling of an object exposed to fouling conditions when in use, comprising a plurality of antifouling devices (26) for providing an antifouling radiation to at least part of the object and/or at least part of the antifouling system. The antifouling system further comprises a power transmission system comprising an inductive power emitter (10) comprising at least one inductive emitter element (12), and a plurality of inductive power receivers (24) each comprising at least one inductive receiver element. The inductive power emitter and the plurality of inductive power receivers are for mounting on the object in a fixed configuration with respect to each other, thereby to provide an inductive coupling between each one of the at least one inductive receiver elements and the at least one inductive emitter element such that power may be inductively transmitted when the power transmission system is in use. The plurality of antifouling devices (26) are configured to be driven using transmitted power from at least one of the plurality of inductive power receivers when the system is in use.
In the context of anti-biofouling of marine objects, a light emitting unit is configured to be applied to a surface area of a marine object and comprises at least one light source (12, 13) configured to emit anti-fouling light, and two electrically conductive plates (14, 15), wherein the at least one light source (12, 13) is electrically connected at one side to one of the plates (14, 15) and at the other side to an electric energy distribution arrangement of the light emitting unit. The plates (14, 15) are arranged to constitute respective capacitors (21, 22) with an electrically conductive surface area of, or over, the marine object, said capacitors (21, 22) being connected in series through the electrically conductive surface area once the light emitting unit is actually applied to a surface area of a marine object.
An anti-fouling lighting system is used for protecting a surface (16) against biofouling while the surface (16) is submerged in water. A non-contact water sensor (60) is used for sensing water, thereby detecting whether or not a light source arrangement (26), or a portion of the light source arrangement (26), is submerged in water. The light source arrangement (26), or the portion of the light source arrangement (26), is controlled in dependence on the water sensor (60) output.
At least some applications in the total HDR video chain call for a more sophisticated approach, such as a high dynamic range video encoder (900) arranged to receive via an image input (920) an input high dynamic range image (MsterHDR) which has a first maximum pixel luminance (PB_C_H50), and to receive via a metadata input (921) a master luma mapping function (FL_50t1), which defines the relationship between normalized lumas of the input high dynamic range image and normalized lumas of a corresponding standard dynamic range image (Im_LDR) having a maximum pixel luminance of preferably 100 nit. The encoder further comprises a metadata input (923) to receive a second maximum pixel luminance (PB_CH), and comprises: a HDR function generation unit (901) arranged to apply a standardized algorithm to transform the master luma mapping function (FL_50t1) into an adapted luma mapping function (F_H2hCI), which relates normalized lumas of the input high dynamic range image to normalized lumas of an intermediate dynamic range image (IDR) whose maximum possible luminance equals the second maximum pixel luminance (PB_CH); an IDR image calculation unit (902) arranged to apply the adapted luma mapping function (F_H2hCI) to lumas of pixels of the input high dynamic range image (MsterHDR) to obtain lumas of pixels of the intermediate dynamic range image (IDR); and an IDR mapping function generator (903) arranged to derive, on the basis of the master luma mapping function (FL_50t1) and the adapted luma mapping function (F_H2hCI), a channel luma mapping function (F_I2sCI), which gives as output the respective normalized lumas of the standard dynamic range image (Im_LDR) when given as input the respective normalized lumas of the intermediate dynamic range image (IDR). The encoder outputs the intermediate dynamic range image (IDR), as first metadata the second maximum pixel luminance (PB_CH), as second metadata the channel luma mapping function (F_I2sCI), and as third metadata the first maximum pixel luminance (PB_C_H50).
H04N 19/179 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a scene or a shot
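A minimal numeric sketch of the decomposition in the preceding abstract, with simple power laws standing in for the real luma mapping functions: the channel function F_I2sCI must satisfy F_I2s(F_H2h(x)) = FL_50t1(x) for all normalized lumas x, so it can be derived by inverting the adapted mapping. The exponents are illustrative, not from the source:

    import numpy as np

    def f_master(x):       # FL_50t1: master HDR -> SDR lumas (illustrative)
        return x ** (1 / 2.4)

    def f_adapted(x):      # F_H2hCI: master HDR -> IDR lumas (illustrative)
        return x ** (1 / 1.6)

    def f_channel(y_idr):  # F_I2sCI: IDR -> SDR, derived by inverting F_H2hCI
        return f_master(y_idr ** 1.6)

    x = np.linspace(0.01, 1.0, 5)            # normalized master-HDR lumas
    idr = f_adapted(x)                       # lumas of the IDR image
    assert np.allclose(f_channel(idr), f_master(x))   # the decomposition holds
    print(np.round(f_channel(idr), 3))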
The invention relates to a feeding bottle device (100), a feeding method, and a partitioning component (210) for a feeding bottle device, the device comprising a teat component (110) defining a teat volume (115) therein and a container component (120) defining a container volume (125) therein, the teat component (110) being attachable to the container component (120) by means of an attachment component (130). The partitioning component (210) comprises a first passage (212) allowing a passage of air and liquid between the container volume (125) and the teat volume (115), and a second passage (214) allowing a passage of liquid and preventing a passage of air between the teat volume (115) and the container volume (125). The solution increases user convenience when operating the feeding bottle device without increasing the risk of colic-like symptoms for the infant while feeding in a horizontal or near-horizontal position.
A wireless pressure sensing unit (20) comprises a membrane (25) forming an outer wall portion of a cavity and two permanent magnets (26, 28) inside the cavity. One magnet is coupled to the membrane, and at least one magnet is free to oscillate with a rotational movement. The oscillation takes place at a resonance frequency which is a function of the sensed pressure, the pressure influencing the spacing between the two permanent magnets. This oscillation frequency can be sensed remotely by measuring a magnetic field altered by the oscillation. The wireless pressure sensing unit may be provided on a catheter (21) or guidewire.
A controller for registering a magnetic resonance imaging (MRI) image to a tracking space includes a memory that stores instructions and a processor that executes the instructions. The instructions cause the controller to execute a process that generates an image registration of a 3-dimensional magnetic resonance imaging volume in the tracking space based on 2-dimensional coordinates of a midsagittal plane of an organ, an image registration of the midsagittal plane of the organ, and a tracking position in the tracking space of an ultrasound image of the midsagittal plane.
A system (200) comprising a light source (220) configured to generate light source radiation (221), wherein the light source radiation (221) at least comprises UV radiation, wherein the system (200) further comprises a luminescent material (400) configured to convert part of the light source radiation (221) into luminescent material radiation (401), wherein the luminescent material radiation (401) comprises one or more of visible light and infrared radiation, wherein the system (200) is configured to generate system light (201) comprising the light source radiation (221) and the luminescent material radiation (401).
E02B 17/00 - Artificial islands mounted on piles or like supports, e.g. platforms on raisable legs; Construction methods therefor
F21K 9/64 - Optical arrangements integrated in the light source, e.g. for improving the colour rendering index or the light extraction using wavelength conversion means distinct or spaced from the light-generating element, e.g. a remote phosphor layer
F21V 9/30 - Elements containing photoluminescent material distinct from or spaced from the light source
F21V 9/32 - Elements containing photoluminescent material distinct from or spaced from the light source characterised by the arrangement of the photoluminescent material
Disclosed is an in vitro method for assessing whether a human patient has periodontitis. The method is based on the insight that biomarker proteins can be determined. Accordingly, in a saliva sample of a patient suffering from periodontitis, the concentrations are measured of the Free Light Chain κ protein and/or the Free Light Chain λ protein. Based on the concentration(s) as measured, a value is determined reflecting the concentration or joint concentrations for said protein or proteins. This value is compared with a threshold value reflecting in the same manner the concentration or joint concentrations associated with periodontitis. The comparison allows assessing whether the testing value is indicative of the presence of periodontitis in said patient. Typically, a testing value reflecting a concentration or joint concentration below that reflected by the threshold value is indicative of the absence of periodontitis in said patient, and a testing value reflecting a concentration or joint concentration at or above it is indicative of periodontitis in said patient.
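A minimal sketch of the threshold comparison described above; the weights and cut-off are illustrative placeholders, not validated clinical values:

    def assess_periodontitis(flc_kappa, flc_lambda, threshold=1.0, w=(1.0, 1.0)):
        """Compare a testing value reflecting the (joint) FLC concentration
        against a threshold value (all numbers here are illustrative)."""
        testing_value = w[0] * flc_kappa + w[1] * flc_lambda
        return testing_value >= threshold   # True: indicative of periodontitis

    print(assess_periodontitis(0.8, 0.6))   # True  (1.4 >= 1.0)
    print(assess_periodontitis(0.2, 0.3))   # False (0.5 <  1.0)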
An apparatus for generating view images for a scene comprises a store (101) which stores three dimensional scene data representing the scene from a viewing region. The three dimensional scene data may e.g. be images and depth maps captured from capture positions within the viewing region. A movement processor (105) receives motion data, such as head or eye tracking data, for a user and determines an observer viewing position and an observer viewing orientation from the motion data. A change processor (109) determines an orientation change measure for the observer viewing orientation, and an adapter (111) is arranged to reduce a distance between the observer viewing position and the viewing region in response to the orientation change measure. An image generator (103) generates view images for the observer viewing position and the observer viewing orientation from the scene data.
An electric current supply system (20) is designed to be at least partially submerged in an electrically conductive liquid during operation, and comprises at least one electrically conductive component (21, 22, 23, 24) enveloped in liquid-tight material (40). The component (21, 22, 23, 24) comprises sacrificial material that is capable of reacting electrochemically with the liquid. Further, the component (21, 22, 23, 24) comprises at least one gas trap portion (50) at which the sacrificial material occupies a space in the liquid-tight material (40) that is thereby defined with a gas trapping shape. If, in case of damage to the system (20) in its actual submerged state, the component (21, 22, 23, 24) is exposed to the liquid, the electrochemical reaction occurring at the exposed area of the component (21, 22, 23, 24) and the outflow of electric current to the liquid are stopped.
A light emitting device (1) is provided that can be used in various contexts, including the context of realizing an anti-fouling action on surfaces. The light emitting device (1) comprises light emitting units (10) arranged in a plane filling pattern (20) for covering at least a substantial portion of a surface. Individual light emitting units (10) are electrically interconnected through connection areas (12, 13) present on the light emitting units (10) for providing electrical access to an internal electrical circuit (11) thereof, wherein the light emitting units (10) overlap at the positions of at least portions of their connection areas (12, 13). Further, at least one of the connection areas (12, 13) of an individual light emitting unit (10) may be electrically connected simultaneously to respective connection areas (12, 13) of at least two other light emitting units (10).
The invention provides a layer stack (500) comprising a first silicone layer (510), wherein the first silicone layer (510) has a first surface (511) and a second surface (512), wherein the first silicone layer (510) is transmissive for UV radiation having one or more wavelengths selected from the range of 200-380 nm, wherein the layer stack (500) further comprises one or more of: a first layer element configured at a first side of the first surface (511), wherein the first layer element is associated by a chemical binding with the first surface (511) directly or via a first intermediate layer, which is transmissive for UV radiation having one or more wavelengths selected from the range of 200-380 nm, wherein the first layer element at least comprises a first layer differing in composition from the first silicone layer (510), and wherein the first layer element is transmissive for UV radiation having one or more wavelengths selected from the range of 200-380 nm; and a second layer element (620) configured at a second side of the second surface (512), wherein the second layer element (620) is associated by a chemical binding with the second surface (512) directly or via a second intermediate layer, wherein the second layer element (620) at least comprises a second layer (1220) differing in composition from the first silicone layer (510).
A first electronic network node (110) is provided, configured for a key exchange (KEX) protocol. The first network node is configured to: obtain a shared matrix (A) shared with a second network node, entries in the shared matrix A being selected modulo a first modulus q; generate a private key matrix (S_I), entries in the private key matrix being bounded in absolute value by a bound (s); and generate a public key matrix (P_I) by computing a matrix product between the shared matrix (A) and the private key matrix (S_I) modulo the first modulus (q) and scaling the entries in the matrix product down to a second modulus (p).
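A toy numeric sketch of the key generation steps above, with deliberately small and insecure parameters; the real scheme's parameter sizes and rounding details differ:

    import numpy as np

    q, p, n, s_bound = 2048, 256, 8, 2    # illustrative toy parameters, not secure
    rng = np.random.default_rng(42)

    A = rng.integers(0, q, (n, n))                       # shared matrix, mod q
    S_I = rng.integers(-s_bound, s_bound + 1, (n, n))    # private key, small entries

    # Public key: matrix product mod q, then scaled down to the second modulus p.
    # Scaling discards low-order bits, which adds the noise the scheme relies on.
    P_I = ((A @ S_I) % q) * p // q
    print(P_I.shape, int(P_I.min()) >= 0, int(P_I.max()) < p)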
A base station and a user equipment for a wireless communication network having a plurality of logical radio access networks are described. The base station communicates with a plurality of users to be served by the base station for accessing one or more of the logical radio access networks, and selectively controls the physical resources of the wireless communication network assigned to the logical radio access networks and/or controls access of the users or user groups to one or more of the logical radio access networks. The user equipment, for accessing at least one of the logical radio access networks, receives and processes a control signal from the base station, which indicates the physical resources of the wireless communication network assigned to the logical radio access network and/or includes access control information for the user equipment for accessing the logical radio access network.
A first device and a second device are disclosed for reaching agreement on a secret value. Herein, the second device comprises a receiver configured to receive information indicative of reconciliation data h from the first device, and a processor configured to compute a common secret s based on an integer value b, an equation, and system parameters. The processor is configured to compute b based on a key exchange protocol. The first device has a number a in approximate agreement with the number b. The first device comprises a processor configured to determine a common secret s based on an integer value a, an equation, and system parameters, and to determine the reconciliation data h. The first device further comprises a transmitter configured to transmit information indicative of the reconciliation data h to the second device.
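A toy sketch of one possible reconciliation mechanism; the actual equation and encoding of h in the source differ, so treat this purely as an illustration of how helper data lets two close numbers yield the same secret:

    def reconcile_send(a, t=4, B=2, q=2**12):
        """First device: secret from the top B bits of a; reconciliation data
        h is the low t bits (toy construction; real schemes send fewer bits
        and use more careful rounding)."""
        s = (a * 2**B) // q          # common secret (B bits)
        h = a % 2**t                 # reconciliation data
        return s, h

    def reconcile_receive(b, h, t=4, B=2, q=2**12):
        """Second device: nudge b to the nearest value whose low t bits equal
        h, recovering a exactly whenever |a - b| < 2**(t-1) (and no mod-q
        wraparound occurs)."""
        c = (h - b) % 2**t
        if c >= 2**(t - 1):
            c -= 2**t                # centred correction in [-2^(t-1), 2^(t-1))
        return ((b + c) * 2**B) // q

    a = 2500                         # first device's number
    b = a + 5                        # second device's approximation of a
    s_a, h = reconcile_send(a)
    s_b = reconcile_receive(b, h)
    assert s_a == s_b
    print(s_a)                       # both devices agree on the secret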
A light emitting arrangement (100) for anti-fouling of a surface (30), comprises an optical medium (10) and at least one light source (20) for emitting anti-fouling light. A first zone (1) of the arrangement (100), which is closest to the light source (20), is arranged and configured to predominantly make the anti-fouling light reflect in a specular manner towards an emission surface (12) of the optical medium (10), through the optical medium (10), a second zone (2) of the arrangement (100) is arranged and configured to predominantly realize propagation of the anti-fouling light through the optical medium (10) by total internal reflection, and a third zone (3) of the arrangement (100), which is furthest away from the light source (20), is arranged and configured to predominantly make the anti-fouling light scatter out of the optical medium (10), through the emission surface (12) of the optical medium (10).
The invention provides a light guide element (1300) comprising a light guide (300) and a layer element (30), wherein the light guide (300) comprises a light guide face (301) and wherein the layer element (30) comprises an optical layer (310), wherein said optical layer (310) is in contact with at least part of the light guide face (301), wherein the optical layer (310) has a first index of refraction (n1) smaller than 1.36 at 280 nm, wherein the light guide (300) comprises a UV radiation transmissive light guide material (305).
F21V 8/00 - Use of light guides, e.g. fibre optic devices, in lighting devices or systems
G02B 6/10 - Light guides; Structural details of arrangements comprising light guides and other optical elements, e.g. couplings of the optical waveguide type
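A small worked computation for the preceding abstract, assuming a UV-grade fused-silica light guide (index about 1.49 at 280 nm, an illustrative assumption) against an optical layer at the claimed upper index bound:

    import math

    # A low-index optical layer (n1 < 1.36 at 280 nm) supports total internal
    # reflection in the UV light guide: rays hitting the coated face at angles
    # beyond the critical angle (measured from the face normal) stay inside.
    n_guide, n_layer = 1.49, 1.35
    theta_c = math.degrees(math.asin(n_layer / n_guide))
    print(f"critical angle: {theta_c:.1f} deg")   # ~65 deg; more grazing rays are trapped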
100.
AN ASSEMBLY COMPRISING AT LEAST TWO ELEMENTS IN A MOVABLE ARRANGEMENT RELATIVE TO EACH OTHER AND AN ANTI-FOULING SYSTEM
An assembly (2) comprising at least two elements (10, 24) in a movable arrangement relative to each other for performing a first function of the assembly (2) is furthermore equipped with an anti-biofouling system (30) for subjecting at least an area (11) of the assembly (2), which is to be at least partially exposed to water during at least a part of its lifetime, to an anti-biofouling action as an additional function of the assembly (2), different from the first function. The anti-biofouling system (30) comprises one or more light sources (32) for emitting anti-biofouling light, and is adapted to realize coverage of the area (11) with the anti-biofouling light on the basis of the movable arrangement of the at least two elements (10, 24) relative to each other for performing the first function of the assembly (2).