An electronic assembly with a heat spreading coating has a PCB carrying electric conducting traces and heat producing electronic components. An electrically isolating polymeric coating is applied over the electric conducting traces and the heat producing electronic components. The electrically isolating polymeric coating conforms to an irregular structure of the PCB. A heat spreading layer is applied over the polymeric coating. The heat spreading layer comprises: at least one heat spreading zone selected from the group consisting of: a plurality of graphene nano-platelets, a plurality of graphene particles, a plurality of boron-nitride particles, a plurality of graphene flakes, a plurality of boron-nitride flakes, at least one graphene sheet, and combinations thereof; and a binder, wherein the electrically isolating polymeric coating adheres to the PCB and covers the electric conducting traces and the heat producing electronic components, and the heat spreading layer conforms to the irregular structure of the polymeric coating.
Head wearable display (HWD) systems and methods of display reliability verification with respect to the HWD operated by a user are provided, which comprise tracking a position and orientation (P&O) of the HWD relative to hardware element(s) at known position(s), which are selected from a surroundings of the user and are visible to the user through the HWD, calculating position(s) of the hardware element(s) with respect to the HWD, and displaying, in the HWD and according to the calculated position(s), verification symbol(s) that correspond to the hardware element(s). Display reliability verification may be carried out automatically using a camera and an image processing module, optionally with an additional illuminator, which compares the elements' captured locations with their expected locations according to the tracking, and indicates possible discrepancies. Discrepancy identification may be used to initiate a correction process or to indicate that the presented symbology is unreliable.
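The verification step above amounts to comparing each element's captured on-display location with its expected location derived from the tracking, and flagging large differences. A minimal sketch in Python, assuming 2D pixel coordinates and an illustrative 3-pixel tolerance (both are assumptions, not taken from the abstract):

```python
import math

# Hypothetical sketch of the display reliability check: compare each
# element's location as captured by the camera with its expected location
# derived from the HWD tracking, and flag discrepancies.
DISCREPANCY_THRESHOLD_PX = 3.0  # assumed tolerance, in display pixels

def verify_display(captured_locations, expected_locations,
                   threshold=DISCREPANCY_THRESHOLD_PX):
    """Return a list of (index, error) pairs for unreliable symbols."""
    discrepancies = []
    for i, (cap, exp) in enumerate(zip(captured_locations, expected_locations)):
        error = math.dist(cap, exp)  # Euclidean distance in pixels
        if error > threshold:
            discrepancies.append((i, error))
    return discrepancies

# Example: the second symbol is displaced well beyond tolerance.
captured = [(100.5, 200.2), (330.0, 410.0)]
expected = [(100.0, 200.0), (320.0, 400.0)]
issues = verify_display(captured, expected)
```

A discrepancy list that is non-empty would then trigger the correction process or the unreliability indication described above.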
A transfer-alignment system for a Head-Mounted Display (HMD), and a display coupled to the HMD, wherein the display is adjustable by a user along at least one degree of freedom; and a processor configured to: obtain the HMD's inertial readings information, the display's inertial readings information and the information indicating the HMD's position and/or orientation with respect to the frame of reference; continuously analyze movement information of the HMD and movement information of the display to determine a relative orientation between the HMD and the display; and cause the display-processor to adjust the images to conform with respect to the frame of reference based on the information indicating the position and/or orientation and the relative movements of the HMD, wherein the frame of reference is selected from the group consisting of: a platform coordinate system, a fixed coordinate system established in space, an earth coordinate system, and any combination thereof.
The presently disclosed subject matter is directed to a head-mounted display (HMD) system including: (i) an image source configured to project one or more light beams associated with one or more respective images, (ii) a first optical element, including (a) a first receiving surface, (b) a first reflective element, and (c) a first projecting surface, at least one of which possesses optical power; (iii) a second optical element including (d) a second receiving surface, (e) a second reflective element, and (f) a second projecting surface, at least one of which possesses optical power; and (iv) an at least partially reflective element configured to enable the displaying of the one or more produced images to the user's eye while allowing said user to continue seeing the real world.
A method for irradiating an image in an optical see-through (OST) head mounted display (HMD), for viewing, through the OST HMD by a user's eye, an object having at least one of a known orientation, and a known position and orientation (O/P&O), associated with a first reference frame, the method comprising: generating and irradiating said image for appearing to said user superimposed in an aligned manner on said object, according to predetermined information, eyeball feature position data, and said O/P&O; said predetermined information relates correction data with a plurality of different respective position data values of at least one eyeball feature position of said eye; said predetermined information further includes display corrections of said electro-optical display module with respect to said position data values of said at least one eyeball feature position, with respect to a second reference frame; and said O/P&O is between said second reference frame and said first reference frame.
A flight simulator having an electronic display system for presenting viewpoint-adjusted synthetic images to two observers from a common display screen is provided herein. The flight simulator includes: a display surface; an image generator; a computer processor configured to: receive images and objects data representing a virtual scenery; adjust the images and the objects data so that the adjusted virtual scenery appears as observed from the different viewpoints associated with the respective locations of the two observers; and drive the at least one image generator to generate the multiple images of the adjusted virtual scenery, which are presented on the display surface to the two observers, respectively; and viewing devices located in front of the eyes of each of the observers, wherein each viewing device restricts its respective observer to viewing only the images of the virtual scenery that were adjusted to the location of that observer.
G02B 30/24 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images; by providing first and second parallax images to an observer's left and right eyes; of the stereoscopic type; involving temporal multiplexing, e.g. using sequentially activated left and right shutters
H04N 13/341 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] using temporal multiplexing
H04N 13/337 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] using polarisation multiplexing
A transfer-alignment system for a Head-Mounted Display (HMD), and a display coupled to the HMD, wherein the display is adapted to display images rendered by a display-processor, and wherein the HMD is monitored by a tracking system configured to provide information indicating position and/or orientation of the HMD with respect to a Frame of Reference (FoR), the system comprising: at least one first inertial sensor attached to the HMD and configured to acquire HMD's Inertial Readings Information (IRI); at least one second inertial sensor attached to the display and configured to acquire display's IRI; and a processor configured to: obtain the HMD's IRI, the display's IRI and the information indicating the HMD's position and/or orientation with respect to the FoR; continuously analyze movement information of the HMD and the display to determine relative orientation therebetween; and cause the display-processor to adjust the images to conform with respect to the FoR.
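The relative-orientation determination described above can be sketched by integrating each unit's gyro readings (the angular-rate portion of the Inertial Readings Information) into a rotation matrix and composing the two results. The integration scheme, sampling rate, and variable names below are illustrative assumptions, not the patented implementation:

```python
import numpy as np

# Illustrative sketch: integrate each unit's body angular rates into a
# rotation matrix, then derive the relative orientation between HMD and
# display as R_hmd^T @ R_display.

def skew(w):
    """Skew-symmetric cross-product matrix of a 3-vector."""
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def integrate_gyro(omegas, dt):
    """Integrate body angular rates (rad/s) into a rotation matrix."""
    R = np.eye(3)
    for w in omegas:
        theta = np.linalg.norm(w) * dt
        if theta > 0:
            axis = w / np.linalg.norm(w)
            K = skew(axis)
            # Rodrigues' formula for the incremental rotation
            dR = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
            R = R @ dR
    return R

def relative_orientation(R_hmd, R_display):
    """Orientation of the display expressed in the HMD frame."""
    return R_hmd.T @ R_display

dt = 0.01
hmd_rates = [np.array([0.0, 0.0, 0.1])] * 100      # slow yaw
display_rates = [np.array([0.0, 0.0, 0.1])] * 100  # same motion: no drift
R_rel = relative_orientation(integrate_gyro(hmd_rates, dt),
                             integrate_gyro(display_rates, dt))
```

When the two units move together, the relative orientation stays at identity; a diverging `R_rel` is what would drive the display-processor's image adjustment.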
A touch screen (TS) comprises a display; a frame with edges positioned opposite each other around the display; at least a first sensor array and at least a second sensor array, wherein each of the sensor arrays has a plurality of light transmitters and a plurality of light sensors, and wherein the at least first sensor array and the at least second sensor array are disposed on the first edge and the second edge of the frame, respectively, wherein the transmitters of the first sensor array face the light sensors of the second sensor array, positioned on two opposing edges of the frame; and at least one physical obstacle located on the third edge or the fourth edge, for reducing stray light scattered or reflected by the third edge or the fourth edge and arriving at the light sensors.
09 - Scientific and electric apparatus and instruments
Goods & Services
Firefighting management system, containing multi-layer wide area detection, observation, alerting, and surveillance means and communication and mapping apparatus, for rapid deployment, management, and operation of firefighting squadrons and handling emergency events.
11.
Eyepiece for night vision devices capable of injecting a synthetic image with a single optical correction mechanism
An eyepiece suitable for a night vision device, having an optical combiner for injecting a synthetic image onto the scene and having a single optical correction mechanism, is provided herein. The eyepiece may include an observer-side lens; an objective-side lens; a diopter adjustment knob configured to set a distance between the observer-side lens and the objective-side lens; and an optical combiner located between the observer-side lens and the objective-side lens, wherein the optical combiner reflects, towards the observer-side lens, the synthetic image transmitted from outside the eyepiece, and transfers, towards the observer-side lens, a scene image coming from an objective lens of the night vision device and passing through the objective-side lens, and wherein the diopter adjustment knob serves as a single setting mechanism which simultaneously sets a diopter of the observer and a focal depth of the display source image.
G02B 23/12 - Telescopes, e.g. binoculars; Periscopes; Instruments for viewing the inside of hollow bodies; Viewfinders; Optical aiming or sighting devices; with means for image conversion or intensification
A system for coloring a monochromatic image of a scene, the system comprising a processing circuitry configured to: capture the monochromatic image of the scene utilizing a non-visible spectrum sensor, capable of capturing monochromatic images from a viewpoint having a given position and a given orientation; obtain one or more previously captured color images, covering at least part of the scene; determine a registration between the monochromatic image and the previously captured color images; render one or more rendered images, being the previously captured color images adjusted to the viewpoint by utilizing the registration; and generate a colored image of the scene by changing, for at least one given pixel of the monochromatic image, values of one or more chroma components of the given pixel in accordance with values of one or more chroma components of a corresponding pixel of the rendered images.
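The final coloring step described above can be sketched as a chroma transfer: the captured luma is kept, and the chroma components are copied from the registered rendered image. The sketch below assumes a YCbCr-like channel layout and pixel-aligned registration; both are assumptions for illustration, not details from the abstract:

```python
import numpy as np

# Minimal sketch of the chroma-transfer step: the rendered color image is
# assumed already registered pixel-to-pixel with the monochromatic image.
# Luma (channel 0) comes from the mono sensor; the chroma channels
# (Cb, Cr) are copied unchanged from the rendered image.

def colorize(mono, rendered_ycbcr):
    """mono: HxW luma array; rendered_ycbcr: HxWx3 registered color image."""
    colored = rendered_ycbcr.copy()
    colored[..., 0] = mono  # keep the captured luma
    return colored

mono = np.full((2, 2), 0.25)       # toy 2x2 monochromatic image
rendered = np.zeros((2, 2, 3))
rendered[..., 1] = 0.6             # Cb from the rendered image
rendered[..., 2] = 0.4             # Cr from the rendered image
out = colorize(mono, rendered)
```

Keeping the sensor's luma preserves the scene content actually observed, while the historical color images contribute only chrominance.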
A method for irradiating an image in an optical see-through (OST) head mounted display (HMD), for viewing, through the OST HMD by a user's eye, an object having at least one of a known orientation, and a known position and orientation (O/P&O), associated with a first reference frame, the method comprising: generating and irradiating said image for appearing to said user superimposed in an aligned manner on said object, according to predetermined information, eyeball feature position data, and said O/P&O; said predetermined information relates correction data with a plurality of different respective position data values of at least one eyeball feature position of said eye; said predetermined information further includes display corrections of said electro-optical display module with respect to said position data values of said at least one eyeball feature position, with respect to a second reference frame; and said O/P&O is between said second reference frame and said first reference frame.
09 - Scientific and electric apparatus and instruments
Goods & Services
Firefighting management system, containing multi-layer wide area detection, observation, alerting, and surveillance means and communication and mapping apparatus, for rapid deployment, management, and operation of firefighting squadrons and handling emergency events, comprising (i) wide observation and detection systems including drones, thermal cameras, unmanned aerial vehicles, satellites, and an artificial intelligence system; and (ii) a command and control center including a network of computers for surveillance, detection, response deployment, and prediction and assessment of fire expansion.
A transfer-alignment system for a Head-Mounted Display (HMD), and a display coupled to the HMD, wherein the display is adapted to display images rendered by a display-processor, and wherein the HMD is monitored by a tracking system configured to provide information indicating position and/or orientation of the HMD with respect to a Frame of Reference (FoR), the system comprising: at least one first inertial sensor attached to the HMD and configured to acquire HMD's Inertial Readings Information (IRI); at least one second inertial sensor attached to the display and configured to acquire display's IRI; and a processor configured to: obtain the HMD's IRI, the display's IRI and the information indicating the HMD's position and/or orientation with respect to the FoR; continuously analyze movement information of the HMD and the display to determine relative orientation therebetween; and cause the display-processor to adjust the images to conform with respect to the FoR.
A system and method for generating a focused three-dimensional (3D) point cloud is disclosed. A respective 3D point cloud is generated based on returns of a respective sequence of energy pulses that is emitted towards one or more regions-of-interest (ROIs) within a field-of-view (FOV) during a respective scan of the FOV, the returns including one or more secondary returns from one or more points within the FOV. During an additional scan of the FOV, subsequent to the respective scan, an additional sequence of energy pulses is emitted to generate a focused 3D point cloud that includes additional information regarding one or more selected points of the points associated with the secondary returns relative to the respective 3D point cloud.
G01S 13/10 - Systems for measuring distance only using transmission of interrupted, pulse modulated waves
G01S 13/32 - Systems for measuring distance only using transmission of continuous waves, whether amplitude-, frequency-, or phase-modulated, or unmodulated
G01S 17/66 - Tracking systems using electromagnetic waves other than radio waves
G06T 19/00 - Manipulating 3D models or images for computer graphics
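The focused-scan selection in the point-cloud abstract above can be sketched as picking the points whose first-scan returns included one or more secondary returns, so the additional pulse sequence can be aimed at them. The data layout and names below are illustrative assumptions, not the claimed implementation:

```python
# Hypothetical sketch: select points for the additional (focused) scan.
# Points whose returns included one or more secondary returns during the
# first scan are selected as targets for the next pulse sequence.

def select_refocus_points(point_cloud):
    """point_cloud: list of dicts with 'xyz' and a 'secondary_returns' count."""
    return [p["xyz"] for p in point_cloud if p["secondary_returns"] > 0]

scan = [
    {"xyz": (0.0, 0.0, 10.0), "secondary_returns": 0},  # clean single return
    {"xyz": (1.0, 0.5, 12.0), "secondary_returns": 2},  # e.g. partial occlusion
    {"xyz": (2.0, 1.0, 15.0), "secondary_returns": 1},
]
targets = select_refocus_points(scan)
```

The selected targets would then receive additional energy pulses in the subsequent scan, yielding the extra information that makes the resulting 3D point cloud "focused" relative to the first one.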
Aspects of embodiments pertain to a method for providing scene related information from a scene to a remote station. The method may comprise receiving, at the remote station, a data object in relation to at least one identified attribute of one or more physical objects located in an ROI of the scene acquired by at least one sensor. A priority level value (PLV) is associated with the data object. The method may further include generating, at the remote station, using local station data, a low-latency virtual representation of the scene for displaying, at the remote station, a scene representation comprising the low-latency scene representation and a visualization of the received data object. In addition, real-world scene data descriptive of real-world ROI/Target information may be received. A user may designate an ROI/Target of the data object visualization for displaying real-world ROI/Target information relating to the designated ROI/Target.
A system for coloring a monochromatic image of a scene, the system comprising a processing circuitry configured to: capture the monochromatic image of the scene utilizing a non-visible spectrum sensor, capable of capturing monochromatic images from a viewpoint having a given position and a given orientation; obtain one or more previously captured color images, covering at least part of the scene, wherein the previously captured color images have been captured at a time prior to the capturing of the monochromatic image; determine a registration between the monochromatic image and the previously captured color images, wherein the registration is determined by projecting the previously captured color images on a plane that is conformal with the viewpoint; render one or more rendered images, being the previously captured color images adjusted to the viewpoint by utilizing the registration; and generate a colored image of the scene by changing, for at least one given pixel of the monochromatic image, values of one or more chroma components of the given pixel in accordance with values of one or more chroma components of a corresponding pixel of the rendered images.
H04N 23/11 - Cameras or camera modules comprising electronic image sensors; Control thereof; for generating image signals from different wavelengths; for generating image signals from visible and infrared light wavelengths
A mixed reality system, comprising: a sensor configured to acquire readings of real-world data, and display, on an output device, a real-world visualization of the real-world data based on the readings to a user, wherein the sensor has one or more parameters affecting the real-world visualization; and a processing circuitry configured to: obtain (a) information of one or more virtual entities located within an area from which the readings are acquired, the information defining, for each of the virtual entities, one or more simulated physical properties, and (b) values of one or more situational parameters indicative of a state of the sensor during acquisition of the readings, wherein the values of the one or more situational parameters are readings of one or more situational sensors, sensing the state of the sensor and its surroundings during acquisition of the readings; determine, for at least one given virtual entity of the virtual entities, a virtual entity visualization of the given virtual entity, the virtual entity visualization determined by manipulating a simulated reading of the simulated physical properties based on (a) the parameters affecting the real-world visualization, and (b) the values of the situational parameters; and display the virtual entity visualizations in combination with the real-world visualization, thereby enabling a user viewing the output device to view the virtual entity visualizations and the real-world visualization.
A method and a system for displaying a scene to a user wearing a head mounted display (HMD), while removing obstructions in a field of view (FOV) of the user, are provided herein. The method may include: capturing, by a first sensor mounted on the vehicle, a vehicle image of the scene; tracking a position and orientation of the HMD in a specified coordinate system, to yield a line of sight (LOS) of the user wearing the HMD; obtaining a database containing obstacles data indicating at least one physical object located within the vehicle and affecting the FOV of the user; calculating an obstructed portion in the FOV of the user, based on the LOS and the database; generating, from the vehicle image, an un-obstructed view which includes a portion of the scene overlapping the obstructed portion; and displaying the un-obstructed view in the HMD.
An electronic assembly with a heat spreading coating has a PCB carrying electric conducting traces and heat producing electronic components. An electrically isolating polymeric coating is applied over the electric conducting traces and the heat producing electronic components. The electrically isolating polymeric coating conforms to an irregular structure of the PCB. A heat spreading layer is applied over the polymeric coating. The heat spreading layer comprises: at least one heat spreading zone selected from the group consisting of: a plurality of graphene nano-platelets, a plurality of graphene particles, a plurality of boron-nitride particles, a plurality of graphene flakes, a plurality of boron-nitride flakes, at least one graphene sheet, and combinations thereof; and a binder, wherein the electrically isolating polymeric coating adheres to the PCB and covers the electric conducting traces and the heat producing electronic components, and the heat spreading layer conforms to the irregular structure of the polymeric coating.
F28F 21/02 - Constructions of heat-exchange apparatus characterised by the selection of particular materials of carbon, e.g. graphite
F28F 21/04 - Constructions of heat-exchange apparatus characterised by the selection of particular materials of ceramic; of concrete; of natural stone
A method and system for providing depth perception to a two-dimensional (2D) representation of a given three-dimensional (3D) object within a 2D non-visible spectrum image of a scene are provided. The method comprises: capturing the 2D non-visible spectrum image at a capture time, by at least one non-visible spectrum sensor; obtaining 3D data regarding the given 3D object independently of the 2D non-visible spectrum image; generating one or more depth cues based on the 3D data; applying the depth cues on the 2D representation to generate a depth perception image that provides the depth perception to the 2D representation; and displaying the depth perception image.
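One concrete depth cue consistent with the method above is distance-based intensity fading, where the 2D representation is attenuated using depth values taken from the independently obtained 3D data. This particular cue and the linear fade are illustrative choices for the sketch, not mandated by the abstract:

```python
import numpy as np

# Hypothetical depth cue: fade the 2D representation's intensity with
# distance taken from independently obtained 3D data, so nearer parts of
# the object appear brighter than farther parts.

def apply_depth_fade(image, depth_map, max_depth):
    """Attenuate pixel intensity linearly with normalized depth."""
    weight = 1.0 - np.clip(depth_map / max_depth, 0.0, 1.0)
    return image * weight

img = np.full((2, 2), 100.0)                      # flat 2D representation
depth = np.array([[0.0, 50.0], [100.0, 100.0]])   # per-pixel depth (assumed)
cued = apply_depth_fade(img, depth, max_depth=100.0)
```

Other cues (shading, occlusion contours, stereo disparity) could be generated from the same 3D data in the same "generate, then apply" pattern.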
An eyepiece suitable for a night vision device, having an optical combiner for injecting a synthetic image onto the scene and having a single optical correction mechanism, is provided herein. The eyepiece may include an observer-side lens; an objective-side lens; a diopter adjustment knob configured to set a distance between the observer-side lens and the objective-side lens; and an optical combiner located between the observer-side lens and the objective-side lens, wherein the optical combiner reflects, towards the observer-side lens, the synthetic image transmitted from outside the eyepiece, and transfers, towards the observer-side lens, a scene image coming from an objective lens of the night vision device and passing through the objective-side lens, and wherein the diopter adjustment knob serves as a single setting mechanism which simultaneously sets a diopter of the observer and a focal depth of the display source image.
G02B 23/12 - Telescopes, e.g. binoculars; Periscopes; Instruments for viewing the inside of hollow bodies; Viewfinders; Optical aiming or sighting devices; with means for image conversion or intensification
A profile comparator for comparing a human operator with a clone, including a storage device, a simulation processor and a parameter comparator; the storage device including a recording of at least one parameter during an activity session of a platform, the platform including at least one control system, the parameter being at least one of a parameter of the platform and an action of an operator of the platform during the activity session, and a predetermined profile; the simulation processor configured to generate a virtual clone of the platform according to at least one of the recorded parameters, the simulation processor being further configured to manage the virtual clone according to the predetermined profile; the parameter comparator configured to compare at least one comparison parameter of the activity session with the corresponding comparison parameter of the virtual clone, and configured to determine at least one deviation wherein the comparison parameter deviates from the comparison parameter of the virtual clone.
G09B 9/24 - Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of aircraft, e.g. Link trainer including display or recording of simulated flight path
G09B 9/08 - Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of aircraft, e.g. Link trainer
G09B 9/06 - Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of ships, boats, or other waterborne vehicles
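The parameter comparison performed by the profile comparator above can be sketched as a sample-by-sample comparison of the recorded operator trace against the virtual clone's trace, reporting samples that deviate beyond a tolerance. The altitude trace and the tolerance value below are invented for illustration:

```python
# Simplified sketch of the deviation determination: the recorded operator
# trace is compared sample-by-sample against the virtual clone's trace, and
# samples that deviate beyond a tolerance are reported.

def find_deviations(recorded, clone, tolerance):
    """Return (index, recorded_value, clone_value) where |diff| > tolerance."""
    return [(i, r, c)
            for i, (r, c) in enumerate(zip(recorded, clone))
            if abs(r - c) > tolerance]

# Hypothetical altitude trace (e.g. feet) from a recorded flight session,
# against the profile-driven virtual clone's trace.
recorded_altitude = [1000, 1010, 1100, 1020]
clone_altitude = [1000, 1005, 1010, 1015]
deviations = find_deviations(recorded_altitude, clone_altitude, tolerance=20)
```

A training system could then surface each reported deviation to the instructor as a point where the operator departed from the predetermined profile.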
A touch screen (TS) comprises a display; a frame with edges positioned opposite each other around the display; at least a first sensor array and at least a second sensor array, wherein each of the sensor arrays has a plurality of light transmitters and a plurality of light sensors, and wherein the at least first sensor array and the at least second sensor array are disposed on the first edge and the second edge of the frame, respectively, wherein the transmitters of the first sensor array face the light sensors of the second sensor array, positioned on two opposing edges of the frame; and at least one physical obstacle located on the third edge or the fourth edge, for reducing stray light scattered or reflected by the third edge or the fourth edge and arriving at the light sensors.
Aspects of embodiments pertain to a sensing system configured to receive scene electromagnetic (EM) radiation comprising a first wavelength (WL1) range and a second wavelength (WL2) range. The sensing system comprises at least one spectral filter configured to filter the received scene EM radiation to obtain EM radiation in the WL1 and WL2 ranges; and a self-adaptive electromagnetic (EM) energy attenuating structure. The self-adaptive EM energy attenuating structure may comprise material that includes nanosized particles which are configured such that high intensity EM radiation in the WL1 range incident onto a portion of the self-adaptive EM energy attenuating structure causes interband excitation of one or more electron-hole pairs, thereby enabling intraband transitions in that portion of the structure by EM radiation in the WL2 range.
H04N 23/52 - Elements optimising image sensor operation, e.g. for electromagnetic interference [EMI] protection or temperature control by heat transfer or cooling elements
H04N 23/11 - Cameras or camera modules comprising electronic image sensors; Control thereof; for generating image signals from different wavelengths; for generating image signals from visible and infrared light wavelengths
Embodiments pertain to systems and methods for providing information related to a scene occurring in a region of interest (ROI), using a scene data collector configured to: receive scene source data from one or more data sources using at least one sensor; identify one or more physical objects located in the ROI, based on the received scene source data; determine one or more attributes of the identified physical objects; generate a data object for at least one of the identified physical objects, based on one or more attributes thereof; and transmit all generated data objects to at least one remote station (RS) located remotely from the ROI. Each RS may be configured to receive the transmitted one or more data objects, generate virtual scene data based on the received one or more data objects, and display the generated virtual scene data.
A method and a system for landing and terrain flight assistance are provided herein. The method may include the following steps: obtaining, by at least one imaging sensor disposed on an aerial platform, at least two images of at least a portion of a specified region of a terrain; determining, based on the at least two images, a 3D model of at least a portion of the specified region; receiving a predetermined model of at least a portion of the specified region; determining a real-world geographic location of the aerial platform based on the 3D model and the predetermined model; and determining flight instructions based on the 3D model and the determined geographic location of the aerial platform.
09 - Scientific and electric apparatus and instruments
Goods & Services
Head-mounted display for use in helicopters containing integrated hybrid trackers, sensors, cameras, imaging apparatus, and associated computer software and hardware, for use in providing crew members with navigational and operational information, ultra-wide binocular fields-of-view, high resolution two and three dimensional colored symbology, and enhanced video capabilities.
09 - Scientific and electric apparatus and instruments
10 - Medical apparatus and instruments
Goods & Services
Downloadable and recorded computer software for performing 2D or 3D imaging and virtual imaging of body cavities, organs and tissues and for reception, encoding, processing, storage, transmission, reproduction and decoding of such images; Medical and surgical equipment, instruments and devices, namely, cameras, electronic visors and video monitors and parts therefor used for 2D or 3D imaging and virtual imaging of body cavities, organs and tissues; medical imaging apparatus in the nature of visors and video monitors, namely, head-mounted display, fixed displays, see through and non see through displays for displaying medical images and other layers of information for a user; medical and surgical equipment, instruments and devices, namely, computer hardware, and peripherals for performing 2D or 3D imaging and virtual imaging of body cavities, organs and tissues and for reception, encoding, processing, storage, transmission, reproduction and decoding of such images; medical equipment, instruments and devices, namely, an imaging camera, an imaging camera used with an image processor, and an image processor for medical use
09 - Scientific and electric apparatus and instruments
Goods & Services
Head-mounted display for use in helicopters containing integrated hybrid trackers, sensors, cameras, imaging apparatus, and associated recorded computer software and computer hardware, for use in providing crew members with navigational and operational information, ultra-wide binocular fields-of-view, high resolution two- and three-dimensional colored symbology, and enhanced video capabilities
09 - Scientific and electric apparatus and instruments
Goods & Services
Command, control, and mission management system, comprising computer software, computer hardware, a secured cloud-based battle space network, a platform agnostic mission management unit, and apparatus for receiving and transmitting data and communications, for providing aircraft crew operators a real-time view of the aircraft's operational surroundings, combat picture, ultimate situational awareness, and battle management operational tools, and for integrating data, threats, warnings, borders, obstacles, entities, navigational aids, route calculations, digital maps and weapons into a single screen operational data display.
A transfer-alignment system for a Head-Mounted Display (HMD), and a display coupled to the HMD, wherein the display is adapted to display images rendered by a display-processor, and wherein the HMD is monitored by a tracking system configured to provide information indicating position and/or orientation of the HMD with respect to a Frame of Reference (FoR), the system comprising: at least one first inertial sensor attached to the HMD and configured to acquire HMD's Inertial Readings Information (IRI); at least one second inertial sensor attached to the display and configured to acquire display's IRI; and a processor configured to: obtain the HMD's IRI, the display's IRI and the information indicating the HMD's position and/or orientation with respect to the FoR; continuously analyze movement information of the HMD and the display to determine relative orientation therebetween; and cause the display-processor to adjust the images to conform with respect to the FoR.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
G01C 21/16 - Navigation; Navigational instruments not provided for in groups; by using measurement of speed or acceleration executed aboard the object being navigated; Dead reckoning; by integrating acceleration or speed, i.e. inertial navigation
36.
System and method for determining a configuration of a measurement volume
A method of determining a configuration of a measurement volume, the method may include: generating, by at least one transmitter, a transmitted magnetic field within the measurement volume; measuring, by at least one receiver, a total magnetic field in the measurement volume at at least one receiver position and generating at least one receiver output signal; generating, by a processing unit, a measured dataset; comparing, by the processing unit, the measured dataset with at least one of at least two reference configuration datasets, each determined for one of at least two different configurations of the measurement volume; and identifying, by the processing unit, a reference configuration dataset of the at least two reference configuration datasets that corresponds to the measured dataset.
G01D 5/14 - Mechanical means for transferring the output of a sensing member; Means for converting the output of a sensing member to another variable where the form or nature of the sensing member does not constrain the means for converting; Transducers not specially adapted for a specific variable using electric or magnetic means influencing the magnitude of a current or voltage
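The comparison step amounts to matching the measured dataset against the stored reference configuration datasets and keeping the closest one. A minimal sketch, assuming each dataset is a flat list of field magnitudes; the configuration labels and the least-squares metric are illustrative assumptions, not taken from the patent:

```python
def identify_configuration(measured, references):
    """Return the label of the reference configuration dataset whose values
    are closest (sum of squared differences) to the measured dataset."""
    def distance(ref):
        return sum((m - r) ** 2 for m, r in zip(measured, ref))
    # min over the dict iterates its keys; the key function scores each label.
    return min(references, key=lambda label: distance(references[label]))
```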
37.
SYSTEM AND METHOD FOR DETERMINING A RELATIVE MOTION BETWEEN TWO OR MORE OBJECTS
Systems and methods for determining a relative motion between two or more objects are disclosed. The system may include an excitation unit adapted to be disposed on a first object and configured to excite and to induce at least one change in at least a portion of a second object. The system may include a sensing unit adapted to be disposed on the first object, the sensing unit may include at least one sensor configured to detect the at least one change in the second object at two or more different time instances and to generate corresponding two or more sensor output datasets. The system may include a processing unit configured to determine a relative motion between the first object and the second object based on the two or more sensor output datasets.
G01P 3/68 - Devices characterised by the determination of the time taken to traverse a fixed distance using optical means, i.e. using infrared, visible, or ultraviolet light
G01S 11/12 - Systems for determining distance or velocity not using reflection or reradiation using electromagnetic waves other than radio waves
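In its simplest reading, the processing unit differences the change detected at two time instances. A hedged one-dimensional sketch, assuming each sensor output dataset has been reduced to a `(timestamp_s, change_position_m)` pair — an assumed format, not the patent's:

```python
def relative_velocity(dataset_1, dataset_2):
    """Relative velocity of the second object w.r.t. the first, estimated from
    two sensor output datasets of the form (timestamp_s, change_position_m)."""
    t1, p1 = dataset_1
    t2, p2 = dataset_2
    return (p2 - p1) / (t2 - t1)
```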
38.
Head mounted display symbology concepts and implementations, associated with a reference vector
Head mounted displays (HMD) and corresponding display methods are provided, which obtain, repeatedly, from a monitoring system of a vehicle, a reference vector relating to the vehicle; display on the HMD a reference symbol that indicates the reference vector; and determine movements of a HMD symbology according to a spatial relation between a received user's line of sight (LOS) and the reference vector. For example, the vehicle may be an aircraft and the reference vector a flight path vector (FPV) received from the aircraft's avionics. The proposed HMD enhances the displayed information content while avoiding excessive movements of the symbology. The HMD's functional parameters may be pre-set or adapted according to user preference and flight stage characteristics. The reference symbol anchors most of the symbology, while minimal critical information may be moved along with the user's LOS, providing a clearer and more stable view through the HMD.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G09G 5/38 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of individual graphic patterns using a bit-mapped memory with means for controlling the display position
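The spatial relation between the user's LOS and the reference vector can be sketched as an angular test: symbology stays anchored to the reference symbol while the LOS remains close to the reference vector, and follows the LOS once it diverges. The threshold value, unit-vector inputs, and return labels below are illustrative assumptions:

```python
import math

def symbology_anchor(los, reference_vector, threshold_deg=5.0):
    """Decide whether HMD symbology stays anchored to the reference symbol or
    follows the user's LOS, from the angle between the two unit vectors."""
    dot = sum(a * b for a, b in zip(los, reference_vector))
    # Clamp to guard against rounding before taking the arc cosine.
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    return "follow_los" if angle > threshold_deg else "anchor_to_reference"
```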
Aspects of embodiments pertain to a method for providing scene related information from a scene to a remote station. The method may comprise receiving, at the remote station, a data object in relation to at least one identified attribute of one or more physical objects located in an ROI of the scene acquired by at least one sensor. A priority level value (PLV) is associated with the data object. The method may further include generating, at the remote station, using local station data, a low-latency virtual representation of the scene for displaying, at the remote station, a scene representation comprising the low-latency scene representation and a visualization of the received data object. In addition, real-world scene data descriptive of real world ROI/Target information may be received. A user may designate an ROI/Target of the data object visualization for displaying real world ROI/Target information relating to the designated ROI/Target.
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06T 19/00 - Manipulating 3D models or images for computer graphics
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
A system and method for generating a focused three-dimensional (3D) point cloud is disclosed. A respective 3D point cloud is generated based on returns of a respective sequence of energy pulses that is emitted towards one or more regions-of-interest (ROIs) within a field-of-view (FOV) during a respective scan of the FOV, the returns including one or more secondary returns from one or more points within the FOV. During an additional scan of the FOV, subsequent to the respective scan, an additional sequence of energy pulses is emitted to generate a focused 3D point cloud that includes additional information regarding one or more selected points of the points associated with the secondary returns relative to the respective 3D point cloud.
A system for generating a three-dimensional (3D) map of part of a field-of-view (FOV) of at least one detector of an active 3D scanner, comprising: the active 3D scanner, comprising: a mechanism configured to scan the FOV; at least one energy emitting source configured to emit energy pulses; and the at least one detector; and processing circuitry configured to: obtain designation information, wherein at least some of the designation information is tracker-based designation information that is designated by a user of the system via a tracker that tracks a line-of-sight between the user and the FOV; selectively activate the energy emitting source to emit a subset of the energy pulses, in accordance with the designation information, including the tracker-based designation information, and in synchronization with the mechanism, to cover the part of the FOV; obtain current readings, from the detector, based on reflections of the subset of the energy pulses; and generate the 3D map based on the current readings.
G01S 17/894 - 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
G01S 13/32 - Systems for measuring distance only using transmission of continuous waves, whether amplitude-, frequency-, or phase-modulated, or unmodulated
G01S 13/86 - Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
42.
OPTICAL SEE THROUGH (OST) HEAD MOUNTED DISPLAY (HMD) SYSTEM AND METHOD FOR PRECISE ALIGNMENT OF VIRTUAL OBJECTS WITH OUTWARDLY VIEWED OBJECTS
A method for irradiating an image in an optical see-through (OST) head mounted display (HMD) for viewing, through the OST HMD by a user's eye, an object having at least one of known orientation and position and orientation (O/P&O), associated with a first reference frame, the method comprising: generating and irradiating said image for appearing to said user superimposed in an aligned manner to said object, according to predetermined information, eyeball feature position data, and said O/P&O; said predetermined information relates correction data with a plurality of different respective position data values of at least one eyeball feature position of said eye; said predetermined information further includes display corrections of said electro-optical display module with respect to said position data values of said at least one eyeball feature position, with respect to a second reference frame; and said O/P&O is between said second reference frame and said first reference frame.
A61B 90/00 - Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups , e.g. for luxation treatment or for protecting wound edges
A61B 34/20 - Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
A61B 34/10 - Computer-aided planning, simulation or modelling of surgical operations
G06T 7/33 - Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
A61B 17/00 - Surgical instruments, devices or methods
A61B 90/50 - Supports for surgical instruments, e.g. articulated arms
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups
09 - Scientific and electric apparatus and instruments
Goods & Services
Software-defined airborne radar system for detection and
mapping of terrain and obstacles in degraded visual
environment and adverse visibility conditions.
A mixed reality system, comprising: a data acquisition device configured to acquire real-world data; an output device for providing the real-world data to a user; and a processing circuitry configured to: obtain (a) data acquired by the data acquisition device, and (b) information of one or more virtual entities having properties enabling determination of simulated effects of the virtual entities on the data; determine the simulated effects of the virtual entities on the data utilizing the properties; and provide the user with output on the output device being a manipulation of the data reflecting the simulated effects.
A method and a system for displaying a scene to a user wearing a head mounted display (HMD) while removing obstructions in a field of view (FOV) of the user are provided herein. The method may include: capturing, by a first sensor, a vehicle image of the scene, wherein the first sensor is mounted on the vehicle; tracking a position and orientation of the HMD in a specified coordinate system, to yield a line of sight (LOS) of the user wearing the HMD; obtaining a database containing obstacles data indicating at least one physical object located within the vehicle and affecting the FOV of the user; calculating an obstructed portion in the FOV of the user, based on the LOS and the database; generating from the vehicle image, an unobstructed view which includes a portion of the scene overlapping the obstructed portion; and displaying in the HMD the unobstructed view.
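The step of calculating the obstructed portion can be pictured as a screen-space intersection between the user's current FOV and the footprints of cataloged obstacles. A sketch assuming axis-aligned rectangles `(x0, y0, x1, y1)` — a simplification of the actual LOS-dependent geometry:

```python
def obstructed_portion(fov, obstacle):
    """Intersection of the FOV rectangle with an obstacle rectangle, or None
    when the obstacle does not affect the user's current view."""
    x0 = max(fov[0], obstacle[0])
    y0 = max(fov[1], obstacle[1])
    x1 = min(fov[2], obstacle[2])
    y1 = min(fov[3], obstacle[3])
    if x0 >= x1 or y0 >= y1:  # empty intersection
        return None
    return (x0, y0, x1, y1)
```

The matching region of the vehicle-sensor image would then be rendered into this rectangle to produce the unobstructed view.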
A system and a method of monitoring physical properties of a physical medium over time are provided herein. The method may include the following steps: embedding a plurality of acoustic sensors into a physical medium before curing thereof; transmitting an acoustic wave by at least one transmitter coupled to or embedded within said physical medium; repeatedly calculating, over different points of time, a travel time of said acoustic wave between the at least one transmitter and the plurality of acoustic sensors; and analyzing said travel times, to detect a change over time in physical properties of said physical medium associated with said travel time.
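The analysis step reduces to watching each transmitter-to-sensor travel time for drift against a baseline recorded after curing. A minimal sketch; the fractional tolerance is an assumed parameter, not a value from the patent:

```python
def travel_time_changed(baseline_s, current_s, tolerance=0.05):
    """True when any sensor's acoustic travel time drifts by more than the
    fractional tolerance from its baseline, indicating a change in the
    physical properties of the medium."""
    return any(abs(c - b) / b > tolerance
               for b, c in zip(baseline_s, current_s))
```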
A method and system for providing depth perception to a two-dimensional (2D) representation of a given three-dimensional (3D) object within a 2D non-visible spectrum image of a scene is provided. The method comprises: capturing the 2D non-visible spectrum image at a capture time, by at least one non-visible spectrum sensor; obtaining 3D data regarding the given 3D object independently of the 2D non-visible spectrum image; generating one or more depth cues based on the 3D data; applying the depth cues on the 2D representation to generate a depth perception image that provides the depth perception to the 2D representation; and displaying the depth perception image.
G02B 30/40 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images giving the observer of a single two-dimensional [2D] image a perception of depth
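One of the simplest depth cues that can be generated from the independent 3D data is relative size: scale the 2D overlay of the known object inversely with its range. The names and the pinhole-style scaling below are illustrative assumptions:

```python
def size_cue_px(base_size_px, reference_range_m, target_range_m):
    """Scale an overlay inversely with target range so that farther objects
    render smaller, lending depth perception to a flat 2D image."""
    return base_size_px * reference_range_m / target_range_m
```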
An eyepiece suitable for a night vision device, having an optical combiner for injecting a synthetic image onto the scene and having a single optical correction mechanism, is provided herein. The eyepiece may include an observer-side lens; an objective-side lens; a diopter adjustment knob configured to set a distance between the observer-side lens and the objective-side lens; and an optical combiner located between the observer-side lens and the objective-side lens, wherein the optical combiner reflects towards the observer-side lens the synthetic image transmitted from outside the eyepiece and transfers towards the observer-side lens a scene image coming from an objective lens of the night vision device and passing through the objective-side lens, and wherein the diopter adjustment knob serves as a single setting mechanism which simultaneously sets a diopter of the observer and a focal depth of the display source image.
G02B 15/14 - Optical objectives with means for varying the magnification by axial movement of one or more lenses or groups of lenses relative to the image plane for continuously varying the equivalent focal length of the objective
G02B 27/14 - Beam splitting or combining systems operating by reflection only
G02B 23/12 - Telescopes, e.g. binoculars; Periscopes; Instruments for viewing the inside of hollow bodies; Viewfinders; Optical aiming or sighting devices with means for image conversion or intensification
09 - Scientific and electric apparatus and instruments
Goods & Services
Airborne radar apparatus featuring embedded software for detection and mapping of terrain and obstacles in degraded visual environments and adverse visibility conditions
09 - Scientific and electric apparatus and instruments
Goods & Services
Computer hardware and software for planning, commanding and
controlling air force operations and missions including the
integration of multiple sensor and data sources of radars,
aircrafts, intelligence and anti-aircraft risks and
batteries, for creating on line comprehensive and accurate
aerial picture, aerial arena and situational awareness;
computer hardware and software for air force training and
simulation; flight simulators containing computer-operated
hardware units and associated software.
52.
Systems and methods for reducing image artefacts in binocular displays
Aspects of embodiments pertain to a method for reducing a subjective visual artefact when displaying binocular overlapping images to a user of a binocular display system, the method comprising generating, by an image display unit comprising a plurality of pixel elements, right and left-eye source images; projecting the right and left-eye source images via corresponding right and left viewing optics to the user such that the user perceives partially overlapping left and right-hand observation images; and reducing a perception of a subjective visual artefact in the perceived right and/or left-hand observation images by modifying one or more pixel and/or image parameter values relating to the left and/or right-hand source images.
09 - Scientific and electric apparatus and instruments
Goods & Services
Head-mounted displays for use primarily in aircraft, namely, head-mounted transparent electronic displays featuring a binocular display with an integrated head tracker for sensing motion, for use in providing crew members with navigational and operational information
A system for displaying videos, comprising a processing resource configured to: provide a data repository comprising a plurality of previously captured video segments (PCVSs) captured during previous operations of corresponding platforms, each being associated with metadata indicative of a Line-of-Sight (LoS) of a sensor, carried by the corresponding platform of the platforms used to capture the corresponding PCVS, with respect to a fixed coordinate system established in space, during capturing the corresponding PCVS; obtain an indication of a Region-of-Interest (RoI); identify one or more of the PCVSs that include at least part of the RoI, utilizing the LoSs associated with the PCVSs, giving rise to RoI matching PCVSs; and display at least part of at least one of the RoI matching PCVSs, being displayed RoI matching PCVSs, on a display of an operating platform to an operator of the operating platform during a current operation of the operating platform.
G01C 23/00 - Combined instruments indicating more than one navigational value, e.g. for aircraft; Combined measuring devices for measuring two or more variables of movement, e.g. distance, speed or acceleration
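Identifying RoI-matching segments can be sketched as a footprint test over the stored LoS metadata. The sketch below assumes each segment's metadata has already been reduced to a ground footprint `(xmin, ymin, xmax, ymax)` — a simplification of the actual LoS geometry recorded per segment:

```python
def roi_matching_segments(segments, roi_point):
    """Names of previously captured video segments (PCVSs) whose ground
    footprint contains the region-of-interest point (x, y)."""
    x, y = roi_point
    return [name
            for name, (xmin, ymin, xmax, ymax) in segments.items()
            if xmin <= x <= xmax and ymin <= y <= ymax]
```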
Aspects of embodiments pertain to a display illumination optics for illuminating an image display device of an image generation apparatus, the display illumination optics comprising: a source illumination distributor that includes an illumination waveguide having a front surface and a back surface opposite the front surface and configured to internally direct light along a main direction, wherein the illumination waveguide is configured to distribute the luminance of input illumination light along the main direction of the display illumination optics to obtain, along the main direction, output illumination light of desired luminance.
09 - Scientific and electric apparatus and instruments
Goods & Services
Large area displays such as helmet mounted displays, head-up
displays, panoramic head-down displays and built-in aircraft
displays for presenting flight and mission data.
A touch screen (TS) is provided that comprises a display; a frame with edges positioned opposite each other around the display; at least a first sensor array and at least a second sensor array, wherein each of the sensor arrays has a plurality of light transmitters and a plurality of light sensors, and wherein the at least first sensor array and the at least second sensor array are disposed on the first edge and the second edge of the frame, respectively, wherein the transmitters of the first sensor array face the light sensors of the second sensor array positioned on two opposing edges of the frame; and at least one physical obstacle located on the third edge or the fourth edge, for reducing stray light scattered or reflected by the third edge or the fourth edge and arriving at the light sensors.
A method and a system for landing and terrain flight assistance are provided herein. The method may include the following steps: obtaining, by at least one imaging sensor disposed on an aerial platform, at least two images of at least a portion of a specified region of a terrain; determining, based on the at least two images, a 3D model of at least a portion of the specified region; receiving a predetermined model of at least a portion of the specified region; determining a real-world geographic location of the aerial platform based on the 3D model and the predetermined model; and determining flight instructions based on the 3D model and the determined geographic location of the aerial platform.
Embodiments pertain to systems and methods for providing information related to a scene occurring in a region of interest (ROI), using a scene data collector, configured to receive scene source data, from one or more data sources, using at least one sensor, identify one or more physical objects located in the ROI, based on the received scene source data, determine one or more attributes of the identified physical objects, generate a data object, for at least one of the identified one or more physical objects, based on one or more attributes thereof, and transmit all data objects generated to at least one remote station (RS), located remotely from the ROI. Each RS may be configured to receive the transmitted one or more data objects, generate virtual scene data, based on the received one or more data objects; and display the generated virtual scene data.
A head wearable device (HWD) suitable to be worn by a user, the HWD may include: a head tracker configured to track a line of sight (LOS) of the user; a near eye display (NED) comprising: a plurality of transistor groups forming a pixel array of said display, and a plurality of backlight units forming a backlight surface of said display; a backlight control module configured to dim the backlight units that spatially overlap one or more of the transistor groups whenever the data at said transistor groups is being refreshed and further configured to change at least one of: a frequency and a location of the dimmed backlight units; and a computer processor coupled to the head tracker and the NED and configured to instruct the backlight control module to change at least one of: the frequency and the location of the dimmed backlight units, based on the user LOS.
G09G 3/34 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix by control of light from an independent source
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G09G 3/36 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix by control of light from an independent source using liquid crystals
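The backlight control logic can be pictured as a per-zone on/off map: zones whose transistor groups are being refreshed are dimmed, while the zone under the user's LOS is exempted so its dimming can be relocated in time or space. A sketch under those assumptions (zone indexing and the exemption rule are illustrative):

```python
def backlight_map(num_zones, refreshing_zones, gaze_zone):
    """Per-zone backlight state: dim (False) zones being refreshed, except the
    gaze zone, whose dimming is deferred so the user does not perceive it."""
    return [not (z in refreshing_zones and z != gaze_zone)
            for z in range(num_zones)]
```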
62.
Direct view display with transparent variable optical power elements
A direct view display system (DVDS) and a method to operate it are provided herein. The DVDS may include: a variable optical power element (VOPE); a transparent active image source located with the VOPE on a common optical axis going from an outside scene to an eye position of a viewer; and a time division multiplexer (TDM) configured to control the VOPE and the transparent active image source, wherein the TDM is configured in a certain time period to cause the transparent active image source to be in a transparent state and the VOPE to exhibit no optical power, and wherein the TDM is configured in another time period to cause said transparent active image source to exhibit an image and said VOPE to apply non-zero optical power, for projecting the image onto the eye position at a desirable distance therefrom.
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups
G02B 30/34 - Stereoscopes providing a stereoscopic pair of separated images corresponding to parallactically displaced views of the same object, e.g. 3D slide viewers
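The time division multiplexer alternates between a see-through phase (source transparent, VOPE at zero power) and an image phase (source showing an image, VOPE at non-zero power). A sketch; the 1/240 s phase period (a 120 Hz full cycle) and the -0.5 diopter value are illustrative assumptions:

```python
def tdm_state(t_s, phase_period_s=1 / 240):
    """Two-phase TDM schedule for the direct view display: even phases pass
    the outside scene through, odd phases project the image with the
    variable optical power element (VOPE) at non-zero power."""
    if int(t_s / phase_period_s) % 2 == 0:
        return {"source": "transparent", "vope_diopters": 0.0}
    return {"source": "image", "vope_diopters": -0.5}
```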
63.
SYSTEM AND METHOD FOR PROVIDING INCREASED SENSOR FIELD OF VIEW
A system and method for displaying sensor data on a display are provided herein. The system may include: a tracker arrangement to track a line of sight (LOS) of a user; a sensor configured to be directed based on the LOS and configured to capture data of a scene relative to said LOS, to yield LOS captured data; and a display configured to: receive the LOS captured data, and display the LOS captured data relative to the LOS, wherein the display field of view (FOV) is wider than the sensor FOV and wherein the display is configured to display a mosaic of a plurality of the LOS captured data, wherein at least one of the LOS captured data appearing in the mosaic is real-time LOS captured data, and wherein at least one of the LOS captured data appearing in the mosaic is previous LOS captured data.
09 - Scientific and electric apparatus and instruments
Goods & Services
Large area displays, namely, helmet mounted displays, head-up displays, panoramic head-down displays, and built-in aircraft displays, all for use in military and commercial aircraft for presenting flight and mission data
65.
In-flight training simulation displaying a virtual environment
Method and system for displaying virtual environment during in-flight simulation. A simulation environment is selected for a training simulation of an airborne platform operating in flight within a real environment. The position and orientation of a display viewable by an operator of the airborne platform is determined with respect to the selected simulation environment. The display displays at least one simulation image comprising a view from a virtual altitude of simulation environmental terrain in the selected simulation environment, while the airborne platform is in flight at a real altitude above the real environmental terrain in the real environment, the virtual altitude above the simulation environmental terrain being a lower altitude than the real altitude above the real environmental terrain. The simulation image is displayed in accordance with the determined position and orientation of the display, such that the simulation environment is adaptive to operator manipulations of the airborne platform.
G09B 9/24 - Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of aircraft, e.g. Link trainer including display or recording of simulated flight path
G09B 9/44 - Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of aircraft, e.g. Link trainer providing simulation in a real aircraft flying through the atmosphere without restriction of its path
G09B 9/46 - Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of aircraft, e.g. Link trainer the aircraft being a helicopter
66.
SYSTEM AND METHOD FOR DETERMINING A RELATIVE MOTION BETWEEN TWO OR MORE OBJECTS
Systems and methods for determining a relative motion between two or more objects are disclosed. The system may include an excitation unit adapted to be disposed on a first object and configured to excite and to induce at least one change in at least a portion of a second object. The system may include a sensing unit adapted to be disposed on the first object, the sensing unit may include at least one sensor configured to detect the at least one change in the second object at two or more different time instances and to generate corresponding two or more sensor output datasets. The system may include a processing unit configured to determine a relative motion between the first object and the second object based on the two or more sensor output datasets.
G01S 17/58 - Velocity or trajectory determination systemsSense-of-movement determination systems
G01P 3/68 - Devices characterised by the determination of the time taken to traverse a fixed distance using optical means, i.e. using infrared, visible, or ultraviolet light
G06T 7/254 - Analysis of motion involving subtraction of images
67.
SYSTEM AND METHOD FOR DETERMINING A CONFIGURATION OF A MEASUREMENT VOLUME
A method of determining a configuration of a measurement volume, the method may include: generating, by at least one transmitter, a transmitted magnetic field within the measurement volume; measuring, by at least one receiver, a total magnetic field in the measurement volume at at least one receiver position and generating at least one receiver output signal; generating, by a processing unit, a measured dataset; comparing, by the processing unit, the measured dataset with at least one of at least two reference configuration datasets, each determined for one of at least two different configurations of the measurement volume; and identifying, by the processing unit, a reference configuration dataset of the at least two reference configuration datasets that corresponds to the measured dataset.
Aspects of embodiments pertain to a sensing system configured to receive scene electromagnetic (EM) radiation comprising a first wavelength (WL1) range and a second wavelength (WL2) range. The sensing system comprises at least one spectral filter configured to filter the received scene EM radiation to obtain EM radiation in the WL1 and WL2 ranges; and a self-adaptive electromagnetic (EM) energy attenuating structure. The self-adaptive EM energy attenuating structure may comprise material that includes nanosized particles which are configured such that high intensity EM radiation at the WL1 range incident onto a portion of the self-adaptive EM energy attenuating structure causes interband excitation of one or more electron-hole pairs, thereby enabling intraband transition in the portion of the self-adaptive EM energy attenuating structure by EM radiation in the WL2 range.
H01L 31/0232 - Optical elements or arrangements associated with the device
H01L 31/0352 - Semiconductor devices not covered by class - Details thereof characterised by their semiconductor bodies characterised by their shape or by the shapes, relative sizes or disposition of the semiconductor regions
69.
SYSTEM AND METHOD FOR GENERATING A THREE-DIMENSIONAL (3D) MAP BASED ON MAPPING DESIGNATION INFORMATION
A system for generating a three-dimensional (3D) map of part of a field-of-view (FOV) of at least one detector of an active 3D scanner, the system comprising: the active 3D scanner, comprising: a scanning mechanism configured to scan the FOV; at least one energy emitting source configured to emit energy pulses, in synchronization with the scanning mechanism, to cover the FOV; and the at least one detector; and processing circuitry configured to: obtain mapping designation information independent of past readings obtained by the at least one detector, if any; selectively activate the energy emitting source to emit only a subset of the energy pulses, in accordance with the mapping designation information, to cover the part of the FOV; obtain current readings, from the at least one detector, based on reflections of the subset of the energy pulses; and generate the 3D map based on the current readings.
09 - Scientific and electric apparatus and instruments
Goods & Services
Infrared search and track systems usable as wide field of
view surveillance units, for searching, detecting, tracking,
classifying and prioritization of potential targets, and
providing support with operational capabilities.
71.
System and method for providing synthetic information on a see-through device
A method and a system for displaying conformal synthetic data on a scene over a head-mounted see-through display (HMSTD) having a line of sight (LOS) are provided herein. The system may include: a tracker configured to track the LOS of the HMSTD; a display controller configured to display on the HMSTD a first display area comprising synthetic image data conformal to a scene viewed from the HMSTD; wherein said display controller is configured to receive a desired point, being a point within the scene which intersects the LOS of the HMSTD, and to display a second display area on said HMSTD, wherein the second display area is positioned relative to said desired point, and wherein said synthetic image data is displayed over the HMSTD at a reduced intensity on an overlap area between the first and the second display areas.
A system for registering a coordinate system associated with a model of an object with a reference-coordinate-system, the object includes at least one marker, the system includes a portable-unit, a tracking-system and a processor. The portable-unit includes a display and an optical-detection-assembly for acquiring at least one representation of the marker. The tracking-system tracks the position-and-orientation of the portable-unit in the reference-coordinate-system. The processor is configured to determine position-related-information respective of the marker in the reference-coordinate-system, to register the model with the reference-coordinate-system at least based on the position-related-information respective of the marker, and on a location of the marker in a coordinate system associated with the model, and to display registration-related-information on the display, where at least one of the registration-related-information and the display location of the registration-related-information is related to the position-and-orientation of the portable-unit in the reference-coordinate-system.
A system determines a dynamic collision awareness envelope for a vehicle. The system includes at least one vehicle motion sensor, an operator Line-Of-Sight detector and a processor. The vehicle motion sensor periodically provides measurements relating to the motion of the vehicle in a reference coordinate system. The operator Line-Of-Sight detector periodically provides information relating to the direction of the Line-Of-Sight of an operator of the vehicle, in a vehicle coordinate system. The processor is coupled with the at least one vehicle motion sensor, and with the operator Line-Of-Sight detector. The processor determines an operator vector from the direction of the Line-Of-Sight of the operator. The processor further determines an operational vector at least from the motion of the vehicle. The processor periodically determines a collision awareness envelope respective of each of the operational vectors, from the operator vector and the respective operational vector.
G01S 13/933 - Radar or analogous systems, specially adapted for specific applications for anti-collision purposes of aircraft or spacecraft
G01S 17/933 - Lidar systems, specially adapted for specific applications for anti-collision purposes of aircraft or spacecraft
G01S 19/39 - Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
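The collision-awareness entry above derives an operator vector from the tracked line of sight and an operational vector from vehicle motion, then builds an envelope from the pair. A minimal Python sketch of one plausible rule — all names, radii and the inattention heuristic are hypothetical illustrations, not taken from the patent — is:

```python
import math

def unit(v):
    """Normalize a 2D vector."""
    n = math.hypot(*v)
    return (v[0] / n, v[1] / n)

def collision_awareness_radius(operator_los, velocity,
                               base_radius=10.0, max_extra=40.0):
    """Illustrative rule: the less the operator looks along the
    direction of motion, the larger the awareness envelope.
    Returns the envelope radius along the operational vector."""
    cos_angle = sum(a * b for a, b in zip(unit(operator_los), unit(velocity)))
    inattention = (1.0 - cos_angle) / 2.0  # 0 when aligned, 1 when opposite
    return base_radius + max_extra * inattention
```

When the operator looks exactly along the motion vector the envelope shrinks to the base radius; looking away grows it toward the maximum.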
A mixed reality system, comprising: a data acquisition device configured to acquire real-world data; an output device for providing the real-world data to a user; and a processing circuitry configured to: obtain (a) data acquired by the data acquisition device, and (b) information of one or more virtual entities having properties enabling determination of simulated effects of the virtual entities on the data; determine the simulated effects of the virtual entities on the data utilizing the properties; and provide the user with output on the output device being a manipulation of the data reflecting the simulated effects.
A mixed reality system, comprising: a sensor configured to acquire readings of real-world data, and display, on an output device, a real-world visualization of the real-world data based on the readings to a user, wherein the sensor has one or more parameters; and a processing circuitry configured to: obtain (a) information of one or more virtual entities having simulated physical properties, and (b) values of one or more situational parameters; determine, for at least one given virtual entity of the virtual entities, a virtual entity visualization of the given virtual entity, the virtual entity visualization determined by manipulating a simulated reading of the simulated physical properties based on (a) the parameters affecting the real-world visualization, and (b) the values of the situational parameters; and display the virtual entity visualizations in combination with the real-world visualization, thereby enabling a user viewing the output device to view the virtual entity visualizations and the real-world visualization.
A system for displaying videos, comprising a processing resource configured to: provide a data repository comprising a plurality of previously captured video segments (PCVSs) captured during previous operations of corresponding platforms, each being associated with metadata indicative of a Line-of-Sight (LoS) of a sensor, carried by the corresponding platform of the platforms used to capture the corresponding PCVS, with respect to a fixed coordinate system established in space, during capturing the corresponding PCVS; obtain an indication of a Region-of-Interest (RoI); identify one or more of the PCVSs that include at least part of the RoI, utilizing the LoSs associated with the PCVSs, giving rise to RoI matching PCVSs; and display at least part of at least one of the RoI matching PCVSs, being displayed RoI matching PCVSs, on a display of an operating platform to an operator of the operating platform during a current operation of the operating platform.
G01C 23/00 - Combined instruments indicating more than one navigational value, e.g. for aircraft; Combined measuring devices for measuring two or more variables of movement, e.g. distance, speed or acceleration
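The video-display entry above matches previously captured video segments (PCVSs) to a Region-of-Interest using the stored Line-of-Sight metadata. A minimal sketch of the matching step — the bounding-box footprint, class names and overlap test are hypothetical simplifications, not the patent's method — could look like:

```python
from dataclasses import dataclass
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (min_x, min_y, max_x, max_y)

@dataclass
class VideoSegment:
    segment_id: str
    footprint: Box  # ground footprint derived offline from the sensor's LoS

def overlaps(a: Box, b: Box) -> bool:
    """Axis-aligned bounding-box overlap test."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def roi_matching_segments(repository: List[VideoSegment], roi: Box) -> List[VideoSegment]:
    """Return the stored segments whose LoS-derived footprint
    covers at least part of the Region-of-Interest."""
    return [s for s in repository if overlaps(s.footprint, roi)]
```

A segment qualifies as "RoI matching" whenever its footprint intersects the requested region, after which it can be displayed to the operator.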
A system comprising a processing resource configured to: obtain a first indication of a confirmation, by an operator of a platform, of an alignment of one symbol of a first plurality of symbols with a second symbol as viewed in a head-mounted display of a head-mounting worn by the operator, the first plurality of symbols being projected by at least one optical apparatus disposed on the platform to a plurality of different viewing angles, the second symbol being projected onto the head-mounted display by a projection unit of the head-mounted display; determine which given symbol of the first plurality of symbols was aligned with the second, the given symbol having first orientation data, the second symbol having second orientation data; and perform an alignment of the head-mounting and the head-mounted display at least based on the first orientation data and the second orientation data.
09 - Scientific and electric apparatus and instruments
Goods & Services
Infrared search and track systems usable as wide field of view surveillance units in the nature of infrared detection apparatus for searching, detecting, tracking, classifying and prioritizing potential targets, and providing support with operational capabilities
80.
MICROSURGERY SYSTEM FOR DISPLAYING IN REAL-TIME MAGNIFIED DIGITAL IMAGE SEQUENCES OF AN OPERATED AREA
A microsurgery system comprising a robotic arm, configured for movement; a head mounted display (HMD) configured to display to a user in real-time image sequences of an operated area; at least one camera coupled to said robotic arm, said at least one camera configured to acquire operated-area image sequences of said operated area, said at least one camera being suspended above said operated area and being mechanically and optically disconnected from said HMD, said robotic arm enables said at least one camera to capture said operated-area image sequences from a range of perspectives; a processing device configured to be coupled with said HMD and said at least one camera, said processing device configured to transmit said image sequences of said operated-area to said HMD; and a tracker configured to track at least one of a head of said user, a hand of said user, and a tool held by said user; wherein said robotic arm is enabled to be guided according to movements of tracked at least one of said head, said hand, and said tool.
A61B 1/00 - Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
A61B 90/30 - Devices for illuminating a surgical field, the devices having an interrelation with other surgical devices or with a surgical procedure
A61B 90/00 - Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups , e.g. for luxation treatment or for protecting wound edges
A method for automatically determining a potential transmission region, comprising: acquiring multiple binary indications for a direct line of sight between a transmitter and an airborne vehicle respective of multiple geo-positions of the airborne vehicle; acquiring each of the multiple geo-positions respective of each of the multiple binary indications for the direct line of sight; for each one of the acquired geo-positions, determining a layer of access respective of topographical data for the geographical region and the acquired geo-position, as a subset of the geographical region that includes at least one potential point defining a potential line of sight between the transmitter and the airborne vehicle, thereby determining multiple layers of access; determining an intersection of the multiple layers of access; and determining, from the intersection, the potential transmission region respective of the transmitter and the geo-positions of the airborne vehicle.
G01S 7/02 - Details of systems according to groups , , of systems according to group
G01S 7/52 - Details of systems according to groups , , of systems according to group
G01S 5/20 - Position of source determined by a plurality of spaced direction-finders
G01S 5/16 - Position-fixing by co-ordinating two or more direction or position-line determinations; Position-fixing by co-ordinating two or more distance determinations using electromagnetic waves other than radio waves
G01S 7/48 - Details of systems according to groups , , of systems according to group
G01S 5/14 - Determining absolute distances from a plurality of spaced points of known location
G01S 5/00 - Position-fixing by co-ordinating two or more direction or position-line determinations; Position-fixing by co-ordinating two or more distance determinations
G01S 7/00 - Details of systems according to groups , ,
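The transmission-region method above reduces to a set intersection: each geo-position with a positive line-of-sight indication yields a "layer of access" (the subset of the region with a potential line of sight to that position), and the potential transmission region is the intersection of all layers. A minimal sketch — the grid representation and the LOS predicate are hypothetical stand-ins for the patent's topographical computation — is:

```python
from typing import Callable, Iterable, Set, Tuple

Point = Tuple[int, int]  # grid cell in the geographical region

def layer_of_access(region: Iterable[Point], geo_position: Point,
                    has_potential_los: Callable[[Point, Point], bool]) -> Set[Point]:
    """Subset of the region from which a line of sight to the
    vehicle at geo_position is topographically possible."""
    return {p for p in region if has_potential_los(p, geo_position)}

def potential_transmission_region(region, geo_positions, has_potential_los):
    """Intersect the layers of access over all geo-positions that
    reported a direct line of sight to the transmitter."""
    result = set(region)
    for g in geo_positions:
        result &= layer_of_access(region, g, has_potential_los)
    return result
```

Each additional positive indication can only shrink the candidate region, so the transmitter must lie in the final intersection.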
82.
Gradual transitioning between two-dimensional and three-dimensional augmented reality images
System and method for enhancing situational awareness. A moveable see-through display viewable by a user displays an augmented reality 2D image of an external scene based on received 2D image data, in accordance with updated position and orientation of display. The see-through display further displays an augmented reality 3D image of the external scene based on received 3D image data, the 3D image overlaid conformally onto view of external scene, in accordance with updated position and orientation of display. The see-through display further selectively displays: a gradual transition of the 2D image into the 3D image, or a gradual transition of the 3D image into the 2D image. At least one image feature may gradually appear or gradually disappear during the gradual transition. The 2D or 3D image may include a region of interest based on updated position and orientation of display or selected by user.
A system for displaying, on a see-through display located within a moving platform, a frame, while at least partially correcting a rolling display effect.
G02B 23/12 - Telescopes, e.g. binoculars; Periscopes; Instruments for viewing the inside of hollow bodies; Viewfinders; Optical aiming or sighting devices with means for image conversion or intensification
System and method for determining position of head-mounted image sensor using celestial navigation. A head-mounted image sensor, worn by an operator, captures at least one image of an external scene comprising a celestial view, pursuant with natural head movements of operator. A processor receives orientation of head-mounted image sensor in reference coordinate system, extracts parameters of celestial bodies using stored celestial data, captured image, and received orientation, and determines position of head-mounted image sensor based on extracted parameters, such as based on difference between relative angle and expected relative angle of single celestial body or constellation of celestial bodies in captured image. The image sensor may be situated in a mobile platform. A default geolocation device, such as an IMU subject to angular drifts, may provide a default geolocation estimate of mobile platform, and may be monitored, updated or calibrated using determined position of head-mounted image sensor.
G01C 21/02 - Navigation; Navigational instruments not provided for in groups by astronomical means
G01C 21/16 - Navigation; Navigational instruments not provided for in groups by using measurement of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
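The celestial-navigation entry above fixes position from the difference between measured and expected angles of celestial bodies. One minimal way to illustrate that step is a grid search over candidate geolocations that minimizes the angle residuals; the function names, the ephemeris callback and the least-squares criterion are hypothetical, not taken from the patent:

```python
def estimate_position(candidates, observed_angles, expected_angle):
    """Grid-search sketch: pick the candidate geolocation whose
    predicted celestial angles best match the measured ones.

    candidates      -- iterable of (lat, lon) hypotheses
    observed_angles -- {body_id: angle measured from the head-worn sensor}
    expected_angle  -- expected_angle(candidate, body_id), an assumed
                       ephemeris model supplied by the caller
    """
    def residual(c):
        return sum((observed_angles[b] - expected_angle(c, b)) ** 2
                   for b in observed_angles)
    return min(candidates, key=residual)
```

Such an estimate could then, as the abstract notes, be used to monitor or update a drifting default geolocation source such as an IMU.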
A method and a system are provided herein for calculating whether or not a specific aerial vehicle at a specified point of time can maneuver over a given location in the terrain while complying with terrain clearance requirements. The system may include a computer memory configured to store a 3D model representing at least a portion of a terrain located in a vicinity of an aerial vehicle; a computer processor configured to map said portion of the terrain into at least two types: a first type indicative of a potential of the aerial vehicle to maneuver over a respective terrain while complying with terrain clearance, and a second type indicative of a non-potential of said aerial vehicle to maneuver over a respective terrain, wherein the mapping is carried out based on said parameters, the 3D model and given predefined performance of the aerial vehicle.
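The terrain-mapping entry above partitions terrain into cells the aircraft can clear and cells it cannot, given its performance. A minimal sketch of such a two-type classification — the climb-gradient test, the 100 m clearance figure and all parameter names are illustrative assumptions, not the patent's actual criteria — could be:

```python
def classify_terrain(elevations, aircraft_alt, cell_distances, max_climb_gradient):
    """Mark each terrain cell as type 1 (aircraft can maneuver over it
    with clearance) or type 2 (it cannot), via a climb-gradient test.

    elevations         -- terrain heights of the cells
    aircraft_alt       -- current aircraft altitude
    cell_distances     -- horizontal distance to each cell
    max_climb_gradient -- best sustained climb gradient of the aircraft
    """
    clearance = 100.0  # required terrain clearance (assumed figure)
    types = []
    for elev, dist in zip(elevations, cell_distances):
        required = (elev + clearance - aircraft_alt) / dist
        types.append(1 if required <= max_climb_gradient else 2)
    return types
```

Cells needing a steeper climb than the aircraft can sustain are typed as non-maneuverable, matching the abstract's second map type.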
A system for displaying videos, comprising a processing resource configured to: provide a data repository comprising a plurality of previously captured video segments (PCVSs) captured during previous operations of corresponding platforms, each being associated with metadata indicative of a Line-of-Sight (LoS) of a sensor, carried by the corresponding platform of the platforms used to capture the corresponding PCVS, with respect to a fixed coordinate system established in space, during capturing the corresponding PCVS; obtain an indication of a Region-of-Interest (RoI); identify one or more of the PCVSs that include at least part of the RoI, utilizing the LoSs associated with the PCVSs, giving rise to RoI matching PCVSs; and display at least part of at least one of the RoI matching PCVSs, being displayed RoI matching PCVSs, on a display of an operating platform to an operator of the operating platform during a current operation of the operating platform.
B60R 1/00 - Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
G06K 9/46 - Extraction of features or characteristics of the image
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
87.
System and method for providing synthetic information on a see-through device
A system for displaying combined image data from several sources on a head-mounted see-through display (HMSTD) defining a line of sight (LOS) is provided herein. The system includes: a tracker configured to track the LOS of the HMSTD; a display controller configured to display a first display area including synthetic image data conformal to a scene viewed via the HMSTD; and a trigger mechanism configured to select a desired point within the scene which intersects the LOS of the HMSTD at a time of a triggering event, wherein the display controller is configured to receive the desired point and to display a second display area on the HMSTD, wherein the second display area is positioned relative to the desired point, and wherein said display controller is further configured to modify the synthetic image data in a portion of the first display area covered by the second display area.
A method for initiating a gesture-based mutual interaction scheme between a first and second mobile device, comprising: associating a gesture-based mutual interaction scheme between the first and second mobile device that associates a position scheme with a respective action, where the position scheme relates to any of: an absolute or relative orientation, and an absolute or relative trajectory; acquiring a first position property of the first mobile device and a second position property of the second mobile device; determining that each of the first and second position properties comply with the position scheme; triggering an execution of an action on the second mobile device, where the action is associated with the position scheme that the first position property complies with; where the second mobile device conditions the execution of the action triggered by the first mobile device on the compliance of the second position property with the position scheme.
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
G06F 1/16 - Constructional details or arrangements
Optical systems and methods are provided, which combine a see-through view of the real world with display source images using a conical optical combiner cut to have flat surfaces normal to the viewer's line of sight. The conical shape minimizes interference with the view of the real world, as the edges of the optical combiner are tangent to the viewer's field of view and the inner part of the optical combiner is semitransparent. Additionally, the optical system comprises a beam splitter and a shutter(s) for attenuating or blocking the see-through path, and may employ a polarizing element to improve the contrast between the scene observation and the projected display, thus enabling selective viewing of either. The system may also be configured to enable diopter adjustment and virtual display distance adjustments.
G02F 1/00 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics
09 - Scientific and electric apparatus and instruments
Goods & Services
(1) Computer hardware and software based command and control systems for planning and managing military operations, consisting of communication and ground positioning systems, officer's operation terminals, operator's sensor terminals, mobile and stationary commander's terminals, mobile and stationary patrol terminals, parts and fittings for the afore-defined systems
91.
Thermal management of printed circuit board components
A first thermal management approach involves an air-flow-through cooling mechanism with multiple airflow channels for dissipating heat generated in a PCA. The air flow direction through at least one of the channels is different from the air flow direction through at least another of the channels. Alternatively or additionally, the airflow inlet of at least one channel is off-axis with respect to the airflow outlet. A second thermal management approach involves the fabrication of a PCB with enhanced durability by mitigating via cracking or PTH fatigue. At least one PCB layer is composed of a base material formed from a 3D woven fiberglass fabric, and conductive material deposited onto the base material surface. A conductive PTH extends through the base material of multiple PCB layers, where the CTE of the base material along the z-axis direction substantially matches the CTE of the conductive material along the x-axis direction.
Imaging systems and methods are provided, which implement wide field imaging of a region for medical procedures and provide tracking of tools and tissues in the whole region while providing digitally magnified images of a portion of the captured region. Optical tracking may be implemented by stereoscopic imaging, and various elements may be optically tracked such as various markers and fiducials, as well as certain shapes and objects which are optically identifiable by image processing.
A system and method for displaying sensor data on a display are provided herein. The system may include: a tracker arrangement to track the line of sight (LOS) of a user; a sensor configured to be directed based on the LOS and to capture data of a scene relative to said LOS, to yield LOS captured data; and a display configured to receive the LOS captured data and display the LOS captured data relative to the LOS, wherein the display field of view (FOV) is wider than the sensor FOV, and wherein the display is configured to display a mosaic of a plurality of the LOS captured data, at least one of the LOS captured data appearing in the mosaic being real-time LOS captured data, and at least one of the LOS captured data appearing in the mosaic being previously captured LOS data.
H04N 5/232 - Devices for controlling television cameras, e.g. remote control
G06T 7/70 - Determining position or orientation of objects or cameras
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups ,
G06F 16/487 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using geographical or spatial information, e.g. location
LCD resizing methods. In first method, display is situated in vacuum chamber under vacuum conditions, and temperature is decreased below freezing temperature of display image-generating medium. A groove is formed along selected groove dimensions, extending through one plate and terminating within other plate. Adhesive is applied into formed groove, and display temperature is increased above adhesive liquidation temperature. Vacuum pressure is increased to at least atmospheric pressure, adhesive is polymerized to form seal, and plates are severed. In second method, grooves are formed in each plate along respective groove dimensions. A compressive force is applied against intended remaining plate sections to prevent air penetrating between plates and seepage of image-generating medium. Display plates are severed along respective groove dimensions, forming an exposed inner surface at plate edge. Adhesive is applied at exposed surface, compressive force is reduced allowing adhesive to permeate between plates, and adhesive is polymerized to form seal.
B32B 17/06 - Layered products essentially comprising sheet glass, or fibres of glass, slag or the like comprising glass as the main or only constituent of a layer, next to another layer of a specific substance
95.
SYSTEMS AND METHODS FOR REDUCING IMAGE ARTEFACTS IN BINOCULAR DISPLAYS
Aspects of embodiments pertain to a method for reducing a subjective visual artefact when displaying binocular overlapping images to a user of a binocular display system, the method comprising: generating, by an image display unit comprising a plurality of pixel elements, right-eye and left-eye source images; projecting the right and left-eye source images via corresponding left and right viewing optics to the user such that the user perceives partially overlapping left and right-hand observation images; and reducing a perception of a subjective visual artefact in the perceived right and/or left-hand observation images by modifying one or more pixel and/or image parameter values relating to the left and/or right-hand source images.
A direct view display system (DVDS) and a method to operate it are provided herein. The DVDS includes a variable optical power element (VOPE); a transparent active image source located with the VOPE on a common optical axis going from an outside scene to an eye position of a viewer; and a time division multiplexer (TDM) configured to control the VOPE and the transparent active image source, wherein the TDM is configured in a certain time period to cause the transparent active image source to be in a transparent state and the VOPE to exhibit no optical power, and wherein the TDM is configured in another time period to cause said transparent active image source to exhibit an image and said VOPE to apply non-zero optical power, for projecting the image onto the eye position at a desirable distance therefrom.
A system comprising a processing resource configured to: obtain a first indication of a confirmation, by an operator of a platform, of an alignment of one symbol of a first plurality of symbols with a second symbol as viewed in a head-mounted display of a head-mounting worn by the operator, the first plurality of symbols being projected by at least one optical apparatus disposed on the platform to a plurality of different viewing angles, the second symbol being projected onto the head-mounted display by a projection unit of the head-mounted display; determine which given symbol of the first plurality of symbols was aligned with the second, the given symbol having first orientation data, the second symbol having second orientation data; and perform an alignment of the head-mounting and the head-mounted display at least based on the first orientation data and the second orientation data.
Aspects of embodiments pertain to a display illumination optics for illuminating an image display device of an image generation apparatus, the display illumination optics comprising: a source illumination distributor that includes an illumination waveguide having a front surface and a back surface opposite the front surface and configured to internally direct light along a main direction, wherein the illumination waveguide is configured to distribute the luminance of input illumination light along the main direction of the display illumination optics to obtain, along the main direction, output illumination light of desired luminance.
F21V 8/00 - Use of light guides, e.g. fibre optic devices, in lighting devices or systems
G02F 1/01 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour
09 - Scientific and electric apparatus and instruments
Goods & Services
Computer hardware and software based command and control systems for planning and managing military operations, consisting of communication and ground positioning systems, officer's operation terminals, operator's sensor terminals, mobile and stationary commander's terminals, mobile and stationary patrol terminals; parts and fittings for the afore-defined systems.
100.
Augmented reality display reflective of visibility affecting features in real-world environment
Method and system for displaying augmented reality reflective of environmental features affecting visibility. Characteristics of a virtual object to be displayed on a view of a scene are determined. Environmental features affecting visibility along a line-of-sight from the scene origin to the virtual object are detected. When a detected feature is at least one non-obstructing feature, its effect on visibility is determined, and the virtual object is displayed superimposed onto the view of the scene such that the appearance of the virtual object is consistent with the determined effect on visibility. When a detected feature includes an amorphous obstructing feature, its range and contour are determined, obstructed portions of the virtual object are determined based on the difference between the range of the virtual object and the range of the amorphous obstructing feature, and the virtual object is displayed superimposed onto the view of the scene such that the determined obstructed portions of the virtual object appear obstructed in the displayed view.
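The obstruction step in this entry compares the range of the virtual object with the range of the amorphous obstructing feature along each line of sight. A minimal per-pixel sketch — the grid representation and function name are hypothetical illustrations, not taken from the patent — could be:

```python
def obstructed_mask(object_range, feature_ranges):
    """Per-pixel visibility test: a virtual-object pixel is obstructed
    when the amorphous feature (e.g. a cloud bank) lies nearer to the
    viewer along the same line of sight.

    object_range   -- range from the viewer to the virtual object
    feature_ranges -- 2D grid of ranges to the obstructing feature,
                      None where the feature is absent
    """
    return [[r is not None and r < object_range for r in row]
            for row in feature_ranges]
```

Pixels flagged True would be rendered as obstructed, so the virtual object appears to sit behind the feature in the displayed view.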