An Analysis of 18 Virtual Reality (VR) Technical Terms

You have probably heard of VR, AR, MR, and HMD, and perhaps of naked-eye 3D and 3D panoramas. But what about field of view, HMZ, ray tracing algorithms, eye tracking, or facial expression recognition? Below you will learn about 18 technical terms from the increasingly popular field of virtual reality.

1. HMD: Head-Mounted Display

A head-mounted display, also called a glasses-type display or portable theater, is worn like a pair of glasses and plays large-screen audio and video, which is why it is also known as "video glasses". Video glasses were originally developed for, and first applied in, the military. Today's video glasses are at roughly the stage the early "brick" mobile phones once occupied, and with the convergence of computing, communications, and consumer electronics (3C), they are expected to develop very rapidly.

2. HMZ: Sony's Head-Mounted Display Series

By the time Sony announced the end of HMZ production on April 24, 2015, the series had spanned three generations: the HMZ-T1 (2011), the HMZ-T2 (2012), and the HMZ-T3/T3W (2013). The HMZ-T1 used two 0.7-inch 720p OLED screens, offered virtual 5.1-channel headphones, and dragged along a large hub box; wearing it was claimed to feel like watching a 750-inch giant screen from 20 meters away. In October 2012, Sony released a lighter version, the HMZ-T2, which cut weight by 30% and dropped the built-in headphones so users could plug in their own. Although the screens kept the same 0.7-inch 720p OLED specification, a 14-bit Real RGB 3x3 color-conversion matrix engine and a new optical filter improved image quality. The 2013 HMZ-T3/T3W was a substantial upgrade: it introduced wireless signal transmission for the first time, so the HMZ-T3W could be worn for limited, small-scale movement without being tethered by cables.

3. Naked-Eye 3D

Naked-eye 3D exploits the parallax between the two eyes to produce realistic stereoscopic images with a sense of space and depth, without auxiliary equipment such as 3D glasses or helmets. Technically, naked-eye 3D falls into three types: light barrier (parallax barrier), lenticular lens, and directional backlight. Its biggest advantage is freeing viewers from glasses, but it still has notable shortcomings in resolution, viewing angle, and viewing distance.

4. Field of View (FOV)

In an optical instrument, the field of view is the angle, with the lens as its vertex, subtended by the two edges of the largest range over which the instrument can image an object. It determines how much of a scene the instrument can see: the larger the field of view, the wider the visible scene and the smaller the optical magnification. Put simply, objects outside this angle are not captured by the lens. In a display system, the field of view is the angle subtended at the viewing point (the eyes) by the edges of the display.
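This geometry is easy to check numerically. The sketch below (plain Python, assuming a 16:9 screen viewed head-on) computes the horizontal angle a display subtends at the eye, using the 750-inch-at-20-meters figure Sony quoted for the HMZ-T1 as a worked example:

```python
import math

def horizontal_fov_deg(screen_width, viewing_distance):
    """Angle subtended at the eye by a display of the given width,
    viewed head-on from the given distance (same units for both)."""
    return math.degrees(2 * math.atan(screen_width / (2 * viewing_distance)))

# Sanity check: a 750-inch 16:9 screen has a width of
# 750 * 0.0254 * 16 / sqrt(16**2 + 9**2) ~= 16.6 m; from 20 m away it spans:
width_m = 750 * 0.0254 * 16 / math.hypot(16, 9)
print(round(horizontal_fov_deg(width_m, 20.0), 1))  # ~45.1 degrees
```

So the "750-inch giant screen" claim corresponds to a horizontal field of view of roughly 45 degrees, a plausible figure for a head-mounted display of that era.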

5. Near-Eye Light Field Display

NVIDIA's "Near-Eye Light Field Display" is a head-mounted display prototype that reuses some components of Sony's HMZ-T1 3D head-mounted OLED display, with its enclosure produced by 3D printing. In place of the conventional optical lenses used in similar products, it uses a microlens array with a 3.3 mm focal length, a design that cuts the thickness of the display module from 40 mm to 10 mm and makes it more comfortable to wear. An NVIDIA GPU performs real-time ray tracing to decompose the image into dozens of slightly different viewpoint tiles, which the microlens array then reassembles in front of the user's eyes, so that the viewer naturally perceives a stereoscopic image from different perspectives, just as in the real world. Because the display reconstructs the scene's light field through the microlens array, vision-correction parameters can simply be added to the GPU computation to offset defects such as nearsightedness or farsightedness, meaning that glasses wearers, too, can enjoy clear 3D images with the naked eye.

6. Microlens Array Display Technology

In-depth study of microlens array structures has revealed how such arrays magnify micro-patterns. On that basis, researchers established the relationships between the structural parameters of the microlens array, the structural parameters of the micro-pattern, and the apparent movement speed, movement direction, and magnification ratio of the synthesized image, and used microlens arrays to achieve magnified, dynamic, and stereoscopic display of micro-patterns.
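One frequently cited relationship for such "moire magnifier" arrays is that when the lens pitch and the pattern pitch are slightly mismatched, the synthesized image is magnified by the ratio of the lens pitch to the pitch difference. A minimal sketch (the pitch values are made up for illustration):

```python
def moire_magnification(lens_pitch_um, pattern_pitch_um):
    """Moire magnification of a microlens array over a micro-pattern array.

    When the two pitches are slightly mismatched, the synthesized image is
    magnified by M = p_lens / (p_lens - p_pattern); the sign of the mismatch
    also flips the apparent direction of motion when the layers slide.
    """
    return lens_pitch_um / (lens_pitch_um - pattern_pitch_um)

# Hypothetical pitches: 100 um lenses over a 99 um pattern -> 100x magnification.
print(moire_magnification(100.0, 99.0))  # 100.0
```

This is why a tiny pitch mismatch produces the dramatic magnification and motion effects the paragraph above describes.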

7. 3D Panorama Technology

Three-dimensional panorama technology is currently one of the most popular visual technologies. Built on image-based rendering, it produces realistic virtual scenes from photographic imagery. A panorama is generated in three steps: first, a series of image samples is captured by translating or rotating the camera; then image-mosaic (stitching) techniques combine them into a panoramic image with strong dynamic and perspective effects; finally, image-fusion techniques give the panorama a fresh sense of realism and interactivity. By extracting depth information from the panorama, the 3D structure of the real scene can be recovered. The method is simple, shortens the design cycle, greatly reduces cost, and delivers good results, which is why it is popular today.
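As a small illustration of the stitching step, rotational panoramas are commonly built by first warping each photo onto a cylinder, so that camera rotation becomes a simple horizontal shift between images. A sketch of that coordinate mapping, assuming a pinhole camera with a known focal length in pixels:

```python
import math

def to_cylindrical(x, y, f):
    """Map an image-plane point (x, y), measured relative to the optical
    centre, onto a cylinder of radius f (the focal length in pixels).
    After this warp, rotating the camera about its vertical axis shifts
    the image horizontally, which simplifies mosaic alignment."""
    theta = math.atan2(x, f)    # horizontal angle around the cylinder
    h = y / math.hypot(x, f)    # normalized height on the cylinder
    return f * theta, f * h

# With a hypothetical focal length of 500 px, a point 500 px right of
# centre sits 45 degrees around the cylinder: u = 500 * pi/4 ~= 392.7.
u, v = to_cylindrical(500, 0, 500)
print(round(u, 1))  # 392.7
```

Real stitching pipelines add feature matching and blending on top of this warp, but the projection itself is the geometric core of the mosaic step.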

8. 3D Virtual Sound Technology

The stereo we hear in daily life comes from just the left and right channels. The effect can be striking, but the sound always seems to come from a plane in front of us; it cannot reproduce the experience of someone calling us from behind, where a real sound source lets us pinpoint its position by ear alone. Stereo clearly cannot do this. Three-dimensional virtual sound closes that gap: in a virtual scene, the user can localize a sound purely by listening, fully matching what the hearing system experiences in a real environment. A sound system with this property is called three-dimensional virtual sound.
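One of the main cues such a system must reproduce is the interaural time difference (ITD): a sound off to one side reaches the near ear slightly before the far ear. A sketch using Woodworth's classic spherical-head approximation (the head radius is a typical assumed value, not from the text above):

```python
import math

def interaural_time_difference(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth's approximation of the ITD in seconds: the extra path to
    the far ear is r*(sin(theta) + theta) for a source at azimuth theta,
    travelled at the speed of sound c."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (math.sin(theta) + theta)

# A source directly to one side (90 degrees) gives the maximum delay:
print(round(interaural_time_difference(90.0) * 1000, 2))  # 0.66 (ms)
# A source straight ahead gives no delay at all:
print(interaural_time_difference(0.0))  # 0.0
```

Virtual 3D audio engines apply delays (and level differences, and spectral filtering) like this per ear, which is what lets a listener place a virtual source behind or beside them.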

9. Image-Based Real-Time Rendering

Image-Based Rendering (IBR) differs from the traditional pipeline, which first builds a geometric model and then renders it under light sources. IBR instead generates views from unseen angles directly from a set of reference images: by transforming and interpolating the existing images, scenes can be obtained from different viewpoints.
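In its crudest form, the idea can be sketched as blending two reference images to approximate an intermediate viewpoint. Practical IBR systems first warp pixels along correspondences or depth, but the principle of synthesizing new views from images rather than from a geometric model is the same:

```python
def interpolate_views(img_a, img_b, t):
    """Blend two reference images pixel-by-pixel; t=0 returns view A,
    t=1 returns view B, and values in between approximate intermediate
    viewpoints (assuming the views are already aligned)."""
    return [[(1 - t) * a + t * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(img_a, img_b)]

# Two tiny 2x2 grayscale "views"; t=0.5 gives the halfway view.
left  = [[0, 100], [50, 200]]
right = [[100, 0], [150, 100]]
print(interpolate_views(left, right, 0.5))  # [[50.0, 50.0], [100.0, 150.0]]
```

Because no geometry is modelled or lit, the cost of producing a new view is independent of scene complexity, which is IBR's key appeal for real-time use.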

10. Human-Machine Interaction Technology

In a virtual reality system, the goal is to let users interact with the computer-generated virtual environment through their natural sense organs and actions: eyes, gestures, ears, speech, nose, and skin. The interaction techniques used in such a virtual environment are collectively called natural human-machine interaction technology.

11. Speech Recognition Technology

Automatic Speech Recognition (ASR) converts a speech signal into text that a computer can process, allowing the computer to understand the speaker's spoken commands and content. Fully recognizing speech is very difficult: it requires several stages, including parameter extraction, reference-model building, and pattern recognition. Through ongoing research, methods such as the Fourier transform and spectral parameters have pushed recognition accuracy steadily higher.
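The parameter-extraction stage mentioned above can be sketched in a few lines: the waveform is cut into short overlapping frames, and each frame is Fourier-analyzed. Real recognizers use the FFT plus mel filtering (e.g. MFCCs); this toy version only illustrates the framing and spectral steps:

```python
import cmath, math

def frame_signal(signal, frame_len, hop):
    """Split a waveform into overlapping frames; speech features are
    extracted frame-by-frame because speech is only locally stationary."""
    return [signal[i:i + frame_len]
            for i in range(0, len(signal) - frame_len + 1, hop)]

def dft_magnitudes(frame):
    """Naive discrete Fourier transform magnitude spectrum of one frame
    (O(n^2); real systems use the FFT)."""
    n = len(frame)
    return [abs(sum(x * cmath.exp(-2j * math.pi * k * i / n)
                    for i, x in enumerate(frame))) for k in range(n)]

# A pure tone completing exactly 2 cycles per 8-sample frame puts all its
# energy into DFT bin 2 of each frame.
signal = [math.sin(2 * math.pi * 2 * i / 8) for i in range(32)]
frames = frame_signal(signal, frame_len=8, hop=4)
print(len(frames))  # 7 overlapping frames
mags = dft_magnitudes(frames[0])
print(max(range(len(mags) // 2), key=lambda k: mags[k]))  # 2
```

The per-frame spectra (or features derived from them) are what the pattern-recognition stage then matches against its reference models.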

12. Speech Synthesis Technology

Text-to-Speech (TTS) is the technology of synthesizing artificial speech, with the goal of having the computer's voice output express meaning accurately, clearly, and naturally. Two general approaches exist: recording and replay, and text-to-speech conversion. In a virtual reality system, speech synthesis improves immersion and compensates for gaps in visual information.

13. Real-Time Realistic Rendering Technology

In a virtual reality system, the demands on realistic rendering differ from those of traditional photorealistic graphics. Traditional rendering is judged only on image quality and realism; in VR, the display must also update at least as fast as the user's view changes, or the picture will lag. Real-time 3D rendering in VR therefore requires images to be generated on the fly at no fewer than 10 to 20 frames per second, while still faithfully reflecting the physical properties of the simulated objects. To make scenes both realistic and real-time, techniques such as texture mapping, environment mapping, and anti-aliasing are commonly used.
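The frame-rate floor translates directly into a per-frame time budget for the renderer, which is easy to compute:

```python
def frame_budget_ms(fps):
    """Maximum time the renderer may spend on one frame at a given rate."""
    return 1000.0 / fps

def meets_deadline(render_ms, fps):
    """Does a measured per-frame render time keep up with the target rate?"""
    return render_ms <= frame_budget_ms(fps)

# The 10-20 fps floor cited above allows 50-100 ms per frame; note that
# modern VR headsets target 90 fps, leaving only ~11.1 ms.
print(frame_budget_ms(20))       # 50.0
print(meets_deadline(12.0, 90))  # False: 12 ms blows an 11.1 ms budget
```

This is why every technique listed above (texture mapping, environment mapping, anti-aliasing) is chosen as much for speed as for realism.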

14. Eye Tracking Technology

Eye-movement-based interaction, also known as gaze tracking, follows where the user is actually looking. It is simple and direct, and it compensates for the shortcomings of head tracking alone.

15. Facial Expression Recognition Technology

Current research in this area still falls short of what people hope for, but the results so far already show its appeal. The technique generally proceeds in three steps. First comes facial expression tracking: the user's face is recorded with a video camera, and expressions are recognized through image analysis and recognition techniques. Second comes expression encoding: researchers use the Facial Action Coding System (FACS) to decompose human facial expressions and to classify and encode facial movements. Finally comes expression recognition itself: the FACS coding can be assembled into a complete facial expression recognition pipeline.

16. Gesture Recognition Technology

Data gloves or depth-image sensors (such as Leap Motion or Kinect) accurately measure the position and shape of the hand, enabling a virtual hand to manipulate virtual objects in the environment. A data glove determines the position and orientation of the hand and its joints through bend and twist sensors that capture the flexing of the fingers and palm. Depth-sensor-based gesture recognition instead uses the depth image to recover data such as the palm pose and the bending angles of the fingers.
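For example, once a sensor reports 3D joint positions, the bend of a finger can be estimated as the angle at its middle joint. A sketch (the coordinates are invented for illustration):

```python
import math

def joint_angle_deg(a, b, c):
    """Bend angle at joint b, given the 3D positions of three consecutive
    hand joints (e.g. from a depth sensor's skeleton output): the angle
    between vectors b->a and b->c. 180 degrees = segment fully straight."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

# A right-angle bend at the middle joint of a hypothetical finger:
print(joint_angle_deg((0, 0, 0), (1, 0, 0), (1, 1, 0)))  # 90.0
```

Thresholding angles like this per finger is a simple way to classify static poses (fist, open palm, pointing) before richer gesture models are applied.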

17. Real-Time Collision Detection Technology

In daily life, people have internalized certain physical intuitions: solids cannot pass through one another, a dropped object falls freely, a thrown object follows a projectile trajectory, and motion is affected by gravity and air flow. To simulate the real-world environment faithfully in a virtual reality system and prevent objects from penetrating one another, real-time collision detection must be introduced. Moore proposed two collision detection algorithms, one for triangulated object surfaces and the other for collision detection among polyhedra. Preventing penetration involves three parts: first, the collision must be detected; second, the object velocities must be adjusted in response; finally, if the collision does not immediately separate the objects, the contact force must be computed and applied until they separate.
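The detect-then-respond sequence can be sketched for the simplest possible case, two spheres. Real systems use the polyhedral tests mentioned above, but the structure is the same (the response shown assumes equal masses and a head-on, one-dimensional elastic collision):

```python
import math

def spheres_collide(c1, r1, c2, r2):
    """Step 1 - detection: two spheres interpenetrate exactly when the
    distance between their centres is less than the sum of their radii."""
    return math.dist(c1, c2) < r1 + r2

def elastic_response(v1, v2):
    """Step 2 - response: for equal masses colliding head-on in 1-D,
    an elastic collision simply exchanges the two velocities."""
    return v2, v1

# Two unit spheres whose centres are 1.5 apart overlap (1.5 < 2) ...
print(spheres_collide((0, 0, 0), 1.0, (1.5, 0, 0), 1.0))  # True
# ... so their approach velocities are swapped to drive them apart.
print(elastic_response(2.0, -1.0))  # (-1.0, 2.0)
```

Sphere tests like this are also used in practice as cheap "broad-phase" filters before the exact (and expensive) polyhedral checks are run.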

18. Ray Tracing Algorithm

For generating visible images in 3D computer graphics, ray tracing achieves greater realism than ray casting or scan-line rendering. It works by tracing, in reverse, the light paths that would arrive at a virtual camera lens. By following a large number of such rays through the scene, the algorithm reconstructs what is visible from the camera's viewpoint under the specified lighting: wherever a ray intersects an object or medium, the reflection, refraction, and absorption of light are computed. Ray-traced scenes are often described mathematically by programmers, but they can also be built by visual artists using intermediate tools, or assembled from images and model data captured by other means, such as digital cameras.
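The core operation, finding where a ray first meets an object, reduces to simple algebra for a sphere. A minimal sketch (unit-length ray directions assumed):

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Distance t along a unit-direction ray at which it first hits a
    sphere, or None if it misses: solves |o + t*d - c|^2 = r^2 for the
    smallest non-negative root of the resulting quadratic."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * x for d, x in zip(direction, oc))
    c_ = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4 * c_          # the quadratic's a == 1 for a unit d
    if disc < 0:
        return None                # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2
    return t if t >= 0 else None

# A ray fired from the origin along +z hits a unit sphere centred at z=5
# on its near surface, at t = 4; a ray along +x misses it.
print(ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
print(ray_sphere((0, 0, 0), (1, 0, 0), (0, 0, 5), 1.0))  # None
```

A full ray tracer repeats this test per pixel against every scene object, then spawns secondary rays at each hit point to account for reflection, refraction, and shadowing.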
