Vol 2 Chapter 362: Contradictions, disputes, anxieties
- Military Technology
- Zhi Tiange
- 1233 characters
- 2021-01-29 06:43:28
Moreover, neither strong light nor a dark environment can be allowed to compromise the screen's transparency and thereby the wearer's field of vision. This means the transparent screen must adjust its display intensity according to the surroundings.
Boosting the display inevitably reduces the screen's transparency, obstructing the wearer's view; dialing the intensity down degrades picture quality and thus the viewing experience.
This is a contradiction that can only be resolved case by case. In which scenarios should the display be brightened, and in which should its intensity be reduced? Manual control alone is not enough; the system must also adjust automatically and intelligently according to the wearing environment.
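The automatic adjustment described above could be sketched as a simple mapping from ambient light to display brightness. Everything here, the calibration points in lux, the brightness range in nits, and the linear ramp, is an illustrative assumption, not how any real AR device works:

```python
def display_intensity(ambient_lux, min_nits=80, max_nits=600):
    """Map ambient light (lux) to screen brightness (nits).

    Hypothetical linear ramp between a dim indoor level and direct
    sunlight; a real device would use tuned curves and hysteresis
    to avoid flicker when the reading hovers near a boundary.
    """
    LOW, HIGH = 50.0, 10000.0  # assumed calibration points (lux)
    if ambient_lux <= LOW:
        return min_nits
    if ambient_lux >= HIGH:
        return max_nits
    frac = (ambient_lux - LOW) / (HIGH - LOW)
    return min_nits + frac * (max_nits - min_nits)
```

In a dark room the screen stays dim and transparent; in sunlight it brightens toward its ceiling, which is exactly the trade-off the passage describes.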
Beyond the display problems, there is also the question of information and data processing capability, which again splits into hardware and software.
First, on the hardware side, AR glasses differ fundamentally from VR headsets. Because the usage environments and scenarios are different, AR glasses need to be worn for long stretches and adapt to all kinds of settings, so their volume and weight must be kept as low as possible.
The ideal is a device no bigger or heavier than an ordinary pair of glasses; anything too large or too heavy ruins the wearing experience.
How to fit so much hardware into something so small and light is itself a paradox, and it places extremely high demands on the integration of the entire hardware stack.
The common approach today is to pack these components into the temple arms on both sides of the frame, but even then the result is bulky and uncomfortable to wear.
Because of the volume and weight constraints, the hardware's power is bound to be limited, which in turn severely caps the system's computing capacity. How to raise the system's information and data processing capability is another problem the R&D team must solve.
Although the spread of 5G has made high-speed transmission of data a non-issue, receiving and processing that flood of information in real time remains very difficult.
A simple environment is manageable; a complex one is another matter entirely.
Imagine a scene: you are walking through a busy intersection where every building, billboard, and even street fixture carries AR annotations. Your glasses must then receive a huge volume of AR data all at once and render it on the screen simultaneously, which places enormous demands on the processor and the system.
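One way a constrained device might cope with that flood is to rank the incoming overlays and render only a fixed budget of them per frame. The tuple layout, the scoring rule (priority first, then distance), and the budget below are all hypothetical, just a sketch of the prioritization idea:

```python
def select_overlays(overlays, budget=5):
    """Pick which AR annotations to render when many sources
    broadcast at once.

    `overlays` is a list of (name, distance_m, priority) tuples;
    higher-priority items win, with nearer items breaking ties.
    The scheme is illustrative, not from any real AR platform.
    """
    scored = sorted(overlays, key=lambda o: (-o[2], o[1]))
    return [name for name, _, _ in scored[:budget]]
```

A nearby bus stop with useful information would then be drawn before a distant, low-priority billboard, keeping the per-frame workload bounded.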
The last problem lies in the interaction system. VR can be controlled with wearable sensor gloves or hand-held controllers.
That does not work for AR: because AR must adapt to all kinds of environments and scenarios, it needs a simpler, more direct method.
Three approaches come to mind at present. The first is eye-tracking control.
An eye-tracking sensor captures eyeball rotation, blinking, and the focal point of the gaze in real time for interactive control. The technology already exists and performs well on many devices.
In general it is used together with head-motion sensors. Look up and the on-screen content scrolls up; look down and it scrolls down; look left or right and the content slides accordingly.
Blinking triggers operations such as confirming and selecting: one blink to confirm, two to cancel, and so on, the equivalent of the left and right mouse buttons.
And the focal point of the gaze corresponds to the mouse cursor: wherever you look, the focus follows, just as nimbly as a pointer.
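The blink-and-gaze scheme above amounts to two small mappings: blink count to click action, and a normalized gaze point to a cursor position. The screen resolution and the exact blink codes are assumptions taken only from the description above:

```python
def blink_action(blink_count):
    """One blink confirms, two cancel (per the scheme in the text);
    anything else is ignored, mirroring left/right mouse buttons."""
    return {1: "confirm", 2: "cancel"}.get(blink_count, "none")

def gaze_to_cursor(gaze_x, gaze_y, width=1920, height=1080):
    """Map a normalized gaze focus (0..1 in each axis) to a screen
    pixel, the eye-tracking analogue of a mouse pointer. The
    resolution is an arbitrary example value."""
    x = max(0, min(width - 1, int(gaze_x * width)))
    y = max(0, min(height - 1, int(gaze_y * height)))
    return x, y
```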
The second approach is gesture control, using sensors to capture the movements of the hands in front of the wearer for interactive control.
For example, slide your hand up or down and the on-screen content scrolls with it, likewise left and right. A finger drag can reposition the view or zoom it in and out; a finger tap confirms, a wave cancels, and so on.
Gesture recognition and control is developing rapidly, but recognizing fast-changing gestures remains difficult. The sensors must capture gestures accurately, and the processor must convert them into the corresponding operation instructions quickly and precisely.
There is another issue: everyone's gestures differ, and even the same person performs the same gesture slightly differently across times, environments, and scenarios.
This complicates capture and recognition, so the system needs good fault tolerance.
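That fault tolerance can be illustrated with nearest-template matching under a distance threshold: a slightly imperfect trajectory still matches, while an unrecognizable one is rejected. The template names, the metric, and the threshold are all invented for illustration; real systems resample and normalize trajectories and often use dynamic time warping or learned classifiers:

```python
def match_gesture(trajectory, templates, tolerance=0.25):
    """Classify a gesture trajectory (list of (x, y) points) by its
    nearest template, accepting the match only if the mean point-wise
    distance stays within `tolerance`. Illustrative sketch only."""
    def mean_dist(a, b):
        return sum(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
                   for (ax, ay), (bx, by) in zip(a, b)) / len(a)

    best, best_d = None, float("inf")
    for name, tpl in templates.items():
        d = mean_dist(trajectory, tpl)
        if d < best_d:
            best, best_d = name, d
    # fault tolerance: close enough matches, everything else is rejected
    return best if best_d <= tolerance else None
```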
The third mode of interaction sounds more like science fiction: the brain-computer control technology that has recently become so popular. Put simply, it means controlling a device through thought and imagination.
The brain waves we emit when imagining different things, pictures, or objects are distinct from one another, and brain-computer control exploits those differences to interact with a device.
For example, once your brain forms the idea of moving forward, it releases a corresponding brain wave; the brain-computer system recognizes that wave and converts it into the electrical signal instructing the device to move forward.
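A toy version of that wave-to-instruction step might classify precomputed EEG band powers into commands. The band names, thresholds, and command mapping are invented for illustration; an actual BCI pipeline involves signal filtering, feature extraction, and a trained classifier, not fixed thresholds:

```python
def classify_intent(band_power, threshold=0.6):
    """Turn a dict of normalized EEG band powers into a movement
    command. Purely illustrative: assumes elevated beta power means
    'forward' and elevated alpha means 'stop', which is a made-up
    mapping, not neuroscience."""
    if band_power.get("beta", 0.0) > threshold:
        return "forward"
    if band_power.get("alpha", 0.0) > threshold:
        return "stop"
    return "idle"
```

In the wheelchair example below, this is the step that stands between a recognized brain wave and the motor command it triggers.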
The technology is already in use in some fields, including brain-controlled wheelchairs for patients with high-level paraplegia, who can use thought alone to make the chair move, stop, and so on.
Brain-computer control has also been applied to text input; the reported speed reaches 70 words per minute, which is remarkably fast.
Although the technology is advancing quickly and has become a hotly contested research area for tech giants around the world, the controversy surrounding it has not subsided; if anything, it has intensified.
One core question everyone debates is whether the technology is safe. First, is it safe to use? If you wear a brain-wave-capturing sensor for long periods, will it damage the brain, affect intelligence or the nervous system, or otherwise harm your health?
Second, since brain-computer devices can read brain waves, it follows that signals could also be written in. With network security already under growing threat, if hackers mastered the relevant techniques and used brain-computer technology to break into a human brain, couldn't they steal the data and secrets inside it?
Or worse still: what if a hacker used this channel to implant a virus in a human brain? Would the brain then need a restart, or an outright format? Or should we install antivirus software and set up a firewall in our heads?