The future of computer input devices

KIAman

Diamond Member
Mar 7, 2001
3,342
23
81
Does Hollywood have a clue? How many movies have you seen which feature a futuristic "touchscreen" holographic display that somehow "magically" knows exactly what you want by a simple touch or gesture?

Is this realistic? Even if it is realistic, does it make it a good way to provide input?

Where are we going in the realm of input devices?

Some of my thoughts below (assuming we resolve all the technical hurdles associated)...

1. Voice command - hugely complicated, and it alienates mute users
2. Touchscreen - Old idea but looks neat, could be very slow (relying on motion)
3. Holographic touchscreen - looks even neater, potential?
4. Mind/computer - Take your limbs out of the equation?!?
5. Motion sensor - super-advanced Wii controller
6. Eye sensor - Eye focus = mouse target
7. Good ole mouse and keyboard - can we advance?

Any others?

Is there a path to go from here?
 

z1ggy

Lifer
May 17, 2008
10,010
66
91
I'd say every one of those things listed already exists, maybe with the exception of a touch hologram. But who knows.
 

bobsmith1492

Diamond Member
Feb 21, 2004
3,875
3
81
I don't think touchscreens will catch on and proliferate. First, by touching a surface, you are getting it oily and messy, obscuring the view of what is underneath (possibly alleviated by "holographic" touchscreens). Second, it requires large-scale movement of your hand, arm, and upper body as the screen grows in size (like in Minority Report). People are too lazy! By which I mean I'm too lazy... ;) I want to twitch my hand and have things happen. So, I would vote more toward mind-control solutions; even eye movement becomes tiring quickly.

I know there are quite a few touchscreens in use now: iPhones and industrial controllers, for example, and drawing pads for artists and industrial designers. They're pretty limited, though, to those particular applications. For general computer work, the mouse and keyboard are still much easier.
 

Modelworks

Lifer
Feb 22, 2007
16,240
7
76
I think it will be voice, or possibly eye tracking.
The best input device is the one that you don't have to carry with you or get up to get in order to use it.
Voice works very well but doesn't do well on things like picking items off a menu displayed on your TV. People get tired of saying "up, up, down, left, right, up" to get to something on the screen.
Eye tracking works great for things like that and most people have their eyes with them all the time :)

I studied HID quite a bit when I was doing engineering, and I'm still working on a few concepts, though they are handheld, remote-based devices. I don't have the level of skill needed to develop eye-tracking systems. One of the most tested and proven examples of eye tracking is the system used by Stephen Hawking, who uses it to speak and work.

It could work by your TV having a spot on the frame: when the TV notices you staring at it, followed by a long blink, it brings up the interface. Wherever you look, the cursor moves; a blink would equal a click.

I really think this is where the tech is going: using the body itself as the interface. Microsoft is coming out with Natal, which will be just that - a body interface without a controller.
http://www.xbox.com/en-US/live/projectnatal/

Change the channel? Just raise your arm for channel up, lower it for channel down :)


The hologram aspect is very possible once we figure out how to make photons stay put. Then you could just use something like Natal to read body language, and the hologram would work as the display.


I don't think that eye movement gets tiring at all. We already do it thousands of times a day without thinking about it. When we move the mouse, we constantly adjust our eyes to where we want the cursor to go. The difference with eye tracking is that you don't have to use your hands: just look where you want to click, and blink. (A rough sketch of the idea follows below.)
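To make that concrete, here's a minimal sketch of the blink-driven interface in Python: the cursor follows the gaze point, a short blink counts as a click, and a long blink toggles the interface. The tracker feed, sample rate, and blink thresholds are all made-up stand-ins; a real eye tracker would supply the (x, y, eye_open) samples.

```python
# Blink-driven gaze interface: the cursor follows the gaze, a short
# blink clicks, and a long blink toggles the on-screen interface.
# The tracker feed and all thresholds here are hypothetical.

SAMPLE_HZ = 60                 # assumed tracker sample rate
SHORT_BLINK = (0.10, 0.40)     # eye closed this long (s) => click
LONG_BLINK = 1.00              # closed at least this long => toggle UI

def process(samples):
    """Consume (x, y, eye_open) samples; return interface events."""
    events = []
    cursor = (0.0, 0.0)
    closed = 0                         # consecutive closed-eye samples
    for x, y, eye_open in samples:
        if eye_open:
            duration = closed / SAMPLE_HZ
            if duration >= LONG_BLINK:
                events.append(("toggle_ui", cursor))
            elif SHORT_BLINK[0] <= duration <= SHORT_BLINK[1]:
                events.append(("click", cursor))
            closed = 0
            cursor = (x, y)            # the cursor simply follows the gaze
        else:
            closed += 1                # eye closed: hold the cursor still
    return events

# Fake feed: fixate at screen centre, blink for ~0.2 s, fixate again.
feed = [(0.5, 0.5, True)] * 30 + [(0.5, 0.5, False)] * 12 + [(0.5, 0.5, True)] * 30
print(process(feed))                   # => [('click', (0.5, 0.5))]
```

Separating deliberate clicks from the ordinary involuntary blinks we make anyway, purely by duration, is the part a real system would have to tune carefully.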
 

Rubycon

Madame President
Aug 10, 2005
17,768
485
126
No. 4. Monitors, keyboards, and even speakers will be obsolete and look hopelessly dated in the future. The mind is capable of processing so much information that it's mind-boggling! :p With aural interfaces it will be possible to hear millihertz waves as well as the ultrasonic chirps and clicks of a cetacean's echolocation - at the fundamental frequency! 20-20k will be so 2007. ;)
 

dkozloski

Diamond Member
Oct 9, 1999
3,005
0
76
Apache helicopter pilots launch Hellfire missiles and fire cannons with head and eye movements.
 

silverpig

Lifer
Jul 29, 2001
27,703
12
81
There'll probably be a contact lens you wear and a little nanotech-impregnated film you put on your temple that will read your brain waves, sense your eye movement, and look at whatever you're looking at to understand what you want.
 

KIAman

Diamond Member
Mar 7, 2001
3,342
23
81
Originally posted by: guardingtime
Well, as for the keyboard, the future is basically not having a physical board of plastic or metal, like this image through the URL
http://kristinawright.com/ee/i...s/virtual_keyboard.jpg

Hmm, typically an improvement in technology implies that the technology is somehow more useful, efficient, convenient, or cheaper, but I wonder how a virtual keyboard projected by that strange rectangular object could possibly be an improvement? I saw something like this several years ago, but it never got anywhere.

I just thought of something to expand on #5. Speech recognition is tough because of the differing ways of saying the same thing and because every person's voice sounds different.

What about reading sign language? Two Wii-like motion detectors, one in each hand, with a camera in front to capture motion. (A toy sketch of the matching step is below.)
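To illustrate the motion-capture half of the idea, here's a toy sketch that classifies a hand-motion trace (as a Wii-style accelerometer might report it) by nearest-neighbour comparison against recorded templates. The gestures and readings are invented for the example; real sign-language recognition would need far richer data from both hands plus the camera.

```python
# Toy gesture matcher: classify a hand-motion trace from a Wii-style
# accelerometer by nearest-neighbour distance to recorded templates.
# Gestures and readings are invented for the example.

import numpy as np

def resample(trace, n=32):
    """Stretch or shrink a (t, 3)-shaped accelerometer trace to n samples."""
    trace = np.asarray(trace, dtype=float)
    old = np.linspace(0.0, 1.0, len(trace))
    new = np.linspace(0.0, 1.0, n)
    return np.stack([np.interp(new, old, trace[:, k]) for k in range(3)], axis=1)

def classify(trace, templates):
    """Return the label of the closest template after length-normalising."""
    t = resample(trace)
    scores = {label: np.linalg.norm(t - resample(ref))
              for label, ref in templates.items()}
    return min(scores, key=scores.get)

# Hypothetical templates: a wave is side-to-side x motion, a lift is z motion.
wave = [(np.sin(i / 2.0), 0.0, 0.0) for i in range(40)]
lift = [(0.0, 0.0, i / 40.0) for i in range(40)]

# The same wave gesture, recorded at twice the sample rate and a bit noisy.
unknown = [(0.9 * np.sin(i / 4.0), 0.05, 0.0) for i in range(80)]
print(classify(unknown, {"wave": wave, "lift": lift}))   # => wave
```

Resampling to a fixed length is what lets a slow wave and a fast wave match the same template; anything subtler (two hands, finger shapes) is where the camera would have to take over.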
 

Nathelion

Senior member
Jan 30, 2006
697
1
0
Originally posted by: Modelworks
I really think this is where the tech is going: using the body itself as the interface. [...] The difference with eye tracking is that you don't have to use your hands: just look where you want to click, and blink.

If you've ever read The Hitchhiker's Guide to the Galaxy, remember the part where they're trying to listen to the radio and have to sit completely still for it to stay on the same channel, because the slightest movement makes it switch? There you have the problem with full-body interfaces, eye-motion interfaces, and similar ideas.
 

jsedlak

Senior member
Mar 2, 2008
278
0
71
Originally posted by: KIAman
1. Voice command - hugely complicated, and it alienates mute users
2. Touchscreen - Old idea but looks neat, could be very slow (relying on motion)
3. Holographic touchscreen - looks even neater, potential?
4. Mind/computer - Take your limbs out of the equation?!?
5. Motion sensor - super-advanced Wii controller
6. Eye sensor - Eye focus = mouse target
7. Good ole mouse and keyboard - can we advance?

1. Horrible - no one wants to be in an office setting talking to their computer with a hundred other people doing the same.

2. Good idea - so long as they invent a touch screen that can give perfect tactile feedback.

3. Another good idea so long as they can provide tactile feedback in some way, even if faked in our brains.

4. Would probably be the fastest and best way to go, but also the hardest to use and develop, and to make reliable, cheap, and fast.

5. Meh. Useless if you have mind link or touch screen.

6. Would you have to wear something on your eye? Hell no. Would you have to have a device on the desk? No. How would another person move the mouse or use the computer? Lots of usability hurdles with this one.

7. The best thing right now; everything we try to produce, with the exception of the mind link, is a virtualization of these devices. How do we improve them? Make them more ergonomic, more precise, and longer-lasting. They can always be improved because they are simple.

 

WildW

Senior member
Oct 3, 2008
984
20
81
evilpicard.com
I would love the mind-control mouse. Has anyone tried the OCZ NIA (Neural Impulse Actuator) gizmo? Link . . . I read a review saying the guy trying it took a month of training himself to get anything like a usable experience... but I'm still itching to try one. I'm the kind of idiot who's anal enough to try and try and try to get good at something like that. The idea of charging around Battlefield 2 and steering with my thoughts is so compelling.

I know it'd be hard to begin with, but so are many coordination tasks we all learn in our lifetimes: driving... knife and fork... mouse... Imagine how the kids raised on one of these gizmos would be able to whoop us.
 

piasabird

Lifer
Feb 6, 2002
17,168
60
91
Maybe some laser-pointer contacts and a cam that can watch where you are looking and then control the mouse movements accordingly. That way you do not have to actually move your head. It might also be nice if they made a contact lens of some kind that could magnify the screen for older people.
 

Vee

Senior member
Jun 18, 2004
689
0
0
If we restrict the discussion to computers - as in working with a general computer: desktop, laptop, workstation, whatever:
1: Voice command? No, simply because menus, tool buttons, etc. are so much more convenient, fast, deliberate, and error-free. Not only are they quick and silent ways to execute an action, they also present you with the contextually available actions, radically reducing the need for training and memorizing. Commands, no. However:
1b: I do believe in voice-to-text input eventually. It's a damn hard technology, but the gains are so great that this simply will happen eventually.
2: Touchscreen? No. My arm is tired already, and I like my screen clear and highly visible. How do I right-click? It doesn't do anything I can't already do with much better precision and flexibility with the mouse. It's cute for a gizmo where you don't have, or don't want, a mouse or keyboard.
4: Mind/computer? Will definitely continue to evolve. For healthy, non-paralysed persons, however, it's not going to be an attractive alternative for any visible future. It is very far from offering advantages that make it worthwhile enough to put plugs into your head.
5: Motion sensor? Definitely! Yes. We will get some kind of motion-tracking technology, used in conjunction with display technologies, to give applications the means to accomplish special virtual manipulations.
6: Move the pointer with eye movement? No. I can already look at whatever part of the screen I want to look at. And how do I keep the mouse pointer away from where I don't want it? And I still need to click, right? Well, why can't I move the mouse too, then?
7: Mouse and keyboard. Will be with us indefinitely. For portable devices they may be partially replaced with virtual keyboards and virtual pointing devices, but the working comfort of a real, physical keyboard is impossible to match with a virtual device.

So, in short, some kind of motion sensor will open up the vast range of possible manipulations of virtual objects. The motion sensor will be the new 'input device', and the virtual objects we manipulate will become the new GUI objects. Virtual input devices are of course also possible, but I think we can say that the new input device is the motion sensor itself.

I think we should not forget that such developments will take place in the context of new display technologies. It seems obvious that we will get stereo-vision displays eventually - "3-D" to all those who don't know the difference. This seems crucial to making virtual manipulations and motion-sensor devices relevant.

Another type of display technology that would be quite cute is stereoscope glasses (instead of a monitor). With attitude-sensing technology those would not only offer all the advantages of stereo vision, but would also allow you to look around. Think of it as a means to 'scroll' and keep your orientation within the visible sector of a vast virtual 'screen' (a small sketch of the mapping is below). Flight simulators would never be the same again, and I actually have hopes that the game industry will drive this technology.
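As a small sketch of the attitude-sensing idea, here's how head yaw and pitch from hypothetical sensor glasses might map to the visible window on a much larger virtual screen. All the dimensions and angle ranges are assumptions for the example.

```python
# Map head attitude (yaw/pitch, degrees) from hypothetical sensing
# glasses to the visible window on a much larger virtual screen.

VIRTUAL_W, VIRTUAL_H = 8000, 4000    # the vast virtual 'screen' (px)
VIEW_W, VIEW_H = 1600, 900           # what the glasses actually show
YAW_RANGE, PITCH_RANGE = 60.0, 30.0  # comfortable head turn, edge to edge

def viewport(yaw_deg, pitch_deg):
    """Return the (left, top) corner of the visible sector."""
    # Normalise head angles to 0..1 across the comfortable range.
    nx = min(max((yaw_deg + YAW_RANGE) / (2 * YAW_RANGE), 0.0), 1.0)
    ny = min(max((pitch_deg + PITCH_RANGE) / (2 * PITCH_RANGE), 0.0), 1.0)
    left = int(nx * (VIRTUAL_W - VIEW_W))
    top = int(ny * (VIRTUAL_H - VIEW_H))
    return left, top

# Looking straight ahead centres the view; turning right scrolls right.
print(viewport(0.0, 0.0))     # => (3200, 1550)
print(viewport(30.0, 0.0))    # => (4800, 1550)
```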
 

mutz

Senior member
Jun 5, 2009
343
0
0
1. Horrible - no one wants to be in an office setting talking to their computer with a hundred other people doing the same.
There was a presentation somewhere about a guy with a kind of directional sound amplifier: while sitting in a library and talking through it to people several meters away, he could aim the sound directly at them so that no one around except them could hear it.
Some kind of mouth speaker and earphone, or a Bluetooth headset (such as 911 call centers use), is always possible too.
Also, it would probably be a great thing to be able to program by voice.

 

Sahakiel

Golden Member
Oct 19, 2001
1,746
0
86
Do not assume only one interface is available.
Expect increasing reliance on natural input and less on the keyboard/mouse as primary input.
Redundant input options will proliferate, along with learning prediction algorithms like those in cell phones or East Asian keyboard input methods, only expanded to include the intended operation.
Input sources will be combined to provide the context necessary for proper operation, the way humans do with body language, speech, and tone.
Input options for the disabled simply remove certain interfaces and emphasize others (remove eye tracking/mouse for blind users, but retain voice/gesture/keyboard). A sketch of the gaze-plus-voice combination follows the examples below.

Example: lighting that automatically turns on; doors that unlock when they see your face approaching, or when you just say "open," or when you turn the knob.
Example: navigate menus using eye tracking and voice. Look at the start button -> say "start" -> look at an option -> say "open" or the menu item -> look at the new options -> rinse/repeat, or just call out options without needing to specifically look at them.
Example: navigate any window/desktop GUI using hand gestures for flipping, minimizing, closing, etc., while opening icons by looking at them and saying "open"; drag-and-drop becomes a combination of gestures and eye tracking (look/point at the file -> pinch fingers -> look/gesture at the other folder -> open fingers).
Example: control a music player using vocal commands for "play," "next," etc., and gestures like waving left-to-right for next or a flat raised palm for stop, or just wincing along with a shrill scream to turn down the volume.
Example: control the TV by looking at the screen and saying "change channel" to channel surf, "channel 2," "louder," "mute," pointing at it over your shoulder and saying "turn off," or saying "pause; find the name, biography, and photos of the actress in the upper-left corner of the screen, preferably nude" and waiting for the pedo police to knock on the door.
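Here's a rough sketch of how that gaze-plus-voice redundancy might be wired up: a voice command acts on whatever was most recently fixated, with a timeout so a stale glance doesn't get clicked. Both input feeds are hypothetical stand-ins.

```python
# Fuse a gaze stream with voice commands: "open" acts on whatever
# was most recently looked at. Both input feeds are hypothetical.

import time

GAZE_TIMEOUT = 2.0   # seconds a fixation stays valid as a target

class Fusion:
    def __init__(self):
        self.target = None                 # (widget, timestamp) of last fixation

    def on_gaze(self, widget_name, t):
        self.target = (widget_name, t)     # remember what the user looked at

    def on_voice(self, command, t):
        # A spoken command binds to the most recent, still-fresh fixation.
        if self.target and t - self.target[1] <= GAZE_TIMEOUT:
            return f"{command} -> {self.target[0]}"
        return f"{command} -> (no recent gaze target)"

fusion = Fusion()
now = time.monotonic()
fusion.on_gaze("start_button", now)
print(fusion.on_voice("open", now + 0.5))   # open -> start_button
print(fusion.on_voice("open", now + 9.0))   # open -> (no recent gaze target)
```

The same pattern extends to gestures: each modality just updates or consumes shared context, which is roughly how humans combine pointing and speech.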
 
Nov 26, 2005
15,189
401
126
For some reason I see a flaw behind that whole idea. To me, it doesn't seem streamlined: too much movement for step-by-step user input.
 

ConstipatedVigilante

Diamond Member
Feb 22, 2006
7,670
1
0
I can see tiny computers that clip over an eye being the future. If we get those good enough, there's no need to lug around a clumsy laptop or a little holographic screen projector - just a little monocle, used via eye movement and thought.
 

DrPizza

Administrator Elite Member Goat Whisperer
Mar 5, 2001
49,601
167
111
www.slatebrookfarm.com
:confused:
Is this thread from the 1990s or something?
Re: #2
My wife has a touchscreen - you don't have to touch it with your finger (you could, though); you use a stylus to touch the screen. Great handwriting recognition, etc. I much prefer using her computer over mine. Doing a lot of stuff all over the screen is much faster than with a mouse.

Case 2 for #2: Promethean boards and other interactive boards that are quickly being adopted by schools for use in classrooms.
 

Modelworks

Lifer
Feb 22, 2007
16,240
7
76
Originally posted by: Nathelion


If you've ever read The Hitchhiker's Guide to the Galaxy, remember the part where they're trying to listen to the radio and have to sit completely still for it to stay on the same channel, because the slightest movement makes it switch? There you have the problem with full-body interfaces, eye-motion interfaces, and similar ideas.

That is easy to solve, though. Place a thimble-like device on a finger that must touch the thumb to enable or disable the computer's recognition of commands.

I think eye tracking is the next big thing. Put the clicking function onto a capacitive pad just big enough for a finger or two, use the eyes to move the mouse pointer, and let a finger tap for clicks. It makes for a very fast interface, much faster than any mouse, and it works well for everyone, including disabled people. (A quick sketch of the combination is below.)
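A quick sketch of the combination, with the thimble acting as a clutch: gaze moves the pointer, a fingertip tap clicks, and nothing at all is recognized unless the thimble is touching the thumb. All three event streams are stand-ins for real sensors.

```python
# Thimble 'clutch' plus gaze-and-tap: gaze drives the pointer, a tap
# clicks, and no input counts unless the thimble touches the thumb.
# All three event streams are hypothetical stand-ins for real sensors.

def handle(events):
    """events: (kind, payload) tuples in time order; returns actions."""
    actions = []
    clutch_on = False            # thimble touching thumb?
    pointer = (0.0, 0.0)
    for kind, payload in events:
        if kind == "thimble":
            clutch_on = payload            # True = contact closed
        elif not clutch_on:
            continue                       # clutch open: ignore everything
        elif kind == "gaze":
            pointer = payload              # the eyes drive the pointer
        elif kind == "tap":
            actions.append(("click", pointer))
    return actions

stream = [
    ("gaze", (0.2, 0.9)),      # ignored: clutch not engaged yet
    ("thimble", True),
    ("gaze", (0.7, 0.3)),
    ("tap", None),
    ("thimble", False),
    ("tap", None),             # ignored again
]
print(handle(stream))          # => [('click', (0.7, 0.3))]
```

The clutch is what solves the Hitchhiker's-radio problem above: glancing around does nothing until you deliberately engage it.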
 

KIAman

Diamond Member
Mar 7, 2001
3,342
23
81
Originally posted by: DrPizza
:confused:
Is this thread from the 1990s or something?
My wife has a touchscreen - you use a stylus to touch the screen. [...] Doing a lot of stuff all over the screen is much faster than with a mouse.

Case 2 for #2: Promethean boards and other interactive boards that are quickly being adopted by schools.

Yup, I understand your point, but those are specialized devices (e.g., tablet PCs). I was speculating about input devices for general computers, and as far as I know, the touchscreen has yet to catch on there. The iPhone seems to be changing a lot of that perception, but it is still far from a general computer.

Maybe the general computer will go the way of the typewriter?
 

vol7ron

Member
Sep 5, 2006
43
0
66
Odds are that the next widely available technology will be optical tracking, simply because it's easy to use the camera already installed in most laptops. All you need to develop is good calibration and tracking software for the at-home user (a rough proof-of-concept is below). Note that it isn't accessible for blind users, but that's why voice control has already been developed.
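As a rough proof-of-concept of the webcam idea, here's a sketch using OpenCV's stock Haar cascade to find the face and map its centre to a normalised pointer position. Note this is head tracking, not true eye tracking - gaze estimation needs much more than this, and the mapping here is deliberately crude.

```python
# Proof-of-concept head tracking with a stock laptop webcam: OpenCV's
# bundled Haar cascade finds the face, and the face centre becomes a
# normalised pointer position. Real gaze tracking needs far more.

import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                    # the built-in webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) > 0:
        x, y, w, h = faces[0]
        # Normalise the face centre to 0..1 in each axis; a real app
        # would smooth this and feed it to the OS cursor.
        cx = (x + w / 2) / frame.shape[1]
        cy = (y + h / 2) / frame.shape[0]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        print(f"pointer target: ({cx:.2f}, {cy:.2f})")
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):    # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```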

The future is in mind control. A few years ago, someone developed hardware (some sort of sensor attached to a person's brain) to control a mouse. It was used by a quadriplegic to check email and write a letter. There were no optical sensors like Stephen Hawking's; for best results, though, the future might require combining the different sensors.

My guess is that once technology can capture images of people's thoughts, then instead of controlling a mouse or looking at a menu icon, you could just think about what the application should look like. For cases where memory is blurry, a search function might pop up and give you multiple suggestions. Example: think about a blank Internet Explorer window and one appears on screen, or think about Google in a browser window and it takes you there. There could be significant applications for searching images based on visual memory. Forget the name of that actor but remember what he looks like, or having trouble recalling that perfect word you'd like to use? Think about it, and it will be found.
 

Sahakiel

Golden Member
Oct 19, 2001
1,746
0
86
The problem with touchscreens and the like is that you're limited to specific applications. Touchscreens work on cell phones because of their size; touchscreens fail on large screens because of their size.

I think that, in general, the current generation of computer users is firmly tied to the concept of direct input. The issue of tactile feedback looms so large with projected keyboards and full-touch devices like the iPhone only because users insist on feeling in control of their input. They are uncomfortable with a device where they don't appear to be fully in control, so if it doesn't respond immediately when they tap a key, they think something is wrong with it or that they missed.

The whole premise behind using redundant input methods incorporating eye movement, voice, body movement, etc., is the idea of natural input. As humans, we don't just use a keyboard to talk to each other - unless you're a member of the newest generation of users, who will text even when sitting next to each other at the dinner table. Still, the large majority of the world doesn't even have access to a computer at the moment, which allows plenty of time to develop natural input interfaces.

Right now, computers are limited to keyboards, touchscreens, mice, styluses, trackballs, pre-programmed voice commands, and a few other specialized inputs for specialized applications. In the past, handwriting recognition was little more than an idea rattling around someone's head, but now low-power devices are capable of high accuracy. Same with voice recognition, which managed about 75% accuracy after training some ten years ago versus the 90%+ out-of-the-box accuracy we have now. As computers evolve, they'll develop the potential to incorporate all the inputs a human can provide and use them to determine the best possible response, the same way other humans do. Keyboards won't become extinct; they'll probably stay around as alternative interfaces. Most of us would probably be more comfortable typing passwords. Others might prefer the computer simply identify us by height, build, body temperature, habitual eye movement, voice, etc. (bio-signatures).
 

greenbean

Member
Jul 25, 2008
26
0
0
The vast majority of us are still being used by our computers. The fact that the mouse is the ultimate efficiency killer is so obvious that no one even bothers to study it anymore. We're just plain lazy, and no new uber-kool input device is going to fix that. As for the list:

1. As previously echoed, voice recognition is a no-go for any serious work, because most people want their privacy or don't have any to begin with.
2. We have touchscreens now, and while they eliminate having additional buttons on your iWhatever, they're of limited use because you must actually touch the thing consistently, so input is regularly prone to misinterpretation.
3. Holographics don't change the fact that you still have the problems of #2.
4. Thoughts? We can only consciously command one thing at any given moment. Even if the invasive procedures are overcome, controlling something with motor neurons or thoughts would not be a pick-up-and-go experience and would probably require individual calibration; even then, could we consciously accomplish as much as ten fingers can on a keyboard?
5. 3D motion sensors are great for duking it out with a virtual Iron Mike, but for actually controlling a 3D CAD program like Iron Man? A motion sensor that accurate, while feasible, would require a huge investment of time to calibrate. What if someone is missing a finger or an arm? What if they need to scratch, or their pet runs past the sensor and initiates some unwanted action? All is not lost - this is the future - but in a more controlled, hybrid form (that stuff in the Iron Man movie is noise).
6. Eye movement still requires some kind of motion sensor or button to indicate what action your eye is triggering; the problem being that we can only look at one thing at a time.
7. Keyboards, yes, for a long, long time. Mice? Meh. Trackballs are a step ahead for all the cable-dragging, RSI-prone, lazy mouse users, but you've still got only one point of control on the screen.

So where to? Limited-depth 2.5D motion sensing for controlling focus and objects in view makes a lot of sense. Sorry, I don't have any examples, but imagine a multi-touchscreen without having to touch: precision could be increased or decreased based on proximity to the sensor, and the relative velocity and acceleration of movement would indicate all kinds of intent (a crude sketch is below). The keyboard is still a must, but most people need to learn how to type.
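For what it's worth, here's a crude sketch of the proximity-scaled precision idea: hand movement from a hypothetical depth sensor drives the pointer, and the hand's distance from the sensor scales the gain - close in for fine work, further back for coarse, fast sweeps. The gain curve is an arbitrary choice for the example.

```python
# Proximity-scaled 2.5D pointing: raw hand movement from a hypothetical
# depth sensor is scaled by the hand's distance from the sensor, so
# working close to the sensor gives finer control.

def pointer_delta(dx, dy, z_m):
    """Scale raw hand movement (dx, dy) by hand distance z_m in metres."""
    # Gain grows with distance: at 0.2 m the same sweep moves the
    # pointer 0.4x as far (fine work); at 1.0 m it moves 2x as far.
    gain = min(max(z_m * 2.0, 0.2), 3.0)
    return dx * gain, dy * gain

print(pointer_delta(10, 0, 0.2))   # => (4.0, 0.0)  close in: fine control
print(pointer_delta(10, 0, 1.0))   # => (20.0, 0.0) far back: coarse sweeps
```

Velocity and acceleration could feed the same gain function, which is one way the "intent" signal described above might actually be computed.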