Topographical Control Panel

A visual, tactile, and topographical user interface device.
The device consists of a 2-dimensional array of millimeter-scale sensor-actuator elements, covered by a flexible and crush-resistant display layer. It senses forces applied by the user and actuates to change its topography in response. It can mimic the markings, topography, and responses of an ordinary keyboard or other familiar controls, and can also interact in entirely new ways.
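
As a rough sketch of how the element array might be modeled in software (the grid size, units, and names here are illustrative assumptions, not a specification), each element can be treated as one force reading in and one commanded height out:

    from dataclasses import dataclass, field

    @dataclass
    class Cell:
        """One millimeter-scale sensor-actuator element (hypothetical model)."""
        force_mN: float = 0.0    # force sensed from the user's finger, in millinewtons
        height_um: float = 0.0   # commanded surface height above the base plane, in micrometers

    @dataclass
    class Panel:
        """A 2-D array of elements under the flexible display layer."""
        rows: int
        cols: int
        cells: list = field(default_factory=list)

        def __post_init__(self):
            self.cells = [[Cell() for _ in range(self.cols)] for _ in range(self.rows)]

        def sense(self, r, c):
            return self.cells[r][c].force_mN

        def actuate(self, r, c, height_um):
            self.cells[r][c].height_um = height_um

    # Example: raise a small patch of the surface to form a key cap (sizes are illustrative)
    panel = Panel(rows=120, cols=300)
    for r in range(40, 50):
        for c in range(100, 110):
            panel.actuate(r, c, 800.0)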

This control panel would allow arbitrary layouts, where keys and other control elements can take on any size, shape, position, or meaning. Not only could the layout change with user context; software could also change the layout dynamically. For example, a game could change part of the control set when the player changes vehicles or weapons.
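
A minimal sketch of how application software might describe and swap such layouts, assuming a simple region-based layout format (the field names, sizes, and heights are placeholders):

    # Hypothetical layout description: each control is a labeled rectangular region of elements.
    ROWS, COLS = 120, 300
    heights = [[0.0] * COLS for _ in range(ROWS)]   # commanded height of each element, in micrometers

    TYPING_LAYOUT = [
        {"label": "Q", "row": 10, "col": 12, "w": 8, "h": 8},
        {"label": "W", "row": 10, "col": 21, "w": 8, "h": 8},
        # ...remaining keys omitted
    ]

    VEHICLE_LAYOUT = [
        {"label": "THROTTLE", "row": 30, "col": 10, "w": 10, "h": 40},
        {"label": "FIRE",     "row": 30, "col": 60, "w": 16, "h": 16},
    ]

    def apply_layout(heights, controls, key_height=800.0):
        """Flatten the surface, then raise each control's region; the display layer
        would also redraw each control's label over its raised footprint."""
        for row in heights:
            for c in range(len(row)):
                row[c] = 0.0
        for ctl in controls:
            for r in range(ctl["row"], ctl["row"] + ctl["h"]):
                for c in range(ctl["col"], ctl["col"] + ctl["w"]):
                    heights[r][c] = key_height

    # A game could swap control sets when the player boards a vehicle:
    apply_layout(heights, TYPING_LAYOUT)
    apply_layout(heights, VEHICLE_LAYOUT)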

The panel would improve interactions with ordinary key elements. Some examples (two are sketched in code below):

  • key markings could change according to modifier keys;
  • lock keys could stay depressed while engaged, and pop up when disengaged;
  • modifier keys could respond to a harder press by locking and staying down;
  • key repeats could be reported by pulsing toward the user;
  • repeat rate could be controlled by how hard the user presses;
  • a disallowed key could vibrate when pressed and/or refuse to depress;
  • pressing and moving sideways could reveal related symbols; and so on.
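
Two of these behaviors, sketched with made-up thresholds and units to show the force-in, motion-out loop:

    def repeat_interval(force_mN, min_s=0.5, max_s=0.02):
        """Map press force to key-repeat interval: press harder, repeat faster.
        The 2 N saturation point and the interval range are illustrative guesses."""
        f = min(max(force_mN, 0.0), 2000.0) / 2000.0
        return min_s + f * (max_s - min_s)

    def update_lock_key(key, force_mN, press_threshold_mN=300.0):
        """Caps-lock-style key: a press toggles it, the actuator holds it physically
        down while engaged, and it pops back up when disengaged."""
        pressed = force_mN > press_threshold_mN
        if pressed and not key["was_pressed"]:
            key["engaged"] = not key["engaged"]
        key["was_pressed"] = pressed
        key["height_um"] = 200.0 if key["engaged"] else 800.0   # held low vs. popped up
        return key

    caps = {"engaged": False, "was_pressed": False, "height_um": 800.0}
    caps = update_lock_key(caps, force_mN=450.0)   # first press engages the lock
    print(caps["engaged"], round(repeat_interval(1500.0), 3))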

A control element could emulate a touchpad, again with more varied interaction. One potentially useful new behavior: when a drag-and-drop operation fails because it lands on an unacceptable destination, the drag button could refuse to pop back up, requiring the user to re-press and re-drag.
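
A sketch of that refusal behavior, with illustrative heights:

    def drag_button_height(button_held_down, drop_target_valid, up_um=800.0, down_um=200.0):
        """Decide the drag button's physical height. If the user releases over an
        invalid destination, the button stays down, signaling that the drop did not
        take and must be repeated. Heights are illustrative."""
        if button_held_down:
            return down_um
        if not drop_target_valid:
            return down_um          # refuse to pop up: the drop failed
        return up_um                # normal release

    # Release over a destination that accepts the item vs. one that does not:
    print(drag_button_height(button_held_down=False, drop_target_valid=True))   # 800.0
    print(drag_button_height(button_held_down=False, drop_target_valid=False))  # 200.0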

The panel could improve interactions for impaired users. For example, trembling and wandering fingers could be tracked and accounted for. If the resolution is high enough, controls could have braille markings.
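
One way the tracking might account for tremor is a simple smoothing filter on the tracked finger position; this is only an illustrative sketch, with an arbitrary smoothing factor:

    def smooth_tremor(samples, alpha=0.15):
        """Exponential smoothing of a finger's tracked (x, y) position so that small,
        rapid tremors do not shift which control receives the press. The alpha value
        is a guess; a real filter would be tuned per user."""
        if not samples:
            return []
        sx, sy = samples[0]
        out = [(sx, sy)]
        for x, y in samples[1:]:
            sx = sx + alpha * (x - sx)
            sy = sy + alpha * (y - sy)
            out.append((sx, sy))
        return out

    # A trembling finger jitters around element (50, 80); the smoothed track stays near it.
    raw = [(50, 80), (52, 79), (49, 82), (51, 78), (50, 81)]
    print(smooth_tremor(raw)[-1])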

Independent meaning could be assigned to gestures that span multiple control elements. The meaning could be unrelated to the spanned controls (like Opera's mouse gestures), or depend on them (like Android's unlock screen). Spanning gestures could also move or resize a "viewport" onto a larger virtual control panel (like the old scrolling virtual desktops), which could, for example, give a small laptop a virtual keyboard large enough to include a number pad. A touchpad control could expand as needed when you reach its edge too soon.
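
A sketch of the viewport idea, assuming a physical array smaller than the virtual panel and a spanning gesture that pans the window (all sizes are placeholders):

    PHYS_ROWS, PHYS_COLS = 60, 200       # physical element array (illustrative size)
    VIRT_ROWS, VIRT_COLS = 60, 300       # larger virtual panel, e.g. keyboard plus number pad

    viewport_col = 0                     # leftmost virtual column currently shown

    def pan_viewport(delta_cols):
        """Slide the physical panel's window across the virtual panel, clamped to its
        edges. A spanning swipe gesture across several controls could drive delta_cols."""
        global viewport_col
        viewport_col = max(0, min(VIRT_COLS - PHYS_COLS, viewport_col + delta_cols))

    def virtual_coord(phys_row, phys_col):
        """Translate a touched physical element into virtual-panel coordinates."""
        return phys_row, phys_col + viewport_col

    # Swipe right by 100 columns to bring the number pad under the user's hand:
    pan_viewport(100)
    print(virtual_coord(10, 150))   # -> (10, 250), a key in the number-pad region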

This device would also allow entirely new interaction paradigms. Think of what this could do for your favorite map editor. Or for a blind person. It could do surprising things like mimic surface tension by following your finger out for a short distance when you lift it away. Imagine experiencing that interaction with your fingertip. Now imagine it with your whole hand. That particular behavior might be more creepy than useful, but there is a flexibility here that will allow interfaces we really can't imagine until we start making them.

The computation needed for immediate responses would require an embedded mobile GPU, which would also manage communication with software on the host computer. The panel would emulate a standard keyboard as a fallback, and could potentially function independently with its own apps.
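
A sketch of that routing decision, assuming a hypothetical rich "panel" protocol alongside the standard keyboard fallback (the event fields and protocol names are invented for illustration):

    def report_event(event, host_supports_panel_protocol):
        """Route an input event either to the panel's own rich protocol or, as a
        fallback, to an ordinary keyboard report. Only the fallback path corresponds
        to a standard keyboard; the 'panel' channel is hypothetical."""
        if host_supports_panel_protocol:
            return {"channel": "panel", "event": event}   # full detail: force, position, gesture
        if event.get("type") == "keypress":
            return {"channel": "keyboard", "keycode": event["keycode"]}   # plain key report
        return None   # rich events have no meaning to a keyboard-only host

    print(report_event({"type": "keypress", "keycode": 0x04, "force_mN": 600}, False))
    print(report_event({"type": "gesture", "name": "span-swipe"}, True))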

ABOUT THE ENTRANT

  • Name:
    Shad Sterling
  • Type of entry:
    individual
  • Profession:
    Engineer/Designer
  • Number of times previously entering contest:
    1
  • For managing CAD data Shad's company uses:
    None
  • Shad belongs to these online communities:
    Facebook
  • Shad is inspired by:
    All the bad designs I'm forced to put up with.
  • Patent status:
    none