Pen computing refers to any computer user interface that uses a pen (or stylus) and tablet, rather than devices such as keyboards, joysticks or mice. For many years just about every portable device came with a stylus. Even the Tablet PC was tied closely enough to the stylus that the wave of computing it was supposed to usher in was called “pen computing.”

Human beings prefer to communicate by listening, speaking, reading, writing and even gesturing, and we process these signals with a fluency machines have not yet matched. Naturally, we want to interact with computers in the same way. At present, however, dialogue with computers requires formal languages and operation by keyboard and mouse. On the one hand, we must not only learn at least one computer language but also read voluminous manuals; on the other, these constraints inevitably limit the efficiency of human-computer interaction.

Pen computing also refers to the use of mobile devices such as wireless tablet PCs, PDAs and GPS receivers; the term has even been used for any product allowing mobile communication. The hallmark of such a device is a stylus or digital pen, generally used to press upon a graphics tablet or touchscreen rather than a more traditional interface such as a keyboard, keypad, mouse or touchpad.

Historically, pen computing (defined as a computer system employing a user-interface using a pointing device plus handwriting recognition as the primary means for interactive user input) predates the use of a mouse and graphical display by at least two decades, starting with the Stylator and RAND Tablet systems of the 1950s and early 1960s.

In 2007, the introduction of Apple’s iPhone changed all that. Suddenly there was a hot, new device that you could operate entirely with your ungloved finger. Stylus-based interfaces like Windows Mobile became extinct in record time. Fingers were the new mice for mobile. True stylus believers were stuck with cheesy capacitive pseudo-fingers that allowed them to smush things around on the screen. No matter how well-designed, capacitive styli are limited by the typically low resolution and lack of pressure-sensitivity of capacitive touchscreens.

The last couple of years have seen a major revival in stylus-based products, bringing back the promise first offered by “pen computing.” Devices using an active stylus have several major advantages over passive-stylus designs. First, it is easy for applications to tell the difference between the pen and touch. That means you can rest your hand or accidentally touch the screen while writing with the pen and not have it mess up your ink. Second, you get very high resolution, enabling precise note-taking and drawing. Third, the stylus can be detected even when it is only hovering over the display, allowing for some cool added features. Finally, you also get pressure-sensitivity, allowing more accurate modeling of different drawing and painting tools.
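To illustrate those advantages concretely, here is a minimal Python sketch. The PenEvent record is a hypothetical stand-in for whatever a real active-digitizer driver reports; the field names and the pressure-to-width mapping are illustrative assumptions, not any particular API:

```python
from dataclasses import dataclass

@dataclass
class PenEvent:
    x: float
    y: float
    source: str      # "pen" or "touch", as reported by an active digitizer
    pressure: float  # normalized 0.0..1.0; 0.0 when the pen is only hovering

def stroke_width(event: PenEvent, base: float = 2.0, max_extra: float = 6.0) -> float:
    """Map pen pressure to stroke width for more natural-looking ink."""
    return base + max_extra * event.pressure

def handle_event(event: PenEvent) -> str:
    # Palm rejection: while inking, ignore touch contacts entirely,
    # because the digitizer distinguishes the active pen from fingers.
    if event.source == "touch":
        return "ignored (palm/finger while inking)"
    if event.pressure == 0.0:
        return "hover (e.g. show a cursor preview, no ink)"
    return f"ink at ({event.x}, {event.y}) width={stroke_width(event):.1f}"

print(handle_event(PenEvent(10, 12, "pen", 0.8)))   # pressed: draws ink
print(handle_event(PenEvent(11, 13, "pen", 0.0)))   # hovering: no ink
print(handle_event(PenEvent(40, 90, "touch", 1.0))) # resting palm: ignored
```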

A gesture interface is a subset of graphical user interface input for devices equipped with pointing devices other than the keyboard, or with touch screens; it allows keyboard commands (or keyboard shortcuts) to be emulated with gestures. The main motivation for developing such interfaces is to improve the ergonomics of interaction by moving away from the usual application menus of computer programs.

Such an interface can be realized either with coordinate input devices that read the position of a single contact point (a mouse or graphics tablet – see “mouse gestures”), or with devices that can read the coordinates of more than one point at once (so-called multitouch) – touch screens and panels. The latter are widely used in the interfaces of many modern smartphones with touch screens (e.g. the iPhone), laptops (whether with a touchpad or a touch screen) and other mobile devices.

On devices with large screens – for example, Tablet PCs – gesture strokes are standard functions of the control interface and pen input. On handheld devices (PDAs, mobile phones, etc.), with their small physical screens, strokes require less positioning accuracy than accessing traditional elements of the graphical interface, such as pressing a button or selecting a menu item.

Techniques:
User interfaces for pen computing can be implemented in several ways. Actual systems generally employ a combination of these techniques.

Pointing/locator input:
The tablet and stylus are used as pointing devices, such as to replace a mouse. While a mouse is a relative pointing device (one uses the mouse to “push the cursor around” on a screen), a tablet is an absolute pointing device (one places the stylus where the cursor is to appear).
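The distinction can be shown with a small sketch; the tablet and screen resolutions below are illustrative assumptions, not values from any particular device:

```python
# Relative device (mouse): each report is a delta applied to the current cursor.
def apply_mouse_delta(cursor, dx, dy):
    x, y = cursor
    return (x + dx, y + dy)

# Absolute device (tablet): each report is a position in tablet coordinates,
# scaled directly to screen coordinates; the previous cursor position is irrelevant.
def map_tablet_point(tx, ty, tablet_size=(10000, 7500), screen_size=(1920, 1080)):
    tw, th = tablet_size
    sw, sh = screen_size
    return (tx * sw / tw, ty * sh / th)

cursor = (100, 100)
cursor = apply_mouse_delta(cursor, 5, -3)  # mouse: cursor moves relative to where it was
print(cursor)                              # (105, 97)
print(map_tablet_point(5000, 3750))        # tablet: this point always maps to screen centre
```

With the mouse, the result depends on where the cursor already was; with the tablet, the same pen position always maps to the same screen position.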

There are a number of human factors to be considered when actually substituting a stylus and tablet for a mouse. For example, it is much harder to target or tap the same exact position twice with a stylus, so “double-tap” operations with a stylus are harder to perform if the system is expecting “double-click” input from a mouse.
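A common mitigation is to give pen input a looser spatial tolerance when pairing taps into a double-tap. A minimal sketch, with the time and distance thresholds chosen purely for illustration:

```python
def is_double_tap(first, second, max_interval=0.4, max_distance=18.0):
    """Treat two taps as a double-tap if they are close in time AND space.

    A stylus lands less precisely than a stationary mouse, so the spatial
    tolerance (max_distance, in pixels here) is typically made larger for
    pen input than the slop allowed for mouse double-clicks.
    """
    (x1, y1, t1), (x2, y2, t2) = first, second
    close_in_time = (t2 - t1) <= max_interval
    close_in_space = ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5 <= max_distance
    return close_in_time and close_in_space

print(is_double_tap((100, 100, 0.00), (110, 104, 0.25)))  # True: near miss still counts
print(is_double_tap((100, 100, 0.00), (160, 100, 0.25)))  # False: too far apart
```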

A finger can be used as the stylus on a touch-sensitive tablet surface, such as with a touchscreen.

Handwriting recognition:
The tablet and stylus can be used to replace a keyboard, or both a mouse and a keyboard, by using the tablet and stylus in two modes:

Pointing mode: The stylus is used as a pointing device as above.
On-line handwriting recognition mode: The strokes made with the stylus are captured as “electronic ink” and analyzed by software which recognizes the shapes of the strokes or marks as handwritten characters. The characters are then input as text, as if from a keyboard.
Different systems switch between the modes (pointing vs. handwriting recognition) by different means, e.g.

by writing in separate areas of the tablet for pointing mode and for handwriting-recognition mode.
by pressing a special button on the side of the stylus to change modes.
by context, such as treating any marks not recognized as text as pointing input.
by recognizing a special gesture mark.
The term “on-line handwriting recognition” is used to distinguish recognition of handwriting using a real-time digitizing tablet for input, as contrasted to “off-line handwriting recognition”, which is optical character recognition of static handwritten symbols from paper.
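To make the mode handling concrete, here is a minimal dispatch sketch in Python. The recognize callback and the stand-in fake_recognizer are hypothetical placeholders for a real on-line recognizer; the fallback branch corresponds to the context-based switching described above, where unrecognized marks are treated as pointing input:

```python
def dispatch_stroke(stroke, mode, recognize):
    """Route a completed stroke depending on the current input mode.

    stroke    -- list of (x, y) points captured from the stylus ("electronic ink")
    mode      -- "pointing" or "handwriting"
    recognize -- an on-line handwriting recognizer; returns text or None
    """
    if mode == "pointing":
        # Treat the stroke's start point as a tap/click target.
        return ("click", stroke[0])
    text = recognize(stroke)
    if text is not None:
        return ("text", text)  # recognized characters, input as if typed
    # Context-based fallback: unrecognized marks become pointing input.
    return ("click", stroke[0])

# A stand-in recognizer for demonstration only; a real one would classify
# the stroke's shape against character models.
fake_recognizer = lambda stroke: "A" if len(stroke) > 3 else None

print(dispatch_stroke([(5, 5), (6, 7), (8, 9), (9, 12)], "handwriting", fake_recognizer))
print(dispatch_stroke([(5, 5)], "pointing", fake_recognizer))
```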

Direct manipulation:
The stylus is used to touch, press, and drag on simulated objects directly. The Wang Freestyle system is one example. Freestyle worked entirely by direct manipulation, with the addition of electronic “ink” for adding handwritten notes.
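A bare-bones sketch of this pattern (hit-test on stylus-down, then move the object with the stylus), with a hypothetical Box type standing in for any on-screen object:

```python
from dataclasses import dataclass

@dataclass
class Box:
    x: float
    y: float
    w: float
    h: float

    def contains(self, px, py):
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

def drag(objects, down, move_to):
    """Hit-test on stylus-down, then move the touched object with the stylus."""
    for obj in reversed(objects):          # topmost object first
        if obj.contains(*down):
            dx, dy = move_to[0] - down[0], move_to[1] - down[1]
            obj.x += dx
            obj.y += dy
            return obj
    return None                            # stylus-down missed every object

boxes = [Box(0, 0, 50, 50), Box(40, 40, 50, 50)]
moved = drag(boxes, down=(45, 45), move_to=(100, 100))
print(moved)   # the topmost box was picked up and dragged
```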

Gesture recognition:
This is the technique of recognizing certain special shapes not as handwriting input, but as an indicator of a special command.

For example, a “pig-tail” shape (used often as a proofreader’s mark) would indicate a “delete” operation. Depending on the implementation, what is deleted might be the object or text where the mark was made, or the stylus can be used as a pointing device to select what it is that should be deleted. With Apple’s Newton OS, text could be deleted by scratching in a zig-zag pattern over it.
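One crude way to detect such a scratch-out mark is to count reversals in the stroke’s horizontal direction. The sketch below is an illustrative heuristic only, not how Newton OS actually implemented it:

```python
def is_scratch_out(stroke, min_reversals=3):
    """Detect a zig-zag "scratch-out" delete gesture from a stroke.

    Several quick back-and-forth reversals of horizontal direction are
    read as a deletion mark rather than as ordinary ink.
    """
    directions = []
    for (x1, _), (x2, _) in zip(stroke, stroke[1:]):
        if x2 != x1:
            directions.append(1 if x2 > x1 else -1)
    reversals = sum(1 for a, b in zip(directions, directions[1:]) if a != b)
    return reversals >= min_reversals

zigzag = [(0, 0), (30, 5), (2, 10), (28, 15), (1, 20), (30, 25)]
line   = [(0, 0), (10, 1), (20, 2), (30, 3)]
print(is_scratch_out(zigzag))  # True: treated as a delete command
print(is_scratch_out(line))    # False: ordinary ink/handwriting
```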

Recent systems use digitizers which can recognize more than one “stylus” (usually a finger) at a time, and make use of multi-touch gestures.
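As a simple example, the familiar two-finger pinch can be reduced to comparing the distance between the two contact points over time; a real implementation would also track centroids, thresholds and gesture state, so this is only a sketch:

```python
import math

def pinch_scale(p1_start, p2_start, p1_now, p2_now):
    """Two-finger pinch: the zoom factor is the ratio of the current to the
    initial distance between the two contact points the digitizer reports."""
    def dist(a, b):
        return math.hypot(b[0] - a[0], b[1] - a[1])
    return dist(p1_now, p2_now) / dist(p1_start, p2_start)

# Fingers start 100 px apart and spread to 200 px apart -> zoom in 2x.
print(pinch_scale((100, 100), (200, 100), (50, 100), (250, 100)))  # 2.0
```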

The PenPoint OS was a special operating system which incorporated gesture recognition and handwriting input at all levels of the operating system. Prior systems which employed gesture recognition only did so within special applications, such as CAD/CAM applications or text processing.

Source: Wikipedia
