Beyond Monitor, Mouse & Keyboard as Human Computer Interfaces

Peter Wurmsdobler
Feb 15, 2020


ViewTablet: a full-size passive screen and a pen tablet as a low-cost computer peripheral

In most office workplaces, keyboard, mouse and monitor have been used as the main human-computer interfaces for 40-odd years now; they have become an unchallenged part of our work environment. Keyboard and mouse as input devices haven’t changed much during that period, at least conceptually, even though there are some special designs. Computer monitors as output devices were initially CRT-based and grew quite big, but nowadays use TFT panels and seem to have converged on a certain size and thickness.

In many work contexts and applications, it has to be said, keyboard, mouse and monitor fulfil their function quite well. For instance, as I am typing this text, the keyboard proves to be an appropriate means to get characters appearing on my screen rather efficiently; when I write emails, carry out software development, i.e. write computer code, or produce documentation, this combination of input and output is suitable, too.

However, I do spend quite some time reading documents on my screen, which I find quite tiresome; sometimes I even wish I could mark up the text I am reading with a pen on the screen. In addition, I spend quite some time making technical drawings, block diagrams and sketches as part of my design work, using the computer mouse as a “pen”. For these kinds of applications, the combination of keyboard, mouse and monitor has, in my opinion, two major flaws:

  • the non-collocation (or dislocation) of haptic and optical focus,
  • the control of the input device by the hand rather than by the fingers.

The purpose of this story is to propose an alternative computer interface, the ViewTablet (which admittedly needs a better name): the combination of a full-size passive screen with a pen tablet as an additional, low-cost computer peripheral that is appropriate for a large portion of computer interaction, for me and perhaps for many computer users.

Certainly, you may say, nowadays there are tablets, touch screens, smartphones or even the reMarkable which allow you to do just that. Nevertheless, I wish for a dumb computer peripheral, ideally under $100, which I can connect to the computer I already own, or the computer I am using in the office; at that price no manager would hesitate to sign a purchase order, as it would be considered stationery. Simply put, I do not wish to buy a self-contained tablet with battery, CPU and all that complexity, which only adds cost for something I do not need, and usually comes with limited functionality due to the available computing power or software.

This story tries to raise awareness of the need for such a ViewTablet and to create an initial product specification.

Creative shortcomings of mouse to screen

If one considers the traditional way of working on paper, or on any workpiece for that matter, whether it is drawing, writing, painting or another creative activity, the creator is usually focused on the workpiece in front of her or him, with eyes and mind closely following the tool in use as well as the tool’s product, e.g. a line created by a pen. The location where the tool touches the workpiece, e.g. at the tip of the pen, is in most cases very close to the fingertips. A fine-tuned, high-precision control loop is in operation, a feedback loop of action and vision, where the eyes focus on the result of the tool at hand, e.g. the creation of a line with a pen close to the fingertips, and on the actuators, the fingers. Even when proof-reading documents in a conventional way, a pen may be used to guide the mind’s focus on ordinary paper. Hence, a collocation of optical and haptic focus is in place; fine sensory and motor precision as well as time delays in the feedback control loop determine the achievable performance.

In contrast, computer-aided work with mouse and screen suffers from a long control loop from vision to action, involving the arm, the hand, the mouse, the computer’s processor, drivers and software, video board and screen, i.e. translations and delays in the loop as well as a non-collocation compensation in the human brain. The user controls the mouse’s movement with the hand, which is then translated into an equivalent movement of an arrow on the computer’s screen. Consequently, the user’s eyes focus only on that image instead of on the hands or fingers, the natural focus. The result is what I would call a non-collocation (or dislocation) between the haptic focus (the motor focus on the hands or fingers) and the optical focus (the visual focus of the eyes, where the intellectual focus lies).

Another impediment: the mouse, which offers only a rather coarse motion resolution, is mostly controlled by a combination of hand and arm. The muscles for that motion are located in the upper and lower arm; they are not fine motor actuators. In contrast, the fingers’ muscles, even though they are located in the lower arm too, are used to carry out fine motor movements, as they do when making intricate drawings. A computer mouse driven by arm muscles cannot compete with the precision obtained when using fingers. As a consequence, creative work is nearly impossible when using a mouse. (In the past, for CAD work I did use a digital pen on a tablet, where one has finer control over movements, but the non-collocation between the tool and its effect on the screen makes it quite difficult.)

When using mouse and screen as input and output devices, the brain tries to compensate unconsciously in order to collocate haptic and optical focus, which has to be learned; some people may even get very good at it, and the shortcomings can be partly compensated. Since there is more intellectual work involved, however, I would claim that it cannot reach the optimal, natural performance. This is probably why most people cannot create intricate pieces of art or proof-read text on an ordinary computer screen. In addition, current computer screens are active, light-emitting devices, i.e. very fatiguing for the eyes. These factors combined might be another reason why it is so tiring to proof-read papers on the screen, and why people still print documents. So much for the paperless office.

Specification of a creator’s computer peripheral

That being said, the requirements for a new input/output device are:

  • Collocation of input and output area, i.e. a writeable screen, at a very high resolution for motion and display,
  • Use of a passive display device such as electronic paper that is gentle on the eyes, ideally in colour, e.g. by Cambridge company FlexEnable,
  • Some means to track a pointing device or pen which is a) precise enough and b) does not add an optical layer thick enough to contribute to parallax, e.g. possibly technology by Cambridge Touch Technologies,
  • Said pen as an electronic pen, capable of drawing, picking and placing: e.g. when it is placed on the screen it can draw; it also contains a tiny push button which acts like tweezers so that the user can pick up objects (a small sketch of these pen states follows this list).
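
To make the intended pen behaviour concrete, here is a minimal sketch, purely an illustration with made-up names, of how a pen sample (position, tip contact, button state) might be mapped to the actions of drawing, picking and placing.

    # Hypothetical model of the pen's states; field and action names are made up.
    from dataclasses import dataclass

    @dataclass
    class PenSample:
        x: int              # pen position on the tablet, in tablet counts
        y: int
        in_contact: bool    # True while the pen tip touches the screen
        button: bool        # True while the "tweezers" button is pressed

    def action_for(sample: PenSample, holding_object: bool) -> str:
        """Map one pen sample to the intended user action."""
        if sample.button and not holding_object:
            return 'pick'    # button pressed: grab the object under the pen tip
        if not sample.button and holding_object:
            return 'place'   # button released: drop the held object here
        if sample.in_contact:
            return 'draw'    # pen resting on the screen: leave a line
        return 'hover'       # otherwise just move the pointer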

The ViewTablet, a USB slave device acting as an additional computer peripheral, could remedy the situation:

  • A flat panel, around 5 mm thick, combining electronic paper and a touch-sensitive film in one device,
  • A viewing area that accommodates both European A4 and American letter paper sizes, or even larger,
  • A USB interface towards the host, mimicking an HDMI display device so that the ViewTablet appears as both a screen and a mouse; the work is done by the host operating system driver, e.g. based on the DisplayLink DL-3000 (a host-side sketch of the pointer side follows this list),
  • No battery and no CPU, only a small ASIC to control the passive display and, most importantly, to overlay the mouse pointer image onto the screen image locally, so there is no loop latency over USB to the CPU and back to the video output.
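
As a rough illustration of the pointer side of such a driver, and not of the actual DisplayLink software, the sketch below shows how a host driver on Linux could expose the tablet’s pen as an absolute pointer device through uinput, using the python-evdev library; the 4096-count resolution, the device name and the idea that pen samples arrive one by one over USB are assumptions.

    # Minimal host-side sketch (not the real driver): expose the ViewTablet pen
    # as an absolute pointer device on Linux via uinput, using python-evdev.
    from evdev import UInput, AbsInfo, ecodes as e

    capabilities = {
        e.EV_ABS: [
            (e.ABS_X, AbsInfo(value=0, min=0, max=4095, fuzz=0, flat=0, resolution=0)),
            (e.ABS_Y, AbsInfo(value=0, min=0, max=4095, fuzz=0, flat=0, resolution=0)),
        ],
        e.EV_KEY: [e.BTN_TOUCH, e.BTN_LEFT],  # tip contact and the "tweezers" button
    }

    ui = UInput(capabilities, name='viewtablet-pen')  # appears as /dev/input/eventN

    def forward_pen_sample(x, y, tip_down, button_down):
        """Translate one pen sample received over USB into input events."""
        ui.write(e.EV_ABS, e.ABS_X, x)
        ui.write(e.EV_ABS, e.ABS_Y, y)
        ui.write(e.EV_KEY, e.BTN_TOUCH, 1 if tip_down else 0)    # pen tip on the screen
        ui.write(e.EV_KEY, e.BTN_LEFT, 1 if button_down else 0)  # pick/place button
        ui.syn()  # flush the batch so the desktop sees one coherent pen position

The display side would be handled by the existing display-over-USB driver; only the pointer side is sketched here.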

Its operation is simple:

Connect the ViewTablet to a PC and use it as an additional screen and input device.
  • Connect the ViewTablet to your PC over a USB cable,
  • The ViewTablet driver will make the device appear as an additional screen,
  • An additional pointer device will be generated by the ViewTablet driver (how an application might consume its events is sketched after this list),
  • Drag any application onto the external ViewTablet screen,
  • Read, mark up, doodle, sketch, etc., be creative!
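
For completeness, here is an equally hypothetical sketch of the consuming side: an application reading events from the generated pointer device with python-evdev and collecting them into pen strokes; the device path is an assumption.

    # Hypothetical consumer: read events from the pointer device generated by
    # the ViewTablet driver and collect pen strokes; the device path is assumed.
    from evdev import InputDevice, ecodes as e

    dev = InputDevice('/dev/input/event7')   # path of the ViewTablet pen device
    x = y = None
    pen_down = False
    stroke = []                              # (x, y) samples of the current stroke

    for event in dev.read_loop():            # blocking generator over kernel input events
        if event.type == e.EV_ABS:
            if event.code == e.ABS_X:
                x = event.value
            elif event.code == e.ABS_Y:
                y = event.value
        elif event.type == e.EV_KEY and event.code == e.BTN_TOUCH:
            pen_down = bool(event.value)
            if not pen_down and stroke:
                print(f'finished stroke with {len(stroke)} points')  # hand off to drawing code
                stroke = []
        elif event.type == e.EV_SYN and pen_down and x is not None and y is not None:
            stroke.append((x, y))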

Conceived in 2002 and first published in 2007 on http://peter.wurmsdobler.org/ideas/viewtablet.html

Written by Peter Wurmsdobler

Works on the technological foundations of autonomous vehicles at Five, UK. Interested in sustainable mobility, renewable energy and regenerative agriculture.
