Have you ever watched a child trying to use a rotary telephone? The bewilderment on their face as they come to terms with the way we used to ‘dial’ a number is priceless. It also teaches us that user interfaces evolve, and that the rise of the smartphone has made us expect that any machine we need to interact with will have a sophisticated touchscreen interface.

This is good for us humans, not so good for equipment makers used to implementing low-cost interfaces with buttons and switches, pots and sliders, lamps and sounders. Developing a touchscreen interface from basic parts, such as a display and a touch overlay, is a systems integration job in itself, before you start putting it into an end application. As you would expect, industry has responded to this challenge by producing increasingly sophisticated modules integrating displays and touchscreens.

Although integration gives, integration can also take away – at least to some extent. Whereas the options for controlling a standalone display and a standalone touch overlay are broad, driving an integrated touchscreen module means working with the constraints and simplifications chosen by its maker.

For example, the most basic module might demand that you work with it over a multi-bit parallel interface, which is simple and flexible but ties up a matching number of (costly) I/O lines on the microcontroller’s side. A more advanced module may offer a low-speed serial interface such as a UART, which is fine so long as it doesn’t take up the only serial port on a low-cost microcontroller board.

A yet more sophisticated module may offer an SPI link instead of the serial interface. This has two advantages: SPI runs at much higher clock rates than a typical UART, and the same SPI port can be shared among several devices, each selected by its own chip-select line. Once again, though, working through the SPI link means adapting to the constraints and simplifications that the module maker has chosen, so it is vital to understand these and how they will affect your project.
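As a rough illustration of what ‘working through the SPI link’ looks like in practice, the sketch below frames a 32-bit register write the way the EVE parts expect it: a three-byte address, with the top bits of the first byte marking the transfer as a write, followed by the data bytes. The spi_select(), spi_transfer() and spi_deselect() calls stand in for whatever SPI driver your microcontroller platform provides; they are placeholders, not a real API, and the exact framing should be checked against the datasheet for your module.

```c
#include <stdint.h>

/* Platform-specific SPI primitives -- assumed to exist, names are placeholders. */
void spi_select(void);            /* assert the module's chip-select line  */
void spi_deselect(void);          /* release the chip-select line          */
uint8_t spi_transfer(uint8_t b);  /* clock one byte out, return byte read  */

/* Write a 32-bit value to an address in the EVE memory map.
 * The address is 22 bits; setting the top bit of the first byte marks the
 * transaction as a host memory write (verify against your chip's datasheet).
 */
void eve_wr32(uint32_t addr, uint32_t value)
{
    spi_select();
    spi_transfer(0x80 | ((addr >> 16) & 0x3F)); /* write flag + addr[21:16]   */
    spi_transfer((addr >> 8) & 0xFF);           /* addr[15:8]                 */
    spi_transfer(addr & 0xFF);                  /* addr[7:0]                  */
    spi_transfer(value & 0xFF);                 /* data, least-significant byte first */
    spi_transfer((value >> 8) & 0xFF);
    spi_transfer((value >> 16) & 0xFF);
    spi_transfer((value >> 24) & 0xFF);
    spi_deselect();
}
```

Reads are framed in a similar way, with the write flag cleared and a dummy byte clocked out before the data comes back; and because the module only needs these lines plus a chip-select, the rest of the SPI bus stays free for other peripherals.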

For example, many of the display controllers used in such modules have built-in graphics processors that users can command over these interfaces to draw various graphic elements. Display controllers like this need a frame buffer to hold whatever graphics are currently on screen and, if you want smooth motion, a second frame buffer that is updated while the contents of the first are being displayed.

Another way of achieving the same goal is to use the FTDI EVE (Embedded Video Engine) display controller chips, which also include a touch controller interface and audio generation and playback facilities. They are designed to drive TFT panels of up to SVGA (800 x 600) resolution, and eliminate the need for a traditional frame buffer by rendering the image line by line, to 1/16th-pixel precision, on the basis of commands sent from a host controller. The commands are held in a display list buffer, which takes far less memory than a frame buffer – although you still need two if you want to implement smooth motion.
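To see why that matters, consider an 800 x 600 panel at 16 bits per pixel: a single frame buffer is 800 x 600 x 2 = 960,000 bytes, and double buffering doubles that, whereas the display list RAM on the original FT800 is only 8 Kbytes of 32-bit commands. The sketch below, which reuses the hypothetical eve_wr32() helper from the previous listing, builds a minimal display list (clear the screen, draw one point) and then asks the chip to swap it in at the next frame. The addresses and command encodings shown are the documented FT81x values, but treat them as placeholders to be confirmed against the programmer’s guide for your particular EVE variant.

```c
#include <stdint.h>

/* Hypothetical 32-bit SPI write helper from the previous sketch. */
void eve_wr32(uint32_t addr, uint32_t value);

/* Addresses below are the FT81x values -- check them for your EVE variant. */
#define RAM_DL        0x300000UL   /* start of display list RAM        */
#define REG_DLSWAP    0x302054UL   /* display list swap register       */
#define DLSWAP_FRAME  2UL          /* swap after the current frame     */

static void draw_minimal_frame(void)
{
    uint32_t dl = RAM_DL;

    /* Clear the screen to dark blue. */
    eve_wr32(dl, (0x02UL << 24) | (0x00 << 16) | (0x20 << 8) | 0x60); dl += 4; /* CLEAR_COLOR_RGB */
    eve_wr32(dl, (0x26UL << 24) | 0x07);                              dl += 4; /* CLEAR(colour, stencil, tag) */

    /* Draw one point of radius 20 pixels at (240, 136); sizes are in 1/16-pixel units. */
    eve_wr32(dl, (0x1FUL << 24) | 2);                                 dl += 4; /* BEGIN(POINTS)   */
    eve_wr32(dl, (0x0DUL << 24) | (20 * 16));                         dl += 4; /* POINT_SIZE      */
    eve_wr32(dl, (2UL << 30) | (240UL << 21) | (136UL << 12));        dl += 4; /* VERTEX2II(x, y) */
    eve_wr32(dl, 0x00000000UL);                                       dl += 4; /* DISPLAY -- end of list */

    /* Tell the chip to start using the new list at the next frame. */
    eve_wr32(REG_DLSWAP, DLSWAP_FRAME);
}
```

An entire frame is described here in seven 32-bit words, which is the whole point: the host sends a short description of the scene, not the scene itself.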

What kind of graphics can these chips support? The EVE chips support programmatic control of basic graphics commands, as well as what FTDI calls ‘widgets’, which together can be used to display menu items, indicators and screenshots. The graphics commands are handled directly by the graphics engine in the EVE chips, while the widgets are created by an integrated coprocessor that can also handle graphics commands if necessary.

At the most basic level, the EVE controllers can be programmed to configure and initialize a display properly, based on its timing model, to make the audio synthesizer play particular notes, and to handle touch input. At the next level up, there are graphics commands that treat the display as a drawing surface and tell the controller to render, for example, a particular letter or shape, of a specified size, in a particular colour, at a given screen coordinate. There are also commands to transform the colour and position of an existing bitmap, so that an image can be displayed repeatedly on screen with minimal programming effort.
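The fragment below gives a flavour of that reuse, continuing the display-list sketch above. It uses one of the chip’s built-in fonts, which the hardware treats as just another bitmap handle, and stamps the same character at three positions in three colours; a COLOR_RGB command tints everything drawn after it, so each extra copy costs only two more 32-bit words. The handle and encoding values are those documented in the EVE programmer’s guide and, as before, should be checked for your variant.

```c
#include <stdint.h>

void eve_wr32(uint32_t addr, uint32_t value);   /* hypothetical SPI helper from earlier */

/* Append a display-list fragment that draws the character 'A' three times,
 * each at a different position and in a different colour.  'dl' is the next
 * free address in display list RAM; the function returns the updated address.
 */
static uint32_t stamp_letters(uint32_t dl)
{
    /* x, y positions and RGB tints for the three copies. */
    static const uint16_t xs[3]  = { 40, 140, 240 };
    static const uint16_t ys[3]  = { 60,  60,  60 };
    static const uint32_t rgb[3] = { 0xFF0000, 0x00FF00, 0x0000FF };

    eve_wr32(dl, (0x1FUL << 24) | 1); dl += 4;           /* BEGIN(BITMAPS) -- fonts are bitmaps */

    for (int i = 0; i < 3; i++) {
        eve_wr32(dl, (0x04UL << 24) | rgb[i]); dl += 4;  /* COLOR_RGB -- tint for this copy */
        /* VERTEX2II(x, y, handle, cell): handle 31 is one of the built-in
         * fonts, cell is the ASCII code of the glyph to draw.             */
        eve_wr32(dl, (2UL << 30) |
                     ((uint32_t)xs[i] << 21) |
                     ((uint32_t)ys[i] << 12) |
                     (31UL << 7) | 'A');
        dl += 4;
    }

    eve_wr32(dl, (0x21UL << 24)); dl += 4;               /* END */
    return dl;
}
```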

Commands to the coprocessor create much more sophisticated objects, such as clock faces, gauges, buttons, keys and keyboards, plus progress bars, sliders, scrollbars and toggles. All these widgets are configurable in various ways, and there are also option flags that can be applied to widgets to, for example, give them a 3D effect or render them in monochrome. On the clock and gauge widgets, there are even options that control whether or not they are displayed with ‘ticks’ on their faces.
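Widget commands travel a slightly different route from the low-level graphics commands: rather than being written straight into display list RAM, they go into a small command FIFO as 32-bit words, and the coprocessor expands them into display-list entries itself. The sketch below shows a clock and a gauge being drawn this way, again using the hypothetical eve_wr32() helper. For brevity it assumes the FIFO starts off empty (a real driver would track the read and write pointers and wait for space), and its addresses, command codes and option values are taken from the EVE programmer’s guide, so verify them against your chip’s documentation.

```c
#include <stdint.h>

void eve_wr32(uint32_t addr, uint32_t value);   /* hypothetical SPI helper from earlier */

/* Addresses, command codes and options per the EVE programmer's guide
 * (FT81x values shown) -- verify them for your particular chip.
 */
#define RAM_CMD        0x308000UL   /* coprocessor command FIFO               */
#define REG_CMD_WRITE  0x3020FCUL   /* FIFO write pointer                     */
#define CMD_DLSTART    0xFFFFFF00UL /* start a new display list               */
#define CMD_SWAP       0xFFFFFF01UL /* swap it in when the list is complete   */
#define CMD_GAUGE      0xFFFFFF13UL /* gauge widget                           */
#define CMD_CLOCK      0xFFFFFF14UL /* clock-face widget                      */
#define OPT_FLAT       256U         /* suppress the default 3D effect         */
#define OPT_NOTICKS    8192U        /* omit the tick marks on clocks/gauges   */

static uint32_t cmd_offset;   /* next free offset in the 4 KB FIFO (simplified: assumes it starts empty) */

static void cmd(uint32_t word)
{
    eve_wr32(RAM_CMD + cmd_offset, word);
    cmd_offset = (cmd_offset + 4) & 0xFFF;   /* the FIFO wraps at 4 KB */
}

/* Draw a clock and a gauge; 16-bit parameters are packed in pairs. */
static void draw_dashboard(uint16_t rpm)
{
    cmd(CMD_DLSTART);

    /* Clock face centred at (100, 100), radius 80, flat style, showing 10:45:30. */
    cmd(CMD_CLOCK);
    cmd(100UL | (100UL << 16));                    /* x, y                   */
    cmd(80UL  | ((uint32_t)OPT_FLAT << 16));       /* radius, options        */
    cmd(10UL  | (45UL << 16));                     /* hours, minutes         */
    cmd(30UL  | (0UL  << 16));                     /* seconds, milliseconds  */

    /* Gauge at (300, 100), radius 80, no tick marks, reading rpm out of 8000. */
    cmd(CMD_GAUGE);
    cmd(300UL | (100UL << 16));                    /* x, y                   */
    cmd(80UL  | ((uint32_t)OPT_NOTICKS << 16));    /* radius, options        */
    cmd(5UL   | (4UL << 16));                      /* major, minor divisions */
    cmd((uint32_t)rpm | (8000UL << 16));           /* value, range           */

    cmd(CMD_SWAP);                                 /* display the finished list */

    /* Kick the coprocessor by advancing the FIFO write pointer. */
    eve_wr32(REG_CMD_WRITE, cmd_offset);
}
```

The OPT_FLAT and OPT_NOTICKS flags used here correspond to the ‘3D effect’ and ‘ticks’ options mentioned above: leaving them out gives the default shaded, ticked rendering.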

As you would expect, there are a number of variants of the EVE controllers, as well as development boards that include a display and capacitive touch overlay, and tools to help engineers understand the relationship between the display list commands they write and the outcome on screen. There are also video resources on YouTube which show how the combination of graphics and widget commands can be used to create relatively sophisticated touch-based user interfaces.

In a world where ringing a friend doesn’t involve any ringing, and dialling a number doesn’t make sense any more, the ability to create touch-based user interfaces quickly and at low cost is becoming increasingly important. The EVE chips and their supporting ecosystem make that possible, both by cutting development costs, thanks to the sophistication of the graphics and widget commands, and by trimming the bill of materials, since developers have the option to work without a frame buffer. If you’re interested in finding out more, you should probably give us a ring.